Balancing Automation and Explanation in Machine Learning

Spark开源社区
For a machine learning application to be successful, it is not enough to give highly accurate predictions: customers also want to know why the model made a prediction, so they can compare it against their intuition and (hopefully) gain trust in the model. There is, however, a trade-off between model accuracy and explainability; for example, the more complex your feature transformations become, the harder it is to explain to the end customer what the resulting features mean. With the right system design, this does not have to be a binary choice between the two goals: it is possible to combine complex, even automatic, feature engineering with highly accurate models and good explanations. We will describe how we use lineage tracing at Salesforce Einstein to solve this problem, allowing meaningful model explanations to coexist with automatic feature engineering and model selection. By building this into TransmogrifAI, an open source AutoML library that extends Spark MLlib, it is easy to ensure a consistent level of transparency across all of our ML applications, and because model explanations are provided out of the box, data scientists don't need to reinvent the wheel whenever explanations need to be surfaced.
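
To make the lineage-tracing idea concrete, the Scala sketch below is illustrative only: it is not the TransmogrifAI API, and the type and method names (RawFeature, DerivedFeature, rawAncestors) are assumptions introduced for this example. It shows the general technique the abstract describes: if every automatically derived feature records its parent features, a per-feature model contribution can be walked back through that DAG and reported against the raw input columns the customer actually recognizes.

// Conceptual sketch of lineage tracing for derived features (not the TransmogrifAI API).
// A feature is either a raw input column or the result of a transformation of parent features.
sealed trait Feature {
  def name: String
  /** Walk the feature DAG back to the raw input columns this feature was derived from. */
  def rawAncestors: Set[String] = this match {
    case RawFeature(n)              => Set(n)
    case DerivedFeature(_, parents) => parents.flatMap(_.rawAncestors).toSet
  }
}
final case class RawFeature(name: String) extends Feature
final case class DerivedFeature(name: String, parents: Seq[Feature]) extends Feature

object LineageExample {
  def main(args: Array[String]): Unit = {
    // Raw columns as they appear in the customer's data.
    val age     = RawFeature("age")
    val income  = RawFeature("income")
    val zipCode = RawFeature("zip_code")

    // Automatic feature engineering produces derived features; lineage is kept
    // simply by recording each feature's parents.
    val ageBucket    = DerivedFeature("age_bucket", Seq(age))
    val incomePerAge = DerivedFeature("income_per_age", Seq(income, age))
    val regionOneHot = DerivedFeature("region=WEST", Seq(zipCode))

    // Suppose the trained model reports contributions on the derived features
    // (hypothetical numbers for illustration).
    val contributions = Map[Feature, Double](
      ageBucket -> 0.12, incomePerAge -> 0.35, regionOneHot -> -0.08)

    // Roll each contribution back onto the raw fields it was derived from,
    // splitting it evenly among ancestors, and sum per raw column.
    val explained: Map[String, Double] = contributions.toSeq
      .flatMap { case (f, c) => f.rawAncestors.toSeq.map(raw => raw -> c / f.rawAncestors.size) }
      .groupBy { case (raw, _) => raw }
      .map { case (raw, pairs) => raw -> pairs.map(_._2).sum }

    explained.foreach { case (raw, c) => println(f"$raw%-12s $c%+.3f") }
  }
}

In a real system the attribution rule would come from the model (for example, splitting a contribution by each parent's share rather than evenly), but the key point the abstract makes is structural: because the lineage is recorded automatically alongside the feature engineering, explanations in terms of the original fields come for free rather than being rebuilt per application.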