When we make a decision, there's usually a reason behind it. The same goes for machine learning models: though the process is less emotional, they still connect patterns in data to reach a conclusion. Yet most companies that rely on these predictions don't stop to ask the "why" behind them – and that transparency is crucial.

Why the "why"? It's not just nice to have; it's necessary information. Emerging legislation is pointing toward greater transparency in AI-enabled applications, and in some cases such transparency is already required by law – or soon will be. It may soon be mandatory to document in detail why, for example, a benefits claim was denied, including all the predictive factors that went into the decision.

But the "why" goes beyond legal requirements. If you can't understand why an AI platform delivers a certain answer, how can you be confident in the decisions you make based on that information?
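To make the idea of documenting "all the predictive factors" concrete, here is a minimal sketch in Python. It assumes a simple linear scoring model, where each feature's contribution is just its weight times its value; the feature names, weights, and applicant values are hypothetical, chosen only to illustrate how a per-factor explanation could accompany a decision.

```python
# Minimal sketch: surfacing the "why" behind a model's decision.
# All names and numbers below are hypothetical illustrations.

FEATURE_WEIGHTS = {            # hypothetical coefficients of a scoring model
    "income_ratio": 2.0,
    "missed_payments": -1.5,
    "years_employed": 0.5,
}
BIAS = -1.0
THRESHOLD = 0.0                # scores below this lead to a denial

def explain_decision(applicant: dict) -> dict:
    """Return the decision plus each feature's signed contribution."""
    contributions = {
        name: weight * applicant[name]
        for name, weight in FEATURE_WEIGHTS.items()
    }
    score = BIAS + sum(contributions.values())
    return {
        "decision": "approved" if score >= THRESHOLD else "denied",
        "score": score,
        "contributions": contributions,   # the documented "why"
    }

report = explain_decision(
    {"income_ratio": 0.4, "missed_payments": 2, "years_employed": 1}
)
print(report["decision"], report["contributions"])
```

Because every factor's signed contribution is returned alongside the verdict, the same record that drives the decision can also serve as its audit trail – exactly the kind of documentation the emerging rules point toward. Real systems with non-linear models would need attribution techniques (such as Shapley-value methods) rather than raw coefficients.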
Intelligent Automation is many things, but here are four things that it is NOT: https://vimeo.com/762754617