
InRule Technology® Enhances Bias Detection, Empowering Greater Explainability and Lower Risk for Enterprise Machine Learning

Nov 8, 2022

New bias detection report enables users to confidently audit their machine learning models for bias at a glance with an easy-to-understand visualization

CHICAGO – InRule Technology®, an intelligence automation company providing integrated decisioning, machine learning and process automation software to the enterprise, today announced the release of a bias detection report, a best-in-class tool for evaluating machine learning models for harmful bias. The report furthers InRule’s mission to make automation accessible across the enterprise by eliminating the complexities of programming through no-code, explainable solutions.

Building on InRule’s powerful bias detection capabilities introduced earlier this year, the bias detection report enables InRule® Machine Learning users to quickly identify where harmful bias may be present in models. This report provides unparalleled explainability and empowers users to swiftly assess models for harmful bias to prevent undesirable performance for individuals with protected characteristics, such as age, race, and religion.

Recent InRule research found that business leaders worry that harmful bias can lead to inaccurate (58 percent) or inconsistent (46 percent) decisions, decreased operational efficiency (39 percent), and loss of business (32 percent). With the bias detection report, enterprise users can de-risk their machine learning programs.

This bias detection report is a valuable tool for data science teams seeking confirmation that a model can be safely deployed. Beyond use by data scientists in model creation, the report can provide insights to technical leadership prior to model deployment.

“Many organizations hesitate to take advantage of the power of machine learning as they are keenly aware that deploying biased models exposes them to a range of regulatory and reputational risks,” said Danny Shayman, AI and machine learning product manager, InRule. “InRule’s bias detection report adds another layer to our bias detection capability, empowering teams to deploy machine learning models with confidence.”

InRule’s bias detection report couples explainable machine learning with a high-capacity clustering engine to assess the deepest subsets of a model through millions of data paths, ensuring the model operates with equal fairness within and between groups of people it learned to treat similarly. By contrast, most machine learning platforms that offer bias detection evaluate for bias only by averaging values across an entire model.

Once a model is trained, InRule’s semi-supervised clustering technology forms groups of predictions made for similar reasons. The bias testing within InRule Machine Learning then applies statistical tests to those clusters to assess whether the attributes that make the predictions similar to one another are also correlated with protected characteristics.
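For illustration only, the sketch below shows one way this kind of check could be approximated with common open-source tooling; it is not InRule’s implementation, and the explanation vectors, clustering step, and statistical test shown here are assumptions chosen to make the idea concrete.

```python
# Illustrative sketch (not InRule's implementation): cluster predictions by
# the "reasons" behind them (e.g., per-row feature attributions), then test
# whether cluster membership is statistically associated with a protected
# characteristic.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from scipy.stats import chi2_contingency

# Hypothetical inputs: attribution vectors for each prediction and a
# protected attribute for each individual (both synthetic here).
rng = np.random.default_rng(0)
explanations = rng.normal(size=(1000, 5))            # e.g., SHAP-style attributions
protected = rng.choice(["group_a", "group_b"], 1000)  # protected characteristic

# Step 1: group predictions that were made for similar reasons.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(explanations)

# Step 2: test whether cluster membership correlates with the protected attribute.
contingency = pd.crosstab(clusters, protected)
chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")  # a small p-value flags a potential bias signal
```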

The bias detection report for InRule Machine Learning is now available as part of the InRule free trial experience. Request a trial at www.inrule.com/free-trial.

#  #  #

About InRule Technology

InRule Technology® is an intelligence automation company providing integrated decisioning, machine learning and process automation software to the enterprise. By enabling IT and business leaders to make better decisions faster, operationalize machine learning and improve complex processes, the InRule® Intelligence Automation Platform increases productivity, drives revenue, and provides exceptional business outcomes. More than 500 organizations worldwide rely on InRule for mission-critical applications. InRule Technology has been delivering measurable business and IT results since 2002. Learn how to make automation accessible on Twitter and LinkedIn.

InRule and InRule Technology are registered trademarks of InRule Technology, Inc. All other trademarks and trade names mentioned herein may be the trademarks of their respective owners and are hereby acknowledged.