What Rules Uncover: Horror Stories From Mistranslated Logic
My job on the ROAD Services Team at InRule involves connecting with a variety of InRule users, evaluating their business requirements, and turning those requirements into functional rules in irAuthor. Over the years, I’ve run across a variety of interesting situations, and I wanted to share a few (fully sanitized) funny stories of mistranslated logic.
Is My Logic Wrong? Your Logic is Wrong!
A relatively recent customer of ours was in the middle of a project to move old mainframe logic into rules. The project was humming along, and the customer’s main rule author got to the point where he was knocking out pretty complex irAuthor rules in no time. The rules were ultimately re-rating nearly 120,000 products in their database that had previously been rated by mainframe-based algorithms. Performance was great, the rules were readable and well designed, and the project was on track.
And then we got to testing. During testing, our customer discovered that the original mainframe code was rounding certain calculations incorrectly. For about 11,000 (!) of those products, the corrected algorithm produced a different rounded result. Implementing rules in InRule led our customer to discover a bug that had been lurking in algorithms they had relied on for over 10 years.
And no, they didn’t go back and change the mainframe code.
We Know Exactly How this Decision Works, But…
Years ago, I sat in project meetings with another customer group that was working to automate the rules their call center personnel used for transferring money in their pension management system. Their technical business team had done a marvelous job uncovering and documenting the initial rules that call center employees followed, but as I started to quiz them about the details, it became clear that they had inadvertently overlooked several data scenarios. They had established the “happy path” for when a transfer could be accepted, but they didn’t really know how to make some of the decisions on the unhappy paths, when a transfer could not be immediately accepted. And when they went back to ask their call center subject matter experts (SMEs) about these gray areas, they realized that the SMEs made somewhat arbitrary decisions: sometimes they decided one way, and other times another.
The effort to put rules into code forced this customer’s call center department to establish new baseline rules for some decisions that they had previously never established. We uncovered and ended up “correcting” some of their underlying process logic. Extra bonus!
How Are We Going to Automate This?
I get a lot of customers asking this question, and some of the challenges are more interesting to solve than others. Many years ago, I traveled to another US state to work with a customer that had plans to move a healthcare application into rules. We started working through requirements, and we got to a point where the customer handed me a piece of paper with sample customer data. They asked how we could start automating this, and I asked where the data currently lived. The business analyst said, “You’re looking at it.” The project was to automate paper forms. They didn’t have a database in place yet, nor any data entry screens planned out. I had a lot more digging to do before we could start automating anything.
I hope you enjoyed these snippets. There are many more stories to tell, but the moral of most of them is the same: actually putting effort into getting rules automated often has the side benefit of correcting—or at least shining light on—business logic that may not have been correctly or consistently applied in the first place!
And if you want to further explore why establishing baseline rules for decisions (just like my call-center customers above had to do) helps ensure automation success, check out our latest white paper on Why Your Digital Transformation Project is More Complex Than You Think.