Thinking Big Data with Rules and Hadoop

Last updated on Nov 7, 2023

Recently I explored some of decision management's touch points with Hadoop. The most obvious was the classic ETL scenario, where analysts and data scientists run batch jobs across very large datasets. While there are many ways for our tooling to integrate, I became curious about JavaScript specifically. It is generally fast, and because an embedded script avoids the startup cost of a separate service, it scales well almost instantly. I imagined dropping our JavaScript decision inside a MapReduce artifact, wrapped with an engine like Rhino. With just a few extra steps, Hadoop processes could gain the benefits of a business decision with no service calls required. In fact, with a little more work, it's easy to imagine putting the pieces together for a full Hadoop lifecycle:

  1. Selecting the decision you need from the catalog.
  2. Transforming the decision into a JavaScript artifact.
  3. Integrating your JavaScript decision and running it (with full automation).
  4. Making rapid changes to the decision and doing it all over again.
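The integration in step 3 can be sketched in plain JavaScript. Everything here is illustrative: the decision function `approveOrder` and its record fields are hypothetical stand-ins for whatever a generated decision artifact would actually export, and in a real job this logic would run per record inside a mapper via an embedded engine such as Rhino rather than as a standalone script.

```javascript
// Hypothetical generated decision artifact: a pure function over one record.
// A real artifact exported from the decision catalog would replace this body.
function approveOrder(record) {
  return {
    id: record.id,
    approved: record.amount <= 10000 && record.riskScore < 0.7
  };
}

// Map-side usage: apply the decision to each input record locally,
// with no network round trip to a decision service.
const records = [
  { id: 1, amount: 5000, riskScore: 0.2 },
  { id: 2, amount: 20000, riskScore: 0.1 }
];
const results = records.map(approveOrder);
console.log(JSON.stringify(results));
```

Because the decision is a pure function of its input record, swapping in a regenerated artifact (step 4) means replacing one script, not redeploying a service.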

Just imagine a data analyst happily working with big data and decisions, without friction.
