Theresa Benson, Product Storyteller for InRule Technology, explains the opportunity of combining declarative AI with predictive AI, and how InRule Technology is using predictive AI to empower non-AI experts to develop algorithms from their existing domain knowledge.
0:00:00.1 Theresa Benson: The cool thing about adding in the predictive AI is it turns the if/then into what/if.
0:00:09.8 Rob Stevenson: Welcome to How AI Happens, a podcast where experts explain their work at the cutting edge of artificial intelligence. You'll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field and the challenges they're facing along the way. I'm your host, Rob Stevenson, and we are about to learn how AI happens.
0:00:39.4 RS: Joining me today on How AI Happens is a Product Storyteller, Theresa Benson. Theresa, welcome to the show. How are you today?
0:00:45.6 TB: Hey, I'm great. Thanks for having me.
0:00:47.9 RS: I'm so pleased to have you on the podcast. I have so many things I wanna talk to you about. Before we get into all that though, do you think for the folks at home, we could maybe set some context, a little bit, about your background?
0:00:58.9 TB: Sure. So, I started my career as an electrical engineer. So, Analog VLSI Design. What I was doing, even back then, was technology enablement. You know, I worked for a large semiconductor manufacturer, working in consumer electronics, white goods. Figuring out ways to solve the problems that they had, and that they wanted to solve in order to serve their consumer with technology. And so, I was in the semiconductor space for a long time, and had the opportunity to move into the industrial segment. So, industrial manufacturing. Robotics, CNC equipment. And then, I really started to move into what the industrial space would call IIoT and Industry 4.0. Which is really the lagging side of this thing that we've all experienced. You know, every bit of information about you available anywhere, that coming into the manufacturing space.
0:01:54.0 TB: So, the health of equipment, the uptime of a machine. Being able to predict the likelihood of it failing based on how it's failed in the past, really trying to automate and bring that forward in the industrial space. And I had been doing that for a number of years, and I learned about InRule Technology, and I learned about the Decision Automation World, and was absolutely fascinated by this area of AI.
0:02:23.2 TB: I had always thought of AI as "I, Robot," Will Smith, that sort of thing, or the more nefarious stuff that you see in the news. And what I've realised is that the declarative side of AI has so many corollaries and parallels to what I was doing in the industrial space, but on this scale that is unimaginable. And so, I came over to InRule Technology, and have been really working on telling the story of both declarative and predictive AI and the potential it has to transform literally the world. It really, really does.
0:03:06.7 RS: For the folks at home and also for the edification of a very naive podcaster, maybe just kind of put a circle around the differences between declarative AI and predictive AI, and kind of how... Where we are now with that.
0:03:20.1 TB: Yeah, yeah. So, declarative AI is more static and rules-based. A few years ago, you would have heard of a Business Rules Engine or BRMS, Business Rules Management System, and it was really this... It's a little algorithm that is programmed to execute some logic really, really fast. And people use Business Rules Engines and those sorts of things to automate all of those choices that are made in support of executing a policy or being compliant to a regulation. You know, there can be so many different decisions and considerations. And the thing about declarative AI is it's really taking that static logic and decisioning and putting it into algorithms that can execute at massive scale. So, suddenly your prescription benefits calculations or medical claims or whatever can be processed so much more quickly. Now, I'll tell you that, and you'll be like, "Well, you could do that in software." Well, of course, you could do it in code. But then, the instant you start doing that, what happens is... Let's say a business leader wants to make a change of some kind, that a policy has shifted, a regulation has been enacted that means that a bunch of stuff has to change, and maybe it's nuanced, and the business knows those nuances best.
0:04:53.4 TB: With the typical kernel style microservice Business Rules Engine, you have to go enter the software development life cycle, you have to have a few people come together to explain why and how this code needs to change. Now you add in an auditor and a regulator, and suddenly you need to explain how this code looked before the developers got in and changed what we told them to change. And, God help us all, if it's documented and commented well. And so, declarative AI and the solution of companies like InRule Technology, we abstract a lot of that and make it accessible to the business, so that you can automate all of these decisions. The static, more rule-based decisioning in such a way that it's approachable to business, and you leave the heavy lifting and get it out of code, so you can be more agile, faster, higher powered. Now, that's declarative logic. Predictive logic or non-declarative logic is more about looking at the data and looking at the output, and then constructing the logic that describes what you see. So, where declarative we're writing the logic up front, with this predictive or non-declarative it's a little bit flipped on its head. Right? So, a machine learning model, you train it... You build it and you train it with data, and it learns what is...
0:06:27.0 TB: So that you can then feed it new information to determine what will be. And so, InRule happens to do both, and you really need both if you wanna make anything actionable. It's like... It's about to rain outside, the weatherman says it's a 70% chance of rain. I need to make a decision. Do I take an umbrella or not, right? It's the same thing with declarative and non-declarative AI. Non-declarative AI is telling me there's a 70% likelihood that this customer who's approaching my Customer Service Agent is likely to churn, and then the declarative side is saying, "Here are the couple offers that we need to offer this person in order to retain them."
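The umbrella analogy above can be sketched as a toy example: a predictive model hands back a confidence, and a declarative rule turns that confidence into an action. All names and thresholds here are illustrative stand-ins, not InRule's actual API.

```python
# Toy sketch of pairing a predictive output with a declarative rule.
# Function names and thresholds are hypothetical, for illustration only.

def predict_rain_probability(weather_features: dict) -> float:
    """Stand-in for a trained model; returns P(rain)."""
    # A real model would be trained on historical weather data.
    return 0.70 if weather_features.get("humidity", 0) > 80 else 0.20

def declarative_decision(p_rain: float) -> str:
    """Static if/then rule that a business user could own and tweak."""
    if p_rain >= 0.50:
        return "take umbrella"
    return "leave umbrella"

# Predictive side says "70% chance"; declarative side decides what to do.
p = predict_rain_probability({"humidity": 85})
print(declarative_decision(p))  # take umbrella
```

The design point she is making is the separation: the probability comes from data, while the threshold and the resulting action live in plain declarative logic the business can change without retraining anything.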
0:07:14.1 RS: What's interesting about the presentation of information and solutions in such a way that the user can interact with it, this has more to do with, I think, some of the exciting ways AI is going to impact most people's lives. Because it's less about AI is coming for your job, and more about AI is gonna make you better at your job and do the things that you don't want to do in your job. And if it looked like a piece of software that I interact with in the same way I would interact with Salesforce, then you probably don't even think of it as AI. It's just like software, right?
0:07:49.8 TB: Well, yeah. The thing is, what really struck me when I moved from the industrial automation space to this space, into the AI world that InRule Technology inhabits, is that when manufacturers, automotive manufacturers and whatever, finally got onboard with automation on the manufacturing line in the '60s... 'Cause before that, it was really cost-prohibitive. For a lot of them, they started with very minimal rote activities that a line worker might do. You know, welding, paint jobs, things like that. And it was foreign, and it was challenging, and you were essentially trying to abstract what this person did every day. And even though, when they came on and were new at the company, they learned how to apply the paint in a certain way. Over time, they developed experiential knowledge that they applied to tightening that bolt, or applying that paint, or welding that joint, and knowing what a good weld looks like versus not.
0:08:53.9 TB: Well, how do you automate that? And how do you intuit that? So, there's all this discovery, and getting out of that line worker's brain all of the little intuitive things that they do in order to do their job well. And then, what happens there is all of that gets put into code that an operations engineer, who knows how to code a robot to do that job, then makes happen. With knowledge automation, it's like it's flipped on its side. So, instead of sort of taking all of that insight and information away from Phil in Accounting or an underwriter somewhere, and then abstracting it and putting it in the software development life cycle where a bunch of engineers can read code and tell you what this strip of code does, what decision platforms do is help Phil articulate all of the steps that he goes through, all of the key decisions that go into him approving or denying a loan.
0:10:00.0 TB: Approving or denying a claim. Every little nuance, looking at birth dates, thinking experientially about when he's encountered a situation before, and then allowing him to write that in such a way that when new information is available or a little tweak needs to happen, Phil is still empowered to make that change. We don't take it out of the hands of the subject matter expert. And in fact, what we do and the companies that we see scale fastest and most successfully, are those that, instead... Yeah, they maybe have a developer on the front end, because it is sort of complex to take all those decisions that you're making and turn them into algorithms. If you have an engineering degree or you're a little bit nerdy like me, I can think of my if/then statements in my head. But you make it approachable and accessible and it accelerates everything. And, like you said, it empowers that person. It keeps them in control. Now, granted, they're no longer doing the deciding, because that's all been algorithmised, but they still are empowered to make changes as needed versus removing that agency from them. It's a really profound nuanced difference between the two automation spaces, and frankly, I could see...
0:11:28.4 TB: And I've seen cases of where manufacturing... Looking at putting this type of AI, declarative AI, into their platforms. So, it's fascinating.
0:11:40.2 RS: Could you share an example of what that would look like in manufacturing?
0:11:43.8 TB: Sure. So, in my previous role, we made a product that could literally take data from anything in your plant. I don't care if it's a conveyor belt or a PLC or a grinder...
0:11:57.3 RS: A vending machine?
0:12:00.5 TB: Well, maybe a vending machine. But more likely a cutting machine or something like that, you know? And all of this different equipment spoke different standards, right? Siemens equipment was really, really conversant in Siemens-speak, but the Rockwell Automation stuff didn't necessarily play as nicely with Siemens equipment. And so, we made this device that we could gather all of the information from anywhere, from sensors, from equipment, from anything. Well, so just like in any AI application anywhere, any machine learning application anywhere, data by itself is nothing. And if you can't figure out if that data is meaningful, that's almost worse, right? So, just having a glob of data doesn't help you. Not having insight into whether or not that glob of data is good, bad, biased, or otherwise, doesn't help you. What does help you is you talk to the people on the plant floor, and you find out on Tuesday, Wednesday, Thursday, when it's the most high volume production, and it's the middle of July, and there's not a lot of air conditioning in this place. By about 2 o'clock in the afternoon on Tuesday, Wednesday, Thursday, in the middle of July, such-and-such equipment is likely to fail. The person on the production line knows that maybe it's because it's super hot, or it's this or it's that.
0:13:22.4 TB: If you were to, instead of having him sit down with somebody who could program a PLC or program our device, and instead have him tell you the story of his experience of seeing that piece of equipment break down, and then codify those algorithms... So, he's taking in all of those data inputs. "It's hot, it's July, I've been running this machine for 37 hours non-stop. It's got that weird rattle. I'm sure it's gonna break." Now imagine if I could empower that person to write that down and codify that in an algorithm that they don't even know they're writing, because they write it in plain language, so that he no longer has to pay attention, "Is the Rattle there?"
0:14:22.6 TB: He no longer has... And instead, he can go focus on some other piece of equipment. 'Cause right now, what typically happens is that's done programmatically with code. It's the exact same problem I was talking about earlier with business. When you wanna implement declarative logic, predictive logic, you're talking to someone in software to get that done. With a decision platform, you can abstract all of that, centralize it, and make it available to everybody. Same thing in a manufacturing facility. You could take that guy's knowledge and codify it in algorithms that then can pay attention to that piece of equipment for you, and tell you, in hour 34, when the temperature is 107 degrees, and it's the middle of July, that you should probably take that down for some preventive maintenance or else you're gonna be down for a significantly longer period of time.
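The worker's experiential knowledge she describes ("it's hot, it's July, hour 34, that weird rattle") could be codified as declarative logic along these lines. The field names, thresholds, and the three-of-four trigger are all invented for illustration; a real decision platform would let the worker express this in plain language rather than code.

```python
# Illustrative encoding of a line worker's experiential failure rule as
# declarative logic. All fields and thresholds are hypothetical.

def needs_preventive_maintenance(reading: dict) -> bool:
    """Flag the machine when the worker's failure conditions line up."""
    hot = reading["temperature_f"] >= 100
    long_run = reading["hours_running"] >= 34
    summer = reading["month"] == "July"
    rattle = reading.get("rattle_detected", False)
    # Any three of the four conditions together triggers the flag,
    # mirroring "when enough of these line up, it's gonna break."
    return sum([hot, long_run, summer, rattle]) >= 3

# Hour 34, 107 degrees, middle of July: time to take it down.
reading = {"temperature_f": 107, "hours_running": 34, "month": "July"}
print(needs_preventive_maintenance(reading))  # True
```

Once the rule lives here instead of in the worker's head, the system watches the equipment continuously and the worker is freed up for everything else.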
0:15:19.4 RS: I love how you classify human decision-making as algorithmic. Of course it is, of course it is. What else could it be? It is if/then. It is taking in all of these factors and making a judgment, and that's what we're training machines to do. Rather than having to deploy the team with a bunch of clipboards to figure it out, it sounds like the goal here is to just give that Floor Manager something he or she can interact with to develop that very customised hyper-local algorithm based on their own knowledge. And now...
0:15:50.1 TB: Based on their knowledge.
0:15:51.6 RS: In the middle of July, instead of walking around listening for, "Is this machine about to fail?" They can... They could be playing Minecraft.
0:16:00.7 TB: Exactly. They could be playing... I'm not sure that's what his supervisor would want. But, yes, exactly right. And that's just it: we are if/then machines, you know? If it's cold, take a coat. If it's gonna rain, bring an umbrella. Think about knowledge automation, decision automation, that is just so cool and fascinating. And you look and there's tons of government applications, financial applications... There is that if/then. You take your most experienced doer in a function. I don't care if it's a Loan Officer, an Underwriter, Customer Service Agent, whatever. You have them tell the story of their job, and why they made the calls that they did depending on what was coming at them, right? And, sure, you maybe have a developer who can intuit and get that in at first, faster than the average Joe coming up to speed.
0:16:53.6 TB: The cool thing about adding in the predictive AI is it turns the if/then into what/if, right? Because now, you take not only the process of that knowledge worker and all the different things, and every decision that they make and every policy that they check and every form that they know, 'cause they've been doing this job for 17 years. But then, you get that weird customer with the wonky problem that isn't documented in policy, it isn't part of everything that you've codified out of this knowledge worker's algorithms in their head. And that's where predictive logic and things like classifiers in machine learning become so cool, because now that's the experience of the knowledge worker. That's the gut of that knowledge worker to say, "Hey. When this sort of situation comes up and it presents all of these factors, what is the most likely outcome to happen? Or what shape does this situation start to look like?" And then, when you marry this what/if with this if/then, suddenly you are approaching that true stuff that we do every day. That what/if plus that if/then becomes AI in a practical way.
0:18:23.0 TB: And again, it's not the nefarious stuff that we hear on the news, it's just truly, "Hey, what if we... " Pharmaceutical application, you take data and you get something that says, this particular combination of proteins and chemicals are not likely to produce a viable candidate for trial. The cool thing about the predictive side of all of this is you can say, "Well, what if... What if I changed and tweaked this thing, what if I did this?" based on all of the data that you've seen before, machine learning model, what if... And that's just so inspiring to me as a nerd and a person, as much as being in the industry I'm in, because now it becomes exploratory, but with guard rails, right? I can get a What If answer back, like if I tweak this one thing that increases the likelihood of success to 83%.
0:19:32.1 TB: Well, now the declarative logic takes over and is 83% good enough for me to go ahead and make the recommendation to make that change to this one protein, or does the declarative logic say, "Go tell whomever is in charge of making those decisions to go take a look at this recommendation."?
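The what/if-plus-if/then loop in the pharmaceutical example could be sketched like this: sweep a candidate tweak through a predictive scorer, then let a declarative rule decide whether to recommend the change, escalate to a human, or drop it. The scorer, the 80% gate, and the field names are all assumptions made up for the sketch.

```python
# Sketch of "what/if" exploration feeding an "if/then" declarative gate.
# The scorer is a stand-in for a trained model; thresholds are illustrative.

def success_probability(candidate: dict) -> float:
    """Stand-in scorer: pretend a particular protein tweak helps."""
    base = 0.55
    return min(base + 0.28 * candidate.get("protein_tweak", 0), 1.0)

def declarative_gate(p: float) -> str:
    """Static business rule deciding what to do with a prediction."""
    if p >= 0.80:
        return "recommend change"
    elif p >= 0.60:
        return "escalate for human review"
    return "reject"

# What/if: try the candidate with and without the tweak.
for tweak in (0, 1):
    p = success_probability({"protein_tweak": tweak})
    print(tweak, round(p, 2), declarative_gate(p))
```

This mirrors her 83% example: the predictive side answers "what if I tweak this one protein?", and the declarative side answers "is 83% good enough to act on, or does a person need to look at it?"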
0:19:53.7 TB: And that's the other thing I think a lot of people wig out about AI and that it's going to take humans out of the loop, and there's that human loop... Human loop thing, that cadence in our space, and what's important to realise in all of this, that is missing to a certain extent in spaces within industrial automation is humans are the loop. It's not that we're in the loop, we are the loop, and these are assistive technologies in service of the loop, you know what I mean? Versus there being some sort of a clear demarcation and, "Oh, humans aren't in control anyway." No, we are the loop. All of these things that we're putting into declarative logic, all of these things and knowledge we wanna get out of predictive logic come from our biases, beliefs, knowledge and experience, every question we ask of a model comes from something that we as humans wanna know.
0:21:00.5 RS: Yes, yes, exactly right. It's decision making, and it's so easy to go to the nefarious, what's really at stake is just efficiency, is just being more productive and getting to focus on the things that really make you a high leverage individual, that really make you valuable to your organisation.
0:21:18.9 TB: Exactly right.
0:21:21.4 RS: And so I'm curious with the output from this model, when I'm speaking with AI practitioners and with engineers and individuals who think intimately about the implications of their work and the risks of their work, they're very concerned with ethical AI with the pitfalls of black-box algorithms and what have you. The average consumer perhaps isn't, right? The average consumer might just say, "Oh, the thing says to do this, and it's usually right, so I'll do that." and not look too closely at the implications of that. What mechanisms are in place to give the average user insight into how these decisions are being made?
0:22:02.9 TB: The world is changing so rapidly. You're exactly right, there's all kinds of emergent legislation around explainability. When the Biden administration came in in the US, for example, they put a stake in the ground and said ethics and transparency were at the forefront of their administration, and the United Kingdom has these huge guidelines that they've created at the federal level to talk about right to explanation, right to full transparency. Because if you think about the level of decisions that are being made, you apply for a loan, there are all of these apps where you can quickly apply for a loan, right? And let's say you're declined. Well, in certain countries, if you're declined for any of a select number of reasons, that's illegal. And with apps, if all you get is this black-box outcome of, it turns out that you aren't the right candidate for this particular loan, with no ability... Like, what is the consumer's right or the citizen's right to go find out why? Once we're starting to interact with all of these algorithms, there isn't a person I can call up to sit down and explain to me what happened, and so there's all of this emergent legislation and just a drumbeat around that, a heartbeat that's happening.
0:23:29.1 TB: And so one of the things that, for example, our technology does is we say it opens the black-box. So if people are used to getting a predicted outcome like this customer is 70% likely to default, the predictive class was, "Will they default or not?" And then you get a confidence with that 70%. So, we've built a model, we've given it a ton of information about a particular applicant, and then we wanna find out... Okay, well, if all I get back as the vendor who needs to tell this person why they're not gonna get this loan, if all I get back is, I believe you're gonna default, and the likelihood is high. What does that do for that consumer who maybe there was a data entry error, maybe their circumstances changed? What is going on in their world that might change things? And a lot of machine learning models, when they're built, you can get sort of a ranked set of, "Here are all the features that describe this model in rank order." So you can see if any protected class was in the top 10 for building this model. Well, we blew that paradigm apart and we said, I wanna know what factors were involved in this prediction. So yeah, we tell you, "Here's what the model shape looks like now that we've trained it, and here's how all the features fell out."
0:25:03.7 TB: But what we do is we say, for this one prediction, that customer that we told you has a 70% likelihood of default, here is every factor that went into predicting they would default or not default. And then it's the craziest thing that I needed to get my head around, and now that I understand it, I can't imagine anybody not having it: we don't just tell you that the neighborhood field said "Hickory Park," and that this told us something about their propensity to default.
0:25:35.3 TB: We also tell you if certain things are missing and how those missing things influenced it. So an example that I can think of is, we did an application for a huge telecommunications provider, they had internet service and phone service and streaming television or whatever, and one of the amazing things about the way their model falls out for them is you can present here is a potential customer and are they going to churn or not? So you wanna know that, you wanna have that anticipatory idea so that it can inform the way you engage them, whether it's a marketing campaign or just even a tech support call, Customer Service, just based on what we know about them before they've said, Hello, where are they at? Given the huge expanse of everybody else who's experienced us as a company. And so the predictions that we're able to deliver to a customer service agent are something like the following: this person is likely to churn with an 84% confidence.
0:26:51.3 TB: One of the top factors is that they have fiber optic service, and if you continue to scroll down, you'll see that one of the factors also is because they do not have DSL service. So why is that important? Well, if you have multiple choices for a particular field in a model, like maybe it's DSL, fiber optic, no service, some other type of xDSL or some other sort of product offering, it becomes really hard to know what to offer. All you know is people who look and have the shape of this particular customer don't apparently like fiber optic service from this provider, and we don't know why, we just know that that's how the model describes things, that if fiber optic is present, that is not gonna be a good thing for us, and they're gonna have a high propensity to churn. Well, that's barely better than that black-box prediction of they're 70% likely to churn. Okay, now we know that fiber optic being there is a problem, but if there's 17 other choices that I could offer them, what does that do for me? So if my declarative side can look at that and say, "The fact that DSL isn't present is also a big contributing factor," what's the first thing I'm gonna do in my declarative logic but go, "Hey, if I see a prediction and the exact specific factors that come back this way, the first thing I'm gonna do... " What would you do?
0:28:28.6 TB: I would offer a DSL package, I would see if they really need the bandwidth of fiber optic. That presence and then the lack of presence, it all goes back to that making all of the calls and all of the logic and everything approachable and automatable at scale.
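The per-prediction explanation she describes, including how the *absence* of a feature contributes, can be sketched with a simple additive model, in the spirit of SHAP-style attributions. The feature names, weights, and baseline values below are invented for illustration; they are not the telecom provider's actual model.

```python
# Additive toy model: each one-hot service indicator carries a learned
# weight toward churn. A feature's per-prediction contribution is its
# weight times its deviation from the average customer, so an ABSENT
# option (value 0 vs. a nonzero baseline) also shows up as a factor.

WEIGHTS = {"fiber_optic": 0.9, "dsl": -0.6, "no_service": 0.1}   # hypothetical
BASELINE = {"fiber_optic": 0.3, "dsl": 0.5, "no_service": 0.2}   # avg customer

def churn_contributions(customer: dict) -> dict:
    """Contribution of each feature to THIS prediction, vs. the average."""
    return {f: WEIGHTS[f] * (customer.get(f, 0) - BASELINE[f]) for f in WEIGHTS}

# Has fiber optic, does NOT have DSL: both push churn risk up.
customer = {"fiber_optic": 1, "dsl": 0}
for feature, contrib in sorted(churn_contributions(customer).items(),
                               key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {contrib:+.2f}")
```

With this shape of output, the declarative side can key off both facts, "fiber optic is present" and "DSL is absent," and fire a concrete rule like offering the DSL package, instead of acting on a bare 84% churn score.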
0:28:52.4 RS: Yes, it's like a negative attention mechanism, it's trying to drill in on the absence of this. There's all these little things that are common in other instances that weren't in this one, and the things it wasn't drilling in on are just as important as the things it was.
0:29:07.6 TB: Exactly, yeah. If these things aren't present... And it becomes especially useful when it's not sort of a binomial, do they have phone service or not? That's an either/or. So, if no phone service, then you can assume that the presence of phone service would make them a high likelihood to stay, but what if you have 15 different possible outcomes? And again, we're not talking about the shape of the model as a whole, but this one prediction, this one instance. And now let's go back to the legislation and that sort of heartbeat of right to explainability, truly understanding what went into making this decision, being in a highly regulated industry where it's important to not make certain calls on the basis of protected characteristics.
0:30:00.9 TB: If you can't tease out from that one prediction what influenced how that fell, that's gonna be a challenge. That's gonna be a risk. That's gonna be a business risk. It's really important for us as consumers, as much as it is for my company as a provider, to be able to provide that transparency. So that as more consumer protections or more policies are enacted that say transparency is absolutely critical, and not just a marketing feature but truly something that I need to deliver, it's important that AI technology companies are really thinking through those little nuances, because they are what's gonna make the difference. And at the same time, make it all approachable, 'cause we just nerded out for a little bit about the presence of this feature, this attention mechanism, etcetera, but being able to enable Phil to maintain his own business logic and be able to comprehend those nuances to ensure compliance, to ensure fairness, to ensure transparency on why was this loan rejected, why was this claim denied? That's just super critical to AI success.
0:31:35.0 RS: Yes, exactly. Theresa, we are creeping up on optimal podcast length here. This has been so fascinating, learning from you all about predictive versus declarative AI, what's at stake here and the crucial nature of building in transparency, not just because it's trendy or even because it's the right thing to do, but because at a certain point, you will legally not be able to operate, right? Unless you build those in. So, you're so right, it's so, so crucial. Theresa, this has been a delight chatting with you, thank you so much for being on the show and sharing your expertise with us today.
0:32:10.1 TB: You're so welcome. I'm glad to be here, thanks for having me.
0:32:18.6 RS: How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specialising in image, video and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, E-commerce, media, med tech, robotics and agriculture. For more information, head to sama.com.