How AI Happens

Credo AI Founder & CEO Navrina Singh

Episode Summary

Credo AI is a company that provides compliance and governance protocols to AI tech companies, and today we are joined by its Founder and CEO, Navrina Singh. Navrina tells us why it was essential for her to start Credo AI and why her industry has decided to create its own systems of oversight, even before government intervention.

Episode Notes

Navrina shares why trust and transparency are crucial in the AI space and why she believes having a Chief Ethics Officer should become an industry standard. Our conversation ends with a discussion about compliance and what AI tech organizations can do to ensure reliable, trustworthy, and transparent products. To get 30 minutes of uninterrupted knowledge from National AI Advisory Committee member, Mozilla board of directors member, and World Economic Forum young global leader Navrina Singh, tune in now!

Key Points From This Episode:

Tweetables:

“I always saw technology as the tool that would help me change the world. Especially growing up in an environment where women don’t have the luxury that some other people have, you tend to lean on things that can make your ideas happen, and technology was that for me.” —@navrinasingh [0:01:17]

“As technologists, it’s our responsibility to make sure that the technologies we are putting out in the world that are becoming the fabric of our society, we take responsibility for it.” —@navrinasingh [0:04:04]

“By its very nature, trust is all about saying something and then consistently delivering on what you said. That’s how you build trust.” —@navrinasingh [0:08:58]

“I founded Credo AI for a reason, to bring more honest accountability in artificial intelligence.” —@navrinasingh [0:10:45]

“We are going to see more trust officers and trust functions emerge within organizations, but I am not really sure if a chief ethics officer is going to emerge as a core persona, at least not in the next two to three years. Is it needed? Absolutely, it’s needed.” —@navrinasingh [0:17:32]

Links Mentioned in Today’s Episode:

Navrina Singh on Twitter

Navrina Singh on LinkedIn

Credo AI

The National AI Advisory Committee

World Economic Forum

Dr. Fei-Fei Li on LinkedIn

How AI Happens

Sama

Episode Transcription

[INTRODUCTION]

 

[0:00:04.5] RS: Welcome to How AI Happens, a podcast where experts explain their work at the cutting-edge of artificial intelligence. You’ll hear from AI researchers, data scientists, and machine learning engineers as they get technical about the most exciting developments in their field, and the challenges they’re facing along the way. I’m your host, Rob Stevenson, and we’re about to learn How AI Happens.

 

[INTERVIEW]

 

[0:00:32.4] RS: Here with me today on How AI Happens is a member of the National AI Advisory Committee, member of the board of directors over at Mozilla, a young global leader for the World Economic Forum, as well as the CEO and founder of Credo AI, Navrina Singh. Navrina, welcome to the show, I’m so pleased you’re here with me today.

 

[0:00:49.8] NS: Well, thank you so much for having me, Rob, excited about this conversation.

 

[0:00:53.0] RS: Yeah, me as well and you have such a colorful background, we could probably spend the whole episode talking about all of that. I do want to fast forward a little bit in time, maybe jump into the deep end here. Would you mind sharing a little bit about your background and specifically why you decided to found Credo?

 

[0:01:11.2] NS: Yeah, absolutely, Rob. You know Rob, I grew up in a very small town in India, and growing up I always saw technology as the tool that would help me change the world. You know, when especially growing up in an environment where women don’t have the luxury that some of the other people have, you tend to lean on things that you can bring to the world and make your ideas happen, and technology was that for me.

 

So for me, from an early age, engineering and science is what I gravitated towards, and that’s what brought me to the United States. I spent the past two decades building great products in augmented reality, mobile, and artificial intelligence at some of the most prominent companies in the world. For me, I think this excitement of changing the world through technology took a different turn about 10 years ago.  

 

This was the time when we were just getting into machine learning at Qualcomm. ImageNet had just come out (great work by Dr. Fei-Fei Li), and we started to see that there was a great opportunity for us to help move Qualcomm’s business into areas beyond mobile, and robotics was a great place given the focus on computer vision. But as I started to look across the ecosystem, I would say this is the first time as an engineer, as a builder, I started to step back and think deeply about our responsibility, and two key things led to it.  

 

One was the work we were doing in building collaborative robots for manufacturing plants. So you can think about this beautiful human-machine collaboration happening in the open, but all the issues related to safety that can emerge from it, right? And alongside that, I saw a few emerging companies creating digital avatars, and this was a very interesting time because my daughter was just a few months old and I saw these digital avatars learning at the same speed at which she was learning.  

 

So I would say that this whole notion of physical safety when humans and machines are interacting, and then how quickly machines can learn and become as good as humans at learning, if not at reasoning, was, I would say, an "aha moment" for me. That was the moment when I started to think about what our responsibility is as builders, as technologists. It is not just about building great products and putting them out in the market.  

 

It is also taking accountability for how those systems are going to change the world, the society. I would say that that started 10 years ago, and a mantra that I started to hold very close to my heart was, “What we make, makes us.” So I’m going to repeat that, “What we make, makes us.” As technologists, it’s our responsibility to make sure that the technologies we are putting out in the world that are becoming the fabric of our society, we take responsibility for it.

 

So fast-forward 10 years later, I was working at Microsoft and saw similar challenges as people were building large-scale AI systems, commercializing machine learning in facial recognition, in speech recognition, in large language models, and this whole notion of “what is our accountability and responsibility?” started to take center stage. And then there was a big, I would say, focus for me on the growing oversight deficit that started to get created between the technical stakeholders coming from machine learning, data science, and product, and the stakeholders coming from nontechnical backgrounds who understood risk, coming from compliance and policy.  

 

I would say that was the moment when I thought, we need to make sure that we are taking responsibility and accountability for these systems, and the only way we can do that is by bridging this gap between these two stakeholders who are paramount to AI development, and that led me to creating Credo AI.

 

[0:05:19.9] RS: Now, oversight, compliance, et cetera, is common across lots of industries and usually, however, it is foisted upon those industries by governments. It’s rare that it comes from the industries themselves, and that’s sort of what I see in AI and machine learning, is that the practitioners and researchers and academics are the ones raising their hands and being like, “Hold on a second, there should be some oversight here.”

 

Why, and this is perhaps a fundamental, bordering-on-naïve question, but why do you think that is the case in this particular area of technology, when you might not see it in other sectors?

 

[0:06:00.7] NS: Yeah, what a great question, Rob. I do believe artificial intelligence is a drastically different technology than all the previous revolutions we have gone through. And the reason it is drastically different is not just because it is algorithms reasoning over massive amounts of data, but because these are learning systems which, once unleashed in the world, start making their own decisions, right?

 

So when you think about something which is so powerful, if not controlled, if it’s just unleashed in the world, what are the implications for, again, not just the outcomes we might be expecting as consumers but the outcomes businesses might be facing? So yes, you are right that there is a lot of raising hands, not only from the technical side saying, “Hey, what is our responsibility, how do we bring more oversight?”

 

But I think there’s a growing social movement around the impact these technologies can fundamentally have on the way we are working, through hiring algorithms; the way we are getting educated, through education algorithms; the way we are getting healthcare, right? Through all these algorithms that have become the backbone of society. So yes, it is, I would say, very different from previous technological revolutions.  

 

It is not just the technical stakeholders raising their hands and asking for more oversight. I think there’s a growing sense that whether you are in a regulated industry or an unregulated industry, the fundamental way you can actually deliver on the value of AI is through trust, and that’s where we’re aligned right now.

 

[0:07:47.5] RS: Can you speak a little bit more on the trust part of that? Why is that so crucial in this sector?

 

[0:07:54.7] NS: Yeah, and by the way, trust is crucial for humanity. It’s how you and I are interacting, yeah? So the way you and I interact, the way global powers interact, the way companies interact, if foundationally you can’t trust the other side, guess what? You’re not going to do business with them, you’re not going to be friends with them, you’re just not going to be in their sphere.

 

So the fundamental, I would say, attribute of a good economic and societal fabric is the trust that brings all of us together. In artificial intelligence, that trust can easily be broken or that trust can be augmented. The way it can be broken is, as we’ve discussed, and I think this has been in the media quite a lot, the black box problem, which I think we’ve gone past, but people still talk about it.  

 

Fundamentally, you have these algorithms. Many of them, especially the new techniques like neural networks, are very difficult to go inside and figure out how they are making decisions and determinations. And by its very nature, trust is all about saying something and then consistently delivering on what you said, that’s how you build trust. So when you have this black box, you don’t even know how it’s making these decisions, how it’s reasoning.  

 

Right at the onset, you don’t have the trust. Now, if it is making decisions on the surface which look right but as you start unpacking it, as you start saying, “Are they safe for me? Are these compliant? Can I audit them?” That’s when you start seeing, “Oh my God, on the surface, it might look like it’s a great prediction machine, but under the layers, it’s actually not performing well for all the demographics, it is not performing in all the scenarios.”  

 

That’s when the trust starts to erode. So the reason trust is critical in artificial intelligence is not only at the technical level, understanding how these systems make the predictions that they do or generate what they are generating, especially in this new realm of generative AI. If you think about the applications by businesses, if businesses are not transparent around who worked on this system, what kind of testing was done, where are you sourcing data from, how are these systems audited, who provided oversight for these high-risk systems?

 

If all those things are not clearly stated, over time there is going to be an erosion of trust, even with the most loved brands in this world. I fundamentally believe this whole responsible AI and trustworthy AI is the competitive advantage that is going to help brands build that trust and lead in this age of AI.

 

[0:10:35.8] RS: So how does Credo help companies attack some of those issues around transparency and trust?

 

[0:10:42.9] NS: Yeah, so this goes back to why I founded Credo AI: to bring more, I would say, honest accountability to artificial intelligence. So, stepping back to what we do, we are a governance platform. We have both a SaaS offering as well as an on-prem offering, where this governance platform sort of sits on top of your machine learning infrastructure.

 

Whether you have one system or a fragmented set of systems, everything from development pipelines all the way to monitoring pipelines in production, whatever those systems are, Credo AI comes and sits on top of it. What Credo ends up doing after that is, one, we basically look across the ecosystem to see, what are some of the guardrails that are emerging?  

 

Now, these guardrails could be something as specific as a regulation, whether it’s the New York City law or the upcoming EU AI Act. It could be emerging standards, for example, the AI Risk Management Framework coming out from NIST. Or, going back to your point where companies and their technical folks are raising hands, it could be company guidelines: “Hey, there are certain guardrails we need to operationalize and put in place.”

 

So the first step that Credo does is it takes in all these requirements that are emerging from the ecosystem and from the companies, which align with company objectives as well as regulatory objectives, ingests those guardrails, and then it basically creates technical measurements for your datasets and your models, against which we do assessments and testing in a black-box manner as to how your datasets and models perform against those requirements.

 

But the second thing that we do is, we also take those requirements and map how your existing AI development processes as well as accountability structures align with the needs of those requirements. The output of this entire process, the governance process, is a set of artifacts. These artifacts could be, Rob, again going back to the need for transparency and trust, for your internal stakeholders like your board, your audit committee, your technical teams, your compliance teams.

 

Or these artifacts could be for external stakeholders like the consumers who are impacted by your technology, the regulators who might be demanding more oversight into your systems, or the auditors that you might be bringing in to audit your systems. So this governance platform, as you can imagine, is a combination of not only technical assessments but also process evaluation to bring oversight across the entire AI lifecycle.  

 

Then I do also want to emphasize something that a lot of times gets missed: artificial intelligence governance cannot be a peanut butter approach. It is super context-sensitive. What I mean by that is the way Credo AI does AI governance is really based on use cases, and I use this example way too much, I’ll come up with another one next time, but think about facial recognition.  

 

We have certain customers who are using the same set of models, combined together in an AI use case, to unlock a phone and to gain access to a building, which might hold proprietary information. If you think about these two use cases, you don’t have to be as perfect on your phone, but on accessing a building which has proprietary information, your risk profile just suddenly increases.  

 

So as you can imagine here, context is so critical to governance. How you test your systems for all these different parameters of safety and robustness and reliability drastically changes across context, and that’s what Credo AI brings in through its governance platform.
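
To make that use-case sensitivity concrete, here is a minimal, hypothetical Python sketch (not Credo AI's actual product or API) of the general pattern described above: guardrails are ingested as metric thresholds per use case, and the same black-box measurements of a model are judged against stricter thresholds for the higher-risk building-access scenario than for phone unlock. All metric names and threshold values are invented for illustration.

```python
# Illustrative sketch only -- not Credo AI's actual API. It shows the idea that
# the same model metrics are judged against different guardrails depending on
# the use case's risk profile.

from dataclasses import dataclass

@dataclass
class Guardrail:
    metric: str          # e.g. "false_match_rate" (hypothetical metric name)
    max_value: float     # threshold derived from a policy, standard, or regulation

# Hypothetical, use-case-specific requirements (values are made up).
USE_CASE_GUARDRAILS = {
    "phone_unlock":    [Guardrail("false_match_rate", 0.01)],
    "building_access": [Guardrail("false_match_rate", 0.001),
                        Guardrail("false_non_match_rate", 0.02)],
}

def assess(use_case: str, measured_metrics: dict) -> list[dict]:
    """Check black-box measurements of a model against one use case's guardrails."""
    report = []
    for g in USE_CASE_GUARDRAILS[use_case]:
        value = measured_metrics.get(g.metric)
        report.append({
            "use_case": use_case,
            "metric": g.metric,
            "measured": value,
            "threshold": g.max_value,
            "pass": value is not None and value <= g.max_value,
        })
    return report

# Example: the same model may pass for phone unlock but fail for building access.
metrics = {"false_match_rate": 0.004, "false_non_match_rate": 0.03}
print(assess("phone_unlock", metrics))
print(assess("building_access", metrics))
```
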

 

[0:14:35.9] RS: I see. So who is it in the organization you interface with typically? And kind of what I’m asking here is, whose job ought it to be to really be in charge of the accountability and transparency of their technology? Is it a director, is it the chief technology officer, is it a dedicated role? I’d love to know, in your experience, who it was, and, in your opinion, who it ought to be.

 

[0:15:00.2] NS: Yeah, great question. I’m going to give you not only those two answers but I am also going to give you, idealistically, whose responsibility it is and, realistically, whose responsibility it should be. Idealistically, everyone who is touching AI systems, whether you are coming from design, from product, from data science, all the way to compliance and policy, it should be everyone’s responsibility, but that’s not how organizations work.  

 

So what we have found with our current work and our current customers is that it really depends upon how the organization is set up. So for example, one of our customers is one of the largest aerospace and defense contractors in the world. In that organization, the core responsibility really resides in the technical teams, where their chief AI architect is responsible for making sure that the right accountability structures and governance are put in place.  

 

One of our other customers is a financial services customer. In that case, the core accountability resides with the governance team lead, who happens to come not from a technology background but actually from a compliance background, but who interacts day in and day out with data scientists and machine learning engineers to make sure the systems are performing.  

 

So right now, Rob, this is an emerging area, which is sad to even hear, because it should be foundational to how companies operate, but it is an emerging area as to who is accountable for all the AI risk. Over time, our hope is that over the next five years we are going to see core personas emerge who are going to be responsible for AI risk, trust, and transparency management. But right now, I would say it really depends on how the organization is set up.  

 

[0:16:50.1] RS: So will that look like a transparency tsar or something like that, someone who reports into the chief technology officer or maybe the CEO themselves?  

 

[0:16:58.3] NS: You know, I have opinions on that. I do think that, similar to how chief AI officer and chief data officer roles have emerged in the past couple of years, we will see chief trust officer or chief ethics officer roles starting to emerge. The question that I do ask myself is, why is that necessary? Because within every function, whether you are part of the audit team, whether you’re part of the compliance or safety officer team, I think there need to be individuals responsible for it.  

 

So we are going to see more trust officers and trust functions emerge within organizations, but I am not really sure if a chief ethics officer is going to emerge as a core persona, at least not in the next two to three years. Is it needed? Absolutely, it’s needed.  

 

[0:17:46.8] RS: So of course, it is good to be compliant and be in line with the regulatory bodies. I am curious what that looks like practically. Meaning, I guess what I am asking is, where, in your experience, do customers typically run afoul of that compliance? Like, where are the areas in companies that the people working on this tech need to be extra wary of, because they come up a lot in terms of operating outside the lines?  

 

[0:18:14.4] NS: Yes, so Rob, before I answer that question, I do want to level set here. What we are finding is that the organizations who are investing in governance solutions like Credo AI right now are not doing so just to get that compliance checkbox. Compliant AI is absolutely critical, but as you can imagine, right now there are existing regulations that are getting adapted for AI and machine learning.  

 

There are upcoming AI regulations, but that entire ecosystem is still forming. So what we are finding is a slightly different positioning, which is really exciting to see, where companies who have bet big on machine learning applications for the past couple of years are recognizing that leading with transparency with their customers actually helps unlock more sales. It helps them retain employees longer.  

 

It helps them bring on new customers, retain customers longer, and build a great brand. I would say a lot of our customers, 50 to 70% of our customers right now, use Credo AI to start building that capability set, again, leading with trust, not just focused on, “Oh, there is a regulation that I need to be compliant with, so let’s do a checkbox.” We don’t see much of that.  

 

Having said that, we are seeing, because of emerging regulations, especially in the regulated sectors that we work in, be it insurance or financial services, yes, there is a certain body of thinking around, “Oh my god, there is going to be upcoming regulation that the Feds are going to mandate,” or upcoming regulation on the insurance side at the local and state level, and we need to make sure that we are ready for it, because most of these regulated industries have seen these waves in the past.  

 

So I do want to level set there: a big part of our business right now is companies building capabilities to lead in the age of AI, using trust as a mechanism, using governance as a mechanism. And yes, we do see a good chunk now of companies showing up where the regulations are starting to emerge. With that context, to answer your question, I think it is a very difficult problem, right?  

 

Because everything is so contextual, I’ll give you an example of what’s happening in New York City. New York City last December passed a law which mandates that if you’re a company operating in New York City, by January of 2023 you need to provide a disparate impact analysis of all the hiring tools, whether they are built in-house or procured from a third party, which are called automated employment decision-making tools.  

 

You have to do a disparate impact analysis and provide a public-facing disclosure that will reside on your website. So as you can imagine, this is obviously a precedent-setting regulation, which is exciting. The other exciting thing is that it is specific, not very specific, but it is specific in terms of disparate impact analysis across certain protected attributes like sex, gender, and ethnicity.  

 

It does not tell you what those thresholds are; it is just asking you to report on those thresholds. So for technical teams, this is a good first step. Is it comprehensive? Absolutely not, because it does not report on all the other protected attributes which are mandated by the EEOC, the Equal Employment Opportunity Commission. So there are certainly gaps, and then there is also a gap around who audits these results, who is actually saying these results are good.  
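
As an illustration of the kind of analysis this law asks for, here is a minimal Python sketch of a disparate impact calculation, assuming binary selection outcomes and one protected attribute at a time. The four-fifths (0.8) rule used as a flagging threshold here is an EEOC convention included purely as an example; as noted above, the law itself asks for the ratios to be reported rather than prescribing a cutoff.

```python
# A minimal sketch of a disparate impact calculation for a hiring tool, assuming
# binary "selected" outcomes and a single protected attribute per check.

from collections import defaultdict

def selection_rates(records, attribute):
    """records: iterable of dicts like {"gender": "F", "selected": True}."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        group = r[attribute]
        total[group] += 1
        selected[group] += int(r["selected"])
    return {g: selected[g] / total[g] for g in total}

def disparate_impact(records, attribute, threshold=0.8):
    """Impact ratio of each group's selection rate vs. the most-favored group."""
    rates = selection_rates(records, attribute)
    reference = max(rates.values())  # most-favored group's selection rate
    return {
        group: {"selection_rate": rate,
                "impact_ratio": rate / reference,
                "flagged": rate / reference < threshold}  # four-fifths rule, example only
        for group, rate in rates.items()
    }

# Example with made-up data:
candidates = [
    {"gender": "F", "selected": True}, {"gender": "F", "selected": False},
    {"gender": "M", "selected": True}, {"gender": "M", "selected": True},
]
print(disparate_impact(candidates, "gender"))
```
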

 

So what we are seeing right now in this ecosystem is that technical stakeholders are actually excited about these guardrails that some of these regulations or company policies are providing, because now they have a clearer view of the tradeoffs they have to make, whether it’s accuracy, precision, recall, or false positive rate, what are they optimizing for? Now, some of these regulations and company guidelines are giving them those tradeoffs rather than them trying to experiment in multiple different spaces.  

 

The not-so-good part is we don’t know if these are the guardrails that are going to emerge as standards. These are just early regulations, and that’s why this public-private partnership is super critical, and that’s one of the reasons I am engaged, whether it is with the World Economic Forum or with the National AI Advisory Committee: because it’s our obligation as technologists and builders to share our technical knowledge with the policymakers and regulators who are coming up with these guardrails, while it’s also our responsibility to understand how that policymaking happens, right? How those regulations come to fruition.  

 

So this public-private partnership and this technical business policy partnership is super critical, so that we can actually have informed oversight of these systems, especially the high risk systems that are powered by machine learning.  

 

[0:23:30.0] RS: Okay, I am really glad you called that out, and particularly this notion that box-checking is not enough, because regulation is bureaucratic and thus slow and has always trailed industry, and here you have an industry that’s moving faster than ever before. So something that was already not able to keep pace is now going to fall even further behind. So what is the incentive for people to think beyond compliance?  

 

Because you can be compliant and still be unethical, or still be delivering technology that maybe ought not to be compliant but hasn’t gotten to that point yet. So where do you think companies should be taking a stand here, in terms of, “Okay, yes, it’s compliant, but is that still okay?” Or sorry, “Yes, it’s compliant, but is it still wrong?”  

 

[0:24:27.5] NS: Yeah. No, I think Rob that’s an excellent point because you could be compliant but you could still not be fair, and you could still be performing poorly on certain demographics, as an example. I think this is where the companies need to bet on transparency as a currency to build that trust. Again as I mentioned, right now, we are going through this moment in time where we are seeing emergence of AI-first companies.  

 

If you take the example of generative AI, all of the excitement we’ve seen in the past two months, the emergence of new industries and new companies in that space, I think it is really important for us to take a step back and think about, one, how can we ensure that governance keeps pace with this technological advancement?  

 

Two, what would governance really look like? Because it cannot be that box-checking exercise, and if it cannot be that box-checking exercise, how can we ensure that the right measures, the right benchmarks, the right standards emerge, so that we can look across all these companies and see how they’re doing? This is something that Credo AI is betting big on: disclosure reports and transparency reports as a mechanism to build that trust.

 

So why are companies doing it, at least the customers that we work with right now? Because the minute they are not only delivering machine learning based applications but, along with that, delivering governance artifacts showing how their entire machine learning system was built and reviewed, that by itself is saying, “Hey, you know what? Even in the absence of regulation, or even if there is regulation, here is how we’ve not only built the system but also how we are holding ourselves accountable.”  

 

That is a fantastic first step, through these disclosure reports, through these transparency reports, to put a stake in the ground that we are here, we are going to be transparent about how we are building these systems, and we want to make sure that happens effectively across the globe.  

 

[0:26:22.9] RS: I love the notion of a transparency report. Let’s even dial back a little bit from the transparency report to the first thing someone can do, right? If you’re in an organization, you know innately whether your company has prioritized this sort of thing or not, right? Whether you’ve measured it or not. Well, if you haven’t, then you definitely know that they haven’t focused on it.  

 

But the best thing a person could do in that scenario to begin building more trustworthy, transparent, compliant tech would of course be to go to credo.ai and click “request a demo,” right?  

 

[0:26:55.2] NS: That would be nice.  

 

[0:26:57.8] RS: Short of that, what can practitioners do in their organizations at the very outset to begin building towards this scenario where they have more reliable, trustworthy, transparent tech?  

 

[0:27:07.9] NS: Yeah, great question. I do want to emphasize that responsible AI and trustworthy AI are a journey, and the first step that you can take is really just to take stock of where you’re actually using machine learning versus where you’re not. There is actually a conflation of terms; we do see companies that slap on “AI-powered this” and “AI-powered that,” and then when you look under the hood, nothing is AI-powered, right?  

 

So I think, one, they are setting themselves up for failure in this new world. But if you are using machine learning, really understanding what kinds of models are being used, how you are training those models, how you are testing those models, and taking stock of that model repository is really the first key step. The second key step is really bringing more multi-stakeholder perspectives into your development pipeline.  

 

It does not slow down your development; it actually accelerates building more secure and safe artificial intelligence if you bring in folks from backgrounds like policy, risk, and compliance earlier in your development pipeline. So that would be my second recommendation. The third recommendation is really making sure that, as a company, not only the leadership but also the technical leaders are putting responsible AI and transparency as a priority.  

 

Because it might seem like, “Oh, why do we need to do that?” Guess what? It’s really important, because when things go wrong or go south, you will see that trust and transparency functions, similar to diversity and inclusion functions, are among the first ones to be let go, and I think that is a moment in time that tells you how important fundamentally building good technology was for that company. So I think a key aspect is that it needs to come from the top down.  

 

Leadership needs to bet on it, but then you also need to bring in multiple stakeholders to understand how you’re going to bring responsible AI to fruition for your company.  

 

[0:29:02.0] RS: Navrina, that is fantastic advice, at the end of 30 minutes of fantastic advice. So as we creep up on optimal podcast length here, I would just say thank you so much for being on the show. I loved learning from you today.  

 

[0:29:13.1] NS: No, thank you so much for having me Rob.  

 

[END OF INTERVIEW]

 

[0:29:17.1] RS: How AI Happens is brought to you by Sama. Sama provides accurate data for ambitious AI, specializing in image, video, and sensor data annotation and validation for machine learning algorithms in industries such as transportation, retail, ecommerce, media, medtech, robotics, and agriculture. For more information, head to sama.com.  

 

[END]