Innovation Nation

Episode · 1 year ago

AI Innovation Starts w/ Responsible Regulation

ABOUT THIS EPISODE

When we think of artificial intelligence, we think about innovation.

AI is a catalyst for important innovation, but not all innovation moves in a positive, desired direction, especially when we don't understand the risks involved in AI and machine learning.

On this episode of Innovation Nation, I talk with Sri Krishnamurthy, Founder and CEO of Quant University, about how organizations can engage with AI in a responsible manner.

We also talk about:

  • How to innovate with AI while mitigating risk.
  • Why people should care about privacy.
  • The evolution of innovation in higher ed.

Tune in on Apple Podcasts, Spotify, or wherever you listen to podcasts.

Innovation is all around us. In fact, everyone innovates, often unbeknownst to themselves. Many mistakenly assume that innovation is either a big capital project, a figurative bolt of lightning that brings inspiration, or the province of some exceptionally gifted person. This is the myth of innovation. But you can innovate as well. You are listening to Innovation Nation, the podcast where top executives and industry experts share their insights on harnessing the power of innovation. We're here to help you stay ahead of the curve by driving your own innovation. Here's your host, Jasmine Martirosyan.

Hi, welcome to the Innovation Nation podcast. This is your host, Jasmine Martirosyan. Our guest today is Sri Krishnamurthy, who is the founder of Quant University. They are actually working on some really cutting-edge stuff dealing with AI, and they even have an advisory role helping organizations engage with AI in a responsible manner. So I'm sure he will tell us more about it. Welcome, Sri. Thanks for joining us.

It's a pleasure to be here.

So tell us, how is your QuSandbox working, and how are you innovating with AI?

Well, thank you. So we have been working on enabling and making artificial intelligence accessible for the past seven years. When Quant University started, we saw this whole confluence of technology, with artificial intelligence and machine learning at the cusp of a revolution, and we started putting together programs to enable people in the industry to adopt these technologies within the enterprise. As we saw the adoption increase, we started to realize that there have been a lot of developments which people are adopting without really understanding how they actually work, and with black boxes being prevalent in the marketplace, now decision making...

...without much insight into how these decisions are being enabled through machines, there's been a bit of concern about what it means when we rely on them for making critical decisions, both in the enterprise and for the general public. So we are looking at this whole notion of AI risk and how you make sure that you pragmatically understand and ensure that the models and the artificial intelligence and machine learning technologies you're using can be responsibly adopted within the enterprise. We do it in three forms: QuSandbox is our tool which enables automation of various AI models within the enterprise. We use a lot of open-source tooling and we also integrate with a lot of enterprise tools to make it easy to adopt, and we also augment it with education and advisory services to make sure that people can understand how to use this pragmatically in the enterprise.

Thank you for sharing that. You're raising a very important question here, since when I, and I'm sure the majority of people, think of AI, we think, oh, this is very innovative. Now, AI has been around for decades, though it's getting integrated into more mainstream products all the time now. Often it's not even visible to people; they don't know they're dealing with it. But not all innovation necessarily goes in a positive, desired direction. Sometimes you can innovate into a very negative sphere. So obviously you're trying to mitigate that. Can you speak a little bit about that?

Obviously, what's happening in the industry is that there is the innovation aspect, which is, you know, Silicon Valley trying to figure out: how can you make things faster? How can you make things better? How can you make things cheaper? But there's also this whole big data revolution which we are...

...banking on, right, because data is the fuel which is driving all these innovations. And when we do not have good regulation, we are basically in a kind of Wild West, wherein data is leveraged to make decisions in ways that are not always going to be the best way to adopt technology, because most of the time the models could be biased and they may not be fair for certain classes of people, especially marginalized classes. So when we take technology and we think that we are going to be building new things, but we don't have mechanisms to understand the negative consequences of this technology, and we don't even have tools and means of measuring whether the technology can be uniformly applied in various scenarios, that's where, you know, bad things can happen, and that's one of the things we are trying to see how we can mitigate.

You just said something very interesting: just because something comes out of a big Silicon Valley corporation doesn't make it automatically safe.

Absolutely, absolutely. And one of the things we're trying to do is look at this whole notion of disclosure. How transparent should you be when you build models? Of course there are business secrets, but if you are going to be using a model in a large setting, how can you make sure that you are disclosing enough about your model so that there is scrutiny and people can adopt it more transparently?

Very often people say, if you're getting on a major social network and it's free, you are the data. What do you think of that?

Well, if you look at how much time you spend in a day, you are going to be either a consumer or...

...a producer of data, and it's all through social media, right? So you are going on LinkedIn, you're going on WhatsApp, you're going on Facebook, TikTok, Twitter, multiple platforms with which you are interacting. There you are revealing your thoughts, your affiliations, your connections, the interests you have, and all those are basically pieces of a big jigsaw puzzle, and that's you. You're more or less revealing how you would lead your life in a social media world. Initially it was a novelty; it was more of, let's get on to it to figure out what's actually happening. But we've been producing so much data, and I just looked at how much time I spend on social media on a weekly basis, and it's just mind-boggling how much time you're spending and what all is being disclosed. So yeah, you're right. We are in some ways becoming generators of data, which is telling the world who we are and what we are about, more or less without much of a filter.

Interesting. So, for instance, Europe and Canada have stricter data privacy laws than the United States. I think the US is starting to get there slowly, starting with California, which is a bellwether state in terms of legislation, often ahead of the curve. And you hear a lot of people say, why should I care about my data? I'm not doing anything illegal. So there is less, I would say, overall American concern about the sanctity of privacy. Why should people care?

It's not about the individual data points, right? Everyone has their own notion of what is private, and that bar keeps changing over time, and when...

...we just look at somebody revealing very personal details, well, that's their prerogative; let them do whatever they want. But if you look at it from an aggregate perspective, whatever we are revealing could be aggregated and could be used as a means of influencing us. Organizations like Facebook are collecting so much data about our networks, about our preferences, about our affiliations, about the organizations we are involved in, and that data could be brokered and made available to other entities who could channel different kinds of advertisements to you, or basically engage you in ways which you probably did not anticipate when you first got involved with various social networks. But in addition, start thinking about what all could be done: what information could be made available to you, and what information could be blocked from you? So in some ways you are being influenced not only by your social network, but also by other channels, which can potentially reach you through advertisements, through propaganda, through fake news, and it could also be used to influence situations like elections or social unrest. And all these things which, you know, initially you think, well, I just gave my data, how did that matter? But overall, all that aggregate information matters, and even a one-person change could potentially lead to a lot of difference.

Something seemingly very innocent could be used for very different, negative purposes and outcomes.

Absolutely, and that's where, you know, even things like when we are in a healthcare...

...pandemic, wherein there's a lot of discussion about the efficacy of vaccines, organizations may be involved in false propaganda. During elections, if there is a very close election, it could be used to influence people and have repercussions in terms of how people are going to turn out for elections. And in developing countries, in other countries, you're seeing a lot of social unrest and the role social media plays during those unrests. So it is something to be concerned about; especially if there is no legislation, there could be serious consequences.

So it can even affect democracy and basic rights that people just take for granted.

Well, you know, that's a very important question, right? Because our values, the way we have evolved as a society over time, are all changing because of social media, in various countries, because of information now being freely available. Some positive things have happened, but the political establishments, the organizations in power, are leveraging that information and are trying to see how it can be used as a central way of influencing populations, and that's where you have to be careful about how things could potentially shift, especially when democracy is very fragile in certain countries.

So your organization has been doing this AI-related advisory work for the last seven years, and again, as I said, with AI, mentally I go, that's very innovative. How did you get into this? What drove your engagement in this area?

That's...

...a great question. This was twenty-plus years ago, when I used to work for a manufacturing company. My undergraduate degree was in mechanical engineering, and a Japanese company bought our company and brought in robots. I was fascinated by them, because what used to take a month and a half was done by the robots in a day. And all those were programmed; they just basically fascinated me. There was so much productivity which could be obtained, and many of them were not really intelligent; they were programmed, they were automated, but it just changed my world view on how you could leverage technology and increase productivity. My graduate studies were on computer design and robotics, and I basically did a lot of theoretical work on how we look at intelligent machines and how we look at productionizing various algorithms. But at that point it was not really easy to make that ubiquitous, because we did not have the technology, we did not have the various means to make it as prevalent as what we have now. And I was involved in a lot of automation projects, a lot of technology projects. But in the last ten years things have changed. We've seen algorithms, data, and hardware, all of them at the cusp of a revolution, and that's what is basically bringing me back to this whole area of how we leverage all these powerful technologies to make a difference.

So you're also a faculty member teaching at Northeastern University, working on issues at the juncture of these areas with AI. How do you see the evolution of innovation with the younger generations? How are they integrating the data? What's exciting?...

What are they trying to do?

Yeah, so that's a great question. About seven years ago I used to work for a company called MathWorks, which makes the product called MATLAB, and that's still one of the most popular statistical packages for building machine learning and data science related products. But seven years ago, when students went into a data science related role, it was more of a novelty. At that point they were traditionally going for software engineering roles, and when we started looking at the power of data, people were curious, and they were just looking at open-source products and at visualization and saying, okay, now I can also do some data analysis work. Fast forward seven years: you can build a whole production-quality machine learning product without any hardware, with a simple laptop, with just a browser, running elastically on the cloud, with a few dollars' worth of investment and knowledge. That's what my students are realizing: you don't really need to take on a whole software undertaking and basically build out architecture, build out a data center, build out hardware, build out a whole establishment. You have recipes; you can just integrate and build out machine learning products on the fly. That's where they're heading, because that's how we were able to get this revolution going.

That's an entirely different universe. I mean, fifteen years before that there were no data science majors.

Absolutely. There were statistics folks, there were quantitative finance folks, people doing analysis. But again, the perspective was different, right? Because it was a sampling-based approach: look at the past, take some samples, and extrapolate for the future. Now, with tons of data, we can build very complex algorithms without making model assumptions. We don't have to say this is a linear relationship; we'll say, let the machine figure out the relationship.

Fascinating. And also...

...kind of eye-opening to stay current and be at the frontier of revolution and development. Any words of wisdom about how we as individuals, as consumers, as organizations should engage with AI? What questions should we be asking?

Well, every facet of our life is going to change because of artificial intelligence and machine learning; we've already seen that. We have Alexa, Siri, Google Now, Ring, Nest; there are so many products which have permeated our lives. Now, when you look at using a Tesla, everything is automated; all those are machine learning driven. And I was watching a documentary yesterday, and now they're talking about how you could potentially have wearables which are very closely integrated with our lives. And if we are not aware of these innovations, there's a quality of life issue, because, guess what, when the pandemic hit and people were booking appointments for vaccines, it was really hard for seniors to book appointments because of the multiple websites, because of the multiple steps. So if you're not comfortable with using technology, then it's only going to be the people who are comfortable and familiar with the technology who are going to be able to use some of these things. That's a cautionary note to look out for. But on the other hand, there are so many benefits which are possible, cost-wise and productivity-wise, which are going to change our lives for the better: healthcare, being connected with everybody, especially in the age of a pandemic; we are all using Zoom and other means of being in touch. So I think everybody should look at it positively, but also understand where things could go wrong and be watchful for those negative consequences.

Fascinating. Everybody should be more cognizant of how...

...AI is all around us. Any thoughts you'd like to share in closing?

Thank you for the opportunity again. You know, one of the things we are looking at is certification: how do we look at ways in which you can measure these things? And I think that is going to change in various spaces, even in the near future, because we have seen innovation happening in various spaces, but as and when the technology matures and becomes more ubiquitous, there is going to be scrutiny on what the technology should and can do, and what it should not do. And I think that is something which everybody should take seriously, especially when you're looking at important innovations. We are seeing that in the banking industry; we're seeing regulators ask questions. But I'm very hopeful that, because of the level of discussion which is happening in the States and in Europe and other places, we will have a good way of bringing these technologies to fruition in a very responsible way.

Thank you, Sri. We really appreciate your taking the time to join us. Again, this is the Innovation Nation podcast, your host Jasmine Martirosyan, with our guest Sri Krishnamurthy, founder of Quant University. Thank you. Bye.

You've been listening to Innovation Nation. For more, subscribe to the podcast in your favorite podcast player or connect with us on LinkedIn. Thanks for listening.
