AI holds incredible promise to improve virtually every aspect of our lives, but we can’t ignore the possible risks, mishaps, and misuses. In her keynote, author and AI expert Juliette Powell will touch on seven principles for ensuring that machine learning supports human flourishing. These principles draw on her research at Columbia University and feature a wealth of real-world examples.
– [Narrator] And now please welcome media commentator and leading Fortune 100 technology consultant, author, and AI expert, Juliette Powell.
– Hello, hi. Hello everyone. How you doing? I know, I’ve been listening in on your conversations over there. I’ve been hearing so many great things about the food, about the location, about Poet. Did you love his presentation earlier, or what? Yeah, I had the opportunity to meet him and I didn’t know it was him. So we bumped into each other and he starts chatting me up and I’m chatting him up, and then the next thing you know, he’s like, you know, you’d make a really good presenter at Attune. I’m like, oh, okay, thank you. I love when these serendipitous connections happen. And they happen all the time when you pay attention. And that’s kind of what I wanna talk to you about today, how we can pay attention to artificial intelligence. And I know for some of you it’s gonna be, yeah! And others are like, ah, I don’t know about that. I know you’d all rather be in the pool out there, so I’ll try not to torture you for too long. But I do wanna give a big shout out to Nicole Alvino. She’s the CEO of Firstup, and I met her earlier. She is, first off, one of the few women CEOs of a tech company that I’ve ever had the opportunity to meet. So you gotta give her some love for that. Yeah. And apparently, now, she didn’t tell me this, the crew did. She’s the one who wanted to have programming about responsible artificial intelligence. It’s all her fault! Thank you, Nicole. I am so pleased to be here. Also, I have to call out the entire production crew. I mean, look at this. So I’ve been giving keynotes probably for the last 15 years on various topics. And before that I worked in broadcast television. But I’ve gotta tell you, this crew, I have never, ever, ever seen this in the world of conferences. I’ve seen this in television, but I’ve never seen this in the world of conferences. They went through everything with me. They wanted to make sure that everything, and I mean, everything was perfect for you. So let’s give some love to the crew, thank you. Yeah, Attune ’23. 
I am just so happy to be here. All right, so when I found out that I was coming to Phoenix, I was a little nervous. I’ve gotta say, I haven’t been to Arizona in about a decade. And the last time I was here, it was also for, you know, a presentation. And I remember walking around and chatting with people just like I did here today, but nobody actually acknowledged me as a human. And it was the weirdest thing. I’m like walking around talking to people, and they weren’t really talking back to me. They weren’t connecting with me. I’m like, what is wrong with me? Why? I’m a pretty nice person, or at least I can be. Turns out, and I didn’t know this, there was a rapper that was staying at the resort with us, and they were convinced that I was the wife of the rapper. They had no idea that I was the one who was supposed to be giving the keynote presentation. And so, one of the things that we’re gonna be talking about today, I know, you think that’s funny, imagine how I felt. So one of the things that we are gonna be talking about today is bias, our own human biases. And they can be good and they can be bad. But no matter what you think of our biases, the way that we share them, right, through history, through our children, through our parents, through our society, through our collective, through our organization, right? All of that stuff gets embedded into our AI systems. And so I’ll tell you a little bit about that in a moment. But first and foremost, I have to ask, because this wasn’t here the last time that I came to Arizona. Did anybody here try the robo taxis? Yes, we got a yes! It was like a lone yes, but it was a yes. All right, so let’s bring the lights up just a little bit so I can see your faces so we can all see each other, because now I’m curious, I have personally never been in a robo taxi, but from what I understand, we’ve got people here that are from across the United States. And so different places have different opportunities. 
And I’d be curious if we could just start with this. So how long is it going to be, in your opinion, before there are robo taxis in your local community? So who thinks that they’re gonna have robo taxis in your home community in the next year? Raise your hand. Okay, I see a couple, very few hands. Very few hands, okay. The next one to five years. And I’m raising on behalf of New York City, which is kind of scary to imagine. Okay, the next five to 10 years? Oh, I see more hands coming up, okay. And finally, who here deeply believes that it will never, I mean, never come to your community, raise your hand. There you go. A lot more people than one might have thought. Now this gives me a sense of how much connection we all feel to technologies that already exist, that are out there, that are being experimented upon with humans, which means that we get to think a little bit deeper about what the implications are about artificial intelligence. And today we’re talking about all kinds of AI, not just generative AI, okay? And for those of you who are like, well, I really don’t wanna deal with AI. You know what? I’m just not into it. I’m gonna retire soon. Or it’s just not part of my job. I don’t care. That kind of would’ve been like, remember 30 years ago? I know many of you aren’t old enough to remember this. I am old enough to remember this. 30 years ago, many people heard about this crazy thing called the internet. And they’re like, the internet, we don’t need no internet. Why should we need the internet? What are we gonna do with that, shop? Why would we wanna shop online? No, never! I remember my grandmother, never. Mhm. Yeah, okay. So you remember that curve of adoption that we were just showing you? Now, fast forward social media adoption. Remember when you first got those invitations, many of you were like, I don’t have time for this. Why would I spend my time doing this? Others of you were like, well, yeah, I’m gonna stay connected with my friends. 
But did any one of us really think that it would be the number one source of news? No. And if you did, you’re way ahead of the rest of us, and you should be telling us about AI. Yeah, very few of us thought that. I ended up writing a book about how social media was going to influence business, how it was going to elect the next president. Yeah, that’s how old that book is. 2008 I wrote a book about this, right? Social media impacting the world, woo! Seems kind of quaint now, doesn’t it? Alright, so now let’s look at, ah, yes, you guessed it, generative AI. I think we’ve all seen the statistics, right? Over 100 million users in the first two months after OpenAI launched ChatGPT last November. That is less than one year ago. Now, let me ask you the question again. If we could raise the lights just a little bit. How many of you have now experimented with ChatGPT, or any other form of generative AI? Oh yeah! Okay, so we’ve got a lot of adoption here in this room, and that’s really, really impressive, ’cause I give talks all over the place, and generally it’s about 25 to 30% of the audience that say yes, and everybody else is like, nope, nope, not me, uh-uh. So you’re already ahead of the curve, and that says a lot about you. And hopefully that means that you also are poised to be the leaders in your organization to help shape this stuff, right? We want to be able to use it for the best possible uses within the organization as well as outside of the organization. And hopefully today I’ll be giving you a few frameworks with which you can start thinking about it maybe in a different way. But first of all, again, for those of you who are skeptics, and I know you’re out there, it’s okay. Look at the curve of investment in artificial intelligence. This is only to 2018, right? It’s only going up, which means that this stuff, again, like every other technology we’ve talked about so far, is not going away. So the key today is to figure out how we can best use it. 
And I’m gonna give you a few different examples from something that’s really personal to me. So, you know, a lot of these technologies accelerated during the pandemic. And some of us were in school. I had gone back to school specifically because my mom passed away, and I was just, I was devastated, I was heartbroken, I was lost. She was my best friend. And I honestly didn’t know what I was gonna do with myself. And I just thought, okay, let me throw myself into something that is technical, something that is hard, something that’s going to get me out of grief and into a position of learning where I can grow. And if I do a good job of that, then maybe I can show other people what I might have learned. And one of the things that I learned was this incredible experiment that happened out of MIT. This is 2018, it’s called the Moral Machine Experiment. And it was phenomenal. It’s still online for those of you who are curious about it. Essentially, it looked at a space in which everyone is invited to imagine that you are programming a self-driving car, Robo taxi or Uber, whatever you prefer. But the point was, how would you program it? What would you prioritize? It’s the classic trolley problem, right? If you can only make one of two choices, whose life will you spare and who will die? Because when we talk about self-driving cars, as we’ve seen, for example, in 2018 in Tempe, Arizona, and this was, I think the first case where the world actually watched when a self-driving car hit a woman who was crossing with a bicycle. Granted, it was not at an intersection, but the car did not recognize that this woman with a bicycle was actually a human being. And there are many different reasons for that, which we’re not gonna go into here. This is not a technical talk. But it’s a big distinction in the way that we as humans approach driving. Think about it, right? We’re trained from a young age, even just driving with our parents. 
When there’s something in the road that we don’t recognize, we slow down, de facto, we slow down. Why? Because there are huge repercussions if we don’t. We could kill somebody, we could hurt someone, we could go to jail. So that’s considered to be risk thinking. And we do it every time we get into a car and put on a seatbelt. We do it every time we decide to put on a helmet or don’t put on a helmet when we’re on a bicycle. That’s risk thinking because we’re calculating, even though it’s a very, very small probability that it might happen. If it does happen, if I do get hit by a car when I’m on a bicycle without a helmet on, there’s a very good chance that I will die. Very small probability. But that probability makes me want to opt for life as opposed to death. And others make that calculation and decide, you know what? The probability is too small. So all of us do a different kind of calculus of intentional risk. And we’ll talk more a little bit about that in a few minutes. But first, back to the self-driving car. So, you’re doing the Moral Machine Experiment right now, at least just a few of the experiments. You are the one programming the self-driving car. Now, you are asked what to prioritize. Are you gonna prioritize a person? Are you gonna prioritize a dog? What are you gonna prioritize? What’s important? Excuse me?
– [Audience] The person.
– The person. Oh, that sounded pretty unanimous. Maybe if you were like, no, no, the dog, the dog. It’s okay, it’s all right. Are you gonna prioritize somebody that is walking at a crosswalk, or somebody shared with me today, a zebra walk? Or somebody who’s actually going against the light? Is one life more important than the other? Is the law abiding citizen more important than the person who decides to rebel against the light? Who are you gonna kill? Who are you gonna save, right? It sounds like a silly question, but this is what we’re asking our moral machines, our self-driving cars and others to do. Who are you gonna prioritize? Older people or younger people, who you gonna prioritize? Older, younger, come on. Younger, older, hmm. Yeah. Okay, this is a tough one. Are you gonna prioritize more fit people or less fit people? Are you gonna prioritize men versus women? Who’s more important? Yeah, I don’t have an answer to that either, honestly. And it keeps going on and on and on. And the answer to the question of who decides, in the Moral Machine Experiment, is all of us, all of us that did the experiment. In fact, this thing went viral. It started out with a website that MIT had put up, and eventually they got 39 million responses across multiple countries and jurisdictions. And you know what we have in common? Barely anything. Barely anything. Check this out, okay, so the global consensus is only on three points. One, yeah, we wanna save lives. We’re not killers. We wanna save human lives over pets. Yeah. And we want to save children over adults because we care about the future, we care about our legacy, we care about human survival. I think most of us would agree with that. Now, at the end of the Moral Machine Experiment, there are three questions. And the funny part is, I took this test and my co-author of The AI Dilemma book took this test, and we took it in totally different ways. 
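Those three points of global consensus — spare human lives over pets, spare more lives over fewer, spare children over adults — can be sketched as ranked tie-breaking rules. This is purely an illustration of the idea; the group attributes and the `prefer` function are hypothetical, and no real self-driving system is programmed this way:

```python
# Toy sketch of the three near-universal Moral Machine preferences,
# applied in order as tie-breaking rules. Illustrative only.

def prefer(group_a: dict, group_b: dict) -> str:
    """Return which group of potential victims to SPARE: 'a' or 'b'."""
    rules = [
        lambda g: g["humans"],    # 1. spare human lives over pets
        lambda g: g["count"],     # 2. spare more lives over fewer
        lambda g: g["children"],  # 3. spare children over adults
    ]
    for score in rules:
        if score(group_a) != score(group_b):
            return "a" if score(group_a) > score(group_b) else "b"
    return "a"  # no rule distinguishes them; the hard cases stay unresolved

pedestrians = {"humans": 2, "count": 2, "children": 0}
dog = {"humans": 0, "count": 1, "children": 0}
print(prefer(pedestrians, dog))  # "a": the human group is spared
```

Notice that the three consensus rules settle almost none of the dilemmas the experiment actually poses (fit vs. less fit, men vs. women, lawful vs. jaywalking) — those fall through to the unresolved default, which is exactly the speaker’s point.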
So I used to program amphibious cars when I was, gosh, a teenager, long, long, long, long, long time ago. And because I had worked with vehicles and databases, I really wanted to get this right. So I took this test as if people’s lives depended on it, because that’s how I think. If I’m gonna work on something, I’m gonna give it my best. And I know that I’m not the only person in this room that feels that way. But my co-author had never worked on cars. He didn’t really think about these things. And for him, it was more like a video game, right? We’re just gonna answer and it doesn’t really matter. It has absolutely no consequences. So when these three questions came up at the end of the Moral Machine Experiment, he was completely thrown off. Do you believe that your decision on Moral Machine will be used to program an actual self-driving car? And he’s just like, oh, no, I should have paid more attention to that. To what extent do you feel that you can trust machines in the future to make these kinds of decisions? Oh yeah, it’s not just a game, right. Robo taxis, real self-driving vehicles on the road. And then to what extent do you feel that machines will become out of control? And all of a sudden, he got nervous. He got very nervous, and we started doing more research. So the book, The AI Dilemma, is really about artificial intelligence, which we call triple A systems. And some of you have more affinity to artificial intelligence and technology. Some of you don’t. So I’ll just go over this very, very briefly. I don’t wanna bore you. So we’re talking about autonomous systems, so they can perform complex tasks without human supervision. We’re talking about adaptive, right? So they can improve their performance by learning. They do not need us to learn. And finally, algorithmic. And I think most of us are aware of how algorithms shape our lives. 
They shape our lives in terms of who we date, where we go, how we get there, which apartments we rent, where we work, who we connect with. Unless you’re lucky enough to kind of bump into people like Poet. Beyond that, we rely a great deal on algorithms to shape our lives. And for the most part, we feel like it gives us more control. But is it real control or is it the illusion of control? Hmm. Well that’s a deeper question for another day, which we very much delve into when we wrote The AI Dilemma book. But for today, let’s just focus on all of you who have already used generative AI. I don’t think that you need a lesson from me. These are some of the examples, some of the capabilities that OpenAI itself posted initially, in terms of inspiring millions of users. What can I do with this stuff? Well, you could do this, right? But for those of you who’ve been paying attention to the news, I just got an alert this morning saying that in the next two weeks, yes, ChatGPT will be unveiling new functionality. And that new functionality is really sexy. It’s stuff that we’re used to. It’s stuff that we have come to expect. Imagine if instead of just prompting via text, you could actually prompt using your voice like you do with Siri, or OK Google. So that capability’s coming down the pipeline. The next capability that’s coming down the pipeline, again, in the next two weeks, so it’s really right here, is you could, for example, upload an image of your refrigerator, and the system will tell you, based on what’s in your refrigerator, what you might be able to make, what kind of food would be tasty and yummy based on what’s in your refrigerator. Now, that sounds very exciting for those of us who never know what to eat, who by the time we get to the fridge are already exhausted and low blood sugar’s like, oh my God, I’ve gotta cook. No that’s me, sorry. I don’t mean to project. But I’ve gotta say that that second point kind of gave me the creeps. 
You wanna know why? Because I had just given an interview, I think three weeks ago when my book originally came out. And somebody was asking me what some of the current harms could be on generative AI and the way that it was being deployed when there is no calculus of intentional risk. And the story, and some of you might have seen this in the headlines, came out of New Zealand, where a grocery store was trying to help its customers, and they decided to use generative AI to create an app that would look inside your refrigerator, and you would tell it the kinds of foods that you like to eat. And the system would tell you what you could make using your leftovers, right? Except, except, and I actually had to write this down because I was so shocked when I saw the result. So for most people, it worked perfectly well, as you could have imagined. But a few of the recipes were stuff like roast potatoes with mosquito repellent. Yes. Now, imagine you being a parent who cares about their child, and you’ve gotta work and your child comes home, and maybe they’re making their own lunches or dinners, and they’re using this app to figure out what they’re going to make. That’s the worst case scenario. But that is already happening. So we have to be conscious of the risk, and we have to make a calculus of intentional risk around the things that we do. So we have to ask better questions. We have to use our critical thinking, and we have to experiment in smaller doses before we roll things out to the public to make sure that they’re what we think they are. On the other side of the coin, there are many organizations that are already using it. In fact, when I started speaking with the folks here at Attune, they were telling me that they were at a conference a few weeks ago, and dealing with two completely different Fortune 500 companies. One of the companies wants to ban generative AI completely. The other company, the opposite. 
It’s like they’re empowering their employees to figure out how to use it best, not just for themselves, but for the betterment of the organization. So we’ve got very, very different views on how to use this stuff responsibly. What does that even mean? So BCG decided to do an experiment. They like to call it a study; I like to call it an experiment. But either way, here are the results. This just came out September 15th. So BCG consultants who were using AI were 12% more productive and completed work 25% faster. They also produced 40% higher quality results using AI. Now, you can imagine for people that weren’t necessarily up to snuff to start with, how that might’ve increased their productivity. Yeah, an increase of 43% for below average workers. For above average workers, gains of 14%. And then on the other side, you’ve got the fear of disinformation. I’m throwing different things at you because I feel like every time there’s a new headline that comes out about AI, my attention shifts and I’m like, oh, this is good, oh, this is bad. I’m not sure what to think. It’s a dilemma. One of the things that makes me particularly nervous is the fact that we’ve got elections coming up in 2024, and there’s a whole lot of disinformation out there, right? That image on the right, that image of the White House burning, of course, is fake. And we’re gonna be seeing more and more fake images that are associated with politics. And so we really have to start discerning better. We really have to start using critical thinking and judgment because our democracy is at risk. But then again, we still have control, at least for the time being. And when I say that we still have control, here’s an example. 
So for those of you who pay attention to such things, there are, oh, about 200 or so AI leaders in the world that back in March decided to sign an open letter saying that technology, and specifically generative AI, was advancing so rapidly, far more rapidly than they ever would’ve imagined, that they wanted us to press pause on advanced AI research. And whether you like the names that are listed there or not doesn’t really matter. One of the people that I have come to know most recently, Yoshua Bengio, won a Turing Award in 2018, along with Yann LeCun. And these are really the granddaddies of AI. Yann LeCun is at Meta right now, aka Facebook, really, really knowledgeable people. The Turing Award is kind of like the Pulitzer Prize. It’s like the top, top award that you can find in computer science. And when these guys tell me that it’s time to take a pause, I pay attention. I’m like, what do you see that I don’t see? Well, the world didn’t really pay attention. In fact, that six months is up. In September, right now, it’s been six months and we haven’t taken a pause. We’ve accelerated, right? We’ve got more and more products coming out. And so the dilemma remains, AI in the right hands can be beneficial to all of us. It’s an amazing tool, and it’s on us to use it well, because in the wrong hands, it can really be dangerous. Hmm. And so for those of you who don’t pay attention to such things, I decided to make it my business and look at some of the regulations coming out. Because ultimately, if we ourselves are not able to regulate our own use of the technology, because we like playing with it so much, it’s so interesting. We never know what it’s gonna spit out, or what it’s gonna do, or how it’s gonna help us. And our organizations are like 50-50. Oh, ban it, no, don’t ban it. Ban it, don’t ban it. Then governments start getting involved. And when governments start getting involved, they start talking about regulation, especially when big tech says, please regulate us. 
We’re creating all this technology and we’re really not sure what it’s gonna do. I mean, have you seen these guys testify in front of the committees, in front of Senate? It’s astonishing. In one breath, they’re saying, please, please regulate us. And then they’re like, wait, wait, wait, let us tell you how you can regulate it so it’s good for all of us. Ultimately, the European Union decided they were gonna take a crack at it. Why? Probably because they have far less big tech coming out of the European Union, and because they really care about the way that it’s going to affect humans, specifically European humans. But they’re really kind of throwing a net on AI around the world, because this particular regulation affects any training data that also utilizes Europeans. And I don’t know, for those of you who use AI on a regular basis, whether you’re paying attention to where your training data is coming from, but a lot of it is coming from black boxes where you don’t know. And ultimately you might get fined whether you’re aware of it or not. And it’s a sad fact. So might as well inform yourself. This is important stuff. So the AIA, the Artificial Intelligence Act coming out of the EU, is supposed to be coming down by the end of this year, and it defines four categories of risk. And I’ll go through them with you very briefly, ’cause I don’t wanna bore you, I really don’t wanna bore you, but I do want you to be aware of what’s going on. So we’ve got a whole lot of uncertainty around it. But remember GDPR? Some of you, kind of, maybe? Yeah, oh yeah. I know, it was a headache. Well, it’s not going away. And I spoke to a lot of people in big tech over the last four years, but even recently, like in the last few weeks, and big tech is still running behind trying to catch up with GDPR. They are not ready for the AIA coming out of the EU at all. So let’s take a look at what that means. 
It’s essentially a risk-based approach to determining whether use cases of artificial intelligence are risky to humans or not, right? Yeah, that’s simple, just that. So if you look at what that means, it’s pretty straightforward, right? So we’ve got minimal risk, AI-enabled spam filters. We can all appreciate spam filters, especially if you get hundreds and hundreds of emails a day. Spam filters are our friends. Limited risk, chatbots, but only the ones that actually tell you that they’re a chatbot. You know when you’re actually, you think that you’re in customer service and you’re talking to a human, and then you realize, oh no, it’s a chatbot, and I’m asking it about its day! Yes, that happens. So they have to tell us that they are chatbots, right, under the EU. And then the high risk, predictive analytics, self-driving cars, we’ve seen why. And then unacceptable risk, right, for example, for those of you who have children, can you imagine your toys leading your children to have dangerous behavior, or collecting data from your child to then be used against your child or against you and your family? These are horrible, horrible outcomes. So we’re trying to avoid that. That’s why some regulation is necessary, unless we decide to take self-regulation seriously. So I mentioned my dissertation earlier, and oh my gosh, people rolled their eyes when I told them what my dissertation was. The limits and possibilities of self-regulation in artificial intelligence. I mean, God, even my thesis advisor’s like, why would you pick such a boring topic? I’m like, ’cause I care. Well, I’m glad I care. I’m very glad I care, in fact, because strangely enough, all of this stuff happens to be coming down the pipeline just as the book is being published. And that’s a good thing, I think, for all of us. 
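The four risk tiers described above — minimal, limited, high, unacceptable — amount to a mapping from use case to category. Here is a minimal sketch using only the example use cases named in the talk; the dictionary, the function name, and the keyword matching are hypothetical illustrations, and the real Act defines these categories in legal detail:

```python
# Sketch of the EU AI Act's four-tier, risk-based approach,
# populated with the example use cases mentioned in the talk.
# Illustrative only; not legal guidance.

AIA_RISK_TIERS = {
    "minimal": ["AI-enabled spam filters"],
    "limited": ["chatbots (must disclose that they are chatbots)"],
    "high": ["predictive analytics", "self-driving cars"],
    "unacceptable": ["toys that steer children toward dangerous behavior"],
}

def risk_tier(use_case: str) -> str:
    """Look up which tier a named use case falls into."""
    for tier, examples in AIA_RISK_TIERS.items():
        if any(use_case in example for example in examples):
            return tier
    return "unclassified"  # most real systems need case-by-case analysis

print(risk_tier("self-driving cars"))  # high
```

The fall-through to "unclassified" is the honest part of the sketch: the Act's tiers are defined by legal criteria, not keyword lookup, and most deployments will need a genuine risk assessment rather than a table.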
So the reason why self-regulation doesn’t work, and I’m just gonna give you the really quick nuts and bolts about this, is because no matter how good a founder is at wanting to do the right thing, remember Google and “Don’t be evil”? I think all of us as founders feel that way. We see this technology as a possibility for all humans. We wanna do good with it. We want it to grow and proliferate. And then you’re asked to scale, and then you have shareholders and a board, and then you have competition, and then you’re asked to innovate more. And then you’re asked, well, what about acceptable risk? Couldn’t we just try this other thing? And aren’t we sure that, wait a minute, it’s not gonna harm anybody, just give it a shot. And slowly but surely that self-regulation window gets smaller and smaller and smaller the more you scale. And this is why we see, of course, now three major companies in all the world who are vying for the number one position in cloud AI, only three. And they’re all based right here in the United States. So when you see returns of 70% year over year and no regulation, it’s the wild west. We can do anything we want, which is again, why I’m asking us to do a calculus of intentional risk. One of the reasons why is that question of bias. Remember when I told you that I came to Arizona 10 years ago and that nobody believed that I was the keynote, that I was like the rapper’s wife? I didn’t even have a name, I was the rapper’s wife. That’s bias. And it’s not bias based on anything I said or anything I did. It’s just the way that I appeared and the way that people perceived a keynote speaker should look. And that’s kind of creepy and it’s sad, but we all have human biases, and I certainly have my fair share of negative biases. 
Now, part of the issue is that we embed our negative biases, as well as our confirmation bias, into our technology, and sadly, we do this on an ongoing basis, whether we’re aware of it or not. Computer vision is a really great example of that. There’s some really great work done by people like Timnit Gebru and Joy Buolamwini. Both of them worked on this particular project, which you can read all about, because I think both Timnit and Joy are gonna be releasing books in the next two months that really dig into their experiences with gender bias. But Joy’s story really, really got my attention. This woman was doing robotics at MIT, she’s programming robots. And she literally had to put on a white mask for her robot to recognize her. Why? Because computer vision was more tuned towards people with lighter skin and more towards the male gender. Now, because of her work and her research, computer vision has gotten much, much better at identifying people of color, younger people, older people, vulnerable populations. But we still have a whole lot of work to do. And now comes the big sad story. This story freaks me out. I can’t even believe that this happened. So one of the things that we all wanna do when we hear about bad stories, bad outcomes with artificial intelligence, is to be able to hold the stakeholders accountable. And when I mentioned that horrible accident that happened in Tempe, Arizona in 2018, the operator of the vehicle was the one who ended up facing criminal charges, right? Often it’s the operators that are held responsible for AI going haywire. Well, stakeholders aren’t just operators. Stakeholders are the people that actually deploy the technology in the first place and decide that that technology is the most appropriate technology for whatever we’re trying to achieve. And in the case of the Netherlands, I don’t know if you heard about this story. They had this whole system that was designed to try to weed out those who were cheating the system. 
And the system decided to go after immigrants. Now, it didn’t just do this alone, it was coded to do this by humans. It didn’t code itself. We coded the system to go after immigrants because, according to all of the negative bias and all of the negative feedback, and of course, old, old data, these seemed to be the people that were cheating the system. And as a result, thousands and thousands and thousands of immigrants were stopped. They stopped getting their benefits, their children were taken away from them. In fact, the government was so embarrassed that, as soon as the public learned about this story, they literally quit. Of course, they got reelected just a few months later. To this day, thousands of children in the Netherlands are still separated from their parents because of a negatively biased system. And no stakeholder has really taken any kind of accountability. These are the kinds of things that happen all the time. So one of the reasons why my co-author and I decided to write this book, The AI Dilemma, is because we realize, just like you I hope, that this technology is still very much in its infancy. And just like, you know, the millions of mothers in the ’60s and ’70s that turned towards Dr. Spock’s book to try to figure out how to best raise a child, right? What are the best practices? What are the most important keys to be a responsible parent? We thought, well, AI is currently in its infancy. Can we write a book that would give some principles to help shape this technology so that it works for us and not against us? And I hope that we’ve succeeded. One of the key elements that came out of our research, and I’m so excited to share this because I think that this fits beautifully with Attune’s audience and everybody that I’ve had a chance to speak to since I got here, is this idea of creative friction. And you don’t have to be a technologist to do it. All of us get to do it. 
And it goes back to what Poet was talking about when he talked about connections. So let me back up. When I was at Columbia, I studied under Professor David Stark. And Professor Stark went down to the New York Stock Exchange. He wanted to understand how teams became more productive. And so what he observed and then systematically studied is that even though for most of us, we look at, you know, Wall Street, and everybody kind of sort of looks the same or thinks the same, turns out that the more diverse the teams, and I’m not just talking about race, I’m not just talking about gender, I’m talking about culture. I’m talking about education, background, neurodiversity, all of it. The more diverse the teams, the longer it took to deliberate, right? ‘Cause you go back and forth and you’re struggling, no, but what about this? Yeah, but what about that, right? People struggle. But when they finished deliberating, their outcomes were more productive, they made more money. Better outcomes for more people. Now, you might think that that’s something that you’ll see on Wall Street and just on Wall Street. No, Professor Stark then went to the Tokyo Stock Exchange. And again, the same thing. The more diverse the teams, the longer the deliberation, the more frustrating it is, the more creative friction there is, but the better the outcomes. Now, we get to you. This is where you, all of you get to participate because this is an ongoing conversation. This is not a one shot deal conversation where I’m gonna present you with all the facts, and that’s it, you get to make up your mind. No, the technology’s evolving way too fast for all of that. So let’s go into The 4 Logics of Power, which are essentially the Logics of Power around artificial intelligence as it is right now. So you’ve got the Corporate Logic, and most of you work for large corporations. You know exactly what that Corporate Logic is about, so I don’t have to explain it to you. 
Then you’ve got the Engineering Logic, right? The Engineering Logic, you want this to be efficient, simple, straightforward. You want the technology often to solve human problems, which in many cases it can’t, but that’s what you’re looking for, the simplest solution. I sure get the Engineering Logic. The Social Justice Logic. What does humanity need, right? And as a woman of color, I think about that a lot. What do the most vulnerable populations on the planet need? Because don’t forget, technology isn’t just affecting us. It’s also affecting people all over the world, in many cases who aren’t even connected to the internet yet. And as they come online, more and more of these technologies are going to affect them directly and they won’t even know it. So it’s up to us in GA countries, the ones that get to play with it first, that get to experiment, the ones that get to ask better questions of our technology, to make it better for more people. Use that creative friction in the room. And of course, the Government Logic. If we as individuals, as corporations, as groups and communities do not figure it out, we will be regulated. The government will tell us what to do. And as we’re deliberating, as we’re figuring out through all the conversations that are necessary, all of this creative friction, try to keep in mind that you’re not just a corporate person and you’re not just a social justice person. You might wear multiple hats simultaneously. And that’s the case for most of us. None of us are stuck in a box unless we decide to put ourselves there. So as you’re thinking about how to use this technology responsibly for yourself, for your family, for your organization, for the world, try to think in view of all of these different hats that we all wear, not just from one simple perspective. The more we can think about this as a whole, the better the outcomes are gonna be for more people. Again, the dilemma is very simple.
In the right hands, it can be beneficial to all. And in the wrong hands, it could be incredibly, incredibly harmful. So I’m gonna go back to the BCG study because I do think that there is much to learn from things that are already happening right now. The BCG consultants found two uses of AI that were the most productive, and I’m not just talking about generative AI, I’m talking about all kinds of AI within the organization. The first is the boring stuff, the rote tasks, the stuff that nobody really wants to do, but it happens to be part of your job, and that’s what allows you to get to the good stuff. You can automate that. Not always, you have to experiment, do it wisely. But in some cases you can absolutely automate that. The second thing, or the second group I should say, used AI in a very, very different way. For them it was about integrating AI into the entire task flow. Now remember, in this particular case, that means that AI is your work companion. But in some cases, AI is your competition. In some other cases, AI is your boss, right? AI is an agent that can work with you or work against you if you don’t get a handle on it now, which is why we have the opportunity to experiment. Now, I’ve mentioned several times the calculus of intentional risk. And this is your takeaway. This is the thing that I’m hoping that you’re all gonna go into the lobby and do at some point. Maybe over cocktails, maybe as you’re engaging on your way home, maybe when you get back home. But ultimately ask yourself, what does it cost for you, for your organization, to use this technology? Does it cost your reputation if things go wrong? If people get harmed, what about the insurance, are you covered? If there are lawsuits? ‘Cause there are a lot of lawsuits out there, again, have you checked your policy?
It’s amazing to see how many people didn’t realize during the pandemic that there was a clause in most life insurance policies: if you died during the pandemic, your life insurance didn’t cover it. So check your insurance policies. And on the other end, how much will it cost you not to use AI if your competition is using it, right? And different companies, different people will have different answers to that. And finally, when it comes to AI governance, and I challenge all of us to do this, again, at the individual level as well as at the organizational level, ask yourself: what are you not willing to do to make money? What are you not willing to do? It’s not an easy question, or is it, right? I’m not willing to kill people, I’m not. I’m not willing to maim people. I’m not willing to steal from people. I asked Nicole, the CEO of Firstup, and she told me right off the bat, integrity is key. Yes. For some of us, it just comes naturally. For others, we need to surround ourselves with others and get that critical thinking going. But asking yourself what you are not willing to do is a great beginning of a conversation. And if you don’t have AI governance within your organization, that’s also a pretty important thing to determine first off. In fact, I would challenge each one of you to think about that. Does your organization have a stance on what it is not willing to do to make a buck? And if you’ve asked yourself that question, what are you not willing to do to make a buck? Do those two things align? Does your natural inclination match your organization’s? If it does, it sounds like you’re in exactly the right spot. And if you’re not, a lot of people are hiring out there. Now, I say this because so many people come up to me, you know, at conferences and elsewhere, and they’re like, oh my gosh, there are so many new jobs out there.
Yes, there are prompt engineering jobs, and there are psychology jobs that are really emerging to help us transition into this era of artificial intelligence. But there are also AI law jobs, and there’s HR in AI. I’ve seen ChatGPT front-runners in media, in HR, in technology, in all kinds of different industries. So if you are not happy with where you are, that’s the other thing that AI allows us to do: ask better questions, not only of our technology, but of ourselves. And on that note, I will leave you, and I encourage all of you, if you have more questions or you wanna converse, reach out to me. We have an AI advisory, and I would love to meet you face to face. Thank you so much for your time. Thank you.