Status Go: Ep. 208 – AI & ChatGPT, A Cautionary Tale

On this episode of Status Go, we talk to Angie Raymond, a mental health therapist turned lawyer turned law professor who brings unique insights into international commercial law. We explore the concerns about the ethical implications of emerging technologies like AI and GPT and how they have the potential to manipulate human behavior. We discuss the importance of accountability and best practices in the tech industry and the role of educational institutions in teaching digital ethics. We also touch on the growing field of cybersecurity and the challenges of managing data in a business environment. Lastly, we consider who is making the rules and what the implications are for businesses and educational systems. Tune in for these compelling insights and conversations.

About Dr. Angie Raymond

Dr. Anjanette (Angie) Raymond is the Director of the Program on Data Management and Information Governance at the Ostrom Workshop, an Associate Professor in the Department of Business Law and Ethics at the Kelley School of Business, Indiana University, and an Adjunct Associate Professor of Law at Maurer Law School (Indiana). She recently completed her PhD at the Centre for Commercial Law Studies, Queen Mary, University of London, where she researched the creation of policy to assist in Managing Bias, Partiality, and Dependence in Online Justice Environments. Angie has written widely in the areas of online dispute resolution, data governance, artificial intelligence governance, privacy, international finance, and commercial dispute resolution. Angie is currently one of the US National Consultant delegates to UNCITRAL reporting on electronic commerce related issues, has previously attended the United Nations Commission on International Trade Law Online Dispute Resolution Working Group as a Non-Governmental Organization delegate (Institute of International Commercial Law (IICL)), and is an identified expert in Online Dispute Resolution (ODR) at Asia-Pacific Economic Cooperation (APEC), where she is leading a pilot project on cross-border ODR.

Episode Highlights:

[00:00:43]: Introductions and context

[00:02:34]: Angie Raymond’s career journey

[00:04:33]: Path to teaching

[00:05:39]: Excitement about AI

[00:07:29]: The timing of the release of ChatGPT

[00:11:49]: Is ChatGPT going to change the world?

[00:13:33]: Things to consider in the workplace

[00:17:19]: Cybersecurity and generative AI

[00:20:52]: Should we have to disclose when we have used AI?

[00:25:48]: Who gets to decide?

[00:27:14]: Those that don’t play by the rules

[00:29:07]: The future of generative AI

[00:34:40]: Slowing down innovation

[00:35:30]: Call to action

[00:36:25]: Close

Episode Transcript:

Jeff Ton [00:00:43]:

ChatGPT and AI are in the news almost everywhere you look. It has not only been a notable technological advancement, but it has been a social phenomenon of sorts. Several weeks ago, I was invited to participate in a panel discussion about AI. The audience was a group of judges and attorneys from around the state of Indiana. Our guest today was one of my co-panelists, Angie Raymond. Angie is an associate professor in the Department of Business Law and Ethics at Indiana University. Go Hoosiers! As we both geeked out on the potential of generative AI, we almost naturally fell into this point-counterpoint style of discussion, though that wasn't our intent. Today we are going to talk about some of the legal implications of AI, at least some of the things that we should be considering as leaders, and especially technology leaders. So Angie, with that, welcome to Status Go.

Angie Raymond [00:01:50]:

Thank you, Jeff. It’s wonderful to be here, and thanks for the opportunity to chat with you. I also enjoyed the panel a lot.

Jeff Ton [00:01:56]:

It was a lot of fun, and we got into some great discussion, and the people in the room were joining in as well. So, listeners, if you want to join in on this conversation, be sure to comment on LinkedIn or send a comment to the show. We would love to hear your thoughts. Before we really dive into AI, Angie, I'd love for you to share a little bit about your career journey with our listeners. I think knowing your background a little bit helps set the stage. So, tell us a little bit about how you got here today.

Angie Raymond [00:02:34]:

Absolutely. So I'm like a lot of people: when you're a little bit older, you often don't have a straight sidewalk path to your career. I actually started in psychology and sociology way back when, and for a considerable period of time I was a mental health therapist. During that time, we were doing some program design around group and individual therapy for a particular group of individuals, and, as happens to so many people in that area, unfortunately, you do sort of get burnt out. So, I was looking for something different, and I went away to law school and quite honestly have never looked back on that decision. So, I sort of laugh and say, in another life, I was a mental health therapist, but I certainly draw upon that in so many different ways. I went off to law school, did all that type of stuff, and discovered that there are just so many opportunities in the business world to bring over some of the skills as they relate to mental health, law, and policy. And so I got completely enamored with international commercial law, which was using tons of tech. Right? These were the early adopters of e-commerce. I'm going to date myself again, but that's when I was at law school, and I was fortunate enough to have a mentor who was doing all the e-commerce stuff. And it sparked such an interest because, of course, as someone who was a therapist, you look at that and think, yeah, I can make that button really fancy so you click on it a lot.

We end up with this sort of moment in time where I was just incredibly fortunate to bring those two worlds together and have continued to just enjoy every day in this area over and over.

Jeff Ton [00:04:33]:

What led you to teaching, and teaching business law and business ethics? How did you go down that path?

Angie Raymond [00:04:42]:

Sure. I had always been a teacher type, which was a bit interesting. When I went off to law school, I really never intended to truly practice law. I knew I was probably going to do policy or bring things together. And then you start teaching and you realize, wow, I, as an individual, can do one thing or maybe three things. Let’s pretend like I have nothing but spare time and I can do five things. I can take a group of people who are ridiculously excited sometimes and just sort of spread the exposure and the experience and give them the ability to think through things. And especially when we started talking about technology and stuff, I pretty quickly realized I should be a vote of one. I have an opinion, but we have to include more people in these conversations to decide how this should work and how it will be successful. And so teaching just sort of came naturally from that point.

Jeff Ton [00:05:39]:

That’s excellent. What’s driving your passion and your excitement about AI?

Angie Raymond [00:05:49]:

That's a great question. I think it's an interesting moment in time, right? And I've been fortunate, right? I went through the e-commerce revolution, where we were sort of like, we can have a button that you click, look how easy we can make it. It's almost like candy in the cereal aisle, which is done on purpose, for those of you who are parents. Done on purpose. They're at the checkout and in the cereal aisle on purpose, as toys used to be. And understanding that kind of stuff is super interesting. Then you start talking about technology and AI and you're like, wow, there are toys and candy in the cereal aisle on purpose. Do you know the colorful boxes are placed at the level of a child sitting in the cart? Now we are able to do that at such a wider reach, but we have all the research from the physical world, so moving into those conversations is just so cool. And then you realize sooner or later the AI is maybe going to start thinking against you. So, it's just fun. And ethics is just the way we frame those conversations, in my opinion. It's a conversation about what we should be doing, which is an incredible oversimplification of ethics. So hat tip to all the true ethicists in the world whom I've just completely offended. But in terms of technology, that is the term we use when we're thinking about design and what should be there.

Jeff Ton [00:07:24]:

Yeah, just because we can do something, should we do it?

Jeff Ton [00:07:29]:

So, Angie, I want to take us back in time to a couple of weeks ago, when we were having that panel discussion. I don't remember if it was the first thing that you stated, but as we were talking, we got on the subject specifically of ChatGPT.

Jeff Ton [00:07:50]:

So for our listeners, by now you're probably familiar with ChatGPT. It is a type of generative AI, conversational AI, still classified in that general category of AI. But Angie, somewhere at the beginning of what you were talking about, you talked about the timing of its release.

Jeff Ton [00:08:18]:

Can you take us back to that and just share your thoughts on the timing of the general release of ChatGPT?

Angie Raymond [00:08:28]:

Sure. Well, first, like for so many people, this was a magical moment, right? Anyone who does technology immediately wants to log in and play with it. So you do have this magical moment, because I think a lot of us who've been doing tech for a while were both excited and skeptical (still skeptical, by the way) of the messaging that's used around it and the abilities that are claimed for it. But what I was pushing on in that panel was how we need to think a little bit better, in my opinion, about whether we regulate, or whether things like best practices will work in the tech industry. And what I was talking about at that point was the timing of the release, which, hopefully everyone remembers (at least our US listeners will understand), was around Thanksgiving, sort of November, December. Quite frankly, in the global community, you'd almost have to be asleep under a rock not to recognize that nearly every educational system globally is coming to the end of a semester or a term. And releasing something like this, where you can ask, "in 500 words, tell me about George Washington's first years," and have it answer, is, absolutely, in my opinion, a demonstration of the hubris of technology. Right? Releasing something like that on the world right before the end of the semester, when teachers from third grade all the way through university have designed a syllabus to progress someone to either take a short essay exam or write a short paper. This is built over a period of time, and then all of a sudden you release something that can basically write the answer for you, without any way of us recognizing that, hey, it makes up stuff. Hey, it cites made-up things. Just releasing that on the world at that time is absolutely unacceptable. And it is why I talk so much about ethics. It's the perfect example. Is ChatGPT going to change the world? I don't know. We can talk about that in a bit.
But the timing of its release was absolutely rubbish, and someone should be holding people accountable for the choices that people make. Not thinking through consequences when you push the button and suddenly something's widely accessible: we just can't continue to behave this way.

Jeff Ton [00:11:15]:

Well, I'll use the word controls, and I don't mean it in a Big Brotherish style, but it was released without any controls or guidelines around it, right?

Angie Raymond [00:11:26]:

You’ve got a heads-up.

Jeff Ton [00:11:28]:

Yeah. Here's this third grade teacher who may have never even heard of it, and all of a sudden she's reading essays about George Washington from her third graders that were written by ChatGPT. Right. I mean, it's crazy because you're just totally unprepared, right?

Jeff Ton [00:11:49]:

Is ChatGPT going to change the world? Let's go there.

Angie Raymond [00:11:55]:

It's going to be an interesting conversation. So, in some ways, maybe, right? I think, as usual, we're going to have the hype curve, and it's going to do all these things. I've already heard from my friends who truly do tech, which I don't (I do policy and governance), but from the people who actually sit down and do some coding: turns out it's pretty doggone good at some of that stuff. You want to talk about changing the world? Understand, we were already moving toward no-code type movements, but now, if you add this power to it, is it going to change the world? It might open doors for a lot more people to have access to that type of work. In addition, it's going to change how we teach people to engage with technology. There were pushes for a long period of time that everyone should learn how to code. Maybe not at all at this point.

Angie Raymond [00:12:48]:

And so what I'm hearing is that the people who are better prepared to enter those environments now have a little bit of coding but understand processes and engagement and things, which is a very different way of engaging with the development of technology. And very quickly, somebody (not me) is going to have to ramp up how we prepare students and how we think about this, completely. And that's just one small example, right? But I do think it's a really good example of, yeah, there are segments of this that will absolutely change the way we behave. And when you're talking about education, that is in some ways a little piece of changing the world.

Jeff Ton [00:13:33]:

Absolutely. Yeah. I can see classes in how to write prompts, because of the way you ask. And I keep using ChatGPT, but there are others out there. It's not just ChatGPT, but this generative AI generally, where it's looking at millions and millions and millions of pieces of data and creating content, basically. Classes in how to prompt it, because you can guide it properly. I'll use the term conversation, and I'll use air quotes that our listeners can't see. You can guide it in a conversation and it builds. Right. It'll build on that.

Jeff Ton [00:14:19]:

So, when you think about it, a lot of our listeners are in business; they're probably technology professionals, decision makers, a lot of them. What are some things that they need to be thinking about with this generative AI and bringing it into the workplace?

Angie Raymond [00:14:44]:

I think this may be a cautionary tale. So, I'll do the cautionary side and then we'll go super positive. So, remember that these systems, and again, it's not ChatGPT alone, right? Microsoft has launched something; I think Google has as well. It's pretty easy to think, wow, this could basically eliminate these two jobs, right? Or it can answer all my emails, or fill in the blank. The problem is, to do that well, it's lurking around in your system. We do have to understand that there's a reason it wasn't "connected" (I did air quotes too); they wanted to create that barrier, so there was a moment of pause between them. And if you let this loose in an environment where you have some stuff that maybe you don't want everyone to know, you need to give thought to not just how it's learning (let's pretend like it learns) but where that learning is shared. And again, you can silo this, right? You can do it. But if you've got just one entrepreneurial employee who's like, look what I can do, it's too late. It's already in your system. And it's crawling around almost like a virus.

Sort of waiting to have its moment of enlightenment. And all of a sudden, you're like, oh, my God, did you just write this awesome email to Dr. So-and-so? But it's got patient information in it. That's a great email that we can't send this way, right? That, I think, needs to be the cautionary tale. You've got to think about what's in your system, what you give it access to, and then how often you might push that information out, even through something as simple as email, which, of course, is massive. I hate the phrase HIPAA, because it seems like I spend most of my life answering HIPAA questions. But in the example I just used, have we just violated federal law? And there's student records and employment and all that type of thing.

Jeff Ton [00:17:19]:

Angie, I want to get back to our conversation. We talk a lot in the cybersecurity space about knowing your data and knowing where it is, and protecting the crown jewels. Yet here’s something we could unleash unintentionally.

Because we don't know where that data is. And all of a sudden, we've opened up Pandora's box. We spent all this time trying to protect ourselves from the outside, and it's now in.

Angie Raymond [00:18:01]:

That's right. And of course, it turns out we've already seen a couple of instances of this pushed out in the news. And my guess is you have well-educated tech people who are listening to this. So, you're going to immediately tell me, yeah, we can silo this much better than that. We can do much better than that. Of course you can. That's the key, right? Who's designing how you're going to use this? And if it's an entrepreneurial employee who's like, look, I'm tired of writing these emails, I'm just going to automate it, that's the moment in time where you have to have the right people in the room having the conversations of how you create what it has access to, where that information can be pushed out, and things like that.

Jeff Ton [00:18:49]:

Because, through a prompt, you can feed it any information that you have access to as an employee. Right. I can do a copy and paste from this file record and say, hey, write me an email that talks about this.

Angie Raymond [00:19:05]:

That's right. Yeah. That's my understanding of it, again. But you could silo it. You can control the data that you give it, and that's sort of the key. It's not just a cybersecurity issue. It's also, well, it didn't do that for me. Well, are you still connected to what I call the mothership, right? Are you training a larger system, or have you deployed something in just your business environment, so that now it's only going to be used on the data that you've given it permission to use and only used in your environment? So you can also keep a bit of an eye on it and all of that stuff. But people forget about this, right? It's hard, right? You get employees who are like, whoa, this is great, because I'm really sick of writing this email 35 times a day, all respect, but the question is how to manage that. And luckily, I will say, I don't do cybersecurity in great detail, but I certainly have dear friends who do. Cybersecurity has grown up as a profession, thank goodness.

They know how to manage people behaving, quite frankly, in ways where there are times I'm like, I don't get it. At what point do we have an employee who's clicked on the link so many times that we just kick them off the island? And the cybersecurity professionals are always like, yeah, that's not how we function. Thank goodness you're a profession, because I am the type who's like, yeah, no more credentials.

And so they'll be able to handle this. But, of course, it's yet another potential problem, one that's even more difficult to wrangle than clicking on links, which we're better at, right?

Jeff Ton [00:20:52]:

What about having to disclose that you've used it in business or in school? Right? Hey, I used this. I think back to, I forget what school it was, but it was, unfortunately, after one of the school shootings, and they sent the email out, and it was generated by ChatGPT. Do you think we'll get to the point where you've got to disclose that, hey, I used AI to do this work?

Angie Raymond [00:21:29]:

So, my personal opinion is, this is just going to become part of the way we engage with the world. I was laughing with a group of people, and I said, I'm one of those people who was in school when calculators came out. And you had the faculty member, the teacher at the time, who would be like, no calculators, you've got to know how to do this. What, are you going to carry around a calculator everywhere?

Jeff Ton [00:21:57]:

Yeah, it’s my phone.

Angie Raymond [00:22:01]:

I always wonder if those teachers would have taught slightly different if they were like, okay, you don’t have to have this memorized, but what you have to understand is the way the numbers work together. Right?

It's not really just about addition; it's about understanding that through this complex understanding of the world, we end up with this number. And there are countries that teach that way, by the way. Because it happened so fast, I told my students, you have to tell me. Quite honestly, because I was really worried that a lot of places don't have rules set up to answer that question. I said, look, I sort of expect you're going to use this, but you need to write it, because, as I said, here at IU it doesn't clearly fall within the plagiarism policy, but it falls within the academic dishonesty policy. And that is a much bigger deal. Right? As an instructor, I can react to plagiarism: I caught it, it's a couple of paragraphs, I'm taking off some points, those types of things. Academic dishonesty is a massive thing, and it applies to you not actually doing academic work that you submitted. And now this probably qualifies. Right? It's interesting. So I told them, look, I actually expect you to use it. I was teaching a, you know, digital ethics class.

Jeff Ton [00:23:31]:

That’s kind of ironic.

Angie Raymond [00:23:33]:

Yeah. So, I'm like, I'm disappointed if you're not thinking about how to use this, but in footnote number one, you have to tell me. And by the way, you're responsible for all the citations and everything it writes, so don't just not look at it. Right? I'm expecting you to be the critical analyst and the one who's submitting something intelligent. But I would be sad if you weren't using this, which I think the students were all like, really? I'm like, no, really, but you've got to put a footnote that you did it this way. Yeah, but there are tons of people who aren't comfortable with that. And when I say it, even with my colleagues, they're like, really? And I'm like, yeah, calculator again. And I'm still to this day angry that I couldn't use a calculator.

Jeff Ton [00:24:18]:

It’s coming through. I feel the anger.

Angie Raymond [00:24:20]:

Right? You’re just sort of like, really? Because yes, I do.

Jeff Ton [00:24:22]:

I wonder if they felt that way when the slide rule came out.

Angie Raymond [00:24:27]:

Right. So, we've dealt with new technology before, and some people would call this transformative. I think it is going to remove massive time barriers and change the way we engage. I also think it'll be interesting. I mean, remember, there are a lot of people who have various types of disabilities for whom writing is not the easiest thing.

So, imagine what load you take off, just in terms of time and energy lost, for an individual who struggles to sit down to type up paragraphs, or to consume information and write two or three summary sentences. We've just changed their lives, in theory. And this opens up so many possibilities that I would hate to see an overreaction of "this absolutely should never be used." That seems awfully silly. We just need to put a footnote here: this is how I did this. Of course, I'm responsible for it as an academic paper or a white paper at a company or whatever, so I'm not going to let it write it all on its own and not have a look at it. But please understand the time and energy, and how much of the world we've opened up to some individuals now, through the use of this.

Jeff Ton [00:25:48]:

Well, it is. It's something that I know you talk a lot about: appropriate use and appropriate development, which gets into what is appropriate and who gets to decide, right?

Angie Raymond [00:26:01]:

So, my favorite thing to tell my students is, see these four walls? Guess who gets to make the decisions here. And I do that as a lesson, because this is actually the question. This is actually the real question. Who's making the rules? Who gave you the power to make these rules? Who the heck are they? Right? This is 101, quite honestly, which comes across a lot better when you're just like, hey, here are four walls; please understand, I make the rules in this room, with these ultra uber lords who give me some general guidelines for the most part. But really it's up to me. Now, let's talk about who's making the rules, who gets to decide, how do I complain, how do I take on those rules? Right? And those prompt some pretty cool conversations amongst the students. And I think we're exactly there right now. I'd be shocked if most educational systems aren't having these conversations and most businesses aren't thinking this through. This is big! But long term, the question will be, okay, who's going to make these rules, and what are the rules going to be?

Jeff Ton [00:27:14]:

Then you’ve always got the other side that not everybody plays by the rules.

Angie Raymond [00:27:22]:

Yeah. So I'm a pretty big fan of not designing policy and rules for the people who enjoy finding the gaps and loopholes in the rules. But, yeah, I take your point. Right, there is sort of that moment of, what do we do? We certainly face that as educational institutions, right? We know, unfortunately, the numbers are pretty sad when it comes to how many students, at least one time during their educational fill-in-the-blank (high school, college, whatever), have cheated. We know these numbers are bad. And one of the things I will say, since we're talking about this type of stuff, is, do understand that one of the things that mitigates some of the cheating that occurs is giving thoughtfulness to the expectations we place on individuals, which includes time limitations and things like that, and not creating unrealistic expectations for how much an individual should be expected to do in a period of time. Being better able to understand that, I think, lessens the load on individuals to have to make every deadline. So that's sort of my pushback. But if it's, how am I ever going to do all this, now I need to change the world, and can't we all have this due on Friday? Of course someone's going to find the shortcut. I'm never sure why that surprises people. We've got lots of years of behavioral research that says, in general, human beings do their best to try to accomplish what is expected of them. But yeah, you find shortcuts when you fall short.

Jeff Ton [00:29:07]:

Well, Angie, I’d like you to drag out your crystal ball here and talk a little bit about where you see the future and maybe, as you’re thinking about that, two dimensions, the near future, so call it the next year and then three or four years down the road. Where are we going with this? What do you see? Some of the potential and some of the conversations that we need to have.

Angie Raymond [00:29:38]:

Yeah, so, near crystal ball: I think we're in for a period of great uncertainty, which, again, as a former mental health therapist, really concerns me. We're already coming out of a time when we were experiencing great uncertainty, and now having this level of new uncertainty is just really trauma-inducing, quite frankly. Depending on what your job is, right? Depending on, oh my gosh, do I have to figure this out this fast? By the way, I already had to put my classroom online a short time ago. Right. I actually do really worry, in some professions, about the number of times we expose people to great uncertainty, right?

What's the good side of it? Uncertainty oftentimes produces change, right? There's that moment where you just sort of decide to either give up or do the bootstrap thing, which is not the world's greatest way to say it, but it works. That moment is also oftentimes just this catalyst for something that could be incredible. And I think in the near future we are going to see this type of technology used a lot more. But the question is where, right? And how do we do it appropriately, and how do we give people the training and the skills to understand both how to use it and how to think through the appropriate uses of it?

Okay. Long term. Long term, I hate to be the pessimist, but it seems like today is my pessimist day. I'm a little bit worried. Take the example that I gave on the panel, and I think I even said it here: the timing of this release demonstrates the hubris of the community, right? Not even seeming to care. And even when people started shouting, they're like, oh, well. You're sort of like, okay, thanks for reminding me that you're the uber lord and I am really in trouble. So, I worry that if the tech industry as a collective doesn't get their proverbial stuff together, you're going to have regulation like you've never seen before, and that's not going to help anyone.

I like the fact that the technology industry can be entrepreneurial and come up with brilliant things. And the White House, today or yesterday, pushed out something saying that maybe all of these should be examined before they get released, if they're impactful. And I'm like, $20 says they would not have thought of ChatGPT as being impactful, so it wouldn't have met the standard. Right? We're going to see this really crazy attempt at regulation that is potentially going to cause us problems and do nothing but get in the way. The industry needs to become like cybersecurity professionals, in my opinion, right? They embrace the responsibility as a profession to behave in a professional way, and they all agree to do it. And it's sort of frightening how well it works.

I mean, you talk to the security individuals at your organization and they will probably tell you, yes, I could see where you're logging in from and stuff. I do that to ensure it's secured, and nothing else. There's their commitment to understanding the importance of infrastructure as a security measure and keeping the trains running, as they say, and all that type of stuff, without being snoops. Developing that as a profession has changed the trustworthiness around the amount of snooping that could be done but is simply not done.

And so I hold up, here at IU especially (we call it UITS), our individuals who keep the trains running. It's an amazing group of people who are so committed to not allowing snooping in that way. Hat tip to them. We need people to embrace professionalism and this commitment: we're going to set these rules, and they're not just rules that we're going to blow off.

Just keep in mind, ChatGPT had a pretty important funder who is someone who is always talking about trustworthiness and transparency; turns out, oops, right? Come on. You can't just tell us your profession and that you're going to live up to these best practices. When the rubber hits the road, you actually have to do it. And if this industry doesn't start doing it, you're going to end up like the banks or the automobile industry if you're not careful. Yeah, highly regulated, and both industries will tell you, yeah, it's a massive pain in the keister.

Jeff Ton [00:34:40]:

Yeah. Slows down innovation, right?

Angie Raymond [00:34:44]:

Someday I'll do my research a little bit better, but I'm sure the automobile industry was super excited to have to do seatbelts. I don't know about you, but I grew up in the back of a station wagon, me and five of my friends doing WrestleMania while we were on our way to Chicago to go to Great America, the big theme park.

The automobile industry was super on board with, hey, seatbelts. This is what happens when you're an industry that doesn't self-regulate: you get into this regulatory cycle of, look, we're just going to regulate you. Safety, speed limits, all kinds of things. I think if tech isn't careful, we could see some really crazy stuff done, and none of us will be better off for it, I fear.

Jeff Ton [00:35:30]:

Yeah, I think you're, unfortunately, probably correct. Angie, if you could tell our listeners, this group of IT professionals, one thing that they should go do tomorrow about generative AI because they listened to our conversation today, what's one action they should take?

Angie Raymond [00:35:55]:

Be sure people understand how they're using it in the context of whatever setting you're in. Simple, right? Even if it's a painful email or a video. You have people in your business, in your industry, wherever, who have played with this, and it could potentially be dangerous, quite frankly, if you put it in the wrong place or don't have it structured correctly. So be sure people understand this stuff, not the hype. Because right now, all we're hearing is the hype.

Jeff Ton [00:36:25]:

We've got to dig behind the hype. I love that. Angie, thank you so much for carving out time to talk with us today. I have really enjoyed all of our conversations.

Angie Raymond [00:36:37]:

Thank you.

Jeff Ton [00:36:37]:

I hope we continue to have them, Angie. I would sure love to have you back on the show in the future, talking about other aspects of technology, so hopefully you'd be open to that.

Angie Raymond [00:36:52]:

Absolutely. Thank you for having me. It's always a wonderful time to have a conversation with you and, of course, with our listeners.

Jeff Ton [00:36:59]:

If you want more information, if you have a question, or you want to learn more, visit us. If you want to go directly to the podcast, we need to simplify that URL, I think, so the show notes will provide links and contact information. This is Jeff Ton for Angie Raymond. Thank you very much for listening.