Status Go: Ep. 243 – To AI or Not to AI…That is the Question! | Aleta Jeffress & Alina Walters


Dive into an exhilarating exploration of the rapidly evolving world of artificial intelligence in this episode of “Status Go,” where host Jeff Ton delves into the critical debate of “To AI or Not to AI.” Expert guests Aleta Jeffress and Alina Walters navigate the intricacies and challenges of AI adoption, from ethical conundrums to security implications, discussing real-world applications in public safety and beyond. Tune in to hear about cutting-edge developments, empowering insights on responsible utilization, and compelling discussions on the intersection of efficiency and caution. Whether you’re AI-curious or steeped in the tech, this conversation will arm you with the tools and perspective to harness the transformative power of AI in the most responsible—and innovative—ways. Prepare for a thought-provoking journey at the frontier of humanity’s tech-enabled future.

About Aleta Jeffress

Aleta Jeffress is the Senior Vice President of Consulting Services and Denver Metro Lead for CGI. She has over 20 years of experience as a successful CIO, executive business leader, and technologist, building relationships between business and technology to enable digital transformation and market growth. She drives innovative strategies for business and IT leadership and has developed teams for Cybersecurity and Project Management Offices from the ground up. She currently drives business development, operations, and delivery for commercial and public sector organizations in Colorado and five other states. Her career began in startup software companies, where she started in a call center environment and moved through private and public sector organizations in the areas of software quality, product development, security, and ultimately leadership. Aleta is based just south of Denver, CO, is active in several community organizations, and enjoys being outside with her family.

About Alina Walters

Future-Ready Focused Leader, Strategic Influencer, Win-Win Transformation Seeker
With 20+ years of IT business management experience in both the private and public sectors, Alina is a proven leader focused on developing trusting relationships, empowering staff, and building a culture of respect, collaboration, and mutually rewarding successes. Alina began working with the City of Lakewood in 2014 as the IT Department’s Business Transformation Manager and was appointed to CIO in May 2021. She is passionate about people and technology and is committed to exploring and delivering future-ready solutions for the city that ultimately benefit the community. In her free time, you can find Alina exploring the great outdoors – usually on a “forced-family fun” adventure – with her husband, son, and adorably stubborn dog, Tazer.

Episode Highlights

[00:00:00] – Gear up for an electrifying debate on AI: To integrate or not to integrate, that is the question!

[00:00:37] – Status Go: Where technology meets tomorrow – join us on a journey to innovation!

[00:00:54] – Diving into the crucial debate of AI adoption and its ever-evolving landscape in today’s world!

[00:02:12] – Unveiling the complexities and the undeniable importance of responsible AI usage!

[00:03:19] – Organizations’ varied AI appetites: From data conundrums to ethical dilemmas!

[00:05:34] – Fear vs. foresight: Addressing AI adoption caution with proactive policy crafting!

[00:06:58] – Embracing efficiency: The public sector’s growing romance with artificial intelligence.

[00:09:11] – AI’s shining role in public safety and beyond: A look at the push for AI urgency!

[00:10:56] – Smarter living through AI: Personal AI applications that are transforming daily life!

[00:12:41] – Recommendations revealed: Your first steps into the world of AI made simple!

[00:13:34] – The verdict is in: The key to AI is responsible use. Join the charge!

[00:15:13] – Lifelong AI learners unite: Staying on top of AI trends with cutting-edge resources!

[00:16:51] – Building AI wisdom: Ethical debates and educational uses that shape our digital future!

[00:18:24] – Visionary AI advancements: From life-saving conversations to solving the unsolvable!

[00:20:36] – Navigating the AI liability labyrinth: The tricky territory of driverless cars and data reliability!

[00:24:14] – Social media and AI: Drawing parallels in the evolutionary tech timeline!

[00:26:42] – Empowering the world: AI’s incredible potential in enhancing accessibility for all!

[00:28:21] – Innovation imperative: Unleashing AI to deliver stellar services and disrupt the status quo!

[00:30:07] – Harmonizing efficiency and ethics: The delicate balance in the responsible AI journey!

[00:32:17] – Minds over machines: The irreplaceable need for human oversight in the AI arena!

[00:33:59] – Data, biases, and beyond: The critical care for AI systems that think before they act!

[00:35:39] – The pendulum of trust in AI: Will reliance sway as it has with cloud adoption?

[00:38:16] – Drafting the future: Pioneering an AI policy document for a tech-smart tomorrow.

Episode Transcript

Alina Walters [00:00:00]:

Everyone is unique right now. Everyone has a unique situation. It’s not cookie-cutter. And so, responsible use is finding out what it means for you.

Jeff Ton [00:00:35]:

Welcome to another episode of the Status Go podcast, where we navigate the ever-evolving landscape of technology and its impacts on our organizations. Today, we’re diving into the realm of artificial intelligence. It’s kind of been the rage for the last 12, 13, 14 months. But what we want to explore today is that age-old question with a twist: To AI or not to AI? That is the question. I’m your host, Jeff Ton. Joining me are two esteemed guests, Alina Walters, the Chief Information Officer of the City of Lakewood, Colorado, and Aleta Jeffress, Senior Vice President at CGI. They’ve been on the show before, and I know that we’re going to have a great conversation.

Anytime we get the three of us together, it seems like we could talk forever. But today, we’ll be unraveling the complexities of AI adoption and its usage, shedding light on how organizations can leverage this technology responsibly and effectively. In our discussion, we’ll be drawing on a couple of different things. One is that Aleta’s company, CGI, is holding an AI summit, and we’ll touch on some of the topics they’ll be talking about there. In addition to that, Alina has written a draft white paper policy document that she’s using there at Lakewood. And so we’ll explore both of those to see what elements you can take away as you begin to decide your AI future. So if you’re a technology professional navigating the waves of AI innovation, join us as we embark on this journey to demystify AI and uncover how it can shape the future of our organizations. It’s time to answer that question.

To AI or not to AI? Let’s find out together. Aleta, Alina, welcome back to Status Go.

Alina Walters [00:02:41]:

Thank you, Jeff. It’s so good to be back. Hi, Aleta. Thank you for having us once again. And I’m excited to talk about this. There’s so much to unravel. I love that you used the word demystify, because sometimes it seems like it’s a mystery every day, just because it’s changing so quickly. You don’t know what you’re in for day after day.

But I do believe we’re on the right track for making some headway with using this amazing technology responsibly.

Jeff Ton [00:03:14]:

Well, we’ve seen this in a lot of different technologies, and it seems like with AI, we’re seeing it more than we have in some other things. It’s almost A or B: A, I’m going to use it, I’m all in; or B, I’m not going to let anybody in our organization touch it.

Aleta, in your work, have you seen that, that it’s almost divided in half between the audiences that you work with?

Aleta Jeffress [00:03:47]:

I’ve seen quite a gamut of things, actually, in the last couple of months. I think that some of the organizations we’ve worked with are simply eager: they want to engage with it, they want to be able to use it, but they struggle either with the use cases, or, even if they know what they want to be able to do, they know that their data isn’t in a place to support that. And so then they become conflicted. Right? Now I have to spend time and energy to fix the data, which could be a bigger problem than maybe they even realized, but yet they want to be able to move forward with AI. So there are some small things. As we all know, you can use AI for just about anything anymore. Alina uses it for recipes.

I love that. But there are so many different facets of it. So do we start small, or do we fix a bigger problem? I think it depends on making sure that people know what they want the outcome to be and how they really want to go about that. And I think that’s an important question that some people embrace and some people are really still kind of afraid of.

Jeff Ton [00:04:53]:

Yeah, I think the fear, I guess, to use that word, is probably justified. This is kind of a new animal when you think about AI and its ability to go pretty nuts pretty quickly. But I also think that there are some that are not the early adopters, they’re more… I hate the term laggards, but that’s kind of how they’re defined in the hype cycle. Right? Laggards, where they’re waiting to see what happens. Alina, you’ve kind of taken the approach there at Lakewood of hitting it head on. Can you talk a little bit about your journey of discovering AI? Right, I’m going to use air quotes that no one can see. Discovering AI, and then trying to make sure that the city government and its agencies are using it in an appropriate manner.

Alina Walters [00:06:01]:

Yeah. The word journey is perfect, because I do believe everyone is on their own unique journey based on the organization they’re with, based on the type of work that they do, based on their risk tolerance level. And I think that’s what it comes down to. Right? So when we think about that hype cycle that you mentioned, the quote-unquote laggards, I see that as more conservative risk management. Right? And rightfully so, because there are so many hidden and really unexplored consequences.

There’s ethical use that you have to think about, intellectual property rights, data privacy and security, bias, fairness. There are things like social and psychological impact and a whole slew of other big words like that. So it’s a serious topic. I understand the conservative approach to it. Like you said, though, here at Lakewood, I believe we are poised to embrace it. It’s just that we want to do it in a responsible way. Right? So already, I know people are using it for many personal uses.

And personal can range from, as Aleta said, that banana bread recipe, which I highly recommend looking up because it turned out amazing, to using it to maybe take some research that you’re doing and asking for some key points out of it. Right? That can save you some time in distilling a lot of information down to key points. So people are already using it in that way. And then there’s the contributing-to-the-industry way, where agencies are taking AI and building it into their platforms so that the products they deliver leverage this amazing technology. Right? And you do have to be ready for that. Your data has to be ready for that.

And there are so many things to consider. So, back to your question: for us, what I’m eager about is building some guidelines that we can talk about at the city. Right now, they’re just in draft form. We’re still working on it, and I will be sharing them with our leadership team so that we can understand, talk about, and really see what is the pace of adoption that we want to make.

Jeff Ton [00:08:54]:

Aleta, what are you seeing? I know your focus is predominantly on the public sector, but I also think you get involved with some private companies as well. Are you seeing a difference in the level of innovation or the desire to really dive into this, to change a culture, change a company between public and private?

Aleta Jeffress [00:09:22]:

Not really. I think it really depends on the organization, and, as we’ve kind of talked about, what’s their appetite for AI. Obviously, commercial or private sector might be able to move a little more quickly, just depending on their structure and what they’re interested in. But then I do think there are some more challenges about how they use that, how quickly they use that, whether they deliver it as part of their offering. Right? As Alina mentioned, there are some security considerations, some ethical considerations, when you start to get into how you’re deploying that to your customer base, or maybe you’re building it into your IP solution. Right? There are a lot of different things to consider when you’re talking about generative AI. I think on the public sector side, a lot of it maybe is more around efficiency, because that’s typically what public sector is trying to do.

How can they deliver services more efficiently, more effectively? Maybe they’re not really building IP, but they can certainly look at their processes and determine how they can be more efficient, not only through automation, which I feel is sometimes that step one people need to consider when they’re considering AI, but truly, how are they going to use an AI platform that will help constituents be better served or have a better experience when dealing with their local government or local agencies. So I think it’s the level of innovation and just the appetite to move those things forward. Also, when CGI talks about this, we really help people break it down into a number of different thought processes. Right? So if you’re going to innovate, what are you really trying to do? What’s your vision for that, and how are you going to launch that? Where do you really want to be? That applies kind of across the board. And then, as I’m sure we’ll talk about, there are other iterations, right, as you start to move through that process. So, depending on how much you want to experiment with it, and then how much you’re really going to engineer it, and then, as you continue to go, usually once you do that and figure out what works for you and what your tolerance is, then you’re going to expand it. Right? So, a lot of e-words to help you as you’re using generative AI. But I think it depends on the leader and the level of innovation and how much they’re really willing to kind of push the envelope and move forward.

Jeff Ton [00:11:56]:

Well, for some reason, this technology, more than some of the others we’ve seen over the last decade or so, raises this question of responsible use. What is responsible use? How do you define that in your organizations, either one of you?

Alina Walters [00:12:19]:

I’m going to say that, from my perspective, you have to balance that goal of gained efficiencies and enhanced productivity with an over-reliance on these tools that can have really unpredicted outcomes and vulnerabilities. And so responsible use is making it work for your business. And this is what I believe Aleta was saying: what are some use cases that make sense? Start small. See what that looks like. Educate. Right? So you have to be in a continuous cycle of learning, envisioning, experimenting; learning, envisioning, experimenting, so that you can define your guardrails of what responsible is. And to Aleta’s point, that can vary on the risk tolerance that each organization has, based on their leadership, based on what they’re trying to accomplish with their product offerings, and based on the types of people that they employ, and all those things. Right? So, I don’t know… everyone is unique right now. Everyone has a unique situation.

It’s not cookie-cutter. And so responsible use is finding out what it means for you.


Aleta Jeffress [00:14:06]:

Well, and I think I would add to that: responsible use, I still think, has to come back to some of the human factor, because if you let AI kind of run on its own, the one thing that AI doesn’t always take into account is context. And so you still have to have the human part of it come back and check the context. So when Alina and I were talking about AI a week or so ago, right, the example she used was: you might give AI a picture of a brown cow in a field. Right? So everybody can picture that. Well, as AI iterates, or sometimes even erroneously iterates, over those types of things, what you might end up with is, like, a purse and some grass. So you have to still come back into it and say, is this still the right thing? I liken it back to the old, old days, right? You played telephone, right? And the sentence that started the chain was certainly not what came out 50 people later. So it’s that same cycle, I guess: making sure that what you’re putting in, you’re still taking the time to validate. You’re making sure that you’re really getting the outcomes that you want, because if it starts to deviate, it’s not going to know. You still need to come in and make sure that you’re getting what you thought you were getting.

And whether that’s servicing constituents or IP or customers, whatever that is, you’re still ultimately responsible.

Jeff Ton [00:15:44]:

We seem to forget that human oversight still has to be part of the process, part of the checks and balances. Right? Because it does tend to hallucinate at times and make up facts. And while I’m sure iterations in the future are going to be better at that, it reminds me of a conversation I had with Deal Daly here on this program. Deal is the CIO of Toysmith, and he talked about that. It’s almost this living, breathing organism. It’s not like putting a new storage array into your infrastructure. You’re putting something in there that evolves and learns.

And so it kind of blows up this notion of change control, because it’s changing all the time. How do you handle that, Alina? And Aleta, how do you counsel your clients to handle this ever-changing process of AI?

Alina Walters [00:16:50]:

It comes down to something very basic, I believe, with data: garbage in, garbage out. So if you approach it in that way, then you can better manage how you use it and what the outcomes are. You do have to apply that human context. That’s huge. I don’t believe that can be replaced, because that’s unique to each of us, and that’s what makes us amazing creatures. Right? We have this ability to think and reason and judge, and AI is more of an equation.

Right? So it’s going to take a while, I don’t know when and how, for it to be like a human. So I think it’s important to know what you’re putting in as data. I think you have to validate it and understand where it’s wrong, so you can avoid biases.

Aleta Jeffress [00:18:02]:

Yeah. And I think along with that comes that whole experiment phase. Right? We’ve talked about that, learning from that. So you’re always going to have to continue to tweak and continue to monitor. But being able to experiment with it, that’s where you find out: the data is good, the data is not good. So now what do I need to go change? Is my outcome what I expected? No.

So now I need to go change the human oversight factor. Right. I now have a purse instead of a cow. I need to go make a change. So all those things come back to making sure that you’re iterative. And like we’ve said, it’s not a static thing. You’re not just plugging it in and walking away. It really needs some constant care and feeding.

Alina Walters [00:18:47]:

Yeah. And you really have to think about, again, the use cases. Right. Where are you going to use it? Because I really see the pendulum kind of swinging. Oh, AI, it’s great. It can do so many things for us. It can answer all these questions. It can be that immediate first source of knowledge.

But the people who go to that can really get tired of it, and may really swing back to saying, I just want to kind of talk and think this through with another human. I’m not looking for a black-and-white answer. I want to reason and brainstorm. And while AI can likely reason and brainstorm, it is a very different interaction when it’s human to human. And I’m curious to see how the pendulum is going to swing back and forth through these iterations, with people using it, relying on it, overusing it, over-relying on it, not wanting to use it, learning from it. Yeah, fascinating.

Jeff Ton [00:19:49]:

It reminds me of the cloud, right? Because part of what we’re seeing today, and I saw a couple of posts on LinkedIn about this last week, is repatriation, where you’re bringing your cloud workloads back in because you went all in without a plan, or without a vision perhaps, and found out, oops, it can be expensive if you don’t put controls around it. Now we’re seeing the same kind of thing with AI, right? That pendulum back and forth. I love that visual of the pendulum. Alina, when you sat down to start this policy document, this white paper that I referenced earlier, how did you get started? What were the steps you were trying to take at the beginning of that education?

Alina Walters [00:20:38]:

I wanted to think through what I was writing about and why.

Jeff Ton [00:20:46]:

So education on your own, of yourself?

Alina Walters [00:20:49]:

Yes, in part education of myself, thinking no question is a bad question. And usually, when you ask a question, somebody else is thinking about it too. So going through that process of educating myself is something that I believe everyone will need to do, and want to do, and should do, in order to get to that responsible use. So there is so much information out there, it can be overwhelming every day. There are podcasts like this one…

Jeff Ton [00:21:25]:

Not as good as this one, but there are podcasts out there, right?

Alina Walters [00:21:29]:

There are articles, there are webinars, you name it. It’s too much, right? For me, I wanted to start with understanding what it means for me and how I want to talk about it, to generate excitement and help drive responsible use, because I think, as an IT organization, that is our responsibility: to introduce and provide a safe way to experiment, like Aleta was saying, to learn and to course-correct and to build. So that’s where I started.

Jeff Ton [00:22:14]:

Aleta, how about you? How have you attacked your own personal edification about AI, generative AI specifically, but what are some of the things you have done?

Aleta Jeffress [00:22:29]:

Well, I think, just like Alina, right, it’s been a lot of self-education. I attended a course the other day, and even then, even out of kind of an introductory 30-minute course, there were still things that I learned, nuggets that I took away where I was like, oh, well, this makes sense now. I mean, you’re just layering it right on top of everything. And it moves so quickly that you need to always look for new content, it seems like, because things change. It’s always changing. There’s something new to think about, there’s another security thing to think about, or there’s another use case that comes up that maybe I didn’t consider before. So I think it’s just continuing to learn, kind of on your own, and looking into where it really makes sense for me.

So one of my goals, right, is to use it more personally. Well, what does that look like? Back to the recipe example, right? Is it that? Is it project management? Is it… what? Different things you can type into ChatGPT and see what you come back with. And it’s all fascinating. It is really fascinating. It almost reminds me of when security was kind of a thing coming out. We always said, well, that’s the IT department, or that’s the security department. And now what is it? Well, now it’s baked into everything that we do, right? We know not to open the phishing email or the text from the post office, right? Those types of things. So I think that’s the same thing that will happen with AI. It will become more and more embedded in what we do.

And so we’ll learn how to use it, and we’ll learn how to manage it. But we’re just not to that point yet.

Alina Walters [00:24:18]:

There are so many exciting uses. For our kids, for example, right? They can go and say, hey, tutor me in eighth-grade geometry, give me ten quiz questions on such and such, help me learn the top five things I need to know in this course. They can be quizzed; it can be used as a tutor. So that’s a way to use it personally. At the same time, I think we have to teach our kids, hey, don’t put personal information in there, because you don’t know how that can be extracted and used against you. Right? Because, unfortunately, with everything good, there are, like, five things that can be not so good.

Aleta Jeffress [00:25:07]:

When your kid’s using it to write his research papers or whatnot, right? That’s kind of the guardrail then, the human intervention we talked about a little bit.

Jeff Ton [00:25:15]:

Yeah, I’m sure your kids are not doing that. Right. Well, I love what you’re talking about, how fast it’s evolving and layering on. We’ve talked about this before. About a year ago, I did a podcast episode where I interviewed ChatGPT, which was text-based, and then took the output and fed it through a text-to-speech generator to create audio. I saw a demo just last Friday of a company that has developed a prototype of a person’s image with AI behind it. And you can talk to it and have a conversation with it, and it will answer you. I mean, it’s limited in what topics it can cover right now, but you can have this conversation with AI now.

I mean, that’s pretty wild when you think about being in customer support, or being where you’ve got constituents in your districts that are interacting with the government, and now you can have an avatar, I suppose is the right word for it, that is responding in a human, empathetic way to their questions. Is that where this is going next?

Alina Walters [00:26:43]:

That’s exciting stuff. I love it. I mean, think of the impacts that could have, right? From a health perspective, right, people who have various disabilities could potentially not have those challenges. Right? Or they could be treated in different ways. I joke that it can be used for everything from a banana bread recipe to figuring out how you live forever, right? I mean, obviously we’re not there, but the uses are so exciting. And everything is new now because it’s new. But as with everything, after a few iterations, it becomes the norm. So our baseline of what that norm is, across industries and uses, is exponentially growing every day. And our tolerance of what’s possible is expanding.

And I love that. That’s exciting for me. It’s scary and complicated, but it’s really exciting.

Aleta Jeffress [00:27:53]:

I think, too, with that, my brain goes to the part where you have to think about the liability associated with that as well. So yes, it is definitely cool, and there are so many just amazing things that could come out of that. But then, ultimately, who’s liable for that? Driverless cars, right? As much as the hype was there, and there’s lots of AI built into that, we’re not really there. I mean, you still don’t see empty cars driving down the street as much as I think people thought you would. So I think that’s the component that still remains, and it’ll just take time. I mean, look at all the jokes about arguing with Siri and Alexa and things like that, right? We’ve come a long way from there, and the information, I feel, is way more valuable and applicable. But I still think there’s going to be that component of: how are we making sure that we’re really getting the right thing? And who’s really liable if people make decisions based on that data?

Alina Walters [00:28:54]:

Yeah. You think about social media, right? When it came out, everybody was putting everything out there every 5 minutes. Whatever they were doing, it was advertised. And look at how that has evolved. Then people started questioning, well, is this real, what I’m seeing on social media? Is it not? And then they started making choices: I don’t want to see this. I only want to see this. So it’s that same journey.

Jeff Ton [00:29:28]:

I love that. And I think, while it feels like technology is evolving so fast, it is in some ways, but in other ways, as you were talking about, Aleta, it is a little bit slower than it feels sometimes. We don’t see a lot of cars driving around without drivers. I think we’ll get there eventually. But just yesterday was the Super Bowl, and one of the ads was for a cell phone that a visually impaired person could take pictures with, because it told them what it was seeing in the camera lens. That, to me, goes back to what you were saying about accessibility, and how this technology can be used for the greater good. So before we get to our closing question, which you all know I love to ask, I want to ask where you two are going next in your AI journey. So, Aleta, what’s up for you next as you continue to explore this new world of generative AI and large language models?

Aleta Jeffress [00:30:50]:

You know, I think there are two things. From a kind of corporate perspective, it’s really working with my prospects and my customers to see how this can really benefit them. As mentioned, CGI is hosting an AI summit coming up, and so we’ve brought our experts into our Denver office, and we’re really going to have, in person, some conversation about what that looks like, how we can help organizations really hone in on their use cases, how they implement that, and what that really looks like. So, a lot of education. One of the things that is really fascinating to me is that we have some organizations that we work with, and just the level of urgency that they have. Think public safety, for example. When you start embedding AI into public safety: where should ambulances be? Where should police officers be? Where should fire apparatus be? Right? Just based on what they see with the weather, or past history, or things like that. I mean, the application of that can just be enormous. Right? So really thinking about all of those cases and where we can really best benefit our communities, where we live, and how they can take care of us.

So I think that’s one aspect of it. And then, for me personally, I think it’s using more of it. Where does it make sense for me? Where does it make sense to leverage my baking, or non-baking, ability? Or where else can I embed it into what I do every day, just so that I’m continuing to learn and continuing to see the benefits, so that I can promote that as well?

Jeff Ton [00:32:39]:

What’s the best downhill track for one of the ski runs that you go on?

Aleta Jeffress [00:32:43]:

Oh, that’s amazing. Yes. I might have to use that this weekend.

Jeff Ton [00:32:49]:

Alina, how about you? Where are you going with your exploration of this technology there with the city?

Alina Walters [00:33:00]:

There are two focus areas, the first being: let’s understand how we want to use it. Let’s put some guardrails around it. We already actually gave some recommendations on, hey, don’t do these top five things. Right? Just basically, don’t do these things. Don’t put your personal information in there. Check, double-check, triple-check, just to make sure what you’re using makes sense, is accurate, is not some untruth that can’t be undone somehow. And then what I want to do is, well, continue to learn every day. I want to learn something new every day. Right?

And manage my excitement, again, with that ability to use it in a responsible way, because it can become so overwhelming and out of control. But I also want to learn what the city’s other departments see as opportunities for using this technology in their internal operations, and to make the lives of our community members better, because there are so many opportunities for using this technology. There are endless opportunities. It’s just, can we start small, build on that, and have a plan for doing that? So, creating a framework within which we can do that in a repeatable way and learn from it.

Jeff Ton [00:34:39]:

Well, Alina, I’m going to stick with you. What are one or two things our listeners should do tomorrow because they listened to our conversation today?

Alina Walters [00:34:49]:

Think about three things, or three steps, that they want to take on their journey. Define them and think about them. Find some substantiating information. Right? And then use those three steps to get to the destination.

Jeff Ton [00:35:16]:

Yep, I like that. Plan it. Have a destination in mind as you’re taking those steps. I love that. Aleta, how about you? What are one or two things our listeners should do tomorrow because they listened to us?

Aleta Jeffress [00:35:30]:

You know, I think that when you come across something, either at work or at home, and you want to know more, maybe instead of Googling it, enter it into ChatGPT. Right? Use that as a tool and see what kind of result, what kind of possibility, comes out of that. I think that would be something to just substitute for a while and see, and then it’ll be intriguing, I’m sure. And then, based on that, take it and experiment with it, kind of to Alina’s point. You’re going to learn more about what’s next for you, because it’s going to give you things that you haven’t thought about. So take those things that didn’t come top of mind, drill down on them, and see where you can experiment. What kind of outcomes can you drive using things that you hadn’t considered, but AI had?

Jeff Ton [00:36:30]:

Well, I think based on our conversation, the answer to the question of to AI or not to AI, the answer is to AI…responsibly.

Aleta Jeffress [00:36:43]:

Yes, totally agree.

Jeff Ton [00:36:46]:

Well, thank you both very much for joining us again on Status Go. I love our conversations. I learn something new every time. Thank you so very much.

Alina Walters [00:36:56]:

Thank you for having us.


Jeff Ton [00:36:59]:

To our listeners: if you have a question or want to learn more, the show notes will provide links and contact information. This is Jeff Ton, for Alina Walters and Aleta Jeffress. Thank you very much for listening.