Summary
In this episode of “Status Go,” host Jeff Ton and guest Brian Jackson delve into the captivating world of AI and the trends to watch in 2024. Using Info-Tech Research Group’s Tech Trends 2024 report, they explore the disruptive potential of AI across various business models, discuss case studies, and analyze the differences between AI adopters and skeptics. With a focus on opportunities and risks, they present strategies for organizations to harness the value of AI while mitigating its associated challenges. Join them as they navigate the complexities of AI and unveil the strategic opportunities that this emerging trend holds for businesses. Don’t miss out on an insightful discussion that will keep you on the cutting edge of future technological advancements.
About Brian Jackson
As a Research Director in the CIO practice, Brian focuses on emerging trends, executive leadership strategy, and digital strategy. After more than a decade as a technology and science journalist, Brian has his finger on the pulse of leading-edge trends and the organizational best practices that drive innovation.
Prior to joining Info-Tech Research Group, Brian was the Editorial Director at IT World Canada, responsible for the B2B media publisher’s editorial strategy and execution across all of its publications. A leading digital thinker at the firm, Brian led IT World Canada to become the most award-winning publisher in the B2B category at the Canadian Online Publishing Awards. In addition to delivering insightful reporting across three industry-leading websites, Brian also developed, launched, and grew the firm’s YouTube channel and podcasting capabilities.
Brian started his career with Discovery Channel Interactive, where he helped pioneer Canada’s first broadband video player for the web. He developed a unique web-based Live Events series, offering video coverage of landmark science experiences including a Space Shuttle launch, a dinosaur bones dig in Alberta’s badlands, a concrete canoe race competition hosted by Survivorman, and FIRST’s educational robot battles.
Brian holds a Bachelor of Journalism from Carleton University. He is regularly featured as a technology expert by broadcast media including CTV, CBC, and Global affiliates.
Episode Highlights
[00:00:00]: AI. The CIO’s responsibility?
[00:00:37]: Introducing Info-Tech Research’s Trends to Watch in 2024
[00:04:19]: AI. THE Trend in 2024
[00:06:11]: Adopters vs. Skeptics
[00:08:46]: AI. The Opportunities
[00:12:10]: Autonomizing the Back Office
[00:14:33]: RPA and AI
[00:17:28]: AI. New Business Models
[00:22:22]: AI. The Obstacles
[00:25:17]: Holding Vendors Accountable
[00:27:17]: Who Owns My Data?
[00:30:11]: Who’s Responsible for AI?
[00:33:57]: The CIO?
[00:36:14]: Action: Read. The. Report.
[00:37:14]: AI. Where to Start
[00:38:39]: Thank You and Close
Transcript
Brian Jackson [00:00:00]:
First of all, I’ll say that our survey is taken by a lot of CIOs and the results out of the AI adopters was that one-third of those respondents expect the CIO to be solely responsible for governing AI.
Voice Over – Ben Miller [00:00:18]:
Technology is transforming how we think, how we lead, and how we win. From InterVision, this is Status Go, the show helping IT leaders move beyond the status quo, master their craft, and propel their IT vision.
Jeff Ton [00:00:37]:
Welcome to Status Go. This is your host, Jeff Ton. I have to tell you that I have been looking forward to this conversation for several months, really since I met Brian Jackson almost a year ago. Brian is with Info-Tech Research Group, and I met him about a year ago when they released their 2023 Trends report. And you may recall, at the beginning of 2023, we had an interview with Brian, and we have touched on some of the trends that he identified in that report throughout the year in a lot of our episodes.
And so I’ve been looking forward to the next year and 2024 in that report. And it was just recently released. We’re recording this on October 23, and I think the report was released on October 11.
Is that right, Brian?
Brian Jackson [00:01:34]:
Yeah, we released it October 11. We’re a little bit earlier this year. We were excited to get our insights out into the market, and we wanted to be ready as well for our live conference in Las Vegas. So that was another incentive to get things done earlier. So, I think it’s one of the most comprehensive tech trends reports that we’ve ever produced.
It’s backed by survey data from about 900 respondents. We’ve got different case studies from cutting-edge examples on the market, and I think there’s just a lot there for any technology leader to explore.
Jeff Ton [00:02:12]:
Well, I became a fan of Brian’s work through the 2023 report, and so you’ve probably seen me on LinkedIn and some of our conversations that we’ve had here on Status Go, commenting on his work as well as sharing his work. And I really have to recommend this piece to every CIO that’s out there because the focus of this report is on AI. Last year, you may recall, it was one of the trends that Brian identified for us to keep our eyes on during 2023. This year, the entire report is dedicated to AI. And I have to say that a recent episode that we had here on Status Go was about AI as a continuous emerging trend that we’re going to be talking about for years to come.
And so, I was really excited that Brian agrees with that as he’s looking at that. So, as you look at this report, just a little bit about the structure, and then we’ll dive into this. Brian and his team at Info-Tech have identified some opportunities, and let’s call them obstacles, so opportunities and threats, all related to AI and where we are with that, and where we’re going this year.
And then the survey responses were divided into skeptics and adopters. And I think some of the trends that we see in there are really interesting based on those two approaches to technology.
So, Brian, when you started looking at this, what really jumped out at you that said, “Hey, we need to dedicate our trends report to this thing, generative AI, large language models, all the things going on in the AI space.”
What jumped out at you?
Brian Jackson [00:04:19]:
Yeah, well, it all started when we were designing the research survey that backs the Tech Trends report. And, of course, we did feature generative AI as one of the trends for 2023. And in that report, we say that it had the potential to transform every industry. And I don’t think we really accounted for exactly how big of a trend that would be. And it’s just become the dominant thing that’s happening in the technology marketplace right now. It’s impacting every industry, every company.
So, from that standpoint, when we’re releasing a Tech Trends report to focus on things that are affecting every organization from a strategic standpoint, well, it just made sense to do a big, deep dive and explore all the implications of not only generative AI, Jeff, but other AI as well, because there are huge opportunities.
But I think it’s a high-risk, high-reward type of scenario where, yes, you could seize those opportunities. You could create new value, you could go into new lines of business, and maybe even make your staff more productive.
At the same time, there’s new risks we just haven’t considered before. From the data privacy standpoint, from the security standpoint, intellectual property, and even new vulnerabilities and attack vectors that we’ve opened up here for cybercriminals, perhaps.
So, there’s a lot to unpack. And when I saw that in our own data, AI, it’s the fastest-growing emerging technology that companies are planning to invest new money in. So, we know that people are right now in the process of discovering the use cases, and adopting AI, and we want to be there to help them not only realize the value but make sure that it doesn’t get away from them in terms of the risk.
Jeff Ton [00:06:11]:
Well, as I was reading through this, and you actually identified this in the report, as well as we’ve been talking about AI, really all year on Status Go. But also in some conferences that I’ve attended, some panels that I’ve moderated, the conversation, or at least the panel conversations, have been divided into two camps. One, you call them skeptics versus adopters. On one hand, we’ve got companies that have said we’re going to ban it. We’re not going to let our employees use it, we’re going to block it from our network, all those types of things. On the other extreme, we’ve got CIOs and CEOs really saying, hey, we’re all in on this AI space.
How do you, as you’re looking at this report and thinking about the opportunities and the threats, how do you balance those two audiences in what you’re trying to get them to pay attention to for ‘24?
Brian Jackson [00:07:25]:
Yeah, well, I think that we all have to be paying attention to this AI trend one way or the other. And the way we defined it, we wanted to see how the AI adopters would differ from the skeptics in a range of different analyses. That’s how we approached it for the report. So, the real dividing line was just whether you’d already invested in AI or were planning to do so in 2024. For the organizations that were not planning to do so, I think it’s worth illustrating the difference in maturity levels. And this is a theme that you’ll notice across the report, where you compare the organizations that are ready to make that investment to those that are not.
I think the main theme is that, hey, you’re more mature, you’re more ready from an infrastructure and data policy standpoint to make that investment in AI versus the other group here, the skeptics. They may be skeptical of AI’s benefits and wary of the risks, but the truth is I think that maturity is often a stopping point for them where they feel like they’re just not ready organizationally to take advantage of the new value that’s in front of them.
Jeff Ton [00:08:46]:
I like that differentiation, because some of the leaders I’ve talked to who are in the “hey, we’re going to block it” camp…it doesn’t mean they’re not investing in it. It just means they’re taking maybe a slower, more cautious approach to it. So, I like how you differentiate between those that are more mature. They may be investing in it, they may not be rolling it out yet, but they’re at least looking at it and how it can impact them, because, as you so well point out in your report, you can’t really stop employees from using it. When you think of probably the most popular or most common tool, ChatGPT, you can use that from your smartphone. You don’t need to be on the company’s Internet to be able to do that. Right? And so, you’ve got to put those guardrails around it.
Let’s talk about the opportunities that you’ve identified here, Brian, and then we’ll shift and talk about what are some of the threats…some of the obstacles to it. But you focused on three different opportunities: AI-driven business models, autonomized back office, and spatial computing.
Why those three? What made those float to the top for you?
Brian Jackson [00:10:07]:
Yeah. Well, I wanted to explain how organizations that harnessed AI as a core value creator and really built their new business models around it would probably be in the best position to grow their business, expand profits, improve the customer experience compared to other organizations that were not doing that.
And also realizing that not everybody will be ready to re-engineer the entire business around AI right away. We wanted then to look at, well, what if you just augmented your current employee base and gave them the tools and the knowledge they needed to do their jobs more effectively using AI, right? And that can cut costs, and that can improve the bottom line as well. So it’s important to focus there too: what would that look like compared to the business model point of view?
And then there was another thing to consider. We all saw Apple’s debut of Vision Pro back in the spring. And along with that debut, Jeff, they sort of coined this new trend, spatial computing: this idea where you’d have the physical space you inhabit combined with a digital layer that’s added to it. And, of course, Apple was showing us how that comes through the Vision Pro.
But that’s just one way that we’re going to experience that space. And in knowing our data, we see that a lot of organizations, they’re not ready to make that mixed reality investment yet. They don’t see that the hardware is really there, really built out just yet. So, I think that’s going to come through the technology that we’re already using today. Our smartphones, our computers…and AI is going to play a huge role in creating the interface around our experience of spatial computing even before mixed reality like the Vision Pro will be able to.
Jeff Ton [00:12:10]:
Well, we’ve been consumer IT-driven or consumer technology-driven for some time now. We see these things happen in gaming and those arenas before we adopt them in business.
I was excited to see the spatial computing showing up on here because as you know so well, your report last year talked about the metaverse, and I actually interviewed George Matos of Nvidia on the Status Go podcast to talk about…I think they call it Omniverse, but it’s metaverse concepts. And to kind of see that growing was exciting.
But in my mind, and correct me if you see this differently than I do, that we’re probably going to see the back office automation leading the way, at least in the short term for these types of things. Would you concur with that?
Brian Jackson [00:13:09]:
I would. And that’s what our data showed us is that there’s a lot more AI adopters that are keen to harness the new features that are coming to them through their vendors and using that to autonomize the back office. Meaning that, hey, are there some tasks that you do every day that are more repeatable, more predictable? And how could we harness AI to get those done for you either faster or maybe completely automate them away.
And often what we’re seeing with generative AI is you need the human in the loop because we know that AI of this variety tends to hallucinate, Jeff, or to put it more plainly, it makes mistakes. So, you need your expert humans there exercising their judgment, applying their experience just like you would with sort of a junior employee, right? If you’ve been on the job for less than six months, I wouldn’t just take your output and put it in front of clients. I’d want to have somebody more senior take a look at that first and verify it. And that’s sort of the working relationship that we see develop with AI right now. But it is something that people are getting very comfortable with, and we’re seeing it built into more and more different operational processes.
Jeff Ton [00:14:33]:
Well, and I use it here on Status Go. There’s a tool called CastMagic whose back end is the API interface to ChatGPT. And what it does is it will ingest the audio of a podcast episode. And not only does it provide a great transcription of the conversation, but it also produces some of the marketing collateral that I spend hours on each week…finding appropriate pull quotes and writing LinkedIn posts and blog posts. It does that automatically for you. But it’s not prime time, right? It probably gets you maybe 70% of the way there, but you still have to go in and edit it and massage it a little bit.
And I think we’re starting to see that in people using it in business a lot of times in marketing because of the content creation or generation side of it.
The other thing that I saw, and I noticed that you mentioned this in the report as well, there seems to be a new interest in RPA, and it’s almost hand in hand with AI. Are you seeing those in the future kind of merging into one? Is that where you’re going?
Brian Jackson [00:16:03]:
Yeah, I think that there will be some merging of those two. And with robotic process automation, there’s a type of it, an execution of it, where you don’t even need AI. You can set it up with certain business rules and have it running for you. So that’s still going to be relevant for a lot of tasks that are very predictable, where there are only certain fixed answers you can select from. Right? So, I think that’s going to remain a huge market because there’s lots of value to be gained just from that. And in some cases, you don’t want to be querying an AI model because that can be expensive in terms of the CPU, the infrastructure cost behind it, electricity consumption, and all that. But then there’s this category of intelligent robotic automation that we’re seeing.
So, there’s new investment driven by that, and that is where you’re using AI to go up to a higher level of abstraction, bringing in more data that hasn’t been codified for you, or at least undescribed data. It’s not in a spreadsheet; I’m trying to say it’s not formally prepared for machine reading. So, if you’re able to intake that type of data, then you can suddenly automate new processes. And you might call that intelligent process automation.
Jeff Ton [00:17:28]:
Okay, I like that. And the other thing that you talked about in the report was that right now, a lot of the ways that we interact, we as normal human beings, interact with AI is through text, and there’s going to be merging of inputs, right? Where you get video and audio. Same thing with outputs. You start to see that.
But before we move on to the obstacles, the threats part of this conversation, Brian…AI driven business models, what are some of the things you’re seeing there? What are you seeing emerging as digital transformation has been about, hey, what are some new business models that we might be able to get into some new revenue streams now with AI? Now we’re talking about new business models there. What are you seeing in that space?
Brian Jackson [00:18:27]:
Yeah, well, let’s use the case study there, because I think it illuminates how many different business models AI could disrupt that we haven’t even considered yet. And I think that people will be surprised at what sorts of things AI can predict, and how quickly, easily, and accurately it does so, to the degree that it becomes the best way to solve many different types of problems.
So, in a case study for this trend, AI-driven business models, we looked at Cognitive Systems. This is a startup out of Waterloo, Ontario, and they developed a system called Wi-Fi Motion. What it does is it deploys a home security application to your home by putting a piece of software onto your router. And it’s looking at the Wi-Fi signals and how they’re disrupted as people move around your home. Right? So, everybody has a Wi-Fi router these days, of course. Right? We’re probably both connected to ours right now on our devices.
So, you can see the value of being able to deploy a security solution to your router without having to install any hardware like video cameras or additional sensors on your doors and windows or motion sensors. That’s a big point of friction for people when they’re considering going to install a security system because of the cost in that the hassle, and even the privacy concerns.
So, what Cognitive did is it partnered with ISPs, who can reach millions of customers and deploy a piece of code to their routers or their modems. Right? Often, you have some sort of combined device that an ISP has shipped to your home to get connected. And now it’s a value-added service that an ISP can offer to its customers for an extra fee, or sometimes just as a value add.
So, the ISPs love that, and then customers are getting more value out of the ISP service. And Cognitive Systems sets up its monthly recurring revenue model. Right? So that’s just a great example, I think. You wouldn’t expect AI to predict disturbances in Wi-Fi fields, and that leads to an application where you can have home security, home monitoring, and even another application, smart home automation. Things like that we’re going to see more and more often, where we’re surprised by how AI has suddenly disrupted a new business.
Jeff Ton [00:21:05]:
Yeah, I like that example because I’m thinking about my own home. And you’re right. I’ve got a Wi-Fi router and I’ve got a repeater upstairs and all that. But I’ve also got the home security. I’ve got cameras, I’ve got motion sensors, I’ve got glass break. And I wouldn’t have to do any of that with this technology of being able to see the Wi-Fi patterns throughout my house. Never would have thought of that.
Brian Jackson [00:21:35]:
Yeah. Get that alert on your device on your smartphone when you’re out, and you’re not expecting a person to be at home, why not be alerted when that’s happening? Right? And then you can take action.
Jeff Ton [00:21:45]:
As long as you can tell the difference between a person and my cat. Yeah, it definitely set off my security alarms.
Brian Jackson [00:21:53]:
Yeah. Actually, when I was interviewing the CEO of Cognitive Systems, Taj Mankou, he explained how in the training process for their algorithms, they had to teach it how to discriminate between humans and pets and humans and fans that might cause curtains to blow around and things like that.
Jeff Ton [00:22:14]:
Yeah. And you don’t even really stop to think about that. Your fan is going to disrupt your Wi-Fi signal.
Brian Jackson [00:22:21]:
Right!
Jeff Ton [00:22:22]:
That’s kind of mind-blowing in and of itself.
Well, let’s switch to the other side of the coin. You also identified some risks that our CIOs, our listeners of the Status Go podcast, need to be aware of or focused on as they’re looking at AI and looking to where to deploy it.
And you identified three risks: responsible AI, security by design, and data sovereignty. Why those three risks? How did those float to the top for you?
Brian Jackson [00:23:00]:
Yeah, well, I’m trying to cover the different dimensions of risk that we have out of AI. So, on the responsible AI front, I’m covering what I see as the compliance risk, and the more ethical risk, too. Right? Because we have to do more than just comply with the law. I think that’s the bare minimum, Jeff, when it comes to being a responsible organization. So, we have to develop new policies around how to deploy AI. When do we explain where it’s used? What sort of decisions will it make? And that’s what governments are going to start asking of us as well. So, it’s not just a nice-to-have; this is a must-have if you’re considering the use of AI.
And then on the security by design front, I’m thinking about what are the risks posed by the different vendors that you could work with within the space. And there’s a new wrinkle when it comes to AI that we all have to consider. And we’ve seen in recent years how devastating these software supply chain attacks can be. Right. So, I won’t name off the vendors that have been really subject to them lately, because I feel like they’ve gotten enough of a grilling already. But what we’ve seen is software that we’re told is secure and that we can deploy to sensitive systems and depend on for our key capabilities just ends up not being as secure as we’d hoped for.
And then there’s the digital sovereignty trend. This is another consideration where we have to think about not only our own intellectual property and how it’s being exposed to these AI algorithms that could be trained on it, but how we’re treating the data of our customers, our employees, the sensitive data. And data privacy has always been important, Jeff, as you know, to IT. It’s a capability that we consider and try to have a strong practice around. But again, here there are new considerations, because of the capabilities of AI, that we have to think about as we’re revisiting these capabilities for ourselves.
Jeff Ton [00:25:17]:
I like how you point out in the report, and I think it was the security by design section that you’re talking about, maybe some regulations. I know everybody hates that word, but necessary, but that software vendors are coming under more and more pressure to make sure that their code is safe. And you talk a little bit about that internationally. Are you seeing some countries further ahead than others in trying to mitigate that risk with our software vendors?
Brian Jackson [00:25:57]:
Yes, the US is taking the lead on this topic, and if you look at the White House’s National Cybersecurity Strategy, they specifically mention security by design in that strategy. And CISA Director Jen Easterly has been giving speeches on this topic. So, this is one pillar of their strategy moving forward. And to me, Jeff, it’s sort of like going back to the 1960s in the US, when vehicle manufacturers by law had to put seatbelts in their vehicles for the first time. And these days, imagine you went to buy a new car and it didn’t have a seatbelt in it. You’d just say, okay, well, next car, please. Obviously, I can’t consider this one.
It just seems ridiculous that we would ever have cars without seatbelts. And when you think about it, why, then are we accepting of this risk? When we buy software from our vendors, they’ve created the software, we deploy it, they say it’s secure, and then we take on the risk of the vulnerabilities that are in that software, and we can pay the price of that through cybersecurity breaches, loss of reputation, and these days, cyber insurance. So how do we rebalance that so it’s more fair?
Jeff Ton [00:27:17]:
Yeah. Well, I think that’s got to be one of the changes in our industry: moving past the acceptance of basically flawed software, where it’s got vulnerabilities in it, and holding our software partners accountable for that.
And you talk about this in the Data Sovereignty section of how our data is treated. And speaking as a consumer…who owns my data and what they can do with it? Are you seeing that we’ve got some hope of that changing in our industry where my data becomes more of my property and less of the conglomerate’s property?
Brian Jackson [00:28:08]:
Yeah, we are seeing some progress on that front. And in certain jurisdictions, like California, it’s in law now that consumers should have control over their personal data. So, if you’re a resident of California, what you can do is send a request to a business to say, hey, I have to correct some of my data. Or you can just ask them what data of yours they hold. And you can also request that all of your data be deleted. Right? So, there’s more than the average level of control if you’re a resident of California. And in fact, if you’re a resident of the European Union, there’s similar control you have through GDPR, right, which has been in place since 2018.
So not only is it enshrined in law, but in this Trends Report, one of the things I learned about was the Data Rights Protocol, DRP. What that group is trying to do is use a protocol, just like HTTP, right? When we use the web, it’s a protocol that allows us to connect to websites and read the information from them without even thinking about it; it just works like magic in the background. So why not have a protocol that takes all the work out of handling these requests for the companies that receive them? Because often it’s large companies that we’re talking about. They’re receiving thousands of these requests from individuals, and they have to think about, okay, where’s that data? Do we have to meet this request? How do we meet it? What do we have to do? It becomes very manual, and it creates a lot of friction and cost. So, this technology is trying to help them automate that and empower the consumers, who can now just use a mobile app to set their preferences for how their data is treated across all organizations.
Jeff Ton [00:30:11]:
I like that, and it’s been a long time coming. I think the third section that you talk about when you’re talking about mitigating risks is responsible AI. And so I’ve got a question for you. A lot of our listeners are CIOs…Chief Information Officers, maybe CTOs…Chief Technology Officers. How can we, as consumers of this, not consumers outside the technology space, but consuming AI as IT leaders, how can we ensure responsible AI in our organizations? What steps should we take?
Brian Jackson [00:30:53]:
Great question. And we wanted to know what steps the technology leaders in our community were taking as well. So, in our survey, we asked them about a number of options. We sort of put our heads together and also looked at what was recommended by some of the draft legislation, the governance steps that governments would be looking for, to say, hey, you’d better be following these.
So, some options include implementing measures to manage anonymized data, right? So, if I’m taking in personal data, how am I anonymizing that before storing it or having it used to train a model, something like that? You could conduct impact assessments on your AI systems, meaning how is this impacting the people interacting with it, and what sort of decisions is it making that might have negative impacts on stakeholders I haven’t considered? You can publish clear explanations about how AI is intended to be used and what predictions it’s making when it’s following your processes. And you need to monitor deployed AI to ensure it behaves as expected, because what we see over time, Jeff, to go back to the car analogy:
It’s sort of like running a car where once it drives off the lot, it’s probably running great, and everything works as expected. But the more miles you put on, the longer you use it, the more action it sees, well, the performance might degrade over time, so you have to take it back to the mechanic. AI is sort of the same way. And you should use AI models that are explainable, where possible.
I wouldn’t say you always have to do this, but it’s one thing you can do for better transparency. And the difficulty there is that a lot of these foundational models are not totally explainable. So those are the options. Now, when we asked people, what we found is that if you’re an AI adopter, a little more than a third of you are not doing anything today, which is obviously a concern; there’s more work to be done.
Although I will say, a lot of these organizations are still at that piloting or discovery phase. So that’s okay. If you’re just at the piloting and discovery phase of this, you don’t really know where you’re going to deploy it in the organization. You don’t necessarily have to have all of your AI governance figured out, but once you do go to the deploy stage, you need to have that figured out, or else you’re going to find friction. You’re going to have problems when you do deploy it. So don’t let that stop you. Get ahead of the game. And then I’ll just point out that among AI skeptics, of course, 70% are saying that they’re not going to take any steps to be responsible for AI because they’re not planning to use it.
They’re not looking to even adopt it.
Jeff Ton [00:33:57]:
Well, and that brings up one of the things that we talked about before we went on the air, Brian, that I want to be sure and touch on is who is responsible for AI in an organization, and what did your survey show as far as responsibility when it comes to AI?
Brian Jackson [00:34:16]:
Well, first of all, I’ll say that our survey is taken by a lot of CIOs. Probably about a third, at least, of the respondents would be considered at the CIO level, and then everyone else is in IT as well, maybe at the director level or manager level, but you can consider them all to be in IT leadership. And the results out of the AI adopters, of course, for this question were that one-third of those respondents expect the CIO to be solely responsible for governing AI.
Jeff Ton [00:34:50]:
Well, and that reminds me of the issue that we’ve had for years with cybersecurity. We want it to be a shared responsibility throughout the organization; it can’t just be the CIO or the CISO. And now we have another technology where we need to share that responsibility across the organization.
Brian Jackson [00:35:12]:
Right! I totally agree with that. I think that if you’re a CIO and you expect to be totally responsible for AI governance, well, you’d better start the conversation now about why other business leaders have to get involved, if not all the way up to the enterprise risk function on your board. I think there are some scenarios where only the CIO would be involved, and that’s if you’re just planning to adopt features through your vendor software, through your vendor relationships. If IT is managing that, then, yes, I guess the CIO would be accountable for that type of AI deployment.
But for any development of AI internally, where you’re customizing one of these models and deploying it to some sort of business function that creates external value, meaning customers or people you work with are going to be exposed to it, well, there’s a business lead involved in that process, and they need to share the accountability.
Jeff Ton [00:36:14]:
I think that’s right. I think that shared responsibility, shared accountability model has got to be in place in our organizations.
Brian, we’ve come to that point in time on Status Go that I know you’re familiar with because you’ve become a listener of Status Go since our last interview, which I really appreciate. And that’s where I ask you to provide an action or two that our listeners should take after hearing our conversation. But before I ask you that, I’m going to jump in with my own action. And my action for our listeners is: read this report. It’s a great report. I think it will shape your IT organization in 2024 and beyond.
So be sure to pick up a copy of this report, and we’ll have a link in the show notes as to where you can get that.
So, Brian, what’s an action that you would encourage our listeners to take after listening to our conversation today?
Brian Jackson [00:37:14]:
Well, if I can give two actions, if you’ll allow it, Jeff, because, in the spirit of the report’s balance of seizing opportunities and mitigating risks, I’ll start with opportunities and say: go and collect the use cases in your organization that will be early candidates for getting value out of AI, whether you’re going to change a process that creates external value for your customers, or you’re going to augment your staff, your employees, with AI to improve their productivity and cut costs. Just go and brainstorm with your organization. Ramp up that innovation pipeline and get a lot of use cases written down and ranked in priority so you know where you want to start. That’s going to help you realize value more quickly.
And then on the threat-mitigation side, get that responsible AI policy framework in place. Info-Tech offers a great framework for this that you can pick up and use to get started. Of course, every organization will want to make sure it aligns with their own organizational values, but you need to have that in place before you can go ahead and roll out AI and deploy it operationally. Otherwise, you just won’t know how to put the compliance checks around it.
Jeff Ton [00:38:39]:
Well, I love those two actions, and I’ll add for our listeners that in each one of those sections of the report, and this is one of the things I like about the research work that comes out of Info-Tech, they provide actions and resources for you to dive deeper into each of those topics.
So, Brian, thank you so much for joining us today. I really appreciate it. I have really enjoyed getting to know you over the last year and following your work. I think you do some great work for our industry. So, thank you for that, and thank you for joining us today.
Brian Jackson [00:39:19]:
Thanks a lot, Jeff. It was a lot of fun to join you, and I look forward to more Status Go episodes in 2024.
Jeff Ton [00:39:25]:
I appreciate that!
To our listeners, if you want to learn more, visit intervision.com. Our show notes will provide links and contact information, and we’ll be sure to provide a link to the 2024 Tech Trends report from Info-Tech. This is Jeff Ton for Brian Jackson. Thank you very much for listening.
Voice Over – Ben Miller [00:39:48]:
You’ve been listening to the Status Go podcast. You can subscribe on iTunes or get more information at intervision.com. If you’d like to contribute to the conversation, find InterVision on Facebook, LinkedIn, or Twitter. Thank you for listening. Until next time.