Status Go: Ep. 242 – AI: Just One More Thing | Deal Daly

Summary

In this thought-provoking episode of “Status Go,” Jeff Ton sits down with seasoned technology executive Deal Daly to navigate the complex and often misunderstood waters of artificial intelligence from a CIO’s perspective. With over three decades of experience across diverse sectors, Daly unpacks the nuances of integrating AI within corporate strategies, highlighting both the remarkable possibilities and the cautionary tales. They delve into the critical importance of aligning C-suite visions with AI initiatives, addressing the unique challenges of managing AI projects, and confronting the inherent biases and decision-making pitfalls of these technologies. Whether you’re a CEO looking to bridge the gap between tech and strategic vision, or an enthusiast eager to understand the real-world application of AI, this episode, “AI – Just One More Thing,” will arm you with the insights to approach AI with both excitement and a calculated plan for success.

 

About Deal Daly

Deal is an innovative technology visionary as a CIO/CISO/CDO with experience leading digital transformations of business operations. He has deep expertise in infrastructure and development, data management, web-scale cloud-based operations in both B2B and B2C, and end-to-end supply chain management. He has applied these skills across multiple industry verticals: digital media, gaming, fintech, big tech, wholesale distribution, storage tech, and semiconductors.

He is passionate about connecting startups with business value opportunities, building innovation- and outcome-focused teams, driving year-on-year business benefits, and transforming technology, process, culture, and technology investment strategies.

Deal’s career spans key leadership positions at renowned organizations such as Intuit, LexisNexis, Ancestry, Bertelsmann, MetLife, and Toysmith, where he has spearheaded transformative initiatives ranging from web-scale architectures to cloud migrations. He boasts a multifaceted background encompassing IT transformation, M&A integrations, and fulfillment operations.

Deal holds advisory roles at prominent technology companies such as MX.com, Hammerspace.com, Storj.io, Tech9, and Zetta Health Solutions. With a proven track record in guiding emerging technology adoption, Deal has also served on advisory boards for industry giants like Dell and EMC.

Deal’s academic foundation in Theoretical Physics and Mathematics from CUNY adds a unique dimension to his strategic thinking and problem-solving capabilities.

Deal enjoys classical literature, piano, and music composition, and supports wildlife and other environmental causes.

Episode Highlights

[00:00:00] – Kickoff: Unravel the wonders and worries of AI for CIOs – a must-listen for tech leaders!

[00:00:37] – Welcome to Status Go: The podcast lighting the way for IT innovation and leadership.

[00:00:54] – Stepping into the AI realm: A 30-year tech vet shares insights on AI’s influence on CIOs.

[00:02:12] – Unveiling the journey with Ancestry.com and the transformative power of AI in business.

[00:03:19] – Deep Dive: The CIO’s playbook for navigating AI project priorities and alignment with business strategy.

[00:05:34] – Master the art of the tech-business tango: Forging unity across the C-suite for AI success.

[00:06:58] – Weighing the scales: AI’s potential to dazzle customers vs. the risks of going awry.

[00:09:11] – AI vs. Traditional IT: Charting a new course for project implementation.

[00:10:56] – Critical C-suite conversations: The key to mapping out AI’s ownership and risk.

[00:12:41] – Bespoke AI strategies: The unique considerations setting AI projects apart.

[00:13:34] – Embrace small starts: The wisdom of proof of concepts in AI ventures.

[00:15:13] – Staying on top of the AI game: Why something as dynamic as AI can’t be set and forgotten.

[00:16:51] – Change control in AI: A distinct approach for aligning AI with business evolution.

[00:18:24] – Listener’s roadmap: Aligning AI with business strategies and impacting transformation.

[00:20:36] – Exploring six streams of transformation to navigate the AI revolution successfully.

[00:24:14] – Concluding insights: The pivotal role of CIOs in marrying AI initiatives with business visions.

[00:26:42] – Demystifying AI: A learning-first approach for implementing AI aligned with company values.

[00:28:21] – Bridging the gap: Translating tech AI chatter to strategic visionaries and boardroom bigwigs.

[00:30:07] – Tactical trust-building: The necessity of a trusted partner in avoiding AI biases and blunders.

[00:32:17] – AI for the cautious: Starting small and sensible in less risky environments.

[00:33:59] – Shattering myths: Addressing common misconceptions about AI among CEOs and boards.

[00:35:39] – Tune in and transform: How AI projects can bring the right value and quality to your business.

[00:38:16] – Closing the chapter: Wrapping up with a powerful message to ignite your AI strategy and execution.

 

Episode Transcript

Deal Daly [00:00:00]:

The risks associated with this are not just, “Well, it’s going to cost more because there’s more compute involved,” and all of that’s already measurable. What’s not measurable is how do you, you know, mitigate and minimize the risk of biases and different types of decision-making aberrations that occur as a result of using a large language model?

Voice Over – Ben Miller [00:00:27]:

Technology is transforming how we think, how we lead, and how we win. From InterVision, this is Status Go, the show helping IT leaders move beyond the status quo, master their craft, and propel their IT vision.

Jeff Ton [00:00:44]:

Welcome to Status Go, the podcast where we explore the dynamic world of technology and business leadership. This is your host, Jeff Ton. In today’s episode, we have the privilege of delving into the intricate role of the chief information officer and the challenges they face in navigating the ever-evolving technology landscape, especially with the introduction of AI.

Our guest today, Deal Daly, is not just a seasoned innovator and an advisor; he’s also a CIO with a remarkable track record of driving business agility, performance, security, and scale across diverse domains. Join us as we sit down with Deal to unravel the complexities of AI from a CIO’s perspective. As he aptly put it when he and I talked a couple of weeks ago, AI is just one more thing on the CIO’s plate.

We’ll be exploring how CIOs prioritize AI projects amidst the myriad of other crucial initiatives, unraveling the strategies they employ to communicate the value of AI to CEOs and boards, and understanding the nuances that differentiate an AI project from your standard IT endeavor, if there is such a thing as a standard IT endeavor.

If you’re curious about how technology leaders navigate this new AI landscape, prioritize projects, and bridge the communication gap with executive leadership, you won’t want to miss this insightful conversation. Get ready to elevate your understanding of AI in the C-suite on this episode of Status Go.

Let’s dive in, Deal. Thank you so much for carving out time to be with us today, and welcome to Status Go.

Deal Daly [00:02:44]:

Thanks, Jeff. It’s great to talk with you again, and I’m looking forward to a great conversation today. Thanks.

Jeff Ton [00:02:50]:

Before we dive into AI as another thing on the CIO’s plate, would you share with us a little bit about your background and kind of what brought you to where you are today?

Deal Daly [00:03:03]:

Yeah, glad to. So, I guess at the executive summary level, I have about a 30-year career in operations and technology, and it has spanned several industries: manufacturing, legal search, and financial services, both B2C and B2B. My career path started with Bertelsmann AG and BMG, then moved to LexisNexis, then Intuit, then Ancestry.com, and then Toysmith, and now I’m doing advisory roles and ad hoc transformation projects.

Jeff Ton [00:03:49]:

And I think when you and I met, you were with Ancestry. That’s something that’s near and dear to my heart as a long time Ancestry user. Love their platform for genealogy. So that may be kind of what struck a chord with us when we met, gosh, years ago now out in Las Vegas, I think.

Deal Daly [00:04:10]:

Yes, I remember it well. Thanks.

Jeff Ton [00:04:12]:

Yep. Well, let’s dive in. Now, AI has been around for a while. We’ve been talking about it for a while. But as our listeners know, it exploded last year with the introduction of ChatGPT and other more consumer-focused, I suppose, implementations of AI. And now, all of a sudden, the CIO is bombarded with yet another thing to have to worry about. As you’ve been working as a CIO, and more importantly as an advisor to other CIOs and other businesses, how do you begin to work with them to understand where this fits in their overall work?

Deal Daly [00:05:01]:

Yeah, that’s a great topic. And it really starts with business alignment, or the alignment of the technology portfolio to the business plan and strategy. So as AI topics come up across the business, we really have to sit down with business leaders, assess the current portfolio, and see if there’s displacement or rescheduling of projects based on new initiatives, particularly AI ones that might yield better marketing solutions, better go-to-market solutions, better cost savings, whatever the benefits might be. But it’s best to start with a base of knowledge and information. So, there are a couple of strategies I try to use in working with business leaders, and that goes across the C-suite, because the CEO and the general counsel and the CFO and the chief operating officer and the chief product officer all have unique perspectives, unique interests that they need to protect for the business. And we need to work across that group to align the work that we end up doing in order to make sure that we’re providing the best value to the business.

Jeff Ton [00:06:23]:

I love how you started with looking at the portfolio, because sometimes IT is faced with…CIOs are faced with the belief that this funnel is ever widening and always available to expand. You’ve got no constraints on your resources, right? And so, you’ve got 300 projects on your project list and now you’ve added a bunch more. How do you counsel your CIO clients to be successful in that negotiation of, hey, let’s move this project out and let’s bring this AI project in?

Deal Daly [00:07:09]:

Yeah, I really think it gets to coordinating across the C-suite. So, one of the traps that CIOs or VPs of IT can fall into is they begin to negotiate independently with each leader. And, of course, each leader is going to say, “Yeah, my thing is the number one thing, and I have a list of five of them. And you need to work on these today.” And then when you speak to the next leader, he’s got the same set of things in a different bucket, and his is most important as well. There really has to be a cross-C-suite discussion around prioritizing to come up with the most important thing. And that’s where a lot of technology executives get stuck: they don’t have a forum to go across the C-suite and say, okay, well, the chief product officer wants us to do XYZ. Is that more important than creating a new AI-based chatbot on our website? Which thing is more important, or most important? Right.

In the context of the portfolio view, there’s a fixed set of resources, usually, and they get consumed by whatever we’re going to be working on. And therefore, the business has to understand that it’s a finite resource box. Right?

And if you want to put something in, you have to take something out. Right?

Unless everyone decides we’re just going to make the box bigger. Okay, that’s fair.

Okay, if you’re going to fund more resources to do more work, or we’re going to leverage a third party which has a cost associated with it in order to get something done, that’s okay too. Right? But it really is kind of that portfolio view that allows you to have that discussion.

Jeff Ton [00:09:16]:

I can remember when I was CIO at Goodwill, which is where I was when you and I met years ago. And one of the first things I did when I joined the organization was create this concept of an IT steering team to help manage the priorities. And I remember a conversation I got into with one of the other members of the C-suite, and he had declined my invitation to come to the IT steering team meetings. I said, “Keith, I need you there.” “Well, what’s the meeting about?” “Well, it’s to help align priorities.” He says, “Well, you can’t tell me what my priorities are. I’ve got priorities in my business. You can’t tell me what they are.”

And I changed the tune a little bit, and I said, “But I have fixed resources. I have constraints that I have to manage within. I can’t possibly do it all. I need your help in managing those constraints.” And somewhere along the line, he finally agreed and started to attend the meetings to help with this.

And I know that with this introduction of AI, we’re faced with another bright, shiny object. So, as you’re counseling people on this prioritization piece, how do you have them look at AI? What are the elements or the factors of an AI project that might be different than a website refresh, for example?

Deal Daly [00:10:53]:

Yeah, so there are a couple of factors that make this different. One is that with artificial intelligence or machine learning projects, there can be an assumed or even projected huge benefit. It’s going to do something dramatic, right? So, that dramatic potential creates a kind of momentum of its own, an energy that drives enthusiasm to get something going. Let’s do it. Let’s get it done tomorrow. We want to be the first to get this out there. All of that makes AI different.

In some cases, it’s a little bit like the cloud journey, where some people said, yeah, okay, I’m going to be the first in, and then they wound up getting all the pain and figuring out how to make it work. So, AI projects are similar in that there’s going to be this learning phase. So, what I advise is let’s learn first and then jump, right? Let’s give business leaders access to training, to seminars, to written material that will allow them to understand what the potential benefits can be in the different domain areas of the business, like finance, operations, customer support, marketing, sales, and so forth, and allow them to understand all of that.

And then, as with other new technologies, I usually look at them as emerging, stable, and then mature, right? So, in the emerging category, like AI, there’s a lot of thrashing that goes on in the provider community to create things that can be used. And what I advocate is a period of testing, right? So, if you have a use case that is minimal risk, or a less risky use case where, if you get it wrong, it’s not going to hurt that bad, that allows you to test things and learn how it works.

There are a number of meaningful considerations here: if you’re going to use AI, are you going to have someone external develop it, or are you going to acquire resources that you own and do it yourself? Are you in a position to evaluate the risks based on the learning that you’ve done? So there needs to be some sort of journey line to the adoption curve, and you can do things quickly as long as they’re not fraught with risk and they’re targeted on getting some result quickly, so that you can do it and then learn. And if you fail at it, you pull that out, you do it differently, you get better at it, and you learn, and you do it again, and then you can get into more meaningful projects.

So, it’s quite a bit like that.

Jeff Ton [00:14:03]:

Well, and I love how you put it when we talked a couple of weeks ago, start with what you’re comfortable with. And I think you even brought up the vision and mission. So, how are you counseling your clients to start with what they’re comfortable with?

Deal Daly [00:14:23]:

Yeah. That helps drive their thinking. Right? It’s like, how should they be thinking about this? Right?

So, you go all the way back to the CEO’s vision and the mission of the company and what it’s intended to do. That helps you rationalize: are there AI projects that would be central to that vision and mission, are they peripheral ones, or are there adjacencies that allow you to jump into some other market very quickly? Well, let’s make sure that you’re not contaminating your go-to-market or your main business.

So, the real thing is to go back to your vision and mission. Align AI projects to where they fit within the business strategies that you’ve articulated so that they have a place.

And you’re not just saying, I have my business strategy, and now I’ve got eight business leaders saying they have 50 new AI projects, and we’re just going to go start running around doing them. That’s not the right way.

You want to align them so you select the right ones that are going to provide the right value in the right time frame and can be delivered with a very high quality.

Jeff Ton [00:15:43]:

I know in your work you also work very closely with CEOs and boards, not just CIOs. How do you bridge the gap between the technical aspects of AI and the strategic vision of the CEO or the leadership team? How do you help navigate that?

Deal Daly [00:16:05]:

Well, in advisory work, in working with boards, they’re most interested in benefits and risks. Right. And it’s relatively easy for everyone to describe the benefits, the potential benefits or the theoretical benefits. Right. CEOs and product people are great at telling the story about the value that this is going to bring. It’s harder to articulate a real sense of the risks in a meaningful way, because it tends to get into a lot of what-if scenarios. Well, if this happened, then this other thing would be bad.

And sometimes that gets to be discounted. And one of the differentiators in AI is that unless AI is managed and controlled, it’s like a living organism.

So, for example, in a large language model, because it’s intended as a learning system on its own, it’s going to increasingly bring new information into that model and then begin to make different types of decisions based on that new information. And we’re not really curating that very well. So, there could be biases that are brought into the results of a large language model, and that’s a very new thing.

How will anyone deal with that internally? If you don’t have an expert team yourself and you’re going to rely on a third party, then you really have to have a trusted partner, and I mean really trusted, one that is not going to deliver biased results into your decision-making model that then affects your business. So, the risks associated with this are not just, well, it’s going to cost more because there’s more compute involved and more storage because there’s more data. All of that’s real stuff and already measurable. What’s not measurable is how do you mitigate and minimize the risk of biases and hallucinations and different types of decision-making aberrations that occur as a result of using a large language model? And a lot of it is unknown. I mean, what is the answer to that?

Is the answer, “Don’t worry, we’ll take care of it”? Or how does a third party or your own team demonstrate the ability to control that? Yeah, and that’s why you start small, in a less risky environment, automating something that’s simple and not very dramatic. We already hear stories of chat models that begin speaking inappropriately because they’re taking in data, and some of that data is from customers who aren’t speaking appropriately. The models don’t know that that’s not the right thing to do, so they begin bringing that language into their responses, and that sort of destroys the potential benefit of automating the activity quickly.

So there’s a lot that has to be done, and a lot of it’s unknown right now.

Jeff Ton [00:19:41]:

When you’re talking with boards and CEOs about AI, what are some of the common misconceptions that you run into?

Deal Daly [00:19:53]:

Well, number one, that it’s going to improve our customer experience. And that leads back to the topic I just talked about. It theoretically could improve, because if more data is available to the model than would be available to a person, then in theory, the quality of the response could be better. Still, maybe the manner of that response would not be acceptable. Right. So, the misconception is that we can get the benefits without the downsides.

And that’s the trouble you get into. That’s largely the misconception: we’re going to get the benefit, but how bad could the downside really be? And then all of a sudden you find out it’s bad and we have to stop and redo this.

Jeff Ton [00:20:53]:

Yeah, I think you touched on this a little bit ago, and I’d like to dive a little bit deeper into it. This is different than a typical IT project, right? You’re not just going out to get a new storage array that you’re bringing in. You’re bringing in, I think you called it, a living organism. And we know it’s still technology, but it does learn and grow. Right. So, how do you approach this differently than a standard IT project?

Deal Daly [00:21:32]:

Well, one of the things I learned, and I’ll give Intuit credit for this, is we follow a pattern of core and context. And core is developing capabilities that you have in your business that are specifically required for your business. Like, I have to be good at this. My business has to be good at X, Y, and Z in order to succeed. And the countervailing thing would be that it isn’t something I would have somebody else do for me.

Context or contextual things would be things that I need to get done, but I don’t need to do them. I could have somebody else do them.

It’s like if you’re building a big business and you need a telemarketing center, do you build your own? No. You could rent one.

You can subscribe out to a service because you’ve determined that it’s not central to your business. I think the thing with AI is people are going to have to figure out: is it core to their business or is it context? And the trouble with making it context is that for a third-party service we know, we can sort of put it in a box; we know exactly what it is and how to measure it. With the third-party things we subscribe to, like SaaS services, we know the value they provide, we know what level of mistakes they might make, we know what they cost, and so forth.

If you decide that AI is context or contextual to your business and give it to a third party to do, how do you manage it? How do you know it’s doing the right thing? How do you know those things? Because I would argue that many of them right now couldn’t tell you how they would do it, manage it themselves as third parties, right? Yeah, they’re just building things.

So, I think the core-versus-context discussion is a really important one. Some businesses may say that, based on our business and what we do, artificial intelligence is going to be central to what we do. Well, in that case, you can embark on a strategy where you go and hire the best possible AI, machine learning, and data analysis capabilities into your company, curate them, and so forth. Many big companies do that already, right, when they want something inside, because they want to do it the best they possibly can. They want to create competitive advantages within that function, so the way they do it is better than the way somebody else does it.

If it’s contextual, then you have that different problem where you say, I’m going to have a third party do it, but how do I manage it? I don’t know yet. Yeah.

Jeff Ton [00:24:34]:

I’ll call it typical project management. I know we’ve got waterfall versus agile and that back and forth, but how do typical things like project timelines and resource requirements differ in an AI project versus a new storage array?

Deal Daly [00:24:55]:

Well, first of all, there are known technologies, and there are kind of predictable project paths for those types of technologies. If you’re buying a new telecom system or a new network as a service, many people have done this before. The project plans already exist. They’re templated, they’re applicable regardless of which technology you’re going to use. Sometimes, as products become differentiated, let’s say in storage, for example, if you do hardware-based storage, then your project plan is going to look a certain way. If it’s software-based storage that’s going to be resident in the cloud, that project plan will look somewhat different, but it’s going to be largely the same.

AI has different elements to it. Right. For an AI project, people have to map out all the things that need to be figured out, and those are only just getting templated. The first thing you have to do is make decisions about whether you’re owning it as core or treating it as context handled by a third party. And if you’re going to own it, what are the pieces and parts of owning it? Are you going to build your own large language model? Are you going to use a public one? Are you going to have someone build one for you that they curate?

So, there are different types of decisions you’d make with that technology, much like when we started with cloud projects. How do you evaluate a cloud project if nobody’s ever done one before? But inherent in that is the same storyline that we had with cloud, which is: if I do this, what are the risks that I’m going to confront? AI is the same thing, right? We’re using a new operational paradigm to drive benefits and cost savings and such to the business. What are the potential downsides? What are the risks that need to be mitigated? So, my conversations with product officers, CEOs, and revenue-generating staff, for example, are all about how do we optimize what you’re doing? How do we make your thing the best possible thing we could be doing?

My conversations with general counsels, chief risk officers, and CFOs are about the other side of that. On the one hand are all these big plans about what we’re going to do; I have to be the one that at least stokes the flame around the risks. Okay, here are some of the potential known risks, Mr. or Ms. CFO and general counsel. You need to tell me how important these risk factors are, because these are our risk factors. How important are they? Do you not care about them right now because you just want to get the benefits? You still need to be in this conversation, understanding and knowing about it, so that we do the right thing. All of this then has to be input into the project plan.

Because at the end of the day, when we deliver something, we don’t want the general counsel coming down and saying, oh, that bad thing happened. How come I wasn’t engaged to help you mitigate that?

So it has to be across the C-suite evaluation.

Jeff Ton [00:28:34]:

Yeah. Well, and it sounds like when you’re going down this path, I love what you stated earlier about starting small, starting with a proof of concept. Start with something that’s kind of in your comfort zone. And I think when you think about this as a living organism, we’ll use that again, I wish our audience could see my air quotes. Are there other unique considerations that we need to keep in mind as we head down this path into AI and large language models?

Deal Daly [00:29:13]:

I think the biggest thing is that a lot of technologies are relatively static between changes that are made to them. So, if you buy a storage system, then until the vendor upgrades it or makes a maintenance change, or you customize it in some way, you can be pretty well assured that it’s going to be the same today as it was yesterday, barring hardware issues and malfunctions and so forth.

But it’s predictable and it’s going to be very stable. With AI, because the results may change over time based on new data being input into the language model, it can’t be left alone. You can’t just say, oh, okay, nobody’s changed it. Well, yeah, it’s going to be changing all the time.

Jeff Ton [00:30:11]:

Right.

Deal Daly [00:30:12]:

And it may be changing so slightly that you won’t notice it until something becomes egregious.

Jeff Ton [00:30:19]:

Right.

Deal Daly [00:30:19]:

So, I think the ability to manage that is key. If you have a third party doing this, you should be asking them: how are you going to detect how this model changes over time? Will you be able to tell me how it’s changing and demonstrate examples of how it’s changing, so that I can assess whether I like that or not? And other proofs, like, for example, how do I know that on day 37 we haven’t arbitrarily added PII data into this large language model? Because if I’m under a regulatory scenario that requires that I don’t touch PII data, then I can be on the wrong end of the deal here. So, they can’t say, as with other systems, yes, it’s stable and it doesn’t need to be addressed every day because I don’t need to worry about it changing. In the AI world, you need to understand whether it’s changing and how it’s changing.

Jeff Ton [00:31:31]:

Boy, it sure turns our change control process and world upside down, doesn’t it? Because you’re not bringing a change request in and saying, hey, I’m going to implement this patch or I’m going to do this upgrade. It’s constantly doing that.

Well, Deal, we have hit time here, and as I warned you, we are all about action here on Status Go. And we want to leave our listeners with a very clear call to action. What are one or two things our listeners should do tomorrow because they listened to our conversation today?

Deal Daly [00:32:11]:

Okay, two things then. The first is to align the potential AI projects to business strategies. Require that you know where they fit and what value they’re bringing where they fit. And the second is, as you evaluate them, you have to think about six streams of impact: people and skills; what are the technologies; what processes do they affect; how does it affect the culture of your organization; how does it impact the financial model that you use; and what are the security implications?

Those are my transformation streams that I have in my head. How does it affect all of those streams? And then you’ll be able to create a roadmap for the future.

Jeff Ton [00:33:03]:

That’s excellent. Could you repeat those six streams again? I want to make sure our listeners catch them.

Deal Daly [00:33:08]:

Sure. People and skills. That’s one. Technologies. What are the processes, both technical and business? What’s the cultural impact? What are the financial model changes that might be incurred? And what are the impacts on security and risk?

Jeff Ton [00:33:28]:

I love that. I think that’s a great template, a great map to use as you’re thinking about any project, but specifically this new world of AI and large language models.

Deal, thank you so much for carving out time to talk with us today. I really appreciate it. I’ve enjoyed reconnecting after all these years, and hopefully, it won’t be ten years before we talk again.

Deal Daly [00:33:54]:

Thanks, Jeff. It was great chatting with you today.

Jeff Ton [00:33:57]:

Thank you to our listeners. If you have a question or want to learn more, be sure to visit intervision.com. The show notes will provide links and contact information. This is Jeff Ton for Deal Daly. Thank you very much for listening.

Voice Over – Ben Miller [00:34:15]:

You’ve been listening to the Status Go podcast. You can subscribe on iTunes or get more information at intervision.com. If you’d like to contribute to the conversation, find InterVision on Facebook, LinkedIn or Twitter. Thank you for listening. Until next time.