Status Go: Ep. 227 – Myth Buster: The Cloud is Not Reliable – Expert Perspective | David Gaffney

Summary

In this episode of “Status Go,” host Jeff Ton interviews cloud expert David Gaffney to debunk the myth that the cloud is not reliable. With over 20 years of experience in the managed services space, David provides a comprehensive perspective on the reliability of cloud infrastructure, sharing insights on fault isolation, availability zones, multi-region architectures, and more. He discusses how cloud providers offer tools and guidance to design resilient workloads, the cost implications of achieving higher availability targets, and the importance of workload design and best practices. If you want to learn the truth about cloud reliability and how it compares to traditional on-premises solutions, this episode is a must-listen.

About David Gaffney

David is the Cloud Product Director at InterVision and holds a number of technical certifications, including the AWS Solutions Architect Professional. He has worked for managed service providers for the past 20 years, including roles in Operations, Pre-Sales Engineering, Solutions Architecture, and Product. In his free time, he likes to read, travel, and ride bikes.


Episode Highlights

[00:00:00]: Do You Need Five Nines?

[00:00:36]: Introduction

[00:02:22]: David Gaffney’s Journey

[00:03:56]: Is the Cloud Reliable? Compared to What?

[00:07:01]: Reliability and Availability, What’s the Diff?

[00:08:38]: Not one, but Three Questions

[00:10:00]: What Does Architecture Have to Do With It?

[00:13:46]: Before or After the Lift and Shift?

[00:15:57]: Whose Fault Is It Anyway?

[00:20:09]: Diving Deeper Into Availability Targets

[00:24:11]: Network and Reliability

[00:26:33]: Podcasting and Reliability

[00:27:23]: Disaster Recovery’s Role in Reliability

[00:28:58]: Time to Bust! That! Myth!

[00:31:51]: Action!

[00:33:18]: Thank You and Close


Episode Transcript

David Gaffney [00:00:00]:

When you’re designing your workload, it’s really important that you gain a consensus and agreement as to how mission-critical this application is. Do I need five nines, or can I get away with three nines? Because there’s cost implications to each of these.

Voice Over – Ben Miller [00:00:19]:

Technology is transforming how we think, how we lead, and how we win. From InterVision Systems, this is the Status Go podcast, the show helping IT leaders move beyond the status quo, master their craft, and propel their IT vision.

Jeff Ton [00:00:36]:

Welcome back to our continuing series, Myth Busters: Cloud, Security, and Innovation. Like the much more famous MythBusters TV show, we’re going to dive into several myths and, through interviews, case studies, and data, bust them. Follow us over the next several months as we share blogs, infographics, and, of course, podcast episodes. During the second week of each month, we will interview a peer CIO, CTO, or business owner who has successfully busted the myth. Two weeks later, we will hear from an InterVision expert who will further destroy that myth.

This month, we are focused on busting the myth that the cloud is not reliable. This myth has its roots in many factors: early cloud service outages, lack of control, data security concerns, and, frankly, confirmation bias. Not to mention, we, as humans, are resistant to change…and we may have staff and vendors advising us against the cloud because they perceive it in their best interest to do so.

Wow!

No wonder there are some who are digging in their heels. Our guest today is David Gaffney. David is the Cloud Product Director here at InterVision. He works with our products and services to ensure our customers’ migration to and experience in the cloud meets and exceeds their expectations.

David is here today to help us bust that Myth. David. Welcome to Status Go, my friend.

David Gaffney [00:02:19]:

Thanks, Jeff, glad to be here.

Jeff Ton [00:02:22]:

Hey, I always love to start by having our guests talk a little bit about their career journeys as a way of introduction and also just to let people know who you are. So why don’t you start there? Tell us a little bit about your background and your journey. What brings you here today?

David Gaffney [00:02:41]:

Thanks, Jeff.

So, I’ve been in the managed services space for about the past 20 years, primarily as a pre-sales solution architect and product manager. Over that time, I’ve designed a number of managed compute, storage, and networking environments, and I have been building in AWS since 2017.

So, ten years ago, I was working for a different MSP, and I designed and implemented a multisite private cloud solution. I did a number of these, but this one stands out. It was beautiful but very expensive. It included private cloud VMware stacks in two regionally diverse data centers, dedicated storage arrays with redundant controllers, RAID 6, bi-directional replication, local load balancing, global load balancing, off-site backups, dedicated network links, firewalls…it took months to implement, was very expensive, and required a dedicated team to operate.

What I love about the cloud is that the same or a very similar architecture can be designed and deployed within days in the cloud, with similar reliability, at a much lower cost to the end user. So that’s the beauty of the cloud: access to resilient, enterprise-class technology on demand at a fraction of the cost of deploying physical hardware yourself.

Jeff Ton [00:03:56]:

Awesome. Well, it is great to have you here. And I know our tenure at Intervision overlapped by several years, and I always appreciated the insights that you would bring. You were able to bring the technology perspective but also the customer perspective, which was incredibly valuable.

So, it’s great to have you here on Status Go. Let’s dive into this myth; sometimes I find it hard to believe that people still hold it as true: that the cloud is not reliable. So let me ask you right off the bat: is the cloud reliable?

David Gaffney [00:04:38]:

Well, when they say that the cloud is not reliable, I always think: not reliable compared to what? And most people who ask this question are trying to compare the reliability of the cloud to some sort of on-premises data center or colocation environment. So, in the on-premises world, resiliency means having a primary and a secondary data center, each with its own backup power. That way, if one site’s lost, you can still run your core systems in some capacity while the primary site is restored.

In the cloud world, resiliency means architecting workloads that run across multiple Availability Zones or regions. So, what’s more reliable: on-prem data centers or the cloud? It’s a simple question, but getting a straight answer is difficult.

In 2022, the Uptime Institute, an organization that certifies data centers for resiliency, released its Outage Analysis report and concluded that investment in cloud technologies and distributed resiliency has helped reduce the impact of site-level failures. But it did not directly answer the question of which is more reliable. It did, however, provide some insights into the causes of major IT outages. And it turns out, as you probably suspected, power failures are listed as the most common cause, cited 43% of the time. And usually it’s a result of UPS failures.

So, the problem with definitively answering the question of which is more reliable, cloud or on-prem, is that private companies don’t make their records on downtime public. And even if they did, you couldn’t rely on the data.

Jeff Ton [00:06:09]:

We have a reliability problem with the data.

David Gaffney [00:06:12]:

Well, exactly. There are no standards or uniformity. And in terms of how you define downtime, there’s no universally accepted definition that’s applied uniformly to every company within IT. So, companies might brag about availability percentages close to 100%. But quite often, those same companies will tweak the calculation of uptime to exclude random events, external forces, or anything they don’t want to include. They’ll exclude outages that went unnoticed by anyone except the IT department. They’ll shift outages to the responsibility of vendors or third parties. And the big one: they’ll exclude scheduled maintenance. So if I said it was going to be down, it doesn’t count, right? Yeah. So, when you exclude all these types of outages, it’s a lot easier to achieve those high levels of uptime.

Jeff Ton [00:07:01]:

Well, and you’ve used those words in your explanation there, David, reliability and availability. What’s the difference between the two?

David Gaffney [00:07:12]:

Yeah, good question. Reliability is really a measure of the average continuous operating time between failures, sometimes referred to as the mean time between failures. Availability is the percentage of time a workload is available for use. So, think of it as time available over total time, expressed as a percentage.

And then there’s resiliency, which is often used as a proxy for talking about these things. But resiliency is the ability of a workload to recover from an infrastructure or service failure and then dynamically acquire resources to meet demand. So, they’re often used in the same context, but they have different meanings. And the truth is that cloud providers don’t publish numbers on reliability, but their service level agreements are tied to availability. So, when we’re talking about reliability in the cloud, we really need to focus on availability, because that is where the rubber meets the road, and that’s where you’re going to be able to hold the cloud provider’s feet to the fire when it comes to achieving their availability targets, because that’s where their SLAs are.
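To make the two definitions above concrete, here’s a small sketch of the arithmetic. The MTBF and MTTR figures are illustrative only, not numbers from the episode or from any provider:

```python
# Availability from reliability figures: a workload that runs 1,000 hours
# between failures (MTBF) and takes 1 hour to restore (MTTR) is available
# MTBF / (MTBF + MTTR) of the time. All figures here are made up.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the workload is available for use."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def downtime_per_year_hours(avail: float) -> float:
    """Expected yearly downtime implied by an availability fraction."""
    return (1.0 - avail) * 365 * 24

a = availability(mtbf_hours=1000, mttr_hours=1)
print(f"availability: {a:.4%}")                         # ~99.9001%
print(f"downtime/yr:  {downtime_per_year_hours(a):.1f} h")
```

This is why the distinction matters: an SLA quotes the availability percentage, while reliability (MTBF) is one of the inputs that produces it.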

Jeff Ton [00:08:18]:

Just like back in the day when I worked for BlueLock and eventually InterVision Systems, it was all based, at least in the IaaS portion of what we did, on availability.

David Gaffney [00:08:38]:

Yeah. And really, Jeff, it’s kind of like three separate topics, right? When you’re asking whether the cloud is reliable, it’s three different but related questions. One: is the underlying cloud infrastructure reliable? Well, the answer is yes. AWS currently offers a 99.99% service availability guarantee for EC2 instances when deployed in two different Availability Zones. Likewise, Azure offers a four-nines SLA for virtual machines when running in two different AZs.

So, yes, the cloud is reliable, but it also relies on the end user to configure the environment properly.

And then the second question is, did you design your workload to function reliably in the cloud? So, it’s recommended that workloads running in the cloud are designed to be fault-tolerant, and cloud providers provide recommendations for workload resiliency in the cloud. So, it’s important that your workload be a good fit for cloud reliability.

And then, finally, did you configure your cloud infrastructure for reliability? So, the configuration of the cloud environment is the customer’s responsibility. Cloud providers have created best practice guidelines and reference architectures that maximize reliability in the cloud. And they’ve also provided tools to help you build highly resilient applications. But in the end, you’re responsible for the resiliency of your own application. So, it’s important to understand how these topics overlap and interconnect.

Jeff Ton [00:10:00]:

Well, you mentioned Workload and Workload architecture there, and you touched on this. But take us a little bit deeper. Why is my architecture important when it comes to resiliency?

David Gaffney [00:10:18]:

You know, that’s a great question, Jeff. I mean, cloud reliability really starts with your workload architecture. Whether you’re designing a new application from scratch or planning on deploying an existing application in the cloud, you need to use best practices for workload design when you’re deploying them in the cloud.

So, legacy monolithic applications are usually not a good fit for the cloud. Instead, it’s better to segment your workload into smaller components. In fact, it’s recommended that cloud workloads use a service-oriented architecture, sometimes called SOA, or a microservices architecture. And even if you start with a monolithic architecture, it’s important that it be modular so that it can ultimately evolve into SOA or microservices over time.

But with a service-oriented architecture or microservices architecture, individual components of the application communicate with each other using service interfaces or APIs. So, think of these as just a standard way of communicating between different parts of the app. When you split an app into separate components like this, you can perform independent functional releases, with independent software release cycles. And in addition, these individual components can scale independently without requiring the other services to scale at the same time.

So, if a portion of your app needed additional resources, that portion could scale up, but the rest of the app wouldn’t need to at the same time. And when you break up the monolith into individual components, you end up creating these dependencies. So, certain application components rely on other application components to complete their tasks. These dependencies can either be tightly coupled or loosely coupled.

In a cloud environment, you want to implement loosely coupled dependencies. Implementing loose coupling between dependencies isolates a failure in one from impacting another. An example of loose coupling would be using a load balancer to route traffic to only healthy compute instances. Another example would be using a queuing service like Amazon’s SQS to have requests go into a queue and have compute instances listening for messages in the queue. If any of these compute instances fail, the message remains in the queue for another instance to process.

The benefit of this approach is to isolate the behavior of one component from the components that depend on it, and that increases the resiliency and agility of the app. Failure of one component is isolated from the others.
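The queue pattern David describes can be sketched in a few lines. This is a toy stand-in for SQS using an in-memory queue (real SQS uses visibility timeouts and explicit deletes rather than re-queuing), but it shows how a failed worker doesn’t lose work:

```python
import queue

# Loose coupling via a message queue: producers enqueue work; any worker can
# pick it up. If a worker dies mid-task, the message goes back in the queue,
# so one component's failure is isolated from the rest of the system.

def process(q: queue.Queue, worker_ok: bool) -> list:
    """Drain the queue; an unhealthy worker re-queues its message and stops."""
    done = []
    while not q.empty():
        msg = q.get()
        if not worker_ok:
            q.put(msg)      # simulate a crash: the message survives in the queue
            break
        done.append(f"handled:{msg}")
    return done

q = queue.Queue()
for i in range(3):
    q.put(i)

process(q, worker_ok=False)           # unhealthy worker: nothing is lost
results = process(q, worker_ok=True)  # a healthy worker drains the queue
print(sorted(results))                # ['handled:0', 'handled:1', 'handled:2']
```

Every message is eventually handled even though the first worker failed, which is exactly the failure isolation the loose coupling buys you.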

Finally, make your services stateless if possible. Now, again, this is if possible. When I talk about state, I’m talking about session state here. If you’re not familiar with state: when users interact with an application, they often perform a series of interactions that form a session. A session is unique data for a user that persists between requests while they’re using the application. A stateless application is one that doesn’t need to know about these previous interactions and doesn’t store that information.

So, stateless applications are great because they allow any service instance to be replaced at will without impacting the end user. But if you can’t make your application stateless, another alternative is to offload session information into a caching service like ElastiCache or a database such as DynamoDB. Once you’ve designed your app to be stateless, you can use serverless compute services such as Lambda or Fargate.
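Here’s a minimal sketch of the session-offload idea. A plain dict stands in for the external store (ElastiCache or DynamoDB in practice); the instance names and session data are invented for illustration:

```python
# Offloading session state: the app instances themselves hold nothing, so any
# instance can serve any request and any instance can be replaced at will.
# The store is a plain dict here purely for illustration.

session_store: dict = {}   # in production: a cache or database, never memory

def handle_request(instance_id: str, session_id: str, item: str) -> list:
    """Any instance can serve the session because state lives in the store."""
    cart = session_store.setdefault(session_id, [])
    cart.append(item)
    return cart

handle_request("instance-a", "sess-42", "book")
cart = handle_request("instance-b", "sess-42", "pen")  # different instance, same session
print(cart)  # ['book', 'pen']
```

Because instance-b sees the state instance-a wrote, either instance can fail or be replaced mid-session without the user noticing.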

The bottom line is no matter how good cloud reliability gets, application developers have to design software architectures to tolerate cloud failures. Cloud is reliable, but workloads can be impacted by the failure of the underlying components. So, the key is to anticipate the failures while the software architecture is being designed.

Jeff Ton [00:13:46]:

So, plan for the failures as you’re building this.

Now, I’m going to put you on the spot here just a little bit, David, and when we talked a couple of episodes ago, a few episodes ago we talked about cloud migrations and the lift and shift versus the refactoring and the re-architecting. This work that you’re talking about, this workload architecture, is that happening after a lift and shift, or do we need to do some of this before a lift and shift?

David Gaffney [00:14:20]:

Honestly, it’s happening both. Right, so there are applications that are just lifted and shifted into the cloud because of time constraints or cost, or maybe they’re not mission-critical apps. And so, if there’s some hiccups, it’s not a big deal.

But ideally, in a perfect world, you’re going to make sure that your application is suitable for the cloud, and if it needs to be refactored, you’ll do that upfront. In reality, does that happen all the time? It depends. It depends on how mission-critical the app is and what your timeline is for doing the migration.

But I think the important thing is, unless you have these skills in-house, work with a trusted partner to help you evaluate workload design and make sure that it’s a good fit for the cloud.

Jeff Ton [00:15:02]:

Yeah, it’s always great to have a guide, right, to kind of help you navigate through all this.

Well, we’re going to pause right here, David, and listen to a word from our sponsor and your employer, InterVision Systems. And then, when we come back, I want to dive a little bit deeper into fault isolation and availability targets and continue our discussion there. So, let’s listen to what InterVision has to offer.

Voice Over – Ben Miller [00:15:36]:

Unlock the power of more. With InterVision Systems, we provide the cutting-edge technology and expert guidance you need to take your business to the next level. Don’t settle for less. Choose InterVision Systems and discover what’s possible. Contact us now to learn more.

Jeff Ton [00:15:57]:

And if you do want to learn more, visit intervision.com. Or, if you want to look into some of the myths that we’ve been busting, visit Intervision.com/myths. That’s m y t h s.

Today, we’re talking with David Gaffney, the senior Product Director of Cloud for InterVision Systems. And we’re busting the myth that the cloud is not reliable. And so, we’re having this great discussion about workload architecture. And one of the things that David mentioned right before the break was the importance of fault isolation. And I want to take a little bit deeper dive into what is that and how do we achieve that.

So, David, what is fault isolation?

David Gaffney [00:16:47]:

Yeah, Jeff. So, fault isolation is just a way of limiting the impact of an infrastructure failure. So, a key design principle for deploying workloads in the cloud is avoiding single points of failure in the underlying infrastructure. Cloud providers have designed their data centers and their services to isolate potential faults in certain Zones. So, Availability Zones, as they’re called, are really data centers with independent physical infrastructure. Each one has dedicated connections to utility power, backup generators, battery banks for UPS, independent mechanical services such as cooling and fire suppression, and independent network connectivity. This way, any fault in any of these systems will only affect one Availability Zone.

And regions are made up of multiple Availability Zones. Each Availability Zone is isolated from the others but close enough to allow high-throughput, low-latency network connections between them. So, for example, you can synchronously replicate data between databases in different Availability Zones. By running your workload in multiple Availability Zones, you can protect it from faults in power, cooling, and networking, and from most natural disasters like fire and flood.

One example of multi-AZ functionality is using a load balancer to distribute traffic across compute instances located in multiple Availability Zones. The load balancer can detect when instances are not available and then send traffic to the remaining ones.

Cloud providers have a number of tools that you can use to make your workload more resilient, tools that would be really costly, or at least more difficult, to implement in an on-prem or colocated environment. So, they make it easy to implement self-healing. For instance, you can deploy your instances or containers using auto-scaling. When you create an auto-scaling group, you define the launch template for the instance that you want to scale. If one of the instances in the auto-scaling group becomes unhealthy, the automation will deploy a new instance based on that launch template.

Or if you don’t want to or can’t use auto-scaling groups for some reason, you can still implement self-healing by creating an alarm, a CloudWatch alarm, that will automatically deploy a copy of the instance based on the default configuration if the instance fails a system status check, for example. So, you can further isolate faults by combining auto-scaling groups with load balancing. This allows you to spread the load across instances and Availability Zones. This ensures that if an instance fails or if the entire Availability Zone becomes unreachable, your workload will still be able to handle these requests.

And then there’s the multi-AZ feature of AWS’s Relational Database Service, RDS. When you configure RDS for multi-AZ, you have a primary database in one AZ and a secondary database in a different AZ. When the primary database fails, RDS automatically switches traffic to the standby database. You can also set this up with multi-region replication, and it works in a similar way.

This kind of functionality, while somewhat possible in an on-prem environment, becomes much more expensive when not done in the cloud. For most workloads, the multi-AZ strategy within a single region is enough to meet Availability goals. Multiregion architectures are really for workloads with either extremely high availability requirements or are part of a disaster recovery strategy.

Jeff Ton [00:20:09]:

We’ve talked about workload design, and we’ve talked about Availability Targets. What else do we need to understand about availability? Where does the network come into play here?

David Gaffney [00:20:21]:

Well, you mentioned availability targets, and I want to drill down on that a little bit, because that is really key to workload design. We talked about fault isolation, but when it comes to workload design, setting an availability target is really important. So, when you’re sitting down doing your workload design, you need to figure out what target you’re shooting for. And this is really important because what you’re trying to do is set expectations, both internally and externally, as to how the application or the workload is going to function. These decisions help drive the application design process, because what you’re really doing during design is evaluating different technologies and weighing various trade-offs.

These workloads need to have specific resilience targets so that they can be properly monitored and supported. And while you’re going through the design process, you need to define those targets to inform the decisions about what you’re going to choose. Fortunately, the cloud providers have published guidance and recommended architectural designs for specific availability targets.

So, for example, you can achieve two nines, or 99% availability, simply by deploying your workload, including the database, in one region and one Availability Zone. In this case, it could simply be deployed on a single compute instance.

You can achieve three nines or 99.9% Availability by deploying in one region and two Availability Zones. In this case, you’re using load balancing, auto-scaling, and Amazon’s relational database service RDS with the multi-AZ configuration. So, RDS would automatically promote the secondary database to the primary in the event that the primary Availability Zone fails.

Then you can even achieve four nines, or 99.99% availability, in a single region using three Availability Zones. In this case, what you’re doing is configuring each Availability Zone to handle 50% of peak capacity. So, if peak load requires four compute instances, you’d use a minimum of two instances in each of the three Availability Zones, for a total of six instances. That way, if one Availability Zone fails, you can still meet peak capacity. The database tier would probably use RDS Aurora with read replicas in all three Availability Zones.
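The capacity math in that example generalizes: size each AZ so that the surviving AZs can absorb peak load if one AZ is lost. A quick sketch (the function name is ours, not an AWS term):

```python
import math

# Static-stability sizing: each AZ must carry enough spare capacity that the
# remaining AZs can cover peak load when one AZ fails.

def instances_needed(peak: int, n_az: int) -> tuple:
    """Return (instances per AZ, total instances) tolerating loss of one AZ."""
    per_az = math.ceil(peak / (n_az - 1))   # the surviving AZs must cover peak
    return per_az, per_az * n_az

per_az, total = instances_needed(peak=4, n_az=3)
print(per_az, total)  # 2 6 -- matches the example: 2 per AZ, 6 in total
```

Note the over-provisioning cost this implies: with two AZs you’d need double the peak capacity, which is one reason three AZs is the common recommendation.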

So, these are defined architectures, provided by the cloud providers, that target specific availability levels. And then there are even multi-region architectures. You can achieve three and a half nines by deploying the workload in two different regions. This is called the warm standby approach. Basically, you deploy the workload to both regions, but in the secondary region, the passive site, you scale down the architecture. It’s very similar to the four-nines scenario but includes the regional failover piece. Again, the database tier is going to use RDS Aurora cross-region replicas. And then you would fail over to the secondary region manually.

That’s why it’s three and a half nines: because you have to make a decision to fail over.

Finally, five nines of availability is achievable using an active-active approach across multiple regions. The architecture here is similar to that of the four-nines scenario, but you’re including regional failover. In this case, RDS would be configured with multi-AZ, and the regional failover piece would be automated.
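Why can two four-nines regions reach five nines? Under a simplistic independence assumption, an active-active pair only fails when both regions fail at once. Treat this as a back-of-the-envelope upper bound; real systems share dependencies (DNS, deployment pipelines) that the model ignores:

```python
# Naive independence model of multi-region active-active availability:
# the combined system is down only when every region is down simultaneously.

def combined_availability(a: float, regions: int = 2) -> float:
    """Availability of N independent active-active regions, each with availability a."""
    return 1.0 - (1.0 - a) ** regions

single = 0.9999                       # four nines in one region
pair = combined_availability(single)  # two regions, active-active
print(f"{pair:.8f}")                  # 0.99999999 -- beyond five nines on paper
```

On paper the pair exceeds five nines; in practice the automated failover machinery itself becomes part of the availability budget, which is why the episode stresses design and testing rather than arithmetic alone.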

So, this is possible. And the providers actually publish best-practice architectural designs for these high levels of availability. But the bottom line is, Jeff, when you’re designing your workload, it’s really important that you gain a consensus and agreement as to how mission-critical this application is. Do I need five nines, or can I get away with three nines? Because there’s cost implications to each of these.

Jeff Ton [00:24:11]:

I like that, and thank you for taking us on that deeper look, because then it really does become a business decision as opposed to a technology decision. Pick the number of nines you want, or better stated, that you need, and architect your workload that way.

Now, one of the things that you talked about earlier was, in a lot of cases, many outages in the on-prem world come from power outages. But another source of outage seems like the network. So, as we’re talking about resiliency and availability, where does the network play?

David Gaffney [00:24:53]:

Yeah. Great question, Jeff. So, I mean, it’s great that your application is running and healthy in the cloud, but if nobody can reach it, then it appears that it’s down. Right? So, you have to make sure that the networks and pathways that go into your cloud environment are highly available and stable. This really starts with using highly available DNS, such as Route 53, so that your customers can resolve domain names to the corresponding IP addresses.

Absolutely. You want to configure the health checks that come along with the DNS so that if one zone or one resource is unavailable, it’ll pull it out of rotation and fail over. And then, you can even use DNS health checks to control DNS failover from the primary region to the secondary region. So, highly available DNS is key.
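The failover-routing behavior described here can be sketched in a few lines. This mirrors the idea behind Route 53 failover records, but the endpoints, health map, and function are invented for illustration:

```python
# DNS failover sketch: resolve to the primary endpoint while its health check
# passes, otherwise fall back to the next record in priority order.

def resolve(records: list, is_healthy) -> str:
    """Return the first record in priority order whose health check passes."""
    for record in records:
        if is_healthy(record):
            return record
    raise RuntimeError("no healthy endpoints")

records = ["10.0.1.10", "10.0.2.10"]               # primary first, then secondary
health = {"10.0.1.10": False, "10.0.2.10": True}   # primary is currently failing

print(resolve(records, lambda r: health[r]))  # 10.0.2.10
```

The point is that clients keep resolving the same name; only the answer changes, so failover is transparent to the users of the application.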

Second, especially if you’re delivering content on your site, use a content delivery network. These are a lot less expensive than they were ten years ago, and the cloud providers have really driven that home. You can use tools like Amazon CloudFront to distribute content with low latency and high data transfer rates. What’s important here is that you’re shifting the load away from your primary servers to these CDN edge locations, and that takes the load off your app and allows it to remain stable.

Finally, you need to provision redundant connectivity between private networks in the cloud and your on-premises environments. This means using multiple Direct Connect or ExpressRoute connections, or VPN tunnels between separately deployed private networks. So, if you’re using a VPN appliance that’s not resilient by design, then you need to deploy a redundant connection through a second appliance.

Jeff Ton [00:26:33]:

I love that you talked about the CDN or the content distribution network because I think, and you tell me if I’m wrong, I think we use something like that for the podcast. Right? You can go to Intervision.com/Status-go and click on this episode, and listen to it. But it’s not streaming from our website; it’s streaming from Blubrry.

David Gaffney [00:26:59]:

That’s right.

Jeff Ton [00:27:00]:

They’re the ones that, so that’s basically.

David Gaffney [00:27:03]:

A CDN is really important if you’re talking about streaming content, right? Or high bandwidth content. Right. If you can offload that to a CDN provider, what it does is it takes the load off your servers. So, that makes your core environment more stable.

Jeff Ton [00:27:23]:

Well, we’ve got time for one last area, because we’ve talked about this three-legged stool of resiliency, availability, and recovery. Where does DR…disaster recovery…play in resiliency and availability?

David Gaffney [00:27:42]:

Yeah, Jeff, so they’re both aspects of business continuity, but they’ve got different objectives and methods. Availability is the ability of a system to operate continuously without experiencing failure. Disaster Recovery is about how an organization really recovers from failure. And then Availability uses resiliency to eliminate single points of failure and prevent interruptions. Disaster recovery is about using policies, procedures, and automation to restore critical services after a natural or human-caused disaster.

There are several DR strategies in the cloud, including Pilot Light, Warm Standby, and Multiregion Active-Active.

Earlier, when I was talking about availability targets, I talked through each of the designs so you kind of have a feel for what they are. But what’s interesting is that when you look at solutions like Warm Standby and Multiregion Active-Active, these are both highly available solutions that also include disaster recovery. So, really, at that level, when you’re talking about three/five nines or five nines availability, disaster recovery and availability really kind of overlap.

Jeff Ton [00:28:58]:

Well, DR is near and dear to my heart, coming through the BlueLock family into InterVision. DRaaS was our primary product line, and now I know it’s part of the RPaaS suite there at InterVision, so it’s alive and well.

Well, David, we’ve come to that point where it’s time to really bust that myth. So, if you’re sitting down with a customer or a prospect and they told you that they were not going to leverage the cloud because it’s not reliable, what would you tell them?

David Gaffney [00:29:39]:

I’d say, Jeff, not only is the cloud more reliable, but it’s far more cost-effective to build resilient workloads in the cloud. When most people talk about reliability in the cloud, what they really mean is availability. So, anything up to 99% is easy. The PC under my desk hits 99%, and it’s reliant on Windows 11 and Spectrum Internet. 99% allows more than three days a year of downtime. No sweat. The complexity of HA only becomes apparent when you look at the range above 99%, and the spectrum goes from cheap and easy to expensive and impossible.

For a basic app, 99.9% can probably be achieved by most $50 a month shared hosting solutions. Even the availability for a single EC2 instance comes in at 99.95%, with no additional architecture needed.

But as you plug in the extra nines, you’ll find that things you may not have considered start to become a problem. What’s the availability of the hosting provider’s networking, power supply, ISP, and general facilities? What’s the availability of the hardware, considering maintenance, hardware failure, and human error? And what’s the chance of the whole data center failing?

The point where you have to think beyond the rack is where it becomes incredibly hard. In fact, somewhere around the 99.95% mark is where the cost-performance tradeoff between the traditional on-prem world and the cloud environment starts to diverge significantly. Before this point, you can live in a single data center and maybe just use one or two servers to support your application. After this, you have to start considering a second data center, multiple facilities, expert installation, headcount, and maintenance.

And as you stray into truly HA territory, you’re adding more regions, more availability zones, more load balancers, more instances, more backups. This gets comparatively cheaper and cheaper in the cloud since you don’t have to physically build or buy any of this or hire an army of people to manage it.
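[Editor’s note: the reason adding regions and availability zones pays off is that independent replicas multiply failure probabilities. A quick sketch of that math, assuming the replicas fail independently; the figures are illustrative, not from the episode:]

```python
def parallel_availability(component_availability: float, replicas: int) -> float:
    """Availability of N independent replicas running in parallel.

    The system is down only if every replica is down at once, so the
    combined unavailability is the product of the individual ones.
    """
    return 1 - (1 - component_availability) ** replicas

# Two independent 99.5%-available zones together exceed four nines:
print(f"{parallel_availability(0.995, 2):.6f}")  # vs 0.995 for a single zone
```

This is why each extra zone or region buys a disproportionate jump in nines, and why, in the cloud, that jump costs a console click instead of a second data center.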

The bottom line, if you need the highest levels of availability, the cloud is ready to support you. Attempting to do this on your own would be extremely challenging, not to mention fiscally irresponsible. So, unless you really want to spend millions of dollars building and maintaining your own IT infrastructure, it’s highly unlikely you can compete with the stability and reliability of the public cloud providers.

Jeff Ton [00:31:51]:

There you have it: myth busted! The cloud is reliable when you architect it well and select the availability target you’re looking for. Partner that with DR, and you’ve got an incredibly reliable platform.

Now, this is the fifth myth in this series that we have busted. Next month, we will be wrapping up the Myth Buster series with a special crossover episode. No, we are not going to be on the MythBusters TV show. Better than that. We’re going to be on the LinkedIn Live Digital Dialogue from the Institute for Digital Transformation. Tune in on October 10 to watch it live on LinkedIn, or catch the Status Go episode on October 23.

Okay, David, as you know, we are all about action here on Status Go. What are one or two things our listeners should go do tomorrow because they listened to us today?

David Gaffney [00:32:58]:

Jeff, I’ve just summarized the best practices when it comes to designing and configuring workloads for the cloud. InterVision has experts in workload design, application development, professional services, and managed services for cloud environments. So, if you’re interested in learning more about any of these topics and how we can help, reach out to your InterVision account rep, who can put you in touch with the right people on our side.

Jeff Ton [00:33:18]:

That’s awesome. Thank you so much. That’s a great call to action.

David, thank you so much for carving out time. I know you’ve been incredibly busy, man. We’ve been scheduling this for a couple of months now to get on your calendar, so I appreciate the time, and it’s always great to chat with you, man.

David Gaffney [00:33:40]:

Thanks, Jeff, always a pleasure.

Jeff Ton [00:33:40]:

To our listeners, if you have a question or want to learn more, visit intervision.com or go to intervision.com/myths. If you want to review the show notes for this particular episode, go to intervision.com/status-go. Those show notes will provide links and contact information. This is Jeff Ton for David Gaffney, thank you very much for listening.

Voice Over – Ben Miller [00:34:11]:

You’ve been listening to the Status Go podcast. You can subscribe on iTunes or get more information at intervision.com. If you’d like to contribute to the conversation, find InterVision on Facebook, LinkedIn or Twitter. Thank You!
