By Dennis Wenk, Dimension Data: Cloud computing is rapidly becoming the IT industry’s “new normal,” and with this rise of cloud, organizations must rethink their business continuity strategies. Traditional business continuity tools, processes and strategies were all designed for standardized IT environments, where entire data centers were recovered en masse after some type of catastrophic event.
Today, the threats to an organization are much different. Business continuity solutions require proactive steps that mitigate the operational risks to the organization rather than solutions that simply re-engineer yesterday’s recovery responses.
The business continuity paradigm has clearly shifted. The operational risks for a modern organization now center on the continuity of its IT services. This means that both the challenges and the solutions to business continuity lie unmistakably within IT’s domain. Next-generation business continuity must provide end-to-end capabilities for hybrid IT, enterprise-grade hybrid cloud, seamless operational IT resiliency, and timely, cost-effective recovery.
When visiting with clients, I consistently hear three questions with respect to business continuity and cloud computing:
- Is Cloud Computing an Acceptable Source for Business Continuity Solutions?
- How Should Cloud Services Integrate With the Legacy BC Strategy?
- Once Applications Have Been Migrated to the Cloud, How Should Cloud Service Interruptions Be Managed?
Is Cloud Computing an Acceptable Source for Business Continuity Solutions?
Cloud computing delivers advantages at many levels, but one benefit appears to outshine the rest: cloud has the potential to offer cost-effective business continuity. A few years back, cloud computing was seen as more of a risk than a safe haven. Today, those perceptions are changing as organizations come to understand how cloud computing reduces the likelihood of costly downtime and promotes productivity.
While cloud might not be the right answer for every circumstance, the consumption model of cloud computing is quite efficient and cost-effective for short-term use, such as failover needs. Clouds’ ability to offer shared infrastructure on demand has evolved from traditional hosting models and rigid legacy architectures into a more virtualized approach. Organizations can invest in various forms of cloud-based services that provide continuous data access even during an outage. This characteristic provides an invaluable service for mitigating service interruptions as well as disasters.
Cloud is an acceptable, cost-effective option even for organizations with stringent business continuity requirements, because it can protect physical, virtual, and cloud environments running on multiple system platforms.
How Should Cloud Services Integrate With the Legacy BC Strategy?
Businesses that run traditional on-premises applications such as databases, ERP, CRM, HR, and payroll risk losing access to both the data and the application in the event of an outage. Hybrid IT environments that use a mixture of in-house-managed and software-as-a-service applications remain better protected from a local IT outage, as long as any localized data is replicated to the cloud in real time and network connectivity remains in place.
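The condition above, that localized data must be replicated to the cloud in real time, can be checked against a recovery point objective (RPO). A minimal sketch, with hypothetical timestamps and an assumed RPO, might look like this:

```python
from datetime import datetime, timedelta

def replication_within_rpo(last_local_commit, last_cloud_commit, rpo):
    """Check whether the cloud replica's lag is within the recovery
    point objective. A hybrid setup only protects against a local
    outage if the data already in the cloud is no older than the RPO."""
    lag = last_local_commit - last_cloud_commit
    return lag <= rpo

# Hypothetical timestamps: the cloud replica is 30 seconds behind.
local = datetime(2015, 6, 1, 12, 0, 30)
cloud = datetime(2015, 6, 1, 12, 0, 0)

print(replication_within_rpo(local, cloud, timedelta(minutes=5)))   # within a 5-minute RPO
print(replication_within_rpo(local, cloud, timedelta(seconds=10)))  # violates a 10-second RPO
```

In practice the timestamps would come from the replication tooling itself; the point is that “real-time” only means “protected” relative to an explicitly stated RPO.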
IT infrastructure meshes together applications, middleware, servers, networks and storage that deliver business services. An application by itself is not a service. A business service may depend on multiple applications, as well as underlying databases, firewalls, physical and virtual servers, disparate operating systems, networks, and storage both inside and outside of the firewall. If any of these pieces encounters a problem, the entire business service is in jeopardy. Unfortunately, many organizations lack a clear view of how their services are delivered, and so fail to fully understand that complexity and the associated business risk.
Cloud adoption therefore poses particular challenges for organizations whose businesses depend on legacy applications, some of which may have platform affinities and rely on programming languages, system libraries, and execution environments that aren’t fully supported by, or readily available in, the cloud. While cloud computing is designed to mask complexity from the end user, it does not eliminate that complexity. Hybrid IT can have many interdependencies, and mapping those interdependencies is an essential prerequisite to ensuring continuity of services that fall more and more outside the corporate firewall.
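The interdependency mapping described above can be made concrete with a simple model: record which components each business service depends on, then ask which services are jeopardized when any one component fails. The service and component names below are illustrative, not from any real inventory:

```python
# Map each business service to the components in its delivery path.
DEPENDS_ON = {
    "order-entry":    ["crm-app", "oracle-db", "firewall-a", "san-1"],
    "payroll":        ["hr-app", "sql-db", "san-1"],
    "web-storefront": ["web-tier", "crm-app", "cdn"],  # cdn sits outside the firewall
}

def services_at_risk(failed_component):
    """Return every business service whose delivery path includes
    the failed component -- i.e., every service now in jeopardy."""
    return sorted(s for s, deps in DEPENDS_ON.items()
                  if failed_component in deps)

# A single shared storage array puts two unrelated services at risk.
print(services_at_risk("san-1"))
```

Even a sketch this small exposes the key insight: components shared across services (here, a storage array) create risk concentrations that no per-application recovery plan will reveal.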
Once Applications Have Been Migrated to the Cloud, How Should Cloud Service Interruptions Be Managed?
The increasing use of cloud services means that interruptions can now involve the loss of underlying cloud services, such as cloud-based storage or even basic internet connectivity. A single copy of data was never appropriate in-house, and a single copy in the cloud is equally inappropriate. The basic tenet of keeping more than one copy of data available on separate systems remains just as important today as it ever was, but maintaining accessibility to that information has become a growing concern with cloud services. Too many organizations are turning a blind eye to this situation.
Data storage has become fragmented across repositories for many organizations, spanning private, public, and hybrid clouds as well as software-as-a-service, backup-as-a-service, and platform-as-a-service providers. In the past, transactions were executed within a single data center. Today, a transaction may traverse several applications across multiple repositories in geographically dispersed locations. It is no longer sufficient to restore one database to a consistent state or to have an application program manage consistency. Instead, these multiple repositories must be restored to a consistent state based on transactions.
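One way to picture transaction-based consistency across repositories: given the set of committed transaction IDs visible in each repository, find the highest transaction up to which every repository is complete. That is the consistent cross-repository recovery point. A minimal sketch, with invented repository names and IDs:

```python
def consistent_recovery_point(committed_by_repo):
    """Return the largest n such that transactions 1..n are present
    in every repository -- the point all of them can be restored to."""
    n = 0
    while all(n + 1 in txns for txns in committed_by_repo.values()):
        n += 1
    return n

repos = {
    "on-prem-db":    {1, 2, 3, 4, 5},
    "cloud-storage": {1, 2, 3, 4},      # replication lagging by one transaction
    "saas-ledger":   {1, 2, 3, 4, 5},
}

print(consistent_recovery_point(repos))  # 4: every repository must roll back to txn 4
```

The lagging repository sets the recovery point for all of them, which is exactly why restoring any single database to its own latest state is no longer enough.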
Once hybrid cloud services are deployed, an application should be able to change its locality on demand, based on business value. Once this change occurs, it is no longer sufficient to plan for disaster scenarios that affect entire sites, data centers, or storage arrays, because none of those boundaries contains all the components of an application. For these new designs, it is more useful to build applications under the expectation that individual components will fail, so that the application can survive such failures.
Business continuity is rapidly evolving because organizations are aggressively entrusting their data and business IT services to cloud providers. Successful organizations will need to carefully consider the interdependencies of their IT portfolios to create roadmaps for cloud deployments that include ‘Next Generation’ business continuity. A better understanding of cloud computing is a prerequisite to making better decisions about the ‘next generation’ of business continuity.
Dennis Wenk is a Principal Consultant with Dimension Data, a global systems integrator and managed services provider that designs, manages, and optimises today’s evolving technology environments to enable its clients to leverage data in a digital age.