By Laz Vekiarides, ClearSky Data: Three-two-one. Anyone who works in IT knows this is one of the prevailing strategies for data protection. In brief, the 3-2-1 strategy means a company has three copies of its data, two copies of which are local but on different media, and one copy offsite.

While 3-2-1 seems secure, it has a number of potential flaws, not the least of which is that you’re paying to replicate, transport and store at least three copies of each piece of data. With data volumes growing rapidly, this is a cost many organizations just can’t afford.

Recent cloud offerings make it possible to replace the 3-2-1 strategy with one durable copy of your data. For organizations struggling to keep up with data volumes, and looking for an airtight backup and recovery solution, this strategy can solve both problems.

A Typical 3-2-1 Data Backup Scenario

Before we move on to getting rid of the 3-2-1 rule, it’s helpful to lay out exactly what a data strategy that complies with it involves:

  • The first copy of your data is your primary storage. It’s stored locally, either on individual machines or in your own physical data center.
  • The second copy is your backup. This can also be local, on media like a backup drive, disk-to-disk system or tape.
  • The third copy of your data is for offsite backup or disaster recovery. It’s on separate hardware offsite, and data has to be replicated to it regularly.

If you’re like most organizations, the 3-2-1 rule doesn’t necessarily stop at three copies, either. You probably keep historical copies of all your data in each location, to protect against viruses or accidental user changes. Before you know it, 3-2-1 has become 6-5-4-3-2-1. There’s also the support structure that goes along with all the data copies: compute in different locations, the networking required to connect sites together and to the cloud, the software you use to move data, and multiple physical and cloud-based locations.
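
To make that concrete, here’s a rough back-of-the-envelope sketch of how the raw footprint grows once retained versions are counted at each location. The numbers are made up, and it ignores deduplication and incremental backups:

```python
# Back-of-the-envelope sketch (hypothetical numbers): raw footprint of a
# 3-2-1 strategy once retained historical copies are counted per location.

PRIMARY_TB = 100  # live data set, in terabytes (assumed)

locations = {
    "primary":      {"copies": 1, "retained_versions": 0},
    "local_backup": {"copies": 1, "retained_versions": 4},  # e.g. weekly fulls
    "offsite_dr":   {"copies": 1, "retained_versions": 2},
}

total_tb = 0
for name, loc in locations.items():
    tb = PRIMARY_TB * (loc["copies"] + loc["retained_versions"])
    total_tb += tb
    print(f"{name:>12}: {tb:5.0f} TB stored")

print(f"{'total':>12}: {total_tb:5.0f} TB stored for {PRIMARY_TB} TB of live data")
```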

What These Multiple Data Copies Cost

The costs of the 3-2-1 model outlined above add up quickly. The temptation is to calculate them as the simple dollars per gigabyte you pay to store your data, but $/GB is just one of several costs associated with the model. To get a true accounting of what you’re paying for data storage now, and what you’ll pay in the future, you have to look at total cost of ownership (TCO).

Beyond $/GB, your TCO calculation should include everything from the personnel needed to manage each of your data storage solutions – whether they’re cloud or on-premises – and what you pay for backup and disaster recovery, all the way to real estate and power costs.
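
As an illustration, the sketch below tallies a hypothetical annual TCO. Every figure is an assumption made for the sake of the example; the point is simply that raw storage is one line item among several:

```python
# Minimal TCO sketch with illustrative, made-up annual figures.

GB_STORED = 300_000        # ~300 TB across all copies (assumed)
COST_PER_GB_YEAR = 0.25    # blended media cost, $/GB/year (assumed)

annual_costs = {
    "raw_storage":        GB_STORED * COST_PER_GB_YEAR,
    "backup_dr_software": 40_000,   # licensing for backup/replication tools
    "personnel":          120_000,  # fraction of admin time spent on storage
    "network":            30_000,   # inter-site links and cloud egress
    "real_estate_power":  25_000,   # rack space, power, cooling
}

tco = sum(annual_costs.values())
for item, cost in annual_costs.items():
    print(f"{item:>20}: ${cost:>10,.0f}  ({cost / tco:5.1%} of TCO)")
print(f"{'annual TCO':>20}: ${tco:>10,.0f}")
```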

The next cost is the risk of your data being out of sync should you have to fail over: for example, your primary site may be ahead of your backup site because replication has lagged behind. This can happen for any number of reasons, including a downed network link. If you then have to fail over to your backup site, it’s not in sync with the primary, which means you risk data loss and downtime.
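
To put a rough number on that risk, here’s a small illustrative calculation of the data-loss window when asynchronous replication lags behind. The change rate and timestamps are assumptions, not measurements:

```python
# Illustrative estimate of data at risk if you fail over while replication lags.
from datetime import datetime

write_rate_mb_per_min = 50                      # average change rate (assumed)
last_replicated = datetime(2019, 6, 1, 9, 0)    # last write confirmed at the DR site
failure_time    = datetime(2019, 6, 1, 9, 45)   # moment the primary site goes down

lag = failure_time - last_replicated
data_at_risk_mb = write_rate_mb_per_min * (lag.total_seconds() / 60)

print(f"replication lag: {lag}, data at risk: ~{data_at_risk_mb:,.0f} MB")
# -> replication lag: 0:45:00, data at risk: ~2,250 MB
```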

Consider a Different Data Protection Rule of Thumb: The 1 Rule

Stop making copies. Once you’ve done a thorough accounting of what storing three, four or more copies of your data actually costs, the appeal of keeping just one durable copy becomes clear. The goal of the “1 rule” is to pay for just one copy of each piece of data and access it from anywhere. Features like built-in backup and disaster recovery and flexible storage options make the “1 rule” possible with certain cloud offerings.

Looking at it practically, the first question to ask is, “How much of my company’s data do users access regularly?” If you’re like most companies, it’s a small percentage of your entire data footprint.

Data that’s used frequently – or “hot” data – is stored close to your users, at the edge, on fast media like flash. This ensures users get fast access to the data they use most. As you step down the tiers to “warm” and “cold” data, it can be stored farther away, on less expensive media.
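
One simple way to picture that tiering decision is to map access recency to a tier. The thresholds below are illustrative, not a prescription:

```python
# Sketch of the tiering idea above, using last-access age as a simplified,
# assumed criterion for hot / warm / cold placement.

def tier_for(days_since_last_access: int) -> str:
    """Map access recency to a storage tier (thresholds are illustrative)."""
    if days_since_last_access <= 7:
        return "hot (edge flash)"
    if days_since_last_access <= 90:
        return "warm (local, cheaper media)"
    return "cold (cloud object storage)"

for days in (1, 30, 400):
    print(f"last accessed {days:>3} days ago -> {tier_for(days)}")
```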

Now all of this data is written to the cloud and backed up automatically. Even your hot data is in the cloud and automatically backed up, while your users get the same performance as on-premises.

As cloud technology has improved and options have increased, there’s no reason to make, manage or keep multiple copies of your data. To save on data storage costs, while improving your ability to recover from disaster, consider moving from 3-2-1 to 1.

Laz Vekiarides is the co-founder and CTO at ClearSky Data. For over 20 years, he has served in key technical and leadership roles to bring new technologies to market. Most recently, he served as the executive director of software engineering for Dell’s EqualLogic Storage Engineering group.