In the era of big data, with organizations amassing ever-larger volumes of data, you can no longer depend solely on tried-and-true methods. To cut downtime and data loss, and to protect your highest-priority data first, you need a tiered approach to disaster recovery (DR), much like the tiered defenses used in cybersecurity.
Any approach to data protection and DR that is not systematic is not realistic. To succeed in the face of increasingly frequent attacks, greater use of automation, and more sophisticated social engineering, all of which raise the likelihood of a successful attack, you must step up your protection and recovery plans.
A tiered approach to data protection and DR is nothing new; it has long been practiced in cybersecurity. All your data is important, but nothing stops you from giving priority to the most vital portions.
Making a case for prioritizing data, Ed Featherston, vice president and principal cloud architect at Cloud Technology Partners, says: "I do think DR plans can benefit from the tiered approach, and some organizations are taking that step. Conceptually, application data in most organizations have already been tiered by definition through recovery point objectives (RPO) and recovery time objectives (RTO). But, frequently, the final DR plan gets set to the least common denominator, namely the toughest RPO and RTO numbers, and becomes an all-or-nothing effort."
A cogent reason for a tiered approach is that the volume of data you handle day to day has grown enormously. If you continue with a one-tier approach, you will only end up mired in problems of time, resources, and cost.
The option some organizations have embraced is to take those RPO and RTO numbers and create tiers of recovery, much as you may have structured your cyber defense strategy.
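As an illustration of what tiering by RPO and RTO numbers might look like in practice, here is a minimal Python sketch. The tier boundaries, application names, and RPO/RTO values are hypothetical assumptions for the example, not figures from the article:

```python
from dataclasses import dataclass

@dataclass
class App:
    name: str
    rpo_minutes: int  # maximum tolerable data loss, in minutes
    rto_minutes: int  # maximum tolerable downtime, in minutes

def recovery_tier(app: App) -> int:
    """Assign a DR tier from RPO/RTO; the thresholds are illustrative."""
    if app.rpo_minutes == 0 and app.rto_minutes <= 15:
        return 1  # continuous replication, near-instant failover
    if app.rpo_minutes <= 60 and app.rto_minutes <= 60:
        return 2  # frequent snapshots, warm standby
    return 3      # daily backups, restore on demand

# Hypothetical application portfolio
apps = [
    App("payments", rpo_minutes=0, rto_minutes=5),
    App("crm", rpo_minutes=30, rto_minutes=45),
    App("archive", rpo_minutes=1440, rto_minutes=2880),
]
for app in apps:
    print(f"{app.name} -> tier {recovery_tier(app)}")
```

The point of the sketch is that the tier, not a single worst-case number, drives the recovery plan for each application, avoiding the all-or-nothing effect Featherston describes.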
Doing away with downtime
Christophe Bertrand, an analyst at Enterprise Strategy Group (ESG), says, "Modeling DR approaches after cyber defense is a sound strategy."
It is common knowledge that your data is not all of equal value. Even after you have sorted it out, protecting each type still means navigating ever-more-stringent backup and recovery requirements for every class of data. Frankly, any organization that wants to survive the relentless avalanche of today's cyberattacks must have little tolerance for even modest downtime.
In a study conducted by ESG, IT pros were asked about their DR priorities and tolerances, for example, for application or data unavailability in high-priority applications. While 14% of respondents said they could not tolerate any downtime for their high-priority applications, 36% said they could tolerate at most 15 minutes, and just 21% said they could tolerate 15 to 60 minutes of downtime.
That is what you should set out to achieve; do not be discouraged by the fact that some organizations have yet to get there.
Inasmuch as you want to get as close to zero downtime as possible, you will have to adopt the requisite technologies, classifications, and policies to deliver on your service-level agreements. Keep in mind that in the era of big data you must necessarily handle large volumes of data.
As long as you cannot replicate and store everything, your only option, and a very good one at that, is to tier your data. As you do so, focus on the criticality of the data you have.
It is important to build your DR tiers around that classification and to orchestrate recovery in a way that supports the analysis. As more organizations take this route, improvements in methods and capacities show that DR is getting better year over year.
There are indications that DR has become more democratic, with more data subject to comprehensive backup and at least comparatively rapid recovery. As good as this sounds, more effort in tiering is still needed if it is to be part of the foreseeable future.
Tiering is not a new kid on the block
In cybersecurity, multiple tiers and multiple strategies are used to protect internal and external data from all sorts of threats. This is exactly the kind of approach you need for data protection.
The multiple tiers you implement will strengthen your backup, business continuity, and DR against technology glitches, hardware failures, and human actions, whether accidental or intentional.
Greg Schulz, an analyst at StorageIO, firmly asserts, "The best protection is multiple tiers supplemented by multiple layers and multiple points." This may be easier said than done, but if you really work toward it, you should be able to arrive at best practices.
As a case in point, Schulz noted that many organizations use the terms grandfather-father-son or "three, two, one" to describe multiple levels of backup on different media, off-site, or in the cloud.
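To make the grandfather-father-son idea concrete, here is a small Python sketch that classifies a backup date into one of the three rotation levels. The specific rules (monthly fulls on the first of the month, weekly fulls on Sundays, daily incrementals otherwise) are one common convention chosen for illustration, not a scheme prescribed by Schulz:

```python
from datetime import date

def gfs_label(d: date) -> str:
    """Classify a backup date under a grandfather-father-son scheme.

    Illustrative rules: grandfather = monthly full kept long-term,
    father = weekly full, son = daily incremental.
    """
    if d.day == 1:
        return "grandfather"  # first of the month: monthly full
    if d.weekday() == 6:
        return "father"       # Sunday: weekly full
    return "son"              # any other day: daily incremental

# Example dates (May 1, 2023 is a Monday; May 7 is a Sunday)
print(gfs_label(date(2023, 5, 1)))  # monthly full
print(gfs_label(date(2023, 5, 7)))  # weekly full
print(gfs_label(date(2023, 5, 3)))  # daily incremental
```

Each level would then carry its own retention period and storage medium, which is where the "three, two, one" rule (three copies, two media types, one off-site) comes in.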
To succeed with this approach, make granularity the key to efficiency and to getting the results you need for your most critical data and applications. Consider tiered disaster recovery approaches, or ones that allow partial backup and restore, for situations where you need to recover all or only part of, say, your email system.
This is a good opportunity to embark on tiering.