What’s Holding Back Next Generation Backup and Recovery?
I talk with many IT professionals who are dismayed at how little backup and recovery has changed in the last ten years. Most IT organizations still run traditional weekly fulls and daily incremental backups, and they still struggle to meet backup windows, improve recovery capabilities, improve backup and restore success rates, and keep up with data growth. Sure, there have been some improvements: the shift to disk as the primary target for backup did improve backup and recovery performance, but it hasn't fundamentally changed backup operations or addressed the most basic backup challenges. Why hasn't disk dragged backup out of the dark ages? Well, disk alone can't address some of the underlying causes. Unfortunately, many IT organizations:
- Still run outdated versions of their backup software. I'm always amazed to find how many IT organizations want to change their backup software because it doesn't have the ability to back up XYZ application or provide granular recovery capabilities for XYZ application. I'm usually confused for a moment because I'm almost positive their current backup software does in fact provide the functionality they're looking for… and then I find out they're two versions behind the current release. In many of these situations, the IT organization is actually entitled to an upgrade as part of their maintenance agreement.
- Have not paid for advanced backup agent functionality that provides application awareness. In an effort to save costs, many IT organizations won't pay the premium for the application-aware agent; they pay only for the filesystem agent. So instead of putting XYZ database in hot backup mode, splitting off a snapshot, immediately taking the database out of hot backup mode, mounting the snapshot to a proxy server, and then initiating the backup (a process that a backup administrator can easily automate from the backup software, that limits the impact to the database, and that eliminates the backup window), database administrators manually copy database files to the network, where backup administrators back them up as if they were any other files. That manual process can only be automated with scripts, and it complicates recovery.
- Do not leverage virtual fulls even though they are backing up to disk. With traditional weekly fulls and daily incrementals, if you need to recover, you must restore the last full and then every daily incremental until you reach the point in time you want. With virtual fulls, after the initial backup of the data, each subsequent backup copies only the incremental changes, yet every backup is a fully recoverable point in time. When you need to recover, you simply select the last backup you need and that's it. If you're backing up to disk, you should be leveraging virtual fulls.
- Use their backups as archives. Based on my direct customer experience as well as survey work that I've performed on behalf of vendors, the vast majority of companies (90%+) use their backups as archives. So despite the risks associated with regulatory compliance and eDiscovery, only companies in highly regulated and litigious industries have invested in archiving software and hardware for a true archive. When you separate backup from archiving, you can and should reduce your backup retention schedules from weeks, months, and years to 30 days or less. If your backups are there for operational recovery, what are the chances that you would ever want or need to recover a database from 30 days ago? If you can separate backup from archiving, you're in a much better position to completely eliminate tape from backup. It also allows you to consider completely new backup strategies. You can continue to use the core backup software you use today, but if you don't need to back up to tape or keep data for longer than 30 days, it's also possible to use alternative technologies such as unlimited snapshots (storage- or server-based) and continuous data protection (CDP).
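The snapshot-based hot backup workflow described above is straightforward to orchestrate. Here's a minimal sketch in Python; every function is a simulated stand-in (the names `begin_hot_backup`, `take_snapshot`, and so on are my own hypothetical helpers, not any vendor's API), and in practice each step would call your database, storage array, and backup software interfaces:

```python
# Toy orchestration of the hot-backup / snapshot / proxy workflow.
# All step functions are simulated stand-ins that just record what ran.

steps = []  # records the order in which the workflow executes

def begin_hot_backup(db):
    steps.append(f"hot-backup-on:{db}")

def take_snapshot(volume):
    steps.append(f"snapshot:{volume}")
    return f"{volume}-snap"

def end_hot_backup(db):
    steps.append(f"hot-backup-off:{db}")

def mount_on_proxy(snapshot, proxy):
    steps.append(f"mount:{snapshot}@{proxy}")

def run_backup(snapshot):
    steps.append(f"backup:{snapshot}")

def protect_database(db, volume, proxy):
    """The database sits in hot backup mode only for the instant it
    takes to split the snapshot; the backup itself runs off a proxy."""
    begin_hot_backup(db)
    snap = take_snapshot(volume)
    end_hot_backup(db)           # database impact ends here
    mount_on_proxy(snap, proxy)  # backup load shifts to the proxy server
    run_backup(snap)

protect_database("ordersdb", "vol01", "proxy1")
```

The key point the sketch makes is sequencing: the expensive step (the backup) happens entirely after the database has left hot backup mode.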
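To see why virtual fulls simplify recovery, consider this toy in-memory model (my own illustration, not any vendor's implementation): each backup stores only changed blocks, yet the operator restores any point in one step. Deleted blocks aren't modeled.

```python
# Toy model of virtual (synthetic) fulls: store deltas, restore full images.

class VirtualFullStore:
    def __init__(self):
        self.points = []  # each entry holds only the blocks that changed

    def backup(self, current_blocks):
        prev = self.restore(len(self.points) - 1) if self.points else {}
        delta = {k: v for k, v in current_blocks.items() if prev.get(k) != v}
        self.points.append(delta)

    def restore(self, point_index):
        # One logical operation for the operator: pick a point, get a full image.
        full = {}
        for delta in self.points[: point_index + 1]:
            full.update(delta)
        return full

store = VirtualFullStore()
store.backup({"blk1": "a", "blk2": "b"})  # initial backup captures everything
store.backup({"blk1": "a", "blk2": "c"})  # only blk2 changed; only blk2 is stored
```

Contrast this with the traditional model, where reassembling "last full plus every incremental" is the operator's job rather than the backup software's.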
And I'm not trying to put all the blame on IT; technology vendors have not made things easy for customers. Vendors charge additional license fees for advanced functionality (typically $1,000 for a filesystem agent and $1,500-$2,000 per agent for an advanced agent; it can be as much as double this for an enterprise-class backup software application). Annual maintenance fees run 10%-15% of the initial acquisition cost of the software. And while disk libraries are less expensive than production storage systems, they're not cheap; they cost approximately $4,000 to $7,000 per usable terabyte (TB). Let's assume that, based on a presales assessment of your environment, your prospective disk library vendor believes you'll likely see a 7:1 deduplication ratio. This means that if you need to protect 42 TB of backup data, you'll need to buy at least 6 TB of usable, physical storage. You probably don't want to cut it too close, so you'll buy a few extra TB, say 8 TB in total. Those 8 TB will cost approximately $32K (assuming you get a nice discount off list and your cost is closer to $4K per TB). If you expect an immediate ROI on your investment, that's not likely to happen, but if you build your business case on a 3-year ROI, the benefits (reduction or elimination of backup windows, improved recovery objectives, granular recovery capabilities, improved backup and restore success rates, reduction in tape investment, etc.) will outweigh the costs in the long run.
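The sizing arithmetic above is easy to capture in a small helper. The 7:1 ratio, 42 TB of backup data, 2 TB of headroom, and $4K-per-TB price are the figures assumed in this article, not universal constants:

```python
import math

def size_disk_library(logical_tb, dedup_ratio, headroom_tb, cost_per_tb):
    """Physical capacity needed for a deduplicating disk library, plus cost."""
    physical_tb = math.ceil(logical_tb / dedup_ratio) + headroom_tb
    return physical_tb, physical_tb * cost_per_tb

tb, cost = size_disk_library(logical_tb=42, dedup_ratio=7,
                             headroom_tb=2, cost_per_tb=4000)
# tb = 8, cost = 32000 (i.e., ~$32K)
```

Swap in the dedup ratio your vendor quotes and your negotiated per-TB price; the headroom figure is a judgment call, since dedup ratios from presales assessments are estimates.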
Check out Stephanie's research
You can follow Stephanie here