Liat Malki, director of marketing at Axxana (http://www.axxana.com/), says:

In order to ensure business continuity and the ability to recover operations after a disaster, it is critical to keep two copies of your company’s data a good distance apart. Getting the data to the recovery site can be a challenge, however. Sending tape backups by truck or plane is relatively inexpensive, but it is not very timely and does not protect the most recent updates. Transmitting data over a network is more timely but considerably more expensive. And unless the secondary data center is very close, some data will still be lost, because latency over long distances rules out synchronous replication.

Companies are increasingly using data deduplication to reduce communication line costs when replicating data between primary and secondary data centers. Depending on the deduplication technology and the type of data, deduplication ratios can range from 2:1 to 20:1 and higher. A 2:1 deduplication ratio reduces storage requirements, and thus communication expenses, by 50%; a 10:1 ratio delivers a 90% reduction; and a 20:1 ratio delivers a 95% reduction.
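
Those percentages follow directly from the ratio: an r:1 ratio means only 1/r of the original bytes need to be sent, so the reduction is 1 − 1/r. A quick sketch of that arithmetic:

    def bandwidth_reduction(dedup_ratio: float) -> float:
        """Fraction of bytes saved for a given deduplication ratio (r:1)."""
        return 1.0 - 1.0 / dedup_ratio

    for ratio in (2, 10, 20):
        print(f"{ratio}:1 ratio -> {bandwidth_reduction(ratio):.0%} reduction")
    # 2:1 ratio  -> 50% reduction
    # 10:1 ratio -> 90% reduction
    # 20:1 ratio -> 95% reduction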

While you will often hear organizations discussing the savings in physical storage, the real savings when replicating deduplicated data for disaster recovery come from the reduction in communication line charges. Rates vary by country, region, city, and even neighborhood, but regardless of location, wide area network line charges remain one of the greatest impediments to comprehensive data replication and disaster recovery plans. And in some locations, particularly rural areas and newly developing regions, high-quality data lines may be limited or nonexistent at any price, leaving low-quality, low-speed communication links as the only option. In these instances, data deduplication can be critically important.

For data consistency reasons, most deduplication technologies operate on an application-consistent point-in-time copy of the data. The process looks like this (a rough code sketch follows the list):

  1. Take an application-consistent point-in-time copy of the data.
  2. Deduplicate the data.
  3. Transmit the deduplicated data to the remote site.
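
As a rough illustration, here is a self-contained Python sketch of one cycle of that process. The fixed-size blocking, the SHA-256 block hashes, and the in-memory set standing in for the remote site’s block index are assumptions for the sketch, not any particular product’s design:

    import hashlib

    BLOCK_SIZE = 4096           # assumed fixed-size blocks for the sketch
    remote_block_hashes = set() # hashes the remote site is known to already hold

    def snapshot(volume: bytes) -> bytes:
        """Stand-in for an application-consistent point-in-time copy."""
        return bytes(volume)  # a real system would quiesce the application first

    def deduplicate(copy: bytes):
        """Split the copy into blocks and keep only those the remote lacks."""
        new_blocks = []
        for i in range(0, len(copy), BLOCK_SIZE):
            block = copy[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in remote_block_hashes:
                new_blocks.append((digest, block))
        return new_blocks

    def transmit(blocks) -> None:
        """Stand-in for sending the reduced data set over the WAN."""
        for digest, block in blocks:
            remote_block_hashes.add(digest)  # remote now holds this block
        print(f"sent {len(blocks)} unique blocks")

    # One cycle of the three-step process:
    volume = b"example data " * 1000
    transmit(deduplicate(snapshot(volume)))

Real deduplication engines typically use variable-size chunking and a persistent index rather than a set in memory, but the shape of the cycle (copy, reduce, transmit) is the same.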

Until the data has been safely stored at the remote site, however, there is a risk of data loss. Regardless of the technology and the deduplication ratio, the process of data deduplication takes time, and data centers are not static: new data is being created all the time. The longer the deduplication process takes, the greater the exposure and the risk of losing newly created data.
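
To make that exposure concrete, here is a back-of-the-envelope calculation; the change rate and cycle times below are hypothetical numbers for illustration, not measurements:

    # Hypothetical figures for illustration only.
    change_rate_mb_per_min = 50   # new data created at the primary site
    dedup_minutes = 10            # time to deduplicate the point-in-time copy
    transmit_minutes = 5          # time to send the reduced data set

    exposure_minutes = dedup_minutes + transmit_minutes
    data_at_risk_mb = change_rate_mb_per_min * exposure_minutes
    print(f"up to {data_at_risk_mb} MB unprotected for {exposure_minutes} minutes")
    # up to 750 MB unprotected for 15 minutes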

Axxana has developed technology to maintain a mirrored copy of the most recent data updates in a disaster-proof storage system at the primary data center. With it, companies can take full advantage of the latest data-deduplication technology, reducing storage and bandwidth costs without increasing the risk of data loss during the deduplication and transmission process.

Data that has not yet been transmitted to the remote site is safely stored in our black box. From the time the first asynchronous replication is committed at the secondary data center, our fireproof, heat-resistant, waterproof storage system protects all the data generated between then and the next asynchronous update. Because the amount of changed data between snapshots is relatively small, we can store all of it on rugged solid-state disk drives, which are more tolerant of environmental extremes. If a disaster occurs, our system automatically transmits the data to the remote data center; alternatively, the unit can be physically retrieved and its contents transmitted over an IP network. By combining Axxana’s technology with data deduplication, companies can achieve the most complete and most affordable data protection possible. You can learn more about the ROI of Axxana’s approach to data replication by reading “ROI Model: The Cost of Communication Bandwidth for Remote Replication.”
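
Axxana’s internals are not public, so the following is only a conceptual model of the behavior described above, under assumed semantics: every update is mirrored into a local disaster-proof buffer, and the buffer is trimmed once the remote site acknowledges that an asynchronous update has been committed. The class, its methods, and the sequence-number protocol are all hypothetical:

    from collections import deque

    class DisasterProofBuffer:
        """Conceptual model: holds updates not yet committed remotely."""
        def __init__(self):
            self._pending = deque()  # updates awaiting remote acknowledgment
            self._next_seq = 0

        def record(self, update: bytes) -> int:
            """Mirror an update locally before (or while) it is replicated."""
            seq = self._next_seq
            self._pending.append((seq, update))
            self._next_seq += 1
            return seq

        def ack(self, committed_seq: int) -> None:
            """Remote committed everything up to committed_seq: trim it."""
            while self._pending and self._pending[0][0] <= committed_seq:
                self._pending.popleft()

        def recover(self):
            """After a disaster, return the updates the remote never received."""
            return [update for _, update in self._pending]

    buf = DisasterProofBuffer()
    buf.record(b"update 0")
    buf.record(b"update 1")
    buf.ack(0)             # remote committed update 0
    print(buf.recover())   # [b'update 1'] still needs to reach the remote

Note that the buffer only ever needs to hold the data generated between two asynchronous updates, which is why relatively small, rugged solid-state storage suffices.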