Tip

Data deduplication methods: Block-level versus byte-level dedupe

Block-level and byte-level data deduplication both optimize storage capacity. Review when, where and how each process works against your data backup environment and its specific requirements before selecting one approach over the other.

Data deduplication identifies duplicate data, removing redundancies and reducing the overall volume of data transferred and stored. In my last article, I reviewed the differences between file-level and block-level data deduplication. In this article, I'll assess byte-level versus block-level deduplication. Byte-level deduplication provides a more granular inspection of data than block-level approaches, which can improve accuracy, but it often requires more knowledge of the backup stream to do its job.

Block-level approaches

Block-level data deduplication segments data streams into blocks, inspecting the blocks to determine if each has been encountered before (typically by generating a digital signature or unique identifier via a hash algorithm for each block). If the block is unique, it is written to disk and its unique identifier is stored in an index; otherwise, only a pointer to the original, unique block is stored. By replacing repeated blocks with much smaller pointers rather than storing the block again, disk storage space is saved.
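A minimal sketch of that flow in Python appears below. The fixed block size, the in-memory dictionary index and the list standing in for the disk store are illustrative assumptions, not any particular product's implementation; real systems persist the index and write blocks to disk.

```python
import hashlib

BLOCK_SIZE = 8 * 1024  # illustrative fixed block size; real products vary

block_index = {}    # digest -> position of the stored unique block
stored_blocks = []  # stands in for the on-disk store of unique blocks

def dedupe_stream(data: bytes):
    """Split a data stream into blocks and store only the unique ones."""
    layout = []  # small pointers that reconstruct the original stream
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha1(block).hexdigest()  # the block's unique ID
        if digest not in block_index:
            # Unique block: write it out and record its ID in the index.
            block_index[digest] = len(stored_blocks)
            stored_blocks.append(block)
        # Duplicate or not, only a pointer goes into the stream layout.
        layout.append(block_index[digest])
    return layout
```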

The criticisms of block-based approaches are 1) the use of a hash algorithm to calculate the unique ID brings the risk of generating a false positive; and 2) storing unique IDs in an index can slow the inspection process as the index grows larger and requires disk I/O (unless the index size is kept in check and data comparison occurs in memory).

Hash collisions could spell a false positive when using a hash-based algorithm to determine duplicates. Hash algorithms, such as MD5 and SHA-1, generate a fixed-length digest that serves as a practically unique identifier for the chunk of data being examined. While hash collisions and the resulting data corruption are possible, the chances that a collision will occur are slim.
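To make the risk concrete, the sketch below deliberately truncates a SHA-1 digest to a single byte so that collisions become easy to produce; it is a demonstration only, not how any product behaves. With the full 160-bit digest, the same lookup logic makes a false positive astronomically unlikely.

```python
import hashlib
import os

def weak_id(block: bytes) -> str:
    # Truncated to one byte purely to force collisions for demonstration;
    # real products use the full 128-bit MD5 or 160-bit SHA-1 digest.
    return hashlib.sha1(block).hexdigest()[:2]

index = {}
false_positives = 0
for _ in range(1000):
    block = os.urandom(64)            # random, almost certainly unique data
    fid = weak_id(block)
    if fid in index and index[fid] != block:
        false_positives += 1          # treated as a duplicate, but it isn't
    else:
        index[fid] = block

print(f"False positives with a truncated ID: {false_positives}")
```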

Byte-level data deduplication

Analyzing data streams at the byte level is another approach to deduplication. Performing a byte-by-byte comparison of new data streams against previously stored ones can deliver a higher level of accuracy. Deduplication products that use this method share a common assumption: the incoming backup data stream has likely been seen before, so it is compared against similar data received in the past.
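As a rough illustration of the byte-by-byte idea (a simplified sketch using Python's standard difflib, not a vendor implementation), the snippet below compares an incoming stream against a previously stored one and keeps only the byte ranges that differ; matching ranges can be stored as references to the earlier stream.

```python
from difflib import SequenceMatcher

def byte_level_delta(previous: bytes, incoming: bytes):
    """Return the byte ranges of `incoming` that differ from `previous`."""
    matcher = SequenceMatcher(None, previous, incoming, autojunk=False)
    new_data = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            new_data.append((j1, incoming[j1:j2]))  # offset + unique bytes
    return new_data

# Example: a second backup of a file with a small edit in the middle.
old = b"header" + b"A" * 100 + b"trailer"
new = b"header" + b"A" * 50 + b"CHANGED" + b"A" * 50 + b"trailer"
print(byte_level_delta(old, new))  # only the changed region is new data
```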

Products leveraging a byte-level approach are typically "content aware," which means the vendor has done some reverse engineering of the backup application's data stream to understand how to retrieve information such as the file name, file type, date/time stamp, etc. This method reduces the amount of computation required to determine unique versus duplicate data. The caveat? This approach typically occurs post-process -- performed on backup data once the backup has completed. Backup jobs, therefore, complete at full disk performance, but require a reserve of disk cache to perform the deduplication process. It's also likely that the deduplication process is limited to a backup stream from a single backup set and not applied "globally" across backup sets.

Once the deduplication process is complete, the solution reclaims disk space by deleting the duplicate data. Before space reclamation is performed, an integrity check can be performed to ensure that the deduplicated data matches the original data objects. The last full backup can also be maintained so recovery is not dependent on reconstituting deduplicated data, enabling rapid recovery.
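A minimal sketch of that safeguard, assuming a hypothetical reconstitute() callable that rebuilds a backup object from the deduplicated store (not a feature of any specific product):

```python
import hashlib

def verify_then_reclaim(original_copy: bytes, reconstitute) -> bool:
    """Post-process integrity check run before space reclamation.

    `reconstitute` is a hypothetical callable that rebuilds the backup
    object from the deduplicated store. Only if its checksum matches the
    original's is it safe to delete the redundant copy and free the space.
    """
    rebuilt = reconstitute()
    return hashlib.sha256(rebuilt).digest() == hashlib.sha256(original_copy).digest()
```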

Which approach is best?

Both block- and byte-level methods optimize storage capacity. Review when, where and how each process works against your backup environment and its specific requirements before selecting one approach over the other. Your vetting process should also include references from organizations with similar characteristics and requirements.

About the author:
Lauren Whitehouse is an analyst with Enterprise Strategy Group and covers data protection technologies. Lauren is a 20-plus-year veteran in the software industry, formerly serving in marketing and software development roles.


Next Steps

The pros and cons of file-level vs. block-level data deduplication

Medical center dedupes its way to better backups and disaster recovery

Inline vs. post-processing deduplication appliances

The downsides of data deduplication
