ONTAP 9.1 provides the deduplication, data compression, and data compaction features to increase the storage efficiency of your storage systems.
You can run deduplication and data compression together or independently on a FlexVol volume or an Infinite Volume to achieve optimal space savings. Deduplication eliminates duplicate data blocks, and data compression compresses the data blocks to reduce the amount of physical storage required.
Inline data compaction stores multiple user data blocks and files within a single 4 KB block on a system running ONTAP. Inline data compaction is enabled by default on All Flash FAS (AFF) systems, and can be optionally enabled on volumes on FAS systems.
To use data compaction, the volume space guarantee must be set to none.
Without data compaction, ONTAP stores each file in one or more 4 KB blocks on solid state drives (SSDs) and hard disk drives (HDDs). Data compaction increases storage efficiency by storing more data in less space.
The data compaction process has CPU overhead, and is best suited for faster controllers. Storage savings can be significant for environments with highly compressible I/O and small files or I/O.
If enabled, data to be written goes through inline zero-block deduplication, then inline compression, and then inline deduplication. Data compaction takes place after compression and deduplication, and is independent of these operations.
For example, assume nine small files, each smaller than 4 KB after compression, are ready to be written to SSD.
Without data compaction, each file would get its own 4 KB block, consuming 36 KB.
With data compaction, multiple files are written to each 4 KB block, consuming only 12 KB.
To enable compaction on FAS systems:
To check the compaction status of aggregates and volumes:
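A minimal sketch of the commands, assuming an SVM named vs1 and the volume vol_db1; the exact field names (such as data-compaction-space-saved) can vary between ONTAP releases:

```
cluster1::> storage aggregate show -fields data-compaction-space-saved
cluster1::> volume efficiency show -vserver vs1 -volume vol_db1 -fields data-compaction
```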
To enable compaction on a FAS volume:
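On FAS systems this is a per-volume setting at the advanced privilege level. A sketch, assuming an SVM named vs1:

```
cluster1::> set -privilege advanced
cluster1::*> volume efficiency modify -vserver vs1 -volume vol_db1 -data-compaction true
cluster1::*> set -privilege admin
```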
To enable volume efficiency on the volume vol_db1:
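Assuming the volume lives on an SVM named vs1:

```
cluster1::> volume efficiency on -vserver vs1 -volume vol_db1
```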
To check inline deduplication, compression, and compaction:
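One way to verify all three settings in a single command, assuming an SVM named vs1 (field names may differ slightly between releases):

```
cluster1::> volume efficiency show -vserver vs1 -volume vol_db1 -fields inline-dedupe, inline-compression, data-compaction
```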
To enable compression and inline compression:
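A sketch, assuming an SVM named vs1; volume efficiency must already be enabled on the volume:

```
cluster1::> volume efficiency modify -vserver vs1 -volume vol_db1 -compression true -inline-compression true
```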
Create a FlexVol volume using only SSD disks.
Enable compression and compaction.
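The two steps above can be sketched as follows, assuming an SVM named vs1 and an SSD-only aggregate named aggr_ssd (both names are placeholders). Note the space guarantee is set to none, as data compaction requires:

```
cluster1::> volume create -vserver vs1 -volume vol_db1 -aggregate aggr_ssd -size 100g -space-guarantee none
cluster1::> volume efficiency on -vserver vs1 -volume vol_db1
cluster1::> volume efficiency modify -vserver vs1 -volume vol_db1 -compression true -inline-compression true -data-compaction true
```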
- You can switch between secondary compression and adaptive compression depending on the amount of data read.
- Adaptive compression is typically preferred when there are many random reads on the system and higher performance is required.
- Secondary compression is preferred when data is written sequentially and higher compression savings are required.
Undo the compression first.
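One possible sequence, assuming an SVM named vs1; volume efficiency undo runs at the advanced privilege level, and its exact parameters vary by ONTAP release, so verify against the documentation for your version:

```
cluster1::> volume efficiency modify -vserver vs1 -volume vol_db1 -compression false -inline-compression false
cluster1::> set -privilege advanced
cluster1::*> volume efficiency undo -vserver vs1 -volume vol_db1 -compression true
```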
Now change the compression type.
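A sketch of switching the volume to adaptive compression, assuming an SVM named vs1; the -compression-type parameter is available at the advanced privilege level:

```
cluster1::> set -privilege advanced
cluster1::*> volume efficiency modify -vserver vs1 -volume vol_db1 -compression-type adaptive
cluster1::*> set -privilege admin
```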
There are two types of volume efficiency policies: scheduled and threshold.
Create a new policy of the threshold type.
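A sketch, assuming an SVM named vs1 and a policy name pol_threshold (both placeholders); the 30% start threshold is an example value, and the exact parameter name should be checked against your release:

```
cluster1::> volume efficiency policy create -vserver vs1 -policy pol_threshold -type threshold -start-threshold-percent 30
```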
Associate this policy to the volume.
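Assuming an SVM named vs1 and a threshold policy named pol_threshold (a placeholder name):

```
cluster1::> volume efficiency modify -vserver vs1 -volume vol_db1 -policy pol_threshold
```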
To check the space saved by volume efficiency.
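One way to see the savings per volume, assuming an SVM named vs1; the exact field names may differ between releases:

```
cluster1::> volume show -vserver vs1 -volume vol_db1 -fields sis-space-saved, sis-space-saved-percent
```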
To check the volume efficiency statistics.
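A sketch, assuming an SVM named vs1; this reports per-volume efficiency statistics such as inline compression attempts:

```
cluster1::> volume efficiency stat -vserver vs1 -volume vol_db1
```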