dmsetup does not discard blocks when removing a thin device. The unused blocks
are reused by the thin-pool, but will remain allocated in the underlying
device indefinitely. For loop-device-backed thin-pools, this results in
"lost" disk space in the underlying file system, as the blocks remain
allocated in the loop device's backing file.
This change adds an option, discard_blocks, to the devmapper snapshotter,
which causes the snapshotter to issue blkdiscard ioctls on the thin device
before removal. With this option enabled, loop-device setups see disk space
returned to the underlying filesystem as soon as a container exits.
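
For reference, a minimal Linux-only sketch of the kind of discard involved
(illustration only, not the snapshotter's actual code; the BLKDISCARD request
number is written out rather than assumed to be exported by
golang.org/x/sys/unix):

package main

import (
    "fmt"
    "io"
    "os"
    "unsafe"

    "golang.org/x/sys/unix"
)

// BLKDISCARD = _IO(0x12, 119); spelled out here for illustration.
const blkDiscard = 0x1277

// discardAllBlocks asks the kernel to discard every block of the device,
// so a loop-backed thin-pool can return the space to its backing file.
func discardAllBlocks(devPath string) error {
    f, err := os.OpenFile(devPath, os.O_RDWR, 0)
    if err != nil {
        return err
    }
    defer f.Close()

    // Seeking to the end of a block device yields its size in bytes.
    size, err := f.Seek(0, io.SeekEnd)
    if err != nil {
        return err
    }

    // BLKDISCARD takes an [offset, length] pair of uint64s.
    rng := [2]uint64{0, uint64(size)}
    if _, _, errno := unix.Syscall(unix.SYS_IOCTL, f.Fd(), blkDiscard,
        uintptr(unsafe.Pointer(&rng[0]))); errno != 0 {
        return fmt.Errorf("BLKDISCARD on %s: %w", devPath, errno)
    }
    return nil
}

func main() {
    if err := discardAllBlocks(os.Args[1]); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}
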
Fixes #5691
Signed-off-by: Kern Walster <walster@amazon.com>
If mkfs on a device mapper thin pool fails, the error now includes the pool
status as returned by dmsetup, for enhanced error reporting.
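
A rough sketch of the idea (the helper, filesystem, and device names below are
placeholders, not containerd's actual code):

package main

import (
    "context"
    "fmt"
    "os/exec"
)

// mkfsWithPoolStatus formats the thin device and, on failure, attaches the
// output of "dmsetup status <pool>" to the returned error so the pool state
// (e.g. out of data or metadata space) is visible in the logs.
func mkfsWithPoolStatus(ctx context.Context, poolName, devPath string) error {
    out, err := exec.CommandContext(ctx, "mkfs.ext4", devPath).CombinedOutput()
    if err == nil {
        return nil
    }

    status, statusErr := exec.CommandContext(ctx, "dmsetup", "status", poolName).CombinedOutput()
    if statusErr != nil {
        status = []byte(fmt.Sprintf("unavailable: %v", statusErr))
    }

    return fmt.Errorf("mkfs failed on %s: %w (output: %q, pool status: %q)",
        devPath, err, out, status)
}

func main() {
    // Example usage with placeholder names.
    if err := mkfsWithPoolStatus(context.Background(), "mythinpool", "/dev/mapper/mythinpool-snap-1"); err != nil {
        fmt.Println(err)
    }
}
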
Signed-off-by: Alakesh Haloi <alakeshh@amazon.com>
No need to use the private losetup command line wrapper package.
The generic package provides the same functionality.
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
1. Reason to deactivate a committed snapshot
The thin device will not be used for IO after it is committed,
and further thin snapshotting works fine with an inactive thin
device as the origin. The benefits of deactivating it are:
- the device is not unnecessarily visible, avoiding any unexpected IO;
- the kernel does not keep useless data structures around for
  maintaining an active dm device.
Quote from the kernel doc (Documentation/device-mapper/thin-provisioning.txt):
"
ii) Using an internal snapshot.
Once created, the user doesn't have to worry about any connection
between the origin and the snapshot. Indeed the snapshot is no
different from any other thinly-provisioned device and can be
snapshotted itself via the same method. It's perfectly legal to
have only one of them active, and there's no ordering requirement on
activating or removing them both. (This differs from conventional
device-mapper snapshots.)
"
2. A thin-pool metadata bug is fixed as a side effect
A problem happens when suspending/resuming the origin thin device
fails while creating a snapshot:
"failed to create snapshot device from parent vg0-mythinpool-snap-3"
error="failed to save initial metadata for snapshot "vg0-mythinpool-snap-19":
object already exists"
This issue occurs because, when snapshot creation fails, the
snapshotter.store can be rolled back, but the thin pool metadata
boltdb fails to roll back in PoolDevice.CreateSnapshotDevice(),
so the metadata becomes inconsistent: the snapshot ID is not
recorded in snapshotter.store, but is saved in the pool metadata boltdb.
The root cause is that, in PoolDevice.CreateSnapshotDevice(), the defer
calls are invoked in "first-in, last-out" order. When the error happens
in the "resume device" defer call, the metadata has already been saved
and the snapshot created, with no chance to be rolled back.
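
The pitfall reduces to a few lines of Go (toy code, not PoolDevice itself):
the resume defer is registered first and therefore runs last, after the
metadata-rollback defer has already decided there is nothing to undo:

package main

import (
    "errors"
    "fmt"
)

var metadataSaved bool

func createSnapshot() (retErr error) {
    // Suspend the origin; the resume defer is registered FIRST, so it runs LAST.
    fmt.Println("suspend origin")
    defer func() {
        fmt.Println("resume origin")
        if retErr == nil {
            // Simulate a resume failure surfacing only at this point.
            retErr = errors.New("failed to resume origin")
        }
    }()

    // Save the snapshot metadata; the rollback defer is registered SECOND,
    // so it runs FIRST, before the resume error exists.
    metadataSaved = true
    defer func() {
        if retErr != nil {
            metadataSaved = false
            fmt.Println("rollback metadata")
        }
    }()

    // Creating the snapshot device itself succeeds in this scenario.
    return nil
}

func main() {
    err := createSnapshot()
    fmt.Printf("error=%v metadataSaved=%v\n", err, metadataSaved)
    // Prints: error=failed to resume origin metadataSaved=true
    // i.e. the caller sees a failure (and rolls back snapshotter.store),
    // while the pool metadata keeps the snapshot ID -> inconsistency.
}
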
Signed-off-by: Eric Ren <renzhen@linux.alibaba.com>