One of my favorite features of modern filesystems is the sparse file, where "empty" portions of files are unallocated, so they don't consume any space. This allows oversubscription of disk blocks, so long as you never actually completely fill all of the files at once. One common problem with sparse files, though, is that they tend to grow over time, and there is often no way to shrink them back in place. But there is, with ZFS.
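If you've never played with one, here's a quick demonstration; the file name is just something I made up, and truncate, ls, and du are all standard tools:

    truncate -s 1G sparsetest   # create a 1 GiB file without writing any data
    ls -lh sparsetest           # apparent size: 1.0G
    du -h sparsetest            # allocated size: 0, since nothing was ever written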
Without going into a lot of detail, ZFS is a snazzy and relatively new filesystem that came to us from the latter era of Sun's SVR4-based Solaris operating system. It has a lot of advanced features, almost none of which I'm going to talk about here. If you want to know more about it, there is an embarrassment of information on its workings available on the internets. But besides support for sparse files, I do want to mention one of them in particular: compression.
ZFS is far from the first compressing filesystem, and even before there were other filesystems with compression built in, there were reasonably solid attempts to provide the same functionality as an overlay on top of the normal filesystem, at least as far back as DOS. I've used them there as well as on the classic Mac OS, and on the Amiga. But with ZFS, the support is truly transparent, and with modern CPUs the overhead is negligible compared to the benefits, particularly when hard disk drives are involved. The savings in data transferred are truly substantial, and the cost of determining whether a file will compress is very low. One of the drawbacks of sparse files is that they result in fragmentation, which is a problem on hard disk drives in particular; compression helps here by packing data into fewer blocks than it would otherwise occupy, so less seeking is needed to read it back.
The reason these two things are worth mentioning together is that ZFS will automatically detect blocks of all zeroes and prune them (storing them as holes) whenever compression is enabled on a dataset, regardless of which compression algorithm you pick. This zero detection is not only cheaper than actually compressing the data, it's also more effective at saving space, and it requires no effort to use.
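You can watch this happen on a compressed dataset; the dataset path here is just a placeholder:

    dd if=/dev/zero of=/tank/test/zeroes bs=1M count=100   # write 100 MiB of zeroes
    sync                                                   # let the transaction group commit
    ls -lh /tank/test/zeroes                               # apparent size: 100M
    du -h /tank/test/zeroes                                # next to nothing, because the zero blocks became holes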
My particular use case for large sparse files is virtual disk images for VMs. I am now using KVM/QEMU to provide VM services, managed by libvirt. This provides essentially the same sort of functionality as VMware Server. If the image files are stored on most filesystems, you have to first zero out the empty space inside the guest (which enlarges the virtual disk to its maximum size) and then make a sparse copy of it (with cp --sparse=always, with qemu-img convert, or with any of several other tools). But if they are stored on a ZFS dataset with compression enabled, you simply zero out the empty space and the file is made sparse in place.
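For the conventional-filesystem case, the copy step looks something like this; the image names are made up, and the guest should be shut down first:

    cp --sparse=always guest.img guest-sparse.img
    # or, since qemu-img skips zero blocks by default:
    qemu-img convert -O raw guest.img guest-sparse.img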
The advantage of using qemu-img convert is that it not only handles sparse output, it can also produce a smaller file in a format such as qcow2, which grows later as the guest writes new data. This is convenient for archival purposes, because the image is smaller when unpacked even on a filesystem which does not support sparse files.
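A minimal sketch of that, again with made-up file names:

    qemu-img convert -O qcow2 guest.img guest.qcow2   # only allocated data ends up in the qcow2 file

The qcow2 file stays small when copied around, and the guest can keep using it; it simply allocates more clusters as it writes.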
On a Unixlike, use dd if=/dev/zero of=somefilename inside the VM to fill up any unused space on the guest's filesystem with zeroes, then delete somefilename. On Windows, use sdelete -z to do the same thing on NTFS.
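Spelled out for a Linux guest, with an arbitrary file name:

    dd if=/dev/zero of=/zerofile bs=1M   # runs until "No space left on device", which is expected
    sync                                 # make sure the zeroes actually reach the virtual disk
    rm /zerofile

On the Windows side, sdelete -z c: (from the Sysinternals suite) zeroes the free space on the C: drive.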
The easiest and most available way to see whether your file is sparse on Unixlikes is ls -slh, which can be used on a file, directory, glob pattern, etc. The -s option prepends the space actually allocated to each line, so you can compare it with the apparent size shown further along. On Windows, use fsutil sparse queryflag filename to see whether the sparse flag is set.
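For example, here is roughly what the output looks like for a hypothetical 20G image that only holds about 3G of real data:

    $ ls -slh guest.img
    3.0G -rw-r--r-- 1 root root 20G Jan  1 12:00 guest.img

The first column is the allocated size, and the 20G further along is the apparent size.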
The easiest way to check for ZFS compression is with zfs get compression, which will tell you what compression method is used on every dataset (the filesystems carved out of a storage pool's blocks). zfs set compression=lz4 zpoolname will enable it on the pool's root dataset, and child datasets inherit the setting. There are other compression methods available, but lz4 provides an excellent balance between CPU usage and compression ratio. It does have some limitations (like a poor compression ratio on very small files) but in general it is well worth enabling.
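Concretely, with a made-up pool and dataset layout:

    zfs get compression                   # list the compression property for every dataset
    zfs set compression=lz4 tank          # enable lz4 pool-wide, via the root dataset
    zfs set compression=lz4 tank/images   # or only for the dataset holding the disk images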