

It is for generated data, like a JSON API. Static content is often pre-compressed though, since there's no reason to do that on every request if it can be done once. The compression format is largely limited to whatever the client supports, and gzip works pretty much everywhere, so it's generally preferred.
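A rough sketch of that split, just to illustrate the idea (the "keep a .gz copy next to the original" layout is an assumption, not how any particular server does it):

```python
# Minimal sketch: serve a pre-compressed copy of static files when the client
# advertises gzip support, and fall back to live compression for generated data.
# The ".gz next to the original" layout is an assumption for illustration.
import gzip
import os

def pick_static_file(path: str, accept_encoding: str):
    """Return (file_to_send, content_encoding) based on what the client supports."""
    gz_path = path + ".gz"
    if "gzip" in accept_encoding and os.path.exists(gz_path):
        return gz_path, "gzip"   # compressed once ahead of time, no CPU cost per request
    return path, "identity"      # client can't handle gzip, send the original as-is

def compress_dynamic(body: bytes) -> bytes:
    """Generated responses (e.g. a JSON API) have to be compressed per request."""
    return gzip.compress(body)
```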
Yeah, if the page is dynamically generated, you would likely live-compress it, but it would also be so little data that the CPU overhead would be pretty minimal, and you could still cache the compressed output if you tried hard enough. For something like Steam, where you're effectively shipping and unpacking a 50-200 GB zip file, there's no reason not to compress it statically.
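By "cache the compressed output" I mean something roughly like this sketch; the plain in-memory dict is just a stand-in for whatever real cache you'd use:

```python
# Rough sketch: compress a generated response once and reuse the result for
# identical bodies. The bare dict is a placeholder for a real cache with eviction.
import gzip
import hashlib

_cache: dict[str, bytes] = {}

def cached_gzip(body: bytes) -> bytes:
    key = hashlib.sha256(body).hexdigest()
    if key not in _cache:
        _cache[key] = gzip.compress(body)   # pay the CPU cost only the first time
    return _cache[key]
```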
I'm not really sure how latency is relevant to FS operations. Are you saying that if the CPU is lagging behind the read speed, it'll mess up the stream? Or are you saying something else? I'm not an expert on filesystems.
It's important because the entire system is based on a filesystem. If you're making regular calls to a drive in high quantity, latency is going to become a bottleneck pretty quickly. Obviously it doesn't matter much for certain workloads, but after a certain point it can start being problematic. There's practically no chance of corruption or anything like that, unless you have a dysfunctional compression/decompression algorithm, but you would likely see noticeably slower system performance in disk benchmarks specifically, especially if you're running really fast drives like Gen 4 NVMe SSDs. Ideally it shouldn't be a huge thing, but it's something to consider if you care about it.
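To make the benchmark point concrete, here's a crude sketch of the comparison (file paths are hypothetical, and a real disk benchmark would also have to control for the page cache, which this doesn't):

```python
# Crude sketch: compare plain reads against read-plus-decompress to see where the
# extra per-call overhead comes from on a fast drive. Paths are hypothetical and
# the OS page cache is ignored, so treat the numbers as illustrative only.
import gzip
import time

def bench(raw_path: str, gz_path: str, n: int = 1000) -> None:
    t0 = time.perf_counter()
    for _ in range(n):
        with open(raw_path, "rb") as f:
            f.read()                 # bounded by drive latency/bandwidth
    t_raw = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(n):
        with gzip.open(gz_path, "rb") as f:
            f.read()                 # same read plus CPU time to decompress, every call
    t_gz = time.perf_counter() - t0

    print(f"raw: {t_raw:.3f}s  gzip: {t_gz:.3f}s")
```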
There are two primary things to consider when making a functional file system: atomicity, because you want to be able to write data and be certain it was written correctly (to prevent corruption), and performance. File I/O is always one of the slowest forms of interaction; it's why you have RAM, and it's why your CPU has cache, but optimizing drive performance in software is still a free performance gain. That's an improvement that can make heavy read/write operations faster, more efficient, and more scalable, which in the world of super fast modern NVMe drives is something we're all thankful for.

If you remember the switch from spinning rust to solid state storage for operating systems, you'll see a similar improvement. HDDs necessarily have really bad random IOPS performance: the head has to physically seek to the data on the platter and read it back, which is mechanically limited and increases latency considerably. SSDs don't have this problem, because they address flash cells electronically with no moving parts, so you get massively better random IOPS from an SSD compared to an HDD. And that's still true today.
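The atomicity half is usually handled with some variant of the write-to-temp-then-rename pattern. A minimal sketch of the userspace version of the idea (not how any particular filesystem does it internally):

```python
# Minimal sketch of atomic-ish writes from userspace: write a temp file, flush it
# to disk, then rename over the target. rename/replace is atomic on POSIX
# filesystems, so readers see either the old contents or the new ones, never a
# partial write. A fully durable version would also fsync the directory.
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    directory = os.path.dirname(path) or "."
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(data)
            tmp.flush()
            os.fsync(tmp.fileno())   # make sure the data actually hit the disk
        os.replace(tmp_path, path)   # atomic swap into place
    except BaseException:
        os.unlink(tmp_path)          # clean up the temp file on failure
        raise
```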
Hopefully it improves. I'm honestly waiting for Heroic to implement Steam support, or for a decent CLI implementation to come about. SteamCMD does exist, but it's meant for server hosting; theoretically you could use it as a client, but I don't think that's recommended, for several reasons.