Audio recording
Audio recording has its own special hardware requirements, which are almost the opposite of those for every other computer use.
Latency
Latency is critical in audio recording.
While for every other computer use the goal is maximum performance per second, for audio the requirement is a guaranteed minimum number of kilobytes delivered every millisecond.
The major consideration for audio recording is latency, which is the total of the delays that occur as the information travels through the system. People begin to notice delays of a few milliseconds, especially when listening to themselves in real time. Analog-to-digital (ADC) and digital-to-analog (DAC) converters have a delay. Every communication channel, like USB or SATA, has delays due to its buffers. These buffers queue up information while the next stage finishes processing the previous block of information.
For most computer purposes, people only tend to notice when delays reach into the seconds, so maximum transfer rates are what matter. The priority is to make sure that there is always information ready when it's needed for transmission. Multiple buffers help to queue up the information so that a full buffer is always ready to go. Buffers of 30 milliseconds or more are of no consequence compared to the total time of transferring a few hundred megabytes. However, always keeping buffers full for maximum throughput means that every transfer is delayed by the length of the buffer.
That will not work for audio, so the audio buffers in the digital audio workstation (DAW) software need to be as short as possible to limit the maximum possible delays. However, there must always be data ready, otherwise there will be glitches in the audio. This is why audio recording systems must sustain a minimum transfer rate per millisecond. The combination of minimum short-timeframe rates with minimal buffer sizes means audio recording systems need to be finely tuned to be reliable, especially since the worst case must be catered for to avoid glitches.
An analogy is going into a bank to see a teller (rare these days). If there are no queues, there is no wait. But if the queues are full, then everybody waits the same maximum amount of time. Of course, banks want maximum staff utilisation, so they will want queues near full rather than tellers waiting for customers. This is the situation with drive makers, who want to show off maximum throughput by testing with all buffers full. Audio systems are like a bank that never wants a customer to wait, but always wants customers to be there. It all has queuing theory at its core.
Some drive manufacturers, like Samsung, have modes for their SSDs that may lead to latencies up to 30 milliseconds, so those modes should be disabled for audio purposes. Note that it is only for recording that latencies have to be low. When mixing down, DAW buffer sizes can be maximised, especially if using processor-intensive plugins.
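To get a feel for the numbers involved, per-buffer latency is simply buffer length divided by sample rate. The sketch below is only an illustration; the 44.1kHz sample rate and the buffer sizes are example values, not recommendations.

```python
# Rough illustration: per-buffer latency for a range of DAW buffer sizes.
# The sample rate and buffer sizes are example values only.

SAMPLE_RATE_HZ = 44_100  # CD-quality sample rate (assumed for illustration)

for buffer_samples in (64, 128, 256, 1024, 2048):
    latency_ms = buffer_samples / SAMPLE_RATE_HZ * 1000
    print(f"{buffer_samples:5d}-sample buffer -> {latency_ms:5.1f} ms per buffer")
```

At 44.1kHz a 64-sample buffer adds about 1.5 ms per buffer in the chain, while a 2048-sample buffer adds about 46 ms, which is fine for mixing but clearly audible when monitoring a live performance through the DAW.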
SSDs
Solid state drives (SSDs) have made hard drives obsolete for audio work, so storage is no longer a bottleneck, but there are some considerations.
For samples, any write issues with SSDs are irrelevant - you are not writing to the drives except for the install and occasional updates. Instead, a single fast SSD gives better performance than RAID0 hard drives, and large SSDs use RAID0 internally anyway. The data will keep indefinitely, as there is no limit on the number of read cycles, and the write limits are never likely to come into play on a drive used only for samples.
Just note that not all the nominal capacity is actually available. All drives are quoted with 1GB = 10⁹ bytes rather than 2³⁰, making them about 7% smaller than might be expected. Take this into account when estimating how many drives will be needed for libraries.
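The shortfall is easy to check. The sketch below uses an assumed 500GB drive purely as an example:

```python
# Quoted drive capacities use decimal units (1 GB = 10**9 bytes), while the OS
# reports binary units (1 GiB = 2**30 bytes). The drive size is an assumed example.

quoted_gb = 500                          # drive sold as "500 GB"
bytes_total = quoted_gb * 10**9          # decimal gigabytes
reported_gib = bytes_total / 2**30       # what the OS will report
shortfall = (1 - reported_gib / quoted_gb) * 100

print(f"{quoted_gb} GB quoted -> {reported_gib:.1f} GiB reported ({shortfall:.1f}% less)")
```

This prints roughly 465.7 GiB, about 7% less than the figure on the box.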
As for SSD writes, this is where the crunch comes. To update an SSD, it is not just a matter of changing the sector involved: the whole block (256kB) must be read, the sector updated and the whole block written back. Using 4kB sectors can therefore cause a lot of rewrites of a block if many of its sectors are being updated. This is what is called write amplification - the ratio of the writes actually performed to the writes really required.
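As a hedged sketch of the scale of the effect, the snippet below estimates the worst case, assuming every sector write triggers a full block rewrite. Real controllers coalesce and cache writes, so this is an upper bound, not a measured figure.

```python
# Worst-case write amplification: every sector update rewrites its whole block.
# Real SSD controllers coalesce writes, so treat this as an upper bound only.

BLOCK_BYTES = 256 * 1024      # erase-block size from the text

def worst_case_write_amplification(sector_bytes: int) -> float:
    """Bytes physically written per byte the host asked to write."""
    return BLOCK_BYTES / sector_bytes

for sector in (4 * 1024, 64 * 1024):
    wa = worst_case_write_amplification(sector)
    print(f"{sector // 1024:2d} kB sectors -> up to {wa:.0f}x write amplification")
```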
Drive speeds can be substantially improved by using large sector sizes, such as 64kB instead of the standard 4kB. For audio, where files are megabytes in size, this costs only about 0.5% of capacity; for other files, such as OS, applications and general data, the loss is about 8%. Note that Windows does not allow setting the sector size for system drives. Even on HDDs, a 30-50% improvement in the time to copy 1GB of general and audio files between drives can be seen when using 64kB sectors compared to the default 4kB, probably because of the extra OS overhead in servicing each sector.
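Those capacity-loss figures can be sanity-checked with a rough slack-space model: on average each file wastes half a sector, so the percentage lost depends on the typical file size. The "typical" file sizes below are assumptions for illustration only.

```python
# Rough slack-space model: each file wastes, on average, half a sector.
# The typical file sizes are assumed figures, not measurements.

def slack_loss_percent(sector_bytes: int, avg_file_bytes: int) -> float:
    return (sector_bytes / 2) / avg_file_bytes * 100

for label, avg_file in (("audio/sample files (~5 MB)", 5 * 1024 * 1024),
                        ("general OS/app files (~400 kB)", 400 * 1024)):
    for sector in (4 * 1024, 64 * 1024):
        loss = slack_loss_percent(sector, avg_file)
        print(f"{label}, {sector // 1024:2d} kB sectors: ~{loss:.1f}% wasted")
```

With these assumed file sizes, 64kB sectors waste well under 1% on audio files but around 8% on general files, in line with the figures above.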
The other main write issue with SSDs is the limited number of write cycles to the same block - 100,000 for SLC types and 10,000 for the much cheaper MLC types. All MLC SSDs use smart algorithms to spread the writes evenly over all the blocks. However, if an SSD is nearly full of unchanging data while the little space left changes a lot, or there is constant rewriting (as in database servers), the write limits may be reached relatively quickly.
Servers would need SLC, but most other uses would likely NEVER see an issue with this using MLC SSDs. Note that the only effect of blocks running out of life is a reduction in capacity as they are taken out of the rewrite pool. Data is not lost at all - there is just less space left to write to.
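A back-of-the-envelope estimate shows why. With reasonable wear levelling, endurance is roughly capacity times cycle rating divided by daily writes. The drive size, daily write volume and write-amplification factor below are all assumed figures.

```python
# Back-of-the-envelope endurance estimate assuming effective wear levelling.
# Drive size, daily write load and write amplification are assumed examples.

CAPACITY_GB = 500          # drive capacity (assumed)
WRITE_CYCLES = 10_000      # MLC-class rating from the text
DAILY_WRITES_GB = 50       # daily write load (assumed)
WRITE_AMPLIFICATION = 3    # controller overhead factor (assumed)

total_writable_gb = CAPACITY_GB * WRITE_CYCLES
effective_daily_gb = DAILY_WRITES_GB * WRITE_AMPLIFICATION
lifetime_years = total_writable_gb / effective_daily_gb / 365

print(f"Roughly {lifetime_years:.0f} years before the write limit is reached "
      f"at {DAILY_WRITES_GB} GB/day (x{WRITE_AMPLIFICATION} write amplification)")
```

Even with these fairly heavy assumptions the write limit is decades away, which is why only server-style workloads really need SLC.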
Another issue associated with this is the loss of performance over time. It is caused by the pool of virgin blocks (those never written to) being used up: the wear-levelling algorithms write to them before reusing previously used blocks, which are marked as available but actually need a complete erase before they can be written again. Basically, the refresh of a used block is delayed until a write to it is needed, and any SSD that has been in use for a while will end up with only such blocks left, leading to a noticeable drop (a few per cent) in write performance.
To rectify this, both the SSD and the OS have to support the TRIM command, which tells the SSD that a block has been freed so it can be refreshed soon afterwards, meaning there is no delay when a write to it is eventually needed. Most OSs and drives now support TRIM.
If the data SSD starts losing capacity sooner due to heavier write use, just get a new one and relegate the old one to sample storage.
Some people RAID SSDs for better performance, but RAID may offer no advantage for audio. To maintain minimal latencies, all simultaneous tracks/samples have to be accessed quickly, and under heavy load the requests are scattered across so many different parts of the drive that the advantages of RAID never kick in. That is, unless sectors are bigger than stripe widths, RAID may offer no advantage over keeping libraries that are typically used together on separate non-RAID drives.
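One way to picture this, as a simplified sketch with assumed sizes: a single request only spans more than one RAID0 member when it is larger than the stripe width, so lots of small scattered sample reads mostly land on one drive at a time and gain nothing from striping.

```python
# Simplified illustration: how many RAID0 members a single read touches,
# ignoring alignment and the actual number of drives. Sizes are assumed examples.

import math

STRIPE_KB = 128            # assumed RAID0 stripe width

for read_kb in (4, 64, 512):
    # A request only spans more than one member when it is larger than a stripe.
    drives_touched = max(1, math.ceil(read_kb / STRIPE_KB))
    print(f"{read_kb:3d} kB read spans ~{drives_touched} RAID member(s)")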
Fragmentation
SSDs do have a form of fragmentation due to their blocks being large.
HDD fragmentation results from different sectors of a file being in non-contiguous parts of the drive, making the heads travel more than they need to. SSDs have no such positional penalties, so making sectors contiguous is unnecessary for that reason.
SSDs have 256kB blocks and so hold multiple sectors in one block. The problem is that to read a sector, the whole block is read, and a write is a block read-modify-write. Obviously, it would be better if each block only contained contiguous sectors from the same file, optimising how much useful data each read returns and minimising how many times each block is written (assuming the SSD firmware coalesces multiple sector writes to the same block into a single write). TRIM typically does not do this intra-block optimisation, so the easy way to minimise the problem is to specify larger sector sizes.
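A simplified sketch of the effect, using the block and sector sizes above and an assumed file size and scatter pattern: reading the same file touches far more blocks when its sectors are spread one per block than when they are packed together.

```python
# Simplified illustration of intra-block fragmentation: blocks touched when
# reading a file whose sectors are packed versus scattered one per block.
# The file size and worst-case scatter pattern are assumptions for illustration.

BLOCK_KB = 256
SECTOR_KB = 4
FILE_MB = 2                # example file size (assumed)

file_sectors = FILE_MB * 1024 // SECTOR_KB
sectors_per_block = BLOCK_KB // SECTOR_KB

contiguous_blocks = -(-file_sectors // sectors_per_block)   # ceiling division
scattered_blocks = file_sectors                              # worst case: one sector per block

print(f"{FILE_MB} MB file: {contiguous_blocks} blocks if contiguous, "
      f"up to {scattered_blocks} blocks if fully scattered")
```

With these assumed sizes, a 2MB file occupies 8 blocks when contiguous but could touch up to 512 blocks when fully scattered, which is exactly the overhead larger sectors help to avoid.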