Okay, so where SSDs bog down, as with any drive (but it's more noticeable on SSDs because of how fast they normally are), is small random writes. The reason this is such a problem for SSDs is that, once the wear-leveling algorithm has written data to all of the empty pages on the SSD, every further write turns into a full read-modify-erase-write cycle: read an entire block of pages into the SSD's RAM, change or erase the data in the targeted pages, erase the old block from the flash, then write the modified block back from RAM to flash.

The better SSDs support a command called TRIM, which lets the OS tell the drive which pages hold deleted data; the drive can then erase those unused pages here and there while it's idle, so the slow rewrite process doesn't affect the user's experience. Unfortunately, my SSD is too cheap for that, and Windows XP doesn't know how to issue the TRIM command anyway -- I'd have to manually run a TRIM utility, kinda like having to manually run defrag on a normal hard drive.

So, if I want any better performance, I'll have to come up with a way to minimize the number of rewrite operations that have to be performed. What I'm thinking is this: reformat the SSD with the largest cluster size NTFS supports, which is 64kB -- 1/8 the size of a block of pages on this SSD. Why? Because then Windows will never read or write a chunk of data smaller than 64kB instead of the default 4kB, which (in theory) means 16x fewer rewrite operations over the same span of operating time.

Obviously that also means any file smaller than 64kB will still take up a full 64kB of allocated space on the SSD, so I'd be losing a decent chunk of space if there's a fuckton of small files. But I'm not sure I care, if it speeds up the SSD's operation, especially considering I have a 64GB SSD that I will almost certainly never fill. Am I speaking Chinese? Do I need to get my head checked? Or does that kinda make sense?
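For what it's worth, the cluster-size arithmetic works out like this (just a back-of-the-envelope sketch; the 512kB erase-block size is inferred from "64kB is 1/8 the size of a block", and the worst-case assumption is that every filesystem write triggers a full block rewrite):

```python
# Sizes taken (or inferred) from the post above.
ERASE_BLOCK = 512 * 1024    # 64kB cluster = 1/8 of a block -> 512kB block
SMALL_CLUSTER = 4 * 1024    # typical NTFS default cluster size
LARGE_CLUSTER = 64 * 1024   # largest cluster size NTFS supports

# Suppose Windows needs to update 1 MB of scattered data on the SSD.
data = 1024 * 1024
writes_small = data // SMALL_CLUSTER   # 256 separate cluster writes
writes_large = data // LARGE_CLUSTER   # 16 separate cluster writes

# In the worst case each write costs one read-modify-erase-write cycle
# of a full 512kB block, so the ratio of block rewrites is just the
# ratio of cluster counts.
print(writes_small // writes_large)    # -> 16
```

So the "16x fewer" figure is simply 64kB / 4kB; it's an upper bound that assumes every small write would otherwise land in a different, already-full block.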