The Toshiba OCZ RC100 has no DRAM chip on its PCB, which reduces cost even beyond what TLC alone makes possible, because a significant portion of an SSD's BOM cost comes from DRAM, on the order of $8 per GB of DRAM.
The Flash Translation Layer
Traditionally, SSDs have used DRAM memory as a cache to store the block map, which the Flash Translation Layer (FTL) uses to keep track of where on the various flash chips a certain block of data is stored. One key to turning slow flash memory into awesomely fast SSDs is to employ multiple flash chips that are written to in parallel, spreading the load over as many chips as possible, which adds up their transfer rates and reduces latency. A single data block (as seen by the operating system) is pretty much never guaranteed to end up on a single flash chip in a contiguous way. Rather, the block is split up into multiple smaller chunks, each of which gets written to a separate NAND chip. These smaller chunks usually don't end up at the same relative address inside each chip either, so a mechanism is needed to keep track of where the data is physically located, which is exactly what the NAND block map does. The flash translation layer serves as a dictionary for the SSD controller, so it can find your data again.
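As a rough illustration, the block map really can be thought of as a dictionary from logical page numbers to physical NAND locations. The Python sketch below shows the idea; the chip count, pages-per-block figure, and the simplistic round-robin allocator are assumptions for illustration, not how any real firmware works:

```python
# Illustrative FTL block map (hypothetical layout and sizes; real
# firmware is written in C and is far more involved).
from typing import NamedTuple

class PhysLoc(NamedTuple):
    chip: int    # NAND die the chunk was written to
    block: int   # erase block within that die
    page: int    # page within the erase block

NUM_CHIPS = 4          # assumed number of NAND dies written in parallel
PAGES_PER_BLOCK = 256  # assumed pages per erase block

block_map: dict[int, PhysLoc] = {}   # logical page number -> physical location
next_free = [0] * NUM_CHIPS          # per-chip write pointer, in pages

def write_page(lpn: int) -> None:
    """Stripe consecutive logical pages across chips for parallelism."""
    chip = lpn % NUM_CHIPS                   # round-robin die selection
    offset = next_free[chip]
    next_free[chip] += 1
    block_map[lpn] = PhysLoc(chip, offset // PAGES_PER_BLOCK,
                             offset % PAGES_PER_BLOCK)

def read_page(lpn: int) -> PhysLoc:
    """The 'dictionary' lookup: where does this logical page actually live?"""
    return block_map[lpn]
```

Because consecutive logical pages land on different dies, a large read or write keeps all chips busy at once, which is where the parallelism payoff described above comes from.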
Since the mapping table gets read or written for every single read/write access, processing it should be as fast as possible, which is why it is usually stored in on-drive DRAM. DRAM is much faster than flash, roughly 1,000x faster to read and 10,000x faster to write than NAND, and it doesn't have a limited number of write cycles. DRAM is volatile, though, so the mapping table would disappear when the drive loses power. To prevent that, a copy of it is saved to NAND, with the working copy always residing in the DRAM cache for the fastest possible access.
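Conceptually, that arrangement looks something like the following minimal sketch; the helper names and the flush policy are hypothetical stand-ins for controller internals:

```python
# Sketch of the DRAM working copy with a NAND backup (illustrative only).
dram_map: dict[int, int] = {}     # fast working copy; lost on power loss
nand_backup: dict[int, int] = {}  # persistent copy kept on flash
dirty = False

def map_update(lpn: int, ppn: int) -> None:
    """Every host write updates the map in DRAM first: fast, and no flash wear."""
    global dirty
    dram_map[lpn] = ppn
    dirty = True

def flush_map_to_nand() -> None:
    """Run periodically and at shutdown so the map survives power loss;
    at boot, the controller reloads dram_map from this copy."""
    global dirty
    if dirty:
        nand_backup.update(dram_map)   # stand-in for a real NAND write
        dirty = False
```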
Typically, the mapping table takes up about 0.1% of the SSD's total storage capacity, because the translation data for each 4 KB page is 4 bytes long; for a 1 TB drive, that's 1 GB. For SSDs with capacities larger than 4 TB, this creates an additional challenge, as the controller would have to be a 64-bit processor to address a table larger than 4 GB. With DRAM chips reaching only up to 2 GB per chip, such large SSDs require multiple DRAM chips, which further increases cost, complexity, latency, and power consumption.
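The arithmetic is easy to verify; this little sketch reproduces the figures from the paragraph above:

```python
# Verifying the sizing claims: 4 bytes of translation data per 4 KB page.
ENTRY_BYTES = 4
PAGE_BYTES = 4 * 1024

def map_size(capacity_bytes: int) -> int:
    """Bytes of mapping table needed for a drive of the given capacity."""
    return capacity_bytes // PAGE_BYTES * ENTRY_BYTES

TB = 1000 ** 4
print(map_size(1 * TB) / (1 * TB))  # 0.0009765625 -> the ~0.1% figure
print(map_size(1 * TB) / 1e9)       # ~1.0 GB table for a 1 TB drive
print(map_size(4 * TB) / 1e9)       # ~3.9 GB: brushing against the 4 GB 32-bit limit
```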
Without DRAM?
The challenge for DRAM-less SSDs is to somehow run the flash translation layer without the immense performance benefits of a DRAM chip while still achieving acceptable performance. SSD controllers do have some small internal memory, on the order of megabytes (not gigabytes), that gets used for this task. One approach is to reduce the size of the mapping table by increasing its granularity: instead of addressing one 4K page per table entry, multiple pages are joined together. Some designs even increase the granularity all the way up to the NAND's erase block size (on the order of 1 MB, varying per chip model). This linearly reduces memory requirements for the mapping table; for example, just 4 MB for a 1 TB SSD at 1 MB granularity. The problem with this approach is that it can significantly hurt random write performance, as these larger blocks have to be read and rewritten every time a write request updates only part of a block. Sequential writes are not affected, since incoming data can be stored temporarily and then written in a large chunk matching the mapping table's granularity.
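The following sketch shows why coarse granularity hurts random writes: a single 4 KB update inside a 1 MB mapping unit forces the whole unit to be read and rewritten. The FakeNand class is an in-memory stand-in so the example runs; unit size and helper names are assumptions:

```python
# Read-modify-write penalty of coarse-grained mapping (illustrative).
UNIT = 1024 * 1024   # assumed mapping granularity (erase-block sized)
PAGE = 4096

class FakeNand:
    """In-memory stand-in for the flash array, just so the sketch runs."""
    def __init__(self):
        self.units: list[bytes] = []
    def read_unit(self, pu: int) -> bytes:
        return self.units[pu]
    def write_unit(self, data: bytes) -> int:
        self.units.append(data)   # NAND can't overwrite in place
        return len(self.units) - 1

coarse_map: dict[int, int] = {}   # logical 1 MB unit -> physical unit

def write_4k(addr: int, data: bytes, nand: FakeNand) -> None:
    unit = addr // UNIT
    old = nand.read_unit(coarse_map[unit])     # read the entire 1 MB unit
    off = addr % UNIT
    new = old[:off] + data + old[off + PAGE:]  # patch 4 KB inside it
    coarse_map[unit] = nand.write_unit(new)    # rewrite the entire 1 MB unit

nand = FakeNand()
coarse_map[0] = nand.write_unit(bytes(UNIT))   # pre-existing unit of data
write_4k(0x2000, b"\xff" * PAGE, nand)         # a 4 KB write costs 1 MB read + 1 MB write
```

That 256x write amplification on small random writes is exactly the trade-off the paragraph above describes; sequential writes dodge it because they fill whole units anyway.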
Another approach is to exploit the fact that data transfers are either large and sequential or small and random; no consumer application performs random writes to more than a few GB of data in a short time frame. This lets the controller read just a small portion of the mapping table (one that completely fits into its own memory) and then hope that future accesses go to roughly the same area of the disk, so that portion gets reused many times without incurring the performance hit of writing it back to flash and reading another section. This is similar to how your Windows pagefile works. Here again, the weak spot is random writes over a large area, which force the controller to constantly read and write the map, causing a drop in performance.
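In pseudocode terms, the controller demand-loads one segment of the map into its small internal memory and hopes the next accesses hit the same segment. The segment size and the SegmentStore interface below are made up for illustration:

```python
# Demand-loading map segments into the controller's small SRAM (sketch).
SEGMENT_ENTRIES = 1 << 20   # 2^20 entries x 4 bytes = a 4 MB map segment

class SegmentStore:
    """In-memory stand-in for the map's copy on NAND."""
    def __init__(self):
        self.segments: dict[int, dict[int, int]] = {}
    def load(self, seg: int) -> dict[int, int]:
        return self.segments.get(seg, {})
    def save(self, seg: int, data: dict[int, int]) -> None:
        self.segments[seg] = data

store = SegmentStore()
cached_id: int | None = None   # which segment currently sits in SRAM
cached: dict[int, int] = {}

def lookup(lpn: int) -> int | None:
    """Fast while accesses cluster within one segment; thrashes (constant
    write-back and reload) when random accesses span the whole drive."""
    global cached_id, cached
    seg = lpn // SEGMENT_ENTRIES
    if seg != cached_id:               # miss: swap segments, like a pagefile
        if cached_id is not None:
            store.save(cached_id, cached)
        cached = store.load(seg)
        cached_id = seg
    return cached.get(lpn % SEGMENT_ENTRIES)
```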
Several more advanced techniques exist, using combinations of the above, or more complex approaches similar to a journaling filesystem. This is an active topic of research (Google for "Flash Translation Layer +PDF").
NVMe Host Memory Buffer
Being an NVMe drive, the OCZ RC100 uses a fairly novel feature called Host Memory Buffer (HMB), which was introduced with NVMe 1.2. When the operating system driver initializes the SSD, the SSD reports back, in effect: "please allocate X MB of memory for me; if that's not possible, I need at least Y MB." The operating system can now decide to give the SSD anywhere between Y and X MB of memory, or none at all. Microsoft Windows 10's stock NVMe driver has supported HMB since the Fall Creators Update. This portion of memory is taken from your system RAM and becomes available exclusively to the SSD over PCI-Express. While PCI-Express is slower and has higher latency than an on-drive DRAM chip, it's still much faster than the drive's own NAND, and it's essentially free since modern computers have gigabytes of RAM, so a few missing MB won't matter. The SSD controller uses this memory to store the mapping table (or part of it), leading to significant performance gains.
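From the host's point of view, the negotiation boils down to the logic below. This is a simplification; in the real NVMe flow, the drive advertises its preferred and minimum sizes in the HMPRE/HMMIN Identify Controller fields, and the driver grants the buffer via the Set Features command (Host Memory Buffer, feature ID 0Dh):

```python
# Simplified sketch of HMB size negotiation from the host driver's
# perspective (not an actual driver implementation).
def negotiate_hmb(preferred_mb: int, minimum_mb: int, host_free_mb: int) -> int:
    """Return how many MB of host RAM the OS grants to the SSD."""
    if host_free_mb < minimum_mb:
        return 0                             # the host may decline entirely
    return min(preferred_mb, host_free_mb)   # grant up to the preferred size

# Values for the RC100 480 GB, from the next paragraph:
print(negotiate_hmb(preferred_mb=38, minimum_mb=10, host_free_mb=8192))  # 38
```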
The OCZ RC100 480 GB requests between 10 MB and 38 MB of host memory, which isn't a lot. It's certainly not enough for a full copy of the flash translation layer data, which would come to roughly 480 MB at the usual 4 bytes per 4 KB page, so the controller keeps only a subset of the table in host memory.