What is SDRAM (synchronous DRAM)?
Synchronous Dynamic Random-Access Memory (SDRAM) is a generic name for DRAM whose operation is synchronized with the system clock that drives the microprocessor. Synchronizing the SDRAM with the system bus increases the number of instructions the processor can complete in a given time, making data transfer faster in SDRAM than in asynchronous DRAM.
Structure of SDRAM
The specifics of the architecture of SDRAM chips can vary among manufacturers. However, the general setup usually remains the same.
Within the SDRAM, data is stored in capacitors, or memory cells, that form a two-dimensional array organized into rows and columns. The rows are called pages. The array is the area of the SDRAM chip that contains the memory cells. It is divided into multiple banks, which in turn are divided into smaller areas called segments.
The array does not occupy the entire area of the chip. The area that's not occupied by the array is called the periphery, which holds the control and addressing circuitry. Line drivers and sense amplifiers are also located in the periphery.
The area occupied by the array determines the SDRAM's cell efficiency, i.e., the share of the chip devoted to on-board memory. The periphery does not contribute to the on-board memory. Cell efficiency is expressed as a percentage using the following formula:

Cell efficiency = (Array area / Overall chip area) x 100
Cell efficiency for most SDRAMs tends to be in the range of 60%-70% although this can vary depending on the manufacturer.
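The formula above can be sketched in a few lines of Python. The die areas below are hypothetical, chosen only to land in the typical range:

```python
def cell_efficiency(array_area_mm2: float, chip_area_mm2: float) -> float:
    """Return cell efficiency as a percentage of total die area."""
    return array_area_mm2 / chip_area_mm2 * 100

# Hypothetical die: a 42 mm^2 array on a 65 mm^2 chip.
print(round(cell_efficiency(42.0, 65.0), 1))  # 64.6 -- within the typical 60%-70% range
```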
How SDRAM works
To access data, the memory controller first addresses the required page. It then addresses a specific column within that page. The controller sends the row and column elements separately to allow for the successive addressing of multiple column elements when a row is open.
Once the controller sends row and column addresses to the SDRAM, the SDRAM activates the corresponding memory cell and reads the data stored in it or writes data into it. The memory chip's timing is synchronized with the CPU's system clock so the SDRAM can quickly retrieve data and supply bits.
There's no need to resend row addresses using this technique. Rather, multiple columns can be addressed on the open row to isolate the data storage elements (cells) to be read from or written to. Also, a burst counter on the chip ensures that the column part of the address is quickly incremented. This method reduces latency and improves memory access speed in sequential reads. Also, the package pin count of SDRAM is lowered since the row and column addresses are multiplexed onto the same lines. Fewer pins result in a smaller package, which then reduces the overall chip cost.
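The addressing scheme above can be modeled with a short Python sketch. This is a toy illustration, not any specific chip's behavior: the 10-bit column width, the flat-list "memory" and the wrap-within-row burst are all simplifying assumptions.

```python
def split_address(addr: int, column_bits: int = 10) -> tuple[int, int]:
    """Split a flat address into (row, column) parts, as a memory
    controller does before driving the multiplexed address lines."""
    return addr >> column_bits, addr & ((1 << column_bits) - 1)

def burst_read(memory: list, start_addr: int, burst_length: int,
               column_bits: int = 10) -> list:
    """Read a burst: the row is sent (opened) once, then the on-chip
    burst counter increments only the column for each transfer."""
    row, col = split_address(start_addr, column_bits)
    columns = 1 << column_bits
    return [memory[(row << column_bits) | ((col + i) % columns)]
            for i in range(burst_length)]

# Toy memory: 4 rows of 1024 columns, each cell storing its own address.
mem = list(range(4096))
print(burst_read(mem, 1022, 4))  # [1022, 1023, 0, 1] -- the column wraps within the open row
```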
The speed of SDRAM is rated in MHz rather than in nanoseconds (ns), which makes it easier to compare the bus speed and the RAM chip speed. The clock speed can be converted to a cycle time in nanoseconds by dividing 1 billion ns (one second) by the clock speed in Hz. For example, an 83 MHz RAM has a cycle time of about 12 ns.
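That conversion is simple enough to express directly:

```python
def mhz_to_ns(clock_mhz: float) -> float:
    """Convert a clock frequency in MHz to its cycle time in nanoseconds."""
    return 1_000_000_000 / (clock_mhz * 1_000_000)

print(round(mhz_to_ns(83)))  # 12 -- matches the 83 MHz example above
```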
Uses of SDRAM
SDRAM memory modules are compatible with a wide range of computing systems, including desktops and laptops. Many servers also use SDRAM. This makes SDRAM suitable for many different applications, including basic applications like web browsing and word processing, as well as more complex applications like gaming.
That said, memory requirements vary by application so it's important to check these requirements before implementing SDRAM. This will ensure compatibility between the SDRAM and the system bus and prevent accessibility or performance problems later.
Also, complex, resource-intensive applications and applications with high graphics requirements may require higher amounts of SDRAM compared to simpler applications. One example is games. Games that feature complex simulations, detailed environments, demanding graphics and AI capabilities can benefit from a system with multiple SDRAM modules. Fast modules with low latency can speed up game data access, reduce loading times, and make gameplay smoother and more compelling. More SDRAM is also recommended for scenarios involving heavy multitasking.
Benefits and limitations of SDRAM
SDRAM offers a good balance between speed, density and capacity, and it significantly outperforms older asynchronous DRAM.

Cost effectiveness is another benefit. SDRAM is far less expensive per bit than static random-access memory (SRAM), while being faster than the asynchronous DRAM it replaced. For these reasons, SDRAM became a popular choice for many modern computing systems and applications.
Another benefit of SDRAM is that it's possible to overclock the SDRAM to increase its data transfer rate. System admins can do this by modifying the frequency and timings in the computer's BIOS settings. However, it's important to do this correctly, otherwise it can cause the system to become unstable. Carelessness or inexperience with these adjustments can also result in data loss.
A drawback of SDRAM is that it cannot be used with older motherboards. Also, it's not advisable to mix different types or generations of SDRAM, unless the motherboard allows for such mixing-and-matching. If it doesn't, using different types may adversely affect the system's performance and stability.
Another drawback of SDRAM is that it operates at only a single data rate (SDR). This limits the data transfer rate because data is transferred on only one edge of the clock signal (either the rising or falling edge) per clock cycle. For example, if the clock runs at 100 MHz and each transfer is one bit, SDR can transfer at most 100 million bits per second.
DDR SDRAM (Double Data Rate SDRAM) eliminates this drawback of SDRAM. So, for a clock running at 100 MHz, the data transfer rate of DDR would be 200 million data bps. This is because unlike SDRAM, DDR SDRAM transfers data twice per clock cycle, once on the rising edge and once on the falling edge of the clock signal.
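The SDR and DDR figures above follow from one formula, sketched here (the one-bit transfer width is the simplifying assumption carried over from the example):

```python
def transfer_rate(clock_mhz: int, transfers_per_cycle: int = 1,
                  bits_per_transfer: int = 1) -> int:
    """Peak data rate in bits per second."""
    return clock_mhz * 1_000_000 * transfers_per_cycle * bits_per_transfer

print(transfer_rate(100))                         # SDR at 100 MHz: 100000000
print(transfer_rate(100, transfers_per_cycle=2))  # DDR at 100 MHz: 200000000
```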
History of SDRAM
The first SDRAM, the KM48SL2000, was manufactured by Samsung Electronics in 1992 using a complementary metal-oxide-semiconductor (CMOS) fabrication process. The capacity of this SDRAM was just 16 Mbit. Within a year, the company was mass-producing it.
The CMOS IC fabrication process generally starts with a p-type substrate and involves the use of SiO2 (silicon dioxide, also known as silica) for oxidation. A protective layer of SiO2 is grown on the surface of the wafer to protect the junctions and to isolate devices from each other. Then a light-sensitive photoresist is applied to the wafer and selectively exposed to UV light through a mask to transfer the desired pattern onto the surface.
Next, an etching process removes the SiO2, and a thin film of material (polysilicon or SiO2) is deposited on the surface of the Si wafer. Finally, some impurities are added to the wafer, a metal contact is created and the NMOS and PMOS terminals are assigned names.
Within a decade of its introduction, SDRAM became the RAM of choice for many device manufacturers due to its ability to offer lower latency, faster speed, greater performance and cost effectiveness.
Samsung released the DDR SDRAM in 1998. This 64Mbit chip is a more modern evolution of SDRAM. It doubles the data transfer rate compared to traditional SDRAM because the data can be transferred on both the rising and falling edges of the clock signal.

Over the years, newer evolutions of SDRAM have emerged, such as DDR2 SDRAM, DDR3 SDRAM and DDR4 SDRAM. These modules offer higher data transfer rates and better system performance than the original SDRAM. The latest DDR RAMs can handle more data at once and use less power to reduce the system's overall energy consumption.
Flash memory and conventional RAM, though both classified as solid-state memory, are significantly different. Learn more about the differences between flash memory and RAM.