Sunday, April 5, 2020

SOC Chips and Memory Design

A System-on-a-Chip, or SOC, is generally understood to mean a chip that includes one or more microprocessors (cores) and any number of associated peripheral functions. The definition is not a hard one. Often a SOC is categorized as any chip that performs the function of an entire electronic system, regardless of whether it contains a microprocessor. The term dates back to the 1980s, when the idea of a PC on a chip was introduced. By then, integrated circuit technology had advanced to the point that more than just a microprocessor could be integrated onto a single chip.

A typical microprocessor system of that era consisted of a microprocessor, a memory controller, and an IO controller. Memory controllers are used to access the DRAM, Flash, SRAM, and ROM banks that make up the computer's memory. IO controllers are used to access serial and parallel peripheral devices such as disk drives, USB ports, the keyboard, the mouse, and the screen. Another chip found in the classic PC is a DMA controller. A DMA controller, which can also be classified as an IO controller, accesses main memory without microprocessor intervention. This lowers memory access time and overall power consumption, and at the same time frees the microprocessor to perform other processing tasks.
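As a rough illustration of how software drives such a controller, the C sketch below programs a source address, a destination address, and a length into DMA registers and then starts the transfer. The register map (DMA_SRC, DMA_DST, DMA_LEN, DMA_CTRL, DMA_STAT) and its addresses are hypothetical, invented for this example rather than taken from any real part:

#include <stdint.h>

/* Hypothetical memory-mapped DMA controller registers. The addresses
   and bit definitions are illustrative only; a real device's register
   map will differ. */
#define DMA_BASE   0x40001000u
#define DMA_SRC    (*(volatile uint32_t *)(DMA_BASE + 0x00)) /* source address      */
#define DMA_DST    (*(volatile uint32_t *)(DMA_BASE + 0x04)) /* destination address */
#define DMA_LEN    (*(volatile uint32_t *)(DMA_BASE + 0x08)) /* bytes to move       */
#define DMA_CTRL   (*(volatile uint32_t *)(DMA_BASE + 0x0C)) /* control register    */
#define DMA_STAT   (*(volatile uint32_t *)(DMA_BASE + 0x10)) /* status register     */

#define DMA_CTRL_START 0x1u
#define DMA_STAT_DONE  0x1u

/* Copy len bytes from src to dst without the CPU touching each word.
   Once the transfer is started, the CPU is free to do other work until
   it polls (or, on real hardware, takes an interrupt) for completion. */
static void dma_copy(uint32_t src, uint32_t dst, uint32_t len)
{
    DMA_SRC  = src;
    DMA_DST  = dst;
    DMA_LEN  = len;
    DMA_CTRL = DMA_CTRL_START;      /* kick off the transfer */

    while ((DMA_STAT & DMA_STAT_DONE) == 0)
        ;                           /* CPU could run other tasks here */
}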

Today a SOC includes much more than a microprocessor and a few peripheral control chips. For example, the Snapdragon line of mobile phone processors includes up to eight microprocessor cores, a graphics processing unit, a DSP processor, a security processor, cache tag and data SRAM, and a DDR SDRAM and cache memory controller.

The block diagram below shows a SOC chip that includes two microprocessor cores, data cache SRAM, a DDR SDRAM controller, and a cache controller with a content addressable memory (CAM). Having the data cache SRAM on chip speeds memory accesses and lowers power consumption. Sending data off chip always increases delay and raises power consumption, because internal chip nodes do not carry the high capacitance associated with printed-circuit-board traces and a chip's input and output pins.
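The capacitance effect follows from the standard dynamic power relation P = C x V^2 x f. The sketch below plugs in illustrative numbers; the capacitance, voltage, and frequency values are assumptions chosen only to show the on-chip versus off-chip gap, not measurements of any particular chip:

#include <stdio.h>

/* Dynamic switching power: P = C * V^2 * f.
   All values below are assumed round numbers for comparison. */
int main(void)
{
    double v     = 1.0;      /* supply voltage, volts (assumed)       */
    double f     = 500e6;    /* switching frequency, Hz (assumed)     */
    double c_on  = 10e-15;   /* ~10 fF internal node (assumed)        */
    double c_off = 10e-12;   /* ~10 pF pad plus PCB trace (assumed)   */

    printf("on-chip : %.3f mW\n", c_on  * v * v * f * 1e3);
    printf("off-chip: %.3f mW\n", c_off * v * v * f * 1e3);
    return 0;
}

With roughly a thousand times less capacitance to charge and discharge, the on-chip signal burns roughly a thousandth of the switching power, which is why keeping the data cache on chip pays off.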

The integrated cache controller, CAM, and data cache are critical to improving memory access time.

The CAM also speeds memory access. A CAM is a special type of memory that uses digital comparators to locate data in a high speed data cache SRAM. The cache tag SRAM, or CAM, holds the addresses of the data the microprocessor uses most often. When the microprocessor sends an address to the cache tag, a bank of comparators compares that address against every CAM location simultaneously to see whether the address is present. If there is a match, the matching location signals a "hit." The cache tag controller then accesses the corresponding data cache SRAM address so the microprocessor can read the data.
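A software model can make the lookup concrete. The C sketch below simulates a small fully associative cache tag; in hardware every entry has its own comparator and all comparisons fire in the same cycle, so the loop here is only a sequential stand-in for that parallel comparator bank. The table size and field names are assumptions for illustration:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define CAM_ENTRIES 8   /* tiny tag array, purely illustrative */

/* One cache tag entry: a stored address and a valid bit. */
struct tag_entry {
    uint32_t addr;
    bool     valid;
};

static struct tag_entry cam[CAM_ENTRIES];

/* Look up an address in the tag CAM. On a hit, *loc receives the
   matching location, which indexes the data cache SRAM. */
static bool cam_lookup(uint32_t addr, unsigned *loc)
{
    for (unsigned i = 0; i < CAM_ENTRIES; i++) {
        if (cam[i].valid && cam[i].addr == addr) {
            *loc = i;          /* "hit" signal plus matching location */
            return true;
        }
    }
    return false;              /* miss: fetch from slower memory */
}

int main(void)
{
    unsigned loc;
    cam[3] = (struct tag_entry){ .addr = 0x1000u, .valid = true };

    if (cam_lookup(0x1000u, &loc))
        printf("hit at location %u -> read data cache SRAM\n", loc);
    else
        printf("miss -> access DRAM\n");
    return 0;
}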

One of the reasons why CAMs are so much faster than SRAMs or DRAMs is that a comparator, rather than a decoder, is used to locate an address. A decoder has several layers of logic that add delay to every memory access. DRAMs, because of their much more complex decoding scheme, are slower than SRAMs. The key point to remember about cache memory architecture is that it keeps the most frequently used data in localized high speed SRAM, so accesses to slower DRAM, Flash, and disk drives are minimized. In fact, accesses to slower memory devices can be reduced by as much as 98 percent.
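That 98 percent figure translates into an average access time through the standard formula AMAT = hit time + miss rate x miss penalty. The sketch below uses assumed latencies (1 ns for the on-chip cache, 60 ns for off-chip DRAM) to show how strongly a high hit rate pulls the average toward the fast SRAM:

#include <stdio.h>

/* Average memory access time: AMAT = hit_time + miss_rate * miss_penalty.
   The latencies are assumed round numbers, not vendor data. */
int main(void)
{
    double hit_time     = 1.0;   /* ns, on-chip data cache SRAM (assumed) */
    double miss_penalty = 60.0;  /* ns, off-chip DRAM access (assumed)    */
    double hit_rate     = 0.98;  /* the 98 percent figure from the text   */

    double amat = hit_time + (1.0 - hit_rate) * miss_penalty;
    printf("AMAT = %.2f ns (vs. %.1f ns with no cache)\n",
           amat, miss_penalty);
    return 0;
}

With these numbers the average access takes about 2.2 ns rather than 60 ns, a roughly 27x speedup from the cache.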


