What Is a CPU Cache?
In computer systems, the CPU cache (referred to simply as the cache in this article) is a component used to reduce the average time the processor needs to access memory. It sits in the second layer from the top of the pyramid-shaped storage hierarchy, just below the CPU registers. Its capacity is much smaller than that of main memory, but its speed can approach the processor's clock frequency.
- The capacity of the CPU cache is much smaller than that of main memory, but it exchanges data much faster. The cache exists mainly to resolve the mismatch between CPU speed and memory read/write speed: because the CPU operates far faster than memory can be read or written, the CPU would otherwise spend long stretches waiting for data to arrive or to be written back to memory. [1]
- The cache is a temporary store located between the CPU and main memory. Its capacity is smaller than memory's, but it exchanges data faster.
- The data in the cache is a small portion of main memory, namely the portion the CPU is about to access in the near future. When the CPU needs a large amount of data, it can fetch it directly from the cache and bypass main memory, which speeds up reads considerably (a small sketch of this effect follows this list). Adding a cache to the CPU is thus an efficient solution: the combination of cache and memory behaves like a single storage system with both the speed of the cache and the capacity of main memory.
- The cache has a great impact on CPU performance, mainly through the order in which the CPU exchanges data with it and through the bandwidth between the CPU and the cache.
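The effect described in the list above can be observed from software. The sketch below is illustrative only and not part of the original article: it sums the same matrix twice, once row by row, so that each cache line loaded from memory is fully reused, and once column by column, so that almost every access misses the cache and has to wait on memory. The matrix size `N` and the use of `clock()` are arbitrary choices for the demo, and the exact timings depend on the machine.

```c
/* Illustrative sketch: why keeping soon-to-be-accessed data in the cache
 * matters. Both loops perform exactly the same additions, but the row-major
 * walk touches consecutive addresses (cache friendly) while the column-major
 * walk strides by N*sizeof(int) per access (cache unfriendly). */
#include <stdio.h>
#include <time.h>

#define N 4096

static int a[N][N];

int main(void) {
    long long sum = 0;
    clock_t t0, t1;

    /* Fill the matrix so the loops below read real data. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = i + j;

    /* Row-major traversal: consecutive memory addresses. */
    t0 = clock();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    t1 = clock();
    printf("row-major:    %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    /* Column-major traversal: large strides, a new cache line almost every access. */
    t0 = clock();
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    t1 = clock();
    printf("column-major: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    return (int)(sum & 1); /* keep the compiler from discarding the loops */
}
```

On typical desktop hardware the column-major loop runs several times slower, even though the arithmetic is identical; the difference is entirely due to how well each loop uses the cache.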
First- and Second-Level Cache Comparison
The part of a cache that stores data is a form of RAM, and its contents are lost when power is removed. There are two kinds of RAM: static RAM (SRAM) and dynamic RAM (DRAM). SRAM is much faster than DRAM, while main memory is generally built from DRAM. The CPU's L1 cache is usually SRAM, which is very fast, but SRAM has low density (storing the same data takes roughly six times the area of DRAM) and is expensive (at the same capacity it costs roughly four times as much as DRAM). Simply enlarging the SRAM cache is therefore not cost-effective. To improve system performance anyway, a compromise is used: keep the original SRAM cache at its size and add some higher-speed DRAM as a second-level (L2) cache. This high-speed DRAM is faster than ordinary DRAM but slower than the SRAM cache, and its cost is moderate.

The contents of the L1 and L2 caches are copies (mappings) of frequently accessed data in main memory; they exist to reduce how often the fast CPU must access slow memory. High-end and low-end CPUs built on the same core often differ precisely in their L2 cache, which shows how important the L2 cache is to the CPU. Higher-end CPUs add a third-level (L3) cache for data that misses in the L2 cache. In a sense, raising the hit and prefetch efficiency this way keeps production cost low while delivering performance close to the ideal. Unless manufacturing technology one day advances far enough to make such tiers unnecessary, main memory will remain, and the performance gain provided by the cache will remain with it.
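To see the cache levels described above on a concrete machine, a short query program can be used. The sketch below assumes Linux with glibc: the `_SC_LEVEL*_CACHE_SIZE` names are glibc extensions to `sysconf()`, are not portable, and may report 0 or -1 where the value is unknown.

```c
/* Minimal sketch, assuming Linux with glibc: print the sizes and line sizes
 * of the L1 data, L2, and L3 caches as reported by sysconf(). */
#include <stdio.h>
#include <unistd.h>

static void show(const char *name, int size_name, int line_name) {
    long size = sysconf(size_name);
    long line = sysconf(line_name);
    printf("%-8s size: %8ld bytes, line: %3ld bytes\n", name, size, line);
}

int main(void) {
    show("L1 data", _SC_LEVEL1_DCACHE_SIZE, _SC_LEVEL1_DCACHE_LINESIZE);
    show("L2",      _SC_LEVEL2_CACHE_SIZE,  _SC_LEVEL2_CACHE_LINESIZE);
    show("L3",      _SC_LEVEL3_CACHE_SIZE,  _SC_LEVEL3_CACHE_LINESIZE);
    return 0;
}
```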
Relationship Between the CPU Cache and Memory

Since the CPU cache can improve CPU performance so much, some readers may ask: could system memory one day be replaced entirely by the CPU cache?

The answer is no. Although the cache's transfer rate is indeed high, it cannot completely replace memory, mainly because the cache holds only a copy of a small part of the data in memory. When the CPU looks for data in the cache, it may not find it, because that data has not yet been copied from memory into the cache. In that case the CPU still has to fetch the data from memory, which slows the system down, but it also copies the data into the cache so that the next access does not have to go back to memory. This also means that beyond a certain size, a larger cache has a smaller and smaller effect on CPU performance and becomes less and less cost-effective; in capacity, cost, and power consumption the cache is far from able to compete with memory. In a sense, memory itself is also a kind of cache for the CPU: much slower, but with huge advantages in capacity, power consumption, and cost. If memory ever becomes fast enough, it could well replace the CPU cache rather than the other way around.

The cache's replacement algorithm also matters. Even when the cache integrated into the CPU exchanges data very quickly, it still has to screen the data it keeps, because the set of most frequently accessed data is not static: data that was cold a moment ago may suddenly become hot, and the hottest data of a moment ago may go cold. The cache contents must therefore be replaced according to an algorithm that keeps the most frequently accessed data in the cache. The most commonly used such algorithm is least recently used (LRU): each cache line has a counter, and when a line must be replaced, the line whose counter shows it has gone unused the longest is evicted (see the sketch below). This is an efficient, practical scheme that improves cache utilization.
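The counter-based LRU replacement described above can be sketched in a few lines. The code below is a toy model for illustration, not how any particular CPU implements it: it keeps a small fully associative set of tagged lines, ages every line's counter on each access, resets a line's counter when it is hit, and on a miss evicts the line with the largest counter, i.e. the least recently used one. Real hardware typically uses cheaper approximations of the same idea.

```c
/* Toy LRU cache (illustrative only): NLINES fully associative lines, each
 * with a counter recording how long it has gone unused. */
#include <stdio.h>

#define NLINES 4

struct line {
    int valid;
    int tag;     /* which memory block this line holds */
    int counter; /* accesses since this line was last used */
};

static struct line cache[NLINES];

static void access_block(int tag) {
    int victim = 0;

    /* Age every valid line. */
    for (int i = 0; i < NLINES; i++)
        if (cache[i].valid)
            cache[i].counter++;

    /* Hit: refresh the line's counter and return. */
    for (int i = 0; i < NLINES; i++) {
        if (cache[i].valid && cache[i].tag == tag) {
            cache[i].counter = 0;
            printf("hit  %d\n", tag);
            return;
        }
    }

    /* Miss: prefer an empty line, otherwise evict the largest counter (LRU). */
    for (int i = 0; i < NLINES; i++) {
        if (!cache[i].valid) { victim = i; break; }
        if (cache[i].counter > cache[victim].counter) victim = i;
    }
    printf("miss %d (evicts line %d)\n", tag, victim);
    cache[victim].valid = 1;
    cache[victim].tag = tag;
    cache[victim].counter = 0;
}

int main(void) {
    /* Block 1 is reused, so block 2 (untouched longest) is evicted when 5 arrives. */
    int trace[] = {1, 2, 3, 4, 1, 5, 2};
    for (size_t i = 0; i < sizeof(trace) / sizeof(trace[0]); i++)
        access_block(trace[i]);
    return 0;
}
```

Running it on the short access trace in `main` shows block 1 surviving because it was reused, while block 2, unused the longest, is the one evicted when block 5 arrives.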
As an integral part of the CPU, the cache is firmly built into how performance is improved. As manufacturing technology develops further, the number of cache levels will grow and their capacity will increase. As the CPU's performance booster, the cache will keep its advantages in cost and power control, and its performance will continue to improve.