What Is the Block Mapping Strategy?

The block mapping strategy determines where a new data block brought in from main memory is placed in the cache. According to how a block's position in the cache is determined, block mapping strategies are divided into direct mapping, fully associative mapping, and set-associative (group associative) mapping.

A fast, relatively small memory can be placed between the main memory and the CPU to hold the programs and data that the CPU is currently using or will use in the near future. This greatly speeds up the CPU's access to memory and improves the operating efficiency of the machine.
The function of the cache is to store the instructions and data that will be needed in the near future, so as to speed up the CPU's access to memory. To achieve this, two technical problems must be solved:
One is the mapping and translation between main memory addresses and cache addresses, which is the block mapping strategy; the other is the strategy for replacing a cache block when the cache is full. [1]

Block Mapping Strategy: Fully Associative Mapping

Fully associative mapping means that any block in main memory can be mapped to any block in the Cache: when a block of main memory needs to be transferred into the Cache, a block is chosen for it according to the occupancy or allocation of the Cache at that time, and the chosen Cache block can be any block in the cache. For example, suppose the cache has 2^C blocks and the main memory has 2^M blocks. When some block j of main memory needs to be transferred into the cache, it can be placed in block 0, block 1, ..., block i, ..., or block 2^C - 1 of the cache, that is, in any one of them, as shown in Figure 1.
(Figure 1) Fully associative mapping method
In fully associative mapping mode, the main memory address used by the CPU has the format shown in Figure 2:
(Figure 2) Main memory address format
Here, M is the main memory block number and W is the word address within the block. The format of the address used by the CPU to access the cache is shown in Figure 3.
(Figure 3) Cache address format
Here, C is the Cache block number and W is the word address within the block.
The translation from a main memory address to a Cache address is performed by looking up a block table held in an associative memory. The translation process is shown in Figure 4.
(Figure 4) Fully associative address translation
When a main memory block is loaded into the cache, its main memory block number and the corresponding cache block number are registered in a mapping table held in an associative memory. When the CPU accesses memory, it first searches the associative memory for the cache block number that corresponds to the main memory block number M in the main memory address. If an entry is found, the access is a cache hit: the cache block number is read out and placed in the block number field C of the cache address, the word address W within the block is copied directly from the main memory address into the W field of the cache address, and the resulting address is then used to access the cache.
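For illustration, here is a minimal Python sketch of such a lookup, using an ordinary dictionary as a stand-in for the associative block table; the block size, the helper names, and the example values are assumptions made here, not details from the article.

```python
# Minimal sketch of a fully associative lookup; all names and sizes are illustrative.

BLOCK_SIZE = 16          # words per block (assumed)

# Associative "block table": main memory block number M -> cache block number C
block_table = {}

def split_main_address(addr):
    """Split a main memory address into (M, W): block number and word-in-block."""
    return addr // BLOCK_SIZE, addr % BLOCK_SIZE

def lookup(addr):
    """Return the cache address on a hit, or None on a miss."""
    m, w = split_main_address(addr)
    c = block_table.get(m)          # search the associative memory by M
    if c is None:
        return None                 # miss: the block must be loaded first
    return c * BLOCK_SIZE + w       # hit: cache address formed from (C, W)

# Example: load main memory block 5 into cache block 2, then access word 3 of it.
block_table[5] = 2
print(lookup(5 * BLOCK_SIZE + 3))   # -> 2*16 + 3 = 35
```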
Advantages: The hit rate is relatively high, and the cache storage space is highly utilized.
Disadvantages: The associative memory is large and the comparison circuitry is complex. Each access to the associative memory must be compared against its entire contents, so the speed is low and the cost is high; this method is therefore only suitable for small-capacity caches and is rarely used.

Block Mapping Strategy: Direct Mapping

Address mapping rules: A block in the main memory can only be mapped to a specific block of the cache.
(1) The main memory and the cache are divided into data blocks of the same size.
(2) The main memory capacity should be an integer multiple of the cache capacity, and the main memory space is divided into areas according to the cache capacity; the number of blocks in each area of main memory equals the total number of cache blocks.
(3) When a block of some area in main memory is loaded into the cache, it can only be stored in the cache block with the same block number (see the sketch after this list).
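A minimal Python sketch of this placement rule follows; the number of cache blocks and the function name are assumed for illustration only.

```python
# Direct mapping placement rule: a main memory block can go into exactly one
# cache block. The cache size here is assumed for illustration.
NUM_CACHE_BLOCKS = 2048            # blocks per area (= total cache blocks)

def direct_map(main_block_number):
    """Return (area_number, cache_block_number) for a main memory block."""
    area   = main_block_number // NUM_CACHE_BLOCKS   # which area of main memory
    cblock = main_block_number %  NUM_CACHE_BLOCKS   # fixed cache block it maps to
    return area, cblock

print(direct_map(5000))   # -> (2, 904): area 2, cache block 904
```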
Figure 5 shows the direct mapping rule. As can be seen, blocks with the same block number in different areas of main memory are all transferred to the cache location with that same block number, but at any one time only the block from one area can reside in the cache. Since the main memory block number (within its area) and the cache block number are identical, the directory only needs to record the area number of the block that has been loaded.
(Figure 5) Direct mapping
Figure 6 shows the formats of the main memory and cache addresses, the format of the directory table, and the address translation rule. The block number and the address within the block are identical in the main memory and cache addresses. The directory table is stored in a small, high-speed memory; each entry contains two parts: the area number of the data block in main memory and a valid bit. The capacity of the directory table equals the number of cache blocks.
(Figure 6) Direct-mapping address translation
Address translation process: use the block number B in the main memory address to access the directory memory and compare the area number read out with the area number E in the main memory address. If they are equal and the valid bit is 1, the cache hits, and the cache address formed from the block number and the address within the block can be used directly to fetch from the cache. If they are not equal and the valid bit is 1, the block can be replaced. If the valid bit is 0, the required block can be loaded directly.
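The following Python sketch illustrates this check with a directory of (area number, valid bit) entries; the sizes and names are again assumptions, and loading the block itself is only indicated, not modeled.

```python
# Direct-mapped lookup with a directory of (area_number, valid) entries.
# All sizes and names are illustrative.
BLOCK_SIZE = 16
NUM_CACHE_BLOCKS = 2048

# One directory entry per cache block: (area number E, valid bit)
directory = [(0, False)] * NUM_CACHE_BLOCKS

def access(main_addr):
    """Return the cache address on a hit, or None on a miss/replacement."""
    block = (main_addr // BLOCK_SIZE) % NUM_CACHE_BLOCKS   # block number B
    area  = main_addr // (BLOCK_SIZE * NUM_CACHE_BLOCKS)   # area number E
    word  = main_addr % BLOCK_SIZE                         # in-block address W
    stored_area, valid = directory[block]                  # read the directory
    if valid and stored_area == area:
        return block * BLOCK_SIZE + word                   # hit
    directory[block] = (area, True)                        # load or replace block
    return None                                            # miss this time

print(access(5000 * BLOCK_SIZE + 3))   # first access: miss (None)
print(access(5000 * BLOCK_SIZE + 3))   # second access: hit
```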
Advantages: The address mapping is simple; on an access, only the area numbers need to be compared for equality, so the access speed is fast and the hardware is simple.
Disadvantages: frequent replacement operations and low hit rates.
Example: Suppose the main memory capacity is 1M, the cache capacity is 32KB, and each block holds 16 words (or bytes). Give the main memory and cache address formats, the directory table format, and its capacity.
Capacity: the same as the number of cache blocks, i.e. 2^11 = 2048 (or 32K / 16 = 2048).
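As a rough check of these numbers, the field widths implied by the example can be worked out as follows; the bit-width arithmetic is standard, and the variable names are illustrative.

```python
# Worked check of the address field widths for the example above
# (1M main memory, 32KB cache, 16-word blocks).
main_memory_words = 1 * 2**20      # 1M addressable units
cache_words       = 32 * 2**10     # 32K addressable units
block_size        = 16             # words per block

w = (block_size - 1).bit_length()            # in-block word address: 4 bits
cache_blocks = cache_words // block_size     # 2048 = 2^11 cache blocks
c = (cache_blocks - 1).bit_length()          # cache block number: 11 bits
main_blocks = main_memory_words // block_size
m = (main_blocks - 1).bit_length()           # main memory block number: 16 bits
e = m - c                                    # area number (tag): 5 bits

print(w, c, m, e, cache_blocks)              # -> 4 11 16 5 2048
```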

Block Mapping Strategy: Set-Associative Mapping

Set-associative mapping is in fact a compromise between direct mapping and fully associative mapping. Its organization is shown in Figure 7. Both main memory and the cache are divided into groups; the number of blocks in a main memory group equals the number of groups in the cache. Direct mapping is used between the groups, and fully associative mapping is used within a group. In other words, the cache is divided into u groups of v blocks each; which group a main memory block is stored in is fixed, but which block within that group it occupies is flexible. For example, main memory may be divided into 256 groups of 8 blocks each, and the cache into 8 groups of 2 blocks each.
(Figure 7) Set-associative mapping
Each block in main memory has a fixed mapping to a cache group number, but within that group it can be mapped freely to any block. For example, block 0, block 8, ... of main memory are all mapped to group 0 of the cache, but each may be placed in either block 0 or block 1 of that group; block 1, block 9, ... are mapped to group 1 of the cache, but each may be placed in either block 2 or block 3 of that group.
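A minimal Python sketch of this placement, using the group sizes from the example above (8 cache groups of 2 blocks each); the data structure and names are illustrative, and no particular replacement policy is modeled.

```python
# Set-associative placement: the group (set) is fixed by the block number,
# while the block within the group is chosen freely. Sizes from the example.
NUM_SETS = 8
WAYS     = 2

# cache_sets[s] maps a resident main memory block number -> way index in set s
# (a simplified stand-in for the group's tag memory).
cache_sets = [dict() for _ in range(NUM_SETS)]

def place(main_block_number):
    """Place a main memory block: the set is fixed, the way within it is not."""
    s = main_block_number % NUM_SETS           # direct mapping between groups
    ways = cache_sets[s]
    if main_block_number in ways:              # already resident in the set
        return s, ways[main_block_number]
    if len(ways) < WAYS:                       # a free way exists in the set
        way = len(ways)
    else:                                      # set full: evict some block
        victim = next(iter(ways))              # (replacement policy not modeled)
        way = ways.pop(victim)
    ways[main_block_number] = way
    return s, way

print(place(0))   # main memory block 0 -> set 0, one way
print(place(8))   # main memory block 8 -> also set 0, the other way
print(place(1))   # main memory block 1 -> set 1
```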
Advantages: The probability of block collisions is relatively low, block utilization is greatly improved, and the block miss rate is significantly reduced.
Disadvantages: The difficulty and cost of implementation are higher than the direct mapping method.
Commonly used set-associative caches have 2, 4, 8, or 16 blocks per group, referred to as 2-way, 4-way, 8-way, and 16-way set-associative caches. The set-associative structure is a compromise between the two previous methods: it combines their advantages while moderating their disadvantages, and is therefore widely used.
