Thursday, December 5, 2019

Significance of Cache Memory on Performance Improvement

Question: Discuss the significance of cache memory on performance improvement.

Answer:

Introduction

The major aim of this study is to explore the significance of cache memory in improving the performance of a computer system. To that end, the study provides a detailed analysis of the operation of cache memory, which is located between the CPU and main memory, and demonstrates cache optimization techniques. It also identifies a few issues with cache memory, explores its advantages, and shows how cache performance can be evaluated and improved.

Cache memory is random access memory that the computer's microprocessor can access more quickly than it can access regular RAM. It is either integrated directly into the CPU chip or placed on a separate chip that has its own bus interconnect with the CPU (Chun et al., 2013). Cache memory can also be defined as a high-speed storage device in which the microprocessor keeps the data it uses most often. The execution of a program often involves loops, which means that the data stored within a given block of memory locations is fetched several times. Program execution is also localized: if data in a given memory location is being accessed, data in the neighboring memory locations is very likely to be fetched soon.

When the CPU needs a particular instruction or data item, it sends the address to the cache memory. The address is first searched within the cache, and if the item is found there it is delivered to the CPU (Fofack et al., 2014). Finding a requested item in the cache is known as a cache HIT; a cache MISS occurs when the item is not found in the cache. In that case the address is looked up in main memory, and once the item is received, a whole block of data is transferred from main memory into the cache so that subsequent requests can be served from the cache (Chun et al., 2013). The performance of the cache can therefore be measured by the HIT ratio, the ratio of the number of hits to the sum of the number of hits and the number of misses: Hit Ratio = Hits / (Hits + Misses). The operation of cache memory rests on the principle of locality of reference (Whitham, Audsley & Davis, 2014): while a program executes, related storage locations tend to be accessed frequently and close together in time.
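To make the HIT/MISS mechanics and the hit ratio concrete, here is a minimal sketch of a direct-mapped cache simulator in Python. The block size, line count and address trace are arbitrary illustrative choices, not figures from any of the cited papers.

# Minimal direct-mapped cache simulation (illustrative parameters).
BLOCK_SIZE = 16   # bytes per cache block
NUM_LINES  = 64   # number of cache lines

def simulate(addresses):
    """Return the hit ratio for a stream of byte addresses."""
    lines = [None] * NUM_LINES          # each line holds one block tag
    hits = misses = 0
    for addr in addresses:
        block = addr // BLOCK_SIZE      # which memory block this byte is in
        index = block % NUM_LINES       # direct-mapped: each block maps to one line
        tag   = block // NUM_LINES
        if lines[index] == tag:
            hits += 1                   # cache HIT: block already resident
        else:
            misses += 1                 # cache MISS: fetch block from main memory
            lines[index] = tag
    return hits / (hits + misses)       # Hit Ratio = Hits / (Hits + Misses)

# A loop that keeps re-reading the same small region, as loops in real
# programs do, shows locality paying off:
trace = [i % 256 for i in range(10_000)]
print(f"hit ratio: {simulate(trace):.3f}")   # close to 1.0 after warm-up

Because the trace keeps revisiting the same 256 addresses, almost every access after the first pass is a hit, which is exactly the behavior the principle of locality predicts.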
Cache Optimization

There are three basic cache optimization techniques: reducing the miss rate, reducing the miss penalty, and reducing the time to hit in the cache (Noguchi et al., 2014). Reducing the miss rate involves larger block sizes, larger caches and higher associativity. Reducing the miss penalty involves multilevel caches and giving reads priority over writes. Reducing the time to hit in the cache involves avoiding address translation while indexing the cache.

Significance of Cache Memory

The major purpose of cache memory is to store the program instructions that software re-references frequently during an operation (Stengel et al., 2015). Fast access to these instructions increases the speed of the overall program. As the microprocessor processes data, it looks in the cache first; if it finds the instructions there, it does not have to perform a more time-consuming read from larger memory or from other storage devices. The processor's clock speed determines the maximum rate at which it can execute instructions, and cache memory chips allow the microprocessor to run at full speed because they are designed to deliver data and instructions as fast as the microprocessor can consume them (Das & Dey, 2014). If the data and instructions it needs are in the cache, the processor can perform at the maximum speed its clock allows. The size of the high-speed cache is a key factor in determining how much the speed of the computer improves: very large caches improve processing speed far more than small ones because they can hold much more data in high-speed memory. Most programs reuse some resources once they have been open and operating for a while, largely because frequently re-referenced instructions tend to remain cached (González, Aliagas & Valero, 2014). This explains why, in system performance measurements, computers with slower processors but larger caches often turn out to be faster than computers with faster processors but smaller caches. Cache memory also plays a major role in deciding the performance of multi-core systems, since it is the fastest memory placed between main memory and the CPU. The performance gap between processors and main memory has continued to widen (Das & Dey, 2014), so increasingly aggressive use of cache memory is required to bridge it. The cache works as a buffer between the CPU and its main memory and is therefore used to synchronize the rate of data transfer between them.

Cache Utilization in Different Applications

A simple principle is fundamental to how cache memory works: the locality of reference. Locality of reference falls into two categories, temporal locality and spatial locality (Tsompanas, Kachris & Sirakoulis, 2016). Spatial locality is the easier of the two to understand, since most users are familiar with media applications such as DVD players, mp3 players and other applications whose datasets consist of large, ordered files. It is a fancy way of stating the general rule that if the CPU requires an item from memory at a given moment, it is likely to need that item's neighbors next. Temporal locality, on the other hand, is the name given to the general rule that if an item in memory was accessed once, it is likely to be accessed again in the near future (Chun et al., 2013). In business applications such as word processors, the data often exhibits great spatial locality. Spatial locality is simply a way of saying that related chunks of data tend to clump together in memory, and they also tend to be processed together in batches by the CPU because they are related (González, Aliagas & Valero, 2014). Spatial locality applies to code just as it does to data: most well-written code tries to avoid branches and jumps so that the processor can execute through large, contiguous, uninterrupted blocks.
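A small, pure-Python experiment can make spatial locality visible. The sketch below is illustrative only: the grid size is an arbitrary choice made so that the data no longer fits in the processor caches. It sums the same two-dimensional grid twice, once in the order the rows are laid out in memory and once column by column; on most machines the row-order sweep is measurably faster, because consecutive accesses fall into cache blocks that have already been fetched. Exact timings vary with the interpreter and the hardware.

import time

N = 4_000
grid = [[0] * N for _ in range(N)]   # ~128 MB of row pointer arrays in CPython

def traverse(order):
    """Sum every element, visiting them in 'row' or 'column' order."""
    s = 0
    if order == "row":
        for i in range(N):           # consecutive accesses hit neighboring
            for j in range(N):       # entries of the same row: same cache blocks
                s += grid[i][j]
    else:
        for j in range(N):           # consecutive accesses jump between rows,
            for i in range(N):       # so each one tends to need a fresh block
                s += grid[i][j]
    return s

for order in ("row", "column"):
    start = time.perf_counter()
    traverse(order)
    print(f"{order}-order sweep: {time.perf_counter() - start:.2f}s")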
Improving Cache Performance

One way to improve the performance of the cache memory is to make a good prediction about future accesses to instructions or data, so that the right blocks are chosen for replacement in the cache (Alexoudi et al., 2013). As mentioned earlier, cache performance can be improved by reducing the miss rate, the miss penalty and the time to hit in the cache. The easiest way to reduce the miss rate is to increase the cache block size; higher associativity also improves miss rates. A few significant replacement algorithms are commonly used to reduce miss rates and improve cache performance (González, Aliagas & Valero, 2014). One way to reduce the gap between memory latency and the CPU cycle time is to use a multi-level cache: by introducing a second-level cache, first-level misses can be handled without going all the way to main memory.
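Replacement algorithms decide which block to evict when a miss occurs in a full cache; least-recently-used (LRU) is a common choice because it exploits temporal locality. The following is a minimal LRU sketch built on Python's standard-library OrderedDict; the class name and capacity are illustrative assumptions, not an implementation taken from any of the cited works.

from collections import OrderedDict

class LRUCache:
    """Fixed-capacity cache that evicts the least-recently-used block."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # key -> block, oldest entry first

    def get(self, key):
        if key not in self.blocks:
            return None               # miss: caller fetches from main memory
        self.blocks.move_to_end(key)  # mark as most recently used
        return self.blocks[key]

    def put(self, key, block):
        if key in self.blocks:
            self.blocks.move_to_end(key)
        self.blocks[key] = block
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)   # evict least recently used

cache = LRUCache(capacity=2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")                        # touch "a" so "b" becomes the LRU entry
cache.put("c", 3)                     # capacity exceeded: evicts "b"
print(cache.get("b"))                 # None: "b" was replaced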
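In the same hypothetical style, the multi-level idea can be sketched as two caches chained together, with a first-level miss falling through to a larger second level before main memory is consulted. The sketch reuses the LRUCache class defined just above; real L1/L2 hardware is, of course, far more involved.

def lookup(address, l1, l2, main_memory):
    """Two-level lookup: try L1 first, then L2, then main memory."""
    block = l1.get(address)
    if block is not None:
        return block                      # L1 hit: fastest path
    block = l2.get(address)
    if block is None:
        block = main_memory[address]      # miss in both levels
        l2.put(address, block)            # fill L2 on the way back
    l1.put(address, block)                # fill L1 so the next access hits
    return block

l1 = LRUCache(capacity=4)                 # small, fast first level
l2 = LRUCache(capacity=16)                # larger, slower second level
memory = {addr: addr * 10 for addr in range(100)}
print(lookup(7, l1, l2, memory))          # first access: double miss -> 70
print(lookup(7, l1, l2, memory))          # second access: L1 hit -> 70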
Conclusion

After conducting this study, it can be concluded that cache memory plays a crucial role in enhancing CPU performance by synchronizing the rate of data transfer between main memory and the CPU. The study has also depicted that the benefit of storing data in cache, in comparison to RAM, is its faster retrieval time. However, the study has also identified a few performance issues associated with cache memory. The thorough analysis made here therefore provides an in-depth insight into both the performance evaluation and the performance improvement of cache memory.

References

Alexoudi, T., Papaioannou, S., Kanellos, G. T., Miliou, A., & Pleros, N. (2013). Optical cache memory peripheral circuitry: Row and column address selectors for optical static RAM banks. Journal of Lightwave Technology, 31(24), 4098-4110.

Chun, K. C., Zhao, H., Harms, J. D., Kim, T. H., Wang, J. P., & Kim, C. H. (2013). A scaling roadmap and performance evaluation of in-plane and perpendicular MTJ based STT-MRAMs for high-density cache memory. IEEE Journal of Solid-State Circuits, 48(2), 598-610.

Das, S., & Dey, S. (2014, December). Exploiting fault tolerance within cache memory structures. In High Performance Computing and Applications (ICHPCA), 2014 International Conference on (pp. 1-6). IEEE.

Fofack, N. C., Nain, P., Neglia, G., & Towsley, D. (2014). Performance evaluation of hierarchical TTL-based cache networks. Computer Networks, 65, 212-231.

González, A., Aliagas, C., & Valero, M. (2014, June). A data cache with multiple caching strategies tuned to different types of locality. In ACM International Conference on Supercomputing 25th Anniversary Volume (pp. 217-226). ACM.

Maniotis, P., Fitsios, D., Kanellos, G. T., & Pleros, N. (2013). Optical buffering for chip multiprocessors: A 16GHz optical cache memory architecture. Journal of Lightwave Technology, 31(24), 4175-4191.

Noguchi, H., Ikegami, K., Shimomura, N., Tetsufumi, T., Ito, J., & Fujita, S. (2014, June). Highly reliable and low-power nonvolatile cache memory with advanced perpendicular STT-MRAM for high-performance CPU. In VLSI Circuits Digest of Technical Papers, 2014 Symposium on (pp. 1-2). IEEE.

Stengel, H., Treibig, J., Hager, G., & Wellein, G. (2015, June). Quantifying performance bottlenecks of stencil computations using the execution-cache-memory model. In Proceedings of the 29th ACM on International Conference on Supercomputing (pp. 207-216). ACM.

Tsompanas, M. A. I., Kachris, C., & Sirakoulis, G. C. (2016). Modeling cache memory utilization on multicore using common pool resource game on cellular automata. ACM Transactions on Modeling and Computer Simulation (TOMACS), 26(3), 21.

Whitham, J., Audsley, N. C., & Davis, R. I. (2014). Explicit reservation of cache memory in a predictable, preemptive multitasking real-time system. ACM Transactions on Embedded Computing Systems (TECS), 13(4s), 120.
