16-5 Off-line caching
Modern computers use a cache to store a small amount of data in a fast memory. Even though a program may access large amounts of data, by storing a small subset of the main memory in the cache—a small but faster memory—overall access time can greatly decrease. When a computer program executes, it makes a sequence \(\langle r_1, r_2, \ldots, r_n \rangle\) of \(n\) memory requests, where each request is for a particular data element. For example, a program that accesses 4 distinct elements \(\{a, b, c, d\}\) might make the sequence of requests \(\langle d, b, d, b, d, a, c, d, b, a, c, b \rangle\). Let \(k\) be the size of the cache. When the cache contains \(k\) elements and the program requests the \((k + 1)\)st element, the system must decide, for this and each subsequent request, which \(k\) elements to keep in the cache. More precisely, for each request \(r_i\), the cache-management algorithm checks whether element \(r_i\) is already in the cache. If it is, then we have a cache hit; otherwise, we have a cache miss. Upon a cache miss, the system retrieves \(r_i\) from the main memory, and the cache-management algorithm must decide whether to keep \(r_i\) in the cache. If it decides to keep \(r_i\) and the cache already holds \(k\) elements, then it must evict one element to make room for \(r_i\). The cache-management algorithm evicts data with the goal of minimizing the number of cache misses over the entire sequence of requests.
Typically, caching is an on-line problem. That is, we have to make decisions about which data to keep in the cache without knowing the future requests. Here, however, we consider the off-line version of this problem, in which we are given in advance the entire sequence of \(n\) requests and the cache size \(k\), and we wish to minimize the total number of cache misses.
We can solve this off-line problem by a greedy strategy called furthest-in-future, which chooses to evict the item in the cache whose next access in the request sequence comes furthest in the future.
a. Write pseudocode for a cache manager that uses the furthest-in-future strategy. The input should be a sequence \(\langle r_1, r_2, \ldots, r_n \rangle\) of requests and a cache size \(k\), and the output should be a sequence of decisions about which data element (if any) to evict upon each request. What is the running time of your algorithm?
b. Show that the off-line caching problem exhibits optimal substructure.
c. Prove that furthest-in-future produces the minimum possible number of cache misses.
a. Suppose the requests are drawn from \(m\) distinct elements. First precompute, for each position \(i\), the position \(next[i]\) of the next request for \(r_i\) (or a sentinel \(n + 1\) if \(r_i\) is never requested again): scan the sequence \(\langle r_1, \dots, r_n \rangle\) from right to left, maintaining an array \(last[1 \ldots m]\) that records the most recently seen position of each element, and at each position set \(next[i] = last[r_i]\) before updating \(last[r_i] = i\). This pass takes \(O(n + m)\) time. Then scan the sequence from left to right, simulating the cache. A direct-address table indexed by element tells us in \(O(1)\) time whether \(r_i\) is in the cache, and the cached elements are kept in a max-priority queue keyed by the position of their next request, re-keyed on every hit using \(next[i]\). Upon a miss with a full cache, compare \(next[i]\) against the maximum key in the queue: if some cached element's next request lies further in the future than \(r_i\)'s, evict that element and cache \(r_i\); otherwise \(r_i\) itself is the furthest-in-future candidate, so simply do not keep it (the "decide whether to keep \(r_i\)" option from the problem statement, and the no-cost-of-eviction case of the recurrence in part (b)). Note that it does not suffice to precompute, at each position, the element whose next access lies furthest in the future over all \(m\) elements: that element need not currently be in the cache, so the eviction candidate must be selected among the at most \(k\) cached elements together with \(r_i\). Each request performs \(O(1)\) operations on a priority queue holding at most \(k\) elements, so the simulation costs \(O(\log k)\) per request, and the whole algorithm runs in \(O(n \log k + m)\) time using \(O(n + m)\) additional space.
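A minimal C++ sketch of this simulation, assuming the example sequence from the problem statement with \(a, b, c, d\) encoded as 0..3 and \(k = 2\) (both illustrative choices). A `std::set` of (next request, element) pairs stands in for the priority queue, since hits must re-key arbitrary elements; the requested element itself is treated as an eviction candidate, which realizes the "not kept" option described above.

```cpp
#include <cstdio>
#include <set>
#include <utility>
#include <vector>

int main() {
    // Example sequence from the problem statement, a,b,c,d encoded as 0,1,2,3.
    const int k = 2;
    std::vector<int> r = {3, 1, 3, 1, 3, 0, 2, 3, 1, 0, 2, 1};
    const int n = (int)r.size(), m = 4;

    // Right-to-left pass: next[i] = position of the next request for r[i],
    // or the sentinel n + 1 if r[i] is never requested again.
    const int NEVER = n + 1;
    std::vector<int> next(n), last(m, NEVER);
    for (int i = n - 1; i >= 0; --i) {
        next[i] = last[r[i]];
        last[r[i]] = i;
    }

    // Forward simulation. The cache is a set of (next request, element)
    // pairs, so *cache.rbegin() is the furthest-in-future cached element.
    std::vector<char> in_cache(m, 0);   // direct-address membership table
    std::vector<int> when(m, 0);        // current key of each cached element
    std::set<std::pair<int, int>> cache;
    int misses = 0;
    for (int i = 0; i < n; ++i) {
        const int x = r[i];
        if (in_cache[x]) {
            cache.erase({when[x], x});  // hit: drop stale key, re-insert below
        } else {
            ++misses;
            if ((int)cache.size() == k) {
                auto [t, y] = *cache.rbegin();
                if (next[i] >= t) {     // r[i] itself is furthest in future:
                    std::printf("request %d: miss, not kept\n", i + 1);
                    continue;           // do not cache it, evict nothing
                }
                cache.erase({t, y});    // otherwise evict the furthest element
                in_cache[y] = 0;
                std::printf("request %d: miss, evict %d\n", i + 1, y);
            }
            in_cache[x] = 1;
        }
        when[x] = next[i];
        cache.insert({when[x], x});
    }
    std::printf("misses: %d\n", misses);
    return 0;
}
```

On this input it reports 6 misses; every miss after the cache fills happens to be resolved by not keeping the requested element, and the recurrence in part (b) confirms that 6 is optimal here.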
If instead \(m > n\), we can restrict our attention to the elements that actually appear in the request sequence by remapping them to dense indices in a single pass (see the sketch below), which drops the \(O(m)\) terms and gives a solution using \(O(n \log k)\) time and \(O(n)\) additional space.
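A short sketch of that remapping, assuming the raw requests arrive as arbitrary integers; the helper name `compress` is illustrative, not from the text above.

```cpp
#include <unordered_map>
#include <vector>

// Map each distinct request value to a dense id in [0, n), so the number of
// distinct elements handled by the routines above never exceeds n.
std::vector<int> compress(const std::vector<int>& raw) {
    std::unordered_map<int, int> id;    // original value -> dense id
    std::vector<int> out;
    out.reserve(raw.size());
    for (int v : raw) {
        auto [it, inserted] = id.emplace(v, (int)id.size());
        (void)inserted;                 // already-seen values keep their old id
        out.push_back(it->second);
    }
    return out;
}
```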
b. Index the subproblems as \(c[i, S]\), where \(i \in \{0, 1, \ldots, n\}\) and \(S \subseteq [m]\) is a set of at most \(k\) elements: \(c[i, S]\) denotes the fewest misses achievable on the requests after index \(i\), given that the cache currently holds \(S\). Then

\[
c[i, S] = \begin{cases}
0 & \text{if } i = n, \\
c[i+1, S] & \text{if } r_{i+1} \in S, \\
1 + \min\limits_{x \in S \cup \{r_{i+1}\}} c[i+1, (S \cup \{r_{i+1}\}) \setminus \{x\}] & \text{otherwise},
\end{cases}
\]

which means that \(x\) is the element that is evicted from the cache, unless \(x\) is the current element being requested, in which case \(r_{i+1}\) is simply not kept and there is no cost of eviction. This exhibits optimal substructure in the usual cut-and-paste sense: an optimal solution from state \((i, S)\) must handle the remaining requests optimally from whatever state its first decision produces, since otherwise splicing in an optimal solution to that subproblem would yield a schedule with strictly fewer misses, a contradiction. The answer to the whole problem is \(c[0, \emptyset]\).
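A tiny memoized C++ evaluation of this recurrence, representing the cache \(S\) as a bitmask; the element encoding and the example input are illustrative assumptions, and since the state space is exponential in \(m\), this is a sanity check rather than a practical algorithm.

```cpp
#include <algorithm>
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

int n, k;
std::vector<int> r;                          // requests as ids in [0, m), m <= 32
std::map<std::pair<int, unsigned>, int> memo;

int popcount(unsigned v) { int c = 0; for (; v; v &= v - 1) ++c; return c; }

// c(i, S): fewest misses on the requests after index i given cache S.
// (0-indexed internally: the request examined from state (i, S) is r[i].)
int c(int i, unsigned S) {
    if (i == n) return 0;
    auto key = std::make_pair(i, S);
    if (auto it = memo.find(key); it != memo.end()) return it->second;
    const unsigned bit = 1u << r[i];
    int best;
    if (S & bit) {
        best = c(i + 1, S);                  // hit: no miss, cache unchanged
    } else {
        const unsigned T = S | bit;          // cache with r[i] brought in
        best = 1 + c(i + 1, S);              // x = r[i]: do not keep it
        if (popcount(T) <= k)                // spare capacity: evict nothing
            best = std::min(best, 1 + c(i + 1, T));
        for (int y = 0; y < 32; ++y)         // x = cached y: evict it, keep r[i]
            if (y != r[i] && ((T >> y) & 1u))
                best = std::min(best, 1 + c(i + 1, T & ~(1u << y)));
    }
    return memo[key] = best;
}

int main() {
    // The example sequence from the problem statement, a..d encoded as 0..3.
    r = {3, 1, 3, 1, 3, 0, 2, 3, 1, 0, 2, 1};
    n = (int)r.size(); k = 2;
    std::printf("optimal misses: %d\n", c(0, 0u));
}
```

With \(k = 2\) this prints 6, matching the greedy simulation from part (a); the option of not keeping \(r_{i+1}\) matters, since a policy forced to cache every requested element incurs 7 misses on this input.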
c. Each time a miss occurs with a full cache, the algorithm chooses which element to drop (possibly the requested element itself, per the recurrence in part (b)). We show an exchange property: if at round \(i\) a schedule drops \(x\) while the furthest-in-future candidate is \(y\), then dropping \(y\) instead produces no more misses. Since the first schedule dropped \(x\), it must incur another miss when \(x\) is next requested, and at that point it drops some element \(z\). The schedule that dropped \(y\) keeps \(x\), so it does not miss again until \(y\) is next requested, which by the choice of \(y\) is later than the next request for \(x\). When \(y\) is eventually reloaded, the second schedule can drop exactly the element \(z\) that the first schedule dropped when reloading \(x\); from that point on the two caches agree and incur identical misses. The exchange therefore never increases the number of misses, and it strictly decreases them if \(z\) is requested between the next requests for \(x\) and \(y\). For example, with \(k = 2\), a cache holding \(\{a, b\}\), and remaining requests \(\langle c, a, c, a, b \rangle\), dropping the furthest-in-future element \(b\) on the miss for \(c\) costs 2 misses in total, while any schedule that drops \(a\) instead incurs at least 3. Now take an optimal schedule and, at the first miss where it differs from furthest-in-future, apply the exchange; repeating this transforms it step by step into the furthest-in-future schedule without ever increasing the miss count. Hence furthest-in-future produces the minimum possible number of cache misses.
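Though no substitute for the proof, a compact randomized cross-check is easy to run: it pits the greedy simulation from part (a) against the exponential recurrence from part (b) on small random instances (both routines are repeated here in condensed form so the block is self-contained; the instance sizes and seed are arbitrary choices). If the exchange argument is right, the assertion never fires.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdio>
#include <map>
#include <random>
#include <set>
#include <utility>
#include <vector>

int n, k, m;
std::vector<int> r, nxt;
std::map<std::pair<int, unsigned>, int> memo;

int popcount(unsigned v) { int c = 0; for (; v; v &= v - 1) ++c; return c; }

// Exponential DP from part (b): fewest misses on r[i..n-1] with cache S.
int opt(int i, unsigned S) {
    if (i == n) return 0;
    auto key = std::make_pair(i, S);
    if (auto it = memo.find(key); it != memo.end()) return it->second;
    const unsigned bit = 1u << r[i];
    int best;
    if (S & bit) best = opt(i + 1, S);
    else {
        const unsigned T = S | bit;
        best = 1 + opt(i + 1, S);                      // do not keep r[i]
        if (popcount(T) <= k) best = std::min(best, 1 + opt(i + 1, T));
        for (int y = 0; y < m; ++y)
            if (y != r[i] && ((T >> y) & 1u))
                best = std::min(best, 1 + opt(i + 1, T & ~(1u << y)));
    }
    return memo[key] = best;
}

// Furthest-in-future greedy from part (a), priority queue as a set.
int fif() {
    std::vector<int> last(m, n + 1);
    nxt.assign(n, 0);
    for (int i = n - 1; i >= 0; --i) { nxt[i] = last[r[i]]; last[r[i]] = i; }
    std::vector<int> when(m, -1);                      // -1 marks "not cached"
    std::set<std::pair<int, int>> cache;
    int misses = 0;
    for (int i = 0; i < n; ++i) {
        const int x = r[i];
        if (when[x] >= 0) cache.erase({when[x], x});   // hit: re-key below
        else {
            ++misses;
            if ((int)cache.size() == k) {
                auto [t, y] = *cache.rbegin();
                if (nxt[i] >= t) continue;             // bypass: r[i] is furthest
                cache.erase({t, y});
                when[y] = -1;
            }
        }
        when[x] = nxt[i];
        cache.insert({when[x], x});
    }
    return misses;
}

int main() {
    std::mt19937 gen(12345);
    for (int trial = 0; trial < 500; ++trial) {
        n = 12; k = 2; m = 4;
        r.assign(n, 0);
        for (int& v : r) v = (int)(gen() % m);
        memo.clear();
        assert(fif() == opt(0, 0u));                   // greedy matches optimum
    }
    std::puts("furthest-in-future matched the optimum on every trial");
}
```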