## Abstract
Encrypting data in unprotected memory has gained much interest lately for digital rights protection and security reasons. Counter mode is a well-known symmetric-key encryption scheme that can be built on any block cipher, e.g. AES. Its encryption algorithm uses the block cipher, a secret key, and a counter (or a sequence number) to generate an encryption pad, which is XORed with the data stored in memory. Like other memory encryption schemes, this method suffers from the inherent latency of decrypting encrypted data when loading it into the on-chip cache. One solution parallelizes data fetching and encryption-pad generation, but requires the sequence numbers of evicted cache lines to be cached on-chip. On-chip sequence number caching can reduce this latency, though at the cost of a large area overhead.

In this paper, we present a novel technique that hides the latency overhead of decrypting counter-mode encrypted memory by predicting the sequence number and pre-computing the encryption pad, which we call a one-time pad or OTP. In contrast to prior sequence number caching techniques, our mechanism solves the latency issue by using idle decryption-engine cycles to speculatively predict and pre-compute OTPs before the corresponding sequence number is loaded, incurring very little area overhead. In addition, we present a novel adaptive OTP prediction technique that further improves the basic prediction and precomputation mechanism: the adaptive scheme can predict encryption pads not only for static and infrequently updated cache lines but also for frequently updated ones. Experimental results using the SPEC2000 benchmarks show an 82% prediction rate. We also explore several optimization techniques for improving prediction accuracy; two specific techniques, two-level prediction and context-based prediction, are presented and evaluated. Two-level prediction improves the prediction rate from 82% to 96%, and context-based prediction approaches 99%. Context-based OTP prediction outperforms a very large 512KB sequence number cache for many memory-bound SPEC programs. IPC results show an overall 15% to 40% performance improvement using our prediction and precomputation, and another 7% improvement when context-based prediction is used.
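The counter-mode mechanism the abstract describes can be sketched as follows: a pad is derived from the secret key and a per-cache-line counter (address plus sequence number), then XORed with the data; decryption regenerates the same pad and XORs again. This is a minimal illustration under stated assumptions, not the paper's implementation: a real design would use AES as the block cipher, but a SHA-256-based keyed function stands in here so the sketch needs no external crypto library, and the (address, sequence number) counter layout is assumed for illustration.

```python
import hashlib

def otp_pad(key: bytes, line_addr: int, seq_num: int) -> bytes:
    # Counter-mode pad: a keyed function of (address, sequence number).
    # AES would be used in practice; SHA-256 stands in for self-containedness.
    ctr = line_addr.to_bytes(8, "big") + seq_num.to_bytes(8, "big")
    return hashlib.sha256(key + ctr).digest()  # 32-byte pad

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"\x01" * 16
plaintext = b"cache line data padded to 32B!!!"  # 32-byte cache line
seq = 7

# Encrypt on eviction: pad XOR data goes to memory.
ciphertext = xor(plaintext, otp_pad(key, 0x1000, seq))

# Decrypt on load: regenerate the pad from (address, seq) and XOR again.
assert xor(ciphertext, otp_pad(key, 0x1000, seq)) == plaintext
```

Because the pad depends only on the key, the address, and the sequence number, it can be precomputed as soon as the sequence number is known (or predicted), overlapping pad generation with the memory fetch; this is the latency-hiding opportunity the paper exploits.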
Year | DOI | Venue |
---|---|---|
2005 | 10.1109/ISCA.2005.30 | ISCA |
Keywords | Field | DocType
---|---|---
computer architecture, data security, symmetric key encryption, one time pad, cryptography, data engineering, chip, data privacy, information security, coprocessors, optimization, block cipher, security architecture | Multiple encryption, Precomputation, Block cipher, Computer science, Cache, Parallel computing, Encryption, Real-time computing, 40-bit encryption, On-the-fly encryption, 56-bit encryption | Conference

Volume | Issue | ISSN
---|---|---
33 | 2 | 0163-5964

ISBN | Citations | PageRank
---|---|---
0-7695-2270-X | 45 | 1.81

References | Authors
---|---
16 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Weidong Shi | 1 | 145 | 6.45 |
Hsien-Hsin Sean Lee | 2 | 1657 | 102.66 |
Mrinmoy Ghosh | 3 | 367 | 22.39 |
Chenghuai Lu | 4 | 142 | 10.02 |
Alexandra Boldyreva | 5 | 2297 | 114.80 |