Title |
---|
Deep Reinforcement Learning For A Dictionary Based Compression Schema (Student Abstract) |
Abstract |
---|
File compression is an increasingly important process in the internet age and the era of massive data. One popular compression scheme, Lempel-Ziv-Welch (LZW), maintains a dictionary of previously seen strings and updates it throughout the parsing process by adding newly encountered substrings. Klein, Opalinsky and Shapira (2019) recently studied selectively updating the LZW dictionary, showing that inserting only a random subset of the encountered strings into the dictionary does not adversely affect the compression ratio. Inspired by their approach, we propose a reinforcement learning based agent, RLZW, that decides when to add a string to the dictionary. The agent is first trained on a large set of data and then tested on files it has not seen previously (i.e., the test set). We show that on some types of input data, RLZW outperforms the compression ratio of standard LZW. |
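To make the abstract's setting concrete, the following is a minimal sketch of LZW compression with a pluggable dictionary-update policy. The `should_add` hook is a hypothetical stand-in for the decision the paper's RL agent (or Klein, Opalinsky and Shapira's random selection) makes; it is not the authors' implementation. The default policy (always add) recovers standard LZW.

```python
def lzw_compress(data, should_add=lambda s: True):
    """LZW compression with a pluggable dictionary-update policy.

    `should_add(candidate)` decides whether a newly encountered
    substring is inserted into the dictionary; always returning
    True yields standard LZW.
    """
    # Initialize the dictionary with all single-byte strings.
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    current = ""
    output = []
    for ch in data:
        candidate = current + ch
        if candidate in dictionary:
            # Keep extending the current match.
            current = candidate
        else:
            # Emit the code for the longest match found so far.
            output.append(dictionary[current])
            # Selective update: insert the new substring only if
            # the policy approves (the point of selective LZW).
            if should_add(candidate):
                dictionary[candidate] = next_code
                next_code += 1
            current = ch
    if current:
        output.append(dictionary[current])
    return output
```

For example, `lzw_compress("ABABABA")` emits `[65, 66, 256, 258]`, reusing the codes added for `"AB"` and `"ABA"`, whereas a policy that never adds (`should_add=lambda s: False`) degenerates to one code per character.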
Year | Venue | Keywords |
---|---|---|
2021 | THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE | Data compression,String (computer science),Test set,Reinforcement learning,Schema (genetic algorithms),Substring,Compression ratio,Parsing,Natural language processing,Computer science,Artificial intelligence |
DocType | Volume | ISSN |
---|---|---|
Conference | 35 | 2159-5399 |
Citations | PageRank | References |
---|---|---|
0 | 0.34 | 0 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Keren Nivasch | 1 | 0 | 1.35 |
Dana Shapira | 2 | 144 | 32.15 |
Amos Azaria | 3 | 272 | 32.02 |