<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<title>level6 | Casey DeLorme's Portfolio / caseydelorme.com</title>
<link rel="icon" type="image/x-icon" href="https://d2xxklvztqk0jd.cloudfront.net/favicon.ico" />
<link rel="stylesheet" type="text/css" href="/css/main.css#00d95bc">
</head>
<body>
<header class="group">
<h1><a href='/'>Casey DeLorme</a></h1>
<nav>
<ul>
<li><a href='/index.html'>index</a></li>
<li><a href='/projects/'>projects</a></li>
<li><a href='/resume.html'>resume</a></li>
</ul>
</nav>
</header>
<div class="content group">
<h1><a href="https://github.com/cdelorme/level6">level6</a></h1>
<p>This was a fun and amusing project I chose to build for a variety of reasons. I myself have roughly 6 terabytes of personal data, much of it images and videos, along with copious amounts of text files from project source code.</p>
<p>While my scale is perhaps a bit larger, my problem is no different from the one my family and friends face regularly: there are no free or simple tools out there that handle file deduplication flexibly and efficiently.</p>
<p>My goal with this project was to create free and open source software that could handle file deduplication on personal computers, ideally at break-neck speed and with a high degree of accuracy.</p>
<p>The name of the project came from the concept of destroying cloned files. As an avid anime fan I decided to name it after a particular project in a series I enjoyed, although the connotation of that project in the series is certainly darker than the purpose of this one.</p>
<h2>design considerations</h2>
<p>Originally I wanted to go with plain sha256 hash comparison. My first implementation created a list of hashes for every file in a single-threaded loop and printed out sets of results grouped by hash.</p>
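<p>That first pass is easy to picture in Go. The following is a minimal single-threaded sketch of the idea, not the actual level6 source: hash every file under a directory and print any hash that maps to more than one path.</p>
<pre><code>package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
    "os"
    "path/filepath"
)

func main() {
    byHash := make(map[string][]string)

    // walk the tree and hash every regular file
    filepath.Walk(".", func(path string, info os.FileInfo, err error) error {
        if err != nil || info.IsDir() {
            return nil
        }
        f, err := os.Open(path)
        if err != nil {
            return nil
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return nil
        }
        sum := hex.EncodeToString(h.Sum(nil))
        byHash[sum] = append(byHash[sum], path)
        return nil
    })

    // any hash shared by more than one path is a set of duplicates
    for sum, paths := range byHash {
        if len(paths) > 1 {
            fmt.Println(sum, paths)
        }
    }
}
</code></pre>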
<p>Then, as I read more about comparison, I realized that file size is a useful first filter, since two files of different sizes cannot be duplicates. I took the single-core file walk and grouped files by their size before creating sha256 hashes. I proceeded to add options to delete or move the files, plus JSON output so the results could be consumed by other applications more easily.</p>
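<p>A rough sketch of that size-first grouping, reusing a hypothetical <code>hashFile</code> helper that wraps the sha256 logic from the previous example; files with a unique size never need to be hashed at all.</p>
<pre><code>// collect paths by size: only files that share a size can be duplicates
bySize := make(map[int64][]string)
filepath.Walk(".", func(path string, info os.FileInfo, err error) error {
    if err == nil &amp;&amp; !info.IsDir() {
        bySize[info.Size()] = append(bySize[info.Size()], path)
    }
    return nil
})

// hash only the groups containing two or more files
byHash := make(map[string][]string)
for _, paths := range bySize {
    if len(paths) > 1 {
        for _, path := range paths {
            if sum, err := hashFile(path); err == nil { // hypothetical sha256 helper
                byHash[sum] = append(byHash[sum], path)
            }
        }
    }
}
</code></pre>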
<p>My next stage was adding concurrency, which was my first attempt at Go's concurrency model. It was actually quite refreshing once I understood how it worked, and I managed to improve the performance of the software quite a bit.</p>
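<p>The concurrency step maps naturally onto goroutines and channels. Below is a simplified worker-pool sketch using the <code>sync</code> package, again relying on the hypothetical <code>hashFile</code> helper rather than showing level6's actual implementation.</p>
<pre><code>type result struct {
    path string
    sum  string
}

// hashAll fans file paths out to a fixed number of hashing goroutines
// and collects the results into a map keyed by hash
func hashAll(paths []string, workers int) map[string][]string {
    jobs := make(chan string)
    results := make(chan result)

    var wg sync.WaitGroup
    for i := 0; i &lt; workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for path := range jobs {
                if sum, err := hashFile(path); err == nil {
                    results &lt;- result{path, sum}
                }
            }
        }()
    }

    // feed the paths, then close the channels once every worker is done
    go func() {
        for _, path := range paths {
            jobs &lt;- path
        }
        close(jobs)
        wg.Wait()
        close(results)
    }()

    byHash := make(map[string][]string)
    for r := range results {
        byHash[r.sum] = append(byHash[r.sum], r.path)
    }
    return byHash
}
</code></pre>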
<p>I showed it to a few friends who gave me some suggestions, including adding summary data such as how many files were scanned, how many hashes were generated, and how long the operation took. Another suggestion was to use a lighter hashing algorithm as a stop-gap before generating sha256 hashes. Since sha256 is expensive compared to something like crc32, I added crc32 as a first measure before running sha256 hashing, which turned out to be another boon to performance. I also implemented full counts of how many hashes were generated and how many duplicates were found.</p>
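<p>The crc32 stage sits between the size grouping and the sha256 pass: within each size group, only files whose cheap crc32 checksums collide move on to the expensive hash. A hedged sketch of that filter, using Go's <code>hash/crc32</code> package:</p>
<pre><code>// crc32File returns a cheap checksum; hash/crc32 is far faster than sha256
func crc32File(path string) (uint32, error) {
    f, err := os.Open(path)
    if err != nil {
        return 0, err
    }
    defer f.Close()
    h := crc32.NewIEEE()
    if _, err := io.Copy(h, f); err != nil {
        return 0, err
    }
    return h.Sum32(), nil
}

// crcFilter keeps only the files within one size group whose crc32
// matches at least one other file; these still need sha256 confirmation
func crcFilter(paths []string) []string {
    byCRC := make(map[uint32][]string)
    for _, path := range paths {
        if sum, err := crc32File(path); err == nil {
            byCRC[sum] = append(byCRC[sum], path)
        }
    }
    var candidates []string
    for _, group := range byCRC {
        if len(group) > 1 {
            candidates = append(candidates, group...)
        }
    }
    return candidates
}
</code></pre>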
<p>The end result is roughly 600 lines of code that provide a very fast way to identify and manage duplicate files. There is still a lot of room for improvement, but I’m pretty happy with how quickly I was able to put it together.</p>
<h2>future plans</h2>
<ul>
<li>fixing dodgy windows compatibility</li>
<li>byte comparison for large files, and 100% accuracy going a step beyond sha256</li>
<li>separate core library, cli, and gui repositories</li>
<li>detailed multimedia comparison</li>
</ul>
<p>Tests on 32-bit Windows Vista crashed, while 64-bit Windows 8.1 works. However, on Windows resource exhaustion still occurs, at which point the OS forces the program to close. This problem stems from the less-than-frugal use of RAM to store file contents when generating sha256 hashes. One solution is to set a maximum file size and simply omit larger files from comparison. This seems to have worked well in my test cases, but it’s obviously not perfect.</p>
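<p>That stop-gap amounts to a single check before a file is ever queued for hashing; something along these lines inside the walk callback, where <code>maxSize</code> is an illustrative name for a cli-supplied limit:</p>
<pre><code>filepath.Walk(".", func(path string, info os.FileInfo, err error) error {
    if err != nil || info.IsDir() {
        return nil
    }
    // maxSize would come from a hypothetical cli flag; 0 disables the limit
    if maxSize > 0 &amp;&amp; info.Size() > maxSize {
        return nil // skip files too large to hash without exhausting memory
    }
    bySize[info.Size()] = append(bySize[info.Size()], path)
    return nil
})
</code></pre>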
<p>I would like to implement byte-by-byte comparison for a more detailed approach to comparing two files. While the odds of a sha256 collision are absurdly small, this would give us a 100%-trust option for identifying duplicates. Similarly, it would allow the tool to assume a safe arbitrary maximum file size, reducing the necessary cli arguments while still being capable of processing very large files.</p>
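<p>Byte-by-byte comparison only ever needs a small buffer per file, which is what makes it attractive for very large files. A minimal sketch of such a check, assuming the <code>bytes</code>, <code>io</code>, and <code>os</code> packages and not currently part of level6:</p>
<pre><code>// sameBytes streams both files in fixed-size chunks and compares them,
// so memory use stays constant regardless of file size
func sameBytes(a, b string) (bool, error) {
    fa, err := os.Open(a)
    if err != nil {
        return false, err
    }
    defer fa.Close()
    fb, err := os.Open(b)
    if err != nil {
        return false, err
    }
    defer fb.Close()

    bufA := make([]byte, 64*1024)
    bufB := make([]byte, 64*1024)
    for {
        na, errA := io.ReadFull(fa, bufA)
        nb, errB := io.ReadFull(fb, bufB)
        if na != nb || !bytes.Equal(bufA[:na], bufB[:nb]) {
            return false, nil
        }
        // a short or empty read means the file ended; equal only if both ended here
        if errA == io.EOF || errA == io.ErrUnexpectedEOF {
            return errB == io.EOF || errB == io.ErrUnexpectedEOF, nil
        }
        if errA != nil {
            return false, errA
        }
    }
}
</code></pre>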
<p>I already worked to separate the code into its own library folder, but in the future I want to split that out as its own repository, then import and use it from both a cli and a gui implementation. This would allow something like <code>level6-sdl</code> to provide a graphical interface stacked on top of the same source as the cli implementation.</p>
<p>I would also like to use more detailed comparison methods against images, videos, and audio: algorithms that identify key points in similar files and can account for alterations such as changed contrast, brightness, cropping, or rotation while still matching similar items. These small changes may barely be visible in the files themselves, but they would cause hash or byte comparison to fail.</p>
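<p>For images, one common family of techniques is perceptual hashing; a difference hash, for example, tolerates small contrast and brightness shifts, though not cropping or rotation. The sketch below is purely illustrative and not part of level6: it samples a 9x8 grayscale grid through the standard <code>image</code> package and sets one bit per adjacent-pixel comparison.</p>
<pre><code>// dHash computes a 64-bit difference hash: brightness is sampled on a 9x8
// grid and each bit records whether a pixel is brighter than its right neighbour
func dHash(img image.Image) uint64 {
    bounds := img.Bounds()
    w, h := bounds.Dx(), bounds.Dy()

    // sample a 9x8 grid of grayscale values with nearest-neighbour scaling
    var gray [8][9]uint32
    for y := 0; y &lt; 8; y++ {
        for x := 0; x &lt; 9; x++ {
            sx := bounds.Min.X + x*w/9
            sy := bounds.Min.Y + y*h/8
            r, g, b, _ := img.At(sx, sy).RGBA()
            gray[y][x] = (r + g + b) / 3
        }
    }

    // build the hash one comparison at a time
    var hash uint64
    for y := 0; y &lt; 8; y++ {
        for x := 0; x &lt; 8; x++ {
            hash &lt;&lt;= 1
            if gray[y][x] > gray[y][x+1] {
                hash |= 1
            }
        }
    }
    return hash
}
</code></pre>
<p>Two images whose hashes differ in only a few bits, for example where <code>bits.OnesCount64(a ^ b)</code> falls under a small threshold, would then be flagged as likely near-duplicates.</p>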
<h5><em>written on 2014-12-22</em></h5>
</div>
<footer class="group">
<a href='https://www.facebook.com/CaseyRDeLorme' class='link facebook'></a>
<a href='https://www.linkedin.com/in/cdelorme' class='link linkedin'></a>
<a href='https://www.youtube.com/user/LordOfElm' class='link youtube'></a>
<a href='https://github.com/cdelorme' class='link github'></a>
<a href='skype:casey.delorme?chat' class='link skype'></a>
<div class="scripts">
<script type="text/javascript" src="/js/main.js#00d95bc" async></script>
</div>
</footer>
</body>
</html>