Timeline for optimize human-readable database with index
Current License: CC BY-SA 3.0
4 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Jul 16, 2014 at 23:31 | comment added | mulllhausen | | I think I'll do both, then I can compare performance. Also, for a hash like abcdef012, rather than doing a/b/c/d/e/f/0/1/2.txt I will split it like abc/def/012.txt; this way any given dir can have at most 0xfff files, which seems like it would be quick on any filesystem. |
| Jul 16, 2014 at 15:12 | comment added | Dan Pichelman | | I've implemented option 3 for a similar project. It was a pain in the rear. If I had to do it again, I'd go with @david.pfx's advice & use a directly hashed file store. These days, you can probably find a library somewhere that already implements it for you. |
| Jul 16, 2014 at 14:54 | comment added | mulllhausen | | Tbh, option 3 sounds simpler to implement than the directly hashed file store. Would you be able to include a quick example in case I am missing something? |
| Jul 16, 2014 at 14:36 | history answered | david.pfx | CC BY-SA 3.0 | |
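
The directory-sharded layout described in the first comment (splitting a hash such as abcdef012 into abc/def/012.txt so that no single directory grows too large) might look something like the sketch below. This is a minimal illustration, not code from the original answer; the function names `hash_to_path`, `store`, and `load`, the three-character/three-level split, and the `.txt` extension are assumptions drawn from the comment's example.

```python
import os

def hash_to_path(root, hex_hash, chars_per_level=3, levels=3):
    """Map a hex hash like 'abcdef012' to root/abc/def/012.txt.

    Each directory level consumes `chars_per_level` hex characters, so a
    single directory holds at most 16**chars_per_level entries (4096 for
    three characters), keeping lookups fast on most filesystems.
    """
    parts = [hex_hash[i * chars_per_level:(i + 1) * chars_per_level]
             for i in range(levels)]
    return os.path.join(root, *parts[:-1], parts[-1] + ".txt")

def store(root, hex_hash, text):
    """Write `text` to the sharded path, creating directories as needed."""
    path = hash_to_path(root, hex_hash)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(text)

def load(root, hex_hash):
    """Read the record back, or return None if it was never stored."""
    path = hash_to_path(root, hex_hash)
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return f.read()

# Example: the hash 'abcdef012' lands in db/abc/def/012.txt
store("db", "abcdef012", "some human-readable record")
print(load("db", "abcdef012"))
```

Any other balanced split (for example, two levels of four characters) trades directory fan-out against tree depth in the same way, which is what the comparison mulllhausen proposes would measure.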