  • +1 for the COPY advice. An import from CSV (COPY FROM) will also work great; a rough sketch of such a load follows the comments below. Commented Mar 31, 2014 at 19:09
  • Thanks for the advice. So as long as I have the index, querying should be fast even if there are billions of rows? Is there anything else needed besides that and the foreign key? I have realised that COPY is the way to go for getting the data in; it would take (close to) forever otherwise. Commented Apr 1, 2014 at 5:15
  • Actually, I'm going to take my "Two Tables" advice back. I didn't realize you were dealing with billions; I thought you only had millions. Postgres doesn't perform too well in the billions range -- once you hit 1B rows, people often try to partition the data, and your original approach is essentially a partition based on location. I thought about running a script to generate your classes, but 2k tables is huge. The dynamic approach does seem like the right direction; rough sketches of both partitioning and dynamic mapping follow the comments below. I'd suggest asking this on the SQLAlchemy list. I'm sure Mike will have some good advice on how to generate the classes dynamically. Commented Apr 1, 2014 at 15:03
  • Ok, thanks. I've been thinking of writing a script that outputs a Python file with the 2k mapped classes in it, but again, that feels very hacky. I'll ask on the mailing list - thanks! Commented Apr 1, 2014 at 21:24
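
A minimal sketch of the COPY-based bulk load mentioned in the first comment, using psycopg2's `copy_expert`. The `readings` table, its columns, the connection string, and the CSV filename are placeholders invented for illustration, not details from the original post:

```python
import psycopg2

# Hypothetical connection parameters and table layout -- adjust to your schema.
conn = psycopg2.connect("dbname=mydb user=me")
cur = conn.cursor()

# COPY ... FROM STDIN streams the whole CSV through the client connection,
# which is far faster than issuing row-by-row INSERTs for a bulk load.
with open("readings.csv") as f:
    cur.copy_expert(
        "COPY readings (location_id, recorded_at, value) "
        "FROM STDIN WITH (FORMAT csv, HEADER true)",
        f,
    )

conn.commit()
cur.close()
conn.close()
```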
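For the partition-by-location point in the third comment, here is a rough sketch using PostgreSQL's declarative list partitioning via SQLAlchemy. Note that declarative partitioning only exists in PostgreSQL 10+, which postdates this thread; the 2014-era equivalent was table inheritance plus CHECK constraints. The connection URL, table, and column names are assumptions for illustration:

```python
from sqlalchemy import create_engine, text

# Hypothetical connection URL and schema.
engine = create_engine("postgresql+psycopg2://me@localhost/mydb")

ddl = [
    # Parent table partitioned by the location identifier.
    """
    CREATE TABLE readings (
        location_id integer NOT NULL,
        recorded_at timestamptz NOT NULL,
        value       double precision
    ) PARTITION BY LIST (location_id)
    """,
    # One partition per location; in practice these would be generated
    # in a loop over the ~2k locations.
    "CREATE TABLE readings_loc_1 PARTITION OF readings FOR VALUES IN (1)",
    "CREATE TABLE readings_loc_2 PARTITION OF readings FOR VALUES IN (2)",
]

with engine.begin() as conn:
    for stmt in ddl:
        conn.execute(text(stmt))
```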
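And for the dynamic-mapping idea in the last two comments, a sketch of generating SQLAlchemy declarative classes at runtime with `type()` instead of writing a generated .py file. The per-location table layout (`location_<n>` with `id`, `recorded_at`, `value`) is a guess for illustration; substitute the real columns:

```python
from sqlalchemy import Column, DateTime, Float, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

def make_location_class(location_id):
    """Build a mapped class for one per-location table at runtime."""
    return type(
        f"Location{location_id}",          # class name
        (Base,),                            # declarative base
        {
            "__tablename__": f"location_{location_id}",
            "id": Column(Integer, primary_key=True),
            "recorded_at": Column(DateTime),
            "value": Column(Float),
        },
    )

# Generate the ~2k classes once at startup and keep them in a dict,
# keyed by location, rather than materialising them in a source file.
location_classes = {i: make_location_class(i) for i in range(2000)}
```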