  • 2
    The 100 ms production / 5 ms local testing advice sort of assumes that different queries will generally scale the same way with database size. As the rest of your answer correctly points out, that might not be the case. It's probably still useful: being fast on a small database is likely necessary for being fast on a large one, but it's no guarantee of being fast enough in production, so you still need to do the other things. Just wanted to emphasize that point. Commented Jan 17, 2024 at 14:47
  • 2
    You are right that the tight timings can be inaccurate. Still, I have caught slow queries by comparing timings: I once ran a query that took 50 ms locally but several seconds on production (when run manually), which was unusually slow. Once I figured out the correct index, it took 5 ms locally, and the fixed query took 15 ms or so on production. It is just another tool since, as others have mentioned, you won't really know until it is in production. Commented Jan 17, 2024 at 16:34
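    The compare-then-index workflow above can be sketched in PostgreSQL; the table and column names (`orders`, `customer_id`) are hypothetical stand-ins for whatever the slow query touches:

    ```sql
    -- Compare the planner's estimates against the actual runtime of the slow query.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM orders WHERE customer_id = 42;

    -- If the plan shows a sequential scan on a large table, try a matching index.
    -- CONCURRENTLY avoids holding a write lock on a live production table.
    CREATE INDEX CONCURRENTLY orders_customer_id_idx ON orders (customer_id);

    -- Re-run to confirm the plan switched to an index scan and the timing dropped.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM orders WHERE customer_id = 42;
    ```

    The same before/after comparison on both local and production copies is what surfaces the "50 ms locally, seconds in production" mismatch described above.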
  • 7
    The worst situations I've had are when the execution plans that PostgreSQL comes up with differ locally and in production, because the statistics on the tables differ, and when the first (cold) run and the second (warm) run use different execution plans or timings due to caching and other optimizations. That points to the "non-deterministic" issues with DBs that @Steve mentioned in his answer. DB optimizations are awesome for workloads, terrible for troubleshooting. Commented Jan 17, 2024 at 16:39
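    One way to see both effects, again using a hypothetical `orders` table:

    ```sql
    -- Plans are chosen from table statistics; refresh them so the local and
    -- production planners are at least working from current data distributions.
    ANALYZE orders;

    -- Inspect which plan this planner picks for the query...
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

    -- ...then execute it twice with timing to see the cold-vs-warm difference.
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42;
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = 42;
    -- The second run typically reports more shared-buffer hits and a lower
    -- execution time: the caching effect that makes troubleshooting noisy.
    ```

    This is a sketch, not a fix: it only makes the statistics and caching effects visible so you know why local and production plans disagree.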
  • 3
    Ok cool, so a tight time limit locally doesn't guarantee safety in production, but it can catch some problems early. Sounds like a good idea, especially if it sometimes saves you from going down a dead-end design road that requires a query that can't be made fast enough in production. Commented Jan 17, 2024 at 16:44
  • 2
    @davidbak Correct! Just like you are more likely to read a library book than to change it, you create multiple read-only copies so more people can read it; nobody likes being on a waitlist. The same principle applies to databases and applications: you create more copies so more people can access the data. And, like a library, you don't want people making changes to a copy (hence "read only"). If you make a change, you go to the single "source of truth" copy, modify it there, and distribute those changes from there. Commented Jan 17, 2024 at 18:08