  • I like the idea of a custom database for the raw data. However, after making a quick proof of concept for the client-side sorting, it turns out that the network is going to be a bottleneck. I don't have full control of the clients, and management wants me to make an API that is easy to consume. (To make this concrete: as gzipped CSV, the data for 2 years is 6.58 MB, which is too big for a mobile application to request regularly.) Commented Mar 8, 2017 at 8:19
  • Martijn, this is precisely why we did not build a web- or server-based application for UT. The amount of data to be analyzed could be enormous. Also, the building engineers collecting the data (from loggers) do not always have internet access. We did structure the data processing so that the re-sampling algorithms work with a stream; that is, they do not load the entire data stream into memory for any analysis other than regressions (a minimal sketch of this streaming idea follows these comments). Commented Mar 8, 2017 at 17:17
  • However, binary streams can be compressed much more efficiently than CSV (a rough way to measure this on your own data is sketched below). Commented Mar 8, 2017 at 17:18
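
The stream-based re-sampling mentioned in the comments can be expressed as an ordinary generator. This is a minimal sketch, not the UT tool's actual code: it assumes a hypothetical two-column CSV of (epoch-second timestamp, value) rows sorted by time, and downsamples to hourly means while holding only one bucket in memory at a time.

```python
import csv

def hourly_averages(csv_path):
    """Stream (epoch_seconds, value) rows and yield hourly means
    without ever holding the full series in memory."""
    current_hour = None
    total, count = 0.0, 0
    with open(csv_path, newline="") as f:
        for ts, value in csv.reader(f):
            hour = int(ts) // 3600          # bucket by epoch hour
            if current_hour is not None and hour != current_hour:
                yield current_hour * 3600, total / count
                total, count = 0.0, 0
            current_hour = hour
            total += float(value)
            count += 1
    if count:                               # flush the last bucket
        yield current_hour * 3600, total / count
```

Regressions are the exception noted above, since they generally need the whole sample available at once rather than a single running bucket.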
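
The compression claim is easy to check against real data. The following sketch uses made-up 15-minute readings and a hypothetical uint32-timestamp/float32-value packing; the numbers it prints depend entirely on the actual schema and value distribution, so treat it as a measuring stick rather than a conclusion.

```python
import csv, gzip, io, random, struct

# Made-up sample: two years of 15-minute (timestamp, value) readings.
random.seed(0)
rows = [(1483228800 + i * 900, round(random.uniform(10.0, 30.0), 2))
        for i in range(2 * 365 * 96)]

# Gzipped CSV representation.
csv_buf = io.StringIO()
csv.writer(csv_buf).writerows(rows)
csv_gz = gzip.compress(csv_buf.getvalue().encode("utf-8"))

# Gzipped packed-binary representation (uint32 seconds + float32 value).
bin_payload = b"".join(struct.pack("<If", ts, val) for ts, val in rows)
bin_gz = gzip.compress(bin_payload)

print(f"gzipped CSV:    {len(csv_gz) / 1e6:.2f} MB")
print(f"gzipped binary: {len(bin_gz) / 1e6:.2f} MB")
```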