
I am currently working on a project where a Python program is supposed to run for several days, essentially in an endless loop until a user intervenes. I have observed that the RAM usage (as shown in the Windows Task Manager) rises slowly but steadily, for example from ~80 MB at program start to ~120 MB after one day. To take a closer look at this, I started to log the allocated memory with tracemalloc.get_traced_memory() at regular intervals throughout the program execution. The output was written to the time series DB (see image below).

[Figure: tracemalloc output for one day of runtime]

To me it looks like the memory needed by the program does not accumulate over time. How does this fit with the output of the Windows Task Manager? Should I go through my program to search for growing data structures? Thank you very much in advance!
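The periodic logging described above can be sketched roughly like this (a minimal, self-contained sketch; the actual program writes the values to the time series DB instead of printing them, and the function name is hypothetical):

import tracemalloc

tracemalloc.start()

def log_memory():
    # current and peak memory traced by tracemalloc, in bytes
    current, peak = tracemalloc.get_traced_memory()
    return current, peak

# called at regular intervals from within the main loop:
current, peak = log_memory()
print(f"current={current / 1e6:.2f} MB, peak={peak / 1e6:.2f} MB")

Note that tracemalloc only tracks memory allocated through Python's allocators, which is one reason its numbers can differ from what the Task Manager reports for the whole process.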

  • What does your program do? Commented May 19, 2022 at 6:40
  • Essentially, it re-evaluates possible tasks for a robot and assigns the eligible task with the highest priority Commented May 19, 2022 at 6:45
  • Please provide enough code so others can better understand or reproduce the problem. Commented May 19, 2022 at 21:05

1 Answer


Okay, turns out the answer is: no, this is not proper behaviour; the RAM usage can stay absolutely stable. I have tested this for three weeks now and the RAM usage never exceeded 80 MB. The problem was in the usage of the InfluxDB v2 client. You need to close both the write_api (implicitly done with the "with ... as write_api:" statement) and the client itself (explicitly done via "client.close()" in the example below). In my previous version, which had increasing memory usage, I only closed the write_api and not the client.

import time
import influxdb_client
from influxdb_client.client.write_api import SYNCHRONOUS

client = influxdb_client.InfluxDBClient(url=self.url, token=self.token, org=self.org)
with client.write_api(write_options=SYNCHRONOUS) as write_api:
    # force datatypes, because influx does not do fluffy ducktyping
    datapoint = influxdb_client.Point("TaskPriorities") \
        .tag("task_name", str(task_name)) \
        .tag("run_uuid", str(run_uuid)) \
        .tag("task_type", str(task_type)) \
        .field("priority", float(priority)) \
        .field("process_time_h", float(process_time)) \
        .time(time.time_ns())
    answer = write_api.write(bucket=self.bucket, org=self.org, record=datapoint)
client.close()
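The key point is that both resources get released: the inner write_api first, then the outer client. A toy sketch with stand-in classes (hypothetical, not the real influxdb_client API) illustrates the ordering when both are used as nested with-blocks:

closed = []

class FakeWriteApi:
    # stand-in for the object returned by client.write_api(...)
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        closed.append("write_api")
        return False

class FakeClient:
    # stand-in for influxdb_client.InfluxDBClient
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        closed.append("client")
        return False
    def write_api(self):
        return FakeWriteApi()

with FakeClient() as client:
    with client.write_api() as write_api:
        pass  # write points here

# the inner resource is closed first, then the outer one
print(closed)  # -> ['write_api', 'client']

Nesting both with-blocks (instead of calling client.close() by hand) has the advantage that the client is also closed if an exception is raised inside the block.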