
I experience problems with SharedMemory() in Python 3.12.0: it is not properly released. I use the context manager below to handle shared memory segments:

from contextlib import contextmanager
from multiprocessing.shared_memory import SharedMemory

@contextmanager
def managed_shm(name=None, size=0, create=False):
    shm = None
    try:
        shm = SharedMemory(create=create, name=name, size=size)
        yield shm
    finally:
        if shm:
            shm.close()
            if create:  # only the creating side removes the segment
                shm.unlink()
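
For illustration, the intended usage is one creating side and one attaching side running concurrently in different processes; the name and size here are hypothetical, mirroring the snippets below:

# Creator process: allocates the segment; unlink() runs on its exit.
with managed_shm(name='mqltick4', size=60, create=True) as shm:
    shm.buf[:5] = b'hello'

# Attaching process: opens the existing segment by name; only close() runs on exit.
with managed_shm(name='mqltick4') as shm:
    data = bytes(shm.buf[:5])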

The process writing to shared memory runs a class method in a thread. The method reads data from a socket and updates the shared memory with it; simplified code:

def recvmsg(self, msglen=60):
    self.conn, self.addr = self.socket.accept()
    data = b''
    with managed_shm(name=self.shmname, size=self.msglen, create=True) as mqltick:
        while True:
            buf = self.conn.recv(msglen - len(data))
            if not buf:
                break
            data += buf
            if len(data) == msglen:  # a full fixed-size message has been assembled
                self.data = data
                mqltick.buf[:] = data  # publish it to the shared segment
                data = b''
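
For context, a minimal sketch of a client feeding this loop; the host, port, and payload are hypothetical, and each message must be exactly msglen bytes since the server only publishes complete messages:

import socket
import time

def send_ticks(host='127.0.0.1', port=9000, msglen=60):
    # Hypothetical sender: pad every payload to exactly msglen bytes.
    with socket.create_connection((host, port)) as conn:
        for i in range(10):
            conn.sendall(str(i).encode().ljust(msglen, b' '))
            time.sleep(0.1)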

In another process I read the data from shared memory with this function:

def test_shm(maxok=10):
    prev = None
    errors = 0
    ok = 0
    reads = 0
    with managed_shm(name=SocketServer.shmname) as shm:  # SocketServer.shmname = 'mqltick4'
        while True:
            try:
                cur = shm.buf.tobytes()
                if cur != prev:
                    reads += 1
                    ok += 1
                    prev = cur
                    if reads % 100 == 0:
                        print(reads, ok, errors, errors / reads if reads > 0 else None)
                    if ok >= maxok:
                        break
            except ValueError:  # reading the buffer failed; count it as an error
                errors += 1
                print('errors', errors)
    return reads, ok, errors, errors / reads if reads > 0 else None

It works fine; however, upon interpreter exit I get the following warning:

Python-3.12.0/Lib/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

A second attempt to run the code results in an error:

File "/home/jb/opt/pythonsrc/Python-3.12.0/Lib/multiprocessing/shared_memory.py", line 104, in __init__
    self._fd = _posixshmem.shm_open(
               ^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/mqltick4'

The program updating shared memory works fine, but it will not restart after exit without a change of the shared memory name. The question is: is there something wrong with my code, or did I hit some Python 3.12.0 bug?

Update (1): After switching to generated memory segment names, the problem disappeared:

# before: segment created with a user-defined name
with managed_shm(name=self.shmname, size=self.msglen, create=True) as mqltick:
    ...

# after: let SharedMemory generate the segment name upon creation
with managed_shm(size=self.msglen, create=True) as mqltick:
    self.shmname = mqltick.name

Shared memory segment name discovery had to be implemented (a minimal sketch follows), but that is another story. So it looks like a Python 3.12.0 bug affecting shared memory segments created with user-defined names.
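
For illustration only, one possible discovery mechanism is to publish the generated name through a file at an agreed-upon path; the path and helper names here are hypothetical and not part of the original setup:

from pathlib import Path

NAME_FILE = Path('/tmp/mqltick.shmname')  # hypothetical rendezvous point

def publish_shm_name(name):
    # Server side: write the generated segment name where clients can find it.
    NAME_FILE.write_text(name)

def discover_shm_name():
    # Client side: read the name published by the server.
    return NAME_FILE.read_text().strip()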

Update (2): My joy was premature. With the new setup, each context manager call in the client process adds a new entry to the list of potential memory leaks:

UserWarning: resource_tracker: There appear to be 2 leaked shared_memory objects to clean up at shutdown

Since only 1 segment was created, only 1 can be leaked... The server side has to be restarted in order to restore communication. Of course, unlink() on the server side reports an error too:

_posixshmem.shm_unlink(self._name)
FileNotFoundError: [Errno 2] No such file or directory: '/psm_046507f5'

So the Python 3.12.0 bug scores again.


1 Answer


We can't see in your question where you're creating the shared memory segment.

If you're managing context, then all you need to do is ensure that the "clients" (i.e., the instances that merely attach rather than create the segment) are nested within the creator.

Here's an example that runs without error:

from multiprocessing.shared_memory import SharedMemory
import pickle
from concurrent.futures import ThreadPoolExecutor
from typing import Self, Any


class MyShm(SharedMemory):
    # Context-managed SharedMemory: only the creating instance unlinks the
    # segment on exit; attaching instances merely close their handle.
    def __init__(
        self: Self, *, create: bool = False, size: int = 0, name: str | None = None
    ) -> None:
        super().__init__(name=name, create=create, size=size)
        self._create = create

    def __enter__(self: Self) -> Self:
        return self

    def __exit__(self: Self, *_: Any) -> None:
        self.close()
        if self._create:
            self.unlink()


def writer(name: str) -> None:
    # "Client": attaches to the existing segment by name and writes into it.
    global data, size
    with MyShm(name=name) as client:
        client.buf[:size] = data


if __name__ == "__main__":
    alist = list(range(10))
    data = pickle.dumps(alist)
    size = len(data)

    # The creator's context encloses the client's lifetime, so the segment
    # is unlinked exactly once, after the client has detached.
    with MyShm(size=size, create=True) as creator:
        with ThreadPoolExecutor() as tpe:
            tpe.submit(writer, creator.name)
        print(pickle.loads(creator.buf))

Output:

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
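
Since the question's writer and reader run in separate processes rather than threads, here is a hedged sketch adapting the same nesting idea to multiprocessing; the assumption is that the attaching child detaches (close() only) before the creator's cleanup runs:

from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory
import pickle


def proc_writer(name: str, payload: bytes) -> None:
    # Child process: attach to the existing segment, write, then only close.
    shm = SharedMemory(name=name)
    try:
        shm.buf[:len(payload)] = payload
    finally:
        shm.close()  # no unlink here; the creator owns the segment


if __name__ == "__main__":
    data = pickle.dumps(list(range(10)))
    creator = SharedMemory(create=True, size=len(data))
    try:
        p = Process(target=proc_writer, args=(creator.name, data))
        p.start()
        p.join()  # the child detaches before the creator cleans up
        print(pickle.loads(creator.buf))
    finally:
        creator.close()
        creator.unlink()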

2 Comments

I've added a shared memory writer code snippet; the writer and reader(s) run in separate processes. The @contextmanager decorator and your implementation are equivalent (docs.python.org/3/library/contextlib.html). Clients access the existing segment with SharedMemory(name=name).
@JacekBłocki I have made significant changes to my original answer to incorporate a "client" thread. The code runs without error(s) on macOS 15.4.1 and Python 3.13.3. Please try it in your 3.12.x environment.
