Up until now, I have been making an egregious error in my Python development: I have been assuming that streams get closed once their corresponding objects go out of scope. Specifically, I assumed that when __del__ was invoked on a file or on an instance of a class inheriting from io.IOBase, it would call the object's close method. However, upon executing the following code I noticed that this was certainly not the case.
def wrap_close(old_close):
    def new_close(*args, **kwargs):
        print("closing")
        return old_close(*args, **kwargs)
    return new_close
f = open("tmp.txt")
f.close = wrap_close(f.close)
f.close() # prints "closing"
f = open("tmp.txt")
f.close = wrap_close(f.close)
del(f) # nothing printed
My question is: what is the benefit of not closing a file or stream automatically when its __del__ method is invoked? It seems like the implementation would be trivial, but I imagine there has to be a reason for allowing streams to stay open after their corresponding object goes out of scope.
You have to call .close() yourself: del doesn't have any special behavior built in to deal with streams. It just removes the object from the namespace, and then the garbage collector composts it. Neither of them cares that there's an open file descriptor somewhere, because how would they know?

That said, the underlying file does get closed: checking the file descriptors (f.fileno()) shows that they are re-used, i.e. the file is closed. Note that .close is a Python-level attribute, whereas _io is implemented in C, and its classes call its C functions directly, not the Python wrappers. Are you confusing __del__ with del, perhaps? Python does use the equivalent of _io.TextIOWrapper.__del__, but it is a C function which doesn't call the Python-level f.close.

del only unlinks a name. As a result, this might push the reference count down to 0 (in CPython) or eventually trigger garbage collection (in any implementation). It does not directly invoke __del__.
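Here is a minimal sketch of both points. It assumes CPython (immediate deallocation when the reference count hits zero), a POSIX-style OS that re-uses the lowest free file descriptor, and that tmp.txt exists as in the question; the gc.collect() call is only there for implementations that don't refcount.

import gc

# del only unlinks a name; another reference keeps the object (and its fd) alive.
f = open("tmp.txt")
keep = f
del f                     # the name is gone, but the object is not collected
print(keep.closed)        # False: the stream is still open
keep.close()

# Once the last reference is gone, the C-level deallocator closes the descriptor.
# We can observe this indirectly because the OS re-uses the freed fd number.
f = open("tmp.txt")
first_fd = f.fileno()
del f                     # last reference: CPython deallocates immediately
gc.collect()              # only needed on implementations without refcounting
g = open("tmp.txt")
print(g.fileno() == first_fd)  # usually True on POSIX: the fd was freed and re-used
g.close()

In any case, if you need the stream closed at a predictable point, close it explicitly or use a with block rather than relying on finalization.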