The question is old, but there was no correct answer about str size in memory. So let me explain.
The most interesting thing about str in Python is its adaptive representation, which depends on the characters present: it can be Latin-1 (1 byte per char), UCS-2 (2 bytes per char) or UCS-4 (4 bytes per char).
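CPython does not expose the chosen representation directly, but you can infer it from the highest code point in the string. Here is a small helper of my own (not a stdlib API) that mirrors the rule CPython applies:

```python
def char_width(s: str) -> int:
    """Guess the per-character storage width CPython picks for s.

    Latin-1 covers code points up to 0xFF, UCS-2 up to 0xFFFF,
    and anything above that needs UCS-4.
    """
    if not s:
        return 1  # empty strings use the compact ASCII layout
    m = max(map(ord, s))
    if m <= 0xFF:
        return 1
    if m <= 0xFFFF:
        return 2
    return 4

print(char_width("hello"))  # 1 byte per char (Latin-1)
print(char_width("你好"))    # 2 bytes per char (UCS-2)
print(char_width("🐍"))     # 4 bytes per char (UCS-4)
```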
For ASCII strings you may find that each character adds 1 byte on top of the empty string (the numbers below are from Python 3.9; in Python 3.14 the base is more compact, around 41 bytes):
>>> import sys
>>> sys.getsizeof("")
49
>>> sys.getsizeof("a")
50
>>> sys.getsizeof("ab")
51
where 49 bytes is the size of the underlying C structure (a PyASCIIObject plus the terminating null byte).
But for 2-byte characters even the initial C structure has a different size (74 bytes in Python 3.9; in Python 3.14 it is around 59 bytes):
>>> sys.getsizeof("你好世界!"[:1]) # Hello, World! in Chinese
76
>>> sys.getsizeof("你好世界!"[:2])
78
>>> sys.getsizeof("你好世界!"[:3])
80
>>> sys.getsizeof("你好世界!"[:4])
82
>>> sys.getsizeof("你好世界!"[:5])
84
and even the ASCII exclamation point ! adds 2 bytes anyway, because the whole string is stored at the width of its widest character, here 2 bytes per char.
The same kind of fixed increase, 4 bytes per character, happens for strings containing characters outside the Basic Multilingual Plane (emoji, for example), which force the UCS-4 representation.
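A quick way to see all three widths at once: compare the size before and after appending one more character. This is a sketch; the absolute base sizes vary between Python versions, but the per-character deltas should not:

```python
import sys

def per_char(s: str) -> int:
    # Size growth from appending one more copy of the last character
    return sys.getsizeof(s + s[-1]) - sys.getsizeof(s)

print(per_char("abc"))   # 1 -> Latin-1
print(per_char("你好"))   # 2 -> UCS-2
print(per_char("🐍🐍"))  # 4 -> UCS-4
```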
The current adaptive representation was introduced in Python 3.3 (PEP 393), and in general it has not changed since (3.14 is coming this year, but with few changes to str), apart from some optimizations of the initial C structure size and caching of strings (so-called interned strings).
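For completeness, interning can be observed directly with sys.intern. The behavior shown is CPython-specific (the strings are built at runtime via join so the compiler cannot fold them into one shared constant):

```python
import sys

a = "".join(["py", "thon", " rocks"])
b = "".join(["py", "thon", " rocks"])
print(a == b, a is b)   # equal values, but two separate objects in CPython

ia = sys.intern(a)
ib = sys.intern(b)
print(ia is ib)         # True: both names now point at one cached object
```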
There is a long term plan to move str to UTF-8 representation: https://github.com/faster-cpython/ideas/issues/684 but it should happen not earlier than in 3.16 I guess.
If you encode to UTF-8 today, that only creates a separate bytes object, which is not connected to the original string in memory.
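A sketch of that point: encoding allocates a brand-new bytes object whose length follows UTF-8 rules, independent of the internal width of the str:

```python
s = "你好世界"         # 4 characters, stored as UCS-2 internally
b = s.encode("utf-8")  # a new, independent bytes object

print(len(s))          # 4 characters
print(len(b))          # 12 bytes: each of these CJK chars is 3 bytes in UTF-8
```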
P.S. If you send the string over a network, encoding to bytes is necessary as a form of serialization. In that case .encode("utf-8") is a good choice, since UTF-8 does not depend on byte order. But you have to call .decode("utf-8") on the other side.
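A minimal round trip of that serialization idea, with a plain bytes variable standing in for a socket:

```python
payload = "Привет, 世界! 🐍"       # mixed-width text
wire = payload.encode("utf-8")     # serialize before sending
# ... send `wire` over the socket, receive it on the other side ...
restored = wire.decode("utf-8")    # deserialize on arrival
print(restored == payload)         # True: the round trip is lossless
```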