146

I'm writing a class in Python and I have an attribute that will take a relatively long time to compute, so I only want to do it once. Also, it will not be needed by every instance of the class, so I don't want to compute it by default in __init__.

I'm new to Python, but not to programming. I can come up with a way to do this pretty easily, but I've found over and over again that the 'Pythonic' way of doing something is often much simpler than what I come up with using my experience in other languages.

Is there a 'right' way to do this in Python?

3
  • 34
    IMO none of these answers is correct. The OP wanted a cached class property, e.g. Foo.something_expensive. All these answers are about cached instance properties, which means something_expensive will be recalculated for every new instance, which is less than optimal in most cases. Commented Apr 6, 2017 at 22:43
  • 2
    As of Python 3.9, all of the below can now be wrapped as a @classmethod, which should give you a cached class property. Commented Jun 10, 2021 at 19:45
  • 4
    @JakeStevens-Haas I do not believe that is correct. I tried making a cached class property in Python 3.10.2 by using the @classmethod and @functools.cached_property together -- in both orders -- and neither worked. By contrast, the @cachedclassproperty decorator from the Dickens library did work for me. Commented Aug 8, 2022 at 14:58

11 Answers

204

3.8 ≤ Python

@property and @functools.lru_cache have been combined into @cached_property:

import functools
class MyClass:
    @functools.cached_property
    def foo(self):
        print("long calculation here")
        return 21 * 2

3.2 ≤ Python < 3.8

You should use both @property and @functools.lru_cache decorators:

import functools
class MyClass:
    @property
    @functools.lru_cache()
    def foo(self):
        print("long calculation here")
        return 21 * 2

This answer has more detailed examples and also mentions a backport for previous Python versions.

Python < 3.2

The Python wiki has a cached property decorator (MIT licensed) that can be used like this:

import random

# cached_property here is the decorator from the wiki recipe
# the class containing the property must be a new-style class
class MyClass(object):
    # create a property whose value is cached for ten minutes
    @cached_property(ttl=600)
    def randint(self):
        # will only be re-evaluated every 10 minutes at most
        return random.randint(0, 100)

Or use any implementation mentioned in the other answers that fits your needs, or the above-mentioned backport.


7 Comments

lru_cache has also been backported to python 2: pypi.python.org/pypi/functools32/3.2.3
@orlp lru_cache has a default size of 128, for 128 different argument configurations. This will only be an issue if you are generating more objects than your cache size, as the only changing argument here is self. If you are generating so many objects, you really shouldn't be using an unbounded cache, as it will force you to keep all objects that have ever called the property in memory indefinitely, which could be a horrendous memory leak. Regardless, you probably would be better off with a caching method that stores the cache in the object itself, so the cache is cleaned up with it.
The @property @functools.lru_cache() method is giving me a TypeError: unhashable type error, presumably because self is not hashable.
Watch out! It appears to me functools.lru_cache causes instances of the class to avoid GC as long as they are in the cache. Better solution is functools.cached_property in Python 3.8.
Bit misleading to say "@property and @functools.lru_cache have been combined into @cached_property." This answer shows how the 2 can be combined but doesn't behave like @cachedproperty if you try setting a new value without writing a setter. In your 2 example snippets try x=MyClass(), print(x.foo), x.foo=6*9 to see the difference. From the docs: "The mechanics of cached_property() are somewhat different from property(). A regular property blocks attribute writes unless a setter is defined. In contrast, a cached_property allows writes."
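To see the write-behavior difference the last comment describes, here is a minimal sketch (class names are illustrative; requires Python 3.8+ for cached_property):

```python
import functools

class WithLru:                      # @property + lru_cache: writes are blocked
    @property
    @functools.lru_cache()
    def foo(self):
        return 21 * 2

class WithCached:                   # cached_property: writes are allowed
    @functools.cached_property
    def foo(self):
        return 21 * 2

x = WithLru()
print(x.foo)        # 42
try:
    x.foo = 6 * 9   # AttributeError: the property has no setter
except AttributeError:
    print("write blocked")

y = WithCached()
print(y.foo)        # 42
y.foo = 6 * 9       # overwrites the cached value in y.__dict__
print(y.foo)        # 54
```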
61

I used to do this how gnibbler suggested, but I eventually got tired of the little housekeeping steps.

So I built my own descriptor:

class cached_property(object):
    """
    Descriptor (non-data) for building an attribute on-demand on first use.
    """
    def __init__(self, factory):
        """
        <factory> is called such: factory(instance) to build the attribute.
        """
        self._attr_name = factory.__name__
        self._factory = factory

    def __get__(self, instance, owner):
        # Build the attribute.
        attr = self._factory(instance)

        # Cache the value; hide ourselves.
        setattr(instance, self._attr_name, attr)

        return attr

Here's how you'd use it:

class Spam(object):

    @cached_property
    def eggs(self):
        print('long calculation here')
        return 6*2

s = Spam()
s.eggs      # Calculates the value.
s.eggs      # Uses cached value.

4 Comments

Wonderful! Here's how it works: Instance variables take precedence over non-data descriptors. At the first access of the attribute, there is no instance attribute but only the descriptor class attribute and hence the descriptor is executed. However, during its execution the descriptor creates an instance attribute with the cached value. This means that when the attribute is accessed a second time the previously created instance attribute is returned instead of the descriptor being executed.
There is a cached_property package on PyPI. It includes thread-safe and time-expire versions. (Also, thanks, @Florian, for the explanation.)
Yay for esoteric corner cases: you can't use a cached_property descriptor when using __slots__. Slots are implemented using data descriptors, and using a cached_property descriptor simply overrides the generated slot descriptor, so the setattr() call won't work as there is no __dict__ to set the attribute in and the only descriptor available for this attribute name is the cached_property.. Just putting this here to help others avoid this pitfall.
This solution didn't work with nested DRF Serializer, but the cached_property package on PyPI suggested by @leewz did the job.
43

The usual way would be to make the attribute a property and store the value the first time it is calculated:

import time

class Foo(object):
    def __init__(self):
        self._bar = None

    @property
    def bar(self):
        if self._bar is None:
            print("starting long calculation")
            time.sleep(5)
            self._bar = 2*2
            print("finished long calculation")
        return self._bar

foo = Foo()
print("Accessing foo.bar")
print(foo.bar)
print("Accessing foo.bar")
print(foo.bar)

4 Comments

In Python3.2+, is there any motivation to use this approach over @property + @functools.lru_cache()? The quasi-private attribute way seems to be reminiscent of Java/setters/getters; in my humble opinion just decorating with lru_cache is more pythonic
(As in @Maxime's answer)
@Brad @functools.lru_cache() would cache the result keyed with the self arg, and this would also prevent that instance from being GC'd as long as it was in the cache.
With this method, my IDE can easily auto-complete.
24

As mentioned, functools.cached_property will work for cached instance attributes. For cached class attributes:

3.9 ≤ Python < 3.13

from functools import cache

class MyClass:
    @classmethod
    @property
    @cache  # or lru_cache() for python < 3.9
    def foo(cls):
        print('expensive calculation')
        return 42
>>> MyClass.foo
expensive calculation
42
>>> MyClass.foo
42

And if you want a reusable decorator:

def cached_class_attr(f):
    return classmethod(property(cache(f)))

class MyClass:
    @cached_class_attr
    def foo(cls):
        ...

Python ≥ 3.13

In 3.13, chaining classmethod and property is disallowed, so you will have to use a metaclass or a custom decorator. Here's an example of a read-only cached attribute:

class MyMeta(type):
    @property
    @cache
    def foo(self):
        ...

class MyClass(metaclass=MyMeta):
    ...

MyClass.foo  # read-only access

Or alternatively a custom decorator:

import functools

class classproperty:
    def __init__(self, func) -> None:
        # update_wrapper stores func on self as __wrapped__
        functools.update_wrapper(self, func)

    def __get__(self, instance, owner):
        return self.__wrapped__(owner)

class MyClass:
    @classproperty
    @cache
    def foo(cls):
        ...

5 Comments

Only took 12 years and 10 other (highly voted) answers for someone to finally give the correct one. Thanks
Why python < 3.11? Anything changed on this in 3.11?
@Davy stacking descriptions (classmethod, property) no longer allowed :(
i can't see anywhere that stacking decorators is no longer allowed in 3.11 Additionally the code above works for me in 3.11.1
in 3.11, chaining classmethod and property is deprecated, but may still work for a while, see note here: docs.python.org/3.11/library/functions.html#classmethod
23

Python 3.8 includes the functools.cached_property decorator.

Transform a method of a class into a property whose value is computed once and then cached as a normal attribute for the life of the instance. Similar to property(), with the addition of caching. Useful for expensive computed properties of instances that are otherwise effectively immutable.

This example is straight from the docs:

import statistics
from functools import cached_property

class DataSet:
    def __init__(self, sequence_of_numbers):
        self._data = sequence_of_numbers

    @cached_property
    def stdev(self):
        return statistics.stdev(self._data)

    @cached_property
    def variance(self):
        return statistics.variance(self._data)

The limitation being that the object with the property to be cached must have a __dict__ attribute that is a mutable mapping, ruling out classes with __slots__ unless __dict__ is defined in __slots__.
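A quick sketch of that limitation (class names are illustrative):

```python
from functools import cached_property

class Slotted:
    __slots__ = ("x",)              # no __dict__, so there is nowhere to cache
    @cached_property
    def val(self):
        return 2 * 2

class SlottedWithDict:
    __slots__ = ("x", "__dict__")   # __dict__ added back explicitly
    @cached_property
    def val(self):
        return 2 * 2

try:
    Slotted().val
except TypeError as e:
    print("fails:", e)              # no __dict__ attribute to cache the value in

print(SlottedWithDict().val)        # 4
```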

Comments

6

The dickens package (not mine) offers cachedproperty, classproperty and cachedclassproperty decorators.

To cache a class property:

from descriptors import cachedclassproperty

class MyClass:
    @cachedclassproperty
    def approx_pi(cls):
        return 22 / 7

Comments

2
class MemoizeTest:

    _cache = {}

    def __init__(self, a):
        if a in MemoizeTest._cache:
            self.a = MemoizeTest._cache[a]
        else:
            self.a = a**5000
            MemoizeTest._cache[a] = self.a

Comments

1

You could try looking into memoization. The way it works is that if you pass a function the same arguments, it will return the cached result. You can find more information on implementing it in Python here.

Also, depending on how your code is set up (you say that it is not needed by all instances) you could try to use some sort of flyweight pattern, or lazy-loading.
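For example, a minimal memoization decorator (a sketch, not a production implementation; names are illustrative) looks like this:

```python
import functools

def memoize(func):
    """Cache results keyed by the positional arguments (sketch)."""
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memoize
def slow_square(n):
    print("computing...")           # only printed on a cache miss
    return n * n

print(slow_square(4))   # prints "computing..." then 16
print(slow_square(4))   # 16, served from the cache
```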

Comments

1

Most if not all current answers are about caching instance attributes. To cache class attributes, you can simply use a dictionary. This ensures the attributes are calculated once per class, instead of once per instance.

mapping = {}

class A:
    def __init__(self):
        if self.__class__.__name__ not in mapping:
            print('Expensive calculation')
            # the class name stands in for the expensive result here
            mapping[self.__class__.__name__] = self.__class__.__name__
        self.cached = mapping[self.__class__.__name__]

To illustrate,

foo = A()
bar = A()
print(foo.cached, bar.cached)

gives

Expensive calculation
A A

Comments

-3

The simplest way of doing this would probably be to write a getter method that wraps the attribute instead of exposing the attribute directly. On the first call, this method calculates, saves, and returns the value; later calls just return the saved value.
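A minimal sketch of that getter-method approach (names are illustrative), using None as the "not yet computed" sentinel:

```python
class Widget:
    def __init__(self):
        self._value = None          # sentinel: not computed yet

    def get_value(self):
        if self._value is None:
            print("long calculation here")
            self._value = 6 * 7     # computed once, then reused
        return self._value

w = Widget()
print(w.get_value())    # computes, then prints 42
print(w.get_value())    # prints 42 from the saved value
```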

Comments

-4

With Python 2, but not Python 3, here's what I do. This is about as efficient as you can get:

class X:
    @property
    def foo(self):
        r = 33
        self.foo = r
        return r

Explanation: Basically, I'm just overloading a property method with the computed value. So after the first time you access the property (for that instance), foo ceases to be a property and becomes an instance attribute. The advantage of this approach is that a cache hit is as cheap as possible because self.__dict__ is being used as the cache, and there is no instance overhead if the property is not used.

This approach doesn't work with Python 3.

Comments
