edited tags: Starship
added 38 characters in body; edited tags: kaya3

Which functions should a compiler cache outputs for?

Because a bare function object might not run in constant time, and because it might be called repeatedly with the same argument, I thought it might be good to cache its outputs. This is possible whenever the function's domain is equality-comparable. Let me illustrate with a C++ code snippet:

#include <functional>
#include <map>
#include <stdexcept>  // std::out_of_range
#include <tuple>

template <class R, class ...T>
class CachedFunc {
private:
    std::function<R(T...)> func;
    mutable std::map<std::tuple<T...>, R> caches;
public:
    // ...
    R operator() (const T &...args) const {
        auto key = std::make_tuple(args...);
        try {
            return caches.at(key);
        } catch (const std::out_of_range &) {
            auto result = func(args...);
            caches.insert_or_assign(key, result);
            return result;
        }
    }
};

The map caches stores the cached outputs. When the function object is called with an input that has no cached result yet, the result is computed and added to the cache.

Yet, I see several issues with this idea:

  • This function class is just pointless overhead if the wrapped function itself runs in constant time.

  • For imperative languages, CachedFunc can be a convenient function class on its own. However, because it inevitably involves mutable state, I don't see how it would fit well with functional languages.

In summary, if my (functional) programming language were to incorporate the CachedFunc mechanism, it would be the compiler's responsibility to decide whether to actually use it. What techniques can a compiler use to determine which functions will benefit from having their results cached?
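To make the "should the compiler use it?" question concrete, here is one sketch of a dynamic heuristic (all names and thresholds are hypothetical, not part of any existing compiler): a cache that tracks its own hit rate during a warm-up period and disables itself when caching does not pay off.

```cpp
#include <functional>
#include <map>
#include <tuple>

// Hypothetical sketch: a self-monitoring cache. A compiler unsure about a
// function could emit something like this instead of deciding statically.
template <class R, class ...T>
class AdaptiveCachedFunc {
    std::function<R(T...)> func;
    mutable std::map<std::tuple<T...>, R> caches;
    mutable long hits = 0, misses = 0;
    mutable bool enabled = true;
public:
    explicit AdaptiveCachedFunc(std::function<R(T...)> f) : func(std::move(f)) {}

    R operator() (const T &...args) const {
        if (!enabled) return func(args...);  // caching was judged useless
        auto key = std::make_tuple(args...);
        auto it = caches.find(key);
        if (it != caches.end()) { ++hits; return it->second; }
        ++misses;
        auto result = func(args...);
        caches.insert_or_assign(key, result);
        // Hypothetical heuristic: after 1000 calls, give up if fewer
        // than ~9% of calls were hits.
        if (hits + misses >= 1000 && hits * 10 < misses) {
            enabled = false;
            caches.clear();
        }
        return result;
    }

    double hit_rate() const {
        long total = hits + misses;
        return total ? static_cast<double>(hits) / total : 0.0;
    }
};
```

Static criteria (purity of the function, estimated cost of a call versus a map lookup, size and comparability of the argument types) could complement such a runtime check.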


Source Link
Dannyu NDos
