Because a bare function object might not have constant time complexity, and because it might be called with the same arguments repeatedly, I thought it might be good to cache its outputs. This is possible if the function's domain supports comparison (equality at minimum; the std::map below additionally requires an ordering on the keys). Let me illustrate with a C++ code snippet:
#include <functional>
#include <map>
#include <stdexcept>  // std::out_of_range
#include <tuple>

template <class R, class ...T>
class CachedFunc {
private:
    std::function<R(T...)> func;
    // Maps argument tuples to previously computed results;
    // mutable so that operator() can remain const.
    mutable std::map<std::tuple<T...>, R> caches;
public:
    // ...
    R operator() (const T &...args) const {
        auto key = std::make_tuple(args...);
        try {
            return caches.at(key);            // cache hit
        } catch (const std::out_of_range &) {
            auto result = func(args...);      // cache miss: compute and store
            caches.insert_or_assign(key, result);
            return result;
        }
    }
};
The map caches stores previously computed outputs. When the function object is called with an input that has no cached result, the result is computed and then cached.
Yet, I see several issues with this idea:
This function class is just pointless overhead if the wrapped function itself already runs in constant time.
For imperative languages, CachedFunc can be a convenient function class on its own. However, because it inevitably involves mutable state, I don't see how this class would fit well with functional languages.
In summary, if my (functional) programming language were to incorporate the CachedFunc mechanism, it would be the compiler's responsibility to decide whether to actually use it. What techniques can a compiler use to determine which functions will benefit from having their results cached?