I have a problem with your code: it seems unrelated to the problem statement you give at the top.
You say you are "working on a REST API, where I currently create python objects from the JSON data returned by the server. Though not all attributes of the created objects will be called upon in user code."
None of your code involves REST, or JSON. So while it's an interesting demonstration of certain parts of Python, it doesn't much address your issue.
IMO, the feasibility of what you are discussing is going to be determined by the nature and character of the JSON data, and the REST API. Parsing JSON is generally a one-shot deal. If you get JSON data like this:
'{ "name":"John", "age":31, "city":"New York" }'
how would it benefit you to "defer" the evaluation or construction of the age and city attributes? It would make far more sense to just parse and assign them all.
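For small, flat JSON like that, the eager approach is a one-liner per field. A minimal sketch (the `Person` class and its fields are hypothetical, based on the sample data above):

```python
import json

class Person:
    def __init__(self, raw_json):
        # Parse once, assign everything up front -- no deferral needed.
        data = json.loads(raw_json)
        self.name = data["name"]
        self.age = data["age"]
        self.city = data["city"]

person = Person('{ "name":"John", "age":31, "city":"New York" }')
print(person.age)  # 31
```

The parsing cost is dominated by `json.loads()` itself; skipping a couple of attribute assignments afterwards buys you essentially nothing.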
On the other hand, if your API returns a list of thousands of, say, homes with lat/long coordinates, and part of the object initialization process involves computing the distance from a given origin address, then perhaps delaying that computation, or delaying object instantiation, makes sense.
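In that scenario, a deferred computation is easy to express with `functools.cached_property`. This is a hypothetical sketch (the `Home` class, its fields, and the haversine distance are my illustration, not your API):

```python
import math
from functools import cached_property

class Home:
    def __init__(self, lat, lon, origin=(40.7128, -74.0060)):
        # Construction stays cheap: just store the raw coordinates.
        self.lat, self.lon = lat, lon
        self.origin = origin

    @cached_property
    def distance_km(self):
        # Haversine great-circle distance; computed on first access
        # only, then cached on the instance.
        lat1, lon1 = map(math.radians, self.origin)
        lat2, lon2 = math.radians(self.lat), math.radians(self.lon)
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2)
             * math.sin((lon2 - lon1) / 2) ** 2)
        return 6371.0 * 2 * math.asin(math.sqrt(a))

homes = [Home(41.0, -74.0), Home(40.0, -75.0)]  # cheap to build thousands
# distance_km is only computed for the homes user code actually inspects
```

Here the expensive work is skipped entirely for objects whose `distance_km` is never read, which is exactly the payoff lazy evaluation is supposed to deliver.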
Please share with us the nature of the problem you're really trying to solve, and we'll be able to comment effectively on whether your proposed solution is a good idea.
EDIT:
I had a look at your linked code. The key was, I think, in the helpers.py:get_attribute() function.
You appear to be re-implementing the behavior of the __getattr__ or __getattribute__ dunder methods. I suggest a simpler approach: store (say) '_attrname' as the lazy-evaluating function, and use __getattr__ to expand that value into 'attrname' the first time it is fetched.
Since Python will check for obj.attrname and only call obj.__getattr__() if it is not found, this would provide the advantage of doing the subsequent lookups in C code rather than in Python.
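The pattern I'm suggesting can be sketched like this (the `LazyRecord` class and its names are mine, for illustration):

```python
class LazyRecord:
    def __init__(self, **lazy_fields):
        # e.g. LazyRecord(city=lambda: expensive_lookup(...))
        # Each field is stored under '_name' as an unevaluated thunk.
        for name, thunk in lazy_fields.items():
            setattr(self, "_" + name, thunk)

    def __getattr__(self, name):
        # Only reached when 'name' is NOT already in the instance dict.
        thunk = self.__dict__.get("_" + name)
        if thunk is None:
            raise AttributeError(name)
        value = thunk()             # evaluate lazily, exactly once
        setattr(self, name, value)  # cache under the plain name
        return value

rec = LazyRecord(city=lambda: "New York")
rec.city  # first access: __getattr__ runs the thunk and caches the result
rec.city  # later accesses: ordinary C-level lookup, __getattr__ not called
```

After the first access, `city` sits in the instance `__dict__`, so Python's normal attribute machinery finds it directly and your Python-level hook never runs again for that name.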