We have a server that receives queries. We use the command pattern for this, e.g.:

```go
q := NewQuery(database)
q.Execute(request1)
q.Execute(request2)
```
The queries read a "model" from the database:

```go
type database interface {
	ReadModel(id string) *Model
}
```
Problem #1: The model is stored raw, but when executing a query we need to convert it to an `ExtendedModel` that has methods for easy manipulation, like this:

```go
model := database.ReadModel(id)
extendedModel := extended_model.New(model)
// or
extendedModel, err := extended_model.NewAndValidate(model)
```
How do we avoid repeating that?
- I feel uneasy about changing the `database` interface method. It's not the DB's responsibility to convert the model...
- If I keep the code as is, we run into:
Problem #2: To reduce database lookups, we want a cache of `ExtendedModel`s that all the queries can use. Each server has its own in-memory cache.

How do we solve this in a way that doesn't let us "forget" that the cache of `ExtendedModel`s exists? Right now we read from the cache manually (like here), but in some commands we forgot to remove the now-unneeded DB read.
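For concreteness, this is roughly what the manual pattern looks like today (a sketch only; `Query`, `Request`, and the `cache` field with its `Get`/`Put` methods are hypothetical names, not our real API):

```go
// Every command has to remember both steps by hand.
func (q *Query) Execute(req Request) {
	extended, ok := q.cache.Get(req.ID)
	if !ok {
		// This read is only correct inside the miss branch; the recurring
		// bug is commands that still call ReadModel unconditionally.
		model := q.db.ReadModel(req.ID)
		extended = extended_model.New(model)
		q.cache.Put(req.ID, extended)
	}
	// ... handle the request using extended ...
}
```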
> It's not the DB's responsibility to convert the model.

Why? If the conversion is done consistently after every fetch, the best place to do the mapping to the extended model is the `database`. You could do it automatically, or pass a mapper along with the ID to the get function, so that the function returns either the `Model` or the type produced by the mapper.
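To make that concrete, here is a minimal sketch of the "do it automatically" variant: a small repository type that wraps the raw database, owns the conversion, and also folds in the per-server cache from Problem #2. All names here (`ModelRepo`, `ReadExtended`, and the stub types standing in for the question's `Model`/`extended_model`) are hypothetical:

```go
package repo

import "sync"

// Stubs standing in for the question's types, so the sketch compiles.
type Model struct{ ID string }
type ExtendedModel struct{ *Model }

func NewAndValidate(m *Model) (*ExtendedModel, error) { return &ExtendedModel{m}, nil }

// database is the raw store, unchanged: it still returns plain models.
type database interface {
	ReadModel(id string) *Model
}

// ModelRepo is the only dependency handed to queries, so they cannot
// bypass the cache or forget the conversion.
type ModelRepo struct {
	db    database
	mu    sync.RWMutex
	cache map[string]*ExtendedModel
}

func NewModelRepo(db database) *ModelRepo {
	return &ModelRepo{db: db, cache: make(map[string]*ExtendedModel)}
}

// ReadExtended is a read-through lookup: on a cache miss it fetches the
// raw model, converts and validates it once, and memoizes the result.
func (r *ModelRepo) ReadExtended(id string) (*ExtendedModel, error) {
	r.mu.RLock()
	em, ok := r.cache[id]
	r.mu.RUnlock()
	if ok {
		return em, nil
	}
	em, err := NewAndValidate(r.db.ReadModel(id))
	if err != nil {
		return nil, err
	}
	r.mu.Lock()
	r.cache[id] = em
	r.mu.Unlock()
	return em, nil
}
```

Queries would then receive a `*ModelRepo` instead of the raw `database` (e.g. `q := NewQuery(repo)` and `em, err := q.repo.ReadExtended(id)`), which addresses both problems at once: the conversion lives in exactly one place, and there is no raw `ReadModel` left for a command to call by accident.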