In our wonderful profession of software engineering, there is one ever-recurring subject: upgrades. Language upgrades, framework upgrades, key package upgrades, you name it. We have all had our share of them and dealt with them one way or another, or avoided dealing with them altogether. It is a never-ending cycle of major version jumps, minor update steps, LTS releases and deprecations, often scheduled on a yearly or bi-yearly cadence. Often these changes are small and do not directly affect our application code, but sometimes they are more significant, or even represent a paradigm shift by their creators, requiring real time to integrate. The more we lean on a certain component, the longer that integration tends to take.
From a business perspective, the need to reserve time for these upgrades is not always well understood.
Most likely, upgrades feel comparable to the ‘joyful’ experience of bringing your car to the garage for its major service interval: you’ll be unable to use the vehicle for a day, you’re guaranteed a hefty bill, and if you’re unlucky you’ll get a call during the day that some part you’ve probably never heard of needs to be replaced straight away, supersizing an already unpleasant bill. And when you collect your vehicle at the end of the day, it doesn’t feel as though anything special has been done.
On top of that, upgrades inherently carry a form of risk. Once an upgrade has been done, unexpected things can happen, though this is more of a rarity. I vividly remember the strangest bug we encountered with a password reset after an upgrade, where internal staff could not reproduce our customers’ complaints at all. Only then did I realize that the cache TTL unit had changed from minutes to seconds in the upgrade, leaving our password reset with a TTL of only 60 seconds. That was enough time for us to complete our smoke tests, but not enough for most customers to complete the reset.
One could make the argument that executing these upgrades is a matter of necessity for the security of the business. There is some truth to that: a zero-day vulnerability could be discovered and pose a significant risk. In practice, however, multiple layers of protection often prevent malicious intent from reaching the vulnerable code in the first place. Security is important, but here it is merely a confounding factor.
Or one could make the case that executing the upgrades improves the performance of the system, removing bugs and faults that would hinder us if we stuck with the same versions. Yet hopefully this should not have affected our business logic in the first place, since that logic should be adequately covered by tests to begin with.
No, the definitive case for upgrades can be explained much more simply: they serve the business need of staying focused on delivering application code. No sane developer would consider opening a TCP/IP connection to port 25 of an SMTP server to manually specify all headers and encode payloads to send an email—we use libraries for that. A lot of them, drawn from the infinite pool of wisdom that is the open source community. Frameworks, libraries, packages: battle-tested pieces of logic that are already out there to use freely.
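To make that contrast concrete, here is a minimal sketch using Python’s standard library; the addresses and host below are made-up placeholders, not anything from a real system:

```python
# Building an email with the standard library instead of hand-rolling
# SMTP headers and payload encoding over a raw port-25 socket.
import smtplib
from email.message import EmailMessage

def build_welcome_email(recipient: str) -> EmailMessage:
    """The library handles headers, MIME structure and encoding for us."""
    msg = EmailMessage()
    msg["From"] = "noreply@example.com"
    msg["To"] = recipient
    msg["Subject"] = "Welcome!"
    msg.set_content("Thanks for signing up.")
    return msg

def send(msg: EmailMessage, host: str = "smtp.example.com") -> None:
    """One call replaces the entire manual SMTP conversation."""
    with smtplib.SMTP(host, 25) as server:
        server.send_message(msg)
```

Every one of those hidden details (dot-stuffing, line endings, content-transfer encoding) is exactly the kind of thing we delegate to the ecosystem, and exactly why keeping that ecosystem upgradable matters.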
Executing these upgrades is part of the total cost of ownership of the codebase. It is simply the price we pay for using code we did not have to write ourselves to keep the business goal moving forward. It is also the price we willingly pay in advance to be able to plug in future open source components that move the business goal forward.
Not doing the upgrades means not investing in the compatibility of the codebase, because eventually we will run into a situation where a new open source package could really speed things up, but we’re forced either to write something ourselves completely from scratch or to use an inferior or outdated alternative. To summarize it in a one-liner: it is spending quality effort on keeping the application forward-compatible.
And not executing upgrades for a prolonged time will eventually leave you in some kind of language-framework-package conundrum where you can’t move any component up at all: A depends on B, B depends on C, and C cannot be upgraded because that would require fully upgrading X. X had a paradigm shift that requires redoing 43 application code classes to be fully compatible again, which breaks 185 unit tests because we kept working so long on this current version of X. That is what makes the upgrade backlog feel like an impossible endeavor.
Thus, due to business misunderstanding of their necessity, these upgrades often get postponed over and over again. The team gets stuck with an unhappy developer experience: browsing legacy documentation, creating forks and other workarounds to support abandoned packages, and feeling a general discontent about the state of the application, ultimately slowing down velocity.
And when you finally commence and tackle all the components, the operation feels daunting, as it involves replacing numerous building blocks simultaneously. You’ll be forced either to do a risky big-bang upgrade or to take the time-consuming road of going version by version, like you should have done in the first place.
Perhaps those periodic upgrades are a necessity after all. Just as the scheduled service intervals for a car are there to ensure you get a sizable mileage out of your vehicle, periodic upgrades of our application are there to make sure we keep driving the business forward. Not always the most fun part, granted, but one of the best ways to ensure as much traction and velocity as possible!