As a developer, there is nothing more frustrating than something breaking, whether it be our computer hardware, our network connection, the software we rely on to communicate and work, or the dependencies we rely on to build our software. When things break, our productivity grinds to a halt and we are forced to spend time getting things working again.
When we discuss breaking changes in the context of library development, we typically refer to ‘breaking API changes’ or ‘breaking behavioural changes’. Regardless of the type of breakage being discussed, we should always treat it as undesirable. We should strive to never break, and if we must, then it must be for a very good reason. API and behavioural changes break in very different ways, but fortunately we have tools available that can help reduce how often we break.
A breaking API change is one where an API that you published is no longer available for users to call. User code that called this API will no longer compile, and users will be forced to update their code and redeploy their application if they wish to upgrade to your new release. Breaking API changes can occur for a number of reasons, such as removing or renaming a public method or class, changing a method’s parameters or return type, or reducing the visibility of a type or member.
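As a small illustration (the `Greeter` class and its method are hypothetical, not from any real library), consider a published method whose signature later changes:

```java
// Hypothetical published API; names are illustrative only.
public class Greeter {
    // v1.0.0: users call this overload directly.
    public static String greet(String name) {
        return "Hello, " + name;
    }

    // If v2.0.0 replaced the method above with one requiring an extra
    // parameter, e.g. greet(String name, String language), every
    // existing call site would stop compiling -- a breaking API change
    // that forces users to update their code and redeploy.

    public static void main(String[] args) {
        System.out.println(Greeter.greet("World"));
    }
}
```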
When it comes to breaking APIs, we really need to strive to do this as rarely as possible, so that we don’t create unnecessary work for our users. In fact, our baseline expectation should be to never break our API, and only to allow it in exceptional situations. At the point we need to break, we then need to follow semantic versioning rules.
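Under semantic versioning, only a major version bump signals a breaking change, which is a rule simple enough to sketch in a few lines (the `isBreakingUpgrade` helper below is illustrative, not from any library):

```java
// Minimal sketch: under semantic versioning, breaking changes are only
// permitted when the major version is incremented (e.g. 1.4.2 -> 2.0.0),
// so a consumer can treat any same-major upgrade as source-compatible.
public class SemVerCheck {
    static boolean isBreakingUpgrade(String from, String to) {
        int fromMajor = Integer.parseInt(from.split("\\.")[0]);
        int toMajor = Integer.parseInt(to.split("\\.")[0]);
        return toMajor > fromMajor;
    }

    public static void main(String[] args) {
        System.out.println(isBreakingUpgrade("1.4.2", "2.0.0")); // major bump
        System.out.println(isBreakingUpgrade("1.4.2", "1.5.0")); // minor bump
    }
}
```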
Further best practices are outlined in the characteristics of good API guidance related to evolvability, as well as in the guidance on designing for extensibility.
Fortunately, we do have tools on our side to keep us honest and aware of breaking API changes. These take the form of API analysis and change tracking tools such as RevAPI. They typically work by comparing a locally-built version of your library against the last released version found in Maven Central (or elsewhere), and can be configured to report or fail a build if changes have been detected (and have not been suppressed in the rare case where a breaking change has been deemed to be the correct thing to do). In my team at Microsoft, RevAPI has proven to be a very useful tool, and I recommend everyone consider integrating it (or a similar tool) into their build process.
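For Maven users, wiring RevAPI into the build looks roughly like the following sketch (version numbers are placeholders; consult the RevAPI documentation for current coordinates and configuration options):

```xml
<!-- Sketch of a RevAPI check bound to the build; versions are placeholders. -->
<plugin>
  <groupId>org.revapi</groupId>
  <artifactId>revapi-maven-plugin</artifactId>
  <version><!-- latest release --></version>
  <dependencies>
    <!-- The Java analyzer that performs the API comparison. -->
    <dependency>
      <groupId>org.revapi</groupId>
      <artifactId>revapi-java</artifactId>
      <version><!-- latest release --></version>
    </dependency>
  </dependencies>
  <executions>
    <execution>
      <goals>
        <!-- Fails the build when unsuppressed breaking changes are found. -->
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With this in place, the build compares the current sources against the last release published to Maven Central and fails on any unsuppressed breaking change.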
The other kind of breaking change that we need to be very careful of is a change in behaviour. For example, if the foo() method is documented to always return 42, and users begin to depend on this behaviour, it would be very unwise to change it to instead return 0 in a future release. The obvious solution to prevent this from happening is to introduce good quality unit tests to validate expected behaviour, and to ensure this does not change unexpectedly between releases.
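A minimal sketch of such a guard is shown below; the Calculator class is hypothetical, and in a real project this check would live in a JUnit test rather than a main method:

```java
// Hypothetical library class; foo() is documented to always return 42.
class Calculator {
    static int foo() {
        return 42;
    }
}

public class BehaviouralContractTest {
    public static void main(String[] args) {
        // Pin the documented behaviour: if a future release changed
        // foo() to return 0, this check would fail the build rather
        // than silently breaking users at runtime.
        if (Calculator.foo() != 42) {
            throw new AssertionError("foo() must always return 42");
        }
        System.out.println("behavioural contract holds");
    }
}
```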
As mentioned in JBP-6, we should make sure to introduce code coverage tooling such as JaCoCo to validate that we are writing tests that cover these critical behavioural aspects of our code. Reading the reports generated by tools such as JaCoCo, we can very quickly identify code paths that may not be getting enough test coverage, and writing tests for these paths can help prevent accidental behavioural breaking changes.
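For Maven builds, JaCoCo coverage reporting can be wired in along these lines (the version is a placeholder; see the JaCoCo documentation for current details):

```xml
<!-- Sketch of JaCoCo coverage reporting; the version is a placeholder. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version><!-- latest release --></version>
  <executions>
    <execution>
      <goals>
        <!-- Attaches the JaCoCo agent so test execution is instrumented. -->
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <execution>
      <id>report</id>
      <phase>test</phase>
      <goals>
        <!-- Generates the coverage report after tests run. -->
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

The generated report highlights under-covered code paths, which is exactly where accidental behavioural changes are most likely to slip through.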