Off Topic: Sources of Technical Debt
Software projects in the maintenance phase of their life cycle have to deal with technical debt sooner or later. Sometimes it is completely ignored by the development team and the project managers. The early warning sign of unmanaged technical debt is a full stop in new feature development. In the worst case it forces a reboot of the software project, meaning that the effort required to keep the product stable has become larger than the effort of redeveloping the product from scratch.
Classically, technical debt is something hidden in the source code of the software. It is therefore naturally invisible to product managers without software development skills. In this case it is the development team's responsibility to make it visible to the other stakeholders. This type of debt is grouped on the left side of the source diagram.
Internal Sources of Technical Debt
Classical technical debt sources can be enumerated as follows:
- Legacy Code
- Workarounds
- Clean Code Violations
- Design Failures
Any piece of code written in the past without respecting the architectural constraints, or developed without related test cases or documentation, falls into the legacy code category. It is a potential impediment for development and a potential source of bugs in your next release. As long as it is untouched it works perfectly; unfortunately, working and being executable is the only property we know about it. Merely working creates a fragile situation with respect to any kind of change or enhancement. When legacy code needs to be changed, we either face new bugs or the small enhancement consumes more time than any worst-case estimate.
Workarounds and quick fixes are the most efficient ways of creating legacy code. Skipping tests and documentation, or violating architectural constraints, is exactly what defines a solution as a workaround or quick fix. They can also cause defects in the domain model of the software by introducing half-finished concepts. Due to time restrictions they are often unavoidable, but their effects must be tracked and managed.
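A minimal, hypothetical sketch of such a quick fix (class names and the discount value are invented for illustration): a special customer case is hard-coded into the billing logic instead of being modeled as a proper discount concept, leaving a half-finished idea in the domain model.

```java
import java.util.List;

// Hypothetical domain types, only what the example needs.
record Item(String name, double price) {}
record Order(String customerName, List<Item> items) {}

public class InvoiceService {

    public double calculateTotal(Order order) {
        double total = order.items().stream()
                .mapToDouble(Item::price)
                .sum();

        // Quick fix: key account "ACME" gets 10% off, agreed informally.
        // The discount is not part of the domain model, has no test and no
        // documentation -- a half-finished concept hiding in the code.
        if ("ACME".equals(order.customerName())) {
            total *= 0.9;
        }
        return total;
    }
}
```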
Clean Code principles help us to develop more readable and more easily maintainable source code. Many of these principles can be checked by static analysis tools such as CheckStyle or FindBugs. Clean code metrics together with code coverage metrics give a sufficient view on the project risks caused by legacy code, workarounds and quick fixes.
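As an illustration of the kind of violation such tools report, the following hypothetical method mixes magic numbers with meaningless names; the exact rule names and thresholds depend on the tool configuration.

```java
// Hypothetical example of clean code violations that static analysis
// tools typically flag, followed by a cleaner version of the same logic.
public class ShippingCalculator {

    // Meaningless names and magic numbers: impossible to understand
    // without tribal knowledge.
    public double calc(double w, int z, boolean x) {
        if (x) {
            return w * 4.99 + z * 0.75;   // magic numbers
        }
        return w * 2.49 + z * 0.75;       // duplicated magic number
    }

    // Cleaner version: intention-revealing names and named constants.
    private static final double EXPRESS_RATE_PER_KG = 4.99;
    private static final double STANDARD_RATE_PER_KG = 2.49;
    private static final double FEE_PER_PARCEL = 0.75;

    public double shippingCost(double weightKg, int parcels, boolean express) {
        double ratePerKg = express ? EXPRESS_RATE_PER_KG : STANDARD_RATE_PER_KG;
        return weightKg * ratePerKg + parcels * FEE_PER_PARCEL;
    }
}
```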
This view is unfortunately not complete, because of the next category of technical debt sources. Design failures are impediments placed in the heart of the software product. Most of the time a wrong domain model causes new feature requests to be cancelled as infeasible, or to be postponed to an unknown release version in the future.
A domain model can define the relations between entities incorrectly; in that case the domain model is simply wrong. Sometimes, however, the domain model is not wrong but merely insufficient, because parts are missing. Every piece of software has its own understanding of the outside world, and this is always based on a restricted view of the real situation. Most of the time the initial domain model needs only a subset of the real-world entities and relations for the initial product. In this case the domain model is not wrong, it just provides an insufficient view of the problem domain for the new requirements. Frequently the existing entities and their relations are also optimized around implicit assumptions. Exactly this optimization makes it harder to enhance the domain model, because the domain expert first has to uncover the implicit assumptions and make them explicit before new entities that do not rely on the same assumptions can be added.
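A minimal, hypothetical sketch of such an implicit assumption (the customer/address model is invented for illustration): the first version silently assumes every customer has exactly one address, and supporting multiple delivery addresses first requires making that assumption explicit.

```java
import java.util.List;

// Hypothetical domain model with an implicit assumption: every customer
// has exactly one address. Nothing in the code states this; it is simply
// baked into the single field.
class CustomerV1 {
    String name;
    String address;   // implicitly "the one and only" address
}

// After the assumption is made explicit, the model can grow: a customer
// owns a list of addresses and one of them is marked as the default.
class Address {
    String street;
    String city;
    boolean defaultDelivery;
}

class CustomerV2 {
    String name;
    List<Address> addresses;   // the single-address assumption is gone
}
```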
External Sources of Technical Debt
The second group of technical debt sources comes from outside the product source code. The software development team has no control over these triggers and can only react to them. Since they are not hidden in the source code of the project, they are more visible to the project managers. The sources in this category can be listed as follows:
- Architecture Changes
- External Environment Changes
Architectural changes usually come into play as a reaction to new market requirements. IoT and the demand for short time to market are two such trends, and together they triggered a migration of software architectures from monoliths to microservices.
One important fact a software development team has to consider is that it is not the only team developing software on the planet. This brings advantages such as profiting from existing knowledge, libraries and frameworks. On the other hand, using existing libraries and frameworks comes with a dependency maintenance cost. This cost is of course far smaller than the cost of the "reinvent the wheel" approach, but it is still a cost, and some effort has to be spent on it. New versions of the libraries must be integrated into the software product, and the product itself must continuously be kept compatible with new versions of the frameworks. This is a continuous effort, and time therefore needs to be invested regularly to stay up to date.
Of course, ignoring this effort and the changes in the outside world is possible. If the changes are ignored, security vulnerabilities found in older versions of the used libraries or frameworks become a nightmare for the customers and, consequently, for the development team. Security vulnerabilities are the first station on the "do not react to changes" railway. Like any other software, frameworks and operating systems are also developed to capture new requirements. This means some of the old APIs become obsolete and deprecated in newer versions. Eventually, frameworks and operating systems remove old APIs entirely. Being incompatible with the underlying operating system or framework API is the final station on the "do not react to changes" railway, and it potentially declares the end of life for your software product. More precisely, the operating system or framework provider declares the end of life for your product by ceasing to maintain its oldest version, which eventually becomes the only version your product is compatible with, because the changes in the outside world were continuously ignored.
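A small sketch of this deprecation path in Java, purely as an illustration: the old Date constructor below has been deprecated since JDK 1.1 and java.time is its modern replacement; code that keeps depending on deprecated APIs is one platform release away from no longer building.

```java
import java.time.LocalDate;
import java.util.Date;

public class DeprecationExample {

    public static void main(String[] args) {
        // Old API: this Date constructor has been deprecated since JDK 1.1.
        // The compiler warns about it today; a future platform version could
        // remove it, and code relying on it would no longer build.
        @SuppressWarnings("deprecation")
        Date oldStyle = new Date(124, 0, 15);   // 15 Jan 2024, year counted from 1900

        // Current API: the java.time replacement introduced in Java 8.
        LocalDate newStyle = LocalDate.of(2024, 1, 15);

        System.out.println(oldStyle);
        System.out.println(newStyle);
    }
}
```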