Is Unit Testing the Best Method to Avoid Code Decay?

Addressing a question posted on LinkedIn (here).

No, and yes. Unit testing is one of the valuable and fundamental tools in our rich and diversified toolkit for software and process development projects.

The best method to avoid code decay – and to assure and monitor overall software and process quality – is to keep the code aligned with the business (or functional) value/requirements it is designed to achieve; by having an official, managed and integrated [sort of] SALMS – Software Asset Lifetime Management System.

Below, I suggest how we can address this in an organized, brief way.

I – “Context is king”, and Quality is Non-negotiable.

So, again, unit testing (UT) is a valuable and fundamental “tool” in our rich and diversified toolkit. But it is just one in a series of “must have” development and management tools (and methodologies, disciplines, standards, formalized [and as automated as possible] project processes, procedures etc.) for every professional software development project.

Since software will most probably evolve somehow, sometime, our ultimate goal is not to keep fussing over, changing or refactoring the code time after time, but to deliver value and decrease TTM [time to market], even in the form of discrete units (“building blocks”) of reusable, qualified software “assets”.

II – SUT and Test Code Quality.

UT code also evolves, and not necessarily at the same pace as the “production” SUT – System Under Test – code. So, for both of them, we need a solid (well-structured, standard and managed) VMF – Versioning Mechanism Framework – with its processes etc., in order to stay aligned with their “evolution”.

This, paired with a corporate-wide, robust tracking and monitoring solution, would allow us to address questions like who/when/where the SUT code was tested (and against which UT version), and to trace test history and evidence. It also lays the foundation for the even more crucial regression (compatibility) tests, while providing transparency, accessibility and credibility to the results of our development efforts. Besides, it assures and enforces one of the test discipline’s tenets: “Tests should be repeatable” [on demand].
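Such a traceability mechanism can be sketched quite simply. Here is a minimal, illustrative Python sketch – the class and field names are my own invention, not part of any specific tool:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class TestRun:
    """One traceable test execution: who ran what, when, against which versions."""
    sut_version: str   # version of the System Under Test code
    ut_version: str    # version of the unit-test suite it ran against
    tester: str
    environment: str
    passed: bool
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class TestEvidenceLog:
    """Append-only log: every run is tied to a specific SUT/UT version pair."""
    def __init__(self):
        self._runs = []

    def record(self, run: TestRun):
        self._runs.append(run)

    def history(self, sut_version: str):
        """All recorded evidence for a given SUT version."""
        return [r for r in self._runs if r.sut_version == sut_version]

log = TestEvidenceLog()
log.record(TestRun("2.1.0", "2.1.0", "alice", "dev", True))
log.record(TestRun("2.1.0", "2.1.1", "bob", "qa", True))
print(len(log.history("2.1.0")))  # two evidence entries for SUT v2.1.0
```

An append-only log like this is exactly what makes “repeatable on demand” auditable: the test history and its evidence survive, version pair by version pair.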

III – Code coverage vs. requirements ([end-to-end] functionality) coverage.

One of the most important tenets of OOP is the “open-closed principle”, which states that code “should be open for extension, but closed for modification”; so there is no such thing as “endless refactoring” [if there is, then something is definitely wrong]. Good (clear, tested) code, supported by a standard VMF and an integrated SALMS – Software Asset Lifetime Management System – should be in place to address this issue.

Also, there is no such thing as “ideal code” – just “code that works” [as intended], code that delivers the functionality it was [originally] designed to deliver, satisfying the specified requirements (goals, value). And such code is usually simple, clear and “elegant”. But again, as the requirements evolve, so might the code [and the UTs]; and code change – of whatever kind – should always be [well] managed, as suggested above.

Still on refactoring: Martin Fowler has always advocated that – for [backward] compatibility assurance – mandatory pre- and post-testing calls (“assurance”) should surround each refactoring [baby] step. And, as stated before, although vital, refactoring should never be an “endless” effort. In a well-managed environment it should always be encouraged; and it should be a piece of cake, harmless [otherwise it would be purposeless, a waste of time and money. After all, why care about this doodad when the term “quality” is only a political or marketing “tool”? When that is the case, I usually rethink my presence in such a place…].

That said, it’s crucial to remember that Code Coverage (CC) [another “must have” tool] goes side by side with UTs, and should be specified, managed and tracked; although it does not guarantee, by itself, the overall software quality [that’s not its purpose], nor code alignment [with its ultimate goal], which we should always strive to achieve.
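A coverage target, once specified, can be enforced as a simple build gate. A minimal sketch – the 80% threshold is purely illustrative, a number each project would agree on:

```python
def coverage_gate(covered_lines: int, total_lines: int, threshold: float = 0.80) -> bool:
    """Return True when measured line coverage meets the agreed threshold.

    Coverage is a managed, tracked signal -- not a proof of quality:
    100%-covered code can still miss the business requirement entirely.
    """
    if total_lines == 0:
        return True  # nothing to cover
    return covered_lines / total_lines >= threshold

# A build pipeline could fail fast on the gate:
assert coverage_gate(85, 100)       # 85% >= 80% -> build proceeds
assert not coverage_gate(70, 100)   # 70% <  80% -> build should fail
```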

Along with UT and CC, CA – Code Analysis – is another fundamental tool to help us manage overall code quality, by assuring code “compliance” with [corporate-wide] standards such as naming conventions, method [and class and parameter] length, patterns [as I exemplify a bit later], etc.
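Method-length limits and naming conventions are exactly the kind of standard a CA tool automates. A toy sketch using Python’s standard ast module – the 30-line limit and the lowercase-name rule are illustrative choices, not a prescription:

```python
import ast

MAX_METHOD_LINES = 30  # illustrative corporate limit

def check_compliance(source: str) -> list:
    """Flag functions that break a naming convention or a length limit."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            if not node.name.islower():  # e.g. enforce lowercase/snake_case names
                violations.append(f"{node.name}: naming convention")
            length = node.end_lineno - node.lineno + 1
            if length > MAX_METHOD_LINES:
                violations.append(f"{node.name}: {length} lines (max {MAX_METHOD_LINES})")
    return violations

print(check_compliance("def BadName():\n    return 1\n"))
# flags the naming violation
```

A real CA mechanism would of course carry many more rules; the point is that “compliance” is checkable code, not a wish.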

IV – Context overview

As I see it, multiple and different kinds of software “products” (packages, assemblies) – with distinct responsibilities (core utilities, entities, mapping, DAL, BLL, GUI, supporting frameworks etc.) – are usually “integrated” (composed) to assemble a viable “deliverable”.

[Here ends the “realm” of UTs, and other mandatory types of tests enter: component, system, integration, UAT – User Acceptance Tests, etc.]

Then the aforementioned “products” may be combined to form (embody) a “service” and/or a series of well-structured, [functionally] related services. And that is the meaning of the expression “software products and services”.

[Here we usually deal with other important kinds of tests, such as load, stress, etc.]

Why am I stressing this well-known point? Because these assets [products and services] do NOT, by themselves, deliver the ultimate value our client (sponsor, user etc.) expects and is paying for. They are, by themselves, just “tools”, a means to an end. At the end of the day our users – the real “product owners” – are only interested in their [business] “processes”. And not just that: if they represent a well-structured corporation, they will probably have a series of integrated processes, which might (or might not) be end-to-end (or macro) processes. This is our overall landscape; here lives our ultimate scope, context and boundary, almost always.

[And here another layer of QA – Quality Assurance – is required, to assess the perceived value of our combined effort, the real strength of our solutions and deliverables. With this overall context [always] in mind, we should structure our own QA layers and environments.]

V – Quality environments, processes and management.

The [widely adopted, COBIT-based] SOX (Sarbanes–Oxley Act of 2002) compliance standards dictate that development, QA and “production” environments be completely isolated from each other. This enforces overall requirements observance, and lays the foundation for effective acceptance tests, approvals and ratifications.

Besides, a well-structured and managed software environments “ecosystem” should also address the infamous, and yet not so rare, “but it runs on my machine” issue. This also leads us to another management responsibility, or process [in the scope of process and/or software development projects]: environment and infrastructure compliance and alignment assurance.

Such environment isolation (transparency, agnosticism) also implies that the development team “owns” the code. Of course the sponsor ultimately owns the code, but the development team is its “guardian” or, as I prefer, its “custodian”. This avoids and prevents the unprofessional, harmful and outrageous practice of having “production” (operations, help desk) teams changing the code [time after time, at will]. After all, management implies control. They can access the code, they can even debug it if needed, but they cannot change it. Doing so, at any level or in any amount, should be denied and considered code “corruption” and/or asset “violation”, subject to severe penalties, due to its potential detrimental impact and negative side effects on the code (in particular) and on end-to-end business processes (in general).

If consulted, I would advise further abstraction on the SOX layers’ model, as follows:

  • Development Layer – Code, UTs, CCs and CAs [among an almost infinitude of other software artifacts and tools] domain. Development and test teams.
  • Internal QA, IT Layer – Package candidates “pit-stop”. Corporate code-and-software compliance and client-requirements assurance. Package complementation (SLAs, user manuals, marketing sheets etc.). Read-only code copies access. Test and QA teams.
  • Protected QA, Client Layer – UAT of all kinds. Unofficial DMZ packages. No code access. QA and client teams.
  • Delivery, Corporate Layer – Official, ready-to-deliver packages. Read-only code copies. Deploy and management teams.
  • Production, Current, Real Time Layer – Official, in-use software and processes environment. Read-only code copies access. Operations, client and management teams.

With such a SoC – Separation of Concerns (and “workspaces”) – in place, and with the proper VMF and promotion strategies and process support, a solid, robust and manageable SALMS can be easily envisioned, specified and implemented. And this leaves us with two last crucial considerations.

VI – What about the legacy and/or external (partner) code?

In all cases where UTs might seem “impossible to apply”, I would suggest another abstraction layer – which, as a matter of fact, should be applied to every “black-boxed” resource or dependency. Here the traditional GoF “Design Patterns” [1995] and the more recent Thomas Erl “SOA Design Patterns” [2009] teachings come in handy. Patterns such as Legacy Wrapper (441), Federated Endpoint Layer (713), Facade (185), etc. should be applied and enforced.

Something like:

  • Encapsulate all the needed dependency features (as below);
  • Always consume the code generated in the previous step; and
  • Never consume the black-boxed features directly.
   namespace EncapsulationSample
   {
      using SomeCompany.NeededBlackBox; // where WrappedResource lives

      public class WrapperClass
      {
         private readonly WrappedResource _dependency = new WrappedResource();

         public int EncapsulatedMethod()
         {
            return _dependency.NeededMethod();
         }
      }
   }

Then formalize, publicize and enforce this unique access point “rule” [standard], by:

  • Amending the CA mechanism [only one “new WrappedResource()” should exist]; and
  • Unit testing the WrapperClass features [“services”].
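To make the second bullet concrete, here is a minimal Python mirror of the C# sketch above, with a unit test exercising the wrapper’s contract through a stubbed dependency [the names follow the sketch; the stubbed return value is arbitrary]:

```python
import unittest
from unittest.mock import Mock

class WrapperClass:
    """Unique access point to a black-boxed dependency (mirrors the C# sketch)."""
    def __init__(self, dependency):
        self._dependency = dependency  # stands in for WrappedResource

    def encapsulated_method(self) -> int:
        return self._dependency.needed_method()

class WrapperClassTests(unittest.TestCase):
    def test_delegates_to_wrapped_resource(self):
        # We unit test the wrapper's contract -- never the black box directly.
        stub = Mock()
        stub.needed_method.return_value = 42
        self.assertEqual(WrapperClass(stub).encapsulated_method(), 42)
        stub.needed_method.assert_called_once()

# Run the suite programmatically (unittest.main() would call sys.exit):
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WrapperClassTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Note that the black-boxed feature never appears in the test at all – which is precisely the point of the unique access point “rule”.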

Moreover, since these legacy features are potentially useful to other solutions, it is recommended – and should be mandatory – to abstract their facade (the unique official, allowed access point) into a wholly new [“first-class”] supporting development project, from which it can evolve as needed. And of course we must do the same with its related UTs and CCs. This way, my code never refers [directly] to the wrapped features, but only to our official wrapper reference, which is fully tested and well managed. [I can keep consuming the “good-enough-for-me” EncapsulationSample v1.0, even if other solutions are using its extended 2.0 version.]

This way, even with a large amount of “unmanaged” [foreign or messy] code, and even if I have access to the source code:

  • I don’t change (or “corrupt”) the original legacy code; and
  • I don’t test this code [directly]; but
  • I unit test every public encapsulated feature (state, behavior etc.) [on EncapsulationSample]; and
  • I assure its proper code coverage and analysis [for full code compliance].

And this is the essence of code reuse and loose coupling [and perhaps almost all about it].

VII – And what about other software artifacts and development project requirements?

There is a whole series of software “artifacts” [other than code] – usually neglected and overlooked by our [sometimes virtual, unmanaged and reactive] change processes and procedures – that can cause serious and unpredictable disasters, ranging from TTM increases to severe financial losses due to system crashes.

These include schemas, scripts and configuration files [or variables] of all types: OS environment variables, IIS parameters and settings, the machine.config file, database scripts, metadata schemas, firewall port rules, endpoint security settings etc. Each one of these artifacts, among others, can “inadvertently” cause unexpected damage – no matter how minimal or “trivial” a modification is supposed to be.

Why? Because these artifacts usually impact not just one, but a bunch of – not necessarily related – software products, services and processes; which I usually refer to as business resources (BR), or software assets for short; or even [sometimes] just as resources or assets in general. These terms may also encompass software-related artifacts of any kind – “critical” (like those mentioned above) or not (diagrams, manuals, specifications, documentation etc.).

But back to the subject. Even if the modified artifact impacts only one specific BR [and we can seldom assert this for sure], we don’t know in advance what other resources depend on that impacted BR, nor which resources rely on those [“first-level”] dependents, the second-level dependents, and so on. As you can see, we are potentially dealing with a whole net hierarchy of [descendant, downstream] dependencies.

And that is not all. The fact that the impacted resource is changing its own state – and/or behavior – may impact the behavior of the resources it depends on, and/or the behavior of the resources above those – the ascendant, upstream chain of dependencies. Is that clear? Because it gets worse [as it can]!
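Once the net is mapped, the upstream impact set is a plain graph traversal. A small Python sketch [the resource names and edges are invented for the example; the downstream set is simply obtained by following depends_on directly]:

```python
from collections import deque

# Edges point from a resource to what it depends on (illustrative data).
depends_on = {
    "gui": ["bll"],
    "bll": ["dal", "entities"],
    "dal": ["db_script"],
    "entities": [],
    "db_script": [],
}

def impacted_by(changed: str, graph: dict) -> set:
    """Everything upstream of `changed`: resources that (transitively) depend on it."""
    # Invert the edges, then walk the net breadth-first.
    dependents = {}
    for res, deps in graph.items():
        for d in deps:
            dependents.setdefault(d, []).append(res)
    seen, queue = set(), deque([changed])
    while queue:
        for parent in dependents.get(queue.popleft(), []):
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

print(impacted_by("db_script", depends_on))  # {'dal', 'bll', 'gui'}
```

In other words: touching one database script can, in this tiny example, reach all the way up to the GUI. That is exactly the invisible risk the text describes.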

Since the modified artifact is rarely well and proactively managed (controlled, monitored, assessed and – most important – versioned), it is simply impossible to predict or analyze the impact and risk such a change can impose. Moreover, since they are not properly managed, these artifacts are often carelessly created and reactively modified. As a matter of fact, they are [often] reactively updated precisely because they are carelessly created and not properly managed.

And here, my dear friends, lies [hidden] the reason we often find ourselves “reinventing the wheel” all over again [and again and again] – to forge a momentary lapse of apparent control over the environment and context of our so-called “solutions”. We simply have neither [full] control nor [complete] “knowledge” of [the implications of] what we are [really] doing.

VIII – Progressively directing our steps to real safety, effectiveness and value

What can we do about it? How do we gain control and acquire knowledge of such critical artifacts [which I refer to as “assets”, due to their high importance and status within many of our contexts]? And, since they are so powerful and yet so fragile, how can we “strengthen” and “empower” them?

Well, you must already have a glimpse of the most obvious, logical, easy and simple thing to do. The answer lies right in front of us [perhaps we need to “step back” a little in order to see it]. Fortunately, we just need to do the very same thing we have been doing year after year, but now focused on our own “problem space”. You got it? Exactly! I knew it!

We simply define, specify, analyze, model and persist their [draft, at first] representation in a relational database. Then we test and refine our models [and scripts etc.], envision and implement some prototypes and PoCs – Proofs of Concept. And so on and so forth. Oh, come on! We all know “the path”, don’t we? The road we always build for others to travel [more smoothly]. The question is: why do we deny ourselves the same “privilege”? Why don’t we lay our own floor? We surely already have all the needed tools and [of course] the know-how.
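As a taste of that first draft, a minimal relational model for such “assets” and their dependencies can be prototyped in a few lines – an illustrative sketch with sqlite3; the table, column and sample names are invented, not a prescribed schema:

```python
import sqlite3

# First-draft model: every asset is named, typed and versioned,
# and dependencies between assets are explicit rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE asset (
    id       INTEGER PRIMARY KEY,
    name     TEXT NOT NULL,
    kind     TEXT NOT NULL,   -- 'code', 'config', 'script', 'schema', ...
    version  TEXT NOT NULL,
    UNIQUE (name, version)
);
CREATE TABLE asset_dependency (
    asset_id      INTEGER NOT NULL REFERENCES asset(id),
    depends_on_id INTEGER NOT NULL REFERENCES asset(id),
    PRIMARY KEY (asset_id, depends_on_id)
);
""")
conn.execute("INSERT INTO asset VALUES (1, 'machine.config', 'config', '1.0')")
conn.execute("INSERT INTO asset VALUES (2, 'BillingService', 'code', '2.3')")
conn.execute("INSERT INTO asset_dependency VALUES (2, 1)")

# Which registered assets would a machine.config change touch?
rows = conn.execute("""
    SELECT a.name FROM asset a
    JOIN asset_dependency d ON d.asset_id = a.id
    WHERE d.depends_on_id = 1
""").fetchall()
print(rows)  # [('BillingService',)]
```

From here on, impact questions stop being guesswork and become queries – which is the whole point of modeling our own “problem space”.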

It is simple, but not easy, however. Or better: it is simple and easy – after all, we have done it for years, and we now master all of it [don’t we?] – but it surely implies a huge amount of work, time, effort and [of course] money [or better, “investment” – an investment in our own “capacity”, in the improvement of our “competence” and “effectiveness”]. And if we step a little further back and up [to gain a better perspective], we will see that with such an initiative we would end up modeling our entire company, at the very least. It would culminate in a sort of corporate “big data”. [And so what? Isn’t that good, or desirable?]

And this would surely require all the [advanced] knowledge we have gained so far: DM/DW and ETL; multi-threading and parallel processing; multi-layered applications, SoC and AOP; IoC/DI and DDD etc. We would probably have to resort to every tool in our kit [which is nothing less than super exciting, isn’t it?]. It would also require a progressive culture change and [another] paradigm shift. And so what? We are used to that too. Aren’t we?

And finally, this surely implies lots of work and effort, along with a grounded and motivated posture and mindset; but we don’t have to embrace the whole world – not all at once, at least. We can, again, use what we have already learned from our rich experience: from a PoC to a prototype, to a pilot, to some strategically selected [bunch of] processes; then to an entire business unit, then perhaps to a region, a country, a division [set of countries], our company, a holding. And in time, also horizontally: to our business partners and suppliers, to other domains and offices of our branch, to other branches in the same country, and so on.

We all already know the path and master the job, don’t we? After all, nowadays, three years after graduation every developer is considered (or at least [self-]“labeled”) a senior professional, an expert. Aren’t they?

IX – Tying it all together (Any code, system or solution [and nobody] is an island)

To recap, so far we have introduced:

  1. VMF – Versioning Mechanism Framework [should “Mechanism” be replaced by “Methodology”, or “Strategy”?];
  2. The need for a corporate-wide, robust tracking and monitoring solution, embraced by the
  3. SALMS – Software Asset Lifetime Management System;
  4. A landscaped view of our environments and contexts;
  5. The need for an “environment and infrastructure compliance and alignment assurance” management process, in the context of process and/or software development projects [and outside of them as well, for monitoring purposes etc.];
  6. An extended QA layer structure, inspired by SOX requirements;
  7. The need for code promotion strategies and [automated] development process support;
  8. An official standard approach to dealing with legacy, black-boxed [and/or “messy”] code;
  9. The hierarchical net concept of upstream and downstream dependency chains;
  10. The need for “critical software-related artifacts” management [definition, mapping, cataloguing, discovery etc.];
  11. A [very] well-known path for effectively and proactively evolving and managing our current software and process development project environments.

Now, since these items are clearly related to each other – at some level and/or degree – one question still remains: how do we position and line up all of them into a cohesive and well-defined structure that allows us to [progressively] evolve our aforementioned PoCs and prototypes into a mature, robust and integrated solution, in order to really upgrade our environment and development processes [this time from the inside out]?

Of course I have a proposal of my own. It is based on:

  • A series of principles, disciplines [or pillars] and [r-evolutionary] concepts, which lay the foundation for
  • A set of integrated frameworks, a well-defined methodology – aligning [vertical, horizontal and orthogonal] cross-cutting precepts and requirements – and an infrastructure platform of physical and logical resources; which, in turn, support and enable
  • A solution that comprises a wide range of well-coordinated series of business resources [products, services and processes], and hosts an implementation of the SALMS (items #2 and #3 on the previous listing), among many other features and tools.

In order to enable it, a few other concepts need to be addressed. Allow me first a tiny testimony: as an IT professional with 20+ years of experience, I have always witnessed an understandable but fixed, rigid posture from many of my colleagues regarding three main topics – avoiding and neglecting them like a bat out of hell [and, of course, they are all interrelated].

  • Versioning – More often than not, there is a clear rejection of this subject. And this obviously leads us to “endless refactoring”: we keep patching the code over and over again, continuously harvesting the same old well-known negative side effects.
  • Critical artifacts – As mentioned at the beginning of the previous section, there are many types of such “assets”; but we usually deal with them in one of two ways: either we keep them under rigorous surveillance (e.g. database scripts), or we simply overlook and neglect their importance (e.g. workstation configuration files and settings). Both postures lead us to inconvenient and counter-productive consequences, such as the aforementioned “but it works on my machine” conundrum, the “latency” in gaining access permissions, in having software installed, and even in getting an initial, fully operational working environment, etc.
  • Governance – Despite having collaborated with various well-known multinational corporations [see my profile], I have unfortunately only seen this “concept” in books and on the internet. Even with the current popularity of Agile methodologies, and despite their avoidance of “over-specification and documentation” [what one could also call over-formalization], it is important to note that Agile teams have a very well-established structure [activities workflow] for their “software [value] development and delivery” process(es) [or, say, system].

    And, as a matter of fact, Dean Leffingwell [in “Agile Software Requirements – Lean requirements practices for teams, programs and the enterprise”, 2011] states that “governance and oversight is still important, even in the agile enterprise.” (442); and “for developers, architects and businesspeople who have experience building large-scale systems and portfolios consisting of systems, products, and services – with the extensibility and scalability a successful solution demands – […] architecture planning and governance is necessary to reliably produce and maintain such systems. […] The larger, enterprise system needs to evolve. It can’t simply emerge. Something has to tie it all together.”; while advocating for “[intentional] stronger architectural governance”. (386)

These concepts simply cannot be avoided, neglected or overlooked. On the contrary: if we intend to smooth our own path, or to reach a “holistic” whole, we ought to embrace them and give them the status and consideration they surely deserve.

Additionally, the following previously outlined QA layer suggestion needs to be refined:

  • Delivery, Corporate Layer – Official, ready-to-deliver packages. Read-only code copies. Deploy and management teams.

An actually mature solution should split it into:

  • Portfolio, Corporate Layer – Official, locked, long-living packages. Locked and secured code sources. Governance and management teams. Here, and only here, live the real business assets; properly, systematically and formally packaged, labeled [“versioned”] and [well] managed. Every single business resource, software asset and artifact (critical or not) derives from this “central” repository. And any of its [official, “derived”] copies can be easily and promptly located and managed; and securely overwritten, as needed.
  • Delivery, Corporate Layer – Official, ready-to-deliver packages. A temporary, volatile subset of the Portfolio Layer packages. Read-only code copies, if required. Deploy and management teams.

And finally, here are some other important issues to consider in such an innovative endeavor [among a substantial number of no less important topics]:

  • The solution shall be highly flexible and pluggable, providing multiple [well-known] extension points (EPs). It must provide a default implementation for every [major] development concept [letting the user follow its “guidance” and “expertise”], or let the user provide his or her own “model” [and implementation]. For instance: we can choose to provide NUnit as the default UT option, but we must allow the user to (1) use another “built-in” tool (MbUnit, MSTest etc.) or (2) provide, configure and use their own “implementation”.

    [This, by the way, is the very core definition of how a framework works, right? This sort of solution shall provide a whole series (yet to be defined) of supporting frameworks – each one with its responsibilities, and each one with its own EPs.]

    And, as stated, this “directive” should be applied to each and every major development concept; which includes both development technologies (as just exemplified) and management methodologies: “and for user-interface automated tests, what tool should I use?” Whatever you want (or are used [or bound] to). We offer “these” built-in options for your convenience, and enforce the usage of “this one” by default. We are also ready to accept any other of your preference. All you have to do is provide us some information (to plug your choice into the proper extension points) and follow some simple rules [basically, adhere to some metadata schema, or “contract” – what data will/needs to flow through which communication “channel”, and how it has to present itself, in what “format”].

    Other extension points should include IoC/DI, development-project management processes (COBIT, ITIL, CMMI, TOGAF etc.), Continuous Delivery and Integration mechanisms, technologies and tools, etc. [yet to be defined]

  • The solution shall support a hierarchical/granular configuration for the provided EPs. For instance:
    Project X uses A, while any other project uses B.
    Country X can use A, B or C; country Y can use A or D; any other country
    can only use A.
    No future project [in country X] will be allowed to use B.
  • The solution shall support a transparent and harmless switch between the provided EPs. For instance:
    On the next version of project X, we will be using MSTest [instead of the
    default option we’ve used so far].
  • The solution shall provide some sort of built-in package [and perhaps package-item] “maturity certification” model (MCM) mechanism.
  • The solution shall assure and enforce its MCM over all the upstream and downstream dependency channels.
  • The solution shall assure and enforce the alignment between all levels of architecture models (enterprise, project, system etc).
  • The solution shall assure and enforce the use of required/allowed patterns, standards, guidelines, templates etc. [yet to be defined]
  • And so on and so forth…
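The hierarchical EP configuration sketched in the bullets above boils down to a “most specific scope wins” lookup. An illustrative Python sketch – the scope names and tool names are examples only, echoing the ones used in the text:

```python
# Hierarchical extension-point (EP) configuration: project overrides
# country, country overrides the corporate default.
DEFAULTS = {"unit_test": "NUnit"}
COUNTRY = {"X": {"unit_test": "MbUnit"}}
PROJECT = {"ProjectX": {"unit_test": "MSTest"}}

def resolve_ep(ep: str, country: str = None, project: str = None) -> str:
    """Resolve an extension point: the most specific configured scope wins."""
    if project and ep in PROJECT.get(project, {}):
        return PROJECT[project][ep]
    if country and ep in COUNTRY.get(country, {}):
        return COUNTRY[country][ep]
    return DEFAULTS[ep]

print(resolve_ep("unit_test"))                                   # NUnit (default)
print(resolve_ep("unit_test", country="X"))                      # MbUnit
print(resolve_ep("unit_test", country="X", project="ProjectX"))  # MSTest
```

A “transparent and harmless switch” then becomes a one-line configuration change at the appropriate scope, with no code touched.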

X – Conclusion.

This way, and as I see it, only this way, will we finally be able to really assure, ensure and trust the effectiveness of our job –

  • the correctness, relevance and compliance of our code;
  • the alignment and adherence of the business resources we are raising with our customers’ actual, current needs, values and expectations; and
  • the certainty and confidence that we are really adding value, profits and credibility to our own business;

day after day after day.

With the support of such a wide, deep and all-encompassing; “holistic”, “organic” and integrated; qualified and differentiated; asset-centered and people-and-value-oriented solution – what else could we want, besides the opportunity to work at a place like this?

For sure, I’d love to.

Phew! That’s all, folks!

Thank you for your time!

Good luck, and “don’t worry, be happy”!

See you around! Bye!

