Technology Management Revolution


The primary, official contents regarding the whole “EAGLE – Enterprise Asset Governance Lifetime Evolution” Project are published here [Portuguese].

New approaches for IT governance.


Adapted from “SaaC, CaaC Software Governance Strategies”.

Typically, a company that bases some or most of its business processes and assets on software products or services counts on a set of proprietary software artifacts and solutions, which in turn are often supported by a mix of open-source offerings and commercial third-party products. Ideally, such valuable, sensitive and critical software assets should be subjected to periodical, systematic and formalized assessment, in order to monitor and ensure their overall quality. Besides, such software should also be designed – from the beginning – to remain stable and “healthy”, thanks to a coherent and effective governance strategy, driven by the application and enforcement of the most relevant patterns, standards, rules and guidelines.

In theory, the natural outcome of such a diligent management approach should be a well-defined and well-structured set of related, independent and collaborative software artifacts, which, along with their supporting assets, could be positioned as the company’s software portfolio (CSP). This ideal governance style would naturally lead to the best of both worlds: ease of maintaining, reusing and enhancing existing software; ease of introducing and integrating new software assets into the company’s landscape; along with optimized time to market (TTM), return on investment (ROI), total cost of ownership (TCO), and so on. Unfortunately, nowadays that’s seldom or never the case.

So, let’s consider some business strategies that could actually lead us this way, along the “right path”.

ITaaC – IT as a Company

An ideal software application or solution should be designed, developed and maintained according to the currently well-known and widely accepted best practices, policies, patterns, standards and guidelines. Software principles should be periodically reviewed and, whenever possible, automatically re-enforced. Software quality should always be closely monitored, also in a formal, systematic way. Customers (or clients) and business analysts should actively take part in each software process, from development to assessment, to maintenance, to retirement/evolution. Besides, the governance of each software development project should – from day one – assess the software as it would the company itself, along with its current business strategies.

Consider the company’s annual board meeting, and apply the same kind of questions, or CONCERNS, such as:

  • What are our goals, regarding this project, for the next year (given its current status, context and so on)? And for the next 2, 3, 5 years?
  • What exactly are the actual need, relevance and value of this project? And how will we assess and measure them?
  • Are the project’s goals feasible? If so, what strategies, policies, constraints, patterns and standards should be applied? And how will they be enforced, assured and monitored?
  • How should this software be positioned within the company’s overall software assets (portfolio)? In what context does it fit? What other assets and/or strategies will it relate to and/or depend on?
  • Are there some similar or contradictory initiatives currently in place?
  • Have we ever tried something like this before? If so, what were the results then? Are there other “lessons learned” that should also be considered?
  • Are there some other alternatives? If so, what are their pros and cons?
  • When and how will this initiative and its outcomes be measured and evaluated? What will be its success and failure metrics and rules?
  • What contingency and mitigation plans, strategies and procedures will be required? Are they already in place, fully tested and operational?
  • How does this project depend on, impact and/or relate to other assets currently in place? What’s needed to assure it won’t mess anything up? How and when will this be enforced?

Moreover, unlike their corporate counterparts, software projects’ governance strategies, processes, goals, rules and policies should be public, widely spread and well known. Each and every stakeholder should know, in detail, how the company positions, values and supports the software s/he’s dealing with.

From the design perspective, I actually envision software growing from POCO, N-Layered, command-query, and service- and functional-oriented patterns, even when these patterns are not apparently required at first. Then, with such an architectural foundation in place, we would extend, enhance and enrich its features, in a modularized, prototyped, incremental, iterative, test-driven and integrated way.
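Just to make that foundation a bit more tangible, here is a minimal sketch – in TypeScript, with purely illustrative names and an in-memory store, not anything prescribed by the original project – of a POCO-style entity and a thin command/query split sitting behind a single service contract:

```typescript
// A POCO-style entity: plain data, no framework or persistence concerns.
interface Invoice {
  id: string;
  customerId: string;
  totalCents: number;
  issuedAt: Date;
}

// Command side: operations that change state and return no query data.
interface InvoiceCommands {
  issue(invoice: Invoice): Promise<void>;
  cancel(invoiceId: string): Promise<void>;
}

// Query side: read-only operations, free of side effects.
interface InvoiceQueries {
  byId(invoiceId: string): Promise<Invoice | undefined>;
  byCustomer(customerId: string): Promise<Invoice[]>;
}

// A service layer composes both sides behind one well-defined endpoint,
// so higher layers depend on the contracts, not on the implementation.
class InvoiceService implements InvoiceCommands, InvoiceQueries {
  private store = new Map<string, Invoice>();

  async issue(invoice: Invoice): Promise<void> {
    this.store.set(invoice.id, invoice);
  }
  async cancel(invoiceId: string): Promise<void> {
    this.store.delete(invoiceId);
  }
  async byId(invoiceId: string): Promise<Invoice | undefined> {
    return this.store.get(invoiceId);
  }
  async byCustomer(customerId: string): Promise<Invoice[]> {
    return [...this.store.values()].filter(i => i.customerId === customerId);
  }
}
```

The point is not the toy code itself, but the shape: the two small interfaces are the seams along which the solution can later be extended, tested and integrated.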

Some fundamental guidelines

The following guidelines should [almost always] be considered non-negotiable:

  • Encapsulation, reuse, independence [referential transparency] and loose coupling should always drive implementation, testing and integration efforts.
  • Principles and tenets should always be enforced. Statements like “separation of concerns”, “open for extension, closed for modification” and “single responsibility principle” are far, far away from mere academic concepts or marketing phrases (see the sketch after this list).
  • Best practices should always be explicitly stated, widely published, and strictly enforced. They are called best “practices” – and not best “intentions”, or best “ideas” – for a very good reason: in order to work properly, they must be put into action, into “practice”. Otherwise, they just don’t work at all. Simple as that! And, like the previous item, they should never be used merely as propaganda.
  • “First, we do it the best way; then (if, when and where needed) we refactor it”. This could be our “culture”.
  • “Fail fast, for fast feedback!” [fast quality/approval] This is a valuable “mantra”.
  • Zero-tolerance for compiler’s warnings should also always be enforced.
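To ground the first two guidelines (and the zero-warnings one as well), here is a hedged sketch – names and thresholds are illustrative assumptions, not established policy – in which each quality rule carries a single responsibility and the checker stays open for extension, closed for modification:

```typescript
// A tiny asset summary used only for this example.
interface AssetSummary {
  name: string;
  testCoverage: number;   // 0..1
  openWarnings: number;
}

// Each rule has a single responsibility: decide whether one asset passes one check.
interface QualityRule {
  name: string;
  passes(asset: AssetSummary): boolean;
}

// Existing rules stay closed for modification...
const coverageRule: QualityRule = {
  name: "minimum test coverage",
  passes: a => a.testCoverage >= 0.8,
};

// ...and the checker stays open for extension: adding a rule never touches it.
function check(asset: AssetSummary, rules: QualityRule[]): string[] {
  return rules.filter(r => !r.passes(asset)).map(r => `${asset.name}: failed ${r.name}`);
}

// Extending the policy is just one more rule, e.g. the zero-warnings guideline.
const zeroWarningsRule: QualityRule = {
  name: "zero compiler warnings",
  passes: a => a.openWarnings === 0,
};

console.log(check(
  { name: "billing-service", testCoverage: 0.75, openWarnings: 2 },
  [coverageRule, zeroWarningsRule],
));
```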

Additionally, when dealing with legacy, pre-existing software, or with other software dependencies, such as black-boxed third-party components, don’t touch them. Instead (a minimal sketch follows the list below):

  • Wrap all the needed features into a set of well-defined and well-designed interfaces (service endpoints or component facades).
  • Let them collaborate with other software assets and functionalities only through these endpoints/facades.
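A minimal sketch of that wrapping, assuming a hypothetical black-boxed payroll component (faked inline here so the example stays self-contained); the rest of the portfolio only ever sees the facade:

```typescript
// Stand-in for a black-boxed third-party component we must not modify.
// (Faked inline so the sketch is runnable; imagine an opaque binary instead.)
class LegacyPayrollEngine {
  RUN_CALC(empId: number, flags: string): number {
    return flags.includes("GROSS") ? 5000 : 4200; // opaque legacy behaviour
  }
}

// The facade is the only contract the rest of the portfolio is allowed to see.
interface PayrollFacade {
  grossSalary(employeeId: number): number;
  netSalary(employeeId: number): number;
}

// One well-designed adapter translates between our model and the legacy API,
// so the legacy component collaborates with other assets only through it.
class PayrollAdapter implements PayrollFacade {
  constructor(private engine: LegacyPayrollEngine = new LegacyPayrollEngine()) {}

  grossSalary(employeeId: number): number {
    return this.engine.RUN_CALC(employeeId, "GROSS");
  }
  netSalary(employeeId: number): number {
    return this.engine.RUN_CALC(employeeId, "NET");
  }
}

const payroll: PayrollFacade = new PayrollAdapter();
console.log(payroll.grossSalary(42));
```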

And, finally, the following should also be always considered:

  • Test everything, always; as soon and as frequently as possible. Use unit tests, mocks and stubs (a small sketch follows this list).
  • Diagnose, monitor and manage test code coverage, and enforce static code analysis.
  • Carefully and judiciously design, plan and apply components, system and integration tests; as well as acceptance tests (AT), both internal (IT AT) and public (UAT). And never neglect regression tests.
  • Catalog each and every software asset. Create and maintain a centralized software assets’ catalog, repository, or registry.
  • Create and maintain a centralized versioning system, with standardized strategies, rules, policies and processes. This is crucial in order to provide control over sensitive and critical software development processes, such as software portfolio management, regression tests management, rollbacks, parallel execution (blue-green deployment) etc.
  • Consider continuous integration, from the very beginning of each software development project. Minimize the need and the occurrences of code branches and merges. Keep software always functional (green) and ready-to-be-promoted to the next “environment” on the specification-coding-implementation-testing-qa-staging-production-retirement pipeline.
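As a small illustration of the first item above, here is a hedged sketch of a unit test that isolates the unit under test behind a hand-rolled stub (plain Node assertions, no particular test framework assumed; all names are made up):

```typescript
import { strictEqual } from "node:assert";

// The collaborator we want to isolate the unit under test from.
interface ExchangeRates {
  rate(from: string, to: string): number;
}

// Unit under test: it depends on the interface, never on a concrete service.
function convert(amount: number, from: string, to: string, rates: ExchangeRates): number {
  return amount * rates.rate(from, to);
}

// A stub replaces the real dependency with fixed, predictable answers.
const stubRates: ExchangeRates = { rate: () => 5.0 };

// The test runs fast and deterministically, with no external dependencies.
strictEqual(convert(10, "USD", "BRL", stubRates), 50);
console.log("convert() unit test passed");
```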

The good and the not-so-good news

The good news: we already have all the needed knowledge, technologies and tools. To name a few: OOP, SOA, DDD, AOP, DI; IIS, WAS, ADFS; MVC, MVP, MVVM; PMO, CMMI, MSF, APM; UML, TFS, CI, MEF; DM/DW, SSIS and so on.

The hard news: we’ll have to build – arrange, compose, structure – them, from scratch, for the sake of the company’s ROI, TTM etc. We’ll have to tailor them to our specific business needs and processes. We’ll have to track, monitor and manage them. And yes, this will require a systematic and deliberate effort, along with a corresponding, coherent and continued investment in defining, structuring, formalizing, standardizing, publishing and publicizing our software assets, as well as enforcing related rules, best practices, processes and procedures. Not to mention the required concurrent effort in reshaping some of the corporate culture’s dogmas and stakeholders’ mindsets.

The bad news: it won’t be an easy, overnight endeavor. It will take time, money, focus, commitment and discipline. It will also take both a broader vision and a serious, deliberate and continued effort. For, particularly here – at the very foundation of our company’s software landscape – there are no such things as magic, silver bullets or ready-to-use solutions.


Of course we can always fake, trick, deny and neglect this initiative, but the results and consequences of such a naïve attitude would reveal themselves soon enough.

I dare say that’s exactly the most common scenario nowadays at the majority of our software or IT related companies. For the sake – or with the excuse – of current budgets, schedules and the like, we are still insisting on the old behavior of keeping ourselves far away from the “hard, right path”, and thus harvesting its “unintended” and unexpected – but logical and straightforward – outcomes.

This, as you may have already noticed, is not desirable at all. Actually, it leads us to the very same sort of brittle, hard-to-maintain, weak, incidental and dangerous “solutions” we are trying to avoid and overcome; far from our original, well-intentioned goals: the so wanted and expected quality, satisfaction, standards, productivity and profitability.

As a consequence, we are continuously missing our goals, while wasting our time and our sponsors’ money. And, at the end of the day, we are failing to fulfill our mission, our calling – to provide good, manageable, scalable, robust IT solutions.

The root of our problems, as I see it, lies in our lack of perception that, ultimately, a software asset should be considered a valuable, critical company asset. And it should be treated as such. Each software-oriented solution development project (SDP) should be ruled by the company’s software governance standards (SGS), and should also be periodically reviewed and reassessed with regard to its near-, mid- and long-term goals, strategies, constraints etc.

Our initial challenge

The challenge, then, involves the following activities:

Figure 1. IT Resources Portfolio Lifecycle

  1. Gathering: grouping and structuring all the available and/or desirable software assets – knowledge, legacies, technologies and tools – into a coherent, meaningful, centralized, formalized and standardized sort of repository, or portfolio. All these activities should be assigned to a dedicated, full-time software governance team (SGT).
  2. Profiling: defining and describing each one of the gathered assets, by stating its business value, features, dependencies, specifications, requirements, statuses, and so on, in a standardized sort of software profile sheet (SPS) – a Word document, not an Excel sheet, at first (refer to the next section, below; a sketch of a possible machine-readable SPS also follows this list).
  3. Assessing: classifying and evaluating each software asset, both in relation to each other and as the company would evaluate its most valuable business assets. This activity should include the questions listed in the CONCERNS section above. The same SPS used in Activity 2 should be used here, in order to ensure stable data centralization.
  4. Positioning and Envisioning: re-evaluating the assessed software, now focused on a twofold concern:
    • How can it currently be positioned, given its current status, business value, dependencies etc.?
      and/or
    • Could it be better positioned – regarding its overall role and status – within the company’s next desired software landscape scenario? If so, how can we achieve this? What would be the new specifications and requirements for it?

      This activity should include all the above lists, both in the CONCERNS and in the GUIDELINES sections. Here, the SPS should be extended, or augmented, in order to isolate fixed, stable information from unapproved data and mutable, unachieved statuses.

  5. Planning: analyzing the data gathered so far (SPSs); grouping/packing the assets selected to be evolved/promoted; and using them to generate a feasible software portfolio evolution plan (PEP), along with its rationale, strategies and alternatives. The generated PEP should then be presented to some higher-level decision/sponsoring instance within the company, and submitted for approval. The outcome of this activity should be an approved PEP.
  6. Repositioning: implementing the approved PEP; reviewing and updating all the related artifacts and documentation. This activity includes some critical tasks, such as “linking” preexisting software to the newly created software assets (interfaces, services etc.) and ensuring both that this occurs smoothly and that it doesn’t impact any preexisting assets’ behaviors.
  7. Evolving the spiral: restarting from Activity 1, now with the knowledge gathered from the new achievements. An important task here is a “sprint review” meeting, to assess the activities performed during this “loop” or iteration, register the “lessons learned”, plan/format the next iteration etc.
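As a hedged sketch of what a machine-readable SPS entry (Activity 2) and the centralized portfolio of Activity 1 might look like – every field name below is an illustrative assumption, not part of the proposal itself:

```typescript
// One software profile sheet (SPS) entry: the stable facts about a single asset.
interface SoftwareProfileSheet {
  assetId: string;
  name: string;
  businessValue: "low" | "medium" | "high" | "critical";
  status: "proposed" | "active" | "legacy" | "retired";
  dependencies: string[];     // assetIds this asset depends on
  owners: string[];           // responsible team(s) or stakeholders
  lastAssessedOn?: Date;      // filled in by Activity 3 (Assessing)
  targetPosition?: string;    // filled in by Activity 4 (Positioning and Envisioning)
}

// The centralized portfolio/repository kept by the software governance team (SGT).
class SoftwarePortfolio {
  private sheets = new Map<string, SoftwareProfileSheet>();

  register(sheet: SoftwareProfileSheet): void {
    this.sheets.set(sheet.assetId, sheet);
  }

  // Planning (Activity 5) can start from a simple query over the assessed assets.
  candidatesForEvolution(): SoftwareProfileSheet[] {
    return [...this.sheets.values()]
      .filter(s => s.status === "legacy" && s.businessValue !== "low");
  }
}
```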

Each one of these activities should review, complement and validate the previous ones, leading us to an iterative, progressive, evolutionary approach. Furthermore, after each “re-evolution” spin, we should have an updated and very well-defined, centralized summary of the company’s software knowledge, with software assets’ positioning, statuses and other related information at the disposal of every stakeholder, for whatever decision-making is needed.

Does it make sense?

No, it’s neither easy, nor cheap, nor simple; but it’s surely the most logical path, if we are actually success-driven in the mid to long term. The activities and processes just described act as tactical procedures, targeting a strategic – and wiser, more solid, meaningful and valuable – goal.

Nevertheless, all that has been described so far represents only half of the real (or, say, “royal”) path – a very valuable and useful achievement per se, indeed, but just one half of the “golden coin”. So, let’s face the other side; let’s explore the other half of the path.

CaaC – Company as a Client

The principle/rationale here is very simple and well-known:

“If you want to ‘sell’ something, then ‘consume’ it!”

We, as IT “experts”, are used to stating a bunch of optimal, “perfect” solutions to our [potential or not] clients: “in order to match your requirements, we’ll model this, automate that; join, assemble and distribute these, publish and share those. The solution’s architecture will guarantee everything that’s needed; and we’ll be using all the required technologies, tools and resources, as a means to deliver the expected quality, efficiency and satisfaction.” Then, when the excited and amazed client buys our proposal, we rapidly run back to our customary “hammers and nails”, using the very same old traditional, un-modeled, un-automated development processes and tools.

Here and there, we try to improve our development methods and processes – as the budget and schedule allow, or as some sort of crisis forces us to. We then go after the “best practices” and the latest development “style”, such as PMI, Scrum, CI, System Center, and so on, and so forth. Although this is both expected and desirable, we still keep missing the “target”, while waiting for the next wave of improvements, for the next generation of tools, resources, patterns etc.; another set of nails, or a brand-new hammer.

So, what’s the target we are still missing, every single time, over and over again?

“If you want to ‘sell’ something, ‘consume’ it!”

“Seller, consume your own goods!”

“Doctor, take your own medicine!”

“Healer, heal yourself!”

Or:

“Developer, automate the activities of your own organization!”

“Analyst, model the processes of your own corporation!”

“Manager, define and standardize your own business!”

Does it make sense? Isn’t this the most logical path, if we are actually success-driven? Once again: it’s neither a simple nor a cheap solution, at first; but it’s the most basic and fundamental step, since we ourselves are our own most important and valuable “client”, right? So, let’s consider it seriously, professionally.

So, back to the client’s metaphor:
— Do we have a customer, project, … database?
— Well, we should.
— Is our software development domain modeled?
— It should be.
— Are our software development processes (SDP) automated?
— They should be.
— Are our SDPs’ requirements well-known?
— We should specify them.
— Have all SDPs’ scenarios been specified?
— So, let’s model them.

And so on, and so forth…

In the last section, we outlined a set of challenges and activities that, in fact, describe a business process, right? As a matter of fact, they exemplify the very same kind of process we are so used to modeling, automating, packaging and delivering to our clients. Then, why not “try our own medicine”? Doesn’t it seem “logical”? So, let’s go for it!

In this case, we would have something like:

  • What are the central concepts?
  • What are the main “nouns and verbs”?
  • What are the candidates for types/classes, properties, methods/operations, services, processes, interfaces, proxies, and so on?
  • What are the outlined – and the implicit – rules, constraints, dependencies, and so forth?
  • How do the artifacts gathered so far collaborate with each other?
  • Is a graphical user interface (GUI) required? If so, what kind of GUI?
  • What kind of environment should be hosting this new “solution”?
  • How will data be persisted and retrieved?
  • And so on, and so forth…

During the development of our not-so-hypothetical new “solution”, some “unknowns” will certainly reveal themselves. We’ll probably find some missing supporting models, not explicitly described anywhere so far. As a natural consequence, we’ll soon find ourselves modeling the entire company: headquarters, branch offices, departments; clients, suppliers, users etc. So what? Nobody said it would be an easy or a fast endeavor. And it’s completely aligned with the proposed CaaC approach, isn’t it? Besides, wouldn’t we suggest it to our clients, in order to “optimize” their environments and processes?
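For instance, a first, deliberately naive cut of that company model might be nothing more than a handful of plain types (all hypothetical), to be grown iteratively as the “unknowns” show up:

```typescript
// A deliberately small first cut of the company's own domain model.
interface Office { id: string; city: string; isHeadquarters: boolean; }
interface Department { id: string; name: string; officeId: string; }
interface Client { id: string; name: string; accountManagerId: string; }

interface Project {
  id: string;
  name: string;
  clientId: string;
  owningDepartmentId: string;
  status: "proposed" | "active" | "maintenance" | "retired";
}

// The "unknowns" of the text show up as new types and relations in later
// iterations, e.g. suppliers, users, or links between projects and software assets.
interface Supplier { id: string; name: string; suppliesToDepartmentIds: string[]; }
```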

Note: this can – and actually should – be done incrementally, by defining and adding “revision and control marks” (milestones), as well as other marks (placeholders) and “extension points”, in order to assure a rapid and continuous demonstration of the progressive, evolutionary solution state (status).

The CaaC approach actually complements and supports ITaaC; the latter imposes some requirements on the former and, indeed, should be built on top of it. CaaC then acts as a common utility layer, supporting this and probably many other initiatives of the company’s software governance team.

Software Governance Strategies

When thinking about the CaaC, a very reasonable question may arise: what about the other dozens of catalogs held by the company? We have already invested a lot in CRMs, SAPs, Active Directories, BizTalk catalogs, custom proprietary databases etc. It would make no sense to add yet another one to the list, right? Yes, absolutely right! But that’s exactly the point.

Each of these useful solutions already in place imposes its own model on the data. So, we end up with a bunch of “related” models – representing the same types of business entities – spread widely across the entire company. Since these solutions and models don’t “talk” to each other, the information they hold – and some of it is pretty sensitive – soon becomes out of sync. As data goes out of sync, teams go out of sync, services go out of sync, departments and offices go out of sync, the company goes out of sync. Can you foresee the results?

Although we’re not advocating a centralized database, we consider the consolidation of all the existing models – into a unified and centralized model – a fundamental requirement; and that’s exactly the primary goal of the CaaC. In fact, all the isolated (legacy) models should serve as input to CaaC’s data “normalization” task. CaaC should also keep track of which software assets can generate each piece of data, and which of them – currently or potentially – consume or have some other interest in that data.

Then, the ITaaC – or, more precisely, some dedicated ITaaC-driven software asset – should step in and take the responsibility of integrating the related models and assets, and of keeping their data in sync as well, in a controlled, centralized, unified, monitored and managed fashion. Moreover, to support ITaaC’s iterative and incremental tenets, its participant software assets should be required to be “registered” in the CaaC’s repository, which in turn would allow ITaaC to support integration and synchronization processes.
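A hedged sketch of that registration and bookkeeping – names are assumptions, not a reference design – in which the ITaaC-driven integrator can ask the CaaC registry who must be kept in sync when a given kind of data changes:

```typescript
// What the CaaC repository records for each registered asset:
// which kinds of business data it produces and which it consumes.
interface AssetRegistration {
  assetId: string;
  produces: string[];   // e.g. ["Customer", "Invoice"]
  consumes: string[];   // e.g. ["Customer", "ExchangeRate"]
}

class CaaCRegistry {
  private registrations = new Map<string, AssetRegistration>();

  register(reg: AssetRegistration): void {
    this.registrations.set(reg.assetId, reg);
  }

  // The ITaaC-driven integrator uses this to keep consumers in sync
  // whenever a producer changes a given kind of data.
  consumersOf(dataKind: string): string[] {
    return [...this.registrations.values()]
      .filter(r => r.consumes.includes(dataKind))
      .map(r => r.assetId);
  }
}

// Example: when the CRM updates "Customer" data, who needs to be synchronized?
const registry = new CaaCRegistry();
registry.register({ assetId: "crm", produces: ["Customer"], consumes: [] });
registry.register({ assetId: "billing", produces: ["Invoice"], consumes: ["Customer"] });
console.log(registry.consumersOf("Customer")); // ["billing"]
```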

Does it make sense, now?

Finally…

When considering the CaaC, the ITaaC and the company’s software governance strategies as a whole, we can think of this initiative as an evolution of the most valuable current best practices, such as separation of concerns, loose coupling and the like; a kind of “refactoring” of the entire system, environment, macro-context – a step further, the most logical next step, towards the already paved way to success and profitability.

Using the civil engineering analogy, I’d say that the foundations are already in place and all the tools and machinery are available; so, now it’s up to us to roll up our sleeves and start to build what’s needed, even if it requires us to blow up some old bridges and walls, and hire some newly skilled teams. Of course there will be some dust and a period of apparent chaos while the hard work is being done. But, in the end, a brand-new landscape would be there, allowing us to benefit from the results.

It’s a huge, innovative and bold endeavor, no doubt; but with an equivalent promise of earnings for every stakeholder: from users/clients to business analysts, to developers, testers and system administrators, to managers, CIOs and CEOs.

A very important, crucial last tip

If there’s one thing I’ve learned and witnessed during the last 20+ years devoted to software development and client consulting, it is that there’s no value in delivering lies, no matter how beautifully and/or tastefully we present them.

If, outside the computer, our system – as a set of business domains, models and processes – is weak, ill or a mess, it will continue to be a mess after automation – a “faster mess”. And its weakness will soon be revealed and exposed, despite our best “efforts and intentions”.

So, the first thing to do is to clean up the mess. The very first requirement should be to define, structure and organize our “stuff”, in order to gain control over it, by enhancing and enforcing its visibility and correctness. Does it make sense?

Since there’s no magic, no golden or silver bullet, and no easy or ready-to-use solution, it’s up to us to do our own homework, our own job, and trail “the right, hard path”.

We’ll always harvest whatever we plant. The good news is that you can choose what to plant, how to lay down the seeds, and what’s to be done, next. So, choose widely, embrace the path, and enjoy the job; but choose wisely.

 
