Using Metrics and KPIs to Refactor and Improve Your Enterprise Software

Refactoring exercises are sometimes a “shot in the dark”: the depth of the problem isn’t always measured or well understood, nor can the success of the initiative be measured easily. This article describes an approach that uses data collected over time to identify and prioritize software refactoring exercises that give us the biggest bang for the buck, and to provide projections and justification for the initiative to business stakeholders. KPIs are then established to measure the success or failure of the initiative.


In both greenfield and brownfield development, one of the more negotiable items is taking on large refactoring exercises to improve the health of the software. In greenfield projects these initiatives may come up at any time, and the risk/benefit is typically measured against project timelines, launch dates, milestones, and feature scope. Brownfield projects, on the other hand, have typically been running in production for a while, and at this stage refactoring exercises may focus on areas which have had noticeable issues in production, or on exercises deferred until after launch for a variety of reasons. In either case, it’s sometimes hard to justify the exercise even though we think we know it’s the right thing to do.

The best agile experts will tell you that even in the best of Agile environments there is always refactoring to be done. The ultimate Agile purists may counter that in their perfect world refactoring is completely unnecessary because it’s done as we go. It just doesn’t work like that – that’s clever marketing though! I won’t go into detail about how software gets fragmented even when the entire team has the best of intentions, but it does. Fragmentation happens, and moving too far one way makes it difficult and risky to move another way without significant refactoring efforts which cannot always be justified at the time of development.

Refactoring can have many purposes: reducing complexity, improving performance, reliability, or scalability, because it’s cool, or simply because it’s needed to add new features. But how do you justify the benefit versus the risk of these initiatives? One approach is to establish data points, metrics, and KPIs to not only justify the effort, but also to validate it. Take application performance: we measure it, we see the numbers, and we realize we are way off. We need to improve performance to meet our NFR timings; maybe it’s a quick fix, or maybe it requires a larger refactoring exercise. The justification is in the numbers. We create a PoC to further demonstrate the performance gain, we implement, we measure again, and voila – we have met our goal. Elementary stuff.

Quite often, our refactoring efforts may not be that cut and dried. There just may not be enough justification for teams to consider the refactoring exercise or for the stakeholders to approve it. Also, applications may have a series of different problems, maintenance headaches, and technical debt. How do we know what to prioritize, where the plumbing needs a serious make-over, or which areas are the biggest concern? Look at the data.

Tracking the health of your software through data points on an ongoing basis can help greatly with identifying the big problems, or identifying the small problems before they become big problems. We may have an async service that we’ve been itching to re-write, but the metrics may indicate that we have far more important complexity issues in some of our MVC controllers which have led to a large number of functional defects and page crashes. Because of data, we can not only justify but also prioritize our refactoring efforts. My experience is that if you want to justify something, especially to business stakeholders, it’s easier when you have data to back it up.

Good candidate data points include information from exception logs, such as the components, classes, and methods that throw the most exceptions. Another is tapping into your ALM system, whether it be TFS or another system, to query the list of defect fixes and their related check-ins in order to determine the files which have had the most fixes applied. This can be further categorized or aggregated as necessary, but it’s important to understand which data points matter to you in order to get those metrics and measure the problematic areas. Yet another example is the number (or severity) of defects per functional area. How you get that data will depend on your environment, but it’s important to be pre-emptive about this: part of any new software initiative should include data-point capture. Ensure your logging captures data in a meaningful way, and that you are tracking adequate data in your ALM system to determine your biggest problem areas. Determine, prepare, and track your needed data points well before you need to analyze them.
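To make the idea concrete, here is a minimal sketch in Python of the kind of aggregation described above. The file names and defect records are hypothetical; in practice the records would come from your ALM system’s API or a log export.

```python
from collections import Counter

# Hypothetical defect-fix records pulled from an ALM system (e.g. TFS).
# Each record lists the files checked in as part of a defect fix.
defect_fixes = [
    {"defect_id": 101, "files": ["OrderController.cs", "OrderTranslator.cs"]},
    {"defect_id": 102, "files": ["OrderTranslator.cs"]},
    {"defect_id": 103, "files": ["CustomerController.cs", "OrderTranslator.cs"]},
]

# Count how many defect fixes touched each file.
fix_counts = Counter()
for fix in defect_fixes:
    fix_counts.update(fix["files"])

# The most frequently fixed files are the first refactoring candidates.
for filename, count in fix_counts.most_common():
    print(filename, count)
```

The same pattern works for exception logs: replace the fix records with parsed log entries and count by component, class, or method instead of by file.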

Code complexity is another type of measurement, giving a numeric ranking that indicates how complex a piece of code is. There are many tools on the market which can help determine and rank your code complexity. Complexity often maps directly to stability and functional problems, and generally increases the maintenance cost of your software. A large enterprise SOA application I have recently been consulting on has tens of millions of lines of code and up to 100 people engaged in the project at any given time. As an exercise to determine where our biggest software problems were, we pulled together two sets of data. One set was aggregated and compiled from our ALM functional defects as well as our crash logs, to see where, historically, our biggest application issues were and to determine the types of issues we were seeing repeatedly. This data allowed us to see the trends. The other set came from complexity reports using tools which measure the cyclomatic complexity of our software. We aligned the two sets, and in many cases the data showed a direct correlation between code complexity and both the number of unique issues and the number of recurring issues in our crash and ALM data. This gave us hard data to justify refactoring and software improvement based on code complexity as well as the actual numbers of issues and crashes we were seeing in the code.
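The alignment exercise can be sketched with a simple correlation check. The component names and figures below are hypothetical, but the idea is the same: pair each component’s cyclomatic complexity with its defect count and measure how strongly the two move together.

```python
import math

# Hypothetical per-component data: (cyclomatic complexity, defect count)
components = {
    "DataTranslator":  (62, 40),
    "OrderController": (25, 12),
    "AsyncService":    (18, 5),
    "AuthModule":      (9, 2),
}

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

complexity = [c for c, _ in components.values()]
defects = [d for _, d in components.values()]

r = pearson(complexity, defects)
print(f"complexity vs defects: r = {r:.2f}")  # close to 1.0 means strong correlation
```

A correlation near 1.0 across your real components is the kind of hard evidence that makes the complexity-to-defects argument stick with stakeholders.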

Metrics, data points, and KPIs can help you determine the state of your software on an ongoing basis. Data can prove vital in helping meet your software’s requirements, non-functional requirements, feature set, and release goals. Justification can be provided to business stakeholders using data, with the promise that the results of the initiative can also be measured and acted upon. Let’s say we have a custom data translation layer with tens of thousands of lines of code where we are repeatedly opening new defects and seeing multiple crashes per day, week, or month. We know this code is overly complex, and the trends we compiled show 40 crashes a month and a regression (defect) rate of 1:1 for this part of the code – meaning for every defect we close in this area, one more opens. Because we have analyzed the data, we have seen that this is one of a few areas we can target for the biggest bang for the buck, and we can now use all of this data to provide justification to our business stakeholders.

Why does this even have to be justified to business stakeholders? We all know we can’t always just write the code that we think will make the system better. Project team members typically have an idea of what the pain points of the system are, and know which code is overly complex or a PITA (pain in the ass). Some project teams will just say “go ahead – we have to re-write X, so let’s start doing it” – no justification and no measurements needed. That might work, and if we get it right, good. Of course, we’ll have no way to really measure it without data. We may think it’s the right thing to do and have the best of intentions, but we don’t always get it right. Compiling data and creating KPIs is not only about justifying our actions to business stakeholders when we need to; it’s also about justifying them to ourselves – the project team.

Let’s go back to the example I cited before. We have our justification for refactoring or rewriting our data translation components. We have a new solution which will use an ORM mapping tool to replace all of our fragmented data translators. This will take two months of re-development and introduce some risk, but that risk is completely offset by the negative data trends we are seeing in this area: a 1:1 defect ratio and 40 crashes a month. Terrible, in comparison to the other components in our application.

Now let’s take this a step further. Not only do we have the data and trends to identify the rewrites that will give us the biggest bang for the buck, we have data and trends to create and measure KPIs. After analyzing a sample of our biggest issues in our data translators, we see inherent problems that repeat month after month. Let’s say these inherent problems represent 80% of the 40 issues we are seeing each month; we can then expect to eliminate up to 80% of the issues in this area, based on the stability record of the new (and fully tested) 3rd party ORM framework we will be introducing. The other 20% are random one-off issues which may stem from our complex code, but some may still re-occur due to other external factors. Part of the justification will be establishing KPIs to measure whether we met our goal or not.

After analyzing all of the data, we can establish justification and KPIs that look something like this example:

Justification and Approach

  • We see 40 crashes a month in our data translator code.
  • Defect rate of 1:1 in this area, specifically.
  • Replace with ORM XYZ component, a 3rd party Object Relational Mapping tool which will replace all of our existing data translators.
  • Cost of 1,200 development hours and an ETA of 2 months.
  • With the 80% of inherent issues eliminated, roughly 8 defects per month remain; as the rest of the application is seeing a defect rate of 1:0.75, these 8 defects will continue to decline at that rate. Further initiatives can reduce our defect and regression rates.
  • Milestone date will be impacted by 1 month, at the benefit of a much more stable data layer and a large reduction of measurable crashes per month.

Measurable KPI

  • Whereas we originally had on average 40 defects per month in this area, we will maintain an average of 8 or fewer defects per month in this area (further reduced as our defect rate improves) one month after the new solution is implemented.

Proposing improvements backed by hard data, together with measurable KPIs, helps move these types of initiatives forward and gets everyone on board and moving in the same direction. It provides justification to everyone from development team members to business analysts, project managers, and other business stakeholders. Establishing and committing to measuring KPIs may be scary for some: every time we measure the success of an initiative, there is a chance we miscalculated or made a mistake which led to no improvement, or not as big an improvement as we had thought. However, done right, it shows confidence to the team and business stakeholders and helps drive the team to think smartly about the goal we are trying to achieve. “We’re not just making software better, we’re not just making it cooler – we have a measurable goal, so let’s ensure we all work hard to achieve it!”

Let’s say the initiative was a success. If our data and projections were right, it should be! But maybe we don’t get as far as we wanted. Well, we step up, we retrospect, we introspect, and we improve. We demonstrate that the initiative wasn’t as successful as we had hoped, but we show what we learned and how we will apply it next time. I’m not saying to expect failure, but failure is inevitable at some point. Take it gracefully, and be even better next time. We’ll still be in a much better situation than if we had taken a shot in the dark based on suspicion rather than data. In that case we would have had no way to know whether we were successful, because we couldn’t measure it well before, and we couldn’t easily measure its success as an isolated contribution to the overall big picture.

The beauty of using data cuts both ways. Say we had an idea to replace our translators with an ORM mapping framework or other data management framework; the data may have shown this code to be surprisingly stable, with few issues. It may be a little more complex than we would like, but it works well and it’s stable. The data could have shown that we absolutely didn’t need to spend two months rewriting this component and introducing risk which wasn’t there before. It would have shown us that the best bang for the buck was in other areas, and that there was little to no risk in keeping our existing data translation code intact.

Aggregating data points to give us the insight we need is very valuable in justifying key software improvements. It is an important and often overlooked approach to making sure refactoring and improvement exercises are worthwhile, and it yields KPIs which give confidence to business stakeholders and help ensure our initiatives have been successful. In addition to using data to measure and prioritize the potential impact of initiatives, other methods may need to be employed to reduce overall defect rates, ensure the team is moving in the right direction, and improve overall software quality.


Agile Software Architectural Governance in the Real World

This article focuses on the use of the ‘Architect’ role within Agile environments, taking into consideration the experience of the author as well as objective opinions from other software professionals who have found their own version of successful software architecture via different means in the agile environment. The use of the ‘architect’ role within an agile team is discussed, along with how the architect can ensure the software does not become fragmented while providing architectural governance and accountability.


Architecture for any software project is the glue and foundation that keeps the software together, keeps it stable, keeps it maintainable, and keeps it performing well, among other things. How you “get to the architecture” depends on many factors, including the structure of the project, the methodology used, and the people on the project. This article focuses on architecture as it is used specifically within Agile environments.

In addition to my own background and insight from working on architecture in Agile (as well as waterfall and other) environments, I recently put a few questions out to the architecture and agile communities, hoping to gather opinions from those working with architecture in Agile environments on how architecture is used and influenced in these different cultures. The response and encouragement have been overwhelming. Because this article is part opinion and part research, I take into account my own experiences and opinions as well as those of other top professionals. My goal is to be as objective as possible while sharing the experiences and opinions of other professionals, whether or not I have shared similar experiences, use different methodologies, or agree or disagree.

Using the waterfall methodology, many software projects try to nail down all of the requirements up front, both functional and non-functional. Architecture documents are also created to various levels of detail describing the architecture. Waterfall typically can’t account well for variations in business or technical requirements along the way, so trying to get it right the first time (which never happens) is important for many organizations. So, lots of time is spent up front trying to nail this down as best as possible.

Agile teams handle architecture differently. Depending on the team, there may be an initial iteration (or two, or three) to nail down the initial architecture. Some agile teams will try to at least nail the most significant architectural decisions (see Introducing Significant Architectural Change within the Agile Iterative Development Process), hopefully mitigating future architectural changes while understanding the costs associated with them. Other Agile teams let their software grow organically, as many Agile proponents promote YAGNI (“You ain’t gonna need it”), or more verbosely, “let’s not write anything until we actually have a business or technical reason to do it – aka, let’s not over-architect.” YAGNI is great for a lot of things, but it doesn’t go far enough to account for significant architectural decisions that need to be baked into the software to avoid significant cost down the road. The “Last Responsible Moment” principle tries to address this as well, and it works great for some teams, but it’s up to each team to determine when the “Last Responsible Moment” for creating and implementing the architecture is, and that becomes very subjective – wait too long, and more re-engineering work is required.

So, how are Agile teams doing Architecture out in the field? My experience tells me how I’ve done it, as well as what has worked in the past for me and my teams. How does everybody else do it? Do they do it like me, do they have a better way, do they do it worse? Are they successful? Knowing that “the team” makes all the difference and what works for one team won’t necessarily work for another team, I have compiled what I think are the best responses, along with my commentary, to the questions put out to the community.

Dani Mannes – Agile Modellers & Developers

Dani Mannes is the Founder and Chief Architect at ACTL Systems Ltd. His work focuses on the defence industry, where he serves as a consultant and trainer helping this predominantly waterfall industry adopt agile.

In Dani’s approach, he uses the terms “agile developers” and “agile modellers” to define agile team members who do either development or design. He runs into a typical problem that I have seen in many Agile environments; as Dani puts it, “The teams are supposed to refactor, but they often don’t do it because of time pressure. This leads ultimately to spaghetti code/architecture and the velocity will eventually drop dramatically.”

The modellers will use a modelling tool to sketch out and ensure architecture documentation is up to date. “So in each sprint you have an architecture description of the sprint scope. But the architecture should not only focus on the current sprint but also take into consideration stories that will most probably be tackled in the next 2 sprints”. Dani emphasizes that taking into consideration the future stories is essential, but these should not be modelled at this point since “taking into consideration means only to think about them but not actually find a solution for them”.

In Dani’s world, “the team acts as the architect”, and he states that there is no need for the architect role. But he does keep one person in the role of architect, or “architect champion”, just to keep discussions short and to monitor the need for architectural change during each sprint.

“Our experience has shown that when the team applies a model based design process during the first days of each sprint, where focus is set on the sprint scope and attention is given to the scope of the next 2-3 sprints, the team is capable of coming up with a good architecture that serves as guidance during implementing the sprint scope,” says Dani, and “Since the team has come up together with the architecture, all members are aware of the modules”.

Lee Fox – Architecture as Part of The Team

Lee Fox is a Software and Cloud Architect, Agilist, and Innovator, and he ensures that the architect is always a contributor and a team player. In his experience with waterfall projects, the architect is isolated from the rest of the team and isn’t necessarily even part of it. “As part of the team, I preach that the architect MUST be a contributing member and with some degree of consistency even contribute to team deliverables,” and “the architect needs to really enhance the idea of empowerment and encourage the team to make architectural decisions.” Lee also stresses that the architect must have the team’s trust and maintain the big-picture vision. This is a recurring theme in many agile processes when it comes to working within an agile team.

Lee’s vision is that “Agile architects work both in the low level with the team as well as the high level with the business. They use their broad exposure with the business to help guide a team’s decisions in the right direction.” He is fine with architects working on multiple teams, but cautions that “the architect must contribute to each team he is a member of and keep up with the big picture”. What Lee has seen through his approach and coaching is an increase in both velocity and code quality.

The Need for Governance – Dan’s Thoughts

In Agile, teams are self-organized: no one on the team should have more responsibilities than anyone else, as everyone is working together to achieve the same goal. Work is picked up by any team member, and the expertise that is created is shared amongst all team members. However, I believe, along with other respected software professionals such as Simon Brown, author of Software Architecture for Developers, that you need someone responsible for the big picture of the software, and that includes the architecture. This creates accountability and governance for the architecture and all non-functional requirements (NFRs).

There are multiple ways to approach this, but ultimately, having an architect who is responsible for the big picture helps ensure that the architecture is continually in line with the functional and non-functional requirements. The architect has increased access to both the technical side and the business side, and can ensure that the team(s) continually align not only to the business/functional requirements, but that the architecture is in alignment with both short and long term functional and non-functional requirements, and that the existing architecture is followed, revised, and re-worked as necessary.

Ok, so I know some agile purists out there are thinking, “Long term requirements are very subjective. Until a user story gets chosen by a ‘Product Manager’ and moves from the backlog to being actively worked on during a sprint, it’s not really a requirement yet.” Ok, fine. I get that, and I understand the advantages: this is what makes agile a process that helps you change direction or add new features to the project midway through a development phase. User stories are (or should be) always functional requirements, though. When considering the architecture, we need to understand what is in the backlog, or the general type of functional items in the backlog, which helps the architect create the technical vision. The technical vision can change as the project progresses, but ultimately understanding the grand vision will help ensure an architecture that takes into account current requirements and meets future product functionality with minimal re-work.

A drawback of Agile teams without architectural governance is that the system tends to fragment or suffers from too much rework, and often non-functional requirements (such as performance, scalability, and others related to architecture) get tossed out the window. Imagine a team has estimated a total of 20 points for the upcoming sprint. They have to consider: how can they get it done? Among these considerations is what is needed now, and unfortunately what is needed now often comes at the expense of technical accountability. Fragmentation of the system (or of the architecture) occurs when the now becomes the most important piece, rather than ensuring we adhere to sound architectural principles and meet our NFRs as well as the long term architectural vision of the software. This is why architectural governance is important.

In agile planning meetings, the team will typically talk about design and may also discuss architectural changes. Ultimately, the team may still be on their own to make these decisions and ensure that the architecture they have will meet the existing business and technical requirements of the sprint, but it’s up to the architect to ensure that the decisions and approaches undertaken by the team are in fact in line and consistent with the architectural vision. The architect is accountable for the team’s decisions related to architecture and any architectural changes that come about.

The architect is also a team member who may code, but ultimately has responsibility for the continuous evolution of the architecture. Typically the architect needs to work in the trenches alongside the development and business teams to ensure constant technical and business communication. The architect should be involved in all of the agile processes, from planning and development to retrospectives. If the architecture has failed, or we spent too much time in a sprint worrying about “the now” and compromised our architecture, it needs to be brought up and addressed (think retrospective). The whole team should be able to come up with reasons why it failed and how we can improve.

In Summary

When using agile teams, the role of the architect deserves a high level of consideration. It’s important for this role to work alongside the team and to have a very sound vision of how the architecture will meet short and long term objectives with minimal re-work. The architect should be part of all regular agile processes and held accountable, while allowing the entire team to propose and come up with architecture. Having the architect maintain accountability provides a level of governance to the project from a technical architecture perspective that would otherwise get lost and lead to fragmentation in most agile environments.

Are Scrum Processes Such As Burndowns and Task Estimation Working Against Your Organization’s Agility?


This article contrasts Scrum with Agile methodologies and points out that while Scrum can fit within an Agile development team, Scrum alone doesn’t mean your organization has become Agile. Scrum processes such as task estimation and burndowns may be established Scrum practices, but upon analyzing their effects you may find they are actually working against your team’s agility. It’s important to evaluate the value of established processes to determine whether they are helping or are detrimental to your organization’s Agile initiatives. A real-world example demonstrates how a well-intentioned process becomes inefficient within an organization’s Agile implementation.


Recently, a colleague asked me about task estimation and how to reduce the amount of time it’s taking his team to do it. I gave him some suggestions, but it got me thinking more about task estimation and why agile teams are doing it at all. I’ve also been involved with many organizations who are following Scrum processes and calling themselves agile; they are getting there, but they are still putting Scrum processes before people and not really thinking about the value of the processes they are following. This article talks about how Scrum processes such as burndown charts and task estimation can actually work against your organization’s initiatives to improve agility.

Scrum actually predates the Agile Manifesto, so it’s fair to say that just because you are using Scrum you are not necessarily embracing agile practices. However, when you look up Scrum on Wikipedia you will see the following: “Scrum is an iterative and incremental agile software development framework for managing software projects and product or application development.” Ok, so it is an “agile software development framework”, so you can forgive people’s perception (even mine) when they feel that by following Scrum processes we are Agile. But I believe there still needs to be a further distinction here. Scrum is still just a process for managing software development, and I see Scrum as a set of rules that could be used in an agile environment; by following Scrum processes to the letter of the law, we could actually be putting processes over people – which is definitely the opposite of what the agile manifesto is trying to achieve.

Scrum, like many other agile implementations, has the basic premise of a user story backlog. We estimate story points as a relative number instead of using an absolute measurement of time, and over time we can determine our velocity – how many story points we can generally complete in a sprint or iteration. We can then easily judge our backlog to get an idea of how long it will take to complete the stories in it.
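As a sketch (with hypothetical sprint numbers), the velocity and backlog arithmetic looks like this:

```python
import math

# Story points completed in the last few sprints (hypothetical).
completed_per_sprint = [21, 18, 24, 19]

# Velocity: the average number of points the team completes per sprint.
velocity = sum(completed_per_sprint) / len(completed_per_sprint)

# Remaining story points in the backlog (hypothetical).
backlog_points = 164

# Forecast: sprints until the backlog is done, rounding up.
sprints_remaining = math.ceil(backlog_points / velocity)
print(f"velocity={velocity:.1f} points/sprint, ~{sprints_remaining} sprints left")
```

Note that this forecast needs no absolute-time estimates at all: the relative points plus observed velocity do the work.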

Once estimated, we task out the stories for the sprint, but unlike standard Scrum implementations, I don’t like to estimate hours for tasks. Inherently there is nothing wrong with it, but I, along with many Agile experts, don’t see a lot of value in it. It’s just very difficult to estimate work accurately in absolute measurements of time – that’s why we story-point the stories to begin with. Estimating at the task level suffers from the same inaccuracy.

With Scrum implementations, estimating tasks in absolute time is still a vital part of the process. Wikipedia defines a Scrum task as follows: “Added to the story at the beginning of a sprint and broken down into hours. Each task should not exceed 12 hours, but it’s common for teams to insist that a task take no more than a day to finish.” [Author’s note: the latest version of the Scrum Guide has removed task estimation from the Scrum requirements.]

I had a friendly discussion with a previous client about estimating tasks, and I could never get a better answer as to why they do it than “it’s part of the process” and “it’s agile”. There certainly is a lot of confusion out there about what exactly Agile is, and what constitutes adding value as part of an agile implementation. Sometimes teams feel that by following a Scrum process, we’re agile – but as a team we need to think about value. Scrum, agile, or not – what value are we actually getting from spending the time to estimate tasks? If, as a team, we cannot answer that question, we need to re-evaluate this process and determine whether it’s a waste of time. If there is value, sure, let’s continue doing it, but many times a process is followed for process’s sake.

However, it’s easy to see why task estimation is being done: we need the information for our burndown charts. These charts tell us how many task hours we have completed versus how many hours are remaining, and they are part of the Scrum process that should be presented during the daily stand-up meeting. Ok, sure – then theoretically, IF we need burndown charts, we need tasks estimated in hours.

But, do we need burndown charts?

There are a few teams out there that can accurately estimate blocks of development work in absolute units of time, but they are not the majority.  The reason we estimate user stories in story points is that software estimates in absolute measurements of time are rarely accurate.  So why does Scrum call for story points at the user-story level, yet hour estimates for the individual tasks within those stories?  Even when estimating dozens of small tasks individually, we succumb to the same inaccuracy as we would estimating the user stories that way.

The only thing that matters at the end of the sprint (or iteration) is a completed story.  A non-completed story isn’t worth anything at the end of the sprint, even if 10 of 12 hours are completed.  A quick look at completed versus non-completed tasks should be enough for the team to know and decide what needs to be done to complete the work by the end of the sprint.  Many agile experts would agree, including George Dinwiddie, who recommends in a Better Software article using other indicators instead of hours remaining for burndown charts, such as burning down story points.  Gil Broza, the author of “The Human Side of Agile”, goes further and recommends that burndown charts not be used at all and that, among other things, swim lanes be used for tracking progress.
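Dinwiddie’s suggestion – burning down story points instead of task hours – is simple to sketch. The snippet below is purely illustrative (the story names and point values are invented); its only purpose is to show that a partially finished story contributes nothing to the burndown:

```python
# Illustrative sketch: a burndown tracked in story points rather than task hours.
# Story names and point values are hypothetical examples.

def remaining_points(stories, completed):
    """Sum the points of stories not yet done; partial work counts for nothing."""
    return sum(points for name, points in stories.items() if name not in completed)

sprint_backlog = {"export-pdf": 5, "login-audit": 3, "search-filter": 8}

# Day by day, only *completed* stories reduce the burndown.
print(remaining_points(sprint_backlog, completed=set()))            # start of sprint: 16
print(remaining_points(sprint_backlog, completed={"login-audit"}))  # one story done: 13
```

Note that an 8-point story that is “90% done” still counts as 8 points remaining – which is exactly the signal a team needs at the end of a sprint.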

In my experience, knowing the number of task hours outstanding in a sprint doesn’t help, and it is an inaccurate metric for planning additional “resource” hours in the sprint.  Some organizations do this anyway, but since the absolute time measurements aren’t accurate to begin with, the plans built on top of them aren’t either.

If you really are determined to be Agile, you need to make sure you understand where processes make sense.  After all, the agile manifesto preaches “individuals and interactions over processes and tools”.  Following Scrum processes doesn’t necessarily make you Agile, and you need to think hard about the value of these processes and determine whether they are working for or against your initiatives to become more agile.  In the real-world examples of task estimation and burndown charts, it was clear that these established Scrum processes were actually working against the organization in terms of becoming more agile.  Even if your goal isn’t to be agile, by identifying and eliminating processes that don’t add value, you eliminate waste and open the door to replacing them with practices that can truly add new value.

Dan Douglas is a professional independent Software Consultant and an experienced and proven subject matter expert, decision maker, and leader in the area of Software Architecture. His professional experience represents over 12 years of architecting and developing highly successful large scale solutions.  Dan has been the architect lead on over 15 development projects.

A Little Documentation Please, Defining Just Enough Code Documentation for Agile Manifesto No. 2


This article helps clarify agile manifesto number two “Working software over comprehensive documentation” by defining the different types of documentation being referred to. An analysis is given on code documentation indicating the complexities that warrant it, and how refactoring and pair programming can also help to reduce complexity. Finally, an approach is given on how to evaluate when to document your code and when not to.

The second item in the agile manifesto, “Working software over comprehensive documentation”, indicates that working software is valued more than comprehensive documentation – but it’s important to note that there is still real value in documentation.

I see two types of documentation being referred to here: 1) code and technical documentation, typically created in the code by the developers working on the system, and 2) system and architectural documentation, created by a combination of developers and architects to document the system at a higher level.

I will save discussing system and architectural documentation for future articles, as it is a much more in-depth topic.

So let’s discuss the first point – code documentation.

Questions that many teams ask (agile or not) are “How much code documentation is enough?” and “Do we need to document our code at all?”.  Some teams don’t ask, and subsequently don’t document.  Many TDD and Agile experts will tell you that TDD goes a long way toward self-documenting your code.  You don’t necessarily need to do TDD, but there is a general consensus that good code should be somewhat self-documenting – to what level is subjective, and opinions will vary.

Software code can be self-documenting, but there are almost always complex use cases or pieces of business logic that need more thorough documentation. In these cases it’s important that just enough documentation is created to reduce future maintenance and technical debt costs.  Documentation in the code helps people understand and visualize what the code is supposed to do.

If code is complex to implement, does something unexpected such as indirectly affecting another part of the system, or is difficult to understand without examining every step (or line), then you should at a minimum consider refactoring it to make it easier to follow and understand.  Refactoring only goes so far, though, and there may be complexities that need to be documented even after the code has been nicely refactored.  If the code cannot be refactored for any reason, that is another warrant for documentation.
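As a small illustration of that refactoring step (the business rule, field names, and the regional exclusion are all invented for the example), extracting a named predicate makes most of the intent self-documenting, leaving a single comment for the one fact the code alone can’t convey:

```python
# Before: the intent is buried in a condition the reader must decode line by line.
def discount(order):
    if order["total"] > 500 and len(order["items"]) >= 3 and order["region"] != "QC":
        return order["total"] * 0.1
    return 0.0

# After: a named predicate makes the rule largely self-documenting; the comment
# that remains records a business fact the code alone cannot express.
def qualifies_for_volume_discount(order):
    # Hypothetical rule: Quebec is excluded for regulatory reasons, not preference.
    return (order["total"] > 500
            and len(order["items"]) >= 3
            and order["region"] != "QC")

def discount_refactored(order):
    return order["total"] * 0.1 if qualifies_for_volume_discount(order) else 0.0
```

The “why” comment survives future refactorings far better than a comment restating “what” the condition does.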

Think about what can happen if code isn’t documented well.  Developers will spend too much time staring at complex code trying to figure out what a system does, how it works, and how it affects other systems.  A developer may also jump in without a full understanding of how everything works and what the effects on other systems are.  This can create serious regression bugs and technical debt if the original intent of the code is deviated from.  A little documentation gives you the quick facts about the intent of the code – something every developer will value when looking at it in the future.

Of course, in addition to documentation, pair programming can and should be used to aid in knowledge transfer and can be a good mechanism for peers to help each other understand how the code and system work.  Pair programming is also a good mechanism for helping junior and intermediate developers understand when and where they should be documenting their code.

The way I distinguish code that should be documented from code that shouldn’t is future maintenance cost.  Consider how your documentation lends itself to ease of ongoing maintenance, and how easing that maintenance will reduce technical debt and contribute to working software over time.  If your documentation will directly contribute to working software by eliminating future complexity and maintenance, document. If there is no value to the ongoing future maintenance, don’t.  If you are unsure, ask someone a little more senior, or do some pair programming to help figure it out.  I’ve also seen value in peer reviews to help ensure documentation is covered adequately, but I still prefer to instill trust within the team to get it done properly rather than impose a formal review process.  When in doubt, document – a little extra documentation is always better than missing documentation.

Note: My motivation for writing this article came after reading a good article on the same topic

A question came up in the comments about how to weigh the amount of documentation needed in a project.  My article was meant to address that question regarding code documentation with my own opinions.  System-level documentation is much more in-depth, so look for a future article addressing documentation at the system level.


Navigating the Technology Stack to Get a Bigger Return on Your Technology Selection


This article introduces the reasons why organizations choose to standardize a technology stack to use on existing and future projects in order to maximize ROI of their technology choices. When selecting technology for new projects, the architect should consider both technologies within and outside of the existing technology stack, but the big picture needs to be carefully understood and consideration needs to be placed on the ROI of introducing new technology versus using existing technology within the technology stack.

As a Software Architect, understanding the long-term effects and ROI of technology selection is critical.  When thinking about technology selection for a new project, you need to consider the project requirements, but also look beyond them at the bigger picture.  Even though at first glance a specific technology might seem perfect for a new project, there may already be a more familiar technology that will actually have a much bigger return on investment in the long term.

Many organizations stick to specific technology stacks to avoid the cost, overhead, and complexity of dealing with too many platforms.   An architect should have specialized technical knowledge of the technology stack used by the organization, and if the organization’s technology stack isn’t standardized, the architect should work to standardize it.

Advantages to an organization by standardizing their technology stack

Development costs – It’s easier to find employees specialized in a single platform than to staff multiple conflicting platform specializations.  When you need developers with conflicting skillsets (ex: .NET and Java), you will likely need to pay for additional employees to fill the specialization gap.

Licensing costs – It’s typically advantageous to stick with only a few technology vendors to attain better group discounts and lower licensing costs.

Physical tier costs – It’s cheaper and more manageable to run physical tiers that use the same platforms or technologies.  Using multiple platforms (ex: both Apache and Windows-based web servers) requires double the skillset to maintain both server types and to develop applications that work in both environments.

Understand the bigger picture beyond your project to better understand ROI

As an architect you have responsibility for technology selection as it pertains to ROI.  Once you are familiar with the organization’s technology stack and its constraints, you can make a better decision about technology selection for your new project.  You may want to put a higher weight on technology choices known collectively by your team, but it comes down to understanding the bigger picture beyond your current project and the ROI of both the project and the ongoing costs of the technology choices used.  You may need to deviate from your existing technology stack to get a bigger ROI, but be careful that the long-term cost of supporting multiple platforms and technologies doesn’t exceed the savings of using a specific specialized technology for a specific case.

When Microsoft released .NET and its related languages (VB.NET and C#) in 2001, many organizations made the choice to adopt VB.NET or C# and fade out classic VB development.  Those that made the switch early paid an initial learning-curve cost.  Organizations that chose to keep classic VB as their development technology avoided additional costs at the onset; however, they paid a bigger price later when employees left the company, finding classic VB developers became more difficult, and the technology became so out of date that maintenance costs and technical debt began to increase dramatically.

Sometimes the choice and ROI will be obvious; the technology in question might not be in use by your organization, but it lends itself well to your existing technology.  For example, introducing SQL Server Reporting Services is a logical next step if you are already using SQL Server, and introducing WPF and WCF will complement an organization that is already familiar with development on the Microsoft .NET platform.

In another case, it may make sense to add completely new technology to your technology stack.  For example, it may be advantageous from a cost perspective to roll out Apple iPhones and iPads to your users in the field, even though your primary development technology has been Microsoft based.  Users are already familiar with the devices, and there are many existing productivity apps they will benefit from.  Developing mobile applications will require an investment to learn Apple iOS development or HTML5 development, but the total ROI will be higher than if the organization decided to roll out Microsoft Windows 8 based devices just because their development team is more familiar with Windows platform development.

Finally, there will be cases where even though the new technology solves a business problem more elegantly than your existing technology stack could, it doesn’t make sense to do a complete platform change in order to get there.  In these cases, the ongoing licensing costs, costs of hiring specialized people, and complexities introduced down the line far outweigh any benefits gained by using the new technology.


It’s important that the software architect facilitates the technology selection process by evaluating technology based on the ROI of the project while also considering the long-term ROI and associated costs of the selected technology.  Don’t focus only on your existing technology stack, however; unknown or emerging technologies deserve consideration within the selection process.  Weigh the cost of change and ongoing maintenance of any new technology carefully, and evaluate its ROI against the ROI of sticking with the existing technology stack over the long term.

Introducing Significant Architectural Change within the Agile Iterative Development Process


As an architect working within an iterative agile environment, it’s important to understand how significant architectural decisions need to be made as early in the development process as possible to mitigate the high cost of change of making these decisions too late in the game. In iterative development, it’s important to realize the distinction between requirements that require significant consideration to the architecture and those that are insignificant. This article contrasts the difference between significant and insignificant requirements and demonstrates the best approach to implementing both types of requirements. In agile environments, it’s important to realize that significant architecture decisions will sometimes need to be determined iteratively and late in the development cycle. Guidance is provided on determining how to move forward by considering many factors including ROI, risk, cost of change, scope, alternative options, regression, testing bottlenecks, release dates, and more. Further guidance is provided on how the architect can collaboratively move forward with an approach to implementation while ensuring team vision and architectural alignment with the business requirement.

As agile methodologies such as Scrum and XP focus on iterative development (that is, development completed within short iterations of days or weeks), it’s important to distinguish the requirements that are significant to your architecture within the iterative development process.  Iteratively contributing to your software architecture is very important to maintaining it, ensuring the right balance between architecture and over-architecture, and ensuring that the architecture stays aligned with the business objectives and ROI throughout the iterative development process.

Many agile teams do not make a conscious effort to ensure that significant software architecture decisions are accounted for iteratively throughout the development process.  Sometimes it’s because there is a rush to complete the features required within the iteration and little thought is given to significant architectural changes; sometimes it is a lack of experience or of a team vision.  There is usually a lack of understanding of how the important guiding principles of the architecture need to be continually established, how they shape the finished product, and how they align with the business objectives in order to see a return on investment. This is where the architect plays a huge role within the iterative development process.

Lack of attentiveness to significant architecture decisions, whether at the beginning or mid-way through a release cycle, can cause significant long-term costs and delay product shipping significantly. Many teams find out too late in the game that the guiding principles of their architecture are not in place and that strategies to get there still need to be worked out – at the expense of time, technical debt, and being late to ship.  When requirements from the product team involve core architecture changes or re-engineering, the changes are sometimes made without recognizing the need to strategize and ensure that the guiding principles of your core architecture are in place for ongoing business alignment and minimal technical debt cost.

Within the iterative development process, it is important that the agile development teams (including the architect) learn to recognize when new requirements are significant and when they are not.  Deciding which requirements are significant and will be carried on as guiding principles of your architecture can be worked out collaboratively with the developers and architects during iteration planning or during a collaborative architectural review session.  This will help ensure that development is not started without consideration to the architecture of these new significant requirements, and that there is time to get team buy-in, ensure business alignment, and create a shared vision of the new guiding principles of your architecture.

In addition, to help prevent surprises during iteration planning, the architect can be involved and work with the product team when preparing the user story backlog to help identify stories that could have a significant impact on the architecture prior to iteration planning.  Steps can be put in place to help the product team understand the impact and to assist the product team in understanding what the ROI should be in contrast to the cost to implement major architectural changes.

So, what distinguishes significant architectural requirements from insignificant ones?

Separate the requirements that have architectural significance from those that do not: significant requirements are distinguished by alignment with business objectives and a high cost of change, while insignificant requirements are more closely aligned with the changing functional requirements of the iterative development process.

An insignificant architectural requirement may still be significant as it pertains to the functional requirements, but not in terms of the core architecture.  To further contrast which decisions to consider significant and which insignificant, take a look at the following comparison.

Significant vs. Insignificant

  • Significant: a high cost to change later if we get it wrong.  Insignificant: we write code to satisfy functional requirements and can easily refactor as needed.
  • Significant: functionality highly aligned with key business drivers, such as modular components paid for by our customers and customer platform requirements.  Insignificant: a new functional requirement that can be improved or duplicated later by means of refactoring if necessary.
  • Significant: the impact is core to the application, and introducing the functionality too late in the game carries a very high refactoring and technical debt cost.  Insignificant: the impact is localized and can be refactored easily in the future if necessary.
  • Significant: decisions that affect the direction of development, including platform, database selection, technology selection, development tools, etc.  Insignificant: development decisions such as which design patterns to use, when to refactor existing components and decouple them for re-use, and how to introduce new behavior and functionality.
  • Significant: some of the ‘ilities’ (including scalability, maintainability/testability, and portability).  Insignificant: other ‘ilities’, such as usability, which depending on the specific case can be better mapped to functional requirements; some functional requirements may require more usability engineering than others.

It is best to handle the significant decisions as early as possible in the development process.  As contrasted below, you can see how the iterative approach lends itself well to requirements that have an insignificant impact on the architecture.  You can also see how significant architectural requirements really form the guiding principles of your architecture, and how getting them right early on lessens the impact of change on the product.

Insignificant Decisions

  • New functional requirements (ex: allowing users to export a report to PDF. There is talk of allowing export to Excel in the future, but it is currently not in scope.)
  • Modifications and additions to existing functionality and business logic

How to approach: use an agile, iterative approach to development.  These are functional requirements with a low cost of change and a low cost of refactoring.  Write the component to handle only its specific case, and don’t plan your code too much around what you think the future requirements might be.  If the time comes to add or improve functionality, refactor the original code to expose a common interface, use repeatable patterns, and so on. In true agile form, this prevents over-architecture if future advances within the functional requirements are never realized, and there is minimal cost to refactor if they are.
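A small sketch of that approach (the exporter names and output formats are hypothetical stand-ins): build the PDF export alone first, and introduce a common interface only when the Excel requirement actually materializes:

```python
# Step 1: write only what the current requirement needs -- PDF export.
def export_to_pdf(report):
    return f"%PDF {report}"   # stand-in for real PDF generation

# Step 2 (only if Excel export ever becomes a real requirement): refactor toward
# a common interface rather than building the abstraction speculatively up front.
class PdfExporter:
    def export(self, report):
        return f"%PDF {report}"

class ExcelExporter:
    def export(self, report):
        return f"XLSX {report}"

def export(report, exporter):
    # Callers now work against the shared interface, added exactly when needed.
    return exporter.export(report)
```

If the Excel requirement never arrives, step 2 never happens, and no speculative abstraction was paid for.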
Significant Decisions

  • Our customers are using both Oracle and SQL Server
  • Performance and scalability
  • Security considerations
  • Core application features that have a profound impact on the rest of the system (for example, a shared undo/redo system across all existing components)

How to approach: these decisions need to be made as early as possible, and an architectural approach has to be put in place to satisfy the business and software requirements.  They are usually significant because there is a huge cost to change (refactoring and technical debt) and potential revenue loss if they are not put in place correctly or need to be refactored later.  These decisions are core to the key business requirements and will carry a huge cost if refactoring is required down the road.
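To illustrate why a decision like dual Oracle/SQL Server support belongs in this category (a deliberately simplified sketch; the dialect classes are hypothetical, and real code would typically sit behind an ORM or data-provider layer): code written against a dialect abstraction from day one avoids the later hunt for vendor-specific SQL scattered through the system.

```python
# Guiding principle decided early: no vendor-specific SQL in application code.
class SqlServerDialect:
    def top_n(self, table, n):
        return f"SELECT TOP {n} * FROM {table}"

class OracleDialect:
    def top_n(self, table, n):
        return f"SELECT * FROM {table} WHERE ROWNUM <= {n}"

def fetch_recent(dialect, table):
    # Application code asks the dialect for SQL instead of embedding it directly.
    return dialect.top_n(table, 10)
```

Retrofitting this abstraction after months of SQL Server-only development is exactly the "huge cost to change" described above.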

Agile Isn’t Optimized For Significant Architectural Change

It’s unfair to assume that the agile development process is built to excel at introducing significant functionality and architecture changes without a large cost.  That is why some requirements are significant and need to be decided as early in the development cycle as possible, while ensuring alignment with the business objectives.

As great as it would be to map out every possible significant requirement as early on as possible, there are sometimes surprises.  This is agile development after all.  We need to understand that along with changing functional requirements, significant business changes part way through the development process can occur and could still have a significant impact on our core architecture and guiding principles, so we need to mitigate and strategize how to move forward.

Certainly, it’s possible to introduce significant core-architecture changes by refactoring, or by scrapping old code and writing new functionality – that’s the agile approach to changing functional requirements, and it’s how agile helps prevent over-architecture and ensures we only develop what’s needed.  The problem is that it doesn’t work well for the significant decisions of your architecture; when we do refactor those, the cost can be so high that it will exponentially increase your time to market, cause revenue or customer loss, and potentially disrupt your business.  In these cases, the architect, along with the product and development teams, needs to create a plan to get to where they need to be.

Mitigating the Cost of Significant Change

There is always a cost to introducing core architectural functionality too late in the game.  The higher the cost of the change and the higher the risk of impact, the more thought needs to be put into the points below.

  • The refactoring cost will be high.  Is there an alternative way we can introduce this functionality in a way that will have a minimal impact now without affecting the integrity of the system later?
  • This change is significant and will require a huge burden on the development team to get it right. Will we have a significant ROI to justify the huge cost of change?  For example, is Oracle support really necessary after developing only for SQL Server for the first 6 months, or is it just a wish from the product team?  Do we really have customers that will only buy our product if it runs on Oracle, and what are the sales projections for those customers?  Is there a way we can convince those customers to buy a SQL Server version of the product?  The architect needs to work with the product and business teams to determine next steps.
  • How will this affect regression testing?  Are we creating a burden for the testing team that will require a massive regression testing initiative that will push back our ship date even further?  Is it worth it?
  • How close are we to release?  Do we have time to do this and make our release ship date?
  • What is the impact of delaying our product release because of this change?
  • Is it critical for this release or can it wait until a later release?
  • Can we compromise on functionality?  Can we introduce some functionality now to satisfy some of the core requirements and put a plan in place to slowly introduce refactoring and change to have a minimal impact up front, but still allow us to meet our goal in the future?
  • What is the minimal amount of work and refactoring we need to do?
  • What is the risk of regression bugs implementing these major changes late in the game?  Do we have capacity in our testing team to regression test while also testing new functionality?
  • Are we introducing unnecessary complexity?  Can we make this simple?

All individuals involved in the software need to be aware of the impact and high cost that significant late-in-the-game changes will have on the system, the development and testing teams, ship dates, complexity, refactoring, and the technical debt that could be introduced.  There are strategies that can be used, and the points above are a great start in determining how to strategize the implementation of significant architecture changes.  One of the roles of the architect is to help facilitate and create the architecture and guiding principles of the system and ensure its long-term consistency.  As the system grows larger and more development is complete, introducing significant architecture changes becomes more complex.  The architect needs to work with all facets of the business (developers, QA, product team, sales, business teams, business analysts, executives, etc.) to help ensure business alignment and a solid ROI on significant architecture decisions.

Moving Forward

Once a significant decision is made that will form part of your architecture’s guiding principles, the architect needs to understand the scope of work, determine what will and won’t be included, collaboratively create a plan for getting there, and understand how the changes will fit within the iterative development cycle moving forward.  The architect needs to ensure that the product and development teams share the vision, understand the reasons for introducing the significant change, and understand the work that will be required to get there.  If your team is not already actively pairing, it may be a good time to introduce it – or alternatively introduce peer reviews or other mechanisms – to help ensure consistent quality when refactoring existing code to support significant architectural changes.

Depending on the level of complexity, the testing team may need to adjust their testing process to ensure adequate regression testing takes place covering both new and existing requirements affected by the significant architecture change.  For example, if we make a significant change to support both Oracle and SQL Server, we need to ensure that existing functionality which was only tested for SQL Server is now re-tested in both Oracle and SQL Server environments.  The architect or developers can work with the testing team for a short time to help determine the degree of testing and which pieces of functionality specifically need to be focused on, so the QA/testing teams are correctly directing their efforts.
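One way to picture the added testing burden (a simplified sketch with invented backend and case names): every existing regression case now runs once per backend, so the suite size multiplies.

```python
import itertools

# Hypothetical backends and regression cases for illustration only.
BACKENDS = ["sqlserver", "oracle"]
REGRESSION_CASES = ["save_order", "load_order", "search_orders"]

def run_case(backend, case):
    # Placeholder for a real test body exercising `case` against `backend`.
    return f"{case}@{backend}: pass"

# Every existing case now runs once per backend -- the regression burden doubles.
results = [run_case(b, c) for b, c in itertools.product(BACKENDS, REGRESSION_CASES)]
print(len(results))  # 6 = 3 cases x 2 backends
```

In a real suite this is what a parametrized test matrix (e.g. pytest parametrization or a CI build matrix) expresses: the same cases, multiplied by environments.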


It’s important to distinguish significant architecture decisions from requirements that are insignificant to the core architecture of your system.  When introducing significant architecture changes iteratively within an agile environment, understand the impact and complexity such changes carry when introduced late in the game.  Understand the business impact of the changes, and make sure the architect works with the rest of the organization to determine their business alignment, risk, and ROI – along with the cost of change – before moving forward with a plan to introduce them.

Your Team’s Ongoing Vision Will Determine the Long Term Consistency and Alignment of Your Software Architecture


The ongoing consistency of your software architecture depends on its alignment with your business objectives, how suitable the technology choices are, and whether your team has bought into and is following the architectural vision. This article demonstrates the key factors that determine how well your software architecture can sustain a collaborative long-term vision and the growth of your software, along with the difficulties of trying to ensure business value without a shared vision or collaborative team input. Real-world examples from the field demonstrate successful scenarios, the thinking behind them, how buy-in was attained, and how they complemented the QA testing strategy for the business objectives. An approach is given for recovering failing projects where there was no consistent strategy, turning chaos into a coherent strategy aligned with the business objectives.

In a recent article, I focused on ensuring that your architecture is aligned with your business objectives; see Aligning Your Software Architecture With Your Key Business Objectives and Why Your Business Needs It.  This article goes a step further and demonstrates how team buy-in and a shared team vision contribute to the consistency of the architecture and to business value throughout the development cycle.

One of the problems that the architecture of your software should address is consistency: consistency of technology, design patterns, approaches, layers, frameworks, and so on.  As software projects evolve, it’s important that the architecture, especially as it aligns with your business objectives, remains consistent.  Functional requirements can certainly change along the way, but the key is to do enough architecture work, and ensure consistency, for the big design decisions that need to be made as early in the game as possible.

Ensuring consistency in this capacity isn’t about guidelines for best practices such as re-use of existing components and developing components that are decoupled from one another.  These best practices should be part of most (all) development projects.  Ensuring consistency of the architecture is about ensuring that the guiding principles of your core-architecture have buy-in, are clear, are being followed, and are driving the long term success of your software in relation to your critical business objectives.

Ensuring consistency can require a certain level of control in some scenarios, but more often than not, very few controls are required when you have team buy-in and a shared vision from the beginning as to how the business value is being provided.  This is especially true once you have a team that is consistently delivering business value with the software.  Enforcing overly strict controls on teams is demoralizing for most, and I’m very opposed to forcing development teams to do things a certain way or dictating how work will be done.  I’m all for ensuring consistency and a good architecture across the application, but this can be done without forcing, controlling, and dictating.

To help ensure consistency and buy-in across the board, it is important to consider the following when coming up with your architecture:

  1. How crucial are the recommendations at hand to the business?
  2. Will we see a business benefit by evaluating and following guidelines set around our evaluation?
  3. What is the long term detrimental impact to the software of not doing this or doing it too late in the game?
  4. Will following these recommendations eliminate refactoring costs and technical debt later on?

These considerations will help ensure buy-in and ease collaborative agreements with the team as the team will have a better understanding and vision as to the importance of the architecture to the business.  Having a consistent vision of the business value helps ensure consistency with the architecture moving forward.

Be sure that team members who want to participate can help collaborate on the architecture or guidelines.  This provides ownership by the team members which helps drive and ensure continued consistency.  I’ve said many times that architects should not dictate requirements, rather they should create recommendations facilitated by understanding the software, technology, business, customers, etc.  These recommendations should involve collaboration and review with the other team members before final architecture decisions are agreed upon and finalized.

Let’s look at some real life examples from the field

Example 1) The sales team had trouble in the past selling to some large customers who primarily used Oracle database servers.  The architect discussed the scenario with the business leaders, who made a business case for supporting both Oracle and SQL Server.  Collaboratively with the development team, the architect determined that the code base of the entire application could remain the same, while the data layer could be swapped out to support different platforms.  The business case helped ensure buy-in for an ORM tool to be used as a data layer supporting both platforms.  The team collaborated and evaluated different options, finally selecting a tool that met all of the business requirements for performance, scalability, and multiple platforms.  In fact, the selected tool worked easily with both SQL Server and Oracle with little overhead.  It was clear to the entire team how important the ongoing use of this ORM framework throughout development would be to the business, and that deviating from it could cause considerable damage to the product and the business model.  The team was completely on board with the vision.  In addition, this drove the QA testing team to put controls in their testing processes to ensure compliance with this business requirement: as part of their process, they tested the software on both SQL Server and Oracle, and multiple platform testing environments were created.
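To make the idea concrete, here is a minimal sketch of keeping the application code the same while the data layer swaps platforms. The class and method names are hypothetical, and the platform-specific classes return stubbed rows where the real system would delegate to the ORM:

```python
from abc import ABC, abstractmethod

class CustomerRepository(ABC):
    """Data-layer contract the rest of the application codes against."""

    @abstractmethod
    def find_by_name(self, name: str) -> list:
        ...

class SqlServerCustomerRepository(CustomerRepository):
    def find_by_name(self, name: str) -> list:
        # The real system would delegate to the ORM targeting SQL Server.
        return [{"name": name, "platform": "sqlserver"}]

class OracleCustomerRepository(CustomerRepository):
    def find_by_name(self, name: str) -> list:
        # Same contract, different database platform underneath.
        return [{"name": name, "platform": "oracle"}]

def build_repository(platform: str) -> CustomerRepository:
    """Chosen once at startup, e.g. from customer configuration."""
    repositories = {
        "sqlserver": SqlServerCustomerRepository,
        "oracle": OracleCustomerRepository,
    }
    return repositories[platform]()
```

Because everything above the data layer depends only on the abstract contract, supporting a new database vendor means adding one new repository class rather than touching the application code.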

Example 2) Another scenario allowed customers to pay for specific application modules, but not others.  It was evident that the architecture needed to adhere to this business model, and that consistency of the architecture throughout the development process would ensure ongoing compliance with the business objectives.  Collaborative team buy-in on the business benefit, together with a dependency injection and inversion of control framework, ensured that the components being built were modular and could be swapped in and out of the application easily.  This also drove QA testing initiatives to ensure test plans accounted for and tested this modularity, as it is a core part of the business model.  From a development standpoint, the team came to a shared vision of the business reason components were built using this approach.  The team understood that by deviating from it they would be creating a refactoring and technical-debt cost to fix later on.  Of course, teams are free to improve upon the approach as new functional requirements are added within the iterative development process.
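A small sketch of the pay-per-module idea, assuming a common module interface and a registry standing in for the DI/IoC container; the module names and interface here are illustrative, not from the original system:

```python
class AppModule:
    """Hypothetical interface every pluggable module adheres to."""
    name = ""

    def start(self) -> str:
        raise NotImplementedError

class ReportingModule(AppModule):
    name = "reporting"

    def start(self) -> str:
        return "reporting started"

class InventoryModule(AppModule):
    name = "inventory"

    def start(self) -> str:
        return "inventory started"

class ModuleRegistry:
    """Minimal stand-in for a DI/IoC container: only paid-for modules load."""

    def __init__(self, licensed):
        self._licensed = set(licensed)
        self._modules = []

    def register(self, module: AppModule) -> None:
        # Modules the customer has not paid for are simply never wired in.
        if module.name in self._licensed:
            self._modules.append(module)

    def start_all(self) -> list:
        return [m.start() for m in self._modules]
```

Because every module adheres to the same interface and is loaded through the registry, a customer's licensing determines at run time which modules exist, without any conditional logic scattered through the application.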

In these two examples, team buy-in and a shared vision were established because the architectural approaches were well thought out, evaluated, aligned with the business requirements, and carried a huge cost of change if we got them wrong.  Importantly, the entire team had the opportunity to be involved collaboratively in coming up with the architecture, which further strengthened their commitment as they implicitly took ownership of it.

It is much more difficult to ensure consistency across development teams without a shared vision of the value being delivered.  For example, dictating that certain design patterns be used over others is a subjective decision that will likely fail to gain buy-in because the business value isn’t clear.  Forcing buy-in in this way will likely fail and lead to team demoralization.

Guidance, recommended patterns, approaches, and coding standards can all be put in place.  In reality, they mean very little unless we lead by example and have shown through practice, team buy-in, and business value why we are using those approaches and what the business advantage is.  Instead of working on aligning architecture with business requirements, I’ve seen teams spend weeks coming up with coding practices (how to declare variables, which variable-naming pattern to use, etc.) for new projects.  The problem is that most people don’t read or care to look at the documents created in these sessions, and ensuring compliance for compliance’s sake is difficult and a waste of time.  Even if there is a little value in keeping variable naming consistent across the board, I don’t believe documents standardizing the approach provide the value or the incentive to do it.

Sometimes, to rescue a failing project, you may need to assert more control and constraints in order to get to a point where the software is coherent and beginning to meet the business objectives.  This is a state we try to avoid by doing our up-front architecture work and ensuring consistency with a shared vision.  However, as consultants, sometimes we are brought into a project too late in the game.  If this happens, fixing the solution may require short-term measures and controls, but don’t lose sight of the fact that the real value is in ensuring consistent business value through architecture and a shared vision.  There may be a lot of refactoring to be done, but the team still needs to share a vision of the business value of what they are trying to accomplish.  My experience is that dictating control only works in emergency scenarios in the short term, just to reach a stabilization point; for the long term, the team needs to work with a shared vision and understanding of the business value to make consistent progress.

Getting the team’s buy-in and creating a shared vision may be challenging and may take longer on failing projects where the vision wasn’t there from the beginning, but it’s the best bet for the long-term success of your software and for the consistency of your business objectives.  Once the team has a shared vision and is consistently contributing to the business value, fewer controls will be required.  Your software will be continuously aligned with your business objectives as the development and testing teams work together to ensure it adheres to your critical business objectives.

Aligning Your Software Architecture With Your Key Business Objectives and Why Your Business Needs It


Software development projects need to be aligned with your key business objectives.  The objectives with the biggest cost to change need to be baked into your core architecture as early in the development process as possible.  Objectives relating to what the customer is paying for, such as modularity of components (e.g., paying for some components but not others), performance and scalability, and data accessibility, are a few of the many possible key business objectives that need to be baked into the core architecture.  Failure to do so can double or triple your development costs, leading to months of refactoring and potential customer and revenue loss.  For agile environments, this article stresses that significant core-architecture decisions must not be made too late in the game, debunking the myth of “no up-front architecture in agile” with real-world examples where user stories, while aligned with functional requirements, were aligned with neither the key business objectives nor the core architecture.

Software projects need an architecture – a core architecture – but defining that architecture and ensuring it meets the business and technology objectives takes more thought than just doing “architecture”.  This article focuses on specific areas of creating a core architecture as it relates to key business objectives.

Listing some design patterns and layers for your new system and presenting them to the team might have some technical merit, but it isn’t enough if you want a well-thought-out, successful software system that meets your business objectives. Architecture isn’t just design patterns, layers, and code design. In fact, that’s a very small part of it, if it even qualifies at all. Architecture is about making the very significant decisions that will ensure the alignment of your completed software with your business and technology objectives.

Creating the right architecture requires business and domain knowledge, product and customer knowledge, research, communication, technological evaluations, technical agility, and expert experience in software development.  The decisions made here will shape the final software solution; your core architecture really is the set of decisions made during this architecture-creation phase.

Now, separate this from day to day software development.  Teams make “architectural-like” decisions all the time when they determine how to implement specific functionality.  Ideas will get tossed around about how many layers of abstraction will be implemented, which design patterns to use, and so forth.  Although some would categorize this solely as design, in a general sense this is still architecture, but maybe not your core-architecture. You could certainly say that not all of these design decisions will have a significant impact on the software.

Grady Booch states that “All architecture is design, but not all design is architecture.”  He also states that “Architecture represents the significant design decisions that shape a system, where significant is measured by cost of change.”

Your core architecture needs to represent those significant decisions, and most importantly, they need to be aligned with the business requirements.

Some examples of these significant decisions that become part of your core architecture include technology selection (development languages, frameworks, server platforms, etc), application deployment considerations, and technology considerations for key business objectives.

Some examples of architectural decisions to support technology considerations for key business objectives are as follows:

  • The system must work on both Oracle and SQL Server databases. -> An ORM tool is selected that allows the application to easily swap DBMS vendors.  The selected ORM tool has been evaluated against other tools and also meets the performance requirements. Using this tool will also lead to faster data-layer development than the alternative tools.
  • Customers can use and pay for a variety of modules that need to be plugged in at run time.  The modules need to work together to share information, but also operate independently if related modules are not available. -> Dependency injection and inversion of control frameworks are evaluated against writing an internal DI/IoC framework, and an approach is selected to control how and when the components get loaded and used.  The components will adhere to a specific interface to ensure compatibility and modularity within the system.
  • The system has to be fast – a single implementation must handle a minimum of 100 customers and 5,000 simultaneous users with no performance degradation.  This performance must be maintained with over 100,000,000 records in the core database tables, and will be measured by… etc… -> Performance considerations relating to how data is retrieved and stored, caching, scalability, and data-load operations are reviewed.  Frameworks and architecture decisions related to these are reviewed and selected to ensure that performance considerations are baked into the core architecture. Coding standards and design patterns are put in place to ensure the UI is always responsive and UI data is loaded asynchronously so the user experience never freezes.
  • Customers are paying to be able to access their data in a standard way using 3rd-party tools, and want to write automated scripts to retrieve and access this data. -> The business layer will expose a corresponding secure REST API through which all customer data will be accessible.
  • We have licensed 3rd-party vendors that pay a yearly fee to write and sell reports to our customers.  We need an open reporting tool with easy access to report data. -> A technology evaluation is completed and multiple vendors are evaluated.  The choice is made to use SQL Server Reporting Services to allow other vendors to easily create and sell reports to our customers. Reporting will not be done on the live database, to minimize performance impact and mitigate the risk of rogue reports making their way into the reporting module.  A separate star-schema analytics server will be deployed containing aggregated customer data suitable for very fast customer reporting with no impact to the live production system.
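The performance point above can be illustrated with a small sketch: a read-through cache that keeps repeated aggregate reads off the live tables. The function, its return value, and the instrumentation counter are all hypothetical stand-ins for a real database query:

```python
import functools

DB_READS = {"count": 0}  # instrumentation to show the cache working

@functools.lru_cache(maxsize=10_000)
def customer_summary(customer_id: int):
    """Stand-in for an expensive aggregate query against the core tables."""
    DB_READS["count"] += 1
    return (customer_id, f"summary-for-{customer_id}")

# Repeated reads of the same customer hit the "database" only once;
# subsequent calls are served from the cache.
customer_summary(42)
customer_summary(42)
```

The point of baking a decision like this into the core architecture is that every data-access path goes through the cached layer from day one, rather than caching being retrofitted per feature after a performance problem surfaces.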

As part of this process you may need to include specific details about how this will be implemented, measured, how the testing team will test the performance, the specific technology in question and how it will be used, etc.  Your project will also need a shared vision of this architecture as the success of the project will depend on ensuring this architecture is maintained throughout the development process.

It should be clear that the core architecture represents decisions that will have a big impact on the completed software.  Trying to change or implement these core-architecture decisions midway through the development process can have serious consequences, and the refactoring cost could require months of development time. In the worst case, it could prove virtually impossible without a major rewrite if you have moved too far in another direction.

Could you imagine discovering, close to a customer release, that the software that was supposed to scale doesn’t scale?  Or that specific functionality doesn’t work when more than a few users access it at the same time?  It happens often when the software team was not mature enough to ensure there was a core architecture aligned with the key business objectives, and that the developed code stayed aligned with that core architecture.  These risks must be accounted for and mitigated by your core architecture, and the concepts of the architecture need to be baked into all development decisions made by the team throughout the development process.

If the problems mentioned above happen on your development project, it is likely that your core architecture was incorrect, never established, never followed, or effectively established too late in the development process. These problems alone can easily double or triple your total development costs, cause considerable delay to product shipping, and cost your organization customers and profits.

“What about Agile?  We don’t do up front architecture in Agile, we do it as needed during our iterations and we follow the last responsible moment design principle in doing so.”

I’m a proponent of agile methodologies, and I’ve seen the great benefits that effective agile teams can have on a development project. But one thing I’ve seen over and over in many agile environments is the assumption that less (sometimes zero) time should be spent on architecture up front. The principle itself is sound: removing most up-front design so that functional requirements can be developed as slices of functionality throughout the iterations keeps code from becoming over-complicated and over-architected. However, the assumption that we throw away all up-front design is incorrect. Even on agile projects there are significant decisions relating to your core architecture and key business requirements that need to be made before development begins, and other significant decisions that need to be made as early in the development process as possible. These decisions and the related architecture need to be reviewed, updated, and maintained throughout the iterative development process.

Agility is fantastic when dealing with functional requirements and the need to respond quickly as they change. However, this agility needs to be kept separate from the up-front core-architecture decisions that are aligned with the key business objectives and that will help ensure your software product’s success and conformance to those objectives.

Even in agile environments where YAGNI (“You aren’t gonna need it”) and “build now, refactor later” are the trends, refactoring code too late in the development process in order to meet significant design decisions that align with the business objectives is going to cost you. As mentioned earlier, refactoring costs of months of development time are the norm for organizations that didn’t account for core-architecture decisions in their development – especially in agile environments where there is a misconception that these architecture decisions were supposed to be made as late in the game as possible.

In many agile implementations, these core-architecture decisions are made too late because they don’t typically relate to a single user story, and they end up carrying a huge technical-debt cost: once the architecture decisions are finally made, there is a huge cost to change and to refactor existing code. Days, weeks, or months can be lost to refactoring.

Another problem on some agile teams is what I call “the race to the finish”. Development teams race to satisfy the requirements of the user stories as quickly as possible, to complete them within the iteration and keep their velocity (average user-story points completed per sprint) up. And although the functional requirements of the user stories are solid and intact, thought isn’t always given to core-architecture concerns such as modularity, scalability, and performance, even when the core architecture is aligned with the business objectives. If the core architecture hasn’t been defined, or has only loosely been defined, you can expect even less consideration, as the focus turns to completing the work as described instead of ensuring the development is in alignment with the core architecture. To mitigate this in true agile form, agile teams and product owners need to ensure that business objectives relating to the core architecture are part of the acceptance criteria for the user stories, and that they are thoroughly tested before the testing team gives the “Ok” on the completed stories.

Depending on the agile team, and how experienced and senior they are, the focus on how and when design decisions are made can vary, and not every decision needs to be made at the inception of the project. Typically, the decisions with the biggest cost to change should be made as early in the software development process as possible. Make absolutely sure you don’t lose sight of the architecture, the significant design decisions within it, and how those decisions align with the key business requirements.

In summary, a successful architecture for a successful software product requires significant design decisions to be made up front.  The bigger the cost of change for a design decision, the sooner it needs to be made and implemented within the solution.  Not establishing an architecture can lead to months of wasted development time refactoring code that wasn’t aligned with key business objectives such as modularity, performance, and scalability.  In agile environments, it’s especially important to give thought to up-front architecture to mitigate the cost of refactoring in the future.  Establishing and maintaining a core architecture that is aligned with the key business objectives will go a long way toward ensuring a successful product rollout.

Perhaps you want to learn more about how to establish the architecture, how to present and collaborate with your development team to finalize the architecture and create a shared vision, how to ensure consistency across the architecture, how to bake in non-business requirements such as logging, cross-cutting, and other technical concerns into the architecture.

Or maybe your architecture is established, and your team has done a damn fine job of ensuring the architecture is solid and that it will meet the key business and technology objectives.  How do you maintain this? How do you cope with architecture change when it is warranted or when the business changes?

Stay tuned, as I am writing a series of articles on these topics which will be available in the future.

Technical Debt In Software – The Fine Line Between No Architecture and Over Architecture

As I am writing this, I am sitting on my condo’s terrace in downtown Toronto, baking in the sun while I take in the noise of the city and cars and people down below at the street level.  There is a helicopter that seems to be circling the downtown core for quite a while now.  It’s freaking hot today and I could probably use some water.  Be right back.

(1 minute later…)

Ok – water has been acquired… I also grabbed a Kilkenny and poured it into my Kilkenny glass (Kilkenny requires the right glass to be enjoyed properly).

It’s been a while since I’ve posted, and career wise a lot of big and exciting changes have been made.  I made the move to independent consulting, and I am enjoying it big time!  Now my efforts are shifting from helping one organization develop bleeding edge scalable systems to helping many organizations with development, architecture, minimizing technical debt, and team building.

Along the lines of the type of work I’m focusing on, I want to write a bit about bad architecture, over architecture, and technical debt.

Ok, so I’m going to talk about the benefits of a “value added approach” to software.  I’ve seen a lot of systems in my day – ranging from poorly architected apps that deliver high amounts of business value, to over-architected systems that, instead of delivering their expected business value, became utter failures for a multitude of reasons… and everything in between.

To the business, a successful system is typically one that delivers on its promises of value to the business.  The negative impact of technical debt is not always seen by the business teams, and is sometimes seen, albeit indirectly, as “necessary”.  So, in these scenarios, why is it necessary?  Is it job security for the dev team? Lack of training?  Lack of standards?  There could be a plethora of reasons, but in the end the technical debt introduced by these systems is high and could cost the organization millions of dollars.

High-business-value systems and mission-critical systems can suffer from an endless amount of technical debt due to a lack of design standards and architecture.  This technical debt is not always apparent to the senior business teams. The business loves the system, but they sure don’t understand why it takes such a long time to add new features or track down bugs.  The business leaders at the top see the system as great too, and the fact that it’s overly fragile and requires daily maintenance and an overly large team just to support it seems somewhat necessary – plus it’s “just the way it is”, right?

The real deal here is that technical debt and overly large dev support teams are just not necessary.  The right people, training, technical skills, system architecture, and business leaders can create the right systems and find that fine line between a proper development architecture and business value.

Yet, there is another extreme… over-architecting and big egos.

People have egos… fact of life. When architecting a system, I’ve seen many software architects with an ego.  Their system is great, using all the right design patterns… Look at how cool my undo system is with the command-pattern implementation I came up with.  Watch how I can add one entry to the configuration file and all of a sudden the entire behavior and business logic of the app changes… Our customers can now create their own custom modules using my handy dependency injection techniques and augment the app with their own fancy things… Pretty sweet, eh?

I agree, ok, it’s pretty sweet (and fun to work on), and I’ve seen and created my fair share of ‘coolness’ in my systems.  There are always business cases for these types of things, and having the right technical team and the ability to implement them properly is key.  The problem is over-architecting when they aren’t needed, or for the dreaded reason ‘just in case, in the future’.  The fact is, this can delay shipping the product and complicate development.  There is typically a big disconnect between the development team and the business value in these scenarios.  You end up with a technical team that is more focused on architecting than on providing business value.  Focus on what’s needed, and if there is a case for having it… build away and have fun!

Both of these scenarios can lead to long-term technical debt.  The trick in software is building a team who can leave their egos out of it, share knowledge of the system and the business value proposition, and focus on what’s important for the long-term success of the project while minimizing technical debt.  Doing this has the potential to create phenomenal systems that are well architected, where bugs are easy to track down, the system can grow organically the way it needs to, the development team can remain small, and the business is happy knowing that maintenance costs are reduced and new features can be added quickly.  There is no more fragility – the system becomes clear, and small changes are less likely to have unexpected bad consequences.

There is a fine line between these two scenarios, and it takes practice and discipline to build the right team – one that can continually create well-architected solutions that maximize business value and minimize technical debt.  It can be done, and done very successfully.  To truly master this as a software/business team, it needs to be instilled in the culture of the team.  The right leaders and people are very important in creating truly world-class software solutions.

Measuring the Time Saved When Reusing a Well Architected Component

A well-architected component is easy to develop given the right technical knowledge, practice, skill, and motivation to do the right thing.  I’m taking a practical example of a component I developed recently, described in the blog post titled Getting to the Monetary Value of Software Architecture.  Relatively speaking, this component wasn’t complex to develop, but it did require some trial and error and some clever thinking to perfect.  Development took just over 4 hours and produced 80+ lines of code (code only, not counting comments or blank lines).  Once it was complete, we could apply and reuse this functionality in various places with extreme simplicity – actually, with only one line of code!  That’s the power of simplicity and reusability.  See the actual code for the component here: Class to Add Instant ‘As You Type’ Filter Functionality to Infragistics UltraCombo Control

In a recent blog post, I talked about the monetization of well architected solutions.  Here I am going to put some hard values against it.  I used the following variables to come up with the data to put into the chart.

Initial development time required: 240 mins

Time required to duplicate this functionality once (non-architected solution): 15 mins, assuming the developer is somewhat familiar with the code and is copying and pasting.  I would comfortably say it could take 30 mins for someone who isn't sure how the code worked to begin with, and therefore doesn't know exactly which code needs to be copied – that would double the blue time line shown on the chart below.

Time required to duplicate this functionality once (well architected solution): 1 min  (it’s literally one line of code!)

These numbers were taken from the time it took to actually do the implementation several times.  The 15-minute value for duplicating the functionality in the non-architected solution was derived from practicing a copy-and-paste scenario to integrate the code.
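To make the chart data easy to reproduce, here is a small sketch that tabulates the cumulative time for both approaches using the numbers above. The original component is VB.NET; this is just an illustrative Python calculation, and the function name is mine.

```python
# Numbers from the article: 240 minutes of initial development, then either
# 15 minutes per copy-and-paste duplication (non-architected) or 1 minute
# per reuse of the encapsulated component (well-architected).
INITIAL_DEV_MINS = 240
COPY_PASTE_MINS = 15
REUSE_MINS = 1

def total_time(per_use_mins, n_uses, initial=INITIAL_DEV_MINS):
    """Total minutes spent once the functionality exists in n_uses places."""
    return initial + per_use_mins * n_uses

if __name__ == "__main__":
    for n in (1, 5, 10, 25, 50):
        cp = total_time(COPY_PASTE_MINS, n)
        ar = total_time(REUSE_MINS, n)
        print(f"{n:>3} uses: copy/paste {cp:>4} min, reusable {ar:>4} min, saved {cp - ar:>4} min")
```

The gap between the two lines only widens as the functionality is reused in more places.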


This does not account for the extra effort required in the future to maintain the code or make updates and enhancements.  With the well-architected component, a bug fix or enhancement means updating something within 80 lines of code.  In the non-architected version, you would be making a change within (80 x N) lines of code, where N is the number of implementations.  Let's say N = 10 – that's 800 lines of near-identical code that need to be looked at to make a change, plus all of the other application code intertwined with it that you have to sift through.

This is just one example!  There are great architectures all over the place, and most of us know how valuable they are.  But rather than just saying "Yeah, they save tons of time", I'm putting some hard values against it.

This is one scenario, but there are many others that save even more time – I'd even say exponentially more.

Getting to the Monetary Value of Software Architecture

Software Architecture is a huge topic and it’s something I am passionate about. I believe, and can prove, that continuous improvement in this area will contribute to overall better system design, faster bug tracking and fixing, reduced maintenance time, faster development time, developer happiness, quicker time to market, and allowing you to allocate more time to keep up to date on the newest technology – to name just a few things….

Now, translate this into the Bottom Line of the Business….

  • Faster ROI for software projects
  • Reduced Downtime
  • Faster time to market for new features
  • Reduced labour costs on software projects where the additional labour can be put towards additional stages of the project or other areas
  • Bottom Line -> Reduced Costs, Greater Profits

Let me share a case in point that complements this theory nicely, along with details to indicate the higher monetary value of the well-architected solution.

I’m going to discuss a feature we added to an existing project and how it was implemented using a sound architectural strategy. This is just a small piece, an enhancement really, to an existing system that has been well architected.  

We developed a solution to allow users to search and filter a combo box while typing into the text area of the combo. This allowed users to find what they were looking for faster. The before and after scenarios are contrasted below…


Before: The user had to know the complete item name in a list of potentially thousands of items and carefully scroll through the list to find it.



After: Users only need to know part of the item name they are looking for and just have to type it into the ComboBox to filter. This is an incredibly quick and easy feature for the user and eliminates the time required to scroll through the list. As an added touch, a little filter icon is shown in the combo box to denote that the list is only showing items that match the filter criteria.


(Note: Some data has been blurred to keep certain customer information confidential)

As you can see above, the user just types in ‘007’ which is the suffix to the part they were looking for in the list. This enabled them to quickly find what they were looking for; in this case, only one part had that specific suffix. In this case, there are over 240 items in the drop down list that the user would have to navigate through to find what they were looking for if this filter functionality was not in place.

Implementation Scenarios

I could have implemented this in many ways, but I decided on an architecture that would allow me to reuse this functionality many times over with little effort. I created a class that encapsulated all of the functionality required for the filter functionality to work. This approach requires no additional filter specific code in the main application. With this new class (about 64 lines of new code), I can create a new instance of it in any project and have instant and seamless filter/search functionality for any Combo Box.

It’s now literally one line of code to add this functionality (essentially 64 lines worth of functionality) to any of our ComboBoxes. You could loosely (very loosely) say that your productivity is increased by 64 times when implementing this filter functionality in any new scenario using this approach. However, read further and I’ll cover a more accurate metric.
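The actual component is VB.NET built against the Infragistics UltraCombo, so I won't reproduce it here; as a language-neutral illustration of the encapsulation idea, here is a rough Python sketch. All class and method names are hypothetical stand-ins for the real control's events and properties.

```python
class TypeToFilter:
    """Attach 'as you type' filtering to a combo-like control.

    The control is assumed to expose an `items` list and a
    `set_visible_items(items)` method -- stand-ins for whatever events
    and properties a real widget toolkit provides.
    """

    def __init__(self, combo, key=None):
        self.combo = combo
        self.key = key or (lambda item: item)   # how to get searchable text
        self.all_items = list(combo.items)      # keep the unfiltered list

    def on_text_changed(self, typed):
        # In a real toolkit this would be wired to the combo's
        # text-changed event; here we call it directly.
        typed = typed.lower()
        matches = [i for i in self.all_items if typed in self.key(i).lower()]
        self.combo.set_visible_items(matches)
        return matches


class _DemoCombo:
    """Minimal stand-in for a combo control, just for demonstration."""
    def __init__(self, items):
        self.items = items
        self.visible = list(items)
    def set_visible_items(self, items):
        self.visible = items


parts = ["AX-100-007", "AX-100-008", "BX-200-007", "CX-300-001"]
combo = _DemoCombo(parts)
flt = TypeToFilter(combo)   # the 'one line' needed per combo
flt.on_text_changed("007")  # only the two parts ending in 007 remain visible
```

The point is the shape of the solution: every line of filter logic lives in the class, and each additional combo box costs exactly one line.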

I’ll contrast this implementation scenario with another very common scenario:

User: “This application is great, but man it sure is hard to find the information I need sometimes – especially when I only know the suffix to the part I am looking for”

Developer: “I have a great idea, let me add filter functionality for you to search your list and find what you need.“

User: “That sounds great! That’ll save me a ton of time!”

The developer opens the project, finds the Combo Box he is going to add the filter functionality to, and starts coding away: handling the events of the Combo Box, adding code in various places, debugging – and a few hours later he has something that works pretty well. He does a bit more testing, fixes some bugs, makes a few changes, and now has something that seems perfected.

The change is rolled into production and the users LOVE it! They can think of how useful it would be to have this functionality on a few more Combo Boxes.

(this is where it all goes wrong)

The developer goes to add this functionality to a few more Combo Boxes. For each combo box the developer is doing the following:

1) Find all the code in another area of the application that has the filter combo box functionality and copy it …. (“Where is all the code I need?  Grrr – which pieces of it do I need again?”)

2) Paste it into the code module in another area of the application where the new functionality is needed

3) Look through all of the code and replace control names, key strings, and other variables with the ones we want to use for this instance

4) Test everything to make sure we aren’t missing anything

5) Oops, something is not working right – maybe I forgot to copy something or change a value somewhere?

6) Ahh found it, I didn’t handle one of the events of the Combo Box properly and this was causing all kinds of problems

7) Copy and paste this piece of it

8) Code has been added in various places to support this new functionality, so I need to do some serious integration testing to make sure I didn’t break something else

9) Ok, finally, everything is a go – let me post this.

Now the user wants the functionality in a few more places. The developer finds this tedious but continues to do it this way for each combo box. It is tedious and time-consuming, and it introduces more opportunities for bugs because the code is integrated into multiple places and tightly coupled within the system; due to time constraints, it becomes tougher to introduce this functionality in many additional areas.

With the well architected solution we get the following direct benefits that we would not see in the above scenario:

  • Write once, reuse many times – easily!
  • Effort does not need to be repeated for each place we want to use this functionality
  • Changes to functionality are made in the Class and bug fixes and enhancements do not need to be manually replicated for as many times as the functionality has been implemented
  • Code is refined, tested, and encapsulated from the rest of the project and from the ComboBox itself
  • Test Driven Development is supported as the idea is to de-couple the functionality from the system so that it can be reused – this decoupling makes it easy to write automated tests if necessary
  • Testing of the main components limits any re-testing required because the code is always the exact same code and not just copied and re-integrated from place to place

How can we put a monetary value on this?

Now, add up the time it took to create, test, and debug the initial filter component and get it working once. This is your Y value.

Now, add up the time it takes to copy the functionality to one more area as per the steps listed above. This is your X value.

Now, come up with a number as to how many places could benefit from this new introduced functionality. This is your N value.

Now, add up the time it takes to add one line of code to a project to enable this filter functionality on an additional Combo Box. This is your Z value.

Cost of a well architected approach to implement: Time = Y + (N x Z)

Non architected solution: Time = Y + (N x X)

This just shows the initial up front cost savings. Take the other benefits into account, as listed above, and you can see a much higher monetary value.
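Put as code, the two formulas work out like this – a rough sketch using the example numbers from earlier in this article (Y = 240, X = 15, Z = 1 minutes, N = 10 places); your own Y, X, and Z will vary.

```python
def architected_cost(y, n, z):
    """Build once (y minutes), then one line of code (z minutes) for each of n places."""
    return y + n * z

def non_architected_cost(y, n, x):
    """Build once (y minutes), then copy/paste/fix (x minutes) for each of n places."""
    return y + n * x

if __name__ == "__main__":
    y, x, z, n = 240, 15, 1, 10   # minutes, from the earlier example
    saved = non_architected_cost(y, n, x) - architected_cost(y, n, z)
    print(f"Up-front savings for n={n}: {saved} minutes")
```

With those numbers the architected approach costs 250 minutes against 390, and the gap grows linearly with N.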

To Add Insult To Injury…..

Now, wouldn’t it be great if the filter could search all of the columns of the combo box instead of just one? We could add a new option to the class to allow that, or enable it by default for all ComboBoxes using the filter. Let’s hope they are all using this architecture – I don’t know a developer who would enjoy going back and modifying (or adding) the code for every combo box so that it can support this extra functionality. In that scenario, you either add unnecessary labour cost to the project to get the additional functionality, or the functionality just doesn’t get added and the feature set of the system suffers.

In a follow up blog post, I’ll actually discuss the code used and the approach taken with the code for this solution in particular. I will however share, in this blog post, the one line of code required to duplicate this functionality (64 lines of dispersed code, reduced to 1):

Dim partfilter As New UltraComboTypeFilter(ucboPart, "PartNo")

The Art and Process of Reusability in Software Development

Reusability is the art of planning and developing application components so that they can be easily reused in other areas, easily built on top of, and support a decoupled approach to development and testing.

When developing software and writing your code, a great deal of care has to be given to reusability.  The first question you should be asking is:

Which components do we have available to use that have already been developed?

So, we’re thinking about which components or pieces of code, already developed, we can reuse (either within the same application or in a new one).  Components, in this context, could mean any of the following scenarios:

1. Code that we have available that wasn’t necessarily designed to be reusable:

This is code that we may have developed within another application without really thinking about its reusability; even though it wasn’t designed to be reusable, we can still harness its value.  Depending on the situation, we could abstract the code from its original location and make it reusable – worth doing if you can really see this piece of code or component being reused again and again.  This requires modifying the original code or component, and the application using it, in order to get the abstraction.  The original application will now use this component, and your new application will be reusing that exact same component.  Improvements to the component could now affect both applications.

Another option is the good ol’ copy-and-paste method.  Bad, bad, bad!  Well – sometimes it’s bad, not always.  Go into the other application, select what you want to copy, paste it into the new application, and modify as needed.  Presto!  We’ve all done this, and it can be justified when the effort required to copy/paste/modify, as many times as you project you’ll ever need this code, is much less than the time it would take to decouple it.  Sometimes you may just do it out of laziness – hopefully it doesn’t bite you in the ass the next three or four times you want to reuse the same code, leaving you wishing you’d decoupled it from the get-go.

Sometimes you may have code or components that you want to reuse but have difficulty decoupling from their original source.  Reusability wasn’t taken into account when the code was originally written, and it’s too tightly coupled to the original application.  The reusability factor here is lost, and typically you have to duplicate the effort and rewrite from scratch in the new application.  Hopefully the second time it gets written, it’s designed to be reusable.
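A toy example of the abstraction described in this scenario – the names are made up purely for illustration. First the same logic pasted into two places, then one extracted helper that both call.

```python
# Before: the same banner-formatting logic pasted wherever it was needed.
# A bug fix or format change now has to be repeated in every copy.
def report_header(title):
    return "=== " + title.upper() + " ==="

def invoice_header(title):
    return "=== " + title.upper() + " ==="   # second pasted copy

# After: the logic extracted once into a reusable helper.
# Both callers (and any future ones) share the single implementation.
def banner(title, rule="==="):
    """One place to change if the format ever does."""
    return f"{rule} {title.upper()} {rule}"

def report_header_v2(title):
    return banner(title)

def invoice_header_v2(title):
    return banner(title)
```

Trivial at two call sites; by the tenth copy, the extracted version is the only one anyone wants to maintain.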

2. 3rd party components we can plug into our application:

There are tons of time-saving components from third-party vendors that we can plug into our applications.  These components typically provide functionality that is not available out of the box.  Some examples of available third-party components: data grids, object-relational mapping (ORM), charting, reporting, etc.  These can enhance your applications and save development time in exchange for the component’s licensing fee.  Purchasing new third-party components can be time-consuming, as you will want to do an extensive search and evaluation of competing components from many vendors before making a purchase decision.

3. Using free source code or components found online:

There are many great source code examples and free components available online that you can plug into your application.  These can be real timesavers, but they should typically be tested more thoroughly before production than other components, as they usually come with no warranty and can introduce very unexpected bugs if you are not careful.


Ok, so you’ve thought about the ideas above but still feel you must begin development with new code – you now need to think about future reusability of the code you are writing.

I’ll get into more detail about developing for reusability in Part II of this blog posting.  Coming Soon!

The Basics of Software Architecture for .NET Developers – Presentation

At our last CIPS executive meeting, held on July 21, 2009, we decided it would be a good idea to create the first Coffee and Code event held jointly by the CIPS chapter of London (Ontario) and the new London .NET User Group.  We held the event last night (August 18, 2009 – on my birthday, not purposely).  As part of this event, I developed and presented a PowerPoint presentation titled The Basics of Software Architecture for .NET Developers.  The presentation touches on the basics of software architecture, along with ideas, tools, and resources for the .NET developer.

Here is the link to the presentation.


I’d like to give a shout out to Tony Curcio, President of the London .NET User Group.  He’s done a great job of putting together this new .NET user group in London.  Tim Hodges is the CIPS London chapter president, who has also done a great job organizing local CIPS events, past and present.

Content on slides 2, 3, and 4 was taken from the Software Architecture presentation I worked on along with team members Adam DeMille and Matt Higgins for the Top Gun training program in 2008.

My Speech on Important Points in Considering Software Architecture

In a recent posting, My Speech on Using Technology to Solve a Business Problem, I transcribed the contents of a speech I gave at Toastmasters.  In this posting, I will share the follow-up speech I did, titled “Important Points in Considering Software Architecture”.

The speech was developed for a non-technical audience using layman’s terms and examples.  I really tried hard in the speech to simplify the ideas of software architecture.  The night I did the speech, there were three speakers speaking about different topics with an audience of about 20 people.  At the end of the evening, I was voted as “Best Speaker” by the audience members.  I felt good about that; Toastmasters really is a great organization to help you grow your public speaking.

Ok, here is the speech!

In my last speech I talked about being effective as a member of the Information Technology field.  I briefly discussed steps involved in developing a solution – from concept to development.  In this speech, I will take this one step further and go over another important step in the overall software development process.  I will discuss important points to consider in the area of software architecture.

Software architecture is the fundamental design of a computer program.  Consider the architecture of a car.  A basic architecture of a standard car should have four wheels, an engine, fuel tank, etc.  The architecture of a car also defines how these components will work together to produce a working vehicle.  In software, it is the same idea.  The basic architecture of a computer program dictates how the computer program will work, and how it will work together with other computer programs.  Along with this basic architecture are proven design patterns.  Design patterns are fundamental patterns that are proven and reliable and used as a template for developing pieces of your applications.  To put this in perspective, imagine someone attempting to develop a new car without knowledge of how current cars work and what their fundamental “design patterns” are.  Consider the time that would be saved and the ease of future maintenance if they could build this new car upon an existing template.  Would it not make sense to take in the knowledge of an existing proven design, and use it as a base model for your new software applications?  Of course you may improve on top of the initial fundamental design while still keeping the fundamental concept of how the car works as per the basic car “design pattern”.

Looking at the option of being able to reuse existing components as building blocks or even sometimes main features to your software application is important.  Why re-invent the wheel?  Consider, when developing a new car, the cost savings that could be had in re-using standard components that have already been developed and proven on previous model years.  Why start from scratch?  Again, software design is very similar.  Purchasing pre-existing components that have been developed by third party vendors or that are freely available could be highly beneficial.  Take the following example:  Your application requires a rich user experience that is highly functional with a look and feel very similar to Microsoft Word.  Your development team could spend one month developing and testing this new feature themselves, but this time could have an estimated cost of $10,000.  Meanwhile, there are dozens of vendors out there who are offering this as a component, or a “building block” that you can plug into your application to give you the functionality you need.  All that your development team has to do is customize these proven components to suit the needs of the application.  This could have an estimated cost of $1,000 for the license to use the components, and maybe another $1,000 worth of development time to customize the components as you need to.  In this example, you could easily see an overall savings of $8,000.

Modularity is another important factor to consider when designing your software application.  To be modular is to consist of “plug-in units” which can be added together, and on top of one another, to make the system larger or to improve its capabilities.  As an example, think of a modular cabinet system where you can purchase additional cabinets and combine them into one larger cabinet.  This saves you money because you don’t have to throw away your existing cabinet when you need something bigger or better.  In software, the same idea applies.  You can design an application to be modular so that future enhancements can be developed faster, without having to redesign the application from scratch or change it just to add functionality.  As an example: part of your application contains information about customers, and new requirements now call for the application to also contain information about your customers’ suppliers.  Being able to develop an independent “unit” containing this supplier information that can be plugged into the application limits the amount of change needed to the existing program.  This will save time in the future.

Designing a good workable software architecture takes time and practice, but if done properly will save you even more time in the future as you begin to work on implementing this design.  Using effective and proven design patterns, pre-made components from 3rd party vendors, and keeping modularity in mind can help your development team come up with a stable and effective software architecture.