Using Metrics and KPIs to Refactor and Improve Your Enterprise Software

Refactoring exercises are sometimes “a shot in the dark”: the depth of the problem isn’t always measured or well understood, nor can the success of the initiative be measured easily. This article describes an approach that uses data collected over time to identify and prioritize software refactoring exercises that give us the biggest bang for the buck, and to provide projections and justification to business stakeholders. KPIs are then established to measure the success or failure of the initiative.


In both greenfield and brownfield development, one of the more negotiable items is refactoring, or taking on large refactoring exercises to improve the health of the software. In greenfield projects these initiatives may come up at any time, and the risk/benefit is typically weighed against project timelines, launch dates, milestones, and feature scope. Brownfield projects, on the other hand, have typically been running in Production for a while; at this stage, refactoring exercises may focus on areas which have had noticeable issues in production, or on exercises deferred until after launch for a variety of reasons. In either case, it’s sometimes hard to justify the exercise even though we think we know it’s the right thing to do.

The best agile experts will tell you that even in the best of Agile environments there is always refactoring to be done. The ultimate Agile purists may argue that in their perfect world refactoring exercises are unnecessary because refactoring is done as we go. It just doesn’t work like that – that’s clever marketing though! I won’t go into detail about how software gets fragmented even when the entire team has the best of intentions, but it does. Fragmentation happens, and moving too far one way makes it difficult and risky to move another way without significant refactoring efforts which cannot always be justified at the time of development.

Refactoring can have many purposes, including reducing complexity, improving performance, improving reliability or scalability, because it’s cool, or simply enabling new features. But how do you weigh the benefit of these initiatives against their risk? One approach is to establish data points, metrics, and KPIs that not only justify the effort up front, but also validate it afterwards. Consider application performance: we measure it, we see the numbers, and we realize we are way off our NFR timings. Maybe it’s a quick fix, or maybe it requires a larger refactoring exercise, but the justification is in the numbers. We create a PoC to confirm the expected performance gain, we implement, we measure again, and voilà – we have met our goal. Elementary stuff.

Quite often, our refactoring efforts may not be that cut and dried. There may not be enough justification for teams to consider the refactoring exercise or for the stakeholders to approve it. Applications may also have a series of different problems, maintenance headaches, and technical debt. How do we know what to prioritize, where the plumbing needs a serious make-over, or which areas are the biggest concern? Look at the data.

Tracking the health of your software through data points on an ongoing basis can help greatly with identifying the big problems, or with identifying the small problems before they become big ones. We may have an async service that we’ve been itching to re-write, but the metrics may indicate far more important complexity issues in some of our MVC controllers which have led to a large number of functional defects and page crashes. Because of the data, we can not only justify our refactoring efforts but also prioritize them. My experience is that if you want to justify something, especially to business stakeholders, it’s easier when you have data to back it up.

Good candidate data points include information from exception logs, such as the components, classes, and methods that throw the most exceptions, or data from your ALM system, whether it be TFS or another system: query the defect fixes and the file check-ins related to each fix to determine which files have had the most fixes applied. This can be further categorized or aggregated as necessary, but it’s important to understand which data points matter to you so you can build metrics and measure the problematic areas. Another example is the number (or severity) of defects per functional area. How you get that data will depend on your environment, but it’s important to be pre-emptive about this. New software initiatives should always include data-point capture: ensure your logging captures the data in a meaningful way, and that you are tracking adequate data in your ALM system to determine your biggest problem areas. Determine, prepare, and track the data points you need well before you need to analyze them.
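As a sketch of what this kind of aggregation might look like, the following Python snippet counts defect fixes per file from check-in records. The record shape, field names, and file names are all hypothetical – real ALM systems such as TFS expose this data through their own query APIs.

```python
from collections import Counter

# Hypothetical shape: each record links a defect fix to the files changed
# in the associated check-in. Field and file names are illustrative only.
defect_checkins = [
    {"defect_id": "D-101", "files": ["OrderController.cs", "OrderMapper.cs"]},
    {"defect_id": "D-102", "files": ["OrderMapper.cs"]},
    {"defect_id": "D-103", "files": ["OrderMapper.cs", "InvoiceService.cs"]},
]

# Count how many defect fixes touched each file.
fix_counts = Counter(f for record in defect_checkins for f in record["files"])

# Files with the most fixes applied are refactoring candidates.
for filename, fixes in fix_counts.most_common():
    print(f"{filename}: {fixes} fixes")
```

The same pattern works for any other grouping – defects per functional area, exceptions per class – as long as the raw records carry the dimension you want to count.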

Code complexity is another type of measurement, one which gives a numeric ranking indicating how complex a piece of code is. There are many tools on the market which can help determine and rank your code complexity. Complexity often maps directly to stability and functional problems, and generally increases the maintenance cost of your software. A large enterprise SOA application I have recently been consulting on has tens of millions of lines of code and up to 100 people engaged in the project at any given time. As an exercise to determine where our biggest software problems were, we pulled together two sets of data. The first was aggregated and compiled from our ALM functional defects and our crash logs, to see where, historically, our biggest issues were and to identify the types of issues we were seeing repeatedly; it allowed us to see the trends. The second came from complexity reports produced by tools which measure the cyclomatic complexity of our software. When we aligned the two, we found that in many cases there was a direct correlation between code complexity and both the number of unique issues and the number of recurring issues in our crash and ALM data. This gave us hard data to justify refactoring and software improvement based on code complexity as well as on the actual numbers of issues and crashes we were seeing in the code.
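A minimal sketch of aligning the two data sets might look like the following, with made-up module names and numbers; the Pearson coefficient simply quantifies how strongly complexity tracks defect counts.

```python
from statistics import mean

# Illustrative pairing of per-module cyclomatic complexity scores (from a
# static-analysis tool) with defect counts aggregated from ALM and crash
# logs. All numbers are invented for the sketch.
modules = {
    "DataTranslator":  (41, 38),  # (complexity, defects)
    "OrderController": (25, 17),
    "AuthService":     (9, 3),
    "ReportBuilder":   (14, 6),
}

complexity = [c for c, _ in modules.values()]
defects = [d for _, d in modules.values()]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(complexity, defects)
print(f"complexity vs. defects: r = {r:.2f}")  # close to 1.0 => strong correlation
```

A coefficient near 1.0 across your modules is exactly the kind of hard number that turns “this code feels risky” into a stakeholder-ready argument.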

Metrics, data points, and KPIs can help you determine the state of your software on an ongoing basis. Data can prove vital in helping meet your software’s requirements, nonfunctional requirements, feature set, and release goals. Justification can be provided to business stakeholders using data, with the promise that the results of the initiatives can also be measured and acted upon. Let’s say we have a custom data translation layer with tens of thousands of lines of code where we are repeatedly opening new defects and seeing multiple crashes per day or week. We know this code is overly complex, and the trends we compiled show 40 crashes a month here, with a regression or defect rate for this part of the code of 1:1 – meaning for every defect we close in this area, one more opens. Because we have analyzed the data, we know this is one of a few areas we can target to give us the biggest bang for the buck, and we can now use all of it to provide justification to our business stakeholders.

Why does this even have to be justified to business stakeholders? We all know we can’t always just write the code that we think will make the system better. Project team members typically have an idea of what the pain points of the system are, and know which code is overly complex or a PITA (pain in the ass). Some project teams will just say – go ahead – we have to re-write X, so let’s just start doing it – no justification and no measurements needed. That might work, and if we get it right, good. Of course, we’ll have no way to really measure it without data. We may think it’s the right thing to do and have the best of intentions, but we don’t always get it right. Compiling data and creating KPIs is not only about justifying our actions to business stakeholders when we need to; it’s also about justifying them to ourselves – the project team.

Let’s go back to the example I cited before. We have our justification for refactoring or rewriting our data translation components. We have a new solution which will use an ORM mapping tool to replace all of our fragmented data translators. This will take two months of re-development and introduce some risk, but that risk is completely offset by the negative data trends we are seeing in this area: a 1:1 defect ratio and 40 crashes a month. Terrible, in comparison to the other components in our application.

Now let’s take this a step further. Not only do we have the data and trends to identify the rewrites that will give us the biggest bang for the buck, we have data and trends to create and measure KPIs. After analyzing a sample of our biggest issues in the data translators, we see inherent problems that repeat month after month. Let’s say these inherent problems represent 80% of the 40 issues we are seeing each month; based on the stability record of the new (and fully tested) 3rd party ORM framework we will be introducing, we know we can reduce the number of issues in this area by up to 80%. The other 20% are random one-off issues – some may stem from our complex code, but others may still re-occur due to external factors. Part of the justification will be establishing KPIs to measure whether we met our goal or not.

After analyzing all of the data, we can establish justification and KPIs that look something like this example:

Justification and Approach

  • We see 40 crashes a month in our data translator code.
  • Defect rate of 1:1 in this area, specifically.
  • Replace with ORM XYZ component, a 3rd party Object Relational Mapping tool which will replace all of our existing data translators.
  • Cost of 1,200 development hours and an ETA of 2 months.
  • As the rest of the application is seeing a defect rate of 1:0.75, the remaining 8 defects per month will continue to decline at that rate. Further initiatives can reduce our defect and regression rates.
  • Milestone date will be impacted by 1 month, at the benefit of a much more stable data layer and a large reduction in measurable crashes per month.

Measurable KPI

  • Whereas we originally had on average 40 defects per month in this area, we will maintain an average of 8 or fewer defects per month in this area (further reduced as our defect rate improves) beginning one month after the new solution is implemented.
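The KPI target falls straight out of the arithmetic. A tiny sketch, using the hypothetical figures from the justification above:

```python
# Figures from the justification above (illustrative numbers).
baseline_per_month = 40   # crashes/defects per month in the translator code
inherent_share = 0.80     # share attributable to the translators' inherent complexity

eliminated = baseline_per_month * inherent_share   # removed by the ORM rewrite
kpi_target = baseline_per_month - eliminated       # residual one-off issues

# The residual defects should then decline at the application-wide defect
# rate of 1:0.75 (for every four defects closed, three new ones open).
next_month_projection = kpi_target * 0.75

print(f"KPI target: {kpi_target:.0f} defects/month; "
      f"projected the month after: {next_month_projection:.0f}")
```

Having the projection as an explicit calculation, however simple, makes it easy to revisit the KPI later with the actual measured numbers.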

Proposing the improvements, backing them up with hard data, and committing to measurable KPIs helps move these types of initiatives forward and gets everyone on board and moving in the same direction. It provides justification to everyone from development team members and business analysts to project managers and other business stakeholders. Establishing and committing to measuring KPIs may be scary for some: every time we measure the success of an initiative, there is a chance we miscalculated or made a mistake which led to no improvement, or a smaller improvement than we had projected. However, done right, it shows confidence to the team and business stakeholders and helps drive the team to think smartly about the goal we are trying to achieve. “We’re not just making software better, we’re not just making it cooler – we have a measurable goal, so let’s ensure we all work hard to achieve it!”

Let’s say the initiative was a success. If our data was right and our projections were right, it should be! But maybe we don’t get as far as we wanted with an initiative. Well, we step up, we retrospect, we introspect, and we improve. We demonstrate that the initiative wasn’t as successful as we had hoped, but we also demonstrate what we learned and how we will apply it next time. I’m not saying to expect failure, but failure is inevitable at some point. Take it gracefully, and be even better next time. Either way, we’ll be in a much better situation than if we had taken a shot in the dark based on suspicion rather than data. Otherwise, we would have had no way to know whether we succeeded, because we couldn’t measure the problem well beforehand, and we couldn’t easily measure the initiative’s success as an isolated contribution to the overall big picture.

The beauty of using data is this: say we had an idea to replace our translators with an ORM mapping framework or some other data management framework, but the data showed this code to be surprisingly stable, with few issues. It may be a little more complex than we would like, but it works well. The data could have shown that we absolutely didn’t need to spend two months rewriting this component and introducing risk which wasn’t there before – that the best bang for the buck was in other areas, and that there was little to no risk in keeping our existing data translation code intact.

Aggregating data points to give us the insight we need is very valuable for justifying key software improvements. It is an important and often overlooked approach to making sure refactoring and improvement exercises are worthwhile, and it can provide KPIs which give confidence to business stakeholders and help ensure our initiatives have been successful. In addition to using data to measure and prioritize the potential impact of initiatives, other methods may need to be employed to reduce overall defect rates, keep the team moving in the right direction, and ensure that overall software quality is improving.


Are Scrum Processes Such As Burndowns and Task Estimation Working Against Your Organization’s Agility?


This article contrasts Scrum with Agile methodologies and points out that while Scrum can fit within an Agile development team, Scrum alone doesn’t mean your organization has become Agile. Task estimation and burndowns may be established Scrum processes, but upon analyzing their effects you may find they are actually working against your team’s agility. It’s important to evaluate the value of established processes to determine whether they are helping or hurting your organization’s Agile initiatives. A real-world example demonstrates how a well-intentioned process becomes inefficient within an organization’s Agile implementation.


Recently, a colleague asked me about task estimation and how to reduce the amount of time it’s taking his team to do it. I gave him some suggestions, but it got me thinking more about task estimation and why agile teams are doing it at all. I’ve also been involved with many organizations who are following Scrum processes and calling themselves agile – they are getting there, but they are still putting Scrum processes before people and not really thinking about the value of the processes they follow. This article discusses how Scrum processes such as burndown charts and task estimation can actually work against your organization’s initiatives to improve agility.

Scrum actually predates the Agile Manifesto, so it’s fair to say that just because you are using Scrum you are not necessarily embracing agile practices. However, when you look up Scrum on Wikipedia you will see the following: “Scrum is an iterative and incremental agile software development framework for managing software projects and product or application development.” Ok, so it is an “agile software development framework”, so you can forgive people’s perception (even mine) when they feel that by following Scrum processes we are Agile. But I believe there still needs to be a further distinction here. Scrum is still just a process for managing software development, and I see Scrum as a set of rules that “could” be used in an agile environment. By following Scrum processes to the letter of the law, we could actually be putting processes over people – which is definitely the opposite of what the agile manifesto is trying to achieve.

Scrum, like many other agile implementations, has the basic premise of a user story backlog. We estimate stories in story points – a relative number rather than an absolute measurement of time – and over time we can determine our velocity, which is how many story points we can generally complete in a sprint or iteration. We can then easily judge our backlog to get an idea of how long it will take to complete the stories in it.
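As a rough sketch of how velocity turns a story-point backlog into a timeline (all numbers invented for illustration):

```python
import math
from statistics import mean

# Story points completed in the last few sprints (relative sizing, not hours).
completed_points = [21, 26, 19, 24]

velocity = mean(completed_points)   # average points per sprint
backlog_points = 180                # points remaining in the backlog

# Rough forecast: sprints needed to burn through the backlog.
sprints_remaining = math.ceil(backlog_points / velocity)
print(f"velocity ~ {velocity:.1f} pts/sprint, "
      f"about {sprints_remaining} sprints remaining")
```

Note that nothing here required estimating a single hour – the forecast comes entirely from relative sizing plus observed throughput.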

Once the stories are estimated, we task them out for the sprint, but unlike standard Scrum implementations, I don’t like to estimate hours for tasks. Inherently there is nothing wrong with it, but I, like many Agile experts, don’t see a lot of value in it. It’s just very difficult to estimate work accurately in absolute measurements of time – that’s why we story-point the stories to begin with. Estimating at the task level suffers from the same inaccuracy.

With traditional Scrum implementations, estimating tasks in absolute time is still a vital part of the process. Wikipedia defines a Scrum task as follows: “Added to the story at the beginning of a sprint and broken down into hours. Each task should not exceed 12 hours, but it’s common for teams to insist that a task take no more than a day to finish.” [Author’s note: the latest version of the Scrum Guide has removed task estimation from the Scrum requirements.]

I had a friendly discussion with a previous client on estimating tasks, and I could never get a good answer as to why they do it other than “it’s part of the process” and “it’s agile”. There certainly is a lot of confusion out there about what exactly Agile is, and about what constitutes adding value as part of an agile implementation. Sometimes teams feel that by following a Scrum process, we’re agile – but as a team we need to think about value. Scrum, agile, or not – what value are we actually getting from spending the time to estimate tasks? If, as a team, we cannot answer that question, we need to re-evaluate this process and determine whether it’s a waste of time. If there is value, sure, let’s continue doing it, but many times a process is followed for its own sake.

However, it’s easy to see why task estimation is being done: we need the information for our burndown charts. These charts tell us how many task hours we have completed versus how many hours remain, and they are part of the Scrum process that should be presented during the daily stand-up meeting. Ok, sure – then theoretically, IF we need burndown charts, we need tasks estimated in hours.

But, do we need burndown charts?

There are a few teams out there that can accurately estimate blocks of development work in absolute units of time, but they are not the majority. The reason we estimate user stories in story points is that software estimates in absolute measurements of time are hardly ever accurate. So why does Scrum insist on story points for user stories, but on hour estimates for the individual tasks within them? Even when estimating dozens of small tasks individually, we succumb to the same inaccuracy as we would if we estimated the user stories that way.

The only thing that is important at the end of the sprint (or iteration) is a completed story. A non-completed story isn’t worth anything at the end of the sprint, even if 10 of its 12 hours are completed. A quick look at completed vs. non-completed tasks should be enough for the team to know and decide what needs to be done to complete the work by the end of the sprint. Many agile experts would agree, including George Dinwiddie, who recommends in a Better Software article using other indicators for burndown charts, such as burning down or counting story points, instead of hours remaining. Gil Broza, the author of “The Human Side of Agile”, recommends not using burndown charts at all and, among other things, using swim lanes to track progress instead.
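A story-point burndown along the lines Dinwiddie suggests can be this simple: count only finished stories, and treat partially done work as zero. The stories and points below are invented for illustration.

```python
# (points, done?) per story committed to the sprint. Names are illustrative.
sprint_stories = {
    "login page":     (5, True),
    "password reset": (3, True),
    "audit report":   (8, False),  # partially done still burns down nothing
    "export to CSV":  (2, True),
}

committed = sum(pts for pts, _ in sprint_stories.values())
done = sum(pts for pts, finished in sprint_stories.values() if finished)
remaining = committed - done

print(f"{done}/{committed} points done; {remaining} points remaining")
```

No task-hour estimates are needed to produce this view, which is precisely the point.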

In my experience, knowing the number of task hours outstanding in a sprint doesn’t help, and it is an inaccurate metric for planning additional “resource” hours in the sprint. Even though some organizations do this, using task hours to plan for additional “resource” hours during a sprint isn’t valuable, since the absolute time measurements aren’t accurate to begin with.

If you really are determined to be Agile, you need to make sure you understand where processes make sense. After all, the agile manifesto preaches “individuals and interactions over processes and tools”. Following Scrum processes doesn’t necessarily make you Agile; you need to really think about the value of these processes and determine whether they are working for or against your initiatives to become more agile. In the real-world examples of task estimation and burndown charts, it was clear that these established Scrum processes were actually working against the organization in terms of becoming more agile. Even if your goal isn’t to be agile, by identifying and eliminating processes that don’t add value, you are eliminating waste and opening the door to replace those processes with initiatives that can truly add new value.

Dan Douglas is a professional independent Software Consultant and an experienced and proven subject matter expert, decision maker, and leader in the area of Software Architecture. His professional experience represents over 12 years of architecting and developing highly successful large scale solutions.  Dan has been the architect lead on over 15 development projects.

A Little Documentation Please, Defining Just Enough Code Documentation for Agile Manifesto No. 2


This article helps clarify agile manifesto item number two, “Working software over comprehensive documentation”, by defining the different types of documentation being referred to. An analysis of code documentation indicates the complexities that warrant it, and how refactoring and pair programming can help reduce complexity. Finally, an approach is given for evaluating when to document your code and when not to.


The second item in the agile manifesto, “Working software over comprehensive documentation”, indicates that working software is valued more highly than comprehensive documentation, but it’s important to note that there is still real value in documentation.

I see two types of documentation being referred to here. 1) Code and technical documentation typically created in the code by developers working on the system, and 2) System and architectural documentation created by a combination of developers and architects to document the system at a higher level.

I will save discussing system and architectural documentation for future articles, as it is a much more in-depth topic.

So let’s discuss the first type – code documentation.

Questions that many teams ask (agile or not) are: “How much code documentation is enough?” and “Do we need to document our code at all?” Some teams don’t ask, and subsequently don’t document. Many TDD and Agile experts will tell you that TDD will go a long way toward self-documenting your code. You don’t necessarily need to do TDD, but there is a general consensus that good code should be somewhat self-documenting – to what level is subjective, and opinions will vary.

Software code can be self-documenting, but there are almost always complex use cases or business logic that need to have more thorough documentation. In these cases it’s important that just enough documentation is created to reduce future maintenance and technical debt cost.  Documentation in the code helps people understand and visualize what the code is supposed to do.

If the code is complex to implement, does something unexpected such as indirectly affecting another part of the system, or is difficult to understand without examining every step (or line of code), then you should at a minimum consider refactoring it to make it easier to follow and understand. Refactoring may only go so far, though, and there may be complexities that need to be documented even after the code has been nicely refactored. If the code cannot be refactored for any reason, that is another warrant for documentation.
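As a short, hedged illustration of the kind of comment worth keeping: the side effect below is a made-up business rule that no amount of refactoring would make self-evident, so a few lines documenting the intent protect future maintainers. The function, fields, and defect number are all hypothetical.

```python
def apply_discount(order, rate):
    """Apply a percentage discount to an order's total.

    NOTE: by a finance-team rule, discounts above 30% also forfeit the
    order's loyalty points. That side effect is intentional and is not
    obvious from the code alone; removing it previously reintroduced
    defect D-2041. (All names here are illustrative.)
    """
    order["total"] *= (1 - rate)
    if rate > 0.30:
        order["loyalty_points"] = 0
    return order

order = {"total": 100.0, "loyalty_points": 250}
apply_discount(order, 0.40)
print(order)
```

The docstring doesn’t restate what the code does; it records the intent and the non-obvious consequence, which is exactly the documentation that pays for itself in maintenance.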

Think about what can happen if code isn’t documented well. Developers will spend too much time looking at complex code trying to figure out what a system does, how it works, and how it affects other systems. A developer may also jump in without a full understanding of how everything works and what the effects are on other systems. This can create serious regression bugs, and it creates technical debt whenever the original intent of the code is deviated from. A little documentation gives you the quick facts about the intent of the code – something every developer will value when looking at the code in the future.

Of course, in addition to documentation, pair programming can and should be used to aid in knowledge transfer and can be a good mechanism for peers to help each other understand how the code and system work.  Pair programming is also a good mechanism for helping junior and intermediate developers understand when and where they should be documenting their code.

The way I distinguish code that should be documented from code that shouldn’t be is based on future maintenance cost. Consider how your documentation lends itself to ease of ongoing maintenance, and how easing that maintenance will reduce technical debt and contribute to working software over time. If your documentation will directly contribute to working software by eliminating future complexity and maintenance, then document. If there is no value to future maintenance, then don’t. If you are unsure, ask someone a little more senior, or do some pair programming to help figure it out. I’ve also seen value in peer reviews to help ensure documentation is adequate, but I still prefer to instill trust within the team to get it done properly rather than impose a formal review process. When in doubt, document – a little extra documentation is always better than missing documentation.

Note: My motivation for writing this article came after reading a good article on the same topic

A question came in the comments on how to weight the amount of documentation needed in a project.  My article was meant to address that question regarding code documentation with my own opinions.  System level documentation is much more in-depth, so look for a future article addressing documentation at the system level.



Navigating the Technology Stack to Get a Bigger Return on Your Technology Selection


This article introduces the reasons why organizations choose to standardize a technology stack to use on existing and future projects in order to maximize ROI of their technology choices. When selecting technology for new projects, the architect should consider both technologies within and outside of the existing technology stack, but the big picture needs to be carefully understood and consideration needs to be placed on the ROI of introducing new technology versus using existing technology within the technology stack.


As a Software Architect, understanding the long-term effects and ROI of technology selection is critical. When thinking about technology selection for a new and upcoming project, you need to consider the project requirements, but also look beyond them at the bigger picture. Even though at first glance a specific technology might seem perfect for a new project, there may already be a more familiar technology that will actually have a much bigger return on investment in the long term.

Many organizations stick to specific technology stacks to avoid the cost, overhead, and complexity of dealing with too many platforms.   An architect should have specialized technical knowledge of the technology stack used by the organization, and if the organization’s technology stack isn’t standardized, the architect should work to standardize it.

Advantages to an organization of standardizing its technology stack

Development costs – It’s easier to find employees who specialize in one platform than to maintain multiple conflicting platform specializations. When you need developers with conflicting skillsets (e.g., .NET and Java), you will likely need to pay for additional employees to fill the specialization gap.

Licensing costs – It’s typically advantageous to stick with only a few technology vendors to attain better group discounts and lower licensing costs.

Physical tier costs – It’s cheaper and more manageable to run physical tiers that use the same platforms or technologies. Using multiple platforms (e.g., both Apache and Windows-based web servers) requires double the skillset to maintain both server types and to develop applications that work in both environments.

Understand the bigger picture beyond your project to better understand ROI

As an architect, you have responsibility for technology selection as it relates to ROI. Once you are familiar with the organization’s technology stack and its constraints, you can make a better decision about technology selection for your new project. You may want to put a higher weight on technology choices known collectively by your team, but it comes down to understanding the bigger picture beyond your current project and understanding the ROI of both the project and the ongoing costs of the technology choices. You may need to deviate from your existing technology stack to get a bigger ROI, but be careful that the long-term cost of supporting multiple platforms and technologies doesn’t exceed the savings of using a specialized technology for a specific case.

When Microsoft released .NET and its related languages (VB.NET and C#) in 2001, many organizations made the choice to adopt VB.NET or C# and fade out classic VB development. Those that made the switch early paid an initial learning-curve cost. Organizations that kept classic VB as their development technology avoided additional costs at the onset, but paid a bigger price later when employees left the company, finding classic VB developers became more difficult, and the technology became so out of date that maintenance costs and technical debt began to increase dramatically.

Sometimes the choice and ROI will be obvious: the technology in question might not be in use by your organization, but it lends itself well to your existing technology. For example, introducing SQL Server Reporting Services is a logical next step if you are already using SQL Server, and introducing WPF and WCF will complement an organization that is already familiar with development on the Microsoft .NET platform.

In other cases, it may make sense to add a completely new technology to your stack.  For example, it may be advantageous from a cost perspective to roll out Apple iPhones and iPads to your users in the field even though your primary development technology has been Microsoft-based: users are already familiar with the devices, and there are many existing productivity apps they will benefit from.  Developing mobile applications will require an investment in learning Apple iOS or HTML5 development, but the total ROI can still be higher than rolling out Microsoft Windows 8 based devices just because the development team is more familiar with Windows platform development.

Finally, there will be cases where even though the new technology solves a business problem more elegantly than your existing technology stack could, it doesn’t make sense to do a complete platform change in order to get there.  In these cases, the ongoing licensing costs, costs of hiring specialized people, and complexities introduced down the line far outweigh any benefits gained by using the new technology.


The software architect should facilitate the technology selection process by evaluating technology against the ROI of the project while also considering the long-term ROI and carrying costs of the selected technology.  Don’t focus only on your existing technology stack, however; unknown or emerging technologies deserve consideration in the selection process.  Weigh the cost of change and ongoing maintenance of any new technology against the ROI of sticking with the existing stack over the long term.

Dan Douglas is a professional independent Software Consultant and an experienced and proven subject matter expert, decision maker, and leader in the area of Software Architecture. His professional experience represents over 12 years of architecting and developing highly successful large scale solutions.  Dan has been the architect lead on over 15 development projects.

Introducing Significant Architectural Change within the Agile Iterative Development Process


As an architect working within an iterative agile environment, significant architectural decisions need to be made as early in the development process as possible to mitigate the high cost of making them too late in the game. In iterative development, it’s important to distinguish requirements that require significant consideration in the architecture from those that are insignificant. This article contrasts significant and insignificant requirements and demonstrates an approach to implementing both. In agile environments, significant architecture decisions will sometimes need to be made iteratively and late in the development cycle. Guidance is provided on how to move forward by considering many factors, including ROI, risk, cost of change, scope, alternative options, regression, testing bottlenecks, release dates, and more, and on how the architect can collaboratively move toward an implementation approach while ensuring team vision and architectural alignment with the business requirement.


As agile methodologies such as Scrum and XP focus on iterative development (that is, development completed within short iterations of days or weeks), it’s important to identify the requirements that are significant to your architecture within the iterative development process.  Contributing to your software architecture iteratively is key to maintaining it, striking the right balance between architecture and over-architecture, and keeping the architecture aligned with business objectives and ROI throughout development.

Many agile teams make no conscious effort to account for significant software architecture decisions iteratively throughout the development process.  Sometimes it’s the rush to complete features within the iteration, leaving little thought for significant architectural changes; sometimes it’s a lack of experience or of a team vision.  There is usually a lack of understanding of how the guiding principles of the architecture need to be continually established, how they shape the finished product, and how they align with business objectives to produce a return on investment. This is where the architect plays a huge role within the iterative development process.

Inattention to significant architecture decisions, whether at the beginning or midway through a release cycle, can cause significant long-term costs and delay product shipping considerably. Many teams find out too late that the guiding principles of their architecture are not in place and that strategies to establish them must still be devised, at the expense of time, technical debt, and a late ship date.  When requirements from the product team involve core architecture changes or re-engineering, those changes are sometimes made without recognizing the need to strategize and ensure that the guiding principles of the core architecture are in place to maintain business alignment and minimize the cost of technical debt.

Within the iterative development process, it is important that the agile development teams (including the architect) learn to recognize when new requirements are significant and when they are not.  Deciding which requirements are significant and will be carried on as guiding principles of your architecture can be worked out collaboratively with the developers and architects during iteration planning or during a collaborative architectural review session.  This will help ensure that development is not started without consideration to the architecture of these new significant requirements, and that there is time to get team buy-in, ensure business alignment, and create a shared vision of the new guiding principles of your architecture.

In addition, to help prevent surprises during iteration planning, the architect can be involved and work with the product team when preparing the user story backlog to help identify stories that could have a significant impact on the architecture prior to iteration planning.  Steps can be put in place to help the product team understand the impact and to assist the product team in understanding what the ROI should be in contrast to the cost to implement major architectural changes.

So, what distinguishes significant architectural requirements from insignificant ones?

Separate requirements that have architectural significance from those that do not: a significant requirement is distinguished by alignment with business objectives and a high cost of change, while an insignificant requirement aligns more closely with the changing functional requirements of the iterative development process.

An insignificant architectural requirement may still be significant as it pertains to the functional requirements, but not in terms of the core architecture.  To further contrast which decisions to consider significant and which to consider insignificant, take a look at the following comparison.

Significant vs. Insignificant
  • Significant: a high cost to change later if we get it wrong.  Insignificant: we write code to satisfy functional requirements and can easily refactor as needed.
  • Significant: functionality is highly aligned to key business drivers, such as modular components paid for by our customers and customer platform requirements.  Insignificant: a new functional requirement that can be improved or duplicated later by refactoring if necessary.
  • Significant: the impact is core to the application, and introducing the functionality too late in the game carries a very high refactoring and technical debt cost.  Insignificant: the impact is localized and can be refactored easily in the future if necessary.
  • Significant: decisions that affect the direction of development, including platform, database selection, technology selection, development tools, etc.  Insignificant: development decisions such as which design patterns to use, when to refactor existing components and decouple them for reuse, and how to introduce new behavior and functionality.
  • Significant: some of the ‘ilities’ (scalability, maintainability/testability, portability).  Insignificant: other ‘ilities’, such as usability, which in specific cases map better to functional requirements; some functional requirements may require more usability engineering than others.

It is best to handle significant decisions as early as possible in the development process.  As contrasted below, the iterative approach lends itself well to requirements that have an insignificant impact on the architecture, while significant architectural requirements form the guiding principles of your architecture, and getting them right early on lessens the impact of change on the product.

Insignificant Decisions

  • New functional requirements (ex: allowing users to export a report to PDF; there is talk of allowing export to Excel in the future, but it is currently out of scope)
  • Modifications and additions to existing functionality and business logic

How to Approach

Use an agile, iterative approach to development.  These are functional requirements with a low cost of change and a low cost of refactoring.  Write the component to handle only its specific case, and don’t over-plan your code for what you think future requirements might be.  If the time comes to add or improve functionality, refactor the original code to expose a common interface, use repeatable patterns, and so on.  In true agile form, this prevents over-architecture if the anticipated functional requirements are never realized, and there is minimal cost to refactor if they are.
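The PDF/Excel export example can be sketched in code. This is a hedged illustration (written in Python for compactness; the exporter names are hypothetical, and the same shape applies in VB.NET or C#): start with a single concrete implementation, and extract a common interface only when the second format actually becomes a requirement.

```python
# First iteration: satisfy only the in-scope requirement (PDF export).
# The rendering logic is an illustrative stub, not real PDF code.
class PdfReportExporter:
    def export(self, report_rows):
        return "PDF:" + ",".join(report_rows)


# Later, if Excel export becomes a real requirement, refactor to expose
# a common interface instead of speculatively building one up front.
class ReportExporter:
    def export(self, report_rows):
        raise NotImplementedError


class PdfExporter(ReportExporter):
    def export(self, report_rows):
        return "PDF:" + ",".join(report_rows)


class ExcelExporter(ReportExporter):
    def export(self, report_rows):
        return "XLSX:" + ",".join(report_rows)
```

If Excel export is never asked for, the interface is never written, and nothing was over-architected.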
Significant Decisions

  • Our customers use both Oracle and SQL Server
  • Performance and scalability
  • Security considerations
  • Core application features that have a profound impact on the rest of the system (ex: a shared undo/redo system across all existing components)

How to Approach

These decisions need to be made as early as possible, and an architectural approach has to be put in place to satisfy the business and software requirements.  They are significant because there is a huge cost of change (refactoring and technical debt), and potential revenue loss, if they are not put in place correctly and must be refactored later; they are core to the key business requirements.
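For a guiding principle like dual Oracle/SQL Server support, the early architectural move is usually to isolate vendor differences behind a single seam so application code never speaks to one vendor directly. Here is a minimal sketch of that idea (Python for brevity; the dialect and repository names are illustrative, and real .NET code would sit on ADO.NET or an ORM rather than building SQL strings):

```python
class SqlDialect:
    """Seam that isolates vendor-specific SQL behind one interface."""
    def paged_query(self, table, page_size):
        raise NotImplementedError


class SqlServerDialect(SqlDialect):
    def paged_query(self, table, page_size):
        return f"SELECT TOP {page_size} * FROM {table}"


class OracleDialect(SqlDialect):
    def paged_query(self, table, page_size):
        return f"SELECT * FROM {table} WHERE ROWNUM <= {page_size}"


class PartRepository:
    """Application code depends on the seam, never on a vendor."""
    def __init__(self, dialect):
        self.dialect = dialect

    def first_page_sql(self):
        return self.dialect.paged_query("Parts", 50)
```

Application code constructed with either dialect never changes when a new vendor is added; only a new dialect class does. Retrofitting this seam after six months of vendor-specific code is exactly the huge refactoring cost described above.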

Agile Isn’t Optimized For Significant Architectural Change

It’s unfair to assume that the agile development process excels at introducing significant functionality and architecture changes without a large cost.  This is why some requirements are significant and need to be addressed as early in the development cycle as possible, while ensuring alignment with the business objectives.

As great as it would be to map out every significant requirement up front, there are sometimes surprises; this is agile development, after all.  Along with changing functional requirements, significant business changes partway through the development process can still have a major impact on our core architecture and guiding principles, so we need to strategize how to move forward and mitigate the cost.

Certainly, it’s possible to introduce significant core-architecture changes by refactoring, or by scrapping old code and writing new functionality – that’s the agile approach to changing functional requirements, and it’s how agile helps prevent over-architecture and ensures we develop only what’s needed.  The problem is that this doesn’t work well for the significant decisions of your architecture: when we do refactor at that level, the cost can be so high that it dramatically increases time to market and risks revenue loss, customer loss, and disruption to your business.  In these cases, the architect, along with the product and development teams, needs to create a plan to get from here to there.

Mitigating the Cost of Significant Change

There is always a cost to introducing core architectural functionality too late in the game.  The higher the cost of the change and the higher the risk of impact, the more thought needs to go into the following points.

  • The refactoring cost will be high.  Is there an alternative way we can introduce this functionality in a way that will have a minimal impact now without affecting the integrity of the system later?
  • This change is significant and will place a huge burden on the development team to get right. Will the ROI justify the huge cost of change?  For example, is Oracle support really necessary after developing only for SQL Server for the first six months, or is it just a wish from the product team?  Do we really have customers who will only buy our product if it runs on Oracle, and what are the sales projections for those customers?  Is there a way to convince those customers to buy a SQL Server version of the product?  The architect needs to work with the product and business teams to determine next steps.
  • How will this affect regression testing?  Are we creating a burden for the testing team that will require a massive regression testing initiative that will push back our ship date even further?  Is it worth it?
  • How close are we to release?  Do we have time to do this and make our release ship date?
  • What is the impact of delaying our product release because of this change?
  • Is it critical for this release or can it wait until a later release?
  • Can we compromise on functionality?  Can we introduce some functionality now to satisfy some of the core requirements and put a plan in place to slowly introduce refactoring and change to have a minimal impact up front, but still allow us to meet our goal in the future?
  • What is the minimal amount of work and refactoring we need to do?
  • What is the risk of regression bugs implementing these major changes late in the game?  Do we have capacity in our testing team to regression test while also testing new functionality?
  • Are we introducing unnecessary complexity?  Can we make this simple?

Everyone involved in the software needs to be aware of the impact and high cost that significant late-in-the-game changes have on the system, the development and testing teams, ship dates, complexity, refactoring, and technical debt.  There are strategies that can be used, and the points above are a great start in determining how to implement significant architecture changes.  One of the roles of the architect is to facilitate and create the architecture and guiding principles of the system and ensure their long-term consistency.  As the system grows larger and more development is completed, introducing significant architecture changes becomes more complex.  The architect needs to work with all facets of the business (developers, QA, product team, sales, business teams, business analysts, executives, etc.) to ensure business alignment and a solid ROI for significant architecture decisions.

Moving Forward

Once a significant decision is made that will form part of your architecture’s guiding principles, the architect needs to understand the scope of work, determine what will and won’t be included, collaboratively create a plan for getting there, and understand how the changes fit within the iterative development cycle moving forward.  The architect needs to ensure that the product and development teams share the vision, understand why the significant change is being introduced, and understand the work required to get there.  If your team is not already actively pairing, it may be a good time to introduce it, or alternatively to introduce peer reviews or other mechanisms that help ensure consistent quality when refactoring existing code to support significant architectural changes.

Depending on the level of complexity, the testing team may need to adjust its process to ensure adequate regression testing of new and existing requirements affected by the significant architecture change.  For example, if we make a significant change to support both Oracle and SQL Server, existing functionality that was tested only against SQL Server must now be re-tested in both Oracle and SQL Server environments.  The architect or developers can work with the testing team for a short time to determine the degree of testing required and which pieces of functionality need focused attention, so the QA/testing teams direct their efforts correctly.
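One way to keep that dual-database regression burden manageable is to parameterize the existing test suite over both backends rather than duplicating it. A hedged sketch (Python; `quote_identifier` is an illustrative stand-in for any vendor-sensitive piece of functionality under test):

```python
# Each assertion runs once per backend, so the existing suite exercises
# both environments instead of SQL Server only.
BACKENDS = ["sqlserver", "oracle"]


def quote_identifier(name, backend):
    # Vendor-sensitive behavior: SQL Server brackets vs. Oracle quotes.
    return f"[{name}]" if backend == "sqlserver" else f'"{name}"'


def test_quote_identifier_all_backends():
    for backend in BACKENDS:
        quoted = quote_identifier("Parts", backend)
        # The underlying name must survive quoting on every backend.
        assert quoted.strip('[]"') == "Parts"
```

With pytest, the loop would typically become `@pytest.mark.parametrize("backend", BACKENDS)` so each backend shows up as its own test case in the regression report.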


It’s important to distinguish significant architecture decisions from requirements that are insignificant as they relate to the core architecture of your system.  When introducing significant architecture changes iteratively in an agile environment, understand the impact and complexity such changes carry when introduced late in the game, understand their business impact, and ensure the architect works with the rest of the organization to determine business alignment, risk, ROI, and the cost of change before moving forward with a plan to introduce the changes.


Getting Closer To The Fine Line In Software

My latest blog post touched on the fine line between no architecture and over-architecture in software.  I talked a lot about technical debt and why it’s bad. I got some feedback, and some readers wondered exactly how you find that fine line, so here are some suggestions for getting closer to it.

There are a lot of factors to look at to find that ‘fine line’ between over-architecture and no architecture.  If we are looking at a team, some questions to ask might be: How well is the team working together?  What are the team dynamics like?  Is there solid trust within the team?  Is the team focused on team goals and team wins, or are individuals chasing personal glory ahead of the team’s goals?  Does the team even have team goals, or are only individual goals being pursued?  Are people afraid of their superiors, or do they feel free to offer constructive opinions in order to reach the right team solution?

Asking these questions can help narrow down whether general team problems are contributing to the software problems…

If it’s a team problem, I’d suggest starting improvements at that level.  Strategize some team-building sessions, and meet weekly to discuss and resolve any issues from the prior week (in agile environments, this happens at the retrospective at the end of each sprint/iteration).  Get the team working together and setting goals together, and build a culture where the team collaborates all the time, people aren’t afraid to speak up, and everybody works to improve because the only goals that matter are the team goals.  If the team wins, everybody wins; if the team loses, everybody loses.

The most successful teams are the ones that work well together.  This creates a much bigger win than individuals working in silos could achieve.  The result is much improved software and less technical debt.  The same could be said for a sports team or a team of any nature.

Ok, so that’s some basics around team stuff and sorting out team dysfunction.

Once that is sorted out, you need to collectively create goals.  With mutual respect in place among team members, better architectural discussions can be had, with everybody speaking up about the architecture and coding standards.  And do some pair programming as well :)…

Some say code reviews are ineffective and don’t work.  They do work; I’ve seen them work.  So try code reviews.  They are a great way for the entire team to review and discuss existing code and talk about refactoring for future software updates.

Understand the business requirements as best as you can up front, ask the right questions, and get everyone on board with where the software is going.

One thing I’ve always done when deciding what to include in an architecture is to come up with a scenario where the architecture or pattern in question helps solve a real problem (barriers to implementation, complexity concerns, business use cases, etc.) and think about how the architecture lends itself to that.  Look at the cost of implementing and maintaining the architecture, and the learning curve required, versus the value it adds.  Sometimes I’ll only partly implement a pattern or architecture so that the seam is there, and if I really need to implement it fully in the future, the refactoring is simplified.  This has a minimal effect on maintainability, avoids over-architecture, and lets you build the architecture up later when needed without mega-refactoring.
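Partly implementing a pattern can be as small as a single seam: route all callers through one factory function today, so that introducing the full strategy pattern later touches one place instead of every call site. A minimal sketch (Python; the discount-policy names are purely illustrative):

```python
class DefaultDiscount:
    """The only policy that exists today."""
    def apply(self, order_total):
        return order_total * 0.95


def discount_policy():
    # Factory seam: today it returns the single policy.  When new
    # policies arrive later, they plug in here without touching callers.
    return DefaultDiscount()


def invoice_total(order_total):
    # Callers depend on the seam, not on a concrete policy class.
    return discount_policy().apply(order_total)
```

The full pattern (multiple policy classes, selection logic) is deferred until a second policy actually exists, but the refactoring to get there is now trivial.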

Also, try not to create lists, weighting scales, or pro-and-con arguments on paper or a whiteboard when coming up with your design.  If you are solo, take some time out and really think about it, and talk to colleagues about it.  In a team, get everyone together and pound it out.  Don’t analyze it to death, though; it’s not worth the cost of that ;)

Have the architects create a vision and strategy, then discuss it with the entire team to come up with something even better. Implement it, and continually review and adapt it as necessary.  Don’t let one person make an individual decision about an architecture or pattern and then leave the other developers to deal with it and the technical debt it could create. Always get team buy-in for technical strategy.

Someone at the Senior Architect level, by my standard, should surely know how to listen and communicate with the developers on these types of decisions in order to get the best possible outcome.

Software Development, Mission Statements, Business Alignment, and Identifying the Job to Be Done

In my last blog post, Software Development and Steven R. Covey on Leadership, I wrote about an interesting audio excerpt relating to IT departments and software development from Stephen R. Covey’s audiobook “Stephen R. Covey on Leadership: Great Leaders, Great Teams, Great Results”.

As an exercise, I put some of my own thoughts together while reflecting on some of my current and past projects.

Mission Statements

As software developers, instead of saying “Our job is to have world-class technology,” we could become much more specific on a project-by-project basis. There should be a specific mission statement for each solution. For example, a part-inspection solution being developed for a large automotive manufacturer could have the following mission statement, which helps identify the job to be done: “Use technology to eliminate paperwork distribution on the shop floor and reduce the quantity of scrapped parts”.

A global subject-matter-expert team of software developers that comes together to collaborate and help solve technical challenges within a large global organization needs a mission statement too, so that every member of the team can truly understand how initiatives and ideas fit into the mission of the team. Among other things, ideas, initiatives, and discussions can be evaluated against the mission statement to ensure they are in alignment with it.

How are we aligned with the business?

Being aligned with the business is extremely important, and all team members need to support the right goals. We need to understand the metrics used by the organization to determine which goals are being achieved and what the objectives are. We exist only to help the business achieve its objectives. This holds true for both employees and consultants, and neither should lose sight of the goals of individual projects or the goals of the organization. It’s also important to understand how they fit together.

How do we identify the job to be done?

Working with the business, the users, and their current processes, with or without the use of technology, helps us identify how they currently work and where innovation in technology will help.

Based on discussions, meetings, our own business knowledge, and so on, we get a good idea of what needs to be done technically. Depending on the project, the team will be some combination of developers, project managers, and architects, and every team member needs to be aligned and understand the job to be done.

To identify the job to be done, we ask questions about how the technology will add value to the business, to understand specifically what the problem is and how innovation in technology could remove it and add real business value. We need to understand how the solution affects the bottom line of the business: specifically, how it either increases profits or reduces expenses. It’s also important to see how not implementing a solution can have its own negative consequences (ex: mission-critical legacy systems that lack vendor support). We need to remember that EBIT (earnings before interest and taxes) is the bottom line, and it’s what is most important to the organization and therefore most important to us.
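That bottom-line framing can be made concrete with simple payback arithmetic. The numbers below are entirely hypothetical; they only illustrate the shape of the calculation for something like the part-inspection solution mentioned earlier:

```python
# All figures are hypothetical, purely to illustrate the calculation.
build_cost = 120_000           # one-time development cost
annual_maintenance = 15_000    # ongoing cost per year
annual_scrap_savings = 90_000  # reduced scrap expense per year
annual_labor_savings = 20_000  # eliminated paperwork handling per year


def cumulative_ebit_impact(years):
    """Net effect on earnings over the horizon, before interest and taxes."""
    benefit = (annual_scrap_savings + annual_labor_savings) * years
    cost = build_cost + annual_maintenance * years
    return benefit - cost
```

With these assumed figures the impact is negative after year one and positive by year two, which is exactly the kind of projection a business stakeholder will ask to see before funding the work.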

In Conclusion….

Having read the post above, I challenge the readers of my blog to share some of their own thoughts and opinions. You can contact me directly to discuss, email me, or leave a comment on this page. Use the template below if you wish.

  1. Mission Statements
  2. How are we aligned with the business?
  3. How do we identify the job to be done?

Software Development and Steven R. Covey on Leadership

As part of my subscription I had the opportunity to listen to “Stephen R. Covey on Leadership: Great Leaders, Great Teams, Great Results”. It’s a really good listen with some very good ideas. I recommend purchasing it and listening to it in its entirety.

Specifically, I thought this excerpt of audio I took from the audiobook would be valuable to share with my blog readers as it is an example related to IT departments and specifically Software Development within an organization. Please find the audio excerpt here.

Take a listen (audio excerpt is attached) – it’s only a few minutes long. I also think it would be a good catalyst for future discussion.

Notes from the audio:

  • Adding bells and whistles that have nothing to do with the needs of the users
  • Look at key jobs that technology is supposed to do
  • Instead of saying “Our job is to have world class technology” they might say “our job is to increase sales by 15% through proper use of our technology”
  • How does the company identify the job to be done?
  • To understand which features should go into the product, Intuit would watch their customers installing and learning how to use it. Have a conversation with the customer and watch which features they used. Get a better sense of how the software can do an even better job for those customers.
  • Went from 0% to 85% of the small-business software market in 2 years. All the other vendors were focusing on improving features that were irrelevant.
  • Identify the job to be done will influence the choices you make

In my follow up article, I’ve written my own thoughts on this subject – please continue reading at Software Development, Mission Statements, Business Alignment, and Identifying the Job to Be Done

Notes From DevTeach Toronto 2010

I attended DevTeach Toronto 2010 in early March and have found some time to finally summarize my notes. There is certainly value in being able to network with other developers in the community and learn from some of the top people in the industry who are speaking on many different developer related subjects. I tend to write a lot down while attending these kinds of events. I have about 34 pages of notes and chicken scratch written down, and after reviewing them I decided to summarize them and take the most memorable and useful points I jotted down and put them together in a blog post. Along with the 3 days of sessions in which I mostly attended the sessions in the Software Architecture track, I also attended a pre-conference workshop on Agile Application Architecture. Many of the concepts and ideas in the training I was already familiar with but I will still include them here to further emphasize their importance.

You can download my summarized notes by clicking the following link:

Class to Add Instant ‘As You Type’ Filter Functionality to Infragistics UltraCombo Control

In my last blog post, I talked about a filter enhancement to a ComboBox that could be added to a single combo box with just one line of code. The post wasn’t really about the code; it was about good architecture and the importance of good architecture for a system or application.

The filter component follows a sound architectural strategy as it is something that we could write once and easily reuse many times. Please see the blog post about the architecture here: Getting to the Monetary Value of Software Architecture

This post will focus on the actual class that was created in order to allow us to have this functionality.
The code was designed for the WinForms Infragistics UltraCombo control (v6.2, CLR 2.0). These controls are essentially multi-column drop-down lists. We’re using this control rather than the native WinForms ComboBox because of the multi-column ability. In this organization we use a lot of Infragistics controls, but other vendors make very good controls as well.

Before (without filter functionality):

After (with new filter functionality enabled):

(Some information in screen shots above is blurred out to protect customer confidentiality)

As the user is typing into the UltraCombo control the list filters based on what they are typing. In the example above the user has typed in ‘007’, so the list shows any items that have ‘007’ somewhere in the value. As the user is typing in order to filter, a filter icon is displayed in the UltraCombo. All of this functionality is encapsulated in the UltraComboTypeFilter class.

Here is the code for the UltraComboTypeFilter class that will give any UltraCombo control this functionality:

''' <summary>
''' Encapsulates the functionality that allows the user to type into an
''' UltraCombo drop-down and have the list display and filter itself as
''' the user types
''' Dan Douglas Mar 5, 2010
''' </summary>
''' <remarks></remarks>
Public Class UltraComboTypeFilter
    Dim UltraComboControl As Infragistics.Win.UltraWinGrid.UltraCombo
    Dim KeyColumn As String
    Dim _FilterImage As Image

    ''' <summary>
    ''' Create a new instance of the UltraComboTypeFilter class
    ''' </summary>
    ''' <param name="UltraCombo">The Infragistics UltraComboBox control to apply the filter functionality to</param>
    ''' <param name="ColumnToFilter">The key of the column you want to be searched for filtering</param>
    ''' <remarks></remarks>
    Public Sub New(ByVal UltraCombo As Infragistics.Win.UltraWinGrid.UltraCombo, ByVal ColumnToFilter As String)
        UltraComboControl = UltraCombo

        'Add handlers so that the methods in this class can handle the events from the control
        AddHandler UltraComboControl.KeyUp, AddressOf ucbo_KeyUp
        AddHandler UltraComboControl.AfterCloseUp, AddressOf ucbo_AfterCloseUp
        AddHandler UltraComboControl.TextChanged, AddressOf ucbo_TextChanged
        AddHandler UltraComboControl.BeforeDropDown, AddressOf ucbo_BeforeDropDown

        KeyColumn = ColumnToFilter

        FilterImage = My.Resources.FilterIcon 'the filter icon is stored as an embedded resource in the resource file

        'turn off automatic value completion as it can potentially interfere at times with the search/filter functionality
        UltraComboControl.AutoEdit = False

        UltraComboControl.Appearance.ImageHAlign = Infragistics.Win.HAlign.Right 'filter icon will be always displayed on the right side of the text area of the control

        ClearCustomPartFilter() 'by default, clear filters
    End Sub

    Private Sub ShowFilterIcon()
        'add the filter icon to the ComboBox
        UltraComboControl.Appearance.Image = FilterImage
    End Sub

    Private Sub HideFilterIcon()
        UltraComboControl.Appearance.Image = Nothing
    End Sub

    Private Sub ucbo_TextChanged(ByVal sender As Object, ByVal e As System.EventArgs)
        If Trim(UltraComboControl.Text) = "" Then
            ClearCustomPartFilter() 'if there are no characters in the textbox (from dropdown) then remove filters
        End If
    End Sub

    Private Sub ClearCustomPartFilter()
        'clear any filters on the drop down list if they exist, and hide the filter icon
        UltraComboControl.DisplayLayout.Bands(0).ColumnFilters.ClearAllFilters()
        HideFilterIcon()
    End Sub

    Private Sub DoPartDropDownFilter()
        UltraComboControl.DisplayLayout.Bands(0).ColumnFilters(KeyColumn).FilterConditions.Add(Infragistics.Win.UltraWinGrid.FilterComparisionOperator.Like, "*" & UltraComboControl.Text & "*")
    End Sub

    Private Sub ucbo_BeforeDropDown(ByVal sender As Object, ByVal e As System.ComponentModel.CancelEventArgs)
        'clear any filters if they exist before the user drops down the list; if the user starts typing again the filter will be reapplied
        'this is done so that if the user leaves the combo box and then goes back to it and drops down the list, the full list will be
        'there until they start typing a filter again; this is by design
        ClearCustomPartFilter()
    End Sub

    Private Sub ucbo_KeyUp(ByVal sender As Object, ByVal e As System.Windows.Forms.KeyEventArgs)
        'the code in this method starts the filtering process to filter the drop down list if the drop down isn't 'dropped'
        'with this procedure the user can just start typing into the combo box and have the box drop down automatically and filter
        'KeyPress event is not used because of timing issues - the timing of the event is too late for us to filter properly

        'Do not filter or drop down if user hits ESC - we also check other non-entry keys like Left, Right, etc.
        'List of keys that we won't do anything with (an example set - extend as needed)
        Dim IgnoreKeys As New List(Of Integer)
        IgnoreKeys.Add(CInt(Keys.Escape))
        IgnoreKeys.Add(CInt(Keys.Left))
        IgnoreKeys.Add(CInt(Keys.Right))
        IgnoreKeys.Add(CInt(Keys.Up))
        IgnoreKeys.Add(CInt(Keys.Down))
        IgnoreKeys.Add(CInt(Keys.Enter))
        IgnoreKeys.Add(CInt(Keys.Tab))

        If IgnoreKeys.Contains(e.KeyCode) = False Then
            'the inputted key press is valid for drop down filtering
            Dim iSelLoc As Integer = UltraComboControl.Textbox.SelectionStart 'get location of cursor
            If UltraComboControl.IsDroppedDown = False Then
                UltraComboControl.ToggleDropdown() 'drop the list down automatically as the user types
                'toggling drop down causes all text to be highlighted so we will deselect it and put the cursor position back where it was instead of being at 0
                UltraComboControl.Textbox.SelectionLength = 0
                UltraComboControl.Textbox.SelectionStart = iSelLoc
            End If

            'reapply the filter based on the current text
            ClearCustomPartFilter()
            If Trim(UltraComboControl.Text) <> "" Then
                DoPartDropDownFilter()
                ShowFilterIcon()
            End If
        End If

    End Sub

    ''' <summary>
    ''' The image to use for the filter icon shown on the control to be displayed while the control is filtered
    ''' </summary>
    ''' <value></value>
    ''' <returns></returns>
    ''' <remarks></remarks>
    Public Property FilterImage() As Image
        Get
            Return _FilterImage
        End Get
        Set(ByVal value As Image)
            _FilterImage = value
        End Set
    End Property

    Private Sub ucbo_AfterCloseUp(ByVal sender As Object, ByVal e As System.EventArgs)
    End Sub

    Protected Overrides Sub Finalize()
        RemoveHandler UltraComboControl.KeyUp, AddressOf ucbo_KeyUp
        RemoveHandler UltraComboControl.AfterCloseUp, AddressOf ucbo_AfterCloseUp
        RemoveHandler UltraComboControl.TextChanged, AddressOf ucbo_TextChanged
        RemoveHandler UltraComboControl.BeforeDropDown, AddressOf ucbo_BeforeDropDown
    End Sub
End Class

Now, to enable the functionality on any UltraCombo control, just use a single line of code.

Dim PartFilterFunction As New UltraComboTypeFilter(ucboPart, "CustPartNo")

Add the filter icon/image as a resource to the project that contains the UltraComboTypeFilter class and name the resource FilterIcon.

Screencast: A ‘Hello World’ Example of .NET Reflection in Action

As part of my presentation at the October 2009 London .NET User Group event on .NET Attributes and Reflection (see: ) I did a walkthrough on how to use .NET reflection.

This demonstration walks you through the source code for creating a late-bound instance of a class from an assembly loaded using reflection. Once we have the instance of the class created, we use reflection again to call one of its methods.
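To make the idea concrete outside the screencast, here is a minimal sketch of the technique. The assembly file name, class name, and method name below are hypothetical placeholders, not the ones from the presentation:

```vb
Imports System
Imports System.Reflection

Module ReflectionDemo
    Sub Main()
        'Load an assembly from disk at runtime (hypothetical file name)
        Dim asm As Assembly = Assembly.LoadFrom("MyLibrary.dll")

        'Create a late-bound instance of a class by its full type name
        Dim greeter As Object = asm.CreateInstance("MyLibrary.Greeter")

        'Use reflection again to find and invoke one of its methods
        Dim method As MethodInfo = greeter.GetType().GetMethod("SayHello")
        Dim result As Object = method.Invoke(greeter, New Object() {"World"})

        Console.WriteLine(result)
    End Sub
End Module
```

Note that nothing here references MyLibrary.dll at compile time; the type and method are resolved entirely at runtime, which is the whole point of late binding.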

Click this link to view the screencast recording from the presentation:

I do not yet have the source code available to download because the blog doesn’t allow me to post a .zip file.  Once I find another location to post it, I will share the link.  In the meantime, feel free to contact me for it.


Screencast: Redgate .NET Reflector Demo From My Last Presentation

In a follow up to my last post, I talked about Redgate’s .NET Reflector during my presentation at the London .NET User Group event last week (see: )

I recorded the entire presentation using Camtasia Studio, but here is just a snippet from it: a small piece about Redgate .NET Reflector and a little demo of how to use it and how it works.

Check out the screencast here:

The volume is a little low at the beginning because I was using my laptop mic as the audio input and I had walked away from the laptop while presenting the slide.  Once I got into the demo, I was right at my laptop so the audio quality is much better.


My Talk Titled, “.NET Attributes and Reflection – what a developer needs to know……”

On October 19, 2009, the first official meeting of the London .NET User Group was held at Fanshawe College.  There were at least 45 people attending!  Great job, Tony!

In addition to my presentation, we had a presentation on ASP.NET MVC and the featured presentation by Rob Labbe from Microsoft (a security consultant from the ACE team) on web threats and how to mitigate them.  Rob is a fantastic speaker and really captivated the audience.

My presentation ran about 22 minutes including a couple of Q and A’s.  Here’s the link to the PowerPoint as well as the source code I used for the live example of .NET reflection in action.

Here are the live recordings (screencasts) of the 2 demos I did during the presentation:

Using .NET Reflection In Code Demo

Redgate .NET Reflector Demo

Here’s a link to the write up of the event on the user group home page.  It includes a few pictures as well.

Hmm – well, I was about to upload the .zip file for the source code of the reflection example I walked through as part of the presentation, but unfortunately the blog doesn’t allow me to upload zip files.  :(  I’ll try and figure out an alternative place to upload them.  If anyone wants them in the meantime, just let me know and I’ll arrange to send the samples to you.

Also, as of today I’ve joined the Executive Committee for the London .NET User Group.  I’m looking forward to it!

My Video Interview From TechDays Toronto 2009 (SharePoint and BI)

On the final day of TechDays 2009 in Toronto, I was interviewed by IT in Canada about my experience at TechDays and specifically about one of the sessions involving BI and SharePoint.  I’d like to share the link to the interview.  It was completely ad hoc: they caught me at the end of the day, and after asking me a few questions about my experience at TechDays they asked me if I’d like to do an interview.

It’s part of something called “The Efficiency Platform” and is sponsored by Microsoft.

You can get to the video from the main site by scrolling to the bottom of the page.  They have a little sub-site there at the bottom where you can scroll through the videos.  Mine is listed as “SharePoint with Dan Douglas”.

This direct link also works to direct straight to the video with no bells and whistles.

Near the end of the video the background noise really seemed to pick up, so you might need to adjust the volume to hear everything properly at the end.

Screen shot from the site:



My Experience With the VS 2008 Documentation…..

I was using Visual Studio today, like I do most days :), and I wanted context sensitive help. I thought, maybe I’ll try it – I haven’t used it in a while, and I know Document Explorer 2008 is installed as well (this is a newly built Win 7 development box). I remember the value of the MSDN Library back in the VB 6 days, but as we got into VS.NET, VS.NET 2003, and so on, it seemed the MSDN Library took a wrong turn somewhere. It was slow and typically gave us pretty irrelevant information.

So, I was looking for help on a .NET method I was planning on using (I don’t recall the exact method) and I pressed F1 to bring up help in the MSDN Library. I knew the .NET MSDN Library files were not installed, but I set the MSDN Library to look online first for content. I figured this should work without having to install the files locally, since it was looking online. Wrong! “Information Not Found!” :( Crap…..

So, I thought – meh – I’ll just suck it up and install the entire .NET MSDN library to my hard drive. After installing a couple gigs of data to my hard drive I was ready to roll…. I thought….

I wanted to take it for a trial run, so I started selecting code in the Visual Studio editor and pressing the F1 key. I selected Add on the Data dictionary property of the Exception class, hoping that it would be accurate and give me context help on the IDictionary.Add method.

To my disgust :), I got help for some Add method in the Office API :(

Grrr. After trying a few more and noticing I was getting completely irrelevant results, I thought I must be missing something.

I tried it again, and this time I left the caret on Add but didn’t highlight the text, and then pressed F1.

Presto! Well not quite, because the documentation gave me results for Silverlight for some reason – but hey – it’s pretty close!


What I Learned At TechDays 2009 Toronto! Part 3

This will be my last post in my Tech Days 2009 blog post “series” or “trilogy?”.  I just finished soldering some wires for my car stereo – I can listen to my iPod in my car again :) … Now, I figured I could score a few minutes (well much more than that) of free time while supper is cooking, so here we go with my last update on my experience at Tech Days 2009 in Toronto.

A quick update to yesterday’s posting, What I Learned At TechDays 2009 Toronto! Part 2 Business Intelligence, where I talked about the BI session and how useful it was to me: today I’ve started going through Analysis Services and Integration Services in more detail than I ever have before.  As already planned, an Analysis Services and Integration Services implementation will be part of our SQL Server 2008 migration (migrating from SQL Server 2000).  I’m currently setting up some proof of concept Analysis Services projects in our SQL Server 2008 test environment that will allow me to demonstrate the power of Business Intelligence to our power users.

Before I had attended this session at TechDays I was aware of these services as well as Microsoft’s BI option in general.  I’ve dabbled in it before and have watched demos and played around with it, but now I have a renewed interest in this – especially since we have some great projects in the works where strong Business Intelligence will be a powerful addition.

Ok, so on with the new.  As continued from my previous postings, I want to briefly discuss a few more of the sessions I attended at TechDays 2009 Toronto.

3. C# Advanced Features

In the “Going From 0 to 100 Dollars Per Hour with the .NET you never knew” session, I had the opportunity to see explanations and examples of advanced C# language features.  The presenters did a great job explaining the content, and I definitely learned a few things.  Some of the features that were discussed, such as best practices for exception handling and generics, I am already taking advantage of quite often.  I’ve found generics to be extremely powerful; they really can add a lot of value to the architecture of an application.  One interesting point that came up was that “exceptions are for exceptional errors, not for process flow”.  Although I agree with this, I have (once or twice), by design, had my data layer or facade layer raise its own custom exception signifying, for example, that a duplicate entry was being added, telling the UI to “change course” and notify the user that this is not allowed.  I didn’t (and still don’t) see any real harm in this :).
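As a sketch of what I mean by that duplicate-entry case (the exception class, repository call, and message are made up for illustration, not from the session), the data layer can raise its own exception type that the UI catches to change course:

```vb
'Custom exception raised by the data layer when a duplicate entry is detected (hypothetical)
Public Class DuplicateEntryException
    Inherits ApplicationException

    Public Sub New(ByVal message As String)
        MyBase.New(message)
    End Sub
End Class

'In the UI layer (hypothetical repository call):
'
'Try
'    PartRepository.Add(newPart)
'Catch ex As DuplicateEntryException
'    'change course: notify the user instead of treating this as a fatal error
'    MessageBox.Show("That part already exists: " & ex.Message)
'End Try
```

Because the UI catches a specific exception type rather than a general Exception, genuinely unexpected errors still bubble up as usual.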

Some other things discussed in this session are things that would be useful in some scenarios and can contribute to a great architecture, but you just need to know when and how they should be used in cases where they will add value.  Let me briefly mention these things along with the big “take-aways” I took from each item.

  • Anonymous Methods – Keeping method concerns from leaking into class interface – don’t use for repeatable logic
  • Lambdas – Not for anything re-usable; like an anonymous method on steroids
  • Extension Methods – Add behaviour to types without modifying types –  Good for string manipulation, enumerations
  • LINQ To Objects – The SQL of Collections
  • Closures – Powerful way of creating delegate with context
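Although the session was in C#, these features exist in VB 9 as well. Here is a small illustrative sketch of my own (the Truncate extension and the sample data are mine, not from the session) combining an extension method, lambdas, and LINQ to Objects:

```vb
Imports System
Imports System.Collections.Generic
Imports System.Linq
Imports System.Runtime.CompilerServices

Module StringExtensions
    ''' <summary>Extension method: adds behaviour to String without modifying the type</summary>
    <Extension()> _
    Public Function Truncate(ByVal value As String, ByVal length As Integer) As String
        If value.Length <= length Then Return value
        Return value.Substring(0, length)
    End Function
End Module

Module Demo
    Sub Main()
        Dim names As New List(Of String)
        names.Add("Anderson")
        names.Add("Lee")
        names.Add("Douglas")

        'LINQ to Objects with lambdas - "the SQL of collections"
        Dim shortNames = names.Where(Function(n) n.Length > 3) _
                              .Select(Function(n) n.Truncate(5))

        For Each n As String In shortNames
            Console.WriteLine(n)
        Next
    End Sub
End Module
```

Note the take-away from the list above still applies: the lambdas here are throwaway, call-site logic, while the extension method is the piece designed for reuse.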

4. Team System

The “Database Change Management with Team System” session was important, because I think (like us) there are many people using Team System (and TFS) that are not using it effectively.  At the beginning of the session, the question was asked “How many people in the room have a good change management process?” – no one in the room raised their hand – and the room was packed!

We’ve been using Team System and Team Foundation Server for years, and it’s a big improvement from Visual Source Safe; we are familiar with work items and bug tracking, etc, but we still aren’t getting the best value out of it.  This session explained various important features that would be valuable to our organization.

  • Branching – This is probably one of the biggest areas we could add value to.  We don’t currently branch effectively and with TFS you can branch and merge quite well.
  • Managing Change Sets
  • Work Items
  • Classifications
  • Build Automation

The ideas presented in this session will be useful in improving our processes around Team System.

5. Layering

I’m all about layering, really.  I’ve done a few presentations on layering, but in the “Layers – The Secret Language of Architects” session I learned some new things as well.  By the way, the title of this session holds true in my opinion – layering is one of the fundamental cores of software development that an application architect should understand.  This session, by Adam Dymitruk along with John Bristowe, touched on some new topics for me, including MVVM (Model-View-ViewModel), which has recently been introduced by Microsoft but is not yet standardized.  The session also touched on ASP.NET MVC, Domain Model, Design by Contract, Domain Driven Design, and more.  Domain Driven Design really interests me, and I am in the process of learning much more about it.

It was also strongly recommended that you check out MSMQ as it is very useful for message queuing.  This is something I’ve used a little bit and I will agree can be pretty valuable.

This was a great session as it introduced different layering models and design patterns used for an application architecture.

So, that wraps up the top 5 sessions (in no particular order) that I attended from TechDays 2009 Toronto.  If anyone reading this has any comments on any of the three articles I would welcome them very much.  I’d also welcome any comments on how I could improve this type of blog posting in the future.

(I know I left some names out – credit to many of the points listed above goes to the individuals presenting the content – I am hoping to get names to fill in more of the details about the individuals doing the presentations – I didn’t write them all down)



What I Learned At TechDays 2009 Toronto! Part 2 Business Intelligence

In follow up to my last article What I Learned At TechDays 2009 Toronto! Part 1 Windows Mobile 6.5, in this article I will talk about the session I attended on BI.

At the end of the final day of the event, I was video-interviewed by IT in Canada about my thoughts on TechDays 2009 Toronto, and specifically about what I took away from the session on BI.  The interview required some quick thinking, but I feel I did well for an impromptu 3-minute interview.  The point I made at the end of the interview, and that I’d like to re-iterate, is that when attending an event like this it’s important to really think about the sessions in terms of value to the organization – and how you can use the information and advice to bring value back to your organization.  When I get the link to the video I will post it on my blog.

Ok, so I’d like to move on with more of my “take-aways” from TechDays 2009 Toronto –

2. Business Intelligence

I took a lot out of the “Using Microsoft Dashboards, Scorecards, and Analytics to Monitor the Health of your IT Infrastructure” session; for me, it was probably one of the most valuable sessions I attended at TechDays this year.  It focused on Microsoft’s BI offering and used practical examples of using BI for monitoring the health of your IT infrastructure (as the title of the session suggests :))

I feel there is a large amount of value in good Business Intelligence, especially in my current organization (a global automotive parts manufacturer).  We have some power users who can do incredible things with Excel as is, but they’ve also become incredibly reliant on IT to provide them with all kinds of custom reports, graphs, etc.  I’ve seen the work they can do with data analysis and charting in Excel, and if we empowered them to take advantage of data that we make available, say, through Analysis Services, they would have a wealth of data at their disposal to easily consume, chart, and analyze in Excel or other tools.  This would take strain off of IT development in providing reports and many custom reporting options, and would allow users to create, share, and manipulate the exact reports and charts they need.  Our SharePoint-based global corporate intranet, hosted at our head office, would be a perfect location for users to manage and share the content they create.

The demos in this session were great.  I got a good look at how to use Excel 2007 to analyze analytical data from Analysis Services.  It was literally drag and drop, point and click, to embed and aggregate this data in Excel as a chart or table.

There was a lot more to this session as well, and it dug pretty deep into using Integration Services, Analysis Services, Cube Design, Star Schema, Data Source Views, Fact Tables, SharePoint, PerformancePoint Server, etc – each of these technologies can contribute to making a BI solution work!

– to be continued –

What I Learned At TechDays 2009 Toronto! Part 1 Windows Mobile 6.5

TechDays Toronto 2009 wrapped up nicely on Wednesday, and I’ve finally had a chance to go through and review my pages and pages of notes (writing, diagrams, and chicken scratch).  I learned a lot at this event, and I’m planning on blogging a few posts over the next few days about it.  I find that just by blogging and thinking about the things I wrote down at the event helps me to retain a lot more than I would have otherwise, and it gives me another opportunity to think about these topics more deeply.

Ok – before I get to the sessions, let me start with lunch –> Lunch on the 1st day was satisfactory at best, but lunch redeemed itself on the second day with the chicken salad sandwich.  There were some booths and tables set up outside the lunch area demoing products and other things, but I didn’t see much there for developers – although I only gave it a quick “twice-over” and didn’t look too deeply at any of the tables.

The power of Twitter!  I’ve been able to get involved with the TechDays twitter conversation with the tag #TechDays_ca – this was a powerful way to connect with many people attending the event and also many of the speakers and organizers.  I’d recommend to anybody to hit up the Twitter bandwagon.  I use TweetDeck to manage my tweets and twitter conversations.

So, in no particular order, I want to talk about some of the top things I learned and that interested me the most….. Let me start with the Windows Mobile Session…..

1. Windows Mobile 6.5

In the “Taking Your Application on the Road with Windows Mobile Software” session, Mark Arteaga and Anthony Bartolo did a presentation on Windows Mobile 6.5 development and the Windows Mobile Marketplace.  This was a session I was really looking forward to, and it didn’t disappoint.  I have done some mobile development as it relates to the manufacturing environment, mostly related to data collection, bar code scanning, etc.  I’ve done some interesting things around queuing to local SQL databases when the server is unavailable, and things like that.  However, in this session Mark explained things in Windows Mobile 6.5 that were completely cool – and not only cool: practical demos and applications were also discussed.  The potential with Windows Mobile 6.5 is really exciting.

Let me summarize the key points, from my notes, that were most interesting to me:

Fake GPS

  • Used for development of GPS enabled applications
  • Emulates a physical GPS
  • Uses a text file for reading raw GPS data

Cellular Emulator

  • Integrate with Pocket Outlook (contacts, email, SMS, appointments, tasks)
  • In the development environment (Visual Studio .NET)
  • Send phone calls to the emulated phone
  • Send SMS messages

Windows Mobile Marketplace

  • Launch to coincide with the release of Windows Mobile 6.5
  • A market place for developers to sell their applications to Windows mobile users (my impression is that it will be similar to the app store on the iPod)
  • Developers get 70% of money for the purchase of the software by consumers.  Microsoft gets 30% which goes directly into the infrastructure of the Marketplace

WM 6.5 & Misc

  • Full IE browser with the same capabilities as the desktop browser (Note: This is a huge feature in my opinion.  I’m a Windows Mobile user and the current browser is very limited.  Although I hear positive things about Opera, my experience with it has left me wanting to go back to Pocket IE)
  • Gestures (I can see these touch gestures being useful and allowing the developer to create better mobile apps with native gestures built in for flipping, panning, etc)
  • Widgets – similar to gadgets available in Windows XP
  • System state – trapping phone calls, SMS, media player song information, etc
  • Accelerometer available on certain devices – Unified Sensor API is available on codeplex
  • GSensor – Shake and Drop detection

Although I don’t see the new features having a big impact right now on the type of mobile application development that I’m currently involved with on the shop floor, there is definitely potential in the future as more device manufacturers provide hardware that is compatible with the latest Windows Mobile OS.  The move to include a full IE browser, as I understand it, will give the mobile device the same IE functionality as a desktop PC.  I don’t believe this functionality will take away from the types of applications that are currently developed natively for Windows Mobile (versus running in a browser) using .NET, C++, etc., but in the realm of mobile browser based applications this is a huge step forward.  It also means that any existing website should work in the mobile browser – however, if it is not customized for the mobile screen resolution it may not display correctly or may require you to pan and scan the page.

I do see a huge impact for mobile development in the areas outside the shop floor environment with Windows Mobile 6.5.  For anybody interested in (or currently) developing mobile applications on the .NET framework, WM 6.5 is very promising.  Based on what I’ve seen, the quality of available applications should be increased with WM 6.5, and time to market from development to production will likely be able to decrease due to the addition of new native functionality.

At the end of the session they gave away Rock Band Beatles Edition to the winner of an audience competition where audience members got up in front of everyone to describe the mobile application they’ve been working on.  Cool!

To be continued – I will post again shortly about some more things I learned at TechDays Toronto 2009.


The Art and Process of Reusability in Software Development

Reusability is the art of planning and developing application components so that they can be easily reused in other areas, be easily built on top of, and provide a decoupled approach to development and testing.

When developing software and writing your code, a great deal of care has to be taken with respect to reusability.  The first question you should be thinking about is:

Which components do we have available to use that have already been developed?

So, we’re thinking about which components or code we have already developed that we can reuse (either within the same application or in a new application).  Components, in this context, could mean any of the following scenarios:

1. Code that we have available that wasn’t necessarily designed to be reusable:

This is code that we may have developed within another application without really thinking about its reusability; even though it wasn’t designed to be reusable, we can still harness its value.  Depending on the situation, we could abstract the code from its original location and make it reusable – this is worth doing if you can really see the piece of code or component being reused again and again.  It requires modifying the original code or component, and the application using it, in order to get the abstraction.  The original application will now use this component, and your new application will reuse that exact same component.  Improvements to the component can now benefit both applications.
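As a trivial, hypothetical illustration of that abstraction (the class, method, and normalization rule below are invented for the example), a formatting routine originally buried in one application’s form code could be pulled into a shared class library that both the original and the new application reference:

```vb
Imports System

'Shared class library referenced by both the original and the new application
Public Class PartNumberFormatter

    ''' <summary>
    ''' Normalization routine extracted from the original application's form code
    ''' (the rule itself is a hypothetical example)
    ''' </summary>
    Public Shared Function Normalize(ByVal raw As String) As String
        'strip surrounding whitespace, upper-case, and remove embedded spaces
        Return raw.Trim().ToUpper().Replace(" ", "")
    End Function
End Class

'Both applications now call the same component, e.g.:
'Dim partNo As String = PartNumberFormatter.Normalize(txtPart.Text)
```

The point is that once the routine lives in one shared place, a bug fix or improvement to it benefits every application that references it.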

Another option is the good ol’ copy and paste method.  Bad, Bad, Bad!  Well, sometimes it’s bad – not always.  Go into the other application, select what you want to copy, and paste it into the new application – modify as needed.  Presto!  We’ve all done this, and it can be justified if the effort required to copy/paste/modify as many times as you project you will ever need this code is much less than the time it would take to decouple it.  Sometimes you may just do it to be lazy – hopefully, if you do it out of laziness, it doesn’t bite you in the ass the next three or four times you want to reuse the same code, leaving you wishing you’d decoupled it from the get-go.

Sometimes, you may have code or components that you want to reuse, but you have difficulty decoupling them from their original source.  Reusability wasn’t taken into account when the code was originally written, and it’s too tightly coupled to the original application.  The reusability factor here is lost, and typically you have to duplicate the effort and rewrite from scratch in the new application.  Hopefully the second time it gets written, it’s designed to be reusable.

2. 3rd party components we can plug into our application:

There are tons of time-saving components from 3rd party vendors that we can plug into our applications.  These components typically provide functionality that is not available out of the box.  Some examples of available third party components are: data grids, ORM tools, charting, reporting, etc.  These can enhance your applications and save you development time in exchange for the licensing fee of the component.  Purchasing new 3rd party components can be time consuming, as you want to do an extensive search and evaluation of competing components from many vendors before making a purchase decision.

3. Using free source code or components found online:

There are many great source code examples and free components available online that you can plug into your application.  These can be a real timesaver, but they should typically be tested before production more thoroughly than other components, as they usually come with no warranty and can sometimes introduce very unexpected bugs if you are not careful.


Ok, so you’ve thought about the ideas above but still feel you must begin development with new code – you now need to think about future reusability of the code you are writing.

I’ll get into more detail about developing for reusability in Part II of this blog posting.  Coming Soon!

Understanding the Implicit Requirements of Software Architecture

I was reading an article today on the MSDN Architecture website titled Are We Engineers or Craftspeople? I found the following point very interesting:

Implicit requirements are those that engineers automatically include as a matter of professional duty. Most of these are requirements the engineer knows more about than their sponsor. For instance, the original Tacoma Narrows Bridge showed that winds are a problem for suspension bridges. The average politician is not expected to know about this, however. Civil engineers would never allow themselves to be in a position to say, after a new bridge has collapsed, “We knew wind would be a problem, but you didn’t ask us to deal with it in your requirements.”

This is a great analogy for implicit requirements within software architecture, and I believe this idea is what separates the experienced senior software developers and software architects within the industry.

In determining the architecture of a software system, it is the “duty” of the software architect to identify potential problems or risks with a design and mitigate or eliminate those risks.  The stakeholders of the project don’t necessarily understand these risks, nor their importance to the long term success of the project.

Let me describe four risks in software architecture and development that a Software Architect needs to implicitly understand and realize about the system they are designing.  When it comes to these potential risks, getting it right the first time should be a top priority in the architecture of the system.


1. Scalability

Recognizing the scalability requirements of an application is very important.  It is important to understand the projected future usage, user growth, and data growth.  A good rule of thumb is to multiply this projection by a factor of 2 or 3 and develop the system based on that growth.  It is important that your development environment be continually tested against this high usage, to ensure that your development methods, strategy, tools, environment, and connected systems will scale effectively as well.

Also, regardless of the future requirements to scale, experience will point you toward development approaches and tools that scale well without necessarily adding development time or effort.  These approaches should always be used, and they are a testament to the skill and experience of the developer or the individual leading the developers, such as the Software Architect.  An example is developing your database views or queries: it is known from experience that some ways of writing these queries give the best performance out of the box, while other designs may give the same results but are slower, inefficient, and don’t scale well.
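To make that concrete, here is a small sketch using SQLite (the table and data are invented for the example).  Both queries return the same count, but only the second lets the index do the work, because its predicate is on the bare column:

```python
import sqlite3

# Invented schema and data, purely to demonstrate the query-design point.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, created TEXT)")
conn.execute("CREATE INDEX idx_created ON orders (created)")
conn.executemany(
    "INSERT INTO orders (created) VALUES (?)",
    [("2009-03-%02d" % (i % 28 + 1),) for i in range(1000)],
)

# Slow: wrapping the indexed column in a function hides it from the index,
# so SQLite must visit every row instead of seeking within the index.
slow = "SELECT COUNT(*) FROM orders WHERE substr(created, 1, 7) = '2009-03'"

# Fast: a plain range predicate on the bare column is index-friendly.
fast = ("SELECT COUNT(*) FROM orders "
        "WHERE created >= '2009-03-01' AND created < '2009-04-01'")

for sql in (slow, fast):
    count = conn.execute(sql).fetchone()[0]
    plan = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    print(count, plan)
```

On a thousand rows both run instantly; the difference only shows up under growth, which is exactly why this kind of design choice belongs in the architecture rather than in a post-launch firefight.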

By overlooking the importance of scalability, you risk a complete system breakdown when usage exceeds capacity.  This leaves developers scrambling to fix the core scaling problems of the system, or forces a potentially expensive purchase of beefier hardware (that otherwise should not be required) to prop up a badly scaling system.

Compatibility

It is necessary to identify any points of incompatibility with the software system.  You have to look at all of the interfaces and interactions of the system, human and machine, both current and future.  This ranges from the users of the system to the other software and hardware components that interact with it directly or indirectly.  It also includes future compatibility: it’s important to look at future requirements and ensure that the system is developed to meet them.  To do this effectively, the Software Architect needs a broad understanding of a wide range of technology, as well as of the business processes around the software system, in order to make the right choices.  In essence, based on experience and skill, the Software Architect will pick the correct technology to support the current and future compatibility of the application.

Failing to perform this step effectively could mean overlooking a critical system connection, requiring additional development, resources, and funding to correct.  A system could leave some users in the dark, unable to access or use it because they are on older or unsupported platforms.  A good architecture would account for this from the beginning to ensure all users (legacy and current) can use the system.  Another example is a web application or intranet site that doesn’t work properly with a newer browser such as Internet Explorer 8: additional time and money must now be spent to bring it up to a standard that works across multiple web browsers, and the problem could also impede a company-wide initiative to upgrade all browsers to Internet Explorer 8.
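One concrete way to account for legacy and current users at the architecture level is to isolate the version-sensitive edge behind an adapter.  This is only a sketch; the versions, formats, and class names are invented for illustration:

```python
import json

# Hypothetical sketch: version-sensitive output formats are isolated behind
# small adapters, so both legacy and current peers stay supported, and a new
# peer version means adding an adapter rather than rewriting callers.

class LegacyExportAdapter:
    """Older peers in this example only understand CSV."""
    def export(self, data):
        return ",".join(str(item) for item in data)

class ModernExportAdapter:
    """Newer peers in this example accept JSON."""
    def export(self, data):
        return json.dumps(data)

ADAPTERS = {"1.x": LegacyExportAdapter, "2.x": ModernExportAdapter}

def adapter_for(peer_version):
    """Pick an adapter by major version; fail loudly for unknown peers."""
    family = peer_version.split(".")[0] + ".x"
    if family not in ADAPTERS:
        raise RuntimeError("No adapter for peer version " + peer_version)
    return ADAPTERS[family]()

print(adapter_for("1.4").export([1, 2, 3]))  # legacy CSV path
print(adapter_for("2.0").export([1, 2, 3]))  # current JSON path
```

The design choice being illustrated is that compatibility concerns live in one seam of the system, chosen up front, instead of being smeared through the code and discovered after launch.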

Future Maintenance and Enhancements

The future maintenance of a software system is incredibly important, and this idea should be instilled in your brain from the beginning of the software project.   Future maintenance and enhancements encompass everything that will make future updates, bug fixes, and new functionality easier: a solid framework for your application, along with coding consistency, standards, design patterns, reusability, modularity, and documentation.  An experienced Senior Developer or Software Architect should fully understand these concepts, why they are important, and how to implement them effectively.
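As a small sketch of what modularity and reusability buy you in practice (the domain and every name here are invented): keep the business rule in one documented unit and inject its dependencies, so the rule can be tested and changed without touching storage code.

```python
# Illustrative sketch only: a documented, modular business rule with its
# storage dependency injected, so maintenance and testing stay cheap.

class InvoiceService:
    """Holds one business rule in one small, replaceable module."""

    def __init__(self, repository):
        # repository: any object providing get_total(customer_id)
        self._repository = repository

    def is_over_credit_limit(self, customer_id, limit):
        """Reusable rule; callers never reach into storage directly."""
        return self._repository.get_total(customer_id) > limit

# In tests, a trivial stub replaces the real database-backed repository,
# so the rule can be verified (and later changed) in isolation.
class StubRepository:
    def __init__(self, totals):
        self._totals = totals

    def get_total(self, customer_id):
        return self._totals[customer_id]

service = InvoiceService(StubRepository({"acme": 1200}))
print(service.is_over_credit_limit("acme", 1000))   # over a 1000 limit
print(service.is_over_credit_limit("acme", 2000))   # under a 2000 limit
```

When a bug report arrives for this rule, a new team member can find it, reproduce it with a stub, and fix it in one place: exactly the opposite of the “silent killer” scenario below.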

Overlooking this key factor could leave you with a working application, but code updates, fixes, enhancements, and testing will be much harder, and the learning curve for new project members will be much steeper.

This is what I call a “silent killer”: missing this step, or lacking experience in this area, may not be apparent to the end users or stakeholders of the software system at first, but it will be a huge drain on the ability to use, leverage, and maintain the application.

Some serious disadvantages I’ve seen first-hand with this type of system: users report critical bugs that are difficult for the developers to track down and fix, and developers become “mentally drained” and discouraged from doing any kind of maintenance or enhancement.  Because of this, and because it takes many times longer to add new functionality to a poorly maintainable application, these systems evolve poorly and in many cases end up being completely replaced by another system.  Think about the potential long-term financial and business consequences when this step is overlooked!

Usability

The software has to be usable.  Determine which functions are most common for the user and ensure they are easy to find and prominent within the application.  Allowing users to customize the user interface also goes a long way toward letting individual users get the most out of the application.

The user interface, the technology behind it (is it web? Windows? Java? or a combination of these technologies?), user customization, colors, contrast, and user functionality are all important.  I also believe that a user interface has to look somewhat attractive.  The application itself should be usable and self-describing, without requiring the user to read a manual or documentation.  You’ll find that you have more enthusiastic users and fewer technical or service desk support calls when the application is easy to use and performs the functions the user needs to perform.  It should make the job of the user easier!  Simple things such as toolbars, context-sensitive menus, tabbed navigation, and even copy/paste functionality should not be overlooked.  User interface standards also need to be followed, as you do not want the user to be confused because the basic operation of the application differs substantially from the applications they are used to.

Basically, if users or customers do not want to use the software because it is too difficult or cumbersome, you end up with users either abandoning it and going back to the old way of doing things, or being forced by “the powers that be” to use it over their own objections about its usability.  Neither of these situations is ideal, and both result in lost productivity or in potential future productivity gains never coming to fruition.


Conclusion

A failure to identify and mitigate or eliminate these issues could mean a failure or breakdown of the system.  This costs large amounts of money and time in “after the fact” corrections, or in the worst case, money completely wasted on a failed implementation that ends up getting axed altogether.  I’ve witnessed first-hand accounts of both of these scenarios, and they are not pleasant for anyone involved.  To eliminate wasted time and money, we need to make sure that we do software right: by gaining the right experience and skill, and by paying attention to and understanding the implicit requirements expected of a Software Architect, we’ll have high-functioning software that serves its current and future requirements well and provides continual, exceptional value and return on investment.

Though not touched on in this posting, I haven’t forgotten about Buy-In, Security, Availability, Having Proper Business Processes In Place, The Role of The Business Analyst, Communication, Team Leadership, etc.  These points are also very important to a solid foundation for a software project.  I’ll definitely talk about them in a future blog posting.

Thanks for reading!  I welcome any comments (positive or constructive).