Are Scrum Processes Such As Burndowns and Task Estimation Working Against Your Organization’s Agility?

Abstract

This article contrasts Scrum with Agile methodologies and points out that while Scrum can fit within an Agile development team, adopting Scrum alone doesn’t mean your organization has become Agile.  Task estimation and burndowns may be established Scrum processes, but when you analyze their effects you may find they are actually working against your team’s agility.  It’s important to evaluate the value of established processes to determine whether they are helping or hurting your organization’s Agile initiatives.  A real-world example demonstrates how a well-intentioned process can become inefficient within an organization’s Agile implementation.


Are Scrum Processes Such As Burndowns and Task Estimation Working Against Your Organization’s Agility?

Recently, a colleague asked me about task estimation and how to reduce the amount of time it takes his team to do it.  I gave him some suggestions, but it got me thinking more about task estimation and why agile teams are doing it at all.  I’ve also been involved with many organizations that follow Scrum processes and call themselves agile; they are getting there, but they are still putting Scrum processes before people and not really thinking about the value of the processes they are following.  This article looks at how Scrum processes such as burndown charts and task estimation can actually work against your organization’s initiatives to improve agility.

Scrum actually predates the Agile Manifesto, so it’s fair to say that just because you are using Scrum you are not necessarily embracing agile practices.  However, when you look up Scrum on Wikipedia you will see the following: “Scrum is an iterative and incremental agile software development framework for managing software projects and product or application development.”  OK, so it is an “agile software development framework,” so you can forgive people’s perception (even mine) that by following Scrum processes we are Agile.  But I believe there still needs to be a further distinction here.  Scrum is still just a process for managing software development, and I see Scrum as a set of rules that could be used in an agile environment; by following Scrum processes to the letter of the law, we could actually be putting processes over people, which is definitely the opposite of what the Agile Manifesto is trying to achieve.

Scrum, like many other agile implementations, is built around a user story backlog.  We estimate stories in story points, a relative measure, instead of using an absolute measurement of time, and over time we can determine our velocity: how many story points we generally complete in a sprint or iteration.  With a velocity in hand, we can look at the backlog and get a rough idea of how long it will take to complete the remaining stories.
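
As a rough illustration of that arithmetic, a velocity-based forecast can be as simple as the sketch below (the sprint history and backlog size are made up):

    using System;
    using System.Linq;

    class VelocityForecast
    {
        static void Main()
        {
            // Story points completed in the last few sprints (hypothetical history).
            int[] completedPerSprint = { 18, 22, 20, 24 };

            // Velocity: the average number of points the team finishes per sprint.
            double velocity = completedPerSprint.Average();

            // Story points remaining in the backlog (hypothetical).
            int backlogPoints = 130;

            // A rough forecast, not a commitment.
            double sprintsRemaining = Math.Ceiling(backlogPoints / velocity);

            Console.WriteLine($"Velocity: {velocity} points per sprint");
            Console.WriteLine($"Forecast: roughly {sprintsRemaining} sprints to clear the backlog");
        }
    }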

Once the stories are estimated, we task them out for the sprint, but unlike standard Scrum implementations, I don’t like to estimate hours for tasks.  Inherently there is nothing wrong with it, but I, like many Agile experts, don’t see a lot of value in it.  It’s very difficult to estimate work accurately in absolute measurements of time; that’s why we use story points for the stories to begin with.  Estimating at the task level suffers from the same inaccuracy.

In traditional Scrum implementations, estimating tasks in absolute time is still treated as a vital part of the process.  Wikipedia defines a Scrum task as follows: “Added to the story at the beginning of a sprint and broken down into hours. Each task should not exceed 12 hours, but it’s common for teams to insist that a task take no more than a day to finish.”  [Author’s note: the latest version of the Scrum Guide has removed task estimation from the Scrum requirements.]

I had a friendly discussion with a previous client about estimating tasks, and I could never get a better answer as to why they do it than “it’s part of the process” and “it’s agile.”  There certainly is a lot of confusion out there about what exactly Agile is and what constitutes adding value as part of an agile implementation.  Sometimes teams feel that by following a Scrum process they are agile, but as a team we need to think about value.  Scrum, agile, or not: what value are we actually getting from spending the time to estimate tasks?  If, as a team, we cannot answer that question, we need to re-evaluate this process and determine whether it’s a waste of time.  If there is value, sure, let’s continue doing it, but many times a process is followed for the sake of process.

However, it’s easy to see why task estimation is being done: we need the information for our burndown charts.  These charts tell us how many task hours we have completed versus how many remain, and in Scrum they are meant to be reviewed during the daily stand-up meeting.  OK, sure; if we need burndown charts, then we need tasks estimated in hours.

But, do we need burndown charts?

There are a few teams out there that can accurately estimate blocks of development work in absolute units of time, but they are not the majority.  The reason we estimate user stories in story points is that software estimates in absolute measurements of time are hardly ever accurate.  So why does Scrum call for story points for user stories, yet insist on hour estimates for the individual tasks within those stories?  Even when estimating dozens of small tasks individually, we succumb to the same inaccuracy we would face if we estimated the user stories that way.

The only thing that matters at the end of the sprint (or iteration) is a completed story.  A non-completed story isn’t worth anything at the end of the sprint, even if 10 of its 12 estimated hours are done.  A quick look at completed versus non-completed tasks should be enough for the team to know and decide what needs to be done to finish the work by the end of the sprint.  Many agile experts would agree, including George Dinwiddie, who recommends in a Better Software article using other indicators instead of hours remaining for burndown charts, such as burning down or counting story points.  Gil Broza, the author of “The Human Side of Agile,” recommends not using burndown charts at all and instead, among other things, using swim lanes to track progress.
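
If a team still wants a burndown, counting story points is a simple alternative.  Here is a minimal sketch, assuming a hypothetical sprint backlog in which only fully completed stories count:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class StoryPointBurndown
    {
        class Story
        {
            public string Name { get; set; }
            public int Points { get; set; }
            public bool Done { get; set; }   // only a completed story counts; partial tasks don't
        }

        static void Main()
        {
            // Hypothetical sprint backlog.
            var sprint = new List<Story>
            {
                new Story { Name = "Export report to PDF", Points = 5, Done = true },
                new Story { Name = "Customer search",      Points = 8, Done = false },
                new Story { Name = "Login audit trail",    Points = 3, Done = false }
            };

            // Today's burndown value: points not yet delivered.
            int remaining = sprint.Where(s => !s.Done).Sum(s => s.Points);
            Console.WriteLine($"Story points remaining: {remaining}");
        }
    }

Notice that no task hours appear anywhere, which is the point.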

In my experience, knowing the number of hours outstanding in a sprint by adding up task estimates doesn’t help; it’s an inaccurate metric for planning additional “resource” hours in the sprint.  Some organizations do this anyway, but since the absolute time measurements aren’t accurate, the plans built on them aren’t valuable either.

If you really are determined to be Agile, you need to make sure you understand where processes make sense.  After all, the Agile Manifesto preaches “individuals and interactions over processes and tools.”  Following Scrum processes doesn’t necessarily make you Agile; you need to really think about the value of these processes and determine whether they are working for or against your initiatives to become more agile.  In the real-world examples of task estimation and burndown charts, it was clear that these established Scrum processes were actually working against the organization’s efforts to become more agile.  Even if your goal isn’t to be agile, by identifying and eliminating processes that don’t add value, you are eliminating waste and opening the door to replace those processes with initiatives that can truly add new value.

Dan Douglas is a professional independent Software Consultant and an experienced and proven subject matter expert, decision maker, and leader in the area of Software Architecture. His professional experience represents over 12 years of architecting and developing highly successful large scale solutions.  Dan has been the architect lead on over 15 development projects.

A Little Documentation Please, Defining Just Enough Code Documentation for Agile Manifesto No. 2

Abstract

This article helps clarify the second value of the Agile Manifesto, “Working software over comprehensive documentation,” by defining the different types of documentation being referred to. An analysis of code documentation identifies the complexities that warrant it, and shows how refactoring and pair programming can also help reduce complexity. Finally, an approach is given for evaluating when to document your code and when not to.


A Little Documentation Please, Defining Just Enough Code Documentation for Agile Manifesto No. 2

The second value in the Agile Manifesto, “Working software over comprehensive documentation,” indicates that working software is valued more than comprehensive documentation, but it’s important to note that there is still real value in documentation.

I see two types of documentation being referred to here: 1) code and technical documentation, typically created in the code by developers working on the system, and 2) system and architectural documentation, created by a combination of developers and architects to describe the system at a higher level.

I will save discussing system and architectural documentation for future articles, as it is a much more in-depth topic.

So let’s discuss the first point – code documentation

Questions that many teams ask (agile or not) are “How much code documentation is enough?” and “Do we need to document our code at all?”  Some teams don’t ask, and subsequently don’t document.  Many TDD and Agile experts will tell you that TDD goes a long way toward self-documenting your code.  You don’t necessarily need to do TDD, but there is a general consensus that good code should be somewhat self-documenting; to what level is subjective, and opinions will vary.

Code can be self-documenting, but there are almost always complex use cases or pieces of business logic that need more thorough documentation. In these cases it’s important that just enough documentation is created to reduce future maintenance and technical debt costs.  Documentation in the code helps people understand and visualize what the code is supposed to do.
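
As a small, hypothetical illustration of “just enough” documentation, the comment below captures a business rule that the arithmetic alone can’t express (the rule and the ticket reference are invented):

    using System;

    class InvoiceCalculator
    {
        // Hypothetical business rule, documented because it is easy to miss when
        // reading the arithmetic alone: invoices more than 30 days overdue are
        // charged a 2% late fee, but government accounts are exempt (ticket FIN-142).
        public static decimal LateFee(decimal amount, int daysOverdue, bool isGovernmentAccount)
        {
            if (isGovernmentAccount || daysOverdue <= 30)
                return 0m;

            return amount * 0.02m;
        }

        static void Main()
        {
            Console.WriteLine(LateFee(1000m, 45, false)); // 20.00
            Console.WriteLine(LateFee(1000m, 45, true));  // 0 (exempt)
        }
    }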

If the code is complex to implement, does something unexpected (such as indirectly affecting another part of the system), or is difficult to understand without examining every step (or line of code), then you should at a minimum consider refactoring it to make it easier to follow and understand.  Refactoring only goes so far, though, and there may be complexities that need to be documented even after the code has been nicely refactored.  If the code cannot be refactored for any reason, you have another reason to document.
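
Refactoring toward self-documenting code often just means extracting well-named methods.  A minimal before-and-after sketch, using an invented shipping rule:

    using System;

    class ShippingRules
    {
        // Before: the intent is buried in the condition.
        public static bool QualifiesBefore(decimal orderTotal, string country, bool backordered)
        {
            return orderTotal >= 75m && country == "CA" && !backordered;
        }

        // After: the pieces are named, so the method reads like the (hypothetical) requirement.
        public static bool QualifiesForFreeShipping(decimal orderTotal, string country, bool backordered)
        {
            return MeetsMinimumSpend(orderTotal) && ShipsDomestically(country) && !backordered;
        }

        static bool MeetsMinimumSpend(decimal orderTotal) => orderTotal >= 75m;
        static bool ShipsDomestically(string country) => country == "CA";

        static void Main()
        {
            Console.WriteLine(QualifiesForFreeShipping(80m, "CA", false)); // True
        }
    }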

Think about what can happen if code isn’t documented well.  Developers will spend too much time looking at complex code trying to figure out what the system does, how it works, and how it affects other systems.  A developer may also jump in without a full understanding of how everything works and what the effects on other systems are.  This can introduce serious regression bugs and technical debt if the original intent of the code is deviated from.  A little documentation gives you the quick facts about the intent of the code, something every developer will value when looking at it in the future.

Of course, in addition to documentation, pair programming can and should be used to aid knowledge transfer, and it is a good way for peers to help each other understand how the code and system work.  It is also a good way to help junior and intermediate developers understand when and where they should be documenting their code.

The way I distinguish code that should be documented from code that shouldn’t be is based on future maintenance cost.  Consider how your documentation lends itself to ease of ongoing maintenance and how easing that maintenance will reduce technical debt and contribute to working software over time.  If your documentation will directly contribute to working software by eliminating future complexity and maintenance, then document. If there is no value to ongoing future maintenance, then don’t.  If you are unsure, ask someone a little more senior, or do some pair programming to help figure it out.  I’ve also seen value in peer reviews to help ensure documentation is covered adequately, but I still prefer to instill trust within the team to get it done properly rather than relying on a formal review process.  When in doubt, document: a little more documentation is always better than missing documentation.

Note: My motivation for writing this article came after reading a good article on the same topic: http://www.boost.co.nz/blog/random-thoughts/agile-manifesto-number-two/

A question came up in the comments about how to weigh the amount of documentation needed in a project.  My article was meant to address that question for code documentation, with my own opinions.  System-level documentation is a much more in-depth topic, so look for a future article addressing documentation at the system level.

-Dan


Navigating the Technology Stack to Get a Bigger Return on Your Technology Selection

Abstract

This article introduces the reasons why organizations standardize on a technology stack for existing and future projects in order to maximize the ROI of their technology choices. When selecting technology for new projects, the architect should consider technologies both within and outside of the existing stack, but the big picture needs to be carefully understood, and consideration must be given to the ROI of introducing new technology versus using existing technology within the stack.


Navigating the Technology Stack to Get a Bigger Return on Your Technology Selection

For a Software Architect, understanding the long-term effects and ROI of technology selection is critical.  When thinking about technology selection for a new and upcoming project, you need to consider the project requirements, but also look beyond them at the bigger picture.  Even though at first glance a specific technology might seem perfect for a new project, there may already be a more familiar technology that will actually have a much bigger return on investment in the long term.

Many organizations stick to specific technology stacks to avoid the cost, overhead, and complexity of dealing with too many platforms.   An architect should have specialized technical knowledge of the technology stack used by the organization, and if the organization’s technology stack isn’t standardized, the architect should work to standardize it.

Advantages to an organization of standardizing its technology stack

Development costs – It’s easier to find employees who specialize in a specific platform than to staff multiple, conflicting platform specializations.  When you need developers with distinct skill sets (e.g., .NET and Java), you will likely need to pay for additional employees to fill the specialization gap.

Licensing costs – It’s typically advantageous to stick with only a few technology vendors to attain better group discounts and lower licensing costs.

Physical tier costs – It’s cheaper and easier to manage physical tiers that use the same platforms or technologies.  Using multiple platforms (e.g., both Apache and Windows-based web servers) requires double the skill set to maintain both server types and to develop applications that work in both environments.

Understand the bigger picture beyond your project to better understand ROI

As an architect, you have a responsibility in technology selection as it pertains to ROI.  Once you are familiar with the organization’s technology stack and its constraints, you can make a better technology selection decision for your new project.  You may want to put a higher weight on technology choices known collectively by your team, but it comes down to understanding the bigger picture beyond your current project: the ROI of the project itself and the ongoing costs of the technology choices used.  You may need to deviate from your existing technology stack to get a bigger ROI, but be careful that the long-term cost of supporting multiple platforms and technologies doesn’t exceed the savings of using a specialized technology for a specific case.
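
The weighing itself doesn’t need to be sophisticated.  A back-of-the-envelope comparison like the sketch below, with all figures hypothetical, is often enough to frame the conversation with the business:

    using System;

    class TechnologySelectionRoi
    {
        static void Main()
        {
            int years = 5;

            // Option A: stay on the existing stack (hypothetical figures).
            decimal existingBuildCost  = 120_000m;
            decimal existingYearlyCost = 20_000m;   // licensing, support, existing skills

            // Option B: introduce a new platform (hypothetical figures).
            decimal newBuildCost       = 90_000m;   // the new tool fits this project better
            decimal newYearlyCost      = 45_000m;   // new licenses, training, extra specialists

            decimal existingTotal = existingBuildCost + existingYearlyCost * years;
            decimal newTotal      = newBuildCost + newYearlyCost * years;

            Console.WriteLine($"Existing stack over {years} years: {existingTotal:C0}");
            Console.WriteLine($"New platform over {years} years:   {newTotal:C0}");
            Console.WriteLine(existingTotal <= newTotal
                ? "The cheaper project is not the cheaper choice over the long term."
                : "The new platform pays for itself over the period.");
        }
    }

The point is not the arithmetic; it’s that the ongoing costs, rather than the initial project cost, usually decide the answer.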

When Microsoft released .NET and its related languages (VB.NET and C#) in 2002, many organizations chose to adopt VB.NET or C# and phase out classic VB development.  Those that made the switch early paid an initial learning-curve cost.  Organizations that kept classic VB as their development technology avoided additional costs at the onset; however, they paid a bigger price later when employees left the company, finding classic VB developers became more difficult, and the technology became so out of date that maintenance costs and technical debt began to increase dramatically.

Sometimes the choice and ROI will be obvious; the technology in question might not be in use by your organization, but it lends itself well to your existing technology.  For example, introducing SQL Server Reporting Services is a logical next step if you are already using SQL Server, and introducing WPF and WCF will complement an organization that is already familiar with development on the Microsoft .NET platform.

In other cases, it may make sense to add completely new technology to your technology stack.  For example, it may be advantageous from a cost perspective to roll out Apple iPhones and iPads to your users in the field, even though your primary development technology has been Microsoft-based.  Users are already familiar with the devices, and there are many existing productivity apps they will benefit from.  Developing mobile applications will require an investment in learning Apple iOS development or HTML5 development, but the total ROI may still be higher than if the organization rolled out Microsoft Windows 8-based devices just because its development team is more familiar with Windows platform development.

Finally, there will be cases where even though the new technology solves a business problem more elegantly than your existing technology stack could, it doesn’t make sense to do a complete platform change in order to get there.  In these cases, the ongoing licensing costs, costs of hiring specialized people, and complexities introduced down the line far outweigh any benefits gained by using the new technology.

Summary

It’s important that the software architect facilitates the technology selection process by evaluating technology based on the ROI of the project while also considering the long-term ROI and associated costs of the selected technology.  Don’t focus only on your existing technology stack, however; unfamiliar or emerging technologies should also be considered in the selection process.  Careful consideration should be given to the cost of change and ongoing maintenance of any new technology, and its ROI needs to be evaluated against the ROI of sticking with the existing technology stack over the long term.


Introducing Significant Architectural Change within the Agile Iterative Development Process

Abstract

As an architect working within an iterative agile environment, it’s important to understand that significant architectural decisions need to be made as early in the development process as possible to mitigate the high cost of making these decisions too late in the game. In iterative development, it’s important to distinguish between requirements that require significant architectural consideration and those that are insignificant to the architecture. This article contrasts significant and insignificant requirements and demonstrates an approach to implementing each. In agile environments, significant architecture decisions will sometimes need to be made iteratively and late in the development cycle. Guidance is provided on determining how to move forward by considering many factors, including ROI, risk, cost of change, scope, alternative options, regression, testing bottlenecks, release dates, and more. Further guidance is provided on how the architect can collaboratively move forward with an approach to implementation while ensuring team vision and architectural alignment with the business requirement.


Introducing Significant Architectural Change within the Agile Iterative Development Process

As agile methodologies such as Scrum and XP focus on iterative development (that is, development completed within short iterations of days or weeks), it’s important to distinguish the requirements that are significant to your architecture within the iterative development process.  Contributing to your software architecture iteratively is very important to maintaining that architecture, striking the right balance between architecture and over-architecture, and ensuring that the architecture stays aligned with the business objectives and ROI throughout the iterative development process.

Many agile teams are not making a conscious effort to ensure that significant software architecture decisions are accounted for iteratively throughout the development process.  Sometimes it’s because there is a rush to complete features within the iteration and little thought is given to significant architectural changes; sometimes it’s a lack of experience or a lack of team vision.  There is usually a lack of understanding of how the important guiding principles of the architecture need to be continually established, how they shape the finished product, and how they align with the business objectives in order to see a return on investment. This is where the architect plays a huge role within the iterative development process.

A lack of attentiveness to significant architecture decisions, whether at the beginning or midway through a release cycle, can cause major long-term costs and seriously delay product shipping. Many teams find out too late in the game that the guiding principles of their architecture are not in place, and strategies to get there still need to be devised at the expense of time, technical debt, and being late to ship.  When requirements from the product team involve core architecture changes or re-engineering, those changes are sometimes made without recognizing the need to strategize and ensure that the guiding principles of the core architecture are in place to maintain ongoing business alignment and minimal technical debt cost.

Within the iterative development process, it is important that the agile development team (including the architect) learns to recognize when new requirements are significant and when they are not.  Deciding which requirements are significant and will be carried forward as guiding principles of your architecture can be worked out collaboratively by the developers and architects during iteration planning or during a collaborative architectural review session.  This helps ensure that development is not started without consideration of the architectural impact of these new significant requirements, and that there is time to get team buy-in, ensure business alignment, and create a shared vision of the new guiding principles of your architecture.

In addition, to help prevent surprises during iteration planning, the architect can work with the product team while the user story backlog is being prepared to identify stories that could have a significant impact on the architecture before iteration planning takes place.  Steps can be put in place to help the product team understand the impact and what the ROI should be in contrast to the cost of implementing major architectural changes.

So, what distinguishes significant architectural requirements from insignificant ones?

Separate the requirements that have architectural significance from the ones that do not.  Significant requirements are distinguished by their alignment with business objectives and a high cost of change; insignificant requirements are more closely aligned with the changing functional requirements handled within the iterative development process.

An architecturally insignificant requirement may still be significant as it pertains to the functional requirements, but not in terms of the core architecture.  To further contrast which decisions to consider significant and which to consider insignificant, take a look at the following comparison.

Significant:  A high cost to change later if we get it wrong.
Insignificant:  We write code to satisfy functional requirements and can easily refactor as needed.

Significant:  Functionality is highly aligned with key business drivers, such as modular components paid for by our customers and customer platform requirements.
Insignificant:  A new functional requirement that can be improved or duplicated later by means of refactoring if necessary.

Significant:  The impact is core to the application, and introducing the functionality too late in the game will have a very high refactoring and technical debt cost.
Insignificant:  The impact is localized and can be refactored easily in the future if necessary.

Significant:  Decisions that affect the direction of development, including platform, database selection, technology selection, development tools, etc.
Insignificant:  Development decisions such as which design patterns to use, when to refactor existing components and decouple them for re-use, and how to introduce new behavior and functionality.

Significant:  Some of the ‘ilities’ (including scalability, maintainability/testability, and portability).
Insignificant:  Some of the ‘ilities’, such as usability, which in specific cases map better to functional requirements; some functional requirements may require more usability engineering than others.

It is best to handle the significant decisions as early as possible in the development process.  As contrasted below, you can see how the iterative approach lends itself well to requirements that have an insignificant impact on the architecture.  You can also see how significant architectural requirements really form the guiding principles of your architecture, and getting them right early on lessens the impact of change on the product.

Insignificant Decisions

Examples:

  • New functional requirements (ex: allowing users to export a report to PDF. There is talk of allowing export to Excel in the future as well, but it is currently not in scope.)
  • Modifications and additions to existing functionality and business logic

How to Approach

Use an agile, iterative approach to development.  This is a functional requirement with a low cost of change and a low cost of refactoring.  Write the component to handle only its specific case, and don’t plan your code too much around what you think the future requirements might be.  If the time comes to add or improve functionality, we refactor the original code to expose a common interface, use repeatable patterns, and so on (see the sketch that follows).  In true agile form, this prevents over-architecture if future advances in the functional requirements are never realized, and there is minimal cost to refactor if they are.
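
For instance, here is a minimal sketch of that kind of refactoring, using a hypothetical reporting feature.  The first version shipped with only the PDF export and no interface at all; the common interface is extracted only once the Excel requirement actually arrives:

    using System;

    // Hypothetical reporting feature used for illustration.
    // The interface was extracted only when a second export format became real work.
    interface IReportExporter
    {
        void Export(string reportName);
    }

    class PdfReportExporter : IReportExporter
    {
        public void Export(string reportName) =>
            Console.WriteLine($"Exporting '{reportName}' to PDF...");
    }

    // Added later, when the Excel requirement arrived; no speculative code up front.
    class ExcelReportExporter : IReportExporter
    {
        public void Export(string reportName) =>
            Console.WriteLine($"Exporting '{reportName}' to Excel...");
    }

    class ExportDemo
    {
        static void Main()
        {
            IReportExporter exporter = new PdfReportExporter();
            exporter.Export("Quarterly Sales");

            exporter = new ExcelReportExporter();
            exporter.Export("Quarterly Sales");
        }
    }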

Significant Decisions

Examples:

  • Our customers are using both Oracle and SQL Server
  • Performance and scalability
  • Security considerations
  • Core application features that have a profound impact on the rest of the system (for example, a shared undo/redo system across all existing components)

How to Approach

These decisions need to be made as early as possible, and an architectural approach has to be put in place to satisfy the business and software requirements.  These decisions are usually significant because there is a huge cost of change (refactoring and technical debt) and potential revenue loss if they are not put in place correctly or need to be reworked later.  They are core to the key business requirements, and retrofitting them is expensive (see the sketch that follows).
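
To make the Oracle/SQL Server example concrete, here is a minimal sketch of the kind of guiding principle that is cheap to establish early and very expensive to retrofit later: application code depends on a small abstraction rather than on a specific database.  The interface and classes are hypothetical placeholders, not a full data layer:

    using System;

    // Guiding principle: application code never talks to a specific database directly.
    interface ICustomerRepository
    {
        string GetCustomerName(int customerId);
    }

    // One implementation per supported platform; the rest of the system doesn't change.
    class SqlServerCustomerRepository : ICustomerRepository
    {
        public string GetCustomerName(int customerId) =>
            $"(SQL Server) customer {customerId}";   // a real implementation would query SQL Server
    }

    class OracleCustomerRepository : ICustomerRepository
    {
        public string GetCustomerName(int customerId) =>
            $"(Oracle) customer {customerId}";       // a real implementation would query Oracle
    }

    class DataAccessDemo
    {
        static void Main()
        {
            // Chosen once, at the edge of the system (by configuration in a real product).
            ICustomerRepository repository = new SqlServerCustomerRepository();
            Console.WriteLine(repository.GetCustomerName(42));
        }
    }

If this boundary exists from the start, adding Oracle later is mostly a matter of writing and testing a second implementation; if it doesn’t, the change touches every piece of code that ever built a query.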

Agile Isn’t Optimized For Significant Architectural Change

It’s unfair to assume that the agile development process is built to excel at introducing significant functionality and architecture changes without a large cost.  This is why significant decisions need to be made as early in the development cycle as possible, while ensuring there is alignment with the business objectives.

As great as it would be to map out every significant requirement as early as possible, there are sometimes surprises; this is agile development, after all.  We need to understand that along with changing functional requirements, significant business changes can occur partway through the development process and still have a major impact on our core architecture and guiding principles, so we need to mitigate the impact and strategize how to move forward.

Certainly, it’s possible to introduce significant core-architecture changes by means of refactoring, or by scrapping old code and writing new functionality; that’s the agile approach to changing functional requirements and how agile helps prevent over-architecture and ensures we only develop what’s needed.  The problem is that this doesn’t work well for the significant decisions of your architecture: when we do refactor, the cost can be so high that it dramatically increases your time to market and risks revenue loss, customer loss, and disruption to your business.  In these cases, the architect, along with the product and development teams, needs to create a plan to get to where they need to be.

Mitigating the Cost of Significant Change

There is always a cost to introducing core architectural functionality too late in the game.  The higher the cost of the change and the higher the risk of impact, the more thought and consideration needs to be put into the following points.

  • The refactoring cost will be high.  Is there an alternative way we can introduce this functionality in a way that will have a minimal impact now without affecting the integrity of the system later?
  • This change is significant and will require a huge effort by the development team to get it right. Will we have a significant ROI to justify the huge cost of change?  For example, is Oracle support really necessary after developing only for SQL Server for the first six months, or is it just a wish from the product team?  Do we really have customers who will only buy our product if it supports Oracle, and what are the sales projections for those customers?  Is there a way we can convince these customers to buy a SQL Server version of the product?  The architect needs to work with the product and business teams to determine next steps.
  • How will this affect regression testing?  Are we creating a burden for the testing team that will require a massive regression testing initiative that will push back our ship date even further?  Is it worth it?
  • How close are we to release?  Do we have time to do this and make our release ship date?
  • What is the impact of delaying our product release because of this change?
  • Is it critical for this release or can it wait until a later release?
  • Can we compromise on functionality?  Can we introduce some functionality now to satisfy some of the core requirements and put a plan in place to slowly introduce refactoring and change to have a minimal impact up front, but still allow us to meet our goal in the future?
  • What is the minimal amount of work and refactoring we need to do?
  • What is the risk of regression bugs implementing these major changes late in the game?  Do we have capacity in our testing team to regression test while also testing new functionality?
  • Are we introducing unnecessary complexity?  Can we make this simple?

Everyone involved in the software needs to be aware of the impact and high cost that significant late-in-the-game changes will have on the system, the development and testing teams, ship dates, complexity, refactoring, and the technical debt that could be introduced.  There are strategies that can be used, and the points above are a great start in determining how to approach the implementation of significant architecture changes.  One of the roles of the architect is to help facilitate and create the architecture and guiding principles of the system and ensure its long-term consistency.  As the system grows larger and more development is completed, introducing significant architecture changes becomes more complex.  The architect needs to work with all facets of the business (developers, QA, product team, sales, business teams, business analysts, executives, etc.) to help ensure business alignment and a solid ROI on significant architecture decisions.

Moving Forward

Once a significant decision is made that will form part of your architecture’s guiding principles, the architect needs to understand the scope of work, determine what will be included and what won’t, collaboratively create a plan for how the team will get there, and understand how the changes will fit within the iterative development cycle moving forward.  The architect needs to ensure that the product and development teams share the vision, understand the reasons for introducing the significant change, and understand the work that will be required to get there.  If your team is not already actively pairing, it may be a good time to introduce it, or alternatively introduce peer reviews or other mechanisms to help ensure consistent quality when refactoring existing code to support significant architectural changes.

Depending on the level of complexity, the testing team may need to adjust its testing process to ensure adequate regression testing of the new and existing requirements affected by the significant architecture change.  For example, if we make a significant change to support both Oracle and SQL Server, we need to ensure that existing functionality that was only tested against SQL Server is now re-tested in both Oracle and SQL Server environments.  The architect or developers can work with the testing team for a short time to help determine the degree of testing and which pieces of functionality specifically need to be focused on, so that the QA/testing team is focusing its efforts correctly.

Summary

It’s important to distinguish significant architecture decisions from requirements that are insignificant as they relate to the core architecture of your system.  When introducing significant architecture changes iteratively within an agile environment, understand the impact and complexity those changes carry when they arrive late in the game.  Understand their business impact as well, and make sure the architect works with the rest of the organization to determine the business alignment, risk, ROI, and cost of change before moving forward with a plan to introduce the significant architectural changes.



Software Development and Stephen R. Covey on Leadership

As part of my audible.com subscription I had the opportunity to listen to “Stephen R. Covey on Leadership: Great Leaders, Great Teams, Great Results.” It’s a really good listen and has some very good ideas. I recommend purchasing it and listening to it in its entirety.

I thought this audio excerpt from the audiobook would be valuable to share with my blog readers, as it is an example related to IT departments and specifically to software development within an organization. Please find the audio excerpt here.

Take a listen (audio excerpt is attached) – it’s only a few minutes long. I also think it would be a good catalyst for future discussion.

Notes from the audio:

  • Adding bells and whistles that have nothing to do with the needs of the users
  • Look at key jobs that technology is supposed to do
  • Instead of saying “Our job is to have world class technology” they might say “our job is to increase sales by 15% through proper use of our technology”
  • How does the company identify the job to be done?
  • To understand what features should go into the product Intuit would watch their customers installing and learning how to use the product. Have a conversation with the customer and watch what features they used. Get a better sense of how the software can even do a better job for those customers.
  • Intuit went from 0% to 85% of the small business software market in two years. All the other vendors were focusing on improving functions that were irrelevant.
  • Identifying the job to be done will influence the choices you make

In my follow-up article, I’ve written my own thoughts on this subject; please continue reading at Software Development, Mission Statements, Business Alignment, and Identifying the Job to Be Done

CIPS London Annual General Meeting 2009 Summary

I just want to write about tonight’s London CIPS event and Annual General Meeting.  The event took place at the InfoTech head office here in London, Ontario, and began with pizza and pop followed by a presentation by David Canton on website legal issues.  The CIPS Annual General Meeting followed, and among other things we nominated new board members, voted, and announced the officers.  I’m pleased to announce my new position on the executive as Treasurer.  I’ll be taking over the role from Jonathan Korchuk, who has served in that role for the last several years.

I’m looking forward to my new responsibilities as Treasurer, and the role fits nicely within the scope of my long-term objectives.  Along with being a contributor and providing input at the executive meetings, as Treasurer I will be responsible for the following:

“The Treasurer is the custodian of all the official property and records of the Section, and will deposit all the funds of the Section in a financial institution as approved by the Board. He will collect all monies, keep complete accounts, and arrange for payment of all approved indebtedness of the Section and keep proper vouchers for such payments. The Treasurer will issue, as required, notices to members whose annual dues are in arrears, and will submit an annual financial report, and any other financial reports as required by the Board. In addition, the Treasurer will submit an annual audited financial statement to the Board and to the National Office within six months following the Section’s Annual General Meeting.”

We have a total of four new CIPS Executive members tonight!  I believe we have a very strong executive team and I’m looking forward to a great year on the Executive.

Tim Hodges received great appreciation at the event tonight for his work as CIPS London President over the last several years.  He’ll continue in this role again for this executive year.  A job well done!

We’ll soon be updating the CIPS London website to reflect this new team, which reminds me that I have to send a short description/bio along with a picture to our webmasters so they can update the site.  http://local.cips.ca/london/executive.asp

 

CIPS London 2009/2010 Executive Team

Tim Hodges – President

Tony Curcio – Vice President

Jonathan Korchuk – CIPS Ontario Representative

Margaret Kubasek-Vizniowski – Program Chair

Yotam Sichilima – Web Master

Abdalla EL Najjar – Student Services

Mathew Whitehead – At large, role to be determined (former role: Social Networking)

Dan Douglas – Treasurer

Carolyn Marshall – Communications Director

Mike Bondi – Social Networking

We’ve also decided to create a Communications Team, which will consist of the webmasters, the Social Networking officer, and the Communications Director.

IT Events Update

I’d like to throw some information out there regarding IT/networking events, user group meetings, code camps, and the like.  You can find like-minded individuals at all of these types of developer events.  There is a large, supportive IT community out there, and I would encourage anyone in this industry to check out some of these events.

Below are links to some of the types of events I’ve attended this year.  Most of these types of events should be available in a community near you, although I don’t know where you could be reading this from :)

Toronto Code Camp

This is a free annual one-day event held on a Saturday in Toronto; this year it was held on April 25, 2009.  There were several tracks, mostly geared towards .NET development.  There was a great turnout, with hundreds of IT and developer professionals, and it’s a great networking event.  Among other people, I met Mark Arteaga, owner of Red Bit Development, with whom I had the chance to discuss mobile device application development with the .NET Compact Framework. Mark is a Microsoft MVP, so I picked his brain about how the program works and the process of becoming an MVP.  Many major cities also host annual code camps.

http://www.torontocodecamp.net/

Toronto SharePoint Code Camp

This is also a free annual one-day event held on a Saturday in Toronto, geared towards SharePoint.  It really opened my eyes to how large the SharePoint community is; SharePoint has taken off, and it seems a lot of people are jumping on the bandwagon.  As with the regular code camp, many major cities across North America also host free annual SharePoint code camps.  A similar event is SharePoint Saturday, which is hosted in many cities, including Toronto.

http://www.torontosharepointcamp.com

User Groups

User groups are a great way to get out there and support the developer community.  You have the opportunity to network with other developers, meet new people, and even present to the group.  There are user groups in many disciplines (including VB.NET, SQL Server, software architecture, and SharePoint), and they are a great way to introduce yourself to other members of the developer community.

I’ve personally attended the following user group meetings and have met some of the top people in our industry: the SharePoint user group (TSPUG), the Visual Basic user group (TVBUG), and the Toronto IT Architecture user group.  The Toronto Visual Basic User Group was especially welcoming; the evening typically finishes off with drinks at a pub across the street.

All of these groups offer speaking or presenting opportunities for their members; I gave a presentation titled “The Basics of Software Architecture for the .NET Developer” at a Coffee and Code event for the London .NET user group last month.

INETA provides support and resources to many user groups.

Professional Developer Events

I’m planning on attending Tech Days 2009 in Toronto this year.  It has a great track and I’m looking forward to learning from the many speakers giving presentations at the event.  It also looks like it will be a great opportunity to network.

Business/ IT Networking

Almost every city has these types of events.  Typically there is a presentation for the first half of the event, followed by socializing and networking.  Very often they are held at a bar or pub, so you can easily grab a beer or two if you like.  I’ve attended many of them; some have been useful and others haven’t.  It’s hit and miss, but still a great way to make contacts.

Meetup Groups

Almost every city has these too.  Basically, the purpose is just to get out to a bar or restaurant and mingle with other like-minded individuals on topics such as Web 2.0, web design, software, podcasting, blogging, or whatever… There are tons of these.  A good resource to see what’s available in your area is http://www.meetup.com

Professional IT Groups

You can become a member of CIPS, or just attend their events as a non-member at a slightly higher cost.  I’ve attended many of these events, covering topics such as SQL Server 2008, .NET development, and cloud computing, as well as events where vendors or developers show off the cool software or products they’ve developed.  I’ve also been sitting in on the executive meetings.  Our next event is the Annual General Meeting being held on September 10th, an annual meeting where we will nominate and elect new executive committee members, followed by a regular CIPS professional event.

CIPS also regularly partners with Microsoft for many of its events, such as the Heroes Happen {here} series.  I was asked by my local CIPS chapter president to lead a discussion group at this event.  It was a great opportunity, and we had a great discussion (see here); I really liked this event structure.

Canada’s Association of Information Technology Professionals

If anyone has any recommendations for similar events please let me know by leaving a comment….

Dan

Understanding the Implicit Requirements of Software Architecture

I was reading an article today on the MSDN Architecture website titled Are We Engineers or Craftspeople? I found the following point very interesting:

Implicit requirements are those that engineers automatically include as a matter of professional duty. Most of these are requirements the engineer knows more about than their sponsor. For instance, the original Tacoma Narrows Bridge showed that winds are a problem for suspension bridges. The average politician is not expected to know about this, however. Civil engineers would never allow themselves to be in a position to say, after a new bridge has collapsed, “We knew wind would be a problem, but you didn’t ask us to deal with it in your requirements.”

This is a great analogy for implicit requirements within software architecture, and I believe this idea is what sets experienced senior software developers and software architects apart within the industry.

In determining the architecture of a software system, it is the “duty” of the software architect to identify potential problems or risks with a design and to mitigate or eliminate them.  The stakeholders of the project don’t necessarily understand these risks, nor do they necessarily understand their importance to the long-term success of the project.

Let me describe four risks in software architecture and development that a Software Architect needs to implicitly understand about the system they are designing.  When it comes to these potential risks, getting it right the first time should be a top priority in the architecture of the system.

Scaling

Recognizing the scalability requirements of an application is very important.  You need to understand the projected future usage, user growth, and data growth.  A good rule of thumb is to multiply those projections by a factor of 2 or 3 and develop the system based on that growth.  Your development environment should be continually tested against this higher usage to ensure that your development methods, strategy, tools, environment, and connected systems will scale effectively as well.
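
A trivial sketch of that projection, using made-up numbers:

    using System;

    class CapacityProjection
    {
        static void Main()
        {
            // Hypothetical figures for illustration.
            int currentUsers = 500;
            double yearlyGrowthRate = 0.30;    // projected 30% user growth per year
            int planningHorizonYears = 3;
            int safetyFactor = 2;              // the "multiply by 2 or 3" rule of thumb

            double projectedUsers = currentUsers * Math.Pow(1 + yearlyGrowthRate, planningHorizonYears);
            double designTarget = projectedUsers * safetyFactor;

            Console.WriteLine($"Projected users in {planningHorizonYears} years: {projectedUsers:F0}");
            Console.WriteLine($"Design and load-test against roughly {designTarget:F0} users");
        }
    }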

Also, regardless of the future requirements to scale, experience will point to development approaches and tools that scale well without necessarily adding development time or effort.  These approaches should always be used, and they are a testament to the skill and experience of the developer or of the individual leading the developers, such as the Software Architect.  An example of this is developing your database views or queries: it is known from experience that there are ways to write these queries that give the best performance out of the box, versus other designs that may give the same results but are slower, inefficient, and don’t scale well.

If the importance of scalability is overlooked, there is the potential for complete system breakdown when usage exceeds capacity.  This will leave developers scrambling to spend additional time fixing the core scaling problems of the system, or force a potentially expensive purchase of beefier hardware (that otherwise should not be required) to prop up a badly scaling system.

Incompatibility

It is necessary to identify any points of incompatibility with the software system.  You have to look at all of the interfaces and interactions of the software system, both human and system, now and in the future.  This ranges from the users of the system to the other software and hardware components that interact with it in a direct or indirect way.  It also includes future compatibility, because it’s important to look at future requirements and ensure that the system is developed to meet them.  To do this effectively, the Software Architect needs a broad understanding of a wide range of technology, as well as of the business processes around the software system, in order to make the right choices.  In essence, based on experience and skill, the Software Architect will pick the correct technology to support the current and future compatibility of the application.

Failing to perform this step effectively could result in overlooking a critical system connection that requires additional development, resources, and funding to correct.  A system could leave some users in the dark, unable to access or use it because they are on older or unsupported platforms; a good architecture would have accounted for this from the beginning to ensure all users (legacy and current) can use the system.  Another example could be a web application or intranet site that doesn’t work properly with a newer browser such as Internet Explorer 8.  Now additional time and money need to be spent to get it up to a standard that works across multiple web browsers, and this could also impede a company-wide initiative to upgrade all web browsers to Internet Explorer 8.

Future Maintenance and Enhancements

The future maintenance of a software system is incredibly important, and this idea should be instilled in your brain from the beginning of the software project.  Future maintenance and enhancements encompass everything that will make future updates, bug fixes, and new functionality easier.  A solid framework for your application is important, along with development and coding consistency, standards, design patterns, reusability, modularity, and documentation.  You need to understand these concepts in order to benefit from them fully.  An experienced Senior Developer or Software Architect should have a full understanding of these concepts, why they are important, and how to implement them effectively.

Overlooking this key factor could leave you with a working application, but code updates, fixes, enhancements, testing, and the onboarding of new project members will all be much more difficult.

This step is sometimes what I call a “silent killer”: missing it or lacking experience in this area may not be apparent to the end users or stakeholders of the software system at first, but it will be a huge drain on the ability to use, leverage, and maintain the application.

Some serious disadvantages I’ve seen first-hand with this type of system are that users report critical bugs that are difficult for the developers to track down and fix, and developers become “mentally drained” and discouraged from doing any kind of maintenance or enhancement to the system.  Because of this, and because it takes many times longer to add new functionality to a poorly maintainable application, these systems evolve poorly and in a lot of cases end up being completely replaced.  Think about the potential for unfortunate long-term financial and business consequences when this step is overlooked!

Usability

The software has to be usable.  You need to determine which functions users rely on most and ensure they are easy to find and the most prominent features within the application.  Allowing individual users to customize the user interface also goes a long way toward letting them get the most bang for their buck.

The user interface, the technology behind the user interface (is it web? Windows? Java? or a combination of these technologies?), user customization, colors, contrast, and user functionality are all important.  I also believe that a user interface has to look somewhat attractive.  The application itself should be usable and self-describing without requiring the user to read a manual or documentation.  You’ll find that you have more enthusiastic users and fewer technical or service desk support calls when the application is easy to use and performs the functions the user needs to perform.  It should make the user’s job easier!  Simple things in your application such as toolbars, context-sensitive menus, tabbed navigation, and even copy/paste functionality should not be overlooked.  User interface standards also need to be followed, as you do not want the user to be confused if the basic operation of the application differs substantially from the applications they are used to.

Basically, if users or customers do not want to use the software because it is too difficult or cumbersome, you end up with users not actually using it and going back to the old way of doing things, or being forced by “the powers that be” to use it over their own objections about its usability.  Neither of these situations is ideal, and both result in lost productivity or in potential future productivity gains never coming to fruition.

 

Conclusion

A failure to identify and mitigate or eliminate these issues could mean a failure or breakdown of the system.  This costs large amounts of money and time for the “after the fact” correction that’s required, or in the worst case means completely wasted money on a failed implementation that ends up getting axed altogether.  I’ve witnessed both of these scenarios first-hand, and they are not pleasant for anyone involved.  As part of eliminating wasted time and money, we need to make sure we do software right: by gaining the right experience and skill, and by paying attention to and understanding the implicit requirements expected of a Software Architect, we’ll have high-functioning software that serves its current and future requirements well and provides continual, exceptional value and return on investment.

In addition to the points above, and though not touched on in this posting, I haven’t forgotten about buy-in, security, availability, having proper business processes in place, the role of the business analyst, communication, team leadership, and so on.  These points are also very important to a solid foundation for a software project.  I’ll definitely talk about them in a future blog posting.

Thanks for reading!  I welcome any comments (positive or constructive).

Dan

My Speech on Important Points in Considering Software Architecture

In a recent posting, My Speech on Using Technology to Solve a Business Problem, I transcribed the contents of a speech I gave for Toastmasters.   In this posting I will share the follow-up speech I gave, titled “Important Points in Considering Software Architecture.”

The speech was developed for a non-technical audience, using layman’s terms and examples.  I tried hard to simplify the ideas of software architecture.  The night I gave the speech, there were three speakers covering different topics in front of an audience of about 20 people.  At the end of the evening, I was voted “Best Speaker” by the audience members.  I felt good about that; Toastmasters really is a great organization for growing your public speaking.

Ok, here is the speech!

In my last speech I talked about being effective as a member of the Information Technology field.  I briefly discussed steps involved in developing a solution – from concept to development.  In this speech, I will take this one step further and go over another important step in the overall software development process.  I will discuss important points to consider in the area of software architecture.

Software architecture is the fundamental design of a computer program.  Consider the architecture of a car.  A basic architecture of a standard car should have four wheels, an engine, a fuel tank, etc.  The architecture of a car also defines how these components will work together to produce a working vehicle.  In software, it is the same idea.  The basic architecture of a computer program dictates how the computer program will work, and how it will work together with other computer programs.  Along with this basic architecture come proven design patterns.  Design patterns are reliable, proven solutions that are used as templates for developing pieces of your applications.  To put this in perspective, imagine someone attempting to develop a new car without knowledge of how current cars work and what their fundamental “design patterns” are.  Consider the time that would be saved and the ease of future maintenance if they could build this new car upon an existing template.  Would it not make sense to take the knowledge of an existing proven design and use it as a base model for your new software applications?  Of course you may improve on top of the initial design while still keeping the fundamental concept of how the car works as per the basic car “design pattern”.
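
(A quick aside for the technical readers of this blog, and not part of the speech itself: the “Strategy” pattern is one of the classic, proven design patterns, and a minimal sketch of it in Java looks something like the following.  The shipping example and all of the names in it are hypothetical and chosen purely for illustration.)

    // Minimal sketch of the Strategy design pattern (hypothetical names).
    // Callers depend on an interface, so concrete implementations can be
    // swapped or added without changing the calling code.
    interface ShippingCostStrategy {
        double calculate(double weightKg);
    }

    class GroundShipping implements ShippingCostStrategy {
        public double calculate(double weightKg) { return 5.00 + 0.50 * weightKg; }
    }

    class AirShipping implements ShippingCostStrategy {
        public double calculate(double weightKg) { return 15.00 + 1.25 * weightKg; }
    }

    class Order {
        private final ShippingCostStrategy shipping;
        Order(ShippingCostStrategy shipping) { this.shipping = shipping; }
        double shippingCost(double weightKg) { return shipping.calculate(weightKg); }
    }

    public class StrategyDemo {
        public static void main(String[] args) {
            Order order = new Order(new GroundShipping());
            System.out.println("Ground shipping for 10 kg: $" + order.shippingCost(10)); // prints 10.0
        }
    }

The point of the pattern is that GroundShipping and AirShipping can be swapped, or new strategies added later, without touching the Order class at all – the same kind of proven, reusable template the car analogy describes.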

Looking at the option of reusing existing components as building blocks, or sometimes even as main features of your software application, is important.  Why re-invent the wheel?  Consider, when developing a new car, the cost savings that could be had in re-using standard components that have already been developed and proven on previous model years.  Why start from scratch?  Again, software design is very similar.  Purchasing pre-existing components that have been developed by third party vendors, or using components that are freely available, could be highly beneficial.  Take the following example:  Your application requires a rich user experience that is highly functional, with a look and feel very similar to Microsoft Word.  Your development team could spend one month developing and testing this new feature themselves, but this time could have an estimated cost of $10,000.  Meanwhile, there are dozens of vendors out there offering this as a component, or a “building block”, that you can plug into your application to give you the functionality you need.  All that your development team has to do is customize these proven components to suit the needs of the application.  This could have an estimated cost of $1,000 for the license to use the components, and maybe another $1,000 worth of development time to customize the components as needed.  In this example, you could easily see an overall savings of $8,000.

Modularity is an important factor to consider as well when designing your software application.  To be modular is to consist of “plug-in units” which can be added together, and on top of one another, to make the system larger or to improve its capabilities.  As an example, think of a modular cabinet system where you can purchase additional cabinets and add them together to make one larger cabinet.  This saves you money because you don’t have to throw away your existing cabinet when you need something bigger or better.  In software the same ideas apply.  You can design an application to be modular so that future enhancements can be developed faster, without having to re-design the application from scratch or change it substantially to add new functionality.  As an example: part of your application contains information about customers, and new changes will now require the application to contain information about your customers’ suppliers.  Being able to develop an independent “unit” that contains this supplier information and can be plugged into the application limits the amount of changes that need to be made to the existing program.  This will save time in the future.
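
(Another aside for technical readers, and again not part of the speech: one simple, hypothetical way to express the “plug-in unit” idea in code is to have every module implement a common interface, so that a new module – such as the supplier information mentioned above – can be added without changing the modules that already exist.  The names below are made up for illustration.)

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of a modular design: each "plug-in unit" implements
    // a common Module interface, so new units can be registered without
    // re-designing the ones that are already in place.
    interface Module {
        String name();
        void start(); // wire the module into the application
    }

    class CustomerModule implements Module {
        public String name() { return "Customers"; }
        public void start() { System.out.println("Customer screens and data loaded"); }
    }

    // Added later, when the new requirement for supplier information arrives.
    class SupplierModule implements Module {
        public String name() { return "Suppliers"; }
        public void start() { System.out.println("Supplier screens and data loaded"); }
    }

    public class Application {
        private final List<Module> modules = new ArrayList<>();

        void register(Module m) { modules.add(m); }

        void startAll() {
            for (Module m : modules) {
                System.out.println("Starting module: " + m.name());
                m.start();
            }
        }

        public static void main(String[] args) {
            Application app = new Application();
            app.register(new CustomerModule());
            app.register(new SupplierModule()); // the only change needed to add suppliers
            app.startAll();
        }
    }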

Designing a good workable software architecture takes time and practice, but if done properly will save you even more time in the future as you begin to work on implementing this design.  Using effective and proven design patterns, pre-made components from 3rd party vendors, and keeping modularity in mind can help your development team come up with a stable and effective software architecture.

Six Great Reasons to do a Technical Webcast

There is a lot of value in conducting and creating a technical webcast. I have a strong background in Software Development, and webcasts have helped me in my career and personal growth in various ways. I am going to list my top reasons for performing a webcast (in no particular order).

  1. Learn a new technology or dig deeper into something you already know

    Webcasts can dive deep into technology, so learning the technology (content) is a mandate, and doing a webcast is a great motivator to learn something new. This can enhance your skill set and get you up to speed with an emerging technology, or any technology that you are not currently familiar with. While researching the content for the webcast you will likely come across pieces of information about the subject that you were not originally aware of.

  2. Practice speaking in front of a live audience

    In the case of a live webcast, where you are speaking and demonstrating to a live audience, you have to speak – and people are listening! It’s important that you know the information inside your head and are able to relay it to an audience in a way that is engaging, so they remain interested and they learn. This is something that requires practice. The more you do it, and the more you practice, the better you will be. It will also give you a chance to practice “thinking on your feet”: while you should have at least a rough script and your ideas drawn out, there will usually be questions asked during and at the end of the webcast. These questions will test your knowledge and your ability to think about and respond to questions quickly. These are great skills that can be taken back to the office with you.

  3. Learn how to teach

    Use it to learn how to teach and demonstrate to an audience. Your speaking style, tone and demeanour will go into a “teaching mode” that can be reused outside of the webcast (ex: conducting training for a new application or demonstrating an application’s features to a client).

  4. Share your knowledge with your peers and technical community

    Think about abundance and sharing with the community. I believe an abundance mentality is good for the soul. Personally, I get a lot of satisfaction in being able to give to the community and participate within a community (for example, the large community of developers sharing content around the world via the web). It’s win-win, and another positive side effect of giving is that it will come back to you in due time. It also gives you a chance to showcase and express your talents to the community. The community will get a better sense of what you do and what you know, and have a chance to learn the material you are presenting. Remember, it’s not just who you know – it’s who knows you, and who knows what you can do well.

  5. A new addition to your portfolio

    In the case of an offline recorded webcast, or a live webcast that was recorded and made available offline, the recording can serve as a valuable addition to your portfolio. It demonstrates your confidence in your subject matter and your ability to present this information to an audience in a coherent fashion.

  6. Accomplishment and the art of creation

    I think that most developers enjoy creating. They enjoy seeing the hard work, design, and innovation of their development projects going from inception to fruition. I also believe this to be true in the case of conducting or creating a webcast. Completing a webcast will leave you with a feeling of accomplishment in addition to the reasons listed above. Having accomplished this can fuel motivation. If you are creating a webcast solely for offline viewing (not live) then you also have more creative options available in post-production.

    

There could be many other motivators or reasons to create a webcast, but the bottom line is personal growth. I believe people need to continue to learn and continue to grow, and by conducting a webcast, you will learn and grow. Your audience will learn and grow as well. It’s Win-Win.

There are tools out there designed to help you create a webcast. In the past, I’ve used two great tools to conduct live webcasts (Microsoft Live Meeting and Webex). To create an offline webcast, I’ve used TechSmith Camtasia Studio.

These products are not free, but you can download and use (for a limited time) free trial versions of them. Also, (here’s a plug) TechSmith has an associated website named screencast.com that allows you to host and share your webcasts for free (or paid if you need additional bandwidth) – Camtasia Studio can create the files necessary for screencast.com and post them to your screencast.com profile automatically. It’s pretty sweet!

 

Dan

OneNote 2007 Introduction, The Webcast….

Finally, I’ve had a few hours to finish my webcast on OneNote 2007. It’s meant to be an introduction to the application and to give the audience a feel for how it’s used and what it can be used for. I created a demo personal introduction video with Camtasia Studio last month, but decided against including it in the final version as I didn’t feel it added tremendous value to this particular project. However, it was a solid personal exercise for me to speak on film and I got some great feedback. (Plus, I need to get a serious haircut, so I shouldn’t appear on any sort of moving film right now (that’s *supposed* to be humour) – I would have had to re-film….)

Although I have done several live webcasts in the past, this is my first attempt at an offline webcast and my first dabble with Camtasia Studio….

I welcome any comments, questions, or constructive criticism.

http://www.screencast.com/t/QYn1GtyPSyT

 

Dan

Heroes happen {here} – Microsoft event in London

Last winter, I attended a Microsoft event called ‘Heroes happen {here}’ in London.  It was interesting, and although they touched briefly on technology (Windows Server 2008, Visual Studio 2008, and SQL Server 2008), much of the presentation focused on the skills required to be successful in IT, touching on career, personal growth, leadership, etc.  The interesting part of this event is that it broke out into 5 separate breakout discussion sessions of about an hour long each.  The event went on well over 3 hours.  This was a much different style of event than the typical Microsoft events, and I’ll definitely want to attend a similar event in the future.  It was a great opportunity to network with other IT professionals in the London area.  Among the typical handouts and evaluation software, they also gave each attendee a 1 GB USB key which included audio programs geared towards your IT career.

I had the opportunity to be the discussion leader for one of these break-out sessions, so I wanted to share some of the results of the discussion we had on “Application Lifecycle – Upgrade, Migrate, Rebuild… etc”.  There were about 100 people attending the event, and about 20 IT professionals were involved in this particular discussion.

The summary notes of the breakout session discussion have been transcribed and are available here if anyone is interested: https://dandouglas.files.wordpress.com/2009/05/discussion-summary-of-heroes-happen-here.doc

It seems that many other companies run into scenarios similar to those I’ve seen at organizations I’ve been affiliated with, although in other areas some have evolved past issues that many companies are still experiencing.

CIPS Executive … and a little more

I’m sitting at Coffee Culture (another local coffee shop) after attending a meeting of the executive for the CIPS chapter of London. It was a well-run meeting and we came to a lot of resolutions. I participated as an invited guest and provided ideas that were considered valuable by the rest of the executive team. I appreciated the feedback. CIPS is the Canadian Information Processing Society.

We were discussing event ideas and upcoming events, including the annual general assembly of CIPS members. I’ve been offered a position on the executive and I am considering it. I can see it as a way to further develop and market my skills. Finance/budgets/accounting, leadership, event organizing, planning, and networking would be a few of the advantages. Beyond this, there are many speaking and presentation opportunities available at their events.

I believe that communicating with, networking with, and learning from people is one of the greatest tools we can use, and it generally only costs us our time. I had a valuable talk with a professional from a large organization in London who also sits on the executive committee. He’s gone from developer, to consultant, to quoting on and receiving multi-million dollar consulting projects from the federal government. These are the types of people that I really learn from: people who have a passion for what they do, who have followed a path that I am following, and who are willing to share their good and bad experiences and provide career advice. I appreciate both positive feedback and constructive criticism.

I want to share the key points I picked up during the conversation.   I will also share my own thoughts on these points (in grey italics font).

  • Lean or bad economic times typically lead to consultants being the first to be axed 
    • This is not a big factor for me at the current time.  IT has a much lower unemployment rate than the national average (according to MS) – the opportunities are there, you just need to ask yourself “How do I get them, and continue to get them?”
    • I feel it is your responsibility to ensure that you continue to be marketable and have the skill set to pick up another position if the need ever arises due to cut backs or a bad economic situation within your current industry.  Don’t be dependent on your current position as your only source of income.  Always continue to be marketable to ensure long term career success.  If you have the marketable skills, there are always opportunities available if you are motivated enough to find them.
  • Consultants travel and must be willing to travel for opportunities
    • I love to travel, and currently I am in a position where I am able to travel.  In the past, I’ve traveled with my current employer to do work in Mexico and Alabama.  Most large cities also have plenty of consulting positions available all of the time – so I don’t see travel as a 100% necessity.  If you look at Toronto or the GTA as a whole there is a plethora of opportunity for people who don’t want to travel outside of the GTA.
  • Good places to look for contract opportunities are staffing companies such as Ajilon and Brainhunter (these were just two examples that happened to be mentioned, as there are many recruiters out there with a lot to offer – it wasn’t meant to be a statement about the quality of these recruiters versus other recruiters).
    • I definitely agree, although I cannot speak for any of those companies as I have not been affiliated with them.  I also have found that many recruiters attend IT and business networking events and that is a great way to meet and talk to recruiters and other people in your industry in general.
    • There is also opportunity to do it on your own without the middle man (recruiter): You could start your own business and look for opportunities and maybe, eventually, hire a sales person to do this.  Respond to and complete RFPs (Request for Proposal) advertised by companies looking to implement IT projects.  The more connected you are to the business community, along with a proven track record and good communication skills, the higher chance you will have at success.
      • Get the contracts and get people to work for you – this also means that your money is working for you
  • It is smart to limit the number of headhunters that you affiliate yourself with, especially as many of these headhunters will be submitting resumes for the same positions
    • An advantage is that you can develop a closer working relationship with these recruiters if you deal exclusively with one or two of them
  • Be wary of basing your spending on your current income, as consulting opportunities can change or contracts can be ended early
    • There are stories of people relocating to a new city for a year-long contract at $x/hr that ends early, leaving them with accommodation expenses on a year lease and potentially no income
    • I think this statement is true in general and not related to consultants specifically.  Understanding your own cash flow and balance sheet will help you understand your own financial situation better and help you determine how long you can get by living at the same means if the money from the consulting contract stopped coming in.  You could also have a termination clause in the contract to ease the impact of such an event.  I don’t like to think about these types of negative situations, but I do because they are still important to consider.
  • Software Architects usually command a higher hourly rate than developers
    • Software Architecture is a riskier consulting position – there may be fewer of these consulting opportunities available, and they are generally found in larger organizations
      • If you are confident in your ability, the only additional risk is that there may be less opportunity as a software architect versus a software developer.  Architect opportunities do exist in smaller organizations as well as larger ones.
    • There is a lot to learn about different software architecture methodologies, and expectations are generally very high
      • As with any career that requires high technical skills and people skills, you need to understand that expectations are high and that you are not indispensable.  You need to do everything you can to meet and exceed those expectations.

This CIPS executive member has a wealth of knowledge. I am looking forward to some future discussions with him.

In addition to keeping up with current technology, I am also spending a lot of time learning about business, finance, and general consulting. These are great skills to have.  All of this contributes to me having to find time to 1) Have fun; 2) Live the life I want to live; 3) Friends/Family; 4) Research and Education; 5) Regular exercise and working out; 6) Travel; 7) Business Opportunities; 8) Investing; 9) A lot of other things. I could probably amalgamate those items into fewer categories, as I am going on a bit of a tangent about them.

I enjoy the atmosphere at Coffee Culture over Williams, and their coffee is pretty good too. The crowd seems younger and more sociable versus the mixed crowd found at Williams Coffee Pub. Williams has better food in my experience, however, and I even found a Williams Coffee Pub in London that serves beer :)

Until next time!

Dan

My Speech on Using Technology to Solve a Business Problem

In searching for content for this new blog, I started to think about past work projects, past experience, and past content that I could use to help in the writing process. I thought about two speeches I did over a year ago. They were career-oriented speeches in the area of Software Development, and they were the third and fourth speeches I did for an organization called Toastmasters. My intent is to transcribe the main content and focus of the first speech into a blog posting. It’s funny, because I’ve started to go through my old notes for the speech, and I remember all of the practicing and reciting I did. I can almost hear myself reciting it as I am going through the notes.

My previous blog posting, titled Making the Business Case for Technology, II: Demonstrating Value After the Fact, focused on responsibility, ownership, and accountability – after the fact. The ideas mentioned in that blog posting would come into play once the solution, as discussed here, was put into place.

This speech was designed for an audience with little technical knowledge and who were not in the IT field. Due to time constraints in the speech (about 7 minutes), it was meant to be a basic overview for the audience as to what they could expect the process to be in the design of a technological solution to solve a business problem. The example I used in the speech was a real scenario I ran into at work a couple of years ago.

So here it goes….

How do you become effective in your job? If your job was in the field of Information Technology, what would your process be to ensure you were using your resources as effectively as possible? Many IT solutions are successful, and many are not. As a person making decisions on technology solutions for business problems it is important to ensure you are using your resources effectively.

Many business users, or customers, are very interested in using technology to solve their problems. Technology is a great way to help solve business problems, but many projects fail because throwing technology alone at a problem, rather than a complete solution, rarely gives satisfying results. It is critical that proper business processes be defined before throwing a technical solution at the problem. As an example, the Quality Assurance department may be having difficulty getting accurate numbers from their weld testing machines into the weld destruct result tracking system, so they make a request to have the results of the weld destruct tests moved automatically from the weld testing machine (push-out tester) to the weld result tracking system. At first glance, this may seem like a great idea because it will seemingly limit human error, since everything will be done electronically and automatically. However, upon further discussion with key individuals, further investigation, and further analysis, you discover that the process for testing the welds is poorly defined or is not followed correctly. The operators are not using the push-out testers properly or consistently, and therefore the numbers they are reading from the weld testing machines are not accurate. However, if the analysis of the problem showed that the operators were using the weld testing machines properly and that a technical approach would save them time, and therefore money, we would move on to the requirements gathering and analysis phase.

One part of gathering requirements involves discussing potential solutions with the customer. Potential solutions can vary between custom developed software and third party applications. The solution could automate the entire process or only part of the process. This analysis may take some time to develop, but it is a critical step in order to identify potential problems that could arise in the future and also to discover potential roadblocks (including significantly large costs and limits of technology). In the example mentioned earlier, the analysis showed that there was no third party software that would help in this scenario, but potential solutions could include: 1) Custom software written to have the weld result tracking system automatically pull the information about the weld tests from the weld tester, or 2) Have the weld tester push the information to the result tracking system via custom software. With any solution we need to identify any potential problems that could be created along with it. As an example, what if the operator did the weld test wrong and the wrong results were sent to the weld tracking system? We need to make sure these types of factors are looked at and resolved before moving forward. After the analysis, there may be factors that make a technological solution unfeasible, in which case you would sit down with the business users and explain why the project is unfeasible and what problems an attempt to implement it would cause.

As we begin to move forward with the development and implementation, it is important to establish a good working relationship with the stakeholders involved. “Buy In” from all of the stakeholders involved is critical, and it is important that costs, timelines, technical specifications, and design are approved and signed off by the stakeholders. The stakeholders need to be made aware of the responsibilities of the individuals developing and implementing the solution and also need to be aware of their responsibilities to the ultimate success of the project. Stakeholders must provide feedback throughout the duration of the project and ensure at each milestone that the solution is tested to meet their business requirements. Stakeholders must also help identify problems in the solution that could disrupt their business processes.

Although we have just scratched the surface of the process behind building a successful technology solution to solve a business problem, it is important to remember that a solution is designed to help solve a business problem that has a defined business process. For a technology solution to be successful, we need to ensure that the solution will solve the problem, that there is “buy in” from the project stakeholders, and that the requirements have been satisfied.
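
(A postscript for the technical readers of this blog, and not part of the original speech: the “push” option in the weld example above is easy to picture in code.  The sketch below is purely hypothetical – the class names, the force measurement, and the idea of flagging out-of-range readings for review are mine, added only to illustrate how the concern about wrong results reaching the tracking system could be addressed.)

    // Hypothetical sketch of option 2 from the weld example: the push-out tester
    // pushes each result to the tracking system, but only after a sanity check,
    // so an obviously bad test is flagged for review instead of polluting the data.
    class WeldTestResult {
        final String weldId;
        final double pushOutForceKn; // force measured by the push-out tester

        WeldTestResult(String weldId, double pushOutForceKn) {
            this.weldId = weldId;
            this.pushOutForceKn = pushOutForceKn;
        }
    }

    interface WeldTrackingSystem {
        void record(WeldTestResult result);
    }

    class WeldResultPusher {
        private final WeldTrackingSystem trackingSystem;
        private final double minPlausibleKn;
        private final double maxPlausibleKn;

        WeldResultPusher(WeldTrackingSystem trackingSystem, double minPlausibleKn, double maxPlausibleKn) {
            this.trackingSystem = trackingSystem;
            this.minPlausibleKn = minPlausibleKn;
            this.maxPlausibleKn = maxPlausibleKn;
        }

        // Push a result only if it falls inside a plausible range;
        // otherwise flag it for a human to review rather than recording it.
        boolean push(WeldTestResult result) {
            if (result.pushOutForceKn < minPlausibleKn || result.pushOutForceKn > maxPlausibleKn) {
                System.out.println("Flagged for review: " + result.weldId);
                return false;
            }
            trackingSystem.record(result);
            return true;
        }
    }

    public class WeldPushDemo {
        public static void main(String[] args) {
            WeldTrackingSystem console =
                r -> System.out.println("Recorded " + r.weldId + ": " + r.pushOutForceKn + " kN");
            WeldResultPusher pusher = new WeldResultPusher(console, 1.0, 50.0);
            pusher.push(new WeldTestResult("W-001", 12.5));  // recorded
            pusher.push(new WeldTestResult("W-002", 600.0)); // flagged for review
        }
    }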

Making the Business Case for Technology, II: Demonstrating Value After the Fact

I was a member of a team within our work organization that reviewed and presented different chapters of the book titled In Search of Business Value: Ensuring a Return on Your Technology Investment.

Although it required a moderate amount of my personal time, I found being a team member on this project to be very beneficial.  I had the opportunity to read the book, then summarize and present a review of Chapter 5, titled “Making the Business Case for Technology, II: Demonstrating Value After the Fact”.  The summary was presented to members of our global IT team, including developers, systems analysts, business analysts, IT managers, and IT directors.  One of the authors, Robert McDowell (VP of Microsoft Corporation), joined us on the conference call as well.

I found tremendous value in the ideas presented in the book, but the most value was gained in the practice of writing down key points, reviewing, sharing, and discussing these ideas on a global conference call with other reviewers of different chapters. 

In this blog posting, I am sharing a modified summary and review that I came up with and presented back in December 2008.

Chapter Summary

The whole chapter deals with responsibility, ownership, and accountability.  Routine follow-up audits of IT projects are critical for determining whether or not the projected results were achieved.  Most companies do not do this, but the consensus is that it needs to be done.  Unless we take responsibility for conducting follow-up audits, we cannot credibly say whether or not an IT project has paid off.  We need a way to determine if the productivity gains promised in the business case were actually achieved, and at the start of the project it needs to be identified who will be held accountable for achieving the projected results.

Benefits to the Organization

It will be a big undertaking to implement the ideas presented in this chapter, but as a start I believe there are some things that we could put in place in the short term to ease a transition into it:

• The project champion should be from the business unit, and we also need to ensure that we assign someone in the business unit to be accountable for the project

• Usage logging can be used to verify or track that an application is being used as frequently as expected.  Ex)  A report that was given high-priority development status and passed all of the user testing and acceptance approvals could be audited simply by looking at the report usage log to determine whether the report is being used, and whether it is being used as frequently as projected (a short sketch of this follows the list).

  • This information could be used as a lesson learned when considering future high-priority projects

• User surveys could be put in place to provide information about IT projects.  The questions on the survey can be geared to show how effective the project has been and indicate any actual user cost or time savings.  This could easily be set up using SharePoint.

• A simple analysis can be completed at the beginning of IT projects to be used as a gate to compare the projected benefits vs. development/implementation time
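
As promised above, here is a minimal sketch of that kind of lightweight usage logging.  The report name, user id, and log format are all hypothetical; the point is simply how little code (Java, in this sketch) is needed to capture the data a follow-up audit would rely on.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.time.LocalDateTime;
    import java.util.stream.Stream;

    // Hypothetical sketch: append one line to a log file every time a report is run,
    // so a follow-up audit can count actual usage and compare it to the usage
    // projected in the business case.
    public class ReportUsageLog {

        private final Path logFile;

        public ReportUsageLog(Path logFile) {
            this.logFile = logFile;
        }

        // One line per report run: timestamp, report name, user id.
        public void recordRun(String reportName, String userId) throws IOException {
            String line = LocalDateTime.now() + "\t" + reportName + "\t" + userId + System.lineSeparator();
            Files.write(logFile, line.getBytes(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }

        // A follow-up audit can simply count how many times a given report was run.
        public long runsFor(String reportName) throws IOException {
            if (!Files.exists(logFile)) {
                return 0;
            }
            try (Stream<String> lines = Files.lines(logFile)) {
                return lines.filter(l -> l.contains("\t" + reportName + "\t")).count();
            }
        }

        public static void main(String[] args) throws IOException {
            ReportUsageLog log = new ReportUsageLog(Paths.get("report-usage.log"));
            log.recordRun("MonthlyScrapReport", "dsmith"); // hypothetical report and user
            System.out.println("MonthlyScrapReport runs so far: " + log.runsFor("MonthlyScrapReport"));
        }
    }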

Fully realizing the benefits of the ideas presented in this chapter will require a long-term strategy and strong commitment.  Here are some ideas that could be used over the long term:

• An analysis must be done to actually document the projected benefits of implementing the ideas presented in the chapter

• Forms and procedures could be developed and standardized for all IT projects to document projected vs. actual benefits.  We could use this as a pilot in some projects and revise and complete lessons-learned documentation as necessary

• Awareness, trust, and understanding of the value in this process would need to be created with IT and the individual business units at each division to truly be effective in this process

Key Points in the Chapter

• Two core issues:

  • Ensuring that an analysis is performed after the fact to determine whether the promised benefits were actually achieved

    • This needs to be a routine practice everywhere, from the largest companies down to the smallest companies with only one IT person.

  • Accountability

    • Every project with an expected business value requires the following:

      • Name the business person who is accountable – this must be the business owner who will take accountability

      • The business person who is accountable is required to demonstrate whether or not the investment made sense when measured against the promises or expectations

• Most companies spend a considerable amount of time on the up-front analysis of technology projects, but give little attention to any sort of after-the-fact evaluation

  • They don’t verify whether their goals were met

  • Most companies are not where they want to be in this regard

• If there is a standardized process to evaluate the success or failure of initiatives, a better job can be done at post-project assessments (p. 80)

• It is important to do a routine and vigorous after-the-fact cost/benefit analysis on every major technology project

  • It needs to be determined whether the project really paid for itself

• We must step up to the responsibility of conducting follow-up audits as a standard practice (p. 94)

• Put a formal process in place to determine which benefits were actually realized from technology projects (p. 94)

• If an audit shows a productivity gain, you must go to the next step and determine what is now being done with the time that has been freed up (p. 94)

• At each milestone, measure whether we have achieved the projected benefit at that point (the benefit must be something that is measurable)

  • This is like a gate in the project – how well are we doing relative to the timeline of the project, and how well are we doing relative to the expected value

• Look at technology spend not as a cost but as an investment

• IT cannot be the one to deliver the business value; it must be done in conjunction with the business unit (p. 84)

• There must be strong buy-in, which may be difficult as there are risks – one risk being that the audit may show your project was a failure (p. 84)

• Some organizations apply very liberal assumptions while calculating business value just to have a project funded (approved)

  • In these cases, the combination of liberal assumptions and no true accountability allows the company to fund projects that may ultimately never deliver the promised returns (p. 85)

• Look at what was spent on the technology/IT side to deliver the result (p. 88)

  • Analyze the time spent vs. the value returned

• Auditing gives a clearer view of how certain project managers’ teams have been more successful than other teams

  • Past results are an indicator of future attempts

• Conducting user surveys can be a way of validating with the users whether or not an IT solution has been effective

  • Example question: Do you feel more enabled with the new desktop software? (p. 93)

• Auditing is just as important for infrastructure projects, even if it goes no further than comparing actual cost vs. the original estimate

• The lack of holding people accountable is the biggest source of failure and waste on technology projects (p. 93)