Some weeks ago, one of my customers decided that one of their biggest ASP.NET intranet projects needed an architectural revision, mainly to serve its users better with built-in fault tolerance, but also to decouple development of the various sub-projects through better separation between software modules.
When small software companies get bigger, they embark on what can be a bumpy ride of change. One of those changes will probably concern the way they tackle the analysis phase of the software development life-cycle (SDLC). Just to be clear, when I say “analysis phase” I mean the part before coding starts, i.e. requirements elicitation, analysis, and system specification.
Typically (although I am sure there are plenty of shining examples where this is not the case), small software companies with a handful of developers, where the entire SDLC for a project is covered by one or two people, tend not to have a formalised analysis phase. Why is that?
Recently I stumbled upon a couple of articles [1, 2] and, remembering my experience with EC2, I realised that utility computing was not what I was looking for. I wanted something that helped me without adding complexity, but simple web-hosting offers did not satisfy me either: I also wanted complete control over my infrastructure, both for the technical freedom I might need and because, when it comes to my customers’ data, I trust no one.
The most expensive phase of software construction is coding, because it is the least intuitive: it requires constant attention and reasoning, and errors (logical or otherwise) are difficult to spot because they are buried in text that is often long, spread across more than one file, and not written by us.
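To illustrate the kind of logical error that hides in plain sight, here is a small sketch (my own example, not from any of the projects mentioned above) of a classic Python pitfall: the code runs without raising anything, and the bug only shows up in the results, one call later.

```python
def add_tag(tag, tags=[]):
    """Buggy: the default list is created once and shared across calls,
    so earlier calls silently leak state into later ones."""
    tags.append(tag)
    return tags


def add_tag_fixed(tag, tags=None):
    """Fixed: use None as a sentinel and build a fresh list per call."""
    if tags is None:
        tags = []
    tags.append(tag)
    return tags


if __name__ == "__main__":
    print(add_tag("a"))        # ['a']
    print(add_tag("b"))        # ['a', 'b']  -- surprise: 'a' is still there
    print(add_tag_fixed("a"))  # ['a']
    print(add_tag_fixed("b"))  # ['b']
```

Reading either function in isolation, line by line, nothing looks wrong; the defect only emerges from behaviour across calls, which is exactly why such errors survive review of long, multi-file codebases.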