Scott Ambler, of Ambysoft and Agile Modeling fame, has just had an article published in Dr. Dobb's about disciplined agile architecture.
(This material is apparently based on his and Mark Lines' new book Disciplined Agile Delivery: A Practitioner's Guide to Agile Software Delivery in the Enterprise, though I haven't read it yet.)
In this article Scott presents a number of choices that need to be made when determining an initial technical strategy for architecture in agile development. The choices are the level of model detail (detailed end-to-end, detailed interface, high-level overview, or none), the view types (technology, business architecture, or user interface), and the modeling strategy (formal modeling sessions, informal modeling sessions, a single candidate architecture, or multiple candidate architectures).
Interestingly, Scott doesn't mention experimentation at all either in this article or in related articles. In my research, experiments – whether they’re prototypes/proofs of concept, spikes or A/B tests – are far more popular up-front activities than analysis and modeling.
Participants in my research frequently talk about problems that can’t be solved through analysis or modeling: when they are working with new or unfamiliar technologies, when technologies don’t work as expected, when there are unexpected interactions between technologies, and when there is risk that needs to be mitigated.
Without presenting a full analysis of the research results, here are half a dozen quotes from participants (referred to by their code numbers – P1 to P30) that explain their views on analysis and modeling versus experimentation.
P7 (business analyst) was in a team that was working with unfamiliar technology and building unique systems:
“In the kind of work we do, which is kind-of more cutting edge, more complicated business problems and more complicated technical environments, then it’s just natural that everybody’s feeling their way around a bit, and so [up-front analysis] is very difficult ... yeah, so it’s the process of learning and understanding and realising ... you have to get your feet wet, your hands dirty.”

And similarly, P6 (development manager):
“Sometimes you have no choice but to go and write a few tests, to write a test program to explore how something works in practice because if you have a new technology you might not know.”

P19 (development manager):
“These are problems that we are going to have to [solve]... not because we didn’t think about it, but because they would only become evident once you start digging into the code ... you really have to do it to catch the problems.”

P10 (agile coach) said that building the system was the only way to truly tell if the system could meet its required load:
“You can estimate till the cows come home before actually trying it out in the flesh, and when you try it out in the flesh then you learn uh-oh, all sorts of things are going on ... you can’t afford to try and do everything as a thought experiment because the criticality of the system is such that you really need to know if you have enough headroom to support peak load.”

P4 (director of architecture) was one of many who talked about the role of experimentation in risk management:
“We wouldn’t try to work out what the risk would be when we could actually try it and see.”

Finally, P17 (team manager) and his team built a system whose load was higher than expected. P17 noted that even if they had done more analysis up-front and had a better understanding of the load on the system, they wouldn’t have done anything differently:
“The only thing analysis might have helped with was some of the performance stuff. I’m not sure we would have made that many different decisions ... The only thing it would have given us is a bit more forewarning [of performance problems].”

So that’s just a small selection of what my research participants have been telling me about experimentation, but it’s very clear that experimentation plays an important role in up-front architecture planning and design activities.
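To make the kind of experiment P6 and P10 describe a little more concrete, here is a minimal sketch of a throwaway load spike, written in Python. It is purely illustrative and not something any participant built: handle_request() is a hypothetical placeholder for whatever call is being explored, and the request and concurrency numbers are arbitrary.

# Illustrative throwaway spike: measure latency headroom empirically instead of
# estimating it on paper. handle_request() is a placeholder for the real call
# under investigation (a new library, a service, a database driver, etc.).
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def handle_request() -> None:
    # Stand-in for the unit of work being explored.
    time.sleep(0.005)


def timed_call(_: int) -> float:
    start = time.perf_counter()
    handle_request()
    return time.perf_counter() - start


def run_spike(requests: int = 1000, concurrency: int = 50) -> None:
    # Fire the calls concurrently and report median and 95th-percentile latency,
    # the sort of "do we have enough headroom for peak load?" number P10 mentions.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed_call, range(requests)))
    p95 = statistics.quantiles(latencies, n=20)[-1]
    print(f"median={statistics.median(latencies) * 1000:.1f} ms  "
          f"p95={p95 * 1000:.1f} ms")


if __name__ == "__main__":
    run_spike()

The point of a spike like this is the learning, not the code; once the question is answered, the test program is typically thrown away.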