21 September 2012

Modeling or experimentation?

Scott Ambler, of Ambysoft and Agile Modeling fame, has just had an article published in Dr. Dobb's about disciplined agile architecture.

(This material is apparently based on his and Mark Lines' new book Disciplined Agile Delivery: A Practitioner's Guide to Agile Software Delivery in the Enterprise, though I haven't read it yet.)

In this article Scott presents a number of choices that need to be made when determining an initial technical strategy for architecture in agile development: the level of model detail (detailed end-to-end, detailed interface, high-level overview, or none), the view types (technology, business architecture, or user interface), and the modeling strategy (formal modeling sessions, informal modeling sessions, single candidate architecture, or multiple candidate architectures).

Interestingly, Scott doesn't mention experimentation at all, either in this article or in related articles. In my research, experiments – whether they're prototypes/proofs of concept, spikes or A/B tests – are far more popular up-front activities than analysis and modeling.

Participants in my research frequently talk about problems that can't be solved through analysis or modeling: when they're using new or unfamiliar technologies, when technologies don't work as expected, when there are unexpected interactions between technologies, and when there is risk that needs to be mitigated.

Without presenting a full analysis of the research results, here are half a dozen quotes from participants (referred to by their code numbers – P1 to P30) that explain their views on analysis and modeling versus experimentation.

P7 (business analyst) was in a team that was working with unfamiliar technology and building unique systems:
“In the kind of work we do, which is kind-of more cutting edge, more complicated business problems and more complicated technical environments, then it’s just natural that everybody’s feeling their way around a bit, and so [up-front analysis] is very difficult ... yeah, so it’s the process of learning and understanding and realising ... you have to get your feet wet, your hands dirty.”
and similarly P6 (development manager):
“Sometimes you have no choice but to go and write a few tests, to write a test program to explore how something works in practice because if you have a new technology you might not know.” 
P19 (development manager):
“These are problems that we are going to have to [solve]... not because we didn’t think about it, but because they would only become evident once you start digging into the code ... you really have to do it to catch the problems.” 
P10 (agile coach) said that building the system was the only way to truly tell if the system could meet its required load:
“You can estimate till the cows come home before actually trying it out in the flesh, and when you try it out in the flesh then you learn uh-oh, all sorts of things are going on ... you can’t afford to try and do everything as a thought experiment because the criticality of the system is such that you really need to know if you have enough headroom to support peak load.” 
P4 (director of architecture) was one of many who talked about the role of experimentation in risk management:
“We wouldn’t try to work out what the risk would be when we could actually try it and see.” 
Finally, P17 (team manager) and his team built a system whose load was higher than expected. P17 noted that even if they had done more analysis up-front and had a better understanding of the load on the system they wouldn’t have done anything differently:
“The only thing analysis might have helped with was some of the performance stuff. I’m not sure we would have made that many different decisions ... The only thing it would have given us is a bit more forewarning [of performance problems].” 
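
P6's test program and P10's "trying it out in the flesh" hint at what these experiments often look like in practice. As a purely illustrative sketch (the endpoint, concurrency and request counts are hypothetical examples of mine, not anything a participant described), a throwaway spike that probes whether a candidate service has enough headroom could be as small as this:

```python
# A throwaway spike: probe how a candidate service behaves under
# concurrent load before committing to it architecturally.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

ENDPOINT = "http://staging.example.com/api/health"  # hypothetical staging URL
CONCURRENCY = 20
REQUESTS = 200

def timed_request(_):
    start = time.perf_counter()
    try:
        with urlopen(ENDPOINT, timeout=5) as response:
            response.read()
        return time.perf_counter() - start
    except OSError:
        return None  # record the failure rather than crash the spike

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(timed_request, range(REQUESTS)))

latencies = sorted(r for r in results if r is not None)
failures = results.count(None)
print(f"requests: {REQUESTS}, failures: {failures}")
if latencies:
    print(f"median latency: {latencies[len(latencies) // 2] * 1000:.1f} ms")
    print(f"p95 latency: {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")
```

An hour spent running something like this answers questions about headroom and peak load that no amount of up-front modeling can.
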
So that's just a small selection of what my research participants have been telling me about experimentation, but it's clear that experimentation plays an important role in up-front architecture planning and design activities.

01 August 2012

Learning agile methods at Victoria University

This week the labs for Victoria University's Agile Methods course have started. The class of about seventy software engineering (and computer science) students is divided into teams, and each team has chosen a development project to work on. Most projects have been provided by industry organisations that have development work suitable for students.

The projects last about ten weeks. The teams get to work through the complete development process, starting with choosing their agile methodology, processes and tools, choosing the technology stack, and developing the high-level road map and plan of what they're building. Most teams have chosen to use Scrum, but there is at least one Kanban team. Each team showcases its work to the class and the customer every two weeks.

As well as hands-on agile experience, the students get good experience in working as part of a team and delivering useful working software for real clients.

And the postgrad agile students get to play Agile Coach!

28 June 2012

Wanted: Sydneysiders to help solve the "how much architecture?" paradox.

I'm being let out of the office, and will be in Sydney for the week of 9 July 2012. If anyone in Sydney can spare an hour (at a time and place that suits you) and would like to participate in my research, drop me an email and I'll fill you in on the details.

Basically, I'm looking for agile practitioners who have some exposure to architecture (whether they're team leads, architects, project managers, etc.) and who can talk about their experiences of designing architecture in agile development. I'm particularly interested in hearing from teams building large systems, teams in large organisations, start-ups, and teams building mass-market apps. (Though I'm still gathering data from other types of development as well, such as outsourced standalone systems!)

In return I'll let you in on the findings from my research.

Note: I'm also interested in hearing from anyone in Wellington or Auckland who'd like to participate.

25 June 2012

Why qualitative research?

Why is my research qualitative, using data gathered from interviews, rather than quantitative, using data gathered from (say) measuring teams or software applications?

Here is the long answer:

When it comes to up-front planning in agile software development, there is general agreement that 'big up-front design' (BUFD) doesn't work (that's the whole point of agile!), and that 'no up-front design' is not the answer either. A better solution lies somewhere in between: 'just enough up-front design'. Just enough up-front design can be implemented in a number of ways, such as the architecture spike, the walking skeleton, George Fairbanks' excellent risk-based approach, and Scott Ambler's Agile Modeling.
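
To make one of those concrete: a walking skeleton is the thinnest end-to-end slice of a system that exercises every architectural layer once, so the architecture is proven by running code rather than by a model. Here's a minimal sketch of the idea (the layers, names and greeting endpoint are hypothetical placeholders of my own, not taken from any of these authors):

```python
# A walking skeleton: the thinnest end-to-end slice that exercises every
# architectural layer once. All names here are hypothetical placeholders.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# Persistence layer: just enough schema to prove the wiring works.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE greetings (id INTEGER PRIMARY KEY, text TEXT)")
db.execute("INSERT INTO greetings (text) VALUES ('hello, skeleton')")
db.commit()

# Domain layer: one trivial operation.
def fetch_greeting():
    row = db.execute("SELECT text FROM greetings LIMIT 1").fetchone()
    return {"greeting": row[0]}

# Presentation layer: one HTTP endpoint.
class SkeletonHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(fetch_greeting()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), SkeletonHandler).serve_forever()
```

Once a slice like this runs end-to-end, each layer can be fleshed out incrementally behind a working whole.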

However, many of these methods do not help in determining which architectural requirements should be designed up-front, how the architectural design should be performed, or how the architectural features should be validated. There are very few studies into what agile practitioners actually do and what actually works.

I have summarised those unknowns into one question: “How much architecture?”. The difficulty is that there is no single correct answer. (In fact, any particular system may not even have a single correct architecture [Booch], [Fairbanks].) “How much” depends on context, which includes factors such as size, criticality, business model, architecture stability, team distribution, governance, rate of change and age of system [Abrahamsson, Babar, Kruchten]. It goes further than this, however: the development team themselves are part of that context. Any two architects are likely to produce different architectures for the same problem with the same boundaries, for “software architecture is a result of technical, business and social influences” – and this includes “the background and experience of the architects”.

In other words, architecture depends not only on the technical and business constraints of a system, but also on the experience of the architect and of the development team, on their judgement and abilities, and on what they believe to be the correct architecture.

And of course, agile development adds another complicating factor: the requirements are not known before starting, and therefore the minimum amount of architectural effort cannot be rationally determined in advance.

Thus it doesn't appear likely that there is some simple formula that a development team can apply to determine when architecture should be designed up-front, and when it should be left to emerge during development. So, this research is taking a different approach – exploring how agile practitioners determine how much architecture is planned up-front – the methods they use, what factors they consider, the choices they make. That sort of research is qualitative, with data obtained by talking to people in interviews. And as I wrote earlier, the outcome of this research will be “an explanation, a discourse, that explains how agile development teams deal with up-front design. It will be a story that teams can use to put their own situations into context, to reassure them that they are thinking about all the right things when planning – or to give them a few ideas of things they should be thinking about.”

And the short answer?

I think this sums it up fairly well:
“...Software engineering is full of why and how questions for which numbers and statistics are of little help. For example, if you've managed a software team, you've probably asked yourself a number of questions. Why won't my developers write unit tests? Why do users keep filling out this form incorrectly? Why are some of my developers 10 times as productive as the others? These questions can't be answered with numbers, but they can with the careful application of qualitative methods.” [Andrew Ko, in “Understanding Software Engineering Through Qualitative Methods” from Making Software: What Really Works, and Why We Believe It.]
And the even shorter answer?

Doing a PhD is a very lonely task, so going out and talking to agilists helps me to keep a grip on reality!

(Footnote: some of this material is based on a paper I presented at Agile India earlier this year. Also I'm sorry for the links to articles that aren't free!)

06 June 2012

XP2012 PhD Symposium

Last month, I attended the agile software development conference XP2012, held in Malmö, Sweden. I participated in the doctoral symposium, where a panel gives PhD students feedback on their research progress.

Each student submitted a paper to the conference, presented the paper in a twenty-minute talk, and presented a poster summarising the research at a poster session.

For this symposium the papers had to include the research problem and the motivation for solving it (the background to the problem), the aims and objectives of the research (there has to be a reason for doing the research -- why do we want to solve the problem?), the research methodology (the research needs to be done appropriately), the work completed to date, the work plan to completion (the purpose of doctoral research is to gain an academic qualification -- there must be an end to it!), and the contribution of the research to academia (what the research adds to the "body of knowledge" is a vital part of doctoral research).

For those interested, my paper is titled "Reconciling architecture and agility: how much architecture?". The abstract is:
"Software architecture design is an exercise in planning ahead, while one of the key philosophies of agile software development is to not plan ahead. These opposing needs between planning ahead and not planning ahead create an apparent paradox. This research is exploring that paradox, focusing on the effects that the architectural skills, judgement and tacit knowledge of the development team, and the methods that they use, have on the level of up-front architecture design in their agile development projects."
...and the full paper is available.

26 May 2012

An introduction to my research

According to the famous software engineer Grady Booch, software architecture represents the significant design decisions that shape a system, where significant is measured by the cost of change. Therefore, architecture design is an exercise in planning ahead – any changes made at the architectural level may require rework across the whole system. Architectural decisions include things like the technology stack, the development framework and architectural patterns. If you want to change from .NET to Java after you've started development, or change from a web-based system to a desktop app, then you basically have to throw everything away and start again.

Agile development methods are based on a development philosophy and a set of principles that allow the development team to deal with changing and unknown requirements. To accommodate change, agile methods discourage detailed up-front design, because the requirements are not initially known and any up-front design may therefore be wrong. Refactoring is used to ensure quality remains high.

Is this a paradox? How can you design an architecture while using a methodology that promotes not planning ahead? How can you use a methodology that encourages refactoring on something that can't be refactored?

This research is using the experiences of agile practitioners to explore this apparent paradox. Generally, practitioners get around the paradox by compromising -- doing "just enough up-front design". But how much is just enough? (Note: while this research is labelled "how much architecture?", it's more about up-front design and planning than specifically about architecture, if we can differentiate between them.) According to a number of authors, how much? depends on context. Context includes technical factors, business factors and people factors -- the team itself and the decisions it makes.

Because of all these context factors, this research is not aiming to come up with some formula, a recipe, where you plug in a bunch of parameters (such as project size, criticality, stability, etc.) and get out an answer that describes your optimal level of up-front design. Instead, this research involves talking to agile practitioners in industry to hear their experiences and how they deal with up-front design.

Thus the outcome of this research will be an explanation, a discourse, that explains how agile development teams deal with up-front design. It will be a story that teams can use to put their own situations into context, to reassure them that they are thinking about all the right things when planning -- or to give them a few ideas of things they should be thinking about.

Perhaps.