Using Agile in less-than-perfect situations since Y2K
It’s nearing the end of the year, which is a good time to reflect on things. I have found myself reflecting a bit on adages, axioms, and the like. You know, the “stitch in time saves nine,” “look before you leap” kinds of things. Those little snippets of wisdom that help to convince us to do the right thing.
Software development has a few of these:
- red, green, refactor
- Individuals and interactions over processes and tools
- Don’t break the build
- Perfect is the enemy of the good
That last one really started off a train of thought for me. How do we know when we’ve reached “good” and when we’re trying too hard to reach perfect? This is particularly interesting to me as I often struggle with “leaving well enough alone,” to pull out another cliché. I’m not sure of the exact set of circumstances that led to it, but somehow I started thinking about value versus cost. All of that coalesced to this:
The Best You Can Afford
That’s how you know when to stop: when you can’t afford to keep making it better anymore. That is, when the value of further improvement isn’t worth the effort compared to the other things you could spend that effort on.
At first blush this seems obvious. We all apply this every day to some extent, right? Well, not necessarily. The best you can afford is very easy to confuse with the worst you can afford: putting in the least effort possible to just get by. We see and do this all the time.
What, really, is the difference? The difference is in what these two tactics get you in the long run. Aiming for the worst is attractive because it is, somewhat obviously, the cheaper option. However, that obvious cheapness results in, well, results that are obviously cheap. In physical products this can mean flimsy plastic parts and a shorter life span. In software, this can mean more defects, a hard-to-use product, and upset end users. Worse yet for software, it often also means a harder time performing the maintenance tasks that can be more than 80% of the cost and time spent working on the software.
Aiming for the best is harder to justify up front because so many of those costs are unknown. However, the attention to detail means physical products that people love to use, use for a longer time, and will recommend to others. It mostly means the same in software. Another benefit for software is that maintenance takes less time and is easier for those who must do it.
So now we have a guideline that can tell us when we are spending too much on heading for perfect: when the value of what you are doing is less than the value of doing something else.
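The guideline above is really a comparison of marginal value against opportunity cost. Here is a minimal sketch of that decision rule; the pass values and the value of the alternative work are entirely made-up numbers for illustration:

```python
def keep_improving(marginal_value, opportunity_cost):
    """Invest the next unit of effort here only while it returns more
    value than the best alternative use of that same effort."""
    return marginal_value > opportunity_cost

# Diminishing returns: each successive polish pass adds less value.
polish_passes = [50, 20, 8, 3, 1]   # hypothetical value of each pass
best_alternative = 10               # hypothetical value of the next backlog item

# Find the first pass that is no longer worth doing.
stop_at = next(i for i, v in enumerate(polish_passes)
               if not keep_improving(v, best_alternative))
# The first two passes (50, 20) beat the alternative; the third (8) does not.
```

The numbers are never this clean in practice, of course; the point is only that "the best you can afford" is a stopping rule, not a quality bar.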
I’m going to finish up with an example that a lot of people have used when making similar points. I’m sure a number of readers will roll their eyes as soon as I mention the product, but I urge you not to focus on that; instead, pick some other product or product category and make some comparisons of your own. When has the “worst” product won? How did it win? When has the “best” product won? How did it win? Which of those products started out ahead and then lost?
The product that I think I can most easily make a case for being built using “the best you can afford” mentality is: the iPhone. To see why, we have to go back to before the iPhone was announced or really speculated about. Back in the early 2000s, there were rumors of Apple working on a tablet computer. These rumors persisted ultimately until the iPad was released. It seems, based on all accounts, that these rumors were true. And again based upon what we have heard, Apple was working on what would become the iPad *before* the iPhone.
This is significant because it shows that Apple, whether you take that to mean Steve Jobs, Jony Ive, a host of other folks, or all of them combined, was focused on making the best thing it could. At some point, who knows exactly when, someone realized that what they had (a touchscreen computer with a simplified OS) would make a great phone. At that point, the team felt that the iPad was the “best” product, but the “best they could afford” to make at the time was the iPhone.
Further, it is obvious that Apple doesn’t and never did release a version of the iPhone (or iPad) without work already being in progress on the next version. This implies that the “best we can afford” is a philosophy deeply ingrained into Apple. They are continually working on something better, but they release something that is “good enough”.
There are a number of people out there who will point out that the iPhone isn’t the number one phone any longer or that it won’t stay that way for long. I’m sorry, but I don’t buy it. For one thing, there are at most two kinds of iPhones (not counting storage size differences) available for sale at any one time. This isn’t true for any other type of phone. Also, while Apple only has a certain share of the number of devices sold, its share of the profits is much larger. That is, Apple can afford to sell fewer units because it can also manage to charge more for them. Profit is the goal of a company. Selling the most units is worthless if you don’t also have profit.
I humbly believe another example of the application of “the best you can afford” is the Improving Podcasts that I co-host with Allen Hurst. This was an idea that Allen and I had around two years ago. We wanted to create a software development podcast and have it associated with our company, Improving Enterprises. We spent several months getting some momentum built and finally realized that it was at a point where we had to release something or risk never publishing an episode. Because of that, the first episodes were more work, and the audio quality was suspect. Over time the best we could afford got better until we found, especially given the time involved, that recording all audio over Skype was a solid balance of creating something of value without detracting from our other valuable work. This reduced editing time significantly, allowed for remote participation, and was repeatable by either one of us. However, it also means that occasionally we are subject to the audio difficulties and other problems associated with remote recording.
I could go on, but at this point this article is as long as I—and probably you as well—can afford. It is also the best I can afford right now. I’m sure I could fix some grammar, make some points more clearly, etc. However, I have other work to do and family to be with. I leave you with this question: How might your life and the lives of those around you improve if you stopped doing the worst you can afford and instead did the best you can afford?
The one or two of you who actually come to the site to read these posts may have noticed that a few weeks ago, I added a link to Improving Podcasts in my sidebar. Allen Hurst, I, and a few other Improving employees have started producing a bi-weekly (every other Tuesday) podcast covering software development topics.
We have three episodes produced, and the latest was posted just this afternoon. (It may take some time for it to show up in iTunes.) The topics to date have centered around Agile project management and development practices. Head over there, listen, subscribe, and please provide feedback. We would really love some positive reviews in iTunes, but the best place to suggest topics, ask questions, or invite yourself to join us is the web site itself.
Many people conflate formality and rigidity, treating them as inseparable. It is easy to see why. Formality leads to rigidity as mechanisms are put into place to ensure formalized practices and processes are followed. You have likely seen this go wrong, as I know I have. Architecture groups and PMOs shift their focus. No longer an aid to progress, they become an obstruction. They keep their formality, but lose their purpose as they become rigid.
Agile is, in part, a reaction to the rigid process models of the past. It is no surprise that many throw formality out as well. However, without a degree of formality, we lose repeatability. This has happened in countless efforts to implement Agile methods. The team, seeing a chance to break from the rigidity, pushes off any attempts at formality as well. As the project progresses, cracks begin to form. There is no formal test plan, so high-level bugs creep in. That is, while the expressed stories may work as requested, certain aspects are simply not covered. Because there was no formal effort to design the user interface flow, usability of the product starts to plummet as the complexity of the combined requirements weighs upon each successive screen.
These, and other aspects, are things we as software development professionals have learned to do reasonably well in the last 30-plus years. Ignoring them hurts our chances of success. Are these things non-Agile or anti-Agile?
No, they are not. To completely ignore formality would be to throw even Agile practices out the window. Software development requires people to coordinate actions. The amount of formality needed to do so varies based upon the team and its goal. However, some degree of formality can be an aid whenever more than one person is involved.
It is all in how we apply formality. Things such as test plans, information architecture and user interface flow, and domain modeling applied in an “all up front” fashion would be difficult to reconcile. However, none of these things has to be rigid. They can adapt just as easily as we can refactor our code. If the plans are designed with this in mind, small adaptations are easy while larger ones remain possible.
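One way to keep something like a test plan formal yet adaptable is to express it as data that lives alongside the code, so it can be refactored as freely as the code itself. A hypothetical sketch (the scenario names and coverage flags are invented for illustration):

```python
# A lightweight "test plan as code": each entry names a scenario the team
# has agreed to cover, and whether it is automated yet. Adding, renaming,
# or dropping a scenario is a one-line change, not a document revision.
TEST_PLAN = {
    "checkout: happy path": True,        # automated
    "checkout: expired card": True,      # automated
    "checkout: network timeout": False,  # agreed upon, not yet automated
}

def uncovered(plan):
    """List scenarios the team committed to but has not automated yet,
    suitable for review in a retrospective or planning session."""
    return [name for name, automated in plan.items() if not automated]
```

The formality here is the agreement itself, captured where everyone can see it; nothing about the format prevents it from changing every iteration.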
It is worth repeating that not every effort requires the same degree or type of formality. If a project is just starting or has only minimal and very simple user interactions, perhaps a comprehensive outline of user interface flow really isn’t necessary. However, we must train ourselves to recognize when it becomes necessary. As soon as anyone in a retrospective or user demo says, “I can’t find a screen where I can do X,” it might be time to examine the flow. In fact, it may be worth formalizing a list of “smells” that point to the need for adding some of these practices, somewhat similar to those in Agile Adoption Patterns. What do you think?
Great post from Bill Mill. Go on, read it. When you’re done, here’s the key quote:
The Warriors could optimize like crazy for the press, but they’d be training and working and trading and drafting to be, at best, a pretty good team.
This is why those who preach from the gospel of Lean try to impress upon us that local optimization is an insidious evil. It often looks good from a certain viewpoint, but it can hide other improvements that would provide a more holistic benefit. It can even prevent you from reaching your true goal.
A mailing list pointed me to a recent post by Jeremy Miller on Agile design and an older one by Joe Ocampo. Both discuss how design happens in an Agile project. Both are great posts. Jeremy’s post sticks largely to what is considered the Agile “party line”: design is done all the time rather than up front. Joe’s post talks about using Domain-Driven Design to develop the “ubiquitous language” that teams need for communication. Jeremy admits in the comments that a bit of up-front establishment of a domain is useful as context, but distinguishes it from design. Presumably he is equating it with analysis. Having written a book that, in part, shows how use of consistent domain modeling patterns can inform design and implementation, I would say it’s both. But that’s not what I want to talk about….
Reading these posts and exploring a bit further, I began to relate the establishing of a domain context for stories with establishing an architectural context.
One of the main challenges I have faced and hear others describe is how to maintain agility when there is a scope of multiple Agile projects (what the PMBOK would call a program) or an enterprise portfolio that really needs consistent architecture. I am generally a believer in simple, evolutionary design. However, as Jeremy points out in his post, it requires people with the proper skills to maintain an open design/architecture and who know how to tell when they have reached the last responsible moment. What I have discovered is that as the scope of systems under development scales, the last responsible moment for certain architectural decisions tends to shift earlier.
As such, a reasonable step would be to include a (short) architectural analysis session for each release that would mirror the domain modeling exercises Joe mentions late in his post. In that way, those with architectural knowledge could get involved and discuss which architectures support the required features and which decisions need to be agreed upon up front. Provided this is timeboxed and the group puts appropriate emphasis on postponing whatever can be postponed, this is in line with Agile values as well.
I should also note that I would only add this practice if the team found it necessary. As a coach, I might recommend it upon seeing the following things either in a retrospective or during planning or other working sessions:
- multiple components, libraries, and/or frameworks in use that accomplish the same goals
- “not invented here” syndrome
- lack of integration testing between project teams’ output
- failure of or difficulty with coordinated deployment (usually preceded by the lack of integration testing)
- no or limited consideration of nonfunctional requirements (performance, stability, etc.)
- failure to meet nonfunctional requirements
I’m sure there are other smells that would indicate a need for architectural planning. If you think of others, let me know in the comments.
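Such a list of smells could even be formalized, in the spirit of the test-plan idea earlier, as a simple checklist a team reviews in retrospectives. A hypothetical sketch; the smell wording follows the list above, but the threshold is an arbitrary example, not a recommendation:

```python
# Architectural "smells" from the list above, as a reviewable checklist.
SMELLS = [
    "duplicate components/libraries solving the same problem",
    "'not invented here' syndrome",
    "no integration testing between teams' output",
    "difficulty with coordinated deployment",
    "nonfunctional requirements not considered",
    "nonfunctional requirements not met",
]

def recommend_architecture_session(observed, threshold=2):
    """Suggest a short architectural analysis session once enough smells
    have been observed. Returns the recommendation and the matched smells."""
    hits = [s for s in SMELLS if s in observed]
    return len(hits) >= threshold, hits
```

As with any checklist, the value is in prompting the conversation, not in the count itself.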