How Transaction Costs Influence Story Size

Of all the aspects of the INVEST principle, I think the attribute that software developers have the most influence over is S: Small.

Small stories have huge benefits. We can control what gets built with much more granularity. Less scope means fewer requirements for the team to understand. Less for the end user to understand. Less code for the developer to change. Less code to test. Less code to release and maintain.

Less risk.

Ron Jeffries recently tweeted a short video on story slicing that triggered a question in my mind about why engineers resist small stories. As I thought about it more, connections formed to Don Reinertsen’s work in Managing the Design Factory and Principles of Product Development Flow. There he describes the “transaction cost” of an activity: the cost, in time and money, of everything needed to execute that activity.

If we put this in the context of releasing software, the transaction cost may include the time to build, test, and deploy an application. If those tasks are manual and time-consuming, human nature is to perform them less often.

Transaction costs follow a U-curve, where performing an activity may be prohibitively expensive if the size of the activity is too small or too big. For example, think about a story for user account registration. It would be costly to create a story for each word in the form text. Likewise, the cost of a single story would be huge if it included creating the form, validating it, sending notifications, account activation, error handling, and so on.

So, back to Ron’s video and his push for thinly sliced stories. I posed the question of what to do when developers resist small stories. To dig into that, we need to understand why they resist them.

A common argument that I hear against small stories is that it’s not efficient to work that way. We developers are always looking to maximize the work not done 🙂

This can be a problem, because the true work for a story includes everything needed to get it production-ready. Writing each story. Developing each story. Code reviewing each story. Testing each story.

That is the transaction cost of a story.

Driving down those costs drives down the resistance from the developers. There are lots of ways to reduce these costs. We can pair program to eliminate the out-of-band code review. Use TDD and automated acceptance testing to build quality in from the start. Create an automated build and deploy pipeline to continuously deliver stories.
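
To make that concrete, here is a minimal sketch of what pytest-style acceptance checks for the registration story might look like. The register_account function and DuplicateEmailError exception are hypothetical stand-ins, stubbed so the example runs on its own; in real life you would import them from your production code. Once checks like these run on every commit, the testing portion of each story’s transaction cost shrinks dramatically, and thinner slices stop feeling expensive.

```python
import pytest

class DuplicateEmailError(Exception):
    pass

_accounts = set()

def register_account(email: str, password: str) -> str:
    """Hypothetical stand-in for the real registration code."""
    if email in _accounts:
        raise DuplicateEmailError(email)
    _accounts.add(email)
    return email

def test_new_email_can_register():
    assert register_account("new@example.com", "s3cret!") == "new@example.com"

def test_duplicate_email_is_rejected():
    register_account("dup@example.com", "s3cret!")
    with pytest.raises(DuplicateEmailError):
        register_account("dup@example.com", "s3cret!")
```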

As we reduce the overhead of each story, we can slice stories thinner and thinner.

But wait, there’s more!

As Bill Caputo rightfully pointed out, big “efficient” stories include a lot more work than may be necessary. Thinking back about an account registration story, we may put account registration and account activation into the same story. That builds in an assumption that we have to activate accounts – which we may not.

Not splitting stories means we may efficiently build something we don’t even need. Peter Drucker famously said –

There is nothing so useless as doing efficiently that which should not be done at all.

Addendum – Be careful what you measure

Another reason developers may resist breaking things down into small deliverables is how they’re measured.

In most Scrum worlds, a team is measured on velocity. Small stories mean fewer story points, and thus lower velocity.

Uh oh, let’s make bigger stories and drive higher velocity.

To address this problem, Ron suggested giving “credit” for stories completed rather than story points completed. This encourages splitting large stories into thin stories with little pieces of the overall functionality.

What Do DevOps, Test Automation, and Test Metrics Have in Common?

This is a guest post by Limor Wainstein

Agile testing is a method of testing software that follows the twelve principles of Agile software development. An important objective in Agile is regularly releasing high-quality software, and automating tests is one of the main practices that achieves this aim by making testing faster.

DevOps is a term that Adam Jacob (CTO of Chef Software) defines as “a word we will use to describe the operational side of the transition to enterprises being software led”. In other words, DevOps is an engineering practice that aims at unifying software development (Dev) and software operation (Ops). DevOps strongly advocates automation and monitoring at all stages of the software project.

DevOps is not a concept separate from Agile; rather, it extends Agile to also include Operations in the cross-functional team. In a DevOps organization, different parts of the team that were previously siloed collaborate as one, with a single objective: to deliver software fully to the customer.

Agile and DevOps both utilize the value of automation. But there must be a way to measure automation, its progress, and how effective it is in achieving the aims of Agile and DevOps—this is where test metrics become useful.

In this article, you’ll find out about different types of test automation, how automating tests can help your company transition to DevOps, and some relevant test metrics that Agile teams and organizations transitioning to DevOps can use to improve productivity and achieve their aims.

Types of Test Automation

Test automation means using tools that programmatically execute software tests, report outcomes, and compare those outcomes with predicted values. However, different types of automation target different parts of the system.

Automated Unit Tests

Unit tests are coded verifications of the smallest testable parts of an application. Automating unit tests supports one of the main Agile objectives—rapid feedback on software quality. You should aim to automate all possible unit tests.
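
As a minimal sketch, an automated unit test in Python with pytest might look like the following (parse_price is a made-up helper, included so the example runs). Each check executes in milliseconds, which is what makes rapid feedback possible.

```python
import pytest

def parse_price(text: str) -> float:
    """Convert a price string like '$1,250.99' to a float."""
    return float(text.replace("$", "").replace(",", ""))

def test_parses_dollar_amount():
    assert parse_price("$1,250.99") == 1250.99

def test_rejects_garbage():
    # float() raises ValueError for non-numeric input.
    with pytest.raises(ValueError):
        parse_price("not a price")
```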

Automated Functional Tests

Functional tests verify whether applications do what the user needs them to do, testing a slice of functionality of the whole system in each test. Automating functional tests is useful because testing tools can mimic the actions of a human and then check for expected results, saving valuable time and improving productivity.
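
For illustration, here is a hedged sketch of a functional test using Selenium’s Python bindings. The URL and element IDs are hypothetical; the shape of the test – drive the UI like a human would, then assert on what the user would see – is the point.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_user_can_register():
    driver = webdriver.Chrome()
    try:
        # Hypothetical page and element IDs; substitute your own.
        driver.get("https://example.com/register")
        driver.find_element(By.ID, "email").send_keys("new@example.com")
        driver.find_element(By.ID, "password").send_keys("s3cret!")
        driver.find_element(By.ID, "submit").click()
        # Check for the result a human tester would look for.
        assert "Welcome" in driver.page_source
    finally:
        driver.quit()
```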

Automated Integration Tests

Integration tests combine individual software modules (units) and test how they work together. Again, by automating integration tests, you get tests that are repeatable and run quickly, increasing the chances of finding defects as early as possible, when they are cheaper to fix.
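
A small sketch of what that can look like in Python: instead of mocking the database, the test below wires a hypothetical UserRepository to an in-memory SQLite database and verifies that the pieces work together.

```python
import sqlite3

class UserRepository:
    """Hypothetical repository, stubbed here so the example runs."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS users (email TEXT PRIMARY KEY)")

    def add(self, email: str) -> None:
        self.conn.execute("INSERT INTO users (email) VALUES (?)", (email,))

    def exists(self, email: str) -> bool:
        row = self.conn.execute(
            "SELECT 1 FROM users WHERE email = ?", (email,)
        ).fetchone()
        return row is not None

def test_added_user_can_be_found():
    # An in-memory database keeps the test fast and repeatable.
    repo = UserRepository(sqlite3.connect(":memory:"))
    repo.add("new@example.com")
    assert repo.exists("new@example.com")
```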

Test Automation and DevOps

DevOps is a culture that aims to reduce overall application deployment time by uniting development and operations in software-led enterprises. Automation is at the heart of the DevOps movement—reduced deployment time and more frequent software releases mean increased testing volume. Without automation, teams must run large numbers of test cases manually, which slows down deployment and undermines the aims of DevOps.

One potential pitfall that can hamper the transition to DevOps is not having the required automation knowledge. After all, test automation is technically complex, and acquiring the knowledge to automate tests effectively takes time and costs money. You can hire expert consultants to get you up and running with automation, hire qualified automation engineers, or retrain current testing staff. Whichever option you choose, it’s clear that automation is essential for implementing a DevOps culture in your development teams.

Enter Test Metrics

What Are Test Automation Metrics?

Implementing automation blindly, without measuring it and improving on automated processes, is a waste of time. This is where test metrics provide invaluable feedback on your automated testing efforts—test automation metrics are simply measurements of automated tests.

Test automation metrics allow you to gauge the ROI from automated tests, get feedback on test efficiency in finding defects, and a host of other valuable insights.

How Can Test Automation Metrics Help DevOps?

By measuring testing duration, you find out whether current automation efforts are decreasing development cycles and accelerating time-to-market for software. If automated tests don’t run quicker than their manual equivalents, then there are clearly technical issues with the automation efforts—perhaps the wrong tests are being automated.

How to Measure Test Automation

Some examples of test metrics used to measure test automation are:

  • Total test duration—a straightforward and useful metric that tracks whether automation is achieving the shared Agile and DevOps aim of faster software testing.
  • Requirements coverage—a helpful metric to track which features are tested, and how many tests are aligned with a user story or requirement. This metric provides insight into the maturity of test automation in your company.
  • Irrelevant results—this measurement highlights test failures resulting from changes to the software or problems with the testing environment rather than genuine defects. In other words, you get insight into the factors that reduce the efficiency of automation from an economic standpoint. Irrelevant results are often compared with useful results, which are test results corresponding to a simple test pass or a test failure caused by a defect.
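
As a rough illustration of how the metrics above might be computed, here is a small Python sketch. The data shape is hypothetical – each result is a duration plus an outcome of "pass", "fail" (a real defect), or "irrelevant" (environment or test-maintenance noise).

```python
# Hypothetical raw results: (duration_seconds, outcome)
results = [
    (1.2, "pass"),
    (0.8, "fail"),        # a genuine defect: a useful result
    (3.5, "irrelevant"),  # broken environment or stale test
    (0.4, "pass"),
]

total_duration = sum(duration for duration, _ in results)
irrelevant = sum(1 for _, outcome in results if outcome == "irrelevant")
useful = len(results) - irrelevant

print(f"Total test duration: {total_duration:.1f}s")
print(f"Irrelevant vs. useful results: {irrelevant} / {useful}")
```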

Closing Thoughts

The DevOps movement extends Agile and aims to modernize software development with faster releases of high-quality software. Testing is a common bottleneck in the development cycle, which can hamper any attempt to integrate a DevOps culture.

Test automation is the link that gets software testing up to the speed of development, helping to achieve the aims of Agile and DevOps.

However, there must be a way to track all attempts to automate tests, since test automation is, in itself, an expensive investment and a technical challenge. Test metrics provide valuable feedback to help improve automation and ensure positive ROI.

About Limor

Limor is a technical writer and editor at Agile SEO, a boutique digital marketing agency focused on technology and SaaS markets. She has over 10 years’ experience writing technical articles and documentation for various audiences, including technical on-site content, software documentation, and dev guides. She specializes in big data analytics, computer/network security, middleware, software development and APIs.

I Don’t Hire Rockstars

Wow, it’s been a really long time since I’ve posted anything. Booked has been keeping me very busy, but I’m going to make an attempt to publish more posts this year.

I was out to dinner with some friends the other night. These are guys that I worked with in a past life and some of the best teammates I’ve ever had, so I have a ton of respect for their opinions.

The topic of “rockstar” developers came up – I’m not sure how, but we’re a bunch of software developers so I suppose it was inevitable. The question was whether rockstar developers are good for a team.

Throughout my career I’ve heard leaders bellow from the towers – “[insert name here] is a rockstar! We need to hire more rockstars like [repeated name here].” Obviously, if we had a team of rockstars we’d build incredible applications so quickly and with such high quality that our competition could never catch us, right?


Look, I’ve never met a literal rock star, but I’ve watched my fair share of VH1’s Behind the Music. Stars want to be stars. Maybe they don’t even want to be stars, I don’t know, but they seem to gravitate towards being the center of attention. There is an air of arrogance and ego, literally putting themselves ahead of the rest of the band.

Rock bands usually only have one frontman – one star. Many frontmen have kept playing over the years, using the same band name but rotating through an assortment of supplemental musicians. To them it’s not about the group – it’s about themselves.

How good would a song be if you had 5 people shredding sick guitar solos at the same time? I mean, I’d watch it, but it would be a disaster and hardly resemble a coherent song. Or you’d have a bunch of people jockeying for the lead role. Here’s what happens when you put a bunch of rockstars on the same team. Don’t actually watch that – it’s not good.

Contrast that with a jazz band. Typically, everyone shines at some point, but while one musician is delivering their solo, the rest of the group is supporting them.

It’s a similar situation on most sports teams. There may be one person that scores most of the points most of the time, but teams rarely win having a single, dominating star. Even the best person on a sports team has to support, coach and mentor other people on the team if they want to succeed as a group. Jordan needed Luc Longley for the whole team to succeed.


I’ve been interviewing and hiring developers for a long time and spoken with more amazingly talented people than I can count. People who could program circles around me. But software development is about so much more than hands-on-the-keyboard development. Software development requires differing opinions, reciprocal coaching, and an ability to check your ego at the door.

A team full of big egos will spend more time debating theoretical performance of an algorithm than actually building a simple solution that solves the problem.

A team of passionate, competent developers who care about getting a working solution in front of customers will naturally select a solution that works. That solution may not be the most technically impressive, but it will solve the problem. And solve it correctly.

While the team of rockstars is fighting with each other about which framework is best for implementing a complex distributed messaging solution in order to save form data, a team of passionate, competent developers will have built an entire application that does everything needed. No more, no less.

A team full of big egos is truly a recipe for a toxic team. These types of people tend to believe that their opinions are always correct. These teams will bicker and argue incessantly over inconsequential details.

But what about a single rockstar? Surely you need that person to carry the rest of the team.

A team with a single rockstar will silence the more reserved people on the team, which kills great ideas before they are ever introduced. In my experience, the rockstars rarely pair program and rarely coach. They rarely write unit tests, let alone practice test driven development (if their initial instincts are always right, then the tests just slow them down!). They create a mirage of short-term speed, hiding the fact that quality is slipping and that most of the team doesn’t know what’s going on.

A team of passionate, competent developers will support each other. Their ego doesn’t blind them to the fact that they are human and will make mistakes. They will find simple solutions. They will outperform a team of rockstars every time. This is the team I want.

What do you think? Have you worked on a team of “rockstars”? How did it go?

When will you be home?

What time will you be home today? It’s a simple question. If you’re like most people, you go to work and come home at about the same time every day. You should be an expert at estimating when you’ll arrive home. I bet you cannot predict the time that you’ll get home tonight, though. I bet you’d be even less accurate if I asked you to predict what time you’ll be home six months from now.

Sometimes I feel like this is what we ask software teams to do when we ask them for estimates. Software developers build software every day. We should be able to predict how long it will take to build new software – but we can’t reliably do it! There are ways to improve our estimates, though.

Let’s jump back to the commute example. If your commute is short then you’ll be accurate most of the time. Sometimes you’ll have a meeting that runs late. Maybe it’s Friday and your calendar is clear so you take off early. You can be way off from time to time, but pretty often you’ll be pretty darn accurate. This is why we want to break deliverables into small pieces. Small pieces have less uncertainty.

Things get less predictable the further out you go. Going from a half-mile commute to a one-mile commute will increase the variability. Going from one mile to twenty miles increases that variability by orders of magnitude. There are simply more unknowns as the length of your commute grows. Traffic, accidents, weather – there are a lot of variables that can affect the actual time.
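
A quick back-of-the-envelope simulation shows the effect. It optimistically assumes each mile’s delay is independent of the others; in real traffic one accident slows every mile behind it, so actual variability grows even faster than this sketch suggests.

```python
import random

def commute_spread(miles, trials=10_000):
    """Simulate a commute where each mile is expected to take about two
    minutes but varies (lognormal: delays run bigger than time saved).
    Returns a rough 10th-90th percentile band, in minutes."""
    totals = sorted(
        sum(2 * random.lognormvariate(0, 0.5) for _ in range(miles))
        for _ in range(trials)
    )
    return totals[trials // 10], totals[trials * 9 // 10]

for miles in (1, 5, 20):
    low, high = commute_spread(miles)
    print(f"{miles:2d} miles: likely between {low:.0f} and {high:.0f} minutes")
```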

Big features and deliverables have this same problem. There are simply too many unknowns to be accurate: undocumented limitations of libraries, complex algorithms, changing requirements, and so on. And asking software developers to commit to anything with that level of variance isn’t fair.

But it’s done anyway and leads to all kinds of dysfunctions. Unrealistic estimates based on little knowledge are treated as promises. Broken promises lead to distrust. Distrust leads to infighting, incessant status reporting, and pissed off developers.

So let’s stop asking for estimates that are months away. I don’t even know what time I’ll be home tonight 🙂

Loss Aversion and Tech Debt

Humans are loss-averse. We weight the pain of losing something far more heavily than the pleasure of gaining an identical item. For example, I’d be more upset about losing $10 than I’d be happy about gaining $10. If I buy a meal and hate it, I’ll likely finish it anyway.

In general, people would rather gamble on a 50% chance of losing $1000 than simply give up $500. Down $50 at the blackjack table? Odds are most people will try to win it back rather than take the loss. Curiously, most people would rather accept a guaranteed $500 than accept a 50% chance of making $1000. Irrational? Yup, but extremely predictable.
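
To make the math explicit: each pair of options above has an identical expected value, so a purely rational actor would be indifferent. A two-line check:

```python
# Expected values of the choices described above.
sure_loss   = -500
gamble_loss = 0.5 * (-1000) + 0.5 * 0   # also -500

sure_gain   = 500
gamble_gain = 0.5 * 1000 + 0.5 * 0      # also 500

print(sure_loss, gamble_loss)   # -500 -500.0
print(sure_gain, gamble_gain)   # 500 500.0
```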

Loss Aversion is the fancy name for this phenomenon. People prefer avoiding losses to acquiring gains, which drives them to become more risk-tolerant in the face of loss. I think it can help explain how we build up and continue to live with technical debt in software development.

Tech debt is a useful metaphor describing the long-term consequences inflicted upon a code base by deferring work. Similar to financial debt, we often choose to avoid doing the “right thing” now in favor of a faster path to the end result. The interest accrued by the debt adds up over time and can cause major problems.

There are lots of reasons that software engineers knowingly take on tech debt – deadlines, lack of knowledge or skills, too much work in progress – the list goes on. Sometimes it is unavoidable, sometimes not. Every project has some level of debt, though.

Paying off accumulated technical debt is where I see the ties into loss aversion. The time spent fixing a hastily implemented data access strategy, for example, is time not spent implementing a cool new feature. There is rarely any directly visible customer value delivered by paying off technical debt. In most people’s eyes, this is a loss of time, opportunity, and resources.

We are irrationally risk-tolerant in the face of this loss. Instead of spending $500 to pay off the debt, we’ll flip the coin, let the problems grow, and take the risk of losing $1000. Who knows, maybe the problems won’t surface for a long time. Maybe never. Maybe tomorrow, though.

So how do we fix this if the human mind is hardwired to avoid losses?

Shift the mindset of technical debt. Knowingly taking on technical debt is a loss, not a gain. We are losing the ability to easily respond to future requirements; we are not gaining a new feature in a shorter time frame. And existing tech debt should be seen as a sunk cost – it’s lost, and it’s better to forget the past.

If we accept the current state rather than treating tech debt as an incurred loss we will be less likely to gamble with the possibility of future losses. And hopefully our minds will start to blast warning sirens as we consider taking on new technical debt in the future.

Please, Stop Saying “I Can’t”

I’m as guilty as anyone when it comes to uttering the words “I can’t…”. That’s almost never actually true, though.

Here are some things I know that I can’t do:
  • I can’t fly by flapping my arms
  • I can’t speak Mandarin
  • I can’t eat my weight in Oreos

Here are some things I’ve claimed that I can’t do:
  • I can’t take a break to eat lunch
  • I can’t help you find a solution to problem XYZ
  • I can’t work on your project

The first list contains physical or scientific impossibilities. The second list contains choices. Unless you are locked in a room, you can take a break to eat lunch. Doing so represents a choice to temporarily stop doing one thing and start doing another.

There are influences that go into all of our choices and there are consequences to all of our choices. By joining a meeting we may be making a choice to skip a phone call or hold off on sending an email. There are plenty of reasons why one activity may be more important than another.

So instead of claiming that you cannot do something, start making your choices explicit. “I can’t take a break” instead becomes “I am choosing not to take a break because task XYZ is more important right now”. Recognize that you have a choice. Decisions are easier when we aren’t held hostage by what we claim we can and cannot do.

PS – The same goes for saying “I have to”. You don’t have to work late. You don’t have to send that email right now. Most of the things we have to do boil down to trade-offs.

Believe me, there is much that you “can’t” do or “have” to do 🙂

On Hiring Techies

I’ve had the good fortune to have worked on some incredible teams. The best teams I have been a part of each had a very structured and rigorous interview process. It took longer than average to hire new team members, but the tenure of team members was high and our attrition rates were unbelievably low. The teams gelled and that collaboration was reflected in the applications that we built.

An interview process like this has meant a lot of interviews over the course of my career – speaking with dozens, maybe hundreds, of potential candidates. I started writing this as a single blog post, but it grew too large. Here’s what has worked for me, broken into a short blog series.

  • Evaluate Potential, Not Accomplishments
  • Coding Challenge
  • The Team Interview
  • Hire For Cultural Fit

Other Interviewing Inspiration

Johanna Rothman has some great books, articles and blogs on hiring technical people. She’s also an infinitely better writer than I, so I strongly suggest reading her work.

On Hiring Techies – Evaluate Potential, Not Accomplishments

This is a part of a series on Hiring Techies.

Evaluate Potential, Not Accomplishments

I don’t spend a ton of time reading resumes. Depending on the position we’re hiring for, I may look for a few critical skills but I’m mainly looking for themes that tell me that the candidate has a passion for technology. Do you contribute to an open source project? Do you blog? Do you have a history of attending or even speaking at conferences? Show me that technology is an important part of your life.

Don’t get me wrong. I do not want to read about your hobbies or what you do for fun. The fact that you have been the president of your local Pokemon club for 3 years or that you’re an avid White Sox fan is irrelevant. I want to get a sense of what you will bring to the team.

Does the candidate’s resume only highlight individual accomplishments? Are there any bullet points about past teams’ successes? How about any experience introducing new technologies or organizing brown bag lunch sessions? I can typically spot a true team player based on how they highlight their successes and what they are proud of.

I want to work with a team of leaders. I want a team of mentors. There are millions of people who can write a web application. There are very few people who can write a web app and help me build a better team.

I’m looking for technological aptitude. A laundry list of technical languages and tools is indeed impressive, but it should never be the reason for hiring a candidate. Worse yet, it should never be the reason for passing on an interview opportunity or turning a candidate down. It took me a long time to realize that I don’t need to find someone who fulfills every item on my wish list.

It’s important to remember that when we hire, we’re hiring a person and not a bag of skills.

Skills and experience are important, of course, but this should support your decision to hire someone rather than be the basis of your decision. This is probably the most frequent mistake I’ve seen folks make. Teams have missed out on great candidates because they were missing some specific tool in their belt. I’ve also seen very smart people hired who end up being culturally destructive.

I can teach an Eclipse user Visual Studio. I can teach a C# developer JavaScript. I cannot teach someone how to be passionate. I cannot teach someone how to fit into our culture.

Every position is different and the skills that are critical for the candidate to be successful in that position will vary. If your domain requires certain skills or experience then, absolutely, screen for that. I would expect this to be the exception rather than the rule, though.

On Hiring Techies – Coding Challenge

This is a part of a series on Hiring Techies.

Coding Challenge

Of course, if I’m evaluating a candidate to join my team I want to ensure that they know what they’re talking about. There are many ways to do this. Some people slam a candidate with technical questions for hours. Some people choose to do a whiteboard session, solving a problem in real time.

Like most interviewers, I prefer to start with a short technical phone screen. If it’s obvious that the candidate has the critical skills we need, I’ll send out a coding challenge to be completed within the next few days. This is a short but non-trivial set of acceptance criteria – something that an average developer can complete in a few hours. Some folks balk at this request, and that actually tells me a lot about the candidate – it’s an immediate red flag. Most candidates are happy to be given an opportunity to show what they can do and submit a solution within a day or two.

At this point the team reviews the submitted solution and decides whether or not to bring the candidate in for an interview. We make it clear that the solution that was crafted will be discussed during the interview.

As programmers, we spend considerably more time reading and discussing code than we do writing it. We need to be able to clearly explain our ideas and be willing to listen to feedback. The discussion I have with a candidate about their submitted solution will tell me more about their technical skills than a barrage of technical questions ever would. We’ll review design decisions, the application structure, SOLID principles and clean code, any technologies or frameworks used, and so on.

I also love reviewing the unit tests that were submitted (or having a discussion regarding the lack of tests). We’ll talk about TDD – why it was or wasn’t applied. It’s always fascinating to hear how the tests influenced the design.

I will very rarely challenge a candidate to solve some problem in person. The reason I don’t like to have a candidate code live is that the interview itself is already high pressure. Most of us have experienced this kind of test when interviewing. I want to see what candidates can do when they’re at their best – not when they have a bunch of strangers staring at them. Programming isn’t about memorization. Anyone can find framework documentation online in seconds. Programming is about problem solving, and I want to see how candidates think about problems.

Working code is one of the best ways I’ve found to evaluate a candidate’s technical skill level.