
What I Should Have Said

Mike Birbiglia is one of my favorite comedians. His effortless mix of storytelling and comedy is something I haven’t seen anyone else be able to pull off.

A recurring theme throughout his act is finding himself in a high-stakes conversation. He builds the audience up to the line “What I should have said… was nothing.” It’s an incredibly simple concept to grasp, but incredibly difficult to practice.

This is something I’ve been trying to get better at. I’m chock-full of opinions and thoughts that I share freely and unsolicited at times. Instead, I’m trying to listen more, talk less, and understand before sharing what’s on my mind.

I think this will make me a better manager and will help me better focus on the people around me.

Look Mom, No Hands! Test Driving Code Without A Mocking Framework

I love TDD. I haven’t found a more effective way to incrementally design and build software in the 15+ years that I’ve been doing this. I have formed and evolved a lot of opinions about how I approach TDD, though.

Recently, I wrote a post for EuroStar Software Testing titled Look Mom, No Hands! Test Driving Code Without A Mocking Framework.

This is a topic that has been on my mind for a long time. It’s not intended to start a mocks vs stubs flamewar or anything like that. Instead, I wanted to walk through my progression of TDD practices over the years and share what I’ve learned.

Don’t get me wrong – test-driving with a mocking framework is better than not test-driving at all. I just prefer stubs.

Looking back at the test cases in the Booked source code which utilize PHPUnit’s mocking framework (yes, there are still a lot), I can see just how entangled the test code is with the implementation of the production code. The source for Booked changes frequently and it is covered by more than 1000 unit tests. New features are introduced and, occasionally, some of the unrelated tests fail.

They fail because there is too much specified in the mock setup. In order to validate the behavior of some area of the code, I have to set up unrelated mock expectations just to get collaborating objects to return usable data. If I change the implementation of an object so that it no longer uses that data, my test shouldn’t fail – but with mocks, it does.

A couple of years ago I stopped using PHPUnit’s mock objects and I’ve seen the resiliency of my unit test suite increase. I’ve also seen my development speed and design quality improve. Instead of ambiguous mock expectations scattered throughout the tests, I’ve built up a library of stub objects which have logical default behavior.

When test-driving increments of functionality, I’m able to concentrate on the behavior that I need to implement rather than getting distracted with test setup and management.
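
To make that concrete, here’s a minimal sketch of the kind of hand-rolled stub I have in mind. The ReservationRepository interface, FakeReservationRepository stub, and ReservationService are hypothetical stand-ins for illustration, not actual Booked classes:

<?php

// A hypothetical collaborator interface.
interface ReservationRepository
{
    public function loadAll(): array;
}

// A hand-rolled stub with logical default behavior.
// Tests only set the data they actually care about.
class FakeReservationRepository implements ReservationRepository
{
    public $reservations = [];

    public function loadAll(): array
    {
        return $this->reservations;
    }
}

// A hypothetical class under test.
class ReservationService
{
    private $repository;

    public function __construct(ReservationRepository $repository)
    {
        $this->repository = $repository;
    }

    public function reservationCount(): int
    {
        return count($this->repository->loadAll());
    }
}

class ReservationServiceTest extends PHPUnit\Framework\TestCase
{
    public function testCountsExistingReservations()
    {
        // No mock expectations to satisfy - just the data this test needs.
        $repository = new FakeReservationRepository();
        $repository->reservations = ['morning booking', 'afternoon booking'];

        $service = new ReservationService($repository);

        $this->assertEquals(2, $service->reservationCount());
    }
}

The mock-based version of this test would also need a ->method('loadAll')->willReturn(...) expectation on a PHPUnit mock just to hand back usable data – exactly the kind of coupling that breaks when the implementation changes.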

More focus. Better design. Higher quality. No mocks.

Booked 2.7 is out in beta!

I’m thrilled to announce that Booked 2.7 has been released in beta. This version is packed full of long-requested features.

I don’t care about the details – how do I get it?

Eager to try it out? I don’t blame you 🙂

Download the beta from SourceForge

Give feedback and ask questions

Try the live beta demo

This is still beta software. While I’ve tested this pretty well, I wouldn’t recommend production usage just yet!

So, what’s all this new stuff?

I’m glad you asked!

Charging for Reservations

One of the most requested features over the years was the ability to hook into payment gateways to charge for reservations. 2.7 comes with integrated support for Stripe and PayPal.

You can now let your users purchase credits to be used for reservations. This harnesses the power of the Booked credit system – allowing admins to set peak and standard usage and configure different credit rates for different resources – and seamlessly integrates it with payments. Your users will see an upfront cost for a reservation along with their credit balance. (For security reasons, this functionality is not enabled in the demo.)

Set up your payment details by setting allow.purchase to true in the credits section of your application configuration. Then open up Application Management > Payments to set the cost per credit, view the transaction log, and configure payment gateway details.
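
For reference, here’s a minimal sketch of what that setting might look like in config/config.php. The exact key names are an assumption on my part – confirm them against the configuration file shipped with 2.7:

<?php

// Credits section of the application configuration.
// Key names shown here are illustrative - check the shipped config file for the exact names.
$conf['settings']['credits']['enabled'] = 'true';
$conf['settings']['credits']['allow.purchase'] = 'true';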

Terms of Service

Gaining a user’s consent before allowing access to a resource is a critical part of the reservation workflow for many organizations. You can now upload terms, link to terms, or simply embed them directly into the application. You can configure when to prompt users for acknowledgment of the TOS, either for each reservation or during registration.

Add terms of service from Application Management > Reservations, then choose Terms of Service from the right-hand drop-down.

Schedule Availability

A schedule may only be available for a portion of the year. Seasonality, staffing, or business needs may allow reservations for a limited period of time. Admins can now configure an optional open and close date per schedule. So if you have an event that runs May – July, it’s simple to ensure only the available dates are shown and limit reservations to that date range.

Set schedule availability from Application Management > Schedules, then edit the Available dates.

Overlapping Resource Reservations

A fundamental use of Booked is to help organizations ensure that resources are never double-booked. But many of you have asked to bend that rule. It’s now possible to let all resources on a schedule be booked by more than one user concurrently. Schedules set up to allow concurrent reservations will flip to the calendar view for a simple display of all activity.

Allow overlapping reservations from Application Management > Schedules, then change the option to allow resources to be reserved by more than one person at a time.

Fully Customizable Layouts

In most cases resource availability follows a fairly standard schedule – every 30 minutes between 9am and 5pm, for example. But there are some cases where you want to create very specific slots and prevent bookings at all other times. Say you want to set availability for two slots per month – an 8 hour slot on the first and second Friday. Switching your schedule to use a custom layout gives you full control to set specific availability times. These slots will show up as available on the calendar view and the schedule view will only show these times.

Customize your layout from Application Management > Schedules, then click Switch to a custom layout.

Distinct Add, Update, Cancel Notice Times

A simple but powerful feature of Booked is the ability to set how much notice must be given before a resource can be reserved. But until now, the same notice period applied to adding, updating, and cancelling a reservation. An admin can now configure a resource to require, for example, 24 hours notice for a reservation, 2 hours notice for an update, and 48 hours notice for a cancellation.

Set distinct notice times from Application Management > Resources, then set any of the made, updated, and deleted settings under the Access section for a resource.

Announcements on the Login Screen

Important announcements and updates can apply to all users. When posting announcements, admins can now choose where they show up. The same power you have to set display times and priorities works for login announcements, too!

Add a login announcement from Application Management > Announcements, then choose Login as the display page when creating it.

Default Group Membership

Would you like all new accounts to be added to groups by default? Finally, it’s possible by simply setting the group as a default.

Set default groups from Application Management > Groups. Check Automatically add new users to this group when adding or updating groups.

Multiple Resource Images

Another long-requested feature finally makes it to prime time. There’s not much to explain here – you can now upload an unlimited number of resource images. If there is more than one image for a resource, we’ll show a carousel and let your users scroll through all the pictures.

Add multiple resource images from Application Management > Resources, then change the resource image. You can add as many images as you want!

Embed a Booked Calendar Directly in Another Website

Until now you’ve had to either use an IFRAME (ech!) or use the API to display a Booked calendar on another website. We now have the ability to include a single script reference that loads a configurable view of Booked events. Just enable public visibility for a resource or schedule, then drop one line of HTML on a page!

And more

There are dozens of other enhancements and fixes in 2.7 to make Booked the best resource scheduling software you’ll ever use!

  • Added ability to set comma or semicolon delimited admin.email configuration setting to allow multiple admin emails
  • Added ability to send a reservation to Google Calendar
  • Added ability to select a resource image while adding
  • Added ability to begin a reservation directly from Slack
  • Added ability to set view-only resource permissions
  • Added ability to sync group membership from LDAP and CAS
  • Added blackouts to schedule and resource calendar view
  • Added view calendar page
  • Added ability to require reservation title and description
  • Added user groups to report output
  • Resource QR code will open ongoing reservation if it requires check in
  • Upgraded jQuery to latest
  • Bugfixes

A Shout Out for Hosting

Love Booked, but hate the idea of installing, managing, and supporting yet another application? We offer professional Booked hosting directly from the authors. For just $10/month you get unlimited usage of Booked, premium support, early access to features, and more.

Start a no-obligation 30 day free trial now!

We Built the Wrong Thing – From Ambiguity to Stability

Let’s set the scene. You’re out to lunch with your team celebrating a successful launch of a new feature. Your product owner interrupts the conversation to relay an email from a disappointed stakeholder.

From: Stakeholder, Mary (mary.stakeholder@nickkorbel.com)
Sent: Thursday, April 19, 2018 11:51 AM
To: Owner, Product (product.owner@nickkorbel.com)
Subject: Can we talk?

Thank you for all of your work, but this doesn’t do what I thought it would. Can we talk?

– Mary

The discussion around the table quietly shifts to how nobody ever knows what they want. “We followed all the agile best practices”, a senior developer frustratedly quips, “How did we build the wrong thing?”

What went wrong?

When you get back to the office, you huddle up with Mary and pull up the acceptance criteria.

Story: Adding events to a calendar
As a user
I want to enter events into a calendar
So that everyone knows when people are available
Scenarios:
Given I've entered an event into the calendar
When I view the calendar
Then I can see that event

“This is what you asked for, right?”

Mary replies, “Yes – but it’s not what I wanted.”

“What do you mean?”

“Look at this. I want to set up a 3 day training session, but I only have one date picker. And every new event is the same color, so it’s really hard to see who is booked when. And I have no way to know when a new event is created. And…”

“Oh,” you interrupt. “We didn’t know you wanted that. You had all of those meetings with our PO. Why didn’t you ask?”

Mary, now frustrated with the amount of time seemingly wasted, responds “I thought we were all on the same page!”

Specification by Example

Is this a familiar story? Even using the de facto acceptance criteria format so popular in agile, it’s very easy to build ambiguous expectations. Ambiguity leads to disappointed customers and frustrated developers.

Years ago, I read Gojko Adzic’s Specification by Example and it changed the way I view user stories. I cannot possibly do justice to all of the incredible advice and ideas from the book in a single blog post, but I’ll try to summarize.

Instead of a PO or BA working with customers to capture the stories and later reviewing those stories with developers, Gojko recommends running specification workshops. We follow a simple workflow for this:

Derive scope from goals > Specify collaboratively > Illustrate requirements using examples > Refine specifications > Frequently validate the application against the specifications

Deriving scope from goals is probably the biggest change a team will need to make. Instead of being presented with a set of acceptance criteria, the team is presented with a goal. For example, stating the goal of knowing people’s availability instead of the scope of building a calendar.

Working with the stakeholders, the team collaboratively identifies the acceptance criteria. Maybe a calendar is what is built. Maybe it’s a simple list. Maybe it’s a search. The point is that we start with the goal in mind and collectively identify the scope. This eliminates the translation layer from stakeholder to product owner to development team.

Ambiguity--

The next couple steps are iterative. We extract real-world examples from the scenarios, and illustrate the acceptance criteria using those examples.

Instead of

Given I've picked a date
When I book that date
Then that date is booked

We have something like

Given Mary has selected 10:00 am on April 18th, 2018
When she completes the booking
Then the calendar indicates that Mary is unavailable on April 18th 2018 between 10:00 am and 10:30 am

It’s only a slight change, but it has massive effects. Using real examples leads to real questions. What if Mary is already busy at that time? What kind of indication should we show? Is the default event length 30 minutes? Can that be changed?

Ambiguity--

And here’s where it gets fun

Most teams write automated end-to-end tests for their applications, but a lot of the time these tests are defined and written after the functionality is built. We end up simply validating that what we built works how we built it. Even if the tests are built based on more traditional acceptance criteria, the person writing the test has to make some assumptions about how to make the application behave in the way that meets the criteria.

If we have a Cucumber feature file that looks like this:

Story: Adding events to a calendar
As a user
I want to enter events into a calendar
So that everyone knows when people are available
Scenarios:
Given I've entered an event into the calendar
When I view the calendar
Then I can see that event

The person implementing the tests has no choice but to make up some dates to pick and the validation will likely be something generic.

When writing automated acceptance tests based on real-world examples, the tests can match the acceptance criteria 1:1. Not only does this enhance the clarity of how to test the application, it also brings gaps in the shared understanding of a story to light early.

Story: Adding events to a calendar
As an event organizer
I want to be able to indicate any events I'm participating in
So that everyone knows when I am available
Scenarios:
Given Mary has selected 10:00 am on April 18th, 2018
When she completes the booking
Then the calendar indicates that Mary is unavailable on April 18th 2018 between 10:00 am and 10:30 am

Ambiguity--

Automating the Acceptance Criteria

One common frustration of test automation is maintenance and fragility. Features change and evolve over time. When tests are driven from an interpretation of the specifications rather than the actual specifications, maintenance becomes a challenge. It’s difficult to trace a specification change to an associated test (or set of tests). So minor changes in specifications tend to have major impacts on tests.

If the specifications are automated, instead of translated into automated tests, you know exactly what test is affected. In changing the specification, you are forced to change the test and underlying code. You can make micro changes and receive instant feedback that the application still works.
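
In PHP, for example, a Behat context can automate that concrete scenario almost word for word. This is only a sketch under some assumptions – the CalendarPage helper is hypothetical – but it shows how each step in the specification maps directly to a step definition:

<?php

use Behat\Behat\Context\Context;

class BookingContext implements Context
{
    private $calendar;

    public function __construct()
    {
        // Hypothetical helper for driving the application under test.
        $this->calendar = new CalendarPage();
    }

    /**
     * @Given /^Mary has selected (.+) on (.+)$/
     */
    public function maryHasSelected($time, $date)
    {
        $this->calendar->selectSlot('Mary', $date, $time);
    }

    /**
     * @When /^she completes the booking$/
     */
    public function sheCompletesTheBooking()
    {
        $this->calendar->completeBooking();
    }

    /**
     * @Then /^the calendar indicates that Mary is unavailable on (.+) between (.+) and (.+)$/
     */
    public function theCalendarShowsMaryUnavailable($date, $start, $end)
    {
        if (!$this->calendar->isUnavailable('Mary', $date, $start, $end)) {
            throw new Exception("Expected Mary to be unavailable on $date between $start and $end");
        }
    }
}

Change the example and the corresponding step fails (or shows up as undefined), so the trace from specification to test stays one-to-one.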

Stability++

No silver bullets

This isn’t an overnight change. Like most things, it takes deliberate practice. Practice facilitating specification discussions with non-technical people. Practice finding the right type and number of examples.

The return on this investment can be huge. Specification workshops often lead to significant reduction in scope because technical people and business people are speaking the same language and understand the problem in the same way.

The resulting specifications are free of ambiguity, so everyone has a shared understanding of the exact behaviors they should expect from the application. Validating the application against the specifications in an automated way ensures the application is always working the way everyone understands and expects.

Eliminating the specification ambiguity builds a shared understanding between everyone involved, which leads to long term application stability. And that’s good for everyone.

Have you tried this?

I’m interested in hearing from readers about their experiences. Have you tried this or something similar? How did it go?

How Transaction Costs Influence Story Size

Of all the aspects of the INVEST principle, I think the attribute that software developers have the most influence over is the S: Small.

Small stories have huge benefits. We can control what gets built with much more granularity. Less scope means fewer requirements for the team to understand. Less for the end user to understand. Less code for the developer to change. Less code to test. Less code to release and maintain.

Less risk.

Ron Jeffries recently tweeted a short video on story slicing that triggered a question in my mind about why engineers resist small stories. As I thought about it more, connections formed to Don Reinertsen’s work in  Managing the Design Factory and Principles of Product Development Flow. There he describes the “transaction cost” of an activity. The transaction cost is the cost/time of everything needed to execute an activity.

If we put this in the context of releasing software, this may include the time to build, test, and deploy an application. If those tasks are manual and time consuming, human nature is to perform these tasks less often.

Overall costs follow a U-curve: performing an activity becomes prohibitively expensive when the size of the activity is either too small or too big. For example, think about a story for user account registration. It would be costly to create a story for each word in the form text. Likewise, the cost of a story would be huge if it includes creating the form, validating it, sending notifications, account activation, error handling, and so on.

So, back to Ron’s video and push for thinly sliced stories. I posed the question about what to do when developers resist small stories. To dig into that, we need to understand why developers resist small stories.

A common argument that I hear against small stories is that it’s not efficient to work that way. We developers are always looking to maximize the work not done 🙂

This can be a problem, because the true work for a story includes everything to get it production ready. Writing each story. Developing each story. Code reviewing each story. Testing each story.

That is the transaction cost of a story.

Driving down those costs drives down the resistance from the developers. There are lots of ways to reduce these costs. We can pair program to eliminate the out-of-band code review. Use TDD and automated acceptance testing to build quality in from the start. Create an automated build and deploy pipeline to continuously deliver stories.

As we reduce the overhead of each story, we can slice stories thinner and thinner.

But wait, there’s more!

As Bill Caputo rightfully pointed out, big “efficient” stories include a lot more work than may be necessary. Thinking back about an account registration story, we may put account registration and account activation into the same story. That builds in an assumption that we have to activate accounts – which we may not.

Not splitting stories means we may efficiently build something we don’t even need. Peter Drucker famously said –

There is nothing so useless as doing efficiently that which should not be done at all.

Addendum – Be careful what you measure

Another reason developers may resist breaking things down into small deliverables is how they’re measured.

In most scrum worlds, a team is measured on velocity. Small stories mean fewer story points, and thus lower velocity.

Uh oh, let’s make bigger stories and drive higher velocity.

To address this problem, Ron suggested giving “credit” for stories completed rather than story points completed. This encourages splitting large stories into thin stories with little pieces of the overall functionality.

What Do DevOps, Test Automation, and Test Metrics Have in Common?

This is a guest post by Limor Wainstein

Agile testing is a method of testing software that follows the twelve principles of Agile software development. An important objective in Agile is releasing high-quality software frequently, and automating tests is one of the main practices that achieves this aim through faster testing efforts.

DevOps is a term that Adam Jacob (CTO of Chef Software) defines as “a word we will use to describe the operational side of the transition to enterprises being software led”. In other words, DevOps is an engineering practice that aims at unifying software development (Dev) and software operation (Ops). DevOps strongly advocates automation and monitoring at all stages of the software project.

DevOps is not a separate concept from Agile; rather, it extends Agile to also include Operations in its cross-functional team. In a DevOps organization, different parts of the team that were previously siloed collaborate as one, with a single objective: to deliver software fully to the customer.

Agile and DevOps both recognize the value of automation. But there must be a way to measure automation, its progress, and how effective it is in achieving the aims of Agile and DevOps – this is where test metrics become useful.

In this article, you’ll find out about different types of test automation, how automating tests can help your company transition to DevOps, and some relevant test metrics that Agile teams and organizations transitioning to DevOps can benefit from to improve productivity and achieve their aims.

Types of Test Automation

Test automation means using tools that programmatically execute software tests, report outcomes, and compare those outcomes with predicted values. However, there are different types of automation that aim to automate different things.

Automated Unit Tests

Unit tests are coded verifications of the smallest testable parts of an application. Automating unit tests corresponds with one of the main Agile objectives—rapid feedback on software quality. You must aim to automate all possible unit tests.

Automated Functional Tests

Functional tests verify whether applications do what the user needs them to do by testing a slice of functionality of the whole system in each test. Automating functional tests is useful because it saves time—typical testing tools can mimic the actions of a human, and then check for expected results, saving valuable time and improving productivity.

Automated Integration Tests

Integration tests combine individual software modules (units) and test how they work together. Again, by automating integration tests, you get tests that are repeatable and run quickly, increasing the chances of finding defects as early as possible, when they are cheaper to fix.

Test Automation and DevOps

DevOps is a culture that aims to reduce overall application deployment time by uniting development and operations in software-led enterprises. Automation is at the heart of the DevOps movement – reduced deployment time and more frequent software releases mean increased testing volume. Without automation, teams must run large numbers of test cases manually, which slows down deployment and prevents the DevOps movement from achieving its aims.

One potential pitfall that can hamper the transition to DevOps is not having the required automation knowledge. After all, test automation is technically complex. Acquiring the knowledge to effectively automate tests takes time and costs money. You can either hire expert consultants to get you up and running with automation, hire qualified automation engineers, or retrain current testing staff. Whichever option you choose, it’s clear that automating is essential for implementing a DevOps culture in your development teams.

Enter Test Metrics

What Are Test Automation Metrics?

Implementing automation blindly, without measuring it and improving on automated processes, is a waste of time. This is where test metrics provide invaluable feedback on your automated testing efforts – test automation metrics are simply measurements of automated tests.

Test automation metrics allow you to gauge the ROI of automated tests, get feedback on how efficiently tests find defects, and gain a host of other valuable insights.

How Can Test Automation Metrics Help DevOps?

By measuring testing duration, you find out whether current automation efforts are decreasing development cycles and accelerating time-to-market for software. If automated tests don’t run quicker than manual tests, then there are clearly technical issues with the automation efforts – perhaps the wrong tests are being automated.

How to Measure Test Automation

Some examples of test metrics used to measure test automation are:

  • Total test duration—a straightforward and useful metric that tracks whether automation is achieving the shared Agile and DevOps aim of faster software testing through increased automation.
  • Requirements coverage—a helpful metric to track what features are tested, and how many tests are aligned with a user story or requirement. This metric provides insight on the maturity of test automation in your company.
  • Irrelevant results—this measurement highlights test failures resulting from changes to the software or problems with the testing environment. In other words, you get insight on the factors that reduce the efficiency of automation from an economic standpoint. Irrelevant results are often compared with useful results, which are test results corresponding to a simple test pass or test failure caused by a defect.
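
As a rough illustration, computing these three metrics from a batch of test results might look something like the PHP sketch below. The result format is entirely made up – in practice the data would come from your test runner’s report:

<?php

// Hypothetical test results - in practice these would come from your test runner's report.
$results = [
    ['name' => 'user can log in',            'duration' => 2.1, 'outcome' => 'pass',  'requirement' => 'REQ-12'],
    ['name' => 'double booking is rejected', 'duration' => 4.8, 'outcome' => 'fail',  'requirement' => 'REQ-31'],
    ['name' => 'report export',              'duration' => 9.4, 'outcome' => 'error', 'requirement' => 'REQ-07'],
];

// Total test duration.
$totalDuration = array_sum(array_column($results, 'duration'));

// Requirements coverage: which requirements have at least one automated test.
$coveredRequirements = array_unique(array_column($results, 'requirement'));

// Irrelevant results: failures caused by the environment or unrelated changes,
// rather than a genuine pass or a defect.
$irrelevant = count(array_filter($results, fn ($r) => $r['outcome'] === 'error'));

printf("Total duration: %.1fs\n", $totalDuration);
printf("Requirements covered: %d\n", count($coveredRequirements));
printf("Irrelevant results: %d\n", $irrelevant);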

Closing Thoughts

The DevOps movement extends Agile and aims to modernize software development with faster releases of high-quality software. Testing is a common bottleneck in the development cycle, which can hamper any attempt to integrate a DevOps culture.

Test automation is the link that gets software testing up to the speed of development, helping to achieve the aims of Agile and DevOps.

However, there must be a way to track all attempts to automate tests, since test automation is, in itself, an expensive investment and a technical challenge. Test metrics provide valuable feedback to help improve automation and ensure positive ROI.

About Limor

Limor is a technical writer and editor at Agile SEO, a boutique digital marketing agency focused on technology and SaaS markets. She has over 10 years’ experience writing technical articles and documentation for various audiences, including technical on-site content, software documentation, and dev guides. She specializes in big data analytics, computer/network security, middleware, software development and APIs.

I Don’t Hire Rockstars

Wow, it’s been a really long time since I’ve posted anything. Booked has been keeping me very busy, but I’m going to make an attempt to publish more posts this year.

I was out to dinner with some friends the other night. These are guys that I worked with in a past life and some of the best teammates I’ve ever had, so I have a ton of respect for their opinions.

The topic of “rockstar” developers came up – I’m not sure how, but we’re a bunch of software developers so I suppose it was inevitable. The question was whether rockstar developers are good for a team.

Throughout my career I’ve heard leaders bellow from the towers – “[insert name here] is a rockstar! We need to hire more rockstars like [repeated name here].” Obviously, if we had a team of rockstars we’d build incredible applications so quickly and with such high quality that our competition could never catch us, right?

Ehhhhhhhhhhhhh…..

Look, I’ve never met a literal rock star, but I’ve watched my fair share of VH1’s Behind the Music. Stars want to be stars. Maybe they don’t even want to be stars, I don’t know, but they seem to gravitate towards being the center of attention. There is an air of arrogance and ego, literally putting themselves ahead of the rest of the band.

Rock bands usually only have one front man – one star. Many frontmen have kept playing over the years, using the same band name but rotating through an assortment of supplemental musicians. To them it’s not about the group – it’s about themselves.

How good would a song be if you had 5 people shredding sick guitar solos at the same time? I mean, I’d watch it, but it would be a disaster and hardly resemble a coherent song. Or you’d have a bunch of people jockeying for the lead role. Here’s what happens when you put a bunch of rockstars on the same team. Don’t actually watch that – it’s not good.

Contrast that with a jazz band. Typically, everyone shines at some point, but while one musician is delivering their solo, the rest of the group is supporting them.

It’s a similar situation on most sports teams. There may be one person that scores most of the points most of the time, but teams rarely win having a single, dominating star. Even the best person on a sports team has to support, coach and mentor other people on the team if they want to succeed as a group. Jordan needed Luc Longley for the whole team to succeed.

Luuuuuuuc!

I’ve been interviewing and hiring developers for a long time and spoken with more amazingly talented people than I can count. People who could program circles around me. But software development is about so much more than hands-on-the-keyboard development. Software development requires differing opinions, reciprocal coaching, and an ability to check your ego at the door.

A team full of big egos will spend more time debating theoretical performance of an algorithm than actually building a simple solution that solves the problem.

A team of passionate, competent developers who care about getting a working solution in front of customers will naturally select a solution that works. That solution may not be the most technically impressive, but it will solve the problem. And solve it correctly.

While the team of rockstars is fighting with each other about which framework is best for implementing a complex distributed messaging solution in order to save form data, a team of passionate, competent developers will have built an entire application that does everything needed. No more, no less.

A team full of big egos is truly a recipe for a toxic team. These types of people tend to believe that their opinions are always correct. These teams will bicker and argue incessantly over inconsequential details.

But what about a single rockstar? Surely you need that person to carry the rest of the team.

A team with a single rockstar will silence the more reserved people on the team, which kills great ideas before they are ever introduced. In my experience, the rockstars rarely pair program and rarely coach. They rarely write unit tests, let alone practice test driven development (if their initial instincts are always right, then the tests just slow them down!). They create a mirage of short term speed, hiding the fact that quality is slipping and that most of the team doesn’t know what’s going on.

A team of passionate, competent developers will support each other. Their ego doesn’t blind them to the fact that they are human and will make mistakes. They will find simple solutions. They will outperform a team of rockstars every time. This is the team I want.

What do you think? Have you worked on a team of “rockstars”? How did it go?

When will you be home?

What time will you be home today? It’s a simple question. If you’re like most people you go to work and come home about the same time every day. You should be an expert in estimating when you’ll arrive at home. I bet you cannot predict the time that you’ll get home tonight, though. I bet you’d be even less accurate if I asked you to predict what time you’ll be home six months from now.

Sometimes I feel like this is what we ask software teams to do when we ask them for estimates. Software developers build software every day. We should be able to predict how long it will take to build new software – but we can’t reliably do it! There are ways to improve our estimates, though.

Let’s jump back to the commute example. If your commute is short then you’ll be accurate most of the time. Sometimes you’ll have a meeting that runs late. Maybe it’s Friday and your calendar is clear so you take off early. You can be way off from time to time, but pretty often you’ll be pretty darn accurate. This is why we want to break deliverables into small pieces. Small pieces have less uncertainty.

Things get less predictable the further out you go. Going from a half-mile commute to a mile commute will increase the variability. One mile to twenty miles increases that variability by orders of magnitude. There are simply more unknowns as the size of your commute grows. Traffic, accidents, weather – there are a lot of variables that can affect the actual time.

Big features and deliverables have this same problem. There are simply too many unknowns to be accurate. Undocumented limitations of libraries, complex algorithms, changing requirements, for example. And asking software developers to commit to anything with that level of variance isn’t fair.

But it’s done anyway and leads to all kinds of dysfunctions. Unrealistic estimates based on little knowledge are treated as promises. Broken promises lead to distrust. Distrust leads to infighting, incessant status reporting, and pissed off developers.

So let’s stop asking for estimates that are months away. I don’t even know what time I’ll be home tonight 🙂

Loss Aversion and Tech Debt

Humans are loss-averse. We place an irrationally high weight on losing something compared to gaining an identical item. So, for example, I’d be more upset about losing $10 than I’d be happy about gaining $10. If I buy a meal and hate it, I’ll likely finish it anyway.

In general, people would rather gamble on a 50% chance of losing $1000 than give up a guaranteed $500 – even though the expected loss is the same $500 either way. Down $50 at the blackjack table? Odds are most people will want to try to win that back rather than take the loss. Curiously, most people would rather accept a guaranteed $500 than take a 50% chance of making $1000. Irrational? Yup, but extremely predictable.

Loss Aversion is the fancy name for this phenomenon. People prefer avoiding losses to acquiring gains, which drives them to become more risk-tolerant in the face of loss. I think it can help explain how we build up and continue to live with technical debt in software development.

Tech debt is a useful metaphor describing the long term consequences inflicted upon a code base by deferring work. Similar to financial debt, we often choose to avoid doing the “right thing” now in favor of a faster path to the end result. The interest accrued by the debt adds up over time and can cause major problems.

There are lots of reasons that software engineers knowingly take on tech debt – deadlines, lack of knowledge or skills, too much work in progress – the list goes on. Sometimes it is unavoidable, sometimes not. Every project has some level of debt, though.

Paying off accumulated technical debt is where I see the ties into loss aversion. The time spent fixing a hastily implemented data access strategy, for example, is time not spent implementing a cool new feature. There is rarely any directly visible customer value delivered by paying off technical debt. In most people’s eyes, this is a loss of time, opportunity, and resources.

We are irrationally risk-tolerant in the face of this loss. Instead of spending $500 to pay off the debt, we’ll flip the coin, let the problems grow, and take the risk of losing $1000. Who knows, maybe the problems won’t surface for a long time. Maybe never. Maybe tomorrow, though.

So how do we fix this if the human mind is hardwired to avoid losses?

Shift the mindset of technical debt. Knowingly taking on technical debt is a loss, not a gain. We are losing the ability to easily respond to future requirements; we are not gaining a new feature in a shorter time frame. And existing tech debt should be seen as a sunk cost – it’s lost, and it’s better to forget the past.

If we accept the current state rather than treating tech debt as an incurred loss we will be less likely to gamble with the possibility of future losses. And hopefully our minds will start to blast warning sirens as we consider taking on new technical debt in the future.