Improve communication to deliver your software on specification

As a Product Owner (PO), I cannot stand it when my team does not deliver the increment (in its entirety!) they promised by the end of the sprint. Indeed, I spend time with my customers finding out what they want. After that, I design user features which I transform into work packages. I then prioritize, refine, and plan with the team. We even seal an agreement together before we start a new sprint. After all that, when my team doesn’t deliver everything they committed to, I have the unpleasant impression that all that preparation work was just wasted effort. All that SCRUM overhead for nothing! I feel like Rigor Mortis in Aye, Dark Overlord:

“- We did not see this problem coming”
“- It took more time than we expected”
“- The library is poorly documented”
“- Our reference system was down”
“- We did not understand it that way”
“- Someone on the team was sick for three days”

my teams of goblins would say, for example, to creatively justify why they don’t deliver on specification at the end of the sprint. Because they failed, some features have to be postponed to a later sprint and are therefore not delivered on time. Sometimes, bugs are discovered during software demonstrations to our customers. Suddenly, the software just doesn’t work, “for no reason”, and the goblins invoke the “demo effect”. In the end, Rigor Mortis has no choice but to dole out the withering look.


Of course, I can, as a PO, also be responsible for failure. Sometimes, I would guide my teams of goblins toward the wrong features. We are indeed all familiar with the tree swing project management cartoon:


It is pretty easy for a PO to misunderstand the customer’s needs, and even when they are understood correctly, the PO can fail in her mission to communicate what she wants to the team. Also, when the PO uses inappropriate means to monitor the team’s progress, she is fully responsible for failure. Discussing with the team members (e.g. in the daily meetings) is not sufficient to get a glimpse of what is going on. Instead, what the PO needs is numbers to assess progress. As Lord Kelvin said, “when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind”. In software development, “working software is the primary measure of progress”, as stated by the 7th agile principle. Therefore, we need to find a way to link what our customer wants with what our team is developing in a somewhat measurable way, which is precisely what I would like to address now by presenting … a free communication tool.

Time to escape from Rigor Mortis’ inferno

Let’s set the agile metaphor aside for now and focus on the actual problems. Assume a customer comes around and asks:

“- hey, I would like you to add a logon functionality to my platform”

We have two problems. On the one hand, we need to figure out exactly what the customer wants. On the other hand, we need to make sure that our software team precisely understands what we want and can figure out by themselves how to make it happen efficiently and reliably.

From the customer to the “Three Amigos”

According to the customer, the feature can be summarized as the following user story:

As a shop manager,

I want to log on the management dashboard,

so that I can access my shop's sensitive data.

But, well, that user story involves many aspects. To begin with, the customer needs to decide what kind of credentials they want to use. A username and password? Can the username be an e-mail address? Do they want to enable multi-factor authentication (MFA)? Also, do we accept any password or do we enforce a password policy? Furthermore, how long should a logon remain valid? Do they need a “remember me” feature? Finally, what should the feature look like visually?

Discussing with the customer might lead to the following Gherkin specification:

Feature: Manager can log on the management dashboard

  As a shop manager,
  I want to log on the management dashboard,
  so that I can access my shop's sensitive data.

  Scenario: The manager provides valid credentials on the admin application
    Given a registered manager
    When she logs on the admin application with valid email and password
    Then she is granted access to her dashboard

  Scenario: The manager provides wrong credentials
    Given a registered manager
    When she logs on the admin application with invalid credentials
    Then she gets unauthorized access

--

Feature: Password policy

  As a user,
  I need to be encouraged to employ strong passwords,
  for the sake of security.

  Scenario Outline: The password is invalid

    A password complies with the password policy if, and only if,
    it satisfies the following criteria:

    - contains at least 8 characters
    - mixes alpha-numerical characters
    - contains at least one special character

    Given the password "<non-compliant password>"
    Then it is not compliant

    Examples:
      | non-compliant password |
      | 1234                   |
      | 1l0v3y0u               |
      | ufiDo_anCyx            |
      | blabli 89qw lala hI    |
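The password policy above is simple enough to sketch directly in code. Here is a minimal Python checker; the is_compliant helper is hypothetical, not part of any library, and each non-compliant example from the table fails at least one of the three criteria:

```python
import string

def is_compliant(password: str) -> bool:
    """Check a candidate password against the policy above."""
    has_letter = any(c.isalpha() for c in password)
    has_digit = any(c.isdigit() for c in password)
    has_special = any(c in string.punctuation for c in password)
    return len(password) >= 8 and has_letter and has_digit and has_special

# "1234" is too short, "1l0v3y0u" and "blabli 89qw lala hI" lack a
# special character, and "ufiDo_anCyx" lacks a digit.
```

Writing the checker down makes the policy unambiguous for developers, testers, and the customer alike, which is exactly the point of the specification workshop.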

That is a summary of our discussion with the customer. We put ourselves in the shoes of a manager using the system and came up with use cases, or scenarios. Note that the above specification is written in Gherkin syntax, but you can choose any syntax of your liking. As far as I am concerned, I like to use Gherkin syntax, which is supported by many programming languages, or Gauge markdown syntax, which is supported by fewer programming languages (C#, Java, JavaScript, Python, and Ruby, at the time of this writing). You can find more information on those two possibilities here, along with some other choices.

In addition to the above specification, the customer gives us carte blanche for the visual aspects. That’s all for the high-level use cases. In essence, they want no MFA and no “remember me” feature. Instead, they want to keep it simple and stupid, with a basic email / password authentication and a simple password policy.

However, the story doesn’t end here. The feature needs to be implemented, and that’s where we involve the “Three Amigos”. While it is cool to have come up with a clear statement of what the customer wants, there may be missing scenarios checking “what would happen if …”. For example, our developers and testers might find some more non-compliant passwords. In addition, the above high-level features surely don’t cover the whole logon feature. For example, what does it mean to be logged on? How can we validate a logon? The team may choose, for example (but is not limited to), to go for PASETO or JWT authentication, for which the next, more technical, feature might be designed:

Feature: Authenticated users validate their JWTs

  As a user,
  I want to validate my JWT,
  so that I can infer whether my session is still valid.

  Scenario: A valid JWT validates
    Given a registered user
    And the user has logged in with valid credentials
    When she validates her JWT
    Then she gets her user id

  Scenario: An expired JWT does not validate
    Given a registered user
    When she validates an expired JWT
    Then she gets the error message
      """
      Could not verify JWT: JWT expired
      """

  Scenario: A JWT with inconsistent payload does not validate
    Given a registered user
    And the user has logged in with valid credentials
    When she modifies the JWT payload
    And validates the JWT
    Then she gets the error message
      """
      Could not verify JWT: JWT error
      """

  Scenario Outline: A non-JWT does not validate
    Given the non-JWT "<non-jwt>"
    When the user validates the JWT
    Then she gets the error message
      """
      <error message>
      """

    Examples:
      | description   | non-jwt        | error message                     |
      | random string | my-invalid-jwt | Could not verify JWT: not a jwt   |
      | empty string  |                | Could not verify JWT: empty token |
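To make those error messages concrete, here is a self-contained Python sketch of an HS256 token verifier that produces exactly the messages from the feature. It is hand-rolled purely for illustration; a real implementation would use an established library such as PyJWT, and make_jwt / validate_jwt are hypothetical helpers:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url_encode(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def make_jwt(payload: dict, secret: bytes) -> str:
    """Build a signed HS256 token: header.payload.signature."""
    header = _b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url_encode(json.dumps(payload).encode())
    signature = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}." + _b64url_encode(signature)

def validate_jwt(token: str, secret: bytes) -> dict:
    """Return the payload if the token is valid, raise ValueError otherwise."""
    if not token:
        raise ValueError("Could not verify JWT: empty token")
    parts = token.split(".")
    if len(parts) != 3:
        raise ValueError("Could not verify JWT: not a jwt")
    header, body, signature = parts
    try:
        payload = json.loads(_b64url_decode(body))
        given_sig = _b64url_decode(signature)
    except Exception:
        raise ValueError("Could not verify JWT: not a jwt")
    expected_sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    # a tampered payload no longer matches the signature ("inconsistent payload")
    if not hmac.compare_digest(given_sig, expected_sig):
        raise ValueError("Could not verify JWT: JWT error")
    if payload.get("exp", 0) < time.time():
        raise ValueError("Could not verify JWT: JWT expired")
    return payload
```

Each scenario above maps to one of the branches: empty token, malformed token, bad signature, expired token, and finally the happy path returning the payload with the user id.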

Note in passing how our initial user story maps to three different Gherkin features. There is no one-to-one mapping between user stories and Gherkin features, except, maybe, in a project’s initial phase. User stories are a planning tool while Gherkin features are a communication tool. They do not live on the same layer.

We can imagine all sorts of agile / v-model / whatever processes to discuss the updated specification with the customer or the stakeholders and further increase clarity on the feature. That is high-level business communication of what the feature should look like.

Nevertheless, although we can say we’ve made it clear what there is to do, it is still difficult to estimate when the logon feature will be delivered, because we have put no thought yet into how to make it happen. In order to establish that estimation, let’s implement the above tests. Indeed, we now have text files describing how the feature is supposed to behave. Why not write code that emulates a manager trying to log on her dashboard? When the dev team does so, they will discover what high-level interfaces, objects, and APIs they need to interact with, which will settle the overall architecture for our logon feature. As Uncle Bob writes in his book Agile Software Development, Principles, Patterns, and Practices, “the act of writing acceptance tests first has a profound effect upon the architecture of the system” (Chapter 4, “Testing”). That is much the same kind of experience as that of Uncle Bob and Robert S. Koss in The Bowling Game: An example of test-first pair programming, except that their experience focuses on unit testing, and therefore on implementation details, rather than on the system’s overall architecture.

Depending on how your organization is structured, the whole team, the architects, or the team leads will write those tests as a preparation for the planning. In so doing, they’ll surely stumble upon some surprises and might even spike the functionality to acquire deeper knowledge of the topic. If we can avoid a blind planning “poker” (or equivalent), we drastically reduce the risk of disappointing our stakeholders. The more information the team gets on the implementation, the less risky the poker will be.

Let’s take a simple scenario as an example to illustrate how the specification could be linked to test code. Assume we want to implement

Scenario: A valid JWT validates
  Given a registered user
  And the user has logged in with valid credentials
  When she validates her JWT
  Then she gets her user id

A possible implementation of it in Python could look like this (e.g. with the behave library):

@given(u'a registered user')
def step_impl(context):
    # the auth_fixtures contain registered user data
    context.current_user = context.auth_fixtures[0]


@given(u'the user has logged in with valid credentials')
def step_impl(context):
    # the User class abstracts out communication with the api through a client
    user = User(context)
    context.current_jwt, _ = user.login(
        context.current_user['email'],
        context.current_user['password'])


@when(u'she validates her JWT')
def step_impl(context):
    token = context.current_jwt
    # this function abstracts out communication with the api through a client;
    # it makes a context.response_payload available, which will be used in the "then" step
    validate_token(context, token)


@then(u'she gets her user id')
def step_impl(context):
    assert context.response_payload['userId'] == context.current_user['id']

That code is the result of the team’s architectural debates. The team members find compromises, look for technologies, and spike, until they come up with a complete set of implemented Gherkin steps triggering the most pragmatic, minimalist, and cost-efficient implementation of the desired features.

The above Python code snippet makes use of the software your team is going to develop. Before the start of a development iteration, the scenarios fail to run, because the corresponding implementations do not exist yet. As time goes by, more and more of those Gherkin scenarios provide success feedback. When the team is done with the implementation, that code runs flawlessly and provides living documentation of what is going on in the software. That documentation is very powerful: every feature can be annotated with architectural decisions and sketches, and directly relates to running production code.

Now we have a specification along with its underlying tests implemented. Our team knows what to do and how. When they run those tests, they know how far they have progressed towards their goal. Planning / tasking is easy because the features have been refined and spiked. If the development team implements continuous integration, Rigor Mortis can even get live feedback on the development progress. When his goblins are on a mission, he can monitor their doings and get a better feeling of what they are doing and how, for example by means of a dashboard of the following kind, updated upon each and every goblin push to the repository’s main branch:

Those are precisely the numbers we talked about earlier in this post. The dashboard clearly shows how many scenarios are passing, failing, or inconclusive. We can put numbers on development progress. We can measure how much working software the team has come up with so far. From the results depicted by that particular dashboard, we can infer that the team will soon be ready to deliver their committed user stories.
Putting that live reporting and live documentation in place is fairly easy. For example, with the Gitlab Server from the Jelastic marketplace on Hidora,


or with Hidora’s outstanding Gitlab as a Service, you can define pipeline jobs on your repository that run the Gherkin tests, pack their results into an xml file, generate the pickles report, and publish it. Assuming you have a docker image pullable from some docker registry, say

my-docker-registry/pickles

defined like this:

FROM mono:latest AS unpacker

ARG RELEASE_VERSION=2.20.1

ADD https://github.com/picklesdoc/pickles/releases/download/v${RELEASE_VERSION}/Pickles-exe-${RELEASE_VERSION}.zip /pickles.zip

RUN apt-get update \
  && apt-get install -y unzip \
  && mkdir /pickles \
  && unzip /pickles.zip -d /pickles

FROM mono:latest

COPY --from=unpacker /pickles /pickles

you can for example complete your gitlab pipelines like this in .gitlab-ci.yml for a SpecFlow acceptance test project:

stages:
- build
- test
- deploy
- review
- publish-pickles-reports



[...]



# here you deploy e.g. your staging environment
# the scripts are kept to their minimum for the sake of readability
# usually, you would want to ensure that your environment is in stopped state
# and you would also want to wait until all k8s deployments / jobs have been deployed
deploy:
  stage: deploy
  image: my-docker-registry/devspace:latest
  variables:
    <your deployment variables>
  environment:
    name: $YOUR_STAGING_ENV_NAME
    # we deploy to our staging domain
    url: $URL_TO_YOUR_STAGING_ENVIRONMENT
  before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
  - devspace deploy --build-sequential -n $KUBE_NAMESPACE -p staging
  only:
  - master



# here you perform your acceptance tests on the deployed system
acceptance-test:
  stage: review
  image: mcr.microsoft.com/dotnet/sdk:5.0-alpine-amd64
  variables:
    <your acceptance test variables>
  script:
  # this command assumes you have JunitXml.TestLogger and NunitXml.TestLogger installed in your .net core / .net 5 project
  - |
    dotnet test ./features/Features.csproj \
      --logger:"nunit;LogFilePath=.\features\nunit-test-reports\test-result.xml" \
      --logger:"junit;LogFilePath=.\features\junit-test-reports\test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
  artifacts:
    reports:
      # gitlab does not understand nunit, we need junit test data here
      # for the gitlab test reporting
      junit:
      - ./features/junit-test-reports/*.xml
    paths:
    - ./features/junit-test-reports/*.xml
    # the following xml files will be used in the pages job
    - ./features/nunit-test-reports/*.xml
  only:
  - master



[...]



# here you publish your living documentation with the test results
pages:
  stage: publish-pickles-reports
  image: my-docker-registry/pickles:latest
  dependencies:
  - acceptance-test
  script:
  # this command generates a pickles documentation with test results
  # in the ./public folder, which is the source folder for the gitlab
  # pages
  - |
    mono /pickles/Pickles.exe --feature-directory=./features \
      --output-directory=./public \
      --system-under-test-name=my-test-app \
      --system-under-test-version=$CI_COMMIT_SHORT_SHA \
      --language=en \
      --documentation-format=dhtml \
      --link-results-file=./features/nunit-test-reports/test-result.xml \
      --test-results-format=nunit3
  artifacts:
    paths:
    - public
    expires_in: 30 days
  only:
  - master

The pickles report, along with its dynamic HTML specification, can be published on a GitLab page, as in the above snippet, or packed into a docker container that can then be deployed on your kubernetes cluster, your Jelastic environment, or your CDN.
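For the container route, a minimal image serving the generated report could look like this; it assumes the ./public folder produced by the pages job is available in the docker build context, and nginx:alpine is just one convenient static file server:

```dockerfile
FROM nginx:alpine

# the ./public folder produced by the pickles job becomes the web root
COPY public /usr/share/nginx/html
```

Building and pushing that image in an extra pipeline job gives you a self-contained, deployable copy of the living documentation for every commit on the main branch.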

Final words

Back to the agile metaphor, I hope it is clear to any SCRUM team that “Product Owner” is a synonym for “Mr / Mrs Clarity”. A PO should have her requirements crystal clear before she hands them over to her development teams for implementation. Furthermore, to avoid surprises at the end of a sprint, a PO needs a clear, live measurement of how the sprint is going. And a sprint burn-down chart says exactly nothing about how much working software the team has come up with. Of course, my suggestion will not prevent every failure, but it will contribute a lot to reducing failures and to happier stakeholders.

On another note, when newcomers join your development team and you want to get them productive straight away, you can think of the Gherkin features and corresponding step implementations as a kind of “do-it-yourself” kit with high-level guidance. When they run the feature tests, they autonomously get feedback on how they’re doing with their implementation. The overall architecture has already been defined; the newcomer only needs to fill in the gaps with implementation details. The same holds when you need to delegate software development to a third-party company. Instead of writing long Word documents, you might want to give the methodology in this post a try. Code doesn’t lie and cannot be misinterpreted. Word documents can, especially when you are, say, a Swiss company writing your documents in English for a company in the Czech Republic. In that case, even though everyone has a good level of English, cultural contexts sometimes make it difficult for people from different countries to clearly understand each other, turning software development delegation into a game of Chinese whispers.

Written by

Laurent MICHEL
17/11/2021

Product owner at Softozor and Hidora customer since 2017, using the Jelastic PaaS and the managed GitLab CI/CD to reduce the infrastructure overhead of their e-commerce platform.
