Improve the reliability of your Jelastic manifests with living documentation

Jelastic manifests are sometimes so complex that it is difficult to track all the little details that can fail during an installation. Most of the time, a complex manifest is also difficult to understand. Indeed, Jelastic manifests are often a bunch of scripts written in multiple languages, and it is easy to get lost when the infrastructure they define grows bigger. Each script is responsible for a small detail that eventually makes the whole system stick together. As a software provider on the Jelastic platform, you surely want to make sure that the manifests you push to the marketplace still work when Jelastic releases a new platform update. At the very least, you want to be notified when your manifests stop working, so that you can fix them before people use them. I have been slowed down in my projects too many times just because a manifest from the marketplace suddenly stopped working: I had installed it many times in the past, but after a new Jelastic update, it no longer worked. Sometimes I would switch to equivalent software whose manifest just works. Other times, because there was no alternative, I had to notify my Jelastic provider and wait until the manifest was fixed.

I am of the opinion that Jelastic manifest providers would benefit a lot from some kind of validation of their manifest installation, as well as from documentation synchronized with what their manifests deliver. Automated testing is one of the cornerstones of any professional software. A living documentation allows you to do just that: validate your manifests and document what they do.

Let me show you what I mean with a simple example.

The hasura manifest

Hasura drastically simplifies the creation of web APIs on top of a database (especially PostgreSQL).

I am currently developing a manifest to install hasura on Jelastic, and I thought I would provide you with a concrete example of how to make living documentation happen in a simple case. You can find the code in this gitlab repository. For the sake of conciseness, let’s focus on the validation of part of the faas engine. The method I describe below generalizes to the development of any kind of manifest.

It all starts with a Gherkin feature:

# faasd.feature

Feature: Install faas engine

  The faas engine will allow us to bind hasura actions
  and events to functions.

  Background: Docker node is available

    Given a jelastic environment with a docker node is available in group 'faas' with image 'ubuntu:latest'

    And the faas engine is installed

  Scenario: Log on

    When a user logs on the faas engine

    Then she gets a success response

  Scenario: Deploy new function

    When a user deploys the 'hello-python' function to the faas engine

    Then she gets a success response

  Scenario: Call function

    Given the 'hello-python' function has been deployed on the faas engine

    When a user invokes it with payload 'it is me'

    Then she gets the response


      Hello! You said: it is me


Do you see that nice description of what the faas manifest wants to achieve? The nice and clear English wording? And that feature file is minimalistic: I could have added pictures, scenario descriptions, or more details to the feature description.

The above few scenarios make sure our Jelastic manifest successfully installs faasd and that we can perform basic operations on the faas engine. With this simple feature file, we describe the bare minimum we need to achieve with our faas engine after its successful installation: we need to

  • log on to the faas engine
  • deploy functions to the faas engine
  • invoke functions on the faas engine

In essence, the above feature file is your specification. In a typical project, you will have a lot of feature files. It is therefore pretty handy to turn them into a dynamic HTML format. You can achieve that e.g. with pickles, which offers both a UI and a console tool, making it a perfect fit for your gitlab pipeline! The static website generated by pickles makes it easy to browse your features. You can even attach test results to that web report, making it a good tracker of your team’s progress in the current development iteration.
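To give an idea, a gitlab job along the following lines could publish the pickles report as a browsable artifact. The job name, docker image, and paths are assumptions, not taken from the actual project; check the pickles console tool documentation for the options relevant to your setup:

```yaml
# hypothetical job publishing the feature files as a static website
pickles-docs:
  stage: test
  image: mono:latest  # assumption: any image able to run the pickles console tool
  script:
    - mono ./tools/Pickles.exe
        --feature-directory=$CI_PROJECT_DIR/features
        --output-directory=$CI_PROJECT_DIR/public
        --documentation-format=dhtml
  artifacts:
    paths:
      - public
```

The `--documentation-format=dhtml` option produces the dynamic html report mentioned above; pickles can also link a test-results file into the report.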

In the remainder of this article, we want to make that specification live, and we’ll focus on the cucumber way. An alternative to cucumber is gauge.

Python example setup

Let’s first focus on the code setup required to bring those test scenarios to life. Frameworks exist for most of the popular programming languages, as you can see here. Let’s assume we’ll program the tests in python, with behave, because it is very easy. First, install behave:

pip install behave

Then, as I started the project, the source tree of this test project looked like this:


├── features
│   ├── environment.py
│   ├── faasd.feature
│   ├── fixtures.py
│   ├── steps
│   │   └── faasd.py
├── manifest.jps
└── serverless
    └── manifest.jps

On the one hand, we have the features folder, where all the magic of the behave tests happens. On the other hand, we have our jps manifests that we want to test and document. In the features folder, we find an environment.py setting up the testing environment. In essence, this is where we apply the fixtures defined in fixtures.py, i.e. where you set up what happens before all tests, before each feature, before each scenario, after all tests, etc. For example, the environment.py might look like this:


# environment.py

from fixtures import *

from behave import use_fixture


def before_all(context):
    # the following fixtures are applied once, before all tests
    use_fixture(api_clients, context)
    use_fixture(random_seed, context)
    use_fixture(worker_id, context)
    use_fixture(commit_sha, context)
    use_fixture(project_root_folder, context)
    use_fixture(serverless_manifest, context)
    use_fixture(faas_port, context)


def before_scenario(context, scenario):
    # the following fixtures are applied before each scenario
    use_fixture(clear_environment, context)

In our jps manifest tests, we will typically need to create Jelastic environments, clear them out after testing, verify things on the environments, etc. That is why we need Jelastic API clients. To ease testing of the hasura-jps manifests, we wrote a jelastic client in python. You will see it in action below. Moreover, because we might have a lot of tests running concurrently (for example from different branches of our repository), we have to choose our Jelastic environment names carefully. That explains the random_seed, worker_id, and commit_sha fixtures. The fixtures are defined like this:


# fixtures.py

import os
import random

from behave import fixture

from jelastic_client import JelasticClientFactory


@fixture
def random_seed(context):
    # seed the RNG used to generate random environment names
    random.seed()


@fixture
def worker_id(context):
    context.worker_id = 'master'
    return context.worker_id


@fixture
def commit_sha(context):
    # this is data coming from the command-line, see .gitlab-ci.yml below
    userdata = context.config.userdata
    context.commit_sha = userdata['commit-sha']
    return context.commit_sha


@fixture
def project_root_folder(context):
    # this is data coming from the command-line, see .gitlab-ci.yml below
    userdata = context.config.userdata
    context.project_root_folder = userdata['project-root-folder'] \
        if 'project-root-folder' in userdata else '.'
    return context.project_root_folder


@fixture
def api_clients(context):
    # this is data coming from the command-line, see .gitlab-ci.yml below
    userdata = context.config.userdata
    api_url = userdata['api-url']
    api_token = userdata['api-token']
    api_client_factory = JelasticClientFactory(api_url, api_token)
    # this partially wraps the marketplace.Jps API
    context.jps_client = api_client_factory.create_jps_client()
    # this partially wraps the environment.Control API
    context.control_client = api_client_factory.create_control_client()
    # this partially wraps the environment.File API
    context.file_client = api_client_factory.create_file_client()


@fixture
def faas_port(context):
    context.faas_port = 8080
    return context.faas_port


@fixture
def new_environment(context):
    context.current_env_name = get_new_random_env_name(
        context.control_client, context.commit_sha, context.worker_id)
    yield context.current_env_name
    # tear the environment down after the test
    env_info = context.control_client.get_env_info(context.current_env_name)
    if env_info.exists():
        context.control_client.delete_environment(context.current_env_name)


@fixture
def serverless_manifest(context):
    context.serverless_manifest = os.path.join(
        context.project_root_folder, 'serverless', 'manifest.jps')
    return context.serverless_manifest

In essence, behave makes a context available to all test scenarios. The fixtures put data into that context, so that it is available in the test steps we will define later. For example, we don’t want to create our Jelastic API clients in our step methods. Therefore we define them once and for all in a fixture and make them available through the context.
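To illustrate why commit_sha and worker_id matter for concurrent test runs, here is a hypothetical sketch of how a unique environment name could be composed. The real get_new_random_env_name also consults the control client to avoid name collisions; the helper below is an illustration only:

```python
import random


def compose_env_name(commit_sha, worker_id, rng=random):
    # hypothetical: combine the commit sha, the CI worker id, and a random
    # suffix so that concurrent pipelines never pick the same name
    suffix = rng.randint(0, 9999)
    return f"{commit_sha}-{worker_id}-{suffix:04d}"


print(compose_env_name('a1b2c3d', 'master'))
```

Two pipelines running on different commits (or different workers) can then never clash, and the random suffix disambiguates retries of the same commit.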
The corresponding feature testing pipeline looks like this in gitlab:

# .gitlab-ci.yml

stages:
  - test

feature-tests:
  stage: test
  # you need at least behave, jelastic-client, sh
  image: some-python-image-with-the-relevant-dependencies-installed
  script:
    - |
      behave --junit --junit-directory ./features/test-reports --tags ~wip \
        -D project-root-folder="${CI_PROJECT_DIR}" \
        -D api-url="${JELASTIC_API_URL}" \
        -D api-token="${JELASTIC_ACCESS_TOKEN}" \
        -D commit-sha="${CI_COMMIT_SHORT_SHA}"
  artifacts:
    paths:
      - $CI_PROJECT_DIR/features/test-reports/*.xml
    reports:
      junit:
        - $CI_PROJECT_DIR/features/test-reports/*.xml
Note the -D options on the command-line, which are accessed via the userdata in our fixtures.
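behave exposes those -D key=value pairs as a dict-like object under context.config.userdata, which is why a fixture can fall back to a default when a flag is omitted. A quick sketch, with a plain dict standing in for the real userdata object and made-up values:

```python
# plain dict standing in for behave's dict-like context.config.userdata
userdata = {
    'api-url': 'https://app.example-hoster.com/1.0/',  # made-up value
    'commit-sha': 'a1b2c3d',                           # made-up value
}

# equivalent to the conditional used in the project_root_folder fixture:
# use the command-line value when present, otherwise default to '.'
project_root_folder = userdata.get('project-root-folder', '.')
print(project_root_folder)
```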
Now we can address the first scenario, titled "Log on". The procedure for implementing the other scenarios is the same. The implementation goes along these lines:



# steps/faasd.py

from behave import given, then, when

from jelastic_client import DockerSettings, EnvSettings, NodeSettings

# FaasClient and host_has_port_open are helper utilities from the test project
faas_node_group = 'faas'
faas_node_type = 'docker'


@given(u'a jelastic environment with a docker node is available in '
       u'group \'{node_group}\' with image \'{docker_image}\'')
def step_impl(context, node_group, docker_image):
    node_type = 'docker'
    env = EnvSettings(shortdomain=context.current_env_name)
    docker_settings = DockerSettings(image=docker_image, nodeGroup=node_group)
    node = NodeSettings(docker=docker_settings,
                        flexibleCloudlets=16, nodeType=node_type)
    created_env_info = context.control_client.create_environment(env, [node])
    assert created_env_info.is_running()


@given(u'the faas engine is installed')
def step_impl(context):
    context.jps_client.install(
        context.serverless_manifest, context.current_env_name)
    context.current_env_info = context.control_client.get_env_info(
        context.current_env_name)
    faas_node_ip = context.current_env_info.get_node_ips(
        node_type=faas_node_type, node_group=faas_node_group)[0]
    assert host_has_port_open(faas_node_ip, context.faas_port)


@when(u'a user logs on the faas engine')
def step_impl(context):
    faas_node_ip = context.current_env_info.get_node_ips(
        node_type=faas_node_type, node_group=faas_node_group)[0]
    # faasd stores the gateway's basic-auth credentials on the node;
    # read them back through the file client (call signature assumed)
    username = context.file_client.read(
        context.current_env_name,
        '/var/lib/faasd/secrets/basic-auth-user',
        node_group=faas_node_group)
    password = context.file_client.read(
        context.current_env_name,
        '/var/lib/faasd/secrets/basic-auth-password',
        node_group=faas_node_group)
    faas_client = FaasClient(faas_node_ip, context.faas_port)
    context.exit_code = faas_client.login(username, password)


@then(u'she gets a success response')
def step_impl(context):
    assert context.exit_code == 0

I am not giving the definition of everything here because it would take too long. I hope the code is self-explanatory and its intent is clear. In the first given step, we use our Jelastic API client to create a new Jelastic environment with a docker node running the specified docker image in the specified Jelastic node group. The second given step uses the Jelastic marketplace.jps API to install our faasd manifest and waits until the faasd node has port 8080 open, which is necessary for all the subsequent operations. The when step uses our home-made FaasClient to log on to faasd. The FaasClient is essentially a shell wrapper around the faas-cli executable. Finally, the then step checks that the log on was successful. You can find all the details in our public repository. The source code of our Jelastic API client is also open-source, as you can see here. With the above step definitions in place, we have linked our plain English specification with python code and made it alive.
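As an aside, a helper like host_has_port_open can be as simple as a TCP connection attempt. A minimal sketch, not necessarily the project's exact implementation:

```python
import socket


def host_has_port_open(host, port, timeout=3.0):
    # try to open a TCP connection; any failure means the port is not reachable
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Polling this predicate in a loop (with a deadline) is what lets the given step wait until faasd is actually ready to accept requests.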


I hope I was able to make you curious about acceptance testing and to motivate you to take your first steps towards a good living documentation of your Jelastic manifests. Of course, besides all the benefits of living documentation, there are downsides. Writing documentation is an overhead, writing good specifications takes practice, and you have to produce test code. Additionally, the example I’ve presented here tests Jelastic environment creation, which is very slow, hence making your tests very slow. There are, however, ways to optimize. For example, you could try to use a behave version supporting concurrency, but there is nothing officially supported right now (only a pending pull request on github). You can also run your tests on pre-created environments: define feature-level tags that apply feature-level fixtures creating the relevant Jelastic environments for a feature once and for all. All scenarios of that feature then run on those pre-configured Jelastic environments.
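To sketch that tag-driven idea: behave calls a before_tag hook for every tag it encounters, from which you would normally call use_fixture. The snippet below mimics that dispatch with a plain registry and a hypothetical tag name, so the mechanism is visible without a full behave run:

```python
# minimal sketch of tag-driven, feature-level fixtures; in real behave code
# you would call use_fixture(...) from the before_tag hook in environment.py
fixture_registry = {}


def fixture_for(tag):
    # register a fixture function under a tag name
    def register(fn):
        fixture_registry[tag] = fn
        return fn
    return register


@fixture_for('fixture.faas-environment')  # hypothetical tag name
def faas_environment(context):
    # hypothetical: create the faas environment once for the whole feature
    context['faas_env'] = 'pre-created'


def before_tag(context, tag):
    # behave calls this hook for every tag on a feature or scenario
    fixture_fn = fixture_registry.get(tag)
    if fixture_fn is not None:
        fixture_fn(context)
```

Tagging a feature with @fixture.faas-environment would then set up its environment once, and every scenario in that feature would reuse it instead of paying the environment-creation cost again.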

Written by

Laurent MICHEL

Product owner at Softozor and Hidora customer since 2017, using the Jelastic PaaS and the managed GitLab CI/CD to reduce the infrastructure overhead of their e-commerce platform.
