Improve the reliability of your Jelastic manifests with live documentation

Jelastic manifests are sometimes so complex that it is difficult to keep track of all the little details that can go wrong during an installation. Most of the time, a complex manifest is also difficult to understand: Jelastic manifests are often a collection of scripts written in multiple languages, and it’s easy to get lost as the infrastructure they define grows. Each script is responsible for a small detail that eventually holds the whole system together. As a software provider on the Jelastic platform, you certainly want to make sure that the manifests you release still work properly after Jelastic releases a new update to the platform. At the very least, you want to be notified when your manifests stop working, so that you can fix them before users install them. Too many times my projects have been slowed down just because a manifest on the marketplace no longer worked: I had installed it many times in the past, but after a new Jelastic update it broke. Sometimes I switch to an equivalent piece of software whose manifest just works. Other times, as there is no alternative, I have to notify my Jelastic provider and wait for the manifest to be fixed.

I would argue that Jelastic manifest vendors would benefit greatly from some sort of validation of their manifest installations, together with documentation that stays in sync with what their manifests actually deliver. Automated testing is one of the cornerstones of professional software, and live documentation lets you achieve both goals at once: validate your manifests and document what they do.

Let me show you what I mean with a simple example.

The Hasura manifest

Hasura greatly simplifies the creation of web APIs on top of databases (especially postgresql).

I’m currently developing a manifest to install Hasura on Jelastic, and I thought I’d use it as a concrete example of how to build live documentation in a simple case. You can find the code in this gitlab repository. For the sake of brevity, we will focus on validating one part of the installation: the faas engine. The method described below generalizes to the development of any type of manifest.

It all starts with a Gherkin feature file:

faasd.feature

    Feature: Installation of the faas engine

        The faas engine allows linking hasura actions and
        hasura events to serverless functions.

        Background: A docker node is available

            Given a jelastic environment with a docker node is available in group 'faas' with image 'ubuntu:latest'
            And the faas engine is installed

        Scenario: Login

            When a user logs on the faas engine
            Then she gets a success response

        Scenario: Deploying a new function

            When a user deploys the 'hello-python' function to the faas engine
            Then she gets a success response

        Scenario: Invoking the function

            Given the 'hello-python' function has been deployed to the faas engine
            When a user invokes it with payload 'it's me'
            Then she gets the response
                """
                Hello! You said: it's me
                """

Do you see this beautiful description of what the faasd manifest is meant to achieve? The clear, plain English wording? And this feature file is minimalist: I could have added images, scenario descriptions, or more detail in the feature description.

The above few scenarios ensure that our Jelastic manifest installs faasd and that we can perform basic operations on the faas engine. With this simple feature file, we describe the bare minimum we need to be able to do with our faas engine after a successful installation: log in, deploy a function, and invoke it.

In essence, the feature file above is your specification. In a typical project you will have a large number of feature files, so it is quite convenient to turn them into a browsable html format. You can do this, for example, with pickles, which comes with both a user interface and a console tool, making it ideal for your gitlab pipeline! The static website generated by pickles makes it easy to browse your features. You can even attach test results to this web report, making it a good tool for tracking your team’s progress in the current development iteration.
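
For example, a pipeline step generating the static site could look roughly like this; the flags below come from the pickles documentation, so double-check them against the version you install:

pickles --feature-directory=./features \
        --output-directory=./public/living-documentation \
        --documentation-format=dhtml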

In the remainder of this article, we want to bring this specification to life and will focus on the cucumber method. An alternative to cucumber is gauge.

Configuration of the Python example

Let’s focus first on the code configuration needed to bring these test cases to life. There are frameworks for most popular programming languages, as you can see here. Let’s assume we are going to write the tests in python, using behave, as it is very easy to use. First of all, install behave:

pip install behave

When I started the project, the source tree of this test project looked like this:

.
├── features
│   ├── environment.py
│   ├── faasd.feature
│   ├── fixtures.py
│   └── steps
│       └── faasd_steps.py
├── manifest.jps
└── serverless
    └── manifest.jps

On the one hand, we have the features folder, where all the behavioural testing magic happens. On the other hand, we have the jps manifests that we want to test and document. In the features folder, environment.py defines the test environment. Essentially, this is where we apply the fixtures defined in fixtures.py, i.e. where we define what happens before all tests, before each feature, before each scenario, after all tests, and so on. For example, the environment.py file might look like this:

environment.py

from fixtures import *
from behave import use_fixture


def before_all(context):
    # the following fixtures are applied before all tests
    use_fixture(api_clients, context)
    use_fixture(random_seed, context)
    use_fixture(worker_id, context)
    use_fixture(commit_sha, context)
    use_fixture(project_root_folder, context)
    use_fixture(serverless_manifest, context)
    use_fixture(faas_port, context)


def before_scenario(context, scenario):
    # the following fixtures are applied before each scenario;
    # new_environment (defined in fixtures.py) yields a fresh environment
    # name and deletes the corresponding environment after the scenario
    use_fixture(new_environment, context)

In our jps manifest tests, we typically need to create Jelastic environments, clean them up after the tests, verify a few things on those environments, and so on. This is why we need Jelastic API clients. To make testing the hasura-jps manifests easier, we have implemented a Jelastic client in python; you can see it in action below. Also, as many tests may run simultaneously (for example from different branches of our repository), we need to choose our Jelastic environment names carefully. This explains the random_seed, worker_id, and commit_sha fixtures. The fixtures are defined as follows:

# fixtures.py

import os
import random

from behave import fixture
from jelastic_client import JelasticClientFactory


@fixture
def random_seed(context):
    random.seed('hasura-jps-tests')


@fixture
def worker_id(context):
    context.worker_id = 'master'
    return context.worker_id


@fixture
def commit_sha(context):
    # this is data coming from the command-line, see .gitlab-ci.yml below
    userdata = context.config.userdata
    context.commit_sha = userdata['commit-sha']
    return context.commit_sha


@fixture
def project_root_folder(context):
    # this is data coming from the command-line, see .gitlab-ci.yml below
    userdata = context.config.userdata
    context.project_root_folder = userdata.get('project-root-folder', '.')
    return context.project_root_folder


@fixture
def api_clients(context):
    # this is data coming from the command-line, see .gitlab-ci.yml below
    userdata = context.config.userdata
    api_url = userdata['api-url']
    api_token = userdata['api-token']
    api_client_factory = JelasticClientFactory(api_url, api_token)
    # this partially wraps https://docs.jelastic.com/api/#!/api/marketplace.Jps
    context.jps_client = api_client_factory.create_jps_client()
    # this partially wraps https://docs.jelastic.com/api/#!/api/environment.Control
    context.control_client = api_client_factory.create_control_client()
    # this partially wraps https://docs.jelastic.com/api/#!/api/environment.File
    context.file_client = api_client_factory.create_file_client()


@fixture
def faas_port(context):
    context.faas_port = 8080
    return context.faas_port


@fixture
def new_environment(context):
    # setup: generate a unique environment name from the commit sha,
    # the worker id and a random suffix
    context.current_env_name = get_new_random_env_name(
        context.control_client, context.commit_sha, context.worker_id)
    yield context.current_env_name
    # teardown: delete the environment if the tests actually created it
    env_info = context.control_client.get_env_info(
        context.current_env_name)
    if env_info.exists():
        context.control_client.delete_env(context.current_env_name)


@fixture
def serverless_manifest(context):
    context.serverless_manifest = os.path.join(
        context.project_root_folder, 'serverless', 'manifest.jps')
    return context.serverless_manifest
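
The get_new_random_env_name helper lives in the repository’s test utilities. A minimal sketch of the idea, assuming uniqueness comes from combining the commit sha and worker id with a random suffix, could look like this:

import random
import string


def get_new_random_env_name(control_client, commit_sha, worker_id):
    # combine commit sha, worker id and a random suffix so that concurrent
    # pipelines never pick the same environment name
    while True:
        suffix = ''.join(random.choices(
            string.ascii_lowercase + string.digits, k=5))
        env_name = f'{commit_sha}-{worker_id}-{suffix}'
        if not control_client.get_env_info(env_name).exists():
            return env_name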
    

In essence, behave makes a context available to all test cases. Fixtures place items in that context so that they are available in the test steps we define later. For example, we don’t want to create our Jelastic API clients in every step method, so we define them once and for all in a fixture and make them available through the context.
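
Note that behave fixtures follow generator semantics: everything before the yield is setup, everything after it is teardown. This is how the new_environment fixture above cleans up the environments it creates. A tiny self-contained illustration (the fixture name is made up):

import os

from behave import fixture, use_fixture


@fixture
def scratch_file(context):
    # setup: create a scratch file and expose its path on the context
    context.scratch_path = '/tmp/hasura-jps-scratch.txt'
    open(context.scratch_path, 'w').close()
    yield context.scratch_path
    # teardown: runs automatically when the fixture's scope ends
    os.remove(context.scratch_path)


def before_scenario(context, scenario):
    # applying the fixture registers both its setup and its teardown
    use_fixture(scratch_file, context)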

The corresponding feature test pipeline looks like this in gitlab:

.gitlab-ci.yml

stages:
  - test

acceptance-test:
  stage: test
  # you need at least behave, jelastic-client, sh
  image: some-python-image-with-the-relevant-dependencies-installed
  script:
    - |
      behave --junit --junit-directory ./features/test-reports --tags ~wip \
        -D project-root-folder="${CI_PROJECT_DIR}" \
        -D api-url="${JELASTIC_API_URL}" \
        -D api-token="${JELASTIC_ACCESS_TOKEN}" \
        -D commit-sha="${CI_COMMIT_SHORT_SHA}"
  artifacts:
    reports:
      junit:
        - $CI_PROJECT_DIR/features/test-reports/*.xml
    paths:
      - $CI_PROJECT_DIR/features/test-reports/*.xml

Note the -D options in the command line, which are accessed via userdata in our fixtures.
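
Inside the fixtures and steps, these values surface through context.config.userdata, which behaves like a dictionary, so providing defaults is straightforward (a small illustration, not taken from the repository):

# reading -D command-line options inside a fixture or step
userdata = context.config.userdata
api_url = userdata['api-url']  # mandatory: raises KeyError when missing
root_folder = userdata.get('project-root-folder', '.')  # optional, with default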

We can now turn to the first scenario, Login. The procedure for implementing the other scenarios is the same. The implementation looks as follows:

faasd_steps.py

from behave import given, then, when
# EnvSettings & co. ship with the jelastic-client package
from jelastic_client import DockerSettings, EnvSettings, NodeSettings

# module paths below are illustrative; see the repository for the actual layout
from faas_client import FaasClient
from test_utils import host_has_port_open

# node group and type used by the serverless manifest
faas_node_group = 'faas'
faas_node_type = 'docker'


@given(
    u'a jelastic environment with a docker node is available in group \'{node_group}\' with image \'{docker_image}\'')
def step_impl(context, node_group, docker_image):
    node_type = 'docker'
    env = EnvSettings(shortdomain=context.current_env_name)
    docker_settings = DockerSettings(image=docker_image, nodeGroup=node_group)
    node = NodeSettings(docker=docker_settings,
                        flexibleCloudlets=16, nodeType=node_type)
    created_env_info = context.control_client.create_environment(env, [node])
    assert created_env_info.is_running()


@given(u'the faas engine is installed')
def step_impl(context):
    context.jps_client.install(
        context.serverless_manifest, context.current_env_name)
    context.current_env_info = context.control_client.get_env_info(
        context.current_env_name)
    faas_node_ip = context.current_env_info.get_node_ips(
        node_type=faas_node_type, node_group=faas_node_group)[0]
    assert host_has_port_open(faas_node_ip, context.faas_port)


@when(u'a user logs on the faas engine')
def step_impl(context):
    faas_node_ip = context.current_env_info.get_node_ips(
        node_type=faas_node_type, node_group=faas_node_group)[0]
    username = context.file_client.read(
        context.current_env_name,
        '/var/lib/faasd/secrets/basic-auth-user',
        node_type=faas_node_type,
        node_group=faas_node_group)
    password = context.file_client.read(
        context.current_env_name,
        '/var/lib/faasd/secrets/basic-auth-password',
        node_type=faas_node_type,
        node_group=faas_node_group)
    faas_client = FaasClient(
        gateway_url=faas_node_ip,
        gateway_port=context.faas_port)
    context.exit_code = faas_client.login(username, password)


@then(u'she gets a success response')
def step_impl(context):
    assert context.exit_code == 0

I’m not going to define everything here, as it would take too long. I hope the code is self-explanatory and its intent is clear. In the first given step, we use our Jelastic API client to create a new Jelastic environment holding a docker node with the specified docker image in the specified Jelastic node group. The second given step uses the Jelastic marketplace.jps API to install our faasd manifest and waits for the faasd node to have port 8080 open, which is required for all subsequent operations. The when step uses our homegrown FaasClient to log on to faasd; the FaasClient is essentially a shell wrapper around the faas-cli executable. Finally, the then step checks that the login was successful. You can find all the details in our public repository. The source code of our Jelastic API client is also open source, as you can see here. With the step definitions above in place, we have linked our plain English specification to python code and made it live.
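
For completeness, here is a rough sketch of what host_has_port_open and the FaasClient login could look like. The names come from the repository, but these simplified bodies are assumptions; the real FaasClient shells out to the faas-cli executable:

import socket
import subprocess


def host_has_port_open(host, port, timeout_seconds=5):
    # return True if a TCP connection to host:port succeeds within the timeout
    try:
        with socket.create_connection((host, port), timeout=timeout_seconds):
            return True
    except OSError:
        return False


class FaasClient:
    # thin wrapper around the faas-cli executable (simplified sketch)
    def __init__(self, gateway_url, gateway_port):
        self.gateway = f'http://{gateway_url}:{gateway_port}'

    def login(self, username, password):
        # faas-cli exits with code 0 on a successful login
        completed = subprocess.run(
            ['faas-cli', 'login', '--gateway', self.gateway,
             '--username', username, '--password', password])
        return completed.returncode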

Conclusion

I hope I have succeeded in arousing your curiosity about acceptance testing and motivating you to take your first steps towards good live documentation of your Jelastic manifests. Of course, besides all the advantages of live documentation, there are disadvantages. Writing documentation is an extra burden: writing good specifications takes practice, and you need to produce test code. Also, the example presented here creates a Jelastic environment per scenario, which is very slow, making the whole test suite slow. There are, however, ways to optimise. For example, you could try using a version of behave that supports concurrency, but there is nothing officially supported at the moment (only a pending pull request on github). You could also run your tests on pre-created environments: define feature-level tags that apply feature-level fixtures, creating the relevant Jelastic environments once for the whole feature. The scenarios of that feature then all run on the preconfigured Jelastic environments.
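
With behave, that last approach can be sketched with the before_tag hook in environment.py. The fixture and tag names below are made up for illustration:

from behave import fixture, use_fixture


@fixture
def preinstalled_faas_environment(context):
    # hypothetical: point the tests at a long-lived, pre-created environment
    context.current_env_name = 'faas-preinstalled'
    yield context.current_env_name
    # no teardown: the environment is reused across test runs


def before_tag(context, tag):
    # features tagged with @fixture.faas.preinstalled reuse that environment
    if tag == 'fixture.faas.preinstalled':
        use_fixture(preinstalled_faas_environment, context)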

Written By

Laurent MICHEL

03/02/2022

Product owner at Softozor and Hidora customer since 2017, using jelastic PaaS and gitlab managed ci/cd to reduce the infrastructure overhead of their e-commerce platform.
