The following section provides an overview of the testing approach adopted to ensure the quality and reliability of openDesk. This concept balances leveraging existing quality assurance (QA) processes with targeted testing efforts tailored to the specific needs of openDesk. The outlined strategy focuses on three key areas:
These efforts are designed to complement each other, minimizing redundancy while ensuring robust testing coverage.
openDesk contains applications from different suppliers. As a general approach, we rely on the testing conducted by these suppliers for their respective applications.
We review the suppliers’ QA measures on a regular basis to ensure reliable and sufficient QA of the underlying applications.
We receive release notes well before a new application release is integrated into openDesk, so we can check that a sufficient set of test cases exists. The suppliers create a set of test cases for each new function.
We develop and maintain a set of end-to-end tests focussing on:
- `functional.yaml.gotmpl`, e.g.
We execute the tests using English and German language profiles.
The development team utilizes the test automation described above for QA’ing their feature branches.
We use the functional e2e-tests in nightly test runs on a matrix of deployments addressing different application profiles to ensure the quality of the development branch’s current state.
The following naming scheme is applied for the deployment matrix:
`<edition>-<type>-<profile>`, resulting e.g. in `ce-init-default` or `ee-upgr-extsrv`.

`<edition>`:

- `ce`: openDesk Community Edition
- `ee`: openDesk Enterprise Edition

`<type>`:

- `init`: Initial / fresh / from-scratch deployment into an empty namespace.
- `upgr`: Deploy the latest migration release into an empty namespace, afterwards run an upgrade deployment with the current state.
- `upd`: Deploy the latest release into an empty namespace, afterwards run an upgrade deployment with the current state.

`<profile>`: The following profiles are defined:

- `default`: With
  - `functional.yaml`: No changes besides the specific 2FA testing group and the enabled UDM REST API (required for user import).
  - `secrets.yaml.gotmpl`
- `funct1`: Different configuration of `functional.yaml`, self-signed certs [and when available external secrets].
- `extsrv`: External services (where possible).
- `gitops`: Argo CD based deployment.

All executions of the end-to-end tests are tracked in a central platform running Allure TestOps.
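The naming scheme above can be enumerated mechanically; a minimal shell sketch (editions, types, and profiles taken from the lists above) that prints every matrix combination:

```shell
# Sketch: enumerate all deployment names from the
# <edition>-<type>-<profile> naming scheme described above.
for edition in ce ee; do
  for type in init upgr upd; do
    for profile in default funct1 extsrv gitops; do
      echo "${edition}-${type}-${profile}"
    done
  done
done
```

This yields 24 combinations, including the `ce-init-default` and `ee-upgr-extsrv` examples mentioned above.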
As the TestOps tool contains infrastructure details of our development and test clusters, it is currently only accessible to project members.
Our goal is to deliver openDesk as application-grade software with the ability to serve large user bases.
We create and perform load- and performance tests for each release of openDesk.
Our approach consists of different layers of load testing.
For these tests we define a set of “normal”, uncomplicated user interactions with openDesk.
For each testcase in this set, we measure the duration of the whole testcase (and of individual steps within it) on a given, unloaded environment, prepared with a predefined setup and the openDesk release installed.
As a result, we receive the total runtime of one iteration of the given testcase, the runtime of each step inside the testcase, the error rate and min/max/median runtimes.
Most importantly, the environment should not be used by other users or have running background tasks, so it should be an environment in a mostly idle state.
The results can be compared with those of the previous release, so we can see whether changes in software components improve or degrade the performance of a testcase.
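As a sketch of the summary statistics mentioned above (min/max/median over iteration runtimes), using hypothetical runtime values in seconds:

```shell
# Sketch: summarize per-iteration runtimes (seconds) the way the base
# performance tests report them: min / max / median. Values are hypothetical.
printf '%s\n' 3.1 2.9 3.4 3.0 3.6 |
sort -n |
awk '{ v[NR] = $1 }
     END {
       median = (NR % 2) ? v[(NR + 1) / 2] : (v[NR / 2] + v[NR / 2 + 1]) / 2
       printf "min=%s max=%s median=%s n=%d\n", v[1], v[NR], median, NR
     }'
# -> min=2.9 max=3.6 median=3.1 n=5
```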
These tests are performed to ensure the correct processing and user interaction, even under high-load scenarios.
We use the same test cases as in the base performance tests.
Now we measure the duration on a well-defined environment while the system is being used by a predefined number of test users in parallel. The number of users is incrementally scaled up.
Our goal is to see constant runtimes of each testcase iteration, despite the increased overall throughput due to the increasing number of parallel users.
At a certain point, increasing the number of users does not lead to higher overall throughput, but instead leads to an increase in the runtime of each testcase iteration.
This point, the saturation point, is the load limit of the environment. Up to this point, the environment and the installed software packages can handle the load. Beyond this point, response times increase and error rates rise.
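A minimal sketch of how the saturation point can be read from such measurements; the user counts and throughput numbers below are hypothetical:

```shell
# Sketch: given measured "<parallel users> <requests/s>" pairs, report the
# first load level where throughput stops growing noticeably (here: less
# than +10% over the previous level). All numbers are hypothetical.
printf '%s\n' \
  '10 120' \
  '20 235' \
  '40 460' \
  '80 510' \
  '160 505' |
awk 'prev && $2 < prev * 1.1 { print "saturation around", users, "users"; exit }
     { prev = $2; users = $1 }'
# -> saturation around 80 users
```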
For partners interested in large scale openDesk deployments, we offer a tailored workshop in which we define scenarios and perform load testing analysis.
This way, we can help you decide on the appropriate sizing for the planned openDesk deployment.
If necessary, we perform overload tests, which saturate the system with multiple test cases until no further increase in throughput is visible. Then we add even more load until the first HTTP requests run into timeouts or errors. After a few minutes, we reduce the load below the saturation point. Now we can check whether the system is able to recover from the overload state.
We use helm-unittest to validate Helm chart templates and ensure chart changes don’t break existing functionality. These tests run in CI and can be executed locally during development.
```shell
go install github.com/helm-unittest/helm-unittest/cmd/helm-unittest@latest
```

Make sure `$GOPATH/bin` is in your `$PATH`.
Test a single chart:
```shell
helm-unittest helmfile/charts/{chart}
```
Test all charts:
```shell
for chart in $(ls helmfile/charts/); do helm-unittest "helmfile/charts/$chart"; done
```
A test file contains one or more suites. Each suite targets specific templates and defines assertions to verify the rendered manifests:
```yaml
suite: Deployment
templates:
  - deployment.yaml
values:
  - ../ci/ci-values.yaml
tests:
  - it: should create a Deployment with correct kind
    asserts:
      - isKind:
          of: Deployment
  - it: should use the correct container image
    asserts:
      - matchRegex:
          path: spec.template.spec.containers[0].image
          pattern: linuxserver/bookstack:.+
```
Most tests load CI-specific values using a relative path:
```yaml
values:
  - ../ci/ci-values.yaml
```
This ensures tests run with minimal, predictable configuration independent of production settings.
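A hypothetical sketch of such a CI values file; the actual keys depend on the chart being tested:

```yaml
# Hypothetical minimal CI values -- actual keys depend on the chart.
replicaCount: 1
image:
  tag: "1.0.0"     # pin the tag so assertions on the image are stable
ingress:
  enabled: false   # no cluster-specific hosts or TLS in CI
```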
- `isKind`: Verify the Kubernetes resource kind
- `equal`: Check exact value match
- `contains`: Verify a value exists in a list
- `isNotNull`: Ensure a path exists and is not null
- `matchRegex`: Match a path value against a regex pattern
- `hasDocuments`: Verify the expected number of manifests are generated

When a template renders multiple manifests (e.g., a ConfigMap and a CronJob in the same file), use `documentIndex` to target specific manifests:
```yaml
suite: SSO Check CronJob
templates:
  - sso-check-cronjob.yaml
values:
  - ../ci/ci-values.yaml
tests:
  - it: should create a CronJob with correct kind
    documentIndex: 1  # Second manifest (0-based)
    asserts:
      - isKind:
          of: CronJob
```
Override values for individual tests without modifying the base values file:
```yaml
suite: SSO Check CronJob (enabled)
templates:
  - sso-check-cronjob.yaml
values:
  - ../ci/ci-values.yaml
set:
  ssoCheck.enabled: true
tests:
  - it: should not be suspended when ssoCheck.enabled is true
    documentIndex: 1
    asserts:
      - equal:
          path: spec.suspend
          value: false
  - it: should use custom schedule when provided
    documentIndex: 1
    set:
      ssoCheck.schedule: "*/5 * * * *"
    asserts:
      - equal:
          path: spec.schedule
          value: "*/5 * * * *"
```
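The `hasDocuments` assertion listed earlier pairs naturally with `documentIndex`, since an index is only meaningful if the manifest count is as expected. A sketch, assuming `sso-check-cronjob.yaml` renders exactly two manifests (the ConfigMap and CronJob mentioned above):

```yaml
suite: SSO Check manifest count
templates:
  - sso-check-cronjob.yaml
values:
  - ../ci/ci-values.yaml
tests:
  - it: should render both the ConfigMap and the CronJob
    asserts:
      - hasDocuments:
          count: 2
```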
When testing multiple templates in one suite, scope assertions to a specific template:
```yaml
suite: Deployment
templates:
  - deployment-chat.yaml
  - deployment-core.yaml
  - deployment-frontend.yaml
values:
  - ../ci/ci-values.yaml
tests:
  - it: should create all deployments with correct kind
    asserts:
      - isKind:
          of: Deployment
  - it: should create core Deployment with correct name
    template: deployment-core.yaml
    asserts:
      - equal:
          path: metadata.name
          value: RELEASE-NAME-f13-core
```
See existing tests for reference: