Reference Docs in Tests

In tests you can reference the requirements (needs) they verify. The docs-as-code tooling will then create backlinks to those tests in the documentation.

Docs-as-code parses the test.xml files that Bazel produces under bazel-testlogs/. To attach metadata to tests, use the project tooling decorator (provided by the attribute plugin). Example usage:

from attribute_plugin import add_test_properties

@add_test_properties(
    partially_verifies=[
        "tool_req__docs_common_attr_title",
        "tool_req__docs_common_attr_description",
    ],
    test_type="interface-test",
    derivation_technique="boundary-values",
)
def test_feature():
    """Short description of what the test does."""
    ...

TestLink extracts the test name, file, line, result, and the verification lists (PartiallyVerifies, FullyVerifies). From these it creates external needs for the tests and testlink attributes on the requirements that reference the test.

Hint

It is possible to attach additional properties to tests. They will not show up in the TestLink, but they will not break the parsing process either.
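For instance, assuming add_test_properties simply forwards unknown keyword arguments (an assumption; check the plugin's signature), an extra property such as the hypothetical owner below could ride along:

from attribute_plugin import add_test_properties

@add_test_properties(
    partially_verifies=["tool_req__docs_common_attr_title"],
    test_type="interface-test",
    derivation_technique="boundary-values",
    owner="docs-team",  # hypothetical extra property: not shown in the TestLink, harmless to parsing
)
def test_title_attribute():
    """Verify that the title attribute is rendered."""
    ...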

Limitations

  • Not compatible with Esbonio/Live_preview.

  • To create a valid TestLink, the tags and XML must match the expected format.

  • Partially provided properties lead to no TestLink being created. If you want a test to be linked, ensure that all required properties are provided.

  • Tests must be executed by Bazel first so test.xml files exist.

CI/CD Gate for Linkage Percentage

The traceability tooling uses a two-step architecture:

  1. The Sphinx build computes metrics via the score_metamodel extension and writes a machine-readable metrics.json (schema v1) to the build output directory alongside needs.json.

  2. traceability_gate reads that metrics.json and enforces configurable coverage thresholds.

Separating computation (the Sphinx extension, during the docs build) from gating (a thin CLI, in CI) keeps the gate decoupled from the Sphinx/Bazel build: the gate never parses needs.json itself, while the extension has direct access to all sphinx-needs data at build time.

Note

metrics.json is the single source of truth for traceability data. It is written by the Sphinx docs build (via the score_metamodel extension) to <outdir>/metrics.json. The same computation that powers the dashboard pie charts produces this file, so the gate and the dashboard always show the same numbers.

Current workflow:

  1. Run tests.

  2. Build docs (score_metamodel extension writes metrics.json automatically).

  3. Run the gate against the exported metrics.

bazel test //...
bazel build //:needs_json

bazel run //scripts_bazel:traceability_gate -- \
   --metrics-json bazel-bin/needs_json/_build/needs/metrics.json \
   --min-req-code 100 \
   --min-req-test 100 \
   --min-req-fully-linked 100 \
   --min-tests-linked 100 \
   --fail-on-broken-test-refs

In repository CI, wire the gate target to depend on the test-report and //:needs_json targets so Bazel handles the build order automatically.
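A minimal sketch of that wiring, assuming an sh_test wrapper script and a test-report label (both are assumptions about the repository layout; //:needs_json is the docs target used above):

# BUILD.bazel -- sketch only; the rule choice and the test_report label are assumed.
sh_test(
    name = "traceability_gate_check",
    srcs = ["traceability_gate_check.sh"],  # hypothetical wrapper invoking the gate CLI
    data = [
        "//:needs_json",   # docs build output containing metrics.json
        "//:test_report",  # aggregated test results (label assumed)
    ],
)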

The --require-all-links shortcut is equivalent to setting all --min-* flags to 100 and enabling --fail-on-broken-test-refs.
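So the strict invocation above can be shortened to:

bazel run //scripts_bazel:traceability_gate -- \
   --metrics-json bazel-bin/needs_json/_build/needs/metrics.json \
   --require-all-links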

The gate reports:

  • Percentage of requirements with source_code_link

  • Percentage of requirements with testlink

  • Percentage of requirements with both links (fully linked)

  • Percentage of testcases linked to at least one requirement

  • Broken testcase references (testcases referencing an unknown requirement ID)
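For illustration only, the metrics.json backing these numbers might carry values along the following lines; the key names are assumptions, not the actual schema-v1 fields:

# Hypothetical sketch of metrics.json content -- key names are assumptions.
metrics = {
    "schema_version": 1,
    "req_code_link_pct": 92.5,     # requirements with source_code_link
    "req_testlink_pct": 88.0,      # requirements with testlink
    "req_fully_linked_pct": 85.0,  # requirements with both links
    "tests_linked_pct": 97.0,      # testcases linked to at least one requirement
    "broken_test_refs": ["tool_req__nonexistent_id"],  # references to unknown requirement IDs
}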

Note

Testcase-based metrics depend on testcase needs being present in the exported needs.json. Testcases are currently generated as external needs, so values such as testcase linkage percentage or broken testcase references are only meaningful if those external testcase needs are also included in the exported dataset.

To restrict which need types are treated as requirements when computing metrics, set score_metamodel_requirement_types in your Sphinx conf.py (default: tool_req):

score_metamodel_requirement_types = "tool_req,comp_req"

By default, dashboard and gate use only needs defined in the current repository (is_external == False). This supports per-repo CI gates. For integration repositories that intentionally aggregate across dependencies, you can include external needs in both dashboard and gate by setting:

score_metamodel_include_external_needs = True
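Putting both settings together, an integration repository's conf.py might look like this (values illustrative):

# conf.py (illustrative)
score_metamodel_requirement_types = "tool_req,comp_req"  # need types counted as requirements
score_metamodel_include_external_needs = True            # also count needs aggregated from dependencies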

You can also override dashboard behaviour per pie chart via filter args:

.. needpie:: Requirements with Codelinks
   :filter-func: src.extensions.score_metamodel.checks.traceability_dashboard.pie_requirements_with_code_links(tool_req,true)

Use lower thresholds during rollout and tighten towards 100% over time.
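For example, a rollout configuration might start with relaxed thresholds (the numbers here are illustrative) and raise them release by release:

bazel run //scripts_bazel:traceability_gate -- \
   --metrics-json bazel-bin/needs_json/_build/needs/metrics.json \
   --min-req-code 80 \
   --min-req-test 60 \
   --min-req-fully-linked 50 \
   --min-tests-linked 70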