Build Dashboards and Quality Gates#
This guide is for repositories that consume docs-as-code as a Bazel dependency. Examples are module repositories and integration repositories that want to:
- publish their own traceability dashboards,
- export `metrics.json` during documentation builds, and
- enforce quality gates in CI.
The docs-as-code repository itself documents tooling coverage. Consumer repositories use the same extensions to document their own requirements, architecture, source-code links, and verification evidence.
What You Get#
When a consumer repository integrates docs-as-code correctly, it can:
- build an HTML dashboard from its own Sphinx needs,
- include external needs from other repositories when desired,
- export `needs.json` and `metrics.json` for machine-readable reporting, and
- gate CI on traceability thresholds via `traceability_gate`.
Typical Setup#
1. Add docs-as-code as a Bazel dependency as described in Setup.
2. Define the documentation target via the `docs(...)` macro.
3. Provide process or upstream needs via the `data` argument when cross-repo traceability is required.
4. Provide implementation sources via `scan_code` so `source_code_link` can be generated.
5. Add test metadata so `testlink` and testcase needs can be generated.
Minimal Consumer Example#
In `BUILD`:

```starlark
load("@score_docs_as_code//:docs.bzl", "docs")

filegroup(
    name = "module_sources",
    srcs = glob([
        "src/**/*.py",
        "src/**/*.cpp",
        "src/**/*.h",
        "src/**/*.rs",
    ]),
)

docs(
    source_dir = "docs",
    data = [
        "@score_process//:needs_json",
    ],
    scan_code = [":module_sources"],
)
```
In `docs/conf.py`:

```python
score_metamodel_requirement_types = "feat_req,comp_req,aou_req"
score_metamodel_include_external_needs = False
```
Use `score_metamodel_include_external_needs = True` only in repositories that intentionally aggregate traceability across dependencies, such as integration repositories.
Building the Dashboard#
Run:

```shell
bazel run //:docs
```

This generates HTML output under `_build/`.

Run:

```shell
bazel build //:needs_json
```

This generates machine-readable output under:

- `bazel-bin/needs_json/_build/needs/needs.json`
- `bazel-bin/needs_json/_build/needs/metrics.json`
The HTML dashboard and the exported metrics.json are backed by the same
traceability metric implementation, so the charts and the CI gate evaluate the
same data.
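The exported files can be consumed by any reporting script. As a minimal sketch, the following tallies needs per type from `needs.json`, assuming the usual sphinx-needs export layout (top-level `versions` keyed by `current_version`, each holding a `needs` mapping); adjust if your export differs:

```python
import json


def count_needs_by_type(path):
    """Count exported needs per need type.

    Assumes the common sphinx-needs needs.json layout:
    {"current_version": ..., "versions": {<version>: {"needs": {...}}}}.
    """
    with open(path) as f:
        data = json.load(f)
    version = data["versions"][data["current_version"]]
    counts = {}
    for need in version["needs"].values():
        counts[need["type"]] = counts.get(need["type"], 0) + 1
    return counts
```

Such a script is useful for spot-checking that the expected requirement types actually made it into the export before wiring up a CI gate.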
Inputs for Linkage Metrics#
To get meaningful dashboard and gate values, consumer repositories typically need three inputs:
1. Requirement and architecture needs in the documentation itself.
2. Source code references via Reference Docs in Source Code.
3. Test metadata via Reference Docs in Tests.
If one of those inputs is missing, the related chart or gate metric will remain empty or low.
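Conceptually, the linkage metrics are ratios of linked needs to total needs. A hypothetical sketch of such a ratio (the `*_req` type suffix convention and the `source_code_link` field name are assumptions for illustration, not the exact metric implementation):

```python
def linkage_coverage(needs, link_field="source_code_link"):
    """Percentage of requirement needs carrying at least one code link.

    Hypothetical sketch: the `*_req` type suffix and the
    `source_code_link` field name are assumptions about the
    exported needs, not the exact docs-as-code implementation.
    """
    reqs = [n for n in needs if n.get("type", "").endswith("_req")]
    if not reqs:
        return 0.0
    linked = sum(1 for n in reqs if n.get(link_field))
    return 100.0 * linked / len(reqs)
```

This makes the failure mode above concrete: with no `scan_code` input, `link_field` is empty everywhere and the coverage value stays at zero.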
Choosing Local vs Aggregated Views#
There are two common modes:

Module repository

- Set `score_metamodel_include_external_needs = False`.
- Gate only on the needs owned by the repository itself.
- Use this for per-module implementation progress and traceability.

Integration repository

- Set `score_metamodel_include_external_needs = True`.
- Aggregate requirements across module dependencies when that is the intended repository purpose.
- Use this for system or integration-level dashboards.
CI Quality Gate#
After building `//:needs_json`, run the gate on the exported metrics:

```shell
bazel run @score_docs_as_code//scripts_bazel:traceability_gate -- \
  --metrics-json bazel-bin/needs_json/_build/needs/metrics.json \
  --min-req-code 70 \
  --min-req-test 70 \
  --min-req-fully-linked 60 \
  --min-tests-linked 70
```
Useful flags:
- `--require-all-links` for strict 100 percent gating
- `--fail-on-broken-test-refs` to fail when testcase references point to unknown requirement IDs
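If you want to prototype gating logic outside Bazel, the threshold check can be sketched in a few lines. The metric key names used here (`req_code_pct` and so on) are illustrative; consult the exported `metrics.json` for the real schema:

```python
def check_gate(metrics, thresholds):
    """Return the names of metrics that fall below their minimum.

    Key names are illustrative assumptions; a missing metric is
    treated as 0 and therefore fails its threshold.
    """
    return [
        key
        for key, minimum in thresholds.items()
        if metrics.get(key, 0) < minimum
    ]
```

An empty result means the gate passes; a non-empty result lists exactly which thresholds were missed, which is handy for CI log output.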
Recommended Rollout#
For a new consumer repository:
1. Start with local-only metrics.
2. Enable `scan_code` and verify `source_code_link` coverage first.
3. Add test metadata and verify `testlink` coverage.
4. Introduce modest thresholds in CI.
5. Raise thresholds over time as the repository matures.
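One way to raise thresholds over time without manual tuning is a ratchet: fail CI on any regression against a recorded baseline, and lift the baseline whenever a metric improves. A hypothetical sketch (persisting the baseline file is left out):

```python
def ratchet(current, baseline):
    """Compare current metrics against a recorded baseline.

    Returns (regressions, new_baseline): `regressions` maps each
    metric that fell below its baseline to (current, baseline);
    `new_baseline` lifts every floor to the best value seen.
    Sketch only -- loading/saving the baseline is not shown.
    """
    regressions = {
        key: (current.get(key, 0), floor)
        for key, floor in baseline.items()
        if current.get(key, 0) < floor
    }
    new_baseline = {
        key: max(current.get(key, 0), floor)
        for key, floor in baseline.items()
    }
    return regressions, new_baseline
```

Committing the updated baseline alongside code changes keeps the gate monotonically tightening as the repository matures.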