Details
- Feature
- Should have
- SRCnet
- 2.5
- 2
- 0
- Team_MAGENTA
- Sprint 5
- 24.3
- Stories Completed, Outcomes Reviewed, Satisfies Acceptance Criteria
- PI23 SRC23-PB SRCNet0.x example-workflows-and-benchmarks tests-compilation
Description
The deployment of new monitoring infrastructure, which was part of the previous version of this feature, has been dropped, since existing DB/dashboard instances can be used.
This feature now focuses on the development needed within the src-workloads repository. Before automated performance metrics about task runs can be captured, it needs to be possible to automatically determine whether a task ran successfully or failed. For now this can use, for example, hardcoded checksum validation, or even just a check that the expected output files were created successfully.
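A minimal sketch of such a check, assuming a hypothetical per-task table of expected output files with hardcoded reference checksums (the file names and checksum value below are illustrative only):

```python
import hashlib
from pathlib import Path

# Hypothetical per-task expectations: output file name -> SHA-256 reference
# checksum, or None to only require that the file exists and is non-empty.
EXPECTED_OUTPUTS = {
    "image_cube.fits": "9f2c8a...placeholder...",  # illustrative value only
    "run.log": None,
}


def sha256sum(path: Path) -> str:
    """Compute the SHA-256 checksum of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def task_succeeded(output_dir: Path) -> bool:
    """Return True if every expected output exists (and matches its checksum)."""
    for name, checksum in EXPECTED_OUTPUTS.items():
        path = output_dir / name
        if not path.is_file() or path.stat().st_size == 0:
            return False
        if checksum is not None and sha256sum(path) != checksum:
            return False
    return True
```

Using None as the checksum covers the simpler "expected output files were created" case, while a real checksum gives the stricter hardcoded validation.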
Once this has been done, the STARS script should be modified so that it can push the performance score, together with other easily obtainable (or user-set) metadata, to a running Elasticsearch instance (or another document-oriented NoSQL database). During local development this can target an ephemeral Elasticsearch container.
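A sketch of the push step, assuming the official elasticsearch-py 8.x client and an assumed index name of stars-benchmarks; the document fields shown (task, score, success, host, timestamp) are illustrative, not prescribed by this feature:

```python
import socket
from datetime import datetime, timezone

from elasticsearch import Elasticsearch  # elasticsearch-py 8.x client


def push_result(task: str, score: float, success: bool,
                es_url: str = "http://localhost:9200") -> None:
    """Index one benchmark result as a document in Elasticsearch."""
    es = Elasticsearch(es_url)
    doc = {
        "task": task,
        "score": score,
        "success": success,
        "host": socket.gethostname(),                        # easily obtainable metadata
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # "stars-benchmarks" is an assumed index name, not defined by this feature.
    es.index(index="stars-benchmarks", document=doc)
```

For local development the target can be an ephemeral single-node container, e.g. `docker run --rm -p 9200:9200 -e discovery.type=single-node -e xpack.security.enabled=false docker.elastic.co/elasticsearch/elasticsearch:<version>`.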
Finally, a simple dashboard view of this data can be added to an existing Grafana instance.
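If the dashboard is provisioned by script rather than built in the Grafana UI, a hedged sketch using Grafana's HTTP API (POST /api/dashboards/db) could look as follows; the Grafana URL, API token, and datasource uid are all assumptions, and in practice the panel queries would likely be fleshed out in the UI:

```python
import requests

GRAFANA_URL = "https://grafana.example.org"  # assumption: URL of the existing Grafana instance
API_TOKEN = "replace-with-a-service-account-token"

# Minimal dashboard model with a single table panel. The datasource uid
# ("stars-es") is assumed to be an Elasticsearch datasource already configured
# in Grafana; the reference format varies slightly between Grafana versions.
payload = {
    "dashboard": {
        "title": "STARS benchmark results",
        "panels": [
            {
                "type": "table",
                "title": "Recent runs",
                "gridPos": {"h": 8, "w": 24, "x": 0, "y": 0},
                "datasource": {"type": "elasticsearch", "uid": "stars-es"},
            }
        ],
    },
    "overwrite": True,
}

resp = requests.post(
    f"{GRAFANA_URL}/api/dashboards/db",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
```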
The Feature Point estimate is sized to allow time to learn about the relevant technologies.