SAFe Program / SP-1509

Establish environment for SDP workflow testing


Details

    • Enabler
    • Should have
    • PI13
    • None
    • None
    • Services

      Setting up standard benchmarking and profiling environments will allow us to compare not just the scientific performance but also the computational performance of individual components and workflows. This will be a valuable tool for identifying performance bottlenecks and limitations, thereby providing guidance on where to focus our future efforts.

    • (ideal result - not sure this is doable, to be discussed)

      Introduce a way to run the https://gitlab.com/ska-telescope/sdp/ska-sdp-continuum-imaging-pipelines CI in a distributed fashion:
        • As it is currently Nextflow-based, the easiest option might be to use the Kubernetes backend? However, if we end up having a dedicated workflow environment, it would clearly be great if we could adapt it appropriately instead.
        • Connect up with the SP-1692 benchmarking facilities: measure and report on workflow performance statistics.
        • Ideally, factor as much platform-specific code as possible into separate repositories (i.e. custom stages, or scripts provided in a GitLab container?) so the Services ART can maintain it independently of the DP ART.
        • Write documentation on how to set this up (https://confluence.skatelescope.org/display/SWSI/Platform+and+services+developer+documentation).
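      One possible starting point for the distributed-CI option is Nextflow's built-in Kubernetes executor, which can be selected purely through configuration. A minimal sketch of such a profile; the profile name, namespace, and storage claim name here are assumptions for illustration, not agreed project settings:

      ```groovy
      // nextflow.config -- hypothetical 'bench' profile for running the
      // continuum-imaging CI pipelines on the Kubernetes cluster used for CI.
      profiles {
          bench {
              process.executor = 'k8s'             // run each task as a k8s pod
              k8s {
                  namespace        = 'sdp-benchmark'  // assumed namespace
                  storageClaimName = 'nf-workdir'     // shared PVC for the work dir
              }
          }
      }
      ```

      Runs would then be launched as, e.g., `nextflow run <pipeline> -profile bench -with-trace`, which also produces the trace file needed for performance reporting.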
    • Overdue
    • PI24 - UNCOVERED

    Description

      As a result of SP-1326, the workflow CoP agreed on a standard way to run workflows for validation (Nextflow). This puts us in a position to compare components, and the workflows built from them, both in terms of computational performance and in terms of scientific performance.

      To use this for benchmarking, a suitable benchmarking and profiling environment needs to be set up. Ideally, this environment should resemble the environment of the to-be-built HPC platform as closely as possible. However, if a suitable hardware platform does not become available, we might fall back to the STFC (or Engage?) Kubernetes resources currently used for CI.
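      To make "compare computational performance" concrete: when a workflow is run with `-with-trace`, Nextflow writes a tab-separated trace file, and a small script can aggregate per-process run times from it. A sketch, assuming the trace is written with raw millisecond values (`trace.raw = true` in the Nextflow config); the `name` and `realtime` fields follow Nextflow's default trace columns:

      ```python
      import csv
      from collections import defaultdict

      def summarise_trace(path):
          """Sum raw 'realtime' (ms) per process name from a Nextflow trace file.

          Assumes the trace was written with ``trace.raw = true`` so that the
          'realtime' column holds plain millisecond integers rather than
          human-readable durations like '1m 30s'.
          """
          totals = defaultdict(int)
          with open(path, newline="") as fh:
              for row in csv.DictReader(fh, delimiter="\t"):
                  # Strip the task index, e.g. 'align (3)' -> 'align'
                  process = row["name"].split(" (")[0]
                  totals[process] += int(row["realtime"])
          return dict(totals)
      ```

      For example, `summarise_trace("trace.txt")` would return a dict mapping each process name to its total run time in milliseconds, which could then feed into the SP-1692 reporting.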

      Possibly useful context: https://www.youtube.com/watch?v=7koEc8iX7AM

              People

                m.deegan Deegan, Miles
                p.wortmann Wortmann, Peter
                Votes: 0
                Watchers: 0

                Feature Progress

                  Story Point Burn-up: (0%)

                  Feature Estimate: 1.0

                  Status        Issues  Story Points
                  To Do         0       0.0
                  In Progress   0       0.0
                  Complete      0       0.0
                  Total         0       0.0
