SAFe Program / SP-1992

Produce first packaged set of micro-benchmarks

Details

    • Feature
    • Must have
    • PI12
    • None
    • Services
      • As a hardware engineer I need to be able to easily and quickly assess the viability of candidate hardware. A packaged and portable set of micro-benchmarks is essential.
      • As a software developer and systems architect I would like to have a convenient and portable way to package up a software component in such a way that it can run on (nearly) any platform provided and deliver useful metrics with only minimal overhead.

      A documented and packaged set of micro-benchmarks containing at least

      • one industry standard benchmark
      • one radio astronomy specific pipeline component
      • any data sets needed to run the above
      • a user manual, and
      • documented "proof" that the above runs on various hardware available to us (ideally with an estimate of "cost", i.e. additional effort, to make it work)
    • 5
    • 5
    • 0
    • Team_PLANET
    • Sprint 4

      The following industry-standard micro-benchmarks are onboarded into the SKA SDP Benchmark tests repository -> https://gitlab.com/ska-telescope/sdp/ska-sdp-benchmark-tests

      • Babel Stream
      • HPL
      • HPCG
      • Intel MPI Benchmarks
      • IOR
      • GPU Direct RDMA Test
      • NCCL Test (intra GPU communications)
      • STREAM
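For orientation, the "triad" kernel that STREAM and BabelStream are built around can be sketched in a few lines. The NumPy version below is an illustrative stand-in, not the packaged benchmark itself; the array size, repeat count, and scalar are arbitrary choices.

```python
import time
import numpy as np

def stream_triad(n=10_000_000, repeats=5, scalar=3.0):
    """Time the STREAM 'triad' kernel a[i] = b[i] + scalar * c[i] and
    return the best observed memory bandwidth in GB/s."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    a = np.empty_like(b)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.multiply(c, scalar, out=a)  # a = scalar * c (no temporary array)
        np.add(a, b, out=a)            # a = b + scalar * c
        best = min(best, time.perf_counter() - t0)
    # The reference kernel moves three 8-byte arrays per pass (2 reads,
    # 1 write); the two-step NumPy version adds some extra traffic on a,
    # so treat the reported figure as approximate.
    gbytes = 3 * n * 8 / 1e9
    return gbytes / best

if __name__ == "__main__":
    print(f"Triad bandwidth: {stream_triad():.1f} GB/s")
```

The best-of-N timing mirrors what the reference STREAM benchmark reports: the minimum time gives the least-perturbed estimate of sustained bandwidth.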

      In addition, a performance benchmark has been designed for the NIFTY gridder using synthetic data from the SKA1 MID configuration -> https://gitlab.com/ska-telescope/sdp/ska-sdp-benchmark-tests/-/tree/main/apps/level1/cng_test/src
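The overall shape of such a gridding benchmark can be illustrated with a deliberately naive nearest-neighbour gridder and a timing harness. Everything below (grid size, visibility count, the metric reported) is illustrative and not taken from the cng_test sources, which benchmark the real NIFTY/w-gridding kernel.

```python
import time
import numpy as np

def naive_grid(uv, vis, n=512):
    """Nearest-neighbour gridding: accumulate complex visibilities onto
    an n x n grid (a toy stand-in for the real gridding kernel)."""
    grid = np.zeros((n, n), dtype=complex)
    ix = ((uv[:, 0] + 0.5) * n).astype(int) % n
    iy = ((uv[:, 1] + 0.5) * n).astype(int) % n
    np.add.at(grid, (iy, ix), vis)  # unbuffered accumulation handles duplicates
    return grid

# Synthetic input: random baseline (u, v) coordinates and noise visibilities.
rng = np.random.default_rng(42)
nvis = 100_000
uv = rng.uniform(-0.5, 0.5, (nvis, 2))
vis = rng.normal(size=nvis) + 1j * rng.normal(size=nvis)

t0 = time.perf_counter()
grid = naive_grid(uv, vis)
dt = time.perf_counter() - t0
print(f"gridded {nvis} visibilities in {dt*1e3:.1f} ms "
      f"({nvis/dt/1e6:.2f} Mvis/s)")
```

Reporting a throughput in visibilities per second, as sketched here, gives a hardware-comparable metric of the kind the micro-benchmark suite is meant to deliver.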

      Instructions to run the benchmarks are documented here -> https://developer.skao.int/projects/ska-sdp-benchmark-tests/en/latest/content/benchmarks.html

      The micro-benchmarks are run on JUWELS (Cluster and Booster) and Dahu (a Grid'5000 cluster), and the results are plotted in notebooks, which can be found inside each test's folder here -> https://gitlab.com/ska-telescope/sdp/ska-sdp-benchmark-tests/-/tree/main/apps/level0

    • 12.6
    • Stories Completed, Accepted by FO
    • PI22 - UNCOVERED

    • Team_PLANET

    Description

      Using the infrastructure produced in SP-1778, include the first useful micro-benchmarks for consumption outside the PlaNet team. This should be a limited selection of both industry-standard benchmarks (such as HPL) and radio astronomy/SKA-specific codes such as IDG and NIFTY. The goal is not completeness but an initial set to start from.

      In future PIs this will grow to a comprehensive set of micro-benchmarks to be used to characterise the performance of candidate hardware and software components.

      Eventually, the developed infrastructure can be integrated into a CI/CD environment so that we have a "gold standard" reference for the computational performance of our pipelines and components. A first draft of this CI/CD sub-system, describing how the developed infrastructure can be integrated to automate the benchmarking and profiling of our software, can be found [here|https://confluence.skatelescope.org/pages/viewpage.action?pageId=148819296].
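As a sketch of what such a CI/CD integration might check, the snippet below compares freshly measured metrics against a stored "gold standard" reference and flags regressions. The metric names, reference values, and 10% tolerance are all hypothetical; the real sub-system design is on the Confluence page linked above.

```python
# Hypothetical reference metrics, e.g. committed to the repo as JSON;
# names and values are illustrative only.
REFERENCE = {"stream_triad_gbs": 120.0, "hpl_gflops": 2500.0}
TOLERANCE = 0.10  # fail the pipeline on a >10% regression (assumed policy)

def check_regression(measured, reference, tol=TOLERANCE):
    """Compare measured metrics against the reference; return a list of
    (name, measured, reference) tuples for metrics that regressed or are
    missing from the measurement."""
    failures = []
    for name, ref in reference.items():
        got = measured.get(name)
        if got is None or got < ref * (1 - tol):
            failures.append((name, got, ref))
    return failures

if __name__ == "__main__":
    measured = {"stream_triad_gbs": 118.0, "hpl_gflops": 2100.0}
    for name, got, ref in check_regression(measured, REFERENCE):
        print(f"REGRESSION: {name}: {got} < {ref * (1 - TOLERANCE):.1f}")
```

A CI job would exit non-zero when the failure list is non-empty, turning the benchmark suite into an automatic performance gate.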

              People

                m.deegan Deegan, Miles
                c.broekema Chris Broekema

                Feature Progress

                  Story Point Burn-up: (100.00%)

                  Feature Estimate: 5.0

                               Issues   Story Points
                  To Do             0            0.0
                  In Progress       0            0.0
                  Complete         11           34.0
                  Total            11           34.0

                  Dates

                    Created:
                    Updated:
                    Resolved:
