SAFe Program / SP-3213

Taranta dashboards for the MCCS


Details

    • Obs Mgt & Controls

      We need a monitoring UI for the MCCS:

      • for public demos (to other teams, to managers, to external people)
      • to better support the needs of MCCS developers and engineers when they are debugging it
      • to support AIV engineers when testing the MCCS.

      We also need to tackle the problem of what interaction structure a UI needs to support so that people can productively drill down into, and wrap up, information about a complex system of devices. This is a complex UI problem (complexity in the Cynefin sense) that requires experimentation to identify a suitable solution. We will be hit by this problem in future versions of the telescopes, so we need to plan proactively to address it. We can use the MCCS, a system of (at the moment) moderate complexity, to start working on it.

      If successful, we will end up with a system of Taranta dashboards that can be used to obtain additional feedback as we continue developing, testing, and using our software and hardware.


      The scope should be fairly limited to ensure the success of this initiative. Therefore:

      • the dashboards should cover health-status related items
      • and/or temperatures 
      • and/or voltages.

      Only failures and drill-downs/wrap-ups related to the above data will be supported (see the illustrative sketch below).
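
      For illustration only, here is a minimal sketch of reading this kind of data from Tango devices; the device name and attribute names below are assumptions made for the example, not the actual MCCS interface.

{code:python}
# Minimal sketch, for illustration only: read the kind of data the
# dashboards should display from a Tango device. The device name and
# attribute names are hypothetical, not the actual MCCS interface.
from tango import DeviceProxy

tile = DeviceProxy("low-mccs/tile/0001")  # hypothetical tile device
for attr in ("healthState", "boardTemperature", "voltage"):
    try:
        reading = tile.read_attribute(attr)
        print(f"{attr}: {reading.value}")
    except Exception as exc:
        # A failed read is itself something a dashboard should surface.
        print(f"{attr}: could not be read ({exc})")
{code}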

      A persistent Taranta instance will be made available at the LOW ITF such that:

      • it contains dashboards and user accounts that can run these dashboards; data about users and dashboards persist across updates of Taranta code;
      • the dashboards will present the above-mentioned information;
      • a user can navigate through these dashboards to drill down and wrap up as needed by the task at hand;
      • control of the MCCS is out of scope, and is expected to be performed through other means (such as CLI tools);
        • however, control through Taranta dashboards is also allowed and welcome;
      • the resulting dashboards have evolved through a number of usability tests performed with a handful of prospective users belonging to the categories mentioned above.

      NOTE "Persistent Taranta instance" is something that CREAM and SYSTEM teams are expected to provide. (I spoke already with U.Yilmaz on this)

    • Inter Program
    • 5
    • 0

      see benefits

    • 18.5
    • PI23 - UNCOVERED

    • Taranta Team_MCCS mccs_ui
    • SOL-G2


      Description

      Following work from PI17 on the MCCS "thin control slice" (SP-3028, MCCS-1232, MCCS-1224), develop a set of inter-related dashboards for the MCCS at the LOW ITF (in the context of Solution Goal 2: https://confluence.skatelescope.org/display/SE/Test+AAVS3+release+of+MCCS+at+the+LOW+ITF).

      In particular, those dashboards should be used:

      • by developers of the MCCS to monitor its behaviour and expose its relevant internal state (including the values of relevant attributes), and to support the identification and diagnosis of failures;
      • by AIV engineers to carry out testing of the MCCS;
      • by managers and others when doing public demos.

      The dashboards are "inter-related" because there should be some means for the user to switch between dashboards, each providing focus and details about parts or "perspectives" of the MCCS components.

      • These means should be exploitable when drilling down across abstraction levels, to become aware of what is happening in lower-level parts;
      • they should also be exploitable when viewing wrapped-up data of underlying components (a rough sketch of this navigation idea follows below).
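
      As a conceptual sketch only (this is not a Taranta feature or an agreed design, and all dashboard names are made up), the inter-relation between dashboards can be thought of as a parent/child hierarchy, where following a child link is a drill-down and following the parent link wraps back up:

{code:python}
# Conceptual sketch only: drill-down/wrap-up as navigation over a
# hierarchy of dashboards. All dashboard names are hypothetical.
DASHBOARDS = {
    "mccs-overview": {"parent": None,            "children": ["station-001"]},
    "station-001":   {"parent": "mccs-overview", "children": ["tile-0001", "tile-0002"]},
    "tile-0001":     {"parent": "station-001",   "children": []},
    "tile-0002":     {"parent": "station-001",   "children": []},
}

def drill_down(current: str, child: str) -> str:
    """Move to a more detailed dashboard of one sub-component."""
    assert child in DASHBOARDS[current]["children"]
    return child

def wrap_up(current: str) -> str:
    """Move back to the dashboard summarising the enclosing component."""
    return DASHBOARDS[current]["parent"] or current

view = drill_down("mccs-overview", "station-001")  # overview -> station
view = drill_down(view, "tile-0001")               # station -> tile
view = wrap_up(view)                               # back to the station view
{code}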

      A necessary outcome of this work should be that after PI18 people actually use these dashboards, and evolve them as the needs change.

      NOTE: one possible approach to follow is what n.rees calls "maximum severity". Another is to adopt an approach similar to what is done in FMECA (Failure Mode, Effects, and Criticality Analysis: https://www.weibull.com/basics/fmea.htm): we are not aiming at estimating the likelihood of failures, but just at understanding how failures of parts contribute to failures of super-parts (m.miccolis to be credited for this suggestion).
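
      For illustration only, a minimal sketch of the "maximum severity" roll-up idea: the health of a composite element is the worst health reported by any of its parts. The device names and the severity ordering below are assumptions, not an agreed design.

{code:python}
# Minimal sketch of a "maximum severity" roll-up: the health of a composite
# is the worst health reported by any of its parts. Names and the severity
# ordering are assumptions made for this example only.
SEVERITY = ["OK", "DEGRADED", "FAILED", "UNKNOWN"]  # increasing severity

def roll_up(part_healths: dict) -> str:
    """Health of a composite = maximum severity over its parts."""
    return max(part_healths.values(), key=SEVERITY.index)

station_parts = {
    "low-mccs/tile/0001": "OK",
    "low-mccs/tile/0002": "DEGRADED",
    "low-mccs/apiu/001":  "OK",
}
print(roll_up(station_parts))  # -> "DEGRADED": one degraded tile degrades the station
{code}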

       

      Method

      It is expected that this work will be the result of a tight collaboration between CREAM and MCCS:

      CREAM is expected to provide:

      • expertise in using Taranta, and in finding creative ways to circumvent possible limitations of Taranta
      • skills in how to apply Lean UX practices to understand the drill-down/wrap-up problem.

      MCCS is expected to provide:

      • expertise on how to interface with the MCCS's internal Tango devices, and how to interpret their states, attributes and commands
      • expertise in what it means to monitor the MCCS, to identify failures, and to diagnose them.

      I expect:

      • several iterations over early prototypes (be they paper prototypes or early dashboards) before converging to reasonable solutions
      • adoption of a Lean UX approach (hypothesize solutions, put them to work, assess their efficacy) to learn about the problem
      • that we will discover limitations of Taranta that will have to be circumvented in some way, or logged as future work to be done
      • that existing relevant work will be reused (e.g., previous mockups of MCCS UIs).

      In terms of efforts, what we need to plan for is at least:

      1. 1 or 2 meetings involving a few members of CREAM and of MCCS to refine the problem (what does it mean to monitor the MCCS? who will do that, when, and why? what decisions can be made? ...) and jot down first-cut ideas of UI designs
      2. building/revising the prototypes
      3. planning for informal usability tests and executing them
      4. 1-2 meetings to carry out such tests
      5. 1-2 meetings to discuss the feedback gathered during these tests
      6. repeat from step 2 until a satisfactory solution is found.

      I reckon that, all in all, this should require no more than 5 FPs (including people from both teams, e.g. 3 FP for CREAM and 2 FP for MCCS). This really depends on accurate planning of the work, which I guess will occur during PI planning.

People

g.brajnik Brajnik, Giorgio

Feature Progress

Story Point Burn-up: 0%

Feature Estimate: 5.0

               Issues   Story Points
To Do          0        0.0
In Progress    0        0.0
Complete       0        0.0
Total          0        0.0
