Description
Inferred actions (taken from context) - TBC
- Review the possibility of reducing the array-level calibration interval from the assumed 10-minute interval specified in the listed LFAA requirements, which would open the possibility of relaxing the specified tolerances.
- By simulation, assess the impact of the calibration strategy against the listed requirements
This would require developing simulation tools/scripts for SKA1 LOW which include:
- Station beams evaluated with configurable gain and phase errors between the individual antennas, derived from a configurable, representative model of the receive path between antenna and ADC, with a station-beam gain tolerance that varies as a function of the specified target calibration interval.
- A representative atmospheric model
- A pipeline providing analysis of varying the interval of array-level calibration against introduced calibration errors/artefacts under the assumption that these should remain below some suitable estimate of the thermal noise level.
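As a rough sketch of the first ingredient, the snippet below perturbs each antenna's complex gain with independent gain and phase errors and evaluates the resulting boresight station-beam gain. The antenna count and error levels (N_ANT, SIGMA_GAIN, SIGMA_PHASE) are illustrative assumptions, not requirement values:

```python
import numpy as np

rng = np.random.default_rng(42)

N_ANT = 256                     # antennas per station (illustrative)
SIGMA_GAIN = 0.01               # 1% RMS gain error per receive path (assumed)
SIGMA_PHASE = np.deg2rad(2.0)   # 2 deg RMS phase error per path (assumed)

def station_beam_gain(n_ant, sigma_g, sigma_p, rng):
    """Boresight station-beam gain relative to the error-free beam.

    Each antenna's complex gain is perturbed by independent Gaussian
    gain and phase errors; the beam is the coherent sum over antennas.
    """
    g = (1.0 + sigma_g * rng.standard_normal(n_ant)) * \
        np.exp(1j * sigma_p * rng.standard_normal(n_ant))
    return np.abs(g.sum()) / n_ant

# Monte-Carlo estimate of the relative beam-gain scatter
trials = np.array([station_beam_gain(N_ANT, SIGMA_GAIN, SIGMA_PHASE, rng)
                   for _ in range(1000)])
print(f"mean relative gain: {trials.mean():.4f}, RMS scatter: {trials.std():.5f}")
```

A full tool would replace the Gaussian perturbations with errors drawn from the representative receive-path model, and evaluate off-boresight beams as well.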
Analysis would be based on running a set of simulations (possibly of a single calibration interval) which:
- Simulate data with a variable calibration interval (and therefore varying random LFAA errors)
- Calibrate the data within the specified interval
- Compare the residual calibration errors against the estimated thermal noise level
- Result in a plot of calibration interval vs calibration error level, identifying the shortest possible interval (which would allow maximum relaxation of the LFAA requirement) where the calibration errors stay below the nominal thermal noise floor.
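The sweep described above could be sketched as follows, with an entirely illustrative toy model (DRIFT_RATE and NOISE_1S are assumed numbers, not values derived from the requirements) in which the accumulated drift error grows linearly with the interval while the thermal-noise estimate falls as 1/sqrt(t):

```python
import numpy as np

# Toy model (assumed): per-interval thermal noise falls as 1/sqrt(t),
# while uncalibrated gain drift accumulates linearly over the interval.
DRIFT_RATE = 2e-4   # fractional gain drift per second (illustrative)
NOISE_1S = 0.05     # fractional thermal noise in a 1 s solve (illustrative)

intervals = np.array([30, 60, 120, 300, 600, 1200])  # seconds

def drift_error(t):
    """Calibration error accumulated by drift over interval t."""
    return DRIFT_RATE * t

def thermal_noise(t):
    """Radiometer-equation scaling: noise ~ 1/sqrt(t)."""
    return NOISE_1S / np.sqrt(t)

ok = intervals[drift_error(intervals) <= thermal_noise(intervals)]
for t in intervals:
    flag = "OK  " if drift_error(t) <= thermal_noise(t) else "FAIL"
    print(f"{t:5d} s  drift={drift_error(t):.4f}  "
          f"noise={thermal_noise(t):.4f}  {flag}")
print("longest compliant interval on this grid:", ok.max(), "s")
```

In the real analysis, the two curves would come from the simulated calibration pipeline rather than closed-form scalings, and the crossing point is what the proposed plot would reveal.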
To be discussed:
- Could we get away with simulations of a subset of stations?
- Is it sufficient to perform the analysis at the frequencies listed in the LFAA requirements?
Context
There are two LFAA requirements limiting random errors:
SKA1-LFAA-228 "Relative gain tolerance":
The station beam accuracy for LFAA over a 10-minute interval shall have a relative gain tolerance of better than:
Frequency (MHz) | 50 | 80 | 110 | 160 | 220 | 280 | 340 | 350 |
Rel. gain tolerance (%) | 1.05 | 0.58 | 0.39 | 0.59 | 0.97 | 1.54 | 2.39 | 2.55 |
SKA1-LFAA-133 "Receive path stability":
The receive paths (all analogue electronics from the antenna to ADC) shall have RMS amplitude variations and RMS phase variations over a 600-second interval better than:
Frequency (MHz) | 50 | 80 | 110 | 160 | 220 | 280 | 340 | 350 |
Constr. Rel. gain tolerance RX path (%) | 8.45 | 4.68 | 3.18 | 4.78 | 7.76 | 12.36 | 14 | 14 |
RMS amplitude tolerance (dB) | 0.70 | 0.39 | 0.27 | 0.40 | 0.64 | 1.00 | 1.15 | 1.15 |
RMS phase tolerance (deg) | 4.85 | 2.68 | 1.82 | 2.74 | 4.46 | 7.13 | 8.1 | 8.1 |
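For analysis scripts, the two tables can be held in a single lookup. Note that whether the requirements apply between the tabulated frequencies is exactly the open question raised above, so the linear interpolation shown here is an assumption for illustration only:

```python
import numpy as np

# SKA1-LFAA-228 / SKA1-LFAA-133 tolerances keyed by frequency (MHz),
# transcribed from the tables above.
FREQ_MHZ          = np.array([50, 80, 110, 160, 220, 280, 340, 350])
REL_GAIN_TOL_PCT  = np.array([1.05, 0.58, 0.39, 0.59, 0.97, 1.54, 2.39, 2.55])
RMS_AMP_TOL_DB    = np.array([0.70, 0.39, 0.27, 0.40, 0.64, 1.00, 1.15, 1.15])
RMS_PHASE_TOL_DEG = np.array([4.85, 2.68, 1.82, 2.74, 4.46, 7.13, 8.10, 8.10])

def tolerance_at(freq_mhz, table):
    """Look up a tolerance, linearly interpolating between tabulated
    frequencies (an assumption -- the requirements only list discrete
    frequencies)."""
    return float(np.interp(freq_mhz, FREQ_MHZ, table))

print(tolerance_at(110, REL_GAIN_TOL_PCT))   # tabulated point -> 0.39
print(tolerance_at(95, RMS_PHASE_TOL_DEG))   # interpolated between 80 and 110
```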
Further work is needed to:
- Refine these requirements
- Analyse compliance of the system against these requirements
Refining the requirements
These two requirements are derived assuming an array-calibration (not station calibration) update rate of 10 minutes and that the direction-dependent calibration errors at the array level should not exceed the thermal noise.
The magnitude of these direction-dependent calibration errors, combined with the length of the interval, automatically results in a maximum tolerable relative rate of change across the array caused by electronic drift.
Note that, with longer integration, the sensitivity to the calibration sources increases, allowing more accurate calibration. This has the counterintuitive effect that a longer integration time enforces a more stable system, i.e. a smaller rate of change. Conversely, the requirements on the rate of change can be relaxed significantly if the integration interval is reduced to, for example, 1 minute or 5 minutes, as long as that interval still allows enough calibration sources to be detected to solve for all the direction-dependent calibration parameters.
Changing this assumption of calibrating every 10 minutes would therefore change the values in both these requirements.
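A minimal numeric illustration of this scaling, under the assumed model that calibration-solution noise falls as dt**-0.5 (radiometer equation) while drift error accumulates linearly as r*dt, so that requiring r*dt <= sigma_cal(dt) makes the tolerable drift rate scale as dt**-1.5:

```python
# Assumed scaling model, not a requirement derivation:
#   sigma_cal(dt) ~ dt**-0.5   (solution noise over calibration interval dt)
#   drift error   ~ r * dt     (gain drift at rate r over the interval)
# Requiring r * dt <= sigma_cal(dt) gives r_max ~ dt**-1.5.

def relaxation_factor(dt_new, dt_ref=600.0):
    """Factor by which the tolerable drift rate grows when the
    calibration interval is shortened from dt_ref (600 s, the current
    assumption) to dt_new, under the dt**-1.5 scaling above."""
    return (dt_ref / dt_new) ** 1.5

for dt in (600, 300, 60):
    print(f"interval {dt:4d} s -> rate tolerance relaxed by "
          f"{relaxation_factor(dt):6.1f}x")
```

Under this sketch, shortening the interval from 10 minutes to 1 minute would relax the tolerable rate of change by roughly a factor of 30, subject to the caveat above about detecting enough calibration sources within the shorter interval.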
In a recent discussion, it was suggested to take a different approach to these requirements:
- Balance the rate of change of the gain of the individual receive paths against intrinsic effects (e.g. ionospheric variability and beam-shape changes due to projection effects) that the calibration and imaging process needs to deal with.
- Part of this analysis may be provided in the draft paper EEPs_in_calibration.pdf, which gives a limit on the rate of change of the relative error on the directional response of the stations.
- It may also be necessary to have requirements on the maximum allowed relative rate of change over frequency as well as time. This should be considered.
Analysing compliance against requirements regarding station calibration
Once the details of a calibration strategy are determined (e.g. offline/online, how often, self-calibration/holography/other, average EEP/individual EEPs, etc.), a simulation should be re-done to assess compliance against SKA1-LFAA-228 and SKA1-LFAA-133, taking into account the expected stability of the receive path gains based on empirical measurements that we already have (see section 3.3.2 of the SKA Low Station Calibration Report SKA-TEL-SKO-0001088).
Even before the precise details of a calibration strategy are determined, compliance could be analysed in a few different calibration scenarios.