This work was funded partially by SDP and partially by TCC.

The original memo is here. A presentation from the SDP F2F meeting (July 2016) is here.

To help readers, I include here some clarifications of the memo, mostly based on email dialogues. I prefer to receive comments via email or documents, so that the questions and my responses can be collected on this page.


1. Mitch (Daniel Mitchell from CSIRO) asked about the assumptions concerning visibility noise. The visibility noise values are taken from Table 3 of Baseline Design v1, for 100kHz bandwidth. This bandwidth is perhaps low, but I was trying to ensure that the phase connection in lambda^2 due to TEC was possible. In the various tables, the high-number-of-sources case shows what would happen if the noise level were correspondingly lower. Van Weeren et al [1] use 2MHz bandwidth in LOFAR facet calibration. With this improved noise level, a typical field has 66 sources usable for pierce point analysis at 100 MHz. For the proposed configuration LOWBD2, this provides good but not perfect coverage (see the image immediately below). The observing time to complete is about 8.4 years, or about 25 years in elapsed time. With the dithered uniform array, the time to complete is slightly better: 6.4 years, or about 20 years in elapsed time (if the observing efficiency is 0.3).
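The bandwidth dependence here is just the radiometer equation: at fixed integration time, visibility noise scales as 1/sqrt(bandwidth). A minimal sketch of the improvement in going from the 100kHz channels assumed in the memo to the 2MHz used by van Weeren et al (the factor only, not the absolute Table 3 noise values):

```python
import math

def noise_improvement(bw_ref_hz, bw_new_hz):
    """Factor by which visibility noise drops when the bandwidth is
    widened at fixed integration time (radiometer equation)."""
    return math.sqrt(bw_new_hz / bw_ref_hz)

# 100 kHz -> 2 MHz lowers the noise by sqrt(20), about 4.5x,
# which is what brings more sources above the pierce point cutoff.
print(f"{noise_improvement(100e3, 2e6):.2f}x")
```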


2. I assume that the phase estimates can be fit as a function of lambda^2 to obtain TEC. However, to allow that I had to assume that each 10s, 100kHz sample had sufficient SNR to permit the pierce point solution. The TEC could in principle be tracked smoothly, thus allowing better SNR. As discussed in point 1, a better sampling would be 10s and 2MHz.
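The TEC fit described here can be sketched as a weighted linear least-squares problem, following the memo's assumption that the phase varies as lambda^2 with a TEC-proportional slope. Everything in this sketch is synthetic and illustrative (the slope value, noise level, and band are placeholders, and phase wrapping is ignored):

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

# Twenty 100 kHz samples across a 2 MHz chunk at 100 MHz.
freqs = np.linspace(100e6, 102e6, 20)
lam2 = (C / freqs) ** 2

# Synthetic "true" slope standing in for the TEC-proportional term.
k_true = 0.5
rng = np.random.default_rng(1)
phases = k_true * lam2 + rng.normal(0.0, 0.05, lam2.size)

# Linear least squares for phase = k * lambda^2 + c. With adequate
# per-sample SNR the slope k (hence TEC) is recovered; tracking k
# smoothly in time would improve the effective SNR further.
A = np.column_stack([lam2, np.ones_like(lam2)])
k_fit, c_fit = np.linalg.lstsq(A, phases, rcond=None)[0]
print(f"fitted slope {k_fit:.3f} (true {k_true})")
```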

3. To elaborate on the previous point, continuity across time, frequency, and space could in principle be used to improve SNR. I think this requires considerable development and is beyond the scope of this memo; I encourage future work in this area. However, any improvement in the model should also include some of the negative points listed on page 37, such as layering of the ionosphere, off-zenith effects, and so on.

4. The point about faceting deserves more emphasis. My assertion is that faceting destroys the phase estimation on the scales needed for EOR. Someone could test that assertion quantitatively with simulations, and my flavour of analysis could be extended to include that case.

5. That LOFAR can image at high dynamic range across small parts of the field of view is no guarantee that the phase connection needed across the whole FOV for EOR will be sufficiently good.

6. Mitch (Daniel Mitchell from CSIRO) asked if fitting across the core would allow improved SNR on some points. My reply is as follows:

"Much can be derived from the plots of singular values (see e.g. Figure 22). Each includes the case of the core alone, using station-to-station pierce points. Your approach would condense the clusters of pierce points into single points (see Figure 13). But the singular value spectrum will be at best that of the station-to-station core singular spectrum, scaled up by sqrt(166) (the number of stations in the BDv2 core), i.e. about 13. Boosting the station SNR by a factor of thirteen is good, but it only scales up the green curve, which cuts off at low values of J anyway. So it helps, but only to recover the first few Zernikes. This occurs because there are not enough station pierce points to constrain the Zernikes down to reasonably fine spatial scales. By the way, this shows the value of the singular value analysis: the different components (core, core-halo, and halo-halo) can be tracked and assembled separately."

The image below shows the pierce points arising from the core stations (in the inner 1 km). By treating the core stations as a unit, with a simple phase gradient across it, these points could be improved in SNR, but the impact on phase screen fitting will be minimal. The green curve in Figure 22 corresponds to the points in the figure below. Improving the SNR of poorly spaced samples is an ineffective strategy compared to spreading out the stations.
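The effect of pierce point placement on the singular value spectrum can be illustrated with a toy calculation. This sketch uses a plain 2-D polynomial basis as a stand-in for the Zernikes in the memo, and an arbitrary cluster size; the point is only that clustered samples leave all but the lowest-order modes unconstrained, so the spectrum collapses after the first few singular values regardless of per-point SNR:

```python
import numpy as np

def design_matrix(x, y, max_order=6):
    """Low-order 2-D polynomial basis evaluated at pierce points
    (a stand-in for the Zernike basis used in the memo)."""
    cols = [x**i * y**j for i in range(max_order + 1)
            for j in range(max_order + 1 - i)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
n = 200
# Pierce points from a compact core: clustered in a tiny patch of the screen.
clustered = design_matrix(*rng.uniform(-0.02, 0.02, (2, n)))
# The same number of points spread across the whole screen.
spread = design_matrix(*rng.uniform(-1.0, 1.0, (2, n)))

s_clustered = np.linalg.svd(clustered, compute_uv=False)
s_spread = np.linalg.svd(spread, compute_uv=False)

# The clustered spectrum falls off by orders of magnitude within the
# first ten modes; the spread layout keeps them all usable.
print("clustered s1/s10:", s_clustered[0] / s_clustered[9])
print("spread    s1/s10:", s_spread[0] / s_spread[9])
```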


In addition to the single core pierce point, the improved SNR on the baselines from the combined core to the halo stations will nominally improve the SNR of those pierce points. However, the redundancy relied upon for the self-calibration of the pierce points is very poor, since the SNR on the baselines is poorly equalised. This means that the estimation of the pierce points will be poorly conditioned. This is similar to the difficulties in using a very large antenna (e.g. the 100m Lovell telescope) together with smaller antennas (e.g. 25m diameter).
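The conditioning penalty from poorly equalised baseline SNR can be shown with a small toy model. This is not the memo's calculation: it is a sketch in which antenna phases are solved selfcal-style, baseline weights are taken proportional to the product of antenna sensitivities, and the 16x sensitivity factor is an illustrative stand-in for a Lovell-versus-25m collecting-area ratio:

```python
import numpy as np

def selfcal_condition(sens):
    """Condition number (excluding the absolute-phase degeneracy) of
    the weighted normal matrix for antenna-phase selfcal, with baseline
    weights proportional to the product of antenna sensitivities."""
    w = np.outer(sens, sens).astype(float)
    np.fill_diagonal(w, 0.0)
    L = np.diag(w.sum(axis=1)) - w          # weighted graph Laplacian
    eig = np.sort(np.linalg.eigvalsh(L))[1:]  # drop the reference mode
    return eig[-1] / eig[0]

# Ten equally sensitive antennas: perfectly conditioned.
print(selfcal_condition(np.ones(10)))   # ~1.0
# One antenna 16x more sensitive: the spread in baseline SNR
# degrades the conditioning of the solution.
mixed = np.ones(10)
mixed[0] = 16.0
print(selfcal_condition(mixed))         # ~6.4
```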

7. Paul Alexander asked if peeling is included. It is. The cutoff for a pierce point is SNR 5. I assume that all sources brighter than 10 times this limit (i.e. 50 sigma) are peeled, which I think is fairly aggressive, and possibly unrealistic. This should be investigated at the next level of modelling.

8. Mitch comments that the observing efficiency of 30% is too optimistic and recommends a value of more like 10% for any given field. The SKAO has evidently estimated the efficiency at 11%. I agree with this point and will use it from now on. I think there are two thresholds in elapsed time: a project running for, say, 3 years is quite possible. A project running for more than 10 years elapsed time will be difficult to staff and execute. A project running for more than the design life of the telescope is obviously not possible. So the range of acceptable observing times (i.e. actual on-source time) is 0.3 years to 5 years.
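The arithmetic behind these thresholds is simply elapsed time = on-source time / efficiency. A minimal sketch, using the 11% figure from the text:

```python
def elapsed_years(on_source_years, efficiency):
    """Elapsed calendar time implied by a required on-source
    integration time and a per-field observing efficiency."""
    return on_source_years / efficiency

# At ~11% efficiency, even modest on-source requirements stretch
# towards or past the 10-year "hard to staff" threshold.
for t in (0.3, 1.0, 5.0):
    print(f"{t:.1f} yr on source -> {elapsed_years(t, 0.11):.1f} yr elapsed")
```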

9. Various people want to push the bandwidth up even more. With 30MHz bandwidth, there are 664 sources in a field. The coverage of pierce point space is as seen in the next diagram (each point is now a single pixel, to make the gaps visible). Computing the singular value spectrum for this number of pierce points is going to be difficult. But we are also running into another difficulty with too many sources. The solution for pierce points only works because:

Nsource * Nant <<  Nant * Nant

If Nsource ~ Nant then the solution for the pierce points is poorly constrained and potentially unstable. Thus, for example, slight errors in the Sky Model will lead to substantial errors in the pierce point phases.
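The constraint above can be made concrete with a two-line calculation. The 512-station count below is an assumption for illustration only (it is not taken from the memo); the source counts are the 2MHz and 30MHz cases from the text:

```python
def pierce_point_headroom(n_source, n_ant):
    """Ratio of unknown pierce point phases (Nsource * Nant) to the
    memo's constraint budget (~Nant * Nant). Values well below 1 are
    needed, i.e. Nsource << Nant."""
    return (n_source * n_ant) / (n_ant * n_ant)

# 66 sources (2 MHz case) against an assumed 512 stations: comfortable.
print(pierce_point_headroom(66, 512))   # ~0.13
# 664 sources (30 MHz case): Nsource ~ Nant, so the solution degrades.
print(pierce_point_headroom(664, 512))  # ~1.3
```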


10. Mitch raises a good point about using the array core (166 stations) as a unit and determining the phase and gradients across it. This would increase the number of pierce points available, but in an uneven way, since the differing baselines can have widely different SNR. My analysis assumes that the pierce point solutions are well behaved. This may not always be the case. Remember that the solution for the pierce points is basically selfcal, so the solution will be quite unstable (because of the large range in SNR and the large number of sources). I'm not modelling the pierce point solution at all in the current model. We understand selfcal in the case of equal-size antennas, but as one antenna becomes much more sensitive, intuition says that the SNR spreads around in a peculiar way. So if we were to treat the core as one big station and fit it with a small number of parameters, O(6), then I would expect the solution to become unstable and the pierce point phases to be unreliable. With these considerations, we are coming to see the limitations of the current analysis, and a more sophisticated approach may be needed.

11. Mitch (see email) has reiterated his conviction that we can improve the SNR behaviour by suitable parametrisation in both frequency (TEC) and space/time (using the core). His suggestions stretch the theoretical framework presented in Memo 26: the two-stage process of fitting pierce points and then fitting Zernikes begins to become inappropriate.

12. Stefan Wijnholds (see email) asks about the source counts I'm using. I use the Condon et al 2012 [2] source counts. In this analysis we are only interested in source counts above about 0.3Jy, and there seems to be little controversy about those (as opposed to the counts at low fluxes).

13. Stefan believes that we can work closer to the "Nsource ~ Nant wall" than I do. I think this has to be investigated as a topic in itself. I have also assumed that the 50 sigma sources in the field are peeled and that the A-Team sources are removed. These assumptions should also be included in any such analysis.

14. My reply to Stefan required a bit more context to my memo than I have given so far. It's worth repeating here:

"My work has focused on producing a theoretical framework that is predictive of the final noise level as a function of integration time, and also takes into account the spatial distribution of antennas (where I credit you [Stefan] and Jaap, and others for emphasising the importance of the antenna layout). I am very concerned that the current LOW proposed layout has several sub-optimal aspects:

- The retention of closely spaced stations along spiral arms,
- The poor sampling in real space,
- The super-stations.

All of these make a substantial difference to the stability of the solution, as I argued in the March 2015 calibration meeting which I chaired. So I see that the array configuration is apparently being designed without a tradeoff analysis between calibration and configuration. I can only conclude that the calibration team and/or the office is much too optimistic about the difficulty of calibration. Why this should be so is beyond me; the LOFAR experience does not lead to optimism. So far, I have concentrated on 100MHz and above. My memo says that 50MHz (Cosmic Dawn) is not possible at all. This is a big deal for the science. Has that been considered? The calibration issues are such a strong function of frequency that it may not be possible to design an array that performs adequately over the entire 50 - 300 MHz band. For example, the ionospheric footprint must be shadowed on the ground. This means that the layout must incorporate 80 km baselines at 50MHz and 55 km baselines at 100MHz. At 50MHz, this overflows Boolardy station. And as my memo shows, a real-space-filling configuration with a 55km maximum baseline does considerably better than the current array."

15. I'm getting close to calling this discussion done. I would summarise Mitch's suggestions by saying that I think we need a more ambitious calibration approach:

- Representing the screen directly rather than via pierce points,
- Accounting for peeling, and probably sources outside the main lobe,
- Using fewer parameters for the core,
- Searching in TEC space for the optimum and then averaging over frequency (which is what I call tracking).

An analysis of this enhanced approach should be as rigorous as that presented in Memo 26. I will have to think about how to do that.

One motivation for doing this work is that I now contract for SDP, and I was (and still am) concerned that the processing required for LOW is essentially unknown and thus not quantifiable in terms of the computing resources needed. So while I appreciate what Mitch has in mind, my memo and Mitch's responses have already validated my concerns. The calibration processing for SKA1-LOW is going to be much more expensive than considered so far.

The fact that the proposed LOW configuration does not appear to pay any attention to constraints arising from calibration is also concerning. There should at least be some tradeoff analysis.

Based on all the feedback that I’ve received, I don’t believe that my fundamental point, that calibration of LOW (with current algorithms) is infeasible, has been successfully challenged. I think at this point I need to ponder a while where to go next (and how to fund any future work).


[1] R. J. van Weeren, W. L. Williams, M. J. Hardcastle, T. W. Shimwell, D. A. Rafferty, J. Sabater, G. Heald, S. S. Sridhar, T. J. Dijkema, G. Brunetti, M. Brüggen, F. Andrade-Santos, G. A. Ogrean, H. J. A. Röttgering, W. A. Dawson, W. R. Forman, F. de Gasperin, C. Jones, G. K. Miley, L. Rudnick, C. L. Sarazin, A. Bonafede, P. N. Best, L. Birzan, R. Cassano, K. T. Chyzy, J. H. Croston, T. Ensslin, C. Ferrari, M. Hoeft, C. Horellou, M. J. Jarvis, R. P. Kraft, M. Mevius, H. T. Intema, S. S. Murray, E. Orru, R. Pizzo, A. Simionescu, A. Stroe, S. van der Tol, and G. J. White, “LOFAR Facet Calibration,” ApJS, vol. 223, no. 1, p. 2, Mar. 2016.

[2] J. J. Condon, W. D. Cotton, E. B. Fomalont, K. I. Kellermann, N. Miller, R. A. Perley, D. Scott, T. Vernstrom, and J. V. Wall, “Resolving the Radio Source Background: Deeper Understanding through Confusion,” ApJ, vol. 758, no. 1, p. 23, Oct. 2012.