It has been a few weeks since my last update on the u-blox data processing. The lack of stability of the position estimates in the single-frequency PPP (SF-PPP) solution without precise external ionospheric constraints bothered me and I spent some time thinking about possible means of improving the solution. In this blog post, I present the preliminary results of my investigation and I invite everybody with insights to share them so that we can all solve this issue.
Estimating slant ionospheric delays in the SF-PPP filter is quite a reliable approach for stations equipped with geodetic-quality receivers and antennas. Choke-ring antennas eliminate a great deal of the code multipath and further multipath and noise reduction algorithms in the receivers can provide pseudorange noise of 10-20 cm at higher elevation angles. These measurements thus allow for a precise estimation of the slant ionospheric delays, leading to cm- to dm-level positioning accuracies.
In the u-blox data set that I collected with a $20 patch antenna, code observations were contaminated by meter-level multipath effects. It is easy to visualize, based on the GRAPHIC combination, that code multipath propagates directly into the position solution. When explicitly estimating slant ionospheric delays, the process noise can filter out multipath effects and provide smoother position time series. However, in this data set, this strategy alone was not sufficient to prevent code multipath from contaminating the position estimates. It is clear, in this case, that unmodeled time-correlated errors in the observations become a nuisance to the filter estimates.
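For readers unfamiliar with it, the GRAPHIC combination is simply the average of the code and carrier-phase observables on one frequency; since the first-order ionospheric delay enters the two with opposite signs, it cancels, while half of the code multipath survives. A minimal sketch (toy numbers, ambiguity term omitted for clarity):

```python
import numpy as np

# GRAPHIC combination: (code + phase)/2 cancels the first-order ionospheric
# delay because it delays the code but advances the phase. Code multipath,
# however, survives at half amplitude and maps into the position solution.
# (The carrier-phase ambiguity term is omitted here to keep the toy simple.)
def graphic(code_m, phase_m):
    """code_m, phase_m: pseudorange and carrier phase, both in meters."""
    return 0.5 * (np.asarray(code_m) + np.asarray(phase_m))

# Toy example: same geometric range, opposite iono sign, 2 m code multipath
rho, iono, mp = 22_000_000.0, 5.0, 2.0
c = rho + iono + mp   # pseudorange
l = rho - iono        # carrier phase (scaled to meters)
g = graphic(c, l)     # iono cancels; half the multipath remains
```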
On the other hand, the time variation of slant ionospheric delays derived from a global ionospheric map (GIM) is smooth and can often be more representative of the true ionospheric variation than multipath-contaminated code observations. One of the problems with using GIMs to define pseudo-observables in the PPP filter is that the computed STEC values are often biased by several factors, such as the mathematical representation of VTEC, the temporal resolution of the maps, the spatial interpolation required, etc. Unless ionospheric irregularities are present, the biases contained in the GIM values vary rather slowly and are therefore time-correlated. As a consequence, adding constraints from GIMs at every epoch without accounting for this time correlation will put too much weight on the observable and will most likely bias the solution.
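As a reminder of where some of these biases come from, GIMs provide vertical TEC on a thin shell, so deriving STEC requires a mapping function whose shell-height assumption is itself approximate. A sketch of the standard single-layer mapping (the 450 km shell height is an assumed value; products differ):

```python
import math

# Single-layer model commonly used with GIMs: STEC = M(z) * VTEC, where
# M(z) = 1/cos(z') and z' is the zenith angle at the ionospheric pierce
# point for a thin shell at height H_ION.
R_E = 6371.0   # mean Earth radius, km
H_ION = 450.0  # assumed shell height, km (varies between GIM products)

def stec_from_vtec(vtec_tecu, elev_deg):
    z = math.radians(90.0 - elev_deg)            # zenith angle at receiver
    sin_zp = R_E / (R_E + H_ION) * math.sin(z)   # thin-shell geometry
    return vtec_tecu / math.sqrt(1.0 - sin_zp**2)

# 1 TECU corresponds to about 0.162 m of code delay on L1 (40.3/f^2 * 1e16)
delay_l1_m = 40.3e16 / 1575.42e6**2 * stec_from_vtec(20.0, 30.0)
```

Errors in the shell height, plus the interpolation in space and time, are exactly the slowly varying biases (B) discussed above.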
From the discussion above, it should be apparent that unmodeled time-correlated errors are a limiting factor in our context. As explained in a previous blog post, there are several means of accounting for these errors in PPP, but the easiest method for me to implement was the state-augmentation approach, leading to the following functional model for the carrier-phase (L), code (C) and GIM (G) observables on frequency “i” to satellite “j”:
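The original equation image is not reproduced here; based on the parameter description in the following paragraph, the model plausibly takes this form (my reconstruction, with ε denoting measurement noise):

```latex
\begin{aligned}
L_i^j &= \rho^j + c\,dT - I_i^j + \lambda_i N_i^j + \varepsilon_L \\
C_i^j &= \rho^j + c\,dT + I_i^j + M^j + \varepsilon_C \\
G^j   &= I_i^j + B^j + \varepsilon_G
\end{aligned}
```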
It is assumed here that the phase and code observables were corrected for error sources such as satellite clock errors and tropospheric effects. The parameters to be estimated are thus the receiver position (contained in rho), the receiver clock offset (dT), the slant ionospheric delays (I) and the carrier-phase ambiguities (N). To model time-correlated errors, two new types of parameters are now included in the filter: satellite-dependent code multipath (M) and GIM bias (B). Even though this is a serious case of over-parameterization (we have many more unknowns than observations), I will show hereafter that these parameters are helpful in absorbing errors in the measurements.
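To make the state-augmentation idea concrete, here is an illustrative Kalman prediction step carrying the two new state types as random walks (illustrative Python with assumed noise values, not my actual implementation):

```python
import numpy as np

# State-augmentation sketch: alongside position, clock, ionosphere and
# ambiguities, one code-multipath state M and one GIM-bias state B per
# satellite are added, each modeled as a random walk. The process-noise
# densities below are assumed values for illustration only.
def predict(x, P, dt, q_mp=0.1**2, q_bias=0.01**2, idx_mp=(), idx_bias=()):
    """Prediction with identity transition (random-walk states).

    q_mp, q_bias: process-noise densities (m^2/s) for multipath and GIM bias.
    idx_mp, idx_bias: indices of the M and B states in the state vector x.
    """
    Q = np.zeros_like(P)
    for i in idx_mp:
        Q[i, i] = q_mp * dt      # multipath: fast-varying, loose constraint
    for i in idx_bias:
        Q[i, i] = q_bias * dt    # GIM bias: slow-varying, tight constraint
    return x.copy(), P + Q       # state unchanged, covariance grows by Q

x = np.zeros(4)                  # e.g. [up, clock, M_G13, B_G13]
P = np.eye(4)
x1, P1 = predict(x, P, dt=1.0, idx_mp=[2], idx_bias=[3])
```

The relative sizes of q_mp and q_bias encode the physics: multipath varies over minutes, GIM biases over hours.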
The tedious part of the state-augmentation approach is to provide meaningful a priori constraints and process noise for these new parameters. I could have a lengthy discussion on this topic but I will refrain from going into these details for the moment. After running the software dozens of times with different (realistic) values, I came up with the following solution:
Fig 1 Kinematic SF-PPP solution with modeling of time-correlated errors
Even though we did not expect a full convergence of the solution with one hour of data, at least the estimated displacement is greatly improved with respect to my previous attempt (knowing that the receiver was stationary). It is also interesting to have a look at the estimated multipath for satellite G13 with respect to the values obtained by simply differencing the code and the phase observations:
Fig 2 Estimated multipath vs multipath computed from observations
The previous plot suggests that the process noise on my multipath parameter, although quite permissive, was still too tight to capture the full peak-to-peak variations. However, increasing the process noise on the multipath parameters essentially means that code observations no longer contribute to estimating the position (they only serve to estimate the multipath). This is why tuning the initial constraints and process noise for time-correlation parameters is quite complex: they are data-set dependent. I tried applying the same estimation strategy to a geodetic-quality station and, although the results were not terrible, they were not as good as without modeling time correlation. This is easily explained: with low-multipath data, the extra parameters mostly absorb information that would otherwise strengthen the position estimates.
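The deweighting effect can be seen with a one-line marginalization argument: for a code measurement z = x + M + e, marginalizing out a multipath state M of variance var_M yields an observation of x with variance var_e + var_M (toy numbers below, not from the data set):

```python
# Toy illustration of the tuning dilemma: a code measurement z = x + M + e,
# with M ~ N(0, var_M) and e ~ N(0, var_e) independent. Marginalizing M,
# z carries information about x equivalent to a measurement of variance
# var_e + var_M, so loosening the multipath constraint deweights the code.
def effective_code_variance(var_e, var_M):
    return var_e + var_M

tight = effective_code_variance(0.3**2, 0.1**2)  # code still contributes
loose = effective_code_variance(0.3**2, 2.0**2)  # code nearly uninformative
```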
In view of an automated online PPP service, these results are puzzling. Should we model time correlation and help low-cost users get rid of some code multipath while biasing geodetic-quality solutions? Or does it become necessary to put in place a pre-assessment of data quality to determine the level of code multipath prior to computing a PPP solution?