It is common practice to constrain the tropospheric zenith delay (TZD) parameter in a PPP solution. Depending on the quality of the temperature, pressure and relative humidity input data, I typically apply a constraint on the initial TZD value with a standard deviation between 5 and 20 cm. I never gave this pseudo-observation much attention until a problematic data set made me revisit this concept.
The purpose of adding a constraint on the initial TZD value in a sequential least-squares adjustment is usually either to slightly speed up the PPP convergence or to avoid singularities under poor geometry. Otherwise, with at least five visible satellites, the system is solvable and external information is not required. Still, adding a TZD constraint is usually recommended, although the constraint you specify might not have the anticipated effect.
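Mechanically, such a constraint is just a pseudo-observation on the TZD parameter with weight 1/σ², added to the normal equations. Here is a minimal sketch with NumPy; the two-parameter design matrix and misclosure values are toy numbers for illustration, not real GNSS data:

```python
import numpy as np

# Toy two-parameter adjustment: x = [height-like parameter, TZD correction].
# A and l below are illustrative values only.
A = np.array([[1.0, 0.5],
              [1.0, -0.3],
              [1.0, 0.8]])
l = np.array([0.02, -0.01, 0.03])   # misclosures (m)
P = np.eye(3)                        # unit weights for simplicity

N = A.T @ P @ A                      # normal matrix
b = A.T @ P @ l

x_free = np.linalg.solve(N, b)       # unconstrained solution

# Constrain the TZD correction toward 0 with a 10 cm standard deviation:
# add 1/sigma**2 to the corresponding diagonal element of N.
sigma_c = 0.10
N_con = N.copy()
N_con[1, 1] += 1.0 / sigma_c**2
x_con = np.linalg.solve(N_con, b)

print(x_free[1], x_con[1])           # the constraint pulls the TZD toward 0
```

With strong geometry the real observations dominate and the pseudo-observation barely matters; with weak geometry it can dominate the estimate, which is the effect discussed below.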
To illustrate this concept, I am using GPS data collected during an airborne survey. Data quality is rather poor, with noisy measurements, several short data gaps, and cycle slips. In the figure below, I first plotted, in red, the TZD values obtained from the GPT model, with a mean value of approximately 2.27 m. Abrupt changes in the time series correlate with altitude changes of the airplane. I then computed two smoothed PPP solutions without applying any constraints to the TZD parameter, processed with 1 Hz (green) and 10 Hz (blue) data, respectively. The TZD estimates are obviously problematic: the large departures from the GPT values are most likely an artifact of poor data quality and geometry. Even though the TZD estimates for the two solutions differ, they remain within about 7 cm of each other.
The interesting part comes when adding an initial constraint on the TZD parameter with a standard deviation of 10 cm. The two resulting solutions, shown in purple and cyan, now have a 30 cm offset in TZD estimates. What happened?
Fig. 1: Tropospheric zenith delay (TZD) estimates following different strategies
As I mentioned in a previous post, neglecting time correlation in a PPP solution leads to an over-optimistic precision for the estimated parameters. In the unconstrained solutions, the standard deviation of the 1 Hz TZD estimate was about 14.5 cm, while this value was reduced to 5 cm with 10 Hz data. Does this mean that the 10 Hz solution is better? Certainly not: it would have been better only if all observations were truly independent.
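The over-optimism follows from the 1/√n scaling of formal errors when observations are assumed independent. Going from 1 Hz to 10 Hz multiplies the sample count by ten, so least squares reports a standard deviation smaller by √10 ≈ 3.2, close to the 14.5 cm to 5 cm reduction above, even though heavily correlated extra samples add little real information:

```python
import math

sigma_1hz = 0.145                              # formal TZD sigma with 1 Hz data (m)
sigma_10hz_formal = sigma_1hz / math.sqrt(10)  # what least squares would claim
print(round(sigma_10hz_formal, 3))             # ~0.046 m, near the 5 cm reported
# If the extra samples were fully correlated, the true uncertainty
# would remain close to 0.145 m despite the smaller formal value.
```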
Here is how the initial TZD constraint works: it can be shown that a weighted average of the unconstrained TZD values (green) and the initial TZD constraint from the GPT model (red) gives exactly the constrained values (purple) shown on the graph. In other words, since the 1 Hz TZD estimate was three times less precise than the 10 Hz estimate, the initial constraint pulled the 1 Hz constrained estimates (purple) much closer to the GPT values (red) than the constrained 10 Hz solution (cyan).
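This weighted average can be written explicitly as an inverse-variance-weighted mean. In the sketch below, only the standard deviations come from the text; the 2.60 m unconstrained estimate is a hypothetical value chosen for illustration:

```python
def constrained_tzd(tzd_free, sigma_free, tzd_apriori=2.27, sigma_apriori=0.10):
    """Inverse-variance-weighted mean of the unconstrained TZD estimate
    and the a priori (GPT) value, all in metres."""
    w_free = 1.0 / sigma_free**2
    w_apr = 1.0 / sigma_apriori**2
    return (w_free * tzd_free + w_apr * tzd_apriori) / (w_free + w_apr)

# Hypothetical unconstrained estimate of 2.60 m for both solutions:
tzd_1hz = constrained_tzd(2.60, 0.145)   # pulled strongly toward 2.27
tzd_10hz = constrained_tzd(2.60, 0.05)   # stays much closer to 2.60
print(tzd_1hz, tzd_10hz)
```

Because the 10 cm constraint is tighter than the 14.5 cm formal precision of the 1 Hz estimate but looser than the 5 cm formal precision of the 10 Hz estimate, the same constraint pulls the two solutions by very different amounts.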
In reality, even the standard deviation of the 1 Hz TZD estimate was probably way too optimistic. With a proper covariance matrix, the initial 10 cm constraint would have pulled the solution even closer to the a priori value since observations alone could not reliably decorrelate the TZD and height parameters.
What is the solution to this problem? The proper solution, of course, is to model time correlation directly in the PPP filter.
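One common way to represent time correlation in a filter (a general technique, not necessarily the implementation used here) is a first-order Gauss-Markov process, whose correlation time and steady-state sigma are tuned to the data. A minimal propagation sketch, with illustrative parameter values:

```python
import math

def gauss_markov_propagate(x, P, dt, tau=3600.0, sigma_ss=0.05):
    """Propagate a scalar first-order Gauss-Markov state over dt seconds.
    tau is the correlation time (s) and sigma_ss the steady-state sigma (m);
    both defaults are illustrative, not recommendations."""
    phi = math.exp(-dt / tau)            # state transition
    q = sigma_ss**2 * (1.0 - phi**2)     # process noise keeping var at sigma_ss**2
    return phi * x, phi**2 * P + q

# The state variance converges to sigma_ss**2 regardless of its start value:
x, P = 0.0, 0.0
for _ in range(1000):
    x, P = gauss_markov_propagate(x, P, dt=30.0)
```

Unlike a hard initial constraint, such a process lets correlated errors decay with a realistic time constant, so the filter's formal precision no longer shrinks as if every epoch carried fresh information.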