Measurement Error Model:
$$x_{ik}=\alpha_i+\beta_i\mu_k+\epsilon_{ik}$$
where \(x_{ik}\) is the measurement by the ith of \(N\) methods for the kth of \(n\) items, \(i = 1\) to \(N\ge 3\), \(k = 1\) to \(n\), \(\mu_k\) is the true value for the kth item, \(\epsilon_{ik}\) is the
normally distributed random error with variance \(\sigma_i^2\) for the ith method and the kth item, and
\(\alpha_i\) and \(\beta_i\) are the accuracy parameters for the ith method. For identifiability, the beta for the first column of data is set to 1 and the corresponding alpha is set to 0; these constraints (or similar ones) are required to identify the model.
The imprecision for the ith method is \(\sigma_i\). If all alphas are 0 and all betas are 1, there is no bias. If all betas equal 1 but some alphas differ from 0, there is a constant (additive) bias. If some betas differ from 1, there is a nonconstant (scale) bias. Note that the individual betas are not unique; only ratios of the betas are unique. If you divide all the betas by \(\beta_i\), the betas then represent the scale bias of the other devices/methods relative to device/method \(i\). Also, when the betas differ from 1, the sigmas are not directly comparable because the measurement scales (the sizes of the units) differ. To make the sigmas comparable, divide each sigma by its corresponding beta.
Technically, the alphas and betas describe the measurements in terms of the unknown true values (i.e., the unknown true values can be thought of as latent variables). The "true values" are ALWAYS unknown (unless you have a real, highly accurate reference method/device). The real goal is to calibrate one device/method in terms of another. This is easily accomplished because each method's expected measurement is a linear function of the same unknown true values. For methods 1 and 2, the calibration curve is given by:
$$E[x_{1k}]=\left(\alpha_1-\alpha_2\beta_1/\beta_2\right)+\left(\beta_1/\beta_2\right)E[x_{2k}]$$
or equivalently
$$E[x_{2k}]=\left(\alpha_2-\alpha_1\beta_2/\beta_1\right)+\left(\beta_2/\beta_1\right)E[x_{1k}].$$
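The two calibration equations are inverses of each other, which is easy to check numerically. A short sketch in Python (hypothetical fitted values, with method 1 as the reference so \(\alpha_1=0\), \(\beta_1=1\)):

```python
# Hypothetical fitted accuracy parameters (method 1 is the reference).
alpha1, beta1 = 0.0, 1.0
alpha2, beta2 = 2.0, 1.2

# E[x1] = (alpha1 - alpha2*beta1/beta2) + (beta1/beta2) * E[x2]
a12 = alpha1 - alpha2 * beta1 / beta2
b12 = beta1 / beta2

# The inverse calibration: E[x2] = (alpha2 - alpha1*beta2/beta1) + (beta2/beta1) * E[x1]
a21 = alpha2 - alpha1 * beta2 / beta1
b21 = beta2 / beta1

# Composing the two linear maps returns the original value (they are inverses).
e_x2 = 62.0                     # some expected measurement by method 2
e_x1 = a12 + b12 * e_x2         # calibrated to method 1's scale
assert abs(a21 + b21 * e_x1 - e_x2) < 1e-12
print(e_x1)
```

With these values, a method-2 reading with expectation 62.0 corresponds to 50.0 on method 1's scale, and mapping back recovers 62.0 exactly.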
Use cplot, with the alpha.beta.sigma argument specified, to display this calibration curve, the calibration equation, and the corresponding scale-bias-adjusted imprecision standard deviations.
Note that likelihood confidence intervals and bootstrapped confidence intervals can be returned. Wald-type intervals based on the standard errors are also available by using the confint function on the returned fit object. See the examples.
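The package's likelihood and bootstrap machinery is not reproduced here, but the idea behind a bootstrapped interval can be sketched with a simple moment-based estimator. Because the errors are independent, \(\mathrm{cov}(x_2,x_3)=\beta_2\beta_3\,\mathrm{var}(\mu)\) and \(\mathrm{cov}(x_1,x_3)=\beta_1\beta_3\,\mathrm{var}(\mu)\), so their ratio estimates the scale-bias ratio \(\beta_2/\beta_1\). A nonparametric bootstrap percentile interval for that ratio, in Python with hypothetical parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate data from the model with N = 3 methods (hypothetical parameters;
# method 1 is the reference with alpha_1 = 0, beta_1 = 1).
n = 500
mu = rng.normal(50.0, 10.0, n)
alpha = np.array([0.0, 2.0, -1.0])
beta  = np.array([1.0, 1.2, 0.8])
sigma = np.array([1.0, 1.5, 0.9])
x = alpha[:, None] + beta[:, None] * mu + rng.normal(0.0, sigma[:, None], (3, n))

def beta_ratio(x):
    # cov(x2, x3) / cov(x1, x3) estimates beta2 / beta1, since the common
    # factor beta3 * var(mu) cancels and the errors are independent.
    c = np.cov(x)
    return c[1, 2] / c[0, 2]

# Nonparametric bootstrap: resample items (columns) with replacement,
# then take the 2.5% and 97.5% percentiles of the bootstrap distribution.
boot = np.array([beta_ratio(x[:, rng.integers(0, n, n)]) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"beta2/beta1 estimate: {beta_ratio(x):.3f}, 95% CI: ({lo:.3f}, {hi:.3f})")
```

The true ratio here is 1.2; the percentile interval should be a fairly tight band around the point estimate. The package's own likelihood and Wald-type intervals are generally preferable; this sketch only illustrates the resampling logic.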