In the data generation process, the functional predictors are first simulated from the following process: $$X_m(s) = \sum_{j=1}^5 \kappa_j v_j(s),$$ where \(\kappa_j\) is a vector generated from a Normal distribution with mean one and variance \(\sqrt{a}\, j^{-1/2}\), \(a\) is a random number generated uniformly between 1 and 4, and $$v_j(s) = \sin(j \pi s) - \cos(j \pi s).$$ The bivariate regression coefficient functions are generated from a coefficient space consisting of ten different functions, such as $$b \sin(2 \pi s) \sin(\pi t)$$ and $$b e^{-3 (s - 0.5)^2} e^{-4 (t - 1)^2},$$ where \(b\) is generated from a uniform distribution between 1 and 3. The error function \(\epsilon(t)\), in turn, is generated from the Ornstein-Uhlenbeck process: $$\epsilon(t) = l + [\epsilon_0(t) - l] e^{-\theta t} + \sigma \int_0^t e^{-\theta (t-u)} \, dW_u,$$ where \(l\), \(\theta > 0\), and \(\sigma > 0\) are constants, \(\epsilon_0(t)\) is the initial value of \(\epsilon(t)\) taken from \(W_u\), and \(W_u\) is the Wiener process.
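To make the process concrete, the following is a minimal R sketch of the clean-data generation, assuming the usual function-on-function model \(Y_m(t) = \int_0^1 X_m(s) \beta(s, t)\, ds + \epsilon_m(t)\), which this section implies but does not state explicitly. The helper `ou_error()`, the grid sizes, and the Ornstein-Uhlenbeck constants `l`, `theta`, and `sigma` are illustrative choices, not the package's internals.

```r
set.seed(2023)

## Exact conditional transition of the Ornstein-Uhlenbeck process;
## l, theta, and sigma are illustrative values, not fixed by the text
ou_error <- function(grid, l = 0, theta = 2, sigma = 0.5) {
  n   <- length(grid)
  eps <- numeric(n)
  eps[1] <- rnorm(1, mean = l, sd = sigma)  # initial value (assumption)
  for (k in 2:n) {
    dt  <- grid[k] - grid[k - 1]
    m   <- l + (eps[k - 1] - l) * exp(-theta * dt)
    std <- sigma * sqrt((1 - exp(-2 * theta * dt)) / (2 * theta))
    eps[k] <- rnorm(1, mean = m, sd = std)
  }
  eps
}

n.curve <- 100                            # number of curves
n.gp    <- 50                             # equally spaced grid points on [0, 1]
grid.s  <- seq(0, 1, length.out = n.gp)   # argument of X(s)
grid.t  <- seq(0, 1, length.out = n.gp)   # argument of Y(t)

## Functional predictors: X_m(s) = sum_{j=1}^5 kappa_j v_j(s),
## with kappa_j ~ N(1, sqrt(a) j^(-1/2)) and a ~ U(1, 4)
a <- runif(1, min = 1, max = 4)
X <- matrix(0, nrow = n.curve, ncol = n.gp)
for (j in 1:5) {
  kappa_j <- rnorm(n.curve, mean = 1, sd = sqrt(sqrt(a) * j^(-1/2)))
  v_j     <- sin(j * pi * grid.s) - cos(j * pi * grid.s)
  X       <- X + kappa_j %o% v_j          # outer product: n.curve x n.gp
}

## One member of the coefficient space: beta(s, t) = b sin(2 pi s) sin(pi t)
b    <- runif(1, min = 1, max = 3)
beta <- b * outer(sin(2 * pi * grid.s), sin(pi * grid.t))

## Response Y_m(t) = int_0^1 X_m(s) beta(s, t) ds + eps_m(t),
## approximated by a Riemann sum over the s grid
E <- t(replicate(n.curve, ou_error(grid.t)))
Y <- (X %*% beta) * (grid.s[2] - grid.s[1]) + E
```

The Ornstein-Uhlenbeck path is simulated with its exact conditional transition rather than an Euler approximation, so the discretization introduces no additional bias at the grid points.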
If outliers are allowed in the generated data, i.e., \(out.p > 0\), then a randomly selected \(n.curve \times out.p\) portion of the curves is generated differently from the process described above. In more detail, the outlying observations are generated using bivariate regression coefficient functions drawn from the coefficient space (possibly different from the previously generated coefficient functions) with \(b^*\) in place of \(b\), where \(b^*\) is generated from a uniform distribution between 1 and 2. In addition, in this case, the functional predictors are generated from the following process: $$X_m^*(s) = \sum_{j=1}^5 \kappa_j^* v_j^*(s),$$ where \(\kappa_j^*\) is a vector generated from a Normal distribution with mean one and variance \(\sqrt{a}\, j^{-3/2}\) and $$v_j^*(s) = 2 \sin(j \pi s) - \cos(j \pi s).$$ All the functions are generated at equally spaced points in the interval \([0, 1]\). A sketch of this contamination step follows.
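Continuing the sketch above, the contamination step below regenerates a randomly chosen \(n.curve \times out.p\) share of the curves. The `ceiling()` rounding, the choice of the second example surface for \(\beta^*(s, t)\), and the fresh error draw for the outlying responses are assumptions made for illustration; the text does not pin these details down.

```r
## Indices of the curves to contaminate (rounding up is an assumption)
out.p   <- 0.1
out.idx <- sample(n.curve, ceiling(n.curve * out.p))

## Contaminated predictors: X_m^*(s) = sum_{j=1}^5 kappa_j^* v_j^*(s),
## with kappa_j^* ~ N(1, sqrt(a) j^(-3/2))
X[out.idx, ] <- 0
for (j in 1:5) {
  kappa.star <- rnorm(length(out.idx), mean = 1, sd = sqrt(sqrt(a) * j^(-3/2)))
  v.star     <- 2 * sin(j * pi * grid.s) - cos(j * pi * grid.s)
  X[out.idx, ] <- X[out.idx, ] + kappa.star %o% v.star
}

## Coefficient surface for the outliers with b* ~ U(1, 2); the second
## example member of the coefficient space is used here for illustration
b.star    <- runif(1, min = 1, max = 2)
beta.star <- b.star * outer(exp(-3 * (grid.s - 0.5)^2),
                            exp(-4 * (grid.t - 1)^2))

## Regenerate the outlying responses from the contaminated process
E.star <- t(replicate(length(out.idx), ou_error(grid.t)))
Y[out.idx, ] <- (X[out.idx, , drop = FALSE] %*% beta.star) *
  (grid.s[2] - grid.s[1]) + E.star
```

With `out.p = 0`, `out.idx` is empty and the data reduce to the clean process, matching the behaviour described in the text.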