Given a set of \(n\) units or datasets, each containing \(m\) unlabeled feature vectors, the one-to-one matching problem is to find \(n\) label permutations (one per unit) that produce the best match of feature vectors across units. The objective function to minimize is the sum of (weighted) squared Euclidean distances between all pairs of feature vectors having the same label. This amounts to minimizing the sum of the within-label variances.
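The equivalence between the pairwise-distance objective and the within-label variances can be checked numerically. The sketch below is illustrative Python/numpy, not the package's R interface; the function names pairwise_obj and variance_obj are hypothetical, and the unweighted case is assumed. It uses the identity that, for \(n\) points, the sum of squared distances over all unordered pairs equals \(n\) times the sum of squared deviations from the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 3, 2                 # n units, m feature vectors per unit, p dims
X = rng.normal(size=(n, m, p))    # X[i, l] = feature vector carrying label l in unit i

def pairwise_obj(X):
    """Sum over labels of squared distances between all unordered
    pairs of same-labeled vectors across units."""
    total = 0.0
    n_units = X.shape[0]
    for l in range(X.shape[1]):
        for i in range(n_units):
            for j in range(i + 1, n_units):
                d = X[i, l] - X[j, l]
                total += d @ d
    return total

def variance_obj(X):
    """Equivalent form: n times the sum of within-label
    sums of squares about each label's mean."""
    n_units = X.shape[0]
    centered = X - X.mean(axis=0, keepdims=True)   # center each label group
    return n_units * np.sum(centered ** 2)

print(np.isclose(pairwise_obj(X), variance_obj(X)))
```

Minimizing one criterion over label permutations therefore minimizes the other.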
The sample means and sample covariances of the matched feature vectors are calculated as a post-processing step.
If x is a matrix, the rows should be sorted by increasing unit label and unit should be a nondecreasing sequence of integers, for example \((1,...,1,2,...,2,...,n,...,n)\) with each integer \(1,...,n\) replicated \(m\) times.
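The expected layout of x and unit can be sketched as follows. This is illustrative Python/numpy rather than the package's R call syntax; the dimensions are made-up, and the assumption is that x stacks the \(m\) feature vectors of each unit as rows, giving an \((nm,p)\) matrix.

```python
import numpy as np

n, m, p = 3, 2, 4                 # 3 units, 2 feature vectors each, 4 features
x = np.arange(n * m * p, dtype=float).reshape(n * m, p)  # stacked (n*m, p) matrix
unit = np.repeat(np.arange(1, n + 1), m)  # (1, 1, 2, 2, 3, 3)

# unit must be nondecreasing, matching the row order of x
print(np.all(np.diff(unit) >= 0))
```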
The argument w can be specified either as a vector of positive numbers (recycled to length \(p\) if needed) or as a positive definite matrix of size \((p,p)\).
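The two forms of w relate in the usual way: a weight vector acts like the diagonal of a weight matrix in the quadratic form \((x-y)^\top W (x-y)\). A minimal Python/numpy sketch of this equivalence (illustrative only; the variable names are made up):

```python
import numpy as np

p = 3
x = np.ones(p)
y = np.array([2.0, 0.0, 1.0])
w_vec = np.array([1.0, 2.0, 0.5])   # vector of positive weights
w_mat = np.diag(w_vec)              # equivalent positive definite matrix

d = x - y
dist_vec = np.sum(w_vec * d ** 2)   # vector form of the weighted squared distance
dist_mat = d @ w_mat @ d            # matrix form (quadratic form)
print(np.isclose(dist_vec, dist_mat))
```

A full (non-diagonal) positive definite matrix additionally allows correlated feature weighting.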
The optional argument control is a list with three fields: sigma, the starting point for the optimization (an \((m,n)\) matrix of permutations); maxit, the maximum number of iterations; and equal.variance, a logical value specifying whether the returned sample covariance matrices V of the matched features should be common to all labels/classes (TRUE) or label-specific (FALSE, the default).