Function partykit::ctree is a reimplementation of (most of)
  party::ctree employing the new party infrastructure
  of the partykit package. The vignette vignette("ctree", package = "partykit")
  explains the internals of the different implementations.
  
Conditional inference trees estimate a regression relationship by binary recursive
  partitioning in a conditional inference framework. Roughly, the algorithm
  works as follows: 1) Test the global null hypothesis of independence between
  any of the input variables and the response (which may be multivariate as well). 
  Stop if this hypothesis cannot be rejected. Otherwise, select the input
  variable with the strongest association to the response. This
  association is measured by a p-value corresponding to a test for the
  partial null hypothesis of a single input variable and the response.
  2) Implement a binary split in the selected input variable. 
  3) Recursively repeat steps 1) and 2).
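A minimal sketch of steps 1) to 3) in practice; the airquality data set and
  the default settings are chosen here purely for illustration:

    library("partykit")

    ## drop observations with a missing response
    aq <- subset(airquality, !is.na(Ozone))

    ## steps 1)-3): recursive partitioning with default settings
    aq_tree <- ctree(Ozone ~ ., data = aq)

    print(aq_tree)   ## splits and node-wise test statistics
    plot(aq_tree)    ## tree with terminal-node plots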
The implementation utilizes a unified framework for conditional inference,
  or permutation tests, developed by Strasser and Weber (1999). The stop
  criterion in step 1) is either based on multiplicity adjusted p-values 
  (testtype = "Bonferroni" in ctree_control)
  or on the univariate p-values (testtype = "Univariate"). In both cases, the
  criterion is maximized, i.e., 1 - p-value is used. A split is implemented 
  when the criterion exceeds the value given by mincriterion as
  specified in ctree_control. For example, when 
  mincriterion = 0.95, the p-value must be smaller than
  0.05 in order to split this node. This statistical approach ensures that
  the right-sized tree is grown without additional (post-)pruning or cross-validation.
  The level of mincriterion can either be specified to be appropriate
  for the size of the data set (and 0.95 is typically appropriate for
  small to moderately-sized data sets) or could potentially be treated like a
  hyperparameter (see Section 3.4 in Hothorn, Hornik and Zeileis, 2006).
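  As a brief sketch of how these settings can be varied via ctree_control
  (the airquality data set and the specific values are illustrative only):

    library("partykit")
    aq <- subset(airquality, !is.na(Ozone))

    ## default: Bonferroni-adjusted p-values, split only if 1 - p-value > 0.95
    ct_strict <- ctree(Ozone ~ ., data = aq,
      control = ctree_control(testtype = "Bonferroni", mincriterion = 0.95))

    ## univariate p-values and a lower threshold typically grow a larger tree
    ct_larger <- ctree(Ozone ~ ., data = aq,
      control = ctree_control(testtype = "Univariate", mincriterion = 0.90))

    width(ct_strict)   ## number of terminal nodes
    width(ct_larger)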
  The input variable to split in is selected based on the univariate p-values,
  which avoids a variable selection bias
  towards input variables with many possible cutpoints. The test statistics
  in each of the nodes can be extracted with the sctest method.
  (Note that the generic is in the strucchange package, so that package either
  needs to be loaded or sctest.constparty has to be called directly.)
  In cases where splitting stops due to the sample size (e.g., minsplit
  or minbucket etc.), the test results may be empty.
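  For example, the node-wise test results can be inspected as follows
  (again using airquality purely for illustration):

    library("partykit")
    library("strucchange")   ## provides the sctest() generic

    aq <- subset(airquality, !is.na(Ozone))
    aq_tree <- ctree(Ozone ~ ., data = aq)

    ## test statistics and p-values of all input variables in the root node
    sctest(aq_tree, node = 1)

    ## equivalently, without attaching strucchange:
    ## partykit::sctest.constparty(aq_tree, node = 1)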
Predictions can be computed using predict, which returns predicted means,
  predicted classes or median predicted survival times; further information
  about the conditional distribution of the response, i.e., class
  probabilities or predicted Kaplan-Meier curves, can be obtained as well.
  For observations with zero weights, predictions are computed from the
  fitted tree when newdata = NULL.
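  A small sketch of the different prediction types (the iris data set and the
  type values shown are illustrative; see predict.party for all options):

    library("partykit")

    ir_tree <- ctree(Species ~ ., data = iris)

    predict(ir_tree, newdata = head(iris))                 ## predicted classes
    predict(ir_tree, newdata = head(iris), type = "prob")  ## class probabilities
    predict(ir_tree, newdata = head(iris), type = "node")  ## terminal node ids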
By default, the scores for each ordinal factor x are
  1:nlevels(x); this may be changed for variables in the formula
  using scores = list(x = c(1, 5, 6)), for example.
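  A hypothetical sketch (the simulated data and the score values are made up
  for illustration only):

    library("partykit")

    ## simulated data with a three-level ordinal predictor x
    set.seed(1)
    d <- data.frame(
      y = rnorm(200),
      x = factor(sample(c("low", "medium", "high"), 200, replace = TRUE),
                 levels = c("low", "medium", "high"), ordered = TRUE)
    )
    d$y <- d$y + 2 * (d$x == "high")

    ## default scores would be 1:3; supply unequally spaced scores instead
    ct <- ctree(y ~ x, data = d, scores = list(x = c(1, 5, 6)))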
For a general description of the methodology see Hothorn, Hornik and
  Zeileis (2006) and Hothorn, Hornik, van de Wiel and Zeileis (2006).