
BayesianFROC (version 0.3.0)

ppp: Posterior Predictive P value (PPP) for MRMC or srsc.

Description

Posterior predictive p value (PPP) for the chi-square goodness-of-fit statistic.

Usage

ppp(StanS4class, Colour = TRUE, dark_theme = TRUE, plot = TRUE, summary = TRUE)

Arguments

StanS4class

An S4 object of class stanfitExtended, which inherits from the S4 class stanfit. This R object is a fitted model object returned by the function fit_Bayesian_FROC().

It can be passed to DrawCurves(), ppp(), etc.

Colour

Logical: TRUE or FALSE. Whether the curves are drawn with dark-theme colours.

dark_theme

Logical: TRUE or FALSE. Whether a dark theme is used for the plots.

plot

Logical: TRUE or FALSE. Whether the replicated datasets are drawn.

summary

Logical: TRUE or FALSE. Whether to print a verbose summary in the R console. If FALSE, the output is minimal. (In retrospect, this argument would have been better named verbose.)

Value

A number between zero and one: the Posterior Predictive P value (PPP). In addition, the function plots the replicated datasets used to calculate the PPP.

Details

The author dislikes the notion of the p value, and this dislike motivated the development of the new FROC theory; even so, the traditional statistic cannot be avoided here. A well-known defect of the frequentist p value is that it decreases monotonically as the sample size grows. Moreover, some papers (the author does not recall the references) point out that certain frequentist p values coincide exactly with the posterior probability of some event, for example the event that one mean is greater than another.

Under suitable conditions, the author conjectures that the Bayesian p value coincides with the frequentist p value in some sense, e.g. analytically, as a posterior expectation, or in the limit of many MCMC samples. If so, the Bayesian method inherits the defects of the frequentist p value rather than escaping them: the notion is intuitively appealing, but its theoretical foundation remains unsatisfying.
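The general recipe behind a posterior predictive p value can be sketched with a toy conjugate Poisson-gamma model in base R. This is an illustration of the idea only, not BayesianFROC's internal code, and the data and model here are invented for the example:

```r
# Generic PPP recipe (toy sketch, NOT BayesianFROC's internal code):
# for each posterior draw, compare a chi-square discrepancy computed on
# the observed data with the same discrepancy computed on a replicated
# dataset; the PPP is the fraction of draws in which the replicated
# discrepancy is at least as large as the observed one.
set.seed(1)
y <- rpois(20, lambda = 5)                    # toy observed counts

# Conjugate gamma posterior for the Poisson rate (flat-ish prior).
lambda_draws <- rgamma(2000, shape = sum(y) + 1, rate = length(y))

chisq_disc <- function(x, lambda) sum((x - lambda)^2 / lambda)

ppp_value <- mean(vapply(lambda_draws, function(lam) {
  y_rep <- rpois(length(y), lam)              # replicated dataset
  chisq_disc(y_rep, lam) >= chisq_disc(y, lam)
}, logical(1)))

ppp_value  # near 0.5 when the model fits; near 0 or 1 signals misfit
```
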

Examples



#   The 1-st example: MRMC data
#========================================================================================
#                        1)  Fit a Model to MRMC Data
#========================================================================================




               fit <- fit_Bayesian_FROC( ite  = 111,  dataList = ddd )





#========================================================================================
#  2)  Evaluate Posterior Predictive P value for the Goodness of Fit
#========================================================================================






                                ppp(fit)





#  If this quantity, namely the p value, is large, we may say that the goodness
#  of fit is good (we do not reject the null hypothesis).
#  In the traditional procedure, if the p value is less than 0.05 or 0.01, we reject
#  the null hypothesis that our model fits the data well.




# Of course, even if the p value is small, we should not simply discard the result.
# It is not always clear what a p value measures; in frequentist methods, p values
# are known to shrink as the sample size grows, and a similar sensitivity may
# appear in the Bayesian context as well.
# Nevertheless, many statisticians continue to rely on this quantity.






#   The 2-nd example uses  data named d
#========================================================================================
#                  1)  Fit a Model to  Data
#========================================================================================




                       fitt <- fit_Bayesian_FROC( ite  = 111,  dataList = d )




#========================================================================================
#  2)  Evaluate Posterior Predictive P value for the Goodness of Fit
#========================================================================================



                               ppp(fitt)



#  If this quantity is large, then we may say that our model fits the data well.

#  This ppp function was written on 25 August 2019.



#========================================================================================
#                             PPP is problematic
#========================================================================================

# Consider the dataset:


dat <- list(c=c(4,3,2,1),     #     Confidence levels. Note that c is ignored.
            h=c(77,97,32,31), #     Number of hits for each confidence level
            f=c(77,1,14,74),  #     Number of false alarms for each confidence level

            NL=259,        #     Number of lesions
            NI=57,         #     Number of images
            C=4)           #     Number of confidence levels


# Fit a model to the data


             fit <- fit_Bayesian_FROC(dat,ite=111)


# calculate p value



             ppp(fit)


# Then we can see that the FPF and TPF are far from the FROC curve, yet the p value
# is not so small; in this case, the PPP is not the diagnostic we would want.


# In our model, we need monotonicity condition, namely
#
#    h[1] > h[2] > h[3] > h[4]
#    f[1] < f[2] < f[3] < f[4]
#
#  However, the above dataset is far from satisfying this condition, which likely
#  relates to the undesired p value above.
#   Revised 2019 Sept 7
# Of course, the monotonicity need not hold exactly, but good data should come
# close to satisfying it, since a radiologist should not make false positive
# diagnoses with high confidence.
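The monotonicity condition above can be checked mechanically. Below is a small hypothetical helper (is_monotone is not part of BayesianFROC) that tests it for a dataset list, applied to the problematic h and f vectors from the example and to a reordering that does satisfy the condition:

```r
# Hypothetical helper (NOT part of BayesianFROC): check that hits are
# strictly decreasing and false alarms strictly increasing across
# confidence levels, i.e. h[1] > h[2] > ... and f[1] < f[2] < ...
is_monotone <- function(dataList) {
  all(diff(dataList$h) < 0) && all(diff(dataList$f) > 0)
}

good <- list(h = c(97, 77, 32, 31), f = c(1, 14, 74, 77))  # satisfies it
bad  <- list(h = c(77, 97, 32, 31), f = c(77, 1, 14, 74))  # the dataset above

is_monotone(good)  # TRUE
is_monotone(bad)   # FALSE
```
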









