
languageR (version 1.0)

lexdec: Lexical decision latencies for 79 English nouns

Description

Lexical decision latencies elicited from 21 subjects for 79 English concrete nouns, with variables linked to subject or word.

Usage

data(lexdec)

Source

Data collected with Jen Hay, University of Canterbury, Christchurch, New Zealand, 2004.

Examples

library(languageR)   # provides the lexdec data and pvals.fnc
library(lme4)        # provides lmer

data(lexdec)
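# Optional quick look at the data (illustrative check, not part of the
# original example; uses only base R functions)
str(lexdec)
summary(lexdec$RT)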

# baseline model: crossed random intercepts for Subject and Word
lexdec.lmer = lmer(RT ~ 1 + Correct + Trial + PrevType * meanWeight + 
Frequency + NativeLanguage * Length + (1|Subject) + (1|Word), 
data = lexdec)
pvals.fnc(lexdec.lmer)$summary
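# The fixed-effect estimates and their t-values can also be inspected
# directly (a standard lme4 call, added here as an illustration)
summary(lexdec.lmer)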

# random slopes: allow the effect of Trial to vary by subject

lexdec.lmerA = lmer(RT ~ 1 + Correct + Trial + PrevType * meanWeight + 
Frequency + NativeLanguage * Length + (Trial|Subject) + (1|Word), 
data = lexdec)
anova(lexdec.lmer, lexdec.lmerA)
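# Variance components of the random-slope model (illustrative inspection
# with lme4's VarCorr, not part of the original example)
VarCorr(lexdec.lmerA)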

# additionally allow the effect of Length to vary by subject
lexdec.lmerB = lmer(RT ~ 1 + Correct + Trial + PrevType * meanWeight + 
Frequency + NativeLanguage * Length + (Trial|Subject) + 
(Length|Subject) + (1|Word), data = lexdec)
anova(lexdec.lmerA, lexdec.lmerB)

# model criticism

qqnorm(resid(lexdec.lmerB))
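# A reference line makes deviations from normality easier to judge
# (standard graphics call, added as an illustration)
qqline(resid(lexdec.lmerB))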

# refit after excluding data points with absolute standardized residuals > 2
lexdec.lmerC = lmer(RT ~ 1 + Correct + Trial + PrevType * meanWeight + 
Frequency + NativeLanguage * Length + 
(Trial|Subject) + (Length|Subject) + (1|Word), 
data = lexdec[abs(scale(resid(lexdec.lmerB)))<2,])

qqnorm(resid(lexdec.lmerC))
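# How many data points the residual-based trimming removed (illustrative
# check; assumes lexdec.lmerB was fit on all rows of lexdec)
nrow(lexdec) - sum(abs(scale(resid(lexdec.lmerB))) < 2)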
# for models with random correlation parameters, pvals.fnc does not work,
# as mcmcsamp() (now in lme4) is currently under reconstruction
# pvals.fnc(lexdec.lmerC)$summary   # currently broken
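# A possible alternative in current lme4 (not part of the original example):
# profile-likelihood confidence intervals for the fixed effects
# confint(lexdec.lmerC, method = "profile")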
