Abstract
This chapter presents the first of seven studies that evaluate lexical facility as a second language (L2) vocabulary construct. Study 1 examines the sensitivity of the lexical facility measures to differences among three university English populations: a preuniversity group of L2 English students in a university language program, L2 university students, and first language (L1) university students. The sensitivity of the three measures (vocabulary size, mean recognition speed, and recognition speed consistency) to group differences is examined for each measure individually and for composites of the measures. Construct validity is also established by comparing performance across word frequency levels.
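The three measures can be made concrete with a short sketch. The function below computes a vocabulary size score (percentage correct), a mean recognition time, and the coefficient of variation (SD divided by mean of correct response times) for a single participant. The data and the scoring details are illustrative assumptions, not the chapter's actual instrument or dataset.

```python
import statistics

def lexical_facility_measures(correct_rts_ms, n_correct, n_items):
    """Compute the three lexical facility measures for one participant
    (illustrative scoring; all data here are hypothetical)."""
    vksize = n_correct / n_items * 100             # vocabulary size (% correct)
    mn_rt = statistics.mean(correct_rts_ms)        # mean recognition speed (ms)
    cv = statistics.stdev(correct_rts_ms) / mn_rt  # consistency: SD / mean
    return vksize, mn_rt, cv

# Hypothetical participant: 45 of 50 items correct; RTs on ten correct trials
rts = [640, 702, 588, 655, 719, 600, 673, 690, 612, 648]
size, mn, cv = lexical_facility_measures(rts, n_correct=45, n_items=50)
```

A lower CV indicates more consistent recognition times relative to the participant's own mean speed, which is why it can index processing skill independently of raw speed.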
Notes
- 1.
The false-alarm data departed markedly from a normal distribution, as some participants made few or no false alarms. A Kruskal–Wallis test was therefore run to compare the groups on false alarms. There was a significant difference between the groups, χ² = 18.18, p < .001, η² = .82. Follow-up Mann–Whitney tests showed that the difference between the preuniversity and L2 university groups was significant, U = 289.50, p < .001, d = .94 (Lenhard and Lenhard 2014).
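As a rough illustration of the test used here, the Kruskal–Wallis H statistic can be computed directly from pooled ranks. The sketch below uses hypothetical false-alarm counts and omits the tie correction that a full implementation (or a statistics package) would apply.

```python
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H across k independent groups. Tied values get
    their average rank; the tie correction is omitted for brevity."""
    pooled = sorted(chain.from_iterable(groups))
    n = len(pooled)
    # average 1-based rank for each distinct value (handles ties)
    rank = {v: sum(i + 1 for i, x in enumerate(pooled) if x == v)
               / pooled.count(v)
            for v in set(pooled)}
    weighted = sum(
        len(g) * (sum(rank[x] for x in g) / len(g)) ** 2 for g in groups
    )
    return 12 / (n * (n + 1)) * weighted - 3 * (n + 1)

# Hypothetical false-alarm counts for three groups
h = kruskal_wallis_h([0, 1, 1, 3], [0, 0, 1, 2], [4, 5, 6, 7])
```

The resulting H is referred to a chi-squared distribution with k − 1 degrees of freedom, which is why the note reports the statistic as χ².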
- 2.
The use of a multivariate ANOVA (MANOVA) is motivated on conceptual grounds, as the three measures are all assumed to be elements of the lexical facility construct. However, the data departed significantly from a key assumption of the test, homogeneity of variance/covariance, and so the MANOVA was not carried out. Instead, five univariate ANOVAs were run to compare group performance on the three individual measures and on the two composite measures of interest, VKsize_mnRT and VKsize_mnRT_CV.
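Each univariate ANOVA reduces to a comparison of between-groups and within-groups variance. A minimal sketch, with hypothetical scores standing in for the VKsize, mnRT, and CV data:

```python
def one_way_anova(*groups):
    """One-way ANOVA F ratio and eta-squared effect size (sketch).
    Group score lists here are hypothetical stand-ins."""
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_b, df_w = len(groups) - 1, n - len(groups)
    f = (ss_between / df_b) / (ss_within / df_w)
    eta_sq = ss_between / (ss_between + ss_within)
    return f, eta_sq

f, eta_sq = one_way_anova([1, 2, 3], [2, 3, 4], [5, 6, 7])
```

Eta-squared, the proportion of total variance attributable to group membership, is the effect size reported alongside the tests in these notes.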
- 3.
Boxplot inspections used throughout the book treat as outliers values falling more than 1.5 box lengths (1.5 times the interquartile range) beyond either edge of the box.
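This is the standard Tukey fence criterion: the box length is the interquartile range (IQR), so a value is flagged if it lies more than 1.5 × IQR below the first quartile or above the third. A minimal sketch, using Python's default (exclusive) quantile method:

```python
import statistics

def tukey_outliers(data, k=1.5):
    """Flag values lying more than k box lengths (k * IQR) beyond the
    box edges, the boxplot criterion described in the note."""
    q1, _, q3 = statistics.quantiles(data, n=4)  # default exclusive method
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if x < low or x > high]

flagged = tukey_outliers([10, 12, 13, 14, 15, 16, 18, 40])
```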
- 4.
Two of these were the VKsize scores for the L2 university (p < .005) and L1 university groups (p < .02), both showing a tendency toward higher scores, as reflected in the moderately negative skew. The others were the CV score for the L2 university group (p < .02) and the composite VKsize_mnRT score for the L1 university group (p = .008).
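The negative skew mentioned here can be quantified with the moment coefficient of skewness, one common formula among several variants. The scores below are made up to mimic a near-ceiling distribution:

```python
def skewness(xs):
    """Moment coefficient of skewness, m3 / m2**1.5. Negative values
    indicate a longer left tail, as with near-ceiling scores."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

# Made-up near-ceiling scores: most values high, one low straggler
g1 = skewness([60, 88, 90, 92, 94, 95, 96, 97, 98, 99])
```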
- 5.
A central aim of the empirical research presented in these chapters is to demonstrate that the measures of processing skill (mnRT and CV), combined with VKsize, yield a more sensitive measure of proficiency differences than the VKsize measure alone. This question is particularly conducive to treatment in a regression format, in which the effect of the individual measures on group differences can be sequentially analyzed and quantified. One candidate technique for the current study is ordinal logistic regression; discriminant analysis, a MANOVA-related technique, is another (Field 2009). Ordinal logistic regression predicts an ordinal (categorical) variable, such as proficiency group membership, from one or more independent variables, in this case the three lexical facility measures. The logic is the same as in standard multiple or hierarchical regression, but the criterion is an ordered category rather than a continuous variable. An ordinal logistic regression was attempted with these data, but its assumptions, particularly that of proportional odds, were not met.
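For concreteness, the proportional-odds (cumulative logit) model underlying ordinal logistic regression can be sketched as follows. A single slope is shared across all category thresholds; that shared-slope restriction is the proportional-odds assumption the note reports was violated. All parameter values here are hypothetical.

```python
import math

def ordinal_probs(x, thresholds, beta):
    """Category probabilities under a proportional-odds (cumulative
    logit) model: P(Y <= j) = sigmoid(theta_j - beta * x). One slope
    `beta` serves every threshold (the proportional-odds assumption)."""
    def sigmoid(z):
        return 1 / (1 + math.exp(-z))
    # cumulative probabilities for each threshold, capped at 1.0
    cum = [sigmoid(t - beta * x) for t in thresholds] + [1.0]
    # successive differences give the per-category probabilities
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Three ordered groups (e.g., preuniversity < L2 university < L1 university)
probs = ordinal_probs(x=0.5, thresholds=[-1.0, 1.0], beta=2.0)
```

When the proportional-odds assumption fails, the slope differs across thresholds, and the single-β model above misstates the group probabilities, which is why the technique was set aside here.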
References
Beeckmans, R., Eyckmans, J., Janssens, V., Dufranne, M., & Van de Velde, H. (2001). Examining the yes/no vocabulary test: Some methodological issues in theory and practice. Language Testing, 18(3), 235–274.
Cameron, L. (2002). Measuring vocabulary size in English as an additional language. Language Teaching Research, 6(2), 145–173.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale: Lawrence Erlbaum.
Coxhead, A. (2000). A new academic word list. TESOL Quarterly, 34(2), 213–238.
Eyckmans, J. (2004). Learners’ response behavior in Yes/No vocabulary tests. In H. Daller, M. Milton, & J. Treffers-Daller (Eds.), Modelling and assessing vocabulary knowledge (pp. 59–76). Cambridge: Cambridge University Press.
Field, A. (2009). Discovering statistics using SPSS (3rd ed.). London: Sage.
Harrington, M. (2006). The lexical decision task as a measure of L2 lexical proficiency. EUROSLA Yearbook, 6(1), 147–168.
Heitz, R. P. (2014). The speed-accuracy tradeoff: History, physiology, methodology, and behavior. Frontiers in Neuroscience, 8, 150.
Hulstijn, J. H., Van Gelderen, A., & Schoonen, R. (2009). Automatization in second language acquisition: What does the coefficient of variation tell us? Applied Psycholinguistics, 30(4), 555–582.
Larson-Hall, J. (2016). A guide to doing statistics in second language research using SPSS and R. New York: Routledge.
Laufer, B., & Nation, P. (1995). Vocabulary size and use: Lexical richness in L2 written production. Applied Linguistics, 16(3), 307–322.
Laufer, B., & Nation, P. (2001). Passive vocabulary size and speed of meaning recognition: Are they related? EUROSLA Yearbook, 1(1), 7–28.
Lenhard, W., & Lenhard, A. (2014). Calculation of effect sizes. Retrieved November 29, 2014, from http://www.psychometrica.de/effect_size.html
Maxwell, S. E., & Delaney, H. D. (2004). Designing experiments and analyzing data: A model comparison perspective (2nd ed.). New York: Psychology Press.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons’ responses and performances as scientific inquiry into score meaning. American Psychologist, 50(9), 741.
Mochida, A., & Harrington, M. (2006). The yes-no test as a measure of receptive vocabulary knowledge. Language Testing, 23(1), 73–98. doi:10.1191/0265532206lt321oa.
Moder, K. (2010). Alternatives to F-test in one way ANOVA in case of heterogeneity of variances (a simulation study). Psychological Test and Assessment Modeling, 52(4), 343–353.
Plonsky, L., & Derrick, D. J. (2016). A meta-analysis of reliability coefficients in second language research. The Modern Language Journal, 100, 538–553.
Plonsky, L., & Oswald, F. L. (2014). How big is “big”? Interpreting effect sizes in L2 research. Language Learning, 64, 878–912. doi:10.1111/lang.12079.
Ratcliff, R., Gomez, P., & McKoon, G. (2004). A diffusion model account of the lexical decision task. Psychological Review, 111(1), 159–182.
Schmitt, N., Schmitt, D., & Clapham, C. (2001). Developing and exploring the behaviour of two new versions of the vocabulary levels test. Language Testing, 18(1), 55–89. doi:10.1191/026553201668475857.
Schmitt, N., Jiang, X., & Grabe, W. (2011). The percentage of words known in a text and reading comprehension. The Modern Language Journal, 95(1), 26–43. doi:10.1111/j.1540-4781.2011.01146.x.
Segalowitz, N., & Segalowitz, S. J. (1993). Skilled performance, practice and differentiation of speed-up from automatization effects: Evidence from second language word recognition. Applied Psycholinguistics, 14(3), 369–385. doi:10.1017/S0142716400010845.
van Heuven, W. J. B., Dijkstra, T., & Grainger, J. (1998). Orthographic neighborhood effects in bilingual word recognition. Journal of Memory and Language, 39(3), 458–483. doi:10.1006/jmla.1998.2584.
Ziegler, J. C., & Perry, C. (1998). No more problems in Coltheart’s neighborhood: Resolving neighborhood conflicts in the lexical decision task. Cognition, 68(2), B53–B62.
Copyright information
© 2018 The Author(s)
Cite this chapter
Harrington, M. (2018). Lexical Facility as an Index of L2 Proficiency. In: Lexical Facility. Palgrave Macmillan, London. https://doi.org/10.1057/978-1-137-37262-8_6
Print ISBN: 978-1-137-37261-1
Online ISBN: 978-1-137-37262-8