
Recognition vocabulary knowledge as a predictor of academic performance in an English as a foreign language setting

Abstract

This paper presents findings of a study of recognition vocabulary knowledge as a predictor of written Academic English Proficiency (AEP) and overall academic achievement in an English medium higher education program in an English-as-a-Foreign-Language (EFL) context. Vocabulary knowledge was measured using a Timed YES/NO (TYN) test. AEP was assessed using an academic writing test based on IELTS. Performance on these measures was correlated with Grade Point Average (GPA), as a measure of academic achievement, for Arabic L1 users (N = 70) at an English-medium College of Applied Sciences in the Sultanate of Oman. Vocabulary size and speed of recognition correlated with both the academic writing and GPA measures. The combined vocabulary and writing measures were also examined as predictors of academic achievement. The TYN test is discussed as a reliable, cost- and time-effective general measure of AEP, and as a means of establishing whether students have the vocabulary knowledge needed to undertake study in a tertiary-level English medium program.

Background

English as a medium of instruction

English has become the dominant medium of instruction in higher education internationally. In 2006 English was used as a medium of instruction in 103 countries, whereas the second most widely used university classroom language, French, was used in 42 countries (Ammon, 2006)^a. This growth is particularly noticeable in countries defined by Kachru (1985) as being in the outer- and expanding-circles of English use, where English is traditionally considered a foreign language (Bashir, 2007).

In Kachru’s notion of language circles, the national context of English use can be classified as situated within inner-, outer- or expanding-circles, according to a nation’s historico-political relationship with Anglo-American hegemonic power. Inner-circle nations are those where English is the primary language of the state (e.g. New Zealand). Outer-circle nations consist of former British dominions where English is still widely used as a second language to communicate between different groups (e.g. India). The expanding-circle refers to those nations without direct experience of British rule, where English plays an important role, be that in commerce, education or elsewhere (e.g. Oman).

Research exploring the challenges facing L2 students in English medium programs has focused primarily on L2 English users in Anglophone countries, such as Australia and the USA (see Murray, 2012; Read, 2008), rather than on English L2 users at universities and colleges in countries where English is not the primary language outside the classroom^b. Those studies concerned with L2 English users studying at English medium universities in expanding- and outer-circle contexts indicate that such students experience a number of language proficiency related difficulties. In one self-assessment questionnaire study of students’ perceptions of their study experience in English medium faculties of Social Science, Humanities, Engineering and Business at Norwegian and German universities, 42% of the Norwegian sample and 72% of the German students reported substantial comprehension difficulties (Hellekjaer, 2010). In another self-report study of students in English medium higher education programs in Hong Kong, students identified comprehending lectures, understanding specialist vocabulary and writing in an appropriate academic style as the major difficulties associated with studying through English as an L2 (Evans & Morrison, 2011)^c. Research by Kırkgöz (2005) in Turkey similarly found that first- and fourth-year English L2 undergraduate students ranked the incomprehensibility of both lectures and reading material among their greatest concerns. Though little empirical research exists on the relationship between L2 comprehension and performance in these expanding- and outer-circle educational contexts, one study of 2,000 bilingual Arab students in an English medium tertiary level science program found that proficiency in pre-faculty preparatory year English correlated significantly with academic performance in a Calculus course (Yushau & Omar, 2007). The current research aims to contribute to the literature by investigating the relationship between academic English proficiency and overall academic performance of students studying in English medium tertiary level programs in the Sultanate of Oman, a country on the Arabian Peninsula with no direct experience of British rule but where English plays a key role in education and commerce; for the purpose of this study Oman is therefore considered to be in the expanding-circle of English use.

Vocabulary and success at English medium universities

Research in the 1980s and 90s led to the recognition that vocabulary knowledge is a precondition for most other language abilities (see Alderson & Banerjee, 2001, for a review of that research) and to the emergence of lexical approaches to language learning (Willis, 1990; Lewis, 2002; McCarthy, 2003). Corpus linguistics studies reported that the 2,000 most frequent headwords in English account for between 80% and 85% of the words in any spoken or written English text, depending on the text type (Nation, 1990; Nation & Waring, 1997; Nation & Newton, 1997). The educational corollary of this research is that L2 learners should first master these 2,000 most frequently occurring words before attempting to study content through English (Meara, Lightbown & Halter, 1997).

Unassisted comprehension of English texts requires vocabulary knowledge beyond the 2,000 most frequently occurring headwords. In this research, a headword such as build forms the core of a word-family that includes related forms such as builder, building, and built. Between 95% and 98% of the word-families in a text must be comprehensible for unassisted comprehension of that text (Waring & Nation, 2004; Hsueh-Chao & Nation, 2000). Knowledge of only the 2,000 most frequent word-families would typically mean that approximately one in every five words encountered in an academic English text is unknown, rendering the text largely unintelligible (Nation & Waring, 1997). More recent studies suggest that learners need knowledge of 8,000 to 9,000 word-families for unassisted comprehension of written texts, and 6,000 to 7,000 for spoken texts (Nation, 2006). However, this figure of 8,000-9,000 word-families is not a gatekeeping figure for undertaking higher education studies in English medium environments. Although some researchers insist that a vocabulary of 10,000 word-families is necessary for successful study in linguistically demanding higher degree courses (Hazenberg & Hulstijn, 1996, in Schmitt, Schmitt & Clapham, 2001), there is general agreement in the literature (Nation, 1990; Schmitt, 2000; Meara et al., 1997) that an L2 learner's passive vocabulary should reach a minimum threshold of 5,000 word-families before the learner undertakes studies in English medium universities and colleges.

The greater the learner's vocabulary knowledge, the lower the cognitive demands placed on the learner. More developed vocabulary knowledge enables learners to read with less effort (Segalowitz & Segalowitz, 1993; Segalowitz, Segalowitz & Wood, 1998) and results in better performance on comprehension tests (Chen, 2011; Miller & Peleg, 2010; Nassaji, 2003; Qian, 1999). The preponderance of the evidence indicates that in order for an L2 to become a vehicle for learning, vocabulary knowledge must first be sufficiently developed. This in turn suggests that measures of vocabulary knowledge could predict written AEP and academic performance.

Learners' vocabulary size has been assessed using both traditional multiple-choice formats and checklist formats involving yes/no judgments (Meara & Buxton, 1987; Mochida & Harrington, 2006). A widely used paper-based test of vocabulary size is the Vocabulary Levels Test (Nation, 1990). The test uses a multiple matching format in which items are banded by frequency levels, allowing size to be inferred from performance across decreasing frequency levels. An alternative measure of vocabulary size is the Yes/No test (Meara & Buxton, 1987; Meara & Jones, 1988). The test presents a mix of frequency-banded words as well as phonologically-possible nonwords. The learner simply checks which words are known and which are not. The inclusion of nonwords is designed to control for guessing, with nonword errors subtracted from correct word performance for an overall score (Mochida & Harrington, 2006). Although the validity and usefulness of correcting for guessing in educational testing has been questioned (Ebel, 1979), nonword performance provides an important control variable for the format, especially when guessing varies across participants. The use of nonwords does raise a number of issues for the validity and reliability of the format, and these have received significant attention from researchers (Beeckmans et al., 2001; Cameron, 2002; Eyckmans, 2004; Huibregtse, Admiraal, & Meara, 2002; Mochida & Harrington, 2006). Although the YN test does not directly elicit vocabulary knowledge, it has been shown to correlate highly with standard measures of vocabulary knowledge (Mochida & Harrington, 2006). The Timed Yes/No test used here differs from earlier versions in that it also collects participants' response times, thus providing an additional, implicit measure of L2 vocabulary knowledge.
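To make the scoring procedure concrete, the following is a minimal sketch of the correction for guessing described above (the function name and example figures are illustrative assumptions, not the scoring code used by LMAP or in this study):

```python
def corrected_score(word_hits, n_words, false_alarms, n_nonwords):
    """Yes/No vocabulary score corrected for guessing.

    A 'hit' is a YES response to a real word; a 'false alarm' is a
    YES response to a nonword. Subtracting the false-alarm rate from
    the hit rate penalizes indiscriminate YES responding.
    """
    hit_rate = word_hits / n_words
    false_alarm_rate = false_alarms / n_nonwords
    return hit_rate - false_alarm_rate

# A test-taker who accepts 60 of 72 words but also 7 of 28 nonwords:
print(corrected_score(60, 72, 7, 28))  # 0.583..., down from a raw 0.833
```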

The TYN test measures vocabulary knowledge as a discrete, context-free (trait) entity (Read & Chapelle, 2001). The use of both vocabulary size and speed of access provides, in principle, a measure of vocabulary ability that is more valid than either dimension alone (Chapelle, 2006). The instrument has been piloted with tertiary EFL learners in Singapore, and has also been tested extensively in Australia with students who were newly arrived in that country and thus had EFL-equivalent proficiency at the time of testing. Test findings, including validity and reliability data, have been reported (Harrington & Carey, 2009; Harrington, 2006; Harrington, 2007; Mochida & Harrington, 2006). The effectiveness of the timed Yes/No test has yet to be established in an outer- or expanding-circle EFL setting, and this is one of the reasons the present study is important.

A secondary aim of this study is to establish whether decontextualized word recognition tests are valid tools for assessing Arabic L1 users' English language proficiency. Schmitt, Schmitt & Clapham (2001, p. 64) point out that the linguistic, cognitive and cultural background of participants influences their responses (time and/or accuracy) on vocabulary tests. Research by Fender (2003; 2008) and Ryan & Meara (1991) indicates that Arab L1 users experience greater difficulty than proficiency-equivalent ESL learners generally in reading tasks and in isolated word recognition tasks. Fender (2008) suggests that the nature of Arabic literacy and a difference in the decoding skills used in English and Arabic may account for these difficulties. Across the Arab world, online and print mass media materials are typically written in Modern Standard Arabic, in a script that does not encode short vowels, requiring that readers utilize not only the explicit phonological information in the orthography but also extra-lexical information such as morpho-syntactic knowledge and discourse context (Fender, 2008, p. 26; Abu Rabia & Seigel, 1995). In terms of culture, universal school education began in the Sultanate of Oman in the late 1960s, prior to which there were only three schools and no universities in Oman (Roche, 2009); as a result, the widespread illiteracy of the 1960s dropped to below 19% in the early twenty-first century (CIA, 2011). Al-Amrani (2009) found that current Omani EFL university students are reluctant readers who use less effective bottom-up reading strategies than proficiency-equivalent non-Omani EFL student peers. Given this background, the present study also attempts to establish whether word-recognition tasks can be used with Arabic L1 users to determine their Academic English Proficiency.

The study

This study explores the relationship between vocabulary knowledge, written Academic English Proficiency and academic performance of EFL students in Oman. The first objective of the study is to establish whether:

1) Academic performance by Arabic L1 users at an English medium institution in Oman can be predicted by two key measures of written academic English proficiency.

Academic Performance will be measured by Grade Point Average (GPA) taken from students’ academic transcripts, and written Academic English Proficiency (AEP) will be measured through scores on a mock IELTS academic writing test. The second hypothesis is a refinement of the first:

2) The TYN Test is an effective predictor of written academic English proficiency for the participants.

The motivation for this study is to examine the relationship between vocabulary recognition knowledge, written academic English proficiency, and student academic performance in an English medium institution in an expanding-circle context for Arabic L1 users. Of particular interest is the potential use of measures of vocabulary recognition knowledge as a tool for identifying learners who may lack the written English proficiency needed to undertake tertiary studies in English medium institutions in this context. To the extent that the tool's effectiveness can be established, it will provide these institutions with an efficient and cost-effective method for identifying students who may require further academic English language support before attempting tertiary study in English.

Methods

Participants

The participants in this study (N = 70) were students at an English language medium higher education institution in the Sultanate of Oman, Rustaq College of Applied Sciences. The students were drawn from the first and fourth years of Faculty study and all reported their first language to be Arabic. Data collection took place within one week, during the 13th and 14th weeks of the second 15-week semester of the academic year. Participation was voluntary and the study was carried out in accordance with the College's ethical guidelines.

Materials

Academic English written proficiency was assessed using a task adapted from practice IELTS materials (Cambridge University Press, 2005). Each student was directed to write a 250-word short essay on the topic "Oman in the past, Oman in the future" and was given 40 minutes to complete the task. Each paper was rated on a ten-point scale reflecting proficiency in grammar, vocabulary, coherence and response to the task.

A discrete-item computerized timed YES/NO response test was used to measure vocabulary knowledge. Two versions of 100 items each were used, each consisting of 72 words (18 from each of four frequency levels) and 28 nonwords. The words were drawn from the 1,000 (1K), 2,000 (2K), 3,000 (3K), 5,000 (5K) and 10,000 (10K) frequency bands of the British National Corpus (Harrington & Carey, 2009). Word Test A used less frequent words, drawn from the 2K, 3K, 5K and 10K bands; Word Test B used more common words, drawn from the 1K, 2K, 3K and 5K bands. The nonwords were phonologically permissible English-like forms (e.g. blurge) as opposed to non-permissible items (e.g. rbgeul). The TYN test recorded accuracy and reaction time as participants responded "YES" to words and "NO" to nonwords; "YES" responses to nonwords (false alarms) reduced the participant's score. See Table 1 for the matrix of possible responses. Each item appeared for 5,000 milliseconds (5 seconds); if a participant failed to respond within that time, the item disappeared and was replaced by the next one. The response time score was the mean response time of each participant across the presented items. The test was administered using LMAP, a web-based testing tool developed at the University of Queensland, Australia, http://www.languagemap.com.

Table 1 Matrix of possible responses, where UPPER CASE = correct responses
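As an illustration of the response matrix and the timing rule, the sketch below (our own simplified Python rendering, not the LMAP implementation; the trial data are invented) classifies each trial and computes a mean response time over answered items:

```python
from statistics import mean

TIMEOUT_MS = 5000  # each item disappears after 5,000 ms

def classify(is_word, said_yes, rt_ms):
    """Label one trial according to the response matrix in Table 1.

    A missing (timed-out) response is simply scored as incorrect,
    as in the study's procedure.
    """
    if rt_ms is None or rt_ms >= TIMEOUT_MS:
        return "no response (incorrect)"
    if is_word:
        return "hit" if said_yes else "miss"
    return "false alarm" if said_yes else "correct rejection"

# Invented trials: (is_real_word, said_yes, response_time_ms)
trials = [(True, True, 812), (True, False, 1430), (False, True, 950),
          (False, False, 700), (True, True, None)]

labels = [classify(*t) for t in trials]
mean_rt = mean(rt for _, _, rt in trials if rt is not None)
print(labels)
print(f"mean RT = {mean_rt:.0f} ms")
```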

The academic performance of the students was measured by their Grade Point Average (GPA) for the current semester. The GPAs were provided, with the students' permission, by the Dean's Office.

Procedure

Data collection took place during the 13th and 14th weeks of the 15-week second semester. The tests were administered by the first author and collaborating staff. All testing was done in a computer lab. Students first completed the writing tests, followed by the computer-based vocabulary tests. Instructions in Modern Standard Arabic were given in written form and were also read aloud by an Omani research assistant. The testing format was explained and students did a set of practice items for each test. In both tests they were encouraged to work as quickly and as accurately as possible, as both accuracy and response time measures were being collected.

For the vocabulary test, students were warned that clicking "YES" for nonwords would result in lower scores. They were also told that each item would appear on the screen for only 5,000 milliseconds (5 seconds) and then disappear; a missing response was counted as incorrect. There were very few non-responses, comprising less than 0.05% of the total response set. In addition to providing a further window on underlying proficiency, the response time condition discouraged strategic and reflective processing on the part of the students, thus providing a more direct measure of vocabulary knowledge.

Results

The academic writing papers were marked by trained and practicing IELTS examiners, with a random sample of essays (20%) marked by both raters. Interrater reliability was assessed using the intra-class correlation coefficient (ICC), which measures the consistency between the raters' judgments (Field, 2009): ICC = .88, p < .001, 95% confidence interval .67-.96, indicating a high level of consistency across the two raters.
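For readers wishing to reproduce this reliability check, a two-way consistency ICC can be computed from the rater-by-essay score matrix as sketched below. This is a minimal illustration using the standard ICC(3,1) mean-squares formula with invented scores; it is not the authors' analysis script, and the study may have used a different ICC variant:

```python
import numpy as np

def icc_consistency(scores):
    """ICC(3,1): two-way, consistency, single measures.

    scores: (n_essays, k_raters) array of ratings. Computed from the
    two-way ANOVA mean squares:
        (MS_rows - MS_error) / (MS_rows + (k - 1) * MS_error)
    """
    s = np.asarray(scores, dtype=float)
    n, k = s.shape
    grand = s.mean()
    ss_rows = k * ((s.mean(axis=1) - grand) ** 2).sum()   # between-essay
    ss_cols = n * ((s.mean(axis=0) - grand) ** 2).sum()   # between-rater
    ss_err = ((s - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Invented double-marked essays (rows) scored by two raters (columns):
double_marked = [[6, 7], [8, 8], [5, 6], [9, 8], [4, 5], [7, 7]]
print(round(icc_consistency(double_marked), 2))  # ~0.86 for this toy data
```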

Score reliability for the vocabulary measures was assessed using Cronbach's Alpha. Table 2 reports the reliability coefficients for the accuracy and response time measures on the Advanced (Word Test A) and Basic (Word Test B) versions.

Table 2 Cronbach’s Alpha reliability coefficients for advanced and basic word tests

Separate coefficients for the word and nonword items were calculated as the recognition of words and the rejection of nonwords arguably reflect different dimensions of underlying lexical knowledge (Mochida & Harrington, 2006). Combined reliability values were also calculated.
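Cronbach's Alpha itself is straightforward to compute from an item-response matrix. The sketch below uses invented 0/1 accuracy data rather than the study's responses, and shows the standard formula relating item variances to total-score variance:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's Alpha for an (n_participants, k_items) score matrix:
    alpha = k / (k - 1) * (1 - sum(item variances) / var(total scores)).
    """
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Invented 0/1 accuracy data: 5 participants x 4 items. Word and
# nonword items would be passed in separately to obtain the separate
# coefficients reported in Table 2.
acc = [[1, 1, 0, 1], [1, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 1], [1, 1, 1, 0]]
print(round(cronbach_alpha(acc), 2))
```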

The descriptive statistics for the language measures and grade-point averages are presented in Table 3. Mean performance on the Basic test was better than on the Advanced test, for both accuracy and response time, presumably reflecting the relative difficulty of the items in the respective tests.

Table 3 Descriptive statistics (means, standard deviations, range) for advanced and basic word tests, grade point average and IELTS writing scores, N = 70

A paired-samples t-test on the mean accuracy difference between Tests A and B was significant, t(69) = 12.43, p < .001, Cohen's d = 1.22, the latter indicating a strong effect size. The mean response times were log-transformed for all statistical analyses reported here; the response time differences were not significant. False alarm rates for both word tests were comparable to those of pre-faculty English language students in Australia, whose mean false alarm rates were 25% for beginners and 10% for advanced learners (Harrington & Carey, 2009). A small number of students had extremely high false alarm rates and could arguably have been removed as outliers. However, given that a motivation for the study is to assess the effectiveness of the TYN Test in an authentic testing context, these potentially distorting data points were not removed, as they might be in a typical psychology or laboratory study.
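The comparison of the two tests can be reproduced along the following lines. This is a sketch with invented accuracy and response time vectors standing in for the real data; scipy and numpy are our tooling choices, not necessarily the authors':

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
acc_basic = rng.normal(0.85, 0.08, 70)     # invented per-participant accuracy
acc_advanced = rng.normal(0.70, 0.10, 70)

t, p = stats.ttest_rel(acc_basic, acc_advanced)  # paired-samples t-test
diff = acc_basic - acc_advanced
cohens_d = diff.mean() / diff.std(ddof=1)        # d for paired designs

# Response times are log-transformed before analysis, as in the study:
rt_ms = rng.lognormal(mean=7.0, sigma=0.3, size=70)  # invented RTs (ms)
log_rt = np.log(rt_ms)

print(f"t(69) = {t:.2f}, p = {p:.4f}, d = {cohens_d:.2f}")
```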

The strength of association among the word, writing and GPA measures is evaluated by first reporting the bivariate correlations between the measures (see Table 4). Also reported is a composite word measure, which combines each participant's z-scores for accuracy and response time into a single measure in an attempt to provide a more stable measure of underlying word skill (Ackerman & Cianciolo, 2000). The effectiveness of the composite score as a predictor of academic achievement is assessed both in the bivariate correlations and in a regression analysis that evaluates the relative contributions the word and writing measures make in predicting academic performance.

Table 4 Bivariate correlations for language measures and grade-point-average

As is evident in Table 4, all the language measures, with the exception of Advanced Word Accuracy, had a moderately strong correlation with GPA. The lack of correlation between accuracy and response time on the respective tests indicates there was no systematic speed-accuracy trade-off by the participants.

In general, faster and more accurate word recognition skill had a significant correlation with GPA, as did performance on the writing task. The composite scores had a stronger correlation with GPA than the accuracy and response time results for the respective tests. It was also evident that vocabulary skill was a moderately strong predictor of writing outcomes.
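A composite of this kind might be formed as sketched below. The data are invented, and the sign convention, negating the response time z-score so that faster responding raises the composite, is our assumption about how the accuracy and speed dimensions are combined:

```python
import numpy as np

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

# Invented per-participant accuracy and mean log response times:
accuracy = np.array([0.62, 0.81, 0.74, 0.90, 0.55])
log_rt = np.array([7.2, 6.8, 7.0, 6.6, 7.4])

# Lower (faster) response times indicate better skill, so the RT
# z-score is negated before averaging the two dimensions:
composite = (zscore(accuracy) - zscore(log_rt)) / 2
print(np.round(composite, 2))
```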

Regression analyses were also carried out to assess how well the vocabulary and writing measures predicted GPA when the two variables were considered in combination. Given the inter-correlations between vocabulary, writing and GPA, it is not clear whether better vocabulary performance is a by-product of being a better writer or whether better vocabulary skills determine writing performance; the similar correlations with GPA (r = .3 and -.4) may indicate that the two measures tap the same underlying knowledge. Alternatively, the vocabulary and writing scores may make relatively independent contributions to academic achievement. If the former, the two measures would be interchangeable as tests of academic English skill, though with important practical differences between them in terms of administration and scoring. If, instead, the vocabulary (accuracy and response time) and writing measures each account for a substantial amount of additional variance in GPA, this would indicate that the two complement each other in indexing learner proficiency levels. The regression models assess how much overall variance the measures together account for, and the relative contribution of each measure to this amount.

Table 5 reports four analyses, with the Advanced and Basic Word results evaluated separately. For each word test two models were developed. The first assesses the contribution of the word measures to predicting the criterion GPA after the writing measure is entered; the second enters the vocabulary measures first and then the writing measure. The composite scores, which were generated from the raw measures, were not analysed.

Table 5 Hierarchical regression analyses of the advanced word and writing measures with GPA as criterion variable and writing and word measure as predictors
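The two-step structure of these models can be sketched with statsmodels as follows (invented data and variable names; the actual analyses were presumably run in a standard statistics package, given the Field, 2009 reference). Writing is entered first, the word measures are added, and the change in explained variance is tested:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 70
df = pd.DataFrame({
    "writing": rng.normal(6, 1.5, n),        # invented writing scores
    "word_acc": rng.normal(0.75, 0.10, n),   # invented word accuracy
    "word_rt": rng.normal(7.0, 0.3, n),      # invented log RTs
})
# Invented criterion loosely dependent on the predictors:
df["gpa"] = (0.3 * df.writing + 2.0 * df.word_acc
             - 0.5 * df.word_rt + rng.normal(0, 0.8, n))

# Model 1: writing only. Model 2: writing plus the word measures.
m1 = smf.ols("gpa ~ writing", data=df).fit()
m2 = smf.ols("gpa ~ writing + word_acc + word_rt", data=df).fit()

print(f"Step 1 adj. R2 = {m1.rsquared_adj:.3f}")
print(f"Step 2 adj. R2 = {m2.rsquared_adj:.3f}")
f_val, p_val, df_diff = m2.compare_f_test(m1)   # test of the R2 change
print(f"R2 change: F = {f_val:.2f}, p = {p_val:.4f}")
```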

To summarize the regression analyses: as expected from the bivariate correlations, the word and writing measures predicted significant variance in GPA differences. The model based on the Basic word measures accounted for nearly 25% of the GPA variance (total adjusted R2 = .234), while the Advanced word model accounted for only 16% (.162). This is due to both the accuracy and response time results serving to discriminate between GPA differences. Although the word and writing measures have similar correlations with GPA in the bivariate comparisons, writing is the better predictor when the two variables are considered together.

Discussion

The first objective of the study was to assess the ability of Arabic L1 users' written Academic English Proficiency (AEP) to predict academic performance at tertiary education institutions in an expanding-circle context. Results indicate that both components of AEP, academic writing skill and vocabulary knowledge, are good predictors of overall academic performance in this context. The TYN Test is a less sensitive measure than the academic writing test but nonetheless predicts academic performance. These results support research outlining the importance of English proficiency (Yushau & Omar, 2007) and, in particular, vocabulary knowledge as a prerequisite for academic success in English medium programs (Meara et al., 1997; Waring & Nation, 2004; Hsueh-Chao & Nation, 2000).

It is of note that the TYN Test results show a high number of false alarms. As noted above, lower false alarm rates were evident in studies in Anglophone countries. The results of the regression analysis indicate that the TYN Test can be used as a screening tool with fairly comparable effectiveness to a writing task. However, the results here differed noticeably from those obtained in previous studies with English L2 university students in English speaking countries: the mean false alarm rates were much higher and the group means (as a measure of vocabulary size) were lower, with students in this study performing at the level of pre-tertiary students in Australia. It is possible that the results are influenced by the participants' L1 Arabic. Arab students have been shown to encounter greater difficulty with English spelling and word processing than proficiency-equivalent ESL learners from other L1 backgrounds, an effect attributed to the influence of Arabic orthography and literacy practices (Fender, 2008; Ryan & Meara, 1991; Milton, 2009). Performance on the written TYN Test requires knowledge of spelling and word meaning, and it is not clear to what extent the poorer performance on both the words and the nonwords (the latter conforming to English spelling rules) can be reduced to spelling difficulties. This is a question for future research.

This overall higher false alarm rate might have arisen because the instructions were not clear and participants were not aware that they were being penalized for incorrect guesses, or because these were acquiescent responses (Dodorico-McDonald, 2008): random clicks that brought the test to an un-taxing end rather than reflecting participants' knowledge, or lack of knowledge, of the items presented. For improved results the test should be meaningful to the students; for example, we would expect fewer false alarms if the test acted as a gateway to further study or was integrated into courses and contributed towards students' marks. Further research might also look at how test performance interacts with instructional variables such as the time of semester. The comparatively slower response times may indicate that not all students were aware that speed of response was also being measured; a further recommendation of the study is therefore that the instructions be delivered online in a standardized format, with a video demonstration on the computer, to improve their clarity.

The results also show that vocabulary recognition test scores serve as a predictor of written academic English language proficiency. There was a strong correlation between Language MAP test scores and academic writing scores, particularly for Word Test B, which samples the 1K-5K most commonly occurring words that authors such as Nation (1990), Schmitt (2000) and Meara et al. (1997) claim are essential for reading English texts and constitute the minimum knowledge required to begin study in English medium programs. Once again, the findings here are not as strong as the correlations from studies using Language MAP as a placement test for students studying in inner-circle settings, such as pre-Faculty courses in Australia (Harrington & Carey, 2009), but the results are nonetheless significant. The weaker students' vocabulary knowledge, the poorer they are likely to perform both on measures of academic English proficiency, which is fundamental to success in their studies, and in overall academic performance. This highlights the TYN Test's potential for use as an English language proficiency placement tool, or for tracking and monitoring changes in written academic English proficiency among matriculated students for Post-Enrolment Language Assessment (PLA) purposes.

Conclusion

This study contributes to a growing body of research stressing the fundamental importance of L2 English proficiency for achieving success in English medium tertiary education programs (Murray, 2012; Read, 2008; Harrington & Roche, submitted). The results presented here recommend the TYN Test as a useful predictor of both overall academic achievement and written academic English language proficiency for Arabic L1 users, despite concerns raised in the literature concerning the written language processing strategies used by Arabic L1 users of English (Ryan & Meara, 1991; Fender, 2008; Abu Rabia & Seigel, 1995), and in particular their difficulties in reading English electronic texts (Al-Amrani, 2009). The results show that visual word recognition is a good measure of L2 proficiency. The TYN Test is especially attractive given the limited resources needed to administer it and generate results. As for future directions, research is underway assessing the predictive power of the test as a diagnostic for readiness to take IELTS by learners in expanding-circle settings like the one examined here.

Endnotes

^a Of approximately 6,000 living languages, 82 are used in tertiary education, 39 are used in the universities of two or more countries and only 13 are used in three or more (Ammon, 2006, p. 557).

^b For example, research shows that non-English speaking background (NESB) students' English language proficiency correlates positively with academic success in Australian universities (Feast, 2002).

^c The students' perceptions fit well with Lin & Morrison's (2010) vocabulary levels research, which reported that the majority (76.1%) of Hong Kong students at English medium universities do not have sufficient vocabulary to comprehend lectures in English or undertake tertiary study.

Authors' information

Thomas Roche is the Director of Studies of the English Language Centre at Southern Cross University, Australia, an Associate Professor at Sohar University, Sultanate of Oman, and an Associate Research Fellow in the School of Languages & Comparative Cultural Studies at the University of Queensland, Australia. His research interests include individual difference in foreign language learning and language testing.

Michael Harrington is a Senior Lecturer in Second Language Acquisition in the School of Languages & Comparative Cultural Studies at the University of Queensland, Australia. He has published in areas including lexical processing and the measurement of L2 vocabulary skills.

References

1. Abu Rabia S, Seigel LS: Different orthographies, different context effects: The effects of Arabic sentence context in skilled and poor readers. Reading Psychology 1995, 16:1-19. doi:10.1080/0270271950160101
2. Ackerman PL, Cianciolo AT: Cognitive, perceptual-speed, and psychomotor determinants of individual skill acquisition. Journal of Experimental Psychology: Applied 2000, 6:259-290.
3. Alderson JC, Banerjee J: Language testing and assessment (Part 1): State of the art review. Language Testing 2001, 18:213-236.
4. Al-Amrani S: Strategies for reading on-line and printed texts by Omani EAP students. In Orientations in language learning and translation. Edited by Roche T. Muscat, Oman: Al Falaj Press; 2009:41-60.
5. Ammon U: The language of tertiary education. In Encyclopaedia of languages & linguistics. Edited by Brown EK. Amsterdam: Elsevier; 2006:556-559.
6. Bashir S: Trends in international trade in higher education: Implications and options for developing countries. Education Working Papers Series. Washington: World Bank; 2007.
7. Beeckmans R, Eyckmans J, Janssens V, Dufranne M, Van de Velde H: Examining the yes-no vocabulary test: Some methodological issues in theory and practice. Language Testing 2001, 18:235-274.
8. Cambridge University Press: Cambridge IELTS 4: Examination papers from University of Cambridge ESOL Examinations. Cambridge, UK: Cambridge University Press; 2005.
9. Cameron L: Measuring vocabulary size in English as an additional language. Language Teaching Research 2002, 6:145-173. doi:10.1191/1362168802lr103oa
10. Chapelle C: L2 vocabulary acquisition theory: The role of inference, dependability and generalizability in assessment. In Inference and generalizability in applied linguistics: Multiple perspectives. Edited by Chalhoub-Deville M, Chapelle CA, Duff P. Amsterdam: John Benjamins; 2006:47-64.
11. Chen KY: The impact of EFL students' vocabulary breadth of knowledge on literal reading comprehension. Asian EFL Journal 2011, 51. http://www.academia.edu/2346635/The_Impact_of_Vocabulary_Knowledge_Level_on_EFL_Reading_Comprehension
12. CIA: World factbook: Middle East: Oman. 2011. https://www.cia.gov/library/publications/the-world-factbook/geos/mu.html
13. Dodorico-McDonald J: Measuring personality constructs: The advantages and disadvantages of self-reports, informant reports and behavioural assessments. Enquire 2008, 1(1). http://www.nottingham.ac.uk/shared/shared_enquire/PDFs/Dodorico_J.pdf
14. Ebel RL (Ed): Essentials of educational measurement. Englewood Cliffs, NJ: Prentice-Hall; 1979.
15. Evans S, Morrison B: Meeting the challenges of English-medium higher education: The first-year experience in Hong Kong. English for Specific Purposes 2011, 30(3):198-208. doi:10.1016/j.esp.2011.01.001
16. Eyckmans J: Measuring receptive vocabulary size. Utrecht: LOT; 2004.
17. Feast V: The impact of IELTS scores on performance at university. International Education Journal 2002, 3(4):70-85.
18. Fender M: English word recognition and word integration skills of native Arabic- and Japanese-speaking learners of English as a second language. Applied Psycholinguistics 2003, 24:289-315.
19. Fender M: Spelling knowledge and reading development: Insights from Arab ESL learners. Reading in a Foreign Language 2008, 20(1):19-42.
20. Field A: Discovering statistics using SPSS. London, UK: SAGE Publications; 2009.
21. Harrington M: The lexical decision task as a measure of L2 lexical proficiency. EUROSLA Yearbook 2006, 6:147-168. doi:10.1075/eurosla.6.10har
22. Harrington M: The coefficient of variation as an index of L2 lexical processing skill. University of Queensland Working Papers in Linguistics. Brisbane, Australia: School of English, Media Studies and Art History, The University of Queensland; 2007.
23. Harrington M, Carey M: The on-line Yes/No test as a placement tool. System 2009, 37:614-626. doi:10.1016/j.system.2009.09.006
24. Harrington M, Roche T: Identifying academically at-risk students at an English-medium university in Oman: Post-enrolment language assessment in an English-as-a-foreign-language setting. Submitted.
25. Hellekjaer GO: Lecture comprehension in English-medium higher education. Hermes 2010, 45:11-34.
26. Hsueh-Chao MH, Nation P: Unknown vocabulary density and reading comprehension. Reading in a Foreign Language 2000, 13:403-430.
27. Huibregtse I, Admiraal W, Meara P: Scores on a yes-no vocabulary test: Correction for guessing and response style. Language Testing 2002, 19:227-245. doi:10.1191/0265532202lt229oa
28. Kachru BB: Standards, codification and sociolinguistic realism: The English language in the outer circle. In English in the world: Teaching and learning the language and literatures. Edited by Quirk R, Widdowson HG. Cambridge, UK: Cambridge University Press; 1985:11-36.
29. Kırkgöz Y: Motivation and student perception of studying in an English-medium university. Journal of Language and Linguistic Studies 2005, 1(1):101-123.
30. Lewis M: Implementing the lexical approach. Boston: Thomson and Heinle; 2002.
31. Lin LHF, Morrison B: The impact of the medium of instruction in Hong Kong secondary schools on tertiary students' vocabulary. Journal of English for Academic Purposes 2010, 9(4):255-266. doi:10.1016/j.jeap.2010.09.002
32. McCarthy M: Vocabulary. Oxford: Oxford University Press; 2003.
33. Meara P, Buxton B: An alternative multiple-choice vocabulary test. Language Testing 1987, 4:142-145. doi:10.1177/026553228700400202
34. Meara P, Jones G: Vocabulary size as a placement indicator. In Applied linguistics in society. Edited by Grunwell P. London, UK: CILT; 1988.
35. Meara P, Lightbown P, Halter R: Classrooms as lexical environments. Language Teaching Research 1997, 1:28-47. doi:10.1177/136216889700100103
36. Miller P, Peleg O: Doomed to read in a second language: Implications for learning. Journal of Psycholinguistic Research 2010, 39(1):51-65. doi:10.1007/s10936-009-9125-3
37. Milton J: Measuring second language vocabulary acquisition. Bristol, England: Multilingual Matters; 2009.
38. Mochida A, Harrington M: The Yes/No test as a measure of receptive vocabulary knowledge. Language Testing 2006, 23:73-98. doi:10.1191/0265532206lt321oa
39. Murray N: Ten "Good Practice Principles", ten key questions: Considerations in addressing the English language needs of higher education students. Higher Education Research & Development 2012, 31(2):233-246. doi:10.1080/07294360.2011.555389
40. Nassaji H: Higher-level and lower-level text processing skills in advanced ESL reading comprehension. The Modern Language Journal 2003, 87:261-276. doi:10.1111/1540-4781.00189
41. Nation P: Teaching and learning vocabulary. Boston: Heinle & Heinle; 1990.
42. Nation P: How large a vocabulary is needed for reading and listening? The Canadian Modern Language Review 2006, 63(1):59-81.
43. Nation P, Newton J: Teaching vocabulary. In Second language vocabulary acquisition. Edited by Coady J, Huckin T. Cambridge, UK: Cambridge University Press; 1997.
44. Nation P, Waring R: Vocabulary size, text coverage and word lists. In Vocabulary: Description, acquisition and pedagogy. Edited by Schmitt N, McCarthy M. Cambridge, UK: Cambridge University Press; 1997.
45. Qian DD: Assessing the roles of depth and breadth of vocabulary knowledge in reading comprehension. Canadian Modern Language Review 1999, 56(2):282-308. doi:10.3138/cmlr.56.2.282
46. Read J: Identifying academic language needs through diagnostic assessment. Journal of English for Academic Purposes 2008, 7(3):180-190. doi:10.1016/j.jeap.2008.02.001
47. Read J, Chapelle CA: A framework for second language vocabulary assessment. Language Testing 2001, 18(1):1-32.
48. Roche T: Introduction. In Orientations in language learning and translation. Edited by Roche T. Muscat, Oman: Al Falaj Press; 2009:7-8.
49. Ryan A, Meara P: The case of the invisible vowels: Arabic speakers reading English words. Reading in a Foreign Language 1991, 7:531-540.
50. Schmitt N: Vocabulary in language teaching. Cambridge, UK: Cambridge University Press; 2000.
51. Schmitt N, Schmitt D, Clapham C: Developing and exploring the behaviour of two new versions of the Vocabulary Levels Test. Language Testing 2001, 18:55-88.
52. Segalowitz N, Segalowitz S: Skilled performance, practice, and the differentiation of speed-up from automatization effects: Evidence from second language word recognition. Applied Psycholinguistics 1993, 14:369-385. doi:10.1017/S0142716400010845
53. Segalowitz SJ, Segalowitz NS, Wood AG: Assessing the development of automaticity in second language word recognition. Applied Psycholinguistics 1998, 19:53-67. doi:10.1017/S0142716400010572
54. Waring R, Nation ISP: Second language reading and incidental vocabulary learning. Angles on the English Speaking World 2004, 4:97-110.
55. Willis D: The lexical syllabus: A new approach to language teaching. London, UK: Collins COBUILD; 1990.
56. Yushau B, Omar MH: Preparatory year program courses as predictors of first calculus course grade. Mathematics and Computer Education 2007, 41(2):92-108.


Acknowledgements

This research was supported by a grant from the Omani Research Council [Grant number ORG SU HER 10 003]. The authors would like to thank the anonymous reviewer for useful comments and suggestions.


Corresponding author

Correspondence to Thomas Roche.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contribution

Both authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Roche, T., Harrington, M. Recognition vocabulary knowledge as a predictor of academic performance in an English as a foreign language setting. Language Testing in Asia 3, 12 (2013). https://doi.org/10.1186/2229-0443-3-12
