Dr. Dan Dewey, Department of Linguistics and English Language
We were able to achieve most of the goals established in the MEG proposal we submitted in Fall 2008 (for a grant beginning Winter 2009). We did adjust the project and the use of funds slightly to adapt to changing needs and to fit the personnel available to accomplish the tasks. For example, an outstanding graduate student in Linguistics was able to team up with several undergraduates to create an Elicited Imitation (EI) test in Japanese; as a result, much of our development occurred in Japanese.
Overall our efforts were very successful, leading to 5 papers accepted for publication and others in preparation, 8 conference presentations, and work on several MA theses and senior projects.
In this report we summarize our results, largely in the form of lists in order to keep the report short and simple.
1. Evaluation of how well the academic objectives of the proposal were met
a. Refine and evaluate the Elicited Imitation (EI) test. Achieved through the creation of new English versions of the test (Form F and Form I) and the creation of Japanese and Spanish versions. Related manuscripts and presentations include Matsushita et al. (2010), Christensen et al. (2010), and McGhee et al. (2009).
b. Cull items to make the EI test more reliable. Achieved in the process of creating the new versions. Related manuscripts and presentations include Lonsdale et al. (2009), Graham et al. (in press), and Christensen et al. (2010).
c. Correlational analysis. Completed and presented at the Conference of the American Association for Applied Linguistics (Graham et al., 2009).
d. Improve scoring methods. Largely completed. Related presentations and papers include Son (2010), Lonsdale et al. (2009), and Son et al. (in preparation).
e. Addition of a fluency component to the EI test. Completed for Japanese; will be part of an MA thesis (Matsushita, in preparation). Fluency components will be added later for other languages.
2. Evaluation of the mentoring environment
a. Held weekly meetings successfully all terms (including Spring and Summer).
b. Met regularly with students outside of weekly meetings.
c. Co‐authored 5 papers and 8 conference presentations with students.
3. List of students who participated and the academic deliverables they have produced or are anticipated to produce
a. Students Involved
Undergraduate: Brian Mortenson, Christian Weibell, Dan Allongo, Darron Johnson, Kathy Washburn, Matt LeGare, Malena Weitze, Nate Tylka, Parker Heiner, Ross Hendrickson, William Wilson, Ruth Schnebley, Jason Housley, Paul Handy, Erika Sato, Kendon Kurzer, Rob Felt
Graduate: Aaron Johnson, Beth Anne Schnebley, Brian Mortenson, Carl Christensen (started as undergraduate), Jeremiah McGhee, Meghan Eckerson (started as undergraduate), Joshua Caldwell, Minhye Son, Kevin Cook, Benjamin Millard
b. Products (see attached bibliography for details): 5 papers accepted for publication, 8 conference presentations, 1 MA thesis completed, and 4 MA theses in preparation (with data collected during the MEG grant period as part of the funded projects).
4. Description of the results/findings of the project
We have further developed Elicited Imitation into a test that is now being prepared for delivery both on and off campus for a variety of purposes. We are currently building tools so that this test of English speaking proficiency can be administered at any location via a web browser and scored automatically via speech recognition. During the MEG period we refined our measure and found correlations between human and machine scoring ranging from r = .80 to r = .95. We also found strong correlations between our EI measure and standardized speaking proficiency tests (typically r = .80 or above). Two of the largest projects supported directly by MEG funds involved (1) creating and refining a Japanese version of the EI test and (2) checking the reliability of scoring by a range of native and non-native speakers. For the first project, we now have a solid Japanese EI measure (see Matsushita et al., 2010, for details) and are collecting data to validate a version that includes a fluency measure. For the second, extensive scoring showed no significant variation in scores based on raters' native language or second language background. This is good news because it indicates that the EI test can be scored reliably by speakers of English from any language background; in some cases human scoring is still desirable, and it is necessary where automatic speech recognition is not possible.
5. Description of how the budget was spent (all figures rounded to the nearest hundred dollars)
• $16,200 in student wages (split approximately evenly between graduate and undergraduate students, though undergraduates worked more hours than graduate students).
• $1,900 in supplies, including gift certificates for research participants, printing fees for conference posters, and student membership and registration fees.
• $900 in travel (less than expected, since funds from external grants and internal student awards were used instead).
Bibliography of Works Resulting from MEG Related Work Winter 2009-Fall 2010
Papers Accepted for Publication
Christensen, C., Hendrickson, R., & Lonsdale, D. (2010). Principled construction of elicited imitation tests. In N. Calzolari, K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, M. Rosner, & D. Tapias (Eds.), Proceedings of the 7th Conference on International Language Resources and Evaluation (LREC '10) (pp. 233-238). Valletta, Malta: European Language Resources Association (ELRA). ISBN 2-9517408-6-7.
Matsushita, H., Dewey, D. P., & Lonsdale, D. (2010). Japanese elicited imitation: ASR-based oral proficiency test and optimal item creation. In S. Ishikawa (Ed.), Proceedings of the 6th International Conference of the Society of Information and Communicative Technologies in Analysis, Teaching and Learning of Language (ICTATLL). Tokyo: Kobe University Press.
Weitze, M., McGhee, J., Dewey, D. P., Graham, C. R., & Eggett, D. L. (2010). Variability in L2 acquisition across L1 backgrounds. Manuscript accepted pending revisions for the Proceedings of the 2009 Second Language Research Forum. Peer reviewed.
Graham, C. R., McGhee, J., & Millard, B. (2009). The role of lexical choice in elicited imitation item difficulty. In M. T. Prior, Y. Watanabe, & S.-K. Lee (Eds.), Proceedings of the Second Language Research Forum (SLRF) 2008. Somerville, MA: Cascadilla Press.
Hendrickson, R., Eckerson, M., Johnson, A., & McGhee, J. (2009). What makes an item difficult? A syntactic, lexical, and morphological study of elicited imitation test items. In M. T. Prior, Y. Watanabe, & S.-K. Lee (Eds.), Proceedings of the Second Language Research Forum, 2008. Somerville, MA: Cascadilla Press.
Presentations at Professional Conferences
Matsushita, H., Dewey, D. P., & Lonsdale, D. (2010, September). Japanese elicited imitation: ASR-based oral proficiency test and optimal item creation. Presentation given at the International Conference of the Society of Information and Communicative Technologies in Analysis, Teaching and Learning of Language (ICTATLL), Kyoto, Japan.
Dewey, D. P., & Matsushita, H. (2010, April). Effects of utterance speed, timing control, and repeated exposure on elicited imitation performance in Japanese as a Second Language. Presentation given at the Annual Conference of the Language Testing Research Colloquium (LTRC), Cambridge, England.
Matsushita, H., & LeGare, M. (2010). Elicited imitation as a measure of Japanese L2 proficiency. Presentation given at the Conference of the Association of Teachers of Japanese, Philadelphia, PA.
Weitze, M., McGhee, J., Graham, C. R., & Dewey, D. P. (2009, October). Variability in L2 Acquisition across L1 language families. Presentation given at the Second Language Research Forum, East Lansing, MI.
Lonsdale, D., Dewey, D. P., McGhee, J., Hendrickson, R., & Johnson, A. (2009). Methods of scoring elicited imitation items: An empirical study. Paper presented at the 2009 Conference of the American Association for Applied Linguistics (AAAL), Denver, CO.
Graham, C. R., Millard, B., Eckerson, M., & Christensen, C. (2009). Approximating oral language proficiency using elicited imitation. Poster presented at the 2009 Conference of the American Association for Applied Linguistics (AAAL), Denver, CO.
Lonsdale, D., & Hendrickson, R. (2009). The use of NLP technologies to engineer oral proficiency test items. Presentation given at the Pre-CALICO Workshop on Automatic Analysis of Learner Language.
McGhee, J., Johnson, A., Hendrickson, R., Eckerson, M., Weitze, M., Millard, B., Graham, C. R., Lonsdale, D., & Dewey, D. P. (2009). Improving automated oral testing: Identifying features and enhancing speech recognition. Presentation given at the Pre-CALICO Workshop on Automatic Analysis of Learner Language.
MA Theses Completed and in Preparation
Son, M. (2010). Examining rater bias: An evaluation of possible factors influencing elicited imitation ratings. Unpublished master's thesis, Brigham Young University.
Millard, B. (in preparation). Elicited imitation as a measure of oral proficiency in French. Unpublished master's thesis, Brigham Young University.
McGhee, J. (in preparation). Automatic speech recognition and elicited imitation evaluation. Unpublished master's thesis, Brigham Young University.
Matsushita, H. (in preparation). Automatic speech recognition and corpus-based evaluation of elicited imitation results in Japanese as a second language. Unpublished master's thesis, Brigham Young University.
Tsuchiya, S. (in preparation). Measuring the acquisition of Japanese as a second language using elicited imitation. Unpublished master's thesis, Brigham Young University.
Manuscripts in Preparation or Submitted for Review
Son, M., Dewey, D. P., & McGhee, J. (in review). Examining Rater Bias in Elicited Imitation Scoring: Influence of L1 and L2 Background. Manuscript submitted for publication.