The Routledge International Handbook of Automated Essay Evaluation (Routledge International Handbooks)

  • Binding: Hardcover / 626 pp.
  • Language: English
  • ISBN: 9781032502564
  • DDC classification: 371.2720285

Full Description

The Routledge International Handbook of Automated Essay Evaluation (AEE) is a definitive guide at the intersection of automation, artificial intelligence, and education. This volume encapsulates the ongoing advancement of AEE, reflecting its application in both large-scale and classroom-based assessments to support teaching and learning endeavors.

It presents a comprehensive overview of AEE's current applications, including its extension into reading, speech, mathematics, and writing research; modern automated feedback systems; critical issues in automated evaluation such as psychometrics, fairness, bias, transparency, and validity; and the technological innovations that fuel current and future developments in this field. As AEE approaches a tipping point of global implementation, this Handbook stands as an essential resource, advocating for the conscientious adoption of AEE tools to enhance educational practices ethically. The Handbook will benefit readers by equipping them with the knowledge to thoughtfully integrate AEE, thereby enriching educational assessment, teaching, and learning worldwide.

Aimed at researchers, educators, AEE developers, and policymakers, the Handbook is poised not only to chart the current landscape but also to stimulate scholarly discourse, define and inform best practices, and propel and guide future innovations.

Contents

Foreword
Jill Burstein

Section 1: Introduction to AEE and Modern AEE Systems

1. Introduction to Automated Evaluation
Mark D. Shermis and Joshua Wilson

2. Automated Essay Evaluation at Scale: Hybrid Automated Scoring/Hand Scoring in the Summative Assessment Program
Corey Palermo and Arianto Wibowo

3. Exploration of the Stacking Ensemble Learning Algorithm for Automated Scoring of Constructed-Response Items in Reading Assessment
Hong Jiao, Shuangshuang Xu, and Manqian Liao

4. Scoring Essays Written in Persian Using a Transformer-Based Model: Implications for Multilingual AES
Tahereh Firoozi and Mark J. Gierl

5. SmartWriting-Mandarin: An Automated Essay Scoring System for Chinese Foreign Language Learners
Tao-Hsing Chang and Yao-Ting Sung

6. NLP Application in the Hebrew Language for Assessment and Learning
Yoav Cohen, Anat Ben-Simon, Anat Bar-Siman-Tov, Yona Doleve, Tzur Karelitz, and Effi Levi

Section 2: Expanding Automated Evaluation: Reading, Speech, Mathematics, and Writing Research

7. Automated Scoring for NAEP Short-Form Constructed Responses in Reading
Mark D. Shermis

8. Automated Scoring and Feedback for Spoken Language
Klaus Zechner and Ching-Ni Hsieh

9. Automated Scoring of Math Constructed-Response Items
Scott Hellman, Alejandro Andrade, Kyle Habermehl, Alicia Bouy, and Lee Becker

10. We Write Automated Scoring: Using ChatGPT for Scoring in Large-Scale Writing Research Projects
Kausalai (Kay) Wijekumar, Debra McKeown, Shuai Zhang, Pui-Wa Lei, Nikolaus Hruska, and Pablo Pirnay-Dummer

Section 3: Innovations in Automated Writing Evaluation

11. Exploring the Role of Automated Writing Evaluation as a Formative Assessment Tool Supporting Self-Regulated Learning in Writing
Joshua Wilson and Charles MacArthur

12. Supporting Students' Text-Based Evidence Use via Formative Automated Writing and Revision Assessment
Rip Correnti, Elaine Lin Wang, Lindsay Claire Matsumura, Diane Litman, Zhexiong Liu, and Tianwen Li

13. The Use of AWE in Non-English Majors: Student Responses to Automated Feedback and the Impact of Feedback Accuracy
Aysel Saricaoglu and Zeynep Bilki

14. Relationships Between Middle-School Teachers' Perceptions and Application of Automated Writing Evaluation and Student Performance
Amanda Delgado, Joshua Wilson, Corey Palermo, Tania M. Cruz Cordero, Matthew C. Myers, Halley Eacker, Andrew Potter, Jessica Coles, and Saimou Zhang

15. Automated Writing Trait Analysis
Paul Deane

16. Advances in Automating Feedback for Argumentative Writing: Feedback Prize as a Case Study
Perpetual Baffour and Scott Crossley

17. Automated Feedback in Formative Assessment
Harry A. Layman

Section 4: Factors Affecting the Performance of Automated Evaluation

18. Using Automated Scoring to Support Rating Quality Analyses for Human Raters
Stefanie A. Wind

19. Calibrating and Evaluating Automated Scoring Engines and Human Raters over Time Using Measurement Models
Stefanie A. Wind and Yangmeng Xu

20. AI Scoring and Writing Fairness
Mark D. Shermis

21. Automating Bias in Writing Evaluation: Sources, Barriers, and Recommendations
Maria Goldshtein, Amin G. Alhashim, and Rod D. Roscoe

22. Explainable AI and AWE: Balancing Tensions between Transparency and Predictive Accuracy
David Boulanger and Vivekanandan Suresh Kumar

23. Validity Argument Roadmap for Automated Scoring
David Dorsey, Hillary Michaels, and Steve Ferrara

Section 5: Technological Innovations: "Where Do We Go From Here?"

24. Redesigning Automated Scoring Engines to Include Deep Learning Models
Sue Lottridge, Chris Ormerod, and Milan Patel

25. Automated Short-Response Scoring for Automated Item Generation in Science Assessments
Jinnie Shin and Mark J. Gierl

26. Latent Dirichlet Allocation of Constructed Responses
Jordan M. Wheeler, Shiyu Wang, and Allan S. Cohen

27. Computational Language as a Window into Cognitive Functioning
Peter W. Foltz and Chelsea Chandler

28. Expanding AWE to Incorporate Reading and Writing Evaluation
Laura K. Allen, Püren Öncel, and Lauren E. Flynn

29. The Two U's in the Future of Automated Essay Evaluation: Universal Access and User-Centered Design
Danielle S. McNamara and Andrew Potter
