Mastering Assessment (15-Volume Set) : A Self-Service System for Educators (2 PCK SLP)


  • Orders are not currently being accepted through the web store. ⇒ Search for used copies
  • Binding: Paperback
  • Language: ENG
  • Product code: 9780132732918
  • DDC classification: 370

Full Description


Mastering Assessment (hereafter referred to as MA) is a set of 15 booklets intended to be the grist for a wide variety of professional development programs focused on educational assessment. Each of the MA booklets was deliberately written so that busy educators can read it in one or two sittings. The resulting brevity of the MA booklets, coupled with their being provided as separate documents, gives users of the MA system considerable latitude in deciding how best to use the booklets. A Facilitator's Guide is available to help educators use the 15 booklets in their professional development programs and can be downloaded at no additional charge from Pearson's Instructor Resource Center.

The Mastering Assessment boxset includes:

  • Appropriate and Inappropriate Tests for Evaluating Schools
  • Assessing Students' Affect
  • Assessing Students with Disabilities
  • Assessment Bias: How to Banish It
  • Classroom Evidence of Successful Teaching
  • College Entrance Examinations: The SAT and the ACT
  • Constructed-Response Tests: Building and Bettering
  • How Testing Can Help Teaching
  • Interpreting the Results of Large-Scale Assessments
  • Portfolio Assessment and Performance Testing
  • Reliability: What Is It and Is It Necessary?
  • Selected-Response Tests: Building and Bettering
  • The Role of Rubrics in Testing and Teaching
  • Test Preparation: Sensible or Sordid?
  • Validity: Assessment's Cornerstone

Contents

Assessing Students' Affect
What Is Affect?; One Coin in Curriculum's 3-Coin Fountain; Potentially Assessable Affective Targets; Why Assess Affect?; A Three-Part Strategy for Assessing Students' Affect; Self-Report Inventories; Anonymity: An Imperative; Four Anonymity-Enhancement Tactics; Group-Focused Inferences Only; Building Your Own Affective Inventories; Likert Affective Inventories; Multifocus Affective Inventories; Confidence Inventories

Appropriate and Inappropriate Tests for Evaluating Schools
The Emergence of Test-Based Accountability; A Source of National Pride; The Arrival of ESEA; A Profession Sleeps; What Can Be Done?; Two Types of Instructionally Insensitive Tests; Traditionally Constructed Standardized Achievement Tests; In Pursuit of Score-Spread; Linking Items to Suitably Spread Variables; Standards-Based Tests Built for a Particular State; Instructionally Sensitive Accountability Tests; A Manageable Number of Extraordinarily Significant Curricular Aims; Succinct, Teacher-Palatable Assessment Descriptions; Reports for Each Curricular Aim for Individual Students; Instructionally Meaningful Reports; A Continuum of Instructional Sensitivity

College Entrance Examinations: The SAT and the ACT
The Role of College Entrance Tests; A Fixed-Quota Quandary; Predictive Power; A Crucial Insight; Plain Talk about the SAT and the ACT; Major Differences and Similarities: An Overview; The SAT: Background and Description; The ACT: Background and Description; Mission-Governed Test Making

Interpreting the Results of Large-Scale Assessments
What Makes a Test Standardized?; Score Interpretation; Two Interpretive Frameworks; Sometimes a Choice of Interpretations; Percentiles; Grade-Equivalent Scores; Scale Scores; Comparing Three Score-Interpretation Methods; Accuracy Estimates

Validity: Assessment's Cornerstone
What Is Assessment Validity?; Score-Based Inferences; Words and Meanings; Validity Evidence; Content-Related Evidence of Validity; Webb's Alignment Approach; Content-Related Evidence of Validity for Large-Scale Tests; Content-Related Evidence of Validity for Classroom Tests; Criterion-Related Evidence of Validity; Construct-Related Evidence of Validity

Constructed-Response Tests: Building and Bettering
Payoffs and Perils of Constructed-Response Items; Payoffs; Perils; Rules for Item Generation; General Item-Development Commandments; Short-Answer Items; Essay Items; Bettering Constructed-Response Items; Scoring Responses to Essay Items

How Testing Can Help Teaching
High-Stakes Tests; Test-Influenced Instructional Decisions; Preassessment; En Route Assessment and a Potentially Potent Process; Postassessment; Too Much Time and Too Much Trouble?; Grain Size; How Many Suitably Sized Curricular Aims?; Tests as Curricular Clarifiers

Selected-Response Tests: Building and Bettering
Payoffs and Perils of Selected-Response Items; Efficiency and Coverage; Overbooking on Memory-Focused Items; Rules for Item Generation; General Test-Development Commandments; Binary-Choice Items; Matching Items; Multiple-Choice Items; Improving Selected-Response Items; Judgmental-Based Improvements; Empirically Based Improvements

Assessment Bias: How to Banish It
The Nature of Assessment Bias; Offensiveness; Unfair Penalization; Inference Distortion; Disparate Impact: A Clue, Not a Verdict; Three Common Sources of Assessment Bias; Racial/Ethnic Bias; Gender Bias; Socioeconomic Bias, Assessment's Closeted Skeleton; Bias Detection; Judgmental Approaches; Empirical Approaches

Test Preparation: Sensible or Sordid?
Forensics of Fraud; Long-Standing Pressures to Raise Test Scores; Malevolence or Ignorance?; Tawdry Accountability Tests; The Professional Ethics Guideline; The Educational Defensibility Guideline; Common Test-Preparation Activities; Teaching to the Test

Assessing Students with Disabilities
Must Students with Disabilities Be Assessed on the Same Curricular Aims as Other Students?; A Mini-History of Pertinent Federal Law; Identical Curricular Aims; Federal Law and Courtroom Rulings; Accommodations; Not Altering What's Being Measured; The CCSSO Accommodations Manual; Appropriate Expectations for Students with Disabilities; Reducing Expectations; Unaltered Expectations; A Personal Opinion

Reliability: What Is It and Is It Necessary?
Consistency, Consistency, Consistency; Reliability and Validity; Categories of Reliability Evidence; Score Consistency and Classification Consistency; Correlation-Based Reliability; Score Consistency; Classification Consistency; Stability Reliability; Alternate-Form Reliability; Internal Consistency Reliability; The Standard Error of Measurement: A Terrific Tool for Teachers

Portfolio Assessment and Performance Testing
Portfolio Assessment; Performance Tests; Performance Assessment Defined; Task Identification; Two Qualitative Concerns; A Capsule Judgment About Portfolio Assessment and Performance Tests

The Role of Rubrics in Testing and Teaching
What's in a Name?; When to Use a Rubric?; Why Use a Rubric?; What's in a Rubric?; Evaluative Criteria; Quality Distinctions for the Evaluative Criteria; Application Strategy; An Unwarranted Reverence for Rubrics; Rubrics: The Rancid and the Rapturous; Hypergeneral Rubrics; Task-Specific Rubrics; Skill-Focused Rubrics; Judging Rubrics; Varied Formats, Qualitative Differences; A Rubric to Evaluate Rubrics

Classroom Evidence of Successful Teaching
A Professional's Responsibility; Countering Flawed School Appraisals; Cross-Sectional versus Longitudinal Evaluation Designs; Evidence-Enhancing Evaluative Procedures; The Importance of Pretesting Students; Blind Scoring; Nonpartisan Scoring; The Split-and-Switch Design; Implementing a Split-and-Switch Design; Strengths and Cautions
