
IDE 641 - Techniques in Educational Evaluation
IDE 641 isn’t your average educational evaluation course; it doesn’t play in hypotheticals or live in academic abstraction. This course builds evaluators, not spectators. It drills into formative and summative evaluation as tools, not theory. By the end, you don’t just talk about evaluation; you execute it with clarity, credibility, and purpose.

From day one, the course throws you into a deep dive: client relationships, data collection, ethics, and evaluation management. The structure forces you to shift between individual grit and team dynamics. You’ll plan, implement, and report evaluations that mirror real-life complexities, not case studies dressed up as practice. No busywork here. Every assignment has weight and consequence, and the grading reflects that. If you slack, the syllabus doesn’t hand-hold. If you show up with effort and thought, it’ll show in your work and your final grade.

You’ll navigate digital environments, assess interactive systems, and learn how to deliver feedback that doesn’t waste anyone’s time. The class uses Zoom meetups to build community: tight, practical circles, not forced group therapy. And with on-site meetings for the military cohort, there’s a clear intention: real-world relevance over academic theater.

The heartbeat of IDE 641 is one word: application. Whether you’re dissecting a mobile app’s instructional efficacy or drafting a summative evaluation plan rooted in logic models, you’re sharpening skills that matter far beyond Blackboard. Dr. Cho lays out expectations with zero fluff: collaborate, communicate, and commit. You’re expected to show up with questions, clarity, and a professional mindset, because in the field, that’s the baseline.

IDE 641 teaches you how to think like an evaluator and work like one. It trains you to be deliberate, strategic, and critical. It doesn’t sugarcoat the workload. But if you’re ready to put in the hours and sharpen your edge, it pays off in skills you’ll actually use.
Course Overall Grade: A
Purpose and Direction
My e-portfolio on Liela Shadmani’s IDE-641 page captures the shift from learning about evaluation to becoming the evaluator. This course didn’t just teach the difference between formative and summative evaluation; it demanded that I apply both, with precision and purpose. Every artifact on this page reflects how I learned to assess instructional tools not by assumption but through structured inquiry, stakeholder alignment, and hands-on review. I moved from theoretical understanding to evidence-based decision-making: running expert reviews, conducting user evaluations, and drafting plans that mirror the rigor of real-world program analysis. From the first scavenger hunt to the final summative plan, this portfolio documents my evolution: data became more than numbers, and evaluation became a tool for integrity, improvement, and impact.
Outlining the Purpose for Each Section
Formative Evaluation Report: Soft Skills for U.S. Army NCOs
This report strips away the fluff and gets to the point: soft skills aren’t optional for Army NCOs; they’re operational necessities. Built on an edX course after JKO failed to deliver, this formative evaluation dissects the development, delivery, and effectiveness of a civilian-origin soft skills course repurposed for Army use. The team didn’t just evaluate content; they pressure-tested it. Expert and user feedback drove revisions that tackled everything from engagement issues to military relevancy. The report reads like a diagnostic scan, identifying where the course hits, where it misses, and what it needs to serve Soldiers better. The real power of this document lies in how it reframes soft skills as mission-critical assets and offers a blueprint for integrating them into Army leadership pipelines.
Summative Evaluation Report: JKO Course Soft Skills for U.S. Army NCOs
This report steps into the big leagues, measuring outcomes, not just intentions. Using rigorous data collection, Team L3M0N8 assessed the course’s real-world impact on communication, emotional intelligence, adaptability, and conflict resolution. They didn’t just collect surveys; they ran pre- and post-tests, stakeholder interviews, and deep dives into behavioral changes on the job. The takeaway? The course works, but not without friction. This summative evaluation offers actionable insights for TRADOC, NCOLCoE, and decision-makers looking to invest in meaningful leadership development. The evaluation matrix is methodical. The reflections are raw. This isn’t just a report; it’s a call to reimagine how the Army defines readiness in the human domain.
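To make the pre/post comparison concrete, here is a minimal Python sketch of how gain scores across the four skill areas might be rolled up. The category names come from the report, but the scores, group size, and simple averaging approach are hypothetical placeholders, not Team L3M0N8’s actual data or method.

```python
# Hypothetical sketch only: illustrative pre/post scores for the four skill
# areas named in the summative report, not the team's actual data.
from statistics import mean

pre = {
    "communication":          [62, 70, 58, 75],
    "emotional_intelligence": [55, 64, 60, 68],
    "adaptability":           [66, 59, 72, 61],
    "conflict_resolution":    [50, 57, 63, 54],
}
post = {
    "communication":          [78, 82, 74, 85],
    "emotional_intelligence": [70, 75, 69, 77],
    "adaptability":           [74, 71, 80, 70],
    "conflict_resolution":    [68, 72, 75, 66],
}

# Compare average pre- and post-test scores and report the gain per skill.
for skill in pre:
    pre_avg, post_avg = mean(pre[skill]), mean(post[skill])
    print(f"{skill}: {pre_avg:.1f} -> {post_avg:.1f} (gain {post_avg - pre_avg:+.1f})")
```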
Website Evaluation: PBS LearningMedia and CK-12
This isn’t a surface-level review; it’s an autopsy. The team dissects PBS LearningMedia and CK-12 with surgical precision, using a refined 7-point rubric adapted from Harmon & Reeves. They evaluate everything, from accuracy and usability to interactivity and instructional value, then back it all up with screenshots, user-flow breakdowns, and real-world context. PBS gets praised for curated content and video strength. CK-12 wins on customization and STEM depth. But both platforms get called out where it counts: lack of update transparency, navigational quirks, and accessibility gaps. This document doesn’t just evaluate platforms; it redefines what educators should expect from “free” online content. It’s not about whether these tools are useful. It’s about whether they’re worth integrating at scale, and what it would take to get them there.
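As a rough illustration of how a rubric like that can be rolled into a comparable score per site, here is a minimal Python sketch. The four criteria echo the ones named above, but the equal weights and 1-7 ratings are invented placeholders, not the team’s adapted Harmon & Reeves rubric or its actual findings.

```python
# Illustrative sketch: weights and 1-7 ratings below are hypothetical,
# not the adapted Harmon & Reeves rubric or the scores from the report.
RUBRIC_WEIGHTS = {
    "accuracy": 1.0,
    "usability": 1.0,
    "interactivity": 1.0,
    "instructional_value": 1.0,
}

ratings = {
    "PBS LearningMedia": {"accuracy": 6, "usability": 5, "interactivity": 4, "instructional_value": 6},
    "CK-12":             {"accuracy": 6, "usability": 5, "interactivity": 6, "instructional_value": 6},
}

def weighted_average(site_scores):
    """Weighted average of 1-7 criterion ratings, normalized by total weight."""
    total = sum(RUBRIC_WEIGHTS.values())
    return sum(RUBRIC_WEIGHTS[c] * s for c, s in site_scores.items()) / total

for site, site_scores in ratings.items():
    print(f"{site}: {weighted_average(site_scores):.2f} out of 7")
```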
Deliverables
The deliverables in this course didn’t just check academic boxes; they showed what it looks like when evaluation gets tactical. The website evaluation broke down digital learning platforms like a field manual, pinpointing their usability strengths and functional gaps with a 7-point precision rubric that cut through the marketing noise. The formative evaluation wasn’t just a rehearsal; it was a field test. We treated a civilian course like a prototype, rewired it for military use, and ran it through expert reviews and one-on-one user trials that exposed both its promise and its blind spots. The peer evaluation held a mirror to team dynamics, laying bare what real collaboration looks like when deadlines are tight and standards are high. And the summative evaluation closed the loop, pulling together hard data and qualitative insights to answer the only question that really matters: Did the training work? These artifacts reflect more than process; they reflect impact.