
IDE 712 - Analysis for Human Performance Technology Decisions
IDE 712 isn’t just another instructional design class with shiny models and theoretical jargon. It’s a surgical deep dive into performance problems: the real kind, the kind that stall missions, tank teams, or waste resources because no one bothered to ask why things weren’t working in the first place. This course builds analysts, not order-takers. It gives you the tools to diagnose performance issues like a professional, not a theorist with a checklist fetish.

From the jump, IDE 712 throws you into the mess: real barriers, real consequences. You’ll break down human performance technology models, sift through motivational rot, and isolate whether the problem is environmental clutter or just someone not knowing what the hell to do. The course doesn’t spoon-feed. Each unit expects clarity, thought, and backbone. You’ll build a Front-End Analysis (FEA) Plan from scratch. Not hypothetically. Not with fake scenarios. You’ll pick a performance issue and go to work on it: unpacking, mapping, solving. No busywork. No fluff. Each assignment threads directly into your FEA plan and either sharpens your ability to think systemically or exposes gaps you need to fix.

The structure leans heavily into application. You don’t just read Harless; you use his work to cut through noise. You don’t just talk about human performance technology; you live inside it, testing it against the real-world stakes your future clients will care about. This course doesn’t chase idealism. It trains you to think critically, act deliberately, and decide what actually solves a problem. IDE 712 forces you to be clear-headed and exacting, because in performance improvement, guessing gets expensive.
Overall Course Grade: A
Purpose and Direction
This IDE 712 page of Liela Shadmani’s e-portfolio captures the shift from understanding performance problems as vague frustrations to dissecting them with strategic clarity. This course didn’t just teach human performance technology; it forced me to use it as a scalpel, not a safety net. Every artifact on this page reflects how I learned to trace performance gaps back to their root causes, challenge assumptions, and decide whether training was the right fix or just an easy default. I moved from reading about Front-End Analysis to building one from the ground up: mapping real-world constraints, vetting data sources, and proposing interventions that hold up under scrutiny. From the first cause analysis to the final FEA plan, this portfolio documents the evolution of my thinking, where “performance” stopped being a buzzword and became a problem worth solving with discipline, insight, and measurable impact.
Outlining the Purpose for Each Section
Team L3M0N8 PowerPoint: Enhancing Collaboration in Graduate-Level Group Projects (A Front-End Analysis)
This presentation slices through the glossy brochure version of group work and gets honest about the dysfunction most grad students endure. The team calls it what it is: an institutional blind spot. The deck breaks down the context, performance gap, root causes, and stakeholder influence, and ends with two solid interventions: a micro-course to teach actual collaboration and standardized toolkits that hold students accountable. What sets this apart isn’t the bullet points; it’s the unapologetic tone and precision. The literature isn’t used for decoration. It’s leveraged to underscore how group dynamics consistently fail when instructors rely on assumptions. The PowerPoint doesn’t pitch “fixes” for optics; it proposes systemic shifts backed by clear, scalable timelines. And it makes a case for real consequences when collaboration is left to chance.
Front-End Analysis Plan
The full FEA plan reads like a takedown of every lazy group project thrown into a syllabus under the illusion of “collaborative learning.” It opens with a clear assertion: the failure isn’t in student motivation; it’s in design negligence. Backed by hard data and brutal honesty, the analysis walks through how students are consistently set up to fail: unclear roles, tech that doesn’t integrate, and instructors who turn into crisis managers. The plan identifies six root cause areas: skills, knowledge, environment, motivation, time management, and academic experience. It then builds interventions that strike directly at the pain points. No fluff. Just a roadmap to close the gap between what instructors hope group work achieves and what it actually delivers. The evaluation section refuses to let “vibes” pass for data. Every metric has a target. Every target ties back to reducing faculty burnout and student resentment. It doesn’t offer magic. It offers a functional, tested fix.
Literature Review: Needs Assessment in Action
I didn’t write this literature review to sugarcoat the group work problem. I wrote it to drag it into the open and dissect it. I looked at four peer-reviewed studies and didn’t just summarize them; I interrogated them. What I found confirmed what every student already knows: group projects fall apart not because students are lazy, but because the system sets them up to fail. The tools we’ve been told to use (peer evaluations, assigned roles, group contracts) don’t hold up without buy-in, structure, and enforcement. Through this review, I pulled from Chang and Brickman’s student-level insight, Guan’s systemic lens, Donelan and Kear’s 20-year sweep, and Kelly’s refreshing mindset shift. I compared, contrasted, and called out the gaps: the overreliance on self-reported data, the lack of long-term follow-up, and the black hole that is instructor involvement. What I found most useful was how these articles converged on one core truth: collaboration doesn’t happen by default. It happens when we design for it, prepare for it, and stay engaged through it. This review shaped how I approached our FEA Plan. It gave me the vocabulary, the data, and, most importantly, the clarity to stop designing based on hope and start designing based on need. Needs assessment isn’t just a box to check. It’s how we stop building on broken foundations.
Deliverables
The deliverables in IDE 712 weren’t just assignments; they were blueprints for how to stop designing solutions on autopilot and start solving real performance problems with clarity and purpose. The Team L3M0N8 PowerPoint set the foundation by mapping out the performance gap in graduate-level group projects and making the problem visible: not vague, not personal, but structural. It called out where things go off the rails and laid out a clean, actionable path to fix it. The second deliverable, the FEA Plan Final Draft, expanded the framework. It moved from insight to intervention, combining stakeholder analysis, root cause diagnosis, and a phased implementation timeline built for impact, not appearance. Every step was backed by real data, and every intervention had teeth: no theoretical fluff. The third deliverable, the Literature Review, gave the whole project its backbone. It pulled from four peer-reviewed studies and challenged the lazy assumptions behind group work. Instead of parroting research, it critiqued it, connected the dots, and called out what still isn’t being said. Together, these three artifacts built a throughline from analysis to design to evaluation that reshaped how I approach instructional problems: not as an observer, but as someone responsible for fixing them.