AI Integration
Using AI to prevent cheating
Making Coursera assessments harder to cheat on so students can earn credit in emerging fields
I led design for Coursera’s first AI-powered assessment tool, giving educators a fast way to create rigorous, credit-worthy exams. Within four months, 48 institutions adopted the tool, expanding access to university credit in fields where students previously had none.
My Role
AI Prompt Engineering
Assessment Design
Feature Prioritization
Competitive Analysis
Research & Testing
Content Strategy
UX Writing
Information Architecture
Wireframes
Visual Design
Prototyping
Usability Testing
Stakeholder Alignment
Documentation
Design Systems
Tools
Figma
Loom
Confluence
JIRA
Designed For
Educator UX - Web (Desktop)
Learner UX - Responsive Web
The Team
Principal Product Designer (me)
Product Manager (1)
Engineers (14)
Challenge
Universities were hesitant to award credit for Coursera courses because existing assessments were too easy to cheat on. Answers were often shared online, undermining trust in exam results.
The problem was especially visible in large international classes, where cheating concerns put partnerships and for-credit programs at risk. I worked across product, engineering, legal, and marketing to align scope and timelines, while leading faculty research to uncover what universities needed to trust Coursera assessments.
Approach
Audit
Mapped the educator platform and existing assessment tools to identify weak points in rigor.


Research
Interviewed faculty from large international programs, where classes often had 1,000 students per teacher and cheating was harder to monitor.

AI Experiments
Used generative AI to create varied question banks mapped to Bloom’s Taxonomy.
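The question-generation experiments above can be illustrated with a minimal prompt-assembly sketch. This is hypothetical code, not Coursera's implementation; the function name, prompt wording, and level list are assumptions for illustration only.

```python
# Hypothetical sketch: assembling a generation prompt that targets one
# level of Bloom's Taxonomy, so the resulting question bank spans varied
# cognitive skills rather than recall alone.

BLOOM_LEVELS = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]

def build_question_prompt(course_topic: str, level: str, count: int = 5) -> str:
    """Build a prompt asking an LLM for exam questions at a given Bloom level."""
    if level not in BLOOM_LEVELS:
        raise ValueError(f"Unknown Bloom level: {level}")
    return (
        f"Write {count} exam questions for the course topic '{course_topic}'. "
        f"Each question should target the '{level}' level of Bloom's Taxonomy, "
        "include four answer options, and indicate the correct answer."
    )

prompt = build_question_prompt("Machine Learning Basics", "Apply", count=3)
```

Generating one batch per taxonomy level, rather than one undifferentiated batch, is what makes the resulting bank "varied" in the sense described above.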


Validation
Ran usability sessions with instructors to refine flows, controls, and confidence in AI outputs.
Final Designs
The final tool gave educators a simple, self-serve flow:
  1. Generate new questions for an existing Coursera course
  2. Automatically add those questions to a reusable Question Bank
  3. Add everything to a new assessment, with questions randomized per learner plus time and attempt limits to strengthen integrity
This gave educators a fast, flexible way to build credible assessments and gave universities confidence that students were being tested fairly.
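The randomized-delivery step above can be sketched as a seeded draw from the shared question bank, so each learner sees a stable but distinct question set. This is an illustrative assumption about the mechanism, not the actual implementation; all names here are hypothetical.

```python
# Hypothetical sketch: per-learner question randomization from a shared
# bank. Seeding the RNG with the learner ID makes each learner's draw
# deterministic (stable across page reloads) while differing between learners.
import random

def draw_assessment(question_bank, learner_id, num_questions, attempt_limit=2):
    """Sample a learner-specific question set and attach an attempt limit."""
    rng = random.Random(learner_id)  # deterministic per learner
    questions = rng.sample(question_bank, num_questions)
    return {"questions": questions, "attempt_limit": attempt_limit}

bank = [f"Q{i}" for i in range(20)]
exam = draw_assessment(bank, learner_id="user-42", num_questions=5)
```

Randomizing the draw per learner limits answer sharing, while the attempt limit caps trial-and-error, which together address the integrity concerns described above.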
Outcomes
Rapid adoption and access
Within four months, 48 institutions launched credit-worthy AI-powered assignments, expanding access without requiring subject matter expertise.

Stronger credentials
Rigorous assessments made courses harder to game, helping universities award credit in emerging fields while lowering authoring costs.
Up next