
How to Use Data to Improve Instructional Design

Episode Transcript

Jackie Pelegrin

Hello, and welcome to the Designing with Love Podcast.

I am your host, Jackie Pelegrin, and my goal is to bring you information, tips, and tricks as an instructional designer.

Hello, instructional designers and educators.

Welcome to episode 77 of the Designing with Love Podcast.

Today, we're diving into data-driven design decisions, how to use analytics and feedback to improve learning experiences with confidence, clarity, and care.

By the end, you'll have a simple flow for defining success, collecting the right metrics, testing small changes, and sharing results with your team.

So, grab your notebook, a cup of coffee, and settle in as we explore this topic together.

Before we start sketching, remember that every strong design begins with a clear creative brief.

Let's write yours.

Set the brief.

Define success before you measure.

Data only matters in relation to a goal.

What change do we want to see in learner behavior or performance?

What's the goal?

Align metrics to meaningful learning and business outcomes, not vanity numbers.

What to include?

First, state your primary outcome in plain language.

For example, increase task accuracy, reduce time to proficiency, or improve completion within a defined window.

Next, set your minimum success criteria as the baseline win you must achieve.

Then, set your stretch goal as the aspirational level you would love to reach.

After that, choose two or three metrics at most that map directly to your outcome, such as completion rate, item-level accuracy, time on task, evidence of on-the-job application, number of support tickets, or manager observations.

Finally, remove any metric that does not inform a decision you are prepared to make.
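
If it helps to see the brief as something concrete, here is a minimal sketch in Python of what it might look like as plain data; the outcome, thresholds, and metric names are illustrative assumptions, not a prescribed format.

```python
# A lightweight design brief captured as plain data, so later analysis steps
# can point back to the same agreed-upon targets. All values are illustrative.
design_brief = {
    "primary_outcome": "Reduce time to proficiency for new hires",
    "minimum_success_days": 8.0,   # the baseline win you must achieve
    "stretch_goal_days": 7.0,      # the aspirational level you would love to reach
    "metrics": [
        "completion_rate",
        "item_level_accuracy",
        "time_to_proficiency_days",
    ],
}

# Map each metric to the decision it informs; drop any metric without one.
decisions = {
    "completion_rate": "shorten or re-chunk the module",
    "item_level_accuracy": "rewrite confusing items or add worked examples",
    "time_to_proficiency_days": "adjust onboarding scope and pacing",
}
design_brief["metrics"] = [m for m in design_brief["metrics"] if m in decisions]
print(design_brief)
```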

With your brief in place, let's lay out the tools on your design table so you can actually see what's happening.

Lay out the tools.

Collect the right data.

If it isn't instrumented, we're guessing.

What's the goal?

Capture reliable behavioral, attitudinal, and qualitative signals while protecting learner privacy.

What to include?

First, track behavioral analytics by capturing learning management system events and Experience API (xAPI) events, reviewing quiz item analysis, and monitoring time on task.

Next, add feedback signals by using one-minute pulse surveys, short exit surveys, and a direct prompt that asks what was least clear.

Then gather qualitative insights by running five quick think-aloud usability tests or scheduling focused 15-minute learner interviews.

After that, be explicit about ethics by explaining what you collect and why, minimizing personally identifiable information, aggregating results when possible, and storing data securely.

Finally, maintain a data dictionary that lists each metric with its definition, its source system, its refresh cadence, and the person responsible for it.
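
For that last step, the data dictionary itself can stay very small. Here's a rough sketch in Python; the metric names, sources, and owners are hypothetical placeholders.

```python
# A minimal data dictionary: each metric with its definition, source system,
# refresh cadence, and owner. Every entry here is a hypothetical example.
data_dictionary = [
    {
        "metric": "completion_rate",
        "definition": "Percent of enrolled learners who finish the module",
        "source": "LMS completion events",
        "refresh": "daily",
        "owner": "Instructional design team",
    },
    {
        "metric": "item_6_accuracy",
        "definition": "Percent of first attempts answering quiz item 6 correctly",
        "source": "xAPI statements from the quiz engine",
        "refresh": "weekly",
        "owner": "Assessment lead",
    },
    {
        "metric": "exit_poll_least_clear",
        "definition": "Free-text responses to 'What was least clear?'",
        "source": "One-question exit poll",
        "refresh": "weekly",
        "owner": "Instructional design team",
    },
]

for row in data_dictionary:
    print(f"{row['metric']:>22}  ({row['refresh']}, owner: {row['owner']})")
```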

Now that the tools are out, it's time for a constructive critique.

Let's review the draft with clear eyes.

Critique the draft.

Diagnose with simple analysis flow.

Patterns first, then causes.

What's the goal?

Move from scattered data to clear, testable hypotheses.

What to include?

First, run the pattern, probe, then propose loop from start to finish.

Next, in the pattern step, flag a notable trend, such as drop-off at slide seven, or a quiz item that 62% of learners miss.

Then in the probe step, validate your hunch with qualitative checks like screen recordings, learner comments, or a quick five user test.

After that, in the propose step, craft a change that targets the most likely root cause.

Finally, review item discrimination and common wrong-answer patterns to spot confusing stems or distractors, and scan quick visuals such as a start-to-finish funnel, click heat maps, and score distributions by cohort or role.
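
To make that item review concrete, here's a minimal sketch that computes item difficulty, a simple discrimination index, and the most common wrong answer from a made-up response table; the data and layout are assumptions for illustration only.

```python
# Minimal item analysis on a hypothetical response table:
# difficulty = proportion correct, discrimination = correlation between
# getting the item right and the rest-of-quiz score, plus the most
# common wrong answer (a likely culprit distractor).
from collections import Counter
from statistics import correlation  # requires Python 3.10+

# Each row: (answer chosen on item 6, 1/0 correct on item 6, score on the other 9 items)
responses = [
    ("B", 0, 5), ("B", 0, 6), ("C", 1, 8), ("B", 0, 4), ("C", 1, 9),
    ("B", 0, 7), ("C", 1, 8), ("D", 0, 3), ("C", 1, 9), ("B", 0, 5),
]

correct_flags = [c for _, c, _ in responses]
rest_scores = [s for _, _, s in responses]

difficulty = sum(correct_flags) / len(correct_flags)        # proportion correct
discrimination = correlation(correct_flags, rest_scores)    # point-biserial style
wrong_answers = Counter(a for a, c, _ in responses if c == 0)

print(f"Item difficulty: {difficulty:.2f}")
print(f"Discrimination:  {discrimination:.2f}")
print(f"Most common wrong answer: {wrong_answers.most_common(1)[0]}")
```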

With the critique in hand, let's iterate the mock-up.

Small, targeted edits beat a total redesign.

Iterate the mock-up.

Design small, test fast.

Ship improvements in slices.

What's the goal?

Reduce risk and learn faster through small, measurable changes.

What to include?

First, start with low-lift fixes by shortening an overlong video, clarifying instructions, chunking practice, adding a worked example, or rewriting a confusing distractor.

Next, when you run an A/B split test, change only one variable, such as the title, the sequence, or the activity type, and keep everything else constant.

Then pilot your change with one team or with roughly 10 to 15 learners and measure before and after performance on your key metric.

After that, set guardrails by defining a clear rollback rule.

For example, if completion drops by more than 15%, revert the change.

Finally, log each tweak in a design change log that records what changed, why you changed it, the expected impact, the owner, and the date.
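
Here's one way to sketch the guardrail and the design change log in Python; the 15% threshold echoes the rollback rule above, while the field names and numbers are illustrative assumptions.

```python
# A simple rollback guardrail plus a design change log entry.
# The 15% threshold mirrors the example rollback rule; everything else is illustrative.
from datetime import date

ROLLBACK_THRESHOLD = 0.15  # revert if completion drops by more than 15% (relative drop)

def should_roll_back(baseline_completion: float, new_completion: float) -> bool:
    """Return True when the relative drop in completion exceeds the guardrail."""
    drop = (baseline_completion - new_completion) / baseline_completion
    return drop > ROLLBACK_THRESHOLD

design_change_log = [{
    "date": date.today().isoformat(),
    "what_changed": "Shortened an overlong video and rewrote a confusing distractor",
    "why": "Low completion and one frequently missed quiz item",
    "expected_impact": "Higher completion and item-level accuracy",
    "owner": "Instructional designer",
}]

# Hypothetical before/after completion rates from a small pilot.
if should_roll_back(baseline_completion=0.82, new_completion=0.66):
    print("Completion dropped past the guardrail; revert the change.")
else:
    print("Change is within guardrails; keep monitoring.")
```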

Great.

Now let's mount the revised piece and host a quick studio walkthrough so everyone can learn from it.

Exhibit the work.

Share wins, learn publicly, and systemize.

Evidence is a team sport.

What's the goal?

Turn isolated fixes into repeatable, team-wide practice rooted in equity.

What to include?

First, publish a one-page learning brief that captures the problem, a concise data snapshot, the change you made, the outcome, and the next step.

Next, host a monthly 20-minute evidence roundup that highlights one success, one surprise, and one next experiment to try.

Then, templatize your survey items, your analytics dashboard views, and your checklists for new builds.

After that, run an equity check by comparing outcomes across cohorts, roles, and devices to ensure gains serve everyone.

Finally, capture lessons learned in a shared repository so the practice continues even when team members change.
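
For the equity check, a small sketch like this one, using pandas, can compare the key outcome across cohorts and devices; the column names and values are hypothetical.

```python
# A quick equity check: compare the key outcome across cohorts and devices
# so you can see whether an improvement lifts every group, not just the average.
# Column names and values below are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "cohort": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "device": ["desktop", "mobile", "desktop", "mobile", "mobile", "desktop", "mobile", "desktop"],
    "passed": [1, 0, 1, 1, 0, 1, 0, 1],
    "time_on_task_min": [22, 35, 25, 28, 40, 24, 38, 23],
})

by_group = results.groupby(["cohort", "device"]).agg(
    pass_rate=("passed", "mean"),
    avg_time=("time_on_task_min", "mean"),
    learners=("passed", "size"),
)
print(by_group.round(2))
```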

Let's do a quick client review to see how this plays out on a real project.

Real life example, client review.

Here's the scenario.

There's a new-hire software onboarding course with one 45-minute module plus a 10-question quiz.

What's the goal?

Reduce time to first ticket resolution from 10 days to 7 days, with eight days set as your minimum success criteria.

What to include?

First, instrument the experience by tracking video completion, capturing item level quiz data, and monitoring help desk tickets during the first 30 days.

Add a one-question exit poll that asks what was least clear.

Next, read the signals by noting a 58% drop-off on a 9-minute API video, observing that quiz item 6 on authentication is missed 64% of the time with the same distractor, and collecting comments that point to excessive jargon.

Then make the change by splitting the API video into three three-minute clips with captions and a small inline glossary chip, inserting a worked example before item six, and rewriting the distractor.

Next, pilot one cohort with 24 learners against a control group with 26 learners.

After that, evaluate outcomes by aiming for higher completion, higher item accuracy, and lower time to first resolution.

For example, 7.4 days in the pilot compared to 9.6 days in the control group.

Finally, close the loop by publishing a learning brief and planning the next test that compares a printable job aid with inline tips.
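
To close out the scenario in code, here's a tiny sketch that checks the pilot result against the goal and the minimum success criteria; the figures are the ones quoted above, and the variable names are just for illustration.

```python
# Evaluate the pilot against the client-review scenario: a 7-day target,
# an 8-day minimum success criterion, a pilot averaging 7.4 days,
# and a control group averaging 9.6 days.
GOAL_DAYS = 7.0
MINIMUM_SUCCESS_DAYS = 8.0

pilot_avg_days = 7.4      # 24-learner pilot cohort
control_avg_days = 9.6    # 26-learner control group

improvement = control_avg_days - pilot_avg_days
print(f"Improvement over control: {improvement:.1f} days")

if pilot_avg_days <= GOAL_DAYS:
    print("Stretch goal met.")
elif pilot_avg_days <= MINIMUM_SUCCESS_DAYS:
    print("Minimum success criterion met; keep iterating toward the 7-day goal.")
else:
    print("Below the bar; probe further before rolling out.")
```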

Ready to put pencil to paper?

Here's your next sketch.

Call to action, your next sketch.

Now it's your turn.

Pick one active course or module this week.

What's the goal?

Move from listening to action with a single manageable experiment.

What to include?

First, write down one outcome you care about.

Next, select two metrics that truly reflect that outcome.

Then, run one small experiment, such as shorten a video, clarify one quiz item, or add a worked example.

After that, record the change in your design change log and share a one-page learning brief with your team.

Finally, send me your mini case so I can feature a few in a future episode.

Before we close, here's a 30-second studio recap.

Set the brief.

Define the outcome, your minimum success criteria, and your stretch goal.

Lay out the tools: capture LMS events, Experience API (xAPI) events, quick surveys, and short interviews, with ethics in mind.

Critique the draft.

Run the pattern, probe, then propose loop to find causes, not just symptoms.

Iterate the mock-up: ship small edits, run an A/B split test on one variable, and set rollback rules.

Exhibit the work.

Publish a one-page learning brief, hold a monthly evidence roundup, and check equity so improvements lift everyone.

To make this easy to use and incorporate into your projects, I've put together an interactive infographic that walks you through the exact flow step by step.

You'll find it linked in the show notes and in the companion blog post on the Designing with Love website.

As I conclude this episode, I would like to share an inspiring quote by W. Edwards Deming, a well-known statistician and quality pioneer who showed organizations how to use data for continuous improvement.

"In God we trust; all others must bring data."

Remember, behind every data point is a learner.

We use evidence to serve people, not just dashboards.

Until next time, keep your goals clear, your data clean, and your iterations small.

Thank you for taking some time to listen to this podcast episode today.

Your support means the world to me.

If you'd like to help keep the podcast going, you can share it with a friend or colleague, leave a heartfelt review, or offer a monetary contribution.

Every act of support, big or small, makes a difference, and I'm truly thankful for you.
