How one question helped us make usability visible—and raise the bar across a company.
Most product teams think they know what “good” looks like. We certainly did, and for us it came down to a single question:
“Overall, how easy or difficult was it to complete this task?”
That’s the Single Ease Question, or SEQ. And answering it reshaped how we prioritize, how we ship, and how we protect quality at Atlassian.
Where it started
This approach didn’t start with me. A design manager on my team, working closely with a design researcher, piloted SEQ during a high-stakes integration: the Chartio acquisition.
They were trying to solve a real problem:
How do we prove when something is unusable, not just undesigned?
There was intense pressure to move fast. The acquisition agreement created urgency to release to customers quickly, but internally, we had concerns. It wasn’t just about branding or polish—it was about core usability.
Would this launch help customers—or frustrate them?
That simple post-task question gave us something we didn’t have before: evidence.
For example: Connection Access scored a 2.3 out of 7. That wasn’t subjective. It wasn’t design opinion. It was users failing to complete a basic task.
That score helped us do what intuition and design arguments couldn’t:
Make the case to delay GA, secure additional resourcing, and fix the experience before launch.
My role was to take that spark—and scale it.
Why SEQ?
In fast-paced environments, it’s tempting to equate “done” with “good enough.” But speed without quality isn’t customer-centric—it’s short-term thinking.
[Image: Potential measures for experiences]
We needed a way to:
Flag broken experiences before they shipped
Align cross-functional teams with evidence, not opinions
Make usability improvements measurable and visible
At the time, we didn’t have a shared definition of what “good” looked like—especially one that could scale across 18+ products, multiple orgs, and a wide range of stakeholders.
That changed after a series of leadership sessions with Amazon. Inspired by Working Backwards, I partnered with our Strategy & Business Operations (SBO) lead to explore how we could connect design to business outcomes.
[Image: Where SEQ sits within platform maturity for experiences]
What input metrics could we tie to outcome metrics like CSAT or retention—so we could prove the value of experience quality?
[Image: SEQ versus CSAT]
We didn’t want another lagging output metric. We needed something earlier. Something practical. Quantifiable. Consistent. Something that made quality visible long before CSAT dropped or churn spiked.
That’s where SEQ came in.
What is SEQ?
It’s one question, asked after a task:
“On a scale of 1 to 7, how easy or difficult was it to complete this task?”
Quick to run in moderated or unmoderated testing
Gives a quantifiable score, paired with the “why”
Benchmarks progress over time
Works across any product or surface
We standardized a baseline of 5.5—but that’s the floor, not the goal. The aim is 6.0+ for confidence, 6.5+ for delight.
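To make the arithmetic concrete, here is a minimal sketch in Python, assuming one 1–7 rating per participant for a single task. The function name and sample ratings are illustrative, not part of any standard SEQ tooling; only the 1–7 scale and the 5.5 baseline come from the method described above.

```python
from statistics import mean

BASELINE = 5.5  # the standardized floor described above

def seq_score(ratings: list[int]) -> float:
    """Average the post-task 1-7 ratings into one SEQ score."""
    if not all(1 <= r <= 7 for r in ratings):
        raise ValueError("SEQ ratings must be on a 1-7 scale")
    return round(mean(ratings), 1)

# Hypothetical unmoderated round with eight participants.
score = seq_score([6, 5, 7, 6, 6, 5, 7, 6])
print(f"SEQ {score}: {'meets' if score >= BASELINE else 'misses'} the 5.5 floor")
```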
From practice to platform
After Chartio, we knew SEQ had potential. The challenge was scaling it across a large, complex org with many teams, surfaces, and priorities.
Alongside the SBO lead and the design manager, we envisioned a global SEQ dashboard: a way to track usability signals with the same discipline we applied to business performance.
[Image: Dashboard tracking the number of tasks measured through usability studies]
To get there, we:
Defined a monthly OKR tied to SEQ adoption
Collaborated with research to train, triage, and onboard 400+ designers
Set submission targets and SEQ goals (% of tasks scoring 5.5+)
Built a queryable database of SEQ scores across key journeys (see the sketch below)
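As a sketch of what “queryable” meant in practice: the schema, table name, and sample rows below are hypothetical, but a store this simple is enough to answer the goal above, i.e. what share of measured tasks score 5.5 or higher, per journey.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical schema: one row per measured task.
conn.execute("""
    CREATE TABLE seq_results (
        journey TEXT,   -- key journey, e.g. 'Onboarding'
        task    TEXT,   -- the task that was usability-tested
        score   REAL    -- averaged 1-7 SEQ score for that task
    )
""")
conn.executemany(
    "INSERT INTO seq_results VALUES (?, ?, ?)",
    [("Onboarding", "Create a project", 6.2),
     ("Onboarding", "Invite a teammate", 5.1),
     ("Admin", "Manage permissions", 5.8)],
)

# The adoption goal as a query: % of tasks at or above the 5.5 baseline.
query = """
    SELECT journey, ROUND(100.0 * AVG(score >= 5.5), 1)
    FROM seq_results
    GROUP BY journey
"""
for journey, pct in conn.execute(query):
    print(f"{journey}: {pct}% of measured tasks meet the 5.5 bar")
```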
The rollout took nearly 10 months. It wasn’t perfect, but it gave us a foundation—and a shared standard for usability across the company.
SEQ in action: Simple problems, fast fixes
One of my favorite examples comes from the Teams experience.
[Image: Teams case study, with Option 2 chosen]
Users simply couldn’t find the entry point. It was labeled “People”, which made sense internally—but not to anyone else.
SEQ scores flagged the issue. In a moderated session with PMs, engineers, and designers watching live, users consistently failed to find it. The team brainstormed ideas, and they found a simple one.
We changed the label from “People” to “Teams.” Discoverability improved overnight—and the fix shipped in ~48 hours.
Sometimes, the fix is simple. SEQ helped us see it faster, align faster, and act faster.
Creating a shared experience language at the executive level
A few months into the SEQ rollout, our senior-most design leader introduced a new ritual:
Weekly Experience Reviews (WERs)—a forum where experience quality was discussed with the same intensity as business metrics.
These reviews weren’t just about SEQ. They made design visible—end-to-end walkthroughs, scoring frameworks, and hard conversations about product readiness.
Executives could see and compare experiences across the full portfolio
They recognized duplication, inconsistency, and leverage points
They began asking better questions: “Has this been SEQ tested?” “Is this experience really ready to ship?”
This helped onboard executive and cross-functional leadership into a shared language of experience quality.
Not everything needed to be perfect. But we needed to know where we stood.
SEQ became one of the earliest, clearest signals in those conversations.
Reading SEQ results
1–5.4 (Broken): Do not ship. This is a usability failure. Fix it before release.
5.5 (Baseline): Minimum acceptable. Proceed only with caution, and only if supported by strong rationale.
6.0–6.4 (Good): Usable and working as expected. Reliable performance.
6.5–7.0 (Great): Smooth, intuitive, and satisfying. These are your standout experiences.
“Shipping anything under 5.5 isn’t just risky—it’s irresponsible. SEQ gave us the proof to stop bad experiences before they reached customers.”
SEQ doesn’t just help you improve. It helps you draw the line between what’s acceptable and what’s not—before your users do it for you.
How to get started
Don’t try to boil the ocean. Start small and focused.
Pick 3–5 critical tasks
Run SEQ testing with 6–8 users
Capture both the score and the “why”
Share the findings with your team
Prioritize, improve, and re-test (a quick sketch follows)
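For the re-test step, the comparison is simple arithmetic: run the same question on the reworked task and look at the before/after delta. A minimal sketch with invented numbers:

```python
from statistics import mean

# Hypothetical ratings for the same task, before and after a fix.
before = [3, 4, 2, 3, 4, 3]   # round 1
after = [6, 5, 6, 7, 6, 6]    # round 2, post-fix

delta = mean(after) - mean(before)
print(f"SEQ moved {mean(before):.1f} -> {mean(after):.1f} ({delta:+.1f})")
# With 6-8 users per round, treat the delta as a directional signal,
# not a statistically precise estimate.
```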
You’ll raise the quality of what you ship—and increase your team’s confidence while doing it.
Final thoughts
SEQ isn’t a silver bullet. It won’t rewrite your roadmap or fix broken processes.
But it will make usability visible. It will create clarity across functions. And it will help your teams build better products, faster.
If you want to build a culture of continuous UX improvement, you don’t need a thousand tools or a huge reorg. You just need one question.