How we use AI
Our stance
Good UX work has always been about understanding people: what they need, how they think, and where products fall short. AI doesn't change that.
We use AI where it helps us prepare, organize, draft, and move faster through parts of the process. We don't use it to replace real participants, researcher judgment, designer judgment, or client accountability.
Used well, AI gives our team more time for the work that matters most: asking better questions, interpreting what we hear, making thoughtful design decisions, and helping clients move forward with confidence.
Where AI fits our process
We use AI in places where it helps us prepare, organize, draft, and compare. That includes discussion guide drafts, workshop planning, transcript review, early synthesis support, UX audits, competitive analysis, wireframe exploration, content structure, roadmap drafts, and development handoff notes.
In every case, a researcher or designer is actively shaping the work. AI-generated drafts are starting points. Findings, recommendations, and design decisions come from our team.
Current tools include Claude for synthesis and drafting, Zoom and Otter.ai for transcription, and Figma Make or Claude Design for design exploration. We review our toolset as capabilities and client requirements change.
Where we draw the line
The findings we deliver reflect real conversations with real people. That's not a constraint we work around; it's the foundation our recommendations stand on.
We use AI to prepare guides, organize notes, identify possible patterns, draft starting points, and support documentation.
We don't use AI to replace participants, simulate user feedback, generate unreviewed findings, make final recommendations, or substitute for researcher or designer judgment.
There's an important difference between synthesis support and synthesis itself. A model can organize and surface observations. It can't weigh them against the tone of a participant's voice, challenge them against what the data doesn't show, or connect them to the strategic context a researcher builds over the course of a project. That's the expertise we bring.
How we protect client and participant data
Protecting client and participant information is part of how we work, not an afterthought we apply to AI specifically.
We don't put raw session recordings or client-confidential materials into AI tools. When we use AI for synthesis, we work only from transcripts, excerpts, and notes, and only when doing so is consistent with the client's agreement and project requirements. We remove participant-identifying details before using any AI tool.
We use Claude under a commercial Team plan. Under Anthropic's commercial terms, prompts and outputs are not used to train Claude models unless a customer explicitly agrees otherwise. Our existing client confidentiality agreements govern what can be shared with any third-party tool, and AI tools are no exception. If a client's policies prohibit the use of AI tools on their project, we follow those requirements.
How we stay accountable
We outline how we use AI in our Statement of Work, and we walk through it again during the project kickoff meeting. That gives clients a clear picture before work begins and an early opportunity to raise questions or flag any organizational policies we should work within.
Every deliverable that leaves our studio has a person's name behind it. That person reviewed the work, shaped it, and owns it. AI assists our process. It doesn't attend client meetings, field questions, or take responsibility for recommendations. If a client wants to know what AI contributed to a specific piece of work, we'll tell them plainly.
How this evolves
UX practice has always adapted to new tools and methods. Our approach to AI will continue to do the same.
We'll expand where it genuinely improves our work. We'll pull back where experience shows it doesn't. This page reflects our current practice, and we'll update it when something meaningful changes.
If you have questions about how we handle AI in a particular context, reach out. We'd rather have that conversation than leave you guessing.
