2025
Product Designer
Conceptual exploration of ethical AI/UX patterns prioritising transparency, agency, and trust
• How existing AI/UX principles apply to new synthesis tools
• Designing for transparency - showing AI reasoning and confidence levels
• Creating validation points that keep users in control
• Balancing AI pattern recognition with human judgment
I explored trustworthy AI design through stakeholder synthesis - a problem I've experienced firsthand. Scaffold is the conceptual tool I used to test how transparency and validation patterns might maintain human control while accelerating this workflow.



Through using AI tools, observation, and research, I identified areas I wanted to explore through my design practice.
Rather than building a production tool, I focused on testing how these principles might translate to real design decisions through a conceptual stakeholder synthesis tool called Scaffold.

Scaffold shows how AI can accelerate stakeholder synthesis from weeks to minutes through transparent, validated collaboration, without removing human judgment.
This conceptual tool let me practice applying transparency and validation principles to a real design problem I've experienced.

AI that isn't explicit about its reasoning makes user trust challenging. Users can't validate outputs or understand where the AI might be wrong.
Surface AI reasoning, confidence levels, and evidence at every decision point so users can verify claims and maintain appropriate scepticism.
• Confidence scores
• Evidence including mention counts and quotes drawn directly from source materials
• Validation required - the user reviews before the AI proceeds
AI designed to replace human work removes the thinking that makes the work valuable, leading to over-reliance on potentially inaccurate recommendations.
AI handles pattern recognition at scale whilst users make strategic judgment calls.
• AI generates strategic options with explicit, varying confidence levels
• Each option shows stakeholder fit percentage, pros/cons, trade-offs
• AI recommends whilst user compares and decides
• AI never auto-selects or advances without a user decision

Progressive disclosure of information allows a balance between summaries for scanning and expandable details for validation
Chose not to auto-select the option with the highest confidence; the user always chooses the strategic direction to maintain control
Maintained transparency and honesty by keeping low confidence scores visible rather than hiding them
This conceptual project let me practise applying AI/UX principles through essential workflows, not comprehensive features.
A production version would expand functionality, but it would be designed around the same core principles.

Through this project, I learned that transparency needs validation loops and confidence communication to be actionable - these principles work as a feedback system.
This exploration taught me that effective AI/UX comes from designing the minimum interactions that maintain human control, not exposing every capability the AI can perform.
I acknowledge and pay respect to the past, present and emerging traditional custodians of the land on which I work and live, the Gadigal people of the Eora Nation.
★ This website was built brick-by-brick on Webflow ★
© 2025