Designing an Approval-First AI Agent

On February 19th, I spent five hours at Wordware’s Beach House in San Francisco building an interface for something most people will never touch: an autonomous AI agent runtime called OpenClaw.

And yes — I ended up a finalist again.

Wordware Hack Night was focused on one specific UX challenge: how do you make AI agents that can take real-world actions feel usable to normal humans? OpenClaw is powerful infrastructure. It can manage files, automate browsers, send emails, execute shell commands, and run proactively. It connects to Claude, GPT, or local models. It works 24/7.

But right now? It mostly lives in terminals, Telegram, and Discord.

The challenge wasn’t to build another chatbot wrapper. It was to rethink the interface layer for autonomous AI — to design something that makes “AI that acts” approachable instead of intimidating.

My Approach: Workflow, Not Prompts

Instead of designing for developers, I designed for freelancers — specifically videographers.

The problem: pitching is repetitive, manual, and time-consuming. Every time a producer sends a project brief, the videographer has to:

– search past work
– assemble relevant clips
– rewrite an intro
– format a deck
– email it out

It’s friction-heavy and cognitively draining.

My solution: an approval-first pitching agent that handles the busywork and leaves the final decision to the human. Here’s how it works:

  1. The videographer connects their email and portfolio once.

  2. When a producer sends a project brief, OpenClaw detects the email trigger.

  3. The agent searches the creator’s portfolio for relevant past work.

  4. Based on the brief and matched assets, it calls Wan AI via API to generate tailored example video cuts.

  5. It drafts a pitch that embeds curated video snippets aligned to the project needs.

  6. The videographer reviews a small set of curated options and selects what to send.

  7. OpenClaw packages it into a clean presentation and sends it.

Users never write a prompt; they select from intelligently prepared options. Review → approve → send.
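The approval gate at the heart of that flow can be sketched in a few lines of Python. Everything here is hypothetical: the `PitchOption` shape, and the keyword matcher standing in for the real portfolio search and Wan AI generation step. The point is structural: the send step is unreachable until a human explicitly picks an option.

```python
from dataclasses import dataclass


@dataclass
class PitchOption:
    """One candidate pitch the user can approve (hypothetical shape)."""
    intro: str
    clip_urls: list  # portfolio clips matched to the brief


def handle_brief(brief: str, portfolio: list) -> list:
    """Assemble a small set of pitch options from matched past work.

    A naive keyword match stands in here for OpenClaw's portfolio
    search and the Wan AI call that generates tailored example cuts.
    """
    keywords = brief.lower().split()
    matched = [item for item in portfolio
               if any(kw in item["tags"] for kw in keywords)]
    # Curate: cap the choices so review stays quick, not overwhelming.
    return [PitchOption(intro=f"Pitch draft featuring {item['title']}",
                        clip_urls=[item["url"]])
            for item in matched[:3]]


def approval_gate(options, choose):
    """Nothing is sent until the human picks an option."""
    approved = choose(options)
    if approved is None:
        return None  # user declined; the agent takes no action
    return approved  # only now would packaging and sending run
```

The design choice worth noting: `approval_gate` is the only path to the send step, so autonomy runs freely upstream (search, match, generate) while the irreversible action stays behind an explicit human decision.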

Why This Matters

The shift is subtle but important: it moves from “AI writes your pitch” to “AI assembles structured, context-aware options you approve.” Autonomous AI is not primarily a UX problem about buttons or dashboards; it is a trust problem. The autonomy operates in the background, while the human makes the final call, with clear guardrails and a strong sense of security.

My goal was not only to build the backend workflow automation with OpenClaw, but also to experiment with the interface layer and demonstrate how powerful Wan AI can be in a real, outcome-driven use case.

The event itself was live demos only: builders building, the Wordware team deep-diving into APIs, and real conversations about infrastructure with industry experts, users, designers, and engineers. These environments are rare: high signal, low fluff, and filled with people who ship.

Being a finalist again was rewarding, but more importantly it reinforced something I care deeply about: if autonomous AI is going to become mainstream, the UX layer must evolve beyond chat.

We need interfaces that make powerful systems feel safe, structured, and outcome-driven.

And that’s the kind of work I want to keep doing.
