I explore how people and AI agents can understand each other better: interfaces that explain what models see, multimodal assistants that act across devices, and infrastructure for the emerging agentic web where websites expose clear affordances to LLM-based agents.
- Agent-ready web design. VOIX lets developers declare which actions an agent may take on their site, so agents no longer have to rely on brittle scraping (a sketch of the idea follows this list). I test it with builders during multi-day hackathons and iterate on it alongside their prototypes.
- Explainable, multimodal interaction. From interactive aesthetics explainers to conversational robots in enterprise contexts, I build tools that keep people in the loop and make AI behavior legible.
- Pragmatic tooling. Open-source libraries such as SymphonAI and tidy-env, along with LiDAR localization experiments, grow out of the day-to-day needs of researchers and practitioners I collaborate with.
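
To make the agent-ready idea concrete, here is a minimal TypeScript sketch of a site publishing a small catalogue of named, described, typed actions that an agent (say, a browser extension) can discover and invoke instead of reverse-engineering the DOM. All names here (`AgentAction`, `agentActions`, `add_to_cart`) are invented for illustration and are not VOIX's actual interface.

```typescript
// Hypothetical illustration of agent-facing action declarations.
// The type and property names are invented for this sketch and
// do not reflect VOIX's real API.

type ParamSpec = {
  type: "string" | "number" | "boolean";
  description: string;     // natural-language hint for the LLM
  required?: boolean;
};

type AgentAction = {
  name: string;                        // stable identifier the agent can call
  description: string;                 // what the action does, in plain language
  params: Record<string, ParamSpec>;   // machine-readable parameter schema
  run: (args: Record<string, unknown>) => Promise<unknown>;
};

// The site declares what an agent may do, instead of the agent
// guessing from buttons and forms.
const actions: AgentAction[] = [
  {
    name: "add_to_cart",
    description: "Add a product to the shopping cart by its SKU.",
    params: {
      sku: { type: "string", description: "Product SKU", required: true },
      quantity: { type: "number", description: "How many to add" },
    },
    run: async ({ sku, quantity }) => {
      // A real site would call its own backend here.
      return { ok: true, sku, quantity: (quantity as number) ?? 1 };
    },
  },
];

// Expose the catalogue where an agent can discover and invoke it.
(window as any).agentActions = actions;
```

The point of the sketch is the shape of the contract: explicit names, descriptions, and parameter schemas give an LLM-based agent something stable to reason over, while the site keeps control of what each action actually does.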
