How AI agents can redefine universal design to increase accessibility


Our research direction: Designing for accessibility

In our early research, we found that a significant barrier to digital equity is the “accessibility gap”: the delay between the release of a new feature and the creation of an assistive layer for it. To close this gap, we are shifting from reactive tools to agentic systems that are native to the interface.

Research pillar: Using multi-agent systems to improve accessibility

Multimodal AI tools provide one of the most promising paths to building accessible interfaces. In specific prototypes, such as our work with web readability, we’ve tested a model where a central Orchestrator acts as a strategic reading manager.

Instead of the user navigating a complex maze of menus, the Orchestrator maintains shared context: it understands the document and makes it more accessible by delegating tasks to expert sub-agents.

  • The Summarization Agent: Breaks down complex documents, distilling dense information so that even the deepest insights are clear and accessible.
  • The Settings Agent: Dynamically handles UI adjustments, such as scaling text.

In testing this modular approach, our research shows that users can interact with systems more intuitively: specialized tasks are always handled by the right expert, without the user needing to hunt for the “correct” button.
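
To make the delegation pattern concrete, here is a minimal sketch in Python. Every class, intent name, and method here is hypothetical; the post does not publish an API, so this shows only the routing structure, with placeholder logic where a production system would call a multimodal model.

```python
# Minimal, illustrative sketch of the orchestrator pattern described above.
# All names are hypothetical stand-ins, not a published interface.
from dataclasses import dataclass


@dataclass
class Request:
    """A user request plus the shared context the orchestrator maintains."""
    intent: str   # e.g. "summarize" or "adjust_settings"
    payload: str  # document text, or a settings instruction


class SummarizationAgent:
    def handle(self, payload: str) -> str:
        # Placeholder: a real agent would call a multimodal model here.
        first_sentence = payload.split(".")[0]
        return f"Summary: {first_sentence}."


class SettingsAgent:
    def handle(self, payload: str) -> str:
        # Placeholder: a real agent would apply the UI change (e.g. text scale).
        return f"Applied setting: {payload}"


class Orchestrator:
    """Routes each request to the right expert, so the user never has to."""

    def __init__(self) -> None:
        self.agents = {
            "summarize": SummarizationAgent(),
            "adjust_settings": SettingsAgent(),
        }

    def dispatch(self, request: Request) -> str:
        agent = self.agents.get(request.intent)
        if agent is None:
            return f"No agent registered for intent '{request.intent}'."
        return agent.handle(request.payload)


if __name__ == "__main__":
    orch = Orchestrator()
    print(orch.dispatch(Request("summarize", "Agents can close the accessibility gap. More follows.")))
    print(orch.dispatch(Request("adjust_settings", "text scale 150%")))
```

The design point is that routing lives in one place: adding a new capability means registering one more expert agent, while the user-facing interaction stays a single conversation.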

Toward multimodal fluency

Our research also focuses on moving beyond basic text-to-speech toward multimodal fluency. By leveraging Gemini’s ability to process voice, vision, and text simultaneously, we’ve built prototypes that can turn live video into immediate, interactive audio descriptions.

This isn’t just about describing a scene; it’s about situational awareness. In our co-design sessions, we’ve observed how allowing users to interactively query their environment — asking for specific visual details as they happen — can reduce cognitive load and transform a passive experience into an active, conversational exploration.
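
As one illustration of that interactive loop, the sketch below pairs each user question with the most recent video frame, so answers stay anchored to what is on screen right now. The describe_frame and speak functions are hypothetical stand-ins: a real prototype would call a multimodal model such as Gemini and a text-to-speech engine at those points.

```python
# Illustrative sketch of an interactive audio-description loop.
# describe_frame and speak are hypothetical placeholders, not a real API.
from typing import Iterator


def frames(source: str) -> Iterator[bytes]:
    """Stand-in for a live camera feed; yields encoded video frames."""
    yield from (b"frame-0", b"frame-1")  # placeholder frames


def describe_frame(frame: bytes, question: str) -> str:
    """Hypothetical call to a vision-language model."""
    return f"(model answer to {question!r} for a {len(frame)}-byte frame)"


def speak(text: str) -> None:
    """Hypothetical text-to-speech hook; printed here for illustration."""
    print(f"[audio] {text}")


def interactive_session(source: str, questions: Iterator[str]) -> None:
    # Pair each user question with the latest frame, so descriptions
    # track the scene as it changes (situational awareness).
    for frame, question in zip(frames(source), questions):
        speak(describe_frame(frame, question))


if __name__ == "__main__":
    interactive_session("camera:0", iter(["What is on the table?", "Is the door open?"]))
```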


