multi-modal web page understanding via accessibility trees and visual analysis
Processes web pages by combining accessibility tree (axtree) extraction, DOM element parsing, and screenshot analysis to build a unified representation of page structure and content. The system extracts interactive elements, their positions, and semantic relationships, enabling VLMs to reason about page layout without raw HTML. This multi-modal approach allows agents to understand both the logical structure (via axtree) and visual presentation (via screenshots) simultaneously.
Unique: Combines accessibility tree extraction with screenshot analysis in a unified pipeline, allowing agents to reason about both semantic structure and visual layout simultaneously; most web agents rely on either DOM parsing or screenshots alone rather than integrating both
vs alternatives: Provides richer context than DOM-only parsing (which misses visual layout) and is more reliable than screenshot-only analysis (which lacks semantic structure), enabling more accurate element targeting and interaction planning
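As a rough illustration, the dual extraction step might look like the Playwright sketch below; the `capture_page_observation` helper and the observation dict shape are assumptions, not this project's API.

```python
# Minimal sketch of dual extraction with Playwright's public API.
# Note: accessibility.snapshot() is deprecated upstream but still callable.
import asyncio
from playwright.async_api import async_playwright

async def capture_page_observation(url: str) -> dict:
    async with async_playwright() as p:
        browser = await p.chromium.launch()
        page = await browser.new_page()
        await page.goto(url)
        # Logical structure: the accessibility tree of interesting nodes.
        axtree = await page.accessibility.snapshot()
        # Visual presentation: a full-page screenshot for the VLM.
        screenshot = await page.screenshot(full_page=True)
        await browser.close()
    # Unified observation: both modalities side by side.
    return {"axtree": axtree, "screenshot_png": screenshot}

if __name__ == "__main__":
    obs = asyncio.run(capture_page_observation("https://example.com"))
    print(sorted(obs.keys()), len(obs["screenshot_png"]), "bytes")
```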
natural language to action sequence planning with goal decomposition
Converts high-level natural language instructions into executable multi-step action sequences using specialized planning agents (HighLevelPlanningAgent, ContextAwarePlanningAgent). The system decomposes complex goals into sub-tasks, reasons about dependencies, and generates structured action plans that can be executed by function-calling agents. Planning agents leverage VLM reasoning to understand task semantics and generate contextually appropriate action sequences.
Unique: Implements both stateless (HighLevelPlanningAgent) and memory-integrated (ContextAwarePlanningAgent) planning variants through a factory pattern, allowing developers to choose between fresh planning and adaptive planning that learns from workflow history
vs alternatives: Provides explicit goal decomposition and plan generation (vs. reactive agents that decide actions step-by-step), enabling better long-horizon reasoning and the ability to preview/validate plans before execution
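A minimal sketch of the factory idea under stated assumptions: the class names below mirror the planners described above, but their constructors, the `Plan` type, and the `make_planner` helper are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    goal: str
    steps: list[str]

class HighLevelPlanningAgent:
    """Stateless planner: each call plans from scratch."""
    def plan(self, goal: str) -> Plan:
        # In the real system a VLM decomposes the goal; stubbed here.
        return Plan(goal=goal, steps=[f"decomposed step for: {goal}"])

class ContextAwarePlanningAgent(HighLevelPlanningAgent):
    """Memory-integrated planner: conditions on workflow history."""
    def __init__(self, memory: list[Plan] | None = None):
        self.memory = memory or []
    def plan(self, goal: str) -> Plan:
        plan = super().plan(goal)
        self.memory.append(plan)  # adaptive: retains what it planned
        return plan

def make_planner(kind: str) -> HighLevelPlanningAgent:
    # Factory pattern: callers pick fresh vs. adaptive planning by name.
    return {"high_level": HighLevelPlanningAgent,
            "context_aware": ContextAwarePlanningAgent}[kind]()
```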
vision-language model integration with multi-provider support
Integrates multiple Vision-Language Model providers (OpenAI GPT-4V, Anthropic Claude, etc.) through a unified interface, handling model-specific API differences, function-calling schemas, and response formats. The system abstracts away provider-specific details, allowing agents to work with different VLMs without code changes. Configuration specifies the model provider and parameters, enabling easy model switching.
Unique: Abstracts VLM provider differences through a unified interface, enabling agents to work with OpenAI, Anthropic, and other providers without code changes, with automatic handling of function-calling schema variations
vs alternatives: More flexible than provider-locked agents (which require rewriting for model changes), and more maintainable than custom provider adapters (which duplicate logic)
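A hedged sketch of the unified interface, assuming the official openai and anthropic Python SDKs; the `VLMClient` protocol, `make_vlm` factory, and default model names are illustrative choices, not this project's API.

```python
from typing import Protocol

class VLMClient(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIClient:
    def __init__(self, model: str = "gpt-4o"):
        from openai import OpenAI          # imported lazily: only one SDK needed
        self.client, self.model = OpenAI(), model
    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model, messages=[{"role": "user", "content": prompt}])
        return resp.choices[0].message.content

class AnthropicClient:
    def __init__(self, model: str = "claude-3-5-sonnet-latest"):
        import anthropic
        self.client, self.model = anthropic.Anthropic(), model
    def complete(self, prompt: str) -> str:
        resp = self.client.messages.create(
            model=self.model, max_tokens=1024,
            messages=[{"role": "user", "content": prompt}])
        return resp.content[0].text

def make_vlm(provider: str) -> VLMClient:
    # Config-driven switching: agents only ever see VLMClient.complete().
    return {"openai": OpenAIClient, "anthropic": AnthropicClient}[provider]()
```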
browser automation with playwright/selenium integration
Provides browser automation capabilities through integration with Playwright and Selenium, handling browser lifecycle management, page navigation, element interaction, and screenshot capture. The system abstracts browser-specific details, providing a unified interface for common automation tasks (click, type, scroll, submit). Async support enables non-blocking browser operations for concurrent agent execution.
Unique: Provides async-first browser automation integration with support for both Playwright and Selenium, enabling concurrent agent execution without blocking on browser operations
vs alternatives: More flexible than single-library approaches (supports both Playwright and Selenium), and more efficient than synchronous automation (which blocks on browser operations)
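One way the driver abstraction could look; `BrowserDriver` and the two adapter classes are hypothetical, and the Selenium adapter illustrates the async story by pushing blocking calls onto a worker thread.

```python
import asyncio
from abc import ABC, abstractmethod

class BrowserDriver(ABC):
    @abstractmethod
    async def goto(self, url: str) -> None: ...
    @abstractmethod
    async def click(self, selector: str) -> None: ...

class PlaywrightDriver(BrowserDriver):
    def __init__(self, page):              # a playwright.async_api.Page
        self.page = page
    async def goto(self, url): await self.page.goto(url)
    async def click(self, selector): await self.page.click(selector)

class SeleniumDriver(BrowserDriver):
    def __init__(self, driver):            # a selenium webdriver instance
        self.driver = driver
    async def goto(self, url):
        # Selenium is synchronous; run it off the event loop so other
        # agents keep executing concurrently.
        await asyncio.to_thread(self.driver.get, url)
    async def click(self, selector):
        from selenium.webdriver.common.by import By
        await asyncio.to_thread(
            lambda: self.driver.find_element(By.CSS_SELECTOR, selector).click())
```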
workflow execution tracing and state management
Tracks agent execution state throughout a workflow, capturing action sequences, page states, and outcomes at each step. The system maintains a complete execution trace that can be replayed, analyzed, or used for debugging. State management handles browser session state, agent memory state, and workflow progress, enabling recovery from failures and analysis of execution paths.
Unique: Provides integrated execution tracing and state management that captures complete workflow traces including page states, action sequences, and outcomes, enabling replay and analysis
vs alternatives: More comprehensive than simple logging (which lacks state snapshots), and more actionable than raw browser logs (which lack semantic structure)
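A minimal sketch of a replayable trace, assuming a simple JSON-serializable record per step; the `TraceStep`/`WorkflowTrace` shapes are illustrative, not the project's actual state model.

```python
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class TraceStep:
    action: str                 # e.g. "click(#submit)"
    page_url: str               # page state captured before the action
    outcome: str                # "ok" or an error message
    timestamp: float = field(default_factory=time.time)

@dataclass
class WorkflowTrace:
    goal: str
    steps: list[TraceStep] = field(default_factory=list)

    def record(self, action: str, page_url: str, outcome: str) -> None:
        self.steps.append(TraceStep(action, page_url, outcome))

    def dump(self, path: str) -> None:
        # Persisted traces can be replayed, analyzed, or used for recovery.
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

trace = WorkflowTrace(goal="log in and download report")
trace.record("click(#login)", "https://example.com", "ok")
trace.dump("trace.json")
```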
function-based web action execution with structured tool registry
Executes web interactions through a structured function-calling interface where web actions (click, type, scroll, submit) are registered as callable functions with defined schemas. The FunctionCallingAgent maps VLM-generated function calls to actual browser automation commands, handling parameter validation and execution. This approach decouples action planning from execution, enabling tool reuse across different agent types and VLM providers.
Unique: Implements a schema-based tool registry pattern where web actions are defined as callable functions with explicit parameter schemas, enabling VLM-agnostic action execution and provider-independent agent logic
vs alternatives: More structured and auditable than prompt-based action selection (which uses natural language descriptions), and more flexible than hard-coded action logic (which requires code changes for new actions)
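A sketch of the registry pattern under stated assumptions: the `tool` decorator, `TOOLS` table, and `execute` dispatcher are hypothetical, though the JSON-schema-style parameter definitions match common function-calling conventions.

```python
from typing import Callable

TOOLS: dict[str, tuple[dict, Callable]] = {}

def tool(name: str, parameters: dict):
    """Register a web action under a name with an explicit parameter schema."""
    def wrap(fn: Callable):
        TOOLS[name] = (parameters, fn)
        return fn
    return wrap

@tool("click", {"type": "object",
                "properties": {"selector": {"type": "string"}},
                "required": ["selector"]})
def click(selector: str) -> str:
    return f"clicked {selector}"   # real impl would drive the browser

def execute(call: dict) -> str:
    """Map a VLM-generated function call onto the registered implementation."""
    schema, fn = TOOLS[call["name"]]
    args = call["arguments"]
    missing = set(schema.get("required", [])) - args.keys()
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    return fn(**args)

print(execute({"name": "click", "arguments": {"selector": "#submit"}}))
```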
agent workflow memory system with past execution integration
Stores and retrieves past web automation workflows to inform future agent decisions through the Agent Workflow Memory (AWM) module. The system captures execution traces (states, actions, outcomes) and enables context-aware agents to retrieve relevant past workflows, learning from successes and failures. This memory integration allows agents to adapt behavior based on historical context without explicit fine-tuning.
Unique: Implements Agent Workflow Memory (AWM) as a first-class system component integrated into the agent factory, allowing any agent type to access and learn from past executions through a unified memory interface
vs alternatives: Provides explicit workflow-level memory (vs. token-level context windows in standard LLMs), enabling agents to learn patterns across multiple executions and adapt behavior without retraining
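A toy sketch of workflow retrieval; token-overlap scoring is a deliberately simple stand-in for whatever similarity measure the AWM module actually uses, and the `Workflow`/`WorkflowMemory` names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Workflow:
    goal: str
    actions: list[str]
    succeeded: bool

class WorkflowMemory:
    def __init__(self):
        self.store: list[Workflow] = []

    def add(self, wf: Workflow) -> None:
        self.store.append(wf)

    def retrieve(self, goal: str, k: int = 3) -> list[Workflow]:
        # Rank past workflows by token overlap with the new goal so the
        # agent can condition its plan on relevant successes and failures.
        query = set(goal.lower().split())
        scored = sorted(
            self.store,
            key=lambda wf: len(query & set(wf.goal.lower().split())),
            reverse=True)
        return scored[:k]

memory = WorkflowMemory()
memory.add(Workflow("book a flight to Boston", ["click(search)"], True))
print(memory.retrieve("book a flight to Denver"))
```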
set-of-mark visual element interaction with prompt-based control
Implements the Set-of-Mark (SoM) technique, in which interactive elements on a webpage are visually marked with unique identifiers (numbers, labels) in a modified screenshot; agents then interact with elements by referencing these marks in natural language prompts. The PromptAgent uses this visual marking approach to ground agent instructions in specific UI elements without requiring precise coordinate calculations or DOM element selection.
Unique: Implements Set-of-Mark (SoM) as a first-class agent type (PromptAgent) with integrated screenshot marking pipeline, providing a research-backed alternative to coordinate-based or selector-based element targeting
vs alternatives: More robust than coordinate-based clicking (which breaks on layout changes) and more interpretable than DOM selector-based approaches (which require technical knowledge to debug)
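A small Pillow sketch of the marking step; the bounding boxes are hard-coded stand-ins for positions that would come from the axtree/DOM extraction described earlier, and `mark_elements` is a hypothetical helper.

```python
from PIL import Image, ImageDraw

def mark_elements(img: Image.Image, boxes) -> Image.Image:
    """Overlay numbered marks on interactive-element bounding boxes."""
    draw = ImageDraw.Draw(img)
    for i, (x0, y0, x1, y1) in enumerate(boxes, start=1):
        draw.rectangle((x0, y0, x1, y1), outline="red", width=2)
        draw.text((x0 + 3, y0 + 3), str(i), fill="red")  # the mark the agent cites
    return img

# Stand-in for a real screenshot; the agent then grounds instructions in
# the marks, e.g. "click element [2]", rather than coordinates or selectors.
shot = Image.new("RGB", (200, 120), "white")
marked = mark_elements(shot, [(10, 10, 90, 40), (10, 60, 90, 90)])
marked.save("screenshot_marked.png")
```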