lazy expression construction with symbolic dataframe operations
Builds an abstract syntax tree (AST) of dataframe operations without executing them, using Ibis's core expression system (ibis/expr/operations and ibis/expr/types) to represent table selections, projections, filters, and aggregations as composable symbolic objects. Expressions are constructed through method chaining on Table and Column types, with each operation creating a new immutable expression node that references its inputs, enabling deferred execution and optimization before compilation to backend-specific code.
Unique: Uses a strongly-typed expression system with deferred execution via immutable AST nodes (ibis/expr/operations/core.py) rather than eager evaluation like pandas, enabling backend-agnostic query representation and multi-pass optimization before compilation. The expression graph is traversed and validated at construction time using pattern matching (ibis/common/patterns.py) to catch type errors early.
vs alternatives: Unlike pandas (eager evaluation) or SQLAlchemy (SQL-first), Ibis provides a Python-native lazy API with full type safety and backend portability, allowing the same code to run on DuckDB for 1GB datasets and BigQuery for 1TB datasets without modification.
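The immutable-node pattern described above can be sketched in a few lines. This is a hypothetical miniature, not Ibis's actual classes: each chained method returns a new frozen node that references its input, and nothing "executes" until the tree is walked.

```python
from dataclasses import dataclass
from typing import Tuple

class Expr:
    """Base type: chaining builds new nodes instead of mutating state."""
    def filter(self, predicate: str) -> "Expr":
        return Filter(self, predicate)

    def select(self, *columns: str) -> "Expr":
        return Projection(self, columns)

@dataclass(frozen=True)
class UnboundTable(Expr):
    name: str

@dataclass(frozen=True)
class Filter(Expr):
    parent: Expr
    predicate: str

@dataclass(frozen=True)
class Projection(Expr):
    parent: Expr
    columns: Tuple[str, ...]

def describe(node: Expr) -> str:
    """Walk the AST bottom-up — the deferred traversal a compiler would do."""
    if isinstance(node, UnboundTable):
        return node.name
    if isinstance(node, Filter):
        return f"Filter({describe(node.parent)}, {node.predicate})"
    if isinstance(node, Projection):
        return f"Project({describe(node.parent)}, {list(node.columns)})"
    raise TypeError(node)

# Method chaining builds the tree; no data is touched.
expr = UnboundTable("events").filter("amount > 10").select("user_id")
print(describe(expr))  # Project(Filter(events, amount > 10), ['user_id'])
```

Because the nodes are frozen dataclasses, structurally identical expressions compare equal, which is what makes later deduplication and rewriting tractable.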
multi-backend sql compilation with sqlglot integration
Translates Ibis expression trees into backend-specific SQL dialects using SQLGlot as the compilation engine (see ibis/backends/sql/compiler.py). Each backend registers its own SQL compiler that walks the expression DAG, applies backend-specific type mappings (via the ibis/expr/operations type registry), and generates optimized SQL strings. The compilation layer handles dialect differences (e.g., window function syntax, string functions, date arithmetic) transparently, allowing a single Ibis expression to produce valid SQL for DuckDB, PostgreSQL, BigQuery, Snowflake, Spark SQL, and 15+ other engines.
Unique: Delegates SQL generation to SQLGlot rather than implementing dialect handling directly, enabling support for 20+ backends without maintaining separate code paths. Each backend registers a custom compiler class (e.g., DuckDBCompiler, BigQueryCompiler) that inherits from a base SQL compiler and overrides dialect-specific methods, creating a plugin architecture for new backends.
vs alternatives: More comprehensive dialect support than hand-rolled SQL generation (e.g., in Polars or Dask), and more portable than SQLAlchemy which requires explicit dialect specification and doesn't provide a unified dataframe API across backends.
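The compiler-per-backend plugin pattern can be illustrated without SQLGlot itself. The class names and methods below are illustrative, not Ibis's real compiler API: a base compiler emits ANSI-flavoured SQL, and a subclass overrides only the methods where its dialect differs (the dialect facts — `||` vs `CONCAT`, `LIMIT` vs `TOP` — are real).

```python
class SQLCompiler:
    """Base compiler: ANSI-style defaults."""
    dialect = "ansi"

    def concat(self, *parts: str) -> str:
        return " || ".join(parts)          # ANSI string concatenation

    def limit(self, sql: str, n: int) -> str:
        return f"{sql} LIMIT {n}"

class MSSQLCompiler(SQLCompiler):
    """Override only what SQL Server does differently."""
    dialect = "mssql"

    def concat(self, *parts: str) -> str:
        return "CONCAT(" + ", ".join(parts) + ")"   # SQL Server lacks ||

    def limit(self, sql: str, n: int) -> str:
        # SQL Server uses SELECT TOP n instead of LIMIT
        return sql.replace("SELECT", f"SELECT TOP {n}", 1)

# Registering compilers by dialect name gives the plugin architecture.
REGISTRY = {c.dialect: c for c in (SQLCompiler, MSSQLCompiler)}

def compile_concat(dialect: str, *parts: str) -> str:
    return REGISTRY[dialect]().concat(*parts)

print(compile_concat("ansi", "first", "last"))   # first || last
print(compile_concat("mssql", "first", "last"))  # CONCAT(first, last)
```

Adding a new backend means registering one more subclass; the shared walk over the expression DAG is untouched.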
expression optimization and rewriting via e-graph
Applies automated query optimization using an e-graph (equality graph) data structure (ibis/common/egraph.py) that represents equivalent expressions and enables rewriting rules to find more efficient query plans. The optimizer applies algebraic transformations (e.g., pushing filters down before joins, eliminating redundant projections, constant folding) to the expression DAG before compilation. Rewriting rules are defined declaratively and applied iteratively until a fixed point is reached, with cost-based selection to choose the most efficient equivalent expression.
Unique: Uses an e-graph (equality graph) data structure to represent multiple equivalent expressions and apply rewriting rules systematically, rather than ad-hoc pattern matching. This enables discovering optimization opportunities that require multiple rewriting steps and provides a principled way to add new optimization rules without affecting existing ones. The e-graph approach is inspired by egg and its equality-saturation technique, enabling a broad, systematic search over equivalent query plans.
vs alternatives: More principled than hand-coded rewrite rules (e.g., Polars's optimizer passes) and more comprehensive than backend-specific optimizers (which only see the final SQL). Comparable to Calcite's cost-based optimizer but with a simpler, more maintainable implementation.
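A toy version conveys the core idea — far simpler than ibis/common/egraph.py, and purely illustrative: equivalent expressions share an e-class, a rewrite rule merges classes, and a cost function picks the cheapest representative at extraction time.

```python
from itertools import count

class EGraph:
    """Minimal e-graph sketch: e-classes are sets of equivalent expressions."""
    def __init__(self):
        self.classes = {}          # class id -> set of equivalent exprs
        self._ids = count()

    def add(self, expr):
        for cid, members in self.classes.items():
            if expr in members:
                return cid
        cid = next(self._ids)
        self.classes[cid] = {expr}
        return cid

    def merge(self, a, b):
        # A rewrite rule proved the two classes equal: union them.
        self.classes[a] |= self.classes.pop(b)
        return a

    def extract(self, cid, cost):
        # Cost-based selection among all known-equivalent forms.
        return min(self.classes[cid], key=cost)

eg = EGraph()
c1 = eg.add(("mul", "x", 2))
c2 = eg.add(("shl", "x", 1))
root = eg.merge(c1, c2)           # rule: x * 2 == x << 1

cost = lambda e: {"mul": 2, "shl": 1}[e[0]]   # shifts cheaper than multiplies
print(eg.extract(root, cost))     # ('shl', 'x', 1)
```

The real structure adds hashconsing and congruence closure so that merging two classes also merges expressions built on top of them; the extraction step above is the "cost-based selection" the description refers to.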
comprehensive backend test suite with docker environment
Provides a unified testing framework (ibis/backends/tests/) that runs the same test suite against all 20+ backends using Docker containers for database services. Tests are organized by feature (SQL, aggregation, window functions, etc.) and automatically skipped for backends that don't support a feature. The test infrastructure includes base test classes (e.g., BackendTestBase) that define test methods, and backend-specific test classes that override methods for backend-specific behavior. Docker Compose is used to spin up database services (PostgreSQL, MySQL, BigQuery emulator, etc.) for testing.
Unique: Implements a shared test suite (ibis/backends/tests/) that runs against all backends, with automatic skipping for unsupported features via markers (e.g., @pytest.mark.notimpl). This ensures consistent behavior across backends and makes it easy to add new backends by inheriting from base test classes. Docker Compose is used to manage database services, enabling reproducible testing across different environments.
vs alternatives: More comprehensive than backend-specific tests (which only test one backend) and more maintainable than duplicating tests for each backend. Comparable to Polars' test infrastructure but with support for 20+ backends instead of just one.
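The inherit-and-declare-gaps pattern can be sketched without pytest. The class names below are illustrative, not the real ibis/backends/tests classes: one base class defines every test once, and a backend subclass only declares which features it lacks.

```python
class BackendSuite:
    """Base suite: every backend runs the same tests."""
    unsupported: set = set()

    def run(self):
        results = {}
        for name in dir(self):
            if not name.startswith("test_"):
                continue
            feature = name.removeprefix("test_")
            results[name] = (
                "skipped" if feature in self.unsupported
                else getattr(self, name)()
            )
        return results

    def test_aggregation(self):
        return "passed"

    def test_window_functions(self):
        return "passed"

class SQLiteSuite(BackendSuite):
    # Pretend this backend has no window-function support: one declaration,
    # and the shared test is skipped instead of duplicated or deleted.
    unsupported = {"window_functions"}

print(BackendSuite().run())
print(SQLiteSuite().run())
```

In the real suite the skip decision is made by pytest markers rather than a runner loop, but the maintenance property is the same: a new backend inherits the full suite and opts out per feature.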
streaming and incremental data loading from multiple sources
Supports loading data incrementally from files (Parquet, CSV, JSON), databases, and cloud storage (S3, GCS, Azure Blob) using backend-specific readers that stream data without loading it all into memory. Ibis abstracts the loading logic behind a unified API (ibis.read_parquet(), ibis.read_csv(), ibis.read_json()) that returns a Table expression, while database tables are reached through backend connections. For backends that support it (e.g., DuckDB), data is read lazily and only materialized when .execute() is called. For backends that don't support lazy reading, data is materialized locally and pushed to the backend.
Unique: Provides a unified API for loading data from multiple sources (files, databases, cloud storage) that abstracts backend-specific reader implementations. For backends that support lazy reading (e.g., DuckDB), data is read lazily and only materialized when needed. For backends that don't, data is materialized locally and pushed to the backend, enabling a consistent API across all backends.
vs alternatives: More unified than using backend-specific readers directly (e.g., google.cloud.bigquery.load_table_from_uri) and more flexible than pandas (which loads all data into memory). Comparable to Polars but with multi-backend support and better cloud storage integration.
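The single-entry-point-over-pluggable-readers shape can be sketched with a registry. This is a conceptual dispatcher, not Ibis's implementation (the real readers delegate to the configured backend and stay lazy where the engine allows it):

```python
import csv
import io
import json

READERS = {}

def reader(fmt):
    """Decorator registering a format-specific reader plugin."""
    def register(fn):
        READERS[fmt] = fn
        return fn
    return register

@reader("csv")
def read_csv(text):
    return list(csv.DictReader(io.StringIO(text)))

@reader("json")
def read_json(text):
    return json.loads(text)

def read(fmt, payload):
    """One unified entry point; the format decides which plugin runs."""
    if fmt not in READERS:
        raise ValueError(f"no reader registered for {fmt!r}")
    return READERS[fmt](payload)

rows = read("csv", "user_id,amount\n1,10\n2,20")
print(rows)  # [{'user_id': '1', 'amount': '10'}, {'user_id': '2', 'amount': '20'}]
```

The design choice mirrored here is that callers never import a backend-specific reader; registering a new format (or backend) extends the table without touching call sites.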
deferred computation with expression caching and reuse
Caches expression objects to enable efficient reuse of intermediate results without recomputation. When the same expression is used multiple times in a query (e.g., a filtered table used in two different aggregations), Ibis detects the duplication and generates SQL that computes the expression once and reuses it (via CTEs or subqueries). The caching system uses expression hashing and structural equality to detect duplicates, and is transparent to the user — no explicit caching API is required.
Unique: Automatically detects repeated subexpressions in the expression DAG using structural hashing and generates SQL with CTEs or subqueries to avoid recomputation. This is done transparently without requiring explicit caching API calls, making it easy for users to benefit from caching without changing their code.
vs alternatives: More automatic than explicit caching calls (e.g., Spark's .cache()/.persist()) and avoids recomputing the same subexpression multiple times. Few dataframe libraries deduplicate repeated subexpressions transparently in the generated query.
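The detection step — find subtrees that occur more than once, then emit each shared one a single time as a CTE — can be shown on tuples standing in for expression nodes. This is a conceptual sketch; Ibis performs the equivalent walk over its operation nodes during compilation.

```python
def count_subtrees(node, seen=None):
    """Count how often each structurally identical subtree appears."""
    if seen is None:
        seen = {}
    if isinstance(node, tuple):
        seen[node] = seen.get(node, 0) + 1
        for child in node[1:]:
            count_subtrees(child, seen)
    return seen

# The same filtered table feeds two different aggregations.
filtered = ("filter", ("table", "events"), "amount > 10")
query = ("union",
         ("agg", filtered, "sum(amount)"),
         ("agg", filtered, "count(*)"))

shared = [n for n, c in count_subtrees(query).items() if c > 1]
# Both the filter and its base table repeat; the compiler would emit the
# filter once, e.g. WITH t0 AS (SELECT * FROM events WHERE amount > 10),
# and reference t0 from both aggregations.
print(len(shared))  # 2
```

Structural hashing is what makes this cheap: because nodes are immutable and hashable, "the same subexpression" is literally dictionary-key equality rather than a deep comparison at every step.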
string operations and text manipulation with backend-specific functions
Implements string operations (substring, length, upper, lower, replace, split, concatenate, regex matching) that compile to backend-specific string function syntax. The system abstracts over differences in string function names and behavior across backends (e.g., SUBSTR vs SUBSTRING, regex syntax differences), providing a unified API for text manipulation.
Unique: Abstracts string function syntax across backends by providing a unified API (e.g., t.column.upper(), t.column.substr(0, 5)) that compiles to backend-specific functions. The system handles backends with limited string function support by providing fallback implementations.
vs alternatives: More portable than raw SQL string functions because the same code works across backends; more readable than pandas string methods because it integrates with the fluent API.
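One concrete dialect difference makes the abstraction tangible: a 0-based unified substr must compile to each dialect's 1-based function, with diverging names and syntax. The dispatch table below is illustrative (the dialect spellings — PostgreSQL's SUBSTRING ... FROM ... FOR, SQLite's and BigQuery's SUBSTR — are real):

```python
SUBSTRING = {
    "postgres": lambda col, start, n: f"SUBSTRING({col} FROM {start + 1} FOR {n})",
    "sqlite":   lambda col, start, n: f"SUBSTR({col}, {start + 1}, {n})",
    "bigquery": lambda col, start, n: f"SUBSTR({col}, {start + 1}, {n})",
}

def substr_sql(dialect: str, col: str, start: int, n: int) -> str:
    """Unified 0-based API compiled to each dialect's 1-based function."""
    return SUBSTRING[dialect](col, start, n)

print(substr_sql("postgres", "name", 0, 5))  # SUBSTRING(name FROM 1 FOR 5)
print(substr_sql("sqlite", "name", 0, 5))    # SUBSTR(name, 1, 5)
```

The user-facing call stays identical; only the emitted SQL varies, which is the portability claim in miniature.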
array and struct operations with nested data type support
Supports operations on complex types (arrays, structs) including element access, flattening, unnesting, and aggregation of nested data. The system compiles array/struct operations to backend-specific syntax (UNNEST in SQL, explode in Spark, LATERAL FLATTEN in Snowflake), handling differences in nested data support across backends.
Unique: Provides a unified API for nested data operations across backends with vastly different nested type support, using backend-specific compilation (UNNEST, explode, LATERAL FLATTEN) to handle differences. The system includes type inference for nested structures.
vs alternatives: More portable than raw SQL nested operations because the same code works across backends; more flexible than pandas (which lacks native nested type support) because it works with modern data warehouses' native nested types.
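The same one-logical-op, many-dialects story applies to unnesting. The template table below is a deliberate simplification of the compiler, but the dialect keywords are real (UNNEST in PostgreSQL/standard SQL, EXPLODE in Spark SQL, LATERAL FLATTEN in Snowflake):

```python
UNNEST_TEMPLATES = {
    "postgres":  "SELECT UNNEST({col}) FROM {table}",
    "spark":     "SELECT EXPLODE({col}) FROM {table}",
    "snowflake": "SELECT f.value FROM {table}, LATERAL FLATTEN(input => {col}) f",
}

def unnest_sql(dialect: str, table: str, col: str) -> str:
    """Compile one logical array-unnest to the target dialect's syntax."""
    return UNNEST_TEMPLATES[dialect].format(table=table, col=col)

print(unnest_sql("spark", "events", "tags"))
# SELECT EXPLODE(tags) FROM events
```

Note that the differences here are structural, not just lexical — Snowflake's FLATTEN is a lateral table function, not a scalar call — which is why a per-backend compiler, not simple string substitution, handles this in practice.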
+8 more capabilities