A schema adapter allows Calcite to read a particular kind of data, presenting the data as tables within a schema.
- Cassandra adapter (calcite-cassandra)
- CSV adapter (example/csv)
- Druid adapter (calcite-druid)
- Elasticsearch adapter (calcite-elasticsearch2 and calcite-elasticsearch5)
- File adapter (calcite-file)
- Geode adapter (calcite-geode)
- JDBC adapter (part of calcite-core)
- MongoDB adapter (calcite-mongodb)
- OS adapter (calcite-os)
- Pig adapter (calcite-pig)
- Solr cloud adapter (solr-sql)
- Spark adapter (calcite-spark)
- Splunk adapter (calcite-splunk)
- Eclipse Memory Analyzer (MAT) adapter (mat-calcite-plugin)
Other language interfaces
- Piglet (calcite-piglet)
Many projects and products use Apache Calcite for SQL parsing, query optimization, data virtualization/federation, and materialized view rewrite. Some of them are listed on the “powered by Calcite” page.
Drivers
A driver allows you to connect to Calcite from your application.
The JDBC driver is powered by Avatica. Connections can be local or remote (JSON over HTTP or Protobuf over HTTP).
The basic form of the JDBC connect string is

jdbc:calcite:property=value;property2=value2

where property and property2 are properties as described below.
(Connect strings are compliant with OLE DB Connect String syntax,
as implemented by Avatica’s ConnectStringParser.)
JDBC connect string parameters
| Property | Description |
|:-------- |:----------- |
| approximateDecimal | Whether approximate results from aggregate functions on DECIMAL types are acceptable. |
| approximateDistinctCount | Whether approximate results from COUNT(DISTINCT ...) aggregate functions are acceptable. |
| approximateTopN | Whether approximate results from “Top N” queries (ORDER BY aggFun() DESC LIMIT n) are acceptable. |
| caseSensitive | Whether identifiers are matched case-sensitively. If not specified, value from lex is used. |
| conformance | SQL conformance level. Values: DEFAULT (the default, similar to PRAGMATIC_2003), LENIENT, MYSQL_5, ORACLE_10, ORACLE_12, PRAGMATIC_99, PRAGMATIC_2003, STRICT_92, STRICT_99, STRICT_2003, SQL_SERVER_2008. |
| createMaterializations | Whether Calcite should create materializations. Default false. |
| defaultNullCollation | How NULL values should be sorted if neither NULLS FIRST nor NULLS LAST are specified in a query. The default, HIGH, sorts NULL values the same as Oracle. |
| druidFetch | How many rows the Druid adapter should fetch at a time when executing SELECT queries. |
| forceDecorrelate | Whether the planner should try de-correlating as much as possible. Default true. |
| fun | Collection of built-in functions and operators. Valid values are “standard” (the default), “oracle” and “spatial”, and may be combined using commas, for example “oracle,spatial”. |
| lex | Lexical policy. Values are ORACLE (default), MYSQL, MYSQL_ANSI, SQL_SERVER, JAVA. |
| materializationsEnabled | Whether Calcite should use materializations. Default false. |
| model | URI of the JSON model file. |
| parserFactory | Parser factory. The name of a class that implements interface SqlParserImplFactory and has a public default constructor or an INSTANCE constant. |
| quoting | How identifiers are quoted. Values are DOUBLE_QUOTE, BACK_QUOTE, BRACKET. If not specified, value from lex is used. |
| quotedCasing | How identifiers are stored if they are quoted. Values are UNCHANGED, TO_UPPER, TO_LOWER. If not specified, value from lex is used. |
| schema | Name of initial schema. |
| schemaFactory | Schema factory. The name of a class that implements interface SchemaFactory and has a public default constructor or an INSTANCE constant. Ignored if model is specified. |
| schemaType | Schema type. Value must be “MAP” (the default), “JDBC”, or “CUSTOM” (implicit if schemaFactory is specified). Ignored if model is specified. |
| spark | Specifies whether Spark should be used as the engine for processing that cannot be pushed to the source system. If false (the default), Calcite generates code that implements the Enumerable interface. |
| timeZone | Time zone, for example “gmt-3”. Default is the JVM’s time zone. |
| typeSystem | Type system. The name of a class that implements interface RelDataTypeSystem and has a public default constructor or an INSTANCE constant. |
| unquotedCasing | How identifiers are stored if they are not quoted. Values are UNCHANGED, TO_UPPER, TO_LOWER. If not specified, value from lex is used. |
To make a connection to a single schema based on a built-in schema type, you don’t need to specify a model. For example,
jdbc:calcite:schemaType=JDBC; schema.jdbcUser=SCOTT; schema.jdbcPassword=TIGER; schema.jdbcUrl=jdbc:hsqldb:res:foodmart
creates a connection with a schema mapped via the JDBC schema adapter to the foodmart database.
Similarly, you can connect to a single schema based on a user-defined schema adapter. For example,
jdbc:calcite:schemaFactory=org.apache.calcite.adapter.cassandra.CassandraSchemaFactory; schema.host=localhost; schema.keyspace=twissandra
makes a connection to the Cassandra adapter, equivalent to writing the following model file:
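A reconstruction of that model file, derived from the operands in the connect string above (assuming the schema takes its name from the keyspace):

```json
{
  "version": "1.0",
  "defaultSchema": "twissandra",
  "schemas": [
    {
      "name": "twissandra",
      "type": "custom",
      "factory": "org.apache.calcite.adapter.cassandra.CassandraSchemaFactory",
      "operand": {
        "host": "localhost",
        "keyspace": "twissandra"
      }
    }
  ]
}
```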
Note how each key in the operand section appears with a schema. prefix in the connect string.
Server
Calcite’s core module (calcite-core) supports SQL queries (SELECT) and DML operations (INSERT, UPDATE, DELETE, MERGE) but does not support DDL operations such as CREATE SCHEMA or CREATE TABLE. As we shall see, DDL complicates the state model of the repository and makes the parser more difficult to extend, so we left DDL out of core.
The server module (calcite-server) adds DDL support to Calcite. It extends the SQL parser, using the same mechanism used by sub-projects, adding some DDL commands:

- CREATE and DROP SCHEMA
- CREATE and DROP FOREIGN SCHEMA
- CREATE and DROP TABLE (including CREATE TABLE ... AS SELECT)
- CREATE and DROP MATERIALIZED VIEW
- CREATE and DROP VIEW

Commands are described in the SQL reference.
To enable, include calcite-server.jar in your class path, and add parserFactory=org.apache.calcite.sql.parser.ddl.SqlDdlParserImpl#FACTORY to the JDBC connect string (see connect string property parserFactory).
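For example, here is a sketch of the kind of session you could run from the SQL command line once DDL is enabled (the table and view names are invented for illustration):

```sql
-- Assumes calcite-server.jar is on the class path and the connect string
-- includes the DDL parser factory property described above.
CREATE TABLE t (i INTEGER, j VARCHAR(10));
INSERT INTO t VALUES (1, 'a'), (2, 'bc');
CREATE MATERIALIZED VIEW tv AS SELECT * FROM t;
SELECT COUNT(*) FROM tv;
DROP MATERIALIZED VIEW tv;
DROP TABLE t;
```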
The calcite-server module is optional.
One of its goals is to showcase Calcite’s capabilities
(for example materialized views, foreign tables and generated columns) using
concise examples that you can try from the SQL command line.
All of the capabilities used by
calcite-server are available via APIs in calcite-core.
If you are the author of a sub-project, it is unlikely that your syntax
extensions match those in
calcite-server, so we recommend that you add your
SQL syntax extensions by extending the core parser;
if you want DDL commands, you may be able to copy-paste from calcite-server
into your project.
At present, the repository is not persisted. As you execute DDL commands, you are modifying an in-memory repository by adding and removing objects reachable from a root Schema. All commands within the same SQL session will see those objects. You can create the same objects in a future session by executing the same script of SQL commands.
Calcite could also act as a data virtualization or federation server:
Calcite manages data in multiple foreign schemas, but to a client the data
all seems to be in the same place. Calcite chooses where processing should
occur, and whether to create copies of data for efficiency.
The calcite-server module is a step towards that goal; an
industry-strength solution would require further work on packaging (to make Calcite
runnable as a service), repository persistence, authorization and security.
Extensibility
There are many other APIs that allow you to extend Calcite’s capabilities.
In this section, we briefly describe those APIs, to give you an idea what is possible. To fully use these APIs you will need to read other documentation such as the javadoc for the interfaces, and possibly seek out the tests that we have written for them.
Functions and operators
There are several ways to add operators or functions to Calcite. We’ll describe the simplest (and least powerful) first.
User-defined functions are the simplest (but least powerful). They are straightforward to write (you just write a Java class and register it in your schema) but do not offer much flexibility in the number and type of arguments, resolving overloaded functions, or deriving the return type.
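For instance, here is a minimal sketch of such a class (the name MyPlusFunction and the SQL alias are invented; registration in a schema, for example via ScalarFunctionImpl, is not shown):

```java
// Hypothetical user-defined function. Once registered in a schema under a
// name such as MY_PLUS, it could be called from SQL as MY_PLUS(2, 3).
public class MyPlusFunction {
  // The method itself is plain Java with no Calcite dependency; Calcite
  // resolves the call by method name and argument types.
  public int eval(int a, int b) {
    return a + b;
  }
}
```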
If you want that flexibility, you probably need to write a user-defined operator (see interface SqlOperator).
If your operator does not adhere to standard SQL function syntax, “f(arg1, arg2, ...)”, then you need to extend the parser.
There are many good examples in the tests: class UdfTest tests user-defined functions and user-defined aggregate functions.
User-defined aggregate functions are similar to user-defined functions, but each function has several corresponding Java methods, one for each stage in the life-cycle of an aggregate:
- init creates an accumulator;
- add adds one row’s value to an accumulator;
- merge combines two accumulators into one;
- result finalizes an accumulator and converts it to a result.
For example, the methods (in pseudo-code) for
SUM(int) are as follows:
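One way to render those methods as self-contained Java (the class name SumAgg and the Accumulator type are illustrative, not a Calcite API):

```java
// Life-cycle methods for a SUM(int) aggregate, as described above.
public class SumAgg {
  /** Immutable accumulator holding the running sum. */
  public static class Accumulator {
    final int sum;
    Accumulator(int sum) { this.sum = sum; }
  }

  /** init: creates an accumulator. */
  public static Accumulator init() {
    return new Accumulator(0);
  }

  /** add: adds one row's value to an accumulator. */
  public static Accumulator add(Accumulator a, int x) {
    return new Accumulator(a.sum + x);
  }

  /** merge: combines two accumulators into one. */
  public static Accumulator merge(Accumulator a, Accumulator b) {
    return new Accumulator(a.sum + b.sum);
  }

  /** result: finalizes an accumulator and converts it to a result. */
  public static int result(Accumulator a) {
    return a.sum;
  }
}
```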
Here is the sequence of calls to compute the sum of two rows with column values 4 and 7:
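In pseudo-code, with the accumulator state shown in comments:

```
a = init()        // a = {sum: 0}
a = add(a, 4)     // a = {sum: 4}
a = add(a, 7)     // a = {sum: 11}
return result(a)  // returns 11
```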
Window functions
A window function is similar to an aggregate function but it is applied to a set
of rows gathered by an
OVER clause rather than by a
GROUP BY clause.
Every aggregate function can be used as a window function, but there are some
key differences. The rows seen by a window function may be ordered, and
window functions that rely upon order (RANK, for example) cannot be used as
aggregate functions.
Another difference is that windows are non-disjoint: a particular row can appear in more than one window. For example, 9:37 appears in both the 9:00-10:00 hour and also the 9:15-9:45 hour.
Window functions are computed incrementally: when the clock ticks from 10:14 to 10:15, two rows might enter the window and three rows leave. For this, window functions have an extra life-cycle operation:

- remove removes a value from an accumulator.

Its pseudo-code for SUM(int) would be:
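A sketch consistent with the init/add/merge/result methods described above:

```
Accumulator remove(Accumulator a, int x) {
  return new Accumulator(a.sum - x);
}
```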
Here is the sequence of calls to compute the moving sum, over the previous 2 rows, of 4 rows with values 4, 7, 2 and 3:
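One possible call sequence, assuming the window holds at most the two most recent rows (emitted values shown in comments):

```
a = init()         // a = {sum: 0}
a = add(a, 4)      // window [4]
emit result(a)     // emits 4
a = add(a, 7)      // window [4, 7]
emit result(a)     // emits 11
a = add(a, 2)
a = remove(a, 4)   // window [7, 2]
emit result(a)     // emits 9
a = add(a, 3)
a = remove(a, 7)   // window [2, 3]
emit result(a)     // emits 5
```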
Grouped window functions
Grouped window functions are functions that operate in the GROUP BY clause
to gather together records into sets. The built-in grouped window functions
are TUMBLE, HOP and SESSION.
You can define additional functions by implementing interface SqlGroupedWindowFunction.
Table functions and table macros
User-defined table functions
are defined in a similar way to regular “scalar” user-defined functions,
but are used in the FROM clause of a query.
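For example, a query using a hypothetical table function Ramp registered in a schema s (both names are invented):

```sql
-- TABLE(...) is the standard SQL syntax for invoking a table function
-- in the FROM clause.
SELECT * FROM TABLE(s.Ramp(5));
```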
User-defined table macros use the same SQL syntax as table functions, but are defined differently. Rather than generating data, they generate a relational expression. Table macros are invoked during query preparation and the relational expression they produce can then be optimized. (Calcite’s implementation of views uses table macros.)
class TableFunctionTest tests table functions and contains several useful examples.
Extending the parser
Suppose you need to extend Calcite’s SQL grammar in a way that will be
compatible with future changes to the grammar. Making a copy of the grammar file
Parser.jj in your project would be foolish, because the grammar is edited
quite frequently. Fortunately, Parser.jj is actually an Apache FreeMarker
template that contains variables that can be substituted.
The parser in
calcite-core instantiates the template with default values of
the variables, typically empty, but you can override.
If your project would like a different parser, you can provide your own
config.fmpp and parserImpls.ftl files and therefore generate an extended parser.
Customizing SQL dialect accepted and generated
To control how SQL is generated for an external database (usually via the JDBC
adapter), use class SqlDialect.
The dialect also describes the engine’s capabilities, such as whether it
supports OFFSET and FETCH.
Defining a custom schema
To define a custom schema, you need to implement interface SchemaFactory.
During query preparation, Calcite will call this interface to find out what tables and sub-schemas your schema contains. When a table in your schema is referenced in a query, Calcite will ask your schema to create an instance of interface Table.
That table will be wrapped in a TableScan and will undergo the query optimization process.
A reflective schema (class ReflectiveSchema) is a way of wrapping a Java object so that it appears as a schema. Its collection-valued fields will appear as tables.
It is not a schema factory but an actual schema; you have to create the object and wrap it in the schema by calling APIs.
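For instance, wrapping an instance of a plain Java class like the following (names invented) in a ReflectiveSchema would expose its emps field as a table with columns empid and name:

```java
// A plain Java object; when wrapped by ReflectiveSchema, each
// collection-valued public field (here "emps") appears as a table,
// and each public field of the element type appears as a column.
public class HrSchema {
  public static class Employee {
    public final int empid;
    public final String name;
    public Employee(int empid, String name) {
      this.empid = empid;
      this.name = name;
    }
  }

  public final Employee[] emps = {
    new Employee(100, "Bill"),
    new Employee(200, "Eric"),
  };
}
```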
Defining a custom table
To define a custom table, you need to implement interface TableFactory. Whereas a schema factory produces a set of named tables, a table factory produces a single table when bound to a schema with a particular name (and optionally a set of extra operands).
If your table is to support DML operations (INSERT, UPDATE, DELETE, MERGE),
your implementation of interface Table must implement interface ModifiableTable.

If your table is to support streaming queries,
your implementation of interface Table must implement interface StreamableTable.
See class StreamTest for examples.
Pushing operations down to your table
If you want more control, you should write a planner rule. This will allow you to push down expressions, to make a cost-based decision about whether to push down processing, and push down more complex operations such as join, aggregation, and sort.
Type system
You can customize some aspects of the type system by implementing interface RelDataTypeSystem.
Relational operators
All relational operators implement interface RelNode and most extend class AbstractRelNode. The core operators (used by SqlToRelConverter and covering conventional relational algebra) are TableScan, TableModify, Values, Project, Filter, Aggregate, Join, Sort, Union, Intersect, Minus, Window and Match.
Each of these has a “pure” logical sub-class, LogicalProject
and so forth. Any given adapter will have counterparts for the operations that
its engine can implement efficiently; for example, the Cassandra adapter has
CassandraProject but there is no CassandraJoin.
You can define your own sub-class of
RelNode to add a new operator, or
an implementation of an existing operator in a particular engine.
To make an operator useful and powerful, you will need planner rules to combine it with existing operators. (And also provide metadata, see below). This being algebra, the effects are combinatorial: you write a few rules, but they combine to handle an exponential number of query patterns.
If possible, make your operator a sub-class of an existing operator; then you may be able to re-use or adapt its rules. Even better, if your operator is a logical operation that you can rewrite (again, via a planner rule) in terms of existing operators, you should do that. You will be able to re-use the rules, metadata and implementations of those operators with no extra work.
Planner rules
A planner rule (class RelOptRule) transforms a relational expression into an equivalent relational expression.
A planner engine has many planner rules registered and fires them to transform the input query into something more efficient. Planner rules are therefore central to the optimization process, but surprisingly each planner rule does not concern itself with cost. The planner engine is responsible for firing rules in a sequence that produces an optimal plan, but each individual rule concerns itself only with correctness.
Calling conventions
A calling convention is a protocol used by a particular data engine.
For example, the Cassandra engine has a collection of relational operators,
CassandraFilter and so forth, and these operators can be
connected to each other without the data having to be converted from one format
to another.
If data needs to be converted from one calling convention to another, Calcite uses a special sub-class of relational expression called a converter (see class Converter). But of course converting data has a runtime cost.
When planning a query that uses multiple engines, Calcite “colors” regions of the relational expression tree according to their calling convention. The planner pushes operations into data sources by firing rules. If the engine does not support a particular operation, the rule will not fire. Sometimes an operation can occur in more than one place, and ultimately the best plan is chosen according to cost.
A calling convention is a class that implements interface Convention, an auxiliary interface (for instance interface CassandraRel), and a set of sub-classes of class RelNode that implement that interface for the core relational operators (Project, Filter, Aggregate, and so forth).
Built-in SQL implementation
How does Calcite implement SQL, if an adapter does not implement all of the core relational operators?
The answer is a particular built-in calling convention, EnumerableConvention. Relational expressions of enumerable convention are implemented as “built-ins”: Calcite generates Java code, compiles it, and executes inside its own JVM. Enumerable convention is less efficient than, say, a distributed engine running over column-oriented data files, but it can implement all core relational operators and all built-in SQL functions and operators. If a data source cannot implement a relational operator, enumerable convention is a fall-back.
Statistics and cost
Calcite has a metadata system that allows you to define cost functions and statistics about relational operators, collectively referred to as metadata. Each kind of metadata has an interface with (usually) one method. For example, selectivity is defined by interface RelMdSelectivity and the method getSelectivity(RelNode rel, RexNode predicate).
There are many built-in kinds of metadata, including collation, column origins, column uniqueness, distinct row count, distribution, explain visibility, expression lineage, max row count, node types, parallelism, percentage original rows, population size, predicates, row count, selectivity, size, table references, and unique keys; you can also define your own.
You can then supply a metadata provider that computes that kind of metadata
for particular sub-classes of RelNode. Metadata providers can handle built-in
and extended metadata types, and built-in and extended operators.

While preparing a query Calcite combines all of the applicable metadata
providers and maintains a cache so that a given piece of metadata (for example
the selectivity of the condition x > 10 in a particular Filter)
is computed only once.