Adapters
Schema adapters
A schema adapter allows Calcite to read a particular kind of data, presenting the data as tables within a schema.
- Arrow adapter (calcite-arrow)
- Cassandra adapter (calcite-cassandra)
- CSV adapter (example/csv)
- Druid adapter (calcite-druid)
- Elasticsearch adapter (calcite-elasticsearch)
- File adapter (calcite-file)
- Geode adapter (calcite-geode)
- InnoDB adapter (calcite-innodb)
- JDBC adapter (part of calcite-core)
- MongoDB adapter (calcite-mongodb)
- OS adapter (calcite-os)
- Pig adapter (calcite-pig)
- Redis adapter (calcite-redis)
- Solr cloud adapter (solr-sql)
- Spark adapter (calcite-spark)
- Splunk adapter (calcite-splunk)
- Eclipse Memory Analyzer (MAT) adapter (mat-calcite-plugin)
- Apache Kafka adapter
Other language interfaces
- Piglet (calcite-piglet) runs queries in a subset of Pig Latin
Engines
Many projects and products use Apache Calcite for SQL parsing, query optimization, data virtualization/federation, and materialized view rewrite. Some of them are listed on the “powered by Calcite” page.
Drivers
A driver allows you to connect to Calcite from your application.
The JDBC driver is powered by Avatica. Connections can be local or remote (JSON over HTTP or Protobuf over HTTP).
The basic form of the JDBC connect string is
jdbc:calcite:property=value;property2=value2
where property, property2 are properties as described below.
(Connect strings are compliant with OLE DB Connect String syntax,
as implemented by Avatica’s
ConnectStringParser.)
JDBC connect string parameters
Property | Description |
---|---|
approximateDecimal | Whether approximate results from aggregate functions on DECIMAL types are acceptable. |
approximateDistinctCount | Whether approximate results from COUNT(DISTINCT ...) aggregate functions are acceptable. |
approximateTopN | Whether approximate results from “Top N” queries (ORDER BY aggFun() DESC LIMIT n) are acceptable. |
caseSensitive | Whether identifiers are matched case-sensitively. If not specified, value from lex is used. |
conformance | SQL conformance level. Values: DEFAULT (the default, similar to PRAGMATIC_2003), LENIENT, MYSQL_5, ORACLE_10, ORACLE_12, PRAGMATIC_99, PRAGMATIC_2003, STRICT_92, STRICT_99, STRICT_2003, SQL_SERVER_2008. |
createMaterializations | Whether Calcite should create materializations. Default false. |
defaultNullCollation | How NULL values should be sorted if neither NULLS FIRST nor NULLS LAST are specified in a query. The default, HIGH, sorts NULL values the same as Oracle. |
druidFetch | How many rows the Druid adapter should fetch at a time when executing SELECT queries. |
forceDecorrelate | Whether the planner should try de-correlating as much as possible. Default true. |
fun | Collection of built-in functions and operators. Valid values are “standard” (the default), “oracle”, “spatial”, and may be combined using commas, for example “oracle,spatial”. |
lex | Lexical policy. Values are BIG_QUERY, JAVA, MYSQL, MYSQL_ANSI, ORACLE (default), SQL_SERVER. |
materializationsEnabled | Whether Calcite should use materializations. Default false. |
model | URI of the JSON/YAML model file or inline like inline:{...} for JSON and inline:... for YAML. |
parserFactory | Parser factory. The name of a class that implements interface SqlParserImplFactory and has a public default constructor or an INSTANCE constant. |
quoting | How identifiers are quoted. Values are DOUBLE_QUOTE, BACK_TICK, BACK_TICK_BACKSLASH, BRACKET. If not specified, value from lex is used. |
quotedCasing | How identifiers are stored if they are quoted. Values are UNCHANGED, TO_UPPER, TO_LOWER. If not specified, value from lex is used. |
schema | Name of initial schema. |
schemaFactory | Schema factory. The name of a class that implements interface SchemaFactory and has a public default constructor or an INSTANCE constant. Ignored if model is specified. |
schemaType | Schema type. Value must be “MAP” (the default), “JDBC”, or “CUSTOM” (implicit if schemaFactory is specified). Ignored if model is specified. |
spark | Specifies whether Spark should be used as the engine for processing that cannot be pushed to the source system. If false (the default), Calcite generates code that implements the Enumerable interface. |
timeZone | Time zone, for example “gmt-3”. Default is the JVM’s time zone. |
typeSystem | Type system. The name of a class that implements interface RelDataTypeSystem and has a public default constructor or an INSTANCE constant. |
unquotedCasing | How identifiers are stored if they are not quoted. Values are UNCHANGED, TO_UPPER, TO_LOWER. If not specified, value from lex is used. |
typeCoercion | Whether to apply implicit type coercion when there is a type mismatch during SQL node validation. Default true. |
To make a connection to a single schema based on a built-in schema type, you don’t need to specify a model. For example,
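a connect string such as the sketch below (the JDBC user, password and URL are placeholders for your own database)

```
jdbc:calcite:schemaType=JDBC; schema.jdbcUser=SCOTT; schema.jdbcPassword=TIGER; schema.jdbcUrl=jdbc:hsqldb:res:foodmart
```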
creates a connection with a schema mapped via the JDBC schema adapter to the foodmart database.
Similarly, you can connect to a single schema based on a user-defined schema adapter. For example,
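a connect string along the lines of this sketch (the host and keyspace values are placeholders)

```
jdbc:calcite:schemaFactory=org.apache.calcite.adapter.cassandra.CassandraSchemaFactory; schema.host=localhost; schema.keyspace=twissandra
```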
makes a connection to the Cassandra adapter, equivalent to writing the following model file:
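(A sketch of such a model; the schema name and operand values mirror the placeholder connect string above.)

```json
{
  "version": "1.0",
  "defaultSchema": "twissandra",
  "schemas": [
    {
      "name": "twissandra",
      "type": "custom",
      "factory": "org.apache.calcite.adapter.cassandra.CassandraSchemaFactory",
      "operand": {
        "host": "localhost",
        "keyspace": "twissandra"
      }
    }
  ]
}
```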
Note how each key in the operand section appears in the connect string with a schema. prefix.
Server
Calcite’s core module (calcite-core) supports SQL queries (SELECT) and DML operations (INSERT, UPDATE, DELETE, MERGE) but does not support DDL operations such as CREATE SCHEMA or CREATE TABLE.
As we shall see, DDL complicates the state model of the repository and makes
the parser more difficult to extend, so we left DDL out of the core.
The server module (calcite-server) adds DDL support to Calcite.
It extends the SQL parser, using the same mechanism used by sub-projects, adding some DDL commands:
- CREATE and DROP SCHEMA
- CREATE and DROP FOREIGN SCHEMA
- CREATE and DROP TABLE (including CREATE TABLE ... AS SELECT)
- CREATE and DROP MATERIALIZED VIEW
- CREATE and DROP VIEW
- CREATE and DROP FUNCTION
- CREATE and DROP TYPE
Commands are described in the SQL reference.
To enable, include calcite-server.jar in your class path, and add
parserFactory=org.apache.calcite.sql.parser.ddl.SqlDdlParserImpl#FACTORY
to the JDBC connect string (see the connect string property parserFactory).
Here is an example using the sqlline shell.
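(A sketch of such a session; the table, data and queries are illustrative, and output is omitted.)

```
$ ./sqlline
sqlline> !connect jdbc:calcite:parserFactory=org.apache.calcite.sql.parser.ddl.SqlDdlParserImpl#FACTORY admin admin
sqlline> CREATE TABLE t (i INTEGER, j VARCHAR(10));
sqlline> INSERT INTO t VALUES (1, 'a'), (2, 'bc');
sqlline> CREATE VIEW v AS SELECT * FROM t WHERE i > 1;
sqlline> SELECT count(*) FROM v;
sqlline> !quit
```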
The calcite-server
module is optional.
One of its goals is to showcase Calcite’s capabilities
(for example materialized views, foreign tables and generated columns) using
concise examples that you can try from the SQL command line.
All of the capabilities used by calcite-server are available via APIs in calcite-core.
If you are the author of a sub-project, it is unlikely that your syntax extensions match those in calcite-server, so we recommend that you add your SQL syntax extensions by extending the core parser; if you want DDL commands, you may be able to copy-paste from calcite-server into your project.
At present, the repository is not persisted. As you execute DDL commands, you
are modifying an in-memory repository by adding and removing objects
reachable from a root Schema.
All commands within the same SQL session will see those objects.
You can create the same objects in a future session by executing the same
script of SQL commands.
Calcite could also act as a data virtualization or federation server:
Calcite manages data in multiple foreign schemas, but to a client the data
all seems to be in the same place. Calcite chooses where processing should
occur, and whether to create copies of data for efficiency.
The calcite-server module is a step towards that goal; an industry-strength solution would require further work on packaging (to make Calcite runnable as a service), repository persistence, authorization and security.
Extensibility
There are many other APIs that allow you to extend Calcite’s capabilities.
In this section, we briefly describe those APIs, to give you an idea of what is possible. To fully use these APIs you will need to read other documentation such as the javadoc for the interfaces, and possibly seek out the tests that we have written for them.
Functions and operators
There are several ways to add operators or functions to Calcite. We’ll describe the simplest (and least powerful) first: user-defined functions. They are straightforward to write (you just write a Java class and register it in your schema) but do not offer much flexibility in the number and type of arguments, in resolving overloaded functions, or in deriving the return type.
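As an illustration only (not the project’s own example), here is a minimal sketch of a user-defined function and its registration; the class, schema and function names are hypothetical:

```java
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.schema.impl.AbstractSchema;
import org.apache.calcite.schema.impl.ScalarFunctionImpl;
import org.apache.calcite.tools.Frameworks;

public class UdfExample {
  /** The function itself: Calcite invokes the public "eval" method. */
  public static class PlusOne {
    public int eval(int x) {
      return x + 1;
    }
  }

  public static void main(String[] args) {
    // Create a root schema and a sub-schema to hold the function.
    SchemaPlus rootSchema = Frameworks.createRootSchema(true);
    SchemaPlus adhoc = rootSchema.add("adhoc", new AbstractSchema());

    // Register it; queries can now call "adhoc"."PLUS_ONE"(x).
    adhoc.add("PLUS_ONE", ScalarFunctionImpl.create(PlusOne.class, "eval"));
  }
}
```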
If you want that flexibility, you probably need to write a user-defined operator (see interface SqlOperator).
If your operator does not adhere to standard SQL function syntax, “f(arg1, arg2, ...)”, then you need to extend the parser.
There are many good examples in the tests: class UdfTest tests user-defined functions and user-defined aggregate functions.
Aggregate functions
User-defined aggregate functions are similar to user-defined functions, but each function has several corresponding Java methods, one for each stage in the life-cycle of an aggregate:
- init creates an accumulator;
- add adds one row’s value to an accumulator;
- merge combines two accumulators into one;
- result finalizes an accumulator and converts it to a result.

For example, the methods (in pseudo-code) for SUM(int) are as follows:
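(A sketch in Java-like pseudo-code; the Accumulator type is illustrative.)

```
struct Accumulator {
  final int sum;
}
Accumulator init() {
  return new Accumulator(0);
}
Accumulator add(Accumulator a, int x) {
  return new Accumulator(a.sum + x);
}
Accumulator merge(Accumulator a, Accumulator a2) {
  return new Accumulator(a.sum + a2.sum);
}
int result(Accumulator a) {
  return a.sum;
}
```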
Here is the sequence of calls to compute the sum of two rows with column values 4 and 7:
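(Again a pseudo-code sketch:)

```
a = init()       # a = {0}
a = add(a, 4)    # a = {4}
a = add(a, 7)    # a = {11}
return result(a) # returns 11
```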
Window functions
A window function is similar to an aggregate function but it is applied to a set of rows gathered by an OVER clause rather than by a GROUP BY clause.
Every aggregate function can be used as a window function, but there are some key differences. The rows seen by a window function may be ordered, and window functions that rely upon order (RANK, for example) cannot be used as aggregate functions.
Another difference is that windows are non-disjoint: a particular row can appear in more than one window. For example, 9:37 appears in both the 9:00-10:00 hour and also the 9:15-9:45 hour.
Window functions are computed incrementally: when the clock ticks from 10:14 to 10:15, two rows might enter the window and three rows leave. For this, window functions have an extra life-cycle operation:
- remove removes a value from an accumulator.
Its pseudo-code for SUM(int) would be:
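(A pseudo-code sketch, continuing the hypothetical Accumulator above:)

```
Accumulator remove(Accumulator a, int x) {
  return new Accumulator(a.sum - x);
}
```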
Here is the sequence of calls to compute the moving sum, over the previous 2 rows, of 4 rows with values 4, 7, 2 and 3:
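(A pseudo-code sketch:)

```
a = init()       # a = {0}
a = add(a, 4)    # a = {4}
emit result(a)   # emits 4
a = add(a, 7)    # a = {11}
emit result(a)   # emits 11
a = remove(a, 4) # a = {7}
a = add(a, 2)    # a = {9}
emit result(a)   # emits 9
a = remove(a, 7) # a = {2}
a = add(a, 3)    # a = {5}
emit result(a)   # emits 5
```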
Grouped window functions
Grouped window functions are functions that operate in the GROUP BY clause to gather together records into sets. The built-in grouped window functions are HOP, TUMBLE and SESSION.
You can define additional functions by implementing interface SqlGroupedWindowFunction.
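For instance, the built-in TUMBLE function can be used along these lines (a sketch, assuming a streaming Orders table with rowtime and productId columns):

```sql
SELECT STREAM TUMBLE_END(rowtime, INTERVAL '1' HOUR) AS rowtime,
  productId,
  COUNT(*) AS c
FROM Orders
GROUP BY TUMBLE(rowtime, INTERVAL '1' HOUR), productId;
```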
Table functions and table macros
User-defined table functions are defined in a similar way to regular “scalar” user-defined functions, but are used in the FROM clause of a query. The following query uses a table function called Ramp:
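(A sketch of such a query, assuming Ramp has been registered in the current schema:)

```sql
SELECT * FROM TABLE(Ramp(3, 4));
```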
User-defined table macros use the same SQL syntax as table functions, but are defined differently. Rather than generating data, they generate a relational expression. Table macros are invoked during query preparation and the relational expression they produce can then be optimized. (Calcite’s implementation of views uses table macros.)
class TableFunctionTest
tests table functions and contains several useful examples.
Extending the parser
Suppose you need to extend Calcite’s SQL grammar in a way that will be
compatible with future changes to the grammar. Making a copy of the grammar file
Parser.jj
in your project would be foolish, because the grammar is edited
quite frequently.
Fortunately, Parser.jj
is actually an
Apache FreeMarker
template that contains variables that can be substituted.
The parser in calcite-core instantiates the template with default values of the variables, typically empty, but you can override them.
If your project would like a different parser, you can provide your
own config.fmpp
and parserImpls.ftl
files and therefore generate an
extended parser.
The calcite-server module, which was created in [CALCITE-707] and adds DDL statements such as CREATE TABLE, is an example that you could follow.
Also see class ExtensionSqlParserTest.
Customizing SQL dialect accepted and generated
To customize what SQL extensions the parser should accept, implement interface SqlConformance or use one of the built-in values in enum SqlConformanceEnum.
To control how SQL is generated for an external database (usually via the JDBC adapter), use class SqlDialect.
The dialect also describes the engine’s capabilities, such as whether it supports OFFSET and FETCH clauses.
Defining a custom schema
To define a custom schema, you need to implement interface SchemaFactory.
During query preparation, Calcite will call this interface to find out what tables and sub-schemas your schema contains. When a table in your schema is referenced in a query, Calcite will ask your schema to create an instance of interface Table.
That table will be wrapped in a TableScan and will undergo the query optimization process.
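As an illustration (not the project’s own example), here is a minimal sketch of a schema factory; the class name and the "directory" operand are hypothetical:

```java
import java.util.Map;
import org.apache.calcite.schema.Schema;
import org.apache.calcite.schema.SchemaFactory;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.schema.impl.AbstractSchema;

/** A hypothetical schema factory; "directory" is an operand from the model file. */
public class MySchemaFactory implements SchemaFactory {
  @Override
  public Schema create(SchemaPlus parentSchema, String name,
      Map<String, Object> operand) {
    final String directory = (String) operand.get("directory");
    // Return a schema; override getTableMap() to expose the tables you
    // discover (for example, files found under 'directory').
    return new AbstractSchema() {
      // getTableMap() would go here.
    };
  }
}
```

Such a factory can then be named in a model file’s factory attribute or in the schemaFactory connect string property described above.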
Reflective schema
A reflective schema (class ReflectiveSchema) is a way of wrapping a Java object so that it appears as a schema. Its collection-valued fields will appear as tables.
It is not a schema factory but an actual schema; you have to create the object and wrap it in the schema by calling APIs.
See class ReflectiveSchemaTest.
Defining a custom table
To define a custom table, you need to implement interface TableFactory.
Whereas a schema factory produces a set of named tables, a table factory produces a single table when bound to a schema with a particular name (and optionally a set of extra operands).
Modifying data
If your table is to support DML operations (INSERT, UPDATE, DELETE, MERGE), your implementation of interface Table must implement interface ModifiableTable.
Streaming
If your table is to support streaming queries, your implementation of interface Table must implement interface StreamableTable.
See class StreamTest for examples.
Pushing operations down to your table
If you wish to push processing down to your custom table’s source system, consider implementing either interface FilterableTable or interface ProjectableFilterableTable.
If you want more control, you should write a planner rule. This will allow you to push down expressions, to make a cost-based decision about whether to push down processing, and push down more complex operations such as join, aggregation, and sort.
Type system
You can customize some aspects of the type system by implementing interface RelDataTypeSystem.
Relational operators
All relational operators implement interface RelNode and most extend class AbstractRelNode.
The core operators (used by SqlToRelConverter and covering conventional relational algebra) are TableScan, TableModify, Values, Project, Filter, Aggregate, Join, Sort, Union, Intersect, Minus, Window and Match.
Each of these has a “pure” logical sub-class, LogicalProject and so forth. Any given adapter will have counterparts for the operations that its engine can implement efficiently; for example, the Cassandra adapter has CassandraProject but there is no CassandraJoin.
You can define your own sub-class of RelNode
to add a new operator, or
an implementation of an existing operator in a particular engine.
To make an operator useful and powerful, you will need planner rules to combine it with existing operators. (And also provide metadata, see below). This being algebra, the effects are combinatorial: you write a few rules, but they combine to handle an exponential number of query patterns.
If possible, make your operator a sub-class of an existing operator; then you may be able to re-use or adapt its rules. Even better, if your operator is a logical operation that you can rewrite (again, via a planner rule) in terms of existing operators, you should do that. You will be able to re-use the rules, metadata and implementations of those operators with no extra work.
Planner rule
A planner rule (class RelOptRule) transforms a relational expression into an equivalent relational expression.
A planner engine has many planner rules registered and fires them to transform the input query into something more efficient. Planner rules are therefore central to the optimization process, but perhaps surprisingly each planner rule does not concern itself with cost. The planner engine is responsible for firing rules in a sequence that produces an optimal plan, but each individual rule concerns itself only with correctness.
Calcite has two built-in planner engines:
class VolcanoPlanner
uses dynamic programming and is good for exhaustive search, whereas
class HepPlanner
fires a sequence of rules in a more fixed order.
Calling conventions
A calling convention is a protocol used by a particular data engine.
For example, the Cassandra engine has a collection of relational operators, CassandraProject, CassandraFilter and so forth, and these operators can be connected to each other without the data having to be converted from one format to another.
If data needs to be converted from one calling convention to another, Calcite
uses a special sub-class of relational expression called a converter (see interface Converter).
But of course converting data has a runtime cost.
When planning a query that uses multiple engines, Calcite “colors” regions of the relational expression tree according to their calling convention. The planner pushes operations into data sources by firing rules. If the engine does not support a particular operation, the rule will not fire. Sometimes an operation can occur in more than one place, and ultimately the best plan is chosen according to cost.
A calling convention is a class that implements interface Convention, an auxiliary interface (for instance interface CassandraRel), and a set of sub-classes of class RelNode that implement that interface for the core relational operators (Project, Filter, Aggregate, and so forth).
Built-in SQL implementation
How does Calcite implement SQL, if an adapter does not implement all of the core relational operators?
The answer is a particular built-in calling convention,
EnumerableConvention
.
Relational expressions of enumerable convention are implemented as “built-ins”:
Calcite generates Java code, compiles it, and executes inside its own JVM.
Enumerable convention is less efficient than, say, a distributed engine
running over column-oriented data files, but it can implement all core
relational operators and all built-in SQL functions and operators. If a data
source cannot implement a relational operator, enumerable convention is
a fall-back.
Statistics and cost
Calcite has a metadata system that allows you to define cost functions and
statistics about relational operators, collectively referred to as metadata.
Each kind of metadata has an interface with (usually) one method.
For example, selectivity is defined by class RelMdSelectivity and the method getSelectivity(RelNode rel, RexNode predicate).
There are many built-in kinds of metadata, including collation, column origins, column uniqueness, distinct row count, distribution, explain visibility, expression lineage, max row count, node types, parallelism, percentage original rows, population size, predicates, row count, selectivity, size, table references, and unique keys; you can also define your own.
You can then supply a metadata provider that computes that kind of metadata for particular sub-classes of RelNode. Metadata providers can handle built-in and extended metadata types, and built-in and extended RelNode types.
While preparing a query Calcite combines all of the applicable metadata providers and maintains a cache so that a given piece of metadata (for example the selectivity of the condition x > 10 in a particular Filter operator) is computed only once.