

The documentation of the SQL Client commands can be accessed by typing the HELP command. The available line-editing key-strokes include:

- Kill the line to the left of the cursor
- Kill the line to the right of the cursor
- History search backward (behaves the same as up-line-from-history when the input is empty)
- History search forward (behaves the same as down-line-from-history when the input is empty)
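For instance, HELP can be issued directly at the client prompt (a sketch only; the exact output varies by Flink version):

```sql
-- At the Flink SQL Client prompt:
HELP;   -- prints the list of supported commands
QUIT;   -- leaves the client
```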

A full list of available key-strokes can be found in the SQL Client Key-Strokes section (Linux, Windows (WSL)).
This documentation is for an unreleased version of Apache Flink. We recommend you use the latest stable version.

Flink's Table & SQL API makes it possible to work with queries written in the SQL language, but these queries need to be embedded within a table program that is written in either Java or Scala. Moreover, these programs need to be packaged with a build tool before being submitted to a cluster. This more or less limits the usage of Flink to Java/Scala programmers.

The SQL Client aims to provide an easy way of writing, debugging, and submitting table programs to a Flink cluster without a single line of Java or Scala code. The SQL Client CLI allows for retrieving and visualizing real-time results from the running distributed application on the command line.

This section describes how to set up and run your first Flink SQL program from the command line. The SQL Client is bundled in the regular Flink distribution and thus runnable out-of-the-box. It requires only a running Flink cluster where table programs can be executed. For more information about setting up a Flink cluster, see the Cluster & Deployment part. If you simply want to try out the SQL Client, you can also start a local cluster with one worker.

The SET command allows you to tune the job execution and the SQL Client behaviour.

After a query is defined, it can be submitted to the cluster as a long-running, detached Flink job. The configuration section explains how to declare table sources for reading data, how to declare table sinks for writing data, and how to configure other table program properties. See SQL Client Configuration below for more details.
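The workflow above can be sketched as a short session. This is a sketch only: the cluster start script and the SET option key are taken from a standard Flink distribution and may differ between versions, and the `Orders`/`OrdersSink` tables are hypothetical examples built on Flink's `datagen` and `blackhole` testing connectors:

```sql
-- From the shell, start a local cluster with one worker and launch the
-- client (assumed script names from the Flink distribution):
--   ./bin/start-cluster.sh
--   ./bin/sql-client.sh

-- Tune the client behaviour with SET, e.g. the result display mode
-- (option key assumed; running SET without arguments lists all options):
SET 'sql-client.execution.result-mode' = 'tableau';

-- Declare a table source for reading data (hypothetical table, using the
-- built-in 'datagen' testing connector, which generates random rows):
CREATE TABLE Orders (
    order_id BIGINT,
    price    DOUBLE
) WITH (
    'connector' = 'datagen'
);

-- Declare a table sink for writing data (hypothetical table, using the
-- built-in 'blackhole' connector, which discards every row it receives):
CREATE TABLE OrdersSink (
    order_id BIGINT,
    price    DOUBLE
) WITH (
    'connector' = 'blackhole'
);

-- An INSERT INTO query is submitted to the cluster as a long-running,
-- detached Flink job:
INSERT INTO OrdersSink SELECT order_id, price FROM Orders;
```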

