SQL for ClickHouse DBA


May 12, 2020

My name is Meo and I’ve been a Database Administrator (DBA) for the last 35 years of my life. After earning a degree at the University of Turin (Italy) many years ago, I started working with relational databases, and I’m still enjoying it! At times I have worked as a programmer, and later as an analyst, but my favourite job is being a DBA. I’ve worked with Oracle, PostgreSQL, MySQL, and now with ClickHouse at XeniaLab.

A DBA uses particular SQL statements to extract the most interesting information from the databases they manage. This article describes some useful SQL commands a DBA must know to manage a ClickHouse database: checking sessions, locks, disk space usage, performance, and replication. This is my personal “Run Book,” and I am happy to share it on the Altinity blog.

The document is deliberately short and practical with working examples of SQL statements.

Sessions

A first overview of database usage is the list of current and active connections. Sessions connected to the database are not visible from the operating system, since ClickHouse runs as a single process that serves each connection with its own thread. So DBAs need an SQL statement!

Here’s how to get the list of active sessions on the database:

SHOW PROCESSLIST;

SELECT query_id, user, address, elapsed, query
  FROM system.processes
 ORDER BY query_id;

The first command is the simplest and closely resembles its MySQL counterpart. The second SQL command lets you add extra WHERE conditions and sort the results. Both commands list currently running queries only, not all connections to the server.

To get the total number of connections instead, the query is:

SELECT * FROM system.metrics WHERE metric LIKE '%Connection';

This query returns the number of connections of each type, rather than a per-session list like the previous queries.

An experienced DBA will notice that the last two queries used the system database. It is the most important database for a DBA, since its tables expose a lot of ClickHouse internals.

Locks

Good news — there are NO locks in ClickHouse! Well, at least no user-visible locks.

ClickHouse performs INSERTs in an append-only fashion: the MergeTree engine collects incoming data into parts, which are merged later in the background. New data is visible to queries as soon as the INSERT completes. UPDATE and DELETE are not classic DML clauses in ClickHouse (more on that below). This particular behavior makes locks infrequent and short-lived.

Anyway, it is still possible to have long running queries. These can be checked with the SHOW PROCESSLIST command described above, and can be interrupted with a KILL command:

SHOW PROCESSLIST;
KILL QUERY WHERE query_id='query_id';

ClickHouse UPDATE and DELETE commands are DDL: they are ALTER TABLE statements called mutations, and they are executed asynchronously. It is not possible to roll back a mutation, but it is possible to interrupt one if it hangs. The commands to monitor and kill mutations are similar to the previous ones:

SELECT * FROM system.mutations;

KILL MUTATION WHERE mutation_id = '<mutation_id>';
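
For reference, here is what the mutation statements themselves look like. This is a minimal sketch; the table and column names are hypothetical:

-- Each ALTER below creates a mutation that rewrites the affected parts in the background.
ALTER TABLE mydb.events DELETE WHERE event_date < '2019-01-01';
ALTER TABLE mydb.events UPDATE status = 0 WHERE user_id = 42;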

Ethical note: before killing someone it is important to be sure you know what you are doing!

Disk Space Usage

ClickHouse disk space management is very important, since typical data warehouse databases are very large. Even though ClickHouse uses sophisticated data compression algorithms, it is important for a ClickHouse DBA to keep an eye on occupied and free disk space.

Let’s start with the used space in the database:

SELECT database, table, partition, name AS part_name, active, bytes_on_disk
  FROM system.parts
 ORDER BY database, table, partition, name;

The previous query is very detailed, since it reports every single part, but you can use GROUP BY to get summary values per database, table, or partition. For example, the following query displays disk usage per database:

SELECT database, sum(bytes_on_disk)
  FROM system.parts 
 GROUP BY database;

Note from Altinity: with multi-disk configurations, it is also useful to look at used space by a particular disk or volume: SELECT * FROM system.disks
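
For a more readable overview of free and total space per disk, a small variant helps (a sketch using the built-in formatReadableSize function):

SELECT name, path,
       formatReadableSize(free_space)  AS free,
       formatReadableSize(total_space) AS total
  FROM system.disks;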

During data ingestion, new parts are not immediately consolidated by the background merge process. To force a merge, the following command can be used:

OPTIMIZE TABLE table [PARTITION partition] [FINAL]
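
For example, assuming a hypothetical table mydb.events with PARTITION BY toYYYYMM(event_date), this forces a merge of all parts of one monthly partition:

-- FINAL merges the partition down to a single part, even if it was merged before.
OPTIMIZE TABLE mydb.events PARTITION 202004 FINAL;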

The compression factor is very important, and it differs for each column. Here is a query to check the space usage of each column:

SELECT database, table, column, any(type),
       sum(column_data_compressed_bytes) compressed,
       sum(column_data_uncompressed_bytes) uncompressed,
       uncompressed/compressed ratio,    -- compression ratio
       compressed/sum(rows) bpr,         -- compressed bytes per row
       sum(rows)
  FROM system.parts_columns
 WHERE active AND database <> 'system'
 GROUP BY database, table, column
 ORDER BY database, table, column;

Performance

Performance optimization is every DBA’s favorite job.

By default, ClickHouse does not trace the queries it executes, but it is possible to do so by setting the parameter log_queries = 1 at the session level or in the users.xml configuration file. I strongly suggest enabling it.
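
Enabling it for the current session is a one-liner:

-- Queries executed after this statement are recorded in system.query_log.
SET log_queries = 1;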

Here's how to discover the longest running queries:

SELECT user,
       client_hostname AS host,
       client_name AS client,
       formatDateTime(query_start_time, '%T') AS started,
       query_duration_ms / 1000 AS sec,
       round(memory_usage / 1048576) AS MEM_MB,
       result_rows AS RES_CNT,
       toDecimal32(result_bytes / 1048576, 6) AS RES_MB,
       read_rows AS R_CNT,
       round(read_bytes / 1048576) AS R_MB,
       written_rows AS W_CNT,
       round(written_bytes / 1048576) AS W_MB,
       query
  FROM system.query_log
 WHERE type = 2   -- 2 = QueryFinish: only queries that completed successfully
 ORDER BY query_duration_ms DESC
 LIMIT 10;

Once long running queries are identified, you can start optimizing them. The main techniques are: correct choice of ORDER BY columns, codecs and encodings. Please refer to the Altinity Webinar for more detail.

A particularly useful feature for performance is Materialized Views, which allow you to define alternative views of the data. A Materialized View can pre-aggregate the data or order it differently. Analyzing the heaviest and most recurrent queries helps you drive the design of materialized views, as in the sketch below.
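
Here is a minimal sketch, with hypothetical table and column names, of a materialized view that pre-aggregates a large fact table by day:

-- Assumes a source table mydb.hits(site_id UInt32, event_time DateTime, ...).
-- The view keeps daily hit counts up to date incrementally on every INSERT
-- (add POPULATE before AS SELECT to backfill existing data at creation time).
CREATE MATERIALIZED VIEW mydb.daily_hits_mv
ENGINE = SummingMergeTree
ORDER BY (site_id, day)
AS SELECT site_id,
          toDate(event_time) AS day,
          count() AS hits
     FROM mydb.hits
    GROUP BY site_id, day;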

If you use a recent ClickHouse version (20.3.x or later), it stores metrics in the system.metric_log table, which allows you to dig into OS-level data with SQL:

SELECT toStartOfMinute(event_time) AS time, 
       sum(ProfileEvent_UserTimeMicroseconds) AS user_time, 
       bar(user_time, 0, 60000000, 80) AS bar 
  FROM system.metric_log 
 WHERE event_date = today() 
 GROUP BY time ORDER BY time;

Note the ‘bar’ function — one of the hundreds of cool ClickHouse functions!

Replication

It is possible to configure multiple ClickHouse nodes connected in a cluster.

ClickHouse clusters allow both Replication, for HA (High Availability) and concurrent access, and Sharding, for distributed queries and high INSERT throughput. The configuration is very flexible, since replication and sharding can be defined per table, as the sketch below illustrates.
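
As an illustration, here is a minimal sketch of a replicated table definition. The names are hypothetical, and the {shard} and {replica} macros must be defined in each server's configuration:

-- The same DDL is executed on every replica; the ZooKeeper path identifies the table.
CREATE TABLE mydb.events_replicated
(
    event_time DateTime,
    user_id    UInt64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events', '{replica}')
ORDER BY (user_id, event_time);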

ClickHouse replication is asynchronous and multi-master (internally it relies on ZooKeeper for coordination). Its main goal is HA, but if something goes wrong… here’s how to check for various ‘bad’ things that may happen:

SELECT database, table, is_leader, total_replicas, active_replicas 
  FROM system.replicas 
 WHERE is_readonly 
    OR is_session_expired 
    OR future_parts > 20 
    OR parts_to_check > 10 
    OR queue_size > 20 
    OR inserts_in_queue > 10 
    OR log_max_index - log_pointer > 10 
    OR total_replicas < 2 
    OR active_replicas < total_replicas;

/etc

ClickHouse is very DBA friendly, and the system database offers everything a DBA may need. In this article I presented some useful SQL statements for ClickHouse DBAs. It is far from a complete list, but it should help you jump into the topic more quickly without having to search through the documentation. I keep these SQL statements in a file, so I can copy and paste them quickly when needed. Probably every DBA does the same and has their own set of queries. Feel free to use these examples for your personal ClickHouse DBA runbook!
