November 28, 2018
Altinity is happy to introduce the ‘Altinity stable’ marking for ClickHouse releases. Altinity stable releases undergo additional testing on our side: we carefully monitor community feedback for any issues, and we operate such releases in heavily loaded production systems.
Accordingly, Altinity announces the ClickHouse 18.14.15 release as Altinity stable.
Since the previous Altinity stable release, 1.1.54385, the new one brings major new features that may be a reason to upgrade:
- ALTER TABLE UPDATE/DELETE
- Decimal data type
- LowCardinality data type
- JOIN syntax enhancements towards SQL standard
- Predicate push-down for views and subselects (enable_optimize_predicate_expression setting)
- A number of function aliases that improve compatibility with the SQL standard
- Numerous Kafka engine enhancements
- Numerous performance optimizations
- Usability improvements, including auto-complete in clickhouse-client
- Improved profiling and introspection system tables
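A quick sketch of a few of the headline features; this is a minimal illustration, and the orders table and its columns are hypothetical:

```sql
-- Hypothetical table using the new Decimal and LowCardinality data types
CREATE TABLE orders
(
    id UInt64,
    country LowCardinality(String),  -- dictionary-encoded storage for low-cardinality values
    amount Decimal(18, 2)            -- exact decimal arithmetic
)
ENGINE = MergeTree ORDER BY id;

-- ALTER TABLE UPDATE/DELETE: mutations, executed asynchronously in the background
ALTER TABLE orders UPDATE amount = amount * 2 WHERE country = 'US';
ALTER TABLE orders DELETE WHERE amount = 0;
```

Note that UPDATE and DELETE are implemented as asynchronous mutations that rewrite data parts in the background, not as transactional point updates.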
For those who have already upgraded to earlier 18.14.x releases, we recommend upgrading to 18.14.15, since it fixes a few regressions, including a memory issue when processing array columns.
Full release notes can be found in the ClickHouse GitHub project.
We had no issues with upgrades. Old and new versions can run in the same cluster as long as new functionality is not used, so a smooth rolling upgrade is possible. Please check the list of backward incompatible changes and known issues below. Downgrading to the previous version is also possible with no issues.
If you are upgrading from a version older than 1.1.54385, please contact us to confirm.
Backward incompatible changes since 1.1.54385:
- In queries with JOIN, the asterisk expands to a list of columns in all tables, in compliance with the SQL standard. You can restore the old behavior by setting asterisk_left_columns_only=1 (18.12.13)
- Parameters for the Kafka engine were changed from Kafka(kafka_broker_list, kafka_topic_list, kafka_group_name, kafka_format[, kafka_schema, kafka_num_consumers]) to Kafka(kafka_broker_list, kafka_topic_list, kafka_group_name, kafka_format[, kafka_row_delimiter, kafka_schema, kafka_num_consumers]). If your tables use the kafka_schema or kafka_num_consumers parameters, you have to manually edit the metadata file path/metadata/database/table.sql and add the kafka_row_delimiter parameter with an empty string ('') value. Also see the more SQL-friendly format for the engine definition: https://clickhouse.yandex/docs/en/operations/table_engines/kafka/ (18.4.0)
- Converting a string containing the number zero to DateTime does not work. Example: SELECT toDateTime('0'). This is also why DateTime DEFAULT '0' does not work in tables, as well as 0 in dictionaries. Solution: replace 0 with '0000-00-00 00:00:00'. (18.1.0)
- Removed escaping in Vertical and Pretty* formats and deleted the VerticalRaw format. (1.1.54388)
- If servers with version 1.1.54388 (or newer) and servers with an older version are used simultaneously in a distributed query, and the query has the cast(x, 'Type') expression without the AS keyword and does not have the word cast in uppercase, an exception will be thrown with a message like Not found column cast(0, 'UInt8') in block. Solution: update the server on the entire cluster. (1.1.54388)
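A few of the changes above can be illustrated with short snippets; these are sketches only, and the table, broker, topic and group names are placeholders:

```sql
-- SELECT * with JOIN (18.12.13): restore the pre-18.12.13 expansion if needed
SET asterisk_left_columns_only = 1;

-- Kafka engine (18.4.0): the SQL-friendly SETTINGS form avoids editing
-- metadata files by hand
CREATE TABLE kafka_queue (message String)
ENGINE = Kafka
SETTINGS kafka_broker_list = 'localhost:9092',
         kafka_topic_list = 'topic1',
         kafka_group_name = 'group1',
         kafka_format = 'JSONEachRow',
         kafka_row_delimiter = '';

-- DateTime from string (18.1.0): use the full zero value instead of '0'
SELECT toDateTime('0000-00-00 00:00:00');  -- works, while toDateTime('0') throws

-- cast in mixed-version clusters (1.1.54388): forms that are safe on both versions
SELECT CAST(0, 'UInt8'), CAST(0 AS UInt8);
```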
Known issues:
- #3583 — range hashed dictionaries do not work correctly
- #2581 — select_sequential_consistency may incorrectly return an empty result in some cases
- enable_optimize_predicate_expression=1 may result in failed queries in some cases; it is turned off by default
- LowCardinality is still considered beta and may work suboptimally in some cases
Please contact us at firstname.lastname@example.org if you experience any issues with the upgrade.