Another month goes by, which means it's time for another release!
ClickHouse version 24.9 contains 23 new features, 14 performance optimisations, and 76 bug fixes.
In this release, we've got the APPEND clause for refreshable materialized views, new functions for the JSON data type, and the ability for automatic schema inference to return the Variant type.
New Contributors
As always, we send a special welcome to all the new contributors in 24.9! ClickHouse's popularity is, in large part, due to the efforts of the community that contributes. Seeing that community grow is always humbling.
Below are the names of the new contributors:
1on, Alexey Olshanskiy, Alexis Arnaud, Austin Bruch, Denis Hananein, Dergousov, Gabriel Mendes, Konstantin Smirnov, Kruglov Kirill, Marco Vilas Boas, Matt Woenker, Maxim Dergousov, Michal Tabaszewski, NikBarykin, Oleksandr, Pedro Ferreira, Rodrigo Garcia, Samuel Warfield, Sergey (Finn) Gnezdilov, Tuan Pham Anh, Zhigao Hong, baolin.hbl, gao chuan, haozelong, imddba, kruglov, leonkozlowski, m4xxx1m, marco-vb, megao, mmav, neoman36, okunev, siyuan
Hint: if you're curious how we generate this list… here.
You can also view the slides from the presentation.
APPEND for Refreshable Materialized Views
Contributed by Michael Kolupaev
Refreshable materialized views are an experimental type of materialized view that stores the result of a query for quick retrieval later. In this release, we've added APPEND functionality, which means that rather than replacing the whole view, new rows are appended to the end of the table on each refresh.
One use of this feature is to capture snapshots of values at a point in time. For example, let's imagine that we have an events table populated by a stream of messages from Redpanda, Kafka, or another streaming data platform.
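The post doesn't show the DDL for the events table, but a minimal sketch of a table with this shape might look like the following (the exact schema and ordering key are assumptions):

```sql
-- Hypothetical schema for the events table; in practice it would be
-- fed by a Kafka/Redpanda ingestion pipeline rather than manual inserts.
CREATE TABLE events (
    ts    DateTime,
    uuid  String,
    count UInt64
)
ENGINE = MergeTree
ORDER BY (uuid, ts);
```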
SELECT *
FROM events
LIMIT 10
┌──────────────────ts─┬─uuid─┬─count─┐
│ 2008-08-06 17:07:19 │ 0eb  │   547 │
│ 2008-08-06 17:07:19 │ 60b  │   148 │
│ 2008-08-06 17:07:19 │ 106  │   750 │
│ 2008-08-06 17:07:19 │ 398  │   875 │
│ 2008-08-06 17:07:19 │ ca0  │   318 │
│ 2008-08-06 17:07:19 │ 6ba  │   105 │
│ 2008-08-06 17:07:19 │ df9  │   422 │
│ 2008-08-06 17:07:19 │ a71  │   991 │
│ 2008-08-06 17:07:19 │ 3a2  │   495 │
│ 2008-08-06 17:07:19 │ 598  │   238 │
└─────────────────────┴──────┴───────┘
This dataset has 4096 values in the uuid column, and we can write the following query to find the ones with the highest total count:
SELECT
uuid,
sum(count) AS count
FROM events
GROUP BY ALL
ORDER BY count DESC
LIMIT 10
┌─uuid─┬───count─┐
│ c6f  │ 5676468 │
│ 951  │ 5669731 │
│ 6a6  │ 5664552 │
│ b06  │ 5662036 │
│ 0ca  │ 5658580 │
│ 2cd  │ 5657182 │
│ 32a  │ 5656475 │
│ ffe  │ 5653952 │
│ f33  │ 5653783 │
│ c5b  │ 5649936 │
└──────┴─────────┘
Let's say we want to capture the count for each uuid every 10 seconds and store it in a new table called events_snapshot. The schema of events_snapshot would look like this:
CREATE TABLE events_snapshot (
ts DateTime32,
uuid String,
count UInt64
)
ENGINE = MergeTree
ORDER BY uuid;
We could then create a refreshable materialized view to populate this table:
SET allow_experimental_refreshable_materialized_view=1;
CREATE MATERIALIZED VIEW events_snapshot_mv
REFRESH EVERY 10 SECOND APPEND TO events_snapshot
AS SELECT
now() AS ts,
uuid,
sum(count) AS count
FROM events
GROUP BY ALL;
We can then query events_snapshot to get the count over time for a specific uuid:
SELECT *
FROM events_snapshot
WHERE uuid = 'fff'
ORDER BY ts ASC
FORMAT PrettyCompactMonoBlock
┌──────────────────ts─┬─uuid─┬───count─┐
│ 2024-10-01 16:12:56 │ fff  │ 5424711 │
│ 2024-10-01 16:13:00 │ fff  │ 5424711 │
│ 2024-10-01 16:13:10 │ fff  │ 5424711 │
│ 2024-10-01 16:13:20 │ fff  │ 5424711 │
│ 2024-10-01 16:13:30 │ fff  │ 5674669 │
│ 2024-10-01 16:13:40 │ fff  │ 5947912 │
│ 2024-10-01 16:13:50 │ fff  │ 6203361 │
│ 2024-10-01 16:14:00 │ fff  │ 6501695 │
└─────────────────────┴──────┴─────────┘
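Because each refresh appends one row per uuid, the snapshot table also lets us see how the count changes between refreshes. As a sketch, ClickHouse's window functions can compute the per-snapshot delta (lagInFrame reads the previous row within the window frame):

```sql
-- Difference in count between consecutive snapshots for one uuid.
SELECT
    ts,
    count,
    count - lagInFrame(count) OVER (ORDER BY ts ASC) AS delta
FROM events_snapshot
WHERE uuid = 'fff'
ORDER BY ts ASC;
```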
Variant Types in schema inference
Contributed by Shaun Struwig
ClickHouse now supports automatic use of the Variant data type in schema inference. This feature is disabled by default, but can be enabled with the setting input_format_try_infer_variants.
Let's have a look at how it works when reading the following file:
data1.json
{"id": [1], "name": "Mark"}
{"id": "agerty", "name": "Dale"}
The id field is an array of integers on the first row and a string on the second. Let's query the file and return the type of the id column:
SELECT *, toTypeName(id)
FROM file('data1.json')
SETTINGS input_format_try_infer_variants=1;
┌─id─────┬─name─┬─toTypeName(id)──────────────────────────┐
│ [1]    │ Mark │ Variant(Array(Nullable(Int64)), String) │
│ agerty │ Dale │ Variant(Array(Nullable(Int64)), String) │
└────────┴──────┴─────────────────────────────────────────┘
If we run that query without setting input_format_try_infer_variants=1, we'll see the following error message instead:
Received exception:
Code: 636. DB::Exception: The table structure cannot be extracted from a JSON format file. Error:
Code: 53. DB::Exception: Automatically defined type String for column 'id' in row 1 differs from type defined by previous rows: Array(Int64). You can specify the type for this column using setting schema_inference_hints. (TYPE_MISMATCH) (version 24.9.1.3278 (official build)).
You can specify the structure manually: (in file/path/to/24.9/data1.json). (CANNOT_EXTRACT_TABLE_STRUCTURE)
Remember that the Variant data type won't always be inferred where you want (or even expect) it to be inferred. For example, if all the values in the id field can be cast to String, that will be the inferred type even if the Variant type could also be inferred. This is the case in the following file:
data2.json
{"id": 1, "name": "Mark"}
{"id": "agerty", "name": "Dale"}
{"id": "2021-01-04", "name": "Tom"}
If we run the following query:
SELECT *, toTypeName(id)
FROM file('data2.json')
SETTINGS input_format_try_infer_variants=1;
┌─id─────────┬─name─┬─toTypeName(id)───┐
│ 1          │ Mark │ Nullable(String) │
│ agerty     │ Dale │ Nullable(String) │
│ 2021-01-04 │ Tom  │ Nullable(String) │
└────────────┴──────┴──────────────────┘
The id column is inferred as Nullable(String) because every value can be cast to a string. You can still have it infer the id column as Variant, but you'll have to supply a hint:
SET allow_experimental_variant_type=1;
SELECT *, toTypeName(id)
FROM file('data2.json')
SETTINGS schema_inference_hints='id Variant(String, Int64, Date)';
┌─id─────────┬─name─┬─toTypeName(id)───────────────┐
│ 1          │ Mark │ Variant(Date, Int64, String) │
│ agerty     │ Dale │ Variant(Date, Int64, String) │
│ 2021-01-04 │ Tom  │ Variant(Date, Int64, String) │
└────────────┴──────┴──────────────────────────────┘
Aggregate functions to analyze JSON
Contributed by Pavel Kruglov
In the 24.8 release post, we learned about the new JSON data type. This release adds more functions for operating on data in the JSON and Dynamic data types. Let's see how to use them on this sample dataset:
data3.json
{"id": 1, "name": "Mark"}
{"id": "agerty", "name": "Dale"}
{"id": "2021-01-04", "name": "Tom"}
{"id": ["1", 2, "3"], "name": "Alexey", "location": "Netherlands"}
We have the distinctJSONPaths function, which returns the distinct JSON paths in a column:
SELECT distinctJSONPaths(json)
FROM file('data3.json', JSONAsObject)
FORMAT Vertical;
Row 1:
──────
distinctJSONPaths(json): ['id','location','location.city','location.country','name']
If you want to return the types as well, distinctJSONPathsAndTypes does that:
SELECT distinctJSONPathsAndTypes(json)
FROM file('data3.json', JSONAsObject)
FORMAT Vertical;
Row 1:
──────
distinctJSONPathsAndTypes(json): {'id':['Array(Nullable(String))','Int64','String'],'location':['String'],'location.city':['String'],'location.country':['String'],'name':['String']}
Finally, we have distinctDynamicTypes, which returns the distinct types stored in Dynamic columns:
SELECT distinctDynamicTypes(json.id)
FROM file('data3.json', JSONAsObject)
FORMAT Vertical
Row 1:
──────
distinctDynamicTypes(json.id): ['Array(Nullable(String))','Int64','String']
_headers column for URL engine
Contributed by Flynn
When you query the url table function, you can now access the HTTP response headers via the _headers virtual column:
SELECT _headers
FROM url(
'https://en.wikipedia.org/w/api.php?action=query&list=recentchanges&rcprop=title%7Cids%7Csizes%7Cflags%7Cuser%7Cuserid%7Ctimestamp&format=json&rcdir=newer'
)
LIMIT 1
FORMAT Vertical;
Row 1:
──────
_headers: {'accept-ranges':'bytes','age':'0','cache-control':'private, must-revalidate, max-age=0','content-disposition':'inline; filename=api-result.json','content-type':'application/json; charset=utf-8','date':'Tue, 01 Oct 2024 15:32:59 GMT','nel':'{ "report_to": "wm_nel", "max_age": 604800, "failure_fraction": 0.05, "success_fraction": 0.0}','report-to':'{ "group": "wm_nel", "max_age": 604800, "endpoints": [{ "url": "https://intake-logging.wikimedia.org/v1/events?stream=w3c.reportingapi.network_error&schema_uri=/w3c/reportingapi/network_error/1.0.0" }] }','server':'mw-api-ext.codfw.main-54d5bc66d9-98km5','server-timing':'cache;desc="pass", host;desc="cp3067"','set-cookie':'WMF-Last-Access=01-Oct-2024;Path=/;HttpOnly;secure;Expires=Sat, 02 Nov 2024 12:00:00 GMT','strict-transport-security':'max-age=106384710; includeSubDomains; preload','transfer-encoding':'chunked','vary':'Accept-Encoding,Treat-as-Untrusted,X-Forwarded-Proto,Cookie,Authorization','x-cache':'cp3067 miss, cp3067 pass','x-cache-status':'pass','x-client-ip':'82.35.72.115','x-content-type-options':'nosniff','x-frame-options':'DENY'}
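Since _headers is a map column, individual headers can be pulled out with the bracket syntax rather than reading the whole map. For example, to grab just the content type:

```sql
-- Look up a single response header by key from the _headers map.
SELECT _headers['content-type'] AS content_type
FROM url(
    'https://en.wikipedia.org/w/api.php?action=query&list=recentchanges&rcprop=title%7Cids%7Csizes%7Cflags%7Cuser%7Cuserid%7Ctimestamp&format=json&rcdir=newer'
)
LIMIT 1;
```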
overlay function
If you need to replace a fragment of a string with another string, that just got easier with the overlay function. You provide the initial string, the replacement string, the 1-based index where the replacement should start, and the number of characters to replace.
We can use this function to make sure everyone knows that chDB is cool as well!
SELECT overlay('ClickHouse is cool', 'and chDB are', 12, 2) AS res
┌─res──────────────────────────┐
│ ClickHouse and chDB are cool │
└──────────────────────────────┘
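To our understanding, the length argument is optional: when it's omitted, the number of characters replaced defaults to the length of the replacement string, and there's also a UTF-8-aware sibling, overlayUTF8, that counts code points rather than bytes. A sketch:

```sql
-- 'IS' is 2 characters, so with no explicit length the 2 characters
-- starting at position 12 ('is') are replaced.
SELECT overlay('ClickHouse is cool', 'IS', 12) AS res;
```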