ClickHouse usage report

George Shuklin
Published in OpsOps
Dec 19, 2018

I’ve just written a comment about the ClickHouse database, and I think it’s interesting enough to turn into a separate post.

Use case

Data: geodistributed reachability information with latency counters, with a steady incoming flow of about 5 Mb/s. Data are gathered from about 6 crawlers. The compressed database size is about 20 GB, with weekly rotation.

The data serve a fixed set of queries: averages, lists of top outliers (in latency), and a few more.
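For illustration, queries of that shape might look like the following in ClickHouse SQL. The table and column names here (`reachability`, `target`, `latency_ms`, `ts`) are hypothetical, not the actual production schema:

```sql
-- Hypothetical schema: table and column names are illustrative only.
-- Average latency per target over the current week:
SELECT target, avg(latency_ms) AS avg_latency
FROM reachability
WHERE ts >= now() - INTERVAL 7 DAY
GROUP BY target;

-- Top-10 latency outliers:
SELECT target, max(latency_ms) AS worst_latency
FROM reachability
GROUP BY target
ORDER BY worst_latency DESC
LIMIT 10;
```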

Stability

Monitoring report for the last 6 months:

Service ‘clickhouse’ On Host ‘click1(censored)’
OK: 99.997%
Scheduled downtime: 0.003%
Unscheduled downtime: 0.000%

Usability

Zero issues for operators so far; programmers are happy.

Installation & configuration

https://github.com/AlexeySetevoi/ansible-clickhouse, with a small patch for private repos (https://github.com/amarao/ansible-clickhouse).

Resource consumption

Memory: 2.7–3 GB RSS over the last few months, with no outliers.

CPU: 2–3% (10-minute averages); 5399 CPU-minutes over 153 days.

Disk: 2 IOPS for writing, 1 IOPS for reading (10-minute averages).
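The CPU figures are consistent with each other: 153 days is 220,320 wall-clock minutes, so 5399 CPU-minutes comes out to roughly 2.5% average load. As a quick check (this arithmetic is mine, not part of the monitoring report):

```sql
-- 153 days × 24 h × 60 min = 220320 minutes of wall time
SELECT 5399 / (153 * 24 * 60) AS avg_cpu_fraction; -- ≈ 0.0245, i.e. ~2.5%
```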

Stats

For the last 153 days (since the last server reboot):

USE system;
SELECT * FROM events;
┌─event───────────────────────────────────┬──────────value─┐
│ Query                                   │          96791 │
│ SelectQuery                             │          49786 │
│ InsertQuery                             │          46647 │
│ FileOpen                                │        8593010 │
│ Seek                                    │         281225 │
│ ReadBufferFromFileDescriptorRead        │        8083356 │
│ ReadBufferFromFileDescriptorReadBytes   │  4395742351368 │
│ WriteBufferFromFileDescriptorWrite      │       11376241 │
│ WriteBufferFromFileDescriptorWriteBytes │  5331940951010 │
│ ReadCompressedBytes                     │  4281721863794 │
│ CompressedReadBufferBlocks              │      324315838 │
│ CompressedReadBufferBytes               │ 26547060994203 │
│ IOBufferAllocs                          │       17199683 │
│ IOBufferAllocBytes                      │  6907648898660 │
│ ArenaAllocChunks                        │             14 │
│ ArenaAllocBytes                         │          57344 │
│ FunctionExecute                         │         443977 │
│ MarkCacheHits                           │         208043 │
│ MarkCacheMisses                         │          21889 │
│ CreatedReadBufferOrdinary               │        2055119 │
│ CreatedWriteBufferOrdinary              │        1759034 │
│ InsertedRows                            │   172063397956 │
│ InsertedBytes                           │  7823063594209 │
│ SelectedParts                           │          23909 │
│ SelectedRanges                          │          43259 │
│ SelectedMarks                           │        3022074 │
│ MergedRows                              │   552942274463 │
│ MergedUncompressedBytes                 │ 26253035436924 │
│ MergesTimeMilliseconds                  │       94125577 │
│ MergeTreeDataWriterRows                 │   172063397956 │
│ MergeTreeDataWriterUncompressedBytes    │  8167190390121 │
│ MergeTreeDataWriterCompressedBytes      │  1537157838359 │
│ MergeTreeDataWriterBlocks               │         180200 │
│ RegexpCreated                           │              1 │
│ ContextLock                             │       28044904 │
│ RWLockAcquiredReadLocks                 │       48842132 │
│ RWLockAcquiredWriteLocks                │            310 │
│ RWLockReadersWaitMilliseconds           │          34336 │
│ RWLockWritersWaitMilliseconds           │           4008 │
└─────────────────────────────────────────┴────────────────┘
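A few back-of-the-envelope figures can be read straight off these counters. The derivations below are mine, rounded, and not part of the original report:

```sql
SELECT
    96791 / 153                   AS queries_per_day,   -- ≈ 633
    172063397956 / 46647          AS rows_per_insert,   -- ≈ 3.7 million
    8167190390121 / 1537157838359 AS write_compression  -- ≈ 5.3x
```

The large average insert batch is expected for ClickHouse, which favors few big inserts over many small ones.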

Conclusion

It works so well that there is no need to think about it on a daily basis.


I work at Servers.com, most of my stories are about Ansible, Ceph, Python, Openstack and Linux. My hobby is Rust.