Deploying Single-Node ClickHouse® on Small Servers

If you’ve heard of ClickHouse®, you’ve probably heard it described as a distributed column-oriented database designed for massive analytical workloads. So why would anyone run it on a single node?
The truth is, ClickHouse’s raw performance is so impressive that a single well-configured server can handle workloads that would bring traditional databases to their knees. For development environments, staging setups, or even low-to-moderate traffic production systems, a single-node deployment makes a lot of sense. You get all the analytical power without the operational complexity of managing a cluster.
Recommended Hardware
ClickHouse is not lightweight, but it can be tuned to run on minimal hardware:
- 4+ CPU cores
- 8GB+ RAM
- SSD storage
The minimum viable setup is around 4GB RAM, but you’ll want at least 8GB for any real workload.
Installing ClickHouse
There are many ways to install ClickHouse. The official documentation offers this one-liner:

```shell
curl https://clickhouse.com/ | sh
```

Warning: installing with this command will install the latest commit from `main`, and will create some configuration files on your system. We don’t recommend using this command.
Instead, you can use Docker and the official ClickHouse Docker images or the Altinity Stable Builds to run a containerized version of ClickHouse.
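As a sketch of the Docker route, a Compose file along these lines works. The version tag, host paths, and port mappings here are illustrative assumptions; pin whatever release you’ve actually tested:

```yaml
# docker-compose.yml — minimal sketch, adjust paths and tag for your setup
services:
  clickhouse:
    image: clickhouse/clickhouse-server:24.8   # pin a version rather than 'latest'
    ports:
      - "8123:8123"   # HTTP interface
      - "9000:9000"   # native protocol
    volumes:
      - ./clickhouse-data:/var/lib/clickhouse
      - ./config.d:/etc/clickhouse-server/config.d
      - ./users.d:/etc/clickhouse-server/users.d
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
```

Mounting `config.d/` and `users.d/` from the host means the override pattern described below works the same way in a container as on bare metal.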
In my own home lab, I’m using NixOS’s built-in support for OCI containers to run a containerized ClickHouse as a systemd service.
Understanding ClickHouse’s Configuration Structure
However you’ve got ClickHouse installed, let’s review the configuration files. The server ships with two main files:
- `/etc/clickhouse-server/config.xml` — Server settings
- `/etc/clickhouse-server/users.xml` — User profiles and access controls
Important: you should never modify these files directly. ClickHouse is designed to load additional configuration from subdirectories, which means you can override any setting without touching the vendor files. This makes upgrades painless and your customizations easy to track.
The directory structure looks like this:
```
/etc/clickhouse-server/
├── config.xml   # Don't modify
├── users.xml    # Don't modify
├── config.d/    # Your server config overrides
│   ├── listen.xml
│   ├── memory.xml
│   └── storage.xml
└── users.d/     # Your user/profile overrides
    └── profiles.xml
```

Files in `config.d/` and `users.d/` are processed alphabetically and merged with the base configuration. This pattern is common in Unix-style configuration, and ClickHouse implements it well.
Basic Configuration: Listening on All Interfaces
Out of the box, ClickHouse only listens on localhost. That’s secure by default, but not useful if you need to connect from other machines. The first configuration file we’ll create enables network access:
```xml
<!-- /etc/clickhouse-server/config.d/listen.xml -->
<clickhouse>
    <listen_host>0.0.0.0</listen_host>
</clickhouse>
```

This tells ClickHouse to accept connections on all network interfaces. A few important notes:
- Security matters: Once you bind to 0.0.0.0, anyone who can reach your server can attempt to connect. Make sure you have firewall rules in place, and consider setting up authentication (covered in the official docs).
- IPv6: If you need IPv6, add `<listen_host>::</listen_host>` as well.
Create this file and restart ClickHouse (assuming you’re using systemd):
```shell
sudo systemctl restart clickhouse-server
```

Verify it’s listening:

```shell
curl http://localhost:8123/
# Should return "Ok."
```

(You should also be able to do this from a remote host, replacing `localhost` with the IP address of the host where ClickHouse is running.)
Memory Configuration: Standard vs Constrained Environments
Memory management is where ClickHouse configuration gets interesting. The defaults assume you’re running on a beefy server with plenty of RAM, but not everyone has that luxury.
Standard Configuration (16GB+ RAM)
If you have a server with 16GB or more of RAM, the defaults are reasonable. You might still want to set explicit limits to prevent runaway queries from consuming everything:
```xml
<!-- /etc/clickhouse-server/config.d/memory.xml -->
<clickhouse>
    <!-- Cap server memory at 12GB, leaving headroom for the OS -->
    <max_server_memory_usage>12884901888</max_server_memory_usage>
</clickhouse>
```

This is a “set it and forget it” configuration. ClickHouse will respect these limits and throw errors rather than letting the OOM killer terminate your process.
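Byte values like that are easy to mistype. If you’d rather compute them than copy them, a quick sketch:

```python
def gib(n):
    """Convert GiB to bytes, for ClickHouse's byte-valued settings."""
    return n * 1024**3

# The 12GB cap used in memory.xml above:
print(gib(12))  # → 12884901888
```

The same helper produces the storage thresholds used later in this article (`gib(2)` for keeping free space, `gib(10)` for the part-size cap).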
Constrained Configuration (4-8GB RAM)
Running ClickHouse on a smaller server requires more aggressive tuning. When I first set up ClickHouse on an 8GB VM, I hit out-of-memory errors within hours. The problem wasn’t query execution—it was background merge operations consuming all available memory.
The official documentation contains a guide on which settings to adjust for low memory environments.
Disabling System Logs
ClickHouse maintains numerous internal log tables for debugging and monitoring. These are invaluable for troubleshooting, but they also consume memory, disk space, and CPU cycles. For resource-constrained deployments or high-volume systems, you can selectively disable them:
```xml
<!-- /etc/clickhouse-server/config.d/disable_logs.xml -->
<clickhouse>
    <!-- Disable system logs to save resources -->
    <trace_log remove="1"/>
    <text_log remove="1"/>
    <metric_log remove="1"/>
    <asynchronous_metric_log remove="1"/>
    <query_log remove="1"/>
    <part_log remove="1"/>
    <processors_profile_log remove="1"/>
    <query_views_log remove="1"/>
    <query_metric_log remove="1"/>
</clickhouse>
```

The `remove="1"` attribute completely disables each log table.
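If you manage several servers with different log tables disabled on each, you might generate this override rather than hand-write it. A minimal sketch using Python’s standard library (the list of log names comes from the snippet above):

```python
import xml.etree.ElementTree as ET

# System log tables to disable, as in disable_logs.xml above.
LOGS = [
    "trace_log", "text_log", "metric_log", "asynchronous_metric_log",
    "query_log", "part_log", "processors_profile_log",
    "query_views_log", "query_metric_log",
]

def build_disable_logs_xml(logs):
    """Build a config.d override that removes each listed log table."""
    root = ET.Element("clickhouse")
    for name in logs:
        ET.SubElement(root, name, {"remove": "1"})
    return ET.tostring(root, encoding="unicode")

print(build_disable_logs_xml(LOGS))
```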
NB: Think carefully before disabling query_log. It’s your primary tool for understanding what queries are running and how long they take. In production, I’d recommend keeping it enabled and instead setting a short TTL to limit disk usage:
```xml
<query_log>
    <database>system</database>
    <table>query_log</table>
    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    <ttl>event_date + INTERVAL 7 DAY DELETE</ttl>
</query_log>
```

Storage Configuration
Storage policies control how ClickHouse manages disk space. At minimum, you should configure free space thresholds to prevent your disk from filling completely:
```xml
<!-- /etc/clickhouse-server/config.d/storage.xml -->
<clickhouse>
    <storage_configuration>
        <disks>
            <default>
                <!-- Always keep 2GB free on disk -->
                <keep_free_space_bytes>2147483648</keep_free_space_bytes>
            </default>
        </disks>
        <policies>
            <default>
                <volumes>
                    <main>
                        <disk>default</disk>
                        <!-- Cap individual part sizes at 10GB -->
                        <max_data_part_size_bytes>10737418240</max_data_part_size_bytes>
                    </main>
                </volumes>
            </default>
        </policies>
    </storage_configuration>
</clickhouse>
```

The `keep_free_space_bytes` setting is critical: ClickHouse will stop accepting writes rather than fill your disk. The `max_data_part_size_bytes` limit prevents any single data part from growing too large.
MergeTree Engine Settings
The MergeTree family of engines is the heart of ClickHouse. These global settings affect all MergeTree tables:
```xml
<!-- /etc/clickhouse-server/config.d/merge_tree.xml -->
<clickhouse>
    <merge_tree>
        <!-- Pool entry thresholds -->
        <!-- These must be less than: background_pool_size * background_merges_mutations_concurrency_ratio -->
        <!-- With pool_size=4 and ratio=2, max is 8, so we use 6 -->
        <number_of_free_entries_in_pool_to_execute_mutation>6</number_of_free_entries_in_pool_to_execute_mutation>
        <number_of_free_entries_in_pool_to_execute_optimize_entire_partition>6</number_of_free_entries_in_pool_to_execute_optimize_entire_partition>
        <number_of_free_entries_in_pool_to_lower_max_size_of_merge>6</number_of_free_entries_in_pool_to_lower_max_size_of_merge>
        <!-- Parts thresholds - increase to prevent "Too many parts" errors -->
        <parts_to_throw_insert>1500</parts_to_throw_insert>
        <parts_to_delay_insert>750</parts_to_delay_insert>
        <!-- Use compact format for small parts to reduce memory footprint -->
        <min_bytes_for_wide_part>10485760</min_bytes_for_wide_part>
        <min_rows_for_wide_part>100000</min_rows_for_wide_part>
    </merge_tree>
</clickhouse>
```

A few key points:
“Too many parts” errors: ClickHouse has built-in protection against accumulating too many unmerged parts. If merges can’t keep up with inserts, you’ll get errors. The parts_to_throw_insert and parts_to_delay_insert settings control these thresholds. Increasing them gives merges more time to catch up, but very high values can lead to performance degradation during queries.
Wide vs compact part format: ClickHouse stores data parts in two formats. Wide format is faster for queries but uses more memory. Compact format is more memory-efficient for small parts. The min_bytes_for_wide_part and min_rows_for_wide_part settings control the threshold. On memory-constrained systems, keeping these values high ensures small parts stay in compact format.
Pool entry settings: These control when certain operations are allowed to run based on how many background threads are available. The values must be less than background_pool_size * background_merges_mutations_concurrency_ratio. If you set background_pool_size to 4 and the ratio to 2, that gives you 8 total slots—so these values should be below 8.
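That arithmetic is easy to get wrong when you resize the pool later. A small sanity check you could run before deploying a new merge_tree.xml (the function is a hypothetical helper, not part of ClickHouse; the defaults mirror the values used above):

```python
def check_free_entry_thresholds(thresholds, pool_size=4, concurrency_ratio=2):
    """Each number_of_free_entries_in_pool_* value must stay strictly below
    background_pool_size * background_merges_mutations_concurrency_ratio."""
    slots = pool_size * concurrency_ratio
    return all(t < slots for t in thresholds)

# With pool_size=4 and ratio=2 there are 8 slots, so 6 is safe:
print(check_free_entry_thresholds([6, 6, 6]))  # → True
# A threshold of 8 would equal the slot count and is not allowed:
print(check_free_entry_thresholds([8, 6, 6]))  # → False
```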
Verifying Your Configuration
After creating your configuration files and restarting ClickHouse, verify everything is working:
```shell
# Check ClickHouse is listening on HTTP port
curl http://localhost:8123/

# Connect via native client
clickhouse-client --query "SELECT 1"

# View applied memory settings
clickhouse-client --query "SELECT name, value FROM system.settings WHERE name LIKE '%memory%' ORDER BY name"

# Check current memory usage
clickhouse-client --query "SELECT metric, value FROM system.asynchronous_metrics WHERE metric LIKE '%Memory%'"

# Verify MergeTree settings
clickhouse-client --query "SELECT name, value FROM system.merge_tree_settings WHERE name LIKE '%parts%'"
```

If something isn’t applied correctly, check the server logs:

```shell
sudo journalctl -u clickhouse-server -f
```

ClickHouse logs configuration errors at startup, including which files were loaded and any parsing issues.
Conclusion and Next Steps
You now have a stable development-ready single-node ClickHouse deployment with:
- Network accessibility for remote connections
- Memory limits appropriate for your hardware
- Optimized merge settings to prevent OOM conditions
- Storage policies to protect disk space
- System log tuning to reduce resource overhead
This configuration will handle most single-node workloads without surprises. As your data grows, monitor these metrics:
- system.parts table for part counts (watch for “Too many parts” warnings)
- system.asynchronous_metrics for memory pressure
- system.query_log for slow queries (if you kept it enabled)
When a single node is no longer enough, ClickHouse’s clustering capabilities scale horizontally. But don’t rush to add complexity—you might be surprised how much a single well-configured server can handle.
Further Reading
- Altinity Knowledge Base — Excellent resource for ClickHouse operations
- ClickHouse Documentation — Official reference
- Configure ClickHouse for Low Memory Environments — Deep dive on memory tuning
- Server Config Files — Configuration structure details
- Tracing ClickHouse with OpenTelemetry — Native OpenTelemetry support
ClickHouse® is a registered trademark of ClickHouse, Inc.; Altinity is not affiliated with or associated with ClickHouse, Inc.