Use the highest memory value you can: this limits the number of images generated for a single heatmap and speeds things up. But be careful: the more memory you use, the less is available to your system during heatmap rendering (once rendering finishes, the memory is released back to the system). Imagine 10 heatmap renderings at the same time (say, 10 users on the viewing page) with a limit of 100MB: you could use up to 1GB of memory! So be careful with this option; the best choice is 32MB if your php.ini says 8MB, and 128MB if your php.ini allows that much.
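The peak-memory arithmetic above can be sketched as a one-line estimate (illustrative only; the function name and numbers are assumptions, not ClickHeat internals):

```javascript
// Rough worst-case memory use when several heatmaps render at once:
// each rendering may consume up to the configured per-rendering limit.
function peakMemoryMB(concurrentRenderings, memoryLimitMB) {
  return concurrentRenderings * memoryLimitMB;
}

console.log(peakMemoryMB(10, 100)); // 10 renderings at 100MB each -> 1000MB (~1GB)
```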
Page group naming:
If you use the page's title or URL to track clicks, ClickHeat will create as many tracking directories as there are different titles. This may lead to a huge number of directories and significantly affect your disk usage/limits, so you'd better use a small number of group names.
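One way to keep the number of groups small is to derive `clickHeatGroup` from the URL path rather than using the raw title or URL. A minimal sketch (the path prefixes and function name here are made-up examples, not part of ClickHeat):

```javascript
// Map many distinct URLs onto a handful of fixed group names,
// so ClickHeat creates only a few tracking directories.
function groupForPath(path) {
  if (path.indexOf('/blog/') === 0) return 'blog';       // all blog posts share one group
  if (path.indexOf('/product/') === 0) return 'product'; // all product pages share one group
  return 'other';                                        // everything else
}

// Usage in the tracking snippet, e.g.:
// clickHeatGroup = groupForPath(document.location.pathname);
```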
Avoid loading the PHP engine for each tracked click:
Every time a click is logged, the PHP engine is loaded. The load is small since the script is light, but the server is much slower at sending PHP responses than static files, as some benchmarks show: Lighttpd/mod_cml - PHP: 150 req/s, static: 4900 req/s
A solution is to point ClickHeat's calls (GET click.php?x=123&y=456) at a static file (GET clickempty.html?x=123&y=456), then parse the logfile of those static requests. This solution, intended for experienced Apache/Perl users, was contributed by wat.tv (a video, music and photo sharing site)
Use a static file as clickHeatServer:
clickHeatGroup = 'page';
clickHeatServer = '/clickheat/clickempty.html';
Then add a special log to your Apache configuration (you can adapt this part to your needs, using cronolog or a similar tool, a dedicated subdomain, etc.):
SetEnvIf Request_URI clickempty.html clickheat
CustomLog "clickheat.%Y-%m-%d-%H" "%r" env=clickheat
Finally, run the parseClickLogs.pl script included in the downloaded archive (located in /scripts/).
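The parsing step performed by parseClickLogs.pl can be sketched roughly as follows. This is only a hedged illustration of the idea (the shipped script is Perl; this function and its behavior are assumptions, not the actual script): each logged request line contains the click coordinates as query parameters, which can be extracted per line.

```javascript
// Extract x/y click coordinates from a logged request line such as:
//   "GET /clickheat/clickempty.html?x=123&y=456 HTTP/1.1"
// Returns null for lines that are not clickempty.html requests.
function parseClickLine(line) {
  var m = line.match(/clickempty\.html\?([^ "]*)/);
  if (!m) return null;
  var params = {};
  m[1].split('&').forEach(function (pair) {
    var kv = pair.split('=');
    params[kv[0]] = kv[1];
  });
  return { x: parseInt(params.x, 10), y: parseInt(params.y, 10) };
}

console.log(parseClickLine('"GET /clickheat/clickempty.html?x=123&y=456 HTTP/1.1"'));
```

A real parser would then feed these coordinates back into ClickHeat's log format, which is what the Perl script takes care of.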
An alternative script, written in Python and provided by Arnar, is available on GitHub.