Kloudless uses Telegraf and InfluxDB for our system and application metrics, as well as Grafana for visualizations and alerting. Telegraf gives us great flexibility in data sources: it supports everything from the StatsD wire protocol to custom shell scripts and database queries. Telegraf and InfluxDB are the T and I in InfluxData's TICK stack, with Grafana standing in for Chronograf and Kapacitor in our setup.
For context, Kloudless provides an API abstraction layer that lets apps integrate with any of their users' SaaS tools, such as Google Drive, Slack, and Salesforce, through a single implementation. As part of this, Kloudless tracks changes in connected accounts to offer an activity stream API endpoint and webhooks. A single account can generate thousands of events in a couple of minutes, all of which must be temporarily stored in our database for apps to retrieve from our platform.
Kloudless uses the TICK stack not only to track the billions of API requests themselves, but also to monitor our database's performance.
To illustrate the latter scenario, consider a real-life issue we encountered with PostgreSQL. Queries involving a certain table were performing poorly. We narrowed the cause down to table bloat, but to verify that our changes had any effect, we needed to measure it over time. Otherwise, any solution of ours would just be a best guess!
In order to manually check the table bloat, we can use the following SQL query adapted from the Postgres wiki (for versions 9.0+):
SELECT current_database() AS datname, schemaname, tblname,
  (bs*tblpages)::bigint AS real_size,
  ((tblpages-est_tblpages)*bs)::bigint AS extra_size,
  CASE WHEN tblpages - est_tblpages > 0
    THEN 100 * ((tblpages - est_tblpages)/tblpages)::double precision
    ELSE 0.0::double precision
  END AS extra_ratio,
  fillfactor,
  CASE WHEN tblpages - est_tblpages_ff > 0
    THEN ((tblpages-est_tblpages_ff)*bs)::bigint
    ELSE 0::bigint
  END AS bloat_size,
  CASE WHEN tblpages - est_tblpages_ff > 0
    THEN 100 * (tblpages - est_tblpages_ff)/tblpages::double precision
    ELSE 0.0::double precision
  END AS bloat_ratio,
  is_na
FROM (
  SELECT ceil( reltuples / ( (bs-page_hdr)/tpl_size ) ) + ceil( toasttuples / 4 ) AS est_tblpages,
    ceil( reltuples / ( (bs-page_hdr)*fillfactor/(tpl_size*100) ) ) + ceil( toasttuples / 4 ) AS est_tblpages_ff,
    tblpages, fillfactor, bs, tblid, schemaname, tblname, heappages, toastpages, is_na
  FROM (
    SELECT ( 4 + tpl_hdr_size + tpl_data_size + (2*ma)
        - CASE WHEN tpl_hdr_size%ma = 0 THEN ma ELSE tpl_hdr_size%ma END
        - CASE WHEN ceil(tpl_data_size)::int%ma = 0 THEN ma ELSE ceil(tpl_data_size)::int%ma END
      ) AS tpl_size,
      bs - page_hdr AS size_per_block,
      (heappages + toastpages) AS tblpages,
      heappages, toastpages, reltuples, toasttuples, bs, page_hdr, tblid,
      schemaname, tblname, fillfactor, is_na
    FROM (
      SELECT tbl.oid AS tblid, ns.nspname AS schemaname, tbl.relname AS tblname,
        tbl.reltuples, tbl.relpages AS heappages,
        coalesce(toast.relpages, 0) AS toastpages,
        coalesce(toast.reltuples, 0) AS toasttuples,
        coalesce(substring(
          array_to_string(tbl.reloptions, ' ')
          FROM 'fillfactor=([0-9]+)')::smallint, 100) AS fillfactor,
        current_setting('block_size')::numeric AS bs,
        CASE WHEN version()~'mingw32'
          OR version()~'64-bit|x86_64|ppc64|ia64|amd64' THEN 8 ELSE 4 END AS ma,
        24 AS page_hdr,
        23 + CASE WHEN MAX(coalesce(null_frac,0)) > 0
               THEN ( 7 + count(*) ) / 8 ELSE 0::int END
           + CASE WHEN tbl.relhasoids THEN 4 ELSE 0 END AS tpl_hdr_size,
        sum( (1-coalesce(s.null_frac, 0)) * coalesce(s.avg_width, 1024) ) AS tpl_data_size,
        bool_or(att.atttypid = 'pg_catalog.name'::regtype)
          OR count(att.attname) <> count(s.attname) AS is_na
      FROM pg_attribute AS att
        JOIN pg_class AS tbl ON att.attrelid = tbl.oid
        JOIN pg_namespace AS ns ON ns.oid = tbl.relnamespace
        LEFT JOIN pg_stats AS s ON s.schemaname=ns.nspname
          AND s.tablename = tbl.relname
          AND s.inherited=false
          AND s.attname=att.attname
        LEFT JOIN pg_class AS toast ON tbl.reltoastrelid = toast.oid
      WHERE att.attnum > 0 AND NOT att.attisdropped AND tbl.relkind = 'r'
      GROUP BY 1,2,3,4,5,6,7,8,9,10, tbl.relhasoids
      ORDER BY 2,3
    ) AS s
  ) AS s2
) AS s3
ORDER BY bloat_ratio DESC;
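The core of the estimate above is simple arithmetic: given the average tuple size and live tuple count, compute how many pages the table *should* occupy, and compare that against how many it actually does. A simplified Python sketch of that calculation (the helper name and the sample numbers are illustrative, not from a live database, and this ignores the alignment-padding and TOAST corrections the full query makes):

```python
import math

def estimate_bloat_ratio(actual_pages, reltuples, tuple_size,
                         block_size=8192, page_header=24, fillfactor=100):
    """Estimate bloat as the percentage of pages beyond what the
    live tuples should need at the configured fillfactor."""
    # Space usable for tuples in each page, at the given fillfactor.
    usable = (block_size - page_header) * fillfactor / 100
    tuples_per_page = usable / tuple_size
    expected_pages = math.ceil(reltuples / tuples_per_page)
    extra = actual_pages - expected_pages
    return max(extra, 0) / actual_pages * 100

# A table whose 100k rows of ~80 bytes should fit in ~980 pages,
# but which actually occupies 1500 pages, is roughly one-third bloat.
print(round(estimate_bloat_ratio(actual_pages=1500,
                                 reltuples=100_000,
                                 tuple_size=80), 1))
```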
With the PostgreSQL Extensible Telegraf plugin, the following configuration will send the results of the above query to the configured outputs:
[[inputs.postgresql_extensible]]
  # database specified here is only used for initial connection
  address="postgres://USER:PASSWORD@localhost/DATABASE"
  databases=["DATABASE"]

  [[inputs.postgresql_extensible.query]]
    measurement="postgresql_bloat"
    sqlquery="""-- insert above query here"""
    # minimum postgresql version
    version=900
    # DB name is already included in the query
    withdbname=false
    # These columns will be converted to influxdb tags for efficient filtering
    tagvalue="datname,schemaname,tblname"
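The tagvalue setting controls which result columns become InfluxDB tags (indexed, cheap to filter on) rather than fields (the measured values). As a rough illustration of that split, here is a simplified sketch of how one query-result row maps to an InfluxDB line-protocol point; real line protocol also escapes special characters, quotes string fields, and appends a timestamp, which this toy helper skips:

```python
def to_line_protocol(measurement, row, tag_keys):
    """Render one result row as a simplified InfluxDB line-protocol
    point: tag columns go before the space, field columns after."""
    tags = ",".join(f"{k}={row[k]}" for k in tag_keys)
    fields = ",".join(
        f"{k}={v}" for k, v in row.items() if k not in tag_keys
    )
    return f"{measurement},{tags} {fields}"

# Illustrative row, shaped like one result of the bloat query.
row = {"datname": "mydb", "schemaname": "public",
       "tblname": "events", "bloat_ratio": 34.7}
print(to_line_protocol("postgresql_bloat", row,
                       ["datname", "schemaname", "tblname"]))
# -> postgresql_bloat,datname=mydb,schemaname=public,tblname=events bloat_ratio=34.7
```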
One caveat is that the user must have read access on all of the relevant tables in that database. Grant read-only permissions with the following SQL statements:
GRANT SELECT ON ALL TABLES IN SCHEMA public TO username;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO username;
Querying the data in InfluxDB is relatively straightforward. The following InfluxQL shows recent measurements:
SELECT * FROM postgresql_bloat WHERE time > now() - 10m ORDER BY time DESC LIMIT 10;
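When querying over InfluxDB's v1 HTTP /query API instead of the influx shell, each series comes back as a list of column names plus rows of values. A small helper to zip those into dicts can make the results easier to work with (the sample response below is illustrative, but follows the v1 API's JSON shape):

```python
def series_to_dicts(response):
    """Flatten an InfluxDB v1 /query JSON response into row dicts."""
    rows = []
    for result in response.get("results", []):
        for series in result.get("series", []):
            for values in series.get("values", []):
                rows.append(dict(zip(series["columns"], values)))
    return rows

# Illustrative response for the InfluxQL query above.
resp = {"results": [{"series": [{
    "name": "postgresql_bloat",
    "columns": ["time", "tblname", "bloat_ratio"],
    "values": [["2019-01-01T00:00:00Z", "events", 34.7]],
}]}]}
print(series_to_dicts(resp))
# -> [{'time': '2019-01-01T00:00:00Z', 'tblname': 'events', 'bloat_ratio': 34.7}]
```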
Viewing Results in Grafana
The easiest way to monitor the stored data is via Grafana. Using dashboard-level filters to limit the graph of Bloat Ratio to a particular table, we can easily see how the behavior changes over time:
There is a minor improvement to the steady-state bloat ratio after updating our code to avoid unnecessary tuple updates. The most dramatic improvement comes from periodically repacking the table with pg_repack, which performs a full cleanup of dead tuples without holding the long exclusive lock that a VACUUM FULL would. The smaller variations in the ratio correspond to the background autovacuum processes. Autovacuuming does help, but it is not sufficient to keep space usage for this particular table under control.
At the end of the day, it's always important to track metrics to ensure things are behaving as expected. Given how useful this has been, it will definitely not be the last query we monitor! This system can track the results of any query, whether for monitoring Postgres itself or your application.