How much of your pgbench test data is in the OS disk cache using linux-fincore

The following little script, which utilizes linux-fincore (hosted here), reports the OS disk cache resident pages for the tables in your pgbench database, the one used in a typical pgbench test.

echo "tablename | size | total_pages | min_cached_page | cached_pages | cached_size | cached_perc"

psql -d pgbench -tc \
"SELECT '$PGDATA/base/' || d.oid::text || '/' || c.relfilenode::text, c.relname \
 FROM pg_class c, pg_database d \
 WHERE c.relname LIKE 'pgbench_%' AND d.datname = 'pgbench'" | \
tr -d " " | \
sed 's/|/\t/g' | \
grep -v "^$" | \
while read path file; do
    echo -n $file
    linux-fincore -so $path* | \
    grep -v "filename\|---" | \
    awk '{print "\t", $2, "\t", $3, "\t", $4, "\t", $5, "\t", $6, "\t", $7}' | \
    tr -d ","
done

Basics of Distributed Computing

  • What qualifies as a self-sufficient data-processing node for solving a given data problem? For example, sorting the records in a file can be performed on a full-fledged Linux box with sufficient storage (and memory). We should not, however, be concerned with the individual components of such a node as long as it can solve the problem as a whole.
  • A distributed system (a cluster of such nodes), working together as a whole, can solve still bigger problems (in size, at least) or solve existing ones faster. This is based on the idea of dividing the work into parts, each of which is a smaller instance of the original problem and can be solved by a node in a self-sufficient manner.
  • A distributed data-processing framework would then consist of mechanisms for dividing the work, coordinating the nodes, and possibly presenting the final results after combining the work done by each of the individual nodes in the system.
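As a toy illustration of this divide / solve / combine pattern, sorting a large file can be split across independent workers and the sorted parts merged. The sketch below simulates the "nodes" with background shell jobs on a single Linux box using GNU coreutils; all file names are made up for the example.

```shell
#!/bin/sh
# Divide: generate some shuffled input and split it into 4 parts.
seq 10000 | shuf > input.txt
split -n l/4 input.txt part.

# Solve: sort each part independently, as a self-sufficient "node" would.
for p in part.??; do
    sort -n "$p" -o "$p.sorted" &
done
wait

# Combine: merge the already-sorted runs into the final result.
sort -n -m part.??.sorted > sorted.txt
```

The key property is that the combine step (`sort -m`) only merges runs that are already sorted, so each node's work is genuinely independent.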

PostgreSQL Statistics: pg_statistic and pg_stat(io)_*

We hear about two kinds of statistics in PostgreSQL:

  1. PostgreSQL planner statistics (pg_statistic catalog/pg_stats view), and
  2. PostgreSQL server activity statistics (pg_stat(io)_* views)

It is important to understand that #1 pertains to statistical data about the contents of the database, whereas #2 covers statistics of server activity, viz. accesses to tables and indexes, the disk-block reads they incur, and so on.

So, how do these two kinds of statistics differ in how they are used?

Well, #1 is used by the planner as various estimates about the data contained in the tables and indexes of the cluster (that is, the data directory managed by this server instance). This includes things like the distribution of data in table columns, viz. the fraction of NULLs, average row width, number of distinct values, and so on. This link sheds some light on the matter. With this data (about data) at its disposal, the planner can make smart decisions when choosing a plan for a given query. These statistics are mainly updated by ANALYZE.
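For instance, a quick way to see this metadata is to query the pg_stats view directly. This is a sketch; it assumes the pgbench database from earlier, and the columns named are the standard pg_stats columns:

```sql
-- Planner statistics for columns of pgbench_accounts:
-- fraction of NULLs, average row width, number of distinct values.
SELECT attname, null_frac, avg_width, n_distinct
FROM pg_stats
WHERE schemaname = 'public'
  AND tablename = 'pgbench_accounts';
```

Running ANALYZE on the table and re-running the query shows how these estimates get refreshed.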

#2, on the other hand, is operational data about server activity. A DBA or a system administrator may be interested in such data for reporting purposes. There are various tools (like pg_statsinfo) which utilize the pg_stat_* views exported by the server to create activity reports. I won't go into the details of all the statistics here for the sake of brevity. There is an auxiliary PostgreSQL process, the stats collector process, that transmits collected information to backends through temporary files located under the $PGDATA/pg_stat_tmp directory. These are also copied permanently to the $PGDATA/pg_stat directory at server shutdown so they can be used after the server is restarted.
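As a small taste of these activity statistics, the pg_stat_user_tables view shows, per table, how often it was scanned sequentially versus via an index, and how many tuples those scans touched (again a sketch against the assumed pgbench database):

```sql
-- Server activity statistics: scans and tuples fetched per user table.
SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;
```

A table with a high seq_scan count and many rows is often a hint that an index is missing or unused.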

A Cool pg_buffercache Query

Today, at work, I found this particular query, which uses the contrib module pg_buffercache, very useful:

SELECT c.relname,count(*) AS buffers, c.relpages
FROM pg_class c INNER JOIN pg_buffercache b
ON b.relfilenode=c.relfilenode INNER JOIN pg_database d
ON (b.reldatabase=d.oid AND d.datname=current_database())
GROUP BY c.relname,c.relpages
ORDER BY 2 DESC;

Found it in a presentation by PostgreSQL performance guru Greg Smith.

It basically gives you the number of shared-memory page buffers being used by (the pages of) each relation in a given database at the moment the query is run. Moreover, the rows are sorted in decreasing order of buffer count, so the relation using the most buffers appears at the top.
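A small variation along the same lines (my own sketch, not from the presentation) expresses the buffer count as a rough percentage of each relation's pages. Note that relpages is only an estimate refreshed by VACUUM and ANALYZE, so the percentage is approximate:

```sql
SELECT c.relname,
       count(*) AS buffers,
       round(count(*) * 100.0 / c.relpages, 1) AS pct_in_buffers
FROM pg_class c
INNER JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
INNER JOIN pg_database d
        ON (b.reldatabase = d.oid AND d.datname = current_database())
WHERE c.relpages > 0          -- skip empty relations to avoid division by zero
GROUP BY c.relname, c.relpages
ORDER BY 2 DESC;
```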