I regularly run into situations where developers sincerely believe that an index in PostgreSQL is a Swiss Army knife that universally solves any query performance problem: just add some new index to the table, or include the field in an existing one somewhere, and then (magic!) every query will use it effectively.

First, of course, they will not, or will not do so efficiently, or not all of them will. Second, redundant indexes only add performance problems on writes.

Most often, such situations occur during "long-running" development: when the product is built not on a "write once, hand over, forget" model but, as in our case, as a service with a long life cycle.

Improvements happen iteratively, by many teams that are distributed not only in space but also in time. And then, without knowing the entire history of the project or the actual distribution of data in its database, it is easy to "mess up" the indexes. But the considerations and test queries below let you predict and detect some of the problems in advance:

  •     unused indexes
  •     prefix “clones”
  •     timestamp “in the middle”
  •     indexable boolean
  •     arrays in the index
  •     NULL trash

The simplest case is finding indexes that have had no scans at all. You just need to make sure the statistics were reset (pg_stat_reset()) long enough ago, so that you don't drop an index that is actually used. We use the pg_stat_user_indexes system view:

SELECT * FROM pg_stat_user_indexes WHERE idx_scan = 0;
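Before dropping anything from this list, it is also worth excluding indexes that back constraints, since those cannot simply be removed. A small refinement sketch (my addition, not part of the original selection):

SELECT s.*
FROM pg_stat_user_indexes s
JOIN pg_index x
  ON x.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT x.indisunique; -- skip unique/PK indexes that enforce constraints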

But even if an index is used and did not show up in this selection, that does not at all mean it suits your queries well.

What indexes are [not] suitable

To understand why some queries "perform poorly on an index", let's think about the structure of an ordinary btree index, the most common kind. Single-field indexes usually cause no problems, so we'll consider the problems that arise with a composite index on a pair of fields.

In an extremely simplified way, it can be pictured as a "layered cake", where each layer holds trees ordered by the values of the field at that position.

It immediately becomes clear that field A is ordered globally, while B is ordered only within a specific value of A. Let's look at conditions that occur in real queries and at how they would "walk" the index.

Good: prefix condition

Note that the btree(A, B) index effectively contains a btree(A) sub-index. This means that all the rules described below also work for any prefix of an index.

If you create a more complex index than in our example, say btree(A, B, C), you can assume your database automatically "gains":

  •     btree (A, B, C)
  •     btree (A, B)
  •     btree (A)

This means that physically keeping a prefix index in the database is redundant in most cases. And the more indexes a table has, the worse writes are for PostgreSQL, since each index causes write amplification; Uber famously complained about this (and here you can find an analysis of their claims).

And if something keeps the database from living well, it is worth finding and eliminating it. Let's look at an example:

CREATE TABLE tbl(A integer, B integer, val integer);
CREATE INDEX ON tbl(A, B)
  WHERE val IS NULL;
CREATE INDEX ON tbl(A) -- prefix #1
  WHERE val IS NULL;
CREATE INDEX ON tbl(A, B, val);
CREATE INDEX ON tbl(A); -- prefix #2
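The check itself can be built on the system catalog. A minimal sketch of such a query (my reconstruction, not the article's exact code): it finds pairs of indexes on the same table where one column list is a strict prefix of the other and the partial-index predicates match.

SELECT
  ts.relname AS table_name
, pg_get_expr(xl.indpred, xl.indrelid) AS wh
, cs.relname AS shorter_index -- the prefix "clone", candidate for removal
, cl.relname AS longer_index
FROM pg_index xl
JOIN pg_index xs
  ON  xs.indrelid = xl.indrelid
  AND xs.indexrelid <> xl.indexrelid
  -- the shorter column list is a strict prefix of the longer one
  AND xl.indkey::text LIKE xs.indkey::text || ' %'
  -- the partial-index predicates (if any) must match as well
  AND coalesce(pg_get_expr(xs.indpred, xs.indrelid), '') =
      coalesce(pg_get_expr(xl.indpred, xl.indrelid), '')
JOIN pg_class ts ON ts.oid = xl.indrelid
JOIN pg_class cl ON cl.oid = xl.indexrelid
JOIN pg_class cs ON cs.oid = xs.indexrelid;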

Ideally, such a check should return an empty result, but look: these are our suspicious index groups:

nmt | wh            | nmf$      | tpf$             | nmi             | def
---------------------------------------------------------------------------------------
tbl | (val IS NULL) | {a}       | {int4}           | tbl_a_idx       | CREATE INDEX ...
tbl | (val IS NULL) | {a,b}     | {int4,int4}      | tbl_a_b_idx     | CREATE INDEX ...
tbl |               | {a}       | {int4}           | tbl_a_idx1      | CREATE INDEX ...
tbl |               | {a,b,val} | {int4,int4,int4} | tbl_a_b_val_idx | CREATE INDEX ...

Then, for each group, you decide yourself whether to drop the shorter index or whether the longer one was not needed at all.

Good: constants in all fields except the last

If the values of all index fields except the last are fixed by constants (in our example, field A), the index can be used normally. The last field's value may be set in any way: a constant, an inequality, a range, or a set via IN (...) or = ANY (...). It can also be used for sorting.

  • WHERE A = constA AND B [op] constB / = ANY(...) / IN (...)
    op : { =, >, >=, <, <= }
  • WHERE A = constA AND B BETWEEN constB1 AND constB2
  • WHERE A = constA ORDER BY B

Thanks to the prefix indexes described above, all of this will also work well (see the sketch after the list):

  • WHERE A [op] const / = ANY(...) / IN (...)
    op : { =, >, >=, <, <= }
  • WHERE A BETWEEN const1 AND const2
  • ORDER BY A
  • WHERE (A, B) [op] (constA, constB) / = ANY(...) / IN (...)
    op : { =, >, >=, <, <= }
  • ORDER BY A, B
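For instance, against the tbl(A, B, ...) example above, queries like these can walk the index efficiently (a sketch; the actual plan depends on statistics and table size):

-- constants on the prefix, a range on the last used field
EXPLAIN SELECT * FROM tbl WHERE A = 1 AND B BETWEEN 2 AND 5;

-- prefix-only condition plus a matching sort order
EXPLAIN SELECT * FROM tbl WHERE A < 10 ORDER BY A, B;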

Bad: full enumeration of the “layer”

For some queries, the only way to move along the index is a complete enumeration of all the values in one of the "layers". You're lucky if there are only a handful of such values, but what if there are thousands?

Typically, this problem occurs when a query uses an inequality, when the condition does not fix the fields that come earlier in the index order, or when the sort violates that order.

  •      WHERE A <> const
  •      WHERE B [op] const / = ANY (…) / IN (…) 
  •      ORDER BY B
  •      ORDER BY B, A

Bad: interval or set is not in the last field

As a consequence of the previous case: if you need to find several values or a range of them in some intermediate "layer", and then filter or sort by fields lying "deeper" in the index, there will be problems whenever the number of unique values "in the middle" of the index is large.

  •      WHERE A BETWEEN constA1 AND constA2 AND B BETWEEN constB1 AND constB2
  •      WHERE A = ANY (…) AND B = const
  •      WHERE A = ANY (…) ORDER BY B
  •      WHERE A = ANY (…) AND B = ANY (…)

Bad: expression instead of field

Sometimes a developer unknowingly turns a column in a query into something else: an expression for which no index exists. This can be fixed by creating an index on the desired expression, or by applying the inverse transformation:

  •      WHERE A - const1 [op] const2
  •      fix: WHERE A [op] const1 + const2
  •      WHERE A::typeOfConst = const
  •      fix: WHERE A = const::typeOfA
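A tiny illustration of the first rewrite, on a hypothetical single-column table (the names here are mine, not from the article):

CREATE TABLE t_expr(A integer);
CREATE INDEX ON t_expr(A);

-- bad: the planner sees the expression "A - 1", not the indexed column A
EXPLAIN SELECT * FROM t_expr WHERE A - 1 = 41;

-- good: after the inverse transformation the bare column is compared,
-- so the btree index on A becomes usable
EXPLAIN SELECT * FROM t_expr WHERE A = 41 + 1;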

Take into account the cardinality of the fields

Suppose you need an index on (A, B), and you plan to select only by equality: (A, B) = (constA, constB). A hash index would be ideal, but... besides not being WAL-logged before version 10, hash indexes also cannot span multiple fields:

CREATE INDEX ON tbl USING hash(A, B);
-- ERROR:  access method "hash" does not support multicolumn indexes

So, you have settled on btree. How should the columns be ordered in it: (A, B) or (B, A)? To answer that, you need to take into account the cardinality of the data in each column, that is, how many unique values it contains.
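You don't have to scan the table to estimate cardinality; the planner statistics already hold an estimate. A small sketch, assuming the example table is named tbl:

-- estimated number of distinct values per column, collected by ANALYZE;
-- n_distinct > 0 is an absolute estimate, n_distinct < 0 is minus the
-- fraction of distinct rows (-1 means "all values are unique")
SELECT attname, n_distinct
FROM pg_stats
WHERE tablename = 'tbl';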

Bad: many values, and in the wrong place (timestamp "in the middle")

For exactly this reason, it always looks suspicious if a field with obviously high variability, such as timestamp[tz], is not the last one in your index. As a rule, timestamp values grow monotonically, and the index fields that follow it have only a single value at each point in time.

CREATE TABLE tbl(A integer, B timestamp);
CREATE INDEX ON tbl(A, B);
CREATE INDEX ON tbl(B, A); -- suspicious: the timestamp is not last

Here we analyze both the types of the index fields themselves and the operator classes applied to them, since an index field may turn out to be some timestamptz function such as date_trunc.
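A minimal catalog sketch of such a check (my reconstruction, not the article's exact tooling): list indexes where a timestamp[tz] column sits anywhere but in the last position.

SELECT
  ci.relname AS index_name
, a.attname  AS ts_column
, k.pos      AS position
, x.indnatts AS total_columns
FROM pg_index x
JOIN pg_class ci ON ci.oid = x.indexrelid
JOIN LATERAL unnest(x.indkey::int2[]) WITH ORDINALITY AS k(attnum, pos) ON true
JOIN pg_attribute a ON a.attrelid = x.indrelid AND a.attnum = k.attnum
WHERE a.atttypid IN ('timestamp'::regtype, 'timestamptz'::regtype)
  AND k.pos < x.indnatts; -- not the last index column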

Bad: too few values (boolean)

The flip side of the same coin is when a boolean field ends up in the index; it can take only three values: NULL, FALSE, TRUE. Of course, its presence makes sense if you want to use it for application-level sorting, for example to mark the node type in a tree hierarchy: folder or leaf ("folders first").

CREATE TABLE tbl(
  id
    serial
      PRIMARY KEY
, leaf_pid
    integer
, leaf_type
    boolean
, public
    boolean
);
CREATE INDEX ON tbl(leaf_pid, leaf_type); -- reasonable: supports "folders first" sorting
CREATE INDEX ON tbl(public, id);          -- suspicious: a boolean in the first position

But in most cases this is not so, and queries come with some specific value of the boolean field. Then the index with this field can be replaced by its conditional (partial) version:

CREATE INDEX ON tbl(id) WHERE public;
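The planner can use this partial index for any query whose condition implies the predicate, for example:

EXPLAIN SELECT * FROM tbl WHERE public AND id > 1000;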

Arrays in btree

A separate case is the attempt to "index an array" with a btree index. This is entirely possible, since btree comparison operators are defined for arrays.

The trouble is that people then want to use it with the containment and overlap operators: <@, @>, &&. Of course, this does not work, because those require other index types. Nor does such a btree work for accessing a specific element, arr[i].

Let's learn to find these too:

CREATE TABLE tbl(
  id
    serial
      PRIMARY KEY
, pid
    integer
, list
    integer[]
);
CREATE INDEX ON tbl(pid);
CREATE INDEX ON tbl(list); -- something suspicious
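If the containment or overlap operators are really what you need, the fix is a different index type. A sketch for the same example table:

-- GIN supports the array operators that btree cannot serve
CREATE INDEX ON tbl USING gin(list);

-- a query like this can now use the GIN index
EXPLAIN SELECT * FROM tbl WHERE list @> ARRAY[1, 2];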

NULL index entries

The last fairly common problem is "littering" the index with entirely NULL entries, that is, entries where the indexed expression in every column is NULL. Such entries serve no practical purpose.

Usually they appear when you add an optional FK field (or a similar relation) to a table. You then create an index so that the FK lookups are fast... and there they are. The more rarely the relation is filled in, the more "garbage" lands in the index. Let's simulate it:

CREATE TABLE tbl(
  id
    serial
      PRIMARY KEY
, fk
    integer
);
CREATE INDEX ON tbl(fk);

INSERT INTO tbl(fk)
SELECT
  -- only every 10th row gets a value; the other 90% stay NULL
  CASE WHEN i % 10 = 0 THEN i END
FROM
  generate_series(1, 1000000) i;

In most cases, such an index can be converted into a conditional (partial) one, which also takes up less space:

CREATE INDEX ON tbl(fk) WHERE (fk) IS NOT NULL;
_tmp=# \di+ tbl*
                               List of relations
 Schema |      Name      | Type  |  Owner   |  Table   |  Size   | Description
--------+----------------+-------+----------+----------+---------+-------------
 public | tbl_fk_idx     | index | postgres | tbl      | 36 MB   |
 public | tbl_fk_idx1    | index | postgres | tbl      | 2208 kB |
 public | tbl_pkey       | index | postgres | tbl      | 21 MB   |

To find such indexes, we need to know the actual distribution of the data, that is, read the entire contents of the tables and overlay them with the WHERE conditions of the indexes (we do this using dblink), which can take quite a long time.

-[ RECORD 1 ]--------+--------------
table_name           | tbl
index_name           | tbl_fk_idx
index_size           | 37838848
index_size_humanize  | 36 MB
index_fields         | fk
index_cond           |
table_rec_count      | 1000000
index_rec_count      | 1000000
index_rec_count_null | 900000
index_cover_prc      | 100.00 -- 100% of all table records are covered
index_null_prc       | 90.00  -- of which 90% are NULL "trash"
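If you only need to check one suspected index rather than the whole database, a much simpler one-off query will do. A sketch for our example (my addition, without the article's dblink machinery):

-- tbl_fk_idx has no predicate, so it covers every row of tbl;
-- count how many of its entries are entirely NULL
SELECT
  count(*) AS index_rec_count
, count(*) FILTER (WHERE fk IS NULL) AS index_rec_count_null
FROM tbl;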

I hope this article will help you.