Sunday, 7 August 2022

PostgreSQL DBA Monitoring And Daily Health check Commands

 

PostgreSQL DBA Daily Health Check Activity

 

 

Much like an Oracle DBA, a PostgreSQL DBA performs the activities below as part of a daily database health check.

 

 


 

 

PostgreSQL Monitoring Basics

The key metrics to watch in PostgreSQL monitoring are:

·     Buffer cache performance (cache hits vs disk reads)

·     Number of commits

·     Number of connections

·     Number of sessions

·     Checkpoints and bgwriter statistics

·     Vacuums

·     Locks

·     Replication

·     And last but definitely not least, queries

Generally there are two ways a monitoring setup can collect data:

·     Acquire data from the server log files

·     Acquire data by querying the PostgreSQL system (statistics) views

 

 

 

 

1.  Check if PostgreSQL is running.

 

 

postgres@easytraining:~$ ps -ef | grep postgres

postgres     539       1  0 Jun17 ?        00:01:40 /usr/lib/postgresql/13/bin/postgres -D /var/lib/postgresql/13/main -c config_file=/etc/postgresql/13/main/postgresql.conf

postgres     541     539  0 Jun17 ?        00:00:00 postgres: 13/main: checkpointer

postgres     542     539  0 Jun17 ?        00:00:20 postgres: 13/main: background writer

postgres     543     539  0 Jun17 ?        00:00:21 postgres: 13/main: walwriter

postgres     544     539  0 Jun17 ?        00:01:12 postgres: 13/main: autovacuum launcher

postgres     545     539  0 Jun17 ?        00:00:02 postgres: 13/main: archiver last was 00000001000000000000001F

postgres     546     539  0 Jun17 ?        00:01:07 postgres: 13/main: stats collector

postgres     547     539  0 Jun17 ?        00:00:00 postgres: 13/main: logical replication launcher

root      209928  209925  0 06:32 pts/0    00:00:00 su - postgres

postgres  209929  209928  0 06:32 pts/0    00:00:00 -bash

postgres  209976  209929  0 06:35 pts/0    00:00:00 ps -ef

postgres  209977  209929  0 06:35 pts/0    00:00:00 grep postgres

postgres@easytraining:~$

 

[easyadmin@easypostgresvm ~]$ ps -ef|grep -i post

 

touch_shahique@easytraining:~$ sudo su -

root@easytraining:~# service postgresql status

● postgresql.service - PostgreSQL RDBMS

     Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)

     Active: active (exited) since Fri 2022-06-17 11:00:21 UTC; 1 weeks 5 days ago

    Process: 558 ExecStart=/bin/true (code=exited, status=0/SUCCESS)

   Main PID: 558 (code=exited, status=0/SUCCESS)

        CPU: 1ms

 

Jun 17 11:00:21 easytraining systemd[1]: Starting PostgreSQL RDBMS...

Jun 17 11:00:21 easytraining systemd[1]: Finished PostgreSQL RDBMS.

root@easytraining:~#

 

or

 

[postgres@easypostgresvm ~]$ systemctl status postgresql-13

● postgresql-13.service - PostgreSQL 13 database server

   Loaded: loaded (/usr/lib/systemd/system/postgresql-13.service; enabled; vendor preset: disabled)

   Active: active (running) since Tue 2022-03-08 04:04:53 UTC; 58min ago

     Docs: https://www.postgresql.org/docs/13/static/

 Main PID: 1377 (postmaster)

    Tasks: 8 (limit: 26213)

   Memory: 28.6M

   CGroup: /system.slice/postgresql-13.service

           ├─1377 /usr/pgsql-13/bin/postmaster -D /var/lib/pgsql/13/data/

           ├─1528 postgres: logger

           ├─1547 postgres: checkpointer

           ├─1548 postgres: background writer

           ├─1549 postgres: walwriter

           ├─1550 postgres: autovacuum launcher

           ├─1551 postgres: stats collector

           └─1552 postgres: logical replication launcher

[postgres@easypostgresvm ~]$

 

postgres@easytraining:~$

postgres@easytraining:~$ psql -U postgres

psql (13.7 (Debian 13.7-0+deb11u1))

Type "help" for help.

 

postgres=# \l

                                  List of databases

      Name       |  Owner   | Encoding  | Collate |  Ctype  |    Access privileges   

-----------------+----------+-----------+---------+---------+-------------------------

 dbname2         | postgres | UTF8      | C.UTF-8 | C.UTF-8 | =Tc/postgres           +
                 |          |           |         |         | postgres=CTc/postgres  +
                 |          |           |         |         | app_readonly=c/postgres
 easydb          | postgres | UTF8      | C.UTF-8 | C.UTF-8 |
 easynew         | postgres | SQL_ASCII | C       | C       |
 easynew3        | postgres | UTF8      | C.UTF-8 | C.UTF-8 |
 easytbs         | postgres | UTF8      | C.UTF-8 | C.UTF-8 |
 example_backups | postgres | UTF8      | C.UTF-8 | C.UTF-8 |
 interndb        | postgres | UTF8      | C.UTF-8 | C.UTF-8 |
 mydb            | postgres | UTF8      | C.UTF-8 | C.UTF-8 | =Tc/postgres           +
                 |          |           |         |         | postgres=CTc/postgres  +
                 |          |           |         |         | myuser=CTc/postgres    +
                 |          |           |         |         | app_readonly=c/postgres
 postgres        | postgres | UTF8      | C.UTF-8 | C.UTF-8 |
 template0       | postgres | UTF8      | C.UTF-8 | C.UTF-8 | =c/postgres            +
                 |          |           |         |         | postgres=CTc/postgres
 template1       | postgres | UTF8      | C.UTF-8 | C.UTF-8 | =c/postgres            +
                 |          |           |         |         | postgres=CTc/postgres

(11 rows)

 

postgres=#


 

postgres=# \l+

 

postgres=# SHOW SERVER_VERSION;

        server_version       

------------------------------

 13.7 (Debian 13.7-0+deb11u1)

(1 row)

 

postgres=#

 

[postgres@easypostgresvm ~]$ ps -ef|grep -i postmaster

postgres  7188     1  0 13:10 ?        00:00:00 /usr/pgsql-13/bin/postmaster -D /var/lib/pgsql/13/data/

postgres  8005  7646  0 14:43 pts/1    00:00:00 grep --color=auto -i postmaster

[postgres@easypostgresvm ~]$

 

 

postgres=#

postgres=# SELECT current_database();

 current_database

------------------

 postgres

(1 row)

 

postgres=# \c easydb

You are now connected to database "easydb" as user "postgres".

easydb=# SELECT current_database();

 current_database

------------------

 easydb

(1 row)

 

easydb=#


 

 

easydb=# SELECT

    pg_database.datname,

    pg_size_pretty(pg_database_size(pg_database.datname)) AS size

    FROM pg_database;

     datname     |  size  

-----------------+---------

 postgres        | 8221 kB

 template1       | 7901 kB

 template0       | 7753 kB

 example_backups | 7901 kB

 easytbs         | 7753 kB

 dbname2         | 8029 kB

 mydb            | 7901 kB

 easydb          | 8061 kB

 interndb        | 8109 kB

 easynew         | 7753 kB

 easynew3        | 7901 kB

(11 rows)

 

easydb=#

Or

 

postgres=# SELECT pg_database.datname as "database_name", pg_database_size(pg_database.datname)/1024/1024/1024 AS size_in_GB FROM pg_database ORDER by size_in_GB DESC;

  database_name  | size_in_gb

-----------------+------------

 postgres        |          0

 template1       |          0

 template0       |          0

 example_backups |          0

 easytbs         |          0

 dbname2         |          0

 mydb            |          0

 easydb          |          0

 interndb        |          0

 easynew         |          0

 easynew3        |          0

(11 rows)

 

postgres=#
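Note that the size_in_GB query above uses integer division, so every database smaller than 1 GB reports 0. A small Python sketch (hypothetical helper names) contrasting that truncation with pg_size_pretty-style output:

```python
# Why the size_in_GB query reports 0 for small databases: integer division
# truncates, exactly like pg_database_size(...)/1024/1024/1024 in SQL.
def size_in_gb(size_bytes):
    return size_bytes // (1024 ** 3)

# Coarse sketch of pg_size_pretty-style output, for comparison.
def size_pretty(size_bytes):
    for unit, factor in (("GB", 1024 ** 3), ("MB", 1024 ** 2), ("kB", 1024)):
        if size_bytes >= factor:
            return f"{size_bytes // factor} {unit}"
    return f"{size_bytes} bytes"

print(size_in_gb(8221 * 1024))   # -> 0 (an 8221 kB database truncates to 0 GB)
print(size_pretty(8221 * 1024))  # -> 8 MB
```

This is why the pg_size_pretty() variant of the query is usually more useful on small databases.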

 

 

easydb=# select setting from  pg_settings where name='data_directory';

           setting          

-----------------------------

 /var/lib/postgresql/13/main

(1 row)

 

easydb=#

easydb=# \q

 

C:\Program Files\PostgreSQL\13\bin>pg_ctl -D "C:/Program Files/PostgreSQL/13/data" status

pg_ctl: no server running

 

or

postgres@easytraining:~$ ps -ef|grep post

postgres     539       1  0 Jun17 ?        00:01:40 /usr/lib/postgresql/13/bin/postgres -D /var/lib/postgresql/13/main -c config_file=/etc/postgresql/13/main/postgresql.conf

postgres     541     539  0 Jun17 ?        00:00:00 postgres: 13/main: checkpointer

postgres     542     539  0 Jun17 ?        00:00:20 postgres: 13/main: background writer

postgres     543     539  0 Jun17 ?        00:00:21 postgres: 13/main: walwriter

postgres     544     539  0 Jun17 ?        00:01:12 postgres: 13/main: autovacuum launcher

postgres     545     539  0 Jun17 ?        00:00:02 postgres: 13/main: archiver last was 00000001000000000000001F

postgres     546     539  0 Jun17 ?        00:01:07 postgres: 13/main: stats collector

postgres     547     539  0 Jun17 ?        00:00:00 postgres: 13/main: logical replication launcher

root      209928  209925  0 06:32 pts/0    00:00:00 su - postgres

postgres  209929  209928  0 06:32 pts/0    00:00:00 -bash

postgres  210094  209929  0 06:39 pts/0    00:00:00 /usr/lib/postgresql/13/bin/psql -U postgres

root      210145  210045  0 06:44 pts/1    00:00:00 sudo su - postgres

root      210146  210145  0 06:44 pts/1    00:00:00 su - postgres

postgres  210147  210146  0 06:44 pts/1    00:00:00 -bash

postgres  210162     539  0 06:45 ?        00:00:00 postgres: 13/main: postgres easydb [local] idle

postgres  210200  210147  0 06:49 pts/1    00:00:00 ps -ef

postgres  210201  210147  0 06:49 pts/1    00:00:00 grep post

postgres@easytraining:~$ cd /usr/lib/postgresql/13/bin

postgres@easytraining:/usr/lib/postgresql/13/bin$ pg_ctl -D "/var/lib/postgresql/13/main" status

-bash: pg_ctl: command not found

postgres@easytraining:/usr/lib/postgresql/13/bin$ ./pg_ctl -D "/var/lib/postgresql/13/main" status

pg_ctl: server is running (PID: 539)

/usr/lib/postgresql/13/bin/postgres "-D" "/var/lib/postgresql/13/main" "-c" "config_file=/etc/postgresql/13/main/postgresql.conf"

postgres@easytraining:/usr/lib/postgresql/13/bin$

 

postgres=> \conninfo

You are connected to database "postgres" as user "easyadmin@easypostgresserver" on host "easypostgresserver.postgres.database.azure.com" (address "13.75.33.21") at port "5432".

SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)

postgres=>

 

easydb=#

easydb=# SELECT xact_start, state, usename FROM pg_stat_activity;

          xact_start          | state  | usename 

------------------------------+--------+----------

                              |        |

                              |        | postgres

 2022-07-02 06:55:43.39248+00 | active | postgres

                              |        |

                              |        |

                              |        |

(6 rows)

 

easydb=#

 

2.  Monitor the log file for errors (using tail -f)

 

               Locate the config file and data directory

-----------------------

psql -U postgres -c 'SHOW config_file'

easydb=#

easydb=# SHOW config_file;

               config_file              

-----------------------------------------

 /etc/postgresql/13/main/postgresql.conf

(1 row)

 

psql -U postgres -c 'SHOW data_directory'

easydb=# SHOW data_directory;

       data_directory       

-----------------------------

 /var/lib/postgresql/13/main

(1 row)

 

easydb=#

 

Configuring PostgreSQL to Generate Log Output

--------------------------------------

With the postgresql.conf file open, scroll down to the ERROR REPORTING AND LOGGING section and

you’ll likely see a number of configuration options commented out.

The most critical of these settings are log_destination and logging_collector.

Below are the recommended settings, though feel free to change these to suit your own needs:

 

#------------------------------------------------------------------------------

# ERROR REPORTING AND LOGGING

#------------------------------------------------------------------------------

 

# - Where to Log -

 

log_destination = 'csvlog'              # Valid values are combinations of

                                        # stderr, csvlog, syslog, and eventlog,

                                        # depending on platform.  csvlog

                                        # requires logging_collector to be on.

 

# This is used when logging to stderr:

logging_collector = on          # Enable capturing of stderr and csvlog

                                        # into log files. Required to be on for

                                        # csvlogs.

                                        # (change requires restart)

 

# These are only used if logging_collector is on:

log_directory = 'pg_log'                # directory where log files are written,

                                        # can be absolute or relative to PGDATA

log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # log file name pattern,

                                        # can include strftime() escapes

 

 

Here we’re telling Postgres to generate logs in CSV format and to output them to the pg_log directory (within the data directory).

We’ve also uncommented the log_filename setting so the log files get proper names that include timestamps.

 

 

cd /var/lib/postgresql/13/main/pg_log

tail /var/log/postgresql/postgresql-13-main.log

 

root@postgres-tutorial:/var/log# cd postgresql

root@postgres-tutorial:/var/log/postgresql# ls -ltr

total 12

-rw-r----- 1 postgres adm 9649 Jun  9 12:48 postgresql-13-main.log

root@postgres-tutorial:/var/log/postgresql# pwd

/var/log/postgresql

root@postgres-tutorial:/var/log/postgresql#

 

tail -200f <log_name>

tail -10f /var/log/messages

 

3.  Check for blocking queries

 

 

SELECT pid, usename, pg_blocking_pids(pid) AS blocked_by, query AS blocked_query

FROM pg_stat_activity WHERE cardinality(pg_blocking_pids(pid)) > 0;

 

 pid | usename | blocked_by | blocked_query

-----+---------+------------+---------------

(0 rows)

 

 

4.  Check query information on the database

 

 

select pid,usename,application_name,client_addr,backend_start,xact_start,query_start,state,backend_xid,backend_xmin,query,backend_type from pg_stat_activity where state='active';

 

 

select pid,usename,application_name,client_addr,backend_start,xact_start,query_start,state,backend_xid,backend_xmin,query,backend_type from pg_stat_activity where datid=411547739 and usename ='achix' and state='active';

 

or

 

pg_stat_statements records the queries that are run against your database.

 

 

A more refined way to monitor statements is via the pg_stat_statements contrib extension, mentioned earlier. On a recent Linux system (Ubuntu 17.10, PostgreSQL 9.6), this can be installed fairly easily:

 

Debian/Ubuntu:

# Replace 9.X with your installed Postgres version:

sudo apt-get install postgresql-contrib-9.X

RedHat/CentOS/SLES:

sudo yum install postgresql-contrib

 

 

root@easytraining:~# sudo apt-get install postgresql-contrib-14

Reading package lists... Done

Building dependency tree... Done

Reading state information... Done

E: Unable to locate package postgresql-contrib-14

root@easytraining:~#
(The versioned package is not found here; on current Debian/Ubuntu packaging the contrib modules ship with the main server package, e.g. postgresql-13, rather than a separate versioned contrib package.)

 

 

These parameters must be set in postgresql.conf. Typical usage might be:

 

 

 

# postgresql.conf

shared_preload_libraries = 'pg_stat_statements'

 

compute_query_id = on

pg_stat_statements.max = 10000

pg_stat_statements.track = all

 

 

testdb=# create extension pg_stat_statements ;

CREATE EXTENSION

testdb=# \d pg_stat_statements

testdb=# alter system set shared_preload_libraries TO 'pg_stat_statements';

ALTER SYSTEM

testdb=# \q

postgres@achix-dell:~$ sudo systemctl restart postgresql

postgres@achix-dell:~$ psql testdb

psql (9.6.7)

Type "help" for help.

 

testdb=# \d pg_stat_statements

 

 

Verify that pg_stat_statements returns data

As a superuser, run the following statements:

 

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

SELECT calls, query FROM pg_stat_statements LIMIT 1;

 

 

 

 

 

SELECT pg_stat_statements_reset();

 

SELECT * FROM pg_stat_statements;

 

SELECT query, calls, total_exec_time, rows, 100.0 * shared_blks_hit /

               nullif(shared_blks_hit + shared_blks_read, 0) AS hit_percent

          FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 5;

 

 

-- On PostgreSQL 12 and older, the column is named total_time
-- instead of total_exec_time.
SELECT
  (total_exec_time / 1000 / 60) as total,
  (total_exec_time / calls) as avg,
  query
FROM pg_stat_statements
ORDER BY 1 DESC
LIMIT 100;

 

             

 

5.  Check whether any maintenance (vacuum/autovacuum) is running and check its status

 

 

ps -axww | grep autovacuum

 

SELECT name, setting FROM pg_settings WHERE name='autovacuum';

 

 

-- Get table's vacuum/analyze status

                                                                       

 SELECT psut.relname,

     to_char(psut.last_vacuum, 'YYYY-MM-DD HH24:MI') as last_vacuum,

     to_char(psut.last_autovacuum, 'YYYY-MM-DD HH24:MI') as last_autovacuum,

     to_char(pg_class.reltuples, '9G999G999G999') AS n_tup,

     to_char(psut.n_dead_tup, '9G999G999G999') AS dead_tup,

     to_char(CAST(current_setting('autovacuum_vacuum_threshold') AS bigint)

         + (CAST(current_setting('autovacuum_vacuum_scale_factor') AS numeric)

            * pg_class.reltuples), '9G999G999G999') AS av_threshold,

     CASE

         WHEN CAST(current_setting('autovacuum_vacuum_threshold') AS bigint)

             + (CAST(current_setting('autovacuum_vacuum_scale_factor') AS numeric)

                * pg_class.reltuples) < psut.n_dead_tup

         THEN '*'

         ELSE ''

     END AS expect_av

 FROM pg_stat_user_tables psut

     JOIN pg_class on psut.relid = pg_class.oid

 ORDER BY 1;

 relname  |   last_vacuum    | last_autovacuum |     n_tup      |    dead_tup    |  av_threshold  | expect_av

----------+------------------+-----------------+----------------+----------------+----------------+-----------

 accounts |                  |                 |              0 |              0 |             50 |

 student  |                  |                 |              0 |              0 |             50 |

 test     | 2022-06-11 12:28 |                 |              6 |              0 |             51 |

(3 rows)

 

postgres=#
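The av_threshold column in the report above implements the standard autovacuum trigger formula. A minimal sketch of the same arithmetic, assuming the stock defaults autovacuum_vacuum_threshold = 50 and autovacuum_vacuum_scale_factor = 0.2:

```python
# Autovacuum triggers on a table when dead tuples exceed:
#   autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
def av_threshold(reltuples, threshold=50, scale_factor=0.2):
    return threshold + scale_factor * reltuples

def expect_autovacuum(n_dead_tup, reltuples):
    # Mirrors the CASE ... THEN '*' column in the report above
    return n_dead_tup > av_threshold(reltuples)

print(av_threshold(6))          # -> 51.2 (the "51" shown for the test table)
print(expect_autovacuum(0, 6))  # -> False: no dead tuples, no vacuum expected
```

Tables flagged with '*' are overdue for autovacuum; persistent flags suggest tuning the per-table autovacuum settings.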

 

 

SELECT relname, last_vacuum, last_autovacuum FROM pg_stat_user_tables;

 

The PostgreSQL statistics collector

 

The PostgreSQL statistics collector enables users to access most of the metrics described in Part 1,

by querying several key statistics views, including:

 

pg_stat_database (displays one row per database)

pg_stat_user_tables (one row per table in the current database)

pg_stat_user_indexes (one row per index in the current database)

pg_stat_bgwriter (only shows one row, since there is only one background writer process)

pg_statio_user_tables (one row per table in the current database)

The collector aggregates statistics on a per-table, per-database, or per-index basis, depending on the metric.

 

SELECT * FROM pg_stat_user_tables;

SELECT * FROM pg_statio_user_tables;

-- the cast avoids integer division (which would truncate the ratio to 0),
-- and nullif() guards against division by zero on an idle database
SELECT sum(heap_blks_read) as blocks_read, sum(heap_blks_hit) as blocks_hit, sum(heap_blks_hit)::numeric / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) as hit_ratio FROM pg_statio_user_tables;
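The arithmetic behind hit_ratio, as a small sketch (the function name is illustrative): on a healthy OLTP workload the ratio usually stays above ~0.99, and a guard is needed before any blocks have been touched.

```python
# Buffer cache hit ratio: fraction of heap block requests served from
# shared buffers (blocks_hit) rather than read from disk (blocks_read).
def cache_hit_ratio(blocks_hit, blocks_read):
    total = blocks_hit + blocks_read
    if total == 0:
        return None  # no I/O recorded yet (same role as nullif() in the SQL)
    return blocks_hit / total

print(cache_hit_ratio(9900, 100))  # -> 0.99, a healthy ratio
print(cache_hit_ratio(0, 0))       # -> None
```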

 

SELECT * FROM pg_stat_user_indexes;

SELECT * FROM pg_stat_bgwriter;

 

=========To check table size==============

SELECT

       relname AS "table_name",

       pg_size_pretty(pg_table_size(C.oid)) AS "table_size"

FROM

       pg_class C

LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)

WHERE nspname NOT IN ('pg_catalog', 'information_schema') AND nspname !~ '^pg_toast' AND relkind IN ('r')

ORDER BY pg_table_size(C.oid)

DESC LIMIT 5;

 

Or

 

-- returns the size for each table

SELECT schemaname || '.' || relname,

       pg_size_pretty(pg_table_size(schemaname || '.' || relname))

  FROM pg_stat_user_tables;

 

Or

 

-- returns the no. of seq. scans and the percentage of seq. scans for each table

SELECT schemaname || '.' || relname,

        seq_scan,

        round(seq_scan::numeric*100/(seq_scan+idx_scan), 1)

 FROM pg_stat_user_tables

WHERE seq_scan+idx_scan > 0;

 

-----index size -----

 

-- returns the size for each index

SELECT schemaname || '.' || indexrelname,

       pg_size_pretty(pg_total_relation_size(indexrelid))

  FROM pg_stat_user_indexes;

 

6.  How many WAL files are generated per day

 

 

 

 

 

WAL files are stored in $PGDATA/pg_wal (pg_wal was renamed from pg_xlog in Postgres v10). Typically these are 16 MB files with a 24-character filename made of hex numbers (0-9, A-F).

 

 

Note: Spikes in WAL file count are typically caused by VACUUM-like maintenance tasks that create a large volume of changes, or by temporary tables and objects created in a short period of time. The count should slowly come back down to normal levels. Such spikes typically cause a lot of disk I/O and CPU activity, slowing application queries until things are back to normal.

 

 

 

Queries

PostgreSQL does not have a built-in function or view that directly returns WAL file counts. You can, however, use this query:

-- on PostgreSQL 9.x the directory is pg_xlog instead of pg_wal
SELECT COUNT(*) FROM pg_ls_dir('pg_wal') WHERE pg_ls_dir ~ '^[0-9A-F]{24}';

 

that does the job of getting the count of WAL files. Note that you’ll need superuser privileges or explicit GRANTs to do a pg_ls_dir.
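To turn a WAL file count into a daily volume figure, multiply by the segment size (16 MB unless wal_segment_size was changed at initdb time). A back-of-envelope sketch:

```python
# Rough daily WAL volume: completed segments per day * segment size.
WAL_SEGMENT_BYTES = 16 * 1024 * 1024  # default wal_segment_size

def wal_mb_per_day(segments_per_day, segment_bytes=WAL_SEGMENT_BYTES):
    return segments_per_day * segment_bytes / (1024 * 1024)

print(wal_mb_per_day(96))  # -> 1536.0 MB/day (one segment every 15 minutes)
```

Trending this number day over day is what actually reveals the spikes described above.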

 

 

postgres=#

postgres=# SELECT * FROM pg_stat_archiver;

 archived_count |    last_archived_wal     |      last_archived_time       | failed_count | last_failed_wal | last_failed_time |          stats_reset         

----------------+--------------------------+-------------------------------+--------------+-----------------+------------------+-------------------------------

             31 | 00000001000000000000001C | 2022-06-30 10:00:37.547815+00 |            0 |                 |                  | 2022-06-10 06:28:45.325958+00

(1 row)

 

postgres=#

 

 

 

 

 

Since PostgreSQL 9.4, the server collects information about the two main events of the archiving process: success and failure. In particular, for both operations, the pg_stat_archiver catalogue view reports:

 

count (since cluster initialisation or the last time statistics were reset);

WAL name of the last operation;

time of the last operation.

Moreover, you can reset the statistics of the archiver with the following SQL statement:

 

-- Requires superuser privileges

SELECT pg_stat_reset_shared('archiver');

 

 

 

SELECT *,

    current_setting('archive_mode')::BOOLEAN

        AND (last_failed_wal IS NULL

            OR last_failed_wal <= last_archived_wal)

        AS is_archiving,

    CAST (archived_count AS NUMERIC)

        / EXTRACT (EPOCH FROM age(now(), stats_reset))

        AS current_archived_wals_per_second

FROM pg_stat_archiver;
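The current_archived_wals_per_second expression above is simply archived_count divided by the age of the statistics. The same computation on the pg_stat_archiver numbers shown earlier (31 segments archived since the 2022-06-10 stats reset):

```python
from datetime import datetime, timezone

# archived_count / seconds since stats_reset, as in the SQL above.
def archived_wals_per_second(archived_count, stats_reset, now):
    return archived_count / (now - stats_reset).total_seconds()

stats_reset = datetime(2022, 6, 10, 6, 28, tzinfo=timezone.utc)
now = datetime(2022, 6, 30, 10, 0, tzinfo=timezone.utc)
rate = archived_wals_per_second(31, stats_reset, now)
print(f"{rate:.2e} WALs/s")  # roughly one archived segment every ~16 hours
```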

 

  

show wal_level;

 

show wal_keep_size;

 

select name,setting,unit from pg_settings where name in ('wal_level','wal_keep_size','wal_keep_segments','wal_segment_size','archive_mode','archive_command','archive_timeout');

      


 

7.  To check session/SQL details and kill a session

 

pg_terminate_backend() AND pg_cancel_backend()

 

1) Kill all connections to a specified database.

 

-- List all available databases

 

postgres=>\l

 

postgres=> SELECT pg_terminate_backend(pg_stat_activity.pid)

           FROM   pg_stat_activity

           WHERE  datname = 'DATABASE NAME'

              AND pid <> pg_backend_pid();

2) Kill all connections of a specified user.

 

postgres=> SELECT pg_terminate_backend(pg_stat_activity.pid)

             FROM pg_stat_activity

            WHERE usename = 'testuser'

              AND pid <> pg_backend_pid();

3) Kill all sessions of the current database.

 

postgres=> SELECT pg_terminate_backend(pg_stat_activity.pid)

             FROM pg_stat_activity

            WHERE datname = current_database()

              AND pid <> pg_backend_pid();

4) Kill the connection with a specified pid.

 

postgres=> SELECT pg_terminate_backend(pid);  -- substitute the target backend's pid

5) Cancel a running SQL statement in a session without killing the session.

 

postgres=> SELECT pg_cancel_backend( pid );

It is useful when a running SQL statement is taking longer than expected.

 

 

We can determine the specific SQL statement for each ID:

 

8.  Check memory and server parameter settings (the Oracle SGA equivalent)

 

   postgres=# SHOW ALL;

 

select * from pg_settings where name like '%auto%';

 

select * from pg_settings where name like '%log%';

 

show log_destination;

show config_file;

SHOW max_connections;

show hba_file;

 

easydb=#

easydb=# select name,setting,unit from pg_settings where name in ('wal_level','wal_keep_size','wal_keep_segments','wal_segment_size','archive_mode','archive_command','archive_timeout');

       name       | setting  | unit

------------------+----------+------

 archive_command  |          |

 archive_mode     | on       |

 archive_timeout  | 0        | s

 wal_keep_size    | 0        | MB

 wal_level        | replica  |

 wal_segment_size | 16777216 | B

(6 rows)

 

easydb=#

postgres=#

postgres=# show wal_level;

 wal_level

-----------

 replica

(1 row)

 

postgres=# show max_wal_size;

 max_wal_size

--------------

 1GB

(1 row)

 

postgres=#

 

postgres=# show archive_mode;

 archive_mode

--------------

 off

(1 row)

 

postgres=# show archive_command;

 archive_command

-----------------

 (disabled)

(1 row)

 

postgres=# show archive_timeout;

 archive_timeout

-----------------

 0

(1 row)

 

postgres=#

 

postgres=# select name,setting from pg_settings where name like 'archive%' ;

          name           |                     setting                     

-------------------------+-------------------------------------------------

 archive_cleanup_command |

 archive_command         | cp %p /var/lib/postgresql/archive/archive_wal%f

 archive_mode            | on

 archive_timeout         | 3600

(4 rows)

 

postgres=#

 

STEP 7: Force a log switch using pg_switch_wal and check whether archive files are being generated.

 

postgres=# select pg_switch_wal();

pg_switch_wal

---------------

0/1D392648

(1 row)

 

postgres=# select pg_switch_wal();

pg_switch_wal

 

---------------

0/1E000000

(1 row)
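A WAL location such as 0/1D392648 is a 64-bit position split into two hex halves; subtracting two locations gives the WAL bytes written between them, which is what pg_wal_lsn_diff() computes server-side. A sketch with hypothetical helper names:

```python
# Parse 'high/low' WAL locations and diff them in bytes.
def lsn_to_bytes(lsn):
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) + int(lo, 16)

def lsn_diff(lsn_a, lsn_b):
    return lsn_to_bytes(lsn_a) - lsn_to_bytes(lsn_b)

# WAL written between the two pg_switch_wal() calls above:
print(lsn_diff("0/1E000000", "0/1D392648"))  # -> 13031864 bytes (~12.4 MB)
```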

STEP 8: Now, verify archive mode settings.

 

bash-4.1$ ls -lrth /scratch/postgres/backup/archive

total 48M

-rwx------ 1 postgres postgres 16M Nov  7 09:05 archive000000010000000000000006

-rw------- 1 postgres postgres 16M Nov  7 09:06 archive000000010000000000000007

-rw------- 1 postgres postgres 16M Nov  7 09:07 archive000000010000000000000008

 

postgres=# show archive_mode;

 archive_mode

--------------

 on

(1 row)

 

 

 

postgres=# SELECT * FROM pg_stat_archiver;

 archived_count |    last_archived_wal     |      last_archived_time       | failed_count | last_failed_wal | last_failed_time |          stats_reset         

----------------+--------------------------+-------------------------------+--------------+-----------------+------------------+-------------------------------

             30 | 00000001000000000000001B | 2022-06-30 06:00:37.438802+00 |            0 |                 |                  | 2022-06-10 06:28:45.325958+00

(1 row)

 

postgres=#

 

 

 

 

 

 

9.  Check the free space at OS level

 

df -g | grep <mount_point>  ----> AIX

df -k | grep <mount_point>  ----> Solaris

df -h | grep <mount_point>  ----> Linux

 

 

 

 

10.  Check that all of last night's backups were successful.

 

 

postgres=# select pg_start_backup('label_goes_here');

SELECT * FROM pg_walfile_name_offset(pg_stop_backup());  -- pg_xlogfile_name_offset on 9.x

 

 

 

11.  Check that tablespaces are not more than 95% used.

 

 

 

 

easydb=> \db

          List of tablespaces

    Name    |     Owner     | Location

------------+---------------+----------

 pg_default | cloudsqladmin |

 pg_global  | cloudsqladmin |

(2 rows)

 

easydb=> \db+

                                    List of tablespaces

    Name    |     Owner     | Location | Access privileges | Options |  Size  | Description

------------+---------------+----------+-------------------+---------+--------+-------------

 pg_default | cloudsqladmin |          |                   |         | 48 MB  |

 pg_global  | cloudsqladmin |          |                   |         | 559 kB |

(2 rows)

 

easydb=>  SELECT d.datname

  FROM pg_database d,

  (

    SELECT pg_tablespace_databases(oid) AS datoid

      FROM pg_tablespace t

     WHERE t.spcname = 'tablespace_name'  -- substitute the tablespace name

  ) t

WHERE t.datoid = d.oid;

 datname

---------

(0 rows)

 

easydb=>  SELECT t.spcname AS "Tablespace",

                  array_to_string(

                    ARRAY(

                        (

                           SELECT datname

                             FROM pg_database

                            WHERE oid IN ( SELECT pg_tablespace_databases(t.oid) AS datoid )

                        ORDER BY 1

                        )

                      ),

                      ','

                  ) AS "Database(s)"

             FROM pg_tablespace t
            WHERE t.spcname != 'pg_global'
         ORDER BY 1;

 Tablespace |                            Database(s)

------------+-------------------------------------------------------------------

 pg_default | cloudsqladmin,easydb,example_backups,postgres,template0,template1

(1 row)

 

easydb=>

 

select spcname,oid , pg_tablespace_location(oid) from pg_tablespace;

select oid,datname from pg_database where datname = 'postgres';

 

select spcname as tablespace_name,
       0 as megs_free,
       0 as megs_alloc,
       pg_tablespace_size(spcname) as megs_used,  -- note: pg_tablespace_size() returns bytes
       0 as Pct_Free,
       0 as Pct_used,
       0 as max_space
from pg_tablespace;

 

 tablespace_name | megs_free | megs_alloc | megs_used | pct_free | pct_used | max_space

---------------+-----------+------------+-----------+----------+----------+-----------

 pg_default    |         0 |          0 |  64930080 |        0 |        0 |         0

 pg_global     |         0 |          0 |    572520 |        0 |        0 |         0

 easytbs1      |         0 |          0 |         0 |        0 |        0 |         0

 easytbs       |         0 |          0 |   7942659 |        0 |        0 |         0

 tbs3          |         0 |          0 |   8225327 |        0 |        0 |         0

 interntbs     |         0 |          0 |   8315439 |        0 |        0 |         0

 temp          |         0 |          0 |      4096 |        0 |        0 |         0

(7 rows)
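Percent-used cannot come from pg_tablespace_size() alone; it needs the capacity of the filesystem holding the tablespace, which only the OS knows. A sketch (hypothetical helper name) pairing the reported bytes with shutil.disk_usage(), using "/" as a stand-in for the real pg_tablespace_location() path:

```python
import shutil

# Percent of the underlying filesystem consumed by a tablespace.
def tablespace_pct_used(used_bytes, total_bytes):
    return round(100.0 * used_bytes / total_bytes, 1)

# Capacity of the filesystem holding the tablespace; substitute the
# pg_tablespace_location() path for "/" in practice.
total = shutil.disk_usage("/").total
print(tablespace_pct_used(8225327, total))  # tbs3's bytes vs volume capacity
```

Alerting on this value crossing 95% is the practical version of the check in this section.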

 


 

postgres=# \db+

                                                   List of tablespaces

    Name    |  Owner   |                 Location                  | Access privileges | Options |  Size   | Description

------------+----------+-------------------------------------------+-------------------+---------+---------+-------------

 easytbs    | postgres | /var/lib/postgresql/easytbs               |                   |         | 7757 kB |

 easytbs1   | postgres | /var/lib/postgresql/easytab               |                   |         | 0 bytes |

 interntbs  | postgres | /var/lib/postgresql/easytbs/internopening |                   |         | 8121 kB |

 pg_default | postgres |                                           |                   |         | 62 MB   |

 pg_global  | postgres |                                           |                   |         | 559 kB  |

 tbs3       | postgres | /var/lib/postgresql/tbs1                  |                   |         | 8033 kB |

 temp       | postgres | /var/lib/postgresql/temptablespace        |                   |         | 388 kB  |

(7 rows)

 

postgres=#

 

postgres=#

postgres=# SELECT * FROM pg_tablespace;

  oid  |  spcname   | spcowner | spcacl | spcoptions

-------+------------+----------+--------+------------

  1663 | pg_default |       10 |        |

  1664 | pg_global  |       10 |        |

 16426 | easytbs1   |       10 |        |

 16442 | easytbs    |       10 |        |

 16385 | tbs3       |       10 |        |

 16459 | interntbs  |       10 |        |

 16533 | temp       |       10 |        |

(7 rows)

 

postgres=# show temp_tablespaces;

 temp_tablespaces

------------------

 temp

(1 row)

 

postgres=#

 

postgres=# select pg_tablespace_size('tbs3');

 pg_tablespace_size

--------------------

            8225327

(1 row)

 

postgres=#

 

SELECT

    pg_size_pretty (

        pg_tablespace_size ('pg_default')

    );
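The same two functions can be combined to list every tablespace with a human-readable size in one pass:

```sql
-- List all tablespaces with human-readable sizes, largest first
SELECT spcname,
       pg_size_pretty(pg_tablespace_size(spcname)) AS size
FROM pg_tablespace
ORDER BY pg_tablespace_size(spcname) DESC;
```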

 

 

 

To Check Temp Tablespace

 

postgres@easytraining:~$ mkdir temptablespace

postgres@easytraining:~$ cd temptablespace

postgres@easytraining:~/temptablespace$ pwd

/var/lib/postgresql/temptablespace

postgres@easytraining:~/temptablespace$ psql

psql (13.7 (Debian 13.7-0+deb11u1))

Type "help" for help.

 

postgres=# CREATE TABLESPACE temp LOCATION '/var/lib/postgresql/temptablespace';

CREATE TABLESPACE

postgres=# ALTER SYSTEM SET temp_tablespaces = 'temp';

ALTER SYSTEM

postgres=# SELECT pg_reload_conf();

 pg_reload_conf

----------------

 t

(1 row)

 

postgres=#

 

 

WITH tablespaces AS (

    SELECT

        spcname AS tbl_name,

        coalesce(nullif(pg_tablespace_location(oid), ''), (current_setting('data_directory') || '/base')) AS tbl_location

    FROM pg_tablespace

),

tablespace_suffix AS (

    SELECT

        tbl_name,

        tbl_location || '/pgsql_tmp' AS path

    FROM tablespaces

    WHERE tbl_name = 'pg_default'

    UNION ALL

    SELECT

        tbl_name,

        tbl_location || '/' || path || '/pgsql_tmp'

    FROM tablespaces, LATERAL pg_ls_dir(tbl_location) AS path

    WHERE path ~ ('PG_' || substring(current_setting('server_version') FROM '^(?:\d\.\d\d?|\d+)'))

),

stat AS (

    SELECT

        substring(file from '\d+\d') AS pid,

        tbl_name AS temp_tablespace,

        pg_size_pretty(sum(pg_size_bytes(size))) AS size

    FROM tablespace_suffix, LATERAL pg_ls_dir(path, true, false) AS file,

    LATERAL pg_size_pretty((pg_stat_file(path || '/' || file, true)).size) AS size

    GROUP BY pid, temp_tablespace

)

SELECT

    a.datname,

    a.pid,

    coalesce(size, '0 MB') AS temp_size_written,

    coalesce(temp_tablespace, 'not using temp files') AS temp_tablespace,

    a.application_name,

    a.client_addr,

    a.usename,

    (clock_timestamp() - a.query_start)::interval(0) AS duration,

    (clock_timestamp() - a.state_change)::interval(0) AS duration_since_state_change,

    trim(trailing ';' FROM left(query, 1000)) AS query,

    a.state,

    a.wait_event_type || ':' || a.wait_event AS wait

FROM pg_stat_activity AS a

LEFT JOIN stat ON a.pid = stat.pid::int

WHERE a.pid != pg_backend_pid()

ORDER BY temp_size_written DESC;

 

 

 

 

 

[postgres@easypostgresvm data]$ mkdir tbs1

[postgres@easypostgresvm data]$ chmod 775 tbs1

[postgres@easypostgresvm data]$

[postgres@easypostgresvm data]$ psql

psql (13.1)

Type "help" for help.

 

postgres=# create tablespace tbs1 location '/tmp/data/tbs1';

CREATE TABLESPACE

postgres=#

 

 

1) Archive log filesystem & trace mount (<70%)

 

df -g |grep /ora/database_name/arch

df -g |grep /ora/database_name/trace

 

"No space left on device"

The following can cause storage to fill up:

·        Temporary tables or files that are created by PostgreSQL transactions

·        Data files

·        Write ahead logs (WAL logs)

·        Replication slots

·        DB logs (error files) that are retained for too long

·        Other DB or Linux files that support the consistent state of the RDS DB instance

 

 

Solution :

 

Please check for the following:

·        Size of the DB log files

·        Presence of temporary files

·        Constant increase in transaction logs disk usage

  • Replication slot:
    • Physical replication slots are created by cross-Region read replicas or same-Region read replicas only if they are running on PostgreSQL 14.1 and higher versions
    • Logical replication slots are created for a replica or subscriber
  • Bloat or improper removal of dead rows
  • Presence of orphaned files

Check for temporary files

 

SELECT datname, temp_files AS "Temporary files",temp_bytes AS "Size of temporary files" FROM pg_stat_database ;
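Beyond checking past temp usage, temp file creation can be logged as it happens. A minimal sketch, assuming superuser privileges (a value of 0 logs every temp file; a positive value is a size threshold in kB):

```sql
-- Log every temporary file that gets created (0 = no size threshold)
ALTER SYSTEM SET log_temp_files = 0;
SELECT pg_reload_conf();
```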

 

 

Run this query to confirm the presence and size of replication slots:

 

PostgreSQL v9:

 

psql=> SELECT slot_name, pg_size_pretty(pg_xlog_location_diff(pg_current_xlog_location(),restart_lsn)) AS

replicationSlotLag, active FROM pg_replication_slots ;

 

PostgreSQL v10 and later:

 

psql=> SELECT slot_name, pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(),restart_lsn)) AS replicationSlotLag,

active FROM pg_replication_slots ;

 

postgres=# SELECT slot_name, pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(),restart_lsn)) AS replicationSlotLag,

active FROM pg_replication_slots ;

 slot_name | replicationslotlag | active

-----------+--------------------+--------

(0 rows)

 

 

postgres=#

 

After you identify the replication slot that isn't being consumed (with an active state that is False), drop the replication slot by running this query:

 

psql=> SELECT pg_drop_replication_slot('Your_slotname_name');

 

For example, to drop a specific slot:

 

psql=> SELECT pg_drop_replication_slot('xc36ujql35djp_00013322_907c1e0a_9f8b_4c13_89ea_ef0ea1cf143d');

 

Check the status of cross-Region read replicas

 

psql=> SELECT * FROM pg_replication_slots;
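On the primary, replication lag per connected standby can also be estimated from pg_stat_replication (column names assume PostgreSQL 10 and later):

```sql
-- Approximate replay lag, in bytes, for each connected standby
SELECT client_addr, state,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn)) AS replay_lag
FROM pg_stat_replication;
```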

 

To check for the estimated number of dead tuples, use the pg_stat_all_tables view. For more information, see the PostgreSQL documentation for pg_stat_all_tables view. In this example, there are 1999952 dead tuples (n_dead_tup):

 

psql => SELECT * FROM pg_stat_all_tables WHERE relname='test';
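To rank all user tables by dead tuples instead of checking one table at a time, a query along these lines helps prioritize vacuuming:

```sql
-- Tables with the most dead tuples, with the dead-row percentage
SELECT relname, n_live_tup, n_dead_tup,
       round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1) AS dead_pct
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```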

 

Check for orphaned files

 

Orphaned files can occur when the files are present in the database directory but there are no objects that point to those files. This might happen if your instance runs out of storage or the engine crashes during an operation such as ALTER TABLE, VACUUM FULL, or CLUSTER. To check for orphaned files, follow these steps:

 

1.    Log in to PostgreSQL in each database.

 

2.    Run these queries to assess the used and real sizes.

 

# Size of the database occupied by files

psql=> SELECT pg_size_pretty(pg_database_size('DATABASE_NAME'));

 

# Size of database retrieved by summing the objects (real size)

psql=> SELECT pg_size_pretty(SUM(pg_relation_size(oid))) FROM pg_class;

3.    Note the results. If the difference is significant, then orphaned files might be using storage space.
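Steps 2 and 3 can be combined into a single rough estimate per database. The difference also includes FSM/VM forks and normal overhead, so treat a large value only as a hint to investigate further:

```sql
-- Rough upper bound on space not accounted for by catalogued relations
SELECT pg_size_pretty(
         pg_database_size(current_database())
         - (SELECT sum(pg_relation_size(oid)) FROM pg_class)
       ) AS unaccounted_space;
```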

 

 2) Data filesystem space (<85%); also check the Oracle home size

 

df -g |grep <database_name>

df -g /oracle

 

 

 

 

12.       Check that all crons and scheduled database jobs completed without any error

  

    crontab -l  

 

13.                    To see the size of a database:

 

postgres=# \l

SELECT pg_size_pretty(pg_database_size('database_name'));

 

SELECT pg_size_pretty(pg_database_size('easydb'));

 

or

After connecting with psql, you can see the sizes of all databases with the \l+ command.

 

postgres=# \l+

                                                                    List of databases

      Name       |  Owner   | Encoding  | Collate |  Ctype  |    Access privileges    |  Size   | Tablespace |                Description                

-----------------+----------+-----------+---------+---------+-------------------------+---------+------------+--------------------------------------------

 dbname2         | postgres | UTF8      | C.UTF-8 | C.UTF-8 | =Tc/postgres           +| 8029 kB | tbs3       |

                 |          |           |         |         | postgres=CTc/postgres  +|         |            |

                 |          |           |         |         | app_readonly=c/postgres |         |            |

 easydb          | postgres | UTF8      | C.UTF-8 | C.UTF-8 |                         | 8021 kB | pg_default |

 easynew         | postgres | SQL_ASCII | C       | C       |                         | 7753 kB | pg_default |

 easynew3        | postgres | UTF8      | C.UTF-8 | C.UTF-8 |                         | 7901 kB | pg_default |

 easytbs         | postgres | UTF8      | C.UTF-8 | C.UTF-8 |                         | 7753 kB | easytbs    |

 example_backups | postgres | UTF8      | C.UTF-8 | C.UTF-8 |                         | 7901 kB | pg_default |

 interndb        | postgres | UTF8      | C.UTF-8 | C.UTF-8 |                         | 8109 kB | interntbs  |

 mydb            | postgres | UTF8      | C.UTF-8 | C.UTF-8 | =Tc/postgres           +| 7901 kB | pg_default |

                 |          |           |         |         | postgres=CTc/postgres  +|         |            |

                 |          |           |         |         | myuser=CTc/postgres    +|         |            |

                 |          |           |         |         | app_readonly=c/postgres |         |            |

 postgres        | postgres | UTF8      | C.UTF-8 | C.UTF-8 |                         | 8197 kB | pg_default | default administrative connection database

 template0       | postgres | UTF8      | C.UTF-8 | C.UTF-8 | =c/postgres            +| 7753 kB | pg_default | unmodifiable empty database

                 |          |           |         |         | postgres=CTc/postgres   |         |            |

 template1       | postgres | UTF8      | C.UTF-8 | C.UTF-8 | =c/postgres            +| 7901 kB | pg_default | default template for new databases

                 |          |           |         |         | postgres=CTc/postgres   |         |            |

(11 rows)

 

postgres=#

 

 

To see the size of a table:

 

After connecting to database with psql;

 

SELECT pg_size_pretty( pg_total_relation_size('table_name'));
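The total relation size can also be broken down into its heap and index components ('table_name' is a placeholder):

```sql
-- Where a table's space actually goes (heap vs. indexes vs. total incl. TOAST)
SELECT pg_size_pretty(pg_relation_size('table_name'))       AS heap,
       pg_size_pretty(pg_indexes_size('table_name'))        AS indexes,
       pg_size_pretty(pg_total_relation_size('table_name')) AS total;
```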

 

 

  

14.       Verify resources for acceptable performance

 

·         EXPLAIN and EXPLAIN ANALYZE is extremely useful for understanding what's going on in your db-engine

·         Make sure relevant columns are indexed

·         Make sure irrelevant columns are not indexed (insert/update-performance can go down the drain if too many indexes must be updated)

·         Make sure your postgres.conf is tuned properly

·         Know what work_mem is, and how it affects your queries (mostly useful for larger queries)

Other important tuning parameters include (shared_buffers, work_mem, maintenance_work_mem and temp_buffers among them):

1. shared_buffers

 

2. wal_buffers

 

3. effective_cache_size

 

4. work_mem

 

5. synchronous_commit

 

6. max_connections

 

7. max_prepared_transactions

 

8. max_worker_processes

 

9. max_parallel_workers_per_gather

 

10. max_parallel_workers

 

11. effective_io_concurrency

 

12. random_page_cost

 

13. min_ and max_wal_size

 

14.     max_fsm_pages

 

·         Make sure your database is properly normalized

·         VACUUM for clearing out old data

·         ANALYZE for updating statistics (statistics target for amount of statistics)

·         Persistent connections (you could use a connection manager like pgpool or pgbouncer)

·         Understand how queries are constructed (joins, sub-selects, cursors)

·         Caching of data (i.e. memcached) is an option

 

easydb=>


easydb=> show shared_buffers;

 shared_buffers

----------------

 8688MB

(1 row)

 

easydb=> show  wal_buffers;

 wal_buffers

-------------

 16MB

(1 row)

 

easydb=> show  effective_cache_size;

 effective_cache_size

----------------------

 10676208kB

(1 row)

 

easydb=> show  work_mem;

 work_mem

----------

 4MB

(1 row)

 

easydb=> show synchronous_commit;

 synchronous_commit

--------------------

 on

(1 row)

 

easydb=> show  max_connections;

 max_connections

-----------------

 500

(1 row)

 

easydb=> show  max_prepared_transactions;

 max_prepared_transactions

---------------------------

 0

(1 row)

 

easydb=> show  max_worker_processes;

 max_worker_processes

----------------------

 8

(1 row)

 

easydb=> show  max_parallel_workers_per_gather;

 max_parallel_workers_per_gather

---------------------------------

 2

(1 row)

 


easydb=> show effective_io_concurrency;

 effective_io_concurrency

--------------------------

 1

(1 row)

 

easydb=> show random_page_cost;

 random_page_cost

------------------

 4

(1 row)

 

easydb=> show min_wal_size;

easydb=> show max_wal_size;

(Note: max_fsm_pages was removed in PostgreSQL 8.4; the free space map is now maintained automatically, so there is nothing to check here.)

easydb=>

 

 


 

# shared_buffers and effective_cache_size should be 30%-50%

# of your machine free memory

 

shared_buffers = 3GB

effective_cache_size = 2GB

checkpoint_segments = 64

checkpoint_timeout = 1h

max_locks_per_transaction = 512

max_pred_locks_per_transaction = 512

 

# If you know what you're doing you can uncomment and adjust the following values

 

#cpu_tuple_cost = 0.0030

#cpu_index_tuple_cost = 0.0001

#cpu_operator_cost = 0.0005
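Checkpoint and bgwriter statistics, one of the monitoring variables listed at the top, can be read from pg_stat_bgwriter (these columns apply through PostgreSQL 16; later versions move some of them to pg_stat_checkpointer):

```sql
-- Many checkpoints_req relative to checkpoints_timed suggests max_wal_size is too small
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean,
       buffers_backend, buffers_alloc
FROM pg_stat_bgwriter;
```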

 

 

Basic Performance Check

 
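Buffer cache performance (cache hits vs disk reads) heads the list of monitoring variables above; one common way to express it, per database, is:

```sql
-- Cache hit ratio per database; OLTP workloads usually aim for > 99%
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database
WHERE blks_hit + blks_read > 0;
```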

postgres_connections.sql

-- Get current session info:

 

SELECT datname as database ,usename as user ,client_addr,state, count(*) as total_connections,query

FROM pg_stat_activity WHERE pid<>pg_backend_pid()

GROUP BY usename,client_addr,datname,state,query;

 

/*

 database |   user   | client_addr | state  | total_connections |                       query                      

----------+----------+-------------+--------+-------+---------------------------------------------------------------

 test11   | dba      | 192.168.2.1 | idle   |     1             | delete from tabl3 where id in (select id from tabl3 LIMIT 2);

          | postgres |             |        |     1             |

 postgres | postgres |             | active |     1             | SELECT pg_sleep(20);

          | slave    | 127.0.0.1   | active |     1             |

(4 rows)

 */

 

-- Get current session info with Duration

 

SELECT datname as database ,usename as user ,client_addr,state,

        now() - backend_start as duration, count(*) as total_connections,query

FROM pg_stat_activity

WHERE state <> 'idle' AND pid<>pg_backend_pid()

GROUP BY usename,client_addr,datname,state, now() - backend_start, query;

 

/*

 database |  user   | client_addr |        state        |        duration        | total_connections |                               query                               

----------+---------+-------------+---------------------+------------------------+-------------------+--------------------------------------------------------------------

 postgres | datadog | 127.0.0.1   | idle in transaction | 00:00:12.81569         |                 1 | select split_part(split_part(version(), 'PostgreSQL ', 2), ' ', 1)

          | slave   | 10.10.48.81 | active              | 3 days 20:13:40.662896 |                 1 |

(2 rows)

*/

 

-- Get db connection count by user

 

SELECT datname as database ,usename as user ,client_addr,state, count(*) as total_connections

FROM pg_stat_activity WHERE pid<>pg_backend_pid()

GROUP BY usename,client_addr,datname,state;

 

/*

 database |      user      | client_addr |        state        | total_connections

----------+----------------+-------------+---------------------+-------------------

          |                |             |                     |                 4

          | slave          | 10.10.48.81 | active              |                 1

 beacon   | beacon_service | 10.10.17.23 | idle                |                 6

 postgres | datadog        | 127.0.0.1   | idle                |                 1

          | postgres       |             |                     |                 1

 postgres | datadog        | 127.0.0.1   | idle in transaction |                 1

 beacon   | beacon_service | 10.10.17.24 | idle                |                36

(7 rows)

*/

 

-- Active sessions by state

 

SELECT datname as database ,usename as user ,client_addr,state, count(*) as total_connections,query

FROM pg_stat_activity

WHERE state <> 'idle' AND pid<>pg_backend_pid()

GROUP BY usename,client_addr,datname,state,query;

 

/*

database |   user   | client_addr | state  | total_connections |                                                            query                                                            

----------+----------+-------------+--------+-------+------------------------------------------------------------------------------------------------------------------------------------------

 test11   | dba      | 192.168.2.1 | active |     1            | insert into tabl3 (cool_int,cool_json) values ( 113,'{"name":"testdata","ts": "var_created", "userId":  "1037" , "username":  "dba"}' );

 postgres | postgres |             | active |     1            | SELECT pg_sleep(20);

          | slave    | 127.0.0.1   | active |     1            |

(3 rows)

*/

 

-- Queries running for greater than x ms

 

SELECT (now() - query_start)  as query_time_taken,datname as database ,usename as user,query

FROM pg_stat_activity

WHERE xact_start IS NOT NULL AND (now() - query_start) > interval '300 ms';

 

/*

query_time_taken | database |    user    |        query        

------------------+----------+------------+----------------------

 00:00:06.432377  | test11   | app_level3 | SELECT pg_sleep(20);

(1 row)

*/

 

-- Oracle equivalent (v$session_longops):

select sid, serial#, context, sofar, totalwork,round(sofar/totalwork*100,2) "%_complete" from v$session_longops where sofar <> totalwork;

 

 

select inst_id,sql_hash_value,sql_id, sid,serial#,to_char(start_time,'dd-mon:hh24:mi') start_time,

             opname,target,totalwork,sofar,(elapsed_Seconds/60) elamin,username,

             time_remaining tre

          from gv$session_longops

          where totalwork <> SOFAR

          order by 7,tre, start_time,sid

 

or

$ cat status.sql

SELECT SID, SERIAL#, CONTEXT, SOFAR, TOTALWORK, ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE" FROM gV$SESSION_LONGOPS WHERE OPNAME LIKE 'RMAN%' AND OPNAME NOT LIKE '%aggregate%' AND TOTALWORK != 0 AND SOFAR != TOTALWORK order by 6 desc

/

$

 

          Temp usage / Rollback segment / PGA usage (Oracle):

 

We can get temporary tablespace usage details with the query below:

Set lines 1000

SELECT b.tablespace,

ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",

a.sid||','||a.serial# SID_SERIAL,

a.username,

a.program

FROM sys.v_$session a,

sys.v_$sort_usage b,

sys.v_$parameter p

WHERE p.name = 'db_block_size'

AND a.saddr = b.session_addr

ORDER BY b.tablespace, b.blocks;

 

 

select st.sid "SID", sn.name "TYPE",

ceil(st.value / 1024 / 1024/1024) "GB"

from v$sesstat st, v$statname sn

where st.statistic# = sn.statistic#

and sid in

(select sid from v$session where username like UPPER('&user'))

and upper(sn.name) like '%PGA%'

order by st.sid, st.value desc;

 

 

 

1.      Generate AWR, ASH and ADDM reports for further analysis

 

PostgreSQL AWR - Database Tutorials (dbtut.com)

 

We need to write many different queries and review them one by one. Instead, we can use the pg_profile extension, which collects them all and outputs a report in HTML format.

The pg_profile extension is based on PostgreSQL's standard statistics views and is similar in design to Oracle AWR: like Oracle, it takes snapshots at specified times and produces an HTML report for interpreting the statistical data between any two snapshots.

 

About POWA :

PoWA (PostgreSQL Workload Analyzer) gathers performance stats and provides real-time charts and graphs to help you monitor and tune your PostgreSQL servers. It is similar to Oracle AWR or SQL Server MDW.

 

 

 

-- View all roles with attributes and parent role using psql command

 

/*

postgres=# \du

                                   List of roles

 Role name |                         Attributes                         | Member of

-----------+------------------------------------------------------------+-----------

 app       | Create DB                                                  | {}

 backups   | Replication                                                | {}

 datadog   |                                                            | {mon}

 dba       | Superuser, Create role, Create DB                          | {}

 dev       |                                                            | {}

 devadmin  | Create DB                                                  | {}

 mon       |                                                            | {}

 mozdba    | Superuser                                                  | {dba}

 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

 psql_app  |                                                            | {app}

 repl      | Cannot login, Replication                                  | {}

 slave     | Replication                                                | {repl}

*/

 

-- View all roles with attributes and parent role using query

/*

* equivalent query for the \du command, found using psql -E

*/

 

SELECT r.rolname, r.rolsuper, r.rolinherit,

  r.rolcreaterole, r.rolcreatedb, r.rolcanlogin,

  r.rolconnlimit, r.rolvaliduntil,

  ARRAY(SELECT b.rolname

        FROM pg_catalog.pg_auth_members m

        JOIN pg_catalog.pg_roles b ON (m.roleid = b.oid)

        WHERE m.member = r.oid) as memberof

, r.rolreplication

, r.rolbypassrls

FROM pg_catalog.pg_roles r

WHERE r.rolname !~ '^pg_'

ORDER BY 1;

 

/*

                                   List of roles

 Role name |                         Attributes                         | Member of

-----------+------------------------------------------------------------+-----------

 app       | Create DB                                                  | {}

 backups   | Replication                                                | {}

 datadog   |                                                            | {mon}

 dba       | Superuser, Create role, Create DB                          | {}

 dev       |                                                            | {}

 devadmin  | Create DB                                                  | {}

 mon       |                                                            | {}

 mozdba    | Superuser                                                  | {dba}

 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}

 psql_app  |                                                            | {app}

 repl      | Cannot login, Replication                                  | {}

 slave     | Replication                                                | {repl}

*/

 

 

 

 

 

Database Performance Check and Troubleshooting

 

 

1) To check server load

 

          topas or top - top consuming processes and overall load on the server
          nmon (then press m) - memory and paging utilization
          vmstat 5 5 - CPU, memory and I/O activity sampled every 5 seconds
          lsps -s - paging space usage (AIX)

 

 

2)Session Monitoring

 

select pid as process_id, usename as username,  datname as database_name,

client_addr as client_address, application_name,backend_start,state,state_change

from pg_stat_activity;
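It is also worth comparing the current connection count against the configured ceiling (note that pg_stat_activity also lists background workers, so the count is slightly inflated):

```sql
-- Headroom check: connections in use vs. max_connections
SELECT count(*) AS current_connections,
       current_setting('max_connections')::int AS max_connections
FROM pg_stat_activity;
```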

 

root@postgres-tutorial:/etc/postgresql/13/main#

root@postgres-tutorial:/etc/postgresql/13/main# dpkg -l | grep postgres

ii  postgresql                            13+225                         all          object-relational SQL database (supported version)

ii  postgresql-13                         13.7-0+deb11u1                 amd64        The World's Most Advanced Open Source Relational Database

ii  postgresql-client                     13+225                         all          front-end programs for PostgreSQL (supported version)

ii  postgresql-client-13                  13.7-0+deb11u1                 amd64        front-end programs for PostgreSQL 13

ii  postgresql-client-common              225                            all          manager for multiple PostgreSQL client versions

ii  postgresql-common                     225                            all          PostgreSQL database-cluster manager

ii  postgresql-contrib                    13+225                         all          additional facilities for PostgreSQL (supported version)

 

-----------log file-----------

 

root@postgres-tutorial:/etc/postgresql/13/main#

 

/var/log/postgresql/postgresql-13-main.log

 

root@postgres-tutorial:/var/log# cd postgresql

root@postgres-tutorial:/var/log/postgresql# ls -ltr

total 12

-rw-r----- 1 postgres adm 9649 Jun  9 12:48 postgresql-13-main.log

root@postgres-tutorial:/var/log/postgresql# pwd

/var/log/postgresql

root@postgres-tutorial:/var/log/postgresql#

 

------------

 

root@assaf-desktop:/home/assaf# ps axf | grep postgres

14338 ?        S      0:00 /opt/djangostack-1.3-0/postgresql/bin/postgres -D /opt/djangostack-1.3-0/postgresql/data -p 5432

14347 ?        Ss     0:00  \_ postgres: writer process                                                                       

14348 ?        Ss     0:00  \_ postgres: wal writer process                                                                   

14349 ?        Ss     0:00  \_ postgres: autovacuum launcher process                                                          

14350 ?        Ss     0:00  \_ postgres: stats collector process                                                              

15139 pts/1    S+     0:00              \_ grep --color=auto postgres

root@assaf-desktop:/home/assaf# netstat -nltp | grep 5432

tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      14338/postgres 

tcp6       0      0 ::1:5432                :::*                    LISTEN      14338/postgres 

root@assaf-desktop:/home/assaf#

 

---------

 

tail -10f /var/log/messages

 

3) Take an explain plan and check statistics of tables and indexes

 

PostgreSQL query planner

------------------------

CREATE TABLE employees ( id SERIAL, name varchar, department varchar);

INSERT INTO employees (name, department) VALUES ('Sue', 'Sales'), ('Dan', 'Operations'), ('Snoop', 'Sales');

SELECT * FROM employees;

 

EXPLAIN SELECT * FROM employees ORDER BY name;

 

SELECT relpages, reltuples FROM pg_class WHERE relname='employees';

 

EXPLAIN ANALYZE SELECT * FROM employees ORDER BY name;

 

Eg

 

easydb=#

easydb=# EXPLAIN SELECT * FROM employees ORDER BY name;

                            QUERY PLAN                            

-------------------------------------------------------------------

 Sort  (cost=59.86..61.98 rows=850 width=68)

   Sort Key: name

   ->  Seq Scan on employees  (cost=0.00..18.50 rows=850 width=68)

(3 rows)

 

easydb=# SELECT relpages, reltuples FROM pg_class WHERE relname='employees';

 relpages | reltuples

----------+-----------

        0 |         0

(1 row)

 

easydb=# EXPLAIN ANALYZE SELECT * FROM employees ORDER BY name;

                                                 QUERY PLAN                                                 

-------------------------------------------------------------------------------------------------------------

 Sort  (cost=59.86..61.98 rows=850 width=68) (actual time=0.135..0.136 rows=3 loops=1)

   Sort Key: name

   Sort Method: quicksort  Memory: 25kB

   ->  Seq Scan on employees  (cost=0.00..18.50 rows=850 width=68) (actual time=0.006..0.007 rows=3 loops=1)

 Planning Time: 0.049 ms

 Execution Time: 0.149 ms

(6 rows)

 

easydb=#

 

 

 

-- Oracle equivalent (dba_tables / dba_segments):

select a.owner, a.table_name, a.num_rows, b.bytes/1048576 MB, a.last_analyzed from dba_tables a, dba_segments b 

where  a.owner=b.owner and a.table_name=b.segment_name and a.table_name='&table_name' order by b.bytes desc;

 

SELECT

    tablename,

    indexname,

    indexdef FROM pg_indexes WHERE

    schemaname = 'public'

    ORDER BY tablename,indexname;

 

SELECT

    indexname, indexdef

FROM pg_indexes WHERE tablename = 'customer';

 

SELECT pg_size_pretty (pg_indexes_size('film'));
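Index definitions aside, pg_stat_user_indexes shows whether indexes are actually used; rarely scanned large indexes are candidates for review (check every replica too, since these statistics are per server):

```sql
-- Largest indexes with the fewest scans since statistics were last reset
SELECT schemaname, relname, indexrelname, idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC, pg_relation_size(indexrelid) DESC
LIMIT 10;
```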

 

 

4) Display the estimated amount of bloat in your tables and indexes

 

SELECT

  current_database(), schemaname, tablename, /*reltuples::bigint, relpages::bigint, otta,*/

  ROUND((CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages::float/otta END)::numeric,1) AS tbloat,

  CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::BIGINT END AS wastedbytes,

  iname, /*ituples::bigint, ipages::bigint, iotta,*/

  ROUND((CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages::float/iotta END)::numeric,1) AS ibloat,

  CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes

FROM (

  SELECT

    schemaname, tablename, cc.reltuples, cc.relpages, bs,

    CEIL((cc.reltuples*((datahdr+ma-

      (CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)) AS otta,

    COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,

    COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols

  FROM (

    SELECT

      ma,bs,schemaname,tablename,

      (datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,

      (maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2

    FROM (

      SELECT

        schemaname, tablename, hdr, ma, bs,

        SUM((1-null_frac)*avg_width) AS datawidth,

        MAX(null_frac) AS maxfracsum,

        hdr+(

          SELECT 1+count(*)/8

          FROM pg_stats s2

          WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename

        ) AS nullhdr

      FROM pg_stats s, (

        SELECT

          (SELECT current_setting('block_size')::numeric) AS bs,

          CASE WHEN substring(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr,

          CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma

        FROM (SELECT version() AS v) AS foo

      ) AS constants

      GROUP BY 1,2,3,4,5

    ) AS foo

  ) AS rs

  JOIN pg_class cc ON cc.relname = rs.tablename

  JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema'

  LEFT JOIN pg_index i ON indrelid = cc.oid

  LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid

) AS sml

ORDER BY wastedbytes DESC

 

5) To find the PID using pg_stat_activity

 

 

select pid, wait_event_type, wait_event, query from pg_stat_activity;

 

Now, with the PID in hand, inspect pg_locks:

 

postgres=# select locktype, relation, virtualxid, transactionid, mode, granted from pg_locks;

postgres=# SELECT locktype, transactionid, virtualtransaction, mode FROM pg_locks;

 

 

select pid, wait_event_type, wait_event, query from pg_stat_activity;

 

To see how long the existing queries have been running:

 

 

SELECT pid,datname,usename, now() , state,query FROM pg_stat_activity;

 

  pid   | datname  | usename  |              now              | state  |                                 query                                 

--------+----------+----------+-------------------------------+--------+------------------------------------------------------------------------

    544 |          |          | 2022-06-30 05:49:59.265337+00 |        |

    547 |          | postgres | 2022-06-30 05:49:59.265337+00 |        |

 181869 | postgres | postgres | 2022-06-30 05:49:59.265337+00 | active | SELECT pid,datname,usename, now() , state,query FROM pg_stat_activity;

    542 |          |          | 2022-06-30 05:49:59.265337+00 |        |

    541 |          |          | 2022-06-30 05:49:59.265337+00 |        |

    543 |          |          | 2022-06-30 05:49:59.265337+00 |        |

(6 rows)

 

postgres=#

 

 

 

To stop a query:

 

select pg_cancel_backend(<pid of the process>)

 

To terminate the backend if the query does not stop using the above method:

 

select pg_terminate_backend(<pid of the process>)

 

To see locked objects:

SELECT * FROM pg_locks;

 

 

========= pg_buffercache usage =========

 

show shared_buffers;       

 


postgres=# CREATE EXTENSION pg_buffercache;

CREATE EXTENSION

postgres=#

postgres=#

postgres=# select c.relname,pg_size_pretty(count(*) * 8192) as buffered, round(100.0 * count(*) / ( select setting from pg_settings where name='shared_buffers')::integer,1) as buffer_percent, round(100.0*count(*)*8192 / pg_table_size(c.oid),1) as percent_of_relation from pg_class c inner join pg_buffercache b on b.relfilenode = c.relfilenode inner join pg_database d on

( b.reldatabase =d.oid and d.datname =current_database()) group by c.oid,c.relname order by 3 desc limit 10;

          relname          |  buffered  | buffer_percent | percent_of_relation

---------------------------+------------+----------------+---------------------

 pg_toast_2618             | 480 kB     |            0.4 |                93.8

 pg_statistic              | 232 kB     |            0.2 |                65.9

 pg_depend                 | 128 kB     |            0.1 |                25.8

 pg_depend_reference_index | 120 kB     |            0.1 |                34.1

 pg_rewrite                | 128 kB     |            0.1 |                19.5

 pg_operator               | 112 kB     |            0.1 |                77.8

 pg_amproc                 | 32 kB      |            0.0 |                50.0

 pg_amop                   | 48 kB      |            0.0 |                60.0

 pg_sequence               | 8192 bytes |            0.0 |               100.0

 pg_am                     | 16 kB      |            0.0 |                40.0

(10 rows)

 

postgres=#

 

https://www.interdb.jp/pg/pgsql08.html

 

 

If we need to check the contents of the shared buffer:

SELECT pg_size_pretty(count(*) * 8192) AS shared_buffered, a.relname,

round(100.0 * count(*) * 8192 / pg_table_size(a.oid), 1) AS percentage_of_relation,

round(100.0 * count(*) / (SELECT setting FROM pg_settings WHERE name = 'shared_buffers')::integer, 1) AS percentage_of_shared_buffers

FROM pg_class a

INNER JOIN pg_buffercache b ON b.relfilenode = a.relfilenode

INNER JOIN pg_database d ON (d.datname = current_database() AND b.reldatabase = d.oid)

WHERE pg_table_size(a.oid) > 0

GROUP BY a.relname, a.oid

ORDER BY 4 DESC

LIMIT 16;

 

Relation usage count (usagecount) in the shared buffers:

 

select usagecount,count(*) as shared_buffers, a.relname

from pg_class a

right join pg_buffercache b on a.relfilenode = b.relfilenode

left join pg_database d on ( d.datname =current_database()AND b.reldatabase = d.oid)

group by usagecount, a.relname

order by usagecount, a.relname;

 

On-disk sizes of relations (to compare against what is buffered):

 

select pg_size_pretty(pg_table_size(a.oid)) as "Disked_size",nspname,relname

from pg_class a inner join pg_namespace s on ( a.relnamespace=s.oid)

where nspname not in ('information_schema','pg_catalog')

order by pg_table_size(a.oid) desc limit 40;

 

Minimum and maximum allowed values of the shared_buffers setting:

 

select name, setting, min_val, max_val, context from

pg_settings where name='shared_buffers';
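The setting value returned by pg_settings is expressed in 8 kB pages, not bytes, which often surprises people reading raw numbers like 16384. A minimal sketch of the conversion (the sample value of 16384 pages is an assumption, corresponding to the common 128 MB default):

```python
def shared_buffers_mb(setting_pages: int, block_size: int = 8192) -> int:
    """Convert a shared_buffers value in 8 kB pages to megabytes.

    PostgreSQL reports shared_buffers in units of the block size (8192
    bytes by default), so the byte total is pages * block_size.
    """
    return setting_pages * block_size // (1024 * 1024)

# 16384 pages * 8192 bytes = 134217728 bytes = 128 MB
print(shared_buffers_mb(16384))  # -> 128
```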

 

 

 

 

 

 

 

 

SELECT * FROM pg_stat_database;

select * from pg_stat_activity;

select * from pg_stat_user_indexes;

select relname,last_analyze,last_autoanalyze from pg_stat_user_tables;

 

postgres=# SELECT pid,datname,usename, now() , state,query FROM pg_stat_activity;

  pid   | datname  | usename  |              now              | state  |                                 query                                 

--------+----------+----------+-------------------------------+--------+------------------------------------------------------------------------

    544 |          |          | 2022-06-30 05:40:37.604108+00 |        |

    547 |          | postgres | 2022-06-30 05:40:37.604108+00 |        |

 181869 | postgres | postgres | 2022-06-30 05:40:37.604108+00 | active | SELECT pid,datname,usename, now() , state,query FROM pg_stat_activity;

    542 |          |          | 2022-06-30 05:40:37.604108+00 |        |

    541 |          |          | 2022-06-30 05:40:37.604108+00 |        |

    543 |          |          | 2022-06-30 05:40:37.604108+00 |        |

(6 rows)

 

postgres=#

 

 

 

postgres=# SELECT

  datname as database_name,

  pg_database_size(datname)/1024/1024 as size

FROM

  pg_database

WHERE

  datistemplate = false;

  database_name  | size

-----------------+------

 postgres        |    8

 example_backups |    7

 easytbs         |    7

 dbname2         |    7

 mydb            |    7

 easydb          |    7

 interndb        |    7

 easynew         |    7

 easynew3        |    7

(9 rows)

 

postgres=#

 

 

 

postgres=#

postgres=# SELECT

  sum(heap_blks_read) as reads,

  sum(heap_blks_hit) as hits,

  ROUND(

    sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)),

    4

  ) as hit_ratio

FROM

  pg_statio_user_tables;

 reads | hits | hit_ratio

-------+------+-----------

     5 |   13 |    0.7222

(1 row)

 

postgres=#
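The hit_ratio computed above is simply hits divided by total block accesses. A small sketch of the same arithmetic, using the sample numbers from the query output (5 reads, 13 hits):

```python
def hit_ratio(heap_blks_hit: int, heap_blks_read: int) -> float:
    """Buffer-cache hit ratio as computed by the pg_statio_user_tables query:
    hits / (hits + reads), rounded to 4 decimal places."""
    return round(heap_blks_hit / (heap_blks_hit + heap_blks_read), 4)

# Matches the sample output above: 13 hits, 5 reads
print(hit_ratio(13, 5))  # -> 0.7222
```

In production you generally want this ratio well above 0.99; a low value suggests the working set does not fit in shared_buffers.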

 

 

postgres=# SELECT

  COUNT(*) as count,

  relname as table_name

FROM

  pg_stat_all_indexes

WHERE

  idx_scan = 0

  and schemaname = 'public'

GROUP BY

  table_name

postgres-# ;

 count | table_name

-------+------------

(0 rows)

 

postgres=#

 

 

postgres=#

postgres=# SELECT

  c .relname AS entity_name,

  count(*) AS buffers

FROM

  pg_buffercache b

  INNER JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c .oid)

  AND b.reldatabase IN (

    0,

    (

      SELECT

        oid

      FROM

        pg_database

      WHERE

        datname = current_database()

    )

  )

WHERE

  c .relname NOT LIKE 'pg_%'

GROUP BY

  c .relname

ORDER BY

  2 DESC;

   entity_name   | buffers

-----------------+---------

 accounts_id_seq |       1

(1 row)

 

postgres=#

 

 

select query, calls, (total_time/calls)::integer as avg_time_ms

from pg_stat_statements

where calls > 1000

order by avg_time_ms desc

limit 100;

 

 

 

select query, calls, (total_time/calls)::integer as avg_time_ms

from pg_stat_statements

where calls > 1000

and rows < 5000

order by avg_time_ms desc

limit 100;

 

 

with timeperquery as

(

select query, (total_time/calls) as ms_per_call

from pg_stat_statements

where rows != 0

)

select query, sum(ms_per_call)::integer as ms_per_query

from timeperquery

group by query

order by ms_per_query desc;

 

 

select

          query,

          calls,

          total_time::integer,

          (total_time/calls)::integer as ms_per_call,

          shared_blks_hit,

          shared_blks_read

from pg_stat_statements pss

order by calls desc

limit 10;

 

 

demo=# create table t (c bigint);

CREATE TABLE

demo=# insert into t select generate_series(1, 1000000);

INSERT 0 1000000

demo=# explain (analyze on, costs off) table t order by c;

 

or

 

demo=# create table t1 (c) as select generate_series(1, 1000000);

SELECT 1000000

demo=# create table t2 (c) as select generate_series(1, 1000000, 100);

SELECT 10000

demo=# vacuum analyze t1, t2;

VACUUM

demo=# set work_mem to '64kB';

SET

demo=# explain (analyze on, costs off, timing off)

demo-# select * from t1 join t2 using (c);

 

or

 

demo=# create table t (c bigint)

demo-# with (autovacuum_enabled = false);

CREATE TABLE

demo=# insert into t select generate_series(1, 1000000);

INSERT 0 1000000

demo=# create index on t (c);

CREATE INDEX

demo=# analyze t;

ANALYZE

demo=# explain (analyze on, costs off, timing off, summary off)

demo-# select c from t where c <= 2000;

 

or

 

 demo=# vacuum t;

VACUUM

demo=# explain (analyze on, costs off, timing off, summary off)

demo-# select c from t where c <= 2000;

 

 

 

 

 

 

 

postgres=#

postgres=# SELECT * FROM pg_stat_statements;

ERROR:  pg_stat_statements must be loaded via shared_preload_libraries

postgres=#
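The error above means the extension's library has not been preloaded into the server. A hedged sketch of the usual way to enable it (the file location and restart procedure depend on your installation):

```conf
# postgresql.conf — requires a full server restart, not just a reload:
shared_preload_libraries = 'pg_stat_statements'

# Then, after the restart, in each database where you want the view:
#   CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```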

 

 

-- current_user will provide the rolname of the authenticated user

SELECT * FROM pg_stat_statements pss

          JOIN pg_roles pr ON (userid=oid)

WHERE rolname = current_user;

 

 

Or

 

-- query the 10 longest running queries with more than 500 calls

WITH statements AS (

SELECT * FROM pg_stat_statements pss

                   JOIN pg_roles pr ON (userid=oid)

WHERE rolname = current_user

)

SELECT calls,

          mean_exec_time,

          query

FROM statements

WHERE calls > 500

AND shared_blks_hit > 0

ORDER BY mean_exec_time DESC

LIMIT 10;

 

 

Or

 

-- query the 10 most frequently called queries, with their cache hit ratio

WITH statements AS (

SELECT * FROM pg_stat_statements pss

                   JOIN pg_roles pr ON (userid=oid)

WHERE rolname = current_user

)

SELECT calls,

          shared_blks_hit,

          shared_blks_read,

         shared_blks_hit/(shared_blks_hit+shared_blks_read)::NUMERIC*100 hit_cache_ratio,

          query

FROM statements

WHERE calls > 500

AND shared_blks_hit > 0

ORDER BY calls DESC, hit_cache_ratio ASC

LIMIT 10;

 

or

 

-- query the 10 longest running queries

WITH statements AS (

SELECT * FROM pg_stat_statements pss

                   JOIN pg_roles pr ON (userid=oid)

WHERE rolname = current_user

)

SELECT calls,

          min_exec_time,

          max_exec_time,

          mean_exec_time,

          stddev_exec_time,

          (stddev_exec_time/mean_exec_time) AS coeff_of_variance,

          query

FROM statements

WHERE calls > 500

AND shared_blks_hit > 0

ORDER BY mean_exec_time DESC

LIMIT 10;
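The coeff_of_variance column above is the ratio stddev_exec_time / mean_exec_time: values near 0 mean runtimes are consistent, while large values flag queries whose runtime is erratic and worth investigating. A minimal sketch of the computation (the sample timings are hypothetical):

```python
def coeff_of_variance(stddev_exec_time: float, mean_exec_time: float) -> float:
    """Coefficient of variation for a statement's execution time, as in the
    pg_stat_statements query: stddev divided by the mean."""
    return stddev_exec_time / mean_exec_time

# e.g. a query averaging 10 ms with a 2.5 ms standard deviation
print(round(coeff_of_variance(2.5, 10.0), 2))  # -> 0.25
```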

 

 

 Replication Database Health Check

 

 

 

Monitoring Postgres Replication - pgDash

What to Look for if Your PostgreSQL Replication is Lagging | Severalnines

 

 

1) View current replication status (on Primary)

 

psql -x -c "select * from pg_stat_replication;"

psql -x -c "select sync_state from pg_stat_replication;"

testdb=# select * from pg_stat_subscription ;

 

SELECT usename as user, application_name, client_addr, client_port, state, sent_lsn, write_lsn, flush_lsn, replay_lsn

FROM pg_stat_replication;                                    

 

/*

 user  | application_name | client_addr | client_port |   state   | sent_lsn  | write_lsn | flush_lsn | replay_lsn

-------+------------------+-------------+-------------+-----------+-----------+-----------+-----------+------------

 slave | walreceiver      | 127.0.0.1   |       47050 | streaming | 0/3B35F70 | 0/3B35F70 | 0/3B35F70 | 0/3B35F70

(1 row)

*/

 

Monitoring Primary Servers

-----------------------------

There are two type of things to monitor on primary servers – outgoing replication connections and replication slots.

 

Monitoring Outgoing Replication

The system view pg_stat_replication returns one row for each open outgoing replication connection.

 

bench=# select * from pg_stat_replication;

 

Monitoring Physical Replication Slots

--------------------------------------

The view pg_replication_slots gives one row for each replication slot in the primary. Unlike streaming replication connections, replication slots have to be created and deleted explicitly. When there is an associated client connection, slots are considered active. Inactive slots are dangerous, because the server will retain the WAL files required for that slot forever, at the risk of filling up the disk.

 

Replication slots can also be temporary; these go away when the replication client connection goes away. Newer versions of pg_basebackup use temporary replication slots.

 

bench=# select * from pg_replication_slots;

-[ RECORD 1 ]-------+--------------

 

2)View replication lag (on Secondary): Version 10 and up

 

SELECT CASE WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn()

THEN 0

ELSE EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp())

END AS log_delay;
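The EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()) expression above reduces to a timestamp subtraction expressed in seconds. A small sketch of the same arithmetic (the two sample timestamps are hypothetical):

```python
from datetime import datetime, timezone

def replay_lag_seconds(now: datetime, last_replay: datetime) -> float:
    """Equivalent of EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()):
    the age of the last replayed transaction, in seconds."""
    return (now - last_replay).total_seconds()

now = datetime(2022, 6, 30, 5, 50, 0, tzinfo=timezone.utc)
last = datetime(2022, 6, 30, 5, 49, 58, tzinfo=timezone.utc)
print(replay_lag_seconds(now, last))  # -> 2.0
```

Note the CASE guard in the SQL: when receive and replay LSNs are equal, the lag is reported as 0 even if no new transactions have arrived for a while.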

 

 

3)View replication lag (on Secondary): Version less than 10

 

SELECT CASE WHEN pg_last_xlog_receive_location() = pg_last_xlog_replay_location()

THEN 0

ELSE EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp())

END AS log_delay;

 

 

Monitoring Incoming Replication

 

The stats related to the incoming replication can be monitored using pg_stat_wal_receiver view (available in v9.6 and above). On a standby, it contains values similar to:

 

bench=# select * from pg_stat_wal_receiver;

 

 

Monitoring Recovery Status

-------------------------------

The pg_stat_wal_receiver view only gives stats about the receiving of WAL records, not about their replay. For that, we need to use:

 

bench=# select pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn(), pg_last_xact_replay_timestamp();

 

  

The best way to monitor replication is to use pg_stat_replication, which contains a lot of vital information.

postgres=# \d pg_stat_replication

                    View "pg_catalog.pg_stat_replication"

      Column      |           Type           | Collation | Nullable | Default

------------------+--------------------------+-----------+----------+---------

 pid              | integer                  |           |          |

 usesysid         | oid                      |           |          |

 usename          | name                     |           |          |

 application_name | text                     |           |          |

 client_addr      | inet                     |           |          |

 client_hostname  | text                     |           |          |

 client_port      | integer                  |           |          |

 backend_start    | timestamp with time zone |           |          |

 backend_xmin     | xid                      |           |          |

 state            | text                     |           |          |

 sent_lsn         | pg_lsn                   |           |          |

 write_lsn        | pg_lsn                   |           |          |

 flush_lsn        | pg_lsn                   |           |          |

 replay_lsn       | pg_lsn                   |           |          |

 write_lag        | interval                 |           |          |

 flush_lag        | interval                 |           |          |

 replay_lag       | interval                 |           |          |

 sync_priority    | integer                  |           |          |

 sync_state       | text                     |           |          |

 reply_time       | timestamp with time zone |           |          |

 

postgres=#

         

 

 

 

 

Queries to Use In the Standby Node

----------------------------

select pg_is_in_recovery(),pg_is_wal_replay_paused(), pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn(), pg_last_xact_replay_timestamp();

 

 

In older versions, you can use the following query:

 

postgres=# select pg_is_in_recovery(),pg_last_xlog_receive_location(), pg_last_xlog_replay_location(), pg_last_xact_replay_timestamp();

 

 

Step 1: Determine if There's a Lag

postgres=# SELECT CASE WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn()

postgres-# THEN 0

postgres-# ELSE EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp())

postgres-# END AS log_delay;

 log_delay

-----------

         

(1 row)

 

postgres=#

 

Step 2: Determine the WAL Segments Received From the Primary and Compare with Standby Node

 

 

postgres=# SELECT pg_current_wal_lsn();

 pg_current_wal_lsn

--------------------

 0/1D000000

(1 row)

 

postgres=#
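LSN values like 0/1D000000 are 64-bit positions in the WAL stream written as two hexadecimal halves, so comparing primary and standby positions is just integer arithmetic (in SQL, pg_wal_lsn_diff does this). A minimal sketch, with a hypothetical standby position for illustration:

```python
def lsn_to_bytes(lsn: str) -> int:
    """Convert a pg_lsn value like '0/1D000000' to an absolute byte position.
    The part before the slash is the high 32 bits of the 64-bit WAL address."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) + int(lo, 16)

def lag_bytes(primary_lsn: str, standby_lsn: str) -> int:
    """Replication lag in bytes, analogous to pg_wal_lsn_diff(primary, standby)."""
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(standby_lsn)

print(lsn_to_bytes("0/1D000000"))            # -> 486539264
print(lag_bytes("0/1D000000", "0/1CFFFF00"))  # -> 256 (hypothetical standby LSN)
```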

 

 

 

 

 

 

 

For older versions of PG < 10, use pg_current_xlog_location.

 

## Get the current WAL segments received (flushed or applied/replayed). Note: these recovery status functions only work on a standby; running them on a primary raises "recovery is not in progress":

 

 

postgres=# select pg_is_in_recovery(),pg_is_wal_replay_paused(), pg_last_wal_receive_lsn(), pg_last_wal_replay_lsn(), pg_last_xact_replay_timestamp();

ERROR:  recovery is not in progress

HINT:  Recovery control functions can only be executed during recovery.

postgres=#

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

-- Get Databases with their size

SELECT

    pg_database.datname as database,

    pg_size_pretty(pg_database_size(pg_database.datname)) AS size

    FROM pg_database

    WHERE datistemplate=false

    AND pg_database_size(pg_database.datname) > 0;

 

 

To check WAL config status

 

 

postgres=#

SELECT name, setting, pending_restart

    FROM pg_settings

    WHERE name LIKE '%wal%';

             name              |  setting  | pending_restart

-------------------------------+-----------+-----------------

 max_slot_wal_keep_size        | -1        | f

 max_wal_senders               | 4         | f

 max_wal_size                  | 1024      | f

 min_wal_size                  | 80        | f

 wal_block_size                | 8192      | f

 wal_buffers                   | 512       | f

 wal_compression               | off       | f

 wal_consistency_checking      |           | f

 wal_init_zero                 | on        | f

 wal_keep_size                 | 0         | f

 wal_level                     | replica   | f

 wal_log_hints                 | off       | f

 wal_receiver_create_temp_slot | off       | f

 wal_receiver_status_interval  | 10        | f

 wal_receiver_timeout          | 60000     | f

 wal_recycle                   | on        | f

 wal_retrieve_retry_interval   | 5000      | f

 wal_segment_size              | 16777216  | f

 wal_sender_timeout            | 60000     | f

 wal_skip_threshold            | 2048      | f

 wal_sync_method               | fdatasync | f

 wal_writer_delay              | 200       | f

 wal_writer_flush_after        | 128       | f

(23 rows)

 

postgres=#

 

Backup

 

 


Using pg_basebackup to back up a PostgreSQL cluster that has multiple tablespaces needs a couple of extra steps.

If you’re using the tarball format, each tablespace comes out as its own tarball (with the tablespace OID as the filename). While restoring, each tablespace must be restored to the same path (like “/tmp/space2”) that was in use when the backup was taken. This is a bit of a pain, because the backup script needs to store this path information somewhere alongside the backup.

Here’s how the tarball backup happens:

/tmp$ pg_basebackup --format=t --gzip --compress=9 -D tarbackup

/tmp$ ls -l tarbackup

total 3684

-rw-r--r-- 1 alice alice  937355 May  8 13:22 16385.tar.gz

-rw-r--r-- 1 alice alice 2812516 May  8 13:22 base.tar.gz

-rw------- 1 alice alice   19259 May  8 13:22 pg_wal.tar.gz

For plain format backups, it is possible to specify a new location for each tablespace. The data from each tablespace is written out into a new location. Every tablespace (other than pg_default and pg_global) must be mapped to a new location in the command-line, like this:

/tmp$ pg_basebackup --format=p --tablespace-mapping=/tmp/space2=/tmp/space2backup -D plainbackup

/tmp$ ls -l plainbackup/pg_tblspc/

total 0

lrwxrwxrwx 1 alice alice 17 May  8 13:35 16385 -> /tmp/space2backup

 

 

 

Use pg_dumpall to backup all databases

 

$ pg_dumpall -U postgres > allbackuppostgress.sql

 

Use pg_dump to backup a database

 

$ pg_dump -d mydb -f mydb_backup.sql

 

  -a   Dump only the data, not the schema

  -s   Dump only the schema, no data

  -c   Drop database before recreating

  -C   Create database before restoring

  -t   Dump the named table(s) only

  -F   Format (c: custom, d: directory, t: tar)

 

Use pg_dump -? to get the full list of options

 

 

Most of the views provide cumulative server activity information. pg_stat_activity is different.

It provides a snapshot of all the database operations at that moment.

 

 

Views such as pg_stat_all_tables and pg_stat_user_tables are cumulative since the statistics counter was reset or from the point of inception.

The view pg_stat_progress_vacuum, by contrast, reads much like a progress bar for a vacuum that is currently running.

 

 

For example, if a DBA wanted to see the number of selects per minute, they would query pg_stat_user_tables every minute.

The user could then find the delta between what it was before

and what it is now to get the number of selects per minute on a table.
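The delta technique described above is simple rate arithmetic over two cumulative samples. A minimal sketch, with hypothetical counter values for a table's seq_scan column:

```python
def per_minute_rate(prev_count: int, curr_count: int, interval_seconds: float = 60.0) -> float:
    """Rate of change between two cumulative counter samples (e.g. seq_scan
    from pg_stat_user_tables), scaled to a per-minute figure."""
    return (curr_count - prev_count) * 60.0 / interval_seconds

# Hypothetical: seq_scan was 1000 a minute ago and is 1090 now
print(per_minute_rate(1000, 1090))  # -> 90.0
```

The same approach works for any cumulative pg_stat counter: tuples inserted, index scans, commits, and so on.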

 

This list highlights some commonly used pg_stat views. For the full list, see the PostgreSQL statistics views documentation.

 

pg_stat_activity

pg_stat_all_tables

pg_stat_user_tables

pg_stat_all_indexes

pg_stat_user_functions

pg_stat_xact_all_tables

pg_stat_xact_sys_tables

pg_stat_xact_user_functions

 

Monitoring connection count

--------------------------

select count(1) from pg_stat_activity;

 

Monitoring process duration

---------------------------

 

select pid, now() - backend_start as connect_time

from pg_stat_activity

order by connect_time desc;

 

 

State monitoring: the following connection states are the most important to watch.

-------------------------------------

 

Active - The database is doing work. DBAs want to see active connections. For example, when a query is currently active in the database, there will be active connections.

IDLE - DBAs do not want too many idle connections. An idle connection is not processing anything; it is just sitting there. Too many idle connections tie up database resources and create unnecessary overhead.

 

IDLE IN TRANSACTION - Idle in transaction means that the query is in the middle of a transaction, but is not currently doing anything. An example would be if a user typed begin, and then walked away for lunch.

 

select state, count(*) from pg_stat_activity

group by state

order by count(*) desc;

 

To check wait events

---------------------

select pid,wait_event_type,wait_event from pg_stat_activity where wait_event is NOT NULL;

 

------------

 

select * from pg_stat_ssl;

 

==================

 

Query monitoring example 1

 

select * from pg_stat_activity;

 

Query monitoring example 2

 

This example shows more detail, with a calculation of the query runtime: now() - query_start.

The query includes the runtime, pid, and user name.

A user can see the current state, any wait event, and the runtime for each session.

 

select now() - query_start as runtime, pid, usename, state, wait_event, query::varchar(40)

from pg_stat_activity limit 25;

 

Monitoring table usage

 

To view usage statistics for their database tables,

 DBAs can use the SELECT * FROM pg_stat_user_tables query.

The results contain information and usage from the individual schema and the included tables.

 

Monitoring table usage example 2

-----------------------------

select relid,schemaname,relname,seq_scan,seq_tup_read,idx_scan,idx_tup_fetch,n_tup_ins,n_tup_upd,n_tup_del from pg_stat_user_tables;

 

============================

 

 

\dt: This command lists the tables in the current database.

\dv: This command lists the views.

 

=======================================

 

The table pg_database contains a row for every database in the cluster,

including the three that come as default in PostgreSQL: postgres, template0, and template1.

 

select datname,datcollate,datconnlimit,datacl from pg_database;

 

 

Rows in the system catalog tables contain a key called an OID.

OIDs uniquely identify each row in a system catalog table and can be used to join system catalog tables.

 

select oid,* from pg_namespace;

 

 

In this example, the user is gathering information to view the number of active

 and idle connections when the connections were established.

 

select datname,pid,application_name,state,age(backend_start) from pg_stat_activity;

 

 

pg_locks can be joined to several different columns to gather specific data about how locks are being used.

 For example, the pid column of pg_locks can be joined to the pid column of the pg_stat_activity view to get more information about the session holding or awaiting each lock.

 

select * from pg_locks pl left join pg_stat_activity psa on pl.pid=psa.pid;

 

 

 

 

 

 

 

 

 

28.2. The Statistics Collector

28.2.1. Statistics Collection Configuration

28.2.2. Viewing Statistics

28.2.3. pg_stat_activity

28.2.4. pg_stat_replication

28.2.5. pg_stat_replication_slots

28.2.6. pg_stat_wal_receiver

28.2.7. pg_stat_subscription

28.2.8. pg_stat_ssl

28.2.9. pg_stat_gssapi

28.2.10. pg_stat_archiver

28.2.11. pg_stat_bgwriter

28.2.12. pg_stat_wal

28.2.13. pg_stat_database

28.2.14. pg_stat_database_conflicts

28.2.15. pg_stat_all_tables

28.2.16. pg_stat_all_indexes

28.2.17. pg_statio_all_tables

28.2.18. pg_statio_all_indexes

28.2.19. pg_statio_all_sequences

28.2.20. pg_stat_user_functions

28.2.21. pg_stat_slru

28.2.22. Statistics Functions

 

 

 

https://pgxn.org/dist/pgtelemetry/doc/pgtelemetry.html

 

select relname, n_live_tup, n_dead_tup

from pg_stat_user_tables

order by n_dead_tup desc;

------------------------------

SELECT schemaname,
       relname,
       last_vacuum,
       date_part('epoch', age(now(), last_vacuum)) AS age_last_vacuum,
       vacuum_count,
       last_autovacuum,
       date_part('epoch', age(now(), last_autovacuum)) AS age_last_autovacuum,
       autovacuum_count,
       last_analyze,
       date_part('epoch', age(now(), last_analyze)) AS age_last_analyze,
       analyze_count,
       last_autoanalyze,
       date_part('epoch', age(now(), last_autoanalyze)) AS age_last_autoanalyze,
       autoanalyze_count
FROM pg_stat_user_tables;

 

---------

 

SELECT c.oid,
       c.oid::regclass AS relation,
       pg_total_relation_size(c.oid) AS inclusive_bytes,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS inclusive_size,
       pg_relation_size(c.oid) AS exclusive_bytes,
       pg_size_pretty(pg_relation_size(c.oid)) AS exclusive_size
FROM pg_class c
JOIN pg_namespace n ON c.relnamespace = n.oid
WHERE c.relkind = 'r'
  AND n.nspname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name]);

 

 

This gives you the number of connections (cluster-wide) by application name.

 By default the application name is the program name that connected to the db.

 

SELECT application_name,
       count(*) AS count
FROM pg_stat_activity
GROUP BY application_name;

 

==========

 

SELECT client_addr,
       count(*) AS count
FROM pg_stat_activity
GROUP BY client_addr;

 

================

 

SELECT CASE
           WHEN wait_event IS NULL THEN state
           ELSE 'waiting'
       END AS state,
       count(*) AS count
FROM pg_stat_activity
GROUP BY 1;

 

==============

 

SELECT datname AS name,
       pg_database_size(oid) AS bytes,
       pg_size_pretty(pg_database_size(oid)) AS size
FROM pg_database;

===============

 

SELECT c.oid,
       c.oid::regclass AS index,
       pg_relation_size(c.oid) AS bytes,
       pg_size_pretty(pg_relation_size(c.oid)) AS size
FROM pg_class c
JOIN pg_namespace n ON c.relnamespace = n.oid
WHERE c.relkind = 'i'
  AND n.nspname <> ALL (ARRAY['pg_toast'::name, 'pg_catalog'::name, 'information_schema'::name]);

 

==========

 

SELECT mode,
       count(*) AS count
FROM pg_locks
GROUP BY mode;

 

========

 

SELECT locktype,
       count(*) AS count
FROM pg_locks
GROUP BY locktype;

==================

 

SELECT application_name,
       state,
       wait_event_type,
       wait_event,
       query,
       pid,
       client_addr,
       age(now(), query_start) AS running_for
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY age(now(), query_start) DESC;

 

===============

 

SELECT c.oid,
       c.oid::regclass AS relation,
       pg_relation_size(t.oid) AS exclusive_bytes,
       pg_size_pretty(pg_relation_size(t.oid)) AS exclusive_size
FROM pg_class c
JOIN pg_class t ON t.relname = 'pg_toast_' || c.oid::text
JOIN pg_namespace n ON c.relnamespace = n.oid;

 

==============

 

SELECT c.oid,
       c.oid::regclass AS relation,
       pg_total_relation_size(c.oid) AS inclusive_bytes,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS inclusive_size,
       pg_relation_size(c.oid) AS exclusive_bytes,
       pg_size_pretty(pg_relation_size(c.oid)) AS exclusive_size
FROM pg_class c
JOIN pg_namespace n ON c.relnamespace = n.oid
WHERE c.relkind = 'r'
  AND n.nspname <> ALL (ARRAY['pg_toast'::name, 'pg_catalog'::name, 'information_schema'::name]);

 

 

==============

 

SELECT s.slot_name,
       s.slot_type,
       s.active,
       s.restart_lsn,
       to_jsonb(s.*) AS full_data,
       now() AS querytime,
       CASE WHEN pg_is_in_recovery() THEN pg_last_xlog_replay_location()
            ELSE pg_current_xlog_location()
       END AS pg_current_xlog_location,
       CASE WHEN pg_is_in_recovery() THEN (NULL::integer)::numeric
            ELSE (pg_current_xlog_location() - s.restart_lsn)
       END AS current_lag_bytes
FROM pg_replication_slots s
ORDER BY s.slot_name;

-- Note: on PostgreSQL 10 and above, use pg_current_wal_lsn() and
-- pg_last_wal_replay_lsn() in place of the xlog-named variants above.

 

==================

 

 

SELECT d.datname,
       pss.queryid,
       pss.query,
       sum(pss.calls) AS calls,
       sum(pss.shared_blks_hit) AS shared_blks_hit,
       sum(pss.shared_blks_read) AS shared_blks_read,
       sum(pss.shared_blks_dirtied) AS shared_blks_dirtied,
       sum(pss.shared_blks_written) AS shared_blks_written,
       sum(pss.temp_blks_read) AS tmp_blks_read,
       sum(pss.temp_blks_written) AS tmp_blks_written
FROM pg_stat_statements pss
JOIN pg_database d ON d.oid = pss.dbid
GROUP BY d.datname, pss.queryid, pss.query;

 

====================

 

 

SELECT d.datname,
       pss.queryid,
       pss.query,
       sum(pss.calls) AS calls,
       sum(pss.total_time) AS total_time,
       sum(pss.rows) AS rows
FROM pg_stat_statements pss
JOIN pg_database d ON d.oid = pss.dbid
GROUP BY d.datname, pss.queryid, pss.query;

 

================

 

SELECT spcname AS name,
       pg_tablespace_size(oid) AS bytes,
       pg_size_pretty(pg_tablespace_size(oid)) AS size
FROM pg_tablespace;


 

==================

 

SELECT schemaname,
       relname,
       seq_scan,
       seq_tup_read,
       idx_scan,
       idx_tup_fetch,
       n_tup_ins,
       n_tup_upd,
       n_tup_del,
       n_tup_hot_upd,
       n_live_tup,
       n_dead_tup,
       n_mod_since_analyze
FROM pg_stat_user_tables;

 

=================

 

SELECT wait_event_type,
       count(*) AS count
FROM pg_stat_activity
WHERE wait_event IS NOT NULL
GROUP BY wait_event_type;

 

================

 
