How to Secure PostgreSQL Database – Tips and Tricks

Databases, with the volumes of valuable data they store, are the prize most coveted by hackers. Database security is an area where new challenges emerge daily, and where you must not merely react to threats but actively work to prevent them.

Speaking of database security, we can’t focus on just one or two methods. Reliable protection rests on a system with several security levels, so let us talk about these levels. As an example, we’ll take one of the most popular database management systems – PostgreSQL.

PostgreSQL security: why is it important?

PostgreSQL, aka Postgres, is open-source, highly extensible, and packed with useful features. Apple, Netflix, and Spotify are among the major Postgres users. It is often the default choice for web and mobile applications and for big data analytics projects. If you refer to our step-by-step tutorial and useful tips for PostgreSQL, you’ll see how flexible and powerful this system is.

The widespread use of PostgreSQL in large, sophisticated solutions is one more reason to protect it with all available means. Such databases often store valuable and sensitive data.

In our article, we are going to review the following security aspects:

  • Network-level security (sockets and firewalls)
  • Transport-level security (secure communication with the database)
  • Database-level security (user accounts configuration)

All three levels must be covered, and the database administrator has to be familiar with the risks and security measures at every level.

Network-level security

On the network level, the main security components are:

  • Unix domain sockets
  • Firewalls

Now, we are going to dive deeper and examine them in detail.

Unix domain sockets

If you have a PostgreSQL database and need to connect to it in a Unix-based environment, Unix domain sockets (UDS) are the feature that enables the connection. A socket appears as a separate file in the filesystem. Its owner is the OS user that runs the Postgres server, and the socket file can be accessed directly only from the machine where that server runs.

To control access to this file, you can use the unix_socket_group and unix_socket_permissions configuration options. Besides, since the UDS socket is an ordinary file in the filesystem, you can modify the access controls on the directory where it is stored and thus change the effective permissions.

The unix_socket_directories option allows you to create multiple sockets for PostgreSQL in different directories, each with its own permissions for particular applications or users. As a result, the database administrator can control connections to the Postgres instance and allow only the trusted ones.
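
For illustration, here is a minimal sketch of these settings in postgresql.conf; the second directory and the OS group name are hypothetical examples, not recommendations:

# postgresql.conf – socket-related settings (example values)
unix_socket_directories = '/var/run/postgresql, /var/run/postgresql_admin'
unix_socket_group = 'pg_socket_users'    # OS group allowed to use the socket
unix_socket_permissions = 0770           # no access for users outside the group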

Firewalls

Firewalls are the most common means of controlling access through specific ports. By default, Postgres listens on TCP port 5432.

The methods of blocking ports depend on the operating system you work with. If it is Linux (the OS favored by most developers and database admins), you can use the standard iptables utility:

# Make sure to keep the established connections valid.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow SSH.
iptables -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

# Allow PostgreSQL.
iptables -A INPUT -p tcp -m state --state NEW --dport 5432 -j ACCEPT

# Allow all outbound, drop everything else inbound.
iptables -A OUTPUT -j ACCEPT
iptables -A INPUT -j DROP
iptables -A FORWARD -j DROP

Note: When you update the iptables rules, use the iptables-apply tool – it rolls the changes back automatically unless you confirm them, which prevents you from accidentally locking yourself out.
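
A rough sketch of that workflow (iptables-apply ships with the iptables package on Debian-based systems, and the rules-file path below is only an example):

# Save the current ruleset to a file.
iptables-save > /etc/iptables/rules.v4

# After editing the file, load it with a 60-second confirmation window;
# if you cannot confirm (for example, the new rules locked you out),
# the previous ruleset is restored automatically.
iptables-apply -t 60 /etc/iptables/rules.v4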

To improve the security level further, we can configure the firewall to accept connections to the PostgreSQL port from specific IP addresses or subnets only:

# Only allow access to the PostgreSQL port from the local subnet.
iptables -A INPUT -p tcp -m state --state NEW --dport 5432 -s 192.168.1.0/24 -j ACCEPT

Considering that our goal is to prevent all incoming connection attempts to port 5432, we may need a local agent of sorts. It establishes and maintains an outbound connection to the client node (or nodes) and proxies traffic to our local Postgres instance. This technique is known as “reverse tunneling.”

For instance, we can open the reverse tunnel by executing the following command from the node where the Postgres database is located and running:

ssh -f -N -T -R 5432:localhost:5432 user@<client-host>

In this example, <client-host> must be reachable from the PostgreSQL node and must be running an SSH daemon.

The command forwards port 5432 on the client machine to port 5432 on the database server. As a result, running psql on the client reaches the database through the tunnel:

psql "host=localhost port=5432 user=postgres dbname=postgres"

Firewalls are an effective means of preventing unauthorized connections, but they are not the only one we have. Another efficient practice is restricting the addresses the server listens on.

Restricting the listening addresses

If the node running PostgreSQL has several network interfaces, you can make use of the listen_addresses parameter in the configuration file. It ensures that the server listens only on the addresses that clients will actually use to connect:

listen_addresses = 'localhost, 192.168.0.1'

If all clients connecting to the database always run on the same node, you can disable listening on TCP sockets entirely. The server will then accept connections through Unix domain sockets only, not from the network.

To achieve this, set an empty string as the list of addresses the server should listen on, as shown below:

listen_addresses = ''

These methods help ensure an appropriate level of network security and protect your Postgres databases.

Transport-level security (TLS)

TLS is a cryptographic protocol for secure communication over the Internet. PostgreSQL supports it natively for client connections and data transport (the documentation still refers to it as SSL). When enabled, it allows the client and the server to authenticate each other using certificates issued by a trusted certificate authority.

Typically, it is used to secure database connections in applications with multiple tiers or services, or to connect to PostgreSQL for remote administration through dedicated tools.

It does not matter whether you prefer the standard psql terminal, the popular pgAdmin solution, or a third-party PostgreSQL client that covers many more tasks than database administration alone. In any case, you must take care to protect that connection.

Let us dive deeper to see how we can protect PostgreSQL databases by mutually authenticating the server and the client.

Server TLS

Server authentication requires obtaining a certificate, which our server will then present to clients. It is possible to get free X.509 certificates, and tools such as the certbot CLI simplify the process. You can use it to retrieve a certificate from the Let’s Encrypt service as follows:

certbot certonly --standalone -d postgres.example.com

By default, certbot uses the HTTP-01 ACME challenge to validate the certificate request. It therefore requires a valid DNS record pointing to the node in question, and port 80 must be open.

Another option is generating certificates locally, without calling Let’s Encrypt. For that, you can use the OpenSSL CLI:

# Make a self-signed server CA.
openssl req -sha256 -new -x509 -days 365 -nodes \
    -out server-ca.crt \
    -keyout server-ca.key

# Generate server CSR. Put the hostname you will be using to connect to the database 
# in the CN field.
openssl req -sha256 -new -nodes \
    -subj "/CN=postgres.example.com" \
    -out server.csr \
    -keyout server.key

# Sign the server certificate.
openssl x509 -req -sha256 -days 365 \
    -in server.csr \
    -CA server-ca.crt \
    -CAkey server-ca.key \
    -CAcreateserial \
    -out server.crt

Important: All certificates must always be valid and renewed on time!

Client TLS

Using the X.509 certificate to authenticate the client is another efficient security measure. It allows the server to check that the certificate provided by the client is valid and signed by a trusted certificate authority (CA).

The best option is to use different certificate authorities for the server and the client. So, let us create a client CA and use it to sign the client certificate:

# Make a self-signed client CA.
openssl req -sha256 -new -x509 -days 365 -nodes \
    -out client-ca.crt \
    -keyout client-ca.key

# Generate client CSR. CN must contain the name of the database role you will be using 
# to connect to the database.
openssl req -sha256 -new -nodes \
    -subj "/CN=jane" \
    -out client.csr \
    -keyout client.key

# Sign the client certificate.
openssl x509 -req -sha256 -days 365 \
    -in client.csr \
    -CA client-ca.crt \
    -CAkey client-ca.key \
    -CAcreateserial \
    -out client.crt

Important: The CommonName (CN) field must contain the login name of the target database role. The PostgreSQL server will use it to identify and authorize the client.

TLS configuration

Once you have both the server and the client certificates, you can configure the PostgreSQL server to accept TLS connections in the following way:

ssl = on
ssl_cert_file = '/path/to/server.crt'
ssl_key_file = '/path/to/server.key'
ssl_ca_file = '/path/to/client-ca.crt'

# This setting is on by default but it's better to have it explicit for security.
ssl_prefer_server_ciphers = on

# TLS 1.3 gives the best security and is recommended
# if you control both the servers and the clients.
ssl_min_protocol_version = 'TLSv1.3'

Pay attention to the final stage of the configuration – you need to update the host-based authentication file (pg_hba.conf) to make TLS mandatory for all connections and to authenticate clients with X.509 certificates. Add entries like the following:

# TYPE    DATABASE  USER  ADDRESS    METHOD
hostssl   all       all   ::/0       cert
hostssl   all       all   0.0.0.0/0  cert

After that, all users connecting to the server will have to provide a valid certificate signed by the client CA:

psql "host=postgres.example.com \
      user=jane \
      dbname=postgres \
      sslmode=verify-full \
      sslrootcert=/path/to/server-ca.crt \
      sslcert=/path/to/client.crt \
      sslkey=/path/to/client.key"

Note that psql will not verify the server certificate by default, so you need to set the sslmode parameter yourself. Use verify-full if you connect with the hostname provided in the CN field of the server certificate (psql will then also check that the hostname matches the certificate); otherwise, use verify-ca, which only checks that the certificate is signed by a trusted CA.

Also, you do not have to enter the full paths to the TLS certificates every time you connect to the database. Instead, you can use the connection service file supported by PostgreSQL clients. With this file, you can group connection parameters into “services” and then refer to them via the “service” parameter.

For example, let’s create a ~/.pg_service.conf file with the following content:

[example]
host=postgres.example.com
user=jane
sslmode=verify-full
sslrootcert=/path/to/server-ca.crt
sslcert=/path/to/client.crt
sslkey=/path/to/client.key

Now, you only need to provide the service name and the database you want to connect to:

psql "service=example dbname=postgres"

With this setup, connections to the PostgreSQL server remain protected and are also convenient to establish. Now, let us move on to database-level security measures.

Database-level security

So far, we have reviewed methods that protect our PostgreSQL databases from unauthorized connection attempts and ensure that the server and clients can trust each other (using TLS authentication). However, even when all checks at those stages pass successfully, there is still a risk of harming the database. That’s why it is critical to define what users can and cannot do with the data.

Roles and permissions

Database users in PostgreSQL are managed through the concept of “roles” – a system of permissions and restrictions that database admins define for each user. In fact, a username is simply a role with the LOGIN attribute, so when you create a new user, you create a role.

Have a look at the commands below:

CREATE USER jane;

Or

CREATE ROLE jane LOGIN;

These two commands are equivalent – you can use either one, and the result will be the same.

Besides the login capability, roles can carry other attributes: the right to create databases (CREATEDB), to create other roles (CREATEROLE), or to bypass all permission checks altogether (SUPERUSER), and so on. In addition to attributes, roles can hold various access privileges.
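
For illustration only (the role names below are made up), such attributes are specified when a role is created or altered:

-- A role that can log in and create new databases.
CREATE ROLE deploy LOGIN CREATEDB;

-- A role that can create and manage other roles.
CREATE ROLE user_admin LOGIN CREATEROLE;

-- Attributes can also be added or removed later.
ALTER ROLE user_admin NOCREATEROLE;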

Assume that our working scenario demands monitoring the server inventory. Let’s create a table:

CREATE TABLE server_inventory (
    id            int PRIMARY KEY,
    description   text,
    ip_address    text,
    environment   text,
    owner         text
);

Now, the question is who is going to manage this inventory. The simplest option would be the default “superuser” role in PostgreSQL, but it is not the best one. Security principles recommend granting users only the minimum privileges necessary to perform their duties – and nothing more.

That’s why we don’t assign superusers to monitor the server inventory. Instead, we create a role without such privileges and grant it the necessary access permissions.

It is also possible to create a kind of “group role” and make other roles its members. This saves time and effort, as you won’t need to assign permissions and restrictions to each new role individually.

Assume we have three employees, Jane, Victor, and Alex. They all need permission to view the list of servers, but they should not be allowed to change that list.

-- Create a group role without the ability to log in by itself
-- and grant it the SELECT privilege on the server inventory table.
CREATE ROLE inventory_viewer;
GRANT SELECT ON server_inventory TO inventory_viewer;

-- Create three user accounts that will inherit the "inventory_viewer"
-- permissions upon logging into the database.
CREATE ROLE jane LOGIN INHERIT;
CREATE ROLE victor LOGIN INHERIT;
CREATE ROLE alex LOGIN INHERIT;

-- Assign these user accounts to the "inventory_viewer" group role.
GRANT inventory_viewer TO jane, victor, alex;

This means that our three employees (Jane, Victor, and Alex) inherit the permissions of the inventory_viewer group role when they connect to the database. As members of that role, they can query the server_inventory table and fetch data from it, but they can never change that data.

However, in this case, we grant the SELECT privilege on all columns of the server_inventory table. What if we want to restrict access to part of the data stored there? For instance, new employees should only view the basic data available in a couple of columns. Let’s create a new role for that.

CREATE ROLE rookie;
GRANT SELECT (id, description) ON server_inventory TO rookie;
CREATE ROLE dan LOGIN INHERIT;
GRANT rookie TO dan;

This way, you can allow particular users to connect to specific databases, create new database objects, perform INSERT, UPDATE, or DELETE operations, and so on. The full information about setting privileges is provided in the official PostgreSQL documentation.
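
As a quick sketch (the inventory database name and the inventory_editor role are hypothetical), such grants could look like this:

-- A role allowed to connect to the "inventory" database,
-- create objects in the public schema, and modify the table data.
CREATE ROLE inventory_editor LOGIN;
GRANT CONNECT ON DATABASE inventory TO inventory_editor;
GRANT CREATE ON SCHEMA public TO inventory_editor;
GRANT INSERT, UPDATE, DELETE ON server_inventory TO inventory_editor;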

The next option to secure your database is enabling row-level security (RLS) for tables.

Row-level security (RLS)

The essence of the row-level security method is applying privileges to individual table rows. It covers both the existing rows returned by the SELECT command and the rows affected by the INSERT, UPDATE, or DELETE commands. To apply this method, you need to enable RLS for the table and configure access by creating policies.

Assume that we want to allow the user to update specific servers in the table. The first step is to enable RLS:

ALTER TABLE server_inventory ENABLE ROW LEVEL SECURITY;

However, without any additional configuration, this blocks access to the table for all users except the table owner, which is not what we need. Let us allow users to view all servers but update only the servers they own by creating new policies.

CREATE POLICY select_all_servers
    ON server_inventory FOR SELECT
    USING (true);

CREATE POLICY update_own_servers
    ON server_inventory FOR UPDATE
    USING (current_user = owner)
    WITH CHECK (current_user = owner);

Note: To create or update this RLS policy, you have to be the table owner.
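
To sanity-check the policies, a superuser (or a member of the jane role) can temporarily switch to one of the accounts created earlier. This is only a sketch and assumes jane has also been granted UPDATE on the table:

-- Act with jane's privileges for the current session.
SET ROLE jane;

-- The SELECT policy exposes every row.
SELECT id, description, owner FROM server_inventory;

-- The UPDATE policy makes only rows with owner = 'jane' visible to UPDATE;
-- rows owned by other users are simply not affected.
UPDATE server_inventory SET description = 'patched' WHERE owner = 'jane';

RESET ROLE;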

Auditing

Implementing security measures at every level will certainly improve your database security. However, they cannot guarantee that an attacker will never break in. That’s why it is critical to monitor the database in real time and know what is happening on the server at any moment.

One of the effective yet often overlooked options is keeping a detailed log. To enable it, add the following parameters to the server configuration file:

# Log successful and unsuccessful connection attempts.
log_connections = on

# Log terminated sessions.
log_disconnections = on

# Log all executed SQL statements.
log_statement = all

Unfortunately, standard self-hosted PostgreSQL does not provide many more options out of the box. Still, there are third-party tools and extensions with rich security and auditing capabilities that you can apply to your system and databases.
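
One example of such an extension (our pick for illustration, not a prescription) is pgAudit, which produces detailed audit logs of database activity. A minimal sketch of enabling it in postgresql.conf:

# Load the extension at server start (requires a restart)
# and choose which statement classes to audit.
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'write, ddl'

After restarting the server, run CREATE EXTENSION pgaudit; in the target database to activate the audit logging.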

Conclusion

If you deal with databases, their security must be among your top priorities. You need to apply the appropriate measures and make them work together as solid barriers against intruders. Such barriers will block hackers or slow them down as much as possible, should they slip past the first security cordon.

Great help comes from professional tools that streamline the configuration of security measures. You can download our PostgreSQL GUI tool to work in a convenient visual interface and save lots of time. But the most essential thing is to work on security continually – it is always an ongoing process.

Jane Williams
dbForge Team