
A+L has been working on a Single Page Application (SPA) wherein our client's users take on the role of Staff Users (think: project managers) as they aid their Customer Users in using the application to complete a complex project.
The architecture is common to many SPAs:

- RDBMS - PostgreSQL
- REST API Server - asynchronous python-3.11 using SQLAlchemy 2.x with asyncpg as the database engine.
- Reverse Proxy Server - nginx
- SPA Clients - Vue. As noted: the SPA supports different classes of users. Our client's users are the Staff Users, taking on the role of Project Manager, aiding their customers (Customer Users in the diagram) in completing a project.
- Notifications - early versions of the application configured the API servers to send notifications via an external SMTP server.
Live Updates
The SPA has a dashboard which shows a Customer User their progress in completing all their tasks; a simple graphic of a bar, 0% to 100%, with the bar filled to the percentage of tasks completed by the customer.

Below the graphic, a list of outstanding tasks is displayed. An interesting aspect of this application is that many customers, working on the same project, can affect each other's progress. E.g., a customer who is idle in the application may still see the progress bar increase as another customer completes a task.
Early demands on the project (mostly time pressure to get an MVP delivered for client review) prevented us from integrating websockets, so the SPA polls for progress every n-minutes via the REST API.
As we approach publication of the first version of the product we're now exploring ideas we had to table during early MVP development. These future enhancements mostly center around data updates and notifications:
- Calculating progress is not cheap. Having every user poll every n-minutes is very expensive, especially when you consider it is difficult for the SPA to know how frequently to poll. Since the server knows when data comes in from collaborating users that may affect progress, it's a clear win to have the server send notifications.
- There's a fair amount of complexity in completing tasks and it is expected customers will have several questions for staff, so there's a desire to add a live chat feature, allowing customers to ask questions. These questions and answers are not ephemeral, but must be recorded for the life of the project.
- We're planning on removing immediate notification of events by email, rather, sending notifications via the same mechanism as live chat. To handle missed notifications, a background task can poll the database for unseen notifications and send email only after n-hours of going unread. This will also limit "spamming" staff with each completed task (which they might already know about if they're active in the SPA), spooling up several notifications in one email.
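The planned digest behavior can be sketched in plain Python. Note this is only a sketch of the batching logic: the `Notification` shape, the 4-hour default threshold, and grouping by recipient are assumptions for illustration, not the application's actual schema.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Notification:
    recipient: str       # staff user to email
    content: str         # notification text
    created_at: datetime
    seen: bool = False   # set True once viewed in the SPA

def build_digests(notifications, now, threshold=timedelta(hours=4)):
    """Group notifications that have gone unseen past the threshold
    into one digest body per recipient, so each staff user receives a
    single email rather than one per completed task."""
    digests = defaultdict(list)
    for n in notifications:
        if not n.seen and now - n.created_at >= threshold:
            digests[n.recipient].append(n.content)
    return {who: "\n".join(lines) for who, lines in digests.items()}
```

A background task would run something like this on a schedule, emailing each digest and marking the batched notifications as sent.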
Enter websockets. Integrating websockets gives the server a direct connection to the SPA, over which it can push all of these desired notifications.
Pub/Sub
When I think of implementing Pub/Sub for an application my go-to has been redis. It is both performant and scalable. It is an excellent choice for many use-cases and particularly useful for ephemeral communication that is not tied to any data.
But if you need to know your notifications have been seen, or need archives of all communication, you will need persistence of your messages. This makes redis less desirable, as you will need to maintain your data in two places: 1) a permanent storage, like a RDBMS, 2) your redis server.
If you're using PostgreSQL for your data storage you already have (a little known) pub/sub engine.
Demo Preparation
You can run all the code in this tutorial if you have python-3 (tested on 3.8–3.11) and access to docker for installation of PostgreSQL. If you have earlier versions of python and/or your own installation of PostgreSQL, your mileage may vary as you follow along.
- Clone the repo holding demo SQL and python script:
$ git clone https://github.com/artandlogic/postgresql-pubsub-demo.git
$ cd postgresql-pubsub-demo
$ ls
demo.py demo.sql README.md
- Review two files in the repo: demo.sql, the SQL schema for the demo, and demo.py, a simple asynchronous python application.
- Start PostgreSQL via docker. The following commands will launch a container, loading the SQL downloaded in step 1. It is important to run the command from the cloned directory containing demo.sql, so the database initializes properly. This binds host port 8432 to the container's port 5432 (postgresql). If that port is not available, edit as necessary:
$ export PGHOST=localhost PGPORT=8432 PGUSER=demo PGPASSWORD="passw0rd!"
$ docker run --rm -d --name pg-pubsub-demo \
-v $(pwd):/docker-entrypoint-initdb.d \
-e POSTGRES_USER=$PGUSER -e POSTGRES_PASSWORD=$PGPASSWORD \
-p $PGPORT:5432 \
postgres:14-alpine
$ alias psql='docker exec -it pg-pubsub-demo psql -U demo'
- Run psql to verify the setup. You should see the following:
$ psql
psql (14.5)
Type "help" for help.
demo=# \d
List of relations
Schema | Name | Type | Owner
--------+----------------+----------+-------
public | appuser | table | demo
public | appuser_id_seq | sequence | demo
public | message | table | demo
public | message_id_seq | sequence | demo
(4 rows)
demo=# select * from appuser;
id | fullname
----+----------
1 | Abe
2 | Bob
3 | Charlie
(3 rows)
NOTE: When you are finished with the demo, you can stop/remove the container with docker stop pg-pubsub-demo.
NOTIFY/LISTEN/UNLISTEN
PostgreSQL has three non-standard SQL commands:
- LISTEN - subscribes the current session to a named notification channel.
- NOTIFY - sends an optional text payload to every session listening on a channel.
- UNLISTEN - cancels a session's subscription to a channel.
Let's look at examples with psql:
demo=# listen foo;
LISTEN
demo=# notify foo, 'Hello, world';
NOTIFY
Asynchronous notification "foo" with payload "Hello, world" received from server process with PID 60807.
The NOTIFY command expects constants for both the channel name (a legal SQL identifier) and the text notification (type TEXT). You cannot use an expression:
demo=# notify foo, 'This will ' || 'not work.';
ERROR: syntax error at or near "||"
LINE 1: notify foo, 'This will ' || 'not work.';
Fortunately, PostgreSQL provides the function pg_notify, which allows dynamic generation of notifications (note that in a default configuration a payload must be shorter than 8000 bytes):
demo=# select pg_notify('foo', 'This ' || 'is a ' || 'computed value!');
Asynchronous notification "foo" with payload "This is a computed value!" received from server process with PID 60807.
A session can "unsubscribe" with the UNLISTEN command:
demo=# unlisten foo;
UNLISTEN
demo=# notify foo, 'Hello, world';
NOTIFY
NOTIFY in Transactions
What is particularly attractive about NOTIFY is that you can place it in a transaction: if the transaction fails, the notifications are never delivered. Back in psql:
demo=# listen foo;
LISTEN
demo=# begin;
BEGIN
demo=*# notify foo, 'This will not be delivered if we rollback!';
NOTIFY
demo=*# rollback;
ROLLBACK
demo=# begin;
BEGIN
demo=*# notify foo, 'This will be delivered when we commit!';
NOTIFY
demo=*# commit;
COMMIT
Asynchronous notification "foo" with payload "This will be delivered when we commit!" received from server process with PID 60807.
This means we can write trigger functions that send notifications on events, making our notifications data-driven.
NOTE: Compare and contrast with what we would need to do if we used redis for pub/sub. After committing our postgres transaction we would have to make a separate call to redis. This separation of the data event from the event notification would be a weak link in the system.
If you look at demo.sql you will see a table, message, for the permanent storage of all messages, and a trigger function, notify_message(), that will send a notification (the JSON representation of the inserted record) on successful INSERT.
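The repo's exact definition may differ in detail, but a trigger of this kind can be sketched as follows (assuming the message table from demo.sql, with its channel column naming the notification channel):

```sql
-- Sketch only: see demo.sql for the repo's actual definition.
create or replace function notify_message() returns trigger as $$
begin
  -- Serialize the inserted row as JSON and send it on the channel
  -- named in the row itself. The notification is queued inside the
  -- transaction and delivered only if the transaction commits.
  perform pg_notify(NEW.channel, row_to_json(NEW)::text);
  return NEW;
end;
$$ language plpgsql;

create trigger message_notify
  after insert on message
  for each row execute function notify_message();
```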
In psql:
LISTEN broadcast;
User "Abe" delivers a message to the group on channel "broadcast":
-- Abe saves a message:
insert into message (sender, channel, content)
select id, 'broadcast', 'Hello, everyone!'
from appuser where fullname = 'Abe';
INSERT 0 1
Asynchronous notification "broadcast" with payload "{"id":2,"sender":1,"channel":"broadcast","content":"Hello, everyone!"}" received from server process with PID 61006.
We're also protected from false notifications (which we would be susceptible to if we managed notifications in application logic, i.e., first completing the transaction, then manually sending notifications). Let's say "Bob" sends a message as part of a large transaction that fails:
demo=# begin;
BEGIN
demo=*# -- insert lots of work in the transaction here
demo=*#
demo=*# -- Bob saves a message:
demo=*# insert into message (sender, channel, content)
demo-*> select id, 'broadcast', 'I just completed the...'
demo-*> from appuser where fullname = 'Bob';
INSERT 0 1
demo=*#
demo=*# -- insert lots more work in the transaction here that fails
demo=*# rollback;
ROLLBACK
No notification delivered!
In Python
As noted above, our architecture uses asyncpg, which has support for registering callbacks on notifications.
First we must create a virtualenv to install asyncpg.
$ python3.11 -m venv venv
$ source venv/bin/activate
(venv) $ pip install asyncpg
Simple Demo
The python script, demo.py, establishes a connection to PostgreSQL using the asyncpg package and adds a listener for notifications on channel "broadcast". It then simulates a server waiting on requests by sleeping.
(venv) $ python demo.py
Type Ctrl-c to exit
Main task Waiting for notifications...
With the application running in one terminal, run psql in another. In that session, when an INSERT on table message is committed, we will see our python application report it:
insert into message (sender, channel, content)
select id, 'broadcast', 'Hello, demo application!'
from appuser where fullname = 'Abe';
In the other terminal we will see:
RECEIVED notification from <asyncpg.connection.Connection object at 0x7f23f93b7920>[pid: 202] on channel broadcast:
{
"id": 9,
"sender": 1,
"channel": "broadcast",
"content": "Hello, demo application!"
}
------------------------------------------------------------------------------
Summary
For applications already using PostgreSQL, you can use NOTIFY, LISTEN, and UNLISTEN to manage pub/sub without installing redis solely to add pub/sub infrastructure, reducing the footprint of your application suite.
By placing NOTIFY in transactions, and more specifically, triggers, you can make your notifications data-driven.
For asynchronous applications using asyncpg, an API exists for registering callbacks which will be run asynchronously as notifications are delivered.
NOTE, not shown here: if you're using synchronous python with psycopg, the library has support for receiving notifications.