If you need a Postgres database for development, you can of course install it manually on your local machine. But be aware that this requires some knowledge of Postgres installation and maintenance procedures.
On the other hand, there is a much simpler way: run the Postgres database inside a Docker container. Docker effectively encapsulates the deployment, administration and configuration procedures, so if you want to deploy Postgres locally with minimum effort, Docker is the best choice. All you need to do is start a pre-built Docker container, and you will have a Postgres database ready for your service.
Here is my GitHub repo for building a Docker container with an embedded Postgres database: https://github.com/alexd84/dockerized-postgres.
(If you don't have Docker yet, please read how to install it in my previous article here.)
To build the Docker container, you will need to run three simple commands in a terminal:
git clone https://github.com/alexd84/dockerized-postgres.git
docker build -t postgres dockerized-postgres
docker run -di -p 5432:5432 postgres
Here you are cloning the project from GitHub, building the container image and launching it.
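As a quick sanity check, you can ask Docker which containers were started from that image and which ports they publish. This sketch assumes the image was tagged "postgres" as in the build command above; the container ID and port mapping will differ on your machine, and the fallback branch simply reports when Docker is not available.

```shell
# List running containers built from the "postgres" image and their ports
if command -v docker >/dev/null 2>&1; then
  docker ps --filter "ancestor=postgres" --format "{{.ID}}  {{.Ports}}"
  STATUS="checked"
else
  STATUS="docker-missing"
fi
echo "$STATUS"
```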
Now you can verify that the Postgres database instance is running on your local machine:
telnet 192.168.99.100 5432
Trying 192.168.99.100...
Connected to 192.168.99.100.
Escape character is '^]'.
You may wonder why we use the IP address 192.168.99.100 rather than 127.0.0.1 if we are running locally. This is the default address of the virtual machine that runs the Docker container hosting the Postgres database for us. If your container has a different IP address, use that one instead of the default.
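Rather than assuming 192.168.99.100, you can query the address yourself. The sketch below assumes a docker-machine (Docker Toolbox) setup with the usual machine name "default"; on native Docker installations, published ports are reachable on 127.0.0.1, which is the fallback here.

```shell
# Determine which host address the Postgres port is published on
if command -v docker-machine >/dev/null 2>&1; then
  # Docker runs inside a VM; ask for the VM's address
  DOCKER_HOST_IP="$(docker-machine ip default)"
else
  # Native Docker publishes ports on localhost
  DOCKER_HOST_IP="127.0.0.1"
fi
echo "$DOCKER_HOST_IP"
```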
To authenticate to the Postgres database, use the default credentials: postgres/postgres.
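With those credentials, connecting from a local psql client is straightforward. The snippet below builds a connection URL for the dockerized instance; the host IP is the default VM address from above and should be replaced with your own, and the final psql call is left commented out since it requires a psql client on your machine.

```shell
# Build a connection URL using the default postgres/postgres credentials
PGHOST=192.168.99.100   # replace with your container/VM address
CONN="postgresql://postgres:postgres@${PGHOST}:5432/postgres"
echo "$CONN"
# psql "$CONN"   # uncomment to connect (requires a local psql client)
```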
How does it work? (for those who actually care)
The repository you just cloned contains three files: Dockerfile, entrypoint.sh and pg_hba.conf.
Dockerfile: this is the main file, which instructs Docker to build a new image based on Ubuntu, install the Postgres distribution, configure it and hand control to the entrypoint.sh script:
FROM ubuntu:14.04
RUN apt-get update -y
RUN apt-get install postgresql postgresql-contrib -y
RUN mv /etc/postgresql/9.3/main/pg_hba.conf /etc/postgresql/9.3/main/pg_hba.conf.backup
COPY pg_hba.conf /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses = '*'" >> /etc/postgresql/9.3/main/postgresql.conf
EXPOSE 5432
COPY entrypoint.sh /
ENTRYPOINT sudo /entrypoint.sh
entrypoint.sh: here we start the Postgres database and set the default login and password for access:
sudo service postgresql start
sudo -u postgres psql -c "ALTER USER postgres WITH PASSWORD 'postgres'"
tail
The final tail command ensures that the entrypoint.sh script never completes, which effectively keeps the container running in the background; otherwise the container would stop immediately after startup.
local   all   postgres                 peer
host    all   all        127.0.0.1/32  md5
host    all   all        0.0.0.0/0     md5
pg_hba.conf: this file exists solely to configure Postgres to accept connections from remote hosts. By default, Postgres permits connections only from localhost, so when it runs inside Docker you can't reach it from your host machine. We therefore supply a pg_hba.conf that permits inbound connections from any machine.
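Note that the 0.0.0.0/0 rule accepts connections from anywhere, which is fine for local development but broader than necessary. If you only need access from your own host, a narrower rule can be used instead; the address below matches the default VM gateway in this setup and is only an example, not a requirement:

```
host    all   all   192.168.99.1/32   md5
```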
So now you have a Postgres database running locally without deep knowledge of its internals. It is also useful to know that pre-built Docker containers exist for almost any kind of application you can imagine, from email servers to Hadoop clusters, so you can rely on them to start complex applications in minutes and save time by avoiding manual installation and configuration.