Ghost is an open-source, professional publishing platform designed to create blogs, magazines, and news sites. Originally launched in 2013 as a streamlined alternative to WordPress, Ghost focuses on content creation with a clean, distraction-free interface. This comprehensive guide walks you through installing and securing Ghost CMS using Docker on Ubuntu 20.04 or newer versions. Docker provides an isolated, consistent environment that makes deployment straightforward and maintenance simpler. By containerizing Ghost, you’ll gain the benefits of easy updates, consistent environments, and simplified backups while maintaining robust security practices.
Understanding Ghost CMS and Docker
Ghost CMS stands out as a modern, JavaScript-based publishing platform powered by Node.js. It offers a markdown-focused editor, built-in SEO tools, and native subscription capabilities, making it ideal for professional bloggers, journalists, and content creators. Ghost emphasizes performance and simplicity, providing all essential publishing features without the complexity found in some other content management systems.
Docker revolutionizes application deployment by packaging software and dependencies into standardized containers. This approach eliminates environment inconsistencies and simplifies management. For Ghost specifically, Docker eliminates the need to manually configure Node.js, database systems, and other dependencies. You can deploy, update, and migrate your Ghost blog with minimal technical overhead.
When combined, Ghost and Docker create a powerful, flexible publishing platform that’s easy to maintain and scale. This guide uses Docker Compose V2 (the docker compose command) rather than the deprecated Docker Compose V1 (docker-compose), ensuring you’re following current best practices as of March 2025.
Prerequisites and System Requirements
Before beginning the installation process, ensure your system meets the following requirements:
Hardware Requirements
- CPU: At least 1 dedicated core (2+ cores recommended for production)
- RAM: Minimum 1GB (2GB or more recommended for production use)
- Storage: At least 10GB free space for Docker images, database, and content
Software Requirements
- Ubuntu 20.04 LTS or newer (22.04 LTS or 24.04 LTS)
- A non-root user with sudo privileges
- Updated system packages
Network Requirements
- A domain name pointing to your server (for production use; see the quick DNS check below)
- Ability to open ports 80 and 443 for HTTP/HTTPS traffic
- Unrestricted outbound internet access to pull Docker images
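To confirm the DNS requirement before you begin, check that your domain’s A record resolves to your server’s public IP (yourdomain.com is a placeholder, and ifconfig.me is just one of several public IP echo services; if dig is missing, install it with sudo apt install dnsutils):
dig +short yourdomain.com
curl -4 -s https://ifconfig.me
The two addresses should match before you request SSL certificates later in this guide.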
Required Knowledge
- Basic familiarity with terminal commands
- Understanding of web hosting concepts
- Ability to follow step-by-step instructions
Initial System Setup
Start by ensuring your Ubuntu system is fully updated. This prevents package conflicts and security vulnerabilities during installation.
sudo apt update
This command refreshes your package lists, ensuring you have access to the latest software versions. If this fails, check your internet connection or try changing your Ubuntu mirror.
sudo apt upgrade -y
This upgrades all installed packages to their latest versions. The -y
flag automatically answers “yes” to the upgrade prompt. This process may take several minutes depending on your internet speed and how many packages need updating.
Next, install essential tools that will be needed throughout this process:
sudo apt install -y ca-certificates curl gnupg lsb-release
This installs certificate authority certificates for secure connections, curl for downloading files, gnupg for verifying package signatures, and lsb-release for detecting your Ubuntu version. These are prerequisites for Docker installation.
Configuring UFW Firewall
Ubuntu comes with Uncomplicated Firewall (UFW), which provides a user-friendly interface for managing firewall rules. Start by checking if the firewall is active:
sudo ufw status
If the firewall is inactive, you’ll see “Status: inactive”. If it’s already active, you’ll see a list of allowed connections.
Allow SSH connections to prevent locking yourself out when enabling the firewall:
sudo ufw allow OpenSSH
This ensures you maintain SSH access after enabling the firewall. Without this rule, you might lose connection to your server and require console access to regain control.
Next, allow HTTP and HTTPS traffic, which will be necessary for accessing your Ghost blog:
sudo ufw allow 80
This opens port 80 for HTTP traffic. This port is necessary for initial Let’s Encrypt certificate verification and for redirecting users to the secure HTTPS version of your site.
sudo ufw allow 443
This opens port 443 for HTTPS traffic, which will be used for encrypted connections to your Ghost blog. All production traffic will go through this port.
Now, enable the firewall:
sudo ufw enable
You’ll receive a warning that enabling the firewall might disrupt existing SSH connections. Type ‘y’ and press Enter to continue. Since you’ve allowed OpenSSH, your current connection shouldn’t be affected.
Verify that the firewall is properly configured:
sudo ufw status
This should display a list of allowed services and ports, including OpenSSH, port 80, and port 443. If any are missing, add them using the sudo ufw allow command.
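The output should look roughly like the following; exact formatting varies slightly between UFW versions:
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
80                         ALLOW       Anywhere
443                        ALLOW       Anywhere
On IPv6-enabled hosts you will also see matching "(v6)" entries.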
Installing Docker and Docker Compose
Docker will manage the containers that run your Ghost CMS and supporting services. First, uninstall any older versions of Docker that might cause conflicts:
sudo apt remove docker docker-engine docker.io containerd runc
This removes any previously installed Docker packages. If none are installed, you’ll see a message that these packages weren’t found, which is fine.
Create a directory for Docker’s GPG key:
sudo mkdir -p /etc/apt/keyrings
This creates a directory to store repository keys. If the directory already exists, this command will have no effect.
Download and add Docker’s official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
This downloads Docker’s verification key and adds it to your system, enabling you to securely download Docker packages. If you encounter a GPG error, try running sudo apt install gnupg first.
Add the Docker repository to your system:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
This adds the appropriate Docker repository for your Ubuntu version and system architecture. If this command fails, check if the lsb-release package is installed.
Update apt package list to include the Docker repository:
sudo apt update
This refreshes your package lists to include Docker packages from the newly added repository.
Install Docker Engine, containerd, and Docker Compose:
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
This installs Docker Engine, Docker CLI tools, containerd runtime, and the Docker Compose plugin. If this fails with dependency errors, run sudo apt --fix-broken install and try again.
Verify that Docker is properly installed:
docker --version
This should display the installed Docker version. If you get a “command not found” error, try logging out and back in or rebooting your system.
Verify that Docker Compose V2 is installed:
docker compose version
This should show the installed Docker Compose version. Note that we’re using docker compose (with a space), which is the current Compose V2 syntax, not the older docker-compose command.
Add your user to the Docker group to avoid needing sudo for Docker commands:
sudo usermod -aG docker $USER
This adds your current user to the Docker group, allowing you to run Docker commands without sudo. You’ll need to log out and back in for this change to take effect.
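If you would rather not log out immediately, you can open a shell with the new group applied and confirm it works; this is only a convenience, and a fresh login remains the cleanest fix:
newgrp docker
docker ps
If docker ps runs without a permission error, the group change is effective in that shell.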
Verify Docker is working properly by running a test container:
docker run --rm hello-world
This runs a simple test container that confirms Docker is installed and functioning correctly. If successful, you’ll see a message that your installation appears to be working correctly.
Installing Certbot for SSL Certificates
Secure your Ghost blog with free SSL certificates from Let’s Encrypt using Certbot:
sudo snap install core
This ensures the core snap is installed and up to date. Snap is a package system from Canonical that comes pre-installed on Ubuntu 20.04 and newer.
sudo snap refresh core
This updates the core snap package to the latest version. This step helps avoid conflicts with Certbot installation.
Remove any previously installed Certbot packages:
sudo apt remove certbot
This removes any previous Certbot installations that might conflict with the snap version. If Certbot wasn’t previously installed, this command has no effect.
Install Certbot using Snap:
sudo snap install --classic certbot
This installs the latest version of Certbot. The --classic flag gives Certbot the permissions it needs to manage your certificates.
Create a symbolic link to make Certbot easily accessible:
sudo ln -s /snap/bin/certbot /usr/bin/certbot
This makes the Certbot command available system-wide. If this command fails, the link may already exist.
Setting Up Ghost with Docker Compose
Now you’re ready to set up Ghost using Docker Compose. Create a directory for your Ghost installation:
mkdir -p ~/ghost-blog
This creates a directory to store your Ghost configuration and content. The -p
flag ensures parent directories are created if they don’t exist.
Change to the new directory:
cd ~/ghost-blog
Create a Docker Compose configuration file:
nano docker-compose.yml
This opens the nano text editor to create a new Docker Compose file. If you don’t have nano, you can install it with sudo apt install nano or use any other text editor.
Add the following configuration, replacing yourdomain.com
with your actual domain name and using strong, unique passwords:
services:
  ghost:
    image: ghost:latest
    restart: always
    # Publish Ghost only on localhost so the Nginx reverse proxy on the host
    # can reach it; binding to 127.0.0.1 keeps it off public interfaces.
    ports:
      - "127.0.0.1:2368:2368"
    environment:
      url: https://yourdomain.com
      database__client: mysql
      database__connection__host: db
      database__connection__user: ghost
      database__connection__password: YourStrongPasswordHere
      database__connection__database: ghost
      # Set Node.js to production mode
      NODE_ENV: production
    volumes:
      - ./ghost-content:/var/lib/ghost/content
    depends_on:
      - db
    networks:
      - ghost_network

  db:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: YourMySQLRootPasswordHere
      MYSQL_DATABASE: ghost
      MYSQL_USER: ghost
      MYSQL_PASSWORD: YourStrongPasswordHere
    volumes:
      - ./mysql-data:/var/lib/mysql
    networks:
      - ghost_network

networks:
  ghost_network:
    driver: bridge
Save the file by pressing Ctrl+X, then Y, then Enter. This configuration creates two containers, one for Ghost and one for MySQL, with persistent storage for both, and publishes Ghost only on localhost port 2368 so the Nginx reverse proxy you configure later can reach it. The deprecated top-level version key is omitted because Docker Compose V2 ignores it.
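As an optional improvement, you can keep the passwords out of docker-compose.yml by placing them in a .env file in the same directory; Docker Compose automatically substitutes ${VARIABLE} references from that file. A minimal sketch, with variable names of your own choosing:
# .env (keep it private: chmod 600 .env)
DB_PASSWORD=YourStrongPasswordHere
DB_ROOT_PASSWORD=YourMySQLRootPasswordHere
In docker-compose.yml you would then write database__connection__password: ${DB_PASSWORD}, MYSQL_PASSWORD: ${DB_PASSWORD}, and MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD} in place of the literal values.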
Create directories for persistent storage:
mkdir -p ghost-content mysql-data
This creates directories that will be mounted as volumes to store Ghost content and MySQL data. If these directories exist, this command has no effect.
Set appropriate permissions for these directories:
sudo chown -R 1000:1000 ghost-content
This sets the correct ownership for the Ghost content directory. Ghost runs as user ID 1000 inside the container, so this ensures proper file access.
Start the Ghost application:
docker compose up -d
This starts the Ghost and MySQL containers in detached mode. The -d
flag runs containers in the background. The initial startup might take a few minutes as Docker downloads the required images.
Check if the containers are running:
docker compose ps
This shows the status of your containers. Both should show “Up” under the Status column. If any container shows an error or keeps restarting, check the logs with docker compose logs.
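Because the compose file publishes Ghost on localhost port 2368, you can also confirm it responds before configuring Nginx:
curl -I http://127.0.0.1:2368
A 200 or 301 response means Ghost is up; “connection refused” means the container isn’t listening yet, so check the logs.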
Obtaining SSL Certificates
Before configuring your web server, you need to obtain SSL certificates. Ensure your domain is pointed to your server’s IP address before continuing.
Temporarily stop any running web servers to free port 80:
sudo systemctl stop nginx
This stops the Nginx service if it’s running. If you don’t have Nginx installed yet, this command will show an error which you can ignore.
Obtain SSL certificates for your domain:
sudo certbot certonly --standalone -d yourdomain.com -d www.yourdomain.com
Replace yourdomain.com
with your actual domain. This command runs a temporary web server to verify your domain ownership and obtain certificates. You’ll be prompted to enter your email address and agree to the terms of service.
Verify that certificates were successfully obtained:
sudo ls -la /etc/letsencrypt/live/yourdomain.com/
This lists the certificate files. You should see files including fullchain.pem and privkey.pem. If these files don’t exist, the certificate acquisition failed, and you should check Certbot’s output for error messages.
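Certbot can also summarize the certificates it manages, which is a quick way to confirm the certificate name and expiry date:
sudo certbot certificates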
Installing and Configuring Nginx
Nginx will act as a reverse proxy for your Ghost blog, handling SSL termination and serving static content efficiently:
sudo apt install -y nginx
This installs the Nginx web server. If the installation fails, try running sudo apt update first and ensure no other process is using port 80.
Create an Nginx configuration file for your Ghost blog:
sudo nano /etc/nginx/sites-available/yourdomain.com
Replace yourdomain.com
with your actual domain name. This creates a new Nginx configuration file.
Add the following configuration, replacing yourdomain.com
with your actual domain:
server {
    listen 80;
    listen [::]:80;
    server_name yourdomain.com www.yourdomain.com;

    # Redirect HTTP to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name yourdomain.com www.yourdomain.com;

    # SSL configuration
    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;

    # Security headers
    add_header X-Frame-Options SAMEORIGIN always;
    add_header X-Content-Type-Options nosniff always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

    # Ghost reverse proxy
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:2368;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Increase max body size for media uploads
    client_max_body_size 50M;

    # Optimize content caching
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        proxy_pass http://localhost:2368;
        expires 30d;
        add_header Cache-Control "public, no-transform";
    }
}
Save the file by pressing Ctrl+X, then Y, then Enter. This configuration redirects HTTP traffic to HTTPS and sets up a reverse proxy to your Ghost container, along with security headers and content caching.
Link the configuration to the enabled sites:
sudo ln -s /etc/nginx/sites-available/yourdomain.com /etc/nginx/sites-enabled/
This activates your Nginx configuration. If this command fails, the symbolic link may already exist.
Test the Nginx configuration for syntax errors:
sudo nginx -t
This checks your Nginx configuration for errors. If you see “syntax is ok” and “test is successful”, proceed to the next step. If there are errors, check your configuration file for typos.
Restart Nginx to apply the configuration:
sudo systemctl restart nginx
This applies your new Nginx configuration. If Nginx fails to restart, check the error logs with sudo journalctl -xe and verify your SSL certificate paths.
Check if Nginx is running:
sudo systemctl status nginx
This shows the status of the Nginx service. It should show “active (running)”. If not, check the error output for clues about what went wrong.
Setting Up Automatic Certificate Renewal
Let’s Encrypt certificates expire after 90 days, so automatic renewal is essential:
sudo systemctl list-timers snap.certbot.renew.timer
The Certbot snap installs a systemd timer that runs the renewal check twice a day. This command shows when the timer last fired and when it will fire next.
Test the renewal process to ensure it works correctly:
sudo certbot renew --dry-run
This simulates the renewal process without actually replacing certificates. You should see a message indicating that the renewal simulation succeeded. If there are errors, they might relate to Nginx configuration or certificate ownership.
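Renewal replaces the certificate files on disk, but Nginx keeps serving the old certificate until it reloads. A common approach is a deploy hook that reloads Nginx only when a renewal actually succeeds; one way to set this up, assuming the standard Let’s Encrypt directory layout:
sudo tee /etc/letsencrypt/renewal-hooks/deploy/reload-nginx.sh > /dev/null <<'EOF'
#!/bin/bash
systemctl reload nginx
EOF
sudo chmod +x /etc/letsencrypt/renewal-hooks/deploy/reload-nginx.sh
Certbot runs executable scripts in that directory after each successful renewal.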
Configuring Ghost for Production
Now that Ghost is running behind Nginx with SSL, you need to update some Ghost settings for production use:
Stop the Ghost container:
docker compose down
This stops all containers defined in your docker-compose.yml file. Your content and database data remain safe in the persistent volumes.
Edit the Docker Compose file to add email configuration:
nano docker-compose.yml
Add email configuration to the ghost service environment section:
    environment:
      # Existing environment variables...
      mail__transport: SMTP
      mail__options__service: Mailgun # or another provider like SendGrid, Postmark, etc.
      mail__options__host: smtp.mailgun.org
      mail__options__port: 587
      mail__options__auth__user: your-smtp-username
      mail__options__auth__pass: your-smtp-password
Replace the SMTP details with your own email service credentials. This configures Ghost to send emails through your SMTP provider, which is essential for user invitations, password resets, and newsletters.
Save the file and restart the containers:
docker compose up -d
This starts your containers with the updated configuration. Check the Ghost logs for any errors related to the new configuration:
docker compose logs ghost
This shows the logs from the Ghost container. Look for any error messages related to the SMTP configuration or other issues.
Creating a Ghost Admin User
Access your Ghost blog at https://yourdomain.com/ghost to set up your admin account:
- Open a web browser and navigate to https://yourdomain.com/ghost
- Follow the on-screen instructions to create your admin account
- Set a strong, unique password for your admin account
Creating Backup and Restore Scripts
Create a backup script to regularly save your Ghost data:
nano backup-ghost.sh
Add the following content to the script:
#!/bin/bash
set -e

# Run from the directory containing docker-compose.yml so relative paths
# and "docker compose" resolve correctly (important when run from cron)
cd "$(dirname "$0")"

# Create backup directory if it doesn't exist
BACKUP_DIR="./backups"
mkdir -p "$BACKUP_DIR"

# Set timestamp for backup files
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")

# Backup Ghost content
echo "Backing up Ghost content..."
tar -czf "$BACKUP_DIR/ghost_content_$TIMESTAMP.tar.gz" -C ./ghost-content .

# Backup MySQL database (-T disables the TTY so the dump isn't corrupted when redirected)
echo "Backing up MySQL database..."
docker compose exec -T db sh -c 'exec mysqldump -uroot -p"$MYSQL_ROOT_PASSWORD" ghost' > "$BACKUP_DIR/ghost_db_$TIMESTAMP.sql"

# Compress database backup
gzip "$BACKUP_DIR/ghost_db_$TIMESTAMP.sql"

echo "Backup completed: $BACKUP_DIR/ghost_content_$TIMESTAMP.tar.gz and $BACKUP_DIR/ghost_db_$TIMESTAMP.sql.gz"

# Optional: remove backups older than 30 days
find "$BACKUP_DIR" -name "ghost_*" -type f -mtime +30 -delete
Save the file and make it executable:
chmod +x backup-ghost.sh
This allows the script to be executed. You can now run it with ./backup-ghost.sh to create a backup.
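It is worth spot-checking the first backup you create, for example by listing the archive contents and confirming the compressed database dump is not empty:
tar -tzf backups/ghost_content_*.tar.gz | head
ls -lh backups/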
Create a restore script:
nano restore-ghost.sh
Add the following content to the script:
#!/bin/bash
set -e

if [ $# -ne 2 ]; then
    echo "Usage: $0 content_backup.tar.gz database_backup.sql.gz"
    exit 1
fi

CONTENT_BACKUP=$1
DB_BACKUP=$2

# Run from the directory containing docker-compose.yml
cd "$(dirname "$0")"

# Stop containers
docker compose down

# Restore content
echo "Restoring content from $CONTENT_BACKUP..."
rm -rf ./ghost-content/*
tar -xzf "$CONTENT_BACKUP" -C ./ghost-content

# Restore database
echo "Restoring database from $DB_BACKUP..."
gunzip -c "$DB_BACKUP" > ./restore_db.sql
docker compose up -d db
sleep 10 # Wait for the database to start
# -T disables the TTY so the SQL file can be piped in over stdin
docker compose exec -T db sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD" ghost' < ./restore_db.sql
rm -f ./restore_db.sql

# Start Ghost
docker compose up -d

echo "Restoration completed. Check your Ghost blog to verify."
Make the restoration script executable:
chmod +x restore-ghost.sh
Set up automated backups with cron:
crontab -e
Add the following line to run backups daily at 2:00 AM:
0 2 * * * /home/username/ghost-blog/backup-ghost.sh >> /home/username/ghost-blog/backups/backup.log 2>&1
Replace /home/username/ghost-blog/ with the actual path to your Ghost directory. Run the backup script once manually before relying on cron so that the backups directory already exists for the log file.
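To verify the entry end to end, run the same command once by hand (with your actual path) and inspect the log:
/home/username/ghost-blog/backup-ghost.sh >> /home/username/ghost-blog/backups/backup.log 2>&1
tail -n 20 /home/username/ghost-blog/backups/backup.log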
Enhancing Security
Improve your Ghost installation’s security with these additional steps:
Harden the MySQL container. Because the db service publishes no ports, MySQL is already reachable only from the internal Docker network, but a few options tighten it further:
Edit your docker-compose.yml file:
nano docker-compose.yml
Add the following to the db service:
  db:
    # Existing configuration...
    command: --default-authentication-plugin=mysql_native_password --bind-address=0.0.0.0 --mysqlx=0
This disables the MySQL X Protocol and configures the authentication plugin. Save the file and restart the containers:
docker compose down
docker compose up -d
Set up container resource limits to prevent resource exhaustion:
Edit your docker-compose.yml file again:
nano docker-compose.yml
Add resource limits to both services:
services:
  ghost:
    # Existing configuration...
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M

  db:
    # Existing configuration...
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 256M
This prevents containers from consuming excessive resources, which could lead to system instability. Save the file and restart the containers:
docker compose down
docker compose up -d
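You can confirm the limits are applied and watch actual usage with Docker’s built-in stats view:
docker stats --no-stream
The MEM USAGE / LIMIT column should reflect the 1G caps you configured.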
Maintenance Procedures
Regularly update your Ghost installation to receive security patches and new features:
Check for new Ghost versions:
docker pull ghost:latest
This pulls the latest Ghost image. The output will indicate if a newer version is available.
Create a backup before updating:
./backup-ghost.sh
This creates a backup of your content and database before updating.
Update Ghost and other containers:
docker compose pull
docker compose up -d
This pulls the latest versions of all images in your docker-compose.yml file and recreates the containers.
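Old image versions accumulate on disk after repeated updates. Once you have confirmed the new containers are healthy, you can reclaim the space:
docker image prune -f
This removes only dangling (untagged) images; images still in use are untouched.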
Monitor container logs for any issues after updating:
docker compose logs -f
This shows real-time logs from all containers. Press Ctrl+C to exit the log view.
Troubleshooting Common Issues
Ghost Container Won’t Start
Check the container logs:
docker compose logs ghost
Look for error messages that might indicate the problem. Common issues include:
- Database connection problems: Verify MySQL is running and credentials are correct
- Port conflicts: Make sure nothing else is using port 2368
- File permission issues: Run
sudo chown -R 1000:1000 ghost-content
Database Container Won’t Start
Check the database container logs:
docker compose logs db
Common MySQL issues include:
- Data directory permission problems: Run
sudo chown -R 999:999 mysql-data
- Corrupted data files: Restore from a backup or initialize with a fresh database
- Insufficient system resources: Check your system’s available memory and disk space (see the commands below)
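For the last point, these commands give a quick picture of memory and disk headroom:
free -h
df -h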
Nginx Shows 502 Bad Gateway
This usually means Nginx can’t connect to Ghost. Verify Ghost is running:
docker compose ps
Check if Ghost is accessible locally:
curl http://localhost:2368
This tests if Ghost is responding on its default port. If this works but Nginx shows 502, check your Nginx configuration.
Email Delivery Problems
Test the email configuration. The Ghost Admin API requires authentication, so a plain curl request to its mail test endpoint will be rejected; a simpler end-to-end check is to trigger a transactional email from Ghost Admin, for example by inviting a staff user to one of your own email addresses or by using the password reset link on the sign-in page. Afterwards, check Ghost’s logs for SMTP errors:
docker compose logs ghost | grep -i mail
This searches logs for mail-related entries. Verify your SMTP credentials and server settings.
SSL Certificate Issues
If your SSL certificate isn’t working, check the certificate files:
sudo ls -la /etc/letsencrypt/live/yourdomain.com/
Test certificate renewal:
sudo certbot renew --dry-run
Verify Nginx is using the correct certificate paths in your configuration file.
Known Limitations and Workarounds
Memory Usage
Ghost can consume significant memory, especially with many images or a large database. If you experience performance issues:
- Increase the container memory limit in docker-compose.yml
- Consider using a swapfile for additional virtual memory:
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
To make the swap permanent, add it to fstab:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
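Confirm the swap space is active:
sudo swapon --show
free -h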
Upload Limits
If you can’t upload large files to Ghost, check your Nginx client_max_body_size setting:
sudo nano /etc/nginx/sites-available/yourdomain.com
Increase the value if needed:
client_max_body_size 100M;
Save and restart Nginx:
sudo nginx -t
sudo systemctl restart nginx
Conclusion
You now have a fully functional Ghost CMS installation running on Docker with proper security measures in place. Regular maintenance, backups, and security updates will ensure your Ghost blog remains reliable and secure. By following this guide, you’ve created a robust publishing platform that’s easy to manage and scale as your needs grow.
Remember to:
- Regularly update Ghost and other components
- Monitor system resource usage
- Maintain regular backups
- Check for security advisories related to Ghost, Docker, and Ubuntu
With proper maintenance, your Ghost blog will provide a stable and secure platform for your content for years to come.