Building a small server for photography website

Here are my notes on building a small web server with Ubuntu Server, Docker, MySQL, Apache, Nginx, Naxsi, and Piwigo. Hopefully some people will find them useful. My aim is the following architecture:

Server Architecture

This has been an interesting learning project for me. If you find any errors or glaring security holes, please leave a comment. My intention was to build a server that can face the real Internet. However, security is not binary, and I will still keep it behind a firewall and inside a DMZ; see for example DMZ setup with router Asus RT-N56U. Everything is also ready for a quick rebuild in case of a compromise.

I try to provide step-by-step instructions but there are places where you need to provide your own values. Here are some examples:

  • 192.168.2.xxx would be your webserver IP address, e.g. 192.168.2.99
  • {my-mysql-root-password} needs to be replaced with a strong password. The braces are part of the placeholder, i.e. they will not appear in the final text.

Edit:

I have updated the instructions below to include HTTPS traffic. In fact, this configuration redirects all unencrypted traffic to encrypted. This is important for protecting the integrity of your content: it prevents an attacker from easily injecting exploit code into your webpage while your visitor’s browser is loading it. There are other good reasons to encrypt all communications; see for example this article.

1. Install base O/S: Ubuntu Server 14.04 LTS

See http://www.ubuntu.com/download/server

Installed with pretty much all defaults.
Select “Install security updates automatically”.
No extra packages installed during the installation process.

Configuration:

sudo apt-get install openssh-server
sudo ufw allow ssh

At this point I have set up a static IP address 192.168.2.xxx for my server. I have also set up a second IP address, my public-facing static IP, to enable the server to talk to itself via the public IP; the Piwigo application does this in some circumstances. Check out the section Server configuration in DMZ setup with router Asus RT-N56U.

Now I am able to ssh into the server from my workstation:

ssh 192.168.2.xxx

2. O/S configuration and tweaks

sudo apt-get install unzip smartmontools tree

My hard drive is an SSD, so I want to enable SSD TRIM. Note that this shouldn’t be required on newer versions of Ubuntu; see also http://askubuntu.com/questions/18903/how-to-enable-trim

cat /etc/cron.weekly/fstrim
sudo sed -i 's/exec fstrim-all/exec fstrim-all --no-model-check/g' /etc/cron.weekly/fstrim

Other optimizations:

a) Swappiness

uname -a

My current kernel version is 3.16.0-37-generic. For kernels 3.5 and newer the recommended value is swappiness=1 (on these kernels a value of 0 can invite the OOM killer instead of swapping); for older kernels use 0. Checking that we currently use the default of 60:

sysctl vm.swappiness

Setting it to 1:

sudo sysctl -w vm.swappiness=1
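Note that sysctl -w only changes the value until the next reboot. To make the setting permanent, add the value you chose to /etc/sysctl.conf, for example:

```
# Keep swapping to a minimum (use the value appropriate for your kernel)
vm.swappiness = 1
```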

b) Disable access time logging

sudo nano -w /etc/fstab

Change “errors=remount-ro” to “noatime,errors=remount-ro”. Save the file and reboot the server.

c) Monitoring the expected SSD life-time

sudo smartctl -d ata -A /dev/sda

At ID# 233 you can see the Media_Wearout_Indicator. This value starts at 100, and when it drops below 10 you should start to worry. Note that different manufacturers may use different names and attribute numbers for this indicator.
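If you want to watch just this number, for example from a monitoring script, the attribute table can be filtered with awk. A minimal sketch, assuming your drive reports the indicator as attribute 233 (verify against your own smartctl output first):

```shell
# Print the normalized value (4th column) of SMART attribute 233.
# Attribute numbers and names vary by manufacturer.
sudo smartctl -d ata -A /dev/sda | awk '$1 == 233 { print $4 }'
```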

3. Install Docker

Follow notes in http://docs.docker.com/installation/ubuntulinux/

I was not able to find a Docker installer with publicly available hashes. The suggested installation command is:

wget -qO- https://get.docker.com/ | sh

To make it a little more secure, I will download the script and review it before running it:

wget -Oinstall_docker.sh https://get.docker.com/
less install_docker.sh
chmod +x install_docker.sh
./install_docker.sh
sudo usermod -aG docker $USER

Log out and log back in. Now I don’t need to use sudo to run the docker client. Test that everything works:

docker run --rm -it ubuntu:14.04 bash
exit

4. Create data directory structure

sudo mkdir -p /data/mysql/db
sudo mkdir -p /data/mysql/conf/sample
sudo mkdir -p /data/mysql/logs
sudo mkdir -p /data/mysql/db_backups
sudo mkdir -p /data/web/www
sudo mkdir -p /data/web/conf/sample
sudo mkdir -p /data/web/build
sudo mkdir -p /data/web/logs
sudo mkdir -p /data/proxy/conf/sample
sudo mkdir -p /data/proxy/build
sudo mkdir -p /data/proxy/logs

Check the directory tree:

tree /data

5. Create MySQL container

Caveat: These instructions use the MySQL and Ubuntu images from the official Docker repository. Unfortunately, as of the time of writing, the delivery of images from the repository is insecure: it does not verify cryptographic hashes of the actual content or similar. If you want to prevent a potential MITM attack during an image pull, you need to use an alternative method, for example building your own base images. This tutorial does not cover that.

Create an option file with the MySQL root password, to be used during backups:

cd /data/mysql/conf
sudo touch root.cnf
sudo chmod 600 root.cnf
sudo nano -w root.cnf

Content of the file (don’t forget to replace “{my-mysql-root-password}” with a strong password):

[client]
password={my-mysql-root-password}
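The docker run commands below read this password back with grep and cut. You can check that the extraction works; it should print just the password, with no key name:

```shell
# Print the second '='-delimited field of the password line
# (note this breaks if the password itself contains '=')
sudo grep password /data/mysql/conf/root.cnf | cut -d= -f2
```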

Extract original config file from mysql container:

docker run --name mysql -v /data/mysql/db:/var/lib/mysql -v /data/mysql/logs:/var/log/mysql -e MYSQL_ROOT_PASSWORD=`sudo grep password /data/mysql/conf/root.cnf | cut -d= -f2` -d mysql:5.7
sudo docker cp mysql:/etc/mysql/my.cnf /data/mysql/conf
cd /data/mysql/conf
sudo cp my.cnf sample
sudo nano -w my.cnf

Add 3 lines to the [mysqld] section (this resolves compatibility issues with Piwigo 2.7.4):

# equivalent of: SET GLOBAL sql_mode = 'NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
# check with: SELECT @@GLOBAL.sql_mode global;
sql-mode = NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION

Stop and remove this container:

docker stop mysql
docker rm mysql

Re-create it with the configuration file mounted read-only and a number of other options:

docker run --name mysql \
-v /data/mysql/db:/var/lib/mysql \
-v /data/mysql/conf/my.cnf:/etc/mysql/my.cnf:ro \
-v /data/mysql/conf/root.cnf:/root/root.cnf:ro \
-v /data/mysql/logs:/var/log/mysql \
-v /data/mysql/db_backups:/mnt/db_backups \
-e MYSQL_ROOT_PASSWORD=`sudo grep password /data/mysql/conf/root.cnf | cut -d= -f2` \
-d --restart=always mysql:5.7

6. Build Apache + PHP container

sudo chown $USER /data/web/build
cd /data/web/build
touch Dockerfile
nano -w Dockerfile

Give the Dockerfile the following content. Replace {my-docker-name} with your Docker maintainer name; if you don’t intend to contribute to a Docker image repository, you can just make something up. You may also need to change the timezone to match the timezone of your Ubuntu Server host (/usr/share/zoneinfo/Pacific/Auckland). This will be important later when setting up Fail2ban:

FROM ubuntu:14.04

MAINTAINER {my-docker-name}

# Install the relevant packages:
# Apache, PHP, and PHP modules required by Piwigo
# (you can add package mysql-client for testing connectivity to the database)
RUN apt-get update && apt-get install -y apache2 libapache2-mod-php5 php5-imagick php5-mysql php5-gd

# Enable the php mod we just installed, also remote IP address module
RUN a2enmod php5 remoteip

# Set the same timezone as my host server
RUN ln -sf /usr/share/zoneinfo/Pacific/Auckland /etc/localtime

# expose port 80 so that our webserver can respond to requests
EXPOSE 80

# Manually set up the apache environment variables
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid

# Start apache2 service
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

Build the container image:

docker build -t web .

Get sample configs:

docker run --name web -it web bash
exit
sudo docker cp web:/etc/apache2/apache2.conf /data/web/conf/
sudo docker cp web:/etc/php5/apache2/php.ini /data/web/conf/
cd /data/web/conf/
sudo cp apache2.conf sample
sudo cp php.ini sample

Remove the container now:

docker rm web

Edit Apache configuration:

sudo nano -w /data/web/conf/apache2.conf

a) Disable directory browsing in Apache

Search for the line Options Indexes FollowSymLinks and remove Indexes from it.

b) Hide Apache version number in error pages

As per http://www.techiecorner.com/2007/how-to-hide-apache2-version-number-in-error-page/ add 3 new lines at the end of apache2.conf:

# Hide version number in error pages
ServerTokens Prod
ServerSignature Off

c) Show real client IP in Apache logs

As per http://syslint.com/syslint/replacing-mod_rpaf-with-mod_remoteip-in-apache-2-4-nginx-real_ip-problem-solution/
Note: 172.17.0.0/16 covers the whole Docker container IP range; our containers get their IP addresses assigned randomly within this range. Add another 3 lines to apache2.conf:

# Log real IP address coming in via Nginx proxy
RemoteIPHeader X-Real-IP
RemoteIPInternalProxy 172.17.0.0/16

Then find and edit the following line, replacing %h with %a:
ORIGINAL:

LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined

EDITED:

LogFormat "%a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined

Now start the container:

docker run --name web --link mysql:mysql \
-v /data/web/www:/var/www/html \
-v /data/web/conf/apache2.conf:/etc/apache2/apache2.conf:ro \
-v /data/web/conf/php.ini:/etc/php5/apache2/php.ini:ro \
-v /data/web/logs:/var/log/apache2 \
-d --restart=always web

Note: Running nano as root (using sudo) changed the ownership of the .nano_history file. To avoid error messages when running nano as your normal user again, restore the ownership:

sudo chown $USER:$USER ~/.nano_history

7. Build Nginx container (setup as a reverse proxy)

Nginx configs inspired by:

  • Web application firewall (WAF) – Naxsi. Probably better in my case than mod_security (the latter needs manual compilation of Nginx from source and requires a complex setup).
  • SSL setup (covered in its own section below)

sudo chown $USER /data/proxy/build
cd /data/proxy/build
touch Dockerfile
nano -w Dockerfile

Content of Dockerfile (same comments as in the previous case apply):

FROM ubuntu:14.04

MAINTAINER {my-docker-name}

# Install the relevant packages:
# Nginx with the Naxsi web application firewall compiled in
RUN apt-get update && apt-get install -y nginx-naxsi

# Set the same timezone as my host server
RUN ln -sf /usr/share/zoneinfo/Pacific/Auckland /etc/localtime

# expose ports 80 and 443 so that our proxy server can respond to requests
EXPOSE 80 443

# Start nginx
CMD ["nginx", "-g", "daemon off;"]

Build the container image:

docker build -t proxy .

Get sample configs:

docker run --name proxy -it proxy bash
exit
sudo docker cp proxy:/etc/nginx/nginx.conf /data/proxy/conf/
sudo docker cp proxy:/etc/nginx/naxsi_core.rules /data/proxy/conf/
sudo docker cp proxy:/etc/nginx/naxsi.rules /data/proxy/conf/
cd /data/proxy/conf/
sudo cp nginx.conf sample
sudo cp naxsi_core.rules sample
sudo cp naxsi.rules sample

Remove the container now:

docker rm proxy

Comment out the LearningMode setting in naxsi.rules:

sudo nano -w naxsi.rules
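With LearningMode disabled, Naxsi actually blocks matching requests instead of only logging them. After the edit, the relevant line in naxsi.rules should look like this:

```
#LearningMode;
```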

Replace content of nginx.conf:

sudo nano -w nginx.conf

Replace it with the following text (change the argument of the server_name directive, and the redirect target in the first server block, to your domain name):

user www-data;
worker_processes 4;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
 worker_connections 1024;
}

http {
 server_tokens off;
 include /etc/nginx/mime.types;
 default_type application/octet-stream;

 log_format main '$remote_addr - $remote_user [$time_local] "$request" '
 '$status $body_bytes_sent "$http_referer" '
 '"$http_user_agent" "$http_x_forwarded_for"';

 access_log /var/log/nginx/access.log main;

 sendfile on;
 keepalive_timeout 70;
 ssl_session_cache shared:SSL:10m;
 ssl_session_timeout 10m;
 # Gzip on
 gzip on;
 gzip_min_length 1100;
 gzip_buffers 4 32k;
 gzip_types text/plain application/x-javascript text/xml text/css;

 # Cache most accessed static files
 open_file_cache max=10000 inactive=10m;
 open_file_cache_valid 2m;
 open_file_cache_min_uses 1;
 open_file_cache_errors on;

 # nginx-naxsi config
 include /etc/nginx/naxsi_core.rules;

server {
 listen 80;
 charset utf-8;
 return 301 https://yourwebsite.net$request_uri;
}

server {
 listen 443 ssl;
 charset utf-8;
 server_name yourwebsite.net;

 ssl_certificate /etc/ssl/public.crt;
 ssl_certificate_key /etc/ssl/private.key;
 ssl_dhparam /etc/ssl/dhparam.pem;
 ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
 ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4';
 ssl_prefer_server_ciphers on;
 add_header Strict-Transport-Security max-age=15768000;
 location ~ /\.ht {
 deny all;
 }

 location ~* \.(gif|jpg|jpeg|png|ico|wmv|3gp|avi|mpg|mpeg|mp4|flv|mp3|mid|js|css|html|htm|wml)$ {
 include /etc/nginx/naxsi.rules;
 root /var/www/html/;
 expires 365d;
 }

 # Various admin functions, too many whitelisting rules...
 set $naxsi_flag_enable 1;
 if ($uri = "/photos/admin.php") {
 set $naxsi_flag_enable 0;
 }

 location / {
 include /etc/nginx/naxsi.rules;

 # white list rules
 # Home page > login to Piwigo
 BasicRule wl:1015 "mz:$URL:/photos/identification.php|BODY|$BODY_VAR:password";
 BasicRule wl:1315 "mz:$URL:/photos/identification.php|BODY|$BODY_VAR:redirect";

 # trying to go into authenticated location without being logged in
 BasicRule wl:1315 "mz:$URL:/photos/identification.php|ARGS|$ARGS_VAR:redirect";

 # Administration > Users > Manage
 BasicRule wl:1000 "mz:$URL:/photos/admin/user_list_backend.php|BODY|NAME";

 # Upload new photos
 BasicRule wl:2 "mz:$URL:/photos/ws.php|BODY";

 # Show photo metadata
 BasicRule wl:12 "mz:$URL:/photos/picture.php|ARGS";

 # Add a comment
 BasicRule wl:1009 "mz:$URL:/photos/picture.php|BODY|$BODY_VAR:website_url";
 BasicRule wl:1100 "mz:$URL:/photos/picture.php|BODY|$BODY_VAR:website_url";

 # Delete a comment
 BasicRule wl:1000 "mz:$URL:/photos/picture.php|ARGS|NAME";

 # Create new album in add photos function
 BasicRule wl:1010 "mz:$URL:/photos/ws.php|BODY";
 BasicRule wl:1011 "mz:$URL:/photos/ws.php|BODY";
 BasicRule wl:1308 "mz:$URL:/photos/ws.php|BODY";
 BasicRule wl:1309 "mz:$URL:/photos/ws.php|BODY";

 # Tools menu (Admin Tools plugin)
 BasicRule wl:11 "mz:$URL:/photos/ws.php|BODY";

 client_max_body_size 10m;
 client_body_buffer_size 128k;

 proxy_send_timeout 90;
 proxy_read_timeout 90;
 proxy_buffer_size 128k;
 proxy_buffers 4 256k;
 proxy_busy_buffers_size 256k;
 proxy_temp_file_write_size 256k;
 proxy_connect_timeout 30s;

 proxy_pass http://web;

 proxy_set_header Host $host;
 proxy_set_header X-Real-IP $remote_addr;
 proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
 }

 location /RequestDenied {
 return 500;
 }
}
}

SSL setup

Edit: The StartSSL root certificate will be removed from all major browsers’ trust stores; you can read about it for example here. I would recommend using a different CA. One good free option is Let’s Encrypt.

Obtain your SSL certificate, for example a free one from https://www.startssl.com/ (you can follow this how-to article up to the point where you have your private and public key files and the signing chain). I got one with a 2048-bit key, valid for 12 months. Make sure you have good backups of all the files: the browser certificate including the private key file (.p12), your private key, the signed certificate, and the signing chain. Also make sure you keep your private keys safe.

Decrypt your private key:

openssl rsa -in ssl.key -out private.key

Concatenate public key with signing certificate:

cat ssl.crt sub.class1.server.ca.pem ca.pem > public.crt
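Before copying the files over, it is worth verifying that the concatenated certificate actually belongs to the private key. A quick sketch comparing the RSA moduli (openssl x509 reads the first certificate in public.crt, i.e. the server certificate):

```shell
# Both commands must print the same digest; a mismatch means the
# certificate and key do not belong together.
openssl x509 -noout -modulus -in public.crt | openssl md5
openssl rsa -noout -modulus -in private.key | openssl md5
```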

Copy following files onto the server into /data/proxy/conf/, owned by user root:

  • public.crt
  • private.key

Now back on the server (the last command will take a while to complete):

cd /data/proxy/conf/
sudo chmod 400 private.key
sudo chown www-data private.key
sudo openssl dhparam -out dhparam.pem 2048

It is important to keep your private key as secure as possible; only the webserver should be able to read it:

$ ll public.crt private.key dhparam.pem
-rw-r--r-- 1 root root 424 Jul 13 21:21 dhparam.pem
-r-------- 1 www-data root 1679 Jul 13 20:06 private.key
-rw-r--r-- 1 root root 7163 Jul 13 20:06 public.crt

Start the container. Note that the SSL certificate, key, and DH parameters are mounted into the container at the paths the Nginx config expects, and that port 443 is published alongside port 80; without these, the HTTPS setup above cannot work:

docker run --name proxy --link web:web \
-v /data/proxy/conf/nginx.conf:/etc/nginx/nginx.conf:ro \
-v /data/proxy/conf/naxsi_core.rules:/etc/nginx/naxsi_core.rules:ro \
-v /data/proxy/conf/naxsi.rules:/etc/nginx/naxsi.rules:ro \
-v /data/proxy/conf/public.crt:/etc/ssl/public.crt:ro \
-v /data/proxy/conf/private.key:/etc/ssl/private.key:ro \
-v /data/proxy/conf/dhparam.pem:/etc/ssl/dhparam.pem:ro \
-v /data/proxy/logs:/var/log/nginx \
-v /data/web/www:/var/www/html:ro \
-d -p 80:80 -p 443:443 --restart=always proxy

Check that the web application firewall works. In your browser, go to this URL (using your server IP): http://192.168.2.xxx/?a=<>. You should see error 500, and a line like this in your Nginx error log:

2015/06/09 10:06:12 [error] 14#0: *5 NAXSI_FMT: ip={your-client-ip}&server={your-server-ip}&uri=/&learning=0&total_processed=4&total_blocked=2&zone0=ARGS&id0=1302&var_name0=a, client: {your-client-ip}, server: {your-website}, request: "GET /?a=%3C%3E HTTP/1.1", host: "{your-server-ip}"

Note: I have included some white-listing rules, and I disable Naxsi for most admin functions because I wasn’t able to get the white-listing to work the way I needed there ($uri = “/photos/admin.php”). You can write your own white-listing rules as per the documentation at https://github.com/nbs-system/naxsi/wiki/white-lists

8. Database setup for Piwigo

Connect to the MySQL container via a shell, then connect to the MySQL instance and create a new user and database. Replace {my-piwigo-db-password} with a strong password:

docker exec -it mysql bash
mysql -u root -p
Enter password: {my-mysql-root-password}
mysql> 
create database db1;
CREATE USER 'piwigo'@'%' IDENTIFIED BY '{my-piwigo-db-password}';
grant all privileges on db1.* to piwigo@'%';
SHOW GRANTS FOR piwigo@'%';
exit
exit

9. Download and install Piwigo

Follow notes in http://piwigo.org/basics/installation_manual

cd /data/web/www
sudo wget -O piwigo.zip http://piwigo.org/download/dlcounter.php?code=latest
sudo unzip piwigo.zip
sudo mv piwigo photos
sudo mv piwigo.zip ..
sudo chown -hR www-data:www-data photos

Point your browser at (use your server IP) http://192.168.2.xxx/photos/

Database configuration
Host: mysql
User: piwigo
Password: {my-piwigo-db-password}
Database name: db1
Database tables prefix: piwigo_

Administration configuration
Username: Admin
Password: {my-piwigo-admin-password}
Email address: {my-email-address}

10. Configure Piwigo

cd /data/web/www/photos/local/config
touch config.inc.php
sudo chown www-data:www-data config.inc.php
sudo nano -w config.inc.php

Give this file the following content:


Go to the application – http://192.168.2.xxx/photos/ – and log in as Admin

Go to Administration (http://192.168.2.xxx/photos/admin.php), Plugins > Manage > Other plugins available. Search for Exif View. Click Install, then Activate. Note: This plugin will convert EXIF data into human readable form.

In a similar way install and activate plugin Log Failed Logins. It will add a new entry into Plugins menu on the left. Click on it and set Log filename to /var/log/apache2/piwigoFailedLogins.log and click Submit. We will use this later on in Fail2ban configuration.

The above are just some examples of how you can configure and customize Piwigo.

11. Database backups

Inspired by http://www.backuphowto.info/how-backup-mysql-database-automatically-linux-users. For root password security considerations, see also: http://stackoverflow.com/questions/6861355/mysqldump-launched-by-cron-and-password-security

crontab -e

Add line:

15 2 * * * docker exec mysql bash -c "mysqldump --defaults-extra-file=/root/root.cnf -u root --all-databases | gzip > /mnt/db_backups/database_`date '+\%Y\%m\%d'`.sql.gz"

This runs a backup of all databases at 2:15 am every night. The resulting dump file is gzipped and stored in /mnt/db_backups inside the mysql container, which is mounted from the server’s /data/mysql/db_backups. The filename contains year, month, and day to make the files unique. You will need to implement some process for purging old backup files, either manual or automatic.
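As a sketch of such a purging process (assuming that keeping 30 days of dumps is enough for you), a daily cron entry on the server could run:

```shell
# Delete dumps last modified more than 30 days ago
find /data/mysql/db_backups -name '*.sql.gz' -mtime +30 -delete
```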

12. Synchronisation of server /data to NAS

This is our current directory structure (tree -d -L 4 /data):

/data
├── mysql
│   ├── conf
│   │   └── sample
│   ├── db
│   │   └── ...
│   ├── db_backups
│   └── logs
├── proxy
│   ├── build
│   ├── conf
│   │   └── sample
│   └── logs
└── web
    ├── build
    ├── conf
    │   └── sample
    ├── logs
    └── www
        └── photos
            └── ...

I want to synchronize everything from /data to my NAS except following:

  • /data/mysql/db (i.e. live database files)
  • /data/web/www (i.e. static website content)

But I want to include:

  • /data/web/www/photos (i.e. Piwigo files)

Implementation

I like to minimize the server’s ability to reach the outside world and my internal network; this limits the damage if/when the server gets compromised. For this reason the synchronisation is done from the NAS side, using rsync over SSH and logging in with a cryptographic key. My NAS is a QNAP, so you will need to adapt these instructions to fit your situation.

Generate private-public key pair on the NAS (confirm all defaults, empty passphrase):

ssh-keygen

Copy public key only onto server (using your normal username and server IP):

scp /share/homes/admin/.ssh/id_rsa.pub {my-username}@192.168.2.xxx:/tmp

On the server:

sudo nano -w /etc/ssh/sshd_config

Make sure this line is present (no change required in my case):

PermitRootLogin without-password

Add NAS public key into accepted keys and set permissions:

sudo mkdir /root/.ssh
sudo chmod 700 /root/.ssh
sudo mv /tmp/id_rsa.pub /root/.ssh/authorized_keys
sudo chown root:root /root/.ssh/authorized_keys
sudo chmod 600 /root/.ssh/authorized_keys
sudo service ssh reload

Back on the NAS – test that I can now SSH to the server without a password:

ssh -i /share/homes/admin/.ssh/id_rsa root@192.168.2.xxx
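Since this key logs in as root, you may optionally want to restrict it to connections coming from the NAS. This is done by prefixing the key entry in /root/.ssh/authorized_keys with a from= option; a sketch of the format, where 192.168.2.yyy stands for your NAS IP address and the key text is abbreviated:

```
from="192.168.2.yyy" ssh-rsa AAAA... admin@nas
```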

I want to create a one-off copy of all files (including a cold backup of my database):

a) Stop all docker containers on the server

docker stop proxy web mysql

b) Run rsync on the NAS (/share/webserver is the location of the copy on the NAS):

rsync -avh --delete --stats -e 'ssh -i /share/homes/admin/.ssh/id_rsa' root@192.168.2.xxx:/data/ /share/webserver/

c) Start all docker containers on the server again

docker start mysql web proxy

Now create rsync rules file on the NAS:

touch /share/webserver/rsync_rules.txt
nano -w /share/webserver/rsync_rules.txt

And copy/paste the following lines into it:

- /mysql/db/**
+ /web/www/photos/***
- /web/www/*
- /rsync_rules.txt

Now test the whole rsync command with the rules file used:

rsync -avh --delete --stats --filter='merge /share/webserver/rsync_rules.txt' -e 'ssh -i /share/homes/admin/.ssh/id_rsa' root@192.168.2.xxx:/data/ /share/webserver/

If it’s all working, set it up as a cron job; as an example, mine runs every 4 hours at half past. Note: If you set it up using “crontab -e” it will only survive until the next reboot. This is a QNAP-specific thing, see http://wiki.qnap.com/wiki/Add_items_to_crontab:

  1. Edit /etc/config/crontab and add your custom entry.
  2. Run ‘crontab /etc/config/crontab’ to load the changes.
  3. Restart the cron service.

nano -w /etc/config/crontab

Add this line at the end:

30 2,6,10,14,18,22 * * * rsync -avh --delete --stats --filter='merge /share/webserver/rsync_rules.txt' -e 'ssh -i /share/homes/admin/.ssh/id_rsa' root@192.168.2.xxx:/data/ /share/webserver/

Then load the changes and restart the cron service:

crontab /etc/config/crontab
/etc/init.d/crond.sh restart

13. Log rotation – Docker, Apache, and Nginx

See https://sandro-keil.de/blog/2015/03/11/logrotate-for-docker-container/

cd /etc/logrotate.d/
sudo touch docker-logs
sudo nano -w docker-logs

Copy/paste the content:

/var/lib/docker/containers/*/*.log {
 rotate 10
 size=1M
 compress
 missingok
 copytruncate
}

Create a second configuration for the container application logs mounted under /data:

sudo touch docker-containers-logs
sudo nano -w docker-containers-logs

Copy/paste the content:

/data/*/logs/*.log {
 rotate 10
 size=1M
 compress
 missingok
 copytruncate
}

Test the configurations:

sudo logrotate -f docker-logs
sudo logrotate -f docker-containers-logs
ls -l /data/web/logs/
ls -l /data/proxy/logs/

14. Setup centralized logging to remote syslog server

Having unspoiled syslog content can help us understand what happened in case of a compromise. My centralized syslog server runs on my NAS. To get my server to log there:

sudo nano -w /etc/rsyslog.d/50-default.conf

add at the end of the file (replace {syslog-server-ip} with your syslog server IP):

# send everything to my syslog server
# make sure UDP port 514 is open in the router firewall
*.* @{syslog-server-ip}:514

Now restart the syslog service:

sudo service rsyslog restart

15. Setup Fail2ban for SSH, Nginx/Naxsi, and Piwigo

At the moment I am not planning to open SSH to outside traffic; however, it might be useful one day. And I definitely want to implement the Nginx/Naxsi and Piwigo parts.

sudo apt-get update
sudo apt-get install fail2ban
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo nano -w /etc/fail2ban/jail.local

a) Change this parameter in the [DEFAULT] section (how many seconds to ban an IP address for):

bantime = 1800

b) Enable the [nginx-http-auth] section, change logpath, and add a chain directive. The chain directive needs to be set to FORWARD because our Nginx runs inside a Docker container, i.e. inside a different network:

[nginx-http-auth]
enabled = true
filter = nginx-http-auth
port = http,https
#logpath = /var/log/nginx/error.log
logpath = /data/proxy/logs/error.log
chain = FORWARD

c) Add new sections for Naxsi errors and Piwigo login failures:

[nginx-naxsi]
enabled = true
port = http,https
filter = nginx-naxsi
logpath = /data/proxy/logs/error.log
chain = FORWARD

[piwigo]
enabled = true
port = http,https
filter = piwigo
logpath = /data/web/logs/piwigoFailedLogins.log
chain = FORWARD

d) Now create new configuration files for the Naxsi and Piwigo filters. Note that each failregex contains the <HOST> tag, which tells Fail2ban where to find the offending IP address in the log line:

sudo nano -w /etc/fail2ban/filter.d/nginx-naxsi.conf

[INCLUDES]
before = common.conf

[Definition]
failregex = NAXSI_FMT: ip=<HOST>
ignoreregex =

sudo nano -w /etc/fail2ban/filter.d/piwigo.conf

[INCLUDES]
before = common.conf

[Definition]
failregex = ip=<HOST>
ignoreregex =
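To sanity-check a filter, you can run the fail2ban-regex tool against the live log (fail2ban-regex /data/proxy/logs/error.log /etc/fail2ban/filter.d/nginx-naxsi.conf). A simpler first check is counting the lines the Naxsi filter will be matching against:

```shell
# Each NAXSI_FMT line in the error log is one block event that the
# nginx-naxsi jail can act upon
grep -c 'NAXSI_FMT: ip=' /data/proxy/logs/error.log
```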

e) Create the Piwigo logfile and give it appropriate owner and group (so that Piwigo can write into it):

sudo touch /data/web/logs/piwigoFailedLogins.log
sudo chown www-data:www-data /data/web/logs/piwigoFailedLogins.log

f) Change the Fail2ban logging location to syslog:

sudo nano -w /etc/fail2ban/fail2ban.conf

#logtarget = /var/log/fail2ban.log
logtarget = SYSLOG

Restart the service:

sudo service fail2ban stop
sudo service fail2ban start

Test by triggering some events until your IP gets banned (trigger Naxsi rules, Piwigo failed logins, etc.). Check the logs and iptables:

tail -f /var/log/syslog
tail -f /data/proxy/logs/error.log
sudo iptables -L -v -n --line-numbers

Note: Make sure the time zones in your containers are identical to the server hosting them; otherwise Fail2ban won’t work correctly (most likely it won’t ban anything).

16. Disable IPv6

I have disabled IPv6 on the router. I will disable it on the server as well just to prevent possible noise in logs. See http://askubuntu.com/questions/440649/how-to-disable-ipv6-in-ubuntu-14-04

sudo nano -w /etc/sysctl.conf

Add 4 lines at the end of the file:

# Disable IPv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

Activate and check the change – the second command should return value “1”:

sudo sysctl -p
cat /proc/sys/net/ipv6/conf/all/disable_ipv6

17. Pulling new versions of base images and rebuilding images

In order to apply the latest patches to the Docker repository images (mysql, ubuntu), pull new ones, rebuild the web and proxy images, and then drop and re-create the containers:

docker pull mysql:5.7
docker pull ubuntu:14.04
cd /data/web/build
docker build -t web .
cd /data/proxy/build
docker build -t proxy .
docker stop proxy web mysql
docker rm proxy web mysql
docker run --name mysql \
-v /data/mysql/db:/var/lib/mysql \
-v /data/mysql/conf/my.cnf:/etc/mysql/my.cnf:ro \
-v /data/mysql/conf/root.cnf:/root/root.cnf:ro \
-v /data/mysql/logs:/var/log/mysql \
-v /data/mysql/db_backups:/mnt/db_backups \
-e MYSQL_ROOT_PASSWORD=`sudo grep password /data/mysql/conf/root.cnf | cut -d= -f2` \
-d --restart=always mysql:5.7
docker run --name web --link mysql:mysql \
-v /data/web/www:/var/www/html \
-v /data/web/conf/apache2.conf:/etc/apache2/apache2.conf:ro \
-v /data/web/conf/php.ini:/etc/php5/apache2/php.ini:ro \
-v /data/web/logs:/var/log/apache2 \
-d --restart=always web
docker run --name proxy --link web:web \
-v /data/proxy/conf/nginx.conf:/etc/nginx/nginx.conf:ro \
-v /data/proxy/conf/naxsi_core.rules:/etc/nginx/naxsi_core.rules:ro \
-v /data/proxy/conf/naxsi.rules:/etc/nginx/naxsi.rules:ro \
-v /data/proxy/conf/public.crt:/etc/ssl/public.crt:ro \
-v /data/proxy/conf/private.key:/etc/ssl/private.key:ro \
-v /data/proxy/conf/dhparam.pem:/etc/ssl/dhparam.pem:ro \
-v /data/proxy/logs:/var/log/nginx \
-v /data/web/www:/var/www/html:ro \
-d -p 80:80 -p 443:443 --restart=always proxy

Clean up old containers and images using the following commands if required. Read the Docker documentation for details:

docker ps -a
docker rm {container-id-to-delete}
docker images
docker rmi {image-id-to-delete}
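Repeated rebuilds also leave behind untagged (“dangling”) image layers. A sketch for removing them in one go; xargs -r skips the rmi call entirely when there is nothing to delete:

```shell
# List IDs of dangling images and remove them
docker images -q -f dangling=true | xargs -r docker rmi
```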

Conclusion

I have built this server on a Gigabyte Brix (with some RAM and an SSD) to keep the ongoing power consumption and noise low. I am very happy with the result.
