Using Docker to Run Everything From a Single Server
Table of Contents
- 1. Abstract
- 2. The Problem
- 3. Using an HTTP Reverse Proxy
- 4. Using Virtual Machines
- 5. Using Docker
- 6. Implementing the Docker solution: WSProxy
- 7. WSProxy: Serving a Remote Domain Through an SSH-Tunnel
- 8. WSProxy: Managing the HTTPS Certificates
- 9. Case Study 1: Installing Moodle
- 10. Case Study 2: Installing ShellInBox
- 11. Case Study 3: Installing SchoolTool
- 12. Managing Your Own Domain Name Server
- 13. Conclusion
- 14. Future Work
- 15. References
1 Abstract
Often it is desirable to install several websites or web applications on the same server, in order to minimize costs. However this is not so easy and presents some challenges. We will discuss several ways of accomplishing it, and argue that using Docker is the most efficient and the easiest one. We will also see some use cases of applications that can be installed with Docker, making use of some custom shell scripts that I have developed.
2 The Problem
Usually a small organization or institution has limited resources, and it makes sense to use them as efficiently as possible. For example, let's say it has a single server with a public IP and would like to install on it its website, several web applications, etc. Traditionally this is done by installing all the applications and their dependencies in the same place, and trying to fix their configurations so that they coexist peacefully with each other. We can use the name-based virtual hosting of apache2 to host several domains/subdomains on the same webserver.
However the issue is a bit more complicated than this, because sometimes it is not possible (or suitable, or convenient) to host two different websites on the same server. For example, let's say that a website is built on Joomla, and it has some modules that do not work well with the latest version of PHP, so it depends on the previous version of PHP. Now we have to install the previous version of PHP manually, and this makes the whole thing more difficult. Moreover, this previous version of PHP is not suitable for the other websites or applications, which require the latest version, and may even break them. So we would have to keep two parallel installations of PHP, one for each version, and configure each website or application to use the right one.
This makes the complexity of installation, configuration and maintenance very high. Sometimes it may even be impossible due to conflicting dependencies. The only solution in this case would be to separate these websites/applications onto two different servers, one with the previous version of PHP and another with the latest one. But this is more expensive (because it requires two or more servers instead of one). It also requires more than one public IP, one for each server, which is sometimes more expensive and sometimes simply not possible.
3 Using an HTTP Reverse Proxy
In this case, the Reverse Proxy module of apache2 comes to the rescue. The idea is that the main webserver can forward the http requests to the other webservers, behaving like a kind of http gateway or hub. So we need a public IP only for the proxy server, and all the requests for each of the hosted domains/subdomains go to it. Then it uses the name-based virtual hosting feature of apache2 to distinguish between them and forwards each of them to the appropriate internal server, acting as a proxy between the webclient and the webserver.
The apache2 configuration on the proxy server looks like this:
<VirtualHost *:80>
    ServerName site1.example.org
    ProxyPass / http://site1.example.org/
    ProxyPassReverse / http://site1.example.org/
    ProxyRequests off
</VirtualHost>
There is a section like this for each domain/subdomain that is hosted.
Now, the tricky part is to modify /etc/hosts and to add a line like this for each supported domain/subdomain:
192.168.1.115 site1.example.org
Here 192.168.1.115 is the internal IP of the webserver that hosts the domain site1.example.org. Without this configuration apache2 would not know where to forward the request; it would be sent back to the proxy server itself, creating an endless loop.
For handling HTTPS requests, the configuration of apache2 would look like this:
<VirtualHost _default_:443>
    ServerName site1.example.org
    ProxyPass / https://site1.example.org/
    ProxyPassReverse / https://site1.example.org/
    ProxyRequests off

    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
    SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    #SSLCertificateChainFile /etc/ssl/certs/ssl-cert-snakeoil.pem

    SSLProxyEngine on
    SSLProxyVerify none
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
    SSLProxyCheckPeerExpire off
</VirtualHost>
Also these apache2 modules have to be enabled:
a2enmod ssl proxy proxy_http proxy_connect proxy_balancer cache headers rewrite
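After enabling the modules and editing the virtual hosts, it is a good idea to validate the configuration before applying it. A quick check could look like this (assuming a Debian/Ubuntu system with apache2):

# check the syntax of the apache2 configuration
apache2ctl configtest
# apply the new configuration without interrupting the service
systemctl reload apache2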
4 Using Virtual Machines
Instead of using a separate physical machine for the proxy server and for each of the webservers, we can use virtual machines inside a single physical machine. This brings some benefits in terms of management and maintenance, because it is easier to manage a virtual machine than a physical one. For example, you can make a snapshot of a virtual machine (which is like a full backup of everything), you can clone it and use the clone for testing new installations/configurations, and you have more flexibility in increasing or decreasing the resources of the virtual machine (for example adding RAM if needed).
However, the machine that hosts the virtual servers has to be more powerful than a normal server. It needs more RAM, more processing power, faster storage, etc., because these resources are divided among the virtual servers. For example, suppose that we need 3 webservers, one for each website or webapp. We also need a webproxy server, as discussed in the previous section, to handle the http requests for all the domains. If we assume that each webserver needs at least 512MB of RAM to run properly, then for these 4 virtual machines we need at least 2GB of RAM. The host system also needs at least 512MB of RAM to run properly, so in total we would need a machine with at least 2-3 GB of RAM.
On the other hand, if we were able to run all the websites and webapps on the same server, without separating them into different machines, 1GB of RAM would be more than enough to run everything properly. This is because each virtual machine has to spend resources on running its own system, its own apache2 server, etc., so in total they need more resources than a single server.
Besides this, another problem is that once some resources are allocated to a virtual machine, they cannot be shared with the other virtual machines, even when they are idle (unused). For example, suppose that we allocate 512MB of RAM to virt-ws-1 and 512MB of RAM to virt-ws-2. Let's also say that when virt-ws-1 is very busy virt-ws-2 is idle, and vice-versa. Even though one of them is idle, it cannot share its free RAM with the busy one, because the virtual machines and their allocated resources are totally isolated from each other. If both applications were installed on the same server though, the busy one would be able to use the whole 1GB available while the other one is idle.
5 Using Docker
Docker is a tool that allows us to create and manage containers, which behave like virtual machines but are much more lightweight, using far fewer resources than traditional ones. This comes from the fact that Docker containers can share resources with the host system and with each other. For example, all the containers use the same kernel as the host system, so there is no need to spend extra resources on running a separate kernel for each of them. Also, by default there are no limits on the amount of RAM that each container can use: if the host system has 2GB of RAM, this amount is available to all of the containers, and they share it as needed. If one of them is busy at a certain time and the others are idle, the busy one can go on and use all the available free RAM, up to 2GB. Containers can also easily share the disk space, and by default there are no limits on the amount of disk space that each of them can use.
This ability of Docker to share resources among containers provides better and more efficient resource utilization. So, Docker containers provide both isolation between different applications, so that they are not affected by each other's dependencies, and efficient utilization of the available resources. As such, they are a perfect solution in many situations, including our case of running several webservers separated from each other.
Besides all this, Docker containers are also very easy to install and manage and have full support for scripting and automation, making life easy for sysadmins. They are also highly configurable, and resource sharing among containers can be made as restrictive or as flexible as needed, as shown below.
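For illustration, if we do want to cap the resources of a particular container, docker supports per-container limits; a minimal sketch (webserver-1 is just an example image name):

# Cap a container at 512MB of RAM and one CPU; without such
# options it may use all the free resources of the host.
docker run -d --name=ws1 --memory=512m --cpus=1 webserver-1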
6 Implementing the Docker solution: WSProxy
I have developed some scripts that automate the process of building and maintaining a web server proxy with Docker. With them it is quite simple and easy to build a Docker container that does the job of a web server proxy:
mkdir -p /opt/src
cd /opt/src/
git clone https://github.com/docker-build/wsproxy

mkdir -p /opt/workdir/wsproxy1
cd /opt/workdir/wsproxy1
ln -s /opt/src/wsproxy .
cp wsproxy/utils/config.sh .
vim config.sh

wsproxy/docker/build.sh
wsproxy/docker/create.sh
wsproxy/docker/start.sh
So, we first get the code of the scripts from GitHub and save it on /opt/src/wsproxy. Then we create a new directory for managing the container, called /opt/workdir/wsproxy1, and link the source directory inside it. Then we copy and tweak the configuration file, build the docker image, create the docker container, and start it.
Now let's say that we have two webservers, webserver-1 and webserver-2, each one running in a docker container. Later we will see in more detail how to create and start such webservers, but for the time being let's say that they are created like this:
docker run -d --name=ws1 --hostname=site1.example.org webserver-1
docker run -d --name=ws2 --hostname=site2.example.org webserver-2
Note that no HTTP ports are exposed to the host (for example with the options -p 80:80 -p 443:443), and this is important because these ports are already being used by the wsproxy container.
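We can quickly verify that no ports are published, and find the internal IP through which the proxy can reach a container (these are plain docker commands, not part of the wsproxy scripts):

docker ps --format '{{.Names}}\t{{.Ports}}'    # the Ports column should be empty
docker inspect -f '{{.NetworkSettings.IPAddress}}' ws1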
Now we can adjust the configuration of wsproxy so that it serves as a proxy for these domains, like this:
wsproxy/domains-add.sh ws1 site1.example.org
wsproxy/domains-add.sh ws2 site2.example.org site3.example.org
So, all the needed configuration is done by the script domains-add.sh, to which we pass as parameters the name of the container that actually serves the domain and the name of the domain itself. A container can also serve more than one domain, like ws2, which is serving the domains site2.example.org and site3.example.org (it can do this using the name-based virtual hosting of apache2, for example).
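For example, inside the ws2 container the two domains could be distinguished with an apache2 configuration roughly like this (just a sketch; the DocumentRoot paths are hypothetical):

<VirtualHost *:80>
    ServerName site2.example.org
    DocumentRoot /var/www/site2
</VirtualHost>

<VirtualHost *:80>
    ServerName site3.example.org
    DocumentRoot /var/www/site3
</VirtualHost>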
Now all the websites are served from the same server, but they are isolated from each other in docker containers, which make efficient use of the server's resources and are easy to maintain.
7 WSProxy: Serving a Remote Domain Through an SSH-Tunnel
The web server proxy can also serve a domain that is installed on another machine. For example, let's say that the domain site4.example.org is installed on another hosting server, which does not have a public IP. Then we can use this command to tell wsproxy to serve this domain as well:
wsproxy/sshtunnel-add.sh site4.example.org
This command will adjust the configuration of wsproxy to serve this domain, and it will also create the script sshtunnel-keys/site4.example.org.sh. This script should be transferred to the server where the domain is hosted and executed there. When it is executed for the first time, it sets up the remote server to connect to wsproxy and establish an ssh tunnel. The web server proxy can use this tunnel to forward http(s) requests to the webserver. The scripts that are installed on the webserver also test the established ssh tunnel periodically and re-establish it if it is broken.
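At its core such a tunnel boils down to ssh remote port forwarding; a simplified sketch of the idea (the ports, key file and user here are hypothetical, the generated script takes care of the actual details):

# Forward two ports of the wsproxy host back to the local
# webserver, so that wsproxy can reach it through the tunnel.
ssh -N -i sshtunnel.key \
    -R 8001:localhost:80 -R 8002:localhost:443 \
    tunnel@wsproxy.example.org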
8 WSProxy: Managing the HTTPS Certificates
The scripts of wsproxy can also obtain free SSL certificates from letsencrypt for the managed domains. It is as simple as this:
wsproxy/get-ssl-cert.sh user@gmail.com site1.example.org --test
wsproxy/get-ssl-cert.sh user@gmail.com site1.example.org
wsproxy/get-ssl-cert.sh user@gmail.com site2.example.org site3.example.org -t
wsproxy/get-ssl-cert.sh user@gmail.com site2.example.org site3.example.org
In the first command we run the script in testing mode, to make sure that everything works as expected. Then we remove the testing option and get the certificate for the domain site1.example.org. The last two commands do the same for site2.example.org and site3.example.org, and show that we can actually use the same certificate for several domains. The email provided is that of the maintainer of wsproxy: when the certificates are about to expire, letsencrypt will send a warning to this address so that the certificates can be renewed in time. However, the script get-ssl-cert.sh takes care to set up automatic renewal of the certificates, so normally no manual intervention is needed to renew them, unless something goes wrong.
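To check the expiration dates of the certificate that is actually being served, something like this can be used (standard openssl commands, independent of wsproxy):

echo | openssl s_client -connect site1.example.org:443 \
       -servername site1.example.org 2>/dev/null \
     | openssl x509 -noout -dates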
9 Case Study 1: Installing Moodle
Moodle is a powerful learning platform that can be useful for universities and schools. It is not difficult to install it on a web server; however, I have built some scripts for installing it in a Docker container, which make its installation and configuration even easier. They can be used like this:
mkdir -p /opt/src/
cd /opt/src/
git clone https://github.com/docker-build/moodle

mkdir -p /opt/workdir/moodle1.example.org
cd /opt/workdir/moodle1.example.org/
ln -s /opt/src/moodle .
cp moodle/utils/settings.sh .
vim settings.sh

moodle/docker/build.sh
moodle/docker/create.sh
moodle/docker/start.sh
moodle/config.sh
First we get the code of the scripts from GitHub and save it on /opt/src/moodle. Then we create a new directory for managing the container, called /opt/workdir/moodle1.example.org, and link the source directory inside it. Then we copy and tweak the file settings.sh. Finally we build the docker image, create the docker container, and start and configure it.
The file settings.sh should look like this:
### Docker settings.
IMAGE=moodle
CONTAINER=moodle1-example-org
#PORT_HTTP=80
#PORT_HTTPS=443
#PORT_SSH=2222
DOMAIN="moodle1.example.org"
[ . . . . . ]
Notice that the http ports are commented out, so that the container does not expose them. If this were a standalone installation, we would normally expose these ports; however we are running moodle1-example-org behind the wsproxy, so the http requests should come from wsproxy. To tell wsproxy to forward the http requests for this domain to the right container, we use these commands:
cd /opt/workdir/wsproxy1/
wsproxy/domains-add.sh moodle1-example-org moodle1.example.org
And while we are here, we also tell wsproxy to get and manage a free "letsencrypt" SSL certificate for this domain:
wsproxy/get-ssl-cert.sh user@gmail.com moodle1.example.org -t
wsproxy/get-ssl-cert.sh user@gmail.com moodle1.example.org
The installation/configuration is as easy as that, and now we can open https://moodle1.example.org in a browser.
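We can also do a quick check from the command line (assuming curl is installed) that the proxy and the certificate work:

curl -sI https://moodle1.example.org | head -n 1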
10 Case Study 2: Installing ShellInBox
This is a tool that allows shell access to a server using a browser as a terminal. I use it to provide shell access to my Linux students, so that they can try the commands, do their homework, etc. Installing it in a docker container is very easy and similar to the installation of Moodle:
mkdir -p /opt/src/
cd /opt/src/
git clone https://github.com/docker-build/shellinabox

mkdir -p /opt/workdir/shell1.example.org
cd /opt/workdir/shell1.example.org
ln -s /opt/src/shellinabox .
cp shellinabox/utils/settings.sh .
vim settings.sh
cp shellinabox/utils/accounts.txt .
vim accounts.txt

shellinabox/docker/build.sh
shellinabox/docker/create.sh
shellinabox/docker/start.sh
shellinabox/config.sh
The file settings.sh should look like this:
### Docker settings.
IMAGE=shellinabox
CONTAINER=shell1-example-org
PORT=
Again, we leave the port empty because we don't want any ports to be exposed by the container, since it is running behind the wsproxy. To let wsproxy know about handling this domain we use these commands:
cd /opt/workdir/wsproxy1/
wsproxy/domains-add.sh shell1-example-org shell1.example.org
wsproxy/get-ssl-cert.sh user@gmail.com shell1.example.org -t
wsproxy/get-ssl-cert.sh user@gmail.com shell1.example.org
And now we can open https://shell1.example.org in a browser.
11 Case Study 3: Installing SchoolTool
SchoolTool is a web-based student information system. Installing it is quite easy and similar to the applications that we have seen above:
mkdir -p /opt/src/
cd /opt/src/
git clone https://github.com/docker-build/SchoolTool

mkdir -p /opt/workdir/school1.example.org
cd /opt/workdir/school1.example.org/
ln -s /opt/src/SchoolTool .
cp SchoolTool/utils/settings.sh .
vim settings.sh

SchoolTool/docker/build.sh
SchoolTool/docker/create.sh
SchoolTool/docker/start.sh
SchoolTool/config.sh
The configuration file settings.sh looks like this:
### docker image and container
IMAGE=schooltool
CONTAINER=school1-example-org
PORTS=

### domain of the site
DOMAIN="school1.example.org"
[ . . . . . ]
Here again we leave PORTS empty because we don't want the container to expose the http ports. To let wsproxy know about handling this domain we use these commands:
cd /opt/workdir/wsproxy1/
wsproxy/domains-add.sh school1-example-org school1.example.org
wsproxy/get-ssl-cert.sh user@gmail.com school1.example.org -t
wsproxy/get-ssl-cert.sh user@gmail.com school1.example.org
The latest stable version of SchoolTool depends on ubuntu-14.04, while the rest of the applications that we have seen depend on ubuntu-16.04. This makes it impossible to install it on the same server as the others, unless virtual machines or Docker are used. This illustrates again the advantage of using wsproxy and docker containers for installing all the applications on the same server.
12 Managing Your Own Domain Name Server
Each of the sites or webapps that we have installed on our server has its own domain or subdomain. Nowadays domains and subdomains are usually managed by service providers, and everything is done from a nice web interface. However, in this section we will see that it is not difficult to manage our domains with our own server. Of course, you first need to know some basic concepts about how DNS works.
I have also built some scripts that make it even easier. These scripts install bind9 inside a docker container and configure it as a hidden, master, authoritative-only DNS server.
Hidden means that it stays behind a firewall, not accessible from the outside world. Master or primary means that it is a primary source of information for the domains that it serves. There are also slave/secondary DNS servers, which get the information for the domains that they cover from other (master/primary) servers. If we update a domain on the master server, the slaves will synchronize with it automatically after a certain time.
Authoritative-only means that the server will just give answers for the domains that it masters, and nothing else. In general, DNS servers can do several things, for example answer DNS requests from clients, both for the domains that they are responsible for and for other domains. If they don't know the answer, they get it from the Internet, return it to the client, and then cache it for future requests. However this server does none of these things; it just answers for its own domains.
But there is a catch here: if the server stays behind the firewall, hidden from the world and does not accept any requests from clients, where should the clients send queries about our domain? The answer is that they will query some slave/secondary DNS servers which are synchronized with our server. Fortunately there are lots of free secondary DNS services out there: http://networking.ringofsaturn.com/Unix/freednsservers.php
Before continuing with the rest of installation/configuration, register your domain(s) on two or more secondary servers. For example I have used puck.nether.net and buddyns.com.
Next get the scripts:
cd /opt/workdir/
git clone https://github.com/docker-build/bind9
cd bind9/
Then modify config/etc/bind/named.conf.local to look like this:
zone "example.org" { type master; also-notify { 173.244.206.26; # a.transfer.buddyns.com 88.198.106.11; # b.transfer.buddyns.com 204.42.254.5; # puck.nether.net }; file "/var/cache/bind/db.example.org"; };
Of course, you can use slave DNS servers different from buddyns.com and puck.nether.net (and the corresponding IPs). Fix the IPs of the slave DNS servers also in config/etc/bind/named.conf.options and in ufw.sh.
Now create a file like config/var/cache/bind/db.example.org with content like this:
; example.org
$TTL 24h
$ORIGIN example.org.
@   1D  IN  SOA ns1.example.org. admin.example.org. (
            2017063001  ; serial
            3H          ; refresh
            15m         ; retry
            1w          ; expire
            2h          ; minimum
            )
    IN  NS  b.ns.buddyns.com.
    IN  NS  c.ns.buddyns.com.
    IN  NS  puck.nether.net.
    IN  MX  10 aspmx.l.google.com.

; server host definitions
ns1.example.org.  IN  A      12.345.67.89
@                 IN  A      12.345.67.89
; point any subdomain to the server
*                 IN  A      12.345.67.89
mail              IN  CNAME  ghs.google.com.
You can see that only the secondary servers are listed as nameservers for our domain. So, when clients have any questions about our domain, they ask the secondary servers, not our server (which is behind a firewall and cannot be reached). Also, don't forget to increase the serial number whenever this file is modified; otherwise the changes may not be noticed and propagated on the Internet.
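Once the secondary servers have transferred the zone, we can verify that they answer for our domain (dig comes with the dnsutils package on Debian/Ubuntu):

dig +short NS example.org @b.ns.buddyns.com
dig +short A example.org @b.ns.buddyns.com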
For managing another domain, we should add another zone in config/etc/bind/named.conf.local and another db file in config/var/cache/bind/.
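For example, a zone block for a hypothetical second domain example.net would look like this (with the same also-notify list as above):

zone "example.net" {
    type master;
    also-notify {
        173.244.206.26;  # a.transfer.buddyns.com
        88.198.106.11;   # b.transfer.buddyns.com
        204.42.254.5;    # puck.nether.net
    };
    file "/var/cache/bind/db.example.net";
};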
Once we are done with the config files, we can run ./build.sh and ./run.sh, and the bind9 container will be up and running. In case the configuration files need to be changed, update them and rebuild the container by running: ./rm.sh && ./build.sh && ./run.sh
13 Conclusion
We have seen that it is quite possible to install several websites, webapps, etc. on the same server. Using Docker makes it more efficient, more robust and easier to install and maintain them. This is due to the nice features of Docker, like its small overhead, resource sharing, flexible configuration, scriptability, etc. We have also seen that using a bunch of small shell scripts with Docker goes a long way towards making the installation and maintenance of such a server very easy.
14 Future Work
- Improving the docker scripts that are used to install and manage web applications mentioned in this article (Moodle, SchoolTool, Shellinabox, etc.)
- Developing docker scripts for more applications that can be useful for schools, universities, organizations, institutions, small businesses, etc.
15 References
- https://docs.docker.com/
- https://github.com/docker-build/wsproxy
- https://letsencrypt.org/docs/
- https://docs.moodle.org/
- http://schooltool.org/
- https://www.tecmint.com/shell-in-a-box-a-web-based-ssh-terminal-to-access-remote-linux-servers/
- https://blog.lundscape.com/2009/05/configure-a-reverse-proxy-with-apache/
- http://blogs.adobe.com/cguerrero/2010/10/27/configuring-a-reverse-proxy-with-apache-that-handles-https-connections/
- https://help.ubuntu.com/lts/serverguide/dns-configuration.html
- http://wernerstrydom.com/2013/02/23/configuring-ubuntu-server-12-04-as-dns-server/