Installing Carbonio CE with LXD
1. Intro
I have a dedicated server on hetzner.com and I want to install Carbonio CE on it. Carbonio requires ubuntu:20.04, but I have installed ubuntu:22.04 (the latest LTS release) on the server, because I want to run some other applications there as well (with Docker). So, I'd like to install Carbonio in an LXC container.
2. Preparing the host
As mentioned in the introduction, we need to have a freshly installed ubuntu:22.04 on the hetzner machine.
2.1. Secure the server
I also install firewalld and fail2ban to protect the server from attacks:
# install firewalld
apt install --yes firewalld
firewall-cmd --list-all
# firewall-cmd --permanent --zone=public --set-target=DROP
# firewall-cmd --reload
# install fail2ban
apt install --yes fail2ban
fail2ban-client status
fail2ban-client status sshd
Their default configuration is usually fine, so for the time being we don’t need to change anything.
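If we ever want to tighten them later, local overrides for fail2ban go to /etc/fail2ban/jail.local. A minimal sketch (the values here are illustrative, not the defaults):
cat <<EOF > /etc/fail2ban/jail.local
[sshd]
enabled = true
maxretry = 5
findtime = 600
bantime = 3600
EOF
systemctl restart fail2ban
fail2ban-client status sshd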
2.2. Install LXD
To manage the LXC containers I use LXD. I install it like this:
apt install snapd
snap install lxd --channel=latest/stable
snap refresh lxd --channel=latest
snap list
lxc list
lxd init
The output from the last command looks like this:
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: btrfs
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=30GB]: 70
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
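By the way, if you ever need to repeat this setup on another (not yet initialized) machine, the same answers can be given non-interactively with a preseed. A minimal sketch matching the choices above:
lxd init --preseed <<EOF
storage_pools:
- name: default
  driver: btrfs
  config:
    size: 70GB
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
EOF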
2.3. Fix the firewall
For more details about this, look at the corresponding appendix.
Any interface that is not explicitly added to a zone is added to the default zone, which is the public zone. This zone is meant for interfaces that face the public internet, so it is restricted. For example, DHCP requests are blocked, and the containers cannot get an IP.
To fix this, we can add the bridge interface to the trusted zone, where everything is allowed:
firewall-cmd --zone=trusted --list-all
firewall-cmd --permanent --zone=trusted --add-interface=lxdbr0
firewall-cmd --reload
firewall-cmd --zone=trusted --list-all
We should also make sure that forwarding is enabled:
firewall-cmd --permanent --direct --add-rule \
ipv4 filter FORWARD 0 -j ACCEPT
firewall-cmd --reload
firewall-cmd --direct --get-all-rules
3. Create the container
The latest stable version of Carbonio requires ubuntu:20.04, so we launch the container from that image:
lxc launch images:ubuntu/20.04 carbonio
lxc list
lxc list -c ns4t
lxc info carbonio
lxc config show carbonio
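Optionally, we can also cap the container's resources with LXD limits. The values below are just an example; check the official Carbonio requirements before picking real ones:
lxc config set carbonio limits.memory 8GiB
lxc config set carbonio limits.cpu 4
lxc config show carbonio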
Let's also enable bash-completion and set a better prompt inside the container:
lxc exec carbonio -- bash
# make sure that bash-completion is installed
apt install --yes bash-completion
# customize ~/.bashrc
sed -i ~/.bashrc \
-e '/^#\?force_color_prompt=/ c force_color_prompt=yes' \
-e '/bashrc_custom/d'
echo 'source ~/.bashrc_custom' >> ~/.bashrc
cat <<'EOF' > ~/.bashrc_custom
# set a better prompt
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;31m\]\u\[\033[01;33m\]@\[\033[01;36m\]\h \[\033[01;33m\]\w \[\033[01;35m\]\$ \[\033[00m\]'
# enable programmable completion features
if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
    source /etc/bash_completion
fi
EOF
source ~/.bashrc
The installation of Carbonio CE also requires Python3 and Perl, so let’s make sure that they are installed:
# lxc exec carbonio -- bash
apt install --yes python3 perl
4. Networking requirements
We are making a single-server installation of Carbonio, so only the ports for external connections need to be open: 25, 465, 587, 80, 110, 143, 443, 993, 995, 5222, 6071.
The connection of our container to the network goes through the host, which serves as a gateway. So, the Carbonio server is behind NAT and we need to forward these ports from the host.
Forwarding them is easier if the container has a fixed IP (rather than a dynamic one, obtained from DHCP). So, first of all, let's limit the DHCP range of the lxdbr0 bridge, and then change the network configuration inside the container so that it has a fixed IP that is outside the DHCP range.
Another requirement before starting to install Carbonio is to disable IPv6 inside the container.
4.1. Limit the DHCP range
lxc network set lxdbr0 \
ipv4.dhcp.ranges 10.210.64.2-10.210.64.200
lxc network show lxdbr0
4.2. Set a fixed IP
Network configuration on Ubuntu is managed by netplan.
lxc exec carbonio -- bash
ip address
ip route
rm /etc/netplan/*.yaml
cat <<EOF > /etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses:
        - 10.210.64.201/24
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
      routes:
        - to: default
          via: 10.210.64.1
EOF
netplan apply
ip address
ip route
ping 8.8.8.8
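The ping above checks only the routing by IP; to make sure that the nameservers from the netplan configuration work as well, try resolving a name:
getent hosts archive.ubuntu.com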
4.3. Forward ports
We can use the command lxc network forward to forward these ports to the internal IP of the Carbonio container:
HOST_IP=10.11.12.13 # the public IP of the host
CONTAINER_IP=10.210.64.201
lxc network forward create lxdbr0 $HOST_IP
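# ports 80 and 443 are not forwarded here; they are handled by sniproxy (see section 4.4)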
for port in 25 465 587 110 143 993 995 5222 6071 ; do \
lxc network forward port add lxdbr0 $HOST_IP tcp $port $CONTAINER_IP ; \
done
lxc network forward list lxdbr0
lxc network forward show lxdbr0 $HOST_IP
We can use netcat to test that the ports are forwarded correctly. On the server run:
lxc exec carbonio -- nc -l 110
Outside the server run:
nc mail.example.org 110
We are assuming that mail.example.org resolves to the external IP of the server. Every line that is typed outside the server should be displayed inside the server.
4.4. Forward the TCP ports 80 and 443
Forwarding these two ports is a bit more complex and cannot be done with the same method that was used above. This is because these ports need to be used by other applications as well, besides Carbonio. We need to forward these ports to different applications or containers, based on the domain that is being used. We can use sniproxy for this.
apt install -y sniproxy
vim /etc/sniproxy.conf
killall sniproxy
service sniproxy start
ps ax | grep sniproxy
Make sure that the configuration file /etc/sniproxy.conf looks like this:
user daemon
pidfile /var/run/sniproxy.pid

listen 0.0.0.0:80 {
    proto http
}

listen 0.0.0.0:443 {
    proto tls
}

# error_log {
#     syslog daemon
#     priority notice
# }
#
# access_log {
#     filename /var/log/sniproxy/access.log
#     priority notice
# }

table {
    # . . . . .
    mail.example.org       10.210.64.201
    .*.mail.example.org    10.210.64.201
    # . . . . .
}
We are using 10.210.64.201, which is the fixed IP of the carbonio container.
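To verify that sniproxy has started and is listening on the host ports, we can check with ss:
ss -tlnp | grep -E ':(80|443) '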
4.5. Disable IPv6 inside the container
These commands can disable it:
# lxc exec carbonio -- bash
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
sysctl -w net.ipv6.conf.lo.disable_ipv6=1
The problem is that this is not permanent; the next time the container restarts, we would have to disable IPv6 again. So, let's save these commands in a simple script:
cat <<EOF > /usr/local/bin/disable_ipv6.sh
#!/bin/bash
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
sysctl -w net.ipv6.conf.lo.disable_ipv6=1
EOF
chmod +x /usr/local/bin/disable_ipv6.sh
ip addr
disable_ipv6.sh
ip addr
Now let's also create a simple systemd service that runs this script each time the container starts:
cat <<EOF > /etc/systemd/system/disable_ipv6.service
[Unit]
After=network.target
[Service]
Type=simple
ExecStart=/usr/local/bin/disable_ipv6.sh
[Install]
WantedBy=default.target
EOF
systemctl daemon-reload
systemctl enable disable_ipv6
Let’s test it:
exit
lxc restart carbonio
lxc exec carbonio -- bash
ip addr
We should also remove the IPv6 entries from /etc/hosts:
cat /etc/hosts
sed -i /etc/hosts -e '/::/d'
cat /etc/hosts
5. Minimal DNS setup
Before starting to install the mail server, let’s make sure that we already have some minimal DNS setup, which should look like this:
mail.example.org. IN A 10.11.12.13
example.org. IN MX 10 mail.example.org.
example.org. IN MX 20 mail.example.org.
example.org. IN TXT "v=spf1 mx -all"
The last line basically tells other SMTP servers that only this server is allowed to send emails on behalf of this domain, and no other servers. This is done to prevent spammers from faking your email addresses. If a spammer tries to send a mail as if it is coming from your domain, the SMTP server that receives this email will consult this DNS record and will figure out that the spammer's server is not allowed to send emails on behalf of example.org.
You can use dig to verify that these DNS records have been activated:
dig MX example.org +short
dig A mail.example.org +short
dig TXT example.org +short
However, keep in mind that DNS changes may take some time to propagate.
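To see whether a change has already propagated, you can also query a public resolver explicitly:
dig MX example.org @8.8.8.8 +short
dig A mail.example.org @1.1.1.1 +short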
6. Installing Carbonio inside the container
We are going to make a single-server installation, following these instructions: https://docs.zextras.com/carbonio-ce/html/single-server-installation.html
6.1. Repository configuration
apt install --yes gnupg2
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 \
--recv-keys 52FD40243E584A21
cat <<EOF > /etc/apt/sources.list.d/zextras.list
deb https://repo.zextras.io/release/ubuntu focal main
EOF
apt update
6.2. Setting hostname
hostnamectl set-hostname mail.example.org
hostname
echo "$(hostname -I) $(hostname -f) mail" >> /etc/hosts
cat /etc/hosts
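Let's also make sure that the FQDN now resolves locally:
hostname -f
getent hosts mail.example.org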
6.3. Install packages
apt update
apt upgrade --yes
apt install --yes \
service-discover-server \
carbonio-directory-server \
carbonio-proxy \
carbonio-webui \
carbonio-files-ui \
carbonio-mta \
carbonio-appserver \
carbonio-user-management \
carbonio-files-ce \
carbonio-files-db \
carbonio-storages-ce \
carbonio-preview-ce \
carbonio-docs-connector-ce \
carbonio-docs-editor \
carbonio-prometheus \
postgresql-12
Check the status of Carbonio services:
systemctl status carbonio-*
systemctl restart carbonio-prometheus-nginx-exporter.service
6.4. Configuration
carbonio-bootstrap
service-discover setup-wizard
# apt install --yes python3-pip
# pip install requests
apt install --yes python3-requests
pending-setups -a
6.5. DB setup
# create main db role and database
DB_ADM_PWD=Ee5hfaevVo7vieri
su - postgres -c "psql --command=\"CREATE ROLE carbonio_adm WITH LOGIN SUPERUSER encrypted password '$DB_ADM_PWD';\""
su - postgres -c "psql --command=\"CREATE DATABASE carbonio_adm owner carbonio_adm;\""
# bootstrap carbonio files databases
PGPASSWORD=$DB_ADM_PWD carbonio-files-db-bootstrap carbonio_adm 127.0.0.1
# restart the main mailbox process as the zextras user
su - zextras -c 'zmcontrol stop'
su - zextras -c 'zmcontrol start'
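After the restart, we can check that all the services came up properly:
su - zextras -c 'zmcontrol status'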
7. Setup
7.1. SSL certificate
We are going to use a Let's Encrypt certificate.
- First, let's install certbot:
apt install --yes snapd
snap install core
snap refresh core
snap install --classic certbot
ln -s /snap/bin/certbot /usr/bin/certbot
- Get a certificate (the first run, with --dry-run, only tests that everything works; the second one gets the real certificate):
DOMAIN=mail.example.org
EMAIL=user@gmail.com
certbot certonly \
--standalone \
--preferred-chain "ISRG Root X1" \
--domains $DOMAIN \
--email $EMAIL \
--agree-tos \
--non-interactive \
--keep-until-expiring \
--dry-run
certbot certonly \
--standalone \
--preferred-chain "ISRG Root X1" \
--domains $DOMAIN \
--email $EMAIL \
--agree-tos \
--non-interactive \
--keep-until-expiring
The certificate is saved at /etc/letsencrypt/live/$DOMAIN/.
- Copy privkey.pem to the Carbonio directory:
cp /etc/letsencrypt/live/$DOMAIN/privkey.pem \
/opt/zextras/ssl/carbonio/commercial/commercial.key
chown zextras:zextras \
/opt/zextras/ssl/carbonio/commercial/commercial.key
- Copy the certificate and the chain to /tmp, to prepare them for deployment:
cp /etc/letsencrypt/live/$DOMAIN/cert.pem /tmp
cp /etc/letsencrypt/live/$DOMAIN/chain.pem /tmp
- Download the ISRG Root X1 certificate and append it to the chain:
apt install --yes wget
wget -O /tmp/ISRG-X1.pem https://letsencrypt.org/certs/isrgrootx1.pem.txt
cat /tmp/ISRG-X1.pem >> /tmp/chain.pem
rm /tmp/ISRG-X1.pem
- Verify the certificate:
su - zextras \
-c 'zmcertmgr verifycrt comm \
/opt/zextras/ssl/carbonio/commercial/commercial.key \
/tmp/cert.pem \
/tmp/chain.pem'
- Finally, deploy the certificate and restart the services to finish the deployment:
su - zextras -c 'zmcertmgr deploycrt comm /tmp/cert.pem /tmp/chain.pem'
su - zextras -c 'zmcertmgr viewdeployedcrt'
su - zextras -c 'zmcontrol restart'
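Note that Let's Encrypt certificates expire after about 90 days. Certbot renews them automatically, but only the files under /etc/letsencrypt are refreshed; the copy and deploy steps above have to be repeated after each renewal. One way to automate this is a certbot deploy hook; the script below is only a sketch of mine (not part of Carbonio), and it just repeats the commands above:
cat <<'EOF' > /etc/letsencrypt/renewal-hooks/deploy/carbonio.sh
#!/bin/bash
# re-deploy the renewed Let's Encrypt certificate to Carbonio
DOMAIN=mail.example.org
cp /etc/letsencrypt/live/$DOMAIN/privkey.pem \
/opt/zextras/ssl/carbonio/commercial/commercial.key
chown zextras:zextras /opt/zextras/ssl/carbonio/commercial/commercial.key
cp /etc/letsencrypt/live/$DOMAIN/cert.pem /tmp
cp /etc/letsencrypt/live/$DOMAIN/chain.pem /tmp
wget -O /tmp/ISRG-X1.pem https://letsencrypt.org/certs/isrgrootx1.pem.txt
cat /tmp/ISRG-X1.pem >> /tmp/chain.pem
rm /tmp/ISRG-X1.pem
su - zextras -c 'zmcertmgr deploycrt comm /tmp/cert.pem /tmp/chain.pem'
su - zextras -c 'zmcontrol restart'
EOF
chmod +x /etc/letsencrypt/renewal-hooks/deploy/carbonio.sh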
8. Appendices
8.1. Fixing the firewall
If we create a test container, we will notice that the network in the container is not working:
lxc launch images:ubuntu/22.04 u22
lxc ls
lxc exec u22 -- ip addr
The container did not get an IP, as it normally should.
However, if you stop firewalld and restart the container, everything works fine.
systemctl status firewalld
systemctl stop firewalld
lxc restart u22
lxc ls
lxc exec u22 -- ip addr
lxc exec u22 -- ping 8.8.8.8
systemctl start firewalld
systemctl status firewalld
So the problem is that the firewall is not configured properly. Let’s fix it.
8.1.1. Add the bridge interface to the trusted zone
Any interface that is not explicitly added to a zone is added to the default zone, which is the public zone. This zone is meant for interfaces that face the public internet, so it is restricted. For example, DHCP requests are blocked, and the containers cannot get an IP.
To fix this, we can add the bridge interface to the trusted zone, where everything is allowed:
firewall-cmd --zone=trusted --list-all
firewall-cmd --permanent --zone=trusted --add-interface=lxdbr0
firewall-cmd --reload
firewall-cmd --zone=trusted --list-all
Let’s check that it is working:
lxc restart u22
lxc ls
lxc exec u22 -- ip addr
lxc exec u22 -- ping 8.8.8.8
8.1.2. Fix the FORWARD chain
If the ping is still not working, usually the problem is that forwarding is blocked. If you try iptables-save | head and see something like :FORWARD DROP [4:2508], it means that the policy for the FORWARD chain is DROP. Maybe it was set by Docker, if you have installed it.
You can make the default policy ACCEPT, like this: iptables -P FORWARD ACCEPT. However, the next time the server is rebooted, or firewalld is restarted, you may lose this configuration.
A better way is to add a direct (explicit) rule with firewall-cmd, like this:
firewall-cmd --permanent --direct --add-rule \
ipv4 filter FORWARD 0 -j ACCEPT
firewall-cmd --reload
firewall-cmd --direct --get-all-rules
This will enable (ACCEPT) forwarding for all the interfaces, the current ones and the ones that will be created in the future. If this is not what you want, you can use more specific rules, like these:
firewall-cmd --permanent --direct --remove-rule \
ipv4 filter FORWARD 0 -j ACCEPT
firewall-cmd --permanent --direct --add-rule \
ipv4 filter FORWARD 0 -i lxdbr0 -j ACCEPT
firewall-cmd --permanent --direct --add-rule \
ipv4 filter FORWARD 0 -o lxdbr0 -j ACCEPT
firewall-cmd --reload
firewall-cmd --direct --get-all-rules
8.2. Installing sniproxy in a Docker container
- Install Docker (if not already installed):
curl -fsSL https://get.docker.com -o get-docker.sh
DRY_RUN=1 sh ./get-docker.sh
sh ./get-docker.sh
rm get-docker.sh
- Install docker-scripts:
apt install git make m4 highlight
git clone \
https://gitlab.com/docker-scripts/ds \
/opt/docker-scripts/ds
cd /opt/docker-scripts/ds/
make install
- Install sniproxy:
ds pull sniproxy
ds init sniproxy @sniproxy
cd /var/ds/sniproxy/
ds make
- Edit etc/sniproxy.conf and add lines like these to the default table:
# if no table is specified, the default 'default' table is used
table {
    # . . . . .
    mail.example.org       10.210.64.201
    .*.mail.example.org    10.210.64.201
    # . . . . .
}
We are using 10.210.64.201, which is the fixed IP of the carbonio container. Make sure that the lines #table http_hosts and #table https_hosts are commented out, so that the default table is used in both cases.
We should also restart
sniproxy
in order to reload the configuration file:cd /var/ds/sniproxy/ ds restart