Installing BigBlueButton with LXD
1. Intro
I have a dedicated server on hetzner.com and I want to install BigBlueButton on it. The latest stable release of BBB requires ubuntu:20.04, but I have installed ubuntu:22.04 (the latest LTS release) on the server, because I want to install some other applications there as well (with docker). So, I'd like to use an LXC container for installing BBB.
For ease of installation, the script bbb-install is recommended, and it seems to work well with a NAT-ed system too. So I don’t need to purchase an extra IP for the BBB container. The case with an extra IP is covered in this (deprecated) article.
The script that we use also installs a TURN server inside the BBB container itself, so we don’t need to install a TURN server on a separate container or machine (as we have done previously).
2. Preparing the host
As mentioned in the introduction, we need to have a freshly installed ubuntu:22.04 on this machine.
2.1. Secure the server
I also install firewalld and fail2ban to protect the server from attacks:
# install firewalld
apt install --yes firewalld
firewall-cmd --list-all
# firewall-cmd --permanent --zone=public --set-target=DROP
# firewall-cmd --reload
# install fail2ban
apt install --yes fail2ban
fail2ban-client status
fail2ban-client status sshd
Their default configuration is usually fine, so for the time being we don’t need to change anything.
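If you later want to tighten the sshd jail, the usual approach is a drop-in override file rather than editing the shipped configuration. A minimal /etc/fail2ban/jail.local might look like this (the values are illustrative, not recommendations):

```ini
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```

After editing it, apply the change with fail2ban-client reload.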
2.2. Install LXD
To manage the LXC containers I use LXD. I install it like this:
apt install snapd
snap install lxd --channel=latest/stable
snap refresh lxd --channel=latest
snap list
lxc list
lxd init
The output from the last command looks like this:
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: btrfs
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=30GB]: 70
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
2.3. Fix the firewall
For more details about this, look at the corresponding appendix.
Any interface that is not explicitly added to a zone is added to the default zone, which is the zone public. This zone is meant for interfaces that face the public internet, so it is restricted; for example, DHCP requests are blocked, and the containers cannot get an IP.
To fix this, we can add the bridge interface to the trusted zone, where everything is allowed:
firewall-cmd --zone=trusted --list-all
firewall-cmd --permanent --zone=trusted --add-interface=lxdbr0
firewall-cmd --reload
firewall-cmd --zone=trusted --list-all
We should also make sure that forwarding is enabled:
firewall-cmd --permanent --direct --add-rule \
ipv4 filter FORWARD 0 -j ACCEPT
firewall-cmd --reload
firewall-cmd --direct --get-all-rules
3. Creating the container
The latest stable version of BBB requires ubuntu:20.04.
lxc launch images:ubuntu/20.04 bbb \
-c security.nesting=true \
-c security.syscalls.intercept.mknod=true \
-c security.syscalls.intercept.setxattr=true
lxc list
lxc list -c ns4t
lxc info bbb
lxc config show bbb
The configuration option -c security.nesting=true is needed in order to run docker inside the container; however, if the container is unprivileged, it does not really have any security implications. Similarly, the other two configuration options are needed so that docker can handle images efficiently.
4. Networking
The connection of our container to the network goes through the host, which serves as a gateway. So, the BigBlueButton server is behind NAT. In this case we should follow the instructions at: https://docs.bigbluebutton.org/administration/firewall-configuration
In particular, we need to forward the UDP ports 16384-32768 (for FreeSWITCH) from the host to the bbb container, as well as the TCP ports 80 and 443. For the TURN server, we should also forward the UDP ports 32769-65535, and the ports 3478/tcp and 3478/udp.
This is easier if the container has a fixed IP (rather than a dynamic one, obtained from DHCP). So, first of all, let's limit the DHCP range of the lxdbr0 bridge, and then change the network configuration inside the container so that it has a fixed IP outside the DHCP range.
4.1. Limit the DHCP range
lxc network set lxdbr0 \
ipv4.dhcp.ranges 10.210.64.2-10.210.64.200
lxc network show lxdbr0
4.2. Set a fixed IP
Network configuration on ubuntu is managed by netplan.
lxc exec bbb -- bash
ip address
ip route
rm /etc/netplan/*.yaml
cat <<EOF > /etc/netplan/01-netcfg.yaml
network:
version: 2
ethernets:
eth0:
dhcp4: no
addresses:
      - 10.210.64.201/24
nameservers:
addresses: [8.8.8.8, 8.8.4.4]
routes:
- to: default
via: 10.210.64.1
EOF
netplan apply
ip address
ip route
ping 8.8.8.8
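As a quick sanity check that the fixed IP we chose (10.210.64.201) really falls outside the DHCP range configured above, here is a small bash sketch; it is pure arithmetic, nothing LXD-specific:

```shell
#!/bin/bash
# Convert a dotted-quad IPv4 address to an integer, for easy comparison.
ip2int() {
    local IFS=.
    read -r a b c d <<< "$1"
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

DHCP_START=$(ip2int 10.210.64.2)
DHCP_END=$(ip2int 10.210.64.200)
FIXED=$(ip2int 10.210.64.201)

if (( FIXED < DHCP_START || FIXED > DHCP_END )); then
    echo "OK: the fixed IP is outside the DHCP range"
else
    echo "WARNING: the fixed IP is inside the DHCP range; pick another address"
fi
```
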
4.3. Forward ports
We can use the command lxc network forward to forward these ports to the internal IP of the BBB container:
HOST_IP=10.11.12.13 # the public IP of the host
CONTAINER_IP=10.210.64.201
lxc network forward create lxdbr0 $HOST_IP
lxc network forward port add lxdbr0 $HOST_IP tcp 3478 $CONTAINER_IP
lxc network forward port add lxdbr0 $HOST_IP udp 3478 $CONTAINER_IP
lxc network forward port add lxdbr0 $HOST_IP udp 16384-65535 $CONTAINER_IP
lxc network forward list lxdbr0
lxc network forward show lxdbr0 $HOST_IP
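The three forward rules follow the same pattern, so they can also be generated from a list. The sketch below only prints the commands (it does not run them), using the same example IPs as above:

```shell
#!/bin/bash
HOST_IP=10.11.12.13        # the public IP of the host (example value)
CONTAINER_IP=10.210.64.201 # the fixed IP of the bbb container

# proto:port(s) pairs, taken from the BBB firewall documentation
for p in tcp:3478 udp:3478 udp:16384-65535; do
    proto=${p%%:*}
    ports=${p#*:}
    echo "lxc network forward port add lxdbr0 $HOST_IP $proto $ports $CONTAINER_IP"
done
```
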
We can use netcat to test that the ports are forwarded correctly. On the server run:
lxc exec bbb -- nc -u -l 17000
Outside the server run:
nc bbb.example.org 17000
We are assuming that bbb.example.org resolves to the external IP of the server.
Every line that is typed outside the server should be displayed inside the server.
4.4. Forward the TCP ports 80 and 443
Forwarding these two ports is a bit more complex and cannot be done with the same method that was used above for the UDP ports. This is because these ports need to be used by other applications as well, besides BBB. We need to forward them to different applications or containers, based on the domain that is being used. We can use sniproxy for this.
apt install -y sniproxy
vim /etc/sniproxy.conf
killall sniproxy
service sniproxy start
ps ax | grep sniproxy
Make sure that the configuration file /etc/sniproxy.conf looks like this:
user daemon
pidfile /var/run/sniproxy.pid

error_log {
    syslog daemon
    priority notice
}

access_log {
    filename /var/log/sniproxy/access.log
    priority notice
}

listen 80 {
    proto http
    fallback localhost:8080
}

listen 443 {
    proto tls
}

table {
    # . . . . .
    bbb.example.org 10.210.64.201
    # . . . . .
}
We are using 10.210.64.201, which is the fixed IP of the bbb container.
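To double-check which backend a given domain maps to, you can look it up in the table section of the config. Here is a small awk sketch that works on a sample table (the temporary file path is just for the demonstration):

```shell
#!/bin/bash
# A sample table section, like the one in the config above.
cat > /tmp/sniproxy-table.conf <<'EOF'
table {
    bbb.example.org 10.210.64.201
}
EOF

# Print the backend address for a given domain.
lookup() {
    awk -v d="$1" '$1 == d { print $2 }' /tmp/sniproxy-table.conf
}

lookup bbb.example.org   # prints 10.210.64.201
```
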
5. Installing BBB inside the container
5.1. Run the installation script
As recommended in the docs, before running the installation script, first add this line to /etc/hosts:
10.11.12.13 bbb.example.org
Here, 10.11.12.13 is the external/public IP of the server.
lxc exec bbb -- bash
echo "10.11.12.13 bbb.example.org" >> /etc/hosts
apt install -y wget gnupg2
wget http://ubuntu.bigbluebutton.org/repo/bigbluebutton.asc -O- | apt-key add -
wget -q https://ubuntu.bigbluebutton.org/bbb-install-2.6.sh
chmod +x bbb-install-2.6.sh
./bbb-install-2.6.sh   # without options, it prints the usage
./bbb-install-2.6.sh \
-v focal-260 \
-s bbb.example.org \
-e info@example.org \
-t ohw1Ree0Ohyie0oo:eex8ieCoAeLeo1sa \
-g -k
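The -t option takes a colon-separated pair for the built-in TURN server; the values shown above are just placeholders. One way to generate your own random pair is with openssl (a sketch; the variable names are ours, not the script's):

```shell
#!/bin/bash
# Generate random hex strings for the TURN key:secret pair.
TURN_USER=$(openssl rand -hex 8)    # 16 hex characters
TURN_SECRET=$(openssl rand -hex 16) # 32 hex characters
echo "$TURN_USER:$TURN_SECRET"
```
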
5.2. Add admins and users
docker exec greenlight-v3 \
bundle exec rake \
admin:create["Full Name 1","email1@example.org","Pass-123"]
docker exec greenlight-v3 \
bundle exec rake \
user:create["Full Name 2","email2@example.org","Pass-234"]
5.3. Updating
To update the system, create a script update.sh with content like this:
#!/bin/bash -x
apt update
apt -y upgrade
cd $(dirname $0)
#wget -q https://ubuntu.bigbluebutton.org/bbb-install-2.6.sh
#chmod +x bbb-install-2.6.sh
./bbb-install-2.6.sh \
-v focal-260 \
-s bbb.example.org \
-e info@example.org \
-t ohw1Ree0Ohyie0oo:eex8ieCoAeLeo1sa \
-g -k
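To run this update script periodically, one option is a cron entry. The sketch below only builds and prints the crontab line; the path /root/update.sh is an assumption, adjust it to wherever you saved the script:

```shell
#!/bin/bash
# Weekly update: every Sunday at 04:00 (path is hypothetical).
CRON_LINE='0 4 * * 0 /root/update.sh >> /var/log/bbb-update.log 2>&1'
echo "$CRON_LINE"
# To install it: ( crontab -l 2>/dev/null; echo "$CRON_LINE" ) | crontab -
```
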
6. Appendices
6.1. Fixing the firewall
If we create a test container, we will notice that the network in the container is not working:
lxc launch images:ubuntu/22.04 u22
lxc ls
lxc exec u22 -- ip addr
The container did not get an IP, as it normally should.
However, if you stop firewalld and restart the container, everything works fine.
systemctl status firewalld
systemctl stop firewalld
lxc restart u22
lxc ls
lxc exec u22 -- ip addr
lxc exec u22 -- ping 8.8.8.8
systemctl start firewalld
systemctl status firewalld
So the problem is that the firewall is not configured properly. Let’s fix it.
6.1.1. Add the bridge interface to the trusted zone
Any interface that is not explicitly added to a zone is added to the default zone, which is the zone public. This zone is meant for interfaces that face the public internet, so it is restricted; for example, DHCP requests are blocked, and the containers cannot get an IP.
To fix this, we can add the bridge interface to the trusted zone, where everything is allowed:
firewall-cmd --zone=trusted --list-all
firewall-cmd --permanent --zone=trusted --add-interface=lxdbr0
firewall-cmd --reload
firewall-cmd --zone=trusted --list-all
Let’s check that it is working:
lxc restart u22
lxc ls
lxc exec u22 -- ip addr
lxc exec u22 -- ping 8.8.8.8
6.1.2. Fix the FORWARD chain
If the ping is still not working, usually the problem is that forwarding is blocked. If you try iptables-save | head and see something like this: :FORWARD DROP [4:2508], it means that the policy for the FORWARD chain is DROP. Maybe it is set by Docker, if you have installed it.
You can make the default policy ACCEPT, like this: iptables -P FORWARD ACCEPT. However, the next time that the server is rebooted, or firewalld is restarted, you may lose this configuration.
A better way is to add a direct (explicit) rule with firewall-cmd, like this:
firewall-cmd --permanent --direct --add-rule \
ipv4 filter FORWARD 0 -j ACCEPT
firewall-cmd --reload
firewall-cmd --direct --get-all-rules
This will enable (ACCEPT) forwarding for all interfaces, both the current ones and those that will be created in the future. If this is not what you want, you can use more specific rules, like these:
firewall-cmd --permanent --direct --remove-rule \
ipv4 filter FORWARD 0 -j ACCEPT
firewall-cmd --permanent --direct --add-rule \
    ipv4 filter FORWARD 0 -i lxdbr0 -j ACCEPT
firewall-cmd --permanent --direct --add-rule \
    ipv4 filter FORWARD 0 -o lxdbr0 -j ACCEPT
firewall-cmd --reload
firewall-cmd --direct --get-all-rules
6.2. Installing sniproxy in a Docker container
-
Install Docker (if not already installed):
curl -fsSL https://get.docker.com -o get-docker.sh
DRY_RUN=1 sh ./get-docker.sh
sh ./get-docker.sh
rm get-docker.sh
-
Install docker-scripts:
apt install git make m4 highlight
git clone \
    https://gitlab.com/docker-scripts/ds \
    /opt/docker-scripts/ds
cd /opt/docker-scripts/ds/
make install
-
Install sniproxy:
ds pull sniproxy
ds init sniproxy @sniproxy
cd /var/ds/sniproxy/
ds make
-
Edit etc/sniproxy.conf and add a line like this to the default table:
# if no table specified the default 'default' table is defined
table {
    # . . . . .
    bbb.example.org 10.210.64.201
    # . . . . .
}
We are using 10.210.64.201, which is the fixed IP of the bbb container. Make sure that the lines #table http_hosts and #table https_hosts are commented out, so that the default table is used in both cases.
-
Restart sniproxy in order to reload the configuration file:
cd /var/ds/sniproxy/
ds restart