Installing BigBlueButton with LXD

1. Intro

I have a VDS at Contabo and I want to install BigBlueButton on it. The latest stable release of BBB requires ubuntu:18.04, but I have installed ubuntu:20.04 (the latest LTS release) on the server, because I also want to install some other applications there (with Docker). So, I’d like to install BBB in an LXC container.

For ease of installation, the script bbb-install.sh is recommended, and it works best on a system with a public IP (not a NAT-ed one). For this reason I have also purchased a second IP, which I want to use for the BBB container. The second IP is routed to the primary one.

I’d also like to install a TURN server in another container, which may improve the connectivity of clients to the BBB server. A TURN server does not require a lot of computing resources and, besides BBB, it can be used by other conferencing applications as well, like Galene, the Talk app of NextCloud, etc.

2. Preparing the host

As mentioned in the introduction, we need to have a freshly installed ubuntu:20.04 on this machine.

2.1. Secure the server

I also install firewalld and fail2ban to protect the server from attacks:

# install firewalld
apt install --yes firewalld
firewall-cmd --list-all
# firewall-cmd --permanent --zone=public --set-target=DROP
# firewall-cmd --reload

# install fail2ban
apt install --yes fail2ban
fail2ban-client status
fail2ban-client status sshd

Their default configuration is usually fine, so for the time being we don’t need to change anything.
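If you do need to tweak fail2ban later, overrides belong in /etc/fail2ban/jail.local, not in jail.conf (which gets overwritten on upgrades). A minimal illustrative fragment (the values here are hypothetical examples, not recommendations):

```ini
# /etc/fail2ban/jail.local -- optional local overrides (illustrative)
[sshd]
enabled  = true
maxretry = 5
bantime  = 1h
```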

2.2. Install LXD

To manage the LXC containers I use LXD. I install it like this:

apt install snapd
snap install lxd --channel=4.0/stable
snap list
lxc list
lxd init

The output from the last command looks like this:

Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: btrfs
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing empty block device (e.g. a disk or partition)? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=30GB]: 70
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like the LXD server to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

Almost all the answers here are the defaults. The one answer that differs is this:

Name of the storage backend to use (btrfs, dir, lvm, zfs, ceph) [default=zfs]: btrfs

I am using btrfs as the storage backend because the BBB installation script needs to install Docker inside the container, and of the available backends btrfs is the one that properly supports running Docker nested in a container.

2.3. Fix the firewall

If we create a test container, we will notice that the network in the container is not working:

lxc launch images:ubuntu/22.04 u22
lxc ls
lxc exec u22 -- ip addr

The container did not get an IP, as it normally should.

However, if we stop firewalld and restart the container, everything works fine:

systemctl status firewalld
systemctl stop firewalld

lxc restart u22

lxc ls
lxc exec u22 -- ip addr
lxc exec u22 -- ping 8.8.8.8

systemctl start firewalld
systemctl status firewalld

By the way, IP forwarding should already be enabled in the kernel of the host:

sysctl net.ipv4.ip_forward
cat /proc/sys/net/ipv4/ip_forward

If it is not, enable it like this:

echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p
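The check above can be wrapped in a small shell sketch (nothing here is BBB-specific; it only reads the kernel switch, so it works on any Linux host):

```shell
# Read the current value of the IPv4 forwarding switch
fwd=$(cat /proc/sys/net/ipv4/ip_forward)

# Report the state in a human-friendly way
if [ "$fwd" = "1" ]; then
    echo "IPv4 forwarding is enabled"
else
    echo "IPv4 forwarding is disabled"
fi
```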

So the problem is that the firewall is not configured properly. Let’s fix it.

2.3.1. Allow DHCP requests

firewall-cmd --zone=internal --list-all
firewall-cmd --permanent --zone=internal --add-interface=lxdbr0
firewall-cmd --permanent --zone=internal \
    --remove-service={dhcpv6-client,mdns,samba-client,ssh}
firewall-cmd --permanent --zone=internal --add-service=dhcp
firewall-cmd --reload
firewall-cmd --zone=internal --list-all

By adding the interface lxdbr0 to the zone internal and allowing DHCP requests in this zone, the containers are able to get an IP automatically from lxdbr0.

lxc restart u22
lxc ls
lxc exec u22 -- ip addr

2.3.2. Fix forwarding in firewalld

Ping is still not working:

lxc exec u22 -- ping 8.8.8.8

Let’s move the external interface (eth0 in my case) to the zone external, and enable forwarding between the zones internal and external:

firewall-cmd --permanent --zone=external --add-interface=eth0
firewall-cmd --reload
firewall-cmd --zone=external --list-all

firewall-cmd --permanent --new-policy=forward-internal
firewall-cmd --permanent --policy=forward-internal --add-ingress-zone=internal
firewall-cmd --permanent --policy=forward-internal --add-egress-zone=external
firewall-cmd --permanent --policy=forward-internal --set-target=ACCEPT
firewall-cmd --reload
firewall-cmd --list-all-policies

By default, the external interface (eth0 in this case) is implicitly associated with the zone public. If you use an SSH port different from the default one (for example 2201), you have to allow this port in the external zone too when moving eth0 there, otherwise you may lock yourself out of the server. For example like this:

firewall-cmd --permanent --zone=external --add-port=2201/tcp
firewall-cmd --permanent --zone=public --remove-port=2201/tcp
firewall-cmd --reload

2.3.3. Fix the FORWARD chain

If the ping is still not working, the problem is usually the default policy of iptables. If you run iptables-save | head and see something like :FORWARD DROP [4:2508], it means that the default policy of the FORWARD chain is DROP.
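For instance, the policy can be extracted from such a header line like this (a toy example that parses a sample string, not the live ruleset):

```shell
# A sample chain header, as printed by iptables-save
sample=':FORWARD DROP [4:2508]'

# The second whitespace-separated field is the default policy of the chain
policy=$(echo "$sample" | awk '{print $2}')
echo "$policy"
# DROP
```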

You can make the default policy ACCEPT like this: iptables -P FORWARD ACCEPT. However, the next time the server is rebooted, or firewalld is restarted, you may lose this configuration.

A better way is to add a direct (explicit) rule with firewall-cmd, like this:

firewall-cmd --permanent --direct --add-rule \
    ipv4 filter FORWARD 0 -j ACCEPT
firewall-cmd --reload

firewall-cmd --direct --get-all-rules

This will enable (ACCEPT) forwarding on all interfaces, both the current ones and those that will be created in the future. If this is not what you want, you can use more specific rules, like these:

firewall-cmd --permanent --direct --remove-rule \
    ipv4 filter FORWARD 0 -j ACCEPT

firewall-cmd --permanent --direct --add-rule \
    ipv4 filter FORWARD 0 -i lxdbr0 -j ACCEPT
firewall-cmd --permanent --direct --add-rule \
    ipv4 filter FORWARD 0 -o lxdbr0 -j ACCEPT

firewall-cmd --reload
firewall-cmd --direct --get-all-rules

I would expect the firewalld policy that we defined in the previous step to take care of enabling the necessary forwarding rules automatically, but apparently it doesn’t do that.

2.3.4. Cleanup

Let’s test again and then remove the test container:

lxc exec u22 -- ping 8.8.8.8

lxc stop u22
lxc rm u22
lxc ls

2.4. Remove the second IP from the host

The default configuration assigns the second IP to eth0 on the host. We can remove it like this:

ip addr del 37.121.182.6/32 dev eth0

To remove it permanently, edit /etc/netplan/50-cloud-init.yaml and comment out the second IP line. Then do:

netplan apply
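For reference, after the edit the relevant part of /etc/netplan/50-cloud-init.yaml would look roughly like this (illustrative; the actual file generated by your provider may be structured differently):

```yaml
network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true
            addresses:
            # - 37.121.182.6/32    # the second IP, now reserved for the container
```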

3. Creating the container

3.1. Create a profile

lxc profile create bbb
lxc profile edit bbb
lxc profile list

The content of the bbb profile looks like this:

config:
  security.nesting: "true"    (1)
  user.network-config: |    (2)
    version: 2
    ethernets:
        eth0:
            addresses:
            - 37.121.182.6/32    (3)
            nameservers:
                addresses:
                - 8.8.8.8
                search: []
            routes:
            -   to: 0.0.0.0/0
                via: 169.254.0.1    (4)
                on-link: true
description: Routed LXD profile
devices:
  eth0:
    ipv4.address: 37.121.182.6    (3)
    nictype: routed    (5)
    host_name: veth-bbb    (6)
    type: nic
    parent: eth0    (7)
name: bbb
used_by:
1 The configuration security.nesting: "true" is needed in order to run Docker inside the container. As long as the container is unprivileged, it does not really have any security implications.
2 The config.user.network-config part is about the configuration of the container (through cloud-init).
3 I am using the second public IP that I removed from the host interface (37.121.182.6/32). For this reason this profile can be used for only one container. To build other containers like this, we should make a copy of the profile and modify it.
4 The gateway is 169.254.0.1.
5 Notice that devices.eth0.nictype is routed. I could have used the ipvlan type as well, and most of the configuration would be almost the same; however, in that case it seems that the container cannot ping the public IP of the host, and I don’t want this.
6 The field devices.eth0.host_name sets the name of the virtual interface that will be created on the host. If it is not specified, a random name is used each time the container is started, which would make it difficult to write the firewall rules (that we will see later).
7 The field devices.eth0.parent is the name of the host interface to which this virtual interface will be attached. In my case it is not really necessary and can be left out or commented.

3.2. Launch the container

The latest stable version of BBB requires ubuntu:18.04.

lxc launch ubuntu:18.04 bbb --profile default --profile bbb
lxc list
lxc list -c ns4t
lxc info bbb

4. Networking

4.1. Check networking

With the command ip addr you can notice that a new interface named veth-bbb has been created, and it has the IP 169.254.0.1/32.

With the command ip ro you can notice that a route like this has been added:

37.121.182.6 dev veth-bbb scope link

Try also these commands:

lxc exec bbb -- ip addr
lxc exec bbb -- ip ro

Notice that the interface inside the container has IP 37.121.182.6/32 and the default gateway is 169.254.0.1.

We can also ping from the host to 37.121.182.6:

ping 37.121.182.6

4.2. Fix the firewall

However, from the container we cannot ping the host or the outside world (the Internet):

lxc exec bbb -- ping 169.254.0.1
lxc exec bbb -- ping 8.8.8.8

The problem is that firewalld (which I have installed on the host) blocks these connections. To fix this, we can add the interface that is connected to the container to the external zone of the firewall:

firewall-cmd --add-interface=veth-bbb --zone=external --permanent
firewall-cmd --reload
firewall-cmd --list-all --zone=external

If we had not specified the name of the veth interface in the profile, it would get a random name each time the container is started, and the firewall configuration would not work properly.

Now connections to the internet should work:

lxc exec bbb -- ping 169.254.0.1
lxc exec bbb -- ping 8.8.8.8

4.3. Allow connection forwarding

However, there is still something that does not work: from outside the host we cannot ping the container. The problem is that this traffic goes through the FORWARD chain of iptables, and the firewall currently blocks it. To fix this, we add rules that allow forwarding for all traffic that goes through the interface veth-bbb:

firewall-cmd --permanent --direct --add-rule \
        ipv4 filter FORWARD 0 -o veth-bbb -j ACCEPT
firewall-cmd --permanent --direct --add-rule \
        ipv6 filter FORWARD 0 -o veth-bbb -j ACCEPT
firewall-cmd --permanent --direct --add-rule \
        ipv4 filter FORWARD 0 -i veth-bbb -j ACCEPT
firewall-cmd --permanent --direct --add-rule \
        ipv6 filter FORWARD 0 -i veth-bbb -j ACCEPT

firewall-cmd --reload
firewall-cmd --direct --get-all-rules

4.4. Test connections

We can test that everything works with netcat. On the server run:

lxc exec bbb -- nc -l 443

Outside the server run:

nc 37.121.182.6 443

Then every line that is typed outside the server should be displayed inside the server.

If we had used the nic type ipvlan instead of routed, the relevant iptables chains would have been INPUT and OUTPUT instead of FORWARD, and the filter rules above would have been a bit different.

5. Installing BBB inside the container

5.1. Run the installation script

Let’s see how to install BBB inside the container.

lxc exec bbb -- bash

wget https://ubuntu.bigbluebutton.org/repo/bigbluebutton.asc -O- | apt-key add -
wget -q https://ubuntu.bigbluebutton.org/bbb-install.sh
chmod +x bbb-install.sh

./bbb-install.sh -v bionic-240 -s bbb.example.org -e info@example.org -g -w

We assume that the DNS record bbb.example.org already points to the IP that is assigned to this container.
Getting an SSL cert

Usually the installation script gets a Let’s Encrypt SSL cert automatically; I am not sure why it sometimes fails for me. As a workaround, I have used DNS verification, with a command like this:

certbot certonly --manual \
    --preferred-challenges dns \
    -d bbb.example.org \
    --email info@example.org \
    --agree-tos

After getting the SSL cert, run the installation script again.

5.2. Add admins and users

docker exec greenlight-v2 \
    bundle exec rake \
    admin:create["Full Name 1","email1@example.org","passw1","username1"]
docker exec greenlight-v2 \
    bundle exec rake \
    user:create["Full Name 2","email2@example.org","passw2","username2"]

5.3. Fix the services

If you run bbb-conf --status you will notice that some services are not working. They can be fixed like this:

# Override /lib/systemd/system/freeswitch.service
mkdir /etc/systemd/system/freeswitch.service.d
cat <<EOF | tee /etc/systemd/system/freeswitch.service.d/override.conf
[Service]
CPUSchedulingPolicy=other
EOF

# override /usr/lib/systemd/system/bbb-html5-frontend@.service
mkdir /etc/systemd/system/bbb-html5-frontend@.service.d
cat <<EOF | tee /etc/systemd/system/bbb-html5-frontend@.service.d/override.conf
[Service]
CPUSchedulingPolicy=other
EOF

# override /usr/lib/systemd/system/bbb-html5-backend@.service
mkdir /etc/systemd/system/bbb-html5-backend@.service.d
cat <<EOF | tee /etc/systemd/system/bbb-html5-backend@.service.d/override.conf
[Service]
CPUSchedulingPolicy=other
EOF

systemctl daemon-reload
bbb-conf --restart
bbb-conf --status

(Thanks to this post.)

5.4. Update

To update the system, create a script update.sh with content like this:

#!/bin/bash -x

apt update
apt -y upgrade

cd $(dirname $0)
#wget -q https://ubuntu.bigbluebutton.org/bbb-install.sh
#chmod +x bbb-install.sh
./bbb-install.sh -g \
        -v bionic-240 \
        -s bbb.example.org \
        -e info@example.org \
        -c turn.example.com:1234abcd

bbb-conf --setip bbb.example.org

# Fix proxy_pass
sed -i /etc/bigbluebutton/nginx/sip.nginx \
    -e '/proxy_pass/ s/http:/https:/' \
    -e '/proxy_pass/ s/5066/7443/'

# Override /lib/systemd/system/freeswitch.service
mkdir -p /etc/systemd/system/freeswitch.service.d
cat <<EOF | tee /etc/systemd/system/freeswitch.service.d/override.conf
[Service]
CPUSchedulingPolicy=other
EOF

# override /usr/lib/systemd/system/bbb-html5-frontend@.service
mkdir -p /etc/systemd/system/bbb-html5-frontend@.service.d
cat <<EOF | tee /etc/systemd/system/bbb-html5-frontend@.service.d/override.conf
[Service]
CPUSchedulingPolicy=other
EOF

# override /usr/lib/systemd/system/bbb-html5-backend@.service
mkdir -p /etc/systemd/system/bbb-html5-backend@.service.d
cat <<EOF | tee /etc/systemd/system/bbb-html5-backend@.service.d/override.conf
[Service]
CPUSchedulingPolicy=other
EOF

systemctl daemon-reload
bbb-conf --restart
bbb-conf --status
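The sed command in the script above rewrites the proxy_pass line in /etc/bigbluebutton/nginx/sip.nginx from plain HTTP on port 5066 to HTTPS on port 7443. A quick illustration of what it does, applied to a sample line (the IP is made up):

```shell
# A sample proxy_pass line, as it might appear in sip.nginx
sample='proxy_pass http://10.7.7.1:5066;'

# Apply the same two substitutions as in the update script
fixed=$(echo "$sample" | sed -e '/proxy_pass/ s/http:/https:/' \
                             -e '/proxy_pass/ s/5066/7443/')
echo "$fixed"
# proxy_pass https://10.7.7.1:7443;
```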

6. Installing a TURN server in another container

In some network restricted sites or development environments, such as those behind NAT or a firewall that restricts outgoing UDP connections, users may be unable to make outgoing UDP connections to your BigBlueButton server.

The TURN protocol is designed to allow UDP-based communication flows like WebRTC to bypass NAT or firewalls by having the client connect to the TURN server, and then have the TURN server connect to the destination on their behalf.

In addition, the TURN server also implements the STUN protocol, which is used to allow direct UDP connections through certain types of firewalls that otherwise might not work.

Using a TURN server under your control improves the reliability of connections to BigBlueButton and also improves user privacy, since clients will no longer be sending IP address information to a public STUN server.

Because the TURN protocol is not CPU or memory intensive, and because the server needs to listen on port 443, it makes sense to give it another public IP and to install it in a container with a routed NIC.

6.1. Create the profile

We can copy and modify the profile for BBB:

lxc profile copy bbb turn
lxc profile ls
lxc profile edit turn

Change the public IP and the host_name. It should look like this:

config:
  security.nesting: "true"
  user.network-config: |
    version: 2
    ethernets:
        eth0:
            addresses:
            - 37.121.183.102/32    (1)
            nameservers:
                addresses:
                - 8.8.8.8
                search: []
            routes:
            -   to: 0.0.0.0/0
                via: 169.254.0.1
                on-link: true
description: Routed LXD profile
devices:
  eth0:
    host_name: veth-turn    (2)
    ipv4.address: 37.121.183.102    (1)
    nictype: routed
    parent: eth0
    type: nic
name: turn
used_by:
1 Another IP
2 Another host_name

The modifications are these:

config:
    ethernets:
        eth0:
            addresses:
            - 37.121.183.102/32

devices:
  eth0:
    host_name: veth-turn
    ipv4.address: 37.121.183.102

The setting security.nesting: "true" is not actually needed here, because we don’t need to run Docker inside this container, but it doesn’t harm.

6.2. Launch the container

lxc launch ubuntu:20.04 turn --profile default --profile turn
lxc list
lxc info turn

ip addr show veth-turn
ip ro | grep veth

lxc exec turn -- ip addr
lxc exec turn -- ip ro

6.3. Fix networking

firewall-cmd --permanent --zone=external --add-interface=veth-turn

firewall-cmd --permanent --direct --add-rule \
        ipv4 filter FORWARD 0 -o veth-turn -j ACCEPT
firewall-cmd --permanent --direct --add-rule \
        ipv6 filter FORWARD 0 -o veth-turn -j ACCEPT
firewall-cmd --permanent --direct --add-rule \
        ipv4 filter FORWARD 0 -i veth-turn -j ACCEPT
firewall-cmd --permanent --direct --add-rule \
        ipv6 filter FORWARD 0 -i veth-turn -j ACCEPT

firewall-cmd --reload

firewall-cmd --direct --get-all-rules
firewall-cmd --zone=trusted --list-all
iptables-save | grep veth

lxc exec turn -- ping 169.254.0.1
lxc exec turn -- ping 8.8.8.8

Use netcat as well to make sure that you can reach any TCP or UDP port inside the container.

6.4. Install coturn inside the container

lxc exec turn -- bash

wget -qO- https://ubuntu.bigbluebutton.org/bbb-install.sh \
    | bash -s -- -c turn.example.com:1234abcd -e info@example.com

Again, we are assuming that the domain turn.example.com is already set up properly in the DNS.

Getting an SSL cert

Usually the installation script gets a Let’s Encrypt SSL cert automatically; I am not sure why it sometimes fails for me. As a workaround, I have used DNS verification, with a command like this:

certbot certonly --manual \
    --preferred-challenges dns \
    -d turn.example.com \
    --email info@example.org \
    --agree-tos

After getting the SSL cert, run the installation script again.

Update the server

To update the system, create a script update.sh with content like this:

#!/bin/bash -x
apt update
apt -y upgrade
apt -y autoremove
#wget -q https://ubuntu.bigbluebutton.org/bbb-install.sh
#chmod +x bbb-install.sh
./bbb-install.sh -c turn.example.com:1234abcd -e info@example.com

6.5. Use it on the BBB server

Now reinstall BBB, adding the option -c turn.example.com:1234abcd to the installation command, like this:

lxc exec bbb -- bash

./bbb-install.sh -g -w \
        -v bionic-240 \
        -s bbb.example.org \
        -e info@example.org \
        -c turn.example.com:1234abcd

To check that everything works as expected, see: https://docs.bigbluebutton.org/admin/setup-turn-server.html#test-your-turn-server