Load Balancing

Use the load balancer to distribute your traffic across multiple servers and optimize performance.

The load balancer receives requests from clients and forwards each one to the most suitable server. This ensures that Enginsight runs smoothly even with a large number of incoming requests.

We recommend using the load balancer once you manage 500 hosts or more.

To use load balancing, three additional virtual machines must be provisioned:

  1. VM for the load balancer

  2. VM for the services

  3. VM for the 2nd app server

For smooth operation, we recommend using Nginx as a reverse proxy.

Preparation of the Virtual Machines

Load balancer VM

In the first step, prepare the VM for the load balancer. It handles the certificates and forwards your requests.

Make sure Docker is not installed on the VM running Nginx; this is required for smooth operation.

  1. Provide a VM for the load balancer.

  2. Install Nginx using sudo apt install nginx, then take the following configuration and adjust it to your environment.

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
 
upstream apiServers {
    server <ipApiServer1>:8080;
    server <ipApiServer2>:8080;
}
 
upstream appServers {
    server <ipAppServer1>:80;
    server <ipAppServer2>:80;
}
 
 
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
 
    ssl_stapling on;
    ssl_stapling_verify on;
    server_name ...;
 
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
 
    ssl_dhparam /etc/nginx/dhparam.pem;
    ssl_certificate /etc/letsencrypt/live/.../fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/.../privkey.pem;
 
    client_max_body_size 200m;
 
    location / {
        proxy_pass http://apiServers;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto "https";
        proxy_set_header X-Forwarded-Ssl "on";
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
 
 
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
 
    ssl_stapling on;
    ssl_stapling_verify on;
    server_name ...;
 
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "ECDHE-RSA-AES256-GCM-SHA384";
    ssl_ecdh_curve secp384r1;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;
    resolver 8.8.8.8 8.8.4.4 valid=300s;
    resolver_timeout 5s;
 
    ssl_dhparam /etc/nginx/dhparam.pem;
    ssl_certificate /etc/letsencrypt/live/.../fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/.../privkey.pem;
 
    client_max_body_size 200m;
 
    location / {
        proxy_pass http://appServers;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto "https";
        proxy_set_header X-Forwarded-Ssl "on";
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
  3. Now add the certificates and adjust the paths in the configuration accordingly. Example: if the certificates are located in /etc/nginx/ssl, set ssl_certificate /etc/nginx/ssl/fullchain.pem; and ssl_certificate_key /etc/nginx/ssl/privkey.pem;

If you have issued your certificates to IP addresses, make sure that the respective certificates are also issued to the correct IP addresses.
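Instead of editing the configuration by hand, the upstream placeholders can be substituted with sed. A minimal sketch with example IPs and a scratch file path (both are assumptions; point it at your real config under /etc/nginx in practice):

```shell
# Sketch: substitute the upstream placeholders in the Nginx config.
# The IPs and the file path below are examples, not real values.
CONF=/tmp/enginsight-lb.conf
cat > "$CONF" <<'EOF'
upstream apiServers {
    server <ipApiServer1>:8080;
    server <ipApiServer2>:8080;
}
EOF
# Replace both placeholders in one pass.
sed -i 's/<ipApiServer1>/10.0.0.11/; s/<ipApiServer2>/10.0.0.12/' "$CONF"
grep 'server ' "$CONF"
```

After editing the real configuration, validate it with nginx -t before reloading Nginx.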

Services VM

This VM is to be provisioned exclusively for running the following services:

    • sentinel-m3: Controls alerts and manages assigned notifications.

    • reporter-m4: Provides the Enginsight platform with up-to-date vulnerability data (CVEs) and distributes it to the modules.

    • profiler-m22: Calculates the normal curve of the machine learning metrics.

    • anomalies-m28: Compares the normal curve of the machine learning metrics with the measured data to detect anomalies.

    • scheduler-m29: Triggers scheduled, automated actions, for example plugins or audits.

    • updater-m34: Manages and updates configuration checklists.

    • generator-m35: Generates PDF reports, e.g. for hosts, endpoints and penetration tests.

    • historian-m38: Summarizes measured data to display it over time.

    • themis-m43: Acts as an integrity manager and checks data for correctness and currency.

Be sure to note that the services now run on a separate VM and take this into account when configuring your app servers.

  1. Install Docker on your Services VM.

  2. Now adjust the docker-compose.yml under /opt/enginsight/enterprise as shown below.


version: "3"
services:
  mongodb-cves:
    image: mongo:4
    networks:
    - mongodb-cves
    restart: always
    volumes:
    - mongodb-cves-volume:/data/db

  sentinel-m3:
    image: registry.enginsight.com/enginsight/sentinel-m3:2.22.37
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/sentinel-m3/config.json"

  reporter-m4:
    image: registry.enginsight.com/enginsight/reporter-m4:2.4.47
    networks:
    - mongodb-cves
    depends_on:
    - mongodb-cves
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/reporter-m4/config.json"

  profiler-m22:
    image: registry.enginsight.com/enginsight/profiler-m22:2.2.9
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/profiler-m22/config.json"

  anomalies-m28:
    image: registry.enginsight.com/enginsight/anomalies-m28:2.2.2
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/anomalies-m28/config.json"

  scheduler-m29:
    image: registry.enginsight.com/enginsight/scheduler-m29:1.8.76
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/scheduler-m29/config.json"

  updater-m34:
    image: registry.enginsight.com/enginsight/updater-m34:2.0.4
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/updater-m34/config.json"

  generator-m35:
    image: registry.enginsight.com/enginsight/generator-m35:1.14.2
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/generator-m35/config.json"

  historian-m38:
    image: registry.enginsight.com/enginsight/historian-m38:2.1.58
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/historian-m38/config.json"

  themis-m43:
    image: registry.enginsight.com/enginsight/themis-m43:1.18.20
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/themis-m43/config.json"

networks:
  mongodb-cves:

volumes:
  mongodb-cves-volume:

Make sure that you always use the latest version tags.
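To see at a glance which versions a compose file currently pins, you can grep the image lines. A sketch against a throwaway sample file (in practice, point it at /opt/enginsight/enterprise/docker-compose.yml):

```shell
# Sketch: list the pinned version tags in a compose file.
# The temp file below stands in for the real docker-compose.yml.
COMPOSE=/tmp/docker-compose-sample.yml
cat > "$COMPOSE" <<'EOF'
services:
  sentinel-m3:
    image: registry.enginsight.com/enginsight/sentinel-m3:2.22.37
  reporter-m4:
    image: registry.enginsight.com/enginsight/reporter-m4:2.4.47
EOF
# The tag is the last colon-separated field of each image line.
grep 'image:' "$COMPOSE" | awk -F: '{print $NF}'
```

Compare the printed tags against the versions Enginsight currently ships and update the file where they differ.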

  3. Store a mail server configuration on the Services VM and make sure that the mail server configuration is removed from the app servers.

Database VM

  1. Secure your database with iptables.

Adjust the iptables to block any connections from outside to the database. This step results in only the application being able to access MongoDB and prevents unauthorized access.

  2. Add new rules for the 2nd app server and the server running the services. To do this, open the iptables rules file:

nano /etc/iptables/rules.v4 
  3. Replace <APP IP> with the application server IP reachable from the database, replace <DB IP> with the database server IP reachable from the application, and add rules for Redis as shown below.

 -A INPUT -p tcp -m tcp --dport 27017 -s 127.0.0.1 -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 27017 -s <APP IP> -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 6379 -s <APP IP> -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 27017 -s <APP2 IP> -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 6379 -s <APP2 IP> -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 27017 -s <Services IP> -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 6379 -s <Services IP> -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 27017 -s <DB IP> -j ACCEPT
 -A INPUT -p tcp -m tcp --dport 27017 -j DROP
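Since the same two ACCEPT rules repeat for every trusted host, they can also be generated with a small loop. A sketch with placeholder IPs (substitute the real IPs of your app servers and the services VM):

```shell
# Sketch: generate the ACCEPT rules for MongoDB (27017) and Redis (6379)
# for each trusted server. The IPs below are placeholders.
TRUSTED_IPS="10.0.0.21 10.0.0.22 10.0.0.23"   # app server 1, app server 2, services VM
for ip in $TRUSTED_IPS; do
  for port in 27017 6379; do
    echo "-A INPUT -p tcp -m tcp --dport ${port} -s ${ip} -j ACCEPT"
  done
done > /tmp/generated-rules.txt
cat /tmp/generated-rules.txt
```

Paste the generated lines into /etc/iptables/rules.v4 before the final DROP rule for port 27017, since iptables evaluates rules in order.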
  4. Once all changes are made, save your settings persistently by installing iptables-persistent.

apt-get install -y iptables-persistent
  5. Now add Redis by installing the Redis server.

apt install redis-server
  6. Adjust the configuration.

nano /etc/redis/redis.conf
bind 0.0.0.0
  7. Save the file and restart Redis afterwards.

service redis restart
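The bind directive can be sanity-checked before restarting. A sketch against a throwaway file (in practice, grep /etc/redis/redis.conf directly):

```shell
# Sketch: verify the bind directive in redis.conf.
# The temp file below stands in for /etc/redis/redis.conf.
REDIS_CONF=/tmp/redis-sample.conf
echo "bind 0.0.0.0" > "$REDIS_CONF"
if grep -q '^bind 0\.0\.0\.0' "$REDIS_CONF"; then
  echo "Redis will listen on all interfaces"
fi
```

Note that bind 0.0.0.0 makes Redis reachable on every interface, which is exactly why the iptables rules above must restrict port 6379 to the app and services servers.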

App-Server VMs

Prepare two virtual machines for the app servers. These VMs ensure that the user interface is accessible from all servers and the services can run in parallel.

If you have previously used Enginsight without a load balancer, you can use your existing app server as one of the two required app server VMs!

This greatly simplifies the load balancer setup and allows you to quickly proceed with the implementation.

  1. Install Docker on the two app servers.

To save time and effort, we recommend that you either use our ISO file or clone the first app server to create the second VM for the app server.

  2. Modify the docker-compose.yml as indicated below.

version: "3"
services:
  ui-m1:
    image: registry.enginsight.com/enginsight/ui-m1:3.5.10
    ports:
    - "80:80"
    restart: always
    volumes:
    - "./conf/ui-m1/environment.js.production:/opt/enginsight/ui-m1/config/environment.js"

  server-m2:
    image: registry.enginsight.com/enginsight/server-m2:3.5.426
    ports:
    - "8080:8080"
    restart: always
    volumes:
    - "./conf/services/config.json.production:/etc/enginsight/server-m2/config.json"

Here, too, make sure that the version numbers at the end of the image lines are current. To do this, either adjust the docker-compose.yml and delete all entries that are not required, or adjust the versions yourself.

  3. If you have cloned the app server, disable Nginx on both app servers with the commands systemctl stop nginx and systemctl disable nginx.

  4. Copy the contents of the DEFAULT_JWT_SECRET.conf file under /opt/enginsight/enterprise/conf and paste it into the same file on the 2nd app server. This ensures that the file is identical on both servers.

  5. Now check the connection to Redis. To do this, log into the container and establish a connection:

    • Check Redis Connection

      1. docker ps

      2. docker exec -it <ID of the redis:4 container> /bin/sh

      3. redis-cli -h <IPDB>

  6. Now check the Docker logs of server-m2 to verify that the application can connect to Redis on the database server.

Start load balancing

  1. Change the DNS entry.

Make sure that the URLs of the app and the API now point to the load balancer and no longer to the app server.

  2. Once you have prepared all VMs, you can run setup.sh.

Note that the Redis URL must be changed to redis://<DB IP>:6379, where <DB IP> is the IP of your database server.
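The Redis URL is simply the redis:// scheme plus the database VM's IP and port 6379. A sketch with a placeholder IP:

```shell
# Sketch: build the Redis URL for setup.sh from the database VM's IP.
# DB_IP is a placeholder; use the IP the app servers reach the database on.
DB_IP=10.0.0.30
REDIS_URL="redis://${DB_IP}:6379"
echo "$REDIS_URL" | tee /tmp/redis-url.txt
```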

Now it's time to check if your application will continue to work without any problems in case of a single server failure.

  1. To do this, run docker-compose down on App Server 1 and verify that App Server 2 is still receiving data and all hosts are still active.

  2. Restart all Docker containers with docker-compose up -d

  3. Repeat steps 1 and 2 for App Server 2.

Note that the update script must now always be run on all three servers to ensure that all servers are up to date and no incompatibilities occur.
