UTMStack

UTMStack Installation Manual

A guide for Docker Swarm deployment

1.1 Introduction

UTMStack functionality involves a number of running services, some of them quite complex to install and configure. Luckily, most of the product deployment process is unattended, leaving minimal room for human error; even so, it's healthy to double-check every step to be sure.

1.2 Purpose

This document is provided as a guide for the software installation phase, in order to reduce manipulation mistakes, errors, misinterpretations, and time lost looking for the correct information.

WARNING: An inadequate, incomplete, or nonexistent installation of any service may lead to the failure of the UTMStack deployment and, therefore, prevent it from being used properly.

2 Installation Manual

We assume you are not a specialist in every service running in the UTMStack deployment, so this manual tries to help you avoid errors during the installation and later use of the applications. Please check the Contact Information list at the end of this document if necessary.

2.1 Pre-requisites

In this section, you can find the list of prerequisites that must be met before the installation can begin. Keep in mind that if fewer resources are provided, the services won't deploy properly.

2.1.1 Hardware

VERY IMPORTANT! The designated server must be a physical server or a virtual machine (VM), NEVER a container (CT), especially if you rent a Virtual Private Server (VPS).

  • CPU: 8 cores (minimum)
  • RAM: 32 GB (minimum)
  • SWAP: 32 GB (minimum)

On GNU/Linux systems, the SWAP partition is VERY IMPORTANT even if it sits unused, as a safety net in case of extreme processing loads or failures.
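Before starting, you can quickly verify the resources your server reports; for example:

$ nproc        # CPU cores
$ free -h      # RAM and SWAP sizes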

2.1.2 Operating System

We strongly recommend Ubuntu 18.04 LTS, since it has been widely tested with the desired results; that said, the installer should work just fine with any modern release.

Check the configuration of your Ubuntu repositories. If you run the next command, you should get a similar output (depending, of course, on the Ubuntu release: bionic for Ubuntu 18.04, focal for Ubuntu 20.04). Make sure this is properly set.

$ cat /etc/apt/sources.list
deb http://us.archive.ubuntu.com/ubuntu bionic main restricted universe multiverse
deb http://us.archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse
deb http://us.archive.ubuntu.com/ubuntu bionic-security main restricted universe multiverse
deb http://us.archive.ubuntu.com/ubuntu bionic-backports main restricted universe multiverse

Once you are done with the repositories, you need to update the package index.

$ apt update

2.1.3 Storage

You have two different approaches to follow, depending on how your data is located: one with a single big disk, and another with three separate hard drives. The first is simple but dangerous: if your event data consumes all the space on your disk, your system won't be easy to bring back to a stable condition, since the '/' partition will be full and unable to receive another bit, which can bring unexpected results. The second option, individual disk drives, is what common sense dictates as good practice. It is what we recommend, and the one explained at length here.

2.1.3.1 Three separate hard drives

Let us make an aside in this section about the hardware needs. It is wise to have a "think ahead" methodology when making decisions about storage. You will deal with these 3 kinds of storage pieces, each one different regarding use, speed, size, and location.

  1. ROOT partition

    • Location: '/'
    • Size: 100 Gigabytes (recommended)
    • Speed: It determines the speed of your operating system, so use the best you can (SSD disks if you can afford them); it also works with regular hard drives, just slower.
    • Use: It's where your operating system rests, including all the packages you could possibly, and will surely, need.
  2. EVENT DATA partition

    • Location: '/utmstack/data' (it's a mount point; the folder /utmstack must exist or be created beforehand)
    • Size: Depends on the results of the Resource Calculator webpage; e.g. just a few servers will easily reach 1 Terabyte of data in a year.
    • Speed: It determines the speed of your data indexing, so use the best you can (SSD disks if you can afford them); it also works with regular hard drives, just slower.
    • Use: It's where your event data rests; all the logs your system is capturing go there.
  3. SNAPSHOTS partition

    • Location: '/utmstack/repo' (it's a mount point; the folder /utmstack must exist or be created beforehand)
    • Size: Depends on the results of the Resource Calculator webpage. Here you will save your snapshots.
    • Speed: It determines the speed of your snapshots, so this piece of storage doesn't need to be that fast, since it's just for backups.
    • Use: It's where your data snapshots (backups) rest. Once a snapshot containing a subset of your data (grouped in daily indices) is done (and double-checked), you can delete those indices from your active data. It's the place where the oldest event data is kept (months and years as they pass).

There are VERY GOOD reasons to have those "partitions" well defined and separated:

  • root partition (System): You want your system always stable, even if the event data and snapshot storage (2 and 3) are completely full and unable to hold a single additional bit.
  • /utmstack/data (Events): You want this piece of storage spacious, fast, and, of course, free from interference caused by system and backup work (zero delay).
  • /utmstack/repo (Snapshots): You want this partition always inside your server and always with room for snapshots: first, because you will be able to make backups of your indices even if the event data storage (2) is full; second, because cloud storage like S3 is usually slow, and it's wise to save the data on your server first. Then, upload the snapshots to a storage bucket (like S3 or anything you like) in order to... yes, you guessed right, make room on the snapshots storage (3) for more backups.

Summarizing: if you rent a VPS, just be sure to consider these three kinds of storage needs in your budget: system (fast), data (fast if possible), and snapshots (fast is optional). Usually, on the same web page where you buy the service and select the system partition size, you can also add the other two pieces of storage with the desired (affordable) speed and size. If you have a physical server, the needs remain the same, but you handle them in your own way.

2.1.3.2 In case of a single big disk

Another possible case is that you already have a fat root partition '/' with enough disk space to handle the operating system and your data. If you are in the straightforward case of having a single but really big disk (let's say 1 TB or 2 TB), and for some reason you don't plan to get any other piece of storage, you can just (AND THIS IS NOT RECOMMENDED) create the folders '/utmstack/data' and '/utmstack/repo' on your existing server. Then skip the next section and continue with "2.1.4 DNS Entries". The installer should work just fine.

$ mkdir -p /utmstack/data /utmstack/repo

2.1.3.3 Setting up the extra storage

Assuming you decided to go for the three separate pieces of storage, let's begin by checking what storage we have on our server. Depending on the sizes you got from your VPS provider, or the hard drives installed in your physical server, the output should be something similar to this:

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda       8:0    0  100G  0 disk
├─xvda1    8:2    0    1G  0 part /boot
├─xvda2    8:3    0   32G  0 part [SWAP]
└─xvda3    8:4    0   67G  0 part /
xvdb       8:16   0    2T  0 disk
xvdc       8:32   0    3T  0 disk

As you can see, you have 2 extra disks: xvdb and xvdc. They need a little work to make them functional; think of them as blank, with no partitions and no format whatsoever.

Creating the main partitions:

$ parted /dev/xvdb mklabel gpt
$ parted /dev/xvdb mkpart primary ext4 34s 100%

$ parted /dev/xvdc mklabel gpt
$ parted /dev/xvdc mkpart primary ext4 34s 100%

Verify that the partitions are recognized by the system, with a quick visual check:

$ partprobe
$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda       8:0    0  100G  0 disk
├─xvda1    8:2    0    1G  0 part /boot
├─xvda2    8:3    0   32G  0 part [SWAP]
└─xvda3    8:4    0   67G  0 part /
xvdb       8:16   0    2T  0 disk
└─xvdb1    8:17   0    2T  0 part
xvdc       8:32   0    3T  0 disk
└─xvdc1    8:33   0    3T  0 part

Now you can see the newly created partitions; we still need a few more steps to leave them in the right state.

Formatting to EXT4, the Linux filesystem format (depending on the kind of disk, this may already have been done; see the note below):

$ mkfs.ext4 /dev/xvdb1
$ mkfs.ext4 /dev/xvdc1

If you get a warning message like this one:

mke2fs 1.44.1 (24-Mar-2018)
/dev/xvdb1 contains a ext4 file system
        created on Sun Aug 23 03:24:21 2020
Proceed anyway? (y,N)

It means that during the partitioning process the command 'parted' accepted the format we requested (mkpart primary ext4) and formatted the partition in advance. Therefore, that particular disk is already in EXT4 format and does not need to be formatted again (not all types of disks work that way). Don't worry: running it again will not break anything, it will just be a waste of time.

Normally, both outputs should look similar to this:

mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 805306359 4k blocks and 201326592 inodes
Filesystem UUID: 5f8d0c2f-3907-45cf-a96f-4162f6b8c33f
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

Checking that formatting went well:

$ e2fsck -f /dev/xvdb1
$ e2fsck -f /dev/xvdc1

This time you should see something similar to this in your outputs:

e2fsck 1.44.1 (24-Mar-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/xvdc1: 11/6553600 files (0.0% non-contiguous), 557848/26214391 blocks

Finally, we will create the folders that will serve as mount points and then mount the disks on them:

$ mkdir -p /utmstack/data /utmstack/repo
$ mount /dev/xvdb1 /utmstack/data
$ mount /dev/xvdc1 /utmstack/repo

After all this work, you can check that the two extra disks are properly partitioned, formatted, and mounted:

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda       8:0    0  100G  0 disk
├─xvda1    8:2    0    1G  0 part /boot
├─xvda2    8:3    0   32G  0 part [SWAP]
└─xvda3    8:4    0   67G  0 part /
xvdb       8:16   0    2T  0 disk
└─xvdb1    8:17   0    2T  0 part /utmstack/data
xvdc       8:32   0    3T  0 disk
└─xvdc1    8:33   0    3T  0 part /utmstack/repo
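Optionally, to keep these mounts across reboots, you may also add them to /etc/fstab; a minimal sketch, assuming the device names from the example above (using UUIDs from 'blkid' is more robust if device names can change):

$ blkid /dev/xvdb1 /dev/xvdc1
$ cat >> /etc/fstab << 'EOF'
/dev/xvdb1  /utmstack/data  ext4  defaults  0 2
/dev/xvdc1  /utmstack/repo  ext4  defaults  0 2
EOF
$ mount -a    # should return silently if the entries are correct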

2.1.4 DNS Entries

You need to add a couple of A records in your DNS: one pointing at "www.utmclient.utmstack.com" (it doesn't have to be precisely .utmstack.com, it could perfectly well be something else) and the other one at "utmclient.utmstack.com". These entries will provide a doorway for all the services deployed after the installation.
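A sketch of what those records could look like in a typical DNS zone file (203.0.113.10 is a placeholder for your server's public IP, and the domain is the example used throughout this manual):

utmclient.utmstack.com.        IN  A  203.0.113.10
www.utmclient.utmstack.com.    IN  A  203.0.113.10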

2.2 Pre-installation Tasks

First of all, you must have the UTMStack installer, hosted on GitHub: UTMStack/utmstackInstallerBin. Click on the green button named "Code", then "Download ZIP", and place the file somewhere on your server file system, e.g. under the "/opt" folder.
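If you prefer the command line, you could also fetch the archive directly; a sketch, assuming GitHub's standard archive URL for that repository:

$ cd /opt
$ wget -O utmstackInstallerBin-master.zip https://github.com/UTMStack/utmstackInstallerBin/archive/master.zip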

IMPORTANT: Make sure you are logged in as the root user and that the apt services are not busy.

$ sudo su
[sudo] password for USERNAME:

You are now logged in as the root user and have the permissions to go through the installation process.

$ cd /opt

Unpack the zipped file "utmstackInstallerBin-master.zip".

$ unzip utmstackInstallerBin-master.zip

Archive:  utmstackInstallerBin-master.zip
dbc841608bda063d68ab99fc1b9ca2f7d0a6c9b3
   creating: utmstackInstallerBin-master/
 extracting: utmstackInstallerBin-master/.gitignore
  inflating: utmstackInstallerBin-master/README.md
   creating: utmstackInstallerBin-master/assets/
  inflating: utmstackInstallerBin-master/assets/set_cluster_settings_cerebro.png
  inflating: utmstackInstallerBin-master/assets/set_cluster_settings_kibana.png
 extracting: utmstackInstallerBin-master/assets/warning-4-64.png
   creating: utmstackInstallerBin-master/config/
  inflating: utmstackInstallerBin-master/config/dynamic.yml
  inflating: utmstackInstallerBin-master/config/traefik.yml
  inflating: utmstackInstallerBin-master/config/utmstack_application-prod.yml
  inflating: utmstackInstallerBin-master/config/utmstack_compose_biggest.yml
  inflating: utmstackInstallerBin-master/config/utmstack_compose_medium.yml
  inflating: utmstackInstallerBin-master/config/utmstack_compose_small.yml
   creating: utmstackInstallerBin-master/templates/
  inflating: utmstackInstallerBin-master/templates/cluster_settings.json
  inflating: utmstackInstallerBin-master/templates/index-lifecycle-policy-quarterly.json
  inflating: utmstackInstallerBin-master/templates/template-all-index-client-code.json
  inflating: utmstackInstallerBin-master/utmstackInstallerBin
  
$ cd utmstackInstallerBin-master

(Mandatory) All the commands for installation and deployment should be run from within this location.

2.3 Installation Procedures

You need to make sure the script "utmstackInstallerBin" is executable, and then run it from the location where the installer folder rests; just follow the instructions.

$ chmod a+x utmstackInstallerBin
$ ./utmstackInstallerBin

You will see an output like this (depending on your server resources):

# [UTMSTACK]: ROOT user...Ok.
# [UTMSTACK]: Your resources, RAM: 65968 MB, CPU: 12 UNITS
# [UTMSTACK]: According to that, you have enough to install:
1) Small Server deployment (at least RAM:32Gb, CPU:8 units)
2) Medium Server deployment (at least RAM:64Gb, CPU:12 units)
(Q/q)uit the installation
You answer is: 1
# [UTMSTACK]: Checking: /utmstack/data ...Ok.
# [UTMSTACK]: Checking: /utmstack/repo ...Ok.
# [UTMSTACK]: Folder: /utmstack/data/utm_es-master-1 Created ...Ok.
# [UTMSTACK]: Folder: /utmstack/data/utm_es-master-2 Created ...Ok.
# [UTMSTACK]: Folder: /utmstack/data/utm_es-master-3 Created ...Ok.
# [UTMSTACK]: Folder: /utmstack/data/utm_es-data-1 Created ...Ok.
# [UTMSTACK]: Folder: /utmstack/data/utm_es-data-2 Created ...Ok.
# [UTMSTACK]: Folder: /utmstack/data/utm_es-data-3 Created ...Ok.
# [UTMSTACK]: Folder: /utmstack/data/utm_es-data-4 Created ...Ok.
# [UTMSTACK]: Folder: /utmstack/data/utm_es-data-5 Created ...Ok.
# [UTMSTACK]: Setting owner to: /utmstack ...Ok.
# [UTMSTACK]: Setting owner to: /utmstack/data ...Ok.
# [UTMSTACK]: Setting owner to: /utmstack/data/utm_es-data-1 ...Ok.
# [UTMSTACK]: Setting owner to: /utmstack/data/utm_es-data-2 ...Ok.
# [UTMSTACK]: Setting owner to: /utmstack/data/utm_es-data-3 ...Ok.
# [UTMSTACK]: Setting owner to: /utmstack/data/utm_es-data-4 ...Ok.
# [UTMSTACK]: Setting owner to: /utmstack/data/utm_es-data-5 ...Ok.
# [UTMSTACK]: Setting owner to: /utmstack/data/utm_es-master-1 ...Ok.
# [UTMSTACK]: Setting owner to: /utmstack/data/utm_es-master-2 ...Ok.
# [UTMSTACK]: Setting owner to: /utmstack/data/utm_es-master-3 ...Ok.
# [UTMSTACK]: Setting owner to: /utmstack/repo ...Ok.
# [UTMSTACK]: Enter credentials for administrator user.
YOUR Admin Username: administrator
YOUR Admin Password: YoUrAdMiNpAsS
YOUR Domain FQDN is: utmclient.utmstack.com
# [UTMSTACK]: VERIFY YOUR INPUTS...
# [UTMSTACK]: Admin Username:  administrator
# [UTMSTACK]: Admin Password:  YoUrAdMiNpAsS
# [UTMSTACK]: Domain FQDN is:  utmclient.utmstack.com
# [UTMSTACK]: Is this correct to proceed? (Y)es|(N)o|(Q)uit:
(Y/y)es to continue
(N/n)o to repeat the INPUTS
(Q/q)uit the installation
You answer is: y
# [UTMSTACK]: The installation proceeds ...

These credentials will be used to create access for all services (Traefik, OpenVAS, and PostgreSQL), except for the Portainer manager, which requests its own credentials at the first login.

If you make a mistake in your inputs, just hit n (or N) on the verification question to reenter data.

2.4 Technical Tests

First we check the whole stack: images (pulled from Docker Hub), services (some have 3 or 5 replicas), containers (these are the actual running processes), and volumes (permanent storage).

2.4.1 Images

$ docker images

REPOSITORY TAG IMAGE ID CREATED SIZE
utmstack/tomcat-omp 9-jdk8 09c98fe99a79 16 hours ago 595MB
traefik 2.2 9b9d2d696ad3 18 hours ago 78.4MB
portainer/portainer latest 62771b0b9b09 6 days ago 79.1MB
securecompliance/gvm latest 11a37d7c6e0f 12 days ago 2.41GB
postgres alpine 17150f4321a3 4 weeks ago 157MB
lmenezes/cerebro latest 3553b54da0e7 5 weeks ago 268MB
logstash 7.8.0 01979bbd06c9 6 weeks ago 789MB
kibana 7.8.0 df0a0da46dd1 6 weeks ago 1.29GB
elasticsearch 7.8.0 121454ddad72 6 weeks ago 810MB

2.4.2 Services

$ docker service ls

ID NAME MODE REPLICAS IMAGE
l93jwc7l1u41 utm_cerebro replicated 1/1 lmenezes/cerebro:latest
1p5oogfr5vsy utm_es-data replicated 5/5 elasticsearch:7.8.0
v2zn6otn0lnr utm_es-elastic replicated 1/1 elasticsearch:7.8.0
bhhot1wlyc6c utm_es-master replicated 3/3 elasticsearch:7.8.0
k0q33hgqfvsh utm_kibana replicated 1/1 kibana:7.8.0
nw47i3lmogex utm_logstash replicated 1/1 logstash:7.8.0
0fcify3przs4 utm_openvas replicated 1/1 securecompliance/gvm:latest
mna01erx1vgo utm_portainer replicated 1/1 portainer/portainer:latest
xhtu44iqduxc utm_postgres replicated 1/1 postgres:alpine
szhwh1rs00vi utm_tomcat replicated 1/1 tomcat:9-jdk8
rbvtuezvhvv6 utm_traefik replicated 1/1 traefik:2.2

2.4.3 Containers

$ docker container ls

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8a78959fea15 elasticsearch:7.8.0 ... ... ... 9200/tcp, 9300/tcp utm_es-data.3.fo2rp08gflmoet0zxlczikxr5
25fe5c3e3805 elasticsearch:7.8.0 ... ... ... 9200/tcp, 9300/tcp utm_es-data.5.9to1m0fvcq58ck3krpp16vdzn
800d9305065d elasticsearch:7.8.0 ... ... ... 9200/tcp, 9300/tcp utm_es-data.4.i5cuild0ah649wcv1porxmden
3f0d5918d804 elasticsearch:7.8.0 ... ... ... 9200/tcp, 9300/tcp utm_es-data.1.d959lnz9v780xcg3u662e6nx2
d58a3c7fb009 elasticsearch:7.8.0 ... ... ... 9200/tcp, 9300/tcp utm_es-data.2.o2jozhqf0idcbfhlebilvhp2e
05e30be3b854 elasticsearch:7.8.0 ... ... ... 9200/tcp, 9300/tcp utm_es-elastic.1.y4p9epd4ym0d1vryxjm1nt72o
01a2c14d8c6b elasticsearch:7.8.0 ... ... ... 9200/tcp, 9300/tcp utm_es-master.3.uvflfxcs480kkm69pbio43man
920c8c9279ca elasticsearch:7.8.0 ... ... ... 9200/tcp, 9300/tcp utm_es-master.2.1imhlgyf57gh2gvc3htsy2eyz
fa52eed5babb elasticsearch:7.8.0 ... ... ... 9200/tcp, 9300/tcp utm_es-master.1.6qykjgm7nplx9drpqki1w7l88
ae271d3eece5 kibana:7.8.0 ... ... ... 5601/tcp utm_kibana.1.69ggbm7rthlbpk3k5y8w1grp9
d4d8b4216f90 lmenezes/cerebro:latest ... ... ...   utm_cerebro.1.yg1j94nbnka82l3fg105neeax
9129a5fd9532 logstash:7.8.0 ... ... ... 5044/tcp, 9600/tcp utm_logstash.1.na5h4pogiqwdq9qvjzhikf9hq
8d549c6c8e21 portainer/portainer:latest ... ... ... 9000/tcp utm_portainer.1.sfpdvjppeia4wzpw7wvlu2yl8
71b33ac6b410 postgres:alpine ... ... ... 5432/tcp utm_postgres.1.w5ahup8yboprqsk7xzp1jzk33
bd3d5340e721 securecompliance/gvm:latest ... ... ...   utm_openvas.1.j9trmvghhqm4xvdl1fihvswy9
1f49cba465b2 utmstack/tomcat-omp:9-jdk8 ... ... ... 8080/tcp utm_tomcat.1.ybvq46hu2w471kh5bpuva96oc
f460a0e30f74 traefik:2.3 ... ... ... 80/tcp utm_traefik.1.23jfg32weyv58jk9t3rsoy7v6

2.4.3.1 Checking container's IP address

To peek at the virtual network configuration of any container, you can use this command:

$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <CONTAINER ID>

Example using a real container ID

$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' f460a0e30f74

In case you want to check the whole network use the command:

$ docker inspect net_utmstack

or

$ docker inspect net_utmstack --format '{{range .Containers}}{{println}}ContainerName: {{.Name}} --- IPv4Address: {{.IPv4Address}}{{end}}'

2.4.4 Volumes

$ docker volume ls

DRIVER VOLUME NAME DESCRIPTION
local utm_es-data-1 Data Node Volume
local utm_es-data-2 Data Node Volume
local utm_es-data-3 Data Node Volume
local utm_es-data-4 Data Node Volume
local utm_es-data-5 Data Node Volume
local utm_es-master-1 Master Node Volume
local utm_es-master-2 Master Node Volume
local utm_es-master-3 Master Node Volume
local utm_portainer_data Portainer Manager Volume
local vol_elastic_repo Elasticsearch Snapshots & Repositories
local vol_gvm_data Openvas Data Volume
local vol_postgres_data PostgreSQL Data Volume
local vol_tomcat_conf Tomcat Config Volume
local vol_tomcat_webapps Tomcat Deployments Volume
local vol_traefik_config Traefik Certificates Volume

2.5 Post-installation

Once the UTMStack services are deployed in the Docker Swarm environment, a few things must be tuned after the unattended part to complete the installation process. You should have at hand the two files that, combined, make up the "visible" application: "utmstack.war" and "utm-stack.zip".

2.5.1 Securing Web

For any kind of SSL provider, the certificate type must be WILDCARD to cover all your web pages; check your DNS and SSL providers to get one with "*.utmclient.utmstack.com" in the CN section. After that, just copy the files to Traefik's config folder. You can set your certificate at any time; just copy the files with the right names to the designated folder.

This part is not mandatory. If you don't provide a valid certificate, the Traefik service will use an internal one.

No matter where or how you get your SSL support (self-signed, free from Let's Encrypt, or any paid service), just get a wildcard certificate, rename your_own_cert.pem and your_own_key.key to wildcard.pem and wildcard.key, copy both to Traefik's configuration folder, and update the service to commit the changes.

$ cp /path/to/file/your_own_cert.pem /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.pem
$ cp /path/to/file/your_own_key.key /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.key
$ docker service update utm_traefik

utm_traefik
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged

Now you can test the web pages; everything should be updated.

2.5.1.1 Let's Encrypt Certbot

Certbot is a FREE open-source software tool for automatically using Let's Encrypt certificates on manually administered websites to enable HTTPS. If you are ready for this option, install the package and get the certificates using the next commands:

$ apt install certbot
$ certbot certonly --manual --preferred-challenges=dns --email enterprise@yourdomain.com --server https://acme-v02.api.letsencrypt.org/directory --agree-tos -d "*.utmclient.utmstack.com"

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator manual, Installer None
Obtaining a new certificate
Performing the following challenges:
dns-01 challenge for utmclient.utmstack.com

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NOTE: The IP of this machine will be publicly logged as having requested this
certificate. If you're running certbot in manual mode on a machine that is not
your server, please ensure you're okay with that.

Are you OK with your IP being logged?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: y

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please deploy a DNS TXT record under the name
_acme-challenge.utmclient.utmstack.com with the following value:

6exwW66JrrSgWErhdqzXVg4CzYJWuEWFEnLoyDH9_QM

Before continuing, verify the record is deployed.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Press Enter to Continue

This part is pretty straightforward: Certbot needs you to add a verification key in your DNS. Simply add a DNS TXT record with the name "_acme-challenge.utmclient.utmstack.com" (use your real domain) containing the indicated value; in the example it is "6exwW66JrrSgWErhdqzXVg4CzYJWuEWFEnLoyDH9_QM", but use the one you got from your own certbot execution. This is a one-time configuration, but it is needed to keep going. Hit ENTER once the work on the DNS is done.
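Before pressing ENTER, you can check that the TXT record is already visible from the outside; for example:

$ dig +short TXT _acme-challenge.utmclient.utmstack.com
"6exwW66JrrSgWErhdqzXVg4CzYJWuEWFEnLoyDH9_QM"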

Waiting for verification...
Cleaning up challenges

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/utmclient.utmstack.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/utmclient.utmstack.com/privkey.pem
   Your cert will expire on 2020-11-11. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

Now you just need to copy the generated files, renamed, to Traefik's permanent volume; use the next commands:

$ cp /etc/letsencrypt/live/utmclient.utmstack.com/fullchain.pem /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.pem
$ cp /etc/letsencrypt/live/utmclient.utmstack.com/privkey.pem /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.key
$ docker service update utm_traefik

utm_traefik
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged

2.5.1.2 Testing with a self-signed certificate

This section is just a test if you want to try something quickly. First, generate an RSA private key; use a 2048-bit or, even better, a 4096-bit key:

$ openssl genrsa 2048 > wildcard.key

Provide the desired values for the first questions, but in the interactive part for the Common Name be careful to enter the domain: *.utmclient.utmstack.com

$ openssl req -new -x509 -nodes -sha256 -days 3650 -key wildcard.key > wildcard.cert

You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]: US
State or Province Name (full name) [Some-State]: NY
Locality Name (eg, city) []: New York
Organization Name (eg, company) [Internet Widgits Pty Ltd]: MyCompany
Organizational Unit Name (eg, section) []: MyDepartment
Common Name (e.g. server FQDN or YOUR name) []: *.utmclient.utmstack.com
Email Address []: mail@yourdomain.com

Now inspect the generated certificate and save its details:

$ openssl x509 -noout -fingerprint -text < wildcard.cert > wildcard.info

Finally you can bundle the data

$ cat wildcard.cert wildcard.key > wildcard.pem

$ cp wildcard.pem /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.pem
$ cp wildcard.key /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.key
$ docker service update utm_traefik

utm_traefik
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged

IMPORTANT: This self-signed certificate is for trial purposes only, NOT suitable for production!
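Whichever method you used, you can quickly confirm the subject and validity dates of the certificate you copied into Traefik's volume; for example:

$ openssl x509 -noout -subject -dates < /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.pem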

2.5.2 Deploying Java Application WAR file

The file "utmstack.war" should be copied on the Tomcat folder to be deployed, it automatically triggers a group of actions usual in this kind of service.

$ cd /path/to/warfile/
$ cp utmstack.war /var/lib/docker/volumes/vol_tomcat_webapps/_data/utmstack.war
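Optionally, you can watch the deployment in the Tomcat logs while the WAR file is being unpacked; for example:

$ docker service logs -f utm_tomcat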

2.5.3 Unpack Java Application frontend files

The file utm-stack.zip holds a folder utm-stack; its content goes into the vol_tomcat_webapps volume.

$ unzip utm-stack.zip
$ cp -R utm-stack/* /var/lib/docker/volumes/vol_tomcat_webapps/_data/

2.5.4 Checking UTMSTACK configuration

A preconfigured version of the file "application-prod.yml" is delivered with the installation zip file; to copy it and check that all is OK, run the next commands.

$ cd /opt/utmstackInstallerBin-master
$ cp config/utmstack_application-prod.yml /var/lib/docker/volumes/vol_tomcat_webapps/_data/utmstack/WEB-INF/classes/config/application-prod.yml
$ nano /var/lib/docker/volumes/vol_tomcat_webapps/_data/utmstack/WEB-INF/classes/config/application-prod.yml

Ctrl + O + RETURN, to save the file, Ctrl + X to exit from NANO editor

Double-check the users and passwords for the postgres service (postgres_pass:postgres_pass) and the openvas service (openvas_user: openvas_pass); use the same values you provided as the administrator credentials.
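To quickly locate those entries inside the file, a simple grep can help; for example:

$ grep -n -i -E 'postgres|openvas' /var/lib/docker/volumes/vol_tomcat_webapps/_data/utmstack/WEB-INF/classes/config/application-prod.yml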

2.5.5 Copying Manager folder to default webapps

Tomcat offers a manager application, useful for making deployments, checking the Java applications on the web server, or simply watching server statistics.

$ docker container ls
$ docker exec -it TOMCAT_CONTAINER_ID bash

Example:

$ docker exec -it 1f49cba465b2 bash
root@1f49cba465b2:/usr/local/tomcat# cp -R /usr/local/tomcat/webapps.dist/manager /usr/local/tomcat/webapps

Ctrl + D to leave container (or just type 'exit')

2.5.6 Adding Manager User

It's mandatory to have a configured admin user to log into the Tomcat manager; modify the values adminuser and adminpass to match your desired configuration.

$ nano /var/lib/docker/volumes/vol_tomcat_conf/_data/tomcat-users.xml

Inside the tag structure <tomcat-users> </tomcat-users>, add these 2 lines (using your desired username and password):

  <role rolename="manager-gui"/>
  <user username="adminuser" password="adminpass" roles="manager-gui"/>

Ctrl + O + RETURN, to save the file, Ctrl + X to exit from NANO editor

2.5.7 Allowing Manager Web from everywhere

Normally, the Tomcat Manager is only accessible from "localhost"; this must be changed if you want to access the web page from a remotely located computer.

$ nano /var/lib/docker/volumes/vol_tomcat_webapps/_data/manager/META-INF/context.xml

Make sure this option:

allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1"

Is modified to:

allow=".*"

Ctrl + O + RETURN, to save the file, Ctrl + X to exit from NANO editor

2.5.8 Default root page configuration

We will configure Tomcat to serve our previously deployed WAR file as the default root page.

$ nano /var/lib/docker/volumes/vol_tomcat_conf/_data/server.xml

Right below the structure <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true">, add just the following Context element:

<Context docBase="${catalina.home}/webapps" path="" debug="0" reloadable="true"/>

The section should finally look like this

<Host name="localhost" appBase="webapps"
            unpackWARs="true" autoDeploy="true">
  <Context docBase="${catalina.home}/webapps" path="" debug="0" reloadable="true"/>            

Ctrl + O + RETURN, to save the file, Ctrl + X to exit from NANO editor

2.5.9 Default error page configuration

Every error code 404 should redirect back to the index.html

$ nano /var/lib/docker/volumes/vol_tomcat_conf/_data/web.xml

Add the next code before the last line of the file (</web-app>):

  <error-page>
    <error-code>404</error-code>
    <location>/index.html</location>
  </error-page>

Ctrl + O + RETURN, to save the file, Ctrl + X to exit from NANO editor

2.5.10 Redeploy Tomcat Service

We have made many changes to the Tomcat server, so it's mandatory to update the service:

$ docker service update utm_tomcat

2.5.11 Elasticsearch Configuration

2.5.11.1 Cluster Settings

The Elasticsearch cluster settings are all dynamic and can be set using curl, Kibana, or the simpler Cerebro. The next options can be set exactly as shown in the Kibana Console; with Cerebro the syntax is a bit different, so check the helper image to be sure.

This is the standard format of the cluster settings

PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.rebalance.enable": "all",
    "cluster.routing.allocation.enable": "all",
    "cluster.routing.allocation.disk.threshold_enabled": false,
    "cluster.routing.allocation.node_concurrent_recoveries": 5,
    "cluster.routing.allocation.cluster_concurrent_rebalance": 5,
    "cluster.routing.allocation.node_initial_primaries_recoveries": 5,
    "cluster.routing.allocation.allow_rebalance": "indices_primaries_active",
    "cluster.routing.allocation.same_shard.host": true,
    "indices.recovery.max_concurrent_file_chunks": 5,
    "indices.recovery.max_bytes_per_sec": "100mb"
  }
}

If you feel comfortable using CURL on a bash prompt in Linux, use the next command to set your cluster:

$ curl -X PUT "http://elastic.utmclient.utmstack.com/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.rebalance.enable": "all",
    "cluster.routing.allocation.enable": "all",
    "cluster.routing.allocation.disk.threshold_enabled": false,
    "cluster.routing.allocation.node_concurrent_recoveries": 5,
    "cluster.routing.allocation.cluster_concurrent_rebalance": 5,
    "cluster.routing.allocation.node_initial_primaries_recoveries": 5,
    "cluster.routing.allocation.allow_rebalance": "indices_primaries_active",
    "cluster.routing.allocation.same_shard.host": true,
    "indices.recovery.max_concurrent_file_chunks": 5,
    "indices.recovery.max_bytes_per_sec": "100mb"
  }
}'

The output should be:

{
  "acknowledged" : true,
  "persistent" : {
    "cluster" : {
      "routing" : {
        "rebalance" : {
          "enable" : "all"
        },
        "allocation" : {
          "disk" : {
            "threshold_enabled" : "false"
          },
          "node_initial_primaries_recoveries" : "5",
          "enable" : "all",
          "same_shard" : {
            "host" : "true"
          },
          "allow_rebalance" : "indices_primaries_active",
          "cluster_concurrent_rebalance" : "5",
          "node_concurrent_recoveries" : "5"
        }
      }
    },
    "indices" : {
      "recovery" : {
        "max_bytes_per_sec" : "100mb",
        "max_concurrent_file_chunks" : "5"
      }
    }
  },
  "transient" : { }
}

Using visual managers Kibana and Cerebro

Cluster Settings on Kibana

Note: The settings using the Kibana Console are direct, as shown; hit the icon with the balloon tip "Click to send request" to commit the changes. If you don't see the image, it's located in /assets/set_cluster_settings_kibana.png

Cluster Settings on Cerebro

Note: The settings in Cerebro are divided into the parts marked in the image; select PUT and hit send to commit the changes. If you don't see the image, it's located in /assets/set_cluster_settings_cerebro.png

2.5.11.2 Rollover configuration

Planning a lifecycle for our data, in order to move daily indices through 3 phases (hot, warm, cold), is a good practice to save some hardware resources, since old data is requested less than newer data. Active (newly created) indices will stay hot for 90 days, then roll over to the warm phase, stay there for another 90 days, and finally reach the cold phase. We are going to set that up by giving all new indices a template that contains a pointer to a lifecycle policy, so fresh indices will be configured automatically.

This is the standard format of these types of settings; both can also be set visually with Kibana or Cerebro, in the same way you managed the Cluster Settings.

Lifecycle Policy

PUT _ilm/policy/index-lifecycle-policy-quarterly
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {"max_age": "90d"},
          "set_priority": {"priority": 50}
        }
      },
      "warm": {
        "min_age": "1d",
        "actions": {
          "allocate": {"number_of_replicas": 0},
          "forcemerge": {"max_num_segments": 1},
          "set_priority": {"priority": 25},
          "shrink": {"number_of_shards": 1}
        }
      },
      "cold": {
        "min_age": "90d",
        "actions": {
          "freeze": {},
          "set_priority": {"priority": 0}
        }
      }
    }
  }
}

This is a CURL command to set this Rollover policy:

$ curl -X PUT "http://elastic.utmclient.utmstack.com/_ilm/policy/index-lifecycle-policy-quarterly?pretty" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {"max_age": "90d"},
          "set_priority": {"priority": 50}
        }
      },
      "warm": {
        "min_age": "1d",
        "actions": {
          "allocate": {"number_of_replicas": 0},
          "forcemerge": {"max_num_segments": 1},
          "set_priority": {"priority": 25},
          "shrink": {"number_of_shards": 1}
        }
      },
      "cold": {
        "min_age": "90d",
        "actions": {
          "freeze": {},
          "set_priority": {"priority": 0}
        }
      }
    }
  }
}'

The output should be:

{
  "acknowledged" : true
}

2.5.11.3 Indices template

This is a simple concept: when a brand-new index's name matches the pattern, in this case index-*, it will be set up with the settings defined in the template. This is mainly used to "link" the matching index to the previously defined lifecycle policy.

PUT _template/template-all-index-client-code
{
  "index_patterns": ["index-*"],
  "settings": {
    "index": {
      "lifecycle": {"name": "index-lifecycle-policy-quarterly"},
      "mapping": {"total_fields": {"limit": "2000"}},
      "store": {"preload": ["nvd", "dvd"]},
      "number_of_shards": "2",
      "number_of_replicas": "0"
    }
  },
  "mappings": {},
  "aliases": {}
}

This is a CURL command to add the template:

$ curl -X PUT "http://elastic.utmclient.utmstack.com/_template/template-all-index-client-code?pretty" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["index-*"],
  "settings": {
    "index": {
      "lifecycle": {"name": "index-lifecycle-policy-quarterly"},
      "mapping": {"total_fields": {"limit": "2000"}},
      "store": {"preload": ["nvd", "dvd"]},
      "number_of_shards": "2",
      "number_of_replicas": "0"
    }
  },
  "mappings": {},
  "aliases": {}
}'

The output should be:

{
  "acknowledged" : true
}
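To verify that both the lifecycle policy and the template were stored, you can query them back; for example:

$ curl "http://elastic.utmclient.utmstack.com/_ilm/policy/index-lifecycle-policy-quarterly?pretty"
$ curl "http://elastic.utmclient.utmstack.com/_template/template-all-index-client-code?pretty"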

2.5.12 Check all URLs

The first checkpoint is the main product, UTMStack; look at this URL: http://www.utmclient.utmstack.com/manager/html. You should see a "true" flag showing that all is OK. If, for some reason, the WAR file (utmstack.war) is not properly deployed, this is the place to fix that. Since we made a lot of changes in the post-installation process, it must be checked; if needed, it can be manually redeployed, just hit "Deploy" in the applications section.

Test every other URL pointing to the services. Remember to use the adminuser and adminpass you provided during the installation process, with the exception of Portainer, which requests a username and password at the first login.

UTM Service     To be used as                     URL
utm_cerebro     Elasticsearch web admin tool      http://cerebro.utmclient.utmstack.com
utm_elastic     Elasticsearch coordination node   http://elastic.utmclient.utmstack.com
utm_kibana      Elasticsearch web admin tool      http://kibana.utmclient.utmstack.com
utm_openvas     Openvas vulnerability scanner     http://openvas.utmclient.utmstack.com
utm_portainer   Manage your docker environments   http://portainer.utmclient.utmstack.com
utm_traefik     Load balancer & reverse proxy     http://traefik.utmclient.utmstack.com
utm_tomcat      Java webserver (application)      http://www.utmclient.utmstack.com
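If you prefer a quick check from the command line, you can request each URL and look at the HTTP status codes (services behind basic authentication will answer 401 until you provide the credentials); a sketch:

$ for s in www cerebro elastic kibana openvas portainer traefik; do printf '%-10s ' "$s"; curl -s -o /dev/null -w '%{http_code}\n' "http://$s.utmclient.utmstack.com"; done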

2.6 Repositories for snapshots (backups) of your event data

If you want to back up your data, either to save some space in your storage or simply to have a copy of one or more indices, you must create a repository under the folder /usr/share/elasticsearch/backup. After that, you can create a snapshot inside the repository using Kibana or Cerebro, both intuitive and easy to use; check the images showing the process.

CURL command to add a repository to hold the snapshots (backups)

$ curl -X PUT "http://elastic.utmclient.utmstack.com/_snapshot/utm_repository?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": {
    "location": "utm_repository"
  }
}'
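Once the repository exists, you can create a snapshot inside it; a sketch, using a hypothetical snapshot name and the daily index pattern used by the template above:

$ curl -X PUT "http://elastic.utmclient.utmstack.com/_snapshot/utm_repository/snapshot-2020-08-23?pretty" -H 'Content-Type: application/json' -d'
{
  "indices": "index-*",
  "ignore_unavailable": true,
  "include_global_state": false
}'

You can then follow its progress with:

$ curl "http://elastic.utmclient.utmstack.com/_snapshot/utm_repository/snapshot-2020-08-23/_status?pretty"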

2.7 Uninstall or Roll-back

Follow these instructions to properly disassemble all the Docker Swarm services.

2.7.1 Remove stack, services, and containers

This part takes a few minutes to take down the stack; check the containers and services to verify that everything is gone.

$ docker stack rm utm

Removing service utm_cerebro
Removing service utm_es-data
Removing service utm_es-elastic
Removing service utm_es-master
Removing service utm_kibana
Removing service utm_openvas
Removing service utm_portainer
Removing service utm_postgres
Removing service utm_tomcat
Removing service utm_traefik

$ docker service ls

$ docker container ls

Note: Wait until all services and containers are removed; repeat the two previous commands until you get an empty output.

2.7.2 Remove virtual network

Since you have nothing attached to this virtual network, you can now remove it

$ docker network rm net_utmstack

2.7.3 Remove persistent volumes

[WARNING] YOU WILL DELETE ALL YOUR STACK DATA!!!

Save all your data first if you have not done so yet; the permanent data is here: "/var/lib/docker/volumes/", where you can find folders associated with your different services. Once you are completely sure you can get rid of those volumes, use this command:

$ docker volume prune

Output Example:

WARNING! This will remove all local volumes not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Volumes:
vol_gvm_data
utm_portainer_data
vol_postgres_data
vol_tomcat_conf
vol_tomcat_webapps
vol_traefik_config
utm_es-data-1
utm_es-data-2
utm_es-data-3
utm_es-data-4
utm_es-data-5
utm_es-master-1
utm_es-master-2
utm_es-master-3

2.7.4 Leave Swarm Mode

Now that you have emptied the Docker Swarm environment, you can leave the "swarm mode" of Docker

$ docker swarm leave --force

2.7.5 Remove Docker Community

This is the way to remove and clean the configurations made by the Docker Community packages:

$ apt -y remove docker-ce docker-ce-cli containerd.io && apt -y purge docker-ce docker-ce-cli containerd.io && apt autoremove

2.7.6 Remove Support packages

Some packages were needed before the installation of Docker Community; they can be removed now if they are not useful for your server.

$ apt -y remove apt-transport-https ca-certificates curl gnupg-agent software-properties-common && apt -y purge apt-transport-https ca-certificates curl gnupg-agent software-properties-common && apt autoremove

2.7.7 Remove additional lines on APT sources

At the bottom of the "/etc/apt/sources.list", there should be some lines added by the installation scripts

$ nano /etc/apt/sources.list

Ctrl + K removes the whole line where the cursor is located. Ctrl + O + RETURN, to save the file, Ctrl + X to exit from NANO editor

These are the Docker repo lines that should be removed:

deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable
# deb-src [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable

3 Contact Information

Title Description
demo demo@stalasinside.com