Manual

1 UTMStack Installation Manual

 

A guide for Docker Swarm deployment

 

1.1 Introduction

UTMStack relies on a number of running services that are complex to install and configure. Luckily, most of the deployment process is unattended, which leaves minimal room for human error; even so, it's healthy to double-check every step to be sure.

 

1.2 Purpose

This document serves as a guide during the software installation phase, reducing possible handling mistakes, errors, misinterpretations, and time lost looking for the correct information.

An inadequate, incomplete, or missing installation of any service may lead to the failure of the UTMStack deployment, leaving the product unable to work properly.

 

2 Installation Manual

We assume you are not a specialist in every service running in the UTMStack deployment. This manual will help you avoid errors during the installation and later use of the applications. Please check the Contact Information list at the end of this document if necessary.

 

2.1 Pre-requisites

In this section, you can find a list of pre-requisites that must be met before the installation can begin. Keep in mind that if fewer resources are provided, the services won't deploy properly.

 

2.1.1 Hardware

VERY IMPORTANT! The designated server must be a Physical Server or a Virtual Machine (VM), NEVER a container (CT), especially if you rent a Virtual Private Server (VPS).

  • RAM: 32 GB (minimum)
  • SWAP: 32 GB (minimum)
  • CPU: 8 cores (minimum)

On GNU/Linux systems, the SWAP partition is VERY IMPORTANT, even if it's disabled, as a safety net for extreme processing tasks or failures.

People usually forget that fact, so when you are setting up your server, you MUST remember to reserve in your disk configuration the same amount you set as RAM on your server. In this case, the SWAP size would be 32 Gigabytes, so that your system is able to handle every heavy operation.

You could dedicate a share of your main (fastest) hard drive to SWAP operations, e.g., 100 GB = 32 GB (swap partition) + 68 GB (root partition '/'). Something like this:

xvda       8:0    0  100G  0 disk
├─xvda1    8:1    0   68G  0 part /
└─xvda2    8:2    0   32G  0 part [SWAP]

You can do all this during the installation of the operating system on your server.
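If your server was already installed without SWAP, you can still add it afterward. This is a minimal sketch, assuming the dedicated partition is /dev/xvda2 as in the example above; adjust the device name to your real layout:

$ mkswap /dev/xvda2                                  # write the swap signature
$ swapon /dev/xvda2                                  # enable it immediately
$ echo "/dev/xvda2 none swap sw 0 0" >> /etc/fstab   # make it persistent
$ swapon --show                                      # verify it is active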

 

2.1.2 Operating System

We strongly recommend Ubuntu 18.04 LTS, since it has been widely tested with the desired results. Either way, the Installer should work just fine with any modern release.
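If you are not sure which release your server is running, you can check it before going any further; the output should be something like this:

$ lsb_release -ds
Ubuntu 18.04.5 LTS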

Check the configuration of your Ubuntu repositories. If you run the next command, you should get a similar output (depending, of course, on the Ubuntu release: bionic for Ubuntu 18.04, focal for Ubuntu 20.04). Make sure you have this properly set and that your system is up to date.

$ cat /etc/apt/sources.list
deb http://us.archive.ubuntu.com/ubuntu bionic main restricted universe multiverse
deb http://us.archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse
deb http://us.archive.ubuntu.com/ubuntu bionic-security main restricted universe multiverse
deb http://us.archive.ubuntu.com/ubuntu bionic-backports main restricted universe multiverse

Once you are done with the repositories, you need to update the package index and upgrade your system.

$ apt update && apt -y upgrade

 

2.1.3 Storage

Let's set this section apart from the general hardware needs. It's wise to have a "think ahead" approach when making decisions about storage. You will deal with three kinds of storage pieces, each different in its use, speed, size, location, and the reasons it is set the way it is. There are VERY GOOD reasons to have these "partitions" finely defined:

  1. ROOT partition
    • USE: It's where your operating system rests, including all the packages you could possibly and will surely need.
    • SIZE: 100 Gigabytes (recommended).
    • SPEED: It determines the speed of your operating system, so get the best possible (SSD disks if you can afford them); it works with regular hard drives either way, just slower.
    • REASON: You want your system always stable, even if storage (2) and (3) are completely full and unable to hold a single bit.
    • LOCATION: '/', the main partition on Linux.
  2. EVENTS DATA partition
    • USE: It's where your events data rests; all the logs your system is capturing goes there.
    • SIZE: Depends on the results of the Resource Calculator webpage; e.g., just a few servers will easily produce 1 Terabyte of data in a year.
    • SPEED: It determines the speed of your data indexing, so get the best possible (SSD disks if you can afford them); it works with regular hard drives either way, just slower.
    • REASON: You want this piece of storage spacious, fast, and of course free from interference from system and backup work (zero delay).
    • LOCATION: '/utmstack/data'. It's a mounting point; folder /utmstack must exist or be created beforehand, check section 2.1.3 Storage.
  3. SNAPSHOTS partition
    • USE: It's where your data snapshots (backups) rest. Once a snapshot containing a subset of your data (grouped in daily indices) is done (and double-checked), you can delete those indices from your active data. It's the place where the oldest events data is kept (as months and years pass).
    • SIZE: Depends on the results of the Resource Calculator; this is where you will save your snapshots.
    • SPEED: It determines the speed of your snapshots; this piece of storage doesn't need to be that fast since it's just for backups.
    • REASON: You want this partition always inside your server and always with room for snapshots: first, because you will be able to back up your indices even if storage (2) is full; second, because Cloud Storage like S3 is usually slower, so it's wise to first save the data on your own server and then upload the snapshots to a Storage Bucket (S3 or anything of your liking) in order to... yes, you guessed right, make room on this storage for more backups.
    • LOCATION: '/utmstack/repo'. It's a mounting point; folder /utmstack must exist or be created beforehand.

Summarizing: if you rent a VPS, just be sure to consider these three kinds of storage needs in your budget: system (fast), data (fast if possible), and snapshots (fast is optional). Usually, on the same website where you rent the server (VPS) and select the system partition size, you can also add the other two pieces of storage with the desired (affordable) speed and size. If you have a Physical Server, the needs remain the same, but you handle them in your own way.

Either way, you will face one of three different scenarios regarding the physical way your data is located: a single big disk, two separate disks, or three separate disks.

 

2.1.3.1 Three separate disks

This option is what common sense dictates as good practice, it's what we recommend, and it's the one explained at length here.

Assuming you decided to go for the three separate pieces of storage, let's begin by checking the storage you have on your server. Depending on the sizes you got from your VPS provider or the hard drives installed on your physical server, the output should be something similar to this:

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda       8:0    0  100G  0 disk
├─xvda1    8:1    0    1G  0 part /boot
├─xvda2    8:2    0   32G  0 part [SWAP]
└─xvda3    8:3    0   67G  0 part /
xvdb       8:16   0    2T  0 disk
xvdc       8:32   0    3T  0 disk

The disks could have different names, like sda, sdb, sdc; just use the real ones. As you can see in the example, you have two extra disks: xvdb with 2 Terabytes and xvdc with 3 Terabytes. They need a little work to make them functional. Think of them as blanks, with no partitions and no format whatsoever.

Creating extra partitions:

### Disk #2 (xvdb)
$ parted /dev/xvdb mklabel gpt
$ parted /dev/xvdb mkpart primary ext4 34s 100%

### Disk #3 (xvdc)
$ parted /dev/xvdc mklabel gpt
$ parted /dev/xvdc mkpart primary ext4 34s 100%

If parted warns that the resulting partition is not properly aligned for best performance, you can answer Ignore (i) and continue.

Verifying that the partitions are recognized by the system, plus a visual check for us:

$ partprobe
$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda       8:0    0  100G  0 disk
├─xvda1    8:1    0    1G  0 part /boot
├─xvda2    8:2    0   32G  0 part [SWAP]
└─xvda3    8:3    0   67G  0 part /
xvdb       8:16   0    2T  0 disk
└─xvdb1    8:17   0    2T  0 part
xvdc       8:32   0    3T  0 disk
└─xvdc1    8:33   0    3T  0 part

Now you can see the newly created partitions: xvdb1 and xvdc1; they need to be prepared for Linux Systems.

Formatting to EXT4, the Linux filesystem format (on some kinds of disks this may have already happened; see the note below):

$ mkfs.ext4 /dev/xvdb1
$ mkfs.ext4 /dev/xvdc1

If you get a warning message like this one:

mke2fs 1.44.1 (24-Mar-2018)
/dev/xvdb1 contains a ext4 file system
        created on Sun Aug 23 03:24:21 2020
Proceed anyway? (y,N)

It means that during the partitioning process the 'parted' command honored the filesystem type we passed (mkpart primary ext4), so that particular disk was formatted in advance and is already EXT4; it doesn't need to be done again (not all types of disks work that way). Don't worry, it won't break anything if you do it anyway, it's just a waste of time.

Normally, both outputs should look similar to this:

mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 805306359 4k blocks and 201326592 inodes
Filesystem UUID: 5f8d0c2f-3907-45cf-a96f-4162f6b8c33f
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

Checking that both formats went well:

$ e2fsck -f /dev/xvdb1
$ e2fsck -f /dev/xvdc1

This time you should see outputs similar to this:

e2fsck 1.44.1 (24-Mar-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/xvdc1: 11/6553600 files (0.0% non-contiguous), 557848/26214391 blocks

Now we will create the folders that will hold the mounting points. The disk partitions must be mounted there, respectively:

$ mkdir -p /utmstack/data /utmstack/repo
$ mount /dev/xvdb1 /utmstack/data
$ mount /dev/xvdc1 /utmstack/repo

You can check the two extra disks are properly partitioned, formatted, and mounted:

$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda       8:0    0  100G  0 disk
├─xvda1    8:1    0    1G  0 part /boot
├─xvda2    8:2    0   32G  0 part [SWAP]
└─xvda3    8:3    0   67G  0 part /
xvdb       8:16   0    2T  0 disk
└─xvdb1    8:17   0    2T  0 part /utmstack/data
xvdc       8:32   0    3T  0 disk
└─xvdc1    8:33   0    3T  0 part /utmstack/repo

Finally, after all this work, we should make all these changes persistent on the system. The next commands will do that for you:

echo "/dev/xvdb1 /utmstack/data ext4 defaults 0 0" >> /etc/fstab
echo "/dev/xvdc1 /utmstack/repo ext4 defaults 0 0" >> /etc/fstab

 

2.1.3.2 Two separate disks

This option is also possible. It means having just one extra disk besides the one where the operating system rests. In this case, you must dedicate that disk to both /data and /repo. The trick is to mount the extra disk partition as /utmstack.

The commands are similar to the scenario with three disks:

$ parted /dev/xvdb mklabel gpt
$ parted /dev/xvdb mkpart primary ext4 34s 100%
$ mkfs.ext4 /dev/xvdb1
$ e2fsck -f /dev/xvdb1
$ mkdir /utmstack
$ mount /dev/xvdb1 /utmstack
$ mkdir /utmstack/data /utmstack/repo
$ echo "/dev/xvdb1 /utmstack ext4 defaults 0 0" >> /etc/fstab

 

2.1.3.3 Single big disk

This option is simple but dangerous. In case your events data consumes all the space on your disk, your system won't be easy to bring back to a stable condition, since partition '/' will be full and unable to receive a single additional bit, which could bring unexpected results.

If you already have a big root partition '/' with enough disk space to handle the operating system, your data, your snapshots, etcetera (meaning a really big disk, let's say 1-2 Terabytes), and for some reason you don't plan to get any other piece of storage, you could just (AND THIS IS NOT RECOMMENDED) create the folders '/utmstack/data' and '/utmstack/repo' on your existing server; the Installer should work just fine.

$ mkdir -p /utmstack/data /utmstack/repo

 

2.1.4 DNS Entries

Throughout this manual, you will see the example domain "utmclient.utmstack.com", which YOU MUST change to your own.

For name-resolving purposes, the DNS server (or provider) must be managed, so you will need special permissions or help from your system administrator. You need to add a couple of records: one of type "A" for "www.utmclient.utmstack.com" with the server IP address in it, and another of type "CNAME" (or "Forward") with the value "utmclient.utmstack.com" pointing to the first one. These entries will provide a doorway for all the services deployed during the installation.
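Once the records are in place, you can verify the name resolution from any machine using dig (from the dnsutils package on Ubuntu); replace the example domain with your own. The first command should print your server IP address, and the second the target of the forward record:

$ dig +short A www.utmclient.utmstack.com
$ dig +short CNAME utmclient.utmstack.com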

 

2.2 Pre-installation Tasks

First of all, make sure you are logged in as the root user and that the apt services are not busy. You need the UTMStack Installer, hosted on GitHub at UTMStack/utmstackInstallerBin: click the green button named "Code", then "Download ZIP", and place the file somewhere on your server's file system, e.g., under the "/opt" folder, or just download it directly from the server.

$ sudo su
[sudo] password for USERNAME:
$ cd /opt
$ wget -O utmstackInstallerBin.zip https://github.com/UTMStack/utmstackInstallerBin/archive/master.zip

Unpack the zipped file "utmstackInstallerBin.zip".

$ unzip utmstackInstallerBin.zip
$ cd utmstackInstallerBin-master

(Mandatory) All the commands for installation and deployment must be run from within this location.

 

2.3 Installation Procedures

Just a heads-up: you will face a few interactive parts that need your answers. You might want to have some things at hand, like:

  1. Server size

(1) TESTING Server deployment
(2) Tiny Server deployment (at least RAM:16Gb, CPU:4 units)
(3) Small Server deployment (at least RAM:32Gb, CPU:8 units)
(4) Medium Server deployment (at least RAM:64Gb, CPU:12 units)
(5) Large Server deployment (at least RAM:128Gb, CPU:16 units)
(6) Biggest Server deployment (at least RAM:256Gb, CPU:20 units)

You will only see the options you can use: the Installer checks your server resources. Choose according to the Resource Calculator webpage.

  2. Credentials and Domain

Your Admin Username: administrator
Your Admin Password: YoUrAdMiNpAsS
YOUR domain FQDN is: yourdomain.com

This is where you set your administrator user credentials and provide your real domain.

  3. License details

Your LICENSE Key is: wsbx3gk414gfg5msw4t
Your LICENSE Name is: CompanyName
Your LICENSE Email is: company@yourdomain.com

OK, with that said, you need to make sure the script "install" is an executable program, then run it from within the installer folder. After that, just follow the instructions.

$ chmod a+x install
$ ./install

Once you run the Installer, the output should look like this:

# [UTMSTACK]: ROOT user...Ok.
# [UTMSTACK]: Installing: curl ...
# [UTMSTACK]: Enter credentials for administrator user.
Your Admin Username: administrator
Your Admin Password: YoUrAdMiNpAsS
Your domain FQDN is: utmclient.utmstack.com
# [UTMSTACK]: Enter license details.
Your LICENSE Key is: wsbx3gk414gfg5msw4t
Your LICENSE Name is: CompanyName
Your LICENSE Email is: company@yourdomain.com
License existence ... OK.
           Key: wsbx3gk414gfg5msw4t
          Days: 380
  CustomerName: CompanyName
 CustomerEmail: company@yourdomain.com
    Activation: 2020-09-05T00:13:57.215829Z
        Expire: 2021-09-20T00:13:57.215829Z
         Valid: true
# [UTMSTACK]: VERIFY YOUR INPUTS...
# [UTMSTACK]: ---------------------------------------
# [UTMSTACK]: Admin Username: administrator
# [UTMSTACK]: Admin Password: YoUrAdMiNpAsS
# [UTMSTACK]: Domain FQDN is: utmclient.utmstack.com
# [UTMSTACK]: ---------------------------------------
# [UTMSTACK]: LICENSE Key is: wsbx3gk414gfg5msw4t
# [UTMSTACK]: LICENSE Name is: CompanyName
# [UTMSTACK]: LICENSE Email is: company@yourdomain.com
# [UTMSTACK]: ---------------------------------------
# [UTMSTACK]: LICENSE Validity: 380 days left.
# [UTMSTACK]: Install type is: LICENSED VERSION
# [UTMSTACK]: Is this correct to proceed?
(Y)es to continue
(N)o to repeat the INPUTS
(Q)uit the installation
Your answer is?: y
# [UTMSTACK]: Your resources, RAM: 32939 MB, CPU: 8 UNITS
# [UTMSTACK]: Your install type is: LICENSED VERSION
# [UTMSTACK]: According to this, you can install:
(1) TESTING Server deployment
(2) Tiny Server deployment (at least RAM:16Gb, CPU:4 units)
(3) Small Server deployment (at least RAM:32Gb, CPU:8 units)
(Q) Quit the installation
Your answer is?: 3

The credentials will be used to create access for all services (Traefik, OpenVAS, and PostgreSQL) except the Portainer manager, which requests them at first login. If you make a mistake in your inputs, just hit n (or N) at the verification question to re-enter the data.

 

2.4 Technical Tests

First, we check the whole stack: images (pulled locally or from Docker Hub), services (some have 1, 3, or 5 replicas), containers (these are the running processes), and volumes (permanent storage).

 

2.4.1 Images

$ docker images

REPOSITORY             TAG      IMAGE ID       CREATED        SIZE
utmstack/tomcat-omp    9-jdk8   09c98fe99a79   16 hours ago   595MB
traefik                2.2      9b9d2d696ad3   18 hours ago   78.4MB
portainer/portainer    latest   62771b0b9b09   6 days ago     79.1MB
securecompliance/gvm   latest   11a37d7c6e0f   12 days ago    2.41GB
postgres               alpine   17150f4321a3   4 weeks ago    157MB
lmenezes/cerebro       latest   3553b54da0e7   5 weeks ago    268MB
logstash               7.8.0    01979bbd06c9   6 weeks ago    789MB
kibana                 7.8.0    df0a0da46dd1   6 weeks ago    1.29GB
elasticsearch          7.8.0    121454ddad72   6 weeks ago    810MB

 

2.4.2 Services

$ docker service ls

ID             NAME             MODE     REPLICAS   IMAGE
e96swdtu7cyv   utm_cerebro      global   1/1        lmenezes/cerebro:latest
r48aofrvm89y   utm_es-elastic   global   1/1        elasticsearch:7.8.0
z51gvqdu47gr   utm_es-node1     global   1/1        elasticsearch:7.8.0
j65wyq9agk3x   utm_es-node2     global   1/1        elasticsearch:7.8.0
wui2t042zkyb   utm_es-node3     global   1/1        elasticsearch:7.8.0
x5nqswmq9uz8   utm_es-node4     global   1/1        elasticsearch:7.8.0
c5oilgqo3ylh   utm_kibana       global   1/1        kibana:7.8.0
dkjzy7uzzsnh   utm_openvas      global   1/1        securecompliance/gvm:latest
7xbdmdogwuqg   utm_portainer    global   1/1        portainer/portainer:latest
mnye4zcemv39   utm_postgres     global   1/1        postgres:alpine
od9plbcnvgil   utm_tomcat       global   1/1        utmstack/tomcat-omp:9-jdk8
rvhsxupgvahj   utm_traefik      global   1/1        traefik:2.2

 

2.4.3 Containers

$ docker container ls
(The COMMAND column is trimmed for brevity.)

CONTAINER ID   IMAGE                         CREATED   STATUS   PORTS                NAMES
d297ffc9a2ce   lmenezes/cerebro:latest       3m ago    Up 3m    9000/tcp             utm_cerebro.fnqlygvmqnou4u402exvxysv9.ws07stpwnk1x90qjbv1hv1x7h
8d1cfd0db667   elasticsearch:7.8.0           3m ago    Up 3m    9200/tcp, 9300/tcp   utm_es-elastic.fnqlygvmqnou4u402exvxysv9.0hv8abbxijydb4y3vun2r9pkp
d948fba742fc   elasticsearch:7.8.0           3m ago    Up 3m    9200/tcp, 9300/tcp   utm_es-node1.fnqlygvmqnou4u402exvxysv9.iekmrerkcf0s1k6q9ofegik9p
df0767ab6395   elasticsearch:7.8.0           3m ago    Up 3m    9200/tcp, 9300/tcp   utm_es-node2.fnqlygvmqnou4u402exvxysv9.j3hq7nh637bq3s906lus5l64i
d36dd3e463da   elasticsearch:7.8.0           3m ago    Up 3m    9200/tcp, 9300/tcp   utm_es-node3.fnqlygvmqnou4u402exvxysv9.uo1eyko2uu2ev4u92s8e3y7qp
6d08f7a9e8a1   elasticsearch:7.8.0           3m ago    Up 3m    9200/tcp, 9300/tcp   utm_es-node4.fnqlygvmqnou4u402exvxysv9.lkpq39eaw27xp98z3uc3wc2a8
f760e7442833   kibana:7.8.0                  3m ago    Up 3m    5601/tcp             utm_kibana.fnqlygvmqnou4u402exvxysv9.r337f19ajrw1n9onr4ooc2sgj
a22ed92c2367   securecompliance/gvm:latest   3m ago    Up 3m    9390/tcp             utm_openvas.fnqlygvmqnou4u402exvxysv9.rcy1dtj5m8qe9z31bbfj2uohr
2018c3f09430   portainer/portainer:latest    3m ago    Up 3m    9000/tcp             utm_portainer.fnqlygvmqnou4u402exvxysv9.vr99qyqr6b5myhs2qzc40dpmw
31a0cb475da8   postgres:alpine               3m ago    Up 3m    5432/tcp             utm_postgres.fnqlygvmqnou4u402exvxysv9.g6octrqtt2q9uaka3zwq58ksx
95e7e03253f9   utmstack/tomcat-omp:9-jdk8    3m ago    Up 3m    8080/tcp             utm_tomcat.fnqlygvmqnou4u402exvxysv9.uhj6od9a9193degphpgdmq6j3
be3fd16e3776   traefik:2.2                   3m ago    Up 3m    80/tcp               utm_traefik.fnqlygvmqnou4u402exvxysv9.sebogeptyuie843oyexe6kh0j

 

2.4.4 Volumes

$ docker volume ls
(Descriptions added for reference; docker only prints DRIVER and VOLUME NAME.)

DRIVER   VOLUME NAME          DESCRIPTION
local    utm_portainer_data   Portainer Manager Volume
local    vol_gvm_data         OpenVAS Data Volume
local    vol_postgres_data    PostgreSQL Data Volume
local    vol_tomcat_conf      Tomcat Config Volume
local    vol_tomcat_webapps   Tomcat Deployments Volume
local    vol_traefik_config   Traefik Certificates Volume

 

2.5 Post-installation

Once the UTMStack services are deployed on the Docker Swarm environment, some things must be tuned after the unattended part to complete the installation process.

 

2.5.1 Securing Web

Whatever SSL provider you use, the certificate type you must ask for is "WILDCARD", to cover all your web pages. Please check with your DNS and SSL providers to get one with "*.utmclient.utmstack.com" in the CN section. After that, just copy the files to Traefik's config folder. You can set your certificate at any time; just copy the files, with the right names, to the designated folder.

This part is not mandatory. If you don't provide a valid certificate, the Traefik service will use an internal one.

No matter where or how you get your SSL support (self-signed, free from Let's Encrypt, or any paid service), just get a wildcard certificate, rename your_own_cert.pem and your_own_key.key to wildcard.pem and wildcard.key, copy both to Traefik's configuration folder, and update the service to commit the changes.

$ cp /path/to/file/your_own_cert.pem /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.pem
$ cp /path/to/file/your_own_key.key /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.key
$ docker service update utm_traefik

utm_traefik
overall progress: 1 out of 1 task
1/1: running   [==================================================>]
verify: Service converged

Now you can test the web pages; they should all be updated.
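If the pages still raise certificate errors, a common cause is a certificate and key that don't belong together. A quick sanity check for RSA certificates, assuming the file names used above, is to compare the modulus of both files; the two hashes must be identical:

$ openssl x509 -noout -modulus -in /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.pem | openssl md5
$ openssl rsa -noout -modulus -in /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.key | openssl md5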

 

2.5.1.1 Let's Encrypt Certbot

Certbot is a FREE open-source software tool for automatically using Let's Encrypt certificates on manually administered websites to enable HTTPS. If you are ready for this option, install the package and get the certificates using the next commands:

$ apt install certbot
$ certbot certonly --manual --preferred-challenges=dns --email enterprise@yourdomain.com --server https://acme-v02.api.letsencrypt.org/directory --agree-tos -d "*.utmclient.utmstack.com"

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator manual, Installer None
Obtaining a new certificate
Performing the following challenges:
dns-01 challenge for utmclient.utmstack.com

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NOTE: The IP of this machine will be publicly logged as having requested this
certificate. If you're running certbot in manual mode on a machine that is not
your server, please ensure you're OK with that.

Are you OK with your IP being logged?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: y

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please deploy a DNS TXT record under the name
_acme-challenge.utmclient.utmstack.com with the following value:

6exwW66JrrSgWErhdqzXVg4CzYJWuEWFEnLoyDH9_QM

Before continuing, verify the record is deployed.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Press Enter to Continue

This part is pretty straightforward: Certbot needs you to add a verification key to your DNS. Simply add a DNS TXT record with the name "_acme-challenge.utmclient.utmstack.com" (use your real domain) containing the indicated value; in this example it is "6exwW66JrrSgWErhdqzXVg4CzYJWuEWFEnLoyDH9_QM", but use the one you got from your own certbot execution. This is a one-time configuration, but it's needed to keep going. Hit ENTER once the work on the DNS is done.

Waiting for verification...
Cleaning up challenges

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/utmclient.utmstack.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/utmclient.utmstack.com/privkey.pem
   Your cert will expire on 2020-11-11. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot
   again. To non-interactively renew *all* of your certificates, run
   "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

Now you just need to copy the generated files to Traefik's permanent volume under the expected names, using the next commands:

$ cp /etc/letsencrypt/live/utmclient.utmstack.com/fullchain.pem /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.pem
$ cp /etc/letsencrypt/live/utmclient.utmstack.com/privkey.pem /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.key
$ docker service update utm_traefik

utm_traefik
overall progress: 1 out of 1 task
1/1: running   [==================================================>]
verify: Service converged
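Keep in mind that Let's Encrypt certificates expire after 90 days, so this copy step must be repeated on every renewal. A minimal sketch to automate it, assuming the paths used above, is a certbot deploy hook that runs after each successful renewal:

$ cat > /etc/letsencrypt/renewal-hooks/deploy/utmstack.sh <<'EOF'
#!/bin/bash
# Copy the renewed certificate into Traefik's volume and refresh the service
cp /etc/letsencrypt/live/utmclient.utmstack.com/fullchain.pem /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.pem
cp /etc/letsencrypt/live/utmclient.utmstack.com/privkey.pem /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.key
docker service update utm_traefik
EOF
$ chmod +x /etc/letsencrypt/renewal-hooks/deploy/utmstack.sh

Note that certificates obtained with --manual and a DNS challenge cannot be renewed unattended unless you also provide authentication hooks for your DNS provider; at a minimum, remember to run certbot again before the expiration date.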

 

2.5.1.2 Testing with a self-signed certificate

This section is just a test, for when you want to try something quickly. First, generate an RSA private key; use a 2048-bit or, even better, a 4096-bit key.

$ openssl genrsa 2048 > wildcard.key

Provide the desired values for the first questions, but in the interactive part for the Common Name be careful to enter the domain: *.utmclient.utmstack.com

$ openssl req -new -x509 -nodes -sha256 -days 3650 -key wildcard.key > wildcard.cert

You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields, but you can leave some blank
For some fields, there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]: US
State or Province Name (full name) [Some-State]: NY
Locality Name (e.g., city) []: New York
Organization Name (eg, company) [Internet Widgits Pty Ltd]: MyCompany
Organizational Unit Name (eg, section) []: MyDepartment
Common Name (e.g. server FQDN or YOUR name) []: *.utmclient.utmstack.com
Email Address []: mail@yourdomain.com

The certificate generated above is already self-signed by the previous command. Optionally, you can dump its details for inspection:

$ openssl x509 -noout -fingerprint -text < wildcard.cert > wildcard.info

Finally you can bundle the data

$ cat wildcard.cert wildcard.key > wildcard.pem

$ cp wildcard.pem /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.pem
$ cp wildcard.key /var/lib/docker/volumes/vol_traefik_config/_data/wildcard.key
$ docker service update utm_traefik

utm_traefik
overall progress: 1 out of 1 task
1/1: running   [==================================================>]
verify: Service converged

IMPORTANT: This self-signed certificate is used for trial purposes only, NOT suitable in production!

 

 

2.5.2 Elasticsearch Configuration

 

2.5.2.1 Cluster Settings

The Elasticsearch cluster settings are all dynamic and can be set using curl, Kibana, or the simpler Cerebro. The next options can be set exactly as shown in the Kibana Console; in order to use Cerebro, the syntax is a bit different, so check the helping image to be sure.

This is the standard format of the cluster settings:

PUT /_cluster/settings
{
  "persistent": {
    "cluster.routing.rebalance.enable": "all",
    "cluster.routing.allocation.enable": "all",
    "cluster.routing.allocation.disk.threshold_enabled": false,
    "cluster.routing.allocation.node_concurrent_recoveries": 5,
    "cluster.routing.allocation.cluster_concurrent_rebalance": 5,
    "cluster.routing.allocation.node_initial_primaries_recoveries": 5,
    "cluster.routing.allocation.allow_rebalance": "indices_primaries_active",
    "cluster.routing.allocation.same_shard.host": true,
    "indices.recovery.max_concurrent_file_chunks": 5,
    "indices.recovery.max_bytes_per_sec": "100mb"
  }
}

If you feel comfortable using curl at a bash prompt on Linux, use the next command to set up your cluster:

$ curl -X PUT "http://elastic.utmclient.utmstack.com/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.rebalance.enable": "all",
    "cluster.routing.allocation.enable": "all",
    "cluster.routing.allocation.disk.threshold_enabled": false,
    "cluster.routing.allocation.node_concurrent_recoveries": 5,
    "cluster.routing.allocation.cluster_concurrent_rebalance": 5,
    "cluster.routing.allocation.node_initial_primaries_recoveries": 5,
    "cluster.routing.allocation.allow_rebalance": "indices_primaries_active",
    "cluster.routing.allocation.same_shard.host": true,
    "indices.recovery.max_concurrent_file_chunks": 5,
    "indices.recovery.max_bytes_per_sec": "100mb"
  }
}'

The output should be:

{
  "acknowledged" : true,
  "persistent" : {
    "cluster" : {
      "routing" : {
        "rebalance" : {
          "enable" : "all"
        },
        "allocation" : {
          "disk" : {
            "threshold_enabled" : "false"
          },
          "node_initial_primaries_recoveries" : "5",
          "enable" : "all",
          "same_shard" : {
            "host" : "true"
          },
          "allow_rebalance" : "indices_primaries_active",
          "cluster_concurrent_rebalance" : "5",
          "node_concurrent_recoveries" : "5"
        }
      }
    },
    "indices" : {
      "recovery" : {
        "max_bytes_per_sec" : "100mb",
        "max_concurrent_file_chunks" : "5"
      }
    }
  },
  "transient" : { }
}
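You can confirm at any time that the settings were persisted by reading them back:

$ curl -X GET "http://elastic.utmclient.utmstack.com/_cluster/settings?pretty"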

Using the visual managers Kibana and Cerebro:

 

Note: The settings in the Kibana Console are applied directly, as shown; hit the icon with the balloon tip "Click to send request" to commit the changes. If you don't see the image, it's located in assets/set_cluster_settings_kibana.png

 

Note: The settings in Cerebro are divided into parts, marked in the image; select PUT and hit send to commit the changes. If you don't see the image, it's located in assets/set_cluster_settings_cerebro.png

 

2.5.2.2 Rollover configuration

Planning a lifecycle for our data, moving daily indices through 3 phases (hot, warm, cold), is a good practice to save some hardware resources, since old data is requested less than new data. Active (newly created) indices will stay hot for 90 days, then roll over to the warm phase; they stay there for roughly another 90 days to finally reach the cold phase. We will arrange this with a template, applied to all new indices, that points to a lifecycle policy, so fresh indices will be configured automatically.

This is the standard format of these types of settings. Both can also be set visually with Kibana or Cerebro in the same way you managed the Cluster Settings.

Lifecycle Policy

PUT _ilm/policy/index-lifecycle-policy-quarterly
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {"max_age": "90d"},
          "set_priority": {"priority": 50}
        }
      },
      "warm": {
        "min_age": "1d",
        "actions": {
          "allocate": {"number_of_replicas": 0},
          "forcemerge": {"max_num_segments": 1},
          "set_priority": {"priority": 25},
          "shrink": {"number_of_shards": 1}
        }
      },
      "cold": {
        "min_age": "90d",
        "actions": {
          "freeze": {},
          "set_priority": {"priority": 0}
        }
      }
    }
  }
}

This is a CURL command to set this Rollover policy:

$ curl -X PUT "http://elastic.utmclient.utmstack.com/_ilm/policy/index-lifecycle-policy-quarterly?pretty" -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {"max_age": "90d"},
          "set_priority": {"priority": 50}
        }
      },
      "warm": {
        "min_age": "1d",
        "actions": {
          "allocate": {"number_of_replicas": 0},
          "forcemerge": {"max_num_segments": 1},
          "set_priority": {"priority": 25},
          "shrink": {"number_of_shards": 1}
        }
      },
      "cold": {
        "min_age": "90d",
        "actions": {
          "freeze": {},
          "set_priority": {"priority": 0}
        }
      }
    }
  }
}'

The output should be:

{
  "acknowledged" : true
}
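As with the cluster settings, you can read the policy back to confirm it was stored:

$ curl -X GET "http://elastic.utmclient.utmstack.com/_ilm/policy/index-lifecycle-policy-quarterly?pretty"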

 

2.5.2.3 Indices template:

This is a simple concept: whenever a brand-new index's name matches the pattern, in this case index-*, it will be set up with the settings defined in the template. This is mainly used to "link" the matching index to the previously defined lifecycle policy.

PUT _template/template-all-index-client-code
{
  "index_patterns": ["index-*"],
  "settings": {
    "index": {
      "lifecycle": {"name": "index-lifecycle-policy-quarterly"},
      "mapping": {"total_fields": {"limit": "2000"}},
      "store": {"preload": ["nvd", "dvd"]},
      "number_of_shards": "2",
      "number_of_replicas": "0"
    }
  },
  "mappings": {},
  "aliases": {}
}

This is a CURL command to add the template:

$ curl -X PUT "http://elastic.utmclient.utmstack.com/_template/template-all-index-client-code?pretty" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["index-*"],
  "settings": {
    "index": {
      "lifecycle": {"name": "index-lifecycle-policy-quarterly"},
      "mapping": {"total_fields": {"limit": "3000"}},
      "number_of_shards": "3",
      "number_of_replicas": "0"
    }
  },
  "mappings": {},
  "aliases": {}
}'

The output should be:

{
  "acknowledged" : true
}
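Again, you can read the template back to double-check it:

$ curl -X GET "http://elastic.utmclient.utmstack.com/_template/template-all-index-client-code?pretty"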

 

2.5.3 Check all URLs

The first checkpoint is the main product, UTMStack: look at this URL: http://www.utmclient.utmstack.com/manager/html. You should see a "true" flag showing that everything is OK. If, for some reason, the war file (utmstack.war) is not properly deployed, this is the place to fix it; since we made many changes in the post-installation process, it must be checked. It can be manually redeployed: just hit "Deploy" in the applications section.

Test every other URL pointing to the services. Remember to use the admin username and password you provided during the installation process, except for "Portainer", which requests a username and password at first login.

UTM Service     To be used as                     URL
utm_cerebro     Elasticsearch web admin tool      http://cerebro.utmclient.utmstack.com
utm_elastic     Elasticsearch coordination node   http://elastic.utmclient.utmstack.com
utm_kibana      Elasticsearch web admin tool      http://kibana.utmclient.utmstack.com
utm_openvas     Openvas vulnerability scanner     http://openvas.utmclient.utmstack.com
utm_portainer   Manage your docker environments   http://portainer.utmclient.utmstack.com
utm_traefik     Load balancer & reverse proxy     http://traefik.utmclient.utmstack.com
utm_tomcat      Java webserver (application)      http://www.utmclient.utmstack.com
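If you prefer a quick scripted check over opening each page by hand, a small loop like this one (assuming the subdomain scheme above) prints the HTTP status code of every service; anything in the 200-401 range means the service is answering (401 is expected where authentication is enforced):

$ for svc in www cerebro elastic kibana openvas portainer traefik; do printf '%-10s ' "$svc"; curl -skL -o /dev/null -w '%{http_code}\n' "http://$svc.utmclient.utmstack.com"; done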

 

2.6 Repositories & Snapshots (backups)

If you want to back up your data, either to save some space in your storage or simply to have a copy of one or more indices, you must first create a repository. If you don't set the location, the default repository location will be used; in this deployment, that is under the folder /usr/share/elasticsearch/backup. You can then create snapshots inside that repository using Kibana or Cerebro, both intuitive and easy to use; check the images showing the process.

CURL command to add a repository to hold the snapshots (backups)

$ curl -X PUT "http://elastic.utmclient.utmstack.com/_snapshot/utm_repository?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": {
    "location": "utm_repository"
  }
}'
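With the repository registered, a snapshot can also be created from the command line. This is just a sketch; the snapshot name and index pattern are examples, so adjust them to the indices you want to back up:

$ curl -X PUT "http://elastic.utmclient.utmstack.com/_snapshot/utm_repository/snapshot-example?wait_for_completion=true&pretty" -H 'Content-Type: application/json' -d'
{
  "indices": "index-*",
  "ignore_unavailable": true,
  "include_global_state": false
}'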

Or you can use "Cerebro" to make it easier. First, we can see how to manage repositories, go to the menu option "more" => "repositories":

 

If you don't see the image, it's located in assets/register_repository_cerebro.png

And here we can see how to register a snapshot (backup) of one or more indices: go to the menu option "more" => "snapshot":

 

If you don't see the image, it's located in assets/register_snapshot_cerebro.png

 

2.7 Uninstall or Roll-back

You can use the script "uninstall" from the installation folder.

$ ./uninstall

Or, if you want to do it by hand, just follow the next instructions to properly disassemble all the Docker Swarm services.

 

2.7.1 Remove stack, services, and containers

Taking down the stack takes a few minutes; check the containers and services to verify everything is gone.

$ docker stack rm utm

Removing service utm_cerebro
Removing service utm_es-data
Removing service utm_es-elastic
Removing service utm_es-master
Removing service utm_kibana
Removing service utm_openvas
Removing service utm_portainer
Removing service utm_postgres
Removing service utm_tomcat
Removing service utm_traefik

$ docker service ls

$ docker container ls

Note: Wait until all services and containers are removed; repeat the two previous commands until you get empty output.
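If you prefer not to re-run those commands by hand, a simple watch does the polling for you (press Ctrl + C to exit once both lists are empty):

$ watch -n 5 'docker service ls; docker container ls'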

 

2.7.2 Remove virtual network

Since you have nothing attached to this virtual network, you can now remove it

$ docker network rm net_utmstack

 

2.7.3 Remove persistent volumes

[WARNING] YOU WILL DELETE ALL YOUR STACK DATA!!!

First, save all your data if you haven't done so yet. Permanent data is located at "/var/lib/docker/volumes/", where you can find the folders associated with your different services. Once you are completely sure you can get rid of those volumes, use this command:

$ docker volume prune

Output Example:

WARNING! This will remove all local volumes not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Volumes:
vol_gvm_data
utm_portainer_data
vol_postgres_data
vol_tomcat_conf
vol_tomcat_webapps
vol_traefik_config

 

2.7.4 Leave Swarm Mode

Now that you have emptied the Docker Swarm environment, you can leave the "swarm mode" of Docker

$ docker swarm leave --force

 

2.7.5 Remove Docker Community

This is the way to remove the Docker Community packages and clean up their configurations:

$ apt -y remove docker-ce docker-ce-cli containerd.io && apt -y purge docker-ce docker-ce-cli containerd.io && apt autoremove

 

2.7.6 Remove Support packages

Some packages were needed before installing Docker Community. They can be removed now if they are not useful for your server.

$ apt -y remove apt-transport-https ca-certificates curl gnupg-agent software-properties-common && apt -y purge apt-transport-https ca-certificates curl gnupg-agent software-properties-common && apt autoremove

 

2.7.7 Remove additional lines on APT sources

At the bottom of the "/etc/apt/sources.list", there should be some lines added by the installation scripts

$ nano /etc/apt/sources.list

Ctrl + K: remove the whole line where the cursor is located
Ctrl + O, then RETURN: save the file
Ctrl + X: exit the NANO editor

These are the Docker repo lines that can be removed:

deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable
# deb-src [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable

There will be remnants inside the folders /utmstack/data and /utmstack/repo: the data and the backups, respectively. If you are sure you will not need this content in the future, just delete both folders.

 

3 Contact Information

Title   Description
demo    demo@atlasinside.com