Building Your own Greenbone Vulnerability Manager Source Edition (GSE)

Building the GSE Vulnerability Scanner

Want to deploy Greenbone Vulnerability Manager Source Edition instead of the trial version [1] graciously provided by Greenbone Networks GmbH to the community?
Latest news
  • Updated to 21.4.4, Greenbone's latest release from February 24th, 2022
  • Updated to Greenbone's August 2021 releases (current at the time) on 2021-09-24
  • Changed to work with Debian 10 (Buster) and Debian 11 (Bullseye) on 2021-09-14.
  • Updated to GSE 21.4.0 on 2021-05-08.
The GVM framework is released under Open Source licenses as the Greenbone Source Edition (GSE).
The script on my GitHub [3] installs GSE, including the following components:
  • gvm-libs - required libraries for GVMD
  • gvm-tools - remote control of your Greenbone Vulnerability Manager (GVM)
  • GVMD - Greenbone Vulnerability Manager 
  • GSA - Greenbone Security Assistant
  • ospd-openvas - OSP server implementation to allow GVM to remotely control an OpenVAS Scanner 
  • python-gvm - Greenbone Vulnerability Management Python Library
  • openvas - Open Vulnerability Assessment Scanner, a scanner for Greenbone Vulnerability Management (GVM)
  • openvas-smb - SMB module for the OpenVAS Scanner

The installation script is based on the hard work of "sadsloth" [4] and [5], without whom I'd probably never have finished the install.

Also a BIG thank you to Greenbone Networks GmbH for supporting the security community, not least Björn Ricks (https://twitter.com/BjoernRicks), who is team lead at Greenbone Networks GmbH and responsible for openvas.

Requirements: Debian 10 (Buster) or Debian 11 (Bullseye) with 8 GB RAM or more (I've tested with 5 GB, and it works). It should work on Ubuntu too, but that hasn't been tested - yet.

I'm striving to find the time to maintain (and improve) this installation script as well as add additional scanners, not least the ospd-nmap implementation. However, any help is appreciated, so go ahead and create issues on GitHub.

[1] Greenbone Security Manager Trial: https://www.greenbone.net/en/testnow/
[2] Greenbone Source: https://github.com/greenbone/
[3] Bash installation script: https://github.com/martinboller/gse.git


Stratum-1 NTP Server & ntppool.org

It's about time

What is ntppool.org? From the website:
"The pool.ntp.org project is a big virtual cluster of timeservers providing reliable easy to use NTP service for millions of clients.
The pool is being used by hundreds of millions of systems around the world. It's the default "time server" for most of the major Linux distributions and many networked appliances (see information for vendors).
Because of the large number of users we are in need of more servers. If you have a server with a static IP address always available on the internet, please consider adding it to the system."
While not good OpSec*, I wanted to support the ntppool by adding a stratum-1 server to it.
* While the GPS coordinates of my server give my location away, this is public information already. Also note that the rounding actually results in Google Maps not pinpointing my home exactly :). My IP address is easily found too, but I'll have to live with that.

To avoid exposing any internal stratum-1 servers to the internet, the firewall itself was "upgraded" with a GPS receiver configured to provide NMEA time info and PPS over serial - more on that later. This is something you should never do on your corporate firewall, but I've accepted the associated risk - most importantly, NTP is not running as root.
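As background on the "NMEA time info" mentioned above, here is a minimal Python sketch of how a $GPRMC sentence is validated and its UTC timestamp extracted. This is illustration only - ntpd's NMEA refclock driver handles this natively - and the sample sentence is the classic example from NMEA 0183 documentation.

```python
# Illustration only: validate the checksum of an NMEA $GPRMC sentence and
# extract its UTC timestamp. ntpd's NMEA refclock driver does this natively.
from datetime import datetime, timezone


def nmea_checksum_ok(sentence: str) -> bool:
    """XOR every character between '$' and '*', compare to the hex suffix."""
    body, _, checksum = sentence.strip().lstrip("$").partition("*")
    calc = 0
    for ch in body:
        calc ^= ord(ch)
    return f"{calc:02X}" == checksum.upper()


def gprmc_utc(sentence: str) -> datetime:
    """Parse the hhmmss time and ddmmyy date fields of a $GPRMC sentence."""
    fields = sentence.split("*")[0].split(",")
    t, d = fields[1], fields[9]                    # time and date fields
    yy = int(d[4:6])
    year = 1900 + yy if yy >= 80 else 2000 + yy    # common two-digit-year pivot
    return datetime(year, int(d[2:4]), int(d[0:2]),
                    int(t[0:2]), int(t[2:4]), int(t[4:6]), tzinfo=timezone.utc)


# Canonical example sentence from NMEA 0183 documentation
sample = "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"
print(nmea_checksum_ok(sample))   # True
print(gprmc_utc(sample))          # 1994-03-23 12:35:19+00:00
```

The PPS pulse carries no date or time at all - just a precise edge marking the top of each second - which is why the (comparatively sloppy) serial NMEA sentence is still needed to number the seconds.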

After configuring and tuning NTP on the firewall [4], it was time to add it to the pool [1]. According to the firewall logs, the monitoring station in Newark, NJ, USA connected shortly thereafter, racking up the score of the system over the hours of March 6th, 2020, as seen in figure 1.

Figure 1 - Initial Monitoring 

Note: It is noticeable from the monitoring that the server was added too hastily - it was still being tuned.

Update: After the first day the score has been consistently at 20 as shown in figure 2.

Figure 2 - March 9th and 10th Monitoring

After achieving a score of 10+, clients started connecting (as expected). Connected peers can be shown using $ sudo ntpq -n -cmru. However, I'm also logging all requests on the WAN interface to a separate log file. To ensure that it doesn't outgrow the disk, logrotate is configured to rotate that logfile daily, keeping only 1 backup. The logs are ingested into Elasticsearch, so there's no reason to keep them on the firewall itself for longer periods of time.
The logging performed for NTP may seem excessive and puts an extra load on the system, but it is justified by the need for visibility into this traffic.
The NTP logging is used to monitor NTP activity, including visualizations in Kibana. A few examples are shown in the figures below. The numbers are relatively low, as I had a power outage just before capturing these images, so the 24-hour view lacks a few hours of activity.
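As a sketch, the logrotate configuration for the NTP request log could look like this (the log file path is an example, not the actual path on my firewall; adjust to wherever your ntp logging writes):

```
# /etc/logrotate.d/ntp-wan (sketch - log path is an example)
/var/log/ntp-wan.log {
    daily
    rotate 1
    compress
    missingok
    notifempty
}
```

With rotate 1 and daily, at most two days of requests ever sit on the firewall's disk; the long-term copy lives in Elasticsearch.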

 Figure 3 - Count of requests and peers

 Figure 4 - World Map of NTP peers 

 Figure 5 - Autonomous System and Country

In an effort to provide a dashboard for everything NTP, local dashboards for NTPPOOL monitoring were also created. The data from ntppool.org is pulled daily at noon (RandomizedDelaySec=7200) and ingested into Elasticsearch. The specific visualizations are shown below.
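As a sketch of the scheduling, a systemd timer matching the "daily at noon, RandomizedDelaySec=7200" behavior could look like the following (the unit name and the existence of a matching ntppool-pull.service are assumptions for illustration):

```
# /etc/systemd/system/ntppool-pull.timer (hypothetical unit name)
[Unit]
Description=Pull ntppool.org monitoring data daily around noon

[Timer]
OnCalendar=*-*-* 12:00:00
RandomizedDelaySec=7200
Persistent=true

[Install]
WantedBy=timers.target
```

RandomizedDelaySec spreads the actual pull over a two-hour window after noon, which is polite to ntppool.org when many people schedule on the hour.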

Figure 6 - Offset Monitoring based on NTPPOOL monitoring data

Figure 7 - Score Monitoring based on NTPPOOL monitoring data

When the NTP server was added to the pool, the NTPPOOL monitoring gave it a low score, which slowly but steadily increased over the following hours. The score then saw a slight decrease, correlating with the (planned) change to an outdoor GPS antenna. Late on the 7th the score was at 20, and it stayed there until a power outage on the 12th (a local incident in the household), which caused it to drop below 10 (5.7 to be precise). The development in score from day 0 until March 15th is shown in figure 8, below.

Figure 8 - Score Monitoring from Day 0 - March 15th

An observation about ISPs' and cloud providers' use of servers in the pool
NTP clients from several major ISPs as well as cloud providers connect to timeservers in the pool, including mine. While that is fine, it seems wrong - at least to me - that these providers (often) do not themselves support the pool with NTP servers and/or help their customers use the providers' own servers. The cost of doing that would be negligible, and they (supposedly) have the required infrastructure.
So if you're working for an ISP or cloud provider, please push this agenda to the right people; you're all running on open source software anyway.

The actual configuration

The original firewall, discussed in a previous blog post, was (re)configured as briefly described below.

The GPS receiver used is based on the u-blox NEO-7M. Chinese factories are cranking these boards out in high numbers, and they can be found on eBay for around $7. I've bought quite a few of these exact boards and have deployed 5 stratum-1 servers using them so far.

Please note that they do not have any holdover, so they will stop disciplining NTP when there's no GPS fix. When that happens, the quality of the other peers configured in ntp.conf and the internal real-time clock (RTC) determines how badly the server will drift until there's a GPS fix again. Buying a device with an oven-controlled crystal oscillator (OCXO) would help mitigate this, but the price tag is much higher (still considering it, as used ones sometimes come up at decent prices). Be aware that devices such as BG7TBL's GPSDO (GNSSDO) do not deliver PPS without a GPS fix either, so don't buy them for that purpose; however, there are some refurbished Symmetricom and Samsung devices available that would do the job.

Picture 1 - U-Blox NEO7M board

Opening the APU and connecting the GPS

For the APU4C4 case, I drilled a 6.5 mm hole in the front of the lid and mounted the GPS board internally. It had to be mounted there, as there's no room left at the back with 4 NICs and wireless. The board is connected to J18 on the APU board itself, as discussed below.

Picture 2 - APU 4C4 with mounted GPS board

Connect the GPS to J18 as described in Table 1, using serial #3 of the LPC UART (the schematics for the APU4C4 board can be found here: https://www.pcengines.ch/schema/apu4c.pdf; the LPC UART is shown on page 12 of 18).

Connecting to serial #3 translates to the serial device becoming /dev/ttyS2. It was chosen for the following main reasons:
  1. Serial #1 (COM1 on the schematic) is used for console access and should be reserved for that purpose.
  2. The GPS board is 3 V, so it doesn't support the RS232 levels on Serial #1 or Serial #2 (COM2 on the schematic).
  3. Serial #2 also does not have DCD (or RI), which is required for PPS.
  4. Serial #3 is thus the first feasible port for communicating with the GPS.

GPS to LPC UART connections
GPS    J18       J18 Pin    Comment
GND    Ground    1          Ground
VCC    V3        2          3 Volt
TXD    RXD3#     7          TX (GPS) -> RX (J18)
RXD    TXD3#     8          RX (GPS) -> TX (J18)
PPS    DCD3#     9          Kernel PPS uses DCD
Table 1 - GPS to APU4C4 connection
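With the wiring in place, ntpd can use the NMEA reference clock driver (type 20) with kernel PPS via the DCD line. A minimal sketch is shown below; it assumes /dev/gps0 is symlinked to /dev/ttyS2, and the mode and fudge values are common starting points that must be tuned for the specific receiver, not the exact values from my config:

```
# /etc/ntp.conf (sketch) - NMEA refclock with kernel PPS via DCD
# Requires: ln -s /dev/ttyS2 /dev/gps0
server 127.127.20.0 mode 17 minpoll 4 maxpoll 4 prefer
fudge  127.127.20.0 flag1 1 time2 0.400 refid GPS
# mode 17 = process $GPRMC sentences at 9600 baud
# flag1 1 enables PPS processing on the DCD pin
# time2 compensates for the serial NMEA delay - calibrate against good peers
```

Without flag1, only the comparatively jittery serial sentences discipline the clock; with PPS the offset typically drops to the low microseconds.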


CPU Throttling

The impact on PPS timing from the CPU changing clock frequency dynamically is very noticeable in the accuracy of NTP. Thus the CPU frequency governor is set to performance using sysfsutils, which ensures that all CPU cores run at 1 GHz all the time. This was added to the main script, as it doesn't hurt netfilter's performance either.
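With sysfsutils installed, the governor is pinned via /etc/sysfs.conf; a sketch for the four cores of the APU4C4:

```
# /etc/sysfs.conf - pin all four APU cores to the performance governor
devices/system/cpu/cpu0/cpufreq/scaling_governor = performance
devices/system/cpu/cpu1/cpufreq/scaling_governor = performance
devices/system/cpu/cpu2/cpufreq/scaling_governor = performance
devices/system/cpu/cpu3/cpufreq/scaling_governor = performance
```

The settings are applied at boot by the sysfsutils service; verify with cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor.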

[1] How do I join pool.ntp.org? https://www.ntppool.org/en/join.html
[2] My NTP server: https://www.ntppool.org/a/aika
[3] http://support.ntp.org/bin/view/Servers/PublicTimeServer001660
[4] Debian Firewall: https://github.com/martinboller/DebFirewall
[5] PC Engines APU4C4 Schematics: https://www.pcengines.ch/schema/apu4c.pdf

Update: Added the server to the beta monitoring system and changed the graphs accordingly.

Figure 9 - Beta Offset Monitoring

 Figure 10 - Beta Score Monitoring


Postfix/Postscreen using Blocklists and Deep Protocol tests

When configuring Postfix, it is worth considering an "aggressive" postscreen configuration, saving valuable bandwidth and processing power.

However, some configurations may lead to issues with some mail providers. Specifically, I've had issues with Google GMail and Microsoft Hotmail, caused by blocklists and deep protocol tests respectively.

Microsoft Hotmail / Outlook / live

Initially postscreen was configured to use the following blocklists:
  1. zen.spamhaus.org
  2. bl.spameatingmonkey.net
  3. dnsbl.habl.org
  4. bl.spamcop.net
  5. dnsbl.sorbs.net
However, sorbs.net appears to be a little too harsh on Microsoft, leading to rejects of Hotmail IP addresses, so if that is a problem for you, consider not using that list.
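As a sketch, the corresponding postscreen_dnsbl_sites setting could look like the following; the per-list weights shown are illustrative examples (not my actual weights), and sorbs.net is left out per the note above:

```
# /etc/postfix/main.cf (sketch - weights are examples)
postscreen_dnsbl_sites =
    zen.spamhaus.org*3,
    bl.spameatingmonkey.net*2,
    bl.spamcop.net*2,
    dnsbl.habl.org*1
postscreen_dnsbl_threshold = 3
```

With these weights, a single zen.spamhaus.org hit reaches the threshold of 3 on its own, while the lighter lists need to agree with each other before a client is blocked.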

Google GMail

GMail uses a large number of IPv4 and IPv6 addresses. That, combined with the following behavior of deep protocol tests, causes trouble:
"When any "deep protocol tests" are configured, postscreen(8) cannot hand off the "live" connection to a Postfix SMTP server process in the middle of the session. Instead, postscreen(8) defers mail delivery attempts with a 4XX status, logs the helo/sender/recipient information, and waits for the client to disconnect. The next time the client connects it will be allowed to talk to a Postfix SMTP server process to deliver its mail. postscreen(8) mitigates the impact of this limitation by giving deep protocol tests a long expiration time."

As GMail does not resend from the same IP address after the 4xx, this generates a lot of "reject noise" in the mail log (not least for IPv6). Instead of disabling deep protocol tests, just configure postscreen_dnsbl_whitelist_threshold with a negative value.

Given the above the postscreen section of /etc/postfix/main.cf now looks like this:
## Postscreen settings
postscreen_access_list =
postscreen_blacklist_action = enforce

# Use selected DNSBLs
postscreen_dnsbl_sites =
postscreen_dnsbl_threshold = 3
postscreen_dnsbl_action = enforce
# Whitelist everything below threshold on BLs
postscreen_dnsbl_whitelist_threshold = -1

postscreen_greet_banner = Welcome, please wait...
postscreen_greet_action = enforce

# Deep protocol tests
postscreen_pipelining_enable = yes
postscreen_pipelining_action = enforce

postscreen_non_smtp_command_enable = yes
postscreen_non_smtp_command_action = enforce

postscreen_bare_newline_enable = yes
postscreen_bare_newline_action = enforce
Run postfix reload to activate any changes.

Before going full reject, read the POSTSCREEN_README from Postfix (http://www.postfix.org/POSTSCREEN_README.html) and start with ignore instead of enforce, which is useful for testing and collecting statistics without blocking mail from the get-go.


Secure TLS Server Configurations

A few notes on configuring a web server with secure TLS protocol versions and ciphers. NGINX is used in the examples herein, but the protocols, ciphers, and headers should be universal.

There's some good advice from Mozilla here [2] [3], and @rootsecdev wrote a Medium Post on configuring IIS on Windows Server 2019 [4].


Only TLSv1.2 and TLSv1.3 are considered secure, so start by configuring these as the only supported protocols.

Change the config file to:

ssl_protocols TLSv1.2 TLSv1.3;

If only modern clients are used you can get away with TLSv1.3 only.
ssl_protocols TLSv1.3;

Windows Server builds 1903 and newer support TLSv1.3 too, but enabling it is not covered in [4]. It can be found with some Google-fu, though.


Only cipher suites with Perfect Forward Secrecy (PFS) should be used, and CBC mode has known issues (see [1] for more information). That effectively leaves the ECDHE AES-GCM and ChaCha20-Poly1305 suites for the NGINX conf-file.
For compatibility with older clients, use the intermediate configuration from Mozilla's configurator [3]. Please note that dropping the CBC suites will effectively remove access for Android 5 and 6, Firefox <= 47, Safari 6-8, and likely a few other legacy platforms.
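A sketch of such a cipher line for TLSv1.2 - roughly Mozilla's intermediate configuration with the CBC suites removed; verify against the current configurator output before deploying:

```
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers off;
```

Note that ssl_ciphers only affects TLSv1.2 and below; the TLSv1.3 cipher suites are fixed by the protocol and all provide PFS.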

HSTS Header

Strict Transport Security ensures that everything is served over HTTPS. Configure it as shown below for a max-age of one year:
    add_header Strict-Transport-Security "max-age=31536000" always;

In summary

  • Use TLSv1.3 and nothing else if you can get away with it. 
  • Enforce HTTPS using strict transport security.
  • If TLSv1.2 is needed limit the ciphers used as discussed above.

For all of the settings above, Mozilla has a pretty good configurator, discussed here [2] and found here [3].
If you're running Citrix ADC (The artist formerly known as Netscaler), there's a recent (Jan 2020) post worth reading [5].

[1] Padding oracles and the decline of CBC-mode cipher suites: https://blog.cloudflare.com/padding-oracles-and-the-decline-of-cbc-mode-ciphersuites/

[2] Security/Server Side TLS: https://wiki.mozilla.org/Security/Server_Side_TLS

[3] Mozilla's SSL Config generator: https://ssl-config.mozilla.org/

[4] Configuring secure cipher suites in Windows Server 2019 IIS: https://medium.com/@rootsecdev/configuring-secure-cipher-suites-in-windows-server-2019-iis-7d1ff1ffe5ea

[5] Citrix Networking SSL / TLS Best Practices: https://docs.citrix.com/en-us/tech-zone/build/tech-papers/networking-tls-best-practices.html


Detecting CVE-2020-0601 Windows CryptoAPI Spoofing Vulnerability exploit attempts

After installing the patch for CVE-2020-0601 (Windows CryptoAPI Spoofing Vulnerability), the system will log EventID 1 in the Application log to indicate an attempt to exploit the vulnerability.

The awesome Didier Stevens (https://twitter.com/DidierStevens) created a VBA script to generate that event [1]; however, in order to test the flow of this from several systems, PowerShell was the way to go (Didier did all the heavy lifting). The one-liner goes like this:

Write-EventLog -LogName "Application" -Source "Microsoft-Windows-Audit-CVE" -EventID 1 -EntryType Warning -Message "[CVE-2020-0601] alert validation" -Category 0 -RawData 0xDE,0xAD,0xBE,0xEF

This should show up like this in Event Viewer:

If you're selective about which log events you forward (and you should be), here's the XPath query used to collect this:

<QueryList>
  <Query Id="0" Path="Application">
    <Select Path="Application">*[System[Provider[@Name='Microsoft-Windows-Audit-CVE' or @Name='Microsoft-Windows-UAC'] and (EventID=1)]]</Select>
  </Query>
</QueryList>
Filtering for it in Kibana:
event.code: 1 and event.provider: Microsoft-Windows-Audit-CVE

After ingesting into Elasticsearch, the alert (in Alerta [2]) looks like this:

[1] Using CveEventWrite From VBA (CVE-2020-0601): https://blog.didierstevens.com/2020/01/15/using-cveeventwrite-from-vba-cve-2020-0601/
[2] https://alerta.io/