Managing Postfix headers for Privacy and Red Teaming

Managing Postfix Headers

Postfix allows you to change headers using header_checks [1]. It is relatively simple; however, as always, you will (as I do) mess up the required regular expressions now and again.
The simple examples discussed here shouldn't land you in that territory.
The overall process
  1. Configure postfix to enable header_checks.
  2. Edit the header_checks file and add the regular expressions needed.
  3. Run postmap to rebuild the database.
  4. Restart postfix.
  5. Troubleshoot.

1. Configure postfix

Add the following line to /etc/postfix/main.cf, if it's not already there. Note that smtp_header_checks is applied by the Postfix SMTP client, i.e. to outbound mail; use header_checks instead if the rules should be applied by cleanup(8) to all mail entering the queue.

smtp_header_checks = regexp:/etc/postfix/header_checks.cf


2. Header Check File

Add your regular expressions to the /etc/postfix/header_checks.cf file.
Examples will follow shortly.

3. Rebuild Database

Run postmap against the header checks file. (Strictly speaking, a regexp: table is read directly by Postfix and needs no compiled database; postmap is only required for indexed types such as hash:, but it does no harm here.)

# postmap hash:/etc/postfix/header_checks.cf

4. Restart Postfix

As simple as running the following (if your distro uses the systemd init system):
# systemctl restart postfix

5. Troubleshooting

First, make sure that no overrides are specified for postfix, by running:
# grep -Frail receive_override_options /etc/postfix/
If main.cf and/or master.cf contain overrides, modify them accordingly, or comment the respective overrides out.
Use postconf to verify the new setting in main.cf:
# postconf -n | grep header
This should output something similar to:
smtp_header_checks = regexp:/etc/postfix/header_checks.cf
Always check your regular expressions before deploying them to production. Several tools are available to help with that. I prefer CyberChef [3] [4].
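A quick offline sanity check is to run the patterns against sample header lines with grep before touching Postfix at all. grep -E uses POSIX extended regexes, which for simple patterns like these behave the same as the ones Postfix regexp: tables use (the sample header values below are made up for illustration):

```shell
# Sample header lines a message might contain
printf 'Received: from mail.internal (10.0.0.5)\nSubject: Quarterly report\nUser-Agent: Mutt/2.2.9\n' > /tmp/sample-headers.txt

# Each pattern should match exactly the line(s) you expect it to act on
grep -cE '^Received:'   /tmp/sample-headers.txt   # expect: 1
grep -cE '^User-Agent:' /tmp/sample-headers.txt   # expect: 1
```

If a count comes back 0, or higher than expected, fix the pattern before it ever reaches production mail flow.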

For privacy

Removing the Received headers, which give away the hostname/IP of the sending client along with other artifacts, is as simple as adding the following to /etc/postfix/header_checks.cf:

/^Received:/    IGNORE

/^User-Agent:/    IGNORE 

This will remove internal hostnames and IPs from the mail headers.
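To preview the effect of the two IGNORE rules, you can emulate them on a sample header block with grep -Ev. This is only a rough stand-in (Postfix also handles folded multi-line headers, which a line-based grep does not), and the header values are made up:

```shell
# Drop Received: and User-Agent: lines from a sample header block
printf 'Received: from workstation.lan [192.168.1.23]\nFrom: alice@example.com\nUser-Agent: Thunderbird\nSubject: Hello\n' \
  | grep -Ev '^(Received|User-Agent):'
# Leaves only:
# From: alice@example.com
# Subject: Hello
```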

As discussed above, run postmap, then restart postfix.

# postmap hash:/etc/postfix/header_checks.cf

# systemctl restart postfix

# postconf -n | grep header 


For Red Teaming: Bypassing mail filters

Many companies use a third party to perform phishing tests. Some of those use a well-known X-Header to bypass spam filtering [2]. Thus the only thing you have to do to get past that pesky filter is to add that header. Below is an example taken from [2].
/^Subject:/i PREPEND X-PHISHTEST: KnowBe4
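Outside of Postfix, you can preview what PREPEND will do with sed (this assumes GNU sed, for the `I` case-insensitivity flag and `\n` in the replacement; the sample message is made up):

```shell
# Insert the bypass header immediately before the Subject: line
printf 'From: hr@example.com\nSubject: Mandatory training\n' \
  | sed -E 's/^(Subject:)/X-PHISHTEST: KnowBe4\n\1/I'
# Output:
# From: hr@example.com
# X-PHISHTEST: KnowBe4
# Subject: Mandatory training
```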

Again, after changing header_checks.cf, run postmap, then restart postfix.

# postmap hash:/etc/postfix/header_checks.cf

# systemctl restart postfix

# postconf -n | grep header

Believe it or not, it really is true: several vendors really do ask their customers to drill a huge hole in their e-mail defenses [2].
Some other headers useful for shenanigans:
X-ThreatSim-Header: http://whateva
X-ThreatSim-ID: {GUID}

Nevertheless, do not use these headers on your production system! Only use them in a pentest/red teaming exercise that you have been given permission to perform.

Other examples

Another example is hiding the specific AV scanner used in your setup; this can be done by specifying:
/^X-Virus-Scanned:/i REPLACE X-Virus-Scanned: Trend Micro
Thereby replacing the actual scanner details with the text "Trend Micro".
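The REPLACE action can likewise be previewed with GNU sed, rewriting the whole matched header line (the ClamAV banner below is a made-up example):

```shell
# Replace the real scanner banner with a generic one
printf 'X-Virus-Scanned: ClamAV using ClamSMTP\n' \
  | sed -E 's/^X-Virus-Scanned:.*/X-Virus-Scanned: Trend Micro/I'
# Output: X-Virus-Scanned: Trend Micro
```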

The complete header_checks.cf

The complete header_checks.cf file would contain the following:

/^Subject:/i PREPEND X-PHISHTEST: KnowBe4
/^X-Virus-Scanned:/i REPLACE X-Virus-Scanned: Trend Micro
/^Received:/    IGNORE
/^User-Agent:/    IGNORE


[1] Postfix Header Checks: http://www.postfix.org/header_checks.5.html

[2] X-PHISHTEST: https://support.knowbe4.com/hc/en-us/articles/212723707-Whitelisting-by-Email-Header-in-Exchange-2013-2016-or-Microsoft-365

[3] CyberChef on GitHub: https://github.com/gchq/CyberChef

[4] How to build CyberChef: https://blog.infosecworrier.dk/2021/12/how-to-build-cyberchef.html


How to build CyberChef

CyberChef. The Cyber Swiss Army Knife.

A web app for encryption, encoding, compression and data analysis.

  • Always wanted to build the latest version of CyberChef?
  • Struggled with Node versions and other weird dependencies?
  • Did grunt dev and grunt prod fail with cryptic error messages?
  • Are you a noob with node (like me)?

Well, then at least there's two of us...


Building CyberChef

CyberChef used to work best with Node version 10, now with 16 :). Back then my first mistake was to (naïvely) expect it to build with a later version, like the one in the Debian repositories. I should've read the simple installation instructions found here [1].

Things started looking a little better after that; however, stupid little things still broke the build, such as having to fix fixCryptoApiImports because it is hard-coded to use /bin/sh (which is now in /usr/bin/sh for Debian based distros).
The idea was to build a build system, then use the CyberChef build produced on it on a web server or container without all the development pieces. The simplified flow for this ended up like this:
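In text form, a rough sketch of that flow (the host names are the ones used later in this post):

```text
Vagrant
  └─> "Charpentier" build VM (old Node version)
        clone CyberChef → npm install → grunt prod
          └─> finished build copied to the shared ./CyberChef directory
                ├─> "Cyberchef" web server VM picks the build up and serves it
                └─> or: copy the build manually to your own web server
```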


My dev and test environments are based on Debian 11 (Bullseye), Vagrant, and VirtualBox. Mainly because I have to pay for it myself, but also because it makes it possible to spin up test environments in minutes, even for Elastic/OpenStack clusters, and, not least, to create a build server with an old, obscure version of Node, like the one CyberChef requires.


The final thing now spins up a development/build server called Charpentier, which builds the latest version of CyberChef and copies it to the virtual host in the ./CyberChef directory. You can either let Vagrant continue installing the web server called "Cyberchef", which will copy the newly created build and use that, or manually copy the build created on "Charpentier" to your own web server (red dotted line) and configure it yourself; the provided script "other_webserver.sh" might be helpful in getting NGINX installed with a self-signed certificate.

Logging in to the Server and/or the Website

The bootstrap script adds the public SSH key I use in that testlab. The installation disables the Vagrant user and adds another Linux user called "cyberchef" with a randomly generated password.
It is also possible to enable basic authentication for NGINX, thus requiring credentials for accessing CyberChef. However, this is commented out. Uncomment "configure_nginx_auth" towards the bottom of the "main" routine to enable this.
Information about credentials created during installation can be found in /var/lib/cyberchef/. Please delete these files after you have added the passwords to your password manager; you should probably change the passwords too.


All you need for this is Debian, Vagrant, and VirtualBox. It should also run on VMware; however, this is not tested, so please provide feedback when/if you test that.
  1. Install your preferred Linux distro (this should also run on a Mac or Windows system), then proceed to install:
  2. VirtualBox (or VMware, but as stated: not tested yet)
  3. Vagrant
After all prerequisites are in place, proceed and run the following.
  1. git clone https://github.com/martinboller/cc-build
  2. cd cc-build
  3. vagrant up <- will bring both the build system and the web-server up
  4. Wait...
For further details on installation, please refer to the README [3].
Warning: If you decide to use the virtual web-server, please, please, PLEASE ensure that you remove and/or disable the Vagrant account on it. It defaults to vagrant/vagrant. The cc-build script does that for you now, as well as creating another user with sudo privileges and a random password (look in /var/lib/cyberchef for credentials created during installation).




Happy Holidays!

Wishing everyone a wonderful and fun-filled holiday season and a wholesome 2022.

I do hope to see all of you next year. If not in meat space (reality)*, then at least virtually - at conferences, at work, at dinner, at coffee, at sea, at home, at .....

In short: You're all sorely missed!


2021 has been cool for me - Moving from one part of critical infrastructure (Finance) to another (Energy), however the issues are basically the same and the solution is not about products, but about the basics. Did have some fun with a supplier experiencing ransomware as well as Log4Shell, though.

I'm not crazy about reality, but it's still the only place to get a decent meal.
-Groucho Marx


Time for Pi

Building a Stratum-1 NTP Server with a Raspberry Pi 3 or 4

While plenty of others have posted their Raspberry Pi-based NTP server builds, here's my take on it (this post and GitHub [1]).


Picture 1, Precision timepiece


Why do I need accurate and precise time on my network?

Many organizations, not just the smaller ones, do not take the time to really get time right (pun intended). Some believe that their Active Directory PDC emulator at the top of the AD hierarchy (at least time-wise) provides accurate and precise time.
Others just assign a single random Network Time Protocol (NTP) server as the time source (some Cisco equipment only allows one, some only two; some vendors even rely on - the horror - SNTP), which is another fallacy. As Segal's law states: "A man with one watch always knows what time it is. A man with two watches is never sure." This illustrates the need for multiple sources of accurate time, and NTP does exactly that by utilizing many time sources.
This build uses a GPS/PPS-based source combined with a number of public Stratum-1 servers, but with 4 or more internal time sources, internet connectivity isn't required. Precision Time Protocol (PTP) is a totally different beast, achieving sub-microsecond (even nanosecond-level) precision, where NTP "only" provides microsecond or, depending on stratum, millisecond-level precision. NTP is likely more than accurate enough, but for even better precision and accuracy PTP is required. PTP is way out of scope for this post, so let's get back on track with NTP.

Precise and accurate time is important for many reasons, including
  • Log file accuracy, auditing & monitoring
  • Troubleshooting and recovery
  • File time stamps
  • Directory services
  • Access security and authentication (i.e. Kerberos in AD)
  • Distributed computing
  • Transactions (not least financial)

There are also legal requirements to be aware of in many industries - far more than you might think - some requiring accurate timestamps on all logs. However, the benefits of keeping accurate time are alone reason enough to do it.


NTP Server Hardware

A few Raspberry Pis with cheap GPS boards and external antennas* should be within reach for most, even home networks. With internet access you could get away with one or two Pis, adding publicly available time servers to reach the sweet spot of 5-7 time sources. Without internet access, 5 Pis won't break the bank either, and you'd even have enough redundancy to swap the inevitable worn-out SD card and update them regularly.

* I put the cheap indoor antennas inside outdoor electrical boxes like the one in picture 2 and mounted them outside with a clear view of the sky. They did work on the window sill, however with fewer satellites in view.

Picture 2, Electrical Box


It would also be within reach of many organizations to buy a cheaper NTP server, such as the LeoNTP [2] or the Meinberg M200 [3], then supplement with a few Raspberry Pis for extra sources.
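Whatever the mix of sources, the resulting source list in /etc/ntp.conf could look something like this sketch (the server and pool names are examples only, not a recommendation; driver 28 is ntpd's gpsd shared-memory reference clock):

```conf
# Local GPS/PPS reference clock via gpsd shared memory (driver type 28)
server 127.127.28.0 minpoll 4 maxpoll 4 prefer
fudge  127.127.28.0 refid GPS

# A handful of public Stratum-1 / pool servers to reach 5-7 sources
server ptbtime1.ptb.de iburst
pool   dk.pool.ntp.org iburst
```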

A lot of different GPS boards are available, including some complete HATs that can be mounted directly on the 40-pin GPIO header on the Pi; however, it's easy to connect a breakout board with 5 pieces of wire.

Picture 3, GPS Board


Chrony vs NTPD

Using chrony may increase precision, including via hardware timestamping (like PTP); however, Raspberry Pi NICs do not support hardware timestamping, as evidenced by the command "ethtool -T eth0":

Time stamping parameters for eth0:
PTP Hardware Clock: none
Hardware Transmit Timestamp Modes: none
Hardware Receive Filter Modes: none

Chrony may replace NTPD in future versions of this build if my testing shows increased precision - but please note that smarter people with better tools have already shown this to be the case [4], so why am I waiting?
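For reference, the chrony equivalent of the GPS/PPS setup would look roughly like this sketch in /etc/chrony/chrony.conf (assuming gpsd feeds shared-memory segment 0 and the PPS pulse arrives on /dev/pps0; the offset/delay values are placeholders to be calibrated):

```conf
# Coarse time from the GPS NMEA sentences via gpsd's shared memory
refclock SHM 0 offset 0.5 delay 0.2 refid NMEA noselect
# Precise second edge from the PPS signal, locked to the NMEA source
refclock PPS /dev/pps0 lock NMEA refid PPS
```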

Precision versus accuracy

While I may have used accuracy and precision loosely above, here's a definition lifted off the Internet:
Accuracy refers to how close a measurement is to the true or accepted value.
Precision refers to how close measurements of the same item are to each other.
That means it is possible to be very precise but not very accurate, and it is also possible to be accurate without being precise.

When it comes to timestamps, we want them to be both accurate and precise. However, achieving good precision across all systems on your network at least allows you to work with the timestamps you've got, and understanding the delta in accuracy even lets you correlate with data from other, more accurate systems.

[4] Building a more accurate time service at Facebook scale: https://engineering.fb.com/2020/03/18/production-engineering/ntp-service/