2021-12-30

Managing Postfix Headers for Privacy and Red Teaming

Managing Postfix Headers

Postfix allows you to change headers using header_checks [1]. It is relatively simple, although, as always, you will (as I do) mess up the required regular expressions now and again.
 
However the simple stuff discussed here isn't in that territory.
 
The overall process
 
  1. Configure Postfix to enable header checks.
  2. Edit the header_checks file and add the required regular expressions.
  3. Run postmap to rebuild the database.
  4. Restart Postfix.
  5. Troubleshoot.



1. Configure Postfix

Add the following line to /etc/postfix/main.cf - if it's not already there. Note that smtp_header_checks applies to mail sent out by the Postfix SMTP client; plain header_checks (applied by the cleanup daemon) would act on all mail entering the queue, including inbound.

smtp_header_checks = regexp:/etc/postfix/header_checks.cf

 

2. Header Checks File

Add your regular expressions to the /etc/postfix/header_checks.cf file.
Examples will follow shortly.

3. Rebuild Database

Run postmap against the header checks file:

# postmap hash:/etc/postfix/header_checks.cf

(Strictly speaking, a regexp: table is read directly by Postfix, so the postmap step only matters for indexed map types like hash: - it won't hurt, though.)

4. Restart Postfix

As simple as running the following (if your distro uses the systemd init system):
# systemctl restart postfix

5. Troubleshooting

First, make sure that no overrides are specified for Postfix, by running:
# grep -Frail receive_override_options /etc/postfix/
 
If main.cf and/or master.cf contain overrides, modify those accordingly, or comment the respective overrides out.
Use postconf to verify the new setting in main.cf:
 
# postconf -n | grep header
This should output something similar to:
smtp_header_checks = regexp:/etc/postfix/header_checks.cf
 
Always check your regular expressions before deploying them to production. Several tools are available to help with that. I prefer CyberChef [3] [4].

For privacy

Removing the Received headers that give away the hostname/IP of the sending client, as well as other artifacts, is as simple as adding the following to /etc/postfix/header_checks.cf:

/^Received:/    IGNORE

/^User-Agent:/    IGNORE 

This will remove internal hostnames and IP addresses from the mail headers.

As discussed above, run postmap, then restart postfix.

# postmap hash:/etc/postfix/header_checks.cf

# systemctl restart postfix

# postconf -n | grep header 
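To verify a rule without sending any mail, you can also query the regexp table directly with postmap - it prints the action a given header line would trigger. The Received: line below is a made-up example of what a leaking header might look like:

# postmap -q "Received: from workstation01.internal.lan (10.0.0.23)" regexp:/etc/postfix/header_checks.cf
IGNORE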

 

For Red Teaming: Bypassing mail-filters

Many companies use a third party to perform phishing tests. Some of those use a well-known X-header to bypass spam filtering [2]. Thus, the only thing you have to do to get past that pesky filter is to add that header. Below is an example taken from [2]:
/^Subject:/i PREPEND X-PHISHTEST: KnowBe4

Again, after changing header_checks.cf, run postmap, then restart postfix.

# postmap hash:/etc/postfix/header_checks.cf

# systemctl restart postfix

# postconf -n | grep header


Believe it or not, it really is true. Several vendors really do ask their customers to drill a huge hole in their e-mail defenses [2].
 
Some other headers useful for shenanigans:
X-ThreatSim-Header: http://whateva
X-ThreatSim-ID: {GUID}
 
X-Gophish-Contact:
X-Gophish-Signature:

X-PhishMe:
X-PhishMe-Tracking:
 
X-CanIPhish:
 
  
Nevertheless, do not use these headers on your production system! Only use them in a pentest/red teaming exercise that you have been given permission to perform.

Other examples

Another example is hiding the specific AV scanner used in your setup. This can be done by specifying:
/^X-Virus-Scanned:/i REPLACE X-Virus-Scanned: Trend Micro
 
This replaces the actual scanner details with the text "Trend Micro".
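Once more, postmap can confirm the rewrite before you deploy it; the ClamAV value below is just an example of what a scanner might insert:

# postmap -q "X-Virus-Scanned: ClamAV using ClamSMTP" regexp:/etc/postfix/header_checks.cf
REPLACE X-Virus-Scanned: Trend Micro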

The complete header_checks.cf

The complete header_checks.cf file would contain the following:

/^Subject:/i PREPEND X-PHISHTEST: KnowBe4
/^X-Virus-Scanned:/i REPLACE X-Virus-Scanned: Trend Micro
/^Received:/    IGNORE
/^User-Agent:/    IGNORE



References

[1] Postfix Header Checks: http://www.postfix.org/header_checks.5.html

[2] X-PHISHTEST: https://support.knowbe4.com/hc/en-us/articles/212723707-Whitelisting-by-Email-Header-in-Exchange-2013-2016-or-Microsoft-365

[3] CyberChef on GitHub: https://github.com/gchq/CyberChef

[4] How to build CyberChef: https://blog.infosecworrier.dk/2021/12/how-to-build-cyberchef.html

2021-12-27

How to build CyberChef

CyberChef. The Cyber Swiss Army Knife.

A web app for encryption, encoding, compression and data analysis.

  • Always wanted to build the latest version of CyberChef?
  • Struggled with Node versions and other weird dependencies?
  • Did grunt dev and grunt prod fail with cryptic error messages?
  • Are you a noob with node (like me)?

Well, then at least there are two of us...

Building CyberChef

CyberChef used to work best with Node version 10 - now it's 16 :). Back then, my first mistake was to (naïvely) expect it to build with a later version, like the one in the Debian repositories. I should've read the simple installation instructions found here [1].

Things started looking a little better after that; however, stupid little things still broke the build, like having to implement a fix for fixCryptoApiImports because it is hard-coded to use /bin/sh (which lives in /usr/bin/sh on Debian-based distros these days).
 
Anyhoo.
The idea was to create a build system, then use the CyberChef build produced on it on a web server or in a container, without all the development pieces. The simplified flow: spin up a build server, build CyberChef there, and copy the finished build to the web server.
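If you just want the manual version of what the build server does, it boils down to something like the following. The nvm part is my assumption (any way of pinning the Node version will do), and the grunt target is the one mentioned above:

git clone https://github.com/gchq/CyberChef.git
cd CyberChef
nvm install 16 && nvm use 16   # pin the Node version CyberChef expects
npm install                    # pull in the (many) dependencies
npx grunt prod                 # production build ends up in build/prod/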
 
My dev and test environments are based on Debian 11 (Bullseye), Vagrant, and VirtualBox - mainly because I have to pay for it myself, but also because it is possible to spin up test environments in minutes - even for Elastic / OpenStack clusters - and, not least, for creating a build server with an old, obscure version of Node, like the one CyberChef requires.

Result

The final thing now spins up a development/build server called Charpentier, which builds the latest version of CyberChef and copies it to the virtual host in the ./CyberChef directory. You can either let Vagrant continue installing the web server called "Cyberchef", which will copy and use the newly created build, or manually copy the build created on "Charpentier" to your own web server (red dotted line) and configure that yourself - the provided script "other_webserver.sh" might be helpful in getting NGINX installed with a self-signed certificate.

Logging in to the Server and/or the Website

The bootstrap script adds the public SSH key I use in that testlab. The installation disables the Vagrant user and adds another Linux user called "cyberchef" with a randomly generated password.
It is also possible to enable basic authentication for NGINX, thus requiring credentials for accessing CyberChef. However, this is commented out; uncomment "configure_nginx_auth" towards the bottom of the "main" routine to enable it.
Information about credentials created during installation can be found in /var/lib/cyberchef/. Please delete these files after you have added the passwords to your password manager - you should probably change the passwords, too.

Prerequisites

All you need for this is Debian, Vagrant, and VirtualBox. It should also run on VMware, however that is not tested, so please provide feedback when/if you test it.
  1. Install your preferred Linux distro (this should also run on a Mac or Windows system), then proceed to install:
  2. VirtualBox (or VMware, but as stated: not tested yet)
  3. Vagrant
After all prerequisites are in place, proceed and run the following.
 
  1. git clone https://github.com/martinboller/cc-build
  2. cd cc-build
  3. vagrant up <- will bring both the build system and the web-server up
  4. Wait...
For further details on installation, please refer to the README [3].
 
 
Warning: If you decide to use the virtual web-server, please, please, PLEASE ensure that you remove and/or disable the Vagrant account on it, as it defaults to vagrant/vagrant. The cc-build script now does that for you, as well as creating another user with sudo privileges and a random password (look in /var/lib/cyberchef for credentials created during installation).



2021-12-24

Happy Holidays!

Wishing everyone a wonderful and fun-filled holiday season and a wholesome 2022.

I do hope to see all of you next year. If not in meat space (reality)*, then at least virtually - at conferences, at work, at dinner, at coffee, at sea, at home, at .....

In short: You're all sorely missed!


 

2021 has been cool for me - moving from one part of critical infrastructure (Finance) to another (Energy). The issues are basically the same, though, and the solution is not about products but about the basics. I did have some fun with a supplier experiencing ransomware, as well as with Log4Shell.

*)
I'm not crazy about reality, but it's still the only place to get a decent meal.
-Groucho Marx

2021-12-03

Time for Pi

Building a Stratum-1 NTP Server with a Raspberry Pi 3 or 4

While plenty of others have posted their Raspberry Pi based NTP server builds, here's my take on it (this post and GitHub [1]).

 

Picture 1, Precision timepiece

 

Why do I need accurate and precise time on my network?

Many organizations, not just the smaller ones, do not take the time to really get time right (pun intended). Some believe that their Active Directory PDC emulator on top of the AD hierarchy (at least time-wise) provides accurate and precise time. Others just assign a single random Network Time Protocol (NTP) server as the time source (some Cisco equipment only allows one, some only two; some vendors even rely on - the horror - SNTP), which is another fallacy. As Segal's law states, “A man with one watch always knows what time it is. A man with two watches is never sure.” This helps to illustrate the need for multiple sources of accurate time - and NTP does exactly that, by utilizing many time sources.

This build uses a GPS/PPS based source combined with a number of public Stratum-1 servers, but with 4 or more internal time sources internet connectivity isn't required. Precision Time Protocol (PTP) is a totally different beast, achieving nanosecond-level precision, where NTP "only" provides microsecond or, depending on Stratum, millisecond-level precision. NTP is likely more than accurate enough, but for even better precision and accuracy PTP is required. PTP is way out of scope for this post, so let's get back on track with NTP.
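In ntpd terms, that GPS/PPS-plus-public-servers mix looks roughly like this in /etc/ntp.conf. Treat it as a sketch: it assumes gpsd feeds the SHM driver, and the time1 fudge value is a placeholder that must be calibrated for your own setup:

# GPS NMEA time via gpsd shared memory, SHM(0) - coarse, but numbers the seconds
server 127.127.28.0 minpoll 4 maxpoll 4 prefer
fudge  127.127.28.0 time1 0.130 refid GPS
# Kernel PPS on /dev/pps0 - this is the precise one
server 127.127.22.0 minpoll 4 maxpoll 4
fudge  127.127.22.0 refid PPS
# Public Stratum-1 servers as sanity checks
server ptbtime1.ptb.de iburst
server ptbtime2.ptb.de iburst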

Precise and accurate time is important for many reasons, including
  • Log file accuracy, auditing & monitoring
  • Troubleshooting and recovery
  • File time stamps
  • Directory services
  • Access security and authentication (i.e. Kerberos in AD)
  • Distributed computing
  • Transactions (not least financial)

There are also legal requirements to be aware of in many industries - far more than you might think - some requiring accurate timestamps on all logs. However, the benefits of keeping accurate time are alone reason enough to do it.

 

NTP Server Hardware

A few Raspberry Pis with cheap GPS boards and external antennas* should be within reach for most - even home networks. With internet access you could get away with one or two Pis, adding publicly available time servers to reach the sweet spot of 5-7 time sources. With no internet access, 5 Pis won't break the bank either, and you'd even have enough redundancy to swap the inevitable worn-out SD card and update them regularly.

* I put the cheap indoor antennas inside outdoor electrical boxes like the one in picture 2, and mounted them outside with a clear view of the sky. They did work on the window sill, however with fewer satellites in view.


Picture 2, Electrical Box

 

It would also be within reach of many organizations to buy a cheaper commercial NTP server, such as the LeoNTP [2] or the Meinberg M200 [3], and then supplement it with a few Raspberry Pis for extra sources.

A lot of different GPS boards are available, including complete HATs that can be mounted directly on the 40-pin GPIO header on the Pi; however, it's easy to connect a breakout board with five pieces of wire.

Picture 3, GPS Board
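To get the PPS signal into the kernel in the first place, the usual approach on a Raspberry Pi OS / Debian image is to enable the UART (for the NMEA data) and the pps-gpio overlay (for the PPS pulse) in /boot/config.txt. GPIO 18 is an assumption - use whichever pin you wired the board's PPS output to:

enable_uart=1
dtoverlay=pps-gpio,gpiopin=18

After a reboot, verify that pulses arrive using ppstest from the pps-tools package:

# ppstest /dev/pps0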

 

Chrony vs NTPD

Using chrony may increase precision, including via hardware timestamping (like PTP); however, Raspberry Pi NICs do not support hardware timestamping, as evidenced by the output of "ethtool -T eth0":
_________________________________

Time stamping parameters for eth0:
Capabilities:
    software-transmit
    software-receive
    software-system-clock
PTP Hardware Clock: none
Hardware Transmit Timestamp Modes: none
Hardware Receive Filter Modes: none
_________________________________

Chrony may replace NTPD in future versions of this build if my testing shows increased precision - but please note that smarter people with better tools have already shown this to be the case [4], so why am I waiting?
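For comparison, the same sources expressed in /etc/chrony/chrony.conf would look something like this (again a sketch - the offset, delay, and refids are assumptions to tune for your own hardware):

refclock SHM 0 offset 0.130 delay 0.2 refid NMEA noselect
refclock PPS /dev/pps0 lock NMEA refid GPS
server ptbtime1.ptb.de iburst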
 

Precision versus accuracy

While I may have used accuracy and precision loosely above, here's a definition lifted off the Internet:
Accuracy refers to how close a measurement is to the true or accepted value.
Precision refers to how close measurements of the same item are to each other.
That means it is possible to be very precise but not very accurate, and it is also possible to be accurate without being precise.

When it comes to timestamps, we want them to be both accurate and precise. However, achieving good precision across all systems on your network at least allows you to work with the timestamps you've got - and, if you understand the delta in accuracy, even to correlate with data from other, more accurate systems.


____________________________________________________________________________________________________________
 
[4] Building a more accurate time service at Facebook scale: https://engineering.fb.com/2020/03/18/production-engineering/ntp-service/
____________________________________________________________________________________________________________

2021-11-24

Søjde, høe lich hæ

Rikke Thomsen

 

I've been listening to a new artist that a good colleague recommended.

Her name is Rikke Thomsen, and she is a fantastic blend of Alberte's sweetness and Allan Olsen's quirky regional storytelling, just moved a couple of hundred kilometers south to Synnejyllan' (Southern Jutland).

Listen, for example, to "Ballebrovej 2".

2021-11-23

Spiderfoot

SpiderFoot on your own server

 
"What is SpiderFoot?

SpiderFoot is a reconnaissance tool that automatically queries over 100 public data sources (OSINT) to gather intelligence on IP addresses, domain names, e-mail addresses, names and more. You simply specify the target you want to investigate, pick which modules to enable and then SpiderFoot will collect data to build up an understanding of all the entities and how they relate to each other."

If you - like me - sometimes want to run this yourself, I added yet another bash script to do just that.

It's available on Github: https://github.com/martinboller/spiderfoot-build
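If you'd rather do it by hand, the upstream quick start is (as far as I recall) no more than this; the listen address is the usual default from SpiderFoot's README:

git clone https://github.com/smicallef/spiderfoot.git
cd spiderfoot
pip3 install -r requirements.txt
python3 ./sf.py -l 127.0.0.1:5001

Then point a browser at http://127.0.0.1:5001.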


2021-11-18

Dradis Community Edition install on Debian 11

 Dradis Community Edition

In order to quickly spin up a Dradis server and collaborate on a pentest, an automated installation is preferable.

Whether you prefer your own data center or the cloud, here's [1] a little bash ugliness to do just that on Debian 11 (Bullseye). There are also files to test it using Vagrant and VirtualBox. Just remember that it leaves some nastiness in the form of default credentials.
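With Vagrant and VirtualBox in place, taking it for a spin should be no more than the following (assuming the Vagrantfile sits in the repo root):

git clone https://github.com/martinboller/dradisce-build.git
cd dradisce-build
vagrant up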

 

[1] Dradis CE install on Github: https://github.com/martinboller/dradisce-build.git

[2] Dradis Community Edition on Github: https://github.com/dradis/dradis-ce

 

2021-07-26

OpenSSH on Windows 10

PuTTY on the shelf

I prefer using the same toolset across platforms, and as Windows 10 has included OpenSSH for a while, why not put PuTTY on the shelf?

Decommissioning PuTTY will also provide you with the ability to do so much more from the command line, and reuse your scripts from your favorite distro.

It's 3 simple steps (4 if you convert your PuTTY key).

1. Install the OpenSSH Client features

Add-WindowsCapability -Online -Name OpenSSH.Client*

Or from the GUI

  1. Click Start, then choose Settings
  2. Choose Apps from Windows Settings
  3. Click “Manage optional features“
  4. Click “Add a feature“
  5. Choose “OpenSSH Client” and click Install

2. Configure the SSH agent service to start automatically

Get-Service -Name ssh-agent | Set-Service -StartupType Automatic

As the service hasn't really been given the chance to auto-start yet, go ahead and start it:

Start-Service ssh-agent

3. Add the required key(s)

ssh-add C:\Users\<username>\.ssh\<keyname>

Example: ssh-add C:\Users\Martin\.ssh\id_ed25519
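From here on, ssh behaves like on any other platform. You can also keep per-host settings in C:\Users\<username>\.ssh\config - the host below is made up for illustration:

Host jumphost
    HostName jump.example.org
    User martin
    IdentityFile ~/.ssh/id_ed25519

After which connecting is simply: ssh jumphost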

Optional (if you only have a PuTTY private key)

Use PuTTYgen to load the key, then export it in the OpenSSH format (Conversions > Export OpenSSH key).


That's all there is to it.

2021-05-20

Finally! Internet Explorer's obituary (ancient times - June 15th, 2022)

Time to celebrate

While a number of services haven't worked with Internet Explorer for a while, Microsoft finally gave us the EoL date for it.

 

While the "personalities" that haven't prepared for the inevitable will likely still refuse to assess and update or replace the old sh1t that requires IE (or Flash, for that matter), at least we can now give them a deadline they can enjoy watching pass by, and then request exceptions filled with bad excuses.

More mature organizations will already have replaced IE and written the (few) required exceptions.


I wrote about this before:
https://blog.infosecworrier.dk/2019/08/its-about-5-years-too-late-that-we-kill.html

 

References:

"The future of IE is Edge": https://blogs.windows.com/windowsexperience/2021/05/19/the-future-of-internet-explorer-on-windows-10-is-in-microsoft-edge/ 

IE not supported on Azure (March 31, 2021): https://azure.microsoft.com/en-us/updates/azure-portal-to-end-support-for-internet-explorer-11-on-march-31-2021/

2021-03-02

Book Review: Intrusion Detection HoneyPots, Detection through Deception

Intrusion Detection HoneyPots, Detection through Deception

  • Author: Chris Sanders
  • Publisher: Applied Network Defense [1] (30 Aug. 2020)
  • Language: English
  • Paperback: 238 pages
  • ISBN-10: 1735188301
  • ISBN-13: 978-1735188300

 
Let's get the important stuff out of the way first 1)

Cookie Recipe: 🍪🍪🍪🍪🍪

These cookies are very, very good. Having to convert from obscure measurements to something for the modern age (metric) was well worth it.

1) Read the book, you must :)


Conclusion: Recommended reading for everyone interested in honeypots, novice or expert.

While I've worked with most of the ideas and products discussed in the book, I really liked the structure and content of the book.
I came away from reading it with a more structured approach to how, when, and where to deploy honeypots. I really wish this book had been available when I started messing with honeypots; it would certainly have saved me some time.

Noteworthy (to me)
Chapter 1, A Brief History of Honeypots: While a brief chapter on the history of honeypots, it's always great to be reminded of The Cuckoo's Egg and Berferd. It gets even better in the following chapters, however.

Chapter 2, Defining and Classifying Honeypots: As Chris states in the book, "All honeypots are deceptive, discoverable, interactive, and monitored", but not just that; he also provides a good explanation of what these characteristics mean and what questions to ask regarding your own deployment. This chapter also gave me a better understanding of Whaley's deception taxonomy - I'd say that theory and practice converged, and I'll be able to utilize that understanding better going forward.
 
Chapter 3, Planning Honeypot-Based Detection: See - Think - Do! Not just the words, but used to explain honeypots very clearly and precisely. Combined with the case study, it really sets the stage. I feel kinda validated, as it confirms (most of) the ideas and principles I've used when deploying honeypots :)
 
Chapter 4, Logging and Monitoring: Even for someone with extensive experience in logging and monitoring, there's still a lot of food for thought in this chapter - not just for honeypots, but in general. I'll be using variations of Chris's "log plumbing reference framework for logging and monitoring infrastructure" to explain to both the business and other colleagues why we've implemented e.g. certificates for encryption and mutual authentication in our logging infrastructure.
 
Chapter 5, Building Your First Honeypot from Scratch: A nice and pragmatic intro to what a honeypot could be, using Netcat (specifically Ncat from Fyodor).
 
Chapter 6, Honey Services: No problem, let's just build a Windows-based RDP honeypot, an SSH honeypot with Cowrie, and a multi-service honeypot using OpenCanary. Again, very concise and clear guidance that takes you most of the way to deploying honeypots.

Chapter 7, Honey Tokens: Read the "From the Trenches" sidebar. Like in other chapters, the Sigma and Suricata rules are great inspiration.
 
Chapter 8, Honey Credentials: This chapter goes way further and provides several possible ways of deploying honeytokens, amongst those an example of how to create an "LLMNR Broadcast Honeypot", rounding the chapter off with some cool awesomeness. One caution, though: I think it is a violation of GDPR to use previous employees' accounts as honeytokens as discussed - I might well be wrong (IANAL), but better safe than sorry.
 
Chapter 9, Unconventional Honeypots: The idea of a DHCP honeypot is cool, not least because it is likely to delay the adversary, however with a lot of potential pitfalls (YOLO). This chapter also covers "cloned website honeytokens", honey tables, and more, rounding the chapter off with honey commands using aliases on Linux.
 
[1] Applied Network Defense: https://www.networkdefense.co/