Schrödinger's Software Bill of Materials (SBOMs)


Schrödinger's Software Bill of Materials (SBOMs), or the Emperor's New Clothes

Let me start by saying that the work done by several individuals and companies on SBOMs is extremely valuable and impressive. The archaeology on ancient stuff where the source code no longer exists is important work that we all should support. An example of that is EMBA [1].



Software Bill of Materials. From the CISA website [2]:

A “software bill of materials” (SBOM) has emerged as a key building block in software security and software supply chain risk management. An SBOM is a nested inventory, a list of ingredients that make up software components.


Having said that, we really need vendors to openly share which building blocks they're using, while realizing that they will fight it tooth and nail, as it is likely to a) be problematic for a lot of them, since they sometimes don't know themselves*, and b) highlight that a huge percentage is based on repackaged open source tools, that some vendors are in violation of licensing agreements, and that they don't support the maintainers.


So the onus is on us – the companies and individuals relying on those vendors – to make them contractually obliged to provide complete and detailed SBOMs every time, and not let them get away with manipulating the authorities into accepting that it's something a third party maintains, providing what they would call assurance that everything is secure and well maintained. Otherwise we'll end up with a “Schrödinger's SBOM” and pay extra for it.


And sorry, I don't buy the “we need to protect our intellectual property” argument, especially not for critical infrastructure. Some of us want to protect critical infrastructure and would still buy the knowledge and products that help us do just that. The problem is that we might realize some vendors really do not provide much added value – but that's a good thing, as it'll boost quality and fair competition.

We should have learned by now that security through obscurity isn't working.**

All in my opinion of course.

* Remember Heartbleed, Log4Shell, and so on.

** Unless used actively for deception but that is something completely different.


[1] EMBA from securefirmware.de: https://www.securefirmware.de/

[2] https://www.cisa.gov/sbom


Live Remote Packet Analysis

Wireshark and tcpdump over SSH

Quick and dirty Bash script to start tcpdump on a remote host and shovel the data back to Wireshark over SSH.

Note: Don't do this over links that are already saturated, and ESPECIALLY not on networks running deterministic and/or real-time systems/processes.


How to use

The script is rather simple but takes up to five inputs:

./WoS.sh RemoteHost RemoteInterface RemotePort RemoteUser RemoteKey

Only the first is required. They are:

RemoteHost: The host you want to capture packets on.

RemoteInterface: The interface on the remote host on which you want to capture packets. Uses "any" if not specified.

RemotePort: The SSH port. Uses Port 22 if not specified.

RemoteUser: The user that runs tcpdump on RemoteHost.

RemoteKey: The SSH key needed to access RemoteHost.
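The actual script is on GitHub (link below); as a rough sketch of what such a script does – the function name, defaults, and exact flags here are illustrative, not the script's actual code:

```shell
#!/usr/bin/env bash
# Illustrative sketch (not the real WoS.sh): build the SSH/tcpdump/Wireshark
# pipeline from the five inputs, falling back to the documented defaults.

build_cmd() {
  local host="$1"                 # RemoteHost (required)
  local iface="${2:-any}"         # RemoteInterface, default "any"
  local port="${3:-22}"           # RemotePort, default 22
  local user="${4:-$USER}"        # RemoteUser, default: current local user
  local key="$5"                  # RemoteKey (optional identity file)
  local keyopt=""
  [ -n "$key" ] && keyopt=" -i $key"
  # -U: packet-buffered writes; -w -: write pcap to stdout;
  # 'not port N' keeps the SSH session itself out of the capture.
  printf '%s\n' "ssh${keyopt} -p ${port} ${user}@${host} \"sudo tcpdump -i ${iface} -U -w - 'not port ${port}'\" | wireshark -k -i -"
}

# Print the pipeline so it can be inspected before actually running it:
build_cmd "$@"
```

The key idea is simply that tcpdump writes a pcap stream to stdout, SSH carries it home, and Wireshark reads it live from stdin (`-k -i -`).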


You'll need the following on these systems, respectively:
Remote Host: tcpdump
Install with sudo apt -y install tcpdump, sudo yum install tcpdump, sudo pacman -S tcpdump, or whatever works on your distro.

Your System: Wireshark (and SSH).
See above, but replace tcpdump with wireshark.

BASH Script on GitHub


Open Source Vulnerability Scanning: Greenbone Community Edition 22.4

 I'm still scanning, yeah, yeah, yeah

Back in 2020, some ugly Bash scripts were cobbled together by yours truly to enable friends and (former) colleagues of mine to maintain a decent security posture for their small businesses. I've also used them to validate scans by other tools (Qualys and Nessus) specifically (trust, but verify), and they may be useful in testing, etc. This "effort" was previously described here:


In July of 2022 Greenbone released 22.4, and I haven't had the time to upgrade until now, mainly due to some changes in the architecture, as described here:


There are multiple scripts, supporting the installation of both primaries and secondaries (Greenbone still uses the master/slave terminology, however) and ensuring that certificates are generated correctly and installed on the secondaries.

The addition of the Notus scanner requires the installation of additional components, primarily Mosquitto, a lightweight message broker responsible for brokering the communications between ospd-openvas and the OpenVAS and Notus scanners, respectively.
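As a minimal sketch, assuming a Debian-based system (the mqtt_server_uri key is taken from Greenbone's 22.4 source-install documentation; verify it against your version before relying on it):

```shell
# Install and enable the Mosquitto MQTT broker (Debian-style; adjust for your distro).
sudo apt install -y mosquitto
sudo systemctl enable --now mosquitto

# Point the OpenVAS scanner at the local broker. The mqtt_server_uri key is
# what Greenbone's 22.4 source-install docs use for openvas.conf.
echo "mqtt_server_uri = localhost:1883" | sudo tee -a /etc/openvas/openvas.conf
```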

The primary runs all the components shown in the diagram above, whereas the secondaries run "only" ospd-openvas, the OpenVAS and Notus scanners, and Mosquitto (the message broker).

Thanks to Greenbone for making this available and supporting the security community.


Serially install Debian (Debian over serial)


Installing Debian over serial

Installing Debian over serial is relatively straightforward and still very useful on a multitude of devices that can be installed, and managed, over serial. This includes the PC Engines APU devices, which can be booted from USB with the installation performed over serial.


“Begin at the beginning," the King said, very gravely, "and go on till you come to the end: then stop.”

-Lewis Carroll, Alice in Wonderland


Prerequisites (the beginning)

You will need:
  1. Serial cable for your device.

  2. Serial terminal, such as minicom, picocom, GTKTerm, or screen – or PuTTY [3] if you're on Windows.

  3. Debian 11, code name bullseye, netinst, for 64-bit PC (amd64) [1], or a medium with additional non-free firmware, such as [2].

  4. USB Device to transfer the Debian ISO to.

Note: Debian netinst works well for APU devices as they're using AMD CPUs and Intel NICs - both vendors have been good Open Source contributors.


Burn ISO, burn to USB

As we will boot the system from USB, we will have to write the ISO to a suitable USB drive.

On Linux you have the option of using dd:

sudo dd if=debian-11.3.0-amd64-netinst.iso of=/dev/sdn bs=4096 status=progress
sudo dd if=firmware-11.3.0-amd64-DVD-1.iso of=/dev/sdn bs=4096 status=progress

Replace sdn with sdb, sdc, etc., depending on what drives are already installed in your system. Please double-check that you're not writing to a drive other than the intended USB. You will lose data if you're not careful (ask me how I know).
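Before running dd, it's worth confirming which device node the USB stick actually got. One quick way on Linux:

```shell
# List block devices with their transport type; USB sticks show "usb"
# in the TRAN column, so you can spot the right /dev/sdX before writing.
lsblk -d -o NAME,SIZE,MODEL,TRAN
```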

There’s also the graphical tool called popsicle on many Linux distros which is quite neat.

On Windows, you can use Rufus [4]

Note: The netinst ISO is around 400 MB, while the "firmware" ISO is just shy of 4 GB.

Aaaand action

Now plug the USB and the serial cable into the system, start your favorite serial terminal and power up the system to be installed.
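If you're using screen or picocom, a typical invocation looks like this (the device node is an assumption – a USB serial adapter usually enumerates as /dev/ttyUSB0, while an onboard port is /dev/ttyS0):

```shell
# Attach to the serial console at 115200 baud, 8N1 (the APU default):
screen /dev/ttyUSB0 115200
# or, with picocom:
picocom -b 115200 /dev/ttyUSB0
```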

Just remember.

USB Superposition


Hopefully you will now see Debian booting and, after a short while, show the installation menu.

BUT… if you just choose Install now, it won't work, as the console output is not redirected over serial for the rest of the installation unless you do the following.

Press the <TAB> key instead and replace quiet after the --- with:


Replace quiet with console=ttyS0,115200n8

Note: Adjust the port and its parameters to your specific device. The above works for APU boards (ttyS0 is the first serial port on that board – please note the uppercase S followed by a zero).


Then when the message below shows up, press space to start installation, or wait 30 seconds.

For installation press space or wait 30 seconds

You should now be able to complete the standard Debian installation process. Do remember to deselect the desktop options and select the SSH server.


You should now be able to connect over SSH to the system.

Serial Killer

The freshly installed Debian may not work over serial after installation. If that is the case, change /etc/default/grub as follows:

GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"

Then run:

sudo update-grub

Note: I have not had this problem with the latest builds, and have not taken the time to work out whether it was my mistake or certain builds that caused it.


(The end)


[1] Debian 11 netinst: https://www.debian.org/distrib/netinst

[2] Unofficial Debian 11 with non-free firmware: https://cdimage.debian.org/cdimage/unofficial/non-free/cd-including-firmware/11.3.0+nonfree/amd64/iso-dvd/

[3] Putty for Windows: https://www.chiark.greenend.org.uk/~sgtatham/putty/

[4] Rufus: https://rufus.ie/en/


There's Lies, Damn Lies, and Maturity Assessments

On the immaturity of maturity assessors

The emperor's new clothes


Over the course of my career I have been the assessor as well as the assessee of maturity assessments revolving around the security, architecture, and/or operational maturity of IT/OT.

Having made the mistake myself of painting a rosier picture of the as-is while being interviewed, as well as having experienced others painting a picture that was obviously far from reality, I learned the hard way that "if it isn't documented, it doesn't exist". While there's certainly value in feeling pride in your work, it shouldn't stand in the way of getting as precise a maturity measurement as possible.

After all, it is embarrassing to start at, say, ML-3 and then, after spending a lot of resources, be at ML-2 just because the initial assessment was overly optimistic. In reality that is not going to happen; everyone will just lie, painting a better and better picture, never having the time to address the actual root of the issue. As we all know, it is hard work keeping up appearances.

So here's my issue with this: several assessments that I have seen fall in the category of overly optimistic.

There could be two reasons this is happening: 1) the assessor is caving to pressure and/or just doesn't want the customer to look (too) bad, in order to sell more hours, or 2) the assessor isn't competent.

Or in short: 1) lack of integrity, or 2) incompetence.

Based on what I've seen, the worse option (lack of integrity) is the main issue here. For the sake of better security postures and providing real value to customers, it would be great to see assessors taking the hard discussions, providing direct and honest information that enables senior executives to make fact-based decisions.

I suggest starting by demanding supporting information; if it is not made available within hours, it doesn't exist and the maturity is <1.


Redelegation of .dk domains to other nameservers

My chosen DNS hosting provider does not support .dk hostmaster's process for DNS redelegation – what should I do?

So while the new process for redelegating to other DNS servers works quite well, provided the new DNS hosting provider supports it (as e.g. simply.com does), several – including Cloudflare – do not.

This is easily overcome by using the simple "anonymous redelegation" link:


It can also be automated, as explained on the page itself.


Shortly after requesting the change, you will be asked to confirm it. You will have to log in to your account(s) at https://selvbetjening.dk-hostmaster.dk/ to confirm.


DNS hosting with GratisDNS is dead, long live...

How I learned to stop worrying and replaced GratisDNS

This might be helpful – not just for those of you moving from GratisDNS, but for anyone looking for a DNS hosting provider.

Note: If you're moving to a DNS hosting provider that does not support .dk hostmaster's new process, read this companion post: https://blog.infosecworrier.dk/2022/02/redelegation-of-dk-domains-to-other.html

While I subconsciously knew this would happen when one.com bought GratisDNS, I'm ashamed to admit that I did not plan accordingly. The other day, all current users of GratisDNS got a troubling email from the new owners stating that they'd migrate all zones from the multiple GratisDNS servers to two one.com servers in "March 2022" – so likely within a month.

With no tangible information on the migration specifics or future cost, this triggered me to look for other DNS hosting providers.


My requirements were (are):

[R1] DNSSEC: This is a must!

[R2] Decentralized. There's way too much centralization, perverting DNS.

[R3] Hidden Master: Very nice to have, however can live without.

[R4] API: If hidden primary isn't supported, this is high on the list.

[R5] Cheap or free: I need to pay for food and coffee.

Started investigating some possibilities (in alphabetical order):

Cloudflare.com
Hetzner.de (.com)
QuickDNS
Simply.com



Migration Process

Created the following simple process for migration.
  1. Select a less critical domain for testing. Luckily I have quite a few domains.
  2. Disable DNSSEC.
  3. Create the zone at the selected provider.
  4. Transfer, add, and verify all RRs.
  5. Test with dig to ensure the RRs are correct (use @ns.provider.tld with dig, and e.g. OpenDNS or even 1.1.1.1).
  6. (Re)enable DNSSEC.
  7. Test with dig that DNSSEC works. dnsviz.net is awesome for this too, especially when you're tired and can't read dig output :)
  8. Test DKIM some more. I fat-fingered it during testing.
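For steps 5 and 7, the dig invocations can look like this (example.dk and ns1.provider.tld are placeholders for your own zone and the new provider's nameserver):

```shell
# Step 5: ask the new provider's nameserver directly and compare the
# answers with the old zone:
dig @ns1.provider.tld example.dk SOA +noall +answer
dig @ns1.provider.tld example.dk MX +noall +answer

# Step 7: query through a validating resolver with +dnssec; a correctly
# signed zone should come back with the "ad" (authenticated data) flag set:
dig @1.1.1.1 example.dk SOA +dnssec
```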

Here's how they fared during my (opinionated) testing.


Cloudflare

A large, centralized, US-based provider, trying to grab all our DNS queries at 1.1.1.1.
They do have a very nice interface, support DNSSEC, and offer an API with the free tier. If you want to pay several hundred $/month, you can also use a hidden master.
I migrated one of my domains to Cloudflare and it worked well. But no (see R2).


Hetzner

I never got to test Hetzner, as they wanted both a credit card and a photo of my passport to create an account, even for the free tier. I've heard good things, but no.


QuickDNS

No DNSSEC support, no hidden master, no API, not even AAAA support.
I wish QuickDNS the best and hope they grow into a great alternative.


Simply.com

A European provider with a nice interface and an API, but no hidden master support. Free tier for DNS only, and the prices for web hosting look okay (not tested, but that may happen).


I ended up moving a bunch of my domains to simply.com, with nothing but the mere due diligence described above, to be in better control of the migration and of current/future costs. I'll report back on my experiences, but so far it's great.


Pay the maintainers of FOSS - Your business relies on it

Pay per use.

Ever wondered why large software corporations – including, but not limited to, Apple, Cisco, Microsoft, and Oracle – are able to develop licensing schemes so intricate that it takes tons of people to understand them, but don't seem able to create a comparatively simpler model for paying the maintainers of the Open Source Software that their products and businesses rely on?

Yeah, yeah, I know it isn't that simple; some of those corporations employ people who work on Open Source projects as well, but the point still stands.

Just today @bagder of Curl fame posted this:


The business model of the large cloud providers is to sell services (mainly) based on Open Source Software, providing great services (most of the time) to their customers. Their shrink-wrapped software is based on, or contains, FOSS components – Windows 10/11 contains curl as well as OpenSSH. Many others, including Aruba, BMC, Broadcom, Cisco, Citrix, and VMware, use Log4j. (See also https://github.com/cisagov/log4j-affected-db/blob/develop/SOFTWARE-LIST.md).

The harsh truth is that the corporations that relied on Log4j never paid a dime to the maintainers, while being so bad at CI/CD that they couldn't even tell us which versions they used where, nor how they were configured out of the(ir) box.



Worst of all, over the course of handling the Log4Shell incident, I heard people blame Open Source for this situation. Please, this is beyond stupid.

This needs to stop, and we must hold all companies responsible for the current state of affairs. I do not have the legal nor financial insight to know whether it would be possible to demand that when you pay e.g. Microsoft for Windows, a buck or two of that cost be forwarded to the maintainers of curl (and others), but we need to "nudge" those corporations to do this to a greater extent.

Back to the intricate licensing models: why not "just" add a clause stating that every time you sell a product or use license containing/using e.g. curl, a small percentage has to go to the maintainers? It wouldn't make the license less understandable (that's impossible for most of them anyway), and it would ensure that the maintainers get something for their efforts and can continue to maintain their projects – to the benefit of everyone using and relying on them!

And... please do remember to pay for the FOSS used when building software and solutions inside your organization; if it's worth deploying, it's worth paying for.


Let me just end by saying that way, waaaaaaaay smarter people have pondered this question, so please investigate this topic further yourself.