I’ve mentioned on my socials that I’ll be presenting at the Embedded Linux Conference (part of the Open Source Summit Europe) this year. By the time you’re reading this, I will have already given my presentation. I still wanted to write a summary in text form and publish it after the show, and that’s what this blog text is.
I’ll give the same disclaimer that I gave before my presentation: security is not a one-size-fits-all solution, so carefully consider what advice applies to you. With that in mind, let’s begin.
Core Idea
When I started to think about what I would want to present, I knew that it’d be something about cybersecurity, but I was not exactly sure what. So, I started collecting Linux hardening techniques and technologies, trying to figure out what would be interesting and whether I could do a deep dive into one of them. I couldn’t quite pick one, so eventually I started to create a larger model of how to “completely” secure a system.
However, I soon started to feel that this neat technical presentation had one big shortcoming. I was picking these hardening features, thinking “okay, what if an adversary tries to do this”, and then trying to figure out how to counteract their action. After doing this for a while, I realised that I was forgetting the big picture. While creating something like this is an interesting theoretical exercise, it does not hold water in the real world.
The thing is, in the real world, we are not developing these Linux boxes to be a hacking exercise for the adversary. We are building them to create value for the people actually using them. From an embedded developer’s point of view, it is a bit too common to focus solely on technological hardening measures and completely forget that there is a user interface and a human user that an adversary can manipulate. And sure, we can harden that user interface with all the technical measures, but the user remains.
Cyberattacks tend to take the path of least resistance. Even if one manages to create a system that is a completely hardened, rock-solid, impenetrable fortress, most of the time there is still a human doing something with the system. If a persistent and motivated attacker cannot manipulate the system to do what they want, they may have better luck swaying the user to do their bidding.

The Humans
The people who surround the Linux boxes can roughly be divided into three groups: developers, users, and adversaries.
Developers are the people, well, developing the product. They may work directly on the product, like engineers and managers doing hands-on work, or they may be in supporting roles like legal or sales, still having input on the product. Indirect groups like suppliers and open-source software developers can also be considered part of the developer group. Even though they may not always know about the product’s existence, they are delivering and developing components for it.
Then we have the users, who use the product as it is intended to be used, playing along with the given rules. Depending on the product, these people may be something like factory floor workers, consumers, or sometimes even other developers if the product is for example a software platform.
And finally, we have the adversaries. These are the people who want to use the product for their own purposes, on their own terms. The typical hackers, cyber criminals, and hopefully less typical nation states fall into this group. A special adversary group is the insiders: users or developers who start acting against the system. They may do this intentionally, or they may become adversaries accidentally or after being socially engineered. Insiders are a dangerous group, because they are usually trusted and may have deeper knowledge of the system, meaning they can cause significant harm.
The Problems
Continuing with the theme of rough categorisations, the problems that result from the existence of human actors can be divided into two categories: human interfaces and human behaviour.
The first category, interfaces, consists of the things that the developers have to do to make the product usable by humans. Without interfaces, it is impossible to generate much value for the user. There is nothing inherently bad or problematic about adding an interface; it is the task of the developer to implement it and to do so securely. However, problems arise if there is not enough time or knowledge to implement the interface properly. The closer the interface is to humans, the more dangerous an incorrect implementation becomes.
That’s because humans perform actions on these interfaces, and that behaviour is the second problem category. Once again, there is nothing wrong with people behaving and performing actions on the system. If no actions are ever taken, the system is literally useless. However, human behaviour is unpredictable. The developers may have an idea of how an interface should be used, but that is not necessarily how the interface ends up being used. Slight deviations are not necessarily a problem, but people may also make dangerous errors if the expectations do not match.
A dangerous error is an action that likely results in a security incident or severe disruptions in operations. Dangerous errors can be accidents, or people might be socially engineered to perform such actions. It is these dangerous errors we want to try to prevent as much as possible. Note that sometimes we need to allow certain dangerous actions in the systems, like configuring a firewall. In these situations, we should take care that these dangerous actions do not result in security incidents.

As an embedded developer, one might be tempted to think that they’re behind so many layers of abstraction that considering these interfaces and humans is not really their problem. However, there is a risk that people work around the system, causing non-human interfaces to suddenly become used by humans. Also, all the interfaces are interesting to the adversary, especially the ones in the lower levels of the system, so even though the driver interfaces may be well hidden from the users, the adversary might consider poking them.
The Solutions
So, how to avoid these dangerous errors? There’s a lot of actual science that’s gone into trying to understand how humans act and think, but my firmware engineer brain thinks that the steps to perform an action can be simplified as follows:
1. Person decides to perform an action: they get the idea in their head that they need to perform an action. They may get the idea on their own or from their environment.
2. Person tries to perform an action: they start researching how to perform the action they’ve decided to perform.
3. Person performs an action: they commit to the action and perform it.
We can try to prevent the dangerous errors in points 1 and 2. Point 3 is mostly for damage control. Note that there may be “accidents” that skip points 1 and 2, so preventing all of the dangerous errors is not possible, and we should always prepare for the worst-case scenario.
To address these points, there are (at least) the following things we can do:
- Create a secure organisation environment
- Prevent people from compromising the security
- Design for resilience
This is not a complete list, but it should at least help fix some of the issues. Let’s go over these in a bit more detail with some practical suggestions.
Create a secure organisation environment
The first point is mostly non-technical. At this point, we want to prevent people from getting the idea that they have to perform a dangerous action. People may get these ideas on their own, or they may be socially engineered into thinking that they need to perform such an action.
One way to prevent social engineering from succeeding is with secure communication guidelines. These guidelines should clarify which communication methods are accepted within the organisation and between the organisations we might be talking to. It’s also important to know how to share secrets. For communication between different organisations, it is a good idea to name the contact persons.
In addition to that, it is important to use technical measures to increase the trustworthiness of the environment. For example, message filtering, MFA, and user verification can be used to improve the security of the communications.
A thing that is easier said than done is having a security-oriented culture in the organisation. And not only that, but also having an organisation where the policies and the actual actions match each other. If that doesn’t sound difficult enough, it’s also important to have an organisation without too much cognitive load. Simply put, the organisation should not be stressful and exhausting, because stress and exhaustion lead to human errors.
And finally, some training is useful as well. Sharing common security knowledge, information about past attacks, and reminders about the communication guidelines should help to prepare for future social engineering attempts. The training should be personalised if at all possible, because generic training simply does not work as well and may result in pushback.
Prevent people from compromising the security
Despite the best efforts, people sometimes want to perform a dangerous action. The next step should be ensuring that it is simply not possible for them. The principle of least privilege is the key here: the system should be designed so that the users only have the smallest required set of actions available to them. In practice, this usually means designing a system with multiple authority levels.
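As a concrete (if simplified) illustration of least privilege on a Linux box, here is a minimal Python sketch: do the one step that genuinely needs root, then drop to an unprivileged account before touching anything user-facing. The account name and the placeholder functions are assumptions made up for this example.

```python
# A minimal sketch of privilege separation on a Linux device, assuming a
# dedicated unprivileged account called "appuser" exists on the system.
# bind_privileged_port() and serve_user_requests() are hypothetical
# placeholders, not part of any real API.
import os
import pwd


def drop_privileges(username: str = "appuser") -> None:
    """Give up root once the privileged setup work is done."""
    if os.geteuid() != 0:
        return  # already unprivileged, nothing to drop

    pw = pwd.getpwnam(username)
    os.setgroups([])      # drop supplementary groups first
    os.setgid(pw.pw_gid)  # then the primary group
    os.setuid(pw.pw_uid)  # and finally the user itself

    # Sanity check: regaining root should now be impossible.
    if os.geteuid() == 0:
        raise RuntimeError("failed to drop privileges")


def bind_privileged_port() -> None:
    # Placeholder for the one step that genuinely needs root,
    # e.g. binding a port below 1024.
    pass


def serve_user_requests() -> None:
    # Placeholder for the user-facing work, which runs unprivileged.
    pass


if __name__ == "__main__":
    bind_privileged_port()
    drop_privileges()
    serve_user_requests()
```

The same principle applies on the user-facing side: an operator-level account simply should not have the administrator-level actions available to it.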
However, sometimes we have to allow certain dangerous actions that cannot be hidden away. In these cases, we need to ensure that both the developers and users understand the system. The users need to understand on a high level how the system operates, what kind of dangerous actions are available to them, and what might be the result of performing these actions. Something like this should be done carefully though, otherwise you’ll end up with a “big red button” scenario where the dangerous action suddenly becomes a tempting one.
On the other hand, the developers have to understand the needs of the users and the value that the system brings to them. In addition, the developers should also understand how users actually use the system. Mismatches between the implementation and the actual use make the system harder to use, possibly resulting in dangerous errors and unexpected behaviour in the system.
While the users should understand the system, the system itself should warn about dangerous situations as well. Some ideas for this could be things like short time delays before confirming actions, requiring additional approvals, multi-factor authentication, etc. Sometimes pausing for a few seconds may be crucial for the person to understand that they should carefully consider what they are doing.
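For instance, a confirmation prompt with a forced pause could look roughly like the following Python sketch; the wording and the five-second delay are arbitrary choices for illustration.

```python
# A minimal sketch of a "slow down" prompt for a dangerous action, such as a
# factory reset. The wording and the five-second delay are illustrative
# choices, not recommendations from any standard.
import sys
import time


def confirm_dangerous_action(description: str, delay_s: int = 5) -> bool:
    """Ask for explicit confirmation, forcing a short pause first."""
    print(f"WARNING: you are about to {description}.")
    print(f"This cannot be undone. Waiting {delay_s} seconds before asking...")
    time.sleep(delay_s)  # the forced pause gives the person time to reconsider

    answer = input(f"Type 'yes' to {description}: ")
    return answer.strip().lower() == "yes"


if __name__ == "__main__":
    if confirm_dangerous_action("erase all device settings"):
        print("Proceeding with the reset...")
    else:
        print("Aborted, nothing was changed.")
        sys.exit(1)
```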
It is important to note that this should not become a nuisance. Alert fatigue is a real thing, and if the product becomes too difficult or exhausting to use, people start to work around it. This results in unexpected actions in the interfaces and unintended interfaces becoming human interfaces, so be careful with the cautionary measures and consider carefully what actions are actually dangerous.

Design for resilience
Finally, we have reached the point where a person got an idea about a dangerous action and performed it, or accidentally managed to perform such an action. While these actions do not always immediately result in a compromised or broken product, it should be assumed that this will happen soon. At this point, we are simply trying to control the damage.
This process begins all the way from the planning and threat-modelling phase. It should always be considered what the worst-case scenario is from the security point of view. Quite often, it involves an insider with high privileges and a good understanding of the product deciding to set the world on fire. After that, it is important to implement layered security in the system to prevent that from happening. Layered security is important because if one of the security measures fails, it should not mean that the whole system immediately falls under the adversary’s control.
Another equally important thing is containment and isolation. We want to ensure that if one instance of the product falls, the rest of the instances are not immediately vulnerable. For example, this could mean that there are no shared credentials between the devices, or in static installations, the network is properly segmented.
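One way to avoid shared credentials is to generate them per device, for example on first boot. A rough Python sketch of that idea could look like this; the storage path is made up for the example.

```python
# A minimal sketch of provisioning a unique secret per device on first use,
# instead of shipping one shared credential in the firmware image. The
# storage path is a hypothetical example.
import secrets
from pathlib import Path

CREDENTIAL_PATH = Path("/var/lib/mydevice/device-secret")  # hypothetical location


def get_device_secret() -> str:
    """Return this device's secret, generating it on first use."""
    if CREDENTIAL_PATH.exists():
        return CREDENTIAL_PATH.read_text().strip()

    # Generate a fresh, device-specific secret so that compromising one unit
    # does not hand the adversary credentials for the whole fleet.
    secret = secrets.token_hex(32)
    CREDENTIAL_PATH.parent.mkdir(parents=True, exist_ok=True)
    CREDENTIAL_PATH.touch(mode=0o600)  # create with restrictive permissions
    CREDENTIAL_PATH.write_text(secret)
    return secret
```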
One key thing to have in the system is intrusion detection. Most cyber attacks go unnoticed for lengthy periods of time, which allows them to have more impact. Therefore, you want to be able to detect when something is going wrong in the system. These intrusion detection tools can also catch accidental actions that count as dangerous errors, so their usefulness is not limited to detecting adversaries.
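As a tiny taste of what detection can look like, here is a Python sketch of a simple file-integrity check. A real deployment would rather use an established tool such as AIDE or auditd and ship the alerts somewhere centralised; the watched paths and baseline location here are just examples.

```python
# A minimal sketch of a file-integrity check, one small building block of
# intrusion detection. A real deployment would typically rely on dedicated
# tooling; the watched paths and the baseline location are illustrative.
import hashlib
import json
from pathlib import Path

WATCHED_PATHS = ["/etc/passwd", "/etc/ssh/sshd_config"]            # examples only
BASELINE_FILE = Path("/var/lib/mydevice/integrity-baseline.json")  # hypothetical


def hash_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_baseline() -> None:
    """Record the known-good hashes of the watched files."""
    baseline = {p: hash_file(p) for p in WATCHED_PATHS}
    BASELINE_FILE.parent.mkdir(parents=True, exist_ok=True)
    BASELINE_FILE.write_text(json.dumps(baseline, indent=2))


def check_against_baseline() -> list[str]:
    """Return the paths whose contents have changed since the baseline."""
    baseline = json.loads(BASELINE_FILE.read_text())
    return [p for p, known in baseline.items() if hash_file(p) != known]


if __name__ == "__main__":
    if not BASELINE_FILE.exists():
        build_baseline()
        print("Baseline recorded, nothing to compare against yet")
    else:
        for changed in check_against_baseline():
            # In a real system this would go to a log collector or alerting pipeline.
            print(f"ALERT: {changed} has changed since the baseline was taken")
```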
Once the incident is under control and properly contained, it should be possible to reset the system into a fully functional and secure state. This means that there should be robust firmware update processes in place, credential rotation needs to be possible in any situation, and so on, so that the product can be trusted again to be usable and reliable in day-to-day operations.
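For the update side of this, the device should only accept images it can verify. Below is a rough Python sketch of checking a detached Ed25519 signature with the third-party cryptography package before handing an image to the updater; the inline placeholder key and the file paths are simplified assumptions for the example.

```python
# A minimal sketch of verifying a firmware image against a detached Ed25519
# signature before installing it, using the third-party "cryptography"
# package. A real device would keep the trusted key in protected, verified
# storage rather than inline in the code.
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

TRUSTED_PUBLIC_KEY = bytes.fromhex("00" * 32)  # placeholder bytes, NOT a real key


def verify_firmware(image_path: str, signature_path: str) -> bool:
    """Return True only if the image is signed by the trusted key."""
    image = Path(image_path).read_bytes()
    signature = Path(signature_path).read_bytes()
    public_key = Ed25519PublicKey.from_public_bytes(TRUSTED_PUBLIC_KEY)
    try:
        public_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    if verify_firmware("/tmp/update.img", "/tmp/update.img.sig"):
        print("Signature OK, handing the image to the updater")
    else:
        print("Signature check failed, refusing to install the update")
```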
And finally, as a preventive measure, it should be ensured that the developers have enough knowledge to do their work and implement the security features correctly. Without enough knowledge, it is impossible to create a secure product, because security does not happen by accident.
Conclusion
In summary, it is safe to say that most products cannot exist without humans, both developers and users (adversaries are not really mandatory). These products need interfaces to be actually useful, otherwise the things we are building are essentially just paperweights. Humans will perform actions on these interfaces to get actual value out of the product. It’s the task of the developers to create a product that can withstand this ongoing chaos of human behaviour. Good luck, have fun.