Agent lessons: Hostile Territory



What happens if you deploy your new end-point security solution to hosts that already have a malicious actor on them? That actor could feed the security solution lies or disable it in various ways. Similarly, if an attacker manages to get onto a system that already has such a solution installed, without being detected, they could start abusing it. This post considers that problem.

Viewed from the call-back server, the main problems an attacker can cause for the defender are:

  1. The agent is not communicating with the server.
  2. The agent is saying everything is ok, when it’s not.

If you missed the introduction to my agent lesson series earlier this week, about what agent architectures are, read it here.

When agents aren’t communicating with the server

If an agent isn’t checking in, there are a lot of possible reasons, and many are benign. These include:

  1. The end-point is off (i.e., no power).
  2. The agent has crashed due to a bug.
  3. The server is inaccessible to the end-point. For example, the server is only accessible on the internal network and the end-point isn’t VPN’d in, or the agent is running on a laptop and the user is on a plane or otherwise without internet connectivity.
  4. The user of the system has decided they don’t like being monitored or believe the agent is impacting the system’s performance and has figured out how to disable it.
  5. The system has been compromised by an attacker who has disabled the agent.

This problem really boils down to “Should the agent be reporting in, but isn’t?” The way to identify that is to correlate the agent’s check-ins with some other data source. For example, is the end-point’s MAC address active on the network? Is the user logging in to their email and other services? Being able to correlate this data can be useful for more than just agent trouble-shooting. Many enterprise defenders set up alerts for when a user logs in from a new country or appears to move between geographic areas impossibly quickly, but by correlating logins with agent data, you could identify even more accurately when a login occurs from a network that the user’s laptop isn’t associated with.
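As a server-side illustration of that correlation, the sketch below flags hosts whose MAC address has been seen on the network recently while their agent has gone silent. Everything here is hypothetical: the host names, data sources, and one-hour threshold are assumptions for the example, not part of any real product.

```python
from datetime import datetime, timedelta

# Hypothetical data sources: last agent check-in per host, and the last
# time each host's MAC address was seen by network gear (e.g. DHCP logs).
AGENT_CHECKINS = {
    "laptop-01": datetime(2017, 1, 10, 9, 0),
    "laptop-02": datetime(2017, 1, 8, 17, 30),
}
MAC_LAST_SEEN = {
    "laptop-01": datetime(2017, 1, 10, 9, 5),
    "laptop-02": datetime(2017, 1, 10, 8, 50),  # on the network, agent silent
}

def silent_but_present(now, max_gap=timedelta(hours=1)):
    """Return hosts active on the network whose agent has not checked in
    recently -- the 'should be reporting in, but isn't' case."""
    suspects = []
    for host, seen in MAC_LAST_SEEN.items():
        checkin = AGENT_CHECKINS.get(host)
        if now - seen < max_gap and (checkin is None or now - checkin > max_gap):
            suspects.append(host)
    return suspects
```

The same shape works for any second data source: swap the MAC table for login events to get the geographic-correlation alert described above.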

Mischievous users and attackers can prevent the agents from communicating with the server in a number of ways, including:

  • Uninstalling the agent through legitimate means.
  • Killing the agent process and removing its persistence mechanisms.
  • Introducing faults that crash the agent or break its logic, for example marking its log file as read-only so no new log messages get reported.
  • Breaking communication to the call-back server, for example by associating the call-back server’s domain name with an unreachable address (such as 127.0.0.1) in the hosts file.
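As a tiny illustration of the last technique, an agent could inspect its own host for that kind of tampering. This is a hedged sketch: the hosts-file parsing is simplified, and `callback.example.com` is a made-up domain used only for the example.

```python
def hosts_file_overrides(domain, hosts_path="/etc/hosts"):
    """Return any hosts-file addresses mapped to `domain`.
    A tampered mapping (e.g. to 127.0.0.1) silently blackholes the
    agent's call-backs without touching the agent itself."""
    overrides = []
    try:
        with open(hosts_path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()  # drop comments
                parts = line.split()
                if len(parts) >= 2 and domain in parts[1:]:
                    overrides.append(parts[0])
    except OSError:
        pass  # an unreadable hosts file is itself worth reporting
    return overrides
```

Of course, an attacker who can edit the hosts file may also be able to tamper with this check; it raises the bar rather than closing the door.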

Defensive products often try to protect themselves, but they may also need to send home an SOS distress call the moment tampering begins. Ultimately, the best hope is that the agent has sent home enough information to raise an alert before it is disabled.

When agents can’t be trusted

Now that you know all the agents that should be calling back are doing so, how do you know they are telling the truth? A simple agent task might be to report back running processes. How do you know the agent isn’t being fed lies? The most basic way this happens is through standard rootkit hooking techniques: the malware hooks the API calls the agent makes to the OS and changes the responses. This is likely just generic hiding, not a targeted attack on that specific security product.

The historic solution has been to move into the kernel to avoid the userland hooks, but if the malware also runs in the kernel, you need to go deeper. Burrowing lower, into the hypervisor or the BIOS, becomes progressively more difficult, and ultimately you still need to push data back up through the higher levels to get it out across the network, at which point the malware’s higher-level hooks could intercept it.

As Dave Aitel noted back in 2005, this ultimately becomes a game of Core Wars: if the defensive solution and the malware each execute with the same privileges, they are racing to detect or attack the other first.

Directed attacks

You also have to worry about attacks created specifically for your defensive solution. Maybe your EDR beacons home activity on the host but ignores its own activity, so the malware names itself with the same name as the EDR process. Maybe the attacker knows your code path for when an alert is generated and breaks only that.
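One mitigation for the name-collision trick is to never trust a process by its name alone. The Linux-only sketch below identifies a process by the hash of the binary it is actually running; the function name and expected hash are hypothetical, and it assumes /proc can be trusted, which (as the rest of this post argues) is itself not a given.

```python
import hashlib
import os

def verify_agent_process(pid, expected_sha256):
    """Check a process by the hash of its executable, not by its name.
    Names are attacker-controlled; on Linux, /proc/<pid>/exe points at
    the binary that is actually running."""
    exe = os.readlink("/proc/%d/exe" % pid)
    with open(exe, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == expected_sha256
```

A real deployment would also need to handle deleted or updated binaries and processes that exit mid-check.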


Rootkit detection

The tactics for a defender to detect rootkits are:

  • Scanning for known rootkit signatures (on disk or in memory).
  • Cross-view analysis: Comparing the results of a high-level API call, such as “list files in this directory”, with low-level API calls, such as reading the hard-drive contents directly and parsing them manually.
  • Integrity checking: Reading the memory of your own defensive solution, and parts of the OS, to ensure it hasn’t been modified.
  • Running deeper: Executing at a lower ring level.
  • Watching network traffic, so you get a view outside of the compromised host.
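As an illustration of cross-view analysis, here is a hedged, Linux-only sketch in the spirit of tools like unhide: it compares the /proc directory listing (the high-level view a rootkit typically filters) against kill(pid, 0) probes of the PID space (a lower-level existence test). The function names and PID range are assumptions, and a real implementation would have to handle races with short-lived processes, which this toy flags as false positives.

```python
import os

def _visible_pids():
    """PIDs as reported by the normal high-level view (/proc listing)."""
    return {int(d) for d in os.listdir("/proc") if d.isdigit()}

def _visible_tids(pids):
    """Thread IDs of visible processes; kill() probes respond to these too."""
    tids = set()
    for pid in pids:
        try:
            tids.update(int(t) for t in os.listdir("/proc/%d/task" % pid))
        except OSError:
            continue  # process exited between the two reads
    return tids

def hidden_pids(max_pid=32768):
    """Cross-view check: flag IDs that answer a kill(pid, 0) probe but
    are absent from /proc. A rootkit that filters the /proc listing
    typically does not also intercept the kill() syscall."""
    listed = _visible_pids()
    known = listed | _visible_tids(listed)
    hidden = set()
    for pid in range(2, max_pid):
        if pid in known:
            continue
        try:
            os.kill(pid, 0)        # signal 0: existence check only
        except ProcessLookupError:
            continue               # no such process
        except PermissionError:
            pass                   # exists, but owned by another user
        hidden.add(pid)
    return hidden
```

On a clean system this should return an empty set (modulo processes that spawn mid-scan); anything it reports is worth a closer look.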

Learn from Attackers

Malware authors have faced many variations of this problem for decades. They have attempted to gain a foothold on systems that have defensive products while avoiding analysis and detection. They’ve implemented anti-debugging, anti-VM, anti-emulation, and other technologies in their creations.

Attempting to ensure attackers don’t know about your defensive solution is an effective defense. Don’t pay too much attention to complaints that this is security by obscurity, voiced loudly by the silver-bullet crowd that wants a perfect solution or none at all.

Open-source tools can be modified in unique ways for each network, making the attacker’s life more difficult. Google’s GRR, for example, recommends obfuscation in its documentation and provides some guidance on how to do this: changing process and service names, registry keys, and other identifiers. The same techniques that malware uses to hide from security solutions can be used by security solutions to hide from malware.

Unexpected inspections

In theory, an attacker can reverse engineer and feed lies to any defensive product if the two are functioning at the same permission level. However, reverse engineering takes time, so unique inspections can be crafted that should still return accurate information. This is expensive to implement, since you need to think of and write new ways of detecting things, and it still has some problems. An example would be if you normally scanned disks with the standard API calls, and then sent out an upgrade or command that reads directly from the hard-drive.
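As a toy example of such an inspection, one could cross-check the file size that stat() reports against the bytes the read path actually returns: a hook that fakes one view may not fake the other. This is purely illustrative, under stated assumptions (regular files only, and a careful rootkit would of course fake both views consistently).

```python
import os

def stat_vs_read_check(path):
    """A hypothetical 'unexpected inspection': compare the size reported
    by stat() against the number of bytes actually readable from the
    file. A mismatch on a regular file is worth an alert."""
    reported = os.stat(path).st_size
    actual = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(65536)
            if not chunk:
                break
            actual += len(chunk)
    return reported == actual
```

The value of such checks comes from their novelty: they only work until the attacker reverse engineers them, so rotating them matters more than any single one.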

Closing Thoughts

I think for the most part people expect too much of attackers, and definitely focus too much on the possible weaknesses of defensive solutions without giving enough credit to the value they can offer. When attackers do look for defensive products, they are usually doing so to bail on the host because they know they’ll be caught. In some rare cases they might instead abuse the defensive solution for privilege escalation; as folks like Joxean Koret have shown in “The Antivirus Hacker’s Handbook,” defensive solutions often have many vulnerabilities. Duqu 2 is an interesting case study: when Kaspersky products were on a system, the malware directly targeted them in order to hide better on that system.

Overcoming the concern of agents lying to you is a hard problem and a higher-level goal. If there were a Maslow’s hierarchy of defensive needs, you’d find that you need to make sure all your basic defenses are working before you should be too concerned with this problem. For those who are at that level, though, this hopefully gives you some ideas.

Stay tuned for more agent lessons! Next up I’ll write about the communication protocols and trust mechanisms of agents.