Integration in Focus: Bringing Machine Learning to Third-Party EDR Alerts

12 Dec 2022

This blog walks through the key benefits of integrating EDR technologies with Darktrace.

This blog demonstrates how we use EDR integration in Darktrace for detection and investigation. We'll look at four key features, each summarized with an example below:

1) Contextualizing existing Darktrace information
'There was a Microsoft Defender for Endpoint (MDE) alert five minutes after Darktrace saw the device beacon to an unusual destination on the internet. Let me pivot back into the Defender UI.'
2) Cross-data detection engineering
'Darktrace, create an alert or trigger a response if you see a specific MDE alert and a native Darktrace detection on the same entity within a period of time.'
3) Applying unsupervised machine learning to third-party EDR alerts
'Darktrace, create an alert or trigger a response if there is a specific MDE alert that is unusual for the entity, given the context.'
4) Using third-party EDR alerts to trigger AI Analyst
'AI Analyst, this low-fidelity MDE alert flagged something on the endpoint. Take a deep look at that device at the time of the Defender alert, conduct an investigation on Darktrace data, and share your conclusions about whether there is more to it.'

MDE is used as the example above, but Darktrace's EDR integration capabilities extend beyond MDE to other EDR solutions as well, such as SentinelOne and CrowdStrike.

Darktrace brings its Self-Learning AI to your data, no matter where it resides. The data can be anywhere – in email environments, cloud, SaaS, OT, endpoints, or the network. Usually, we want to get as close to the raw data as possible to give our machine learning maximum context.

We will explain how we leverage high-value integrations from our technology partners to bring further context into Darktrace, and how we apply our Self-Learning AI to third-party data. While a broad range of integrations and capabilities is available, this blog post focuses on detection with Microsoft Defender for Endpoint, CrowdStrike, and SentinelOne.

The Nuts and Bolts – Setting up the Integration

Darktrace is an open platform – almost everything it does is API-driven. Our system and machine learning are flexible enough to ingest new types of data and combine them with existing information.

The EDR integrations mentioned here are part of our 1-click integrations. All they require is the right level of API access from the EDR solution and the ability for Darktrace to communicate with the EDR's API. This type of integration can be set up within minutes and currently doesn't require additional Darktrace licenses.

Figure 1: Set-up of Darktrace Graph Security API integration
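
To make the mechanics more concrete, here is a minimal sketch of what an API-based alert pull can look like, using Microsoft's Graph Security API with an app-only (client credentials) token. This is purely illustrative – the tenant ID, client ID, and secret are placeholders, and it is not Darktrace's actual integration code.

```python
# Illustrative sketch: polling security/EDR alerts via the Microsoft Graph
# Security API using a client-credentials app registration.
# TENANT_ID, CLIENT_ID and CLIENT_SECRET are hypothetical placeholders.
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-client-secret>"

def get_token() -> str:
    """Acquire an app-only access token for Microsoft Graph."""
    resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": "https://graph.microsoft.com/.default",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def fetch_alerts(token: str) -> list[dict]:
    """Pull a batch of recent security alerts exposed through Graph."""
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/security/alerts",
        headers={"Authorization": f"Bearer {token}"},
        params={"$top": 50},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

if __name__ == "__main__":
    for alert in fetch_alerts(get_token()):
        print(alert.get("createdDateTime"), alert.get("severity"), alert.get("title"))
```

Once the 1-click integration is enabled, none of this plumbing is exposed to the user – the point of the sketch is simply to show that the integration boils down to authenticated API calls.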

As soon as the setup is complete, it enables various additional capabilities. 
Let's look at some of the key detection- and investigation-focused capabilities step by step.

Contextualizing Existing Darktrace Information

The most basic, but still highly useful, integration is enriching existing Darktrace information with EDR alerts. Darktrace shows a chronological history of associated telemetry and machine learning output for each observed entity in that entity's event log.

With an EDR integration enabled, EDR alerts for the respective entities now appear in the entity's event log at the correct point in time – with rich context and a 1-click pivot back to the native EDR console:

Figure 2: A pivot from the Darktrace Threat Visualizer to Microsoft Defender

This context is extremely useful to have on a single screen during investigations. Context is king – it reduces time-to-meaning and the skill required to understand alerts.

Cross-Data Detection Engineering

When an EDR integration is activated, Darktrace enables an additional set of detections that leverage the new EDR alerts. These come out of the box and don't require any further detection engineering. It is worth mentioning, though, that the new EDR information is also made available in the background for bespoke detection engineering, if advanced users want to leverage it as custom metrics.

The trick here is that the added context provided by the additional EDR alerts allows for more refined detections – primarily to detect malicious activity with higher confidence. A network detection showing us beaconing over an unusual protocol or port combination to a rare destination on the internet is great – but seeing within Darktrace that CrowdStrike detected a potentially hostile file or process three minutes prior to the beaconing detection on the same device will greatly help to prioritize the detections and aid a subsequent investigation.

Here is an example of what this looks like in Darktrace:

Figure 3: A combined model breach in the Threat Visualizer
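
As a rough illustration of the idea (not Darktrace's detection logic), the sketch below correlates an EDR alert and a network detection that land on the same device within a ten-minute window. The data structures and field names are hypothetical.

```python
# Minimal sketch of cross-data correlation: pair up EDR alerts and network
# detections that hit the same device within a short time window.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Detection:
    device_id: str
    source: str        # e.g. "edr" or "network"
    name: str
    timestamp: datetime

WINDOW = timedelta(minutes=10)   # illustrative correlation window

def correlate(detections: list[Detection]) -> list[tuple[Detection, Detection]]:
    """Return (EDR alert, network detection) pairs on the same device within WINDOW."""
    edr = [d for d in detections if d.source == "edr"]
    net = [d for d in detections if d.source == "network"]
    pairs = []
    for e in edr:
        for n in net:
            if e.device_id == n.device_id and abs(e.timestamp - n.timestamp) <= WINDOW:
                pairs.append((e, n))
    return pairs
```

In practice the value comes from the fact that each side of the pair was already a meaningful signal on its own; the combination simply raises confidence and priority.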

Applying Unsupervised Machine Learning to Third-Party EDR Alerts


Once we start seeing EDR alerts in Darktrace, we can treat them like any other data – by applying unsupervised machine learning to them. This means we can understand how unusual a given EDR detection is for each device in question. This is extremely powerful – it reduces noisy alerts without requiring ongoing EDR alert tuning and opens up a whole world of new detection capabilities.

As an example, let's imagine a low-level malware alert keeps appearing from the EDR on a specific device. This might be a false positive in the EDR, or simply not of interest to the security team, but they may not have the resources or knowledge to tune their EDR further and get rid of this noisy alert.

While Darktrace keeps adding this as contextual information to the device's event log, it could, depending on the context of the device, the EDR alert, and the overall environment, stop alerting on this particular EDR malware alert for this specific device once it stops being unusual. Over time, noise is reduced across the environment – but if that particular EDR alert appears on another device, or on the same device in a different context, it may get flagged again, as it is now unusual in the given context.
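
Conceptually, the suppression described above can be thought of as rarity scoring: how surprising is this alert type for this device, given what has been seen before? The toy sketch below illustrates the principle only – the real models use far richer context than simple counts, and the thresholds are illustrative.

```python
# Toy sketch of per-device alert rarity scoring. A recurring, noisy alert type
# becomes "familiar" for a device and scores low; a never-before-seen alert
# type scores high and gets surfaced.
from collections import Counter

class AlertRarityModel:
    def __init__(self) -> None:
        self.history: dict[str, Counter] = {}   # device_id -> alert_type counts

    def observe(self, device_id: str, alert_type: str) -> None:
        """Record that this alert type fired on this device."""
        self.history.setdefault(device_id, Counter())[alert_type] += 1

    def rarity(self, device_id: str, alert_type: str) -> float:
        """1.0 = never seen for this device; values near 0.0 = routine noise."""
        counts = self.history.get(device_id, Counter())
        total = sum(counts.values())
        if total == 0:
            return 1.0
        return 1.0 - counts[alert_type] / (total + 1)

model = AlertRarityModel()
for _ in range(40):
    model.observe("laptop-42", "low_level_malware")        # recurring noisy alert

print(model.rarity("laptop-42", "low_level_malware"))       # low score: familiar, suppressed
print(model.rarity("laptop-42", "credential_dumping"))      # 1.0: unusual, surfaced
```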

Darktrace then goes a step further, taking those unusual EDR alerts and combining them with unusual activity seen in other Darktrace coverage areas, such as the network. Combining an unusual EDR alert with an unusual lateral movement attempt, for example, lets it surface combined, high-precision, cross-data-set anomalous events that are highly indicative of an active cyber-attack – without having to pre-define exactly what 'unusual' looks like.

Figure 4: Combined EDR & network detection using unsupervised machine learning in Darktrace

Use Third-Party EDR Alerts to Trigger AI Analyst

Everything we have discussed so far improves precision in initial detections, adds context, and cuts through alert noise. We don't stop there though – we can also use third-party EDR alerts to trigger our investigation engine, the AI Analyst.

Cyber AI Analyst replicates and automates typical level 1 and level 2 Security Operations Centre (SOC) workflows. It is triggered by every native Darktrace detection. This is not a SOAR with statically defined playbooks – AI Analyst builds hypotheses, gathers data, evaluates that data, and reports on its findings based on the context of each individual scenario and investigation.

Darktrace can use EDR alerts as starting points for its investigation, with every EDR alert ingested now triggering AI Analyst. This is similar to giving a (low-level) EDR alert to a human analyst and telling them: ‘Go and take a look at information in Darktrace and try to conclude whether there is more to this EDR alert or not.’

AI Analyst subsequently looks at the entity that triggered the EDR alert and investigates all available Darktrace data on that entity, over a period of time, in light of that EDR alert. It does not pivot outside Darktrace for this investigation (e.g. back into the Microsoft console) but looks at all the context natively available in Darktrace. If it concludes that there is more to the EDR alert – e.g. a bigger incident – it will report on that and clearly flag it. The report can of course be downloaded directly as a PDF to be shared with other stakeholders.
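
In pseudocode terms, the workflow resembles the sketch below: take the EDR alert as a pivot point, gather the entity's surrounding telemetry, test a few hypotheses against it, and report a conclusion. The function names, event types, and scoring are hypothetical and do not represent AI Analyst's internal logic.

```python
# Conceptual sketch of an alert-triggered investigation: an EDR alert is the
# starting point, and the entity's telemetry around that time decides whether
# there is a wider incident. All names and thresholds here are illustrative.
from datetime import datetime, timedelta

def investigate(edr_alert: dict, query_telemetry) -> dict:
    device = edr_alert["device_id"]
    pivot = datetime.fromisoformat(edr_alert["timestamp"])
    window_start, window_end = pivot - timedelta(hours=2), pivot + timedelta(hours=2)

    # Gather everything already known about the device around the alert.
    events = query_telemetry(device, window_start, window_end)

    # Evaluate simple hypotheses against that context.
    findings = {
        "unusual_beaconing": any(e["type"] == "beaconing" and e["unusual"] for e in events),
        "lateral_movement": any(e["type"] == "lateral_movement" for e in events),
        "new_admin_tools": any(e["type"] == "new_admin_tool" for e in events),
    }
    escalate = sum(findings.values()) >= 2   # illustrative escalation rule

    return {
        "device": device,
        "trigger": edr_alert["title"],
        "findings": findings,
        "conclusion": "probable wider incident" if escalate else "no further activity found",
    }
```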

This comes in handy for a variety of reasons – primarily to further automate security operations and alleviate pressure on human teams. AI Analyst's investigative capabilities sit on top of everything we have discussed so far (combining EDR detections with detections from other coverage areas, applying unsupervised machine learning to EDR detections, and so on).

It also comes in handy for following up on low-severity EDR alerts that you might not otherwise have the human resources to investigate.

The below screenshot shows an example of a concluded AI Analyst investigation that was triggered by an EDR alert:

Figure 5: An AI Analyst incident triggered by third-party data

The Impact of EDR Integrations

The purpose behind all of this is to augment human teams, save them time and drive further security automation.

By ingesting third-party endpoint alerts, combining them with our existing intelligence, and applying unsupervised machine learning to them, we achieve that further security automation.

Analysts don’t have to switch between consoles for investigations. They can leverage our high-fidelity detections that look for unusual endpoint alerts, in combination with our already powerful detections across cloud and email systems, zero trust architecture, IT and OT networks, and more. 

In our experience, this pinpoints the needle in the haystack – it cuts through noise and reduces the mean-time-to-detect and mean-time-to-investigate drastically.

All of this is done out of the box in Darktrace once the endpoint integrations are enabled. It does not need a data scientist to make the machine learning work. Nor does it need a detection engineer or threat hunter to create bespoke, meaningful detections. We want to reduce the barrier to entry for using detection and investigation solutions – in terms of skill and experience required. The system is still flexible, transparent, and open, meaning that advanced users can create their own combined detections, leveraging unsupervised machine learning across different data sets with a few clicks.

There are of course more endpoint integration capabilities available than what we covered here, and we will explore these in future blog posts.

About the author
Max Heinemeyer
Chief Product Officer

Max is a cyber security expert with over a decade of experience in the field, specializing in a wide range of areas such as Penetration Testing, Red-Teaming, SIEM and SOC consulting and hunting Advanced Persistent Threat (APT) groups. At Darktrace, Max is closely involved with Darktrace’s strategic customers & prospects. He works with the R&D team at Darktrace, shaping research into new AI innovations and their various defensive and offensive applications. Max’s insights are regularly featured in international media outlets such as the BBC, Forbes and WIRED. Max holds an MSc from the University of Duisburg-Essen and a BSc from the Cooperative State University Stuttgart in International Business Information Systems.

How to Protect your Organization Against Microsoft Teams Phishing Attacks

21 May 2024

The problem: Microsoft Teams phishing attacks are on the rise

Around 83% of Fortune 500 companies rely on Microsoft Office products and services [1], with Microsoft Teams and Microsoft SharePoint in particular emerging as critical platforms for the business operations of the everyday workplace. Researchers across the threat landscape have begun to observe these legitimate services being leveraged more and more by malicious actors as an initial access method.

As Teams becomes a more prominent feature of the workplace, many employees rely on it for daily internal and external communication, even surpassing email usage in some organizations. As Microsoft [2] states, "Teams changes your relationship with email. When your whole group is working in Teams, it means you'll all get fewer emails. And you'll spend less time in your inbox, because you'll use Teams for more of your conversations."

However, Teams can be exploited to send targeted phishing messages to individuals either internally or externally, while appearing legitimate and safe. Users might receive an external message request from a Teams account claiming to be an IT support service or otherwise affiliated with the organization. Once a user has accepted, the threat actor can launch a social engineering campaign or deliver a malicious payload. As a primarily internal tool, Teams naturally attracts less training and security awareness – due to the nature of the channel it is assumed to be a trusted source, meaning that social engineering is already one step ahead.

Figure 1: Screenshot of a Microsoft Teams message request from a Midnight Blizzard-controlled account (courtesy of Microsoft)

Microsoft Teams Phishing Examples

Microsoft has identified several major phishing attacks using Teams within the past year.

In July 2023, Microsoft announced that the threat actor known as Midnight Blizzard – identified by the United States as a Russian state-sponsored group – had launched a series of phishing campaigns via Teams with the aim of stealing user credentials. These attacks used previously compromised Microsoft 365 accounts and set up new domain names that impersonated legitimate IT support organizations. The threat actors then used social engineering tactics to trick targeted users into sharing their credentials via Teams, enabling them to access sensitive data.  

At a similar time, threat actor Storm-0324 was observed sending phishing lures via Teams containing links to malicious SharePoint-hosted files. The group targeted organizations that allow Teams users to interact and share files externally. Storm-0324’s goal is to gain initial access to hand over to other threat actors to pursue more dangerous follow-on attacks like ransomware.

For a more in-depth look at how Darktrace stops Microsoft Teams phishing, read our blog: Don’t Take the Bait: How Darktrace Keeps Microsoft Teams Phishing Attacks at Bay

The market: Existing Microsoft Teams security solutions are insufficient

Microsoft’s native Teams security focuses on payloads, namely links and attachments, as the principal malicious component of any phishing attack. These payloads are relatively straightforward to detect with Microsoft's experience in anti-virus, sandboxing, and IOCs. However, this approach cannot intervene before payloads are delivered – before the user even gets the chance to accept or deny an external message request. At the same time, it risks missing more subtle threats that don’t include attachments or links – like early-stage phishing, which is pure social engineering – or completely new payloads.

Equally, the market offering for Teams security is limited. Security solutions available on the market are always payload-focused, rather than taking into account the content and context in which a link or attachment is sent – answering questions like:

  • Does it make sense for these two accounts to speak to each other?
  • Are there any linguistic indicators of inducement?

Furthermore, they do not correlate with email to track threats across multiple communication environments, which could signal a wider campaign. Effectively, other market solutions aren’t adding extra value – they are protecting against the same types of threats that Microsoft already covers by default.

The other aspect of Teams security that native and market solutions fail to address is the account itself. As well as focusing on Teams threats, it’s important to analyze messages to understand the normal mode of communication for a user, and spot when a user’s Teams activity might signal account takeover.

The solution: How Darktrace protects Microsoft Teams against sophisticated threats

With its biggest update to Darktrace/Email ever, Darktrace now offers support for Microsoft Teams. With that, we are bringing the same AI philosophy that protects your email and accounts to your messaging environment.  

Our Self-Learning AI looks at content and context for every communication, whether that’s sent in an email or Teams message. It looks at actual user behavior, including language patterns, relationship history of sender and recipient, tone and payloads, to understand if a message poses a threat. This approach allows Darktrace to detect threats such as social engineering and payloadless attacks using visibility and forensic capabilities that Microsoft security doesn’t currently offer, as well as early symptoms of account compromise.  

Unlike market solutions, Darktrace doesn’t offer a siloed approach to Teams security. Data and signals from Teams are shared across email to inform detection, and also with the wider Darktrace ActiveAI security platform. By correlating information from email and Teams with network and apps security, Darktrace is able to better identify suspicious Teams activity and vice versa.  

Interested in the other ways Darktrace/Email augments threat detection? Read our latest blog on how improving the quality of end-user reporting can decrease the burden on the SOC. To find out more about Darktrace's enduring partnership with Microsoft, click here.

References

[1] Essential Microsoft Office Statistics in 2024

[2] Microsoft blog, Microsoft Teams and email, living in harmony, 2024

About the author
Carlos Gray
Product Manager

Inside the SOC

Don’t Take the Bait: How Darktrace Keeps Microsoft Teams Phishing Attacks at Bay

20 May 2024

Social Engineering in Phishing Attacks

Faced with increasingly cyber-aware endpoint users and vigilant security teams, more and more threat actors are forced to think psychologically about the individuals they are targeting with their phishing attacks. Social engineering methods – taking advantage of would-be victims' emotions, pressuring them to open emails or follow links or face financial or legal repercussions, and impersonating known and trusted brands or services – have become commonplace in phishing campaigns in recent years.

Phishing with Microsoft Teams

The malicious use of the popular communications platform Microsoft Teams has become widely observed and discussed across the threat landscape, with many organizations adopting it as their primary means of business communication, and many threat actors using it as an attack vector. As Teams allows users to communicate with people outside of their organization by default [1], it becomes an easy entry point for potential attackers to use as a social engineering vector.

In early 2024, Darktrace/Apps™ identified two separate instances of malicious actors using Microsoft Teams to launch a phishing attack against Darktrace customers in the Europe, the Middle East and Africa (EMEA) region. Interestingly, in this case the attackers not only used a well-known legitimate service to carry out their phishing campaign, but they were also attempting to impersonate an international hotel chain.

Despite these attempts to evade endpoint users and traditional security measures, Darktrace’s anomaly detection enabled it to identify the suspicious phishing messages and bring them to the customer’s attention. Additionally, Darktrace’s autonomous response capability was able to follow up these detections with targeted actions to contain the suspicious activity in the first instance.

Darktrace Coverage of Microsoft Teams Phishing

Chats Sent by External User and Following Actions by Darktrace

On February 29, 2024, Darktrace detected the presence of a new external user on the Software-as-a-Service (SaaS) environment of an EMEA customer for the first time. The user, “REDACTED@InternationalHotelChain[.]onmicrosoft[.]com” was only observed on this date and no further activities were detected from this user after February 29.

Later the same day, the unusual external user created its first chat on Microsoft Teams named “New Employee Loyalty Program”. Over the course of around 5 minutes, the user sent 63 messages across 21 different chats to unique internal users on the customer’s SaaS platform. All these chats included the ‘foreign tenant user’ and one of the customer’s internal users, likely in an attempt to remain undetected. Foreign tenant user, in this case, refers to users without access to typical internal software and privileges, indicating the presence of an external user.

Figure 1: Darktrace’s detection of unusual messages being sent by a suspicious external user via Microsoft Teams.

Figure 2: Advanced Search results showing the presence of a foreign tenant user on the customer’s SaaS environment.
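
The underlying idea of such a detection can be sketched as a simple volume-and-novelty check – a brand-new external account messaging many distinct internal users within minutes gets flagged. The thresholds and field names below are illustrative, not the actual model logic.

```python
# Toy sketch of a "large volume of messages from a new external user" check.
# Thresholds (one day, ten minutes, 50 messages, 15 recipients) are illustrative.
from datetime import datetime, timedelta

def large_volume_from_new_user(messages: list[dict],
                               first_seen: datetime,
                               now: datetime) -> bool:
    """Flag a recently first-seen external account messaging many internal users quickly."""
    is_new_user = (now - first_seen) < timedelta(days=1)
    recent = [m for m in messages if now - m["time"] <= timedelta(minutes=10)]
    distinct_recipients = {m["recipient"] for m in recent}
    return is_new_user and len(recent) >= 50 and len(distinct_recipients) >= 15
```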

Darktrace identified that the external user had connected from an unusual IP address located in Poland, 195.242.125[.]186. Darktrace understood that this was unexpected behavior for this user, who had only previously been observed connecting from the United Kingdom; it further recognized that no other users within the customer’s environment had connected from this external source, thereby deeming it suspicious. Further investigation by Darktrace’s analyst team revealed that the endpoint had been flagged as malicious by several open-source intelligence (OSINT) vendors [2].

Figure 3: External Summary highlighting the rarity of the external source from which the Teams messages were sent.
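
A simplified version of this "new and rare source" reasoning might look like the sketch below, which checks whether the user has connected from this source before and whether anyone else in the organization has. The data structures and example values are hypothetical placeholders.

```python
# Minimal sketch: flag a login source that is new for the user AND rarely seen
# anywhere else in the organization. Field names and IPs are placeholders.
def is_suspicious_source(user: str, src_ip: str,
                         user_history: dict[str, set[str]],
                         org_history: dict[str, set[str]]) -> bool:
    """True if this source is new for the user and unseen across the rest of the org."""
    new_for_user = src_ip not in user_history.get(user, set())
    rare_for_org = len(org_history.get(src_ip, set()) - {user}) == 0
    return new_for_user and rare_for_org

# Hypothetical example: the account has only ever connected from one address.
user_history = {"hotel.user@example.com": {"198.51.100.7"}}
org_history = {"198.51.100.7": {"hotel.user@example.com"}}
print(is_suspicious_source("hotel.user@example.com", "203.0.113.50",
                           user_history, org_history))   # True: new and rare
```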

Following Darktrace’s initial detection of these suspicious Microsoft Teams messages, Darktrace's autonomous response was able to further support the customer by providing suggested mitigative actions that could be applied to stop the external user from sending any additional phishing messages.

Unfortunately, at the time of this attack Darktrace's autonomous response capability was configured in human confirmation mode, meaning any autonomous response actions had to be manually applied by the customer. Had it been enabled in fully autonomous mode, it would have been able to promptly disrupt the attack, disabling the external user to prevent them from continuing their phishing attempts and securing precious time for the customer’s security team to begin their own remediation procedures.

Figure 4: Darktrace autonomous response actions that were suggested following the ’Large Volume of Messages Sent from New External User’ detection model alert.

External URL Sent within Teams Chats

Within the 21 Teams chats created by the threat actor, Darktrace identified 21 different external URLs being sent, all of which included the domain "cloud-sharcpoint[.]com”. Many of these URLs had been recently established and had been flagged as malicious by OSINT providers [3]. This was likely an attempt to impersonate “cloud-sharepoint[.]com”, the legitimate domain of Microsoft SharePoint, with the threat actor attempting to ‘typo-squat’ the URL to convince endpoint users to trust the legitimacy of the link. Typo-squatted domains are commonly misspelled URLs registered by opportunistic attackers in the hope of gaining the trust of unsuspecting targets. They are often used for nefarious purposes like dropping malicious files on devices or harvesting credentials.
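
As an aside, the core of typo-squat spotting can be illustrated in a few lines of code: flag domain tokens that are very close to, but not exactly, a well-known brand token. The brand list and similarity threshold below are illustrative only, not how Darktrace evaluates links.

```python
# Minimal sketch of typo-squat spotting via string similarity on domain tokens.
# Brand list and threshold are illustrative.
from difflib import SequenceMatcher
import re

BRAND_TOKENS = {"sharepoint", "microsoft", "office", "teams"}

def looks_typosquatted(domain: str, threshold: float = 0.85) -> bool:
    """Flag tokens that are near-matches of a brand token but not exact matches."""
    tokens = re.split(r"[.\-]", domain.lower())
    for token in tokens:
        for brand in BRAND_TOKENS:
            if token == brand:
                continue   # exact brand token is not, by itself, a typo
            if SequenceMatcher(None, token, brand).ratio() >= threshold:
                return True   # close but not equal, e.g. "sharcpoint" vs "sharepoint"
    return False

print(looks_typosquatted("cloud-sharcpoint.com"))   # True
print(looks_typosquatted("cloud-sharepoint.com"))   # False
print(looks_typosquatted("example.com"))            # False
```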

Upon clicking this malicious link, users were directed to a similarly typo-squatted domain, “InternatlonalHotelChain[.]sharcpoInte-docs[.]com”. This domain was likely made to appear like the SharePoint URL used by the international hotel chain being impersonated.

Figure 5: Redirected link to a fake SharePoint page attempting to impersonate an international hotel chain.

This fake SharePoint page used the branding of the international hotel chain and contained a document named “New Employee Loyalty Program”; the same name given to the phishing messages sent by the attacker on Microsoft Teams. Upon accessing this file, users would be directed to a credential harvester, masquerading as a Microsoft login page, and prompted to enter their credentials. If successful, this would allow the attacker to gain unauthorized access to a user’s SaaS account, thereby compromising the account and enabling further escalation in the customer’s environment.

Figure 6: A fake Microsoft login page that popped up when attempting to open the ’New Employee Loyalty Program’ document.

This is a clear example of an attacker attempting to leverage social engineering tactics to gain the trust of their targets and convince them to inadvertently compromise their account. Many corporate organizations partner with other companies and well-known brands to offer their employees loyalty programs as part of their employment benefits and perks. As such, it would not necessarily be unexpected for employees to receive such an offer from an international hotel chain. By impersonating an international hotel chain, threat actors increase the probability of convincing their targets to trust and click their malicious messages and links, and unintentionally compromise their accounts.

In spite of the attacker’s attempts to impersonate reputable brands and platforms, Darktrace/Apps was able to successfully recognize the malicious intent behind this phishing campaign and suggest steps to contain the attack. Darktrace recognized that the user in question had deviated from its ‘learned’ pattern of behavior by connecting to the customer’s SaaS environment from an unusual external location, before proceeding to send an unusually large volume of messages via Teams, indicating that the SaaS account had been compromised.

A Wider Campaign?

Around a month later, in March 2024, Darktrace observed a similar incident of a malicious actor impersonating the same international hotel chain in a phishing attack using Microsoft Teams, suggesting that this was part of a wider phishing campaign. Like the previous example, this customer was also based in the EMEA region.

The attack tactics identified in this instance were very similar to the previous example, with a new external user identified within the network proceeding to create a series of Teams chats named “New Employee Loyalty Program” containing typo-squatted external links.

There were a few differences in this second incident, however, with the attacker using the domain “@InternationalHotelChainExpeditions[.]onmicrosoft[.]com” to send their malicious Teams messages and using different typo-squatted URLs to imitate Microsoft SharePoint.

As both customers targeted by this phishing campaign were subscribed to Darktrace’s Proactive Threat Notification (PTN) service, this suspicious SaaS activity was promptly escalated to the Darktrace Security Operations Center (SOC) for immediate triage and investigation. Following their investigation, the SOC team sent an alert to the customers informing them of the compromise and advising urgent follow-up.

Conclusion

While there are clear similarities between these Microsoft Teams-based phishing attacks, the attackers here have seemingly sought ways to refine their tactics, techniques, and procedures (TTPs), leveraging new connection locations and creating new malicious URLs in an effort to outmaneuver human security teams and conventional security tools.

As cyber threats grow increasingly sophisticated and evasive, it is crucial for organizations to employ intelligent security solutions that can see through social engineering techniques and pinpoint suspicious activity early.

Darktrace’s Self-Learning AI understands customer environments and is able to recognize the subtle deviations in a device’s behavioral pattern, enabling it to effectively identify suspicious activity even when attackers adapt their strategies. In this instance, this allowed Darktrace to detect the phishing messages, and the malicious links contained within them, despite the seemingly trustworthy source and use of a reputable platform like Microsoft Teams.

Credit to Min Kim, Cyber Security Analyst, Raymond Norbert, Cyber Security Analyst and Ryan Traill, Threat Content Lead

Appendix

Darktrace Model Detections

SaaS / Unusual Activity / Large Volume of Messages Sent from New External User

Indicators of Compromise (IoCs)

IoC – Type – Description

https://cloud-sharcpoint[.]com/[a-zA-Z0-9]{15} – Example hostname – Malicious phishing redirection link

InternatlonalHotelChain[.]sharcpolnte-docs[.]com – Hostname – Redirected link

195.242.125[.]186 – External source IP address – Malicious endpoint

MITRE Tactics

Tactic – Technique

Initial Access – Phishing (T1566)

References

[1] https://learn.microsoft.com/en-us/microsoftteams/trusted-organizations-external-meetings-chat?tabs=organization-settings

[2] https://www.virustotal.com/gui/ip-address/195.242.125.186/detection

[3] https://www.virustotal.com/gui/domain/cloud-sharcpoint.com

About the author
Min Kim
Cyber Security Analyst