Blog


Finding the Right Cyber Security AI for You

20 Dec 2022
This blog explores the nuances of AI in cyber security, how to identify true AI, and considerations when integrating AI technology with people, processes, and other technology.

AI has long been a buzzword. We first saw it used in the consumer space – in social media, e-commerce, and even our music preferences. In the past few years it has made its way into the enterprise space, especially cyber security.

Increasingly, we see threat actors using AI in their attack techniques. This is inevitable given the advancements in AI technology, the lower barrier to entry to the cyber security industry, and the continued profitability of being a threat actor. In a survey of security decision makers across industries such as financial services and manufacturing, 77% of respondents expected weaponized AI to lead to an increase in the scale and speed of attacks.

Defenders are also ramping up their use of AI in cyber security – with more than 80% of respondents agreeing that organizations require advanced defenses to combat offensive AI. The result is a ‘cyber arms race’, with adversaries and security teams in constant pursuit of the latest technological advancements.

The rules-and-signatures approach is no longer sufficient in this evolving threat landscape. Because of this collective need, we will continue to see a push for AI innovation in this space as well. By 2025, cyber security technologies will account for 25% of the AI software market.

Despite the intrigue surrounding AI, many people have a limited understanding of how it truly works. The mystery of AI technology is what piques the interest of many cyber security practitioners. As an industry we also know that AI is necessary for advancement, but there is so much noise around AI and machine learning that some teams struggle to understand it. The paradox of choice leaves security teams more frustrated and confused by all the options presented to them.

Identifying True AI

You first need to define what you want the AI technology to solve. This might seem trivial, but many security teams often forget to come back to the fundamentals: what problem are you addressing? What are you trying to improve? 

Not every process needs AI; some processes simply need automation – these are the more straightforward parts of your business. Bigger, more complex systems require AI. The crux is identifying these parts of your business, applying AI, and being clear about what you are going to achieve with these AI technologies.

For example, when it comes to factory floor operations or tracking leave days of employees, businesses employ automation technologies, but when it comes to business decisions like PR strategies or new business exploration, AI is used to predict trends and help business owners make these decisions. 

Similarly, in cyber security, when dealing with known threats such as known malware and malicious hosting sites, automation is great at keeping track of them; workflows and playbooks are also best handled with automation tools. However, when it comes to unknown unknowns like zero-day attacks, insider threats, IoT threats and supply chain attacks, AI is needed to detect and respond to these threats as they emerge.

Automation is often communicated as AI, and it becomes difficult for security teams to differentiate. Automation helps you to quickly make a decision you already know you will make, whereas true AI helps you make a better decision.

Key ways to differentiate true AI from automation (a minimal sketch follows this list):

  • The Data Set: In automation, what you are looking for is very well-scoped. You already know what you are looking for – you are just accelerating the process with rules and signatures. True AI is dynamic: you no longer need to define which activities deserve your attention; the AI highlights and prioritizes them for you.
  • Bias: When we define what we are looking for, we inherently impose our human biases on those decisions. We are also limited by our knowledge at that point in time – this leaves out the crucial unknown unknowns.
  • Real-time: Every organization is always changing and it is important that AI takes all that data into consideration. True AI that is real time and also changes with your organization’s growth is hard to find. 
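
To make this distinction concrete, here is a minimal, purely illustrative Python sketch. It is not how any particular product works: the domain list, traffic figures, and z-score threshold are all hypothetical. It simply shows how a signature lookup only catches what you have already listed, while a per-device baseline can flag behaviour you never thought to define.

```python
# Illustrative sketch only: contrasts a signature lookup with a simple
# anomaly score. Real products use far richer models; every name and
# threshold here is hypothetical.
from statistics import mean, stdev

KNOWN_BAD_DOMAINS = {"malicious-host.example", "bad-cdn.example"}  # hypothetical signature list

def signature_match(domain: str) -> bool:
    """Automation/signatures: flag only what you already know to look for."""
    return domain in KNOWN_BAD_DOMAINS

def anomaly_score(bytes_out_history: list[float], bytes_out_today: float) -> float:
    """Baseline stand-in: score how far today's behaviour deviates from the
    device's own history (a simple z-score)."""
    if len(bytes_out_history) < 2:
        return 0.0
    mu, sigma = mean(bytes_out_history), stdev(bytes_out_history)
    return 0.0 if sigma == 0 else abs(bytes_out_today - mu) / sigma

# A never-before-seen domain sails past the signature check, but a tenfold
# jump in outbound volume still stands out against the device's own baseline.
print(signature_match("new-unknown-host.example"))          # False
print(anomaly_score([120.0, 130.0, 110.0, 125.0], 1300.0))  # large z-score
```

The shift from enumerating bad things to modelling normal behaviour is the essential difference; everything else is a question of how rich the model is.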

Our AI Research Centre has produced numerous papers on the applications of true AI in cyber security. The Centre comprises more than 150 members and holds more than 100 patents and patents pending. Some of the featured white papers include research on Attack Path Modeling and using AI as a preventative approach in your organization.

Integrating AI Outputs with People, Process, and Technology


Integrating AI with People

We are living in a time of trust deficit, and that applies to AI as well. As humans we can be skeptical of AI, so how do we build trust in AI such that it works for us? This applies not only to the users of the technology, but to the wider organization as well. Since this is the People pillar, the key factors in achieving trust in AI are education, culture, and exposure. In a culture where people are open to learning and trying new AI technologies, we will naturally build trust in AI over time.

Integrating AI with Process

Next, consider the integration of AI and its outputs into your workflows and playbooks. To make decisions around that, security managers need to be clear about their security priorities and which security gaps a particular technology is meant to fill. Regardless of whether you have an outsourced MSSP/SOC team, a 50-strong in-house SOC team, or just a two-person team, it is about understanding your priorities and assigning the proper resources to them.

Integrating AI with Technology 

Finally, there is the integration of AI with your existing technology stack. Most security teams deploy different tools and services to achieve different goals – whether a tool like a SIEM, a firewall, or an endpoint agent, or services like penetration testing or vulnerability assessment exercises. One of the biggest challenges is putting all of this information together and pulling actionable insights out of it. Integration on multiple levels is always challenging with complex technologies because these technologies can rate or interpret threats differently.

Security teams often find themselves spending the most time making sense of the output of different tools and services. For example, taking the outcomes from a pentesting report and trying to enhance SOAR configurations, or looking at SOC alerts to advise firewall configurations, or taking vulnerability assessment reports to scope third-party Incident Response teams.

These tools can have a strong mastery of large volumes of data, but ownership of the knowledge should ultimately still lie with the human teams – and the way to achieve that is with continuous feedback and integration. It is no longer efficient for human teams to carry this out at scale and at speed.

The Cyber AI Loop is Darktrace’s approach to cyber security. The four product families make up a key aspect of an organization’s cyber security posture. Darktrace PREVENT, DETECT, RESPOND and HEAL each feed back into a continuous, virtuous cycle, constantly strengthening each other’s abilities. 

This cycle augments humans at every stage of an incident lifecycle. For example, PREVENT may alert you to a vulnerability which holds a particularly high risk potential for your organization. It provides clear mitigation advice, and while you act on it, PREVENT will feed into DETECT and RESPOND, which are immediately poised to kick in should an attack occur in the interim. Conversely, once an attack has been contained by RESPOND, it will feed information back into PREVENT, which will anticipate an attacker’s likely next move. The Cyber AI Loop helps you harden security in a holistic way so that month on month, year on year, the organization continuously improves its defensive posture.

Explainable AI

Despite its complexity, AI needs to produce outputs that are clear and easy to understand in order to be useful. In the heat of the moment during a cyber incident, human teams need to quickly comprehend: What happened here? When did it happen? What devices are affected? What does it mean for my business? What should I deal with first?

To this end, Darktrace applies another level of AI on top of its initial findings that autonomously investigates in the background, reducing a mass of individual security events to just a few overall cyber incidents worthy of human review. It generates natural-language incident reports with all the relevant information for your team to make judgements in an instant. 
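
As a purely conceptual illustration of that reduction step, and not a description of Cyber AI Analyst's internals, the toy Python sketch below groups individual events into incidents by device and one-hour time window and renders a one-line, plain-language summary. The grouping key, fields, and wording are assumptions made for the example only.

```python
# Toy illustration of rolling many security events up into a handful of
# incidents with a plain-language summary. This is NOT how Cyber AI Analyst
# works internally; the grouping key (device + one-hour window) and the
# report wording are assumptions for this example.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    device: str
    timestamp: int   # epoch seconds
    detail: str

def group_into_incidents(events: list[Event], window: int = 3600) -> dict:
    """Group events that share a device and fall in the same time bucket."""
    incidents = defaultdict(list)
    for e in events:
        incidents[(e.device, e.timestamp // window)].append(e)
    return incidents

def summarize(key, events) -> str:
    device, _bucket = key
    details = ", ".join(sorted({e.detail for e in events}))
    return (f"Incident on {device}: {len(events)} related events "
            f"within one hour ({details}).")

events = [
    Event("server-01", 1700000000, "unusual port 53 SSL"),
    Event("server-01", 1700000300, "beaconing to rare endpoint"),
    Event("laptop-17", 1700050000, "new user agent"),
]
for key, evs in group_into_incidents(events).items():
    print(summarize(key, evs))
```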

Figure 1: An example of how Darktrace filters individual model breaches into incidents and then critical incidents for the human to review 

Cyber AI Analyst takes into consideration not only network detections but also activity across your endpoints, cloud environments, IoT devices and OT devices. It also looks at your attack surface and the associated risks in order to triage and surface the highest-priority alerts – those that, if left unaddressed, would cause the most damage to your organization. These insights are delivered in real time and are unique to your environment.

This also helps address another topic that frequently comes up in conversations around AI: false positives. This is of course a valid concern: what is the point of harvesting the value of AI if it means that a small team now must look at thousands of alerts? But we have to remember that while AI allows us to make more connections over the vastness of logs, its goal is not to create more work for security teams, but to augment them instead.

To ensure that your business continues to own these AI outputs and, more importantly, the knowledge behind them, Explainable AI such as that used in Darktrace’s Cyber AI Analyst is needed to interpret the AI’s findings, ensuring human teams know what happened, what action (if any) the AI took, and why.

Conclusion

Every organization is different, and its security should reflect that. However, some fundamental challenges of AI in cyber security are shared by all security teams, regardless of size, resources, industry vertical, or culture. Their cyber strategy and maturity levels are what set them apart. Maturity is not defined by how many professional certifications or years of experience the team has. A mature team works together to solve problems. They understand that while AI is not the silver bullet, it is a powerful bullet that, if used right, will autonomously harden the security of the complete digital ecosystem while augmenting the humans tasked with defending it.

About the author
Germaine Tan
VP of Cyber Risk Management

Germaine is the Director of Analysis, APAC at Darktrace. Based in Singapore, she works with CISOs, managers and security teams across APAC on model optimization and the operationalization of Darktrace in their digital environments. She also manages a team of 17 analysts in the APAC region that threat hunts and monitors networks from all over the world. Germaine holds a Bachelor of Science in Engineering and a Master of Science in Technology Management from Nanyang Technological University. She is CISSP, CRISC and CEH certified.


Blog


The State of AI in Cybersecurity: How AI will impact the cyber threat landscape in 2024

22 Apr 2024

About the AI Cybersecurity Report

We surveyed 1,800 CISOs, security leaders, administrators, and practitioners from industries around the globe. Our research was conducted to understand how the adoption of new AI-powered offensive and defensive cybersecurity technologies is being managed by organizations.

This blog continues the conversation from our last post, “The State of AI in Cybersecurity: Unveiling Global Insights from 1,800 Security Practitioners”, which gave an overview of the entire report. This blog focuses on one aspect of the overarching report: the impact of AI on the cyber threat landscape.

To access the full report click here.

Are organizations feeling the impact of AI-powered cyber threats?

Nearly three-quarters (74%) of respondents state that AI-powered threats are now a significant issue. Almost nine in ten (89%) agree that AI-powered threats will remain a major challenge into the foreseeable future, not just for the next one to two years.

However, only a slight majority (56%) thought AI-powered threats were a separate issue from traditional, non-AI-powered threats. This could be because there are few, if any, reliable methods to determine whether an attack is AI-powered.

Identifying exactly when and where AI is being applied may not ever be possible. However, it is possible for AI to affect every stage of the attack lifecycle. As such, defenders will likely need to focus on preparing for a world where threats are unique and are coming faster than ever before.

Figure: A hypothetical cyber attack augmented by AI at every stage.

Are security stakeholders concerned about AI’s impact on cyber threats and risks?

The results from our survey showed that security practitioners are concerned that AI will impact organizations in a variety of ways. Concern was spread almost equally across the board – from the volume and sophistication of malware to internal risks such as the leakage of proprietary information by employees using generative AI tools.

What this tells us is that defenders need to prepare for a greater volume of sophisticated attacks and balance this with a focus on cyber hygiene to manage internal risks.

One example of a growing internal risk is “shadow AI” – the use of popular AI tools without organizational approval or oversight. It takes little effort for employees to adopt publicly available text-based generative AI systems to increase their productivity, and the resulting security risks, such as inadvertent exposure of sensitive information or intellectual property, are an ever-growing concern.

Are organizations taking strides to reduce risks associated with adoption of AI in their application and computing environment?

71.2% of survey participants say their organization has taken steps specifically to reduce the risk of using AI within its application and computing environment.

16.3% of survey participants claim their organization has not taken these steps.

These findings are good news. Even as enterprises compete to get as much value from AI as they can, as quickly as possible, they’re tempering their eager embrace of new tools with sensible caution.

Still, responses varied across roles. Security analysts, operators, administrators, and incident responders are less likely to have said their organizations had taken AI risk mitigation steps than respondents in other roles. In fact, 79% of executives said steps had been taken, and only 54% of respondents in hands-on roles agreed. It seems that leaders believe their organizations are taking the needed steps, but practitioners are seeing a gap.

Do security professionals feel confident in their preparedness for the next generation of threats?

A majority of respondents (six out of every ten) believe their organizations are inadequately prepared to face the next generation of AI-powered threats.

The survey findings reveal contrasting perceptions of organizational preparedness for cybersecurity threats across different regions and job roles. Security administrators, due to their hands-on experience, express the highest level of skepticism, with 72% feeling their organizations are inadequately prepared. Notably, respondents in mid-sized organizations feel the least prepared, while those in the largest companies feel the most prepared.

Regionally, participants in Asia-Pacific are most likely to believe their organizations are unprepared, while those in Latin America feel the most prepared. This aligns with the observation that Asia-Pacific has been the most impacted region by cybersecurity threats in recent years, according to the IBM X-Force Threat Intelligence Index.

The optimism among Latin American respondents could be attributed to lower threat volumes experienced in the region, but it's cautioned that this could change suddenly (1).

What are the biggest barriers to defending against AI-powered threats?

The top-ranked inhibitors center on knowledge and personnel. However, respondents pointed to issues almost equally across the board, including concerns around budget, tool integration, lack of attention to AI-powered threats, and poor cyber hygiene.

The cybersecurity industry is facing a significant shortage of skilled professionals, with a global deficit of approximately 4 million experts (2). As organizations struggle to manage their security tools and alerts, the challenge intensifies with the increasing adoption of AI by attackers. This shift has altered the demands on security teams, requiring practitioners to possess broad and deep knowledge across rapidly evolving solution stacks.

Educating end users about AI-driven defenses becomes paramount as organizations grapple with the shortage of professionals proficient in managing AI-powered security tools. Operationalizing machine learning models for effectiveness and accuracy emerges as a crucial skill set in high demand. However, our survey highlights a concerning lack of understanding among cybersecurity professionals regarding AI-driven threats and the use of AI-driven countermeasures, indicating a gap in keeping pace with evolving attacker tactics.

The integration of security solutions remains a notable problem, hindering effective defense strategies. While budget constraints are not a primary inhibitor, organizations must prioritize addressing these challenges to bolster their cybersecurity posture. It's imperative for stakeholders to recognize the importance of investing in skilled professionals and integrated security solutions to mitigate emerging threats effectively.

To access the full report click here.

References

1. IBM, X-Force Threat Intelligence Index 2024, Available at: https://www.ibm.com/downloads/cas/L0GKXDWJ

2. ISC2, Cybersecurity Workforce Study 2023, Available at: https://media.isc2.org/-/media/Project/ISC2/Main/Media/documents/research/ISC2_Cybersecurity_Workforce_Study_2023.pdf?rev=28b46de71ce24e6ab7705f6e3da8637e


Blog

Inside the SOC

Sliver C2: How Darktrace Provided a Sliver of Hope in the Face of an Emerging C2 Framework

17 Apr 2024

Offensive Security Tools

As organizations around the globe seek ways to bolster their digital defenses and safeguard their networks against ever-changing cyber threats, security teams are increasingly adopting offensive security tools to simulate cyber-attacks and assess the security posture of their networks. These legitimate tools, however, can sometimes be exploited by real threat actors and used as genuine attack vectors.

What is Sliver C2?

Sliver C2 is a legitimate open-source command-and-control (C2) framework that was released in 2020 by the security organization Bishop Fox. Sliver C2 was originally intended for security teams and penetration testers to perform security tests on their digital environments [1] [2] [5]. In recent years, however, the Sliver C2 framework has become a popular alternative to Cobalt Strike and Metasploit for many attackers and Advanced Persistent Threat (APT) groups, who adopt this C2 framework for unsolicited and ill-intentioned activities.

The use of Sliver C2 has been observed in conjunction with various strains of Rust-based malware, such as KrustyLoader, to provide backdoors enabling lines of communication between attackers and their malicious C2 servers [6]. It is unsurprising, then, that it has also been leveraged to exploit zero-day vulnerabilities, including critical vulnerabilities in the Ivanti Connect Secure and Policy Secure services.

In early 2024, Darktrace observed the malicious use of Sliver C2 during an investigation into post-exploitation activity on customer networks affected by the Ivanti vulnerabilities. Fortunately for affected customers, Darktrace DETECT™ was able to recognize the suspicious network-based connectivity that emerged alongside Sliver C2 usage and promptly brought it to the attention of customer security teams for remediation.

How does Sliver C2 work?

Given its open-source nature, the Sliver C2 framework is extremely easy to access and download and is designed to support multiple operating systems (OS), including MacOS, Windows, and Linux [4].

Sliver C2 generates implants (aptly referred to as ‘slivers’) that operate on a client-server architecture [1]. An implant contains malicious code used to remotely control a targeted device [5]. Once a ‘sliver’ is deployed on a compromised device, a line of communication is established between the target device and the central C2 server. These connections can then be managed over Mutual TLS (mTLS), WireGuard, HTTP(S), or DNS [1] [4]. Sliver C2 has a wide range of features, including dynamic code generation, compile-time obfuscation, multiplayer mode, staged and stageless payloads, procedurally generated C2 over HTTP(S), and DNS canary blue team detection [4].

Why Do Attackers Use Sliver C2?

Amidst the multitude of reasons why malicious actors opt for Sliver C2 over its counterparts, one stands out: its relative obscurity. This lack of widespread recognition means that security teams may overlook the threat, failing to actively search for it within their networks [3] [5].

Although the presence of Sliver C2 activity could be representative of authorized and expected penetration testing behavior, it could also be indicative of a threat actor attempting to communicate with its malicious infrastructure, so it is crucial for organizations and their security teams to identify such activity at the earliest possible stage.

Darktrace’s Coverage of Sliver C2 Activity

Darktrace’s anomaly-based approach to threat detection means that it does not explicitly attempt to attribute or distinguish between specific C2 infrastructures. Despite this, Darktrace was able to connect Sliver C2 usage to phases of an ongoing attack chain related to the exploitation of zero-day vulnerabilities in Ivanti Connect Secure VPN appliances in January 2024.

Around the time that the zero-day Ivanti vulnerabilities were disclosed, Darktrace detected an internal server on one customer network deviating from its expected pattern of activity. The device was observed making regular connections to endpoints associated with Pulse Secure Cloud Licensing, indicating it was an Ivanti server. It was observed connecting to a string of anomalous hostnames, including ‘cmjk3d071amc01fu9e10ae5rt9jaatj6b.oast[.]live’ and ‘cmjft14b13vpn5vf9i90xdu6akt5k3pnx.oast[.]pro’, via HTTP using the user agent ‘curl/7.19.7 (i686-redhat-linux-gnu) libcurl/7.63.0 OpenSSL/1.0.2n zlib/1.2.7’.

Darktrace further identified that the URI requested during these connections was ‘/’ and the top-level domains (TLDs) of the endpoints in question were known Out-of-band Application Security Testing (OAST) server provider domains, namely ‘oast[.]live’ and ‘oast[.]pro’. OAST is a testing method that is used to verify the security posture of an application by testing it for vulnerabilities from outside of the network [7]. This activity triggered the DETECT model ‘Compromise / Possible Tunnelling to Bin Services’, which breaches when a device is observed sending DNS requests for, or connecting to, ‘request bin’ services. Malicious actors often abuse such services to tunnel data via DNS or HTTP requests. In this specific incident, only two connections were observed, and the total volume of data transferred was relatively low (2,302 bytes transferred externally). It is likely that the connections to OAST servers represented malicious actors testing whether target devices were vulnerable to the Ivanti exploits.
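
As an illustration of how such connections might be surfaced from ordinary HTTP logs, the sketch below flags requests to hostnames under known OAST provider domains. The provider list and the log format are assumptions made for the example, not an authoritative feed, and this is not a description of Darktrace's own detection logic.

```python
# Minimal sketch: flag HTTP requests whose destination hostname falls under a
# known OAST provider domain. The provider list and the log format below are
# illustrative assumptions only.
OAST_PROVIDER_DOMAINS = ("oast.live", "oast.pro")  # the two provider domains named in this incident

def is_oast_hostname(hostname: str) -> bool:
    hostname = hostname.lower().rstrip(".")
    return any(hostname == d or hostname.endswith("." + d) for d in OAST_PROVIDER_DOMAINS)

# Hypothetical HTTP log entries: (source device, requested hostname, user agent)
http_log = [
    ("ivanti-server-01", "cmjk3d071amc01fu9e10ae5rt9jaatj6b.oast.live", "curl/7.19.7"),
    ("workstation-22", "www.example.com", "Mozilla/5.0"),
]

for device, host, user_agent in http_log:
    if is_oast_hostname(host):
        print(f"ALERT: {device} contacted OAST endpoint {host} (user agent: {user_agent})")
```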

The device proceeded to make several SSL connections to the IP address 103.13.28[.]40, using the destination port 53, which is typically reserved for DNS requests. Darktrace recognized that this activity was unusual as the offending device had never previously been observed using port 53 for SSL connections.

Figure 1: Model Breach Event Log displaying the ‘Application Protocol on Uncommon Port’ DETECT model breaching in response to the unusual use of port 53.

Figure 2: Model Breach Event Log displaying details pertaining to the ‘Application Protocol on Uncommon Port’ DETECT model breach, including the 100% rarity of the port usage.
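
The intuition behind this kind of detection can be sketched very simply: record which combinations of device, protocol, and destination port have been seen during a learning period, and flag first-time occurrences such as SSL over port 53. The sketch below is a crude stand-in for a learned behavioural baseline, not Darktrace's model logic, and the device name is hypothetical.

```python
# Crude stand-in for a behavioural baseline: flag (device, protocol, port)
# combinations never observed during the learning period.
baseline: set[tuple[str, str, int]] = set()

def learn(device: str, protocol: str, dst_port: int) -> None:
    """Populate the baseline during an initial learning period."""
    baseline.add((device, protocol, dst_port))

def check(device: str, protocol: str, dst_port: int) -> None:
    """After learning, flag combinations never before seen for this device."""
    if (device, protocol, dst_port) not in baseline:
        print(f"ALERT: {device} used {protocol} on port {dst_port} for the first time")

# Learning period: the server normally does DNS on port 53 and SSL on port 443.
learn("ivanti-server-01", "DNS", 53)
learn("ivanti-server-01", "SSL", 443)

# Later observation: SSL to an external IP over port 53 stands out.
check("ivanti-server-01", "SSL", 53)
```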

Further investigation into the suspicious IP address revealed that it had been flagged as malicious by multiple open-source intelligence (OSINT) vendors [8]. In addition, OSINT sources also identified that the JARM fingerprint of the service running on this IP and port (00000000000000000043d43d00043de2a97eabb398317329f027c66e4c1b01) was linked to the Sliver C2 framework and the mTLS protocol it is known to use [4] [5].
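
Once a JARM fingerprint has been computed for a suspicious service (for example with Salesforce's open-source jarm scanner), checking it against a watchlist is straightforward. The sketch below assumes the fingerprint is already in hand; the watchlist contains only the value quoted above, and building and maintaining a reliable watchlist is the hard part that is left out here.

```python
# Sketch only: compare an already-computed JARM fingerprint against a small
# watchlist. Computing JARM itself requires an active TLS scan of the target,
# which is not shown here.
SLIVER_ASSOCIATED_JARM = {
    "00000000000000000043d43d00043de2a97eabb398317329f027c66e4c1b01",
}

def check_jarm(ip: str, port: int, jarm_fingerprint: str) -> None:
    if jarm_fingerprint in SLIVER_ASSOCIATED_JARM:
        print(f"ALERT: {ip}:{port} presents a JARM fingerprint previously linked to Sliver C2")

check_jarm("103.13.28[.]40", 53,
           "00000000000000000043d43d00043de2a97eabb398317329f027c66e4c1b01")
```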

An Additional Example of Darktrace’s Detection of Sliver C2

However, it was not just during the January 2024 exploitation of Ivanti services that Darktrace observed Sliver C2 usage across its customer base. In March 2023, for example, Darktrace detected devices on multiple customer networks making beaconing connections to malicious endpoints linked to Sliver C2 infrastructure, including 18.234.7[.]23 [10] [11] [12] [13].

Darktrace identified that the observed connections to this endpoint contained the unusual URI ‘/NIS-[REDACTED]’, which contained 125 characters, including numbers, lower and upper case letters, and special characters like “_”, “/”, and “-“, as well as various other URIs which suggested attempted data exfiltration (a crude heuristic for spotting such URIs is sketched after the list):

  • ‘/upload/api.html?c=[REDACTED]&fp=[REDACTED]’
  • ‘/samples.html?mx=[REDACTED]&s=[REDACTED]’
  • ‘/actions/samples.html?l=[REDACTED]&tc=[REDACTED]’
  • ‘/api.html?gf=[REDACTED]&x=[REDACTED]’
  • ‘/samples.html?c=[REDACTED]&zo=[REDACTED]’
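
A very crude heuristic for spotting URIs like these is to flag requests that are unusually long or that carry long, random-looking query values. The thresholds in the sketch below are arbitrary illustrations rather than tuned detection logic, and the sample URI is fabricated for the example.

```python
# Crude heuristic sketch: flag request URIs that are unusually long or that
# carry long, high-entropy query values, a pattern sometimes seen when data
# is smuggled out in URLs. Thresholds are arbitrary illustrations.
import math
from urllib.parse import parse_qsl, urlparse

def shannon_entropy(s: str) -> float:
    if not s:
        return 0.0
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def looks_suspicious(uri: str, max_len: int = 100, entropy_threshold: float = 4.0) -> bool:
    if len(uri) > max_len:
        return True
    query_values = [v for _, v in parse_qsl(urlparse(uri).query)]
    return any(len(v) > 30 and shannon_entropy(v) > entropy_threshold for v in query_values)

print(looks_suspicious("/index.html"))  # False
print(looks_suspicious(
    "/samples.html?mx=dGhpc0lzQUxvbmdCYXNlNjRMb29raW5nQmxvYk9mRGF0YTEyMzQ1Njc4OTBhYmNkZWZnaGlqa2xtbm9w&s=YW5vdGhlckJsb2I"
))  # True: very long URI with long, encoded-looking query values
```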

This anomalous external connectivity was carried out through multiple destination ports, including the key ports 443 and 8888.

Darktrace additionally observed devices on affected customer networks performing TLS beaconing to the IP address 44.202.135[.]229 with the JA3 hash 19e29534fd49dd27d09234e639c4057e. According to OSINT sources, this JA3 hash is associated with the Golang TLS cipher suites in which the Sliver framework is developed [14].
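
Beaconing itself can also be approximated with a simple statistical check: connections to a single rare external endpoint that recur at near-constant intervals are a classic C2 tell. The sketch below flags a low coefficient of variation in the gaps between connection times; the timestamps and the 0.1 threshold are illustrative assumptions, and real beacons often add jitter that a check this naive would miss.

```python
# Simplistic beaconing sketch: near-constant gaps between connections to one
# rare endpoint hint at automated check-ins. Threshold and data are illustrative.
from statistics import mean, stdev

def looks_like_beaconing(timestamps: list[float], max_cv: float = 0.1) -> bool:
    if len(timestamps) < 4:
        return False  # not enough observations to judge regularity
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return (stdev(gaps) / mean(gaps)) < max_cv

# Hypothetical connection times (in seconds) to one external IP, roughly 60 s apart.
connection_times = [0.0, 61.0, 119.0, 181.0, 240.0, 302.0]
print(looks_like_beaconing(connection_times))  # True
```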

Conclusion

Despite its relative novelty in the threat landscape and its lesser-known status compared to other C2 frameworks, Darktrace has demonstrated its ability to effectively detect the malicious use of Sliver C2 across numerous customer environments. This included instances where attackers exploited vulnerabilities in the Ivanti Connect Secure and Policy Secure services.

While human security teams may lack awareness of this framework, and traditional rules- and signature-based security tools might not be fully equipped and updated to detect Sliver C2 activity, Darktrace’s Self-Learning AI understands its customer networks, users, and devices. As such, Darktrace is adept at identifying subtle deviations in device behavior that could indicate network compromise, including connections to new or unusual external locations, regardless of whether attackers use established or novel C2 frameworks, providing organizations with a sliver of hope in an ever-evolving threat landscape.

Credit to Natalia Sánchez Rocafort, Cyber Security Analyst, and Paul Jennings, Principal Analyst Consultant.

Appendices

DETECT Model Coverage

  • Compromise / Repeating Connections Over 4 Days
  • Anomalous Connection / Application Protocol on Uncommon Port
  • Anomalous Server Activity / Server Activity on New Non-Standard Port
  • Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  • Compromise / Quick and Regular Windows HTTP Beaconing
  • Compromise / High Volume of Connections with Beacon Score
  • Anomalous Connection / Multiple Failed Connections to Rare Endpoint
  • Compromise / Slow Beaconing Activity To External Rare
  • Compromise / HTTP Beaconing to Rare Destination
  • Compromise / Sustained SSL or HTTP Increase
  • Compromise / Large Number of Suspicious Failed Connections
  • Compromise / SSL or HTTP Beacon
  • Compromise / Possible Malware HTTP Comms
  • Compromise / Possible Tunnelling to Bin Services
  • Anomalous Connection / Low and Slow Exfiltration to IP
  • Device / New User Agent
  • Anomalous Connection / New User Agent to IP Without Hostname
  • Anomalous File / EXE from Rare External Location
  • Anomalous File / Numeric File Download
  • Anomalous Connection / Powershell to Rare External
  • Anomalous Server Activity / New Internet Facing System

List of Indicators of Compromise (IoCs)

18.234.7[.]23 - Destination IP - Likely C2 Server

103.13.28[.]40 - Destination IP - Likely C2 Server

44.202.135[.]229 - Destination IP - Likely C2 Server

References

[1] https://bishopfox.com/tools/sliver

[2] https://vk9-sec.com/how-to-set-up-use-c2-sliver/

[3] https://www.scmagazine.com/brief/sliver-c2-framework-gaining-traction-among-threat-actors

[4] https://github[.]com/BishopFox/sliver

[5] https://www.cybereason.com/blog/sliver-c2-leveraged-by-many-threat-actors

[6] https://securityaffairs.com/158393/malware/ivanti-connect-secure-vpn-deliver-krustyloader.html

[7] https://www.xenonstack.com/insights/out-of-band-application-security-testing

[8] https://www.virustotal.com/gui/ip-address/103.13.28.40/detection

[9] https://threatfox.abuse.ch/browse.php?search=ioc%3A107.174.78.227

[10] https://threatfox.abuse.ch/ioc/1074576/

[11] https://threatfox.abuse.ch/ioc/1093887/

[12] https://threatfox.abuse.ch/ioc/846889/

[13] https://threatfox.abuse.ch/ioc/1093889/

[14] https://github.com/projectdiscovery/nuclei/issues/3330

About the author
Natalia Sánchez Rocafort
Cyber Security Analyst