Inside the SOC

When Hallucinations Become Reality: An Exploration of AI Package Hallucination Attacks

30 Oct 2023
This blog discusses the plausible threat of malicious actors taking advantage of errors in generative AI tools, or AI “hallucinations”, to launch malicious package attacks, and how Darktrace’s suite of products might detect these attempts.

AI tools open doors for threat actors

On November 30, 2022, OpenAI, an artificial intelligence (AI) research and development company, launched ChatGPT, a free conversational language generation model. The launch was the culmination of development ongoing since 2018; it represented the latest innovation in the generative AI boom and made generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments had employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity of social engineering and phishing attacks enabled by generative AI tools. Darktrace observed a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less consideration, however, has been given to impacts stemming from errors intrinsic to generative AI tools. One such error is the AI hallucination.

What is an AI hallucination?

AI “hallucination” is a term that refers to instances in which the predictive element of a generative AI or large language model (LLM) produces an unexpected or factually incorrect response that does not align with its machine learning training data [4]. This differs from the intended behavior of an AI model, which should provide a response based on the data it was trained upon.

Why are AI hallucinations a problem?

Despite the term suggesting a rare phenomenon, hallucinations are far from unlikely: the AI models used in LLMs are merely predictive and focus on the most probable text or outcome, rather than factual accuracy.

Given the widespread use of generative AI tools in the workplace, employees are becoming significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import into their code for various uses. When developers ask LLMs for recommendations on specific pieces of code or how to solve a specific problem, the response will often point to a third-party package. It is possible that packages recommended by generative AI tools are themselves AI hallucinations: the packages may never have been published or, more accurately, may not have been published prior to the cut-off date of the model’s training data. If such hallucinations repeatedly suggest the same non-existent package, and a developer copies the code snippet wholesale, the developer’s organization may be left vulnerable to attack.

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. In 20% of the responses pertaining to Node.js, ChatGPT included at least one unpublished package, while the figure was around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers can identify candidate package names by asking generative AI tools generic coding questions and checking whether the suggested packages already exist. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucinated, unpublished package name, they can create a package with that name, include a malicious payload, and publish it. This is known as a malicious package.
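
To make the discovery step described above concrete, the sketch below is a minimal illustration (not Vulcan’s or Darktrace’s tooling) of how a package name suggested by an LLM can be checked against the public PyPI JSON API; names that return a 404 are unpublished and are therefore candidates for hallucinated packages that an attacker could register. The package names used here are hypothetical examples.

```python
# Minimal sketch: check whether LLM-suggested package names are actually
# published on PyPI. Names that are not published are "hallucination candidates"
# that a malicious actor could register. Endpoint is PyPI's public JSON API.
import json
import urllib.error
import urllib.request

PYPI_JSON_API = "https://pypi.org/pypi/{name}/json"

def is_published_on_pypi(package_name: str) -> bool:
    """Return True if the name resolves to a published PyPI project."""
    url = PYPI_JSON_API.format(name=package_name)
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            json.load(response)  # valid JSON metadata implies the project exists
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # name is unclaimed: a hallucination candidate
        raise  # other HTTP errors are inconclusive, surface them

if __name__ == "__main__":
    # Hypothetical names an LLM might suggest; "requests" is real, the other is not.
    for name in ["requests", "example-hallucinated-package-xyz"]:
        status = "published" if is_published_on_pypi(name) else "NOT published (hallucination candidate)"
        print(f"{name}: {status}")
```

A similar existence check can be run against other registries, such as the npm registry for Node.js packages.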

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios it is not necessary to manipulate the training data (data poisoning) to achieve the desired outcome for the malicious actor, thus a complex and time-consuming attack phase can easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments, and to autonomously perform inhibitive actions in response to such detections. These models trigger based on connections to endpoints associated with generative AI tools; as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with a breach of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

The goal a malicious package has been designed to fulfil will determine the next stages of the attack. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a wide variety of malware types.

As GitHub is a service commonly used by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operations Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies that define expectations around the use of such tools. It is simple, yet critical, for example, for employees to fact-check responses provided to them by generative AI tools. All packages recommended by generative AI should also be verified against non-generated data from external third-party or internal sources. It is also good practice to exercise caution when downloading packages with very few downloads, as this could indicate the package is untrustworthy or malicious.
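
As a rough illustration of that vetting advice, the sketch below (an assumption-laden example, not an established policy or a Darktrace feature) pulls a recommended package’s public metadata from the PyPI JSON API and raises simple warnings: the name is not published at all, it has very few releases, or its first release is very recent. The thresholds are arbitrary and would need tuning for any real workflow.

```python
# Hedged sketch: basic sanity checks on a package recommended by a generative AI
# tool, using PyPI's public JSON metadata. Thresholds are illustrative only.
import json
import urllib.error
import urllib.request
from datetime import datetime, timezone

def fetch_pypi_metadata(package_name: str) -> dict | None:
    """Fetch PyPI JSON metadata, or None if the package is not published."""
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{package_name}/json", timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None
        raise

def looks_suspicious(package_name: str, min_age_days: int = 90, min_releases: int = 3) -> list[str]:
    """Return human-readable warnings for the given package name."""
    warnings = []
    meta = fetch_pypi_metadata(package_name)
    if meta is None:
        return [f"'{package_name}' is not published on PyPI - possible hallucinated name"]

    releases = meta.get("releases", {})
    if len(releases) < min_releases:
        warnings.append(f"only {len(releases)} release(s) on record")

    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values() for f in files
    ]
    if upload_times:
        age_days = (datetime.now(timezone.utc) - min(upload_times)).days
        if age_days < min_age_days:
            warnings.append(f"first release is only {age_days} days old")
    return warnings

if __name__ == "__main__":
    for issue in looks_suspicious("requests") or ["no obvious red flags"]:
        print(issue)
```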

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that AI tools will be embraced further in the workplace in the coming years as AI technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, and Tiana Kelly, Deputy Team Lead, London & Cyber Analyst.

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/


More in this series


The State of AI in Cybersecurity: How AI will impact the cyber threat landscape in 2024

22 Apr 2024

About the AI Cybersecurity Report

We surveyed 1,800 CISOs, security leaders, administrators, and practitioners from industries around the globe. Our research was conducted to understand how the adoption of new AI-powered offensive and defensive cybersecurity technologies is being managed by organizations.

This blog continues the conversation from our previous post, “The State of AI in Cybersecurity: Unveiling Global Insights from 1,800 Security Practitioners”, which provided an overview of the entire report. This post focuses on one aspect of that report: the impact of AI on the cyber threat landscape.

To access the full report, click here.

Are organizations feeling the impact of AI-powered cyber threats?

Nearly three-quarters (74%) of respondents state that AI-powered threats are now a significant issue. Almost nine in ten (89%) agree that AI-powered threats will remain a major challenge into the foreseeable future, not just for the next one to two years.

However, only a slight majority (56%) thought AI-powered threats were a separate issue from traditional/non-AI-powered threats. This could be because there are few, if any, reliable methods to determine whether an attack is AI-powered.

Identifying exactly when and where AI is being applied may not ever be possible. However, it is possible for AI to affect every stage of the attack lifecycle. As such, defenders will likely need to focus on preparing for a world where threats are unique and are coming faster than ever before.

Figure: A hypothetical cyber attack augmented by AI at every stage.

Are security stakeholders concerned about AI’s impact on cyber threats and risks?

The results from our survey showed that security practitioners are concerned that AI will impact organizations in a variety of ways. Concern was spread almost equally across the board, from the volume and sophistication of malware to internal risks like leakage of proprietary information from employees using generative AI tools.

What this tells us is that defenders need to prepare for a greater volume of sophisticated attacks and balance this with a focus on cyber hygiene to manage internal risks.

One example of a growing internal risk is shadow AI. It takes little effort for employees to adopt publicly available, text-based generative AI systems to increase their productivity. This opens the door to “shadow AI”: the use of popular AI tools without organizational approval or oversight. Resulting security risks, such as inadvertent exposure of sensitive information or intellectual property, are an ever-growing concern.

Are organizations taking strides to reduce risks associated with adoption of AI in their application and computing environment?

71.2% of survey participants say their organization has taken steps specifically to reduce the risk of using AI within its application and computing environment.

16.3% of survey participants claim their organization has not taken these steps.

These findings are good news. Even as enterprises compete to get as much value from AI as they can, as quickly as possible, they’re tempering their eager embrace of new tools with sensible caution.

Still, responses varied across roles. Security analysts, operators, administrators, and incident responders are less likely to have said their organizations had taken AI risk mitigation steps than respondents in other roles. In fact, 79% of executives said steps had been taken, and only 54% of respondents in hands-on roles agreed. It seems that leaders believe their organizations are taking the needed steps, but practitioners are seeing a gap.

Do security professionals feel confident in their preparedness for the next generation of threats?

A majority of respondents (six out of every ten) believe their organizations are inadequately prepared to face the next generation of AI-powered threats.

The survey findings reveal contrasting perceptions of organizational preparedness for cybersecurity threats across different regions and job roles. Security administrators, due to their hands-on experience, express the highest level of skepticism, with 72% feeling their organizations are inadequately prepared. Notably, respondents in mid-sized organizations feel the least prepared, while those in the largest companies feel the most prepared.

Regionally, participants in Asia-Pacific are most likely to believe their organizations are unprepared, while those in Latin America feel the most prepared. This aligns with the observation that Asia-Pacific has been the most impacted region by cybersecurity threats in recent years, according to the IBM X-Force Threat Intelligence Index.

The optimism among Latin American respondents could be attributed to lower threat volumes experienced in the region, though this could change suddenly (1).

What are the biggest barriers to defending against AI-powered threats?

The top-ranked inhibitors center on knowledge and personnel. However, respondents pointed to issues almost equally across the board, including concerns around budget, tool integration, a lack of attention to AI-powered threats, and poor cyber hygiene.

The cybersecurity industry is facing a significant shortage of skilled professionals, with a global deficit of approximately 4 million experts (2). As organizations struggle to manage their security tools and alerts, the challenge intensifies with the increasing adoption of AI by attackers. This shift has altered the demands on security teams, requiring practitioners to possess broad and deep knowledge across rapidly evolving solution stacks.

Educating end users about AI-driven defenses becomes paramount as organizations grapple with the shortage of professionals proficient in managing AI-powered security tools. Operationalizing machine learning models for effectiveness and accuracy emerges as a crucial skill set in high demand. However, our survey highlights a concerning lack of understanding among cybersecurity professionals regarding AI-driven threats and the use of AI-driven countermeasures, indicating a gap in keeping pace with evolving attacker tactics.

The integration of security solutions remains a notable problem, hindering effective defense strategies. While budget constraints are not a primary inhibitor, organizations must prioritize addressing these challenges to bolster their cybersecurity posture. It's imperative for stakeholders to recognize the importance of investing in skilled professionals and integrated security solutions to mitigate emerging threats effectively.

To access the full report, click here.

References

1. IBM, X-Force Threat Intelligence Index 2024, Available at: https://www.ibm.com/downloads/cas/L0GKXDWJ

2. ISC2, Cybersecurity Workforce Study 2023, Available at: https://media.isc2.org/-/media/Project/ISC2/Main/Media/documents/research/ISC2_Cybersecurity_Workforce_Study_2023.pdf?rev=28b46de71ce24e6ab7705f6e3da8637e


Inside the SOC

Sliver C2: How Darktrace Provided a Sliver of Hope in the Face of an Emerging C2 Framework

17 Apr 2024

Offensive Security Tools

As organizations globally seek ways to bolster their digital defenses and safeguard their networks against ever-changing cyber threats, security teams are increasingly adopting offensive security tools to simulate cyber-attacks and assess the security posture of their networks. These legitimate tools, however, can sometimes be exploited by real threat actors and used as genuine attack vectors.

What is Sliver C2?

Sliver C2 is a legitimate open-source command-and-control (C2) framework that was released in 2020 by the security organization Bishop Fox. Sliver C2 was originally intended for security teams and penetration testers to perform security tests on their digital environments [1] [2] [5]. In recent years, however, the Sliver C2 framework has become a popular alternative to Cobalt Strike and Metasploit for many attackers and Advanced Persistent Threat (APT) groups who adopt this C2 framework for unsolicited and ill-intentioned activities.

The use of Sliver C2 has been observed in conjunction with various strains of Rust-based malware, such as KrustyLoader, to provide backdoors enabling lines of communication between attackers and their malicious C2 servers [6]. It is unsurprising, then, that it has also been leveraged to exploit zero-day vulnerabilities, including critical vulnerabilities in the Ivanti Connect Secure and Policy Secure services.

In early 2024, Darktrace observed the malicious use of Sliver C2 during an investigation into post-exploitation activity on customer networks affected by the Ivanti vulnerabilities. Fortunately for affected customers, Darktrace DETECT™ was able to recognize the suspicious network-based connectivity that emerged alongside Sliver C2 usage and promptly brought it to the attention of customer security teams for remediation.

How does Sliver C2 work?

Given its open-source nature, the Sliver C2 framework is extremely easy to access and download and is designed to support multiple operating systems (OS), including MacOS, Windows, and Linux [4].

Sliver C2 generates implants (aptly referred to as ‘slivers’) that operate on a client-server architecture [1]. An implant contains malicious code used to remotely control a targeted device [5]. Once a ‘sliver’ is deployed on a compromised device, a line of communication is established between the target device and the central C2 server. These connections can then be managed over Mutual TLS (mTLS), WireGuard, HTTP(S), or DNS [1] [4]. Sliver C2 has a wide range of features, including dynamic code generation, compile-time obfuscation, multiplayer mode, staged and stageless payloads, procedurally generated C2 over HTTP(S), and DNS canary blue team detection [4].

Why Do Attackers Use Sliver C2?

Amidst the multitude of reasons why malicious actors opt for Sliver C2 over its counterparts, one stands out: its relative obscurity. This lack of widespread recognition means that security teams may overlook the threat, failing to actively search for it within their networks [3] [5].

Although the presence of Sliver C2 activity could be representative of authorized and expected penetration testing behavior, it could also be indicative of a threat actor attempting to communicate with its malicious infrastructure, so it is crucial for organizations and their security teams to identify such activity at the earliest possible stage.

Darktrace’s Coverage of Sliver C2 Activity

Darktrace’s anomaly-based approach to threat detection means that it does not explicitly attempt to attribute or distinguish between specific C2 infrastructures. Despite this, Darktrace was able to connect Sliver C2 usage to phases of an ongoing attack chain related to the exploitation of zero-day vulnerabilities in Ivanti Connect Secure VPN appliances in January 2024.

Around the time that the zero-day Ivanti vulnerabilities were disclosed, Darktrace detected an internal server on one customer network deviating from its expected pattern of activity. The device was observed making regular connections to endpoints associated with Pulse Secure Cloud Licensing, indicating it was an Ivanti server. It was observed connecting to a string of anomalous hostnames, including ‘cmjk3d071amc01fu9e10ae5rt9jaatj6b.oast[.]live’ and ‘cmjft14b13vpn5vf9i90xdu6akt5k3pnx.oast[.]pro’, via HTTP using the user agent ‘curl/7.19.7 (i686-redhat-linux-gnu) libcurl/7.63.0 OpenSSL/1.0.2n zlib/1.2.7’.

Darktrace further identified that the URI requested during these connections was ‘/’ and the top-level domains (TLDs) of the endpoints in question were known Out-of-band Application Security Testing (OAST) server provider domains, namely ‘oast[.]live’ and ‘oast[.]pro’. OAST is a testing method that is used to verify the security posture of an application by testing it for vulnerabilities from outside of the network [7]. This activity triggered the DETECT model ‘Compromise / Possible Tunnelling to Bin Services’, which breaches when a device is observed sending DNS requests for, or connecting to, ‘request bin’ services. Malicious actors often abuse such services to tunnel data via DNS or HTTP requests. In this specific incident, only two connections were observed, and the total volume of data transferred was relatively low (2,302 bytes transferred externally). It is likely that the connections to OAST servers represented malicious actors testing whether target devices were vulnerable to the Ivanti exploits.
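
For illustration only, the sketch below shows the simple shape of this kind of check: flagging outbound connections whose destination hostname falls under a known OAST or request-bin provider domain. The log format, field names, and sample entries are assumptions modelled on this incident, and the provider list contains only the two domains named above; Darktrace’s actual models are behavioural rather than a static lookup like this.

```python
# Hypothetical illustration of a 'tunnelling to bin services' style check:
# flag outbound connections to known OAST / request-bin provider domains.
# Domain list and log format are assumptions for demonstration purposes.

# Only the two provider domains named in this investigation; a real list would be longer.
OAST_PROVIDER_DOMAINS = {"oast.live", "oast.pro"}

def is_oast_hostname(hostname: str) -> bool:
    """True if the hostname sits under a known OAST provider domain."""
    labels = hostname.lower().rstrip(".").split(".")
    return ".".join(labels[-2:]) in OAST_PROVIDER_DOMAINS

def flag_oast_connections(connections: list[dict]) -> list[dict]:
    """Return connection records destined for OAST provider domains."""
    return [c for c in connections if is_oast_hostname(c.get("hostname", ""))]

if __name__ == "__main__":
    # Hypothetical connection log entries, modelled on the activity described above.
    sample = [
        {"src": "10.0.5.12", "hostname": "cmjk3d071amc01fu9e10ae5rt9jaatj6b.oast.live", "user_agent": "curl/7.19.7"},
        {"src": "10.0.5.12", "hostname": "pulse-secure-licensing.example.com", "user_agent": "curl/7.19.7"},
    ]
    for hit in flag_oast_connections(sample):
        print(f"OAST connection from {hit['src']} to {hit['hostname']}")
```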

The device proceeded to make several SSL connections to the IP address 103.13.28[.]40, using the destination port 53, which is typically reserved for DNS requests. Darktrace recognized that this activity was unusual as the offending device had never previously been observed using port 53 for SSL connections.

Figure 1: Model Breach Event Log displaying the ‘Application Protocol on Uncommon Port’ DETECT model breaching in response to the unusual use of port 53.

Figure 2: Model Breach Event Log displaying details pertaining to the ‘Application Protocol on Uncommon Port’ DETECT model breach, including the 100% rarity of the port usage.

Further investigation into the suspicious IP address revealed that it had been flagged as malicious by multiple open-source intelligence (OSINT) vendors [8]. OSINT sources also identified that the JARM fingerprint of the service running on this IP and port (00000000000000000043d43d00043de2a97eabb398317329f027c66e4c1b01) was linked to the Sliver C2 framework and the mTLS protocol it is known to use [4] [5].
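
A minimal sketch of how an analyst might combine these two signals, assuming a simplified connection-record format: flag TLS on a port the device does not normally use for TLS, and compare the observed server JARM fingerprint against hashes linked to Sliver C2 in this investigation. This is illustrative triage logic, not Darktrace’s detection implementation.

```python
# Hedged sketch: review a TLS connection record for (a) TLS on a port unusual
# for this device and (b) a server JARM fingerprint linked to Sliver C2.
# Record format and the per-device port baseline are assumptions for illustration.

# JARM fingerprint reported for 103.13.28[.]40:53 in this investigation.
SLIVER_LINKED_JARMS = {
    "00000000000000000043d43d00043de2a97eabb398317329f027c66e4c1b01",
}

def review_tls_connection(record: dict, usual_tls_ports: set[int]) -> list[str]:
    """Return reasons this TLS connection record deserves analyst attention."""
    reasons = []
    if record["dst_port"] not in usual_tls_ports:
        reasons.append(f"TLS on uncommon port {record['dst_port']} for this device")
    if record.get("server_jarm") in SLIVER_LINKED_JARMS:
        reasons.append("server JARM matches a fingerprint linked to Sliver C2")
    return reasons

if __name__ == "__main__":
    # Hypothetical record modelled on the connection described above.
    record = {
        "src": "10.0.5.12",
        "dst": "103.13.28.40",
        "dst_port": 53,
        "server_jarm": "00000000000000000043d43d00043de2a97eabb398317329f027c66e4c1b01",
    }
    for reason in review_tls_connection(record, usual_tls_ports={443, 8443}):
        print(reason)
```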

An Additional Example of Darktrace’s Detection of Sliver C2

It was not just during the January 2024 exploitation of Ivanti services that Darktrace observed Sliver C2 usage across its customer base. In March 2023, for example, Darktrace detected devices on multiple customer networks making beaconing connections to malicious endpoints linked to Sliver C2 infrastructure, including 18.234.7[.]23 [10] [11] [12] [13].

Darktrace identified that the observed connections to this endpoint contained the unusual URI ‘/NIS-[REDACTED]’, which contained 125 characters, including numbers, lower and upper case letters, and special characters like “_”, “/”, and “-“, as well as various other URIs which suggested attempted data exfiltration:

  • ‘/upload/api.html?c=[REDACTED] &fp=[REDACTED]’
  • ‘/samples.html?mx=[REDACTED] &s=[REDACTED]’
  • ‘/actions/samples.html?l=[REDACTED] &tc=[REDACTED]’
  • ‘/api.html?gf=[REDACTED] &x=[REDACTED]’
  • ‘/samples.html?c=[REDACTED] &zo=[REDACTED]’

This anomalous external connectivity was carried out through multiple destination ports, including the key ports 443 and 8888.
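
As a rough illustration of why such URIs stand out, the sketch below applies a simple length-and-entropy heuristic to request URIs. The thresholds and sample URIs are hypothetical, and the real detections described here relied on learned per-device behaviour rather than a static heuristic like this.

```python
# Illustrative heuristic: flag request URIs that are unusually long and
# random-looking (high character entropy). Thresholds are arbitrary examples.
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits of entropy per character of the given string."""
    if not text:
        return 0.0
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def is_suspicious_uri(uri: str, min_length: int = 80, min_entropy: float = 3.5) -> bool:
    """Flag URIs that are both long and high-entropy."""
    return len(uri) >= min_length and shannon_entropy(uri) >= min_entropy

if __name__ == "__main__":
    # Hypothetical URIs modelled on (but not copied from) the redacted examples above.
    samples = [
        "/upload/api.html?c=aGVsbG8td29ybGQtZXhmaWw&fp=Zm9vYmFyYmF6cXV4MTIzNDU2Nzg5MGFiY2RlZg",
        "/static/css/site.css",
    ]
    for uri in samples:
        verdict = "suspicious" if is_suspicious_uri(uri) else "benign-looking"
        print(f"{verdict}: {uri}")
```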

Darktrace additionally observed devices on affected customer networks performing TLS beaconing to the IP address 44.202.135[.]229 with the JA3 hash 19e29534fd49dd27d09234e639c4057e. According to OSINT sources, this JA3 hash is associated with the Golang TLS cipher suites in which the Sliver framework is developed [14].

Conclusion

Despite its relative novelty in the threat landscape and its lesser-known status compared to other C2 frameworks, Darktrace has demonstrated its ability to effectively detect malicious use of Sliver C2 across numerous customer environments. This includes instances where attackers exploited vulnerabilities in the Ivanti Connect Secure and Policy Secure services.

While human security teams may lack awareness of this framework, and traditional rules- and signature-based security tools might not be fully equipped to detect Sliver C2 activity, Darktrace’s Self-Learning AI understands its customers’ networks, users, and devices. As such, Darktrace is adept at identifying subtle deviations in device behavior that could indicate network compromise, including connections to new or unusual external locations, regardless of whether attackers use established or novel C2 frameworks, providing organizations with a sliver of hope in an ever-evolving threat landscape.

Credit to Natalia Sánchez Rocafort, Cyber Security Analyst, Paul Jennings, Principal Analyst Consultant

Appendices

DETECT Model Coverage

  • Compromise / Repeating Connections Over 4 Days
  • Anomalous Connection / Application Protocol on Uncommon Port
  • Anomalous Server Activity / Server Activity on New Non-Standard Port
  • Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  • Compromise / Quick and Regular Windows HTTP Beaconing
  • Compromise / High Volume of Connections with Beacon Score
  • Anomalous Connection / Multiple Failed Connections to Rare Endpoint
  • Compromise / Slow Beaconing Activity To External Rare
  • Compromise / HTTP Beaconing to Rare Destination
  • Compromise / Sustained SSL or HTTP Increase
  • Compromise / Large Number of Suspicious Failed Connections
  • Compromise / SSL or HTTP Beacon
  • Compromise / Possible Malware HTTP Comms
  • Compromise / Possible Tunnelling to Bin Services
  • Anomalous Connection / Low and Slow Exfiltration to IP
  • Device / New User Agent
  • Anomalous Connection / New User Agent to IP Without Hostname
  • Anomalous File / EXE from Rare External Location
  • Anomalous File / Numeric File Download
  • Anomalous Connection / Powershell to Rare External
  • Anomalous Server Activity / New Internet Facing System

List of Indicators of Compromise (IoCs)

18.234.7[.]23 - Destination IP - Likely C2 Server

103.13.28[.]40 - Destination IP - Likely C2 Server

44.202.135[.]229 - Destination IP - Likely C2 Server

References

[1] https://bishopfox.com/tools/sliver

[2] https://vk9-sec.com/how-to-set-up-use-c2-sliver/

[3] https://www.scmagazine.com/brief/sliver-c2-framework-gaining-traction-among-threat-actors

[4] https://github[.]com/BishopFox/sliver

[5] https://www.cybereason.com/blog/sliver-c2-leveraged-by-many-threat-actors

[6] https://securityaffairs.com/158393/malware/ivanti-connect-secure-vpn-deliver-krustyloader.html

[7] https://www.xenonstack.com/insights/out-of-band-application-security-testing

[8] https://www.virustotal.com/gui/ip-address/103.13.28.40/detection

[9] https://threatfox.abuse.ch/browse.php?search=ioc%3A107.174.78.227

[10] https://threatfox.abuse.ch/ioc/1074576/

[11] https://threatfox.abuse.ch/ioc/1093887/

[12] https://threatfox.abuse.ch/ioc/846889/

[13] https://threatfox.abuse.ch/ioc/1093889/

[14] https://github.com/projectdiscovery/nuclei/issues/3330
