
Bluetooth

BLUFFS

AitM


BLUFFS Bluetooth Exploits Expose Devices to Adversary-in-the-Middle Attacks

Discover the BLUFFS Bluetooth attack: a critical threat exposing devices to Adversary-in-the-Middle risks.

05-Dec-2023
4 min read

Related Articles


Zero-Click

NAS

Zero Day

Explore how Synology's rapid response to zero-day vulnerabilities sets a new cyb...

The recent zero-day vulnerabilities discovered at Pwn2Own Ireland 2024 highlight Synology's swift handling of cybersecurity threats, offering a valuable case study in rapid response and the evolution of corporate responsibility in an era of increasingly sophisticated cyber threats.

#### From Vulnerability to Accountability

It's easy to see the Synology zero-day incident as just another security patch story. What's more thought-provoking is the broader narrative it reveals: vendors need to rethink their role in safeguarding users. Midnight Blue's discovery of the RISK:STATION vulnerability (CVE-2024-10443) speaks volumes about the potential of collaboration between security researchers and vendors. Synology's accelerated response, delivering patches for BeeStation and DiskStation within a remarkable 48 hours, demonstrates an urgency that goes beyond compliance: companies must now see themselves as active custodians of user safety.

The stakes here are stark. A critical zero-click vulnerability such as RISK:STATION is akin to a digital wildfire waiting to happen, especially when millions of network-attached storage (NAS) devices, used both at home and across enterprises, are exposed to the internet. Midnight Blue's prompt communication and Synology's swift release of patches turned what could have been a devastating incident into a teachable moment for any company grappling with vulnerabilities: timing and transparency can be the difference between chaos and control.

#### Beyond Patches: The Human Element in Cybersecurity

The technical details of Synology's patched vulnerabilities, while crucial, mask a deeper layer of significance: the human factor. Vulnerabilities, particularly those in ubiquitous devices like NAS systems, have very tangible implications for everyday users. These flaws were found not just in homes but within the infrastructure of police departments, critical-infrastructure contractors, and more, underscoring the very real human cost of security gaps.

Midnight Blue's subsequent media outreach emphasizing mitigation reflects an essential, yet often overlooked, dimension of cybersecurity: informing and empowering the users themselves. The story is not just about how swiftly a vendor can release a patch, but about how well users can be educated to act immediately. For many devices, patches are not applied automatically, so awareness, engagement, and proactive defense by device owners are essential. By treating the dissemination of patch information as a top priority, Synology and Midnight Blue have taken a step toward closing the cybersecurity-literacy gap between tech companies and their customers.

#### Toward a Secure Digital Future

The hurried patch releases by Synology and QNAP in the wake of Pwn2Own's discoveries set a new standard for timeliness, and they illustrate the changing relationship between security research and product safety. Vendors previously accustomed to taking up to 90 days to address reported vulnerabilities must now operate in an accelerated environment where rapid exploitation is a clear and present danger. The story of RISK:STATION is a stark reminder that no connected device is immune, and every link in the chain of connectivity needs vigilance.

The Internet of Things, of which NAS devices are a part, is only as strong as its weakest point, and often that point is the delay between vulnerability disclosure and patch application. Synology's response shows that shrinking this gap must be at the forefront of vendor priorities. The challenge lies not just in releasing patches, but in how swiftly and effectively they reach every vulnerable system.

As NAS devices increasingly serve as repositories for sensitive information, not just for enterprises but for individuals who trust them with family photos and personal data, stories like this should serve as a clarion call to both users and vendors. For vendors, it is about recognizing the gravity of their role in user protection. For users, it is a reminder to stay vigilant, apply patches promptly, and reconsider how they expose their devices online.

The Synology incident is, in many ways, a microcosm of what's to come as our digital ecosystems expand. Cybersecurity is as much about communication, education, and the shared responsibility of every player in the digital space as it is about discovery and patching. In a hyper-connected age, vigilance is no longer optional; it's imperative.
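That last piece of advice is easy to act on. As a minimal sketch (assuming Python, and assuming ports 5000/5001, the usual Synology DSM web UI defaults; substitute your own host and ports), a NAS owner can check from an outside network whether the device's management interface is reachable at all:

```python
# Minimal sketch: test from an outside network whether your own NAS
# management ports answer. Ports 5000/5001 are assumed here as the usual
# Synology DSM web UI defaults; adjust for your actual configuration.
import socket

HOST = "nas.example.com"  # hypothetical: your NAS's public hostname or IP
PORTS = [5000, 5001]

for port in PORTS:
    try:
        with socket.create_connection((HOST, port), timeout=3):
            print(f"{HOST}:{port} reachable from the internet; consider VPN-only access")
    except OSError:
        print(f"{HOST}:{port} not reachable (good)")
```

If either port answers, placing the device behind a VPN shrinks exactly the attack surface that campaigns against flaws like RISK:STATION depend on.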

02-Nov-2024
4 min read

AI

Azure

Microsoft

Azure AI vulnerability reveals flaws in content moderation guardrails, raising q...

In February 2024, Mindgard disclosed a striking vulnerability: Microsoft's Azure AI Content Safety Service, which many depend on to ensure responsible AI behavior, had two glaring weaknesses. These weaknesses allowed sophisticated attackers to slip past the well-advertised "guardrails," bypassing the mechanisms meant to keep harmful content at bay. At first glance this might seem like a run-of-the-mill vulnerability disclosure, but it underscores a far deeper challenge for AI security and our collective perception of safety.

### **Illusion of Impenetrability**

Microsoft's Azure AI Content Safety service, promoted as a safeguard for AI content, comprises AI Text Moderation and Prompt Shield. AI Text Moderation blocks harmful content such as hate speech, while Prompt Shield protects AI models against manipulative attacks such as jailbreaks and prompt injection. Together, these mechanisms are supposed to ensure that harmful, inappropriate, or manipulated content cannot make its way into AI-generated output.

Mindgard's discovery exposed a stark truth: while AI guardrails sound reliable, they often exist in a precarious balance between effectiveness and exploitation. The vulnerabilities revolved around "Character Injection" and "Adversarial ML Evasion" techniques, both designed to exploit blind spots in detection mechanisms. This changes our perception of what it means to build guardrails around AI: the assumed invincibility of moderation tools crumbles once creative adversaries identify loopholes that render those safety nets insufficient.

### **Attack Techniques: Exploiting Blind Spots**

The first evasion technique, Character Injection, leverages imperceptible character modifications that evade detection while retaining a message's meaning for human readers. Attackers used variations such as diacritical marks ('a' to 'á'), homoglyphs (the digit '0' in place of the letter 'O'), and zero-width spaces. These changes, trivial to the human eye, wreaked havoc on AI classifiers trained on natural text, achieving evasion success rates ranging from 83% to 100%.

Adversarial ML evasion took a different approach, modifying context at the word level with small changes that disoriented the AI system's understanding, undermining content moderation by up to 58%. These attacks highlight how machine learning models inherently struggle with ambiguities that humans resolve easily. Guardrails often operate on shallow semantics without robust contextual understanding, making them susceptible to surprisingly simple manipulations.

### **Undermining Trust and AI Safety Narratives**

What does this mean for individuals, corporations, and societies increasingly adopting AI into daily life? First and foremost, it is a powerful reminder that AI moderation is neither flawless nor immune to adversarial ingenuity. The incident undermines trust in AI systems' capacity to act autonomously and ethically without supervision, and it questions the scalability of relying purely on technical barriers for safety. Content moderation and ethical AI rest on the assumption of impenetrable defenses, an illusion shattered the moment attackers identify and exploit a gap.
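To make the Character Injection technique described above concrete, here is a minimal illustrative sketch in Python. It is a hypothetical reconstruction of the two perturbation styles, zero-width insertion and homoglyph/diacritic substitution, not Mindgard's actual tooling:

```python
# Illustrative sketch of Character Injection: perturb a string so it still
# reads the same to a human while the raw character sequence a classifier
# sees changes. Hypothetical example, not Mindgard's tooling.

ZERO_WIDTH_SPACE = "\u200b"

# A few letter -> lookalike substitutions (a diacritic and a digit homoglyph)
HOMOGLYPHS = {"a": "\u00e1", "o": "0"}

def inject_zero_width(text: str) -> str:
    """Insert a zero-width space between every character."""
    return ZERO_WIDTH_SPACE.join(text)

def substitute_homoglyphs(text: str) -> str:
    """Replace selected letters with visually similar characters."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

sample = "some blocked phrase"
print(inject_zero_width(sample))      # renders identically to the original
print(substitute_homoglyphs(sample))  # 's0me bl0cked phráse'
```

Either transform leaves the text legible to a person while shifting the tokens the model actually scores, which is precisely the blind spot the research exploited.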
The consequences of bypassing Azure's safeguards extend beyond inappropriate content slipping through. A system unable to identify these attacks lets sensitive and harmful content infiltrate the AI's decision-making process, generate malicious responses, or propagate misinformation. By evading Prompt Shield, for instance, adversaries could manipulate a model into breaking its ethical guidelines, with potentially dangerous real-world consequences ranging from influencing public discourse to committing fraud. Such incidents compel us to rethink what "safety" truly means in an AI context.

### **Guardrails as an Ongoing Process, Not a Product**

The vulnerabilities revealed by Mindgard illustrate a critical lesson: guardrails are not one-time fixes. They require an iterative, adaptive approach that keeps pace with the ever-evolving tactics of adversarial actors. This raises a provocative question: are AI safety guardrails sufficient as they stand today, or must we move beyond traditional reactive security measures toward proactive, resilient approaches that learn and evolve just as attackers do?

This calls for a paradigm shift in the AI safety narrative. Instead of presenting these solutions as definitive safety barriers, the focus should be on transparency, adaptability, and continual learning. Mitigation strategies such as embedding context-aware AI, deploying diverse moderation techniques, and conducting consistent red teaming need to be integrated to create a more robust and resilient AI security architecture.

### **A Shared Responsibility**

The onus of securing AI systems doesn't rest solely on service providers. Developers, users, and companies integrating AI models into their ecosystems must actively understand the limitations and risks inherent in the tools they use. Supplementary moderation tools, tighter integrations, and human oversight are crucial components of truly effective safety mechanisms.

It's easy to read vulnerability disclosures as flaws in someone else's product. In reality, AI vulnerabilities represent weaknesses in our collective ability to control the technology we create. The impact of AI's failures isn't limited to a single company or product; the consequences affect people, trust, and societal norms. As we forge ahead, the lessons from these vulnerabilities should drive a more nuanced understanding of AI's limitations. True AI safety isn't a feature to be checked off; it's an ongoing, collaborative pursuit to ensure these tools work for us, not against us.
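As one concrete instance of the "diverse moderation techniques" argued for above, a cheap first layer is to canonicalize input before it ever reaches a classifier. A minimal sketch follows, my illustration rather than Microsoft's fix, assuming Python's standard `unicodedata` module:

```python
# Sketch of an input-canonicalization layer against character injection.
# One defensive layer, not a complete moderation pipeline.
import unicodedata

# Zero-width characters commonly abused for evasion
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}

def canonicalize(text: str) -> str:
    # NFKC folds many compatibility lookalikes to canonical forms
    text = unicodedata.normalize("NFKC", text)
    # Decompose, then drop combining marks so 'á' becomes plain 'a'
    decomposed = unicodedata.normalize("NFD", text)
    text = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    # Remove zero-width characters outright
    return "".join(ch for ch in text if ch not in ZERO_WIDTH)

print(canonicalize("s\u200bo\u200bme bl\u200bocked phr\u00e1se"))
# -> 'some blocked phrase'
```

Note the limits: digit-for-letter homoglyphs such as '0' for 'o' survive normalization, which is exactly why layered, adaptive defenses rather than any single filter are the point of this section.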

02-Nov-2024
5 min read

Misconfig

Git

EMERALDWHALE breach exploits Git config misconfigurations, exposing 15,000 crede...

Imagine if your cloud credentials were stolen due to a single misconfigured [Git](https://www.secureblink.com/cyber-security-news/git-hub-exploited-to-spread-lumma-stealer-malware-via-fake-code-fixes) file. How would that affect your business? Despite strong passwords and multi-factor authentication, a single misconfigured Git file could hand attackers direct access to your systems. The EMERALDWHALE operation highlights a chilling reality: misconfigurations, often overlooked in favor of more sophisticated security measures, can serve as a silent entry point for cybercriminals. In this [Threatfeed](https://www.secureblink.com/cyber-security-news), we explore how EMERALDWHALE exploited these misconfigurations, stole more than 15,000 cloud service credentials, and wreaked havoc on a global scale. The campaign exposes a harsh truth: flashy tools and the latest tech gimmicks are useless if you leave basic vulnerabilities wide open. It isn't glamorous work, but it makes the difference between staying secure and becoming the next headline.

---

#### **Attack Chain - How EMERALDWHALE Exploited Vulnerable Configurations**

EMERALDWHALE began by targeting an often-overlooked weakness: exposed Git configuration files. [Git](https://github.com/arthaud/git-dumper), a distributed version control system, is the de facto standard for managing codebases, and developers often mistakenly expose their `.git` directories through web server misconfigurations.

![IMG-20241102-WA0009(1).jpg](https://sb-cms.s3.ap-south-1.amazonaws.com/IMG_20241102_WA_0009_1_8eba571f70.jpg)

***EMERALDWHALE Attack Chain***

EMERALDWHALE leveraged these exposures with remarkable simplicity, using open-source tools like [`httpx`](https://github.com/projectdiscovery/httpx) to scan for repositories with publicly accessible configuration files. Once a repository was identified, the credentials embedded within its files were harvested and used for further attacks. The operation followed a systematic attack chain:

1. **Target Discovery:** Long lists of IP address ranges were scanned with automated tools such as `httpx` to locate exposed Git repositories.
2. **Credential Extraction:** Tokens harvested from exposed configuration files were used to clone repositories and extract sensitive information such as usernames, passwords, and API keys.
3. **Further Exploitation:** Python and shell scripts validated the stolen credentials and called cloud service APIs to expand the attackers' access.

The attack required no sophisticated malware or exploits; it relied solely on automation, publicly available scanning tools, and, crucially, the negligence of those managing web servers. EMERALDWHALE's efficiency illustrates how small missteps in configuration can lead to massive security breaches.

EMERALDWHALE isn't the most sophisticated threat, but it capitalized on a fundamental weakness: human oversight. Its success stemmed not from novel vulnerabilities or advanced malware, but from misconfiguration and complacency. Security is not just about the best tools; it is about consistently applying best practices, educating teams, and ensuring every possible vulnerability is addressed. As we move forward, let's take the lessons from EMERALDWHALE and apply them to build a more resilient defense against the next unseen threat.

---

#### **Case Studies - Real-Life Exploits and Lessons Learned**

To better understand the impact of EMERALDWHALE, let's dive into two mini case studies that highlight the effectiveness of its tactics, after a brief defensive sketch of the discovery step below.
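The discovery step above relies on nothing more exotic than an HTTP request for `/.git/config`. Here is a minimal defensive sketch, assuming Python with the `requests` package and a hypothetical host list, that a team could run against its own domains to spot the exposure before someone else does:

```python
# Defensive sketch: probe your own hosts for a world-readable .git/config,
# the same exposure EMERALDWHALE's scanners hunted for at scale.
import requests

MY_HOSTS = ["https://example.com", "https://app.example.com"]  # hypothetical

for host in MY_HOSTS:
    url = f"{host}/.git/config"
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        print(f"{url}: request failed ({exc})")
        continue
    # A genuine exposure returns 200 with INI-style sections such as [core]
    if resp.status_code == 200 and "[core]" in resp.text:
        print(f"EXPOSED: {url}")
    else:
        print(f"ok: {url} (HTTP {resp.status_code})")
```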
##### **Case Study 1: The Misconfigured S3 Bucket**

While monitoring its cloud honeypot, the Sysdig Threat Research Team [discovered](https://sysdig.com/blog/emeraldwhale/) an exposed S3 bucket named `s3simplisitter`. It contained over a terabyte of data, including credentials harvested by EMERALDWHALE, logging information, stolen keys, and evidence of past campaigns. The bucket, left open by a previous victim, gave the attackers an ideal storage location for their stolen data. This case underscores the importance of correctly configuring cloud storage permissions.

**Lesson Learned:** Organizations must enforce stringent access policies for cloud storage services like Amazon S3, ensuring that buckets are not publicly accessible unless absolutely necessary. Regular auditing of these permissions is crucial.

##### **Case Study 2: Exploitation of Laravel .env Files**

In addition to targeting Git configurations, EMERALDWHALE focused on Laravel `.env` files, which often contain sensitive credentials, including API keys and database passwords. Laravel, a popular PHP framework, has a history of security issues linked to improper file handling. The attackers leveraged these files to harvest further credentials, broadening the scope of their campaign.

**Lesson Learned:** Sensitive files like `.env` should never be publicly reachable. Organizations must exclude environment files from public access by configuring their web servers and firewalls appropriately, as in the server sketch at the end of this piece.

![IMG-20241102-WA0006.jpg](https://sb-cms.s3.ap-south-1.amazonaws.com/IMG_20241102_WA_0006_8fc61c19f9.jpg)

***EMERALDWHALE Attack Path***

---

#### **Ethical Reflections and Practical Steps Forward**

EMERALDWHALE's success forces us to confront a critical issue in cybersecurity: the tension between convenience and security. Developers often assume that private repositories are inherently safe, leading to complacency in managing sensitive information. The underground market for credentials, such as the lists discovered in this operation, shows how even seemingly trivial missteps can have a global impact.

##### **Developer's Dilemma**

One of the most significant lessons from EMERALDWHALE is that developers can unwittingly feed the underground economy by neglecting simple security best practices. A misconfigured Git file may seem like a minor oversight, but the repercussions, including access to sensitive cloud services, are substantial. Developers must take personal responsibility for their code and ensure that secrets are never committed to version control systems.

**Key Questions to Reflect On:**

- How frequently do we review our repository settings to prevent public exposure?
- Are there policies in place to remove hardcoded secrets before committing code?
- Are we providing adequate security training to developers on handling sensitive data?

##### **Practical Steps for Organizations**

To prevent attacks like EMERALDWHALE, organizations need to adopt a proactive approach:

1. **Implement Robust Secret Management Solutions:** Store sensitive credentials in secret management systems such as [AWS Secrets Manager](https://aws.amazon.com/blogs/devops/how-to-migrate-your-aws-codecommit-repository-to-another-git-provider/) or HashiCorp Vault instead of embedding them in source code.
2. **Regular Auditing and Scanning:** Use vulnerability scanners and automated tools to regularly check both internal and external systems for misconfigurations. Tools such as Shodan or internal scanning solutions can help detect exposed `.git` directories and leaked cloud credentials.
3. **Secure Access Controls:** Apply the principle of least privilege (PoLP) to cloud services so that even compromised credentials cause minimal damage.
4. **Continuous Monitoring:** Use behavior analytics to monitor unusual activity on cloud services and repositories, and trigger alerts when credentials are accessed from unexpected locations.

---
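Finally, as a concrete version of the web-server hardening called out in Case Study 2, here is a minimal nginx sketch (assuming nginx fronts the application; other servers have equivalent directives) that refuses to serve dotfiles such as `.git` and `.env`:

```nginx
# Refuse to serve hidden files and directories such as .git/ and .env.
# The .well-known exception keeps ACME (Let's Encrypt) challenges working.
location ~ /\.(?!well-known/) {
    return 404;
}
```

Paired with secret management and the auditing steps above, a rule like this closes the exact path EMERALDWHALE walked in.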

02-Nov-2024
6 min read