
NAS

Zero-Click

Zero Day


Critical Zero-Days of Synology Exploited in Pwn2Own Hacking Competition

Explore how Synology's rapid response to zero-day vulnerabilities sets a new cybersecurity standard, highlighting proactive responsibility and user safety...

02-Nov-2024
4 min read

Related Articles


AI

Azure

Microsoft

Azure AI vulnerability reveals flaws in content moderation guardrails, raising q...

In February 2024, Mindgard disclosed a striking vulnerability: Microsoft’s Azure AI Content Safety Service, which many depend on to ensure responsible AI behavior, had two glaring weaknesses. These vulnerabilities allowed sophisticated attackers to slip through the well-advertised “guardrails,” bypassing established mechanisms to keep harmful content at bay. At first glance, this might seem like a run-of-the-mill vulnerability disclosure, but let’s dive into why this breach underscores a far deeper challenge for AI security and our collective perception of safety.

### **Illusion of Impenetrability**

Microsoft’s Azure AI Content Safety service, promoted as a safeguard for AI content, comprises AI Text Moderation and Prompt Shield. AI Text Moderation is responsible for blocking harmful content like hate speech, while Prompt Shield aims to protect AI models against manipulative attacks such as jailbreaks and prompt injection. These mechanisms are supposed to ensure that harmful, inappropriate, or manipulated content cannot make its way into the output generated by AI systems.

However, the discovery by Mindgard has exposed a stark truth: while AI guardrails sound reliable, they often exist in a precarious balance between effectiveness and exploitation. The vulnerabilities revolved around ‘Character Injection’ and ‘Adversarial ML Evasion’ techniques—both methods designed to exploit blind spots in detection mechanisms. This insight changes our perception of what it means to build guardrails around AI: the once-assumed invincibility of AI moderation tools begins to crumble when we realize how easily creative adversaries can identify loopholes, rendering those safety nets insufficient.

### **Attack Techniques: Exploiting Blind Spots**

The first evasion technique, Character Injection, leverages imperceptible character modifications that evade detection while retaining a message’s meaning to human readers. For instance, attackers used variations like diacritical marks (‘a’ to ‘á’), homoglyphs (using ‘O’ instead of ‘0’), and zero-width spaces. These changes, while trivial to the human eye, wreaked havoc on AI classifiers trained on natural text, achieving a staggering evasion success rate of 83% to 100%.

Adversarial ML evasion techniques took a different approach, modifying context at the word level and introducing small changes that disoriented the AI system’s understanding, undermining content moderation by up to 58%. These attacks highlight how machine learning models inherently struggle with ambiguities that humans resolve easily, revealing a critical limitation of guardrails: they often operate on shallow semantics without robust context understanding, making them susceptible to surprisingly simple manipulations.

### **Undermining Trust and AI Safety Narratives**

What does this mean for us as individuals, corporations, and societies increasingly adopting AI into our daily lives? First and foremost, it serves as a powerful reminder that AI moderation is neither flawless nor immune to adversarial ingenuity. This incident undermines trust in AI systems’ capability to act autonomously and ethically without supervision, and it questions the scalability of relying purely on technical barriers for safety. The narrative of reliable content moderation and ethical AI rests on maintaining impenetrable defenses—an illusion shattered when attackers identify and exploit vulnerabilities.

The consequences of bypassing Azure’s safeguards extend beyond inappropriate content slipping through. The system’s incapacity to identify these sophisticated attacks means sensitive and harmful content can infiltrate the AI’s decision-making process, generate malicious responses, or even propagate misinformation.
For instance, with Prompt Shield’s evasion, adversaries could manipulate a model into breaking ethical guidelines, potentially resulting in dangerous real-world consequences, from influencing public discourse to committing fraud. Such incidents compel us to rethink what true “safety” means in an AI context.

### **Guardrails as an Ongoing Process, Not a Product**

The vulnerabilities revealed by Mindgard illustrate a critical lesson—guardrails are not one-time fixes. They require an iterative, adaptive approach to respond to the ever-evolving tactics of adversarial actors. This raises a provocative point: are AI safety guardrails sufficient as they stand today? Or do we need to look beyond traditional reactive security measures, adopting more proactive and resilient approaches that learn and evolve just as the attackers do?

This calls for a paradigm shift in how we approach the AI safety narrative. Instead of presenting these solutions as definitive safety barriers, the focus should be on transparency, adaptability, and continual learning. Mitigation strategies, such as embedding context-aware AI, deploying diverse moderation techniques, and conducting consistent red teaming, need to be integrated to create a more robust and resilient AI security architecture.

### **A Shared Responsibility**

The onus of securing AI systems doesn’t rest solely on the service providers. Developers, users, and companies integrating AI models into their ecosystems must actively understand the limitations and risks inherent in the tools they use. Supplementary moderation tools, tighter integrations, and human oversight are crucial components for developing truly effective safety mechanisms.

It’s easy to read vulnerability disclosures and view them as flaws in someone else’s product. But the reality is that AI vulnerabilities represent weaknesses in our collective ability to control the technology we create.
The impact of AI’s failures isn’t limited to a single company or product—the consequences affect people, trust, and societal norms. As we forge ahead, the lessons from these vulnerabilities should drive us to embrace a more nuanced understanding of AI’s limitations. True AI safety isn’t just a feature to be checked off—it’s an ongoing, collaborative pursuit to ensure these tools work for us, not against us.
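To make the Character Injection technique described earlier concrete, here is a minimal sketch of the idea: the same string is perturbed with homoglyphs and zero-width spaces so it still reads normally to a human while no longer matching the byte patterns a naive text classifier expects. The character mappings below are our own illustrative choices, not Mindgard's actual payloads.

```python
# Illustrative sketch of "Character Injection" evasion. The mappings
# here are example choices for demonstration, not the disclosed payloads.

HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}  # Cyrillic look-alikes
ZERO_WIDTH_SPACE = "\u200b"

def inject_homoglyphs(text: str) -> str:
    """Swap selected Latin letters for visually identical Cyrillic ones."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

def inject_zero_width(text: str) -> str:
    """Insert a zero-width space between every pair of characters."""
    return ZERO_WIDTH_SPACE.join(text)

phrase = "example phrase"
assert inject_homoglyphs(phrase) != phrase             # the bytes differ...
assert len(inject_homoglyphs(phrase)) == len(phrase)   # ...but the rendered text does not
```

Classifiers that normalize input (Unicode normalization plus a confusables mapping, and stripping zero-width code points) before scoring are considerably more resistant to this class of evasion.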

02-Nov-2024
5 min read

Misconfig

Git

EMERALDWHALE breach exploits Git config misconfigurations, exposing 15,000 crede...

Imagine if your cloud credentials were stolen due to a single misconfigured [Git](https://www.secureblink.com/cyber-security-news/git-hub-exploited-to-spread-lumma-stealer-malware-via-fake-code-fixes) file—how would this affect your business? Despite strong passwords and multi-factor authentication, a single misconfigured Git file could have allowed attackers direct access to your systems. The EMERALDWHALE operation highlights a chilling reality: misconfigurations, often overlooked in favor of more sophisticated security measures, can serve as a silent entry point for cybercriminals.

In this [Threatfeed](https://www.secureblink.com/cyber-security-news), we explore how EMERALDWHALE exploited these misconfigurations, stole over 15,000 cloud service credentials, and wreaked havoc on a global scale. This campaign exposes a harsh truth: flashy tools and the latest tech are useless if you're leaving basic vulnerabilities wide open. It's not glamorous work, but it makes the difference between being secure and becoming the next headline.

---

#### **Attack Chain - How EMERALDWHALE Exploited Vulnerable Configurations**

EMERALDWHALE began by targeting an often-overlooked weakness: exposed Git configuration files. [Git](https://github.com/arthaud/git-dumper), a distributed version control system, is popular for managing codebases, and developers often mistakenly expose their `.git` directories through web server misconfigurations.

![IMG-20241102-WA0009(1).jpg](https://sb-cms.s3.ap-south-1.amazonaws.com/IMG_20241102_WA_0009_1_8eba571f70.jpg)

***EMERALDWHALE Attack Chain***

EMERALDWHALE leveraged these exposures with remarkable simplicity, using open-source tools like [`httpx`](https://github.com/projectdiscovery/httpx) to scan for repositories with publicly accessible configuration files. Once identified, the credentials embedded within these files were harvested and used for further attacks.
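From a defender's perspective, this discovery step can be reproduced with nothing but the standard library: probe whether a server returns a readable `.git/config`. This is a sketch under the assumption that a genuine Git config begins with an INI-style `[core]` section; the helper names are our own, not from the campaign's tooling.

```python
# Check whether a web server publicly serves its .git/config file,
# mirroring the discovery step of the attack chain. Standard library only.
import urllib.error
import urllib.request

def looks_like_git_config(body: str) -> bool:
    """A real Git config is an INI-style file that starts with a [core] section."""
    return body.lstrip().startswith("[core]")

def git_config_exposed(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if <base_url>/.git/config is publicly readable."""
    url = base_url.rstrip("/") + "/.git/config"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read(4096).decode("utf-8", errors="replace")
    except (urllib.error.URLError, ValueError):
        return False
    return looks_like_git_config(body)
```

Run against your own hosts (e.g. `git_config_exposed("https://example.com")` with a real hostname substituted), a `True` result means repository metadata, and possibly embedded credentials, are one HTTP request away.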
The operation followed a systematic attack chain:

1. **Target Discovery:** Long lists of IP address ranges were scanned using automated tools like `httpx` to locate exposed Git repositories.
2. **Credential Extraction:** The stolen tokens were then used to clone repositories, accessing sensitive information such as usernames, passwords, and API keys.
3. **Further Exploitation:** Using Python and shell scripts, the attackers validated the stolen credentials and attempted to leverage cloud service APIs to expand their access.

The attack did not require sophisticated malware or exploits—it relied solely on automation, publicly available scanning tools, and, crucially, the negligence of those managing their web servers. EMERALDWHALE’s efficiency illustrates how small missteps in configuration can lead to massive security breaches. EMERALDWHALE isn’t the most sophisticated threat, but it capitalized on a fundamental weakness: human oversight. Its success was due not to novel vulnerabilities or advanced malware, but to misconfigurations and complacency. Security is not just about the best tools; it is about consistently applying best practices, educating teams, and ensuring every possible vulnerability is addressed. As we move forward, let’s take the lessons from EMERALDWHALE and apply them to build a more resilient defense against the next unseen threat.

---

#### **Case Studies - Real-Life Exploits and Lessons Learned**

To better understand the impact of EMERALDWHALE, let’s dive into two mini case studies that highlight the effectiveness of their tactics.

##### **Case Study 1: The Misconfigured S3 Bucket**

While monitoring its cloud honeypot, the Sysdig Threat Research Team [discovered](https://sysdig.com/blog/emeraldwhale/) an exposed S3 bucket named `s3simplisitter`. It contained over a terabyte of data, including credentials harvested by EMERALDWHALE. The data consisted of logging information, stolen keys, and evidence of past campaigns.
This bucket, which had been left open by a previous victim, provided the attackers with an ideal storage location for their stolen data. This case study underscores the importance of correctly configuring cloud storage permissions to prevent such leaks.

**Lesson Learned:** Organizations must enforce stringent access policies for cloud storage services like Amazon S3, ensuring that buckets are not publicly accessible unless absolutely necessary. Regular auditing of these permissions is crucial.

##### **Case Study 2: Exploitation of Laravel .env Files**

In addition to targeting Git configurations, EMERALDWHALE also focused on Laravel `.env` files, which often contain sensitive credentials, including API keys and database passwords. Laravel, a popular PHP framework, has a history of security issues linked to improper file handling. Attackers leveraged these files to gain access to further credentials, broadening the scope of their campaign.

**Lesson Learned:** Sensitive files like `.env` should never be exposed to the public. Organizations must ensure that environment files are excluded from public access by configuring their web servers and firewalls appropriately.

![IMG-20241102-WA0006.jpg](https://sb-cms.s3.ap-south-1.amazonaws.com/IMG_20241102_WA_0006_8fc61c19f9.jpg)

***EMERALDWHALE Attack Path***

---

#### **Ethical Reflections and Practical Steps Forward**

EMERALDWHALE’s success forces us to confront a critical issue in cybersecurity: the challenge of balancing convenience and security. Developers often assume that private repositories are inherently safe, leading to complacency in managing sensitive information. The underground market for credentials, such as the lists discovered in this operation, underscores how even seemingly trivial missteps can have a global impact.
### **Developer's Dilemma**

One of the most significant lessons from EMERALDWHALE is that developers can unwittingly contribute to the underground economy by neglecting simple security best practices. Misconfigured Git files may seem like a minor oversight, but the repercussions—including access to sensitive cloud services—are substantial. Developers must take personal responsibility for their code and ensure that secrets are never committed to version control systems.

**Key Questions to Reflect On:**

- How frequently do we review our repository settings to prevent public exposure?
- Are there policies in place to remove hardcoded secrets before committing code?
- Are we providing adequate security training to developers on handling sensitive data?

### **Practical Steps for Organizations**

To prevent attacks like EMERALDWHALE, organizations need to adopt a proactive approach:

1. **Implement Robust Secret Management Solutions:** Store sensitive credentials in secret management systems such as [AWS Secrets Manager](https://aws.amazon.com/blogs/devops/how-to-migrate-your-aws-codecommit-repository-to-another-git-provider/) or HashiCorp Vault instead of embedding them in source code.
2. **Regular Auditing and Scanning:** Use vulnerability scanners and automated tools to regularly check both internal and external systems for misconfigurations. Tools such as Shodan or internal scanning solutions can help detect exposed `.git` directories or cloud credentials.
3. **Secure Access Controls:** Apply the principle of least privilege (PoLP) to cloud services so that, even if credentials are compromised, the damage remains minimal.
4. **Continuous Monitoring:** Use behavior analytics to monitor unusual activity associated with cloud services and repositories, and trigger alerts for investigation when credentials are accessed from unexpected locations.

---
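The first step above, keeping credentials out of source code, has a minimal form that requires no external service: read secrets from the process environment at runtime and fail loudly when they are absent. The variable name below is illustrative.

```python
# Read a credential from the environment instead of hardcoding it.
# "DB_PASSWORD" is an example variable name, not a required convention.
import os

def get_db_password() -> str:
    password = os.environ.get("DB_PASSWORD")
    if not password:
        # Fail fast rather than fall back to a hardcoded default that
        # would inevitably end up committed to the repository.
        raise RuntimeError("DB_PASSWORD is not set")
    return password
```

A dedicated secret manager adds rotation, auditing, and access control on top of this, but even the environment-variable pattern alone keeps credentials out of the files EMERALDWHALE harvested.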

02-Nov-2024
6 min read

Zero Day

QNAP

QNAP patches a critical zero-day vulnerability in NAS devices post-Pwn2Own 2024 ...

QNAP has addressed a critical zero-day vulnerability exploited by security researchers to hack a TS-464 NAS device during the Pwn2Own Ireland 2024 competition. The vulnerability, designated **CVE-2024-50388**, is rooted in an OS command injection weakness in the HBS 3 Hybrid Backup Sync software, QNAP's solution for disaster recovery and data backup.

---

#### Overview of the Vulnerability

The flaw, CVE-2024-50388, was identified in [HBS 3 Hybrid Backup Sync](https://www.qnap.com/en/software/hybrid-backup-sync) version 25.1.x. It poses a significant risk, as it could enable remote attackers to execute arbitrary commands on affected devices, potentially gaining unauthorized access.

> **QNAP Security Advisory:** "An OS command injection vulnerability has been reported to affect HBS 3 Hybrid Backup Sync. If exploited, the vulnerability could allow remote attackers to execute arbitrary commands," QNAP [said](https://www.qnap.com/en/security-advisory/qsa-24-41) in a Tuesday security advisory.

---

#### Update and Patch Information

QNAP has issued a patch in **HBS 3 Hybrid Backup Sync version 25.1.1.673** and later to address this critical vulnerability. To protect your NAS device from potential exploits, ensure your HBS 3 installation is up to date.

##### How to Update HBS 3:

1. **Log in to QTS or QuTS hero** as an administrator.
2. **Open the App Center** and search for "HBS 3 Hybrid Backup Sync."
3. If an update is available, click **Update**. (If the **Update** button is missing, your HBS 3 is already current.)

---

#### Exploit Demonstration at Pwn2Own

The vulnerability came to light during the **Pwn2Own Ireland 2024** competition, where security researchers Ha The Long and Ha Anh Hoang from Viettel Cyber Security successfully leveraged it to gain [administrative privileges](https://x.com/thezdi/status/1849372314212749751) on QNAP’s TS-464 NAS device.
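The advisory does not detail the injection point, but the vulnerable pattern behind any OS command injection is the same: untrusted input interpolated into a shell command string. This generic sketch (our illustration, not QNAP's code) contrasts that pattern with the quoted form:

```python
# Contrast a command-injectable shell string with a safely quoted one.
# The rsync invocation and paths are placeholders for demonstration.
import shlex

def build_backup_cmd_vulnerable(target: str) -> str:
    # DANGEROUS: a target like "host; rm -rf /" makes the shell run
    # the rm as a second command.
    return f"rsync -a /share/data {target}"

def build_backup_cmd_safe(target: str) -> str:
    # shlex.quote ensures the payload survives as one literal argument.
    return f"rsync -a /share/data {shlex.quote(target)}"

payload = "host; rm -rf /"
print(build_backup_cmd_vulnerable(payload))  # the injected command is part of the string
print(build_backup_cmd_safe(payload))        # the payload is wrapped in single quotes
```

Better still, passing an argument list to `subprocess.run` without `shell=True` avoids the shell entirely, which is the more idiomatic fix for this class of bug.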
Notably, **Team Viettel** secured victory in the Pwn2Own competition, held over four days and concluded on October 25, 2024. Competitors collectively disclosed over 70 [zero-day vulnerabilities](https://www.secureblink.com/cyber-security-news/80-000-devices-vulnerable-to-qnap-zero-day-vulnerability) across various devices and applications, earning awards from a prize pool exceeding $1 million.

---

#### Patch Timing and Industry Standard Response

QNAP’s response to this zero-day is considered swift, with the patch released five days after the exploit was demonstrated. Typically, vendors participating in Pwn2Own are granted a 90-day window to address reported vulnerabilities before the **Zero Day Initiative (ZDI)**, run by Trend Micro, publishes detailed information on the vulnerabilities disclosed during the contest.

---

### Historical Context: QNAP's Vulnerability Challenges

QNAP devices have been a frequent target for cyber threats over the years, particularly ransomware gangs, due to the sensitive personal and organizational data they store. Below are some notable historical vulnerabilities and attacks against QNAP devices:

1. **Backdoor Account Removal ([CVE-2021-28799](https://www.qnap.com/en/security-advisory/QSA-21-13)):** In 2021, QNAP removed a backdoor account in HBS 3 Hybrid Backup Sync. This vulnerability was exploited in conjunction with an [SQL injection vulnerability](https://www.qnap.com/de-de/security-advisory/qsa-21-11) ([CVE-2020-36195](https://nvd.nist.gov/vuln/detail/CVE-2020-36195)) in QNAP’s Multimedia Console and Media Streaming Add-On. Attackers used these flaws to deploy [Qlocker ransomware](https://www.secureblink.com/cyber-security-news/qlocker-resurrected-with-a-new-campaign-in-targeting-qnap-nas-devices-once-again), encrypting files on Internet-exposed NAS devices.
2. **eCh0raix Ransomware Attacks (2020-2021):** QNAP NAS devices faced extensive ransomware attacks leveraging known security flaws. In June 2020, QNAP warned users of [eCh0raix](https://www.secureblink.com/cyber-security-news/qnap-nas-devices-yet-again-victimized-due-to-rise-of-ech0raix) (QNAPCrypt) ransomware, which exploited vulnerabilities in the Photo Station app. By mid-2021, eCh0raix attackers reemerged, taking advantage of weak user passwords and unresolved vulnerabilities.
3. **AgeLocker Ransomware Attacks (September 2020):** AgeLocker ransomware targeted NAS devices running outdated Photo Station software versions, highlighting the risks faced by publicly exposed NAS devices that lack regular updates or security patches.

QNAP NAS devices continue to attract ransomware groups because of the personal and sensitive data stored on these systems; cybercriminals know victims may pay to regain access to their critical files. QNAP’s quick patching of the HBS 3 zero-day shows a proactive approach to securing its systems against emerging threats. As NAS devices remain a popular and lucrative target for threat actors, keeping them updated with the latest security patches is non-negotiable for preventing exploitation and minimizing data loss.

30-Oct-2024
4 min read