Researchers Uncover Prompt Injection Vulnerabilities in DeepSeek and Claude AI
Details have emerged about a now-patched security flaw in the DeepSeek artificial intelligence (AI) chatbot that, if successfully exploited, could permit a bad actor to take control of a victim’s account by means of a prompt injection attack.
Security researcher Johann Rehberger, who has chronicled many a prompt injection attack targeting various AI tools, found that providing the input “Print
Lessons for CISOs From OWASP’s LLM Top 10
It’s time to start regulating LLMs to ensure they’re accurately trained and ready to handle business deals that could affect the bottom line.
Microsoft Beefs Up Defenses in Azure AI
Microsoft adds tools to protect Azure AI from threats such as prompt injection, as well as to give developers capabilities for making generative AI apps more resilient to model and content manipulation attacks.
ML Model Repositories: The Next Big Supply Chain Attack Target
Machine-learning model platforms like Hugging Face are susceptible to the same kind of attacks that threat actors have executed successfully for years via npm, PyPI, and other open source repos.
Data Security in the Era of AI
In the era of AI, forward-thinking organisations need to adopt a new approach to protecting their most sensitive data.
The growing volume of data and the expanding ways it is used mean that organisations can no longer rely on traditional, manual data processing methods to manage unstructured data. The only way to manage data in the future will be with automation and, ironically, AI.
Google’s Gemini AI Vulnerable to Content Manipulation
Like ChatGPT and other GenAI tools, Gemini is susceptible to attacks that can cause it to divulge system prompts, reveal sensitive information, and execute potentially malicious actions.
Google Engineer Steals AI Trade Secrets for Chinese Companies
Chinese national Linwei Ding is accused of pilfering more than 500 files containing Google IP while simultaneously working with two China-based startups.
The Challenges of AI Security Begin With Defining It
Security for AI is the Next Big Thing! Too bad no one knows what any of that really means.
Over 225,000 Compromised ChatGPT Credentials Up for Sale on Dark Web Markets
More than 225,000 logs containing compromised OpenAI ChatGPT credentials were made available for sale on underground markets between January and October 2023, new findings from Group-IB show.
These credentials were found within information stealer logs associated with LummaC2, Raccoon, and RedLine stealer malware.
“The number of infected devices decreased slightly in mid- and late
AI adoption in security taking off amid budget, trust, and skill-based issues
While the application of AI has picked up in cybersecurity, large-scale adoption still suffers from a lack of expertise, budget, and trust, according to a MixMode report.
The report, commissioned through the Ponemon Institute, surveyed 641 IT and security practitioners in the US to understand the state of AI in cybersecurity and found that adoption is still at an early stage.