Artificial Intelligence
Generative AI Can Automate the Creation of Malware Variants
Ivan Milenkovic, Vice President – Cyber Risk Technology, EMEA at Qualys, says that while generative AI can fortify security, it equally arms malicious actors with new tools
How is generative AI being utilized to enhance cybersecurity measures today?
Today, generative AI is used to bolster cybersecurity defences in a multitude of ways. It automates mundane tasks, sifting through vast data logs to identify potential vulnerabilities and weed out false positives (Gartner, 2021). More impressively, generative AI can predict emerging threats by simulating attack scenarios, helping teams spot anomalies before they escalate (Mandiant, 2022).
Compared with older rule-based systems, these AI models adapt in real time, learning from both benign and malicious activity to create dynamic defence postures. A notable example is Darktrace’s “Antigena” product, which uses self-learning AI to detect abnormal network behaviours. In 2018, it reportedly thwarted an insider threat by flagging unusual data transfers in a UK-based financial services firm (Darktrace, 2018). The technology reduced the manual workload on analysts by automating front-line triage, freeing human experts to focus on higher-level investigations.
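To make the front-line triage idea concrete, here is a minimal sketch of rule-level false-positive suppression. It is illustrative only: the alert fields, the history format, and the thresholds are assumptions, not how Darktrace or any product cited here actually works.

```python
# Minimal alert-triage sketch: suppress likely false positives so analysts
# see only higher-confidence alerts. Fields and thresholds are illustrative.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Alert:
    rule_id: str          # detection rule that fired
    asset: str            # affected host
    severity: int         # 1 (low) to 5 (critical)

def triage(alerts: list[Alert], confirmed_fp_history: list[str]) -> list[Alert]:
    """Drop low-severity alerts from rules that historically produce false positives."""
    fp_counts = Counter(confirmed_fp_history)   # rule_id -> past FP count
    escalated = []
    for a in alerts:
        # Suppress low-severity alerts from chronically noisy rules.
        if a.severity <= 2 and fp_counts.get(a.rule_id, 0) >= 10:
            continue
        escalated.append(a)
    return escalated

alerts = [Alert("R-17", "web-01", 1), Alert("R-02", "db-03", 5)]
history = ["R-17"] * 12          # rule R-17 was a confirmed false positive 12 times
print([a.rule_id for a in triage(alerts, history)])   # -> ['R-02']
```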
What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
As much as generative AI can fortify security, it equally arms malicious actors with new tools. Sophisticated attackers are already deploying adversarial machine learning to bypass detection (Goodfellow et al., 2014) and using deepfakes to manipulate social engineering scams. One infamous example involved fraudsters using deepfake voice impersonation of a CEO to authorise a fraudulent wire transfer of approximately €220,000 from a UK-based energy firm in 2019 (Wall Street Journal, 2019).
This dark side underscores why cybersecurity leaders must remain vigilant. Generative AI can automate the creation of malware variants, obfuscate malicious code, or create entire networks of bot accounts capable of launching coordinated attacks (ENISA Threat Landscape, 2021). These challenges highlight the need for organisations to keep their AI defences on par with adversarial AI capabilities.
How can organizations leverage generative AI for proactive threat detection and response?
Given the growing dangers, organisations are increasingly using generative AI for proactive threat hunting. By training models on historical attack datasets, security systems can anticipate emerging vulnerabilities, formulate defensive strategies, and even recommend immediate containment measures (IBM X-Force Threat Intelligence Index, 2022). Generative AI excels at pattern recognition, which — when combined with behavioural analysis — helps security teams detect anomalies that conventional defences might miss.
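As a concrete illustration of behavioural pattern recognition, the sketch below applies unsupervised anomaly detection with scikit-learn's IsolationForest to invented user-behaviour features. It is a minimal example of the general technique, not the method used by any system cited above.

```python
# Behavioural anomaly detection sketch using scikit-learn's IsolationForest.
# Features (logins/hour, MB uploaded, distinct hosts touched) are invented
# for illustration; real deployments engineer far richer feature sets.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Baseline: 500 samples of "normal" user behaviour.
normal = rng.normal(loc=[5, 200, 3], scale=[2, 50, 1], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new observations: a bulk upload touching many hosts stands out.
new = np.array([[6, 210, 3],        # ordinary session
                [40, 9000, 25]])    # suspicious burst
print(model.predict(new))           # -> [ 1 -1]  (-1 flags the anomaly)
```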
Several Fortune 500 companies have begun deploying AI-driven “red team” exercises using synthetic data to simulate real attacks (Ponemon Institute, 2022). By synthesising new attack variants, these organisations can better train their detection algorithms and prepare incident response teams for novel threat scenarios.
What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
A critical ethical question arises when deploying powerful AI tools for cybersecurity: Where do we draw the line between data-driven intelligence and intrusive surveillance? Privacy concerns loom large, particularly when AI systems process personal information to identify potential insider threats (NIST SP 800-53, 2020). It is essential that organisations establish transparent governance structures, involving cross-functional teams from legal, compliance, and human resources.
These frameworks should clarify data usage policies, ensure algorithmic fairness, and reinforce accountability (European Commission, 2021; EU AI Act, 2024). Treating user data with respect whilst maintaining robust defences is not just a matter of compliance; it is a moral imperative that, if neglected, can damage trust irreparably.
What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Despite the allure of next-generation solutions, cybersecurity teams often face significant hurdles when incorporating generative AI. First, there is the matter of technical complexity: building models that accurately understand and adapt to evolving threats requires specialised expertise and substantial computational resources (Gartner, 2021). Second, legacy systems are often ill-equipped to handle the high data throughput AI demands, leading to integration bottlenecks (Mandiant, 2022). Third, there is the problem of inflated expectations: hype around AI can lead organisations to invest in poorly scoped projects, hampering returns and morale (Ponemon Institute, 2022).
To combat these issues, teams should conduct thorough proofs of concept and collaborate with experienced data scientists to align capabilities with organisational needs.
Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Several case studies highlight the growing success of generative AI in thwarting attacks. Darktrace reported detecting anomalous “beacon” traffic months before a known banking Trojan was publicly identified (Darktrace, 2019). Meanwhile, a large financial institution in Asia leveraged AI-driven user behaviour analytics (UBA) to pinpoint a suspicious spike in credential escalations, uncovering an elaborate insider threat that might otherwise have slipped under the radar (IBM, 2020). These incidents illustrate the transformative power of AI when integrated thoughtfully with security operations.
How do you see generative AI evolving in the cybersecurity domain over the next few years?
Over the coming years, generative AI is expected to mature into an even more intuitive and autonomous guardian. As data collection methods expand and computational power grows (Ponemon Institute, 2022), AI models will become more adept at detecting zero-day exploits and adapting, on the fly, to novel attack techniques. Widespread adoption of AI systems that interact seamlessly with security analysts will facilitate real-time recommendations, and “self-healing” networks capable of automated patching are likely to become mainstream (Gartner, 2021).
However, we should brace for an escalation in AI-enabled cyberattacks as well, from near-perfect deepfakes to far more convincing personalised attacks. This unfolding arms race underscores the importance of continuous innovation and collaboration between industry, academia, and government (ENISA Threat Landscape, 2021).
What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
Human-in-the-loop oversight remains indispensable. Even the most advanced AI systems can produce false positives or overlook subtleties requiring human judgement (European Commission, 2021). Skilled analysts, especially those with deep domain knowledge, are needed to validate AI-driven alerts, fine-tune learning models, and account for socio-political contexts.
As a result, AI should be viewed as an extension of human capabilities rather than a replacement. A balanced combination of machine efficiency and human intuition produces the most effective security outcomes (Mandiant, 2022). Lastly, emerging legislation (the EU AI Act, for example) may mandate human decision-making for certain privacy-critical functions.
How can smaller organizations with limited budgets incorporate generative AI for cybersecurity?
Budget constraints need not bar smaller organisations from leveraging generative AI. A pragmatic first step is to use cloud-based security tools with built-in AI features, avoiding the cost of on-premises infrastructure (Microsoft Azure Security Center, 2021). Partnerships with managed service providers can also help smaller entities develop tailored AI strategies.
Starting with low-complexity use cases, such as automated phishing detection, can yield quick wins and free up resources to invest in more advanced capabilities. By focusing on modular, scalable solutions, smaller organisations can gradually expand their AI footprint without jeopardising financial stability.
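A minimal sketch of what such a low-complexity quick win might look like: a heuristic phishing scorer that flags messages for review. Every phrase, weight, and threshold here is an assumption for illustration; a production system would use trained models and richer signals.

```python
# Low-complexity phishing triage sketch: score a message with a few cheap
# heuristics before escalating to humans. Weights and thresholds are invented.
import re

SUSPICIOUS_PHRASES = ("verify your account", "urgent action", "password expired")

def phishing_score(sender: str, subject: str, body: str) -> float:
    score = 0.0
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):   # link to a raw IP
        score += 0.4
    if any(p in body.lower() for p in SUSPICIOUS_PHRASES):   # pressure language
        score += 0.3
    if sender.split("@")[-1] not in ("example.com",):        # assumed allowlist
        score += 0.2
    if subject.isupper():                                    # shouty subject line
        score += 0.1
    return score

msg = ("it@examp1e.com", "RESET NOW", "Urgent action: http://192.168.0.9/login")
print(phishing_score(*msg))   # 0.4 + 0.3 + 0.2 + 0.1 = 1.0 -> quarantine for review
```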
What best practices would you recommend for implementing generative AI tools while minimizing risks?
To implement generative AI responsibly, organisations should follow established industry good practice; NIST SP 800-53 is a good example. The basic steps should not be news to cybersecurity professionals:
- Establish a clear governance framework that outlines AI deployment goals, data usage policies, and oversight responsibilities.
- Invest in robust training datasets to mitigate bias and ensure the AI can accurately detect real threats.
- Enforce rigorous testing and validation procedures, including adversarial testing to identify potential exploits.
- Maintain audit logs and version control for AI models, enabling swift rollback if necessary (a minimal sketch of this follows the list).
- Finally, foster a culture of transparency by openly communicating to stakeholders how and why AI is used within the security apparatus.
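To make the audit-log and rollback recommendation concrete, here is a minimal sketch that records a content hash for each deployed model version in a simple JSON registry. The file layout and registry format are assumptions, not a reference to any particular MLOps tool.

```python
# Model audit/rollback sketch: record a content hash for every deployed model
# version so a bad update can be traced and reverted. Paths and format assumed.
import datetime
import hashlib
import json
import pathlib

REGISTRY = pathlib.Path("model_registry.json")

def register(model_path: str, note: str) -> str:
    """Append a hashed, timestamped entry for the model artifact being deployed."""
    digest = hashlib.sha256(pathlib.Path(model_path).read_bytes()).hexdigest()
    log = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else []
    log.append({"sha256": digest, "path": model_path, "note": note,
                "deployed_at": datetime.datetime.now(datetime.timezone.utc).isoformat()})
    REGISTRY.write_text(json.dumps(log, indent=2))
    return digest

def previous_version() -> dict:
    """Return the penultimate registry entry, i.e. the rollback target."""
    log = json.loads(REGISTRY.read_text())
    return log[-2] if len(log) >= 2 else log[-1]
```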
Artificial Intelligence
CyberKnight Partners with Ridge Security for AI-Powered Security Validation
The automated penetration testing market was valued at roughly $3.1 billion in 2023 and is projected to grow rapidly, with forecasts estimating a compound annual growth rate (CAGR) of between 21% and 25%. By 2030, the sector is expected to reach approximately $9 billion to $10 billion. The broader penetration testing industry is also expanding, with projections indicating it will surpass $5.3 billion by 2027, according to MarketsandMarkets.
To support enterprises and government entities across the Middle East, Turkey and Africa (META) with identifying and validating vulnerabilities and reducing security gaps in real time, CyberKnight has partnered with Ridge Security, provider of the world's first AI-powered offensive security validation platform. Ridge Security's products incorporate advanced artificial intelligence to deliver security validation through automated penetration testing and breach and attack simulations.
RidgeBot uses advanced AI to autonomously perform multi-vector iterative attacks, conduct continuous penetration testing, and validate vulnerabilities with zero false positives. RidgeBot has been deployed by customers worldwide as a key element of their journey to evolve from traditional vulnerability management to Continuous Threat Exposure Management (CTEM).
“Ridge Security’s core strength lies in delivering holistic, AI-driven security validation that enables organizations to proactively manage risk and improve operational performance,” said Hom Bahmanyar, Chief Enablement Officer at Ridge Security. “We are delighted to partner with CyberKnight to leverage their network of strategic partners, deep-rooted customer relations, and security expertise to accelerate our expansion plans in the region.”
“Our partnership with Ridge Security is a timely and strategic step, as 69% of organizations are now adopting AI-driven security for threat detection and prevention,” added Wael Jaber, Chief Strategy Officer at CyberKnight. “By joining forces, we enhance our ability to deliver automated, intelligent security validation solutions, reaffirming our commitment to empowering customers with resilient, future-ready cybersecurity across the region.”
Artificial Intelligence
Cequence Intros Security Layer to Protect Agentic AI Interactions
Cequence Security has announced significant enhancements to its Unified API Protection (UAP) platform to deliver a comprehensive security solution for agentic AI development, usage, and connectivity. This enhancement empowers organizations to secure every AI agent interaction, regardless of the development framework. By implementing robust guardrails, the solution protects both enterprise-hosted AI applications and external AI APIs, preventing sensitive data exfiltration through business logic abuse and ensuring regulatory compliance.
There is no AI without APIs, and the rapid growth of agentic AI applications has amplified concerns about securing sensitive data during their interactions. These AI-driven exchanges can inadvertently expose internal systems, create significant vulnerabilities, and jeopardize valuable data assets. Recognizing this critical challenge, Cequence has expanded its UAP platform, introducing an enhanced security layer to govern interactions specifically between AI agents and backend services. This new layer of security enables customers to detect and prevent AI bots such as OpenAI's ChatGPT and Perplexity from harvesting organizational data.
Internal telemetry across Global 2000 deployments shows that the overwhelming majority of AI-related bot traffic, nearly 88%, originates from large language model infrastructure, with most requests obfuscated behind generic or unidentified user agents. Less than 4% of this traffic is transparently attributed to bots like GPTBot or Gemini. Over 97% of it comes from U.S.-based IP addresses, highlighting the concentration of risk in North American enterprises. Cequence’s ability to detect and govern this traffic in real time, despite the lack of clear identifiers, reinforces the platform’s unmatched readiness for securing agentic AI in the wild.
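For context on why self-identified traffic is the easy part, consider a naive user-agent check like the sketch below. String matching of this kind only catches the transparently declared minority (the roughly 4% noted above); the token list is illustrative and this is not Cequence's detection logic.

```python
# Naive user-agent attribution sketch. Per the figures above, less than 4%
# of AI-related bot traffic identifies itself this way, so string matching
# catches only the transparent minority; everything else needs behavioural
# analysis. The token list is an illustrative subset of published crawler names.
KNOWN_AI_BOT_TOKENS = ("GPTBot", "ChatGPT-User", "PerplexityBot", "ClaudeBot")

def classify_request(user_agent: str) -> str:
    if any(tok.lower() in user_agent.lower() for tok in KNOWN_AI_BOT_TOKENS):
        return "declared-ai-bot"
    if user_agent.strip() in ("", "-", "Mozilla/5.0"):
        return "suspect-generic-agent"   # candidate for behavioural analysis
    return "unclassified"

print(classify_request("Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)"))
# -> declared-ai-bot
```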
Key enhancements to Cequence’s UAP platform include:
- Block unauthorized AI data harvesting: Understanding that external AI often seeks to learn by broadly collecting data without obtaining permission, Cequence provides organizations with the critical capability to manage which AI, if any, can interact with their proprietary information.
- Detect and prevent sensitive data exposure: Empowers organizations to effectively detect and prevent sensitive data exposure across all forms of agentic AI. This includes safeguarding against external AI harvesting attempts and securing data within internal AI applications. The platform’s intelligent analysis automatically differentiates between legitimate data access during normal application usage and anomalous activities signaling sensitive data exfiltration, ensuring comprehensive protection against AI-related data loss (a rough illustration follows this list).
- Discover and manage shadow AI: Automatically discovers and classifies APIs from agentic AI tools like Microsoft Copilot and Salesforce Agentforce, presenting a unified view alongside customers’ internal and third-party APIs. This comprehensive visibility empowers organizations to easily manage these interactions and effectively detect and block sensitive data leaks, whether from external AI harvesting or internal AI usage.
- Seamless integration: Integrates easily into DevOps frameworks for discovering internal AI applications and generates OpenAPI specifications that detail API schemas and security mechanisms, including strong authentication and security policies. Cequence delivers powerful protection without relying on third-party tools, while seamlessly integrating with the customer’s existing cybersecurity ecosystem. This simplifies management and security enforcement.
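As a rough illustration of the sensitive-data detection idea referenced in the list above, the sketch below scans an API response body for a few sensitive-data patterns. The regexes are a tiny illustrative subset, not the platform's actual rule set, which would pair pattern matching with behavioural baselining.

```python
# Bare-bones sensitive-data scan over an API response body. The patterns
# below are a small illustrative subset, not Cequence's detection rules.
import re

PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_response(body: str) -> dict:
    """Return counts of sensitive-data matches per category."""
    return {name: len(rx.findall(body)) for name, rx in PATTERNS.items()}

body = '{"user": "a.b@example.com", "ssn": "123-45-6789"}'
hits = scan_response(body)
print(hits)   # {'email': 1, 'credit_card': 0, 'us_ssn': 1}
if any(hits.values()):
    print("flag: potential sensitive-data exposure")
```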
“Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously. We’ve taken immediate action to extend our market-leading API security and bot management capabilities,” said Ameya Talwalkar, CEO of Cequence. “Agentic AI introduces a new layer of complexity, where every agent behaves like a bidirectional API. That’s our wheelhouse. Our platform helps organizations embrace innovation at scale without sacrificing governance, compliance, or control.”
These extended capabilities will be generally available in June.
Artificial Intelligence
Fortinet Expands FortiAI Across its Security Fabric Platform
Fortinet has announced major upgrades to FortiAI, integrating advanced AI capabilities across its Security Fabric platform to combat evolving threats, automate security tasks, and protect AI systems from cyber risks. As cybercriminals increasingly weaponize AI to launch sophisticated attacks, organizations need smarter defenses. Fortinet—with 500+ AI patents and 15 years of AI innovation—now embeds FortiAI across its platform to:
- Stop AI-powered threats
- Automate security and network operations
- Secure AI tools used by businesses
“Fortinet’s AI advantage stems from the breadth and depth of our AI ecosystem—shaped by over a decade of AI innovation and reinforced by more patents than any other cybersecurity vendor,” said Michael Xie, Founder, President, and Chief Technology Officer at Fortinet. “By embedding FortiAI across the Fortinet Security Fabric platform, including new agentic AI capabilities, we’re empowering our customers to reduce the workload on their security and network analysts while improving the efficiency, speed, and accuracy of their security and networking operations. In parallel, we’ve added coverage across the Fabric ecosystem to enable customers to monitor and control the use of GenAI-enabled services within their organization.”
Key upgrades:
FortiAI-Assist – AI That Works for You
- Automatic Network Fixes: AI configures, validates, and troubleshoots network issues without human help.
- Smarter Security Alerts: Cuts through noise, prioritizing only critical threats.
- AI-Powered Threat Hunting: Scans for hidden risks and traces attack origins.
FortiAI-Protect – Defending Against AI Threats
- Tracks 6,500+ AI apps, blocking risky or unauthorized usage.
- Stops new malware with machine learning.
- Adapts to new attack methods in real time.
FortiAI-SecureAI – Safe AI Adoption
- Protects AI models, data, and cloud workloads.
- Prevents leaks from tools like ChatGPT.
- Enforces zero-trust access for AI systems.
FortiAI processes queries locally, ensuring sensitive data never leaves your network.