Artificial Intelligence – Security Review Magazine (https://securityreviewmag.com)
We bring you the latest from the IT and physical security industry in the Middle East and Africa region.

CyberKnight Partners with Ridge Security for AI-Powered Security Validation
Thu, 08 May 2025

The automated penetration testing market was valued at roughly $3.1 billion in 2023 and is projected to grow rapidly, with forecasts estimating a compound annual growth rate (CAGR) between 21% and 25%. By 2030, the sector is expected to reach approximately $9 to $10 billion. The broader penetration testing industry is also expanding, with projections indicating it will surpass $5.3 billion by 2027, according to MarketsandMarkets.

To support enterprises and government entities across the Middle East, Turkey and Africa (META) in identifying and validating vulnerabilities and reducing security gaps in real time, CyberKnight has partnered with Ridge Security, developer of the world’s first AI-powered offensive security validation platform. Ridge Security’s products incorporate advanced artificial intelligence to deliver security validation through automated penetration testing and breach and attack simulations.

RidgeBot uses advanced AI to autonomously perform multi-vector iterative attacks, conduct continuous penetration testing, and validate vulnerabilities with zero false positives. RidgeBot has been deployed by customers worldwide as a key element of their journey to evolve from traditional vulnerability management to Continuous Threat Exposure Management (CTEM).

“Ridge Security’s core strength lies in delivering holistic, AI-driven security validation that enables organizations to proactively manage risk and improve operational performance,” said Hom Bahmanyar, Chief Enablement Officer at Ridge Security. “We are delighted to partner with CyberKnight to leverage their network of strategic partners, deep-rooted customer relations, and security expertise to accelerate our expansion plans in the region.”

“Our partnership with Ridge Security is a timely and strategic step, as 69% of organizations are now adopting AI-driven security for threat detection and prevention,” added Wael Jaber, Chief Strategy Officer at CyberKnight. “By joining forces, we enhance our ability to deliver automated, intelligent security validation solutions, reaffirming our commitment to empowering customers with resilient, future-ready cybersecurity across the region.”

Cequence Intros Security Layer to Protect Agentic AI Interactions
Tue, 29 Apr 2025

Cequence Security has announced significant enhancements to its Unified API Protection (UAP) platform to deliver a comprehensive security solution for agentic AI development, usage, and connectivity. This enhancement empowers organizations to secure every AI agent interaction, regardless of the development framework. By implementing robust guardrails, the solution protects both enterprise-hosted AI applications and external AI APIs, preventing sensitive data exfiltration through business logic abuse and ensuring regulatory compliance.

There is no AI without APIs, and the rapid growth of agentic AI applications has amplified concerns about securing sensitive data during their interactions. These AI-driven exchanges can inadvertently expose internal systems, create significant vulnerabilities, and jeopardize valuable data assets. Recognizing this critical challenge, Cequence has expanded its UAP platform, introducing an enhanced security layer specifically to govern interactions between AI agents and backend services. This new layer of security enables customers to detect AI bots such as OpenAI’s ChatGPT and Perplexity and prevent them from harvesting organizational data.

Internal telemetry across Global 2000 deployments shows that the overwhelming majority of AI-related bot traffic, nearly 88%, originates from large language model infrastructure, with most requests obfuscated behind generic or unidentified user agents. Less than 4% of this traffic is transparently attributed to bots like GPTBot or Gemini. Over 97% of it comes from U.S.-based IP addresses, highlighting the concentration of risk in North American enterprises. Cequence’s ability to detect and govern this traffic in real time, despite the lack of clear identifiers, reinforces the platform’s unmatched readiness for securing agentic AI in the wild.
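To make the attribution problem concrete, the short Python sketch below classifies inbound requests by declared crawler identity and falls back to a crude rate heuristic for the undeclared majority. The bot names, allow-list, and threshold are illustrative assumptions for this article, not Cequence’s detection logic, which relies on far richer behavioural signals.

    # Illustrative sketch only: label requests from declared AI crawlers and
    # send undeclared high-volume traffic for review. Real platforms combine
    # many behavioural signals, since most AI traffic hides behind generic agents.
    DECLARED_AI_BOTS = ("gptbot", "perplexitybot", "claudebot", "google-extended")
    ALLOWED_AI_BOTS = {"gptbot"}  # hypothetical per-organization allow-list

    def classify_request(user_agent: str, requests_per_minute: int) -> str:
        ua = (user_agent or "").lower()
        for bot in DECLARED_AI_BOTS:
            if bot in ua:
                return "allow" if bot in ALLOWED_AI_BOTS else "block"
        # Undeclared traffic: a crude rate check stands in for behavioural analysis.
        return "review" if requests_per_minute > 120 else "allow"

    print(classify_request("Mozilla/5.0 (compatible; GPTBot/1.0)", 12))         # allow
    print(classify_request("Mozilla/5.0 (compatible; PerplexityBot/1.0)", 12))  # block
    print(classify_request("Mozilla/5.0", 500))                                 # review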

Key enhancements to Cequence’s UAP platform include:

  • Block unauthorized AI data harvesting: Understanding that external AI often seeks to learn by broadly collecting data without obtaining permission, Cequence provides organizations with the critical capability to manage which AI, if any, can interact with their proprietary information.
  • Detect and prevent sensitive data exposure: Empowers organizations to effectively detect and prevent sensitive data exposure across all forms of agentic AI. This includes safeguarding against external AI harvesting attempts and securing data within internal AI applications. The platform’s intelligent analysis automatically differentiates between legitimate data access during normal application usage and anomalous activities signaling sensitive data exfiltration, ensuring comprehensive protection against AI-related data loss.
  • Discover and manage shadow AI: Automatically discovers and classifies APIs from agentic AI tools like Microsoft Copilot and Salesforce Agentforce, presenting a unified view alongside customers’ internal and third-party APIs. This comprehensive visibility empowers organizations to easily manage these interactions and effectively detect and block sensitive data leaks, whether from external AI harvesting or internal AI usage.
  • Seamless integration: Integrates easily into DevOps frameworks for discovering internal AI applications and generates OpenAPI specifications that detail API schemas and security mechanisms, including strong authentication and security policies. Cequence delivers powerful protection without relying on third-party tools, while seamlessly integrating with the customer’s existing cybersecurity ecosystem. This simplifies management and security enforcement.

“Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously. We’ve taken immediate action to extend our market-leading API security and bot management capabilities,” said Ameya Talwalkar, CEO of Cequence. “Agentic AI introduces a new layer of complexity, where every agent behaves like a bidirectional API. That’s our wheelhouse. Our platform helps organizations embrace innovation at scale without sacrificing governance, compliance, or control.”

These extended capabilities will be generally available in June.

Fortinet Expands FortiAI Across its Security Fabric Platform
Wed, 23 Apr 2025

Fortinet has announced major upgrades to FortiAI, integrating advanced AI capabilities across its Security Fabric platform to combat evolving threats, automate security tasks, and protect AI systems from cyber risks. As cybercriminals increasingly weaponize AI to launch sophisticated attacks, organizations need smarter defenses. Fortinet—with 500+ AI patents and 15 years of AI innovation—now embeds FortiAI across its platform to:

  • Stop AI-powered threats
  • Automate security and network operations
  • Secure AI tools used by businesses

“Fortinet’s AI advantage stems from the breadth and depth of our AI ecosystem—shaped by over a decade of AI innovation and reinforced by more patents than any other cybersecurity vendor,” said Michael Xie, Founder, President, and Chief Technology Officer at Fortinet. “By embedding FortiAI across the Fortinet Security Fabric platform, including new agentic AI capabilities, we’re empowering our customers to reduce the workload on their security and network analysts while improving the efficiency, speed, and accuracy of their security and networking operations. In parallel, we’ve added coverage across the Fabric ecosystem to enable customers to monitor and control the use of GenAI-enabled services within their organization.”

Key upgrades:
FortiAI-Assist – AI That Works for You

  1. Automatic Network Fixes: AI configures, validates, and troubleshoots network issues without human help.
  2. Smarter Security Alerts: Cuts through noise, prioritizing only critical threats.
  3. AI-Powered Threat Hunting: Scans for hidden risks and traces attack origins.

FortiAI-Protect – Defending Against AI Threats

  1. Tracks 6,500+ AI apps, blocking risky or unauthorized usage.
  2. Stops new malware with machine learning.
  3. Adapts to new attack methods in real time.

FortiAI-SecureAI – Safe AI Adoption

  1. Protects AI models, data, and cloud workloads.
  2. Prevents leaks from tools like ChatGPT.
  3. Enforces zero-trust access for AI systems.

FortiAI processes queries locally, ensuring sensitive data never leaves your network.

SandboxAQ Platform Tackles AI Agent “Non-Human Identity” Threats
Wed, 23 Apr 2025

SandboxAQ has announced the general availability of AQtive Guard, a platform designed to secure Non-Human Identities (NHIs) and cryptographic assets. This critical security solution arrives as organizations worldwide face increasingly sophisticated AI-driven threats capable of autonomously infiltrating networks, bypassing traditional defenses, and exploiting vulnerabilities at machine speed.

Modern enterprises are experiencing an unprecedented surge in machine-to-machine communications, with billions of AI agents now operating across corporate networks. These digital entities – ranging from legitimate automation tools to potential attack vectors – depend on cryptographic keys, digital certificates, and machine identities that frequently go unmanaged. This oversight creates massive security gaps that malicious actors can exploit, leading to potential data breaches, compliance violations, and operational disruptions.

“There will be more than one billion AI agents with significant autonomous power in the next few years,” stated Jack Hidary, CEO of SandboxAQ. “Enterprises are giving AI agents a vastly increased range of capabilities to impact customers and real-world assets. This creates a dangerous attack surface for adversaries. AQtive Guard’s Discover and Protect modules address this urgent issue.”

AQtive Guard addresses these challenges through its integrated Discover and Protect modules. The Discover component maintains continuous, real-time visibility into all NHIs and cryptographic assets including keys, certificates, and algorithms – a fundamental requirement for maintaining regulatory compliance. The Protect module then automates critical security workflows, enforcing essential policies like automated credential rotation and certificate renewal to proactively mitigate risks before they can be exploited.
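As a rough illustration of the kind of policy such a module enforces, the standard-library Python sketch below checks how long a server certificate has left and flags it for renewal inside a configurable window. The host list and 30-day threshold are assumptions for the example; the article does not describe AQtive Guard’s internal workflows.

    import socket, ssl
    from datetime import datetime, timezone

    RENEWAL_WINDOW_DAYS = 30  # assumed policy threshold for this example

    def days_until_expiry(host: str, port: int = 443) -> float:
        """Fetch the peer certificate and return the days until it expires."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                not_after = tls.getpeercert()["notAfter"]  # e.g. 'Jun  1 12:00:00 2026 GMT'
        expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
        return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

    for host in ["example.com"]:  # hypothetical certificate inventory
        remaining = days_until_expiry(host)
        if remaining < RENEWAL_WINDOW_DAYS:
            print(f"{host}: certificate expires in {remaining:.0f} days - trigger renewal")
        else:
            print(f"{host}: certificate healthy, {remaining:.0f} days remaining")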

At the core of AQtive Guard’s capabilities are SandboxAQ’s industry-leading Large Quantitative Models (LQMs), which provide organizations with unmatched visibility and control over their cryptographic infrastructure. This advanced technology enables enterprises to successfully navigate evolving security standards, including the latest NIST requirements, while maintaining robust protection against emerging threats.

“As organizations accelerate AI adoption and the use of agents and machine-to-machine communication across all business domains and functions, maintaining a real-time, accurate inventory of NHIs and cryptographic assets is an essential cybersecurity practice. Being able to automatically remediate vulnerabilities and policy violations identified is crucial to decrease time to mitigation and prevent potential breaches within the first day of use of our software,” said Marc Manzano, General Manager of Cybersecurity at SandboxAQ.

SandboxAQ has significantly strengthened AQtive Guard’s capabilities through deep technical integrations with two cybersecurity industry leaders. The platform now features robust integration with CrowdStrike’s Falcon® platform, enabling direct ingestion of endpoint data for real-time vulnerability detection and immediate one-click remediation. This seamless connection allows security teams to identify and neutralize threats with unprecedented speed.

Additionally, AQtive Guard now offers full interoperability with Palo Alto Networks’ security solutions. By analyzing and incorporating firewall log data, the platform delivers enhanced network visibility, improved threat detection, and stronger compliance with enterprise security policies across hybrid environments.

AQtive Guard delivers a comprehensive, AI-powered approach to managing NHIs and cryptographic assets through four key functional areas. The platform’s advanced vulnerability detection system aggregates data from multiple sources including major cloud providers like AWS and Google Cloud, maintaining a continuously updated inventory of all cryptographic assets.

The solution’s AI-driven risk analysis engine leverages SandboxAQ’s proprietary Cyber LQMs to accurately prioritize threats while dramatically reducing false positives. This capability is enhanced by an integrated GenAI assistant that helps security teams navigate complex compliance requirements and implement appropriate remediation strategies.

For operational efficiency, AQtive Guard automates the entire lifecycle management of cryptographic assets, including issuance, rotation, and revocation processes. This automation significantly reduces manual errors while eliminating the risks associated with stale or compromised credentials. The platform also provides robust compliance support with pre-configured rulesets for major regulatory standards, customizable query capabilities, and comprehensive reporting features. These tools help organizations accelerate their transition to new NIST standards while maintaining continuous compliance with evolving requirements.

Available now as a fully managed, cloud-native solution, AQtive Guard is designed for rapid deployment and immediate impact. Enterprises can register for priority access to begin early adoption and conduct comprehensive risk assessments of their cryptographic infrastructure.

How AI is Reinventing Cybersecurity for the Automotive Industry
Wed, 23 Apr 2025

Written by Alain Penel, VP of Middle East, CIS & Turkey at Fortinet

Autonomous and electric vehicle uptake is rising across the Middle East, driven by national agendas and a growing push for sustainable mobility. With this rapid growth, however, comes an urgent need to address cybersecurity at every stage of the automotive value chain.

Artificial Intelligence (AI) is at the heart of this shift, transforming not only how vehicles operate, but also how cyber threats are identified, mitigated, and prevented. From predictive maintenance to driver behavior analytics, AI is streamlining processes and unlocking efficiencies. But it is also redefining the security perimeter for automotive organizations.

Forces Influencing AI Adoption in Automotive
As the industry evolves, three forces are shaping the current landscape: stricter regulations, rapid AI integration, and a fundamental change in communication infrastructure. Regulations such as the Cyber Resilience Act and NIS2, for example, are introducing more granular compliance mandates, especially for sectors handling critical infrastructure.

Meanwhile, AI is accelerating business and individual learning processes. At the network level, the need for faster communication and bandwidth adaptability is giving rise to next-generation connectivity frameworks that can support AI-native systems. This evolution in infrastructure and intelligence also promotes a significant shift in cybersecurity from reactive to preventive.

AI is increasingly being used to analyze threat landscapes and internal vulnerabilities in real time. This shift enables organizations to prepare for attacks before they happen, leveraging behavioral analytics and high-speed correlation to stay ahead of potential breaches. Hardware acceleration and software development, guided by AI, are now setting the pace for how cybersecurity evolves across the industry.

The Impact on Cybersecurity
Unsurprisingly, automotive enterprises are becoming high-value targets for cybercriminals. Three core factors contribute to this trend: the financial opportunity of holding connected services hostage, the complexity of digital supply chains, and the vast amount of sensitive data being generated. With every vehicle connected to cloud-based services, a single breach can have wide-ranging brand, operational, and financial repercussions. Moreover, the ecosystem of third-party vendors involved in producing autonomous and electric vehicles significantly expands the attack surface.

The use of digital twins and advanced manufacturing technologies further intensifies the volume of valuable data. This information, ranging from user behavior patterns to proprietary designs, is not only attractive to attackers but also becomes a tool for launching future attacks or selling on the dark web.

AI Transformations in the Automotive Supply Chain
AI is also transforming the automotive supply chain. Predictive maintenance, for example, as opposed to the scheduled or reactive vehicle maintenance that has been the norm until now, enables companies to forecast part failures, optimize distribution, and reduce warehousing costs. AI can analyse and synthesise so many data streams that failure forecasting becomes far more accurate. Not only does this mean more reliable vehicles for the consumer, but it also means that each element of demand can be optimised.
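A heavily simplified version of that idea, using an assumed vibration sensor and thresholds chosen purely for illustration, might look like the Python sketch below: flag a part for inspection when its recent readings drift well above their historical baseline.

    from statistics import mean, stdev

    def needs_inspection(readings, recent_window=20, threshold_sigma=3.0):
        """Flag a component when recent sensor readings drift far above the baseline."""
        baseline, recent = readings[:-recent_window], readings[-recent_window:]
        mu, sigma = mean(baseline), stdev(baseline)
        return mean(recent) > mu + threshold_sigma * sigma

    # Hypothetical vibration readings: a stable history, then a rising trend.
    history = [1.0 + 0.02 * (i % 5) for i in range(200)]
    degrading = history + [1.0 + 0.03 * i for i in range(20)]
    print(needs_inspection(degrading))  # True -> schedule maintenance before failure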

Driver behavior analysis and in-cabin monitoring systems powered by AI are also enhancing safety, particularly for long-haul truck drivers exposed to risks such as fatigue and theft. These AI-powered innovations are already helping companies reduce operational costs while improving customer satisfaction.

Strengthening security across the supply chain means embedding real-time monitoring, mapping data flows, and building a fast, coordinated response to incidents. The introduction of cyber resilience principles encouraged by regulatory bodies requires organizations to maintain robust and sustainable response mechanisms. AI can help with this.

AI’s Role in Automotive Cybersecurity
The future of AI in automotive cybersecurity lies in its ecosystem-wide integration. Multimodal AI models that can process text, images, and design data are already in use. But the next phase involves combining internal and external intelligence to strengthen risk postures. Synthetic data created specifically to train internal models without exposing real user data is becoming an important asset in speeding up AI development while preserving privacy.

The impact of AI can be summarized as transformative, dual-edged, and adaptable. It is enhancing cybersecurity readiness, being weaponized by attackers, and empowering businesses to evolve quickly in a changing environment. As the Middle East embraces connected mobility and smart transportation, the conversation must move beyond adopting AI to implementing it securely and intelligently. The road to the future may be autonomous, but its success will hinge on cybersecurity built for adaptability, speed, and scale.

Positive Technologies to Highlight AI Cyber Threats and Defense at GISEC 2025
Tue, 22 Apr 2025

Positive Technologies is joining GISEC Global 2025, one of the largest cybersecurity and technology exhibitions in the Middle East, on May 6–8 in Dubai. At the Positive Technologies booth (D 90, Hall 7), in-house experts will share their expertise in application security, industrial cybersecurity, and detection of cyberattacks in network traffic using PT Network Attack Discovery. The Positive Technologies team will also host workshops in the Hack-O-Sphere zone.

“Multiple countries in the Middle East have made significant strides in cybersecurity. However, organizations in the region remain an attractive target for cybercriminals, as our research shows,” says Ilya Leonov, Regional Director for MENA, Positive Technologies. “At GISEC Global 2025, we will focus on application security (AppSec) and operational technology security (OT security). Our team will share best practices for using PT Network Attack Discovery, which detects cybercriminal activity in the network traffic and also aids in incident investigation and proactive threat hunting. We’ll also be talking about a range of our other products and solutions to help you get real value from your cybersecurity investments. Additionally, our experts will demonstrate sophisticated attack methods and explain how to defend against them.”

Visitors to the Positive Technologies booth will have the opportunity to observe offensive security specialists simulating DMA attacks, using various devices to bypass defenses and gain access to valuable information. An accessible and user-friendly tool for chip security analysis will also be presented to GISEC participants. This tool, which simulates fault injection attacks, will be demonstrated in action, and the Positive Technologies team will deliver a workshop for cybersecurity professionals.

Positive Technologies will also be organizing four activities in the Hack-O-Sphere zone. At Fixathon, guests will have the opportunity to test their skills in fixing code vulnerabilities and improve their secure development skills. The second activity is dedicated to steganography: guests will be encouraged to find words hidden in the works of renowned artists and get acquainted with this fascinating method of information transmission. At the workshop on hacking devices, participants will learn how attackers exploit physical access vulnerabilities and how to defend against such attacks. At the soldering workshop, participants will have the opportunity to craft a useful mini-gadget.

Generative AI is Transforming Cybersecurity Across Detection, Defense, and Governance
Mon, 21 Apr 2025

Radu Balanescu, Associate Director for Cybersecurity at BCG, says the governance domain benefits from GenAI’s ability to streamline compliance and awareness.

How is Generative AI being utilised to enhance cybersecurity measures today?
Generative AI is transforming cybersecurity across three critical domains—detection, defense, and governance. In detection activities, GenAI is proficient at analysing vast datasets to identify threats through automated threat intelligence analysis, rapid malware detection, and identifying deepfake content. Tools like Google Gemini can process malware samples in seconds rather than hours, dramatically improving response times and enabling more proactive security postures.

In defense activities, GenAI augments protective capabilities by evaluating language patterns and contextual signals to prevent sophisticated phishing attempts. Organisations are also deploying GenAI to create convincing decoy environments with synthetic data, deliberately misleading attackers while protecting genuine assets. When breaches occur, AI-powered playbooks are invaluable assets for security teams deciding on optimal remediation processes, reducing recovery time while ensuring consistent response protocols.

The governance domain benefits from GenAI’s ability to streamline compliance and awareness. AI tools continuously monitor regulatory changes and emerging threats, automatically suggesting policy updates to maintain compliance. Perhaps most promising is GenAI’s ability to create personalised, realistic training scenarios that adapt to individual employee behavior patterns, dramatically improving retention and effectiveness compared to generic security training approaches. These applications represent just the beginning of GenAI’s potential to redefine our approach to digital protection.

What potential risks does Generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
Generative AI is a double-edged sword, increasing cybersecurity risks as much as it helps protect against attacks. The broadening landscape for cybersecurity risk encompasses two critical aspects: GenAI empowers attackers with tools to accelerate and simplify their malicious actions, while simultaneously introducing new security risks when organisations deploy it themselves.

GenAI per se does not generate new types of attacks. However, it simplifies exploit generation, improves the quality of known attacks, and further reduces the cost of creating malicious tools. It even enables less sophisticated actors to conduct complex attacks that were once reserved for only the most skilled malicious actors. AI-generated phishing attacks now create more sophisticated, human-like emails that increase the likelihood of successful social engineering. Cybercriminals can use GenAI to generate new types of malware, bypassing traditional security systems.

Deepfake-based impersonation using AI-generated audio and video can convincingly mimic executives or government officials, leading to fraud or misinformation campaigns. Particularly concerning is AI-enabled reconnaissance, where threat actors use AI to scan systems for vulnerabilities more efficiently, making cyberattacks more targeted and effective. Deploying GenAI in an organisation introduces new types of security risks, mostly around data exfiltration or manipulation. Hallucination risks in AI security models may produce false positives or misleading security insights, leading to incorrect threat assessments. Data poisoning attacks allow adversaries to manipulate training data to introduce biases or vulnerabilities into AI security models.

Supply chain attacks on AI models present significant risks, as compromised models can provide attackers with unauthorised access. Additionally, sophisticated attackers can manipulate AI-based decision-making, forcing systems to misclassify threats or grant unwarranted access. These emerging risks highlight the need for comprehensive security frameworks specifically designed for the GenAI era, balancing innovation with heightened safeguards against increasingly sophisticated threats.

How can organisations leverage Generative AI for proactive threat detection and response?
Organisations have multiple ways to leverage GenAI in their defense activities, fundamentally transforming security from reactive to proactive postures. AI-powered anomaly detection serves as a foundation, using real-time analysis to identify behavioral deviations that could indicate potential cyber threats before they manifest as full attacks. This works in conjunction with automated threat-hunting capabilities, where GenAI assists security analysts by identifying suspicious patterns and suggesting possible cyberattack vectors that might otherwise remain hidden.
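A toy version of that first building block, assuming scikit-learn is available and using invented session features, could look like the sketch below: train an unsupervised model on normal behaviour and let it flag sessions that deviate from it.

    # Illustrative only: each row is one user session,
    # [logins_per_hour, megabytes_downloaded, distinct_hosts_contacted].
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    normal_sessions = rng.normal(loc=[5, 50, 3], scale=[1, 10, 1], size=(500, 3))
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

    print(model.predict([[5, 52, 3]]))     # [ 1] -> consistent with the baseline
    print(model.predict([[5, 5000, 40]]))  # [-1] -> flagged for investigation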

The predictive capabilities of GenAI enable cybersecurity modeling that analyses historical threat data to forecast and prevent future attacks before they occur. When incidents arise, automated incident triage becomes critical—AI can categorise and prioritise security events, ensuring that the most severe threats receive immediate attention while optimising resource allocation.
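The triage step can be pictured as a scoring function over alert attributes; the fields and weights in the sketch below are invented for illustration rather than drawn from any particular product.

    # Illustrative alert triage: rank alerts so the most severe reach analysts first.
    SEVERITY = {"critical": 4, "high": 3, "medium": 2, "low": 1}

    def triage_score(alert: dict) -> float:
        """Combine severity, asset criticality and model confidence into one priority."""
        return (SEVERITY[alert["severity"]]
                * alert["asset_criticality"]   # 1 (lab machine) .. 5 (crown-jewel system)
                * alert["confidence"])         # 0.0 .. 1.0 from the detection model

    alerts = [
        {"id": "A1", "severity": "low",      "asset_criticality": 2, "confidence": 0.9},
        {"id": "A2", "severity": "critical", "asset_criticality": 5, "confidence": 0.7},
        {"id": "A3", "severity": "high",     "asset_criticality": 3, "confidence": 0.4},
    ]
    for alert in sorted(alerts, key=triage_score, reverse=True):
        print(alert["id"], round(triage_score(alert), 1))  # A2 14.0, then A3 3.6, then A1 1.8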

Security operations benefit from contextualised threat intelligence dashboards where AI summarises and visualises threats in real time, providing actionable insights to cybersecurity teams. On the frontlines, phishing prevention and email security systems leverage AI to filter and block increasingly sophisticated attacks by detecting language anomalies and metadata inconsistencies. Additionally, AI-powered malware reverse engineering can analyse new strains and generate automated responses to mitigate them rapidly. These capabilities collectively enable organisations to stay ahead of evolving threats, shifting the advantage away from attackers and toward defenders in the ongoing cybersecurity battle.

How do you see Generative AI evolving in the cybersecurity domain over the next few years?
Looking into the future, we see GenAI gaining more prevalence in enhancing defense capabilities across multiple dimensions of the cybersecurity landscape. Organisations will increasingly deploy stronger AI-powered cyber defenses that automate complex security tasks, dramatically improving efficiency while reducing the need for manual intervention in routine security operations.

The cybersecurity battlefield will transform into sophisticated AI vs. AI cyber battles, where defensive AI systems continuously adapt to counter AI-driven attacks. This evolution will necessitate continuous AI model adaptation and training to stay ahead of increasingly sophisticated threats. Identity management will see significant advancements through AI-based verification systems, with AI-driven biometric and behavioral authentication strengthening defenses against impersonation and credential theft.

Zero Trust Architecture implementations will be revolutionised as AI plays a larger role in enforcing these policies, continuously verifying users and devices before granting access to sensitive resources. This dynamic verification approach will significantly reduce the attack surface available to potential intruders. Simultaneously, governments and international organisations are defining and implementing stricter policies to prevent AI misuse in cyberattacks, striving for ethical AI usage in security contexts.

Organisations will leverage AI to continuously monitor compliance with these evolving cybersecurity regulations, automating what has traditionally been a resource-intensive process. Perhaps most forward-looking, as quantum computing progresses and brings new threats to conventional encryption methods, AI models will be adapted to counteract quantum-powered cyber threats, ensuring security resilience even as computational paradigms shift dramatically.

Gen AI is Redefining Cybersecurity’s Future
Tue, 08 Apr 2025

Subhalakshmi Ganapathy, Chief IT Security Evangelist at ManageEngine, says that by simulating threats, auto-remediating incidents, and decoding attacker tactics, AI empowers organisations to stay ahead of adversaries.

How is generative AI being utilised to enhance cybersecurity measures today?
Generative AI is redefining cybersecurity’s future, transforming defenses from reactive to predictive. By simulating threats, auto-remediating incidents, and decoding attacker tactics, it empowers organisations to stay ahead of adversaries. Yet, its true power lies in harmonising human expertise with machine speed—augmenting analysts to focus on strategic risks, not routine alerts. As AI-generated attacks surge, the same technology becomes a double-edged sword, demanding ethical frameworks to prevent misuse.

Forward-thinking leaders must prioritise adaptive AI ecosystems that learn in real time while safeguarding trust. The next frontier isn’t just about stopping threats but fostering resilience through innovation, collaboration, and responsible AI governance. Cybersecurity’s evolution hinges on this balance.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
Generative AI, while a powerful defender, also introduces sophisticated threats. Even as it empowers defenses, it supercharges attacks: AI-crafted deepfakes erode trust, hyper-personalised phishing bypasses filters, and self-mutating malware evades detection. Adversaries leverage AI to automate exploitation, democratising sophisticated attacks for low-skilled threat actors.

Worse, AI models themselves become targets—poisoned training data or adversarial inputs can corrupt defensive systems. This arms race erodes the asymmetric advantage defenders once relied on. Leaders must confront the paradox: the tools fortifying security also weaponise threats. Mitigation hinges on AI-augmented threat hunting, adversarial testing of models, and global collaboration to govern AI’s ethical use. Proactive resilience, not just reaction, is the new imperative.

How can organisations leverage generative AI for proactive threat detection and response?
Generative AI enables organisations to shift from reactive to anticipatory cybersecurity by synthesising intelligence and automating precision. By training models on historical and synthetic threat data, AI identifies subtle attack patterns—like zero-day exploits or insider risks—before they escalate. Real-time behavioral analysis flags anomalies in user activity or network traffic, while AI-driven simulations stress-test defenses against evolving adversarial tactics (e.g., AI-generated phishing lures).

Automated playbooks powered by generative AI tools instantly quarantine threats and patch vulnerabilities, slashing response times. Crucially, generative AI augments human teams—curating actionable insights from noise—enabling analysts to prioritise high-impact risks. The key lies in ethical, explainable AI frameworks that balance autonomy with oversight, fostering trust in machine-augmented defense.

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
Ethical AI in cybersecurity isn’t just about security; it’s about building a future where security and rights coexist. The ethical frontier of generative AI in cybersecurity demands rigorous introspection, particularly regarding data provenance. The AI’s very efficacy hinges on the data it consumes, a double-edged sword. What type of datasets are ethically sound, and what would constitute a privacy minefield?

We must move beyond mere technical accuracy and embrace ethical precision. Training AI on sensitive, personally identifiable information, or data reflecting historical biases, risks perpetuating and amplifying societal inequalities within security systems. This demands a paradigm shift: prioritising anonymised, representative datasets, and rigorously auditing training data for potential biases.

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Integrating generative AI into cybersecurity workflows presents a formidable challenge: balancing innovation with operational integrity. The crux of the issue lies in the accuracy of AI-driven remediation. Inaccurate detection breeds false positives, overwhelming SOCs and eroding analyst trust. More critically, flawed remediation suggestions risk catastrophic configuration changes, impacting employee experience and potentially crippling critical infrastructure.

Imagine AI incorrectly disabling a crucial user account or altering vital system configurations. This necessitates a paradigm shift: AI as an augmentation, not an automation, tool. Rigorous testing, human-in-the-loop protocols, and granular control are paramount. We must avoid the allure of fully automated remediation and instead focus on AI as a powerful analytical tool that empowers human decision-making. The future of AI in cybersecurity hinges on cautious integration, prioritising accuracy and control to prevent unintended consequences.
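One way to encode that augmentation-not-automation principle is to put every high-impact, AI-suggested change behind an explicit approval step. The sketch below is a minimal illustration of the pattern; the action names and thresholds are assumptions, not any specific product’s workflow.

    # Minimal human-in-the-loop gate: the AI proposes, an analyst disposes.
    REVERSIBLE_ACTIONS = {"quarantine_file", "block_ip"}          # low blast radius
    HIGH_IMPACT_ACTIONS = {"disable_account", "change_firewall"}  # needs human sign-off

    def execute_remediation(action, target, approved_by=None):
        if action not in REVERSIBLE_ACTIONS | HIGH_IMPACT_ACTIONS:
            return f"REJECTED: unknown action '{action}'"
        if action in HIGH_IMPACT_ACTIONS and approved_by is None:
            return f"PENDING: '{action}' on {target} queued for analyst approval"
        suffix = f" (approved by {approved_by})" if approved_by else ""
        return f"EXECUTED: {action} on {target}{suffix}"

    print(execute_remediation("quarantine_file", "host-42"))                     # runs at once
    print(execute_remediation("disable_account", "jdoe"))                        # held for review
    print(execute_remediation("disable_account", "jdoe", approved_by="soc-l2"))  # runs after sign-off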

Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
While the vision of AI autonomously repelling cyberattacks captivates, the reality remains a journey, not a destination. We’ve achieved a pivotal advancement: AI’s prowess in threat detection. However, the full spectrum of AI-driven mitigation remains largely theoretical, confined to controlled environments and phased deployments. Enterprises are cautiously navigating this landscape, recognising the potential but wary of the unknown.

We stand at the cusp of a paradigm shift, where AI’s predictive capabilities could preemptively neutralise threats. Yet, true realisation requires meticulous testing and controlled integration. The focus must shift from isolated detection to a holistic, AI-powered security ecosystem. The future holds immense promise, but responsible innovation demands a measured approach, acknowledging that the AI-driven cybersecurity revolution is still in its nascent stages.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
The trajectory of generative AI in cybersecurity points towards a significant evolution, primarily aimed at alleviating the chronic resource shortage plaguing Security Operations Centers (SOCs). We’re witnessing a shift from reactive to proactive security, where AI’s extensive training and Retrieval Augmented Generation (RAG) capabilities will dramatically reduce incident investigation times. By seamlessly integrating data from disparate ecosystems, AI will provide enriched, contextualised insights, empowering analysts to make faster, more informed decisions.

This evolution will not be about replacing human analysts but about augmenting their capabilities. AI will become a powerful force multiplier, automating mundane tasks and freeing up human experts to focus on complex, strategic threats. We’ll see AI evolving into a sophisticated threat intelligence platform, capable of predicting and preempting attacks rather than merely reacting to them. The future of cybersecurity will be defined by a collaborative partnership between human intelligence and AI’s analytical prowess.

What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
In the dynamic realm of cybersecurity, generative AI serves as a powerful ally, but its efficacy is fundamentally dependent on human oversight. AI excels at processing vast datasets, identifying anomalies, and automating routine tasks. However, it lacks the nuanced understanding of context, ethical considerations, and strategic adaptability that human analysts possess.

HITL ensures that AI-generated alerts are validated, false positives are filtered, and complex threats are accurately assessed. It’s the critical bridge between algorithmic precision and human intuition, ensuring AI remains a tool, not a replacement, for strategic security. Furthermore, human oversight is vital for mitigating bias in AI models and adapting to the ever-evolving threat landscape, ensuring ethical and effective AI deployment.

How can smaller organisations with limited budgets incorporate generative AI for cybersecurity?
For resource-constrained organisations, AI cybersecurity isn’t a luxury, but a strategic imperative. The key lies in intelligent deployment. Embrace MSSPs as force multipliers, gaining access to sophisticated AI defences without prohibitive capital expenditure. Prioritise targeted AI applications, focusing on high-return areas like phishing and anomaly detection, thus maximising impact with finite resources.

Democratise AI access through open-source tools and AI-infused security platforms. Crucially, cultivate an AI-literate workforce. Investing in targeted education ensures these tools are leveraged effectively, transforming potential into tangible security gains. This isn’t about mere adoption; it’s about strategic empowerment, turning budgetary constraints into a catalyst for innovative security.

What best practices would you recommend for implementing generative AI tools while minimising risks?
To truly unlock generative AI’s cybersecurity potential, we must build a fortified framework, not merely deploy tools. Foundational to this is rigorous data governance, ensuring AI’s intelligence is built on pristine, unbiased data. Continuous model vigilance is non-negotiable; constant monitoring and evaluation are essential to preempt performance drift and bias.

Human-in-the-loop protocols are the linchpin, guaranteeing that critical decisions remain anchored in human wisdom. Proactive risk assessments and relentless security testing transform vulnerabilities into strengths. Transparency, woven into the AI’s decision-making fabric, builds trust. Clear policies and procedures, coupled with a commitment to staying at the forefront of AI evolution, ensure adaptability in a rapidly changing threat landscape. This holistic approach empowers organisations to harness AI’s transformative power, not as a gamble, but as a strategic, risk-mitigated advantage.

EDGE’s Beacon Red and Presight AI Partner to Boost AI Security Solutions
Mon, 07 Apr 2025

EDGE has announced that its entity, Beacon Red, has signed a strategic Memorandum of Understanding (MoU) with Presight AI, the UAE’s leading AI-powered global big data analytics company. The partnership was formalised at LAAD Defence & Security 2025, taking place at the Riocentro Exhibition & Convention Center in Rio de Janeiro. The objective of the agreement is to explore business synergies between Presight’s cutting-edge AI and omni-analytics capabilities and Beacon Red’s mission-focused security solutions. Together, they will explore impactful projects across safe and smart cities, digital transformation initiatives, and advanced security systems in strategic international markets, including Latin America.

Thomas Pramotedham, CEO of Presight, emphasised, “Through this partnership with Beacon Red, we are extending the frontier of Applied Intelligence to deliver secure, adaptable, and forward-thinking solutions. Our collaboration reflects the UAE’s commitment to responsible AI deployment and highlights the increasing role of technology in enabling resilient, secure, and sustainable communities – both locally and globally.”

Mauricio De Almeida, CEO of Beacon Red, said, “As part of a diverse and global group, Beacon Red is always looking to expand the range of its capabilities through new partnerships, in new markets, and with superior products. Our collaboration with Presight will enable both companies to leverage their unique strengths, resulting in a comprehensive, integrated solution that helps businesses across industries optimise their operations, improve decision-making, and drive innovation.”

This collaboration is particularly important for the UAE as it reinforces the country’s position as a global leader in the adoption of advanced technologies for national security and societal development. Key benefits include:

  • Strengthening National Security: By combining Presight’s AI-driven insights with Beacon Red’s cybersecurity expertise, this partnership enhances threat detection capabilities critical to safeguarding national infrastructure.
  • Advancing Smart City Initiatives: The MoU aligns with Abu Dhabi’s vision for smart cities by integrating AI-powered technologies that improve urban planning, public safety, and crisis management.
  • Promoting Global Collaboration: The agreement highlights the UAE’s proactive role in fostering international partnerships that drive innovation while addressing global challenges.
  • Supporting Technological Leadership: This partnership underscores Abu Dhabi’s ambition to lead in AI-driven solutions that enable sustainable growth and secure environments.

A joint committee will be established to identify strategic projects that leverage AI for actionable improvement across urban infrastructure while addressing global priorities. This initiative underscores Presight’s dedication to applying AI for positive impact, and Beacon Red’s commitment to advancing security innovation. The partnership marks a pivotal moment for both companies as they work together to redefine the future of digital transformation and security innovation on a global scale.

AI Will Introduce New Threats as LLMs Take Over Automated Systems
Mon, 07 Apr 2025

Chester Wisniewski, Director and Global Field CTO at Sophos, says criminals are, for the most part, using AI for social scams and the social aspects of traditional attacks.

How is generative AI being utilised to enhance cybersecurity measures today?
AI brings a wide variety of advantages to cybersecurity: automation, speed, scalability, enhanced detection, and generalisability. Without AI, rule-based systems need immense manual upkeep to handle the scale of modern threats. AI models can generalise by learning relationships between any number of potentially hundreds of features, while human analysts cannot write such complex rules. AI does, however, stand to introduce new threats as large language models take over automated systems.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
For the most part, criminals are using AI exclusively for social scams and the social aspects of traditional attacks. AI allows for accurate translation at scale, which dramatically increases the quality of social scams. It can also be used to create high-quality phishing emails that are indistinguishable from the real thing.

AI chatbots are also very useful for initiating conversations with potential victims and setting the hook. Once a victim has been captured, humans usually take over but can still use AI to help with translation and grammar. One additional area where AI might be useful is in assessing the value of large volumes of stolen data. Using AI, a criminal might be quicker to identify high-value data and either sell it at a premium or use it as an extra pressure mechanism against the victim.

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
In most applications there aren’t many ethical concerns. Clearly, using AI to generate malicious code or to gather open-source intelligence should be done with caution, but most cybersecurity applications don’t involve many ethical dilemmas.

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Two primary concerns come to mind. The first is that when you are using generative AI to help you write code, you must do very thorough reviews to be sure you are not introducing vulnerabilities. Generative AI has been known to make up the names of libraries that don’t exist or recommend code snippets containing basic programming mistakes like allowing SQL injection attacks or buffer overflow attacks. Second, we must verify the outputs when it really matters. Mild inaccuracies frequently may not matter, but in circumstances where accuracy is of great importance, we have to double-check the outputs to ensure the accuracy of the results.
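To make the first concern concrete, the sketch below shows the kind of flaw an AI assistant can introduce and a careless review can miss, next to the parameterised fix; the table and query are invented for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

    def find_user_unsafe(name):
        # The pattern assistants sometimes produce: SQL built by string formatting.
        # A name like "x' OR '1'='1" turns the filter into a query for every row.
        return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

    def find_user_safe(name):
        # Parameterised query: the driver treats the value as data, not SQL.
        return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

    payload = "x' OR '1'='1"
    print(find_user_unsafe(payload))  # leaks both rows
    print(find_user_safe(payload))    # returns an empty list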

Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Not that I am aware of. Traditional machine learning and neural-network malware detection models prevent attacks around the clock, but I am not aware of generative AI being used in this way to date.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
I think the real promise is in alert triage and language translation capabilities. Of course, these technologies are available now from ourselves and other vendors, but as these capabilities mature, they will become increasingly important for smart automation and aiding human analysts. We are also likely to see AI automation of bug discovery in code before it ships to customers, preventing vulnerabilities, as well as improved detection of targeted phishing attacks in email solutions.

What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
This is critically important. The machines are excellent at processing vast amounts of data and helping make sense of it, but they lack intuition, creativity, and context. Humans can take this reduced flow of information and add that intelligence to achieve superior outcomes.

How can smaller organisations with limited budgets incorporate generative AI for cybersecurity?
Most smaller organisations will benefit from AI through its integration into their existing tools and through their service providers. Many of the efficiencies gained by smart applications of this technology will allow for more affordable services from security providers and easier-to-use tools.

What best practices would you recommend for implementing generative AI tools while minimising risks?
If using AI models hosted in public clouds or by service providers, caution must be exercised not to process sensitive information with these tools. Risks can be minimised by choosing providers in countries with privacy laws in line with your responsibilities, but caution should still be exercised. For the most sensitive types of information, it would be best to host the model on-premises or in a private cloud instance that is not shared with other tenants.
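For teams that must rely on hosted models, one common complementary control is to strip obvious sensitive values before a prompt ever leaves the network. The regex patterns below are a rough, assumed illustration of that idea; production redaction needs far broader coverage and should never be the only safeguard.

    import re

    # Illustrative patterns only; real deployments need far broader coverage.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "IP":    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    }

    def redact(prompt: str) -> str:
        """Replace obvious sensitive values with placeholders before the prompt leaves the network."""
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[{label}]", prompt)
        return prompt

    raw = "Ticket from jane.doe@example.com at 10.2.3.4, card 4111 1111 1111 1111 declined."
    print(redact(raw))  # Ticket from [EMAIL] at [IP], card [CARD] declined.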
