Axis Communications Sheds Light on Video Surveillance Industry Perspectives on AI
https://securityreviewmag.com/?p=28241 | Mon, 12 May 2025

Axis Communications has published a new report that explores the state of AI in the global video surveillance industry. Titled The State of AI in Video Surveillance, the report examines the key opportunities, challenges and future trends, as well as the responsible practices that are becoming critical for organisations in their use of AI. The report draws insights from qualitative research as well as quantitative data sources, including in-depth interviews with carefully selected experts from the Axis global partner network.

A leading insight featured in the report is the unanimous view among interviewees that interest in the technology has surged over the past few years, with more and more business customers becoming curious and increasingly knowledgeable about its potential applications.

“AI is a technology that has the potential to touch every corner and every function of the modern enterprise. That said, any implementations or integrations that aim to drive value come with serious financial and ethical considerations. These considerations should prompt organisations to scrutinise any initiative or investment. Axis’s new report not only shows how AI is transforming the video surveillance landscape, but also how that transformation should ideally be approached,” said Mats Thulin, Director AI & Analytics Solutions at Axis Communications.

According to the Axis report, the move by businesses from on-premise security server systems to hybrid cloud architectures continues at pace, driven by the need for faster processing, improved bandwidth usage and greater scalability. At the same time, cloud-based technology is being combined with edge AI solutions, which play a crucial role by enabling faster, local analytics with minimal latency, a prerequisite for real-time responsiveness in security-related situations.

By moving AI processing closer to the source using edge devices such as cameras, businesses can reduce bandwidth consumption and better support real-time applications like security monitoring. As a result, the hybrid approach is expected to continue to shape the role of AI in security and unlock new business intelligence and operational efficiencies.
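
To make the hybrid pattern concrete, here is a minimal sketch of edge-side filtering: the device runs inference locally and forwards only flagged events upstream instead of streaming raw video. The function names, threshold and event format are illustrative assumptions, not any vendor’s API.

```python
# Minimal sketch of edge filtering: infer locally, send only flagged events.
import json
import time

CONFIDENCE_THRESHOLD = 0.85  # only events scoring above this leave the device

def run_local_inference(frame):
    """Stand-in for an on-device detection model; returns (label, score)."""
    return "person_in_restricted_zone", 0.91

def forward_to_cloud(event):
    """Stand-in for the uplink; in practice an MQTT or HTTPS publish."""
    print("uplink:", json.dumps(event))

def process_frame(frame):
    label, score = run_local_inference(frame)
    if score >= CONFIDENCE_THRESHOLD:
        # Ship a small JSON event rather than the full frame, saving bandwidth.
        forward_to_cloud({"ts": time.time(), "label": label, "score": score})

process_frame(frame=None)  # demo call with a dummy frame
```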

An emerging trend among businesses is the integration of diverse data sources for more comprehensive analysis, transforming safety and security. Experts predict that integrating additional sensory data, such as audio and contextual environmental factors caught on camera, can enhance situational awareness and yield more actionable insights, offering a more comprehensive understanding of events.

Combining multiple data streams can ultimately lead to improved detection and prediction of potential threats or incidents. For example, in emergency scenarios, pairing visual data with audio analysis can enable security teams to respond more quickly and precisely. This context-aware approach can potentially elevate safety, security and operational efficiency, and reflects how system operators can leverage and process multiple data inputs to make better-informed decisions.
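
As a rough illustration of such fusion, the toy function below combines per-modality anomaly scores into a single alert decision; the weights and threshold are invented for the example rather than field-tested values.

```python
def fused_alert(video_score: float, audio_score: float,
                w_video: float = 0.6, w_audio: float = 0.4,
                threshold: float = 0.7) -> bool:
    """Weighted late fusion of per-modality anomaly scores in [0, 1]."""
    return w_video * video_score + w_audio * audio_score >= threshold

# A shout (high audio score) plus mild visual motion still raises an alert.
print(fused_alert(video_score=0.55, audio_score=0.95))  # True
```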

According to the Axis report, interviewees emphasised that responsible AI and ethical considerations are critical priorities in the development and deployment of new systems, raising concerns about decisions potentially based on biased or unreliable AI. Other highlighted risks relate to privacy violations and the ethical and legal repercussions that facial and behavioural recognition could carry.

As a result, a recurring theme among interviewees was the importance of embedding responsible AI practices early in the development process. Interviewees also pointed to regulatory frameworks, such as the EU AI Act, as pivotal in shaping responsible use of technology, particularly in high-risk areas. While regulation was broadly acknowledged as necessary to build trust and accountability, several interviewees also stressed the need for balance to safeguard innovation and address privacy and data security concerns.

“The findings of this report reflect how enterprises are viewing the trend of AI holistically, working to have a firm grasp of both how to use the technology effectively and understand the macro implications of its usage. Conversations surrounding privacy and responsibility will continue but so will the pace of innovation and the adoption of technologies that advance the video surveillance industry and lead to new and exciting possibilities,” Thulin added.

CyberKnight Partners with Ridge Security for AI-Powered Security Validation
https://securityreviewmag.com/?p=28198 | Thu, 08 May 2025

The automated penetration testing market was valued at roughly $3.1 billion in 2023 and is projected to grow rapidly, with forecasts estimating a compound annual growth rate (CAGR) between 21% and 25%. By 2030, the sector is expected to reach approximately $9 to $10 billion. The broader penetration testing industry is also expanding, with projections indicating it will surpass $5.3 billion by 2027, according to MarketsandMarkets.

To support enterprises and government entities across the Middle East, Turkey and Africa (META) with identifying and validating vulnerabilities and reducing security gaps in real time, CyberKnight has partnered with Ridge Security, the world’s first AI-powered offensive security validation platform. Ridge Security’s products incorporate advanced artificial intelligence to deliver security validation through automated penetration testing and breach and attack simulations.

RidgeBot uses advanced AI to autonomously perform multi-vector iterative attacks, conduct continuous penetration testing, and validate vulnerabilities with zero false positives. RidgeBot has been deployed by customers worldwide as a key element of their journey to evolve from traditional vulnerability management to Continuous Threat Exposure Management (CTEM).

“Ridge Security’s core strength lies in delivering holistic, AI-driven security validation that enables organizations to proactively manage risk and improve operational performance,” said Hom Bahmanyar, Chief Enablement Officer at Ridge Security. “We are delighted to partner with CyberKnight to leverage their network of strategic partners, deep-rooted customer relations, and security expertise to accelerate our expansion plans in the region.”

“Our partnership with Ridge Security is a timely and strategic step, as 69% of organizations are now adopting AI-driven security for threat detection and prevention,” added Wael Jaber, Chief Strategy Officer at CyberKnight. “By joining forces, we enhance our ability to deliver automated, intelligent security validation solutions, reaffirming our commitment to empowering customers with resilient, future-ready cybersecurity across the region.”

How AI is Reinventing Cybersecurity for the Automotive Industry
https://securityreviewmag.com/?p=28087 | Wed, 23 Apr 2025

Written by Alain Penel, VP of Middle East, CIS & Turkey at Fortinet

Autonomous and electric vehicle uptake is rising across the Middle East, driven by national agendas and a growing push for sustainable mobility. With this rapid growth, however, comes an urgent need to address cybersecurity at every stage of the automotive value chain.
Artificial Intelligence (AI) is at the heart of this shift, transforming not only how vehicles operate, but also how cyber threats are identified, mitigated, and prevented. From predictive maintenance to driver behavior analytics, AI is streamlining processes and unlocking efficiencies. But it is also redefining the security perimeter for automotive organizations.

Forces Influencing AI Adoption in Automotive
As the industry evolves, three forces are shaping the current landscape: stricter regulations, rapid AI integration, and a fundamental change in communication infrastructure. Regulations such as the Cyber Resilience Act and NIS2, for example, are introducing more granular compliance mandates, especially for sectors handling critical infrastructure.

Meanwhile, AI is accelerating business and individual learning processes. At the network level, the need for faster communication and bandwidth adaptability is giving rise to next-generation connectivity frameworks that can support AI-native systems. This evolution in infrastructure and intelligence also promotes a significant shift in cybersecurity from reactive to preventive.

AI is increasingly being used to analyze threat landscapes and internal vulnerabilities in real-time. This shift enables organizations to prepare for attacks before they happen, leveraging behavioral analytics and high-speed correlation to stay ahead of potential breaches. Hardware acceleration and software development, guided by AI, are now setting the pace for how cybersecurity evolves across the industry.

The Impact of Cybersecurity
Unsurprisingly, automotive enterprises are becoming high-value targets for cybercriminals. Three core factors contribute to this trend: the financial opportunity of holding connected services hostage, the complexity of digital supply chains, and the vast amount of sensitive data being generated. With every vehicle connected to cloud-based services, a single breach can have wide-ranging brand, operational, and financial repercussions. Moreover, the ecosystem of third-party vendors involved in producing autonomous and electric vehicles significantly expands the attack surface.

The use of digital twins and advanced manufacturing technologies further intensifies the volume of valuable data. This information, ranging from user behavior patterns to proprietary designs, is not only attractive to attackers but also becomes a tool for launching future attacks or selling on the dark web.

AI Transformations in the Automotive Supply Chain
AI is also transforming the automotive supply chain. Predictive maintenance, for example – as opposed to scheduled or reactive vehicle maintenance, which until now has been the norm – enables companies to forecast part failures, optimize distribution, and reduce warehousing costs. AI can analyse and synthesise so many data streams that failure forecasting becomes far more accurate than guesswork. Not only does this mean more reliable vehicles for the consumer, but it also means that each element of demand can be optimised.
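
A minimal sketch of how such a forecast might be built, assuming historical sensor readings labelled with whether the part later failed; the features and data below are synthetic stand-ins for real fleet telemetry.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-part features: vibration RMS, operating temperature (C),
# and accumulated service hours.
X = np.column_stack([
    rng.normal(1.0, 0.3, n),
    rng.normal(70, 10, n),
    rng.uniform(0, 5000, n),
])
# Synthetic label: hot, worn, heavily vibrating parts fail more often.
risk = 0.8 * X[:, 0] + 0.02 * X[:, 1] + 0.0004 * X[:, 2]
y = (risk + rng.normal(0, 0.3, n) > 3.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", model.score(X_te, y_te))
```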

Driver behavior analysis and in-cabin monitoring systems powered by AI are also enhancing safety, particularly for long-haul truck drivers exposed to risks such as fatigue and theft. These AI-powered innovations are already helping companies reduce operational costs while improving customer satisfaction.

Strengthening security across the supply chain means embedding real-time monitoring, mapping data flows, and building a fast, coordinated response to incidents. The introduction of cyber resilience principles encouraged by regulatory bodies requires organizations to maintain robust and sustainable response mechanisms. AI can help with this.

AI’s Role in Automotive Cybersecurity
The future of AI in automotive cybersecurity lies in its ecosystem-wide integration. Multimodal AI models that can process text, images, and design data are already in use. But the next phase involves combining internal and external intelligence to strengthen risk postures. Synthetic data created specifically to train internal models without exposing real user data is becoming an important asset in speeding up AI development while preserving privacy.

The impact of AI can be summarized as transformative, dual-edged, and adaptable. It is enhancing cybersecurity readiness, being weaponized by attackers, and empowering businesses to evolve quickly in a changing environment. As the Middle East embraces connected mobility and smart transportation, the conversation must move beyond adopting AI to implementing it securely and intelligently. The road to the future may be autonomous, but its success will hinge on cybersecurity built for adaptability, speed, and scale.

Positive Technologies to Highlight AI Cyber Threats and Defense at GISEC 2025
https://securityreviewmag.com/?p=28075 | Tue, 22 Apr 2025

Positive Technologies is joining GISEC Global 2025, one of the largest cybersecurity and technology exhibitions in the Middle East, on May 6–8 in Dubai. At the Positive Technologies booth (D 90, Hall 7), in-house experts will share their expertise in application security, industrial cybersecurity, and detection of cyberattacks in network traffic using PT Network Attack Discovery. The Positive Technologies team will also host workshops in the Hack-O-Sphere zone.

“Multiple countries in the Middle East have made significant strides in cybersecurity. However, organizations in the region remain an attractive target for cybercriminals, as our research shows,” says Ilya Leonov, Regional Director for MENA, Positive Technologies. “At GISEC Global 2025, we will focus on application security (AppSec) and operational technology security (OT security). Our team will share best practices for using PT Network Attack Discovery, which detects cybercriminal activity in the network traffic and also aids in incident investigation and proactive threat hunting. We’ll also be talking about a range of our other products and solutions to help you get real value from your cybersecurity investments. Additionally, our experts will demonstrate sophisticated attack methods and explain how to defend against them.”

Visitors to the Positive Technologies booth will have the opportunity to observe offensive security specialists simulating DMA attacks, using various devices to bypass defenses and gain access to valuable information. An accessible and user-friendly tool for chip security analysis will also be presented to GISEC participants. This tool, which simulates fault injection attacks, will be demonstrated in action, and the Positive Technologies team will deliver a workshop for cybersecurity professionals.

Positive Technologies will also be organizing four activities in the Hack-O-Sphere zone. At Fixathon, guests will have the opportunity to test their skills in fixing code vulnerabilities and improve their secure development skills. The second activity is dedicated to steganography: guests will be encouraged to find words encrypted in the works of renowned artists and get acquainted with this fascinating method of information transmission. At the workshop on hacking devices, participants will learn how attackers exploit physical access vulnerabilities and how to defend against such attacks. At the soldering workshop, you’ll have the opportunity to craft a useful mini-gadget.

AI Will Introduce New Threats as LLMs Take Over Automated Systems
https://securityreviewmag.com/?p=28033 | Mon, 07 Apr 2025

Chester Wisniewski, Director and Global Field CTO at Sophos, says criminals are using AI almost exclusively for social scams and the social aspects of traditional attacks

How is generative AI being utilised to enhance cybersecurity measures today?
AI brings a wide variety of advantages to cybersecurity: automation, speed, scalability, enhanced detection, and generalisability. Without AI, rule-based systems need immense manual upkeep to handle the scale of modern threats. AI models can generalise by learning relationships between any number of potentially hundreds of features, while human analysts cannot write such complex rules. AI does, however, stand to introduce new threats as large language models take over automated systems.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
For the most part, criminals are using AI for social scams and the social aspects of traditional attacks. AI allows for accurate translation at scale, which dramatically increases the quality of social scams. It can also be used to create high-quality phishing emails that are indistinguishable from the real thing.

AI chatbots are also very useful for initiating conversations with potential victims and setting the hook. Once a victim has been captured, humans usually take over but can still use AI to help with translation and grammar. One additional area where AI might be useful is in assessing the value of large volumes of stolen data. Using AI, a criminal might be quicker to identify high-value data and either sell it at a premium or use it as an extra pressure mechanism against the victim.

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
In most applications there aren’t many ethical concerns. Clearly using AI to generate malicious code or to gather open source intelligence should be done with caution, but most cybersecurity applications don’t involve many ethical dilemmas.

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Two primary concerns come to mind. First, when you are using generative AI to help you write code, you must do very thorough reviews to be sure you are not introducing vulnerabilities. Generative AI has been known to make up the names of libraries that don’t exist or recommend code snippets containing basic programming mistakes, like allowing SQL injection or buffer overflow attacks. Second, we must verify the outputs when it really matters. Mild inaccuracies frequently may not matter, but in circumstances where accuracy is of great importance, we have to double-check the outputs to ensure the results are correct.
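
The SQL injection pitfall is easy to show in miniature. Below, the first query interpolates user input straight into the SQL text, the kind of snippet an assistant might suggest; the parameterised version a reviewer should insist on treats the input strictly as a value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"

# Vulnerable: user input becomes part of the SQL text itself.
unsafe = f"SELECT role FROM users WHERE name = '{user_input}'"
print("unsafe:", conn.execute(unsafe).fetchall())  # leaks the row anyway

# Safe: a placeholder makes the driver treat the input as a plain value.
safe = "SELECT role FROM users WHERE name = ?"
print("safe:", conn.execute(safe, (user_input,)).fetchall())  # []
```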

Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Not that I am aware of. Traditional machine learning and neural-network malware detection models prevent attacks around the clock, but I am not aware of generative AI being used in this way to date.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
I think the real promise is in alert triage and language translation capabilities. Of course, these technologies are available now from ourselves and other vendors, but as these capabilities mature, they will become increasingly important for smart automation and aiding human analysts. We are also likely to see AI-automated discovery of bugs in code before it ships to customers, preventing vulnerabilities, as well as improved detection of targeted phishing attacks in email solutions.

What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
This is critically important. The machines are excellent at processing vast amounts of data and helping make sense of it, but they lack intuition, creativity, and context. Humans can take this reduced flow of information and add that intelligence to achieve superior outcomes.

How can smaller organisations with limited budgets incorporate generative AI for cybersecurity?
Most smaller organisations will benefit from AI through its integration into their existing tools and through their service providers. Many of the efficiencies gained through smart applications of this technology will allow for more affordable services from security providers and easier-to-use tools.

What best practices would you recommend for implementing generative AI tools while minimising risks?
If using AI models hosted in public clouds or by service providers, caution must be exercised not to process sensitive information with these tools. Risks can be minimised by choosing providers in countries with privacy laws in line with your responsibilities, but caution should still be exercised. For the most sensitive types of information, it would be best to host the model on-premises or in a private cloud instance that is not shared with other tenants.
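
One simple precaution along these lines is to scrub obviously sensitive values from a prompt before it ever reaches a shared, hosted model. The sketch below uses a few illustrative regular expressions; a production deployment would need a vetted data-loss-prevention step rather than this minimal filter.

```python
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IP": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matched sensitive values with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

prompt = "User bob@example.com logged in from 10.0.0.12"
print(redact(prompt))  # User <EMAIL> logged in from <IP>
```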

Generative AI Can Automate the Creation of Malware Variants
https://securityreviewmag.com/?p=28022 | Thu, 03 Apr 2025

Ivan Milenkovic, Vice President – Cyber Risk Technology, EMEA at Qualys, says that as much as generative AI can fortify security, it equally arms malicious actors with new tools

How is generative AI being utilized to enhance cybersecurity measures today?
Today, generative AI is used to bolster cybersecurity defences in a multitude of ways. It automates mundane tasks, sifting through vast data logs to identify potential vulnerabilities and weed out false positives (Gartner, 2021). More impressively, generative AI can predict emerging threats by simulating attack scenarios, helping teams spot anomalies before they escalate (Mandiant, 2022).

Compared with older rule-based systems, these AI models adapt in real time, learning from both benign and malicious activity to create dynamic defence postures. A notable example is Darktrace’s “Antigena” product, which uses self-learning AI to detect abnormal network behaviours. In 2018, it reportedly thwarted an insider threat by flagging unusual data transfers in a UK-based financial services firm (Darktrace, 2018). The technology reduced the manual workload on analysts by automating front-line triage, freeing human experts to focus on higher-level investigations.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
As much as generative AI can fortify security, it equally arms malicious actors with new tools. Sophisticated attackers are already deploying adversarial machine learning to bypass detection (Goodfellow et al., 2014) and using deepfakes to manipulate social engineering scams. One infamous example involved fraudsters using deepfake voice impersonation of a CEO to authorise a fraudulent wire transfer of approximately €220,000 from a UK-based energy firm in 2019 (Wall Street Journal, 2019).

This dark side underscores why cybersecurity leaders must remain vigilant. Generative AI can automate the creation of malware variants, obfuscate malicious code, or create entire networks of bot accounts capable of launching coordinated attacks (ENISA Threat Landscape, 2021). These challenges highlight the need for organisations to keep their AI defences on par with adversarial AI capabilities.

How can organizations leverage generative AI for proactive threat detection and response?
Given the growing dangers, organisations are increasingly using generative AI for proactive threat hunting. By training models on historical attack datasets, security systems can anticipate emerging vulnerabilities, formulate defensive strategies, and even recommend immediate containment measures (IBM X-Force Threat Intelligence Index, 2022). Generative AI excels at pattern recognition, which — when combined with behavioural analysis — helps security teams detect anomalies that conventional defences might miss.
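
As a compact illustration of this kind of pattern recognition, the sketch below fits an anomaly detector on historical “normal” activity and scores new events; the features (bytes per session, session length, failed logins) and data are hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Historical baseline: bytes per session, session seconds, failed logins.
normal = np.column_stack([
    rng.normal(5_000, 1_000, 500),
    rng.normal(300, 60, 500),
    rng.poisson(0.2, 500),
])
detector = IsolationForest(contamination=0.01, random_state=1).fit(normal)

new_events = np.array([
    [5_200, 310, 0],    # looks routine
    [90_000, 20, 9],    # short, bulk-transfer burst with failed logins
])
print(detector.predict(new_events))  # expected: [ 1 -1 ], where -1 = anomaly
```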

Several Fortune 500 companies have begun deploying AI-driven “red team” exercises using synthetic data to simulate real attacks (Ponemon Institute, 2022). By synthesising new attack variants, these organisations can better train their detection algorithms and prepare incident response teams for novel threat scenarios.

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
A critical ethical question arises when deploying powerful AI tools for cybersecurity: Where do we draw the line between data-driven intelligence and intrusive surveillance? Privacy concerns loom large, particularly when AI systems process personal information to identify potential insider threats (NIST SP 800-53, 2020). It is essential that organisations establish transparent governance structures, involving cross-functional teams from legal, compliance, and human resources.

These frameworks should clarify data usage policies, ensure algorithmic fairness, and reinforce accountability (European Commission, 2021; EU AI Act, 2024). Treating user data with respect whilst maintaining robust defences is not just a matter of compliance; it’s a moral imperative that, if neglected, can damage trust irreparably.

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Despite the allure of next-generation solutions, cybersecurity teams often face significant hurdles when incorporating generative AI. Firstly, there is a matter of technical complexity. Building models that accurately understand and adapt to evolving threats requires specialised expertise and substantial computational resources (Gartner, 2021). Secondly, legacy systems are mostly ill-equipped to handle the high data throughput AI demands, leading to integration bottlenecks (Mandiant, 2022). Then, there is a problem of inflated expectations. The hype around AI can cause organisations to invest in poorly scoped projects, hampering returns and morale (Ponemon Institute, 2022).

To combat these issues, teams should conduct thorough proofs of concept and collaborate with experienced data scientists to align capabilities with organisational needs.

Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Several case studies highlight the growing success of generative AI in thwarting attacks. Darktrace reported detecting anomalous “beacon” traffic months before a known banking Trojan was publicly identified (Darktrace, 2019). Meanwhile, a large financial institution in Asia leveraged AI-driven user behaviour analytics (UBA) to pinpoint a suspicious spike in credential escalations, uncovering an elaborate insider threat that might otherwise have slipped under the radar (IBM, 2020). These incidents illustrate the transformative power of AI when integrated thoughtfully with security operations.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
Over the coming years, generative AI is expected to mature into an even more intuitive and autonomous guardian. As data collection methods expand and computational power grows (Ponemon Institute, 2022), AI models will become more adept at detecting zero-day exploits and adapting, on the fly, to novel attack techniques. Widespread adoption of AI systems that interact seamlessly with security analysts will facilitate real-time recommendations, and “self-healing” networks capable of automated patching are likely to become mainstream (Gartner, 2021).

However, we should brace for an escalation in AI-enabled cyberattacks as well (from near-perfect deepfakes, for example, to far better personalised targeted attacks). This unfolding arms race underscores the importance of continuous innovation and collaboration between industry, academia, and government (ENISA Threat Landscape, 2021).

What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
Human-in-the-loop oversight remains indispensable. Even the most advanced AI systems can produce false positives or overlook subtleties requiring human judgement (European Commission, 2021). Skilled analysts, especially those with deep domain knowledge, are needed to validate AI-driven alerts, fine-tune learning models, and account for socio-political contexts.

As a result, AI should be viewed as an extension of human capabilities rather than a replacement. A balanced combination of machine efficiency and human intuition results in the most effective security outcomes (Mandiant, 2022). Lastly, let’s not forget that emerging legislation (the EU AI Act, for example) might “insist” on human decisions for certain privacy-critical aspects.

How can smaller organizations with limited budgets incorporate generative AI for cybersecurity?
Budget constraints need not bar smaller organisations from leveraging generative AI. A pragmatic step is to use cloud-based security tools with built-in AI features, offsetting the cost of on-premises infrastructure (Microsoft Azure Security Centre, 2021). Partnerships with managed service providers can also help smaller entities develop tailored AI strategies.

Starting with low-complexity use cases, such as automated phishing detection, can yield quick wins and free up resources to invest in more advanced capabilities. By focusing on modular, scalable solutions, smaller organisations can gradually expand their AI footprint without jeopardising financial stability.

What best practices would you recommend for implementing generative AI tools while minimizing risks?
To implement generative AI responsibly, organisations should embrace and follow established industry good practices; NIST SP 800-53 is a good example. The basic steps should not be news to cybersecurity professionals:

  1. Establish a clear governance framework that outlines AI deployment goals, data usage policies, and oversight responsibilities.
  2. Invest in robust training datasets to mitigate bias and ensure the AI can accurately detect real threats.
  3. Enforce rigorous testing and validation procedures, including adversarial testing to identify potential exploits.
  4. Maintain audit logs and version control for the AI models, enabling swift rollback if necessary (a minimal sketch of this step follows the list).
  5. Finally, foster a culture of transparency by openly communicating to stakeholders how and why AI is used within the security apparatus.
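
As a bare-bones illustration of step 4, the sketch below hashes a saved model artifact and appends an audit record, so any model running in production can be traced and rolled back; the paths and fields are illustrative.

```python
import hashlib
import json
import time
from pathlib import Path

def record_model_version(model_path: str, log_path: str = "model_audit.jsonl") -> str:
    """Hash the model artifact and append an audit entry; returns the digest."""
    digest = hashlib.sha256(Path(model_path).read_bytes()).hexdigest()
    entry = {"ts": time.time(), "artifact": model_path, "sha256": digest}
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return digest

# record_model_version("models/threat_classifier_v3.bin")  # hypothetical path
```
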
AI Has Lowered the Barrier to Entry Into Cybercrime
https://securityreviewmag.com/?p=28018 | Thu, 03 Apr 2025

Kalle Bjorn, Senior Director, Systems Engineering – Middle East at Fortinet, says cybersecurity is a strategic enabler for realizing the full potential of AI

How is generative AI being utilized to enhance cybersecurity measures today?
As today’s network complexity grows, so does the need for intelligent tools that can simplify management tasks and enhance efficiency. Generative AI (GenAI) has become a cornerstone for making it happen. It can truly transform how Day 0 to Day 2 network operations are performed.

According to Gartner research, by 2026, GenAI technology is expected to influence 20% of initial network configuration, a dramatic rise from virtually none in 2023. Currently, 65% of network activities, including configuration and troubleshooting, are still performed manually, highlighting a significant opportunity for automation and efficiency improvements through GenAI.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
AI has become a double-edged sword for cybersecurity. On the one hand, it has lowered the barrier to entry into cybercrime, enabling would-be criminals to generate malware even when they lack programming skills and providing more sophisticated criminals with capabilities few could have imagined a short time ago. On the other hand, cyber defenders can take advantage of AI for intelligent automation and defense strategies.

Last year, global leaders raised this issue with the World Economic Forum’s Centre for Cybersecurity, with the aim of helping organizations everywhere to better comprehend the cybersecurity implications of using AI technologies and how to adopt these offerings securely. As a result of these discussions, the World Economic Forum launched its AI and Cyber Initiative to develop guidance for organizations to manage the complex cyber risks associated with AI use.

Understanding and implementing risk management measures positively impacts more than just an enterprise’s cyber resilience. Cybersecurity is a strategic enabler for realizing the full potential of AI. By embedding security into AI systems from the ground up, organizations transform risk mitigation into a competitive advantage, ensuring trustworthiness and ethical compliance.

How can organizations leverage generative AI for proactive threat detection and response?
GenAI can analyze massive data streams, recognize patterns, and deliver actionable intelligence in real-time. It also offers advanced scripting assistance, proactive troubleshooting and IoT vulnerability diagnostics, and automated implementation of AI-recommended remediations, leading to a more secure, efficient, resilient network and, eventually, an autonomous network. FortiAI for FortiManager is revolutionizing network management by integrating GenAI to do just this. FortiAI provides rapid insights into vulnerabilities, quarantining risky IoT devices and helping organizations stay ahead of potential threats.

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
AI presents a multitude of perspectives and ethical considerations, particularly concerning its development, deployment, and economic ramifications.

To ensure explainability and accountability in AI-driven security decision-making, security teams can opt for transparent AI models whose decision-making processes can be understood and audited by human experts. Organisations should also implement robust validation and testing, rigorously testing AI models with diverse datasets to identify and mitigate biases or inaccuracies, and follow any local and global regulations around AI.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
As we continue to imagine AI in every aspect of cybersecurity, we’re witnessing a revolution that’s reshaping the industry, making it more proactive, responsive, and adaptive than ever before.

GenAI offers transformative potential, as demonstrated by Klarna’s AI Assistant, which now handles the workload equivalent to 700 customer service agents. For Klarna, this translates into an estimated $40 million in annual savings, showcasing AI’s ability to enhance productivity and reduce operational costs.

It is widely acknowledged that AI will have a profound impact on everyday life, though the precise nature and trajectory of this impact remain difficult to predict. Nonetheless, it is imperative that we adopt a forward-thinking approach to understanding and harnessing AI’s potential, ensuring that its development is aligned with societal benefit and economic sustainability. By prioritizing cybersecurity, organizations can protect their investments in AI, supporting innovation while strengthening defenses.

What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
Automation is particularly important in cybersecurity given the ongoing shortage of expert security staff. However, human oversight will still be important. Security teams will need to be equipped with the knowledge and skills to understand, interpret, and manage AI-driven security systems effectively. The Fortinet Training Institute recently added two new AI-focused modules to its Security Awareness and Training service to enhance learners’ understanding of AI and the role this technology plays in cybersecurity.

AI excels at tactical responses based on predefined rules. However, defining security policies, understanding risk tolerance, and making strategic decisions still require human expertise and intuition. Analysing new and evolving threats, understanding their potential impact, and developing innovative countermeasures will also still require human intelligence and creativity.

Generative AI: The Game-Changer Transforming Cybersecurity in a Rapidly Evolving Threat Landscape
https://securityreviewmag.com/?p=27997 | Fri, 28 Mar 2025

Fernando Cea, VP of Technology for New Markets at Globant, champions the integration of generative AI into cybersecurity as a transformative approach to tackling today’s rapidly evolving threat landscape

How is generative AI being utilized to enhance cybersecurity measures today?
The cybersecurity industry is at an inflection point. With the total addressable market expected to soar to $1.5–$2.0 trillion—nearly 10x the size of the current vended market—there’s no room for complacency. Generative AI isn’t just a tool; it’s a force multiplier. At Globant, we’re integrating Gen AI into the heart of cybersecurity, enabling systems to not only identify anomalies faster but to predict them—before they strike.

Simulation and the evolution of GenAI models go hand in hand, and that is exactly where we are heading. We’re seeing AI models automatically generate threat intelligence reports, simulate attacks to test system resilience, and dynamically rewrite defensive code in real time. According to a recent IBM study, organizations using AI and automation in security saw breach lifecycles that were 108 days shorter and saved an average of $1.76 million per breach. But here’s the reality: Gen AI is also arming the attackers. The only way to keep up is to fight AI with AI.

How can organizations leverage generative AI for proactive threat detection and response?
Proactive cybersecurity isn’t just about building walls—it’s about anticipating the breach before it happens. Gen AI enables organizations to shift from reactive playbooks to predictive defense. GenAI is very good at creating scenarios and synthetic data, and this has introduced a new approach to problem solving. We’re helping clients in highly sensitive industries—from government to finance to entertainment—deploy AI agents that constantly scan internal and external networks, flag anomalous behavior in milliseconds, and autonomously deploy countermeasures before human analysts are even alerted.

This isn’t theoretical—it’s happening now. Imagine a Gen AI model that learns from every attempted breach across an ecosystem, feeding insights into your security fabric in real time. It’s like having a red team and blue team working together 24/7, learning from each other, and never sleeping.

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Let’s not sugarcoat it—integrating Gen AI into cybersecurity isn’t plug-and-play. It demands new skillsets, new mindsets, and a willingness to break the old model. One of the biggest challenges is explainability. Gen AI models often operate as black boxes, which makes it hard for CISOs and security teams to justify actions to regulators or internal stakeholders.

There’s also the risk of AI-generated false positives that can overwhelm analysts or, worse, generate blind spots. Then there’s trust: many organizations are hesitant to hand over critical security operations to a machine. And rightfully so. The risk is real—but the risk of standing still is even greater.

Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Yes—and they’re multiplying. One case we’re particularly proud of involved a large-scale financial institution that faced a rising wave of phishing attacks using deepfake content. Traditional rule-based systems failed to detect the nuances, but a Gen AI-powered detection layer flagged irregular tone and semantic drift in emails and voice transcriptions in real time. The system prevented a multi-million-dollar breach.

Another example: a digital media platform we work with experienced a zero-day exploit attempt. Our AI models, trained on synthetic attack data, recognized the pattern within seconds and auto-isolated the affected microservice—without any human intervention. These aren’t just success stories. They’re proof that Gen AI can move faster than the adversary.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
The future of cybersecurity will be defined by cyber resilience—not just fortification. And Gen AI will be at the core. In the next three to five years, we expect to see fully autonomous security orchestration platforms powered by Gen AI that adapt and evolve without manual configuration. Think of them as living, breathing digital immune systems—capable of learning, mutating, and healing themselves. But there’s also a dark side.

Nation-states and cybercrime syndicates will weaponize Gen AI to launch attacks at unprecedented scale and sophistication. Deepfakes, synthetic identities, and AI-generated malware will become the norm.

Can AI Outsmart Hackers? How Generative AI is Reshaping Cybersecurity
https://securityreviewmag.com/?p=27991 | Thu, 27 Mar 2025

As generative AI transforms cybersecurity into an AI-versus-AI battleground, organizations must navigate both its defensive potential and emerging risks. We spoke with Ramprakash Ramamoorthy, Director of AI Research at Zoho, about how this technology is reshaping threat detection, automating responses, and even being weaponized by attackers. From real-world attack prevention to ethical implementation challenges, Ramamoorthy shares critical insights on leveraging generative AI effectively while mitigating its dangers in our increasingly digital world.

How is generative AI being utilized to enhance cybersecurity measures today?
Generative AI has changed the way cybersecurity operates today. It has not only automated tasks but also streamlined workflows and improved threat detection, and it is used to simulate attacks to test how prepared an organization is for cyber threats. Unlike traditional static thresholds that require constant human vigilance, Generative AI adapts dynamically, learning from vast data volumes to stay ahead of evolving attacks.

This makes it highly effective in identifying zero-day vulnerabilities and sophisticated threats. Moreover, Generative AI streamlines incident response by generating detailed reports, suggesting mitigation steps, and even creating code patches to address security gaps. Its ability to analyse patterns, predict risks, and automate defensive actions has made Generative AI an important tool against modern cybersecurity threats.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
Generative AI evolved to make things easier, but it has also become a powerful ally for bad actors. Cyber attackers use Gen AI to create highly convincing phishing emails, fake websites, and deepfakes to deceive users and steal information. It also enables the development of sophisticated malware that bypasses traditional security defences, leaving less-digitized enterprises at higher risk.

Gen AI can also generate synthetic malware samples, which, while useful for security testing, can also be exploited to bypass detection. Large-scale attacks can also be deployed with ease, as attackers can automate malware creation. Datasets containing sensitive information can expose AI models to risks like manipulation and data theft. Additionally, biased models may result in inaccurate threat detection, further complicating cybersecurity efforts.

How can organizations leverage generative AI for proactive threat detection and response?
Generative AI offers a significant advantage in analysing large volumes of data, helping to identify anomalies in real time and shrink the window of vulnerability. Its advanced pattern-recognition capabilities help organizations proactively identify threats, provide prescriptive insights, and safeguard the organization by adapting to new thresholds. By simulating realistic cyberattacks, generative AI can also test the effectiveness of defence systems, ensuring they are prepared for real-world scenarios.

As organizations increasingly migrate to cloud environments, new security risks emerge, making Gen AI-driven solutions essential. Gen AI can strengthen Identity and Access Management (IAM) by identifying weaknesses in authentication systems, a common target for cybercriminals, and recommending preventive measures. By combining proactive threat detection, adaptive defence mechanisms, and improved IAM strategies, organizations can build a more resilient security framework against evolving cyber threats.

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
Using generative AI in cybersecurity comes with important ethical considerations that organizations must address. One key concern is bias, where AI models may unfairly target certain behaviors or user profiles due to biased training data. To prevent this, businesses should use diverse datasets and regularly audit their models. Privacy is another major challenge, as AI systems often analyze large volumes of sensitive information. Strong data encryption, anonymization, and strict access controls can help keep this data secure.

There’s also the issue of accountability, especially when AI is making critical security decisions. Incorporating Human-in-the-Loop (HITL) practices ensures human oversight, adding a layer of responsibility and judgment where needed. Finally, transparency is crucial: AI systems should explain their decisions clearly, allowing security teams to trust and understand the reasoning behind each action.

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Integrating Gen AI into cybersecurity workflows presents several challenges. Bias lingering in the models can lead to flawed threat detection, causing false positives that disrupt operations. Adversarial attacks pose another risk, where attackers manipulate data to trick AI models into overlooking malicious activity. Data manipulation is a major concern, as corrupted training data can compromise model accuracy and create security gaps.

Integration challenges may arise when adapting AI tools to legacy systems, requiring significant resources and adjustments; being a digitally mature organization smooths the process of adopting Gen AI. Furthermore, maintaining compliance with data privacy regulations while using AI models adds another layer of complexity. Finally, cybersecurity professionals must continuously update and train AI models to stay effective against evolving threats. Overcoming these challenges requires careful implementation, ongoing monitoring, and collaboration between AI experts and security teams to maximize the benefits of Gen AI tools.

Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Generative AI has proven highly effective in preventing and mitigating cyberattacks through innovative applications. By autonomously analysing large datasets, it can identify threats in real-time, flagging phishing attempts and isolating malicious emails before they reach employees, ultimately preventing potential financial losses. In one notable case in 2023, AI-driven threat intelligence successfully detected a major phishing campaign, saving businesses millions by stopping breaches before they occurred.
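
A toy version of the phishing-flagging idea looks like this: a text classifier trained on labelled emails. The four samples below are invented; real systems train on large labelled corpora and add sender and URL features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is suspended, verify your password at this link now",
    "Urgent: confirm your banking details to avoid closure",
    "Agenda attached for Thursday's project sync",
    "Lunch menu for the office canteen this week",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(emails, labels)
print(clf.predict(["Please verify your password immediately"]))  # likely [1]
```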

Generative AI’s predictive capabilities also allow organizations to simulate potential attacks and refine their defences. For instance, a financial institution used AI to anticipate a zero-day attack, enabling them to prevent a breach that could have exposed sensitive customer data. By combining real-time detection, automated responses, and predictive modelling, Gen AI significantly enhances cybersecurity efforts, helping organizations stay one step ahead of evolving threats.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
Generative AI will significantly reshape cybersecurity in the coming years. As cyber threats grow more sophisticated, Gen AI will enhance proactive defence strategies by improving anomaly detection, threat prediction, and automated response systems. By being more context aware, Gen AI can distinguish between normal behaviour and subtle attack patterns with increased accuracy. Gen AI coupled with AI Agents can analyse vast data patterns, identify suspicious behaviour, and act swiftly to avoid potential attacks.

AI-driven deception techniques, such as creating realistic decoy assets or fake data, will become more advanced to mislead attackers. However, as AI strengthens security defences, cybercriminals are also expected to use Gen AI to create convincing phishing scams, deep fakes, and adaptive malware.

What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
Generative AI systems are powerful at processing vast amounts of data, detecting anomalies, and automating responses, but they can’t do it alone. Human expertise plays a crucial role in interpreting results, validating decisions, and tackling complex, out-of-the-box scenarios. While Gen AI acts as a protective shield, humans step in to handle the tougher security challenges. For a seamless and secure workplace, both must work together.

Humans guide AI to make fair and ethical decisions, reducing bias and discrimination. When Gen AI explains its reasoning, it not only builds trust but also helps security teams learn from its decision-making process. By refining AI models, adjusting detection thresholds, and ensuring systems stay adaptive, humans keep Gen AI effective. In cases of adversarial attacks, where attackers manipulate AI models, human judgment is key to spotting suspicious patterns and strengthening defences. Together, Gen AI and human insight create a stronger, smarter cybersecurity strategy.

How can smaller organizations with limited budgets incorporate generative AI for cybersecurity?
Smaller organizations don’t require massive budgets to take advantage of generative AI for cybersecurity. Several cloud-based security tools now come with built-in AI features such as real-time threat detection and automated response, making them an affordable option. Open-source AI models can also help businesses improve security without hefty licensing fees.

These organizations can partner with Managed Security Service Providers (MSSPs) for cybersecurity, eliminating the need for in-house experts. Moreover, AI agents can handle monotonous tasks such as analysing logs, flagging unusual activity, and prioritising alerts. By combining budget-friendly Gen AI tools with human oversight and staff training, smaller businesses can strengthen their cybersecurity without going overboard on expenses.

What best practices would you recommend for implementing generative AI tools while minimising risks?
Generative AI tools can be implemented effectively with a cautious approach that minimises risk. Quality data and sound security practices have to be in place so the model is trained without biased data and sensitive information is protected to prevent leaks or manipulation. It is essential to incorporate Human-in-the-Loop (HITL) practices, allowing human oversight to validate AI decisions, reduce errors, and uphold ethical standards.

While handling critical data, there should be strict access control protocols to restrict any unauthorized use. Adversarial testing is a method for systematically evaluating an ML model, which can be carried out regularly to spot vulnerabilities such as data poisoning or manipulation attempts before they are exploited by attackers. Continuous monitoring is essential for identifying performance issues, adapting to evolving threats, and maintaining the model’s accuracy over time. By combining these approaches, organizations can safely and effectively utilize Gen AI in their cybersecurity frameworks.
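
One simple form of such adversarial testing is to perturb inputs slightly and measure how often the model’s decision flips, as in the sketch below; the model and data are stand-ins, and a high flip rate would be a warning sign worth investigating.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(0, 1, (400, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic ground truth
model = LogisticRegression().fit(X, y)

noise = rng.normal(0, 0.3, X.shape)  # small input perturbations
flips = (model.predict(X) != model.predict(X + noise)).mean()
print(f"decision flip rate under perturbation: {flips:.1%}")
```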

Generative AI Redefining Cybersecurity with Advanced Capabilities
https://securityreviewmag.com/?p=27988 | Tue, 25 Mar 2025

Emad Fahmy, Systems Engineering Director at NETSCOUT, emphasizes the importance of leveraging advanced threat analytics and adaptive DDoS solutions to address the evolving cybersecurity challenges in hybrid environments. He highlights NETSCOUT’s commitment to providing real-time visibility, actionable insights, and innovative technologies to enhance network security and resilience against sophisticated cyber threats.

How is generative AI being utilised to enhance cybersecurity measures today?
Generative AI is improving cybersecurity by helping detect and stop threats more effectively. It can recognise patterns in cyberattacks, like malware and unusual network activity, that traditional security systems might miss. AI also speeds up response times by automatically taking action against threats. Additionally, it helps businesses manage risks by tracking security gaps and ensuring compliance with safety rules. While AI strengthens security, it also brings new challenges, such as the risk of AI-generated attacks and concerns about data privacy, making careful use important.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
Generative AI can make some cyberattacks more effective and harder to detect. AI-powered social engineering enables the creation of extremely convincing phishing emails, can mimic real voices, and can generate deepfake images that may bypass biometric security. AI also scales cyberattacks more efficiently, optimising DDoS, credential stuffing and malware deployment. In Unified Communications as a Service (UCaaS) platforms, AI automation introduces new risks, as AI-generated text and responses could spread misinformation.

How can organisations leverage generative AI for proactive threat detection and response?
Organisations can use AI-driven systems to automate threat detection and response. This means analysing network data in real time, identifying anomalies at speed and detecting attack patterns before breaches occur. AI also helps counter social engineering by recognising phishing attempts and deepfake content. Additionally, AI-powered tools can automate attack resolution processes, improving speed and accuracy. However, it’s important to remember that human oversight and trained cybersecurity teams are essential to interpret AI insights and mitigate risks effectively. A combination of AI-driven defence and human oversight is the best strategy for organisations to stay ahead of evolving cyber threats.
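
A minimal sketch of that kind of real-time anomaly spotting on a single network metric: keep a rolling window of recent values and flag points far from the window mean. The window size and threshold are illustrative choices.

```python
from collections import deque
from statistics import mean, stdev

def make_detector(window: int = 60, z_threshold: float = 4.0):
    """Return a callable that flags values far from the recent rolling mean."""
    history = deque(maxlen=window)
    def observe(value: float) -> bool:
        anomalous = False
        if len(history) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(history), stdev(history)
            anomalous = sigma > 0 and abs(value - mu) / sigma > z_threshold
        history.append(value)
        return anomalous
    return observe

observe = make_detector()
traffic = [100, 104, 98, 101, 99, 103, 97, 102, 100, 99, 101, 950]
print([observe(v) for v in traffic])  # only the final spike flags True
```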

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
Ethical concerns around generative AI in cybersecurity arise from both its misuse by attackers and the risks associated with AI-driven defences. Cybercriminals can harness the power of AI for more sophisticated phishing, deepfake manipulation and large-scale automated attacks, raising concerns about privacy, misinformation and identity fraud. While AI strengthens security, over-reliance on automation can also lead to false positives or missed threats if not properly monitored. To mitigate these risks, organisations must combine AI-driven cybersecurity with human oversight. Training cybersecurity teams, ensuring responsible threat detection and maintaining transparency in AI decision-making are essential for ethical and effective cybersecurity.

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Cybersecurity teams face several hurdles when adopting generative AI. A major issue is practicality, as many AI initiatives sound promising but lack clear, actionable solutions. Workforce automation is another concern, as ongoing labour shortages continue to stretch security teams. While AI has been around for decades, much of today’s focus is on large-scale models rather than targeted, practical applications. Smaller AI/ML projects that are quicker and more cost-effective to deploy may offer a better approach. However, the current AI hype makes it difficult to distinguish real innovation from inflated expectations, further complicating integration efforts.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
Generative AI is set to play an increasingly complex role in cybersecurity over the next few years. Cybercriminals are already leveraging it to automate phishing attacks, generate deepfake scams, and optimise large-scale threats like DDoS attacks. As AI technology advances, these threats will become more sophisticated and harder to detect. Conversely, AI-driven security tools will evolve to counteract these risks by enhancing threat detection, improving anomaly detection, and accelerating response times. While AI will improve cybersecurity defences, its success will depend on balancing automation with human expertise to prevent the misidentification of threats.

What role does human oversight play in ensuring generative AI systems are effectively managing cybersecurity threats?
AI is a powerful asset in cybersecurity, but it’s not infallible. That’s where human oversight is critical. AI can rapidly detect threats and automate responses, but it lacks contextual understanding and can misinterpret data, leading to false positives. Security teams must stay engaged, validating AI-driven insights, refining models and ensuring decision accuracy. Generative AI still struggles with reliability, making expert involvement essential to prevent costly mistakes and build trust. The most effective approach combines AI’s speed with human judgement, creating smarter, more resilient cybersecurity operations.

How can smaller organisations with limited budgets incorporate generative AI for cybersecurity?
Smaller organisations can adopt generative AI for cybersecurity by leveraging cost-effective, cloud-based AI-driven security solutions. Instead of investing in expensive in-house AI models, they can use AIOps platforms that automate threat detection and incident response, delivering actionable insights without requiring large security teams. AI-powered monitoring tools can also help identify security risks proactively, reducing response times. However, human oversight remains essential—AI is most effective when combined with expert analysis. By strategically integrating AI with human intelligence, smaller organisations can strengthen their security without exceeding their budgets.

What best practices would you recommend for implementing generative AI tools while minimising risks?
Implementing generative AI in cybersecurity requires a careful balance of automation and human oversight. AI should generate reliable, predictable results rather than depending on large language models that may introduce inaccuracies. Continuous monitoring is essential to prevent AI from mistakenly blocking legitimate traffic or disrupting operations. Organisations should leverage AI for real-time threat detection while keeping human experts involved in critical decision-making.
