Cybersecurity – Security Review Magazine (https://securityreviewmag.com)
We bring you the latest from the IT and physical security industry in the Middle East and Africa region.

Huawei Experts Reiterate the Importance of a Unified Cybersecurity Foundation at GISEC Global 2025
https://securityreviewmag.com/?p=28179 – Thu, 08 May 2025

Huawei convened a media briefing at GISEC GLOBAL 2025, the Middle East’s preeminent cybersecurity event, to articulate its vision for a unified cybersecurity foundation designed to address the evolving challenges of the digital and intelligent era.

The briefing, themed “Establishing a Unified Cybersecurity Foundation to Safeguard the Expanding Digital and Intelligent Landscape,” featured presentations by Dr. Zhu Shenggao, Vice President of AI at Huawei Cloud Middle East & Central Asia; Richard Wu, President of Security Product Domain in the Data Communication Product Line, Huawei; and Yongjian Li, President of Data Protection, Huawei. Moderated by Colm Murphy from the Huawei European Cybersecurity Center, the session was attended by media representatives from the GCC, reflecting the region’s increasing emphasis on collaborative cybersecurity strategies.

Strategic Investment in Research and Development
Huawei’s commitment to cybersecurity is underscored by its sustained investment in research and development. Mr. Murphy highlighted the company’s dedication to innovation, noting that in 2024, Huawei allocated USD 24.6 billion to R&D, representing 20.8% of its annual revenue. “The company’s cumulative R&D investment over the past decade amounts to USD 171.1 billion, demonstrating its commitment to continuous advancement in cybersecurity,” Mr. Murphy stated. “Huawei currently employs more than 3,000 cybersecurity R&D personnel, with 5% of its R&D expenditure focused on enhancing the security of its products.”

This investment supports Huawei’s strategic approach to cybersecurity, predicated on the principle that security should be integral to system design and based on rigorous verification against established standards. This commitment is manifested in a comprehensive governance framework and a dedication to providing secure technologies through collaborative partnerships, contributing to industry standards, and upholding privacy and data sovereignty.

Addressing the Evolving Threat Landscape with AI-Native Security
A central focus of the briefing was the increasing prevalence of AI-driven cyberattacks. Mr. Wu emphasized the escalating frequency, sophistication, and covert nature of these threats, noting the utilization of AI technologies by malicious actors to execute advanced attacks and rapidly generate malware variants.

Dr. Zhu addressed this challenge by presenting Huawei Cloud’s AI-Native Security paradigm. “Cybersecurity and privacy protection constitute the cornerstones of development in the digital and intelligent world,” Dr. Zhu stated. “Our unified approach integrates protection across cloud, network, edge, and endpoint environments to provide the comprehensive security foundation necessary for organizations to innovate with confidence.”

Huawei Cloud’s AI Pangu security models integrate comprehensive threat intelligence with specialized capabilities, automating 99% of threat responses and significantly reducing incident detection times. This proactive approach is essential in an environment where conventional security measures are often inadequate in addressing rapidly evolving threats.

Mitigating the Threat of Ransomware: A Multi-Layered Defense Strategy
The briefing also addressed the escalating threat of ransomware, which Mr. Wu reported resulted in USD 42 billion in global losses in 2024. Huawei’s multi-layered protection solution provides active defense with a 99.99% ransomware detection rate, while its HiSec Endpoint product employs AI-driven monitoring to initiate file backup upon detection of suspicious encryption activity.
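Huawei has not published the internals of HiSec Endpoint, but the general technique it describes — triggering a backup when a write looks like encryption — is commonly built on an entropy heuristic. The sketch below is an illustrative assumption, not the product's actual logic; the threshold and `backup_fn` callback are invented for the example:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; encrypted or compressed data approaches 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

ENTROPY_THRESHOLD = 7.5  # heuristic cutoff, would need tuning per workload

def on_file_write(path: str, new_bytes: bytes, backup_fn) -> bool:
    """If a write looks like encryption, snapshot the file first.

    `backup_fn` is a hypothetical callback that copies the current
    on-disk version to protected storage before the write lands.
    Returns True when the write was flagged as suspicious.
    """
    if shannon_entropy(new_bytes) > ENTROPY_THRESHOLD:
        backup_fn(path)
        return True
    return False
```

Plain text scores low (natural language uses a small, skewed byte distribution), while ransomware output is near-uniform, which is what pushes the entropy toward 8 bits per byte.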

“In the context of an increasingly complex threat landscape characterized by more frequent, automated, and covert cyber-attacks, cybersecurity must transition from reactive to proactive threat containment,” Mr. Wu stated. “Our unified cybersecurity foundation reflects Huawei’s commitment to assisting organizations in safeguarding their critical digital assets while enabling continued innovation.”

Revolutionizing Data Protection and Recovery Capabilities
Mr. Li introduced Huawei’s innovative approach to data protection and storage security with the unveiling of the OceanProtect E8000. This advanced system features a 3-in-1 converged architecture that integrates backup software servers, short-term retention storage, and long-term retention storage into a unified system.

“Organizations today require comprehensive protection and rapid recovery capabilities,” said Mr. Li. “With OceanProtect E8000, we are providing both within a single integrated system that significantly reduces complexity while enhancing security.” The OceanProtect E8000 delivers a 5x improvement in recovery performance, enabling the restoration of 1TB of data in 20 seconds, and offers a high-density 2PB/2U capacity that reduces rack space requirements by up to 90% compared to conventional solutions.
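The stated recovery figures can be sanity-checked with simple arithmetic, taking the vendor's numbers at face value and assuming decimal terabytes:

```python
# Restoring 1 TB in 20 seconds implies a sustained recovery throughput of:
tb = 10**12            # bytes (decimal terabyte)
restore_seconds = 20
throughput_gbps = tb / restore_seconds / 10**9  # GB/s
print(throughput_gbps)  # 50.0

# A 5x improvement implies the prior generation restored the same
# terabyte in roughly:
prior_seconds = restore_seconds * 5
print(prior_seconds)  # 100
```

That is, the claim amounts to roughly 50 GB/s of sustained restore throughput, versus about 100 seconds per terabyte for the previous generation.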

Fostering Collaboration and Ensuring Compliance
During the Q&A session, the speakers highlighted Huawei’s strategic partnership with Jeraisy Group in Saudi Arabia and discussed Huawei’s Cloud Service Cybersecurity & Compliance Standard (3CS), a framework based on more than 16 global security standards that ensures robust compliance and governance across all deployments.

Cloudflare to Showcase Future of Cybersecurity at GISEC GLOBAL 2025
https://securityreviewmag.com/?p=28159 – Mon, 05 May 2025

Cloudflare has announced its participation as a Gold Sponsor at GISEC GLOBAL 2025, the region’s premier cybersecurity event, taking place at the Dubai World Trade Centre from May 06–08, 2025. At Hall 7, Booth C100, Cloudflare will demonstrate how its Connectivity Cloud is redefining cybersecurity by offering unparalleled protection, speed, and reliability to businesses of all sizes and public sector organizations. The company will also showcase its latest portfolio of products and solutions designed to empower businesses to take back control of their technology and security environments, streamlining complexity and enhancing visibility across on-premises infrastructure, public clouds, SaaS platforms, and the open Internet.

At a time when digital threats are evolving rapidly, Cloudflare continues to invest in cutting-edge technology that enables businesses to build, scale, and secure digital operations. GISEC 2025 attendees will get a first-hand look at Cloudflare’s Zero Trust platform, AI-native security innovations, Developer Platform, and DDoS mitigation capabilities, all engineered to meet the security demands of modern enterprises.

“As cyber threats become more sophisticated and the region experiences a surge in digital adoption, Cloudflare is committed to enabling secure, resilient, and fast digital experiences,” said Bashar Bashaireh, AVP, Middle East, Türkiye & North Africa at Cloudflare. “Our presence at GISEC reflects our commitment to helping organizations across the Middle East stay ahead of evolving threats, build more secure architectures, and embrace digital transformation with confidence.”

At Security Week 2025, Cloudflare introduced significant advancements to its Zero Trust suite, including Browser Isolation improvements, phishing-resistant authentication, and AI-powered threat detection, making Zero Trust easier to deploy and more powerful for global teams. According to Cloudflare’s Q1 2025 DDoS Threat Report, the company mitigated over 20.5 million DDoS attacks, up 358% year over year. Cloudflare blocked a 4.8 billion packets-per-second (Bpps) attack, 52% higher than the previous benchmark, and separately defended against a massive 6.5 terabits-per-second (Tbps) flood, matching the highest-bandwidth attacks ever reported. Cloudflare will showcase its automated mitigation system, capable of stopping attacks in under 3 seconds, and how its 1.5 Tbps+ edge network ensures constant protection.
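Cloudflare's mitigation pipeline is proprietary, but one standard building block of automated DDoS defense is per-source rate limiting with a token bucket: flooding sources drain their bucket and get dropped while well-behaved traffic passes. The sketch below is a generic illustration with invented parameters, not Cloudflare's implementation:

```python
import time

class TokenBucket:
    """Admit at most `rate` requests/second with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, now=None):
        """Refill tokens for elapsed time, then admit one request if possible."""
        now = time.monotonic() if now is None else now
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A real edge network would keep one such counter per source (or per signature) in a shared data plane; the appeal of the primitive is that each decision is O(1), which is what makes sub-second mitigation feasible at scale.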

At Developer Week in April, Cloudflare unveiled Workers AI Templates, enhanced observability tooling, and WebSocket support, giving developers faster paths to building secure, scalable applications. The Workers AI platform, now running on GPUs in 180+ cities, empowers organizations to deploy low-latency, inference-ready AI apps globally.

Cloudflare’s recent Middle East & Turkey Security Report highlights that 73% of businesses in the region expect an increase in cyberattacks in 2025, yet 60% feel underprepared. Cloudflare’s regional presence and capabilities help bridge this gap with cloud-native solutions built for modern threats. Cloudflare is collaborating with Diligent and Qualys to power a next-generation cyber risk reporting solution—transforming how boards and executive teams gain visibility into cybersecurity posture.

As organizations across the Middle East accelerate cloud adoption and digital transformation, Cloudflare remains a trusted partner in securing networks, web applications, and APIs. The company’s global edge network ensures performance and compliance, while innovations in SASE and AI inference help customers stay resilient and competitive.

How AI is Reinventing Cybersecurity for the Automotive Industry
https://securityreviewmag.com/?p=28087 – Wed, 23 Apr 2025

Written by Alain Penel, VP of Middle East, CIS & Turkey at Fortinet

Autonomous and electric vehicle uptake is rising across the Middle East, driven by national agendas and a growing push for sustainable mobility. With this rapid growth, however, comes an urgent need to address cybersecurity at every stage of the automotive value chain.

Artificial Intelligence (AI) is at the heart of this shift, transforming not only how vehicles operate but also how cyber threats are identified, mitigated, and prevented. From predictive maintenance to driver behavior analytics, AI is streamlining processes and unlocking efficiencies. But it is also redefining the security perimeter for automotive organizations.

Forces Influencing AI Adoption in Automotive
As the industry evolves, three forces are shaping the current landscape: stricter regulations, rapid AI integration, and a fundamental change in communication infrastructure. Regulations such as the Cyber Resilience Act and NIS2, for example, are introducing more granular compliance mandates, especially for sectors handling critical infrastructure.

Meanwhile, AI is accelerating business and individual learning processes. At the network level, the need for faster communication and bandwidth adaptability is giving rise to next-generation connectivity frameworks that can support AI-native systems. This evolution in infrastructure and intelligence also promotes a significant shift in cybersecurity from reactive to preventive.

AI is increasingly being used to analyze threat landscapes and internal vulnerabilities in real-time. This shift enables organizations to prepare for attacks before they happen, leveraging behavioral analytics and high-speed correlation to stay ahead of potential breaches. Hardware acceleration and software development, guided by AI, are now setting the pace for how cybersecurity evolves across the industry.
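As a rough illustration of the behavioral analytics described above, a baseline-deviation check can flag an entity (a host, a user, an API key) whose event rate drifts far from its own history. This is a minimal sketch of the general z-score technique, not any vendor's detector; the threshold is an assumption:

```python
from statistics import mean, stdev

def is_anomalous(history, current, z_threshold=3.0):
    """Flag `current` (e.g. logins per minute for one host) if it deviates
    more than `z_threshold` standard deviations from its own baseline."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return current != mu  # flat baseline: any change is notable
    return abs(current - mu) / sigma > z_threshold
```

Production systems layer many such per-entity baselines and correlate them at high speed; the core idea, though, is exactly this comparison of observed behavior against a learned norm.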

The Impact of Cybersecurity
Unsurprisingly, automotive enterprises are becoming high-value targets for cybercriminals. Three core factors contribute to this trend: the financial opportunity of holding connected services hostage, the complexity of digital supply chains, and the vast amount of sensitive data being generated. With every vehicle connected to cloud-based services, a single breach can have wide-ranging brand, operational, and financial repercussions. Moreover, the ecosystem of third-party vendors involved in producing autonomous and electric vehicles significantly expands the attack surface.

The use of digital twins and advanced manufacturing technologies further intensifies the volume of valuable data. This information, ranging from user behavior patterns to proprietary designs, is not only attractive to attackers but can also become a tool for launching future attacks or be sold on the dark web.

AI Transformations in the Automotive Supply Chain
AI is also transforming the automotive supply chain. Predictive maintenance, for example – as opposed to the scheduled or reactive vehicle maintenance that until now has been the norm – enables companies to forecast part failures, optimize distribution, and reduce warehousing costs. AI can analyze and synthesize so many data streams that failure forecasting becomes far more accurate. Not only does this mean more reliable vehicles for the consumer, but it also means that each element of demand can be optimized.
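The predictive-maintenance idea can be reduced to its simplest form: fit a trend to a wear indicator and extrapolate to the failure threshold. Real systems use far richer models; this least-squares sketch, with invented names, just shows the shape of the forecast:

```python
def predict_failure_time(readings, times, limit):
    """Least-squares linear fit of sensor wear vs. time, extrapolated
    to the failure threshold `limit`. Returns the projected time at
    which wear crosses the limit, or None if there is no upward trend."""
    n = len(times)
    t_bar = sum(times) / n
    y_bar = sum(readings) / n
    num = sum((t - t_bar) * (y - y_bar) for t, y in zip(times, readings))
    den = sum((t - t_bar) ** 2 for t in times)
    slope = num / den
    if slope <= 0:
        return None  # part is not degrading; nothing to schedule
    intercept = y_bar - slope * t_bar
    return (limit - intercept) / slope
```

Scheduling the replacement just before the projected crossing is what converts guesswork into an optimized maintenance and parts-distribution plan.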

Driver behavior analysis and in-cabin monitoring systems powered by AI are also enhancing safety, particularly for long-haul truck drivers exposed to risks such as fatigue and theft. These AI-powered innovations are already helping companies reduce operational costs while improving customer satisfaction.

Strengthening security across the supply chain means embedding real-time monitoring, mapping data flows, and building a fast, coordinated response to incidents. The introduction of cyber resilience principles encouraged by regulatory bodies requires organizations to maintain robust and sustainable response mechanisms. AI can help with this.

AI’s Role in Automotive Cybersecurity
The future of AI in automotive cybersecurity lies in its ecosystem-wide integration. Multimodal AI models that can process text, images, and design data are already in use. But the next phase involves combining internal and external intelligence to strengthen risk postures. Synthetic data created specifically to train internal models without exposing real user data is becoming an important asset in speeding up AI development while preserving privacy.

The impact of AI can be summarized as transformative, dual-edged, and adaptable. It is enhancing cybersecurity readiness, being weaponized by attackers, and empowering businesses to evolve quickly in a changing environment. As the Middle East embraces connected mobility and smart transportation, the conversation must move beyond adopting AI to implementing it securely and intelligently. The road to the future may be autonomous, but its success will hinge on cybersecurity built for adaptability, speed, and scale.

AmiViz to Show Off the “Future of Cybersecurity” at GISEC 2025
https://securityreviewmag.com/?p=28081 – Tue, 22 Apr 2025

AmiViz is set to participate in GISEC Global 2025, the region’s premier cybersecurity event, taking place from May 6–8 at the Dubai World Trade Centre. Visitors can find AmiViz at Stand B180 in Hall 5, where the company will spotlight a powerful lineup of cybersecurity solutions tailored to address today’s most pressing digital threats.

At this year’s event, AmiViz will be joined by six leading technology partners, each bringing unique capabilities to the cybersecurity ecosystem:

  1. Sysdig – Real-Time Security for Cloud and Containers
  2. Threatcop – People Security Management
  3. Bitsight – Cyber Risk Management
  4. ExtraHop – Cloud Native Network Detection and Response
  5. RunZero – Total Attack Surface & Exposure Management
  6. YesWeHack – Global Bug Bounty and Vulnerability Management Platform

These technologies span the full cybersecurity spectrum—from securing cloud-native environments and managing cyber risk to strengthening human defenses and identifying vulnerabilities before they can be exploited.

AmiViz’s participation underscores its ongoing commitment to empowering regional enterprises with comprehensive cybersecurity strategies. Visitors to the stand will have the opportunity to explore each of the featured solutions, engage directly with product experts, and attend insightful presentations on how these tools can be integrated into their existing security frameworks.

“GISEC is a cornerstone event for cybersecurity in the region,” said Ilyas Mohammed, Chief Operating Officer of AmiViz. “It provides a unique platform for us to showcase our growing portfolio of innovative technologies and to connect with decision-makers looking to enhance their cyber resilience. This year, our focus is on helping organizations build proactive, integrated defenses that are future-ready.”

Positive Technologies to Highlight AI Cyber Threats and Defense at GISEC 2025
https://securityreviewmag.com/?p=28075 – Tue, 22 Apr 2025

Positive Technologies is joining GISEC Global 2025, one of the largest cybersecurity and technology exhibitions in the Middle East, on May 6–8 in Dubai. At the Positive Technologies booth (D 90, Hall 7), in-house experts will share their expertise in application security, industrial cybersecurity, and detection of cyberattacks in network traffic using PT Network Attack Discovery. The Positive Technologies team will also host workshops in the Hack-O-Sphere zone.

“Multiple countries in the Middle East have made significant strides in cybersecurity. However, organizations in the region remain an attractive target for cybercriminals, as our research shows,” says Ilya Leonov, Regional Director for MENA, Positive Technologies. “At GISEC Global 2025, we will focus on application security (AppSec) and operational technology security (OT security). Our team will share best practices for using PT Network Attack Discovery, which detects cybercriminal activity in the network traffic and also aids in incident investigation and proactive threat hunting. We’ll also be talking about a range of our other products and solutions to help you get real value from your cybersecurity investments. Additionally, our experts will demonstrate sophisticated attack methods and explain how to defend against them.”

Visitors to the Positive Technologies booth will have the opportunity to observe offensive security specialists simulating DMA attacks, using various devices to bypass defenses and gain access to valuable information. An accessible and user-friendly tool for chip security analysis will also be presented to GISEC participants. This tool, which simulates fault injection attacks, will be demonstrated in action, and the Positive Technologies team will deliver a workshop for cybersecurity professionals.

Positive Technologies will also be organizing four activities in the Hack-O-Sphere zone. At Fixathon, guests will have the opportunity to test their ability to fix code vulnerabilities and sharpen their secure development skills. The second activity is dedicated to steganography: guests will be encouraged to find words encrypted in the works of renowned artists and get acquainted with this fascinating method of information transmission. At the workshop on hacking devices, participants will learn how attackers exploit physical access vulnerabilities and how to defend against such attacks. At the soldering workshop, participants will have the opportunity to craft a useful mini-gadget.

AI Will Introduce New Threats as LLMs Take Over Automated Systems
https://securityreviewmag.com/?p=28033 – Mon, 07 Apr 2025

Chester Wisniewski, Director and Global Field CTO at Sophos, says criminals are using AI almost exclusively for social scams and the social aspects of traditional attacks.

How is generative AI being utilised to enhance cybersecurity measures today?
AI brings a wide variety of advantages to cybersecurity: automation, speed, scalability, enhanced detection, and generalisability. Without AI, rule-based systems need immense manual upkeep to handle the scale of modern threats. AI models can generalise by learning relationships between any number of potentially hundreds of features, while human analysts cannot write such complex rules. AI does, however, stand to introduce new threats as large language models take over automated systems.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
For the most part, criminals are using AI almost exclusively for social scams and the social aspects of traditional attacks. AI allows for accurate translation at scale, which dramatically increases the quality of social scams. It can also be used to create high-quality phishing emails that are indistinguishable from the real thing.

AI chatbots are also very useful for initiating conversations with potential victims and setting the hook. Once a victim has been captured, humans usually take over but can still use AI to help with translation and grammar. One additional area where AI might be useful is in assessing the value of large volumes of stolen data. Using AI, a criminal might more quickly identify high-value data and either sell it at a premium or use it as an extra pressure mechanism against the victim.

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
In most applications there aren’t many ethical concerns. Clearly using AI to generate malicious code or to gather open source intelligence should be done with caution, but most cybersecurity applications don’t involve many ethical dilemmas.

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Two primary concerns come to mind. First, when you are using generative AI to help you write code, you must do very thorough reviews to be sure you are not introducing vulnerabilities. Generative AI has been known to make up the names of libraries that don’t exist or recommend code snippets containing basic programming mistakes, like allowing SQL injection or buffer overflow attacks. Second, we must verify the outputs when it really matters. Mild inaccuracies frequently don’t matter, but in circumstances of great importance we have to double-check the outputs to ensure the accuracy of the results.
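The SQL-injection mistake mentioned here can be made concrete. The first function below shows the vulnerable string-interpolation pattern that generated code sometimes contains; the second shows the standard parameterized fix, using Python's built-in sqlite3 driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

def find_user_unsafe(name):
    # The kind of snippet generative AI has been known to emit:
    # string interpolation lets "' OR '1'='1" match every row.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver binds the value, so the
    # injection payload is treated as a literal string.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [('alice',)] -- injection succeeded
print(find_user_safe(payload))    # [] -- payload matched nothing
```

This is exactly the class of bug a thorough review (or a static analyzer) should catch before AI-assisted code ships.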

Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Not that I am aware of. Traditional machine learning and neural-network malware detection models prevent attacks around the clock, but I am not aware of generative AI being used in this way to date.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
I think the real promise is in alert triage and language translation capabilities. Of course, these technologies are available now from ourselves and other vendors, but as these capabilities mature, they will become increasingly important for smart automation and aiding human analysts. We are also likely to see AI-automated discovery of bugs in code before it ships to customers, preventing vulnerabilities, and improved detection of targeted phishing attacks in email solutions.

What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
This is critically important. The machines are excellent at processing vast amounts of data and helping make sense of it, but they lack intuition, creativity, and context. Humans can take this reduced flow of information and add that intelligence to achieve superior outcomes.

How can smaller organisations with limited budgets incorporate generative AI for cybersecurity?
Most smaller organisations will benefit from AI through its integration into their existing tools and through their service providers. Much of the efficiency gained by smart applications of this technology will allow for more affordable services from security providers and easier-to-use tools.

What best practices would you recommend for implementing generative AI tools while minimising risks?
If you are using AI models hosted in public clouds or by service providers, exercise caution not to process sensitive information with these tools. Risks can be minimised by choosing providers in countries with privacy laws in line with your responsibilities, but caution should still be exercised. For the most sensitive types of information, it is best to host models on-premises or in a private cloud instance that is not shared with other tenants.
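A minimal pre-processing pass of the kind implied here strips obviously sensitive patterns before any text leaves your environment. The regexes below are illustrative assumptions only; production redaction needs a vetted PII-detection library and a data-classification policy:

```python
import re

# Illustrative patterns only: real deployments cover far more categories
# (names, addresses, secrets) and use tested detection libraries.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches with a typed placeholder before the text is
    sent to an externally hosted AI model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Running prompts through such a filter at the boundary means the hosted model sees placeholders rather than customer identifiers, which limits the damage if the provider logs or retains inputs.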

Generative AI Can Automate the Creation of Malware Variants
https://securityreviewmag.com/?p=28022 – Thu, 03 Apr 2025

Ivan Milenkovic, Vice President of Cyber Risk Technology, EMEA at Qualys, says that as much as generative AI can fortify security, it equally arms malicious actors with new tools.

How is generative AI being utilized to enhance cybersecurity measures today?
Today, generative AI is used to bolster cybersecurity defences in a multitude of ways. It automates mundane tasks, sifting through vast data logs to identify potential vulnerabilities and weed out false positives (Gartner, 2021). More impressively, generative AI can predict emerging threats by simulating attack scenarios, helping teams spot anomalies before they escalate (Mandiant, 2022).

Compared with older rule-based systems, these AI models adapt in real time, learning from both benign and malicious activity to create dynamic defence postures. A notable example is Darktrace’s “Antigena” product, which uses self-learning AI to detect abnormal network behaviours. In 2018, it reportedly thwarted an insider threat by flagging unusual data transfers in a UK-based financial services firm (Darktrace, 2018). The technology reduced the manual workload on analysts by automating front-line triage, freeing human experts to focus on higher-level investigations.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
As much as generative AI can fortify security, it equally arms malicious actors with new tools. Sophisticated attackers are already deploying adversarial machine learning to bypass detection (Goodfellow et al., 2014) and using deepfakes to manipulate social engineering scams. One infamous example involved fraudsters using deepfake voice impersonation of a CEO to authorise a fraudulent wire transfer of approximately €220,000 from a UK-based energy firm in 2019 (Wall Street Journal, 2019).

This dark side underscores why cybersecurity leaders must remain vigilant. Generative AI can automate the creation of malware variants, obfuscate malicious code, or create entire networks of bot accounts capable of launching coordinated attacks (ENISA Threat Landscape, 2021). These challenges highlight the need for organisations to keep their AI defences on par with adversarial AI capabilities.

How can organizations leverage generative AI for proactive threat detection and response?
Given the growing dangers, organisations are increasingly using generative AI for proactive threat hunting. By training models on historical attack datasets, security systems can anticipate emerging vulnerabilities, formulate defensive strategies, and even recommend immediate containment measures (IBM X-Force Threat Intelligence Index, 2022). Generative AI excels at pattern recognition, which — when combined with behavioural analysis — helps security teams detect anomalies that conventional defences might miss.

Several Fortune 500 companies have begun deploying AI-driven “red team” exercises using synthetic data to simulate real attacks (Ponemon Institute, 2022). By synthesising new attack variants, these organisations can better train their detection algorithms and prepare incident response teams for novel threat scenarios.

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
A critical ethical question arises when deploying powerful AI tools for cybersecurity: Where do we draw the line between data-driven intelligence and intrusive surveillance? Privacy concerns loom large, particularly when AI systems process personal information to identify potential insider threats (NIST SP 800-53, 2020). It is essential that organisations establish transparent governance structures, involving cross-functional teams from legal, compliance, and human resources.

These frameworks should clarify data usage policies, ensure algorithmic fairness, and reinforce accountability (European Commission, 2021; EU AI Act, 2024). Treating user data with respect whilst maintaining robust defences is not just a matter of compliance; it’s a moral imperative that, if neglected, can damage trust irreparably.

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Despite the allure of next-generation solutions, cybersecurity teams often face significant hurdles when incorporating generative AI. Firstly, there is the matter of technical complexity. Building models that accurately understand and adapt to evolving threats requires specialised expertise and substantial computational resources (Gartner, 2021). Secondly, legacy systems are mostly ill-equipped to handle the high data throughput AI demands, leading to integration bottlenecks (Mandiant, 2022). Then there is the problem of inflated expectations. The hype around AI can cause organisations to invest in poorly scoped projects, hampering returns and morale (Ponemon Institute, 2022).

To combat these issues, teams should conduct thorough proofs of concept and collaborate with experienced data scientists to align capabilities with organisational needs.

Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Several case studies highlight the growing success of generative AI in thwarting attacks. Darktrace reported detecting anomalous “beacon” traffic months before a known banking Trojan was publicly identified (Darktrace, 2019). Meanwhile, a large financial institution in Asia leveraged AI-driven user behaviour analytics (UBA) to pinpoint a suspicious spike in credential escalations, uncovering an elaborate insider threat that might otherwise have slipped under the radar (IBM, 2020). These incidents illustrate the transformative power of AI when integrated thoughtfully with security operations.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
Over the coming years, generative AI is expected to mature into an even more intuitive and autonomous guardian. As data collection methods expand and computational power grows (Ponemon Institute, 2022), AI models will become more adept at detecting zero-day exploits and adapting, on the fly, to novel attack techniques. Widespread adoption of AI systems that interact seamlessly with security analysts will facilitate real-time recommendations, and “self-healing” networks capable of automated patching are likely to become mainstream (Gartner, 2021).

However, we should brace for an escalation in AI-enabled cyberattacks as well, from near-perfect deepfakes to far better personalised targeted attacks. This unfolding arms race underscores the importance of continuous innovation and collaboration between industry, academia, and government (ENISA Threat Landscape, 2021).

What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
Human-in-the-loop oversight remains indispensable. Even the most advanced AI systems can produce false positives or overlook subtleties requiring human judgement (European Commission, 2021). Skilled analysts, especially those with deep domain knowledge, are needed to validate AI-driven alerts, fine-tune learning models, and account for socio-political contexts.

As a result, AI should be viewed as an extension of human capabilities rather than a replacement. A balanced combination of machine efficiency and human intuition produces the most effective security outcomes (Mandiant, 2022). Lastly, let's not forget that emerging legislation (the EU AI Act, for example) may insist on human decisions for certain privacy-critical aspects.

How can smaller organizations with limited budgets incorporate generative AI for cybersecurity?
Budget constraints need not bar smaller organisations from leveraging generative AI. A pragmatic step is to use cloud-based security tools with built-in AI features, offsetting the cost of on-premises infrastructure (Microsoft Azure Security Centre, 2021). Partnerships with managed service providers can also help smaller entities develop tailored AI strategies.

Starting with low-complexity use cases, such as automated phishing detection, can yield quick wins and free up resources to invest in more advanced capabilities. By focusing on modular, scalable solutions, smaller organisations can gradually expand their AI footprint without jeopardising financial stability.
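Automated phishing detection is a good first target precisely because even a crude version is easy to stand up. The sketch below uses a handful of hand-picked keyword weights; these signals and thresholds are illustrative assumptions, and a production system would learn them from labelled mail rather than hard-code them.

```python
import re

# Illustrative keyword weights for a first-pass phishing filter
SIGNALS = {
    r"verify your account": 2,
    r"urgent": 1,
    r"password": 1,
    r"click (here|below)": 2,
    r"wire transfer": 2,
}

def phishing_score(email_text, flag_at=3):
    """Sum the weights of matched signals; flag the mail at `flag_at`."""
    text = email_text.lower()
    score = sum(w for pat, w in SIGNALS.items() if re.search(pat, text))
    return score, score >= flag_at

print(phishing_score(
    "URGENT: verify your account now, click here to reset your password"
))  # (6, True)
```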

What best practices would you recommend for implementing generative AI tools while minimizing risks?
To implement generative AI responsibly, organisations should follow established industry good practice; NIST SP 800-53 is a good example. The basic steps should not be news to cybersecurity professionals:

  1. Establish a clear governance framework that outlines AI deployment goals, data usage policies, and oversight responsibilities.
  2. Invest in robust training datasets to mitigate bias and ensure the AI can accurately detect real threats.
  3. Enforce rigorous testing and validation procedures, including adversarial testing to identify potential exploits.
  4. Maintain audit logs and version-control for the AI models, enabling swift rollback if necessary.
  5. Finally, foster a culture of transparency by openly communicating to stakeholders how and why AI is used within the security apparatus.
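Step 4 in particular is cheap to prototype. The toy registry below, with hypothetical class and field names, hashes every deployed model version and records each change in an append-only log, so a bad update can be audited and rolled back:

```python
import hashlib
import datetime

class ModelRegistry:
    """Toy model registry: hash every deployed version and keep an
    append-only audit log so changes can be traced and rolled back."""
    def __init__(self):
        self.versions = []   # (version, sha256) in deployment order
        self.audit_log = []

    def deploy(self, version, model_bytes):
        digest = hashlib.sha256(model_bytes).hexdigest()
        self.versions.append((version, digest))
        self.audit_log.append({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "event": "deploy", "version": version, "sha256": digest,
        })

    def rollback(self):
        retired, _ = self.versions.pop()
        self.audit_log.append({"event": "rollback", "version": retired})
        return self.versions[-1][0]  # the version now active again

reg = ModelRegistry()
reg.deploy("v1", b"weights-v1")
reg.deploy("v2", b"weights-v2")
print(reg.rollback())  # v1
```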
AI Has Lowered the Barrier to Entry Into Cybercrime https://securityreviewmag.com/?p=28018 Thu, 03 Apr 2025 14:44:10 +0000 https://securityreviewmag.com/?p=28018 Kalle Bjorn, Sr Director, Systems Engineering – Middle East, Fortinet, says cybersecurity is a strategic enabler for realizing the full potential of AI

How is generative AI being utilized to enhance cybersecurity measures today?
As today’s network complexity grows, so does the need for intelligent tools that can simplify management tasks and enhance efficiency. Generative AI (GenAI) has become a cornerstone of that effort, and it can genuinely transform how Day 0 to Day 2 network operations are performed.

According to Gartner research, by 2026, GenAI technology is expected to influence 20% of initial network configuration, a dramatic rise from virtually none in 2023. Currently, 65% of network activities, including configuration and troubleshooting, are still performed manually, highlighting a significant opportunity for automation and efficiency improvements through GenAI.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
AI has become a double-edged sword for cybersecurity. On the one hand, it has lowered the barrier to entry into cybercrime, enabling would-be criminals to generate malware even when they lack programming skills and providing more sophisticated criminals with capabilities few could have imagined a short time ago. On the other hand, cyber defenders can take advantage of AI for intelligent automation and defense strategies.

Last year, global leaders raised this issue with the World Economic Forum’s Centre for Cybersecurity, with the aim of helping organizations everywhere to better comprehend the cybersecurity implications of using AI technologies and how to adopt these offerings securely. As a result of these discussions, the World Economic Forum launched its AI and Cyber Initiative to develop guidance for organizations to manage the complex cyber risks associated with AI use.

Understanding and implementing risk management measures positively impacts more than just an enterprise’s cyber resilience. Cybersecurity is a strategic enabler for realizing the full potential of AI. By embedding security into AI systems from the ground up, organizations transform risk mitigation into a competitive advantage, ensuring trustworthiness and ethical compliance.

How can organizations leverage generative AI for proactive threat detection and response?
GenAI can analyze massive data streams, recognize patterns, and deliver actionable intelligence in real-time. It also offers advanced scripting assistance, proactive troubleshooting and IoT vulnerability diagnostics, and automated implementation of AI-recommended remediations, leading to a more secure, efficient, resilient network and, eventually, an autonomous network. FortiAI for FortiManager is revolutionizing network management by integrating GenAI to do just this. FortiAI provides rapid insights into vulnerabilities, quarantining risky IoT devices and helping organizations stay ahead of potential threats.
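The quarantine decision itself can be reduced to a transparent risk score. The sketch below is a generic illustration with invented field names and weights; it is not FortiAI's actual logic or API.

```python
def quarantine_candidates(devices, max_risk=0.7):
    """Return IDs of devices whose combined risk signals cross the threshold."""
    flagged = []
    for d in devices:
        risk = 0.0
        if d["firmware_outdated"]:
            risk += 0.4
        if d["open_telnet"]:
            risk += 0.4
        # cap the contribution of noisy connection counts
        risk += min(d["anomalous_connections"], 10) * 0.05
        if risk >= max_risk:
            flagged.append(d["id"])
    return flagged

devices = [
    {"id": "cam-01", "firmware_outdated": True,  "open_telnet": True,  "anomalous_connections": 2},
    {"id": "hvac-7", "firmware_outdated": False, "open_telnet": False, "anomalous_connections": 1},
]
print(quarantine_candidates(devices))  # ['cam-01']
```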

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
AI presents a multitude of perspectives and ethical considerations, particularly concerning its development, deployment, and economic ramifications.

To ensure explainability and accountability in AI-driven security decision-making, security teams should opt for transparent AI models whose decision-making processes can be understood and audited by human experts. Organisations should also implement robust validation and testing, rigorously exercising AI models with diverse datasets to identify and mitigate biases or inaccuracies, and follow any local and global regulations around AI.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
As we continue to imagine AI in every aspect of cybersecurity, we’re witnessing a revolution that’s reshaping the industry, making it more proactive, responsive, and adaptive than ever before.

GenAI offers transformative potential, as demonstrated by Klarna’s AI Assistant, which now handles the workload equivalent to 700 customer service agents. For Klarna, this translates into an estimated $40 million in annual savings, showcasing AI’s ability to enhance productivity and reduce operational costs.

It is widely acknowledged that AI will have a profound impact on everyday life, though the precise nature and trajectory of this impact remain difficult to predict. Nonetheless, it is imperative that we adopt a forward-thinking approach to understanding and harnessing AI’s potential, ensuring that its development is aligned with societal benefit and economic sustainability. By prioritizing cybersecurity, organizations can protect their investments in AI, supporting innovation while strengthening defenses.

What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
Automation is particularly important in cybersecurity given the ongoing shortage of expert security staff. However, human oversight will still be important. Security teams will need to be equipped with the knowledge and skills to understand, interpret, and manage AI-driven security systems effectively. The Fortinet Training Institute recently added two new AI-focused modules to its Security Awareness and Training service to enhance learners’ understanding of AI and the role this technology plays in cybersecurity.

AI excels at tactical responses based on predefined rules. However, defining security policies, understanding risk tolerance, and making strategic decisions still require human expertise and intuition. Analysing new and evolving threats, understanding their potential impact, and developing innovative countermeasures will also still require human intelligence and creativity.

Generative AI: The Game-Changer Transforming Cybersecurity in a Rapidly Evolving Threat Landscape https://securityreviewmag.com/?p=27997 Fri, 28 Mar 2025 08:05:38 +0000 https://securityreviewmag.com/?p=27997 Fernando Cea, VP of Technology for New Markets at Globant, champions the integration of generative AI into cybersecurity as a transformative approach in tackling today’s rapidly evolving threat landscape

How is generative AI being utilized to enhance cybersecurity measures today?
The cybersecurity industry is at an inflection point. With the total addressable market expected to soar to $1.5–$2.0 trillion—nearly 10x the size of the current vended market—there’s no room for complacency. Generative AI isn’t just a tool; it’s a force multiplier. At Globant, we’re integrating Gen AI into the heart of cybersecurity, enabling systems to not only identify anomalies faster but to predict them—before they strike.

Simulation and the evolution of GenAI models go hand in hand, and that is exactly where we are heading. We’re seeing AI models automatically generate threat intelligence reports, simulate attacks to test system resilience, and dynamically rewrite defensive code in real time. According to a recent IBM study, organizations using AI and automation in security saw a 108-day shorter breach lifecycle and saved an average of $1.76 million per breach. But here’s the reality: Gen AI is also arming the attackers. The only way to keep up is to fight AI with AI.

How can organizations leverage generative AI for proactive threat detection and response?
Proactive cybersecurity isn’t just about building walls—it’s about anticipating the breach before it happens. Gen AI enables organizations to shift from reactive playbooks to predictive defense. GenAI is particularly good at creating scenarios and synthetic data, and this capability has introduced a new approach to problem solving. We’re helping clients in highly sensitive industries—from government to finance to entertainment—deploy AI agents that constantly scan internal and external networks, flag anomalous behavior in milliseconds, and autonomously deploy countermeasures before human analysts are even alerted.
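Synthetic data of the kind described here can be generated with a few lines of code. The sketch below fabricates authentication events for exercising detection rules; every user, address, and rate is invented, so no real data is involved.

```python
import random

def synthetic_auth_logs(n, fail_rate=0.3, seed=42):
    """Generate n fake authentication events with a fixed failure rate.
    A fixed seed keeps the dataset reproducible across test runs."""
    rng = random.Random(seed)
    users = ["alice", "bob", "carol"]
    return [{
        "user": rng.choice(users),
        "src_ip": f"10.0.{rng.randint(0, 255)}.{rng.randint(1, 254)}",
        "result": "FAIL" if rng.random() < fail_rate else "OK",
    } for _ in range(n)]

print(len(synthetic_auth_logs(5)))  # 5
```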

This isn’t theoretical—it’s happening now. Imagine a Gen AI model that learns from every attempted breach across an ecosystem, feeding insights into your security fabric in real time. It’s like having a red team and blue team working together 24/7, learning from each other, and never sleeping.

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Let’s not sugarcoat it—integrating Gen AI into cybersecurity isn’t plug-and-play. It demands new skillsets, new mindsets, and a willingness to break the old model. One of the biggest challenges is explainability. Gen AI models often operate as black boxes, which makes it hard for CISOs and security teams to justify actions to regulators or internal stakeholders.

There’s also the risk of AI-generated false positives that can overwhelm analysts or, worse, generate blind spots. Then there’s trust: many organizations are hesitant to hand over critical security operations to a machine. And rightfully so. The risk is real—but the risk of standing still is even greater.

Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Yes—and they’re multiplying. One case we’re particularly proud of involved a large-scale financial institution that faced a rising wave of phishing attacks using deepfake content. Traditional rule-based systems failed to detect the nuances, but a Gen AI-powered detection layer flagged irregular tone and semantic drift in emails and voice transcriptions in real time. The system prevented a multi-million-dollar breach.

Another example: a digital media platform we work with experienced a zero-day exploit attempt. Our AI models, trained on synthetic attack data, recognized the pattern within seconds and auto-isolated the affected microservice—without any human intervention. These aren’t just success stories. They’re proof that Gen AI can move faster than the adversary.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
The future of cybersecurity will be defined by cyber resilience—not just fortification. And Gen AI will be at the core. In the next three to five years, we expect to see fully autonomous security orchestration platforms powered by Gen AI that adapt and evolve without manual configuration. Think of them as living, breathing digital immune systems—capable of learning, mutating, and healing themselves. But there’s also a dark side.

Nation-states and cybercrime syndicates will weaponize Gen AI to launch attacks at unprecedented scale and sophistication. Deepfakes, synthetic identities, and AI-generated malware will become the norm.

Can AI Outsmart Hackers? How Generative AI is Reshaping Cybersecurity https://securityreviewmag.com/?p=27991 Thu, 27 Mar 2025 15:09:47 +0000 https://securityreviewmag.com/?p=27991 As generative AI transforms cybersecurity into an AI-versus-AI battleground, organizations must navigate both its defensive potential and emerging risks. We spoke with Ramprakash Ramamoorthy, Director of AI Research at Zoho, about how this technology is reshaping threat detection, automating responses, and even being weaponized by attackers. From real-world attack prevention to ethical implementation challenges, Ramamoorthy shares critical insights on leveraging generative AI effectively while mitigating its dangers in our increasingly digital world

How is generative AI being utilized to enhance cybersecurity measures today?
Generative AI has changed the way cybersecurity operates today. It has not only automated tasks but also streamlined workflows, improved threat detection, and been used to simulate attacks to gauge how proactively an organization can respond to cyber threats. Unlike traditional static thresholds that require constant human vigilance, Generative AI adapts dynamically, learning from vast data volumes to stay ahead of evolving attacks.

This makes it highly effective in identifying zero-day vulnerabilities and sophisticated threats. Moreover, Generative AI streamlines incident response by generating detailed reports, suggesting mitigation steps, and even creating code patches to address security gaps. Its ability to analyse patterns, predict risks, and automate defensive actions has made Generative AI an important tool against modern cybersecurity threats.

What potential risks does generative AI introduce in the cybersecurity landscape, such as AI-driven cyberattacks?
Generative AI evolved to make things easier, but it has also become a powerful ally to bad actors. Cyber attackers use Gen AI to create highly convincing phishing emails, fake websites, and deepfakes to deceive users and steal information. It has also led to the development of sophisticated malware that bypasses traditional security defences, putting non-digitized enterprises at higher risk.

Gen AI can also generate synthetic malware samples, which, while useful for security testing, can also be exploited to bypass detection. Large-scale attacks can also be deployed with ease, as attackers can automate malware creation. Datasets containing sensitive information can expose AI models to risks like manipulation and data theft. Additionally, biased models may result in inaccurate threat detection, further complicating cybersecurity efforts.

How can organizations leverage generative AI for proactive threat detection and response?
Generative AI offers a significant advantage in analysing large volumes of data, helping to identify anomalies in real time and shrink the window of vulnerability. Its advanced pattern-recognition capabilities help organizations proactively identify threats, provide prescriptive insights, and safeguard the organization by adapting to new thresholds. By simulating realistic cyberattacks, generative AI can also test the effectiveness of defence systems, ensuring they are prepared for real-world scenarios.

As organizations increasingly migrate to cloud environments, new security risks emerge, making Gen AI-driven solutions essential. Gen AI can strengthen Identity and Access Management (IAM) by identifying weaknesses in authentication systems, a common target for cybercriminals, and recommending preventive measures. By combining proactive threat detection, adaptive defence mechanisms, and improved IAM strategies, organizations can build a more resilient security framework against evolving cyber threats.
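An IAM weakness scan of the sort described can start very simply: flag accounts that lack MFA or carry stale passwords. The field names below are illustrative assumptions, not taken from any specific IAM product.

```python
def weak_auth_accounts(accounts, max_age_days=90):
    """Map each weak account name to its list of authentication issues."""
    issues = {}
    for acct in accounts:
        problems = []
        if not acct["mfa_enabled"]:
            problems.append("no-mfa")
        if acct["password_age_days"] > max_age_days:
            problems.append("stale-password")
        if problems:
            issues[acct["name"]] = problems
    return issues

accounts = [
    {"name": "svc-backup", "mfa_enabled": False, "password_age_days": 400},
    {"name": "jdoe",       "mfa_enabled": True,  "password_age_days": 30},
]
print(weak_auth_accounts(accounts))  # {'svc-backup': ['no-mfa', 'stale-password']}
```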

What ethical concerns arise when using generative AI in cybersecurity, and how can they be addressed?
Using generative AI in cybersecurity comes with important ethical considerations that organizations must address. One key concern is bias, where AI models may unfairly target certain behaviors or user profiles due to biased training data. To prevent this, businesses should use diverse datasets and regularly audit their models. Privacy is another major challenge, as AI systems often analyze large volumes of sensitive information. Strong data encryption, anonymization, and strict access controls can help keep this data secure.

There’s also the issue of accountability, especially when AI is making critical security decisions. Incorporating Human-in-the-Loop (HITL) practices ensures human oversight, adding a layer of responsibility and judgment where needed. Finally, transparency is crucial where AI systems should explain their decisions clearly, allowing security teams to trust and understand the reasoning behind each action.

What challenges do cybersecurity teams face when integrating generative AI tools into their workflows?
Integrating Gen AI into cybersecurity workflows presents several challenges. Bias lingering in the models can lead to flawed threat detection, causing false positives that disrupt operations. Adversarial attacks pose another risk, where attackers manipulate data to trick AI models into overlooking malicious activity. Data manipulation is a major concern, as corrupted training data can compromise model accuracy and create security gaps.

Integration challenges may arise when adapting AI tools to legacy systems, requiring significant resources and adjustments; a digitally mature organization will find it far easier to fold Gen AI into its operations. Furthermore, complying with data privacy regulations while using AI models adds another layer of complexity. Finally, cybersecurity professionals must continuously update and train AI models to stay effective against evolving threats. Overcoming these challenges requires careful implementation, ongoing monitoring, and collaboration between AI experts and security teams to maximize the benefits of Gen AI tools.

Are there any notable examples of generative AI successfully preventing or mitigating cyberattacks?
Generative AI has proven highly effective in preventing and mitigating cyberattacks through innovative applications. By autonomously analysing large datasets, it can identify threats in real-time, flagging phishing attempts and isolating malicious emails before they reach employees, ultimately preventing potential financial losses. In one notable case in 2023, AI-driven threat intelligence successfully detected a major phishing campaign, saving businesses millions by stopping breaches before they occurred.

Generative AI’s predictive capabilities also allow organizations to simulate potential attacks and refine their defences. For instance, a financial institution used AI to anticipate a zero-day attack, enabling them to prevent a breach that could have exposed sensitive customer data. By combining real-time detection, automated responses, and predictive modelling, Gen AI significantly enhances cybersecurity efforts, helping organizations stay one step ahead of evolving threats.

How do you see generative AI evolving in the cybersecurity domain over the next few years?
Generative AI will significantly reshape cybersecurity in the coming years. As cyber threats grow more sophisticated, Gen AI will enhance proactive defence strategies by improving anomaly detection, threat prediction, and automated response systems. By being more context aware, Gen AI can distinguish between normal behaviour and subtle attack patterns with increased accuracy. Gen AI coupled with AI Agents can analyse vast data patterns, identify suspicious behaviour, and act swiftly to avoid potential attacks.

AI-driven deception techniques, such as creating realistic decoy assets or fake data, will become more advanced to mislead attackers. However, as AI strengthens security defences, cybercriminals are also expected to use Gen AI to create convincing phishing scams, deep fakes, and adaptive malware.

What role does human oversight (HITL) play in ensuring generative AI systems are effectively managing cybersecurity threats?
Generative AI systems are powerful at processing vast amounts of data, detecting anomalies, and automating responses, but they can’t do it alone. Human expertise plays a crucial role in interpreting results, validating decisions, and tackling complex, out-of-the-box scenarios. While Gen AI acts as a protective shield, humans step in to handle the tougher security challenges. For a seamless and secure workplace, both must work together.

Humans guide AI to make fair and ethical decisions, reducing bias and discrimination. When Gen AI explains its reasoning, it not only builds trust but also helps security teams learn from its decision-making process. By refining AI models, adjusting detection thresholds, and ensuring systems stay adaptive, humans keep Gen AI effective. In cases of adversarial attacks, where attackers manipulate AI models, human judgment is key to spotting suspicious patterns and strengthening defences. Together, Gen AI and human insight create a stronger, smarter cybersecurity strategy.
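The threshold-adjustment loop mentioned above can be made concrete with a deliberately simplified feedback rule, a sketch rather than any vendor's implementation: analyst-confirmed false positives nudge the alert threshold up, confirmed misses pull it back down.

```python
def tune_threshold(threshold, verdicts, step=0.02, lo=0.5, hi=0.99):
    """Adjust an alert threshold from a batch of analyst verdicts."""
    for v in verdicts:
        if v == "false_positive":
            threshold = min(hi, threshold + step)   # raise the bar: fewer alerts
        elif v == "missed_threat":
            threshold = max(lo, threshold - step)   # lower the bar: more alerts
    return round(threshold, 4)

# Three noisy alerts outweigh one miss, so the bar rises slightly
print(tune_threshold(0.80, ["false_positive"] * 3 + ["missed_threat"]))  # 0.84
```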

How can smaller organizations with limited budgets incorporate generative AI for cybersecurity?
Smaller organizations don’t require massive budgets to take advantage of generative AI for cybersecurity. Several cloud-based security tools now come with built-in AI features such as real-time threat detection and automated response, making them an affordable option. Open-source AI models can also help businesses improve security without hefty licensing fees.

These organizations can partner with Managed Security Service Providers (MSSPs) for cybersecurity, eliminating the need for in-house experts. Moreover, AI agents can handle monotonous tasks such as analysing logs, flagging unusual activity, and prioritising alerts. By combining budget-friendly Gen AI tools with human oversight and staff training, smaller businesses can strengthen their cybersecurity without going overboard on expenses.

What best practices would you recommend for implementing generative AI tools while minimising risks?
Generative AI tools can be implemented effectively with a cautious approach that minimises risk. Quality data and sound security practices are essential, so that the model is trained without biased data while sensitive information is protected from leaks or manipulation. It is also essential to incorporate Human-in-the-Loop (HITL) practices, allowing human oversight to validate AI decisions, reduce errors, and uphold ethical standards.

When handling critical data, strict access-control protocols should restrict any unauthorized use. Adversarial testing, a method for systematically probing an ML model, should be carried out regularly to spot vulnerabilities such as data poisoning or manipulation attempts before attackers exploit them. Continuous monitoring is essential for identifying performance issues, adapting to evolving threats, and maintaining the model’s accuracy over time. By combining these approaches, organizations can safely and effectively utilize Gen AI in their cybersecurity frameworks.
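Adversarial testing can be demonstrated end to end in a few lines: mutate a known-bad input and count how many variants slip past the detector. Both the detector and the mutations below are toy stand-ins chosen for illustration.

```python
def detect(text):
    """Stand-in detector: flags a single known-bad token."""
    return "invoice.exe" in text.lower()

def adversarial_variants(payload):
    """A few simple evasions an attacker might try."""
    yield payload.upper()                              # case flip
    yield payload.replace("invoice", "inv\u200boice")  # zero-width space
    yield "   " + payload                              # padding

payload = "please open invoice.exe"
survivors = [v for v in adversarial_variants(payload) if not detect(v)]
print(len(survivors))  # 1 -- the zero-width-space variant evades the check
```

Each surviving variant is a concrete gap to close before an attacker finds it.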
