Cyber Security
Is Artificial Intelligence a Boon or Bane for Cybersecurity?
Written by Sergey Belov, Head of Internal Security at Acronis
AI stands at the forefront of innovations in online safety in 2024, transforming how organisations detect, analyse and respond to threats. Businesses across the Middle East are increasingly investing in AI, recognising its potential to enhance operations and safeguard digital assets through advanced security and data protection measures. By leveraging AI’s processing power, companies can bolster their defences, preemptively addressing threats before they escalate. However, while Artificial Intelligence offers significant benefits to businesses worldwide, its adoption also escalates the risk of potential attacks and security vulnerabilities.
Here are some ways Artificial Intelligence acts for and against cybersecurity:
Artificial Intelligence for Cybersecurity
Studies show that 91% of businesses in the UAE have incorporated AI into their cyber safety strategies to address today's rise in threats.
- Threat detection and response: AI-enhanced security solutions utilise machine learning algorithms to analyse extensive data sets from various origins, aiding IT professionals in identifying and addressing cyber threats promptly.
- Behavioural analysis: Tools powered by AI for behavioural analysis monitor the actions of users and systems to detect anomalies that may signal potential security risks.
- Predictive analytics: AI and machine learning algorithms analyse past data to forecast future cyber protection trends and anticipate emerging threats.
- Automation of routine tasks: AI-driven automation technologies streamline everyday safety operations, including threat detection, incident response, and vulnerability management, allowing security teams to concentrate more on strategic activities.
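The behavioural-analysis idea above can be illustrated with a minimal sketch: learn a statistical baseline of normal user activity, then flag observations that deviate sharply from it. This is a toy z-score test on hypothetical login data, not any vendor's implementation; production systems use far richer models.

```python
# Minimal sketch of behavioural anomaly detection (hypothetical data):
# flag activity that deviates sharply from a learned baseline.
from statistics import mean, stdev

def find_anomalies(baseline, observed, threshold=3.0):
    """Return events whose value lies more than `threshold` standard
    deviations from the baseline mean (a simple z-score test)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [(event, value) for event, value in observed
            if sigma and abs(value - mu) / sigma > threshold]

# Baseline: a user's typical daily login counts.
baseline_logins = [4, 5, 3, 6, 5, 4, 5, 6, 4, 5]

# Observed events: (day, login count). Tuesday's spike stands out.
observed = [("Mon", 5), ("Tue", 48), ("Wed", 4)]

print(find_anomalies(baseline_logins, observed))  # → [('Tue', 48)]
```

Real AI-driven tools replace the z-score with learned models over many signals (login times, locations, access patterns), but the principle of comparing live behaviour against a baseline is the same.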
Artificial Intelligence Against Cybersecurity
As technology advances, so do the threats against it. In the Middle East, the cyber threat landscape is evolving rapidly, with targeted ransomware attacks, supply chain vulnerabilities and advanced phishing methods on the rise.
Recent data shows a 29.1% increase in malware detection and a 25.5% rise in blocked URLs, emphasising the urgency for robust security measures. To address these challenges, it is important to be aware of the threats that reside in new technologies and to identify the best plan of action against them.
- Bias and discrimination: AI algorithms are susceptible to bias and discrimination, potentially leading to inaccurate decisions and unintended consequences. To mitigate these risks, organisations must ensure AI-driven cybersecurity systems are trained on diverse and representative datasets, thereby minimising bias and mitigating ethical and legal concerns.
- False positives and negatives: AI-driven security systems may produce false positives (incorrectly identifying benign activities as malicious) or false negatives (failing to detect genuine security threats), resulting in unnecessary alerts or missed opportunities to prevent security incidents.
- Adversarial attacks: AI systems can be vulnerable to adversarial attacks, where malicious actors manipulate input data to evade detection by AI algorithms. Adversarial training and anomaly detection techniques can reduce these risks and ensure the resilience of systems depending on Artificial Intelligence.
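The adversarial-attack risk above can be made concrete with a toy example: given a simple linear detector, an attacker can nudge a malicious input just past the decision boundary so it is classified as benign. The weights and inputs below are entirely hypothetical, and this is an illustration of the concept, not a real attack tool.

```python
# Toy illustration of adversarial evasion against a linear classifier:
# shift each feature against its weight's sign until the score flips.

def linear_score(features, weights, bias):
    """Score > 0 means 'malicious' for this hypothetical detector."""
    return sum(f * w for f, w in zip(features, weights)) + bias

def adversarial_perturb(features, weights, bias, step=0.1, max_iters=100):
    """Greedily perturb the input until the detector's score drops
    below zero (i.e. the sample evades detection)."""
    x = list(features)
    for _ in range(max_iters):
        if linear_score(x, weights, bias) < 0:
            break
        # Move each feature in the direction that lowers the score.
        x = [xi - step * (1 if wi > 0 else -1) for xi, wi in zip(x, weights)]
    return x

weights, bias = [2.0, -1.0, 0.5], -0.5   # hypothetical detector parameters
malicious = [1.0, 0.2, 0.8]              # originally flagged (score > 0)

evasive = adversarial_perturb(malicious, weights, bias)
print(linear_score(malicious, weights, bias) > 0)  # True: original flagged
print(linear_score(evasive, weights, bias) < 0)    # True: perturbed evades
```

Adversarial training counters exactly this: the defender generates such perturbed samples and adds them, correctly labelled, to the training set so the model's boundary becomes harder to skirt.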
Moreover, AI not only enhances threat detection but also empowers cybercriminals to execute sophisticated attacks. Some 73% of organisations in the UAE have experienced ransomware attacks in the past two years, underscoring the critical need for heightened vigilance.
Addressing the Challenges
To truly reap the benefits of emerging technology, companies must adopt a comprehensive approach to the risks it poses. This involves investing in AI-driven threat detection and prevention tools, employing advanced AI security measures, and continuously updating and refining their cyber protection strategies. Moreover, organisations should conduct regular training, attack simulations and awareness programmes for their employees to stay prepared for the malicious usage of AI. Malicious uses of AI targeting employees include several concerning scenarios:
- Deepfakes: Deepfake techniques can be employed by attackers with minimal technical expertise. These can be created using software that is easily accessible through a simple Google search. Such deepfakes could be used to impersonate company executives in video calls, tricking employees into disclosing sensitive information or authorising fraudulent transactions.
- Phishing Scams: AI can be used to generate highly personalised phishing emails. By analysing publicly available data, AI can craft emails that appear to come from a trusted source within the company, making employees more likely to click on malicious links or download harmful attachments.
- Voice Spoofing: Similar to deepfakes, AI can be used to create realistic voice simulations. Attackers can use these to call employees, pretend to be someone they trust and manipulate them into revealing confidential information or performing certain actions that compromise security.
- Automated Social Engineering: AI can automate social engineering attacks by scraping social media and other online platforms to gather detailed profiles of employees. This information can then be used to craft convincing fake scenarios, gaining employees’ trust and leading them to inadvertently share sensitive company information.
- Fake News and Disinformation: AI-generated fake news or misinformation can be targeted at employees to influence their decisions or create unrest within the company. For example, spreading false rumours about company layoffs or financial instability can cause panic and disrupt normal operations.
These examples illustrate how easily accessible AI tools can be exploited by attackers to target employees and compromise organisational security. Companies must educate their employees about these threats and implement robust security measures to mitigate such risks. In addition to in-house security measures, companies should consider leveraging third-party security solutions to enhance the level of security, especially for addressing threats that are beyond their capabilities.
As we venture into the uncharted territories of Artificial Intelligence, the potential of this technology is limitless. While we uncover new possibilities and leverage its benefits to enhance and protect our organisations, vigilance is crucial against its potential drawbacks. Finding the right balance and developing effective strategies is paramount in the ever-evolving landscape of security.
Cyber Security
Positive Technologies Reports 80% of Middle East Cyberattacks Compromise Confidential Data
A new study by cybersecurity firm Positive Technologies has shed light on the evolving cyber threat landscape in the Middle East, revealing that a staggering 80% of successful cyberattacks in the region lead to the breach of confidential information. The research, examining the impact of digital transformation, organized cybercrime, and the underground market, highlights the increasing exposure of Middle Eastern nations to sophisticated cyber threats.
The study found that one in three successful cyberattacks were attributed to Advanced Persistent Threat (APT) groups, which predominantly target government institutions and critical infrastructure. While the rapid adoption of new IT solutions is driving efficiency, it simultaneously expands the attack surface for malicious actors.
Cybercriminals in the region heavily utilize social engineering tactics (61% of cases) and malware (51%), often employing a combination of both. Remote Access Trojans (RATs) emerged as a primary weapon in 27% of malware-based attacks, indicating a common objective of gaining long-term access to compromised systems.
The analysis revealed that credentials and trade secrets (29% each) were the most sought-after data, followed by personal information (20%). This stolen data is frequently leveraged for blackmail or sold on the dark web. Beyond data theft, 38% of attacks resulted in the disruption of core business operations, posing significant risks to critical sectors like healthcare, transportation, and government services.
APT groups are identified as the most formidable threat actors due to their substantial resources and advanced technical capabilities. In 2024, they accounted for 32% of recorded attacks, with a clear focus on government and critical infrastructure. Their activities often extend beyond traditional cybercrime, encompassing cyberespionage and even cyberwarfare aimed at undermining trust and demonstrating digital dominance.
Dark web analysis further revealed that government organizations were the most frequently mentioned targets (34%), followed by the industrial sector (20%). Hacktivist activity was also prominent, with ideologically motivated actors often sharing stolen databases freely, exacerbating the cybercrime landscape.
The United Arab Emirates, Saudi Arabia, Israel, and Qatar, all leaders in digital transformation, were the most frequently cited countries on the dark web in connection with stolen data. Experts suggest that the prevalence of advertisements for selling data from these nations underscores the challenges of securing rapidly expanding digital environments, which cybercriminals are quick to exploit.
Positive Technologies analyst Alexey Lukash said, “In the near future, we expect cyberthreats in the Middle East to grow both in scale and sophistication. As digital transformation efforts expand, so does the attack surface, creating more opportunities for hackers of all skill levels. Governments in the region need to focus on protecting critical infrastructure, financial institutions, and government systems. The consequences of successful attacks in these areas could have far-reaching implications for national security and sovereignty.”
To help organizations build stronger defenses against cyberthreats, Positive Technologies recommends implementing modern security measures. These include vulnerability management systems to automate asset management, as well as identify, prioritize, and remediate vulnerabilities. Positive Technologies also suggests using network traffic analysis tools to monitor network activity and detect cyberattacks. Another critical layer of protection involves securing applications. Such solutions are designed to identify vulnerabilities in applications, detect suspicious activity, and take immediate action to prevent attacks.
Positive Technologies emphasizes the need for a comprehensive, result-driven approach to cybersecurity. This strategy is designed to prevent attackers from disrupting critical business processes. Scalable and flexible, it can be tailored to individual organizations, entire industries, or even large-scale digital ecosystems like nations or international alliances. The goal is to deliver clear, measurable results in cybersecurity—not just to meet compliance standards or rely on isolated technical fixes.
Cyber Security
Axis Communications Sheds Light on Video Surveillance Industry Perspectives on AI
Axis Communications has published a new report that explores the state of AI in the global video surveillance industry. Titled The State of AI in Video Surveillance, the report examines the key opportunities, challenges and future trends, as well as the responsible practices that are becoming critical for organisations in their use of AI. The report draws insights from qualitative research as well as quantitative data sources, including in-depth interviews with carefully selected experts from the Axis global partner network.
A leading insight featured in the report is the unanimous view among interviewees that interest in the technology has surged over the past few years, with more and more business customers becoming curious and increasingly knowledgeable about its potential applications.

Mats Thulin, Director AI & Analytics Solutions at Axis Communications
“AI is a technology that has the potential to touch every corner and every function of the modern enterprise. That said, any implementations or integrations that aim to drive value come with serious financial and ethical considerations. These considerations should prompt organisations to scrutinise any initiative or investment. Axis’s new report not only shows how AI is transforming the video surveillance landscape, but also how that transformation should ideally be approached,” said Mats Thulin, Director AI & Analytics Solutions at Axis Communications.
According to the Axis report, the move by businesses from on-premise security server systems to hybrid cloud architectures continues at pace, driven by the need for faster processing, improved bandwidth usage and greater scalability. At the same time, cloud-based technology is being combined with edge AI solutions, which play a crucial role by enabling faster, local analytics with minimal latency, a prerequisite for real-time responsiveness in security-related situations.
By moving AI processing closer to the source using edge devices such as cameras, businesses can reduce bandwidth consumption and better support real-time applications like security monitoring. As a result, the hybrid approach is expected to continue to shape the role of AI in security and unlock new business intelligence and operational efficiencies.
An emerging trend among businesses is the integration of diverse data for more comprehensive analysis, transforming safety and security. Experts predict that integrating additional sensory data, such as audio and contextual environmental factors caught on camera, can enhance situational awareness and yield more actionable insights, offering a more comprehensive understanding of events.
Combining multiple data streams can ultimately lead to improved detection and prediction of potential threats or incidents. For example, in emergency scenarios, pairing visual data with audio analysis can enable security teams to respond more quickly and precisely. This context-aware approach can potentially elevate safety, security and operational efficiency, and reflects how system operators can leverage and process multiple data inputs to make better-informed decisions.
According to the Axis report, interviewees emphasised that responsible AI and ethical considerations are critical priorities in the development and deployment of new systems, raising concerns about decisions potentially based on biased or unreliable AI. Other risks highlighted include those related to privacy violations and how facial and behavioural recognition could have ethical and legal repercussions.
As a result, a recurring theme among interviewees was the importance of embedding responsible AI practices early in the development process. Interviewees also pointed to regulatory frameworks, such as the EU AI Act, as pivotal in shaping responsible use of technology, particularly in high-risk areas. While regulation was broadly acknowledged as necessary to build trust and accountability, several interviewees also stressed the need for balance to safeguard innovation and address privacy and data security concerns.
“The findings of this report reflect how enterprises are viewing the trend of AI holistically, working to have a firm grasp of both how to use the technology effectively and understand the macro implications of its usage. Conversations surrounding privacy and responsibility will continue but so will the pace of innovation and the adoption of technologies that advance the video surveillance industry and lead to new and exciting possibilities,” Thulin added.
Artificial Intelligence
CyberKnight Partners with Ridge Security for AI-Powered Security Validation
The automated penetration testing market was valued at roughly $3.1 billion in 2023 and is projected to grow rapidly, with forecasts estimating a compound annual growth rate (CAGR) between 21% and 25%. By 2030, the sector is expected to reach approximately $9 to $10 billion. The broader penetration testing industry is also expanding, with projections indicating it will surpass $5.3 billion by 2027, according to MarketsandMarkets.
To support enterprises and government entities across the Middle East, Turkey and Africa (META) with identifying and validating vulnerabilities and reducing security gaps in real-time, CyberKnight has partnered with Ridge Security, the world's first AI-powered Offensive Security Validation Platform. Ridge Security's products incorporate advanced artificial intelligence to deliver security validation through automated penetration testing and breach and attack simulations.
RidgeBot uses advanced AI to autonomously perform multi-vector iterative attacks, conduct continuous penetration testing, and validate vulnerabilities with zero false positives. RidgeBot has been deployed by customers worldwide as a key element of their journey to evolve from traditional vulnerability management to Continuous Threat Exposure Management (CTEM).
“Ridge Security’s core strength lies in delivering holistic, AI-driven security validation that enables organizations to proactively manage risk and improve operational performance,” said Hom Bahmanyar, Chief Enablement Officer at Ridge Security. “We are delighted to partner with CyberKnight to leverage their network of strategic partners, deep-rooted customer relations, and security expertise to accelerate our expansion plans in the region.”
“Our partnership with Ridge Security is a timely and strategic step, as 69% of organizations are now adopting AI-driven security for threat detection and prevention,” added Wael Jaber, Chief Strategy Officer at CyberKnight. “By joining forces, we enhance our ability to deliver automated, intelligent security validation solutions, reaffirming our commitment to empowering customers with resilient, future-ready cybersecurity across the region.”