Artificial Intelligence
AI Technology’s Potential for Misuse Necessitates Robust Security Policies
Ram Narayanan, the Country Manager at Check Point Software Technologies, Middle East, says collaborating with AI providers and researchers is essential to remain current with AI advancements
What have we achieved so far in terms of use case scenarios of Gen AI?
Generative AI tools such as ChatGPT and Google Bard have seen remarkable growth across a wide range of use cases, showcasing their versatility and potential in many applications. These tools have proven valuable in enhancing productivity and creativity. However, they also present significant challenges, chiefly their vulnerability to misuse in cyber-attacks.
Instances of Generative AI being exploited to create malicious content, such as malware, phishing emails, and deceptive videos, have raised concerns in the cybersecurity domain. Organizations have had to proactively address these issues to protect their digital assets and sensitive data. While Generative AI continues to offer substantial benefits, organizations must remain vigilant in their efforts to protect against emerging AI threats, ensuring that AI and machine learning-based defences become essential components of their cybersecurity strategies.
Why, in your view, should companies leverage generative AI?
Companies should leverage generative AI for a multitude of reasons that promise transformative benefits. Generative AI streamlines content creation processes, allowing for efficient, cost-effective production of customized content at scale. Moreover, the scalability of generative AI ensures that businesses can adapt effortlessly to varying audience sizes without compromising content quality. Generative AI extends its utility to customer support through AI-powered chatbots, offering round-the-clock assistance while freeing up human teams for more complex tasks.
Furthermore, its flexibility to generate content in diverse formats, from text to images and audio-visual content, enables companies to diversify their content offerings and reach audiences across multiple platforms. Embracing generative AI grants companies a competitive edge in a dynamic business landscape, fostering agility and innovation. However, responsible AI use is paramount.
The technology’s potential for misuse, including cyber threats and malicious content creation, necessitates robust security policies, especially for mobile devices. Advanced technology, including AI and machine learning, is crucial to effectively detect and mitigate these risks. Companies must also uphold ethical standards in AI deployment, ensuring responsible use that aligns with societal values while reaping the myriad benefits generative AI offers.
What are the challenges companies face in terms of adopting and using Gen AI and how can they be overcome?
Companies face several challenges when adopting and using Generative AI. A primary concern is the potential for misuse, as Gen AI can be exploited for cyber-attacks, including the creation of malware, phishing emails, and deceptive content. To counter these security risks, robust security policies should be established and enforced, governing the use of AI tools on corporate devices and networks.
Employee education is crucial to raise awareness and empower staff to recognize AI-generated threats. Advanced threat detection technologies, utilizing behavioural analysis and machine learning, enhance security measures. Access control to AI tools helps mitigate misuse risks, and regular security updates are essential. Mobile devices, often entry points to organizations, require special attention with robust mobile security solutions.
Ethical concerns, regulatory compliance, quality control, bias mitigation, and public perception challenges also need to be addressed through collaboration, self-regulation, responsible AI development, and continuous monitoring. Striking a balance between AI’s potential and ethical considerations is key for successful Gen AI adoption.
Are companies aware of regional and global policies surrounding the use of Gen AI?
The awareness among companies regarding regional and global policies surrounding the use of Generative AI can vary significantly. Some companies are well-informed and proactive in understanding and adhering to these policies, especially if they operate in highly regulated industries or have a global presence. These companies often invest in compliance efforts to ensure they align with regional and international regulations related to AI.
However, many companies, particularly smaller or newer ones, may have limited awareness of the full scope of regional and global policies concerning Gen AI. It’s worth noting that the awareness of Gen AI policies can also be influenced by the region in which a company operates. The United Arab Emirates has been actively embracing AI technology in various sectors, including healthcare, transportation, finance, and government services.
To ensure responsible and ethical use of AI, the UAE government has developed regulatory frameworks and policies. For instance, the UAE AI Strategy 2031 focuses on creating a conducive environment for AI innovation while also addressing the ethics and legal aspects of AI implementation. Given the substantial investment in AI technology and the government’s commitment to AI governance, it is likely that UAE companies are well-informed about the regional and global policies surrounding the use of Gen AI. Companies operating in sensitive sectors, such as healthcare or finance, may have a higher level of awareness and compliance with AI regulations due to the potential impact on individuals’ privacy and security.
How can companies use their resources on using Gen AI to create a competitive advantage?
Companies can utilize their resources to harness Generative AI strategically, thereby gaining a competitive edge in various aspects. Gen AI enables swift innovation by automating product development, reducing time-to-market, and ensuring companies stay ahead in dynamic industries. Gen AI’s data analysis capabilities facilitate data-driven decision-making, enabling informed strategic choices, rapid response to market trends, and optimized supply chains, leading to cost savings and operational efficiency.
It also plays a vital role in cybersecurity, effectively detecting and mitigating advanced threats to safeguard digital assets and reputation. Automated market research with Gen AI identifies trends and consumer preferences, guiding product development and marketing strategies. Task automation enhances employee productivity, freeing up time for innovation, while Gen AI assists in compliance and risk management efforts.
To maintain a competitive edge, companies should integrate Gen AI strategically, invest in workforce training, ensure ethical use, and implement robust cybersecurity measures. Collaboration with AI providers and researchers is essential to stay current with AI advancements and maintain responsible practices.
What factors do companies need to consider before adopting Gen AI, such as having a centralized data strategy?
Before adopting Generative AI, companies must carefully consider several critical factors, one of which is the establishment of a centralized data strategy. Security is of utmost concern, as Gen AI tools have the potential to be exploited in cyber-attacks, exemplified by instances of AI-generated malware and phishing campaigns. To mitigate these risks, robust security policies and measures should be implemented to safeguard sensitive data and prevent data breaches. Mobile devices, commonly used for Gen AI interactions, present unique vulnerabilities, necessitating a focused approach to mobile security that encompasses both prevention and detection, ideally utilizing AI and machine learning in security solutions.
A centralized data strategy should incorporate these security measures to protect against potential AI threats during Gen AI adoption. Additionally, it should encompass data governance practices, data quality assessment, privacy compliance, ethical guidelines, transparency, scalability, cross-functional collaboration, and continuous monitoring to ensure responsible and secure Gen AI integration. Building and maintaining customer trust and preparing for crisis management are integral aspects of a comprehensive Gen AI strategy.
How can companies experiment with Gen AI to predict the future of strategic workforce planning?
Companies can gain a competitive edge by strategically allocating resources to harness Generative AI in various ways. Generative AI accelerates innovation by automating product development processes, leading to faster time-to-market and a competitive advantage in rapidly evolving industries.
It also streamlines content creation, reduces costs, and delivers personalized content to enhance customer engagement and loyalty. With the deployment of AI-powered chatbots and virtual assistants, companies can improve customer support, providing efficient round-the-clock assistance while optimizing the supply chain, ultimately increasing customer satisfaction and operational efficiency.
Generative AI’s role in cybersecurity is crucial, as it effectively detects and mitigates advanced threats. Additionally, it aids in automated market research, identifying trends and consumer preferences to guide product development and marketing strategies. Lastly, it contributes to compliance and risk management efforts.
To maintain this competitive edge, companies must strategically integrate Generative AI, invest in workforce training, ensure ethical use, and implement robust cybersecurity measures to safeguard against AI-related threats. Collaborating with AI providers and researchers is essential to remain current with AI advancements, allowing companies to effectively harness these technologies while upholding responsible practices.
Artificial Intelligence
CyberKnight Partners with Ridge Security for AI-Powered Security Validation
The automated penetration testing market was valued at roughly $3.1 billion in 2023 and is projected to grow rapidly, with forecasts estimating a compound annual growth rate (CAGR) between 21% and 25%. By 2030, the sector is expected to reach approximately $9 billion to $10 billion. The broader penetration testing industry is also expanding, with projections indicating it will surpass $5.3 billion by 2027, according to MarketsandMarkets.
To support enterprises and government entities across the Middle East, Turkey and Africa (META) in identifying and validating vulnerabilities and reducing security gaps in real time, CyberKnight has partnered with Ridge Security, developer of the world's first AI-powered offensive security validation platform. Ridge Security's products incorporate advanced artificial intelligence to deliver security validation through automated penetration testing and breach and attack simulation.
RidgeBot uses advanced AI to autonomously perform multi-vector iterative attacks, conduct continuous penetration testing, and validate vulnerabilities with zero false positives. RidgeBot has been deployed by customers worldwide as a key element of their journey to evolve from traditional vulnerability management to Continuous Threat Exposure Management (CTEM).
“Ridge Security’s core strength lies in delivering holistic, AI-driven security validation that enables organizations to proactively manage risk and improve operational performance,” said Hom Bahmanyar, Chief Enablement Officer at Ridge Security. “We are delighted to partner with CyberKnight to leverage their network of strategic partners, deep-rooted customer relations, and security expertise to accelerate our expansion plans in the region.”
“Our partnership with Ridge Security is a timely and strategic step, as 69% of organizations are now adopting AI-driven security for threat detection and prevention,” added Wael Jaber, Chief Strategy Officer at CyberKnight. “By joining forces, we enhance our ability to deliver automated, intelligent security validation solutions, reaffirming our commitment to empowering customers with resilient, future-ready cybersecurity across the region.”
Artificial Intelligence
Cequence Intros Security Layer to Protect Agentic AI Interactions
Cequence Security has announced significant enhancements to its Unified API Protection (UAP) platform to deliver a comprehensive security solution for agentic AI development, usage, and connectivity. This enhancement empowers organizations to secure every AI agent interaction, regardless of the development framework. By implementing robust guardrails, the solution protects both enterprise-hosted AI applications and external AI APIs, preventing sensitive data exfiltration through business logic abuse and ensuring regulatory compliance.
There is no AI without APIs, and the rapid growth of agentic AI applications has amplified concerns about securing sensitive data during their interactions. These AI-driven exchanges can inadvertently expose internal systems, create significant vulnerabilities, and jeopardize valuable data assets. Recognizing this critical challenge, Cequence has expanded its UAP platform, introducing an enhanced security layer specifically to govern interactions between AI agents and backend services. This new layer of security enables customers to detect and prevent AI bots, such as OpenAI's ChatGPT and Perplexity, from harvesting organizational data.
Internal telemetry across Global 2000 deployments shows that the overwhelming majority of AI-related bot traffic, nearly 88%, originates from large language model infrastructure, with most requests obfuscated behind generic or unidentified user agents. Less than 4% of this traffic is transparently attributed to bots like GPTBot or Gemini. Over 97% of it comes from U.S.-based IP addresses, highlighting the concentration of risk in North American enterprises. Cequence’s ability to detect and govern this traffic in real time, despite the lack of clear identifiers, reinforces the platform’s unmatched readiness for securing agentic AI in the wild.
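To illustrate why attribution by user agent alone covers so little of this traffic, consider a naive matcher that only recognizes crawlers that self-identify. The token list and the `classify_user_agent()` helper below are hypothetical examples for illustration, not Cequence's detection logic, which must also govern the majority of requests hiding behind generic or absent user agents:

```python
# Illustrative sketch: naive user-agent matching for self-identifying AI
# crawlers. Real-world detection cannot rely on this alone, since most
# AI-related bot traffic uses generic or unidentified user agents.

KNOWN_AI_CRAWLER_TOKENS = {
    "gptbot": "OpenAI GPTBot",
    "claudebot": "Anthropic ClaudeBot",
    "perplexitybot": "PerplexityBot",
    "google-extended": "Google-Extended",
}

def classify_user_agent(user_agent: str) -> str:
    """Return a label for transparently identified AI crawlers,
    or 'unattributed' when no known token is present."""
    ua = user_agent.lower()
    for token, label in KNOWN_AI_CRAWLER_TOKENS.items():
        if token in ua:
            return label
    return "unattributed"

# Sample request headers: only a minority of AI bot traffic self-identifies.
samples = [
    "Mozilla/5.0 AppleWebKit/537.36 (compatible; GPTBot/1.2)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",  # generic browser string
    "python-requests/2.31.0",                     # generic client library
]
for ua in samples:
    print(ua, "->", classify_user_agent(ua))
```

Token matching like this catches only the transparently attributed slice of traffic; the remainder requires behavioral analysis of request patterns rather than header inspection.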
Key enhancements to Cequence’s UAP platform include:
- Block unauthorized AI data harvesting: Understanding that external AI often seeks to learn by broadly collecting data without obtaining permission, Cequence provides organizations with the critical capability to manage which AI, if any, can interact with their proprietary information.
- Detect and prevent sensitive data exposure: Empowers organizations to effectively detect and prevent sensitive data exposure across all forms of agentic AI. This includes safeguarding against external AI harvesting attempts and securing data within internal AI applications. The platform’s intelligent analysis automatically differentiates between legitimate data access during normal application usage and anomalous activities signaling sensitive data exfiltration, ensuring comprehensive protection against AI-related data loss.
- Discover and manage shadow AI: Automatically discovers and classifies APIs from agentic AI tools like Microsoft Copilot and Salesforce Agentforce, presenting a unified view alongside customers’ internal and third-party APIs. This comprehensive visibility empowers organizations to easily manage these interactions and effectively detect and block sensitive data leaks, whether from external AI harvesting or internal AI usage.
- Seamless integration: Integrates easily into DevOps frameworks for discovering internal AI applications and generates OpenAPI specifications that detail API schemas and security mechanisms, including strong authentication and security policies. Cequence delivers powerful protection without relying on third-party tools, while seamlessly integrating with the customer’s existing cybersecurity ecosystem. This simplifies management and security enforcement.
“Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously. We’ve taken immediate action to extend our market-leading API security and bot management capabilities,” said Ameya Talwalkar, CEO of Cequence. “Agentic AI introduces a new layer of complexity, where every agent behaves like a bidirectional API. That’s our wheelhouse. Our platform helps organizations embrace innovation at scale without sacrificing governance, compliance, or control.”
These extended capabilities will be generally available in June.
Artificial Intelligence
Fortinet Expands FortiAI Across its Security Fabric Platform
Fortinet has announced major upgrades to FortiAI, integrating advanced AI capabilities across its Security Fabric platform to combat evolving threats, automate security tasks, and protect AI systems from cyber risks. As cybercriminals increasingly weaponize AI to launch sophisticated attacks, organizations need smarter defenses. Fortinet, with more than 500 AI patents and 15 years of AI innovation, now embeds FortiAI across its platform to:
- Stop AI-powered threats
- Automate security and network operations
- Secure AI tools used by businesses
“Fortinet’s AI advantage stems from the breadth and depth of our AI ecosystem—shaped by over a decade of AI innovation and reinforced by more patents than any other cybersecurity vendor,” said Michael Xie, Founder, President, and Chief Technology Officer at Fortinet. “By embedding FortiAI across the Fortinet Security Fabric platform, including new agentic AI capabilities, we’re empowering our customers to reduce the workload on their security and network analysts while improving the efficiency, speed, and accuracy of their security and networking operations. In parallel, we’ve added coverage across the Fabric ecosystem to enable customers to monitor and control the use of GenAI-enabled services within their organization.”
Key upgrades:
FortiAI-Assist – AI That Works for You
- Automatic Network Fixes: AI configures, validates, and troubleshoots network issues without human help.
- Smarter Security Alerts: Cuts through noise, prioritizing only critical threats.
- AI-Powered Threat Hunting: Scans for hidden risks and traces attack origins.
FortiAI-Protect – Defending Against AI Threats
- Tracks 6,500+ AI apps, blocking risky or unauthorized usage.
- Stops new malware with machine learning.
- Adapts to new attack methods in real time.
FortiAI-SecureAI – Safe AI Adoption
- Protects AI models, data, and cloud workloads.
- Prevents leaks from tools like ChatGPT.
- Enforces zero-trust access for AI systems.
FortiAI processes queries locally, ensuring sensitive data never leaves your network.
