theNet by CLOUDFLARE

ChatGPT impersonators reveal security vulnerability

Employee use of fraudulent AI

ChatGPT: Adopted faster than any app, but with what consequences?

ChatGPT, the popular AI-based large language model (LLM) app from OpenAI, has seen user growth that is remarkable for many reasons. For one, it reached over a million users within five days of its release, a pace unmatched even by historically popular apps like Facebook and Spotify. Additionally, ChatGPT has seen near-immediate adoption in business contexts, as organizations seek to gain efficiencies in content creation, code generation, and other functional tasks.

But as businesses rush to take advantage of AI, so too do attackers. One notable way in which they do so is through unethical or malicious LLM apps.

Unfortunately, a recent spate of these malicious apps has introduced risk into organizations' AI journeys. And the associated risk is not easily addressed with a single policy or solution. To unlock the value of AI without opening the door to data loss, security leaders need to rethink how they achieve broader visibility and control over corporate applications.


Types of fraudulent "GPT" and their risk

Malicious LLM applications fall into multiple categories that present different flavors of risk, such as:

  • Hindering an organization by locking employees into an inferior service and generating inaccurate content. In May 2023, ZDNet reported on a reportedly popular app called "ChatOn" that locked users into expensive subscriptions. Similarly, an app called "Genie" claimed to be powered by ChatGPT but hallucinated (the term for when an AI reports incorrect data) more often than the real thing. This category of imitator app simply provides poor service.

  • Stealing accounts, vacuuming up data, and compromising networks. In March 2023, a fake ChatGPT Chrome extension was found to be hijacking Facebook business accounts, installing backdoors, harvesting browser information, stealing cookies, and more. The fake extension was promoted via Facebook ads as "Quick access to Chat GPT [sic]," and was at one point being installed over 2,000 times per day.

  • Installing malware. Inevitably, attackers have used classic social engineering to pretend to offer access to ChatGPT. One campaign used links in social media posts that promised a downloadable ChatGPT client. The links led to a realistic-looking page prompting users to download the client; doing so instead installed the "Fobo" Trojan, which harvests account credentials stored in browsers, especially those associated with business accounts.

Fake apps that impersonate real apps to trick users into downloading malware are hardly a new tactic; attackers have manipulated users this way for decades. But these ChatGPT-based attacks point to a larger problem.


The larger issue: Lack of application awareness

A lack of visibility into the applications entering an organization's network naturally results in a lack of control, leaving companies vulnerable to fraudulent apps.

Almost any software can be distributed over the Internet or accessed through the cloud; this is the new normal. Employees are typically able to install non-approved applications on company devices in a matter of seconds.

Employees are also able to use all manner of cloud-hosted software-as-a-service (SaaS) apps. The use of unauthorized cloud-based services runs rampant throughout most large organizations; this phenomenon is known as shadow IT. Shadow IT is so prevalent that in one survey, 80% of employees reported using non-approved SaaS applications.

These risks are ongoing, but the danger increases when one particular type of app — in this case, AI-based LLMs — has so firmly entered the corporate zeitgeist. Well-meaning employees seeking to increase their efficiency may end up providing a foothold for attackers to enter their organizations' networks.


Solving application control

Cyber security awareness training has become core to organizational cyber resilience, but fraudulent applications are ultimately a technical problem, and they require a technical solution.

For many years, firewalls inspected network traffic at layer 4, the transport layer. These firewalls could block traffic going to or coming from non-approved IP addresses or ports, and this prevented many attacks. However, classic firewalls are insufficient for the modern era: they lack awareness of layer 7, the application layer, and therefore cannot determine which application a given stream of traffic belongs to.
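
To make that limitation concrete, here is a minimal sketch (an illustration only, not any particular firewall's implementation; the addresses and rule sets are hypothetical) of a layer-4 policy. Because it matches only on IP address and port, two very different applications that both use HTTPS on port 443 are indistinguishable to it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Connection:
    dest_ip: str    # layer 3: where the traffic is going
    dest_port: int  # layer 4: which transport port it uses

# A classic layer-4 policy: block known-bad addresses and ports.
BLOCKED_IPS = {"203.0.113.66"}   # documentation-range address, for illustration
BLOCKED_PORTS = {23, 445}        # e.g. Telnet, SMB

def layer4_allows(conn: Connection) -> bool:
    """Allow traffic unless its destination IP or port is explicitly blocked."""
    return conn.dest_ip not in BLOCKED_IPS and conn.dest_port not in BLOCKED_PORTS

# A legitimate SaaS app and a fake "ChatGPT" site both use HTTPS on port 443,
# so a layer-4 rule set cannot tell them apart.
print(layer4_allows(Connection("198.51.100.10", 443)))  # True
print(layer4_allows(Connection("198.51.100.99", 443)))  # True: could be the impostor
```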

Next-generation firewalls (NGFWs) have this capability. They inspect traffic at layer 7 and can allow or deny it based on the application it comes from. This application awareness lets administrators block potentially risky applications; if an application's data cannot get past the firewall, it cannot introduce threats into the network.
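
Continuing the sketch above under the same assumptions (the hostnames, application names, and policy table are illustrative, not a real product's configuration), an application-aware check adds a layer-7 signal, in this case the TLS SNI hostname, and makes a per-application decision rather than a per-port one:

```python
# Hypothetical layer-7 extension of the earlier sketch: classify the application
# from the TLS SNI hostname, then apply a per-application verdict.
APP_SIGNATURES = {
    "chat.openai.com": "ChatGPT",
    "chatgpt-free-download.example": "Fake ChatGPT client",  # illustrative impostor
}

APP_POLICY = {
    "ChatGPT": "allow",
    "Fake ChatGPT client": "block",
}

def layer7_verdict(sni_hostname: str) -> str:
    """Return the policy verdict for the application behind this hostname."""
    app = APP_SIGNATURES.get(sni_hostname, "unknown")
    return APP_POLICY.get(app, "block")  # default-deny unknown applications

print(layer7_verdict("chat.openai.com"))                # allow
print(layer7_verdict("chatgpt-free-download.example"))  # block
```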

However, like traditional firewalls, next-generation firewalls assume a self-contained, private internal network, not the IT environment of today, in which applications and data are spread across internal networks, private clouds, and public clouds. Modern networks are distributed and encompass SaaS applications, web applications, and remote users.

Therefore, organizations need cloud-based NGFW capabilities that can sit in front of both on-premises networks and cloud infrastructure.

But NGFWs on their own cannot deal with shadow IT. And by the time an NGFW detects malicious application usage, it may be too late. Application control has to be integrated with a cloud access security broker (CASB) to truly secure networks, devices, and users.

Along with other capabilities, CASBs discover shadow IT and give administrators the ability to remediate it. They can deploy URL filtering to ensure that phishing sites and applications are never loaded, and that malware cannot reach known malicious web addresses to fetch instructions from a command-and-control server. They can add approved applications to an allowlist and block all others. And they can use anti-malware scanning to identify malicious imitation software as it enters a network, whether on-premises or in the cloud.
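
As a rough illustration of the allowlist idea (the log fields, hostnames, and approved-app list below are hypothetical, not the schema of any specific CASB), shadow IT discovery amounts to comparing observed SaaS access against the applications IT has approved and surfacing everything else for remediation:

```python
from collections import Counter

# Hypothetical approved-application allowlist maintained by IT.
APPROVED_SAAS = {"salesforce.com", "slack.com", "chat.openai.com"}

# Simplified access-log records: (user, SaaS hostname). In practice these
# would come from gateway or CASB API logs.
access_log = [
    ("alice", "slack.com"),
    ("bob", "chatgpt-pro-unlimited.example"),   # illustrative impostor app
    ("carol", "salesforce.com"),
    ("bob", "chatgpt-pro-unlimited.example"),
]

def find_shadow_it(log):
    """Count accesses to SaaS hosts that are not on the approved allowlist."""
    unapproved = Counter()
    for user, host in log:
        if host not in APPROVED_SAAS:
            unapproved[(user, host)] += 1
    return unapproved

for (user, host), hits in find_shadow_it(access_log).items():
    print(f"Shadow IT: {user} accessed {host} {hits} time(s)")
```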


Securing application use — today and tomorrow

ChatGPT began trending in 2023, and new AI tools will continue to be released for years to come.

SaaS applications are mission-critical for workforce collaboration, but they are hard to keep secure. Cloudflare's CASB service provides comprehensive visibility and control over SaaS apps, so you can easily prevent data leaks and compliance violations, and use Zero Trust security to block insider threats, shadow IT, risky data sharing, and bad actors.

This article is part of a series on the latest trends and topics impacting today’s technology decision-makers.


Dive deeper into this topic.

Learn more about how CASB works in the "Simplifying the way we protect SaaS applications" whitepaper.


Key takeaways

After reading this article you will be able to understand:

  • The risks that follow when apps gain popularity

  • 3 different types of malicious apps

  • The importance of application visibility and control

  • How NGFW and CASB together can secure modern organizations


