
Criminals Are Using ChatGPT to Create Malware and Fake Profiles. How Are They Exploiting This AI?

OpenAI's report, soberingly titled "Influence and Cyber Operations: An Update", describes how cybercriminals are misusing the company's AI technology. Specifically, it details the ways criminals use ChatGPT for nefarious purposes, the implications of that behavior, and how the risks are being mitigated.

Technology has been developing rapidly, and it has made many parts of our lives easier. But development brings new dangers, and one of them is the criminal use of artificial intelligence, or AI. Recently, OpenAI, the company behind the popular AI tool ChatGPT, released a report called "Influence and Cyber Operations: An Update."

So, what is ChatGPT? 

ChatGPT is an AI chatbot from OpenAI. It is designed to understand and generate human-like text, so users can talk to it, ask it questions, and even request that it write code. Here are the main things ChatGPT can do, with a short code sketch after the list:

  • Code Generation: ChatGPT can write and debug code in many programming languages, including Python, JavaScript, and HTML.
  • Natural Language Understanding: It can understand and generate natural-sounding text, much like a human writing or speaking.
  • Task Automation: ChatGPT can handle tasks that would otherwise require specialist technical knowledge, such as writing scripts or answering technical questions.
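
To make the code-generation point concrete, here is a minimal sketch of requesting code through OpenAI's official Python SDK. The model name and prompt are illustrative placeholders, not details taken from OpenAI's report.

    from openai import OpenAI

    # The client reads the OPENAI_API_KEY environment variable by default.
    client = OpenAI()

    # Ask the model to generate a small piece of code.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user",
                   "content": "Write a Python function that checks whether a string is a palindrome."}],
    )

    print(response.choices[0].message.content)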

These capabilities make it useful across a wide range of industries: businesses use it for customer service, composing emails, and even generating content. However, the same skills that make ChatGPT helpful also make it dangerous in the wrong hands.

How Criminals Are Abusing ChatGPT

Creating Malware

Creating malware is one of the most alarming criminal uses of ChatGPT. Malware is malicious software that damages a computer or extracts data from it. Traditionally, writing malware required considerable programming skill and information-security knowledge. With ChatGPT, criminals can produce it far more easily without being experts themselves.

For instance: 

Malicious Code Writing. Attackers can ask ChatGPT to generate code that helps them break into systems or exploit weaknesses in software.

Scripts. Attackers can instruct ChatGPT to write custom scripts in programming languages such as Python or Bash. Such scripts may help attackers evade detection by antivirus products and other security controls.

Social Engineering Attacks

Another area where criminals are exploiting ChatGPT is social engineering. Cybercriminals use social engineering to trick individuals into revealing sensitive information, such as account details or passwords. ChatGPT's ability to produce convincingly human-sounding text makes it a powerful tool in such scams.

Here are a few examples:

Phishing Emails. Thieves design emails that look legitimate but are actually crafted to steal passwords or other personal information, often through malicious links. With ChatGPT, criminals can make these emails look professional and highly convincing.

Deepfake Profiles. These are fake images, videos, or text that mimic real people. Scammers can use ChatGPT to build profiles on social media or email platforms that impersonate people the victim knows, such as friends or coworkers, and then request money or personal details.

Targeted Attacks on Specific Victims

With AI tools like ChatGPT, criminals can now generate personalized scams that are far more believable than batch-produced ones. Instead of sending one identical email to thousands of people, a criminal can use ChatGPT to create messages tailored to specific targets, making the scam much harder for victims to spot.

Notable AI-Assisted Cybercrime Incidents

There have already been notable cases in which cybercriminals employed AI tools such as ChatGPT to facilitate their attacks. Here are a few examples.

Scully Spider

This is the name of a financially motivated threat group, also tracked as TA547. In April 2024, it delivered malware using a PowerShell loader that appeared to have been written with the help of an AI model, one of the first known cases of criminals leveraging AI to help carry out a cyberattack.

SweetSpecter Group

This suspected China-based group targeted Asian governments with phishing emails carrying malicious ZIP files. Once the attached file was opened, a remote access trojan (RAT) installed itself, giving the attacker full control of the victim's computer from a remote location.

CyberAv3ngers

Tied to Iran's Islamic Revolutionary Guard Corps, this group used ChatGPT to research weaknesses in critical infrastructure, including industrial routers, and then exploited those weaknesses in attacks on manufacturing and energy-sector systems.

Effects of Abuse of AI

The abuse of AI through tools such as ChatGPT has significant consequences for individual victims and, more broadly, for society as a whole.

Traditionally, committing cybercrime took a great deal of technical know-how. Hackers would spend years learning how to write malware, break into systems, and cover their tracks. With tools like ChatGPT available, much of that expertise is no longer required, which means people without advanced technical skills can now participate in cybercrime.

With the assistance of chatbots like ChatGPT, criminals can more easily identify weak points in critical infrastructure such as power grids, water treatment plants, and hospitals, and exploit them to devastating effect. A successful cyberattack on a power grid, for example, could leave an entire city without electricity for weeks.

Hackers can use ChatGPT extensively to create deepfake profiles and phishing emails, and victims, whether individuals or businesses, often lose thousands of dollars. For example, a fake email designed to look as though it came from a company's CEO, asking an employee to transfer money, could cost the company thousands or even millions of dollars.

Falling victim to a cyberattack can also damage a company's reputation. Customers lose confidence, especially if sensitive information such as credit card details has been stolen, which can have a long-lasting impact on the company's bottom line and its ability to attract new customers.

Should This Happen? The Ethical Debate

The criminal use of AI tools like ChatGPT raises important ethical questions. On one hand, AI can be enormously useful for businesses and individuals: it saves time, helps solve complex problems, and even improves customer service. On the other hand, the same tools become a major problem when used for harm.

So, should ChatGPT and similar AI tools be restricted? Would stronger controls limit their misuse? These are not easy questions. Many experts agree, however, that we must strike a balance between innovation and safety: AI should make the world better, not cause harm.

Negative Side Effects of Misuse of AI

When a criminal uses AI to create fake profiles or impersonate real people, those individuals lose their privacy. It is a violation that leaves victims feeling unsafe, knowing their personal information was used in a scam.

Being the victim of a scam can also be traumatizing: victims are often left embarrassed, angry, and anxious after handing over money or personal information.

As AI progresses, new types of cybercrime keep arising, making it difficult for law enforcement agencies to keep pace. Scams created with AI tools like ChatGPT can be extremely hard to trace or detect.

What Can Be Done?

Putting an end to the misuse of AI technology is a challenging task, but there are steps that can be taken to minimize the risks.

OpenAI is well aware of the risks involved with its AI tools and has taken several steps to address them. These measures include:

OpenAI has shut down ChatGPT accounts involved in fraudulent activity. It also shares attack patterns, including IP addresses and methods of attack, with cybersecurity companies, helping organizations and individuals protect themselves against similar attacks in the future.
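
As a rough sketch of how such shared indicators might be used on the defensive side, the snippet below checks web-server log entries against a set of known-bad IP addresses; the addresses, log format, and log lines are invented for the example.

    # Hypothetical indicators shared by a security vendor (documentation-range IPs).
    blocklist = {"203.0.113.7", "198.51.100.23"}

    def flag_suspicious(log_lines):
        # Return log lines whose leading source IP is on the blocklist.
        return [line for line in log_lines if line.split()[0] in blocklist]

    logs = ["203.0.113.7 GET /login", "192.0.2.10 GET /index.html"]
    for hit in flag_suspicious(logs):
        print("Suspicious:", hit)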

OpenAI is also working to make its systems detect suspicious activity automatically, so that misuse can be flagged before it grows into a bigger problem.

Use Strong Passwords

Use complex passwords that a criminal is unlikely to guess, and use a password manager to create and store them easily.
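
As an illustration of what a strong password looks like, Python's standard secrets module can generate one; a password manager does essentially the same thing for you.

    import secrets
    import string

    def generate_password(length=16):
        # Build a random password from letters, digits, and punctuation.
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())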

Enable Two-Factor Authentication (2FA)

This adds an extra layer of security to online accounts: after entering a password, the user must provide a second form of verification, such as a code received by SMS or generated by an authenticator app.
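
Authenticator apps typically implement the time-based one-time password (TOTP) standard. As a small sketch, the third-party pyotp library can generate and verify such codes; the secret below is freshly generated for the example rather than tied to any real account.

    import pyotp  # third-party library: pip install pyotp

    # Shared secret agreed between the service and the user's authenticator app.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    code = totp.now()  # six-digit code that changes every 30 seconds
    print("Current code:", code)
    print("Accepted?", totp.verify(code))  # how a service would check the code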

Beware of Email and Message Scams

Be careful with unsolicited emails or messages, particularly those asking for sensitive information or money. Always verify a message's authenticity, even when it appears to come from someone you know.
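
One simple, partial check that can be automated is comparing a link's real domain against domains you actually trust. The sketch below is a crude heuristic rather than a real phishing detector, and the domains are invented for the example.

    from urllib.parse import urlparse

    # Hypothetical domains the user actually does business with.
    trusted_domains = {"example-bank.com", "mail.example.com"}

    def looks_suspicious(url):
        # Flag links whose host is not a trusted domain or one of its subdomains.
        host = urlparse(url).hostname or ""
        return not any(host == d or host.endswith("." + d) for d in trusted_domains)

    for url in ["https://example-bank.com/login",
                "https://example-bank.com.evil.example/login"]:
        print(url, "->", "SUSPICIOUS" if looks_suspicious(url) else "ok")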

Watch for Deepfakes

Learn how to spot deepfake technology, and be cautious with suspicious profiles or messages.

Governments and policymakers also have a role to play in reducing the risks of AI misuse. This could include passing laws that govern how tools like ChatGPT may be used, or increasing the penalties for cybercriminals who use AI to commit crimes. In addition, governments could fund research and development of better tools for detecting and preventing AI-facilitated cybercrime.

New AI technologies like ChatGPT hold great potential for innovation and progress, but they carry equally frightening risks. Criminals are already using AI tools to create malware, commit fraud, and trick people into handing over sensitive information, and the methods of crime are changing as fast as the technology itself.

Individuals, companies, and governments all need to be aware of these risks and take proactive measures. Staying informed and vigilant allows us to enjoy the benefits of AI while minimizing the dangers of its misuse. The key is balancing innovation and security so that AI becomes a force for good in our increasingly digital world.

Sehjal

Sehjal is a writer at Inventiva, where she covers investigative news analysis and market news.
