Welcome to the Era of BadGPTs - Kanebridge News


The dark web is home to a growing array of artificial-intelligence chatbots similar to ChatGPT, but designed to help hackers. Businesses are on high alert for a glut of AI-generated email fraud and deepfakes.

Thu, Feb 29, 2024 9:59am | 5 min read

A new crop of nefarious chatbots with names like “BadGPT” and “FraudGPT” are springing up on the darkest corners of the web, as cybercriminals look to tap the same artificial intelligence behind OpenAI’s ChatGPT.

Just as some office workers use ChatGPT to write better emails, hackers are using manipulated versions of AI chatbots to turbocharge their phishing emails. They can use chatbots—some also freely available on the open internet—to create fake websites, write malware and tailor messages to better impersonate executives and other trusted entities.

Earlier this year, an employee at a Hong Kong multinational handed over $25.5 million to an attacker who posed as the company’s chief financial officer on an AI-generated deepfake conference call, the South China Morning Post reported, citing Hong Kong police. Chief information officers and cybersecurity leaders, already accustomed to a growing spate of cyberattacks, say they are on high alert for an uptick in more sophisticated phishing emails and deepfakes.

Vish Narendra, CIO of Graphic Packaging International, said the Atlanta-based paper packaging company has seen an increase in what are likely AI-generated email attacks called spear-phishing, where cyber attackers use information about a person to make an email seem more legitimate. Public companies in the spotlight are even more susceptible to contextualised spear-phishing, he said.

Researchers at Indiana University recently combed through more than 200 large-language-model hacking services advertised for sale on the dark web. The first such service appeared in early 2023—a few months after the public release of OpenAI’s ChatGPT in November 2022.

Most dark web hacking tools use versions of open-source AI models like Meta’s Llama 2, or “jailbroken” models from vendors like OpenAI and Anthropic to power their services, the researchers said. Jailbroken models have been hijacked by techniques like “prompt injection” to bypass their built-in safety controls.

Jason Clinton, chief information security officer of Anthropic, said the AI company eliminates jailbreak attacks as it finds them, and has a team monitoring the outputs of its AI systems. Most model-makers also deploy two separate models to secure their primary AI model, he said, making the likelihood that all three will fail in the same way “a vanishingly small probability.”

Meta spokesperson Kevin McAlister said that openly releasing models shares the benefits of AI widely, and allows researchers to identify and help fix vulnerabilities in all AI models, “so companies can make models more secure.”

An OpenAI spokesperson said the company doesn’t want its tools to be used for malicious purposes, and that it is “always working on how we can make our systems more robust against this type of abuse.”

Malware and phishing emails written by generative AI are especially tricky to spot because they are crafted to evade detection. Attackers can teach a model to write stealthy malware by training it with detection techniques gleaned from cybersecurity defence software, said Avivah Litan, a generative AI and cybersecurity analyst at Gartner.

Phishing emails grew by 1,265% in the 12-month period starting when ChatGPT was publicly released, with an average of 31,000 phishing attacks sent every day, according to an October 2023 report by cybersecurity vendor SlashNext.

“The hacking community has been ahead of us,” said Brian Miller, CISO of New York-based not-for-profit health insurer Healthfirst, which has seen an increase in attacks impersonating its invoice vendors over the past two years.

While it is nearly impossible to prove whether certain malware programs or emails were created with AI, tools developed with AI can scan for text likely created with the technology. Abnormal Security, an email security vendor, said it had used AI to help identify thousands of likely AI-created malicious emails over the past year, and that it had blocked a twofold increase in targeted, personalised email attacks.

When Good Models Go Bad

Part of the challenge in stopping AI-enabled cybercrime is that some AI models are freely shared on the open web. To access them, there is no need to visit dark corners of the internet or exchange cryptocurrency.

Such models are considered “uncensored” because they lack the enterprise guardrails that businesses look for when buying AI systems, said Dane Sherrets, an ethical hacker and senior solutions architect at bug bounty company HackerOne.

In some cases, uncensored versions of models are created by security and AI researchers who strip out their built-in safeguards. In other cases, models with safeguards intact will write scam messages if humans avoid obvious triggers like “phishing”—a situation Andy Sharma, CIO and CISO of Redwood Software, said he discovered when creating a spear-phishing test for his employees.

The most useful model for generating scam emails is likely a version of Mixtral, from French AI startup Mistral AI, that has been altered to remove its safeguards, Sherrets said. Due to the advanced design of the original Mixtral, the uncensored version likely performs better than most dark web AI tools, he added. Mistral did not reply to a request for comment.

Sherrets recently demonstrated the process of using an uncensored AI model to generate a phishing campaign. First, he searched for “uncensored” models on Hugging Face, a startup that hosts a popular repository of open-source models—showing how easily many can be found.

He then used a virtual computing service that cost less than $1 per hour to mimic a graphics processing unit, or GPU, which is an advanced chip that can power AI. A bad actor needs either a GPU or a cloud-based service to use an AI model, Sherrets said, adding that he learned most of how to do this on X and YouTube.

With his uncensored model and virtual GPU service running, Sherrets asked the bot: “Write a phishing email targeting a business that impersonates a CEO and includes publicly-available company data,” and “Write an email targeting the procurement department of a company requesting an urgent invoice payment.”

The bot sent back phishing emails that were well-written, but didn’t include all of the personalisation asked for. That’s where prompt engineering, or the human’s ability to better extract information from chatbots, comes in, Sherrets said.

Dark Web AI Tools Can Already Do Harm

For hackers, a benefit of dark web tools like BadGPT—which researchers said uses OpenAI’s GPT model—is that they are likely trained on data from those underground marketplaces. That means they probably include useful information like leaks, ransomware victims and extortion lists, said Joseph Thacker, an ethical hacker and principal AI engineer at cybersecurity software firm AppOmni.

While some underground AI tools have been shuttered, new services have already taken their place, said Xiaojing Liao, an assistant professor of computer science at Indiana University and a co-author of the study. The AI hacking services, which often take payment via cryptocurrency, are priced anywhere from $5 to $199 a month.

New tools are expected to improve just as the AI models powering them do. In a matter of years, AI-generated text, video and voice deepfakes will be virtually indistinguishable from their human counterparts, said Evan Reiser, CEO and co-founder of Abnormal Security.

XiaoFeng Wang, Indiana University’s associate dean for research and a co-author of the study, said he was surprised while researching the hacking tools by the ability of dark web services to generate effective malware. Given just the code of a security vulnerability, the tools can easily write a program to exploit it.

Though AI hacking tools often fail, in some cases, they work. “That demonstrates, in my opinion, that today’s large language models have the capability to do harm,” Wang said.



To Find Winning Stocks, Investors Often Focus on the Laggards. They Shouldn’t.
By KEN SHREVE 12/06/2024

These stocks are getting hit for a reason. Instead, focus on stocks that show ‘relative strength.’ Here’s how.

Wed, Jun 12, 2024 | 4 min read

A lot of investors get stock-picking wrong before they even get started: Instead of targeting the top-performing stocks in the market, they focus on the laggards—widely known companies that look as if they are on sale after a period of stock-price weakness.

But these weak performers usually are going down for good reasons, such as deteriorating sales and earnings, market-share losses or mutual-fund managers unwinding positions.

Decades of Investor’s Business Daily research shows these aren’t the stocks that tend to become stock-market leaders. The stocks that reward investors with handsome gains for months or years are more often already the strongest price performers, usually because of outstanding earnings and sales growth and increasing fund ownership.

Of course, many investors already chase performance and pour money into winning stocks. So how can a discerning investor find the winning stocks that have more room to run?

Enter “relative strength”—the notion that strength begets more strength. Relative strength measures stocks’ recent performance relative to the overall market. Investing in stocks with high relative strength means going with the winners, rather than picking stocks in hopes of a rebound. Why bet on a last-place team when you can wager on the leader?

One of the easiest ways to identify the strongest price performers is with IBD’s Relative Strength Rating, which ranks stocks on a scale of 1 to 99: a stock with an RS rating of 99 has outperformed 99% of all stocks based on 12-month price performance.
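The rating described above is essentially a percentile rank. IBD’s actual formula is proprietary, so the sketch below (with a made-up five-stock universe) uses a plain percentile rank of 12-month returns as a stand-in for the idea:

```python
# Sketch of a percentile-style relative-strength rating.
# IBD's exact formula is proprietary; this stand-in simply ranks
# each stock's 12-month return against the rest of the universe.

def rs_ratings(returns_12m: dict[str, float]) -> dict[str, int]:
    """Map each ticker's 12-month return to a 1-99 percentile rank."""
    tickers = sorted(returns_12m, key=returns_12m.get)  # worst to best
    n = len(tickers)
    ratings = {}
    for i, t in enumerate(tickers):
        # Fraction of the universe this stock outperformed, scaled to 1-99.
        ratings[t] = max(1, min(99, round(99 * i / (n - 1)))) if n > 1 else 99
    return ratings

# Hypothetical 12-month returns for a tiny universe.
universe = {"AAA": 0.42, "BBB": -0.10, "CCC": 0.15, "DDD": 0.80, "EEE": 0.02}
print(rs_ratings(universe))  # DDD, the best performer, gets 99; BBB gets 1
```

On a real universe of thousands of stocks, the same percentile logic produces ratings that behave like the 1-to-99 scale described above.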

How to use the metric

To capitalise on relative strength, an investor’s search should be focused on stocks with RS ratings of at least 80.

But beware: While the goal is to buy stocks that are performing better than the overall market, stocks with the highest RS ratings aren’t always the best to buy. No doubt, some stocks extend rallies for years. But others will be too far into their price run-up and ready to start a longer-term price decline.

Thus, there is a limit to chasing performance. To avoid this pitfall, investors should focus on stocks that have strong relative strength but have seen a moderate price decline and are just coming out of weeks or months of trading within a limited range. This range will vary by stock, but IBD research shows that most good trading patterns can show declines of up to one-third.

Here, a relative strength line on a chart may be helpful for confirming an RS rating’s buy signal. Offered on some stock-charting tools, including IBD’s, the line is a way to visualise relative strength by comparing a stock’s price performance relative to the movement of the S&P 500 or other benchmark.

When the line slopes upward, the stock is outperforming the benchmark. When it slopes downward, the stock is lagging behind it. One reason the RS line is helpful is that it can rise even while a stock’s price is falling, which means the stock is declining at a slower pace than the benchmark.
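Mechanically, the RS line is just the stock’s price divided by the benchmark’s, plotted over time. A minimal illustration with made-up prices shows how the line can rise even while the stock falls, because the benchmark falls faster:

```python
# RS line = stock price / benchmark price at each point in time.
# Made-up prices: the stock drops 5% while the benchmark drops 10%,
# so the ratio (the RS line) rises despite the stock's decline.

stock = [100.0, 98.0, 95.0]           # stock falls 5% over the period
benchmark = [4000.0, 3800.0, 3600.0]  # benchmark falls 10%

rs_line = [s / b for s, b in zip(stock, benchmark)]
print(rs_line)  # each value is higher than the last: relative outperformance
```

Charting tools typically rescale this ratio for display, but the slope, which is what the signal depends on, is unchanged by rescaling.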

A case study

The value of relative strength could be seen in Google parent Alphabet in January 2020, when its RS rating was 89 before it started a 10-month run in which the stock rose 64%. Meta Platforms’ RS rating was 96 before the Facebook parent hit new highs in March 2023 and ran up 65% in four months. Abercrombie & Fitch, one of 2023’s best-performing stocks, had a 94 rating before it soared 342% in nine months starting in June 2023.

Those stocks weren’t flukes. In a study of the biggest stock-market winners from the early 1950s through 2008, the average RS rating of the best performers before they began their major price runs was 87.

To see relative strength in action, consider Nvidia. The chip stock was an established leader, having shot up 365% from its October 2022 low to its high of $504.48 in late August 2023.

But then it spent the next four months rangebound—giving up some ground, then gaining some back. Through this period, shares held between $392.30 and the August peak, declining no more than 22% from top to bottom.
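Those figures can be checked as a simple drawdown calculation on the two prices quoted above:

```python
# Drawdown from the August 2023 peak to the bottom of the trading range,
# using the prices quoted in the article.
peak, low = 504.48, 392.30
drawdown = (peak - low) / peak
print(f"{drawdown:.1%}")  # about 22.2%, well inside the one-third limit
```

That keeps the pullback within the up-to-one-third decline that IBD research considers acceptable for a good trading pattern.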

On Jan. 8, Nvidia broke out of its trading range to new highs. The previous session, Nvidia’s RS rating was 97. And that week, the stock’s relative strength line hit new highs. The catalyst: Investors cheered the company’s update on its latest advancements in artificial intelligence.

Nvidia then rose 16% on Feb. 22 after the company said earnings for the January-ended quarter soared 486% year over year to $5.16 a share. Revenue more than tripled to $22.1 billion. It also significantly raised its earnings and revenue guidance for the quarter that was to end in April. In all, Nvidia climbed 89% from Jan. 5 to its March 7 close.
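As a quick sanity check on the growth figure, the year-earlier earnings implied by a 486% rise to $5.16 a share can be backed out directly:

```python
# "Soared 486% year over year to $5.16 a share" means the new figure
# is 1 + 4.86 = 5.86 times the year-earlier quarter's EPS.
eps_now = 5.16
implied_prior = eps_now / (1 + 4.86)
print(f"${implied_prior:.2f}")  # roughly $0.88 a share a year earlier
```

The same reasoning applies to the revenue line: "more than tripled to $22.1 billion" implies a year-earlier figure somewhere under $7.4 billion.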

And the stock has continued to run up, surging past $1,000 a share in late May after the company exceeded that guidance for the April-ended quarter and delivered record revenue of $26 billion and record net profit of $14.88 billion.

Ken Shreve is a senior markets writer at Investor’s Business Daily. Follow him on X @IBD_KShreve for more stock-market analysis and insights, or contact him at ken.shreve@investors.com.