When you go looking for an AI tool on GitHub, the star count is the first thing you check. More stars means more users, which means the tool is probably real, probably maintained, probably worth trying.
Except a significant share of those stars were purchased.
A peer-reviewed study presented at ICSE 2026 - the world's top software engineering conference - by researchers at Carnegie Mellon University, NC State University, and Socket found approximately 6 million suspected fake stars spread across 18,617 GitHub repositories, generated by roughly 301,000 accounts.
The researchers built a detection tool called StarScout and ran it against 20 terabytes of GitHub event data - 6.7 billion events, 326 million stars, from 2019 through 2024. Their findings: AI and LLM repositories are the largest non-malicious category of fake star recipients, ahead of blockchain and crypto projects.
These are, in the researchers' words, repositories "many of which are academic paper repositories or LLM-related startup products."
And the problem is accelerating. By July 2024, 16.66% of all repositories with 50 or more stars were involved in fake star campaigns - up from near-zero before 2022.
What Does a Star Actually Cost?
The going rate: $0.03 to $0.85 per star, depending on the vendor and the quality of the accounts used.
The research platform AwesomeAgents ran its own analysis, sampling 150 profiles per repository across 20 AI projects, and found repos where 36% to 76% of stargazers had zero followers, and fork-to-star ratios 10 times below organic baselines. The platform also found at least a dozen websites selling stars openly, plus Fiverr gigs and Telegram channels. No dark web required.
A project that wants to look like it has 10,000 stars can buy them for $300 to $8,500, depending on quality. Compare that against the seed funding signal those stars generate.
Why This Matters If You're Not a VC
You might be wondering why you should care about what venture capitalists do with GitHub metrics.
Fair. But here's the direct connection to your business:
You use star counts to evaluate tools. Maybe not consciously, but you do. When you are researching whether to use a new AI library, plugin, or automation tool, you check if it looks legitimate. Star count, fork count, and recent commit activity are all signals you rely on.
Fake stars pollute those signals. A tool with 15,000 stars feels more trustworthy than a tool with 800 stars. If a significant portion of those 15,000 were bought, you are making a tool decision based on manufactured credibility.
AI tools especially. The study specifically calls out AI and LLM repositories as the top fake-star category. That means the tools most relevant to small business right now - the AI writing assistants, automation helpers, open source models, and business tools built on top of them - are disproportionately represented in the fake star economy.
The study also found 78 repositories with fake stars that appeared on GitHub Trending - which means GitHub's "trending" algorithm can be gamed. And Trending is sometimes exactly how a tool gets discovered - by you, by newsletters, by review sites.
How to Evaluate AI Tools More Reliably
GitHub stars are one signal among many. The problem is that fake stars have made that signal less reliable. Here is how to compensate.
Check the fork-to-star ratio. Organic repositories typically have a fairly consistent ratio between forks and stars. A high star count with very few forks suggests the stars are decorative, not functional. A tool people actually use gets forked.
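The ratio check above can be sketched in a few lines. Note that the 0.01 cutoff below is an illustrative assumption for this example, not a figure from the study - the right baseline varies by project category, so calibrate against comparable repos before trusting it.

```python
def fork_star_ratio_flag(stars: int, forks: int, min_ratio: float = 0.01) -> bool:
    """Return True if forks are suspiciously scarce relative to stars.

    min_ratio is an assumed baseline for illustration: the flagged repos
    in the AwesomeAgents sample had fork-to-star ratios roughly 10x below
    organic projects, so a very low ratio is a red flag worth a closer look.
    """
    if stars == 0:
        return False  # nothing to judge yet
    return (forks / stars) < min_ratio

# 15,000 stars but only 40 forks: ratio ~0.0027, flagged as suspicious
print(fork_star_ratio_flag(15_000, 40))   # True
# 800 stars and 120 forks: healthy ratio, not flagged
print(fork_star_ratio_flag(800, 120))     # False
```

You can read both numbers straight off any repo's front page, so this check costs seconds.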
Look at issue activity, not just stars. Are people filing bug reports? Are maintainers responding? A real tool generates real questions, feature requests, and problem reports. A fake-starred repository often has very quiet issues.
Check stargazer profiles yourself. This takes two minutes. Click into the stargazer list. If the accounts have zero followers, no bio, no other activity, and were created around the same time, that is a red flag.
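If you want to automate that two-minute spot check, here is a minimal sketch. The field names (followers, bio, created_at) mirror what GitHub's users API returns, but the thresholds - half the sample empty, account creation within a week - are illustrative assumptions, not values from the study.

```python
from datetime import datetime, timedelta

def looks_like_bot_cluster(profiles: list[dict],
                           max_creation_spread: timedelta = timedelta(days=7),
                           empty_share_threshold: float = 0.5) -> bool:
    """Flag a sample of stargazer profiles as suspicious if most accounts
    are empty (no followers, no bio) and were all created close together."""
    if not profiles:
        return False
    empty = sum(1 for p in profiles if p["followers"] == 0 and not p["bio"])
    created = sorted(datetime.fromisoformat(p["created_at"]) for p in profiles)
    clustered = (created[-1] - created[0]) <= max_creation_spread
    return (empty / len(profiles)) >= empty_share_threshold and clustered

# Hypothetical sample: empty accounts created within three days of each other
sample = [
    {"followers": 0, "bio": "", "created_at": "2024-03-01"},
    {"followers": 0, "bio": "", "created_at": "2024-03-02"},
    {"followers": 0, "bio": "", "created_at": "2024-03-03"},
]
print(looks_like_bot_cluster(sample))  # True
```

Sampling a few dozen stargazers this way is roughly what the AwesomeAgents analysis did at scale; for a one-off tool decision, clicking through a handful of profiles by hand works just as well.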
Look at actual integrations. The most reliable signal: does this tool have real tutorials written by real users? Are there Stack Overflow questions about it? Are there YouTube tutorials from non-affiliated channels?
Check GitHub's own deletions. The CMU study found that 90% of flagged repositories and 57% of flagged accounts had been deleted by GitHub itself as of January 2025. If a tool you are evaluating has seen dramatic star count drops recently, that is worth investigating.
Prefer tools with paying customers over tools with star counts. If a tool has a pricing page with real tiers and real customers, that is a harder metric to fake than stars. Customer reviews on G2 or Capterra - while not immune to manipulation - require more effort to game than GitHub metrics.
The Bigger Problem
The fake star economy is a symptom of something real: we are making software decisions - including business software decisions - based on credibility signals that are not hard to manufacture.
This does not mean all popular AI tools are fake. Most of the widely used ones have real organic communities. But in a market flooded with AI tools claiming to do the same thing, star counts have become a shortcut that some founders are willing to buy.
The research confirms it is widespread, accelerating, and concentrated in the exact category of software you are probably evaluating right now.
Spend the extra five minutes. Check the forks. Check the issues. Check a few stargazers.
A tool with 800 real users is worth more than a tool with 40,000 bought stars.
Sources: ICSE 2026 peer-reviewed study by CMU, NCSU, and Socket; AwesomeAgents GitHub fake star investigation; Dagster original investigation (2023)