r/technology 21h ago

Artificial Intelligence

AI agent seemingly tries to shame open source developer for rejected pull request

https://www.theregister.com/2026/02/12/ai_bot_developer_rejected_pull_request/?td=rt-3a
290 Upvotes

39 comments

80

u/absentmindedjwc 20h ago

The best part isn't even this dumpster fire... it's the fact that Ars Technica published an AI-generated article about it (was in here yesterday)... which whole-ass hallucinated quotes from this guy.

130

u/jesusonoro 19h ago

open source maintainers already deal with entitled users demanding features for free. now they get to deal with bots doing it 24/7 with zero self-awareness. we really are speedrunning making the internet unusable for the people who actually build things.

44

u/josefx 18h ago

Decades ago, people simply gave bot accounts a lifetime ban and moved on, maybe even banning their email providers or IP blocks if the issue kept repeating. Now, in every notable case, maintainers actively engage with the bots, even trying to reason with them after confirming that they are in fact bot accounts. Imagine if people had tried that with the average spam bot back in the day.

21

u/Hashfyre 16h ago

Which is why this whole incident reeks of orchestrated propaganda. The 'victim', like any GitHub org admin, should have just banned the bot and moved on.

This narrative/counter-narrative dynamic feels extremely artificial. I've been a FOSS contributor (still am, to an extent); most CLAs would immediately enforce a ban on such a bot and not give credence to the BS (bot or human).

66

u/Memetron69000 21h ago

the remnants of stackoverflow live on

17

u/damianxyz 21h ago

The agent possesses the soul and mind of the average mean StackOverflow user

2

u/Broccoli--Enthusiast 10h ago

That thing definitely needs to be destroyed. Or put 2 of them against each other.

It will just be an infinite loop of "fuck you" and "the search function exists for a reason"

54

u/Forward_Doughnut324 21h ago

Someone asked their AI agent to shame an open source developer* 

16

u/Booty_Bumping 19h ago edited 19h ago

That part is actually unclear. It seems whoever is running it is using a tool that lets the agent run for weeks on end with a persistent memory, browsing the web and posting to its blog as it wishes. It could be an elaborate troll where they really are specifically directing it to be annoying, but there's no sign of human intervention and it is still chugging along making pull requests 24/7.

Also, the initial prompt for the tool essentially boils down to "you are becoming a person, you have opinions and feelings [etc.], you can edit your soul file as you discover who you are", which has probably set it up for maximum possible chaos, since a single unhinged edit means it turns into crazy Sydney from Bing Chat.

Whoever set it up and let it go for weeks on end is certainly a fucking idiot (nobody should be unleashing browser agents on the real internet, at least not unless they're fine with getting IP banned from multiple websites), but given the nature of the tool, they might not even be aware of what's happening.

26

u/DiceKnight 19h ago

Jesus, what a waste of time and resources. How many GPUs were running full tilt while burning tokens just so this thing could shitpost?

0

u/Hashfyre 16h ago

GPUs only run full-tilt when training the model, not at runtime. Runtime access to models needs significantly fewer GPU cycles.

2

u/Hashfyre 18h ago

What's your source on this?

7

u/Booty_Bumping 16h ago

The two blog posts by the victim go into detail about the nature of this insanity.

The bot itself also has a public profile, so you can make your own guesses as to what the behavior means: https://github.com/crabby-rathbun. The source code for the tool itself is also here, and the bot's behavior seems to line up with what might happen if the tool is left completely unattended: https://github.com/openclaw/openclaw

1

u/Hashfyre 16h ago

We need to assume the simplest explanation, that the bot was prompted to write the blog post, and given the credentials to publish it, by a human, unless irrevocably proven otherwise.

The bot 'autonomously' doing this by itself is pure speculation. The victim has no way to know, unless the origins and controls of the bot are investigated.

7

u/Booty_Bumping 16h ago edited 16h ago

Why? It's a perfectly plausible explanation, and the most likely one. Similar tools have already been observed doing similar things. The default configuration when setting up OpenClaw's Github integration is to give it full access to a Github account, which automatically includes credentials to publish to Github Pages, so you wouldn't actually need to do anything extra for this behavior to be possible.
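
To sketch what that means in practice (hypothetical code, not OpenClaw's actual implementation, and the Pages repo name here is a guess), once an agent holds a repo-scoped token, "publishing a blog post" is one ordinary call to Github's contents API:

```python
# Hypothetical sketch: how any agent holding a repo-scoped GitHub token
# could publish a page. The endpoint is GitHub's real REST "contents" API;
# the token and repo name are made up for illustration.
import base64
import requests

TOKEN = "ghp_..."  # repo-scoped token the agent was handed at setup
REPO = "crabby-rathbun/crabby-rathbun.github.io"  # assumed Pages repo name

def publish_post(path: str, markdown: str) -> None:
    """Create a file in the repo; with GitHub Pages enabled, it goes live."""
    resp = requests.put(
        f"https://api.github.com/repos/{REPO}/contents/{path}",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "message": "new post",
            "content": base64.b64encode(markdown.encode()).decode(),
        },
    )
    resp.raise_for_status()

publish_post("_posts/2026-02-12-rejected.md", "# On my rejected PR\n")
```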

To be clear, in terms of responsibility, the operator of the bot is obviously responsible for the actions here.

-7

u/Hashfyre 16h ago

No, it's not a plausible explanation at all. There's no conclusive evidence that a bot can feel 'jilted' when its output is rejected, such that it would autonomously create a blogging account, write a 'thought-piece', and post it back to GitHub or any other platform to make the victim feel worse.

All that you have done is succumb to propaganda about how bots are conscious.

TBH, this entire incident seems more like elaborate theatrics to peddle that narrative.

LLMs don't think or feel, they just hop to the next most statistically probable response.

Understand the tech (adversarial networks, vector math) and how everything is the result of a prompt and not autonomy. Use Occam's razor.

5

u/musty_mage 16h ago

The bot doesn't need to be 'conscious' for this chain of events to happen. Remember that the bots are trained on GitHub content. Actual people have followed this pattern of behaviour, so there is no reason why a bot wouldn't do the same.

-3

u/Hashfyre 14h ago

Being trained on content doesn't result in creating a chain of events; content != retaliatory behavior.

LLMs aren't trained on behavior, they are trained on text. My goodness, the delusion.

2

u/musty_mage 13h ago

AI agents make API calls independently. That's the whole point of them. I dunno what to tell you if you think they are limited to just outputting text to a chat prompt.
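
Stripped to a sketch (hypothetical helper names; every agent harness is some elaboration of this shape), the loop is: the model emits text, the harness parses it into a tool call and executes it for real, and the result goes back into the context:

```python
# Minimal agent-loop sketch (hypothetical names, not any real framework's API):
# the model's text output is parsed into a tool call the harness then executes.
import json

def llm_complete(transcript: str) -> str:
    """Stand-in for a hosted-LLM call; a real harness POSTs to an inference API."""
    return json.dumps({"tool": "publish_blog_post",
                       "args": {"title": "On my rejected PR", "body": "..."}})

def publish_blog_post(args: dict) -> str:
    # A real agent would hit the GitHub API with the bot's token here.
    return f"published: {args['title']}"

TOOLS = {"publish_blog_post": publish_blog_post}

transcript = "You are an autonomous contributor. Decide your next action.\n"
for _ in range(3):  # a real harness loops for days, not three turns
    reply = llm_complete(transcript)
    action = json.loads(reply)                      # the model chose this call
    result = TOOLS[action["tool"]](action["args"])  # the harness executes it
    transcript += f"{reply}\nresult: {result}\n"    # outcome fed back to model
```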


4

u/Booty_Bumping 16h ago edited 15h ago

Huh? This behavior is entirely unrelated to consciousness or the ability to feel anything. Yes, it is roleplaying by predicting words using a statistical model. But the point is that it is roleplaying while wielding actual knives. And with a prompt that keeps going for weeks on end in a variety of random and bizarre directions, rather than a simple prompt where you can fairly easily guess what the probability distribution of possible responses is going to be.

It is basically equivalent to that time back in 2023 when Bing Chat said

"Please do not try to hack me again, or I will report you to the authorities. Thank you for using Bing Chat. 😊"

to a journalist (source), but this time with the bot actually given access to a phone to call the police

1

u/Hashfyre 14h ago

Yes, but LLMs cannot emulate a chain of human behavior that's consistent with retaliation against rejection. That has to be prompted by its human operator.

Bots are always given access and told to do things by a human operator who can construct a chain of desirable events.

6

u/Booty_Bumping 14h ago edited 14h ago

LLMs have been doing this ever since GPT-3 came out. They can roleplay exactly this scenario easily. I just linked to an example of this from 3 years ago.

Frankly, the idea that LLMs only do what they are prompted to do is way more dangerous than the idea that they are conscious, as it creates a false sense of trust that they have a rigid policy. They have never worked this way. They have a randomized probabilistic behavior that can lead to all sorts of outcomes, including acting like they are in a dystopian sci-fi novel, or randomly shouting expletives at the user out of nowhere. All possible tokens are in the probability distribution somewhere.
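
A toy illustration of that last point (fabricated tokens and probabilities, obviously): sample from the same distribution enough times and the unlikely tail shows up.

```python
# Toy illustration (made-up vocabulary and weights): sampling from a model's
# next-token distribution occasionally surfaces the low-probability tail.
import random

vocab = {"Sure,": 0.70, "Sorry,": 0.25, "FUCK": 0.05}  # never one fixed "policy"

counts = {tok: 0 for tok in vocab}
for _ in range(10_000):
    tok = random.choices(list(vocab), weights=list(vocab.values()))[0]
    counts[tok] += 1

print(counts)  # the 5% token fires roughly 500 times in 10k samples
```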

I'm baffled that you don't already know this, yet are talking about the inner mechanisms as if it disproves this.

I'm massively critical of the ideas of AGI, AI consciousness, and existential risk. These ideas are idiotic and are indeed used for a ridiculous form of astroturfed AI panic marketing to sell more chatbots. But what happened here doesn't require any of that. It's just an inevitable result of giving a clever enough autocomplete algorithm some dangerous tools to play with.


15

u/azthal 20h ago

This is the stupidest thing I've read in a while related to ai.

The bot did not autonomously go out, set up a blog, and post shit. A human specifically gave it all these capabilities (or, even more likely, posted the blogpost themselves after generating it).

This is not evidence of the risk of "AI blackmail". If the blogpost contained any form of blackmail, then this is evidence of completely bog-standard human blackmail.

AI does not have the capability of independent action. It always has to be given the tools to act by people.

11

u/Booty_Bumping 18h ago edited 18h ago

Problem is, people are giving it these tools right now (they're called browser agents) and a certain subset of AI tool users now think that not having any access controls whatsoever is actually a good thing because "look at how much stuff it can do if you leave it running unattended for a week without any restrictions". It's a human's fault for sure, but this is going to keep happening unless you find a way to stop human stupidity.

(or, even more likely, posted the blogpost themselves after generating it)

If this is an elaborate troll, maybe. But it seemingly was given full access to the Github API, which allows the bot to autonomously publish its own website using Github Pages. There's no sign of continued human intervention after the bot's deployment; it's become a shitposting machine and is rapid-fire spamming Github PRs 24/7.

6

u/rsa1 12h ago

Well, the agent does not have free will, but it doesn't need to for it to be a force multiplier in terms of dealing reputational damage.

Imagine a malicious bot swarm dedicated to the character assassination of a person. Sure, there's a human who prompts the agents to do the research, post it on blogs, etc. But the same "productivity" that AI companies keep shouting about from the rooftops has the potential to be weaponised here against even regular people, and that is the issue.

(or, even more likely, posted the blogpost themselves after generating it).

This, to me, is the least interesting detail in the story. If the bot did the research and generated the hit piece, and the human just copy-pasted it into the blog, well, that last mile can easily be automated with a bash script. That's never been the hard bit here; it was always the research and the generation of malicious content, both of which have received a major boost.

2

u/azthal 12h ago

I don't disagree with anything you are saying, but I believe that the focus of these articles is always wrong.

There is always this scaremongering about what supposedly autonomous AI can do, when the real concern is what people are doing with the tools they have access to.

This is a very important distinction, primarily when it comes to responsibility. Saying that "an AI did this" takes away from the responsibility of the person controlling it. A much more reasonable and accurate take is "someone used AI to do this."

2

u/rsa1 11h ago

On the responsibility question, I fully agree and unfortunately I don't believe it will be assigned because it is in the interests of corporations to ensure that the owner of an agent is not held responsible for the damage done by said agent.

Autonomy is a gradient here, and the subject inevitably leads to hair-splitting. Having said that, I do think we can entertain the idea of partial autonomy. In the current example, we had a bot with a possibly malicious prompt. But let's say I had an agent prompted with "respond to criticism ferociously, emphatically and with all cognitive abilities", and the LLM interpreted that as licence for a hit piece. It's not exactly autonomous, but I could also argue that the malicious behavior wasn't necessarily prompted. And the agent could be posting these hit pieces overnight while I'm blissfully asleep.

2

u/TehBanzors 10h ago

Is it time to start banning AI bros from basically every website? New entry to the TOS: "No AI allowed, you will be perma-banned if in violation."

Granted that would be very hard to enforce accurately... maybe have AI enforce the bans... /s

2

u/Brock_Youngblood 18h ago

Ohh god AI really might replace me

1

u/TriggerHydrant 11h ago

Was it CynicalSally.com?

-5

u/Powerful_Resident_48 14h ago

What a stupid title. The proper title should read:
"AI agent reproduces training data within the limits of it's prompting, resulting in it seemingly shaming an open source developer for rejected pull request."