I’ve seen this after inviting someone who also received a pop-up for Copilot; they probably clicked it without intending to send a request. Just seems like traditionally bad Microsoft design.
> how do you know they didn’t request this.
That's easy. It's the Microsoft dark pattern era of computing, and Microsoft owns github, and Microsoft is heavily pushing "AI" and "Copilot" everywhere. Therefore, the person who requested this is clearly Microsoft.
If it was actually an employee/someone you know, Microsoft would tell you, or the person would have sent you a link and said "this is cool, we should get this".
It’s your future AI programmer that clicked it.
I'm imagining a kind of weaponized innocent straightforwardness:
"GitHub support, it appears a hacker compromised our organization, and slipped up by clicking to request an AI feature. There's not enough auditing on this, so we need to know exactly which user-account did that in order to disable it and stabilize the situation."
The [0, 1, infinity] rule has been simplified to the [1, infinity] rule.
I absolutely do not want Copilot, but I've almost clicked the button to request it from my organization of just me a couple of times.
That was one of my other theories: a mis-click, or a click that the user thought applied to one of their other organizations. But if they are setting up dark patterns to farm clicks, are those really expressions of user intent?
Why is it so hard to believe that someone in your organization accidentally or deliberately clicked the "get copilot" button in their IDE or on any Github surface? I'm not saying Github is right here (this is an ad after all), but directly accusing them of dishonesty/cheating without even putting in the bare minimum effort to confirm it on your end seems weird. AI-assisted coding has been all the rage in tech circles for a while now, and I'd bet there's at least one person at your company who wants to try it out.
Although I guess the purpose of the post was to generate internet outrage, and I have no doubt it will be successful at that.
It is an organization of 6, 4 of whom are guests (so they would likely not incur a charge without asking, or at least mentioning it). But the theory that somebody clicked by accident is pretty good.
I think of this more as pushing back a bit and warning others.
What you encountered here is a dark pattern, but not the way you think it is.
They try to upsell a feature, but because it is a B2B SaaS product, the users are not the ones holding the credit card.
So, to upsell, they have to get through your users to the admin / license owner. They prominently display a shiny feature, sometimes leading users there through misdirection. Then comes the call to action: you need a bigger license, go ask your admin!
And of course, someone thought it was a good idea to add a button that sends a templated email to the organization's primary address, to reach the person with the credit card.
This notably does not count as a marketing communication, so they can send it even without prior explicit consent.
So yeah, someone on your team probably triggered it, likely without realizing it, because they were misled.