You can take TWO screenshots, moments apart, open them in GIMP, paste one over the other and choose any one of these layer modes:
Lighten, Screen, Addition, Darken, Multiply, Linear burn, Hard Mix, Difference, Exclusion, Subtract, Grain Extract, Grain Merge, or Luminance.
https://ibb.co/DDQBJDKR
> You can take TWO screenshots, moments apart, open them in GIMP, paste one over the other and choose any one of these layer modes:
You actually don't need any image editing skill. Here is a browser-only solution:
1. Take two screenshots.
2. Open these screenshots in two separate tabs on your browser.
3. Switch between tabs very, very quickly (use CTRL-Tab)
Source: tested on Firefox
Is it possible to modify the webpage to make the pattern of the text go down and the pattern of the background go up?
Out of sheer curiosity, I put three screenshots of the noise into Claude Opus 4.1, Gemini 2.5 Pro, and GPT 5, all with thinking enabled, with the prompt “what does the screen say?”.
Opus 4.1 flagged the message due to prompt injection risk, Gemini made a bad guess, and GPT 5 got it by using the code interpreter.
I thought it was amusing. Claude’s (non) response got me thinking - first, it was very on brand; second, the content filter was right - pasting images of seemingly random noise into a sensitive environment is a bad idea!
Neat idea.
A friend of mine made a similar animated-GIF-type captcha a few years ago, but based on multiple scrolling horizontal bars that would each reveal their portion of the underlying image, including the letters, and made a (friendly) bet that it should be pretty hard to solve.
Grabbing the entire set of frames, greyscaling them, averaging over all of them, and then applying a few minor fixups like thresholding and contrast adjustment worked easily enough, as the letters were revealed in more frames than not (I don't think it would affect the difficulty much if that were different). After that the rest of the image was pretty amenable to character recognition.
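A minimal sketch of that kind of averaging pipeline, assuming a multi-frame GIF (the filename, threshold and contrast handling here are made up for illustration, not the original code):

```python
# Sketch of the frame-averaging attack: average all frames, then threshold.
# "captcha.gif" and the 128 threshold are placeholders.
import numpy as np
from PIL import Image, ImageSequence

gif = Image.open("captcha.gif")
frames = [np.asarray(f.convert("L"), dtype=np.float64)
          for f in ImageSequence.Iterator(gif)]

# Average over all frames: letter pixels, revealed in more frames than not,
# end up with a different mean brightness than the background.
mean = sum(frames) / len(frames)

# Stretch contrast, then apply a simple threshold to get a clean binary image.
mean = (mean - mean.min()) / (mean.max() - mean.min()) * 255
binary = (mean > 128).astype(np.uint8) * 255

Image.fromarray(binary).save("averaged.png")  # now amenable to OCR
```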
Or just copy the text from the url. Not very secure, really. :D
Or just ... record a video of the screen.
Yeah if this became popular, we'd have another Show HN for a tool that automated that.
"you cannot screenshot this already illegible mess of white noise"
This game disappears if you pause it: https://youtube.com/watch?v=Bg3RAI8uyVw
This is great - seems to be the same effect of hiding a shape using an animated noise pattern on a background of static noise.
They even provide the source code for the effect:
https://github.com/brantagames/noise-shader
Yes - I was thinking of this. It solves various complicated problems such as rendering distance information in this format.
This is great. The sphere example looks especially pleasing. It also reminds me of the game The Voidness.
First time seeing this, makes me smile involuntarily.
If anybody implements that to anti-screenshot some sensitive data, somebody else will use another phone, a tablet or a camera to record a video of it. Nice idea though.
I first saw this effect in a video from Branta Games.
https://www.youtube.com/watch?v=Bg3RAI8uyVw
The effect is disrupted by introducing rendering artifacts, e.g. by watching the video in 144p or, in this case, by zooming out.
I'd love to know the name of this effect, so I can read more about the fMRI studies that make use of it.
What I've found so far:
Random Dot Kinematogram
Perceptual Organization from Motion (video of Flounder camouflage)
https://www.youtube.com/watch?v=2VO10eDIyiE
https://gist.github.com/jncornett/d7cb397ce3ceff268a0ee1b86f...
On iPhone: screen-record. Take screenshots every couple of seconds. Overlay the images with 50% transparency (I use Procreate Pocket for this part).
This should have an epilepsy warning. Or something of that kind. It certainly made me feel sick.
This is more a curious question for those affected by epilepsy. If you know you are triggered by such things, how long an exposure is required to trigger an effect? Are you able to notice that media may be triggering and simply close it, or are exposure and triggering almost instantaneous?
I saw the game using this rendering weeks ago, looked okay. Now I saw a font and tried to hold on to the edges while reading it, and yes, somehow this made me more (sea) sick. Strange.
Perhaps faces would be strongest in terms of reaction.
Oh yes please add a warning. My brain is burning right now!
As soon as I read the title I knew it would be akin to "Bad Apple that disappears when you pause it"
https://www.youtube.com/watch?v=bVLwYa46Cf0
And another version of this, using apples instead of white noise
https://www.youtube.com/watch?v=r40AvHs3uJE
Neat! I've seen stuff like this that works as a magic eye thing. So you cross your eyes (or make them parallel, depending on the type of image) and it makes a 3d animation appear in front of the page.
Others have mentioned Branta Games, but I first saw the effect here: https://youtu.be/TdTMeNXCnTs
thanks, that's also the best explained one!
I don't see any text: just a scrolling down screen of random black/white pixels.
It seems to depend on reading pixels from a canvas. This is commonly used for fingerprinting users on the web, so you have to disable some privacy plugins.
This idea has made me think of another subject - would it be possible to overload a face / car plate scanning camera by using a pattern, like a QR code for example? Or a jacket made of QR codes?
This makes me feel motion-sick, which is kind of impressive because I'm normally not easily susceptible to that.
My eyes went straight into seeing 3D image mode. It's the easiest one I've seen yet! /s
Heh my eyes felt like they started bleeding
"The text disappears..." And my eyesight with it
Doesn't even show anything on LibreWolf, probably disabled WebGL as usual. I thought it was a nice error screen, but apparently it was intended, just without the text :P
Seems to work if you disable canvas fingerprinting protection.
Yeah but the randomness may produce all kinds of NSFW stuff ...
Also, it's even harder to read than most captchas.
But fun idea, it was nice to see.
Could someone please post what this disappeared bit is supposed to look like? Looks legible to me when I screenshot and open in Preview on MacOS 15.6.1 (Firefox).
You are probably browsing with zoom; that seems to screw up the rendering and makes the background and text look different. It should be just black & white random pixel noise for both background and foreground; without motion the text becomes invisible, as it blends with the background.
Has anyone tried a long exposure to see if the motion smears into something discernible? Obviously it's harder to expose a bright screen without some ND, since the shutter speed is the phone's main exposure control.
Here's the screen recording version of a long exposure (thanks for the nerd snipe) - https://gist.github.com/spro/7599415b0e47de65311557b3454771a...
Perhaps this technique could be defeated by scrolling the background in the opposite direction as the text
If you zoom out to 25% the text is clearly visible and screenshottable.
Probably the lower spatial frequencies of the noise are not matched? Not sure whether frequencies on the order of the movement frequency can actually be matched.
How do you take a “long exposure” screenshot? Isn’t every screenshot a perfect digital copy of a single frame or a full on video?
Clearly, I meant using a camera, and I'm guessing you knew that too
Not the parent but that was not at all clear to me. I immediately thought of taking multiple successive instantaneous screenshots and then stacking them. I'm not sure I would have thought of using a camera within a few minutes to an hour, it's not a tool I would ever reach for normally.
I just did this with 50% transparency. It works
Also not the parent but how the hell did you not understand what "long exposure" means ffs
Because the context is about screenshots and context matters
"ffs".
It's a nice effect, but I don't think it's usable in practice, because it's not accessible for visually impaired users: not enough contrast between foreground text and background
Another idea I had with this concept is to make an LLM-proof captcha. Maybe humans can detect the characters in the 'motion' itself, which could be unique to us?
- The captcha would be generated like this on a headless browser, and recorded as a video, which is then served to the user.
- We can make the background also move in random directions, to prevent just detecting which pixels are changing and drawing an outline.
- I tried also having the text itself move (bounce like the DVD logo). Somehow makes it even more readable.
I definitely know nothing about how LLMs interpret video, or optics, so please let me know if this is dumb.
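For the recording step mentioned in the first bullet, something like Playwright's built-in video capture could work on a headless browser. A rough sketch, where the captcha URL, size and timing are placeholders I made up:

```python
# Sketch: render the animated captcha in a headless browser and save a short
# video of it. The endpoint and parameters below are hypothetical.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context(
        record_video_dir="captcha_videos/",
        record_video_size={"width": 640, "height": 360},
    )
    page = context.new_page()
    page.goto("https://example.com/captcha?text=HELLO")  # hypothetical captcha page
    page.wait_for_timeout(3000)  # let the noise animation run for a few seconds
    context.close()  # the video file is written when the context closes
    browser.close()
```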
Take N screenshots, XOR them pairwise, OR the results, then perform normal OCR.
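If the frames really are 1-bit black/white noise over a static background, a sketch of that pipeline might look like this (filenames and the OCR call are assumptions, not from the demo):

```python
# Sketch: XOR successive screenshots to find pixels that changed, OR the
# diffs together, then hand the result to ordinary OCR.
import glob
import numpy as np
from PIL import Image

# Load N screenshots as 1-bit boolean arrays (True = white pixel).
frames = [np.asarray(Image.open(p).convert("1"), dtype=bool)
          for p in sorted(glob.glob("shot_*.png"))]

# XOR each pair of consecutive frames: only the animated text region flips,
# the static background cancels out.
diffs = [np.logical_xor(a, b) for a, b in zip(frames, frames[1:])]

# OR all diffs so every pixel that ever changed is marked.
mask = np.logical_or.reduce(diffs)

Image.fromarray(mask.astype(np.uint8) * 255).save("mask.png")

# Then run normal OCR on mask.png, e.g. with pytesseract (assumption):
# import pytesseract
# print(pytesseract.image_to_string(Image.open("mask.png")))
```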
I don't think we need more capable people thinking of silly captchas.
As if captchas aren't painful enough for visually impaired users...
Cool. I used the Windows snipping tool and just screen-recorded it.
Not technically a screenshot, I guess, but trivially easy to do with software I had lying around all the same. https://media4.giphy.com/media/v1.Y2lkPTc5MGI3NjExYXloZ3Z0NT...
Sure, but I can just record a video instead. It doesn’t disappear then!
Firefox on Android seems to just be a static image, I can't see any text.
Probably the result of canvas fingerprinting protection configured in your `about:config`? With a default profile it seems to work fine on Firefox for Android.
Wfm
Even if some have found a workaround, this is a cool feature.
This could be used for Captcha systems. Would current bots be able to decipher these?
What am I supposed to see here? It's just a static noisy background.
Had the same in LibreWolf under Manjaro Linux. Worked in Chrome.
Animation, but only inside a region shaped like the letters of "Hello".
Ha cool! How’s it work?
The only way to see the text is in the movement. The pattern across any single frame is entirely random noise.
> The pattern across any single frame is entirely random noise.
This is untrue in at least one sense. The patterning within the animated letters cycles. It is generated either by evaluating a periodic function or by reading from a file using a periodic offset.
Can't it be continuous random noise, added at the top and then moved down each frame?
Roughly: you create another full-size rect. On each frame, add a row of random pixels at row 1 and shift everything down.
Make that rect a layer below the top one, which has "Hello" cut out as transparent.
In any single frame the result is random noise.
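For illustration, a rough numpy sketch of that scheme (frame size, text mask and everything else here are made up, not taken from the page):

```python
# Sketch of the described approach: keep a second noise buffer, shift it down
# one row per frame, add a fresh random top row, and show it through a
# text-shaped mask over static background noise.
import numpy as np

H, W = 120, 320                       # illustrative frame size
rng = np.random.default_rng()

background = rng.integers(0, 2, (H, W), dtype=np.uint8)   # static noise
scroller = rng.integers(0, 2, (H, W), dtype=np.uint8)     # moving noise

# Hypothetical text mask: True inside the letter shapes (stand-in for a
# rasterised "Hello"); here just a placeholder rectangle.
mask = np.zeros((H, W), dtype=bool)
mask[40:80, 60:260] = True

def next_frame():
    global scroller
    # Shift the moving buffer down one row and add a fresh random top row.
    scroller = np.roll(scroller, 1, axis=0)
    scroller[0] = rng.integers(0, 2, W, dtype=np.uint8)
    # Composite: moving noise shows through the mask, static noise elsewhere.
    return np.where(mask, scroller, background)
```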
You could do that, but that's not what the page is doing.
You don't even need to maintain the approach of having the pattern within the text move downwards over time. You could redraw it every frame with random data, as if it was television static. It would still be easy to read, as long as the background stayed fixed.
It's not a great visual, but like this: https://michaelbach.de/ot/cog-Dalmatian/
You can also break it by recording the screen, of course.
same thing, but a game: https://brantagames.itch.io/motus
For what it's worth, there are some websites that embed some crazy shit when you screenshot. On reddit, r/CenturyClub will fill your background with a slightly off-white version of your username so that they can identify leakers, and I'm not certain how exactly they do it.
If you blink really fast, the text almost disappears.
Firefox on Linux, with a bunch of CSS stuff set to defaults or "none !important", shows a static image
but screen recording works :)
On your phone, just record the screen, then scrub through the player: every still frame blends in with its surroundings, but as soon as it moves the text shows up.
Fun side effect: staring at the letters for a bit makes the rest of the image move.
Had a lot of fun trying to break this. Turns out you can screenshot real easily by zooming out. Maybe there are other ways but I stopped trying :)
yeah - I actually was initially confused since I wasn't having any issues screenshotting it but had forgotten that I have the default site zoom set to ~65%.
Not sure what you mean - I can screenshot it freely, but that's not the point. The point is that if you then look at the screenshot, you can't discern the text, because it's a single frame now.
He's right. This is zoomed out: https://imgur.com/a/G7CKZ94
This is on MacOS 15.6, Chromium (BrowserOS), captured with the OS' native screenshot utility. Since I was asked about the zoom factor, I now tried simply capturing it at 100% and it was still perfectly readable...
I guess the trick doesn't work on this browser.
This is really interesting - because it means the "randomness" is different between the text and the background, and when you zoom out enough, the eye can distinguish it?
hmmm I think it's probably just an aliasing / canvas drawing issue. When I bring in a screenshot heavily zoomed out, at 33%, the pixels comprising the "HELLO" shape have a significantly higher luminance than the rest of the background.
I zoomed out to 90% and could make out that something was there, but it wasn't easy to read. Zooming out further went back to just being noise. I also tried zooming in, but with no success. What zoom level did you use? And I guess we have to ask the standard questions: what browser/version/OS/etc.? My FF v142 on macOS never took a screen grab like yours.
Zoom out before taking the screenshot and the text is no longer obfuscated. I tried it and confirmed it works. In fact, the text is perhaps even more readable than in the original.
It depends how fast or slow your GPU is. I tried it and saw the effect you described, but within a second or two it started moving and was obscured again. Obviously you could automate the problem away.
Mine freezes the animation on zoom change. Not sure you could automate against that
What I meant was that even if it only freezes for a second, you could automate the screenshots to be captured during that time instead of trying to beat the clock manually
Screenshotted fine in Xfce.
The text reappears when I screenshot it twice.
Seems trivial to diff multiple screenshots to identify what parts move. Or just use a compression algorithm to do the same.
Would 2 screenshots be enough, I wonder?
Yeah, the letters are big enough, an xor shows the text quite clearly.
Coinbase was hacked for $400M when literally someone from outsourced support services was taking screenshots on their phone!
The culprit had more than 10k photos of all security details for thousands of wealthy customers.
If it's even true that someone from outsourced support has access to such sensitive security details, then using this dumpster is almost like throwing your money out of the window.