Personally I think Microsoft was in the right. Obfuscated code like this shouldn't be in an extension, at least not without a very big warning and a red flag.
"Researchers Amit Assaraf and Itay Kruk, who were deploying AI-powered scanners seeking suspicious submissions on VSCode, first flagged them as potentially malicious."
AI-powered vulnerability reporting is a scourge.
Well flagging as “potentially malicious” seems fine and super useful. Companies just need to have competent investigation of the reports and avoid the dark side of automating action just because it’s cheaper.
"just".[1]
AI-powered bots can generate a flood of reports of "potentially malicious" code. No company could possibly keep up; it's effectively a DDoS attack on security teams. Not even NIST can keep up. https://www.theregister.com/2024/10/02/cve_pileup_nvd_missed...
[1] https://sgringwe.com/2019/10/10/Please-just-stop-saying-just...
This might be another problem waiting on a better model, like most of the other AI use cases today.
Yeah and the next step to getting to the moon is building a taller ladder.
https://thebullshitmachines.com/lesson-16-the-first-step-fal...