I feel like this has been general knowledge for the past 5 or so years, but the real question is "What do we do about it?". Personally, I put real effort into not spending time being outraged online, but this is a societal ill that's bigger than I am...
Shut down the behavior with regulations or shut down the companies. Meta and TikTok have no natural right to exist if they are a net negative to society.
Specifically, I believe Section 230 protections shouldn't apply to algorithmically promoted content. TikTok hosting my video isn't inherently an endorsement of what I'm saying, but proactively pushing that video to people is functionally equivalent even if you want to quibble over dictionary definitions. These algorithms take these platforms from dumb content-agnostic pipes that deserve protections to editorial enterprises that should bear responsibility for what they promote.
I don't think we even need to go that far. Just remove protection for paid advertisements. It's absurd that Meta cannot be held liable for the ads they promote when a newspaper can be held liable if they were to publish the same ad.
There is a decent legal argument to be made that §230 doesn't immunize platforms for the speech of their algorithm, to the extent that said speech is different from the speech of the underlying content. (A simple, if absurd, example of this would be if I ran a web forum and then created a highlight page of all of the defamatory comments people posted, then I'm probably liable for defamation.)
The problem of course is that it's difficult to disentangle the speech of algorithmic moderation from the speech of the content being moderated. And the minor issue that the vast majority of things people complain about is just plain First Amendment-protected speech, so it's not like the §230 protections actually matter as the content isn't illegal in the first place.
Listing content alphabetically or chronologically is technically an "algorithm" too. What I'm specifically challenging here is the personalized algorithm designed to keep individual users on the platform based off a user profile influenced by countless active and passive choices the user has made over time. The type of HN algorithm that serves the same content to every user based off global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content.
So if HN added anything personalized, like allowing you to show fewer stories on topics you dislike, it would lose protection? I can't get on board with that.
I also think it would be extremely unpopular. People like their recommendation engines. They want Netflix to show them more similar shows. They want Reddit to help them find more similar subreddits. I know there are HN users who don't want any of these recommendation engines, but on the whole people actually want them.
>They want Netflix to show them more similar shows.
Perhaps that example was a little too revealing on your end. Netflix doesn't have/need Section 230 protections and they're doing fine.
I'm not suggesting these algorithms should be illegal, just that Section 230 protections were defined too broadly because they predated the feasibility of these types of algorithms. These platforms would be free to continue algorithmic promotion, but I believe these algorithms would be less harmful if the platforms had to worry about potential legal liability.
Think of YouTube and copyright for comparison. The DMCA is far from perfect, but we have YouTube as an example of a platform that survived and even thrived in the transition from a world that didn't care about copyrighted internet video to one in which they needed to moderate with copyright in mind.
Cigarettes weren’t made illegal. Cigarette companies are not liable for their user’s choice to consume them. What’s your point?
> Perhaps that example was a little too revealing on your end. Netflix doesn't have/need Section 230 protections and they're doing fine.
Perhaps it was a little too revealing on your end that you conveniently ignored my other example of Reddit.
If you need to cherry pick to make your point it doesn’t look very strong.
I still don’t see consistency in your argument that Section 230 should still apply to Hacker News but not, for example, Reddit, simply because one of them allows users to personalize the content they see.
> Cigarette companies are not liable for their user’s choice to consume them.
They kind of were. Not completely liable, but partially. Because... um, well, uh, yeah, they are. They are literally liable.
If you produce cigarettes, you are partially responsible for people smoking. Smoking is also not a "choice", come on now. The only people who believe that are people trying to sell you cigarettes or people who have never smoked.
That's why you can't advertise cigarettes anywhere anymore and they're very hard to find. And, when you do find them, the box tells you "hey please don't smoke this". R.J. Reynolds didn't do that by fucking choice, we forced them.
Cigarette companies paid billions, and continue to pay, for the societal harm they cause. That's a liability. They're not legally liable in the sense that nobody is going to jail. But they have financial liabilities. Because they do, literally, cause financial harm.
I don't think people really understand just how harshly we ran tobacco companies into the ground. Many pay more per cigarette in liability than they pay to make the cigarette.
This is the type of comment that suggests you aren't engaging with what I'm saying beyond a superficial level. My argument is consistent. I'm not cherry-picking examples. The differentiator I'm criticizing is the personalized nature of the algorithms. But rather than engaging with the merit of that distinction, you're acting as if there is no distinction at all. I'm not sure if there is much point in continuing the conversation from there.
I think the other person's issue with your position is that the distinction is entirely arbitrary. You're not giving any reasons why the demarcation line for which feed algorithms are OK and which are not is there instead of anywhere else. It seems to be just "Facebook and TikTok are bad; Their feeds are personalized recommendation engines; Therefore personalized recommendation engines are bad, and other feed algorithms are OK".
>I think the other person's issue with your position is that the distinction is entirely arbitrary.
Basically all laws related to speech are arbitrary. Can you define a clear and self-evident line between pornography and art as an example? Or do you agree with the Supreme Court that we just "know it when [we] see it"?
>You're not giving any reasons why the demarcation line for which feed algorithms are OK and which are not is there instead of anywhere else.
Let me just copy and paste what I said before: "The type of HN algorithm that serves the same content to every user based off global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content." I can understand if one of you wants to challenge that line of thought, but both of you acting as if I didn't give any reasoning at all is bizarre and gives me the impression that you aren't actually reading what I'm writing.
> Basically all laws related to speech are arbitrary.
True. This is a fair point. But the expected counterargument would be that the exact line isn't the issue; instead, it's the justification for the principle.
I.e., why are personalized algorithms more dangerous than general ones?
My answer (because I mostly agree with you) is that the difference is that personalized algorithms almost feel like brain hacking. And this brain hacking simply doesn't work at scale when applied to vague general algorithms.
>Basically all laws related to speech are arbitrary. Can you define a clear and self-evident line between pornography and art as an example? Or do you agree with the Supreme Court that we just "know it when [we] see it"?
I'm a free speech absolutist, so I personally don't find which laws already exist on the matter to be a compelling argument. If it was up to me, I'd get rid of any such laws.
>The type of HN algorithm that serves the same content to every user based off global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content.
The argument hinges entirely on the relative exploitativeness of different feed algorithms, but that metric is merely asserted with no support.
Typically free speech absolutism leads individuals into logical traps they find difficult to dig themselves out of.
But we don't even need that in this case. Private property can have all kinds of restrictions put on it based on the potential dangers and harms it causes. This is in fact one of the most common attacks on speech I see right now (Meta et al.): that they will just put age requirements on sites.
>Typically free speech absolutism leads individuals into logical traps they find difficult to dig themselves out of.
Yes, "free speech absolutists" tend to define these terms in ways to hide the true arbitrary nature of their beliefs. The obvious test case is do they believe in legalizing CSAM. Either they answer "yes" and ostracize themselves from almost all of society or they say "no" and have to come up with arbitrary rules why this specific content doesn't count as speech. Either way, self-applying the label is its own red flag.
I wasn't the one who brought up free speech into the discussion; slg was. That aside, whether it curtails it or not would depend on how one defines "speech". Even if the particular way in which a website displays information is not speech, I still think it would be an overreach for a government to legislate how websites are allowed to function. If I as a user want to see a feed populated by recommended content, and the site's operators want to show it to me, what business does the government have stepping into our interaction?
What do you think about the case of Lucy Connolly, who, during a riot where rioters were burning down hotels housing immigrants, tweeted that people should burn down hotels housing immigrants and was arrested for that?
I'm paying for Netflix to do that as a feature. Instagram uses that to drive engagement to sell ads. Disabling personalized content on Netflix is a revenue-neutral choice. On Instagram, that would mean their ad revenue takes a huge dive. Apples aren't oranges.
That is not comparable because of how little control you have over the algorithm in the other cases. On Bandcamp, you can select the genre and a sorting criterion and have very good control over the list. But on Spotify, it's very obscure, with things you've never asked for appearing even before your own library.
But algorithmic feeds can actually be useful for discovery of related material - I want Youtube to show me more Japanese jazz and video essays about true crime based on my watch history, I wanted Twitter to show me more accounts from writers and game developers because I follow them (before the platform went full Nazi) and I like that Facebook shows me people and information from my local area. Forcing all platforms to use only alphabetical or chronological feeds because of the exploitative way some platforms use algorithms seems awfully close to the "banning math" argument people used to use about cryptography and DRM, and it would remove a lot of legitimate use from the internet.
It's all about who controls the algorithm. A sensible approach would be to decouple recommendations from platforms, to treat them like plug-ins that the user must be allowed to add or disable. You want to use YouTube's recommendation algorithm on YouTube? Great, but there needs to be an off-switch and a way to change over to another provider. This is classic anti-trust stuff, breaking up a sector into interoperable pieces.
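As a rough sketch of the shape this could take (all names here are illustrative, not any real platform's API): the platform supplies candidate posts, the user picks which provider orders them, and a non-personalized chronological provider serves as the mandatory off-switch.

```python
# Hypothetical sketch of "recommendations as plug-ins".
# RankingProvider, ChronologicalFeed, and build_feed are illustrative
# names, not any real platform's interface.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Post:
    id: str
    author: str
    timestamp: float  # Unix time


class RankingProvider(Protocol):
    """Any third party could implement this; the user chooses which one runs."""
    def rank(self, candidates: list[Post], user_id: str) -> list[Post]: ...


class ChronologicalFeed:
    """The mandatory 'off-switch': no personalization, newest first."""
    def rank(self, candidates: list[Post], user_id: str) -> list[Post]:
        return sorted(candidates, key=lambda p: p.timestamp, reverse=True)


def build_feed(candidates: list[Post], user_id: str,
               provider: RankingProvider) -> list[Post]:
    # The platform only supplies candidates; the user-chosen provider
    # decides the ordering. Swapping providers never touches platform code.
    return provider.rank(candidates, user_id)
```

The point of the interface boundary is that interoperability becomes a structural property rather than a policy promise: the platform's feed code never needs to know which provider the user selected.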
Really nice to see someone else bringing this up. Algorithmic editorial decisions are still editorial decisions. I think ultimately search and other forms of selective content surfacing should not have ever been exempt. They were never carriers. I appreciate that this would make the web as we know it unusable. I think failing to tackle this problem will also make the web unusable, and in a worse way.
> I think ultimately search and other forms of selective content surfacing should not have ever been exempt. They were never carriers. I appreciate that this would make the web as we know it unusable
I can’t be the only one confused at these calls to have the government destroy things like searching the web, am I?
How is this a real idea being proposed on Hacker News, of all places? Not that long ago it was all about freedom on the Internet and getting angry when the government interfered with our right to speech online, and now there are calls to do drastic measures like make search engines legally untenable to run in the United States?
It’s also confusing that nobody calling for banning things or making the web unusable appears to be making the connection that the internet is global. If we passed laws that forced Google and Bing to shut down because they’re liable for results they index, what do you think the population will do? Shrug their shoulders and give up on the internet? Or go use a search engine from another country?
> How is this a real idea being proposed on Hacker News, of all places? Not that long ago it was all about freedom on the Internet and getting angry when the government interfered with our right to speech online
I can be upset about the government trying to make the world worse, and about other huge balls of power who have been making the world shitty in an ongoing fashion. Freedom of speech doesn't mean shit if a handful of people can buy up or otherwise absorb control of 90% of media and choose who gets heard. The call for regulation is an acknowledgment that the market fucked this one up. When the government threatens speech, I'll call for civil disobedience and proactive protections. When oligarchs threaten speech I'll call for regulation and punishment.
> It’s also confusing that nobody calling for banning things or making the web unusable appears to be making the connection that the internet is global. If we passed laws that forced Google and Bing to shut down because they’re liable for results they index, what do you think the population will do?
You assume that the only way to get a good, free search engine is to give control of it to some private entity. That if we don't do it in the US, people will turn someplace else. I think you may be lacking in imagination. At a minimum, the possibility exists for nonprofit organizations to run quality search engines, but it's also possible to decouple the indexing business from the ranking provider. Google could run an index and charge for access, and ranking providers could build on top of that and recoup costs with non-tracking ads, donations, sales, whatever business model they please. Just because an unregulated market doesn't come up with a good solution doesn't mean a market under different constraints won't find a better way. And if nothing works out you always have the option of grants or a public digital infrastructure approach. There are so many things to try beyond shrugging and declaring that the market has ordained five dudes arbiters of the internet as experienced by most people.
> I can’t be the only one confused at these calls to have the government destroy things like searching the web, am I?
if you find this distressing then i imagine you find it equally distressing when a couple of corporations destroy something.
the reason the word “enshittification” has become so ubiquitous is that corporations are actively destroying the internet and desperately trying to convince us the internet is separate from “the real world”.
sometimes stopping a person from burning the house down is necessary. no matter how loudly they cry about their freedom to have a bonfire in the living room.
Wouldn’t we need to shut down all the news outlets, all the Twitters, and all the newspapers then? They might not be as far along the toxic spectrum as Meta/TikTok, but they are very close.
There are people in this thread directly calling for us to strip protections from search engines and force them to shut down.
I think a lot of this discussion has become detached from reality and we’re just entertaining some people’s impossible fantasies about shutting down the internet and returning to the past.
Human instinct is always to ban and fight everything as soon as any change happens in society. The same biological motivation to doomscroll fuels our instincts to panic and doompost about how society is ruined unless we do [brash action].
Regulating content that makes people enraged seems like a slippery slope towards regulating any kind of "unwanted" speech. I get regulating CSAM, calls for violence or really obvious bullying (serious ones like "kill yourself" to a kid), but regulating algorithms that show rage bait leaves a lot of judgement to the regulators. Obviously I don't trust TikTok or Meta at all, but I don't trust the current or the future governments with this much power.
For example, some teen got radicalized with racist and sexist content. That's bad in my opinion, as I'm not a racist or a sexist. But should racist or sexist speech be censored or regulated? On what grounds? How do we know other unpopular (now or in the future) speech won't be censored or regulated in the future? Again, as much as I'm not a racist or sexist, I don't think the government should have a say in whether a company should be able to promote speech like "whites/blacks are X" or "men/women are Y". What's next? Should we regulate speech about religion (Christians/Muslims/atheists are Z) or ethics (anti-war people or vegans are Q) or politics or drugs or sex?
The current situation is shitty, but giving too much power to regulators will likely make it way shittier. If not now, in the future, since passed regulations are rarely removed.
At least in the US the government can't regulate speech (for the most part). But what we could do is regulate recommendation algorithms or other aspects of the overall design in a way that's generalized enough to be neutral in regards to any particular speech. And such regulations don't need to apply to any entity below some MAU or other metric.
Even just mandating interoperability would likely do since that would open up the floor to competitors. Many users are well aware of the issues but don't feel they have a viable alternative that satisfies their goals.
In theory I'm OK (kinda) with regulating the "overall design" somehow, but I don't see how it's going to work. Forced interoperability is a (very?) good idea, as it's really general, but it also doesn't address directly what the article and most comments talk about - the rage bait. I just can't imagine regulations (or "laws" or whatever the correct term is) that deal specifically with the algos that push rage bait that can't be later abused, if passed, to deal with other unpopular speech. And it seems like people want some laws to directly deal with that - the bad types of speech or algos themselves.
To clarify, I use "rage bait" as an example phrase, but it includes algos that only promote engagement at any cost and other things that aren't outright dangerous, but we think are dangerous. Not, like I said, CSAM or yelling FIRE or telling people to kill themselves.
Interoperability sidesteps the issue by giving users the choice of which algorithm (or algorithm provider) to use. The majority might or might not agree with that approach - for example obviously tobacco has not been left purely to the individual's judgment in the west.
Agreed, you can't regulate speech in a targeted manner while also not doing so. You're forced to find some common aspect much more general than "rage bait". Perhaps prohibiting the targeting of certain metrics? Or even prohibiting their collection in the first place.
> You're forced to find some common aspect much more general than "rage bait". Perhaps prohibiting the targeting of certain metrics? Or even prohibiting their collection in the first place.
Can you elaborate, give some ideas, examples, etc.? What metrics? How can you define them in a consistent, safe way?
We're talking generalized metrics. I have no idea which ones - I wasn't claiming to have solved the problem. The point is that if you can identify a general characteristic that is being used in a way which disproportionately contributes to a particular outcome then you can filter on that.
Estimated user age is an example of a metric largely unrelated to concerns regarding free speech. I doubt it has much relevance to the problem we're taking about here but hopefully you can imagine that prohibiting the targeting of ads or the curation of an algorithmic feed based on that metric would not be expected to unduly disadvantage any particular sort of speech.
> The point is that if you can identify a general characteristic that is being used in a way which disproportionately contributes to a particular outcome then you can filter on that.
In a non-adversarial political context where we trust the government and the future ones, sure, but I think without any strong guardrails, we could enact a law that's good today, but will be exploited in the future.
For targeting minors with any kind of political speech - I'd love it if it wasn't legal. But that brings its own can of worms. There's enough discussion on HN on the implications of age verification, whether on how it's done technically (privacy-preserving or not (ZKP or just shady 3rd parties); FOSS or not; on the ISP, OS or app level, etc.) and whether the mere precedent could trigger additional issues down the road.
Anyway, I'd love a society where everything is perfect, but I'm afraid of what might actually happen. With a benevolent god as a permanent ruler, I'd be happy with 100% prosecution rate against all kinds of littering, hate speech and whatnot, but in reality random crimes are easier to evade than a law passed down by a malevolent government, so I'm strongly against any kind of overreach. (Because the law tomorrow could be one we must evade if we want to resist an unethical government). Someone will likely chime in with "but complete and massive overreach has never happened so far", to which I'd reply - we're close to the point where technology will let the ones in power grab that power absolutely and forever if we let them grab too much in the beginning.
> I get regulating CSAM, calls for violence or really obvious bullying (serious ones like "kill yourself" to a kid)
I’ve reported videos that look like sexual exploitation, videos that call for violence and videos that promote hate (xyz people are cockroaches) and all I’ve gotten is that “it does not go against community guidelines” with a link to block the person who created them. So any concerns of “where do we draw the line” are in my opinion pointless because the bare minimum isn’t even being done.
I agree with your CSAM and explicit calls for violence examples - they probably should be regulated. But a few comments ago in another thread someone didn't like me calling people in the workplace who annoy me with their mindless chit chat "corporate drones". My post could be construed as promoting hate. Where do we draw the line from "cockroaches" to "drones"? Do I have to call a certain "protected class" drones for it to qualify as hate speech?
What if I didn't say anything bad about a race or a sex, but said:
> I have coworkers who pester me with their small talk about the weather every time I see them. I hate those fucking cockroaches.
That's in bad taste, sure, but should it be regulated? You may know I obviously don't hate-hate them (they're just annoying, but most of them are good people) or actually consider them cockroach-like in any meaningful aspect (they're obviously people, but with annoying tendencies). But would a regulator know the difference? What about a malicious regulator who gets paid by (ok, this is a silly example, but bear with me) the weather-talking coworker lobby to censor me? In many not-so-silly examples a regulator could silence anyone for anything (politics, sex, drugs, ethics), as long as it uses a bad word or says anything negative about anyone. I don't want to live in such a society. That much power would be abused sooner or later.
I'm sorry but are you saying it's hard to figure out what to do so let's do nothing? Banning racist and sexist content is not a slippery slope. It's just banning racist and sexist content, slope is only slippery because the salivating mouths of these social platforms grease them.
Also, I don't think people are advocating censorship, they are advocating not promoting assholes. You can have your little blog and be racist on it all you want, but let's not give these people equivalent of nukes for communication.
> are you saying it's hard to figure out what to do so let's do nothing?
I'm fine with doing something, but the current "something" seems slippery.
> Banning racist and sexist content is not a slippery slope. It's just banning racist and sexist content, slope is only slippery because the salivating mouths of these social platforms grease them.
But what is "racist", exactly? See why I think it's a slippery slope and why it's ill-defined:
1. We could agree that "Let's go out and kill/enslave all the $race/$gender" is racist, but that's bad if we switch $race to any group, as it's speech that incites violence.
2. What about "$race is genetically inferior in a way (less intelligent, less athletic, more prone to $bad_behavior)"? I honestly think most differences between races/ethnicities are due to environmental factors, but what if there actually are differences in intelligence or anything like that? Should we ban speech that discusses that? Black people win running races and are great at basketball. They're prone to certain diseases, as are Caucasians or Asians. So would you ban discussing that? Or would you ban blindly asserting that $race is $Y without some sort of proof?
3. What about statements like "There are way more male bus drivers because X"? Or "men are better at Y, but women are better at Z"?
What do you think the definition of racism and sexism in this context should be? I think the line is where we incite violence towards a group, but not about discussing differences that may or may not be true.
> Also, I don't think people are advocating censorship, they are advocating not promoting assholes. You can have your little blog and be racist on it all you want, but let's not give these people equivalent of nukes for communication.
I think restricting a platform (or anyone or anything) from promoting someone IS censorship. If it's not censored, why shouldn't I be able to promote it? This honestly feels disingenuous - like "we pretend that the racist isn't censored and can have his little blog, but it's illegal to promote his little blog".
> I'm sorry but are you saying it's hard to figure out what to do so let's do nothing?
That seems more reasonable than the alternative, which is to make modifications to a complex system which you aren't sure what the outcome will be. You're more likely to cause bigger problems.
>> Meta and TikTok have no natural right to exist if they are a net negative to society.
Exactly. And when we are done with them we will shut down Molson and Anheuser-Busch. Then we can go after the people who make selfie sticks. Then the company that owns that truck that cut me off last week. Basically, organizations I dislike should not be allowed to exist.
Oddly enough, the TikTok referred to here was supposed to be shut down in the US. But then the executive branch ignored the law so it could organize handing the company over to Larry Ellison instead. But these allegations date to when the company was fully under the control of ByteDance, and not US-regulated entities at all.
> oddly enough the TikTok referred to here was to be shut down in the US. But then the executive branch ignored the law while it could organize handing the company over to Larry Ellison instead
Which should make people think twice when they call for government regulation on speech as a solution to content they don't want other people to see.
The more you give the government power to control speech, the more they'll use those laws to further their own interests.
All the more reason for regulation. If people catch on to the fact that they are being manipulated and abused by the platforms to "drive engagement" they might abandon them or spend less time on them. If the government regulates these platforms so that they are safer or at least less harmful people will feel better about using them giving the government a larger platform to use to control the masses.
> If people catch on to the fact that they are being manipulated and abused by the platforms
I am not trying to be funny or anything but this sounds like "if only fat kid realized that eating 10 apple pies before bedtime might be the reason s/he is fat" We already know what social media platforms are doing, not to just young people but to all people.
> If the government regulates these platforms
This is like saying "congressmen care about our debt so they will vote to reduce their own salaries by 90%" - the government is not going to regulate tools they are using to control the narrative/masses etc...
Tax and heavily regulate online advertising. The root of the problem is that it is very, very lucrative to drive engagement and until you get rid of the monetary incentive, the problem will never go away.
Just as they were settling into middle age, far-right propaganda, conspiracy, and hate "entertainment" escaped AM radio and flooded cable news and social media.
Technology, culture, legalization of pot, adtech, covid, there are a metric ton of factors that all had significant impact on both decreasing socialization and reduction in drinking. And lowering the birth rates, and the number of healthy relationships, healthy friendships, etc.
I'm for legalizing all drugs, regulating the sale, ensuring quality and purity, and educating the public. Cognitive liberty is sacred - but the dip in drinking has a whole lot of causes.
A healthier society would be more social and get out and drink more, I think.
I'd wager that how expensive it has gotten, plus a year or two of lockdowns that led to a whole generation of people not going out to get wasted as soon as they're legally allowed to, had way more effect.
I also noticed a trend that happened at my old college and a number of others that I've never seen anyone write about: the great buyout of the old college area slumlords.
All the dive bars where you could black out on $10-20 that I drank at in college are gone. They all faced the wrecking ball and were replaced in the past 10-15 years with apartments over Targets and CVSes and family-friendly restaurants. A huge concerted effort to buy up these properties piecemeal, then destroy entire blocks at a time. I have no clue where kids at my college go to drink now. I have little interest in going back as an alumnus either, since they destroyed all the places of my memories.
Laws appear to have fallen out of fashion. And a disturbing proportion of the loudest people like it. Then you have those who ought to know better but are attention-seeking, selfish assholes who somehow find it "interesting" or think they adhere to "principles".
Those in the latter category know who they are. You downvoted this comment.
I recently provided guidance to state legislators, with that guidance making its way into law in regards of balcony solar. If you don’t think that making law works, I would encourage you to get involved somewhere that means something to you.
It turns out that if you present as an honest, non-interested party, people will call you and ask you for your advice. I do admit that the ease of this is going to be a function of the people you are up against and the subject being regulated. My point of this comment is: default to action. “You can just do things.”
Previous generations of neurotics objected to many then-current things we don't bat an eye at now. When was the last time you saw anyone campaign against satanic music, violent video games, or hardcore pornography?
>You in the 90s: "Leaded fuel isn't illegal guys, stop your campaigning, let's keep huffing it"
People who raised alarm about such things could easily be branded as conspiracy theorists. Even now, at this very website, so full of well-educated folx, people who speak out against xenoestrogens, for example, are downvoted to hell.
Consuming social media doesn't have an inescapable negative impact on other people, unlike burning leaded fuel. In the same way that eating junk food doesn't. Should we ban junk food? What else do you want to ban from others just because it has a risk profile you personally don't feel comfortable with?
> Consuming social media doesn't have an inescapable negative impact on other people
You don't think large portions of an entire generation getting cooked by social media have negative externalities that impact society as a whole?
I don't think anybody has the moral authority to regulate such second-order effects.
Should unhealthy food be banned because of the second-order effects of obesity? What about mandatory church / religious service? After all, I judge that atheism has negative second-order effects on the world. Where would I get this moral authority from?
I posted above that social media related issues are a problem, and then a bunch of posts accused me of wanting to make it illegal. I never suggested that and I actually don't support censorship, I just wish some people I know didn't spend so much of their time bummed out about social media.
I'm not suggesting that it should be illegal, I'm just seeing this monetization of bad vibes and wondering how we can have less bad vibes. Pump the brakes a little.
The people who were voted to power (across the globe, not just the US) to do something about it are stuck getting their dopamine kicks posting garbage on the same platforms.
It’s truly a terrible timeline we are in.
It’s like asking how you get people to stop drinking alcohol.
As long as there are people who don’t acknowledge or care about the health effects it will exist. If that’s a plurality of your population then you have a fundamental population problem IF you are in the group who thinks it’s bad.
Aka every minority-majority split on every issue ever.
So the answer is: live in a society governed by science. Unfortunately none exist
> So the answer is: live in a society governed by science. Unfortunately none exist
Science is a lagging indicator of reality. It is by definition conservative (in that it requires rigorous, repeatable data before it can label something as true). Because of that, there's usually a pretty substantial gap between human discovery and scientific consensus.
Mindfulness was discovered, as an example, to be beneficial as far back as 500 BCE. It wasn't "proven" with science until 1979.
Sometimes we just need to rely on lived experience to make important decisions, especially regulation. We can't always wait for science.
I drink, but I acknowledge and care about the health effects. I care more about how it makes me feel. Don't assume everyone who smokes or drinks alcohol or takes another type of drug just doesn't care. Why don't we ban dangerous sports like rock climbing or BASE jumping or MMA while we're at it?
We handled smoking pretty well by making it cost more and banning it in public places. If TikTok were banned from official app stores, it would essentially go away.
I don't think deeper is the right word. Nicotine has a physical addiction element that social media does not. You cut off social media, you at worst face some boredom and FOMO.
And PM's earnings are mostly from developing countries at this point. In the US alone, the adult smoking rate has fallen nearly 73% from 1965 to now, so clearly the regulations are working.
We need to do the same for social media. People didn't quit smoking because they suddenly got more disciplined. We just made it inconvenient. The biggest start would be to get rid of algorithmic feeds and "recommendations": keep it purely chronological, only from people you explicitly follow.
Nitpicking maybe, but nicotine isn't the main thing that makes cigarettes addictive and it's not that bad by itself. Gwern has a long article on nicotine that's worth a read [0].
More importantly, why do you think society should make smoking inconvenient - more costly, more illegal or anything like that? If I'm not blowing smoke in your face, why interfere with my desire to smoke? If it's about medical bills, just let me sign a waiver that I won't get cancer treatments or whatever, and let me buy a pack of smokes for what it should cost - a few cents per pack, not a few dollars/euro.
If I can smell it, I don't really care if you're blowing it directly at me or not, it's still a pain. If you want to smoke in private in your own home and then wash your clothes after so no one can tell you're doing it, I guess that's fine, but I don't see why it also has to be cheap?
I admit I sometimes smoke near people, even if I try to move to the side. At bus stops I try to be 5-10 meters away from people, but often I don't do it and it inconveniences people. Sorry, truly. I will try to be more mindful. When I switched to e-cigs for a while a couple of years ago, I started noticing the smell of tobacco smoke. After I switched back to cigs, I stopped noticing it. Smokers don't notice it that much as they're around it often. It's not always smokers being inconsiderate, it's not realizing how it smells to others. If you let me smell the clothes of a smoker and a non-smoker, I wouldn't be able to tell the difference if my life depended on it. Although I only smoke outdoors and wash my clothes regularly, so I hope my base smell isn't that offensive to non-smokers.
So yeah, this comment really reminded me to not light up whenever and "try my best" to walk a few meters away, but to really think if I'd inconvenience people.
On the other hand, if I'm alone on a street and you're walking towards me so I just pass you for a second, I can't imagine that the smell would be that bad from just a casual walk-by. When I'm passing people, I hold in my smoke till I pass them.
Even if I agree that smoking outdoors is inconsiderate and annoying to others, I could still do it at home or in dedicated areas (smoking sections in bars with good ventilation, for example).
> I don't see why it also has to be cheap?
If we agree on the previous points, then why not let it be cheap? Tobacco is cheap to produce. Most of the price of cigarettes is artificial, to cover medical costs and whatnot. Let's say I sign a waiver that if I get sick, I either pay through the nose or don't receive treatment at all. Would you be OK with letting me buy tobacco at its original cost (no subsidies, no artificial fees)?
Or, as a thought experiment - let's say tobacco didn't have any smell and there were 0 negative effects of second-hand smoke. Like, you wouldn't know it if I smoked near you unless you saw me. Then what would be the justification in making smoking artificially expensive for me?
If it wasn't for the impact on other people, I think you could handle it basically like sugary drinks: there's some benefit in discouraging it for health reasons, but not as much benefit comparatively, so a more modest tax is all I could really argue for, yeah. (Like how nicotine gum is treated, essentially.)
Since the impact is mostly annoyance (the smell) and most restaurants are either smoke-free or offer separate enclosures, why tax it at all (besides for the smell)? I am reducing my lifespan by about 8 to 10 years with smoking, sure. But why should the government force me to change that by taxing it? Why tax sugary drinks or ban or criminalize drugs other than the caffeine, nicotine and alcohol?
If the idea is to make everyone be healthy, live as long as possible and be productive for as long as possible, why not ban dangerous sports, too? I'm "the government" for my dog and I don't let him do anything dangerous or stupid, but he's a dog and we're people. With the supposed free will and agency we all like.
>But why should the government force me to change that by taxing it?
Because the government ends up paying for the medical treatment of a lot of smokers when they're older. And it's incredibly expensive. You can say you won't rely on government funds, but there's no way to actually opt out of Medicare for life or sign up to never be guaranteed stabilization when you show up at a hospital.
Nicotine is also notoriously addictive, which weakens the "my choice" argument.
>Why tax sugary drinks
That's totally a nanny state thing. Personally, I would mildly support it. But it's not a hill I'd die on.
>or ban or criminalize drugs other than the caffeine, nicotine and alcohol?
Hard drugs cause blight. People don't mind so much if they see a soda can on their street, but if they see a used needle they'll move. And again, any society with a safety net has an interest in preventing common causes of people falling into it.
>why not ban dangerous sports, too?
It hasn't proven to be a big problem at the population level. Hell, public health experts would love to have that problem, because it'd mean more people were exercising.
> Because the government ends up paying for the medical treatment of a lot of smokers when they're older. And it's incredibly expensive. You can say you won't rely on government funds, but there's no way to actually opt out of Medicare for life or sign up to never be guaranteed stabilization when you show up at a hospital.
That's why I'd get a tattoo on my chest, if necessary, saying "Smoker!". I know that most of the price of tobacco is insurance for medical treatments. Not Medicare, as I'm not in the US, but similar. I am OK with tattooing "DO NOT STABILIZE OR CARE FOR AT ALL - SMOKER !!!1".
> Nicotine is also notoriously addictive, which weakens the "my choice" argument.
I am an adult human who participates in society and has chosen to smoke. Please treat me as an adult who has made a (bad) decision and is willing to suffer the consequences.
> sugary drinks... nanny state
Same with any drug.
> hard drugs...
People who abuse hard drugs to the point where we need to save them or others from them are most often uneducated or poor (and living in poor neighborhoods, with all that it brings). Believe it or not, I know several people with PhDs in things like physics and biology who regularly take "hard" and/or "soft" drugs besides alcohol and nicotine. Only one needed intervention after ~10 years, and it was because of pre-existing psychological issues that led him to abuse the drugs. I and lots of people I know who lead normal lives can list more 3- or 4-letter abbreviations of stuff we've tried than a HN comment will let us fill. Or maybe I'm exaggerating a bit, not sure, but you get the point.
If you look at a poor neighborhood, you'll see a lot more people with drug problems. Not because richer people don't do drugs, but because for them it's not an escape plan, it's not some random impure thing you get, and because it's done within a safe place. It's a social issue, not a drug issue. Work on solving poverty and education, not on making us drug users feel like criminals for trying new stuff or on making our drugs more expensive. Whether it's legal like alcohol or nicotine, or illegal like a psychedelic, a benzo, weed, an opioid, a dissociative or anything else, it's a drug. I am an adult. Let me experience my adulthood like I want to. You don't take drugs and that's fine, but please understand that you have no fucking idea what you're missing if you're doing it correctly. Literally anything you've likely experienced, like romantic relationships, climbing mountains, orgasms and so on, is categorically and qualitatively different from the amazing things you can experience on various drugs.
I think it's also partially due to smoking being more and more considered disgusting, not just inconvenient. The peer pressure of "don't do this very stinky disgusting thing around me" must have at least a little to do with declining smoking rates. Back in the 80s, most people didn't have the guts to say "Hey, don't smoke around me, it's gross!" but plenty of people do today.
We need to culturally consider Social Media use to be disgusting or at least something to be ashamed of.
> You cut off social media, you at worse face some boredom and FOMO.
I wish this were true, but I know tens of people who quit smoking and (besides myself) know maybe half of another person who quit social media. Drunk at NYE two years ago, I offered $10k to a group of 25 people to delete all social media apps from their phones for 60 days. I still have that $10k in my account. I think quitting social media is around as hard as getting off a hard drug addiction (like a hard, hard, hard one: opioids, heroin, etc.), and maybe even tougher than that, for most people.
> People didn't quit smoking because they suddenly got more disciplined. We just made it inconvenient.
I want to believe this! I just haven't personally experienced it at all (I am in my 6th decade on Earth, so plenty of time around). I don't know a single person who stopped smoking because they could not burn one inside restaurants/clubs/... or because it costs $18/pack or any of that. An 18-year-old faces very little "regulation" when it comes to smoking. The little inconvenience of moving 25 feet away from the building isn't much of a deterrent IMO.
I am subjective on the matter of social media, I know that. But I am educated in its evils and would, for instance, never let my kid be on any social media as long as she is under my roof. This has already caused significant challenges for her (and my wife and me), but it is also an amazing learning experience to overcome silly social obstacles...
It's like asking how you get people to stop letting their kids drink alcohol.
Everyone knows what the dangers of alcohol are now. We need to get reliable data one can base policy on and then let the public health system do their thing. Maybe not every health authority but enough of them to protect the species at large. Then we'll get social media out of schools, away from young people, vulnerable folks, etc.
What do we do? We treat platforms with algorithmic news feeds as publishers not platforms in the Section 230 sense.
Think about it this way: imagine if you took a million random posts or videos. You would find a wide range of political views, conspiracy theories and so on. Whatever your position on any of those issues, you could find content pushing those views.
So if your algorithm selects and distributes content that fits your desired views and suppresses content that opposes your views, how are you different from a random publisher who posts content with those exact same views?
This is kind of like the "secret third thing" of Section 230 where you get all the protections of being a platform and all the flexibility of being a publisher and we need to close that loophole. Let platforms choose which one they are.
Another example: if I create a blog and write a post that accuses my local mayor of being a drug addict and a pedophile, I can be sued for defamation. I can try the journalism defense, but it won't shield me from defamation. Traditional media outlets are normally very careful about what they publish for this reason.
But what if I run Facebook or Twitter and one of my users says the exact same thing? Well I'm just a platform. I have a libel shield. But again, my algorithm can promote or suppress that claim. Even if I have processes to moderate that content, either by responding to a court order to take it down and/or allowing users to flag it and then take it down myself with human or AI moderation, the damage can't really be rolled back.
We've let tech companies get away with "the algorithm" being some kind of mysterious and neutral black box that just does stuff and we have no idea what. It's complete bullshit. Every behavior of such an algorithm reflects a choice made by people, period. And we need to start treating this as publishing.
Is this unavoidable? I mean it does generate clicks and views and user engagement so if one platform is doing it, doesn't that automatically mean that the other has to do it? Otherwise they will continuously lose market share.
I think the burden to curate your feed so that you do not have such content is now resting with the user and they cannot rely on the platform to do it for them.
If the user even wants to do that. Why would they? They're looking for a sugar rush, they're not looking to eat their intellectual vegetables. How do you get children to eat vegetables?
> I mean it does generate clicks and views and user engagement so if one platform is doing it, doesn't that automatically mean that the other has to do it? Otherwise they will continuously lose market share.
Why? User engagement isn't the same thing as market share.
If McDonald's trained its cashiers to insult you while taking your order, engagement would go up, and market share would go down.
The feedback loop for this moral hazard is slow but implacable. You can treat the zeitgeist as a dumping ground for so long, until you get so big, that you can no longer treat it like an idealized infinite substance.
In my experience there’s a strong “banality of evil” that happens.
Some poor schlub ML Eng has shipped a feature that wins an A/B test. They’re pushing to get promoted. Their management wants to show they’re hitting their KPIs.
An engine of destruction filled with well meaning people just hoping to advance in their careers.
You might say, it’s ultimately the designers of the incentives that matter. Even there, the leadership will change. Inevitably the needs of the capitalist machine take over.
If you like better content look for kagi's small web or better yet find a better algorithm that optimizes for your preferences rather than engagement.
I have my Instagram and X on a locked-down browser in a container with a fake profile that an LLM drives; it finds the posts from specific users and compiles a gist of all the important things in my locality (or whatever you care about) every evening, without me ever going near that FOMO-driven dumpster fire of TikTok/Insta/X.
I look at people who use fb or tiktok, or x, the same way I look at smokers or alcoholics. With sadness and pity. The fact that we let children use this is hard to accept. The fact that fellow hackers and engineers, some of the brightest minds, have contributed to this is extremely disappointing. Shame on you.
the bucket of crabs truly pervades in its metaphorical accuracy. regardless of intelligence, humans are liable to drag down their fellow men. insane to consider that children are effectively drugged from infancy. for this i do not blame an uneducated society strained to its zenith; i blame the sociopathic and the craven who have enabled the proffering of digital drugs, and consequently accelerated societal addiction. the shame falls entirely on them. may reincarnation be real such that sadistic six-figure-salaried software engineers and their malicious managers are forced to reap the rewards of such "engineering".
When I hear "Meta" and "Facebook" the top 10 things I think:
1. "Surveillance"
2. "Advertising"
3. "Scams"
4. "AI slop"
5. "Manipulated experience"
6. "Child harms"
7. "Misinformation campaigns"
8. "Disinformation campaigns"
9. "Doom scroll regret"
10. "Zuckavatarphilia"
But I don't claim to have the "right" opinion and am curious how other people respond to the brands. If each of you could reply and re-list those associations in the order you experience them, I will collate the results and post them everywhere I can think of. It would go a long way to satisfying my curiosity, and the curiosity of reporters who like to repeat things they read on the internet.
Throw away your 'smartphone' and stop using anti-social media. It is killing society, and only making the Billionaires more powerful. They are evil and will do anything to stay in power.
What can you do about it? That is the rub. You can't. It is no coincidence that pretty much all avenues of information consumption you face are susceptible to this issue. It is by design that these technologies are able to reach you in these ways. It is by design that propagandists have so much success. Everyone in power today is in power because of propaganda. Why would they ever let go of their reins of power? It is the sole forcing factor keeping them in power, after all. They'd be no different than you and I otherwise, which scares them more than anything.
Legislate! We need laws! I get we aren’t used to that anymore in the US but truly “marketing” and social media in the US has become so hostile and harmful I just don’t understand how we can in good conscience not start to put heavier restrictions on them. Enough is enough. We can’t continue to sacrifice our society on the altar of the Almighty dollar.
There's the other rub. Can't get good laws passed either because laws are also subject to propaganda. You draft your good Bill A. Technocracy comes out with millions in ad spend, floods the zone, convinces the voters your bill is bad, or their Bill B with their profitable carveouts is good, voters vote against their interests and for the technocracy. This pattern has played out like this countless times already. Only thing I can think to do is dissociate and pretend like the world isn't fucked.
Shun! You know people at Meta, Tiktok, et al? They're not people you want as friends, they're family but one step above nothing in your lives. Boo Zuck wherever he goes, MMA fights is just one place. Make it as uncool to be part of the machine as it should be for other drains on society like Palantir.
Given how TikTok "trends" seem to consist mostly of "get teenagers to do stuff that causes huge expenses for US society":
* "eat tide pods"
* "stick a fork in electrical sockets in your school"
* "destroy your school's shit" aka "Devious Licks" - bathrooms, chromebooks (jamming stuff into the charging ports to start fires...)
* "drink a shitload of Benadryl to see what happens"
* "steal a kia/hyundai and drive 80mph, run from the cops, etc"
...convince me that this is not a purposeful attack on US society by the CCP?
Given that the 'tide pod challenge' was before TikTok's time and took place on wholly US-owned platforms like YouTube, we can safely assume it's all in your head. Most of the other stuff you're sharing sounds like a reflection of what you find out in the streets of any major US city. Perhaps you should question if your government is the one that is attacking you.
Did we forget Gresham's Law applies to content and has done so since humans could communicate?
Bad or wrong ideas are the ones that get talked about. Do we discuss the 10 issues politicians get correct, or the 1 they screw up?
Platform is irrelevant here; the exact same phenomenon occurred on radio and TV decades before it did on social media platforms, and in newspapers centuries prior.
You have finally identified the problem. It all started with Homo habilis and misinformation has been rampant ever since. But even protozoan parasites mimic host proteins and block signals, so you really have to go a lot further back to deal with fake news.
I feel like this has been general knowledge for the past 5 or so years, but the real question is "What do we do about it?". Personally, I put real effort into not spending time being outraged online, but this is a societal ill that's bigger than I am...
"What do we do about it?"
Shut down the behavior with regulations or shut down the companies. Meta and TikTok have no natural right to exist if they are a net negative to society.
Specifically, I believe Section 230 protections shouldn't apply to algorithmically promoted content. TikTok hosting my video isn't inherently an endorsement of what I'm saying, but proactively pushing that video to people is functionally equivalent, even if you want to quibble over dictionary definitions. These algorithms take these platforms from dumb content-agnostic pipes that deserve protections to editorial enterprises that should bear responsibility for what they promote.
I don't think we even need to go that far. Just remove protection for paid advertisements. It's absurd that Meta cannot be held liable for the ads they promote when a newspaper can be held liable if they were to publish the same ad.
There is a decent legal argument to be made that §230 doesn't immunize platforms for the speech of their algorithm, to the extent that said speech is different from the speech of the underlying content. (A simple, if absurd, example of this would be if I ran a web forum and then created a highlight page of all of the defamatory comments people posted, then I'm probably liable for defamation.)
The problem of course is that it's difficult to disentangle the speech of algorithmic moderation from the speech of the content being moderated. And the minor issue that the vast majority of things people complain about is just plain First Amendment-protected speech, so it's not like the §230 protections actually matter as the content isn't illegal in the first place.
How would you square that with a site like Hacker News, which has algorithms for showing user-submitted links and user-generated comments?
Listing content alphabetically or chronologically is technically an "algorithm" too. What I'm specifically challenging here is the personalized algorithm designed to keep individual users on the platform, based on a user profile influenced by countless active and passive choices the user has made over time. The type of HN algorithm that serves the same content to every user based on global behavior is fine in my book, because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content.
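For concreteness, the "global" kind of ranking is easy to sketch. Below is a minimal example in the spirit of HN's widely cited score formula, `(points - 1) / (age + 2)^gravity` (the exact constants and formula here are an assumption, not HN's actual production code); the key property is that the ordering depends only on global votes and age, never on who is looking:

```python
def global_rank(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """HN-style global score: inputs are community-wide, so every
    user sees the identical ordering. Constants are assumptions."""
    return (points - 1) / ((age_hours + 2) ** gravity)

# A toy front page. There is no per-user profile anywhere in this
# computation, which is the distinction being drawn above.
stories = [
    {"title": "A", "points": 100, "age_hours": 5},
    {"title": "B", "points": 40, "age_hours": 1},
    {"title": "C", "points": 300, "age_hours": 48},
]
front_page = sorted(
    stories,
    key=lambda s: global_rank(s["points"], s["age_hours"]),
    reverse=True,
)
print([s["title"] for s in front_page])  # fresher B outranks older A and much older C
```

A personalized feed, by contrast, would take the viewing user's history as an additional input to the ranking function, so two users see different orderings of the same content.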
So if HN added anything personalized, like allowing you to show fewer stories on topics you dislike, it would lose protection? I can't get on board with that.
I also think it would be extremely unpopular. People like their recommendation engines. They want Netflix to show them more similar shows. They want Reddit to help them find more similar subreddits. I know there are HN users who don't want any of these recommendation engines, but on the whole people actually want them.
>People like their recommendation engines.
People liked cigarettes too.
>They want Netflix to show them more similar shows.
Perhaps that example was a little too revealing on your end. Netflix doesn't have/need Section 230 protections and they're doing fine.
I'm not suggesting these algorithms should be illegal, just that Section 230 protections were defined too broadly because they predated the feasibility of these type of algorithms. These platforms would be free to continue algorithmic promotion, but I believe these algorithms would be less harmful if the platforms had to worry about potential legal liability.
Think YouTube and copyright for comparison. The DMCA is far from perfect, but we have YouTube as an example of a platform that survived and even thrived in the transition from a world that didn't care about copyrighted internet video to one in which they needed to moderate with copyright in mind.
> People liked cigarettes too.
Cigarettes weren’t made illegal. Cigarette companies are not liable for their user’s choice to consume them. What’s your point?
> Perhaps that example was a little too revealing on your end. Netflix doesn't have/need Section 230 protections and they're doing fine.
Perhaps it was a little too revealing on your end that you conveniently ignored my other example of Reddit.
If you need to cherry pick to make your point it doesn’t look very strong.
I still don’t see consistency in your argument that Section 230 should still apply to Hacker News but not, for example, Reddit, simply because one of them allows users to personalize the content they see.
> Cigarette companies are not liable for their user’s choice to consume them.
They kind of were. Not completely liable, but partially. Because... um, well, uh, yeah, they are. They are literally liable.
If you produce cigarettes, you are partially responsible for people smoking. Smoking is also not a "choice", come on now. The only people who believe that are people trying to sell you cigarettes or people who have never smoked.
That's why you can't advertise cigarettes anywhere anymore and they're very hard to find. And, when you do find them, the box tells you "hey please don't smoke this". R.J. Reynolds didn't do that by fucking choice, we forced them.
> They kind of were. Not completely liable, but partially. Because... um, well, uh, yeah, they are. They are literally liable.
Cigarette companies are not legally liable for the consequences their users encounter.
It’s really hard to have an actual discussion about anything when people are just making up their own definitions.
Cigarette companies paid billions, and continue to pay, for the societal harm they cause. That's a liability. They're not legally liable in the sense that nobody is going to jail. But they have financial liabilities. Because they do, literally, cause financial harm.
I don't think people really understand just how harshly we ran Tobacco companies into the ground. Many pay more per cigarette for liability than they pay to make the cigarette.
> Cigarette companies are not legally liable for the consequences their users encounter.
Ok! But they do have to follow a bunch of extra laws that cost them a ton of money and/or users.
Therefore the same can apply to social media algorithm companies.
One extreme example: just like with cigarettes, there could be 18+ age verification for social media. That's a big deal.
This is the type of comment that suggests you aren't engaging with what I'm saying beyond a superficial level. My argument is consistent. I'm not cherry-picking examples. The differentiator I'm criticizing is the personalized nature of the algorithms. But rather than engaging with the merit of that distinction, you're acting as if there is no distinction at all. I'm not sure there is much point in continuing the conversation from there.
I think the other person's issue with your position is that the distinction is entirely arbitrary. You're not giving any reasons why the demarcation line for which feed algorithms are OK and which are not is there instead of anywhere else. It seems to be just "Facebook and TikTok are bad; Their feeds are personalized recommendation engines; Therefore personalized recommendation engines are bad, and other feed algorithms are OK".
>I think the other person's issue with your position is that the distinction is entirely arbitrary.
Basically all laws related to speech are arbitrary. Can you define a clear and self-evident line between pornography and art, as an example? Or do you agree with the Supreme Court that we just "know it when [we] see it"?
>You're not giving any reasons why the demarcation line for which feed algorithms are OK and which are not is there instead of anywhere else.
Let me just copy and paste what I said before: "The type of HN algorithm that serves the same content to every user based off global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content." I can understand if one of you wants to challenge that line of thought, but you're both acting like I didn't give any reasoning at all, which is bizarre and gives me the impression that you aren't actually reading what I'm writing.
> Basically all laws related to speech are arbitrary.
True. This is a fair point. But the expected counterargument would be that the exact line isn't the issue; instead, it's the justification for the principle.
I.e., why are personalized algorithms more dangerous than general ones?
My answer (because I mostly agree with you) is that the difference is that personalized algorithms almost feel like brain hacking. And this brain hacking simply doesn't work at scale when applied to vague general algorithms.
>Basically all laws related to speech are arbitrary. Can you define a clear and self-evident line between pornography and art as an example? Or do you agree with the Supreme Court that we just "know it when [we] see it"?
I'm a free speech absolutist, so I personally don't find which laws already exist on the matter to be a compelling argument. If it was up to me, I'd get rid of any such laws.
>The type of HN algorithm that serves the same content to every user based off global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content.
The argument hinges entirely on the relative exploitativeness of different feed algorithms, but that metric is merely asserted with no support.
>I'm a free speech absolutist
Typically free speech absolutism leads individuals into logical traps they find difficult to dig themselves out of.
But we don't even need that in this case. Private property can have all kinds of restrictions put on it based on the potential dangers and harms it causes. This is in fact one of the most common attacks on speech I see right now (Meta et al.): that they will just put age requirements on sites.
>Typically free speech absolutism leads individuals into logical traps they find difficult to dig themselves out of.
Yes, "free speech absolutists" tend to define these terms in ways to hide the true arbitrary nature of their beliefs. The obvious test case is do they believe in legalizing CSAM. Either they answer "yes" and ostracize themselves from almost all of society or they say "no" and have to come up with arbitrary rules why this specific content doesn't count as speech. Either way, self-applying the label is its own red flag.
I don't really understand what your point is.
If I understand the point correctly, it's that regulating the algorithms of Meta et al does not curtail your free speech, so it's a moot argument
I wasn't the one who brought up free speech into the discussion; slg was. That aside, whether it curtails it or not would depend on how one defines "speech". Even if the particular way in which a website displays information is not speech, I still think it would be an overreach for a government to legislate how websites are allowed to function. If I as a user want to see a feed populated by recommended content, and the site's operators want to show it to me, what business does the government have stepping into our interaction?
What do you think about the case of Lucy Connolly, who, during a riot where rioters were burning down hotels housing immigrants, tweeted that people should burn down hotels housing immigrants and was arrested for that?
I already stated what my position is. Why do you need to ask about specific cases? Are you trying to look for gotchas?
Of course Section 230 would apply to both sites, but only to the user-generated part of each site, because that's what Section 230 says.
I'm paying for Netflix to do that as a feature. Instagram uses that to drive engagement to sell ads. Disabling personalized content on Netflix is a revenue-neutral choice. On Instagram, that would mean their ad revenue takes a huge dive. Apples aren't oranges.
Netflix does it to drive engagement as well.
That is not comparable because of how little control you have over the algorithm in the other cases. On Bandcamp, you can select the genre and a sorting criterion and have very good control over the list. But on Spotify, it’s very obscure, with things you’ve never asked for appearing even before your own library.
But algorithmic feeds can actually be useful for discovery of related material - I want Youtube to show me more Japanese jazz and video essays about true crime based on my watch history, I wanted Twitter to show me more accounts from writers and game developers because I follow them (before the platform went full Nazi) and I like that Facebook shows me people and information from my local area. Forcing all platforms to use only alphabetical or chronological feeds because of the exploitative way some platforms use algorithms seems awfully close to the "banning math" argument people used to use about cryptography and DRM, and it would remove a lot of legitimate use from the internet.
It's all about who controls the algorithm. A sensible approach would be to decouple recommendations from platforms, to treat them like plug-ins that the user must be allowed to add or disable. You want to use YouTube's recommendation algorithm on YouTube? Great, but there needs to be an off-switch and a way to change over to another provider. This is classic anti-trust stuff, breaking up a sector into interoperable pieces.
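Purely as a sketch of what that plug-in model could look like (every name here is invented for illustration, not any platform's real API): the platform hosts the content, and the ranker is a swappable function the user picks, with a non-personalized chronological ordering as the mandatory off-switch.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical names throughout; this is a sketch of the idea, not a real API.
@dataclass
class Post:
    id: int
    timestamp: int

# A "ranking provider" is just a function from candidate posts to an ordering.
Ranker = Callable[[List[Post]], List[Post]]

def chronological(posts: List[Post]) -> List[Post]:
    # The mandatory "off-switch": no personalization, newest first.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

class Feed:
    """The platform hosts content; the user chooses (or disables) the ranker."""
    def __init__(self, ranker: Ranker = chronological):
        self.ranker = ranker

    def render(self, posts: List[Post]) -> List[int]:
        return [p.id for p in self.ranker(posts)]

posts = [Post(1, 100), Post(2, 300), Post(3, 200)]
feed = Feed()  # default: the neutral off-switch
print(feed.render(posts))  # newest first: [2, 3, 1]
```

The interoperability point is that `Ranker` is a stable boundary: YouTube's recommender, a third party's, or plain chronological all plug into the same slot, so switching providers doesn't require leaving the platform.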
Really nice to see someone else bringing this up. Algorithmic editorial decisions are still editorial decisions. I think ultimately search and other forms of selective content surfacing should never have been exempt. They were never carriers. I appreciate that this would make the web as we know it unusable. I think failing to tackle this problem will also make the web unusable, and in a worse way.
> I think ultimately search and other forms of selective content surfacing should not have ever been exempt. They were never carriers. I appreciate that this would make the web as we know it unusable
I can’t be the only one confused at these calls to have the government destroy things like searching the web, am I?
How is this a real idea being proposed on Hacker News, of all places? Not that long ago it was all about freedom on the Internet and getting angry when the government interfered with our right to speech online, and now there are calls to do drastic measures like make search engines legally untenable to run in the United States?
It’s also confusing that nobody calling for banning things or making the web unusable appears to be making the connection that the internet is global. If we passed laws that forced Google and Bing to shut down because they’re liable for results they index, what do you think the population will do? Shrug their shoulders and give up on the internet? Or go use a search engine from another country?
> How is this a real idea being proposed on Hacker News, of all places? Not that long ago it was all about freedom on the Internet and getting angry when the government interfered with our right to speech online
I can be upset about the government trying to make the world worse, and about other huge balls of power who have been making the world shitty in an ongoing fashion. Freedom of speech doesn't mean shit if a handful of people can buy up or otherwise absorb control of 90% of media and choose who gets heard. The call for regulation is an acknowledgment that the market fucked this one up. When the government threatens speech, I'll call for civil disobedience and proactive protections. When oligarchs threaten speech I'll call for regulation and punishment.
> It’s also confusing that nobody calling for banning things or making the web unusable appears to be making the connection that the internet is global. If we passed laws that forced Google and Bing to shut down because they’re liable for results they index, what do you think the population will do?
You assume that the only way to get a good, free search engine is to give control of it to some private entity, and that if we don't do it in the US, people will turn to someplace else. I think you may be lacking in imagination. At a minimum, the possibility exists for nonprofit organizations to run quality search engines, but it's also possible to decouple the indexing business from the ranking provider. Google could run an index and charge for access, and ranking providers could build on top of that and recoup costs with non-tracking ads, donations, sales, whatever business model they please. Just because an unregulated market doesn't come up with a good solution doesn't mean a market under different constraints won't find a better way. And if nothing works out, you always have the option of grants or a public digital infrastructure approach. There are so many things to try beyond shrugging and declaring that the market has ordained five dudes arbiters of the internet as experienced by most people.
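To make the decoupling concrete, here's a toy sketch (all names and data invented for illustration): one party operates the index and sells access to unranked candidates, while competing ranking providers build different orderings on top of the same index.

```python
# Hypothetical sketch of splitting indexing from ranking; nothing here is a real API.

# The index provider: sells raw candidate retrieval (term -> documents).
INDEX = {
    "python": [{"url": "a.example", "links_in": 40},
               {"url": "b.example", "links_in": 90}],
}

def query_index(term):
    """What an index operator might sell access to: unranked candidates."""
    return list(INDEX.get(term, []))

# Two competing ranking providers built on the same shared index.
def rank_by_popularity(docs):
    # One provider's editorial choice: most-linked pages first.
    return sorted(docs, key=lambda d: d["links_in"], reverse=True)

def rank_alphabetical(docs):
    # Another provider's choice: a dumb, content-neutral ordering.
    return sorted(docs, key=lambda d: d["url"])

candidates = query_index("python")
print([d["url"] for d in rank_by_popularity(candidates)])   # ['b.example', 'a.example']
print([d["url"] for d in rank_alphabetical(candidates)])    # ['a.example', 'b.example']
```

The point of the split is that the ranking layer, where the editorial judgment lives, becomes a competitive market rather than a monopoly bundled with the index.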
> I can’t be the only one confused at these calls to have the government destroy things like searching the web, am I?
if you find this distressing then i imagine you find it equally distressing when a couple of corporations destroy something.
the reason the word “enshittification” has become so ubiquitous is because corporations are actively destroying the internet and desperately trying to convince us the internet is separate from “the real world”.
sometimes stopping a person from burning the house down is necessary. no matter how loudly they cry about their freedom to have a bonfire in the living room.
"What do we do about it"
Account --> Delete
Wouldn’t we need to shut down all the news outlets, all the Twitters and all the newspapers then? They might not be as far along the toxic spectrum as Meta/TikTok, but they are very close.
There are people in this thread directly calling for us to strip protections from search engines and force them to shut down.
I think a lot of this discussion has become detached from reality and we’re just entertaining some people’s impossible fantasies about shutting down the internet and returning to the past.
Human instinct is always to ban and fight everything as soon as any change happens in society. The same biological motivation to doomscroll fuels our instincts to panic and doompost about how society is ruined unless we do [brash action].
Then we'll just use the Chinese apps. Or do you plan on shutting down our access to Chinese apps too?
Like TikTok?
Regulating content that makes people enraged seems like a slippery slide towards regulating any kind of "unwanted" speech. I get regulating CSAM, calls for violence or really obvious bullying (serious ones like "kill yourself" to a kid), but regulating algorithms that show rage bait leaves a lot of judgement to the regulators. Obviously I don't trust TikTok or Meta at all, but I don't trust the current or the future governments with this much power.
For example, some teen got radicalized with racist and sexist content. That's bad in my opinion, as I'm not a racist or a sexist. But should racist or sexist speech be censored or regulated? On what grounds? How do we know other unpopular (now or in the future) speech won't be censored or regulated in the future? Again, as much as I'm not a racist or sexist, I don't think the government should have a say in whether a company should be able to promote speech like "whites/blacks are X" or "men/women are Y". What's next? Should we regulate speech about religion (Christians/Muslims/atheists are Z) or ethics (anti-war people or vegans are Q) or politics or drugs or sex?
The current situation is shitty, but giving too much power to regulators will likely make it way shittier. If not now, in the future, since passed regulations are rarely removed.
At least in the US the government can't regulate speech (for the most part). But what we could do is regulate recommendation algorithms or other aspects of the overall design in a way that's generalized enough to be neutral in regards to any particular speech. And such regulations don't need to apply to any entity below some MAU or other metric.
Even just mandating interoperability would likely do since that would open up the floor to competitors. Many users are well aware of the issues but don't feel they have a viable alternative that satisfies their goals.
In theory I'm OK (kinda) with regulating the "overall design" somehow, but I don't see how it's going to work. Forced interoperability is a (very?) good idea, as it's really general, but it also doesn't address directly what the article and most comments talk about - the rage bait. I just can't imagine regulations (or "laws" or whatever the correct term is) that deal specifically with the algos that push rage bait that can't be later abused, if passed, to deal with other unpopular speech. And it seems like people want some laws to directly deal with that - the bad types of speech or algos themselves.
To clarify, I use "rage bait" as an example phrase, but it includes algos that only promote engagement at any cost and other things that aren't outright dangerous, but we think are dangerous. Not, like I said, CSAM or yelling FIRE or telling people to kill themselves.
Interoperability sidesteps the issue by giving users the choice of which algorithm (or algorithm provider) to use. The majority might or might not agree with that approach - for example obviously tobacco has not been left purely to the individual's judgment in the west.
Agreed, you can't regulate speech in a targeted manner while also not doing so. You're forced to find some common aspect much more general than "rage bait". Perhaps prohibiting the targeting of certain metrics? Or even prohibiting their collection in the first place.
> You're forced to find some common aspect much more general than "rage bait". Perhaps prohibiting the targeting of certain metrics? Or even prohibiting their collection in the first place.
Can you elaborate, give some ideas, examples, etc.? What metrics? How can you define them in a consistent, safe way?
We're talking generalized metrics. I have no idea which ones - I wasn't claiming to have solved the problem. The point is that if you can identify a general characteristic that is being used in a way which disproportionately contributes to a particular outcome then you can filter on that.
Estimated user age is an example of a metric largely unrelated to concerns regarding free speech. I doubt it has much relevance to the problem we're talking about here, but hopefully you can imagine that prohibiting the targeting of ads or the curation of an algorithmic feed based on that metric would not be expected to unduly disadvantage any particular sort of speech.
> The point is that if you can identify a general characteristic that is being used in a way which disproportionately contributes to a particular outcome then you can filter on that.
In a non-adversarial political context where we trust the government and the future ones, sure, but I think without any strong guardrails, we could enact a law that's good today, but will be exploited in the future.
For targeting minors with any kind of political speech - I'd love it if it wasn't legal. But that brings its own can of worms. There's enough discussion on HN on the implications of age verification, whether on how it's done technically (privacy-preserving or not (ZKP or just shady 3rd parties); FOSS or not; on the ISP, OS or app level, etc.) and whether the mere precedent could trigger additional issues down the road.
Anyway, I'd love a society where everything is perfect, but I'm afraid of what might actually happen. With a benevolent god as a permanent ruler, I'd be happy with a 100% prosecution rate against all kinds of littering, hate speech and whatnot, but in reality random crimes are easier to evade than a law passed down by a malevolent government, so I'm strongly against any kind of overreach. (Because the law tomorrow could be one we must evade if we want to resist an unethical government.) Someone will likely chime in with "but complete and massive overreach has never happened so far", to which I'd reply: we're close to the point where technology will let the ones in power grab that power absolutely and forever if we let them grab too much in the beginning.
> I get regulating CSAM, calls for violence or really obvious bullying (serious ones like "kill yourself" to a kid)
I’ve reported videos that look like sexual exploitation, videos that call for violence and videos that promote hate (xyz people are cockroaches) and all I’ve gotten is that “it does not go against community guidelines” with a link to block the person who created them. So any concerns of “where do we draw the line” are in my opinion pointless because the bare minimum isn’t even being done.
I agree with your CSAM and explicit calls for violence examples - they probably should be regulated. But a few comments ago in another thread someone didn't like me calling people in the workplace who annoy me with their mindless chit chat "corporate drones". My post could be construed as promoting hate. Where do we draw the line from "cockroaches" to "drones"? Do I have to call a certain "protected class" drones for it to qualify as hate speech?
What if I didn't say anything bad about a race or a sex, but said:
> I have coworkers that pester me with their small talk about the weather every time I see them. I hate those fucking cockroaches.
That's in bad taste, sure, but should it be regulated? You may know I obviously don't hate-hate them (they're just annoying, but most of them are good people) or actually consider them cockroach-like in any meaningful aspect (they're obviously people, but with annoying tendencies). But would a regulator know the difference? What about a malicious regulator who gets paid by (ok, this is a silly example, but bear with me) the weather-talking coworker lobby to censor me? In many not-so-silly examples a regulator could silence anyone for anything (politics, sex, drugs, ethics), as long as it uses a bad word or says anything negative about anyone. I don't want to live in such a society. That much power would be abused sooner or later.
I'm sorry, but are you saying it's hard to figure out what to do, so let's do nothing? Banning racist and sexist content is not a slippery slope. It's just banning racist and sexist content; the slope is only slippery because the salivating mouths of these social platforms grease it.
Also, I don't think people are advocating censorship, they are advocating not promoting assholes. You can have your little blog and be racist on it all you want, but let's not give these people equivalent of nukes for communication.
> are you saying it's hard to figure out what to do so let's do nothing?
I'm fine with doing something, but the current "something" seems slippery.
> Banning racist and sexist content is not a slippery slope. It's just banning racist and sexist content, slope is only slippery because the salivating mouths of these social platforms grease them.
But what is "racist", exactly? See why I think it's a slippery slope and why it's ill-defined:
1. We could agree that "Let's go out and kill/enslave all the $race/$gender" is racist, but that's bad regardless of which group we put in $race, as it's speech that incites violence.
2. What about "$race is genetically inferior in a way (less intelligent, less athletic, more prone to $bad_behavior)"? I honestly think most differences between races/ethnicities are due to environmental factors, but what if there actually are differences in intelligence or anything like that? Should we ban speech that discusses that? Black people win running races and are great at basketball. They're prone to certain diseases, as are Caucasians or Asians. So would you ban discussing that? Or would you ban blindly asserting that $race is $Y without some sort of proof?
3. What about statements like "There are way more male bus drivers because X"? Or "men are better at Y, but women are better at Z"?
What do you think the definition of racism and sexism in this context should be? I think the line is where we incite violence towards a group, but not about discussing differences that may or may not be true.
> Also, I don't think people are advocating censorship, they are advocating not promoting assholes. You can have your little blog and be racist on it all you want, but let's not give these people equivalent of nukes for communication.
I think restricting a platform (or anyone or anything) from promoting someone IS censorship. If it's not censored, why shouldn't I be able to promote it? This honestly feels disingenuous - like "we pretend that the racist isn't censored and can have his little blog, but it's illegal to promote his little blog".
> I'm sorry but are you saying it's hard to figure out what to do so let's do nothing?
That seems more reasonable than the alternative, which is to make modifications to a complex system without being sure what the outcome will be. You're more likely to cause bigger problems.
>> Meta and TikTok have no natural right to exist if they are a net negative to society.
Exactly. And when we are done with them we will shut down Molson and Anheuser-Busch. Then we can go after the people who make selfie sticks. Then the company that owns that truck that cut me off last week. Basically, organizations I dislike should not be allowed to exist.
oddly enough the TikTok referred to here was to be shut down in the US. But then the executive branch ignored the law while it could organize handing the company over to Larry Ellison instead. But these allegations date to when the company was fully under the control of ByteDance, and not US-regulated entities at all.
> oddly enough the TikTok referred to here was to be shut down in the US. But then the executive branch ignored the law while it could organize handing the company over to Larry Ellison instead
Which should make people think twice when they call for government regulation on speech as a solution to content they don't want other people to see.
The more you give the government power to control speech, the more they'll use those laws to further their own interests.
regulation will never happen because these are instruments to control the masses
All the more reason for regulation. If people catch on to the fact that they are being manipulated and abused by the platforms to "drive engagement" they might abandon them or spend less time on them. If the government regulates these platforms so that they are safer or at least less harmful people will feel better about using them giving the government a larger platform to use to control the masses.
> If people catch on to the fact that they are being manipulated and abused by the platforms
I am not trying to be funny or anything but this sounds like "if only fat kid realized that eating 10 apple pies before bedtime might be the reason s/he is fat" We already know what social media platforms are doing, not to just young people but to all people.
> If the government regulates these platforms
This is like saying "congressmen care about our debt so they will vote to reduce their own salaries by 90%" - the government is not going to regulate tools they are using to control the narrative/masses etc...
Tax and heavily regulate online advertising. The root of the problem is that it is very, very lucrative to drive engagement and until you get rid of the monetary incentive, the problem will never go away.
"Make the drug less good" likely isn't the answer. Nor is banning it.
What caused Gen Z to drink less than millenials? Maybe Gen Z has the answer.
You're only allowed to drink as an adult. We're talking about letting those companies rot our brains in those first 18 years.
In my experience the 60+ demographic have had far more damage done.
We just haven't seen what 60 year old ipad kids look like yet. It's not going to be pretty
Just as they were settling into middle age far-right propaganda, conspiracy, and hate "entertainment" escaped AM radio and flooded cable news and social media.
They never stood a chance.
yeah, it's called "smoking weed".
Technology, culture, legalization of pot, adtech, covid, there are a metric ton of factors that all had significant impact on both decreasing socialization and reduction in drinking. And lowering the birth rates, and the number of healthy relationships, healthy friendships, etc.
I'm for legalizing all drugs, regulating the sale, ensuring quality and purity, and educating the public. Cognitive liberty is sacred - but the dip in drinking has a whole lot of causes.
A healthier society would be more social and get out and drink more, I think.
Millennials love their weed, party drugs too, it took over Gen X drinking in some way.
But I find Zoomers to be rather tame in terms of drinking, smoking, drugs, unsafe sex, etc... Few of the traditional vices, really.
> What caused Gen Z to drink less than millenials?
Social media addiction?
Decades of science communication and real life examples of knowing (of) alcohol addicts
I'd wager that how expensive it has gotten, plus a year or two of lockdowns that led to a whole generation of people not going out to get wasted as soon as they're legally allowed to, had way more effect.
Oh, and weed being increasingly legal to consume.
I also noticed a trend that happened at my old college and a number of others that I've never seen anyone write about: the great buyout of the old college area slumlords.
All the dive bars where you could black out off $10-20, which I drank at in college, are gone. They all faced the wrecking ball and were replaced in the past 10-15 years with apartments over Targets and CVS stores and family-friendly restaurants. A huge concerted effort to buy up these properties piecemeal and then destroy entire blocks at a time. I have no clue where kids at my college go to drink now. I have little interest in going back as an alumnus either, as they destroyed all the places of my memories.
Real-life experience with alcoholics would at best be constant over time, or be diminishing (since Gen Z drinks less).
Also seems like the science on whether science communication actually changes behavior doesn't point towards it being much of a cause here.
Gen Z drinks less because alcohol isn’t enough of a fix and hard drugs are way cheaper. The answer isn’t what you’re looking for.
Inflation, mostly. And a lot of us lack social skills, so we don't have many friends, and thus no reason to go out and get drunk.
But like, when a pint is $12 and mixed drinks are $15+ sobriety starts looking more appealing.
Source: Am gen Z.
Make it legal and expensive?
Gen Z never touches grass; you need to leave the house first before drinking even comes up.
I’m going to bet we do nothing and continue to complain instead.
Regulate it. Laws, consequences, etc.
Laws appear to have fallen out of fashion. And a disturbing proportion of the loudest people like it. Then you have those who ought to know better but are attention-seeking, selfish assholes who somehow find it «interesting» or think they adhere to «principles».
Those in the latter category know who they are. You downvoted this comment.
> Laws appear to have fallen out of fashion.
Laws are very much fashionable, but only for us. “Rules for thee but not for me” is what's in season right now.
Importantly, seasons change.
I recently provided guidance to state legislators, with that guidance making its way into law in regard to balcony solar. If you don’t think that making law works, I would encourage you to get involved somewhere that means something to you.
It turns out that if you present as an honest, non-interested party, people will call you and ask you for your advice. I do admit that the ease of this is going to be a function of the people you are up against and the subject being regulated. My point of this comment is: default to action. “You can just do things.”
>"What do we do about it?"
nothing. if it isn't illegal, it isn't illegal.
previous generations of neurotics objected to many current (at the time) things we don't bat an eye about. when was the last time you saw anyone campaign against satanic music, violent video games, or hardcore pornography?
Nothing is inherently illegal. Laws are created in response to an undesirable outcome - murder wasn't illegal until it was made illegal.
You in the 90s: "Leaded fuel isn't illegal guys, stop your campaigning, let's keep huffing it"
How about coming up with an actual defense of social media rather than an ad hominem about "neurotics"?
>You in the 90s: "Leaded fuel isn't illegal guys, stop your campaigning, let's keep huffing it"
people who raised alarm about such things could easily be branded as conspiracy theorists. even now, at this very website, so full of well-educated folx, people who speak out against xenoestrogens, for example, are being downvoted to hell.
Consuming social media doesn't have an inescapable negative impact on other people, unlike burning leaded fuel. In the same way that eating junk food doesn't. Should we ban junk food? What else do you want to ban from others just because it has a risk profile you personally don't feel comfortable with?
> Consuming social media doesn't have an inescapable negative impact on other people
You don't think that large portions of an entire generation getting cooked by social media have negative externalities that impact society as a whole?
I don't think anybody has the moral authority to regulate such second-order effects.
Should unhealthy food be banned because of the second-order effects of obesity? What about mandatory church / religious service? After all, I judge that atheism has negative second-order effects on the world. Where would I get this moral authority from?
I wonder where folks like this came from, and at what point did people who associate themselves with hacker culture decide that censorship is great.
The OG hackers thought of censorship as network damage that needed to be routed around.
People who support censorship always think of themselves as smarter than the rest. Dunning-Kruger, however, would suggest something different.
I posted above that social media related issues are a problem, and then a bunch of posts accused me of wanting to make it illegal. I never suggested that and I actually don't support censorship, I just wish some people I know didn't spend so much of their time bummed out about social media.
> >"What do we do about it?"
> nothing. if it isn't illegal, it isn't illegal.
Are you suggesting that because something isn't illegal, it shouldn't be illegal?
Are you perhaps a representative of the Triangle Shirtwaist Factory?
I'm not suggesting that it should be illegal, I'm just seeing this monetization of bad vibes and wondering how we can have less bad vibes. Pump the brakes a little.
Things that are not illegal can and should be made illegal if need be.
Many things were not illegal before they became illegal.
okay. go ahead and make "conspiracy theories" illegal.
The people who were voted to power (across the globe, not just the US) to do something about it are stuck getting their dopamine kicks posting garbage on the same platforms. It’s truly a terrible timeline we are in.
We are in a real life cyberpunk dystopia. Without any of the fun parts.
It’s like asking how you get people to stop drinking alcohol.
As long as there are people who don’t acknowledge or care about the health effects it will exist. If that’s a plurality of your population then you have a fundamental population problem IF you are in the group who thinks it’s bad.
Aka every minority-majority split on every issue ever.
So the answer is: live in a society governed by science. Unfortunately none exist
> So the answer is: live in a society governed by science. Unfortunately none exist
Science is a lagging indicator of reality. It is by definition conservative (in that it requires rigorous, repeatable data before it can label something as true). Because of that, there's usually a pretty substantial gap between human discovery and scientific consensus.
Mindfulness was discovered, as an example, to be beneficial as far back as 500 BCE. It wasn't "proven" with science until 1979.
Sometimes we just need to rely on lived experience to make important decisions, especially regulation. We can't always wait for science.
>Science is a lagging indicator of reality
Tell me what the leading indicator of reality is then
I drink, but I acknowledge and care about the health effects. I care more about how it makes me feel. Don't assume everyone who smokes or drinks alcohol or takes another type of drug just doesn't care. Why don't we ban dangerous sports like rock climbing or BASE jumping or MMA while we're at it?
We handled smoking pretty well by making it cost more and banning it in public places. If tiktok was banned from official app stores it would essentially go away.
Social media addiction is much deeper than nicotine addiction. And people still smoke, see Philip Morris stock and earnings :)
I don't think deeper is the right word. Nicotine has a physical addiction element that social media does not. You cut off social media, you at worst face some boredom and FOMO.
And PM's earnings are mostly from developing countries at this point. In the US alone, the adult smoking rate has fallen nearly 73% from 1965 to now, so clearly the regulations are working.
We need to do the same for social media. People didn't quit smoking because they suddenly got more disciplined. We just made it inconvenient. The biggest start would be to get rid of algorithmic feeds and "recommendations"; keep feeds purely chronological, only from people you explicitly follow.
Nitpicking maybe, but nicotine isn't the main thing that makes cigarettes addictive and it's not that bad by itself. Gwern has a long article on nicotine that's worth a read [0].
More importantly, why do you think society should make smoking inconvenient - more costly, more illegal or anything like that? If I'm not blowing smoke in your face, why interfere with my desire to smoke? If it's about medical bills, just let me sign a waiver that I won't get cancer treatments or whatever, and let me buy a pack of smokes for what it should cost - a few cents per pack, not a few dollars/euro.
[0] https://gwern.net/nicotine
If I can smell it, I don't really care if you're blowing it directly at me or not, it's still a pain. If you want to smoke in private in your own home and then wash your clothes after so no one can tell you're doing it, I guess that's fine, but I don't see why it also has to be cheap?
I admit I sometimes smoke near people, even if I try to move to the side. At bus stops I try to be 5-10 meters away from people, but often I don't do it and it inconveniences people. Sorry, truly. I will try to be more mindful. When I switched to e-cigs for a while a couple of years ago, I started noticing the smell of tobacco smoke. After I switched back to cigs, I stopped noticing it. Smokers don't notice it that much as they're around it often. It's not always smokers being inconsiderate, it's not realizing how it smells to others. If you let me smell the clothes of a smoker and a non-smoker, I wouldn't be able to tell the difference if my life depended on it. Although I only smoke outdoors and wash my clothes regularly, so I hope my base smell isn't that offensive to non-smokers.
So yeah, this comment really reminded me to not light up whenever and "try my best" to walk a few meters away, but to really think if I'd inconvenience people.
On the other hand, if I'm alone on a street and you're walking towards me so I just pass you for a second, I can't imagine that the smell would be that bad from just a casual walk-by. When I'm passing people, I hold in my smoke till I pass them.
Even if I agree that smoking outdoors is inconsiderate and annoying to others, I could still do it at home or in dedicated areas (smoking sections in bars with good ventilation, for example).
> I don't see why it also has to be cheap?
If we agree on the previous points, then why not let it be cheap? Tobacco is cheap to produce. Most of the price of cigarettes is artificial, to cover medical costs and whatnot. Let's say I sign a waiver that if I get sick, I either pay through the nose or don't receive treatment at all. Would you be OK with letting me buy tobacco at its original cost (no subsidies, no artificial fees)?
Or, as a thought experiment - let's say tobacco didn't have any smell and there were 0 negative effects of second-hand smoke. Like, you wouldn't know it if I smoked near you unless you saw me. Then what would be the justification in making smoking artificially expensive for me?
If it wasn't for the impact on other people, I think you could handle it basically like sugary drinks - there's some benefit in discouraging it for health reasons but not as much benefit comparatively, so a more modest tax is all I could really argue for, yeah. (Like how nicotine gum is treated, essentially.)
Since the impact is mostly annoyance (the smell) and most restaurants are either smoke-free or offer separate enclosures, why tax it at all (besides for the smell)? I am reducing my lifespan by about 8 to 10 years with smoking, sure. But why should the government force me to change that by taxing it? Why tax sugary drinks or ban or criminalize drugs other than the caffeine, nicotine and alcohol?
If the idea is to make everyone be healthy, live as long as possible and be productive for as long as possible, why not ban dangerous sports, too? I'm "the government" for my dog and I don't let him do anything dangerous or stupid, but he's a dog and we're people. With the supposed free will and agency we all like.
>But why should the government force me to change that by taxing it?
Because the government ends up paying for the medical treatment of a lot of smokers when they're older. And it's incredibly expensive. You can say you won't rely on government funds, but there's no way to actually opt out of Medicare for life or sign up to never be guaranteed stabilization when you show up at a hospital.
Nicotine is also notoriously addictive, which weakens the "my choice" argument.
>Why tax sugary drinks
That's totally a nanny state thing. Personally, I would mildly support it. But it's not a hill I'd die on.
>or ban or criminalize drugs other than the caffeine, nicotine and alcohol?
Hard drugs cause blight. People don't mind so much if they see a soda can on their street, but if they see a used needle they'll move. And again, any society with a safety net has an interest in preventing common causes of people falling into it.
>why not ban dangerous sports, too?
It hasn't proven to be a big problem at the population level. Hell, public health experts would love to have that problem, because it'd mean more people were exercising.
> Because the government ends up paying for the medical treatment of a lot of smokers when they're older. And it's incredibly expensive. You can say you won't rely on government funds, but there's no way to actually opt out of Medicare for life or sign up to never be guaranteed stabilization when you show up at a hospital.
That's why I'd get a tattoo on my chest, if necessary, saying "Smoker!". I know that most of the price of tobacco is insurance for medical treatments. Not Medicare, as I'm not in the US, but similar. I am OK with tattooing "DO NOT STABILIZE OR CARE FOR AT ALL - SMOKER !!!1".
> Nicotine is also notoriously addictive, which weakens the "my choice" argument.
I am an adult human who participates in society and has chosen to smoke. Please treat me as an adult who has made a (bad) decision and is willing to suffer the consequences.
> sugary drinks... nanny state
Same with any drug.
> hard drugs...
People who abuse hard drugs to the point where we need to save them or others from them are most often uneducated or poor (and living in poor neighborhoods, with all that that brings). Believe it or not, I know several people with PhDs in things like physics and biology who regularly take "hard" and/or "soft" drugs besides alcohol and nicotine. Only one needed intervention after ~10 years, and it was because of pre-existing psychological issues that led him to abuse the drugs. I and lots of people I know who lead normal lives can list more 3- or 4-letter abbreviations of stuff we've tried than an HN comment will let us fit. Or maybe I'm exaggerating a bit, not sure, but you get the point.
If you look at a poor neighborhood, you'll see a lot more people with drug problems. Not because richer people don't do drugs, but because for them it's not an escape plan, it's not some random impure thing you get, and because it's done within a safe place. It's a social issue, not a drug issue. Work on solving poverty and education, not on making us drug users feel like criminals for trying new stuff or on making our drugs more expensive. Whether it's legal like alcohol or nicotine, or illegal like a psychedelic, a benzo, weed, an opioid, a dissociative or anything else, it's a drug. I am an adult. Let me experience my adulthood like I want to. You don't take drugs and that's fine, but please understand that you have no fucking idea what you're missing if you're doing it correctly. Literally anything you've likely experienced, like romantic relationships, climbing mountains, orgasms and so on, is categorically and qualitatively different from the amazing things you can experience on various drugs.
I think it's also partially due to smoking being more and more considered disgusting, not just inconvenient. The peer pressure of "don't do this very stinky disgusting thing around me" must have at least a little to do with declining smoking rates. Back in the 80s, most people didn't have the guts to say "Hey, don't smoke around me, it's gross!" but plenty of people do today.
We need to culturally consider Social Media use to be disgusting or at least something to be ashamed of.
The irony is that social media trends are making smoking cool again.
> You cut off social media, you at worse face some boredom and FOMO.
I wish this were true, but I know tens of people that quit smoking and (besides myself) know half of another person that quit social media. Drunk at NYE two years ago, I offered $10k to a group of 25 people to delete all social media apps from their phones for 60 days - still have that $10k in my account. I think quitting social media is around the same as getting off a hard drug addiction (like a hard, hard, hard one - opioids, heroin etc...) and maybe even tougher than that - for most people.
> People didn't quit smoking because they suddenly got more disciplined. We just made it inconvenient.
I want to believe this! I just haven't personally experienced this at all (I am in my 6th decade on Earth, so plenty of time around). I don't know a single person that stopped smoking because they could not burn one inside restaurants/clubs/... or because it costs $18/pack or any of that. An 18-year-old person faces very little "regulation" when it comes to smoking. A little inconvenience like moving 25 feet away from the building isn't much of a deterrent IMO.
I am subjective on the matter of social media, I know that. But I am educated in its evil and would, for instance, never let my kid be on any social media as long as she is under my roof. This has already caused significant challenges for her (and my wife and I), but it is also an amazing learning experience in overcoming silly social obstacles...
Not a fan of conflating personal enjoyment of a vice with promoting hatred.
It's like how do you get people to stop letting their kids drink alcohol.
Everyone knows what the dangers of alcohol are now. We need to get reliable data one can base policy on and then let the public health system do their thing. Maybe not every health authority but enough of them to protect the species at large. Then we'll get social media out of schools, away from young people, vulnerable folks, etc.
Why would someone want to get other people to stop drinking alcohol?
What do we do? We treat platforms with algorithmic news feeds as publishers not platforms in the Section 230 sense.
Think about it this way: imagine if you took a million random posts or videos. You would find a wide range of political views, conspiracy theories and so on. Whatever your position on any of those issues, you could find content pushing those views.
So if your algorithm selects and distributes content that fits your desired views and suppresses content that opposes your views, how are you different from a random publisher who posts content with those exact same views?
This is kind of like the "secret third thing" of Section 230 where you get all the protections of being a platform and all the flexibility of being a publisher and we need to close that loophole. Let platforms choose which one they are.
Another example: if I create a blog and write a post that accuses my local mayor of being a drug addict and a pedophile, I can be sued for defamation. You can try the journalism defense but it won't shield you from defamation. Traditional media outlets are normally very careful about what they publish for this reason.
But what if I run Facebook or Twitter and one of my users says the exact same thing? Well I'm just a platform. I have a libel shield. But again, my algorithm can promote or suppress that claim. Even if I have processes to moderate that content, either by responding to a court order to take it down and/or allowing users to flag it and then take it down myself with human or AI moderation, the damage can't really be rolled back.
We've let tech companies get away with "the algorithm" being some kind of mysterious and neutral black box that just does stuff and we have no idea what. It's complete bullshit. Every behavior of such an algorithm reflects a choice made by people, period. And we need to start treating this as publishing.
Does anyone know of watchdog agencies that do the research to document and litigate harmful algorithmic trends?
I know https://www.reset.tech/ does really good work in this space, but are there others, and who is funding them?
As long as the general public responds to sensationalism, what's stopping the social media platforms from exploiting it?
Most of it is clickbait anyway.
I can't say I'm surprised and I think most people wouldn't be surprised either. But it's always good to have evidence.
Is this unavoidable? I mean it does generate clicks and views and user engagement so if one platform is doing it, doesn't that automatically mean that the other has to do it? Otherwise they will continuously lose market share.
I think the burden to curate your feed so that you do not have such content is now resting with the user and they cannot rely on the platform to do it for them.
If the user even wants to do that. Why would they? They're looking for a sugar rush, they're not looking to eat their intellectual vegetables. How do you get children to eat vegetables?
"They" being others, but definitely not you right? Those people...
> I mean it does generate clicks and views and user engagement so if one platform is doing it, doesn't that automatically mean that the other has to do it? Otherwise they will continuously lose market share.
Why? User engagement isn't the same thing as market share.
If McDonald's trained its cashiers to insult you while taking your order, engagement would go up, and market share would go down.
Of course they did. As long as they're legally allowed to do so and profit from doing so they will continue.
The feedback loop for this moral hazard is slow but implacable. You can treat the zeitgeist as a dumping ground for so long, until you get so big, that you can no longer treat it like an idealized infinite substance.
In my experience there’s a strong “banality of evil” that happens.
Some poor schlub ML Eng has shipped a feature that wins an A/B test. They’re pushing to get promoted. Their management wants to show they’re hitting their KPIs.
An engine of destruction filled with well meaning people just hoping to advance in their careers.
You might say, it’s ultimately the designers of the incentives that matter. Even there, the leadership will change. Inevitably the needs of the capitalist machine take over.
Someone changed the title and added a typo
British people complaining about free speech and trying to censor the internet. America needs to keep standing up to British censorship interests.
What!? I’m shocked! Shocked I say!
Is it really whistleblowing when everyone already knows it?
If you like better content look for kagi's small web or better yet find a better algorithm that optimizes for your preferences rather than engagement.
I have my Instagram and X on a locked-down browser in a container with a fake profile. An LLM drives it, finds the posts from specific users, and compiles a gist of all the important things in my locality (or whatever I care about) every evening, without me ever going near that FOMO-driven dumpster fire of TikTok/Insta/X.
Best LLM RoI I made.
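For anyone curious what such a pipeline looks like, here is a minimal sketch of the digest step. This is a hypothetical reconstruction, not the commenter's actual setup: `fetch_posts` stands in for whatever browser automation pulls posts from the watched accounts, and the relevance filter is a plain keyword match where the real version would presumably ask the LLM.

```python
# Hypothetical sketch: filter watched accounts' posts down to an
# evening digest. fetch_posts() and the keyword filter are stand-ins
# for the browser-automation and LLM layers described above.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def fetch_posts():
    # Placeholder for the sandboxed-browser scraper; returns canned
    # examples here so the sketch is self-contained and runnable.
    return [
        Post("city_alerts", "Water main repair on 5th Ave tomorrow"),
        Post("influencer42", "You won't BELIEVE this outrage..."),
        Post("local_library", "Saturday book sale, 9am-3pm"),
    ]

def relevant(post, keywords):
    # Keep only posts mentioning something the user cares about.
    # The real setup would delegate this judgment to an LLM call.
    text = post.text.lower()
    return any(k in text for k in keywords)

def compile_digest(posts, keywords):
    kept = [p for p in posts if relevant(p, keywords)]
    lines = [f"- {p.author}: {p.text}" for p in kept]
    return "Evening digest\n" + "\n".join(lines)

if __name__ == "__main__":
    print(compile_digest(fetch_posts(), ["5th ave", "book sale"]))
```

The point of the container/fake-profile arrangement is that the engagement algorithm never gets to profile you; the LLM only ever sees the raw posts and hands back the signal.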
Dupe? https://news.ycombinator.com/item?id=47403929
I look at people who use fb or tiktok, or x, the same way I look at smokers or alcoholics. With sadness and pity. The fact that we let children use this is hard to accept. The fact that fellow hackers and engineers, some of the brightest minds, have contributed to this is extremely disappointing. Shame on you.
the bucket of crabs truly pervades in its metaphorical accuracy. regardless of intelligence, humans are liable to drag down their fellow men. insane to consider that children are effectively drugged from infancy. for this i do not blame an uneducated society strained to its zenith; i blame the sociopathic and the craven who have enabled the proffering of digital drugs, and consequently accelerated societal addiction. the shame falls entirely on them. may reincarnation be real such that sadistic six-figure-salaried software engineers and their malicious managers are forced to reap the rewards of such "engineering".
The BBC has a lot of harmful content, but that is what they get paid for by the government.
When I hear "Meta" and "Facebook" the top 10 things I think:
1. "Surveillance"
2. "Advertising"
3. "Scams"
4. "AI slop"
5. "Manipulated experience"
6. "Child harms"
7. "Misinformation campaigns"
8. "Disinformation campaigns"
9. "Doom scroll regret"
10. "Zuckavatarphilia"
But I don't claim to have the "right" opinion and am curious how other people respond to the brands. If each of you could reply, and re-list those associations in the order you experience them, I will collate the results and post them everywhere I can think of. It would go a long ways to satisfying my curiosity, and the curiosity of reporters that like to repeat things they read on the internet.
* to drive
Throw away your 'smartphone' and stop using anti-social media. It is killing society, and only making the Billionaires more powerful. They are evil and will do anything to stay in power.
I remember The Social Dilemma’s entire premise was basically this headline minus TikTok, and that came out what? 7 or 8 years ago?
Not saying “well duh” I just think at this point I have to ask “are we going to do anything about it?”
We’ve known about the financial incentives to promote anger and outrage online for at least a decade now. So what are we going to do about it?
What can you do about it? That is the rub. You can't. It is no coincidence that pretty much all avenues of information consumption you face are susceptible to this issue. It is by design that these technologies are able to reach you in these ways. It is by design that propagandists have so much success. Everyone in power today is in power because of propaganda. Why would they ever let go of their reins of power? It is the sole forcing factor keeping them in power, after all. They'd be no different than you and I otherwise, which scares them more than anything.
Legislate! We need laws! I get we aren’t used to that anymore in the US but truly “marketing” and social media in the US has become so hostile and harmful I just don’t understand how we can in good conscience not start to put heavier restrictions on them. Enough is enough. We can’t continue to sacrifice our society on the altar of the Almighty dollar.
There's the other rub. Can't get good laws passed either because laws are also subject to propaganda. You draft your good Bill A. Technocracy comes out with millions in ad spend, floods the zone, convinces the voters your bill is bad, or their Bill B with their profitable carveouts is good, voters vote against their interests and for the technocracy. This pattern has played out like this countless times already. Only thing I can think to do is dissociate and pretend like the world isn't fucked.
Shun! You know people at Meta, Tiktok, et al? They're not people you want as friends, they're family but one step above nothing in your lives. Boo Zuck wherever he goes, MMA fights is just one place. Make it as uncool to be part of the machine as it should be for other drains on society like Palantir.
Given how TikTok "trends" seem to consist mostly of "get teenagers to do stuff that causes huge expenses for US society":
* "eat tide pods"
* "stick a fork in electrical sockets in your school"
* "destroy your school's shit" aka "Devious Licks" - bathrooms, chromebooks (jamming stuff into the charging ports to start fires...)
* "drink a shitload of Benadryl to see what happens"
* "steal a kia/hyundai and drive 80mph, run from the cops, etc"
...convince me that this is not a purposeful attack on US society by the CCP?
Given that the 'tide pod challenge' was before TikTok's time and took place on wholly US-owned platforms like YouTube, we can safely assume it's all in your head. Most of the other stuff you're sharing sounds like a reflection of what you find out in the streets of any major US city. Perhaps you should question if your government is the one that is attacking you.
What? Conspiracy theories are not harmful!
Drugs.
Why are social media platforms picked on?
Did we forget Gresham's Law applies to content and has done so since humans could communicate?
Bad or wrong ideas are the ones that get talked about. Do we discuss the 10 issues politicians get correct, or the 1 they screw up?
Platform is irrelevant here; the exact same phenomenon occurred on radio and TV decades before it did on social media platforms, and in newspapers centuries prior.
> since humans could communicate
You have finally identified the problem. It all started with Homo habilis and misinformation has been rampant ever since. But even protozoan parasites mimic host proteins and block signals, so you really have to go a lot further back to deal with fake news.