







I think the next step should be developing a test that can predict how someone will react to it.
Unnecessary: foolish people always gonna fool. Anyone that far gone in the lacking-judgement department needs far more help than anyone can reasonably be expected to provide, and attempting to “foolproof” for them would only drag everyone else down while doing nothing for them. Likewise, just because some people overeat junk food doesn’t mean we need to devise some test to decide who can safely get junk food: it’s a personal choice, the risks of bad judgement are reasonably understood, & that bullshit’s beyond paternalistic.


Does it help to frame it in a different light for you if you think of it as those companies exploiting vulnerable people’s disorders to extract money from them?
Not at all: we don’t go winning lawsuits against any of those companies for promoting appealing products just because the dysfunctional among us may overconsume them. Liberty comes with accepting responsibility for the reasonably foreseeable consequences/risks of our choices, or no one will be able to realize liberty once one person’s responsibility becomes everyone else’s duty. Society can’t reasonably be expected to cater to everyone’s irrational/dysfunctional manifestations & whims. The legal standard is the reasonable person, not the dysfunctional one. Moreover, the existence of children doesn’t mean we need to childproof all of society: people are still entitled to the liberty of their adult activities & vices.
When risks are open & obvious, such as the overconsumption of certain foods & legal substances, that’s generally viewed as a matter of personal choice rather than an unreasonably dangerous product defect. Even when kids grow obese from overeating junk food, blame primarily lies with whoever provides them that food rather than with the product itself, no matter how appealing the design of the food, the design on the container, or its advertisements. Especially amid the latest wave of moral panic over social media, the risks & dysfunctions of obsessively overconsuming social media or any information service to the point of impairment are open & obvious. Parents who give their children these devices, observe excessive attachment, and don’t cut them off bear considerable responsibility.
Information & the devices to view it are generally benign & noncoercive. People use these services because some find them useful & engaging to their interests. Features that effectively meet user demand for engaging information offer legitimate utility to a reasonable person without impairing them. Such features aren’t defects, and “fool-proofing” them would hamper utility for functional adults who can deal with the “dangers” of attention-grabbing information.
However, even supposing such features defectively make the system unreasonably dangerous in a reasonably foreseeable manner, that only requires that service providers give fair warning. Once the duty to warn has been met, users are reasonably aware of the risks, and responsibility shifts to the risk-takers or to the parents who give children access despite reasonably knowing the risk.
Telling those people to just have self control is like telling someone with depression to just stop being sad.
We can’t rearrange all of society just because some people have depression. Liberty means not imposing on others the issues we should be dealing with ourselves or through services that exist specifically for that.


I don’t know. Seems like self-control issues. People can get addicted to anything: shopping, sex, internet use, work, gaming, exercise. I also disagree with prohibitions on gambling, drug use, prostitution: it’s their money, their body, etc.
Penalizing systems of communication & information delivery seems like overreach. The harm seems phony & easily averted by basic self-control.


OS level parental controls do not give a parent control over a child’s use of a social media platform
A quick web search indicates they can filter/block content, restrict apps, and report activity. Additional software can monitor communication (including social media) and alert guardians.
However, the legal opinion wasn’t that parental control software is the best or only better solution[1], but that more effective alternatives with less adverse impact (such as non-punitive laws promoting the use of client-side parental controls) exist than punitive laws that are limited in their enforceability by jurisdiction & that unnecessarily burden & deter (thus harm) the free exercise of fundamental liberties.[2] Client-side parental controls only affect their users without affecting everyone else. Unlike regulations on site operators, they work on content originating outside a law’s jurisdiction. Even at the time of that federal court decision, parental controls could screen dynamic content (eg, live chats) over any protocol.
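To make the jurisdiction point concrete, here’s a minimal sketch of receiving-end filtering (in Python, with a made-up blocklist and hypothetical helper names, not any real parental-control product): the allow/block decision happens on the child’s device before any request leaves it, so where the site is hosted never matters.

```python
# Minimal sketch of client-side (receiving-end) filtering. The blocklist and
# function names are hypothetical illustrations, not any real product's API.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-social.invalid", "example-chat.invalid"}

def is_allowed(url: str) -> bool:
    """Return False when the URL's host, or any parent domain, is blocklisted."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    # Also catch subdomains, e.g. "live.example-chat.invalid".
    return not any(".".join(parts[i:]) in BLOCKED_DOMAINS for i in range(len(parts)))

if __name__ == "__main__":
    for url in ("https://live.example-chat.invalid/room/1", "https://example.org/article"):
        # The decision is made locally; the server's location/jurisdiction is irrelevant.
        print(url, "->", "allowed" if is_allowed(url) else "blocked")
```

Real parental-control suites apply the same idea at the OS or network layer rather than per-URL, which is how they can also cover chat & other protocols rather than just the web.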
By far, the most appropriate answer is responsible adult involvement & supervision and the education of children to address motivation, coping, & responsible behavior.
The internet is global. A key problem with any coercive law is that its jurisdiction isn’t: just as 4chan.org can tell the UK’s Ofcom to go fuck itself, site operators beyond a law’s jurisdiction can tell its enforcers the same. Another issue is that the compliance burden falls harder on entrants than on the dominant companies in the industry, which have more resources to afford compliance, thus deterring competition. Do we really want to make it harder to displace our current social media companies with alternatives?
Communication alone rarely poses immediate danger: there’s usually a number of steps between the communication & actual harm where anyone can intervene. We can block or ignore unwanted communication & choose the information we disclose. Responsible people can guide their children on safety & control their access to the devices they give them.
A while ago, when my uncle struck his kid for making an unauthorized payment through the kid’s tablet, I scolded him for creating the situation where the kid could do that instead of setting up a child account with parental controls. When I asked him how child abuse is more responsible than reading some shit designed for him to understand and pressing a few buttons to use the system exactly as designed to prevent this shit from happening, he quickly got the point and did that in about an hour. This shit ain’t hard.
Better solutions already exist, they’re effective, and the solid recommendations governments already have for promoting them would work. Governments have largely chosen not to.
The cited recommendations I mentioned elsewhere went beyond parental control software into areas such as the promotion of standards & the development of better standards in the industry. ↩︎
Rather than accept any law, government has a duty to minimize compromises of fundamental rights in meeting its “compelling interests”. When government fails to prove that a law is the least adverse to fundamental liberties among alternatives that are at least as effective, that law must be rejected. ↩︎


And improve parental controls for children’s accounts. I’m sure there’s nothing currently giving a “parent” account high level control over a “child” account, but I’m happy to be corrected if I’m wrong.
Parental controls already exist in every major OS, they suffice to restrict & monitor social media, and they go unused.
A better solution might be for laws to provide parents resources & incentives to parent their children’s online activity (including training to use resources they already have) & to provide children education in online safety & literacy. Decades ago, federal courts, citing commission findings & studies, recommended these alternatives as superior in effectiveness, in meeting the government’s duty to minimize impact on civil liberties, in the allocation of law enforcement resources, etc. In granting the permanent injunction against COPA, the judge wrote
Moreover, defendant contends that: (1) filters currently exist and, thus, cannot be considered a less restrictive alternative to COPA; and that (2) the private use of filters cannot be deemed a less restrictive alternative to COPA because it is not an alternative which the government can implement. These contentions have been squarely rejected by the Supreme Court in ruling upon the efficacy of the 1999 preliminary injunction by this court. The Supreme Court wrote:
Congress undoubtedly may act to encourage the use of filters. We have held that Congress can give strong incentives to schools and libraries to use them. It could also take steps to promote their development by industry, and their use by parents. It is incorrect, for that reason, to say that filters are part of the current regulatory status quo. The need for parental cooperation does not automatically disqualify a proposed less restrictive alternative. In enacting COPA, Congress said its goal was to prevent the “widespread availability of the Internet” from providing “opportunities for minors to access materials through the World Wide Web in a manner that can frustrate parental supervision or control.” COPA presumes that parents lack the ability, not the will, to monitor what their children see. By enacting programs to promote use of filtering software, Congress could give parents that ability without subjecting protected speech to severe penalties.
I also agree and conclude that in conjunction with the private use of filters, the government may promote and support their use by, for example, providing further education and training programs to parents and caregivers, giving incentives or mandates to ISP’s to provide filters to their subscribers, directing the developers of computer operating systems to provide filters and parental controls as a part of their products (Microsoft’s new operating system, Vista, now provides such features, see Finding of Fact 91), subsidizing the purchase of filters for those who cannot afford them, and by performing further studies and recommendations regarding filters.
Adult supervision, child education on online safety & literacy, and parental controls & filters are more effective at less expense to fundamental rights. Governments know this & conveniently forget it.


self-cleaning/pyrolytic oven on wheels


I find doing AI impressions an effective trolling technique: beep bip boop & some fun punctuation ‒−–—―…:.


Still unnecessary & less effective than less invasive alternatives that already exist & that the government could promote. To quote another comment:
Governments have commissioned enough studies to know that education, training, and parental controls filtering content at the receiving end are more effective & less infringing of civil rights than laws imposing restrictions & penalties on website operators to comply with online age verification. Laws could instead allocate resources to promote the former in a major way, set up independent evaluations reporting the effectiveness of child protection technologies to the public, and promote standards & the development of better standards in the industry. Laws of the latter kind simply aren’t needed & also suffer technical defects.
The most serious technical defect is that they lack enforceability against websites outside their jurisdiction. They’re also limited to HTTP (or its successor). They practically rule out dynamic content (chat, fora) for minors unless that content is dynamically prescreened. Parental control filters suffer none of these defects, and they don’t adversely impact privacy, fundamental rights, or law enforcement.
Governments know better & choose worse: it’s not about promoting the public good, it’s about imposing control.


AI companies are making a choice when they design unsafe platforms.
The right choice.
Technology to prevent this harm already exists: Anthropic’s Claude, for example, consistently tried to dissuade users from acts of violence.
That shit’s awfully condescending & paternalistic.
AI platforms are becoming a weapon for extremists and school shooters.
For deficient plans: AI gets shit wrong so often that we should probably encourage idiots to concoct their “foolproof” plans on it.
Demand AI companies put people’s safety ahead of profit.
Nah: thought isn’t action. Liberty means respecting others’ freedom to have “unsafe” thoughts. Someone else could pose the same questions to audit security weaknesses & prepare safety plans.
Moreover, all of this was already possible with a search engine & notes. Information alarmists can get fucked.


And? The word enshittification is not a great contribution to society.


immiserated and precaratized
dafuq?
Whereas the people who choose when and how to use AI — the centaurs
que?
The Reverse-Centaur’s Guide
A bit contrived?
Thanks for bringing us this extraterrestrial perspective, OP. Extraterrestrial voices matter! 🫡


Cling to semantics if you need to, but the spirit of what I said was true.
Is it? Doesn’t seem a valid argument.
Hitler embraced the construction of the autobahn. Therefore, the autobahn is evil.
Your argument operates the same way (the guilt-by-association fallacy). I agree bluesky “was always going to shit”, but for entirely different reasons, like repeating the same mistakes as twitter.
Maybe you could offer a more logical argument for your conclusion instead of dragging the discussion into irrationality?



and they’re taking it out on the trees.
What did the trees do to deserve that?


Are you referring to yourself by claiming your ignorance somehow matches legal expertise? Cool ad hominem, by the way: fallacies (including a strawman of the transformative use argument), blame-shifting when you can’t back claims with credible evidence, & self-indulgent vanity are the hallmarks of trolls. Way to out yourself, buddy. 😄


Don’t need to: their lawyers understood the law & have lawyered successfully so far.


Precedent means we can cite it, so yes, this helps a bit. The rest of what you wrote is a fair bit of assumption or unnecessary: evidence to back your points would help. Otherwise, it just looks like inconclusive defeatism.


Moby Dick
You could also try understanding the law:
§107. Limitations on exclusive rights: Fair use
Notwithstanding the provisions of sections 106 and 106A, the fair use of a copyrighted work, including such use by reproduction in copies or phonorecords or by any other means specified by that section, for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include-
- the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
- the nature of the copyrighted work;
- the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
- the effect of the use upon the potential market for or value of the copyrighted work.
with particular attention to factors 1 (especially transformation) & 4.
If that’s not for you, though, then you should definitely try that with a copyrighted work (Disney?) & report back on how that goes.