

The typical pattern for leaders is to get “second opinions” from advisors who tell them whatever they want to hear, so… maybe asking the equivalent of a magic 8 ball is a marginal improvement?
“Researchers in the field sometimes describe our goal as to pass the ‘Visual Turing Test,’” said Suyeon Choi […] “A visual Turing Test then means, ideally, one cannot distinguish between a physical, real thing as seen through the glasses and a digitally created image being projected on the display surface,” Choi said.
So they just came up with a needlessly opaque synonym for “verisimilitude”.
Doom Quixote.
As a 50-something, I can see the case for putting the “golden age” of the internet between the birth of Wikipedia in 2001 and Facebook in 2006.
I think it does accurately model the part of the brain that forms predictions from observations—including predictions about what a speaker is going to say next, which lets human listeners focus on the surprising/informative parts. But with LLMs they just keep feeding it its own output as if it were a third party whose next words it’s trying to predict.
It’s like a child describing an imaginary friend, if you keep repeating “And what would your friend say after that?”
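The feedback loop described above can be sketched in a few lines. This is a toy illustration, not a real LLM: the hypothetical bigram table stands in for a trained predictive model, but the structure is the same idea—each predicted token is appended to the context and fed back in as if it were someone else's words to continue.

```python
# Hypothetical next-token table standing in for a trained model.
BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "down",
}

def predict_next(context):
    """Predict the next token from the last token of the context."""
    return BIGRAMS.get(context[-1], "<eos>")

def generate(prompt, max_tokens=5):
    """Autoregressive loop: the model's own output becomes its next input."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = predict_next(tokens)
        if nxt == "<eos>":
            break
        tokens.append(nxt)  # feed the prediction back in as context
    return tokens

print(generate(["the"]))
```

The loop never distinguishes between the original prompt and its own prior predictions—which is exactly the “and what would your friend say after that?” dynamic.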
IMO the focus should have always been on the potential for AI to produce copyright-violating output, not on the method of training.
Why would the article’s credited authors pass up the chance to improve their own health status and health satisfaction?
Critical paragraph:
Our research highlights the importance of Germany’s unique institutional context, characterized by strong labor protections, extensive union representation, and comprehensive employment legislation. These factors, combined with Germany’s gradual adoption of AI technologies, create an environment where AI is more likely to complement rather than displace worker skills, mitigating some of the negative labor market effects observed in countries like the US.
That makes sense—being raised by ChatGPT might be marginally better than being raised by Sam Altman.
How does that compare to the growth in size of the overall code base?
I assume it’s because it reduces the possibility of other processes outside of the linked containers accessing the files (so security and stability).
Here’s a list of WP’s templates for adding social media links to articles—looks like they have one for Mastodon.
https://en.wikipedia.org/wiki/Category:Social_media_external_link_templates
CasaOS is not an operating system; it’s more like a GUI for Docker.
So it’s more like Portainer?
The current version of Affinity is great and will continue to work forever—there’s no need to switch to an alternative if you’re already using it. I just don’t have much hope for its future development.
I guess technically, Raspbian.
The Affinity Suite is great, but I’m suspicious of its acquisition by Canva—I’m afraid their solution to “bringing the suite to Linux” will be turning it into a web service.
One metric you might want to add is the network effect: how much of a difference does it make to the user experience to join a large instance (or the same instance most of your friends are on) compared to a small or self-hosted one? (Or in other words—does the nature of the platform software potentially incentivize consolidation?)
Ok—to the extent that SVG counts as HTML, the variant of HTML it belongs to is a flavor of XML.
I think the main takeaway is that these models are fundamentally inconsistent, and you can never assume they’re going to be reliable based on past performance.