Tags: machinelearning

Thursday, December 18th, 2025

The Colonization of Confidence – Sightless Scribbles

I love the small web, the clean web. I hate tech bloat.

And LLMs are the ultimate bloat.

So much truth in one story:

They built a machine to gentrify the English language.

They have built a machine that weaponizes mediocrity and sells it as perfection.

They are strip-mining your confidence to sell you back a synthetic version of it.

Saturday, December 13th, 2025

Dissent | blarg

I suppose it’s not clear to me what a ‘good’ window into unreliable, systemically toxic systems accomplishes, or how it changes anything that matters for the better, or what that idea even means at all. I don’t understand how “ethical AI” isn’t just “clean coal” or “natural gas.” The power of normalization as four generations are raised breathing low doses of aerosolized neurotoxins; the alternative was called “unleaded”, but the poison was called “regular gas”.

There’s a real technology here, somewhere. Stochastic pattern recognition seems like a powerful tool for solving some problems. But solving a problem starts at the problem, not working backwards from the tools.

Thursday, December 11th, 2025

AI CEO – Replace Your Boss Before They Replace You

Delivering total nonsense, with complete confidence.

Tuesday, December 9th, 2025

Pluralistic: The Reverse-Centaur’s Guide to Criticizing AI (05 Dec 2025) – Pluralistic: Daily links from Cory Doctorow

The promise of AI – the promise AI companies make to investors – is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.

That’s it.

That’s the $13T growth story that Morgan Stanley is telling. It’s why big institutional investors are giving AI companies hundreds of billions of dollars. And because they are piling in, normies are also getting sucked in, risking their retirement savings and their family’s financial security.

Now, if AI could do your job, this would still be a problem. We’d have to figure out what to do with all these technologically unemployed people.

But AI can’t do your job. It can help you do your job, but that doesn’t mean it’s going to save anyone money.

Sunday, December 7th, 2025

The Jeopardy Phenomenon – Chris Coyier

AI has the Jeopardy Phenomenon too.

If you use it to generate code that is outside your expertise, you are likely to think it’s all well and good, especially if it seems to work at first pop. But if you’re intimately familiar with the technology or the code around the code it’s generating, there is a good chance you’ll be like hey! that’s not quite right!

Not just code. I’m astounded by the cognitive dissonance displayed by people who say “I asked an LLM about {topic I’m familiar with}, and here’s all the things it got wrong” who then proceed to say “It was really useful when I asked an LLM for advice on {topic I’m not familiar with, hence why I’m asking an LLM for advice}.”

Like, if you know that the results are super dodgy for your own area of expertise, why would you think they’d be any better for, I don’t know, restaurant recommendations in a city you’ve never been to?

Wednesday, December 3rd, 2025

The only winning move is not to play

My mind boggles at the thought of using a generative tool based on a large language model to do any kind of qualitative user research, so every single thing that Gregg says here makes complete sense to me.

Monday, December 1st, 2025

On not choosing nice versions of AI – This day’s portion

Whenever anyone states that “AI is the future, so…” or “many people are using AI anyway, so…” they are not only expressing an opinion — they’re shaping that future.

Thursday, November 27th, 2025

The line and the stream. — Ethan Marcotte

I’ve come to realize that statements about the future aren’t predictions: they’re more like spells. When someone describes something to you as the future, they’re sharing a heartfelt belief that this something will be part of whatever comes next. “Artificial intelligence isn’t going anywhere” quite literally involves casting a technology forward into time. How could that be anything else but a kind of magic?

Wednesday, November 19th, 2025

David Chisnall (*Now with 50% more sarcasm!*): “I think this needs to be repeated…”

Machine learning is amazing if … the value of a correct answer is much higher than the cost of an incorrect answer.

Related to Laissez-faire Cognitive Debt:

And that’s where I start to get really annoyed by a lot of the LLM hype. It’s pushing machine-learning approaches into places where there are significant harms for sometimes giving the wrong answer. And it’s doing so while trying to outsource the liability to the customers who are using these machines in ways in which they are advertised as working. It’s great for translation! Unless a mistranslated word could kill a business deal or start a war. It’s great for summarisation! Unless missing a key point could cost you a load of money. It’s great for writing code! Unless a security vulnerability would cost you lost revenue, or a copyright infringement lawsuit from having accidentally put something from the training set directly in your codebase in contravention of its license would kill your business. And so on. Lots of risks that are outsourced and liabilities that are passed directly to the user.

Laissez-faire Cognitive Debt – Smithery

I think of Cognitive Debt as ‘where we have the answers, but not the thinking that went into producing those answers’.

Lately, I have started noticing examples of not just where the debt is being accrued, but who then has the responsibility to pick it up and repay it.

Too often, an LLM doesn’t replace the need for thinking in a group setting, but simply creates more work for others.

Thursday, November 13th, 2025

Alchemy - Josh Collinsworth blog

I am interested in art—we are interested in art, in any and all of its forms—because humans made it. That’s the very thing that makes it interesting; the who, the how, and especially the why.

The existence of the work itself is only part of the point, and materializing an image out of thin air misses the point of art, in very much the same way that putting a football into a Waymo to drive it up and down the street for a few hours would be entirely missing the point of sports.

Wednesday, November 12th, 2025

Pink goo and stolen sandwiches | Frederic Marx, Front-End Developer

The generative AI industry only exists because some people decided that it’s okay for them to take all this work with no permission, let alone compensation for the original creators, and to charge others for the privilege of using the probabilistic plagiarism machines they’ve fed it to.

Tuesday, November 4th, 2025

cubic blog: The real problem with AI coding

Can you ship AI-generated code without creating a maintenance nightmare six months from now? Can you debug it when it breaks? Can you modify it when requirements change? Can you onboard new engineers to a codebase they didn’t write and the AI barely explained?

Most teams haven’t realized this shift yet. They’re optimizing for code generation speed while comprehension debt silently accumulates in their repos.

One team I talked to spent 3 days fixing what should have been a 2-hour problem. They had “saved” time by having AI generate the initial implementation. But when it broke, they lost 70 hours trying to understand code they had never built themselves.

That’s comprehension debt compounding. The time you save upfront gets charged back with interest later.

Tuesday, October 28th, 2025

Cryosleep

On the last day of UX London this year, I was sitting and chatting with Rachel Coldicutt who was going to be giving the closing keynote. Inevitably the topic of conversation worked its way ’round to “AI”. I remember Rachel having a good laugh when I summarised my overall feeling:

I kind of wish I could go into suspended animation and be woken up when all this is over and things have settled down one way or another.

I still feel that way. Like Gina, I’d welcome a measured approach to this technology. As Anil puts it:

Technologies like LLMs have utility, but the absurd way they’ve been over-hyped, the fact they’re being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.

I very much look forward to using language models (probably small and local) to automate genuinely tedious tasks. That’s a very different vision to what the slopagandists are pushing. Or, like Paul Ford says:

Make it boring. That’s what’s interesting.

Fortunately, my cryosleep-awakening probably isn’t too far off. You can smell it in the air, that whiff of a bubble about to burst. And while it will almost certainly be messy, it’s long overdue.

Paul Ford again:

I’ve felt so alienated from tech over the past couple of years. Part of it is the craven authoritarianism. It dampens the mood. But another part is the monolithic narrative—the fact that we live in a world where there seem to be only a few companies, only a few stories going at any time, and everything reduces to politics. God, please let it end.

ChatGPT’s Atlas: The Browser That’s Anti-Web - Anil Dash

I love the web, and this thing is bad for the web.

  1. Atlas substitutes its own AI-generated content for the web, but it looks like it’s showing you the web
  2. The user experience makes you guess what commands to type instead of clicking on links
  3. You’re the agent for the browser, it’s not being an agent for you

It’s very clear that a lot of the new AI era is about dismantling the web’s original design.

eurollm.io

A different world is possible. Here, for example, is an open-source large language model from Europe, designed to support the 24 official languages of the European Union.

I have no idea why their top level domain is for the British Indian Ocean Territory, soon to be no more. That doesn’t instil confidence.

Monday, October 27th, 2025

Measured AI | Note to Self

It’s creepy to tell people they’ll lose their jobs if they don’t use AI. It’s weird to assume AI critics hate progress and are resisting some inevitable future.

Sunday, October 26th, 2025

The AI Gold Rush Is Cover for a Class War

Under the guise of technological inevitability, companies are using the AI boom to rewrite the social contract — laying off employees, rehiring them at lower wages, intensifying workloads, and normalizing precarity. In short, these are political choices masquerading as technical necessities; AI is not the cause of the layoffs but their justification.

Tuesday, October 21st, 2025

Frank Chimero · Beyond the Machine

The transcript of a very thoughtful talk by Frank.

“AI is inevitable” is bullshit · Eric Eggert

LLMs are useful when you need a compromise between fast and good. You will never get a good outcome fast.

I’m afraid we are settling into a status of good enough when using “AI,” which is especially hurtful for accessibility.