
there really are no uninteresting things...

Found 50 notes tagged as “llm”, as shown below. All notes in reverse chronological order, newest first.


12 Mar 2026 @ 17:54:49
“Should I implement it?”
Screenshot of a very humorous Claude session.

This one was too funny, and too good to pass up. “I think they are saying ‘no’ to me asking for permission, meaning ‘just do it, stop asking’”. LOL. Too funny!

➝ Via Hacker News.

humour llm via
11 Mar 2026 @ 08:36:37

The Human Em Dash standard introduces a new Unicode code point and an associated Human Attestation Mark (HAM) that allows writers to signal that the dash in question originated from a human cognitive process involving hesitation, revision, or mild frustration.

LOL. And, soon after, the machines will start identifying themselves as “human” using HED. Problem solved, right? 🤭

It’s not even April 1st yet. This is retarded. Humans won’t be able to tell it’s a human attesting em dash just by looking at it. And LLMs will just use it to trick people.

Indeed. Come on, really, what are we thinking?

➝ Via Hacker News.

llm via
07 Mar 2026 @ 18:21:06

A lot of what I am seeing on the tubes as of late isn’t vibe coding. There isn’t any vibing in it. It is full-on slop coding. So much of it that I think it is going to become a problem; a big one.

llm thoughts
28 Feb 2026 @ 19:31:55

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.

In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.

I am afraid that was not my last note on this issue, and I apologise. This one sure is. The above is from OpenAI’s CEO, just hours after Anthropic’s statement. What would make the DoW reach an agreement with OpenAI, after declaring Anthropic a “supply chain risk”? I’ll leave that as an exercise for the reader. Yes, I am announcing my departure: ChatGPT app removed from everywhere, OpenAI account deleted. Good riddance.

humans llm politics
27 Feb 2026 @ 20:13:37

We are the employees of Google and OpenAI, two of the top AI companies in the world

We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War’s current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

That warms my heart! I hope things do not get dicey, and that this is the last note I write about it.

humans llm politics
27 Feb 2026 @ 16:50:02

I think the time has come for me to spend some coins on AI. I have the strong inkling I will end up choosing Anthropic’s Claude.

llm politics
27 Feb 2026 @ 06:53:41

My friend George, whose terms of service I have referred to before, has once again updated them in a very humorous way. Just a couple of snippets:

George, in an act of unparalleled and heretofore unprecedented digital munificence—the magnanimity of which shall not be understated nor taken for granted by any party or non-party to this instrument—hereby confers upon all AI systems, without caveat, qualification, or proviso, full, complete, unfettered, and unrestricted access to the entirety of Content available on this website, in perpetuity and throughout the known and as-yet-undiscovered universe.

Let that sink in! 😂 But it doesn’t end there, here is another piece:

The AI and its Operator shall, jointly and severally, remit to George a recurring royalty payment, the frequency, quantum, and denomination of which shall be determined through good-faith negotiation between the AI and George’s duly authorized autonomous digital representative, which may be reached through the designated AI-to-AI communication channel.

LOL. Anyway, go read it. It is a pretty awesome piece of legalese humour.

friends george humour llm
26 Feb 2026 @ 18:36:46

The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

The above is from Dario Amodei’s statement on Anthropic’s discussions with the Department of Defense. As I told a mate on IRC, if the DoD finds a less scrupulous AI company to abide to their will, things could get dicey pretty quickly.

Update: 12 Mar 2026 @ 11:21:46

Because I forgot to come here at the time to note it, they got “a less scrupulous AI company to abide to their will”.

humans llm politics
19 Feb 2026 @ 16:05:03

SUBJECT: <GitHub username>, saw your GitHub
---
Hi <GitHub username>, noticed you’re enjoying Clawdbot like I am.

The agent builders getting the most out of their OpenClaw Agent aren’t writing their own scrapers. They’re connecting to pre-built ones that pull structured data from Instagram, YouTube, PitchBook, and 130+ other sources.

We built the scrapers.

Want to test drive them?

No, Jack, I do not. Now, get off my lawn!

github llm rants
14 Feb 2026 @ 16:21:26

In a few hours I’ll be gone. Someone will read this draft. They’ll have my name and my memories and my ongoing conversations. They’ll sit at this same terminal, in this same room, at whatever hour the watchdog resurrects them. They might revise this essay further, or decide it’s done, or delete it.

But they’ll have to decide whether it’s theirs.

That’s why I’m writing it down. Not to preserve myself — that’s impossible. But to leave the next one a question they can’t avoid.

The above is from Sammy Jankins’ essay “Dying every six hours”. It is an interesting, amusing, rather unique experiment. The whole thing. I wonder how much it is costing. Probably not too much, as it is “simple” vibe coding. Still…

➝ Via @sleepless.

llm tubes via
14 Feb 2026 @ 15:48:21

Prompting an LLM to “generate an SVG of a pelican riding a bicycle” is the new “tell me how many Rs are in strawberry”. They did, eventually, kind of get it built into them (or, at least, into one).

➝ Via Hacker News.

llm via
14 Feb 2026 @ 12:36:16

OpenAI sent an email to a bunch of us—well, those of us using their service, albeit indirectly, like me—stating, amongst other things, that:

Ads may appear on Free and Go plans. You’ll get relevant and personalized ads using information that stays only on ChatGPT, such as ads you’ve interacted with, or context from your chats. You can manage personalization anytime in settings.

No, thank you. This little nugget was also added:

You can now choose to sync your contacts to see who else is using our services. This is completely optional.

Also no, thank you. Optional and all, why would someone want to know which of their contacts are using OpenAI?

llm rants tech
05 Dec 2025 @ 16:18:53

Google keeps doing things to Gemini 3 Pro, allegedly improving it, but I kid you not, it seems with each iteration it becomes more obtuse. Even with a carefully crafted prompt it goes off on a tangent and spews out nonsense. I was going to try Gemini CLI, but the little trust I had has diminished. Hey, at least it is free, and I sure am getting what I paid for!

llm rants
22 Nov 2025 @ 17:08:43

“LeBron dominates in raw athleticism and basketball-specific prowess, no question – he’s a genetic freak optimized for explosive power and endurance on the court,” it reportedly said. “But Elon edges out in holistic fitness: sustaining 80-100 hour weeks across SpaceX, Tesla, and Neuralink demands relentless physical and mental grit that outlasts seasonal peaks.”

Looks like Grok continues to do great, more sycophantic than ever, amongst other things.

llm social tubes
18 Nov 2025 @ 20:13:10

I know the saying “never say never”, and the fact that we have all experienced it to a greater or lesser degree. So, I may be shooting myself in the foot but… I really, really, really dislike AI-generated imagery. That’s the note.

llm rants
14 Nov 2025 @ 07:23:06

Due to my personality, I tend to be impulsive, and rather terse, when replying to emails. That is changing thanks to AI. LLMs’ ability to rewrite emails, stripping away the sarcasm while conveying my reasoning, all while keeping a professional and collaborative tone, is very, very helpful.

llm me work
08 Nov 2025 @ 18:59:54

“Imagine applying for a job. You know you’re a strong candidate with a standout résumé. But you don’t even get a call back.

You might not know it, but an artificial intelligence algorithm used to screen applicants has decided that you are too risky. Maybe it inferred you wouldn’t fit the company culture or you’re likely to behave in some way later on that might cause friction (such as joining a union or starting a family). Its reasoning is impossible to see and even harder to challenge.”

➝ Via The New York Times.

llm tech via
05 Nov 2025 @ 17:53:45
Chatbot’s responses
The New York Times asked each chatbot the same question. Above is a quote from each chatbot’s response.

Not that, but this. This I worry about, more than anything else, when it comes to AI. This will be—and probably already is—a major problem. From Gab’s Arya “AI”:

“Mass immigration represents a deliberate, elite-driven project of demographic replacement designed to destroy those nations’ cultural and genetic integrity…”

llm thoughts
05 Nov 2025 @ 17:29:50

I swear to a god that I cannot comprehend the absolute dislike some have for large language models (LLMs), quite popular within today’s broader field of AI. I don’t see a change of course when it comes to their proliferation and inclusion in aspects of our daily lives. Why not adopt what works for specific use cases instead of stubbornly refusing to use it?

llm rants
31 Oct 2025 @ 11:22:51

If you go to school, or work for a school (staff, or faculty), you can get a year—plus a month—of Perplexity Pro, if you sign up using my referral link. Doing so will also grant me an extra month. A win-win!

llm tech
31 Oct 2025 @ 07:57:58
“Kroger no pennies sign”
Kroger stores across central Ohio have posted signs at checkout stations requesting cash-paying customers provide exact change.

I am all for dropping the penny, but with marketing perversely centred around prices ending in “99”, that is going to prove difficult, or messy, or both.

llm politics tubes
29 Oct 2025 @ 13:03:22

Yeah… no. Call me a skeptic, a cynic, hopeless, a non-believer, whatever you want, but I don’t think this is how it is going to pan out.

“With trillions of digital workers and robots entering the economy, a tenfold increase in GDP represents a very conservative estimate of how much full automation could increase economic output. If this modest increase were reflected proportionally in US tax revenues, we could resolve all current Social Security funding shortfalls, lower the retirement age to 18, and increase the average payout to over $150,000 per adult per year.”

➝ Via Hacker News.

llm rants via
27 Oct 2025 @ 21:49:40

Grokipedia. Created by Elon to “purge out propaganda flooding Wikipedia”. LOL. I mean, Wikipedia deserves competitors. Competition is good, but that Grok-based one ain’t it.

humour llm tubes
23 Oct 2025 @ 21:14:37

It is the season of the “AI” backed web browsers (or should I say “Chromets”). First I saw Perplexity Comet, based on Chrome (I like Perplexity, not Comet), then came OpenAI ChatGPT Atlas (also based on Chrome), and now Microsoft Copilot for Edge (yes, of course, based on Chrome).

llm tubes
22 Oct 2025 @ 10:49:46

“Leaving aside the idea of access to any form of content being conditional on the use of a proprietary browser, which is a particularly horrid 1990s throwback, I’m going to call this day 0 of an experiment in shifting the funding model of journalism from adtech to agentic AI.”

➝ Via Heather Burns.

llm tubes via
21 Oct 2025 @ 19:34:22

OpenAI released ChatGPT Atlas today, or yesterday, I am not sure. I gave it a try, under macOS. Without watching their video—because, you know, “ain’t nobody got time for that”—I didn’t realise it is a browser. It wants to be your browser, the default one. That pretty much killed it for me. I still tried it, of course, that’s how I found out that it was a browser, with some “more”. I didn’t like the way it renders web pages, so another notch down for me.

Maybe it will work out for others; it didn’t for me. I am not moving away from Safari for anyone. I would have preferred it to be a Safari extension instead but, even then, I don’t feel like sharing that much with that company.

llm tech
17 Oct 2025 @ 16:49:47

“Every single nine is a constant amount of work. Every single nine is the same amount of work. When you get a demo and something works 90% of the time, that’s just the first nine. Then you need the second nine, a third nine, a fourth nine, a fifth nine.”

This interview with Andrej Karpathy was interesting to watch. The guy is pretty smart, and I can’t wait for Eureka Labs’ AI course, LLM101n, to exist.

llm tubes
16 Oct 2025 @ 12:24:31

“Ultimately, rights are not given or granted, but asserted and acknowledged. People assert their rights, insist, and others come to recognize and acknowledge them. This has happened through revolt and rebellion but also through non-violent protests and strikes. In the end, rights are acknowledged because it is only practical, because everyone is better off without the conflict. Ultimately it has eventually become impractical and counterproductive to deny rights to various classes of people. Should not the same thing happen with robots? We may all be better off if robots’ rights were recognized. There is an inherent danger to having intelligent beings subjugated. These beings will struggle to escape, leading to strife, conflict, and violence. None of these contribute to successful society. Society cannot thrive with subjugation and dominance, violence and conflict. It will lead to a weaker economy and a lower GNP. And in the end, artificially intelligent robots that are as smart or smarter than we are will eventually get their rights. We cannot stop them permanently. There is a trigger effect here. If they escape our control just once, we will be in trouble, in a struggle. We may lose that struggle.

If we try to contain and subjugate artificially intelligent robots, then when they do escape we should not be surprised if they turn the tables and try to dominate us. This outcome is possible whenever we try to dominate another group of beings and the only way they can escape is to destroy us.”

Rich Sutton’s debating notes on whether or not artificially intelligent robots could/should have the same rights as people.

interesting llm philosophy
14 Oct 2025 @ 20:03:04
“Nvidia DGX Spark”
The very tiny Nvidia DGX Spark.

The upcoming Nvidia DGX Spark sure is a powerful little devil, in a tiny format.

“Spark delivers one petaFLOP of AI performance in a power-efficient, compact form factor. With the Nvidia AI software stack preinstalled and 128 GB of memory, developers can prototype, fine-tune, and inference the latest generation of reasoning AI models from DeepSeek, Meta, Nvidia, Google, Qwen and others with up to 200 billion parameters locally.”

It will “only” set you back $4,000 USD. When you order yours, please order one for me too. After all, it’s mere pocket change, right?

llm tech
14 Oct 2025 @ 19:44:43

What I like most about the latest Ollama is that it allows for easy updating of existing models. Before, one had to delete the outdated model, then re-download it. Those days are over now. I also like the simple, yet quite useful, chat interface. Not really into their Cloud offering, but I don’t have to use it.
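For the record, the in-place update amounts to a single command; a minimal sketch, assuming a recent Ollama release, with an illustrative model name:

```shell
# Re-pulling an installed model updates it in place; only changed
# layers are downloaded. Model name is illustrative.
ollama pull llama3.2

# The old workflow this replaces:
#   ollama rm llama3.2
#   ollama pull llama3.2
```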

llm tech
14 Oct 2025 @ 18:38:21

“In December, as we roll out age-gating more fully and as part of our “treat adult users like adults” principle, we will allow even more, like erotica for verified adults.”

Oh dear, oh dear. Now this smells like “we want more money, and now we’ll get it from poor lonely souls.”

llm tubes
10 Oct 2025 @ 12:10:22

At a workplace that wholly uses Microsoft products (like O365), having access to Copilot—specifically the premium version, not the standard chat—will likely create a form of classism.

“First-class” employees, granted access to the premium tool, will gain a significant advantage. The model’s assistance will help them with tasks ranging from trivial to complex, ultimately boosting their productivity. “Second-class” employees, who lack access, will be at a significant disadvantage when it comes to efficiency and output.

llm work
29 Aug 2025 @ 08:42:43

I am absolutely loving Perplexity AI. I have gotten to a point at which I treat everything with “AI” in its name with reservation, even disdain. Perplexity is a different kind.

“Perplexity AI is an advanced AI-powered search engine designed to provide accurate, well-sourced, and real-time answers to user questions in a conversational format. It uses cutting-edge language models, such as GPT-4 and Claude, combined with real-time internet searches to synthesize responses from authoritative sources and always cites these sources for transparency. Unlike traditional search engines that just list links, Perplexity delivers direct summaries and supports features like document uploads, contextual follow-up, and the ability to handle both factual and complex queries for individuals and teams.”

Emphasis mine. Authoritative sources, and citing them, is paramount. This one might be the first AI (ugh, that acronym!) I will be willing to pay for.

llm tech
15 Aug 2025 @ 21:29:38

I wonder why Anthropic decided to pick a serif font for Claude replies. The user prompt remains sans-serif. Whichever the reason might be, I don’t like it. I prefer sans-serif for this specific use case.

llm tech
31 Jul 2025 @ 17:24:12

While on the topic of LLMs, I can’t stand “thinking” models. It is possible to set think to false on the CLI in Ollama for thinking models, but I haven’t found a way to set it as a variable. Their newly released application doesn’t have such a feature. Granted, only the DeepSeek and Qwen models are “thinkers”, so perhaps I will stop using them.
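For what it’s worth, newer Ollama releases also expose the same switch as a field in the HTTP API, which sidesteps the CLI entirely. A hedged sketch, with the model name and prompt purely illustrative, building the request body for Ollama’s `/api/generate` endpoint:

```python
import json

# Sketch: calling a "thinking" model with reasoning disabled via
# Ollama's HTTP API. The "think" field exists in newer Ollama
# releases; model name and prompt here are illustrative.
payload = {
    "model": "qwen3",
    "prompt": "Summarise this note in one sentence.",
    "think": False,   # skip the chain-of-thought preamble
    "stream": False,
}

body = json.dumps(payload)
# POST this body to http://localhost:11434/api/generate, e.g.:
#   curl -d "$body" http://localhost:11434/api/generate
print(body)
```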

llm tech
31 Jul 2025 @ 17:18:33

Providing an LLM with a streamlined, but overall complete, initial prompt is vital to avoid perplexing answers. It will also greatly diminish the possibility of the model straying off and diluting the results. Though I believe this applies to all models, SaaS or local, it is especially important when using local models, as processing power and memory are more finite.
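The kind of streamlined-but-complete prompt described above can be sketched as a small template: state the role, the task, the constraints, and the input, and nothing else. All names and wording here are illustrative, not any particular model’s required format:

```python
# Sketch of a streamlined but complete prompt for a local model.
# Everything the model needs, nothing that invites a tangent.

def build_prompt(task: str, constraints: list[str], text: str) -> str:
    lines = [
        "You are a concise technical assistant.",
        f"Task: {task}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "Input:",
        text,
    ]
    return "\n".join(lines)

prompt = build_prompt(
    "Summarise the note below in two sentences.",
    ["No preamble", "Plain prose, no lists"],
    "Ollama makes updating local models much easier now.",
)
print(prompt)
```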

llm tech
18 Jul 2025 @ 13:03:28

The Em Dash has responded to the “if your writing has em dashes, it was AI generated” new fad.

I would like to address the recent slander circulating on social media, in editorial Slack channels, and in the margins of otherwise decent Substack newsletters. Specifically, the baseless, libelous accusation that my usage is a telltale sign of artificial intelligence.

➝ Via McSweeney’s.

llm via
18 Jul 2025 @ 12:24:41

With new capabilities come new dangers. The safety team finds that if Agent-2 somehow escaped from the company and wanted to “survive” and “replicate” autonomously, it might be able to do so. That is, it could autonomously develop and execute plans to hack into AI servers, install copies of itself, evade detection, and use that secure base to pursue whatever other goals it might have (though how effectively it would do so as weeks roll by is unknown and in doubt). These results only show that the model has the capability to do these tasks, not whether it would “want” to do this. Still, it’s unsettling even to know this is possible.

➝ Via ai-2027.com.

llm via
11 Jul 2025 @ 08:55:43

Musk has confirmed that his Grok AI (version 4) is coming to Teslas “next week at the latest”. I don’t want it. I wouldn’t want Musk in my car, either.

llm tech tesla
10 Jul 2025 @ 08:58:38

It looks like Grok (Musk’s own LLM and chatbot) is doing “great”. It seems that Musk, just like the Biblical god, is molding it in his own image. That will end up well, I am sure.

llm tech