Notes - LLM


“there really are no uninteresting things...”

All notes tagged as "llm", in reverse chronological order.

29 Aug 2025 @ 08:42:43 #llm #tech

I am absolutely loving Perplexity AI. I have gotten to a point where I treat everything with “AI” in its name with reservation, even disdain. Perplexity is of a different kind.

“Perplexity AI is an advanced AI-powered search engine designed to provide accurate, well-sourced, and real-time answers to user questions in a conversational format. It uses cutting-edge language models, such as GPT-4 and Claude, combined with real-time internet searches to synthesize responses from *authoritative sources* and *always cites these sources* for transparency. Unlike traditional search engines that just list links, Perplexity delivers direct summaries and supports features like document uploads, contextual follow-up, and the ability to handle both factual and complex queries for individuals and teams.”

Emphasis mine. Authoritative sources, and citing them, are paramount. This one might be the first AI (ugh, that acronym!) I will be willing to pay for.

15 Aug 2025 @ 21:29:38 #llm #tech

I wonder why Anthropic decided to pick a serif font for Claude’s replies. The user prompt remains sans-serif. Whatever the reason might be, I don’t like it. I prefer sans-serif for this specific use case.

31 Jul 2025 @ 17:24:12 #llm #tech

While on the topic of LLMs, I can’t stand “thinking” models. It is possible to set `think` to `false` on the Ollama CLI for thinking models, but I haven’t found a way to set it as a variable. Their newly released application doesn’t have such a feature. Granted, only DeepSeek and Qwen models are “thinkers”, so perhaps I will just stop using them.
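
The same toggle appears to be exposed on Ollama’s HTTP API, request by request. A minimal sketch, assuming a recent Ollama release that accepts the `think` field and a local server on the default port; the model name is only an example:

```python
# Sketch: ask Ollama to skip the "thinking" phase for a single request.
# Assumes a local Ollama server on its default port and a release recent
# enough to accept the "think" field (the same switch the CLI exposes).
import requests

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "qwen3",  # example "thinking" model pulled locally
        "messages": [
            {"role": "user", "content": "Summarise RFC 2119 in two sentences."}
        ],
        "think": False,    # skip the reasoning trace entirely
        "stream": False,
    },
    timeout=120,
)
print(response.json()["message"]["content"])
```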

31 Jul 2025 @ 17:18:33 #llm #tech

Providing an LLM with a streamlined but complete initial prompt is vital to avoid perplexing answers. It also greatly reduces the chance of the model straying off and diluting the results. Though I believe this applies to all models, SaaS or local, it is especially important with local models, where processing power and memory are more limited.
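
As an illustration of the kind of streamlined but complete initial prompt I mean, here is a rough sketch against a local Ollama model; the model name, the role, and the prompt text are all placeholders:

```python
# Sketch: a compact but complete initial prompt for a local model.
# Everything the model needs (role, constraints, output shape) is stated
# up front, leaving it less room to stray and dilute the results.
import requests

system_prompt = (
    "You are a technical copy editor. "
    "Work only on the text provided and do not invent facts. "
    "Return the corrected text, then a bulleted list of the changes made."
)

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2",  # placeholder; any locally pulled model works
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Their going to the store tomorrow, probably."},
        ],
        "stream": False,
    },
    timeout=120,
)
print(response.json()["message"]["content"])
```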

18 Jul 2025 @ 13:03:28 #llm #via

The Em Dash has responded to the new “if your writing has em dashes, it was AI-generated” fad.

I would like to address the recent slander circulating on social media, in editorial Slack channels, and in the margins of otherwise decent Substack newsletters. Specifically, the baseless, libelous accusation that my usage is a telltale sign of artificial intelligence.

➝ Via McSweeney’s.

18 Jul 2025 @ 12:24:41 #llm #via

With new capabilities come new dangers. The safety team finds that if Agent-2 somehow escaped from the company and wanted to “survive” and “replicate” autonomously, it might be able to do so. That is, it could autonomously develop and execute plans to hack into AI servers, install copies of itself, evade detection, and use that secure base to pursue whatever other goals it might have (though how effectively it would do so as weeks roll by is unknown and in doubt). These results only show that the model has the capability to do these tasks, not whether it would “want” to do this. Still, it’s unsettling even to know this is possible.

➝ Via ai-2027.com.

11 Jul 2025 @ 08:55:43 #llm #tech

Musk has confirmed that his Grok AI (version 4) is coming to Teslas “next week at the latest”. I don’t want it. I wouldn’t want Musk in my car either.

10 Jul 2025 @ 08:58:38 #llm #tech

It looks like Grok (Musk’s own LLM and chatbot) is doing “great”. It seems that Musk, just like the Biblical god, is molding it in his own image. That will end well, I am sure.

07 Jul 2025 @ 12:52:26 #llm #me

Apparently using em dashes is proof that a tool (AI, for example) was used to write things instead. As someone who likes to use en dashes, em dashes, and plain dashes, and who uses macOS/iOS, I find that simply amusing. Pff, as if!

To create these notes I use Hugo and Markdown, so generating those dashes is as easy as typing `-`, `--`, and `---`.
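
For the curious, a short example of the source text, assuming Hugo’s default Goldmark typographer settings (which turn `--` into an en dash and `---` into an em dash while leaving a single `-` as a hyphen):

```markdown
A hyphenated-word stays as typed.
Pages 10--12 render with an en dash.
An aside---like this one---renders with an em dash.
```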