c.im is one of the many independent Mastodon servers you can use to participate in the fediverse.
C.IM is a general, mainly English-speaking Mastodon instance.

#Hallucinations

6 posts · 5 participants · 0 posts today
arabiccola<p>Good one.</p><p><a href="https://mastodon.social/tags/postgresql" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>postgresql</span></a> <a href="https://mastodon.social/tags/postgres" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>postgres</span></a> <a href="https://mastodon.social/tags/immutable" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>immutable</span></a> <a href="https://mastodon.social/tags/constraints" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>constraints</span></a> <a href="https://mastodon.social/tags/hallucination" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hallucination</span></a> <a href="https://mastodon.social/tags/hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hallucinations</span></a> <a href="https://mastodon.social/tags/claudeai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>claudeai</span></a> <a href="https://mastodon.social/tags/sonnet37" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>sonnet37</span></a></p>
Piotr Nowak<p>Shining a Light on AI Hallucinations</p><p><a href="https://cacm.acm.org/news/shining-a-light-on-ai-hallucinations/" target="_blank" rel="nofollow noopener noreferrer" translate="no"><span class="invisible">https://</span><span class="ellipsis">cacm.acm.org/news/shining-a-li</span><span class="invisible">ght-on-ai-hallucinations/</span></a></p><p><a href="https://c.im/tags/ai" class="mention hashtag" rel="tag">#<span>ai</span></a> <a href="https://c.im/tags/llm" class="mention hashtag" rel="tag">#<span>llm</span></a> <a href="https://c.im/tags/hallucinations" class="mention hashtag" rel="tag">#<span>hallucinations</span></a></p>
Calishat<p><a href="https://researchbuzz.masto.host/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://researchbuzz.masto.host/tags/misinformation" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>misinformation</span></a> <a href="https://researchbuzz.masto.host/tags/hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hallucinations</span></a> </p><p>"Despite billions in research investment, AI factuality remains largely unsolved. According to the report, even the most advanced models from OpenAI and Anthropic 'correctly answered less than half of the questions' on new benchmarks like SimpleQA, a collection of straightforward questions."</p><p><a href="https://www.searchenginejournal.com/ai-researchers-warn-hallucinations-persist-in-leading-ai-models/543290/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">searchenginejournal.com/ai-res</span><span class="invisible">earchers-warn-hallucinations-persist-in-leading-ai-models/543290/</span></a></p>
Julieñ in the Sky<p>Looking north... Noticed some strange lights in the sky !?</p><p><a href="https://mastodon.art/tags/ufo" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ufo</span></a> ?<br><a href="https://mastodon.art/tags/northernlights" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>northernlights</span></a> ?<br><a href="https://mastodon.art/tags/hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hallucinations</span></a> ?</p><p>In awe, I wonder...</p>
Miguel Afonso Caetano<p>"Why do language models sometimes hallucinate—that is, make up information? At a basic level, language model training incentivizes hallucination: models are always supposed to give a guess for the next word. Viewed this way, the major challenge is how to get models to not hallucinate. Models like Claude have relatively successful (though imperfect) anti-hallucination training; they will often refuse to answer a question if they don’t know the answer, rather than speculate. We wanted to understand how this works.</p><p>It turns out that, in Claude, refusal to answer is the default behavior: we find a circuit that is "on" by default and that causes the model to state that it has insufficient information to answer any given question. However, when the model is asked about something it knows well—say, the basketball player Michael Jordan—a competing feature representing "known entities" activates and inhibits this default circuit (see also this recent paper for related findings). This allows Claude to answer the question when it knows the answer. In contrast, when asked about an unknown entity ("Michael Batkin"), it declines to answer.</p><p>Sometimes, this sort of “misfire” of the “known answer” circuit happens naturally, without us intervening, resulting in a hallucination. In our paper, we show that such misfires can occur when Claude recognizes a name but doesn't know anything else about that person. In cases like this, the “known entity” feature might still activate, and then suppress the default "don't know" feature—in this case incorrectly. 
Once the model has decided that it needs to answer the question, it proceeds to confabulate: to generate a plausible—but unfortunately untrue—response."</p><p><a href="https://www.anthropic.com/research/tracing-thoughts-language-model" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">anthropic.com/research/tracing</span><span class="invisible">-thoughts-language-model</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> <a href="https://tldr.nettime.org/tags/Anthropic" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Anthropic</span></a> <a href="https://tldr.nettime.org/tags/Claude" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Claude</span></a> <a href="https://tldr.nettime.org/tags/Hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Hallucinations</span></a></p>
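The refusal/known-entity interaction quoted above can be illustrated with a toy sketch. This is not Anthropic's actual circuitry (their analysis operates on learned features inside Claude); the function, entity names, and data structures here are hypothetical, chosen only to show the described logic: refusal is the default, a "known entity" signal inhibits it, and a misfire (recognition without actual knowledge) produces a confabulated answer.

```python
# Toy illustration (assumed structure, not Anthropic's real mechanism) of the
# default-refusal circuit and its inhibition by a "known entity" feature.

def answer(entity: str, recognized: set[str], known_facts: dict[str, str]) -> str:
    refuse = True                      # "can't answer" circuit is on by default
    if entity in recognized:           # "known entity" feature activates...
        refuse = False                 # ...and inhibits the refusal circuit
    if refuse:
        return "I don't have enough information to answer."
    # Misfire case: the refusal circuit was suppressed because the name is
    # recognized, but no facts exist, so the model confabulates anyway.
    return known_facts.get(entity, f"<plausible but made-up claim about {entity}>")

recognized = {"Michael Jordan", "Andrej Karpathy"}
facts = {"Michael Jordan": "Michael Jordan played basketball for the Chicago Bulls."}

print(answer("Michael Batkin", recognized, facts))   # unknown entity -> refusal
print(answer("Michael Jordan", recognized, facts))   # known entity -> correct answer
print(answer("Andrej Karpathy", recognized, facts))  # recognized, no facts -> confabulation
```

The third call is the "misfire": recognition alone suppresses the refusal default, so the function returns a fabricated claim rather than declining.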
Miguel Afonso Caetano<p>"Anthropic's research found that artificially increasing the neurons' weights in the "known answer" feature could force Claude to confidently hallucinate information about completely made-up athletes like "Michael Batkin." That kind of result leads the researchers to suggest that "at least some" of Claude's hallucinations are related to a "misfire" of the circuit inhibiting that "can't answer" pathway—that is, situations where the "known entity" feature (or others like it) is activated even when the token isn't actually well-represented in the training data.</p><p>Unfortunately, Claude's modeling of what it knows and doesn't know isn't always particularly fine-grained or cut and dried. In another example, researchers note that asking Claude to name a paper written by AI researcher Andrej Karpathy causes the model to confabulate the plausible-sounding but completely made-up paper title "ImageNet Classification with Deep Convolutional Neural Networks." Asking the same question about Anthropic mathematician Josh Batson, on the other hand, causes Claude to respond that it "cannot confidently name a specific paper... 
without verifying the information.""</p><p><a href="https://arstechnica.com/ai/2025/03/why-do-llms-make-stuff-up-new-research-peers-under-the-hood/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">arstechnica.com/ai/2025/03/why</span><span class="invisible">-do-llms-make-stuff-up-new-research-peers-under-the-hood/</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/LLMs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLMs</span></a> <a href="https://tldr.nettime.org/tags/Chatbots" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Chatbots</span></a> <a href="https://tldr.nettime.org/tags/Hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Hallucinations</span></a></p>
I wonder...<p><span class="h-card" translate="no"><a href="https://im-in.space/@Chip_Unicorn" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>Chip_Unicorn</span></a></span> <span class="h-card" translate="no"><a href="https://hachyderm.io/@inthehands" class="u-url mention" rel="nofollow noopener noreferrer" target="_blank">@<span>inthehands</span></a></span> </p><p>tbh, it's hella work to refactor in the /same/ language. </p><p>As long as <a href="https://lgbtqia.space/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> continues to routinely make up wrong answers (cutely referred to as <a href="https://lgbtqia.space/tags/hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hallucinations</span></a> because <a href="https://lgbtqia.space/tags/lies" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>lies</span></a> involve intention, which AI lacks) via autocomplete, it is both foolish and irresponsible to use it for any serious work.</p><p>It's not even cost-effective, as 40 hours of AI is orders of magnitude more costly than 40 hours of a team of developers.</p>
Don Curren 🇨🇦🇺🇦<p>“Hallucinations aren’t incidental to <a class="hashtag" href="https://bsky.app/search?q=%23LLMs%E2%80%94they%E2%80%99re" rel="nofollow noopener noreferrer" target="_blank">#LLMs—they’re</a> inherent. <a class="hashtag" href="https://bsky.app/search?q=%23Hallucinations" rel="nofollow noopener noreferrer" target="_blank">#Hallucinations</a> aren’t contingent bugs to be ironed out in some future iteration, given “better” data (but from where?): They’re an ontological, or necessary, feature of the <a class="hashtag" href="https://bsky.app/search?q=%23tech" rel="nofollow noopener noreferrer" target="_blank">#tech</a> involved” <a href="https://open.substack.com/pub/victorshammas/p/the-ai-hype-bubble?r=b8e6&amp;utm_medium=ios" rel="nofollow noopener noreferrer" target="_blank">open.substack.com/pub/victorsh...</a> <a class="hashtag" href="https://bsky.app/search?q=%23AI" rel="nofollow noopener noreferrer" target="_blank">#AI</a> <a class="hashtag" href="https://bsky.app/search?q=%23AIbubble" rel="nofollow noopener noreferrer" target="_blank">#AIbubble</a><br><br><a href="https://open.substack.com/pub/victorshammas/p/the-ai-hype-bubble?r=b8e6&amp;utm_medium=ios" rel="nofollow noopener noreferrer" target="_blank">The AI Hype Bubble</a></p>
Miguel Afonso Caetano<p>"OpenAI’s highly popular chatbot, ChatGPT, regularly gives false information about people without offering any way to correct it. In many cases, these so-called “hallucinations” can seriously damage a person’s reputation: In the past, ChatGPT falsely accused people of corruption, child abuse – or even murder. The latter was the case with a Norwegian user. When he tried to find out if the chatbot had any information about him, ChatGPT confidently made up a fake story that pictured him as a convicted murderer. This clearly isn’t an isolated case. noyb has therefore filed its second complaint against OpenAI. By knowingly allowing ChatGPT to produce defamatory results, the company clearly violates the GDPR’s principle of data accuracy."</p><p><a href="https://noyb.eu/en/ai-hallucinations-chatgpt-created-fake-child-murderer" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">noyb.eu/en/ai-hallucinations-c</span><span class="invisible">hatgpt-created-fake-child-murderer</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/OpenAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>OpenAI</span></a> <a href="https://tldr.nettime.org/tags/ChatGPT" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ChatGPT</span></a> <a href="https://tldr.nettime.org/tags/Hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Hallucinations</span></a> <a href="https://tldr.nettime.org/tags/GDPR" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GDPR</span></a> <a 
href="https://tldr.nettime.org/tags/DataProtection" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DataProtection</span></a></p>
Michaël Monney<p>When GPT thinks 120´ is a news program. I can't begin to imagine how many tall tales this thing feeds people (myself included, even though I've supposedly stopped using it). <br><a href="https://mastodon.social/tags/openai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>openai</span></a> <a href="https://mastodon.social/tags/intelligenceartificielle" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>intelligenceartificielle</span></a> <a href="https://mastodon.social/tags/hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hallucinations</span></a> <a href="https://mastodon.social/tags/news" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>news</span></a> <a href="https://mastodon.social/tags/dittli" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>dittli</span></a> <a href="https://mastodon.social/tags/rts" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>rts</span></a> #120</p>
Miguel Afonso Caetano<p>"Building on our previous research, the Tow Center for Digital Journalism conducted tests on eight generative search tools with live search features to assess their abilities to accurately retrieve and cite news content, as well as how they behave when they cannot.</p><p>We found that…</p><p>- Chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead. <br>- Premium chatbots provided more confidently incorrect answers than their free counterparts.<br>- Multiple chatbots seemed to bypass Robot Exclusion Protocol preferences.<br>- Generative search tools fabricated links and cited syndicated and copied versions of articles. <br>- Content licensing deals with news sources provided no guarantee of accurate citation in chatbot responses.</p><p>Our findings were consistent with our previous study, proving that our observations are not just a ChatGPT problem, but rather recur across all the prominent generative search tools that we tested."</p><p><a href="https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">cjr.org/tow_center/we-compared</span><span class="invisible">-eight-ai-search-engines-theyre-all-bad-at-citing-news.php</span></a></p><p><a href="https://tldr.nettime.org/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://tldr.nettime.org/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://tldr.nettime.org/tags/AISearch" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AISearch</span></a> <a href="https://tldr.nettime.org/tags/AISearchEngines" class="mention hashtag" rel="nofollow noopener 
noreferrer" target="_blank">#<span>AISearchEngines</span></a> <a href="https://tldr.nettime.org/tags/News" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>News</span></a> <a href="https://tldr.nettime.org/tags/Journalism" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Journalism</span></a> <a href="https://tldr.nettime.org/tags/Media" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Media</span></a> <a href="https://tldr.nettime.org/tags/Hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Hallucinations</span></a></p>
Total Sonic Media<p>This is going to create more disinfo problems, for sure. <a href="https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">cjr.org/tow_center/we-compared</span><span class="invisible">-eight-ai-search-engines-theyre-all-bad-at-citing-news.php</span></a><br><a href="https://mastodon.social/tags/ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ai</span></a> <a href="https://mastodon.social/tags/chatgpt" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>chatgpt</span></a> <a href="https://mastodon.social/tags/copilot" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>copilot</span></a> <a href="https://mastodon.social/tags/searchengines" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>searchengines</span></a> <a href="https://mastodon.social/tags/fakenews" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>fakenews</span></a> <a href="https://mastodon.social/tags/disinfo" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>disinfo</span></a> <a href="https://mastodon.social/tags/hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hallucinations</span></a> <a href="https://mastodon.social/tags/DeepSeek" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DeepSeek</span></a> <a href="https://mastodon.social/tags/tech" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>tech</span></a> <a href="https://mastodon.social/tags/llm" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>llm</span></a> <a href="https://mastodon.social/tags/llms" class="mention 
hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>llms</span></a></p>
gentlegardener<p><a href="https://mastodon.scot/tags/ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ai</span></a> <a href="https://mastodon.scot/tags/hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hallucinations</span></a> warnings to investors buried in annual reports from those hippies and luddites at <a href="https://mastodon.scot/tags/goldmansachs" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>goldmansachs</span></a> <a href="https://mastodon.scot/tags/citi" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>citi</span></a> <a href="https://mastodon.scot/tags/jpmchase" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>jpmchase</span></a></p>
Erklärbär<p>We complain about <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hallucinations</span></a> but elect <a href="https://mastodon.social/tags/Trump" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Trump</span></a>... </p><p>Hm, I wonder</p>
Andrew Scott<p>Is AI hallucination a new risk?<br>If you have heard anything about Generative AI, you have heard about its "hallucinations". They provide great fodder for the media, who can point out many silly things that Gen AI has produced since it appeared on the AI scene (around 2021-2022). However, as Gen AI gets adopted by more organisations, how should Directors and Executives think about hallucinations, the resulting risk to their organisations, and what approach can be taken to probe whether the risks are being managed? Are Gen AI hallucinations a new risk, or do we already have tools and approaches to manage them?<br><a href="https://aes.id.au/blog/2025/03/10/is-ai-hallucination-a-new-risk/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">aes.id.au/blog/2025/03/10/is-a</span><span class="invisible">i-hallucination-a-new-risk/</span></a><br><a href="https://aus.social/tags/ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ai</span></a> <a href="https://aus.social/tags/ArtificialIntelligence" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ArtificialIntelligence</span></a> <a href="https://aus.social/tags/GenerativeAI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenerativeAI</span></a> <a href="https://aus.social/tags/governance" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>governance</span></a> <a href="https://aus.social/tags/hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hallucinations</span></a> <a href="https://aus.social/tags/technology" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>technology</span></a></p>
frimoulux<p>Centigrade Trailer</p><p><a href="https://frimoulux.com/centigrade-trailer/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">frimoulux.com/centigrade-trail</span><span class="invisible">er/</span></a></p><p><a href="https://mastodon.social/tags/Centigrade" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Centigrade</span></a> <a href="https://mastodon.social/tags/BrendanWalsh" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>BrendanWalsh</span></a> <a href="https://mastodon.social/tags/GenesisRodriguez" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>GenesisRodriguez</span></a> <a href="https://mastodon.social/tags/VincentPiazza" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>VincentPiazza</span></a> <a href="https://mastodon.social/tags/MavisSimpsonErnst" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MavisSimpsonErnst</span></a> <a href="https://mastodon.social/tags/NaomiIbbotson" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>NaomiIbbotson</span></a> <a href="https://mastodon.social/tags/MattIbbotson" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>MattIbbotson</span></a> <a href="https://mastodon.social/tags/BabyLiv" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>BabyLiv</span></a> <a href="https://mastodon.social/tags/SurvivalThriller" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>SurvivalThriller</span></a> <a href="https://mastodon.social/tags/Horreur" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Horreur</span></a> <a href="https://mastodon.social/tags/Drame" class="mention hashtag" rel="nofollow noopener noreferrer" 
target="_blank">#<span>Drame</span></a> <a href="https://mastodon.social/tags/Hypothermie" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Hypothermie</span></a> <a href="https://mastodon.social/tags/Temp%C3%AAteDeNeige" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TempêteDeNeige</span></a> <a href="https://mastodon.social/tags/Norv%C3%A8ge" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Norvège</span></a> <a href="https://mastodon.social/tags/VoitureEnsevelie" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>VoitureEnsevelie</span></a> <a href="https://mastodon.social/tags/Grossesse" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Grossesse</span></a> <a href="https://mastodon.social/tags/LuttePourLaSurvie" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LuttePourLaSurvie</span></a> <a href="https://mastodon.social/tags/Hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Hallucinations</span></a> <a href="https://mastodon.social/tags/Tension" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Tension</span></a> <a href="https://mastodon.social/tags/%C3%89l%C3%A9mentsNaturels" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ÉlémentsNaturels</span></a> <a href="https://mastodon.social/tags/Temp%C3%A9ratureGlaciale" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>TempératureGlaciale</span></a> <a href="https://mastodon.social/tags/Sc%C3%A9narioOriginal" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ScénarioOriginal</span></a> <a href="https://mastodon.social/tags/DaleyNixon" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>DaleyNixon</span></a> <a 
href="https://mastodon.social/tags/AdaptationR%C3%A9elle" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AdaptationRéelle</span></a> <a href="https://mastodon.social/tags/PeterSkyllberg" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>PeterSkyllberg</span></a> <a href="https://mastodon.social/tags/IglooEffect" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>IglooEffect</span></a> <a href="https://mastodon.social/tags/Famine" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Famine</span></a> <a href="https://mastodon.social/tags/D%C3%A9sespoir" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Désespoir</span></a> <a href="https://mastodon.social/tags/CoupleAm%C3%A9ricain" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>CoupleAméricain</span></a> <a href="https://mastodon.social/tags/IsolationExtr%C3%AAme" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>IsolationExtrême</span></a> <a href="https://mastodon.social/tags/ParamountPictures" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ParamountPictures</span></a></p>
janhoglund<p>“…language models can fundamentally be described as supercharged autocomplete tools, prone to returning incorrect information because they are skilled at creating a facsimile of a human-written sentence—something that looks like an acceptable response—but chatbots are not doing any critical “thinking.””<br>—Thomas Maxwell, Microsoft's Satya Nadella Pumps the Brakes on AI Hype<br><a href="https://mastodon.nu/tags/llm" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>llm</span></a> <a href="https://mastodon.nu/tags/llms" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>llms</span></a> <a href="https://mastodon.nu/tags/hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>hallucinations</span></a> <a href="https://mastodon.nu/tags/ai" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ai</span></a> <a href="https://mastodon.nu/tags/aihype" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>aihype</span></a></p>
Hacker News<p>Hallucinations in code are the least dangerous form of LLM mistakes — <a href="https://simonwillison.net/2025/Mar/2/hallucinations-in-code/" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">simonwillison.net/2025/Mar/2/h</span><span class="invisible">allucinations-in-code/</span></a><br><a href="https://mastodon.social/tags/HackerNews" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>HackerNews</span></a> <a href="https://mastodon.social/tags/Hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Hallucinations</span></a> <a href="https://mastodon.social/tags/in" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>in</span></a> <a href="https://mastodon.social/tags/code" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>code</span></a> <a href="https://mastodon.social/tags/LLM" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>LLM</span></a> <a href="https://mastodon.social/tags/mistakes" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>mistakes</span></a> <a href="https://mastodon.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mastodon.social/tags/safety" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>safety</span></a> <a href="https://mastodon.social/tags/machine" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>machine</span></a> <a href="https://mastodon.social/tags/learning" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>learning</span></a> <a href="https://mastodon.social/tags/code" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>code</span></a> <a 
href="https://mastodon.social/tags/quality" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>quality</span></a></p>
Knowledge Zone<p>6174: The Most <a href="https://mstdn.social/tags/Useless" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Useless</span></a> <a href="https://mstdn.social/tags/Fascinating" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Fascinating</span></a> <a href="https://mstdn.social/tags/Number" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Number</span></a> I Have Come Across : Medium</p><p><a href="https://mstdn.social/tags/AI" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>AI</span></a> <a href="https://mstdn.social/tags/Hallucinations" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Hallucinations</span></a> can’t be stopped — but these <a href="https://mstdn.social/tags/Techniques" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Techniques</span></a> can limit their <a href="https://mstdn.social/tags/Damage" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Damage</span></a> : Nature</p><p><a href="https://mstdn.social/tags/People" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>People</span></a> With <a href="https://mstdn.social/tags/ADHD" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>ADHD</span></a> Are Likely to <a href="https://mstdn.social/tags/Die" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Die</span></a> Significantly Earlier Than Their <a href="https://mstdn.social/tags/Peers" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Peers</span></a>, <a href="https://mstdn.social/tags/Study" class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>Study</span></a> Finds : NY Times</p><p>Check our latest <a href="https://mstdn.social/tags/KnowledgeLinks" 
class="mention hashtag" rel="nofollow noopener noreferrer" target="_blank">#<span>KnowledgeLinks</span></a></p><p><a href="https://knowledgezone.co.in/resources/bookmarks" rel="nofollow noopener noreferrer" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">knowledgezone.co.in/resources/</span><span class="invisible">bookmarks</span></a></p>
ResearchBuzz: Firehose<p>Ars Technica: Condé Nast, other news orgs say AI firm stole articles, spit out “hallucinations”. “Condé Nast and several other media companies sued the AI startup Cohere today, alleging that it engaged in ‘systematic copyright and trademark infringement’ by using news articles to train its large language model.”</p><p><a href="https://rbfirehose.com/2025/02/18/ars-technica-conde-nast-other-news-orgs-say-ai-firm-stole-articles-spit-out-hallucinations/" class="" rel="nofollow noopener noreferrer" target="_blank">https://rbfirehose.com/2025/02/18/ars-technica-conde-nast-other-news-orgs-say-ai-firm-stole-articles-spit-out-hallucinations/</a></p>