#Superintelligence


Sam Altman, CEO of OpenAI, has set the tone for the year ahead in AI with a bold declaration:

OpenAI believes it knows how to build AGI (artificial general intelligence) and is now turning its sights towards superintelligence.

While there is no consensus on exactly what AGI is, OpenAI defines it as
"highly autonomous systems that outperform humans in most economically valuable work".

Altman believes superintelligent tools could accelerate scientific discovery and innovation beyond current human capabilities,
leading to increased abundance and prosperity.

Altman said:
"We are now confident we know how to build AGI as we have traditionally understood it.
We believe that, in 2025, we may see the first AI agents
“join the workforce” and materially change the output of companies.
We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

We are beginning to turn our aim beyond that -- to superintelligence in the true sense of the word.

Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own,
and in turn massively increase abundance and prosperity."

Multiple AI researchers from leading labs have now expressed similar sentiments about the timeline for AGI.

In fact, last June, Ilya Sutskever (who played a key role in the failed attempt to oust Altman as CEO) departed OpenAI and founded what he described as the world's first "straight-shot superintelligence lab".

In September, Sutskever secured $1 billion in funding at a $5 billion valuation.

Altman’s reflections come as OpenAI prepares to launch its latest reasoning model, o3, later this month.

The company debuted o3 in December at the conclusion of its "12 Days of OpenAI" event with some impressive benchmarks.

maginative.com/article/openai-

Maginative · OpenAI Says it Knows how to Build AGI and Sets Sights on Superintelligence
Altman says that in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies.

Is #AI just smoke and mirrors?

In a recent article in Time magazine (time.com/7205596/sam-altman-su), Sam Altman, current CEO of #OpenAI, describes his vision of an Artificial General Intelligence (#AGI), or "#superintelligence", that is supposed to revolutionize the planet. In a headlong race for profit, he announces the advent of a new era for this year, or at least under the new reign of #DonaldTrump
1/4
...

Time · How OpenAI’s Sam Altman Is Thinking About AGI and Superintelligence in 2025
By Tharin Pillay

Honestly, #AI will require #superintelligence just to cross the chasm from “what I say” to “what I mean.”

When we say “AI is stupid” what we mean is “I am too stupid to say what I actually mean.”

The one caveat is that meaning is created through dialogue. One-shot understanding is a high bar, and not necessary. Hence the medium of “chat” not “command line.”
infosec.exchange/@david_chisna

Infosec Exchange · David Chisnall (*Now with 50% more sarcasm!*) (@david_chisnall@infosec.exchange)

A lot of the current hype around LLMs revolves around one core idea, which I blame on Star Trek:

> Wouldn't it be cool if we could use natural language to control things?

The problem is that this is, at the fundamental level, a terrible idea. There's a reason that mathematics doesn't use English. There's a reason that every professional field comes with its own flavour of jargon. There's a reason that contracts are written in legalese, not plain natural language. *Natural language is really bad at being unambiguous*.

When I was a small child, I thought that a mature civilisation would evolve two languages. A language of poetry, that was rich in metaphor and delighted in ambiguity, and a language of science that required more detail and actively avoided ambiguity. The latter would have no homophones, no homonyms, unambiguous grammar, and so on.

Programming languages, including the ad-hoc programming languages that we refer to as 'user interfaces', are all attempts to build languages like the latter. They allow the user to unambiguously express intent so that it can be carried out. Natural languages are not designed and end up being examples of the former.

When I interact with a tool, I want it to do what I tell it. If I am willing to restrict my use of natural language to a clear and unambiguous subset, I have defined a language that is easy for deterministic parsers to understand with a fraction of the energy requirement of a language model. If I am not, then I am expressing myself ambiguously and no amount of processing can possibly remove the ambiguity that is intrinsic in the source, except a complete, fully synchronised, model of my own mind that knows what *I* meant (and not what some other person saying the same thing at the same time might have meant).

The hard part of programming is not writing things in some language's syntax, it's expressing the problem in a way that lacks ambiguity. LLMs don't help here, they pick an arbitrary, nondeterministic, option for the ambiguous cases. In C, compilers do this for undefined behaviour and it is widely regarded as a disaster. LLMs are built entirely out of undefined behaviour.

There are use cases where getting it wrong is fine. Choosing a radio station or album to listen to while driving, for example. It is far better to sometimes listen to the wrong thing than to take your attention away from the road and interact with a richer UI for ten seconds. In situations where your hands are unavailable (for example, controlling non-critical equipment while performing surgery, or cooking), a natural-language interface is better than no interface. It's rarely, if ever, the best.
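The contrast Chisnall draws between a restricted, unambiguous subset of language and free-form natural language can be made concrete with a minimal sketch. The grammar, commands, and function names below are hypothetical illustrations, not anything from the quoted post: a deterministic parser either matches an utterance against exactly one rule or refuses to guess.

```python
# Minimal sketch (hypothetical grammar): a restricted command subset that a
# deterministic parser can handle, in contrast to free-form natural language
# where the system must guess the speaker's intent.
import re
from typing import Optional, Tuple

# Each command form has exactly one interpretation.
GRAMMAR = [
    (re.compile(r"^play station (?P<name>[a-z0-9 ]+)$"), "play_station"),
    (re.compile(r"^set volume to (?P<level>\d{1,3})$"), "set_volume"),
]

def parse(utterance: str) -> Optional[Tuple[str, dict]]:
    """Return (action, arguments) for an in-grammar utterance, or None.

    There is no ranking or sampling step: either the input matches one
    rule unambiguously, or it is rejected and the user is asked to rephrase.
    """
    text = utterance.strip().lower()
    for pattern, action in GRAMMAR:
        match = pattern.match(text)
        if match:
            return action, match.groupdict()
    return None  # out of grammar: refuse to guess

print(parse("Set volume to 30"))      # ('set_volume', {'level': '30'})
print(parse("play station jazz fm"))  # ('play_station', {'name': 'jazz fm'})
print(parse("make it a bit louder"))  # None: ambiguous, needs clarification
```

The design choice is the point: rejecting out-of-grammar input and asking the user to rephrase is the deterministic analogue of the clarifying dialogue mentioned above, rather than silently picking one of several plausible readings.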

Oxford shuts down institute run by Elon Musk-backed philosopher

Oxford University this week shut down an academic institute run by one of Elon Musk’s favorite philosophers. The Future of Humanity Institute, dedicated to the long-termism movement and other Silicon Valley-endorsed ideas such as effective altruism, closed this week after 19 years of operation.

Musk had donated £1m to the FHI in 2015 through a sister organization to research the threat of artificial intelligence.
He had also boosted the ideas of its leader for nearly a decade on X, formerly Twitter.

The center was run by Nick Bostrom, a Swedish-born philosopher whose writings about the long-term threat of AI replacing humanity turned him into a celebrity figure among the tech elite and routinely landed him on lists of top global thinkers.

OpenAI chief executive Sam Altman, Microsoft founder Bill Gates and Tesla chief Elon Musk all wrote blurbs for his 2014 bestselling book Superintelligence.
“Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes,” Musk tweeted in 2014.

Bostrom resigned from Oxford following the institute’s closure.

The closure of Bostrom’s center is a further blow to the longtermism and effective altruism movements that the philosopher has spent decades championing, which in recent years have become mired in scandal.
Bostrom himself issued an apology last year after a decades-old email surfaced in which he claimed
“Blacks are more stupid than whites” and used the N-word.
theguardian.com/technology/202

The Guardian · Oxford shuts down institute run by Elon Musk-backed philosopher
By Nick Robins-Early