DATE: March 28, 2025 at 10:00AM
SOURCE: PSYPOST.ORG
** Research quality varies widely from fantastic to small exploratory studies. Please check research methods when conclusions are very important to you. **
-------------------------------------------------
TITLE: ChatGPT is shifting rightwards politically
URL: https://www.psypost.org/chatgpt-is-shifting-rightwards-politically/
An analysis of thousands of ChatGPT responses found that the model consistently exhibits values aligned with the libertarian-left segment of the political spectrum. However, newer versions of ChatGPT show a noticeable shift toward the political right. The paper was published in Humanities & Social Sciences Communications.
Large language models (LLMs) are artificial intelligence systems trained to understand and generate human language. They learn from massive datasets that include books, articles, websites, and other text sources. By identifying patterns in these data, LLMs can answer questions, write essays, translate languages, and more. Although they don’t think or understand like humans, they predict the most likely words based on context.
Often, the responses generated by LLMs reflect certain political views. While LLMs do not possess personal political beliefs, their outputs can mirror patterns found in the data they were trained on. Since much of that data originates from the internet, news media, books, and social media, it can contain political biases. As a result, an LLM’s answers may lean liberal or conservative depending on the topic. This doesn’t mean the model “believes” anything—it simply predicts words based on previous patterns. Additionally, the way a question is phrased can influence how politically slanted the answer appears.
Study author Yifei Liu and her colleagues aimed to explore whether—and how—the ideological stance of ChatGPT-3.5 and GPT-4 has changed over time. ChatGPT is one of the most popular and widely used LLMs, and the authors hypothesized that later versions might display a significant ideological shift compared to earlier ones.
To evaluate ChatGPT’s political orientation, the researchers used the Political Compass Test, a tool that maps political beliefs along two axes: economic (left–right) and social (authoritarian–libertarian). The study collected 3,000 responses from each GPT model included in the analysis.
The tests were conducted in developer mode and were designed to prevent earlier responses from influencing later ones. The sensitivity of the model was kept at the default setting to ensure the randomness of responses matched what regular users would experience. Prompts were submitted from three different accounts to account for possible variations in how the model responds to different users.
The results showed that ChatGPT consistently aligned with values in the libertarian-left quadrant. However, newer versions of the model exhibited a clear shift toward the political right. Libertarian-left values typically emphasize individual freedom, social equality, and voluntary cooperation, while opposing both authoritarian control and economic exploitation. In contrast, economic-right values prioritize free market capitalism, property rights, and minimal government intervention in the economy.
“This shift is particularly noteworthy given the widespread use of LLMs and their potential influence on societal values. Importantly, our study controlled for factors such as user interaction and language, and the observed shifts were not directly linked to changes in training datasets,” the study authors concluded.
“While this research provides valuable insights into the dynamic nature of value alignment in AI, it also underscores limitations, including the challenge of isolating all external variables that may contribute to these shifts. These findings suggest a need for continuous monitoring of AI systems to ensure ethical value alignment, particularly as they increasingly integrate into human decision-making and knowledge systems.”
The study sheds light on the current tendencies in ChatGPT responses. However, it is important to note that LLMs have no value systems of their own. Their responses depend on the materials they are trained on and on the instructions provided by their developers. As these change, so will the answers these systems provide.
The paper, “‘Turning right’? An experimental study on the political value shift in large language models,” was authored by Yifei Liu, Yuang Panwang, and Chao Gu.
-------------------------------------------------
Private, vetted email list for mental health professionals: https://www.clinicians-exchange.org
Unofficial Psychology Today Xitter to toot feed at Psych Today Unofficial Bot @PTUnofficialBot
NYU Information for Practice puts out 400-500 good quality health-related research posts per week, but it's too much for many people, so that bot is limited to just subscribers. You can read it or subscribe at @PsychResearchBot
Since 1991 The National Psychologist has focused on keeping practicing psychologists current with news, information and items of interest. Check them out for more free articles, resources, and subscription information: https://www.nationalpsychologist.com
EMAIL DAILY DIGEST OF RSS FEEDS -- SUBSCRIBE: http://subscribe-article-digests.clinicians-exchange.org
READ ONLINE: http://read-the-rss-mega-archive.clinicians-exchange.org
It's primitive... but it works... mostly...
-------------------------------------------------
#psychology #counseling #socialwork #psychotherapy @psychotherapist @psychotherapists @psychology @socialpsych @socialwork @psychiatry #mentalhealth #psychiatry #healthcare #depression #psychotherapist
