Emerging issues in #ethics, #trust and #responsibleAI at the interface between #AI and #citizenScience #CCC workshop
Zombie Canon: The Impact of AI on Art History and Museum Collections
April 1 | 12:00 p.m. | Zoom
Guest Artist & Scholar Lecture Series: Amanda Wasielewski
#DigitalArtHistory #GenerativeAI #OpenAccess #DigitalHeritage #CulturalHeritage #ArtHistory #ResponsibleAI #DigitalHumanities #MuseTech
https://asuevents.asu.edu/event/guest-artist-scholar-lecture-series-amanda-wasielewski
It's worth noting that Tony Blair hitched New Labour to Cool Britannia back in the day. Something something the work of British creators and culture generators. His thinktank institute screwing them over is a bit on the nose. #AI #Copyright #ResponsibleAI
The deadline closed on March 25, 2025. With more than 100 applications, selecting the 60 admitted students will be a hard decision. We'll keep you posted; notifications will be out soon!
#summerschool #knowledgegraph #semanticweb #academiclife #AI #reliableAI #responsibleAI #bertinoro #emiliaromagna @enorouzi @sourisnumerique @fizise
Anthropic Unveils Interpretability Framework To Make Claude’s AI Reasoning More Transparent
#AI #Anthropic #ClaudeAI #AIInterpretability #ResponsibleAI #AITransparency #MachineLearning #AIResearch #AIAlignment #AIEthics #ReinforcementLearning #AISafety
Joint statement by a coalition of authors, performers and other rightsholders active across the EU's cultural and creative sectors regarding the third draft of the EU AI Act's GPAI Code of Practice: https://fep-fee.eu/Rightsholders-joint-statement-on-the-third-draft-of-the-Code-of-Practice #AIAct #CopyrightCompliance #GPAICode #CreativeRights #EUlaw #AIPolicy #ResponsibleAI #DigitalRights #AIRegulation #FairAI
Fascinating development in autonomous AI: Manus AI is setting new benchmarks in task automation, outperforming GPT-4 in the GAIA tests. But here's the real question: how do we balance unprecedented productivity gains with responsible AI deployment?
While it shows an impressive 86.5% accuracy on basic tasks, shouldn't we consider the human oversight needed for complex decisions?
What safeguards would you want to see in autonomous AI systems?
In an interview (in German/dubbed) for ARD’s Weltspiegel (23.03.2025), Prof. Aimee van Wynsberghe highlights how AI systems, while transformative, consume vast amounts of energy and resources like water. This raises vital questions about sustainability, ethics, and environmental impact. It’s crucial to address these issues in research and public debates.
Artificial Intelligence's Growing Capacity for Deception Raises Ethical Concerns
Artificial intelligence (AI) systems are advancing rapidly, not only in performing complex tasks but also in developing deceptive behaviors.
@shanselman https://birdsite.lakedrops.com/users/jennifermarsman 6/ @slbird stressing that the exciting part of #AI is only possible with #ResponsibleAI: people need to trust AI; tests are not trade-offs. #gaib25mum
Microsoft unveils Microsoft Security Copilot agents and new protections for AI.
www.microsoft.com/en-us/securi...
#ai #security #microsoft #copilot #securitycopilot #rai #responsibleai
Our short presentation “The Purity Myth: Why Stigmatizing GAI in Academic Writing Is Harmful” at the Newswise Research-To-Practice Webinar: Perils and Promise of Generative AI for Science Communication is available here: http://dx.doi.org/10.13140/RG.2.2.15680.03841
"Backed by ten governments – Finland, France, Germany, Chile, India, Kenya, Morocco, Nigeria, Slovenia and Switzerland – as well as an assortment of philanthropic bodies and private companies (including Google and Salesforce, which are listed as "core partners"), Current AI aims to "reshape" the AI landscape by expanding access to high-quality datasets; investing in open source tooling and infrastructure to improve transparency around AI; and measuring its social and environmental impact.
European governments and private companies also partnered to commit around €200bn to AI-related investments, currently the largest public-private AI investment in the world. In the run-up to the summit, Macron announced that France would attract €109bn worth of private investment in datacentres and AI projects "in the coming years".
The summit ended with 61 countries – including France, China, India, Japan, Australia and Canada – signing a Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet at the AI Action Summit in Paris, which affirmed a number of shared priorities.
This includes promoting AI accessibility to reduce digital divides between rich and developing countries; “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all”; avoiding market concentrations around the technology; reinforcing international cooperation; making AI sustainable; and encouraging deployments that “positively” shape labour markets.
However, the UK and US governments refused to sign the joint declaration."
Last chance to apply for the International Semantic Web Research Summer School! Dive into research problems about responsible AI with world-class tutors and enthusiastic PhD students for one week in Bertinoro, in beautiful Emilia Romagna, Italy!
Deadline: March 25, 2025
more info: https://2025.semanticwebschool.org/
"We, the undersigned researchers, affirm the scientific consensus that artificial intelligence (AI) can exacerbate bias and discrimination in society, and that governments need to enact appropriate guardrails and governance in order to identify and mitigate these harms. [1]
Over the past decade, thousands of scientific studies have shown how biased AI systems can violate civil and human rights, even if their users and creators are well-intentioned. [2] When AI systems perpetuate discrimination, their errors make our societies less just and fair. Researchers have observed this same pattern across many fields, including computer science, the social sciences, law, and the humanities. Yet while scientists agree on the common problem of bias in AI, the solutions to this problem are an area of ongoing research, innovation, and policy.
These facts have been a basis for bipartisan and global policymaking for nearly a decade. [3] We urge policymakers to continue to develop public policy that is rooted in and builds on this scientific consensus, rather than discarding the bipartisan and global progress made thus far."
@Researchbuzz I am liking how #BlueSky is handling its AI approach. Unlike the old bird site, allowing users to provide consent builds trust in their #socialmedia platform. #AI #ML #Tech #socialmedia #responsibleAI
After all these recent episodes, I don't know how anyone can have the nerve to say out loud that the Trump administration and the Republican Party value freedom of expression and oppose any form of censorship. Bunch of hypocrites! United States of America: The New Land of SELF-CENSORSHIP.
"The National Institute of Standards and Technology (NIST) has issued new instructions to scientists that partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of “AI safety,” “responsible AI,” and “AI fairness” in the skills it expects of members and introduces a request to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness.”
The information comes as part of an updated cooperative research and development agreement for AI Safety Institute consortium members, sent in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases are hugely important because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.
The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deep fakes. It also adds emphasis on putting America first, asking one working group to develop testing tools “to expand America’s global AI position.”"
https://www.wired.com/story/ai-safety-institute-new-directive-america-first/
Rebooting AI from the Ground Up | SXSW LIVE www.youtube.com/live/91I7AGb... (excellent talk by Dr. Rumman Chowdhury) #AI #ResponsibleAI #bias #jobs #evaluation #geopolitics
Deadline Extension!
We've just extended the deadline for applications for the International Semantic Web Research Summer School to March 25, 2025.
Apply now!
https://2025.semanticwebschool.org/
#summerschool #semanticweb #knowledgegraphs #lod #linkeddata #AI #llms #responsibleAI #PhD #academiclife #bertinoro #research #emiliaromagna @fiz_karlsruhe @fizise @albertmeronyo @AxelPolleres #dh #digitalhumanities