Thursday, 12 February 2026

2 LONG PIECES, PUBLISHED AROUND THE SAME DATE, ABOUT THE SAME SUBJECT/COMPANY. WORTH READING AND PONDERING.




I will first include the links, followed by the English translation. It’s important to read both together to catch the irony. I’ll say no more. They speak for themselves.

Daniela Amodei, Anthropic Co-Founder: “Studying the Humanities Will Be More Important Than Ever”


Most students have chosen engineering degrees to work in AI
Daniela Amodei believes the future of AI will lie in the humanities and human skills, not in STEM degrees

By Rubén Andrés
Editor – Work and Productivity

When a student is about to take the PAU (Spanish University Entrance Exam) and considers what to study if they want to work in AI development, they are likely to choose computer engineering or another STEM degree. In a way, that would be the right decision, as shown by the high employment rates that technical engineering degrees achieve year after year.

However, according to Daniela Amodei, co-founder and president of Anthropic, the humanities are the key to the future of work with AI. Claude can handle the programming.

Less machine, more human. In a recent interview with ABC News, Daniela Amodei—who holds a degree in Literature from the University of California, Santa Cruz, and is the sister of Anthropic co-founder Dario Amodei—argued that “studying the humanities is going to be more important than ever.”


Her argument echoes what other AI executives, such as Jensen Huang, have been saying for some time: “Our job is to create computing technology so that no one needs to program,” Huang stated at a conference in 2024.

“Many of these models are actually very good at STEM, right? But I think this idea that there are things that make us uniquely human—understanding ourselves, understanding history, understanding what motivates us—that will always be really important.” In other words, what Amodei believes will truly be valuable in the future is not people who know how to code, but people who can teach AI models to think like humans.

At Anthropic, they are already moving in that direction. The company’s president stated that when hiring new employees, they now prioritize profiles of “great communicators, people with excellent emotional intelligence and interpersonal skills, who are kind, compassionate, curious, and want to help others.” For the executive, “the things that make us human will become much more important rather than much less important.”

In fact, Amodei does not see the future of work as humans versus AI, but humans plus AI. “The combination of humans and AI creates jobs that are more meaningful, more challenging, more interesting, and highly productive,” the president of Anthropic emphasized. “And I think it will also open the door to greater access and opportunities for many people,” she added.

The harsh labor reality in Spain. The employment rate for humanities graduates in Spain paints a very different picture. According to data from the BBVA Foundation and Ivie, 77.6% of young university graduates secure a job aligned with their degree. Students who complete computer and software engineering degrees achieve an average employability rate of 89.4%.

By contrast, according to the report “Youth Employability in Spain 2025” by the Knowledge and Development Foundation (CYD), the Arts and Humanities field offers the fewest career opportunities, with an average employment rate of 63.5%.

A complicated present. Amodei foresees a very different future in which AI will free up the technical side to enhance the human one. However, the reality today is that Arts and Humanities graduates earn the lowest salaries.

Only 36.4% of humanities graduates earn more than €1,500 per month, compared to engineering graduates, who earn an average of €2,900 gross per year.

In Xataka | Finding a job used to be a good way to escape poverty: in Spain, that is starting to no longer be true

I think "compared to engineering graduates, who earn an average of €2,900 gross per year" is a mistake. Perhaps it should be €29,000 gross per year, or €2,900 gross per month? Let's move on to the other article, published in Forbes, shall we?

Anthropic AI Safety Researcher Warns Of World ‘In Peril’ In Resignation



By Conor Murray, Forbes Staff. Murray is a Forbes news reporter covering entertainment trends.


Feb 09, 2026, 05:01pm EST


Topline

An Anthropic staffer who led a team researching AI safety departed the company Monday, warning darkly of a world “in peril” and of the difficulty of letting “our values govern our actions” in a public resignation letter that, without elaborating, also suggested the company had set its values aside.


Anthropic safety researcher Mrinank Sharma's resignation letter garnered 1 million views by Monday afternoon.


Key Facts

Mrinank Sharma, who had led Anthropic’s safeguards research team since its launch last year, shared his resignation letter in a post on X Monday morning, which quickly garnered attention and has been viewed 1 million times.

In his letter, Sharma said it is “clear to me that the time has come to move on,” stating the “world is in peril,” not just from AI, but a “whole series of interconnected crises unfolding in this very moment.”

Sharma said he has “repeatedly seen how hard it is to truly let our values govern our actions” while at Anthropic, adding, “we constantly face pressures to set aside what matters most,” though he did not offer any specifics.

After leaving Anthropic, Sharma said he may pursue a poetry degree and “devote myself to the practice of courageous speech,” adding he wants to “contribute in a way that feels fully in my integrity.”

Sharma declined a request for comment (Forbes also reached out to Anthropic for comment and has not heard back).

Crucial Quote

“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,” Sharma wrote in his letter.

What Did Sharma Do At Anthropic?

Sharma, who has a Ph.D. in machine learning from the University of Oxford, began working at Anthropic in August 2023, according to his LinkedIn profile. According to his website, the team he formerly led at Anthropic researches how to mitigate risks from AI. In his resignation letter, Sharma said some of his work included developing defenses against AI-assisted bioterrorism and researching AI sycophancy, the phenomenon where AI chatbots overly praise and flatter a user. According to a report published in May by Sharma’s team, the Safeguards Research Team had focused on researching and developing safeguards against actors using an AI chatbot to seek guidance on how to conduct malicious activities.

Sharma’s New Study Found That Chatbots Could Create Distorted Reality

In a study published last week, Sharma investigated how using AI chatbots could cause users to form a distorted perception of reality, finding that “thousands” of interactions that may produce these distortions “occur daily.” Severe instances of distorted perceptions of reality, which Sharma calls disempowerment patterns, are rare, but rates are higher around topics like relationships and wellness. Sharma said his findings “highlight the need for AI systems designed to robustly support human autonomy and flourishing.”


Tangent

Other high-profile AI company employees have quit citing ethical concerns. Tom Cunningham, a former economic researcher at OpenAI, left the company in September and reportedly said in an internal message he had grown frustrated with the company allegedly becoming more hesitant to publish research that is critical of AI usage. In 2024, OpenAI dissolved Superalignment, a safety research team, after two of its key members resigned. One of these members, Jan Leike—who now leads safety research at Anthropic—said in a post on X upon his resignation that he had been “disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point.” Gretchen Krueger, who left her post as an AI policy researcher shortly after Leike, said in posts on X the company needs to do more to improve “decision-making processes; accountability; transparency” and “mitigations for impacts on inequality, rights, and the environment.”


Fasten your seat-belts, dear earthlings.

xx



