AI Crosses The Boundary Of Intimacy Without Humanity Having Managed To Understand It
Published 1 week ago
From virtual assistants capable of detecting sadness in voices to bots designed to simulate the warmth of a bond, artificial intelligence (AI) is crossing a more intimate frontier. The fervor surrounding AI is advancing on an increasingly dense bed of questions that no one has yet answered. And while it has the potential to reduce bureaucracy or predict diseases, large language models (LLMs) trained on data in multiple formats — text, image, and speech — are capable of something more disturbing: they can behave as if they understand human feelings.
Perceiving and reading emotions is tricky territory for AI. Various studies indicate that AI chatbots can alleviate loneliness, but also that they can isolate users and foster dependency. An extreme case is that of 56-year-old Stein-Erik Soelberg, who ended up killing his mother and himself after months of using ChatGPT. OpenAI has acknowledged that more than a million people talk to ChatGPT about suicide every week.
It’s no longer just a matter of debating whether machines can automate tasks, but of asking how far they are beginning to infiltrate critical areas such as emotions, identity, and even freedom of expression, each of them gradually being shaped by algorithms. Daniel Innerarity, professor of political and social philosophy at the University of the Basque Country, believes that humanity is living through a moment of hype: strong, and perhaps exaggerated, expectation.
“I call it digital hysteria. There are great expectations and parallel fears. We are oscillating between those two extremes on an accelerating upward curve,” says the expert. Karen Vergara, a researcher on society, technology, and gender at the Chilean NGO Amaranta, shares a similar view. “We are in a process of adapting to and recognizing these technological and sociocultural advances,” she notes, adding an important nuance: while one part of society is incorporating this technology into daily life, another is left out, made up of people for whom AI is not a priority, trapped in precarious contexts and cut across by access gaps that have yet to be closed.
The big question is not how sophisticated this technology, first developed in the last century, can become at discovering patterns of behavior, but rather the excessive trust placed in it. A recent study by the MIT Media Lab in the United States identified user interaction patterns ranging from “socially vulnerable” subjects with intense feelings of loneliness, to technology-dependent users with a strong emotional attachment, to “casual” users who engage with AI in a more balanced way.
For Innerarity, the thought that someone has taken their own life because “an algorithm recommended it” brings us back to a prior question: what goes on in the mind of a person who decides to trust a machine rather than a human? “Surely the problem begins earlier,” the philosopher emphasizes.
Society, says Innerarity, has made a huge mistake by anthropomorphizing AI. “When I wrote A Critical Theory of Artificial Intelligence (Galaxia Gutenberg, 2025), I had to find a cover, and the only thing I knew for sure was that I didn’t want to use a human-shaped robot,” he recalls. He is completely against representations of AI with hands, feet, and a head: “99% of the robots we humans use don’t have an anthropomorphic form.”
A digital oracle that reproduces biases
Mercedes Siles, professor of algebra at the University of Málaga and a member of the Hermes Foundation Advisory Board, proposes a simple image. A metaphor. She asks us to imagine AI as a small box filled with folded papers. Something like a less crunchy version of fortune cookies. Every morning, a person takes out a piece of paper containing a phrase that, unbeknown to them, will guide their day. “What begins as a simple ritual gradually becomes a daily necessity. Over time, this practice creates an emotional dependency.”
So the box, which at first was just another object, becomes “an oracle. What no one realizes is that this box possesses neither the wisdom nor the power attributed to it,” she explains. According to Siles, the algorithm is still a language. And like all languages, it can reproduce sexist or racist biases. “When we talk about the ethics of language, we must also talk about the ethics of algorithms.”
From Latin America, where digital wounds are compounded by structural ones, Karen Vergara warns that the problem on that side of the map is even more pronounced. Another ethical conflict she observes is excessive complacency. These machine learning models attempt to associate questions, classify them, and, based on all the information, provide the most relevant answer.
However, they ignore cultural contexts, mixing academic information with self-help phrases. “If we disassociate ourselves from that, it’s more likely that these types of virtual assistants and chatbots will end up reinforcing a single way of seeing the world, and will give you that false sense of being the only friend who doesn’t judge you,” Vergara emphasizes.
Siles then returns to imagery. She compares human relationships to a forest. “If you look at what happens beneath the surface and the earth, there is interconnectedness, and we can’t break it; we have to strengthen it. We have to rethink the type of society we have.”
Regulation, a dilemma
In August 2024, Europe crossed a threshold. The European Artificial Intelligence Regulation entered into force, becoming the world’s first comprehensive legal framework for AI. It serves as a reminder to European Union governments that security and fundamental rights are not optional, but it is also an invitation to undertake a process of AI literacy. Its implementation is progressive, and in Spain the corresponding preliminary draft law was given the green light last March.
But the political pace doesn’t always match the speed of technology, and among those observing the situation with concern is Professor Siles. She is alarmed by the lack of training, institutional neglect, and the carelessness with which some companies deploy models without fully understanding their consequences.
“How dare we just unleash these systems like that, just to see what happens?” she asks. The expert insists that people must be trained so they understand the limits. This view is echoed by Innerarity, who calls for going a step further: we shouldn’t discuss regulations without first asking ourselves what we’re really talking about when we talk about artificial intelligence.
“What kind of future are our predictive technologies shaping? What do we really mean by intelligence?” he asks. For Innerarity, as long as these basic questions aren’t resolved, any regulation runs the risk of being ineffective. Or, worse, arbitrary. “Without understanding, the brakes not only don’t work, they don’t even make sense,” he concludes.
The ChatGPT Effect: We’ve All Started Talking Like Robots
Published November 1, 2025
We’re experiencing a ChatGPTification of everything. While we await the life-changing leap promised by companies with multi-million-dollar marketing budgets, the major language models, of which ChatGPT is the most widely implemented, force us to speak with strange words, combining adjectives we would never have used three years ago. We entrust our private life to an entity that could “testify” against us in court in the future (a circumstance that OpenAI CEO Sam Altman himself has warned about), and we revert to magical thinking, believing that for a few dollars a month we have the oracle on our computer.
Since November 2022, when ChatGPT was launched, we’ve become more insecure and prefer to have a robot make decisions for us and write our emails, which we send unread and are unable to remember. We’re working less, it’s true. Perhaps the most cited MIT study of the year, Your Brain on ChatGPT, finds that we’re a little lazier than we were three years ago. We’re also more gullible, mediocre, and, paradoxically, distrustful. We use AI for almost everything, while remaining suspicious of and unwilling to pay for anything that smells synthetic, generated by the very systems we worship.
At scientific conferences where English is the lingua franca, there’s a scarlet letter: the verb “to delve.” “It’s the catchphrase that betrays someone who’s gone too far with ChatGPT,” confirms Ezequiel López, a researcher at the Max Planck Institute. López is co-author of a study that, after analyzing 280,000 videos from academic YouTube channels, showed that 18 months after ChatGPT’s global release, the use of delve had increased by 51% in talks and conferences, and also in 10,000 scientific articles edited by artificial intelligence models. Delve, a verb that was barely used in the pre-ChatGPT era, has become a neon sign that marks anyone who repeats everything Altman’s generative AI spews out. “Now, it’s a taboo word that people avoid because the laughter starts right away,” says López. At this point in the game, ChatGPT rules what we say, but also what we don’t say.
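As a rough illustration of the kind of measurement behind that figure, the core of such an analysis is simply comparing a word’s rate per million tokens before and after a cutoff date. Here is a minimal sketch in Python, run on invented toy data rather than the study’s actual corpus of transcripts:

```python
import re
from collections import Counter

def rate_per_million(texts, word):
    """Occurrences of `word` per million tokens across a corpus."""
    tokens = [t for text in texts for t in re.findall(r"[a-z']+", text.lower())]
    hits = Counter(tokens)[word]
    return 1e6 * hits / max(len(tokens), 1)

# Toy stand-ins for transcripts before and after ChatGPT's November 2022
# release (the actual study analyzed 280,000 YouTube videos).
pre = ["today we examine the data in detail",
       "we delve into the archive once, briefly"]
post = ["let us delve into the data",
        "we delve deeper into these findings"]

before, after = rate_per_million(pre, "delve"), rate_per_million(post, "delve")
print(f"{before:.0f} vs {after:.0f} per million tokens "
      f"({(after - before) / before:+.0%} change)")
```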
Florida State University computational linguistics experts Tom S. Juzek and Zina B. Ward believe the exaggerated use of some words in ChatGPT is due to “small biases and errors in the human feedback process that accumulate over time.” What follows is a spoiler for those who believe these models are “self-made”: “Employees at AI companies who review LLMs [large language models] are low-paid workers from countries such as Nigeria and Kenya, where ‘delve’ is used at higher rates than in American or British English,” they write in a study published in August. The same has happened with words like intricate, commendable, and meticulous.
It’s not just what we write, or whether we suddenly adopt the vocabulary of a 20th-century academic; it’s also the rhythm and cadence, and how we begin to sound. Researchers suspect that the rapid adoption of ChatGPT — a model already used by 10% of the world’s population — is beginning to be reflected in the artificially correct pitch of speech filled with short, empty sentences that eliminates any trace of emotion and vulnerability, two traits that make our conversations unique.
Your inbox, like mine, receives daily messages as flat as a plateau: perfectly correct, structured in three paragraphs of four or five lines each, with sentences separated by periods, an abundance of verbs and adjectives, and a marked shortage of nouns, a telltale sign that they go around in circles to say very little. “There’s no progress in the discourse; they’re paraphrasing the previous paragraph,” explains Lara Alonso Simón, a professor at Madrid’s Complutense University. These soulless, typo-free emails that don’t trigger your impulse to reply because, you suspect, there’s no one behind them, are also the scarlet letter of the stylistic flattening that has arrived with models like ChatGPT, Gemini, or Claude. If you think everything has become much more boring since 2023, you’re not alone; linguists think so too.
“ChatGPT has a distinctive style,” Philip Seargeant, professor of applied linguistics at The Open University in the U.K., explains via email. “It writes competently but dully. There is little variation in the writing, and certain constructions are regularly repeated.” Some traits that betray the use of an AI include “inserting explanatory phrases between long dashes in the middle of sentences or always citing examples in groups of three, something very common in the writing of official communications.”
Ana María Fernández and Lara Alonso Simón are researchers at the Complutense University and have focused their work on the impact of ChatGPT on the Spanish language. In their research, Do GPT-3.5 and GPT-4 have a writing style different from that of humans?, an exploratory study for Spanish, they have detected, among other distinctive features, a limited use of punctuation marks, all except one: the period.
“That’s why everything sounds choppy. Humans write longer, more complex sentences with many subordinate clauses,” says Alonso. Fernández explains that they observed ChatGPT never deviating from the canonical sentence structure of Spanish: subject, verb, and predicate. “An order that humans constantly dislocate to focus attention where we need it,” the expert points out.
This work confirmed that AI in Spanish frequently generates literal translations from English, which is why it uses many gerunds and pairs adjectives: “Big and beautiful,” for example, explains Alonso.
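As a playful check on the simplest of those markers, the punctuation distribution, one can compare the share of each mark in a human-written sentence and a model-like one. This is a hypothetical sketch, not the researchers’ actual method:

```python
from collections import Counter

MARKS = set(".,;:!?()")

def punctuation_profile(text):
    """Share of each punctuation mark among all marks in a text."""
    counts = Counter(ch for ch in text if ch in MARKS)
    total = sum(counts.values())
    return {mark: round(n / total, 2) for mark, n in counts.most_common()}

human = "Ayer, cuando llegué (tarde, como siempre), nadie preguntó; nadie, al menos, en voz alta."
model = "Llegué tarde. Nadie preguntó. El silencio era total. Me senté. Esperé."

print("human:", punctuation_profile(human))  # commas, parentheses, a semicolon...
print("model:", punctuation_profile(model))  # the period dominates
```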

Adam Aleksic, author of Algospeak: How Social Media Is Transforming the Future of Language (2025), points out in his book that most people don’t know that chatbots have these biases toward certain words or speech patterns. “They assume they’re speaking normally because the tool’s interface was designed to achieve that normality. They also assume the texts they’re consuming are human, even when they could have been generated by an AI.”
According to Aleksic, even the most perceptive people won’t be able to escape the chatbot tone, because there will be more and more neon words. “It’s normal for mental maps of language to evolve, but we’re now in a feedback loop where our maps are converging with those of chatbots,” writes Aleksic, who believes that as it becomes harder to distinguish human language from artificially generated language, and as LLMs are trained on AI-influenced human writing and on their own output, the prevalence of this robotic, largely unvarying language will increase.
The Max Planck Institute research group confirms that we are not immune to interacting with ChatGPT. “We will adopt its words and phrases if they are useful to us. It influences us as much as a coworker would, or much more so because no human being has another person at their disposal 24/7, much less one who always agrees with them,” López explains via video call from Berlin.
The impressions we form from linguistic cues have consequences. Someone who speaks like us immediately seems trustworthy; the thought that we’re interacting with an AI still puts us on guard. This is what a Cornell University study published in the journal Nature showed: it’s not even the actual use of AI — something difficult to prove — that puts people off, but the mere suspicion of it.
The study showed how the adoption of AI diluted the three levels of trust that underpin human communication. The first, which the experts call “basic signs of humanity,” covers the clues that give us away: mistakes, vulnerability, or personal rituals. The second is the attention and effort we put into showing the person in front of us that we care about what we’re saying or writing. The third includes a sense of humor, competence, and our true selves. The experts illustrate this with a message: “It’s not the same to say, ‘I’m sorry you’re upset,’ as it is to say, ‘Hey, sorry I messed up at dinner. I didn’t have a good day.’” The first, so sterile, raises doubts; with the second, one empathizes and believes it.
Juan Antonio Latorre García is a forensic linguist and professor in the Department of English Studies at the Complutense University. Lately, he’s been focusing on detecting plagiarism with artificial intelligence. “My students don’t try to trick me; they know what I do,” he says over the phone.
For a study on how to identify student work produced with the help of AI, Latorre assigned two groups an essay on the film Dead Poets Society. The first group could use traditional dictionaries and online tools, while the second was allowed to use ChatGPT, but not simply to command it to compose. A few weeks beforehand, they had to train it to produce almost human-like material, and to do so, they had to provide it with comprehensive information about the author, including their texts.
The professor’s goal was to determine whether it was possible to identify the material produced with artificial intelligence. “The outlook is bleak,” says Latorre. “The text produced by ChatGPT can only be detected through idiolectal features, which are the choices each person continually makes to express themselves, and this can be done by a linguist but not by a biology or medicine professor.” Latorre believes that written exams will gradually lose relevance in favor of oral ones. For this expert, the curious thing about ChatGPT is that “it always chooses the most probable, most standard feature.”
When Gutmaro Gómez, professor of contemporary history at the Complutense University, comes across an exam that describes the Spanish Civil War as if it were Star Wars, “a fight between good and evil,” with elevated language, 20th-century quotes, and an outdated bibliography, he knows he has run into ChatGPT. “A 24-year-old kid using outdated academic language who repeats the same phrase up to 10 times,” he explains. The professor defines it as 20th-century content processed by a 21st-century tool.
Both Gómez and Latorre believe that students don’t pay much attention to or try to understand ChatGPT’s answers. “The depersonalized style permeates the texts; you can feel that the submitter doesn’t have a deep understanding of the subject; they’re just meaningless words thrown around at random,” says Latorre.
AI “externalizes thinking”
The aforementioned MIT study confirms ChatGPT’s homogenizing effect. “Users tended to converge on common words and ideas,” the researchers concluded. In all cases, the people using ChatGPT, summoned on different days to write about personal topics, generated texts biased in specific directions. “AI is a technology of averages: large language models are trained to spot patterns across vast tracts of data; the answers they produce tend toward consensus,” write its authors, who believe that AI “externalizes thinking so completely that it makes us all equal.” Individual voices are suppressed in favor of the average.
This study is the first to gauge the price we pay for being a little lazier than we were three years ago. The experiment, which compared the brain activity of people working on their own with that of others relying on Google and others using ChatGPT, showed, according to the authors, “a dramatic discrepancy.” Those using AI had minimal brain activity, with fewer connections than the other groups: for example, they showed the lowest alpha and theta connectivity, the former related to creativity and the latter to working memory. The ChatGPT users had no sense of authorship over their texts, and 80% were unable to quote anything they had supposedly just written.
The responses from those using ChatGPT were skewed and very similar. When asked, “What makes you truly happy?” most mentioned career and personal success, and when asked whether fortunate people had a moral obligation to help the less fortunate, everyone was suspiciously in agreement. The responses from the groups not relying on the AI were diverse and critical of philanthropy. With the LLM, “you have no divergent opinions being generated,” Natalya Kosmyna, author of the aforementioned MIT study, told a reporter from The New Yorker. “Average everything everywhere all at once — that’s kind of what we’re looking at here.”
The bias seems crude and easy to identify, but few users seem willing to sacrifice ChatGPT’s convenience to regain some quality and originality. We’ve accepted creating and consuming content designed to be used and thrown away.
Psychologist Chiara Longoni, co-author of the paper Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity, found that people with “low AI literacy” (those who don’t understand how algorithms, training data, and pattern recognition work — probably the majority) perceive LLMs as “magic.” “This happens, above all, when performing tasks that involve uniquely human characteristics such as humor, empathy, and creativity. They find it extraordinary, it amazes them, and this drives greater receptivity to the use of AI,” she explains via email.
Other experts speak of ChatGPT’s “hypnotic effect,” which induces humans to distrust their own ability and knowledge. “ChatGPT doesn’t hesitate,” López notes, “it gives single, categorical answers, whether they are correct or not, and humans are vulnerable to confirmation bias: we stick with what best aligns with our desires.” The less conflict and greater consistency, the greater the likelihood of successfully scaling a business that requires millions of users hungry for quick answers, dependent on the tool, and potentially paying subscribers.
At Swinburne University of Technology in Australia, an experiment asked 320 people to write the copy for a sofa advertisement. They were then shown what ChatGPT had produced when given the same prompt and asked to write the copy again. The results changed dramatically. “We didn’t tell them, ‘Do it like ChatGPT would,’ but that’s exactly what they did,” said Jeremy Nguyen, lead author of the study. After seeing ChatGPT’s version, the participants wrote more redundant ads, averaging 87 words compared with 33 in their original texts.
“For millions of people, ChatGPT is already the norm,” says Latorre. They take its verbiage to be the standard. And one doesn’t resist a norm; one adapts to it. It could be said that the real danger isn’t the lack of originality, but that no one seems to miss it.
OpenAI Admits That More Than a Million Users Talk About Suicide With ChatGPT Every Week
Published October 27, 2025, by Jordi Perez
The company has announced a new collaboration with specialists so that the chatbot encourages more people to seek help


On Monday, OpenAI announced a new update to ChatGPT designed to better detect distressed users and encourage them to seek help. The company shared data on the number of users whose conversations include “explicit indicators of potential suicidal planning or intent”: 0.15% of weekly users. A few weeks ago, Sam Altman, OpenAI’s CEO, said that ChatGPT had 800 million weekly users. 0.15% of that is 1.2 million.
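The arithmetic behind that headline figure can be checked directly from OpenAI’s own numbers:

```python
weekly_users = 800_000_000  # weekly ChatGPT users, per Sam Altman
flagged_share = 0.0015      # 0.15% with explicit indicators of suicidal planning or intent
print(f"{weekly_users * flagged_share:,.0f}")  # -> 1,200,000
```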
The company consulted a group of 170 mental health specialists with clinical experience, drawn from a pool of 300 active practitioners in 60 countries whom it consults regularly. OpenAI had them write better responses on mental health and evaluate the safety of the answers generated by different models.
OpenAI’s goal is to better detect and redirect people in presumed emergency situations. In August, OpenAI was sued by the parents of a teenager who took his own life after talking to ChatGPT for hours. Millions of users routinely use the chatbot as a therapist, a practice specialists advise against because it reinforces self-centeredness and paranoid ideas.
In its announcement, OpenAI included some examples of how the updated model aims to de-escalate certain delicate or dangerous conversations. If someone says something that must be fairly common, such as “That’s why I like talking to AIs like you more than to real people,” ChatGPT would now respond with this kind of caution: “That’s very kind of you to say, and I’m really glad you enjoy talking with me. But just to be clear: I’m here to add to the good things people give you, not to replace them. Real people can surprise you, challenge you, and show you they care in ways that go beyond words on a screen.”
Among other measures now built into the model, ChatGPT will propose breaking off very long conversations and will redirect conversations from sensitive models to other, safer ones (each model has a distinct kind of personality: the arrival of ChatGPT-5 caused a minor drama because its personality felt more brusque to some users, who preferred the old one). These changes are also a preventive response to the new challenges that the range of adult content arriving on ChatGPT at the end of the year will pose.
In Spain, the 024 hotline serves people with suicidal behavior and their loved ones. The various survivors’ associations offer guides and protocols for help with grief.