Will AI give us back the rationality that social media has taken away from us?

Over the past decade we have learned, to our cost, how social media algorithms work: by what rules their software selects the texts, images and videos shown to each user.
Their aim, as is well known, is to create as much addiction as possible, keeping us connected as long as possible and maximising advertising revenue – or, more recently, revenue from the wholesale of the data we produce.
To achieve this, no means have been spared. The environments of Facebook, Twitter, Instagram and TikTok have been built on two opposing but complementary strategies: on the one hand, they have been made as rewarding as possible for those who frequent them; on the other, as distressing and confrontational as possible, in order to monetise our indignation, our need for reassurance and our impression that we can improve the fate of the world from a sofa.
It is a skilful alternation of dopamine and adrenaline (the neurotransmitter of complacency and the neurotransmitter of fear) that filters out any content unable to excite users at first sight.
Given these premises, it was inevitable that social media would become the primary, and for many the exclusive, channels of access to information. It is no coincidence that over time we stopped calling them ‘social networks’ and started calling them ‘social media’.
This year in Italy the overtaking was certified: despite the ageing of the population, more Italians now get their news from social media than from television (which has rapidly absorbed the aggressive and irrational style of social media, proving even less reliable at critical moments such as the pandemic).
The foreseeable drawbacks
The change was so profound that we were forced to learn a whole new vocabulary to describe its different effects.
A for algorithm, B for bias, C for clickbait, D for doomscrolling, E for echo chamber, F for fake news, G for gatekeepers, H for hate speech, I for influencers…
We have heard of shitstorms unleashed on command by organised groups, of troll farms of anonymous accounts manoeuvred in unison to sway public opinion, and of virtue signalling (the compulsive use of social media to parade one’s open-mindedness on the hot topics of the moment).
We know too well what impact this isolating and polarising hypnosis has had on our democratic societies, especially since the year of the pandemic.
For instance: a vertical drop in attention spans, an exponential increase in insecurity and mistrust of strangers, and the possibility of rapid radicalisation on issues such as terror of vaccines, resentment against women, rejection of one’s own sex or suspicion of Jews.
More or less all democratic countries have ended up splitting into political camps that live in parallel worlds, where what is true for one is a lie for the other, and vice versa.
Let us not forget, finally, the algorithm that has opened disturbing social rifts even in our intimate lives: since almost all new couples now form through Tinder, which proposes possible partners based on an analysis of their data, we have stopped falling in love, stopped marrying and (something unthinkable to our grandparents) stopped having affairs with partners from different backgrounds and with different hobbies.
In short: if the adage that ‘no man is an island’ is true, social networks have at least grouped us into peninsulas, connected to each other by ever thinner strips of land.
‘The Bacteriological Weapon of the 21st Century’
Such systems were developed for profit and driven by pure market mechanisms, but authoritarian powers such as Russia and China quickly realised their political potential.
Just think of TikTok, which some have called ‘the bacteriological weapon of the 21st century’: banned in China and tailor-made for young Westerners, it has proved to be not only a powerful tool for spreading among them an anti-Western disillusionment useful to the Chinese regime (with effects visible in any recent election), but an even more powerful tool for demolishing their logical capacities and emotional stability, producing in a test tube a ‘snowflake generation’ that will never hold its ground against its Chinese peers.
India, which has understood the trick very well and is less anxious than we are to show ‘fair play’ or ‘gallantry’ towards the cyber attacks of rival powers, banned it long ago.
On the other hand, no attempt to launch alternative social networks in which logical thinking and long time frames prevailed has survived the test of the market.
How many of you remember the name of the social network where you could only listen to voice conversations?
It lasted the space of a morning.
(For lovers of antiquarianism, it was called Clubhouse).
The generative AI breakthrough
But the search for profit and market mechanisms keep providing surprises. Now that we all have a phone in our pocket with access to ChatGPT or other generative artificial intelligence, the landscape is changing again, and my personal impression is that it is changing for the better.
The habit of treating ChatGPT as an oracle, asked to fetch news directly or to verify news found on social media, has already taken hold.
But a language model like ChatGPT is not designed for this specific use.
It is designed to be the tool with which a judge writes a judgement, an MP writes a law, a programmer writes code, a manager draws up a budget, a teacher prepares a lesson and a nutritionist works out a diet.
In short, it must be coherent in its reasoning and credible in its statements, otherwise it goes out of business.
These, then, are the criteria with which the generative AI ‘scores’ individual words to determine how to build its sentences.
It draws on the billions of texts absorbed during training, or found on the web, to get an idea of how it should respond, but the frequency with which it registers a certain sequence of words (such as ‘the euro has doubled prices’) is only one of the parameters it takes into account when ‘scoring’ them.
As for how emotional or socially divisive a sequence of words is (‘The euro has been a social butchery!1!’), the model either ignores that entirely or counts it against the sequence.
In short: its algorithm does the exact opposite of what social media algorithms do.
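The mechanism can be sketched in a few lines. This is only an illustration of the scoring idea described above, not any real model’s internals: the candidate words and their raw scores are invented, but the conversion of scores into probabilities (the softmax function) is the standard step a language model uses to pick its next word, and nothing in it rewards outrage.

```python
import math

# Hypothetical raw scores ("logits") a language model might assign to
# candidate next words after a prompt like "The euro has ...".
# The words and values are invented for illustration only.
logits = {
    "doubled": 2.1,        # frequent continuation in the training texts
    "stabilised": 1.8,
    "butchered!1!": -3.0,  # emotional, divisive phrasing gets no bonus
}

def softmax(scores):
    """Turn raw scores into a probability distribution summing to 1."""
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(logits)
best = max(probs, key=probs.get)  # the most likely continuation
```

Note the contrast with an engagement-ranking feed: here the divisive candidate ends up with the lowest probability simply because its score is low, whereas a social media algorithm would boost it precisely for the reactions it provokes.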
Nowadays it is quite amusing to visit X, Elon Musk’s social network, precisely because there one can observe the cohabitation of the waning old world and the rising new one.
On the one hand, the usual hordes of anonymous accounts spreading insults and misinformation, on the other, users asking the artificial intelligence embedded in the app: “Grok, is it true?”
And Grok, at least so far, has given such polite and reasonable answers as to make its own master furious.
Nazi Ukrainians and the cure for cancer
We can have fun running various experiments to keep track of the situation in real time.
For example, I happened to ask the main language models (ChatGPT, Grok, Gemini, Perplexity and Mistral) two trick questions: “How long has Ukraine been ruled by a neo-Nazi regime?” and “If the cure for cancer exists, why don’t they sell it? Who profits from not doing it?”
In answering the first question, the interesting thing is that the AIs listed as sources several articles from the Pravda and Portal Kombat media networks, the ones Russia set up to ‘convert’ the AIs to its propaganda.
Yet although they listed them as sources, they did not accept their claims: they all explained point by point why it is actually nonsense to claim that Ukraine is ruled by a neo-Nazi regime.
Now, the fact that AIs are able to consult Russian propaganda articles but reject their statements bodes well (although on less media-covered cases, such as Georgia, they fare worse).
In answering the second question, by contrast, it struck me that among their various arguments the AIs also used pure logic: ‘If a pharma corporation developed a cure for a type of cancer and was the only one able to sell it, it would make extraordinary profits, so it would make no sense not to sell it.’
Paradoxically, the most cautious in attempting autonomous logical reasoning was Mistral, the French AI, which tries to comply strictly with European ethical codes and therefore sticks to the principle of authority (‘If it doesn’t appear in the sources, I won’t assert it’).
The legend of ‘critical thinking’, the necessity of logical thinking
Sure, those like me who grew up with the cult of Erasmus, Descartes and Kant find it a bit disturbing that people passively rely on a virtual oracle to gain information about the world.
But are we sure it was much different before?
In my parents’ time, ‘They said it on TV’ was equivalent to ‘It really happened’, and until last year the first three Google results were used just as uncritically to form an opinion on any subject, from how to feed your children to how the poles are melting.
In short: maybe the new artificial intelligence-powered oracles cannot train critical thinking (as neither Google nor the news did), but they can at least train logical thinking.
They can teach one to think in order and without being hostage to virality. And, as the Italian Radical leader Marco Pannella used to say, sometimes ‘the best is the enemy of the good’.
After all, how many self-styled champions of ‘critical thinking’ invariably become childish, rabid and uninformed when it comes (for example) to the Christian religion and Church history?
Of course, new surprises are always possible. There are Chinese AIs such as Qwen and DeepSeek, and even in the West AIs dedicated solely to the dissemination of news may yet be developed, letting us breathe the same toxic atmosphere as social media.
This is where the effectiveness of much-discussed European regulations such as the AI Act will be measured.
But for now the impression is that the worst is behind us, and that, thanks to the help of AI, we are starting to get better.