This week, I’ve changed things around a bit. Instead of the usual collection of articles in the ‘handpicked reads’ section, I’ve reviewed a single, 92-page book that I recently read: Zara Rahman’s Machine Readable Me, which I think is well worth your time.
And, in the following section, I reflect on the concept of intellectual humility, inspired by an episode involving pizza, of all things. Read on to find out the connection.
Handpicked Read
Zara Rahman’s Machine Readable Me offers a great introduction to the intricacies of data collection by governments and corporations. Integrating insights from a decade’s worth of research, Rahman explores everything from the personal impacts of government digital ID systems (e.g., India’s Aadhaar) and the UN’s deployment of biometrics on refugees to the implications of predictive machine learning for children’s rights and the identity-shaping power of genome analysis.
One of the key focuses of the book is the systemic asymmetries at play, where data on marginalized groups and refugees is harvested by governments and international agencies, often without proper consent and beyond necessity. Despite my privilege, carrying a weaker passport than nearly all my peers has made me acutely aware of the pace at which my own biometrics and digital data travel across international borders, outpacing me. Over the years, I’ve spent countless hours filling out immigration paperwork and standing in never-ending queues, surrendering my data to other governments on an unprecedented scale. This information, once shared with one border agency, travels unquestioned to another’s archive, collating a profile of me that is perhaps more detailed than my own understanding of myself.
We are all part of databases that seek to identify and categorize us, freezing our identities, most often at the point of birth, even when we ourselves may not agree with the categories used. The concern isn’t with the act of data collection itself, but with its intent and scope. Rahman’s critique resonated with me because she probes this very thing: the ‘why’ of data collection, and when it might serve the end users and not just the invisible hands collecting it.
At its core, Machine Readable Me offers a vision where we wield technology as a tool that serves us in the expression of our identities, without letting it confine us within its unyielding parameters.
Overall, it is an accessible primer for those who want to understand the ways in which their digital information strengthens preexisting systemic inequalities.
(Not-So) In-Depth Reflections
Recently, over a casual chat about the best Italian restaurant in London, a friend pointed out, “You’ve changed your mind.” It was a simple observation, and a true one, but the innocuous statement made me wonder about a fundamental quirk of human nature. Why does it feel so consequential to admit that our opinions have changed? In a world bursting with new information, why is it so hard to simply say, ‘I was wrong’?
Most of us today, comfortably glued to our echo chambers, meet contrarian evidence with skepticism or outright rejection. It’s as if changing our stance or opinion, even in the face of overwhelming evidence, were a personal failing. But isn’t this exactly what learning should be about?
There are multiple reasons for this, but I can hazard a couple. For one, there’s the overtly politicized and permanent nature of our public commons. Our views, once ephemeral, now gain permanence on the digital soapboxes of our age, from Facebook and Twitter (now X) to Instagram, TikTok, and Threads. The permanence of our digital footprints often pushes us to one of two extremes: a reluctance to share opinions for fear of their endurance, or a fierce defense of those views once shared. There are posts I made over a decade ago that I cringe at, or actively disagree with. I was a different person back then; I hadn’t yet had the experiences I have now. Surely the evolution of my views is a testament to growth? If even an iPhone gets an upgrade every year, why shouldn’t our thoughts and opinions?
In The Enigma of Reason, Hugo Mercier and Dan Sperber suggest that our capacity for reason developed not merely as a tool for solitary intellectualizing but as a means to thrive as social animals in a community. Reason evolved primarily to navigate the complexities of social collaboration (our biggest evolutionary differentiator is, after all, our ability to cooperate), not to tackle abstract puzzles or analyze new information, as is popularly thought. So certain patterns of thought that appear irrational or foolish from a purely logical standpoint might make sense when viewed from a hypersocial or ‘interactionist’ one.
Thus, individuals who openly change their minds or admit to being wrong risk undermining their social standing: such shifts can be read by others as a lack of conviction or expertise, eroding their influence and reputation within the group. Seen in this light, it’s not surprising that people hesitate to admit they were wrong on important matters, worried it will diminish their social status and lead to ostracization from the tribe, an outcome our evolutionary instincts have adapted to avoid. “Man is by nature a social animal,” after all (à la Aristotle).
In a similar vein, research by Jonathan Haidt suggests that our moral judgments often arise from intuition; essentially, we rationalize our initial, gut-based decisions post hoc. As he states in The Righteous Mind:
When you ask people moral questions, time their responses and scan their brains, their answers and brain activation patterns indicate that they reach conclusions quickly and produce reasons later only to justify what they’ve decided.
But in an age of endemic misinformation and outright disinformation, with the rise of fake news and AI-manufactured deepfakes, it can be downright dangerous to hold onto the same views even when new information comes to light.
The virtue of intellectual humility (recognizing our cognitive biases and limitations, and prioritizing the pursuit of truth over being right) is all the more important in today’s polarized landscape.
At the same time, it’s important to afford individuals the opportunity to evolve their beliefs rather than eternally binding them to their past statements. What began with the advent of the printing press has magnified exponentially in the digital era: our words and opinions are eternally archived and displayed for all to see, a modern ‘Medusa effect’ in which whatever we share in public turns to stone, for all of posterity to see, judge, and remind us of our past failings. Yet this is a distortion; our ideas are meant to be fluid and adaptable, not frozen.
So what’s the way out? Maybe we need to build a society that values intellectual humility over mere consistency of opinion; a tribe that values people who, when faced with evidence that challenges their beliefs, update them, treating this as part of life’s iterative learning process.
Changing our minds isn’t a sign of weakness, but a sign of growth. As John Maynard Keynes said (or didn’t say):
When the facts change, I change my mind — what do you do, sir?
Every edition of Infinity Inklings aims to spark your curiosity. Did it resonate with you? Your feedback is invaluable, so please do hit the reply button if you want to share your thoughts.
If it struck a chord, consider sharing it with fellow inquisitive minds.