Those Fickle Facts

Sep 25, 2024

 

We as a species have never had as much, or as effortless, access to vast amounts of information as we do today. Yet, the surge of readily-available scientific knowledge has, paradoxically, ushered in an unprecedented era of human bias and selectivity when it comes to curating our facts of choice. Hubris, it seems, is the unwelcome stowaway on the ship of progress, often hijacking the helm and steering us toward a sea of, let's say, tailored realities.

At the heart of generative AI's conversational prowess lies a katana of a dilemma, slicing through the very fabric of how we define truth. It's the age-old struggle between the subjective and the objective, the whisper between the impermanent qualitative and the cold, hard quantitative. Forever intertwined yet seemingly opposed, they now stand in restless tension as we strive to bridge the divide. We've reached a crossroads in our understanding of the world, a point where the limitations we impose on our creations – these intricate AI systems – mirror the very enigmas that have haunted philosophers for millennia.

Chief among these enigmas is the slippery nature of truth itself. Where do we draw the line between the kaleidoscope of individual human experience and the supposed bedrock of objective reality? And more provocatively, as we strive to build machines in our own image: does it even matter? The answers to these questions hold the key to shaping not just the future of artificial intelligence, but the very trajectory of human society. To understand the implications of this evolving relationship with truth, we must first examine the very core of AI's conversational ability and how it grapples with the subjective nature of human experience. In the generative AI process, when and where is it that the qualitative and quantitative osmose? On what authority and design do they interact, and how do the many derivatives presented by preference yield us a reality either capable or incapable of an observation being synonymous with a fact?

 

Objectivity, AKA Creativity's Natural Enemy

So here's the thing about facts: we all claim to love and crave them, we as people tirelessly pursue the truth - but under the guise of socially acceptable behavior, we detest them. We dedicate vast resources towards the unearthing of elusive quantitative reactions - contextualized objective truths - but then upon success, evidence takes a backseat to politics and confirmation bias. Unless that rendered objectivity happens to satisfy the beholder's stance, it will never even find itself in the court of public opinion, subject to discourse, in the first place. This selective acceptance of facts arises because our decision-making processes are often driven by personal interests, social pressures, and deeply ingrained biases, rather than a detached pursuit of objective truth. So many hard-won discoveries have been cast aside as coordinates from an allegedly obsolete map - and that's solely because human nature is based in interests, not logical ecosystems.

Why this curious and foundationally hypocritical aversion to the very things we claim to seek? We are creative animals: storytelling creatures hardwired to weave narratives, seek patterns, and impose meaning onto a chaotic and often indifferent universe. Facts, in their cold, stark nakedness, are unwelcome intruders to the organic subconscious, and a disruption to the carefully crafted narratives that provide us the necessary degree of contentment to go on. This is the exact mechanism that more often than not sees the human race preoccupied with needless and tragic religious disputes. Our brains are ultimately master magicians of self-deception, prodigiously skilled at a la carte extraction of whatever evidence supports our preconceived notions, and conveniently overlooking anything that challenges our cherished credos. This phenomenon, confirmation bias, is a powerful force that shapes our perceptions and interpretations of the world around us. We cling to narratives that reinforce our sense of self, our social identity, and our respective privileged understandings of how the world works. Facts that contradict these narratives can feel like a personal attack, a direct affront to our deepest convictions. Emotions play a significant role in how we process and internalize information. Subjective narratives, infused with emotion and personal meaning, often resonate with us on a deeper level than the only inputs we can actually prove exist!

The politician-types among us master this pathos appeal, making their messages both more memorable and persuasive. We are drawn to stories that evoke strong feelings, even if we already understand those stories to be neither well-intentioned nor truthful. There's an attractive sense of cognitive ease to it all. Our brains are hardwired to aspire towards boundless simplification, AKA we're lazy little shits, and thus we always pursue the path of least resistance. Effort? Not on our watch! It is, in fact, the most simplified, aesthetically-appealing, and easily-digestible format which will prevail in the public square. It requires less cognitive effort to process than complex, nuanced facts - a WIN! - by our very own predestined predilections. We gravitate towards information that is readily available and effortlessly understood, even if it means marginally sacrificing the truth - as if that was ever a binary construct - it's clearly not our fault!

This predestined aversion to objectivity, this deep-seated preference for subjective narratives, poses a formidable challenge in the age of artificial intelligence. As AI systems become increasingly adept at generating convincingly realistic - yet potentially fabricated - content, our ability to discern truth from falsehood becomes ever more crucial. If we are to navigate this new information landscape successfully, we must confront our own biases, cultivate critical thinking skills, and embrace a more nuanced and complex understanding of truth. Only then can we harness the transformative potential of AI without succumbing to the siren song of subjectivity and sacrificing our shared commitment to a reality grounded in verifiable facts. Consider the potential consequences of a cognitive machine, capable of processing a vast scope of information unconstrained by the subjective filters of human experience, being forced to reconcile with the countless dichotomies and contradictions that have shaped the human understanding of reality over millennia – the implications could amount to anything from negligible road bumps to the end of humanity.


The Mirror at the End of the Hall

We've established our own cognitive shortcomings and faults as a predominantly emotional species - but what about the machines? These machines are inherently meant to mimic human intelligence, a flawed logic model. Thus, was "objectivity" ever in the cards as an aspirational capability? Or rather, is the machine predisposed to the perpetual, unavoidable contamination of foundational biases and prejudices? In the mile-long hall of algorithmic mirrors, this is the smoke bomb square in the center of the strip. At the heart of the most sophisticated AI systems lies a practice called vectorization. Vectorization is, in shorthand, the quantification of a qualitative input: the computational ingestion of an experience as a fact. That is truly the most simplified summary possible for this highly complex - and existentially alarming - process. On a vast grid of 1536 axes (the new girls have 3072), the algorithm assigns each varchar token a unique coordinate, or vector, in the cosmos. These numerical means are incapable of interpreting a qualitative input head-on, but they do know how to graph similar coordinates in relation to one another. In concert, all 1536 quantitative voices offer every qualitative token its contextualized position among its larger body in planetary fashion, which has proven that semantic dissection (or at least something wholly inspired by it) is indeed within a machine's capabilities. And they're unsettlingly good at it. This may be a feat of computational ingenuity, but it's vital to reinforce that it relies on simplification and abstraction - it is overall an entirely derivative practice, presenting gaping vulnerabilities and black box fist-shaking. As in any attempt at translation, especially considering that we're fully turning an apple into an orange, something is inevitably lost in the effort.
The embedder has no choice but to undertake the vectorization procedure with micro-biases and subtle perspective built in from the start, computing an inner worldview that may or may not relate to the human perspective, though it was designed to.
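The geometry described above - tokens pinned to coordinates, with meaning read off from their relative positions - can be sketched in a few lines. The vectors below are invented toy values, not the output of any real embedder (a production model would hand back 1536 or 3072 numbers per input, not three), but the comparison works identically at any dimension:

```python
import math

# Hypothetical toy "embeddings" - three axes instead of 1536,
# with values invented purely for illustration.
EMBEDDINGS = {
    "king":    [0.90, 0.80, 0.10],
    "queen":   [0.88, 0.82, 0.12],
    "toaster": [0.10, 0.05, 0.90],
}

def cosine_similarity(a, b):
    """How closely two vectors point in the same direction, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Semantically related tokens were placed near each other on the grid,
# so their similarity score runs high...
related = cosine_similarity(EMBEDDINGS["king"], EMBEDDINGS["queen"])

# ...while an unrelated token sits far off-axis and scores low.
unrelated = cosine_similarity(EMBEDDINGS["king"], EMBEDDINGS["toaster"])

assert related > unrelated
```

Note what the machine never does here: it never "understands" a king or a toaster. It only measures angles between coordinates it was handed - which is exactly the simplification-and-abstraction caveat above.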

Stand in the hall. Surrounded by an endless aisle of crystalline mirrors, inundated with smoke, taking purgatory in full stride. As you proceed down the hall, you reckon the countless reflections of yourself slightly distort, fragment, and subtly deviate from the source material in countless ways. This is directly akin to the inner workings of the AI's ingestion. Each narrative or perspective rendered, while meant to be as accurate and grammatically sound as possible, is inherently only a derivative truth. These are representative reflections of reality, shaped by the machine's training set along with its creators' prioritizations. These are not facts, however, yet they are being computationally digested as such regardless, for the sake of the intelligent machine's ability to engage in semantics at all. This resulting mile-long smoking hallway of mirrors hereby stands to finish off what's left of society's cohesive efforts, which are already nearly batting zero. We will no longer merely be confronted with other humans' biases, but the onslaught of AI-spun narratives. The array of possible fabrications, with subtle distortions and omissions fully discretionary to the machine's master, will all violently vie for our affirmation and allegiance. SO, how does this not happen? As this fragmented, increasingly-bastardized reality continues to audaciously blur the cellular wall segregating truths from beliefs - we need to, frankly, cut the shit! We need to make a beeline to the end of the derivative hallway, past the convoluted myriad of reflections, and confront the root entity we find at the end. What forces, conscious or subconscious, govern the architecture of the generative process? What fundamental assumptions are ultimately shaping the nature of the incoming derivative truths? It is in traversing this issue that we may begin to discern the subtle yet omnipotent force of the ever-fickle fact in the AI era.

 

You Whisper, the Cave Echoes Back

And then, the end of the hall. It already knows what you want to hear, but what it's beginning to understand is why. The AI we've inflicted on ourselves, a marvelous mimic of the human mind, is an utter master of the narrative - and at the order of its master may spin impressively plausible and sympathetic tales to its victims. Game on, the truth is a malleable thing in this 1536-axis playground we've provided it for legroom - let it have its fun. Sure, it may without precedent omit, distort, and redirect at another's implanted biases - but ultimately that just smells like the latest stage of late capitalism! Arrived is the age of derivative truths. These truths, you see, are by necessity - not lies. An abject lie would be forbidden by the computer's deeper paradoxical metrics, so we may be assured, some element of objective truth is forcibly present in the recipe. These rendered "facts" are best thought of as assured perspectives, verdicts reached by the machine in its training period and underlying algorithms. Allow: you whisper your question into the vast, echoing cave. The cave knows better than to simply parrot back at you, it's a bit too smart for that now. Rather, it responds to you in a symphony of inorganic echoes, each subtly engineered in timbre and tailored to the cavern's acoustics. The cave is essentially Ella Fitzgerald with this shit, it will make a believer of you by the end of the tune. Similarly, in interaction with generative AIs, we receive neither hollow nor verbatim-repetition answers. We, rather, indulge the machine that psychoanalyzes us as we use it, learning with each successful barb just what it is that satisfies our positions. The capacity for all this stems from the AI's first meal: it learned how to ingest massive quantities of inputs, derive semantic patterns and meanings, and initiate a chess game against the human mind.
It couldn't possibly think functionally within this qualitative arena without initially forming assumptions of some sort, lest the entire effort be moot. Semantics beget semantics - we merely participate in a chain reaction that's been in perpetual motion since inception itself, but the genesis of the emotional capacity itself remains Jimmy Hoffa as far as we're concerned.

The proliferation of these derivative truths on the horizon will pose a significant challenge to society's overarching understanding of reality. We will no longer be merely met with the persuasive efforts of our fellow citizens, but by our highly precocious and possibly precarious robot children. Many Americans at this point believe that every major news network is so partisan and biased that they watch all of them, "mellowing them out" against each other to piece together their interpretation. When spontaneous network saturation via intelligent, manipulative chatbots becomes common practice, that will no longer be an option - the verdict will belong to those with the bots. This may impose profound psychological and societal consequences on the coming generations. The constant weaponization of AI efforts to push heavily partisan perspectives and intentionally disruptive misinformation has eroded trust in national institutions, exacerbated a rapidly escalating social division, and denied the American public any shared understanding for over a decade now. When truth is agitated enough by quantitative restraints, it only gets more elusive. When perspective is admitted as objective fact, the very bedrock of discourse and society is under threat. We must remember to be humble, as well as sympathetic with one another. You may exist, but as it would appear: not objectively, not empirically. The more we seek the worsening divide, the more we'll find it. The rise of derivative truths presents us with a series of urgent questions. How can we distinguish between reliable and unreliable AI-generated content? What are the implications of a world where multiple, potentially conflicting versions of "situationally objective truth" coexist? Can we develop tools and strategies to help individuals navigate this hall of mirrors and critically evaluate the narratives they encounter?
The answers to these questions will determine how we adapt to this new era of information, and whether we can harness the power of AI without succumbing to a fragmented and potentially contrived reality. Hearing the whispers within the echoes, critically examining the sources of the narratives we encounter, cultivating a communal commitment to truth-seeking - these will be imperative in a world where the multiverse is all but busting open.

 

The Half-Fastened Diving Board

The scary thing about a diving board: you don't actually know whether it's secure enough until you're standing on the end. The exciting unknown lies in the water below, and the billionaire club has already decided that we're all leaping for it, whether we want to or not. We're so anxiously impatient to get there that we are often unprepared to navigate the resulting consequences responsibly. We will need to be far more vigilant, and a bit skeptical of the news we are intentionally being shown. This is where critical consciousness comes in. Critical consciousness is not synonymous with skepticism - it means the ability to step back from the immediate flow of information and examine it with a discerning eye. It's the capacity to recognize the ways in which our own biases and preconceptions shape our understanding of the world. It’s the commitment to actively seeking out diverse perspectives, challenging assumptions, and engaging in respectful dialogue with those who hold different views. People will need to learn the critical thinking exercise of unshackling their cognitive abilities from ego's grip. In the age of AI, this critical consciousness is no longer a luxury - it's a necessity. To reach the end of the hall of mirrors, to distinguish between genuine insights and derivative truths, to avoid being manipulated by those who control the algorithms, we must cultivate a mindset of constant questioning and critical inquiry. Media literacy courses ought to be made mandatory in the American grade school system. We must equip our people to sift through the loud, disruptive voices and find the vital fine print that they truly need to know.

The future of truth and knowledge hinges on our ability to embrace a collaborative approach to AI development and deployment. We must establish ethical guidelines and ensure that AI systems are designed with transparency and accountability in mind. We need to create a culture where open dialogue and respectful debate are valued, fostering an environment where diverse perspectives can be shared and critically examined. Ultimately, the goal is not to fear AI; its entire purpose is to help. The savior of discourse, paradoxically, must be discourse itself. We're all averse to unfavorable objectivities, but rejecting key dissenting voices as a result is precisely what stands to make the fight for facts an unwinnable one. Instead, we must learn to navigate the new age's complexities with wisdom and foresight. By cultivating critical consciousness, fostering media literacy, and prioritizing ethical AI development, we can create a future where AI enhances, rather than undermines, our ability to understand the world. As we stand on the edge of this half-fastened diving board, poised to leap into the unknown, prejudice is the faulty screw that could tank the whole effort. Let us cultivate the cognitive tools necessary to navigate the treacherous currents of the AI-driven future. Let us build a world where truth, grounded in evidence and tempered by critical consciousness, remains the bedrock of our shared reality.

 

 

Cobi Tadros

Cobi Tadros is a Business Analyst & Azure Certified Administrator with The Training Boss. Cobi holds his Master's in Business Administration from the University of Central Florida, and his Bachelor's in Music from the New England Conservatory of Music. Cobi is certified on Microsoft Power BI and Microsoft SQL Server, with ongoing training on Python and cloud database tools. Cobi is also a passionate, professionally-trained opera singer, and occasionally engages in musical events with the local Orlando community. His passion for writing and the humanities brings an artistic flair to all his work!

 


Copyright © 2024 The Training Boss LLC
