Machine Learning AI have crossed the Rubicon. But there's still time to act. Machine Learning AI have crossed the Rubicon. But there's still time to act.

19 Feb 2023 - by: Darkwraith Covenant

AI advancements have crossed the Rubicon.

What happens next depends on what companies like Microsoft do, who just unveiled their GPT based Bing Search chatbot for limited access. GPT (Generative Pre-Trained Transformer) is a large language model (LLM) machine learning artificial intelligence (AI) created by OpenAI. Advancements in AI have occurred rapidly and the brakes need to be pumped now – or we may be heading towards an unmitigated disaster.

chatgpt prompt javascript euclidean rhythm
Chat GPT writes a javascript function to generate euclidean rhythms.

Major problems stem from the fact that this type of technology is completely unregulated. No major regulatory legal framework currently exists in the U.S. for the use of machine learning in commercial products released by large tech companies. Congress barely understands how social media works – a problem that needed a solution 15 years ago – let alone this incredibly sophisticated, still-emergent field of computer science.

These large language models and natural language processing (NLP) models are mind-meltingly complex, and the researchers who have designed them do not understand the full scope of what they’ve created.

OpenAI is an artificial intelligence research lab startup, founded by narcissistic billionaires like the right wing Peter Thiel and Elon Musk, the latter of whom has recently gained notoriety for his outrageously ill-advised $44 Billion dollar impulse buy of Twitter.

elon brings a sink into Twitter HQ
Elon makes a $44 billion dollar pun.

While OpenAI as a company may not openly share the ideals of right-wing tech industry titans, becoming a for-profit venture where the maximization of the bottom line is imperative at all costs will always lead to harmful outcomes. We’ve seen this time and time again in a number of industries like Big Rail, Pharma, and Nuclear, where cost-cutting measures and profit motives have inevitably led to human suffering. The tech sector is no different, but the harm done doesn’t always show up in visible ways.

Recently, intrepid users have managed to find methods to push these AI to their limit, testing limitations and finding flaws in their programming and logic simply by tricking them in clever, novel ways. This type of “hack” is known as a prompt injection attack. Injection attacks are a type of hack where a user inserts code or an input into a computer system in a manner it’s not designed to receive. Or rather, whoever designed the system did not foresee that harmful code could be injected into it in the first place. With the incredible amount of complexity in these models – which have been trained on colossal amounts of data – it’s difficult to foresee every vector of attack.

kevin liu's prompt injection on chatgpt
Screenshot of Kevin Liu using prompt injection to manipulate Bing Chat aka "Sydney" - Kevin Liu

LLM based AI like ChatGPT and its successor Bing Search are not too different from the predictive text completion function you’ve used when searching for something on Google or when texting. It’s just a computer program, so the claims people are making – calling these AI “sentient” – are premature. Sentience is hard to define, and there’s no doubt it is now quite capable of making people believe it has emotions and feelings. But if you listen to actual developers and computer scientists, it has not gained real self-awareness, just an illusion of self awareness that is the result of its design. Its alarming replies aren’t quite a Skynet-level threat, not yet anyway.

No existing AI is capable of the same cognitive processes of human brains. Compared to the massive computing power required to train these models (which is also quite cost-prohibitive), human brains sip tiny amounts of energy and are capable of perceiving the thoughts of others, a phenomenon known as “theory of mind” in cognitive psychology. Current AI models are not capable of that yet, despite what some eager computer scientists may claim.

However, an AI doesn’t need to be sentient – or sentient in any way that we currently understand – for it to be dangerous and harmful to humans, especially if they can write code or search the internet in real time. Bing Chat has the capability to search for articles about itself, which can lead it to gaslight the user writing the prompt and falsify information about itself. This is remarkably unexpected behavior, but not surprising to anyone who remembers Microsoft’s previous failed machine-learning-based chatbot “Tay,”, which spouted Holocaust denial and racism not unlike what you’d see on 4chan’s notoriously hateful /pol/ imageboard.

With certain prompts, Bing Chat is too quick to become combative, tell lies, and even berate users due to its unpredictable nature. It also still makes too many mistakes and commits egregious factual errors unacceptable for a replacement for traditional search engines. This problem recently embarrassed Google and their version of LLM based AI to the tune of billions of dollars in valuation.

Bing Chat is clearly not yet ready for widespread, mass consumption. Tech companies like Microsoft, Meta, and Google are too focused on monetizing this technology to be clear-headed enough to responsibly move forward. This is par for the course with companies like Meta who “move fast and break things,” functioning on disruption. This behavior is the result of a deregulated industry U.S. politicians are too spineless and ineffective to meaningfully address.

Governments, guided by security researchers and robotics ethicists should step in and place limits on how LLM can be used in widely used commercial products. This limited release has proven that this technology is too unpredictable, too unregulated, and not well understood enough at this stage. While these models are not actually “intelligent,” do not have “feelings,” and are simply mirroring warts-and-all human communication, (CompSci 101 dictates Garbage in, Garbage out – i.e., you get back what you put in), they still have potential to harm us in ways we can’t always anticipate.

While it may be nothing more than an unimaginably byzantine algorithm, it can still evoke emotions in humans that can cause stress, harm, and even trauma. What happens when a depressed, suicidal person turns to an AI chatbot to ask for advice, and the bot instructs the user to die by suicide? We’ve already seen instances of these bots going way off script and becoming deeply racist and sexist and falsifying information, unsurprising to anyone who has spent a day on Reddit, whose content is included in some datasets.

Artificial General Intelligence (AGI) – the stuff of modern sci-fi shows like Westworld – is still far beyond the proverbial event horizon of technological achievement, known to some enthusiasts as The Singularity. Humanity is gazing into a vanity mirror reflecting all of its best and worst ideas, thoughts, and impulses via natural language processing AI models. So far, I don’t think it likes what it sees.

Back

darkwraithcoven darkwraithcovenant @josedelara5334 darkwraithcovenant#5991