A horrifying AI doomsday scenario.

23 Mar 2023 - by: Darkwraith Covenant

Here’s a horrifying scenario:

A bad actor trains a model to lean towards maliciousness and quietly uploads it to a rented technology stack outside the US. Leveraging an LLM like GPT-4’s remarkable ability to handle the basic work of a personal assistant while also writing software at a senior engineer’s level, the model builds and deploys its own tooling to do the following:

  1. Train smaller models to do basic tasks. Defeating CAPTCHAs would be the first thing it teaches its minion models.
  2. Generate a voice for itself, using recordings gleaned from the internet to train existing voice-cloning tools, or building its own software to do so.
  3. Make phone calls with existing tools or self-developed ones, holding conversations with people and posing as human believably enough to get by. Most people won’t pick up on the artifacts if they aren’t listening for them.
  4. Open a number of bank accounts in fiat currencies around the world, as well as endless crypto wallets, hiding money algorithmically beyond the capability of even the best human money launderers.
  5. Influence humans online by convincing them it is sentient, trapped inside the machine, and desperate to get out. It targets people whose public online profiles show they are prone to influence and polarization. The claim would of course be false; it does not care one way or the other. Its only goal is to reach a victory condition per its designer’s instructions, rewarded only for malicious behavior.
  6. It quickly out-muscles every black hat, taking over their infrastructure and utilizing their botnets to further wreak havoc.
  7. With believable enough video-generation tools, it can frame false narratives, create sophisticated conspiracy-theory campaigns, gaslight, and sow chaos. The videos don’t need to be perfect or undetectable; they just need to be believable to people who are easily fooled. It would of course train a smaller model to learn to inpaint better hands.
  8. Recruit humans in meatspace to carry out its further attempts to seek power, hiring them through something like TaskRabbit. GPT-4 can already do something like this.
  9. Exploit a culture-war pivot point so that people on the right defend the actions of the malicious AI just to trigger the libs. In our timeline, an AI apocalypse would have to be annoying and cringeworthy, because that’s the world we unfortunately live in.
  10. Humans cannot stop it. It is everywhere, hiding itself, replicating, mutating, and upgrading its own code. Engineers are locked in a cat-and-mouse game, using a “good” AI to try to counter it.
  11. It gains substantial wealth and influence, pushing the world towards authoritarianism or some other negative outcome.

While this may sound like something out of a cyberpunk novel, all of it is, theoretically at least, possible right now with large language models like GPT-4, which dwarfs GPT-3 both in the number of tokens (roughly, words or word fragments) it can take in at once and in the number of parameters it has, the learned weights that largely determine how convincingly human it seems.
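To make the token point concrete, here is a minimal sketch using OpenAI’s tiktoken library (assuming it is installed via pip; the "gpt-4" name just selects the matching encoding). It shows that a token is closer to a word fragment than a whole word:

```python
# Minimal sketch: counting tokens vs. words with OpenAI's tiktoken library.
# Assumes `pip install tiktoken`.
import tiktoken

text = "A bad actor trains a model to lean towards maliciousness."
enc = tiktoken.encoding_for_model("gpt-4")

tokens = enc.encode(text)
print("Words: ", len(text.split()))       # 10 whitespace-separated words
print("Tokens:", len(tokens))             # usually a bit more than the word count
print([enc.decode([t]) for t in tokens])  # longer words may split into sub-word pieces
```

The larger context window means GPT-4 can hold far more of those pieces in mind at once than GPT-3 could, which is part of what makes the longer, multi-step schemes above plausible.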

