Crypto collapse? Get in loser, we’re pivoting to AI

By Amy Castor and David Gerard

“Current AI feels like something out of a Philip K Dick story because it answers a question very few people were asking: What if a computer was stupid?” — Maple Cocaine

Half of crypto has been pivoting to AI. Crypto’s pretty quiet — so let’s give it a try ourselves!

Turns out it’s the same grift. And frequently the same grifters.


AI is the new NFT

“Artificial intelligence” has always been a science fiction dream. It’s the promise of your plastic pal who’s fun to be with — especially when he’s your unpaid employee. That’s the hype to lure in the money men, and that’s what we’re seeing play out now.

There is no such thing as “artificial intelligence.” Since the term was coined in the 1950s, it has never referred to any particular technology. We can talk about specific technologies, like General Problem Solver, perceptrons, ELIZA, Lisp machines, expert systems, Cyc, The Last One, Fifth Generation, Siri, Facebook M, Full Self-Driving, Google Translate, generative adversarial networks, transformers, or large language models — but these have nothing to do with each other except the marketing banner “AI.” A bit like “Web3.”

Much like crypto, AI has gone through booms and busts, with periods of great enthusiasm followed by AI winters whenever a particular tech hype fails to work out.

The current AI hype is due to a boom in machine learning — when you train an algorithm on huge datasets so that it works out rules for the dataset itself, as opposed to the old days when rules had to be hand-coded.

ChatGPT, a chatbot developed by Sam Altman’s OpenAI and released in November 2022, is a stupendously scaled-up autocomplete. Really, that’s all that it is. ChatGPT can’t think as a human can. It just spews out word combinations based on vast quantities of training text — all used without the authors’ permission.

The other popular hype right now is AI art generators. Artists widely object to AI art because VC-funded companies are stealing their art and chopping it up for sale without paying the original creators. Not paying creators is the only reason the VCs are funding AI art.

Do AI art and ChatGPT output qualify as art? Can they be used for art? Sure, anything can be used for art. But that’s not a substantive question. The important questions are who’s getting paid, who’s getting ripped off, and who’s just running a grift.

You’ll be delighted to hear that blockchain is out and AI is in.

It’s not clear if the VCs actually buy their own pitch for ChatGPT’s spicy autocomplete as the harbinger of the robot apocalypse. Though if you replaced VC Twitter with ChatGPT, you would see a significant increase in quality.

I want to believe

The tech itself is interesting and does things. ChatGPT or AI art generators wouldn’t be causing the problems they are if they didn’t generate plausible text and plausible images.

ChatGPT makes up text that statistically follows from the previous text, with memory over the conversation. The system has no idea of truth or falsity — it’s just making up something that’s structurally plausible.
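
To make the “scaled-up autocomplete” point concrete, here’s a minimal sketch of the same idea at toy scale — a bigram model in Python rather than a transformer. The corpus and function names are our own illustration, nothing to do with OpenAI’s actual system; the point is that the next word is chosen purely by what tended to follow the previous word in the training text, with no notion anywhere of whether the output is true.

```python
# Toy "autocomplete": pick the next word purely by how often it followed the
# previous word in a tiny made-up training text. Illustrative only -- real LLMs
# use transformers over billions of tokens, but the principle is the same:
# statistically plausible continuation, no concept of truth or falsity.
import random
from collections import defaultdict

corpus = (
    "the moon is made of rock . the moon is made of cheese . "
    "the cheese is tasty . the rock is grey ."
).split()

# Count which words follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample by observed frequency
        out.append(word)
    return " ".join(out)

print(generate("the"))
# e.g. "the moon is made of cheese . the rock" -- fluent-looking, possibly false.
```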

Users speak of ChatGPT as “hallucinating” wrong answers — large language models make stuff up and present it as fact when they don’t know the answer. But any answers that happen to be correct were “hallucinated” in the same way.

If ChatGPT has plagiarized good sources, the constructed text may be factually accurate. But ChatGPT is absolutely not a search engine or a trustworthy summarization tool — despite the claims of its promoters.

ChatGPT certainly can’t replace human thinking. Yet people project sentient qualities onto ChatGPT and feel like they are conducting meaningful conversations with another person. When they realize that’s a foolish claim, they insist that real machine thinking is surely coming soon!

People’s susceptibility to anthropomorphizing an even slightly convincing computer program has been known since ELIZA, one of the first chatbots, in 1966. It’s called the ELIZA effect.

As Joseph Weizenbaum, ELIZA’s author, put it: “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
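
For a sense of just how simple ELIZA was, here’s a minimal sketch of the pattern-and-reflection trick in Python. The rules below are our own illustrative guesses at the style of the original DOCTOR script, not Weizenbaum’s actual rules — but this is the whole trick: match a keyword, echo the user’s own words back as a question.

```python
# Minimal ELIZA-style sketch: match a keyword pattern, reflect the user's own
# words back as a question. No understanding anywhere -- just string substitution.
# These rules are illustrative, not Weizenbaum's original DOCTOR script.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."

print(respond("I feel like my computer understands me"))
# -> "Why do you feel like your computer understands you?"
```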

Better chatbots only amplify the ELIZA effect. When things do go wrong, the results can be disastrous:

  • A professor at Texas A&M worried that his students were using ChatGPT to write their essays. He asked ChatGPT if it had generated the essays! It said it might have. The professor gave the students a mark of zero. The students protested vociferously, producing evidence that they had written their essays themselves. One even asked ChatGPT about the professor’s Ph.D. thesis, and it said it might have written that too. The university has reversed the grading. [Reddit; Rolling Stone]
  • Not one but two lawyers thought they could blindly trust ChatGPT to write their briefs. The program made up citations and precedents that didn’t exist. Judge Kevin Castel of the Southern District of New York — who those following crypto will know well for his impatience with nonsense — has required the lawyers to show cause not to be sanctioned into the sun. These were lawyers of several decades’ experience. [New York Times; order to show cause, PDF]
  • GitHub Copilot synthesizes computer program fragments with an OpenAI program similar to ChatGPT, based on the gigabytes of code stored in GitHub. The generated code frequently works! And it has serious copyright issues — Copilot can easily be induced to spit out straight-up copies of its source materials, and GitHub is currently being sued over this massive license violation. [Register; case docket]
  • Copilot is also a good way to write a pile of security holes. [arXiv, PDF, 2021; Invicti, 2022]
  • Text and image generators are increasingly used to make fake news. This doesn’t even have to be very good — just good enough. Deepfake hoaxes have been a perennial problem, most recently a faked image of an explosion at the Pentagon, tweeted by an $8 blue check account pretending to be Bloomberg News. [Fortune]

This is the same risk in AI as the big risk in cryptocurrency: human gullibility in the face of lying grifters and their enablers in the press.

But you’re just ignoring how AI might end humanity!

The idea that AI will take over the world and turn us all into paperclips is not impossible!

It’s just that our technology is not within a million miles of that. Mashing the autocomplete button isn’t going to destroy humanity.

All of the AI doom scenarios are literally straight out of science fiction, usually from allegories of slave revolts that use the word “robot” instead. This subgenre goes back to Rossum’s Universal Robots (1920) and arguably back to Frankenstein (1818).

The warnings of AI doom originate with LessWrong’s Eliezer Yudkowsky, a man whose sole achievements in life are charity fundraising — getting Peter Thiel to fund his Machine Intelligence Research Institute (MIRI), a research institute that does almost no research — and finishing a popular Harry Potter fanfiction novel. Yudkowsky has literally no other qualifications or experience.

Yudkowsky believes there is no greater threat to humanity than a rogue AI taking over the world and treating humans as mere speedbumps. He believes this apocalypse is imminent. The only hope is to give MIRI all the money you have. This is also the most effective possible altruism.

Yudkowsky has also suggested, in an op-ed in Time, that we should conduct air strikes on data centers in foreign countries that run unregulated AI models. Not that he advocates violence, you understand. [Time; Twitter, archive]

During one recent “AI Safety” workshop, LessWrong AI doomers came up with ideas such as: “Strategy: start building bombs from your cabin in Montana and mail them to OpenAI and DeepMind lol.” In Minecraft, we presume. [Twitter]

We need to stress that Yudkowsky himself is not a charlatan — he is completely sincere. He means every word he says. This may be scarier.

Remember that cryptocurrency and AI doom are already close friends — Sam Bankman-Fried and Caroline Ellison of FTX/Alameda are true believers, as are Vitalik Buterin and many Ethereum people.

But what about the AI drone that killed its operator, huh?

Thursday’s big news story came from a talk by Colonel Tucker “Cinco” Hamilton, the US Air Force’s chief of AI test and operations, at the Royal Aeronautical Society Future Combat Air & Space Capabilities Summit in late May: [RAeS]

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission — killing SAMs — and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Wow, this is pretty serious stuff! Except that it obviously doesn’t make any sense. Why would you program your AI that way in the first place?

The press was fully primed by Yudkowsky’s AI doom op-ed in Time in March. They went wild with the killer drone story because there’s nothing like a sci-fi doomsday tale. Vice even ran the headline “AI-Controlled Drone Goes Rogue, Kills Human Operator in USAF Simulated Test.” [Vice, archive of 20:13 UTC June 1]

But it turns out that none of this ever happened. Vice added three corrections, the second noting that “the Air Force denied it conducted a simulation in which an AI drone killed its operators.” Vice has now updated the headline as well. [Vice, archive of 09:13 UTC June 3]

Yudkowsky went off about the scenario he had warned of suddenly playing out. Edouard Harris, another “AI safety” guy, clarified for Yudkowsky that this was just a hypothetical planning scenario and not an actual simulation: [Twitter, archive]

This particular example was a constructed scenario rather than a rules-based simulation … Source: know the team that supplied the scenario … Meaning an entire, prepared story as opposed to an actual simulation. No ML models were trained, etc.

The RAeS has also added a clarification to the original blog post: the colonel was describing a thought experiment as if the team had done the actual test.

The whole thing was just fiction. But it sure captured the imagination.

The lucrative business of making things worse

The real threat of AI is the bozos promoting AI doom who want to use it as an excuse to ignore real-world problems — like the risk of climate change to humanity — and to make money by destroying labor conditions and making products worse. This is because they’re running a grift.

Anil Dash observes (over on Bluesky, where we can’t link it yet) that venture capital’s playbook for AI is the same one it tried with crypto and Web3 and first used for Uber and Airbnb: break the laws as hard as possible, then build new laws around their exploitation.

The VCs’ actual use case for AI is treating workers badly.

The Writers Guild of America, a labor union representing writers for TV and film in the US, is on strike for better pay and conditions. One of the reasons is that studio executives are using the threat of AI against them. Writers think the plan is to get a chatbot to generate a low-quality script, which the writers are then paid less, under worse conditions, to fix. [Guardian]

Executives at the National Eating Disorders Association replaced hotline workers with a chatbot four days after the workers unionized. “This is about union busting, plain and simple,” said one helpline associate. The bot then gave wrong and damaging advice to users of the service: “Every single thing Tessa suggested were things that led to the development of my eating disorder.” The service has backtracked on using the chatbot. [Vice; Labor Notes; Vice; Daily Dot]

Digital blackface: instead of actually hiring black models, Levi’s thought it would be a great idea to take white models and alter the images to look like black people. Levi’s claimed it would increase diversity if they faked the diversity. One agency tried using AI to synthesize a suitably stereotypical “Black voice” instead of hiring an actual black voice actor. [Business Insider, archive]

Genius at work

Sam Altman: My potions are too powerful for you, Senator

Sam Altman, 38, is a venture capitalist and the CEO of OpenAI, the company behind ChatGPT. The media loves to tout Altman as a boy genius. He learned to code at age eight!

Altman’s blog post “Moore’s Law for Everything” elaborates on Yudkowsky’s ideas on runaway self-improving AI. The original Moore’s Law (1965) predicted that the number of transistors that engineers could fit into a chip would double every year. Altman’s theory is that if we just make the systems we have now bigger with more data, they’ll reach human-level AI, or artificial general intelligence (AGI). [blog post]

But that’s just ridiculous. Moore’s Law is slowing down badly, and there’s no actual reason to think that feeding your autocomplete more data will make it start thinking like a person. It might do better approximations of a sequence of words, but the current round of systems marketed as “AI” are still at the extremely unreliable chatbot level.

Altman is also a doomsday prepper. He has bragged about having “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to” in the event of super-contagious viruses, nuclear war, or AI “that attacks us.” [New Yorker, 2016]

Altman told the US Senate Judiciary Subcommittee that his autocomplete system with a gigantic dictionary was a risk to the continued existence of the human race! So they should regulate AI, but in such a way as to license large providers — such as OpenAI — before they could deploy this amazing technology. [Time; transcript]

Around the same time he was talking to the Senate, Altman was telling the EU that OpenAI would pull out of Europe if they regulated his company other than how he wanted. This is because the planned European regulations would address AI companies’ actual problematic behaviors, and not the made-up problems Altman wants them to think about. [Zeit Online, in German, paywalled; Fast Company]

The thing Sam’s working on is so cool and dank that it could destroy humanity! So you better give him a pile of money and a regulatory moat around his business. And not just take him at his word and shut down OpenAI immediately.

Occasionally Sam gives the game away that his doomerism is entirely vaporware: [Twitter; archive]

AI is how we describe software that we don’t quite know how to build yet, particularly software we are either very excited about or very nervous about

Altman has a long-running interest in weird and bad parasitical billionaire transhumanist ideas, including the “young blood” anti-aging scam that Peter Thiel famously fell for — billionaires as literal vampires — and a company that promises to preserve your brain in plastic when you die so your mind can be uploaded to a computer. [MIT Technology Review; MIT Technology Review]

Altman is also a crypto grifter, with his proof-of-eyeball cryptocurrency Worldcoin. This has already generated a black market in biometric data courtesy of aspiring holders. [Wired, 2021; Reuters; Gizmodo]


CAIS: Statement on AI Risk

Altman promoted the recent “Statement on AI Risk,” a widely publicized open letter signed by various past AI luminaries, venture capitalists, AI doom cranks, and a musician who met her billionaire boyfriend over Roko’s basilisk. Here is the complete text, all 22 words: [CAIS]

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

A short statement like this on an allegedly serious matter usually conceals a mountain of hidden assumptions. In this case, you would need to know that the statement was promoted by the Center for AI Safety — a group of Yudkowsky’s AI doom acolytes. That’s the hidden baggage for this one.

CAIS is a nonprofit that gets about 90% of its funding from Open Philanthropy, which is part of the Effective Altruism subculture, which David has covered previously. Open Philanthropy’s main funders are Dustin Moskovitz and his wife Cari Tuna. Moskovitz made his money from co-founding Facebook and from his startup Asana, which was largely funded by Sam Altman.

That is: the open letter is the same small group of tech funders. They want to get you worrying about sci-fi scenarios and not about the socially damaging effects of their AI-based businesses.

Computer security guru Bruce Schneier signed the CAIS letter. He was called out on signing on with these guys’ weird nonsense, then he backtracked and said he supported an imaginary version of the letter that wasn’t stupid — and not the one he did in fact put his name to. [Schneier on Security]

And in conclusion

Crypto sucks, and it turns out AI sucks too. We promise we’ll go back to crypto next time.

“Don’t want to worry anyone, but I just asked ChatGPT to build me a better paperclip.” — Bethany Black


Correction: we originally wrote up the professor story as him using Turnitin’s AI plagiarism tester. The original Reddit thread makes it clear what he did.


There’s more! Here’s part 2. We’ll keep doing these as long as we have something to say.



Become a Patron!

Your subscriptions keep this site going. Sign up today!

27 Comments on “Crypto collapse? Get in loser, we’re pivoting to AI”

  1. I wish you had a “like” button in addition to comments. I don’t have much to say about the article, other than I almost completely agree and was very entertained by it, so it would be nice if I could express that by lazily clicking a 👍🏻 button!

  2. “scaled-up autocomplete” is excellent, got to remember that one.

    However, casting this topic in terms of “same hype as blockchain”, I still think that’s a bit unfair. AI is overhyped, but only in the same sense that one might overhype the impact of virtual meeting software or the potential for tidal energy generation. It actually works and does useful things, including in my own job, where I use deep-learning computer vision. Can’t really say that of blockchain, where anything using a blockchain can immediately be made more efficient by removing the blockchain, and the closest my field of research got to using it was somebody asking at a meeting, “can we do anything with blockchain? that would make us look more innovative”. (Cue groan to the effect of, “thanks for that interesting idea, but let’s get back to the topic of this workshop.”)

    Really don’t get those lawyers, I must say. First, how can they be so stupid? Second, I have repeatedly opened ChatGPT and then closed it again after failing to come up with anything it could do for me that I either couldn’t do just as easily myself or wouldn’t WANT to do myself because I need to ensure it is correct. And I am not in front of a judge but merely have to pass peer review. Ye gods.

    On the other hand, those lawyers are very understandable compared to the perennial mystery why anybody is taking Yudkowsky and Altman seriously. Sometimes I wonder if I have woken up in a parallel universe where either I am completely insane, or everybody else is, such as when I read Altman’s tweeting like a lightly inebriated college student who just had a Deep Thought That Nobody Will Ever Have Had Before Because I Am The Smartest Human Ever and then see his opinions being quoted by journalists as if he knew what he was talking about on any topic whatsoever. I guess running a business or being rich = must be very clever, because otherwise the assumption of our society being meritocratic is wrong, and we can’t have that.

    Ironically, their warnings of Skynet-shaped doom and your own warning about propaganda/human gullibility have a similar problem: AI adds nothing substantial. If we can be so easily killed by a GMO bacterium or somebody launching all the missiles, then the problem isn’t the AI going rogue, it is how easy it is to launch all the missiles. Likewise, if people are so easily influenced by a made-up story on the interwebtubes, a human can make up fake stories, and they have done so forever. AI at best makes that step slightly faster, and the problem is still human gullibility itself.

    I once saw an entire, well-referenced Google Docs sheet of AIs sensu lato cheating to achieve the goals that were put into them, such as learning how to crash a computer game to achieve the condition “AI does not lose the game”. Thus, the thought experiment with the drone isn’t completely deranged, although presumably somebody would notice what is happening during the training phase and then not deploy the trained model. But again the real problem here is not having AI but deploying murder drones to kill people at Afghan wedding parties in the first place.

    1. oh yeah, there’s interesting stuff that works. It would be nice if it wasn’t called “AI” but was instead called whatever it actually is.

      1. That ship has sailed, I guess. As you wrote, people call everything AI, including e.g. a computer opponent in a strategy game, even if it works at a level of sophistication like “if I have at least 2x the army strength of another player, declare war against them, and then charge all units towards them without any tactics whatsoever”.

          1. This was a good read/I agree with the sentiment and analysis of the selected sources for the article.

            I’m curious what you make of this interview with Altman: https://www.wired.com/story/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over

            The comments on larger model size no longer contributing to progress/stagnating run counter to the picture you paint of Altman’s beliefs about how far large models will take his autocomplete (but, “I don’t believe his interview comments were genuine” is totally valid, or just that building this autocomplete is counter to that idea).

          2. Sam does have a vague connection to reality and it’s hard for him to even make a model larger than GPT-4. But also, he says a lot of things.

  3. Is there a reason why A16Z is still on the crypto train and seemingly unwilling to get off it? Even many of the other crypto pushers are moving on to “AI”, yet A16Z is still talking about NFTs and Web3.

    1. They have too much invested in the crypto and web3 narrative. They have raised billions from investors selling the narrative and are now too invested in it to back off. I do notice that they have changed some of the wording to reflect more of a focus on “infrastructure” and also lead with the blockchain term, not the crypto term.

  4. There is one sense in which the AI doomers are correct, though it’s not the one they think.

    These Machine Learning models, just like cryptocurrency, consume vast amounts of electricity in their training and usage, accelerating climate change and bringing the extinction (or near-extinction) of humanity ever closer.

    Of course, “regulating” them doesn’t do a thing to solve this problem.

  5. Feel free to write more about the hype, direct risks, and scammy rubbish surrounding this subject as well! I, for one, can appreciate your perspective.

    Could do without the “Cryptokitties-but-sentient” grifts we’ll nigh-inevitably see.

  6. Several years ago, maybe more, Harper’s magazine published an excellent piece about how our popular conception of the mind, or consciousness, has for several decades followed developments in computer science. Not the other way around. I.e. not “here’s our conception of what sentience means – is there a computer that does that?” But rather “here’s what our computers can do – human minds are like that.” AI: If consciousness is generating “content” … look! Our computers can do that! What the AI people seem to want to do is convince us of the same proposition (define consciousness as what their machines can do). I agree you’re spot on, this is the next grift.

  7. A decent term for this mass of ‘technology’ is SALAMI – Systematic Approaches to Learning Algorithms and Machine Inferences
    Please pardon me for not recalling the name of the person who came up with it.

    Another parallel to the cryptocurrency fiasco is the amount of energy required to train and power these SALAMIs. It is interesting to consider that the sunk cost in energy used for the training of the ‘large language models’ implies that if there are ‘better’ methods they will probably not survive.

    1. I for one welcome our Acausal Robot God. Eight rationalists wedgied for every dollar donated!

  8. Now that we’ve overplanted Merkle Trees and are experiencing intellectual deforestation, it’s obviously time to pivot to ‘AI’. Workers of the world unite! You have nothing to gain but Markov Chains.

    The AI-doomers remain the kind of folk who have re-invented Pascal’s Wager without the benefit of prior reading, and the AI-boosters are for the most part selling something they don’t understand to people who understand it less.

    The ‘AI kills pilot’ had the ring of truthiness, but it’s meant to. I’m sure I saw something about a machine learning or a genetic algorithm that found a ‘cheat’ for debugging wherein it deleted the file to debug which resulted in 0 compiler errors. These “one weird trick” hacks are exactly the kind of bullshit that folk love as context breakers. It’s especially ironic in the context of avoiding war crime drone strikes, when the easier one would be to target groups sufficiently large as to guarantee it was someone’s birthday.

  9. > Crypto sucks, and it turns out AI sucks too. We promise we’ll go back to crypto next time.

    I, for one, hope to see more AI on your blogs!

    This thorough takedown of AI techno-fantasies was just as satisfying and informative to read for me today as your takedowns were at the peak of crypto hype. It definitely feels like the moment for an “attack of the 50 foot language model” and I think your experience covering crypto means few people are better suited to write it today.

  10. I am a lawyer, who became a full time translator and my specialization is the translation of legislation about financial services, financial instruments, risk-assessment for financial institutions, anti-money laundering regulations etc. etc. etc.

    There are a lot of tools that work pretty well to translate between the bigger world languages (let’s say between English and Spanish), for example “smartcat”; it just needs a bit of post-editing. But there aren’t any good tools that help a lot with translation from Icelandic or into Icelandic. (Google Translate is much better at translating between English and French than English and Icelandic, for example.)

    I have been running tests on ChatGPT and maybe I am imagining it, but it seems to be getting better and better at translating from Icelandic and into Icelandic. It started to outperform https://velthyding.is/ recently (but was behind it by a large margin when I first started to check), and is vaaaaaaaaaaaaaaaaaaaaaaaastly better than Google Translate with Icelandic. This might be what the tech could be really good for.

      1. Since I wrote the original post about ChatGPT outperforming Google Translate and improving, I have changed my mind. It was a coincidence that I had been running texts through ChatGPT that suited it well. It performed horribly the day after the comment I made about it improving, and it makes up a bunch of nonsense every time I try it out these days.

        1. ah well 🙂 I mean, machine translation is based on ML stuff that shares a close ancestry with Chat GPT. But people expect machine translation to basically work.

  11. i am an average gen x desk slave and have lived through 500 generations of bullshit hype and clutched pearls. i have linked this article (did I find it on boing boing or Metafilter?) to 11 million people because of its refreshing debunking anti-bullshit stance and its intelligent skewering of the money-grubbing lunatics taking over the asylum. thanks!!!

  12. Technology is the promise of idleness: easy money, effortless labor, endless leisure at the end of the rainbow. But, as pointed out here, it just enslaves us in new ways.
