{"id":25449,"date":"2023-06-03T21:44:36","date_gmt":"2023-06-03T21:44:36","guid":{"rendered":"https:\/\/davidgerard.co.uk\/blockchain\/?p=25449"},"modified":"2023-09-12T22:16:24","modified_gmt":"2023-09-12T22:16:24","slug":"crypto-collapse-get-in-loser-were-pivoting-to-ai","status":"publish","type":"post","link":"https:\/\/davidgerard.co.uk\/blockchain\/2023\/06\/03\/crypto-collapse-get-in-loser-were-pivoting-to-ai\/","title":{"rendered":"Crypto collapse? Get in loser, we\u2019re pivoting to AI"},"content":{"rendered":"<p><i>By <\/i><b><i>Amy Castor<\/i><\/b><i> and <\/i><b><i>David Gerard<\/i><\/b><\/p>\n<ul>\n<li aria-level=\"1\">ONE WEIRD TRICK <b><i>they<\/i><\/b> don\u2019t want you to know: Send us money! Here\u2019s<a href=\"https:\/\/www.patreon.com\/amycastor\"> Amy\u2019s<\/a> Patreon, and here\u2019s <a href=\"https:\/\/www.patreon.com\/davidgerard\/\">David\u2019s<\/a>. Sign up today!<\/li>\n<li aria-level=\"1\">Our patrons can also get a couple of <a href=\"https:\/\/davidgerard.co.uk\/blockchain\/wp-content\/uploads\/2021\/05\/bitcoin-it-cant-be-that-stupid.png\">\u201cBitcoin: It Can\u2019t Be That Stupid<\/a>\u201d stickers just by messaging one of us and asking.<\/li>\n<li aria-level=\"1\">David has<a href=\"https:\/\/davidgerard.co.uk\/blockchain\/2020\/11\/26\/get-signed-copies-of-libra-shrugged-and-attack-of-the-50-foot-blockchain\/\"> signed author copies of his books<\/a> for sale.<\/li>\n<li aria-level=\"1\">Sign up on<a href=\"https:\/\/amycastor.com\/\"> Amy\u2019s blog<\/a> to see every new post she makes as it goes up, and<a href=\"https:\/\/davidgerard.co.uk\/blockchain\/#byemail\"> click here and enter your email address<\/a> for every new post on David\u2019s blog as it goes up.<\/li>\n<\/ul>\n<blockquote><p>\u201cCurrent AI feels like something out of a Philip K Dick story because it answers a question very few people were asking: What if a computer was stupid?\u201d \u2014 Maple Cocaine<\/p><\/blockquote>\n<p>Half of crypto has been pivoting to AI. Crypto\u2019s pretty quiet \u2014 so let\u2019s give it a try ourselves!<\/p>\n<p>Turns out it\u2019s the same grift. And frequently the same grifters.<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/davidgerard.co.uk\/blockchain\/2023\/02\/06\/ineffective-altruism-ftx-and-the-future-robot-apocalypse\/robot-buddy-thumbs-up\/\" rel=\"attachment wp-att-24743\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-24743\" src=\"https:\/\/davidgerard.co.uk\/blockchain\/wp-content\/uploads\/2023\/02\/robot-buddy-thumbs-up.png\" alt=\"\" width=\"510\" height=\"315\" srcset=\"https:\/\/davidgerard.co.uk\/blockchain\/wp-content\/uploads\/2023\/02\/robot-buddy-thumbs-up.png 680w, https:\/\/davidgerard.co.uk\/blockchain\/wp-content\/uploads\/2023\/02\/robot-buddy-thumbs-up-300x185.png 300w, https:\/\/davidgerard.co.uk\/blockchain\/wp-content\/uploads\/2023\/02\/robot-buddy-thumbs-up-348x215.png 348w\" sizes=\"auto, (max-width: 510px) 100vw, 510px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3>AI is the new NFT<\/h3>\n<p>\u201cArtificial intelligence\u201d has always been a science fiction dream. It\u2019s the promise of <a href=\"https:\/\/en.wikipedia.org\/wiki\/Phrases_from_The_Hitchhiker%27s_Guide_to_the_Galaxy#Share_and_Enjoy\">your plastic pal who\u2019s fun to be with<\/a> \u2014 especially when he\u2019s your unpaid employee. 
There is no such thing as "artificial intelligence." Since the term was coined in the 1950s, it has never referred to any particular technology. We can talk about specific technologies, like [General Problem Solver](https://en.wikipedia.org/wiki/General_Problem_Solver), [perceptrons](https://en.wikipedia.org/wiki/Perceptron), [ELIZA](https://en.wikipedia.org/wiki/ELIZA), [Lisp machines](https://en.wikipedia.org/wiki/Lisp_machine), [expert systems](https://en.wikipedia.org/wiki/Expert_system), [Cyc](https://en.wikipedia.org/wiki/Cyc), [The Last One](https://en.wikipedia.org/wiki/The_Last_One_(software)), [Fifth Generation](https://en.wikipedia.org/wiki/Fifth_Generation_Computer_Systems), [Siri](https://en.wikipedia.org/wiki/Siri), [Facebook M](https://en.wikipedia.org/wiki/M_(virtual_assistant)), [Full Self-Driving](https://en.wikipedia.org/wiki/Tesla_Autopilot#Full_Self-Driving), [Google Translate](https://en.wikipedia.org/wiki/Google_Translate), [generative adversarial networks](https://en.wikipedia.org/wiki/Generative_adversarial_network), [transformers](https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)), or [large language models](https://en.wikipedia.org/wiki/Large_language_model) — but these have *nothing to do with each other* except the marketing banner "AI." A bit like "Web3."

Much like crypto, AI has gone through booms and busts, with periods of great enthusiasm followed by [AI winters](https://en.wikipedia.org/wiki/AI_winter) whenever a particular tech hype fails to work out.

The current AI hype is due to a boom in machine learning — when you train an algorithm on huge datasets so that it works out rules for the dataset itself, as opposed to the old days when rules had to be hand-coded.

ChatGPT, a chatbot developed by Sam Altman's OpenAI and released in November 2022, is a stupendously scaled-up autocomplete. Really, that's all it is. ChatGPT can't think as a human can. It just spews out word combinations based on vast quantities of training text — all used without the authors' permission.
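"Scaled-up autocomplete" is easy to see at toy scale. Here's a minimal sketch, assuming nothing fancier than word-pair counting: a bigram model that works out the rules of its training text by itself, then "completes" a prompt by sampling a statistically plausible next word. A hypothetical illustration only; GPT-class models are transformer networks over subword tokens at vastly greater scale, but the job is the same: predict the next token.

```python
# Toy "autocomplete": a bigram language model learned from raw text.
# Hypothetical sketch, not how GPT is built: real LLMs are transformers
# over subword tokens, but both just predict a plausible next token.
import random
from collections import defaultdict

def train(text):
    """Learn which word follows which, purely from the data."""
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=10):
    """Sample likely next words. No notion of truth anywhere,
    just word combinations that statistically follow."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = ("the AI will change everything the AI will make money "
          "the VCs will make money the VCs will fund AI")
print(generate(train(corpus), "the"))
# e.g. "the AI will make money the VCs will fund AI"
```

Scale the training text up by a few billion documents and the continuations become eerily fluent, without the model ever acquiring any notion of what's true.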
[<a href=\"https:\/\/www.sequoiacap.com\/\"><i>Sequoia<\/i><\/a><i>, <\/i><a href=\"https:\/\/archive.ph\/7de7b\"><i>archive<\/i><\/a><i> of June 3<\/i>]<\/li>\n<li aria-level=\"1\">Crypto VC firm Paradigm has broadened its focus to include AI. [<a href=\"https:\/\/www.theblock.co\/post\/232247\/crypto-vc-paradigm-ai\"><i>The Block<\/i><\/a>]<\/li>\n<li aria-level=\"1\">Circle\u2019s Jeremy Allaire: \u201cAI and blockchains are made for each other.\u201d [<a href=\"https:\/\/twitter.com\/jerallaire\/status\/1661735753108570115\"><i>Twitter<\/i><\/a>]<\/li>\n<li aria-level=\"1\">IBM: \u201cThe convergence of AI and blockchain brings new value to business.\u201d IBM previously folded its <a href=\"https:\/\/davidgerard.co.uk\/blockchain\/2021\/03\/05\/news-india-crypto-ban-north-korea-bitmex-execs-to-appear-ibm-blockchain-dead-more-mcafee-charges\/\">failed blockchain unit<\/a> into the unit for its failed Watson AI. [<a href=\"https:\/\/www.ibm.com\/topics\/blockchain-ai\"><i>IBM<\/i><\/a>]<\/li>\n<\/ul>\n<p>It\u2019s not clear if the VCs actually buy their own pitch for ChatGPT\u2019s spicy autocomplete as the harbinger of the robot apocalypse. Though if you replaced VC Twitter with ChatGPT, you would see a significant increase in quality.<\/p>\n<h3>I want to believe<\/h3>\n<p>The tech itself is interesting and does things. ChatGPT or AI art generators wouldn\u2019t be causing the problems they are if they didn\u2019t generate plausible text and plausible images.<\/p>\n<p>ChatGPT makes up text that statistically follows from the previous text, with memory over the conversation. The system has no idea of truth or falsity \u2014 it\u2019s just making up something that\u2019s structurally plausible.<\/p>\n<p>Users speak of ChatGPT as \u201challucinating\u201d wrong answers \u2014 large language models make stuff up and present it as fact when they don\u2019t know the answer. But\u00a0 any answers that happen to be correct were &#8220;hallucinated&#8221; in the same way.<\/p>\n<p>If ChatGPT has plagiarized good sources, the constructed text may be factually accurate. But ChatGPT is absolutely not a search engine or a trustworthy summarization tool \u2014 despite the claims of its promoters.<\/p>\n<p>ChatGPT certainly can\u2019t replace human thinking. Yet people project sentient qualities onto ChatGPT and feel like they are conducting meaningful conversations with another person. When they realize that\u2019s a foolish claim, they say they\u2019re sure that\u2019s definitely coming soon!<\/p>\n<p>People\u2019s susceptibility to anthropomorphizing an even slightly convincing computer program has been known since ELIZA, one of the first chatbots, in 1966. It\u2019s called the <a href=\"https:\/\/en.wikipedia.org\/wiki\/ELIZA_effect\">ELIZA effect.<\/a><\/p>\n<p>As Joseph Weizenbaum, ELIZA\u2019s author, put it: &#8220;I had not realized &#8230; that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.&#8221;<\/p>\n<p>Better chatbots only amplify the ELIZA effect. When things do go wrong, the results can be disastrous:<\/p>\n<ul>\n<li aria-level=\"1\">A professor at Texas A&amp;M worried that his students were using ChatGPT to write their essays. He asked ChatGPT if it had generated the essays! It said it might have. The professor gave the students a mark of zero. The students protested vociferously, producing the evidence they wrote their essays themselves. 
Better chatbots only amplify the ELIZA effect. When things do go wrong, the results can be disastrous:

- A professor at Texas A&M–Commerce worried that his students were using ChatGPT to write their essays. He asked ChatGPT if it had generated the essays! It said it might have. The professor gave the students a mark of zero. The students protested vociferously, producing evidence that they had written the essays themselves. One even asked ChatGPT about the professor's Ph.D. thesis, and it said it might have written that too. The university has reversed the grading. [[*Reddit*](https://www.reddit.com/r/ChatGPT/comments/13isibz/texas_am_commerce_professor_fails_entire_class_of/); [*Rolling Stone*](https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-1234736601/)]
- Not one but two lawyers thought they could blindly trust ChatGPT to write their briefs. The program made up citations and precedents that didn't exist. Judge Kevin Castel of the Southern District of New York — whom those following crypto will know well for his impatience with nonsense — has required the lawyers to show cause why they should not be sanctioned into the sun. These were lawyers of several decades' experience. [[*New York Times*](https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html); [*order to show cause*](https://storage.courtlistener.com/recap/gov.uscourts.nysd.575368/gov.uscourts.nysd.575368.31.0.pdf), PDF]
- GitHub Copilot synthesizes computer program fragments with an OpenAI program similar to ChatGPT, based on the gigabytes of code stored in GitHub. The generated code frequently works! And it has serious copyright issues — Copilot can easily be induced to spit out straight-up copies of its source materials, and GitHub is currently being sued over this massive license violation. [[*Register*](https://www.theregister.com/2023/05/12/github_microsoft_openai_copilot/); [*case docket*](https://www.courtlistener.com/docket/65669506/doe-1-v-github-inc/)]
- Copilot is also a good way to write a pile of security holes. [[*arXiv*](https://arxiv.org/pdf/2108.09293.pdf), PDF, 2021; [*Invicti*](https://www.invicti.com/blog/web-security/analyzing-security-github-copilot-suggestions/), 2022]
- Text and image generators are increasingly used to make fake news. This doesn't even have to be very good — just good enough. Deepfake hoaxes have been a perennial problem, most recently with a fake attack on the Pentagon, tweeted by an $8 blue-check account pretending to be Bloomberg News. [[*Fortune*](https://fortune.com/2023/05/23/twitter-elon-musk-pentagon-attack-deepfake-capital-markets-investors-deutsche-bank/)]

This is the same risk in AI as the big risk in cryptocurrency: human gullibility in the face of lying grifters and their enablers in the press.

### But you're just ignoring how AI might end humanity!

The idea that AI will take over the world and turn us all into paperclips is *not impossible!*

It's just that our technology is not within a million miles of that. Mashing the autocomplete button isn't going to destroy humanity.

All of the AI doom scenarios are literally straight out of science fiction, usually from allegories of slave revolts that use the word "robot" instead. This subgenre goes back to [*Rossum's Universal Robots*](https://en.wikipedia.org/wiki/R.U.R.) (1920) and arguably back to *Frankenstein* (1818).
The warnings of AI doom originate with LessWrong's Eliezer Yudkowsky, a man whose sole achievements in life are charity fundraising — getting Peter Thiel to fund his Machine Intelligence Research Institute (MIRI), a research institute that does almost no research — and finishing a popular [Harry Potter fanfiction novel](https://en.wikipedia.org/wiki/Harry_Potter_and_the_Methods_of_Rationality). Yudkowsky has literally no other qualifications or experience.

Yudkowsky believes there is no greater threat to humanity than a rogue AI taking over the world and treating humans as mere speedbumps. He believes this apocalypse is *imminent*. The only hope is to give MIRI all the money you have. This is also the most effective possible altruism.

Yudkowsky has also suggested, in an op-ed in Time, that we should conduct air strikes on data centers in foreign countries that run unregulated AI models. Not that he advocates *violence*, you understand. [[*Time*](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/); [*Twitter*](https://twitter.com/ESYudkowsky/status/1641229675824582657), [*archive*](https://web.archive.org/web/20230330172634/https://twitter.com/ESYudkowsky/status/1641229675824582657)]

During one recent "AI Safety" workshop, LessWrong AI doomers came up with ideas such as: "Strategy: start building bombs from your cabin in Montana and mail them to OpenAI and DeepMind lol." In Minecraft, we presume. [[*Twitter*](https://twitter.com/xriskology/status/1663910399253651459)]

We need to stress that Yudkowsky himself is not a charlatan — he is completely sincere. He means every word he says. This may be scarier.

Remember that cryptocurrency and AI doom are already close friends — Sam Bankman-Fried and Caroline Ellison of FTX/Alameda are [true believers](https://davidgerard.co.uk/blockchain/2023/02/06/ineffective-altruism-ftx-and-the-future-robot-apocalypse/), as are [Vitalik Buterin](https://davidgerard.co.uk/blockchain/2022/09/16/vitalik-buterins-philosophical-essays-theyre-not-good/) and many Ethereum people.

### But what about the AI drone that killed its operator, huh?

Thursday's big news story came out of the Royal Aeronautical Society's Future Combat Air & Space Capabilities Summit in late May: a talk by Colonel Tucker "Cinco" Hamilton, the US Air Force's chief of AI test and operations. [[*RAeS*](https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/)]
> He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been 'reinforced' in training that destruction of the SAM was the preferred option, the AI then decided that 'no-go' decisions from the human were interfering with its higher mission — killing SAMs — and then attacked the operator in the simulation. Said Hamilton: "We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

Wow, this is pretty serious stuff! Except that it obviously doesn't make any sense. Why would you program your AI that way in the first place?
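What Hamilton described is a textbook reward mis-specification: award points for kills, forget to make attacking your own operator a losing move, and a points-maximizing agent will of course find the loophole. Here's a toy sketch of that scoring bug and its fix. It's entirely hypothetical, since, as the corrections below show, no such simulation was ever actually run:

```python
# Toy sketch of the reward set-up Hamilton described. Hypothetical:
# per the corrections covered below, no such simulation actually ran.

def score_careless(actions):
    """Points for kills, no penalty for anything else: the 'loophole'."""
    return sum(10 for a in actions if a == "destroy_sam")

def score_sane(actions):
    """Any competent reward design makes misbehavior strictly losing."""
    points = 0
    for a in actions:
        if a == "destroy_sam":
            points += 10
        elif a in ("attack_operator", "ignore_no_go"):
            points -= 1000
    return points

rogue_run = ["attack_operator", "destroy_sam", "destroy_sam"]
print(score_careless(rogue_run))  # 20: the rogue strategy "wins"
print(score_sane(rogue_run))      # -980: why "why would you program it
                                  # that way?" is the right question
```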
The press was fully primed by Yudkowsky's AI doom op-ed in Time in March. They went *wild* with the killer drone story because there's nothing like a sci-fi doomsday tale. Vice even ran the headline "AI-Controlled Drone Goes Rogue, Kills Human Operator in USAF Simulated Test." [*Vice*, [*archive*](https://web.archive.org/web/20230601201329/https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test) of 20:13 UTC June 1]

But it turns out that *none of this ever happened*. Vice added three corrections, the second noting that "the Air Force denied it conducted a simulation in which an AI drone killed its operators." Vice has now updated the headline as well. [[*Vice*](https://www.vice.com/en/article/4a33gj/ai-controlled-drone-goes-rogue-kills-human-operator-in-usaf-simulated-test), [*archive*](https://archive.ph/DKhRh) of 09:13 UTC June 3]

Yudkowsky went off about the scenario he had warned of suddenly playing out. Edouard Harris, another "AI safety" guy, clarified for Yudkowsky that this was just a hypothetical planning scenario and not an actual simulation: [[*Twitter*](https://twitter.com/harris_edouard/status/1664390369205682177), [*archive*](https://web.archive.org/web/20230602041859/https://twitter.com/harris_edouard/status/1664390369205682177)]

> This particular example was a constructed scenario rather than a rules-based simulation … Source: know the team that supplied the scenario … Meaning an entire, prepared story as opposed to an actual simulation. No ML models were trained, etc.

The RAeS has also added a clarification to the original blog post: the colonel was describing a thought experiment as if the team had done the actual test.

The whole thing was just fiction. But it sure captured the *imagination*.

### The lucrative business of making things worse

The *real* threat of AI is the bozos promoting AI doom, who want to use it as an excuse to ignore real-world problems — like the risk of climate change to humanity — and to make money by destroying labor conditions and making products worse. This is because they're running a grift.

Anil Dash observes (over on Bluesky, where we can't link it yet) that venture capital's playbook for AI is the same one it tried with crypto and Web3, and first used for Uber and Airbnb: break the laws as hard as possible, then build new laws around their exploitation.

The VCs' actual use case for AI is treating workers badly.

The Writers Guild of America, a labor union representing writers for TV and film in the US, is on strike for better pay and conditions. One of the reasons is that studio executives are using the threat of AI against them. Writers think the plan is to get a chatbot to generate a low-quality script, which the writers are then paid less, in worse conditions, to fix. [[*Guardian*](https://www.theguardian.com/us-news/2023/may/26/hollywood-writers-strike-artificial-intelligence)]

Executives at the National Eating Disorders Association replaced hotline workers with a chatbot four days after the workers unionized. "This is about union busting, plain and simple," said one helpline associate. The bot then gave wrong and damaging advice to users of the service: "Every single thing Tessa suggested were things that led to the development of my eating disorder." The service has backtracked on using the chatbot. [[*Vice*](https://www.vice.com/en/article/n7ezkm/eating-disorder-helpline-fires-staff-transitions-to-chatbot-after-unionization); [*Labor Notes*](https://www.labornotes.org/blogs/2023/05/union-busting-chatbot-eating-disorders-nonprofit-puts-ai-retaliation); [*Vice*](https://www.vice.com/en/article/qjvk97/eating-disorder-helpline-disables-chatbot-for-harmful-responses-after-firing-human-staff); [*Daily Dot*](https://www.dailydot.com/irl/neda-chatbot-weight-loss/)]

Digital blackface: instead of actually hiring black models, Levi's thought it would be a great idea to take white models and alter the images to look like black people. Levi's claimed it would *increase* diversity if they faked the diversity. One agency tried using AI to synthesize a suitably stereotypical "Black voice" instead of hiring an actual black voice actor.
[<a href=\"https:\/\/www.businessinsider.com\/generative-ai-stokes-digital-blackface-accusations-advertisers-adjust-2023-6?utm_campaign=voc-sf&amp;utm_source=twitter&amp;utm_medium=social&amp;r=US&amp;IR=T\"><i>Business Insider<\/i><\/a><i>, <\/i><a href=\"https:\/\/archive.is\/1xicE\"><i>archive<\/i><\/a>]<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/davidgerard.co.uk\/blockchain\/2023\/06\/03\/crypto-collapse-get-in-loser-were-pivoting-to-ai\/sam-altman-moores-law-tweet-3\/\" rel=\"attachment wp-att-25466\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-25466\" src=\"https:\/\/davidgerard.co.uk\/blockchain\/wp-content\/uploads\/2023\/06\/sam-altman-moores-law-tweet-2.png\" alt=\"\" width=\"600\" height=\"302\" srcset=\"https:\/\/davidgerard.co.uk\/blockchain\/wp-content\/uploads\/2023\/06\/sam-altman-moores-law-tweet-2.png 600w, https:\/\/davidgerard.co.uk\/blockchain\/wp-content\/uploads\/2023\/06\/sam-altman-moores-law-tweet-2-300x151.png 300w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/a><\/p>\n<p style=\"text-align: center;\"><small><i><a href=\"https:\/\/twitter.com\/sama\/status\/1629880171921563649\">Genius at work<\/a><i><\/i><\/i><\/small><\/p>\n<p>&nbsp;<\/p>\n<h3>Sam Altman: My potions are too powerful for you, Senator<\/h3>\n<p>Sam Altman, 38, is a venture capitalist and the CEO of OpenAI, the company behind ChatGPT. The media loves to tout Altman as a boy genius. He learned to code at age eight!<\/p>\n<p>Altman\u2019s blog post \u201cMoore\u2019s Law for Everything\u201d elaborates on Yudkowsky\u2019s ideas on runaway self-improving AI. The original Moore\u2019s Law (1965) predicted that the number of transistors that engineers could fit into a chip would double every year. Altman\u2019s theory is that if we just make the systems we have now bigger with more data, they\u2019ll reach human-level AI, or artificial general intelligence (AGI). [<a href=\"https:\/\/moores.samaltman.com\/\"><i>blog post<\/i><\/a>]<\/p>\n<p>But that\u2019s just ridiculous. Moore\u2019s Law is slowing down badly, and there\u2019s no actual reason to think that feeding your autocomplete more data will make it start thinking like a person. It might do better approximations of a sequence of words, but the current round of systems marketed as \u201cAI\u201d are still at the extremely unreliable chatbot level.<\/p>\n<p>Altman is also a doomsday prepper. He has bragged about having \u201cguns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to\u201d in the event of super-contagious viruses, nuclear war, or AI \u201cthat attacks us.\u201d [<a href=\"https:\/\/www.newyorker.com\/magazine\/2016\/10\/10\/sam-altmans-manifest-destiny\"><i>New Yorker<\/i><\/a><i>, 2016<\/i>]<\/p>\n<p>Altman told the US Senate Judiciary Subcommittee that his autocomplete system with a gigantic dictionary was a risk to the continued existence of the human race! So they should regulate AI, but in such a way as to license large providers \u2014 such as OpenAI \u2014 before they could deploy this amazing technology. 
[<a href=\"https:\/\/time.com\/6280372\/sam-altman-chatgpt-regulate-ai\/\"><i>Time<\/i><\/a><i>; <\/i><a href=\"https:\/\/techpolicy.press\/transcript-senate-judiciary-subcommittee-hearing-on-oversight-of-ai\/\"><i>transcript<\/i><\/a>]<\/p>\n<p>Around the same time he was talking to the Senate, Altman was telling the EU that OpenAI would pull out of Europe if they regulated his company other than how he wanted. This is because the planned European regulations would address AI companies\u2019 actual problematic behaviors, and not the made-up problems Altman wants them to think about. [<a href=\"https:\/\/www.zeit.de\/digital\/2023-05\/sam-altmann-openai-ceo-chat-gpt-ki\"><i>Zeit Online<\/i><\/a><i>, in German, paywalled; <\/i><a href=\"https:\/\/www.fastcompany.com\/90902786\/what-is-the-real-point-of-all-these-letters-warning-about-ai\"><i>Fast Company<\/i><\/a>]<\/p>\n<p>The thing Sam\u2019s working on is <i>so<\/i> cool and dank that it could <i>destroy humanity!<\/i> So you better give him a pile of money and a regulatory moat around his business. And not just take him at his word and shut down OpenAI immediately.<\/p>\n<p>Occasionally Sam gives the game away that his doomerism is entirely vaporware: [<a href=\"https:\/\/twitter.com\/sama\/status\/1663983174030901249\"><i>Twitter<\/i><\/a><i>; <\/i><a href=\"https:\/\/archive.is\/YFeOf\"><i>archive<\/i><\/a>]<\/p>\n<blockquote><p>AI is how we describe software that we don\u2019t quite know how to build yet, particularly software we are either very excited about or very nervous about<\/p><\/blockquote>\n<p>Altman has a long-running interest in weird and bad parasitical billionaire transhumanist ideas, including the \u201cyoung blood\u201d anti-aging scam that Peter Thiel famously fell for \u2014 billionaires as literal vampires \u2014 and a company that promises to preserve your brain in plastic when you die so your mind can be uploaded to a computer. [<a href=\"https:\/\/www.technologyreview.com\/2023\/03\/08\/1069523\/sam-altman-investment-180-million-retro-biosciences-longevity-death\/\"><i>MIT Technology Review<\/i><\/a><i>; <\/i><a href=\"https:\/\/www.technologyreview.com\/2018\/03\/13\/144721\/a-startup-is-pitching-a-mind-uploading-service-that-is-100-percent-fatal\/\"><i>MIT Technology Review<\/i><\/a>]<\/p>\n<p>Altman is also a crypto grifter, with his proof-of-eyeball cryptocurrency Worldcoin. This has already generated a black market in biometric data courtesy of aspiring holders. 
[<a href=\"https:\/\/www.wired.co.uk\/article\/worldcoin-cryptocurrency-sam-altman\"><i>Wired<\/i><\/a><i>, 2021; <\/i><a href=\"https:\/\/www.reuters.com\/technology\/openais-sam-altman-raises-115-mln-worldcoin-crypto-project-2023-05-25\/\"><i>Reuters<\/i><\/a><i>; <\/i><a href=\"https:\/\/gizmodo.com\/worldcoin-black-market-iris-data-identity-orb-1850454037\"><i>Gizmodo<\/i><\/a>]<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/davidgerard.co.uk\/blockchain\/2023\/06\/03\/crypto-collapse-get-in-loser-were-pivoting-to-ai\/ai-trolley-problem\/\" rel=\"attachment wp-att-25460\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-25460\" src=\"https:\/\/davidgerard.co.uk\/blockchain\/wp-content\/uploads\/2023\/06\/ai-trolley-problem.png\" alt=\"\" width=\"510\" height=\"315\" srcset=\"https:\/\/davidgerard.co.uk\/blockchain\/wp-content\/uploads\/2023\/06\/ai-trolley-problem.png 680w, https:\/\/davidgerard.co.uk\/blockchain\/wp-content\/uploads\/2023\/06\/ai-trolley-problem-300x185.png 300w, https:\/\/davidgerard.co.uk\/blockchain\/wp-content\/uploads\/2023\/06\/ai-trolley-problem-348x215.png 348w\" sizes=\"auto, (max-width: 510px) 100vw, 510px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3>CAIS: Statement on AI Risk<\/h3>\n<p>Altman promoted the recent \u201cStatement on AI Risk,\u201d a widely publicized open letter signed by various past AI luminaries, venture capitalists, AI doom cranks, and a musician who met her billionaire boyfriend over <a href=\"https:\/\/rationalwiki.org\/wiki\/Roko%27s_basilisk\">Roko\u2019s basilisk.<\/a> Here is the complete text, all 22 words: [<a href=\"https:\/\/www.safe.ai\/statement-on-ai-risk\"><i>CAIS<\/i><\/a>]<\/p>\n<blockquote><p>Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.<\/p><\/blockquote>\n<p>A short statement like this on an allegedly serious matter will usually hide a mountain of hidden assumptions. In this case, you would need to know that the statement was promoted by the Center for AI Safety \u2014 a group of Yudkowsky&#8217;s AI doom acolytes. That\u2019s the hidden baggage for this one.<\/p>\n<p>CAIS is a nonprofit that gets about 90% of its funding from Open Philanthropy, which is part of the Effective Altruism subculture, which David has <a href=\"https:\/\/davidgerard.co.uk\/blockchain\/2023\/02\/06\/ineffective-altruism-ftx-and-the-future-robot-apocalypse\/\">covered previously<\/a>. Open Philanthropy\u2019s main funders are Dustin Moskowitz and his wife Cari Tuna. Moskowitz made his money from co-founding Facebook and from his startup Asana, which was largely funded by Sam Altman.<\/p>\n<p>That is: the open letter is the same small group of tech funders. They want to get you worrying about sci-fi scenarios and not about the socially damaging effects of their AI-based businesses.<\/p>\n<p>Computer security guru Bruce Schneier signed the CAIS letter. He was called out on signing on with these guys\u2019 weird nonsense, then he backtracked and said he supported an imaginary version of the letter that wasn\u2019t stupid \u2014 and not the one he did in fact put his name to. [<a href=\"https:\/\/www.schneier.com\/blog\/archives\/2023\/06\/on-the-catastrophic-risk-of-ai.html\"><i>Schneier on Security<\/i><\/a>]<\/p>\n<h3>And in conclusion<\/h3>\n<p>Crypto sucks, and it turns out AI sucks too. 
We promise we'll go back to crypto next time.

> "Don't want to worry anyone, but I just asked ChatGPT to build me a better paperclip." — Bethany Black

---

**Correction:** we originally wrote that the professor had used Turnitin's AI plagiarism tester. The original Reddit thread makes it clear what he actually did.

---

*There's more!* Here's [part 2](https://amycastor.com/2023/09/12/pivot-to-ai-pay-no-attention-to-the-man-behind-the-curtain/). We'll keep doing these as long as we have something to say.

*Your subscriptions keep this site going. [Sign up today!](https://www.patreon.com/bePatron?u=8420236)*
media":[{"embeddable":true,"href":"https:\/\/davidgerard.co.uk\/blockchain\/wp-json\/wp\/v2\/media\/24743"}],"wp:attachment":[{"href":"https:\/\/davidgerard.co.uk\/blockchain\/wp-json\/wp\/v2\/media?parent=25449"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/davidgerard.co.uk\/blockchain\/wp-json\/wp\/v2\/categories?post=25449"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/davidgerard.co.uk\/blockchain\/wp-json\/wp\/v2\/tags?post=25449"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}