(2024-07-18) ZviM AI #73 Openly Evil AI
Zvi Mowshowitz: AI #73: Openly Evil AI. What do you call a clause explicitly saying that you waive the right to whistleblower compensation, and that you need to get permission before sharing information with government regulators like the SEC?
I have many answers.
I also know that OpenAI, having f**ed around, seems poised to find out, because that is the claim made by whistleblowers to the SEC. Given that the SEC fines you for merely not making an explicit exception to your NDA for whistleblowers, what will they do once aware of explicit clauses going the other way?
We also have rather a lot of tech people coming out in support of Donald Trump. I go into the reasons why, which I do think is worth considering.
Table of Contents
- Language Models Offer Mundane Utility. Fight the insurance company.
- Language Models Don’t Offer Mundane Utility. Have you tried using it?
- Clauding Along. Not that many people are switching over.
- Fun With Image Generation. Amazon Music and K-Pop start to embrace AI.
- Deepfaketown and Botpocalypse Soon. FoxVox, turn Fox into Vox or Vox into Fox.
- They Took Our Jobs. Take away one haggling job, create another haggling job.
- Get Involved. OpenPhil request for proposals. Job openings elsewhere.
- Introducing. Karpathy goes into AI education.
- In Other AI News. OpenAI’s Q* is now named Strawberry. Is it happening?
- Denying the Future. Projections of the future that assume AI will never improve again.
- Quiet Speculations. How to think about stages of AI capabilities.
- The Quest for Sane Regulations. EU, UK, The Public.
- The Other Quest Regarding Regulations. Many in tech embrace The Donald.
- SB 1047 Opposition Watch (1). I’m sorry. You don’t have to read this.
- SB 1047 Opposition Watch (2). I’m sorry. You don’t have to read this.
- Open Weights are Unsafe and Nothing Can Fix This. What to do about it?
- The Week in Audio. YouTube highlighted an older interview I’d forgotten.
- Rhetorical Innovation. Supervillains, oh no.
- Oh Anthropic. More details available, things not as bad as they look.
- Openly Evil AI. Other things, in other places, on the other hand, look worse.
- Aligning a Smarter Than Human Intelligence is Difficult. Noble attempts.
- People Are Worried About AI Killing Everyone. Scott Adams? Kind of?
- Other People Are Not As Worried About AI Killing Everyone. All glory to it.
- The Lighter Side. A different kind of mental gymnastics.
Language Models Offer Mundane Utility
Let Claude write your prompts for you. Sully suggests using the Claude prompt improver: “Convinced that we are all really bad at writing prompts... I’m personally never writing prompts by hand again.”
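As a concrete illustration of the workflow, here is a minimal sketch of asking Claude to rewrite a draft prompt via the Anthropic Python SDK. This is not the actual console prompt improver (whose internals are Anthropic’s own), just the general meta-prompting idea:

```python
# Minimal sketch: ask Claude to improve a prompt. Not the actual
# "prompt improver" console feature, just the same idea via the API.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

draft_prompt = "summarize this article"

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Rewrite the following prompt to be clearer and more specific, "
            "adding structure, constraints, and a desired output format:\n\n"
            f"{draft_prompt}"
        ),
    }],
)
print(response.content[0].text)
```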
Predict who will be a shooting victim. A machine learning model did this for citizens of Chicago (a clear violation of the EU AI Act, if it had been done there!), and of the 500 people it said were most likely to be shot, 13% (65 people) were shot in the next 18 months. That’s a lot.
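For concreteness, the statistic being cited is essentially precision at k: of the k = 500 highest-risk people the model flagged, what fraction were actually shot. A minimal sketch with made-up numbers (Chicago’s actual model and data are not public):

```python
# Minimal sketch of the metric implicitly being cited: precision at k.
# All data below is invented for illustration.
def precision_at_k(scores, outcomes, k):
    """Of the k highest-scored people, what fraction had the outcome?"""
    ranked = sorted(zip(scores, outcomes), key=lambda pair: pair[0], reverse=True)
    return sum(outcome for _, outcome in ranked[:k]) / k

scores = [0.9, 0.8, 0.7, 0.2, 0.1]            # hypothetical risk scores
outcomes = [True, False, True, False, False]  # hypothetical: shot within 18 months
print(precision_at_k(scores, outcomes, k=3))  # 2/3 ≈ 0.67

# The article's figure: 13% of the top 500 works out to 65 people.
print(int(0.13 * 500))  # 65
```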
A lot of this ultimately is not rocket science. Benjamin Miller: The DC City Administrator under Fenty told me that one of the most surprising things he learned was that virtually all the violent crime in the city was caused by a few hundred people. The city knows who they are and used to police them more actively, but now that’s become politically infeasible.
Dr. Tariq said Doximity GPT, a HIPAA-compliant version of the chatbot, had halved the time he spent on prior authorizations. Maybe more important, he said, the tool — which draws from his patient’s medical records and the insurer’s coverage requirements — has made his letters more successful. Since using A.I. to draft prior-authorization requests, he said about 90 percent of his requests for coverage had been approved by insurers, compared with about 10 percent before.
This is an inherently adversarial system we have chosen.
We would not want either side to fully get their way.
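Doximity GPT’s internals are not public; as a rough sketch of the general pattern described above (an LLM drafting a letter from a patient summary plus the insurer’s coverage criteria), here is a toy version using the OpenAI Python SDK. Every input is a placeholder, and a real system would need to handle PHI under HIPAA, which this does not:

```python
# Rough sketch of the general pattern only -- Doximity GPT's actual
# implementation is not public. Inputs are placeholders; a real system
# must handle protected health information under HIPAA.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

patient_summary = "..."    # relevant excerpts from the patient's chart
coverage_criteria = "..."  # the insurer's published criteria for the treatment

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You draft prior-authorization request letters for physicians."},
        {"role": "user",
         "content": (
             "Draft a prior-authorization request letter, addressing each "
             f"coverage criterion explicitly.\n\nPatient summary:\n{patient_summary}"
             f"\n\nCoverage criteria:\n{coverage_criteria}"
         )},
    ],
)
print(completion.choices[0].message.content)
```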
Nail all five of this man’s family’s medical diagnostic issues that doctors messed up. He notes that a smart doctor friend also got them all correct, so a lot of this is ‘any given doctor might not be good.’
Arnold Kling is highly impressed by Claude’s answer about the Shapiro-Stiglitz 1984 efficiency wage model, in terms of what it would take to generate such an answer.
Kling there also expresses optimism about using AI to talk to simulations of dead people. I looked at the sample conversations, and it all seems so basic and simple.
Language Models Don’t Offer Mundane Utility
Ethan Mollick: In my most recent talks to companies, even though everyone is talking about AI, less than 10% of people have even tried GPT-4, and less than 2% have spent the required 10 or so hours with a frontier model.
But the people who have tried frontier models seriously seem to have found many uses. I rarely hear someone saying they were not useful.
Clauding Along
Are people switching to Claude en masse over ChatGPT now that Claude is better? From what I can tell, the cognoscenti are, but the masses are as usual much slower.
Eliezer Yudkowsky: Who the heck thought ChatGPT was sticky? Current LLM services have a moat as deep as toilet paper.
Fun with Image Generation
K-pop is increasingly experimenting with AI-generated content, starting with music videos; it is also starting to help some artists generate the songs themselves.
Meanwhile: Scott Lincicome: Amazon Music now testing an AI playlist maker called “Maestro.”
Deepfaketown and Botpocalypse Soon
As a warning, Palisade Research releases FoxVox, a browser extension that uses ChatGPT to transform websites to make them sound like they were written by Vox (liberal) or Fox (conservative).
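FoxVox’s actual implementation is not reproduced here; as a minimal sketch of the core transformation step it demonstrates (an LLM rewriting text in a target outlet’s voice), assuming the OpenAI Python SDK:

```python
# Minimal sketch of the core rewriting step only -- FoxVox itself is a
# browser extension, and its real code is not shown here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def restyle(text: str, outlet: str) -> str:
    """Rewrite `text` as if written by the given outlet ('Fox' or 'Vox')."""
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": (f"Rewrite the user's text in the editorial voice of "
                         f"{outlet}, preserving the underlying facts.")},
            {"role": "user", "content": text},
        ],
    )
    return completion.choices[0].message.content

print(restyle("The mayor announced a new housing policy today.", "Fox"))
```

Part of the warning is how little machinery the core trick requires; the extension’s main job is just swapping the rewritten text into the page.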
They Took Our Jobs
Get Involved
Open Philanthropy’s AI governance team is launching a request for proposals for work to mitigate the potential catastrophic risks from advanced AI systems.
Introducing
In Other AI News
Denying the Future
When people try to model ‘the impact of AI,’ the majority of them, including most economists, refuse to consider ANY improvements in AI in the future.
Quiet Speculations
Alex Tabarrok says, and Seb Krier mostly agrees, that AI will not be intelligent enough to figure out how to ‘perfectly organize a modern economy.’ Why? Because the AIs will be part of the economy, and they will be unable to anticipate each other.
Arvind Narayanan offers thoughts on what went wrong with generative AI from a business perspective. In his view, OpenAI and Anthropic forgot to turn their models into something people want, but are fixing that now, while Google and Microsoft rushed forward instead of taking time to get it right, whereas Apple took the time.
I don’t see it that way, nor do Microsoft and Google (or OpenAI or Anthropic) shareholders.
The Quest for Sane Regulations
The Other Quest Regarding Regulations
You know who is not going to let public opposition or any dangers stop them? According to his public statements, Donald Trump.
Cat Zakrzewski (Washington Post): Former president Donald Trump’s allies are drafting a sweeping AI executive order that would launch a series of “Manhattan Projects” to develop military technology and immediately review “unnecessary and burdensome regulations.”
Trump also has a way of polarizing things. So this could go quite badly as well. If he does polarize politics and go in as the pro-AI party, I predict a very rough 2028 for Republicans, one way or the other.
To the great surprise of no one paying attention, Marc Andreessen and Ben Horowitz have endorsed Trump and plan to donate large amounts to a Trump PAC.
Far more tech people than in past cycles are embracing Trump; something has changed quite a lot.
Why?
They describe an old Clinton-Obama moral/political framework, where business people could get rich, give their money away to philanthropic efforts, and have socially liberal views; they view that framework as having broken down since 2016 or so.
I think Andreessen and Horowitz lead with the vibes stuff partly because it is highly aversive to have such vibes coming at you, and also because they are fundamentally vibes people, who see tech and Silicon Valley as ultimately a vibes-driven business first and a technology-based business second. Their businesses especially are based on pushing the vibes.
When it comes to AI, what do they want? To not be subject to regulations or taxes or safety requirements. To instead get handouts, carveouts, and regulatory arbitrage. Trump offers them this.
Trump is strongly pro-crypto, whereas Biden is anti-crypto, and a huge portion of a16z’s portfolio and business model is crypto.
For further thoughts on crypto regulation in practice, see my write-up on Chevron.
Here is a wise man named Vitalik Buterin warning not to ask who is pro-crypto, and instead ask who supports the freedoms and other principles you value, including those that drove you to crypto. Ask what someone is likely to support in the future.
In terms of their AI discussions I will say this: It is in no way new, but the part where they talk about the Executive Order is called ‘Discussion on Executive Order limiting math operations in AI’ which tells you how deeply they are in bad faith on the AI issue.
However, to be fair to Andreessen and Horowitz, the Biden tax proposal on unrealized capital gains (taxing the unrealized gains of people with more than $100 million) is indeed an existential threat to their entire business model, along with the entire American economy. On this point, they are correct. I am pretty furious about it too.
Even if you don’t take the proposal literally or seriously as a potential actual law, it is highly illustrative of where Biden’s policy thinking is at, no matter who is actually doing that policy thinking. Other moves along that line of thinking could be quite bad.
There is another highly understandable reason for all these sudden endorsements. Everyone (except the 538 model, send help) thinks Trump is (probably) going to win.
I do think there has been a vibe shift, but in addition to having a reasonably different list of things I would cite (with overlap of course), I would say that those vibes mostly had already shifted. What happened in the last few weeks is that everyone got the social permission to recognize that.
SB 1047 Opposition Watch (1)
SB 1047 Opposition Watch (2)
Open Weights are Unsafe and Nothing Can Fix This
The Week in Audio
Rhetorical Innovation
Maybe it’s you, indeed: Tyler Cowen calls those who want lower pharmaceutical prices ‘supervillains.’ So what should we call someone, say Tyler Cowen, who wants to accelerate construction of AI systems that might kill everyone, and opposes any and all regulatory attempts to ensure we do not all die, and is willing to link to arguments against such attempts even when they are clearly not accurate?
Oh Anthropic
Openly Evil AI
You know that OpenAI situation with the NDAs and nondisparagement agreements? It’s worse.
Whistleblowers have told the SEC that OpenAI illegally prohibited its employees from warning regulators about the grave risks its technology may pose to humanity, and are calling for an investigation.
I mean, who has the sheer audacity to actually write that down?
They tested GPT-4 for months to ensure it was not dangerous. They tested GPT-4o for… a week.
It is becoming increasingly difficult to be confused about the nature of OpenAI.
Aligning a Smarter Than Human Intelligence is Difficult
People Are Worried About AI Killing Everyone
Other People Are Not As Worried About AI Killing Everyone
Why build something that might kill everyone? Why do anything big?
Roon: It doesn’t make any sense to me to wonder ‘what’s in it for sama … he owns no equity,’ and yet this is a very common question anywhere outside of San Francisco. Do you really think there’s a monetary value that compares against the glory of delivering ASI to mankind?
Roon and I, and I am guessing most of you reading this, can appreciate the glory. We can understand the value of greatness, and indeed the value of the act of striving for greatness. Very much so.
I worry and expect that most people no longer appreciate that, or appreciate it vastly less.
I do not see this as a good change, even if the glory is purely personal.
The Lighter Side