(2023-03-01) ZviM AI: Practical Advice For The Worried

Zvi Mowshowitz: AI: Practical Advice for the Worried. Some people (although very far from all people) are worried that AI will wipe out all value in the universe. (existential risk)

There are good reasons to worry about AI.

There are also good reasons that AGI, or otherwise transformational AI, might not come to pass for a long time.

I do not consider imminent transformational AI inevitable in our lifetimes.

There is also the highly disputed question of how likely it is that if we did create an AGI reasonably soon, it would wipe out all value in the universe. There are what I consider very good arguments that this is what happens unless we solve extremely difficult problems to prevent it, and that we are unlikely to solve those problems in time. Thus I believe this is very likely, although there are some (such as Eliezer Yudkowsky) who consider it more likely still.

If this is something that is going to impact your major life decisions, or keep you up at night, you need to develop your own understanding and model.

Many of these outcomes, both good and bad, will radically alter the payoffs of various life decisions you might make now. Some such changes are predictable. Others not.

None of this is new. We have long lived under the very real threat of potential nuclear annihilation. Employees of the RAND Corporation, which was in charge of nuclear strategic planning, famously did not contribute to their retirement accounts because they did not expect to live long enough to need them.

I would center my position on a simple claim: Normal Life is Worth Living, even if you think P(doom) relatively soon is very high.

One really bad reason to burn your bridges is to satisfy people who ask why you haven’t burned your bridges.

On to individual questions to flesh all this out.

Q: How Long Do We Have? What is the Timeline? Short Answer: Unknown. Look at the arguments and evidence. Form your own opinion.

Eliezer’s answer was that he would be very surprised if it didn’t happen by 2050, but that within that range little would surprise him, and that he had low confidence. Others have longer or shorter means and medians in their timelines. Mine are substantially longer and less confident than Eliezer’s.

Q: Are there any ‘no regrets’ steps you should take, similar to stocking up on canned goods? Would this include learning to code if you’re not a coder, or learning something else instead if you are a coder? Short Answer: None that you shouldn’t have taken anyway.

Long Answer: Keeping your situation flexible, and being mentally ready to change things if the world changes radically, is probably what would count here. On the margin, I would learn to code rather than learn things other than how to code. Good coding skills will help you keep up with events, help you get mundane utility from whatever happens, and if AI wipes out demand for coding, that will be the least of your worries.

Working directly on AI capabilities, or working directly to fund work on AI capabilities, both seem maximally bad, with ‘which is worse’ being a question of scope. (gain of function)

Using existing AI for mundane utility is not something I would worry about the ‘badness’ of.
