June 16, 2023

Andreessen Horowitz on AI

A great article by Marc Andreessen of Andreessen Horowitz.

I argued for much the same ideas a few months ago, in conference articles (see my website) and on this blog.

A summary is below.


<< what AI is: The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other – it runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts. It is owned by people and controlled by people, like any other technology.


what AI isn’t: Killer software and robots that will spring to life and decide to murder the human race or otherwise ruin everything, like you see in the movies.


what AI could be: A way to make everything we care about better ...

What AI offers us is the opportunity to profoundly augment human intelligence


AI augmentation of human intelligence has already started – AI is already around us in the form of computer control systems of many kinds, is now rapidly escalating with AI Large Language Models like ChatGPT, and will accelerate very quickly from here – if we let it


Historically, every new technology that matters, from electric lighting to automobiles to radio to the Internet, has sparked a moral panic.


[N.b. I prefer to give the example of the Gutenberg printing press. Lots of people were terrified of it, obviously.]


AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will.


..., some of the Baptists are actually Bootleggers. There is a whole profession of “AI safety expert”, “AI ethicist”, “AI risk researcher”. They are paid to be doomers, and their statements should be processed appropriately.


the slippery slope is not a fallacy, it’s an inevitability


AI is not some esoteric physical material that is hard to come by, like plutonium. It’s the opposite, it’s the easiest material in the world to come by – math and code


The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not.


companies should be allowed to build AI as fast and aggressively as they can >>

