Tuesday, April 16, 2019

AN ARTIFICIAL INTELLIGENCE CAN NOW WRITE BOOKS

A computer just wrote a book. The title isn't particularly catchy--Lithium-Ion Batteries--and the author, a system called Beta Writer, isn't going to win a Pulitzer. However, it is a full-fledged, meaningful and readable book written entirely by a machine--the first, but certainly not the last.

Authors, watch out--one more "human only" skill bites the dust

If your reaction to this news is along the lines of "what's the big deal," that's understandable. Hardly a day goes by without news that AI has equaled or surpassed us plodding humans at yet another activity once thought to require uniquely human intelligence.

Artificial intelligence systems--let's just call them AIs for short--now match humans at a large and rapidly growing number of tasks, and far surpass us at some others. These include games like chess and Go, mastery of which was once seen as a pinnacle of human intelligence, as well as highly esteemed (and highly paid) skills like sinking basketball three-pointers. This is happening so rapidly and so frequently that most of us don't even notice the next advance.

However, some heavy-duty thinkers, including Elon Musk, Bill Gates and Stephen Hawking, have been warning us for some time about the potentially existential risks of AI.

Gates, Musk and Hawking are not so much worried about AIs that are better than humans at one particular task or another, but about the emergence of an AI that is smarter and more capable than humans in every area. This kind of entity, they point out, could rapidly design and create an even smarter AI, which in turn could quickly improve on itself, leading to an intelligence explosion that could leave the human race, quite literally, in the dust.

Beta Writer, the system that created Lithium-Ion Batteries, is pretty smart. It read thousands of scientific articles, extracted their most important findings, melded related items together, and then summarized them in readable, if technical, prose. It produced the kind of comprehensive, well-organized, up-to-the-minute review of a scientific or technical field that until now would have been written by an expert or a team of experts. As such, it joins the ranks of expert systems that are matching or surpassing humans at increasingly high-level tasks. As an author myself, I can't help but be impressed. However, it's far more limited than the kind of AI Hawking worried about.
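For the curious, here is a minimal sketch of that extract-cluster-summarize idea in Python. It is emphatically not Beta Writer's actual code; the toy abstracts, the topic count, and the crude sentence splitter are all assumptions made purely for illustration.

```python
# Minimal sketch of an extract-cluster-summarize pipeline, loosely in the
# spirit of Beta Writer (not its actual code). Requires scikit-learn.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for article abstracts; a real run would ingest thousands.
abstracts = [
    "Silicon anodes raise lithium-ion cell capacity but swell during cycling.",
    "Nanostructured silicon anodes reduce swelling and extend cycle life.",
    "Solid electrolytes improve the thermal safety of lithium-ion batteries.",
    "Ceramic solid electrolytes suppress dendrites and resist overheating.",
]

def split_sentences(text):
    """Naive sentence splitter; a real system would use a proper NLP library."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

# 1. Extract candidate sentences from every source document.
sentences = [s for doc in abstracts for s in split_sentences(doc)]

# 2. Represent each sentence as a TF-IDF vector and group related ones.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(sentences)
n_topics = 2  # assumed number of topics for this toy corpus
clusters = KMeans(n_clusters=n_topics, n_init=10, random_state=0).fit(X)

# 3. "Summarize" each cluster by keeping the sentence closest to its centroid.
for topic in range(n_topics):
    members = [i for i, label in enumerate(clusters.labels_) if label == topic]
    sims = cosine_similarity(X[members],
                             clusters.cluster_centers_[topic].reshape(1, -1))
    best = members[int(sims.argmax())]
    print(f"Topic {topic}: {sentences[best]}")
```

Even this toy version hints at why the result reads like a review article rather than a novel: the machine selects and stitches together what its sources already say, rather than inventing anything new.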

Most researchers working on AI argue that these systems, even as they grow savvier and more capable, are on the whole benign: helping doctors make accurate diagnoses, giving even amateur investors high-quality guidance, and making all kinds of complex systems such as air traffic, shipping and product delivery run more smoothly. AIs are now integrated, mostly invisibly, into almost every aspect of our lives, and we rely on them whether we choose to or not. And although they have occasionally done destructive things, such as the financial "Flash Crash" of 2010 or the deadly real-world crashes of Boeing's 737 MAX 8 aircraft, it wasn't because they were too smart or malicious.

It's extremely difficult to predict when, if ever, an AI will emerge that surpasses humans in all of the areas we consider important, including a deep understanding of itself and the world, emotional as well as analytic intelligence, and creativity and imagination as well as problem solving. However, those who have thought most deeply about this point out that such an entity may well have values and goals very different from ours.

These critics, or prophets, warn that we should be working as hard on "the control problem"--making sure that any emerging super-smart AI has the safety and security of us humans embedded so deeply into its design that it can't decide to act against us--as we are working to make AIs smarter, more capable and more ubiquitous.

All I know is that hundreds or thousands of times more money and talent are being poured into developing smarter, more capable AIs than into that boring, but potentially vital, control problem.

-----

Earlier posts on AI and its risks:

Google's Alphazero is now scary smart

Advanced artificial intelligence--friend or foe?

-----

If you enjoyed this post, please sign up to follow or receive email alerts from zerospinzone
