Singularity Summit Day 2 – Vernor Vinge reminds me why I doubt the recursive self-improving AI
October 16, 2012
I did break down and actually attend a couple of talks at the Singularity Summit this year: Vernor Vinge and Peter Norvig.
Peter Norvig gave a talk that would have satisfied any generic group of AI developers. Google is making some frightening progress. The Deep Learning project was the most interesting part of his presentation from an AI architecture point of view. It's impressive that Google can pair two top-level researchers in the field (Andrew Ng and Geoffrey Hinton) with parallel-processing expert Jeff Dean and scale academic models up onto a functional 1,000-node cluster. Boom: you're identifying cats and faces in unlabeled YouTube videos. It must be sickening to anyone who wants to compete with Google in the AI space.
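I have no idea what Google's actual pipeline looks like, and nothing below is from Norvig's talk. But the mechanism underneath the cat detector is, at its core, unsupervised feature learning, so here's a toy autoencoder in plain numpy just to show what "learning features from unlabeled data" means at a scale a laptop can handle:

```python
import numpy as np

# Toy sketch of unsupervised feature learning: an autoencoder learns to
# reconstruct unlabeled inputs, and its hidden units end up acting as
# feature detectors. The famous "cat neuron" is the giant-scale version
# of this idea; this is just the textbook mechanism in a few lines.

rng = np.random.default_rng(0)
X = rng.random((500, 64))          # 500 unlabeled "images", 64 pixels each
n_hidden = 16
W1 = rng.normal(0, 0.1, (64, n_hidden))   # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 64))   # decoder weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(200):
    H = sigmoid(X @ W1)            # hidden features
    X_hat = H @ W2                 # reconstruction of the input
    err = X_hat - X
    # Backpropagate the squared reconstruction error through both layers.
    grad_W2 = H.T @ err / len(X)
    grad_H = err @ W2.T * H * (1 - H)
    grad_W1 = X.T @ grad_H / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("reconstruction error:", float(np.mean(err ** 2)))
```

No labels anywhere in that loop; the only supervision is "rebuild what you saw." Stack enough of these layers on enough machines and you get the YouTube-cat result.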
But he never really mentioned friendliness. I was hoping he would trot out some more theory behind this big data approach. He gave a similar talk to Monica Anderson’s AI meetup a couple of years ago. I was there for that and it was pretty cool to see him present to such a small crowd.
At the Singularity Summit this year, he also talked about Google's translation service, which basically derives translations by statistically mapping the same documents written in multiple languages. I was hoping to ask him what happens when the algorithm starts consuming translations that were actually created by Google Translate. It's bound to screw them up if that happens. But then I realized that Google probably saves every translated document and checks the checksums of new documents against previous translations before using them to build mappings. That's hard to picture though. They manage: A. Mind. Crushingly. Large. Amount. Of. Data.
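Purely my own guess at the bookkeeping (the function names here are made up, not anything Google has described): hash every translation the system emits, and refuse to train on any crawled document whose hash you've already produced yourself.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Checksum of a lightly normalized document, for recognizing text seen before."""
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hashes of every translation the system itself has emitted so far.
own_translations = set()

def record_emitted_translation(text: str) -> None:
    """Remember machine output so it can be excluded from future training data."""
    own_translations.add(fingerprint(text))

def usable_for_training(candidate_doc: str) -> bool:
    """Skip crawled documents that are just our own machine output coming back around."""
    return fingerprint(candidate_doc) not in own_translations
```

Of course, an exact checksum only catches verbatim copies; the moment someone lightly edits a machine translation and republishes it, this scheme misses it, which is part of why the problem seems hard to me.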
Vernor Vinge outlined some outcomes that he sees for the singularity. One crazy idea he puts forth is a digital Gaia where the world is minutely ornamented with digital sensors coupled to processors and actuators. One day they all spontaneously "wake up" and all hell breaks loose. He describes a reality with all the stability and permanence of the financial markets. I had a vision of my SmartLivingRoom(tm) suddenly reconfiguring itself into a nightmare of yawning jaws and oozing orifices. But in reality, we might just see wild fluctuations in the functionality of computationally enhanced environments: from smart to dumb to incomprehensible.
Next up: Augmented intelligence, a neo-neo-cortex provided by technology. This is his preferred scenario. Crowdsourcing is cool, yada-yada. Vinge imagines a UI so transparent that accessing it would be as effortless as using the cognitive features it supports. I used to like this idea until I started thinking about the security implications. I don't want my brain hacked.
He did make one amazingly succinct point about human computer synergy. Computers can give us instantaneous recall and amazing processing speed, humans can provide that which we are best at: wanting things.
Humans want things. For me this cuts to the very heart of the AI question. I always complain that none of these AI geniuses can show us an algorithm to define problems. (No, CEV doesn't count.) Algorithmic problem definition is just another way of saying algorithmic desire definition. Good luck with that one.
All simple human desires seem to arise from biological imperatives. Maybe artificial life could give you that. More complex desires are interpersonal and might be impossible to reduce back to metabolic processes. You may want fame for the status, but the specific type of fame depends on which person or group you are trying to impress. And that changes throughout your life.
And if we do build Artificial Life, it may well be that it can only function under constraints similar to those of, uh, non-artificial life. In fact, Terrence Deacon may well be right that constraints are the key to everything. Ahh, the warm fuzzies of embodiment are seeping over me now.
But seriously, SingInst, where is this algorithmic desire going to come from? And once you get that, how the hell are you going to constrain the actions of GodLikeAI again? I know, I know, Gandhi would never change himself into an anti-Gandhi. But we may be like bacteria saying that our distant offspring would never neglect the all-encompassing wisdom of nutrient gradients.