I did break down and actually attend a couple of talks at the Singularity Summit this year: Vernor Vinge and Peter Norvig.

Peter Norvig gave a talk that would have satisfied any generic group of AI developers. Google is making some frightening progress. The Deep Learning project was the most interesting aspect of his presentation from an AI architecture point of view. It's impressive that Google can pair two top-level researchers in the field (Andrew Ng and Geoffrey Hinton) with parallel-processing expert Jeff Dean and scale academic models up onto a functional 1,000-node cluster. Boom: you are identifying cats and faces from unlabeled YouTube videos. It must be sickening to anyone who wants to compete with Google in the AI space.
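For flavor, here is what "learning features from unlabeled data" means at toy scale. This is nothing like Google's actual system, which reportedly involved a network with around a billion connections; it's just my own minimal numpy sketch of an autoencoder that squeezes unlabeled inputs through a bottleneck and learns to reconstruct them:

    # Toy one-hidden-layer autoencoder: learn to reconstruct unlabeled
    # inputs through a bottleneck. Purely illustrative fake data.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((500, 64))                # 500 fake unlabeled "patches"
    W1 = rng.normal(0, 0.1, (64, 16))        # encoder weights
    W2 = rng.normal(0, 0.1, (16, 64))        # decoder weights
    lr = 0.05

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for epoch in range(200):
        H = sigmoid(X @ W1)                  # encode to 16 features
        err = H @ W2 - X                     # reconstruction error
        grad_W2 = H.T @ err / len(X)
        grad_H = err @ W2.T * H * (1 - H)    # backprop through sigmoid
        grad_W1 = X.T @ grad_H / len(X)
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1

    print("reconstruction MSE:", np.mean((sigmoid(X @ W1) @ W2 - X) ** 2))

The hidden layer's learned weights are the "features"; stack enough layers and data and you get from here to cat detectors, at least in spirit.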

But he never really mentioned friendliness.  I was hoping he would trot out some more theory behind this big data approach.  He gave a similar talk to Monica Anderson’s AI meetup a couple of years ago.  I was there for that and it was pretty cool to see him present to such a small crowd.

At the Singularity Summit this year, he also talked about Google's translation service, which basically derives translations by mapping many, many pairs of equivalent documents written in different languages. I was hoping to ask him what happens when the algorithm starts consuming translations that were actually created by Google Translate. It's bound to screw up the mappings if that happens. But then I realized that Google probably saves every translated document and checks new documents' checksums against previous translations before using them to build mappings. That's hard to picture though. They manage: A. Mind. Crushingly. Large. Amount. Of. Data.
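If I had to guess at the shape of such a guard, it might look like this hypothetical sketch: fingerprint every translation you emit and refuse to train on any crawled document you have seen yourself produce. Everything here is made up for illustration; the real thing would need planet-scale storage and fuzzier matching than an exact hash:

    # Hypothetical guard against training on your own output.
    import hashlib

    emitted = set()  # fingerprints of translations we have produced

    def fingerprint(text: str) -> str:
        return hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()

    def record_translation(output_text: str) -> None:
        emitted.add(fingerprint(output_text))

    def usable_for_training(document: str) -> bool:
        """Reject crawled documents we likely generated ourselves."""
        return fingerprint(document) not in emitted

    record_translation("Bonjour le monde")
    print(usable_for_training("Bonjour le monde"))     # False: our own output
    print(usable_for_training("Salut tout le monde"))  # True: train away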

Vernor Vinge outlined some outcomes that he sees for the singularity. One crazy idea he puts forth is a digital Gaia, where the world is minutely ornamented with digital sensors coupled to processors and actuators. One day they all spontaneously "wake up" and all hell breaks loose. He describes a reality with all the stability and permanence of the financial markets. I had a vision of my SmartLivingRoom(tm) suddenly reconfiguring itself into a nightmare of yawning jaws and oozing orifices. But in reality, we might just see wild fluctuations in the functionality of computationally enhanced environments: from smart to dumb to incomprehensible.

Next up: augmented intelligence, a neo-neo-cortex provided by technology. This is his preferred scenario. Crowdsourcing is cool, yada-yada. Vinge imagines a UI so seamless that invoking it would feel as effortless as the cognitive abilities it augments. I used to like this idea until I started thinking about the security implications. I don't want my brain hacked.

He did make one amazingly succinct point about human-computer synergy. Computers can give us instantaneous recall and amazing processing speed; humans can provide the thing we are best at: wanting things.

Humans want things. For me this cuts to the very heart of the AI question. I always complain that none of these AI geniuses can show us an algorithm to define problems. (No, CEV doesn't count.) Algorithmic problem definition is just another way to say algorithmic desire definition. Good luck with that one.

All simple human desires seem to arise from biological imperatives.  Maybe artificial life could give you that.   More complex desires are interpersonal and might be impossible to reduce back to metabolic processes.  You may want fame for the status, but the specific type of fame depends on which person or group you are trying to impress.  And that changes throughout your life.

And if we do build artificial life, it may well be that it can only function under constraints similar to those of, uh, non-artificial life. In fact, Terrence Deacon may well be right and constraints are the key to everything. Ahh, the warm fuzzies of embodiment are seeping over me now.

But seriously, SingInst, where is this algorithmic desire going to come from? And once you get that, how the hell are you going to constrain the actions of GodLikeAI again? I know, I know, Gandhi would never change himself into an anti-Gandhi. But we may be like bacteria saying that our distant offspring would never neglect the all-encompassing wisdom of nutrient gradients.

I continued my strategy of mostly skipping talks in favor of socializing today at the Singularity Summit. However, so many people I talked to raved about the Jaan Tallinn talk that I regretted missing that one. Many people were impressed by his presentation and the Prezi platform he apparently used to deliver it. My friend Peter McCluskey explained that his thesis built on Nick Bostrom's "Are You Living in a Computer Simulation?" paper but expanded it in new directions. Robin Hanson tried explaining to me that there are only three plausible descriptions of our current perceived reality if we accept the premise that future agents will have the ability to simulate humans:

  1. We are on the verge of extinction.  (Really?!  Wow, I am really missing something.)
  2. We are living in a simulation now.
  3. No one in the future cares to simulate humans. (Ok, unlikely that NO one would care to simulate human perceived reality.)

I find it quite hard to get my head around this one. My initial reaction is to question the assumption that future entities will be able to simulate humans. But given how much we can simulate already, betting against that capability is a pretty dark vision of the future too. I will go read Bostrom's paper and wait impatiently for the video to get posted.
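For those who, like me, need the logic spelled out: as I understand Bostrom's argument, the whole trilemma falls out of one fraction, the share of human-like minds that are simulated (my paraphrase of his notation, so take the details with salt):

    f_{\mathrm{sim}}
      = \frac{f_P \,\bar{N} H}{f_P \,\bar{N} H + H}
      = \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}

where f_P is the fraction of human-level civilizations that survive to a posthuman stage, N-bar is the average number of ancestor-simulations such a civilization runs, and H is the number of pre-posthuman people per civilization. Unless f_P is near zero (option 1: extinction first) or N-bar is near zero (option 3: no one cares to), f_sim sits near one (option 2: we are probably simulated).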

I had the rare privilege to briefly meet James O'Neill, who sits on the boards of the Thiel Foundation, SENS, and the Seasteading Institute (among others), though I had no idea who he was at the time. He talked a bit about Mithril Capital, Thiel's new VC firm. (Yes. This is a $400 million firm named after a mythical metal from the writings of J.R.R. Tolkien.)

O'Neill is also involved with Breakout Labs, which seeks to fill the funding gap for radical early-stage science projects that can't get funding from VC investors with short-term goals or from risk-averse long-term government sources. They are funding Dalrymple's Nemaload worm neuron mapping project that I learned about yesterday.

I also spoke further with Paul Bohm, who has some interesting ideas about leveraging social network topologies to help people share information. He suggested that these social networks might be isomorphic to the neural networks of a brain. He further suggested that the per capita decrease in innovation we may be seeing could be corrected by reducing the cost of information sharing. Now I really need to dig back into Christakis!
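That last claim is fun to poke at with a toy model (entirely my construction, not Bohm's): drop an idea on a small-world graph and watch how far it spreads as the per-edge chance of sharing, a crude inverse of sharing cost, goes up:

    # Toy diffusion model: one small-world "social network", three
    # per-edge share probabilities standing in for falling share costs.
    import random
    import networkx as nx

    random.seed(1)
    G = nx.watts_strogatz_graph(n=500, k=6, p=0.1)

    def reach(share_prob: float) -> int:
        """How many people an idea starting at node 0 eventually reaches."""
        informed, frontier = {0}, [0]
        while frontier:
            nxt = []
            for node in frontier:
                for nbr in G.neighbors(node):
                    if nbr not in informed and random.random() < share_prob:
                        informed.add(nbr)
                        nxt.append(nbr)
            frontier = nxt
        return len(informed)

    for p in (0.05, 0.15, 0.30):
        print(f"share probability {p}: reaches {reach(p)} of 500 people")

Below some threshold the idea fizzles near its source; above it, it saturates the network, which is at least suggestive of Bohm's point.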

Then suddenly a girl in shiny clothes appeared with a camera and whisked Bohm away to be filmed for the SpaceCollective website. She said they were from LA and looking to collect profiles of transhumanist types. It seems pretty interesting; I look forward to seeing some of the profiles they gathered at the Summit this year.

I attended the Singularity Summit again this year, which I believe makes my third. I previously attended in 2008 in San Jose and in 2010 in San Francisco. I have a tendency to overdo content consumption at these conferences. But for this one I made a conscious effort to spend more time meeting and talking with attendees and less time attending the talks. The Singularity Summit does a good job of posting videos of the talks on their website, so I will catch up on the presentations later.

One speaker I did see was 23andMe founder Linda Avey. She has a new company called Curious, which will be a personal data discovery platform for us QS types. She mentioned some very interesting devices that I want to check out. She talked about a GSR/HRV monitoring patch system that can interact with an ingestible pill sensor, but I didn't catch the manufacturer. She also mentioned a patch being developed by Sano Intelligence that monitors interstitial fluid in real time to provide an API to the bloodstream. Don't worry, she insists that you will barely notice the micro-needle. I definitely want that.

Avey also talked about telomere measurement to monitor stress, and microbiome sequencing. Gut flora is getting lots of attention lately. As an embodiment subscription holder, I am all for digging into the cognitive impact of the gut. Ah, so much to quantify and so little time.

So I skipped all the talks and just talked with people. What did I learn? Anders Sandberg is cool and had some interesting things to say about surveillance and neuronal stimulation. I will be peering at his blog the next chance I get. My pal Bill Jarrold told me to check out The How of Happiness. And so I shall put aside my skepticism about the importance of happiness; Bill is never full of shit.

I sat at a table with Eliezer Yudkowsky and Luke Nosek, which was fun. Eliezer noted with some disappointment that the Gates Foundation wasn't contributing anything to GiveWell, but he didn't admit to being surprised by this. Nosek talked a little about a new Founders Fund AI startup, vicarious.com, with Dileep George of Numenta fame. He also shared a bit of his VC strategy and suggested that it was important to pick non-crazy founders even if their ideas seemed crazy. (Hint: you can tell them by the mechanism they use to explain and rationalize their crazy ideas.)

Eliezer took some exception to the premise that ideas are less important to a startup's success than its founders. He wondered if VCs had this bias just because they were bad at detecting good ideas. I will have to side with Nosek and Paul Graham on this one. A good idea requires execution, a seemingly crazy idea requires actual sanity, and a bad idea might lead you to a better idea if you know how to fail properly. I came across these quotes on planning, which may be germane, business ideas being plans of sorts:

Those who plan do better than those who do not plan even though they rarely stick to their plan. – Winston Churchill, British Prime Minister

It is not the strongest of the species that survive, not the most intelligent, but the one most responsive to change. – Charles Darwin, scientist

At the same table, I got to hear about David Dalrymple's project to map all neural pathways of a nematode.  Yep, I think you heard that correctly.  Don't feel bad if you confused this with the OpenWorm project.  Dalrymple is also in on that one.

I got to meet the marvelous but shy (HA!) Razib Khan and thoroughly enjoyed hanging out with him. He shared some interesting views about the domestication of humans, such as: it's making our brains smaller! I asked if this was a necessary adaptation; I assume that big-brained primitive man would be less cooperative and less able to survive in this modern time. Razib conceded the point and mentioned examples of older populations dying in high numbers when introduced into cities, which might be due to immune system differences. Which raises the question: did our brains shrink to divert energy to our immune system, or even our gut? He was more open to the gut hypothesis when pressed to venture a guess. I hear his blog, Gene Expression, is quite good.

Let's see, what other notes did I take? Ah yes: Colin Ho showed us a cool hacked-up lifelogger camera. I want to host a video blog discussing topics of interest with my friends and have each participant wear something like this. We could edit the video feeds together to show multiple perspectives throughout the conversation. Technically challenging, but it might be worth it.
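The crude version might not even be that hard. Assuming the feeds were already trimmed to a common start time (the actual hard part), ffmpeg's hstack filter could tile two of them side by side. A sketch with made-up file names:

    # Hypothetical sketch: tile two (same-height, pre-synchronized)
    # lifelogger feeds side by side with ffmpeg's hstack filter.
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-i", "participant1.mp4",
        "-i", "participant2.mp4",
        "-filter_complex", "[0:v][1:v]hstack=inputs=2[v]",
        "-map", "[v]",      # the stacked video stream
        "-map", "0:a?",     # keep the first feed's audio if present
        "multi_perspective.mp4",
    ], check=True)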

Finally, I spent some time talking with Alex Peake, who shared his vision of how to accelerate the singularity.

I went to see Daniel Suarez read from his latest book, Kill Decision, at the Long Now this evening. I had originally been introduced to his work at a previous Long Now talk he gave years ago in support of his first book, Daemon. Daemon was about ways in which a bunch of narrow AIs could be cobbled together to form a deadly system. Kill Decision focuses on the problems around weaponizing autonomous drones.

Suarez is particularly concerned about allowing algorithms to kill humans. He believes that these "kill decisions" should be made by humans and that treaties should be created to restrict the use of autonomous drones. He suggested that there has been a historical trend in warfare requiring the complicity of more and more people over the years; he compared the relatively few knights required to wage battles in the Middle Ages with the hundreds of thousands of soldiers who must cooperate to conduct modern wars. He argues that autonomous drones would reverse this trend and allow even a single person to wage a battle without the complicity of any other humans.

There was a very lively discussion, and it was suggested that autonomous drones are not unlike other modern weapons in how separated an attacker can be from the actual killing. Suarez stuck to his guns and insisted that it's important that humans, and not algorithms, make the actual decisions. He acknowledged that humans still make horrible decisions that result in many deaths, but pointed out how much worse it could be if the process were automated. I suggested that if drone warfare followed the pattern of cyberwar, as Suarez suggested, then we could expect to see hackers contributing to the defense against automated drones. Alex P. suggested that we should start an open-source anti-drone drone project. I like that idea. Technology is often a double-edged sword, but there always seem to be (barely) more people willing to use it to help than to hurt.

This video came to me via the VLAB mailing list: http://www.vlab.org/article.html?aid=450

Michael Liebhold talks about:
  • global supercomputing
  • combinatorial innovation
  • liquid data, augmented reality
  • ambient decisions
  • data visualizations
  • blending personal with clinical information ecosystems
  • datapalooza
  • RDF linked databases
  • predictive analytics on medical data sets
  • singularity skepticism “We don’t know what we are uploading”
  • data quality and provenance
  • using retail space to window shop and then buy on eBay
    • ask me about my great start-up idea on how to capture the real value of brick and mortar
  • Keiichi Matsuda “Augmented City”
  • personal ecosystems acting as contextual filters for data overload
  • Cognitive toolkit:
    • read SciFi: Sterling, Vinge, Rucker, Stross (the right Stross – hear hear!),
    • read over your head (even technical manuals),
    • social network like crazy,
  • collecting credible signals about the future (I would love to see this compiled forecast graph that he talks about around 31:20)
  • AR is the web escaping from the screen into the real world,
  • by 2025 the children will never know a world not ornamented with big data

One thing that struck me is that he doesn't talk about the risks associated with how big data will be filtered for consumption. He says that data overload will be prevented by a filter based on your personal data ecosystem, but he doesn't say who will control those filters. Will people be fed only the information that sells more product? Will it require hackers to break out of this matrix?