I went to see Daniel Suarez read from his latest book, Kill Decision, at the Long Now this evening. I had originally been introduced to his work at a previous Long Now talk he gave years ago in support of his first book, Daemon. Daemon was about ways in which a bunch of narrow AIs could be cobbled together to form a deadly system. Kill Decision focuses on the problems around weaponizing autonomous drones.

Suarez is particularly concerned about allowing algorithms to kill humans. He believes that these “kill decisions” should be made by humans and that treaties should be created to restrict the use of autonomous drones. Suarez suggested that there has been a historical trend in warfare requiring the complicity of more and more people over the years. He compared the relatively few knights required to wage battles in the Middle Ages with the hundreds of thousands of soldiers who must cooperate to conduct modern wars. He argues that autonomous drones would reverse this trend and allow even a single person to wage a battle without the complicity of any other humans.

There was a very lively discussion, and it was suggested that autonomous drones are not unlike other modern weapons in how separated an attacker can be from the actual killing. Suarez stuck to his guns and insisted that it’s important that humans, and not algorithms, make the actual decisions. He acknowledged that humans still make horrible decisions that result in many deaths, but pointed out how much worse it could be if the process were automated. I suggested that if drone warfare followed the pattern of cyberwar, as Suarez suggested, then we could expect to see hackers contributing to the defense against automated drones. Alex P. suggested that we should start an open-source anti-drone drone project. I like that idea. Technology is often a double-edged sword, but there always seem to be more people willing to use it to help than to hurt (barely).

I had a conversation at CFAR last night in which we discussed the consequences of an objectivist vs a constructivist viewpoint. An objectivist statement might sound something like “there is a reality that exists in the absence of any observers.” A constructivist response might be “properties or characteristics of reality are a function of observer coupling with reality.” And an objectivist reaction would be “well, duh. Tell me something I don’t know.” So I doubt that there are any hard objectivists really. But then what good is constructivism?

One difference might be that the constructivist viewpoint privileges the observer’s role in defining reality. The constructivist might be biased to pay more attention to the observer when considering definitions of reality. In this way constructivism might be compared to post-modernism, a comparison I hadn’t thought of until a member of CFAR suggested it, and it does make sense. So we might expect that people slavishly sticking to objectivist viewpoints would be less aware of observer biases. And this is where I reach the conclusion that I haven’t met any strict objectivists: I don’t know anyone who might otherwise be labelled an objectivist who isn’t interested in cognitive biases.

Another related example is the “brain-centric” view of cognition criticized by constructivists (enactivists) like Noë or Thompson. Those who hold a “brain-centric” view of cognition might be accused of overlooking ways in which the body or inter-subjective experience defines cognition. So I might expect someone who oversubscribes to the brain-centric view of cognition to reject the findings of Christakis on social influences on behavior. However, I have yet to meet this strawman brain-centric individual. The enactivists are presumably fighting against someone, though. I guess I will dig through the literature and see if I can find any viewpoints to populate the other side of this argument.

neuro-marketing

July 13, 2012

At what point will marketing move to direct brainwave manipulation?
http://online.wsj.com/article_email/SB10001424052702303644004577520760230459438-lMyQjAxMTAyMDEwMTExNDEyWj.html

This video came to me via the VLAB mailing list: http://www.vlab.org/article.html?aid=450

Michael Liebhold talks about:
  • global supercomputing
  • combinatorial innovation
  • liquid data, augmented reality
  • ambient decisions
  • data visualizations
  • blending personal with clinical information ecosystems
  • datapalooza
  • RDF linked databases
  • predictive analytics on medical data sets
  • singularity skepticism “We don’t know what we are uploading”
  • data quality and provenance
  • using retail space to window shop and then buy on eBay
    • ask me about my great start-up idea on how to capture the real value of brick and mortar
  • Keiichi Matsuda’s “Augmented City”
  • personal ecosystems acting as contextual filters for data overload
  • Cognitive toolkit:
    • read SciFi: Sterling, Vinge, Rucker, Stross (the right Stross – hear hear!)
    • read over your head (even technical manuals)
    • social network like crazy
  • collecting credible signals about the future (I would love to see this compiled forecast graph that he talks about around 31:20)
  • AR is the web escaping from the screen into the real world
  • by 2025 the children will never know a world not ornamented with big data
One thing that struck me is that he doesn’t talk about the risks associated with how big data will be filtered for consumption. He says that data overload will be prevented by a filter based on your personal data ecosystem, but not who will control those filters. Will people be fed only the information that sells more product? Will it require hackers to break out of this matrix?