Samuel Arbesman at Long Now

October 31, 2012

Samuel Arbesman gave a talk on his new book “The Half-Life of Facts” at the Long Now museum tonight.  This is a good venue to hear authors speak.  It is quite intimate and there are generally plenty of good questions and discussions afterward.  Arbesman did a good job of fielding comments from the group.  You can see his presentation style in this short video.

His thesis is that there is an order and regularity to the way knowledge changes.  He thinks that studying this can help us order the knowledge around us.  Because of this change, we should expect some portion of the facts we take for granted now to be overturned.  He points out that doctors are taught in medical school to expect their field to change, and resources such as UpToDate provide a service to help doctors keep track of changes in medical knowledge.  (My personal experience makes me skeptical that many doctors actually take advantage of this sort of thing.)

Arbesman takes the position that we would all benefit from this approach to learning.  We should learn how to think and how to understand the world, but treat education as a continuing process, which is something I tried to touch on before.  Arbesman did comment that it’s better to rely on Google for current information than memorize a bunch of facts that may or may not continue to be true.  This takes me back to the ideas of Madeline Levine, whom I’ve mentioned before.  She argues that children should do less homework and more play because that builds creativity and problem-solving skills.

Another point Arbesman brought up was that there is so much knowledge now that many correlations can be discovered by mining the existing literature and joining together papers that each solve some fraction of a problem.  A speaker at the CogSci conference in 2010 at UC Berkeley mentioned that many answers probably go unnoticed and uncorrelated in the literature.  One effort to start detecting these hidden relations in the bioinformatics field is the CoPub project, a text-mining tool developed by Dutch academics and researchers.  TheoryMine does an amusing take on this idea by letting users purchase a personalized, AI-derived, unique, and interesting theorem.

Arbesman also suggested that facts in the hard sciences have longer half-lives than facts in biology, and that the half-life decreases even further for the humanities and medicine.  He mentioned that when physicists colonize other fields they are unpopular and create disruption, but that they bring in useful ideas.  But I wonder if it’s even theoretically possible to reduce sociology to physics.  This is the whole holism/reductionism dichotomy that Monica Anderson loves to explore.

Another point that came up was that while the idea of fact decay should encourage healthy skepticism, we should still try to avoid unhealthy skepticism.  During the question and answer session it was suggested that politically controversial topics such as evolution, global warming, and even GMO labelling are clouded with incorrect facts.  I think a lot of scientists get a little overly defensive about what they term anti-science policy decisions, and they might be incorrectly grouping GMO opponents in with the creationists and global warming denialists.  Hopefully, better understanding of fact decay will radiate out and attenuate some of the scientific hubris out there.

At the East Bay Futurist meetup today, we discussed a non-Singularity scenario similar to the vision in Lights in the Tunnel.  In this scenario, automation eliminates enough jobs that the economy stops functioning.  The idea that automation causes macroeconomic harm is known as the Luddite Fallacy.  Historically, automation has led to short-term unemployment, but the resulting lowered cost of goods supposedly created more demand and the displaced workers were able to find jobs in other sectors.

We were discussing this topic last year around this time at the East Bay Futurists.  It may be that the fall brings out these melancholy thoughts.  Maybe the damp and cold produces some malevolent mold or something.  But I am still looking for an economist who can show that automation is continuing to create jobs.  It looks like the world employment-to-population ratio has decreased from 62% to 60% between 1991 and 2011.

Blogger Steve Roth offers a couple of reasons why the Luddite Fallacy argument might be running out of steam:

1. The limits of human capabilities (Not everyone can get a PhD in Computer Science and eventually there may be nothing that machines can’t do.)
2. The declining marginal utility of innovation and consumption. (All the important stuff has been around since the 60’s and really how many more mansions do you need?)

Now supposedly there is some sort of argument that says consumption by the super rich can continue to drive the economy.  But I like how Roth dissects that argument using Marginal Propensity to Consume.  Basically poor people spend a greater portion of their income.  Apparently the third Lamborghini is somewhat less satisfying to the rich than having enough food is to the poor.
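To make Roth’s point concrete, here is a back-of-envelope sketch (the incomes and MPC values are invented purely for illustration) of how shifting the same dollars toward a household with a lower marginal propensity to consume drags down total spending:

    # Hypothetical MPC figures, chosen only to illustrate the argument.
    def total_spending(income_poor, income_rich, mpc_poor=0.95, mpc_rich=0.40):
        """Aggregate consumption from two household types with different MPCs."""
        return income_poor * mpc_poor + income_rich * mpc_rich

    base = total_spending(income_poor=30_000, income_rich=300_000)
    shifted = total_spending(income_poor=20_000, income_rich=310_000)  # $10k moves up the ladder
    print(base, shifted, base - shifted)  # spending falls by 10_000 * (0.95 - 0.40) = 5_500

The drop is just the transferred income times the difference in spending propensities, which is the whole of the argument in one line.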

Now there is also this idea that we can somehow transition from a work-based economy to an asset-based economy.  Robin Hanson alludes to this during this discussion with Martin Ford (see 21:20 for the asset argument).  Hanson’s point about machines generating more net wealth may be true.  Poverty is decreasing: the share of people living below the poverty line worldwide decreased from 52% to 28% between 1981 and 2008.  How do we transition from adding value through labor to just owning assets?  It’s especially hard for me to understand how this new asset economy works for the poor.  Do they switch from owning goats to owning GoatBots in order to survive?  A lot of people will get left out in the cold in that sort of economy.  Asset management is tricky and the sheep will soon get fleeced of their assets.

So we need to fundamentally restructure our economy in the face of accelerating automation.  Is it still possible to salvage the work model by finding ways to monetize what people do with their hearts and minds as Lanier suggests?  Or should we just give everyone $25,000 a year to drive consumption as Marshall Brain has suggested?

A lot of people seem to think that some sort of stipend will be required to keep the economy flowing.  However, I am fairly skeptical that this will come about.  Look how the EU is pushing austerity.  Here in the US, half the population demands freedom FROM health care.  I honestly think that we Americans will choose to go Mad Max before we turn (more) socialist.  But I could be wrong.  The Great Depression brought about a bunch of social programs.  Maybe something like that will happen again.

But Lanier’s argument is interesting: monetize hearts and minds, etc.  As I said before, Vinge thinks that the only thing humans can do which machines won’t be able to do is want things.  How do you monetize that?  And even if the SuperRich did suddenly decide to get all loving and start handing out stipends, what about well-being?  I think of the youth rioting in England in 2011.  Those kids had the dole, but they weren’t happy.

Seligman’s PERMA (Positive emotions, Engagement, Relationships, Meaning, and Accomplishment) model of well-being comes to mind.  We can hand people money and then what?  Star Fleet won’t be recruiting for a while yet.  Where does accomplishment come from?  Games?  The arts?  But still this is all premised on a bunch of megalomaniacal sociopaths handing over a bunch of money.  I’m not holding my breath.  I am just saving as much money as I can in the hopes of affording an adequate KillBot(tm) once ThunderDome time comes.

I recently completed a couple of 23andMe research surveys that measure your Empathy Quotient and Systemizing Quotient.  Empathizing–systemizing theory was developed by Borat actor Sacha Baron Cohen’s cousin, Simon Baron-Cohen, as a way to understand autism.  According to this theory, people with Autism Spectrum Disorders (ASD) have a below average ability to empathize and an above average ability to systemize.  They are more interested in systems than people.

Note that E-S theory differentiates between cognitive and affective empathy.  So ASD folks have trouble determining how others are feeling (cognitive empathy) but can empathize when they do understand the state of mind of others (they have affective empathy).  They are contrasted with psychopaths who know how you are feeling and don’t care and will use that to hurt or manipulate or run major corporations.

Gut flora and cognition

October 26, 2012

You may already know this, but I was surprised to hear that the microbes living in and on our bodies outnumber our own cells 10 to 1 and collectively contain orders of magnitude more genetic information.  I guess that’s not necessarily saying much since the common potato has almost twice as many genes as a human.  There is actually a huge effort in the EU to sequence the DNA of the human microbiome called MetaHIT.

MetaHIT discovered that there are 3, err, 2 distinct microbiome population types called enterotypes.  One enterotype is dominated by the Bacteroides genus of microbes and is related to high fat or protein diets.  This is the one a lot of fat people have.  The Prevotella enterotype is characterized by high carb diets and I assume is related to better metabolic health.

Gut flora have been implicated in everything from mood regulation to diabetes.  The fecal transplant stories are pretty freaky too.  This is a treatment for bacterial infections (primarily Clostridium difficile?) that involves transferring fecal matter from a healthy relative into the colon of a person desperately ill with a bacterial infection.  Once the gut flora is fixed, it takes care of the other bad bugs hanging around.  Some people also think there might be a connection between autism and gut flora problems.

I don’t want to get all Larry Smarr about it, but I am interested in getting my gut flora sequenced.  So I joined this study on the Genomera citizen science platform organized by the smart and cool Melanie Swan: Microbiome Profiling Response to Probiotic in a Healthy Cohort.  Here is the description:

Critical to digestive health, the microbiome is a newly available personal health data stream. Join this first-ever participant-organized citizen science microbiome project! Second Genome will provide microbiome sequencing to analyze potential shifts in the gut microbiome before and after 4 weeks of a daily dose of an OTC probiotic such as Culturelle® (Lactobacillus GG). A personalized report will be provided to each participant with the global shift in microbiome bacterial abundance by individual and study group, and a personalized profile of ratios pre and post intervention of Firmicutes, Bacteroidetes, Helicobacter pylori, and the most abundant 10-15 bacterial taxa at the phylogenetic family level (DRAFT of sample report). Human genetic SNPs related to Ulcerative Colitis and Crohn’s Disease are optionally requested to see if they may have a connection with microbiome profiles.

It’s $800 for the sequencing from Second Genome (which I hear is a good price) and I encourage anyone interested to join up.  We need more people to join before we can begin the study, so spread the word.  Check out Melanie’s blog when you get a chance; she covers a lot of QS, Futurist, and other modern topics.

Sorry, I know the title is Gut Flora and Cognition, but I don’t have much to say on that since the data isn’t in yet.  I suspect the work on mood or autism might pan out.  Those are cognitive things.  Also, I intend to recommend people track their cognitive performance with Quantified Mind during the probiotic study to see if this gut intervention makes you smarter or stupider.  Though I think that this article makes a good point when the author questions “the ability of a single strain of bacteria to impact on the vast inner ecosystem of the human gut.”  So a tiny dose of just a few strains of bacteria taken orally seems unlikely to have much impact.  Still, we shall see, we shall see.

Intrinsic and extrinsic motivation have been coming up in my conversations lately.  The Habit Design folks will say that only intrinsic motivation works.  If that is true, it throws a wrench into the plans of sites like Beeminder that provide people extrinsic motivation to meet their goals by charging them for defection.  I am not sure which side to take there.  Extrinsic motivation can certainly work, but it may not be preferable.

I have had projects that languished because I didn’t find them interesting and the clients didn’t pester me for status updates.  Left to my own devices, I will only work on projects that are fun or novel.  But when clients do demand project updates, I do get motivated to work on the less fun stuff.  One friend likened this to outsourcing your boss.  Why be your own boss when you can delegate that task to someone else?  But ideally I would be able to find something intrinsically motivating about every project.  If I can’t, I should just pass it along to my associates to handle.  When we say everyone should do work that they enjoy, we are really saying that they should be intrinsically motivated to do their work.  Kurzweil said that he feels that he retired at age 5 since he loves his work.

I heard an interview on the radio with psychologist Madeline Levine a few weeks ago.  She was talking about child development and she made some interesting points about letting kids fail so that they can learn and be independent.  She also talked about teaching kids intrinsic motivation.  One example she gave was asking kids how much they learned from a test at school as opposed to focusing on what the grade was.  She calls this intrinsic motivation authentic.  That’s an unusual way to define authentic, but it makes sense.  Authentic people are more intrinsically than extrinsically motivated.  Of course the line between internal and external can get a bit fuzzy.  Much of what we value is learned from others after all.  We don’t live in a vacuum.  But I do think that we take ownership of values and goals at some point.

One fellow I was talking with tonight brought up the Buddhist idea that desire is suffering.  Just eliminate desire and the suffering goes away.  That’s sort of like James’ equation:

Self-esteem = Success / Pretensions

As you can see, simply reducing pretensions to a low enough value can give even the biggest loser an enormous sense of self-esteem.  Which is sort of how I feel about that Buddhist idea.  Why bother with life at all if you desire nothing?  But I guess they are trying to escape from some horrid endless cycle of reincarnation or something.  That’s why I like my mindfulness stripped of all that superstitious bullshit.  But introspection might perversely be a way to discover what truly motivates us.  Now I just need to write up some more specific instructions on that and I will have a self-help best seller on my hands.

I’ve been thinking a lot lately about problems with science.  These epistemological questions come to mind:

  • What is knowledge?
  • How is knowledge acquired?
  • To what extent is it possible for a given subject or entity to be known?

So it’s a huge elephant of a field, but I will nibble at it one bite at a time.

I should point out that I make my living as a systems engineer, so I like to distinguish between theory and practice.  I tend to think of science as being more theoretical and engineering as being practical.  Also, I am sympathetic to the constructivists, so I tend to agree with Feynman’s “If I Can’t Build it, I Don’t Understand it.”

This question of how much faith we can put in science initially came to mind when we were asking how much we can trust medicine.  A family member had cancer and we started researching the best treatment options.  The first problem we encountered was that medicine can’t keep up with science.  i.e. doctors can’t keep up with the deluge of journal articles.  But the next problem is more germane to this discussion, namely conflict of interest.

Corporate funding of research in agriculture, for example, has surpassed government funding since the 1990’s.  One can certainly see why corporations would want to control the research in their respective fields given the importance of science in determining public policy.  Manipulation of research can be very subtle.  It spans the range from selectively funding research that supports industry interests to setting the actual scientific standards to favor industry (i.e. determining methodology, setting thresholds, etc.).  I would love to hear how my libertarian friends would respond to this.  Fire away, I am moving on.

Another point worth mentioning is related to Quantified Self and self-knowledge.  I had a conversation last year with a big QS guy who was pondering the relationship between self-tracking and science.  In one sense, findings from this n=1 self-tracking cannot be generalized to larger populations.  But in another sense you can learn things that matter to yourself which might never be extracted from the n=many findings which average out all individual differences.  (That said, huge amounts of this n=1 data is being aggregated by sites like MedHelp and used in research.)
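Here is a toy simulation (completely made up, just to illustrate the averaging problem) of how a real per-person effect can be invisible in the group mean:

    import numpy as np

    rng = np.random.RandomState(42)
    # Suppose half the population responds +5 to some intervention and half responds -5.
    individual_effects = np.where(rng.rand(1000) < 0.5, 5.0, -5.0)
    measured = individual_effects + rng.randn(1000)   # add some measurement noise

    print("group mean effect:", round(measured.mean(), 2))   # ~0: washed out at n=many
    print("one person's effect:", round(measured[0], 2))     # clearly non-zero at n=1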

But my main point from the previous paragraph is that there is a lot to be learned about ourselves that is currently inaccessible to science.  Meditation might be a good example.  Science can tell you the health outcomes of meditators or measure the brain wave frequencies of meditators.  But, it can’t reveal your own individual thought patterns to yourself the way mindfulness training might.  So the knowledge provided by science is in many ways incomplete.

An anti-atheist rant I saw on a friend’s Facebook page went on about how science constrains your world view to a box and included some nice Max Planck quotes:

“Anybody who has been seriously engaged in scientific work of any kind realizes that over the entrance to the gates of the temple of science are written the words: ‘Ye must have faith.'”
– Max Planck

It’s just nice to be reminded of that, but I don’t want to get into it too much.  I’d rather place faith in a bunch of bickering scientists than a shaman on peyote or a pedophilic priest.  But one other point that was brought up was that scientific consensus does change over time.  And this is a good point.  We should temper the confidence we have in current findings.  Entire scientific paradigms have been routinely discarded throughout history.  (Some are more stable, to be sure; check out Egyptian medical procedures circa 1600 BC: examination, diagnosis, treatment, and prognosis.  Pretty decent.)

But then we get into some of the current problems with how research is generated today.  Publication bias, the tendency for positive results to be published more readily than negative ones, is part (all?) of the problem.  This paper even asserts:

Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.

That’s nice.  I feel so much better about science now.
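For anyone who wants the arithmetic behind that claim, here is a minimal sketch using the standard positive-predictive-value calculation (my parameter choices, not the paper’s, and it ignores the bias term the paper also models):

    def positive_predictive_value(prior_true=0.1, alpha=0.05, power=0.8):
        """Probability that a statistically significant finding is actually true."""
        true_pos = prior_true * power          # true hypotheses that reach significance
        false_pos = (1 - prior_true) * alpha   # false hypotheses that reach significance anyway
        return true_pos / (true_pos + false_pos)

    print(positive_predictive_value())       # ~0.64 when 1 in 10 tested hypotheses is true
    print(positive_predictive_value(0.01))   # ~0.14 when 1 in 100 is true: most "findings" false

Once the prior probability that a tested hypothesis is true gets small, false positives swamp true ones, which is exactly the situation the quote describes.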

Next we come to the issue of reproducibility.  Apparently a lot of studies are never reproduced and can’t be replicated outside the author’s lab.  It might be up to the private sector to separate the wheat from the chaff.  And according to a Bayer study from last year, they are finding plenty of chaff.  It’s easy to see that no one gets ahead in academia by replicating someone else’s work, so there are some incentive problems around that.  And who would even publish replicated results apart from PLoS?  (Seriously, I am asking.  Post links in the comment section.)

Now, I do want to point out that scientists are sort of aware of this problem and there is plenty of work going on to identify bad research.  However, from arguments on Prop 37 to drawing distinctions between holism and reductionism the way Monica Anderson likes to do, I’m tending to downgrade my confidence in science lately.  But then again, my initial value was probably pretty high.  It’s not like I am going to go on a vision quest the next time I need to get new knowledge about the universe.

What is Futurism anyway?

October 21, 2012

Tonight I attended a party to celebrate the recent marriage of a friend.  I found myself being asked over and over again: “So what is Futurism anyway?”  I couldn’t resist responding that it was an art movement in Italy around the early 1900’s.  I do actually like a lot of futurist art.  They often tried to depict this sense of motion to capture the frenetic pace of modern life.  I am not too into the violence and fascism though.

But then I had to get serious and come up with a decent answer.  And that is why it’s a good idea to hang out with people outside your scene sometimes.  It forces you to articulate ideas that you often take for granted.  So I would say things like: Futurism is thinking about the future and wondering about what will happen.  Science Fiction is futurism.  Futurists consider the idea that technology is accelerating exponentially and ask what the consequences might be.

And a lot of people responded quite positively to this.  People feel these changes around them.  The impact of automation on jobs is becoming more evident.  We talked about the importance of education in these changing times and how budget cuts and skyrocketing college costs are putting kids into indentured servitude.  We talked about how China might come to rule the world.  I trotted out my standard bearish comments regarding China’s corrupt financial system and its lack of transparency and rule of law.

A scientist who recently drank the Kurzweilian kool-aid and had actually visited China was part of this discussion.  He mentioned that systems with different paths to accomplish similar ends were more stable.  I took this to be an endorsement of pluralism and I complained that China’s police state doesn’t allow for this.  Another guest chimed in that top down rule can’t work and bottom up societies have more ideas.  But our newly minted Singularitarian friend countered that the Chinese rulers carefully tweak the different elements of society, allowing more freedom in certain areas and restricting it in others.  I don’t understand how this system can possibly work, but it’s hard to argue with the growth numbers.  (Well, the specific numbers are probably fudged but there has clearly been lots of growth.)

I talked to another fellow who was into machine learning and who had doubts about the whole Deep Learning project that Norvig was recently crowing about at the Singularity Summit.  His opinion was that Deep Learning has been around for a while and that any recent success of the algorithms might be getting conflated with the benefits conferred by big data.  He said that other algorithms should be tested against this big data to see if they perform almost as well.  He mentioned support vector machines as one alternative, but these seem to require labeled training data, which Deep Learning doesn’t require.  So arguably, Deep Learning is nicer to have when evaluating big unlabeled data sets.  Anyway, when I asked Monica Anderson, she endorsed Deep Learning as being a thing, so I remain impressed for the time being.
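Here is a trivial sketch (my own illustration, nothing from the conversation) of that labeled-versus-unlabeled distinction: scikit-learn’s SVC demands a label vector, while an unsupervised method is happy to consume the raw matrix:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC

    rng = np.random.RandomState(0)
    X = rng.randn(200, 10)             # pretend these rows are unlabeled observations
    y = (X[:, 0] > 0).astype(int)      # labels only exist if someone supplies them

    PCA(n_components=3).fit(X)         # unsupervised: no labels needed
    SVC(kernel="rbf").fit(X, y)        # a support vector machine cannot fit without y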

My Deep Learning skeptic friend was also wary of Quantified Self.  I think his point was that over-quantification was being slowly forced upon people.  This hilarious scenario of ordering a pizza in the big data future immediately came to mind.  But as much as I love the ACLU, I don’t have much faith that they can protect us against big data.  I actually think that being into QS might better prepare people to deal with big data’s oppression.  At least QS’ers become more aware that personal data can tell a story and they are exploring how some of these stories can be self-constructed.  Hopefully this will help us navigate a future where nothing is private.

A recurring theme when thinking about the future is that humans will somehow get left behind as technological progress skyrockets beyond our comprehension.  A lot of humans are already getting left behind, economically and technologically.  Someone who can’t use search is at a massive disadvantage to everyone that can.  I try to be positive sometimes and point out that mobile devices are spreading throughout the developing world or that humans can augment to keep up with change.  But while we may live in an age of declining violence, I can see why some would still complain of sociopathic corporate actors and the policies being promoted that withdraw a helping hand from those in need.

At one point in the evening, toasts were made to the newlyweds and a passage by CS Lewis celebrating love was read.  I looked around as the various couples reacted to the emotional piece and I thought of my own girlfriend.  I thought about how we had been through death and madness.  Yet we managed to stay together, supporting one another, loving each other after all these years.  I thought about how deeply lucky we are to have one another.  I felt great happiness for these newlyweds with the courage to undertake this struggle for love.  I know we futurists can be cold, almost autistic in our dispassionate rationality, but it may well be love and empathy that will serve us best in the coming future where so little is certain.

There was much talk at the Singularity Summit this year of the Great Stagnation.  The basic idea is that contrary to popular belief (among transhumanists), innovation is actually in decline.  Here is an excellent blog post about the Huebner study that showed a reduction in per capita patents since 1870.  I guess John Smart takes issue with the data sampling, etc.  I have my own doubts that patents are a good metric for innovation, but it’s an intriguing idea.  Sure you have the internet, but where are the flying cars?  If per capita innovation is going down, maybe Homer was right all along and we are a bunch of degenerates.

Peter Thiel has been talking about this for a while now.  He points to high energy costs as a failure to innovate in the energy space.  He mentions that median real wages are unchanged since the 70’s and that this suppresses innovation.  He sees the space program in shambles.  Libertarian Thiel even (sort of) attributes the Apollo launch to the higher marginal tax rate of the 60’s.  Well, he concedes that the government had more macroeconomic control but exercised less microeconomic control.  (i.e. the Polio vaccine wouldn’t have made it past the FDA)

In a debate at Stanford between Thiel and George Gilder, Thiel expands on his idea that innovation in the real world of matter has been outlawed, driving all innovation into the virtual world of bits such as information technology and finance.  Gilder, on the other hand, takes the view that all fields will become subject to information technology and will soon start to see progress similar to that seen in the world of bits.  Kurzweil commonly makes similar arguments when he says that biology is becoming an IT field.  As an aside, I know some folks in bioinformatics and the fact is that this field is quite rocky.  Job growth isn’t very impressive.  It’s one thing to crunch the numbers, it’s another thing to deliver tangible results.

So Thiel focuses on the real world and talks about how food production isn’t outpacing population by much.  And he loves to bring up the theory that food cost triggered the Arab Spring.  I’m sympathetic to this; I see him coming from an embodiment angle with that.  He also takes some issue with the views of optimistic experts like Gilder and contrasts that with the views of average people.  The percentage of people who think the next generation will be better off than the last generation has steadily gone down over the past 40 years.  I like that angle too, it reminds me of The Wisdom of Crowds.

But I am always wary about these over-regulation stories.  First, improvement in communication technology must be providing a huge decrease in the pressure to innovate on the transportation side.  On the other hand, I wonder how much easier it is to move goods around.  I know most shipping cost is tied to fuel prices, which supports Thiel’s energy narrative.  But we do see logistics operations like Apple, Amazon, and even Walmart that simply could not exist without IT.  Sure, personal air travel might not be faster today than in the 1960’s, but my MacBook Air arrived at my doorstep from Shenzhen 4 days after I ordered it.

A lot of the huge progress on the physical side might just have been low hanging fruit and we may just be in the area of diminishing returns.  Gasoline’s energy density is hard to match.  The information theory folks like Gilder and Kurzweil seem to do some handwaving on the energy story.

Fracking might be a thing, but we have to see how it actually pans out.  I don’t blame people for getting pissed when it turns their tap water flammable.  These energy companies love to skimp on costly safety measures (Valdez, Deepwater Horizon, even pipeline monitoring).  Those Yankees whose drinking water gets hosed by cheap concrete lining in the fracking wells will probably shut it down.  Yankees are feisty like that.

Another problem with the over-regulation theory of innovation decline is that we would expect to see better innovation rates in places with less regulation.  So why don’t we see Texas taking the national lead in innovation?  Europe is pretty heavily regulated and we still see plenty of patents coming out of there.  So I don’t really disagree with most of Thiel’s observations (on this innovation thing only, not the other crazy shit).  I more question the causal mechanisms.  I look forward to his forthcoming book on this topic, coauthored with Max Levchin and chess great Garry Kasparov.  But I am skeptical about any grand plans to change the tides.

I talked with a bunch of Singularity Institute folks about this at the Less Wrong pre-party and the Summit itself and opinions varied.  Some say the innovation slump isn’t actually a thing.  Some say that it’s a thing but it doesn’t matter.  Some suggested that it might buy more time to develop friendly AI.

But what about the long, long term?  Say there is no Singularity and that innovation is merely a function of population growth.  If we have population stabilization or even a population crash, will we see innovation follow suit?  In Incandescence by Greg Egan, the survivors of an innovation crash are “mining” wire to make crude tools.  This is a common thread in SciFi.  In A Canticle for Leibowitz survivors create illuminated manuscripts of circuit boards.

Oh, but those are more technology crashes than innovation crashes…hmm…

Kevin Kelly makes a compelling argument about the nature of technology in What Technology Wants. This is a cool book that deserves much more discussion, but the basic idea is that new technology sort of springs from the existing framework of old technology.  He points out many inventions that were independently arrived at.  In some sense technological change becomes inevitable but also highly constrained.  Innovation is dependent on the underlying framework of enabling technologies.

So how are you really going to change that?

At the end of Pinker’s “Decline of Violence” talk last week he said that the evolution of social norms was an exciting area of inquiry.  If we accept Pinker’s data, but don’t feel satisfied by the causal mechanisms he speculates about (i.e. Pacification, etc.), it does seem like a logical next step to dig more fully into social norms.  Some of the researchers that he mentioned were: Nicholas Christakis, Duncan Watts, James Fowler, and Michael Macy.

Now I have to admit that I have a bias toward new ideas that can be easily attached to my existing conceptual framework.  (Arguably we all do and no one could learn anything new without attaching it to existing knowledge but this post isn’t about constructivism.)  It’s especially satisfying when new concepts resonate with remote structures elsewhere in the idea tree.

I read Christakis’s Connected when it first came out and it strongly influenced my thinking on human behavior.  I do plan on reviewing the content, but it basically explores the idea that human behavior is partially a network phenomenon.  This seems obvious and uninteresting until you drill down into some of the consequences.  The book shows that you have a higher chance of gaining weight if there are overweight people in your social network with up to three degrees of separation.  Yep, better start keeping track of your friends’ friends’ friends.  Don’t worry, this tool I saw on Melanie Swan’s blog can make it easier to map at least your LinkedIn network.

Now there was some controversy around the models used in this book.  I didn’t fully examine them and wouldn’t be able to independently evaluate the statistics anyway.  But I guess Harvard has to defend its own, and a bunch of statisticians from the old alma mater jumped to his defense.  I admit that I’m biased and I like the idea.  For the sake of argument, let’s agree that network behavior contagion is a thing.  (If any statistics guru out there can offer a layman’s explanation of why we should absolutely reject these findings, please do.)

Wait, sorry, I don’t have an argument yet.  But Christakis is just really cool.  In this video he talks about how he got into social network science and gives the example of caregivers getting sick from exhaustion and that affecting their other family members.  In a sense, he saw a non-biological contagion of illness.  My girlfriend and I experienced this firsthand when her sister died of cancer, so I deeply empathize with folks in that example.

On a brighter note,  Christakis gets into topology and nematode neuron mapping in the second half of the video.  This was the stuff we were talking about at the Singularity Summit with Paul Bohm this year.  See?  Christakis is cool.

But Pinker’s “Decline of Violence” thesis must also be supported by evolutionary population dynamics somehow, right?  So I pinged my awesome CogSci book club friend Ruchira Datta, and she recommended the following books for me to explore:

SuperCooperators

Genetic and Cultural Evolution of Cooperation

A Cooperative Species: Human Reciprocity and Its Evolution

I recall that there was a discussion about evolutionary game theory strategies at one of these meetups and it was suggested that there are population equilibria in which a certain percentage of “enforcer” agents (who punish defectors without regard to self-benefit) serve to protect a cooperative majority of nice, contrite, tit-for-tat agents.  So this is why we need tough conservatives around to protect all the cooperating liberals.
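A crude one-round sketch of that idea (the payoff numbers are invented, and the population is well-mixed rather than networked) shows how defection stops paying once the enforcer fraction passes a threshold:

    def payoffs(frac_coop, frac_defect, frac_punish,
                benefit=3.0, cost=1.0, fine=4.0, punish_cost=0.5):
        """Average one-round payoffs in a well-mixed public-goods game.

        Cooperators and punishers pay `cost` to contribute; the pooled benefit
        is shared by everyone.  Punishers also pay `punish_cost` per defector
        to impose `fine` on defectors.  All parameter values are hypothetical.
        """
        contributors = frac_coop + frac_punish
        share = benefit * contributors                  # everyone's cut of the public good
        pay_coop = share - cost
        pay_defect = share - fine * frac_punish         # defectors get fined by enforcers
        pay_punish = share - cost - punish_cost * frac_defect
        return pay_coop, pay_defect, pay_punish

    for fp in (0.0, 0.2, 0.5):   # vary the enforcer fraction, defectors fixed at 10%
        c, d, p = payoffs(frac_coop=0.9 - fp, frac_defect=0.1, frac_punish=fp)
        print(f"punishers={fp:.1f}  cooperator={c:.2f}  defector={d:.2f}  punisher={p:.2f}")

In this toy version defectors out-earn everyone when no enforcers are around, but once punishers make up a large enough slice of the population the defector payoff collapses below the cooperator payoff, which roughly matches the claim from the meetup.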

I brought this up at the LessWrong meetup tonight and someone objected that this might require group selection or some other troubling theory.  I wonder if it couldn’t be explained more along co-evolutionary lines similar to pollinators and flowering plants.

But anyway, where I’m trying to go with this is that we can take the above scenario and start to examine ways in which the ratios of cooperators and defectors change.  Then we somehow plug that into the whole social network science thing and we will have an awesome blog post or something.  (But I have a bunch more reading to do first.)

I did break down and actually attend a couple of talks at the Singularity Summit this year: Vernor Vinge and Peter Norvig.

Peter Norvig gave a talk that would have satisfied any generic group of AI developers.  Google is making some frightening progress.  This Deep Learning project is the most interesting aspect of his presentation from an AI architecture point of view.  It’s impressive that Google can pair two top-level researchers in the field (Andrew Ng and Geoffrey Hinton) with parallel processing expert Jeff Dean and scale up academic models onto a functional 1000 node cluster.   Boom, you are identifying cats and faces from unlabeled YouTube videos.  It must be sickening to anyone who wants to compete with Google in the AI space.
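For the curious, the flavor of unsupervised feature learning behind that demo can be sketched in a few lines: an autoencoder sees no labels at all and learns features purely by trying to reconstruct its own input.  This toy numpy version is my own illustration, nothing like Google’s actual sparse, multi-layer setup:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))                # stand-in for unlabeled data

    n_in, n_hidden = X.shape[1], 5
    W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_in)); b2 = np.zeros(n_in)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.1
    for _ in range(200):
        H = sigmoid(X @ W1 + b1)                  # the learned features (the "code")
        X_hat = H @ W2 + b2                       # attempted reconstruction of the input
        err = X_hat - X                           # reconstruction error is the only training signal

        # Backpropagate the mean squared reconstruction error.
        dW2 = H.T @ err / len(X);  db2 = err.mean(axis=0)
        dH = err @ W2.T * H * (1 - H)
        dW1 = X.T @ dH / len(X);   db1 = dH.mean(axis=0)

        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print("reconstruction error:", float((err ** 2).mean()))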

But he never really mentioned friendliness.  I was hoping he would trot out some more theory behind this big data approach.  He gave a similar talk to Monica Anderson’s AI meetup a couple of years ago.  I was there for that and it was pretty cool to see him present to such a small crowd.

At the Singularity Summit this year, he also talked about Google’s translation service, which basically derives translations by aligning many, many parallel documents written in multiple languages.  I was hoping to ask him what happens when the algorithm starts consuming translations that were actually created by Google Translate.  It’s bound to screw them up if that happens.  But then I realized that Google probably saves every translated document and checks new documents’ checksums against previous translations before using them to build mappings.  That’s hard to picture though.  They manage:  A. Mind. Crushingly.  Large. Amount. Of. Data.
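If my guess is right, the dedup step could be as simple as fingerprinting every translation the system emits and skipping any web document whose fingerprint matches.  This is a sketch of my speculation, not anything I actually know about Google’s pipeline:

    import hashlib

    emitted = set()   # fingerprints of translations the system itself produced

    def fingerprint(text: str) -> str:
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    def record_emitted(translation: str) -> None:
        emitted.add(fingerprint(translation))

    def usable_for_training(candidate: str) -> bool:
        """Reject web text that is byte-identical to our own machine output."""
        return fingerprint(candidate) not in emitted

    record_emitted("Bonjour le monde")                  # machine output we served earlier
    print(usable_for_training("Bonjour le monde"))      # False: our own output, skip it
    print(usable_for_training("Salut tout le monde"))   # True: genuinely new text

Of course an exact checksum only catches verbatim copies; anything lightly edited would slip through, so presumably you would want fuzzier fingerprints in practice.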

Vernor Vinge outlined some outcomes that he sees for the singularity.  One crazy idea he puts forth is a digital gaia where the world is minutely ornamented with digital sensors coupled to processors and actuators.  One day they all spontaneously “wake up” and all hell breaks loose.  He describes a reality with all the stability and permanence of the financial markets.  I had a vision of my SmartLivingRoom(tm) suddenly reconfiguring itself into a nightmare of yawning jaws and oozing orifices.  But in reality, we might just see wild fluctuations in the functionality of computationally enhanced environments; from smart to dumb to incomprehensible.

Next up: Augmented intelligence, a neo-neo-cortex provided by technology.  This is his preferred scenario.  Crowdsourcing is cool, yada-yada.  Vinge imagines a UI so seamless that accessing the augmentation would be as convenient as using the cognitive features it supports.  I used to like this idea until I started thinking about the security implications.  I don’t want my brain hacked.

He did make one amazingly succinct point about human computer synergy.   Computers can give us instantaneous recall and amazing processing speed, humans can provide that which we are best at: wanting things.

Humans want things.  For me this cuts to the very heart of the AI question.  I always complain that none of these AI geniuses can show us an algorithm to define problems.  (No, CEV doesn’t count.)  Algorithmic problem definition is just another way to say algorithmic desire definition.  Good luck with that one.

All simple human desires seem to arise from biological imperatives.  Maybe artificial life could give you that.   More complex desires are interpersonal and might be impossible to reduce back to metabolic processes.  You may want fame for the status, but the specific type of fame depends on which person or group you are trying to impress.  And that changes throughout your life.

And if we do build Artificial Life, it may well be that it can only function with similar constraints as, uh, non-artificial life.  In fact, Terrence Deacon may well be right and constraints are the key to everything.  Ahh, the warm fuzzies of embodiment are seeping over me now.

But seriously, SingInst, where is this algorithmic desire going to come from?  And once you get that, how the hell are you going to constrain the actions of GodLikeAI again?  I know, I know, Gandhi would never change himself into an anti-Gandhi.  But we may be like bacteria saying that our distant offspring would never neglect the all-encompassing wisdom of nutrient gradients.