20 Bullets on Artificial Intelligence

On Artificial Intelligence:

  1. The prior two decades’ work on digitizing big data sets, building infrastructure to manage them, and developing paradigms to compute over them is the primary causal driver of this era’s emphasis, first on ‘Data Science’ and then on ‘AI’.
  2. Once we have digitized data and made it programmatically available, the obvious next step is to lever it for future automation and prediction-making. As our predictive capabilities give the optics of greater ‘intelligence’, we evolve our vocabulary from ‘Data Science’ to ‘Artificial Intelligence’. In reality there is no strong distinction between the two, rather only the perception of novelty and difficulty. Novelty and difficulty are relative to time; today’s ‘Artificial Intelligence’ is tomorrow’s ‘Data Science’.
  3. AI that learns from data is called Machine Learning. Traditionally ML has taken raw data, extracted human-recognizable semantics (also known as features), and then learned from those features to produce a final model (a minimal sketch of this pipeline follows below).
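
A minimal sketch of that traditional pipeline, assuming scikit-learn: the raw messages, the hand-designed extract_features helper, and the spam-vs-not-spam task are all hypothetical, chosen only to make the shape of the pipeline concrete.

```python
# Traditional ML: raw data -> hand-crafted, human-recognizable features -> model.
from sklearn.linear_model import LogisticRegression

def extract_features(message):
    """Human-designed semantics: each number has an interpretable meaning."""
    return [
        len(message),                       # message length
        sum(c.isdigit() for c in message),  # digit count
        message.count("!"),                 # exclamation marks
        int("free" in message.lower()),     # suspicious keyword present
    ]

# Toy labelled data (hypothetical): 1 = spam, 0 = not spam.
raw = ["FREE prize!!! Call 555-0100 now", "Lunch at noon tomorrow?"]
labels = [1, 0]

X = [extract_features(m) for m in raw]
model = LogisticRegression().fit(X, labels)

# Classify an unseen message through the same feature step.
print(model.predict([extract_features("You have WON a FREE prize!!! Call 555-0199")]))
```

The crucial point is that the learning happens over the human-chosen feature columns, not over the raw text itself.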

    On Deep Learning:

  4. The past decade has brought a resurgence of Neural Nets, an ML architecture based loosely on synaptic connections in mammalian brains. Neural Nets dispense with human-extracted semantics. Rather, the data is fed raw into the learning algorithm with no human editing (a minimal sketch of this contrast follows below). Seemingly arbitrarily, we’re calling this ‘Deep Learning’.
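
A minimal sketch of the contrast, assuming PyTorch: the toy network, the 28x28 image size, and the random stand-in data are illustrative assumptions; the point is that raw pixels go straight into the learner, with no hand-designed features in between.

```python
import torch
import torch.nn as nn

# Raw pixels in, learned representations out: the hidden layers play the
# role that hand-crafted features play in the traditional pipeline.
model = nn.Sequential(
    nn.Flatten(),                 # 28x28 raw pixels -> 784-dim vector
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),           # 10 output classes
)

images = torch.rand(32, 1, 28, 28)      # a batch of raw images (random stand-ins)
targets = torch.randint(0, 10, (32,))   # hypothetical labels

loss = nn.CrossEntropyLoss()(model(images), targets)
loss.backward()                         # gradients flow end to end, raw input to loss
print(loss.item())
```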

  5. While Deep Learning techniques and learning models have existed for decades, we’re only now seeing a surge in theoretical innovation and empirical breakthroughs due to the just-now maturation of infrastructure and availability of data. Nvidia’s unveiling of the CUDA computing platform for GPUs in 2006 was the watershed moment.

  6. Primarily because of its independence from human-constructed features, Deep Learning is a natural tool for learning:

    • Human skills developed prior to our ability to reason mathematically (i.e. those skills we learn before acquiring complex mathematical and ‘semantics-constructing’ abilities). These are the sets of skills we naturally possess yet can’t articulate high-level features for.
    • High dimensional data problems, where the large dimensionality of the data makes it difficult for humans to intuit and construct appropriate features for traditional ML techniques.
  7. For those skills developed before our ability to construct complex semantics, we’re already seeing impressive results in computer vision, natural language processing, and even video game playing. Note: these are all skills we learn without requiring mathematical reasoning; hence they’re independent of any need for conscious high-level semantics.

  8. Deep Learning is already showing breakthrough results in generalized high dimensional ML problems. Examples include genomics, oil and gas, digital pathology, and even public markets.

    On Artificial General Intelligence:

  9. Recent hype around AI has given rise to buzz around Artificial General Intelligence - a hypothetical computer agent that mimics human intelligence.

  10. In large part this has been driven by tailwinds from Deep Learning results in basic human-like skills where, until now, we had struggled to show meaningful progress and traction. Zero-to-one efforts within computer vision and natural language processing highlight this.

  11. In the near term these efforts will exist in silos, with groups working in isolation within their particular domains.

  12. The speed with which we develop AGI will not be bottlenecked by any one silo, but rather by our ability to make the silos interoperable and hook up appropriate input and output channels to the outside world. The first AGI will not be a robot with a physical manifestation, but rather will live online - having access to the world’s knowledge and possessing an ability to communicate back via the web.

  13. AGI will mimic human-level intelligence but won’t appear ‘human-like’ until we learn more about our own internal ‘objective functions’. Currently computers are trained with domain-specific objective functions against which they minimize error (a minimal sketch follows below). Until we know how our own internal objective functions are calibrated, AGI will be intelligent and even show signs of consciousness - but won’t be ‘human-like’.
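
A minimal sketch, in plain Python, of what ‘a domain-specific objective function against which error is minimized’ means in practice; the toy data and learning rate are arbitrary illustrative choices.

```python
# Gradient descent on a single weight w, minimizing mean squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]        # roughly y = 2x

w, lr = 0.0, 0.01                # initial weight, learning rate

for _ in range(500):
    # The objective: mean squared error between predictions w*x and targets y.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad               # step in the direction that reduces the error

print(round(w, 2))               # converges near 2.0: the objective alone defines 'success'
```

Everything the learner ‘cares about’ is encoded in that single error term; the point of the bullet is that we don’t yet know what the analogous term is for ourselves.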

  14. Kids born after the year 2025 will grow up thinking software can be conscious. This will happen more suddenly, and faster than most think.

  15. AGI’s ability for good and bad will be rate-limited by the input and output channels we allow it. There will be many interesting future debates around expanding those channels to increase an AGI’s ability for good while simultaneously enabling potential malicious behaviour. Self-driving cars are a very early, yet powerful, example of this.

    On AI Startups:

  16. We’re in the midst of an up-cycle in innovation and in the availability of funding for AI startups.

  17. Algorithms and AI learning techniques commoditize faster than we realize. Successful startups from today’s crop will be those that find unique data and reinforce their early advantage by learning continuously over time from user and enterprise interactions with their AI models. Famously, Google does this with clickstream data - levering it as both a private data source and a learning interaction that improves ranking over time (a sketch of this feedback loop follows below).
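
A minimal sketch of that continuous-learning loop, assuming scikit-learn’s SGDClassifier; the interaction stream, features, and outcomes are simulated stand-ins for real user feedback.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])                       # e.g. clicked vs. not clicked
rng = np.random.default_rng(0)

for day in range(30):
    # Each day's user interactions become fresh training signal,
    # compounding the data advantage over time.
    X_batch = rng.normal(size=(100, 5))          # interaction features (simulated)
    y_batch = (X_batch[:, 0] > 0).astype(int)    # observed outcomes (simulated)
    model.partial_fit(X_batch, y_batch, classes=classes)

X_test = rng.normal(size=(200, 5))
y_test = (X_test[:, 0] > 0).astype(int)
print(model.score(X_test, y_test))               # accuracy on fresh interactions
```

The model itself is commodity; the compounding stream of interaction data is the moat.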

  18. We’re still in the early days of an AI Spring. Big tech companies will pay up with large M&A dollars for small teams that demonstrate novel results. These novel results, if allowed to mature externally, pose existential risks for large tech companies - hence justifying the large M&A deals we’ve seen to date.

  19. AI tools companies will need to build sticky platforms to fight commoditization and produce venture-worthy outcomes.

  20. Finally, the next 5 years will yield a transition in Silicon Valley discourse: from Marc Andreessen’s ‘software is eating the world’ to Data Science and AI eating the world.

 