Computer Says Yes - Part 1: Getting to Grips with Artificial Intelligence

Hello and welcome!

I'm going to inaugurate this blog with Part 1 of a series of posts focused on one of the hot topics of today: Artificial Intelligence (AI).

My intention is to deconstruct the vast field of AI into more relatable terms, by looking at what is possible with AI, where the limitations are and where the strongest opportunities for value creation currently exist. Throughout the series, I will shift perspective between the technological aspects of AI, the impact on people and society, and the narratives surrounding AI. Since I have a tendency to get philosophical, I will also examine some of the AI-inspired stories that have in turn inspired humans to think about what it means to be human, how global society is evolving and what new realities we are being pushed towards.

Part 1 of this series offers an introduction to AI and looks to highlight how pervasive AI is already in our daily lives.

Putting AI into context

The term "Digital Transformation" is in danger of becoming a buzzword. In any case, AI is a key enabler of digital transformation, which is changing the way we work. My intention is to compartmentalise AI to some degree so that its possibilities may be more clearly understood in the context of a digital transformation strategy. With that in mind, AI is most commonly broken up into three stages:

  1. Narrow AI describes systems that are very good at one particular function but have limited understanding beyond it. This is the type of AI that we use every day, and modern AI-driven innovation belongs to this domain. Humans tend to extrapolate and perceive AI to be more intelligent than it actually is.
  2. Artificial General Intelligence (AGI) describes a system that matches human intelligence across the full range of cognitive tasks, rather than excelling at just one.
  3. Superintelligence is when AI transcends human intelligence, a point otherwise known as the singularity. At that point we probably couldn't even comprehend this type of intelligence. Some say that we will reach it before 2050; others say that it's at least 100 years away. For now, this is science fiction territory.

Who Cares?

Listed in order, the top acquirers of AI startups in 2017 were Google, Twitter, Apple, Intel, Salesforce, AOL and IBM. With such major players investing in AI at a relatively early development phase in the field, it seems that the hype around AI is justified. In fact, most of these companies have been acquiring AI startups since 2011. Although Microsoft is not included in the list above, they also have a strong presence in the field.

Co-founded by Elon Musk in 2015, OpenAI is a non-profit AI research company. They believe that advancement in (Narrow) AI will culminate in Artificial General Intelligence (AGI) and that steps must be taken to ensure that this advancement is made safely. Here is their mission statement:

"OpenAI's mission is to build safe AGI, and ensure AGI's benefits are as widely and evenly distributed as possible. We expect AI technologies to be hugely impactful in the short term, but their impact will be outstripped by that of the first AGIs. We're a non-profit research company. Our full-time staff of 60 researchers and engineers is dedicated to working towards our mission regardless of the opportunities for selfish gain which arise along the way. We focus on long-term research, working on problems that require us to make fundamental advances in AI capabilities."

I'll be making reference to some of their research throughout this series, but further research papers and open source code are readily available from their website for any developers out there.

Deep Learning - Computers don't just say "no" anymore

Deep learning is a subset of the broader field of Machine Learning and describes a technique for teaching computers how to learn from examples. A major area within AI, we interact with deep learning algorithms every day of our lives, either directly or indirectly. Voice-activated assistants such as Apple's Siri and Microsoft's Cortana are powered by deep learning algorithms. Advertisers use them to reach their target market more precisely, with predictive advertising that increases the relevance of the advertisements we see. Netflix recommends what we might want to watch and Spotify recommends new music. All powered by deep learning, and not at all surprising. Most of the time, AI is an invisible technology: we hardly notice how deep learning algorithms influence our decisions every day.
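To give a feel for what "learning from examples" means in practice, here is a minimal toy sketch of my own (not any production system): a single artificial neuron trained by gradient descent to reproduce the logical AND function. Real deep learning stacks many layers of such neurons, but the core loop is the same: predict, measure the error, and nudge the weights to reduce it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training examples: the AND truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w = rng.normal(size=2)   # weights, initialised randomly
b = 0.0                  # bias

def sigmoid(z):
    # Squashes any number into the range (0, 1), read as a probability.
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    p = sigmoid(X @ w + b)              # forward pass: predictions
    grad = p - y                        # error signal (cross-entropy gradient)
    w -= 0.5 * (X.T @ grad) / len(X)    # adjust weights against the error
    b -= 0.5 * grad.mean()              # adjust bias likewise

predictions = (sigmoid(X @ w + b) > 0.5).astype(int)
print(predictions)  # -> [0 0 0 1], the AND function learned from data
```

Nobody told the neuron the rule for AND; it was only shown input/output pairs and adjusted itself until its predictions matched. Scale this up to millions of weights and millions of examples and you have, in essence, the systems behind the recommendations and assistants above.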

Here's an example of a deep learning algorithm trained to modify images. The algorithm learns from a huge database of images how to generate an image of a particular output: smiling, age, beard, blond hair, etc. I was able to put a smile on my face and enhance the effect of my hidden ginger gene. (If you would like to mess about with this yourself, you can do so here).

Here be monsters.

Deep learning algorithms make it possible for computers to see, hear, read, write and speak. These capabilities can be combined in many ways, and it is here, through innovation by combination, that many new applications are being created. Indeed, we are already in an exponential growth phase of AI-driven innovation. Myriad real-time translation apps are available for download, Adobe announced VoCo in 2016 (the Photoshop for voice), and products such as Office 365 are being enhanced with deep learning algorithms.

While deep learning algorithms perform their given tasks well (and in some cases better than humans), they still lack the understanding and creativity that surrounds the task. In spite of rapid development, AI has not yet reached human levels of intelligence.

For now at least, the vast majority of AI-driven value propositions are associated with optimisation.

In part 2, I talk about some of the challenges involved in teaching machines and how humans are learning something about themselves through this interaction. I also look at examples of how overcoming these challenges is leading to breakthroughs in automation technology, one of many areas that stand to be greatly impacted by AI in the short term.