AI is quietly becoming integral to modern supply chain processes. By Ken Lyon, published October 4, 2018 in Logistics Briefing.

There has been a huge amount of commentary, speculation and misinformation about Artificial Intelligence (AI) and its implications for society and the world at large. Much of this ‘noise’ understandably passes people by unless it touches on something directly relevant to them. That is unfortunate, because AI really is significant, but as with most things it needs some context before its relevance becomes clear to most of us.

Almost by stealth, many of the things we use every day now exploit AI to perform their tasks. Probably the most common example is Google search. Many aspects of Google’s search and classification algorithms have been using AI for some time, and some users may have noticed that answers have been getting smarter (more relevant) and faster. Leaving aside the inherent bias introduced to meet revenue goals (adverts), services such as Google Translate, Google Vision and others are all using AI to get better.

It’s not only Google. Every major technology service provider is using AI as a key element in its products. Apple uses AI extensively to underpin and operate its services. Its operating systems, iOS and macOS, use AI to deliver answers or suggestions as users go about their daily routine: a reminder about where you parked your car, or a notification about how long it will take to get to your next appointment, appearing on the screen of your phone automatically. Apple also introduced Siri into many of its products as an intelligent assistant, although Siri seems to be losing that race to Amazon’s Alexa, a service which is expanding its reach and influence across an ever-growing range of services. Microsoft is also in the game with Cortana.

All of these services, and many others, have been designed as ‘services’ that can be exploited by other applications. This is helpful and useful, but it is a two-way street. The AI engines developed by these companies require access to huge amounts of data to learn and improve. By providing interfaces (APIs, or Application Programming Interfaces – smart doorways between systems), other applications can exchange data or be interrogated by external systems. These mechanisms help the tech giants accumulate the enormous pools of data on which they train their AI tools.

Practical examples of this already exist in the logistics arena, where a number of TMS and WMS vendors have integrated their solutions with Amazon’s Alexa via published APIs, so that users can simply ask the system for information rather than navigating through numerous menus for the answer.
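To make that idea concrete, the sketch below shows, in Python, roughly how a voice-assistant skill might turn a spoken question such as “where is order 12345?” into a call against a TMS over a published API. The endpoint URL, credentials and response fields are hypothetical assumptions for illustration only, not any particular vendor’s actual interface.

```python
# A minimal sketch of a voice-query handler calling a TMS API.
# The URL, token and response fields are hypothetical placeholders,
# not a real vendor's interface.
import requests

TMS_API = "https://tms.example.com/api/v1"    # hypothetical endpoint
API_TOKEN = "replace-with-a-real-token"       # placeholder credential

def handle_order_status_request(order_id: str) -> str:
    """Turn a spoken order-status question into a TMS API call and a reply."""
    response = requests.get(
        f"{TMS_API}/shipments/{order_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    shipment = response.json()  # assumed fields: 'status' and 'eta'
    return (f"Order {order_id} is currently {shipment['status']} "
            f"and is expected to arrive {shipment['eta']}.")

if __name__ == "__main__":
    # The voice platform would supply the order number it extracted from
    # the user's utterance; it is hard-coded here for illustration.
    print(handle_order_status_request("12345"))
```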

Elements of AI are now regularly used for predictive analytics in areas such as intelligent transportation and route planning, demand planning and distribution requirements planning (DRP). In warehouse operations, the operational systems (WMS) of some vendors are being integrated with augmented guidance and robotic systems to accelerate inventory management; Amazon and Ocado are among the first movers in this area.
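To give a flavour of what predictive analytics can mean at its very simplest, the sketch below applies basic exponential smoothing to a short, invented weekly demand history to produce a one-step-ahead forecast. Real demand-planning engines use far richer models and far more data; the figures and the smoothing factor here are assumptions for illustration.

```python
# Simple exponential smoothing over a weekly demand history.
# The demand figures are invented for illustration; production
# demand-planning tools use far more sophisticated models.

def exponential_smoothing_forecast(history, alpha=0.3):
    """Return a one-step-ahead forecast from a list of past demand values."""
    forecast = history[0]
    for actual in history[1:]:
        # Blend the latest observation with the running forecast.
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

weekly_units_shipped = [120, 135, 128, 150, 142, 160]  # hypothetical data
next_week = exponential_smoothing_forecast(weekly_units_shipped)
print(f"Forecast for next week: {next_week:.0f} units")
```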

Customer services are also benefiting from AI. In one example, a government agency has improved its engagement with citizens by using AI to streamline the process for claiming benefits. Instead of having to navigate huge numbers of incomprehensible forms, people engage with a natural language engine that asks, in their own terms, what they are trying to do, takes them through the process, automatically completes the appropriate forms and processes the claim. It has been a huge success, reducing the cost of processing claims (fewer mistakes) while helping people get their entitlements faster.

Autonomous vehicles are also in the vanguard of systems looking to exploit AI, but for general adoption a number of other things also need to advance: GPS accuracy, communications bandwidth, legislative frameworks and so on.

Although these emerging services are really just the tip of the iceberg, they illustrate the huge potential.

Now for the reality check… For Artificial Intelligence to deliver any benefits, it requires access to huge data sets. Like a human brain, an AI engine needs data to learn and eventually understand what that data might mean. Any comparison between the brain of an average human and an AI engine is meaningless at the moment, because how human brains actually work is still poorly understood. With AI, at least, it is easier to explain the fundamentals behind machine learning.

AI can be roughly split into two groupings – Symbolic Learning and Machine Learning. Symbolic Learning is the branch of AI that encompasses robots, computer vision and mechanics: in short, designing systems that mimic how humans interpret and interact with the world. These systems have to be taught how to respond to external stimuli and events, with the algorithms that drive them adjusting behaviour accordingly. There are many videos on YouTube showing robots learning to navigate and frequently failing, with hilarious results, but over time they are getting better and the underlying systems are getting smarter.
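As a toy illustration of that ‘adjust behaviour from feedback’ loop, the sketch below has a simulated robot repeatedly run a corridor, observe how far off-centre it finishes, and nudge a steering correction accordingly. Everything here is invented for illustration; real robotic systems rely on far more sophisticated control and learning methods.

```python
# Toy feedback loop: a simulated robot learns a steering correction by
# adjusting its behaviour after each run. Entirely illustrative.
import random

TRUE_DRIFT = 0.7      # unknown pull to one side the robot must compensate for
LEARNING_RATE = 0.2

def corridor_run(correction):
    """Simulate one run and return how far off-centre the robot finished."""
    noise = random.uniform(-0.1, 0.1)
    return TRUE_DRIFT - correction + noise

steering_correction = 0.0
for trial in range(1, 11):
    error = corridor_run(steering_correction)
    # Adjust behaviour in the direction that reduces the observed error.
    steering_correction += LEARNING_RATE * error
    print(f"Trial {trial}: error={error:+.2f}, correction={steering_correction:+.2f}")
```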

Machine Learning is the branch focussed on providing answers to questions, or making predictions, based on the system’s understanding of the data it has ingested. Systems that translate languages, find specific images and patterns in millions of photos, draw inferences, achieve defined goals and answer specific questions are using Machine Learning. These are the AI systems we touch on a daily basis.
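For readers who want to see what ‘learning from ingested data’ looks like in code, the toy example below uses the open-source scikit-learn library to fit a small classifier that flags shipments likely to arrive late, then applies it to a shipment it has not seen before. The features, figures and labels are invented purely to show the fit-then-predict pattern, not a real logistics model.

```python
# Toy machine-learning example: fit a classifier on past shipments,
# then predict on a new one. Data is invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [distance_km, pickup_delay_minutes]; label 1 means it arrived late.
X_train = [[120, 5], [300, 40], [80, 2], [450, 60], [200, 15], [500, 90]]
y_train = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)            # 'learning' from historical shipments

new_shipment = [[350, 30]]             # a shipment the model has not seen
print(model.predict(new_shipment))     # predicted label: 1 = late, 0 = on time
```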

The amount of data required to achieve anything like human intelligence is colossal and may take some years to accumulate. Alternatively, if a smart enough system can be developed that mimics the brain’s ability to interpret its environment and react accordingly, it may happen much faster. The brain absorbs stimuli through a variety of senses: sight, hearing, taste, touch and so on. The artificial equivalents are cameras, microphones, pressure sensors and chemical detectors, all of which have been integrated into computer systems. Humans can learn and infer much faster than machines from a small number of data sets or examples, whereas machines can ingest enormous amounts of data far faster than humans and analyse it using brute-force computational power. The science of Deep Learning is starting to bridge the gap between the two, and as this sector of AI research advances we will see the first examples of truly ‘smart’ machines.

What is often lost in the narrative about AI is that every system, at the moment, is underpinned by programs and algorithms developed by humans. These people all have inherent biases in their view of the world, and it is a challenge not to embed those biases subconsciously in the underlying codebase. This is particularly true when systems have to be taught, or directed, in the early stages of their development.

Maybe we need to think about some very basic ‘rules’ for any artificial system we create. In science fiction, Isaac Asimov defined three laws that should be embedded in any robot, and academics have since attempted to define similar principles, which seems sensible. My suggestion would be that, as a first step, any artificial intelligence should be taught right from wrong. The debate that would ensue from this would be illuminating.

Source: Transport Intelligence, October 4, 2018

Author: Ken Lyon