Artificial Intelligence, or AI as it is usually called in the media, offers huge potential for improving logistics and supply chain management operations. But AI is an umbrella term for a range of technologies, including machine learning, computer vision and others. All of them, however, share one key requirement: accurate and contextually correct data.
The more data available to drive, inform and train AI algorithms, the better the outcome – usually. Access to huge amounts of data is therefore critical. But having access to a lot of data is of no value if the data itself is inaccurate, incomplete or lacking context. This is why the largest tech companies have invested heavily in confirming the accuracy and contextual references of the data they capture.
Many companies involved in supply chain operations have huge amounts of data accumulated over many years. Unfortunately, a common problem with enterprise systems that have been running for a long time is that their datasets are full of inconsistencies, along with inaccurate and incomplete records. These databases are also very large, and it is unlikely they have ever been reviewed, validated and cleansed. Expecting any AI technology to use them as source material is therefore challenging and may produce unexpected outcomes.
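Before feeding a legacy dataset to any AI pipeline, a first step is simply to measure how dirty it is. The sketch below illustrates the kind of basic profiling involved; the record structure and field names ("order_id", "quantity", "ship_date") are hypothetical, and a real audit would cover far more checks.

```python
# Minimal sketch: profile a legacy dataset for the kinds of problems the
# article describes - missing fields, duplicates and impossible values.
# Field names are illustrative assumptions, not a real schema.
from collections import Counter

def profile_records(records):
    """Count missing fields, duplicate IDs and impossible values."""
    issues = Counter()
    seen_ids = set()
    for rec in records:
        # Missing or empty required fields
        for field in ("order_id", "quantity", "ship_date"):
            if rec.get(field) in (None, ""):
                issues["missing_" + field] += 1
        # Duplicate identifiers accumulated over years of operation
        oid = rec.get("order_id")
        if oid in seen_ids:
            issues["duplicate_order_id"] += 1
        seen_ids.add(oid)
        # Values that cannot be physically correct
        qty = rec.get("quantity")
        if isinstance(qty, (int, float)) and qty < 0:
            issues["negative_quantity"] += 1
    return dict(issues)

sample = [
    {"order_id": "A1", "quantity": 5, "ship_date": "2020-01-02"},
    {"order_id": "A1", "quantity": -3, "ship_date": ""},
]
report = profile_records(sample)
print(report)
```

A report like this does not clean anything by itself, but it quantifies the problem so a cleansing effort can be scoped before an AI service ever touches the data.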
The technology giants understand this issue and have the resources to address it at source. Perhaps more relevant is that many of them were founded only a few years ago and have, from day one, built systems that validate and correct the data they ingest. As a recent issue of The Economist explained, most AI services being developed by large technology providers have hundreds of people examining the datasets used to build them, in order to provide a clean and coherent baseline. This is not an option for many companies with legacy systems running day-to-day operations.
Almost all supply chain and logistics operations are run by interconnected information systems that generate, collect and store data at a prodigious rate. As we have mentioned many times, this data pool will grow very quickly as more devices come online with the explosion of the Internet of Things (IoT). Supply chain networks are therefore prime targets for technology vendors delivering AI services.
The greatest challenge in making sense of any dataset used for AI or machine learning is context. Context provides the critical information that gives meaning to the data, enabling the algorithm to understand what the data is and where and why it was captured. In turn, it enables the AI to use the data (or not) appropriately and to ‘learn’ more about the environment in which the data exists.
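The difference context makes can be sketched simply: a raw number means nothing to an algorithm until it carries the what, where, when and why. The record structure and field values below are hypothetical illustrations, not any particular system's schema.

```python
# Minimal sketch: a raw reading versus the same reading wrapped in context.
# All field names and values are illustrative assumptions.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ContextualReading:
    value: float       # the raw measurement itself
    unit: str          # what it measures: 7.5 of what, exactly?
    location: str      # where it was captured
    captured_at: str   # when it was captured (ISO 8601)
    purpose: str       # why it was captured

raw_value = 7.5  # on its own, this number tells an algorithm nothing

reading = ContextualReading(
    value=raw_value,
    unit="degrees_celsius",
    location="warehouse_3/cold_store",
    captured_at=datetime(2020, 2, 13, 9, 30, tzinfo=timezone.utc).isoformat(),
    purpose="cold_chain_compliance",
)
print(asdict(reading))
```

With the contextual fields attached, an AI system can decide whether the reading is relevant to a given question and weigh it accordingly; without them, 7.5 is just noise.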
If databases have little or no context associated with their datasets, their usefulness to any AI system is compromised. This is why companies considering supporting their operations with AI services must understand the significance of context and accuracy in any data used to feed them.
Google, Amazon, IBM and many others (especially in China) are developing a range of services using AI and machine learning technologies. Facial recognition technology is also growing rapidly. All of these raise issues of privacy and accountability, but the advantages they can potentially provide when augmenting commercial systems may be too compelling to ignore. Any technology of this nature must, however, be both ‘trustworthy’ and ‘trusted’, and that trust can only be established over time. The companies providing these services understand this, which is why they are putting so much effort into making sure the data feeding these systems is reliable.
The speed at which an AI service can respond to a given situation, either recommending a course of action or taking action itself, is orders of magnitude faster than any human. This could boost operational efficiency to a similar degree. But it also means that correcting any mistakes could be a complex and time-consuming exercise.
Any company looking to use AI services to augment supply chain and logistics operations should ensure that the vendors understand and qualify the data they plan to use to drive the system well before it’s operational.
Source: Transport Intelligence, February 13, 2020
Author: Ken Lyon
GLOBAL SUPPLY CHAIN INTELLIGENCE (GSCi)