Failure to understand the complexity of new technologies will severely dampen potential


Almost daily we learn about some new technology that will either enrich or challenge (for the worse) our daily lives. Much of this stems from publishers and broadcasters seeking content that will hold readers’ or viewers’ attention for a few seconds, the goal being to boost audience figures so the marketing department can attract advertising. Sadly, this approach often highlights the wrong things in what are frequently significant stories. The current drama and hype surrounding artificial intelligence (AI) is a clear case in point.

But calm yourself, dear reader. The intention of this piece is not to dwell on AI yet again, but to ponder the nature and implementation of large-scale systems themselves.

I recently came across an interesting Twitter post by an influential technology investor based in California (where else?). They were musing on how systems using AI and machine learning technologies are deployed in the real world, and how that will inevitably change over time. The implications are interesting…

The proposition concerned the assumed inherent bias in any artificial intelligence. After all, surely any system built by humans must inherit at least some of its developers’ opinions and likes/dislikes?

Well, it doesn’t quite work like that, and it’s fair to say that the developers of the large-scale AI systems, the ones who work for the very well-funded technology giants that can afford to attract the brightest and best brains, know what they are doing. They understand how any AI algorithm should be built and, most importantly, how to train it with the most appropriate and accurate data. The problem is what happens after these systems are released out into the world.

Ti has made the point in numerous reports that a lot of the capabilities now residing in the large enterprise applications used across the transport sector (IMS, WMS, TMS, etc.) will evolve into services running in the Cloud. Customers will be able to access those services via subscription payments and combine them into unique solutions. This flexibility will enable companies to be very agile and adaptable to market changes. But…
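To make that idea of composition concrete, here is a minimal sketch, in TypeScript, of how a shipper might stitch a subscription warehouse service and a subscription transport service together into a single workflow. Every service name, URL, endpoint and field below is a hypothetical assumption for illustration only, not a reference to any real vendor’s API.

```typescript
// A hypothetical composition of two subscribed cloud services into one solution.
// All service names, URLs, endpoints and fields are illustrative assumptions,
// not references to any real vendor API.

interface StockLevel { sku: string; quantity: number; warehouse: string; }
interface TransportQuote { carrier: string; price: number; etaDays: number; }

const WMS_API = "https://wms.example.com/v1"; // hypothetical warehouse service
const TMS_API = "https://tms.example.com/v1"; // hypothetical transport service

// Generic helper: call a subscription service with its API key and parse JSON.
async function callService<T>(url: string, apiKey: string): Promise<T> {
  const res = await fetch(url, { headers: { Authorization: `Bearer ${apiKey}` } });
  if (!res.ok) throw new Error(`Service call failed: ${res.status}`);
  return res.json() as Promise<T>;
}

// Compose the two services: check stock first, then ask for transport quotes.
async function planShipment(sku: string, keys: { wms: string; tms: string }) {
  const stock = await callService<StockLevel>(`${WMS_API}/stock/${sku}`, keys.wms);
  if (stock.quantity === 0) return { sku, status: "out of stock" };

  const quotes = await callService<TransportQuote[]>(
    `${TMS_API}/quotes?origin=${stock.warehouse}&sku=${sku}`, keys.tms);
  if (quotes.length === 0) return { sku, status: "no carrier available" };

  // Business rule chosen by the composer: take the cheapest quote. Whether that
  // rule is appropriate is exactly the kind of judgement this piece worries about.
  const best = quotes.reduce((a, b) => (a.price <= b.price ? a : b));
  return { sku, status: "ready", carrier: best.carrier, price: best.price };
}
```

The point of the sketch is how little glue is needed to combine such services – and how much of the resulting behaviour rests on choices (the stock check, the cheapest-quote rule) that whoever assembles the solution has to genuinely understand.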

What happens when these services are combined into solution sets by people who have only a vague idea of the capabilities of the systems they are creating? The reality is that most people who rely on systems to do their jobs often have limited knowledge of those systems’ full capabilities. Often that is by design, as the owners of the systems require users to stick to a limited range of tasks as they do their jobs. It is also true that the senior managers charged with overseeing these systems have a limited appreciation of their capabilities. Hence their nervousness at the introduction of any new system: they prefer the devil they know to the opportunity of engaging with a new one.

By the same token, as consumer technology has blurred the lines between business and personal computing devices, the same characteristics can be observed there. How many people use only a limited number of apps on their smartphones or tablets? When these popular applications are changed by the developers, or the underlying operating system is updated and they no longer work the way they used to, there are a lot of unhappy campers who feel disenfranchised and helpless… until a friend or colleague helps them out.

People are now conditioned into thinking that information technology consists essentially of easy-to-use apps and should work the same way everywhere. A desirable goal. But there is very limited appreciation of the complexity underpinning this global infrastructure, or of the effort and thought (usually) expended by the developers.

These assumptions are being transferred onto the expected experience with AI engines. People find Alexa and Siri obeying spoken commands and telling jokes helpful, funny and cute (mostly). However, as these systems spread through the global information infrastructure, unintended consequences may occur.

Let’s assume, as my influential commentator posited, that numerous AI apps are available and a client requests a solution from a small, inexperienced technology provider. The client doesn’t really understand how to describe in detail what they want, they don’t want to pay too much money, they want it in a hurry, and they will deploy it into the hands of untrained (but low-cost) operators.

Correspondingly, the solution provider is inexperienced and doesn’t really understand how to elicit detailed requirements from the client, but knows that there are numerous AI apps available that could probably do the job – at a price low enough to allow a decent margin.

So the result is likely to be something along the lines of – and here I directly quote the commentator, who works for the firm A16Z – “a third tier outsourcer bolts a face recognition app together for the lowest bid, and sells to an unsophisticated client who doesn’t know what to ask, who then gives it to a minimum wage security guard with the instruction of ‘do whatever the system tells you’.”

I’m guessing you see the problem here?

Source: Transport Intelligence, February 28, 2019

Author: Ken Lyon