There has been unnecessary panic about a supposed threat to humankind from developments in artificial intelligence (AI). AI has been with us for a long time, from the philosophical discussions between Alan Turing and Donald Michie at Bletchley Park in wartime to the later ability to put their ideas into practice as second- and third-generation computers caught up with them. Turing died young, tragically and unnecessarily, but Michie was able to capitalise on the transistor revolution. He founded at least one AI research unit, and his regular column in Computer Weekly introduced his ideas to those of us who were making a living in commercial and administrative information technology. (His centenary comes up in September this year. One trusts that he will be celebrated appropriately then.)
By 1985, there were already real-world examples of the application of AI in the form of expert systems. Medical diagnosis and fault-finding in complex systems benefited from linking AI to faster and larger databases. Machine learning and artificial neural networks also became part of the mix. What has happened since then has been steady development in techniques and game-changing improvements in processing speed and memory capacity. It now looks like magic or, as Arthur C Clarke observed, any sufficiently advanced technology is indistinguishable from magic.
But fast or slow, these systems lack the vital spark, the thing that distinguishes real life from the artificial. They can never be self-aware or act on an initiative which is not implicit in their programming. Even machines that learn can only do so within the parameters set down by their designers and coders. And, as one wag pointed out on the radio recently, if one does act dangerously because of a program bug or a hardware fault, all you need to do is pull the plug out.
The great new danger, it seems to me, is of evil people taking advantage of state-of-the-art simulation on broadcast media and the Web. (Fortunately, we are a long way from real-life robots indistinguishable from humans à la Westworld.) Already, as this Galaxy commercial featuring "Audrey Hepburn" showed, well-known figures can be made to appear in videos and movies long after they are dead. Text-to-speech systems have been with us for some time. Linked to a program which can analyse a person's speech patterns from surprisingly little source material, they can make (for example) a politician appear to spout views diametrically opposed to their public stance.
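For anyone curious how low the barrier to plain text-to-speech already is, here is a minimal sketch using the freely available pyttsx3 Python library (my choice of example, not something mentioned above); it only drives the stock voices installed on the machine, and the speech-pattern analysis needed to mimic a real person's voice is a much bigger, separate step that is not shown here.

```python
# Minimal text-to-speech sketch using pyttsx3 (pip install pyttsx3).
# This speaks through whatever voices the operating system provides;
# it does not clone anyone's voice.
import pyttsx3

engine = pyttsx3.init()                      # initialise the default TTS engine
engine.setProperty("rate", 160)              # speaking rate, roughly words per minute

for voice in engine.getProperty("voices"):   # list the voices installed locally
    print(voice.id)

engine.say("I have changed my mind about everything I said last week.")
engine.runAndWait()                          # block until the speech has finished
```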
The possibilities are endless. Is that really Sir Keir Starmer praising private medicine? Is it really Sir Ed Davey having a conversation with John Redwood about a post-election coalition? Or Nigel Farage admitting he was wrong all along and that the UK's future is in Europe? Caroline Lucas praising the Aston Martin Bulldog and pressing for it to be put into production?
Those are all extreme examples, easily detectable as fakes (except perhaps the first). But more subtle fakes could easily take voters in and destroy what confidence remains in our electoral system. Perhaps politicians will be forced back to physical hustings ("this is really me, in the flesh"), which would be no bad thing.
Finally, there is an imminent threat to actors. In a notoriously insecure profession, many who have not made it on the stage or in feature films have been able to use their training to voice TV commercials or documentaries. Some, like Christopher Tester, pen-pictured in the i recently, have been very successful. Others may be just scraping a living. But all could be swept away by simulators. The bottom line has no scruples.