AI will continue to improve quickly, seemingly at an exponential rate
Artificial Intelligence. “A poor choice of words in 1954,” according to Ted Chiang, the multiple Hugo and Nebula Award-winning writer commonly seen as the preeminent successor to Isaac Asimov. Asked for his favoured term, he answered simply: “Applied statistics”.
AI is applied statistics. AI is not intelligent, at least not in the everyday meaning of the term. It is not self-aware; it has no random thoughts or self-reflections; it struggles with the common sense and ethical reasoning that humans have evolved as individuals and communities; it entirely lacks empathy or compassion; and it has only a limited grasp of the nuances of language and behaviour. It remains an artificial construct and is highly susceptible to programmatic bias and error. Furthermore, AI cannot be held accountable for its decisions, its algorithms can be difficult to explain, and its outputs can be impossible to audit.
Having said that, AI, understood as a branch of applied statistics, is incredibly powerful. AI can sift, sort and identify patterns, programmatically acquire and apply information and new skills, weigh and reason with algorithmic logic to analyse complex problems, and continually adapt to new situations by machine learning from previous processing cycles.
Generative AI can create new and novel texts and images from cleverly written prompts and requests, leveraging algorithms and gigabytes, terabytes or even petabytes of data to generate desired, and sometimes unexpected, outcomes. These outcomes can then be fed back into the hopper and further analysed and interrogated with increasingly specific prompts and requests, as in the sketch below.
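As a minimal sketch of that prompt-and-refine loop, the Python below uses the OpenAI client library as one assumed backend; the model name, prompts and the ask helper are illustrative placeholders, not recommendations.

```python
# A minimal sketch of the prompt-and-refine loop described above.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
# the model name and prompt text are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, history: list[dict]) -> str:
    """Send a prompt plus all prior turns; return and record the reply."""
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
draft = ask("Summarise the red flags in this transaction narrative: ...", history)
# Feed the outcome back into the hopper with a more specific follow-up.
refined = ask("Focus only on counterparties in high-risk jurisdictions.", history)
print(refined)
```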
More focused Extractive AI can pull valuable information out of reams of structured and unstructured documents, images and data, allowing users to identify patterns, find meaning and review actionable findings far more quickly.
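To make that concrete, here is a hedged sketch of one common extractive technique, named-entity recognition, using the Hugging Face transformers pipeline; the model choice and the KYC-style sample text are assumptions for illustration only.

```python
# A sketch of "extractive AI": named-entity recognition over an
# unstructured KYC-style snippet. The model and text are illustrative.
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word tokens into whole entities
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

document = (
    "Payment of EUR 250,000 was routed from Acme Holdings Ltd in Valletta "
    "to a correspondent account controlled by John Doe in Vilnius."
)

for entity in ner(document):
    # Each result carries the entity text, its type (PER/ORG/LOC) and a score.
    print(f"{entity['word']:25} {entity['entity_group']:5} {entity['score']:.2f}")
```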
AI will continue to improve quickly, seemingly at an exponential rate, and some developers and media outlets are already talking about difficult-to-understand and sometimes impossible-to-explain emergent behaviours or abilities.
In evolutionary terms, emergent behaviours and abilities are easily seen in bird flocking and ant colony organisation - no individual bird or ant is encoded with the flock- or colony-level rules, yet together they manage massively complicated flight patterns, build physical structures and allocate resources. These successful emergent behaviours developed through evolutionary hit-or-miss experimentation over hundreds of millions of years. More recent human emergent behaviours are still playing out - our ability to transfer formerly successful family and clan survival tactics and instincts to a range of national and international political and global climate emergencies is evolving through trial and error and is, as yet, unproven.
The evolution of AI emergent behaviours is being observed over daily, weekly and monthly timeframes, and those behaviours are unbounded by common sense or ethical reasoning. Some of these unexpected behaviours and capabilities are welcome - novel approaches to problem solving, creative composition in language, music and art, self-improvement and process optimisation. Others are dismissed as mirages of mismeasurement, or reported as sensational, speculative AI doom scenarios that are often variations on unrealistic Terminator-like fantasies.
While the former is common and the latter unlikely - for now - one overwhelmingly common negative emergent behaviour is AI’s simple and confident presentation of entirely erroneous results, the so-called hallucinations. These faults stem largely from the models’ incredibly complicated programmatic limitations, with errors and biases referenced and factored in again and again through endless rounds of data sets, algorithms and weights.
Clearly more measurement, controls and quality feedback loops are necessary - from better testing, peer review, human intervention, analysis and continuous monitoring to, who knows, maybe even an AI version of Asimov’s Three Laws of Robotics?
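One such control might look like the sketch below: a hypothetical human-in-the-loop gate that auto-accepts only those model outputs that are both high-confidence and verifiable against a source document. The threshold value and the verify_against_source helper are invented for illustration, not taken from any production system.

```python
# Illustrative human-in-the-loop control: route low-confidence or
# unverifiable model outputs to a human reviewer instead of accepting
# them automatically. Threshold and verification logic are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelOutput:
    claim: str
    confidence: float  # model-reported score in [0, 1]

def verify_against_source(claim: str, source_text: str) -> bool:
    """Hypothetical grounding check: accept only claims found in the source."""
    return claim.lower() in source_text.lower()

def review_queue(outputs: list[ModelOutput], source_text: str,
                 threshold: float = 0.9) -> tuple[list[str], list[str]]:
    """Split outputs into auto-accepted claims and claims escalated to a human."""
    accepted, escalated = [], []
    for out in outputs:
        if out.confidence >= threshold and verify_against_source(out.claim, source_text):
            accepted.append(out.claim)
        else:
            escalated.append(out.claim)  # goes to a human analyst
    return accepted, escalated

source = "Acme Holdings Ltd is registered in Valletta."
outs = [ModelOutput("Acme Holdings Ltd is registered in Valletta.", 0.95),
        ModelOutput("Acme Holdings Ltd is sanctioned.", 0.97)]
ok, flagged = review_queue(outs, source)
print("accepted:", ok)
print("for human review:", flagged)  # confident but unverified: escalated
```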
This is part one of a three-part series detailing AI and its potential for transformation in AML/KYC processes. Next Up: A Few Big Buts