John Oliver is fantastic. And not simply because Morrissey, Monty Python, and I all hail from the land of fish and chips.
I’m a little embarrassed to say how many of my viewpoints are influenced by his way of thinking. He presents the news in a witty, thoroughly researched, and irreverent way that makes sense of a complex world.
When it comes to artificial intelligence, however, his take falls short.
During “Last Week Tonight,” John Oliver bashed artificial intelligence.
Oliver recently attacked the rapid uptake of artificial intelligence and machine learning models during an episode of his popular HBO Sunday night programme, singling out ChatGPT, the chatbot built by the Microsoft-backed OpenAI.
According to Oliver, the issue with AI at the moment is not that it is intelligent; it is that it is foolish in ways we can’t always predict. That is a serious problem, given how frequently we rely on AI for important purposes.
These remarks are unfair. Large language models like ChatGPT and Google’s LaMDA and Bard are astounding in the range of tasks they can perform, and AI is delivering value across a variety of industries, including banking and education.
For instance, the ability to generate pertinent material quickly is useful when creating content for a website’s “help” section.
AI also helps improve customer service. Consumers want to be attended to immediately while still feeling that they are connecting with a human. In a world where so many of our interactions take place remotely, AI plays an important role in facilitating that experience.
It’s also critical to recognise that the models are changing. As they consume more content from larger audiences, they’ll become smarter, much like Siri and Alexa.
To be fair, Oliver makes some good arguments. The emergence of AI models does raise issues that deserve deeper discussion. For instance, ChatGPT and other AI tools are excellent at cleaning up code. Yet developers around the world who paste their code into the chatbot to tidy it up are, in effect, leaking their intellectual property.
AI can be abused, but there are ways to prevent it
Another major issue is black box AI: technology whose decision-making process is opaque and which therefore offers no accountability when it disseminates false information. Today, even a non-technical person can download widely accessible black box models such as XGBoost from the internet and learn to train them. Yet they won’t know why the model made the choices it did, or whether bias or misleading data affected the outcomes.
Oliver spoke out against the risks of black box AI for 28 minutes. “AI systems ought to be explainable,” he said, meaning that we should be able to comprehend precisely how and why an AI came up with its solutions.
What he neglected to mention is that a less risky and more transparent alternative to black box systems exists. By building ground-breaking white box solutions for detecting financial crime for top institutions like HSBC, Standard Chartered, and First Abu Dhabi, Silent Eight has established itself as an industry leader in AI.
Iris, our flagship platform, is fully explainable and shows why it reached a given decision. And banking is not unique: a variety of other industries have comparable solutions.
Sophisticated models do exist that provide explicit justifications for their outcomes. But they are not the kind of thing you can quickly download from the internet: expert mathematicians, data scientists, subject-matter experts, and risk and compliance analysts painstakingly design, monitor, tune, measure, and test these tools.
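To make the contrast concrete, here is a deliberately toy, entirely hypothetical rule-based screener in plain Python that attaches a human-readable justification to every decision it makes. The rules and thresholds are invented for illustration; real white box platforms such as Iris are vastly more sophisticated, but the traceability principle is the same.

```python
# Toy white-box classifier: every decision carries an explicit reason.
# Rules and thresholds below are hypothetical, for illustration only.
RULES = [
    (lambda t: t["amount"] > 10_000,         "amount exceeds 10,000 threshold"),
    (lambda t: t["country"] in {"XX", "YY"}, "counterparty in high-risk jurisdiction"),
]

def screen(transaction):
    """Return (verdict, reasons): the decision plus every rule that fired."""
    reasons = [why for cond, why in RULES if cond(transaction)]
    return ("flag", reasons) if reasons else ("clear", [])

# Each outcome can be traced back to the exact rule that produced it.
print(screen({"amount": 25_000, "country": "GB"}))
print(screen({"amount": 100, "country": "GB"}))
```

Unlike a black box model, an auditor reviewing this system can point to the precise condition behind each flag, which is what makes accountability possible.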
AI and intelligent machine learning are useful technologies that are essential to the modern world. While you should never use black box AI to make decisions with legal or other significant consequences, that doesn’t mean you can’t use sophisticated, cutting-edge, and transparent technologies to enhance decision-making in important industries.