Steering Clear of Generative AI Risks

Nick
March 6, 2023

Generative AI is one of the most promising IT advancements of the modern era, and everywhere you look a new technology is being touted as the next big thing that will revolutionize sectors ranging from healthcare to media. With its potential to generate new data, sound, images and text far more efficiently than humans could on their own, the technology has captured the attention of thought leaders and investors alike. As its capabilities are deployed with growing success, the market for generative AI continues to expand.

Microsoft’s purchase of a stake in OpenAI, the creator of the widely used generative-AI chatbot ChatGPT, is perhaps the most visible sign of its commitment to this new technology. Beyond seeking to make advances in its Bing search engine, the company has advised its stakeholders that it plans to deploy the technology on a global scale.

It’s important to remember, however, that generative AI is not the ultimate solution to every problem and that, as is often the case with AI, there can be risks in depending on the output of such a system. For starters, the “black box” nature of the technology means that it is effectively impossible to understand the underlying processes that produce a given output, which makes it difficult to identify, and thus address, any errors or biases in the AI’s mechanisms. This lack of transparency raises the risks of using AI-produced output in mission-critical applications, where making the wrong decision could be costly.

Failures have already been reported in which generative-AI output had to be disregarded because of its apparent inaccuracy. For instance, the popular AI chatbot ChatGPT is estimated to generate wrong answers 20% of the time. More worryingly, it has been suggested that AI chatbots could eventually become so sophisticated that they are able to deliberately plan what to say in order to deceive human users.

In these kinds of situations, relying on a white-box, or explainable, AI model could be the safest way forward. Unlike black-box models, such as those underlying ChatGPT, white-box models make their outputs easier to interpret by including explanations and quantifying how confident the model is that its answer is correct. They can also let users view alternative versions of the same answer, giving insight into why the machine produced a given output and, crucially, making it much easier to spot potential errors or biases in the model.
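To make the contrast concrete, here is a minimal sketch of the white-box idea: a classifier that returns not just an answer but a confidence score and the exact rule that produced it. The task (routing support tickets), the keywords, labels and confidence values are all hypothetical, chosen purely for illustration; real white-box systems (decision trees, rule learners, linear models) work on the same principle at much larger scale.

```python
# A hand-written rule-based classifier as a toy "white-box" model.
# Unlike a black box, every answer comes with a confidence score and
# an explanation that points to the exact rule responsible, so errors
# and biases in the rules are easy to spot and correct.

def classify_ticket(text):
    """Route a support ticket and explain the decision."""
    # (keyword, label, confidence) -- illustrative values only.
    rules = [
        ("refund",   "billing",   0.90),
        ("invoice",  "billing",   0.85),
        ("password", "account",   0.95),
        ("crash",    "technical", 0.80),
    ]
    lowered = text.lower()
    for keyword, label, confidence in rules:
        if keyword in lowered:
            return {
                "label": label,
                "confidence": confidence,
                "explanation": f"matched keyword '{keyword}'",
            }
    # Transparent fallback: the model says so when no rule applies.
    return {
        "label": "general",
        "confidence": 0.50,
        "explanation": "no rule matched; fell back to default",
    }

result = classify_ticket("My app keeps crashing after the update")
print(result["label"], result["confidence"], result["explanation"])
```

Because the full decision path is visible, a reviewer who disagrees with an answer can trace it back to a specific rule and fix it, which is exactly the kind of auditability black-box generative models lack.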

When it comes to solutions based on artificial intelligence, we must remember that not all of them are created equal. It is up to industry experts, data professionals and companies to be smart about their use of the technology and to think carefully about which types of AI are best suited to their applications. White-box models can be invaluable where even the smallest miscalculation can be costly, offering the level of transparency, context and insight that is essential for making responsible decisions and avoiding mishaps.

If you are looking for the latest news, articles and updates on all things data, you should join DataDecisionMakers, the online community dedicated to connecting experts and professionals in the data space. The platform offers a range of content, practical advice and knowledge sharing related to cutting-edge data innovation, best practices and the future of data technology. Sign up today and join the conversation!
