Artificial intelligence seems to be taking over every corner of the internet and the marketplace. Airlines, search engines, and marketing firms are adopting it, and it seems like every service has a new AI feature to advertise. But for the average consumer, is any of this actually useful?
AI itself has worked mainly behind the scenes for decades, whether as a computer opponent in a video game or a recommendation algorithm. Its recent boom in popularity, however, came with the release of OpenAI's ChatGPT in November 2022, which kicked off the AI craze as we know it.
For background, OpenAI started as a non-profit organization in 2015 with the backing of prominent Silicon Valley investors. Its programs span various forms of media, including the music generator MuseNet, the text-to-image generator DALL-E, and its flagship line of language models, GPT.
ChatGPT is built on OpenAI's GPT-3.5 language model; GPT stands for Generative Pre-trained Transformer. As the name implies, these models are trained on text from across the internet and then generate their own text from a prompt or lead. Given an input, GPT-3 in particular predicts which words are most likely to come next and builds its output from there.
GPT-3 normally requires a piece of text to be written first, then continues where it left off. ChatGPT instead waits for a question or statement and generates its own answer. The simplicity of the interface allows anyone to use it, and OpenAI's public API lets developers build it into their own products.
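The difference between the two styles can be sketched in a few lines of Python. This is purely illustrative; the prompt strings and the `describe` helper are made up for this example and do not come from any real API.

```python
# GPT-3-style completion: you supply the start of a text,
# and the model continues wherever it left off.
completion_prompt = "The three primary colors are"

# ChatGPT-style conversation: you supply a question as a chat
# message, and the model generates a standalone answer.
chat_messages = [
    {"role": "user", "content": "What are the three primary colors?"},
]

def describe(prompt_or_messages):
    """Report which interaction style an input uses."""
    if isinstance(prompt_or_messages, str):
        return "completion"  # raw text to be continued
    return "chat"  # structured list of role-tagged messages

print(describe(completion_prompt))   # completion
print(describe(chat_messages))       # chat
```

The key distinction is in the input shape: a completion model takes a bare string to extend, while a chat model takes a structured list of messages tagged with roles, which is what makes the back-and-forth interface feel so natural.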
The explosion of ChatGPT led to AI being implemented everywhere it could be. Companies like Microsoft and Google joined the trend with their Copilot and Gemini programs, respectively, while other platforms like Quora offered generative AI bots to answer questions.
These companies are focused on staying ahead in the AI race, but do consumers feel the same? According to a study by Pew Research Center, consumers have only grown more wary of AI in recent years: the share of Americans who feel more concern than excitement about it jumped from 38% in 2022 to 52% in 2023.
The same study polled consumers on the use of AI in specific areas, splitting respondents into those with and without college degrees. Both groups held similar views across the same fields. The areas drawing the most mixed or concerned responses included AI's handling of private information, customer service, finding information online, and public safety.
Governments share this sentiment as well, notably the European Union. AI services are in constant conflict with the EU as it introduces more safety and privacy regulations, including the Digital Markets Act and the proposed AI Act. Companies like OpenAI have even threatened to leave the EU should these regulations prove too burdensome.
These disputes feed consumer worries, but they also distract from the underlying issues with AI and its implementation. While governments and companies fight over regulation, those same companies quietly expand their AI use in the background to gain an edge.
Among search engines, Google Search seems both the most invested and the most affected, as a study from Germany describes. The ease of generating AI content, combined with SEO, or search engine optimization, allows fake AI-generated articles to rise quickly to the front page of results. Such articles look indistinguishable from real ones at a glance, but fail to live up to that first impression.
Rather than taking safety measures, Google seems to be faltering. In an attempt to push more of its products, Google Search has become littered with advertisements and features of its own that crowd out actual results. Meanwhile, the AI-generated articles described above show up as advertisements and remain heavily favored by the engine, so long as they make Google money.
Google, along with other search engines like Bing, also offers AI results that summarize information from other sites. This sounds like a convenient feature, but it is affected by the same SEO spam, with AI ending up summarizing content that was itself AI-generated. On top of that, Google is looking for ways to monetize these summaries in the future, likely by pushing unnecessary products or advertisements.
While AI-generated search results are a major hindrance, ChatGPT and other chatbots have issues of their own. These chatbots occasionally suffer from a phenomenon known as hallucination, in which they present misinformation pulled from thin air. In extreme cases, bots can even get stuck in loops, repeating themselves and going off the rails.
A notable mass malfunction occurred just last month, when users on social media reported ChatGPT getting stuck in loops, repeating words and phrases endlessly. OpenAI moved swiftly to take the service down for maintenance, but this was only an extreme case of an existing problem. If the failures could get this bad in public, how many were lurking under the surface?
That question was answered in another recent incident involving Air Canada's chatbot. A customer asked the bot about the airline's bereavement policy for flights, and the chatbot invented a refund policy of its own. Air Canada tried to settle the matter with a coupon, but was ultimately forced to pay in court and take responsibility for its chatbot's actions.
Then there is the question of AI image and voice generators. These were a central issue in the 2023 Hollywood strikes, as screen actors under SAG-AFTRA and writers under the WGA protested unfair labor practices and the threat AI posed to their careers. The unions secured contracts with protections against generative AI for the next few years, but companies elsewhere are still using it to cut costs and jobs.
Generative AI will continue to grow and learn over time, but will its faults ever truly go away? And if they do, will AI become too powerful to leave unchecked? Such dilemmas have no easy answers, but they are what companies, governments, and consumers alike must decide for themselves.