High Computing Power Costs
Google's effort to integrate a ChatGPT-style large language model into its search engine could carry significant costs. The main driver is the computing power needed to run such a model: serving AI-powered search queries requires a large fleet of specialized chips, which can cost billions of dollars. Those chips must be amortized over several years, and the ongoing electricity bill adds up as well, a particular concern for companies like Google that have set carbon footprint goals.
The Inference Process of AI-Powered Search Queries
Another cost factor is the process of handling AI-powered search queries, known as "inference." During inference, a neural network generates an answer to each query based on patterns it learned during training, which means the full model must run for every question asked. Traditional search engines like Google's, by contrast, use web crawlers to scan the internet and build an index of information in advance; when a user searches for something, the most relevant answers are retrieved from that index, a far cheaper operation per query.
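To make that cost asymmetry concrete, here is a minimal sketch in Python contrasting the two approaches: an index lookup is a handful of cheap dictionary operations, while inference runs the whole model for every query. The FLOP estimate uses the common rule of thumb of roughly 2 floating-point operations per parameter per generated token; the model size and answer length are illustrative assumptions, not Google's actual figures.

```python
from collections import defaultdict

# --- Traditional search: build an inverted index once; lookups are cheap ---
documents = {
    1: "large language models require significant compute",
    2: "web crawlers scan pages to build a search index",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def index_search(query):
    """Retrieve matching documents with a few hash-table lookups."""
    results = None
    for word in query.split():
        hits = index.get(word, set())
        results = hits if results is None else results & hits
    return results or set()

print(index_search("search index"))  # {2} -- cost: a few dict lookups

# --- AI-powered search: every query runs the full model ---
# Rule of thumb: ~2 FLOPs per parameter per generated token.
PARAMS = 175e9        # parameters in a GPT-3-scale model (assumption)
ANSWER_TOKENS = 50    # tokens in a typical generated answer (assumption)

flops_per_query = 2 * PARAMS * ANSWER_TOKENS
print(f"~{flops_per_query:.2e} FLOPs per query")  # ~1.75e+13 FLOPs
```

Under these assumptions, a single AI-handled query costs on the order of tens of teraFLOPs of compute, while the index lookup is essentially free by comparison.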
Potential Costs for Alphabet
While revenue from chat-based search ads may offset some of the costs, analysts predict that the technology could still add several billion dollars in expenses for Alphabet, Google's parent company. For example, if ChatGPT-like AI were to handle half of the 3.3 trillion search queries Google received last year, it could result in a $6 billion increase in expenses by 2024.
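Working backwards from those figures gives a sense of the implied incremental cost per query. A minimal sketch of the arithmetic, using only the numbers above:

```python
total_queries = 3.3e12   # Google searches last year (from the estimate above)
ai_share = 0.5           # fraction handled by ChatGPT-like AI
extra_cost = 6e9         # projected additional expense in dollars

ai_queries = total_queries * ai_share
cost_per_query = extra_cost / ai_queries
print(f"Implied extra cost per AI-handled query: ${cost_per_query:.4f}")
# Implied extra cost per AI-handled query: $0.0036
```

A fraction of a cent per query sounds small, but multiplied across trillions of searches it becomes a multibillion-dollar line item.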
Ways to Reduce Costs
To rein in these costs, Google and other companies are working to reduce both the expense of training these models and the expense of inference. Options include using smaller AI models for simpler tasks, charging for access to the AI, and making the chips that run the models more efficient.
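The first of those techniques, routing simple queries to a smaller model, can be sketched as follows. This is an illustrative design only; the routing heuristic, model names, and per-query cost figures are assumptions, not a description of any company's actual system.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_query: float  # illustrative dollars per query (assumption)

SMALL = Model("small-lm", cost_per_query=0.0004)
LARGE = Model("large-lm", cost_per_query=0.0036)

def looks_simple(query: str) -> bool:
    """Crude stand-in for a real complexity classifier:
    short, fact-style queries go to the cheap model."""
    return len(query.split()) <= 4 and "?" not in query

def route(query: str) -> Model:
    """Send simple queries to the small model, the rest to the large one."""
    return SMALL if looks_simple(query) else LARGE

for q in ["capital of France",
          "explain how transformer inference costs scale with model size"]:
    m = route(q)
    print(f"{m.name} (${m.cost_per_query}/query) <- {q!r}")
```

If most traffic consists of simple lookups, even a crude router like this shifts the bulk of queries onto the cheaper model and cuts the average cost per query substantially.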
Integrating a large language model like ChatGPT into Google's search engine may bring new benefits, but it also carries significant costs that the company will need to weigh: the computing power required to run the model, the price of the chips and the electricity to power them, and the per-query expense of inference. With ongoing efforts to make AI more affordable, however, these costs may well decrease over time.