Limits to Growth: Has Generative AI Reached a Dead End?

The generative AI revolution, built on the belief that large language models will keep growing and improving exponentially, is facing new doubts that it may be reaching a plateau, with several reports indicating that the pace of progress in the field is slowing.

But why does this matter?

Two years after the launch of ChatGPT, major tech companies such as Google, Microsoft, and OpenAI have bet billions of dollars on a scaling strategy: building ever larger, more powerful, and more complex language models, along with the computing infrastructure needed to run them, on the belief that the bigger the model, the better its performance.

But what if these models reach a saturation point, where increasing their size no longer significantly improves results?

If generative AI models prove to have reached a plateau, the implications for the technology industry and society as a whole will be profound. Companies that have poured billions of dollars into the field would stand to lose those investments and face serious difficulties, and a stall in development could delay many of the innovative applications expected from artificial intelligence.

This raises many questions: What obstacles stand in the way of generative AI's development? What alternatives are researchers proposing? And what about the dream of achieving artificial general intelligence, and then artificial superintelligence, which would outperform humans and radically change the face of the world?

First, what are the obstacles to the development of generative AI?

Some OpenAI employees believe that the company's next generation of generative AI, known as Orion, may not represent as large a leap over the current model (GPT-4) as GPT-4 was over GPT-3, according to reports in The Information and Reuters over the past week.

Since GPT-4's launch in March 2023, questions have grown in the tech community about whether OpenAI can surpass its success. Google and its competitor Anthropic are also facing setbacks and delays in developing the next generation of their foundation models: models pretrained on large datasets that serve as a starting point for new generative systems, which are then adapted by further training on specialized datasets.
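
To make the foundation-model idea concrete, here is a minimal, hedged sketch of that workflow in PyTorch: start from pretrained weights, then continue training on a small specialized dataset. The tiny network, the checkpoint path, and the random data are placeholder assumptions for illustration, not any company's actual model or pipeline.

```python
import torch

# Stand-in "foundation model": in practice this would be a large pretrained network.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
)
# model.load_state_dict(torch.load("pretrained_weights.pt"))  # hypothetical checkpoint

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # small LR: adapt, don't overwrite
loss_fn = torch.nn.CrossEntropyLoss()

# Stand-in for the specialized dataset: 32 random (input, label) pairs.
inputs, labels = torch.randn(32, 128), torch.randint(0, 10, (32,))
for _ in range(3):  # a few passes over the small dataset
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()
print(f"fine-tuning loss: {loss.item():.3f}")
```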

However, OpenAI CEO Sam Altman remains a firm believer that the size of AI models is key to their progress. In his blog post "The Intelligence Age," published in September, Altman argued that deep learning gets better in proportion to the computing power and data devoted to it, noting that performance improves as more resources become available.
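
The intuition behind that scaling argument is often summarized as a power law: loss falls predictably as compute grows, but each additional order of magnitude of compute buys a smaller absolute gain. The sketch below illustrates that shape with made-up constants, not figures from any published fit.

```python
# Hypothetical power-law parameters, chosen only to show the curve's shape.
a, alpha, floor = 20.0, 0.08, 1.7

def predicted_loss(compute_flops: float) -> float:
    """Assumed functional form: L(C) = a * C^(-alpha) + irreducible floor."""
    return a * compute_flops ** (-alpha) + floor

for flops in (1e21, 1e22, 1e23, 1e24):
    print(f"{flops:.0e} FLOPs -> loss ~ {predicted_loss(flops):.3f}")
# Each extra 10x of compute shaves off a smaller slice of loss: diminishing returns.
```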

But this approach faces fundamental limits: scaling models up requires enormous, energy-intensive computing resources, making them very expensive and hard to sustain in the long run. In addition, most of the high-quality data that could be used to train these models has already been consumed, making new sources hard to find.

Moreover, the effectiveness of training AI models on synthetic data has not yet been proven. Synthetic data is data generated by algorithms rather than collected from the real world, and it can mimic real data closely. But it faces significant challenges, including added cost, since generating it also consumes substantial computing resources, and the difficulty of validating models trained on it, especially when no real data is available for comparison.
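
As a rough illustration, the toy generator below produces synthetic arithmetic problems by templating, one simple way synthetic data can be made (real pipelines typically sample from a generator model instead; this is an assumption for illustration only). It also hints at the validation problem raised above: a template carries its own ground truth, while model-generated data usually does not.

```python
import random

TEMPLATES = [
    "What is {a} + {b}?",
    "If you have {a} apples and buy {b} more, how many do you have?",
]

def make_synthetic_example() -> dict:
    # Each example is generated, not collected, and its answer is known by construction.
    a, b = random.randint(1, 99), random.randint(1, 99)
    question = random.choice(TEMPLATES).format(a=a, b=b)
    return {"prompt": question, "answer": str(a + b)}

dataset = [make_synthetic_example() for _ in range(1000)]
print(dataset[0])
# The hard part the article points to: when examples come from a model rather than
# a template, there is no built-in ground truth like `a + b` to verify against.
```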

Second, what alternatives are researchers proposing?

When a strategy runs into obstacles, the natural response is to look for alternatives, and the industry has begun searching for promising ways to advance generative AI beyond simply making models bigger. Researchers are now aiming to build smaller, more efficient models that can deliver excellent performance on specific tasks without requiring massive computing resources.
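
One common route to such smaller models is knowledge distillation, where a compact student model is trained to imitate a larger teacher. The sketch below shows the standard soft-target loss in the spirit of Hinton et al.; the random tensors stand in for real model outputs and are assumptions for illustration, not any lab's actual training code.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: match the teacher's temperature-smoothed distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy on the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random tensors standing in for real student/teacher outputs.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```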

OpenAI moved in this direction when it launched the o1 model, known internally as the Strawberry project, which is distinguished by its ability to reason through problems and produce more accurate and comprehensive answers. It came in two versions, o1-preview and a lighter o1-mini aimed at reducing compute consumption, but the improvement in quality came at the cost of longer response times.
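
That accuracy-versus-latency trade-off can be illustrated with a simple, hedged sketch: spend more compute per question by sampling several answers and taking a majority vote, in the style of self-consistency decoding. The `noisy_solver` function is a made-up stand-in for a model call, not OpenAI's API or o1's actual mechanism.

```python
import random
from collections import Counter

def noisy_solver(question: str) -> int:
    # Pretend model: returns the right answer (42) 60% of the time, otherwise guesses.
    return 42 if random.random() < 0.6 else random.randint(0, 100)

def answer(question: str, samples: int = 1) -> int:
    # More samples = more compute and latency per query, but a more reliable vote.
    votes = Counter(noisy_solver(question) for _ in range(samples))
    return votes.most_common(1)[0][0]

print(answer("hard question", samples=1))    # fast, often wrong
print(answer("hard question", samples=25))   # slower, almost always lands on 42
```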

The company also said the model can perform complex reasoning, rivaling the performance of human experts on many benchmarks. However, a recent study by Apple researchers suggests those claims may be exaggerated: while o1 can produce coherent, fluent text, it struggles with mathematical problems that require abstract reasoning.

This suggests that the model can mimic human thinking in some ways but still relies heavily on matching patterns from its training data, raising questions about whether current models can reach a level of intelligence comparable to genuine human intelligence.
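
Tests of this pattern-matching concern often work by perturbing the surface details of a problem while keeping the underlying arithmetic fixed, in the spirit of the Apple researchers' benchmark. The sketch below shows the idea with a hypothetical evaluation harness; the names, templates, and placeholder model are assumptions for illustration, not the study's actual code.

```python
import random

NAMES = ["Sara", "Omar", "Lina", "Adam"]

def perturbed_problem() -> tuple[str, int]:
    # Same arithmetic every time; only the surface details (name, numbers) change.
    name = random.choice(NAMES)
    picked = random.randint(20, 60)
    eaten = random.randint(1, 10)
    question = f"{name} picked {picked} oranges and ate {eaten}. How many are left?"
    return question, picked - eaten

def evaluate(model_answer_fn, trials: int = 100) -> float:
    correct = 0
    for _ in range(trials):
        question, expected = perturbed_problem()
        if model_answer_fn(question) == expected:
            correct += 1
    return correct / trials

# Placeholder "model" that always answers 0, just to show the harness runs.
print(evaluate(lambda q: 0))
# A model that truly reasons should score the same across perturbations;
# a pattern-matcher's accuracy drops when names and numbers change.
```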

Third, what do the experts say?

Experts have long warned about the limits of scaling up language models as the path to better generative AI, and with reports now pointing to a slowdown in progress, those warnings appear to be materializing.

Bill Gates previously predicted that the successor to GPT-4 would be disappointing and would not deliver a qualitative leap, a view echoed by many critics, most notably Gary Marcus, who has long doubted that generative AI can keep making significant progress.

Fourth, what about the dream of achieving artificial general intelligence?

Opinions differ on the best path to AGI. Some researchers argue that generative AI trained on massive amounts of data is the answer, while others point to more effective alternatives, such as combining neural networks with symbolic knowledge, an approach Google DeepMind has used to build AI models that can solve complex mathematical problems.

However, all of these efforts face significant challenges tied to rising computing requirements and the slowing of Moore's Law, which was the main driver of progress in computing hardware in the past.

The biggest challenge is the fading of Moore's Law: Intel co-founder Gordon Moore observed that the number of transistors on a chip doubles roughly every 18 months to two years, but the semiconductor industry has reached limits in shrinking transistors further, making that accelerating pace of progress difficult to maintain.
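
To put that doubling in perspective, the small calculation below compounds it over 50 years, starting from the roughly 2,300 transistors of the Intel 4004; the result is a round-number illustration of the trend, not precise historical data.

```python
def transistors(start_count: float, years: float, doubling_period: float = 2.0) -> float:
    # Ideal Moore's Law: the count doubles once every `doubling_period` years.
    return start_count * 2 ** (years / doubling_period)

# From ~2,300 transistors (Intel 4004, 1971) over 50 years of ideal doubling:
print(f"{transistors(2_300, 50):,.0f}")  # roughly 77 billion, in the range of today's largest chips
# The article's point: physical limits on shrinking transistors mean this curve
# can no longer be sustained.
```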

As a result, companies are exploring new technologies such as quantum computing and advanced materials to improve device performance. However, these technologies are still in the early stages of development and require huge investments.

In addition, the high costs of developing large AI models are worrying investors, as major tech companies have spent up to $200 billion on research and development in this field, increasing pressure to achieve tangible returns in the short term.

At the same time, AI faces challenges in meeting user expectations. While generative AI has achieved great success in some areas, such as generating text and images, it still struggles to understand complex context and make informed decisions.

Karthik Dinakar, co-founder and CTO of Pienso, makes the same point, stressing that AI must move beyond large language models to solve complex real-world problems, and saying that models like GPT alone will not be enough to meet those needs.

Conclusion:

The race to achieve AGI faces multiple challenges, from the physical limitations of computers to high costs and development challenges, meaning the road to achieving this goal is still long and thorny.

