Jean-Simon Venne
Co-Founder & Chief Technology Officer, BrainBox AI
It takes all sorts to make a world. Now that AI is seeping into every aspect of our lives, the world is becoming home to an even greater number of “sorts” – not all of which are human.
In fact, we see two distinct types of AI models emerging to interact with us in profound ways. First, we have foundational models, trained on a vast amount of generalized knowledge and capable of handling multiple tasks across diverse fields. Second, we see specialized AI models rapidly evolving to tackle specific problems with laser-focused precision.
Both types are powerful in their own right. But, like humans, they achieve infinitely more when they work together. As we look ahead, it’s this collaboration between generalist and specialist AI models that will shape the future of artificial intelligence—and, indeed, the future of the industries that rely on it.
To understand where we’re headed, it's important to realize that AI isn’t a monolith. The idea of a singular, all-knowing AI may exist in science fiction, but real progress will come from a “society of AI agents” in which different models play different roles—just as human experts from various fields collaborate to solve complex problems.
On the one hand, we have generalist models, like GPT-4 and Claude 3. They have a broad understanding of language, logic, and context, making them useful for generating ideas, explaining concepts, and assisting with a range of cognitive tasks. These models are a little like liberal arts students who know a little bit of everything and are able to pull from multiple disciplines to provide insights. They can draft speeches, explain scientific theories, or brainstorm marketing strategies, drawing from extensive training in language and general knowledge to provide versatile and contextually appropriate responses.
On the other hand, we have highly specialized AI models that excel in very specific areas. These models are like experts trained to solve complex tasks in fields such as climate science, drug discovery, energy optimization, or mathematics (as is the case with OpenAI’s “o1”). The precision of these specialized models often leads to more accurate outcomes, faster processing, and a deeper understanding of their respective fields.
For instance, in drug discovery, finding new treatments involves predicting how potential drugs interact with biological targets at a molecular level. Traditional methods can be time-consuming and costly. Specialist AI models like Chai-1 are trained to predict 3D molecular structures, helping researchers rapidly identify promising drug candidates by simulating how potential drugs interact with proteins. This means what used to take years of trial and error could now be achieved in a fraction of the time.
In climate science, Google DeepMind’s GraphCast is trained to more accurately forecast weather. For example, in 2023, GraphCast accurately predicted that Hurricane Lee would hit Nova Scotia nine days in advance — three days earlier than traditional models. This level of precision is essential in preparing for and mitigating the impact of extreme weather events.
Clearly, specialist AI models can have a significantly positive effect on tackling complex challenges in fields where accuracy and speed are critical. However, as we push further into specialized AI, there’s a growing concern about striking the right balance to avoid creating systems so narrowly focused that they lose the ability to think broadly, collaborate, or share insights across disciplines. After all, if humans can become siloed in their expertise, isn’t there a risk that hyper-specialized AI models could become isolated too? Excellent at one task, but blind to the bigger picture?
This is where the collaboration between generalist and specialist models becomes critical. In my view, the future of AI doesn’t lie in choosing between broad or specialized intelligence, but in combining the best of both. It’s about building AI ecosystems where different models—each with their own strengths—work together to solve complex problems, a little like how a large company’s various departments work together to achieve a common goal.
I’ll give you a tangible example of how this could work. Imagine a renewable energy project where GPT-4 provides insights on global trends in renewable energy. Then, a specialized model trained specifically on wind turbine optimization fine-tunes the project plan to maximize energy output. After that, a virtual engineer agent optimizes the daily operation and maintenance of these wind turbines. Together, they provide a comprehensive solution that neither could achieve on its own.
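To make the shape of such an ecosystem concrete, here is a minimal sketch in Python of the three-stage pipeline described above. All function names and parameters are hypothetical stand-ins, not real model APIs: each stage simply consumes the previous stage's output, mirroring how a generalist model's broad plan is refined by specialists.

```python
# Hypothetical sketch of a generalist -> specialist -> operations pipeline.
# Each function stands in for an AI model; none of these are real APIs.

def generalist_insights(topic: str) -> dict:
    """Stand-in for a broad model (e.g. an LLM) surveying global trends."""
    return {"topic": topic, "trend": "offshore wind capacity is growing"}

def turbine_specialist(plan: dict) -> dict:
    """Stand-in for a model trained on wind-turbine optimization."""
    plan["blade_pitch_deg"] = 4.5  # illustrative tuned parameter
    return plan

def operations_agent(plan: dict) -> dict:
    """Stand-in for a virtual engineer scheduling daily operations."""
    plan["maintenance_interval_days"] = 90
    return plan

def run_pipeline(topic: str) -> dict:
    plan = generalist_insights(topic)   # breadth: context and trends
    plan = turbine_specialist(plan)     # depth: domain-specific tuning
    return operations_agent(plan)       # day-to-day operation

result = run_pipeline("renewable energy")
print(result)
```

The point of the sketch is the hand-off structure, not the stubs themselves: each model contributes its strength, and the combined output is richer than any single stage could produce alone.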
This kind of collaboration mirrors how humans from different fields share knowledge and insights to drive innovation. In fact, it’s common knowledge that the best ideas often come from combining insights from areas that don’t seem connected. Just look at Charles Babbage, whose invention of computational machines (the foundation of modern computing) was inspired by his knowledge of the silk-weaving industry, where punch cards created patterns in fabric. Similarly, Henry Ford’s assembly line for car manufacturing was influenced by the efficiencies he observed in Singer sewing machine factories and meatpacking plants. The same principle can apply to AI: generalists provide breadth while specialists deliver depth, creating breakthroughs faster than humans alone ever could.
This leads me to the question most people have: As we develop more capable, collaborative AI systems, will humans still play a meaningful role? The answer, I believe, is yes and no.
Of course, there are areas where AI will undeniably outperform humans. For example, AI models can optimize energy use far better than we can or detect cybersecurity threats in ways that are beyond our capabilities. These are tasks that require constant monitoring, rapid data processing, and decision-making at a scale that no human can possibly match. In these areas, we can—and should—step back and let AI take the lead.
However, there are other areas where human involvement remains essential. In fields like scientific research, the sheer scale of possible solutions AI can generate is staggering. But while AI can process billions of combinations and test hypotheses faster than we can, it’s still up to humans to evaluate and refine these AI-generated results. Not to mention create the prompt in the first place.
Take cancer research, for example. Traditionally, researchers rely on a whole lot of trial and error to identify promising drug candidates. However, this is changing. Using AlphaFold, a specialized AI system for predicting protein structures, researchers at the University of Toronto found they could design and synthesize a potential cancer drug in just 30 days. This model can sift through endless permutations, flagging the most promising ones far faster and more accurately than researchers could. But the final decision—whether a candidate is truly viable—will still fall to the researchers who understand the nuances of biology and medicine.
So, while AI is taking over tasks once dominated by humans, we are still very much a part of the process. In fact, our role may become even more critical as we shift from being the ones “in the trenches” to the ones guiding AI-driven discoveries.
The rise of general and specialized AI models is undoubtedly unlocking new possibilities across fields like healthcare, climate science, and beyond.
As these models grow and interact, it’s up to us to create smarter individual models and architect frameworks that allow these models to communicate, collaborate, and collectively tackle complex challenges. In doing so, we’re tapping into AI's real potential to become a highly effective partner in innovation, working side by side to create a smarter, healthier, more responsive world.