Jean-Simon Venne
Co-Founder & Chief Technology Officer, BrainBox AI
If 2023 was the year AI took the world by storm and 2024 was the year it settled into our workflows, then 2025 will be the year it stops being optional.
From models like DeepSeek pushing the boundaries of efficiency (and ethics) to AI agents evolving into autonomous coworkers, AI is shifting faster than we can even articulate. It’s changing how we work and live. It’s redesigning entire industries. It’s challenging how we think about intelligence, ownership, and responsibility. And it’s even got some of us talking to our cars.
Here are seven of my thoughts for the year ahead.
DeepSeek has shown us that we can do more with less, meaning intelligence will become as ubiquitous as electricity. So, 2025 won’t be about who builds the best AI—it will be about who builds the future on top of it.
By distilling large models into smaller, more efficient ones, DeepSeek has shown that high-performance reasoning can be achieved at a fraction of the cost, making AI more accessible than ever. At the same time, DeepSeek’s open-source approach is removing a key barrier to AI adoption.
Until now, many companies hesitated to scale AI due to cost, vendor lock-in, and reliance on closed systems from providers like OpenAI or Anthropic. With DeepSeek’s R1, companies can run their own AI models without being tied to a single provider. The result? AI adoption will accelerate, compute demand will skyrocket, and intelligence will become a self-reinforcing cycle of growth. As inference costs continue to drop, AI won’t just become more powerful; it will become cheap and embedded everywhere - in every device, every interaction, and every digital surface - as ubiquitous as electricity.
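For readers curious what “distilling large models into smaller ones” looks like in practice, here is a minimal sketch of the classic recipe: soft targets from a large teacher blended with ordinary hard-label training. It is a generic illustration in PyTorch, not DeepSeek’s actual training pipeline, and the temperature and weighting values are arbitrary placeholders.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft-target term (teacher -> student) with the usual hard-label loss."""
    # Soften both distributions with temperature T, then measure how far the
    # student is from the teacher (KL divergence), scaled by T^2 as in Hinton et al.
    soft_teacher = F.log_softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean", log_target=True) * (T * T)
    # Standard cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```

The small student model trained this way inherits much of the teacher’s behaviour at a fraction of the inference cost, which is the economic shift the paragraph above describes.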
The alignment problem—ensuring AI systems act in humanity’s best interests—will become a central topic in 2025.
This issue isn’t discussed nearly enough, but it’s critical. The more advanced AI becomes, the greater the stakes, so we must design, build, and deploy AI systems with human values firmly in mind. The questions we need to address go beyond “Can we build it?” to “Should we build it this way?”
But building trust in AI systems requires more than engineering expertise. We’ll need to engage ethicists, policymakers, and the broader public to create clear frameworks for accountability. Governments, unfortunately, are struggling to keep up. The pace of evolving technology is outstripping the pace of regulation, so although governmental oversight is necessary, self-regulation is also crucial.
Private companies are already stepping up; Microsoft and Adobe are among those leading the charge with comprehensive frameworks for responsible AI, auditing their algorithms and identifying and addressing biases at every stage of development. I hope to see more organizations follow suit, and to see governments incentivizing these initiatives - because the alignment problem isn’t merely about making AI work—it’s about making it work for us. In 2025, I expect this to be one of the most urgent discussions in AI.
In 2025, we’ll stop thinking of AI as a tool and start seeing it as a teammate. The kind that doesn’t drink all the coffee or steamroll through deadlines.
AI agents are autonomous systems capable of handling complex, goal-oriented tasks. Currently the most talked-about example of agentic AI is OpenAI’s Operator, launched last Thursday. This AI agent uses a web browser just like we would. It clicks buttons, fills out forms, and executes tasks – like booking you two front row tickets to a Bruce Springsteen concert or ordering you a nice quiche for lunch.
But even before Operator, companies like Salesforce were embedding AI-powered agents to handle customer inquiries. In healthcare too, AI agents are assisting in patient scheduling, insurance verification, and billing. In IT, agents are monitoring systems for vulnerabilities and handling routine troubleshooting before users notice the disruption. Even in HR, AI agents are being used to help onboard new employees.
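Under the hood, most of these agents follow the same simple pattern: the model proposes an action, a tool executes it, and the result is fed back until the goal is met. The sketch below is a hypothetical illustration of that loop - the tools and the planner interface are placeholders, not any vendor’s real API.

```python
# Hypothetical agent loop: propose an action, run the tool, feed the result back.
TOOLS = {
    "search_web": lambda query: f"(search results for {query!r})",
    "fill_form": lambda fields: f"(form submitted with {fields})",
}

def run_agent(planner, goal, max_steps=10):
    """`planner` is any callable that maps the history to (tool_name, argument),
    returning ("done", answer) when the goal has been reached."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        tool_name, argument = planner(history)
        if tool_name == "done":
            return argument
        observation = TOOLS[tool_name](argument)
        history.append(f"{tool_name} -> {observation}")
    return "Stopped: step budget exhausted."
```

The interesting engineering happens inside the planner and the tools; the loop itself stays this simple whether the agent is booking concert tickets or verifying insurance claims.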
The real power of AI agents, though, lies in their ability to handle workflows we’ve barely begun to consider—much like how industrial robots redefined manufacturing. When robots entered factories, they didn’t just build cars faster; they changed how we thought about production. AI agents will do the same for knowledge work - freeing us to focus on strategy, creativity, and innovation.
In 2025, I expect multimodal systems to be everywhere, embedded into industries, reshaping how we design, communicate, learn... and even drive.
The evolution of multimodal AI is one of the most exciting developments I’ve seen lately as it signals a whole new way of interacting with technology. These systems create rich, layered responses that blend text, images, video, and sound – soon we’ll be able to generate an entire video, complete with voiceovers and subtitles, from a simple text prompt. Models like GPT-4V (GPT-4 Vision) are already pushing those boundaries, analyzing images and engaging in real-time conversations about them.
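As a concrete example, asking a vision-capable model about an image already takes only a few lines of code. The snippet below follows the shape of the OpenAI Python SDK at the time of writing; the model name and the example image URL are assumptions you would substitute for your own.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Send a text question and an image reference in a single multimodal request.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever vision-capable model you have access to
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is happening in this photo?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```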
Mercedes-Benz is another great example of multimodal AI in action. Its MBUX Virtual Assistant has integrated an Automotive AI Agent, turning voice commands into a fully personalized, multimodal experience. Drivers can ask for directions, find restaurants, or turn the air-conditioning on—all through natural conversation. So, in 2025, we can look forward to seeing a lot more people talking to their cars.
In 2025, we’ll see robots that will work alongside us as partners... and may even save lives.
Large language models (LLMs) are enhancing robotic capabilities at breakneck speed. Instead of programming robots step by step, we can now use LLMs to help them process complex instructions and navigate challenging environments with increasing precision. They fail, they adjust, and they succeed - faster than we ever could.
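In outline, this is a plan-execute-retry loop: the LLM turns an instruction into a sequence of primitive skills, the robot runs them, and failures are fed back so the next plan can route around them. The sketch below is a hypothetical illustration; `plan_with_llm` and `execute` are stand-ins for a real planner and robot runtime.

```python
def plan_with_llm(instruction, feedback=None):
    # Placeholder: a real system would prompt an LLM with the instruction and any
    # failure feedback, then parse its reply into an ordered list of skills.
    return ["locate_door", "open_door", "scan_room"]

def execute(skill):
    # Placeholder: a real system would call the robot's control stack here.
    return True, None  # (success, error_message)

def run_task(instruction, max_retries=3):
    feedback = None
    for _ in range(max_retries):
        for skill in plan_with_llm(instruction, feedback):
            ok, error = execute(skill)
            if not ok:
                feedback = f"{skill} failed: {error}"  # let the next plan route around it
                break
        else:
            return True  # every skill succeeded
    return False
```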
The implications of this are huge. Take disaster response, for example. Picture a robot calmly entering a burning building, navigating collapsing floors, avoiding obstacles, and locating survivors. It sounds like science fiction, but companies like Boston Dynamics and Anduril are already building robots that perform in hazardous conditions, from search-and-rescue missions to military applications.
What makes this shift so exciting is the adaptability of these problem-solving robots. They’re learning to handle unexpected and possibly dangerous scenarios, which means they can operate in environments where humans either can’t or shouldn’t go.
AI is a power-hungry technology, but it’s also potentially one of the best tools we have to fight climate change.
AI demands a large amount of energy – and it’s only growing. In fact, the computational power needed for AI is doubling roughly every 100 days. Ironically, AI is also one of the most powerful tools we have for reducing our environmental footprint.
In fact, we’re already seeing AI make a real difference in emission reduction. In commercial buildings, AI-powered HVAC systems predict energy demand, optimize heating and cooling schedules, and adjust dynamically based on how the building is actually being used. The result is a reduction in CO2e of up to 40%. To put that into perspective, buildings account for around 40% of global energy-related emissions. Now imagine if we made all of them smarter and greener.
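To make that concrete, here is a deliberately tiny sketch of the idea: estimate the next period’s load from occupancy and weather, then nudge the setpoint to pre-cool or coast. The numbers and the load model are toy assumptions for illustration, not BrainBox AI’s actual system.

```python
from dataclasses import dataclass

@dataclass
class BuildingState:
    outdoor_temp_c: float
    occupancy: float       # 0.0 (empty) to 1.0 (full)
    indoor_temp_c: float

def predict_cooling_load(state: BuildingState) -> float:
    """Toy load model: load rises with outdoor temperature and occupancy."""
    return max(0.0, 0.6 * (state.outdoor_temp_c - 18.0)) + 4.0 * state.occupancy

def choose_setpoint(state: BuildingState, comfort_c: float = 22.0) -> float:
    load = predict_cooling_load(state)
    if load > 8.0:
        return comfort_c - 1.0   # pre-cool ahead of a hot, busy period
    if state.occupancy < 0.1:
        return comfort_c + 2.0   # coast when the building is nearly empty
    return comfort_c

print(choose_setpoint(BuildingState(outdoor_temp_c=30.0, occupancy=0.9, indoor_temp_c=23.5)))
```

Real systems replace the toy load model with learned forecasts and optimize across whole zones and equipment schedules, but the control logic above captures the basic loop of predict, decide, adjust.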
AI is expediting drug discovery at an unprecedented pace. Processes that used to take years are now condensed into months.
Healthcare is where we’re currently seeing some of the most profound impacts of AI. Multimodal AI systems, like those developed by OpenAI, are completely changing how data is analyzed – going beyond simply sifting through patient records. Now, they can cross-reference radiology scans, lab results, and clinical notes to find patterns no human could spot in such a short space of time. For instance, they’re identifying early signs of Parkinson’s years before symptoms appear or catching subtle tumor growths that even the sharpest radiologist might miss. In one case, Exscientia used AI to design a new drug for obsessive-compulsive disorder, advancing it from concept to clinical trials in just 12 months—a fraction of the typical timeline.
And it doesn’t stop at diagnostics. Hospitals are chaotic ecosystems, and AI is making sense of the madness by optimizing staffing and triage to minimize delays. We’re even seeing AI support home-based care, where wearable devices are paired with algorithms that monitor chronic conditions in real time, alerting doctors to potential issues before they become emergencies.
Waiting for AI to ‘mature’ is like waiting for the internet to ‘prove itself.’ Hesitate too long and you’ll be left behind.
AI is turning what used to take hours into something you can accomplish in minutes. This isn’t limited to one or two industries. AI is optimizing supply chains, revolutionizing healthcare diagnostics, personalizing customer experiences, enhancing creative fields, and even transforming our daily commutes. The organizations that have embraced AI are already seeing increased revenue growth – meaning those taking a wait-and-see approach are missing revenue opportunities, and the gap is only set to grow. In this light, 2025 will be less about experimenting with AI and more about building it (hopefully responsibly), integrating it everywhere, and learning to work with it.
In short, where 2024’s question was ‘What can AI do?’, 2025’s is ‘What are we going to do with it?’