Quick thoughts from co-CIO Greg Jensen and AIA Labs Chief Scientist Jas Sekhon on the release of Google’s Gemini 3 model and its implications for economies and markets.
Editor's Note: This article was updated on December 2, 2025 to include new information.
Progress in artificial intelligence continues at a rapid pace. We have described how this rapid technological advancement has driven the industry into a “resource grab” phase, which is more dangerous than earlier stages of the AI boom and has major implications for capital markets. In today’s report, we focus on a timely development in the technological progress that underpins AI’s economic impact—Google’s release of its latest model, Gemini 3—and share some of the key takeaways for the AI ecosystem, economies, and markets. As we see it:
- Both external testing and our own internal testing indicate that Gemini 3 is the best publicly available model in terms of raw intelligence and reflects the biggest jump in frontier model capabilities we've seen in a while. The model is strongly multimodal, performing well in text, image, and video generation and interpretation. Across a range of text-based assessments (coding, Humanity's Last Exam, mathematical reasoning) as well as vision-based assessments, Gemini 3 outperforms previous models by a substantial margin. On our own internal tests, Gemini 3 represents the largest jump in performance since at least OpenAI's o3 more than half a year ago, and possibly since o1, which was released in late 2024. The charts below show Gemini 3 versus previous models, using the Center for AI Safety's aggregation of different AI benchmarks.
(It's worth noting that, following Gemini 3, Anthropic also released its latest model, Claude Opus 4.5, last week. While both our own and external evaluations of Opus 4.5's real-world performance are still underway, our current expectation is that it is likely the best model for coding but trails Gemini 3 in general intelligence. This is in line with the consistent capability skew that differentiates past Anthropic models from other frontier models: Anthropic models generally excel at agentic tasks, and agentic coding in particular, while being weaker at reasoning overall (e.g., math) and at multimodal capabilities (e.g., vision), though their lead in agentic tasks has narrowed. You can see this skew in Opus 4.5's relative performance across different kinds of benchmarks in the charts below.)
- Gemini 3 is the first release of a significantly bigger model in a while, and it shows that pre-training scaling will continue. LLM training consists of two phases: pre-training and post-training. Pre-training is the foundational phase in which the model learns next-token prediction from a vast corpus of data and develops raw intelligence, while post-training improves the model's performance in particular areas using techniques such as reinforcement learning. Scaling in pre-training (i.e., increasing the size of the model and the amount of data and compute used) has been the primary driver of model improvements in recent years. Yet since last year, labs such as OpenAI have run into challenges in their training pipelines (a process involving data cleaning and curation and training algorithm optimization) when trying to continue scaling up pre-training. This fueled speculation that pre-training scaling might stall or slow as models get larger and more complex. But Google cracked the challenge with Gemini 3. It is the first released model to effectively use significantly more pre-training compute than GPT-4o, and it showed sizable capability improvements. Our rough approximation is that it used at least 2-3 times more compute than GPT-4o and GPT-5 in pre-training, and possibly an order of magnitude more.
As a wave of large data centers across labs and cloud providers comes online next year, we will almost certainly see other labs follow suit, addressing these challenges and continuing to scale pre-training. For example, when releasing its latest model, V3.2, earlier this week, DeepSeek explicitly acknowledged its model's gap with Gemini 3 due to smaller pre-training compute and announced plans to scale up pre-training going forward. Post-training scaling has also been shown to improve model capabilities (it is what powered the improvements in OpenAI's o1 and o3 models), and Gemini 3 likely went through less post-training than GPT-5 Pro, meaning that Google has substantial room to further improve Gemini 3's capabilities.
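The pre-training scaling dynamic described above is commonly modeled as a power law in model size and data. As a purely illustrative sketch, the snippet below uses the published Chinchilla fit (Hoffmann et al., 2022); the constants are from that paper, not Bridgewater estimates, and they say nothing about Gemini 3's actual training run.

```python
# Illustrative sketch of a Chinchilla-style pre-training scaling law.
# Constants are the published fits from Hoffmann et al. (2022) and are
# for illustration only; they do not describe any specific frontier model.

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss given model size and training tokens."""
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

def flops(n_params: float, n_tokens: float) -> float:
    """Standard approximation: training compute ~ 6 * params * tokens."""
    return 6 * n_params * n_tokens

# Scaling parameters and data together (10x each, so ~100x the compute)
# keeps pushing predicted loss down -- the "pre-training scaling" at issue.
base = loss(70e9, 1.4e12)      # a Chinchilla-scale run
scaled = loss(700e9, 14e12)    # 10x params, 10x tokens
print(base > scaled)           # prints True
```

Under this kind of fit, loss keeps falling as compute scales, which is why evidence that the relationship still holds (as Gemini 3 suggests) supports continued compute investment.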
- On net, Gemini 3 is a positive development for the AI ecosystem as a whole and will contribute to ongoing resource-grab dynamics. One of the biggest potential risks to the AI ecosystem today would be evidence of a breakdown in pre-training scaling laws, which would call into question the return on further compute investment in terms of model performance. In that world, companies would have less need for compute, although they would still invest to meet post-training compute needs. But with Gemini 3 showing that pre-training can continue to scale and produce better models, there is strong pressure on the rest of the ecosystem to continue investing in compute. The path to better models requires more compute, more chips, and more power. Given the immense potential of AI technologies, we believe companies will spend whatever it takes to keep up in the race to develop leading models and to make sure they don't miss out on access to the necessary resources. Stock market corrections, or modest increases in credit spreads, don't change the underlying reality of this dynamic. A massive increase in capex, on top of investment levels already wildly above expectations from two years ago, is coming, with major implications for capital markets and the economy.
- Google is now the clear leader in the AI race. We've shared our assessment of Google's competitive advantages in prior AIA Labs quarterly calls: its tremendous balance sheet, significant profitability, and leading research. The release of Gemini 3, trained entirely on Google's own TPU chips, confirms Google's leading position. Across nearly all dimensions (with the possible exception of coding, as mentioned previously), Gemini 3 is the best-performing model. This marks the first time since the launch of GPT-3.5 three years ago that OpenAI doesn't have a leading model. Looking ahead, Google has a significant advantage in training frontier models and serving them efficiently and cheaply, in part due to its vertically integrated ecosystem. Google has its own large, mature, and competitive hardware stack, from chips to data centers, which likely gives it the cheapest and easiest access to compute among leading AI labs. And after nearly a decade of limiting external access to its TPU chips, Google has in recent months started exploring selling and leasing them to other AI labs and cloud providers, opening the possibility of it dominating the full AI stack, from chips to models.
- Google's advantageous position and budding ambition in the AI chip market pose a major risk to Nvidia, which in turn is driving major investments into the rest of the AI ecosystem. Google's possession of an AI ecosystem not reliant on Nvidia's GPUs creates a significant risk to Nvidia's ability to sustain its high market share and gross margins. This competitive pressure from Google has been a key motivating factor behind Nvidia's extensive downstream investments in the AI ecosystem, most notably OpenAI, to ensure that the majority of the AI ecosystem continues to be deeply integrated with its own chips. This is a major reason why we believe OpenAI and many other companies will likely be able to raise the necessary capital to fund continued investment in the near future: while they lack the balance sheet strength of the hyperscalers, they have the implicit backing of Nvidia's own tremendous balance sheet and profits.
- The "Barnes & Noble moment" is getting closer. The "resource grab" phase, and the sums of investment associated with it, is currently being driven by a small number of leading AI players that recognize the incredibly transformative power of AI. The next phase will come when a major business outside of the AI ecosystem realizes that its entire business model is about to collapse due to pressure from an upstart competitor using AI (as occurred with Amazon disrupting Barnes & Noble). At that point, every business will have to spend existentially to adopt AI technologies and integrate them into its business model, creating the potential for levels of investment and productivity growth unlike anything we've ever seen. Current frontier AI models are still not easy to work with, requiring technical skills, subject matter expertise, and meaningful work to extract large gains. But with the capability jump that Gemini 3 brings, the challenge of getting more productivity out of LLMs has become more surmountable, and the point of widespread adoption is getting closer. The implications are profound, and we will continue to share our latest thinking on these developments in subsequent research.
- At this point, we think the boost to the global economy over the next two years is underappreciated in most markets, as is the need for capital. We will live through the biggest capex boom of our lives in 2026 and 2027, and the investment plans are likely already baked in the cake. What that means for markets is less obvious, but these developments are likely to be the central driver, and as a result, the most important area to understand.
This research paper is prepared by and is the property of Bridgewater Associates, LP and is circulated for informational and educational purposes only. There is no consideration given to the specific investment needs, objectives, or tolerances of any of the recipients. Additionally, Bridgewater's actual investment positions may, and often will, vary from its conclusions discussed herein based on any number of factors, such as client investment restrictions, portfolio rebalancing and transactions costs, among others. Recipients should consult their own advisors, including tax advisors, before making any investment decision. This material is for informational and educational purposes only and is not an offer to sell or the solicitation of an offer to buy the securities or other instruments mentioned. Any such offering will be made pursuant to a definitive offering memorandum. This material does not constitute a personal recommendation or take into account the particular investment objectives, financial situations, or needs of individual investors which are necessary considerations before making any investment decision. Investors should consider whether any advice or recommendation in this research is suitable for their particular circumstances and, where appropriate, seek professional advice, including legal, tax, accounting, investment, or other advice. No discussion with respect to specific companies should be considered a recommendation to purchase or sell any particular investment. The companies discussed should not be taken to represent holdings in any Bridgewater strategy. It should not be assumed that any of the companies discussed were or will be profitable, or that recommendations made in the future will be profitable.
The information provided herein is not intended to provide a sufficient basis on which to make an investment decision and investment decisions should not be based on simulated, hypothetical, or illustrative information that have inherent limitations. Unlike an actual performance record simulated or hypothetical results do not represent actual trading or the actual costs of management and may have under or overcompensated for the impact of certain market risk factors. Bridgewater makes no representation that any account will or is likely to achieve returns similar to those shown. The price and value of the investments referred to in this research and the income therefrom may fluctuate. Every investment involves risk and in volatile or uncertain market conditions, significant variations in the value or return on that investment may occur. Investments in hedge funds are complex, speculative and carry a high degree of risk, including the risk of a complete loss of an investor’s entire investment. Past performance is not a guide to future performance, future returns are not guaranteed, and a complete loss of original capital may occur. Certain transactions, including those involving leverage, futures, options, and other derivatives, give rise to substantial risk and are not suitable for all investors. Fluctuations in exchange rates could have material adverse effects on the value or price of, or income derived from, certain investments.
Bridgewater research utilizes data and information from public, private, and internal sources, including data from actual Bridgewater trades. Sources include BCA, Bloomberg Finance L.P., Bond Radar, Candeal, CEIC Data Company Ltd., Ceras Analytics, China Bull Research, Clarus Financial Technology, CLS Processing Solutions, Conference Board of Canada, Consensus Economics Inc., DTCC Data Repository, Ecoanalitica, Empirical Research Partners, Energy Aspects Corp, Entis (Axioma Qontigo Simcorp), Enverus, EPFR Global, Eurasia Group, Evercore ISI, FactSet Research Systems, Fastmarkets Global Limited, The Financial Times Limited, Finaeon, Inc., FINRA, GaveKal Research Ltd., GlobalSource Partners, Harvard Business Review, Haver Analytics, Inc., Institutional Shareholder Services (ISS), The Investment Funds Institute of Canada, ICE Derived Data (UK), Investment Company Institute, International Institute of Finance, JP Morgan, JTSA Advisors, LSEG Data and Analytics, MarketAxess, Metals Focus Ltd, MSCI, Inc., National Bureau of Economic Research, Neudata, Organisation for Economic Cooperation and Development, Pensions & Investments Research Center, Pitchbook, Political Alpha, Renaissance Capital Research, Rhodium Group, RP Data, Rubinson Research, Rystad Energy, S&P Global Market Intelligence, Sentix GmbH, SGH Macro, Shanghai Metals Market, Smart Insider Ltd., Sustainalytics, Swaps Monitor, Tradeweb, United Nations, US Department of Commerce, Visible Alpha, Wells Bay, Wind Financial Information LLC, With Intelligence, Wood Mackenzie Limited, World Bureau of Metal Statistics, World Economic Forum, and YieldBook. While we consider information from external sources to be reliable, we do not assume responsibility for its accuracy. Data leveraged from third-party providers, related to financial and non-financial characteristics, may not be accurate or complete. The data and factors that Bridgewater considers within its research process may change over time.
This information is not directed at or intended for distribution to or use by any person or entity located in any jurisdiction where such distribution, publication, availability, or use would be contrary to applicable law or regulation, or which would subject Bridgewater to any registration or licensing requirements within such jurisdiction. No part of this material may be (i) copied, photocopied, or duplicated in any form by any means or (ii) redistributed without the prior written consent of Bridgewater® Associates, LP.
The views expressed herein are solely those of Bridgewater as of the date of this report and are subject to change without notice. Bridgewater may have a significant financial interest in one or more of the positions and/or securities or derivatives discussed. Those responsible for preparing this report receive compensation based upon various factors, including, among other things, the quality of their work and firm revenues.