
NVIDIA's market value plummets, future hard to predict

Date: 2024-09-09

NVIDIA Corp's market value evaporated by about $406 billion (about Rs 28,78,094 crore) this week, putting pressure on major stock market benchmarks amid concerns about the health of the U.S. economy and an artificial intelligence trade that may have gotten ahead of itself.

The world's largest maker of artificial intelligence chips has seen its market capitalization shrink by a fifth in the past two weeks. The drop also highlights a more pressing issue for investors in the $2.5 trillion giant: its volatility now dwarfs that of the other members of the "Magnificent Seven" and makes even Bitcoin look calm by comparison.

Nvidia shares have swung between $90.69 and $131.26 over the past 30 trading days, including a record single-day loss of market value on Tuesday. That turbulence has pushed its 30-day realized volatility to about 80 - roughly four times that of Microsoft Corp., twice that of Bitcoin, and higher than meme stocks such as Donald Trump's media company and Elon Musk's Tesla Inc.
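For readers who want to see what a "realized volatility of about 80" means in practice, below is a minimal sketch of the standard calculation: annualized standard deviation of daily log returns over a 30-day window. The price series is hypothetical, not Nvidia's actual quotes, and the exact windowing conventions may differ from the data provider's.

```python
import math

def realized_volatility(closes, trading_days=252):
    """Annualized realized volatility (in %) from a series of daily closes."""
    # Daily log returns
    returns = [math.log(b / a) for a, b in zip(closes, closes[1:])]
    mean = sum(returns) / len(returns)
    # Sample standard deviation of the daily returns
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    daily_vol = math.sqrt(var)
    # Annualize and express as a percentage
    return daily_vol * math.sqrt(trading_days) * 100

# Hypothetical closing prices spanning 30 trading days (not real quotes)
closes = [118, 121, 116, 124, 117, 128, 120, 131, 122, 115,
          119, 111, 117, 108, 113, 104, 109, 101, 106, 98,
          103, 95, 100, 93, 98, 91, 96, 99, 94, 97, 92]
print(f"30-day realized volatility: {realized_volatility(closes):.0f}%")
```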


The selloff has put the stock on its worst two-week losing streak in two years, data compiled by Bloomberg show. The slide began after the company issued a lukewarm earnings forecast and investor enthusiasm cooled over problems with its Blackwell chips. Then came news that the U.S. Department of Justice had issued a subpoena in connection with an escalating antitrust investigation. Pessimism toward chipmakers was further fueled by a disappointing sales forecast from Broadcom.

“You're in a very difficult market environment right now,” said Rhys Williams, chief strategist at Wayve Capital Management LLC, adding that the AI trade is still in its early stages. Still, “from a day-to-day perspective, it's anybody's guess where the bottom is.”

Of course, despite the recent slide, the stock has still rewarded investors handsomely: it remains up more than 100 percent this year, having added $1.3 trillion to the company's market capitalization. Wall Street generally expects NVIDIA to remain in a favorable position as companies rush to build AI-related infrastructure, a process expected to continue for at least a few more quarters.

Nvidia's biggest customers - notably Microsoft Corp., Meta Platforms Inc., Alphabet Inc. and Amazon.com Inc., which together account for more than 40 percent of Nvidia's revenue, according to data compiled by Bloomberg - have confirmed their spending plans in recent quarters.

Nvidia's results last week confirmed that optimistic view. Revenue more than doubled, beating expectations, as did adjusted earnings. The company also gave a revenue forecast that beat analysts' consensus, though it fell short of the most optimistic estimates.


The results were a disappointment to market participants who had grown accustomed to blowout growth reports, and they fueled the concerns of those who are skeptical about the long-term prospects for AI spending.

All of this means that the volatility in the share prices of NVIDIA and other chipmakers is likely to continue as investors digest the evolution of the AI theme. For fund managers looking to invest for the long term, that could mean opportunity.

“For long-term investors, now is a good time to start getting in,” said Wayve Capital's Williams. “If someone gave me new money today, I would enthusiastically buy some AI-related stocks.”

NVIDIA's GPU market share rises to 20% in Q2

AI demand has driven growth in NVIDIA chip sales since last year, and has allowed NVIDIA to continue to expand its GPU market share. The latest survey by research organization Jon Peddie Research (JPR) shows that NVIDIA's market share in the global GPU market has reached 20% in the second quarter of this year.

The JPR report shows that in the second quarter of 2024, NVIDIA's share of the global GPU market increased by 2 percentage points from the first quarter to 20%. Over the same period, AMD's share increased by 0.2 percentage points to 16%.

Although Intel's market share dropped from 66% to 64%, it still remains the global GPU market leader.

The report notes that although Intel mainly sells CPUs, most Intel CPUs include integrated graphics, which JPR counts as integrated GPUs.

In terms of the overall market, the JPR report shows that global GPU shipments reached 70 million units in the second quarter of this year, up 1.8% from the first quarter. In previous years, second-quarter shipments have typically come in below the first quarter: JPR's statistics show that over the past 10 years, second-quarter GPU shipments have declined by an average of 7% from the first quarter, which makes this year's second-quarter performance look comparatively strong.

Jon Peddie, president of JPR, said, “We are very pleased with the growth in GPU shipments in the second quarter. The market has had its ups and downs over the past few years and has been searching for steady momentum. Amid the combined effects of trade wars, the pandemic, elections, and central bank interest rates, I'm afraid it will be difficult for the market to return to normal in the short term.”

According to the report, counting all computer platforms - desktops, laptops, and workstations - and all categories of GPUs, both discrete and integrated, global GPU shipments in the second quarter of this year increased by 16% from the same period last year. By platform, desktop GPU shipments rose 21% and laptop GPU shipments rose 13% in Q2, and the overall ratio of GPUs shipped to PCs shipped reached 120% in the second quarter, an increase of 6.7 percentage points from the first quarter, according to JPR.
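To make the shipment arithmetic above concrete, here is a small back-of-envelope sketch that recomputes the implied first-quarter shipments from the 1.8% sequential growth and backs out implied PC shipments from the 120% attach rate. The PC shipment figure is derived, not a number JPR publishes here.

```python
# Figures quoted from the JPR report (GPU shipments in millions of units)
gpu_q2 = 70.0
qoq_growth = 0.018          # +1.8% vs. Q1
gpu_q1 = gpu_q2 / (1 + qoq_growth)

# Attach rate: GPUs shipped per PC shipped; it exceeds 100% because many PCs
# ship with both an integrated and a discrete GPU.
attach_rate_q2 = 1.20       # 120%, per JPR
pcs_q2 = gpu_q2 / attach_rate_q2   # implied PC shipments (derived, not quoted)

print(f"Implied Q1 GPU shipments: {gpu_q1:.1f}M units")
print(f"Implied Q2 PC shipments:  {pcs_q2:.1f}M units")
```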

Looking ahead, JPR expects the GPU market to continue expanding at a compound annual growth rate of 4.2% from 2024 to 2026, with the installed base reaching 3.3 billion units by the end of 2026, and expects discrete-GPU penetration of PCs to reach 23% over the next five years.
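As a quick check of what a 4.2% CAGR implies, the sketch below simply compounds a base figure forward year by year. The end-of-2023 starting value is a hypothetical placeholder chosen to land near the 3.3-billion-unit figure quoted above; it is not a number from the report.

```python
def compound(base, rate, years):
    """Project a value forward at a fixed compound annual growth rate."""
    return base * (1 + rate) ** years

# Hypothetical end-of-2023 installed base (placeholder, not a JPR figure)
base_2023 = 2.92e9
cagr = 0.042

for year in (2024, 2025, 2026):
    projected = compound(base_2023, cagr, year - 2023)
    print(f"{year}: {projected / 1e9:.2f} billion units")
```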

The future of GPU costs is unpredictable

Graphics chips (GPUs) are the engine of the AI revolution, powering the large language models (LLMs) that chatbots and other AI applications rely on. Because the price of these chips is likely to fluctuate dramatically over the next few years, many organizations will need to learn for the first time how to manage highly variable costs for a critical input.

Some industries are already familiar with this principle. Companies in energy-intensive industries such as mining are accustomed to managing energy cost fluctuations, balancing different energy sources to achieve the right mix of availability and price. Logistics companies do this for transportation costs, which are currently volatile due to disruptions in the Suez and Panama Canals.

Compute cost volatility is different because it affects industries that have no experience with this type of cost management. Financial services and pharmaceutical companies, for example, do not typically trade in energy or shipping, but they are among the companies expected to benefit most from AI. They need to learn fast.

Nvidia is a major supplier of GPUs, which explains why its valuation has skyrocketed this year. GPUs are favored because they can process many computations in parallel, making them ideal for training and deploying LLMs. Nvidia's chips are so highly sought after that one company is even delivering them in armored vehicles.

Driven by supply and demand fundamentals, GPU-related costs are likely to continue to fluctuate wildly and will be difficult to predict.

Demand will almost certainly increase as companies continue to build AI at a rapid pace. Investment firm Mizuho says the total market for GPUs could grow tenfold to more than $400 billion over the next five years as companies scramble to deploy new AI applications.
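Growing "tenfold over the next five years" implies a very steep compound rate. The sketch below shows the implied-CAGR arithmetic, treating today's market size as roughly one tenth of the $400 billion figure quoted above; that starting value is an assumption for illustration, not a Mizuho number.

```python
# Implied CAGR for a market that grows tenfold in five years
start = 40e9       # assumed current GPU market size (~1/10 of $400B)
end = 400e9        # Mizuho's five-year figure quoted above
years = 5

implied_cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")   # roughly 58% per year
```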

Supply depends on several factors that are difficult to predict. These include manufacturing capacity, which is expensive to scale up, and geopolitical factors - many GPUs are made in Taiwan, whose continued independence is threatened by China.

Supply is already tight, with some companies reportedly waiting six months to get their hands on Nvidia's powerful H100 chips. As organizations increasingly rely on GPUs to power AI applications, these dynamics mean they need to get a handle on how to manage variable costs.

To lock in costs, more companies may choose to manage their own GPU servers instead of renting them from a cloud provider. This creates additional overhead, but allows for better control and potentially lower costs in the long run. Companies can also buy GPUs for defensive purposes: even if they don't know how to use them yet, these defensive contracts ensure that they'll be able to use GPUs when they need them in the future, while their competitors won't be able to.
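One rough way to think about the rent-versus-own decision described above is a break-even comparison between a cloud hourly rate and the amortized cost of owned hardware. All prices, amortization periods, and overheads below are illustrative assumptions, not quotes from any vendor or cloud provider.

```python
# Break-even sketch: renting cloud GPUs vs. owning servers.
# All numbers are illustrative assumptions, not vendor quotes.
cloud_rate = 3.50          # assumed on-demand price, $ per GPU-hour
capex_per_gpu = 30_000     # assumed purchase price per GPU, $
amort_years = 3            # hardware amortization period
power_and_ops = 0.60       # assumed electricity + ops, $ per active GPU-hour

hours_per_year = 24 * 365

def owned_cost_per_hour(utilization):
    """Cost per *useful* GPU-hour when you own the hardware."""
    active_hours = amort_years * hours_per_year * utilization
    return capex_per_gpu / active_hours + power_and_ops

# Utilization at which owning matches the cloud's hourly rate
breakeven = capex_per_gpu / (amort_years * hours_per_year * (cloud_rate - power_and_ops))
print(f"Owning breaks even at ~{breakeven:.0%} utilization")
print(f"At 80% utilization, owned cost is ${owned_cost_per_hour(0.8):.2f}/GPU-hour "
      f"vs. ${cloud_rate:.2f}/GPU-hour in the cloud")
```

Under these assumed numbers, owning only pays off once utilization stays above roughly 40 percent, which is why the decision hinges on how steady a company's AI workload really is.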

Not all GPUs are created equal, so companies should optimize costs by ensuring they use the right type of GPU for the intended workload. The most powerful GPUs are best suited to the few organizations that train large foundation models, such as OpenAI's GPT and Meta's Llama. Most companies will instead be doing less demanding, higher-volume inference work - running data against an existing model - so using a larger number of lower-performance GPUs will be the right strategy, as sketched below.
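To illustrate the "right GPU for the right workload" point, here is a minimal selection rule that matches workload type to a GPU tier and otherwise picks the cheapest tier that fits the model in memory. The tier names, memory sizes, and hourly prices are hypothetical placeholders, not a vendor catalogue.

```python
# Workload-to-GPU matching sketch (tiers and prices are illustrative)
GPU_TIERS = {
    "flagship-training":  {"mem_gb": 80, "usd_per_hour": 4.00},
    "midrange-inference": {"mem_gb": 24, "usd_per_hour": 1.10},
    "entry-inference":    {"mem_gb": 16, "usd_per_hour": 0.45},
}

def pick_tier(workload, model_mem_gb):
    """Choose the cheapest tier that fits the model, reserving flagships for training."""
    if workload == "training":
        return "flagship-training"
    fitting = {name: t for name, t in GPU_TIERS.items()
               if t["mem_gb"] >= model_mem_gb and name != "flagship-training"}
    if not fitting:
        return "flagship-training"   # fall back if nothing smaller fits
    return min(fitting, key=lambda name: fitting[name]["usd_per_hour"])

print(pick_tier("training", 70))    # flagship-training
print(pick_tier("inference", 20))   # midrange-inference (entry tier is too small)
```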

Geographic location is another lever that organizations can use to manage costs. GPUs consume a lot of power, and a large part of their unit economics is the cost of the electricity used to power them. Placing GPU servers in areas where power is plentiful and affordable, such as Norway, can significantly reduce costs compared to the eastern U.S., where power costs are typically high.
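To see how much electricity matters to GPU unit economics, the sketch below compares the annual power bill for a single GPU server in a low-cost and a high-cost region. The server wattage, utilization, and per-kWh tariffs are illustrative assumptions, not published rates.

```python
# Electricity cost sketch for a single 8-GPU server (illustrative assumptions)
server_power_kw = 8 * 0.7 + 1.4        # ~700 W per GPU plus ~1.4 kW for the rest
hours_per_year = 24 * 365
utilization = 0.9                      # assumed average load

tariffs = {                            # assumed $ per kWh, not published rates
    "Norway": 0.06,
    "Eastern U.S.": 0.15,
}

energy_kwh = server_power_kw * hours_per_year * utilization
for region, price in tariffs.items():
    print(f"{region:13s}: ${energy_kwh * price:,.0f} per year")
```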

CIOs should also carefully consider the tradeoffs between the cost and quality of AI applications to find the most effective balance. For example, they can devote less compute to applications that require less accuracy or that are less important to the business.

Switching between different cloud service providers and different AI models gives organizations another way to optimize costs, just as today's logistics companies use different shipping methods and routes to manage costs. They can also employ techniques that optimize the cost of running LLMs for different use cases, thereby increasing the efficiency of GPU usage.
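As a sketch of what switching between providers or models can look like in practice, the routine below picks the cheapest option that clears a minimum quality bar for a given use case. The provider names, per-token prices, and quality scores are hypothetical.

```python
# Cost-aware routing sketch: pick the cheapest model/provider that clears
# a quality threshold. All entries are hypothetical placeholders.
OPTIONS = [
    {"name": "provider-a/large-model",  "usd_per_1k_tokens": 0.030, "quality": 0.92},
    {"name": "provider-b/medium-model", "usd_per_1k_tokens": 0.010, "quality": 0.85},
    {"name": "provider-c/small-model",  "usd_per_1k_tokens": 0.002, "quality": 0.74},
]

def cheapest_option(min_quality):
    """Return the lowest-cost option whose quality score meets the bar."""
    eligible = [o for o in OPTIONS if o["quality"] >= min_quality]
    if not eligible:
        raise ValueError("no option meets the quality requirement")
    return min(eligible, key=lambda o: o["usd_per_1k_tokens"])

# A low-stakes internal chatbot can accept a lower bar than a customer-facing one
print(cheapest_option(0.70)["name"])   # provider-c/small-model
print(cheapest_option(0.90)["name"])   # provider-a/large-model
```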

The entire field of AI computing is evolving rapidly, making it difficult for organizations to accurately predict their GPU needs. Vendors are building new LLMs with more efficient architectures, while chipmakers such as Nvidia and software startups such as TitanML are working on techniques to improve inference efficiency.

At the same time, new applications and use cases are emerging, adding to the challenge of forecasting demand. Even today's relatively simple use cases, such as retrieval-augmented generation (RAG) chatbots, can change in how they are built, driving GPU demand up or down. For most companies, predicting GPU requirements is uncharted territory.

The wave of AI development shows no signs of abating. According to Bank of America Global Research and IDC, global revenue related to AI hardware, software, and services will grow at an annual rate of 19 percent, reaching $900 billion by 2026. That is great news for chipmakers like Nvidia, but for many organizations it will require learning a whole new discipline of cost management. They should start planning now.


