Thanks, Simona. QX was another record quarter. Revenue of $XX.X billion was up XX% sequentially and up more than XXX% year-on-year, and well above our outlook of $XX billion.
Starting with data center. The continued ramp of the NVIDIA HGX platform, based on our Hopper Tensor Core GPU architecture, along with InfiniBand end-to-end networking, drove record data center revenue of $XX.X billion, up XX% sequentially and up XXX% year-on-year. NVIDIA HGX with InfiniBand together are essentially the reference architecture for AI supercomputers and data center infrastructures.
Some of the most exciting generative AI applications are built and run on NVIDIA, including Adobe Firefly, ChatGPT, Microsoft XXX Copilot, CoAssist, Now Assist with ServiceNow and Zoom AI Companion.
Our data center compute revenue quadrupled from last year and networking revenue nearly tripled. Investment in infrastructure for training and inferencing large language models, deep learning recommender systems and generative AI applications is fueling strong broad-based demand for NVIDIA accelerated computing. Inferencing is now a major workload for NVIDIA AI computing.
Consumer Internet companies and enterprises drove exceptional sequential growth in QX, comprising approximately half of our data center revenue and outpacing total growth. Companies like Meta are in full production with deep learning recommender systems and also investing in generative AI to help advertisers optimize images and text. Most major consumer Internet companies are racing to ramp up generative AI deployment. The enterprise wave of AI adoption is now beginning. Enterprise software companies such as Adobe, Databricks, Snowflake and ServiceNow are adding AI copilots and assistants to their platforms. And broader enterprises are developing custom AI for vertical industry applications such as Tesla in autonomous driving.
Cloud service providers drove roughly the other half of our data center revenue in the quarter. Demand was strong from all hyperscale CSPs as well as from a broadening set of GPU-specialized CSPs globally that are rapidly growing to address the new market opportunities in AI. NVIDIA HXXX Tensor Core GPU instances are now generally available in virtually every cloud, with instances in high demand.
We have significantly increased supply every quarter this year to meet strong demand and expect to continue to do so next year.
We will also have a broader and faster product launch cadence to meet a growing and diverse set of AI opportunities.
Toward the end of the quarter, the U.S. government announced a new set of export control regulations for China and other markets, including Vietnam and certain countries in the Middle East. These regulations require licenses for the export of a number of our products, including our Hopper and Ampere XXX and XXX series and several others.
Our sales to China and other affected destinations, derived from products that are now subject to licensing requirements, have consistently contributed approximately XX% to XX% of data center revenue over the past few quarters.
We expect that our sales to these destinations will decline significantly in the fourth quarter, though we believe they'll be more than offset by strong growth in other regions.
The U.S. government designed the regulation to allow the U.S. industry to provide data center compute products to markets worldwide, including China. Continuing to compete worldwide, as the regulations encourage, promotes U.S. technology leadership, spurs economic growth and supports U.S. jobs.
For the highest performance levels, the government requires licenses.
For lower performance levels, the government requires a streamlined prior notification process. And for products at even lower performance levels, the government does not require any notice at all.
Following the government's clear guidelines, we are working to expand our data center product portfolio to offer compliant solutions for each regulatory category, including products for which the U.S. government does not wish to have advanced notice before each shipment.
We are working with some customers in China and the Middle East to pursue licenses from the U.S. government. It is too early to know whether these will be granted for any significant amount of revenue.
Many countries are awakening to the need to invest in sovereign AI infrastructure to support economic growth and industrial innovation. With investments in domestic compute capacity, nations can use their own data to train LLMs and support their local generative AI ecosystems.
For example, we are working with India's government and its largest tech companies, including Infosys, Reliance and Tata, to boost their sovereign AI infrastructure. And French private cloud provider Scaleway is building a regional AI cloud based on NVIDIA HXXX, InfiniBand and NVIDIA AI Enterprise software to fuel advancement across France and Europe. National investment in compute capacity is a new economic imperative, and serving the sovereign AI infrastructure market represents a multibillion-dollar opportunity over the next few years.
From a product perspective, the vast majority of revenue in QX was driven by the NVIDIA HGX platform based on our Hopper GPU architecture, with lower contribution from the prior generation Ampere GPU architecture. The new LXXS GPU, built for industry standard servers, began to ship, supporting training and inference workloads across a variety of customers. This was also the first revenue quarter for our GH Grace Hopper Superchip, which combines our ARM-based Grace CPU with the Hopper GPU.
Grace and Grace Hopper are ramping into a new multibillion-dollar product line next year. Grace Hopper instances are now available at GPU-specialized cloud providers and coming soon to Oracle Cloud. Grace Hopper is also getting significant traction with supercomputing customers. Initial shipments to Los Alamos National Lab and the Swiss National Supercomputing Center took place in the third quarter.
The U.K. government announced it will build one of the world's fastest AI supercomputers, called Isambard-AI, with almost X,XXX Grace Hopper Superchips. German supercomputing center Jülich also announced that it will build its next-generation AI supercomputer with close to XX,XXX Grace Hopper Superchips and Quantum-X InfiniBand, making it the world's most powerful AI supercomputer with over XX exaflops of AI performance.
All in, we estimate that the combined AI compute capacity of all of the supercomputers built on Grace Hopper across the U.S., Europe and Japan next year will exceed XXX exaflops, with more wins to come.
Inference is contributing significantly to data center demand, as AI is now in full production to power deep learning recommenders, chatbots, copilots and text-to-image generation, and this is just the beginning. NVIDIA offers the best performance and versatility for AI inference and thus, lower cost of ownership. We are also driving a fast cost reduction curve.
With the release of NVIDIA TensorRT-LLM, we have now achieved more than Xx the inference performance, or half the cost, of inferencing LLMs on NVIDIA GPUs.
We also announced the latest member of the Hopper family, the HXXX, which will be the first GPU to offer HBMXe, faster and larger memory to further accelerate generative AI and LLMs. It moves inference speed up to another Xx compared to HXXX GPUs for running LLMs like Llama [ X ]. Combined with TensorRT-LLM, the HXXX delivers up to an XXx performance increase compared to the AXXX for models like GPT-X, allowing customers to move to larger models with no increase in latency and at reduced cost. And with CUDA architecture compatibility, customers benefit without changing their stack. Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud will be among the first CSPs to offer HXXX-based instances starting next year. Generative AI is just in its infancy.
At last week's Microsoft Ignite, we deepened and expanded our collaboration with Microsoft across the entire stack. We introduced an AI foundry service for the development and tuning of custom generative AI enterprise applications running on Azure. Customers can bring their domain knowledge and proprietary data, and we help them build their AI models using our AI expertise and software stack in our DGX Cloud, all with enterprise-grade security and support. SAP and Amdocs are the first customers of the NVIDIA AI foundry service on Microsoft Azure.
In addition, Microsoft will launch new confidential computing instances based on the HXXX. The HXXX remains the top-performing and most versatile platform for AI training, by a wide margin, as shown in the latest MLPerf industry benchmark results.
Our training cluster included more than XX,XXX HXXX GPUs, or Xx more than in June, reflecting very efficient scaling. Efficient scaling is a key requirement in generative AI because LLMs are growing by an order of magnitude every year. Microsoft Azure achieved similar results on a nearly identical cluster, demonstrating the efficiency of NVIDIA AI in public cloud deployments.
Networking now exceeds a $XX billion annualized revenue run rate. Strong growth was driven by exceptional demand for InfiniBand, which grew fivefold year-on-year. InfiniBand is critical to gaining the scale and performance needed for training LLMs. Microsoft made this very point last week, highlighting that Azure uses over XX,XXX miles of InfiniBand cabling, enough to circle the globe.
We are expanding NVIDIA networking into the Ethernet space.
Our new Spectrum-X end-to-end Ethernet offering, with technologies purpose-built for AI, will be available in QX next year with support from leading OEMs, including Dell, HPE and Lenovo. Spectrum-X can achieve X.Xx higher networking performance for AI communication compared to traditional Ethernet offerings.
Let me also provide an update on our software and services offerings, where we are starting to see excellent adoption.
We are on track to exit the year at an annualized run rate of over $X billion for recurring software, support and services offerings. We see X primary opportunities for growth over the intermediate term, with our DGX Cloud service and with our NVIDIA AI Enterprise software. Each reflects the growth of enterprise AI training and inference, respectively.
Our latest customer announcement this morning was a collaboration with biotechnology pioneer Genentech, which plans to use DGX Cloud and the NVIDIA BioNeMo LLM framework as part of their AI research platform to help accelerate and optimize drug discovery. We also now have enterprise AI partnerships with Adobe, Dropbox, Getty, SAP, ServiceNow, Snowflake and others, with more to come.
Okay, moving to gaming. Gaming revenue of $X.XX billion was up XX% sequentially and up more than XX% year-on-year, with strong demand in the important back-to-school shopping period as gamers enter the holidays. Gaming has doubled relative to pre-COVID levels, even against the backdrop of lackluster PC market performance. This reflects the significant value we've brought to the gaming ecosystem with innovations like RTX ray tracing and DLSS, and the best-ever RTX GPU lineup, with price points as low as $XXX, is attracting new buyers and driving upgrades.
Generative AI is emerging as the new killer app for high-performance PCs, and NVIDIA RTX is the natural and most performant platform for gamers, creators and AI developers. We just released TensorRT-LLM for Windows, which speeds on-device LLM inference by up to Xx. With an installed base of over XXX million RTX GPUs, the ecosystem of AI applications supporting RTX PCs and workstations has exploded: NVIDIA RTX ray tracing and AI technologies are now available in over XXX RTX-enabled games and applications, and that number continues to grow.
Finally, our GeForce NOW cloud gaming service continues to build momentum. Its library of PC games surpassed X,XXX titles, including the launches of Alan Wake X, Baldur's Gate X, Cyberpunk XXXX Phantom Liberty and Starfield.
Moving to ProViz. Revenue of $XXX million was up XX% sequentially and up XXX% year-on-year. NVIDIA RTX is the workstation platform of choice for professional design, engineering and simulation use cases, and AI is emerging as a powerful demand driver. Early applications include inference for AI imaging in health care and edge AI in smart spaces and the public sector. We launched a new line of desktop workstations based on NVIDIA RTX Ada Lovelace generation GPUs and ConnectX SmartNICs, offering up to Xx the AI processing, ray tracing and graphics performance of previous generations. These powerful new workstations are optimized for AI workloads such as fine-tuning of smaller models and running inference locally.
We also continue to make progress on Omniverse, our software platform for designing, building and operating XD virtual worlds. Mercedes-Benz is using Omniverse-powered digital twins to plan, design, build and operate its manufacturing and assembly facilities, helping it increase efficiency and reduce defects, saving time and cost. [ Axon ] is incorporating Omniverse simulation into its entire end-to-end manufacturing process, including its robotics and automation pipeline. We announced X new Omniverse Cloud services for automotive digitalization, available on Microsoft Azure: a virtual factory simulation engine and an autonomous vehicle simulation engine.
Moving to automotive. Revenue was $XXX million, up X% sequentially and up X% year-on-year, driven primarily by continued growth in self-driving platforms based on the NVIDIA DRIVE Orin SoC and the ramp of AI cockpit solutions with global OEM customers. We extended our automotive partnership with Foxconn to include the NVIDIA DRIVE Thor, our next-generation automotive SoC. Foxconn has become the ODM for EVs.
Our partnership with Foxconn provides customers a standard AV sensor and computing platform to easily build their state-of-the-art, safe and secure software-defined car.
Now we're going to move to the rest of the P&L. GAAP gross margin expanded to XX% and non-GAAP gross margin to XX%, driven by higher data center sales and lower net inventory reserves, including a X percentage point benefit from the release of previously reserved inventory related to the Ampere GPU architecture products. Sequentially, GAAP operating expenses were up XX% and non-GAAP operating expenses were up XX%, primarily reflecting increased compensation and benefits.
minus of to Let fourth quarter the or me X%. is be turn revenue Total to fiscal XXXX. billion, expected $XX plus
We expect strong sequential growth to be driven by data center, with continued strong demand for both compute and networking. Gaming will likely decline sequentially as it is now more aligned with notebook seasonality.
GAAP and non-GAAP gross margins are expected to be XX.X% and XX.X%, respectively, plus or minus XX basis points. GAAP and non-GAAP operating expenses are expected to be approximately $X.XX billion and $X.X billion, respectively. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $XXX million, excluding gains and losses from nonaffiliated investments. GAAP and non-GAAP tax rates are expected to be XX%, plus or minus X%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website.
In closing, let me highlight some upcoming events for the financial community.
We will attend the UBS Global Technology Conference in Scottsdale, Arizona on November XX; the Wells Fargo TMT Summit in Rancho Palos Verdes, California on November XX; the Arete Virtual Tech Conference on December X; and the JPMorgan Healthcare Conference in San Francisco on January X.
Our earnings call to discuss the results of our fourth quarter and fiscal XXXX is scheduled for Wednesday, February XX.
We will now open the call for questions. Operator, will you please poll for questions?