Thanks, Simona. QX was another record quarter. Revenue of $XX billion was up XX% sequentially and up XXX% year-on-year, well above our outlook of $XX billion.

Starting with Data Center. Data Center revenue of $XX.X billion was a record, up XX% sequentially and up XXX% year-on-year, driven by continued strong demand for the NVIDIA Hopper GPU computing platform. Compute revenue grew more than Xx and networking revenue more than Xx from last year. Strong sequential Data Center growth was driven by all customer types, led by enterprise and consumer Internet companies. Large cloud providers continue to drive strong growth as they deploy and ramp NVIDIA AI infrastructure at scale and represented the mid-XXs as a percentage of our Data Center revenue. Training and inferencing AI on NVIDIA CUDA is driving meaningful acceleration in cloud rental revenue growth, delivering an immediate and strong return on cloud providers' investment.
For every $X spent on NVIDIA AI infrastructure, cloud providers have an opportunity to earn $X in GPU instant hosting revenue over X years. NVIDIA's rich software stack and ecosystem and tight integration with cloud providers make it easy for end customers to get up and running on NVIDIA GPU instances in the public cloud.
For cloud rental customers, NVIDIA GPUs offer the best time to train models, the lowest cost to train models, and the lowest cost to inference large language models.
For public cloud providers, NVIDIA brings customers to their cloud, driving revenue growth and returns on their infrastructure investments. Leading LLM companies such as OpenAI, Adept, Anthropic, Character.AI, Cohere, Databricks, DeepMind, Meta, Mistral, xAI, and many others are building on NVIDIA AI in the cloud.

Enterprises drove strong sequential growth in Data Center this quarter. We supported Tesla's expansion of their training AI cluster to XX,XXX HXXX GPUs. Their use of NVIDIA AI infrastructure paved the way for the breakthrough performance of FSD version XX, their latest autonomous driving software based on Vision. Transformers, while consuming significantly more computing, are enabling dramatically better autonomous driving capabilities and propelling significant growth for NVIDIA AI infrastructure across the automotive industry.
We expect automotive to be our largest enterprise vertical within Data Center this year, driving a multibillion revenue opportunity across on-prem and cloud consumption.

Consumer Internet companies are also a strong growth vertical. A big highlight this quarter was Meta's announcement of Llama X, their latest large language model, which was trained on a cluster of XX,XXX HXXX GPUs. Llama X powers Meta AI, a new AI assistant available across Facebook, Instagram, WhatsApp, and Messenger. Llama X is openly available and has kickstarted a wave of AI development across industries.
As generative AI makes its way into more consumer Internet applications, we expect to see continued growth opportunities as inference scales, both with model complexity as well as with the number of users and the number of queries per user, driving much more demand for AI compute. In our trailing X quarters, we estimate that inference drove about XX% of our Data Center revenue. Both training and inference are growing significantly.

Large clusters like the ones built by Meta and Tesla are examples of the essential infrastructure for AI production, what we refer to as AI factories. These next-generation data centers host advanced full-stack accelerated computing platforms where the data comes in and intelligence comes out. In QX, we worked with over XXX customers building AI factories ranging in size from hundreds to tens of thousands of GPUs, with some reaching XXX,XXX GPUs.

From a geographic perspective, Data Center revenue continues to diversify as countries around the world invest in sovereign AI.
Sovereign AI refers to a nation's capabilities to produce artificial intelligence using its own infrastructure, data, workforce, and business networks. Nations are building up domestic computing capacity through various models.
Some are procuring and operating sovereign AI clouds in collaboration with state-owned telecommunication providers or utilities. Others are sponsoring local cloud partners to provide a shared AI computing platform for public and private sector use.
For example, Japan plans to invest more than $XXX million in key digital infrastructure providers, including KDDI, Sakura Internet, and SoftBank, to build out the nation's sovereign AI infrastructure. France-based Scaleway, a subsidiary of the Iliad Group, is building Europe's most powerful cloud native AI supercomputer. In Italy, Swisscom Group will build the nation's first and most powerful NVIDIA DGX-powered supercomputer to develop the first LLM natively trained in the Italian language. In Singapore, the National Supercomputer Center is getting upgraded with NVIDIA Hopper GPUs, while Singtel is building NVIDIA-accelerated AI factories across Southeast Asia.

NVIDIA's ability to offer end-to-end compute-to-networking technologies, full-stack software, AI expertise, and a rich ecosystem of partners and customers allows sovereign AI and regional cloud providers to jumpstart their country's AI ambitions. From nothing the previous year, we believe sovereign AI revenue can approach the high single-digit billions this year. The importance of AI has caught the attention of every nation.

We ramped new products designed specifically for China that don't require an export control license.
Our Data Center revenue in China is down significantly from the level prior to the imposition of the new export control restrictions in October.
We expect the market in China to remain very competitive going forward.

From a product perspective, the vast majority of compute revenue was driven by our Hopper GPU architecture. Demand for Hopper during the quarter continues to increase. Thanks to CUDA algorithm innovations, we've been able to accelerate LLM inference on HXXX by up to Xx, which can translate to a Xx cost reduction for serving popular models like Llama X. We started sampling the HXXX in QX and are currently in production, with shipments on track for QX.
The first HXXX system was delivered by Jensen to Sam Altman and the team at OpenAI and powered their amazing GPT-Xo demos last week. HXXX nearly doubles the inference performance of HXXX, delivering significant value for production deployments.
For example, using Llama X with XXX billion parameters, a single NVIDIA HGX HXXX server can deliver XX,XXX Llama X tokens per second, supporting more than X,XXX users at the same time. That means for every $X spent on NVIDIA HGX HXXX servers at current prices per token, an API provider serving Llama X tokens can generate $X in revenue over X years. With ongoing software optimizations, we continue to improve the performance of NVIDIA AI infrastructure for serving AI models.
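As an illustrative aside, the unit economics described above are straightforward throughput arithmetic: tokens generated per second, sustained over the deployment period, multiplied by a price per token and compared against the server cost. The sketch below is a minimal illustration only; the throughput, utilization, per-user rate, server cost, and token price are hypothetical placeholders, not the redacted figures from the call.

```python
# Hypothetical illustration of the token-serving economics described above.
# All numbers are placeholder assumptions, not NVIDIA's actual (redacted) figures.

SECONDS_PER_YEAR = 365 * 24 * 3600


def serving_revenue(tokens_per_second: float,
                    price_per_million_tokens: float,
                    years: float,
                    utilization: float = 0.7) -> float:
    """Revenue from selling generated tokens at a given price and average utilization."""
    tokens = tokens_per_second * utilization * years * SECONDS_PER_YEAR
    return tokens / 1e6 * price_per_million_tokens


# Placeholder assumptions for a single multi-GPU HGX-class server:
server_cost = 300_000.0   # assumed all-in server cost, USD
throughput = 24_000.0     # assumed aggregate tokens per second for the server
price = 1.0               # assumed price charged per million tokens, USD
per_user_rate = 10.0      # assumed tokens per second streamed to each user

revenue = serving_revenue(throughput, price, years=4)
print(f"Concurrent users supported: {throughput / per_user_rate:,.0f}")
print(f"Token revenue over 4 years: ${revenue:,.0f}")
print(f"Revenue per $1 of server cost: ${revenue / server_cost:,.2f}")
```

Changing any of the placeholder inputs shifts the resulting revenue-per-dollar ratio proportionally, which is why throughput gains from software optimization flow directly into the serving economics.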
While supply for HXXX [indiscernible], we are still constrained on HXXX. At the same time, Blackwell is in full production.
We are working to bring up our system and cloud partners for global availability later this year. Demand for HXXX and Blackwell is well ahead of supply, and we expect demand may exceed supply well into next year.

Grace Hopper Superchip is shipping in volume. Last week at the International Supercomputing Conference, we announced that X new supercomputers worldwide are using Grace Hopper for a combined XXX exaflops of energy-efficient AI processing power delivered this year. These include the Alps Supercomputer at the Swiss National Supercomputing Center, the fastest AI supercomputer in Europe; Isambard-AI at the University of Bristol in the U.K.; and Jupiter at the Jülich Supercomputing Center in Germany.
We are seeing an XX% attach rate of Grace Hopper in supercomputing due to its high energy efficiency and performance.
We were also proud to see supercomputers powered with Grace Hopper take the #X, #X, and #X spots of the most energy-efficient supercomputers in the world.

Strong networking year-on-year growth was driven by InfiniBand. We experienced a modest sequential decline, which was largely due to the timing of supply, with demand well ahead of what we were able to ship.
We expect networking to return to sequential growth in QX. In the first quarter, we started shipping our new Spectrum-X Ethernet networking solution optimized for AI from the ground up. It includes our Spectrum-X switch, BlueField-X DPU, and new software technologies that overcome the challenges of AI on Ethernet to deliver X.Xx higher networking performance for AI processing compared with traditional Ethernet. Spectrum-X is ramping in volume with multiple customers, including a massive XXX,XXX GPU cluster. Spectrum-X opens a brand-new market to NVIDIA networking and enables Ethernet-only data centers to accommodate large-scale AI. We expect Spectrum-X to jump to a multibillion-dollar product line within a year.

At GTC in March, we launched our next-generation AI factory platform, Blackwell. The Blackwell GPU architecture delivers up to Xx faster training and XXx faster inference than HXXX and enables real-time generative AI on trillion-parameter large language models. Blackwell is a giant leap with up to XXx lower TCO and energy consumption than Hopper. The Blackwell platform includes the fifth-generation NVLink with a multi-GPU spine and new InfiniBand and Ethernet switches, the XXXX series, designed for trillion-parameter-scale AI. Blackwell is designed to support data centers universally, from hyperscale to enterprise, training to inference, xXX to Grace CPUs, Ethernet to InfiniBand networking, and air cooling to liquid cooling. Blackwell will be available in over XXX OEM and ODM systems at launch, more than double the number at Hopper's launch, representing every major computer maker in the world. This will support fast and broad adoption across customer types, workloads, and data center environments in the first year of Blackwell shipments. Blackwell time-to-market customers include Amazon, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI.

We announced a new software product with the introduction of NVIDIA Inference Microservices, or NIM. NIM provides secure and performance-optimized containers powered by NVIDIA CUDA acceleration and network computing and inference software, including Triton Inference Server and TensorRT-LLM, with industry-standard APIs for a broad range of use cases, including large language models for text, speech, imaging, vision, robotics, genomics, and digital biology. They enable developers to quickly build and deploy generative AI applications using leading models from NVIDIA, AIXX, Adept, Cohere, Getty Images, and Shutterstock, and open models from Google, Hugging Face, Meta, Microsoft, Mistral AI, Snowflake, and Stability AI. NIMs will be offered as part of our NVIDIA AI Enterprise software platform for production deployment in the cloud or on-prem.
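As an illustrative aside on what calling a NIM looks like in practice: the sketch below assumes an LLM NIM container running locally and exposing an OpenAI-compatible chat completions endpoint; the port, endpoint path, and model name are illustrative assumptions rather than values from the call.

```python
# Minimal sketch of querying a locally running LLM NIM container.
# Assumes the container exposes an OpenAI-compatible REST API on port 8000;
# the URL, port, and model name below are illustrative placeholders.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama3-8b-instruct",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Summarize what an AI factory is in one sentence."}
    ],
    "max_tokens": 128,
    "temperature": 0.2,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the request and response follow the familiar chat completions schema, existing client code can generally be repointed at a NIM endpoint by changing only the base URL and model name.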
product consistent and Moving of decline. GPUs billion CUDA market gaming the reception GeForce The Super our Gaming sequentially and AI very RTX seasonal demand with XX% was X% inventory channel is PCs. journey, range. equipped for to outlook year-on-year, across end strong and revenue healthy and start a down up we our $X.XX AI the Tensor RTX cores. From remained GPUs GeForce of with
Now with an installed base of over XXX million, GeForce RTX GPUs are perfect for gamers and creators and offer unmatched performance for running generative AI applications on PCs. NVIDIA has a full technology stack for deploying and running fast and efficient generative AI inference on GeForce RTX AI PCs. TensorRT-LLM now accelerates Microsoft's Phi-X Mini model and Google's Gemma XB and XB models as well as popular AI frameworks, including LangChain and LlamaIndex. NVIDIA and Microsoft announced AI performance optimizations for Windows to help run LLMs up to Xx faster on GeForce RTX AI PCs. And top game developers, including NetEase Games, Tencent, and Ubisoft, are embracing the NVIDIA Avatar Character Engine (sic) [Avatar Cloud Engine] to create lifelike avatars that transform interactions between gamers and nonplayable characters.
Moving to ProViz. Revenue of $XXX million was up XX% year-on-year and down X% sequentially. We believe generative AI and Omniverse industrial digitalization will drive the next wave of professional visualization growth. At GTC, we announced new Omniverse Cloud APIs to enable developers to integrate Omniverse industrial digital twin and simulation technologies into their applications.
Some of the world's largest industrial software makers are adopting these APIs, including ANSYS, Cadence, Dassault Systèmes' XDXcite brand, and Siemens. And developers can use them to stream industrial digital twins with spatial computing devices such as Apple Vision Pro. Omniverse Cloud APIs will be available on Microsoft Azure later this year.

Companies are using Omniverse to digitalize their workflows. Omniverse-powered digital twins enable Wistron, one of our manufacturing partners, to reduce end-to-end production cycle times by XX% and defect rates by XX%. And BYD, the world's largest electric vehicle maker, will use Omniverse for virtual factory planning and retail configurations.
Moving to Automotive. Revenue was $XXX million, up XX% sequentially and up XX% year-on-year. Sequential growth was driven by the ramp of AI cockpit solutions with global OEM customers and strength in our self-driving platforms. Year-on-year growth was driven primarily by self-driving. We supported Xiaomi in the successful launch of its first electric vehicle, the SUX sedan, built on NVIDIA DRIVE Orin, our AI car computer for software-defined AV fleets.
We also announced a number of new design wins on NVIDIA DRIVE Thor, the successor to Orin, powered by the new NVIDIA Blackwell architecture, with several leading EV makers, including BYD, XPeng, GAC's Aion Hyper, and Nuro. DRIVE Thor is slated for production vehicles starting next year.
gross to Okay, and margin on the rest non-GAAP to of P&L. to expanded sequentially gross XX.X% GAAP moving the XX.X% margins targets. lower inventory
quarter, and investments.
In of $X.X favorable Sequentially, were a last QX noted first expenses split-adjusted operating form QX, were our both higher billion of up a of announced trading and non-GAAP costs. in QX we from cash we Today, as share returned increased on basis. dividends. reflecting June the repurchases shares the to split XX-for-X XX and and with and shareholders costs primarily XX%, GAAP compensation-related component expenses compute infrastructure up XX% As benefited operating day
We are also increasing our dividend by XXX%.
Let me turn to the outlook for the second quarter. Total revenue is expected to be $XX billion, plus or minus X%.
We expect sequential growth in all market platforms. GAAP and non-GAAP gross margins are expected to be XX.X% and XX.X%, respectively, plus or minus XX basis points, consistent with our discussion last quarter.
year low For the expected Jensen Further range. details me, full $XXX billion be and are and be -- available billion, website. an other would expected XX% excluding the turn the our from rates CFO excluding approximately make comments. a to in and to GAAP included it X%, now discrete and and respectively. income and gross to are approximately Full we expect be to OpEx losses he few mid-XXs would or gains financial to any on non-GAAP other expected $X.X XX%, plus in margins be expenses to GAAP are grow to information and approximately like million, commentary year, minus excuse items. of are tax investments. to non-GAAP non-GAAP like expected $X IR in I nonaffiliated as operating expenses percent over income the is range.
GAAP