News Posts matching #Jensen Huang

NVIDIA GeForce NOW Gets NieR:Automata, NieR Replicant, and More Games

Stuck in a gaming rut? Get out of the loop this GFN Thursday with four new games joining the GeForce NOW library of over 2,000 supported games. Dive into Square Enix's mind-bending action role-playing games (RPGs) NieR:Automata and NieR Replicant ver.1.22474487139…, now streaming in the cloud. Plus, explore HoYoverse's Zenless Zone Zero for an adrenaline-packed adventure, just in time for its 1.4 update.

Check out GeForce Greats, which offers a look back at the biggest and best moments of PC gaming, from the launch of the GeForce 256 graphics card to the modern era. Follow the GeForce, GeForce NOW, NVIDIA Studio and NVIDIA AI PC channels on X, as well as #GeForceGreats, to join in on the nostalgic journey. Plus, participate in the GeForce LAN Missions from the cloud with GeForce NOW starting on Saturday, Jan. 4, for a chance to win in-game rewards, first come, first served. GeForce NOW members will also be able to launch a virtual stadium for a front-row seat to the CES opening keynote, to be delivered by NVIDIA founder and CEO Jensen Huang on Monday, Jan. 6. Stay tuned to GFN Thursday for more details.

Acer Leaks GeForce RTX 5090 and RTX 5080 GPU, Memory Sizes Confirmed

Acer has jumped the gun and listed its Predator Orion 7000 systems with the upcoming NVIDIA RTX 50 series graphics cards, namely the GeForce RTX 5080 and the GeForce RTX 5090. In addition, the listing confirms that the GeForce RTX 5080 will come with 16 GB of GDDR7 memory, while the GeForce RTX 5090 will get 32 GB of GDDR7 memory.

The Acer Predator Orion 7000 gaming PC was announced back in September alongside Intel's Core Ultra 200 series, so it does not come as a surprise that this high-end pre-built system will now be getting NVIDIA's new GeForce RTX 50 series graphics cards. In case you missed previous rumors, the GeForce RTX 5080 is expected to use the GB203-400 GPU with 10,752 CUDA cores, and come with 16 GB of GDDR7 memory on a 256-bit memory interface. The GeForce RTX 5090, on the other hand, gets the GB202-300 GPU with 21,760 CUDA cores and packs 32 GB of GDDR7 memory.

NVIDIA to Open Vietnam R&D Center to Bolster AI Development

NVIDIA announced today it is opening its first Vietnam research and development center, signaling its confidence in the country's bright artificial intelligence future. The company is collaborating with the Vietnamese government to establish its new Vietnam Research and Development Center focused on AI. NVIDIA will use the R&D center to focus on software development, capitalizing on the country's strong talent pool of STEM engineers, and to engage industry leaders, startups, government agencies, universities and students to accelerate the adoption of AI.

"We are delighted to open NVIDIA's R&D center to accelerate Vietnam's AI journey," said Jensen Huang, founder and CEO of NVIDIA. "With our expertise in AI development, we will partner with a vibrant ecosystem of researchers, startups and enterprise organizations to build incredible AI right here in Vietnam."

NVIDIA Surpasses Apple as the World's Most Valuable Company

Valued at $3.43 trillion against Apple's $3.38 trillion at the time of this writing, NVIDIA Corporation is now the most valuable company in the world. NVIDIA has been the hottest tech stock since 2021, as it created the most valuable IP of this decade—AI GPUs, which were ready just in time for OpenAI's ChatGPT to jumpstart the generative AI revolution. NVIDIA's AI GPUs didn't come into being overnight: the company had been investing in the parallel computing and HPC space since the 2000s with its CUDA programming model, which lets applications exploit the SIMD parallelism of its GPUs. In the mid-2010s, the company created the Tensor core to accelerate AI, and invested heavily in AI frameworks. Some of the most fundamental research leading up to GPT was done on NVIDIA GPUs. This led to the big payoff in the 2020s, helping NVIDIA become a multi-trillion-dollar company. Founding CEO Jensen Huang has been NVIDIA's brains and spine throughout the company's 30+ year journey, and its success would not have been possible without him at the reins.
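For readers unfamiliar with how that programming model works, below is a minimal sketch of the SIMD-style data parallelism CUDA exposes: one lightweight GPU thread per array element, all executing the same kernel. It is written with Numba's Python CUDA bindings purely as an illustration (our choice, not something from NVIDIA's materials), and assumes an NVIDIA GPU plus the numba and numpy packages.

import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    # Each GPU thread handles one array element - the SIMD-style parallelism in question.
    i = cuda.grid(1)          # global thread index
    if i < out.size:          # guard threads past the end of the array
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

d_x = cuda.to_device(x)                 # host -> device copies
d_y = cuda.to_device(y)
d_out = cuda.device_array_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](np.float32(2.0), d_x, d_y, d_out)

print(d_out.copy_to_host()[:4])         # device -> host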

NVIDIA CEO Jensen Huang Asks SK hynix to Speed Up HBM4 Delivery by Six Months

SK hynix announced the industry's first 48 GB 16-high HBM3E at the SK AI Summit in Seoul today. During the event, news also emerged about the company's plans for its next-gen memory. Reuters and ZDNet Korea reported that NVIDIA CEO Jensen Huang asked SK hynix to speed up its HBM4 delivery by six months, information shared at the Summit by SK Group Chairman Chey Tae-won. The company had earlier said it would deliver HBM4 chips to customers in the second half of 2025.

When ZDNet asked about the accelerated plan, SK hynix President Kwak Noh-Jung gave a careful answer: "We will give it a try." A company spokesperson told Reuters that the new schedule would be quicker than first planned, but didn't share further details. In a video interview shown at the Summit, NVIDIA's Jensen Huang highlighted the strong partnership between the companies, saying that working with SK hynix has helped NVIDIA go beyond Moore's Law performance gains, and stressing that NVIDIA will keep needing SK hynix's HBM tech for future products. SK hynix plans to supply the latest 12-layer HBM3E to an undisclosed customer this year, and will start sampling the 16-layer HBM3E early next year.

NVIDIA to Release the Bulk of its RTX 50-series in Q1-2025

The first quarter of 2025 (January through March) will see back-to-back launches of next-generation GeForce RTX 50-series "Blackwell" graphics cards, according to the latest rumors. NVIDIA CEO Jensen Huang is confirmed to take center stage for the 2025 International CES keynote address, where he is widely expected to kick off the GeForce "Blackwell" gaming GPU generation. CES is expected to see NVIDIA launch its flagship GeForce RTX 5090 (the RTX 4090 successor), and its next-best part, the GeForce RTX 5080 (the RTX 4080 successor).

February 2025 is expected to see the company debut the RTX 5070, and possibly the RTX 5070 Ti, if there is such a SKU. The RTX 5070 succeeds a long line of extremely successful SKUs that tended to sell in large volumes. Perhaps the most important launches of the generation will come in March 2025, when the company is expected to debut the RTX 5060 and RTX 5060 Ti, which succeed the current RTX 4060 and RTX 4060 Ti, respectively. The xx60 tier tends to be the bestselling class of gaming GPUs in any generation. In all, it's expected that NVIDIA will release six new SKUs within Q1, and you can expect over a hundred graphics card reviews from TechPowerUp in Q1.

NVIDIA Tunes GeForce RTX 5080 GDDR7 Memory to 32 Gbps, RTX 5070 Launches at CES

NVIDIA is gearing up for an exciting showcase at CES 2025, where its CEO, Jensen Huang, will take the stage and, hopefully, talk about future "Blackwell" products. According to Wccftech's sources, the anticipated GeForce RTX 5090, RTX 5080, and RTX 5070 graphics cards should arrive at CES 2025 in January. The flagship RTX 5090 is rumored to come equipped with 32 GB of GDDR7 memory running at 28 Gbps. Meanwhile, the RTX 5080 looks very interesting, with reports of an impressive 16 GB of GDDR7 memory running at 32 Gbps. This comes after earlier reports suggested the RTX 5080 would feature 28 Gbps GDDR7 memory. However, the newest rumors suggest that we are in for a surprise, as the massive gap between RTX 5090 and RTX 5080 compute cores will be filled... with faster memory.
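To put the rumored memory-speed bump into perspective, here is a quick back-of-the-envelope bandwidth calculation, using the 256-bit bus width from the earlier RTX 5080 rumor; all of these figures are unconfirmed.

def memory_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    # bandwidth (GB/s) = per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte
    return data_rate_gbps * bus_width_bits / 8

for rate_gbps in (28, 32):  # previously rumored vs. newly rumored GDDR7 speed
    print(f"{rate_gbps} Gbps on a 256-bit bus -> {memory_bandwidth_gb_s(rate_gbps, 256):.0f} GB/s")

# 28 Gbps -> 896 GB/s, 32 Gbps -> 1024 GB/s: roughly a 14% bandwidth uplift.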

The more budget-friendly RTX 5070 is also set for a CES debut, featuring 12 GB of memory. Targeting the mid-range segment, this card aims to deliver solid performance for gamers who want high-quality graphics without breaking the bank. We are very curious about the pricing of these models and how they will fit into the current market. As anticipation builds for CES 2025, we are eager to see how these products will impact gaming experiences and creative workflows in the coming year. Stay tuned for more updates as the event approaches!

NVIDIA's Jensen Huang to Lead CES 2025 Keynote

NVIDIA CEO Jensen Huang will lead the keynote address at the coveted 2025 International CES in Las Vegas, which opens on January 7. The keynote is slated for January 6 at 6:30 am PT. There is of course no word from NVIDIA on what to expect, but the guesswork is fairly easy. NVIDIA's refresh of the GeForce RTX product stack is due, and the company is expected to either debut or expand its next-generation GeForce RTX 50-series "Blackwell" gaming GPU stack, bringing generational improvements in performance and performance-per-Watt, along with new technology.

The company could also make more announcements related to its "Blackwell" AI GPU lineup, which is expected to ramp through 2025, succeeding the current "Hopper" H100 and H200 series. It could also tease "Rubin," which it referenced at GTC in May. "Rubin" succeeds "Blackwell" and will debut as an AI GPU toward the end of 2025, with a ramp to customers through 2026. It's unclear if NVIDIA will make gaming GPUs based on "Rubin," since GeForce RTX generations tend to have a 2-year cadence, and there was no gaming GPU based on "Hopper."

Accenture to Train 30,000 of Its Employees on NVIDIA AI Full Stack

Accenture and NVIDIA today announced an expanded partnership, including Accenture's formation of a new NVIDIA Business Group, to help the world's enterprises rapidly scale their AI adoption. With generative AI demand driving $3 billion in Accenture bookings in its recently closed fiscal year, the new group will help clients lay the foundation for agentic AI functionality using Accenture's AI Refinery, which uses the full NVIDIA AI stack—including NVIDIA AI Foundry, NVIDIA AI Enterprise and NVIDIA Omniverse—to advance areas such as process reinvention, AI-powered simulation and sovereign AI.

Accenture AI Refinery will be available on all public and private cloud platforms and will integrate seamlessly with other Accenture Business Groups to accelerate AI across the SaaS and Cloud AI ecosystem.

NVIDIA RTX 5090 "Blackwell" Could Feature Two 16-pin Power Connectors

NVIDIA CEO Jensen Huang never misses an opportunity to remind us that Moore's Law is cooked, and that future generations of logic hardware will only get larger and hotter, or hungrier for power. NVIDIA's next generation "Blackwell" graphics architecture promises to bring certain architecture-level performance/Watt improvements, coupled with the node-level performance/Watt improvements from the switch to the TSMC 4NP (4 nm-class) node. Even so, the GeForce RTX 5090, or the part that succeeds the current RTX 4090, will be a power-hungry GPU, with rumors suggesting the need for two 16-pin power inputs.

TweakTown reports that the RTX 5090 could come with two 16-pin power connectors, which would give the card the theoretical ability to pull 1200 W (continuous). This doesn't mean that the GPU's total graphics power (TGP) is 1200 W, but rather a number close to or greater than 600 W, which calls for two of these connectors. Even if the TGP is exactly 600 W, NVIDIA would want to deploy two inputs to spread the load across two connectors and improve the physical resilience of each. It's likely that both connectors will have 600 W input capability, so end users don't mix up connectors, as could happen if one were rated for 600 W and the other keyed to 150 W or 300 W.
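As a quick sanity check on those numbers, here is the power-budget arithmetic as a tiny Python sketch, assuming the 600 W-per-connector rating and a 600 W-class TGP discussed above (both of which are rumors at this point).

CONNECTOR_RATING_W = 600          # continuous rating of one 16-pin connector
NUM_CONNECTORS = 2

theoretical_ceiling_w = CONNECTOR_RATING_W * NUM_CONNECTORS   # 1200 W ceiling
rumored_tgp_w = 600                                           # rumored, unconfirmed
load_per_connector_w = rumored_tgp_w / NUM_CONNECTORS         # load spread across both inputs

print(f"Theoretical input ceiling: {theoretical_ceiling_w} W")                        # 1200 W
print(f"Per-connector load at {rumored_tgp_w} W TGP: {load_per_connector_w:.0f} W")   # 300 W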

NVIDIA Resolves "Blackwell" Yield Issues with New Photomask

During its Q2 2024 earnings call, NVIDIA confirmed that its upcoming Blackwell-based products are facing low-yield challenges. However, the company announced that it has implemented design changes to improve the production yields of its B100 and B200 processors. Despite these setbacks, NVIDIA remains optimistic about its production timeline. The tech giant plans to commence the production ramp of Blackwell GPUs in Q4 2024, with expected shipments worth several billion dollars by the end of the year. In an official statement, NVIDIA explained, "We executed a change to the Blackwell GPU mask to improve production yield." The company also reaffirmed that it had successfully sampled Blackwell GPUs with customers in the second quarter.

However, NVIDIA acknowledged that meeting demand required producing "low-yielding Blackwell material," which impacted its gross margins. During the earnings call, NVIDIA's CEO Jensen Huang assured investors that B100 and B200 GPU supply will be there, expressing confidence in the company's ability to mass-produce these chips starting in the fourth quarter. The Blackwell B100 and B200 GPUs use TSMC's CoWoS-L packaging technology and a complex design, which prompted rumors about yield issues. Reports suggest that the initial challenges arose from mismatched thermal expansion coefficients among various components, leading to warping and system failures. The company now claims that the fix was a new GPU photomask, which brought yields back to normal levels.

NVIDIA Accelerates Humanoid Robotics Development

To accelerate humanoid robotics development on a global scale, NVIDIA today announced it is providing the world's leading robot manufacturers, AI model developers and software makers with a suite of services, models and computing platforms to develop, train and build the next generation of humanoid robots.

Among the offerings are new NVIDIA NIM microservices and frameworks for robot simulation and learning, the NVIDIA OSMO orchestration service for running multi-stage robotics workloads, and an AI- and simulation-enabled teleoperation workflow that allows developers to train robots using small amounts of human demonstration data.

NVIDIA Beats Microsoft to Become World's Most Valuable Company, at $3.34 Trillion

With a market capitalization of USD 3.34 trillion, NVIDIA has beaten Microsoft to become the world's most valuable company. The company's valuation doubled year-over-year, thanks to its meteoric rise as the preeminent manufacturer of AI accelerator chips, which puts it in a dominant position to support the productization and mainstreaming of generative AI, and the company expects only further growth of the AI acceleration industry. Speaking at an event in Copenhagen, Chris Penrose, global head of business development for telecom at NVIDIA, said: "The generative AI journey is really transforming businesses and telcos around the world. We're just at the beginning." BBC notes that eight years ago, NVIDIA was worth less than 1% of its current valuation.

In the most recent quarterly result, Q1 fiscal 2025, NVIDIA posted a revenue of $26 billion, with the Data Center business handling the company's AI GPUs making up the lion's share of it, at $22.6 billion. The Gaming and AI PC segment, which handles the GeForce GPU product line that used to be NVIDIA's main breadwinner until a few years ago, made just $2.6 billion, in stark contrast. This highlights that NVIDIA is now mainly a data center acceleration hardware company that happens to sell visual compute products on the side, along with a constellation of smaller product lines such as robotics and automobile self-driving hardware. With NVIDIA at the number-1 spot, the top-5 most valuable companies in the world are all American tech giants—NVIDIA, Microsoft, Apple, Alphabet (Google), and Amazon. The other companies in the top-10 list include Meta and Broadcom.

TSMC Thinking of Raising Prices, NVIDIA's Jensen Fully Supports the Idea

NVIDIA's CEO Jensen Huang said on June 5th that TSMC's stock price is too low, and that he agrees with new TSMC chairman C. C. Wei's view of TSMC's value. Jensen promised to support TSMC in charging more for its wafers and for its CoWoS advanced packaging. An article from TrendForce says that NVIDIA and TSMC will discuss chip prices for next year, which could help TSMC make more money. Jensen also said he's not too worried about geopolitical tensions, because Taiwan has a strong supply chain; TSMC does more than just make chips, it also handles many supply chain issues.

Last year, many companies were left waiting for TSMC's products, with ever-increasing demand and production issues causing delays. Even though things got a bit better this year, there's still not enough supply. TSMC says that even tripling its 3-nanometer chip output isn't enough, so it needs to expand capacity further. NVIDIA's margins are very high, much higher than those of other companies like AMD and even TSMC. If TSMC raises prices for these advanced processes, it won't hurt NVIDIA's profits much, but it might lower profits for other companies like Apple, AMD, and Qualcomm. It will also have an impact on end users.

Nightmare Fuel for Intel: Arm CEO Predicts Arm will Take Over 50% Windows PC Market-share by 2029

Arm CEO Rene Haas predicts that SoCs based on the Arm CPU architecture will overtake x86 in the Windows PC space within the next 5 years (by 2029). Haas is bullish that the current crop of Arm SoCs strikes the right balance of performance and power efficiency, along with just the right blend of on-chip acceleration for AI and graphics, to make serious gains in this market, which has traditionally been dominated by the x86 machine architecture, with chips from just two manufacturers—Intel and AMD. Arm, on the other hand, has a vibrant ecosystem of SoC vendors. "Arm's market share in Windows - I think, truly, in the next five years, it could be better than 50%," Haas said in an interview with Reuters.

Currently, Microsoft has an exclusive deal with Qualcomm to power Windows-on-Arm (WoA) Copilot+ AI PCs. Qualcomm's chip lineup spans the Snapdragon X Elite and Snapdragon X Plus. This exclusivity, however, could change, with a recent interview of Michael Dell and Jensen Huang hinting at NVIDIA working on a chip for the AI PC market. The writing is on the wall for Intel and AMD—they need to compete with Arm on its terms: make leaner PC processors with the kinds of performance/Watt and chip costs that Arm SoCs offer to PC OEMs. Intel has taken a big step in this direction with its "Lunar Lake" processor; you can read all about the architecture here.

Palit Computex 2024: Neptunus, Beyond Limits, Master, LYNK Project, SFF-Ready

Palit sprang an unexpected surprise at the 2024 Computex. Normally, graphics card partners announce their new custom-design brands alongside new GPU generation launches. Palit took a different path: it rehashed its usual GameRock, JetStream, and Dual OC brands with the RTX 40-series "Ada" back in 2022, but showcased all-new custom graphics card designs at Computex 2024, with an expected 5-6 months to go before NVIDIA's next-gen GeForce "Blackwell" hits the scene. The RTX 4090 Neptunus is a variation of the GameRock OC, except it is an air+liquid hybrid cooling solution. The card doesn't include a liquid cooling loop, and out of the box its air-cooling performance should resemble that of the GameRock, but it has a liquid cooling channel: you add your own G 1/4" fittings and connect the card to a DIY loop for a transformative upgrade in cooling performance.

Next up is the RTX 4080 SUPER Beyond Limits. This is Palit being flamboyant with its design, similar to the ASUS ROG Strix or the MSI Gaming X. The card features a very capable 3.5-slot air cooling design with high static-pressure fans, but the star attraction is a large acrylic RGB LED diffuser that runs along the length of the card, carrying the Beyond Limits logo and an infinite-reflection mirror. There is a variant of this card called the Beyond Limits Crystal, which has an "infinity reflection pyramid." On it, the screaming "Beyond Limits" lettering makes way for some abstract shapes.

A Visit to PNY at Computex: NVIDIA's Jensen Really Loved This Card and Signed it

We visited the PNY booth at the 2024 Computex and found something interesting: an air-cooled GeForce RTX 4070 SUPER graphics card and a pre-built system, custom-made by PNY for its booth. Jensen Huang of NVIDIA visited the booth and signed the card, which is now quite the attraction. It's a simple lateral-blower card that's meant to be bought in numbers and crammed into workstation cases, where the lateral blower design allows neighboring cards to breathe better. Typically, such cards tend to have boring, unremarkable black designs with plastic cooler shrouds, but PNY managed to make its card stand out with a die-cast metal shroud finished in NVIDIA's favorite shade of green. As of now, this card isn't a real product, but we've been told that the company is considering a small production run, without Jensen's signature, of course.

NVIDIA Supercharges Ethernet Networking for Generative AI

NVIDIA today announced widespread adoption of the NVIDIA Spectrum-X Ethernet networking platform as well as an accelerated product release schedule. CoreWeave, GMO Internet Group, Lambda, Scaleway, STPX Global and Yotta are among the first AI cloud service providers embracing NVIDIA Spectrum-X to bring extreme networking performance to their AI infrastructures. Additionally, several NVIDIA partners have announced Spectrum-based products, including ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Wistron and Wiwynn, which join Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro in incorporating the platform into their offerings.

"Rapid advancements in groundbreaking technologies like generative AI underscore the necessity for every business to prioritize networking innovation to gain a competitive edge," said Gilad Shainer, senior vice president of networking at NVIDIA. "NVIDIA Spectrum-X revolutionizes Ethernet networking to let businesses fully harness the power of their AI infrastructures to transform their operations and their industries."

Qualcomm's Success with Windows AI PC Drawing NVIDIA Back to the Client SoC Business

NVIDIA is eyeing a comeback to the client processor business, reveals a Bloomberg interview with the CEOs of NVIDIA and Dell. For NVIDIA, all it takes is a simple driver update that exposes every GeForce GPU with Tensor cores as an NPU to Windows 11, with translation layers to get popular client AI apps to work with TensorRT. But that would need you to have a discrete NVIDIA GPU. What about the vast market of Windows AI PCs powered by the likes of Qualcomm, Intel, and AMD, who each sell 15 W-class processors with integrated NPUs capable of 50 AI TOPS, which is all that Copilot+ needs? NVIDIA has held an Arm license for decades and makes Arm-based CPUs to this day with NVIDIA Grace; however, that is a large server processor meant for its AI GPU servers.

NVIDIA already made client processors under the Tegra brand targeting smartphones, a business it wound down last decade. It has since been making Drive PX processors for its automotive self-driving hardware division; and of course, there's Grace. NVIDIA hinted that it might have a client CPU for the AI PC market in 2025. In the interview, Bloomberg asked NVIDIA CEO Jensen Huang a pointed question on whether NVIDIA has a place in the AI PC market. Dell CEO Michael Dell, who was also in the interview, interjected "come back next year," to which Jensen affirmed "exactly." Dell would be in a front-and-center position to know if NVIDIA is working on a new PC processor for launch in 2025, and Jensen's nod all but confirms it.

NVIDIA CEO Jensen Huang to Deliver Keynote Ahead of COMPUTEX 2024

Amid an AI revolution sweeping through trillion-dollar industries worldwide, NVIDIA founder and CEO Jensen Huang will deliver a keynote address ahead of COMPUTEX 2024, in Taipei, outlining what's next for the AI ecosystem. Slated for June 2 at the National Taiwan University Sports Center, the address kicks off before the COMPUTEX trade show scheduled to run from June 3-6 at the Taipei Nangang Exhibition Center. The keynote will be livestreamed at 7 p.m. Taiwan time (4 a.m. PT) on Sunday, June 2, with a replay available at NVIDIA.com.

With over 1,500 exhibitors from 26 countries and an expected crowd of 50,000 attendees, COMPUTEX is one of the world's premier technology events. It has long showcased the vibrant technology ecosystem anchored by Taiwan and has become a launching pad for the cutting-edge systems required to scale AI globally. As a leader in AI, NVIDIA continues to nurture and expand the AI ecosystem. Last year, Huang's keynote and appearances in partner press conferences exemplified NVIDIA's role in helping advance partners across the technology industry.

NVIDIA Hopper Leaps Ahead in Generative AI at MLPerf

It's official: NVIDIA delivered the world's fastest platform in industry-standard tests for inference on generative AI. In the latest MLPerf benchmarks, NVIDIA TensorRT-LLM—software that speeds and simplifies the complex job of inference on large language models—boosted the performance of NVIDIA Hopper architecture GPUs on the GPT-J LLM nearly 3x over their results just six months ago. The dramatic speedup demonstrates the power of NVIDIA's full-stack platform of chips, systems and software to handle the demanding requirements of running generative AI. Leading companies are using TensorRT-LLM to optimize their models. And NVIDIA NIM—a set of inference microservices that includes inferencing engines like TensorRT-LLM—makes it easier than ever for businesses to deploy NVIDIA's inference platform.

Raising the Bar in Generative AI

TensorRT-LLM running on NVIDIA H200 Tensor Core GPUs—the latest, memory-enhanced Hopper GPUs—delivered the fastest performance running inference in MLPerf's biggest test of generative AI to date. The new benchmark uses the largest version of Llama 2, a state-of-the-art large language model packing 70 billion parameters. The model is more than 10x larger than the GPT-J LLM first used in the September benchmarks.

The memory-enhanced H200 GPUs, in their MLPerf debut, used TensorRT-LLM to produce up to 31,000 tokens/second, a record on MLPerf's Llama 2 benchmark. The H200 GPU results include up to 14% gains from a custom thermal solution. It's one example of innovations beyond standard air cooling that systems builders are applying to their NVIDIA MGX designs to take the performance of Hopper GPUs to new heights.
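For context on how a tokens-per-second figure like the one above is derived, here is a minimal Python sketch: total tokens generated across a batch of requests divided by wall-clock time. The run_batch_inference function is a hypothetical placeholder rather than the TensorRT-LLM API; only the bookkeeping is the point.

import time

def run_batch_inference(prompts):
    # Hypothetical placeholder for an inference engine call; pretend each
    # prompt yields 128 output tokens after a short delay.
    time.sleep(0.01)
    return [128 for _ in prompts]

prompts = ["Summarize the MLPerf inference results."] * 64

start = time.perf_counter()
tokens_per_request = run_batch_inference(prompts)
elapsed = time.perf_counter() - start

throughput = sum(tokens_per_request) / elapsed
print(f"Generated {sum(tokens_per_request)} tokens in {elapsed:.3f} s -> {throughput:,.0f} tokens/second")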

NVIDIA CEO Jensen Huang: AGI Within Five Years, AI Hallucinations are Solvable

After giving a vivid GTC talk, NVIDIA's CEO Jensen Huang held a Q&A session that raised many interesting ideas for debate. Among them, he addressed the pressing concerns surrounding AI hallucinations and the future of Artificial General Intelligence (AGI). With a tone of confidence, Huang reassured the tech community that the phenomenon of AI hallucinations—where AI systems generate plausible yet unfounded answers—is a solvable issue. His solution emphasizes the importance of well-researched and accurate data feeding into AI systems to mitigate these occurrences. "The AI shouldn't just answer; it should do research first to determine which of the answers are the best," noted Mr. Huang, adding that for every question there should be a rule that makes the AI research the answer. This points to Retrieval-Augmented Generation (RAG), where LLMs fetch data from external sources, such as databases, for fact-checking.
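As a concrete illustration of the RAG pattern Huang alludes to, here is a minimal Python sketch: retrieve supporting documents first, then have the model answer only from that retrieved context. The keyword-overlap retriever and the call_llm function are toy, hypothetical stand-ins rather than any particular vendor's API.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Toy retriever: rank documents by how many query words they share.
    query_terms = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(query_terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real model call (hosted or local LLM).
    return f"[answer grounded in a prompt of {len(prompt)} characters]"

documents = [
    "The GeForce 256, launched in 1999, was marketed as the first GPU.",
    "HBM4 is the successor to HBM3E high-bandwidth memory.",
    "CoWoS is a TSMC advanced packaging technology.",
]

question = "When was the GeForce 256 launched?"
context = "\n".join(retrieve(question, documents))
answer = call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer)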

Another interesting comment made by the CEO is that the pinnacle of AI evolution—Artificial General Intelligence—is just five years away. People working in AI are divided over the AGI timeline. While Mr. Huang predicted five years, some leading researchers, like Meta's Yann LeCun, think we are far from the AGI singularity threshold and will be stuck with dog/cat-level AI systems first. AGI has long been a topic of both fascination and apprehension, with debates often revolving around its potential to exceed human intelligence and the ethical implications of such a development. Critics worry about the unpredictability and uncontrollability of AGI once it reaches a certain level of autonomy, raising questions about aligning its objectives with human values and priorities. Timeline-wise, no one knows; everyone makes their own prediction, and time will tell who was right.

Jensen Huang Discloses NVIDIA Blackwell GPU Pricing: $30,000 to $40,000

Jensen Huang has been talking to media outlets following the conclusion of his keynote presentation at NVIDIA's GTC 2024 conference—a CNBC TV "exclusive" interview with the Team Green boss has caused a stir in tech circles. Jim Cramer's long-running "Squawk on the Street" trade segment hosted Huang for just under five minutes—CNBC's presenter labelled the latest edition of GTC the "Woodstock of AI." NVIDIA's leader reckoned that around $1 trillion of industry was in attendance at this year's event—folks turned up to witness the unveiling of "Blackwell" B200 and GB200 AI GPUs. In the interview, Huang estimated that his company had invested around $10 billion into the research and development of its latest architecture: "we had to invent some new technology to make it possible."

Industry watchdogs have seized on a major revelation: during the televised CNBC report, Huang revealed that his next-gen AI GPUs "will cost between $30,000 and $40,000 per unit." NVIDIA (and its rivals) are not known to publicly announce price ranges for AI and HPC chips—leaks from hardware partners and individuals within industry supply chains are the "usual" sources. An investment banking company has already delved into alleged Blackwell production costs—as shared by Tae Kim/firstadopter: "Raymond James estimates it will cost NVIDIA more than $6000 to make a B200 and they will price the GPU at a 50-60% premium to H100...(the bank) estimates it costs NVIDIA $3320 to make the H100, which is then sold to customers for $25,000 to $30,000." Huang's disclosure should be treated as an approximation, since his company (normally) deals with the supply of basic building blocks.

Microsoft and NVIDIA Announce Major Integrations to Accelerate Generative AI for Enterprises Everywhere

At GTC on Monday, Microsoft Corp. and NVIDIA expanded their longstanding collaboration with powerful new integrations that leverage the latest NVIDIA generative AI and Omniverse technologies across Microsoft Azure, Azure AI services, Microsoft Fabric and Microsoft 365.

"Together with NVIDIA, we are making the promise of AI real, helping to drive new benefits and productivity gains for people and organizations everywhere," said Satya Nadella, Chairman and CEO, Microsoft. "From bringing the GB200 Grace Blackwell processor to Azure, to new integrations between DGX Cloud and Microsoft Fabric, the announcements we are making today will ensure customers have the most comprehensive platforms and tools across every layer of the Copilot stack, from silicon to software, to build their own breakthrough AI capability."

"AI is transforming our daily lives - opening up a world of new opportunities," said Jensen Huang, founder and CEO of NVIDIA. "Through our collaboration with Microsoft, we're building a future that unlocks the promise of AI for customers, helping them deliver innovative solutions to the world."

NVIDIA Launches Blackwell-Powered DGX SuperPOD for Generative AI Supercomputing at Trillion-Parameter Scale

NVIDIA today announced its next-generation AI supercomputer—the NVIDIA DGX SuperPOD powered by NVIDIA GB200 Grace Blackwell Superchips—for processing trillion-parameter models with constant uptime for superscale generative AI training and inference workloads.

Featuring a new, highly efficient, liquid-cooled rack-scale architecture, the new DGX SuperPOD is built with NVIDIA DGX GB200 systems and provides 11.5 exaflops of AI supercomputing at FP4 precision and 240 terabytes of fast memory—scaling to more with additional racks.