News Posts matching #Team Green

Lenovo Anticipates Great Demand for AMD Instinct MI300X Accelerator Products

Ryan McCurdy, President of Lenovo North America, revealed an ambitious, forward-thinking product roadmap during an interview with CRN magazine. A hybrid strategic approach is expected to create an AI fast lane on future hardware. McCurdy, a former Intel veteran, stated: "there will be a steady stream of product development to add (AI PC) hardware capabilities in a chicken-and-egg scenario for the OS and for the (independent software vendor) community to develop their latest AI capabilities on top of that hardware...So we are really paving the AI autobahn from a hardware perspective so that we can get the AI software cars to go faster on them." Lenovo—as expected—is jumping on the AI-on-device train, but it will be diversifying its range of AI server systems with new AMD and Intel-powered options. The company has reacted to recent Team Green AI GPU supply issues, and alternative units are now in the picture: "with NVIDIA, I think there's obviously lead times associated with it, and there's some end customer identification, to make sure that the products are going to certain identified end customers. As we showcased at Tech World with NVIDIA on stage, AMD on stage, Intel on stage and Microsoft on stage, those industry partnerships are critical to not only how we operate on a tactical supply chain question but also on a strategic what's our value proposition."

McCurdy did not go into detail about upcoming Intel-based server equipment, but seemed excited about AMD's Instinct MI300X accelerator—Lenovo was (previously) announced as one of the early OEM takers of Team Red's latest CDNA 3.0 tech. CRN asked about the firm's outlook for upcoming MI300X-based inventory—McCurdy responded with: "I won't comment on an unreleased product, but the partnership I think illustrates the larger point, which is the industry is looking for a broad array of options. Obviously, when you have any sort of lead times, especially six-month, nine-month and 12-month lead times, there is interest in this incredible technology to be more broadly available. I think you could say in a very generic sense, demand is as high as we've ever seen for the product. And then it comes down to getting the infrastructure launched, getting testing done, and getting workloads validated, and all that work is underway. So I think there is a very hungry end customer-partner user base when it comes to alternatives and a more broad, diverse set of solutions."

NVIDIA Hopper Leaps Ahead in Generative AI at MLPerf

It's official: NVIDIA delivered the world's fastest platform in industry-standard tests for inference on generative AI. In the latest MLPerf benchmarks, NVIDIA TensorRT-LLM—software that speeds and simplifies the complex job of inference on large language models—boosted the performance of NVIDIA Hopper architecture GPUs on the GPT-J LLM nearly 3x over their results just six months ago. The dramatic speedup demonstrates the power of NVIDIA's full-stack platform of chips, systems and software to handle the demanding requirements of running generative AI. Leading companies are using TensorRT-LLM to optimize their models. And NVIDIA NIM—a set of inference microservices that includes inferencing engines like TensorRT-LLM—makes it easier than ever for businesses to deploy NVIDIA's inference platform.

Raising the Bar in Generative AI
TensorRT-LLM running on NVIDIA H200 Tensor Core GPUs—the latest, memory-enhanced Hopper GPUs—delivered the fastest performance running inference in MLPerf's biggest test of generative AI to date. The new benchmark uses the largest version of Llama 2, a state-of-the-art large language model packing 70 billion parameters. The model is more than 10x larger than the GPT-J LLM first used in the September benchmarks. The memory-enhanced H200 GPUs, in their MLPerf debut, used TensorRT-LLM to produce up to 31,000 tokens/second, a record on MLPerf's Llama 2 benchmark. The H200 GPU results include up to 14% gains from a custom thermal solution. It's one example of innovations beyond standard air cooling that systems builders are applying to their NVIDIA MGX designs to take the performance of Hopper GPUs to new heights.
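
For a quick sense of scale, here is a minimal Python check, assuming GPT-J's widely published parameter count of roughly 6 billion (a figure not stated in the post itself):

```python
# Rough size comparison behind the "more than 10x larger" claim.
# GPT-J's ~6 billion parameter count is a widely published figure and an
# assumption here; the 70 billion figure for Llama 2 comes from the post.
gpt_j_params = 6e9
llama2_params = 70e9
print(f"Llama 2 70B is ~{llama2_params / gpt_j_params:.1f}x larger than GPT-J")
# -> ~11.7x, consistent with "more than 10x larger"
```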

Outpost: Infinity Siege Launches With DLSS 3 & New DLSS 2 Games Out Now

Over 500 games and applications feature RTX technologies, and barely a week goes by without new blockbuster games and incredible indie releases integrating NVIDIA DLSS, NVIDIA Reflex, and advanced ray-traced effects to deliver the definitive PC experience for GeForce RTX gamers.

This week, we're highlighting the DLSS 3-accelerated release of Outpost: Infinity Siege, and the launch of Alone In The Dark and Lightyear Frontier, which both feature DLSS 2. This batch of great new RTX titles follows the release of Horizon Forbidden West Complete Edition, which boasted day-one support for NVIDIA DLSS 3, NVIDIA DLAA, and NVIDIA Reflex. Additionally, Diablo IV's ray tracing update is out now—learn more about each new announcement below.

Jensen Huang Discloses NVIDIA Blackwell GPU Pricing: $30,000 to $40,000

Jensen Huang has been talking to media outlets following the conclusion of his keynote presentation at NVIDIA's GTC 2024 conference—a CNBC TV "exclusive" interview with the Team Green boss has caused a stir in tech circles. Jim Cramer's long-running "Squawk on the Street" segment hosted Huang for just under five minutes—CNBC's presenter labelled the latest edition of GTC the "Woodstock of AI." NVIDIA's leader reckoned that around $1 trillion of industry was in attendance at this year's event—folks turned up to witness the unveiling of "Blackwell" B200 and GB200 AI GPUs. In the interview, Huang estimated that his company had invested around $10 billion into the research and development of its latest architecture: "we had to invent some new technology to make it possible."

Industry watchdogs have seized on a major revelation—as disclosed during the televised CNBC report—Huang revealed that his next-gen AI GPUs "will cost between $30,000 and $40,000 per unit." NVIDIA (and its rivals) are not known to publicly announce price ranges for AI and HPC chips—leaks from hardware partners and individuals within industry supply chains are the "usual" sources. An investment banking company has already delved into alleged Blackwell production costs—as shared by Tae Kim/firstadopter: "Raymond James estimates it will cost NVIDIA more than $6000 to make a B200 and they will price the GPU at a 50-60% premium to H100...(the bank) estimates it costs NVIDIA $3320 to make the H100, which is then sold to customers for $25,000 to $30,000." Huang's disclosure should be treated as an approximation, since his company (normally) deals in the supply of basic building blocks.
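
As a rough back-of-envelope exercise, the quoted estimates can be lined up in a few lines of Python; all values below are analyst estimates or Huang's own approximation, not official pricing:

```python
# Back-of-envelope margin math from the quoted estimates (not official figures).
h100_cost = 3_320              # Raymond James estimate
h100_price = (25_000, 30_000)
b200_cost = 6_000              # "more than $6000" per Raymond James
b200_price = (30_000, 40_000)  # Huang's stated range

premium_implied = (h100_price[0] * 1.5, h100_price[1] * 1.6)
print(f"50-60% premium over H100 pricing: ${premium_implied[0]:,.0f} - ${premium_implied[1]:,.0f}")
# -> $37,500 - $48,000, overlapping only the upper end of Huang's $30,000 - $40,000 range

for name, cost, price in (("H100", h100_cost, h100_price), ("B200", b200_cost, b200_price)):
    margins = [1 - cost / p for p in price]
    print(f"{name} implied gross margin: {margins[0]:.0%} - {margins[1]:.0%}")
# H100: ~87-89%; B200 at Huang's quoted range: ~80-85%
```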

Chinese Research Institute Utilizing "Banned" NVIDIA H100 AI GPUs

NVIDIA's freshly unveiled "Blackwell" B200 and GB200 AI GPUs will be getting plenty of coverage this year, but many organizations will be sticking with current or prior generation hardware. Team Green is in the process of shipping out cut-down "Hopper" designs to customers in China, but the region's appetite for powerful AI-crunching hardware is growing. Last year's China-specific H800 design and the older "Ampere" A800 chip were deemed too potent—new regulations prevented further sales. Recently, AMD's Instinct MI309 AI accelerator was considered "too powerful to gain unconditional approval from the US Department of Commerce." Domestically developed solutions are catching up with Western designs, but some institutions are not prepared to queue up for emerging technologies.

NVIDIA's new H20 AI GPU as well as Ada Lovelace-based L20 PCIe and L2 PCIe models are weakened enough to get a thumbs up from trade regulators, but likely not compelling enough for discerning clients. The Telegraph believes that NVIDIA's uncompromised H100 AI GPU is currently in use at several Chinese establishments—the report cites information presented within four academic papers published on ArXiv, an open access science website. The Telegraph's news piece highlights one of the studies—it was: "co-authored by a researcher at 4paradigm, an AI company that was last year placed on an export control list by the US Commerce Department for attempting to acquire US technology to support China's military." Additionally, the Chinese Academy of Sciences appears to have conducted several AI-accelerated experiments, involving the solving of complex mathematical and logical problems. The article suggests that this research organization has acquired a very small batch of NVIDIA H100 GPUs (up to eight units). A "thriving black market" for high-end NVIDIA processors has emerged in the region—last Autumn, the Center for a New American Security (CNAS) published an in-depth article about ongoing smuggling activities.

NVIDIA B100 "Blackwell" AI GPU Technical Details Leak Out

Jensen Huang's opening GTC 2024 keynote is scheduled to happen tomorrow afternoon (13:00 Pacific time)—many industry experts believe that the NVIDIA boss will take the stage and formally introduce his company's B100 "Blackwell" GPU architecture. An enlightened few have been treated to preview (AI and HPC) units—including Dell's COO, Jeff Clarke—but pre-introduction leaks have been scarce. Team Green is likely enforcing strict conditions upon a fortunate selection of trusted evaluators, within a pool of ecosystem partners and customers.

Today, a brave soul has broken that silence—tech tipster AGF/XpeaGPU, who fears repercussions from the leather-jacketed one, revealed a handful of technical details a day prior to Team Green's highly anticipated unveiling: "I don't want to spoil NVIDIA B100 launch tomorrow, but this thing is a monster. 2 dies on (TSMC) CoWoS-L, 8x8-Hi HBM3E stacks for 192 GB of memory." They also crystal balled an inevitable follow-up card: "one year later, B200 goes with 12-Hi stacks and will offer a beefy 288 GB. And the performance! It's... oh no Jensen is there... me run away!" Reuters has also joined in on the fun, with some predictions and insider information: "NVIDIA is unlikely to give specific pricing, but the B100 is likely to cost more than its predecessor, which sells for upwards of $20,000." Enterprise products are expected to arrive first—possibly later this year—followed by gaming variants, maybe months later.
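
The leaked capacity figures are internally consistent, as a quick Python check shows; the 24 Gbit (3 GB) per-die density is our assumption rather than part of the leak:

```python
# Sanity check of the leaked HBM3E capacities. The 24 Gbit (3 GB) per-die
# density is an assumption; stack counts and heights come from the leak.
stacks = 8
die_gb = 3  # 24 Gbit HBM3E DRAM die (assumed)

print("B100:", stacks * 8 * die_gb, "GB")   # 8-Hi stacks -> 192 GB
print("B200:", stacks * 12 * die_gb, "GB")  # 12-Hi stacks -> 288 GB
```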

Microsoft's Latest Agility SDK Released with Cutting-edge Work Graphs API

Microsoft's DirectX department is scheduled to show off several innovations at this month's Game Developers Conference (GDC), although a late February preview has already spilled the beans on its DirectSR Super Resolution API. Today, retail support for Shader Model 6.8 and Work Graphs has been introduced with an updated version of the company's Agility Software Development Kit. Program manager Joshua Tucker stated that these technologies will be showcased on-stage at GDC 2024—Shader Model 6.8 arrives with a "host of new features for shader developers, including Start Vertex/Instance Location, Wave Size Range, and Expanded Comparison Sampling." A linked supplementary article—D3D12 Work Graphs—provides an in-depth look into the cutting-edge API's underpinnings, best consumed if you have an hour or two to spare.

Tucker summarized the Work Graphs API: "(it) utilizes the full potential of your GPU. It's not just an upgrade to the existing models, but a whole new paradigm that enables more efficient, flexible, and creative game development. With Work Graphs, you can generate and schedule GPU work on the fly, without relying on the host. This means you can achieve higher performance, lower latency, and greater scalability for your games with tasks such as culling, binning, chaining of compute work, and much more." AMD and NVIDIA are offering driver support on day one. Team Red has discussed the launch of "Microsoft DirectX 12 Work Graphs 1.0 API" in a GPUOpen blog—they confirm that "a deep dive" into the API will happen during their Advanced Graphics Summit presentation. NVIDIA's Wessam Bahnassi has also discussed the significance of Work Graphs—check out his "Advancing GPU-driven rendering" article. Graham Wihlidal—of Epic Games—is excited about the latest development: "we have been advocating for something like this for a number of years, and it is very exciting to finally see the release of Work Graphs."

NVIDIA RTX 50-series "GB20X" GPU Memory Interface Details Leak Out

Earlier in the week it was revealed that NVIDIA had distributed next-gen AI GPUs to its most important ecosystem partners and customers—Dell's COO expressed enthusiasm with his discussion of "Blackwell" B100 and B200 evaluation samples. Team Green's next-gen family of gaming GPUs has received less media attention in early 2024—a mid-February TPU report pointed to a rumored PCIe 6.0 CEM specification for upcoming RTX 50-series cards, but leaks have become uncommon since late last year. Top technology tipster, kopite7kimi, has broken the relative silence on Blackwell's gaming configurations—an early hours tweet posits a slightly underwhelming scenario: "although I still have fantasies about 512 bit, the memory interface configuration of GB20x is not much different from that of AD10x."

Past disclosures have hinted at next-gen NVIDIA gaming GPUs sporting memory interface configurations comparable to the current crop of "Ada Lovelace" models. The latest batch of insider information suggests that Team Green's next flagship GeForce RTX GPU—GB202—will stick with a 384-bit memory bus. The beefiest current-gen GPU, AD102—as featured in GeForce RTX 4090 graphics cards—is specced with a 384-bit interface. A significant upgrade for GeForce RTX 50xx cards could arrive with a step-up to next-gen GDDR7 memory—kopite7kimi reckons that top GPU designers will stick with 16 Gbit memory chip densities (2 GB). JEDEC officially announced its "GDDR7 Graphics Memory Standard" a couple of days ago. VideoCardz has assembled the latest batch of insider info into a cross-generation comparison table.
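
A minimal sketch of what those rumored figures imply for flagship capacity, assuming the standard 32-bit interface per GDDR device and a conventional (non-clamshell) board layout:

```python
# What a 384-bit bus with 16 Gbit GDDR7 implies for flagship VRAM capacity.
# The 32-bit per-chip interface is standard for GDDR6/GDDR7; clamshell
# configurations (two chips per channel) would double the total.
bus_width = 384         # bits, rumored for GB202 (same as AD102)
chip_interface = 32     # bits per GDDR device
chip_density_gb = 2     # 16 Gbit chips, per kopite7kimi

chips = bus_width // chip_interface
print(f"{chips} chips x {chip_density_gb} GB = {chips * chip_density_gb} GB")
# -> 12 chips, 24 GB: the same capacity as the RTX 4090, with the uplift
#    coming from GDDR7 bandwidth rather than a wider bus or more memory.
```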

NVIDIA Introduces Generative AI Professional Certification

NVIDIA is offering a new professional certification in generative AI to enable developers to establish technical credibility in this important domain. Generative AI is revolutionizing industries worldwide, yet there's a critical skills gap and need to uplevel employees to more fully harness the technology. Available for the first time from NVIDIA, this new professional certification enables developers, career professionals, and others to validate and showcase their generative AI skills and expertise. Our new professional certification program introduces two associate-level generative AI certifications, focusing on proficiency in large language models and multimodal workflow skills.

"Generative AI has moved to center stage as governments, industries and organizations everywhere look to harness its transformative capabilities," NVIDIA founder and CEO Jensen Huang recently said. The certification will become available starting at GTC, where in-person attendees can also access recommended training to prepare for a certification exam. "Organizations in every industry need to increase their expertise in this transformative technology," said Greg Estes, VP of developer programs at NVIDIA. "Our goals are to assist in upskilling workforces, sharpen the skills of qualified professionals, and enable individuals to demonstrate their proficiency in order to gain a competitive advantage in the job market."

NVIDIA Data Center GPU Business Predicted to Generate $87 Billion in 2024

Omdia, an independent analyst and consultancy firm, has bestowed the title of "Kingmaker" on NVIDIA—thanks to impressive 2023 results in the data center server market. The research firm predicts very buoyant numbers for the financial year of 2024—their February Cloud and Datacenter Market snapshot/report guesstimates that Team Green's data center GPU business group has the potential to rake in $87 billion of revenue. Omdia's forecast is based on last year's numbers—Jensen & Co. managed to pull in $34 billion, courtesy of an unmatched/dominant position in the AI GPU industry sector. Analysts have estimated a 150% rise in revenues in 2024—the majority of popular server manufacturers are reliant on NVIDIA's supply of chips. Super Micro Computer Inc. CEO—Charles Liang—disclosed that his business is experiencing strong demand for cutting-edge server equipment, but complications have slowed down production: "once we have more supply from the chip companies, from NVIDIA, we can ship more to customers."

Demand for AI inference in 2023 accounted for 40% of NVIDIA data center GPU revenue—according to Omdia's expert analysis—and they predict further growth this year. Team Green's comfortable AI-centric business model could expand to a greater extent—2023 market trends indicated that enterprise customers had spent less on acquiring/upgrading traditional server equipment. Instead, they prioritized the channeling of significant funds into "AI heavyweight hardware." Omdia's report discussed these shifted priorities: "This reaffirms our thesis that end users are prioritizing investment in highly configured server clusters for AI to the detriment of other projects, including delaying the refresh of older server fleets." Late February reports suggest that NVIDIA H100 GPU supply issues are largely resolved—with much improved production timeframes. Insiders at unnamed AI-oriented organizations have admitted that leadership has resorted to selling off excess stock. The Omdia forecast proposes—somewhat surprisingly—that H100 GPUs will continue to be "supply-constrained" throughout 2024.
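
The headline figures hang together, as a trivial check shows:

```python
# Cross-checking Omdia's forecast against the stated 2023 revenue and growth rate.
revenue_2023_bn = 34      # data center GPU revenue, 2023
growth = 1.50             # "150% rise"
print(revenue_2023_bn * (1 + growth), "billion USD")  # 85.0 -> close to the $87 bn forecast
```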

NVIDIA Reportedly Sampling SK Hynix 12-layer HBM3E

South Korean tech insiders believe that SK Hynix has sent "12-layer DRAM stacked HBM3E (5th generation HBM)" prototype samples to NVIDIA—according to a ZDNET.co.kr article, initial examples were shipped out last month. Reports from mid-2023 suggested that Team Green had sampled 8-layer HBM3E units around summer time—with SK Hynix receiving approval notices soon after. Another South Korean media outlet, DealSite, reckons that NVIDIA's memory qualification process has exposed HBM yield problems across a number of manufacturers. SK Hynix, Samsung and Micron are competing fiercely on the HBM3E front—with hopes of getting their respective products attached to NVIDIA's H200 AI GPU. DigiTimes Asia proposed that SK Hynix is ready to "commence mass production of fifth-generation HBM3E" at some point this month.

SK Hynix is believed to be leading the pack—insiders believe that yield rates are good enough to pass early NVIDIA certification, and advanced 12-layer samples are expected to be approved in the near future. ZDNET reckons that SK Hynix's forward momentum has placed it in an advantageous position: "(They) supplied 8-layer HBM3E samples in the second half of last year and passed recent testing. Although the official schedule has not been revealed, mass production is expected to begin as early as this month. Furthermore, SK Hynix supplied 12-layer HBM3E samples to NVIDIA last month. This sample is an extremely early version and is mainly used to establish standards and characteristics of new products. SK Hynix calls it UTV (Universal Test Vehicle)... Since Hynix has already completed the performance verification of the 8-layer HBM3E, it is expected that the 12-layer HBM3E test will not take much time." SK Hynix's Vice President recently revealed that his company's 2024 HBM production volumes were already sold out, and leadership is already preparing innovations for 2025 and beyond.

Microsoft DirectX Team to Introduce "DirectSR" at GDC 2024

According to a Game Developers Conference (GDC) 2024 schedule page, Microsoft is planning to present next-gen technologies with their upcoming "DirectX State of the Union Ft. Work Graphs and Introducing DirectSR" presentation. Shawn Hargreaves (Direct3D Development Manager, Microsoft) and Austin Kinross (PIX Developer Lead, Microsoft) are scheduled to discuss matters with representatives from NVIDIA and AMD. Wessam Bahnassi, a "20-year veteran in 3D engine design and optimization," is Team Green's Principal Engineer of Developer Technology. Rob Martin, an AMD Fellow Software Engineer who leads development on GPU Work Graphs implementations, will be representing all things Team Red. According to GDC, the intended audience will be: "graphics developers or technical directors from game studios or engine companies."

Earlier this month, an "Automatic super resolution" feature was discovered in a Windows 11 Insider Preview build (24H2)—its caption stated: "use AI to make supported games play more smoothly with enhanced details," although additional interface options extended its use to desktop applications as well. Initial analysis and user impressions indicated that Microsoft engineers had created a proprietary model, separate from familiar technologies: NVIDIA DLSS, AMD FSR and Intel XeSS. It is interesting to note that Team Blue is not participating in the upcoming March 21 "DirectX State of the Union" panel discussion (a sponsored session). GDC's event description states (in full): "The DirectX team will showcase the latest updates, demos, and best practices for game development with key partners from AMD and NVIDIA. Work graphs are the newest way to take full advantage of GPU hardware and parallelize workloads. Microsoft will provide a preview into DirectSR, making it easier than ever for game devs to scale super resolution support across Windows devices. Finally, dive into the latest tooling updates for PIX."

NVIDIA Readying H20 AI GPU for Chinese Market

NVIDIA's H800 AI GPU was rolled out last year to appease the Sanction Gods—but later on, the US Government deemed the cut-down "Hopper" part to be far too potent for Team Green's Chinese enterprise customers. Last October, newly amended export conditions banned sales of the H800, as well as the slightly older (plus similarly gimped) A800 "Ampere" GPU in the region. NVIDIA's engineering team returned to the drawing board, and developed a new range of compliantly weakened products. An exclusive Reuters report suggests that Team Green is taking pre-orders for a refreshed "Hopper" GPU—the latest China-specific flagship is called "HGX H20." NVIDIA's web presence has not yet been updated with this new model, nor with the Ada Lovelace-based L20 PCIe and L2 PCIe GPUs. Huawei's competing Ascend 910B is said to be slightly more performant in "some areas"—when compared to the H20—according to insiders within the distribution network.

The leakers reckon that NVIDIA's mainland distributors will be selling H20 models within a price range of $12,000 - $15,000—Huawei's locally developed Ascend 910B is priced at 120,000 RMB (~$16,900). One Reuters source stated that: "some distributors have started advertising the (NVIDIA H20) chips with a significant markup to the lower end of that range at about 110,000 yuan ($15,320)." The report suggests that NVIDIA declined to comment on this situation. Another insider claimed that: "distributors are offering H20 servers, which are pre-configured with eight of the AI chips, for 1.4 million yuan. By comparison, servers that used eight of the H800 chips were sold at around 2 million yuan when they were launched a year ago." Small batches of H20 products are expected to reach important clients within the first quarter of 2024, followed by a wider release in Q2. It is believed that mass production will begin around springtime.
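
Cross-checking the quoted yuan figures against the dollar amounts, using the exchange rate implied by the article's own conversion, gives a feel for the server-level pricing:

```python
# Converting the quoted yuan figures using the rate implied by the article's
# own "110,000 yuan ($15,320)" conversion (~7.18 CNY/USD); the article's
# Ascend 910B conversion appears to use a slightly different rate.
cny_per_usd = 110_000 / 15_320

for label, cny in (("Ascend 910B", 120_000),
                   ("H20 8-GPU server", 1_400_000),
                   ("H800 8-GPU server (at launch)", 2_000_000)):
    print(f"{label}: ~${cny / cny_per_usd:,.0f}")
# Ascend 910B: ~$16,700; H20 server: ~$195,000; H800 server: ~$278,500
```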

GeForce RTX 4080 SUPER Custom Model €1109 MSRPs Appear on German Webshop

European buyers are facing a baseline MSRP of €1109 for the upcoming GeForce RTX 4080 SUPER graphics card family, thanks to extra sales taxes affecting purchases in the region's various countries. North American customers are set to "enjoy" a more reasonable entry point of $999 come January 31, including various custom options from NVIDIA's board partners—ZOTAC leads the charge with non-overclocked offerings matching Team Green's Founders Edition MSRP. A small selection of brave retailers have already delivered GeForce RTX 4080 SUPER graphics cards to customers, while others have simply gone live with their asking prices.

Germany's Notebooksbilliger (translation: cheaper laptops) online store has produced product pages for all sorts of custom GeForce RTX 4080 SUPER cards—prices start off at NVIDIA's €1109 baseline, and ramp up to a maximum of €1379 for the fanciest option (ASUS ROG STRIX RTX 4080 SUPER OC). A VideoCardz report focuses mostly on the cheapest products listed by Notebooksbilliger.de. Five non-overclocked custom designs sit at the bottom of the webshop's RTX 4080 SUPER pricing pile: ASUS TUF GAMING, GIGABYTE SUPER WINDFORCE, SUPER WINDFORCE V2, Inno3D X3 and ZOTAC's Trinity Black Edition. At the time of writing, Notebooksbilliger's customers cannot pre-order any of the listed GeForce RTX 4080 SUPER cards—the full checkout process could be unlocked early next week, a few days ahead of the official January 31 launch day.

NVIDIA Releases RTX Video HDR Tool - AI-Upscale Standard Res Video to HDR Quality

RTX Video HDR—first announced at CES—is now available for download through the January Studio Driver. It uses AI to transform standard dynamic range video playing in internet browsers into stunning high dynamic range (HDR) on HDR10 displays. PC game modders now have a powerful new set of tools to use with the release of the NVIDIA RTX Remix open beta. It features full ray tracing, NVIDIA DLSS, NVIDIA Reflex, modern physically based rendering assets and generative AI texture tools so modders can remaster games more efficiently than ever. Pick up the new GeForce RTX 4070 Ti SUPER, available from custom board partners in stock-clocked and factory-overclocked configurations, to enhance content creation, gaming and AI tasks.

Part of the RTX 40 SUPER Series announced at CES, it's equipped with more CUDA cores than the RTX 4070, a frame buffer increased to 16 GB, and a 256-bit bus—perfect for video editing and rendering large 3D scenes. It runs up to 1.6x faster than the RTX 3070 Ti and 2.5x faster with DLSS 3 in the most graphics-intensive games. And in this week's In the NVIDIA Studio feature, technical artist Vishal Ranga shares his vivid 3D scene Disowned—powered by NVIDIA RTX and Unreal Engine with DLSS.