News Posts matching #Ampere

NVIDIA Unveils New Jetson Orin Nano Super Developer Kit

NVIDIA is taking the wraps off a new compact generative AI supercomputer, offering increased performance at a lower price with a software upgrade. The new NVIDIA Jetson Orin Nano Super Developer Kit, which fits in the palm of a hand, gives everyone from commercial AI developers to hobbyists and students gains in generative AI capabilities and performance. And the price is now $249, down from $499.

Available today, it delivers as much as a 1.7x leap in generative AI inference performance, a 70% increase in performance to 67 INT8 TOPS, and a 50% increase in memory bandwidth to 102 GB/s compared with its predecessor. Whether creating LLM chatbots based on retrieval-augmented generation, building a visual AI agent, or deploying AI-based robots, the Jetson Orin Nano Super is an ideal solution.
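
Those uplift figures are internally consistent; here is a quick back-of-the-envelope check in Python (the predecessor figures below are derived from the stated percentages, not quoted from NVIDIA's spec sheets):

```python
# Back-derive the predecessor's figures implied by the stated uplifts;
# a sanity check, not official specifications.
super_tops = 67       # INT8 TOPS, Jetson Orin Nano Super
super_bw_gbs = 102    # memory bandwidth, GB/s

prior_tops = super_tops / 1.70        # "70% increase" -> ~39 TOPS
prior_bw_gbs = super_bw_gbs / 1.50    # "50% increase" -> 68 GB/s
print(f"Implied predecessor: ~{prior_tops:.0f} INT8 TOPS, {prior_bw_gbs:.0f} GB/s")
```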

Arctic Intros Freezer 4U-M Rev. 2 Server CPU Cooler With Support for Ampere Altra Series

The second revision is even more versatile: developed on the basis of its proven predecessor, the new version of the Freezer 4U-M offers optimised cooling performance, not only for powerful server CPUs from AMD and Intel, but also for the ARM processors of the Ampere Altra series.

Multi-compatible with additional flexibility
The 2nd revision of the Freezer 4U-M also impresses with its case and socket compatibility. In addition, it has been specially adapted to support Ampere Altra processors with 32 to 128 cores.

Nintendo Switch Successor: Backward Compatibility Confirmed for 2025 Launch

Nintendo has officially announced that its next-generation Switch console will feature backward compatibility, allowing players to use their existing game libraries on the new system. However, those eagerly awaiting the console's release may need to exercise patience as launch expectations have shifted to early 2025. On the official X account, Nintendo has announced: "At today's Corporate Management Policy Briefing, we announced that Nintendo Switch software will also be playable on the successor to Nintendo Switch. Nintendo Switch Online will be available on the successor to Nintendo Switch as well. Further information about the successor to Nintendo Switch, including its compatibility with Nintendo Switch, will be announced at a later date."

While the original Switch evolved from a 20 nm Tegra X1 to a more power-efficient 16 nm Tegra X1+ SoC (both featuring four Cortex-A57 and four Cortex-A53 cores with GM20B Maxwell GPUs), the Switch 2 is rumored to utilize a customized variant of NVIDIA's Jetson Orin SoC, now codenamed T239. The new chip represents a significant upgrade with its 12 Cortex-A78AE cores, LPDDR5 memory, and Ampere GPU architecture with 1,536 CUDA cores, promising enhanced battery efficiency and DLSS capabilities for the handheld gaming market. With the holiday 2024 release window now seemingly off the table, the new console is anticipated to debut in the first half of 2025, marking nearly eight years since the original Switch's launch.

GIGABYTE Announces Availability for Its New Servers Using AmpereOne Family of Processors

Giga Computing, a subsidiary of GIGABYTE and an industry leader in servers for x86 and ARM platforms as well as advanced cooling technologies, today announced its initial wave of GIGABYTE servers that support the full stack of the AmpereOne family of processors. AmpereOne processors were announced last year, and GIGABYTE servers supporting the platform were available to select customers. Now, GIGABYTE servers are generally available, with single- and dual-socket models already in production and more coming in late Q4. GIGABYTE servers for Ampere Altra and AmpereOne processors will be showcased at the GIGABYTE booth and Ampere pavilion at Yotta 2024 in Las Vegas on Oct. 7-9.

The AmpereOne family of processors, designed for cloud-native computing, features up to 192 custom-designed Ampere cores, DDR5 memory, and 128 lanes of PCIe Gen 5 per socket. Overall, this line of processors targets cloud instances with high VM density, all while excelling at performance per watt. Delivering more cores, more IO, more memory, more performance, and more cloud features, this full stack of CPUs has additional applications in AI inference, data analytics, and more.

Nintendo Switch 2 Allegedly Not Powered by AMD APU Due to Poor Battery Life

Nintendo's next-generation Switch 2 handheld gaming console is nearing its release. As leaks intensify about its future specifications, we get information about its planning stages. According to a Moore's Law is Dead YouTube video, Nintendo reportedly didn't choose an AMD APU to power the Switch 2 due to poor battery life. In a bid to secure the best chip at a mere five watts of power, the Japanese company had two choices: an NVIDIA Tegra or an AMD APU. In preliminary testing and evaluation, the AMD APU reportedly wasn't power-efficient at a 5 W TDP, while the NVIDIA Tegra chip maintained sufficient battery life and performance at the target specifications.

Allegedly, the AMD APU was well suited to a 15 W design, but Nintendo didn't want to fit a bigger battery, keeping the device lighter and cheaper. The final design will likely carry a battery with a 20 Wh capacity, which will be the main power source behind the NVIDIA Tegra T239 SoC. As a reminder, the Tegra T239 SoC features an eight-core Arm A78C cluster with a modified NVIDIA Ampere GPU and DLSS, featuring some of the latest encoding/decoding elements from Ada Lovelace, like AV1. There are likely 1,536 CUDA cores paired with 128-bit LPDDR5 memory running at 102 GB/s bandwidth. For final specifications, we have to wait for the official launch, but with rumors starting to intensify, we can expect to see it relatively soon.
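
The reported power figures make the trade-off easy to see; a rough sketch using only the numbers above (SoC draw alone, ignoring the display, memory, and radios):

```python
# Rough battery-life arithmetic behind the 5 W target. Real runtime
# would be lower once the display and other components are included.
battery_wh = 20  # rumored pack capacity

for soc_watts in (5, 15):  # Tegra target vs. the alleged AMD APU sweet spot
    hours = battery_wh / soc_watts
    print(f"{soc_watts:>2} W SoC -> {hours:.1f} h (SoC draw only)")
```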

NVIDIA Hit with DOJ Antitrust Probe over AI GPUs, Unfair Sales Tactics and Pricing Alleged

NVIDIA has reportedly been hit with a US Department of Justice (DOJ) antitrust probe over the tactics the company allegedly employs to sell or lease its AI GPUs and data-center networking equipment, "The Information" reported. Shares of NVIDIA stock fell 3.6% in pre-market trading on Friday (08/02). The main complainants behind the probe appear to be a special interest group among the customers of AI GPUs, and not NVIDIA's competitors in the AI GPU industry per se. US Senator Elizabeth Warren and US progressives have been most vocal in calling upon the DOJ to investigate antitrust allegations against NVIDIA.

Meanwhile, US officials are reportedly reaching out to NVIDIA's competitors, including AMD and Intel, to gather information about the complaints. NVIDIA holds 80% of the AI GPU market, while AMD, and to a much lesser extent, Intel, have received spillover demand for AI GPUs. "The Information" report says that the complaint alleges NVIDIA pressured cloud customers to buy "multiple products". We don't know what this means. One theory holds that NVIDIA is getting them to commit to buying multiple generations of products (e.g., Ampere, Hopper, and on to Blackwell); another holds that it's getting them to buy multiple kinds of products, including not just the AI GPUs, but also NVIDIA's first-party server systems and networking equipment. Yet another theory holds that it is bundling first-party software and services with the hardware, far beyond the basic software needed to get the hardware to work.

Ampere Announces 512-Core AmpereOne Aurora CPU for AI Computing

Ampere has announced a significant update to its product roadmap, highlighting the upcoming 512-core AmpereOne Aurora processor. This new chip is specifically designed to address the growing demands of cloud-native AI computing.

The newly announced 512-core AmpereOne Aurora processor integrates AI acceleration and on-chip High Bandwidth Memory (HBM), promising three times the performance per rack compared to current AmpereOne processors. Aurora is designed to handle both AI training and inference workloads, indicating Ampere's commitment to becoming a major player in the AI computing space.

NVIDIA Plans RTX 3050 A with Ada Lovelace AD106 Silicon

NVIDIA may be working on a new RTX 3050 A laptop GPU using an AD106 (Ada Lovelace) die, moving away from the Ampere chips used in other RTX 30-series GPUs. While not officially announced, the GPU is included in NVIDIA's latest driver release and the PCI ID database as GeForce RTX 3050 A Laptop GPU. The AD106 die choice is notable, as it has more transistors and CUDA cores than the GA107 in current RTX 3050s and the AD107 in RTX 4050 laptops. The AD106, used in RTX 4060 Ti desktop and RTX 4070 laptop GPUs, boasts 22.9 billion transistors and 4,608 CUDA cores, compared to GA107's 8.7 billion transistors and 2,560 CUDA cores, and AD107's 18.9 billion transistors and 3,072 CUDA cores.

While this could potentially improve performance, it's likely that NVIDIA will use a cut-down version of the AD106 chip for the RTX 3050 A. The exact specifications and features, such as support for DLSS 3, remain unknown. The use of TSMC's 4N node in AD106, instead of Samsung's 8N node used in Ampere, could potentially improve power efficiency and battery life. How the RTX 3050 A compares to existing RTX 3050 and RTX 4050 laptops remains to be seen; however, it will likely perform similarly to existing Ampere-based parts, as NVIDIA tends to use similar names for comparable performance levels. It's unclear if NVIDIA will bring this GPU to market, but adding new SKUs late in a product's lifespan isn't unprecedented.

Ghost of Tsushima Lets You Use DLSS 2 and FSR 3 Frame Generation Together

The latest update of Ghost of Tsushima lets you use DLSS 2 super resolution and FSR 3 Frame Generation simultaneously, so you get the unique benefit of having NVIDIA DLSS 2 handle super resolution and image quality, while letting AMD FSR 3 nearly double the frame rate of the DLSS 2 output. All this without the need for any mods; it's part of the game's original code. It's crazy when you think about it—you now have two performance enhancements running in tandem, with gamers reporting over 170 FPS at 4K with reasonably good image quality. This could particularly benefit those on older GeForce RTX 30-series "Ampere" and RTX 20-series "Turing" graphics cards, as those lack support for DLSS 3 Frame Generation.
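
To see how the two techniques stack, here is some illustrative napkin math; the base frame rate and uplift factors below are assumptions for demonstration, not measurements from the game:

```python
# Stacking a super-resolution uplift with frame generation: FSR 3 FG
# multiplies the frame rate that DLSS 2 already delivered.
native_4k_fps = 50      # assumed native 4K frame rate
dlss2_uplift = 1.8      # assumed DLSS 2 upscaling uplift
fsr3_fg_factor = 1.9    # FG "nearly doubles" the presented rate

upscaled_fps = native_4k_fps * dlss2_uplift     # ~90 FPS after DLSS 2
presented_fps = upscaled_fps * fsr3_fg_factor   # ~171 FPS after FSR 3 FG
print(f"{presented_fps:.0f} FPS presented")     # in line with the ~170 FPS reports
```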

Ampere Scales AmpereOne Product Family to 256 Cores

Ampere Computing today released its annual update on upcoming products and milestones, highlighting the company's continued innovation and invention around sustainable, power-efficient computing for the cloud and AI. The company also announced that it is working with Qualcomm Technologies, Inc. to develop a joint solution for AI inferencing using Qualcomm Technologies' high-performance, low-power Qualcomm Cloud AI 100 inference solutions and Ampere CPUs.

Semiconductor industry veteran and Ampere CEO Renee James said the increasing power requirements and energy challenge of AI is bringing Ampere's silicon design approach around performance and efficiency into focus more than ever. "We started down this path six years ago because it is clear it is the right path," James said. "Low power used to be synonymous with low performance. Ampere has proven that isn't true. We have pioneered the efficiency frontier of computing and delivered performance beyond legacy CPUs in an efficient computing envelope."

AIO Workstation Combines 128-Core Arm Processor and Four NVIDIA GPUs Totaling 28,416 CUDA Cores

All-in-one computers are traditionally seen as lower-powered alternatives to desktop workstations. However, a new offering from Alafia AI, a startup focused on medical imaging appliances, aims to shatter that perception. The company's upcoming Alafia Aivas SuperWorkstation packs serious hardware muscle, demonstrating that all-in-one systems can match the performance of their more modular counterparts. At the heart of the Aivas SuperWorkstation lies a 128-core Ampere Altra processor, running at a 3.0 GHz clock speed. This CPU is complemented by not one but three NVIDIA L4 GPUs for compute, and a single NVIDIA RTX 4000 Ada GPU for video output, delivering a combined 28,416 CUDA cores for accelerated parallel computing tasks. The system doesn't skimp on other components, either. It features a 4K touch display with up to 360 nits of brightness, an extensive 2 TB of DDR4 RAM, and storage options up to an 8 TB solid-state drive. This combination of cutting-edge CPU, GPU, memory, and storage is squarely aimed at the demands of medical imaging and AI development workloads.
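
The combined core count checks out against the publicly listed per-card figures for the two GPU models (the per-card counts below come from public spec sheets, not from this article):

```python
# How the quoted 28,416 CUDA cores break down across the four GPUs.
l4_cuda_cores = 7424             # NVIDIA L4, per card (public specs)
rtx_4000_ada_cuda_cores = 6144   # NVIDIA RTX 4000 Ada (public specs)

total = 3 * l4_cuda_cores + rtx_4000_ada_cuda_cores
print(total)  # 28416
```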

The all-in-one form factor packs this incredible hardware into a sleek, purposefully designed clinical research appliance. While initially targeting software developers, Alafia AI hopes that institutions that can optimize their applications for the Arm architecture can eventually deploy the Aivas SuperWorkstation for production medical imaging workloads. The company is aiming for application integration in Q3 2024 and full ecosystem device integration by Q4 2024. With this powerful new offering, Alafia AI is challenging long-held assumptions about the performance limitations of all-in-one systems. The Aivas SuperWorkstation demonstrates that the right hardware choices can transform these compact form factors into true powerhouse workstations. With three NVIDIA L4 compute GPUs alongside the RTX 4000 Ada graphics card, the AIO is more powerful than some high-end desktop workstations.

ASUS Lists Low-Profile GeForce RTX 3050 BRK 6 GB Graphics Cards

NVIDIA's recent launch of a "new" entry-level gaming GPU has not set pulses racing—its return visit to Ampere City arrived in the form of custom GeForce RTX 3050 6 GB graphics cards. The absence of a reference model sometimes signals low expectations, but Team Green's partners have pushed ahead with a surprisingly diverse portfolio of options. Early last month, Galax introduced a low-profile white design—the custom GeForce RTX 3050 6 GB card's slot-powered operation presents an ideal solution for super-compact, low-power builds. ASUS is readying its own dual-fan low-profile models, as evidenced by official product pages. The listings do not reveal release dates or recommended price points for the reference-clocked GeForce RTX 3050 LP BRK 6 GB card and its OC sibling. ASUS believes that both models offer "big productivity in a small package."

Low-profile card enthusiasts have warmly welcomed new-ish GeForce RTX 4060 GPU-based solutions—courtesy of ASUS and GIGABYTE—but reported $300+ MSRPs have likely put off budget-conscious buyers. A sub-$200 price point is a more palatable prospect, especially for system builders who are not at all bothered about cutting-edge gaming performance. A DVI-D connector ensures legacy compatibility, alongside modern port standards: HDMI 2.1 and DP 1.4a. As mentioned before, ASUS has not publicly disclosed its pricing policy for the GeForce RTX 3050 LP BRK 6 GB card (and its OC variant)—the manufacturer's Dual and Dual OC models retail in a range of $170 - $180. Graphics card watchdogs reckon that the LP BRK designs will warrant a small premium over normal-sized products.

NVIDIA RTX 20-series and GTX 16-series "Turing" GPUs Get Resizable BAR Support Through NVStrapsReBAR Mod

February saw community mods bring resizable BAR support to several older platforms; and now we come across a mod that brings it to some older GPUs. The NVStrapsReBAR mod by terminatorul, which is forked from the ReBarUEFI mod by xCurio, brings resizable BAR support to NVIDIA GeForce RTX 20-series and GTX 16-series GPUs based on the "Turing" graphics architecture. This mod is intended for power users, and can potentially brick your motherboard. NVIDIA officially implemented resizable BAR support beginning with its RTX 30-series "Ampere" GPUs, in response to AMD's Radeon RX 6000 RDNA 2 GPUs implementing the tech under the marketing name Smart Access Memory. While AMD would go on to retroactively enable the tech for even the older RX 5000 series RDNA GPUs, NVIDIA didn't do so for "Turing."

NVStrapsReBAR is a motherboard UEFI firmware mod. It modifies the way your system firmware negotiates BAR size with the GPU on boot. There are only two ways to enable resizable BAR on an unsupported platform—by modding the motherboard firmware, or the video BIOS. Signature checks by security processors in NVIDIA GPUs make the video BIOS modding route impossible for most users; thankfully, motherboard firmware modding isn't as difficult. The author provides extensive documentation on how to use this mod. The author has tested the mod to work with "Turing" GPUs; however, it doesn't work with older NVIDIA GPUs, including "Pascal." Resizable BAR enables the CPU (software) to see video memory as a single contiguously addressable block, rather than through 256 MB apertures.
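
On Linux, you can verify whether a larger BAR actually took effect by reading the GPU's PCI resource file from sysfs; a minimal sketch, assuming a hypothetical PCI address of 0000:01:00.0 (adjust to your system):

```python
# List the sizes of the GPU's PCI BAR regions. With resizable BAR
# active, the VRAM aperture (typically BAR1 on NVIDIA cards) should
# cover the full memory size rather than a 256 MiB window.
from pathlib import Path

gpu = Path("/sys/bus/pci/devices/0000:01:00.0")  # hypothetical address
for index, line in enumerate((gpu / "resource").read_text().splitlines()):
    start, end, _flags = (int(token, 16) for token in line.split())
    if end:  # unused regions read as all zeros
        print(f"region {index}: {(end - start + 1) / 2**20:.0f} MiB")
```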

KFA2 Intros GeForce RTX 3050 6GB EX Graphics Card

KFA2, the EU-focused brand of graphics cards by Galax, today released the GeForce RTX 3050 6 GB EX, a somewhat premium take on the recently released entry-level GPU by NVIDIA. The KFA2 EX features a spruced-up aluminium fin-stack heatsink that uses a flattened copper heatpipe to make broader contact with the GPU, and spread the heat better across the fin-stack. The 22.4 cm long card also has a couple of premium touches, such as a metal backplate, and RGB LED lighting. The lighting setup includes a physical switch on the tail end of the card, with which you can turn it off. Also featured is idle fan-stop. The card offers a tiny factory overclock of 1485 MHz boost, compared to 1475 MHz reference. It sticks with PCIe slot power; there are no additional power connectors.

NVIDIA launched the GeForce RTX 3050 6 GB as its new entry-level GPU. It is based on the older "Ampere" graphics architecture, and the 8 nm "GA107" silicon. It enables 18 out of the 20 streaming multiprocessors physically present, which work out to 2,304 CUDA cores, 72 Tensor cores, 18 RT cores, 72 TMUs, and 32 ROPs. The 6 GB of 14 Gbps GDDR6 memory is spread across a narrower 96-bit memory bus than the one found in the original RTX 3050 8 GB. KFA2 is pricing the RTX 3050 6 GB EX at €199 including taxes.
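
The per-unit counts follow directly from the SM count: per Ampere's published per-SM layout, each GA10x SM carries 128 CUDA cores, 4 Tensor cores, 1 RT core, and 4 TMUs. A quick check:

```python
# Ampere (GA10x) per-SM resources multiplied by the 18 active SMs.
active_sms = 18
print(active_sms * 128)  # 2,304 CUDA cores
print(active_sms * 4)    # 72 Tensor cores (and likewise 72 TMUs)
print(active_sms * 1)    # 18 RT cores
```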

Palit Introduces GeForce RTX 3050 6 GB KalmX and StormX Models

Palit Microsystems Ltd., a leading graphics card manufacturer, proudly announces the NVIDIA GeForce RTX 3050 6 GB KalmX and StormX Series graphics cards. The GeForce RTX 3050 6 GB GPU is built with the powerful graphics performance of the NVIDIA Ampere architecture. It offers dedicated 2nd gen RT Cores and 3rd gen Tensor Cores, new streaming multiprocessors, and high-speed G6 memory to tackle the latest games.

GeForce RTX 3050 6 GB KalmX: Passive Cooling. Silent Gaming
Introducing the Palit GeForce RTX 3050 KalmX, where silence meets performance in perfect harmony. The KalmX series, renowned for its ingenious fan-less design, redefines your gaming experience. With its passive cooling system, this graphics card operates silently, making it ideal for both gaming and multimedia applications. Available on shelves today—2nd February 2024.

NVIDIA GeForce RTX 3050 6GB Formally Launched

NVIDIA today formally launched the GeForce RTX 3050 6 GB as its new entry-level discrete GPU. The RTX 3050 6 GB is a significantly different product from the original RTX 3050 that the company launched as a mid-range product way back in January 2022. The RTX 3050 had originally launched on the 8 nm GA106 silicon, with 2,560 CUDA cores, 80 Tensor cores, 20 RT cores, 80 TMUs, and 32 ROPs; with 8 GB of 14 Gbps GDDR6 memory across a 128-bit memory bus. These specs also matched the maximum core-configuration of the smaller GA107 silicon, so the company relaunched the RTX 3050 based on GA107 toward the end of 2022, with no change in specs, but a slight improvement in energy efficiency from the switch to the smaller silicon. The new RTX 3050 6 GB is based on the same GA107 silicon, but with significant changes.

To begin with, the most obvious change is memory. The new SKU features 6 GB of 14 Gbps GDDR6 across a narrower 96-bit memory bus, for 168 GB/s of memory bandwidth. That's not all: the GPU is significantly cut down, with just 18 SM instead of the 20 found on the original RTX 3050. This works out to 2,304 CUDA cores, 72 Tensor cores, 18 RT cores, 72 TMUs, and an unchanged 32 ROPs. The GPU comes with lower clock speeds of 1470 MHz boost, compared to 1777 MHz on the original RTX 3050. The silver lining with this SKU is its total graphics power (TGP) of just 70 W, which means that cards can completely do away with power connectors, and rely entirely on PCIe slot power. NVIDIA hasn't listed its own MSRP for this SKU, but last we heard, it was supposed to go for $179, and square off against the likes of the Intel Arc A580.
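
The bandwidth figure is simple to verify from the bus width and data rate quoted above:

```python
# Memory bandwidth = bus width (bits) x data rate (Gbps per pin) / 8.
bus_width_bits = 96
data_rate_gbps = 14  # effective GDDR6 data rate per pin

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
print(bandwidth_gbs)  # 168.0 GB/s, as stated
```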

MSI GeForce RTX 3050 6 GB VENTUS 2X OC Card Listed by Austrian Shop

Specifications for NVIDIA's GeForce RTX 3050 6 GB GPU leaked midway through the month—fueling further speculation about cutdown Ampere cards hitting retail outlets within the first quarter of 2024. The super budget-friendly alternative to existing GeForce RTX 3050 8 GB graphics card models is tipped to be weakened in many areas (not just a reduction in memory capacity)—according to the last set of leaks: "performance could lag behind the (two years old) RTX 3050 8 GB SKU by up to 25%, making it weaker competition even for AMD's budget Radeon RX 6500 XT."

ComputerBase.de has uncovered an interesting E-Tec shop listing—now removed thanks to global news coverage—indicating that MSI is likely preparing a 6 GB variant of its RTX 3050 VENTUS 2X OC design. A screenshot of the Austrian e-tailer's listing has been preserved and circulated—the leaked pricing was €245.15, while the model's manufacturer code is V812-015R. A Google search of the latter generates a number of hits—we see information that aligns with TPU's database entry. Specification sheets probably originate from distributors, so they are subject to change closer to launch time. VideoCardz points out that a 130 W TDP has appeared online, although some older leaks indicate that the MSI part is targeting NVIDIA's alleged reference figure of 70 W.

Possible NVIDIA GeForce RTX 3050 6 GB Edition Specifications Appear

Alleged full specifications leaked for NVIDIA's upcoming GeForce RTX 3050 6 GB graphics card show extensive reductions beyond merely reducing memory size versus the 8 GB model. If accurate, performance could lag the existing RTX 3050 8 GB SKU by up to 25%, making it weaker competition even for AMD's budget RX 6500 XT. Previous rumors suggested only capacity and bandwidth differences between the 3050 variants, via a partially disabled memory bus that would reduce the memory to 6 GB on a 96-bit bus, from 8 GB on a 128-bit bus. But the leaked specs indicate CUDA core counts, clock speeds, and TDP all see cuts for the upcoming 6 GB version. With 18 SMs and 2,304 cores rather than 20 SMs and 2,560 cores, at lower base and boost frequencies, the impact looks more severe than expected. A 70 W TDP does allow passive cooling but hurts performance versus the 3050 8 GB's 130 W design.

Some napkin math suggests the 3050 6 GB could deliver only 75% of its elder sibling's frame rates, putting it more in line with the entry-level 6500 XT. While having 50% more VRAM helps, dramatic core and clock downgrades counteract that memory advantage. According to rumors, the RTX 3050 6 GB is set to launch in February, bringing lower-end Ampere to even more budget-focused builders. But with specifications seemingly hobbled beyond just capacity, its real-world gaming value remains to be determined. NVIDIA likely intends the RTX 3050 6 GB primarily for less demanding esports titles. Given the scale of cutbacks and modern AAA titles' recommended specifications, mainstream AAA gaming performance seems improbable.
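
That 75% figure falls out of scaling by the core-count and clock ratios; a sketch of the napkin math, borrowing the boost clocks quoted in the launch coverage earlier on this page (the clocks were not part of this particular leak):

```python
# Relative throughput estimate: core ratio times boost-clock ratio.
cores_6gb, cores_8gb = 2304, 2560
boost_6gb_mhz, boost_8gb_mhz = 1470, 1777  # from later launch coverage

relative_perf = (cores_6gb / cores_8gb) * (boost_6gb_mhz / boost_8gb_mhz)
print(f"{relative_perf:.0%}")  # ~74%, i.e. roughly the rumored 75%
```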

NVIDIA GeForce RTX 3050 6GB to Get a Formal Release in February 2024

The GeForce RTX 3050 has been around since January 2022, and formed the entry level of the company's RTX 30-series. It had its moment under the Sun during the crypto GPU shortage as a 1080p gaming option that sold around the $300 mark. With the advent of the RTX 40-series, NVIDIA is finding itself lacking an entry-level discrete GPU that it can push in high volumes. Enter the RTX 3050 6 GB. Cut down from the original RTX 3050, this SKU has 6 GB of memory across a narrower 96-bit GDDR6 memory interface, and fewer shaders. Based on the tiny GA107 "Ampere" silicon, it gets 2,048 CUDA cores compared to the 2,560 of the RTX 3050, a core-configuration NVIDIA refers to as the GA107-325. The card has a tiny total graphics power (TGP) of just 70 W, and so we should see graphics cards without additional power connectors. The company plans to give the RTX 3050 6 GB a formal retail channel launch in February 2024, at a starting price of $179.

NVIDIA CFO Hints at Intel Foundry Services Partnership

NVIDIA CFO Colette Kress, responding to a question in the Q&A session of the recent UBS Global Technology Conference, hinted at the possibility of NVIDIA onboarding a third semiconductor foundry partner besides its current TSMC and Samsung, with the implication being Intel Foundry Services (IFS). "We would love a third one. And that takes a work of what are they interested in terms of the services. Keep in mind, there is other ones that may come to the U.S. TSMC in the U.S. may be an option for us as well. Not necessarily different, but again in terms of the different region. Nothing that stops us from potentially adding another foundry."

NVIDIA currently sources its chips from TSMC and Samsung. It uses the premier Taiwanese fab for its latest "Ada" GPUs and "Hopper" AI processors, while using Samsung for its older-generation "Ampere" GPUs. The addition of IFS as a third foundry partner could improve the company's supply-chain resilience in an uncertain geopolitical environment, given that IFS fabs are predominantly based in the US and the EU.

Intel "Sierra Forest" Xeon System Surfaces, Fails in Comparison to AMD Bergamo

Intel's upcoming Sierra Forest Xeon server chip has debuted on Geekbench 6, showcasing its potential in multi-core performance. Slated for release in the first half of 2024, Sierra Forest is equipped with up to 288 Efficiency cores, positioning it to compete with AMD's Zen 4c-based Bergamo server CPUs, as well as Arm-based server chips like those from Ampere, for the favor of cloud service providers (CSPs). In the Geekbench 6 benchmark, a dual-socket configuration featuring two 144-core Sierra Forest CPUs was tested. The benchmark revealed a notable multi-core score of 7,770, surpassing most dual-socket systems powered by Intel's high-end Xeon Platinum 8480+, which typically score between 6,500 and 7,500. However, Sierra Forest's single-core score of 855 points was considerably lower, not even reaching half that of the 8480+, which manages 1,897 points.

The difference in single-core performance is a matter of design choice, as Sierra Forest uses Crestmont-derived Sierra Glen E-cores, which are more power- and area-efficient, unlike the Golden Cove P-cores in the Sapphire Rapids-based 8480+. This design choice is particularly advantageous for server environments where high core counts are crucial, as CSPs usually partition their instances by the number of CPU cores. However, compared to AMD's Bergamo CPUs, which use Zen 4c cores, Sierra Forest lacks pure computing performance, especially in multi-core. Sierra Forest also lacks hyperthreading, while Bergamo offers SMT with 256 threads on the 128-core SKU. Compared to the AMD Bergamo EPYC 9754's Geekbench 6 scores, Sierra Forest's results look a lot less impressive. Bergamo scored 1,597 points in single-core, almost double that of Sierra Forest, and 16,455 points in the multi-core benchmarks, which is more than double. This is a significant advantage of the Zen 4c core, which cuts down on caches instead of being an entirely different core, as Intel does with its P and E-cores. However, these are just preliminary numbers; we must wait for real-world benchmarks to see the actual performance.
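
The "almost double" and "more than double" characterizations are easy to confirm from the quoted scores:

```python
# Ratio check on the quoted Geekbench 6 scores.
sierra_forest_single, sierra_forest_multi = 855, 7770  # dual 144-core setup
bergamo_single, bergamo_multi = 1597, 16455            # EPYC 9754

print(f"single-core: {bergamo_single / sierra_forest_single:.2f}x")  # ~1.87x
print(f"multi-core:  {bergamo_multi / sierra_forest_multi:.2f}x")    # ~2.12x
```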

NVIDIA Celebrates 500 Games & Apps with DLSS and RTX Technologies

NVIDIA today announced the important milestone of 500 games and apps that take advantage of NVIDIA RTX, the transformative set of gaming graphics technologies that, among many other things, mainstreamed real-time ray tracing in the consumer gaming space and debuted the most profound gaming technology of recent times: DLSS, or performance uplifts through high-quality upscaling. The company reached this milestone over a 5-year period, with RTX seeing the light of day in August 2018. NVIDIA RTX is the combined feature-set of real-time ray tracing, including NVIDIA-specific enhancements, and DLSS.

Although it started out as an upscaling-based performance enhancement that leverages AI, DLSS now encompasses a whole suite of technologies aimed at enhancing performance at minimal quality loss, and in some cases even enhancing image quality over native resolution. This includes super resolution, the classic DLSS and DLSS 2 feature set; DLSS 3 Frame Generation, which nearly doubles frame rates by generating entire alternate frames using AI, without involving the graphics rendering machinery; and DLSS 3.5 Ray Reconstruction, which attempts to vastly improve the fidelity of ray-traced elements in upscaled scenarios.

ASRock Rack Announces Support of NVIDIA H200 GPUs and GH200 Superchips and Highlights HPC and AI Server Platforms at SC 23

ASRock Rack Inc., the leading innovative server company, today is set to showcase a comprehensive range of servers for diverse AI workloads, catering to scenarios spanning the edge, on-premises, and the cloud, at booth #1737 at SC 23, held at the Colorado Convention Center in Denver, USA. The event runs from November 13th to 16th, and ASRock Rack will feature the following significant highlights:

At SC 23, ASRock Rack will demonstrate the NVIDIA-Qualified 2U4G-GENOA/M3 and 4U8G series GPU server solutions along with the NVIDIA H100 PCIe. The ASRock Rack 4U8G and 4U10G series GPU servers are able to accommodate eight to ten 400 W dual-slot GPU cards and 24 hot-swappable 2.5" drives, designed to deliver exceptional performance for demanding AI workloads deployed in the cloud environment. The 2U4G-GENOA/M3, tailored for lighter workloads, is powered by a single AMD EPYC 9004 series processor and is able to support four 400 W dual-slot GPUs while having additional PCIe and OCP NIC 3.0 slots for expansions.

AAEON Unveils BOXER-8651AI Mini PC Powered by NVIDIA Jetson Orin NX

Industry-leading designer and manufacturer of edge AI solutions, AAEON, has released the BOXER-8651AI, a compact fanless embedded AI system powered by the NVIDIA Jetson Orin NX module. The BOXER-8651AI takes advantage of the module's NVIDIA Ampere architecture GPU, featuring 1,024 CUDA cores and 32 Tensor cores, along with support for NVIDIA JetPack 5.0 and above, to provide users with accelerated graphics, data processing, and image classification.

With a fanless chassis measuring just 105 mm x 90 mm x 52 mm, the BOXER-8651AI is an extremely small solution that houses a dense range of interfaces, including DB-9 and DB-15 ports for RS-232 (Rx/Tx/CTS/RTS)/RS-485, CANBus, and DIO functions. Additionally, the device provides HDMI 2.1 display output, GbE LAN, and a variety of USB Type-A ports, supporting both USB 3.2 Gen 2 and USB 2.0 functionality.

Despite packing such powerful AI performance into its size, the BOXER-8651AI is built to operate in rugged conditions, boasting a 5°F to 131°F (-15°C to 55°C) operating temperature range alongside anti-shock and vibration resistance features. As a result, the PC is ideally suited for wall-mounted deployment across a range of environments.

Q2 Revenue for Top 10 Global IC Houses Surges by 12.5% as Q3 on Pace to Set New Record

Fueled by an AI-driven inventory stocking frenzy across the supply chain, TrendForce reveals that Q2 revenue for the top 10 global IC design powerhouses soared to US $38.1 billion, marking a 12.5% quarterly increase. In this rising tide, NVIDIA seized the crown, officially dethroning Qualcomm as the world's premier IC design house, while the remainder of the leaderboard remained stable.

AI charges ahead, buoying IC design performance amid a seasonal stocking slump
NVIDIA is reaping the rewards of a global transformation. Bolstered by global demand from CSPs, internet behemoths, and enterprises diving into generative AI and large language models, NVIDIA's data center revenue skyrocketed by a whopping 105%. A deluge of shipments, including the likes of its advanced Hopper and Ampere architecture HGX systems and high-performing InfiniBand networking, played a pivotal role. Beyond that, both the gaming and professional visualization sectors thrived under the allure of fresh product launches. Clocking a Q2 revenue of US$11.33 billion (a 68.3% surge), NVIDIA has vaulted over both Qualcomm and Broadcom to seize the IC design throne.