News Posts matching #Ampere


Palit Introduces GeForce RTX 3050 6 GB KalmX and StormX Models

Palit Microsystems Ltd., a leading graphics card manufacturer, proudly announces the NVIDIA GeForce RTX 3050 6 GB KalmX and StormX Series graphics cards. The GeForce RTX 3050 6 GB GPU is built with the powerful graphics performance of the NVIDIA Ampere architecture. It offers dedicated 2nd gen RT Cores and 3rd gen Tensor Cores, new streaming multiprocessors, and high-speed G6 memory to tackle the latest games.

GeForce RTX 3050 6 GB KalmX: Passive Cooling. Silent Gaming
Introducing the Palit GeForce RTX 3050 KalmX, where silence meets performance in perfect harmony. The KalmX series, renowned for its ingenious fan-less design, redefines your gaming experience. With its passive cooling system, this graphics card operates silently, making it ideal for both gaming and multimedia applications. Available on shelves today—2nd February 2024.

NVIDIA GeForce RTX 3050 6GB Formally Launched

NVIDIA today formally launched the GeForce RTX 3050 6 GB as its new entry-level discrete GPU. The RTX 3050 6 GB is a significantly different product from the original RTX 3050 that the company launched as a mid-range product way back in January 2022. The RTX 3050 had originally launched on the 8 nm GA106 silicon, with 2,560 CUDA cores, 80 Tensor cores, 20 RT cores, 80 TMUs, and 32 ROPs, paired with 8 GB of 14 Gbps GDDR6 memory across a 128-bit memory bus. These specs also matched the maximum core configuration of the smaller GA107 silicon, so the company reissued the RTX 3050 on GA107 toward the end of 2022, with no change in specs but a slight improvement in energy efficiency from the switch to the smaller die. The new RTX 3050 6 GB is based on the same GA107 silicon, but with significant changes.

To begin with, the most obvious change is memory. The new SKU features 6 GB of 14 Gbps GDDR6 across a narrower 96-bit memory bus, for 168 GB/s of memory bandwidth. That's not all: the GPU is significantly cut down, with just 16 SMs instead of the 20 found on the original RTX 3050. This works out to 2,048 CUDA cores, 64 Tensor cores, 16 RT cores, 64 TMUs, and an unchanged 32 ROPs. The GPU also comes with lower clock speeds of 1470 MHz boost, compared to 1777 MHz on the original RTX 3050. The silver lining with this SKU is its total graphics power (TGP) of just 70 W, which means that cards can do away with power connectors entirely and rely on PCIe slot power alone (the slot supplies up to 75 W). NVIDIA hasn't listed its own MSRP for this SKU, but last we heard, it was supposed to go for $179 and square off against the likes of the Intel Arc A580.
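As a quick sanity check, the shader figures and the bandwidth number both follow from known quantities: each GA10x "Ampere" SM provides 128 CUDA cores, 4 Tensor cores, 1 RT core, and 4 TMUs, and GDDR6 bandwidth is simply bus width times data rate. A minimal Python sketch of the arithmetic (the per-SM ratios are published architecture figures, not taken from this article):

# Per-SM resources on GA10x "Ampere" silicon (published architecture figures).
CUDA_PER_SM, TENSOR_PER_SM, RT_PER_SM, TMU_PER_SM = 128, 4, 1, 4
sms = 16  # RTX 3050 6 GB, as reported above
print(sms * CUDA_PER_SM, "CUDA cores")      # 2048
print(sms * TENSOR_PER_SM, "Tensor cores")  # 64
print(sms * RT_PER_SM, "RT cores")          # 16
print(sms * TMU_PER_SM, "TMUs")             # 64
# GDDR6 bandwidth: bus width (bits) / 8 bits-per-byte x data rate (Gbps).
bus_bits, data_rate_gbps = 96, 14
print(bus_bits // 8 * data_rate_gbps, "GB/s")  # 168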

MSI GeForce RTX 3050 6 GB VENTUS 2X OC Card Listed by Austrian Shop

Specifications for NVIDIA's GeForce RTX 3050 6 GB GPU leaked midway through the month—fueling further speculation about cut-down Ampere cards hitting retail outlets within the first quarter of 2024. The super budget-friendly alternative to existing GeForce RTX 3050 8 GB graphics card models is tipped to be weakened in many areas (not just a reduction in memory capacity)—according to the last set of leaks: "performance could lag behind the (two-year-old) RTX 3050 8 GB SKU by up to 25%, making it weaker competition even for AMD's budget Radeon RX 6500 XT."

ComputerBase.de has uncovered an interesting E-Tec shop listing—now removed thanks to global news coverage—which suggests MSI is preparing a 6 GB variant of its RTX 3050 VENTUS 2X OC design. A screenshot of the Austrian e-tailer's listing has been preserved and circulated—leaked pricing was €245.15, and the model's manufacturer code is V812-015R. A Google search of the latter generates a number of hits, including information that aligns with TPU's database entry. Specification sheets likely originate from distributors, so they are subject to change closer to launch time. VideoCardz points out that a 130 W TDP has appeared online, although some older leaks indicate that the MSI part is targeting NVIDIA's alleged reference figure of 70 W.

Possible NVIDIA GeForce RTX 3050 6 GB Edition Specifications Appear

Alleged full specifications leaked for NVIDIA's upcoming GeForce RTX 3050 6 GB graphics card show extensive reductions beyond merely the smaller memory size versus the 8 GB model. If accurate, performance could lag the existing RTX 3050 8 GB SKU by up to 25%, making it weaker competition even for AMD's budget RX 6500 XT. Previous rumors suggested the two 3050 variants would differ only in memory capacity and bandwidth via a partially disabled memory bus, dropping from 8 GB on a 128-bit bus to 6 GB on a 96-bit bus. But the leaked specs indicate CUDA core counts, clock speeds, and TDP all see cuts for the upcoming 6 GB version. With 18 SMs and 2,304 cores rather than 20 SMs and 2,560 cores, at lower base and boost frequencies, the impact looks more severe than expected. A 70 W TDP does allow passive cooling but hurts performance versus the 3050 8 GB's 130 W design.

Some napkin math suggests the 3050 6 GB could deliver only 75% of its elder sibling's frame rates, putting it more in line with the entry-level Radeon RX 6500 XT. While having 50% more VRAM helps, the dramatic core and clock downgrades counteract that memory advantage. According to rumors, the RTX 3050 6 GB is set to launch in February, bringing lower-end Ampere to even more budget-focused builders. But with specifications seemingly hobbled beyond just capacity, its real-world gaming value remains to be determined. NVIDIA likely intends the RTX 3050 6 GB primarily for less demanding esports titles. Given the scale of the cutbacks and modern AAA titles' recommended specifications, mainstream AAA gaming performance seems improbable.
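For the curious, the 75% figure falls out of a simple throughput ratio: leaked core count times boost clock for the 6 GB card, divided by the same product for the original. A back-of-the-envelope Python sketch, using the leaked core count above and the boost clocks reported elsewhere in this round-up (it ignores the narrower memory bus, which would only drag the result down further):

# Napkin math: relative FP32 throughput ~ (cores x boost clock) ratio.
cores_6gb, boost_mhz_6gb = 2304, 1470  # leaked RTX 3050 6 GB figures
cores_8gb, boost_mhz_8gb = 2560, 1777  # original RTX 3050 8 GB
ratio = (cores_6gb * boost_mhz_6gb) / (cores_8gb * boost_mhz_8gb)
print(f"{ratio:.0%}")  # ~74%, in line with the 75% estimate above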

NVIDIA GeForce RTX 3050 6GB to Get a Formal Release in February 2024

The GeForce RTX 3050 has been around since January 2022, forming the entry level of the company's RTX 30-series. It had its moment under the Sun during the crypto GPU shortage as a 1080p gaming option that sold around the $300 mark. With the advent of the RTX 40-series, NVIDIA finds itself lacking an entry-level discrete GPU that it can push in high volumes. Enter the RTX 3050 6 GB. Cut down from the original RTX 3050, this SKU has 6 GB of memory across a narrower 96-bit GDDR6 memory interface, and fewer shaders. Based on the tiny GA107 "Ampere" silicon, it gets 2,048 CUDA cores compared to the 2,560 of the RTX 3050, a core configuration NVIDIA refers to as GA107-325. The card has a tiny total graphics power (TGP) of just 70 W, so we should see graphics cards without additional power connectors. The company plans to give the RTX 3050 6 GB a formal retail-channel launch in February 2024, at a starting price of $179.

NVIDIA CFO Hints at Intel Foundry Services Partnership

NVIDIA CFO Colette Kress, responding to a question in the Q&A session of the recent UBS Global Technology Conference, hinted at the possibility of NVIDIA onboarding a third semiconductor foundry partner besides its current TSMC and Samsung, with the implication being Intel Foundry Services (IFS). "We would love a third one. And that takes a work of what are they interested in terms of the services. Keep in mind, there is other ones that may come to the U.S. TSMC in the U.S. may be an option for us as well. Not necessarily different, but again in terms of the different region. Nothing that stops us from potentially adding another foundry."

NVIDIA currently sources its chips from TSMC and Samsung. It uses the premier Taiwanese fab for its latest "Ada" GPUs and "Hopper" AI processors, while using Samsung for its older-generation "Ampere" GPUs. The addition of IFS as a third foundry partner could improve the company's supply-chain resilience in an uncertain geopolitical environment, given that IFS fabs are predominantly based in the US and the EU.

Intel "Sierra Forest" Xeon System Surfaces, Fails in Comparison to AMD Bergamo

Intel's upcoming Sierra Forest Xeon server chip has debuted on Geekbench 6, showcasing its potential in multi-core performance. Slated for release in the first half of 2024, Sierra Forest is equipped with up to 288 Efficiency cores, positioning it to compete with AMD's Zen 4c Bergamo server CPUs and other Arm-based server chips, like those from Ampere, for the favor of cloud service providers (CSPs). In the Geekbench 6 benchmark, a dual-socket configuration featuring two 144-core Sierra Forest CPUs was tested. The benchmark revealed a notable multi-core score of 7,770, surpassing most dual-socket systems powered by Intel's high-end Xeon Platinum 8480+, which typically score between 6,500 and 7,500. However, Sierra Forest's single-core score of 855 points was considerably lower, at less than half that of the 8480+, which manages 1,897 points.

The difference in single-core performance is a matter of design choice, as Sierra Forest uses Crestmont-derived Sierra Glen E-cores, which are more power- and area-efficient, unlike the Golden Cove P-cores in the Sapphire Rapids-based 8480+. This design choice is particularly advantageous for server environments where high core counts are crucial, as CSPs usually partition their instances by the number of CPU cores. However, compared to AMD's Bergamo CPUs, which use Zen 4c cores, Sierra Forest lacks pure computing performance, especially in multi-core workloads. Sierra Forest also lacks hyperthreading, while Bergamo offers SMT with 256 threads on the 128-core SKU. Compared with the AMD Bergamo EPYC 9754, the Sierra Forest results look a lot less impressive: Bergamo scored 1,597 points in single-core, almost double that of Sierra Forest, and 16,455 points in the multi-core benchmark, more than double. This is a significant advantage of the Zen 4c core, which cuts down on caches rather than being an entirely different design, as is the case with Intel's separate P- and E-cores. However, these are just preliminary numbers; we must wait for real-world benchmarks to see the actual performance.
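The "almost double" and "more than double" characterizations are easy to verify from the quoted scores; a trivial Python check using only the numbers reported above:

# Geekbench 6 scores quoted above: (single-core, multi-core).
sierra_forest = (855, 7770)  # dual-socket system, two 144-core CPUs
bergamo = (1597, 16455)      # EPYC 9754 (Zen 4c)
print(f"single-core: {bergamo[0] / sierra_forest[0]:.2f}x")  # ~1.87x
print(f"multi-core:  {bergamo[1] / sierra_forest[1]:.2f}x")  # ~2.12x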

NVIDIA Celebrates 500 Games & Apps with DLSS and RTX Technologies

NVIDIA today announced the important milestone of 500 games and apps that take advantage of NVIDIA RTX, the transformative set of gaming graphics technologies that, among many other things, mainstreamed real-time ray tracing in the consumer gaming space and debuted the most profound gaming technology of recent times—DLSS, or performance uplifts through high-quality upscaling. The company got to this milestone over a 5-year period, with RTX seeing the light of day in August 2018. NVIDIA RTX is the combined feature set of real-time ray tracing, including NVIDIA-specific enhancements, and DLSS.

Although it started out as an upscaling-based performance enhancement that leverages AI, DLSS now encompasses a whole suite of technologies aimed at enhancing performance at minimal quality loss, and in some cases it even enhances image quality over native resolution. This includes Super Resolution, the classic DLSS and DLSS 2 feature set; DLSS 3 Frame Generation, which nearly doubles frame rates by generating alternate frames entirely using AI, without involving the graphics rendering machinery; and DLSS 3.5 Ray Reconstruction, which attempts to vastly improve the fidelity of ray-traced elements in upscaled scenarios.

ASRock Rack Announces Support of NVIDIA H200 GPUs and GH200 Superchips and Highlights HPC and AI Server Platforms at SC 23

ASRock Rack Inc., the leading innovative server company, is set to showcase a comprehensive range of servers for diverse AI workloads, catering to scenarios spanning the edge, on-premises deployments, and the cloud, at booth #1737 at SC 23, held at the Colorado Convention Center in Denver, USA. The event runs from November 13th to 16th, and ASRock Rack will feature the following significant highlights:

At SC 23, ASRock Rack will demonstrate the NVIDIA-Qualified 2U4G-GENOA/M3 and 4U8G series GPU server solutions along with the NVIDIA H100 PCIe. The ASRock Rack 4U8G and 4U10G series GPU servers are able to accommodate eight to ten 400 W dual-slot GPU cards and 24 hot-swappable 2.5" drives, designed to deliver exceptional performance for demanding AI workloads deployed in the cloud environment. The 2U4G-GENOA/M3, tailored for lighter workloads, is powered by a single AMD EPYC 9004 series processor and is able to support four 400 W dual-slot GPUs while having additional PCIe and OCP NIC 3.0 slots for expansions.

AAEON Unveils BOXER-8651AI Mini PC Powered by NVIDIA Jetson Orin NX

Industry-leading designer and manufacturer of edge AI solutions AAEON has released the BOXER-8651AI, a compact fanless embedded AI system powered by the NVIDIA Jetson Orin NX module. The BOXER-8651AI takes advantage of the module's NVIDIA Ampere architecture GPU, featuring 1,024 CUDA cores and 32 Tensor cores, along with support for NVIDIA JetPack 5.0 and above, to provide users with accelerated graphics, data processing, and image classification.

With a fanless chassis measuring just 105 mm x 90 mm x 52 mm, the BOXER-8651AI is an extremely small solution that houses a dense range of interfaces, including DB-9 and DB-15 ports for RS-232 (Rx/Tx/CTS/RTS)/RS-485, CANBus, and DIO functions. Additionally, the device provides HDMI 2.1 display output, GbE LAN, and a variety of USB Type-A ports, supporting both USB 3.2 Gen 2 and USB 2.0 functionality.

Despite packing such AI performance into a small footprint, the BOXER-8651AI is built to operate in rugged conditions, boasting a 5°F to 131°F (-15°C to 55°C) operating temperature range alongside anti-shock and vibration resistance features. Consequently, the PC is ideally suited for wall-mounted deployment across a range of environments.
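For reference, the Fahrenheit endpoints follow from the standard conversion F = C × 9/5 + 32; a quick Python check of the quoted range:

# Convert the quoted Celsius operating range to Fahrenheit.
def c_to_f(celsius: float) -> float:
    return celsius * 9 / 5 + 32
print(c_to_f(-15), c_to_f(55))  # 5.0 131.0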

Q2 Revenue for Top 10 Global IC Houses Surges by 12.5% as Q3 on Pace to Set New Record

Fueled by an AI-driven inventory-stocking frenzy across the supply chain, TrendForce reveals that Q2 revenue for the top 10 global IC design powerhouses soared to US$38.1 billion, marking a 12.5% quarterly increase. In this rising tide, NVIDIA seized the crown, officially dethroning Qualcomm as the world's premier IC design house, while the remainder of the leaderboard remained stable.

AI charges ahead, buoying IC design performance amid a seasonal stocking slump
NVIDIA is reaping the rewards of a global transformation. Bolstered by global demand from CSPs, internet behemoths, and enterprises diving into generative AI and large language models, NVIDIA's data center revenue skyrocketed by a whopping 105%. A deluge of shipments, including the likes of its advanced Hopper and Ampere architecture HGX systems and high-performing InfiniBand products, played a pivotal role. Beyond that, both the gaming and professional visualization sectors thrived on the allure of fresh product launches. Clocking a Q2 revenue of US$11.33 billion (a 68.3% surge), NVIDIA has vaulted over both Qualcomm and Broadcom to seize the IC design throne.

Oracle Cloud Adds AmpereOne Processor and Broad Set of New Services on Ampere

Oracle has announced its next-generation Ampere A2 Compute Instances based on the latest AmpereOne processor, with availability starting later this year. According to Oracle, the new instances will deliver up to 44% more price-performance compared to x86 offerings and are ideal for AI inference, databases, web services, media transcoding workloads, and run-time language support, such as Go and Java.

In related news, several new customers, including industry-leading real-time video service companies 8x8 and Phenix, along with AI startups like Wallaroo, said they are migrating to Oracle Cloud Infrastructure (OCI) and Ampere as more and more companies seek to maximize price-performance and energy efficiency.

Nintendo Switch 2 to Feature NVIDIA Ampere GPU with DLSS

The rumors of Nintendo's next-generation Switch handheld gaming console have been piling up ever since competition in the handheld market intensified. Since the release of the original Switch, Valve has released the Steam Deck, ASUS has made the ROG Ally, and others are also exploring the market. The next-generation Nintendo Switch 2 is drawing closer, however, and we now have information about the chipset that will power the device, thanks to Kepler_L2 on Twitter/X, who shared the codenames of the upcoming processors. The first-generation Switch came with NVIDIA's Tegra X1 SoC built on a 20 nm node; later on, NVIDIA supplied Nintendo with a Tegra X1+ SoC made on a 16 nm node. There were no performance increases recorded, just improved power efficiency. Both used four Cortex-A57 and four Cortex-A53 cores with GM20B "Maxwell" GPUs.

For the Nintendo Switch 2, NVIDIA is said to be utilizing a customized variant of the NVIDIA Jetson Orin SoC designed for automotive applications. The reference Orin SoC carries the codename T234, while this alleged adaptation carries T239; the version is most likely optimized for power efficiency. The reference Orin design is a considerable uplift over the Tegra X1, as it boasts 12 Cortex-A78AE cores, LPDDR5 memory, and the Ampere GPU microarchitecture. Built on Samsung's 8 nm node, its efficiency would likely yield better battery life and position the second-generation Switch well in the now-expanded handheld gaming console market. Including the Ampere architecture would also bring technologies like DLSS, which would benefit the low-power SoC.

NVIDIA GeForce RTX 3090 SUPER Founders Edition Pops Up on Taobao

An unreleased NVIDIA GeForce RTX 3090 SUPER Founders Edition graphics card was last spotted just over a year ago, when a fortunate member of the Chinese NGA discussion board provided a close-up shot of a shroud bearing "super." A new leak gives us a full view of the RTX 3090 SUPER FE with prominent branding—KittyYYuko declared: "WTF, I have indeed heard of this leak before" upon posting this discovery to social media.

According to ITHome, the example from last year appeared to be a publicly released variant of "an unpackaged GeForce RTX 3090 Ti," and the latest finding seems to be identical. A seller, tbNick_dn86z, has listed his GeForce RTX 3090 SUPER Founders Edition card at 9,999 RMB (~$1,370) on Xianyu (Taobao's second-hand market)—it is advertised as being "original and not modified, with a pure black casing." When asked about any apparent differences between the SUPER and the officially launched Ti version, tbNick_dn86z confirmed that they are largely the same (minus external branding)—a matching device ID is shared across both variants.

Ampere Computing Creates Gaming on Linux Guide, Runs Steam Proton on Server-class Arm CPUs

Ampere Computing, known for its Altra (Max) and upcoming AmpereOne families of AArch64 server processors tailored for data centers, has released a guide for enthusiasts on running Steam for Linux on these Arm64 processors, including using Steam Play (Proton) to play Windows games on these Linux-powered servers. Over the summer, Ampere Computing introduced a GitHub repository detailing the process of running Steam for Linux on its AArch64 platforms, including Steam Play/Proton. While the guide is primarily designed for Ampere Altra/Altra Max and AmpereOne hardware, it can be adapted for other 64-bit Arm platforms. However, a powerful processor is essential to truly appreciate the gaming experience. Additionally, for 3D OpenGL/Vulkan graphics to function optimally, an Ampere workstation system is more suitable than a headless server.

The guide recommends the Ampere Altra Developer Platform paired with an NVIDIA RTX A6000 series graphics card, which supports AArch64 proprietary drivers. For emulation, the guide uses Box86 and Box64 to run Steam's x86 binaries and other x86/x86-64 games. While there are other options, like FEX-Emu and Hangover, to enhance the Linux binary experience on AArch64, Box86/Box64 is the preferred choice for gaming on Ampere workstations, as indicated by its use in Ampere Computing's guide. Once the AArch64 Linux graphics drivers are accelerated and Box86/Box64 emulation is set up, users can install Steam for Linux. By activating Proton within Steam, it becomes feasible to play Windows-exclusive x86/x86-64 games on Ampere AArch64 workstations or server processors. However, the guide doesn't provide insights into the performance of such a configuration.

China Hosts 40% of all Arm-based Servers in the World

The escalating challenges in acquiring high-performance x86 servers have prompted Chinese data center companies to accelerate the shift to Arm-based system-on-chips (SoCs). Investment banking firm Bernstein reports that approximately 40% of all Arm-powered servers globally are currently being used in China. While most servers operate on x86 processors from AMD and Intel, there's a growing preference for Arm-based SoCs, especially in the Chinese market. Several global tech giants, including AWS, Ampere, Google, Fujitsu, Microsoft, and NVIDIA, have already adopted or developed Arm-powered SoCs. Arm-based SoCs are increasingly attractive to Chinese firms in particular, given the difficulty of consistently sourcing Intel's Xeon or AMD's EPYC processors. Chinese companies like Alibaba, Huawei, and Phytium are pioneering the development of Arm-based SoCs for client and data center processors.

However, the US government's restrictions present some challenges. Both Huawei and Phytium, blacklisted by the US, cannot access TSMC's cutting-edge process technologies, limiting their ability to produce competitive processors. Although Alibaba's T-Head can leverage TSMC's latest innovations, it can't license Arm's high-performance computing Neoverse V-series CPU cores due to various export control rules. Despite these challenges, many chip designers are considering alternatives such as RISC-V, an unrestricted, rapidly evolving open-source instruction set architecture (ISA) suitable for designing highly customized general-purpose cores for specific workloads. Still, with the backing of influential firms like AWS, Google, NVIDIA, Microsoft, Qualcomm, and Samsung, the Armv8 and Armv9 instruction set architectures continue to hold an edge over RISC-V. These companies' support ensures that the software ecosystem remains compatible with their CPUs, which will likely continue to drive the adoption of Arm in the data center space.

Curious MSI GeForce RTX 3080 Ti 20 GB Card pops up on FB Marketplace

An unusual MSI RTX 3080 Ti SUPRIM X graphics card is up for sale, second hand, on Facebook Marketplace—the Sydney, Australia-based seller is advertising this component as a truly custom model with a non-standard allocation of VRAM: "Yes this is 20 GB not 12 GB." The used item is said to be in "good condition," with its product description elaborating on a bit of history: "There are some scuff marks from the previous owner, but the card works fine. It is an extremely rare collector's item, due to NVIDIA cancelling these variants a month before release. This is not an engineering sample card—this was a finished OEM product that got cancelled, unfortunately." The seller is seeking AU$1,100 (~$740 USD), after a reduction from the original asking price of AU$1,300 (~$870 USD).

MSI and Gigabyte were reportedly on the verge of launching GeForce RTX 3080 Ti 20 GB variants two years ago, but NVIDIA had a change of heart (probably due to concerns about costs and production volumes) and decided to stick with a public release of the standard 12 GB GPU. Affected AIBs chose not to destroy their stock of 20 GB cards—these were instead sold to crypto miners and shady retailers. Wccftech points out that mining-oriented units have identifying marks on their I/O ports.

IBASE Unveils SI-624-AI Industrial AI Computer with NVIDIA Ampere MXM GPU

IBASE Technology Inc. (TPEx: 8050), a leading provider of industrial computing solutions, unveils the SI-624-AI industrial AI computer, which won Embedded Computing Design's Embedded World 2023 Best in Show Award in Germany. This recognition highlights the exceptional performance and innovation of the rugged system in the field of AI deep learning.

The SI-624-AI is designed to meet the high-speed multitasking demands of artificial neural network applications. Powered by a 12th Gen Intel Core CPU and incorporating an NVIDIA Ampere architecture MXM GPU, this cutting-edge system delivers image processing capabilities that enable real-time analysis of visual data, enhancing automation, quality control, and overall production efficiency for AIoT applications in smart factory, retail, transportation, or medical fields. It is also suitable for use as a digital signage control system in mission-critical control rooms in transportation networks, smart retail, healthcare, or AI education, where remote AI data analysis capabilities are required.

Supermicro Adds 192-Core ARM CPU Based Low Power Servers to Its Broad Range of Workload Optimized Servers and Storage Systems

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is adding several new servers to its already broad application-optimized product line. These new servers incorporate the new AmpereOne CPU, with up to 192 single-threaded cores and up to 4 TB of memory capacity. Applications such as databases, telco edge, web servers, caching services, media encoding, and video game streaming will benefit from the increased core counts, faster memory access, higher performance per watt, scalable power management, and new cloud security features. Additionally, cloud-native microservice-based applications will benefit from the lower latencies and power usage.

"Supermicro is expanding our customer choices by introducing these new systems that incorporate the latest high core count CPUs from Ampere Computing," said Michael McNerney, vice president of Marketing and Security, Supermicro. "With high core counts, predictable latencies, and up to 4 TB of memory, users will experience increased performance for a range of workloads and lower energy use. We continue to design and deliver a range of environmentally friendly servers that give customers a competitive advantage for various applications."

Oracle to Spend Billions on NVIDIA Data Center GPUs, Even More on Ampere & AMD CPUs

Oracle founder and Chairman Larry Ellison last week announced a substantial spending spree on new equipment as he prepares his business for a cloud computing service expansion that will be aimed at attracting a "new wave" of artificial intelligence (AI) companies. He made this announcement at a recent Ampere event: "This year, Oracle will buy GPUs and CPUs from three companies...We will buy GPUs from NVIDIA, and we're buying billions of dollars of those. We will spend three times that on CPUs from Ampere and AMD. We still spend more money on conventional compute." His cloud division is said to be gearing up to take on larger competition—namely Amazon Web Services and Microsoft Corp. Oracle is hoping to outmaneuver these major players by focusing on the construction of fast networks, capable of shifting around huge volumes of data—the end goal being the creation of its own ChatGPT-type model.

Ellison expressed that he was leaving Team Blue behind—Oracle has invested heavily in Ampere Computing, a startup founded by ex-Intel folks: "It's a major commitment to move to a new supplier. We've moved to a new architecture...We think that this is the future. The old Intel x86 architecture, after many decades in the market, is reaching its limit." Oracle's database software has been updated to run on Ampere's Arm-based chips, which Ellison posits grant greater power efficiency when compared to AMD and NVIDIA enterprise processors. There will be some reliance on x86-64 going forward, since Oracle's next-gen Exadata X10M platform was recently announced with the integration of Team Red's EPYC 9004 series processors—a company spokesman stated that these server CPUs offer higher core counts and "extreme scale and dramatically improved price performance" when compared to older Intel Xeon systems.

Zephyr GeForce RTX 3060 Ti Card Has a Pink PCB

Zephyr has produced an NVIDIA GeForce custom graphics card that sports a unique pink printed circuit board—bright and pastel colors have featured on cooling solutions in the past, but this new-ish product presents the first example of a PCB with a tinge of blush. Renowned hardware tipster harukaze5719 broke from his usual delivery of cold, macho tech on social media and shared his discovery of Zephyr's GeForce RTX 3060 Ti compact ITX card.

International buyers will be disappointed to learn that the pink Ampere card is a China market exclusive, with the company only offering a limited number of products on JD.com. VideoCardz notes that the card's specifications are not at all special, despite its interesting compact form factor and brightly toned cooling solution design. It is a non-overclocked model based on the older RTX 3060 Ti GDDR6 GPU variant with a standard 1750 MHz boost clock, 8 GB VRAM configuration, and a single 8-pin power connector.

Galax Reportedly Preparing GeForce RTX GPU Price Cuts

A recent report published by BoardChannels points to Galax possibly implementing a broad set of price cuts across its range of NVIDIA GeForce RTX 40 and 30-series custom graphics card models. Insider information originating from sources within a pool of NVIDIA and AMD board partners suggests that Galax could be shaving off up to 1000 RMB (around $140) from certain Ampere and Ada Lovelace products - effective later this month in its native Hong Kong market as well as mainland China.

The article posits that GeForce RTX 4080 cards could end up becoming 1000 RMB cheaper, with popular RTX 3060 models receiving cuts of around 250 RMB (≈$35). Galax is reported to have already offered entry-level desktop GeForce RTX 3050 cards at lower prices in the latter half of May - with 140 RMB (≈$19.50) reductions. The RTX 4070 series is supposedly set to receive a measly discount of around 150 to 200 RMB (≈$21 to $28), which is likely not doing it many favors given slow worldwide uptake since the product range's launch in mid-April. Galax could be making these adjustments to fall in line with regional rivals who have already reduced asking prices for NVIDIA gaming hardware.
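All the parenthetical dollar figures are consistent with a single exchange rate of roughly $0.14 per RMB. A quick conversion sketch in Python (the rate is our assumption; the article does not state the one it used):

# Hypothetical conversion at the ~0.14 USD/RMB rate implied by the figures.
USD_PER_RMB = 0.14  # assumed rate, not stated in the source
for rmb in (1000, 250, 140, 150, 200):
    print(f"{rmb} RMB ~= ${rmb * USD_PER_RMB:.2f}")
# 1000 -> $140.00, 250 -> $35.00, 140 -> $19.60, 150 -> $21.00, 200 -> $28.00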

Gigabyte Shows AI/HPC and Data Center Servers at Computex

GIGABYTE is exhibiting cutting-edge technologies and solutions at COMPUTEX 2023, presenting the theme "Future of COMPUTING". From May 30th to June 2nd, GIGABYTE is showcasing over 110 products that are driving future industry transformation, demonstrating the emerging trends of AI technology and sustainability, on the 1st floor, Taipei Nangang Exhibition Center, Hall 1.

GIGABYTE and its subsidiary, Giga Computing, are introducing unparalleled AI/HPC server lineups, leading the era of exascale supercomputing. One of the stars is the industry's first NVIDIA-certified HGX H100 8-GPU SXM5 server, the G593-SD0. Equipped with 4th Gen Intel Xeon Scalable processors and GIGABYTE's industry-leading thermal design, the G593-SD0 can handle extremely intensive workloads, from generative AI to deep learning model training, within a density-optimized 5U server chassis, making it a top choice for data centers aiming for AI breakthroughs. In addition, GIGABYTE is debuting AI computing servers supporting the NVIDIA Grace CPU and Grace Hopper Superchips. The high-density servers are accelerated with NVLink-C2C technology on the Arm Neoverse V2 platform, setting a new standard for AI/HPC computing efficiency and bandwidth.

Ampere Computing Unveils New AmpereOne Processor Family with 192 Custom Cores

Ampere Computing today announced a new AmpereOne Family of processors with up to 192 single-threaded Ampere cores - the highest core count in the industry. This is the first product from Ampere based on the company's new custom core, built from the ground up and leveraging its internal IP. CEO Renée James, who founded Ampere Computing to offer the industry a modern alternative, with processors designed specifically for both efficiency and performance in the cloud, said there was a fundamental shift happening that required a new approach.

"Every few decades of compute there has emerged a driving application or use of performance that sets a new bar of what is required of performance," James said. "The current driving uses are AI and connected everything combined with our continued use and desire for streaming media. We cannot continue to use power as a proxy for performance in the data center. At Ampere, we design our products to maximize performance at a sustainable power, so we can continue to drive the future of the industry."

NVIDIA Has Stopped Making GeForce RTX 3060 Ti GPUs, According to Industry Leaks

A Taiwanese PC hardware news outlet, Benchlife, has been talking to insider sources positioned within several of NVIDIA's add-in-board (AIB) partners - the author reports that these organizations are experiencing significant changeovers. The AIB informants indicate that production of GeForce RTX 4060 and 4060 Ti models is accelerating, following rumors of the older Ampere-based RTX 3060 Ti card being discontinued. The article's author was seeking further clarification and confirmation from industry insiders, given that most of the recent leaks have emerged from Chinese technology discussion boards. Forumites have posited that NVIDIA has stopped supplying its AIB partners with RTX 3060 Ti silicon. Going by the translation, it is difficult to tell whether the AIB tipsters consider the older card totally done for, but it is clear that NVIDIA is prioritizing the launch of new products.

It would make sense for Team Green to clear the way for the much newer Ada Lovelace-based lineups, but its entry-level RTX 3060 cards have remained firm favorites with PC hardware buyers, so succeeding product lines could find it quite tricky to catch up. NVIDIA's component suppliers stated (back in mid-April) that RTX 4000-series GPU production was not ramping up, due to a possible slow uptake of existing cards - in particular the recently released RTX 4070. Given the vast popularity of budget graphics card models, it seems that NVIDIA is preparing to embrace that market segment once again with its latest offerings - due for launch at the end of this month.