News Posts matching #GPU

Intel Launches Mobile Arc A570M and A530M

Without fanfare, Intel has launched two new mobile GPUs in the shape of the Arc A570M and the A530M. The Arc A570M gets 16 Xe Cores and 256 execution units, as well as four render slices and 16 RT units. The lower-end Arc A530M makes do with 12 Xe cores, 192 execution units, three render slices and 12 RT units, which is a smaller cut than the model name suggests. What's interesting to note here is that the Arc A570M appears to have identical hardware specs to the Arc A550M that launched in the second quarter of 2022, although as we'll see, the clock speeds and TGP differ between the parts. The Arc A570M supports 8 GB of GDDR6 memory, the same as the Arc A550M, while the Arc A530M supports 4 or 8 GB of GDDR6.

Both the Arc A570M and the A530M get a GPU clock speed of 1,300 MHz, a significant boost from the Arc A550M, which plods along at a mere 900 MHz in comparison. This makes the two newcomers Intel's third highest clocked mobile GPUs, with only the Arc A770M and Arc A370M clocked higher. The downside is an increase in TGP: where the Arc A550M had a fairly reasonable TGP of 60 Watts, the Arc A530M has a TGP range of 65 to 95 Watts, while the Arc A570M extends this to 75 to 95 Watts. The rest of the specs appear to carry over from the Arc A550M, so the new GPUs will support up to four displays via eDP, DP 2.0 or HDMI 2.1, and the full set of video encoders and decoders is also supported. The new additions are still made on TSMC's N6 node, so what we're looking at is most likely just optimised silicon, which has allowed Intel to boost the clock speeds while maintaining acceptable thermals.

NVIDIA H100 GPUs Now Available on AWS Cloud

AWS users can now access the leading AI training and inference performance demonstrated in industry benchmarks. The cloud giant officially switched on a new Amazon EC2 P5 instance powered by NVIDIA H100 Tensor Core GPUs. The service lets users scale generative AI, high performance computing (HPC) and other applications with a click from a browser.

The news comes in the wake of AI's iPhone moment. Developers and researchers are using large language models (LLMs) to uncover new applications for AI almost daily. Bringing these new use cases to market requires the efficiency of accelerated computing. The NVIDIA H100 GPU delivers supercomputing-class performance through architectural innovations including fourth-generation Tensor Cores, a new Transformer Engine for accelerating LLMs and the latest NVLink technology that lets GPUs talk to each other at 900 GB/sec.

Report: ASUS to Start Production of GPUs With No External Power Connectors

We witnessed an exciting concept during the Computex 2023 show in late May. ASUS has developed a GPU without an external power connector, using a connection called GC_HPWR. Unlike current solutions, this connection type doesn't require additional cables. With GC_HPWR, power is supplied directly from the motherboard, which means these special-edition GPUs also require special-edition motherboards. Thanks to the latest information from Bilibili content creator Eixa Studio, who attended the Bilibili World 2023 exhibition in Shanghai, China, we now know that ASUS is preparing mass production of these zero-cable GPU solutions. Scheduled to enter mass production in the fall, ASUS plans to deliver these GPUs and the accompanying motherboards before the year ends.

Additionally, it is worth noting that the motherboard lineup is called Back To Future (BTF), and the first GPU showcased was the GeForce RTX 4070 Megalodon. The PSU connectors are placed on the back side of the BTF board, while the GC_HPWR connector sits right next to the PCIe x16 expansion slot and looks like a PCIe x1 connector. You can see images of both products below.

Intel Arc A580 GPU Reportedly Appears in GFXBench Database

The Intel Arc A580 GPU was revealed alongside its Alchemist siblings—A380, A750 and A770—last year, but remains the only one out of that lineup to not have reached the retail market. Things have been quiet on the Intel Arc 5-series "Advanced Gaming" front for a while now—TechPowerUp's GPU-Z utility was updated with support for the A580 last September, and an evaluation sample was benched in Ashes of the Singularity a month prior to that. A supposed sample Intel Arc A580 was recently tested via a Vulkan-based renderer in GFXBench 5.0, perhaps not the best platform to gauge PC performance on.

Has an owner of a rare curiosity unit chosen to bench the unreleased GPU, or is a manufacturer evaluating a sample with a very delayed product launch in mind? The test results are not all that impressive, with the A580 performing poorly compared to the range-topping Arc A770 (placed in Intel's "high performance gaming" tier), although it does much better than the A380 (not a big boast). The likely prototype nature of the evaluated card or immature state of drivers could be to blame for shortcomings in GFXBench 5.0.

MSI Arc A310 Low Profile 2X Graphics Card Unboxed in YouTube Short

An MSI Arc A310 4 GB Low Profile 2X graphics card is now available to purchase at a couple of Russian PC hardware retailers (for roughly $140), and a local YouTuber has already produced a quick unboxing video. Intel's board partners have been sluggish in adopting lower-end Alchemist GPU variants, with only a handful of companies bothering to produce cards based on the Arc A310 (DG2-128).

Sparkle displayed an unnamed super compact model at Computex 2023, and presentation slides have recently revealed that their "Industrial Low-Profile" series will be introduced under the GENIE moniker, comprised of single slot/fan A310 and A380 variants. Sparkle has not outlined a possible Western release for these budget cards, and other manufacturers have been reluctant to move beyond the Chinese OEM market. The Gigabyte A3 series is reportedly a Russian market exclusive for the moment, so it will be interesting to see if MSI is only targeting that territory with its competing dual fan design Arc A310 4 GB Low Profile model.

Cerebras and G42 Unveil World's Largest Supercomputer for AI Training with 4 ExaFLOPS

Cerebras Systems, the pioneer in accelerating generative AI, and G42, the UAE-based technology holding group, today announced Condor Galaxy, a network of nine interconnected supercomputers, offering a new approach to AI compute that promises to significantly reduce AI model training time. The first AI supercomputer on this network, Condor Galaxy 1 (CG-1), has 4 exaFLOPS and 54 million cores. Cerebras and G42 are planning to deploy two more such supercomputers, CG-2 and CG-3, in the U.S. in early 2024. With a planned capacity of 36 exaFLOPS in total, this unprecedented supercomputing network will revolutionize the advancement of AI globally.

"Collaborating with Cerebras to rapidly deliver the world's fastest AI training supercomputer and laying the foundation for interconnecting a constellation of these supercomputers across the world has been enormously exciting. This partnership brings together Cerebras' extraordinary compute capabilities, together with G42's multi-industry AI expertise. G42 and Cerebras' shared vision is that Condor Galaxy will be used to address society's most pressing challenges across healthcare, energy, climate action and more," said Talal Alkaissi, CEO of G42 Cloud, a subsidiary of G42.

Supermicro Adds 192-Core ARM CPU Based Low Power Servers to Its Broad Range of Workload Optimized Servers and Storage Systems

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing several new servers, adding to its already broad application-optimized product line. These new servers incorporate the new AmpereOne CPU, with up to 192 single-threaded cores and up to 4 TB of memory capacity. Applications such as databases, telco edge, web servers, caching services, media encoding, and video game streaming will benefit from increased cores, faster memory access, higher performance per watt, scalable power management, and the new cloud security features. Additionally, Cloud Native microservice-based applications will benefit from the lower latencies and power usage.

"Supermicro is expanding our customer choices by introducing these new systems that incorporate the latest high core count CPUs from Ampere Computing," said Michael McNerney, vice president of Marketing and Security, Supermicro. "With high core counts, predictable latencies, and up to 4 TB of memory, users will experience increased performance for a range of workloads and lower energy use. We continue to design and deliver a range of environmentally friendly servers that give customers a competitive advantage for various applications."

Lenovo Expands Latest ThinkPad Mobile Workstations to Include AMD Ryzen PRO 7040 Series Mobile Processors

Today, Lenovo unveiled the newest additions to its ThinkPad mobile workstation portfolio. Powered by the latest AMD Ryzen PRO 7040 Series Mobile processors with optional NVIDIA RTX professional graphics, the new ThinkPad P16v, P16s Gen 2 and P14s Gen 4 complement the models announced in May 2023, offering customers a broad choice in mobile workstation PC solutions. ThinkPad P Series devices deliver breakthrough performance, premium design, and durability for demanding workflows across a variety of price points and include support for Windows 11 and several flavors of Linux. Bringing advanced and power-efficient processors with AMD PRO technologies and Ryzen AI on select models opens up an enhanced world of AI-driven features for advanced collaboration on ThinkPad mobile workstations.

"Our latest workstations are designed to help our customers make a difference and drive a positive long-lasting impact in their fields, whether it's research and design, engineering and finance, media and entertainment, healthcare and education, or anything else. We are committed to delivering human-centric innovations that empower our customers to unleash their potential with ThinkPad mobile workstations," said Rob Herman, VP and GM, Worldwide Workstation and Client AI Business at Lenovo.

ThundeRobot Packs a 13th Gen Core Processor and RTX 4060 in 1.7 Liter Chassis

ThundeRobot, a major player in China's laptop market, is set to release a new PC console, the MIX, which shares striking similarities with Alienware's bygone Steam Machine. The console, equipped with Intel's 13th Gen Core CPU and NVIDIA's RTX 4060 GPU, is set to debut on July 21st, predominantly targeting the Chinese market. Though not as familiar a brand outside Asia, ThundeRobot enjoys a significant market share in the region as the third-largest supplier of consumer notebooks and gaming peripherals. Its product catalog rivals brands like Asus and Razer, with offerings spanning custom-branded gaming notebooks to gaming monitors, keyboards, mice, and controllers.

The upcoming MIX console boasts a compact size, nearly 60% smaller than an Xbox Series S, at only 1.7 liters. While it remains uncertain whether the console's RTX 4060 GPU is a mobile or desktop variant, ThundeRobot says it will feature one of Intel's new 13th Gen Raptor Lake HX-series mobile CPUs. The console's matte black finish and triangular front-right indentation echo the design of Alienware's Steam Machine, suggesting that ThundeRobot may have drawn some inspiration from the Alienware console PC. Priced at around 6,000 Yuan, approximately $830, the compact yet potent MIX console is expected to launch soon in China, with no current plans for release in the United States.

Sparkle Embracing Arc A380 & A310 GPUs with Low-Profile GENIE Series

Sparkle presented a pair of custom Intel Arc A380 and A310 cards at last month's Computex expo—reaffirming its commitment to presenting the full lineup of Arc GPUs. It is now reported that these "Industrial Low-Profile" cooled units will form the company's "GENIE" series. Sparkle's triple-fan TITAN series is comprised of Arc A770 and A750 GPUs, while the dual-fan ORC is formed solely of an A750. ELF is a single-fan A380 design.

The aforementioned GENIE models are both single-slot designs with single fans and low-profile shrouds that only cover part of the PCB (comparable to the reference card). The A380 unit offers 8 Xe-Cores with 6 GB of GDDR6 memory on a 96-bit bus, while the lesser A310 gets 6 Xe-Cores and 4 GB of GDDR6 on a 64-bit bus. The leaked presentation slide does not show any release date information, but reasonably final-looking hardware making an appearance at Computex 2023 suggests that the GENIE series is not too far off from reaching retail.

NVIDIA Espouses Generative AI for Improved Productivity Across Industries

A watershed moment on Nov. 22, 2022, was mostly virtual, yet it shook the foundations of nearly every industry on the planet. On that day, OpenAI released ChatGPT, the most advanced artificial intelligence chatbot ever developed. This set off demand for generative AI applications that help businesses become more efficient, from providing consumers with answers to their questions to accelerating the work of researchers as they seek scientific breakthroughs, and much, much more.

Businesses that previously dabbled in AI are now rushing to adopt and deploy the latest applications. Generative AI—the ability of algorithms to create new text, images, sounds, animations, 3D models and even computer code—is moving at warp speed, transforming the way people work and play. By employing large language models (LLMs) to handle queries, the technology can dramatically reduce the time people devote to manual tasks like searching for and compiling information.

Acer Prepping Radeon RX 7600 GPU for Predator BiFrost Series

According to information and images released by Xfastest, Acer seems to be preparing a new trio of Predator BiFrost custom cards. The series is currently limited to a single factory overclocked model, based on Intel's Arc A770 16 GB GPU. One of the new cards seems to be a cheaper (~$258) A750 8 GB BiFrost model, so Acer's Alchemist ACM-G10 GPU variant lineup is welcoming one addition.

Acer is also embracing RDNA 3 courtesy of AMD, although graphics card enthusiasts could see the introduction of two new Predator BiFrost models based on Radeon RX 7600 8 GB GPU as less than exciting prospects. The leaked photos seem to show a cooler design that lacks ARGB around the two cooling fans—budget friendly pricing (~$290 for the overclocked model, and ~$258 for non-OC) suggests that fancy livery is not so important in the low-to-mid-range tier.

Intel Brings Gaudi2 Accelerator to China, to Fill Gap Created By NVIDIA Export Limitations

Intel has responded to the high demand for advanced chips in mainland China by bringing its processor, the Gaudi2, to the market. This move comes as the country grapples with US export restrictions, leading to a thriving market for smuggled NVIDIA GPUs. At a press conference in Beijing, Intel presented the Gaudi2 processor as an alternative to NVIDIA's A100 GPU, widely used for training AI systems. Despite US export controls, Intel recognizes the importance of the Chinese market, with 27 percent of its 2022 revenue generated from China. NVIDIA has also tried to comply with restrictions by offering modified versions of its GPUs, but limited supplies have driven the demand for smuggled GPUs. Intel's Gaudi2 aims to provide Chinese companies with various hardware options and bolster their ability to deploy AI through cloud and smart-edge technologies. By partnering with Inspur Group, a major AI server manufacturer, Intel plans to build Gaudi2-powered machines tailored explicitly for the Chinese market.

China's AI ambitions face potential challenges as the US government considers restricting Chinese companies' access to American cloud computing services. This move could impede the utilization of advanced AI chips by major players like Amazon Web Services and Microsoft for their Chinese clients. Additionally, there are reports of a potential expansion of the US export ban to include NVIDIA's A800 GPU. As China continues to push forward with its AI development projects, Intel's introduction of the Gaudi2 processor helps meet the country's demand for advanced chips. Balancing export controls and technological requirements within this complex trade landscape remains a crucial task for both companies and governments involved in the Chinese AI industry.

No Official Review Program for NVIDIA GeForce RTX 4060 Ti 16 GB Cards

NVIDIA is reported to be taking a hands off approach prior to the launch of its GeForce RTX 4060 Ti 16 GB GPU next week—rumored to take place on July 18. Murmurs from last week posited that add-in card (AIC) partners were not all that confident in the variant's prospects, with very little promotional activity lined up. NVIDIA itself is not releasing a Founders Edition GeForce RTX 4060 Ti 16 GB model, so it will be relying on board partners to get custom design units sent out to press outlets/reviewers. According to Hardware Unboxed, as posted on Twitter earlier today, no hardware will be distributed to the media: "Now there's no official review program for this model, there will be no FE version and it seems that NVIDIA and their partners really don't want to know about it. Every NVIDIA partner I've spoken to so far has said they won't be providing review samples, and they're not even sure when their model will be available."

Their announcement continued: "So I don't know when you'll be able to view our review, but I will be buying one as soon as I can. I expect coverage will be pretty thin and that's probably the plan, the release strategy here is similar to that of the RTX 3080 12 GB." TPU can confirm that test samples have not been sent out by NVIDIA's board partners, so a retail unit will be purchased (out of pocket) for reviewing purposes. Previous reports have theorized that not many custom models will be available at launch, with the series MSRP of $499 not doing it many favors in terms of buyer interest. MSI has prepared a new white GAMING X design for the 16 GB variant, so it is good to see at least one example of an AIB putting the effort in...but it would be nice to get a press sample.

Imagination GPUs Gain OpenGL 4.6 Support

When it comes to APIs, OpenGL is something of a classic. According to the Khronos Group, OpenGL is the most widely adopted 2D and 3D graphics API. Since its launch in 1992 it has been used extensively by software developers for PCs and workstations to create high-performance, visually compelling graphics applications for markets such as CAD, content creation, entertainment, game development and virtual reality.

To date, Imagination GPUs have natively supported OpenGL up to Release 3.3, as well as OpenGL ES (the version of OpenGL for embedded systems), Vulkan (a cross-platform graphics API) and OpenCL (an API for parallel programming). However, thanks to the increasing performance of our top-end GPUs, especially the likes of the DXT-72-2304, they present a competitive offering to the data centre and desktop (DCD) market. Indeed, we have multiple customers - including the likes of Innosilicon - choosing Imagination GPUs for the flexibility of an IP solution, their scalability and their ability to offer up to 6 TFLOPS of compute.

Two-ExaFLOP El Capitan Supercomputer Starts Installation Process with AMD Instinct MI300A

When Lawrence Livermore National Laboratory (LLNL) announced the creation of a two-ExaFLOP supercomputer named El Capitan, we heard that AMD would power it with its Instinct MI300 accelerator. Today, LLNL published a Tweet that states, "We've begun receiving & installing components for El Capitan, @NNSANews' first #exascale #supercomputer. While we're still a ways from deploying it for national security purposes in 2024, it's exciting to see years of work becoming reality." As published images show, HPE racks filled with AMD Instinct MI300 accelerators are now showing up at LLNL's facility, and the supercomputer is expected to go operational in 2024. This could mean that the November 2023 TOP500 list update won't feature El Capitan, as system enablement would be very hard to achieve in the four months until then.

The El Capitan supercomputer is expected to run on the AMD Instinct MI300A accelerator, which features 24 Zen 4 cores, the CDNA3 architecture, and 128 GB of HBM3 memory. Four of these accelerators are paired together inside each HPE node, which also gets water-cooling treatment. While we don't have many further details on the memory and storage of El Capitan, we know that the system will exceed two ExaFLOPS at peak and will consume close to 40 MW of power.
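For rough context, those two figures imply about 50 GFLOPS per watt at peak. A quick sketch of that arithmetic, using only the numbers quoted above:

```python
# Back-of-the-envelope efficiency estimate for El Capitan,
# from the peak-performance and power figures quoted above.
PEAK_FLOPS = 2e18    # 2 ExaFLOPS at peak
POWER_WATTS = 40e6   # close to 40 MW

gflops_per_watt = PEAK_FLOPS / POWER_WATTS / 1e9
print(f"{gflops_per_watt:.0f} GFLOPS/W")  # → 50 GFLOPS/W
```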

ASRock Adds A380 Low Profile 6 GB Graphics Card to its Arc Lineup

ASRock has added another Arc model to its small selection of Intel graphics cards—this time in low profile form. The entry level A380 GPU is well suited for this narrow (zero dB/silent) dual fan cooling solution due to its diminutive 75 W TDP rating. ASRock has stayed in the safe zone by sticking with the default base clock of 2.0 GHz, as opposed to the sibling Challenger ITX 6 GB OC model's slightly more ambitious 2.25 GHz.

The specifications are typical A380—you get 6 GB of GDDR6 VRAM on a 96-bit memory bus, granting bandwidth of up to 186 GB/s (memory is clocked at 15.5 Gbps), although the selection of ports has been reduced due to the card's small stature: only single DisplayPort 2.0 and HDMI 2.0b connections here. ASRock's product page for its Arc A380 Low Profile model includes the usual yammering about the GPU's "next-gen gaming" capabilities thanks to Intel's Xe Super Sampling (XeSS) technology, but the card is better suited for compact budget builds and users who require a decent level of AV1 encoding (for the price—not announced at the time of writing).
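The quoted 186 GB/s follows directly from the bus width and the per-pin data rate: peak bandwidth is the bus width in bits times the data rate, divided by eight to convert bits to bytes. A minimal sketch of that calculation:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bits x per-pin Gbps) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# Arc A380: 96-bit bus, 15.5 Gbps GDDR6
print(peak_bandwidth_gb_s(96, 15.5))  # → 186.0
```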

Intel Developing Efficient Solution for Path Tracing on Integrated GPUs

Intel's software engineers are working on path-traced light simulation and conducting neural graphics research, as documented in a recent company news article, with an ambition to create a more efficient solution for integrated graphics cards. The company's Graphics Research Organization is set to present their path-traced optimizations at SIGGRAPH 2023. Their papers have been showcased at recent EGSR and HPG events. The team is aiming to get iGPUs running path-tracing in real time, by reducing the number of calculations required to simulate light bounces.

The article covers three different techniques, all designed to improve GPU performance: "Across the process of path tracing, the research presented in these papers demonstrates improvements in efficiency in path tracing's main building blocks, namely ray tracing, shading, and sampling. These are important components to make photorealistic rendering with path tracing available on more affordable GPUs, such as Intel Arc GPUs, and a step toward real-time performance on integrated GPUs." Although there is an emphasis on in-house products in the article, Intel's "open source-first mindset" hints that their R&D could be shared with others—NVIDIA and AMD are likely still struggling to make ray tracing practical on their modest graphics card models.

Adlink's Next-Gen IPC Strives to Revolutionize Industry Use Cases at the Edge

ADLINK Technology Inc., a global leader in edge computing and a Titanium member of the Intel Partner Alliance, is proud to announce the launch of its latest MVP Series fanless modular computers—the MVP-5200 Compact Modular Industrial Computers and MVP-6200 Expandable Modular Industrial Computers—powered by 12/13th Gen Intel Core i9/i7/i5/i3 and Celeron processors. Featuring the Intel R680E chipset and supporting processors up to 65 W, the computers can also incorporate GPU cards in a rugged package suitable for AI inferencing at the edge, and can be used for, but are not limited to, smart manufacturing, semiconductor equipment, and warehouse applications.

The MVP-5200/MVP-6200 series, though expandable, remains compact, with support for up to four PCI/PCIe slots that allow for performance acceleration through GPUs, accelerators, and other expansion cards. Comprehensive modularized options and ease of configuration can effectively reduce lead times for customers' diverse requirements. In addition, ADLINK also offers a broad range of pre-validated expansion cards, such as GPU, motion, vision, and I/O embedded cards, all of which can be easily deployed for industrial applications.

Gigabyte Launches the GeForce RTX 4070 Ti Series Water-Cooled Graphics Cards

GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of premium gaming hardware, today launches GeForce RTX 4070 Ti series water-cooled graphics cards powered by the NVIDIA Ada Lovelace architecture. GIGABYTE launches two AORUS WATERFORCE graphics cards—the AORUS GeForce RTX 4070 Ti XTREME WATERFORCE 12G and the AORUS GeForce RTX 4070 Ti XTREME WATERFORCE WB 12G. Both graphics cards are equipped with top-of-the-line GPU cores that have overclocking capabilities and are certified by GIGABYTE GPU Gauntlet sorting technology. AORUS provides an all-around cooling solution for all key components of the graphics card. In addition to cooling the GPU, AORUS also takes care of the VRAM and MOSFETs, ensuring stable overclocking performance and enhanced durability.

The AORUS WATERFORCE graphics cards feature RGB Fusion, a protective metal backplate, aerospace-grade PCB coating for dust and moisture protection, Ultra-Durable top-grade components, and an extended warranty for registered members. These features make the AORUS WATERFORCE graphics cards the best choice for enthusiasts who desire both silent operation and high performance.

Oracle to Spend Billions on NVIDIA Data Center GPUs, Even More on Ampere & AMD CPUs

Oracle founder and Chairman Larry Ellison last week announced a substantial spending spree on new equipment as he prepares his business for a cloud computing service expansion that will be aimed at attracting a "new wave" of artificial intelligence (AI) companies. He made this announcement at a recent Ampere event: "This year, Oracle will buy GPUs and CPUs from three companies...We will buy GPUs from NVIDIA, and we're buying billions of dollars of those. We will spend three times that on CPUs from Ampere and AMD. We still spend more money on conventional compute." His cloud division is said to be gearing up to take on larger competition—namely Amazon Web Services and Microsoft Corp. Oracle is hoping to outmaneuver these major players by focusing on the construction of fast networks, capable of shifting around huge volumes of data—the end goal being the creation of its own ChatGPT-type model.

Ellison expressed that he was leaving Team Blue behind—Oracle has invested heavily in Ampere Computing, a startup founded by ex-Intel folks: "It's a major commitment to move to a new supplier. We've moved to a new architecture...We think that this is the future. The old Intel x86 architecture, after many decades in the market, is reaching its limit." Oracle's database software has been updated to run on Ampere's Arm-based chips, and Ellison posits that these grant greater power efficiency when compared to AMD and NVIDIA enterprise processors. There will be some reliance on "x86-64" going forward, since Oracle's next-gen Exadata X10M platform was recently announced with the integration of Team Red's EPYC 9004 series processors—a company spokesman stated that these server CPUs offer higher core counts and "extreme scale and dramatically improved price performance" when compared to older Intel Xeon systems.

Inflection AI Builds Supercomputer with 22,000 NVIDIA H100 GPUs

The AI hype continues to push hardware shipments, especially for servers with GPUs that are in very high demand. Another example is the latest feat of AI startup Inflection AI. Building foundational AI models, the Inflection AI crew has secured an order of 22,000 NVIDIA H100 GPUs and is building a supercomputer. Assuming a configuration of a single Intel Xeon CPU with eight GPUs per node, almost 700 four-node racks should go into the supercomputer. Scaling and connecting 22,000 GPUs is easier than acquiring them, as NVIDIA's H100 GPUs are selling out everywhere due to the enormous demand for AI applications both on and off premises.
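The node and rack counts above can be reproduced from the stated assumptions (eight GPUs per node, four nodes per rack):

```python
import math

GPU_COUNT = 22_000
GPUS_PER_NODE = 8    # assumed single-Xeon, eight-GPU node configuration
NODES_PER_RACK = 4

nodes = GPU_COUNT // GPUS_PER_NODE         # 2,750 nodes
racks = math.ceil(nodes / NODES_PER_RACK)  # 688 racks, i.e. "almost 700"
print(nodes, racks)  # → 2750 688
```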

Getting 22,000 H100 GPUs is the biggest challenge here, and Inflection AI managed to get them by having NVIDIA as an investor in the startup. The supercomputer is estimated to cost around one billion USD and consume 31 megawatts of power. The Inflection AI startup is valued at 1.5 billion USD at the time of writing.

NVIDIA GeForce GTX 1650 is Still the Most Popular GPU in the Steam Hardware Survey

NVIDIA GeForce GTX 1650 was released more than four years ago. With its TU117 graphics processor, it features 896 CUDA cores, 56 texture mapping units, and 32 ROPs. NVIDIA has paired 4 GB of GDDR5 memory with the GeForce GTX 1650, which is connected using a 128-bit memory interface. Interestingly, according to the latest Steam Hardware Survey results, this GPU still remains the most popular choice among gamers. While the exact number of survey participants is unknown, it is fair to assume that a large group takes part every month. The latest numbers for June 2023 indicate that the GeForce GTX 1650 is still the number one GPU, with 5.50% of users having that GPU. The second closest was the GeForce RTX 3060, at 4.60%.

Other information in the survey remains similar, with CPUs mostly ranging from 2.3 GHz to 2.69 GHz in frequency and with six cores and twelve threads. Storage also recorded a small bump with capacity over 1 TB surging 1.48%, indicating that gamers are buying larger drives as game sizes get bigger.

AI and HPC Demand Set to Boost HBM Volume by Almost 60% in 2023

High Bandwidth Memory (HBM) is emerging as the preferred solution for overcoming memory transfer speed restrictions due to the bandwidth limitations of DDR SDRAM in high-speed computation. HBM is recognized for its revolutionary transmission efficiency and plays a pivotal role in allowing core computational components to operate at their maximum capacity. Top-tier AI server GPUs have set a new industry standard by primarily using HBM. TrendForce forecasts that global demand for HBM will experience almost 60% growth annually in 2023, reaching 290 million GB, with a further 30% growth in 2024.
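Those growth figures compound straightforwardly: a further 30% on 2023's forecast 290 million GB lands at roughly 377 million GB in 2024. A quick sketch, using only the forecast numbers above:

```python
hbm_2023_gb = 290e6   # 290 million GB forecast for 2023
growth_2024 = 0.30    # a further ~30% growth expected in 2024

hbm_2024_gb = hbm_2023_gb * (1 + growth_2024)
print(f"{hbm_2024_gb / 1e6:.0f} million GB")  # → 377 million GB
```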

For 2025, TrendForce forecasts that—taking into account five large-scale AIGC products equivalent to ChatGPT, 25 mid-size AIGC products similar to Midjourney, and 80 small AIGC products—the minimum computing resources required globally could range from 145,600 to 233,700 NVIDIA A100 GPUs. Emerging technologies such as supercomputers, 8K video streaming, and AR/VR, among others, are expected to simultaneously increase the workload on cloud computing systems due to escalating demands for high-speed computing.

ASUS has a GeForce RTX 4060 Ti Card with an M.2 SSD Slot

ASUS Chinese GM Tony Yu has shown off a graphics card concept on Bilibili with a rather unusual feature: a slot for an M.2 NVMe SSD. The card is based on NVIDIA's GeForce RTX 4060 Ti GPU, and although not all details are clear at this point, ASUS is taking advantage of the unused PCIe lanes on the card: since the AD106 GPU only uses eight PCIe lanes, the PCIe connector on the card has space for a further eight. In theory ASUS could have added a pair of SSDs, since there are a total of eight spare lanes available, but as this was just a proof of concept, the company seemingly stuck with a single SSD.

It's unclear if ASUS relies on bifurcation or if the company has added some kind of bridge chip, but bifurcation makes more sense, as a bridge chip would add a lot more cost. The neat thing with the NVMe drive being on the GPU is that it also connects to the heatsink of the graphics card, which means the cooling should be rather good. However, for this to work properly, the SSD has to be mounted back to front compared to how it would sit on a motherboard. Based on the test results, the SSD runs at a cool 42 degrees C, even when the GPU is being stress tested. It's likely that this product will not make it to markets outside of China, if it's ever launched into retail.
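The lane budget described above is simple to tally: a x16 slot minus the GPU's x8 link leaves eight lanes, enough for two drives in theory. A sketch, assuming the usual x4 link per NVMe SSD:

```python
SLOT_LANES = 16   # full-length PCIe x16 slot
GPU_LANES = 8     # the AD106 GPU uses only a x8 link
SSD_LANES = 4     # a typical NVMe SSD uses a x4 link (assumption)

spare_lanes = SLOT_LANES - GPU_LANES  # 8 lanes left over
max_ssds = spare_lanes // SSD_LANES   # room for 2 drives in theory
print(spare_lanes, max_ssds)  # → 8 2
```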