News Posts matching #GPU


NVIDIA Seemingly Producing Yet Another GTX 1650 Variant Based on TU116

NVIDIA's GTX 1650 has already seen more action and revisions within its own generation than most GPUs ever have in the history of graphics cards, with NVIDIA having updated not only its memory (from 4 GB of GDDR5 with 128 GB/s of bandwidth to 4 GB of GDDR6 good for 192 GB/s), but also carving up different silicon chips to bring the same part to market. The original GTX 1650 made use of NVIDIA's TU117 chip with 896 CUDA cores, and was then joined by the TU116-based GTX 1650 SUPER, which mightily increased the execution-unit count (1,280 CUDA cores) and memory bandwidth (192 GB/s courtesy of 12 Gbps GDDR6). There was also a TU106-based GTX 1650, which was just bonkers: a chip originally designed for the RTX 2060, repurposed and cut down.

Now, yet another TU116 variant is available, which NVIDIA carved down from its GTX 1650 SUPER chips. This version goes back to the original release's 896 CUDA cores and 128-bit bus, whilst keeping the GDDR6 memory ticking at 12 Gbps and clocks set at a 1410 MHz base and 1590 MHz boost. The card achieves feature parity with the TU106-based GTX 1650, but trades the outlandish 445 mm² TU106 die for the much more svelte 284 mm² TU116 one. NVIDIA seems to be doing what it can to clean house of any and all leftover chips in preparation for its next-gen release - consumer confusion be damned.
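For reference, the bandwidth figures quoted above fall directly out of bus width and per-pin data rate; here is a minimal sketch of that arithmetic (the 8 Gbps GDDR5 rate of the original card is a standard spec rather than something stated above):

```python
# Peak memory bandwidth (GB/s) = bus width in bits / 8 * per-pin data rate in Gbps.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(128, 8))   # GDDR5 GTX 1650 -> 128.0 GB/s
print(bandwidth_gb_s(128, 12))  # GDDR6 GTX 1650 -> 192.0 GB/s
```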

Zhaoxin to Design Discrete GPUs

Zhaoxin, the Chinese chip-maker famous for its Kaixian line of x86 processors, and a major beneficiary of the Chinese government's ambitious "3-5-2" plan of public investment toward the country's computer hardware independence by the mid-2020s, has unveiled plans to design its first discrete GPUs, which could double as scalar-compute and AI processors. The company's baby step is a tiny 70 W dGPU fabricated on TSMC's 28 nm silicon fabrication process, which will likely serve as a tech demonstrator and development platform for ISVs. The dGPU is largely expected to derive from VIA's S3 Graphics IP, as VIA has collaborated with Zhaoxin as the iGPU provider for its Kaixian line of x86 SoCs.

Qualcomm Announces Snapdragon 865 Plus 5G Mobile Platform, Breaking the 3 GHz Barrier

Qualcomm Technologies, Inc. unveiled the Qualcomm Snapdragon 865 Plus 5G Mobile Platform, a follow-on to the flagship Snapdragon 865 that has powered more than 140 devices (announced or in development) - the most individual premium-tier designs powered by a single mobile platform this year. The new Snapdragon 865 Plus is designed to deliver increased performance across the board for superior gameplay and insanely fast Qualcomm Snapdragon Elite Gaming experiences, truly global 5G, and ultra-intuitive AI.

"As we work to scale 5G, we continue to invest in our premium tier, 8-series mobile platforms, to push the envelope in terms of performance and power efficiency and deliver the next generation of camera, AI and gaming experiences," said Alex Katouzian, senior vice president and general manager, mobile, Qualcomm Technologies, Inc. "Building upon the success of Snapdragon 865, the new Snapdragon 865 Plus will deliver enhanced performance for the next wave of flagship smartphones."

NVIDIA GeForce RTX 3070 and RTX 3070 Ti Rumored Specifications Appear

NVIDIA is slowly preparing to launch its next-generation Ampere graphics cards for consumers, after the A100 GPU arrived for data-centric applications. The Ampere lineup is getting more leaks and speculation every day, so we can assume that the launch is near. In the most recent round of rumors, we have some new information about the GPU SKUs and memory of the upcoming GeForce RTX 3070 and RTX 3070 Ti. Thanks to Twitter user kopite7kimi, who has made multiple accurate predictions in the past, we have information that the GeForce RTX 3070 and RTX 3070 Ti use the GA104 GPU, paired with GDDR6 memory. The catch is that the Ti version will feature new GDDR6X memory, which is faster and can reportedly reach up to 21 Gbps.

The regular RTX 3070 is supposed to have 2,944 CUDA cores on the GA104-400 GPU die, while its bigger brother, the RTX 3070 Ti, is designed with 3,072 CUDA cores on the GA104-300 die. Paired with the new technologies the Ampere architecture brings, and with the new GDDR6X memory, the GPUs are set to be very good performers. It is estimated that both cards would reach a memory bandwidth of 512 GB/s. So far, that is all we have. NVIDIA is reportedly in the Design Validation Test (DVT) phase with these cards and is preparing for mass production in August. Following that comes the official launch, which should happen before the end of this year, with some speculation pointing to September.
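As a quick sanity check on the rumored 512 GB/s figure, and assuming GA104 keeps a 256-bit memory bus (an assumption - the leak does not state the bus width), the implied per-pin data rate works out as follows:

```python
# Implied per-pin data rate for 512 GB/s on an assumed 256-bit bus.
bus_width_bits = 256                          # assumption -- not part of the leak
bandwidth_gb_s = 512
print(bandwidth_gb_s * 8 / bus_width_bits)    # 16.0 Gbps

# For comparison, GDDR6X at its reported 21 Gbps ceiling on the same bus:
print(bus_width_bits / 8 * 21)                # 672.0 GB/s
```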

MediaTek Introduces Helio G35 & G25 Gaming Series Chipsets

MediaTek, the world's fourth-largest fabless semiconductor company, today launched the newest chips in its smartphone gaming-focused G series - the MediaTek Helio G25 and G35. The latest chips feature MediaTek HyperEngine game technology for faster, smoother performance, enhanced power efficiency, and brilliant graphics.

The new chipsets always keep you connected and deliver the lowest latency gaming experience. They also offer enhanced imaging features, making these G-series chipsets a perfect fit for photography enthusiasts and gamers alike.

ASRock Launches Radeon RX 5600 XT Challenger Pro 6G OC Graphics Card

The leading global motherboard, graphics card, and mini-PC manufacturer ASRock has launched its new Radeon RX 5600 XT Challenger Pro 6G OC three-fan graphics card. The Radeon RX 5600 XT Challenger Pro 6G OC features ASRock's newly styled shroud design with upgraded cooling fins, AMD's second-generation 7 nm Radeon RX 5600 XT GPU, 6 GB of 192-bit GDDR6 memory, and a PCI Express 4.0 bus. The card's generous factory overclock lets users enjoy a smooth 1080p gaming experience.

The ASRock Radeon RX 5600 XT Challenger Pro 6G OC adopts AMD's second-generation 7 nm Radeon RX 5600 XT GPU. At factory default settings, the card reaches base/game/boost clocks of 1420/1615/up to 1750 MHz respectively; the boost clock is 4% higher than AMD's reference setting. Furthermore, the GDDR6 memory clock is set at 1750 MHz, 17% faster than AMD's default value of 1500 MHz. The card is equipped with a three-fan cooler, 6 GB of 192-bit GDDR6 memory, and the latest PCI Express 4.0 bus standard, making it an ideal partner for AMD Ryzen 3000 CPUs and ASRock B550 and X570 motherboards. These premium specifications give the Radeon RX 5600 XT Challenger Pro 6G OC outstanding performance and deliver an excellent 1080p gaming experience.

European Hardware Awards Announced; AMD CPU and GPU Division Wins Big

The European Hardware Association (EHA), comprising the nine largest independent technology news and review websites on the continent, has announced its hardware winners for 2020, and AMD has swept its competition by virtually every metric, whether you're talking about the GPU or the CPU side of the equation. AMD's CPU division shut out Intel's offerings entirely, with not a single Intel CPU credited with a prize. AMD's Ryzen 3000 series won the most coveted prize, the "Product of the Year" award. The Ryzen 3000 chiplet design itself won the EHA "Best Technology" award; more specifically, AMD's Ryzen 9 3950X took home the "Best CPU" prize, the Ryzen 5 3600 won "Best Gaming Product", and the Ryzen 3 3300X won "Best Overclocking Product".

But AMD didn't stop at the CPU categories, besting even rival NVIDIA on the GPU side of the equation. AMD's Navi 10 GPU, used in the Radeon RX 5700 series, won the "Best GPU" category, while the "Best AMD-based Graphics Card" award went to the Sapphire RX 5700 XT Nitro+ (the ASUS ROG Strix GeForce RTX 2080 Ti OC won the "Best NVIDIA Graphics Card" category). Another AMD-powered design won the "Best Gaming Notebook" award - ASUS' ROG Zephyrus G14, which packs AMD's mobile Renoir CPU.

AMD Preparing Additional Ryzen 4000G Renoir series SKUs, Ryzen 7 Pro 4750G Benchmarked

The AMD Ryzen 4000 series of desktop APUs is set for a quiet launch next month. We had expected the launch to cover only a few models, ranging from Ryzen 3 to Ryzen 7 level, meaning configurations anywhere from 4C/8T to 8C/16T. Initially, thanks to all the leaks, we expected to see six models (listed in the table below); however, a new discovery suggests we could be looking at even more SKUs in the Renoir family of APUs. The table mentions new entries aimed at both consumer and pro-grade users, which suggests AMD will probably launch both editions, possibly on the same day. We are not sure if that is the case; for now it is just speculation.
AMD Ryzen 4000G Renoir SKUs

New AMD Radeon Pro 5600M Mobile GPU Brings Desktop-Class Graphics Performance and Enhanced Power Efficiency to 16-inch MacBook Pro

AMD today announced availability of the new AMD Radeon Pro 5600M mobile GPU for the 16-inch MacBook Pro. Designed to deliver desktop-class graphics performance in an efficient mobile form factor, this new GPU powers computationally heavy workloads, enabling pro users to maximize productivity while on-the-go.

The AMD Radeon Pro 5600M GPU is built upon industry-leading 7 nm process technology and advanced AMD RDNA architecture to power a diverse range of pro applications, including video editing, color grading, application development, game creation and more. With 40 compute units and 8 GB of ultra-fast, low-power High Bandwidth Memory (HBM2), the AMD Radeon Pro 5600M GPU delivers superfast performance and excellent power efficiency in a single GPU package.
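To put the compute-unit figure in context: RDNA packs 64 stream processors per CU, so the shader count follows directly from the 40 CUs quoted above, and peak FP32 throughput follows once a boost clock is known (the clock is not given in the announcement, so it is left as a parameter in this sketch):

```python
# 40 RDNA compute units at 64 stream processors each; FP32 peak assumes
# 2 FLOPs (one fused multiply-add) per stream processor per clock.
def fp32_tflops(compute_units: int, clock_ghz: float, sp_per_cu: int = 64) -> float:
    shaders = compute_units * sp_per_cu        # 40 * 64 = 2560
    return shaders * 2 * clock_ghz / 1000

print(40 * 64)                # 2560 stream processors
print(fp32_tflops(40, 1.0))   # 5.12 TFLOPS at an illustrative 1.0 GHz boost clock
```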

ASUS Releases Polaris 12 Phoenix Radeon 550 Card

AMD debuted the Polaris architecture with the RX 400 series almost four years ago; since then, the company has released two new generations of graphics processors, Vega and Navi. It seems the Polaris architecture will live on a bit longer with the release of the ASUS Phoenix Radeon 550 2 GB, based on the Polaris 12 GPU.

This product may seem familiar, and that's because ASUS released the Phoenix Radeon RX 550 back in 2017. The new Phoenix Radeon 550 uses a different memory configuration of 2 GB GDDR5 / 64-bit / 6 Gbps, which is a significant step down from the 2/4 GB GDDR5 / 128-bit / 7 Gbps of the Phoenix Radeon RX 550, especially considering that card was released three years ago. This new card seems to have been available to OEMs for some time and is only now making its way to retail, hopefully at a cheap price.

AMD EPYC Processors Ecosystem Continues to Grow with Integration into New NVIDIA DGX A100

AMD today announced that the NVIDIA DGX A100, the third generation of the world's most advanced AI system, is the latest high-performance computing system featuring 2nd Gen AMD EPYC processors. Delivering 5 petaflops of AI performance, the elastic architecture of the NVIDIA DGX A100 enables enterprises to accelerate diverse AI workloads such as data analytics, training, and inference.

The NVIDIA DGX A100 leverages the high-performance capabilities, 128 cores, DDR4-3200 support, and PCIe 4.0 support of two AMD EPYC 7742 processors running at speeds of up to 3.4 GHz. The 2nd Gen AMD EPYC processor is the first and only current x86-architecture server processor to support PCIe 4.0, providing the leadership high-bandwidth I/O that is critical for high performance computing and for connections between the CPU and other devices like GPUs.
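For a sense of what DDR4-3200 support translates to in a system like this: 2nd Gen EPYC exposes eight memory channels per socket (a platform spec, not stated in the release above), so theoretical peak memory bandwidth for the DGX A100's two sockets works out roughly as follows:

```python
# DDR4 peak bandwidth = transfer rate (MT/s) * 8 bytes per transfer * channel count.
mt_s = 3200
channels_per_socket = 8    # 2nd Gen EPYC platform spec (context, not from the release)
sockets = 2

per_socket = mt_s * 8 * channels_per_socket / 1000    # GB/s
print(per_socket, per_socket * sockets)               # 204.8 GB/s per socket, 409.6 GB/s total
```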

AMD "Ryzen C7" Smartphone SoC Specifications Listed

Last year, Samsung and AMD announced a collaboration that promises to deliver smartphone chips with AMD RDNA 2 graphics at their heart. The collaboration is set to deliver its first products sometime at the beginning of 2021, when Samsung will likely use the new SoCs in its smartphones. In previous leaks, we found that the GPU inside this processor reportedly beats the competition from Qualcomm, with the AMD GPU compared against the Adreno 650. Today, however, we have more information about the new SoC, which is reportedly called the "Ryzen C7" smartphone SoC. A new submission to the mobile phone leaks website Slash Leaks has revealed a lot of new details.

The SoC looks like a beast. Manufactured on TSMC's 5 nm process, it features two "Gaugin Pro" cores based on the recently announced Arm Cortex-X1, two "Gaugin" cores based on the Arm Cortex-A78, and four cores based on the Arm Cortex-A55 - a standard big.LITTLE CPU layout typical for smartphones. The two Cortex-X1 cores run at 3 GHz, the two Cortex-A78 cores at 2.6 GHz, and the four little cores at 2 GHz. The GPU inside this piece of silicon is the truly amazing part: it features four cores of a custom RDNA 2-based design clocked at 700 MHz, which are reported to beat the Adreno 650 by 45% in performance measurements.

Intel Scores Another AMD Graphics Higher-up: Ali Ibrahim

To support its efforts to build a competitive consumer GPU lineup under the Xe brand, which Intel likes to call its "Odyssey," the company scored another higher-up from AMD, this time Ali Ibrahim. He joined Intel this month as a vice-president within the Architecture, Graphics and Software group, although the company didn't specify his responsibilities. "We are thrilled that Ali has joined Intel as Vice President, Platform Architecture and Engineering - dGPUs to be part of the exciting Intel Xe graphics journey," said an Intel spokesperson in a comment to CRN.

During his 13-year tenure at AMD, Ali Ibrahim was the chief architect of the company's cloud gaming and console SoC businesses, which provides valuable insight into Intel's breakneck efforts to build high-end discrete GPUs (something it has lacked for the past two decades). Intel is the only other company capable of building semi-custom chips for someone like Microsoft or Sony as the inventor of x86, provided it has a GPU that can match AMD's in the console space. Likewise, with gaming taking baby steps toward the cloud and big players such as Google betting on it, Intel sees an opportunity for cloud gaming GPUs that aren't too different from its "Ponte Vecchio" scalar processors. The transfer of talent isn't one-way, either: AMD recently bagged Intel's server processor lead Dan McNamara to head the EPYC brand.

NVIDIA Announces Quadro Experience

Experience matters. And with NVIDIA Quadro Experience—a new application for Quadro GPUs—professionals across industries can boost their creativity and increase their productivity like never before.

Quadro Experience, available now, helps professionals simplify time-consuming tasks, streamline workflows, and ensure their favorite applications always have the latest updates. NVIDIA Quadro Experience also makes sharing content easier by providing screen capture and desktop recording in 4K, so teams can easily upload content and even broadcast their work directly from their desktop or laptop.
NVIDIA Quadro Experience

Arm Announces new IP Portfolio with Cortex-A78 CPU

During this unprecedented global health crisis, we have experienced rapid societal changes in how we interact with and rely on technology to connect, aid, and support us. As a result of this we are increasingly living our lives on our smartphones, which have been essential in helping feed our families through application-based grocery or meal delivery services, as well as virtually seeing our colleagues and loved ones daily. Without question, our Arm-based smartphones are the computing hub of our lives.

However, even before this increased reliance on our smartphones, there was already growing interest among users to explore the limits of what is possible. The combination of these factors with the convergence of 5G and AI, are generating greater demand for more performance and efficiency in the palm of our hands.
Arm Cortex-A78

NVIDIA Investors Claw Back at Company, Claiming $1 Billion Mining GPU Revenue Hidden Away in the Gaming Division

NVIDIA investors have recently filed a suit against the company, claiming that NVIDIA wrongfully reported its revenue split between divisions. The main point of contention is that investors claim NVIDIA knowingly obfuscated the crypto market boom's (and subsequent bust's) contribution to its results, painting a picture of the company's outlook that differed from reality (making demand in the Gaming division look higher than it actually was) and exposing them to a different state of affairs and revenue trajectory than they expected. The investors say that a not-insignificant number of the graphics cards NVIDIA sold between 2017 and 2018 were being bought up solely for the purpose of crypto mining, and that the company knew this (and even marketed GPUs specifically for that purpose).

The crypto mining boom had miners gobbling up every NVIDIA and AMD graphics card they could, with both companies seemingly increasing production to meet the bubble's demand. However, given the economics of crypto mining, it was clear that any profits derived from this bubble would ultimately open the door to an explosive logistics problem as miners offloaded their graphics cards onto the second-hand market, which could ultimately hurt NVIDIA's books. Of course, one can look at NVIDIA's revenue categories at the time to see that crypto would hardly fit neatly into any of the Gaming, Professional Visualization, Datacenter, Auto, or OEM & IP divisions.

Asetek Unveils Rad Card, the Industry's First Slot-In PCIe Radiator Card

Asetek, the creator of the all-in-one (AIO) liquid cooler and the global leader in liquid cooling solutions for gaming PCs and DIY enthusiasts, today announced its Rad Card GPU Cooler, bringing liquid cooled GPUs to space constrained PC cases. Asetek's Rad Card GPU Cooler, the industry's first slot-in PCIe radiator card, is first available in Dell-Alienware's newly introduced Alienware Aurora R11 PC.

Space constraints are a real issue for PC manufacturers, often leaving GPU air cooling as the only option - until now. Asetek took this challenge head-on, innovating a new approach to radiator technology that reimagines the shape and location of the radiator. The Asetek Rad Card GPU Cooler fits into a motherboard PCIe slot just like any other add-in card. By utilizing PCIe slots, Asetek has found a way to overcome PC manufacturers' dilemma of finding additional space inside the case for a liquid-cooled GPU heat exchanger (HEx).

Update May 18th: This card may not be limited to just OEMs, with Asetek tweeting "Not all of them made it to Alienware. Not what to do with these...". Asetek is very open about seeking feedback and is watching consumer demand for this product, possibly even getting ready for a giveaway, so it will be exciting to see what comes of this.

Hot Chips 2020 Program Announced

Today the Hot Chips program committee officially announced the August conference line-up, posted to hotchips.org. For this first-ever live-streamed Hot Chips Symposium, the program is better than ever!

In a session on deep learning training for data centers, we have a mix of talks from the internet giant Google showcasing their TPUv2 and TPUv3, and a talk from startup Cerebras on their 2nd gen wafer-scale AI solution, as well as ETH Zurich's 4096-core RISC-V based AI chip. And in deep learning inference, we have talks from several of China's biggest AI infrastructure companies: Baidu, Alibaba, and SenseTime. We also have some new startups that will showcase their interesting solutions - LightMatter talking about its optical computing solution, and TensTorrent giving a first look at its new architecture for AI.
Hot Chips

NVIDIA Tesla A100 GPU Pictured

Thanks to the sources of VideoCardz, we now have the first picture of the next-generation NVIDIA Tesla A100 graphics card. Designed for compute-oriented applications, the Tesla A100 is a socketed GPU made for NVIDIA's proprietary SXM socket. In a post a few days ago, we suspected that the Tesla A100 GPU might fit into the socket of the previous Volta V100 GPUs, as it uses a similar SXM socket. However, the mounting holes have been rearranged, and this one requires a new socket/motherboard. The Tesla A100 is based on the GA100 die, whose specifications we don't yet know. From the picture, we can only see one very big die surrounded by six HBM modules, most likely HBM2E; everything else remains unknown. More details are expected to be announced today at the GTC 2020 digital keynote.
NVIDIA Tesla A100

NVIDIA CEO Jensen Huang has been Cooking the World's Largest GPU - Is this Ampere?

NVIDIA is rumored to introduce its next-generation Ampere architecture very soon, at its GTC event happening on May 14th. We're expecting an announcement of the successor to the company's DGX lineup of pre-built compute systems - using the upcoming Ampere architecture, of course. At the heart of these machines will be a new GA100 GPU that's rumored to be very fast. A while ago, we saw NVIDIA register a trademark for "DGX A100", which seems a credible name for these systems featuring the new Tesla A100 graphics cards.

Today, NVIDIA's CEO was spotted in an unlisted video published on the official NVIDIA YouTube channel. It shows him pulling out of the oven what he calls the "world's largest GPU", which he has apparently been cooking all this time. Featuring eight Tesla A100 GPUs, the DGX A100 system appears to be based on a similar platform design to previous DGX systems, where the GPU is a socketed SXM2 design. This looks like a viable upgrade path for owners of previous DGX systems - just swap out the GPUs and enjoy higher performance. It's been a while since we've seen Mr. Huang appear in his leather jacket, and in the video he isn't wearing one - is this the real Jensen? Jokes aside, you can check out the video below, if it isn't taken down soon.
NVIDIA DGX A100 System
Update May 12th, 5 pm UTC: NVIDIA has made the video public; it is no longer unlisted.

TSMC 5 nm Customers Listed, Intel Rumored to be One of Them

TSMC is working hard to bring up its new 5 nm node (N5 and N5+), despite all the hiccups the company may have had due to the COVID-19 pandemic. However, it seems like nothing can stop TSMC, and plenty of companies have already reserved capacity for their chips. With mass production supposed to start in Q3 of this year, the 5 nm node should become one of TSMC's major nodes over time, with predictions that it will account for 10% of the company's capacity in 2020. Thanks to a report from ChinaTimes, we have a list of new clients for TSMC's 5 nm node, with some very interesting names like Intel appearing on it.

Apple and Huawei/HiSilicon will be the biggest customers for the node this year, with the A14 and Kirin 1000 chips being made on N5, while Apple is ordering A15 chips and Huawei is readying the Kirin 1100 5G chip for the next-generation N5+. From there, AMD will join the 5 nm party with Zen 4 processors and RDNA 3 graphics cards. NVIDIA has also reserved some capacity for its Hopper architecture, which is expected to be a consumer-oriented option, unlike Ampere. And perhaps the most interesting entry on the list is Intel's Xe graphics cards. The list suggests that Intel might use the N5 process from TSMC to ensure the best possible performance for its future cards, in case it runs into issues manufacturing on its own nodes, just as it did with 10 nm.
TSMC 5 nm customers

AMD Adds Four New Graphics Technologies to Its FidelityFX Software Stack via GPUOpen

AMD today, via its newly released GPUOpen website, announced that it is adding four new graphics technologies to its FidelityFX software stack. Before you ask: no, there are no ray tracing libraries among these four new technologies. However, considering that their purpose is to give developers almost plug-in flexibility for graphics technologies they would otherwise have to integrate into their rendering passes some other way, added layers to GPUOpen are always a welcome sight. And rest assured that "classic" shading techniques will still be widely used even with the advent of top-to-bottom raytracing capabilities in graphics hardware - which likely won't happen in the next GPU hardware generation anyway.

Joining the previously released Contrast Adaptive Sharpening are libraries for SSSR (Stochastic Screen Space Reflections), for better reflections without the use of raytracing; CACAO (Combined Adaptive Compute Ambient Occlusion), for added depth to shadows and object quality; LPM (Luminance Preserving Mapper), which eases the application of an HDR rendering pipeline with correct values and prevents overblown details; and SPD (Single Pass Downsampler), which lets developers efficiently downsample required assets (think something along the lines of Variable Rate Shading) to achieve FPS targets. GPUOpen is an effort by AMD to create an open graphics library that allows developers to easily integrate AMD-optimized technologies into their graphics workflows.
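As a purely illustrative look at what a downsampler does, here is a single-level 2x2 box-filter downsample in NumPy - a minimal sketch only, not AMD's actual compute-shader implementation, which generates an entire mip chain in a single dispatch:

```python
import numpy as np

def downsample_2x2(image: np.ndarray) -> np.ndarray:
    """Halve each dimension by averaging 2x2 pixel blocks (simple box filter)."""
    h, w = image.shape[:2]
    h, w = h - h % 2, w - w % 2                            # drop odd edge rows/cols
    blocks = image[:h, :w].reshape(h // 2, 2, w // 2, 2, -1)
    return blocks.mean(axis=(1, 3))

mip0 = np.random.rand(256, 256, 3).astype(np.float32)     # stand-in for a render target
mip1 = downsample_2x2(mip0)
print(mip1.shape)                                          # (128, 128, 3)
```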

Intel Gen12 Xe GPU with 96 Execution Units Shows Up on SiSoft Database

An Intel Gen12 Xe GPU, possibly a discrete DG1 prototype, has shown up in the SiSoft SANDRA online database. The GPU is detailed by SANDRA as having 768 unified shaders across 96 execution units (EUs), a 1.50 GHz GPU clock speed, 1 MB of on-die L2 cache, and 3 GB of dedicated video memory of an unknown type (likely GDDR6). This is probably a different chip from the DG1-SDV, which caps out at a 900 MHz GPU clock, although its SIMD muscle is identical.

At a clock speed of 1.50 GHz, the chip would offer an FP32 throughput of roughly 2,303 GFLOPS (we know this from the DG1-SDV offering 1,382 GFLOPS at 900 MHz). If software-side optimization backs up this hardware, the resulting product could end up with performance in the league of the 8 CU Radeon "Vega" solution found in the AMD "Renoir" APU, or the Radeon RX 560 discrete GPU - just about enough for PUBG at 1080p with medium settings.
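Those throughput figures follow from shader count and clock, assuming two FP32 operations (one fused multiply-add) per shader per clock:

```python
# FP32 peak (GFLOPS) = shaders * 2 FLOPs per clock (FMA) * clock in GHz.
def gflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz

print(gflops(768, 1.50))   # 2304.0 -- the ~2,303 GFLOPS figure for this chip
print(gflops(768, 0.90))   # 1382.4 -- matches the DG1-SDV number quoted above
```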

Samsung/AMD Radeon GPU for Smartphones is Reportedly Beating the Competition

Samsung and AMD announced their strategic partnership last year, aiming to bring AMD RDNA GPUs to Samsung's mobile chips and use them as the sole GPU going forward. Now, some performance numbers are circulating for the new RDNA smartphone GPU, compared against the Qualcomm Adreno 650. Thanks to the South Korean technology forum "Clien", we have some alleged performance results for the new GPU in the GFXBench benchmark. The baseline in these tests is the Qualcomm Adreno 650 GPU, which scored 123 FPS in the Manhattan 3.1 test, 53 FPS in Aztec Normal, and 20 FPS in Aztec High.

The welcome surprise here is the new RDNA GPU Samsung is pursuing. It scored an amazing 181 FPS in the Manhattan 3.1 test (up 47% from the Adreno 650), 138 FPS in Aztec Normal (up 160%), and 58 FPS in Aztec High, which is 190% higher than the Adreno 650. These performance results could well be genuine, as the Samsung and AMD collaboration should bear its first fruit in 2021, when the competition will be stronger, and they need to be prepared for that. You always start designing a processor for next-generation workloads and performance if you want to be competitive by the time you release a product.
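The uplift percentages are straightforward to re-derive from the quoted FPS numbers:

```python
# Relative uplift of the alleged RDNA results over the Adreno 650 baseline,
# using the FPS figures quoted above.
baseline = {"Manhattan 3.1": 123, "Aztec Normal": 53, "Aztec High": 20}
rdna     = {"Manhattan 3.1": 181, "Aztec Normal": 138, "Aztec High": 58}

for test, fps in rdna.items():
    uplift = (fps / baseline[test] - 1) * 100
    print(f"{test}: +{uplift:.0f}%")   # +47%, +160%, +190%
```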
AMD RDNA GPU

Intel Teases "Big Daddy" Xe-HP GPU

The Intel Graphics Twitter account was on fire today, posting an update on the development of the Xe graphics processor and mentioning that samples are ready and packed up in quite an interesting package. The processor in question was discovered to be an Xe-HP GPU variant with an estimated die size of 3700 mm², which means we are surely talking about a multi-chip package here. We concluded that it is the Xe-HP GPU from the words of Raja Koduri, senior vice president, chief architect, and general manager for Architecture, Graphics, and Software at Intel. He made a tweet, which was later deleted, saying that this processor is the "baap of all", meaning "big daddy of them all" when translated from Hindi.

Mr. Koduri previously tweeted a photo of the Intel Graphics team in India, which has been working on the same "baap of all" GPU, suggesting this is an Xe-HP chip. It seems this is not the version of the GPU made for HPC workloads (that role is reserved for the Xe-HPC GPU); instead, this model could be a direct competitor to offerings like NVIDIA Quadro or AMD Radeon Pro. We can't wait to learn more about Intel's Xe GPUs, so stay tuned. Mr. Koduri has since confirmed that this GPU will be used only for data-centric applications, as it is needed to "keep up with the data we are generating". He also added that the focus for gaming GPUs is to start with better integrated GPUs and low-power discrete chips above those, which could reach millions of users. That would be a good beginning, as it will enable software preparation for possible high-performance GPUs in the future.

Update May 2nd: changed "father" to "big daddy", as that's the better translation of "baap".
Update 2, May 3rd: The GPU is confirmed to be a Data Center component.