News Posts matching #VRAM

Despite Frank Azor's Dismissal, Whispers of a 32 GB Radeon RX 9070 XTX Resurface

Recent rumors hinted at a 32 GB variant of the Radeon RX 9070 XT being in the works, claims that were quickly dismissed as false information by AMD's Frank Azor. However, reliable sources seem to point to the contrary, stating that a 32 GB variant of the RX 9070 XT, likely dubbed the RX 9070 XTX, is indeed under active development. The source, as pointed out by Wccftech, has a decent track record with AMD-related claims, which adds weight to the assertion. Unlike previous XTX-class cards from AMD, which boasted higher clock speeds and core counts, the 9070 XTX is almost certain to feature the same core count as the XT, since the latter already utilizes the full Navi 48 chip - unless, of course, there is an even higher-end chip under wraps.

The VRAM amount seems to indicate that the card will likely be positioned to appease AI enthusiasts. There is also the possibility that the rumored card will be launched under a different branding entirely, although that is not what the post at Chiphell states. Interestingly, Frank Azor specifically said that a 32 GB "RX 9070 XT" card is not on the horizon - he did not state that a higher-end XTX card isn't either, which leaves room for speculation. Benchlife has also chimed in on the matter, claiming to be aware of AIB partners working on a 32 GB RDNA 4 card based on the Navi 48 GPU, which, in some ways, corroborates the information that came out of Chiphell. The RDNA 4 cards are set to see the light of day soon, so it seems the wait won't be much longer. However, if the 32 GB card is indeed in the pipeline, it's likely still further down the road.

AMD Radeon RX 9070 XT Could Get a 32 GB GDDR6 Upgrade

AMD's Radeon RX 9000 series GPUs are expected to come with up to 16 GB of GDDR6 memory. However, AMD is reportedly expanding its RX 9070 lineup with a new 32 GB variant, according to sources on Chiphell. The card, speculatively called the RX 9070 XT 32 GB, is slated for release at the end of Q2 2025. The GDDR6 memory modules currently used in GPUs top out at 2 GB per module, meaning that a design with 32 GB of VRAM would require as many as 16 memory modules on a single card. With no higher-density GDDR6 modules available, such a design would require memory modules on both the front and back of the PCB. Consumer GPUs are not known for this, but it is a possibility, with workstation/prosumer-grade GPUs already employing this engineering tactic to boost capacity.
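
As a rough sanity check on that layout, here is a minimal back-of-the-envelope sketch; the 256-bit bus width is our assumption based on the existing Navi 48 design, not something the leak states:

```python
# Back-of-the-envelope check: module count for a 32 GB GDDR6 card.
# Assumption: the card keeps Navi 48's 256-bit bus; GDDR6 devices are
# 32-bit wide and top out at 2 GB (16 Gb), per the article.

BUS_WIDTH_BITS = 256       # assumed, matches the current RX 9070 XT
MODULE_WIDTH_BITS = 32     # standard GDDR6 device width
MODULE_CAPACITY_GB = 2     # largest GDDR6 density available
TARGET_CAPACITY_GB = 32

modules_needed = TARGET_CAPACITY_GB // MODULE_CAPACITY_GB    # 16
channels = BUS_WIDTH_BITS // MODULE_WIDTH_BITS               # 8

# 16 modules on 8 channels means two modules per 32-bit channel,
# i.e. a "clamshell" layout with modules on both sides of the PCB.
print(f"modules needed: {modules_needed}")
print(f"memory channels: {channels}")
print(f"clamshell required: {modules_needed > channels}")
```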

While we don't have information on the GPU architecture, discussions point to potential modifications of the existing Navi 48 silicon. This release is positioned as a gaming card rather than a workstation-class Radeon PRO 9000 series product. AMD appears to be targeting gamers interested in running AI workloads, which typically require massive amounts of VRAM to run locally. Additionally, investing in a GPU with a large VRAM capacity is essentially "future-proofing" for gamers who plan to keep their cards for longer, as recent games have been spiking VRAM usage by a large margin. The combination of gaming and AI workloads may have made AMD reconsider some of its product offerings, potentially giving us the Radeon RX 9070 XT 32 GB SKU. We will have to wait for Q2 to arrive, and we can expect more details by then.

Update 20:55 UTC: AMD's Frank Azor debunked rumors of the 32 GB SKU coming to gamers in a post on X, so this card will not happen. Instead, we could be looking at a prosumer-oriented AMD Radeon Pro GPU with 32 GB of memory.

NVIDIA GeForce RTX 5060 16 GB Variants Deemed Fake, Insiders Insist SKU is 8 GB Only

According to early February reportage, Team Green's GeForce RTX 5060 and RTX 5060 Ti graphics cards are expected to launch onto the market next month. Very basic technical information has leaked online; insiders reckon that both product tiers will utilize the NVIDIA "Blackwell" GB206 GPU. Rumors have swirled regarding intended VRAM configurations—loose online declarations point to variants being prepared with 8 or 16 GB of GDDR7 VRAM on a 128-bit bus. Regulatory filings indicate two different configs with the eventual arrival of GeForce RTX 5060 Ti models, but certain industry watchdogs insist that the GeForce RTX 5060 SKU will be an 8 GB-only product.

A curious-looking ZOTAC Trinity OC White Edition GeForce RTX 5060 16 GB variant surfaced via a TikTok video—post-analysis, expert eyes declared that the upload contained doctored material. A BenchLife.info report pointed to a notable inconsistency on the offending item's retail packaging: "DLSS 3 should not appear on the GeForce RTX 50 series box, because the Blackwell GPU architecture focuses on DLSS 4." The publication presented evidence of ZOTAC RTX 4070 Ti SUPER Trinity OC White Edition box art being repurposed in the TikToker's video. Hardware soothsayer MEGAsizeGPU added their two cents: "this is fake. There is no plan for a GeForce RTX 5060 16 GB, and the box is photoshopped from the last-gen ZOTAC box." At the end of their report, BenchLife reckons that NVIDIA has not sent a "GeForce RTX 5060 color box template" to its board partners.

Edward Snowden Lashes Out at NVIDIA Over GeForce RTX 50 Pricing And Value

It's not every day that we witness a famous NSA whistleblower voice their disappointment over modern gaming hardware. Edward Snowden, who likely needs no introduction, did not bother to hold back his disapproval of NVIDIA's recently launched RTX 5090, RTX 5080, and RTX 5070 gaming GPUs. The reviews for the RTX 5090 have been mostly positive, although the same cannot be said for its more affordable sibling, the RTX 5080. Snowden, voicing his thoughts on Twitter, claimed that NVIDIA is selling "F-tier value for S-tier prices".

Needless to say, the RTX 5090's pricing is quite exorbitant, regardless of how anyone puts it. Snowden was particularly displeased with the amount of VRAM on offer, which is also hard to argue against. The RTX 5080 ships with "only" 16 GB of VRAM, whereas Snowden believes that it should have shipped with at least 24 GB, or even 32 GB. He further adds that the RTX 5090, which ships with a whopping 32 GB of VRAM, should also have been made available in a 48 GB variant. As for the RTX 5070, the security consultant expressed a desire for at least 16 GB of VRAM (instead of 12 GB).

AMD Radeon RX 9070 XT Pricing Leak: More Affordable Than RTX 5070?

As we reported yesterday, the Radeon RX 9070 XT appears to be all set to disrupt the mid-range gaming GPU segment, offering performance that looks truly enticing, at least if the leaked synthetic benchmarks are anything to go by. The highest-end RDNA 4 GPU is expected to handily outperform the RTX 4080 Super despite costing half as much, although a comparison with its primary competitor, the RTX 5070, is yet to be made.

Now, a fresh leak has seemingly hinted at how heavy the RDNA 4 GPU is going to be on its buyers' pockets. Also sourced from Chiphell, the Radeon RX 9070 XT is expected to command a price tag between $479 for AMD's reference card and roughly $549 for an AIB unit, varying based on which exact product one opts for. At that price, the Radeon RX 9070 XT easily undercuts the RTX 5070, which will start from $549, while offering 16 GB of VRAM, albeit of the older GDDR6 spec. There is hardly any doubt that the RTX GPU will come out ahead in ray tracing performance, as we already witnessed yesterday, although traditional rasterization performance will be more interesting to compare.

MSI Introduces Next-Gen NVIDIA GeForce RTX 50 Series Graphics Cards for the AI Era

MSI has unveiled its groundbreaking NVIDIA GeForce RTX 50 Series graphics cards, featuring cutting-edge designs including Suprim Liquid, Suprim, Vanguard, Gaming Trio, Ventus, and Inspire. Engineered with enhanced thermal solutions, the cards are crafted to meet the high-performance demands of next-gen GPUs, delivering advanced cooling and peak performance.

Powered by NVIDIA Blackwell, GeForce RTX 50 Series GPUs bring game-changing capabilities to gamers and creators. Equipped with a massive level of AI horsepower, the RTX 50 Series enables new experiences and next-level graphics fidelity. Multiply performance with NVIDIA DLSS 4, generate images at unprecedented speed, and unleash creativity with NVIDIA Studio. Plus, access NVIDIA NIM microservices - state-of-the-art AI models that let enthusiasts and developers build AI assistants, agents, and workflows with peak performance on NIM-ready systems.

NVIDIA RTX 5000 Blackwell Memory Amounts Confirmed by Pre-Built PC Maker

By now, it's a surprise to almost nobody that NVIDIA plans to launch its next-generation RTX 5000-series "Blackwell" gaming graphics cards at the upcoming CES 2025 event in Las Vegas in early January. Previously, leaks and rumors gave us a full run-down of expected VRAM amounts and other specifications and features for the new GPUs, but these have yet to be confirmed by NVIDIA—for obvious reasons. Now, though, it looks as though iBuyPower has jumped the gun and prematurely revealed the new specifications for its updated line-up of pre-built gaming PCs with RTX 5000-series GPUs ahead of NVIDIA's official announcement. The offending product pages have since been removed, but they both give us confirmation of the previously leaked VRAM amounts and of the expected release cadence for RTX 5000, which will reportedly see the RTX 5070 Ti and RTX 5080 launch before the RTX 5090 flagship.

On iBuyPower's now-pulled pages, the NVIDIA GeForce RTX 5080 16 GB and GeForce RTX 5070 Ti 16 GB can be seen as the GPUs powering two different upcoming Y40 pre-built gaming PCs from the system integrator. The VRAM specifications here coincide with what we have previously seen from other leaked sources. Unfortunately, while an archived version of the page for the pre-built containing the RTX 5080 appears to show the design for an ASUS TUF Gaming RTX 5080 with a triple-fan cooler, it looks like iBuyPower is using the same renders for both the 5080 and 5070 Ti versions of the pre-built PCs. What's also interesting is that iBuyPower looks to be pairing the next-gen GPUs with 7000-series AMD X3D CPUs, as opposed to the newly released AMD Ryzen 9000 X3D chips that have started making their way out into the market.

NVIDIA Blackwell RTX and AI Features Leaked by Inno3D

NVIDIA's RTX 5000 series GPU hardware has been leaked repeatedly in the weeks and months leading up to CES 2025, with previous leaks tipping significant updates for the RTX 5070 Ti in the VRAM department. Now, Inno3D is apparently hinting that the RTX 5000 series will also introduce updated machine learning and AI tools to NVIDIA's GPU line-up. An official CES 2025 teaser published by Inno3D, titled "Inno3D At CES 2025, See You In Las Vegas!" makes mention of potential updates to NVIDIA's AI acceleration suite for both gaming and productivity.

The Inno3D teaser specifically points out "Advanced DLSS Technology," "Enhanced Ray Tracing" with new RT cores, "better integration of AI in gaming and content creation," "AI-Enhanced Power Efficiency," AI-powered upscaling tech for content creators, and optimizations for generative AI tasks. All of this sounds like it builds off of previous NVIDIA technology, like RTX Video Super Resolution, although the mention of content creation suggests that it will be more capable than previous efforts, which were seemingly mostly consumer-focused. Of course, improved RT cores in the new RTX 5000 GPUs are also expected, although it will seemingly be the first time NVIDIA uses AI to enhance power draw, suggesting that the CES announcement will come with new features for the NVIDIA App. The real standout features, though, are "Neural Rendering" and "Advanced DLSS," both of which are new nomenclatures. Of course, Advanced DLSS may simply be Inno3D marketing copy, but Neural Rendering suggests that NVIDIA will "Revolutionize how graphics are processed and displayed," which is about as vague as one could be.

NVIDIA GeForce RTX 5070 Ti Leak Tips More VRAM, Cores, and Power Draw

It's an open secret by now that NVIDIA's GeForce RTX 5000 series GPUs are on the way, with an early 2025 launch on the cards. Now, preliminary details about the RTX 5070 Ti have leaked, revealing an increase in both VRAM and TDP and suggesting that the new upper mid-range GPU will finally address the increased VRAM demand from modern games. According to the leak from Wccftech, the RTX 5070 Ti will have 16 GB of GDDR7 VRAM, up from 12 GB on the RTX 4070 Ti, as we previously speculated. In line with previous leaks, the new sources confirm that the 5070 Ti will use the cut-down GB203 chip, although the new leak points to a significantly higher TBP of 350 W. The new memory configuration will supposedly run on a 256-bit memory bus at 28 Gbps, for a total memory bandwidth of 896 GB/s, which is a significant boost over the RTX 4070 Ti.
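
The quoted figure checks out with the standard bandwidth arithmetic; a quick sketch for the curious (the RTX 4070 Ti numbers are its known shipping specs, included for comparison):

```python
# Peak memory bandwidth = (bus width in bytes) x (per-pin data rate).
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return (bus_width_bits / 8) * data_rate_gbps

rtx_5070_ti = bandwidth_gbs(256, 28)  # leaked spec: 256-bit GDDR7 @ 28 Gbps
rtx_4070_ti = bandwidth_gbs(192, 21)  # shipping spec: 192-bit GDDR6X @ 21 Gbps

print(f"RTX 5070 Ti (leak): {rtx_5070_ti:.0f} GB/s")   # 896 GB/s
print(f"RTX 4070 Ti:        {rtx_4070_ti:.0f} GB/s")   # 504 GB/s
print(f"uplift: {rtx_5070_ti / rtx_4070_ti - 1:.0%}")  # ~78%
```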

Supposedly, the RTX 5070 Ti will also see a bump in total CUDA cores, from 7680 in the RTX 4070 Ti to 8960 in the RTX 5070 Ti. The new card will also switch to the revised 12V-2x6 power connector, compared to the 12VHPWR connector on the 4070 Ti. NVIDIA is expected to announce the RTX 5000 series graphics cards at CES 2025 in early January, but the RTX 5070 Ti will supposedly be the third card in the 5000-series launch cycle. That said, leaks suggest that the 5070 Ti will still launch in Q1 2025, meaning we may see an indication of specs at CES 2025, although pricing is still unclear.

Update Dec 16th: Kopite7kimi, the ubiquitous hardware leaker, has since responded to the RTX 5070 Ti leaks, stating that 350 W may be on the higher end for the RTX 5070 Ti: "...the latest data shows 285W. However, 350W is also one of the configs." This could mean that a TBP of 350 W is possible, although perhaps only on certain graphics card models, if competition is strong, or in certain boost scenarios.

Thermal Grizzly Launches New Thermal Putty Gap Fillers in Three Different Versions

Thermal Grizzly's Thermal Putty offers a premium alternative to traditional thermal pads. It is electrically non-conductive, easy to apply, and functions as a flexible gap filler that compensates for height differences. This makes it an ideal replacement for thermal pads in graphics cards. Graphics cards are typically equipped with thermal pads of varying heights from the factory. When replacing these pads or upgrading to a GPU water cooler, matching replacement pads are necessary.

TG Thermal Putty can compensate for height differences from 0.2 to 3.0 mm, making it a versatile solution. Thermal Putty can be applied in two ways. Firstly, it can be applied over large areas using the included spatulas. Alternatively, it can be applied manually (gloves are recommended). When applied by hand, small beads can be shaped to fit the specific contact surfaces (e.g., VRAM, SMD).

MAXSUN Unveils Intel Arc B580 Series Graphics Cards

MAXSUN and Intel introduced their latest collaboration, the MAXSUN Intel Arc B580 Series Graphics Cards. Both models come with 12 GB of VRAM, ensuring smooth gameplay and efficient creative workflows.

Two Options for Every Need
The MAXSUN Intel Arc B580 Series features two models:
  • MAXSUN Intel Arc B580 iCraft 12G (MSRP: $259)
  • MAXSUN Intel Arc B580 Milestone 12G (MSRP: $249)

Intel Arc B580 Card Pricing Leak Suggests Competitive Pricing

Earlier this week, details of two Intel Arc B580 "Battlemage" graphics cards from ASRock leaked, but there was no indication of any pricing, which led to some speculation in the comments section. Now, serial leaker @momomo_us on X/Twitter has leaked the pricing for Intel's own card, which will apparently be known as the Intel Arc B580 Limited Edition graphics card. The leaker suggests a retail price of US$250 for the 12 GB graphics card, which seems like a competitive starting point for what is expected to be a lower mid-tier GPU. However, this will most likely be the cheapest option on the market, since AIBs tend to charge more due to customized PCBs and cooling, plus some extra bling over the Intel cards.

In addition to the pricing leak above, Videocardz did some digging and found an etailer that has listed the Intel Arc B580 card on its site, albeit without any details, for US$259.55, although Videocardz didn't reveal the identity of the etailer, beyond the fact that it's a US company. The question is how the B580 will compare in terms of performance against both Intel's own Arc A750 and A770—which come with 8 GB and either 8 or 16 GB of VRAM, respectively—especially as you can pick up an Acer Predator BiFrost Arc A770 or a couple of different ASRock Challenger Arc A770 cards for as little as US$230.

NVIDIA Fine-Tunes Llama3.1 Model to Beat GPT-4o and Claude 3.5 Sonnet with Only 70 Billion Parameters

NVIDIA has officially released its Llama-3.1-Nemotron-70B-Instruct model. Based on Meta's Llama 3.1 70B, the Nemotron model is a large language model customized by NVIDIA to improve the helpfulness of LLM-generated responses. NVIDIA uses structured fine-tuning data to steer the model and allow it to generate more helpful responses. With only 70 billion parameters, the model is punching far above its weight class. The company claims that the model beats the current top models from leading labs, such as OpenAI's GPT-4o and Anthropic's Claude 3.5 Sonnet, which are the current leaders across AI benchmarks. In evaluations such as Arena Hard, the NVIDIA Llama3.1 Nemotron 70B scores 85 points, while GPT-4o and Claude 3.5 Sonnet score 79.3 and 79.2, respectively. In other benchmarks like AlpacaEval and MT-Bench, NVIDIA also holds the top spot, with scores of 57.6 and 8.98, respectively. Claude and GPT reach 52.4/8.81 and 57.5/8.74, just below Nemotron.

This language model underwent training using reinforcement learning from human feedback (RLHF), specifically employing the REINFORCE algorithm. The process involved a reward model based on a large language model architecture and custom preference prompts designed to guide the model's behavior. The training began with a pre-existing instruction-tuned language model as the starting point: Llama-3.1-70B-Instruct served as the initial policy, trained against Llama-3.1-Nemotron-70B-Reward using HelpSteer2-Preference prompts. Running the model locally requires either four 40 GB or two 80 GB VRAM GPUs and 150 GB of free disk space. We managed to take it for a spin on NVIDIA's website to say hello to TechPowerUp readers. The model also passes the infamous "strawberry" test, where it has to count the number of specific letters in a word; however, it appears that this test was part of the fine-tuning data, as the model fails the next test, shown in the image below.
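
That multi-GPU requirement follows from simple parameter math. Below is a rough sketch of the usual estimate; it assumes FP16/BF16 weights, and the headroom comparison is our illustration, not an NVIDIA-published calculation:

```python
# Rough VRAM estimate for serving a 70B-parameter model in FP16/BF16.
params = 70e9
bytes_per_param = 2                        # FP16/BF16 weights

weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: {weights_gb:.0f} GB")   # 140 GB

# Four 40 GB or two 80 GB GPUs both give 160 GB in aggregate, leaving
# roughly 20 GB of headroom for the KV cache and activations, which
# lines up with the minimum configurations quoted above.
for gpus, vram in ((4, 40), (2, 80)):
    total = gpus * vram
    print(f"{gpus} x {vram} GB = {total} GB, "
          f"headroom {total - weights_gb:.0f} GB")
```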

NVIDIA's RTX 5060 "Blackwell" Laptop GPU Comes with 8 GB of GDDR7 Memory Running at 28 Gbps, 25 W Lower TGP

In a recent event hosted by Chinese laptop manufacturer Hasee, the company's chairman, Wu Haijun, unveiled exciting details about NVIDIA's upcoming GeForce RTX 5060 "Blackwell" laptop GPU. Attending the event was industry insider Golden Pig Upgrade, who managed to catch some details of the card set to launch next year. The RTX 5060 is expected to be the first on the market to feature GDDR7 memory, a move that aligns with earlier leaks suggesting NVIDIA's entire Blackwell lineup would adopt this new standard. This upgrade is anticipated to deliver substantial boosts in bandwidth and possibly increased VRAM capacities in other SKUs. Perhaps most intriguing is the reported performance of the RTX 5060. Wu said this laptop SKU could offer performance comparable to the current RTX 4070 laptop GPU. It's said to exceed the RTX 4070 in ray tracing scenarios and match or come close to its rasterization performance.

This leap in capabilities is made even more impressive by the chip's reduced power consumption, with a maximum TGP of 115 W compared to the RTX 4060's 140 W. The reported power efficiency gains are not exclusive to the RTX 5060. Wu suggests that the entire Blackwell lineup will see significant reductions in power draw, potentially lowering overall system power consumption by 40 to 50 watts in many Blackwell models. While specific technical details remain limited, it's believed the RTX 5060 will utilize the GB206 GPU die paired with 8 GB of GDDR7 memory, likely running at 28 Gbps in its initial iteration.

Stability AI Outs Stable Diffusion 3 Medium, Company's Most Advanced Image Generation Model

Stability AI, a maker of various generative AI models and the company behind text-to-image Stable Diffusion models, has released its latest Stable Diffusion 3 (SD3) Medium AI model. Running on two billion dense parameters, the SD3 Medium is the company's most advanced text-to-image model to date. It boasts features like generating highly realistic and detailed images across a wide range of styles and compositions. It demonstrates capabilities in handling intricate prompts that involve spatial reasoning, actions, and diverse artistic directions. The model's innovative architecture, including the 16-channel variational autoencoder (VAE), allows it to overcome common challenges faced by other models, such as accurately rendering realistic human faces and hands.

Additionally, it achieves exceptional text quality, with precise letter formation, kerning, and spacing, thanks to the Diffusion Transformer architecture. Notably, the model is resource-efficient, capable of running smoothly on consumer-grade GPUs without compromising performance due to its low VRAM footprint. Furthermore, it exhibits impressive fine-tuning abilities, allowing it to absorb and replicate nuanced details from small datasets, making it highly customizable for specific use cases that users may have. Being an open-weight model, it is available for download on HuggingFace, and it has libraries optimized for both NVIDIA's TensorRT (all modern NVIDIA GPUs) and AMD Radeon/Instinct GPUs.
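
For those wanting to try it locally, here is a minimal sketch using Hugging Face's diffusers library; it assumes a diffusers version with SD3 support (0.29+), access to the gated weights on HuggingFace, and the accelerate package for offloading, and it is not an official Stability AI example:

```python
# Minimal local inference sketch for Stable Diffusion 3 Medium via
# diffusers (assumes the gated weights have been accepted/downloaded).
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
)
# Offloading keeps the VRAM footprint low enough for consumer GPUs.
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3_medium_out.png")
```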

22 GB Modded GeForce RTX 2080 Ti Cards Listed on Ebay - $499 per unit

An Ebay store—customgpu_official—is selling memory-modified GeForce RTX 2080 Ti graphics cards. The outfit (located in Palo Alto, California) has a large inventory of MSI GeForce RTX 2080 Ti AERO cards, judging from their listing's photo gallery. Workers in China are reportedly upgrading these (possibly refurbished) units with extra lashings of GDDR6 VRAM—going from the original 11 GB specification up to 22 GB. We have observed smaller-scale GeForce RTX 2080 Ti modification projects and a very ambitious user-modified example in the past, but customgpu's latest endeavor targets a growth industry—the item description states: "Why do you need a 22 GB 2080 Ti? Large VRAM is essential to cool AIGC apps such as stable diffusion fine tuning, LLAMA, LLM." At the time of writing, three cards are available to purchase, and interested customers have already acquired four memory-modded units.

They advertise their upgraded "Turbo Edition" card as a great "budget alternative" to more modern GeForce RTX 3090 and 4090 models—"more information and videos" can be accessed via 2080ti22g.com. The MSI GeForce RTX 2080 Ti AERO 11 GB model is not documented within TPU's GPU database, but its dual-slot custom cooling solution is also sported by the MSI RTX 2080 SUPER AERO 8 GB graphics card. The AERO's blower fan system creates a "mini-wind tunnel, pulling fresh air from inside the case and blowing it out the IO panel, and out of the system." The seller's asking price is $499 per unit—perhaps a little bit steep for used cards (potentially involved in mining activities), but customgpu_official seems to be well versed in repairs. Other Ebay listings show non-upgraded MSI GeForce RTX 2080 Ti AERO cards selling in the region of $300 to $400. Custom GPU Upgrade and Repair's hype video proposes that their modified card offers great value, given that it sells for a third of the cost of a GeForce RTX 3090—their Ebay item description contradicts this claim: "only half price compared with GeForce RTX 3090 with almost the same GPU memory."

Possible NVIDIA GeForce RTX 3050 6 GB Edition Specifications Appear

Alleged full specifications have leaked for NVIDIA's upcoming GeForce RTX 3050 6 GB graphics card, showing extensive reductions beyond merely reduced memory size versus the 8 GB model. If accurate, performance could lag the existing RTX 3050 8 GB SKU by up to 25%, making it weaker competition even for AMD's budget RX 6500 XT. Previous rumors suggested only capacity and bandwidth differences between the 3050 variants, with a partially disabled memory bus cutting memory from 8 GB on a 128-bit bus to 6 GB on a 96-bit bus. But the leaked specs indicate CUDA core counts, clock speeds, and TDP all see cuts for the upcoming 6 GB version. With 18 SMs and 2304 cores rather than 20 SMs and 2560 cores, at lower base and boost frequencies, the impact looks more severe than expected. A 70 W TDP does allow passive cooling but hurts performance versus the 3050 8 GB's 130 W design.

Some napkin math suggests the 3050 6 GB could deliver only 75% of its elder sibling's frame rates, putting it more in line with the entry-level 6500 XT. While having 50% more VRAM helps, dramatic core and clock downgrades counteract that memory advantage. According to rumors, the RTX 3050 6 GB is set to launch in February, bringing lower-end Ampere to even more budget-focused builders. But with specifications seemingly hobbled beyond just capacity, its real-world gaming value remains to be determined. NVIDIA likely intends the RTX 3050 6 GB primarily for less demanding esports titles. Given the scale of the cutbacks and the recommended specifications of modern AAA titles, mainstream AAA gaming performance seems improbable.
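
The napkin math above can be reproduced with a simple throughput ratio; the boost clocks below are placeholder values for illustration, since the leak quoted here does not list exact frequencies:

```python
# Crude relative-performance estimate: shader throughput scales with
# (CUDA cores x boost clock). Core counts are from the leak; the boost
# clocks are hypothetical placeholders, not confirmed figures.
cores_8gb, cores_6gb = 2560, 2304
boost_8gb_mhz, boost_6gb_mhz = 1777, 1470   # placeholder values

ratio = (cores_6gb * boost_6gb_mhz) / (cores_8gb * boost_8gb_mhz)
print(f"estimated relative throughput: {ratio:.0%}")
# ~74% with these placeholders, in line with the ~75% figure above.
```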

Lords of the Fallen Gets New Patches, Latest Patch 1.1.207 Brings Stability Improvements

Lords of the Fallen got plenty of patches in the last few days, with two of them, Patch 1.1.199 and 1.1.203, launched yesterday, and the latest, Patch v1.1.207, launched earlier today. The previous two fixed GPU crashes on AMD graphics cards and included a big fix for a communication issue between the drivers and DirectX 12. Patch 1.1.203 also brought a reduction in VRAM usage that should provide additional headroom for GPUs operating at the limit, which in turn should provide a substantial performance improvement, at least according to the developers.

The latest Patch 1.1.207 brought further stability improvements, fixing several crash issues as well as implementing various optimization, multiplayer, gameplay, AI, quest, and other improvements. The release notes also mention that a fix for the issue causing the game to crash on Steam Deck has been completed and should be published as soon as it passes QA.

AMD's Radeon RX 6750 GRE Specs and Pricing Revealed

There have been several rumours about AMD's upcoming RX 6750 GRE graphics cards, which may or may not be limited to the Chinese market. Details have now appeared of not one, but two different RX 6750 GRE SKUs, courtesy of @momomo_us on Twitter/X, and it seems like AMD has simply re-branded the RX 6700 XT and RX 6700, adjusted the clock speeds minimally, and slapped a new cooler on the cards. To call this disappointing would be an understatement, but then again, these cards weren't expected to bring anything new to the table.

The fact that AMD is calling the cards the RX 6750 GRE 10 GB and RX 6750 GRE 12 GB will only serve to confuse consumers, especially when you consider the two cards were clearly different SKUs when they launched as the RX 6700 and RX 6700 XT. Now it just looks like one has less VRAM than the other, when in fact it also has a different GPU. At least the pricing difference between the two SKUs is minimal, with the 10 GB model having an MSRP of US$269 and the 12 GB model coming in at a mere $20 more, at US$289. The RX 6700 XT had a launch price of US$479 and still retails for over US$300, which at least makes these refreshed products somewhat more wallet-friendly.

Gigabyte AORUS Laptops Empower Creativity with AI Artistry

GIGABYTE, the world's leading computer hardware brand, featured its AORUS 17X laptop in videos by two influential AI-content-focused YouTubers, Hugh Hou and MDMZ. Both YouTubers put the new AORUS 17X laptop through AI image and AI video generation tests and found great potential in how the laptop benefits their workflows. The AORUS 17X laptop is powered by NVIDIA GeForce RTX 40 series GPUs to unlock new horizons for creativity and become the go-to choice for art and tech enthusiasts.

Hugh Hou: Unleashing the Power of AI Arts with the AORUS 17X Laptop
Hugh Hou's journey into AI arts, powered by Stable Diffusion XL, garnered viral success. The AORUS 17X laptop emerged as a game-changer with up to 16 GB of VRAM, enabling local AI photo and video generation without incurring hefty cloud-rendering costs. It empowers creators, outperforms competitors in AI-assisted tasks, and enhances AI artistry.

AMD's Radeon RX 7900 GRE Gets Benchmarked

AMD's China-exclusive Radeon RX 7900 GRE has been put through its paces by Expreview, and in short, the card, priced at the equivalent of US$740, should not carry the 7900-series moniker. In most of the tests, the card performs like a Radeon RX 6950 XT or worse, even being beaten by the Radeon RX 6800 XT in 3DMark Fire Strike, if only by the tiniest amount. Expreview has done a fairly limited comparison, mainly pitching the Radeon RX 7900 GRE against the Radeon RX 7900 XT and NVIDIA's GeForce RTX 4070, where it loses by a mile to AMD's higher-end GPU, which was by no means unexpected as this is a lower-tier product.

However, when it comes to the GeForce RTX 4070, AMD struggles to keep up at 1080p, where NVIDIA takes home the win in games like The Last of Us Part 1 and Diablo 4. In games like F1 22 and Assassin's Creed Hall of Valor, AMD is only ahead by a mere percentage point or less. Once ray tracing is enabled, AMD only wins in F1 22, again by less than one percent, and in Far Cry 6, where AMD is almost three percent faster. Moving up in resolution, the Radeon RX 7900 GRE ends up being a clear winner, most likely partially due to having 16 GB of VRAM, and at 1440p the GeForce RTX 4070 also falls behind in most of the ray-traced game tests, if only just in most of them. At 4K the NVIDIA card can no longer keep up, but the Radeon RX 7900 GRE isn't really a 4K champion either, dropping under 60 FPS in more resource-heavy games like Cyberpunk 2077 and The Last of Us Part 1. Considering the GeForce RTX 4070 Ti only costs around US$50 more, it seems like it would be the better choice, despite having less VRAM. AMD appears to have pulled an NVIDIA with this card, which, at least performance-wise, seems to belong in the Radeon RX 7800 segment. The benchmark figures also suggest that the actual Radeon RX 7800 cards won't be worth the wait, unless AMD prices them very competitively.

Update 11:45 UTC: [Editor's note: The official MSRP from AMD appears to be US$649 for this card, which is more reasonable, but the performance still places it in a category lower than the model name suggests.]

GDDR6 VRAM Prices Falling According to Spot Market Analysis - 8 GB Selling for $27

The price of GDDR6 memory has continued to fall sharply over recent financial quarters, due to an apparent decrease in demand for graphics cards. Supply shortages are also a thing of the past—industry experts think that manufacturers have been having an easier time acquiring components since late 2021, but that also means that the likes of NVIDIA and AMD have been paying less for VRAM packages. Graphics card enthusiasts will be questioning why these savings have not been passed on swiftly to the customer, as technology news outlets this week have been picking up on interesting data—it demonstrates that spot prices of GDDR6 have decreased to less than a quarter of their value from a year and a half ago. 3DCenter.org has presented a case example of 8 GB of GDDR6 now costing $27 on the spot market (through DRAMeXchange's tracking system), although manufacturers will be paying less than that due to direct contract agreements with their favored memory chip maker/supplier.
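
For context, the quoted spot price breaks down as follows; the per-module figure assumes standard 16 Gb (2 GB) GDDR6 devices:

```python
# Cost breakdown of the quoted $27 spot price for 8 GB of GDDR6.
price_usd = 27.0
capacity_gb = 8
module_capacity_gb = 2        # assumed standard 16 Gb GDDR6 device

per_gb = price_usd / capacity_gb                             # $3.38/GB
per_module = price_usd / (capacity_gb / module_capacity_gb)  # $6.75

print(f"${per_gb:.2f} per GB, ${per_module:.2f} per 2 GB module")
# At spot pricing, doubling an 8 GB card to 16 GB would add roughly
# $27 in raw memory cost, before board and validation costs.
```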

A 3DCenter.org staffer had difficulty sourcing the price of 16 Gb GDDR6 VRAM ICs on the spot market, so it is tricky to paint a comparative picture of how much more expensive it is to equip a "budget friendly" graphics card with a larger allocation of video memory, when the bill-of-materials (BoM) and limits presented by narrow bus widths are taken into account. NVIDIA is releasing a GeForce RTX 4060 Ti 16 GB variant in July, but the latest batch of low to mid-range models (GeForce RTX 4060-series and Radeon RX 7600) are still 8 GB affairs. Tom's Hardware points to GPU makers sticking with traditional specification hierarchy for the most part going forward: "(models) with double the VRAM (two 16 Gb chips per channel on both sides of the PCB) are usually reserved for the more lucrative professional GPU market."

Intel Announces Intel Arc Pro A60 and Pro A60M GPUs

Today, Intel introduced the Intel Arc Pro A60 and Pro A60M as new members of the Intel Arc Pro A-series professional range of graphics processing units (GPUs). The new products are a significant step up in performance for the Intel Arc Pro family and are carefully designed for professional workstation users, with up to 12 GB of video memory (VRAM) and support for four displays with high dynamic range (HDR) and Dolby Vision support.

With built-in ray tracing hardware, graphics acceleration, and machine learning capabilities, the Intel Arc Pro A60 GPU unites fluid viewports, the latest in visual technologies, and rich content creation in a traditional single-slot form factor.

NVIDIA Explains GeForce RTX 40 Series VRAM Functionality

NVIDIA receives a lot of questions about graphics memory, also known as the frame buffer, video memory, or "VRAM", and so with the unveiling of our new GeForce RTX 4060 Family of graphics cards we wanted to share some insights, so gamers can make the best buying decisions for their gaming needs.

What Is VRAM?
VRAM is high-speed memory located on your graphics card.

It's one component of a larger memory subsystem that helps make sure your GPU has access to the data it needs to smoothly process and display images. In this article, we'll describe memory subsystem innovations in our latest generation Ada Lovelace GPU architecture, as well as how the speed and size of GPU cache and VRAM impacts performance and the gameplay experience.

Xbox Series S Hitting VRAM Limits, 8 GB is the Magic Number

Microsoft launched two flavors of its Xbox Series console back in November of 2020 - a more expensive and powerful "X" model appealing to hardcore enthusiasts arrived alongside an entry-level/budget friendly "S" system that featured lesser hardware specifications. The current generation Xbox consoles share the same custom AMD 8-core Zen 2 processor, albeit with different clock configurations, but the key divergence lies in Microsoft's choice of graphical hardware. The Series X packs an AMD "Scarlett" graphics processor with access to 16 GB of VRAM, while the Series S makes do with only 8 GB of high speed video memory with its "Lockhart" GPU.

Game studios have historically struggled to optimize their projects for the step-down Xbox model, with software engineers complaining about memory allocation issues thanks to a smaller pool of VRAM - the Series S CPU and GPU have to fight over a total of 10 GB of GDDR6 system memory. Microsoft listened to this feedback and made the necessary changes last year - an updated SDK was released and a video briefing explained: "Hundreds of additional megabytes of memory are now available to Xbox Series S developers...This gives developers more control over memory, which can improve graphics performance in memory-constrained conditions."
