News Posts matching #VRAM

NVIDIA Fine-Tunes Llama3.1 Model to Beat GPT-4o and Claude 3.5 Sonnet with Only 70 Billion Parameters

NVIDIA has officially released its Llama-3.1-Nemotron-70B-Instruct model. Based on Meta's Llama 3.1 70B, the Nemotron model is a large language model customized by NVIDIA to improve the helpfulness of LLM-generated responses. NVIDIA uses structured fine-tuning data to steer the model and allow it to generate more helpful responses. With only 70 billion parameters, the model is punching far above its weight class. The company claims that the model beats the current top models from leading labs, such as OpenAI's GPT-4o and Anthropic's Claude 3.5 Sonnet, which currently lead across AI benchmarks. In evaluations such as Arena Hard, the NVIDIA Llama3.1 Nemotron 70B scores 85 points, while GPT-4o and Claude 3.5 Sonnet score 79.3 and 79.2, respectively. In other benchmarks, such as AlpacaEval and MT-Bench, NVIDIA also holds the top spot, with scores of 57.6 and 8.98, respectively. Claude 3.5 Sonnet and GPT-4o reach 52.4 / 8.81 and 57.5 / 8.74, just below Nemotron.

This language model underwent training using reinforcement learning from human feedback (RLHF), specifically employing the REINFORCE algorithm. The process involved a reward model based on a large language model architecture and custom preference prompts designed to guide the model's behavior. The training began with a pre-existing instruction-tuned language model as the starting point: Llama-3.1-70B-Instruct served as the initial policy, trained against the Llama-3.1-Nemotron-70B-Reward model on HelpSteer2-Preference prompts. Running the model locally requires either four 40 GB or two 80 GB VRAM GPUs, plus 150 GB of free disk space. We managed to take it for a spin on NVIDIA's website to say hello to TechPowerUp readers. The model also passes the infamous "strawberry" test, where it has to count the number of specific letters in a word; however, it appears that this test was part of the fine-tuning data, as the model fails the next test, shown in the image below.
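For readers with the hardware to try it, here is a minimal sketch of running the model locally with Hugging Face transformers. The model ID below matches the HF-format checkpoint NVIDIA published; treat the exact settings as assumptions rather than an official recipe.

```python
# Minimal sketch: load Llama-3.1-Nemotron-70B-Instruct across multiple GPUs.
# Assumes enough combined VRAM (e.g. two 80 GB or four 40 GB cards) and
# ~150 GB of free disk space for the weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory use versus fp32
    device_map="auto",           # shards the weights across available GPUs
)

messages = [{"role": "user", "content": "Hello TechPowerUp readers!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```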

NVIDIA's RTX 5060 "Blackwell" Laptop GPU Comes with 8 GB of GDDR7 Memory Running at 28 Gbps, 25 W Lower TGP

In a recent event hosted by Chinese laptop manufacturer Hasee, the company's chairman, Wu Haijun, unveiled exciting details about NVIDIA's upcoming GeForce RTX 5060 "Blackwell" laptop GPU. Attending the event was industry insider Golden Pig Upgrade, who managed to catch some details of the card, which is set to launch next year. The RTX 5060 is expected to be the first on the market to feature GDDR7 memory, a move that aligns with earlier leaks suggesting NVIDIA's entire Blackwell lineup would adopt the new standard. This upgrade is anticipated to deliver substantial boosts in bandwidth and possibly increased VRAM capacities in other SKUs. Perhaps most intriguing is the reported performance of the RTX 5060. Wu said this laptop SKU could offer performance comparable to the current RTX 4070 laptop GPU: it is said to exceed the RTX 4070 in ray tracing scenarios and match or come close to its rasterization performance.

This leap in capabilities is made even more impressive by the chip's reduced power consumption, with a maximum TGP of 115 W compared to the RTX 4060's 140 W. The reported power efficiency gains are not exclusive to the RTX 5060; Wu suggests that the entire Blackwell lineup will see significant reductions in power draw, potentially lowering overall system power consumption by 40 to 50 watts in many Blackwell models. While specific technical details remain limited, it's believed the RTX 5060 will utilize the GB206 GPU die paired with 8 GB of GDDR7 memory, likely running at 28 Gbps in its initial iteration.
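For context, that data rate translates into peak memory bandwidth as follows; note that the 128-bit bus width used here is an assumption typical of xx60-class GPUs, not a confirmed specification.

```python
# Peak memory bandwidth = per-pin data rate x bus width.
# The 28 Gbps figure comes from the leak; the 128-bit bus is assumed.
data_rate_gbps = 28       # GDDR7 per-pin data rate
bus_width_bits = 128      # assumed bus width for a GB206-based SKU
bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(f"Peak bandwidth: {bandwidth_gb_s:.0f} GB/s")  # 448 GB/s
```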

Stability AI Outs Stable Diffusion 3 Medium, Company's Most Advanced Image Generation Model

Stability AI, a maker of various generative AI models and the company behind the text-to-image Stable Diffusion models, has released its latest Stable Diffusion 3 (SD3) Medium AI model. With two billion dense parameters, SD3 Medium is the company's most advanced text-to-image model to date. It boasts features like generating highly realistic and detailed images across a wide range of styles and compositions, and it demonstrates capabilities in handling intricate prompts that involve spatial reasoning, actions, and diverse artistic directions. The model's innovative architecture, including a 16-channel variational autoencoder (VAE), allows it to overcome common challenges faced by other models, such as accurately rendering realistic human faces and hands.

Additionally, it achieves exceptional text quality, with precise letter formation, kerning, and spacing, thanks to the Diffusion Transformer architecture. Notably, the model is resource-efficient, capable of running smoothly on consumer-grade GPUs without compromising performance, due to its low VRAM footprint. Furthermore, it exhibits impressive fine-tuning abilities, allowing it to absorb and replicate nuanced details from small datasets, making it highly customizable for users' specific use cases. Being an open-weight model, it is available for download on HuggingFace, and it has libraries optimized for both NVIDIA's TensorRT (all modern NVIDIA GPUs) and AMD Radeon/Instinct GPUs.
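As a rough illustration of what that consumer-friendly footprint looks like in practice, here is a minimal sketch using Hugging Face's diffusers library; the model ID reflects the public release, but the settings are our assumptions rather than Stability AI's official recipe, and a half-precision load is used to keep VRAM usage down.

```python
# Minimal sketch: generate one image with Stable Diffusion 3 Medium.
# Assumes a CUDA GPU and the publicly listed diffusers-format checkpoint.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,  # half precision keeps the VRAM footprint low
)
pipe.to("cuda")

image = pipe(
    prompt="a photo of a cat holding a sign that says hello world",
    num_inference_steps=28,   # assumed defaults for illustration
    guidance_scale=7.0,
).images[0]
image.save("sd3_medium_output.png")
```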

22 GB Modded GeForce RTX 2080 Ti Cards Listed on eBay - $499 Per Unit

An eBay store—customgpu_official—is selling memory-modified GeForce RTX 2080 Ti graphics cards. The outfit (located in Palo Alto, California) has a large inventory of MSI GeForce RTX 2080 Ti AERO cards, judging from the listing's photo gallery. Workers in China are reportedly upgrading these (possibly refurbished) units with extra lashings of GDDR6 VRAM—going from the original 11 GB specification up to 22 GB. We have observed smaller-scale GeForce RTX 2080 Ti modification projects and a very ambitious user-modified example in the past, but customgpu's latest endeavor targets a growth industry—the item description states: "Why do you need a 22 GB 2080 Ti? Large VRAM is essential to cool AIGC apps such as stable diffusion fine tuning, LLAMA, LLM." At the time of writing, three cards are available to purchase, and interested customers have already acquired four memory-modded units.

They advertise their upgraded "Turbo Edition" card as a great "budget alternative" to more modern GeForce RTX 3090 and 4090 models—"more information and videos" can be accessed via 2080ti22g.com. The MSI GeForce RTX 2080 Ti AERO 11 GB model is not documented within TPU's GPU database, but its dual-slot custom cooling solution is also sported by the MSI RTX 2080 SUPER AERO 8 GB graphics card. The AERO's blower fan system creates a "mini-wind tunnel, pulling fresh air from inside the case and blowing it out the IO panel, and out of the system." The seller's asking price is $499 per unit—perhaps a little steep for used cards (potentially involved in mining activities), but customgpu_official seems to be well versed in repairs. Other eBay listings show non-upgraded MSI GeForce RTX 2080 Ti AERO cards selling in the region of $300 to $400. Custom GPU Upgrade and Repair's hype video proposes that their modified card offers great value, given that it sells for a third of the cost of a GeForce RTX 3090—though their eBay item description contradicts this claim: "only half price compared with GeForce RTX 3090 with almost the same GPU memory."

Possible NVIDIA GeForce RTX 3050 6 GB Edition Specifications Appear

Alleged full specifications have leaked for NVIDIA's upcoming GeForce RTX 3050 6 GB graphics card, showing extensive reductions beyond merely shrinking the memory size versus the 8 GB model. If accurate, performance could lag the existing RTX 3050 8 GB SKU by up to 25%, making it weaker competition even for AMD's budget RX 6500 XT. Previous rumors suggested only capacity and bandwidth differences on a partially disabled memory bus between the 3050 variants, which would reduce the memory to 6 GB on a 96-bit bus, down from 8 GB on a 128-bit bus. But the leaked specs indicate CUDA core counts, clock speeds, and TDP all see cuts for the upcoming 6 GB version. With 18 SMs and 2304 cores rather than 20 SMs and 2560 cores, at lower base and boost frequencies, the impact looks more severe than expected. A 70 W TDP does allow passive cooling but hurts performance versus the 3050 8 GB's 130 W design.

Some napkin math suggests the 3050 6 GB could deliver only 75% of its elder sibling's frame rates, putting it more in line with the entry-level 6500 XT. While having 50% more VRAM helps, the dramatic core and clock downgrades counteract that memory advantage. According to rumors, the RTX 3050 6 GB is set to launch in February, bringing lower-end Ampere to even more budget-focused builders. But with specifications seemingly hobbled beyond just capacity, its real-world gaming value remains to be determined. NVIDIA likely intends the RTX 3050 6 GB primarily for less demanding esports titles. Given the scale of the cutbacks and modern AAA titles' recommended specifications, mainstream AAA gaming performance seems improbable.
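For the curious, that 75% estimate can be reproduced with simple scaling: throughput roughly tracks core count times clock speed. The core counts below come from the leak, but the clock ratio is an illustrative assumption, since exact frequencies have not been published.

```python
# Rough napkin math behind the "~75%" estimate.
cores_ratio = 2304 / 2560                  # 18 SMs vs. 20 SMs -> 0.90
clock_ratio = 0.84                         # assumed ~16% lower boost clock
relative_perf = cores_ratio * clock_ratio  # ~0.76, i.e. roughly 75%
print(f"Estimated relative performance: {relative_perf:.0%}")
```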

Lords of the Fallen Gets New Patches, Latest Patch 1.1.207 Brings Stability Improvements

Lords of the Fallen has received plenty of patches in the last few days: two of them, Patch 1.1.199 and 1.1.203, launched yesterday, and the latest, Patch 1.1.207, launched earlier today. The previous two fixed GPU crashes on AMD graphics cards and included a fix for a communication issue between the drivers and DirectX 12. Patch 1.1.203 also brought a reduction in VRAM usage that should provide additional headroom for GPUs operating at the limit, which in turn should provide a substantial performance improvement, at least according to the developer.

The latest Patch 1.1.207 brought further stability improvements, fixing several crash issues and implementing various optimization, multiplayer, gameplay, AI, quest, and other improvements. The release notes also mention that a fix for the issue causing the game to crash on Steam Deck is complete and should be published as soon as it passes QA.

AMD's Radeon RX 6750 GRE Specs and Pricing Revealed

There have been several rumours about AMD's upcoming RX 6750 GRE graphics cards, which may or may not be limited to the Chinese market. Details have now appeared of not one, but two different RX 6750 GRE SKUs, courtesy of @momomo_us on Twitter/X, and it seems like AMD has simply re-branded the RX 6700 XT and RX 6700, adjusted the clock speeds minimally, and slapped a new cooler on the cards. To call this disappointing would be an understatement, but then again, these cards weren't expected to bring anything new to the table.

The fact that AMD is calling the cards the RX 6750 GRE 10 GB and RX 6750 GRE 12 GB will help confuse consumers as well, especially when you consider that the two cards were clearly different SKUs when they launched as the RX 6700 and RX 6700 XT. Now it just looks like one has less VRAM than the other, when in fact it also has a different GPU. At least the pricing difference between the two SKUs is minimal, with the 10 GB model having an MSRP of US$269 and the 12 GB model coming in at a mere $20 more, at US$289. The RX 6700 XT had a launch price of US$479 and still retails for over US$300, which at least makes these refreshed products somewhat more wallet friendly.

Gigabyte AORUS Laptops Empower Creativity with AI Artistry

GIGABYTE, the world's leading computer hardware brand, featured its AORUS 17X laptop in videos by two influential AI-content-focused YouTubers, Hugh Hou and MDMZ. Both YouTubers put the new AORUS 17X laptop through AI image and AI video generation tests and found great potential in how the laptop could benefit their workflows. The AORUS 17X laptop is powered by NVIDIA GeForce RTX 40 series GPUs to unlock new horizons for creativity and become the go-to choice for art and tech enthusiasts.

Hugh Hou: Unleashing the Power of AI Arts with the AORUS 17X Laptop
Hugh Hou's journey into AI arts, powered by Stable Diffusion XL, garnered viral success. The AORUS 17X laptop emerged as a game-changer with up to 16 GB of VRAM, enabling local AI photo and video generation without incurring hefty cloud-rendering costs. It empowers creators, outperforms competitors in AI-assisted tasks, and enhances AI artistry.

AMD's Radeon RX 7900 GRE Gets Benchmarked

AMD's China-exclusive Radeon RX 7900 GRE has been put through its paces by Expreview, and the card, priced at the equivalent of US$740, should, in short, not carry the 7900-series moniker. In most of the tests, the card performs like a Radeon RX 6950 XT or worse, even being beaten by the Radeon RX 6800 XT in 3DMark Fire Strike, if only by the tiniest amount. Expreview has done a fairly limited comparison, mainly pitching the Radeon RX 7900 GRE against the Radeon RX 7900 XT and NVIDIA's GeForce RTX 4070; it loses by a mile to AMD's higher-end GPU, which was by no means unexpected, as this is a lower-tier product.

However, when it comes to the GeForce RTX 4070, AMD struggles to keep up at 1080p, where NVIDIA takes home the win in games like The Last of Us Part I and Diablo 4. In games like F1 22 and Assassin's Creed Valhalla, AMD is only ahead by a mere percentage point or less. Once ray tracing is enabled, AMD only wins in F1 22, again by less than one percent, and in Far Cry 6, where AMD is almost three percent faster. Moving up in resolution, the Radeon RX 7900 GRE ends up being a clear winner, most likely partially due to having 16 GB of VRAM, and at 1440p the GeForce RTX 4070 also falls behind in most of the ray-traced game tests, if only just in most of them. At 4K the NVIDIA card can no longer keep up, but the Radeon RX 7900 GRE isn't really a 4K champion either, dropping under 60 FPS in more resource-heavy games like Cyberpunk 2077 and The Last of Us Part I. Considering the GeForce RTX 4070 Ti only costs around US$50 more, it seems like it would be the better choice, despite having less VRAM. AMD appears to have pulled an NVIDIA with this card, which, at least performance-wise, seems to belong in the Radeon RX 7800 segment. The benchmark figures also suggest that the actual Radeon RX 7800 cards won't be worth the wait, unless AMD prices them very competitively.

Update 11:45 UTC: [Editor's note: The official MSRP from AMD appears to be US$649 for this card, which is more reasonable, but the performance still places it in a category lower than the model name suggests.]

GDDR6 VRAM Prices Falling According to Spot Market Analysis - 8 GB Selling for $27

The price of GDDR6 memory has continued to fall sharply - over recent financial quarters - due to an apparent decrease in demand for graphics cards. Supply shortages are also a thing of the past—industry experts think that manufacturers have been having an easier time acquiring components since late 2021, but that also means that the likes of NVIDIA and AMD have been paying less for VRAM packages. Graphics card enthusiasts will be questioning why these savings have not been passed on swiftly to the customer, as technology news outlets (this week) have been picking up on interesting data—it demonstrates that spot prices of GDDR6 have decreased to less than a quarter of their value from a year and a half ago. 3DCenter.org has presented a case example of 8 GB GDDR6 now costing $27 via the spot market (through DRAMeXchange's tracking system), although manufacturers will be paying less than that due to direct contract agreements with their favored memory chip maker/supplier.
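Broken down per package, that headline figure implies remarkably cheap chips. The quick arithmetic below assumes the 8 GB is made up of eight 8 Gb (1 GB) packages; the density split is our assumption, as the report does not specify it.

```python
# Per-package math behind the $27-per-8-GB spot figure.
# Assumes eight 8 Gb (1 GB) GDDR6 packages; the split is not confirmed.
total_usd = 27.00
num_chips = 8                       # 8 x 8 Gb = 8 GB (assumed configuration)
print(f"Per 8 Gb package: ${total_usd / num_chips:.2f}")  # ~$3.38
print(f"Per GB of VRAM: ${total_usd / 8:.2f}")            # ~$3.38
```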

A 3DCenter.org staffer had difficulty sourcing the price of 16 Gb GDDR6 VRAM ICs on the spot market, so it is tricky to paint a comparative picture of how much more expensive it is to equip a "budget-friendly" graphics card with a larger allocation of video memory, once the bill of materials (BoM) and the limits presented by narrow bus widths are taken into account. NVIDIA is releasing a GeForce RTX 4060 Ti 16 GB variant in July, but the latest batch of low-to-mid-range models (GeForce RTX 4060 series and Radeon RX 7600) are still 8 GB affairs. Tom's Hardware points to GPU makers sticking with the traditional specification hierarchy for the most part going forward: "(models) with double the VRAM (two 16 Gb chips per channel on both sides of the PCB) are usually reserved for the more lucrative professional GPU market."

Intel Announces Intel Arc Pro A60 and Pro A60M GPUs

Today, Intel introduced the Intel Arc Pro A60 and Pro A60M as new members of the Intel Arc Pro A-series professional range of graphics processing units (GPUs). The new products are a significant step up in performance for the Intel Arc Pro family and are carefully designed for professional workstation users, with up to 12 GB of video memory (VRAM) and support for four displays with high dynamic range (HDR) and Dolby Vision.

With built-in ray tracing hardware, graphics acceleration, and machine learning capabilities, the Intel Arc Pro A60 GPU unites fluid viewports, the latest in visual technologies, and rich content creation in a traditional single-slot form factor.

NVIDIA Explains GeForce RTX 40 Series VRAM Functionality

NVIDIA receives a lot of questions about graphics memory, also known as the frame buffer, video memory, or "VRAM," and so with the unveiling of our new GeForce RTX 4060 Family of graphics cards we wanted to share some insights, so gamers can make the best buying decisions for their gaming needs.

What Is VRAM?

VRAM is high-speed memory located on your graphics card.

It's one component of a larger memory subsystem that helps make sure your GPU has access to the data it needs to smoothly process and display images. In this article, we'll describe memory subsystem innovations in our latest-generation Ada Lovelace GPU architecture, as well as how the speed and size of GPU cache and VRAM impact performance and the gameplay experience.

Xbox Series S Hitting VRAM Limits, 8 GB is the Magic Number

Microsoft launched two flavors of its Xbox Series console back in November 2020 - a more expensive and powerful "X" model appealing to hardcore enthusiasts arrived alongside an entry-level, budget-friendly "S" system with lesser hardware specifications. The current-generation Xbox consoles share the same custom AMD 8-core Zen 2 processor, albeit with different clock configurations, but the key divergence lies in Microsoft's choice of graphics hardware. The Series X packs an AMD "Scarlett" graphics processor with access to 16 GB of VRAM, while the Series S makes do with only 8 GB of high-speed video memory for its "Lockhart" GPU.

Game studios have historically struggled to optimize their projects for the step-down Xbox model - with software engineers complaining about memory allocation issues thanks to a smaller pool of VRAM, since the Series S CPU and GPU have to fight over a total of 10 GB of GDDR6 system memory. Microsoft listened to this feedback and made the necessary changes last year - an updated SDK was released, and a video briefing explained: "Hundreds of additional megabytes of memory are now available to Xbox Series S developers...This gives developers more control over memory, which can improve graphics performance in memory-constrained conditions."

Retail Leak of Custom GeForce RTX 4060 Ti Models Reconfirms Standard 8 GB VRAM Config

NVIDIA is expected to roll out its RTX 4060 Ti GPU series later this month, but the American hardware giant has not issued any official material about these cards - leaks have so far been the only sources of specification and configuration information. A launch price of $450 is mooted, and the low-to-midrange RTX 4060 Ti is likely to sport NVIDIA's AD106 GPU (based on the Ada Lovelace graphics architecture). Previous leaks have indicated that this variant will be offered in an 8 GB VRAM configuration with a 128-bit memory interface, although TPU's hardware archivist (T4C Fantasy) reckons that NVIDIA's board partners can opt for larger pools of video memory.

A retailer in Russia has released (possibly in error) catalog listings for four upcoming Palit-branded GeForce RTX 4060 Ti graphics card models. The board partner seems to be offering two sets of its Dual and StormX designs - in normal and overclocked variations - but all of these models share the bog-standard VRAM allocation of 8 GB, paired with the usual 128-bit memory interface. Palit's Dual cooler design is currently available on its RTX 4070-based models, but the StormX cooling solution has not been applied to any 40-series cards so far - the upcoming RTX 4060 Ti cards will debut a (likely) smaller StormX shroud, heatsink, and fan combination.

AMD Marketing Highlights Sub-$500 Pricing of 16 GB Radeon GPUs

AMD's marketing department this week continued its battle to outwit arch-rival NVIDIA in the GPU VRAM pricing wars - Sasa Marinkovic, a senior director at Team Red's gaming promotion department, tweeted out a simple and concise statement yesterday: "Our @amdradeon 16 GB gaming experience starts at $499." He included a helpful chart that lines up part of the AMD Radeon GPU range against a couple of hand-picked NVIDIA GeForce RTX cards, with emphasis on comparing pricing and respective allotments of VRAM. The infographic marks AMD's first official declaration of the (last-generation "Big Navi" architecture) RX 6800 GPU bottoming out at $499, an all-time low, as well as a hefty cut affecting the old range-topping RX 6950 XT - now available for $649 (an ASRock version is going for $599 at the moment). The RX 6800 XT sits in between at $579, but it is curious that the RX 6900 XT did not get a slot on the chart.

AMD's latest play against NVIDIA in the video memory stakes is nothing really new - earlier this month it encouraged potential customers to select one of its pricey current-generation RX 7900 XT or XTX GPUs. The main reason given is that the hefty Radeon cards pack more onboard VRAM than equivalent GeForce RTX models - namely the 4070 Ti and 4080 - and are therefore future-proofed for increasingly memory-hungry games. The latest batch of marketing did not account for board partner variants of the (RDNA 3-based) RX 7900 XT GPU selling for as low as $762 this week.

Modded NVIDIA GeForce RTX 3070 With 16 GB of VRAM Shows Impressive Performance Uplift

A memory mod for the NVIDIA GeForce RTX 3070 that doubles the amount of VRAM has shown some impressive performance gains, especially in the most recent games. While the mod was more complicated than earlier ones, since it required some additional PCB soldering, the one tested game shows an incredible performance boost in the 1% and 0.1% lows as well as in the average frame rate.

Modding the NVIDIA GeForce RTX 3070 to 16 GB of VRAM is not a bad idea, since NVIDIA already planned a similar card (RTX 3070 Ti 16 GB) but eventually cancelled it. With today's games using more than 8 GB of VRAM, some RTX 30-series graphics cards can struggle to push playable FPS. The modder benchmarked the new Resident Evil 4 at very high settings, showing that the additional 8 GB of VRAM is the difference between stuttering and smooth gameplay.

Colorful Custom RTX 4060 Ti GPU Clocks Outed, 8 GB VRAM Confirmed

Resident TechPowerUp hardware database overseer T4C Fantasy has divulged some early information about a custom version of the NVIDIA GeForce RTX 4060 Ti - Colorful's catchily named iGame RTX 4060 Ti Ultra White OC model has been added to the TPU GPU database, and T4C Fantasy has revealed a couple of tidbits on Twitter. The GPU has been tuned to a maximum boost clock of 2580 MHz, jumping from a base clock of 2310 MHz. According to past leaks, the reference version of the GeForce RTX 4060 Ti has a default boost clock of 2535 MHz, so Colorful's engineers have managed to add another 45 MHz on top of that with their custom iteration - roughly 2% more than the reference default.
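That roughly-2% figure checks out with some quick arithmetic:

```python
# Comparing Colorful's factory overclock against the rumored reference boost.
reference_boost_mhz = 2535   # rumored RTX 4060 Ti reference boost clock
colorful_boost_mhz = 2580    # iGame RTX 4060 Ti Ultra White OC
delta = colorful_boost_mhz - reference_boost_mhz
print(f"+{delta} MHz ({delta / reference_boost_mhz:.1%})")  # +45 MHz (1.8%)
```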

T4C Fantasy also confirmed that the Colorful iGame RTX 4060 Ti Ultra W OC will be equipped with 8 GB of VRAM, which matches the reference model's rumored memory spec. T4C Fantasy points out that brands have the option to produce RTX 4060 Ti cards with a larger pool of attached video memory, but launch models will likely stick with the standard allotment of 8 GB. The RTX 4060 Ti is listed as being based on the Ada Lovelace GPU architecture (GPU variant AD106-350-A1), and T4C Fantasy expects that Team Green will stick with a 5 nm process size - contrary to reports of a transition to 4 nm manufacturing at TSMC's foundries.

AMD Plays the VRAM Card Against NVIDIA

In a blog post, AMD has pulled the VRAM card against NVIDIA, telling potential graphics card buyers that they should consider AMD over NVIDIA because current and future games will require more VRAM, especially at higher resolutions. It's no secret that there has been something of a consensus among at least some of the PC gaming crowd that NVIDIA is being too stingy when it comes to VRAM on its graphics cards, and AMD is clearly trying to cash in on that sentiment with its latest blog post. AMD is showing the VRAM usage in games such as Resident Evil 4—with and without ray tracing at that—The Last of Us Part I, and Hogwarts Legacy, all games that use 11 GB of VRAM or more.

AMD does have a point here, but as the company has yet to launch anything below the Radeon RX 7900 XT in the 7000 series, AMD is mostly comparing its 6000-series cards with NVIDIA's 3000-series cards, most of which are getting hard to purchase and are potentially less interesting for those looking to upgrade their system. That said, AMD also compares its two 7000-series cards to the NVIDIA RTX 4070 Ti and the RTX 4080, claiming up to a 27 percent lead over NVIDIA in performance. Based on TPU's own tests of some of these games, albeit most likely using different test scenarios, the figures provided by AMD don't seem to reflect real-world performance. It's also surprising to see AMD claim its RX 7900 XTX beats NVIDIA's RTX 4080 in ray tracing performance in Resident Evil 4 by 23 percent, where our own tests show NVIDIA in front by a small margin. Make of this what you will, but one thing is fairly certain: future games will require more VRAM, and the need for a powerful GPU most likely isn't going away either.

DirectX 12 API New Feature Set Introduces GPU Upload Heaps, Enables Simultaneous Access to VRAM for CPU and GPU

Microsoft has implemented two new features in its DirectX 12 API - GPU Upload Heaps and Non-Normalized Sampling have been added via the latest Agility SDK 1.710.0 preview, and the former looks to be the more intriguing of the pair. The SDK preview has only been accessible to developers since its official introduction on Friday, March 31. Support has also been initiated via the latest graphics drivers issued by NVIDIA, Intel, and AMD. The Microsoft team has this to say about the preview version of the GPU Upload Heaps feature in DirectX 12: "Historically a GPU's VRAM was inaccessible to the CPU, forcing programs to have to copy large amounts of data to the GPU via the PCI bus. Most modern GPUs have introduced VRAM resizable base address register (BAR) enabling Windows to manage the GPU VRAM in WDDM 2.0 or later."

They continue to describe how the update allows the CPU to gain access to the pool of VRAM on the connected graphics card: "With the VRAM being managed by Windows, D3D now exposes the heap memory access directly to the CPU! This allows both the CPU and GPU to directly access the memory simultaneously, removing the need to copy data from the CPU to the GPU increasing performance in certain scenarios." This GPU optimization could offer many benefits in the context of computer games, since memory requirements continue to grow in line with an increase in visual sophistication and complexity.

Halo Infinite's Latest PC Patch Shifts Minimum GPU Spec Requirements, Below 4 GB of VRAM Insufficient

The latest patch for Halo Infinite has introduced an undesired side effect for a select portion of its PC player base. Changes to the minimum system requirements were not clarified by 343 Industries in their patch notes, but it appears that the game now refuses to launch for owners of older GPU hardware. A limit of 4 GB of VRAM has been listed as the bare minimum since Halo Infinite's launch in late 2021, with the AMD Radeon RX 570 and NVIDIA GeForce GTX 1050 Ti cards representing the entry-level GPU tier; basic versions of both were fitted with 4 GB of VRAM as standard.

Apparently, users running the GTX 1060 3 GB model were able to launch and play the game just fine prior to the latest patch, due to it being more powerful than the entry-level cards, but now it seems that the advertised hard VRAM limit has finally gone into full effect. The weaker RX 570 and GTX 1050 Ti cards are still capable of running Halo Infinite after the introduction of season 3 content, but a technically superior piece of hardware cannot, which is unfortunate for owners of the GTX 1060 3 GB model who want to play Halo Infinite in its current state.