News Posts matching #Graphics


PowerColor Teases Radeon RX 6800 XT "Red Devil" Edition Graphics Card

PowerColor, creator of the iconic "Red Devil" flagship designs of AMD Radeon graphics cards, has today posted a teaser for its upcoming Radeon RX 6800 XT card. With the custom PowerColor Radeon RX 6800 XT Red Devil, the company promises consumers its best engineering and design, and today we are getting a first glimpse of what is to come. Pictured below is the back side of the card, with a dark metallic backplate illuminated by the Red Devil logo; the teaser image also reveals a bit more of the card, including the printed Red Devil logo. This custom model is expected to be a triple-slot, triple-fan design. With AMD's reference cards priced at an MSRP of $649, this custom card will likely be pricier. Below you can see that the Red Devil has awoken amid the wait for custom cards to arrive:
PowerColor Radeon RX 6800 XT Red Devil

Intel Xe-HP "NEO Graphics" GPU with 512 EUs Spotted

Intel is preparing to flood the market with its Xe GPU lineup, covering the entire spectrum from low-end to high-end consumer graphics cards. Just a few days ago, the company announced its Iris Xe MAX, its first discrete GPU, aimed at 1080p gamers and content creators. However, that seems to be only the beginning of Intel's GPU plan and just a small piece of the entire lineup. Next year, the company is expected to launch two GPU families: Xe-HP, a data-centric GPU codenamed Arctic Sound, and Xe-HPG, a gaming-oriented GPU called DG2. Today, thanks to a GeekBench listing, we have some information on the Xe-HP GPU.

Listed with 512 EUs (Execution Units), translating into 4,096 shading units, the GPU is reportedly a Xe-HP variant codenamed "NEO Graphics". This is not the first time NEO Graphics has been mentioned; Intel used the name before, on its Architecture Day, when the company demonstrated the chip's FP32 performance. The new GeekBench leak shows the GPU running at a 1.15 GHz clock speed, whereas at Architecture Day the same GPU ran at 1.3 GHz, indicating that this is only an engineering sample. The GPU ran GeekBench's OpenCL test and scored a very low 25,475 points. Compared to NVIDIA's GeForce RTX 3070, which scored 140,484 points, the Intel GPU is more than five times slower. That is possibly due to a lack of optimization for the benchmark, which could improve greatly in the future. In the first picture below, this Xe-HP GPU would represent the single-tile design.
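A quick sanity check of that gap, using only the two scores quoted above:

```python
# GeekBench OpenCL scores quoted in the article
xe_hp_score = 25_475      # Intel Xe-HP "NEO Graphics" engineering sample
rtx_3070_score = 140_484  # NVIDIA GeForce RTX 3070

ratio = rtx_3070_score / xe_hp_score
print(f"RTX 3070 scores {ratio:.1f}x higher")  # about 5.5x
```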

GIGABYTE Announces AORUS XTREME GeForce RTX 30 Series WATERFORCE Graphics Card

GIGABYTE, the world's leading premium gaming hardware manufacturer, today announced the latest GeForce RTX 30 Series WATERFORCE graphics cards powered by the NVIDIA Ampere architecture—AORUS GeForce RTX 3090 XTREME WATERFORCE WB 24G, AORUS GeForce RTX 3080 XTREME WATERFORCE WB 10G, AORUS GeForce RTX 3090 XTREME WATERFORCE 24G, and AORUS GeForce RTX 3080 XTREME WATERFORCE 10G. AORUS is the world's first manufacturer to include patent-pending leak detection in the WATERFORCE WB open-loop graphics cards. The built-in leak-detection circuit covers the entire fitting and water block and can promptly alert users with a flashing light at the first sign of a leak, so users can deal with the leakage early and prevent any further damage to the system.

AORUS WATERFORCE WB is ideal for those who wish to build open-loop liquid-cooling systems. GIGABYTE specializes in thermal solutions, providing optimal channel spacing between the micro fins for enhanced heat transfer from the GPU via stable water flow. The sunken copper micro fins also shorten the heat-conduction path from the GPU, so heat can be transferred to the water-channel area quickly. Moreover, the cover and backplate of the new-gen WATERFORCE WB feature customizable RGB lighting, so users can create their own PC styles and bring creativity into their liquid-cooling systems.

EVGA Unleashes XOC BIOS for GeForce RTX 3090 FTW3 Graphics Card

EVGA has today published an "XOC" BIOS for its GeForce RTX 3090 FTW3 graphics cards. The XOC BIOS is designed for extreme overclocking, raising the card's power limit by quite a few additional watts so the GPU core is no longer power-limited and overclockers can push the card to its full potential. To run the XOC BIOS on your GeForce RTX 3090 FTW3, you need an adequate cooling solution and a sufficient power supply; EVGA recommends at least an 850 W Gold-rated PSU. That recommendation is a sign that the XOC BIOS will raise system power consumption by quite a bit: it enables a GPU power limit of 500 W. It is important to note that EVGA does not guarantee any performance increase or overclock while using this BIOS update.
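A rough power-budget check shows why EVGA's PSU recommendation makes sense; the 250 W figure for the rest of the system is an illustrative assumption, not an EVGA number:

```python
psu_capacity_w = 850     # EVGA's recommended minimum PSU
xoc_gpu_limit_w = 500    # power limit enabled by the XOC BIOS
rest_of_system_w = 250   # assumed CPU/motherboard/storage draw (illustrative only)

headroom_w = psu_capacity_w - xoc_gpu_limit_w - rest_of_system_w
print(f"{headroom_w} W of headroom")  # 100 W of headroom
```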

You can download the EVGA XOC BIOS for the GeForce RTX 3090 FTW3 here. To install it, unzip the file, run Update.exe, and restart your PC after the update completes. EVGA hosts both the normal BIOS (so you can revert) and the XOC BIOS there, so be careful to choose the right file. You can use the TechPowerUp GPU-Z tool to verify the BIOS installation.

AMD Graphics Drivers Have a CreateAllocation Security Vulnerability

Discovering vulnerabilities in software is not an easy thing to do; there are many use cases and states that need to be tested to surface a possible flaw. Still, security researchers know how to find them, and they usually report them to the company that made the software. Today, AMD disclosed a vulnerability present in the company's graphics driver. Called CreateAllocation (CVE-2020-12911), the vulnerability carries a CVSS v3 score of 7.1, meaning that it is not a top priority, but it still represents a significant problem.

"A denial-of-service vulnerability exists in the D3DKMTCreateAllocation handler functionality of AMD ATIKMDAG.SYS 26.20.15029.27017. A specially crafted D3DKMTCreateAllocation API request can cause an out-of-bounds read and denial of service (BSOD). This vulnerability can be triggered from a guest account, " says the report about the vulnerability. AMD states that a temporary fix is implemented by simply restarting your computer if a BSOD happens. The company also declares that "confidential information and long-term system functionality are not impacted". AMD plans to release a fix for this software problem sometime in 2021 with the new driver release. You can read more about it here.

NVIDIA Announces GeForce Ampere RTX 3000 Series Graphics Cards: Over 10000 CUDA Cores

NVIDIA just announced its new-generation GeForce "Ampere" graphics card series. The company is taking a top-to-bottom approach with this generation, much like "Turing," by launching its two top-end products first: the GeForce RTX 3090 24 GB and the GeForce RTX 3080 10 GB graphics cards. Both cards are based on the 8 nm "GA102" silicon. Join us as we live-blog the pre-recorded stream by NVIDIA, hosted by CEO Jen-Hsun Huang.

AMD Releases Radeon Software Adrenalin 2020 Edition 20.8.2 Beta

Today, AMD released their Radeon Software Adrenalin 2020 Edition 20.8.2 Beta drivers. This latest driver release brings support and improvements for numerous titles. A Total War Saga: Troy sees the most significant boost, with up to 12% better FPS on the High preset when using a Radeon RX 5700 XT. Other titles with improved support include Microsoft Flight Simulator, Mortal Shell, and the Marvel's Avengers Open Beta. Meanwhile, AMD's list of fixes, while short, is no less important. They solved an issue with intermittent system hangs when exiting sleep on some AMD Ryzen 3000 mobile processors with Radeon Graphics, which should make life easier for numerous users. They also fixed a system freeze, and a failure to recognize keyboard input, that occurred with the Radeon Overlay open or when exiting it while playing Hyper Scape. The full list of features and improvements can be found below.
DOWNLOAD: AMD Radeon Software Adrenalin 20.8.2 Beta

Microsoft's New Windows Update Allows GPU Selection According to Workload

Microsoft's future update to Windows 10 will add a GPU-aware selector that allows both the OS and the user to adaptively pick the best GPU for each usage scenario. The preview release of Windows 10 build 20190 implements this in two ways. First is an OS-level layer that automagically selects the best GPU for the task at hand from the installed options (say, an Intel iGPU and a discrete GPU). For web browsing or productivity, the OS is expected to switch to the less power-hungry option, whilst for gaming, which demands all cylinders, it would launch the app on the discrete GPU.

However, if you're not much into ceding that kind of control to the OS itself, you can override which GPU is used for a specific application. This change is made via a drop-down menu in the Graphics Settings page of the Settings panel. The feature should be a particular boon for laptops that lack a power-saving technology enabling this kind of behavior, and this OS-level integration opens up some other handy usages for power users as well.

AMD Releases Radeon Software Adrenalin 2020 Edition 20.8.1

AMD has today released the latest update to its Radeon graphics drivers in the form of Radeon Software Adrenalin 2020 Edition 20.8.1, which brings a list of improvements and bug fixes. Starting off, the new driver brings support for the Horizon Zero Dawn PC port, which comes out this Friday. Next, the driver improves the performance of the Radeon RX 5700 XT graphics card, which scored 9% higher FPS in the game Grounded when using the Epic preset. The new driver also enables support for the game Hyper Scape. For a full list of improvements and bug fixes, please check out the list below:
DOWNLOAD: AMD Radeon Software Adrenalin 20.8.1

The Curious Case of the 12-pin Power Connector: It's Real and Coming with NVIDIA Ampere GPUs

Over the past few days, we've heard chatter about a new 12-pin PCIe power connector for graphics cards being introduced, particularly from Chinese-language publication FCPowerUp, which included a picture of the connector itself. Igor's Lab also did an in-depth technical breakdown of the connector. TechPowerUp has some new information on this from a well-placed industry source. The connector is real, and will be introduced with NVIDIA's next-generation "Ampere" graphics cards. The connector appears to be NVIDIA's brain-child, and not that of any other IP or trading group such as the PCI-SIG, Molex, or Intel. It was designed in response to two market realities: high-end graphics cards inevitably need two power connectors, and a single cable is neater for consumers than wrestling with two; while lower-end (<225 W) graphics cards can make do with one 8-pin or 6-pin connector.

The new NVIDIA 12-pin connector has six 12 V and six ground pins. Its designers specify higher-quality contacts on both the male and female ends, which can handle higher current than the pins of 8-pin/6-pin PCIe power connectors. Depending on the PSU vendor, the 12-pin connector can even split in the middle into two 6-pin halves, and could be marketed as "6+6 pin." The points of contact between the two 6-pin halves are kept level so they align seamlessly.
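For a back-of-the-envelope idea of what six 12 V pins can deliver, assume a per-contact rating of around 8.5 A (a typical figure for compact high-current contacts; NVIDIA has not published an official rating):

```python
pins_12v = 6            # six 12 V pins (plus six ground returns)
rail_voltage = 12.0     # volts
amps_per_contact = 8.5  # assumed per-contact rating, not an official spec

max_power_w = pins_12v * rail_voltage * amps_per_contact
print(f"~{max_power_w:.0f} W theoretical delivery")  # ~612 W
```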

ASRock Launches Radeon RX 5600 XT Challenger Pro 6G OC Graphics Card

ASRock, the leading global motherboard, graphics card, and mini-PC manufacturer, has launched the new Radeon RX 5600 XT Challenger Pro 6G OC triple-fan graphics card. It features ASRock's newly styled shroud design with upgraded cooling fins, AMD's second-generation 7 nm Radeon RX 5600 XT GPU, 6 GB of 192-bit GDDR6 memory, and a PCI Express 4.0 bus. The ASRock Radeon RX 5600 XT Challenger Pro 6G OC ships with generous factory overclock settings, which enable users to enjoy a smooth 1080p gaming experience.

The ASRock Radeon RX 5600 XT Challenger Pro 6G OC adopts AMD's second-generation Radeon RX 5600 XT GPU. At factory defaults, the card reaches base/game/boost clocks of 1420/1615/up to 1750 MHz respectively; the boost clock is 4% higher than AMD's standard setting. Furthermore, the GDDR6 memory clock is set at 1750 MHz, 17% faster than AMD's default of 1500 MHz. The card is equipped with a triple-fan cooler, 6 GB of 192-bit GDDR6 memory, and the latest PCI Express 4.0 bus standard, partnering ideally with AMD Ryzen 3000 CPU systems and ASRock B550 and X570 motherboards. These premium specifications give the Radeon RX 5600 XT Challenger Pro 6G OC outstanding performance and deliver an excellent 1080p gaming experience.
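The quoted memory overclock can be verified directly from the two clock figures:

```python
asrock_mem_mhz = 1750   # ASRock's GDDR6 memory clock
amd_ref_mem_mhz = 1500  # AMD's default memory clock

uplift_pct = (asrock_mem_mhz / amd_ref_mem_mhz - 1) * 100
print(f"memory clock uplift: {uplift_pct:.0f}%")  # ~17%
```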

Cyberpunk 2077 Graphics Comparison Video Between 2018 and 2020 Builds Shows Many Differences

Cyberpunk 2077 is the year's most awaited game release, and it has already been met with not one but two delays. Originally expected to ship in April of this year, it was postponed to September, and now to November 19th, on account of extra optimization and bug-quashing by developer CD Projekt Red. However, the recent gameplay videos released by the developer showcase the amount of work that has gone into the engine since 2018, when we were first treated to a gameplay video.

The video after the break comes courtesy of YouTube user 'Cycu1', who set the 2018 and 2020 trailers side by side. In it, you can see significant improvements to overall level and character detail (though some of the difference can certainly be attributed to the lower-quality compression of the 2018 video). The video also showcases some lighting differences (whether these work out for better or worse is subjective, but the new videos supposedly make use of ray tracing). Another point I'd like to call your attention to is that there seem to be some environment differences between the two versions: some environments appear simplified compared to their 2018 counterparts, such as in the "Going Pro" mission, where the chair and panels were removed and replaced by what looks like a garage door. Whether this was done to improve performance is within CD Projekt Red's purview.

New AMD Radeon Pro 5600M Mobile GPU Brings Desktop-Class Graphics Performance and Enhanced Power Efficiency to 16-inch MacBook Pro

AMD today announced availability of the new AMD Radeon Pro 5600M mobile GPU for the 16-inch MacBook Pro. Designed to deliver desktop-class graphics performance in an efficient mobile form factor, this new GPU powers computationally heavy workloads, enabling pro users to maximize productivity while on-the-go.

The AMD Radeon Pro 5600M GPU is built upon industry-leading 7 nm process technology and advanced AMD RDNA architecture to power a diverse range of pro applications, including video editing, color grading, application development, game creation and more. With 40 compute units and 8 GB of ultra-fast, low-power High Bandwidth Memory (HBM2), the AMD Radeon Pro 5600M GPU delivers superfast performance and excellent power efficiency in a single GPU package.

Another Nail in the Intel Kaby Lake-G Coffin as AMD Pulls Graphics Driver Support

Kaby Lake-G was the result of one of the strangest collaborations in the industry, though that may not be a fair way of looking at it. It made total sense at the time: a product combining the world's best CPU design with one of the foremost graphics architectures seems a recipe for success. However, the Intel-AMD collaboration was an unexpected one, as these two rivals were never expected to see eye to eye in any meaningful way. Kaby Lake-G was revolutionary in how it combined AMD and Intel IP in an EMIB-capable design, but it wasn't built to last.

Now, after Intel announced a stop to product manufacturing and order intake, the time has come for AMD to pull driver support. The company's latest Windows 10 version 2004-compatible drivers don't install on Kaby Lake-G powered systems, citing an unsupported hardware configuration. Tom's Hardware contacted Intel, who said they're working with AMD to bring back "Radeon graphics driver support to Intel NUC 8 Extreme Mini PCs (previously codenamed "Hades Canyon")." AMD, however, has not yet commented on the story.

AMD Declares That The Era of 4GB Graphics Cards is Over

AMD has declared that the era of 4 GB graphics cards is over and that users should "Game Beyond 4 GB". AMD tested its 4 GB and 8 GB Radeon RX 5500 XT cards to see how much of a difference VRAM can make to gaming performance. The cards were tested across a variety of games at 1080p high/ultra settings, paired with a Ryzen 5 3600X and 16 GB of 3200 MHz RAM; on average, the 8 GB model performed ~19% better than its 4 GB counterpart. With next-gen consoles featuring 16 GB of combined memory and developers showing no signs of slowing down, it will be interesting to see what happens.

Intel Jasper Lake CPU Appears with Gen11 Graphics

Intel is preparing to update its low-end segment designed for embedded solutions with a next-generation CPU codenamed Jasper Lake. The popular hardware sleuth and leaker _rogame has found a benchmark showing that Intel is about to bless the low end with a lot of neat stuff. The benchmark results show a four-core, four-thread CPU running at a 1.1 GHz base clock with a 1.12 GHz boost clock. Even though these clocks are low, this is only a sample; the actual frequency will be much higher, expected to be near 3 GHz. The CPU was spotted in a configuration rocking 32 GB of DDR4 SODIMM memory.

Jasper Lake is meant to be a successor to Gemini Lake, and it will use Intel's Tremont CPU architecture designed for low-power scenarios. Built on Intel's 10 nm manufacturing node, this CPU should bring x86 processors to a wide range of embedded systems. Although the benchmark didn't mention which graphics the CPU will be paired with, _rogame speculates that Intel will use its Gen11 graphics IP. That would be a nice update over Gemini Lake's Gen9.5 graphics, bringing better display output options and more speed. These CPUs are designed for the Atom/Pentium/Celeron lineups, just like Gemini Lake before them.

Update: Updated the article to reflect the targeted CPU category.

Intel Posts Windows 10 May 2020 Update-ready Graphics Drivers

Intel today released its first Graphics Drivers ready for the upcoming Windows 10 May 2020 Update (2004). Version 27.20.100.8187 of the Intel Graphics Drivers is WDDM 2.7 compliant, which means support for Shader Model 6.5 and Dolby Vision on Gen 9.5 or later iGPUs. The drivers also add readiness for oneAPI, Intel's ambitious unified programming model for x86 processors, iGPU execution units, and future Xe compute processors. For gamers, the latest drivers add optimizations for "Gears Tactics," "XCOM: Chimera Squad," and "Call of Duty: Modern Warfare 2 Campaign Remastered" on Iris Plus or later iGPUs. As with the previous drivers, these drivers are OEM-unlocked.
DOWNLOAD: Intel Graphics Drivers 27.20.100.8187

Intel Teases "Big Daddy" Xe-HP GPU

The Intel Graphics Twitter account was on fire today: it posted an update on the development of the Xe graphics processor, mentioning that samples are ready and packed up in quite an interesting package. The processor in question was discovered to be a Xe-HP GPU variant with an estimated die size of 3700 mm², which means we are surely talking about a multi-chip package here. We concluded that it is the Xe-HP GPU from the words of Raja Koduri, senior vice president, chief architect, and general manager for Architecture, Graphics, and Software at Intel. In a tweet, which was later deleted, he called this processor the "baap of all", meaning "big daddy of them all" when translated from Hindi.

Mr. Koduri previously tweeted a photo of the Intel Graphics team in India, which has been working on the same "baap of all" GPU, suggesting this is a Xe-HP chip. This does not appear to be the version of the GPU made for HPC workloads (that is reserved for the Xe-HPC GPU); rather, this model could be a direct competitor to offerings like NVIDIA Quadro or AMD Radeon Pro. We can't wait to learn more about Intel's Xe GPUs, so stay tuned. Mr. Koduri has since confirmed that this GPU will be used only for data-centric applications, as it is needed to "keep up with the data we are generating". He also added that the focus for gaming GPUs is to start with better integrated GPUs, and low-power chips above that, which could reach millions of users. That will be a good beginning, as it will enable software preparation for possible high-performance GPUs in the future.

Update May 2: changed "father" to "big daddy", as that's the better translation for "baap".
Update 2, May 3rd: The GPU is confirmed to be a Data Center component.

AMD Reports First Quarter 2020 Financial Results

AMD today announced revenue for the first quarter of 2020 of $1.79 billion, operating income of $177 million, net income of $162 million and diluted earnings per share of $0.14. On a non-GAAP* basis, operating income was $236 million, net income was $222 million and diluted earnings per share was $0.18.

"We executed well in the first quarter, navigating the challenging environment to deliver 40 percent year-over-year revenue growth and significant gross margin expansion driven by our Ryzen and EPYC processors," said Dr. Lisa Su, AMD president and CEO. "While we expect some uncertainty in the near-term demand environment, our financial foundation is solid and our strong product portfolio positions us well across a diverse set of resilient end markets. We remain focused on strong business execution while ensuring the safety of our employees and supporting our customers, partners and communities. Our strategy and long-term growth plans are unchanged."

Khronos Group Releases OpenCL 3.0

Today, The Khronos Group, an open consortium of industry-leading companies creating advanced interoperability standards, publicly releases the OpenCL 3.0 Provisional Specifications. OpenCL 3.0 realigns the OpenCL roadmap to enable developer-requested functionality to be broadly deployed by hardware vendors, and it significantly increases deployment flexibility by empowering conformant OpenCL implementations to focus on functionality relevant to their target markets. OpenCL 3.0 also integrates subgroup functionality into the core specification, ships with a new OpenCL C 3.0 language specification, uses a new unified specification format, and introduces extensions for asynchronous data copies to enable a new class of embedded processors. The provisional OpenCL 3.0 specifications enable the developer community to provide feedback on GitHub before the specifications and conformance tests are finalized.

Intel iGPU+dGPU Multi-Adapter Tech Shows Promise Thanks to its Realistic Goals

Intel is revisiting the concept of asymmetric multi-GPU introduced with DirectX 12. The company posted an elaborate technical slide deck it originally planned to present to game developers at the now-cancelled GDC 2020. The technology shows promise because the company isn't insulting developers' intelligence by proposing that the otherwise-dormant iGPU shoulder the game's entire rendering pipeline for a single-digit percentage performance boost. Rather, it has come up with innovative augmentations to the rendering path such that only certain lightweight compute aspects of the game's rendering are passed on to the iGPU's execution units, so it makes a more meaningful contribution to overall performance. To that effect, Intel is working on an SDK that can be integrated with existing game engines.

Microsoft DirectX 12 introduced the holy grail of multi-GPU technology with its Explicit Multi-Adapter specification. This allows game engines to send rendering traffic to any combination or make of GPUs that support the API, to achieve a performance uplift over a single GPU. It was met with a lukewarm reception from AMD and NVIDIA, and far too few DirectX 12 games actually support it. Intel proposes a specialization of the explicit multi-adapter approach, in which the iGPU's execution units process various low-bandwidth elements during both the rendering and post-processing stages, such as occlusion culling, AI, and game physics. Intel's method leverages cross-adapter shared resources sitting in system memory (main memory), and D3D12 asynchronous compute, which creates separate processing queues for rendering and compute.

Intel Rocket Lake-S Platform Detailed, Features PCIe 4.0 and Xe Graphics

Intel's upcoming Rocket Lake-S desktop platform is expected to arrive sometime later this year; however, we didn't have any concrete details on what it will bring. Thanks to exclusive information obtained by VideoCardz's sources at Intel, there are now some more details about the RKL-S platform. To start, the RKL-S platform is based on a 500-series chipset. This is an iteration of the upcoming 400-series chipset, and it features many platform improvements. The 500-series motherboards will supposedly use the LGA 1200 socket, an increase in pin count compared to the LGA 1151 socket found on 300-series chipsets.

The main improvement is the CPU core itself, supposedly a 14 nm adaptation of Tiger Lake-U based on the Willow Cove core. This design represents a backport of the IP to an older manufacturing node, which results in a larger die due to the coarser node used. As for platform improvements, it will support the long-awaited PCIe 4.0 connectivity already present on competing platforms from AMD. This enables much faster SSD speeds, as there are already PCIe 4.0 NVMe devices that run at 7 GB/s. RKL-S will offer 20 PCIe 4.0 lanes, four of which go to an NVMe SSD while 16 go to the PCIe slots for GPUs. Another interesting feature of RKL-S is the addition of Xe graphics on the CPU die, serving as the iGPU. Supposedly based on Gen12 graphics, it will bring support for HDMI 2.0b and DisplayPort 1.4a connectors.

Intel Xe Graphics to Feature MCM-like Configurations, up to 512 EU on 500 W TDP

A reportedly leaked Intel slide, via DigitalTrends, has given us a load of information on Intel's upcoming take on the high-performance graphics accelerator market, in both its server and consumer iterations. Intel's Xe has already been cause for much discussion in a market that has only really seen two real competitors for ages now; the arrival of a third player with muscle and brawn such as Intel against the established NVIDIA and AMD would surely spark competition in the segment, and competition is the lifeblood of advancement, as we've recently seen with AMD's Ryzen CPU line.

The leaked slide reveals that Intel will employ a Multi-Chip-Module (MCM) approach for its high-performance "Arctic Sound" graphics architecture. The GPUs will be available in up to a 4-tile configuration ("tile" being the name Intel gives each module), joined via Foveros 3D stacking (first employed in Intel's Lakefield). The slide shows Intel's approach starting with a 1-tile GPU (with only 96 of its 128 total EUs active) for the entry-level market (at 75 W TDP), à la the DG1 SDV (Software Development Vehicle).

Intel DG1 Discrete GPU Shows Up with 96 Execution Units

As we approach 2020, when Intel is rumored to launch its discrete graphics cards into the hands of consumers around the world, leaks about the upcoming products are ramping up. Thanks to Twitter user @KOMACHI_ENSAKA, who found the latest EEC listing, we have new information regarding Intel's upcoming DG1 discrete graphics solution.

In the leaked EEC listing, the DG1 GPU is presented as a GPU with 96 execution units, meaning that Intel is planning to take on entry-level graphics cards with this GPU. If the graphics unit follows the same design principle as previous-generation GPUs, there should be around 8 shading units per execution unit, totaling 768 shading units for the whole DG1 GPU. If the 12th-gen Xe design inside the DG1 follows a different approach, we could see double that amount of shading units, or 1,536 in total.
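The shading-unit estimates above follow from simple multiplication (the 8-shaders-per-EU ratio is the article's assumption carried over from previous-generation Intel GPUs):

```python
execution_units = 96
shaders_per_eu = 8  # ratio assumed from previous-generation Intel GPUs

gen_style_shaders = execution_units * shaders_per_eu
xe_style_shaders = gen_style_shaders * 2  # if the Xe design doubles the ratio

print(gen_style_shaders)  # 768
print(xe_style_shaders)   # 1536
```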

MSI Prepares Another Version of AMD Radeon RX 580 Armor Graphics Card

While AMD gives every sign of preparing to release its latest entries into the midrange graphics card market in the form of the RX 5500 and RX 5300 series based on Navi, AMD's AIB partners are giving the slow burn to existing inventories of AMD's Polaris graphics chips. MSI, in this case, seems to have bet on a slight redesign of its previously released RX 580 Armor and Armor MK2.

Changed is the color scheme: MSI went full black on this one. There's also a redesigned PCB, a redesigned I/O bracket (which keeps four display connectors), and a new cooler shroud. The heatsink's surface area also seems to have been increased, which should provide lower operating temperatures (anything beyond that, such as higher overclockability and longer lifespan, is speculation). The redesigned Armor keeps the single 8-pin PCIe power connector. No other details are available at the time of writing.