News Posts matching #Turing


Ghost of Tsushima Lets You Use DLSS 2 and FSR 3 Frame Generation Together

The latest update to Ghost of Tsushima lets you use DLSS 2 super resolution and FSR 3 Frame Generation simultaneously, giving you the unique benefit of having NVIDIA DLSS 2 handle super resolution and image quality, while AMD FSR 3 nearly doubles the frame-rate of the DLSS 2 output. All of this works without any mods; it's part of the game's original code. It's crazy when you think about it—you now have two performance enhancements running in tandem, with gamers reporting over 170 FPS at 4K with reasonably good image quality. This could particularly benefit those on older GeForce RTX 30-series "Ampere" and RTX 20-series "Turing" graphics cards, as those lack support for DLSS 3 Frame Generation.

NVIDIA RTX 20-series and GTX 16-series "Turing" GPUs Get Resizable BAR Support Through NVStrapsReBAR Mod

February saw community mods bring resizable BAR support to several older platforms, and now we come across a mod that brings it to some older GPUs. The NVStrapsReBAR mod by terminatorul, forked from the ReBarUEFI mod by xCurio, brings resizable BAR support to NVIDIA GeForce RTX 20-series and GTX 16-series GPUs based on the "Turing" graphics architecture. The mod is intended for power users, and can potentially brick your motherboard. NVIDIA officially implemented resizable BAR support beginning with its RTX 30-series "Ampere" GPUs, in response to AMD's Radeon RX 6000 RDNA 2 GPUs implementing the tech under the marketing name Smart Access Memory. While AMD went on to retroactively enable the tech even for the older RX 5000-series RDNA GPUs, NVIDIA never did so for "Turing."

NVStrapsReBAR is a motherboard UEFI firmware mod. It modifies the way your system firmware negotiates BAR size with the GPU on boot. There are only two ways to enable resizable BAR on an unsupported platform: mod the motherboard firmware, or mod the video BIOS. Signature checks by security processors on NVIDIA GPUs make the video BIOS route impossible for most users; thankfully, motherboard firmware modding isn't as difficult. The author provides extensive documentation on how to use the mod, and has tested it to work with "Turing" GPUs; it does not work with older NVIDIA GPUs, including "Pascal." Resizable BAR enables the CPU (software) to see video memory as a single contiguously addressable block, rather than through 256 MB apertures.
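For those flashing the mod, one way to sanity-check the result is to look at the GPU's BAR1 aperture size, which should roughly match the card's VRAM capacity once resizable BAR is active, instead of the legacy 256 MB window. Below is a minimal sketch of one way to read it on Linux, assuming nvidia-smi is on the PATH (the report layout can vary between driver versions, so treat the parsing as illustrative):

```python
import re
import subprocess

def bar1_total_mib() -> int | None:
    """Parse 'BAR1 Memory Usage -> Total' from `nvidia-smi -q -d MEMORY` output."""
    out = subprocess.run(
        ["nvidia-smi", "-q", "-d", "MEMORY"],
        capture_output=True, text=True, check=True,
    ).stdout
    in_bar1 = False
    for line in out.splitlines():
        if "BAR1 Memory Usage" in line:
            in_bar1 = True
        elif in_bar1:
            m = re.search(r"Total\s*:\s*(\d+)\s*MiB", line)
            if m:
                return int(m.group(1))
    return None

if __name__ == "__main__":
    total = bar1_total_mib()
    if total is None:
        print("Could not find BAR1 info in nvidia-smi output")
    elif total <= 256:
        print(f"BAR1 is {total} MiB: resizable BAR appears to be inactive")
    else:
        print(f"BAR1 is {total} MiB: resizable BAR appears to be active")
```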

NVIDIA GeForce GTX 16-series Finally Discontinued

NVIDIA has finally laid to rest the last GeForce GPUs to feature the "GTX" brand extension, the GTX 16-series "Turing." Although two generations older than the current RTX 40-series "Ada," the GTX 16-series formed NVIDIA's entry level, with certain SKUs continuing to ship to graphics card manufacturers and, more importantly, to notebook ODMs as popular GeForce MX and GTX 16-series SKUs. With NVIDIA introducing further cut-down variants of its "Ampere" based GA107 silicon, such as the desktop RTX 3050 6 GB, the company has reportedly discontinued the GTX 16-series. NVIDIA's own inventories are drained, and the channel is expected to consume the last remaining chips in the next 1-3 months, according to a source on the Chinese forum Broad Channels.

NVIDIA had originally conceived the GTX 16-series to form the lower half of its 2018 product stack, with the upper half driven by the RTX 20-series. Both are based on the "Turing" graphics architecture, but the GTX 16-series has a reduced feature-set, namely the lack of RT cores and Tensor cores. The idea at the time was that at these performance levels, ray tracing would be prohibitively slow at any resolution, so the chips could be left with just the CUDA cores of "Turing" and made to power games with pure raster 3D graphics, letting gamers at least benefit from the higher IPC and 12 nm efficiency of "Turing" over the 16 nm "Pascal." Popular models include the GTX 1650 and the GTX 1660 Super.

NVIDIA GeForce GTX 16-series NVENC Issues Fixed with Hotfix Driver

NVIDIA released a Hotfix driver update to fix certain issues with the NVENC hardware encoder of GeForce GTX 16-series "Turing" GPUs, such as the popular GTX 1660 and GTX 1650 Ti. Apparently, applications using the hardware acceleration provided by the GPU's NVENC unit would produce corrupted videos or throw error messages. The Hotfix driver is based on GeForce 551.68 and is not WHQL-certified. NVIDIA may roll the fixes contained in the hotfix into one of its upcoming GeForce Game Ready or Studio main trunk drivers. GeForce GTX 16-series "Turing" GPUs feature an NVENC unit that can accelerate H.264 and H.265 encoding.
DOWNLOAD: NVIDIA GeForce 551.68 Hotfix for GTX 16-series NVENC Issues
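If you want to verify NVENC behavior after installing the hotfix, a short test encode of a synthetic source is usually enough to reproduce the corruption or error messages described above. Here is a minimal sketch, assuming an FFmpeg build with NVENC support is on the PATH (encoder availability and options vary between builds):

```python
import subprocess

def nvenc_smoke_test(outfile: str = "nvenc_test.mp4") -> bool:
    """Encode 5 seconds of a synthetic test pattern with h264_nvenc and report success."""
    cmd = [
        "ffmpeg", "-y",
        "-f", "lavfi", "-i", "testsrc2=duration=5:size=1280x720:rate=30",  # synthetic input
        "-c:v", "h264_nvenc",   # NVENC H.264 encoder
        "-b:v", "5M",
        outfile,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print("NVENC encode failed:")
        print(result.stderr[-800:])  # the tail of FFmpeg's log usually contains the error
        return False
    print(f"NVENC encode succeeded, wrote {outfile}")
    return True

if __name__ == "__main__":
    nvenc_smoke_test()
```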

22 GB Modded GeForce RTX 2080 Ti Cards Listed on Ebay - $499 per unit

An Ebay Store—customgpu_official—is selling memory-modified GeForce RTX 2080 Ti graphics cards. The outfit (located in Palo Alto, California) has a large inventory of MSI GeForce RTX 2080 Ti AERO cards, judging from their listing's photo gallery. Workers in China are reportedly upgrading these (possibly refurbished) units with extra lashings of GDDR6 VRAM—going from the original 11 GB specification up to 22 GB. We have observed smaller-scale GeForce RTX 2080 Ti modification projects and a very ambitious user-modified example in the past, but customgpu's latest endeavor targets a growth industry—the item description states: "Why do you need a 22 GB 2080 Ti? Large VRAM is essential to cool AIGC apps such as stable diffusion fine tuning, LLAMA, LLM." At the time of writing, three cards are available to purchase, and interested customers have already acquired four memory-modded units.

They advertise their upgraded "Turbo Edition" card as a great "budget alternative" to more modern GeForce RTX 3090 and 4090 models—"more information and videos" can be accessed via 2080ti22g.com. The MSI GeForce RTX 2080 Ti AERO 11 GB model is not documented within TPU's GPU database, but its dual-slot custom cooling solution is also sported by the MSI RTX 2080 SUPER AERO 8 GB graphics card. The AERO's blower fan system creates a "mini-wind tunnel, pulling fresh air from inside the case and blowing it out the IO panel, and out of the system." The seller's asking price is $499 per unit—perhaps a little bit steep for used cards (potentially involved in mining activities), but customgpu_official seems to be well versed in repairs. Other Ebay listings show non-upgraded MSI GeForce RTX 2080 Ti AERO cards selling in the region of $300 to $400. Custom GPU Upgrade and Repair's hype video proposes that their modified card offers great value, given that it sells for a third of the cost of a GeForce RTX 3090—their Ebay item description contradicts this claim: "only half price compared with GeForce RTX 3090 with almost the same GPU memory."

Microsoft Announces Participation in National AI Research Resource Pilot

We are delighted to announce our support for the National AI Research Resource (NAIRR) pilot, a vital initiative highlighted in the President's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This initiative aligns with our commitment to broaden AI research and spur innovation by providing greater computing resources to AI researchers and engineers in academia and non-profit sectors. We look forward to contributing to the pilot and sharing insights that can help inform the envisioned full-scale NAIRR.

The NAIRR's objective is to democratize access to the computational tools essential for advancing AI in critical areas such as safety, reliability, security, privacy, environmental challenges, infrastructure, health care, and education. Advocating for such a resource has been a longstanding goal of ours, one that promises to equalize the field of AI research and stimulate innovation across diverse sectors. As a commissioner on the National Security Commission on AI (NSCAI), I worked with colleagues on the committee to propose an early conception of the NAIRR, underlining our nation's need for this resource as detailed in the NSCAI Final Report. Concurrently, we enthusiastically supported a university-led initiative pursuing a national computing resource. It's rewarding to see these early ideas and endeavors now materialize into a tangible entity.

NVIDIA Releases GeForce 545.84 WHQL Game Ready Drivers

NVIDIA today released the latest version of its GeForce graphics drivers. The company released the GeForce 545 series drivers, a huge jump in version numbering from the GeForce 537 series. The new drivers extend NVIDIA RTX Video Super Resolution 1.5 technology down to RTX 20-series "Turing" GPUs, besides the current RTX 30-series "Ampere" and RTX 40-series "Ada." The technology is a high-quality video upscaler that borrows some of the upscaling techniques from the DLSS feature-set and applies them to video. The drivers also add optimization for the latest "Naraka: Bladepoint" patch that introduces DLSS 3 Frame Generation, as well as the "Warhammer Vermintide II" DLSS 3 update. The drivers add TensorRT acceleration for Stable Diffusion, and GeForce Experience optimal settings for over a dozen game titles. Among the issues fixed are a random black screen flicker seen on displays that use DSC (display stream compression), and incorrect monitor colors when returning from display sleep.

DOWNLOAD: NVIDIA GeForce 545.84 WHQL

NVIDIA Ada Lovelace Successor Set for 2025

According to the NVIDIA roadmap that was spotted in the recently published MLCommons training results, the Ada Lovelace successor is set to come in 2025. The roadmap also reveals the schedule for Hopper Next and Grace Next GPUs, as well as the BlueField-4 DPU.

While the roadmap does not provide a lot of details, it does give us a general idea of when to expect NVIDIA's next GeForce architecture. Since NVIDIA usually launches a new GeForce architecture every two years or so, the latest schedule points to a small delay, at least if the company plans to launch the Ada Lovelace successor in early 2025 and not later. NVIDIA Pascal was launched in May 2016, Turing in September 2018, Ampere in May 2020, and Ada Lovelace in October 2022.

NVIDIA Ramps Up Battle Against Makers of Unlicensed GeForce Cards

NVIDIA is stepping up its fight against manufacturers of counterfeit graphics cards in China, according to an article published by MyDrivers. The hardware giant is partnering with a number of the nation's major e-commerce companies to eliminate inventories of bogus GPUs. These online retail platforms, including JD.com and Douyin, are reportedly partway into removing a swathe of dodgy stock from their listings. NVIDIA is seeking to disassociate itself from the pool of unlicensed hardware and from the brands responsible for flooding the domestic and foreign markets with so-called fake graphics cards. The company is reputedly puzzled about the murky origins of this bootlegging of its patented designs.

The market became saturated with fake hardware during the Ethereum mining boom - little known cottage companies such as 51RSIC, Corn, Bingying and JieShuoMllse were pushing rebadged cheap OEM cards to domestic e-tail sites. The knock-off GPUs also crept outside of that sector, and import listings started to appear on international platforms including Ebay, AliExpress, Amazon and Newegg. NVIDIA is also fighting to stop the sale of refurbished cards - these are very likely to have been utilized in intensive cryptocurrency mining activities. A flood of these hit the market following an extreme downturn in crypto mining efforts, and many enthusiast communities have warned against acquiring pre-owned cards due to the high risk of component failure.

NVIDIA Enables More Encoding Streams on GeForce Consumer GPUs

NVIDIA has quietly removed some video encoding limitations on its consumer GeForce graphics processing units (GPUs), allowing encoding of up to five simultaneous streams. Previously, NVIDIA's consumer GeForce GPUs were limited to three simultaneous NVENC encodes. The same limitation did not apply to professional GPUs.

According to NVIDIA's own Video Encode and Decode GPU Support Matrix document, the number of concurrent NVENC encodes on consumer GPUs has been increased from three to five. This includes certain GeForce GPUs based on the Maxwell 2nd Gen, Pascal, Turing, Ampere, and Ada Lovelace GPU architectures. While the number of concurrent NVDEC decodes was never limited, how many streams a given GPU can encode still depends on the resolution of the stream and the codec.
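The session cap is enforced by the driver at runtime, so you can watch how many NVENC sessions are currently open on a card by querying the driver's encoder statistics. A minimal sketch, assuming nvidia-smi is installed and the driver exposes the encoder.stats query fields:

```python
import subprocess

def encoder_session_count(gpu_index: int = 0) -> int:
    """Return the number of active NVENC sessions reported by the driver."""
    out = subprocess.run(
        [
            "nvidia-smi",
            f"--id={gpu_index}",
            "--query-gpu=encoder.stats.sessionCount",
            "--format=csv,noheader,nounits",
        ],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return int(out)

if __name__ == "__main__":
    sessions = encoder_session_count()
    print(f"Active NVENC sessions: {sessions}")
    # On older consumer drivers a fourth simultaneous session would be rejected
    # by the encoder API; updated drivers allow up to five.
```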

ASUS IoT Announces the PE3000G with Discrete GPU Support via MXM Module

ASUS IoT, the global AIoT solution provider, today announced PE3000G—the industry's first edge AI system to support Mobile PCI Express Module (MXM) GPUs from both NVIDIA and Intel. Specifically, the all-new industrial PC works seamlessly with NVIDIA Ampere/Turing or Intel Arc A-series MXM GPUs. Powered by a 12th Gen Intel Core processor and up to 64 GB of DDR5 4800 MHz memory, and combining a proven power design, guaranteed fanless thermal performance, and superior physical and mechanical ruggedness, PE3000G brings unprecedented longevity, computing power, flexibility and reliability to AI computing at the edge—making it an ideal option for scenarios where resilience, longevity and both CPU and GPU scalability are paramount.

"PE3000G is ASUS IoT's response to the burgeoning demand for accelerating AI inference and extreme deployment in industrial settings," commented KuoWei Chao, General Manager of the ASUS IoT business unit. "With robust power, thermal and mechanical design, it pushes versatile edge-AI-inference applications to business-critical applications. PE3000G is an ideal fit to accelerate edge AI inference in SWaP-constrained applications, such as machine vision in factory automation, outdoor surveillance system and AI-inference systems for autonomous vehicles."

Akasa Intros Turing WS Fanless Case for Wall Street Canyon NUC 12 Pro

Akasa today introduced the Turing WS, a variant of its large Turing fanless case for Intel NUC 12 Pro "Wall Street Canyon" system boards, which use Intel 12th Gen Core "Alder Lake-P" mobile processors. The Turing WS is physically identical to the company's Turing family of cases, measuring 247.9 mm x 113.5 mm x 95 mm (WxDxH), but its I/O cutouts correspond with NUC 12 Pro boards, and its heatsink base lines up with the "Alder Lake-P" SoC. The case also includes an M.2-2280 cooling surface, so you can extend the cooling capacity of the case to an NVMe SSD. Available for pre-order from October 3, the Akasa Turing WS is priced at USD $170.

NVIDIA Ada's 4th Gen Tensor Core, 3rd Gen RT Core, and Latest CUDA Core at a Glance

Yesterday, NVIDIA launched its GeForce RTX 40-series, based on the "Ada" graphics architecture. We're yet to receive a technical briefing about the architecture itself, and the various hardware components that make up the silicon; but NVIDIA on its website gave us a first look at what's in store with the key number-crunching components of "Ada," namely the Ada CUDA core, 4th generation Tensor core, and 3rd generation RT core. Besides generational IPC and clock speed improvements, the latest CUDA core benefits from SER (shader execution reordering), an SM or GPC-level feature that reorders execution waves/threads to optimally load each CUDA core and improve parallelism.

Despite using specialized hardware such as the RT cores, the ray tracing pipeline still relies on the CUDA cores and the CPU for a handful of tasks, and here NVIDIA claims that SER provides up to a 3X uplift in the ray tracing work handled by the CUDA cores. With traditional raster graphics, SER contributes a meaty 25% performance uplift. With Ada, NVIDIA introduces its 4th generation of Tensor core (after Volta, Turing, and Ampere). The Tensor cores deployed on Ada are functionally identical to the ones on the Hopper H100 Tensor Core HPC processor, featuring the new FP8 Transformer Engine, which delivers up to 5X the AI inference performance over the previous-generation Ampere Tensor core (which itself delivered a similar leap by leveraging sparsity).

Palit Announces the GeForce GTX 1630 Dual Series Graphics Cards

Palit Microsystems Ltd, the leading graphics card manufacturer, announced the GeForce GTX 1630 Dual Series graphics cards. The Palit GeForce GTX 1630 is built on the breakthrough graphics performance of the NVIDIA Turing architecture. Offering 1.17x better performance than the GTX 1050, and performance on par with the GTX 1050 Ti, the new GeForce GTX 1630 comes with even more advanced specs and features, including faster GDDR6 memory, DX12 support, and plentiful video codec support for broadcasting. The GeForce GTX 1630 is a smarter choice for gamers and creators looking for an entry-level graphics card.

The Palit GeForce GTX 1630 Dual Series brings a compact yet performance-focused design that maintains the essentials to accomplish gaming and multimedia tasks at hand. Coming in at 170 mm in length, the model supports the Mini-ITX form factor, perfect for mini-PC lovers. Equipped with 0 dB Tech, the fans stop under relatively light workloads, eliminating noise when active cooling is unnecessary.

Manli Unveils its GeForce GTX 1630 Graphics Card

Manli Technology Group Limited, the major graphics card and components manufacturer, today announced the affordable new member of the 16-series family, the Manli GeForce GTX 1630.

The Manli GeForce GTX 1630 is powered by the award-winning NVIDIA Turing architecture. It is equipped with 4 GB of GDDR6 memory on a 64-bit memory controller, and 512 CUDA cores with a core frequency of 1740 MHz that can dynamically boost up to 1785 MHz. Moreover, the Manli GeForce GTX 1630 consumes only 75 W, so no external power connector is required.

INNO3D Launches GeForce GTX 1630 Twin X2 OC+ Graphics Card

INNO3D, a leading manufacturer of pioneering high-end multimedia components and various innovations, is excited to announce the new INNO3D GeForce GTX 1630 GDDR6 TWIN X2 OC and COMPACT. INNO3D has released different versions of the GTX 16 Series, such as the GeForce GTX 1660 Ti, GTX 1660 and GTX 1650; the GTX 1630 TWIN X2 OC and COMPACT adopt the same award-winning 'Best-in-Class' NVIDIA Turing architecture.

The INNO3D GeForce GTX 1630 TWIN X2 OC is factory overclocked out of the box while the COMPACT measures just 6.29 inches/160 mm and is ready to fit in most systems out there. The TWIN X2 OC is equipped with dual 8 cm fans while the COMPACT comes with a single 10 cm fan, pushing the boundaries of what our engineers could fit in while still confined to the standard card height.

EVGA Debuts the GeForce GTX 1630 SC Graphics Card

The EVGA GeForce GTX 1630 is built with the powerful graphics performance of the award-winning NVIDIA Turing architecture. Step up to better gaming with GeForce GTX. Featuring concurrent execution of floating point and integer operations, adaptive shading technology, and a new unified memory architecture, Turing shaders enable greater performance on today's games. Get improved power efficiency over the previous generation for a faster, cooler and quieter gaming experience.

Capture and share videos, screenshots, and livestreams with friends. Keep your GeForce drivers up to date and optimize your game settings. GeForce Experience lets you do it all. It's the essential companion to your GeForce graphics card. This powerful photo mode lets you take professional-grade photographs of your games like never before. Now, you can capture and share your most brilliant gaming experiences with super-resolution, 360-degree, HDR, and stereo photographs.

Gainward Announces GeForce GTX 1630 Ghost

As the leading brand in the enthusiast graphics market, Gainward proudly presents the new Gainward GeForce GTX 1630 Ghost Series graphics card. The Gainward GeForce GTX 1630 Ghost Series is powered by the award-winning NVIDIA Turing architecture and is equipped with faster GDDR6 memory. The new model is in a similar class to the GeForce GTX 1050 Ti, and even brings 1.17x better gaming performance than the GeForce GTX 1050. Supporting DX12 and NVIDIA Image Scaling, the Gainward GeForce GTX 1630 Series also empowers creator tasks and ensures better image quality.

GIGABYTE Launches GeForce GTX 1630 Graphics Cards

GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of premium gaming hardware, today announced the latest GeForce GTX 1630 graphics cards powered by the NVIDIA Turing architecture. GIGABYTE launches the GeForce GTX 1630 OC 4G and GeForce GTX 1630 OC Low Profile 4G graphics cards. Turing architecture graphics cards can execute both integer and floating-point operations simultaneously, making them much faster than the previous Pascal architecture. These graphics cards feature GIGABYTE-certified overclocked GPUs, coupled with GIGABYTE's cooling technology, allowing all gamers to enjoy a flawless gaming experience.

The GIGABYTE GeForce GTX 1630 OC 4G graphics card offers GIGABYTE's custom-designed cooling system, featuring a unique blade fan design that delivers effective heat dissipation for higher performance at lower temperatures. The compact graphics card is less than 170 mm in length and can be easily installed in any small chassis. The GIGABYTE GeForce GTX 1630 OC Low Profile 4G graphics card features the advanced GIGABYTE cooling system. It is a half-height graphics card with a low-profile bracket, allowing gamers to easily install it into a variety of chassis. This graphics card has four video outputs, which can meet the needs of multi-monitor setups.

NVIDIA Launches GeForce GTX 1630 Graphics Card

NVIDIA today launched the GeForce GTX 1630 entry-level graphics card. A successor to the GT 1030, the new GTX 1630 is an entry-level product, despite its roughly $150 MSRP. It is based on the older "Turing" graphics architecture in its GTX 16-series trim, which lacks hardware-accelerated ray tracing and even support for DLSS. It is carved out of the same 12 nm "TU117" silicon as the GTX 1650 from 2019.

The GTX 1630 features exactly half of the 16 streaming multiprocessors present on the TU117. The 8 available SMs work out to 512 CUDA cores, 32 TMUs, and 32 ROPs. The card comes with 4 GB of GDDR6 memory as standard, across a 64-bit wide memory bus, typically using just two 12 Gbps-rated 16 Gbit GDDR6 chips. The GPU operates at a boost frequency of 1785 MHz. The card lacks hardware-accelerated AV1 decode, and has media features consistent with the rest of the "Turing" family. At $150, it competes with the Radeon RX 6400 (which can be had for as low as $160), and the Arc A380.
Catch the TechPowerUp review of the Gainward GTX 1630 Ghost graphics card.
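Those specifications allow some quick back-of-the-envelope math on where the GTX 1630 lands. A rough sketch of the arithmetic, using only the 64-bit bus, 12 Gbps GDDR6, 512 CUDA cores, and 1785 MHz boost clock quoted above:

```python
# Back-of-the-envelope GTX 1630 throughput, from the specs quoted above.

bus_width_bits = 64          # 64-bit memory bus
data_rate_gbps = 12          # 12 Gbps-rated GDDR6
cuda_cores = 512
boost_clock_ghz = 1.785      # 1785 MHz boost

# Memory bandwidth: bus width (in bytes) times per-pin data rate.
bandwidth_gbs = (bus_width_bits / 8) * data_rate_gbps
print(f"Memory bandwidth: {bandwidth_gbs:.0f} GB/s")          # ~96 GB/s

# Peak FP32 throughput: 2 ops (FMA) per core per clock.
fp32_tflops = cuda_cores * 2 * boost_clock_ghz / 1000
print(f"Peak FP32 throughput: {fp32_tflops:.2f} TFLOPS")      # ~1.83 TFLOPS
```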

GPU Hardware Encoders Benchmarked on AMD RDNA2 and NVIDIA Turing Architectures

Encoding video is one of the more significant tasks modern hardware performs, and today we have some data showing how good the AMD and NVIDIA hardware encoders are. Thanks to the tech media outlet Chips and Cheese, we have information about AMD's Video Core Next (VCN) encoder found in RDNA2 GPUs and NVIDIA's NVENC (short for NVIDIA Encoder). The site benchmarked AMD's Radeon RX 6900 XT and NVIDIA's GeForce RTX 2060. The AMD card features VCN 3.0, the latest version of AMD's encoder, while the NVIDIA Turing card features the 6th-generation NVENC design; a 7th-generation NVENC exists, but the RTX 2060 was the hardware the reviewer had on hand.

The metric used to judge video quality was Netflix's Video Multimethod Assessment Fusion (VMAF). In addition to hardware encoding, the site also tested software encoding with libx264, a software library used for encoding video streams into the H.264/MPEG-4 AVC compression format; the libx264 runs were performed on an AMD Ryzen 9 3950X. Benchmark runs included streaming, recording, and transcoding in Overwatch and Elder Scrolls Online.
Below, you can find benchmarks of streaming, recording, transcoding, and transcoding speed.
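Readers who want to run a similar comparison at home can compute VMAF with FFmpeg's libvmaf filter by scoring an encoded clip against its source. A minimal sketch, assuming an FFmpeg build compiled with libvmaf; the file names encoded.mp4 and reference.mp4 are placeholders:

```python
import re
import subprocess

def vmaf_score(encoded: str, reference: str) -> float:
    """Run FFmpeg's libvmaf filter and return the pooled VMAF score."""
    cmd = [
        "ffmpeg", "-i", encoded, "-i", reference,   # distorted first, reference second
        "-lavfi", "libvmaf",
        "-f", "null", "-",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # libvmaf logs a line like "VMAF score: 93.47" near the end of stderr.
    match = re.search(r"VMAF score:\s*([\d.]+)", result.stderr)
    if not match:
        raise RuntimeError("No VMAF score found in FFmpeg output")
    return float(match.group(1))

if __name__ == "__main__":
    print(f"VMAF: {vmaf_score('encoded.mp4', 'reference.mp4'):.2f}")
```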

NVIDIA GeForce MX550 Matches Ryzen 9 5900HS Vega iGPU in PassMark

The recently announced entry-level NVIDIA GeForce MX550, a Turing-based discrete mobile GPU for thin-and-light laptops, has appeared on the PassMark video card benchmark site. The MX550 scores 5014 points in the G3D Mark test, which places its performance almost exactly on par with the integrated Vega 8 iGPU of the Ryzen 9 5900HS, which scores 4968 points in the same benchmark. There is only a single test result available for the MX550, so we will need to wait for further benchmarks to confirm its exact performance, but either way it represents a significant improvement over the MX450, which scores just 3724 points. The MX550 is a PCIe 4.0 card featuring the 12 nm TU117 Turing GPU with 1024 shading units, paired with 2 GB of GDDR6 memory.
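Using only the scores quoted above, a quick calculation puts the MX550 within about one percent of the Vega 8 iGPU while improving on the MX450 by roughly a third:

```python
# Relative performance from the PassMark G3D Mark scores quoted above.
mx550, vega8, mx450 = 5014, 4968, 3724

print(f"MX550 vs Vega 8 (5900HS): {100 * (mx550 / vega8 - 1):+.1f}%")   # ~+0.9%
print(f"MX550 vs MX450:           {100 * (mx550 / mx450 - 1):+.1f}%")   # ~+34.6%
```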

Akasa Launches Turing ABX and Newton A50 Fanless Cases for Mini-PCs

Akasa, manufacturer of cooling solutions and computer cases, today updated two of its fanless compact cases designed to replace actively-cooled systems of mini-PCs. For starters, the new Akasa Turing ABX is a next-generation compact fanless case for GIGABYTE AMD Ryzen BRIX 4000U-Series Mini-PC with Radeon GPU. The Turing ABX case is compatible with the following GIGABYTE Ryzen BRIX models: GB-BRR3-4300, GB-BRR5-4500, GB-BRR7-4700, and GB-BRR7-480. It brings out all of the I/O ports that come standard with these BRIX models; however, the cooling system is replaced with Akasa's fanless design integrated within the case.

Last but not least, Akasa also launched the Newton A50 fanless case for ASUS PN51 and PN50 mini-PCs. Coming in at 1.3 liters, this case is a very compact solution capable of housing 5000 and 4000 Series AMD Ryzen processors with Radeon Vega 7 graphics. As far as I/O options go, the case brings out everything that the ASUS PN51 and PN50 PCs have to offer; however, the cooling system is again replaced by Akasa's fanless design. You can learn more about the Turing ABX here and the Newton A50 here. Both cases are expected to become available in the next three weeks from Scan.co.uk, Amazon, Caseking, Jimms PC, and Performance-PCs. Pricing is unknown.

Intel Adds Experimental Mesh Shader Support in DG2 GPU Vulkan Linux Drivers

Mesh shaders are a relatively new take on the programmable geometry pipeline, promising to simplify the organization of the whole graphics rendering pipeline. NVIDIA introduced the concept with Turing back in 2018, and AMD joined with RDNA2. Today, thanks to findings by Phoronix, we have learned that Intel's DG2 GPU will support mesh shaders under the Vulkan API. The difference between the mesh/task pipeline and the traditional graphics rendering pipeline is that the mesh approach is much simpler and offers higher scalability, bandwidth reduction, and greater flexibility in the design of mesh topology and graphics work. In Vulkan, the current mesh shader state is NVIDIA's contribution, the VK_NV_mesh_shader extension. The docs below explain it in greater detail:
Vulkan API documentation: This extension provides a new mechanism allowing applications to generate collections of geometric primitives via programmable mesh shading. It is an alternative to the existing programmable primitive shading pipeline, which relied on generating input primitives by a fixed function assembler as well as fixed function vertex fetch.

There are new programmable shader types—the task and mesh shader—to generate these collections to be processed by fixed-function primitive assembly and rasterization logic. When task and mesh shaders are dispatched, they replace the core pre-rasterization stages, including vertex array attribute fetching, vertex shader processing, tessellation, and geometry shader processing.
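Whether a given driver already exposes mesh shading can be checked from user space, since the capability appears in the device's extension list. A minimal sketch, assuming the Vulkan SDK's vulkaninfo utility is installed; it simply searches the reported extensions for VK_NV_mesh_shader:

```python
import subprocess

def supports_nv_mesh_shader() -> bool:
    """Check whether the active Vulkan driver advertises VK_NV_mesh_shader."""
    out = subprocess.run(
        ["vulkaninfo"],
        capture_output=True, text=True, check=True,
    ).stdout
    return "VK_NV_mesh_shader" in out

if __name__ == "__main__":
    if supports_nv_mesh_shader():
        print("VK_NV_mesh_shader is exposed by this driver")
    else:
        print("No mesh shader extension reported")
```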

ASUS Intros GeForce RTX 2060 EVO 12GB DUAL Series

ASUS joined the GeForce RTX 2060 12 GB party with a pair of graphics card models under its DUAL series. NVIDIA earlier this month launched the RTX 2060 12 GB, a new SKU based on the "Turing" graphics architecture. It doubles the memory amount of the original RTX 2060, and features 2,176 CUDA cores, compared to 1,920 on the original. NVIDIA is looking to target the Radeon RX 6600 with it.

The ASUS RTX 2060 EVO DUAL and DUAL OC graphics cards feature the company's latest iteration of the DUAL cooling solution, which uses an aluminium fin-stack heatsink with heat-pipes that make direct contact with the "TU106" GPU at the base, while a pair of the company's latest-generation Axial-Tech fans ventilate it. The DUAL OC SKU runs the GPU at 1680 MHz boost, while the DUAL sticks to the NVIDIA-reference boost clock of 1650 MHz. A software-based OC mode unlocks higher clocks on both SKUs: 1710 MHz for the DUAL OC, and 1680 MHz for the standard DUAL. Both cards rely on a single 8-pin PCIe power connector. Display outputs include one DisplayPort 1.4, two HDMI 2.0, and one dual-link DVI-D. The cards are expected to be priced around 550€.