News Posts matching #GDDR5


Colorful Announces GeForce GTX 1650 4GB Ultra Graphics Card

Colorful Technology Company Limited, a professional manufacturer of graphics cards, motherboards, and high-performance storage solutions, is proud to announce the launch of its latest graphics card for the entry-level gaming market. The COLORFUL iGame GeForce GTX 1650 Ultra 4G brings NVIDIA's Turing graphics architecture to the masses, and COLORFUL brings the best out of the GPU thanks to its years of experience working with gamers.

For new gamers, and for those upgrading from integrated graphics who want a taste of what's to come, the COLORFUL iGame GeForce GTX 1650 Ultra 4G is a prime choice to start with. It features the latest NVIDIA GPU technology: the 12 nm Turing architecture brings the best of GeForce, including GeForce Experience, NVIDIA Ansel, G-Sync and G-Sync Compatible monitor support, Game Ready Drivers, and much more. COLORFUL has also given the iGame GTX 1650 Ultra 4G a performance boost via a One-key OC button, so you can get extra performance without tinkering.

NVIDIA to Flesh out Lower Graphics Card Segment with GeForce GTX 1650 Ti

It seems NVIDIA's partners are gearing up for yet another launch, sometime after the GTX 1650 finally becomes available. EEC listings have made it clear that partners are working on another TU117 variant with improved performance, sitting between the GTX 1650 and the GTX 1660, which should bring the fight to AMD's Radeon RX 580. Of course, with the GTX 1660 sitting pretty at a $219 price, this leaves anywhere between the GTX 1650's $149 and the GTX 1660's $219 for the GTX 1650 Ti to fill. With the GTX 1660 being on average 13% faster than the RX 580, it makes sense for NVIDIA to look for another SKU to cover that large pricing gap between the 1650 and the 1660.

It's speculated that the GeForce GTX 1650 Ti could feature 1024 CUDA cores, 32 ROPs and 64 TMUs. These should be paired with the same 4 GB of GDDR5 VRAM running across a 128-bit bus at the same 8000 MHz effective clock speed as the GTX 1650, delivering a bandwidth of 128 GB/s. Should NVIDIA pull off the feat of keeping the same 75 W TDP between its Ti and non-Ti GTX 1650 (as it did with the GTX 1660), that could mean a 75 W graphics card contending with AMD's 185 W RX 580 - a mean, green feat in the power efficiency arena. A number of SKUs for the GTX 1650 Ti have been leaked on ASUS' side of the field, which you can find after the break.
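The 128 GB/s figure falls out of a simple formula: effective data rate multiplied by the bus width in bytes. A minimal sketch (the function name is ours, for illustration only):

```python
# Peak bandwidth = effective data rate (MT/s) x bus width in bytes.
# GDDR5's "effective" clock already accounts for its quad-pumped data rate.

def gddr_bandwidth_gbps(effective_mhz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return effective_mhz * 1e6 * (bus_width_bits / 8) / 1e9

# Rumored GTX 1650 Ti configuration from the article:
print(gddr_bandwidth_gbps(8000, 128))  # -> 128.0
```

The same formula reproduces the figures quoted elsewhere for other cards, e.g. an 8008 MHz effective clock on a 160-bit bus works out to roughly 160 GB/s.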

GAINWARD, PALIT GeForce GTX 1650 Pictured, Lack DisplayPort Connectors

In the build-up to NVIDIA's GTX 1650 release, more and more cards are being revealed. GAINWARD's and PALIT's designs won't bring much in the way of interesting PCB differences to be perused, since the PCBs are exactly the same: the GAINWARD Pegasus and the PALIT Storm X differ only in shroud design, and both cards carry the same TU117 GPU paired with 4 GB of GDDR5 memory.

ZOTAC GeForce GTX 1650 Pictured: No Power Connector

Here are some of the first clear renders of an NVIDIA GeForce GTX 1650 graphics card, this particular one from ZOTAC. The GTX 1650, slated for April 22, will be the most affordable GPU based on the "Turing" architecture when launched. The box art confirms this card features 4 GB of GDDR5 memory. The ZOTAC card is compact and SFF-friendly, no longer than the PCIe slot itself, and two slots thick. Its cooler is a simple fan-heatsink, with an 80 mm fan ventilating an aluminium heatsink with radially-projecting fins. The card makes do with the 75 W of power drawn from the PCIe slot, and has no additional power connectors. Display outputs include one each of DisplayPort 1.4, HDMI 2.0b, and dual-link DVI-D.

NVIDIA GeForce GTX 1650 Availability Revealed

NVIDIA is expected to launch its sub-$200 GeForce GTX 1650 graphics card on the 22nd of April, 2019. The card was earlier expected to launch towards the end of April. With it, NVIDIA will introduce the 12 nm "TU117," its smallest GPU based on the "Turing" architecture. The GTX 1650 could replace the current GTX 1060 3 GB, and may compete with AMD offerings in this segment, such as the Radeon RX 570 4 GB, in being Full HD-capable, even if it won't let you max out your game settings at that resolution. The card could ship with 4 GB of GDDR5 memory.

ZOTAC Unveils its GeForce GTX 1660 Series

ZOTAC Technology, a global manufacturer of innovation, is pleased to expand the GeForce GTX 16 series with the ZOTAC GAMING GeForce GTX 1660 series featuring GDDR5 memory and the NVIDIA Turing Architecture.

Founded in 2017, ZOTAC GAMING is the pioneer movement that comes forth from the core of the ZOTAC brand that aims to create the ultimate PC gaming hardware for those who live to game. It is the epitome of our engineering prowess and design expertise representing over a decade of precision performance, making ZOTAC GAMING a born leading force with the goal to deliver the best PC gaming experience. The logo shows the piercing stare of the robotic eyes, where behind it, lies the strength and future technology that fills the ego of the undefeated and battle experienced.

NVIDIA Launches the GeForce GTX 1660 6GB Graphics Card

NVIDIA today launched the GeForce GTX 1660 6 GB graphics card, its successor to the immensely popular GTX 1060 6 GB. With prices starting at $219.99, the GTX 1660 is based on the same 12 nm "TU116" silicon as the GTX 1660 Ti launched last month, with fewer CUDA cores and a slower memory interface. NVIDIA carved the GTX 1660 out by disabling 2 out of 24 "Turing" SMs on the TU116, resulting in 1,408 CUDA cores, 88 TMUs, and 48 ROPs. The company is using 8 Gbps GDDR5 memory instead of 12 Gbps GDDR6, which makes its memory sub-system 33 percent slower. The GPU is clocked at 1530 MHz, with 1785 MHz boost, which are marginally higher than the GTX 1660 Ti's. The GeForce GTX 1660 is a partner-driven launch, meaning that there won't be any reference-design cards, although NVIDIA made sure every AIC partner has at least one product selling at the baseline price of $219.99.
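The 33 percent figure follows directly from the per-pin data rates, since both cards share a 192-bit bus; a quick back-of-envelope check:

```python
# Bandwidth scales with per-pin data rate when the bus width is fixed.
bus_bytes = 192 // 8                 # 192-bit bus = 24 bytes per transfer
gtx_1660_gbps = 8 * bus_bytes        # 8 Gbps GDDR5  -> 192 GB/s
gtx_1660_ti_gbps = 12 * bus_bytes    # 12 Gbps GDDR6 -> 288 GB/s
slowdown = 1 - gtx_1660_gbps / gtx_1660_ti_gbps
print(f"{slowdown:.0%}")  # -> 33%
```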

Read TechPowerUp Reviews: Zotac GeForce GTX 1660 | EVGA GeForce GTX 1660 XC Ultra | Palit GeForce GTX 1660 StormX OC | MSI GTX 1660 Gaming X

Update: We have updated our GPU database with all GTX 1660 models announced today, so you can easily get an overview of what has been released.

EVGA and GIGABYTE GeForce GTX 1660 Graphics Cards Pictured

Here are some of the first pictures of EVGA's and GIGABYTE's upcoming GeForce GTX 1660 graphics cards reportedly slated for launch later this week. It should come as no surprise that these cards resemble the companies' GTX 1660 Ti offerings, since they're based on the same 12 nm "TU116" silicon, with fewer CUDA cores. The underlying PCBs could be slightly different as the GTX 1660 uses older generation 8 Gbps GDDR5 memory instead of 12 Gbps GDDR6. The "TU116" silicon is configured with 1,408 CUDA cores out of the 1,536 physically present; the memory amount is 6 GB, across a 192-bit wide memory bus. The GTX 1660 baseline price is reportedly USD $219, and the card replaces the GTX 1060 6 GB from NVIDIA's product stack.

EVGA is bringing two designs to the market, a short-length triple-slot card with a single fan; and a more conventional longer card with 2-slot, dual-fan design. The baseline "Black" card could be offered in the shorter design; while the top-tier XC Ultra could be exclusive to the longer design. GIGABYTE, on the other hand, has two designs, a shorter-length dual-fan; and a longer-length triple-fan. Both models are dual-slot. The baseline SKU will be restricted to the shorter board design, while premium Gaming OC SKUs could come in the longer board design.

Details on GeForce GTX 1660 Revealed Courtesy of MSI - 1408 CUDA Cores, GDDR5 Memory

Details on NVIDIA's upcoming mainstream GTX 1660 graphics card have been revealed, which will help put its graphics-crunching prowess up to scrutiny. The new graphics card from NVIDIA slots in below the recently released GTX 1660 Ti (which provides roughly 5% better performance than NVIDIA's previous GTX 1070 graphics card) and above the yet-to-be-released GTX 1650.

The 1408 CUDA cores in the design amount to a 9% reduction in computing cores compared to the GTX 1660 Ti, but most of the savings (and performance impact) likely come at the expense of the 6 GB of 8 Gbps GDDR5 memory this card is outfitted with, compared to the 1660 Ti's GDDR6 implementation. The amount of GPU resources NVIDIA has cut is so low that we imagine these chips won't come from harvesting defective dies so much as from actually fusing off CUDA cores present in the TU116 chip. Using GDDR5 is still cheaper than the GDDR6 alternative (for now), and this also avoids straining the GDDR6 supply (if that was ever a concern for NVIDIA).

NVIDIA GeForce GTX 1650 Memory Size Revealed

NVIDIA's upcoming entry-mainstream graphics card based on the "Turing" architecture, the GeForce GTX 1650, will feature 4 GB of GDDR5 memory, according to tech industry commentator Andreas Schilling. Schilling also put out box-art by NVIDIA for this SKU. The source does not mention memory bus width. In related news, Schilling also mentions NVIDIA going with 6 GB as the memory amount for the GTX 1660. NVIDIA is expected to launch the GTX 1660 mid-March, and the GTX 1650 late-April.

Intel Readies Crimson Canyon NUC with 10nm Core i3 and AMD Radeon

Intel is putting the final touches on a "Crimson Canyon" fully-assembled NUC desktop model which combines the company's first 10 nm Core processor and AMD Radeon discrete graphics. The NUC8i3CYSM desktop from Intel packs a Core i3-8121U "Cannon Lake" SoC, 8 GB of dual-channel LPDDR4 memory, and a discrete AMD Radeon RX 540 mobile GPU with 2 GB of dedicated GDDR5 memory. A 1 TB 2.5-inch hard drive comes included, although you also get an M.2-2280 slot with both PCIe 3.0 x4 (NVMe) and SATA 6 Gbps wiring. The i3-8121U packs a 2-core/4-thread CPU clocked up to 3.20 GHz and 4 MB of L3 cache, while the RX 540 packs 512 stream processors based on the "Polaris" architecture.

The NUC8i3CYSM offers plenty of modern connectivity, including 802.11ac + Bluetooth 5.0 powered by an Intel Wireless-AC 9560 WLAN card, wired 1 GbE from an Intel i219-V controller, consumer IR receiver, an included beam-forming microphone, an SDXC card reader, and stereo HD audio. USB connectivity includes four USB 3.1 type-A ports including a high-current port. Display outputs are care of two HDMI 2.0b, each with 7.1-channel digital audio passthrough. The company didn't reveal pricing, although you can already read a performance review of this NUC from the source link below.

Sapphire Outs an RX 570 Graphics Card with 16GB Memory, But Why?

Sapphire has reportedly developed an odd-ball Radeon RX 570 graphics card equipped with 16 GB of GDDR5 memory, double the memory amount the SKU officially supports. The card is based on the company's NITRO+ board design common to RX 570 through RX 590 SKUs, and uses sixteen 8 Gbit GDDR5 memory chips that are piggybacked (i.e., chips on both sides of the PCB). When Chinese tech publication MyDrivers reached out to Sapphire for an explanation behind such a bizarre contraption, the Hong Kong-based AIB partner's response was fascinating.

In its response, Sapphire said that it wanted to bolster the card's crypto-currency mining power, as giving the "Polaris 20" GPU additional memory would improve its performance against ASIC miners using the Cuckoo Cycle algorithm. The algorithm can load up video memory with anywhere between 5.5 GB and 11 GB, so giving the RX 570 16 GB of it was Sapphire's logical next step. Of course, Cuckoo Cycle is being defeated time and again by currency curators. This card will be a stopgap for miners until ASIC mining machines with expanded memory come out, or the proof-of-work systems are significantly changed.

Hands On with a Pack of RTX 2060 Cards

NVIDIA late Sunday announced the GeForce RTX 2060 graphics card at $349. With performance rivaling the GTX 1070 Ti and RX Vega 56 on paper, and in some cases even the GTX 1080 and RX Vega 64, the RTX 2060 in its top-spec trim with 6 GB of GDDR6 memory could go on to be NVIDIA's best-selling product from its "Turing" RTX 20-series. At NVIDIA's CES 2019 booth, we went hands-on with a few of these cards, beginning with NVIDIA's de-facto reference-design Founders Edition. This card indeed feels smaller and lighter than the RTX 2070 Founders Edition.

The Founders Edition still doesn't compromise on looks or build quality, and is bound to look slick in your case, provided you manage to find one in retail. The RTX 2060 launch will be dominated by NVIDIA's add-in card partners, who will dish out dozens of custom-design products. Although NVIDIA didn't announce them, there are still rumors of other RTX 2060 variants with smaller memory amounts and GDDR5 memory. You get the full complement of display connectivity, including VirtualLink.

GDDR6 Memory Costs 70 Percent More than GDDR5

The latest GDDR6 memory standard, currently implemented by NVIDIA in its GeForce RTX 20-series graphics cards, commands a hefty premium. According to a 3DCenter.org report citing list-prices sourced from electronics components wholesaler DigiKey, 14 Gbps GDDR6 memory chips from Micron Technology cost over 70 percent more than common 8 Gbps GDDR5 chips of the same density from the same manufacturer. Besides obsolescence, oversupply could be impacting GDDR5 chip prices.

Although GDDR6 is available in marginally cheaper 13 Gbps and 12 Gbps trims, NVIDIA has only been sourcing 14 Gbps chips. Even the company's upcoming RTX 2060 performance-segment graphics card is rumored to implement 14 Gbps chips in variants that feature GDDR6. The sheer disparity in pricing between GDDR6 and GDDR5 could explain why NVIDIA is developing cheaper GDDR5 variants of the RTX 2060. Graphics card manufacturers can save around $22 per card by using six GDDR5 chips instead of GDDR6.
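Combining the 70 percent premium with the roughly $22-per-card saving across six chips lets us back out rough per-chip prices. The figures below are inferred from the article's numbers, not taken from DigiKey's price list:

```python
# Back-of-envelope: savings = chips x (GDDR6 price - GDDR5 price),
# with GDDR6 priced at a 70% premium over GDDR5 (assumed, per the report).
premium = 0.70
chips_per_card = 6
savings_per_card = 22.0  # USD, from the report

gddr5_chip = savings_per_card / chips_per_card / premium   # ~$5.24
gddr6_chip = gddr5_chip * (1 + premium)                    # ~$8.90
print(round(gddr5_chip, 2), round(gddr6_chip, 2))
```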

Sapphire Outs Radeon RX 590 Nitro+ OC Sans "Special Edition"

Sapphire debuted its Radeon RX 590 series last month with the RX 590 Nitro+ Special Edition, which at the time was advertised as a limited-edition SKU. Over the holiday weekend, the company updated its product stack to introduce a new mass-production SKU, the RX 590 Nitro+ OC, minus the "Special Edition" branding. There are only cosmetic changes between the two SKUs: Sapphire's favorite shade of blue on the Special Edition makes way for matte black on the cooler shroud, as do the blue accents on the back-plate, which are now black. The fan impellers are opaque matte black instead of frosty and translucent.

Thankfully, Sapphire hasn't changed the spec that matters: the factory overclock. The card still ships with a 1560 MHz engine (boost) clock and 8.40 GHz (GDDR5-effective) memory, plus a "quiet" second BIOS that dials the clocks down to 1545 MHz boost and 8.00 GHz memory. The underlying PCB is unchanged, too, drawing power from a combination of 8-pin and 6-pin PCIe power connectors, and conditioning it with a 6+1 phase VRM. Display outputs include two each of DisplayPort 1.4 and HDMI 2.0, and a dual-link DVI-D. The company didn't reveal pricing, although we expect it to be marginally lower than the Special Edition SKU's.

NVIDIA GeForce RTX 2060 to Ship in Six Variants Based on Memory Size and Type

NVIDIA drew consumer ire for differentiating its GeForce GTX 1060 into two variants based on memory, the GTX 1060 3 GB and GTX 1060 6 GB, with the two also featuring different GPU core-configurations. The company plans to double-down - or should we say, triple-down - on its sub-branding shenanigans with the upcoming GeForce RTX 2060. According to VideoCardz, citing a GIGABYTE leak about regulatory filings, NVIDIA could be carving out not two, but six variants of the RTX 2060!

There are at least two parameters that differentiate the six variants (that we know of, anyway): memory size and memory type. There are three memory sizes (3 GB, 4 GB, and 6 GB), and each comes in two memory types, the latest GDDR6 and the older GDDR5. Based on the six RTX 2060 variants, GIGABYTE could launch up to thirty-nine SKUs. When you add up similar SKU counts from NVIDIA's other AIC partners, there could be upward of 300 RTX 2060 graphics card models to choose from. It won't surprise us if, in addition to memory size and type, GPU core-configurations also vary between the six RTX 2060 variants, compounding consumer confusion. The 12 nm "TU106" silicon already has "A" and "non-A" ASIC classes, so there could be as many as twelve new device IDs in all! The GeForce RTX 2060 is expected to debut in January 2019.
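The variant arithmetic in the report is easy to sketch:

```python
# Three memory sizes x two memory types = six RTX 2060 variants;
# two ASIC classes ("A" and "non-A") would double the device-ID count.
sizes = ["3 GB", "4 GB", "6 GB"]
mem_types = ["GDDR6", "GDDR5"]
asic_classes = ["A", "non-A"]

variants = [(size, mem) for size in sizes for mem in mem_types]
print(len(variants))                      # -> 6
print(len(variants) * len(asic_classes))  # -> 12
```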

AMD Radeon RX 590 Launch Price, Other Details Revealed

AMD is very close to launching its new Radeon RX 590 graphics card, targeting a middle-of-market segment that sells in high volumes, particularly with Holiday around the corner. The card is based on the new 12 nm "Polaris 30" silicon, which has the same exact specifications as the "Polaris 20" silicon, and the original "Polaris 10," but comes with significantly higher clock-speed headroom thanks to the new silicon fabrication process, which AMD and its partners will use to dial up engine clock speed by 10-15% over those of the RX 580. While the memory is still 8 Gbps 256-bit GDDR5, some partners will ship overclocked memory.

According to a slide deck seen by VideoCardz, AMD is setting the baseline price of the Radeon RX 590 at USD $279.99, which is about $50 higher than the RX 580 8 GB, and $40 higher than the price the RX 480 launched at. AMD will add value at that price by bundling three AAA games: "Tom Clancy's The Division 2," "Devil May Cry 5," and "Resident Evil 2." The latter two titles are unreleased, and the three games together represent a $120-150 value. AMD will also work with monitor manufacturers to come up with graphics card + AMD FreeSync monitor bundles.

HIS Radeon RX 590 IceQ X² Detailed

With a little JavaScript trickery, Redditor "BadReIigion" succeeded in making the company website of AMD partner HIS spit out details of its upcoming Radeon RX 590 IceQ X² graphics card (model number: HIS-590R8LCBR). Pictured below is the RX 580 IceQ X², but we expect the RX 590-based product to be mostly similar, with cosmetic changes such as a different cooler shroud or back-plate design. The website confirms some details, like the ASIC being "Polaris 30 XT," a rendition of the 2,304-SP "Polaris 20" die on the 12 nm FinFET node, and that the card features 8 GB of GDDR5 memory. Some of the other details, such as the engine clock being listed as "2000 MHz," are unlikely to be accurate.

The consensus emerging from RX 590 leaks so far puts engine boost frequencies of custom-design, factory-overclocked RX 590 cards at around 1500-1550 MHz, a 100-200 MHz improvement over the RX 580. Some board vendors, such as Sapphire, are even overclocking the memory by about 5%. "Polaris 30" is likely pin-compatible with "Polaris 20," because most board vendors are reusing their RX 580 PCBs, some of which are even carried over from the RX 480. For the HIS RX 590 IceQ X², this means drawing power from a single 8-pin PCIe power connector.

Sapphire Radeon RX 590 NITRO+ Special Edition Detailed

Sapphire is developing a premium variant of its upcoming Radeon RX 590 series, called the RX 590 NITRO+ Special Edition, much like the "limited edition" branding it gave its premium RX 580-based card. Komachi Ensaka accessed leaked brochures of this card, which will bear an internal SKU code 11289-01. The brochure also confirms that the RX 590 features an unchanged 2,304 stream processor count from the RX 580, and continues to feature 8 GB of GDDR5 memory across a 256-bit wide memory interface. All that's new is improved thermals from a transition to the new 12 nm FinFET silicon fabrication process.

The Sapphire RX 590 NITRO+ SE ships with two clock-speed profiles, which can probably be toggled in hardware by switching between two BIOS ROMs. The first profile, called NITRO+ Boost, runs the GPU at 1560 MHz and the memory at 8400 MHz (GDDR5-effective). The second profile, called Silent Mode, reduces the engine boost clock to 1545 MHz and the memory to 8000 MHz. The fan settings are unchanged between the two profiles: the fans stay off until the GPU warms up to 54 °C, reach their nominal speed at 75 °C, and switch off again once the GPU cools to 45 °C. The nominal speed range is 0-2,280 RPM, with a maximum of 3,200 RPM.

ASUS Prepares GPP-Ridden Radeon RX 590 ROG STRIX Graphics Card for Launch

Videocardz, through their industry sources, says it has confirmed that ASUS is working on its own Radeon RX 590 ROG STRIX graphics card. The naming isn't a typo: the GPP-fueled AREZ moniker has apparently gone out the window for ASUS by now, and the RX 590 should be marketed under its (again) brand-agnostic ROG lineup. The product code (ROG-STRIX-RX590-8G-GAMING) indicates the use of 8 GB of graphics memory just like the RX 580, and we all expect this to be of the GDDR5 kind with no further refinements. It's all in the die, as they (could) say.

Alleged AMD RX 590 3D Mark Time Spy Scores Surface

Benchmark scores for 3DMark's Time Spy have surfaced, purported to represent the performance level of an unidentified "Generic VGA" - which is being identified as AMD's new 12 nm Polaris revision. The RX 590 product name makes almost as much sense as it doesn't, though; for one, there's no real reason to release an entire new RX 600 series unless AMD is giving the 12 nm treatment to the whole lineup (which likely won't happen, due to the investment in fabrication process redesign and node capacity required). As such, the RX 590 moniker makes sense if AMD is only looking to increase its competitiveness in the sub-$300 space as a stop-gap until it finally has a new graphics architecture up its shader sleeves.

Intel "Crimson Canyon" NUCs with Discrete GPUs Up for Pre-order

One of the first Intel NUC (next unit of computing) mini PCs to feature completely discrete GPUs (and not MCMs of CPUs and GPUs), the "Crimson Canyon" NUC8i3CYSM and NUC8i3CYSN, are up for pre-order. The former is priced at USD $529, while the latter goes for $574. The two combine Intel's 10 nm Core i3-8121U "Cannon Lake" SoC with AMD Radeon 540 discrete GPU. Unlike the "Hades Canyon" NUC, which features an MCM with a powerful AMD Radeon Vega M GPU die and a quad-core "Kaby Lake" CPU die; the "Crimson Canyon" features its processor and GPU on separate packages. The Radeon 540 packs 512 stream processors, 32 TMUs, and 16 ROPs; with 2 GB of GDDR5 memory.

All that's differentiating the NUC8i3CYSM from the NUC8i3CYSN is memory. You get 4 GB of LPDDR4 memory with the former, and 8 GB of it with the latter. Both units come with a 2.5-inch 1 TB HDD pre-installed. You also get an M.2-2280 slot with PCIe 3.0 x4 wiring, and support for Optane caching. Intel Wireless-AC 9560 WLAN card handles wireless networking, while an i219-V handles wired. Connectivity includes four USB 3.0 type-A ports, one of which has high current; an SDXC card reader, CIR, two HDMI 2.0 outputs, and 7.1-channel HD audio. The NUC has certainly grown in size over the years. This one measures 117 mm x 112 mm x 52 mm (WxDxH). An external 90W power-brick adds to the bulk.

NVIDIA GeForce GT 1030 Shipping with DDR4 Instead of GDDR5

Low-end graphics cards usually don't attract much attention from the enthusiast crowd. Nevertheless, not all computer users are avid gamers, and most average-joe users are perfectly happy with an entry-level graphics card, for example, a GeForce GT 1030. To refresh our memories a bit, NVIDIA launched the GeForce GT 1030 last year to compete against AMD's Radeon RX 550. It was recently discovered that several manufacturers have been shipping a lower-spec'd version of the GeForce GT 1030. According to NVIDIA's official specifications, the reference GeForce GT 1030 shipped with 2 GB of GDDR5 memory running at 6008 MHz (GDDR5-effective) across a 64-bit wide memory bus, which amounts to a memory bandwidth of 48 GB/s. However, some models from MSI, Gigabyte, and Palit come with DDR4 memory operating at 2100 MHz instead. If you do the math, that comes down to a memory bandwidth of 16.8 GB/s, which certainly is a huge downgrade, on paper at least. The good news amid the bad is that the DDR4-based variants consume 10 W less than the reference model.
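The bandwidth math behind the downgrade is straightforward: effective data rate times the bus width in bytes, with both GT 1030 variants sharing a 64-bit (8-byte) bus. A minimal sketch:

```python
# Peak bandwidth (GB/s) = effective MT/s x bytes per transfer / 1000.
def bandwidth_gbps(effective_mhz: float, bus_width_bits: int) -> float:
    return effective_mhz * (bus_width_bits / 8) / 1000

gddr5 = bandwidth_gbps(6008, 64)  # reference GT 1030: ~48.1 GB/s
ddr4 = bandwidth_gbps(2100, 64)   # DDR4 variant: 16.8 GB/s
print(round(gddr5, 1), round(ddr4, 1))  # -> 48.1 16.8
```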

Will this memory swap affect real-world performance? Probably. However, we won't know to what extent without proper testing. Unlike the GeForce MX150 fiasco, manufacturers were kind enough to let consumers know the difference between the two models this time around. The lower-end DDR4 variant carries a "D4" denotation as part of the graphics card's model name, or consumers can find the denotation on the box. Beware, though, as not all manufacturers will give you the heads up. For example, Palit doesn't.

ASUS Intros Radeon RX 570 Expedition Graphics Card

ASUS today introduced the Radeon RX 570 Expedition graphics card (model: EX-RX570-O8G). The card is part of the company's Expedition family of graphics cards and motherboards designed for the rigors of gaming i-cafes, and is built with slightly more durable electrical components, and IP5X-certified dust-proof fans. The card features an engine clock (GPU clock) of up to 1256 MHz out of the box (against 1240 MHz reference), while its memory clock is untouched at 7.00 GHz (GDDR5-effective). It features 8 GB of memory.

The card is cooled by a custom-design aluminium fin-stack cooler: heat drawn by a pair of 8 mm-thick nickel-plated copper heat-pipes is vented out by a pair of IP5X-certified 80 mm dual ball-bearing fans that are programmed to stay off when the GPU temperature is under 55 °C. The card is put through 144 hours of extreme stress-testing before being packaged. Power is drawn from a single 8-pin PCIe power connector. Display outputs include one each of DisplayPort 1.4, HDMI 2.0b, and dual-link DVI-D. The company didn't reveal pricing.

Gigabyte GeForce GTX 1060 5GB Windforce OC Already Spotted in the Wild

Yesterday we broke the news that NVIDIA was preparing to launch a fourth variant of its GeForce GTX 1060 graphics card. The Gigabyte GeForce GTX 1060 5GB Windforce OC is the first custom model to appear in the wild so far. The overall design is similar to its GTX 1060 6GB Windforce OC sibling: the dual-fan Windforce 2X cooling system is present once again, as is the full-cover backplate with the Gigabyte engraving. The similarities don't stop there, either; the technical specifications are identical as well. The GTX 1060 5GB Windforce OC runs at a base clock of 1556 MHz and a boost clock of 1771 MHz in "Gaming" mode, while in "OC" mode, the graphics card cranks the base clock up to 1582 MHz and the boost clock to 1797 MHz. In terms of memory, the GTX 1060 5GB Windforce OC ships with 5 GB of GDDR5 memory running at 8008 MHz, which comes down to a bandwidth of 160 GB/s across a 160-bit bus. The video outputs consist of two DVI ports, one HDMI port, and a DisplayPort.