News Posts matching #12 GB

NVIDIA Announces the GeForce RTX 3060, $330, 12 GB of GDDR6

NVIDIA today announced that it is bringing the NVIDIA Ampere architecture to millions more PC gamers with the new GeForce RTX 3060 GPU. With its efficient, high-performance architecture and the second generation of NVIDIA RTX, the RTX 3060 brings amazing hardware raytracing capabilities and support for NVIDIA DLSS and other technologies, and is priced at $329.

NVIDIA's 60-class GPUs have traditionally been the single most popular cards among gamers on Steam, with the GTX 1060 having sat at the top of the GPU gaming charts since its introduction in 2016. An estimated 90 percent of GeForce gamers currently play with a GTX-class GPU. "There's unstoppable momentum behind raytracing, which has quickly redefined the new standard of gaming," said Matt Wuebbling, vice president of global GeForce marketing at NVIDIA. "The NVIDIA Ampere architecture has been our fastest-selling ever, and the RTX 3060 brings the strengths of the RTX 30 Series to millions more gamers everywhere."

Inno3D Presents GeForce RTX 3060 12 GB Series, Price Starts at $329

INNO3D announces the GeForce RTX 3060 TWIN X2 / OC and iCHILL X3 RED as additions to the RTX 30 Series line-up powered by the advanced NVIDIA Ampere architecture. INNO3D was founded in 1998 with the vision of developing pioneering computer hardware products on a global scale. Fast forward to the present day, and INNO3D is well-established in the gaming community, known for our innovative and daring approach to design and technology. We are Brutal by Nature in everything we do and are 201% committed to you for the best gaming experience in the world.

With its efficient, high-performance architecture and the second generation of NVIDIA RTX, the GeForce RTX 3060 brings amazing hardware ray-tracing capabilities and support for NVIDIA DLSS and other technologies, and is priced starting at $329. Like all RTX 30 Series GPUs, the RTX 3060 supports the trifecta of GeForce gaming innovations: NVIDIA DLSS, NVIDIA Reflex and NVIDIA Broadcast, which accelerate performance and enhance image quality. Together with real-time ray tracing, these technologies are the foundation of the GeForce gaming platform, which brings unparalleled performance and features to games and gamers everywhere.

AMD Radeon RX 6700 XT BIOS Analysis Reveals Extreme GPU Clock Limits

AMD is expected to debut its Radeon RX 6700 series based on the "Navi 22" silicon following its RX 6900 XT launch, to compete with NVIDIA's GeForce RTX 3060/Ti. Several rumored specifications of the RX 6700 series surfaced in an older report from last week, which referenced a similar compute unit count to the previous-generation RX 5700 series, but with a 25% narrower memory bus, at 192-bit. The memory amount itself has been increased by 50% to 12 GB, using higher memory density per memory channel. In that report we wondered how AMD could overcome the deficit of lower memory bandwidth, and whether an Infinity Cache solution is being used. It turns out that the RX 6700 series should end up faster than the RX 5700 series by virtue of an enormous GPU clock (engine clock) increase, according to an Igor's Lab report.

Igor Wallossek analyzed two video BIOS images of Radeon RX 6700 series graphics cards, using MorePowerTool, and uncovered engine clock limits as high as 2854 MHz, with 2950 MHz overdrive limits. Just to be clear, these are limits, and not manufacturer-set boost clocks. For example, the RX 6800 XT has a reference maximum boost frequency of 2250 MHz, whereas its clock limit set in the BIOS is 2800 MHz. One of the BIOSes analyzed by Wallossek has a power limit of 220 W, and the other 186 W. Interestingly, the cards have the same 1075 MHz memory clock limit seen on the RX 6800 XT, which confirms that AMD is using 16 Gbps-rated GDDR6 memory; over a 192-bit wide memory bus, this would yield 384 GB/s of memory bandwidth. Find more technical commentary by Igor's Lab in the source link below.
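
For reference, the 384 GB/s figure follows directly from the per-pin data rate and the bus width. A minimal sketch of that arithmetic in Python, using the figures quoted above (with the RX 6800 XT's 256-bit configuration for comparison):

```python
def peak_bandwidth_gbps(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# Rumored RX 6700 series: 16 Gbps GDDR6 over a 192-bit bus
print(peak_bandwidth_gbps(16, 192))  # 384.0 GB/s
# RX 6800 XT reference for comparison: 16 Gbps GDDR6 over a 256-bit bus
print(peak_bandwidth_gbps(16, 256))  # 512.0 GB/s
```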

Possible Radeon RX 6700 XT Specs Surface, 12 GB the New Mid-Range Memory Size?

AMD could follow up on its RX 6800 series and RX 6900 XT launches with the RX 6700 series, which logically succeeds the RX 5700 series, and competes with NVIDIA's RTX 3060/Ti. Patrick Schur on Twitter, who has a high hit-rate with specs of upcoming AMD products, put out possible specs of the RX 6700 series. Both are based on the new "Navi 22" silicon, with an interesting set of specifications.

Apparently, 12 GB could be AMD's new memory amount for the mid-range. It's unknown whether the 12 GB runs over a 192-bit wide memory interface (6x 16 Gbit chips), or whether AMD is using mixed-density chips over a 256-bit wide memory bus (think 4x 16 Gbit and 4x 8 Gbit), because even the fastest JEDEC-standard GDDR6 chips, running at 16 Gbps, would only yield 384 GB/s of memory bandwidth, which is less than the 448 GB/s the RX 5700 series enjoys. Perhaps an Infinity Cache is deployed to make up the difference?
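
Either layout mentioned above lands on 12 GB; they differ only in bus width (and therefore bandwidth). A hypothetical sketch of the two configurations, assuming one GDDR6 chip per 32-bit channel:

```python
def capacity_and_bus(chip_densities_gbit):
    """Return (capacity in GB, bus width in bits), assuming one 32-bit channel per chip."""
    return sum(chip_densities_gbit) / 8, len(chip_densities_gbit) * 32

print(capacity_and_bus([16] * 6))            # (12.0, 192) - six 16 Gbit chips, 192-bit bus
print(capacity_and_bus([16] * 4 + [8] * 4))  # (12.0, 256) - mixed density over a 256-bit bus
```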

Godfall System Requirements List 12 GB VRAM for 4K and Ultra HD Textures

Godfall, the RPG looter-slasher being developed by Counterplay Games in close collaboration with AMD, will require 12 GB of VRAM for maxed-out settings at 4K resolution. As revealed in one of the partner videos AMD announced alongside its RX 6000 series of graphics cards, Godfall is being built with DirectX 12 Ultimate and DXR in mind, and takes advantage of a number of rendering technologies that are part of the DXR 1.1 feature set, alongside AMD's FidelityFX technologies. Counterplay Games will make a 4K x 4K Ultra HD texture pack available for maxed-out settings - well within the 16 GB of VRAM AMD has settled on for its RX 6900 XT, RX 6800 XT, and RX 6800 graphics cards.

Godfall features Variable Rate Shading (VRS) for increased performance with no discernible loss of visual quality, as well as raytraced shadows (platform-agnostic), and makes use of AMD's FidelityFX Contrast Adaptive Sharpening. This technology has shown great results in improving both performance and image quality - it has been benchmarked as offering performance levels similar to those of DLSS 2.0 in Death Stranding, for instance, compared to a full 4K render.

Micron Confirms Next-Gen NVIDIA Ampere Memory Specifications - 12 GB GDDR6X, 1 TB/s Bandwidth

Micron has spilled the beans on at least some specifications of NVIDIA's next-gen Ampere graphics cards. In a new tech brief posted by the company earlier this week, hidden away behind Micron's market outlook, strategy and positioning, lie some secrets NVIDIA might not be too keen to see divulged before its #theultimatecountdown event.

In a comparison of ultra-bandwidth solutions, under the GDDR6X column, Micron lists a next-gen NVIDIA card under the "RTX 3090" product name. According to the spec sheet, this card features a total memory capacity of 12 GB of GDDR6X, achieved through 12 memory chips on a 384-bit wide memory bus. As we saw today, only 11 of these seem to be populated on the RTX 3090, which, when paired with the GDDR6X memory chips' rated 19-21 Gbps speeds, brings total memory subsystem bandwidth towards the 912 - 1008 GB/s range (using 12 chips; 11 chips results in 836 GB/s minimum). It's possible the RTX 3090 product name isn't an official NVIDIA product name, but rather a Micron guess, so don't look at it as a factual representation of an upcoming graphics card. One other interesting aspect of the tech brief is that Micron expects its GDDR6X technology to enable 16 Gb (or 2 GB) density chips with 24 Gbps bandwidth as early as 2021. You can read the tech brief - which mentions NVIDIA by name as a development partner for GDDR6X - by following the source link and clicking on the "The Demand for Ultra-Bandwidth Solutions" document.
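
The bandwidth range quoted above falls out of the chip count (each GDDR6X chip occupies a 32-bit channel) and the 19-21 Gbps per-pin speeds Micron lists. A quick sketch of that arithmetic:

```python
def bandwidth_gbps(chips: int, data_rate_gbps: float) -> float:
    """Each chip adds a 32-bit channel; bandwidth = bus width x data rate / 8."""
    return chips * 32 * data_rate_gbps / 8

print(bandwidth_gbps(12, 19), bandwidth_gbps(12, 21))  # 912.0 to 1008.0 GB/s on the full 384-bit bus
print(bandwidth_gbps(11, 19))                          # 836.0 GB/s with 11 chips (352-bit)
```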

AMD RDNA 2 "Big Navi" to Feature 12 GB and 16 GB VRAM Configurations

As we get close to the launch of RDNA 2 based GPUs, which are supposedly coming in September this year, the number of rumors is starting to increase. Today, a new rumor comes our way from the Chinese forum Chiphell. A user called "wjm47196", known for providing rumors and all kinds of information, has specified that AMD's RDNA 2 based "Big Navi" GPU will come in two configurations - 12 GB and 16 GB VRAM variants. Given that this is the Navi 21 chip, which represents the top-end GPU, it is logical that AMD has equipped it with a higher amount of VRAM such as 12 GB and 16 GB. It is possible that AMD could separate the two variants the way NVIDIA has done with the GeForce RTX 2080 Ti and Titan RTX, so the 16 GB variant is a bit faster, possibly featuring a higher number of streaming processors.

Micron's Low-Power DDR5 DRAM Boosts Performance and Consumer Experience of Motorola's New Flagship Edge+ Smartphone

Micron Technology, Inc., together with Motorola, today announced integration of Micron's low-power DDR5 (LPDDR5) DRAM into Motorola's new motorola edge+ smartphone, bringing the full potential of the 5G experience to consumers. Micron and Motorola worked in close collaboration to enable the edge+ to reach 5G network speeds that require maximum processing power coupled with high bandwidth memory and storage.

With 12 gigabytes (GB) of industry-leading Micron LPDDR5 DRAM memory, motorola edge+ delivers a smooth, lag-free consumer experience. The new phone takes advantage of the faster data speeds and lower latency of 5G to increase the performance of cloud-based applications such as gaming and streaming entertainment.

Samsung Begins Mass-production of 12 GB LPDDR4X uMCP Memory Chips

Samsung Electronics, a world leader in advanced memory technology, today announced that it has begun mass producing the industry's first 12-gigabyte (GB) low-power double data rate 4X (LPDDR4X) UFS-based multichip package (uMCP). The announcement was made as part of the company's annual Samsung Tech Day at its Device Solutions' America headquarters in San Jose, California.

"Leveraging our leading-edge 24-gigabit (Gb) LPDDR4X chips, we can offer the highest mobile DRAM capacity of 12 GB not only for high-end smartphones but also for mid-range devices," said Sewon Chun, executive vice president of Memory Marketing at Samsung Electronics. "Samsung will continue to support our smartphone-manufacturing customers with on-time development of next-generation mobile memory solutions, bringing enhanced smartphone experiences to many more users around the globe."

NVIDIA GV102 Prototype Board With GDDR6 Spotted, Up to 525 W Power Delivery. GTX 1180 Ti?

Reddit user 'dustinbrooks' has posted a photo of a prototype graphics card design that is clearly made by NVIDIA and "tested by a buddy of his that works for a company that tests NVIDIA boards". Dustin asked the community what he was looking at, which of course got tech enthusiasts interested.

The card is clearly made by NVIDIA as indicated by the markings near the PCI-Express x16 slot connector. What's also visible is three PCI-Express 8-pin power inputs and a huge VRM setup with four fans. Unfortunately the GPU in the center of the board is missing, but it should be GV102, the successor to GP102, since GDDR6 support is needed. The twelve GDDR6 memory chips located around the GPU's solder balls are marked as D9WCW, which decodes to MT61K256M32JE-14:A. These chips are Micron-made 8 Gbit GDDR6, specified for 14 Gb/s data rate, operating at 1.35 V. With twelve chips, this board has a 384-bit memory bus and 12 GB VRAM. The memory bandwidth at 14 Gbps data rate is a staggering 672 GB/s, which conclusively beats the 484 GB/s that Vega 64 and GTX 1080 Ti offer.
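
Working back from the twelve D9WCW chips (8 Gbit density, 14 Gbps rated) gives the capacity, bus width, and bandwidth figures above. A minimal sketch:

```python
chips = 12
density_gbit = 8      # 8 Gbit per GDDR6 chip
data_rate_gbps = 14   # rated per-pin data rate

capacity_gb = chips * density_gbit / 8                # 12.0 GB
bus_width_bits = chips * 32                           # 384-bit (one 32-bit channel per chip)
bandwidth_gbps = bus_width_bits * data_rate_gbps / 8  # 672.0 GB/s

print(capacity_gb, bus_width_bits, bandwidth_gbps)
```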

NVIDIA Announces TITAN V "Volta" Graphics Card

NVIDIA, in a shock move, announced its new flagship graphics card, the TITAN V. This card implements the "Volta" GV100 graphics processor, the same one that drives the company's Tesla V100 HPC accelerator. The GV100 is a multi-chip module, with the GPU die and three HBM2 memory stacks sharing a package. The card features 12 GB of HBM2 memory across a 3072-bit wide memory interface. The GPU die has been built on the 12 nm FinFET+ process by TSMC. The NVIDIA TITAN V maxes out the GV100 silicon, if not its memory interface, featuring a whopping 5,120 CUDA cores and 640 Tensor cores (specialized units that accelerate neural-net building/training). The CUDA cores are spread across 80 streaming multiprocessors (64 CUDA cores per SM), organized into 6 graphics processing clusters (GPCs). The TMU count is 320.

The GPU core is clocked at 1200 MHz, with a GPU Boost frequency of 1455 MHz, and an HBM2 memory clock of 850 MHz, translating into 652.8 GB/s of memory bandwidth (1.70 Gbps stacks). The card draws power from a combination of 6-pin and 8-pin PCIe power connectors. Display outputs include three DisplayPort and one HDMI connector. With a wallet-scorching price of USD $2,999, and available exclusively through the NVIDIA store, the TITAN V is evidence that with Intel deciding to sell client-segment processors for $2,000, it was only a matter of time before GPU makers sought out that price band. At $3k, the GV100's margins are probably more than made up for.
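
The 652.8 GB/s figure comes straight from the double-data-rate signaling of the 850 MHz HBM2 clock across the 3072-bit interface. A minimal sketch, using only the numbers quoted above:

```python
hbm2_clock_mhz = 850
bus_width_bits = 3072

data_rate_gbps = hbm2_clock_mhz * 2 / 1000            # DDR signaling -> 1.70 Gbps per pin
bandwidth_gbps = data_rate_gbps * bus_width_bits / 8  # 652.8 GB/s

print(data_rate_gbps, bandwidth_gbps)
```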

Microsoft Won't be Profiting from the Xbox One X's $499 Price Point

The lid was taken off Microsoft's Project Scorpio console last weekend. Commercially named the Xbox One X, the new Xbox console will join the "Xbox family of devices" with a much higher power envelope than any other console currently on the market, at 6 TFLOPs of computing power. At that rate, Microsoft says (and has demonstrated) that its new console will be able to power premium, true 4K experiences. However, some analysts say that the $499 price point will be too high for consumers, who usually look to purchase consoles in the $249-$349 price band.

That said, the question could be put to Microsoft whether the company could have decreased its new console's pricing even further by taking a cut out of its hardware profits. When asked whether Microsoft was making any profit at all from the Xbox One X's retail pricing, Phil Spencer answered with a pretty blunt "No". So Microsoft really isn't profiting from the sale of any Xbox One X console, which may look somewhat unbelievable considering its steep price point (relatively speaking; we have to keep in mind this console can actually power 4K experiences). However, this is nothing new: in fact, most gaming consoles ever released barely made any money on hardware sales at the moment of their introduction to market. Manufacturers such as Microsoft and Sony instead usually choose to subsidize console purchases by bringing their profit margin to zero (and sometimes even below zero, as in, the consoles cost more to manufacture than their selling price) so as to allow a greater number of customers to purchase the hardware. Software, and more recently DLC, is where the money is to be made in consoles.

NVIDIA Announces the TITAN Xp - Faster Than GTX 1080 Ti

NVIDIA GeForce GTX 1080 Ti cannibalized the TITAN X Pascal, and the company needed something faster to sell at USD $1,200. Without making much noise about it, the company launched the new TITAN Xp, and with it, discontinued the TITAN X Pascal. The new TITAN Xp features all 3,840 CUDA cores physically present on the "GP102" silicon, all 240 TMUs, all 96 ROPs, and 12 GB of faster 11.4 Gbps GDDR5X memory over the chip's full 384-bit wide memory interface.

Compare these to the 3,584 CUDA cores, 224 TMUs, 96 ROPs, and 10 Gbps GDDR5X memory of the TITAN X Pascal, and 3,584 CUDA cores, 224 TMUs, 88 ROPs, and 11 GB of 11 Gbps GDDR5X memory across a 352-bit memory bus, of the GTX 1080 Ti. The GPU Boost frequency is 1582 MHz. Here's the catch - the new TITAN Xp will be sold exclusively through GeForce.com, which means it will be available in very select markets where NVIDIA's online store has a presence.

NVIDIA Preparing GeForce GTX 1080 Ti for 2017 CES Launch

NVIDIA is preparing its next high-end graphics card under the GeForce GTX brand, the GTX 1080 Ti, for launch on the sidelines of the 2017 International CES, early next January. The card will be positioned between the $599-$699 GeForce GTX 1080 and the $1,199 TITAN X Pascal, and will be based on the 16 nm "GP102" silicon.

Chinese tech publication Zol.com.cn reports a few possible specifications of the SKU, adding to what we know from an older report. NVIDIA is carving the GTX 1080 Ti out of the GP102 silicon by enabling 26 out of 30 streaming multiprocessors, resulting in a CUDA core count of 3,328. This sets the TMU count at 208. The ROP count is unchanged at 96. The card features a 384-bit wide GDDR5X memory interface (and not the previously-thought GDDR5). It will have a memory bandwidth identical to the TITAN X Pascal's, at 480 GB/s. The card will feature a standard memory amount of 12 GB. Its GPU clock speeds are expected to be 1503 MHz core, with 1623 MHz GPU Boost.
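
The 3,328-core figure follows from Pascal's per-SM layout, in which each GP102 streaming multiprocessor carries 128 CUDA cores and 8 TMUs. A quick sketch of the rumored configuration:

```python
enabled_sms = 26    # out of 30 on GP102, as rumored for the GTX 1080 Ti
cores_per_sm = 128  # CUDA cores per Pascal SM
tmus_per_sm = 8     # TMUs per Pascal SM

print(enabled_sms * cores_per_sm)  # 3328 CUDA cores
print(enabled_sms * tmus_per_sm)   # 208 TMUs
```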

NVIDIA Announces the GeForce GTX TITAN X Pascal

In a show of shock and awe, NVIDIA today announced its flagship graphics card based on the "Pascal" architecture, the GeForce GTX TITAN X Pascal. Market availability of the card is scheduled for August 2, 2016, priced at US $1,199. Based on the 16 nm "GP102" silicon, this graphics card is endowed with 3,584 CUDA cores spread across 56 streaming multiprocessors, 224 TMUs, 96 ROPs, and a 384-bit GDDR5X memory interface, holding 12 GB of memory.

The core is clocked at 1417 MHz, with 1531 MHz GPU Boost, and 10 Gbps memory, churning out 480 GB/s of memory bandwidth. The card draws power from a combination of 6-pin and 8-pin PCIe power connectors; the GPU's TDP is rated at 250W. NVIDIA claims that the GTX TITAN X Pascal is up to 60 percent faster than the GTX TITAN X (Maxwell), and up to 3 times faster than the original GeForce GTX TITAN.

NVIDIA to Unveil GeForce GTX TITAN P at Gamescom

NVIDIA is preparing to launch its flagship graphics card based on the "Pascal" architecture, the so-called GeForce GTX TITAN P, at the 2016 Gamescom, held in Cologne, Germany, between 17-21 August. The card is expected to be based on the GP100 silicon, and could likely come in two variants - 16 GB and 12 GB. The two differ in memory bus width as well as memory size. The 16 GB variant could feature four HBM2 stacks over a 4096-bit memory bus, while the 12 GB variant could feature three HBM2 stacks and a 3072-bit bus. This approach by NVIDIA is identical to the way it carved out Tesla P100-based PCIe accelerators based on this ASIC. The cards' TDP could be rated between 300-375W, drawing power from two 8-pin PCIe power connectors.
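
Each HBM2 stack provides a 1024-bit interface, and in these rumored configurations 4 GB of capacity, which is where the two variants come from. A minimal sketch, assuming 4 GB per stack:

```python
def hbm2_config(stacks: int, gb_per_stack: int = 4, bits_per_stack: int = 1024):
    """Return (capacity in GB, bus width in bits) for a given HBM2 stack count."""
    return stacks * gb_per_stack, stacks * bits_per_stack

print(hbm2_config(4))  # (16, 4096) -> 16 GB over a 4096-bit bus
print(hbm2_config(3))  # (12, 3072) -> 12 GB over a 3072-bit bus
```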

The GP100-based GTX TITAN P isn't the only high-end graphics card targeted at gamers and PC enthusiasts; NVIDIA is also working on the GP102 silicon, positioned between the GP104 and the GP100. This chip could lack the FP64 CUDA cores found on the GP100 silicon, and feature up to 3,840 CUDA cores of the same kind found on the GP104. The GP102 is also expected to feature a simpler 384-bit GDDR5X memory interface. NVIDIA could base the GTX 1080 Ti on this chip.

NVIDIA Announces a PCI-Express Variant of its Tesla P100 HPC Accelerator

NVIDIA announced a PCI-Express add-on card variant of its Tesla P100 HPC accelerator, at the 2016 International Supercomputing Conference, held in Frankfurt, Germany. The card is about 30 cm long, 2-slot thick, and of standard height, and is designed for PCIe multi-slot servers. The company had introduced the Tesla P100 earlier this year in April, with a dense mezzanine form-factor variant for servers with NVLink.

The PCIe variant of the P100 offers slightly lower performance than the NVLink variant, because of lower clock speeds, although the core configuration of the GP100 silicon remains unchanged. It offers FP64 (double-precision floating-point) performance of 4.70 TFLOP/s, FP32 (single-precision) performance of 9.30 TFLOP/s, and FP16 performance of 18.7 TFLOP/s, compared to the NVLink variant's 5.3 TFLOP/s, 10.6 TFLOP/s, and 21 TFLOP/s, respectively. The card comes in two sub-variants based on memory: a 16 GB variant with 720 GB/s of memory bandwidth and 4 MB of L2 cache, and a 12 GB variant with 548 GB/s and 3 MB of L2 cache. Both sub-variants feature 3,584 CUDA cores based on the "Pascal" architecture, and a core clock speed of 1300 MHz.
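
The three throughput figures track GP100's native rate ratios, with FP32 at roughly twice the FP64 rate and FP16 at roughly twice the FP32 rate. A quick check against the numbers quoted above:

```python
# PCIe Tesla P100 throughput figures quoted above (TFLOP/s)
fp64, fp32, fp16 = 4.70, 9.30, 18.7

print(fp32 / fp64, fp16 / fp32)  # both ~2.0 -> a 1:2:4 FP64:FP32:FP16 ratio
```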

Black Ops III: 12 GB RAM and GTX 980 Ti Not Enough

This year's installment to the Call of Duty franchise, Black Ops III, has just hit stores, and is predictably flying off shelves. As with every ceremonial annual release, Black Ops III raises the visual presentation standards for the franchise. There is, however, one hitch with the way the game deals with system memory amounts as high as 12 GB and video memory amounts as high as 8 GB. This hitch could possibly be the reason behind the stuttering issues many users are reporting.

In our first play-through of the game at its highest possible settings on our personal gaming machine - equipped with a 2560 x 1600 pixel display, a Core i7 "Haswell" quad-core CPU, 12 GB of RAM, a GeForce GTX 980 Ti graphics card, NVIDIA's latest Black Ops III Game Ready driver 358.87, and Windows 7 64-bit to top it all off - we noticed that the game was running out of memory. Taking a peek at Task Manager revealed that at "Ultra" settings (and 2560 x 1600 resolution), the game was maxing out memory usage within our 12 GB, not counting the 1.5-2 GB used up by the OS and essential lightweight tasks (such as antivirus).

ZOTAC Unveils the GeForce GTX TITAN-X ArcticStorm Edition

Even as NVIDIA prevents its AIC (add-in card) partners from coming up with custom-design GeForce GTX TITAN-X graphics cards, ZOTAC seems to have found its way around that - either by using a loophole that allows partners to come up with TITAN-series cards with "factory fitted water blocks," or because NVIDIA is loosening up its custom-design policy for the SKU, in the wake of the GTX 980 Ti not being faster than the GTX TITAN-X, and competition from AMD's "Fiji XT" graphics card. The result is the GeForce GTX TITAN-X ArcticStorm (model number: ZT-90402-10P).

This card comes with a hybrid air+liquid cooling solution. You can run it either as a conventional air-cooled graphics card, care of its meaty IceStorm triple-fan heatsink, which is carried over from the company's recent AMP! Omega SKUs; or plumb the card into a liquid-cooling loop. The fans stay off when the GPU is below a 65°C temperature threshold (or when the liquid-cooling loop is active). Even with the fans off and the liquid-cooling loop handling the GPU, the heatsink cools the 12 GB of memory and the VRM. The card offers factory-overclocked speeds of 1026 MHz core and 1114 MHz GPU Boost (compared to 1000/1086 MHz reference), with an untouched 7.00 GHz memory clock. ZOTAC will display the card at Computex 2015, and will give it a worldwide launch.

NVIDIA GeForce GTX TITAN-X Specs Revealed

NVIDIA's GeForce GTX TITAN-X, unveiled last week at GDC 2015, is shaping up to be a beast, on paper. According to an architecture block-diagram of the GM200 silicon leaked to the web, the GTX TITAN-X appears to be maxing out all available components on the 28 nm GM200 silicon, on which it is based. While maintaining the same essential component hierarchy as the GM204, the GM200 (and the GTX TITAN-X) features six graphics processing clusters, holding a total of 3,072 CUDA cores, based on the "Maxwell" architecture.

With "Maxwell" GPUs, TMU count is derived as CUDA core count / 16, giving us a count of 192 TMUs. Other specs include 96 ROPs, and a 384-bit wide GDDR5 memory interface, holding 12 GB of memory, using 24x 4 Gb memory chips. The core is reportedly clocked at 1002 MHz, with a GPU Boost frequency of 1089 MHz. The memory is clocked at 7012 MHz (GDDR5-effective), yielding a memory bandwidth of 336 GB/s. NVIDIA will use a lossless texture-compression technology to improve bandwidth utilization. The chip's TDP is rated at 250W. The card draws power from a combination of 6-pin and 8-pin PCIe power connectors, display outputs include three DisplayPort 1.2, one HDMI 2.0, and one dual-link DVI.

First Alleged GTX TITAN-X Benchmarks Surface

Here are some of the first purported benchmarks of NVIDIA's upcoming flagship graphics card, the GeForce GTX TITAN-X. Someone with access to four of these cards installed them in a system driven by a Core i7-5960X eight-core processor, and compared its single-GPU and 4-way SLI performance in 3DMark 11, using its "extreme" (X) preset. The card scored X7994 points going solo - comparable to Radeon R9 290X 2-way CrossFire, and a single GeForce GTX TITAN-Z. With four of these cards in play, you get X24064 points. Sadly, there's nothing you can compare that score with.

NVIDIA unveiled the GeForce GTX TITAN-X at the Game Developers Conference (GDC) 2015. It was just that - an unveiling, with no specs, performance numbers, or launch date announced. The card is rumored to be based on the GM200 silicon - NVIDIA's largest based on the "Maxwell" architecture - featuring 3072 CUDA cores, 192 TMUs, 96 ROPs, and a 384-bit wide GDDR5 memory interface, holding 12 GB of memory. The benchmark screenshots reveal core clock speeds to be around 1.00 GHz, and the memory clock at 7.00 GHz.

NVIDIA GeForce GTX TITAN-X Pictured Up-close

Here are some of the first close-up shots of NVIDIA's new flagship graphics card, the GeForce GTX TITAN-X, outside Jen-Hsun Huang's Rafiki moment at a GDC presentation. If we were to throw in an educated guess, NVIDIA probably coined the name "TITAN-X" as it sounds like "Titan Next," much like it chose "TITAN-Z" as it sounds like "Titans" (plural, since it's a dual-GPU card). Laid flat out on a table, the card features a matte-black reference cooling solution that looks identical to the one on the original TITAN. Other cosmetic changes include a green glow inside the fan intake, the TITAN logo, and of course, the green glow on the GeForce GTX marking on the top.

The card lacks a back-plate, giving us a peek at its memory chips. The card features 12 GB of GDDR5 memory, and looking at the twelve memory chips on the back of the PCB, with no other traces, we reckon the chip features a 384-bit wide memory interface. The 12 GB is achieved using twenty-four 4 Gb chips. The card draws power from a combination of 8-pin and 6-pin power connectors. The display I/O is identical to that of the GTX 980, with three DisplayPorts, one HDMI, and one DVI. Built on the 28 nm GM200 silicon, the GTX TITAN-X is rumored to feature 3,072 CUDA cores. NVIDIA CEO claimed that the card will be faster than even the previous generation dual-GPU flagship product by NVIDIA, the GeForce GTX TITAN-Z.
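
Twenty-four 4 Gb chips on a 384-bit bus implies a clamshell layout: two chips share each 32-bit channel, which is why only twelve chips are visible on the back of the PCB. A minimal sketch of that arithmetic:

```python
chips = 24
density_gbit = 4
channels = 384 // 32                    # twelve 32-bit channels

capacity_gb = chips * density_gbit / 8  # 12.0 GB
chips_per_channel = chips / channels    # 2.0 -> clamshell, one chip per side of the PCB

print(capacity_gb, chips_per_channel)
```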

NVIDIA Unveils the GeForce GTX TITAN-X

NVIDIA surprised everyone at its GDC 2015 event by unveiling its flagship graphics card based on the "Maxwell" architecture, the GeForce GTX TITAN-X. Although the unveiling was no formal product launch, and came with no disclosure of specs beyond a look at the card itself and a claim by no less than NVIDIA CEO Jen-Hsun Huang that the card will be faster than the current-gen dual-GPU GTX TITAN-Z, there are some highly plausible rumors about its specs doing the rounds.

The GTX TITAN-X is a single-GPU graphics card, expected to be based on the company's GM200 silicon. This chip is rumored to feature 3,072 CUDA cores based on the "Maxwell" architecture, and a 384-bit wide GDDR5 memory interface holding 12 GB of memory. NVIDIA is likely taking advantage of new 8 Gb GDDR5 chips; even otherwise, achieving 12 GB using 4 Gb chips isn't impossible. The card itself looks nearly identical to the GTX TITAN Black, with its nickel-alloy cooler shroud, with two differences - the "TITAN" marking towards the front of the card glows white, while the fan is decked with green lights, in addition to the green-glowing "GeForce GTX" logo on the top. You get to control the lighting via GeForce Experience. NVIDIA plans to run more demos of the card throughout the week.

Gigabyte Unveils P37X 17.3-inch Gaming Notebook

GIGABYTE announced the P37X, a 17.3" slim gaming laptop for anyone who seeks uncompromised gaming performance and multimedia fun on the move. Boosted by top-of-the-line GTX 980M graphics, the P37X is capable of fantastic gameplay and a splendid P10000+ score in 3DMark 11. Measuring just 22.5 mm thin and weighing 2.8 kg, this beast boasts a slender posture despite its dominating performance. GIGABYTE's exclusive Macro Hub turns personalized macro recording into a matter of seconds and takes the gaming experience to another level. The P37X equips hardcore gamers with full-spectrum features for an uncompromised gaming experience, without the heft synonymous with desktop systems. Users looking for an uncompromised mobile platform with a larger screen will find the P37X a dream all-rounder.

The P37X is powered by a quad-core Intel Core i7 processor with an NVIDIA GeForce GTX 980M, a punchier graphics force that brings a stunning P10000+ score in 3DMark 11, yet stays efficient when battery life and running time are concerns. The long-awaited Maxwell architecture brings multiple visual innovations, including DSR, MFAA and VXGI, for more realistic, stunning images and thus unprecedented in-game realism, giving users a solid edge when running the most resource-crunching games at high settings, and giving graphics professionals processing resources comparable to desktop systems. Storage is configured with two rapid 512 GB mSATA SSDs and dual 2 TB hard drives, blending speed and capacity into an incredibly thin and lightweight chassis.

NVIDIA Launches the Tesla K40 GPU Accelerator

NVIDIA today unveiled the NVIDIA Tesla K40 GPU accelerator, the world's highest performance accelerator ever built, delivering extreme performance to a widening range of scientific, engineering, high performance computing (HPC) and enterprise applications.

Providing double the memory and up to 40 percent higher performance than its predecessor, the Tesla K20X GPU accelerator, and 10 times higher performance than today's fastest CPU, the Tesla K40 GPU is the world's first and highest-performance accelerator optimized for big data analytics and large-scale scientific workloads.