News Posts matching #12 GB


Samsung Begins Mass Production of 12 GB LPDDR4X uMCP Memory Chips

Samsung Electronics, a world leader in advanced memory technology, today announced that it has begun mass producing the industry's first 12-gigabyte (GB) low-power double data rate 4X (LPDDR4X) UFS-based multichip package (uMCP). The announcement was made as part of the company's annual Samsung Tech Day at its Device Solutions' America headquarters in San Jose, California.

"Leveraging our leading-edge 24-gigabit (Gb) LPDDR4X chips, we can offer the highest mobile DRAM capacity of 12 GB not only for high-end smartphones but also for mid-range devices," said Sewon Chun, executive vice president of Memory Marketing at Samsung Electronics. "Samsung will continue to support our smartphone-manufacturing customers with on-time development of next-generation mobile memory solutions, bringing enhanced smartphone experiences to many more users around the globe."

NVIDIA GV102 Prototype Board With GDDR6 Spotted, Up to 525 W Power Delivery. GTX 1180 Ti?

Reddit user 'dustinbrooks' has posted a photo of a prototype graphics card that is clearly made by NVIDIA and, in his words, "tested by a buddy of his that works for a company that tests NVIDIA boards". Dustin asked the community what he was looking at, which of course piqued the interest of tech enthusiasts.

The card is clearly made by NVIDIA, as indicated by the markings near the PCI-Express x16 slot connector. Also visible are three PCI-Express 8-pin power inputs and a huge VRM setup with four fans. Unfortunately, the GPU in the center of the board is missing, but it should be GV102, the successor to GP102, given the need for GDDR6 support. The twelve GDDR6 memory chips located around the GPU's solder pads are marked D9WCW, which decodes to MT61K256M32JE-14:A. These are Micron-made 8 Gbit GDDR6 chips, specified for a 14 Gbps data rate and operating at 1.35 V. With twelve 32-bit chips, the board has a 384-bit memory bus and 12 GB of VRAM. At a 14 Gbps data rate, memory bandwidth works out to a staggering 672 GB/s, comfortably beating the 484 GB/s that Vega 64 and the GTX 1080 Ti offer.
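The bandwidth figure quoted above follows directly from the chip count and per-pin data rate; a quick back-of-the-envelope sketch (all numbers are taken from the paragraph above; the function name is ours):

```python
# Aggregate bandwidth (GB/s) = bus width (bits) x per-pin data rate (Gbps) / 8.
# Each GDDR6 chip exposes a 32-bit interface, so twelve chips form a 384-bit bus.

def memory_bandwidth_gbs(num_chips: int, bits_per_chip: int, data_rate_gbps: float) -> float:
    """Aggregate memory bandwidth in GB/s."""
    bus_width_bits = num_chips * bits_per_chip
    return bus_width_bits * data_rate_gbps / 8

# Twelve 32-bit chips at 14 Gbps -> the 672 GB/s quoted above.
print(memory_bandwidth_gbs(12, 32, 14.0))  # 672.0
```

The same formula reproduces the GTX 1080 Ti's 484 GB/s (eleven 32-bit chips at 11 Gbps).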

NVIDIA Announces TITAN V "Volta" Graphics Card

NVIDIA, in a shock move, announced its new flagship graphics card, the TITAN V. This card implements the "Volta" GV100 graphics processor, the same one that drives the company's Tesla V100 HPC accelerator. The GV100 is a multi-chip module, with the GPU die and three HBM2 memory stacks sharing a package. The card features 12 GB of HBM2 memory across a 3072-bit wide memory interface. The GPU die is built on TSMC's 12 nm FinFET+ process. The TITAN V maxes out the GV100 silicon, if not its memory interface, featuring a whopping 5,120 CUDA cores and 640 Tensor cores (specialized units that accelerate neural-net building and training). The CUDA cores are spread across 80 streaming multiprocessors (64 CUDA cores per SM), organized in six graphics processing clusters (GPCs). The TMU count is 320.

The GPU core is clocked at 1200 MHz, with a GPU Boost frequency of 1455 MHz, and an HBM2 memory clock of 850 MHz (1.70 Gbps per pin), translating into 652.8 GB/s of memory bandwidth. The card draws power from a combination of 6-pin and 8-pin PCIe power connectors. Display outputs include three DisplayPort and one HDMI connectors. With a wallet-scorching price of USD $2,999, available exclusively through the NVIDIA store, the TITAN V is evidence that with Intel deciding to sell client-segment processors for $2,000, it was only a matter of time before GPU makers sought out that price band. At $3,000, the GV100's margins are probably more than made up for.
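The 652.8 GB/s figure checks out against the 3072-bit interface and the 850 MHz memory clock; a minimal sketch of the arithmetic, using only numbers from the paragraph above:

```python
# HBM2: three stacks, each with a 1024-bit interface.
bus_width_bits = 3 * 1024        # 3072-bit memory interface
data_rate_gbps = 1.70            # 850 MHz double data rate -> 1.70 Gbps per pin

# Bandwidth (GB/s) = bus width (bits) x per-pin rate (Gbps) / 8 bits-per-byte.
bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
print(round(bandwidth_gbs, 1))  # 652.8
```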

Microsoft Won't be Profiting from the Xbox One X's $499 Price Point

The lid was taken off Microsoft's Project Scorpio console last weekend. Commercially named the Xbox One X, the new console will join the "Xbox family of devices" with a much higher power envelope than any other console currently on the market, at 6 TFLOPS of computing power. At that rate, Microsoft says (and has demonstrated) that its new console will be able to power premium, true 4K experiences. However, some analysts say that the $499 price point will be too high for consumers, who usually look to purchase consoles in the $249-$349 price band.

That said, one could ask whether Microsoft could have lowered its new console's pricing even further by taking a cut out of hardware profits. When asked whether Microsoft was making any profit at all from the Xbox One X's retail pricing, Phil Spencer answered with a flat "No". So Microsoft really isn't profiting from the sale of any Xbox One X console, which may seem somewhat unbelievable considering its steep price point (relatively speaking; keep in mind this console can actually power 4K experiences). However, this is nothing new: in fact, most gaming consoles ever released barely made any money on hardware sales at the time of their introduction to market. Manufacturers such as Microsoft and Sony usually choose to subsidize console purchases by bringing their profit margin to zero (and sometimes even below zero, meaning the consoles cost more to manufacture than their selling price), so as to allow a greater number of customers to purchase the hardware. Software, and more recently DLC, is where the money is made in consoles.

NVIDIA Announces the TITAN Xp - Faster Than GTX 1080 Ti

NVIDIA GeForce GTX 1080 Ti cannibalized the TITAN X Pascal, and the company needed something faster to sell at USD $1,200. Without making much noise about it, the company launched the new TITAN Xp, and with it, discontinued the TITAN X Pascal. The new TITAN Xp features all 3,840 CUDA cores physically present on the "GP102" silicon, all 240 TMUs, all 96 ROPs, and 12 GB of faster 11.4 Gbps GDDR5X memory over the chip's full 384-bit wide memory interface.

Compare these to the 3,584 CUDA cores, 224 TMUs, 96 ROPs, and 10 Gbps GDDR5X memory of the TITAN X Pascal, and 3,584 CUDA cores, 224 TMUs, 88 ROPs, and 11 GB of 11 Gbps GDDR5X memory across a 352-bit memory bus, of the GTX 1080 Ti. The GPU Boost frequency is 1582 MHz. Here's the catch - the new TITAN Xp will be sold exclusively through GeForce.com, which means it will be available in very select markets where NVIDIA's online store has a presence.

NVIDIA Preparing GeForce GTX 1080 Ti for 2017 CES Launch

NVIDIA is preparing its next high-end graphics card under the GeForce GTX brand, the GTX 1080 Ti, for launch on the sidelines of the 2017 International CES next January. The card will be positioned between the $599-$699 GeForce GTX 1080 and the $1,199 TITAN X Pascal, and will be based on the 16 nm "GP102" silicon.

Chinese tech publication Zol.com.cn reports a few possible specifications of the SKU, adding to what we know from an older report. NVIDIA is carving the GTX 1080 Ti out of the GP102 silicon by enabling 26 out of 30 streaming multiprocessors, resulting in a CUDA core count of 3,328. This puts the TMU count at 208. The ROP count is unchanged at 96. The card features a 384-bit wide GDDR5X memory interface (and not GDDR5, as previously thought). It will have a memory bandwidth identical to the TITAN X Pascal's, at 480 GB/s, and a standard memory amount of 12 GB. GPU clock speeds are expected to be 1503 MHz core and 1623 MHz GPU Boost.

NVIDIA Announces the GeForce GTX TITAN X Pascal

In a show of shock and awe, NVIDIA today announced its flagship graphics card based on the "Pascal" architecture, the GeForce GTX TITAN X Pascal. Market availability of the card is scheduled for August 2, 2016, priced at US $1,199. Based on the 16 nm "GP102" silicon, this graphics card is endowed with 3,584 CUDA cores spread across 56 streaming multiprocessors, 224 TMUs, 96 ROPs, and a 384-bit GDDR5X memory interface, holding 12 GB of memory.

The core is clocked at 1417 MHz, with 1531 MHz GPU Boost, and 10 Gbps memory, churning out 480 GB/s of memory bandwidth. The card draws power from a combination of 6-pin and 8-pin PCIe power connectors; the GPU's TDP is rated at 250 W. NVIDIA claims that the GTX TITAN X Pascal is up to 60 percent faster than the GTX TITAN X (Maxwell), and up to three times faster than the original GeForce GTX TITAN.

NVIDIA to Unveil GeForce GTX TITAN P at Gamescom

NVIDIA is preparing to launch its flagship graphics card based on the "Pascal" architecture, the so-called GeForce GTX TITAN P, at the 2016 Gamescom, held in Cologne, Germany, between August 17-21. The card is expected to be based on the GP100 silicon, and will likely come in two variants: 16 GB and 12 GB. The two differ in memory bus width as well as memory size: the 16 GB variant could feature four HBM2 stacks over a 4096-bit memory bus, while the 12 GB variant could feature three HBM2 stacks and a 3072-bit bus. This approach is identical to the way NVIDIA carved out its Tesla P100-based PCIe accelerators from the same ASIC. The cards' TDP could be rated between 300-375 W, drawing power from two 8-pin PCIe power connectors.

The GP100-based GTX TITAN P isn't the only high-end graphics card targeted at gamers and PC enthusiasts; NVIDIA is also working on the GP102 silicon, positioned between the GP104 and the GP100. This chip could lack the FP64 CUDA cores found on the GP100, and feature up to 3,840 CUDA cores of the same kind found on the GP104. The GP102 is also expected to feature a simpler 384-bit GDDR5X memory interface. NVIDIA could base the GTX 1080 Ti on this chip.

NVIDIA Announces a PCI-Express Variant of its Tesla P100 HPC Accelerator

NVIDIA announced a PCI-Express add-on card variant of its Tesla P100 HPC accelerator, at the 2016 International Supercomputing Conference, held in Frankfurt, Germany. The card is about 30 cm long, 2-slot thick, and of standard height, and is designed for PCIe multi-slot servers. The company had introduced the Tesla P100 earlier this year in April, with a dense mezzanine form-factor variant for servers with NVLink.

The PCIe variant of the P100 offers slightly lower performance than the NVLink variant because of lower clock speeds, although the core configuration of the GP100 silicon remains unchanged. It offers FP64 (double-precision floating-point) performance of 4.70 TFLOP/s, FP32 (single-precision) performance of 9.30 TFLOP/s, and FP16 performance of 18.7 TFLOP/s, compared to the NVLink variant's 5.3 TFLOP/s, 10.6 TFLOP/s, and 21 TFLOP/s, respectively. The card comes in two sub-variants based on memory: a 16 GB variant with 720 GB/s of memory bandwidth and 4 MB of L2 cache, and a 12 GB variant with 548 GB/s and 3 MB of L2 cache. Both sub-variants feature 3,584 CUDA cores based on the "Pascal" architecture, and a core clock speed of 1300 MHz.

Black Ops III: 12 GB RAM and GTX 980 Ti Not Enough

This year's installment in the Call of Duty franchise, Black Ops III, has just hit stores, and is predictably flying off shelves. As with every ceremonial annual release, Black Ops III raises the visual presentation standards for the franchise. There is, however, one hitch in the way the game deals with system memory amounts as high as 12 GB and video memory amounts as high as 8 GB. This hitch could well be the reason behind the stuttering issues many users are reporting.

In our first play-through of the game at its highest possible settings on our personal gaming machine - equipped with a 2560 x 1600 pixel display, a Core i7 "Haswell" quad-core CPU, 12 GB of RAM, a GeForce GTX 980 Ti graphics card, NVIDIA's latest Black Ops III Game Ready driver 358.87, and Windows 7 64-bit to top it all off - we noticed that the game was running out of memory. A peek at Task Manager revealed that at "Ultra" settings (and 2560 x 1600 resolution), the game was maxing out our 12 GB of memory, not counting the 1.5-2 GB used up by the OS and essential lightweight tasks (such as antivirus).

ZOTAC Unveils the GeForce GTX TITAN-X ArcticStorm Edition

Even as NVIDIA prevents its AIC (add-in card) partners from coming up with custom-design GeForce GTX TITAN-X graphics cards, ZOTAC seems to have found a way around that: either through a loophole that allows partners to build TITAN-series cards with "factory fitted water blocks," or because NVIDIA is loosening up on its custom-design policy for the SKU, in the wake of the GTX 980 Ti not being faster than the GTX TITAN-X, and competition from the AMD "Fiji XT" graphics card. The result is the GeForce GTX TITAN-X ArcticStorm (model number: ZT-90402-10P).

This card comes with a hybrid air+liquid cooling solution. You can run it either as a conventional air-cooled graphics card, courtesy of its meaty IceStorm triple-fan heatsink carried over from the company's recent AMP! Omega SKUs, or plumb the card into a liquid-cooling loop. The fans stay off when the GPU is below a 65°C temperature threshold (or when the liquid-cooling loop is active). Even with the fans off and the liquid-cooling loop handling the GPU, the heatsink cools the 12 GB of memory and the VRM. The card offers factory-overclocked speeds of 1026 MHz core and 1114 MHz GPU Boost (compared to 1000/1086 MHz reference), with an untouched 7.00 GHz memory clock. ZOTAC will display the card at Computex 2015, and will give it a worldwide launch.

NVIDIA GeForce GTX TITAN-X Specs Revealed

NVIDIA's GeForce GTX TITAN-X, unveiled last week at GDC 2015, is shaping up to be a beast, on paper. According to an architecture block-diagram of the GM200 silicon leaked to the web, the GTX TITAN-X appears to be maxing out all available components on the 28 nm GM200 silicon, on which it is based. While maintaining the same essential component hierarchy as the GM204, the GM200 (and the GTX TITAN-X) features six graphics processing clusters, holding a total of 3,072 CUDA cores, based on the "Maxwell" architecture.

With "Maxwell" GPUs, TMU count is derived as CUDA core count / 16, giving us a count of 192 TMUs. Other specs include 96 ROPs, and a 384-bit wide GDDR5 memory interface, holding 12 GB of memory, using 24x 4 Gb memory chips. The core is reportedly clocked at 1002 MHz, with a GPU Boost frequency of 1089 MHz. The memory is clocked at 7012 MHz (GDDR5-effective), yielding a memory bandwidth of 336 GB/s. NVIDIA will use a lossless texture-compression technology to improve bandwidth utilization. The chip's TDP is rated at 250W. The card draws power from a combination of 6-pin and 8-pin PCIe power connectors, display outputs include three DisplayPort 1.2, one HDMI 2.0, and one dual-link DVI.
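The derived figures in the paragraph above are simple ratios of the leaked core count and bus width; a short sketch of the arithmetic (numbers from the text; variable names are ours):

```python
cuda_cores = 3072
tmus = cuda_cores // 16        # Maxwell: TMU count = CUDA cores / 16
rops = 96

# 384-bit bus at ~7 Gbps effective (7012 MHz GDDR5-effective as quoted):
bandwidth_gbs = 384 * 7.0 / 8  # bits x Gbps / 8 -> GB/s
print(tmus, bandwidth_gbs)  # 192 336.0
```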

First Alleged GTX TITAN-X Benchmarks Surface

Here are some of the first purported benchmarks of NVIDIA's upcoming flagship graphics card, the GeForce GTX TITAN-X. Someone with access to four of these cards installed them in a system driven by a Core i7-5960X eight-core processor, and compared single-GPU and 4-way SLI performance in 3DMark 11 with its "extreme" (X) preset. The card scored X7994 points going solo - comparable to Radeon R9 290X 2-way CrossFire and a single GeForce GTX TITAN-Z. With four of these cards in play, you get X24064 points. Sadly, there's nothing to compare that score with.

NVIDIA unveiled the GeForce GTX TITAN-X at the Game Developers Conference (GDC) 2015. It was just that - an unveiling, with no specs, performance numbers, or launch date announced. The card is rumored to be based on the GM200 silicon - NVIDIA's largest based on the "Maxwell" architecture - featuring 3072 CUDA cores, 192 TMUs, 96 ROPs, and a 384-bit wide GDDR5 memory interface, holding 12 GB of memory. The benchmark screenshots reveal core clock speeds to be around 1.00 GHz, and the memory clock at 7.00 GHz.

NVIDIA GeForce GTX TITAN-X Pictured Up-close

Here are some of the first close-up shots of NVIDIA's new flagship graphics card, the GeForce GTX TITAN-X, outside Jen-Hsun Huang's Rafiki moment at a GDC presentation. If we were to hazard an educated guess, NVIDIA probably coined the name "TITAN-X" because it sounds like "Titan Next," much like it chose "TITAN-Z" because it sounds like "Titans" (plural, since it's a dual-GPU card). Laid flat on a table, the card features a matte-black reference cooling solution that looks identical to the one on the original TITAN. Other cosmetic changes include a green glow inside the fan intake, the TITAN logo, and of course, the green glow of the GeForce GTX marking on the top.

The card lacks a backplate, giving us a peek at its memory chips. The card features 12 GB of GDDR5 memory, and looking at the twelve memory chips on the back of the PCB, with no other traces, we reckon the chip features a 384-bit wide memory interface. The 12 GB is achieved using twenty-four 4 Gb chips. The card draws power from a combination of 8-pin and 6-pin power connectors. The display I/O is identical to that of the GTX 980, with three DisplayPorts, one HDMI, and one DVI. Built on the 28 nm GM200 silicon, the GTX TITAN-X is rumored to feature 3,072 CUDA cores. NVIDIA's CEO claimed that the card will be faster than even the company's previous-generation dual-GPU flagship, the GeForce GTX TITAN-Z.
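The capacity and bus-width inference above can be sanity-checked with quick arithmetic (chip count and density from the paragraph; in clamshell mode, two chips share one 32-bit channel):

```python
chips = 24            # twelve visible on the back, twelve more on the front
density_gbit = 4      # 4 Gb per GDDR5 chip
capacity_gb = chips * density_gbit / 8   # 24 x 4 Gb = 96 Gb = 12.0 GB

# Clamshell mode: pairs of chips share a 32-bit channel, so 24 chips -> 12 channels.
bus_width_bits = (chips // 2) * 32       # -> 384-bit bus
print(capacity_gb, bus_width_bits)  # 12.0 384
```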

NVIDIA Unveils the GeForce GTX TITAN-X

NVIDIA surprised everyone at its GDC 2015 event by unveiling its flagship graphics card based on the "Maxwell" architecture, the GeForce GTX TITAN-X. Although the unveiling was no formal product launch and didn't come with a disclosure of specs, between a look at the card itself and a claim by no less than NVIDIA CEO Jen-Hsun Huang that the card will be faster than the current-gen dual-GPU GTX TITAN-Z, there are some highly plausible rumors about its specs doing the rounds.

The GTX TITAN-X is a single-GPU graphics card, expected to be based on the company's GM200 silicon. This chip is rumored to feature 3,072 CUDA cores based on the "Maxwell" architecture, and a 384-bit wide GDDR5 memory interface holding 12 GB of memory. NVIDIA is likely taking advantage of new 8 Gb GDDR5 chips; even otherwise, achieving 12 GB using 4 Gb chips isn't impossible. The card itself looks nearly identical to the GTX TITAN Black, with its nickel-alloy cooler shroud, save for two differences: the "TITAN" marking towards the front of the card glows white, while the fan is decked with green lights, in addition to the green glowing "GeForce GTX" logo on the top. You get to control the lighting via GeForce Experience. NVIDIA plans to run more demos of the card throughout the week.

Gigabyte Unveils P37X 17.3-inch Gaming Notebook

GIGABYTE announced the P37X, a 17.3" slim gaming laptop for anyone who seeks uncompromised gaming performance and multimedia fun on the move. Boosted by top-of-the-line GTX 980M graphics, the P37X is capable of fantastic gameplay and a splendid P10000+ in 3DMark 11. Measuring just 22.5 mm thin and weighing 2.8 kg, this beast boasts a slender posture despite its dominating performance. GIGABYTE's exclusive Macro Hub turns personalized macro recording into a matter of seconds and takes the gaming experience to another level. The P37X equips hardcore gamers with full-spectrum features for an uncompromised gaming experience without the heft synonymous with desktop systems. Users looking for an uncompromised mobile platform with a larger screen will find the P37X a dream all-rounder.

The P37X is powered by a quad-core Intel Core i7 processor paired with an NVIDIA GeForce GTX 980M, a punchy graphics combination that delivers a stunning P10000+ score in 3DMark 11, yet stays efficient when battery life and running time are concerns. The long-awaited Maxwell architecture brings multiple visual innovations, including DSR, MFAA and VXGI, for more realistic, stunning images, giving users a solid edge when running the most resource-crunching games at high settings, and giving graphics professionals processing resources comparable to desktop systems. Storage can be configured with two rapid 512 GB mSATA SSDs and dual 2 TB hard drives, blending speed and capacity into an incredibly thin and lightweight chassis.

NVIDIA Launches the Tesla K40 GPU Accelerator

NVIDIA today unveiled the NVIDIA Tesla K40 GPU accelerator, the highest-performance accelerator ever built, delivering extreme performance to a widening range of scientific, engineering, high-performance computing (HPC) and enterprise applications.

Providing double the memory and up to 40 percent higher performance than its predecessor, the Tesla K20X GPU accelerator, and 10 times higher performance than today's fastest CPU, the Tesla K40 GPU is the world's first and highest-performance accelerator optimized for big data analytics and large-scale scientific workloads.

AMD Announces First "Supercomputing" Server Graphics Card With 12 GB Memory

AMD today announced the new AMD FirePro S10000 12 GB Edition graphics card, designed for big-data high-performance computing (HPC) workloads demanding both single-precision and double-precision performance. With full support for PCI Express 3.0, and optimized for use with the OpenCL compute programming language, the AMD FirePro S10000 12 GB Edition features ECC memory plus DirectGMA support, allowing developers working with large models and assemblies to take advantage of the massively parallel processing capabilities of AMD GPUs based on the latest AMD Graphics Core Next (GCN) architecture. The AMD FirePro S10000 12 GB Edition is slated for availability in Spring 2014.

Apple Updates MacBook Air and Current Generation MacBook Pro with New Hardware

Apple today updated MacBook Air with the latest Intel Core processors, faster graphics and flash storage that is up to twice as fast as the previous generation.* MacBook Air is the ultimate everyday notebook, and with new lower prices it is more affordable than ever. The current generation 13-inch and 15-inch MacBook Pro have also been updated with the latest Intel Core processors and powerful discrete graphics from NVIDIA. Apple's popular AirPort Express has been redesigned to include features previously available only in AirPort Extreme.

"Today we've updated the entire MacBook line with faster processors, graphics, memory, flash storage and USB 3 connectivity," said Philip Schiller, Apple's senior vice president of Worldwide Marketing. "We've made the world's best portable family even better and we think users are going to love the performance advances in both the MacBook Air and MacBook Pro."

Kingston Intros New DDR3-1600 12 GB Memory Kit

Kingston is out with its newest 12 GB triple-channel DDR3 memory kit. Unlike its last 12 GB kit, which consisted of three 4 GB modules, the new kit (KHX1600C9D3K6/12GX) consists of six modules. Also noteworthy is the price: the new kit costs a fraction of the old one, at around 350 Euro versus over 1,000 Euro.

The six 2 GB modules are cooled by the classic Kingston HyperX heatspreaders. They are rated to operate at 1600 MHz (PC3-12800) with DRAM timings of 9-9-9-27 at a module voltage of 1.65 V, and come with Intel XMP support, making them just about ideal for socket LGA-1366 desktops and, importantly, 2-socket workstations.

Patriot Memory Announces the DDR3 12 GB Tri-Channel Viper Kit

Patriot Memory, a global provider of premium quality memory modules and flash memory solutions, today announced their DDR3 12 GB Tri-Channel Viper kit.

Designed specifically for the Intel Core i7 processor/Intel X58 Express chipset, the Patriot Viper 12 GB kit has been built to excel with the Core i7's triple-channel technology. The Patriot Viper 12 GB kits are hand-tested extensively on X58 motherboards to ensure maximum compatibility and supreme quality. The kits will be offered at 1333 MHz, in both CL7 and CL9 versions. "With all memory slots utilized in all three channels, the Core i7 brings the effectiveness of a server to mainstream desktops, and for a fraction of the price," says Benny Chea, Patriot Memory's Applications Engineer. Mr. Chea went on to say, "Multi-application users will appreciate the increased bandwidth."
