News Posts matching #TU117


NVIDIA GeForce GTX 1650 is Still the Most Popular GPU in the Steam Hardware Survey

The NVIDIA GeForce GTX 1650 was released more than four years ago. Built around the TU117 graphics processor, it features 896 CUDA cores, 56 texture mapping units, and 32 ROPs. NVIDIA pairs the GeForce GTX 1650 with 4 GB of GDDR5 memory, connected over a 128-bit memory interface. Interestingly, according to the latest Steam Hardware Survey results, this GPU remains the most popular choice among gamers. While the exact size of the surveyed population is unknown, it is fair to assume that a large group participates every month. The latest numbers, for June 2023, indicate that the GeForce GTX 1650 is still the number-one GPU, found in 5.50% of surveyed systems. The closest runner-up was the GeForce RTX 3060, at 4.60%.

Other survey data remains largely unchanged, with CPUs mostly running at 2.3 GHz to 2.69 GHz and configured with six cores and twelve threads. Storage also recorded a small bump, with capacities over 1 TB gaining 1.48%, indicating that gamers are buying larger drives as game installs grow.

NVIDIA Launches GeForce GTX 1630 Graphics Card

NVIDIA today launched the GeForce GTX 1630 entry-level graphics card. A successor to the GT 1030, the new GTX 1630 remains an entry-level product despite its roughly $150 MSRP. It is based on the older "Turing" GTX 16-series graphics architecture, which lacks hardware-accelerated ray tracing and even DLSS support, and is carved out of the same 12 nm "TU117" silicon as the GTX 1650 from 2019.

The GTX 1630 enables exactly half of the 16 streaming multiprocessors present on the TU117. The 8 available SMs work out to 512 CUDA cores, 32 TMUs, and 32 ROPs. The standard memory size is 4 GB of GDDR6 across a 64-bit wide memory bus, typically implemented with just two 12 Gbps-rated 16 Gbit GDDR6 chips. The GPU operates at a boost frequency of 1785 MHz. The card lacks hardware-accelerated AV1 decode, and its media features are consistent with the rest of the "Turing" family. At $150, it competes with the Radeon RX 6400 (which can be had for as low as $160) and the Arc A380.
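As a back-of-the-envelope check, the headline CUDA-core count follows directly from Turing's 64 FP32 cores per SM; here is a minimal sketch using the figures from the specifications above:

```python
# Turing SMs each contain 64 FP32 (CUDA) cores.
CORES_PER_SM = 64

tu117_full_sms = 16                  # full TU117 die
gtx1630_sms = tu117_full_sms // 2    # half the die enabled on the GTX 1630

print(gtx1630_sms * CORES_PER_SM)     # 512 CUDA cores
print(tu117_full_sms * CORES_PER_SM)  # 1024 cores on a fully enabled TU117
```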
Catch the TechPowerUp review of the Gainward GTX 1630 Ghost graphics card.

NVIDIA GeForce GTX 1630 Set To Launch Tomorrow

The NVIDIA GeForce GTX 1630 is set to be officially unveiled tomorrow as a successor to the GTX 1050 Ti, with Colorful already listing one such model on its website. The GTX 1630 will be an entry-level card featuring a TU117-150 GPU with 512 CUDA cores running at 1785 MHz, paired with 4 GB of GDDR6 memory on a 64-bit memory bus for a total bandwidth of 96 GB/s. The leaked Colorful GTX 1630 BattleAx features a dual-fan cooling solution, triple display connectors, and an additional 6-pin power input, essentially copying the company's GTX 1650 model. The NVIDIA GeForce GTX 1630 will be available from multiple board partners when it launches tomorrow, and could reportedly retail for around $150 according to some Chinese retailers.

NVIDIA GeForce GTX 1630 Launching May 31st with 512 CUDA Cores & 4 GB GDDR6

The NVIDIA GeForce GTX 1630 graphics card is set to launch on May 31st, according to a recent report from VideoCardz. The GTX 1630 is derived from the GTX 1650, featuring a 12 nm Turing TU117-150 GPU with 512 CUDA cores and 4 GB of GDDR6 memory on a 64-bit memory bus. This is a reduction from the 896 CUDA cores and 128-bit memory bus of the GTX 1650; however, clock speeds increase, with a boost clock of 1800 MHz at a TDP of 75 W. The memory configuration results in a maximum theoretical bandwidth of 96 GB/s, exactly half of what is available on the GDDR6 GTX 1650. The NVIDIA GeForce GTX 1630 may be announced during NVIDIA's Computex keynote next week.
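The 96 GB/s figure follows from the usual peak-bandwidth arithmetic, bus width in bytes multiplied by the per-pin data rate; a quick sketch reproducing the numbers quoted above:

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gbs(64, 12))   # GTX 1630: 96.0 GB/s
print(peak_bandwidth_gbs(128, 12))  # GDDR6 GTX 1650: 192.0 GB/s, exactly double
```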

TechPowerUp GPU-Z v2.44.0 Released

TechPowerUp today released the latest version of TechPowerUp GPU-Z, the handy graphics sub-system information and diagnostic utility for gamers and PC enthusiasts. Version 2.44.0 adds support for several new GPUs, feature updates to Resizable BAR detection, and a handful of other fixes. To begin with, GPU-Z adds support for the NVIDIA GeForce RTX 3050, RTX 3080 12 GB, RTX 3070 Ti Mobile, RTX 3050 Ti Mobile, RTX 2060 12 GB, MX550, and a number of other mobile GPUs from NVIDIA. On the AMD front, you get support for Navi 24: Radeon RX 6500 XT, RX 6400, RX 6300M, RX 6500M, PRO W6300M, PRO W6500M, and PRO W6600M. Support is also added for Intel "Alder Lake" non-K processors, "Alder Lake" mobile processors, and Xeon processors based on "Rocket Lake."

TechPowerUp GPU-Z can now report the exact base-address register (BAR) size when Resizable BAR is enabled. Find it in the Advanced Panel, under Resizable BAR. Detection of Resizable BAR has been improved, and LHR detection on certain RTX 3060 cards has been improved to weed out misreporting. Vendor detection was added for Vastarmor. The internal screenshot-hosting utility now uploads screenshots over HTTPS. The 64-bit Windows Vista name now includes a space character, so "Vista 64" instead of just "Vista64." Grab GPU-Z from the link below.

DOWNLOAD: TechPowerUp GPU-Z 2.44.0

NVIDIA GeForce RTX 2050 & MX550 Laptop Graphics Cards Benchmarked

The recently announced NVIDIA GeForce RTX 2050, MX570, and MX550 laptop graphics cards have been benchmarked in 3DMark Time Spy. The RTX 2050 and MX570 both feature the Ampere GA107 GPU with 2048 CUDA cores, paired with 4 GB and 2 GB of 64-bit GDDR6 memory respectively. The MX550 uses the Turing TU117 GPU with 1024 CUDA cores running at 1320 MHz, paired with 2 GB of 64-bit GDDR6 12 Gbps memory. The RTX 2050 and MX570 performed similarly in the Time Spy benchmark, achieving a graphics score of 3369, while the MX550 scored 2510 points. These new laptop graphics cards will officially launch in spring 2022.

GALAX Designs a GeForce GTX 1650 "Ultra" with TU106 Silicon

NVIDIA board partners carving out GeForce RTX 20-series and GTX 16-series SKUs from ASICs they weren't originally based on is becoming more common, but GALAX has taken things a step further. The company just launched a GeForce GTX 1650 (GDDR6) graphics card based on the "TU106" silicon (ASIC code: TU106-125-A1). GALAX carved a GTX 1650 out of this chip by disabling all of its RT cores, all of its tensor cores, and a whopping 61% of its CUDA cores, along with proportionate reductions in TMU and ROP counts. The memory bus width has been halved from 256-bit down to 128-bit.
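The 61% figure checks out against the fully enabled TU106 configuration of 2,304 CUDA cores (as found on the RTX 2070); a quick sanity check:

```python
tu106_full_cores = 2304   # fully enabled TU106 (e.g. RTX 2070)
gtx1650_cores = 896       # CUDA cores left active on this GTX 1650

disabled = tu106_full_cores - gtx1650_cores
print(f"{disabled / tu106_full_cores:.0%} of the CUDA cores disabled")  # ~61%
```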

The card, however, is only listed by GALAX's Chinese regional arm. Its marketing name is "GALAX GeForce GTX 1650 Ultra," with "Ultra" being a GALAX brand extension and not an NVIDIA SKU (i.e. the GPU isn't called "GTX 1650 Ultra"). The clock speeds for this card are identical to those of the original TU117-based GTX 1650 GDDR6: 1410 MHz base, 1590 MHz GPU Boost, and 12 Gbps (GDDR6-effective) memory.

ASUS Intros GeForce GTX 1650 GDDR6 Phoenix Graphics Card with Axial-Tech Fan

ASUS today introduced its GeForce GTX 1650 (GDDR6) Phoenix graphics card (model: PH-GTX1650-O4GD6). This 2-slot thick card is 17.4 cm in length and 12.6 cm in height, designed to fit in most SFF cases. Its cooling solution consists of an aluminium monoblock heatsink that's ventilated by a single 80 mm Axial-Tech fan. Found in some of ASUS's higher end cards, this fan features double-ball bearings, and an impeller with webbed edges, such that all its airflow is guided axially onto the heatsink below (and none laterally).

The GTX 1650 (GDDR6) Phoenix comes with a mild factory overclock of 1605 MHz (vs. the 1590 MHz reference boost for the GTX 1650 GDDR6). The card's 4 GB of memory is untouched at 12 Gbps (GDDR6-effective). The card draws all its power from the PCI-Express slot. Display outputs include one each of dual-link DVI-D, HDMI 2.0b, and DisplayPort 1.4a. Based on the 12 nm "TU117" silicon, the GTX 1650 GDDR6 features 896 "Turing" CUDA cores, 56 TMUs, 32 ROPs, and a 128-bit wide GDDR6 memory interface holding 4 GB of memory. The company didn't reveal pricing, although we expect it to be around $160.

Leaked Benchmark shows Possible NVIDIA MX450 with GDDR6 Memory

A new listing was spotted in the 3DMark results browser for what could be the NVIDIA MX450 laptop GPU. The MX450 is expected to be based on the TU117, the same chip as the GTX 1650, speculates @_rogame. The leaked benchmark shows the MX450 running at a clock speed of 540 MHz with 2 GB of GDDR6 memory. The memory is listed at 2505 MHz, suggesting a potential data rate of 10 Gbps. It is interesting to see the shift to GDDR6 across NVIDIA's product stack, likely due to a GDDR5 shortage or simply because GDDR6 is now cheaper.
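The jump from the listed 2505 MHz to roughly 10 Gbps assumes the benchmark reports a clock that is one quarter of the effective per-pin data rate, a common convention for GDDR6 entries in such databases; a small sketch under that assumption:

```python
reported_clock_mhz = 2505            # memory clock as listed in the 3DMark entry
# Assumption: the listed clock is 1/4 of the effective per-pin GDDR6 data rate.
effective_gbps = reported_clock_mhz * 4 / 1000
print(f"~{effective_gbps:.1f} Gbps effective")  # ~10.0 Gbps
```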

The TU117 GPU found in the GTX 1650 GDDR6 has proven itself to be a solid 1080p gaming option. The chip is manufactured on TSMC's 12 nm process and, in its full configuration, features 1024 shading units, 64 texture mapping units, and 32 ROPs. The MX450 should provide a significant boost over integrated graphics at a TDP of 25 W, and will sit under the GTX 1650 Mobile due to its reduced RAM and power/thermal constraints.

NVIDIA Readies GeForce GTX 1650 SUPER with GDDR6 Memory for Late November

It turns out that the GeForce GTX 1660 Super will be joined by another "Super" SKU from NVIDIA, the GeForce GTX 1650 Super, according to a VideoCardz report. Slated for a November 22 launch, the GTX 1650 Super appears to be NVIDIA's response to the Radeon RX 5500, which is being extensively compared to the current GTX 1650 in AMD's marketing material. While the core configuration of the GTX 1650 Super is unknown, NVIDIA is giving it 4 GB of GDDR6 memory across a 128-bit wide memory interface, with a data rate of 12 Gbps, working out to 192 GB/s of memory bandwidth. In comparison, the GTX 1650 uses 8 Gbps GDDR5 and achieves 128 GB/s of memory bandwidth.

It remains to be seen just how much the improved memory subsystem helps the GTX 1650 Super catch up to the RX 5500, given that a maxed-out TU117 silicon only has 128 more CUDA cores on offer, and AMD is claiming a 37% performance lead over the current GTX 1650 for its RX 5500. One possible way NVIDIA could create the GTX 1650 Super is by tapping into the larger "TU116" silicon, with a third of its memory interface disabled and fewer CUDA cores than the GTX 1660. We'll know more in the run-up to November 22.
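Put into numbers, the memory-subsystem uplift is large while the shader headroom on TU117 is modest; a rough comparison using only the figures quoted above:

```python
# Peak bandwidth = bus width (bytes) x per-pin data rate.
gtx1650_bw = 128 / 8 * 8         # 128-bit, 8 Gbps GDDR5  -> 128 GB/s
gtx1650_super_bw = 128 / 8 * 12  # 128-bit, 12 Gbps GDDR6 -> 192 GB/s
print(f"bandwidth uplift: {gtx1650_super_bw / gtx1650_bw - 1:.0%}")  # +50%

full_tu117_cores, gtx1650_cores = 1024, 896
print(f"extra CUDA cores left on TU117: {full_tu117_cores - gtx1650_cores}")  # 128
```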

ZOTAC Rolls Out a Low-profile GeForce GTX 1650 Graphics Card

ZOTAC rolled out its first low-profile GeForce GTX 1650 graphics card, close to a month after MSI released the very first card of its kind. ZOTAC's 16 cm-long card uses a chunky, 2-slot thick aluminium heatsink to cool the GPU, memory, and a portion of the VRM. The heatsink is ventilated by two 40 mm fans. The card relies on the PCI-Express slot for all its power, and runs at NVIDIA-reference clock speeds: 1665 MHz GPU Boost and 8 Gbps (GDDR5-effective) memory. The card uses 4 GB of memory across a 128-bit wide memory bus. Based on the 12 nm "TU117" silicon, the GTX 1650 packs 896 CUDA cores, 56 TMUs, and 32 ROPs. Display outputs include one each of HDMI, DisplayPort, and DVI-D. The company didn't reveal pricing.

NVIDIA GTX 1650 Lacks Turing NVENC Encoder, Packs Volta's Multimedia Engine

The NVIDIA GeForce GTX 1650 has a significantly watered-down multimedia feature set compared to the other GeForce GTX 16-series GPUs. The card was launched this Tuesday (23 April) without any meaningful technical documentation for reviewers, which caused many, including us, to assume that NVIDIA carried over the "Turing" NVENC encoder, giving you a feature-rich HTPC or streaming card at $150. Apparently that is not the case. According to the full specifications put out on NVIDIA's website product page, which went up hours after the product launch, the GTX 1650 (and the TU117 silicon) features a multimedia engine carried over from the older "Volta" architecture.

Turing's NVENC is known to offer around a 15 percent performance uplift over Volta's, which means the GTX 1650 will have worse game live-streaming performance than expected. The GTX 1650 has sufficient muscle for playing e-sports titles such as PUBG at 1080p, and with an up-to-date hardware encoder it could have pulled droves of amateur streamers into the mainstream on Twitch and YouTube Gaming. Alas, the $220 GTX 1660 would be your ticket to that.

NVIDIA GeForce GTX 1650 Released: TU117, 896 Cores, 4 GB GDDR5, $150

NVIDIA today rolled out the GeForce GTX 1650 graphics card at $149.99. Like its other GeForce GTX 16-series siblings, the GTX 1650 is derived from the "Turing" architecture, but without RTX real-time ray-tracing hardware, such as RT cores or tensor cores. The GTX 1650 is based on the 12 nm "TU117" silicon, which is the smallest implementation of "Turing." Measuring 200 mm² in die area, the TU117 crams in 4.7 billion transistors. The GTX 1650 is configured with 896 CUDA cores, 56 TMUs, 32 ROPs, and a 128-bit wide GDDR5 memory interface holding 4 GB of memory clocked at 8 Gbps (128 GB/s bandwidth). The GPU is clocked at 1485 MHz, with a GPU Boost frequency of 1665 MHz.

At its given price, the GeForce GTX 1650 is positioned competitively against AMD's Radeon RX 570 4 GB. NVIDIA has been surprisingly low-key about this launch, leaving it up to the partners not just to drive the launch, but also to sample reviewers. There are no pre-launch reviewer drivers provided by NVIDIA, and hence we don't have a launch-day review for you yet. We do have GTX 1650 graphics cards in hand, namely the Palit GTX 1650 StormX, MSI GTX 1650 Gaming X, and ASUS ROG GTX 1650 Strix OC.

Update: Catch our reviews of the ASUS ROG Strix GTX 1650 OC and MSI GTX 1650 Gaming X

NVIDIA GeForce GTX 1650 Specifications and Price Revealed

NVIDIA is releasing its most affordable graphics card based on the "Turing" architecture, the GeForce GTX 1650, on the 23rd of April, starting at $149. There doesn't appear to be a reference design (the GTX 1660 series lacked one, too), so this will be a partner-driven launch. Based on NVIDIA's smallest "Turing" silicon, the 12 nm "TU117," the GTX 1650 will pack 896 CUDA cores and 4 GB of GDDR5 memory across a 128-bit wide memory interface.

The GPU is clocked at 1485 MHz with 1665 MHz GPU Boost, and the 8 Gbps memory produces 128 GB/s of memory bandwidth. With a TDP of just 75 Watts, most GTX 1650 cards will lack additional PCIe power inputs, relying entirely on the slot for power. Most entry-level implementations of the GTX 1650 feature very simple aluminium fan-heatsink coolers. VideoCardz compiled a number of leaked pictures of upcoming GTX 1650 graphics cards.

GAINWARD, PALIT GeForce GTX 1650 Pictured, Lack DisplayPort Connectors

In the build-up to NVIDIA's GTX 1650 release, more and more cards are being revealed. GAINWARD's and PALIT's designs won't bring much in the way of interesting PCB differences to peruse, since the PCBs are exactly the same. The GAINWARD Pegasus and the PALIT Storm X differ only in their shroud design, and both cards carry the same TU117 GPU paired with 4 GB of GDDR5 memory.

NVIDIA GeForce GTX 1650 Availability Revealed

NVIDIA is expected to launch its sub-$200 GeForce GTX 1650 graphics card on the 22nd of April, 2019; the card was earlier expected to launch towards the end of April. With it, NVIDIA will introduce the 12 nm "TU117," its smallest GPU based on the "Turing" architecture. The GTX 1650 could replace the current GTX 1060 3 GB, and may compete with AMD offerings in this segment, such as the Radeon RX 570 4 GB, by being Full HD-capable even if it won't let you max out your game settings at that resolution. The card could ship with 4 GB of GDDR5 memory.

NVIDIA GeForce GTX 1650 Details Leak Thanks to EEC Filing

The GeForce GTX 1650 will be NVIDIA's smallest "Turing" based graphics card, and is slated for a late-April launch as NVIDIA waits for inventories of sub-$200 "Pascal" based graphics cards, such as the GTX 1050 series, to be digested by the retail channel. A Eurasian Economic Commission filing revealed many more details of this card, as an MSI Gaming X custom-design board made its way through the regulator. The filing confirms that the GTX 1650 will pack 4 GB of memory, and that the GPU will be based on the new 12 nm "TU117" silicon. This card will likely target e-sports gamers, giving them the ability to max out their online battle-royale titles at 1080p. It will probably compete with AMD's Radeon RX 570.

NVIDIA GeForce GTX 1660 and GTX 1650 Pricing and Availability Revealed

(Update 1: Andreas Schilling, at Hardware Luxx, seems to have obtained confirmation that NVIDIA's GTX 1650 graphics cards will pack 4 GB of GDDR5 memory, and that the GTX 1660 will be offering a 6 GB GDDR5 framebuffer.)

NVIDIA recently launched its GeForce GTX 1660 Ti graphics card at USD $279, which is the most affordable desktop discrete graphics card based on the "Turing" architecture thus far. NVIDIA's GeForce 16-series GPUs are based on 12 nm "Turing" chips, but lack RTX real-time ray-tracing and tensor cores that accelerate AI. The company is making two affordable additions to the GTX 16-series in March and April, according to Taiwan-based PC industry observer DigiTimes.

The GTX 1660 Ti launch will be followed by that of the GeForce GTX 1660 (non-Ti) on 15th March, 2019. This SKU is likely based on the same "TU116" silicon as the GTX 1660 Ti, but with fewer CUDA cores and possibly slower memory or a smaller memory amount. NVIDIA is pricing the GTX 1660 at $229.99, a whole $50 cheaper than the GTX 1660 Ti. That's not all: we recently reported on the GeForce GTX 1650, which could quite possibly become NVIDIA's smallest "Turing" based desktop GPU. This product is real, and is bound for 30th April at $179.99, $50 cheaper still than the GTX 1660. This SKU is expected to be based on the smaller "TU117" silicon. Much like the GTX 1660 Ti, these two launches could be entirely partner-driven, with no reference-design cards.