Thursday, May 19th 2022
NVIDIA GeForce GTX 1630 Launching May 31st with 512 CUDA Cores & 4 GB GDDR6
The NVIDIA GeForce GTX 1630 graphics card is set to be launched on May 31st, according to a recent report from VideoCardz. The GTX 1630 is a cut-down derivative of the GTX 1650, featuring a 12 nm Turing TU117-150 GPU with 512 CUDA cores and 4 GB of GDDR6 memory on a 64-bit memory bus. This is a reduction from the 896 CUDA cores and 128-bit memory bus found in the GTX 1650; however, clock speeds are up, with a boost clock of 1800 MHz at a TDP of 75 W. This memory configuration results in a maximum theoretical bandwidth of 96 GB/s, exactly half of what is available on the GDDR6 GTX 1650. The NVIDIA GeForce GTX 1630 may be announced during NVIDIA's Computex keynote next week.
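The 96 GB/s figure follows directly from the reported bus width and memory speed. A quick sketch of the arithmetic, assuming the 12 Gbps per-pin GDDR6 data rate from the VideoCardz report:

```python
# Theoretical memory bandwidth = (bus width in bytes) x (per-pin data rate)
# Assumed inputs: 64-bit bus, 12 Gbps GDDR6 (per the VideoCardz report)
bus_width_bits = 64
data_rate_gbps = 12  # effective per-pin data rate

gtx1630_gb_s = bus_width_bits / 8 * data_rate_gbps
print(gtx1630_gb_s)  # 96.0 GB/s

# GDDR6 GTX 1650 for comparison: 128-bit bus at the same data rate
gtx1650_gb_s = 128 / 8 * data_rate_gbps
print(gtx1650_gb_s)  # 192.0 GB/s -- exactly double, as the article states
```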
Source:
VideoCardz
The GTX 1050 has 32 ROPs, while the GTX 1630 has 16.
The GTX 1050 has 40 TMUs, while the GTX 1630 has 32.
The GTX 1050 has 640 shaders, while the GTX 1630 has 512.
However, the GTX 1630 has 4 GB of VRAM, while the non-Ti GTX 1050 has only 2 GB.
In my case I'm very interested in Arc because I don't want to give more money to NVIDIA or AMD; my use is only basic things like video playback, light gaming, and emulators.
But the GTX 1630's price could give a better idea of what the Intel Arc A310 might cost. As you said, drivers can be trouble, especially for Windows users, but I use Linux most of the time, and Linux drivers are usually better than Windows drivers.
As others have said, these companies make huge profits; it's hard to believe they can't offer products at $100 or lower. They have no excuse not to offer $100 products, because they took huge advantage of the market during the mining boom, when many of us couldn't buy any GPU.
:)
But the 64-bit memory bus seems like a BS call. It'd be a shame for another card to contain a chip held back by its own memory bus. Looking at you, 6500 XT.
For example, the ROP count is indeed double, but the GTX 1050 design is bandwidth limited, so if you check the pixel fillrate with an old 3DMark or Beyond3D test or whatever, it won't be anywhere near the theoretical difference; Turing also has better compression.
Or take into account the shaders: although the GTX 1630 has 20% fewer, it can do concurrent floating-point and integer operations, so the effective IPC is not the same.
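The theoretical-fillrate gap the comment refers to is easy to put a number on. A hedged sketch, assuming the GTX 1050's official 1455 MHz boost clock and the GTX 1630's reported 1800 MHz (actual sustained clocks and bandwidth limits will pull the real-world numbers closer together):

```python
# Theoretical pixel fillrate = ROPs x boost clock
# Assumed clocks: 1455 MHz (GTX 1050 official boost), 1800 MHz (GTX 1630, per report)
gtx1050_gpixel_s = 32 * 1455 / 1000  # 32 ROPs
gtx1630_gpixel_s = 16 * 1800 / 1000  # 16 ROPs

print(round(gtx1050_gpixel_s, 2))  # 46.56 GPixel/s
print(round(gtx1630_gpixel_s, 2))  # 28.8 GPixel/s
```

On paper that is a ~1.6x advantage for the GTX 1050, but as the comment notes, its bandwidth-limited design and Pascal's weaker compression mean measured fillrate falls well short of the theoretical figure.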
Videocardz reported that the memory is 12 Gbps GDDR6, and TPU also reported GDDR6, while the database mentions 8 Gbps GDDR5, which is probably wrong.
Since the GTX 1650 officially had only a 1665 MHz boost frequency and the 1630's is much higher at 1800 MHz, and since the TDP according to the report is the same 75 W despite the 1630 being so extensively cut down, I assumed the real median frequency we will see in a similar design will be higher than the 1890 MHz that a reference-clocked dual-fan 1650 can achieve. I went with +3% (1950 MHz); in that case it will be at worst 89% of a GTX 1050 Ti.
1950 MHz is a little on the high side, so it's not my preferred frequency prediction, but even if the actual median frequency ends up the same as the GTX 1650's, it will be at worst 86% of a GTX 1050 Ti.
In any case, it will be a lot faster than the 1050 2 GB in today's TPU games setup. Even in 2016, when 2 GB of memory wasn't as much of a handicap as in 2022, the GTX 1050 was only 79-80% of the GTX 1050 Ti at 1080p high, so despite the on-paper advantages, the GTX 1050 will be slower than the 1630 imo.
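The clock prediction above is just a small scaling step. A sketch of the commenter's own arithmetic (these are the commenter's assumptions, not official specs):

```python
# The commenter's frequency reasoning, made explicit:
gtx1650_real_median = 1890  # MHz, observed on a reference-clocked dual-fan GTX 1650

# +3% over the 1650's real-world median gives the prediction
predicted_1630 = round(gtx1650_real_median * 1.03)
print(predicted_1630)  # 1947 MHz, which the commenter rounds up to 1950 MHz
```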
If you ask me, I would give Intel a chance if the pricing is competitive (despite the fact that I have many older DX11 games I still haven't had the chance to play, so I don't think I would have a good experience in some of them); the reason is that I want to support them, because we need a third player imo.
Looking forward to seeing how this card performs and is priced!