Last (the only?) time we saw Vega. Note that just beating the 1080 (314 mm²) would be laughable for a nearly 500 mm² chip; it would be a Sony-level epic fail. The thing must take on the 1080 Ti at least:
So we know equipping cards with HBM is expensive, mkay, and that's why AMD even bothered with the whole "high bandwidth cache" thing in the first place.
At least in theory it sounds quite feasible: "dear developer, we know you don't need all that memory at once; just allocate whatever you need and we'll handle moving things into GPU memory ourselves. Oh, and by the way, we don't call it VRAM anymore, we call it high bandwidth cache now".
The basic idea here is that, especially in the professional space, data set size is vastly larger than local storage. So there needs to be a sensible system in place to move that data across various tiers of storage. This may sound like a simple concept, but in fact GPUs do a pretty bad job altogether of handling situations in which a memory request has to go off-package. AMD wants to do a better job here, both in deciding what data needs to actually be on-package, but also in breaking up those requests so that “data management” isn’t just moving around a few very large chunks of data. The latter makes for an especially interesting point, as it could potentially lead to a far more CPU-like process for managing memory, with a focus on pages instead of datasets.
http://www.anandtech.com/show/11002/the-amd-vega-gpu-architecture-teaser/3
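For anyone wondering what that pitch looks like from the developer side: the closest thing you can actually type today is managed/unified memory, where you make one big allocation and the driver migrates pages to the GPU on demand. Here's a minimal sketch using HIP's managed-memory API. To be clear, this is just the generic demand-paging model, not AMD's actual HBCC driver logic; the kernel, buffer size, and device index are made up for illustration, and it assumes a GPU/driver combo that allows oversubscribing VRAM.

```cpp
// Sketch: oversubscribe GPU memory with a managed allocation and let the
// driver page data in on demand. Illustrative only, not AMD's HBCC internals.
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void touch(float* data, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;  // first touch faults the page into GPU memory
}

int main() {
    // Deliberately bigger than a 4 GiB card: 6 GiB worth of floats.
    const size_t n = 6ull * 1024 * 1024 * 1024 / sizeof(float);
    float* data = nullptr;

    // One allocation covering the whole data set; pages migrate as needed.
    if (hipMallocManaged(&data, n * sizeof(float)) != hipSuccess) {
        fprintf(stderr, "managed allocation failed\n");
        return 1;
    }

    // Optional hint: prefer keeping these pages resident on device 0
    // instead of bouncing them back and forth.
    hipMemAdvise(data, n * sizeof(float), hipMemAdviseSetPreferredLocation, 0);

    const unsigned block = 256;
    const unsigned grid = (unsigned)((n + block - 1) / block);
    hipLaunchKernelGGL(touch, dim3(grid), dim3(block), 0, 0, data, n);
    hipDeviceSynchronize();

    hipFree(data);
    return 0;
}
```

In the same vein, `hipMemPrefetchAsync` lets you stage a range onto the GPU ahead of time at page granularity, which is roughly the "pages instead of datasets" idea from the quote above.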
Some games don't use >4 GiB of VRAM, but some do, and with every passing year the latter group gets bigger.