Monday, June 19th 2017
ETH Mining: Lower VRAM GPUs to be Rendered Unprofitable in Time
Hold on to your ETH hats: you will still be able to cash in on the ETH mining craze for a while. However, you should look at your 3 GB and 4 GB graphics cards with slight distrust, for reasons you should know anyway, since you have surely studied your mining cryptocurrency of choice. Examples are the GTX 1060 3 GB, or one of those shiny new 4 GB RX 480 / RX 580 cards that are going for ridiculously premium prices right now. And as a side note, don't you love the mechanisms of pricing and demand?
The problem here stems from ETH's own design for its current PoW (Proof of Work) implementation (which is what allows you to mine the currency at all). In a bid to make ETH mining unwieldy for the specialized silicon that brought Bitcoin difficulty through the roof, ETH has your GPU work through a large data set as you mine, which is stored in your GPU's memory (the DAG, which stands for Directed Acyclic Graph). This is one of the essential differences between Bitcoin mining and Ethereum mining: Ethereum mining was designed to be memory-intensive, so as to prevent the use of ASICs and other specialized hardware. As a side note, this also helps (at least theoretically) with ETH's decentralization, which Bitcoin sees more at risk because of the inherent centralization that results from the higher hardware costs associated with its mining.
In time, as the number of blocks in the blockchain increases (at a rate of roughly one block every 14 seconds), so does Ethereum's epoch level, which determines the DAG size and thus the memory footprint your GPU must dedicate to the calculation. Every 30,000 blocks a new epoch emerges, more costly in memory footprint than the previous one. This means that memory requirements for ETH mining actually increase as time passes. As the workload's memory footprint grows, it can (and will) overflow from the GPU's memory and spill into main system memory, which, as you know, is much slower to access than the GPU's VRAM (this just reminded me of AMD's SSG graphics solutions; a neat almost-solution to this problem, no?). Slower random memory accesses (on which ETH mining is extremely dependent) will result in penalties to your mining hash rate. And you can see where this is going.
We are currently at epoch #129. Based on current estimates for DAG size and memory requirements, mining with an RX 470 4 GB should yield:
- DAG 130 - 27.4 MH/s
- DAG 140 - 25.1 MH/s
- DAG 150 - 22.5 MH/s
- DAG 160 - 20.1 MH/s
- DAG 199 - 10.0 MH/s
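For a sense of the timescales involved, below is a minimal sketch that estimates the epoch number, the approximate DAG size, and roughly when the DAG outgrows a given amount of VRAM. It assumes the commonly cited Ethash parameters (a new epoch every 30,000 blocks, a dataset starting around 1 GiB and growing by roughly 8 MiB per epoch) and the ~14-second block time mentioned above. Treat it as an approximation only; as the table above shows, hash rates on some cards degrade well before the DAG actually exceeds their VRAM.

```python
# Rough estimator for Ethereum's epoch number, approximate DAG size, and the
# point at which the DAG outgrows a given amount of VRAM. The constants are
# the commonly cited Ethash parameters; this is a linear-growth approximation,
# not an exact DAG-size calculation.

EPOCH_LENGTH = 30_000      # blocks per epoch
BLOCK_TIME_S = 14          # average seconds per block (approximate)
DAG_INIT_BYTES = 2**30     # dataset starts around 1 GiB at epoch 0
DAG_GROWTH_BYTES = 2**23   # grows by roughly 8 MiB per epoch


def epoch_of_block(block_number: int) -> int:
    """Epoch number for a given block."""
    return block_number // EPOCH_LENGTH


def approx_dag_gib(epoch: int) -> float:
    """Approximate DAG size in GiB for a given epoch."""
    return (DAG_INIT_BYTES + DAG_GROWTH_BYTES * epoch) / 2**30


def epochs_until_dag_exceeds(vram_gib: float, current_epoch: int) -> int:
    """How many epochs from now until the DAG no longer fits in `vram_gib`."""
    epoch = current_epoch
    while approx_dag_gib(epoch) <= vram_gib:
        epoch += 1
    return epoch - current_epoch


if __name__ == "__main__":
    current_epoch = 129  # epoch cited in the article (June 2017)
    for vram in (3.0, 4.0, 6.0, 8.0):
        epochs_left = epochs_until_dag_exceeds(vram, current_epoch)
        days = epochs_left * EPOCH_LENGTH * BLOCK_TIME_S / 86_400
        print(f"{vram:.0f} GiB card: DAG outgrows VRAM in ~{epochs_left} epochs "
              f"(~{days:.0f} days)")
```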
Now granted, if you know anything about Ethereum, you probably won't even care about this: the transition from PoW to PoS (Proof of Stake) is expected to occur by November 1st of this year. This means that ETH mining will simply cease to be a thing (though this implementation could see some delays, unlikely as that may be). And it lines up nicely with the five-month, 30% computing power decrease estimate above. So maybe you don't have to worry that much about ETH mining ceasing to be profitable in five months' time. But if you are looking to buy into the mining craze and invest in hardware, you should study this market, and this technology, first (and pay attention to this article as well). Likewise, if you have just recently bought into the mining hardware market with one of those exorbitantly-priced RX 400 and RX 500 series cards, do the math (a rough sketch follows below) and be ready to look for alternatives, either in cryptocurrencies or in mining solutions. Don't let yourself get burned just because you want to follow the train.
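As an illustration of that "do the math" advice, here is a minimal back-of-the-envelope payback sketch. Every figure in it (card price, hash rate, revenue per MH/s, power draw, electricity cost) is a hypothetical placeholder rather than a number from this article; substitute your own before drawing any conclusions.

```python
# Back-of-the-envelope payback estimate for a mining card.
# All values below are hypothetical placeholders, not figures from the article.

card_cost_usd = 400.0        # hypothetical purchase price of the GPU
hashrate_mhs = 27.0          # hypothetical ETH hash rate in MH/s
revenue_per_mhs_day = 0.10   # hypothetical USD earned per MH/s per day
power_draw_w = 150.0         # hypothetical power draw while mining, in watts
electricity_usd_kwh = 0.12   # hypothetical electricity price per kWh

daily_revenue = hashrate_mhs * revenue_per_mhs_day
daily_power_cost = power_draw_w / 1000.0 * 24.0 * electricity_usd_kwh
daily_profit = daily_revenue - daily_power_cost

if daily_profit <= 0:
    print("At these numbers, mining never pays the card off.")
else:
    print(f"Daily profit: ${daily_profit:.2f}")
    print(f"Payback time: ~{card_cost_usd / daily_profit:.0f} days")
```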
Sources:
Cryptomining-blog, Bitcointalk, Ethereum @ Stackexchange, r/Ethermining, Bitcoinmagazine.com
28 Comments on ETH Mining: Lower VRAM GPUs to be Rendered Unprofitable in Time
www.cnbc.com/2014/03/14/buffett-blasts-bitcoin-as-mirage-stay-away.html
Looks like 1070s will become a lot more expensive as well.
So the HBM-based Fury/Nano with 512 GB/s of throughput, along with the Titan Xp, will be the last consumer GPUs to be affected by the hash-rate drop problem.
In short, this is actually very good for Fury/Nano, as miners will start to snap up those older cards as well.
With the deals I got cobbling together my rig, I calculated that it would pay for itself in under 2 months, and so the PoS switch was no problem.
That gives me 3 months' worth of VERY nice profit, and then I will just set it to ETC or Siacoin and let the thing have fun for a year!
Anyone buying these cards at $400 is idiotic though, unless they got the rest of the components for free or have some other masterplan.
I never thought I'd be upgrading PC hardware anytime soon, but I'm taking the chance to earn some money with it, after I return my investment.
I hope the market stays stable for you that long, man. RAM speed only has an impact if the GPU memory overflows, I think. OK, this is news to me, but I might be completely off on how Ethereum operates.
Does anyone have some links on this?
There are two issues. One is worker size, where the DAG will increase in size with each new epoch (the number of finished blocks divided into 30,000-block intervals) until it exceeds the physical memory present on the cards (3 GB, 6 GB, 8 GB...). This is happening, and it is intrinsic to the way PoW is designed in Ethereum. It's mathematical.
The other issue, which is being referred to here and is in the article as well, is Polaris' implementation of a TLB cache. The Polaris implementation is small, and is already showing signs of overflow with current DAG sizes. So even though the physical VRAM limit isn't reached yet on 4 GB and 8 GB Polaris cards, the TLB cache is being pushed to its limit with regard to the size of the virtual address space (as well as the abundance of random reads that ETH mining demands from memory and the TLB cache). This results in TLB cache misses, which force it to be refreshed, and that is very costly on Polaris.
Previous AMD GCN generations didn't have a TLB cache, which is why they aren't affected by the TLB cache misses (just Google AMD GCN TLB cache and look at the GCN white paper; it explicitly says GCN does not need a TLB cache). However, these cards will still be hit by the worker size exceeding 4 GB, IF ETH doesn't implement PoS before the DAG grows to that size. If it does, the point is moot: the DAG will never reach 4 GB, and users can mine other coins happily.
Hope this clears some stuff up. There is only so much one can say in an article before the TL;DR drama.
2375 MB currently used. Mh/s about the same.
Also I did not see a drop with the new Epoch on my 1060 3GB cards.