> Probably for the same reason I'm considering a 2060: because it's the cheapest way to get tensor cores. So there's a considerable boost over the 1660 Ti, and it makes learning/prototyping for tensor cores possible.

> Eh no? Plenty of researchers use the RTX 2060 for CUDA- or TensorFlow-accelerated work.
I didn't say 2060 makes no sense for DL. I said running it non-stop doesn't seem very optimal.
Also, the 2060 is relatively easy to get in a consumer laptop that doesn't look like a Christmas tree. For more performance you usually have to choose between an RGB box and a more expensive Quadro-powered workstation laptop.
If you're interested in cloud options, you can get a T4 instance cheaper than from the top three providers. Also, you get a discount if you pay for long-term use.

> Amazon cloud computing would cost me $8k a year (they charge $1 an hour for a regular GPU server, and about $800 for a quad-core CPU server).
But yeah, I was thinking of a more powerful GPU (which you have), because usually you'd be concerned about short delivery time, and training non-stop didn't sound right. Unless, of course, it's some specific case: say, an image-recognition model that you constantly feed with new data.
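A quick sanity check on the $8k/year figure quoted above, a minimal sketch using only the rates mentioned in the thread (these are the thread's numbers, not current AWS prices):

```python
# Sanity-check the "$8k a year" claim: a $1/hour GPU instance running non-stop.
# Rates are the figures quoted in the thread, not current AWS pricing.
HOURS_PER_YEAR = 24 * 365          # 8,760 hours

gpu_rate_per_hour = 1.00           # "$1 an hour for a regular GPU server"
annual_gpu_cost = gpu_rate_per_hour * HOURS_PER_YEAR

print(f"24/7 GPU instance: ${annual_gpu_cost:,.0f}/year")  # ~ $8,760/year
```

So "$8k a year" is simply a single GPU instance billed around the clock, rounded down.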
When using the cloud, I'm also paying for other things: regular hardware replacement, maintenance, etc. I don't have to build the PC or spend time on upkeep (configuration, fixing issues).
And the cloud really is non-stop, whereas your system surely has some gaps for crashes, fixes, software updates, etc. So the actual cost of on-premise "computing time" is always higher than it seems.
Also, let's say I don't know how to build a PC, administer an OS, or fix issues. I'd have to learn all that, which is a cost as well. And it goes on and on.
In the end, cloud is a sensible option for many of us (but not all, so maybe not for you). Just like eating out, getting a haircut and having the car fixed at a service station.
But most importantly...
... we don't use the cloud because it's faster or cheaper. We use it because of the scalability and elasticity.

> A similar system at my home (minus the initial purchase price) runs for less than $500 a year ($200 for the CPU).
> The initial purchase price of the CPU server is about $750-800; the GPU system costs about $2.5k-3.5k depending on how many GPUs you run and how cheaply you can get them.
> Both these systems will be heaps faster than Amazon (or Google).
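Plugging the thread's own figures into a rough annual-cost comparison (the 3-year amortization period is my assumption, not something stated in the thread):

```python
# Rough on-premise vs cloud comparison using the thread's own figures.
# The 3-year amortization period is an ASSUMPTION, not from the thread.
cloud_gpu_per_year = 1.00 * 24 * 365      # $1/hour, non-stop -> $8,760/year
home_running_per_year = 500.0             # "runs for less than $500 a year"
gpu_purchase_price = 3000.0               # midpoint of the $2.5k-3.5k estimate
amortization_years = 3                    # assumed useful lifetime

home_per_year = home_running_per_year + gpu_purchase_price / amortization_years
print(f"Cloud: ${cloud_gpu_per_year:,.0f}/yr, home: ${home_per_year:,.0f}/yr")
```

On these numbers a 24/7 home box comes out several times cheaper per year than the equivalent always-on cloud instance, which is the crux of the disagreement above.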
If you compare 1 GPU running at home with the same 1 GPU running on AWS, AWS will be way more expensive.
However, I can get 8 GPUs on AWS and get a result tomorrow - a week earlier than you. That's the competitive advantage.
Sure, if you're doing it as a hobby, it may not make that much sense. But it absolutely makes a difference for an enterprise (or even for students who want better results for projects/theses).
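The elasticity argument above can be sketched numerically: at a flat per-GPU-hour rate, spreading a job across more GPUs leaves the bill roughly unchanged but shrinks the wall-clock time. The one-week job size and the perfect linear scaling are illustrative assumptions:

```python
# Elasticity sketch: same total GPU-hours (so roughly the same bill at a
# flat $1/GPU-hour), but more GPUs finish the job sooner.
rate_per_gpu_hour = 1.00        # thread's quoted rate
job_gpu_hours = 7 * 24          # hypothetical job: one week on a single GPU

for n_gpus in (1, 8):
    wall_clock_hours = job_gpu_hours / n_gpus   # assumes ideal linear scaling
    cost = job_gpu_hours * rate_per_gpu_hour    # unchanged by parallelism
    print(f"{n_gpus} GPU(s): {wall_clock_hours:.0f} h wall-clock, ${cost:.0f}")
```

Real jobs scale sub-linearly, so the 8-GPU run would cost somewhat more and finish somewhat later than these ideal figures, but the week-vs-a-day difference is the point being made.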
Oh, so you're building these machines for folding (exclusively, or just during idle periods?). That's a waste.

> Made it to the top 20 contributors on FAH, and top 8 on BOINC. That's out of 2-4M clients.
Yeah... that's the DL OT for this thread. Let's leave it like that.