# 2600X vs 2700X - Machine Learning Dilemma



## SIGSEGV (Sep 19, 2018)

Right now I'm living temporarily in the Netherlands to continue my studies, far away from my computer lab in Indonesia.
I want to build a new PC for my thesis on machine learning/deep learning.
As the title says: is it better to pick up a 2600X with a 1080 Ti (second hand) or a 2700X with a 1080 (also second hand)?

thanks


----------



## 27MaD (Sep 19, 2018)

2700X with 1080.


----------



## Johan45 (Sep 19, 2018)

What are you going to do with this PC? Is it just for doing reports and gaming?


----------



## kastriot (Sep 19, 2018)

Well, for deep learning more cores help, so the 2700X, or if you have the money go for a Threadripper with 12+ cores.


----------



## bug (Sep 19, 2018)

If you're building it for deep learning, you're going to be using mainly the GPU, so you may want to change the thread title. 
Depending on what you need to do with the GPU, you may be better served by an AMD video card instead.


----------



## Vya Domus (Sep 19, 2018)

1080 Ti and 2600X. Pretty much all deep learning/ML platforms support CUDA/OpenCL, so no, you don't really need more CPU cores, given that the 1080 Ti is a decent 30% faster than the 1080. Or you could pick a 1700/1700X, since they are quite cheap; that way you get your 1080 Ti and something with 8 cores. As a matter of fact, that's what I would do.

Although for GPGPU I would personally pick an AMD card.
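For what it's worth, the framework-support point is easy to verify on any machine: the major frameworks expose a one-line check for whether CUDA is usable. A minimal sketch, assuming PyTorch (the same idea exists in TensorFlow); it falls back to CPU if the library or a GPU is missing:

```python
# Check which device a PyTorch workload would actually run on.
# Degrades gracefully if PyTorch itself isn't installed.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    device = "cpu"  # no framework installed, so training would be CPU-bound anyway

print(f"training would run on: {device}")
```

If this prints `cuda`, the heavy lifting lands on the GPU and the extra CPU cores of a 2700X buy you very little for training itself.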


----------



## infrared (Sep 19, 2018)

Would the RTX2080 be a better choice with the Tensor cores?


----------



## SIGSEGV (Sep 19, 2018)

Johan45 said:


> What are you going to do with this PC? Is it just for doing reports and gaming?



Developing models, plus a little gaming.



bug said:


> If you're building it for deep learning, you're going to be using mainly the GPU, so you may want to change the thread title.
> Depending on what you need to do with the GPU, you may be better served by an AMD video card instead.



Well, I had the same thought, but there are almost no Vega cards listed in the marketplace.



Vya Domus said:


> 1080 Ti and 2600X. Pretty much all deep learning/ML platforms support CUDA/OpenCL, so no, you don't really need more CPU cores, given that the 1080 Ti is a decent 30% faster than the 1080. Or you could pick a 1700/1700X, since they are quite cheap; that way you get your 1080 Ti and something with 8 cores. As a matter of fact, that's what I would do.
> 
> Although for GPGPU I would personally pick an AMD card.



Yes, personally I want to try ROCm (https://rocm.github.io/) with an AMD GPU, but then the miners ruined everything.



infrared said:


> Would the RTX2080 be a better choice with the Tensor cores?



Exactly, but I couldn't afford the price. I'm also going to wait for Vega 20, which is rumoured to take a similar approach to Nvidia's Tensor cores.


----------



## bug (Sep 19, 2018)

So, what exactly do you need for deep learning? OpenCL 1.2? 2.0? CUDA?


----------



## SIGSEGV (Sep 19, 2018)

bug said:


> So, what exactly do you need for deep learning? OpenCL 1.2? 2.0? CUDA?



CUDA for now,
OpenCL in the future.

Damn, it's problematic. /facepalm


----------



## bug (Sep 19, 2018)

If in doubt, you can always dig up benchmarks at phoronix.com. The guy tests CUDA vs OpenCL several times a year, so you're bound to find something interesting in there. For me, the interesting part is that sometimes Nvidia's OpenCL 1.2 beats AMD's OpenCL 2.0, though I don't know whether that matters for your use case.


----------



## Caring1 (Sep 20, 2018)

SIGSEGV said:


> ….I want to build a new PC for my thesis on machine learning/deep learning.
> As the title says: is it better to pick up a 2600X with a 1080 Ti (second hand) or a 2700X with a 1080 (also second hand)?


A 2600X (or the non-X) would do, with the best GPU you can afford within your budget.


----------



## notb (Sep 20, 2018)

SIGSEGV said:


> Right now I'm living temporarily in the Netherlands to continue my studies, far away from my computer lab in Indonesia.
> I want to build a new PC for my thesis on machine learning/deep learning.
> As the title says: is it better to pick up a 2600X with a 1080 Ti (second hand) or a 2700X with a 1080 (also second hand)?
> 
> thanks


Well... the big question is: do you really need the performance?
Because studying ML with a cheap CPU/GPU looks exactly the same as on an expensive one. 

On the other hand, most current ML research is done on HPC clusters (both CPU and GPU).
This means that even if you get an expensive 1080Ti, you'll be years behind recent stuff in complexity/precision and just ~5x faster than a humble 1050Ti.
I'm totally happy with my 1050 for studying, researching and prototyping stuff for work. I only miss more RAM. :/

That 5x speed boost would also make sense if you're into competitive ML (like Kaggle).

There's a good chance your university can provide an HPC environment (especially if they're doing ML research). Quite a few larger research universities have clusters in the Top500 or not far behind. Getting access to them is another story, but if you ask long enough...

As for the actual choice... It depends on what you're doing or planning to do.
Some ML software uses CPUs, some GPUs. Some are single-threaded, some multi-threaded.

If you're going for typical deep learning problems, with standard tools available (e.g. Tensorflow), you'll be doing GPGPU mostly.

If you're after more specific, analytical stuff, or you plan to write low-level code yourself, you might be more CPU-dependent. The CPU is always more flexible and easier to code for.
The big _achtung_ here is that a lot of this stuff is sequential, so take care of single-threaded oomph while shopping. This will become obvious if you decide to use R, but Python isn't much better.
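To illustrate the sequential-bottleneck point, here is a toy sketch (pure Python; `transform` is a hypothetical stand-in for per-row feature engineering): a plain Python loop is pinned to one core no matter how many you bought, and spreading it over worker processes is the usual escape hatch.

```python
# Toy illustration: classical per-row preprocessing is often plain,
# sequential Python, so it is limited by single-core speed.
import math
from multiprocessing import Pool

def transform(x):
    # hypothetical stand-in for per-row feature engineering
    return math.sqrt(x) * math.log1p(x)

data = list(range(1, 10_001))

# single-threaded path: one core does all the work
serial = [transform(x) for x in data]

if __name__ == "__main__":
    # explicitly spreading rows over 4 worker processes is the usual fix
    with Pool(4) as pool:
        parallel = pool.map(transform, data)
    assert parallel == serial  # same result, just computed on more cores
```

Unless you reach for `multiprocessing` (or vectorized libraries like NumPy), the loop above gains nothing from a 2700X's extra cores, which is why single-threaded clocks matter here.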

Have fun. ;-)


Vya Domus said:


> Although for GPGPU I would personally pick an AMD card.


I'm sure you would ;-), but leading ML software is built around CUDA, and Nvidia is the only sensible choice.
You can make it work with OpenCL, but it's very difficult (with a lot of compiling along the way) and the performance is so-so.


infrared said:


> Would the RTX2080 be a better choice with the Tensor cores?


Tensor cores are a novelty and a fantastic field of study (really cutting-edge), but obviously not much software uses them today.
So if the OP is into new ML methods or writing software etc., Tensor cores are what everyone will be talking about over the next few years (it's already happening and will get a boost thanks to the newborn Tensor accelerators).
But for actual ML work (when you want to research using ML rather than research ML itself), Tensor cores are simply way too new.


----------



## SIGSEGV (Sep 20, 2018)

notb said:


> Well... the big question is: do you really need the *performance*?
> Because studying ML with a cheap CPU/GPU looks exactly the same as on an expensive one.
> 
> On the other hand, most current ML research is done on HPC clusters (both CPU and GPU).
> ...



thanks
I have decided to go with Ryzen because its single-threaded performance has improved significantly, but I will certainly take full advantage of the multithreaded performance later on, for instance for rendering and modelling apart from ML/DL.


----------



## bug (Sep 21, 2018)

Sure enough, Vega still fails to run several compute benchmarks with the latest ROCm:
https://www.phoronix.com/scan.php?page=article&item=nvidia-rtx2080ti-compute&num=1


----------



## notb (Sep 21, 2018)

bug said:


> Sure enough, Vega still fails to run several compute benchmarks with the latest ROCm:
> https://www.phoronix.com/scan.php?page=article&item=nvidia-rtx2080ti-compute&num=1


Oh, come on with this Vega nonsense. Vega 64 is a gaming card; it's not built for computation.
Even AMD doesn't care about it. Check the Vega website: not much happening there. Vega FE is still marketed as a solution for the great business of cryptocurrency mining. :-D

Moving on to more interesting stuff: the 2080 Ti. O-M-G. :-o
It's much better at computation than at gaming. And that's even before they run something optimized for the Tensor and RT cores.
Looking at these figures, it's pretty obvious that for certain types of problems the 2070 will at least match the 1080 Ti. That will surely be the case in Tensor-optimized ML.

@SIGSEGV if you can delay shopping by a few weeks (at least the GPU), I'd suggest waiting for 2070 reviews. By that time sites like Phoronix should have some specific ML benchmarks ready.


----------



## SIGSEGV (Sep 21, 2018)

notb said:


> @SIGSEGV if you can delay shopping by a few weeks (at least the GPU), I'd suggest waiting for 2070 reviews. By that time sites like Phoronix should have some specific ML benchmarks ready.



Well, I'll also wait until Vega 20 comes out, but for now I'm going with the 1080 Ti. I managed to snag a second-hand GTX 1080 Ti for 520 EUR, which is a pretty good deal for me. Moreover, my thesis has already started, and I don't think I have much time to wait.


----------



## bug (Sep 21, 2018)

notb said:


> Oh, come on with this Vega nonsense. Vega 64 is a gaming card; it's not built for computation.
> Even AMD doesn't care about it. Check the Vega website: not much happening there. Vega FE is still marketed as a solution for the great business of cryptocurrency mining. :-D



What do you mean, "it's not built for computation"? It's got as much compute power as a Titan Xp (more, if you need double precision) and it's got its ID in the drivers. What else does it need?


----------



## notb (Sep 22, 2018)

bug said:


> What do you mean, "it's not built for computation"? It's got as much compute power as a Titan Xp (more, if you need double precision) and it's got its ID in the drivers. What else does it need?


You see... this is exactly what people don't understand. You assume people doing scientific or engineering computation just buy the GPU that offers the most raw compute power. 
Clearly, this is not the place for such a discussion. Let's hope the OP doesn't make such mistakes and has a lot of fun in his ML career.


----------



## bug (Sep 22, 2018)

notb said:


> You see... this is exactly what people don't understand. You assume people doing scientific or engineering computation just buy the GPU that offers the most raw compute power.
> Clearly, this is not the place for such a discussion. Let's hope the OP doesn't make such mistakes and has a lot of fun in his ML career.


Seriously, that's your defense for drivers failing to run software?
A lot of more casual software (e.g. photo and video editors, even LibreOffice) is gaining compute abilities. It won't work with a botched driver.

And don't take this so personally; it was just an observation, since the OP said he was considering AMD+ROCm. You can still use AMD with the proprietary driver, which is supposedly better at compute.


----------

