
Two 16-core AMD Threadripper Parts Listed Online

Which is what the delegate and event are for. Any class listening to that event gets the data contained in the delegate. In this case, the delegate carried the value (6) and the time (ticks). When the event is raised, the delegate method is called by the issuing thread and carried out (printed to console). In cases where cross-thread references are a problem, the issuing thread invokes the owning thread: the owning thread executes the method while the issuing thread continues its tasks (usually requesting more work from the main thread).
My point is that your example in #93 isn't showing that. You're building a string with the timing inside the new thread, before the thread has completed and before the output has even been printed to the screen. That's not measuring any of the time it takes to get the result out to the outside world.
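Roughly, the shape I'm describing looks like this (a sketch with made-up names like Worker, ResultReady, and ResultEventArgs - not the actual #93 code):
C#:
using System;
using System.Threading;

// Hypothetical event args carrying the value and the tick count.
class ResultEventArgs : EventArgs
{
    public int Value { get; }
    public long Ticks { get; }
    public ResultEventArgs(int value, long ticks) { Value = value; Ticks = ticks; }
}

class Worker
{
    // Any class listening to this event receives the data via the delegate call.
    public event EventHandler<ResultEventArgs> ResultReady;

    public void Start()
    {
        new Thread(() =>
        {
            int sum = 1 + 2 + 3;
            // Raised on the issuing (worker) thread; subscribed handlers run here.
            ResultReady?.Invoke(this, new ResultEventArgs(sum, DateTime.Now.Ticks));
        }).Start();
    }
}

class Program
{
    static void Main()
    {
        var worker = new Worker();
        // A console listener; a UI listener would call control.Invoke(...) here
        // to hop back onto the owning thread while the worker carries on.
        worker.ResultReady += (s, e) =>
            Console.WriteLine($"Value {e.Value} at {e.Ticks} ticks");
        worker.Start();
        Console.ReadLine(); // keep the process alive until the event fires
    }
}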

I also get a perverse pleasure in seeing how much code it takes to implement this in other languages. :laugh:
Code:
(require '[clojure.core.async :refer [<!! thread]])
(time (<!! (thread (+ 1 2 3))))
 
The point of async is to delegate complex tasks to other processors, which increases aggregate performance. The ~1 ms it takes to start a thread is utterly unimportant, because code isn't worth doing async unless it takes seconds, if not minutes, to execute. Case in point: #93, as proven by #98, showed that the bulk of the execution time was inside the DateTime class. The 1+2+3, the creation of a new thread, and displaying the results are tiny by comparison. In other words, your little experiment proves nothing other than the obvious (creating a worker thread takes longer than keeping the work single-threaded when that work doesn't benefit from multithreading).
 
My point was to show that using threads incurs an overhead cost; I think you just made my point for me. See #82. Either way, we're saying the same thing.
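For what it's worth, the overhead is trivial to see with a Stopwatch. A rough C# sketch (not the #93 code; exact numbers will vary by machine):
C#:
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        int inline = 1 + 2 + 3;                          // trivial work, stays on this thread
        sw.Stop();
        Console.WriteLine($"Inline:   {sw.Elapsed.TotalMilliseconds:F4} ms (result {inline})");

        sw.Restart();
        int threaded = Task.Run(() => 1 + 2 + 3).Result; // schedule on a worker thread, block for the result
        sw.Stop();
        Console.WriteLine($"Task.Run: {sw.Elapsed.TotalMilliseconds:F4} ms (result {threaded})");
    }
}
The second number dwarfs the first, which is the whole point: the scheduling cost only disappears into the noise when the work itself takes far longer than starting and synchronizing the thread.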
 
Have a look at this post: AMD Dragged to Court over Core Count on "Bulldozer"

A 20% penalty, not 1700%, and this is a program with a lot of main/worker communication. Cutting back on UI updates would improve performance hugely, but I decided that, as it is, it strikes a good balance between doing work and informing the user.

It is also a program where a worker thread is always spawned to prevent the UI from locking up. Async multithreading is required to give users the expected behavior.
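For the curious, that shape of program looks roughly like this (a WinForms-style sketch with made-up names and numbers, not my actual code): the worker runs off the UI thread and only marshals an update back every so often, which is where the UI-update trade-off lives.
C#:
using System;
using System.Threading;
using System.Windows.Forms;

public class MainForm : Form
{
    private readonly Label status = new Label { Dock = DockStyle.Top };

    public MainForm()
    {
        Controls.Add(status);
    }

    protected override void OnShown(EventArgs e)
    {
        base.OnShown(e);
        // Always spawn a worker so the UI thread never locks up.
        new Thread(DoWork) { IsBackground = true }.Start();
    }

    private void DoWork()
    {
        const int total = 100000;
        for (int i = 1; i <= total; i++)
        {
            // ... the real per-item work would go here ...

            // Throttle UI updates: marshal to the owning (UI) thread only
            // every 1000 items instead of every iteration, trading a little
            // feedback for a lot less main/worker communication.
            if (i % 1000 == 0)
            {
                int done = i;
                Invoke((Action)(() => status.Text = $"Processed {done}/{total}"));
            }
        }
    }

    [STAThread]
    static void Main() => Application.Run(new MainForm());
}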
 
I also get a perverse pleasure in seeing how much code it takes to implement this in other languages. :laugh:
Code:
(require '[clojure.core.async :refer [<!! thread]])
(time (<!! (thread (+ 1 2 3))))

PHP:
<?php

// Requires the pthreads extension (PHP 7, CLI, thread-safe build).
// Note: the class can't be named "thread" because PHP class names are
// case-insensitive and it would collide with pthreads' Thread.
class SumTask extends Thread {

    public $result;

    public function run() {
        // Store the result on the object so the parent
        // context can read it after join().
        $this->result = 1 + 2 + 3;
    }

}

$task = new SumTask();
$task->start();
$task->join();

$sum = $task->result; // 6

PHP 7 using pthreads, a GitHub project. There are also Threaded, Worker, Pool, and other classes for thread interaction and management. It requires CLI execution and a thread-safe build of PHP.

Not that I get a lot of use out of it; I'm more on the web side of things.
 
No PCIe 4.0, no new build for me.
 
The fastest graphics cards can't saturate PCI Express 3.0 x16 as is.
 
There are a few more uses for computers than just games...

IMO the most important aspect of PCIe 4.0 is OCuLink-2 - a solid Thunderbolt 3 alternative. Since it's very late to the game (Thunderbolt 3 is already here and uses the USB Type-C connector), I doubt OCuLink-2 will ever be used for mainstream external PC accessories. However, it could be the next storage interface, because we're already way past SATA 3.0 capabilities.
 
SATA 3.2 supports 16 Gb/s (~2000 MB/s)
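Roughly speaking, that 16 Gb/s is SATA Express riding on two PCIe 3.0 lanes: 2 x 8 GT/s with 128b/130b encoding gives about 2 x 985 MB/s ≈ 1970 MB/s, hence the ~2000 MB/s figure.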
 
Great idea. SATA Express - AFAIK the biggest failure in PC interfaces in the last few years.
Even if SATAe drives actually existed, the connector is so huge it would be almost impossible to put 6-8 of them on a typical motherboard, the way we got used to with SATA 3.

OCuLink-2 will utilize a new connector - Molex NanoPitch - which will actually be usable...
 
Only SSDs really need that much bandwidth and they're mostly going NVMe/M.2. That's PCI Express 3.0 x4.
 
Not true at all, and you miss the point. PCIe/M.2 drives take up a lot of motherboard space. The whole point of SATA is to have a tiny interface for a cable connection.
OCuLink-2 will succeed SAS in servers. It might just as well replace SATA.

Anyway, most SSDs sold connect via SATA, and they usually hover around 600 MB/s - the SATA 6 Gb/s limit. Just changing the interface would improve performance.
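(That ~600 MB/s ceiling is just the 6 Gb/s line rate after 8b/10b encoding: 6 Gb/s x 8/10 = 4.8 Gb/s ≈ 600 MB/s.)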
 
The U.2 connector isn't that huge but a smaller one would be good.
 
Actually it's even bigger than SATAe. :-)
The SATAe connector is slightly shorter than 3x SATA (same width).
2x U.2 are larger than 6x SATA.

And if it's not placed on the board edge, it'll need even more space for the cable, because the connector sits parallel to the PCB. But I assume a perpendicular (upright) version could be made as well.

[Attached image: U.2 and SATA connectors on a motherboard]
 
U.2 is smaller than SATA Express. You can fit approximately three U.2 connectors in the space of two SATA Express.

That picture has no SATA Express slots. Here are two SATA Express ports next to four normal SATA ports:
[Image: SATA Express connectors on a computer motherboard]


M.2 NVMe killed U.2. Good luck finding SATA Express or U.2 devices. Reason: U.2 and SATA Express add latency to the PCI Express lanes; M.2, being hardwired to the board, avoids that. No M.2 slots? It makes more sense to get a PCI Express to M.2 adapter card than to buy something that supports SATA Express or U.2.
 
U.2 is smaller than SATA Express. You can fit approximately three U.2 connectors in the space of two SATA Express. That picture has no SATA Express slots.
Only in one dimension (length along the mobo edge). I was talking 2D (area of the mobo), where U.2 is larger.
The picture in my post has no SATA Express, but it shows a 3x2 SATA cluster. That's enough.
M.2 NVMe killed U.2.
IMO temporarily.
A typical setup today is: SSD for OS+apps and HDD for storage (you're a good example, I assume).
But if we're going to shift to SSD-only PCs, a lot of people will need 4, 6... 10 TB. When will this happen? How big and how expensive will the drives be?
I think people will still want the ability to connect 4+ storage drives - even on a mITX mobo. Where will you pack all these U.2 connectors? :-D

And I just can't believe people will be fine with the fact that their drives can do 1000 MB/s (if not better), but they're limited by a 15-year-old interface.
 