
Year 2023: 8-Port 2.5GbE Unmanaged Switch - looking for the best

Joined
Dec 24, 2008
Messages
2,062 (0.36/day)
Location
Volos, Greece
System Name ATLAS
Processor Intel Core i7-4770 (4C/8T) Haswell
Motherboard GA-Z87X-UD5H, dual Intel LAN, 10x SATA, 16x power phases
Cooling ProlimaTech Armageddon - Dual GELID 140 Silent PWM
Memory Mushkin Blackline DDR3 2400 997123F 16GB
Video Card(s) MSI GTX1060 OC 6GB (single fan) Micron
Storage WD Raptors 73GB - RAID 1, 10,000 RPM
Display(s) DELL U2311H
Case HEC Compucase CI-6919 Full tower (2003) moded .. hec-group.com.tw
Audio Device(s) Creative X-Fi Music + mods, Audigy front Panel - YAMAHA quad speakers with Sub.
Power Supply HPU-4M780-PE refurbished 23-3-2022
Mouse MS Pro IntelliMouse 16.000 Dpi Pixart Paw 3389
Keyboard Microsoft Wired 600
Software Win 7 Pro x64 ( Retail Box ) for EU
Back in August 2009 and afterwards, 1GbE unmanaged switch technology successfully became available for home and office use.

Nowadays the step-up technology, the 2.5GbE unmanaged switch, has started to become available, beginning around May 2021.
These 2.5GbE products (May 2021 release dates) are what we find available at the beginning of 2023.

My closer look at such 8-port 2.5GbE switch products showed that there is not an endless number of hardware options.
Stats:
Two brands use or share the most successful (made in Taiwan) product design.
Three other brands, with a much longer recorded history in the market, use lower-performance hardware, or hardware with up to 5 ports only.

My first ever switch hub, a 10/100 model from 2003, was retired because it could not handle the 1GbE handshake and could not tell a 1GbE NIC to drop its speed automatically to 100.
In simple English, every NIC on the entire network had to be manually configured or limited to an Ethernet speed of 100 Mbps for the old switch to operate.

In a specific, recently written product review I read something truly smart:
even if your entire LAN has several 1GbE NIC devices, a 2.5GbE switch will help them perform at their best,
because, with double the bandwidth potential, the switch is no longer a LAN performance limiter.
A known high-performance 1GbE hero product is none other than the Intel PRO/1000 GT legacy PCI card.

Personally, I am thinking of activating the Intel teaming functionality on my motherboard: i210-AT + i217-V = 2.0GbE total.
So it is important to select a 2.5GbE unmanaged switch for its better (higher) switching forwarding rate, along with switching capacity and packet buffer.
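Those spec-sheet numbers are related by simple arithmetic; a back-of-the-envelope sketch (assuming the usual vendor conventions: full-duplex switching capacity, and wire-speed forwarding quoted for minimum 64-byte frames plus 20 bytes of preamble and inter-frame gap):

```python
# Rough spec-sheet math for an 8-port 2.5GbE switch.
# Switching capacity is usually quoted full duplex: ports x speed x 2.
# Wire-speed forwarding rate is quoted in Mpps for minimum-size
# (64-byte) frames, each costing 64 + 8 (preamble) + 12 (gap) = 84 bytes.

PORTS = 8
SPEED_BPS = 2.5e9          # 2.5GbE per port
FRAME_OVERHEAD_BYTES = 84  # 64B frame + 20B preamble/inter-frame gap

capacity_gbps = PORTS * SPEED_BPS * 2 / 1e9
forwarding_mpps = PORTS * SPEED_BPS / (FRAME_OVERHEAD_BYTES * 8) / 1e6

print(f"Switching capacity: {capacity_gbps:.0f} Gbps")      # 40 Gbps
print(f"Wire-speed forwarding: {forwarding_mpps:.2f} Mpps")  # 29.76 Mpps
```

A datasheet quoting 40 Gbps capacity and ~29.8 Mpps forwarding is therefore non-blocking at wire speed; anything lower means the switch can be oversubscribed.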

Even in this period, the few complaints I have read about 2.5GbE unmanaged switches were about overheating.
But this might also be a false alarm, because no one actually measured the temperature of the switch (the top side of the metal box).
A more serious complaint about 2.5GbE unmanaged switches: within a period of a month or greater, packet loss was recorded and/or the electronics died.

And now it is time for you to deliver your own real experiences, if any, with 2.5GbE unmanaged switches.
Note: technical issues of the specific series of Intel 2.5GbE NICs (2021~2023) are out of scope for this topic.
 
Joined
Dec 24, 2008
Messages
2,062 (0.36/day)
I cast my vote in favor of TRENDnet's latest TPE-TG380, and after I contacted the brand, TRENDnet agreed to deliver a product sample to me along with a 2.5G PCIe NIC (Realtek).
So in a few weeks I will have personal experience and comparisons at 2.5G bandwidth.

According to a FLUKE Networks certification tool, there is no 1GbE hardware able to deliver 1GbE in full.
The best score for a high-quality 1GbE switch is about 900 Mbit/s,
while cheap switch products did about 450 Mbit/s.

From my first data collected by web search, a good 2.5GbE switch achieved about 2200 Mbit/s at best.
It is all mathematics in the end, so from dual Intel 1GbE NICs in teaming I should expect about 1800 Mbit/s at best.


 

Attachments

  • TPE-TG380_ITTSB.EU-2023.jpg (524.8 KB)
Joined
Jul 10, 2017
Messages
2,671 (1.04/day)
I cast my vote in favor of TRENDnet's latest TPE-TG380, and after I contacted the brand, TRENDnet agreed to deliver a product sample to me along with a 2.5G PCIe NIC (Realtek).
So in a few weeks I will have personal experience and comparisons at 2.5G bandwidth.

According to a FLUKE Networks certification tool, there is no 1GbE hardware able to deliver 1GbE in full.
The best score for a high-quality 1GbE switch is about 900 Mbit/s,
while cheap switch products did about 450 Mbit/s.

From my first data collected by web search, a good 2.5GbE switch achieved about 2200 Mbit/s at best.
It is all mathematics in the end, so from dual Intel 1GbE NICs in teaming I should expect about 1800 Mbit/s at best.


Are you talking about switching at wire speed? Also, at which layer do you need those speeds?
 
Joined
Aug 15, 2022
Messages
316 (0.45/day)
Location
Some Where On Earth
System Name Spam
Processor i9-12900K PL1=125 TA=56 PL2=288
Motherboard MSI MAG B660M Mortar WiFi DDR4
Cooling Scythe Kaze Flex 120mm ARGB Fans x1 / Alphacool Eisbaer 360
Memory Mushkin Red Line DDR4 4000 16Gb x2 18-22-22-42 1T
Video Card(s) Sapphire Pulse RX 7900 XT
Storage Team Group MP33 512Mb / 1Tb
Display(s) LG 34GP63A-B (3440 x 1440)
Case Lan-Li A3
Audio Device(s) Real Tek on Board Audio
Power Supply EVGA SuperNOVA 850 GM
Mouse G203
Keyboard G413
Software WIN 11 Pro
According to a FLUKE Networks certification tool, there is no 1GbE hardware able to deliver 1GbE in full.
The best score for a high-quality 1GbE switch is about 900 Mbit/s,
while cheap switch products did about 450 Mbit/s.
No Ethernet interface will give you full speed, since there is TCP/IP packet overhead from checksums and header information. So you will only get about 940 Mbit/s of data on a 1Gb link (about 94%), and the same holds true for 2.5Gb and above.
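That ~94% figure can be reproduced from the frame layout; a rough sketch assuming a standard 1500-byte MTU, IPv4, and TCP with the timestamp option enabled:

```python
# TCP goodput over 1GbE with a standard 1500-byte MTU.
MTU = 1500
ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap
IP_TCP_OVERHEAD = 20 + 20 + 12   # IPv4 + TCP + TCP timestamp option

wire_bytes = MTU + ETH_OVERHEAD   # 1538 bytes on the wire per frame
payload = MTU - IP_TCP_OVERHEAD   # 1448 bytes of application data

goodput_mbps = 1e9 * payload / wire_bytes / 1e6
print(f"{goodput_mbps:.0f} Mb/s")  # 941 Mb/s, ~94% of line rate
```

Jumbo frames raise the ratio slightly, which is why NAS vendors push them, but the ceiling never reaches the nominal line rate.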

Fiber has the encoding overhead built into the link, so a 1.25Gb line rate still delivers the full 1Gb of data. If you need the data throughput, you might want to look at fiber as an alternative to copper.
 
Joined
Dec 24, 2008
Messages
2,062 (0.36/day)
Personally, I am thinking of activating the Intel teaming functionality on my motherboard: i210-AT + i217-V = 2.0GbE total.
So it is important to select a 2.5GbE unmanaged switch for its better (higher) switching forwarding rate, along with switching capacity and packet buffer.
I am not answering myself here; for those who missed my motive, here is a second opportunity to read it.

About actual needs: I do not store HD movies, and neither do I do any video editing; YouTube ended up a terrible employer.

I have content of my own: text, photographs, PDF files, and a few multimedia files.
Approximately 160GB of data. I am working on several projects, and I have a dedicated file folder per project.
I add and/or delete files in each project daily.

Twice a week, I sync my work with storage space on a second PC on my LAN.
I use special software that compares my data and keeps it in sync.
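For reference, the same compare-and-sync pattern can be done with the robocopy tool built into Windows; the paths and share name below are placeholders, not the OP's actual setup:

```shell
:: One-way mirror of a local project tree to a share on the second PC.
:: /MIR mirrors the tree (copies changes, deletes files removed locally),
:: /Z makes copies restartable over the network,
:: /R and /W limit retries and wait time on locked files.
robocopy C:\Work\Projects \\SECOND-PC\Backup\Projects /MIR /Z /R:2 /W:5
```

Because only changed files cross the wire, the link speed mostly matters on the first full copy and on days with large changed files.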

The entire small network runs at 1GbE, with Intel NICs everywhere; the heart of the system is an Asus GigaX GX1105B switch.


I am after speeding up the file sync by taking advantage of hardware that I already own (Gigabyte GA-Z87X-UD5H + Intel NIC teaming, 2GbE).
In conclusion, I do not need 5G, 10G, or fiber; I am after discovering the best hardware at 2.5GbE, which is enough to serve a future upgrade to a 2.5GbE small NAS (network storage) and/or a 2K surveillance camera.
So I have small dreams, but I love and insist on owning quality solutions: high performance + long-term reliability.
 
Joined
Nov 16, 2007
Messages
1,190 (0.20/day)
Location
Hampton Roads
Processor Xeon x5650
Motherboard SABERTOOTH X58
Cooling Fans
Memory 24 GB Kingston HyperX 1600
Video Card(s) GTX 1060 3GB
Storage small ssd
Display(s) Dell 2001F, BenQ short throw
Case Lian Li
Audio Device(s) onboard
Power Supply X750
Software Mint 19.3, Win 10
Benchmark Scores not so fast...
I am still learning, but I see a couple things:

Does the other PC have two NICs that will get the same treatment?

And, the switch must be managed. It has to negotiate a single MAC for both of the NICs, on specific ports of that switch.
 
Joined
Dec 24, 2008
Messages
2,062 (0.36/day)
I am still learning, but I see a couple things:

Does the other PC have two NICs that will get the same treatment?

And, the switch must be managed. It has to negotiate a single MAC for both of the NICs, on specific ports of that switch.

As far as I have read in the Intel documentation, NIC teaming is entirely controlled by the workstation that has Intel teaming active.
Packet addressing and packet priority are all set automatically.

The other workstation will be using a 2.5G PCIe card; still, there is a culprit: in order for the card to deliver its best, the motherboard should support PCIe 3.0 bandwidth.

Another culprit is that, for stable operation of big data transfers at 2.5G, the storage media must be SSDs at both ends.
For the jump above 1GbE, there is much complexity involved: too much documentation to read, and hardware requirements.

A managed switch is about setting up virtual LANs: you may have equipment connected on the regular subnet and also on a virtual (second) subnet. The simple explanation is that such a network needs one managed switch, or two unmanaged switches instead.
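On the host side, those virtual LANs are just tagged sub-interfaces; a minimal Linux sketch of what pairs with a managed switch's tagged port (the interface name, VLAN IDs, and subnets are example values, not anyone's real config):

```shell
# Create two tagged VLAN sub-interfaces on one physical NIC (Linux).
# eth0, the VLAN IDs and the subnets are placeholders.
ip link add link eth0 name eth0.10 type vlan id 10   # "regular" LAN
ip link add link eth0 name eth0.20 type vlan id 20   # second, virtual LAN

ip addr add 192.168.10.2/24 dev eth0.10
ip addr add 192.168.20.2/24 dev eth0.20
ip link set eth0.10 up
ip link set eth0.20 up
```

The managed switch port facing eth0 must be set to carry both tags; a typical unmanaged switch just forwards tagged frames untouched, which is why two physically separate unmanaged switches can substitute for one managed switch here.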

It is all about network planning and cost-effective choices of hardware.
At the end of the day, if there is a single managed switch and it goes down, everything stops working.
And if the managed switch gets hacked? That is an even worse scenario.

Packet collisions were a problem of the old 1990s Ethernet hubs, which were replaced by switch hubs.
 
Joined
Apr 24, 2020
Messages
2,661 (1.72/day)
Strangely enough, I hear that 10GbE is easier to figure out than 2.5Gb.

2.5Gb should be cheaper in theory, but that's not quite true yet in practice. There's just this chasm between 1GbE and 10GbE.

If anyone has good experience with 2.5Gb, especially with switches, I'm all ears! It's a strangely difficult problem. There seem to be good 2.5Gb NICs today at least; it's just a switch problem now.
 
Joined
Dec 24, 2008
Messages
2,062 (0.36/day)
Strangely enough, I hear that 10GbE is easier to figure out than 2.5Gb.

2.5Gb should be cheaper in theory, but that's not quite true yet in practice. There's just this chasm between 1GbE and 10GbE.

If anyone has good experience with 2.5Gb, especially with switches, I'm all ears! It's a strangely difficult problem. There seem to be good 2.5Gb NICs today at least; it's just a switch problem now.
You are not far from the truth.
Marvell (chip maker) has NIC options with one up to four RJ45 ports:
10 GbE x 2 RJ45 (max)
5 GbE x 4 RJ45
2.5 GbE x 4 RJ45

The complexity gets higher if you expect 10 GbE on both the LAN and the VLAN.

The new trend is video editing over NAS storage; 98% of people will never need it.
 
Joined
Apr 24, 2020
Messages
2,661 (1.72/day)
You are not far from the truth.
Marvell (chip maker) has NIC options with one up to four RJ45 ports:
10 GbE x 2 RJ45 (max)
5 GbE x 4 RJ45
2.5 GbE x 4 RJ45

The complexity gets higher if you expect 10 GbE on both the LAN and the VLAN.

The new trend is video editing over NAS storage; 98% of people will never need it.

So, quickie question, do you really need a 2.5GbE switch?

If you're just video editing + NAS, you could just run the NAS RJ45 directly to your desktop. A lot of NAS boxes have 2x NICs or more, so one goes to the switch, the other can go directly to your 2.5GbE (or whatever faster) computer you got directly.
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
26,240 (3.79/day)
Location
Alabama
System Name Rocinante
Processor I9 14900KS
Motherboard MSI MPG Z790I Edge WiFi Gaming
Cooling be quiet! Pure Loop 240mm
Memory 64GB Gskill Trident Z5 DDR5 6000
Video Card(s) MSI SUPRIM Liquid X 4090
Storage 1x 500GB 980 Pro | 1x 1TB 980 Pro | 1x 8TB Corsair MP400
Display(s) Odyssey OLED G9 (G95SC)
Case LANCOOL 205M MESH Snow
Audio Device(s) Moondrop S8's on schitt Modi+ & Valhalla 2
Power Supply ASUS ROG Loki SFX-L 1000W
Mouse Lamzu Atlantis mini (White)
Keyboard Monsgeek M3 Lavender, Akko Crystal Blues
VR HMD Quest 3
Software openSUSE Tumbleweed
Benchmark Scores I dont have time for that.
Strangely enough, I hear that 10GbE is easier to figure out than 2.5Gb.

I don't think that's strange at all. I hate 2.5Gb and 5Gb. Used 10Gb hardware is pretty cheap and proven.

10Gb was standardized in like 2006.

2.5Gb was seen in the wild around the end of 2013 from select manufacturers, but 2.5 and 5Gb/s weren't standardized until the middle of 2016.
 
Joined
Apr 24, 2020
Messages
2,661 (1.72/day)
I don't think that's strange at all. I hate 2.5Gb and 5Gb. Used 10Gb hardware is pretty cheap and proven.

While that's true, that 10Gb hardware needs to be 4x faster than 2.5Gb hardware. So in theory, the 2.5Gb hardware should be cheaper.

In particular, 10Gb hardware does get rather hot, because the compute power needed to run 10Gb is just significantly higher. The 2.5Gb / 5Gb speeds were designed to fix that, but in practice, the 2.5Gb / 5Gb chips are still hot as all heck, so might as well use the 10Gb ones...

Remember that an 8-port 10Gb switch can have 160Gbps of bandwidth internally. For comparison, DDR4 modules are ~160Gbps as well, except you often need 2x to 4x the RAM bandwidth to sustain communications at that speed. (Ex: read from port #0 to RAM, then write RAM to port #1; that's 2 RAM operations per bit, ignoring CPU costs and other parts inside the computer.) It's actually really, really difficult from a computation perspective to design a computer at these speeds.

And switches aren't "just" copying data from ports into RAM and back. There's also all that TCP / IP / Ethernet packet handling going on under it all (i.e., figuring out whether the data needs to be copied out of port #1 or port #2, or whatever).
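The comparison above is easy to sanity-check; a small sketch (assuming an 8-port switch quoted full duplex, and a single 64-bit DDR4-2666 channel as the RAM reference):

```python
# Full-duplex backplane of an 8-port 10GbE switch vs one DDR4 channel.
ports, port_gbps = 8, 10
backplane_gbps = ports * port_gbps * 2             # 160 Gbps

ddr4_mt_s = 2666e6   # DDR4-2666: transfers per second
bus_bytes = 8        # 64-bit channel width
ddr4_gbps = ddr4_mt_s * bus_bytes * 8 / 1e9        # ~170 Gbps

print(backplane_gbps, round(ddr4_gbps))            # 160 171
```

So one DDR4 channel barely covers the backplane once, and a store-and-forward design touches RAM twice per bit, which is the point being made above; real switch ASICs avoid this by switching in dedicated on-chip buffers instead of general-purpose RAM.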
 
Joined
Jul 10, 2017
Messages
2,671 (1.04/day)
Strangely enough, I hear that 10GbE is easier to figure out than 2.5Gb.

2.5Gb should be cheaper in theory, but that's not quite true yet in practice. There's just this chasm between 1GbE and 10GbE.

If anyone has good experience with 2.5Gb, especially with switches, I'm all ears! It's a strangely difficult problem. There seem to be good 2.5Gb NICs today at least; it's just a switch problem now.
All those speeds are ratified in the respective standards and recommendations. It's up to the designers and manufacturers to implement them properly.

All the gear I use nowadays just works fine. But it comes at a cost.
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
26,240 (3.79/day)
While that's true, that 10Gb hardware needs to be 4x faster than 2.5Gb hardware. So in theory, the 2.5Gb hardware should be cheaper.

In particular, 10Gb hardware does get rather hot, because the compute power needed to run 10Gb is just significantly higher. The 2.5Gb / 5Gb speeds were designed to fix that, but in practice, the 2.5Gb / 5Gb chips are still hot as all heck, so might as well use the 10Gb ones...

I know how 10G networking works



I am not sure what you are trying to argue I guess?
 
Joined
Jul 10, 2017
Messages
2,671 (1.04/day)
While that's true, that 10Gb hardware needs to be 4x faster than 2.5Gb hardware. So in theory, the 2.5Gb hardware should be cheaper.

In particular, 10Gb hardware does get rather hot, because the compute power needed to run 10Gb is just significantly higher. The 2.5Gb / 5Gb speeds were designed to fix that, but in practice, the 2.5Gb / 5Gb chips are still hot as all heck, so might as well use the 10Gb ones...

Remember that an 8-port 10Gb switch can have 160Gbps of bandwidth internally. For comparison, DDR4 modules are ~160Gbps as well, except you often need 2x to 4x the RAM bandwidth to sustain communications at that speed. (Ex: read from port #0 to RAM, then write RAM to port #1; that's 2 RAM operations per bit, ignoring CPU costs and other parts inside the computer.) It's actually really, really difficult from a computation perspective to design a computer at these speeds.

And switches aren't "just" copying data from ports into RAM and back. There's also all that TCP / IP / Ethernet packet handling going on under it all (i.e., figuring out whether the data needs to be copied out of port #1 or port #2, or whatever).
Passively cooled routers and switches with QSFP+ cages and 10GbE direct attach copper cables that barely get warm. Fact check before posting, please.

I know how 10G networking works



I am not sure what you are trying to argue I guess?
QNAP and Tiks? You are a man (?) of culture, I see. I just had a MikroGasm!
 
Joined
Apr 24, 2020
Messages
2,661 (1.72/day)
Passively cooled routers and switches with QSFP+ cages and 10GbE direct attach copper cables that barely get warm. Fact check before posting, please.

I know that 10GbE works well. But I'm just reminding y'all that it actually has a rather substantial amount of RAM bandwidth going on internally.

Cutting down on those bandwidth requirements to 1/4 or 1/2 has solid backing in theory. It's weird to me that it doesn't work out in practice.
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
26,240 (3.79/day)
QNAP and Tiks? You are a man (?) of culture, I see. I just had a MikroGasm!

Thats just the closet at home. I play with the big boy 100GB NICs at work for my PB Data arrays, but I wanted to save that card for later after more members tried to educate me on home routers.
 
Joined
Jul 10, 2017
Messages
2,671 (1.04/day)
I know that 10GbE works well. But I'm just reminding yall that its actually got a rather substantial amount of RAM bandwidth going on internally.

Cutting down on those bandwidth requirements by 1/4th or 1/2 has solid backing in theory. Its weird to me that it doesn't work out in practice.
RAM? Unless you are running some heavy BGP or some SW-defined solution, all proper switching chips and routing CPUs have sufficient internal buffers.

If you are talking wire-speed switching at tens of 100GbE ports, that is an entirely different ballpark.
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
26,240 (3.79/day)
I know that 10GbE works well.

No you dont.

Remember that an 8-port 10Gb switch can have 160Gbps of bandwidth internally.

Thats called a backplane

substantial amount of RAM bandwidth going on internally.

No. Not at all. When pushing massive throughput, some enterprise switches will buffer NIC ports with single ICs, in some cases up to 32GB of NAND buffer (if they aren't flagged for passthrough); further, this uses almost no router RAM to pass packets. In everything but the cheapest solutions, throughput is managed at a hardware level. RAM on routers is used for IDS/IPS or other configuration data, not for buffering traffic.

Further requests and commands to flush buffer, turn on or turn off are handled by the systems using other network standards.

Traffic on the host OS is also handled on the card level, not the OS. Unless your application is doing some kind of odd buffering you see no increase of RAM or CPU utilization as all of this processing is offloaded.

This is also actually a standard because packets (TCP) are generally time sensitive most buffering is off by default unless turned on by the network operator on their equipment. NTP is another protocol that can even utilize NIC hardware timestamping if supported for more accurate latency metrics when the daemon is attempting to configure clock skew.

Finally, the internal backplane throughput is NOT like PCI-E cards at all. These signal channels are not used at all times unless all ports are passing the max amount of traffic and even then as long as the devices up stream are also signaling fast enough (10g/1g etc) there is no buffer being used and no resends or discards being requested.

I understand this forum only provides a small subnet of networking knowledge to the greater community, so it's easy not to get the big picture, but before you start posting "facts" please revisit the Net+ and then maybe grab a few Juniper courses and work on something that isn't an Asus router.

Once that is done we can probably have a conversation on level ground.
 
Joined
Apr 24, 2020
Messages
2,661 (1.72/day)
RAM? Unless you are running some heavy BGP or some SW-defined solution, all proper switching chips and routing CPU's have sufficient internal buffers.

If you are talking wire-speed switching at tens of 100GbE ports is entirely different ballpark.
While you have a good point for dumb switches, I'm really not sure how much software is needed. VLANs and QoS are pretty intricate operations. I dunno, I've never built a switch before, but some computer is performing that if() statement (or maybe an ASIC is doing it) and figuring out where all the packets need to go. I mean, port security on these things is absolutely keeping tabs on which MAC addresses come and go, etc. etc.

In any case, I've got my internal model of what I think switches are doing. Some software (or hardware) is doing these checks. Maybe I'm wrong and its all ASIC these days.

I understand this forum only provides a small subnet of networking knowledge to the greater community so its easy to not get the big picture, but before you start posting "facts" please revisit the Net+ and then maybe grab a few juniper courses and work on something that isnt an asus router.

I recognize that I don't have as much experience with network gear as you. But my experience certainly isn't just home user / asus router level. I have theoretical, lower level knowledge of how to build CLOS networks and understand how ASIC chips would be designed and where these kinds of checks would be built into the hardware. I also have a degree of experience configuring these sorts of things for networks and labs (though have always worked with a more experienced network admin). I wouldn't say I'm "senior" in this kind of role, but I'm no dumbass as you seem to imply.

I'm not even trying to argue right now believe it or not, and am very confused by this seeming hostility I'm sensing from you. Just trying to contribute my 2-cents here. 10GbE has 4x the bandwidth of 2.5GbE, so 2.5GbE should be cheaper and easier to implement. Surely I'm not saying anything heretical here, or requires in depth knowledge of networking.

I recognize this isn't the case in practice. All 10GbE equipment works better in my experience (which seems to match your experience). So I'm not even sure where the heck we're disagreeing.

EDIT: I think we're getting lost in the weeds here somehow. All I'm saying is that 40Gbps of backplane bandwidth should be easier to implement than 160Gbps of backplane bandwidth. That is: 8 ports, bidirectional, 2.5GbE vs 10GbE. But in practice, this isn't true for some reason.
 
Joined
Jul 10, 2017
Messages
2,671 (1.04/day)
While you have a good point for dumb switches, I'm really not sure how much software is needed. VLANs and QoS are pretty intricate operations. I dunno, I've never built a switch before, but some computer is performing that if() statement (or maybe an ASIC is doing it) and figuring out where all the packets need to go. I mean, port security on these things is absolutely keeping tabs on which MAC addresses come and go, etc. etc.
Some chips have built-in acceleration for this i.e., no additional resources are needed, while others offload VLAN, bonding, etc. to other chips, most likely to the general computing units of the routing chips.
In any case, I've got my internal model of what I think switches are doing. Some software (or hardware) is doing these checks. Maybe I'm wrong and its all ASIC these days.

I have theoretical, lower level knowledge of how to build CLOS networks and understand how ASIC chips would be designed and where these kinds of checks would be built into the hardware. I also have a degree of experience configuring these sorts of things for networks and labs (though have always worked with a more experienced network admin).
That is frankly hard to believe. No offence, I'm not trying to start a fight or to insult you.

Not picking sides either but me and @Solaris17 are on the same page here. Please get some basic networking courses and hopefully it will all become clear.
 

Solaris17

Super Dainty Moderator
Staff member
Joined
Aug 16, 2005
Messages
26,240 (3.79/day)
I think in any case we are off topic.

As for 2.5 vs 5gb switches.

Like anything else: make sure your backplane switching capacity is all the ports added together, plus 1Gb for the management interface.

2.5Gb/s is only about 312MB/s in the world of math. So I wouldn't worry about SSDs on either end until you are using 5+Gb/s networking gear.
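A rough conversion from line rate to usable file-transfer speed, assuming the ~94% protocol efficiency figure mentioned earlier in the thread:

```python
# Line rate to approximate usable file-transfer speed.
def mb_per_s(line_gbps, efficiency=0.94):
    # efficiency ~0.94 accounts for Ethernet/IP/TCP overhead
    return line_gbps * 1e9 * efficiency / 8 / 1e6

for gbps in (1, 2.5, 5, 10):
    print(f"{gbps:>4} Gb/s -> ~{mb_per_s(gbps):.0f} MB/s")
# 2.5 Gb/s -> ~294 MB/s: close to fast-HDD territory,
# while 5 and 10 Gb/s clearly need SSDs to stay saturated.
```

Which matches the advice above: a good spinning disk can nearly keep a 2.5Gb link busy, but not a 5Gb one.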

Of course, access and spin-up times and latency come into play, so you may still benefit from SSDs on either end when using something like iSCSI to mount it like a disk.

If you are just using a network share

//myip/folder1/secret-files

then you don't really even need to worry about that, as the overhead from sharing protocols probably won't make disk access times matter; it is much more forgiving.

I personally would go with any reputable brand, but that can change based on bias and locality.

Linksys (Cisco)
Netgear
Mikrotik
Ubiquiti

are who I would look at, though Mikrotik and Ubiquiti probably don't have anything in the consumer space.

I am biased in this regard and don't touch, and won't touch, things like Asus, Broadcom, TRENDnet etc., so others may have some kind of insight on those.
 
Joined
Nov 16, 2007
Messages
1,190 (0.20/day)
Will the dual NIC teaming increase data throughput?
 
Joined
Apr 24, 2020
Messages
2,661 (1.72/day)
Will the dual NIC teaming increase data throughput?

Depends on implementation, depends on traffic patterns, depends on your definition of throughput.

For the situation seemingly discussed in this topic, I'm guessing no. A lot of NIC teaming is implemented as some kind of load-balancing across IP/Port pairs, which strangely enough, won't help "ComputerA" get more bandwidth with "NASBox".

What NIC teaming is designed for, is for "ComputerA" + "Computer B" all to access "NASBox" faster with less interference. ComputerA might get one NIC, ComputerB might get another NIC, so NASBox gets 2xNICs worth of bandwidth against all the computers in general. Emphasis on "might", a lot of the devil is in the details.

NIC teaming, in short, is designed for something like Netflix or YouTube to increase bandwidth. When dozens or hundreds of clients try to connect to one computer, each particular connection only gets "one NIC", so "one connection" can't get any faster. But the whole team of NICs is distributed across ComputerA / ComputerB / ComputerC / ... for the hundreds of clients accessing that particular server. This case strangely doesn't occur in typical NAS setups.
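The per-flow behaviour described above can be sketched with a hypothetical transmit hash (the common "layer 3+4" policy hashes the IP/port 4-tuple; this is illustrative, not Intel's actual algorithm), which shows why one flow never exceeds one NIC:

```python
# Sketch of layer-3+4 transmit hashing in NIC teaming (illustrative).
# Every packet of a given flow hashes to the same member NIC,
# so a single connection caps out at one link's speed.
import zlib

def pick_nic(src_ip, src_port, dst_ip, dst_port, n_nics=2):
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % n_nics

# One ComputerA <-> NASBox connection: always lands on the same NIC.
flow = ("192.168.1.10", 445, "192.168.1.20", 50000)
assert all(pick_nic(*flow) == pick_nic(*flow) for _ in range(100))

# Many clients (different source ports) spread across the team.
nics = {pick_nic("192.168.1.20", 50000 + i, "192.168.1.10", 445)
        for i in range(50)}
print(nics)  # typically {0, 1}: the team helps many flows, not one
```

This is exactly why teaming two 1GbE NICs rarely turns one file-sync job into 2Gb/s: the sync is usually a single TCP flow.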
 
Joined
Jun 2, 2014
Messages
462 (0.12/day)
Location
Midwest
System Name Core
Processor Intel 12700k @ 5.1/3.6 no HT
Motherboard ASRock z690 Steel Legend
Cooling Artic Cooling Freezer 420 AiO
Memory GSkill 64GB 3200 cas 14 b die
Video Card(s) Asus Nvidia RTX 4070 Super OC
Storage Optane 900p x2, SK Hynix p41 Pro
Display(s) ACER 250hz 1080p 25" IPS display x2
Case Phanteks p500a with all Noctua Chromax/Arctic fans
Audio Device(s) Focusrite interface, Presonus Studio Monitors and Subwoofer
Power Supply Seasonic 850w plat with cable mod cables
Mouse Glorious Model O
Keyboard Corsair mech k65
Software Win 11 Pro
Benchmark Scores 3dmark TimeSpy 20240-rtx 3090FE/12700k/Optane 3dmark TimeSpy 21862-rtx 4070Super/12700k/Optane
This is getting interesting. All my rigs have 10G, but my switch is only 1G. I have been looking at 2.5G switches, but I'd much rather have a 10G one, as they have been around the block for some time, and well, the network team at work will usually steer me in the right direction when stuff gets too heavy.

OP, I'm curious about your findings, especially in the longer term for reliability.
 