Tuesday, September 3rd 2019

QNAP Introduces QXG-10G2T-107, a Dual-port 5-Speed 10GBASE-T NIC

QNAP Systems, Inc. today introduced the new QXG-10G2T-107, a dual-port PCI Express (PCIe) 10GBASE-T/NBASE-T NIC that supports five network speeds. It can be installed in a compatible QNAP NAS or a Windows/Linux PC with a PCIe 2.0 x4 (or faster) slot, providing organizations and individuals with a flexible and economical 10 GbE network connectivity solution.

The QXG-10G2T-107 uses the Aquantia AQtion AQC107S Ethernet controller, which supports 10/5/2.5/1 Gbps and 100 Mbps network speeds. The RJ45 connector also allows users to keep their existing cabling: transmission speeds can reach up to 5 Gbps over Cat 5e cables, or up to 10 Gbps over Cat 6 (or better) cables, unleashing the full potential of the QXG-10G2T-107.
"As 10 GbE network environment becomes more common, QNAP continues to deliver cost-effective 10 GbE solutions," said Dan Lin, Product Manager of QNAP, adding "Following QNAP's release of the single-port Multi-Gig QXG-10G1T NIC, the newly rolled out dual-port QXG-10G2T-107 NIC also leverages Aquantia Ethernet controller to offer Multi-Gig transfer rates, helping users to easily upgrade their PCs or NAS systems with 10 Gbps capability to accommodate intensive data transfer and boost productivity of team collaboration and personal workflows."

Windows and Linux users can download drivers from NIC manufacturer Aquantia's website. Using the QXG-10G2T-107 in a QNAP NAS requires QTS 4.3.6 or later.

Additionally, QNAP is offering a 15% discount on popular PCIe network cards featuring the Mellanox ConnectX-4 Lx SmartNIC: the 25 GbE QXG-25G2SF-CX4 and the 10 GbE QXG-10G2SF-CX4. Both cards can be installed in a NAS or PC, and support iSER (iSCSI Extensions for RDMA) to offload CPU workloads and optimize VMware virtualization performance.

For more information, visit this page.

17 Comments on QNAP Introduces QXG-10G2T-107, a Dual-port 5-Speed 10GBASE-T NIC

#2
Imsochobo
TheLostSwede: "Around $180."
x4 PCIe versus the trusty Intel X540-T2 at x8 PCIe, but at a higher price than used X540s.
Good to have more options :)
#3
TheLostSwede
News Editor
Imsochobo: "x4 PCIe versus the trusty Intel X540-T2 at x8 PCIe, but at a higher price than used X540s. Good to have more options :)"
PCIe 3.0 vs 2.0 in your example, so yeah, hardly an issue.

It's hardly fair to compare new to second hand products in terms of cost either.
#4
randomUser
Mellanox ConnectX-2 costs ~30 EUR.
I have reached a max of 6 Gbps with copper cables so far.

Not sure what's the problem, but somehow it is very hungry for CPU, taking 100% of a single core.
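
One way to confirm the single-hot-core symptom while a transfer runs is to watch per-core load. A minimal sketch, assuming the third-party psutil package is installed:

    # Sample per-core CPU while a large transfer runs, to confirm that one
    # core is pegged while the rest idle. Assumes the third-party psutil
    # package (pip install psutil); sample count and interval are arbitrary.
    import psutil

    for _ in range(15):  # ~15 seconds of one-second samples
        per_core = psutil.cpu_percent(interval=1, percpu=True)
        cores = " ".join(f"{p:5.1f}" for p in per_core)
        print(f"{cores} | busiest core: {max(per_core):5.1f}%")
    # One core near 100% while the others idle usually points at receive
    # queues/interrupts not being spread across cores (RSS) or disabled offloads.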
#5
Solaris17
Super Dainty Moderator
TheLostSwede: "Around $180."
Thanks, I hate it when the price isn't included. It's mind-numbingly stupid.
randomUser: "Mellanox ConnectX-2 costs ~30 EUR. I have reached a max of 6 Gbps with copper cables so far. Not sure what's the problem, but somehow it is very hungry for CPU, taking 100% of a single core."
If this is your first foray into higher-than-gigabit speeds, I guess I could see why you would be confused. But it takes CPU power to pump packets. Wait until you start getting closer to 10 Gbps speeds and start using multiple cards and optics.
#6
fynxer
10 Gbit is overpriced by today's tech standards.

Should have been standard in gaming years ago and come down to humane prices by now.

Gigabyte, among others, tried to push and release motherboards with 10 Gbit, but failed when Intel inflated prices for their chipsets while there was no competition from AMD.

Seemed Intel was not interested in consumer 10 Gbit lowering their profit, probably because they wanted to milk the ultra-cheap 1 Gbit standard to the end of days.

Hope that time is over now and we swiftly make the shift to at least 10 Gbit as the new home standard.
#8
TheLostSwede
News Editor
fynxer: "10 Gbit is overpriced by today's tech standards. Should have been standard in gaming years ago and come down to humane prices by now. Gigabyte, among others, tried to push and release motherboards with 10 Gbit, but failed when Intel inflated prices for their chipsets while there was no competition from AMD. Seemed Intel was not interested in consumer 10 Gbit lowering their profit, probably because they wanted to milk the ultra-cheap 1 Gbit standard to the end of days. Hope that time is over now and we swiftly make the shift to at least 10 Gbit as the new home standard."
2.5 Gbps is the new "low cost" consumer standard.
#9
Ravenas
fynxer: "10 Gbit is overpriced by today's tech standards. Should have been standard in gaming years ago and come down to humane prices by now. Gigabyte, among others, tried to push and release motherboards with 10 Gbit, but failed when Intel inflated prices for their chipsets while there was no competition from AMD. Seemed Intel was not interested in consumer 10 Gbit lowering their profit, probably because they wanted to milk the ultra-cheap 1 Gbit standard to the end of days. Hope that time is over now and we swiftly make the shift to at least 10 Gbit as the new home standard."
10 Gbps is kind of dumb for consumer standards. 10 Gbps internet costs $299 a month where I live. 2.5 Gbps is more realistic, and they should begin pushing these out before they lose the market to other OEMs.
#10
TheLostSwede
News Editor
Solaris17: "I mean, you can buy something that can route 10 gig right now for like $130.
www.amazon.com/gp/product/B07LFKGP1L/ref=ppx_yo_dt_b_asin_title_o02_s00?ie=UTF8&psc=1
DAC cables and even optics and fiber aren't expensive. It's been like this for a few years. Consumers just aren't ready yet. If it doesn't say Nighthawk or Linksys and come with a pretty Apple-esque GUI, it isn't fast or it's scary."
SFP+ is useless for most home users, as it either requires costly adapters, or a fibre based network. It's hard enough to make consumers understand the benefits of Ethernet. Wi-Fi is the standard consumer networking interface, as most consumers use mobile devices and only care about browsing the web.
Ravenas: "10 Gbps is kind of dumb for consumer standards. 10 Gbps internet costs $299 a month where I live. 2.5 Gbps is more realistic, and they should begin pushing these out before they lose the market to other OEMs."
We're talking local networks here, not internet access speeds...
I have a 10 Gbps card in this PC and one in my NAS, so I can quickly copy files between the two.
I only have a 200Mbps internet connection.
#11
chodaboy19
We still need those dirt-cheap switches that support 2.5/5/10 Gbps speeds!!!
#12
Ravenas
TheLostSwede: "We're talking local networks here, not internet access speeds... I have a 10 Gbps card in this PC and one in my NAS, so I can quickly copy files between the two. I only have a 200 Mbps internet connection."
I was just talking about internet speeds, down and up, and the availability of competitive Intel integrated products in the current motherboard market.

As for local network transfers, the problem isn't the cost of the cards, but the switches.
#13
Solaris17
Super Dainty Moderator
TheLostSwede: "It's hard enough to make consumers understand the benefits of Ethernet. Wi-Fi is the standard consumer networking interface, as most consumers use mobile devices and only care about browsing the web."
That is fair; I let my profession get in my way. However, if you're talking from a techy perspective (which this thread isn't), I would still argue for it over buying an expensive pre-built router, especially if cost is a concern to begin with.
#14
TheLostSwede
News Editor
chodaboy19: "We still need those dirt-cheap switches that support 2.5/5/10 Gbps speeds!!!"
A reasonably priced 8-port option would be a good start...
As in, something in the $200-300 range, rather than $400-500.
#15
ncrs
Solaris17: "If this is your first foray into higher-than-gigabit speeds, I guess I could see why you would be confused. But it takes CPU power to pump packets. Wait until you start getting closer to 10 Gbps speeds and start using multiple cards and optics."
It does not take those levels of CPU power to pump packets, because every >1 Gbit NIC has hardware offloading. Saturating my 10 Gbit link with iperf3 takes 7% of a single i7-2600 core. You can easily buy 200 Gbit/s Mellanox ConnectX-6 NICs nowadays and they don't require huge CPU power either. The problem here is most likely misconfiguration - perhaps not using jumbo frames, or wrong drivers (if on Windows) or firmware?
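
For repeatable numbers, iperf3 can emit JSON. A small wrapper sketch, assuming iperf3 is installed locally, "iperf3 -s" is running on the far end, and that the field names below match your iperf3 build's JSON output:

    # Run iperf3 against a host and report throughput plus local CPU use.
    # Assumes iperf3 is on PATH here and "iperf3 -s" runs on the target host.
    import json
    import subprocess
    import sys

    def measure(host: str, seconds: int = 10) -> None:
        out = subprocess.run(
            ["iperf3", "-c", host, "-t", str(seconds), "-J"],  # -J = JSON output
            capture_output=True, text=True, check=True,
        ).stdout
        end = json.loads(out)["end"]
        gbps = end["sum_received"]["bits_per_second"] / 1e9
        cpu = end["cpu_utilization_percent"]["host_total"]
        print(f"{gbps:.2f} Gbit/s, local CPU {cpu:.1f}%")

    if __name__ == "__main__":
        measure(sys.argv[1])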
#16
randomUser
ncrs: "It does not take those levels of CPU power to pump packets, because every >1 Gbit NIC has hardware offloading. Saturating my 10 Gbit link with iperf3 takes 7% of a single i7-2600 core. You can easily buy 200 Gbit/s Mellanox ConnectX-6 NICs nowadays and they don't require huge CPU power either. The problem here is most likely misconfiguration - perhaps not using jumbo frames, or wrong drivers (if on Windows) or firmware?"
I tried many Mellanox drivers and have been messing with settings a lot (by reading posts from other people who also had this problem).

My Windows system has a 9900K; my "server" has a Ryzen 3600. When I had Win10 Pro on my Ryzen system, speeds were up to 6 Gbps. Sending from the 9900K system yields higher speeds.
When I installed ESXi 6.7 on my Ryzen system (default VMware built-in drivers) and uploaded the ISO files, it only reached 1.2 Gbps.
The source and the destination were NVMe drives.

Some people say it may not reach max speed because of copper.
Fiber will cost me an additional 85 EUR. I might try that at some point.
But it is strange that it requires so much CPU. I mean, rack servers use Xeons, which are much weaker than desktop CPUs, and they have no problems with 10 Gbps speeds.
So yes, I do have a driver/configuration problem, but I have yet to find the right combo.
#17
ncrs
randomUser: "I tried many Mellanox drivers and have been messing with settings a lot (by reading posts from other people who also had this problem). My Windows system has a 9900K; my 'server' has a Ryzen 3600. When I had Win10 Pro on my Ryzen system, speeds were up to 6 Gbps. Sending from the 9900K system yields higher speeds. When I installed ESXi 6.7 on my Ryzen system (default VMware built-in drivers) and uploaded the ISO files, it only reached 1.2 Gbps. The source and the destination were NVMe drives."
Wait... you're talking about transfers between filesystems? That's totally different from raw network performance and depends on many more factors. Try running pure iperf3 between the hosts to check if the NICs are the problem in the first place.
randomUser: "Some people say it may not reach max speed because of copper."
I am running a 7 m SFP+ direct attach copper cable between two Mellanox ConnectX-2 cards and am able to saturate the link with a 9600 MTU and barely any CPU load.
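
If iperf3 isn't available on one of the boxes, even a bare TCP socket test takes the filesystems out of the picture entirely. A minimal Python 3.8+ sketch (the probe.py name, port, and buffer size are arbitrary choices):

    # Bare TCP throughput probe: data goes from memory to memory, so disks
    # and filesystems are excluded. Start "python3 probe.py server" on one
    # host, then "python3 probe.py client <server-ip>" on the other.
    import socket
    import sys
    import time

    PORT = 5201       # arbitrary; same default port iperf3 uses
    CHUNK = 1 << 20   # 1 MiB in-memory buffer

    def server() -> None:
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            total, start = 0, time.time()
            while data := conn.recv(CHUNK):
                total += len(data)
            secs = time.time() - start
            print(f"{total * 8 / secs / 1e9:.2f} Gbit/s over {secs:.1f} s")

    def client(host: str, seconds: int = 10) -> None:
        buf = bytes(CHUNK)  # zero-filled payload
        with socket.create_connection((host, PORT)) as conn:
            deadline = time.time() + seconds
            while time.time() < deadline:
                conn.sendall(buf)
        # Closing the socket ends the server's recv loop and prints the result.

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[2])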