Friday, June 13th 2014

Thecus Updates 10GbE Network Interface Card

With adoption of 10GbE environments accelerating as prices of switches and compatible systems fall, Thecus Technology Corp. has taken bold steps to position itself as a leader in storage at this top tier of network technology. Further bolstering its goal of 'empowering professionals' is the release of the C10GTR NIC, the follow-up to its successful C10GT card.

Powered by an advanced Tehuti TN4010 processor, the new 10Gb Ethernet PCI Express adapter from Thecus fits both x4 and x8 PCIe slots and supports 100M, 1G, and 10G networks. Software supported by the C10GTR includes Windows Storage Server 2012, WSS 2008 R2, Windows 8, Windows 7, Linux 2.6, Linux 3.x, VMware 5.x, and Hyper-V. This low-profile card is suitable for a wide range of products, including enterprise-targeted NAS from Thecus (such as the Top Tower, large business rackmount, and N7710/N8810U series, as well as the N7700 and N8800 PRO v2). Lastly, the C10GTR is fully compliant with a wide range of protocols, including IEEE 802.3az, IEEE 802.3ad Link Aggregation, and IEEE 802.1Q VLAN.
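As a rough sanity check on why even an x4 slot is enough for a single 10GbE port, here is a back-of-envelope calculation in Python; the assumption that the card negotiates a PCIe 2.0 link (about 500 MB/s usable per lane, per direction) is ours, not something the announcement states.

```python
# Back-of-envelope check: does an x4 PCIe slot have headroom for one 10GbE port?
# Assumption (not from the announcement): the card runs at PCIe 2.0
# (5 GT/s, 8b/10b encoding), i.e. roughly 500 MB/s usable per lane per direction.

PCIE2_MB_PER_LANE = 500      # approximate usable MB/s per PCIe 2.0 lane, per direction
TEN_GBE_MB = 10_000 / 8      # 10 Gbit/s line rate expressed in MB/s (1250 MB/s)

for lanes in (4, 8):
    pcie_mb = lanes * PCIE2_MB_PER_LANE
    print(f"x{lanes}: ~{pcie_mb} MB/s of PCIe bandwidth vs {TEN_GBE_MB:.0f} MB/s of 10GbE "
          f"-> {pcie_mb / TEN_GBE_MB:.1f}x headroom")
```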
A step towards the future
With network technology evolving relentlessly, the Thecus range of networking solutions is ready for the multimedia-rich and cloud-centric operations of the future with 10Gb Ethernet.

For more information on the Thecus C10GTR, visit this page.

5 Comments on Thecus Updates 10GbE Network Interface Card

#1
The Von Matrices
Does anyone have any idea of the price? I've been wanting to upgrade to 10G Ethernet for my storage server, but the interface cards are too expensive. I'm waiting for a card to be less than $100, and preferably less than $80.
#2
Ferrum Master
Adding another 1GbE NIC and teaming them up doesn't help?
#3
Patriot
Ferrum Master: Adding another 1GbE NIC and teaming them up doesn't help?
Teaming doesn't scale very well... and even if you did a quad port and teamed them, and it scaled perfectly (which it won't), that still gives you less than half of a single 10GbE link.
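Patriot's point follows from how 802.3ad bonding distributes traffic: each flow is hashed onto one member link, so a single large transfer stays capped at 1 Gbps no matter how many ports are teamed. The Python sketch below mimics a simplified flow-hash policy to illustrate this; the hash function and the 4-port team are illustrative stand-ins, not the exact algorithm any particular driver uses.

```python
# Illustrative sketch: 802.3ad-style bonding pins each flow to one member link,
# so one big file copy only ever uses a single 1 GbE port.
# The hash below is a simplified stand-in for a real xmit_hash_policy.

from zlib import crc32

LINKS = 4  # a hypothetical 4-port 1 GbE team

def member_link(src_ip: str, dst_ip: str, dst_port: int) -> int:
    """Map a flow to one of the bonded links (simplified layer-3+4 style hash)."""
    key = f"{src_ip}-{dst_ip}-{dst_port}".encode()
    return crc32(key) % LINKS

# A single SMB/NFS transfer is one flow -> one link -> capped at ~1 Gbps.
print(member_link("192.168.1.10", "192.168.1.20", 445))   # same link every time
# Only many concurrent flows spread across links, and even the best case for a
# 4-port team (~4 Gbps) is still less than half of a single 10GbE port.
```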
#4
Ferrum Master
Still, it is a cheap solution that at least does something.
#5
Aquinus
Resident Wat-man
Ferrum Master: Still, it is a cheap solution that at least does something.
Yes, but even so, your max aggregate bandwidth would be 2 Gbit under the best of conditions. 10 Gbit would be more feasible for a SAN, for the devices that can't tolerate 1 Gbps access to your RAID. If your array can read at something like 350-400 MB/s, you would never come close to that over 1 Gbps (without teaming a ton of NICs together, which is unrealistic).

In this case though, I wouldn't imagine there being a 10Gbit switch, just 10Gbit cards between the storage server and the other systems that demand full speed to storage... but even for two 10Gbps cards, you need 8 PCI-E lanes handy on each system and at least 1000 USD for the two cards.
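To put rough numbers behind this, the short Python sketch below compares the quoted 350-400 MB/s array against 1 GbE, a 2 Gbit team, and 10 GbE; the 375 MB/s figure is simply the midpoint of that range, and protocol overhead is ignored.

```python
# Rough arithmetic behind the comment above: how much of a 350-400 MB/s RAID
# can each link type actually deliver? (375 MB/s is just the midpoint.)

ARRAY_MB_S = 375  # assumed array read speed

links = {
    "1 GbE": 1_000,          # Mbit/s
    "2 Gbit team": 2_000,    # best case for a 2-port team
    "10 GbE": 10_000,
}

for name, mbit in links.items():
    link_mb_s = mbit / 8                      # line rate in MB/s, ignoring protocol overhead
    usable = min(link_mb_s, ARRAY_MB_S)
    print(f"{name:12s}: ~{link_mb_s:6.0f} MB/s link -> {usable:.0f} MB/s of the array "
          f"({usable / ARRAY_MB_S:.0%})")
```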