News Posts matching #Interconnect


BittWare Announces PCIe 5.0/CXL FPGA Accelerators Featuring Intel Agilex M-Series and I-Series to Drive Memory and Interconnectivity Improvements

BittWare, a Molex company, a leading supplier of enterprise-class accelerators for edge and cloud-computing applications, today introduced new card and server-level solutions featuring Intel Agilex FPGAs. The new BittWare IA-860m helps customers alleviate memory-bound application workloads by leveraging up to 32 GB of HBM2E in-package memory and 16 lanes of PCIe 5.0 (with CXL upgrade option). BittWare also added new Intel Agilex I-Series FPGA-based products with the introduction of the IA-440i and IA-640i accelerators, which support high-performance interfaces, including 400G Ethernet and PCIe 5.0 (CXL option). These newest models complement BittWare's existing lineup of Intel Agilex F-Series products to comprise one of the broadest portfolios of Intel Agilex FPGA-based offerings on the market. This announcement reinforces BittWare's commitment to addressing ever-increasing demands of high-performance compute, storage, network and sensor processing applications.

"BittWare is excited to apply Intel's advanced technology to solve increasingly difficult application problems, quickly and at low risk," said Craig Petrie, vice president, Sales and Marketing of BittWare. "Our longstanding collaboration with Intel, expertise with the latest development tools, including OneAPI, as well as alignment with Molex's global supply chain and manufacturing capabilities enable BittWare to reduce development time by 12-to-18 months while ensuring smooth transitions from proof-of-concept to volume product deployment."

Avicena Raises $25 Million in Series A to Fund Development of High Capacity microLED-based Optical Interconnects

AvicenaTech Corp., the leader in microLED-based chip-to-chip interconnects, today announced that the company has secured $25M in Series A funding from Samsung Catalyst Fund, Cerberus Capital Management, Clear Ventures, and Micron Ventures to drive the development of products based on Avicena's breakthrough photonic I/O solution. "We believe that Avicena technology can be transformational in unlocking compute-to-memory chip-to-chip high-speed interconnects. Such technology can be central to supporting future disaggregated architectures and distributed high-performance computing (HPC) systems," said Marco Chisari, EVP of Samsung Electronics and Head of the Samsung Semiconductor Innovation Center.

"We are excited to participate in this round at Avicena," said Amir Salek, Senior Managing Director at Cerberus Capital Management and former Head of silicon for Google Infrastructure and Cloud. "Avicena has a highly differentiated technology addressing one of the main challenges in modern computer architecture. The technology offered by Avicena meets the needs for scaling future HPC and cloud compute networks and covers applications in conventional datacenter and 5G cellular networking."

Ayar Labs Partners with NVIDIA to Deliver Light-Based Interconnect for AI Architectures

Ayar Labs, the leader in chip-to-chip optical connectivity, is working with NVIDIA to develop groundbreaking artificial intelligence (AI) infrastructure based on optical I/O technology to meet the future demands of AI and high performance computing (HPC) workloads. The collaboration will focus on integrating Ayar Labs' technology to develop scale-out architectures enabled by high-bandwidth, low-latency and ultra-low-power optical-based interconnects for future NVIDIA products. Together, the companies plan to accelerate the development and adoption of optical I/O technology to support the explosive growth of AI and machine learning (ML) applications and data volumes.

Optical I/O uniquely changes the performance and power trajectories of system designs by enabling compute, memory and networking ASICs to communicate with dramatically increased bandwidth, at lower latency, over longer distances and at a fraction of the power of existing electrical I/O solutions. The technology is also foundational to enabling emerging heterogeneous compute systems, disaggregated/pooled designs, and unified memory architectures that are critical to accelerating future data center innovation.

Rambus to Acquire Hardent, Accelerating Roadmap for Next-Generation Data Center Solutions

Rambus Inc., a provider of industry-leading chips and silicon IP making data faster and safer, today announced it has signed an agreement to acquire Hardent, Inc. ("Hardent"), a leading electronic design company. This acquisition augments the world-class team of engineers at Rambus and accelerates the development of CXL processing solutions for next-generation data centers. With 20 years of semiconductor experience, Hardent's world-class silicon design, verification, compression, and Error Correction Code (ECC) expertise provides key resources for the Rambus CXL Memory Interconnect Initiative.

"Driven by the demands of advanced workloads like AI/ML and the move to disaggregated data center architectures, industry momentum for CXL-based solutions continues to grow," said Luc Seraphin, president and CEO of Rambus. "The addition of the highly-skilled Hardent design team brings key resources that will accelerate our roadmap and expand our reach to address customer needs for next-generation data center solutions." "The Rambus culture and track record of technology leadership is an ideal fit for Hardent," said Simon Robin, president and founder of Hardent. "The team is looking forward to joining Rambus and is excited to be part of a global company advancing the future of data center solutions." In addition, Hardent brings complementary IP and services to the Rambus silicon IP portfolio, expanding the customer base and design wins in automotive and consumer electronic applications. The transaction is expected to close in the second calendar quarter of 2022 and will not materially impact results.

NVIDIA Opens NVLink for Custom Silicon Integration

Enabling a new generation of system-level integration in data centers, NVIDIA today announced NVIDIA NVLink-C2C, an ultra-fast chip-to-chip and die-to-die interconnect that will allow custom dies to coherently interconnect to the company's GPUs, CPUs, DPUs, NICs and SOCs. With advanced packaging, the NVIDIA NVLink-C2C interconnect would deliver up to 25x more energy efficiency and be 90x more area-efficient than PCIe Gen 5 on NVIDIA chips, and enable coherent interconnect bandwidth of 900 gigabytes per second or higher.

"Chiplets and heterogeneous computing are necessary to counter the slowing of Moore's law," said Ian Buck, vice president of Hyperscale Computing at NVIDIA. "We've used our world-class expertise in high-speed interconnects to build uniform, open technology that will help our GPUs, DPUs, NICs, CPUs and SoCs create a new class of integrated products built via chiplets."

Intel, AMD, Arm, and Others, Collaborate on UCIe (Universal Chiplet Interconnect Express)

Intel, along with Advanced Semiconductor Engineering Inc. (ASE), AMD, Arm, Google Cloud, Meta, Microsoft Corp., Qualcomm Inc., Samsung and Taiwan Semiconductor Manufacturing Co., has announced the establishment of an industry consortium to promote an open die-to-die interconnect standard called Universal Chiplet Interconnect Express (UCIe). Building on its work on the open Advanced Interface Bus (AIB), Intel developed the UCIe standard and donated it to the group of founding members as an open specification that defines the interconnect between chiplets within a package, enabling an open chiplet ecosystem and ubiquitous interconnect at the package level.

"Integrating multiple chiplets in a package to deliver product innovation across market segments is the future of the semiconductor industry and a pillar of Intel's IDM 2.0 strategy," said Sandra Rivera, executive vice president and general manager of the Datacenter and Artificial Intelligence Group at Intel. "Critical to this future is an open chiplet ecosystem with key industry partners working together under the UCIe Consortium toward a common goal of transforming the way the industry delivers new products and continues to deliver on the promise of Moore's Law."

CXL Consortium & Gen-Z Consortium Sign Letter of Intent to Advance Interconnect Technology

High performance computing continues to evolve—meeting the ever-increasing demand for high-efficiency, low-latency, rapid and seamless processing. The Gen-Z Consortium was founded in 2016 to create a next-generation fabric capable of bridging existing solutions while enabling new, unbounded innovation in an open, non-proprietary standards body.

In 2019, the CXL Consortium launched to deliver Compute Express Link (CXL), an industry-supported cache-coherent interconnect designed for processors, memory expansion, and accelerators. The CXL Consortium and the Gen-Z Consortium established a joint memorandum of understanding (MOU) providing an opportunity for collaboration to define bridging between the protocols. This took the form of a joint working group that encouraged creativity and innovation between the two organizations toward the betterment of the industry as a whole.

Samsung Announces Availability of Its Leading-Edge 2.5D Integration H-Cube Solution

Samsung Electronics, a world leader in advanced semiconductor technology, today announced that it has developed Hybrid-Substrate Cube (H-Cube) technology, its latest 2.5D packaging solution specialized for semiconductors for HPC, AI, data center, and network products that require high-performance and large-area packaging technology.

"H-Cube solution, which is jointly developed with Samsung Electro-mechanics (SEMCO) and Amkor Technology, is suited to high-performance semiconductors that need to integrate a large number of silicon dies," said Moonsoo Kang, senior vice president and Head of Foundry Market Strategy Team at Samsung Electronics. "By expanding and enriching the foundry ecosystem, we will provide various package solutions to find a breakthrough in the challenges our customers are facing."

"Zen 3" Chiplet Uses a Ringbus, AMD May Need to Transition to Mesh for Core-Count Growth

AMD's "Zen 3" CCD, or compute complex die, the physical building block of both its client and enterprise processors, possibly has a core-count limitation owing to the way its various bandwidth-heavy on-die components are interconnected, according to an AnandTech report. The report cites what are possibly the first insights AMD has provided into the CCD's switching fabric, confirming the presence of a Ring Bus topology. More specifically, the "Zen 3" CCD uses a bi-directional Ring Bus to connect the eight CPU cores with the 32 MB of shared L3 cache and other key components of the CCD, such as the IFOP interface that lets the CCD talk to the I/O die (IOD).

Imagine a literal bus driving around a city block, picking up and dropping off people between four buildings. The "bus" here resembles a strobe, the buildings resemble components (cores, uncore, etc.), and the bus-stops are ring-stops. Each component has its own ring-stops. To disable components (e.g., in product-stack segmentation), SKU designers simply disable their ring-stops, making the component inaccessible. A bi-directional Ring Bus would see two "vehicles" driving in opposite directions around the city block. The Ring Bus topology comes with limitations of scale, mainly resulting from the latency added by too many ring-stops. This is precisely why coaxial ring topologies faded out in networking.
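As a rough illustration of why ring-stop count hurts, the sketch below computes the average number of ring-stop hops between components on unidirectional versus bidirectional rings. The stop counts are illustrative, not AMD's actual topology:

```python
# Illustrative sketch: average hop count between ring-stops on a ring bus.
# A bidirectional ring roughly halves the average distance, but both grow
# linearly with stop count, which is why large core counts favor a mesh.

def avg_hops(stops: int, bidirectional: bool) -> float:
    """Mean number of ring-stop hops between two distinct stops."""
    total = 0
    for src in range(stops):
        for dst in range(stops):
            if src == dst:
                continue
            forward = (dst - src) % stops  # hops going one way around
            if bidirectional:
                total += min(forward, stops - forward)  # take shorter direction
            else:
                total += forward
    return total / (stops * (stops - 1))

for n in (8, 16, 32):
    uni, bi = avg_hops(n, False), avg_hops(n, True)
    print(f"{n} stops: unidirectional {uni:.2f} hops, bidirectional {bi:.2f} hops")
```

For 8 stops, the bidirectional ring averages about 2.3 hops versus 4.0 unidirectional; double the stops and both averages roughly double, illustrating the scaling limit described above.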

Intel and QuTech Demonstrate Advances in Solving Quantum Interconnect Bottlenecks

Today, Intel and QuTech (a collaboration between Delft University of Technology and the Netherlands Organisation for Applied Scientific Research) published key findings in quantum research to address the "interconnect bottleneck" that exists between quantum chips that sit in cryogenic dilution refrigerators and the complex room-temperature electronics that control the qubits. The innovations were covered in Nature, the industry-leading science journal of peer-reviewed research, and mark an important milestone in addressing one of the biggest challenges to quantum scalability with Intel's cryogenic controller chip Horse Ridge.

"Our research results, driven in partnership with QuTech, quantitatively prove that our cryogenic controller, Horse Ridge, can achieve the same high-fidelity results as room-temperature electronics while controlling multiple silicon qubits. We also successfully demonstrated frequency multiplexing on two qubits using a single cable, which clears the way for simplifying the "wiring challenge" in quantum computing. Together, these innovations pave the way for fully integrating quantum control chips with the quantum processor in the future, lifting a major roadblock in quantum scaling," said Stefano Pellerano, principal engineer at Intel Labs.

Samsung Unveils Industry-First Memory Module Incorporating New CXL Interconnect

Samsung Electronics Co., Ltd., the world leader in advanced memory technology, today unveiled the industry's first memory module supporting the new Compute Express Link (CXL) interconnect standard. Integrated with Samsung's Double Data Rate 5 (DDR5) technology, this CXL-based module will enable server systems to significantly scale memory capacity and bandwidth, accelerating artificial intelligence (AI) and high-performance computing (HPC) workloads in data centers.

The rise of AI and big data has been fueling the trend toward heterogeneous computing, where multiple processors work in parallel to process massive volumes of data. CXL—an open, industry-supported interconnect based on the PCI Express (PCIe) 5.0 interface—enables high-speed, low latency communication between the host processor and devices such as accelerators, memory buffers and smart I/O devices, while expanding memory capacity and bandwidth well beyond what is possible today. Samsung has been collaborating with several data center, server and chipset manufacturers to develop next-generation interface technology since the CXL consortium was formed in 2019.

Lian Li Launches New Fan Interlocking System with the UNI FAN SL120

LIAN LI Industrial Co. Ltd., a leading manufacturer of aluminium chassis and PC accessories, announces the UNI FAN SL120, an innovative approach to reducing cables by interlocking and daisy-chaining the fans. Designed as 120 mm high static pressure PWM fans with addressable RGB LEDs, the UNI FAN SL120 has a newly patented quick-connect daisy-chaining style system that simplifies cable management and mounting efforts. With up to 16 fans (4 sets of 4) under one fan controller, users can easily create an adapted profile to match their fan speed and synchronized lighting effect needs via the highly intuitive L-Connect software. Available in black or white, the UNI FAN SL120 offers a new premium look with lighting effects that do not compromise on performance.

Focused on giving users more control over fan functionality, the clean and intuitive interface of the L-Connect software provides instant speed and RGB light management for individual fans or simultaneous control over all 4 clusters of fans. With 5 fan speed profiles, 14 lighting effects, and control over each effect's color, brightness, direction, and speed, L-Connect allows full customization of the look and feel of the system. Alternatively, users can choose to sync with the motherboard software via the simple toggle of a switch.

SK Hynix Licenses DBI Ultra 3D Interconnect Technology

Xperi Corporation today announced that it entered into a new patent and technology license agreement with SK hynix, one of the world's largest semiconductor manufacturers. The agreement includes access to Xperi's broad portfolio of semiconductor intellectual property (IP) and a technology transfer of Invensas DBI Ultra 3D interconnect technology focused on next-generation memory.

"We are delighted to announce the extension of our long-standing relationship with SK hynix, a world-renowned technology leader and manufacturer of memory solutions," said Craig Mitchell, President of Invensas, a wholly owned subsidiary of Xperi Corporation. "As the industry increasingly looks beyond conventional node scaling and turns toward hybrid bonding, Invensas stands as a pioneering leader that continues to deliver improved performance, power, and functionality, while also reducing the cost of semiconductors. We are proud to partner with SK hynix to further develop and commercialize our DBI Ultra technology and look forward to a wide range of memory solutions that leverage the benefits of this revolutionary technology platform."

Intel joins CHIPS Alliance to promote Advanced Interface Bus (AIB) as an open standard

CHIPS Alliance, the leading consortium advancing common and open hardware for interfaces, processors and systems, today announced industry leading chipmaker Intel as its newest member. Intel is contributing the Advanced Interface Bus (AIB) to CHIPS Alliance to foster broad adoption.

CHIPS Alliance is hosted by the Linux Foundation to foster a collaborative environment to accelerate the creation and deployment of open SoCs, peripherals and software tools for use in mobile, computing, consumer electronics and Internet of Things (IoT) applications. The CHIPS Alliance project develops high-quality open source Register Transfer Level (RTL) code and software development tools relevant to the design of open source CPUs, SoCs, and complex peripherals for Field Programmable Gate Arrays (FPGAs) and custom silicon.

Intel Unveils New Tools in Its Advanced Chip Packaging Toolbox

What's New: This week at SEMICON West in San Francisco, Intel engineering leaders provided an update on Intel's advanced packaging capabilities and unveiled new building blocks, including innovative uses of EMIB and Foveros together and a new Omni-Directional Interconnect (ODI) technology. When combined with Intel's world-class process technologies, new packaging capabilities will unlock customer innovations and deliver the computing systems of tomorrow.

"Our vision is to develop leadership technology to connect chips and chiplets in a package to match the functionality of a monolithic system-on-chip. A heterogeneous approach gives our chip architects unprecedented flexibility to mix and match IP blocks and process technologies with various memory and I/O elements in new device form factors. Intel's vertically integrated structure provides an advantage in the era of heterogeneous integration, giving us an unmatched ability to co-optimize architecture, process and packaging to deliver leadership products." -Babak Sabi, Intel corporate vice president, Assembly and Test Technology Development.

Toshiba Memory America Charts Course for PCIe 4.0 SSDs

Toshiba Memory America, Inc. (TMA), the U.S.-based subsidiary of Toshiba Memory Corporation, participated in the PCI-SIG (Peripheral Component Interconnect Special Interest Group) Compliance Workshop #109 in Burlingame, California, where several prototype and engineering samples of the company's upcoming PCIe 4.0 NVMe SSDs underwent PCI-SIG FYI Gen 4 testing.

The fourth generation of the PCIe interface, PCIe 4.0, doubles available bandwidth for graphics cards, SSDs, Wi-Fi, and Ethernet cards. The new standard will enable SSDs in particular to provide much higher performance than previous PCIe 3.0 SSDs, especially sequential read performance. An early participant seeking to enable PCIe 4.0 technologies, Toshiba Memory leverages its technology leadership role and actively collaborates with PCI-SIG and other member companies to accelerate adoption of the new interface standard.

"We realized years ago that the future of flash storage would be built on the NVMe architecture," noted John Geldman, director, SSD Industry Standards for Toshiba Memory America, Inc. and a member of the NVM Express Board of Directors. "This new and faster PCIe standard will maximize performance capability - unlocking systems' full potential."

Intel "Sapphire Rapids" Brings PCIe Gen 5 and DDR5 to the Data-Center

In what may be the mother of all ironies, prior to the effective death sentence dealt to it by the U.S. Department of Commerce, Huawei's server business developed an ambitious product roadmap for its Fusion Server family, aligned with Intel's enterprise processor roadmap. It describes in great detail the key features of these processors, such as core counts, platform, and I/O. The "Sapphire Rapids" processor will introduce the biggest I/O advancements in close to a decade when it releases, sometime in 2021.

With an unannounced CPU core count, the "Sapphire Rapids-SP" processor will introduce DDR5 memory support to the data-center, aiming to double bandwidth and memory capacity over the DDR4 generation. The processor features an 8-channel (512-bit wide) DDR5 memory interface. The second major I/O introduction is PCI-Express gen 5.0, which not only doubles bandwidth over gen 4.0 to 32 GT/s per lane, but also comes with a constellation of data-center-relevant features that Intel is pushing out in advance as part of the CXL interconnect. CXL and PCIe gen 5 are practically identical at the physical layer.
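The headline figures can be sanity-checked with simple arithmetic. The sketch below assumes a hypothetical 4800 MT/s DDR5 transfer rate and 128b/130b PCIe encoding; neither figure comes from the roadmap itself:

```python
# Back-of-the-envelope peak bandwidth math for the interfaces above.
# The DDR5 transfer rate (4800 MT/s) is an assumption for the example.

def ddr_bandwidth_gbps(transfer_rate_mtps: int, bus_width_bits: int) -> float:
    """Peak DDR bandwidth in GB/s: transfers/s times bytes per transfer."""
    return transfer_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

def pcie_bandwidth_gbps(lanes: int, gts_per_lane: float, encoding: float) -> float:
    """Peak PCIe bandwidth per direction in GB/s after encoding overhead."""
    return lanes * gts_per_lane * encoding / 8

# 8-channel (512-bit wide) DDR5 at a hypothetical 4800 MT/s
print(ddr_bandwidth_gbps(4800, 512))           # ~307 GB/s peak

# x16 PCIe 5.0: 32 GT/s per lane, 128b/130b encoding
print(pcie_bandwidth_gbps(16, 32, 128 / 130))  # ~63 GB/s per direction
```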

Intel Reveals the "What" and "Why" of CXL Interconnect, its Answer to NVLink

CXL, short for Compute Express Link, is an ambitious new interconnect technology for removable high-bandwidth devices, such as GPU-based compute accelerators, in a data-center environment. It is designed to overcome many of the technical limitations of PCI-Express, not the least of which is bandwidth. Intel sensed that its upcoming family of scalable compute accelerators under the Xe brand needs a specialized interconnect, which Intel wants to push as the next industry standard. The development of CXL was also triggered by compute accelerator majors NVIDIA and AMD already having similar interconnects of their own: NVLink and Infinity Fabric, respectively. At a dedicated event dubbed "Interconnect Day 2019," Intel put out a technical presentation that spelled out the nuts and bolts of CXL.

Intel began by describing why the industry needs CXL, and why PCI-Express (PCIe) doesn't suit this use-case. For a client-segment device, PCIe is perfect, since client-segment machines don't have many devices or very large memory, and their applications neither have a very large memory footprint nor scale across multiple machines. PCIe fails big in the data-center, when dealing with multiple bandwidth-hungry devices and vast shared memory pools. Its biggest shortcoming is isolated memory pools for each device, with inefficient access mechanisms. Resource sharing is almost impossible. Sharing operands and data between multiple devices, such as two GPU accelerators working on a problem, is very inefficient. And lastly, there's latency, lots of it; latency is the biggest enemy of shared memory pools that span multiple physical machines. CXL is designed to overcome many of these problems without discarding the best part of PCIe: the simplicity and adaptability of its physical layer.

AMD Ryzen 3000 "Zen 2" BIOS Analysis Reveals New Options for Overclocking & Tweaking

AMD will launch its 3rd generation Ryzen 3000 Socket AM4 desktop processors in 2019, with a product unveiling expected mid-year, likely on the sidelines of Computex 2019. AMD is keeping its promise of making these chips backwards compatible with existing Socket AM4 motherboards. To that effect, motherboard vendors such as ASUS and MSI began rolling out BIOS updates with AGESA-Combo 0.0.7.x microcode, which adds initial support for the platform to run and validate engineering samples of the upcoming "Zen 2" chips.

At CES 2019, AMD unveiled more technical details and a prototype of a 3rd generation Ryzen socket AM4 processor. The company confirmed that it will implement a multi-chip module (MCM) design even for their mainstream-desktop processor, in which it will use one or two 7 nm "Zen 2" CPU core chiplets, which talk to a 14 nm I/O controller die over Infinity Fabric. The two biggest components of the IO die are the PCI-Express root complex, and the all-important dual-channel DDR4 memory controller. We bring you never before reported details of this memory controller.

Intel Acquires NetSpeed Systems for Chip Design and Interconnect Fabric IP

Intel today announced the acquisition of NetSpeed Systems, a San Jose, California-based provider of system-on-chip (SoC) design tools and interconnect fabric intellectual property (IP). Deal terms were not disclosed. NetSpeed's highly configurable and synthesizable offerings will help Intel more quickly and cost-effectively design, develop and test new SoCs with an ever-increasing set of IP. The NetSpeed team is joining Intel's Silicon Engineering Group (SEG) led by Jim Keller. NetSpeed co-founder and CEO, Sundari Mitra, will continue to lead her team as an Intel vice president reporting to Keller.

"Intel is designing more products with more specialized features than ever before, which is incredibly exciting for Intel architects and for our customers. The challenge is synthesizing a broader set of IP blocks for optimal performance while reining in design time and cost. NetSpeed's proven network-on-chip technology addresses this challenge, and we're excited to now have their IP and expertise in-house," said Jim Keller, senior vice president and general manager of the Silicon Engineering Group at Intel.

Intel to Acquire eASIC to Bolster FPGA Talent and Solutions

Intel is competing to win in the largest-ever addressable market for silicon, which is being driven by the explosion of data and the need to process, analyze, store and share it. This dynamic is fueling demand for computing solutions of all kinds. Of course Intel is known for world-class CPUs, but today we offer a broader range of custom computing solutions to help customers tackle all kinds of workloads - in the cloud, over the network and at the edge. In recent years, Intel has expanded its products and introduced breakthrough innovations in memory, modems, purpose-built ASICs, vision processing units and field programmable gate arrays (FPGAs).

FPGAs are experiencing expanding adoption due to their versatility and real-time performance. These devices can be programmed anytime - even after equipment has been shipped to customers. FPGAs contain a mixture of logic, memory and digital signal processing blocks that can implement any desired function with extremely high throughput and very low latency. This makes FPGAs ideal for many critical cloud and edge applications, and Intel's Programmable Solutions Group revenue has grown double digits as customers use FPGAs to accelerate artificial intelligence, among other applications.

Latest Intel Roadmap Slide Leaked, Next Core X is "Cascade Lake-X"

The latest version of Intel's desktop client-platform roadmap has been leaked to the web, revealing timelines and names of the company's upcoming product lines. To begin with, it states that Intel will upgrade its Core X high-end desktop (HEDT) product line only in Q4-2018. The new Core X HEDT processors will be based on the "Cascade Lake-X" silicon. This is the first appearance of the "Cascade Lake" micro-architecture. Intel is probably looking to differentiate its Ringbus-based multi-core processors (e.g., "Coffee Lake," "Kaby Lake") from ones that use the Mesh Interconnect (e.g., "Skylake-X"), so people don't blindly compare single-threaded or less-parallelized application performance between the two.

Next up, Intel is poised to launch its second wave of 6-core, 4-core, and 2-core "Coffee Lake" processors in Q1-2018, with no mentions of an 8-core mainstream-desktop processor joining the lineup any time in 2018. These processors will be accompanied by more 300-series chipsets, namely the H370 Express, B360 Express, and H310 Express. Q1-2018 also sees Intel update its low-power processor lineup, with the introduction of the new "Gemini Lake" silicon, with 4-core and 2-core SoCs under the Pentium Silver and Celeron brands.

PCI SIG Releases PCI-Express Gen 4.0 Specifications

The Peripheral Component Interconnect (PCI) special interest group (SIG) published the first official specification (version 1.0) of PCI-Express gen 4.0 bus. The specification's previous draft 0.9 was under technical review by members of the SIG. The new generation PCIe comes with double the bandwidth of PCI-Express gen 3.0, reduced latency, lane margining, and I/O virtualization capabilities. With the specification published, one can expect end-user products implementing it. PCI SIG has now turned its attention to the even newer PCI-Express gen 5.0 specification, which will be close to ready by mid-2019.

PCI-Express gen 4.0 comes with 16 GT/s bandwidth per lane, per direction, double that of gen 3.0. An M.2 NVMe drive implementing it, for example, will have 64 Gbps of interface bandwidth at its disposal. The SIG has also steered the specification toward lower latencies, as HPC hardware designers turn to alternatives such as NVLink and Infinity Fabric not primarily for the bandwidth, but for the lower latency. Lane margining is a new feature that lets hardware measure and maintain uniform physical-layer signal integrity across multiple PCIe devices connected to a common root complex. This is particularly important when you have multiple pieces of mission-critical hardware (such as RAID HBAs or HPC accelerators) and require uniform performance across them. The new specification also adds I/O virtualization features that should prove useful in HPC and cloud computing.
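The 64 Gbps M.2 figure assumes an x4 link. A quick sketch (128b/130b encoding assumed, as in gen 3.0) shows the raw and effective numbers for both generations:

```python
# Illustrative x4 link bandwidth for PCIe gen 3.0 vs gen 4.0.
# Both generations use 128b/130b encoding, so effective throughput
# is slightly below the raw transfer rate.

ENCODING = 128 / 130  # payload bits per transferred bit

def x4_link_gbps(gts_per_lane: float, lanes: int = 4) -> tuple[float, float]:
    """Return (raw, effective) link bandwidth in Gbps for one direction."""
    raw = gts_per_lane * lanes
    return raw, raw * ENCODING

for gen, rate in (("Gen 3", 8.0), ("Gen 4", 16.0)):
    raw, eff = x4_link_gbps(rate)
    print(f"{gen} x4: {raw:.0f} Gbps raw, {eff:.1f} Gbps effective")
```

An x4 gen 4.0 link works out to 64 Gbps raw (about 63 Gbps after encoding overhead), double the 32 Gbps raw of a gen 3.0 x4 link.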

Intel Announces New Mesh Interconnect For Xeon Scalable, Skylake-X Processors

Intel's "Xeon Scalable" lineup is designed to compete directly with AMD's Naples platform. Naples, a core-laden, high-performance server platform that relies deeply on linking multiple core complexes together via AMD's own HyperTransport-derived Infinity Fabric interconnect, has given Intel some challenges in structuring its own high-core-count family of devices. This has led to a new mesh-based interconnect technology from Intel.

Tech Industry Leaders Unite, Unveil New High-Perf Server Interconnect Technology

On the heels of the recent Gen-Z interconnect announcement, an aggregate of some of the most recognizable names in the tech industry have once again banded together. This time, it's an effort towards the implementation of a fast, coherent and widely compatible interconnect technology that will pave the way towards tighter integration of ever-more heterogeneous systems.

Technology leaders AMD, Dell EMC, Google, Hewlett Packard Enterprise, IBM, Mellanox Technologies, Micron, NVIDIA and Xilinx announced the new open standard to appropriate fanfare, considering its promise of an up to 10x performance uplift in datacenter server environments, thus accelerating big-data, machine learning, analytics, and other emerging workloads. The interconnect promises a high-speed pathway toward tighter integration of the different types of technology that currently make up heterogeneous server computing, ranging from fixed-purpose accelerators to current and future system memory subsystems, and coherent storage and network controllers.