News Posts matching #EMIB


US Government to Announce Massive Grant for Intel's Arizona Facility

According to the latest report by Reuters, the US government is preparing to announce a multi-billion dollar grant for Intel's chip manufacturing operations in Arizona next week, possibly worth more than $10 billion. US President Joe Biden and Commerce Secretary Gina Raimondo will make the announcement, which is part of the 2022 CHIPS and Science Act aimed at expanding US chip production and reducing dependence on manufacturing in China and Taiwan. The exact amount of the grant has yet to be confirmed, but rumors suggest it could exceed $10 billion, which would make it the most significant award yet under the CHIPS Act. The funding will include grants and loans to bolster Intel's competitive position and support the company's US semiconductor manufacturing expansion plans. This comes as a surprise just a day after the Pentagon reportedly refused to contribute $2.5 billion to Intel as part of a secret defense grant.

Intel has been investing significantly in its US expansion, recently opening a $3.5 billion advanced packaging facility in New Mexico that will produce advanced packaging technologies such as Foveros and EMIB. The chipmaker is also expanding its semiconductor manufacturing capacity in Arizona, with plans to build new fabs in the state. Arizona is quickly becoming a significant hub for semiconductor manufacturing in the United States. In addition to Intel's expansion, Taiwan Semiconductor Manufacturing Company (TSMC) is also building new fabs in the state, attracting supply partners to the region. The CHIPS Act allocates $39 billion for semiconductor production and $11 billion for research and development. The Intel grant will likely come out of the production portion, as Team Blue has been reshaping its business around the Intel Products and Intel Foundry segments.

Cadence Digital and Custom/Analog Flows Certified for Latest Intel 18A Process Technology

Cadence's digital and custom/analog flows are certified on the Intel 18A process technology. Cadence design IP supports this node from Intel Foundry, and the corresponding process design kits (PDKs) are delivered to accelerate the development of a wide variety of low-power consumer, high-performance computing (HPC), AI and mobile computing designs. Customers can now begin using the production-ready Cadence design flows and design IP to achieve design goals and speed up time to market.

"Intel Foundry is very excited to expand our partnership with Cadence to enable key markets for the leading-edge Intel 18A process technology," said Rahul Goyal, Vice President and General Manager, Product and Design Ecosystem, Intel Foundry. "We will leverage Cadence's world-class portfolio of IP, AI design technologies, and advanced packaging solutions to enable high-volume, high-performance, and power-efficient SoCs in Intel Foundry's most advanced process technology. Cadence is an indispensable partner supporting our IDM2.0 strategy and the Intel Foundry ecosystem."

Intel Opens Fab 9 Foundry in New Mexico

Today, Intel celebrated the opening of Fab 9, its cutting-edge factory in Rio Rancho, New Mexico. The milestone is part of Intel's previously announced $3.5 billion investment to equip its New Mexico operations for the manufacturing of advanced semiconductor packaging technologies, including Intel's breakthrough 3D packaging technology, Foveros, which offers flexible options for combining multiple chips that are optimized for power, performance and cost.

"Today, we celebrate the opening of Intel's first high-volume semiconductor operations and the only U.S. factory producing the world's most advanced packaging solutions at scale. This cutting-edge technology sets Intel apart and gives our customers real advantages in performance, form factor and flexibility in design applications, all within a resilient supply chain. Congratulations to the New Mexico team, the entire Intel family, our suppliers, and contractor partners who collaborate and relentlessly push the boundaries of packaging innovation," said Keyvan Esfarjani, Intel executive vice president and chief global operations officer.

Intel's Meteor Lake CPU Breaks Ground with On-Package LPDDR5X Memory Integration

During a recent demonstration, Intel showcased its cutting-edge packaging technologies, EMIB (embedded multi-die interconnect bridge) and Foveros, unveiling the highly-anticipated Meteor Lake processor with integrated LPDDR5X memory. This move appears to align with Apple's successful integration of LPDDR memory into its M1 and M2 chip packages. At the heart of Intel's presentation was the quad-tile Meteor Lake CPU, leveraging Foveros packaging for its chiplets and boasting 16 GB of Samsung's LPDDR5X-7500 memory. Although the specific CPU configuration remains undisclosed, the 16 GB of integrated memory delivers a remarkable peak bandwidth of 120 GB/s, outperforming traditional memory subsystems using DDR5-5200 or LPDDR5-6400.
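
As a rough sanity check on the quoted figure, peak bandwidth for LPDDR5X-7500 works out as transfer rate times bus width. The 128-bit total bus width used below is an assumption inferred from the 16 GB capacity and the 120 GB/s claim, not something Intel has confirmed; the sketch simply shows how the numbers line up.

```python
# Back-of-the-envelope peak bandwidth check.
# The 128-bit bus width is an assumption, not an Intel-confirmed spec.
transfer_rate_mtps = 7500          # LPDDR5X-7500, mega-transfers per second
bus_width_bits = 128               # assumed total bus width

peak_bw_gbps = transfer_rate_mtps * bus_width_bits / 8 / 1000
print(f"LPDDR5X-7500 peak: {peak_bw_gbps:.1f} GB/s")      # -> 120.0 GB/s

# For comparison, the conventional configurations mentioned above
# (same assumed 128-bit / dual-channel width):
print(5200 * 128 / 8 / 1000)   # DDR5-5200   -> ~83.2 GB/s
print(6400 * 128 / 8 / 1000)   # LPDDR5-6400 -> ~102.4 GB/s
```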

Nevertheless, this approach comes with trade-offs, such as the potential for system-wide failure if a memory chip malfunctions, limited upgradeability in soldered-down configurations, and the need for more advanced cooling solutions to manage combined CPU and memory heat. While Apple pioneered on-package LPDDR memory integration in client CPUs, Intel has a history of using package-on-package DRAM with its Atom-branded CPUs for tablets and ultrathin laptops. While this approach simplifies manufacturing and enables slimmer notebook designs, it curtails configuration flexibility. We have yet to see whether big laptop makers such as Dell, HP, and Asus adopt this design in the coming months.

Intel "Emerald Rapids" Doubles Down on On-die Caches, Divests on Chiplets

Finding itself embattled with AMD's EPYC "Genoa" processors, Intel is giving its 4th Gen Xeon Scalable "Sapphire Rapids" processor a rather quick succession in the form of the Xeon Scalable "Emerald Rapids," bound for Q4 2023 (about 8-10 months after its predecessor). The new processor shares the same LGA4677 platform and infrastructure, and much of the same I/O, but brings two key design changes that should help Intel shore up per-core performance, making it more competitive with higher core-count EPYC "Zen 4" processors. SemiAnalysis compiled a nice overview of the changes, the two broadest points being: 1) Intel is pulling back from the chiplet approach to high core-count CPUs, and 2) it wants to give the memory subsystem and inter-core performance a massive boost using larger on-die caches.

The "Emerald Rapids" processor has just two large dies in its extreme core-count (XCC) avatar, compared to "Sapphire Rapids," which can have up to four of these. There are just three EMIB dies interconnecting these two, compared to "Sapphire Rapids," which needs as many as 10 of these to ensure direct paths among the four dies. The CPU core count itself doesn't see a notable increase. Each of the two dies on "Emerald Rapids" physically features 33 CPU cores, so a total of 66 are physically present, although one core per die is left unused for harvesting, the SemiAnalysis article notes. So the maximum core-count possible commercially is 32 cores per die, or 64 cores per socket. "Emerald Rapids" continues to be based on the Intel 7 process (10 nm Enhanced SuperFin), probably with a few architectural improvements for higher clock-speeds.

Intel Launches New Xeon Workstation Processors - the Ultimate Solution for Professionals

Intel today announced the new Intel Xeon W-3400 and Intel Xeon W-2400 desktop workstation processors (code-named Sapphire Rapids), led by the Intel Xeon w9-3495X, Intel's most powerful desktop workstation processor ever designed. Built for professional creators, these new Xeon processors provide massive performance for media and entertainment, engineering and data science professionals. With a breakthrough new compute architecture, faster cores and new embedded multi-die interconnect bridge (EMIB) packaging, the Xeon W-3400 and Xeon W-2400 series of processors enable unprecedented scalability for increased performance.

"For more than 20 years, Intel has been committed to delivering the highest quality workstation platforms - combining high-performance compute and rock-solid stability - for professional PC users across the globe. Our new Intel Xeon desktop workstation platform is uniquely designed to unleash the innovation and creativity of professional creators, artists, engineers, designers, data scientists and power users - built to tackle both today's most demanding workloads as well as the professional workloads of the future." -Roger Chandler, Intel vice president and general manager, Creator and Workstation Solutions, Client Computing Group

Intel Introduces the Max Series Product Family: Ponte Vecchio and Sapphire Rapids

In advance of Supercomputing '22 in Dallas, Intel Corporation has introduced the Intel Max Series product family with two leading-edge products for high performance computing (HPC) and artificial intelligence (AI): Intel Xeon CPU Max Series (code-named Sapphire Rapids HBM) and Intel Data Center GPU Max Series (code-named Ponte Vecchio). The new products will power the upcoming Aurora supercomputer at Argonne National Laboratory, with updates on its deployment shared today.

The Xeon Max CPU is the first and only x86-based processor with high bandwidth memory, accelerating many HPC workloads without the need for code changes. The Max Series GPU is Intel's highest density processor, packing over 100 billion transistors into a 47-tile package with up to 128 gigabytes (GB) of high bandwidth memory. The oneAPI open software ecosystem provides a single programming environment for both new processors. Intel's 2023 oneAPI and AI tools will deliver capabilities to enable the Intel Max Series products' advanced features.

Intel Details Ponte Vecchio Accelerator: 63 Tiles, 600 Watt TDP, and Lots of Bandwidth

During the International Solid-State Circuits Conference (ISSCC) 2022, Intel gave us a more detailed look at its upcoming Ponte Vecchio HPC accelerator and how it operates. Until now, Intel had described Ponte Vecchio as 47 tiles glued together in one package. However, the ISSCC presentation shows that the accelerator is structured rather interestingly. There are 63 tiles in total: 16 are reserved for compute, eight are used for RAMBO cache, two are Foveros base tiles, two are Xe-Link tiles, eight are HBM2E tiles, and EMIB connections take up 11 tiles. That adds up to 47 active tiles; an additional 16 thermal tiles help manage the massive TDP of this accelerator.

What is interesting is that Intel gave away details of the RAMBO cache. This novel SRAM technology uses four banks of 3.75 MB each, for a total of 15 MB per tile. Each RAMBO tile is connected to the fabric with a 1.3 TB/s link, while compute tiles are connected to the fabric at 2.6 TB/s. With eight RAMBO cache tiles, that amounts to an additional 120 MB of SRAM. The base tile is a 646 mm² die manufactured on the Intel 7 process and contains 17 layers. It includes a memory controller, the Fully Integrated Voltage Regulators (FIVR), power management, a 16-lane PCIe 5.0 connection, and a CXL interface. The overall area of Ponte Vecchio is rather impressive: the 47 active tiles take up 2,330 mm², the total jumps to 3,100 mm² when the thermal dies are included, and the entire package is much larger still at 4,844 mm², connected to the system with 4,468 pins.
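
The tile and cache arithmetic from the ISSCC disclosure is easy to verify; the sketch below only tallies the figures quoted above and assumes nothing beyond them.

```python
# Ponte Vecchio tile tally from Intel's ISSCC 2022 disclosure.
active_tiles = {
    "compute": 16,
    "RAMBO cache": 8,
    "Foveros base": 2,
    "Xe-Link": 2,
    "HBM2E": 8,
    "EMIB": 11,
}
thermal_tiles = 16
print(sum(active_tiles.values()))                   # 47 active tiles
print(sum(active_tiles.values()) + thermal_tiles)   # 63 tiles in total

# RAMBO cache capacity: four 3.75 MB banks per tile, eight tiles.
per_tile_mb = 4 * 3.75            # 15 MB per tile
total_rambo_mb = 8 * per_tile_mb  # 120 MB across the accelerator
print(per_tile_mb, total_rambo_mb)
```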

Intel "Sapphire Rapids" Xeon 4-tile MCM Annotated

Intel Xeon Scalable "Sapphire Rapids" is an upcoming enterprise processor with a CPU core count of up to 60. This core-count is achieved using four dies inter-connected using EMIB. Locuza, who leads social media with logic die annotation, posted one for "Sapphire Rapids," based on a high-resolution die-shot revealed by Intel in its ISSCC 2022 presentation.

Each of the four dies in "Sapphire Rapids" is a fully-fledged multi-core processor in its own right, complete with CPU cores, integrated northbridge, memory and PCIe interfaces, and other platform I/O. What brings four of these together is the use of five EMIB bridges per die. This allows CPU cores of a die to transparantly access the I/O and memory controlled any of the other dies transparently. Logically, "Sapphire Rapids" isn't unlike AMD "Naples," which uses IFOP (Infinity Fabric over package) to inter-connect four 8-core "Zeppelin" dies, but the effort here appears to be to minimize the latency arising from an on-package interconnect, toward a high-bandwidth, low-latency one that uses silicon bridges with high-density microscopic wiring between them (akin to an interposer).

Intel Breakthroughs Propel Moore's Law Beyond 2025

In its relentless pursuit of Moore's Law, Intel is unveiling key packaging, transistor and quantum physics breakthroughs fundamental to advancing and accelerating computing well into the next decade. At IEEE International Electron Devices Meeting (IEDM) 2021, Intel outlined its path toward more than 10x interconnect density improvement in packaging with hybrid bonding, 30% to 50% area improvement in transistor scaling, major breakthroughs in new power and memory technologies, and new concepts in physics that may one day revolutionize computing.

"At Intel, the research and innovation necessary for advancing Moore's Law never stops. Our Components Research Group is sharing key research breakthroughs at IEDM 2021 in bringing revolutionary process and packaging technologies to meet the insatiable demand for powerful computing that our industry and society depend on. This is the result of our best scientists' and engineers' tireless work. They continue to be at the forefront of innovations for continuing Moore's Law," said Robert Chau, Intel Senior Fellow and general manager of Components Research.

Intel Xeon "Sapphire Rapids" Memory Detailed, Resembles AMD 1st Gen EPYC: Decentralized 8-Channel DDR5

Intel's upcoming Xeon "Sapphire Rapids" processor features a memory interface topology that closely resembles that of first-generation AMD EPYC "Rome," thanks to the multi-chip module design of the processor. Back in 2017, Intel's competing "Skylake-SP" Xeon processors were based on monolithic dies. Despite being spread across multiple memory controller tiles, the 6-channel DDR4 memory interface was depicted by Intel as an advantage over EPYC "Rome." AMD's first "Zen" based enterprise processor was a multi-chip module of four 14 nm, 8-core "Zeppelin" dies, each with a 2-channel DDR4 memory interface that added up to the processor's 8-channel I/O. Much like "Sapphire Rapids," a CPU core from any of the four dies had access to memory and I/O controlled by any other die, as the four were networked over the Infinity Fabric interconnect in a configuration that essentially resembled "4P on a stick."

With "Sapphire Rapids," Intel is taking a largely similar approach—it has four compute tiles (dies) instead of a monolithic die, which Intel says helps with scalability in both directions; and each of the four compute tiles has a 2-channel DDR5 or 1024-bit HBM memory interface, which add up to the processor's 8-channel DDR5 total I/O. Intel says that CPU cores from each tile has equal access to memory, last-level cache, and I/O controlled by another die. Inter-tile communication is handled by EMIB physical media (55 micron bump-pitch wiring). UPI 2.0 makes up the inter-socket interconnect. Each of the four compute tiles has 24 UPI 2.0 links that operate at 16 GT/s. Intel didn't detail how memory is presented to the operating system, or the NUMA hierarchy, however much of Intel's engineering effort appears to be focused on making this disjointed memory I/O work as if "Sapphire Rapids" were a monolithic die. The company claims "consistent low-latency, high cross-sectional bandwidth across the SoC."

Intel Expects New US Fab Investment to Cost $60 to $120 billion

In an interview with the Washington Post, Intel CEO Pat Gelsinger shared some details on the company's plans to expand its foundry operations in the US. As part of the company's IDM 2.0 plan, the company aims to construct a new cutting-edge fabrication complex that will cover both wafer manufacturing and advanced packaging technologies. While the final factory location still hasn't been disclosed, the company said it plans to build the complex in close proximity to universities - a way to facilitate the hiring of qualified personnel and, perhaps, to establish joint research and development. Intel expects this foundry complex to cost between $60 and $120 billion.
Intel CEO Pat Gelsinger: "We are looking broadly across the U.S. This would be a very large site, so six to eight fab modules, and at each of those fab modules, between $10 and $15 billion. It's a project over the next decade on the order of $100 billion of capital, 10,000 direct jobs. 100,000 jobs are created as a result of those 10,000, by our experience. So, essentially, we want to build a little city."

Intel Accelerates Packaging and Process Innovations

Intel Corporation today revealed one of the most detailed process and packaging technology roadmaps the company has ever provided, showcasing a series of foundational innovations that will power products through 2025 and beyond. In addition to announcing RibbonFET, its first new transistor architecture in more than a decade, and PowerVia, an industry-first new backside power delivery method, the company highlighted its planned swift adoption of next-generation extreme ultraviolet lithography (EUV), referred to as High Numerical Aperture (High NA) EUV. Intel is positioned to receive the first High NA EUV production tool in the industry.

"Building on Intel's unquestioned leadership in advanced packaging, we are accelerating our innovation roadmap to ensure we are on a clear path to process performance leadership by 2025," Intel CEO Pat Gelsinger said during the global "Intel Accelerated" webcast. "We are leveraging our unparalleled pipeline of innovation to deliver technology advances from the transistor up to the system level. Until the periodic table is exhausted, we will be relentless in our pursuit of Moore's Law and our path to innovate with the magic of silicon."

Intel Ponte Vecchio GPU to Be Liquid Cooled Inside OAM Form Factor

Intel's upcoming Ponte Vecchio graphics card is set to be the company's most powerful processor ever designed, and the chip is indeed looking like an engineering marvel. From Intel's previous teasers, we have learned that Ponte Vecchio is built from 47 "magical tiles," or 47 dies, each responsible for compute elements, RAMBO cache, Xe Links, or other functions. Today, we are getting a new piece of information from Igor's LAB regarding Ponte Vecchio and some of its design choices. For starters, the GPU will be a heterogeneous design made up of dies from many different nodes. Some parts of the GPU will be manufactured on Intel's 10 nm SuperFin and 7 nm technologies, while others will use TSMC's 7 nm and 5 nm nodes. The smaller and more efficient nodes will probably be used for the compute elements. Everything will be held together by Intel's EMIB and Foveros 3D packaging.

Next up, we have information that this massive Intel processor will put out around 600 Watts of heat, which is a lot to cool. That is why, in the leaked renders, we see that Intel envisions these processors being liquid-cooled, which would make cooling such a high heat output much easier and more efficient than air cooling. Another interesting detail is that Ponte Vecchio is designed to fit the OAM (OCP Accelerator Module) form factor, an alternative to the regular PCIe-based accelerators in data centers. OAM is used primarily by hyperscalers like Facebook, Amazon, and Google, so we imagine Intel already knows its customers before the product even hits the market.

Intel "Meteor Lake" a "Breakthrough Client Processor" Leveraging Foveros Packaging

Intel CEO Pat Gelsinger made the first official reference to the company's future-generation client processor, codenamed "Meteor Lake." The processor is slated for market release in 2023, with its compute tile taping out in Q2 2021. Launching alongside the "Granite Rapids" enterprise processor, "Meteor Lake" will be a multi-chip module leveraging Intel's Foveros chip-packaging technology.

Different components of the processor will be fabricated on different kinds of silicon fabrication nodes, and interconnected on the package using EMIB inter-die connections, or even silicon interposers. The compute tile is likely the tile containing the processor's CPU cores, and Intel confirmed a 7 nm-class foundry node for it. "Meteor Lake" will be a hybrid processor, much like the upcoming "Alder Lake," meaning that it will have two kinds of CPU cores, larger "high performance" cores that remain dormant when the machine is idling or dealing with lightweight workloads; and smaller "high efficiency" cores based on a low-power microarchitecture.

Alleged Intel Sapphire Rapids Xeon Processor Image Leaks, Dual-Die Madness Showcased

Today, thanks to ServeTheHome forum member "111alan," we have the first pictures of the alleged Intel Sapphire Rapids Xeon processor. Pictured is what appears to be a dual-die design, similar to the Cascade Lake-SP-based design that reached 56 cores and 112 threads using two dies. Sapphire Rapids is a 10 nm SuperFin design that allegedly also comes in a dual-die configuration. To host the processor, the motherboard needs an LGA4677 socket with 4,677 pins. The new LGA socket, along with the new 10 nm Sapphire Rapids Xeon processors, is set for delivery in 2021, when Intel is expected to launch the new processors and their respective platforms.

The processor pictured is clearly a dual-die design, meaning that Intel is using its multi-chip package (MCM) technology with EMIB silicon bridges to interconnect the dies. As a reminder, the new 10 nm Sapphire Rapids platform is supposed to bring many new features, such as a DDR5 memory controller paired with Intel's Data Streaming Accelerator (DSA), the brand-new PCIe 5.0 protocol with a 32 GT/s data transfer rate, and CXL 1.1 support for next-generation accelerators. The exact configuration of this processor is unknown; however, it is an engineering sample with a modest clock frequency of 2.0 GHz.
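
For context on the quoted 32 GT/s figure, PCIe 5.0 link bandwidth scales with lane count; the x16 width below is simply the most common accelerator configuration and is an assumption, not something the leak specifies.

```python
# PCIe 5.0 bandwidth per direction, assuming an x16 link (illustrative only).
gt_per_s = 32                      # PCIe 5.0 raw signaling rate per lane
encoding_efficiency = 128 / 130    # 128b/130b line encoding
lanes = 16                         # assumed link width

bw_per_lane_gbs = gt_per_s * encoding_efficiency / 8   # ~3.94 GB/s per lane
bw_x16_gbs = bw_per_lane_gbs * lanes                    # ~63 GB/s per direction
print(round(bw_per_lane_gbs, 2), round(bw_x16_gbs, 1))
```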

Another Nail on Intel Kaby Lake-G Coffin as AMD Pulls Graphics Driver Support

Kaby Lake-G was the result of one of the strangest collaborations in the industry - though that may not be a fair way of looking at it. It made total sense at the time: a product that combined the world's best CPU design with one of the foremost graphics architectures seemed a recipe for success. However, the Intel-AMD collaboration was an unexpected one, as these two rivals were never expected to see eye to eye in any meaningful way. Kaby Lake-G was revolutionary in how it combined AMD and Intel IP in an EMIB-enabled design, but it wasn't one built to last.

Now, after Intel announced an end to manufacturing and order intake for the product, the time has come for AMD to pull driver support. The company's latest Windows 10 version 2004-compatible drivers don't install on Kaby Lake-G powered systems, citing an unsupported hardware configuration. Tom's Hardware contacted Intel, who said they're working with AMD to bring back "Radeon graphics driver support to Intel NUC 8 Extreme Mini PCs (previously codenamed "Hades Canyon")." AMD, however, has yet to comment on the story.

Intel Takes Big Strides in Chip Packaging Tech

Intel's silicon fabrication technology edge over TSMC and Samsung may have buckled, but the company appears to have made big advances in chip packaging. We've known for some time about EMIB (embedded multi-die interconnect bridge), Intel's cost-effective alternative to full-fledged interposers, and Foveros heterogeneous multi-die packaging; but the company has apparently invented more forms of 3D chip stacking, as detailed in a WikiChip Fuse report. By leveraging ODI (omni-directional interconnect), an evolutionary next step to EMIB and Foveros, Intel is able to stack multiple chips above the fiberglass substrate, above each other, and inside indentations and cavities of the substrate.

ODI consists of EMIB-like silicon dies that enable high-density wiring between two dies (think a GPU and its memory stack, or an SoC and core-logic); and copper poles that serve as extensions of the bumps of silicon dies getting to the substrate. There are two types of ODI. Type-1 refers to an interconnect running between two top dies, with the ODI die sitting between them and the substrate at the point of the inter-die connection region; while copper poles compensate for the Z-height difference. In scenarios without copper poles, chip designers can opt for substrates with cavities (regions with fewer layers), where the ODI die can be slotted in. In type-2 ODI, the interconnect die sits completely under a top die, providing high-density wiring either between two regions of the same die, or between two dies. The two types can be mixed and matched to achieve extremely complex MCMs.

Intel Planning 14nm "Ozark Lake" 16-core Processor for Spring 2021

TechPowerUp has learned that Intel is planning to bring 16 cores onto the mainstream desktop platform by Spring 2021 by implementing a chip-design philosophy similar to AMD's: MCMs. The new "Ozark Lake" processor will pack up to 16 cores and 32 threads by decoupling the "core" and "uncore" components of a typical Intel mainstream processor.

Intel will leverage the additional fiberglass substrate floor space yielded by the new LGA1700 package to create a multi-chip module with two kinds of dies, the "core complex" and the "uncore complex." The core complex is a 14 nm die purely composed of CPU cores and an EMIB interconnect. There will be as many as 16 "Skylake" cores in a conventional ringbus layout, with a conventional cache hierarchy (256 KB L2$ and up to 2 MB/core L3$). The lack of uncore components, together with exclusive clock and voltage domains, will allow the CPU cores to attain Thermal Velocity Boost Pro speeds of up to 6.00 GHz, if not more.
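
Taking the rumored cache hierarchy at face value, the totals work out as follows; the 16-core count and per-core cache sizes are from the report above, and the rest is straightforward arithmetic.

```python
# Rumored "Ozark Lake" core-complex cache totals.
cores = 16
l2_per_core_kb = 256
l3_per_core_mb = 2

total_l2_mb = cores * l2_per_core_kb / 1024   # 4 MB of L2 across the complex
total_l3_mb = cores * l3_per_core_mb          # 32 MB of shared L3
print(total_l2_mb, total_l3_mb)
```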

Intel Announces New GPU Architecture and oneAPI for Unified Software Stack at SC19

At Supercomputing 2019, Intel unveiled its vision for extending its leadership in the convergence of high-performance computing (HPC) and artificial intelligence (AI) with new additions to its data-centric silicon portfolio and an ambitious new software initiative that represents a paradigm shift from today's single-architecture, single-vendor programming models.

Addressing the increasing use of heterogeneous architectures in high-performance computing, Intel expanded on its existing technology portfolio to move, store and process data more effectively by announcing a new category of discrete general-purpose GPUs optimized for AI and HPC convergence. Intel also launched the oneAPI industry initiative to deliver a unified and simplified programming model for application development across heterogenous processing architectures, including CPUs, GPUs, FPGAs and other accelerators. The launch of oneAPI represents millions of Intel engineering hours in software development and marks a game-changing evolution from today's limiting, proprietary programming approaches to an open standards-based model for cross-architecture developer engagement and innovation.

Intel Unveils World's Largest FPGA

Intel has today announced the Stratix 10 GX 10M - a field-programmable gate array (FPGA) built on 14 nm technology with an astonishing 43.3 billion transistors, making it the largest FPGA in the world and dethroning Xilinx's Virtex VU19P, the previous record holder with a "mere" 35 billion transistors. The Stratix 10 GX 10M is home to over 10.2 million logic cells housed inside two large dies, connected by Intel's own Embedded Multi-die Interconnect Bridge (EMIB).

The 10M model packs four additional dies besides the two for logic, also connected by EMIB, featuring 48 transceivers in total with a combined bandwidth of up to 4.5 Tb/s. If you are wondering about the bandwidth between the dies, then judging by EMIB's 25,920 connections, there is 6.5 Tb/s of inter-die bandwidth, meaning the components will not be starved for speed when transferring data. Additionally, there are 2,304 user I/O pins, allowing for some creative integration solutions that involve plenty of ports for development purposes.
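
Dividing the quoted aggregate figures out gives rough per-transceiver and per-connection rates. This is simple arithmetic on the numbers above, treated as averages; Intel has not published per-lane specs in this context.

```python
# Average rates implied by the quoted aggregate figures.
transceivers = 48
total_xcvr_bw_tbps = 4.5
emib_connections = 25920
total_emib_bw_tbps = 6.5

per_xcvr_gbps = total_xcvr_bw_tbps * 1000 / transceivers           # ~93.8 Gb/s each
per_connection_mbps = total_emib_bw_tbps * 1e6 / emib_connections  # ~250 Mb/s each
print(round(per_xcvr_gbps, 1), round(per_connection_mbps, 1))
```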

Intel Ships First 10nm Agilex FPGAs

Intel today announced that it has begun shipments of the first Intel Agilex field programmable gate arrays (FPGAs) to early access program customers. Participants in the early access program include Colorado Engineering Inc., Mantaro Networks, Microsoft and Silicom. These customers are using Agilex FPGAs to develop advanced solutions for networking, 5G and accelerated data analytics.

"The Intel Agilex FPGA product family leverages the breadth of Intel innovation and technology leadership, including architecture, packaging, process technology, developer tools and a fast path to power reduction with eASIC technology. These unmatched assets enable new levels of heterogeneous computing, system integration and processor connectivity and will be the first 10nm FPGA to provide cache-coherent and low latency connectivity to Intel Xeon processors with the upcoming Compute Express Link," said Dan McNamara, Intel senior vice president and general manager of the Networking and Custom Logic Group.

Intel Unveils New Tools in Its Advanced Chip Packaging Toolbox

What's New: This week at SEMICON West in San Francisco, Intel engineering leaders provided an update on Intel's advanced packaging capabilities and unveiled new building blocks, including innovative uses of EMIB and Foveros together and a new Omni-Directional Interconnect (ODI) technology. When combined with Intel's world-class process technologies, new packaging capabilities will unlock customer innovations and deliver the computing systems of tomorrow.

"Our vision is to develop leadership technology to connect chips and chiplets in a package to match the functionality of a monolithic system-on-chip. A heterogeneous approach gives our chip architects unprecedented flexibility to mix and match IP blocks and process technologies with various memory and I/O elements in new device form factors. Intel's vertically integrated structure provides an advantage in the era of heterogeneous integration, giving us an unmatched ability to co-optimize architecture, process and packaging to deliver leadership products." -Babak Sabi, Intel corporate vice president, Assembly and Test Technology Development.

Intel Switches Gears to 7nm Post 10nm, First Node Live in 2021

Intel's semiconductor manufacturing business has had a terrible past five years as it struggled to execute its 10-nanometer roadmap, forcing the company's processor designers to rehash the "Skylake" microarchitecture for five generations of Core processors, including the upcoming "Comet Lake." Its truly next-generation microarchitecture, codenamed "Ice Lake," which features a new CPU core design called "Sunny Cove," comes out toward the end of 2019, with desktop rollouts expected in 2020. It turns out that the 10 nm process it's designed for will have a rather short reign at Intel's fabs. Speaking at an investor summit on Wednesday, Intel put out a silicon fabrication roadmap that sees an accelerated rollout of its own 7 nm process.

When it goes live and is fit for mass production sometime in 2021, Intel's 7 nm process will be a staggering three years behind TSMC, which fired up its 7 nm node in 2018; AMD is already mass-producing CPUs and GPUs on that node. Unlike TSMC, Intel will implement EUV (extreme ultraviolet) lithography straightaway. TSMC began 7 nm with DUV (deep ultraviolet) in 2018, and its EUV node went live in March, while Samsung's 7 nm EUV node went up last October. Intel's roadmap doesn't show a direct leap from its current 10 nm node to 7 nm EUV, though: Intel will refine the 10 nm node to squeeze out energy efficiency, with a refreshed 10 nm+ node that goes live some time in 2020.

NVIDIA to Launch Efficiency-Oriented GeForce GTX 1050 Max-Q, Aims at Intel EMIB

NVIDIA, through the changelog of one of its Linux driver releases, may have spilled the beans on an as-yet-unannounced, unreleased product. The company's Max-Q variants of its graphics cards typically trade performance for power efficiency, placing the designs more optimally on the power/performance curve. The fact that NVIDIA is looking to bolster the efficiency of its GTX 1050 with a Max-Q design is likely aimed at competing with the performance level of the already announced Intel + AMD EMIB design, where an Intel CPU is paired with a discrete, Vega-based AMD GPU and its accompanying HBM2 memory stacks in a small, extremely power-efficient package (compared with current designs).

The folks at Notebookcheck expect the 1050 Max-Q to perform about 10 to 15 percent slower than the standard 1050 and 1050 Ti, respectively, with TDP likely ranging between 34 W and 46 W - NVIDIA is aiming at the same market that the AMD + Intel EMIB collaboration is going after (thin, light, adequate-performance solutions).