News Posts matching #Larrabee

Intel Switches Gears to 7nm Post 10nm, First Node Live in 2021

Intel's semiconductor manufacturing business has had a terrible past five years as it struggled to execute its 10 nanometer roadmap, forcing the company's processor designers to rehash the "Skylake" microarchitecture for five generations of Core processors, including the upcoming "Comet Lake." Its truly next-generation microarchitecture, codenamed "Ice Lake," which features a new CPU core design called "Sunny Cove," comes out toward the end of 2019, with desktop rollouts expected in 2020. It turns out that the 10 nm process it's designed for will have a rather short reign at Intel's fabs. Speaking at an investor summit on Wednesday, Intel laid out a silicon fabrication roadmap that sees an accelerated rollout of its own 7 nm process.

When it goes live and is fit for mass production sometime in 2021, Intel's 7 nm process will be a staggering three years behind TSMC, which fired up its 7 nm node in 2018. AMD is already mass-producing CPUs and GPUs on that node. Unlike TSMC, Intel will implement EUV (extreme ultraviolet) lithography straightaway: TSMC began 7 nm with DUV (deep ultraviolet) in 2018 and its EUV node went live in March, while Samsung's 7 nm EUV node went up last October. Intel's roadmap doesn't show a direct leap from its current 10 nm node to 7 nm EUV, though. Intel will first refine the 10 nm node to squeeze out more energy efficiency, with a refreshed 10 nm+ node that goes live sometime in 2020.

Intel is Giving Up on Xeon Phi - Eight More Models Declared End-of-Life

Intel's Xeon Phi lineup, which started out as Larrabee, never saw commercial success in the market, despite big promises from the blue giant that its programming model would be more productive for developers coming from x86. In the meantime, NVIDIA GPUs have taken over the world of supercomputing, with the latest-generation Volta decimating Intel's Xeon Phi offerings.

Intel's plan was to release a new generation of Xeon Phi, codenamed "Knights Hill," on a 10 nanometer process. However, constant delays in ramping up 10 nm, paired with generally low demand for Xeon Phi, forced the company to abandon the project. Now the company has announced that it is stopping production of eight currently shipping Xeon Phi models.

Raja Hires Larrabee Architect Tom Forsyth to Help With Intel GPU

A few months ago we reported that Raja Koduri had left AMD to work at Intel on their new discrete GPU project. It looks like he's building a strong team, the most recent addition being Tom Forsyth, one of the principal architects of Larrabee, Intel's first attempt at making an x86-based graphics processor. While Larrabee did not achieve its goal and is considered a failure by many, it brought some interesting improvements to the world, for example the 512-bit vector instructions that evolved into AVX-512, and it lives on under the Xeon Phi brand.
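
Larrabee's 512-bit vector ISA (LRBni) is the ancestor of today's AVX-512, which likewise processes sixteen single-precision lanes per instruction. As a minimal, illustrative sketch of what that width looks like from C (the function name is ours; it assumes a compiler flag like -mavx512f and an AVX-512F-capable CPU such as a Xeon Phi x200 or Skylake-SP part):

```c
/* Sketch: adding two float arrays 16 lanes at a time with AVX-512F
 * intrinsics. Build with e.g. `gcc -mavx512f avx512_add.c`. */
#include <immintrin.h>
#include <stddef.h>
#include <stdio.h>

void add_f32(const float *a, const float *b, float *out, size_t n) {
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {              /* 512 bits = 16 floats */
        __m512 va = _mm512_loadu_ps(a + i);     /* unaligned loads */
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));
    }
    for (; i < n; i++)                          /* scalar remainder */
        out[i] = a[i] + b[i];
}

int main(void) {
    float a[32], b[32], c[32];
    for (int i = 0; i < 32; i++) { a[i] = (float)i; b[i] = 1.0f; }
    add_f32(a, b, c, 32);
    printf("c[31] = %.1f\n", c[31]);            /* expect 32.0 */
    return 0;
}
```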

Tom, who has previously worked at Oculus, Valve, and 3DLabs, posted on Twitter that he's joining Intel in Raja's group, but that he's "not entirely sure what he'll be working on just yet." At Oculus and Valve he worked on Virtual Reality projects; for example, he wrote big chunks of the Team Fortress 2 VR support for the Oculus Rift. A look at Tom's published work suggests that he might join the Intel team as lead for VR-related projects, as that's without a doubt one of Raja's favorite topics to talk about.

Intel Unveils Discrete GPU Prototype Development

Intel is making progress in its development of a new discrete GPU architecture, after its failed attempt with "Larrabee," which ended up as an HPC accelerator, and ancient attempts such as the i740. This comes in the wake of the company's high-profile hiring of Raja Koduri, AMD's former Radeon Technologies Group (RTG) head. At the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, the company unveiled slides pointing to the direction in which its GPU development is headed. That direction is essentially scaling up its existing iGPU architecture and bolstering it with mechanisms to better sustain high clock speeds.

The company's first 14 nm dGPU prototype, shown as a test chip at the ISSCC, is a 2-chip solution. The first chip contains two key components, the GPU itself and a system agent; the second chip is an FPGA that interfaces with the system bus. The GPU component, as it stands now, is based on Intel's Gen 9 architecture and features three execution unit (EU) clusters. Don't derive numbers from this yet, as Intel is only trying to demonstrate a proof of concept. The three clusters are wired to a sophisticated management mechanism that adjusts the power and clock speed of each individual EU. There's also a double-clock mechanism that pushes boost-state clock speeds beyond what today's Gen 9 EUs can sustain on Intel iGPUs. Once a suitable level of energy efficiency is achieved, Intel will move to newer generations of EUs and scale up EU counts, taking advantage of newer fab processes, to develop bigger discrete GPUs.

Intel Unveils New Product Plans for High-Performance Computing

During the International Supercomputing Conference (ISC), Intel Corporation announced plans to deliver new products based on the Intel Many Integrated Core (MIC) architecture that will create platforms running at trillions of calculations per second, while also retaining the benefits of standard Intel processors.

Targeting high-performance computing segments such as exploration, scientific research, and financial or climate simulation, the first product, codenamed "Knights Corner," will be made on Intel's 22-nanometer (nm) manufacturing process - using transistor structures as small as 22 billionths of a meter - and will leverage Moore's Law to scale to more than 50 Intel processing cores on a single chip. While the vast majority of workloads will still run best on award-winning Intel Xeon processors, the Intel MIC architecture will help accelerate select highly parallel applications.
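
For context on the "benefits of standard Intel processors" pitch: MIC parts were programmed largely with ordinary C/C++ and OpenMP, plus an offload pragma supported by Intel's own compiler. The snippet below is a minimal sketch of that model as it later shipped for Knights Corner; the array sizes and printed output are illustrative, not from Intel's materials:

```c
/* Sketch of Intel's offload model for MIC coprocessors (Knights
 * Corner era). Builds with the Intel compiler (e.g. icc -qopenmp);
 * on a machine without a coprocessor the block runs on the host. */
#include <stdio.h>

#define N 1024

int main(void) {
    static float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f; }

    /* Ship a and b to the coprocessor, run the loop across its
     * cores with plain OpenMP, and copy c back to the host. */
    #pragma offload target(mic) in(a, b) out(c)
    {
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] * b[i];
    }

    printf("c[10] = %.1f\n", c[10]);  /* expect 20.0 */
    return 0;
}
```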

Intel Shelves Larrabee as a GPU Architecture

Intel has once again shelved plans to come back to the discrete graphics market with the much-talked-about GPU codenamed Larrabee. In a recent post on the company blog, Bill Kircos, Director of Product and Technology Media Relations, detailed that the company's priorities at the moment lie with releasing industry-leading processors that have the graphics-processing horsepower for everyday computing. Intel HD Graphics will only get better with the 2011 series of Core processors based on the Sandy Bridge architecture, where the iGPU core will be completely integrated with the processing complex.

An unexpected yield of the Larrabee program seems to be that Intel has now perfected many-core processors. Since Larrabee essentially is a many-core processor with over 32 IA x86 cores that handle graphics workloads, it could well give Intel a CPU that is a godsend for heavy-duty HPC applications. Intel has already demonstrated a derivative of this architecture and is looking to induct it into its Xeon series of enterprise processors.

Intel Larrabee Fails as a GPU Project

Intel's ambitious attempt at building a discrete GPU has been shelved, as reports emerge that the silicon's first implementation will not launch as a GPU, but rather as a "software development platform for internal and external use." In a statement issued to Internetnews.com, Intel spokesperson Nick Knupffer explained the company's current position on Larrabee, saying that development of Larrabee's silicon (the chip) and software were behind schedule. "Larrabee silicon and software development are behind where we hoped to be at this point in the project," he said. "As a result, our first Larrabee product will not be launched as a stand-alone, discrete graphics product, rather it will be used as a software development platform for internal and external use."

Larrabee as a discrete GPU made a lot of news in its short public life, and it was given much credibility as it came from Intel, an IT industry heavyweight. At last month's SC'09 show, Intel demonstrated a Larrabee-based product, including an actual product design of the "Larrabee card." The company seemed to avoid calling it a discrete GPU, describing it instead as a "computational co-processor for the Intel Xeon and Core families." That description is reasonable, since by design Larrabee is a many-core processor that uses 32 IA cores interconnected by caches. At SC'09, Intel demonstrated its computational power, which peaked at over 1 TFLOP/s, though only after overclocking the chip.
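
As a back-of-the-envelope sanity check on that figure (our arithmetic, not Intel's published demo details): each of the 32 IA cores carries a 512-bit vector unit, or 16 single-precision lanes, and a fused multiply-add counts as two operations, so 32 × 16 × 2 = 1,024 floating-point operations per clock. A core clock just over 1 GHz therefore works out to roughly 1 TFLOP/s, which squares with the overclock reportedly needed for the demonstration.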

Larrabee Only by 2010

Last week, Intel announced its Visual Computing Research Center at Saarland University in Saarbrücken, Germany. During its opening ceremony, details emerged about when Intel plans to commercially introduce Larrabee, the company's take on graphics processing using x86-based parallelism. The company categorically stated that one could expect Larrabee to be out only by early 2010.

"I would expect volume introduction of this product to be early next year," said Intel chief executive Paul Otellini. Until now, Larrabee was known to be introduced coarsely around the 2009-2010 time-frame. "We always said it would launch in the 2009/2010 timeframe," said Intel spokesperson Nick Knupffer in an email to PC Magazine. "We are narrowing that timeframe. Larrabee is healthy and in our labs right now. There will be multiple versions of Larrabee over time. We are not releasing additional details at this time," he added. In the same event, Intel displayed a company slide with a die-shot of Larrabee, revealing what looked like the x86 processing elements. Sections of the media were abuzz with inferences drawn on the die-shot, some saying that it featured as many as 32 processing elements.

Intel Larrabee Die Zoomed in

Intel chose the occasion of the opening ceremony for the Intel Visual Computing Institute at Saarland University in Germany to conduct a brief presentation about the visual computing technologies the company is currently working on. The focal point was Larrabee, the codename for Intel's upcoming "many-cores" processor that will play a significant role in an Intel visual computing foray that goes way beyond integrated graphics. The die-shot reveals an intricate network of what look like the much talked-about x86 processing elements that bring about computing parallelism in Larrabee. Another slide briefly describes where Intel sees performance demands heading, saying that demand grows near-exponentially with common dataset sizes.

Intel Displays Larrabee Wafer at IDF Beijing

Earlier this week, Intel conducted the Intel Developer Forum (IDF): Spring 2009 event in Beijing, China. Among several presentations on the architectural advancements of the company's products, including Nehalem and its scalable platforms, perhaps the most interesting was a brief talk on Larrabee by Pat Gelsinger, Senior Vice President and General Manager of Intel's Digital Enterprise Group. Larrabee is Intel's first "many cores" architecture designed to work as a graphics processor, and it will be thoroughly backed by Intel's low-level and high-level programming languages and tools.

French website Hardware.fr took a timely snap from a webcast of the event, showing Gelsinger holding a 300 mm wafer of Larrabee dice. The theory that Intel has working prototypes of the GPU deep inside its labs gains weight. Making use of current-generation manufacturing technologies, Intel is scaling up the performance of x86 processing elements, all 32+ of them. As you can faintly see from the wafer, Larrabee has a large die. It is reported that the first generation of Larrabee will be built on the 45 nm manufacturing process. Products based on the architecture may arrive by late 2009 or early 2010. With the company kicking off its 32 nm production later this year, Larrabee may be built on the newer process a little later.

Quake 4 Run with Ray-tracing Enabled on Intel Larrabee

Among the series of slides from the IDF event published by French website CanardPlus is a picture of Quake 4 running with ray-tracing enabled on Intel's upcoming GPU codenamed Larrabee. The slide shows the advantages of ray-tracing: accurate shadows, reflections, and an image that looks more natural than what conventional shaders can achieve.
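
The reason ray-tracing gets shadows right where rasterization leans on approximations such as shadow maps is simple: for every shaded point, the renderer casts a ray toward the light and checks whether anything blocks it. Below is a minimal sketch of that shadow test with a single sphere as the occluder; the scene and names are purely illustrative, not anything from Intel's demo:

```c
/* Minimal shadow test from a ray tracer: a point is lit only if the
 * ray from it to the light reaches the light unblocked.
 * Build with e.g. `gcc shadow.c -lm`. */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { return (Vec3){a.x-b.x, a.y-b.y, a.z-b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

typedef struct { Vec3 center; double radius; } Sphere;

/* Does the ray origin + t*dir, with t in (0, t_max), hit the sphere? */
static int intersects(Sphere s, Vec3 origin, Vec3 dir, double t_max) {
    Vec3 oc = sub(origin, s.center);
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - s.radius * s.radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return 0;                    /* ray misses sphere */
    double t = (-b - sqrt(disc)) / (2.0 * a);    /* nearest hit */
    return t > 1e-6 && t < t_max;                /* epsilon avoids self-hit */
}

int main(void) {
    Sphere occluder = {{0, 1, 0}, 0.5};          /* sphere between them */
    Vec3 point = {0, 0, 0};                      /* surface point to shade */
    Vec3 light = {0, 3, 0};                      /* point light */
    Vec3 to_light = sub(light, point);
    /* t_max = 1 means "stop the test at the light itself". */
    int shadowed = intersects(occluder, point, to_light, 1.0);
    printf("point is %s\n", shadowed ? "in shadow" : "lit");
    return 0;
}
```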