Friday, April 10th 2009
Intel Displays Larrabee Wafer at IDF Beijing
Earlier this week, Intel held the Intel Developer Forum (IDF) Spring 2009 event in Beijing, China. Among several presentations on the architectural advancements of the company's products, which include Nehalem and its scalable platforms, perhaps the most interesting was a brief talk on Larrabee by Pat Gelsinger, Senior Vice President and General Manager of Intel's Digital Enterprise Group. Larrabee is Intel's first "many cores" architecture, designed to work as a graphics processor. The architecture will be thoroughly backed by low-level and high-level programming languages and tools from Intel.
French website Hardware.fr took a timely snapshot from a webcast of the event, showing Gelsinger holding a 300 mm wafer of Larrabee dice. The theory that Intel has working prototypes of the GPU deep inside its labs gains weight. Making use of current-generation manufacturing technologies, Intel is scaling the performance of x86 processing elements, all 32+ of them. As you can faintly see from the wafer, Larrabee has a large die. It is reported that the first generation of Larrabee will be built on the 45 nm manufacturing process. Products based on the architecture may arrive by late 2009 or early 2010. With the company kicking off its 32 nm production later this year, Larrabee may move to the newer process a little later.
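To put the "many cores" idea in perspective, here is a rough, purely illustrative sketch (Larrabee's actual programming model and vector extensions have not been detailed by Intel) of how data-parallel graphics work could be split across 32+ x86 cores using nothing more than standard threaded C++:

#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

// Illustrative only: "shade" a framebuffer by splitting rows across worker
// threads, the way work might be spread over many x86 cores.
void shade_rows(std::vector<uint32_t>& fb, int width, int y0, int y1) {
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < width; ++x)
            fb[y * width + x] = (x ^ y) & 0xFF;   // placeholder "shader"
}

int main() {
    const int width = 1024, height = 1024, cores = 32;   // 32+ cores, per the report
    std::vector<uint32_t> framebuffer(width * height);
    std::vector<std::thread> workers;
    for (int i = 0; i < cores; ++i) {
        int y0 = height * i / cores, y1 = height * (i + 1) / cores;
        workers.emplace_back(shade_rows, std::ref(framebuffer), width, y0, y1);
    }
    for (auto& t : workers) t.join();
}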
Source:
Hardware.fr
62 Comments on Intel Displays Larrabee Wafer at IDF Beijing
I think it is CPU+GPU
Larrabee is a fully programmable FLOP powerhouse. Think of it this way...
AMD = GPU -> GPGPU (Stream)
NVIDIA = GPU -> GPGPU (CUDA)
Intel = CPU (Core 2) -> SMP (Larrabee)
In essence, Intel started with a CPU and looked at what it would take to make a good graphics card. AMD/NVIDIA took a GPU and tried to inject CPU-like code (not quite, but similar) into it.
What is particularly exciting is that, because of Nehalem's architecture, I wouldn't be surprised if Nehalem could borrow a few of those cores from a Larrabee card to offload some of the FPU burden. The sky really is the limit with Larrabee.
Unlike CUDA, which requires a separate development environment, Larrabee is x86 and can be developed for with very little additional learning, using existing x86 IDEs and compilers.
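To give a rough idea of what that means in practice (just an illustration, not Larrabee's actual API, which hasn't been published), a data-parallel kernel of the kind CUDA is used for can be written as plain C++ and built by any ordinary x86 compiler:

#include <cstddef>
#include <vector>

// saxpy (y = a*x + y): the kind of data-parallel kernel CUDA targets, written
// here as plain x86 C++. g++, clang++ or MSVC compile it directly; no separate
// toolchain or kernel-launch syntax is needed.
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = a * x[i] + y[i];
}

int main() {
    std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
    saxpy(3.0f, x, y);   // an ordinary x86 compiler can auto-vectorize this loop
}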
Just like you take an Intel C2D E8400 chip and it gets placed in a Socket 775 board (not sure what you call it) so you can put it in the mobo.
There's a lot of reading out there you can do about it if you want to know more. From what I've been reading, Larrabee (if successful) will reshape how graphics are drawn and programmed in games and any other application. It could basically put an end to the dedicated graphics market. It has astounding potential; we just need to see what adoption will be like once it's released. Heck, there isn't even solid information about how it works yet, just that it could make previous graphics APIs like OpenGL and DirectX obsolete. AFAIK it will advance graphics rendering for everybody, and at the same time it could resurrect software rendering for 3D (doesn't sound like a good thing, right? Read up on it; it's a good thing if it happens).
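For anyone wondering what "software rendering" even looks like, here's a bare-bones toy example of my own (nothing to do with Larrabee's actual renderer): rasterizing a single triangle entirely on the CPU by testing every pixel against the triangle's edges:

#include <cstdint>
#include <vector>

// Edge function: positive when point (px,py) is on the inner side of edge a->b.
static int edge(int ax, int ay, int bx, int by, int px, int py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

int main() {
    const int w = 256, h = 256;
    std::vector<uint32_t> fb(w * h, 0);
    // One hard-coded triangle.
    int x0 = 30, y0 = 20, x1 = 220, y1 = 60, x2 = 100, y2 = 230;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            // A pixel is inside when all three edge tests agree.
            bool inside = edge(x0, y0, x1, y1, x, y) >= 0 &&
                          edge(x1, y1, x2, y2, x, y) >= 0 &&
                          edge(x2, y2, x0, y0, x, y) >= 0;
            if (inside) fb[y * w + x] = 0xFFFFFFFF;  // white pixel
        }
}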
I am just a regular consumer who likes to know the stuff he buys. Many of the comments here are absolutely dumb, even by my standards.
Larrabee is a GPU which happens to be a recycled CPU design (Pentium MMX, I guess), modified to be used for graphics.
It will address the dedicated graphics card market.
And there is some mumbo jumbo about how it may be a different approach in the field.