News Posts matching #FPGA


Intel Ships First 10nm Agilex FPGAs

Intel today announced that it has begun shipments of the first Intel Agilex field programmable gate arrays (FPGAs) to early access program customers. Participants in the early access program include Colorado Engineering Inc., Mantaro Networks, Microsoft and Silicom. These customers are using Agilex FPGAs to develop advanced solutions for networking, 5G and accelerated data analytics.

"The Intel Agilex FPGA product family leverages the breadth of Intel innovation and technology leadership, including architecture, packaging, process technology, developer tools and a fast path to power reduction with eASIC technology. These unmatched assets enable new levels of heterogeneous computing, system integration and processor connectivity and will be the first 10nm FPGA to provide cache-coherent and low latency connectivity to Intel Xeon processors with the upcoming Compute Express Link," said Dan McNamara, Intel senior vice president and general manager of the Networking and Custom Logic Group.

Xilinx Announces Virtex UltraScale+, the World's Largest FPGA

Xilinx, Inc., the leader in adaptive and intelligent computing, today announced the expansion of its 16 nanometer (nm) Virtex UltraScale+ family to now include the world's largest FPGA — the Virtex UltraScale+ VU19P. With 35 billion transistors, the VU19P provides the highest logic density and I/O count on a single device ever built, enabling emulation and prototyping of tomorrow's most advanced ASIC and SoC technologies, as well as test, measurement, compute, networking, aerospace and defense-related applications.

The VU19P sets a new standard in FPGAs, featuring 9 million system logic cells, up to 1.5 terabits per second of DDR4 memory bandwidth, up to 4.5 terabits per second of transceiver bandwidth, and over 2,000 user I/Os. It enables the prototyping and emulation of today's most complex SoCs as well as the development of emerging, complex algorithms such as those used for artificial intelligence, machine learning, video processing and sensor fusion. The VU19P is 1.6X larger than its predecessor, the 20 nm Virtex UltraScale 440, which was previously the industry's largest FPGA.
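For a rough sense of scale, the quoted figures can be cross-checked with a little arithmetic. The sketch below assumes the 1.6X comparison refers to system logic cells, which Xilinx does not spell out in this announcement, so treat the implied VU440 figure as an estimate.

```python
# Back-of-the-envelope check of the VU19P scale claims (illustrative only).
vu19p_logic_cells = 9_000_000          # system logic cells quoted by Xilinx
scale_factor = 1.6                     # "1.6X larger than its predecessor"

# Implied size of the previous largest FPGA (Virtex UltraScale 440),
# assuming the 1.6X comparison is by logic-cell count.
implied_vu440_cells = vu19p_logic_cells / scale_factor
print(f"Implied VU440 logic cells: {implied_vu440_cells:,.0f}")   # ~5.6 million

# Aggregate bandwidth quoted in the announcement, converted to GB/s.
ddr4_bandwidth_gbs = 1.5e12 / 8 / 1e9         # 1.5 Tb/s -> ~187 GB/s
transceiver_bandwidth_gbs = 4.5e12 / 8 / 1e9  # 4.5 Tb/s -> ~562 GB/s
print(f"DDR4 bandwidth: ~{ddr4_bandwidth_gbs:.0f} GB/s")
print(f"Transceiver bandwidth: ~{transceiver_bandwidth_gbs:.0f} GB/s")
```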

Intel's CEO Blames 10 nm Delay on being "Too Aggressive"

During Fortune's Brainstorm Tech conference in Aspen, Colorado, Intel's CEO Bob Swan took the stage to talk about the company: where Intel stands today, where it is headed, and how it plans to evolve. Particular focus was put on Intel's shift from "PC centric" to "data centric," and the struggles it encountered along the way.

However, when asked about the demise of Moore's Law, Swan pointed to how aggressively Intel approached the challenge. Instead of the customary doubling of transistor density every two years, Swan said Intel has always targeted even greater density improvements in order to stay the leader in the business.
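To put the "too aggressive" framing in perspective, here is a minimal sketch contrasting the conventional Moore's Law cadence with a more aggressive per-node target. The 2.7x figure is an assumption used purely for illustration, roughly matching the density jump Intel has publicly cited for its 10 nm node; it is not a number from this talk.

```python
# Illustrative comparison of density scaling per node transition.
conventional_scaling = 2.0   # "two times improvement ... every two years"
aggressive_scaling = 2.7     # assumed, more aggressive per-node target

nodes = 3  # compound over a few hypothetical node transitions
conventional_density = conventional_scaling ** nodes
aggressive_density = aggressive_scaling ** nodes

print(f"Conventional cadence after {nodes} nodes: {conventional_density:.1f}x density")
print(f"Aggressive cadence after {nodes} nodes:   {aggressive_density:.1f}x density")
# A ~35% higher per-node target compounds quickly, which is one way to read
# Swan's point about why the 10 nm transition proved harder than planned.
```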

Intel Sets Up New Network and Custom-logic Group

In recent conversations with Intel customers, two words kept coming up: disruption and opportunity. Disruption because almost every single executive I talk with has seen business disrupted in one way or another or is worried about keeping up with new technology trends and keeping a competitive edge. And opportunity because when these customers discuss their needs -- be it how to better leverage data, how to modernize their infrastructure for 5G or how to accelerate artificial intelligence (AI) and analytics workloads -- they realize the massive prospects in front of them.

To help our customers capitalize on the opportunities ahead, Intel has created a new organization that combines our network infrastructure organization with our programmable solutions organization under my leadership. This new organization is called the Network and Custom Logic Group.
Both original organizations executed on record design wins and revenues in 2018. Their merger allows Intel to bring maximum value to our customers by delivering unprecedented, seamless access to Intel's broad portfolio of products, from Intel Xeon processors and SoCs to FPGAs, eASICs, full-custom ASICs, software, IP, and complete systems and solutions across the cloud, enterprise, network, embedded and IoT markets. To that end, FPGA and custom silicon will continue to be important horizontal technologies. And this is just the beginning of a continuum of custom logic offerings spanning FPGA, eASIC and ASIC to support our customers' unique needs throughout their product life cycles. No other company in the world can offer that.

Intel Announces New Chief People Officer Sandra Rivera

Intel has announced that Sandra Rivera will take on a new role as the company's chief people officer and executive vice president, reporting to CEO Bob Swan. She will lead the human resources organization and serve as steward of Intel's culture evolution as it transforms to a data-centric company. Previously, Rivera was responsible for the Network Platforms Group, and served as Intel's 5G executive sponsor.

"Sandra is a role model for an Intel that is customer obsessed, collaborative and fearless while firmly grounded in trust, transparency and inclusivity. I am thrilled that Sandra will lead this critical part of our strategy to power a data-centric world," Swan said. "In a company driven by deep, technical talent, Sandra is an excellent technical leader who builds successful businesses by first building great teams. I am confident Sandra, as chief people officer, will help us accelerate our transformation and position our Intel team to play a bigger role in our customers' success."

Intel Reports First-Quarter 2019 Financial Results

Intel Corporation today reported first-quarter 2019 financial results. "Results for the first quarter were slightly higher than our January expectations. We shipped a strong mix of high performance products and continued spending discipline while ramping 10nm and managing a challenging NAND pricing environment. Looking ahead, we're taking a more cautious view of the year, although we expect market conditions to improve in the second half," said Bob Swan, Intel CEO. "Our team is focused on expanding our market opportunity, accelerating our innovation and improving execution while evolving our culture. We aim to capitalize on key technology inflections that set us up to play a larger role in our customers' success, while improving returns for our owners."

In the first quarter, the company generated approximately $5.0 billion in cash from operations, paid dividends of $1.4 billion and used $2.5 billion to repurchase 49 million shares of stock. In the first quarter, Intel achieved 4 percent growth in the PC-centric business while data-centric revenue declined 5 percent.

Intel Driving Data-Centric World with New 10nm Intel Agilex FPGA Family

Intel announced today a brand-new product family, the Intel Agilex FPGA. This new family of field programmable gate arrays (FPGA) will provide customized solutions to address the unique data-centric business challenges across embedded, network and data center markets. "The race to solve data-centric problems requires agile and flexible solutions that can move, store and process data efficiently. Intel Agilex FPGAs deliver customized connectivity and acceleration while delivering much needed improvements in performance and power for diverse workloads," said Dan McNamara, Intel senior vice president, Programmable Solutions Group.

Customers need solutions that can aggregate and process increasing amounts of data traffic to enable transformative applications in emerging, data-driven industries like edge computing, networking and cloud. Whether it's through edge analytics for low-latency processing, virtualized network functions to improve performance, or data center acceleration for greater efficiency, Intel Agilex FPGAs are built to deliver customized solutions for applications from the edge to the cloud. Advances in artificial intelligence (AI) analytics at the edge, network and the cloud are compelling hardware systems to cope with evolving standards, support varying AI workloads, and integrate multiple functions. Intel Agilex FPGAs provide the flexibility and agility required to meet these challenges and deliver gains in performance and power.

Intel Announces Broadest Product Portfolio for Moving, Storing, and Processing Data

Intel Tuesday unveiled a new portfolio of data-centric solutions consisting of 2nd-Generation Intel Xeon Scalable processors, Intel Optane DC memory and storage solutions, and software and platform technologies optimized to help its customers extract more value from their data. Intel's latest data center solutions target a wide range of use cases within cloud computing, network infrastructure and intelligent edge applications, and support high-growth workloads, including AI and 5G.

Building on more than 20 years of world-class data center platforms and deep customer collaboration, Intel's data center solutions target server, network, storage, internet of things (IoT) applications and workstations. The portfolio of products advances Intel's data-centric strategy to pursue a massive $300 billion data-driven market opportunity.

Intel Announces Next-Generation Acceleration Card to Deliver 5G

Today at Mobile World Congress (MWC) 2019, Intel announced the Intel FPGA Programmable Acceleration Card N3000 (Intel FPGA PAC N3000), designed for service providers to enable 5G next-generation core and virtualized radio access network solutions. The Intel FPGA PAC N3000 accelerates many virtualized workloads, ranging from 5G radio access networks to core network applications.

"As the mobile and telecommunications industry gears up for an explosion in internet protocol traffic and 5G rollouts, we designed the Intel FPGA PAC N3000 to provide the programmability and flexibility with the performance, power efficiency, density and system integration capabilities the market needs to fully support the capabilities of 5G networks," said Reynette Au, Intel vice president of marketing, Programmable Solutions Group.

Western Digital Delivers New SweRV Core RISC-V Processor

Western Digital Corp. today announced at the RISC-V Summit three new open-source innovations designed to support Western Digital's internal RISC-V development efforts and those of the growing RISC-V ecosystem. In his keynote address, Western Digital's Chief Technology Officer Martin Fink unveiled plans to release a new open source RISC-V core, an open standard initiative for cache coherent memory over a network and an open source RISC-V instruction set simulator.

These innovations are expected to accelerate development of new open, purpose-built compute architectures for Big Data and Fast Data environments. Western Digital has taken an active role in helping to advance the RISC-V ecosystem, including multiple related strategic investments and partnerships, and demonstrated progress toward its stated goal of transitioning one billion of the company's processor cores to the RISC-V architecture.

Micron and Achronix Deliver Next-Generation FPGAs Powered by GDDR6 Memory

Micron Technology, Inc., today announced that its GDDR6 memory, Micron's fastest and most powerful graphics memory, will be the high-performance memory of choice supporting Achronix's next-generation stand-alone FPGA products built on TSMC 7nm process technology. GDDR6 is optimized for a variety of demanding applications, including machine learning, that require multi-terabit memory bandwidth and will enable Achronix to offer FPGAs at less than half the cost of FPGAs with comparable memory solutions.

Achronix's high-performance FPGAs, combined with GDDR6 memory, are the industry's highest-bandwidth memory solution for accelerating machine learning workloads in data center and automotive applications.

This new joint solution addresses many of the inherent challenges in deep neural networks, including storing large data sets, weight parameters and activations in memory. The underlying hardware needs to store, process and rapidly move data between the processor and memory. In addition, it needs to be programmable to allow more efficient implementations for constantly changing machine learning algorithms. Achronix's next-generation FPGAs have been optimized to process machine learning workloads and currently are the only FPGAs that offer support for GDDR6 memory.
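To illustrate why memory bandwidth dominates these workloads, here is a rough, hedged estimate of the data a single inference pass has to move. The layer and model sizes are hypothetical round numbers chosen for illustration, not figures from Micron or Achronix.

```python
# Rough estimate of off-chip memory traffic for DNN inference (illustrative numbers only).
params = 25_000_000                 # hypothetical model with 25M weight parameters
activations_per_image = 40_000_000  # hypothetical activation values produced per image
bytes_per_value = 2                 # FP16 storage

batch = 32                          # weights are reused across a batch
weight_bytes = params * bytes_per_value
act_bytes = activations_per_image * bytes_per_value

traffic_per_image = act_bytes + weight_bytes / batch
throughput = 10_000                 # hypothetical target, images per second
bandwidth_gbs = traffic_per_image * throughput / 1e9

print(f"Approx. off-chip traffic per image: {traffic_per_image/1e6:.0f} MB")
print(f"Approx. bandwidth at {throughput} img/s: {bandwidth_gbs:.0f} GB/s")
# Even modest models at high throughput quickly reach hundreds of GB/s,
# which is where multi-terabit memory interfaces such as GDDR6 come in.
```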

Intel Announces "Forward-Looking" Architecture Event to be Held December 11th

Intel today announced to the press that it has scheduled an event for December 11th. The event will take the form of a small gathering of Intel and press professionals, where Intel will give insights into its thinking and technologies through in-depth presentations from the blue giant's technicians and engineers. Intel has become increasingly secretive about the inner workings and architectural details of its technology advances, going so far as to cancel the (previously annual) Intel Developer Forum.

The event apparently focuses on "architecture" considerations for future Intel products, so some of the information shared could be covered by NDAs, and it could touch on any product family Intel is working on (CPU, GPU, FPGA, AI...). We'll see what Intel has to share, and what kind of picture (even if only a watercolor sketch) can be painted of future Intel products.

Samsung Unveils 256-Gigabyte 3DS DDR4 RDIMM, Other Datacenter Innovations

Samsung Electronics, a world leader in advanced semiconductor technology, today announced several groundbreaking additions to its comprehensive semiconductor ecosystem that encompass next-generation technologies in foundry as well as NAND flash, SSD (solid state drive) and DRAM. Together, these developments mark a giant step forward for Samsung's semiconductor business.

"Samsung's technology leadership and product breadth are unparalleled," said JS Choi, President, Samsung Semiconductor, Inc. "Bringing 7 nm EUV into production is an incredible achievement. Also, the announcements of SmartSSD and 256GB 3DS RDIMM represent performance and capacity breakthroughs that will continue to push compute boundaries. Together, these additions to Samsung's comprehensive technology ecosystem will power the next generation of datacenters, high-performance computing (HPC), enterprise, artificial intelligence (AI) and emerging applications."

AMD and Xilinx Announce a New World Record for AI Inference

At today's Xilinx Developer Forum in San Jose, Calif., our CEO, Victor Peng, was joined by AMD CTO Mark Papermaster for a Guinness. But not the kind that comes in a pint - the kind that comes in a record book. The companies revealed that AMD and Xilinx have been jointly working to connect AMD EPYC CPUs and the new Xilinx Alveo line of acceleration cards for high-performance, real-time AI inference processing. To back it up, they revealed a world-record 30,000 images per second of inference throughput!

The impressive system, which will be featured in the Alveo ecosystem zone at XDF today, leverages two AMD EPYC 7551 server CPUs with their industry-leading PCIe connectivity, along with eight of the freshly announced Xilinx Alveo U250 acceleration cards. The inference performance is powered by Xilinx ML Suite, which allows developers to optimize and deploy accelerated inference and supports numerous machine learning frameworks such as TensorFlow. The benchmark was performed on GoogLeNet, a widely used convolutional neural network.
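A quick breakdown of the headline number, as a sketch: the only inputs are the figures quoted above, and the per-card split assumes the load is spread evenly across the eight Alveo U250 cards.

```python
# Split the record throughput across the hardware described above (illustrative).
total_images_per_second = 30_000
alveo_cards = 8

per_card = total_images_per_second / alveo_cards
print(f"~{per_card:,.0f} GoogLeNet images/s per Alveo U250")   # ~3,750

# Latency budget if images were processed strictly one at a time per card
# (real deployments pipeline and batch, so this is only an upper bound).
print(f"~{1e6 / per_card:.0f} microseconds per image per card")
```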

Intel Adds to Portfolio of FPGA Programmable Acceleration Cards

Intel today extended its field programmable gate array (FPGA) acceleration platform portfolio with the addition of the new Intel Programmable Acceleration Card (PAC) with Intel Stratix 10 SX FPGA, Intel's most powerful FPGA. This high-bandwidth card leverages the Acceleration Stack for Intel Xeon CPU with FPGAs, providing data center developers a robust platform to deploy FPGA-based accelerated workloads. Hewlett Packard Enterprise will be the first OEM to incorporate the Intel PAC with Stratix 10 SX FPGA along with the Intel Acceleration Stack for Intel Xeon Scalable processor with FPGAs into its server offering.

"We're seeing a growing market for FPGA-based accelerators, and with Intel's new FPGA solution, more developers - no matter their expertise - can adopt the tool and benefit from workload acceleration. We plan to use the Intel Stratix 10 PAC and acceleration stack in our offerings to enable customers to easily manage complex, emerging workloads," said Bill Mannel, vice president and general manager, HPC and AI Group, HPE.

Rollercoaster Monday for AMD as it Loses Jim Anderson, Closes Above $25 in Stock Price

It has been a rollercoaster Monday for AMD as it bled yet another bright executive. Jim Anderson, who led the Computing and Graphics Group after the departure of Raja Koduri, and who is rumored to have conceived the idea of Threadripper and the client-segment monetization of the "Zen" architecture, left AMD to become CEO of Lattice Semiconductor, a company that designs FPGAs. Anderson will be paid an inducement award of company shares valued at up to $2.9 million.

On the same day, AMD stock crossed $25 to close at $25.26, up 5.34 percent, its highest close since 2006, back when Intel was beginning to regain its footing with its Core processor family. This raises the company's market cap to $22.9 billion. AMD is better funded than it has been in over 12 years, enough to start a new GPU project, for example. CTO Mark Papermaster, in a company blog post, assured customers that AMD is going all-in on 7 nanometer, and that it could lean more heavily on TSMC to achieve its roadmap goal of first-to-market 7 nm CPUs and GPUs by the end of the year.

Intel to Acquire eASIC to Bolster FPGA Talent and Solutions

Intel is competing to win in the largest-ever addressable market for silicon, which is being driven by the explosion of data and the need to process, analyze, store and share it. This dynamic is fueling demand for computing solutions of all kinds. Of course Intel is known for world-class CPUs, but today we offer a broader range of custom computing solutions to help customers tackle all kinds of workloads - in the cloud, over the network and at the edge. In recent years, Intel has expanded its products and introduced breakthrough innovations in memory, modems, purpose-built ASICs, vision processing units and field programmable gate arrays (FPGAs).

FPGAs are experiencing expanding adoption due to their versatility and real-time performance. These devices can be programmed anytime - even after equipment has been shipped to customers. FPGAs contain a mixture of logic, memory and digital signal processing blocks that can implement any desired function with extremely high throughput and very low latency. This makes FPGAs ideal for many critical cloud and edge applications, and Intel's Programmable Solutions Group revenue has grown double digits as customers use FPGAs to accelerate artificial intelligence, among other applications.

Baidu Unveils 'Kunlun' High-Performance AI Chip

Baidu Inc. today announced Kunlun, China's first cloud-to-edge AI chip, built to accommodate the high performance requirements of a wide variety of AI scenarios. The announcement includes the training chip "818-300" and the inference chip "818-100". Kunlun can be applied to both cloud and edge scenarios, such as data centers, public clouds and autonomous vehicles.

Kunlun is a high-performance and cost-effective solution for the high processing demands of AI. It leverages Baidu's AI ecosystem, which includes AI scenarios like search ranking and deep learning frameworks like PaddlePaddle. Baidu's years of experience in optimizing the performance of these AI services and frameworks afforded the company the expertise required to build a world class AI chip.

Samsung Doubles its HBM2 Output, May Still Fall Short of Demand

Samsung has reportedly doubled its manufacturing output of HBM2 (high-bandwidth memory 2) stacks. Despite this, the company may still fall short of the demand for HBM2, according to HPC expert Glenn K. Lockwood, tweeting from ISC 2018, the annual HPC industry event held June 24th to 28th in Frankfurt. There, Samsung was talking up its 2nd-generation "Aquabolt" HBM2 memory, which is up to 8 times faster than GDDR5 and delivers up to 307 GB/s of bandwidth from a single stack.
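The 307 GB/s figure follows directly from the width and speed of an HBM2 stack. A minimal sketch, assuming Aquabolt's published 2.4 Gbps per-pin rate over a 1,024-bit interface, with a generic 8 Gbps GDDR5 chip used for the comparison (both are assumptions for illustration, not figures from this post).

```python
# Where the per-stack HBM2 bandwidth comes from (illustrative derivation).
pins = 1024                  # HBM2 stack interface width in bits
gbps_per_pin = 2.4           # assumed Aquabolt per-pin data rate

stack_bandwidth_gbs = pins * gbps_per_pin / 8
print(f"HBM2 stack: ~{stack_bandwidth_gbs:.1f} GB/s")     # ~307.2 GB/s

# Comparison against a single 32-bit GDDR5 chip at an assumed 8 Gbps per pin.
gddr5_bandwidth_gbs = 32 * 8 / 8
print(f"GDDR5 chip: ~{gddr5_bandwidth_gbs:.0f} GB/s")     # ~32 GB/s
print(f"Ratio: ~{stack_bandwidth_gbs / gddr5_bandwidth_gbs:.1f}x")
# The exact multiple depends on which GDDR5 part is chosen for the comparison,
# which is why marketing figures such as "up to 8x" vary.
```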

While HBM2 is uncommon on consumer graphics cards (barring AMD's flagship Radeon RX Vega series and NVIDIA's TITAN V), the memory type is in high demand for HPC accelerators, which are mostly GPU-based, such as the AMD Radeon Instinct series and NVIDIA Tesla. The HPC industry itself is riding the gold rush of AI research based on deep learning and neural nets. FPGAs, chips that can be reconfigured for specific applications, are the other class of devices soaking up HBM2 inventories. High demand, coupled with high DRAM prices, could mean HBM2 remains too expensive for mainstream client applications.

NVIDIA G-Sync HDR Module Adds $500 to Monitor Pricing

PCPer had the opportunity to disassemble the ASUS ROG Swift PG27UQ, a 27-inch 4K 144 Hz G-Sync HDR monitor, and found that the G-Sync module is a newer version than the one used on 1st-generation G-Sync monitors (which, of course, do not support 4K / 144 Hz / HDR). The module is powered by an FPGA made by Altera (Intel-owned since 2015). The exact model is the Arria 10 GX 480, a high-performance FPGA built on a 20-nanometer process that provides enough bandwidth and LVDS pins to process the data stream.
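To see why such a large FPGA is needed at all, it helps to estimate the raw pixel data rate of a 4K 144 Hz HDR stream. A back-of-the-envelope sketch, assuming 10 bits per color channel and ignoring blanking overhead:

```python
# Uncompressed data rate of the panel's video stream (rough estimate).
width, height = 3840, 2160
refresh_hz = 144
bits_per_pixel = 3 * 10      # RGB at 10 bits per channel for HDR (assumed)

gbps = width * height * refresh_hz * bits_per_pixel / 1e9
print(f"Active pixel data: ~{gbps:.1f} Gb/s")   # ~35.8 Gb/s

# For reference, DisplayPort 1.4 (HBR3) offers 32.4 Gb/s raw (~25.9 Gb/s effective),
# which is why these monitors resort to chroma subsampling at the highest refresh
# rates and why the G-Sync HDR module needs so much processing headroom.
```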

The FPGA is sold in low quantities for $2000 at Digikey and Mouser. Assuming that NVIDIA buys in the thousands, PCPer suggests that this chip alone adds around $500 to the monitor's cost. The BOM cost is further increased by 3 GB of DDR4 memory on the module. Add licensing fees for G-SYNC, and it becomes clear why these monitors are so expensive.

Say Hello to the Next Generation of Arduino Boards, Introducing FPGA Solutions

We're excited to kick off Maker Faire Bay Area by expanding our IoT lineup with two new boards: the MKR Vidor 4000 and the Uno WiFi Rev 2.

The MKR Vidor 4000 is the first-ever Arduino based on an FPGA chip, equipped with a SAM D21 microcontroller, a u-blox Nina W102 WiFi module, and an ECC508 crypto chip for secure connection to local networks and the Internet. MKR Vidor 4000 is the latest addition to the MKR family, designed for a wide range of IoT applications, with its distinctive form factor and substantial computational power for high performance. The board will be coupled with an innovative development environment, which aims to democratize and radically simplify access to the world of FPGAs.

Intel Stratix 10: Capable of 10 Trillion Calculations per Second

(Editor's Note: Intel says the Stratix 10 contains some 30 billion transistors, which it claims is more than triple the transistor count of today's fastest desktop CPUs. They would be the ones to know, since Intel stopped disclosing transistor counts for its CPUs some time ago. The amount of data these FPGAs can process in a single second is nothing short of mind-blowing, though: Intel says they can process the data equivalent of 420 Blu-ray Discs... in just one second. If that doesn't spell an unimaginable future in terms of processing power, I don't know what does.)

Because of the Intel Stratix 10's unique design, it can whip through calculations at blinding speeds - often 10 to 100 times faster than the chips in consumer devices. Intel Stratix 10 FPGAs - the latest version came out in February - are capable of 10 TFLOPS, or 10 trillion floating point operations per second. The Stratix 10 is the fastest chip of its kind in the world.
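For a sense of what "420 Blu-ray Discs per second" implies, here is a rough conversion. The 50 GB dual-layer capacity is an assumption, since Intel does not say which disc size it had in mind.

```python
# Unpacking the "420 Blu-ray Discs per second" claim (illustrative arithmetic).
discs_per_second = 420
gb_per_disc = 50             # assumed dual-layer Blu-ray; use 25 GB for single-layer

data_rate_tbs = discs_per_second * gb_per_disc / 1000
print(f"Implied data rate: ~{data_rate_tbs:.0f} TB/s")   # ~21 TB/s (or ~10.5 TB/s at 25 GB)

# The separately quoted compute figure:
tflops = 10
print(f"Quoted peak throughput: {tflops} trillion floating-point operations per second")
```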

Intel's Ice Lake Xeon Processor Details Leaked: LGA 4189, 8-Channel Memory

The Power Stamp Alliance (PSA) has posted some details on Intel's upcoming high-performance 10 nm architecture. Code-named Ice Lake, the Xeon parts of this design will apparently usher in yet another new socket (LGA 4189, compared to the LGA 3647 socket used for Skylake-SP and the upcoming Cascade Lake designs). TDP also increases with Intel's Ice Lake designs, to an "up to" 230 W figure, more than the Skylake or Cascade Lake-based platforms, which hints at higher core counts (and other features such as OmniPath or on-package FPGAs).

Digging a little deeper into the documentation released by the PSA shows Intel's Ice Lake natively supporting 8-channel memory as well, which makes sense considering the ever-growing demands on both memory capacity and actual throughput. More than an interesting, unexpected development, it's a sign of the times.
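To gauge what the move from 6 to 8 memory channels buys, a quick sketch follows. The DDR4-2933 speed grade is an assumption for illustration, since the PSA documents do not pin down supported memory speeds.

```python
# Peak theoretical bandwidth of a multi-channel DDR4 configuration (illustrative).
mt_per_second = 2933         # assumed DDR4-2933 speed grade
bytes_per_transfer = 8       # 64-bit channel

per_channel_gbs = mt_per_second * bytes_per_transfer / 1000
for channels in (6, 8):
    print(f"{channels} channels: ~{channels * per_channel_gbs:.0f} GB/s")
# 6 channels: ~141 GB/s (Skylake-SP class), 8 channels: ~188 GB/s
```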

Intel FPGAs Accelerate Artificial Intelligence for Deep Learning in Microsoft's Bing

Artificial intelligence (AI) is transforming industries and changing how data is managed, interpreted and, most importantly, used to solve real problems for people and businesses faster than ever.

Today's Microsoft Bing Intelligent Search news demonstrates how Intel FPGA (field programmable gate array) technology is powering some of the world's most advanced AI platforms. Advances to the Bing search engine with real-time AI will help people do more and learn more by going beyond standard search results. Bing Intelligent Search will provide answers instead of web pages, and enable a system that understands words and the meaning behind them, as well as the context and intent of a search.

Xilinx Unveils Their Revolutionary Adaptive Compute Acceleration Platform

Xilinx, Inc., the leader in adaptive and intelligent computing, today announced a new breakthrough product category called the adaptive compute acceleration platform (ACAP), which goes far beyond the capabilities of an FPGA. An ACAP is a highly integrated, multi-core heterogeneous compute platform that can be changed at the hardware level to adapt to the needs of a wide range of applications and workloads. An ACAP's adaptability, which can be exercised dynamically during operation, delivers levels of performance and performance per watt that are unmatched by CPUs or GPUs.

An ACAP is ideally suited to accelerate a broad set of applications in the emerging era of big data and artificial intelligence. These include: video transcoding, database, data compression, search, AI inference, genomics, machine vision, computational storage and network acceleration. Software and hardware developers will be able to design ACAP-based products for end point, edge and cloud applications. The first ACAP product family, codenamed "Everest," will be developed in TSMC 7nm process technology and will tape out later this year.