News Posts matching #Intel


Intel Brings AI Everywhere Across Network, Edge, Enterprise

At MWC 2024, Intel announced new platforms, solutions and services spanning network and edge AI, Intel Core Ultra processors and the AI PC, and more. In an era where technological advancements are integral to staying competitive, Intel is delivering products and solutions for its customers, partners and expansive ecosystem to capitalize on the emerging opportunities of artificial intelligence and built-in automation, to improve total cost of ownership (TCO) and operational efficiency, and to deliver new innovations and services.

Across today's announcements, Intel is focused on empowering the industry to further modernize and monetize 5G, edge and enterprise infrastructures and investments, and to take advantage of bringing AI Everywhere. For more than a decade, and alongside Intel's customers and partners, the company has been transforming today's network infrastructure from fixed-function to a software-defined platform and driving success at the edge with more than 90,000 real-world deployments.

Intel Optimizes PyTorch for Llama 2 on Arc A770, Higher Precision FP16

Intel just announced optimizations for PyTorch (IPEX) to take advantage of the AI acceleration features of its Arc "Alchemist" GPUs. PyTorch is a popular machine learning library that is often associated with NVIDIA GPUs, but it is actually platform-agnostic. It can be run on a variety of hardware, including CPUs and GPUs. However, performance may not be optimal without specific optimizations. Intel offers such optimizations through the Intel Extension for PyTorch (IPEX), which extends PyTorch with optimizations specifically designed for Intel's compute hardware.

Intel released a blog post detailing how to run Meta AI's Llama 2 large language model on its Arc "Alchemist" A770 graphics card. The model requires 14 GB of GPU RAM, so a 16 GB version of the A770 is recommended. This development could be seen as a direct response to NVIDIA's Chat with RTX tool, which allows GeForce users with RTX 30-series "Ampere" and RTX 40-series "Ada" GPUs packing at least 8 GB of VRAM to run PyTorch-LLM models on their graphics cards. NVIDIA achieves lower VRAM usage by distributing INT4-quantized versions of the models, while Intel uses a higher-precision FP16 version. In theory, this should not have a significant impact on the results. Intel's blog post provides step-by-step instructions on how to set up Llama 2 inference with PyTorch (IPEX) on the A770.
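For readers curious what the IPEX flow looks like in practice, here is a minimal sketch of FP16 text generation on an Arc GPU through PyTorch's "xpu" device. It assumes a working oneAPI/IPEX install and a downloaded Hugging Face checkpoint; the model ID, prompt, and generation parameters are illustrative placeholders rather than Intel's exact recipe.

```python
# Minimal sketch: Llama 2 FP16 inference on an Intel Arc GPU via IPEX (XPU backend).
# Assumes intel_extension_for_pytorch and the oneAPI runtime are installed, and that
# the Hugging Face model below has been downloaded/authorized beforehand (illustrative ID).
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device with PyTorch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # placeholder; the article targets Llama 2
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

model = model.eval().to("xpu")                     # move the FP16 weights to the Arc GPU
model = ipex.optimize(model, dtype=torch.float16)  # apply Intel's operator optimizations

prompt = "Explain what a GPU does in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to("xpu")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since the weights stay in FP16, a 13B-class Llama 2 checkpoint lands near the 14 GB figure quoted above, which is why Intel points buyers to the 16 GB A770.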

Intel CEO Discloses TSMC Production Details: N3 for Arrow Lake & N3B for Lunar Lake

Intel CEO Pat Gelsinger engaged with press/media representatives following the conclusion of his IFS Direct Connect 2024 keynote speech—when asked about Team Blue's ongoing relationship with TSMC, he confirmed that their manufacturing agreement has advanced from "5 nm to 3 nm." According to a China Times news article: "Gelsinger also confirmed the expansion of orders to TSMC, confirming that TSMC will hold orders for Intel's Arrow and Lunar Lake CPU, GPU, and NPU chips this year, and will produce them using the N3B process, officially ushering in the Intel notebook platform that the outside world has been waiting for many years." Past leaks have indicated that Intel's Arrow Lake processor family will have CPU tiles based on their in-house 20A process, while TSMC takes care of the GPU tile aspect with their 3 nm N3 process node.

That generation is expected to launch later this year—the now "officially confirmed" upgrade to 3 nm should produce pleasing performance and efficiency improvements. The current crop of Core Ultra "Meteor Lake" mobile processors has struggled with the latter, especially when compared to rivals. Lunar Lake is slated for a 2025 launch window, so some aspects of its internal workings remain a mystery—Gelsinger has confirmed that TSMC's N3B is in the picture, but no official source has disclosed the in-house manufacturing choice(s) for LNL chips. Wccftech believes that Lunar Lake will: "utilize the same P-Core (Lion Cove) and brand-new E-Core (Skymont) core architecture which are expected to be fabricated on the 20A node. But that might also be limited to the CPU tile. The GPU tile will be a significant upgrade over the Meteor Lake and Arrow Lake CPUs since Lunar Lake ditches Alchemist and goes for the next-gen graphics architecture codenamed "Battlemage" (AKA Xe2-LPG)." Late January whispers pointed to Intel and TSMC partnering up on a 2 nanometer process for the "Nova Lake" processor generation—perhaps a very distant prospect (2026).

Apple M2 Posts Single-Thread CPU-Z Bench Score Comparable to Intel Alder Lake

Apple's M-series chips frighten Intel, AMD, and Microsoft like nothing else can, as they have the potential to power MacBooks to grab a sizable share of the notebook market. This is based squarely on the phenomenal performance-per-watt of Apple's chips. A user installed Windows 11 on Arm in a virtual machine running on an M2-powered MacBook and opened CPU-Z (which, of course, doesn't detect the chip, since it's running in a VM). They then ran a CPU-Z Bench session for a surprising result—a single-threaded score of 749.5 points, and a multithreaded score of 3822.3 points.

The single-thread score in particular is comparable to Intel's 12th Gen Core "Alder Lake" chips (their "Golden Cove" P-cores); maybe not that of the fastest Core i9-12900K, but that of the mid-range Core i5 chips, such as the i5-12600. It's able to do this at a fraction of the power and heat output. It is on the back of this kind of IPC that Apple is building bigger chips such as the M3 Pro and M3 Max, which are able to provide HEDT- or workstation-class performance, again at a fraction of the power.

Intel's Desktop and Mobile "Arrow Lake" Chips Feature Different Versions of Xe-LPG

Toward the end of 2024, Intel will update its client processor product stack with the introduction of the new "Arrow Lake" microarchitecture targeting both the desktop and mobile segments. On the desktop side of things, this will herald the new Socket LGA1851, with more SoC connectivity being shifted to the processor; and on the mobile side of things, there will be a much-needed increase in CPU core counts from the current 6P+8E+2LP. This low maximum core count for "Meteor Lake" is the reason why Intel couldn't debut it on the desktop platform, and couldn't use it to power enthusiast HX-segment mobile processors, either—it had to tap into "Raptor Lake Refresh," and use the older 14th Gen Core nomenclature one last time.

All hopes are now pinned on "Arrow Lake," which could make up Intel's second Core Ultra mobile lineup and its first desktop Core Ultra lineup, possibly pushing "Meteor Lake" down to the non-Ultra tier. "Arrow Lake" carries forward the Xe-LPG graphics architecture for the iGPU that Intel debuted with "Meteor Lake," but there's a key difference between the desktop and mobile "Arrow Lake" chips concerning this iGPU, and it's not just a matter of Xe core counts. It turns out that while the desktop "Arrow Lake-S" processor comes with an iGPU based on the Xe-LPG graphics architecture, the mobile "Arrow Lake" chips spanning the U-, P-, and H-segments will use a newer version of this architecture, called Xe-LPG+.

ASUS IoT Announces All-New Industrial Motherboards and Edge AI Computers Based on Latest Intel Core (14th Gen) Processors

ASUS IoT, the global AIoT solution provider, today announced the launch of its new lineup of industrial motherboards and edge AI computers powered by the latest Intel Core (14th gen) processors. These cutting-edge solutions offer supreme computing performance, enhanced power efficiency, and accelerated connectivity, making them ideal for a wide range of industrial applications. One of the key features of these new solutions is the accelerated transfer speeds and enhanced power efficiency offered by DDR5 memory. Compared to DDR4, DDR5 memory provides 50% faster transfer speeds and 8% improved power efficiency, ensuring reliability with ECC technology.

ASUS IoT industrial motherboards also support PCI Express (PCIe) 5.0, which doubles the bandwidth of PCIe 4.0 while maintaining full backward compatibility for system flexibility. This allows for faster data transfer and expandability, meeting the demands of modern industrial applications. The integrated Intel UHD Graphics technology in ASUS IoT motherboards supports up to 8K60 HDR video and multiple 4K60 displays, providing vivid graphics and powerful AI acceleration. This makes them ideal for various applications, including retail, healthcare and AI in smart factories.

MSI Intel and AMD Motherboards Now Fully Support Up to 256GB of Memory Capacity

At the end of 2023, MSI unveiled its groundbreaking support for memory capacities of up to 256 GB. Now, both MSI Intel and AMD motherboards officially support these capacities, with 4 DIMMs enabling 256 GB and 2 DIMMs supporting 128 GB. This advancement enhances multitasking capabilities and ensures seamless computing operations.

Intel Motherboard - 700 & 600 Series Platform, BIOS Rolling Out
The supported platforms for this memory capacity enhancement include Intel 700 and 600 series DDR5 motherboards. Gamers looking to benefit from these enhancements will need to update to their board's own dedicated BIOS. MSI is currently working diligently on releasing the BIOS updates, with the first batch already available below. The rest of the models will follow in late February and March.

NVIDIA Expects Upcoming Blackwell GPU Generation to be Capacity-Constrained

NVIDIA is anticipating supply issues for its upcoming Blackwell GPUs, which are expected to significantly improve artificial intelligence compute performance. "We expect our next-generation products to be supply constrained as demand far exceeds supply," said Colette Kress, NVIDIA's chief financial officer, during a recent earnings call. This prediction of scarcity comes just days after an analyst noted much shorter lead times for NVIDIA's current flagship Hopper-based H100 GPUs tailored to AI and high-performance computing. The eagerly anticipated Blackwell architecture and B100 GPUs built on it promise major leaps in capability—likely spurring NVIDIA's existing customers to place pre-orders already. With skyrocketing demand in the red-hot AI compute market, NVIDIA appears poised to capitalize on the insatiable appetite for ever-greater processing power.

However, the scarcity of NVIDIA's products may present an excellent opportunity for significant rivals like AMD and Intel. If either company can offer a product that beats NVIDIA's current H100 and pair it with a suitable software stack, customers may be willing to jump to their offerings rather than wait many months through the anticipated lead times. Intel is preparing the next-generation Gaudi 3 and working on the Falcon Shores accelerator for AI and HPC. AMD is shipping its Instinct MI300 accelerator, a highly competitive product, while already working on the MI400 generation. It remains to be seen whether AI companies will begin adopting non-NVIDIA hardware, or whether they will remain loyal customers and accept the longer lead times of the new Blackwell generation. However, capacity constraints should only be a problem at launch, with availability improving from quarter to quarter. As TSMC improves CoWoS packaging capacity and 3 nm production, NVIDIA's allocation of 3 nm wafers will likely improve over time as the company moves its priority from H100 to B100.

Senao Networks Unveils SX904 SmartNIC with Embedded Xeon D to Process Network Stack

Senao Networks, a leading network solution provider, proudly announces the launch of its SX904 SmartNIC based on the Intel NetSec Accelerator Reference Design. This cutting-edge NIC, harnessing the power of PCIe Gen 4 technology and fueled by the Intel Xeon D processor, sets an unprecedented standard in high-performance network computing. Senao will showcase a system demonstration at the Intel booth during the upcoming MWC in Barcelona. Amid a transformative shift at the network edge, enterprises are increasingly leaning on scalable edge infrastructure. In catering to the demands of modern workloads for low latency, local data processing, and robust security, the SX904 marks a significant leap forward.

The combination of the Intel Xeon D processor, PCIe Gen 4 technology, dual 25 Gbps SFP28 support, and DDR4 ECC memory enables the SX904 to achieve unparalleled data transfer rates and maximum bandwidth utilization, ideal for modern server architectures. It provides higher performance from the latest Intel Xeon D processor and Intel Ethernet Controller E810, and supports the latest Intel Platform Firmware Resilience, BMC, and TPM 2.0. The SX904 enables the seamless offload of applications optimized for Intel architecture with zero code changes, effectively providing an Intel-based server in a PCIe add-in-card form factor.

US Commerce Chief: Nation Requires Additional Chip Funding

US Commerce Secretary Gina Raimondo was a notable guest speaker during yesterday's Intel Foundry Direct Connect keynote—she was invited on (via a video link) to discuss the matter of strengthening the nation's semiconductor industry and staying competitive with global rivals. During discussions, Pat Gelsinger (Intel CEO) cheekily asked whether a "CHIPS Act Part Two" was in the pipeline. Raimondo responded by stating that she is still busy with the original $52 billion tranche: "I'm out of breath running as fast as I can implementing CHIPS One." Earlier this week, her department revealed a $1.5 billion planned direct fund for GlobalFoundries: "this investment will enable GF to expand and create new manufacturing capacity and capabilities to securely produce more essential chips for automotive, IoT, aerospace, defense, and other vital markets."

Intel is set to receive a large grant courtesy of the US government's 2022-launched CHIPS and Science Act—exact figures have not been revealed to the public, but a Nikkei Asia report suggests that Team Blue will be benefiting significantly in the near future: "While the Commerce Department has not yet announced how much of the funding package's $52 billion it would grant Intel, the American chipmaker is expected to get a significant portion, according to analysts and officials close to the situation." Raimondo stated: "Intel is an American champion company and has a very huge role to play in this revitalization." The US Commerce Chief also revealed that she had spoken with artificial intelligence industry leaders, including OpenAI's Sam Altman, about the ever-growing demand for AI-crunching processors/accelerators/GPUs. The country's semiconductor production efforts could be bolstered once more, in order to preserve a competitive edge—Raimondo addressed Gelsinger's jokey request for another batch of subsidies: "I suspect there will have to be—whether you call it Chips Two or something else—continued investment if we want to lead the world...We fell pretty far. We took our eye off the ball."

Framework Reveals $499 B-stock Laptop 13 Barebones Configuration

We're happy to share that Framework Laptop 16s are now in customer hands. It's been an excellent journey over the last two years designing and building an ultra-upgradeable, high-performance machine, and we're excited to see the early feedback. As always with Framework products, the first shipment is just the beginning, and we're looking forward to continuing to deliver on longevity, upgradeability, and repairability as we go. We've seen more press reviews go live as well, including by far the most thorough one, a deep dive from Jarrod's Tech that includes both a broad range of benchmarks and a subjective evaluation of the overall experience. Framework Laptop 16 pre-orders are still open as we continue to manufacture our way through the pre-order batches. Most of our factory capacity, which we doubled last year, is now allocated to getting Framework Laptop 16s to you as quickly as we can.

We recently uploaded the first set of developer documentation around Framework Laptop 16 internals on GitHub, adding to the existing material we have for the Expansion Bay and Input Module systems. The new release includes drawings and connector part numbers for the Mainboard to enable re-use. We'll continue to build out this documentation over time, like we have for Framework Laptop 13.

Samsung's New Galaxy Book4 Series Available Globally Beginning February 26

Samsung Electronics today announced the Galaxy Book4 series will be available in selected markets starting February 26. The latest premium PC lineup from Samsung delivers intelligent and powerful experiences that bring together highly optimized performance, a vivid touchscreen display and enhanced connectivity. The Galaxy Book4 series, including the Galaxy Book4 Ultra, Galaxy Book4 Pro and Galaxy Book4 Pro 360, launched in Korea on January 2 and experienced record-breaking interest, outselling last year's Galaxy Book3 series by 1.5 times during the first week of sales.

"We're excited for users to experience the intelligence, connectivity and productivity made possible by the Galaxy Book4 series, taking our premium PC lineup to the next level," said TM Roh, President and Head of Mobile eXperience Business at Samsung Electronics. "The Galaxy Book4 series delivers the powerful performance and multi-device connectivity that consumers expect from a high-performance PC in today's market."

Intel Core i9-13900K and i7-13700K Gaming Stability Issues Linked to Power Limit Unlocks

Users of Intel's 13th Gen unlocked K-series processors, such as the Core i9-13900K and i7-13700K, are reporting stability issues when gaming, even at stock clock speeds. Hassan Mujtaba of Wccftech and Tom's Hardware have isolated the issues to power limit unlocks. Most Z690 and Z790 chipset motherboards include BIOS-level unlocks for the power limits, particularly the Maximum Turbo Power (interchangeable with PL2). By default, the i9-13900K and i7-13700K come with a PL2 value of 253 W, but you can get the motherboard to unlock this to unlimited, which basically tells the processor that it has 4096 W of power on tap, so it is not technically a "stock" configuration anymore.

Of course, neither your PSU nor your CPU VRM is capable of delivering 4096 W, so the processor tends to draw as much power as it needs to maintain the best possible P-core boost frequencies before running into thermal limits. At stock frequencies with stock boost bins, unlocked power limits can drive the power draw of the i9-13900K as high as 373 W under a multithreaded load in our testing, compared to 283 W with the power limits in place. It turns out that unlocking the power limits can come with long-term costs besides the literal cost of electricity—the processor's stability in gaming workloads can degrade with certain hardware combos and settings.
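For readers who want to check which limits their board has actually programmed, a minimal sketch follows. It reads the package power limits through Linux's intel-rapl powercap sysfs interface; the zone index and file permissions vary between systems (some files may need elevated privileges), and a BIOS readout or a tool like HWiNFO shows the same values on Windows.

```python
# Minimal sketch: read the package power limits (PL1 "long_term", PL2 "short_term")
# programmed by the firmware/board, via Linux's intel-rapl powercap interface.
# The rapl zone index and file availability vary by system.
from pathlib import Path

RAPL_PKG = Path("/sys/class/powercap/intel-rapl:0")  # package-0 zone on most systems

def read(path: Path) -> str:
    return path.read_text().strip()

print("Zone:", read(RAPL_PKG / "name"))  # typically "package-0"
for constraint in ("constraint_0", "constraint_1"):  # 0 = long term (PL1), 1 = short term (PL2)
    name = read(RAPL_PKG / f"{constraint}_name")
    limit_uw = int(read(RAPL_PKG / f"{constraint}_power_limit_uw"))
    print(f"{name}: {limit_uw / 1_000_000:.0f} W")
```

On a board with the limits unlocked, the short-term (PL2) constraint typically reads back as an effectively unlimited value in the region of the 4096 W figure mentioned above, rather than the stock 253 W.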

Intel to Make its Most Advanced Foundry Nodes Available even to AMD, NVIDIA, and Qualcomm

Intel CEO Pat Gelsinger, speaking at the Intel Foundry Services (IFS) Direct Connect event, confirmed to Tom's Hardware that he hopes to turn IFS into the West's premier foundry company, and a direct technological and volume rival to TSMC. He said that there is a clear line of distinction between Intel Products and Intel Foundry, and that later this year, IFS will become more legally distinct from Intel, becoming its own entity. The only way Gelsinger sees IFS being competitive with TSMC is by making its most advanced semiconductor manufacturing nodes and 3D chip packaging innovations available to foundry customers other than Intel Products itself, even if it means providing them to companies that directly compete with Intel products, such as AMD and Qualcomm.

Paul Alcorn of Tom's Hardware asked CEO Gelsinger: "Intel will now offer its process nodes to some of its competitors, and there may be situations wherein your product teams are competing directly with competitors that are enabled by your crown jewels. How do you plan to navigate those types of situations and maybe soothe ruffled feathers on your product teams?" Gelsinger responded: "Well, if you go back to the picture I showed today, Paul, there are Intel Products and Intel Foundry. There's a clean line between those, and as I said on the last earnings call, we'll have a separate legal entity set up for Intel Foundry this year. We'll start posting separate financials associated with that going forward. And the foundry team's objective is simple: Fill. The. Fabs. Deliver to the broadest set of customers on the planet."

Intel Introduces Advisory Committee at Intel Foundry Direct Connect

During his keynote address today at Intel Foundry Direct Connect, Intel's inaugural foundry event, CEO Pat Gelsinger introduced four members of the company's Foundry Advisory Committee. The committee advises Intel on its IDM 2.0 strategy, including creation and development of a thriving systems foundry for the AI era.
The advisory committee includes leaders from the semiconductor industry and academia, two of whom are also members of Intel's board of directors:
  • Chi-Foon Chan, former Co-CEO of Synopsys; former Microprocessor Group general manager at NEC; director at PDF Solutions.
  • Joe Kaeser, former CEO of Siemens; supervisory board chair at Siemens Energy and Daimler Truck; supervisory board member at Linde; former member of the board of NXP Semiconductors; member of the board of trustees at the World Economic Forum.
  • Tsu-Jae King Liu, vice chair of the Foundry Advisory Committee; dean of College of Engineering at the University of California, Berkeley; Intel director; and director at MaxLinear.
  • Lip-Bu Tan, chair of the Foundry Advisory Committee; former CEO of Cadence Design Systems; chairman of Walden International; Intel director; and director at Credo Technology Group and Schneider Electric.

Intel CEO Pat Gelsinger Receives 2024 Distinguished Executive Leadership Award from JEDEC Board

The JEDEC Board of Directors presented its prestigious 2024 Distinguished Executive Leadership Award to Intel CEO Pat Gelsinger in a ceremony held at Intel's offices in Santa Clara, California, earlier this month. This award stands as JEDEC's highest honor and recognizes the most distinguished senior executives in the electronics industry who promote and support the advancement of JEDEC standards.

"Throughout Pat Gelsinger's distinguished career, he has consistently championed the development of open standards, as evidenced by Intel's contributions in this domain. Under his leadership, Intel has made a tremendous impact on many groundbreaking memory and IO technologies in JEDEC," said Mian Quddus, JEDEC Board of Directors Chairman. He added, "JEDEC is grateful for his invaluable support and that of the Intel team."

Intel Announces Intel 14A (1.4 nm) and Intel 3T Foundry Nodes, Launches World's First Systems Foundry Designed for the AI Era

Intel Corp. today launched Intel Foundry as a more sustainable systems foundry business designed for the AI era and announced an expanded process roadmap designed to establish leadership into the latter part of this decade. The company also highlighted customer momentum and support from ecosystem partners - including Synopsys, Cadence, Siemens and Ansys - who outlined their readiness to accelerate Intel Foundry customers' chip designs with tools, design flows and IP portfolios validated for Intel's advanced packaging and Intel 18A process technologies.

The announcements were made at Intel's first foundry event, Intel Foundry Direct Connect, where the company gathered customers, ecosystem companies and leaders from across the industry. Among the participants and speakers were U.S. Secretary of Commerce Gina Raimondo, Arm CEO Rene Haas, Microsoft CEO Satya Nadella, OpenAI CEO Sam Altman and others.

Intel Meteor Lake Linux Patches Set to Optimize Default Power Modes

Phoronix has spotted intriguing new Linux kernel patches for Intel Core Ultra "Meteor Lake" processors—the Monday morning notes reveal that in-house software engineers are implementing default power profile adjustments. Meteor Lake CPUs have been operating on a default "balanced_performance" mode since their December 2023 launch—the adjustments affect the processor's Energy Performance Preference (EPP) under Linux (similar to Windows power plans). Michael Larabel (Phoronix head honcho) laid out some history: "We've seen EPP overrides/tuning in the past within the Intel P-State driver for prior generations of Intel processors and this is much the same here. The ACPI EPP value is typically a range from 0 to 255 for indicating the processor/system power to performance preference."

He continued on to present-day circumstances: "To date though the Intel P-State EPP override/tuning has been focused on the default "balanced_performance" mode while the first patch (from Monday) allows for model-specific EPP overrides for all pre-defined EPP strings. The second patch then goes ahead and updates the EPP values for Meteor Lake so that the balanced_performance default is now treated as 115 rather than 128 and the "performance" EPP is set to 16 rather than 0." Larabel is hopeful that a public release will coincide with the "upcoming Linux v6.9 cycle." Intel software engineers reckon that their tweaks/overrides have produced higher performance results—for "small form factor devices"—while reducing CPU temperatures and thermal throttling. Meteor Lake is considered to be quite energy inefficient when compared to the closest mobile processor architectures from AMD and Apple. Team Blue's next-gen Arrow Lake family is expected to launch later this year, but the current crop of CPUs requires a bit of TLC and optimization in the meantime.
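The EPP value these patches adjust is the same knob that the intel_pstate driver (in active mode) exposes per CPU through the standard cpufreq sysfs files, so the current setting can be inspected without any kernel changes. A minimal sketch, assuming the usual sysfs layout; the available preference strings vary by kernel and platform:

```python
# Minimal sketch: inspect the Energy Performance Preference (EPP) exposed by the
# intel_pstate driver in active mode. The pre-defined strings ("performance",
# "balanced_performance", ...) map to 0-255 values like those in the Meteor Lake patches.
from pathlib import Path

for cpu_dir in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    epp = cpu_dir / "cpufreq" / "energy_performance_preference"
    if not epp.exists():
        continue  # offline CPU or driver in passive mode
    choices = cpu_dir / "cpufreq" / "energy_performance_available_preferences"
    print(cpu_dir.name,
          "current:", epp.read_text().strip(),
          "| available:", choices.read_text().strip() if choices.exists() else "n/a")
```

The pre-defined strings such as "performance" and "balanced_performance" are what the new patches remap to model-specific 0-255 values (16 and 115 respectively on Meteor Lake).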

Intel Xeon "Granite Rapids" Wafer Pictured—First Silicon Built on Intel 3

Feast your eyes on the first pictures of an Intel "Granite Rapids" Xeon processor wafer, courtesy of Andreas Schilling of HardwareLuxx.de. This is Intel's first commercial silicon built on the new Intel 3 foundry node, which is expected to be the company's final silicon fabrication node to implement FinFET technology before the company switches to nanosheets with the next-generation Intel 20A. Intel 3 offers transistor densities and performance competitive with TSMC's N3-series and Samsung's 3GA-series nodes.

The wafer contains square 30-core tiles, two of which make up a "Granite Rapids-XCC" processor, with CPU core counts going up to 56 cores/112 threads (two cores left unused per tile for harvesting). Each of the 30 cores on the tile is a "Redwood Cove" P-core. In comparison, the current "Emerald Rapids" Xeon processor uses "Raptor Cove" cores and is built on the Intel 7 foundry node. Intel is planning to overcome its CPU core-count deficit to AMD EPYC, including the upcoming EPYC "Turin" Zen 5 processors with their rumored 128-core/256-thread configurations, by implementing several on-silicon fixed-function accelerators that speed up popular kinds of server workloads. The "Redwood Cove" core is expected to be Intel's first IA core to implement AVX10 and APX.

Starfield AMD FSR 3.0 and Intel XeSS Support Out Now

Starfield game patch version 1.9.67 just released, with official support for AMD FSR 3.0 and Intel XeSS. Support for the two performance enhancements was beta (experimental) until now. FSR 3.0 brings frame generation support to Starfield. The game had received DLSS 3 Frame Generation support in November 2023, but by then, FSR 3.0 support wasn't fully integrated with the game, as it had only just begun rolling out in September. The FSR 3.0 option now replaces the game's FSR 2.0 implementation. FSR 3.0 works on Radeon RX 7000 series and RX 6000 series graphics cards. The patch also fixes certain visual artifacts on machines with the DLSS performance preset enabled.

Intel Foundry Services (IFS) and Cadence Design Systems Expand Partnership on SoC Design

Intel Foundry Services (IFS) and Cadence Design Systems Inc. today announced a multiyear strategic agreement to jointly develop a portfolio of key customized intellectual property (IP), optimized design flows and techniques for Intel 18A process technology featuring RibbonFET gate-all-around transistors and PowerVia backside power delivery. Joint customers of the companies will be able to accelerate system-on-chip (SoC) project schedules on process nodes from Intel 18A and beyond while optimizing for performance, power, area, bandwidth and latency for demanding artificial intelligence, high performance computing and premium mobile applications.

"We're very excited to expand our partnership with Cadence to grow the IP ecosystem for IFS and provide choice for customers," said Stuart Paann, Intel senior vice president and general manager of IFS. "We will leverage Cadence's world-class portfolio of leading IP and advanced design solutions to enable our customers to deliver high-volume, high-performance and power-efficient SoCs on Intel's leading-edge process technologies."

CTL Announces the Chromebook NL73 Series

CTL, a global cloud-computing solution leader for education, announced today the introduction of the new CTL Chromebook NL73 Series. The new Chromebook, incorporating the Intel Processor N100 and Intel Processor N200, enables IT professionals to equip schools with the cloud-computing performance they need today and with the sustainability required for tomorrow.

"Since Chromebooks were widely deployed during the pandemic to remote students, new applications have come into use, requiring more processing power and cybersecurity measures than ever before," noted Erik Stromquist, CEO of CTL. "Chromebook users will need to level up their technology in 2024. Our new NL73 Series delivers not only the power to meet these new requirements but also CTL's flexible configuration options, purchase options, and whole lifecycle management services for the ultimate in sustainability."

Intel and Ohio Supercomputer Center Double AI Processing Power with New HPC Cluster

A collaboration including Intel, Dell Technologies, NVIDIA and the Ohio Supercomputer Center (OSC) today introduces Cardinal, a cutting-edge high-performance computing (HPC) cluster purpose-built to meet the increasing demand for HPC resources in Ohio across research, education and industry innovation, particularly in artificial intelligence (AI).

AI and machine learning are integral tools in scientific, engineering and biomedical fields for solving complex research inquiries. As these technologies continue to demonstrate efficacy, academic domains such as agricultural sciences, architecture and social studies are embracing their potential. Cardinal is equipped with the hardware capable of meeting the demands of expanding AI workloads. In both capabilities and capacity, the new cluster will be a substantial upgrade from the system it will replace, the Owens Cluster launched in 2016.

ASUS Announces New Vivobook S Series Notebooks With AI-Enabled Intel Core Ultra Processors

ASUS today announced brand-new ASUS Vivobook S series laptops for 2024, designed for a sleek and lightweight lifestyle. These laptops - all featuring ASUS Lumina OLED display options - are driven by the latest AI-enabled Intel Core Ultra processors and offer exceptional performance. The series comprises the 14.0-inch ASUS Vivobook S 14 OLED (S5406), the 15.6-inch ASUS Vivobook S 15 OLED (S5506), and the 16.0-inch ASUS Vivobook S 16 OLED (S5606). These sleek, powerful and lightweight Intel Evo-certified ASUS Vivobook laptops offer the ultimate experience for those seeking on-the-go productivity and instant entertainment, with modern color options and minimalist, high-end aesthetics, making them the perfect choice for balanced mobility and performance.

ASUS Vivobook S 14/15/16 OLED laptops are powered by Intel Core Ultra processors, with up to a 50-watt TDP and a built-in Neural Processing Unit (NPU) that provides power-efficient acceleration for modern AI applications. Moreover, ASUS Vivobook S series laptops all have a dedicated Copilot key, allowing you to effortlessly dive into Windows 11 AI-powered tools with just one press. Lifelike visuals are provided by world-leading ASUS Lumina OLED displays with resolutions up to 3.2K (S5606), along with 120 Hz refresh rates, a 100% DCI-P3 gamut and DisplayHDR True Black 600 certification. The stylish and comfortable ASUS ErgoSense keyboard now features customizable single-zone RGB backlighting, and there's an extra-large ErgoSense touchpad. As with all ASUS Vivobook models, the user experience is prioritized: there's a lay-flat 180° hinge, an IR camera with a physical shutter, a full complement of I/O ports, and immersive Dolby Atmos audio from the powerful Harman Kardon-certified stereo speakers.

Groq LPU AI Inference Chip is Rivaling Major Players like NVIDIA, AMD, and Intel

AI workloads are split into two different categories: training and inference. While training requires large compute and memory capacity, access speeds are not a significant contributor; inference is another story. With inference, the AI model must run extremely fast to serve the end-user with as many tokens (words) as possible, giving the user answers to their prompts faster. An AI chip startup, Groq, which was in stealth mode for a long time, has been making major moves in providing ultra-fast inference speeds using its Language Processing Unit (LPU), designed for large language models (LLMs) like GPT, Llama, and Mistral. The Groq LPU is a single-core unit based on the Tensor-Streaming Processor (TSP) architecture, which achieves 750 TOPS at INT8 and 188 TeraFLOPS at FP16, with 320x320 fused dot product matrix multiplication, in addition to 5,120 Vector ALUs.

Having massive concurrency with 80 TB/s of bandwidth, the Groq LPU also has 230 MB of local SRAM. All of this works together to give Groq fantastic performance, which has been making waves on the internet over the past few days. Serving the Mixtral 8x7B model at 480 tokens per second, the Groq LPU provides one of the leading inference numbers in the industry. In models like Llama 2 70B with a 4096-token context length, Groq can serve 300 tokens/s, while in the smaller Llama 2 7B with 2048 tokens of context, the Groq LPU can output 750 tokens/s. According to the LLMPerf Leaderboard, the Groq LPU is beating GPU-based cloud providers at inferencing Llama models in configurations of anywhere from 7 to 70 billion parameters. In token throughput (output) and time to first token (latency), Groq is leading the pack, achieving the highest throughput and the second-lowest latency.
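As a rough sanity check on those figures, single-stream LLM decoding is largely bandwidth-bound: each generated token has to stream the model's weights through the compute units once, so bandwidth divided by weight size gives an upper bound on tokens per second. A back-of-envelope sketch follows; the 80 TB/s figure comes from the article, while the parameter counts and FP16 weight assumption are illustrative.

```python
# Back-of-envelope ceiling on single-stream decode speed when weight streaming is
# the bottleneck: tokens/s <= aggregate_bandwidth / bytes_of_weights_per_token.
# Bandwidth is the article's 80 TB/s figure; FP16 weights and parameter counts are assumptions.
BANDWIDTH_B_PER_S = 80e12   # 80 TB/s
BYTES_PER_PARAM = 2         # FP16 weights (assumed; quantization would raise the ceiling)

for name, params in [("Llama 2 7B", 7e9), ("Llama 2 70B", 70e9)]:
    weight_bytes = params * BYTES_PER_PARAM
    ceiling_tps = BANDWIDTH_B_PER_S / weight_bytes
    print(f"{name}: bandwidth ceiling ~{ceiling_tps:,.0f} tokens/s")
```

The reported 750 tokens/s (7B) and 300 tokens/s (70B) sit well under these ceilings, which is consistent with a bandwidth-rich design; note that a single LPU's 230 MB of SRAM cannot hold either model, so real deployments shard the weights across many chips and the calculation is only an aggregate illustration.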