News Posts matching #Intel

Intel Xeon "Granite Rapids" Wafer Pictured—First Silicon Built on Intel 3

Feast your eyes on the first pictures of an Intel "Granite Rapids" Xeon processor wafer, courtesy of Andreas Schilling of HardwareLuxx.de. This is Intel's first commercial silicon built on the new Intel 3 foundry node, which is expected to be the company's final fabrication node to use FinFET technology before it switches to nanosheet transistors with the next-generation Intel 20A node. Intel 3 offers transistor density and performance competitive with TSMC's N3-series and Samsung's 3GA-series nodes.

The wafer contains square 30-core tiles, two of which make up a "Granite Rapids-XCC" processor, with CPU core counts going up to 56 cores/112 threads (two cores per tile are left unused for harvesting). Each of the 30 cores on a tile is a "Redwood Cove" P-core. In comparison, the current "Emerald Rapids" Xeon processor uses "Raptor Cove" cores and is built on the Intel 7 foundry node. Intel plans to offset its CPU core-count deficit against AMD EPYC (including the upcoming EPYC "Turin" Zen 5 processors, with their rumored 128-core/256-thread configurations) by implementing several on-silicon fixed-function accelerators that speed up popular kinds of server workloads. The "Redwood Cove" core is expected to be Intel's first IA core to implement AVX10 and APX.
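The tile arithmetic above can be spelled out in a few lines (simple back-of-envelope math based solely on the figures in this post, not insider data):

```python
# "Granite Rapids-XCC" core math as described above: two 30-core tiles,
# with two cores per tile disabled ("harvested") for yield.
tiles = 2
cores_per_tile = 30
harvested_per_tile = 2

cores = tiles * (cores_per_tile - harvested_per_tile)
threads = cores * 2  # "Redwood Cove" P-cores run two threads each
print(cores, threads)  # 56 112
```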

Starfield AMD FSR 3.0 and Intel XeSS Support Out Now

Starfield game patch version 1.9.67 has just been released, with official support for AMD FSR 3.0 and Intel XeSS. Support for the two performance enhancements had been beta (experimental) until now. FSR 3.0 brings frame generation support to Starfield. The game received DLSS 3 Frame Generation support in November 2023, but at that point FSR 3.0 was not yet fully integrated into the game, as it had only begun rolling out in September. The FSR 3.0 option now replaces the game's FSR 2.0 implementation. FSR 3.0 works on Radeon RX 7000 series and RX 6000 series graphics cards. The patch also fixes certain visual artifacts on machines with the DLSS performance preset enabled.

Intel Foundry Services (IFS) and Cadence Design Systems Expand Partnership on SoC Design

Intel Foundry Services (IFS) and Cadence Design Systems Inc. today announced a multiyear strategic agreement to jointly develop a portfolio of key customized intellectual property (IP), optimized design flows and techniques for Intel 18A process technology featuring RibbonFET gate-all-around transistors and PowerVia backside power delivery. Joint customers of the companies will be able to accelerate system-on-chip (SoC) project schedules on process nodes from Intel 18A and beyond while optimizing for performance, power, area, bandwidth and latency for demanding artificial intelligence, high performance computing and premium mobile applications.

"We're very excited to expand our partnership with Cadence to grow the IP ecosystem for IFS and provide choice for customers," said Stuart Paann, Intel senior vice president and general manager of IFS. "We will leverage Cadence's world-class portfolio of leading IP and advanced design solutions to enable our customers to deliver high-volume, high-performance and power-efficient SoCs on Intel's leading-edge process technologies."

CTL Announces the Chromebook NL73 Series

CTL, a global cloud-computing solution leader for education, announced today the introduction of the new CTL Chromebook NL73 Series. The new Chromebook, incorporating the Intel Processor N100 and Intel Processor N200, enables IT professionals to equip schools with the cloud-computing performance they need today and with the sustainability required for tomorrow.

"Since Chromebooks were widely deployed during the pandemic to remote students, new applications have come into use, requiring more processing power and cybersecurity measures than ever before," noted Erik Stromquist, CEO of CTL. "Chromebook users will need to level up their technology in 2024. Our new NL73 Series delivers not only the power to meet these new requirements but also CTL's flexible configuration options, purchase options, and whole lifecycle management services for the ultimate in sustainability."

Intel and Ohio Supercomputer Center Double AI Processing Power with New HPC Cluster

A collaboration between Intel, Dell Technologies, Nvidia and the Ohio Supercomputer Center (OSC) today introduced Cardinal, a cutting-edge high-performance computing (HPC) cluster purpose-built to meet the increasing demand for HPC resources in Ohio across research, education and industry innovation, particularly in artificial intelligence (AI).

AI and machine learning are integral tools in scientific, engineering and biomedical fields for solving complex research inquiries. As these technologies continue to demonstrate efficacy, academic domains such as agricultural sciences, architecture and social studies are embracing their potential. Cardinal is equipped with the hardware capable of meeting the demands of expanding AI workloads. In both capabilities and capacity, the new cluster will be a substantial upgrade from the system it will replace, the Owens Cluster launched in 2016.

ASUS Announces New Vivobook S Series Notebooks With AI-Enabled Intel Core Ultra Processors

ASUS today announced brand-new ASUS Vivobook S series laptops for 2024, designed for a sleek and lightweight lifestyle. These laptops - all featuring ASUS Lumina OLED display options - are driven by the latest AI-enabled Intel Core Ultra processors and offer exceptional performance. The series comprises the 14.0-inch ASUS Vivobook S 14 OLED (S5406), the 15.6-inch ASUS Vivobook S 15 OLED (S5506), and the 16.0-inch ASUS Vivobook S 16 OLED (S5606). These sleek, powerful and lightweight Intel Evo-certified ASUS Vivobook laptops offer the ultimate experience for those seeking on-the-go productivity and instant entertainment, with modern color options and minimalist, high-end aesthetics, making them the perfect choice for balanced mobility and performance.

ASUS Vivobook S 14/15/16 OLED are powered by Intel Core Ultra processors, with up to a 50-watt TDP and a built-in Neural Processing Unit (NPU) that provides power-efficient acceleration for modern AI applications. Moreover, ASUS Vivobook S series laptops all have a dedicated Copilot key, allowing you to effortlessly dive into Windows 11 AI-powered tools with a single press. Lifelike visuals are provided by world-leading ASUS Lumina OLED displays with resolutions up to 3.2K (S5606) along with 120 Hz refresh rates, a 100% DCI-P3 gamut and DisplayHDR True Black 600 certification. The stylish and comfortable ASUS ErgoSense keyboard now features customizable single-zone RGB backlighting, and there's an extra-large ErgoSense touchpad. As with all ASUS Vivobook models, the user experience is prioritized: there's a lay-flat 180° hinge, an IR camera with a physical shutter, a full complement of I/O ports, and immersive Dolby Atmos audio from the powerful Harman Kardon-certified stereo speakers.

Groq LPU AI Inference Chip is Rivaling Major Players like NVIDIA, AMD, and Intel

AI workloads are split into two different categories: training and inference. While training requires large compute and memory capacity, access speeds are not a significant contributor; inference is another story. With inference, the AI model must run extremely fast to serve the end-user with as many tokens (words) as possible, giving the user answers to their prompts faster. An AI chip startup, Groq, which operated in stealth mode for a long time, has been making major moves in providing ultra-fast inference speeds using its Language Processing Unit (LPU), designed for large language models (LLMs) such as GPT, Llama, and Mistral. The Groq LPU is a single-core unit based on the Tensor-Streaming Processor (TSP) architecture, which achieves 750 TOPS at INT8 and 188 TeraFLOPS at FP16, with 320x320 fused dot-product matrix multiplication, in addition to 5,120 vector ALUs.

With massive concurrency and 80 TB/s of bandwidth, the Groq LPU carries 230 MB of local SRAM. All of this works together to deliver outstanding performance, which has made waves across the internet over the past few days. Serving the Mixtral 8x7B model at 480 tokens per second, the Groq LPU provides some of the leading inference numbers in the industry. In models like Llama 2 70B with a 4,096-token context length, Groq can serve 300 tokens/s, while in the smaller Llama 2 7B with 2,048 tokens of context, the Groq LPU can output 750 tokens/s. According to the LLMPerf Leaderboard, the Groq LPU beats GPU-based cloud providers at inferencing Llama LLMs in configurations anywhere from 7 to 70 billion parameters. In token throughput (output) and time to first token (latency), Groq leads the pack, achieving the highest throughput and second-lowest latency.
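A rough sanity check of those throughput figures (our own back-of-envelope estimate, not a Groq number): single-batch LLM decoding is largely memory-bandwidth-bound, so dividing the quoted 80 TB/s by the bytes touched per generated token (roughly one full pass over the model weights) gives a ceiling on token rate.

```python
def max_tokens_per_second(params_billion, bytes_per_param, bandwidth_tb_s):
    """Bandwidth-bound ceiling on decode speed: one weight pass per token."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_tb_s * 1e12 / model_bytes

# Llama 2 70B at FP16 against the quoted 80 TB/s of SRAM bandwidth
print(round(max_tokens_per_second(70, 2, 80)))  # 571
```

The reported 300 tokens/s sits comfortably under this ~571 tokens/s ceiling. In practice Groq shards the model across many LPUs, since a single chip's 230 MB of SRAM cannot hold 140 GB of FP16 weights, so this is only an order-of-magnitude illustration.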

MSI Claw Review Units Observed Trailing Behind ROG Ally in Benchmarks

Chinese review outlets have received MSI Claw sample units—the "Please, Xiao Fengfeng" Bilibili video channel has produced several comparison pieces detailing how the plucky Intel Meteor Lake-powered handheld stands up against its closest rival, the ASUS ROG Ally. The latter utilizes an AMD Ryzen Z1 APU—in Extreme or Standard form—and many news outlets have pointed out that the Z1 Extreme processor is a slightly reworked Ryzen 7 7840U "Phoenix" processor. Intel and its handheld hardware partners have not dressed up Meteor Lake chips with alternative gaming monikers—simply put, the MSI Claw arrives with a Core Ultra 7-155H or Ultra 5-135H processor onboard. The two rival systems both run Windows 11, and also share the same screen size, resolution, display technology (IPS) and 16 GB LPDDR5-6400 memory configuration. The almost-eight-month-old ASUS handheld seems to outperform its near-launch competition.

Xiao Fengfeng's review (Ultra 7-155H versus Z1 Extreme) focuses on different power levels and how they affect handheld performance—the Claw and Ally have user selectable TDP modes. A VideoCardz analysis piece lays out key divergences: "Both companies offer easy TDP profile switches, allowing users to adjust performance based on the game's requirements or available battery life. The Claw's larger battery could theoretically offer more gaming time or higher TDP with the same battery life. The system can work at 40 W TDP level (but in reality it's between 35 and 40 watts)...In the Shadow of the Tomb Raider test, the Claw doesn't seem to outperform the ROG Ally. According to a Bilibili creator's test, the system falls short at four different power levels: 15 W, 20 W, 25 W, and max TDP (40 W for Claw and 30 W for Ally)."

GIGABYTE Elevates Computing Horizons at SupercomputingAsia 2024

GIGABYTE, a global leader in high-performance computing solutions, collaborates with industry partner Xenon at SupercomputingAsia 2024, held at the Sydney International Convention and Exhibition Centre from February 19 to 22. This collaboration showcases cutting-edge technologies, offering diverse solutions that redefine the high-performance computing landscape.

GIGABYTE's Highlights at SCA 2024
At booth 19, GIGABYTE presents the G593-SD0, our flagship AI server, and the industry's first Nvidia-certified HGX H100 8-GPU Server. Equipped with 4th/5th Gen Intel Xeon Scalable Processors, it incorporates GIGABYTE's thermal design, ensuring optimal performance within its density-optimized 5U server chassis, pushing the boundaries of AI computing. Additionally, GIGABYTE introduces the 2U 4-node H263-S62 server, designed for 4th Gen Intel Xeon Scalable Processors and now upgraded to the latest 5th Gen, tailored for hybrid and private cloud applications. It features a DLC (Direct Liquid Cooling) solution to efficiently manage heat generated by high-performance computing. Also on display is the newly released W773-W80 workstation, supporting the latest NVIDIA RTX 6000 Ada and catering to CAD, DME, research, data and image analysis, and SMB private cloud applications. At SCA 2024, explore our offerings, including rackmount servers and motherboards, reflecting GIGABYTE's commitment to innovative and reliable solutions. This offers a valuable opportunity to discuss your IT infrastructure requirements with our sales and consulting teams, supported by GIGABYTE and Xenon in Australia.

SoftBank Founder Wants $100 Billion to Compete with NVIDIA's AI

Japanese tech billionaire and founder of the SoftBank Group, Masayoshi Son, is embarking on a hugely ambitious new project to build an AI chip company that aims to rival NVIDIA, the current leader in AI semiconductor solutions. Codenamed "Izanagi" after the Japanese god of creation, Son aims to raise up to $100 billion in funding for the new venture. With his company SoftBank having recently scaled back investments in startups, Son is now setting his sights on the red-hot AI chip sector. Izanagi would leverage SoftBank's existing chip design firm, Arm, to develop advanced semiconductors tailored for artificial intelligence computing. The startup would use Arm's instruction set for the chip's processing elements. This could pit Izanagi directly against NVIDIA's leadership position in AI chips. Son has a war chest of $41 billion in cash at SoftBank that he can deploy for Izanagi.

Additionally, he is courting sovereign wealth funds in the Middle East to contribute up to $70 billion in additional capital. In total, Son may be seeking up to $100 billion to bankroll Izanagi into a chip powerhouse. AI chips are seeing surging demand as machine learning and neural networks require specialized semiconductors that can process massive datasets. NVIDIA and other names like Intel, AMD, and select startups have capitalized on this trend. However, Son believes the market has room for another major player. Izanagi would focus squarely on developing bleeding-edge AI chip architectures to power the next generation of artificial intelligence applications. It is still unclear whether this would be an AI training or AI inference project, but given that the training market is currently bigger (we are in the early build-out phase of AI infrastructure), the consensus might settle on training. With his track record of bold bets, Son is aiming very high with Izanagi. It's an audacious goal, but Son has defied expectations before. Project Izanagi will test the limits of even his vision and financial firepower.

Intel Core i9-14900KS Retail & OEM Packages Listed in France

We are likely to see even more Intel Core i9-14900KS pre-release leaks as its rumored mid-March launch window approaches—hardware sleuth momomo_us has spent the weekend following any Team Blue breadcrumb trails. Their latest discovery points to "BX8071514900KS" and "CM8071504820506" product codes, and two listings on PC21 France's web shop. Intel seems to be offering its upcoming limited-edition Raptor Lake Refresh über-flagship in two different guises—the first being a traditional boxed package, while the second appears to be a tray option (for system integrators). As pointed out by VideoCardz, it is not unusual to see OEM parts reach retail channels—similar cases have leaked in the past. The no-frills tray option "offers a more cost-effective option for users who don't require fancy packaging or bundled coolers, making it a budget-friendly choice for the new CPU."

The Core i9-14900KS is far from a wallet-friendly prospect, yet the untimely listings indicate that the OEM option shaves off a grand total of €16 (~$17.25) when lined up against its fancier boxed sibling. The French retailer states that both items are on order, with zero stock in its warehouses. The boxed Core i9-14900KS seems to cost €768.34 (~$828) including taxes, while the tray variant's entry indicates a charge of €752.62 (~$811) with VAT factored in. These leaked prices are subject to change—perhaps the current figures are based on a distributor's pre-launch estimation. PC21 France does not display any pricing for the already released Core i9-14900K and 14900KF SKUs, but VideoCardz has checked other retail listings in the country—they reckon that the gulf between "K" and "KS" is €146 (best-case scenario).

Intel Lunar Lake A1 Sample CPU Boost & Cache Specs Leak Out

HXL (@9550pro) has highlighted an intriguing pinned post on the Chinese Zhihu community site—where XZiar, a self-described "Central Processing Unit (CPU) expert," has shared a very fuzzy, low-quality screenshot of a Windows Task Manager session. The information on display indicates that a "Genuine Intel(R) 0000 1.0 GHz" processor was in use—perhaps a very early Lunar Lake (LNL) engineering sample (ES1). XZiar confirmed the pre-release nature of the onboard chip, and teased its performance prowess: "It's good to use the craftsmanship that others have stepped on. It can run 2.8 GHz with only A1 step, and it is very smooth."

The "A1" designation implies that the leaked sample is among the first LNL processor prototypes to exit manufacturing facilities—Intel previewed its "Lunar Lake-MX" SoC package to press representatives last November. XZiar's followers have pored over the screenshot and ascertained that the leaked example sports a "8-core + 8-thread, without Hyperthreading, 4P+4LPE" configuration. Others were confused by the chip's somewhat odd on-board cache designations—L1: 836 KB, L2: 14 MB and L3: 12 MB—XZiar believes that prototype's setup "is obviously not up to par," when a replier compares the spec to an N300 series processor. It is theorized that Windows Task Manager is simply not fully capable of detecting the sample's full makeup, but XZiar reckons that 12 MB of L3 cache is the correct figure.

Intel to Present "AI Everywhere" Innovations at MWC Barcelona 2024

At MWC Barcelona 2024, Intel will demonstrate breakthrough innovations across a full spectrum of new hardware, software and services—bringing AI Everywhere to the network, edge and enterprise, with the support and enablement of more than 65 pioneering customers and partners. Announcements will span AI network innovations, edge AI platforms, Intel Core Ultra processors and the AI PC. They are about empowering our ecosystem; modernizing and monetizing 5G, edge and enterprise infrastructures; and taking advantage of AI-based innovations. And it's all with the purpose of improving performance and power consumption for a more sustainable future.

Join Intel at MWC Barcelona (Hall 3, Stand 3E31) Feb. 26-29. Visitors will see and hear from ecosystem customers and partners about how innovations and collaborations across the network, edge and enterprise create modern networks, open up opportunities for 5G monetization at the edge, and bring AI across organizations, making an impact across industries.

Tulpar Handheld Gaming PC Demoed at Intel Extreme Masters

The recently concluded Intel Extreme Masters (IEM) event (in Katowice, Poland) played host to another emerging Windows 11 handheld gaming computer—attendees spent hands-on time with pre-release devices, potentially powered by Meteor Lake APUs. International audiences were treated to a small selection of photos from IEM, as teased by Team Blue's Gaming division last weekend: "A new handheld PC is approaching. The Tulpar brings gaming on the go!" Exact details/specifications and user impressions have not reached the wider world, but many believe that rebadged Emdoor EM-GP080MTL units were showcased in Southern Poland last week. Tulpar is a subsidiary of Turkish brand Monster Notebook—it mainly sells gaming laptops and gear in the UK and German markets.

Tulpar's mysterious handheld reportedly sports a seven inch display, indicating that it is a slightly smaller device when lined up against the EM-GP080MTL model (8-inch, 1200p). Internet sleuths reckon that the Tulpar shares many of the same technical underpinnings—general Emdoor OEM specs seem to include an unspecified Intel Core Ultra Meteor Lake-H processor, Arc Graphics 5 and 32 GB LPDDR5X. The EM-GP080MTL was unveiled several months before the debut presentation of MSI's Claw—another Meteor Lake-H-based handheld—at last month's CES trade show. Later on, MSI confirmed that three distinct Claw SKUs will be heading to retail—with an entry-level version sporting Intel's Core Ultra 5-135H CPU, and more expensive options utilizing Core Ultra 7-155H processors.

Report: Intel Seeks $2 Billion in Funding for Ireland Fab 34 Expansion

According to a Bloomberg report, Intel is seeking to raise at least $2 billion in equity funding from investors for expanding its fabrication facility in Leixlip, Ireland, known as Fab 34. The chipmaker has hired an advisor to find potential investors interested in providing capital for the project. Fab 34 is currently Intel's only chip plant in Europe that uses cutting-edge extreme ultraviolet (EUV) lithography. It produces processors on the Intel 4 process node, including compute tiles for Meteor Lake client CPUs and expected future Xeon data center chips. While $2 billion alone cannot finance the construction of an entirely new fab today, it can support meaningful expansion or upgrades of existing capacity. Intel likely aims to grow Fab 34's output and/or transition it to more advanced 3 nm-class technologies like Intel 3, Intel 20A, or Intel 18A.

Expanding production aligns with Intel's needs for its own products and for its Intel Foundry Services contract-manufacturing business. Intel previously secured a $15 billion investment from Brookfield Infrastructure for its Arizona fabs in exchange for a 49% stake, demonstrating the company's willingness to partner to raise capital for manufacturing projects. The Brookfield deal also set a precedent of using outside financing to supplement Intel's own spending budget: it provided $15 billion in effectively free cash flow that Intel can redirect to other priorities, like new fabs, without increasing debt. Intel's latest fundraising efforts for the Ireland site follow a similar equity-investment model that leverages outside capital to support its manufacturing expansion plans. Acquiring High-NA EUV machinery for manufacturing is costly, as these machines can cost up to $380 million apiece.

GIGABYTE Advanced Data Center Solutions Unveils Telecom and AI Servers at MWC 2024

GIGABYTE Technology, an IT pioneer whose focus is to advance global industries through cloud and AI computing systems, is coming to MWC 2024 with its next-generation servers empowering telcos, cloud service providers, enterprises, and SMBs to swiftly harness the value of 5G and AI. Featured is a cutting-edge AI server boasting AMD Instinct MI300X 8-GPU, and a comprehensive AI/HPC server series supporting the latest chip technology from AMD, Intel, and NVIDIA. The showcase will also feature integrated green computing solutions excelling in heat dissipation and energy reduction.

Continuing the booth theme "Future of COMPUTING", GIGABYTE's presentation will cover servers for AI/HPC, RAN and Core networks, modular edge platforms, all-in-one green computing solutions, and AI-powered self-driving technology. The exhibits will demonstrate how industries extend AI applications from cloud to edge and terminal devices through 5G connectivity, expanding future opportunities with faster time to market and sustainable operations. The showcase spans from February 26th to 29th at Booth #5F60, Hall 5, Fira Gran Via, Barcelona.

Insiders Propose mid-March Launch of Intel Core i9-14900KS Limited Edition CPU

Intel's 14th Generation "Raptor Lake Refresh" processor series debuted in "enthusiast" SKU form last October—Team Blue's official product unveiling was less than surprising, since multiple SKUs and specifications had been leaked throughout mid-to-late 2023. The true top-of-the-pile Intel Core i9-14900KS SKU was first linked to a possible announcement at January's CES trade show, but did not appear in any of last year's leaked product lists. Team Blue proceeded to introduce its 14th Gen "mainstream" 65 W SKUs to the crowd in Las Vegas, but the leaked Core i9-14900KS model did not pop up, contrary to tipster claims—Intel had a history of presenting "KS" variants during January showcases.

Industry experts reckon that the current Raptor Lake Refresh flagship—the Core i9-14900K—is getting some extra time in the spotlight before its inevitable dethroning courtesy of a "Special Edition" sibling. BenchLife has reached out to its cadre of insiders, following yesterday's reports of a "gargantuan 409 W maximum package power draw." The alleged top-dog 14th Gen Core part is perhaps only a month away from launch, as leaked by industry moles: "According to our reliable sources, Intel plans to launch the Intel Core i9-14900KS in mid-March 2024. It is a limited edition processor with a clock speed of 6.2 GHz, but we cannot confirm whether it will be sold to a specific system vendor or a specific channel."

Intel Ohio Fab Equipment Deliveries Delayed by Extreme Weather

Intel is aiming to get its $20 billion fabrication location—in New Albany, Ohio—up and running by 2025, but the advanced manufacturing facility is facing another round of setbacks. According to a WCMH NBC4 local news report (covering the Columbus, Ohio area), a planned "oversized equipment" reshuffle has been delayed—the shifting of heavy machinery was supposed to start last weekend. Extreme weather conditions (flooding) have been cited as a major factor, as has the complicated nature of transporting "overweight and oversized" loads to Team Blue's 1000-acre site. Workers are set to resume efforts this weekend—starting no later than February 17. Tom's Hardware has kept tabs on the Ohio fab's progress: "The project to move the equipment is expected to last over nine months, meaning this phase of Intel's construction could be done near the end of 2024. There isn't a firm indication of how much work remains to be done at the site after the equipment is delivered." TPU previously covered the leading-edge location's indefinitely postponed groundbreaking ceremony—CHIPS Act subsidies were not delivered in the expected timely manner back in 2022.

A couple of media outlets (Tom's Hardware, Network World, etc.) have received an official statement regarding the slippage of events in New Albany: "While we will not meet the aggressive 2025 production goal that we anticipated when we first announced the selection of Ohio in January, 2022, construction has been underway since breaking ground in late 2022 and our construction has been proceeding on schedule. Typical construction timelines for semiconductor manufacturing facilities are 3-5 years from groundbreaking, depending on a range of factors...We remain fully committed to the project and are continuing to make progress on the construction of the factory and supporting facilities this year. As we said in our January 2022 site-selection announcement, the scope and pace of Intel's expansion in Ohio may depend on various conditions." Industry insiders believe that an "opening ceremony" could occur around late 2026, or even early 2027.

Adaptive Sharpening Filter Outlined in Intel Lunar Lake Xe2 Patch Notes

Intel appears to be working on an intriguing next-generation adaptive sharpening filter—as revealed in patch notes published mid-week. Lunar Lake's display engine seems to be the lucky recipient here—its Xe2 "Battlemage" graphics architecture is expected to debut later this year. Second-generation Intel Arc integrated graphics solutions have been linked to mobile Lunar Lake (LNL) processors—driver enablement was uncovered by Phoronix last September. The notes reveal that Team Blue is exploring a more intelligent approach to visual enhancement across games and productivity applications.

Patch author Nemesa Garg (an engineer at Intel India) stated: "Many a times images are blurred or upscaled content is also not as crisp as original rendered image. Traditional sharpening techniques often apply a uniform level of enhancement across entire image, which sometimes result in over-sharpening of some areas and potential loss of natural details. Intel has come up with Display Engine based adaptive sharpening filter with minimal power and performance impact. From LNL onwards, the Display hardware can use one of the pipe scaler for adaptive sharpness filter. This can be used for both gaming and non-gaming use cases like photos, image viewing. It works on a region of pixels depending on the tap size."
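The idea can be illustrated with a toy one-dimensional version (a hypothetical sketch for intuition only; Intel has not published the Display Engine filter's actual algorithm): a plain unsharp mask adds the same fraction of high-frequency detail everywhere, while an adaptive one scales that fraction by local contrast, leaving flat regions untouched.

```python
# Toy 1-D adaptive sharpening sketch (hypothetical implementation).
# A uniform unsharp mask boosts detail everywhere; here the boost is
# weighted by local detail energy, so flat areas pass through unchanged
# while edges receive the full enhancement.

def box_blur(signal, radius=1):
    """Simple moving-average blur over a 1-D signal."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def adaptive_sharpen(signal, max_amount=1.5, radius=1):
    """Scale unsharp-mask strength by local detail energy."""
    blurred = box_blur(signal, radius)
    detail = [s - b for s, b in zip(signal, blurred)]
    peak = max(abs(d) for d in detail) or 1.0
    # weight is 0 in flat areas and approaches 1 at the strongest edge
    return [s + max_amount * (abs(d) / peak) * d
            for s, d in zip(signal, detail)]

flat = [10.0] * 5
edge = [10.0, 10.0, 50.0, 50.0, 50.0]
print(adaptive_sharpen(flat))  # flat input is returned unchanged
print(adaptive_sharpen(edge))  # overshoot appears only around the step
```

The real hardware filter operates on a 2-D region of pixels whose size depends on the scaler's tap count, but the flat-versus-edge weighting shown here is the essence of "adaptive" sharpening.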

BIOSTAR Releases the BIH61-AHA Socket LGA1700 Industrial Motherboard with PCI Slots and Legacy IO

BIOSTAR, a leading manufacturer of motherboards, graphics cards, and storage devices, today introduces the BIH61-AHA industrial motherboard, designed for smooth, seamless industrial applications ranging from AIoT machines and edge computing to HMI machines, digital signage and more. Built on the Intel H610 chipset, the BIOSTAR BIH61-AHA motherboard is engineered to support the latest Intel Core i7/i5/i3 processors (LGA1700). Featuring the latest DDR5 technology, with support for 2x DDR5-4800 MHz LONG-DIMMs up to a 96 GB maximum, it offers extensive customization to meet the intricate needs of various industries.

Targeting a diverse audience, from system integrators looking to build sophisticated AIoT machines to enthusiasts aiming to construct powerful automation or edge computing systems, the BIH61-AHA provides unparalleled connectivity and expansion options: 10 COM ports, 5 PCI slots, dual Intel GbE LAN ports for reliable, high-speed network communication, extensive USB connectivity with 6x USB 2.0 and 4x USB 3.2 Gen 1 ports, and multiple storage interfaces, including 4x SATA 6 Gb/s and 1x M.2 M-key socket. Support for a wide operating temperature range (0-60 °C) and ATX power input also ensures the motherboard's adaptability in challenging environments.

Intel Core i9-14900KS Draws as much as 409W at Stock Speeds with Power Limits Unlocked

Intel's upcoming limited edition desktop processor for overclockers and enthusiasts, the Core i9-14900KS, comes with a gargantuan 409 W maximum package power draw at stock speeds with its PL2 power limit unlocked, reports HKEPC, based on an OCCT database result. This was measured under an OCCT stress test, with all CPU cores saturated, and the PL2 (maximum turbo power) limit set to unlimited/4096 W in the BIOS. The chip allows 56 seconds of maximum turbo power at a stretch, during which the 409 W figure was measured.

The i9-14900KS is a speed bump over its predecessor, the i9-13900KS. It comes with a maximum P-core boost frequency of 6.20 GHz, which is 200 MHz higher; and a maximum E-core boost frequency of 4.50 GHz, a 100 MHz increase over both the i9-13900KS and the mass-market i9-14900K. The i9-14900KS comes with a base power value of 150 W, which is the guaranteed minimum amount of power the processor can draw under load (idle power is much lower). There's no word on when Intel plans to make the i9-14900KS available; it was earlier expected to go on sale in January, on the sidelines of CES.
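Intel's publicly documented turbo scheme can be approximated with a simple step model, treating the measured 409 W draw as the effective PL2 and the 150 W base power as PL1 (the parameter roles are our assumption; real silicon tracks an exponentially weighted moving average of power over the turbo window rather than a hard cutoff):

```python
# Simplified step model of Intel's PL1/PL2 turbo power limits
# (illustrative; actual hardware uses a moving average over tau).
def package_power_limit(t_seconds, pl1=150, pl2=409, tau=56):
    """Power limit in effect t seconds into a sustained all-core load."""
    return pl2 if t_seconds < tau else pl1

print(package_power_limit(10))   # 409 -> inside the 56-second turbo window
print(package_power_limit(120))  # 150 -> fallen back to base power
```

This is why the 409 W figure only appears for the first 56 seconds of an OCCT run; sustained loads settle at the lower limit unless the BIOS lifts it.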

ASML High-NA EUV Twinscan EXE Machines Cost $380 Million, 10-20 Units Already Booked

ASML has revealed that its cutting-edge High-NA extreme ultraviolet (EUV) chipmaking tools, called High-NA Twinscan EXE, will cost around $380 million each—over twice as much as its existing Low-NA EUV lithography systems that cost about $183 million. The company has taken 10-20 initial orders from the likes of Intel and SK Hynix and plans to manufacture 20 High-NA systems annually by 2028 to meet demand. The High-NA EUV technology represents a major breakthrough, enabling an improved 8 nm imprint resolution compared to 13 nm with current Low-NA EUV tools. This allows chipmakers to produce transistors that are nearly 1.7 times smaller, translating to a threefold increase in transistor density on chips. Attaining this level of precision is critical for manufacturing sub-3 nm chips, an industry goal for 2025-2026. It also eliminates the need for complex double patterning techniques required presently.
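The geometry behind those claims is easy to check (simple arithmetic, assuming transistor density scales with the square of linear resolution): 13 nm to 8 nm gives roughly a 1.6x linear gain and a 2.6x areal gain, so the quoted "nearly 1.7 times" and "threefold" figures round this up slightly.

```python
# Resolution-to-density arithmetic for Low-NA vs High-NA EUV,
# assuming density scales with the square of linear feature size.
low_na_nm, high_na_nm = 13, 8

linear_gain = low_na_nm / high_na_nm   # finer feature pitch
density_gain = linear_gain ** 2        # transistors per unit area
print(linear_gain, density_gain)       # 1.625 2.640625
```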

However, superior performance comes at a cost, both literally and figuratively. The hefty $380 million price tag for each High-NA system introduces financial challenges for chipmakers. Additionally, the larger High-NA tools require completely reconfiguring chip fabrication facilities, and their halved imaging field necessitates rethinking chip designs. As a result, adoption timelines differ across companies: Intel intends to deploy High-NA EUV at its advanced 1.8 nm-class (18A) node, while TSMC is taking a more conservative approach, potentially implementing it only around 2030 and not rushing the use of these lithography machines, as the company's current nodes are developing well and on schedule. Interestingly, installing ASML's 150,000-kilogram High-NA Twinscan EXE system required 250 crates, 250 engineers, and six months to complete; producing these machines is evidently every bit as complex as installing and operating them.

AMD Develops ROCm-based Solution to Run Unmodified NVIDIA's CUDA Binaries on AMD Graphics

AMD has quietly funded an effort over the past two years to enable binary compatibility for NVIDIA CUDA applications on its ROCm stack. This allows CUDA software to run on AMD Radeon GPUs without adapting the source code. The project responsible is ZLUDA, which was initially developed to provide CUDA support on Intel graphics. The developer behind ZLUDA, Andrzej Janik, was contracted by AMD in 2022 to adapt his project for use on Radeon GPUs with HIP/ROCm. He spent two years bringing functional CUDA support to AMD's platform, allowing many real-world CUDA workloads to run without modification. AMD decided not to productize this effort for unknown reasons but, per their agreement, open-sourced it once funding ended. Over at Phoronix, AMD's ZLUDA implementation was put through a wide variety of benchmarks.

Benchmarks found that proprietary CUDA renderers and software worked on Radeon GPUs out-of-the-box with the drop-in ZLUDA library replacements. CUDA-optimized Blender 4.0 rendering now runs faster on AMD Radeon GPUs than the native ROCm/HIP port, reducing render times by around 10-20%, depending on the scene. The implementation is surprisingly robust, considering it was a single-developer project. However, there are some limitations: OptiX and PTX assembly code are not yet fully supported. Overall, though, testing showed very promising results; in Geekbench, CUDA-optimized binaries produce up to 75% better results than the generic OpenCL runtimes. With the ZLUDA libraries handling API translation, unmodified CUDA binaries can now run directly on top of ROCm and Radeon GPUs. Strangely, the ZLUDA port targets AMD ROCm 5.7, not the newest 6.x versions. Only time will tell if AMD continues investing in this approach to simplify porting of CUDA software; however, the open-sourced project now enables anyone to contribute and help improve compatibility. For a complete review, check out Phoronix's tests.
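The "drop-in" mechanism works at the dynamic-linker level: the unmodified binary still loads a library named like NVIDIA's CUDA driver library, but ZLUDA's replacement is found first and translates the calls to HIP/ROCm. A rough Linux sketch is below; the directory path and application name are hypothetical placeholders, so consult the ZLUDA repository for the exact invocation:

```shell
# Hypothetical path for illustration: a directory containing ZLUDA's
# drop-in replacement for the CUDA driver library.
ZLUDA_DIR="$HOME/zluda"

# Put the ZLUDA directory first on the library search path, so the
# dynamic linker resolves CUDA driver calls to ZLUDA instead of NVIDIA's
# library; ZLUDA then translates them to HIP/ROCm.
export LD_LIBRARY_PATH="$ZLUDA_DIR:$LD_LIBRARY_PATH"

# The unmodified CUDA binary would now be launched as usual, e.g.:
#   ./my_cuda_app --some-args      (hypothetical application)
echo "$LD_LIBRARY_PATH" | grep -q zluda && echo "ZLUDA path set"
```

No recompilation of the application is needed, which is why proprietary CUDA renderers work out-of-the-box.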

NVIDIA to Create AI Semi-custom Chip Business Unit

NVIDIA is reportedly working to set up a new business unit focused on designing semi-custom chips for some of its largest data-center customers, Reuters reports. NVIDIA dominates the AI HPC processor market, although even its biggest customers are having to shop from its general lineup of A100 series and H100 series HPC processors. There are reports of some of these customers venturing out of the NVIDIA fold, wanting to develop their own AI processor designs. It is to cater to exactly this segment that NVIDIA is setting up the new unit.

A semi-custom chip isn't just a bespoke chip designed to a customer's specifications. It is co-developed by NVIDIA and its customer, using mainly NVIDIA IP blocks but also integrating some third-party IP blocks the customer may want; more importantly, it lets the customer approach semiconductor fabrication companies such as TSMC, Samsung, or Intel Foundry Services for wafer allocation as an entity separate from NVIDIA. For example, a company like Google may have a certain amount of wafer pre-allocation with TSMC (e.g., for the Tensor SoCs powering its Pixel smartphones), which it may want to tap into for a semi-custom AI HPC processor for its cloud business. NVIDIA assesses a $30 billion TAM for this specific business unit: all of its current customers wanting to pursue their own AI processor projects, who will now be motivated to stick with NVIDIA.

IDC Forecasts Artificial Intelligence PCs to Account for Nearly 60% of All PC Shipments by 2027

A new forecast from International Data Corporation (IDC) shows shipments of artificial intelligence (AI) PCs - personal computers with specific system-on-a-chip (SoC) capabilities designed to run generative AI tasks locally - growing from nearly 50 million units in 2024 to more than 167 million in 2027. By the end of the forecast, IDC expects AI PCs will represent nearly 60% of all PC shipments worldwide.

"As we enter a new year, the hype around generative AI has reached a fever pitch, and the PC industry is running fast to capitalize on the expected benefits of bringing AI capabilities down from the cloud to the client," said Tom Mainelli, group vice president, Devices and Consumer Research. "Promises around enhanced user productivity via faster performance, plus lower inferencing costs, and the benefit of on-device privacy and security, have driven strong IT decision-maker interest in AI PCs. In 2024, we'll see AI PC shipments begin to ramp, and over the next few years, we expect the technology to move from niche to a majority."