News Posts matching #2024


PC Market Returns to Growth in Q1 2024 with AI PCs to Drive Further 2024 Expansion

Global PC shipments grew around 3% YoY in Q1 2024 after eight consecutive quarters of decline caused by a demand slowdown and inventory correction, according to the latest data from Counterpoint Research. The growth in Q1 2024 also came off a relatively low base in Q1 2023. The coming quarters of 2024 should see sequential shipment growth, resulting in 3% YoY growth for the full year, largely driven by AI PC momentum, shipment recovery across different sectors, and a fresh replacement cycle.

Lenovo's PC shipments were up 8% in Q1 2024 off an easy comparison from last year. The brand reclaimed a 24% market share, compared to 23% in Q1 2023. HP and Dell, with market shares of 21% and 16% respectively, remained flattish, waiting for North America to drive shipment growth in the coming quarters. Apple's shipment performance was also resilient, with 2% growth mainly supported by the M3 base models.

ASML reports €5.3 billion total net sales and €1.2 billion net income in Q1 2024

Today, ASML Holding NV (ASML) has published its 2024 first-quarter results.
  • Q1 total net sales of €5.3 billion, gross margin of 51.0%, net income of €1.2 billion
  • Quarterly net bookings in Q1 of €3.6 billion of which €656 million is EUV
  • ASML expects Q2 2024 total net sales between €5.7 billion and €6.2 billion, and a gross margin between 50% and 51%
  • ASML expects 2024 total net sales to be similar to 2023
CEO statement and outlook
"Our first-quarter total net sales came in at €5.3 billion, at the midpoint of our guidance, with a gross margin of 51.0% which is above guidance, primarily driven by product mix and one-offs. We expect second-quarter total net sales between €5.7 billion and €6.2 billion with a gross margin between 50% and 51%. ASML expects R&D costs of around €1,070 million and SG&A costs of around €295 million. Our outlook for the full year 2024 is unchanged, with the second half of the year expected to be stronger than the first half, in line with the industry's continued recovery from the downturn. We see 2024 as a transition year with continued investments in both capacity ramp and technology, to be ready for the turn in the cycle," said ASML President and Chief Executive Officer Peter Wennink.

Sony PlayStation 5 Pro Specifications Confirmed, Console Arrives Before Holidays

Thanks to detailed information obtained by The Verge, today we can confirm previously leaked details as Sony gears up to unveil the highly anticipated PlayStation 5 Pro, codenamed "Trinity." According to insider reports, Sony is urging developers to optimize their games for the PS5 Pro, with a primary focus on enhancing ray tracing capabilities. The console is expected to feature an RDNA 3 GPU with 30 WGP running BVH8, capable of 33.5 TeraFLOPS of FP32 single-precision compute, and a slightly quicker CPU running at 3.85 GHz, enabling it to render games with ray tracing enabled or achieve higher resolutions and frame rates in select titles. Sony anticipates GPU rendering on the PS5 Pro to be approximately 45 percent faster than on the standard PlayStation 5. The PS5 Pro GPU will be larger and utilize faster system memory to bolster ray tracing performance, boasting up to three times the speed of the regular PS5.

Additionally, the console will employ a more powerful ray tracing architecture, backed by PlayStation Spectral Super Resolution (PSSR), allowing developers to leverage graphics features like ray tracing more extensively. To support this endeavor, Sony is providing developers with test kits, and all games submitted for certification from August onward must be compatible with the PS5 Pro. Insider Gaming, the first to report the full PS5 Pro specs, suggests a potential release during the 2024 holiday period. The PS5 Pro will also bring changes for developers regarding system memory, with Sony increasing the memory bandwidth from 448 GB/s to 576 GB/s for greater efficiency and an even more immersive gaming experience. For AI processing, there is a custom AI accelerator capable of 300 TOPS of 8-bit INT8 compute and 67 TeraFLOPS of 16-bit FP16 compute, in addition to the ACV audio codec running up to 35% faster.
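The headline 33.5 TeraFLOPS figure can be roughly reconciled with the 30 WGP configuration. Below is a small sketch under common RDNA 3 assumptions (two CUs per WGP, 64 stream processors per CU, dual-issue FP32 FMA), none of which Sony has confirmed:

```python
# Rough reconstruction of the leaked 33.5 TFLOPS figure (assumptions, not confirmed by Sony).
wgps = 30                            # work-group processors, per the leak
cus = wgps * 2                       # RDNA-style: 2 compute units per WGP
shaders = cus * 64                   # 64 stream processors per CU -> 3840
flops_per_clock = shaders * 2 * 2    # FMA (2 ops) x RDNA 3 dual-issue (2x)

target_tflops = 33.5
implied_clock_ghz = target_tflops * 1e12 / flops_per_clock / 1e9
print(f"Implied GPU clock: ~{implied_clock_ghz:.2f} GHz")   # ~2.18 GHz
```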

Apple Preparing M4 Chips with AI Capabilities to Fight Declining Mac Sales

While everyone has been focused on shipping an AI-enhanced product recently, one tech giant didn't appear to be bothered: Apple. However, according to Mark Gurman from Bloomberg, Apple is readying an overhaul of its Apple Silicon M-series chips to embed AI processing capabilities at the processor level. As the report indicates, Apple is preparing an update for late 2024 and early 2025 with the M4 series of chips, which will reportedly feature AI processing units similar to those found in other commercial chips. There should be three tiers of the M4 series: the entry-level M4 codenamed Donan, the mid-level M4 codenamed Brava, and the high-end M4 codenamed Hydra.

Sales of Apple Macs peaked in 2022; the following year brought a sharp decline, and sales have remained flat since. AI PCs for Windows-based systems have been generating hype from all major vendors hoping to introduce AI features to end users. Apple now wants to be part of that revolution, and the company has already scheduled its Worldwide Developers Conference for June 10th. At WWDC this year, Apple is expected to show a suite of AI-powered solutions designed to improve the user experience and increase productivity. With the M4 chips gaining AI enhancements, those WWDC announcements would get extra hardware acceleration. However, we must wait for the exact announcements before making further assumptions.

Intel Discontinues 13th Generation "Raptor Lake" K-Series Overclockable CPU SKUs

Intel has decided to discontinue its entire 13th Gen Raptor Lake lineup of overclockable "K-series" CPU SKUs. According to an official product change notice, the company will stop accepting orders for chips like the Core i9-13900KS, Core i9-13900K, Core i9-13900KF, Core i7-13700K, Core i7-13700KF, Core i5-13600K, and Core i5-13600KF after May 24th, 2024. Final shipments to vendors are targeted for June 28th. After those dates, availability of the unlocked Raptor Lake processors will rapidly diminish as the remaining inventory gets sold off, possibly at inflated prices due to shortages. This discontinuation comes roughly a year and a half after Raptor Lake's launch in late 2022, which delivered further performance improvements over the previous Alder Lake generation.

Raptor Lake brought higher clocks, more cache, additional efficiency cores, and enough muscle to compete with AMD's Ryzen 7000 CPUs in many workloads. Interestingly, Intel has not yet discontinued Alder Lake, suggesting those 12th-generation chips may still be available for some time. While the death of the overclockable Raptor Lake K-series CPUs is unfortunate for enthusiasts, there is an upside—it paves the way for Intel's current generation Raptor Lake refresh, 14th generation Core processors, to clear inventory before the next-generation processors arrive. The 15th generation "Arrow Lake" Core Ultra 2 series of processors could be teased at the upcoming Computex event in June.

Intel Arc Battlemage Could Arrive Before Black Friday, Just in Time for the Holidays

According to the latest report from ComputerBase, Intel had a strong presence at the recently concluded Embedded World 2024 conference. The company officially showcased its Arc series of GPUs for the embedded market, based on the existing Alchemist chips rebranded as the "E series." However, industry whispers hint at a more significant development—the impending launch of Intel's second-generation Arc Xe² GPUs, codenamed "Battlemage," potentially before the lucrative Black Friday shopping season. While Alchemist serves as Intel's current offering for embedded applications, many companies in attendance expressed keen interest in Battlemage, the successor to Alchemist. These firms often cover a broad spectrum, from servers and desktops to notebooks and embedded systems, necessitating a hardware platform that caters to this diverse range of applications.

Officially, Intel had previously stated that Battlemage would "hopefully" arrive before CES 2025, implying a 2024 launch. However, rumors from the trade show floor suggest a more ambitious target: a release before Black Friday, which falls on November 29th this year. This timeline aligns with Intel's historical launch patterns, as the original Arc "Alchemist" desktop cards debuted in early October 2022, following a staggered and limited rollout of the A380 and the notebook GPUs earlier that year. Intel's struggles with the Alchemist launch serve as a learning experience for the company. Early promises and performance claims for the first-generation Arc GPUs failed to materialize, leading to a stuttering market introduction. This time, Intel has adopted a more reserved approach, avoiding premature and grandiose proclamations about Battlemage's capabilities.

Intel Xeon Scalable Gets a Rebrand: Intel "Xeon 6" with Granite Rapids and Sierra Forest Start a New Naming Scheme

During the Vision 2024 event, Intel announced that its upcoming Xeon processors will be branded under the new "Xeon 6" moniker. This rebranding, which replaces the previous "Nth Generation Xeon Scalable" naming, aims to simplify the company's product stack and align with the recent changes made to its consumer CPU naming scheme. The highly anticipated Sierra Forest and Granite Rapids chips will be the first processors to bear the Xeon 6 branding, and they are set to launch in the coming months. Intel has confirmed that Sierra Forest, designed entirely with efficiency cores (E-cores), remains on track for release this quarter. Supermicro has already announced early availability and remote testing programs for these chips. Intel's Sierra Forest is set to deliver a substantial leap in performance: according to the company, it will offer a 2.4X improvement in performance per watt and a staggering 2.7X better performance per rack compared to the previous generation. This means that 72 Sierra Forest server racks will provide the same performance as 200 racks equipped with older second-gen Xeon CPUs, leading to significant power savings and a boost in overall efficiency for data centers upgrading their systems.
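The rack-consolidation claim lines up with the quoted per-rack gain; a quick check using only the figures above:

```python
# Sanity check of Intel's consolidation claim: 72 Sierra Forest racks vs. 200 racks of 2nd-gen Xeons.
old_racks = 200
new_racks = 72
consolidation_ratio = old_racks / new_racks
print(f"Implied per-rack performance gain: ~{consolidation_ratio:.1f}x")  # ~2.8x, close to the quoted 2.7x
```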

Intel has also teased an exciting feature in its forthcoming Granite Rapids processors: support for the MXFP4 data format. This new precision format, backed by the Open Compute Project (OCP) and major industry players like NVIDIA, AMD, and Arm, promises substantial performance gains. It could reduce next-token latency by up to 6.5X compared to fourth-gen Xeons using FP16. Additionally, Intel stated that Granite Rapids will be capable of running 70-billion-parameter Llama-2 models, a capability that could open up new possibilities in data processing. Intel claims a 70-billion-parameter model quantized to 4-bit runs entirely on Xeon with a next-token latency of just 86 milliseconds. While Sierra Forest is slated for this quarter, Intel has not provided a specific launch timeline for Granite Rapids, stating only that it will arrive "soon after" its E-core counterpart. The Xeon 6 branding aims to simplify the product stack and clarify customer performance tiers as the company gears up for these major releases.
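To put the 70-billion-parameter claim in perspective, here is a rough, hedged sketch of the weight footprint and the effective memory bandwidth implied by an 86-millisecond next-token latency (assuming 4-bit weights and that every weight is read once per token, ignoring KV-cache and other overheads):

```python
# Back-of-envelope: memory footprint and implied bandwidth for a 70B model in MXFP4.
params = 70e9
bytes_per_param = 0.5                        # 4-bit weights (assumption)
weights_gb = params * bytes_per_param / 1e9
latency_s = 0.086                            # claimed next-token latency

implied_bandwidth = weights_gb / latency_s   # GB/s needed just to stream the weights once per token
print(f"Weights: ~{weights_gb:.0f} GB, implied read bandwidth: ~{implied_bandwidth:.0f} GB/s")
# ~35 GB of weights and ~400 GB/s of effective bandwidth per token, which is why
# MXFP4 support and high memory bandwidth matter for CPU-only inference.
```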

Intel Launches Gaudi 3 AI Accelerator: 70% Faster Training, 50% Faster Inference Compared to NVIDIA H100, Promises Better Efficiency Too

During the Vision 2024 event, Intel announced its latest Gaudi 3 AI accelerator, promising significant improvements over its predecessor. Intel claims the Gaudi 3 offers up to 70% better training performance, 50% better inference, and 40% better efficiency than NVIDIA's H100 processors. The new AI accelerator is presented as a PCIe Gen 5 dual-slot add-in card with a 600 W TDP or as an OAM module with a 900 W TDP. The PCIe card has the same peak 1,835 TeraFLOPS of FP8 performance as the OAM module despite a 300 W lower TDP. The PCIe version works in groups of four per system, while the OAM HL-325L modules can be run in an eight-accelerator configuration per server. The lower TDP will likely result in lower sustained performance, but it confirms that the same silicon is used, just fine-tuned at a lower frequency. Built on TSMC's N5 (5 nm) node, the AI accelerator features 64 Tensor Cores, delivering double the FP8 and quadruple the FP16 performance of the previous-generation Gaudi 2.

The Gaudi 3 AI chip comes with 128 GB of HBM2E offering 3.7 TB/s of bandwidth and 24× 200 Gbps Ethernet NICs, with dual 400 Gbps NICs used for scale-out. All of that is laid out across the 10 tiles that make up the Gaudi 3 accelerator. There is 96 MB of SRAM split between the two compute tiles, acting as a low-level cache that bridges data communication between the Tensor Cores and HBM memory. Intel also announced support for the new performance-boosting standardized MXFP4 data format and is developing an AI NIC ASIC for Ultra Ethernet Consortium-compliant networking. The Gaudi 3 supports clusters of up to 8,192 cards, built from 1,024 nodes of eight accelerators each. It is on track for volume production in Q3, offering a cost-effective alternative to NVIDIA accelerators with the additional promise of a more open ecosystem. More information and a deeper dive can be found in the Gaudi 3 whitepaper.
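For context, the scale-out figures above are simple to verify; a short sketch of that arithmetic (nothing here beyond the stated numbers):

```python
# Gaudi 3 scale-out arithmetic from the stated figures.
nics = 24
nic_speed_gbps = 200
aggregate_gbps = nics * nic_speed_gbps
print(f"Aggregate Ethernet per accelerator: {aggregate_gbps} Gbps (~{aggregate_gbps/8:.0f} GB/s)")  # 4800 Gbps

accelerators_per_node = 8
nodes = 1024
print(f"Maximum cluster size: {accelerators_per_node * nodes} accelerators")  # 8192
```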

Intel Arc "Battlemage" GPUs Appear on SiSoftware Sandra Database

Intel is quietly working on second-generation Arc GPUs—we have not heard much about Xe2 "Battlemage" since CES 2024. Back in January, Tom "TAP" Petersen—an Intel fellow and marketing guru—casually revealed during an interview conducted by PC World: "I'd say about 30% of our engineers are working on Battlemage, mostly on the software side because our hardware team is on the next thing (Celestial)...Battlemage already has its first silicon in the labs which is very exciting and there's more good news coming which I can't talk about right now." Intel appears to be targeting a loose late-2024 launch window; Petersen stated that he would like to see second-generation products arrive at retail before CES 2025's commencement. The SiSoftware Sandra database was updated around mid-March with two very intriguing new Intel GPU entries—test systems (built on an ASUS PRIME Z790-P WIFI mainboard) were spotted running graphics solutions equipped with 20 Xe-cores (160 EU) and 24 Xe-cores (192 EU).

Michael/miktdt commented on the freshly discovered database entries: "some smaller versions are on Sisoft...I guess they are coming. Single-float GP Compute looks quite good for just 160 VE/192 VE. Doesn't tell much about the release though, I guess anything between Q4 2024 and Q2 2025 is a possibility." Both models seem to sport 11.6 GB VRAM capacities—likely 12 GB—and 8 MB of L2 cache. Wccftech has guesstimated a potential 192-bit memory bus for these speculative lower-tier GPUs. Team Blue has a bit more tweaking to do—based on leaked figures—but time is on their side: "the performance per core for Alchemist currently sits an average of 16% faster than the alleged Battlemage GPU which isn't a big deal since driver-level optimizations and final silicon can give a huge boost when the retail products come out."
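For reference, the EU/VE counts line up with the listed Xe-core counts if each Battlemage Xe-core carries eight (wider) Vector Engines, as rumored for Xe2, versus the sixteen narrower ones per Alchemist Xe-core; a trivial check:

```python
# Xe-core to Vector Engine (EU) mapping, assuming 8 VEs per Xe2 Xe-core (rumored, unconfirmed).
for xe_cores in (20, 24):
    print(f"{xe_cores} Xe-cores -> {xe_cores * 8} Vector Engines")   # 160 and 192
```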

Samsung Introduces "Petabyte SSD as a Service" at GTC 2024, "Petascale" Servers Showcased

Leaked Samsung PBSSD presentation material popped up online a couple of days prior to the kick-off day of NVIDIA's GTC 2024 conference (March 18)—reports (at the time) jumped on the potential introduction of a "petabyte (PB)-level SSD solution," alongside an enterprise subscription service for the US market. Tom's Hardware took the time to investigate this matter—in-person—on the showroom floor up in San Jose, California. It turns out that interpretations of pre-event information were slightly off—according to on-site investigations: "despite the name, PBSSD is not a petabyte-scale solid-state drive (Samsung's highest-capacity drive can store circa 240 TB), but rather a 'petascale' storage system that can scale-out all-flash storage capacity to petabytes."

Samsung showcased a Supermicro Petascale server design, but a lone unit is nowhere near capable of providing a petabyte of storage—the Tom's Hardware reporter found out that the demonstration model housed: "sixteen 15.36 TB SSDs, so for now the whole 1U unit can only pack up to 245.76 TB of 3D NAND storage (which is pretty far from a petabyte), so four of such units will be needed to store a petabyte of data." Company representatives also had another Supermicro product at their booth: "(an) H13 all-flash petascale system with CXL support that can house eight E3.S SSDs (with) four front-loading E3.S CXL bays for memory expansion."
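The capacity arithmetic behind those statements is straightforward; a quick check of the quoted figures:

```python
# Capacity check for the showcased Supermicro Petascale 1U unit.
drives = 16
drive_tb = 15.36
unit_tb = drives * drive_tb
print(f"Per 1U unit: {unit_tb:.2f} TB")          # 245.76 TB
print(f"Four units:  {4 * unit_tb:.2f} TB")      # 983.04 TB, i.e. roughly a petabyte
```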

Qualcomm Believes that Most Windows Games Will Run on Snapdragon X Elite

Qualcomm's "Windows on Snapdragon, a Platform Ready for your PC Games" GDC presentation attracted a low number of attendees according to The Verge's Sean Hollister (senior editor). The semiconductor firm is readying its Snapdragon X Elite mobile chipset family for launch around mid-2024—prototypes and reference devices have been popping up lately. Leaked Samsung Galaxy Book 4 Edge specs suggest that Qualcomm's ARM-based solution is ready to take on Apple's M3 chipset series. Gaming is not a major priority for many owners of slimline notebooks, but Apple has made efforts to unleash some of its silicon's full potential in that area. Snapdragon Studios' GDC showcase outlined a promising future for their X Elite chips—according to Hollister's coverage of the GDC session, Qualcomm's principal engineer told "game developers (that) their titles should already work on a wave of upcoming Snapdragon-powered Windows laptops—no porting required."

Issam Khalil's presentation covered three different porting paths: x64 emulation, native ARM64, and a hybrid ARM64EC approach that mixes native and emulated code. He demonstrated tools that will be available for game developers to start enabling titles on Windows-on-Snapdragon platforms. The Snapdragon X Elite is capable of running x86/64 games at "close to full speed" via emulation, as claimed in the presentation slides. Khalil posits that developers are not required to change the code or assets of their games to achieve near-full-speed performance. Adreno GPU drivers have been prepped for DX11, DX12, Vulkan, and OpenCL, while mapping layers grant support for DX9 and OpenGL (up to v4.6). Specific titles were not highlighted as fully operational on Snapdragon X Elite-based devices, but the team has spent time combing through "top games" on Steam.

AI-Capable PCs Forecast to Make Up 40% of Global PC Shipments in 2025

Canalys' latest forecast predicts that an estimated 48 million AI-capable PCs will ship worldwide in 2024, representing 18% of total PC shipments. But this is just the start of a major market transition, with AI-capable PC shipments projected to surpass 100 million in 2025, 40% of all PC shipments. In 2028, Canalys expects vendors to ship 205 million AI-capable PCs, representing a staggering compound annual growth rate of 44% between 2024 and 2028.
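The forecast numbers are internally consistent; a short sketch deriving the implied total PC market and checking the stated CAGR:

```python
# Sanity checks on the Canalys AI-capable PC forecast.
ai_2024, share_2024 = 48e6, 0.18
ai_2025, share_2025 = 100e6, 0.40
ai_2028 = 205e6

print(f"Implied total 2024 shipments: ~{ai_2024/share_2024/1e6:.0f} million")  # ~267 million
print(f"Implied total 2025 shipments: ~{ai_2025/share_2025/1e6:.0f} million")  # ~250 million

cagr = (ai_2028 / ai_2024) ** (1 / 4) - 1      # 2024 -> 2028 spans four growth periods
print(f"Implied 2024-2028 CAGR: ~{cagr*100:.0f}%")                             # ~44%
```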

These PCs, integrating dedicated AI accelerators, such as Neural Processing Units (NPUs), will unlock new capabilities for productivity, personalization and power efficiency, disrupting the PC market and delivering significant value gains to vendors and their partners.

Sony PlayStation 5 Pro Details Emerge: Faster CPU, More System Bandwidth, and Better Audio

Sony is preparing to launch its next-generation PlayStation 5 Pro console in the fall of 2024, right around the holidays. We previously covered a few graphics details about the console; however, today we get more details about the CPU and the overall system, thanks to exclusive information from Insider Gaming. Starting off, the sources indicate that the PS5 Pro's system memory will get a 28% bump in bandwidth, from the standard PS5's 448 GB/s to 576 GB/s. The memory system is apparently also more efficient, likely thanks to an upgrade over the GDDR6 SDRAM of the regular PS5. The next upgrade is the CPU, which gains a special high-frequency mode. The CPU microarchitecture is likely the same, with clocks pushed to 3.85 GHz, a 10% frequency increase.

However, this is only achieved in the "High CPU Frequency Mode," which shifts SoC power away from the GPU, downclocking it slightly, to give the CPU more headroom in CPU-intensive scenarios. The GPU, as previously discussed, is an RDNA 3 design with up to 45% faster graphics rendering. Ray tracing performance can be up to four times higher than on the regular PS5, while the entire GPU delivers 33.5 TeraFLOPS of FP32 single-precision compute. This comes from 30 WGPs running BVH8 shaders versus the 18 WGPs running BVH4 shaders on the regular PS5. The PSSR upscaler is present, and the GPU will be able to output at 8K resolution via future software updates. Last but not least, on the AI front there is a custom AI accelerator capable of 300 TOPS of 8-bit INT8 compute and 67 TeraFLOPS of 16-bit FP16 compute. Audio codecs are getting some love as well, with ACV running up to 35% faster.
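The percentage claims in this report follow directly from the raw figures; a quick check (the 3.5 GHz base clock of the standard PS5's CPU is assumed, as it is not restated above):

```python
# Quick checks of the PS5 Pro percentage claims.
bw_old, bw_new = 448, 576               # GB/s
print(f"Bandwidth gain: ~{(bw_new/bw_old - 1)*100:.0f}%")     # ~29%, in line with the quoted 28% bump

cpu_old, cpu_new = 3.5, 3.85            # GHz; 3.5 GHz is the standard PS5's clock (assumption here)
print(f"CPU clock gain: ~{(cpu_new/cpu_old - 1)*100:.0f}%")   # 10%
```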

2024 HBM Supply Bit Growth Estimated to Reach 260%, Making Up 14% of DRAM Industry

TrendForce reports that significant capital investments have occurred in the memory sector due to the high ASP and profitability of HBM. Senior Vice President Avril Wu notes that by the end of 2024, the DRAM industry is expected to allocate approximately 250K wafer starts per month (14% of total DRAM capacity) to HBM TSV production, with estimated annual supply bit growth of around 260%. Additionally, HBM's revenue share within the DRAM industry—around 8.4% in 2023—is projected to increase to 20.1% by the end of 2024.

HBM supply tightens with order volumes rising continuously into 2024
Wu explains that in terms of production differences between HBM and DDR5, the die size of HBM is generally 35-45% larger than DDR5 of the same process and capacity (for example, 24Gb compared to 24Gb). The yield rate (including TSV packaging) for HBM is approximately 20-30% lower than that of DDR5, and the production cycle (including TSV) is 1.5 to 2 months longer than DDR5.
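Taken together, these production differences mean a wafer allocated to HBM yields far fewer good bits than one allocated to DDR5, which helps explain both the capacity allocation and the ASP premium. A rough sketch using the midpoints of the quoted ranges (our simplification):

```python
# Rough estimate of good bits per wafer for HBM relative to DDR5 (midpoints of quoted ranges).
die_size_penalty = 1.40        # HBM die ~40% larger -> ~1/1.4 as many dice per wafer
yield_penalty = 0.75           # HBM yield ~25% lower than DDR5

relative_good_bits = (1 / die_size_penalty) * yield_penalty
print(f"HBM good bits per wafer vs. DDR5: ~{relative_good_bits:.2f}x")   # ~0.54x, i.e. roughly half

hbm_capacity_wpm = 250_000     # wafer starts per month earmarked for HBM TSV (14% of total)
print(f"Implied total DRAM capacity: ~{hbm_capacity_wpm/0.14/1e6:.2f}M wafers/month")  # ~1.79M
```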

NVIDIA B100 "Blackwell" AI GPU Technical Details Leak Out

Jensen Huang's opening GTC 2024 keynote is scheduled to happen tomorrow afternoon (13:00 Pacific time)—many industry experts believe that the NVIDIA boss will take the stage and formally introduce his company's B100 "Blackwell" GPU architecture. An enlightened few have been treated to preview (AI and HPC) units—including Dell's CEO, Jeff Clarke—but pre-introduction leaks have not flowed out. Team Green is likely enforcing strict conditions upon a fortunate selection of trusted evaluators, within a pool of ecosystem partners and customers.

Today, a brave soul has broken that silence: tech tipster AGF/XpeaGPU, despite joking about repercussions from the leather-jacketed one, revealed a handful of technical details a day prior to Team Green's highly anticipated unveiling: "I don't want to spoil NVIDIA B100 launch tomorrow, but this thing is a monster. 2 dies on (TSMC) CoWoS-L, 8x8-Hi HBM3E stacks for 192 GB of memory." They also crystal balled an inevitable follow-up card: "one year later, B200 goes with 12-Hi stacks and will offer a beefy 288 GB. And the performance! It's... oh no Jensen is there... me run away!" Reuters has also joined in on the fun, with some predictions and insider information: "NVIDIA is unlikely to give specific pricing, but the B100 is likely to cost more than its predecessor, which sells for upwards of $20,000." Enterprise products are expected to arrive first—possibly later this year—followed by gaming variants, maybe months later.
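As an aside, the leaked memory figures are internally consistent when "8x8-Hi" is read as eight stacks of eight dies each; a quick sketch of the per-stack math (the per-die capacity is our inference, not part of the leak):

```python
# Per-stack arithmetic for the leaked B100/B200 HBM3E configurations (eight stacks assumed).
stacks = 8
configs = (("B100", 192, 8), ("B200", 288, 12))   # name, total GB, stack height (Hi)

for name, total_gb, hi in configs:
    per_stack = total_gb / stacks
    per_die = per_stack / hi
    print(f"{name}: {per_stack:.0f} GB per stack, {per_die:.0f} GB per die")  # 24 GB / 3 GB and 36 GB / 3 GB
```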

US Government to Announce Massive Grant for Intel's Arizona Facility

According to the latest report from Reuters, the US government is preparing to announce a multi-billion-dollar grant for Intel's chip manufacturing operations in Arizona next week, possibly worth more than $10 billion. US President Joe Biden and Commerce Secretary Gina Raimondo will make the announcement, which is part of the 2022 CHIPS and Science Act aimed at expanding US chip production and reducing dependence on manufacturing in China and Taiwan. The exact amount of the grant has yet to be confirmed, but rumors suggest it could exceed $10 billion, making it the most significant award yet under the CHIPS Act. The funding will include grants and loans to bolster Intel's competitive position and support the company's US semiconductor manufacturing expansion plans. This comes as a surprise just a day after the Pentagon reportedly backed out of a $2.5 billion investment in Intel as part of a secret defense grant.

Intel has been investing significantly in its US expansion, recently opening a $3.5 billion advanced packaging facility in New Mexico intended to produce advanced packaging technologies like Foveros and EMIB. The chipmaker is also expanding its semiconductor manufacturing capacity in Arizona, with plans to build new fabs in the state. Arizona is quickly becoming a significant hub for semiconductor manufacturing in the United States; in addition to Intel's expansion, Taiwan Semiconductor Manufacturing Company (TSMC) is also building new fabs in the state, attracting supply partners to the region. The CHIPS Act allocates $39 billion for semiconductor production and $11 billion for research and development. The Intel grant will likely fall under the production portion, as Team Blue has been reshaping its business into the Intel Product and Intel Foundry segments.

The SEA Projects Prepare Europe for Exascale Supercomputing

The HPC research projects DEEP-SEA, IO-SEA and RED-SEA are wrapping up this month after a three-year project term. The three projects worked together to develop key technologies for European Exascale supercomputers, based on the Modular Supercomputing Architecture (MSA), a blueprint architecture for highly efficient and scalable heterogeneous Exascale HPC systems. To achieve this, the three projects collaborated on system software and programming environments, data management and storage, as well as interconnects adapted to this architecture. The results of their joint work will be presented at a co-design workshop and poster session at the EuroHPC Summit (Antwerp, 18-21 March, www.eurohpcsummit.eu).

JEDEC Reportedly Finalizing LPDDR6 Standard for Mobile Platforms

JEDEC is expected to announce a next-generation low-power DRAM (LPDDR) standard specification by the third quarter of this year. Earlier today, smartphone technology watcher Revegnus highlighted insider information disclosed within an ETnews article. The international semiconductor standards organization has recently concluded discussions regarding "next-generation mobile RAM standards"—the report posits that "more than 60 people from memory, system semiconductor, and design asset (IP) companies participated" in a meeting held in Lisbon, Portugal. A quoted participant stated (to ETnews): "We have held various discussions to confirm the LPDDR6 standard specification...(Details) will be released in the third quarter of this year."

The current-generation LPDDR5 standard was finalized back in February 2019—noted improvements included a 50% performance jump and 30% better power efficiency over LPDDR4. Samsung Electronics and SK Hynix are mass-producing incremental improvements in the form of LPDDR5X and LPDDR5T. A second source stated: "Technology development and standard discussions are taking place in a way to minimize power consumption, which increases along with the increase in data processing." A full-fledged successor is tasked with further enhancing data processing performance. Industry figures anticipate that LPDDR6 will greatly assist an industry-wide push for "on-device AI" processing; they reckon that "large-scale AI calculations" will become the norm on smartphones, laptops, and tablet PCs. Revegnus has heard (fanciful) whispers about a potential 2024 rollout: "support may be available starting with Qualcomm's Snapdragon 8 Gen 4, expected to be released as early as the second half of this year." Sensible predictions point to possible commercialization in late 2025 or early 2026.

Intel 14A Node Delivers 15% Improvement over 18A, 14A-E Adds Another 5%

Intel is revamping its foundry play, and the company is set on its goal of becoming a strong contender to rivals such as TSMC and Samsung. Under Pat Gelsinger's leadership, Intel recently split (virtually, under the same company) its operations into Intel Product and Intel Foundry. During the SPIE 2024 conference for optics and photonics, Anne Kelleher, Intel's senior vice president, revealed that the 14A (1.4 nm) process offers a 15% performance-per-watt improvement over the company's 18A (1.8 nm) process. Additionally, the enhanced 14A-E process, a smaller refresh, boasts a further 5% improvement over the regular 14A node. Intel's 14A process is set to be the first to utilize High-NA extreme ultraviolet (EUV) equipment, delivering a 20% increase in transistor logic density compared to the 18A node.
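If the 5% gain for 14A-E is taken on top of the 14A figure, the cumulative improvement over 18A compounds as follows (simple arithmetic on the stated numbers):

```python
# Compounding the stated performance-per-watt gains over 18A.
gain_14a = 1.15        # 14A vs. 18A
gain_14a_e = 1.05      # 14A-E vs. 14A
print(f"14A-E vs. 18A: ~{(gain_14a * gain_14a_e - 1)*100:.0f}%")   # ~21%
```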

The company's aggressive pursuit of next-generation processes poses a significant threat to Samsung Electronics, which currently holds the second position in the foundry market. As part of its IDM 2.0 strategy, Intel hopes to reclaim its position as a leading foundry player and surpass Samsung by 2030. The company's collaboration with American companies, such as Microsoft, further solidifies its ambitions. Intel has already secured a $15 billion chip production contract with Microsoft for its 1.8 nm 18A process. The semiconductor industry is closely monitoring Intel's progress, as the company's advancements in process technology could potentially reshape the competitive landscape. With Samsung planning to mass-produce 2 nm process products next year, the race for dominance in the foundry market is heating up.

SK Hynix To Invest $1 Billion into Advanced Chip Packaging Facilities

Lee Kang-Wook, Vice President of Research and Development at SK Hynix, has discussed the increased importance of advanced chip packaging with Bloomberg News. In an interview with the media company's business section, Lee referred to a tradition of prioritizing the design and fabrication of chips: "the first 50 years of the semiconductor industry has been about the front-end." He believes that the latter half of production processes will take precedence in the future: "...but the next 50 years is going to be all about the back-end." He outlined a "more than $1 billion" investment into South Korean facilities—his department is hoping to "improve the final steps" of chip manufacturing.

SK Hynix's Head of Packaging Development pioneered a novel method of packaging the third generation of high-bandwidth memory (HBM2E)—that innovation secured NVIDIA as a high-profile, long-term customer. Demand for Team Green's AI GPUs has boosted the significance of HBM technologies—Micron and Samsung are attempting to play catch-up with new designs. South Korea's leading memory supplier is hoping to stay ahead in the next-generation HBM contest—12-layer fifth-generation (HBM3E) samples have reportedly been submitted to NVIDIA for approval. SK Hynix's Vice President recently revealed that HBM production volumes for 2024 have sold out, and company leadership is currently considering the next steps for market dominance in 2025. The majority of the firm's newly announced $1 billion budget will be spent on the advancement of MR-MUF and TSV technologies, according to their R&D chief.

Jensen Huang Celebrates Rise of Portable AI Workstations

2024 will be the year generative AI gets personal, the CEOs of NVIDIA and HP said today in a fireside chat, unveiling new laptops that can build, test and run large language models. "This is a renaissance of the personal computer," said NVIDIA founder and CEO Jensen Huang at HP Amplify, a gathering in Las Vegas of about 1,500 resellers and distributors. "The work of creators, designers and data scientists is going to be revolutionized by these new workstations."

Greater Speed and Security
"AI is the biggest thing to come to the PC in decades," said HP's Enrique Lores, in the runup to the announcement of what his company billed as "the industry's largest portfolio of AI PCs and workstations." Compared to running their AI work in the cloud, the new systems will provide increased speed and security while reducing costs and energy, Lores said in a keynote at the event. New HP ZBooks provide a portfolio of mobile AI workstations powered by a full range of NVIDIA RTX Ada Generation GPUs. Entry-level systems with the NVIDIA RTX 500 Ada Generation Laptop GPU let users run generative AI apps and tools wherever they go. High-end models pack the RTX 5000 to deliver up to 682 TOPS, so they can create and run LLMs locally, using retrieval-augmented generation (RAG) to connect to their content for results that are both personalized and private.

Samsung Foundry Renames 3 nm Process to 2 nm Amid Competition with Intel

In a move that could intensify competition with Intel in the cutting-edge chip manufacturing space, Samsung Foundry has reportedly decided to rebrand its second-generation 3 nm-class fabrication technology, previously known as SF3, as a 2 nm-class manufacturing process called SF2. According to reports from ZDNet, the renaming of Samsung's SF3 to SF2 is likely an attempt by the South Korean tech giant to simplify its process nomenclature and better compete against Intel Foundry, at least visually. Intel is set to roll out its Intel 20A production node, a 2 nm-class technology, later this year. The reports suggest that Samsung has already notified its customers about the changes in its roadmap and the renaming of SF3 to SF2. Significantly, the company has reportedly gone as far as re-signing contracts with customers that initially intended to use the SF3 production node.

"We were informed by Samsung Electronics that the 2nd generation 3 nm [name] is being changed to 2 nm," an unnamed source noted to ZDNet. "We had contracted Samsung Foundry for the 2nd generation 3 nm production last year, but we recently revised the contract to change the name to 2 nm." Despite the name change, Samsung's SF3, now called SF2, has not undergone any actual process technology alterations. This suggests that the renaming is primarily a marketing move, as using a different process technology would require customers to rework their chip designs entirely. Samsung intends to start manufacturing chips based on the newly named SF2 process in the second half of 2024. The SF2 technology, which employs gate-all-around (GAA) transistors that Samsung brands as Multi-Bridge-Channel Field Effect Transistors (MBCFET), does not feature a backside power delivery network (BSPDN), a significant advantage of Intel's 20A process. Samsung Foundry has not officially confirmed the renaming.

Intel Sets 100 Million CPU Supply Goal for AI PCs by 2025

Intel has been hyping up its artificial intelligence-augmented processor products since late last year—its "AI Everywhere" marketing push started with the official launch of Intel Core Ultra mobile CPUs, AKA the much-delayed Meteor Lake processor family. CEO Pat Gelsinger stated (mid-December 2023): "AI innovation is poised to raise the digital economy's impact up to as much as one-third of global gross domestic product...Intel is developing the technologies and solutions that empower customers to seamlessly integrate and effectively run AI in all their applications—in the cloud and, increasingly, locally at the PC and edge, where data is generated and used." Team Blue's presence at this week's MWC Barcelona 2024 event introduced "AI Everywhere Across Network, Edge, Enterprise."

Nikkei Asia sat down with Intel's David Feng, Vice President of Client Computing Group and General Manager of Client Segments. The impressively job-titled executive discussed the "future of AI PCs" and set some lofty sales goals for his firm. According to the Nikkei report, Intel leadership expects to "deliver 40 million AI PCs" this year and a further 60 million units next year—representing "more than 20% of the projected total global PC market in 2025." Feng and his colleagues predict that mainstream customers will prefer local "on-device" AI solutions (enabled by NPUs) over remote cloud services. Significant edge AI improvements are expected to arrive with the next-generation Lunar Lake and Arrow Lake processor families, the latter of which will bring Team Blue's NPU technology to desktop platforms; AMD's Ryzen 8000G series of AM5 APUs launched with XDNA engines last month.
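As a cross-check on the shipment goals cited above (simple arithmetic, using only the numbers reported by Nikkei):

```python
# Cross-check of Intel's AI PC shipment goals against the stated market share.
units_2024, units_2025 = 40e6, 60e6
total_goal = units_2024 + units_2025
print(f"Cumulative 2024-2025 AI PC goal: {total_goal/1e6:.0f} million")        # 100 million

share_2025 = 0.20   # "more than 20%" of the projected 2025 PC market
implied_market = units_2025 / share_2025
print(f"Implied 2025 PC market size: under ~{implied_market/1e6:.0f} million units")  # < ~300 million
```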

Xbox & Microsoft Schedule GDC 2024 Presentations

As GDC, the world's largest game developer conference, returns to San Francisco, Microsoft and Xbox will be there to engage and empower developers, publishers, and technology partners across the industry. We are committed to supporting game developers on any platform, anywhere in the world, at every stage of development. Our message is simple: Microsoft and Xbox are here to help power your games and empower your teams. From March 18 - 22, the Xbox Lobby Lounge in the Moscone Center South can't be missed—an easy meeting point, and a first step toward learning more about the ID@Xbox publishing program, the Developer Acceleration Program (DAP) for underrepresented creators, Azure cloud gaming services, and anything else developers might need.

GDC features dozens of speakers from across Xbox, Activision, Blizzard, King, and ZeniMax who will demonstrate groundbreaking in-game innovations and share community-building strategies. Microsoft technology teams, with support from partners, will also host talks that spotlight new tools, software, and services that increase developer velocity, grow player engagement, and help creators succeed.

Supermicro Accelerates Performance of 5G and Telco Cloud Workloads with New and Expanded Portfolio of Infrastructure Solutions

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, delivers an expanded portfolio of purpose-built infrastructure solutions to accelerate performance and increase efficiency in 5G and telecom workloads. With one of the industry's most diverse offerings, Supermicro enables customers to expand public and private 5G infrastructures with improved performance per watt and support for new and innovative AI applications. As a long-term advocate of open networking platforms and a member of the O-RAN Alliance, Supermicro's portfolio incorporates systems featuring 5th Gen Intel Xeon processors, AMD EPYC 8004 Series processors, and the NVIDIA Grace Hopper Superchip.

"Supermicro is expanding our broad portfolio of sustainable and state-of-the-art servers to address the demanding requirements of 5G and telco markets and Edge AI," said Charles Liang, president and CEO of Supermicro. "Our products are not just about technology, they are about delivering tangible customer benefits. We quickly bring data center AI capabilities to the network's edge using our Building Block architecture. Our products enable operators to offer new capabilities to their customers with improved performance and lower energy consumption. Our edge servers contain up to 2 TB of high-speed DDR5 memory, 6 PCIe slots, and a range of networking options. These systems are designed for increased power efficiency and performance-per-watt, enabling operators to create high-performance, customized solutions for their unique requirements. This reassures our customers that they are investing in reliable and efficient solutions."