News Posts matching #next-gen


AMD Ryzen 9000X3D Series to Keep the Same 64 MB 3D V-Cache Capacity, Offer Overclocking

AMD is preparing to release its next generation of high-performance CPUs, the Ryzen 9000X3D series, and rumors are circulating about potential increases in stacked L3 cache. However, a recent report from Wccftech suggests that the upcoming models will maintain the same 64 MB of additional 3D V-cache as their predecessors. The X3D moniker represents AMD's 3D V-Cache technology, which vertically stacks an extra L3 cache on top of one CPU chiplet. This design has proven particularly effective in enhancing gaming performance, leading AMD to market these processors as the "ultimate gaming" solutions. According to the latest information, the potential Ryzen 9 9950X3D would feature 16 Zen 5 cores with a total of 128 (64+64) MB L3 cache, while a Ryzen 9 9900X3D would offer 12 cores with the same cache capacity. The Ryzen 7 9800X3D is expected to provide 96 (32+64) MB of total L3 cache.
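As a quick sketch, the reported totals are consistent with 32 MB of base L3 per CCD plus one 64 MB V-Cache stack; the CCD layout below is inferred from the figures, not stated in the report:

```python
# Sanity-check the rumored Ryzen 9000X3D L3 cache totals (figures from the report).
BASE_L3_PER_CCD_MB = 32      # base L3 per Zen 5 CCD
V_CACHE_MB = 64              # stacked 3D V-Cache on one CCD

def total_l3(ccds: int, vcache_ccds: int = 1) -> int:
    """Total L3 = base L3 across all CCDs + V-Cache on the stacked CCD(s)."""
    return ccds * BASE_L3_PER_CCD_MB + vcache_ccds * V_CACHE_MB

print(total_l3(ccds=2))  # Ryzen 9 9950X3D / 9900X3D: 128 MB (64+64)
print(total_l3(ccds=1))  # Ryzen 7 9800X3D: 96 MB (32+64)
```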

Regarding L2, the CPUs feature 1 MB of L2 cache per core. Perhaps the most exciting development for overclockers is the reported inclusion of full overclocking support in the new X3D series. This marks a significant evolution from the limited options available in previous generations, potentially allowing enthusiasts to push these gaming-focused chips to new heights of performance. While the release date for the Ryzen 9000X3D series remains unconfirmed, industry speculation suggests a launch window as early as September or October. This timing would coincide with the release of new X870(E) chipset motherboards. Many PC enthusiasts will likely wait to pair the next-gen CPUs with new motherboards, making this a significant upgrade cycle.

Getac Unveils High-Performance F110 Rugged Tablet for Field Professionals

Getac today launched the next generation F110 tablet, which combines fully rugged reliability with a host of powerful new upgrades for exceptional performance and efficiency in the field. Getac's flagship F110 model has long been at the forefront of rugged tablet design, making it incredibly popular with customers across multiple sectors and industries. The new F110 builds on this legacy, offering upgraded processing power, brightness, and connectivity, alongside excellent energy efficiency, for full-shift performance in a variety of challenging indoor/outdoor working environments.

Powerful and versatile
Key features include an upgraded 13th Gen Intel Core i5/i7 processor, with Intel UHD Graphics offering new levels of processing speed and graphical performance. Elsewhere, the ultra-bright 1,200 nit LumiBond screen, with multitouch modes (touch, glove, pen) - the brightest ever available on the F110 - optimizes productivity in weather conditions ranging from full sun to rain and snow. For maximum mobility and productivity, the F110 can be used with a wide range of Getac accessories, including a detachable keyboard, hard carry handle, and secure vehicle docks.

OpenAI Co-Founder Ilya Sutskever Launches a New Venture: Safe Superintelligence Inc.

OpenAI's co-founder and ex-chief scientist, Ilya Sutskever, has announced the formation of a new company promising a safe path to artificial superintelligence (ASI). Called Safe Superintelligence Inc. (SSI), the company has a simple mission: achieving ASI with safety at the front. "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," notes the SSI website, adding that "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

Interestingly, safety is a concern only a few frontier AI labs prioritize. In recent history, OpenAI's safety team made headlines for being neglected, and the company's safety lead, Jan Leike, publicly criticized its safety practices before moving to Anthropic. Anthropic is focused on providing safe AI models, with its Claude Opus being one of the leading AI models to date. What will come out of SSI? We still don't know. However, given the founding team of Ilya Sutskever, Daniel Gross, and Daniel Levy, we assume they have attracted best-in-class talent for developing next-generation AI models with a focus on safety. With offices in Palo Alto and Tel Aviv, SSI can tap a vast network of AI researchers and policymakers to establish safe ASI, free from short-term commercial pressure and focused on research and development. "Our team, investors, and business model are all aligned to achieve SSI," says the SSI website.

Gears of War: E-Day To Feature Ray Traced Lighting, Reflections, and Shadows

A few days ago at the Xbox Games Showcase, Microsoft pulled off a rather neat surprise, announcing Gears of War: E-Day, which will be developed by The Coalition in Unreal Engine 5. Microsoft did not go into much detail, revealing just a few story beats about this prequel to the first Gears of War and releasing the official announcement trailer. Thankfully, Microsoft and the developer have now shared more story and technical details, also confirming that the game will support ray traced lighting, reflections, and shadows.

In an extensive blog post over at Xbox Wire, Microsoft and The Coalition discuss the story behind Gears of War: E-Day, which will follow younger versions of the original heroes, Marcus Fenix and Dom Santiago, going back to where it all started with the fight against the Locust invasion. Creative Director Matt Searcy and Brand Director Nicole Fawcette were keen to note that the developer is working hard to recreate and improve the third-person action, storytelling, and everything else we expect from a Gears of War game.

Western Digital Introduces New Enterprise AI Storage Solutions and AI Data Cycle Framework

Fueling the next wave of AI innovation, Western Digital today introduced a six-stage AI Data Cycle framework that defines the optimal storage mix for AI workloads at scale. This framework will help customers plan and develop advanced storage infrastructures to maximize their AI investments, improve efficiency, and reduce the total cost of ownership (TCO) of their AI workflows. AI models operate in a continuous loop of data consumption and generation - processing text, images, audio and video among other data types while simultaneously producing new unique data. As AI technologies become more advanced, data storage systems must deliver the capacity and performance to support the computational loads and speeds required for large, sophisticated models while managing immense volumes of data. Western Digital has strategically aligned its Flash and HDD product and technology roadmaps to the storage requirements of each critical stage of the cycle, and today introduced a new industry-leading, high-performance PCIe Gen 5 SSD to support AI training and inference; a high-capacity 64 TB SSD for fast AI data lakes; and the world's highest capacity ePMR, UltraSMR 32 TB HDD for cost-effective storage at scale.

"There's no doubt that Generative AI is the next transformational technology, and storage is a critical enabler. The implications for storage are expected to be significant as the role of storage, and access to data, influences the speed, efficiency and accuracy of AI Models, especially as larger and higher-quality data sets become more prevalent," said Ed Burns, Research Director at IDC. "As a leader in Flash and HDD, Western Digital has an opportunity to benefit in this growing AI landscape with its strong market position and broad portfolio, which meets a variety of needs within the different AI data cycle stages."

MONTECH New Cases Covering "All Sizes" Pictured at Computex

MONTECH's booth at Computex 2024 is full of new and updated products; in the PC cases area we found four new models: SKY 3, HS01, HS01 Mini, and KING 95 MEGA. The SKY 3 has a curved glass front and two small 80 mm fans at the bottom front to better cool the graphics card. The HS01 has a special inset tray for the motherboard. The HS01 Mini is a compact case with a clever interior and a repositionable power supply. Lastly, the KING 95 MEGA is a big dual-chamber case that promises excellent cooling, with space for a large 420 mm radiator on top and a 360 mm radiator at the bottom. It also has a movable side fan bracket to keep the graphics card cool.

The SKY 3 is the newest mid-sized case in the SKY series. The front has a curved glass panel, and it can fit the newest big graphics cards like the GeForce RTX 4090. For cooling, it can hold a 360 mm radiator on top and a 240 mm radiator on the side. At the bottom front, it has two small 80 mm fans that push air toward the graphics card. It also comes with two RX 140 mm fans near the motherboard and one colorful AX 140 mm ARGB PWM fan at the back, so it has good airflow right from the start. The SKY 3 can also work with motherboards that have their connectors on the back. MONTECH plans to sell the SKY 3 in Q2 of 2025; the black version will cost $119, and the white one a bit more at $125.

Micron Samples Next-Gen GDDR7 Graphics Memory for Gaming and AI, Over 1.5 TB/s of System Bandwidth

Micron Technology, Inc., today announced the sampling of its next-generation GDDR7 graphics memory with the industry's highest bit density. Leveraging Micron's 1β (1-beta) DRAM technology and innovative architecture, Micron GDDR7 delivers 32 Gb/s high-performance memory in a power-optimized design. With over 1.5 TB/s of system bandwidth, which is up to 60% higher bandwidth than GDDR6, and four independent channels to optimize workloads, Micron GDDR7 memory enables faster response times, smoother gameplay and reduced processing times.
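The bandwidth claim can be sanity-checked with simple arithmetic; the 384-bit bus width below is an assumed high-end GPU configuration, and the 20 Gb/s GDDR6 pin rate a typical comparison point, neither stated by Micron:

```python
# Rough system-bandwidth estimate for GDDR7 at 32 Gb/s per pin.
def system_bandwidth_gbps(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Aggregate bandwidth in GB/s: per-pin rate times bus width, bits -> bytes."""
    return pin_rate_gbps * bus_width_bits / 8

gddr7 = system_bandwidth_gbps(32, 384)   # 1536 GB/s, i.e. over 1.5 TB/s
gddr6 = system_bandwidth_gbps(20, 384)   # 960 GB/s at a 20 Gb/s GDDR6 pin rate
print(f"GDDR7: {gddr7} GB/s, GDDR6: {gddr6} GB/s, uplift: {gddr7 / gddr6 - 1:.0%}")
```

The 60% uplift quoted in the announcement falls out directly from the 32 vs. 20 Gb/s pin-rate ratio.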

GDDR7 also provides a greater than 50% power-efficiency improvement compared to GDDR6 to improve thermals and lengthen battery life, while the new sleep mode reduces standby power by up to 70%. Advanced reliability, availability and serviceability (RAS) features on Micron GDDR7 enhance device dependability and data integrity without compromising performance, broadening the spectrum of applications for Micron GDDR7 to AI, gaming and high-performance computing workloads.

Micron DRAM Production Plant in Japan Faces Two-Year Delay to 2027

Last year, Micron unveiled plans to construct a cutting-edge DRAM factory in Hiroshima, Japan. However, the project has faced a significant two-year delay, pushing back the initial timeline for mass production of the company's most advanced memory products. Originally slated to begin mass production by the end of 2025, Micron now aims to have the new facility operational by 2027. The complexity of integrating extreme ultraviolet lithography (EUV) equipment, which enables the production of highly advanced chips, has contributed to the delay. The Hiroshima plant will produce next-generation 1-gamma DRAM and high-bandwidth memory (HBM) designed for generative AI applications. Micron expects the HBM market, currently dominated by rivals SK Hynix and Samsung, to experience rapid growth, with the company targeting a 25% market share by 2025.

The project is expected to cost between 600 and 800 billion Japanese yen ($3.8 to $5.1 billion), with Japan's government covering one-third of the cost. Micron has received a subsidy of up to 192 billion yen ($1.2 billion) for construction and equipment, as well as a subsidy to cover half of the necessary funding to produce HBM at the plant, amounting to 25 billion yen ($159 million). Despite the delay, the increased investment in the factory reflects Micron's commitment to advancing its memory technology and capitalizing on the growing demand for HBM. An indication of that is the fact that customers have pre-ordered 100% of the HBM capacity for 2024, not leaving a single HBM die unsold.

SpiNNcloud Systems Announces First Commercially Available Neuromorphic Supercomputer

Today, in advance of ISC High Performance 2024, SpiNNcloud Systems announced the commercial availability of its SpiNNaker2 platform, a supercomputer-level hybrid AI high-performance computer system based on principles of the human brain. Pioneered by Steve Furber, designer of the original ARM and SpiNNaker1 architectures, the SpiNNaker2 supercomputing platform uses a large number of low-power processors for efficiently computing AI and other workloads.

First-generation SpiNNaker1 architecture is currently used in dozens of research groups across 23 countries worldwide. Sandia National Laboratories, the Technical University of Munich, and the University of Göttingen are among the first customers placing orders for SpiNNaker2, which was developed around commercialized IP invented in the Human Brain Project, a billion-euro research project funded by the European Union to design intelligent, efficient artificial systems.

Intel Xeon "Granite Rapids-SP" 80-core Engineering Sample Leaked

A CPU-Z screenshot has been shared by YuuKi_AnS—the image contains details about an alleged next-gen Intel Xeon Scalable processor engineering sample (ES). The hardware tipster noted in (yesterday's post) that an error had occurred in the application's identification of this chunk of prototype silicon. CPU-Z v2.09 has recognized the basics—an Intel Granite Rapids-SP processor that is specced with 80 cores, 2.5 GHz max frequency, a whopping 672 MB of L3 cache, and a max. TDP rating of 350 W. The counting of 320 threads seems to be CPU-Z's big mistake here—previous Granite Rapids-related leaks have not revealed Team Blue's Hyper-Threading technology producing such impressive numbers.
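The arithmetic behind that suspicion is trivial to check, given that Hyper-Threading exposes two threads per core:

```python
# The 320-thread reading fails a basic sanity check: with Hyper-Threading,
# threads = cores x 2, so an 80-core part should report 160 threads.
cores = 80
threads_per_core = 2           # Intel Hyper-Threading
expected_threads = cores * threads_per_core
print(expected_threads)        # 160
print(320 / cores)             # CPU-Z's figure would imply 4 threads per core
```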

The alleged prototype status of this Xeon chip is very apparent in CPU-Z's tracking of single and multi-core performance—the benchmark results are really off the mark, when compared to finalized current-gen scores (produced by rival silicon). Team Blue's next-gen Xeon series is likely positioned to catch up with AMD EPYC's deployment of large core counts—"Granite Rapids" has been linked to the Intel 3 foundry node, reports from last month suggest that XCC-type processors could be configured with "counts going up to 56-core/112-threads." Micron is prepping next-gen "Tall Form Factor" memory modules, designed with future enterprise processor platforms in mind—including Intel's Xeon Scalable "Granite Rapids" family. Industry watchdogs posit that Team Blue will be launching this series in the coming months.

Square Enix Artist Discusses Rebirth's Modernization of Final Fantasy VII 3D Assets

It'd be fair to say Final Fantasy VII Rebirth's next-gen makeover of characters, monsters, and more from the 1997 original has been a spectacular glow-up. The modern console era has returned an iconic cast and world to us with a level of realism in gameplay that even pre-rendered cutscenes over 25 years ago couldn't match. We asked Square Enix if they could crunch some numbers and share some insight into the changes nearly three decades of technological advancement have wrought. Here, main character modeler and lead character artist Dai Suzuki walks us through a selection of characters, creatures, weapons, and more.

Dai Suzuki: When people think of Cloud, most think of his gigantic sword and his unique hairstyle. Because it is so iconic, we needed to put special effort into creating Cloud's hair for Final Fantasy VII Remake, to properly express his personality. The hair was an extremely high-priority element and in fact accounted for half of the total polygon count for the whole model. In Final Fantasy VII Rebirth, the hardware has been changed to PS5, allowing for a higher polygon count to be used than in Final Fantasy VII Remake.

Taiwan Dominates Global AI Server Supply - Government Reportedly Estimates 90% Share

The Taiwanese Ministry of Economic Affairs (MOEA) managed to herd government representatives and leading Information and Communication Technology (ICT) industry figures together for an important meeting, according to DigiTimes Asia. The report suggests that the main topic of discussion focused on an anticipated growth of Taiwan's ICT industry—current market trends were analyzed, revealing that the nation absolutely dominates in the AI server segment. The MOEA has (allegedly) determined that Taiwan has shipped 90% of global AI server equipment—DigiTimes claims (based on insider info) that: "American brand vendors are expected to source their AI servers from Taiwanese partners." North American customers could be (presently) 100% reliant on supplies of Taiwanese-produced equipment—a scenario that potentially complicates ongoing international tensions.

The report posits that involved parties have formed plans to seize opportunities within an ever-growing global demand for AI hardware—a 90% market dominance is clearly not enough for some very ambitious industry bosses—although manufacturers will need to jump over several (rising) cost hurdles. Key components for AI servers are reportedly priced much higher than vanilla server parts—DigiTimes believes that AI processor/accelerator chips are priced close to ten times higher than general purpose server CPUs. Similar price hikes have reportedly affected AI-adjacent component supply chains—notably cooling, power supplies and passive parts. Taiwanese manufacturers have spread operations around the world, but industry watchdogs (largely) believe that the best stuff gets produced on home ground—global expansions are underway, perhaps inching closer to better balanced supply conditions.

AMD EPYC "Turin" 9000-series Motherboard Specs Suggest Support for DDR5 6000 MT/s

AMD's next-gen EPYC Zen 5 processor family seems to be nearing launch status—late last week, momomo_us uncovered an unnamed motherboard's datasheet; this particular model will accommodate a single 9000-series CPU—with a maximum 400 W TDP—via an SP5 socket. 500 W and 600 W limits have been divulged (via leaks) in the past, so the 400 W spec could be an error or a: "legitimate compatibility issue with the motherboard, though 400 Watts would be in character with high-end Zen 4 SP5 motherboards," according to Tom's Hardware analysis.

AMD's current-gen "Zen 4" based EPYC "Genoa" processor family—sporting up to 96-cores/192-threads—is somewhat limited by its DDR5 support transfer rates of up to 4800 MT/s. The latest leak suggests that "Turin" is upgraded quite nicely in this area—when compared to predecessors—the SP5 board specs indicate DDR5 speeds of up to 6000 MT/s with 4 TB of RAM. December 2023 reports pointed to "Zen 5c" variants featuring (max.) 192-core/384-thread configurations, while larger "Zen 5" models are believed to be "modestly" specced with up to 128-cores and 256-threads. AMD has not settled on an official release date for its EPYC "Turin" 9000-series processors, but a loose launch window is expected "later in 2024" based on timeframes presented within product roadmaps.
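As a rough illustration of what the faster memory support means, assuming "Turin" retains the 12-channel SP5 memory layout of current "Genoa" parts (an assumption on our part, not something confirmed by the leak):

```python
# Back-of-the-envelope peak memory bandwidth for 12-channel DDR5 platforms.
def peak_bandwidth_gbs(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s: transfer rate x 8-byte channel width x channels."""
    return mt_per_s * bus_bytes * channels / 1000

print(peak_bandwidth_gbs(6000, 12))  # "Turin" @ DDR5-6000: 576.0 GB/s
print(peak_bandwidth_gbs(4800, 12))  # "Genoa" @ DDR5-4800: 460.8 GB/s
```

A 25% uplift in peak memory bandwidth per socket, which matters for the rumored 192-core configurations.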

Micron Shows Off "Tall Form Factor" 256 GB DDR5-8000 MCRDIMM

Micron representatives showcased new products at last week's NVIDIA GTC event—one eye-catching DIMM is all set for deployment within next-generation servers. Tom's Hardware spent some time at Micron's booth—they found out that the "Tall Form Factor" 256 GB DDR5-8800 Multiplexer Combined Ranks (MCR) DIMM is being prepared for future enterprise processor platforms, including Intel's Xeon Scalable "Granite Rapids" family. A lone "tall" prototype module was exhibited, but company representatives indicated that standard height MCRDIMMs are in development. Tom's Hardware found out that these will be compact enough to fit in 1U-sized server systems. According to their in-person experience: "(Micron's) 256 GB MCRDIMMs are based on monolithic 32 Gb DDR5 ICs, but the tall one places 80 DRAM chips on both sides of the module, whereas the standard one uses 2Hi stacked packages, which means that they run slightly hotter due to less space for thermal dissipation. In any case, the tall module consumes around 20 W, which isn't bad as Micron's 128 GB DDR5-8000 RDIMM consumes 10 W at DDR5-4800."

In a recent earnings call, Micron CEO Sanjay Mehrotra, commented on his company's latest technology: "we (have) started sampling our 256 GB MCRDIMM module, which further enhances performance and increases DRAM content per server." Next-gen Intel Xeon platforms are expected to support 12 or 24 memory slots per processor socket. Enabled datacenter machines could be specced with total 3 TB or 6 TB (DDR5-8000) memory capacities. AnandTech has summarized the benefits of Micron's new part: "Multiplexer Combined Ranks (MCR) DIMMs are dual-rank memory modules featuring a specialized buffer that allows both ranks to operate simultaneously. This buffer enables the two physical ranks to operate as though they were separate modules working in parallel, which allows for concurrent retrieval of 128 bytes of data from both ranks per clock cycle—compared to 64 bytes per cycle when it comes to regular memory modules—effectively doubling performance of a single module." The added complexity is offset by significant performance boons—ideal for advanced server-side AI-crunching in the future.
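The quoted 3 TB and 6 TB platform totals follow directly from the module capacity and the expected slot counts:

```python
# Per-socket memory capacity from 256 GB MCRDIMMs across the 12 or 24 slots
# expected on next-gen Xeon platforms (slot counts as reported above).
MODULE_GB = 256

for slots in (12, 24):
    total_tb = slots * MODULE_GB / 1024
    print(f"{slots} slots x {MODULE_GB} GB = {total_tb:.0f} TB")
# 12 slots -> 3 TB, 24 slots -> 6 TB, matching the capacities quoted above.
```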

Existence of Intel Meteor Lake-PS CPU Series Revealed in iBase MI1002 Datasheet

An intriguing offshoot of Intel's Meteor Lake generation of processors has been discovered by hardware sleuth momomo_us—an iBase MI1002 motherboard specification sheet contains references to a 14th Gen Core Ultra (Meteor Lake-PS) family, with a next-gen LGA1851 socket listed as the desktop platform. The industrial iBase Mini-ITX workstation board is "coming soon" according to a promotional image—this could signal a revival of Meteor Lake outside of laptop platforms. 2023 was a bit of a rollercoaster year for MTL-S SKUs (on socket LGA1851)—one moment Team Blue confirmed that it was happening, then a couple of days later it was disposed of. The upcoming Arrow Lake processor generation seems to be the logical taker of this mantle, but the (leaked) existence of Meteor Lake-PS throws a proverbial spanner into the works.

iBase's MTL-PS-ready boards will be niche "industrial/embedded" items—according to Tom's Hardware: "Intel hasn't officially revealed Meteor Lake PS, but given the "PS" designation, these upcoming processors target the IoT market, similar to Alder Lake PS. Therefore, it's safe to assume that Intel is bringing the mobile Meteor Lake processors to the LGA1851 socket...Although the motherboard has (this) socket, no chipset is present because Meteor Lake PS is the spitting image of the Meteor Lake chip and doesn't need a PCH." Team Blue is hyping up Arrow Lake (ARL-S) as its next-gen mainstream desktop platform, with a launch window set for later in 2024—by sharp contrast, Meteor Lake PS parts are highly unlikely to receive much fanfare upon release.

MediaTek Launches Next-gen ASIC Design Platform with Co-packaged Optics Solutions

Ahead of the 2024 Optical Fiber Communication Conference (OFC), MediaTek (last week) announced it is launching a next-generation custom ASIC design platform that includes the heterogeneous integration of both high-speed electrical and optical I/Os in the same ASIC implementation. MediaTek will be demonstrating a serviceable socketed implementation that combines 8x800G electrical links and 8x800G optical links for a more flexible deployment. It integrates both MediaTek's in-house SerDes for electrical I/O as well as co-packaged Odin optical engines from Ranovus for optical I/O. Leveraging the heterogeneous solution that includes both 112G LR SerDes and optical modules, this CPO demonstration delivers reduced board space and device costs, boosts bandwidth density, and lowers system power by up to 50% compared to existing solutions.

Additionally, Ranovus' Odin optical engine has the option to provide either internal or external laser optical modules to better align with practical usage scenarios. MediaTek's ASIC experience and capabilities in the 3 nm advanced process, 2.5D and 3D advanced packaging, thermal management, and reliability, combined with optical experience, makes it possible for customers to access the latest technology for high-performance computing (HPC), AI/ML and data center networking.

Samsung Roadmaps UFS 5.0 Storage Standard, Predicts Commercialization by 2027

Mobile tech tipster, Revegnus, has highlighted an interesting Samsung presentation slide—according to machine translation, the company's electronics division is already responding to an anticipated growth of "client-side large language model" service development. This market trend will demand improved Universal Flash Storage (UFS) interface speeds—Samsung engineers are currently engaged in: "developing a new product that uses UFS 4.0 technology, but increases the number of channels from the current 2 to 4." The upcoming "more advanced" UFS 4.0 storage chips could be beefy enough to be utilized alongside next-gen mobile processors in 2025. For example, Arm is gearing up "Blackhawk," the Cortex-X4's successor—industry watchdogs reckon that the semiconductor firm's new core is designed to deliver "great Large Language Model (LLM) performance" on future smartphones. Samsung's roadmap outlines another major R&D goal, but this prospect is far off from finalization—their chart reveals an anticipated 2027 rollout. The slide's body of text included a brief teaser: "at the same time, we are also actively participating in discussions on the UFS 5.0 standard."
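A minimal sketch of what doubling the channel count implies, using the published UFS 4.0 per-lane rate of 23.2 Gb/s as an approximation (real-world throughput will be lower than this raw link math):

```python
# Approximate aggregate interface bandwidth when widening UFS 4.0 from 2 to 4 lanes.
def ufs_bandwidth_gbs(lanes: int, lane_rate_gbps: float = 23.2) -> float:
    """Raw aggregate link bandwidth in GB/s: lanes x per-lane rate, bits -> bytes."""
    return lanes * lane_rate_gbps / 8

print(f"2-lane UFS 4.0: {ufs_bandwidth_gbs(2):.1f} GB/s")
print(f"4-lane variant: {ufs_bandwidth_gbs(4):.1f} GB/s")
```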

SK hynix Unveils Highest-Performing SSD for AI PCs at NVIDIA GTC 2024

SK hynix unveiled a new consumer product based on its latest solid-state drive (SSD), PCB01, which boasts industry-leading performance, at the GPU Technology Conference (GTC) 2024. Hosted by NVIDIA in San Jose, California from March 18-21, GTC is one of the world's leading conferences for AI developers. Aimed at on-device AI PCs, PCB01 is a PCIe fifth-generation SSD whose performance and reliability were recently verified by a major global customer. After completing product development in the first half of 2024, SK hynix plans to launch two versions of PCB01 by the end of the year, targeting both major technology companies and general consumers.

Optimized for AI PCs, Capable of Loading LLMs Within One Second
Offering the industry's highest sequential read speed of 14 gigabytes per second (GB/s) and a sequential write speed of 12 GB/s, PCB01 doubles the speed specifications of its previous generation. This enables the loading of LLMs required for AI learning and inference in less than one second. To make on-device AIs operational, PC manufacturers create a structure that stores an LLM in the PC's internal storage and quickly transfers the data to DRAMs for AI tasks. In this process, the PCB01 inside the PC efficiently supports the loading of LLMs. SK hynix expects these characteristics of its latest SSD to greatly increase the speed and quality of on-device AIs.
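The sub-second claim follows from simple division of model size by the quoted sequential read rate; the model sizes below are illustrative examples, not figures from SK hynix:

```python
# How the 14 GB/s sequential read claim translates into sub-second LLM loads.
SEQ_READ_GBS = 14.0

def load_time_s(model_size_gb: float) -> float:
    """Best-case time to stream model weights from SSD into DRAM."""
    return model_size_gb / SEQ_READ_GBS

for size in (4.0, 7.0, 13.0):   # e.g. quantized 7B/13B-class checkpoints
    print(f"{size:>5.1f} GB model: {load_time_s(size):.2f} s")
# Anything up to ~14 GB streams in under a second at the quoted rate.
```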

Dell Expands Generative AI Solutions Portfolio, Selects NVIDIA Blackwell GPUs

Dell Technologies is strengthening its collaboration with NVIDIA to help enterprises adopt AI technologies. By expanding the Dell Generative AI Solutions portfolio, including with the new Dell AI Factory with NVIDIA, organizations can accelerate integration of their data, AI tools and on-premises infrastructure to maximize their generative AI (GenAI) investments. "Our enterprise customers are looking for an easy way to implement AI solutions—that is exactly what Dell Technologies and NVIDIA are delivering," said Michael Dell, founder and CEO, Dell Technologies. "Through our combined efforts, organizations can seamlessly integrate data with their own use cases and streamline the development of customized GenAI models."

"AI factories are central to creating intelligence on an industrial scale," said Jensen Huang, founder and CEO, NVIDIA. "Together, NVIDIA and Dell are helping enterprises create AI factories to turn their proprietary data into powerful insights."

AMD Zen 5 "Znver5" CPU Enablement Spotted in Change Notes

Close monitoring of AMD engineering activities—around mid-February time—revealed the existence of a new set of patches for GNU Compiler Collection (GCC). At the time, news reports put spotlights on Team Red's "znver5" enablement—this target indicated that staffers were prepping Zen 5 processor microarchitecture with an expanded AVX instruction set (building on top of Zen 4's current capabilities). Phoronix's Michael Larabel has fretted over AMD's relative silence over the past month—regarding a possible merging of support prior to the stable release of GCC 14.

He was relieved to discover renewed activity earlier today: "AMD Zen 5 processor enablement has been merged to GCC Git in time for the GCC 14.1 stable release that will be out in the coming weeks. It was great seeing AMD getting their Zen 5 processor enablement upstreamed ahead of any Ryzen or EPYC product launches and being able to do so in time for the annual major GNU Compiler Collection feature release." Team Red is inching ever closer to the much anticipated 2024 rollout of next-gen Ryzen 9000 processors; please refer to the VideoCardz-authored timeline diagram (below)—"Granite Ridge" is an incoming AM5 desktop CPU family (reportedly utilizing Zen 5 and RDNA 2 tech), while "Strix Point" is scheduled to become a mobile APU series (Zen 5 + RDNA 3.5).

ScaleFlux To Integrate Arm Cortex-R82 Processors in Its Next-Generation Enterprise SSD Controllers

ScaleFlux, a leader in deploying computational storage at scale, today announced its commitment to integrating the Arm Cortex-R82 processor in its forthcoming line of enterprise Solid State Drive (SSD) controllers. The Cortex-R82 is the highest-performance real-time processor from Arm and the first to implement the 64-bit Armv8-R AArch64 architecture, representing a significant advancement in processing power and efficiency for enterprise storage solutions.

ScaleFlux's adoption of the Cortex-R82 is a strategic move to leverage the processor's high performance and energy efficiency. This collaboration underscores ScaleFlux's dedication to delivering cutting-edge technology in its SSD controllers, enhancing data processing capabilities and efficiency for data center and AI infrastructure worldwide.

Samsung Accelerates R&D of Glass Substrate Chip Packaging

The Samsung Group has formed a new cross-department alliance—according to South Korea's Sedaily—this joint operation will concentrate on the research and development of a "dream substrate." The company's Electronics, Electrical Engineering, and Display divisions are collaborating in order to accelerate commercialization of "glass substrate" chip packaging. Last September, Intel revealed its intention to become an industry leader in "glass substrate production for next-generation advanced packaging." Team Blue's shiny new Arizona fabrication site will be taking on this challenge, following ten years of internal R&D work. Industry watchdogs reckon that mass production—in North America—is not expected to kick off anytime soon. Sensible guesstimates suggest a start date somewhere in 2030.

The Sedaily article states that Samsung's triple department alliance will target "commercialization faster than Intel." Company representatives—in attendance at CES 2024—set a 2026 window as their commencement goal for advanced glass substrate chip package mass production. An unnamed South Korean industry watcher has welcomed a new entrant on the field: "as each company possesses the world's best technology, synergies will be maximized in glass substrate research, which is a promising field...it is also important to watch how the glass substrate ecosystem of Samsung's joint venture will be established." Glass substrate packaging is ideal for "large-area and high-performance chip combinations" due to inherent heat-resistant properties and material strength. So far, the semiconductor industry has struggled with its development—hence the continued reliance on plastic boards and organic materials.

AMD Pushes Performance Monitoring Patches for Upcoming Zen 5 CPUs

As reported by Phoronix, AMD has recently released initial patches for performance monitoring and events related to its upcoming Zen 5 processors in the Linux kernel. These patches, sent out for review on the kernel mailing list, provide the necessary JSON files for PMU (Performance Monitoring Unit) events and metrics that will be exposed through the Linux perf tooling. As the patches consist of JSON additions and do not risk regressing existing hardware support, they could be included in the upcoming Linux v6.9 kernel cycle. This would allow developers and enthusiasts to access detailed performance data for Zen 5 CPUs once they become available, helping with optimization and analysis of the next-generation processors.

The release of these patches follows AMD's publication of performance monitor counter documentation for AMD Family 1Ah Model 00h to 0Fh processors last week, confirming that these models represent the upcoming Zen 5 lineup. While Linux kernel 6.8 already includes some elements of Zen 5 CPU support, upstream Linux enablement for these next-generation AMD processors is an ongoing process. Phoronix's examination of the Zen 5 core and uncore events, as well as the metrics and mappings, shows that they are largely similar to those found in current Zen 4 processors. This suggests that AMD has focused on refining and optimizing the performance monitoring capabilities of its new architecture rather than introducing significant changes. It remains to be seen how these next-generation processors will perform, but with performance monitoring support also being pushed upstream, the Zen 5 launch may well be nearing.
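For context, the pmu-events files that such patches add live under tools/perf/pmu-events/ in the kernel tree and describe each hardware counter as a small JSON object. The sketch below parses an entry in that general style; the event name, code, and umask values are purely illustrative, not actual Zen 5 events from AMD's patches.

```python
import json

# Hypothetical event entry in the style of the kernel's pmu-events JSON
# files (tools/perf/pmu-events/arch/x86/). Field values are made up for
# illustration and do not come from AMD's Zen 5 patches.
sample_events = """
[
  {
    "EventName": "ls_dispatch.ld_dispatch",
    "EventCode": "0x29",
    "UMask": "0x01",
    "BriefDescription": "Load operations dispatched (illustrative)."
  }
]
"""

events = json.loads(sample_events)
for ev in events:
    # perf's build step compiles these JSON tables into C, so that tools
    # like `perf list` and `perf stat` can resolve the symbolic EventName
    # to the raw event code and unit mask programmed into the PMU.
    print(f"{ev['EventName']}: code={ev['EventCode']} umask={ev['UMask']}")
```

Because the mechanism is plain data rather than driver code, adding new CPU support this way is low-risk, which is why such patches can land late in a merge window without endangering existing hardware.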

JEDEC Reportedly Finalizing LPDDR6 Standard for Mobile Platforms

JEDEC is expected to announce a next-generation low-power DRAM (LPDDR) standard specification by the third quarter of this year. Earlier today, smartphone technology watcher Revegnus highlighted insider information disclosed within an ETnews article. The standards body has recently concluded negotiations regarding "next-generation mobile RAM standards"; the report posits that "more than 60 people from memory, system semiconductor, and design asset (IP) companies participated" in a meeting held in Lisbon, Portugal. A quoted participant stated (to ETnews): "We have held various discussions to confirm the LPDDR6 standard specification...(Details) will be released in the third quarter of this year."

The current-generation LPDDR5 standard was finalized back in February 2019, bringing a 50% performance increase and 30% better power efficiency over LPDDR4. Samsung Electronics and SK Hynix are in the process of mass-producing incremental improvements in the form of LPDDR5X and LPDDR5T. A second source stated: "Technology development and standard discussions are taking place in a way to minimize power consumption, which increases along with the increase in data processing." A full-fledged successor is tasked with further enhancing data processing performance. Industry figures anticipate that LPDDR6 will greatly assist in an industry-wide push for "on-device AI" processing. They reckon that "large-scale AI calculations" will become the norm on smartphones, laptops, and tablet PCs. Revegnus has heard (fanciful) whispers about a potential 2024 rollout: "support may be available starting with Qualcomm's Snapdragon 8 Gen 4, expected to be released as early as the second half of this year." Sensible predictions point to possible commercialization in late 2025 or early 2026.

NVIDIA Calls for Global Investment into Sovereign AI

Nations have long invested in domestic infrastructure to advance their economies, control their own data and take advantage of technology opportunities in areas such as transportation, communications, commerce, entertainment and healthcare. AI, the most important technology of our time, is turbocharging innovation across every facet of society. It's expected to generate trillions of dollars in economic dividends and productivity gains. Countries are investing in sovereign AI to develop and harness such benefits on their own. Sovereign AI refers to a nation's capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks.

Why Sovereign AI Is Important
The global imperative for nations to invest in sovereign AI capabilities has grown since the rise of generative AI, which is reshaping markets, challenging governance models, inspiring new industries and transforming others—from gaming to biopharma. It's also rewriting the nature of work, as people in many fields start using AI-powered "copilots." Sovereign AI encompasses both physical and data infrastructures. The latter includes sovereign foundation models, such as large language models, developed by local teams and trained on local datasets to promote inclusiveness with specific dialects, cultures and practices. For example, speech AI models can help preserve, promote and revitalize indigenous languages. And LLMs aren't just for teaching AIs human languages, but for writing software code, protecting consumers from financial fraud, teaching robots physical skills and much more.