News Posts matching #AI


SK Hynix to Invest $75 Billion by 2028 in Memory Solutions for AI

South Korean giant SK Group has unveiled plans for substantial investments in AI and semiconductor technologies worth almost $75 billion. SK Group subsidiary SK Hynix will lead this initiative with a staggering 103 trillion won ($74.6 billion) investment over the next three years, to be fully realized by 2028. This commitment is in addition to the ongoing construction of a $90 billion mega fab complex in Gyeonggi Province for cutting-edge memory production. SK Group has pledged a further $58 billion, bringing the total investment to a whopping $133 billion. This capital infusion aims to enhance the group's competitiveness in the AI value chain while funding operations across its 175 subsidiaries, including SK Hynix.

While specific details remain undisclosed, SK Group is reportedly exploring various options, including potential mergers and divestments. The group has signaled that its business practices need to change amid shifting geopolitical conditions and the massive boost that AI is bringing to the overall economy, and we may see more interesting products from it in the coming years as it potentially enters new markets centered around AI. This strategic pivot comes after SK Hynix reported its first loss in a decade in 2022. The company has since shown signs of recovery, fueled by surging demand for memory solutions for AI chips. It currently holds a 35% share of the global DRAM market and plans an even stronger presence in the coming years. The massive investment aligns with the South Korean government's recently announced $19 billion support package for the domestic semiconductor industry, which will be distributed across companies like SK Hynix and Samsung.

AMD Designs Neural Block Compression Tech for Games: Smaller Downloads and Updates

AMD is developing a new technology that promises to significantly reduce the on-disk size of games, as well as the size of game patches and updates. Today's AAA games tend to be over 100 GB in size, with game updates running into the tens of gigabytes, and some major updates practically re-download the entire game. Upcoming games like Call of Duty: Black Ops 6 are reportedly over 300 GB in size, putting them out of reach of anyone without an Internet connection running at hundreds of Mbps. Much of a game's bulk is made up of visual assets: textures, sprites, and cutscene videos. A modern AAA title can have hundreds of thousands of individual game assets, and sometimes even redundant sets of textures for different image quality settings.

AMD's solution to this problem is its Neural Block Compression technology. The company will get into the nuts and bolts of the tech in its presentation at the 2024 Eurographics Symposium on Rendering (July 3-5), but we have a vague idea of what it could be. Modern games don't just drape the surfaces of a wireframe with a texture; they apply additional layers, such as specular maps, normal maps, and roughness maps. AMD's idea is to "flatten" all these layers, including the base texture, into a single asset format, which the game engine can disaggregate back into the individual layers using a neural network. This is not to be confused with mega-textures, which are something entirely different: a single large texture covering all objects in a scene. The idea here is to flatten the various data layers of individual textures and their maps into a single asset type. In theory, this should yield significant file-size savings, even if it incurs some additional compute cost on the client's end.
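AMD hasn't published implementation details yet, so any concrete code is speculative. Still, the "flatten, compress, disaggregate" idea can be sketched in a few lines; the channel counts, layer choices, and use of plain convolutions below are illustrative assumptions, not AMD's actual design.

```python
# Hypothetical sketch of "flatten then disaggregate" texture compression.
# Shapes and layers are invented for illustration; AMD's design is unpublished.
import torch
import torch.nn as nn

# Stack a texture's layers into one multi-channel asset:
# RGB albedo (3) + normal map (3) + roughness (1) + specular (1) = 8 channels.
flattened_asset = torch.rand(1, 8, 256, 256)

# A tiny encoder/decoder pair: the encoder yields the compact on-disk
# representation, the decoder splits it back into layers at load time.
encoder = nn.Conv2d(8, 4, kernel_size=3, padding=1)  # 2x channel reduction
decoder = nn.Conv2d(4, 8, kernel_size=3, padding=1)

compressed = encoder(flattened_asset)   # what would ship in the game download
reconstructed = decoder(compressed)     # recovered by the engine at runtime

albedo, normals = reconstructed[:, 0:3], reconstructed[:, 3:6]
roughness, specular = reconstructed[:, 6:7], reconstructed[:, 7:8]
print(compressed.shape, reconstructed.shape)
```

In a real implementation the network would be trained so the reconstructed maps are visually indistinguishable from the originals, which is where the trade-off between file-size savings and quality would be tuned.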

Report: US PC Market Set for 5% Growth in 2024 Amid a Healthy Recovery Trajectory

PC (excluding tablets) shipments to the United States grew 5% year-on-year to 14.8 million units in Q1 2024. The consumer and SMB segments were the key growth drivers, both witnessing shipment increases above 9% year-on-year in the first quarter. With a strong start to the year, the market is now poised for a healthy recovery trajectory amid the ongoing Windows refresh cycle. Total PC shipments to the US are expected to hit 69 million units in 2024 before growing another 8% to 75 million units in 2025.

For the third consecutive quarter, the consumer segment showed the best performance in the US market. "Continued discounting after the holiday season boosted consumer demand for PCs into the start of 2024," said Greg Davis, Analyst at Canalys. "However, the first quarter also saw an uptick in commercial sector performance. Shipment growth in small and medium businesses indicates that the anticipated refresh brought by the Windows 10 end-of-life is underway. With enterprise customers set to follow suit, the near-term outlook for the market remains highly positive."

Intel Xeon Processors Accelerate GenAI Workloads with Aible

Intel and Aible, provider of an end-to-end serverless generative AI (GenAI) and augmented analytics enterprise solution, now offer shared customers the ability to run advanced GenAI and retrieval-augmented generation (RAG) use cases on multiple generations of Intel Xeon CPUs. The collaboration, which includes engineering optimizations and a benchmarking program, enhances Aible's ability to deliver GenAI results at a low cost for enterprise customers and helps developers embed AI intelligence into applications. Together, the companies offer scalable and efficient AI solutions that draw on high-performing hardware to help customers solve business challenges with AI.

"Customers are looking for efficient, enterprise-grade solutions to harness the power of AI. Our collaboration with Aible shows how we're closely working with the industry to deliver innovation in AI and lowering the barrier to entry for many customers to run the latest GenAI workloads using Intel Xeon processors," said Mishali Naik, Intel senior principal engineer, Data Center and AI Group.

Intel Demonstrates First Fully Integrated Optical IO Chiplet

Intel Corporation has achieved a revolutionary milestone in integrated photonics technology for high-speed data transmission. At the Optical Fiber Communication Conference (OFC) 2024, Intel's Integrated Photonics Solutions (IPS) Group demonstrated the industry's most advanced and first-ever fully integrated optical compute interconnect (OCI) chiplet co-packaged with an Intel CPU and running live data. Intel's OCI chiplet represents a leap forward in high-bandwidth interconnect by enabling co-packaged optical input/output (I/O) in emerging AI infrastructure for data centers and high performance computing (HPC) applications.

"The ever-increasing movement of data from server to server is straining the capabilities of today's data center infrastructure, and current solutions are rapidly approaching the practical limits of electrical I/O performance. However, Intel's groundbreaking achievement empowers customers to seamlessly integrate co-packaged silicon photonics interconnect solutions into next-generation compute systems. Our OCI chiplet boosts bandwidth, reduces power consumption and increases reach, enabling ML workload acceleration that promises to revolutionize high-performance AI infrastructure," said Thomas Liljeberg, senior director, Product Management and Strategy, Integrated Photonics Solutions (IPS) Group.

ByteDance and Broadcom to Collaborate on Advanced AI Chip

ByteDance, TikTok's parent company, is reportedly working with American chip designer Broadcom to develop a cutting-edge AI processor. This collaboration could secure a stable supply of high-performance chips for ByteDance, according to Reuters. Sources claim the joint project involves a 5 nm Application-Specific Integrated Circuit (ASIC), designed to comply with U.S. export regulations. TSMC is slated to manufacture the chip, though production is not expected to begin this year.

This partnership marks a significant development in U.S.-China tech relations, as no public announcements of such collaborations on advanced chips have been made since Washington implemented stricter export controls in 2022. For ByteDance, this move could reduce procurement costs and ensure a steady chip supply, crucial for powering its array of popular apps, including TikTok and the ChatGPT-like AI chatbot "Doubao." The company has already invested heavily in AI chips, reportedly spending $2 billion on NVIDIA processors in 2023.

AI Startup Etched Unveils Transformer ASIC Claiming 20x Speed-up Over NVIDIA H100

A new startup emerged from stealth mode today to power the next generation of generative AI. Etched makes an application-specific integrated circuit (ASIC) to process "transformers." The transformer is an architecture for designing deep learning models, developed by Google, and is now the powerhouse behind models like OpenAI's GPT-4o in ChatGPT, Anthropic's Claude, Google's Gemini, and Meta's Llama family. Etched set out to create an ASIC that processes only transformer models, resulting in a chip called Sohu. The claim is that Sohu outperforms NVIDIA's latest and greatest by an entire order of magnitude: where a server with eight NVIDIA H100 GPUs pushes Llama-3 70B at 25,000 tokens per second, and one with eight of the latest B200 "Blackwell" GPUs pushes 43,000 tokens/s, an eight-chip Sohu server manages 500,000 tokens per second.

Why is this important? Not only does the ASIC outperform Hopper by 20x and Blackwell by over 10x, it also serves so many tokens per second that it enables an entirely new class of AI applications requiring real-time output. The Sohu architecture is reportedly so efficient that 90% of its FLOPS can be utilized, whereas traditional GPUs achieve 30-40% FLOPS utilization. That gap translates into inefficiency and wasted power, which Etched hopes to solve by building an accelerator dedicated to powering transformers (the "T" in GPT) at massive scale. Given that frontier model development costs more than a billion US dollars, and hardware costs run into the tens of billions, an accelerator dedicated to a specific workload could help advance AI faster. AI researchers often say that "scale is all you need" (echoing the legendary "Attention Is All You Need" paper), and Etched wants to build on that.
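The claimed multipliers check out against the article's own figures, as a couple of quick divisions show:

```python
# Back-of-the-envelope check of Etched's throughput claims.
# All figures are per eight-chip server, as quoted above.
h100_tps = 25_000    # Llama-3 70B tokens/s on 8x H100 "Hopper"
b200_tps = 43_000    # tokens/s on 8x B200 "Blackwell"
sohu_tps = 500_000   # claimed tokens/s on 8x Sohu

print(f"vs H100: {sohu_tps / h100_tps:.1f}x")   # 20.0x
print(f"vs B200: {sohu_tps / b200_tps:.1f}x")   # ~11.6x, i.e. roughly 10x
```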

CSPs to Expand into Edge AI, Driving Average NB DRAM Capacity Growth by at Least 7% in 2025

TrendForce has observed that in 2024, major CSPs such as Microsoft, Google, Meta, and AWS will continue to be the primary buyers of high-end AI servers, which are crucial for LLM training and AI modeling. After establishing a significant AI training server infrastructure in 2024, these CSPs are expected to actively expand into edge AI in 2025. This expansion will include the development of smaller LLM models and the deployment of edge AI servers to facilitate AI applications across sectors such as manufacturing, finance, healthcare, and business.

Moreover, AI PCs or notebooks share a similar architecture to AI servers, offering substantial computational power and the ability to run smaller LLM and generative AI applications. These devices are anticipated to serve as the final bridge between cloud AI infrastructure and edge AI for small-scale training or inference applications.

QNAP Thunderbolt 4 NAS TBS-h574TX and TVS-h874T Win the Red Dot Award 2024

Amid a field of over 20,000 submissions from 60 countries, the QNAP Thunderbolt 4 NAS models TBS-h574TX and TVS-h874T won the Red Dot Award: Product Design 2024. The TBS-h574TX Thunderbolt 4 all-flash NASbook is designed for film sets, small studios, small-scale video production teams, and SOHO users. Powered by a 16-core Intel Core i9 or 12-core Core i7 processor, the TVS-h874T Thunderbolt 4 NAS is a great sidekick for your creative talents. The Red Dot jury recognized the TBS-h574TX and TVS-h874T with distinction, signifying high-quality design.

The TBS-h574TX packs the high-speed I/O and Intel Core performance required by video production, allowing creators using Mac or Windows to enjoy a remarkably smooth experience in real-time video editing, large file transfers, video transcoding, and backup. Acting as the bridge between pre-production and post-production, the TBS-h574TX takes video projects and team collaboration to the next level. It runs the ZFS-based QuTS hero operating system, which ensures data integrity, and you can also switch to the QTS operating system based on your needs.

AMD Ryzen AI 300 Pro Series Could be Equipped with up to 128 GB of Memory

According to a leaked listing posted on X by user @Orlak29_, Pro versions of the AMD Ryzen AI 7 and Ryzen AI 9 are in the pipeline, with a potential game-changer in the form of the high-end "Strix Halo" model. The standout feature of Strix Halo is its rumored support for up to 128 GB of RAM, a significant leap from AMD's current offerings. This massive memory capacity could prove valuable for AI workloads and data-intensive applications, potentially positioning AMD better against offerings from Intel and Qualcomm. Leaked diagrams hint at a unique design for Strix Halo, featuring a chiplet layout reminiscent of a graphics card. The processor is reportedly surrounded by memory on three sides, enabling the massive 128 GB capacity.

While this top-tier model is expected to carry a premium price, it could find a ready market among professionals and enthusiasts demanding both raw processing power and extensive memory resources. On the performance front, rumors suggest Strix Halo will boast up to 16 Zen 5 cores and a GPU with 40 Compute Units based on the RDNA 3.5 architecture. This combination might rival the performance of high-end mobile GPUs like the laptop RTX 4060 or even RTX 4070.

As with previous generations, AMD is expected to release Pro versions of these processors with additional features like ECC memory support.

Gigabyte Launches AMD Radeon PRO W7000 Series Graphics Cards

GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of premium gaming hardware, today launched the cutting-edge AMD Radeon PRO W7000 series workstation graphics cards, including the flagship GIGABYTE Radeon PRO W7900 Dual Slot AI TOP 48G as well as the GIGABYTE Radeon PRO W7800 AI TOP 32G. Powered by AMD RDNA 3 architecture, these graphics cards offer a massive 48 GB and 32 GB of GDDR6 memory, respectively, delivering cutting-edge performance and exceptional experiences for workstation professionals, creators and AI developers.

GIGABYTE stands as AMD's professional graphics partner in the market, with a proven ability to design and manufacture the entire Radeon PRO series. Our dedication to quality products, unwavering business commitment, and comprehensive customer service empower us to deliver professional-grade GPU solutions, expanding users' choices in workstation and AI computing.

New AMD ROCm 6.1 Software for Radeon Release Offers More Choices to AI Developers

AMD has unveiled the latest release of its open software, AMD ROCm 6.1.3, marking the next step in its strategy to make ROCm software broadly available across its GPU portfolio, including AMD Radeon desktop GPUs. The new release gives developers broader support for Radeon GPUs to run ROCm AI workloads. "The new AMD ROCm release extends functional parity from data center to desktops, enabling AI research and development on readily available and accessible platforms," said Andrej Zdravkovic, senior vice president at AMD.

Key feature enhancements in this release focus on improving compatibility, accessibility, and scalability, and include:
  • Multi-GPU support to enable building scalable AI desktops for multi-serving, multi-user solutions.
  • Beta-level support for Windows Subsystem for Linux, allowing these solutions to work with ROCm on a Windows OS-based system.
  • TensorFlow Framework support offering more choice for AI development.
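As a rough illustration of what running ROCm AI workloads on a Radeon desktop looks like, the following minimal check confirms the GPUs are visible to a framework. It assumes a ROCm build of PyTorch, which exposes AMD GPUs through the familiar torch.cuda interface; TensorFlow's ROCm builds offer an equivalent tf.config.list_physical_devices('GPU') check.

```python
# Minimal device-visibility check on a ROCm system, assuming a ROCm
# build of PyTorch is installed (AMD GPUs appear via the torch.cuda API).
import torch

print(torch.version.hip)           # ROCm/HIP version string; None on CUDA builds
print(torch.cuda.device_count())   # number of visible Radeon GPUs

for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
```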

OpenAI Co-Founder Ilya Sutskever Launches a New Venture: Safe Superintelligence Inc.

OpenAI's co-founder and ex-chief scientist, Ilya Sutskever, has announced the formation of a new company promising a safe path to artificial superintelligence (ASI). Called Safe Superintelligence Inc. (SSI), the company has a simple mission: achieving ASI with safety at the forefront. "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," notes the SSI website, adding that "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

Interestingly, safety is a concern that only a few frontier AI labs prioritize. In recent history, OpenAI's safety team made headlines for being neglected, and the company's safety lead, Jan Leike, publicly criticized its safety practices before moving to Anthropic. Anthropic is focused on providing safe AI models, with its Claude Opus being one of the leading AI models to date. What will come out of SSI? We still don't know. However, given a founding team of Ilya Sutskever, Daniel Gross, and Daniel Levy, we assume it has attracted best-in-class talent for developing next-generation AI models with a focus on safety. With offices in Palo Alto and Tel Aviv, SSI can tap a vast network of AI researchers and policymakers to establish safe ASI, free from short-term commercial pressure and focused on research and development. "Our team, investors, and business model are all aligned to achieve SSI," says the SSI website.

Panasonic Connect Announces its First AI-Enabled TOUGHBOOK PC

Panasonic Connect Canada, a division of Panasonic Corporation of North America, today announced enhancements to the fully rugged and modular TOUGHBOOK 40 laptop. This second generation of the TOUGHBOOK 40, the Mk2, is the company's first PC to feature Intel Core Ultra processors, incorporating the latest CPU, GPU, and NPU technology advancements and up to 16 cores. Its dedicated NPU accelerates artificial intelligence (AI)-driven tasks for customers across law enforcement departments, federal agencies, and utility companies. Compared to the previous generation of Intel processors, the new chips deliver up to 143% faster AI application performance, 73% faster generative AI, and up to 40% lower processor power draw for AI-enhanced collaboration.

"We are dedicated to developing solutions that not only address the current needs of our customers, but also anticipate their future requirements," said Dominick Passanante, Vice President and GM of Panasonic Connect. "The TOUGHBOOK 40 Mk2, equipped with advanced AI capabilities, is another example of how we're providing the mobile workforce with tools to enhance productivity and efficiency on the job."

Gigabyte Promises 219,000 TBW for New AI TOP 100E SSD

Gigabyte has quietly added a new SSD to its growing lineup, and this time around it's something quite different. The drive is part of Gigabyte's new AI TOP (Trillions of Operations per Second) family and was announced at Computex with little fanfare. At the show, the company said only that it would offer 150x the TBW of regular SSDs and that it was built specifically for AI model training. What that 150x means in reality is that the 2 TB version of the AI TOP 100E SSD is rated for no less than 219,000 TBW (terabytes written), whereas most high-end 2 TB consumer NVMe SSDs end up somewhere around 1,200 TBW. The 1 TB version promises 109,500 TBW, and both drives have an MTBF of 1.6 million hours and a five-year warranty.

Gigabyte didn't reveal the host controller or the exact NAND used, but the drives are said to use 3D NAND flash, and both have an LPDDR4 DRAM cache of 1 or 2 GB depending on the drive size. The pictures of the drive suggest it might be a Phison-based reference design. The AI TOP 100E SSDs are standard PCIe 4.0 drives, so the sequential read speed tops out at 7,200 MB/s, with write speeds of up to 6,500 MB/s for the 1 TB SKU and the 2 TB SKU slightly behind at 5,900 MB/s. No other performance figures were provided. The drives are said to draw up to 11 W in use, which seems very high for PCIe 4.0 drives. No word on pricing or availability as yet.
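To put 219,000 TBW in perspective, it can be converted into the more familiar drive-writes-per-day (DWPD) metric over the five-year warranty period:

```python
# Converting the rated endurance of the 2 TB AI TOP 100E into DWPD.
tbw = 219_000              # rated terabytes written
capacity_tb = 2            # drive capacity in TB
warranty_days = 5 * 365    # five-year warranty

dwpd = tbw / (warranty_days * capacity_tb)
print(f"{dwpd:.0f} drive writes per day")  # ~60 DWPD; typical consumer SSDs are under 1
```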

Samsung Releases Its First Copilot+ PC Galaxy Book4 Edge to Global Markets

Samsung Electronics today announced the immediate availability of the new Samsung Galaxy Book4 Edge in select markets. With next-level AI processing performance and intelligent hybrid AI integrations, the Galaxy Book4 Edge advances the era of AI and introduces users to new levels of seamless work, play and creation on their PC.

"The Galaxy Book4 Edge marks the beginning of a whole new category of PCs, and for Samsung, a continued commitment to expand the power of Galaxy AI and offer the most hyperconnected mobile AI ecosystem yet," said TM Roh, President and Head of the Mobile eXperience Business at Samsung Electronics. "Developed in close collaboration with our industry partners, we believe this next-generation AI PC will redefine the market and more importantly, give people cutting-edge ways to be more productive and creative in their everyday lives."

Report: China's PC Market to Contract 1% in 2024 Before 12% Rebound in 2025

The PC market (desktops, notebooks, and workstations) in Mainland China is forecast to contract by 1% in 2024, according to the latest Canalys data. The first quarter of the year already saw a sharp decline, with shipments down 12%, in contrast to the global market, which returned to growth. Desktop shipments are expected to perform well in 2024, growing 10% annually as they benefit from commercial-sector refresh demand, especially from large state-held enterprises and local governments. Notebook shipments are set to drop 5%, as consumers and the private sector are anticipated to remain cautious about short-term expenditure on items such as PCs.

China's PC market trajectory is diverging from global trends in its recovery journey. In Q1 2024, the commercial sector bore the brunt of the market downturn, undergoing a 19% decline due to weak IT spending by large enterprises. The decline in consumer shipments was milder, with shipments dropping 8%. However, despite the muted performance in 2024, significant local developments point to a stronger market in 2025, in which PC shipments are expected to grow 12%.

Stability AI Outs Stable Diffusion 3 Medium, Company's Most Advanced Image Generation Model

Stability AI, a maker of various generative AI models and the company behind text-to-image Stable Diffusion models, has released its latest Stable Diffusion 3 (SD3) Medium AI model. Running on two billion dense parameters, the SD3 Medium is the company's most advanced text-to-image model to date. It boasts features like generating highly realistic and detailed images across a wide range of styles and compositions. It demonstrates capabilities in handling intricate prompts that involve spatial reasoning, actions, and diverse artistic directions. The model's innovative architecture, including the 16-channel variational autoencoder (VAE), allows it to overcome common challenges faced by other models, such as accurately rendering realistic human faces and hands.

Additionally, it achieves exceptional text quality, with precise letter formation, kerning, and spacing, thanks to the Diffusion Transformer architecture. Notably, the model is resource-efficient, capable of running smoothly on consumer-grade GPUs without compromising performance due to its low VRAM footprint. Furthermore, it exhibits impressive fine-tuning abilities, allowing it to absorb and replicate nuanced details from small datasets, making it highly customizable for specific use cases that users may have. Being an open-weight model, it is available for download on HuggingFace, and it has libraries optimized for both NVIDIA's TensorRT (all modern NVIDIA GPUs) and AMD Radeon/Instinct GPUs.
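Since the weights are openly downloadable, trying the model locally is straightforward. A minimal sketch, assuming the Hugging Face diffusers library and the repository ID Stability AI published for this release (the repository is gated, so accepting the license and logging in with huggingface-cli comes first):

```python
# Minimal SD3 Medium inference sketch using Hugging Face diffusers.
# Assumes diffusers >= 0.29, a GPU with sufficient VRAM, and prior license
# acceptance for the gated "stabilityai/stable-diffusion-3-medium-diffusers" repo.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
)
pipe.to("cuda")  # ROCm builds of PyTorch also expose Radeon GPUs as "cuda"

image = pipe(
    "a photo of a hand holding a sign that says 'SD3'",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3_medium.png")
```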

NVIDIA MLPerf Training Results Showcase Unprecedented Performance and Elasticity

The full-stack NVIDIA accelerated computing platform has once again demonstrated exceptional performance in the latest MLPerf Training v4.0 benchmarks. NVIDIA more than tripled the performance on the large language model (LLM) benchmark, based on GPT-3 175B, compared to the record-setting NVIDIA submission made last year. Using an AI supercomputer featuring 11,616 NVIDIA H100 Tensor Core GPUs connected with NVIDIA Quantum-2 InfiniBand networking, NVIDIA achieved this remarkable feat through larger scale - more than triple that of the 3,584 H100 GPU submission a year ago - and extensive full-stack engineering.

Thanks to the scalability of the NVIDIA AI platform, Eos can now train massive AI models like GPT-3 175B even faster, and this great AI performance translates into significant business opportunities. For example, in NVIDIA's recent earnings call, we described how LLM service providers can turn a single dollar invested into seven dollars in just four years running the Llama 3 70B model on NVIDIA HGX H200 servers. This return assumes an LLM service provider serving Llama 3 70B at $0.60/M tokens, with an HGX H200 server throughput of 24,000 tokens/second.
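That seven-to-one figure can be reconstructed from the stated assumptions. Note that the server cost in the last line is back-calculated from the 7x claim for illustration, not a quoted price:

```python
# Reproducing NVIDIA's $1-in, $7-out arithmetic from the stated assumptions.
tokens_per_s = 24_000              # HGX H200 throughput serving Llama 3 70B
price_per_token = 0.60 / 1e6       # $0.60 per million tokens
seconds = 4 * 365 * 24 * 3600      # four years of continuous serving

revenue = tokens_per_s * price_per_token * seconds
print(f"${revenue / 1e6:.2f}M over four years")           # ~$1.82M
print(f"implied server cost: ${revenue / 7 / 1e3:.0f}k")  # ~$260k for the 7x ratio
```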

Intel Submits Gaudi 2 Results on MLCommons' Newest Benchmark

Today, MLCommons published the results of its industry AI performance benchmark, MLPerf Training v4.0. Intel's results demonstrate the choice that Intel Gaudi 2 AI accelerators give enterprises and customers. Community-based software simplifies generative AI (GenAI) development, and industry-standard Ethernet networking enables flexible scaling of AI systems. For the first time on the MLPerf benchmark, Intel submitted results for a large Gaudi 2 system (1,024 Gaudi 2 accelerators) trained in the Intel Tiber Developer Cloud, demonstrating Gaudi 2 performance and scalability, and Intel's cloud capacity for training MLPerf's GPT-3 175B-parameter benchmark model.

"The industry has a clear need: address the gaps in today's generative AI enterprise offerings with high-performance, high-efficiency compute options. The latest MLPerf results published by MLCommons illustrate the unique value Intel Gaudi brings to market as enterprises and customers seek more cost-efficient, scalable systems with standard networking and open software, making GenAI more accessible to more customers," said Zane Ball, Intel corporate vice president and general manager, DCAI Product Management.

SK Hynix Targets Q1 2025 for GDDR7 Memory Mass Production

The race is on for memory manufacturers to bring the next generation GDDR7 graphics memory into mass production. While rivals Samsung and Micron are aiming to have GDDR7 chips available in Q4 of 2024, South Korean semiconductor giant SK Hynix revealed at Computex 2024 that it won't kick off mass production until the first quarter of 2025. GDDR7 is the upcoming JEDEC standard for high-performance graphics memory, succeeding the current GDDR6 and GDDR6X specifications. The new tech promises significantly increased bandwidth and capacities to feed the appetites of next-wave GPUs and AI accelerators. At its Computex booth, SK Hynix showed off engineering samples of its forthcoming GDDR7 chips, with plans for both 16 Gb and 24 Gb densities.

The company is targeting blazing-fast 40 Gbps data transfer rates with its GDDR7 offerings, outpacing the 32 Gbps rates its competitors are starting with on 16 Gb parts. If realized, higher speeds could give SK Hynix an edge, at least initially. While trailing a quarter or two behind Micron and Samsung isn't ideal, SK Hynix claims having working samples now validates its design and allows partners to begin testing and qualification. Mass production timing for standardized memories also doesn't necessarily indicate a company is "late" - it simply means another vendor secured an earlier production window with a specific customer. The GDDR7 transition is critical for SK Hynix and others, given the insatiable demand for high-bandwidth memory to power AI, graphics, and other data-intensive workloads. Hitting its stated Q1 2025 mass production target could ensure SK Hynix doesn't fall too far behind in the high-stakes GDDR7 race, with faster and higher-density chips to potentially follow shortly after volume ramp.
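For context on what those per-pin rates mean, memory bandwidth follows the usual formula of per-pin data rate times bus width divided by eight:

```python
# Bandwidth implied by SK Hynix's 40 Gbps GDDR7 target vs the 32 Gbps
# its competitors are starting with (per-pin rate x bus width / 8 bits).
chip_bus_bits = 32    # a single GDDR7 chip exposes a 32-bit interface
card_bus_bits = 256   # e.g. a GPU with eight chips on a 256-bit bus

for pin_rate_gbps in (32, 40):
    chip_gbs = pin_rate_gbps * chip_bus_bits / 8   # GB/s per chip
    card_gbs = pin_rate_gbps * card_bus_bits / 8   # GB/s per 256-bit card
    print(f"{pin_rate_gbps} Gbps: {chip_gbs:.0f} GB/s per chip, {card_gbs:.0f} GB/s per card")
# 32 Gbps: 128 GB/s per chip, 1024 GB/s per card
# 40 Gbps: 160 GB/s per chip, 1280 GB/s per card
```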

Intel's New SoC Solution Accelerates Electric Vehicle Innovation, Slashing Costs

The high purchase price of an electric vehicle (EV) remains one of the biggest barriers for potential buyers on a global scale. EVs are currently more expensive to build than traditional gasoline-powered cars, primarily because of the high costs associated with advanced battery and e-motor technology. The near-term solution is to enhance the efficiency of the existing battery technology through energy savings at the vehicle level, including improved integration with EV station infrastructure. This is exactly the challenge that Silicon Mobility, an Intel Company, has now solved with today's launch of the new OLEA U310 system-on-chip (SoC). This next-gen technology promises to significantly improve the overall performance of electric vehicles (EVs), streamline design and production processes, and expand SoC services to ensure seamless operation across various EV station platforms.

Representing a first for the industry, the new SoC is the only complete solution that combines hardware and software in one, engineered to meet the need for powertrain domain control in electrical architectures with distributed software. Built with a unique hybrid and heterogeneous architecture, a single OLEA U310 FPCU can replace as many as six standard microcontrollers in a configuration where it controls an inverter, a motor, a gearbox, a DC-DC converter, and an on-board charger. Using the U310 FPCU, original equipment manufacturers (OEMs) and Tier 1 suppliers can control multiple, diverse power and energy functions simultaneously in real time.

Netgear Introduces New Additions to Industry-leading WiFi 7 Lineup of Home Networking Products

NETGEAR, Inc., the leading provider of innovative and secure solutions for people to connect and manage their digital lives, today expanded its WiFi 7 mesh and standalone router lines with the new Orbi 770 Tri-band Mesh System and Nighthawk RS300 Router. NETGEAR's most affordable WiFi 7 products to date build on the company's promise to provide powerful WiFi performance and secure connectivity.

WiFi 7 Changes the Game
WiFi 7 unlocks speeds 2.4 times faster than WiFi 6, delivers lower latency, and handles WiFi interference better, letting families seamlessly enjoy next-gen 4K/8K streaming, video conferencing, gaming, and more. Since the launch of NETGEAR's first WiFi 7 offerings - the Nighthawk RS700 and Orbi 970 - multi-gig internet speeds have become more affordable, work-from-home demands have remained steady, and more devices such as AR/VR headsets and AI-focused platforms like Copilot+ have been introduced that require extremely low latency and higher throughput.
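The oft-quoted 2.4x figure follows directly from two headline changes in the WiFi 7 spec, assuming an otherwise identical configuration: channel width doubles from 160 MHz to 320 MHz, and 4096-QAM packs 12 bits per symbol versus 1024-QAM's 10:

```python
# Where WiFi 7's "2.4x faster than WiFi 6" claim comes from.
channel_width_gain = 320 / 160   # 320 MHz channels vs 160 MHz
modulation_gain = 12 / 10        # 4096-QAM (12 bits/symbol) vs 1024-QAM (10)

print(channel_width_gain * modulation_gain)   # 2.4
```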

Jabra Unveils Second Generation of Elite 8 Active and Elite 10

Jabra, a global leader in true wireless sound and hybrid work solutions, is unveiling the Elite 8 Active Gen 2 and Elite 10 Gen 2 earbuds. Building upon the same great design and success of their predecessors, these next-generation earbuds come with enhancements that strengthen the audio experience.

This includes the world's first LE Audio smart case, enhanced spatial sound powered by Dolby Audio for a better music experience, and enhanced Natural HearThrough for better awareness when outdoors. Jabra's Active Noise Cancellation (ANC) has also been made even stronger with improved mid- and low-frequency noise cancellation.

Curious "Navi 48 XTX" Graphics Card Prototype Detected in Regulatory Filings

A curiously described graphics card was spotted by Olrak29 as it made its way through international shipping. The shipment description for the card reads "GRAPHIC CARD NAVI48 G28201 DT XTX REVB-PRE-CORRELATION AO PLATSI TT(SAMSUNG)-Q2 2024-3A-102-G28201." This decodes as a graphics card with the board number "G28201," built for the desktop platform, featuring a maxed-out "XTX" version of the "Navi 48" silicon, based on the B revision of the PCB, equipped with Samsung-made memory chips, and dated Q2 2024.

AMD is planning to retreat from the enthusiast segment of gaming graphics cards with the RDNA 4 generation. The company originally entered this segment with the RX 6800 and RX 6900 series of the RDNA 2 generation, where it saw unexpected success thanks to the crypto-mining market boom, besides being competitive with the RTX 3080 and RTX 3090. That boom had gone bust by the time RDNA 3 and the RX 7900 series arrived, and the chip wasn't competitive with NVIDIA's top end. Around the same time, the AI acceleration boom squeezed the foundry allocations of all major chipmakers, including AMD, making large chips on the latest process nodes even less viable for a market such as enthusiast graphics; the company would rather spend its allocation on CDNA AI accelerators. Given all this, the company's fastest GPUs of the RDNA 4 generation could be the ones that succeed the current RX 7800 XT and RX 7700 XT, letting AMD capture a slice of the performance segment.