News Posts matching #GPU


NVIDIA Advertises "Premium AI PC" Mocking the Compute Capability of Regular AI PCs

According to a report from BenchLife, NVIDIA has started a marketing push for the "Premium AI PC," squarely aimed at the industry's latest trend, led by Intel, AMD, and Qualcomm, of "AI PC" systems that feature a dedicated NPU for processing smaller models locally. NVIDIA approaches the category from a different angle: every PC with an RTX GPU is a "Premium AI PC," a claim that holds a lot of truth. Generally, GPUs (regardless of manufacturer) offer more compute potential than the CPU and NPU combined. By including Tensor cores in its GPUs, NVIDIA is preparing for next-generation software from vendors and OS providers that will harness this silicon and embed more AI functionality in the PC.

More details about Premium AI PCs and general AI PCs should emerge at the Computex event in Taiwan. In its marketing materials, NVIDIA compares AI PCs to its Premium AI PCs, which have enhanced capabilities across applications like image/video editing and upscaling, productivity, gaming, and development. Another relevant selling point is the installed base, which NVIDIA touts at 100 million users, and these PCs support over 500 AI applications out of the box, highlighting the importance of proper software support. NVIDIA's systems are usually far more powerful, with GeForce RTX GPUs delivering anywhere from 100 to over 1,300 TOPS, compared to the roughly 40 TOPS of NPU-equipped AI PCs. How other AI PC makers plan to compete in the AI PC era remains to be seen, but there is a high chance this will be the spotlight of the upcoming Computex show.

Alphacool Launches New Eisblock Aurora 180° Terminal

With the new Alphacool Eisblock Aurora 180° terminal, you can give your Eisblock Aurora GPU cooler a new look and gain additional options for connecting it to your water-cooling loop. The terminal allows flexible connection options for all Eisblock GPU coolers, making it well suited to extensive modding projects or systems with limited space. The elegant design is completed by a magnetic cover.

Flexible connections
The Alphacool Eisblock Aurora 180° terminal replaces the standard terminal of the Eisblock GPU cooler. It positions the connections above the backplate, significantly reducing the depth of the cooling block. With three possible connection options for each input and output - top, side and rear - the terminal offers maximum flexibility.

SK hynix Presents CXL Memory Solutions Set to Power the AI Era at CXL DevCon 2024

SK hynix participated in the first-ever Compute Express Link Consortium Developers Conference (CXL DevCon) held in Santa Clara, California from April 30-May 1. Organized by a group of more than 240 global semiconductor companies known as the CXL Consortium, CXL DevCon 2024 welcomed a majority of the consortium's members to showcase their latest technologies and research results.

CXL is a technology that unifies the interfaces of different devices in a system such as semiconductor memory, storage, and logic chips. As it can increase system bandwidth and processing capacity, CXL is receiving attention as a key technology for the AI era in which high performance and capacity are essential. Under the slogan "Memory, The Power of AI," SK hynix showcased a range of CXL products at the conference that are set to strengthen the company's leadership in AI memory technology.
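
On Linux systems, a CXL Type 3 memory expander typically shows up to software as a CPU-less NUMA node rather than as a special device. As a rough illustration of how such pooled memory becomes visible, here is a small Python sketch of our own (not an SK hynix tool) that walks sysfs and flags memory-only nodes; persistent memory can look the same, so treat the output as a hint rather than proof of CXL.

# Minimal sketch: list NUMA nodes and flag CPU-less ones, which is how
# CXL memory expanders typically appear on Linux. Assumes the standard
# sysfs layout; a memory-only node may also be persistent memory.
from pathlib import Path

for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
    cpulist = (node / "cpulist").read_text().strip()
    mem_total = (node / "meminfo").read_text().splitlines()[0].strip()
    kind = "memory-only (possible CXL expander)" if not cpulist else "CPU+memory"
    print(f"{node.name}: cpus=[{cpulist or 'none'}] {kind}")
    print(f"  {mem_total}")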

More than 500 AI Models Run Optimized on Intel Core Ultra Processors

Today, Intel announced it has surpassed 500 AI models running optimized on new Intel Core Ultra processors, the industry's premier AI PC processor available on the market today, featuring new AI experiences, immersive graphics, and optimal battery life. This significant milestone is the result of Intel's investment in client AI, the AI PC transformation, framework optimizations, and AI tools, including the OpenVINO toolkit. The 500 models, which can be deployed across the central processing unit (CPU), graphics processing unit (GPU), and neural processing unit (NPU), are available from popular industry sources, including the OpenVINO Model Zoo, Hugging Face, the ONNX Model Zoo, and PyTorch. The models cover categories of local AI inferencing, including large language models, diffusion, super resolution, object detection, image classification/segmentation, computer vision, and others.

"Intel has a rich history of working with the ecosystem to bring AI applications to client devices, and today we celebrate another strong chapter in the heritage of client AI by surpassing 500 pre-trained AI models running optimized on Intel Core Ultra processors. This unmatched selection reflects our commitment to building not only the PC industry's most robust toolchain for AI developers, but a rock-solid foundation AI software users can implicitly trust."
-Robert Hallock, Intel vice president and general manager of AI and technical marketing in the Client Computing Group
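
For a sense of what "deployable across CPU, GPU, and NPU" looks like in practice, here is a minimal OpenVINO sketch in Python. The model path is a placeholder for any of the 500+ pre-optimized models, and the snippet assumes a static input shape; it illustrates the toolkit's device-agnostic API rather than any specific Intel setup.

# Minimal sketch of device-agnostic inference with OpenVINO
# (pip install openvino). "model.xml" is a placeholder IR model path.
import openvino as ov
import numpy as np

core = ov.Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

model = core.read_model("model.xml")
device = "NPU" if "NPU" in core.available_devices else "CPU"
compiled = core.compile_model(model, device_name=device)

# One inference with dummy data (assumes a static input shape).
shape = [int(d) for d in compiled.input(0).shape]
result = compiled(np.random.rand(*shape).astype(np.float32))[compiled.output(0)]
print("Ran on", device, "- output shape:", result.shape)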

AMD Celebrates its 55th Birthday

AMD is now a 55-year-old company. The chipmaker was founded on May 1, 1969, and has traversed practically every era of digital computing to reach where it is today: a company that makes contemporary processors for PCs, servers, and consumer electronics; GPUs for gaming graphics and professional visualization; and the all-important AI HPC processors that are driving the latest era of computing. As of this writing, AMD has a market capitalization of over $237 billion, a presence in all market regions, and supplies hardware and services to nearly every Fortune 500 company, including every IT giant. Happy birthday, AMD!

We Tested NVIDIA's new ChatRTX: Your Own GPU-accelerated AI Assistant with Photo Recognition, Speech Input, Updated Models

NVIDIA today unveiled ChatRTX, an AI assistant that runs locally on your machine, accelerated by your GeForce RTX GPU. NVIDIA originally launched this as "Chat with RTX" back in February 2024, when it was regarded more as a public tech demo; we reviewed the application in our feature article. The ChatRTX rebranding is probably aimed at making the name sound more like ChatGPT, which is what the application aims to be, except that it runs completely on your machine and is exhaustively customizable. The most obvious advantage of a locally run AI assistant is privacy: your prompts are processed locally, accelerated by your GPU. The second is that you're not held back by the performance bottlenecks of cloud-based assistants.

ChatRTX is a major update over the Chat with RTX tech demo from February. To begin with, the application has several stability refinements over Chat with RTX, which felt a little rough around the edges. NVIDIA has significantly updated the LLMs included with the application, including Mistral 7B INT4 and Llama 2 7B INT4. Support has also been added for additional LLMs, including Gemma, a local LLM trained by Google and based on the same technology used to make Google's flagship Gemini model. ChatRTX now also supports ChatGLM3, for both English and Chinese prompts. Perhaps the biggest upgrade in ChatRTX is its ability to recognize images on your machine, as it incorporates CLIP (contrastive language-image pre-training) from OpenAI. CLIP is a neural network that recognizes the contents of image collections, and with it you can search and interact with your image library without the need for metadata. ChatRTX doesn't just take text input; you can speak to it. It now accepts natural voice input, as it integrates the Whisper automatic speech recognition model.
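
To illustrate the kind of metadata-free image search CLIP enables, here is a short sketch using the openly available CLIP weights via Hugging Face transformers. This is our own illustration of the technique, not NVIDIA's ChatRTX implementation, and the image paths are placeholders.

# CLIP-style image search: score a text query against a small image
# library and pick the best match. Requires torch, transformers, Pillow.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

paths = ["dog.jpg", "beach.jpg", "invoice.png"]  # placeholder image library
images = [Image.open(p) for p in paths]

inputs = processor(text=["a photo of a dog on a beach"],
                   images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# Higher logits_per_text means a better text-image match.
scores = out.logits_per_text.squeeze(0)
best = scores.argmax().item()
print("Best match:", paths[best], f"(score {scores[best].item():.2f})")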
DOWNLOAD: NVIDIA ChatRTX

AMD Releases Software Adrenalin 24.4.1 WHQL GPU Drivers

AMD has released the latest version of its Adrenalin Edition graphics drivers, version 24.4.1 WHQL. It includes support for the upcoming Manor Lords game, adds performance improvements for HELLDIVERS 2, and brings AMD HYPR-Tune support to Nightingale and SKULL AND BONES. The new drivers also expand Vulkan API extension support with VK_KHR_shader_maximal_reconvergence and VK_KHR_dynamic_rendering_local_read, and include support and optimizations for the Topaz Gigapixel AI application, versions 7.1.0 and 7.1.1, with its new "Recovery" and "Low Resolution" AI upscaling features.

The new AMD Software Adrenalin Edition 24.4.1 WHQL drivers also come with several fixes. These include performance improvements for HELLDIVERS 2; a fix for an intermittent application crash in Lords of the Fallen on Radeon RX 6000 series graphics cards; fixes for various artifact issues in SnowRunner and Horizon Forbidden West Complete Edition on Radeon RX 6800 and other RX 6000 series graphics cards; a fix for an intermittent application crash or driver timeout in Overwatch 2 when Radeon Boost is enabled on Radeon RX 6000 series and newer graphics cards; a fix for an intermittent crash while changing anti-aliasing settings in Enshrouded on Radeon RX 7000 series graphics cards; and fixes for various application freezes or crashes in SteamVR when using Quest Link on Meta Quest 2 or when screen sharing with Microsoft Teams.

DOWNLOAD: AMD Software Adrenalin 24.4.1 WHQL

Aetina Accelerates Embedded AI with High-performance, Small Form-factor Aetina IA380E-QUFL Graphics Card

Aetina, a leading Edge AI solution provider, announced the launch of the Aetina IA380E-QUFL at Embedded World 2024 in Nuremberg, Germany. This groundbreaking product is a small form factor PCIe graphics card powered by the high-performance Intel Arc A380E GPU.

Unmatched Power in a Compact Design
The Aetina IA380E-QUFL delivers workstation-level performance packed into a low-profile, single-slot form factor. This innovative solution consumes only 50 W, making it ideal for space- and power-constrained edge computing environments. Embedded system manufacturers and integrators can leverage the 4.096 TFLOPS of peak FP32 performance delivered by the Intel Arc A380E GPU.

Unreal Engine 5.4 is Now Available With Improvements to Nanite, AI and Machine Learning, TSR, and More

Unreal Engine 5.4 is here, and it's packed with new features and improvements to performance, visual fidelity, and productivity that will benefit game developers and creators across industries. With this release, we're delivering the toolsets we've been using internally to build and ship Fortnite Chapter 5, Rocket Racing, Fortnite Festival, and LEGO Fortnite. Here are some of the highlights.

Animation
Character rigging and animation authoring
This release sees substantial updates to Unreal Engine's built-in animation toolset, enabling you to quickly, easily, and enjoyably rig characters and author animation directly in engine, without the frustrating and time-consuming need to round-trip to external applications. With the new Experimental Modular Control Rig feature, you can build animation rigs from understandable modular parts instead of complex granular graphs, while Automatic Retargeting makes it easier to get great results when reusing bipedal character animations. There are also extensions to the Skeletal Editor and a suite of new deformer functions to make the Deformer Graph more accessible.

AMD's RDNA 4 GPUs Could Stick with 18 Gbps GDDR6 Memory

Today we have the latest round of leaks suggesting that AMD's upcoming RDNA 4 graphics cards, expected to launch as the RX 8000 series, might continue to rely on GDDR6 memory. According to Kepler on X, the next-generation GPUs from AMD are expected to feature 18 Gbps GDDR6 memory, marking the fourth consecutive RDNA architecture to employ this memory standard. While GDDR6 may not offer the same bandwidth as the newer GDDR7 standard, this decision does not necessarily imply that RDNA 4 GPUs will be slow performers. AMD's choice to stick with GDDR6 is likely driven by factors such as meeting specific memory bandwidth requirements and cost optimization of PCB designs. However, if the rumor of 18 Gbps GDDR6 proves accurate, it would represent a slight step back from the 18-20 Gbps GDDR6 used in AMD's current RDNA 3 offerings, such as the RX 7900 XT and RX 7900 XTX.

AMD's first-generation RDNA used GDDR6 at 12-14 Gbps, RDNA 2 came with GDDR6 at 14-18 Gbps, and the current RDNA 3 uses 18-20 Gbps GDDR6. Without a jump in memory generation, speeds would stay the same at 18 Gbps. However, it is crucial to remember that leaks should be treated with skepticism, as AMD's final memory choices for RDNA 4 could change before the official launch. The decision to use GDDR6 versus GDDR7 could have significant implications in the upcoming battle between AMD, NVIDIA, and Intel's next-generation GPU architectures. If AMD indeed opts for GDDR6 while NVIDIA pivots to GDDR7 for its "Blackwell" GPUs, it could create a disparity in memory bandwidth between the competing products. All three major GPU manufacturers, including Intel with its "Battlemage" architecture, are expected to unveil their next-generation offerings in the fall of this year. As we approach these highly anticipated releases, more concrete details on specifications and performance will emerge, providing a clearer picture of the competitive landscape.
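
The bandwidth stakes are easy to quantify: effective memory bandwidth is the per-pin data rate times the bus width, divided by eight. The sketch below works through the known RDNA 3 configurations and one hypothetical RDNA 4 configuration; the 256-bit RDNA 4 bus is purely an assumption for illustration.

# Back-of-the-envelope GDDR bandwidth: data rate (Gbps per pin) times
# bus width (bits), divided by 8 bits/byte. The RDNA 3 bus widths are
# the published figures; the RDNA 4 width is an assumption.
def bandwidth_gbs(gbps_per_pin: float, bus_bits: int) -> float:
    return gbps_per_pin * bus_bits / 8

print(bandwidth_gbs(20, 384))  # RX 7900 XTX: 960.0 GB/s
print(bandwidth_gbs(20, 320))  # RX 7900 XT:  800.0 GB/s
print(bandwidth_gbs(18, 256))  # hypothetical RDNA 4, 18 Gbps / 256-bit: 576.0 GB/s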

China Circumvents US Restrictions, Still Acquiring NVIDIA GPUs

A recent Reuters investigation has uncovered evidence suggesting Chinese universities and research institutes may have circumvented US sanctions on high-performance NVIDIA GPUs by purchasing servers containing the restricted chips. The sanctions, tightened on November 17, 2023, prohibit the export of advanced NVIDIA GPUs, including the consumer GeForce RTX 4090, to China. Despite these restrictions, Reuters found that at least ten China-based organizations acquired servers equipped with the sanctioned NVIDIA GPUs between November 20, 2023, and February 28, 2024. These servers were purchased from major vendors such as Dell, Gigabyte, and Supermicro, raising concerns about potential sanctions evasion. When contacted by Reuters, the companies provided varying responses.

Dell stated that it had not observed any instances of servers with restricted chips being shipped to China and expressed willingness to terminate relationships with resellers found to be violating export control regulations. Gigabyte, on the other hand, stated that it adheres to Taiwanese laws and international regulations. Notably, the sale and purchase of the sanctioned GPUs are not illegal in China. This raises the possibility that the restricted NVIDIA chips may have already been present in the country before the sanctions took effect on November 17, 2023. The findings highlight the challenges in enforcing export controls on advanced technologies, particularly in the realm of high-performance computing hardware. As tensions between the US and China continue to rise, the potential for further tightening of export restrictions on cutting-edge technologies remains a possibility.

ZOTAC to Show Scalable GPU Platforms and Industrial Solutions at Hannover Messe 2024

ZOTAC Technology is announcing a new lineup of enterprise and healthcare-oriented mini PCs designed for specific applications and scalable deployment, as well as a whole new class of external GPU acceleration platforms for Thunderbolt 3-compatible PCs. Aside from the all-new additions, ZOTAC is also refreshing its best-selling performance mini PCs with the newest generations of Intel Core Processors and NVIDIA RTX-enabled GPUs. ZOTAC will debut these rugged, innovative solutions and showcase other AI-ready compute solutions during Hannover Messe 2024, reaffirming ZOTAC's commitment to embrace the AI-driven future.

ZOTAC ZBOX Healthcare Series: Medical AI Solution
With the all-new ZOTAC Healthcare Series, ZOTAC brings the reputed quality and performance of its ZBOX mini PCs to the realm of healthcare. The ZBOX H39R5000W and ZBOX H37R3500W are equipped with 13th Generation Intel Core i9 or i7 laptop processors, as well as professional-grade NVIDIA RTX Ada Generation laptop GPUs. These mini PCs are ready to power medical imaging, algorithms, and more with some of the latest and greatest hardware currently available.

Long-Time Linux Nouveau Driver Chief Ben Skeggs Joins NVIDIA

Ben Skeggs, the lead maintainer of Nouveau, the open-source NVIDIA GPU driver in the Linux kernel, has joined NVIDIA. An open-source contributor to Nouveau for more than a decade, Ben Skeggs achieved the remarkable feat of supporting NVIDIA GPU hardware on open-source drivers. Before joining NVIDIA, he worked at Red Hat until September 18, 2023, when he announced that he was resigning from Red Hat and stepping back from Nouveau development. That makes today's news an interesting development, as Ben Skeggs is going to NVIDIA, a company that has historically been reluctant to support open-source drivers.

Now he is able to continue working on the driver directly from NVIDIA: he posted a set of 156 patches to the driver, affecting tens of thousands of lines of code, and signed them all off from an official NVIDIA work address. This signals a potential turn in NVIDIA's approach to open-source software development, where the company might pay more attention to the movement and potentially hire more developers to support these projects. Back in 2012, NVIDIA had a very different stance on open-source development, infamously provoking Linux kernel creator Linus Torvalds into aiming some snide remarks at the company. Hopefully, better days are ahead for the OSS world of driver development and collaboration with tech giants.

Intel Builds World's Largest Neuromorphic System to Enable More Sustainable AI

Today, Intel announced that it has built the world's largest neuromorphic system. Code-named Hala Point, this large-scale neuromorphic system, initially deployed at Sandia National Laboratories, utilizes Intel's Loihi 2 processor. It aims to support research into future brain-inspired artificial intelligence (AI) and to tackle challenges related to the efficiency and sustainability of today's AI. Hala Point advances Intel's first-generation large-scale research system, Pohoiki Springs, with architectural improvements that deliver over 10 times more neuron capacity and up to 12 times higher performance.

"The computing cost of today's AI models is rising at unsustainable rates. The industry needs fundamentally new approaches capable of scaling. For that reason, we developed Hala Point, which combines deep learning efficiency with novel brain-inspired learning and optimization capabilities. We hope that research with Hala Point will advance the efficiency and adaptability of large-scale AI technology." -Mike Davies, director of the Neuromorphic Computing Lab at Intel Labs

Minisforum V3 High-Performance AMD AI 3-in-1 Tablet Starts at $1199 Pre-Sale

Minisforum has unveiled a device that blurs the line between tablets and laptops: the Minisforum V3, which today hit the Minisforum store. This 3-in-1 tablet is powered by the high-performance AMD Ryzen 7 8840U processor, offering a unique blend of portability and computing power. Departing from its traditional mini PC designs, Minisforum has adopted the popular form factor of Microsoft Surface and Lenovo Yoga tablet PCs for the V3. The versatile device can be used as a handheld tablet, as a laptop with the included magnetic keyboard, or propped up on its kickstand. At the heart of the Minisforum V3 lies the 8-core, 16-thread Ryzen 7 8840U, capable of delivering exceptional performance for demanding tasks. The tablet features a stunning 14-inch 2560 x 1600 IPS screen with a 165 Hz refresh rate and 100% DCI-P3 color gamut coverage, making it an ideal choice for creative professionals and content creators.

The V3's standout feature is its advanced cooling system, which allows the Ryzen 7 8840U and its onboard Radeon 780M iGPU to operate at a sustained 28 watts, ensuring smooth and efficient performance even under heavy workloads. The screen boasts a remarkable 500 nits of brightness, and its high color gamut coverage makes it well suited to professionals who require accurate color reproduction. Minisforum has priced the V3 competitively at $1,199 for the pre-sale offering, an attractive option for those seeking a powerful and versatile device that can adapt to various scenarios. This base configuration includes 32 GB of RAM and a 1 TB SSD. For early birds, Minisforum offers a V Pen, a tempered glass screen protector, and a laptop sleeve as gifts. Here is the link to the Minisforum V3 store.

Demand for NVIDIA's Blackwell Platform Expected to Boost TSMC's CoWoS Total Capacity by Over 150% in 2024

NVIDIA's next-gen Blackwell platform, which includes B-series GPUs and integrates NVIDIA's own Grace Arm CPU in models such as the GB200, represents a significant development. TrendForce points out that the GB200 and its predecessor, the GH200, both feature a combined CPU+GPU solution, primarily equipped with the NVIDIA Grace CPU and H200 GPU. However, the GH200 accounted for only approximately 5% of NVIDIA's high-end GPU shipments. The supply chain has high expectations for the GB200, with projections suggesting that its shipments could exceed millions of units by 2025, potentially making up nearly 40 to 50% of NVIDIA's high-end GPU market.

Although NVIDIA plans to launch products such as the GB200 and B100 in the second half of this year, upstream wafer packaging will need to adopt more complex and high-precision CoWoS-L technology, making the validation and testing process time-consuming. Additionally, more time will be required to optimize the B-series for AI server systems in aspects such as network communication and cooling performance. It is anticipated that the GB200 and B100 products will not see significant production volumes until 4Q24 or 1Q25.

Sony PlayStation 5 Pro Specifications Confirmed, Console Arrives Before Holidays

Thanks to detailed information obtained by The Verge, we can now confirm previously leaked details as Sony gears up to unveil the highly anticipated PlayStation 5 Pro, codenamed "Trinity." According to insider reports, Sony is urging developers to optimize their games for the PS5 Pro, with a primary focus on enhanced ray tracing. The console is expected to feature an RDNA 3 GPU with 30 WGPs running BVH8, capable of 33.5 TeraFLOPS of FP32 single-precision compute, and a slightly quicker CPU running at 3.85 GHz, enabling it to render games with ray tracing enabled or achieve higher resolutions and frame rates in select titles. Sony anticipates GPU rendering on the PS5 Pro to be approximately 45 percent faster than on the standard PlayStation 5. The PS5 Pro GPU will be larger and use faster system memory to bolster ray tracing performance, which is said to reach up to three times the speed of the regular PS5.

Additionally, the console will employ a more powerful ray tracing architecture, backed by PlayStation Spectral Super Resolution (PSSR), allowing developers to use graphics features like ray tracing more extensively. To support this endeavor, Sony is providing developers with test kits, and all games submitted for certification from August onward must be compatible with the PS5 Pro. Insider Gaming, the first to report the full PS5 Pro specs, suggests a potential release during the 2024 holiday period. The PS5 Pro will also bring changes for developers on the memory front: Sony is increasing memory bandwidth from 448 GB/s to 576 GB/s and improving its efficiency. For AI processing, there is a custom AI accelerator capable of 300 INT8 TOPS and 67 FP16 TeraFLOPS, in addition to an ACV audio codec running up to 35% faster.
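
The headline numbers hang together arithmetically. Under the usual RDNA 3 assumptions (2 CUs per WGP, 64 FP32 lanes per CU, dual-issue, and FMA counting as two ops), 33.5 TFLOPS implies a GPU clock of roughly 2.18 GHz, and 576 GB/s over an assumed 256-bit bus implies 18 Gbps memory. The sketch below shows the arithmetic; the clock and bus width are inferences from the leak, not confirmed specifications.

# Where 33.5 TFLOPS plausibly comes from: 30 WGPs = 60 CUs, 64 FP32
# lanes per CU, x2 for RDNA 3 dual-issue, x2 for FMA. The implied
# ~2.18 GHz GPU clock is an inference, not a confirmed spec.
wgp, lanes, dual_issue, fma = 30, 64, 2, 2
flops_per_clock = wgp * 2 * lanes * dual_issue * fma  # 2 CUs per WGP
clock_ghz = 33.5e12 / (flops_per_clock * 1e9)
print(f"FP32 ops/clock: {flops_per_clock}, implied clock: {clock_ghz:.2f} GHz")

# 576 GB/s over an assumed 256-bit bus implies 18 Gbps GDDR6 per pin.
print(f"Implied memory speed: {576 * 8 / 256:.0f} Gbps per pin")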

ADLINK Reveals New Graphics Card with Intel Arc A380E GPU at Embedded World 2024

The industrial grade A380E graphics card features an exceptional cost/performance ratio, high reliability and low power consumption (50 W). As with all ADLINK industrial products, it delivers on longevity with availability guaranteed for a minimum of five years. In addition, the A380E graphics card is slim and compact with a single slot design, measuring only 69 mm x 156 mm.

Flexible application
Although the core market is likely to be commercial gaming, the A380E graphics card is also suited to industrial Edge AI applications such as Industrial IoT and retail analytics. Video wall graphics and media processing and delivery are examples of the many other potential uses.

ASUS IoT Announces PE8000G

ASUS IoT, the global AIoT solution provider, today announced PE8000G at Embedded World 2024, a powerful edge AI computer that supports multiple GPU cards for high performance—and is expertly engineered to handle rugged conditions with resistance to extreme temperatures, vibration and variable voltage. PE8000G is powered by formidable Intel Core processors (13th and 12th gen) and the Intel R680E chipset to deliver high-octane processing power and efficiency.

With its advanced architecture, PE8000G excels at running multiple neural-network modules simultaneously in real time, representing a significant leap forward in edge AI computing. Thanks to its robust design, exceptional performance, and wide range of features, the PE8000G series is poised to revolutionize AI-driven applications across multiple industries, elevating edge AI computing to new heights and enabling organizations to tackle mission-critical tasks with confidence and achieve unprecedented levels of productivity and innovation.

Intel Arc Battlemage Could Arrive Before Black Friday, Right in Time for Holidays

According to the latest report from ComputerBase, Intel had a strong presence at the recently concluded Embedded World 2024 conference. The company officially showcased its Arc series of GPUs for the embedded market, based on the existing Alchemist chips rebranded as the "E series." However, industry whispers hint at a more significant development: the impending launch of Intel's second-generation Arc Xe2 GPUs, codenamed "Battlemage," potentially before the lucrative Black Friday shopping season. While Alchemist serves as Intel's current offering for embedded applications, many companies in attendance expressed keen interest in Battlemage, its successor. These firms often cover a broad spectrum, from servers and desktops to notebooks and embedded systems, necessitating a hardware platform that caters to this diverse range of applications.

Officially, Intel had previously stated that Battlemage would "hopefully" arrive before CES 2025, implying a 2024 launch. However, rumors from the trade show floor suggest a more ambitious target—a release before Black Friday, which falls on November 29th this year. This timeline aligns with Intel's historical launch patterns, as the original Arc A380 and notebook GPUs debuted in early October 2022, albeit with a staggered and limited rollout. Intel's struggles with the Alchemist launch serve as a learning experience for the company. Early promises and performance claims for the first-generation Arc GPUs failed to materialize, leading to a stuttering market introduction. This time, Intel has adopted a more reserved approach, avoiding premature and grandiose proclamations about Battlemage's capabilities.

Intel Unleashes Enterprise AI with Gaudi 3, AI Open Systems Strategy and New Customer Wins

At the Intel Vision 2024 customer and partner conference, Intel introduced the Intel Gaudi 3 accelerator to bring performance, openness and choice to enterprise generative AI (GenAI), and unveiled a suite of new open scalable systems, next-gen products and strategic collaborations to accelerate GenAI adoption. With only 10% of enterprises successfully moving GenAI projects into production last year, Intel's latest offerings address the challenges businesses face in scaling AI initiatives.

"Innovation is advancing at an unprecedented pace, all enabled by silicon - and every company is quickly becoming an AI company," said Intel CEO Pat Gelsinger. "Intel is bringing AI everywhere across the enterprise, from the PC to the data center to the edge. Our latest Gaudi, Xeon and Core Ultra platforms are delivering a cohesive set of flexible solutions tailored to meet the changing needs of our customers and partners and capitalize on the immense opportunities ahead."

Acer Launches New Nitro 14 and Nitro 16 Gaming Laptops Powered by AMD Ryzen 8040 Series Processors

Acer today announced the new Nitro 14 and Nitro 16 gaming laptops, powered by AMD Ryzen 8040 Series processors with Ryzen AI. With up to NVIDIA GeForce RTX 4060 Laptop GPUs supported by DLSS 3.5 technology, both are backed by NVIDIA's RTX AI platform, providing an array of AI-enhanced capabilities in over 500 games and applications. Gamers are immersed in the 14- and 16-inch NVIDIA G-SYNC compatible panels with up to WQXGA (2560x1600) resolution.

Whether on a call or streaming in-game, Acer PurifiedVoice 2.0 harnesses the power of AI to block out external noise, while Acer PurifiedView keeps users front and center of all the action. Microsoft Copilot in Windows (with a dedicated Copilot key) helps accelerate everyday tasks on these AI laptops, and with one month of Xbox Game Pass Ultimate included with every device, players can enjoy hundreds of high-quality PC games. To seamlessly take command of device performance and customization, one click of the NitroSense key directs users to the control center and the library of available AI-related functions through the new Experience Zone.

U.S. Updates Advanced Semiconductor Ban, Actual Impact on the Industry Will Be Insignificant

On March 29th, the United States announced another round of updates to its export controls, targeting advanced computing, supercomputers, semiconductor end-uses, and semiconductor manufacturing products. These new regulations, which took effect on April 4th, are designed to prevent certain countries and businesses from circumventing U.S. restrictions to access sensitive chip technologies and equipment. Despite these tighter controls, TrendForce believes the practical impact on the industry will be minimal.

The latest updates aim to refine the language and parameters of previous regulations, tightening the criteria for exports to Macau and D:5 countries (China, North Korea, Russia, Iran, etc.). They require a detailed examination of every technology product's Total Processing Performance (TPP) and Performance Density (PD); if a product exceeds certain computing-power thresholds, it must undergo a case-by-case review. Nevertheless, a new provision, Advanced Computing Authorized (ACA), allows for specific exports and re-exports among selected countries, including the transshipment of particular products between Macau and D:5 countries.
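
As a rough guide to how the TPP screen works, as we understand the published rules: TPP is approximately the chip's peak dense TOPS multiplied by the operand bit width, and performance density is TPP divided by die area. The Python sketch below applies the published top-tier 3A090.a thresholds to a hypothetical accelerator; it simplifies the regulation considerably and is no substitute for the actual text.

# Simplified TPP screen per the BIS advanced-computing rules as we
# understand them: TPP ~= dense TOPS x operand bit width, and
# performance density = TPP / die area (mm^2). Checks only the
# top-tier 3A090.a thresholds; the full rule has more tiers.
def tpp(tops: float, bit_width: int) -> float:
    return tops * bit_width

def meets_3a090a(tops: float, bit_width: int, die_mm2: float) -> bool:
    t = tpp(tops, bit_width)
    pd = t / die_mm2
    return t >= 4800 or (t >= 1600 and pd >= 5.92)

# Hypothetical accelerator: 400 dense FP16 TOPS on a 600 mm^2 die.
print(meets_3a090a(tops=400, bit_width=16, die_mm2=600))  # TPP 6400 -> True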

Imagination's new Catapult CPU is Driving RISC-V Device Adoption

Imagination Technologies today unveils the next product in its Catapult CPU IP range, the Imagination APXM-6200 CPU: a RISC-V application processor with compelling performance density, seamless security, and the artificial intelligence capabilities needed to support the compute and intuitive user experience demands of next-generation consumer and industrial devices.

"The number of RISC-V based devices is skyrocketing with over 16Bn units forecast by 2030, and the consumer market is behind much of this growth" says Rich Wawrzyniak, Principal Analyst at SHD Group. "One fifth of all consumer devices will have a RISC-V based CPU by the end of this decade. Imagination is set to be a force in RISC-V with a strategy that prioritises quality and ease of adoption. Products like APXM-6200 are exactly what will help RISC-V achieve the promised success."

AIO Workstation Combines 128-Core Arm Processor and Four NVIDIA GPUs Totaling 28,416 CUDA Cores

All-in-one computers are traditionally seen as lower-powered alternatives to desktop workstations. However, a new offering from Alafia AI, a startup focused on medical imaging appliances, aims to shatter that perception. The company's upcoming Alafia Aivas SuperWorkstation packs serious hardware muscle, demonstrating that all-in-one systems can match the performance of their more modular counterparts. At the heart of the Aivas SuperWorkstation lies a 128-core Ampere Altra processor running at 3.0 GHz. The CPU is complemented by not one but three NVIDIA L4 GPUs for compute, plus a single NVIDIA RTX 4000 Ada GPU for video output, delivering a combined 28,416 CUDA cores for accelerated parallel computing tasks. The system doesn't skimp on other components, either: it features a 4K touch display with up to 360 nits of brightness, an extensive 2 TB of DDR4 RAM, and storage options up to an 8 TB solid-state drive. This combination of cutting-edge CPU, GPU, memory, and storage is squarely aimed at the demands of medical imaging and AI development workloads.

The all-in-one form factor packs this hardware into a sleek, purposefully designed clinical research appliance. While initially targeting software developers, Alafia AI hopes that institutions that optimize their applications for the Arm architecture can eventually deploy the Aivas SuperWorkstation for production medical imaging workloads. The company is aiming for application integration in Q3 2024 and full ecosystem device integration by Q4 2024. With this powerful new offering, Alafia AI is challenging long-held assumptions about the performance limitations of all-in-one systems, demonstrating that the right hardware choices can transform these compact form factors into true powerhouse workstations. With the combined output of three NVIDIA L4 compute GPUs alongside the RTX 4000 Ada graphics card, the AIO is more powerful than some high-end desktop workstations.
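
The headline core count follows directly from the published per-card figures, as this quick check shows.

# 28,416 CUDA cores from published per-GPU counts: 7,424 per NVIDIA L4
# and 6,144 for the RTX 4000 Ada.
l4_cores, rtx4000_ada_cores = 7424, 6144
total = 3 * l4_cores + rtx4000_ada_cores
print(total)  # 28416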