News Posts matching #GPU

NVIDIA Testing GeForce RTX 50 Series "Blackwell" GPU Designs Ranging from 250 W to 600 W

According to Benchlife.info insiders, NVIDIA is reportedly testing designs with various Total Graphics Power (TGP) targets, running from 250 W to 600 W, for its upcoming GeForce RTX 50 series "Blackwell" graphics cards. The designs range from a 250 W configuration aimed at mainstream users to a 600 W configuration tailored for enthusiast-level performance. The 250 W cooling system is expected to prioritize compactness and power efficiency, making it an appealing choice for gamers seeking a balance between capability and energy conservation. This design could prove particularly attractive to those building small form-factor rigs, or to AIBs looking to offer smaller cooler sizes. On the other end of the spectrum, the 600 W cooling solution carries the highest TGP of the stack and is possibly intended only for testing purposes. Other SKUs with different power configurations fall in between.

We previously witnessed NVIDIA testing a 900 W version of the Ada Lovelace AD102 GPU, which never saw the light of day, so this testing phase should be taken with a grain of salt. Often, engineering silicon is the first batch made to enable software and firmware development, while the final silicon is more efficient, optimized to draw less power, and aligned with regular TGP structures. The current highest-end SKU, the GeForce RTX 4090, uses a 450 W TGP. So, treat these figures with some reservation as we wait for more information to come out.
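
On shipping cards, the enforced power limit that these TGP targets translate into can be read through NVIDIA's NVML library. Below is a minimal sketch using the pynvml bindings; it assumes an NVIDIA GPU and driver are present, and is only an illustration of how TGP-style limits surface in software:

```python
# Minimal sketch: query board power draw and the enforced power limit via NVML.
# Assumes an NVIDIA GPU, a recent driver, and the pynvml package installed.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000          # NVML reports milliwatts
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000  # current TGP-style cap

print(f"Drawing {power_w:.1f} W of a {limit_w:.0f} W limit")
pynvml.nvmlShutdown()
```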

SpiNNcloud Systems Announces First Commercially Available Neuromorphic Supercomputer

Today, in advance of ISC High Performance 2024, SpiNNcloud Systems announced the commercial availability of its SpiNNaker2 platform, a supercomputer-level hybrid AI high-performance computer system based on principles of the human brain. Pioneered by Steve Furber, designer of the original ARM and SpiNNaker1 architectures, the SpiNNaker2 supercomputing platform uses a large number of low-power processors for efficiently computing AI and other workloads.

The first-generation SpiNNaker1 architecture is currently used by dozens of research groups across 23 countries. Sandia National Laboratories, the Technical University of Munich, and Universität Göttingen are among the first customers placing orders for SpiNNaker2, which was developed around commercialized IP invented in the Human Brain Project, a billion-euro research project funded by the European Union to design intelligent, efficient artificial systems.

Report: 3 Out of 4 Laptop PCs Sold in 2027 will be AI Laptop PCs

Personal computers (PCs) have been used as the major productivity device for several decades. But now we are entering a new era of PCs based on artificial intelligence (AI), thanks to the boom witnessed in generative AI (GenAI). We believe the inventory correction and demand weakness in the global PC market have already normalized, with the impacts from COVID-19 largely being factored in. All this has created a comparatively healthy backdrop for reshaping the PC industry. Counterpoint estimates that almost half a billion AI laptop PCs will be sold during the 2023-2027 period, with AI PCs reviving the replacement demand.

Counterpoint separates GenAI laptop PCs into three categories - AI basic laptop, AI-advanced laptop and AI-capable laptop - based on their levels of computational performance, corresponding use cases, and computational efficiency. We believe AI basic laptops, which are already in the market, can perform basic AI tasks but cannot fully handle GenAI tasks. Starting this year, they will be supplanted by more AI-advanced and AI-capable models with enough TOPS (tera operations per second), powered by an NPU (neural processing unit) or GPU (graphics processing unit), to perform advanced GenAI tasks really well.

Apple Unveils the Redesigned 11‑inch and All‑new 13‑inch iPad Air, Supercharged by the M2 Chip

Apple today announced the redesigned 11-inch and all-new 13-inch iPad Air, supercharged by the M2 chip. Now available in two sizes for the first time, the 11-inch iPad Air is super-portable, and the 13-inch model provides an even larger display for more room to work, learn, and play. Both deliver phenomenal performance and advanced capabilities, making iPad Air more powerful and versatile than ever before. Featuring a faster CPU, GPU, and Neural Engine in M2, the new iPad Air offers even more performance and is an incredibly powerful device for artificial intelligence. The front-facing Ultra Wide 12MP camera with Center Stage is now located along the landscape edge of iPad Air, which is perfect for video calls. It also includes faster Wi-Fi, and cellular models include super-fast 5G, so users can stay connected on the go. With a portable design, all-day battery life, a brilliant Liquid Retina display, and support for Apple Pencil Pro, Apple Pencil (USB-C), and Magic Keyboard, iPad Air empowers users to be even more productive and creative. The new iPad Air is available in new blue and purple finishes, along with starlight and space gray. The 11-inch iPad Air still starts at just $599, and the 13-inch iPad Air is a fantastic value at just $799. Customers can order the new iPad Air today, with availability beginning Wednesday, May 15.

"So many users—from students, to content creators, to small businesses, and more—love iPad Air for its performance, portability, and versatility, all at an affordable price. Today, iPad Air gets even better," said Bob Borchers, Apple's vice president of Product Marketing. "We're so excited to introduce the redesigned 11-inch and all-new 13-inch iPad Air, offering two sizes for the first time. With its combination of a brilliant Liquid Retina display, the phenomenal performance of the M2 chip, incredible AI capabilities, and its colorful, portable design with support for new accessories, iPad Air is more powerful and versatile than ever."

Apple Unveils Stunning New iPad Pro With the World's Most Advanced Display, M4 Chip and Apple Pencil Pro

Apple today unveiled the groundbreaking new iPad Pro in a stunningly thin and light design, taking portability and performance to the next level. Available in silver and space black finishes, the new iPad Pro comes in two sizes: an expansive 13-inch model and a super-portable 11-inch model. Both sizes feature the world's most advanced display—a new breakthrough Ultra Retina XDR display with state-of-the-art tandem OLED technology—providing a remarkable visual experience. The new iPad Pro is made possible with the new M4 chip, the next generation of Apple silicon, which delivers a huge leap in performance and capabilities. M4 features an entirely new display engine to enable the precision, color, and brightness of the Ultra Retina XDR display. With a new CPU, a next-generation GPU that builds upon the GPU architecture debuted on M3, and the most powerful Neural Engine yet, the new iPad Pro is an outrageously powerful device for artificial intelligence. The versatility and advanced capabilities of iPad Pro are also enhanced with all-new accessories. Apple Pencil Pro brings powerful new interactions that take the pencil experience even further, and a new thinner, lighter Magic Keyboard is packed with incredible features. The new iPad Pro, Apple Pencil Pro, and Magic Keyboard are available to order starting today, with availability in stores beginning Wednesday, May 15.

"iPad Pro empowers a broad set of pros and is perfect for anyone who wants the ultimate iPad experience—with its combination of the world's best displays, extraordinary performance of our latest M-series chips, and advanced accessories—all in a portable design. Today, we're taking it even further with the new, stunningly thin and light iPad Pro, our biggest update ever to iPad Pro," said John Ternus, Apple's senior vice president of Hardware Engineering. "With the breakthrough Ultra Retina XDR display, the next-level performance of M4, incredible AI capabilities, and support for the all-new Apple Pencil Pro and Magic Keyboard, there's no device like the new iPad Pro."

Apple Introduces the M4 Chip

Apple today announced M4, the latest chip delivering phenomenal performance to the all-new iPad Pro. Built using second-generation 3-nanometer technology, M4 is a system on a chip (SoC) that advances the industry-leading power efficiency of Apple silicon and enables the incredibly thin design of iPad Pro. It also features an entirely new display engine to drive the stunning precision, color, and brightness of the breakthrough Ultra Retina XDR display on iPad Pro. A new CPU has up to 10 cores, while the new 10-core GPU builds on the next-generation GPU architecture introduced in M3, and brings Dynamic Caching, hardware-accelerated ray tracing, and hardware-accelerated mesh shading to iPad for the first time. M4 has Apple's fastest Neural Engine ever, capable of up to 38 trillion operations per second, which is faster than the neural processing unit of any AI PC today. Combined with faster memory bandwidth, along with next-generation machine learning (ML) accelerators in the CPU, and a high-performance GPU, M4 makes the new iPad Pro an outrageously powerful device for artificial intelligence.

"The new iPad Pro with M4 is a great example of how building best-in-class custom silicon enables breakthrough products," said Johny Srouji, Apple's senior vice president of Hardware Technologies. "The power-efficient performance of M4, along with its new display engine, makes the thin design and game-changing display of iPad Pro possible, while fundamental improvements to the CPU, GPU, Neural Engine, and memory system make M4 extremely well suited for the latest applications leveraging AI. Altogether, this new chip makes iPad Pro the most powerful device of its kind."

NVIDIA Advertises "Premium AI PC" Mocking the Compute Capability of Regular AI PCs

According to a report from BenchLife, NVIDIA has started a marketing campaign for the "Premium AI PC," squarely aimed at the industry's latest trend, pushed by Intel, AMD, and Qualcomm, of the "AI PC": a system featuring a dedicated NPU for processing smaller models locally. NVIDIA's approach comes from a different angle: every PC with an RTX GPU is a "Premium AI PC," a claim that holds a lot of truth. Generally, GPUs (regardless of the manufacturer) hold more computing potential than the CPU and NPU combined. With its push to include Tensor cores in its GPUs, NVIDIA is preparing for next-generation software from vendors and OS providers that will harness these powerful pieces of silicon and embed more functionality in the PC.

At the Computex event in Taiwan, more details about Premium AI PCs and general AI PCs should emerge. In its marketing materials, NVIDIA compares AI PCs to its Premium AI PCs, which have enhanced capabilities across various applications like image/video editing and upscaling, productivity, gaming, and developer tools. Another relevant selling point is the user base for these Premium AI PCs, which NVIDIA puts at 100 million users. Those PCs support over 500 AI applications out of the box, highlighting the importance of proper software support. NVIDIA's systems are usually more powerful, with GeForce RTX GPUs delivering anywhere from 100 to over 1,300 TOPS, compared to the roughly 40 TOPS of NPU-based AI PCs. How other AI PC makers plan to fight in the AI PC era remains to be seen, but there is a high chance this will be the spotlight of the upcoming Computex show.
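
That TOPS gap is straightforward arithmetic once a GPU's Tensor core configuration is known. The sketch below is a back-of-the-envelope estimate; the per-core throughput and clock are assumptions chosen to roughly match an RTX 4090, not official figures:

```python
# Back-of-the-envelope INT8 TOPS estimate for a Tensor-core GPU.
# All figures below are illustrative assumptions, not official specifications.
tensor_cores = 512             # e.g. a fully enabled AD102-class part
int8_ops_per_core_clock = 512  # assumed dense INT8 ops per Tensor core per clock
boost_clock_hz = 2.52e9

dense_tops = tensor_cores * int8_ops_per_core_clock * boost_clock_hz / 1e12
sparse_tops = dense_tops * 2   # NVIDIA quotes 2x throughput for 2:4 structured sparsity

print(f"~{dense_tops:.0f} TOPS dense, ~{sparse_tops:.0f} TOPS with sparsity")
print(f"ratio vs. a 40 TOPS NPU: ~{sparse_tops / 40:.0f}x")
```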

Alphacool Launches New Eisblock Aurora 180° Terminal

With the new Alphacool Eisblock Aurora 180° terminal, you can give your Eisblock Aurora GPU cooler a new look and gain additional options for connecting it to your water-cooling circuit. The Alphacool Aurora 180° terminal allows flexible connection options for all Eisblock GPU coolers, perfect for extensive modding projects or for systems with limited space. The elegant design is perfected by a magnetic cover.

Flexible connections
The Alphacool Eisblock Aurora 180° terminal replaces the standard terminal of the Eisblock GPU cooler. It positions the connections above the backplate, significantly reducing the depth of the cooling block. With three possible connection options for each input and output - top, side and rear - the terminal offers maximum flexibility.

SK hynix Presents CXL Memory Solutions Set to Power the AI Era at CXL DevCon 2024

SK hynix participated in the first-ever Compute Express Link Consortium Developers Conference (CXL DevCon) held in Santa Clara, California from April 30-May 1. Organized by a group of more than 240 global semiconductor companies known as the CXL Consortium, CXL DevCon 2024 welcomed a majority of the consortium's members to showcase their latest technologies and research results.

CXL is a technology that unifies the interfaces of different devices in a system such as semiconductor memory, storage, and logic chips. As it can increase system bandwidth and processing capacity, CXL is receiving attention as a key technology for the AI era in which high performance and capacity are essential. Under the slogan "Memory, The Power of AI," SK hynix showcased a range of CXL products at the conference that are set to strengthen the company's leadership in AI memory technology.
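
On Linux hosts, CXL-attached devices are enumerated by the kernel's CXL subsystem and exposed through sysfs. A minimal sketch of listing them, assuming a kernel with CXL support and the standard /sys/bus/cxl layout:

```python
# Minimal sketch: enumerate CXL devices exposed by the Linux CXL subsystem.
# Assumes a kernel built with CXL support; on hosts without CXL hardware the
# sysfs directory simply will not exist.
from pathlib import Path

cxl_bus = Path("/sys/bus/cxl/devices")
if not cxl_bus.exists():
    print("No CXL devices exposed on this host")
else:
    for dev in sorted(cxl_bus.iterdir()):
        print(dev.name)  # e.g. "mem0" for a CXL memory expander
```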

More than 500 AI Models Run Optimized on Intel Core Ultra Processors

Today, Intel announced it surpassed 500 AI models running optimized on new Intel Core Ultra processors - the industry's premier AI PC processor available in the market today, featuring new AI experiences, immersive graphics and optimal battery life. This significant milestone is a result of Intel's investment in client AI, the AI PC transformation, framework optimizations and AI tools including OpenVINO toolkit. The 500 models, which can be deployed across the central processing unit (CPU), graphics processing unit (GPU) and neural processing unit (NPU), are available across popular industry sources, including OpenVINO Model Zoo, Hugging Face, ONNX Model Zoo and PyTorch. The models draw from categories of local AI inferencing, including large language, diffusion, super resolution, object detection, image classification/segmentation, computer vision and others.
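
Deploying one of these models on the CPU, GPU, or NPU through OpenVINO comes down to choosing a device string at compile time. A minimal sketch with the OpenVINO Python API; the model path is a placeholder, and NPU/GPU availability depends on the system:

```python
# Minimal sketch: load a model with OpenVINO and compile it for a chosen device.
# "model.xml" is a placeholder path; device availability varies by system.
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

model = core.read_model("model.xml")         # OpenVINO IR (or ONNX) model
compiled = core.compile_model(model, "NPU")  # swap in "GPU" or "CPU" as needed
```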

"Intel has a rich history of working with the ecosystem to bring AI applications to client devices, and today we celebrate another strong chapter in the heritage of client AI by surpassing 500 pre-trained AI models running optimized on Intel Core Ultra processors. This unmatched selection reflects our commitment to building not only the PC industry's most robust toolchain for AI developers, but a rock-solid foundation AI software users can implicitly trust."
-Robert Hallock, Intel vice president and general manager of AI and technical marketing in the Client Computing Group

AMD Celebrates its 55th Birthday

AMD is now a 55-year-old company. The chipmaker was founded on May 1, 1969, and has traversed practically every era of digital computing to reach where it is today—a company that makes contemporary processors for PCs, servers, and consumer electronics; GPUs for gaming graphics and professional visualization; and the all-important AI HPC processors driving the latest era of computing. As of this writing, AMD has a market capitalization of over $237 billion, a presence in all market regions, and supplies hardware and services to nearly every Fortune 500 company, including every IT giant. Happy birthday, AMD!

We Tested NVIDIA's new ChatRTX: Your Own GPU-accelerated AI Assistant with Photo Recognition, Speech Input, Updated Models

NVIDIA today unveiled ChatRTX, an AI assistant that runs locally on your machine, accelerated by your GeForce RTX GPU. NVIDIA originally launched this as "Chat with RTX" back in February 2024; back then, it was regarded more as a public tech demo. We reviewed the application in our feature article. The ChatRTX rebranding is probably aimed at making the name sound more like ChatGPT, which is what the application aims to be—except it runs completely on your machine and is exhaustively customizable. The most obvious advantage of a locally run AI assistant is privacy—you are interacting with an assistant that processes your prompt locally, accelerated by your GPU. The second is that you're not held back by the performance bottlenecks of cloud-based assistants.

ChatRTX is a major update over the Chat with RTX tech demo from February. To begin with, the application has several stability refinements over Chat with RTX, which felt a little rough around the edges. NVIDIA has significantly updated the LLMs included with the application, including Mistral 7B INT4 and Llama 2 7B INT4. Support is also added for additional LLMs, including Gemma, a local LLM trained by Google, based on the same technology used to make Google's flagship Gemini model. ChatRTX now also supports ChatGLM3, for both English and Chinese prompts. Perhaps the biggest upgrade to ChatRTX is its ability to recognize images on your machine, as it incorporates CLIP (contrastive language-image pre-training) from OpenAI. CLIP is a vision-language model that recognizes what it's seeing in image collections. Using this feature, you can interact with your image library without the need for metadata. ChatRTX doesn't just take text input—you can speak to it. It now accepts natural voice input, as it integrates the Whisper speech-to-text model.
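
The kind of metadata-free photo search ChatRTX performs can be approximated with the openly published CLIP weights. A minimal sketch using Hugging Face's transformers library—the model name and image path are placeholders, and this is not NVIDIA's actual pipeline:

```python
# Minimal sketch of CLIP-style image/text matching, similar in spirit to
# ChatRTX's photo recognition. Not NVIDIA's actual pipeline.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("vacation.jpg")  # placeholder local image
labels = ["a beach at sunset", "a city skyline", "a mountain trail"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]  # text-image similarity

for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.2%}")
```
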
DOWNLOAD: NVIDIA ChatRTX

AMD Releases Software Adrenalin 24.4.1 WHQL GPU Drivers

AMD has released the latest version of its Adrenalin Edition graphics drivers, version 24.4.1 WHQL. It includes support for the upcoming Manor Lords game, adds performance improvements for HELLDIVERS 2, and adds AMD HYPR-Tune support to Nightingale and SKULL AND BONES. The new drivers also expand Vulkan API extension support with VK_KHR_shader_maximal_reconvergence and VK_KHR_dynamic_rendering_local_read, and include support and optimizations for the Topaz Gigapixel AI application, versions 7.1.0 and 7.1.1, with its new "Recovery" and "Low Resolution" AI upscaling features.

The new AMD Software Adrenalin Edition 24.4.1 WHQL drivers come with several fixes: performance improvements for HELLDIVERS 2; a fix for an intermittent application crash in Lords of the Fallen on Radeon RX 6000 series graphics cards; fixes for various artifact issues in SnowRunner and Horizon Forbidden West Complete Edition on Radeon RX 6800 and Radeon RX 6000 series graphics cards; a fix for an intermittent application crash or driver timeout in Overwatch 2 when Radeon Boost is enabled on Radeon RX 6000 series and newer graphics cards; a fix for an intermittent crash while changing anti-aliasing settings in Enshrouded on Radeon RX 7000 series graphics cards; and fixes for application freezes or crashes in SteamVR when using Quest Link on Meta Quest 2 or when screen sharing with Microsoft Teams.

DOWNLOAD: AMD Software Adrenalin 24.4.1 WHQL

Aetina Accelerates Embedded AI with High-performance, Small Form-factor Aetina IA380E-QUFL Graphics Card

Aetina, a leading Edge AI solution provider, announced the launch of the Aetina IA380E-QUFL at Embedded World 2024 in Nuremberg, Germany. This groundbreaking product is a small form factor PCIe graphics card powered by the high-performance Intel Arc A380E GPU.

Unmatched Power in a Compact Design
The Aetina IA380E-QUFL delivers workstation-level performance packed into a low-profile, single-slot form factor. This innovative solution consumes only 50 W, making it ideal for space- and power-constrained edge computing environments. Embedded system manufacturers and integrators can leverage the 4.096 TFLOPS of peak FP32 performance delivered by the Intel Arc A380E GPU.

Unreal Engine 5.4 is Now Available With Improvements to Nanite, AI and Machine Learning, TSR, and More

Unreal Engine 5.4 is here, and it's packed with new features and improvements to performance, visual fidelity, and productivity that will benefit game developers and creators across industries. With this release, we're delivering the toolsets we've been using internally to build and ship Fortnite Chapter 5, Rocket Racing, Fortnite Festival, and LEGO Fortnite. Here are some of the highlights.

Animation
Character rigging and animation authoring
This release sees substantial updates to Unreal Engine's built-in animation toolset, enabling you to quickly, easily, and enjoyably rig characters and author animation directly in engine, without the frustrating and time-consuming need to round trip to external applications. With an Experimental new Modular Control Rig feature, you can build animation rigs from understandable modular parts instead of complex granular graphs, while Automatic Retargeting makes it easier to get great results when reusing bipedal character animations. There are also extensions to the Skeletal Editor and a suite of new deformer functions to make the Deformer Graph more accessible.

AMD's RDNA 4 GPUs Could Stick with 18 Gbps GDDR6 Memory

Today, we have the latest round of leaks suggesting that AMD's upcoming RDNA 4 graphics cards, the "RX 8000-series," might continue to rely on GDDR6 memory modules. According to Kepler on X, the next-generation GPUs from AMD are expected to feature 18 Gbps GDDR6 memory, marking the fourth consecutive RDNA architecture to employ this memory standard. While GDDR6 may not offer the same bandwidth capabilities as the newer GDDR7 standard, this decision does not necessarily imply that RDNA 4 GPUs will be slow performers. AMD's choice to stick with GDDR6 is likely driven by factors such as meeting specific memory bandwidth requirements and cost optimization for PCB designs. However, if the rumor of 18 Gbps GDDR6 memory proves accurate, it would represent a slight step back from the 18-20 Gbps GDDR6 memory used in AMD's current RDNA 3 offerings, such as the RX 7900 XT and RX 7900 XTX GPUs.

AMD's first-generation RDNA used GDDR6 at 12-14 Gbps, RDNA 2 came with GDDR6 at 14-18 Gbps, and the current RDNA 3 uses 18-20 Gbps GDDR6. Without a step up in memory generation, speeds should stay the same at 18 Gbps. However, it is crucial to remember that leaks should be treated with skepticism, as AMD's final memory choices for RDNA 4 could change before the official launch. The decision to use GDDR6 versus GDDR7 could have significant implications in the upcoming battle between AMD, NVIDIA, and Intel's next-generation GPU architectures. If AMD indeed opts for GDDR6 while NVIDIA pivots to GDDR7 for its "Blackwell" GPUs, it could create a disparity in memory bandwidth between the competing products. All three major GPU manufacturers—AMD, NVIDIA, and Intel with its "Battlemage" architecture—are expected to unveil their next-generation offerings in the fall of this year. As we approach these highly anticipated releases, more concrete details on specifications and performance will emerge, providing a clearer picture of the competitive landscape.
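
The bandwidth stakes are easy to quantify: peak memory bandwidth is the per-pin data rate times the bus width. A quick sketch—the bus widths for unreleased parts are assumptions for illustration:

```python
# Peak memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8.
# Bus widths for unreleased RDNA 4 and GDDR7 parts are assumptions.
def bandwidth_gb_s(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

print(bandwidth_gb_s(20, 384))  # RX 7900 XTX: 20 Gbps on a 384-bit bus -> 960 GB/s
print(bandwidth_gb_s(18, 256))  # hypothetical RDNA 4: 18 Gbps on 256-bit -> 576 GB/s
print(bandwidth_gb_s(28, 256))  # GDDR7-class 28 Gbps on the same bus -> 896 GB/s
```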

China Circumvents US Restrictions, Still Acquiring NVIDIA GPUs

A recent Reuters investigation has uncovered evidence suggesting Chinese universities and research institutes may have circumvented US sanctions on high-performance NVIDIA GPUs by purchasing servers containing the restricted chips. The sanctions, tightened on November 17, 2023, prohibit the export of advanced NVIDIA GPUs, such as the consumer GeForce RTX 4090, to China. Despite these restrictions, Reuters found that at least ten China-based organizations acquired servers equipped with the sanctioned NVIDIA GPUs between November 20, 2023, and February 28, 2024. These servers were purchased from major vendors such as Dell, Gigabyte, and Supermicro, raising concerns about potential sanctions evasion. When contacted by Reuters, the companies provided varying responses.

Dell stated that it had not observed any instances of servers with restricted chips being shipped to China and expressed willingness to terminate relationships with resellers found to be violating export control regulations. Gigabyte, on the other hand, stated that it adheres to Taiwanese laws and international regulations. Notably, the sale and purchase of the sanctioned GPUs are not illegal in China. This raises the possibility that the restricted NVIDIA chips may have already been present in the country before the sanctions took effect on November 17, 2023. The findings highlight the challenges in enforcing export controls on advanced technologies, particularly in the realm of high-performance computing hardware. As tensions between the US and China continue to rise, the potential for further tightening of export restrictions on cutting-edge technologies remains a possibility.

ZOTAC to Show Scalable GPU Platforms and Industrial Solutions at Hannover Messe 2024

ZOTAC Technology is announcing a new lineup of enterprise and healthcare-oriented mini PCs designed for specific applications and scalable deployment, as well as a whole new class of external GPU acceleration platforms for Thunderbolt 3-compatible PCs. Aside from the all-new additions, ZOTAC is also refreshing its best-selling performance mini PCs with the newest generations of Intel Core Processors and NVIDIA RTX-enabled GPUs. ZOTAC will debut these rugged, innovative solutions and showcase other AI-ready compute solutions during Hannover Messe 2024, reaffirming ZOTAC's commitment to embrace the AI-driven future.

ZOTAC ZBOX Healthcare Series: Medical AI Solution
With the all-new ZOTAC Healthcare Series, ZOTAC is bringing the reputed quality and performance of its ZBOX mini PCs to the realm of healthcare. The ZBOX H39R5000W and ZBOX H37R3500W are equipped with 13th Generation Intel Core i9 or i7 laptop processors, as well as professional-grade NVIDIA RTX Ada Generation laptop GPUs. These mini PCs are ready to power medical imaging, algorithms, and more with some of the latest and greatest hardware currently available.

Long-Time Linux Nouveau Driver Chief Ben Skeggs Joins NVIDIA

Ben Skeggs, a lead maintainer of Nouveau, the open-source NVIDIA GPU driver in the Linux kernel, has joined NVIDIA. An open-source contributor to Nouveau for more than a decade, Ben Skeggs achieved the remarkable feat of supporting NVIDIA GPU hardware on open-source drivers. Before joining NVIDIA, he worked at Red Hat until September 18, 2023, when he posted that he was resigning from Red Hat and stepping back from Nouveau development. This news comes as a bit of an interesting development, as Ben Skeggs is going to NVIDIA, a company that has been reluctant in the past to support open-source drivers.

Now, he is able to continue working on the driver directly from NVIDIA. He posted a set of 156 patches to the driver, affecting tens of thousands of lines of code, and signed them all off from his official NVIDIA work address. This signals a potential turn in NVIDIA's approach to open-source software development, where the company might pay more attention to the movement and potentially hire more developers to support these projects. Back in 2012, NVIDIA had a different stance on open-source development, infamously provoking the creator of the Linux kernel, Linus Torvalds, to issue some snide remarks about the company. Hopefully, better days are ahead for the OSS world of driver development and collaboration with tech giants.

Intel Builds World's Largest Neuromorphic System to Enable More Sustainable AI

Today, Intel announced that it has built the world's largest neuromorphic system. Code-named Hala Point, this large-scale neuromorphic system, initially deployed at Sandia National Laboratories, uses Intel's Loihi 2 processor and aims to support research into future brain-inspired artificial intelligence (AI) while tackling challenges related to the efficiency and sustainability of today's AI. Hala Point advances Intel's first-generation large-scale research system, Pohoiki Springs, with architectural improvements that deliver over 10 times more neuron capacity and up to 12 times higher performance.
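
Neuromorphic systems like Hala Point compute with sparse, event-driven spiking neurons rather than dense matrix math. The leaky integrate-and-fire neuron below is a generic illustration of that principle; it is not Loihi 2's programming model or Intel's Lava API:

```python
# Generic leaky integrate-and-fire (LIF) neuron, the basic computational unit
# of spiking, brain-inspired systems. Illustration only; not Loihi's API.
def lif_neuron(input_current, leak=0.9, threshold=1.0):
    """Yield 1 on a spike and 0 otherwise, one output per input step."""
    v = 0.0  # membrane potential
    for i in input_current:
        v = v * leak + i    # leak stored charge, then integrate the new input
        if v >= threshold:  # fire once the threshold is crossed...
            yield 1
            v = 0.0         # ...and reset the potential
        else:
            yield 0

print(list(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.8, 0.9])))
# [0, 0, 1, 0, 0, 1] -- activity is sparse, which is where the efficiency comes from
```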

"The computing cost of today's AI models is rising at unsustainable rates. The industry needs fundamentally new approaches capable of scaling. For that reason, we developed Hala Point, which combines deep learning efficiency with novel brain-inspired learning and optimization capabilities. We hope that research with Hala Point will advance the efficiency and adaptability of large-scale AI technology." -Mike Davies, director of the Neuromorphic Computing Lab at Intel Labs

Minisforum V3 High-Performance AMD AI 3-in-1 Tablet Starts at $1199 Pre-Sale

Minisforum has unveiled a game-changing device that blurs the lines between tablets and laptops: the Minisforum V3. Today, the V3 has hit the Minisforum store. This innovative 3-in-1 tablet is powered by the high-performance AMD Ryzen 7 8840U processor, offering a unique blend of portability and computing power. Departing from its traditional mini PC designs, Minisforum has adopted the popular form factor of Microsoft Surface and Lenovo Yoga tablet PCs for the V3. This versatile device can be used as a handheld tablet, as a laptop with the included magnetic keyboard attached, or propped up on its kickstand. At the heart of the Minisforum V3 lies the 8-core, 16-thread Ryzen 7 8840U processor, capable of delivering exceptional performance for demanding tasks. The tablet features a stunning 14-inch 2560 x 1600 IPS screen with a 165 Hz refresh rate and 100% DCI-P3 color gamut coverage, making it an ideal choice for creative professionals and content creators.

The V3's standout feature is its advanced cooling system, which allows the Ryzen 7 8840U and onboard Radeon 780M iGPU to operate at a stable 28 W. This ensures smooth and efficient performance even under heavy workloads, making it a reliable device for all your tasks. The tablet's screen boasts a remarkable 500 nits of brightness, and its high color gamut coverage makes it perfect for professionals who require accurate color reproduction. Minisforum has priced the V3 competitively at $1,199 for the pre-sale offering, making it an attractive option for those seeking a powerful and versatile device that can adapt to various scenarios. The base configuration includes 32 GB of RAM and a 1 TB SSD for storage. For early birds, Minisforum offers a V Pen, a tempered-glass screen protector, and a laptop sleeve as gifts. Here is the link to the Minisforum V3 store.

Demand for NVIDIA's Blackwell Platform Expected to Boost TSMC's CoWoS Total Capacity by Over 150% in 2024

NVIDIA's next-gen Blackwell platform, which includes B-series GPUs and integrates NVIDIA's own Grace Arm CPU in models such as the GB200, represents a significant development. TrendForce points out that the GB200 and its predecessor, the GH200, both feature a combined CPU+GPU solution, primarily equipped with the NVIDIA Grace CPU and H200 GPU. However, the GH200 accounted for only approximately 5% of NVIDIA's high-end GPU shipments. The supply chain has high expectations for the GB200, with projections suggesting that its shipments could exceed millions of units by 2025, potentially making up nearly 40 to 50% of NVIDIA's high-end GPU market.

Although NVIDIA plans to launch products such as the GB200 and B100 in the second half of this year, upstream wafer packaging will need to adopt more complex and high-precision CoWoS-L technology, making the validation and testing process time-consuming. Additionally, more time will be required to optimize the B-series for AI server systems in aspects such as network communication and cooling performance. It is anticipated that the GB200 and B100 products will not see significant production volumes until 4Q24 or 1Q25.

Sony PlayStation 5 Pro Specifications Confirmed, Console Arrives Before Holidays

Thanks to detailed information obtained by The Verge, today we can confirm previously leaked details as Sony gears up to unveil the highly anticipated PlayStation 5 Pro, codenamed "Trinity." According to insider reports, Sony is urging developers to optimize their games for the PS5 Pro, with a primary focus on enhancing ray tracing capabilities. The console is expected to feature an RDNA 3 GPU with 30 WGPs running BVH8, capable of 33.5 TeraFLOPS of FP32 single-precision compute, and a slightly quicker CPU running at 3.85 GHz, enabling it to render games with ray tracing enabled or achieve higher resolutions and frame rates in select titles. Sony anticipates GPU rendering on the PS5 Pro to be approximately 45 percent faster than on the standard PlayStation 5. The PS5 Pro GPU will be larger and utilize faster system memory to bolster ray tracing performance, boasting up to three times the speed of the regular PS5.

Additionally, the console will employ a more powerful ray tracing architecture, backed by PlayStation Spectral Super Resolution (PSSR), allowing developers to leverage graphics features like ray tracing more extensively. To support this endeavor, Sony is providing developers with test kits, and all games submitted for certification from August onward must be compatible with the PS5 Pro. Insider Gaming, the first to report the full PS5 Pro specs, suggests a potential release during the 2024 holiday period. The PS5 Pro will also feature modifications for developers regarding system memory, with Sony increasing the memory bandwidth from 448 GB/s to 576 GB/s, enhancing efficiency for an even more immersive gaming experience. For AI processing, there is a custom AI accelerator capable of 300 INT8 TOPS and 67 FP16 TeraFLOPS, in addition to an ACV audio codec running up to 35% faster.
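
Those headline figures cross-check with simple arithmetic. The sketch below assumes RDNA 3-style dual-issue FMA throughput (4 FLOPs per shader per clock), which Sony has not confirmed:

```python
# Cross-checking the leaked PS5 Pro figures. RDNA 3-style dual-issue FMA
# (4 FLOPs per shader per clock) is an assumption, not confirmed by Sony.
cus = 30 * 2                   # 30 WGPs, 2 compute units each
shaders = cus * 64             # 64 shaders per CU
flops_per_clock = shaders * 4  # FMA (2 FLOPs) x dual-issue (2)

clock_ghz = 33.5e12 / (flops_per_clock * 1e9)
print(f"implied GPU clock: {clock_ghz:.2f} GHz")  # ~2.18 GHz

uplift = (576 - 448) / 448
print(f"memory bandwidth uplift: {uplift:.1%}")   # ~28.6% over the base PS5
```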

ADLINK Reveals New Graphics Card with Intel Arc A380E GPU at Embedded World 2024

The industrial grade A380E graphics card features an exceptional cost/performance ratio, high reliability and low power consumption (50 W). As with all ADLINK industrial products, it delivers on longevity with availability guaranteed for a minimum of five years. In addition, the A380E graphics card is slim and compact with a single slot design, measuring only 69 mm x 156 mm.

Flexible application
Although the core market is likely to be commercial gaming, the A380E graphics card is also suited to industrial Edge AI applications such as Industrial IoT and retail analytics. Video wall graphics and media processing and delivery are examples of the many other potential uses.

ASUS IoT Announces PE8000G

ASUS IoT, the global AIoT solution provider, today announced PE8000G at Embedded World 2024, a powerful edge AI computer that supports multiple GPU cards for high performance—and is expertly engineered to handle rugged conditions with resistance to extreme temperatures, vibration and variable voltage. PE8000G is powered by formidable Intel Core processors (13th and 12th gen) and the Intel R680E chipset to deliver high-octane processing power and efficiency.

With its advanced architecture, PE8000G excels at running multiple neural network modules simultaneously in real-time—and represents a significant leap forward in edge AI computing. With its robust design, exceptional performance and wide range of features, PE8000G series is poised to revolutionize AI-driven applications across multiple industries, elevating edge AI computing to new heights and enabling organizations to tackle mission-critical tasks with confidence and to achieve unprecedented levels of productivity and innovation.