News Posts matching #GPU


Intel's Next Generation GPUs to be Made by TSMC, Celestial Set for 3 nm Process

Intel has awarded TSMC some big contracts for the future manufacturing of its next generation GPUs, according to Taiwan's Commercial Times. As previously covered on TPU, the second generation Battlemage graphics processing units will be fabricated on a 4 nm process. According to insider sources at both partnering companies, Intel is eyeing a release date in the second half of 2024 for this Xe2-based architecture. The same sources pointed to the third generation Celestial graphics processing units being ready in time for a second half of 2026 launch window. Arc Celestial, which is based on the Xe3 architecture, is set for manufacture in the coming years courtesy of TSMC's N3X (3 nm) process node.

One of the sources claims that Intel is quietly confident about its future prospects in the GPU sector, despite mixed critical and commercial reactions to the first generation line-up of Arc Alchemist discrete graphics cards. The company is said to be anticipating great demand for more potent versions of its graphics products in the future, and internal restructuring efforts have not dulled the will of a core team of engineers. The restructuring process resulted in the original AXG graphics division being divided into two sub-groups - CCG and DCAI. The pioneer of the entire endeavor, Raja Koduri, departed Intel midway through last month to pursue new opportunities with an AI-focused startup.

Antec Unveils Full-Tower Performance 1 FT Flagship Case with Temperature-Control Display and High Cooling Performance

[Editor's note: Our in-depth review of Antec Performance 1 FT Case is now live]

With outstanding cooling performance and lots of useful features, Antec Inc. presents its latest full-tower flagship, the new Performance 1 FT. Featuring an airflow-enhanced front panel design, magnetic front filter, and four pre-installed PWM fans, this new chassis provides efficient airflow and great cooling performance. The case is now commercially available at an MSRP of US$159.99.

The new Antec flagship supports the latest RTX 40 Series GPUs. Considering the increasing demands for CPU and GPU cooling, Antec designed the Performance 1 FT to enhance air intake, improve cable routing, and enable easy installation with various options. The new temperature display function lets users keep tabs on their components: the display screen located on the case top allows users to check the temperature of the GPU and CPU at a glance, addressing the need for temperature monitoring without purchasing expensive cooling kits.

DirectX 12 API New Feature Set Introduces GPU Upload Heaps, Enables Simultaneous Access to VRAM for CPU and GPU

Microsoft has implemented two new features into its DirectX 12 API - GPU Upload Heaps and Non-Normalized sampling have been added via the latest Agility SDK 1.710.0 preview, and the former looks to be the more intriguing of the pair. The SDK preview is only accessible to developers at the present time, since its official introduction on Friday 31 March. Support has also been initiated via the latest graphics drivers issued by NVIDIA, Intel, and AMD. The Microsoft team has this to say about the preview version of GPU upload heaps feature in DirectX 12: "Historically a GPU's VRAM was inaccessible to the CPU, forcing programs to have to copy large amounts of data to the GPU via the PCI bus. Most modern GPUs have introduced VRAM resizable base address register (BAR) enabling Windows to manage the GPU VRAM in WDDM 2.0 or later."

They continue to describe how the update allows the CPU to gain access to the pool of VRAM on the connected graphics card: "With the VRAM being managed by Windows, D3D now exposes the heap memory access directly to the CPU! This allows both the CPU and GPU to directly access the memory simultaneously, removing the need to copy data from the CPU to the GPU increasing performance in certain scenarios." This GPU optimization could offer many benefits in the context of computer games, since memory requirements continue to grow in line with an increase in visual sophistication and complexity.
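The benefit Microsoft describes can be illustrated with a toy model. The sketch below is plain Python, not the real D3D12 API - the class and method names are invented purely for illustration - and it simply counts the bytes moved over the bus under the classic staging-copy path versus a direct CPU write into CPU-visible VRAM:

```python
# Toy model: bytes moved when uploading data to VRAM.
# Not real D3D12 -- all names here are invented for illustration only.

class ToyGPU:
    def __init__(self):
        self.bytes_copied_over_bus = 0

    def upload_via_staging(self, data: bytes) -> None:
        # Classic path: the CPU fills a staging buffer in system RAM,
        # then the data is copied across the PCIe bus into VRAM.
        staging = bytearray(data)                    # CPU-side staging copy
        self.bytes_copied_over_bus += len(staging)   # explicit bus copy

    def upload_direct(self, data: bytes) -> None:
        # GPU-upload-heap path: with resizable BAR, the CPU writes
        # straight into CPU-visible VRAM -- no separate copy step.
        vram_view = memoryview(bytearray(len(data)))
        vram_view[:] = data                          # single CPU write
        # nothing added to bytes_copied_over_bus

gpu = ToyGPU()
payload = b"x" * 1024
gpu.upload_via_staging(payload)   # adds 1024 bytes of bus traffic
gpu.upload_direct(payload)        # adds none
print(gpu.bytes_copied_over_bus)  # 1024
```

The real feature exposes the VRAM heap directly to the CPU via the D3D12 heap APIs; the point of the sketch is only that eliminating the intermediate copy removes an entire pass over the data.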

NVIDIA Ramps Up Battle Against Makers of Unlicensed GeForce Cards

NVIDIA is stepping up its fight against manufacturers of counterfeit graphics cards in China, according to an article published by MyDrivers - the hardware giant is partnering with a number of the nation's major e-commerce companies in order to eliminate inventories of bogus GPUs. It is claimed that these online retail platforms, including JD.com and Douyin, are partway into removing a swathe of dodgy stock from their listings. NVIDIA is seeking to disassociate itself from the pool of unlicensed hardware and the brands responsible for flooding the domestic and foreign markets with so-called fake graphics cards. The company is reputed to be puzzled about the murky origins of this bootlegging of its patented designs.

The market became saturated with fake hardware during the Ethereum mining boom - little-known cottage companies such as 51RSIC, Corn, Bingying and JieShuoMllse were pushing rebadged cheap OEM cards to domestic e-tail sites. The knock-off GPUs also crept outside of that sector, and import listings started to appear on international platforms including eBay, AliExpress, Amazon and Newegg. NVIDIA is also fighting to stop the sale of refurbished cards - these are very likely to have been utilized in intensive cryptocurrency mining activities. A flood of these hit the market following an extreme downturn in crypto mining efforts, and many enthusiast communities have warned against acquiring pre-owned cards due to the high risk of component failure.

HP Launches New Laptops and Accessories for Hybrid Work

Today at the Amplify Partner Conference, HP Inc. announced new products and solutions to usher in the next era of hybrid work for everyone with a comprehensive set of computing solutions for hybrid flexibility. With only 22 percent of workers describing themselves as 'thriving' in hybrid work, it's clear companies are still figuring out how to make hybrid work. "Most companies want to move past the 'forced return' to the office era of hybrid work," said Alex Cho, President of Personal Systems, HP Inc. "The challenge is, they're not sure how to. We believe the future is hybrid flexibility, which delivers the best of the home and the office to workers everywhere."

According to HP's Future of Work study, 80 percent of workers want to be in the office some of the time, but many companies continue to struggle to get workers back in the office. HP research suggests that the most significant barrier to a return to office is a sub-optimal technology experience. In fact, 89 percent say technology is the most important factor driving return to office decisions. Similarly, of those who report thriving in hybrid work, 90 percent believe that access to the right technology and tools leads to a positive work experience. To accelerate employees' return to work, the right technology is required for optimal work setups, enabling success for companies and their employees.

HP Boosts Gaming Solutions for Awe-Inspiring Experiences

Today at the Amplify Partner Conference, HP Inc announced its latest line-up of gaming hardware and software designed to bring gamers everything they need to enjoy the games they love. The new OMEN Transcend 16 Laptop, OMEN 16 Laptop, Victus 16 Laptop, and a vast range of stunning OMEN monitors offer casual, hobbyist, lifestyle, and hardcore gamers the power and flexibility to play and work hard. To bring everything together, new enhancements in OMEN Gaming Hub offer a variety of performance and personalization features.

People everywhere seek devices that can adapt to hybrid play and work. Sixty-two percent of gamers prefer a PC that fits their entire life instead of being used for gaming only. With a billion new gamers entering the space in the last seven years and 84 percent of them using games to connect with others with similar interests, gaming devices enable access to countless games and corresponding communities.

NVIDIA Enables More Encoding Streams on GeForce Consumer GPUs

NVIDIA has quietly removed some video encoding limitations on its consumer GeForce graphics processing units (GPUs), allowing encoding of up to five simultaneous streams. Previously, NVIDIA's consumer GeForce GPUs were limited to three simultaneous NVENC encodes. The same limitation did not apply to professional GPUs.

According to NVIDIA's own Video Encode and Decode GPU Support Matrix document, the number of concurrent NVENC encodes on consumer GPUs has been increased from three to five. This includes certain GeForce GPUs based on the Maxwell 2nd Gen, Pascal, Turing, Ampere, and Ada Lovelace GPU architectures. While the number of concurrent NVDEC decodes was never limited, there is still a per-GPU limit on how many streams can be encoded, depending on each stream's resolution and codec.
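In practice the change amounts to a driver-enforced session cap being raised. The sketch below is a toy model of how such a cap behaves - the real limit lives inside NVIDIA's driver and also varies with resolution and codec; only the three-to-five figures come from the article:

```python
# Toy model of a driver-side concurrent-encode-session cap.
# The 3 -> 5 figures come from the article; everything else is illustrative.

class EncoderPool:
    def __init__(self, max_sessions: int):
        self.max_sessions = max_sessions
        self.active = 0

    def open_session(self) -> bool:
        # Mirrors observed NVENC behaviour: opening a session
        # beyond the cap simply fails.
        if self.active >= self.max_sessions:
            return False
        self.active += 1
        return True

old_driver = EncoderPool(max_sessions=3)
new_driver = EncoderPool(max_sessions=5)

# Try to open six encode sessions on each; count how many succeed.
old_ok = sum(old_driver.open_session() for _ in range(6))
new_ok = sum(new_driver.open_session() for _ in range(6))
print(old_ok, new_ok)  # 3 5
```

An application such as OBS or FFmpeg would see the extra headroom simply as two more encode sessions succeeding before the driver returns an error.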

AMD FSR 3 FidelityFX Super Resolution Technology Unveiled at GDC 2023

AMD issued briefing material earlier this month, teasing an upcoming reveal of its next-generation FidelityFX technology at GDC 2023. True to form, today the hardware specialist has announced that FidelityFX Super Resolution 3.0 is incoming. The company is playing catch-up with rival NVIDIA, which has already issued version 3.0 of its DLSS graphics enhancer/upscaler for a small number of games. AMD says that FSR 3.0 is in an early stage of development, but it is hoped that its work on temporal upscaling will result in a number of improvements over the previous generation.

The engineering team is aiming for a 2x frame performance improvement over the existing FSR 2.0 technique, which it claims is already capable of: "computing more pixels than we have samples in the current frame." This will be achieved by generating a greater number of pixels in a current frame, via the addition of interpolated frames. It is highly likely that the team will reach a point in development where one sample, at least, will be created for every interpolated pixel. The team wants to prevent feedback loops from occurring - an interpolated frame will only be shown once, and any interpolation artifact would only remain for one frame.
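The core idea of inserting generated frames can be illustrated with a naive linear blend between two rendered frames. This is a deliberately simplified sketch, not AMD's method - FSR 3 relies on motion vectors and temporal data rather than a plain average:

```python
# Naive frame interpolation: blend two rendered frames to synthesize an
# in-between frame. FSR 3 is far more sophisticated (motion vectors,
# temporal reprojection); this only illustrates the concept.

def interpolate_frame(prev, curr, t=0.5):
    # prev/curr: flat lists of pixel intensities; t: blend factor in [0, 1]
    return [(1 - t) * a + t * b for a, b in zip(prev, curr)]

frame_a = [0, 100, 200]
frame_b = [50, 100, 250]
mid = interpolate_frame(frame_a, frame_b)
print(mid)  # [25.0, 100.0, 225.0]

# Presented sequence: rendered, interpolated, rendered. Showing each
# interpolated frame exactly once means any interpolation artifact
# persists for only a single frame -- the feedback-loop point above.
sequence = [frame_a, mid, frame_b]
```

Doubling the presented frame count this way is where the claimed 2x frame-rate uplift would come from, at the cost of the interpolated frames never containing new sampled information.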

NVIDIA RTX 3080 Ti Owners Reporting Bricked Cards During Diablo IV Closed Beta Play Sessions

A combination of the Diablo IV Closed Beta and NVIDIA RTX 3080 Ti graphics card is proving lethal for the latter - community feedback has alerted Blizzard to take action, and they will be investigating the issue in the coming days, with assistance from NVIDIA. It is speculated that the game is exposing underlying hardware faults within an affected card, but it is odd that a specific model is generating the largest number of issues. Unlucky 3080 Ti owners participating in the closed beta are said to be experiencing unpleasant or inconsistent in-game performance at best, and BSODs followed by non-functional GPUs at worst.

A Blizzard forumite, ForANge, chimed in with their experience: "My graphics card also burned out. While playing, the fans of my Gigabyte GeForce RTX 3080 Ti suddenly started running at maximum speed. At the same time, the signal from the monitor disappeared. After turning off the power and trying to turn it back on, I couldn't get it to work anymore. The card is just under a year old. It happened during a cutscene with flowers when they showed a snowy landscape."

NVIDIA Prepares H800 Adaptation of H100 GPU for the Chinese Market

NVIDIA's H100 accelerator is one of the most powerful solutions for powering AI workloads. And, of course, every company and government wants to use it to power its AI workload. However, in countries like China, shipment of US-made goods is challenging. With export regulations in place, NVIDIA had to get creative and make a specific version of its H100 GPU for the Chinese market, labeled the H800 model. Late last year, NVIDIA also created a China-specific version of the A100 model called A800, with the only difference being the chip-to-chip interconnect bandwidth being dropped from 600 GB/s to 400 GB/s.

This year's H800 SKU also features similar restrictions, and the company appears to have made similar sacrifices for shipping its chips to China. From the 600 GB/s bandwidth of the regular H100 PCIe model, the H800 is cut down to only 300 GB/s of bi-directional chip-to-chip interconnect bandwidth. While we have no data on whether the CUDA or Tensor core counts have been adjusted, the sacrifice of bandwidth to comply with export regulations will have consequences: with the communication speed reduced, training large models will incur higher latency and run slower than on the regular H100 chip, due to the massive amount of data that needs to travel from one chip to another. According to Reuters, an NVIDIA spokesperson declined to discuss other differences, stating that "our 800 series products are fully compliant with export control regulations."
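Some back-of-the-envelope arithmetic shows why the halved interconnect matters: moving the same data between chips simply takes twice as long. The 600 GB/s and 300 GB/s figures are from the article; the payload size is an arbitrary illustrative number, and real training frameworks overlap communication with compute, so the end-to-end slowdown is smaller than 2x:

```python
# Back-of-the-envelope: time to move data between two chips.
# 600 GB/s (H100) and 300 GB/s (H800) are from the article; the 60 GB
# payload is an arbitrary illustrative figure.

def transfer_seconds(gigabytes: float, bandwidth_gb_s: float) -> float:
    return gigabytes / bandwidth_gb_s

payload_gb = 60
h100_time = transfer_seconds(payload_gb, 600)  # 0.1 s per exchange
h800_time = transfer_seconds(payload_gb, 300)  # 0.2 s per exchange
print(h800_time / h100_time)  # 2.0
```

For multi-GPU training, where gradient all-reduce exchanges happen every step, that per-exchange doubling accumulates across the whole run.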

Halo Infinite's Latest PC Patch Shifts Minimum GPU Spec Requirements, Below 4 GB of VRAM Insufficient

The latest patch for Halo Infinite has introduced an undesired side effect for a select portion of its PC platform playerbase. Changes to minimum system specification requirements were not clarified by 343 Industries in their patch notes, but it appears that the game now refuses to launch for owners of older GPU hardware. A limit of 4 GB of VRAM has been listed as the bare minimum since Halo Infinite's launch in late 2021, with the AMD Radeon RX 570 and NVIDIA GeForce GTX 1050 Ti cards representing the entry-level GPU tier; basic versions of both were fitted with 4 GB of VRAM as standard.

Apparently, users running the GTX 1060 3 GB model were able to launch and play the game just fine prior to the latest patch, thanks to it being more powerful than the entry-level cards, but the advertised hard VRAM limit now seems to be fully enforced. The weaker RX 570 and GTX 1050 Ti cards can still run Halo Infinite after the introduction of Season 3 content, while a technically superior piece of hardware cannot - unfortunate for GTX 1060 3 GB owners who want to play the game in its current state.

NVIDIA Preparing RTX 5000 Ada Generation Workstation GPU

In addition to the RTX 4000 SFF Ada Generation workstation GPU launched at the GTC 2023, NVIDIA is apparently also working on the NVIDIA RTX 5000 Ada Generation, which should fit between the previously available AD102-based RTX 6000 Ada Generation workstation graphics card and the new AD104-based RTX 4000 SFF Ada Generation.

According to a fresh report from kopite7kimi, the NVIDIA RTX 5000 Ada Generation workstation GPU packs 15,360 CUDA cores and 32 GB of GDDR6 memory. If these specifications are spot on, the RTX 5000 Ada Generation GPU should also be based on the AD102 GPU, with a memory interface cut down to 256-bit to match the 32 GB of GDDR6 memory. NVIDIA also has enough room to fill out the rest of the lineup, but judging from this information, there will be a pretty big gap between the RTX 6000 and RTX 5000 Ada Generation workstation GPUs.
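The 256-bit/32 GB pairing is internally consistent. GDDR6 devices are 32 bits wide and currently top out at 2 GB (16 Gbit), so a quick check - our own arithmetic, not from the report - shows the card would need a clamshell layout with memory on both sides of the board:

```python
# Sanity check on the rumored memory configuration. The GDDR6 facts
# (32-bit device width, 2 GB max density) are standard; the 256-bit and
# 32 GB figures come from the kopite7kimi report.

bus_width_bits = 256
device_width_bits = 32
max_device_gb = 2  # 16 Gbit, the densest production GDDR6

devices_single_sided = bus_width_bits // device_width_bits    # 8 chips
capacity_single_sided = devices_single_sided * max_device_gb  # 16 GB
capacity_clamshell = capacity_single_sided * 2                # 32 GB

print(devices_single_sided, capacity_single_sided, capacity_clamshell)  # 8 16 32
```

The same arithmetic is why the full 384-bit AD102 boards (RTX 6000 Ada) reach 48 GB: twelve 32-bit channels, clamshelled.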

Gigabyte Joins NVIDIA GTC 2023 and Supports New NVIDIA L4 Tensor Core GPU and NVIDIA OVX 3.0

GIGABYTE Technology, an industry leader in high-performance servers and workstations, today announced participation in the global AI conference, NVIDIA GTC, and will share an AI session and other resources to educate attendees. Additionally, with the release of the NVIDIA L4 Tensor Core GPU, GIGABYTE has already begun qualifying its G-series servers to support it with validation. Lastly, as the NVIDIA OVX architecture has reached a new milestone, GIGABYTE has begun production of purpose-built GIGABYTE servers based on the OVX 3.0 architecture to handle the performance and scale needed for real-time, physically accurate simulations, expansive 3D worlds, and complex digital twins.

NVIDIA Session (S52463) "Protect and Optimize AI Models on Development Platform"
GTC is a great opportunity for researchers and industries to share what they have learned in AI to help further discoveries. This time around, GIGABYTE has a talk by one of MyelinTek's senior engineers, who is responsible for the research and development of MLOps technologies. The session demonstrates an AI solution that uses a pipeline function to quickly retrain new AI models and encrypt them.

Apple A17 Bionic SoC Performance Targets Could be Lowered

Apple's engineering team is rumored to be adjusting performance targets set for its next generation mobile SoC - the A17 Bionic - due to issues at the TSMC foundry. The cutting edge 3 nm process is proving difficult to handle, according to industry tipsters on Twitter. The leaks point to the A17 Bionic's overall performance goals being lowered by 20%, mainly due to the TSMC N3B node not meeting production targets. The factory is apparently lowering its yield and execution targets due to ongoing problems with FinFET limitations.

The leakers have recently revealed more up-to-date Geekbench 6 scores for the A17 Bionic, with single-thread performance at 3019, and multi-thread at 7860. Various publications have been hyping the mobile SoC's single-thread performance as matching that of desktop CPUs from Intel and AMD, more specifically 13th-gen Core i7 and 'high-end' Ryzen models. Naturally, the A17 Bionic cannot compete with these CPUs in terms of multi-thread performance.

NVIDIA and Google Cloud Deliver Powerful New Generative AI Platform

NVIDIA today announced Google Cloud is integrating the newly launched L4 GPU and Vertex AI to accelerate the work of companies building a rapidly expanding number of generative AI applications. Google Cloud, with its announcement of G2 virtual machines available in private preview today, is the first cloud services provider to offer NVIDIA's L4 Tensor Core GPU. Additionally, L4 GPUs will be available with optimized support on Vertex AI, which now supports building, tuning and deploying large generative AI models.

Developers can access the latest state-of-the-art technology available to help them get new applications up and running quickly and cost-efficiently. The NVIDIA L4 GPU is a universal GPU for every workload, with enhanced AI video capabilities that can deliver 120x more AI-powered video performance than CPUs, combined with 99% better energy efficiency.

ASUS Announces NVIDIA-Certified Servers and ProArt Studiobook Pro 16 OLED at GTC

ASUS today announced its participation in NVIDIA GTC, a developer conference for the era of AI and the metaverse. ASUS will offer comprehensive NVIDIA-certified server solutions that support the latest NVIDIA L4 Tensor Core GPU—which accelerates real-time video AI and generative AI—as well as the NVIDIA BlueField-3 DPU, igniting unprecedented innovation for supercomputing infrastructure. ASUS will also launch the new ProArt Studiobook Pro 16 OLED laptop with the NVIDIA RTX 3000 Ada Generation Laptop GPU for mobile creative professionals.

Purpose-built GPU servers for generative AI
Generative AI applications enable businesses to develop better products and services, and deliver original content tailored to the unique needs of customers and audiences. ASUS ESC8000 and ESC4000 are fully certified NVIDIA servers that support up to eight NVIDIA L4 Tensor Core GPUs, which deliver universal acceleration and energy efficiency for AI with up to 2.7X more generative AI performance than the previous GPU generation. ASUS ESC and RS series servers are engineered for HPC workloads, with support for the NVIDIA BlueField-3 DPU to transform data center infrastructure, as well as NVIDIA AI Enterprise applications for streamlined AI workflows and deployment.

Raja Koduri, Executive Vice President & Chief Architect, Leaves Intel

Intel CEO Pat Gelsinger has announced, via a tweet, Raja Koduri's departure from the silicon giant. Koduri, who currently sits as Executive Vice President and Chief Architect, will be leaving the company at the end of this month. This ends a five-year tenure at Intel, which he joined as Chief Architect back in 2017. He intends to form a brand new startup operation that will focus on AI-generative software for computer games. His tweeted reply to Gelsinger reads: "Thank you Pat and Intel for many cherished memories and incredible learning over the past 5 years. Will be embarking on a new chapter in my life, doing a software startup as noted below. Will have more to share in coming weeks."

Intel has been undergoing numerous internal restructures, and Koduri's AXG Graphics Unit was dissolved late last year. He was the general manager of the graphics chip division prior to its split, and returned to his previous role as Chief Architect at Intel. The company stated at the time that Koduri's new focus would be on: "growing efforts across CPU, GPU and AI, and accelerating high-priority technical programmes."

AT&T Supercharges Operations With NVIDIA AI

AT&T Corp. and NVIDIA today announced a collaboration in which AT&T will continue to transform its operations and enhance sustainability by using NVIDIA-powered AI for processing data, optimizing service-fleet routing and building digital avatars for employee support and training. AT&T is the first telecommunications provider to explore the use of a full suite of NVIDIA AI offerings. This includes enhancing its data processing using the NVIDIA AI Enterprise software suite, which includes the NVIDIA RAPIDS Accelerator for Apache Spark; enabling real-time vehicle routing and optimization with NVIDIA cuOpt; adopting digital avatars with NVIDIA Omniverse Avatar Cloud Engine and NVIDIA Tokkio; and utilizing conversational AI with NVIDIA Riva.

"We strive each day to deliver the most efficient global network, as we drive towards net zero emissions in our operations," said Andy Markus, chief data officer at AT&T. "Working with NVIDIA to drive AI solutions across our business will help enhance experiences for both our employees and customers." "Industries are embracing a new era in which chatbots, recommendation engines and accelerated libraries for data optimization help produce AI-driven innovations," said Manuvir Das, vice president of Enterprise Computing at NVIDIA. "Our work with AT&T will help the company better mine its data to drive new services and solutions for the AI-powered telco."

Maxsun's Mega Gamer GPU RTX 40 Series Sports Five Fans

Chinese board partner Maxsun announced today their new flagship product line dubbed Mega Gamer GPU (MGG) which packs five fans onto RTX 40 series GPUs. The first card to receive this new branding is NVIDIA's RTX 4070 Ti which Maxsun suggests will come with a healthy factory overclock, but they have not yet elaborated on complete specifications. The five fan array isn't the only unique aspect of this new lineup as the overall visual design features a striking rounded shroud with a large magnetically attached top-facing RGB LED panel. The fan arrangement appears at first to be a traditional set of three downward facing ~100 mm fans, but a glance toward the top of the card reveals the last two ~40 mm fans flanking either end. These fans are configured as exhaust fans for the finstack, likely intended to assist in pulling heat up and away from the card instead of blasting it down into the motherboard. The 40 mm fan toward the front of the card appears as though it may actually provide some benefit as it sits directly above the forward VRM to the left side of the GPU die.

For those who remember the bygone era of 2012, this is not the first GPU to pack an excessive number of fans. Some of Gigabyte's Super Overclock series of GPUs featured five 40 mm fans on a bulky triple-slot design, specifically the GeForce GTX 680 Windforce 5X SOC and the exceptionally limited Radeon HD 7970 SOC. Unlike Maxsun's MGG design, these cards arranged all of the fans along the top edge in a "pull" configuration to force air up through the dense fin stack and away from the motherboard. Similarly to Maxsun's MGG, these were overclocked cards with a distinctive visual aesthetic.

Qualcomm Unveils Snapdragon 7-Series Mobile Platform to Bring Latest Premium Experiences to More Consumers

Qualcomm Technologies, Inc. announced the new Snapdragon 7+ Gen 2 Mobile Platform—delivering premium experiences brand new to the Snapdragon 7-series. Snapdragon 7+ Gen 2 provides exceptional CPU and GPU performance fueling swift, nonstop gaming, dynamic low-light photography and 4K HDR videography, AI-enhanced experiences and high-speed 5G and Wi-Fi connectivity.

"Snapdragon is synonymous with premium mobile experiences. Today's launch of the Snapdragon 7+ Gen 2 illustrates our ability to bring some of the most in-demand flagship features to our Snapdragon-7 series—making them accessible to more people," said Christopher Patrick, senior vice president and general manager of mobile handsets, Qualcomm Technologies, Inc. "We are committed to delivering the most innovative solutions to meet the needs of consumers, our customers, and the industry at large."

Gigabyte AORUS Invites Gamers to Explore AORUSVERSE at PAX EAST 2023

GIGABYTE AORUS invites gamers and fans to attend AORUSVERSE, a vast gaming universe packed with the latest AORUS gaming hardware and gear, at PAX EAST 2023. The triumphant return to Boston will also feature fun activities and exciting esports challenges throughout the booth, where visitors can get their hands on the latest tech and participate in various events to win prizes.

At AORUSVERSE, attendees can explore the 2023 gaming laptop lineup, featuring the flagship AORUS 17X and 15X. These laptops are powered by the latest 13th Gen Intel Core CPUs and NVIDIA GeForce RTX 40 Series laptop GPUs, delivering a giant leap in gaming performance. They also come with QHD displays with up to 240 Hz refresh rates for smooth and fluid gameplay. For those looking for a balance of performance and portability, the non-X variants, AORUS 17 and 15, will also be available onsite for attendees to test personally.

Supermicro Expands Storage Solutions Portfolio for Intensive I/O Workloads with Industry Standard Based All-Flash Servers Utilizing EDSFF E3.S and E1.S

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is announcing the latest addition to its revolutionary ultra-high performance, high-density petascale class all-flash NVMe server family. Supermicro systems in this high-performance storage product family will support the next-generation EDSFF form factor, including E3.S and E1.S devices, in chassis that accommodate 16 and 32 high-performance PCIe Gen 5 NVMe drive bays.

The initial offering of the updated product line will support up to half a petabyte of storage space in a 1U 16-bay rackmount system, followed by a full petabyte of storage space in a 2U 32-bay rackmount system, for both Intel and AMD PCIe Gen 5 platforms. All of the Supermicro systems that support either the E1.S or E3.S form factors enable customers to realize these benefits in various application-optimized servers.
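The claimed densities line up with currently shipping EDSFF drive capacities. The arithmetic below is our own, not Supermicro's - the announcement does not state per-drive sizes, and the ~30.72 TB comparison point is simply a common shipping NVMe capacity class:

```python
# Rough per-bay capacity implied by the announcement. The 0.5 PB / 16 bays
# and 1 PB / 32 bays figures are from the text; per-drive sizes are our
# own inference.

half_petabyte_tb = 500    # 0.5 PB in decimal TB
full_petabyte_tb = 1000   # 1 PB in decimal TB

per_drive_1u = half_petabyte_tb / 16   # 31.25 TB per bay
per_drive_2u = full_petabyte_tb / 32   # 31.25 TB per bay
print(per_drive_1u, per_drive_2u)  # 31.25 31.25
```

Both configurations imply roughly 31 TB per bay, consistent with high-capacity E1.S/E3.S NVMe drives in the 30 TB class.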

Giga Computing Releases First Workstation Motherboards to Support DDR5 and PCIe Gen5 Technologies

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced two new workstation motherboards, GIGABYTE MW83-RP0 and MW53-HP0, built to support the Intel Xeon W-3400 or Intel Xeon W-2400 desktop workstation processors. The new CPU platform, developed on the Intel W790 chipset, is the first workstation platform in the market that supports both DDR5 and PCIe 5.0 technology, and this platform excels at demanding applications such as complex 3D CAD, AI development, simulations, 3D rendering, and more.

The new generation of Intel "Sapphire Rapids" Xeon W-3400 & W-2400 series processors adds some significant benefits when compared to the prior generation of "Ice Lake" Xeon W-3300 processors. Like their predecessors, the new Xeon processors support up to 4 TB of 8-channel memory; however, they have moved to DDR5, which is incredibly advantageous because of the big jump in memory bandwidth. Second, the new processors deliver higher CPU performance across most workloads, partially due to higher CPU core counts and clock speeds. As mentioned before, the new Xeon processors support PCIe Gen 5 devices and speeds, for higher throughput between the CPU and devices such as GPUs.

AMD Announces Appointment of New Corporate Fellows

AMD today announced the appointment of five technical leaders to the role of AMD Corporate Fellow. These appointments recognize each leader's significant impact on semiconductor innovation across various areas, from graphics architecture to advanced packaging. "David, Nathan, Suresh, Ben and Ralph - whose engineering contributions have already left an indelible mark on our industry - represent the best of our innovation culture," said Mark Papermaster, chief technology officer and executive vice president of Technology and Engineering at AMD. "Their appointments to Corporate Fellow will enable AMD to innovate in new dimensions as we work to deliver the most significant breakthroughs in high-performance computing in the decade ahead."

Appointment to AMD Corporate Fellow is an honor bestowed on the most accomplished AMD innovators. AMD Corporate Fellows are appointed after a rigorous review process that assesses not only specific technical contributions to the company, but also involvement in the industry, mentoring of others and improving the long-term strategic position of the company. Currently, only 13 engineers at AMD hold the title of Corporate Fellow.

Aetina to Showcase Its New AI Solutions at Embedded World 2023

Aetina Corporation, a leading provider of AI solutions for the creation of different types of vertical AI applications, will showcase its new embedded computers, AI inference platforms, GPUs, AI accelerators, and edge device management software at the upcoming Embedded World 2023. Aetina provides different form factors based on GPUs or ASICs, such as MXM modules, graphics cards, and edge computing systems. The MXM modules powered by NVIDIA Ampere architecture-based GPUs offer extra computing power to existing AI systems, ensuring low-latency data analytics. The MXM modules and systems built with ASICs, on the other hand, are ideal for the creation of specific applications or AI systems that involve multi-inference processes.

As an Elite member of the NVIDIA Partner Network, Aetina offers a variety of edge computing systems and platforms powered by the NVIDIA Jetson edge AI and robotics platform. Aetina's newly released embedded computers are built with the Jetson Orin series SoMs - Jetson AGX Orin, Jetson Orin NX, and Jetson Orin Nano; these small-sized systems and platforms, supporting different peripherals, can be easily integrated into larger AI-powered systems while also being able to function as standalone AI computers.