News Posts matching #GPU


NVIDIA Could Launch Hopper H100 PCIe GPU with 120 GB Memory

NVIDIA's high-performance computing hardware stack is now equipped with the top-of-the-line Hopper H100 GPU. It features 16,896 or 14,592 CUDA cores, depending on whether it comes in the SXM5 or PCIe variant, with the former being more powerful. Both variants come with a 5120-bit memory interface, with the SXM5 version using HBM3 memory running at 3.0 Gbps and the PCIe version using HBM2E memory running at 2.0 Gbps. Both versions carry the same capacity, capped at 80 GB. However, that could soon change, with the latest rumor suggesting that NVIDIA could be preparing a PCIe version of the Hopper H100 GPU with 120 GB of memory of an unknown type.
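The quoted per-pin data rates and the 5120-bit interface imply peak bandwidth figures that can be sanity-checked with simple arithmetic. A minimal sketch, using only the numbers reported above (bus width in bits times per-pin rate in Gb/s, divided by 8 bits per byte):

```python
def hbm_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: pins x per-pin rate (Gb/s) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

print(hbm_bandwidth_gbs(5120, 3.0))  # SXM5 with HBM3: 1920.0 GB/s
print(hbm_bandwidth_gbs(5120, 2.0))  # PCIe with HBM2E: 1280.0 GB/s
```

These are theoretical peaks derived from the rumored figures, not measured or officially confirmed numbers.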

According to the Chinese website "s-ss.cc," the 120 GB variant of the H100 PCIe card will feature a fully enabled GH100 chip. As the site suggests, this version will improve memory capacity and performance over the regular H100 PCIe SKU. With HPC workloads increasing in size and complexity, larger memory allocations are needed for better performance. With the recent advances in Large Language Models (LLMs), AI workloads use trillions of parameters for training, most of which runs on GPUs like the NVIDIA H100.

XPG Announces ATX 3.0 Compliant Power Supply Units

XPG, a fast-growing provider of systems, components, and peripherals for gamers, Esports pros, and tech enthusiasts, today announces a new series of high-performance power supply units. With the newly announced NVIDIA GeForce RTX 40 series GPUs, end users who plan on updating to these latest graphics cards will now need power supply units with a new type of connector. XPG actively works to provide the most up-to-date technology in all their products and happily upgrades/updates product specifications to meet the latest standards where possible. In order to meet the needs of gamers looking to upgrade soon, XPG has developed a new series of power supplies that are both ATX 3.0 compliant and PCIe 5.0 ready.

The 12VHPWR (12+4-pin) connector is now required for the next generation of top-tier gaming performance, meaning you will need a compatible PSU to upgrade. In light of this new connector type and the updated Intel ATX 3.0 specifications, XPG CYBERCORE II series models will come equipped with this new connector type and an updated internal platform.

NVIDIA Introduces L40 Omniverse Graphics Card

During its GTC 2022 session, NVIDIA introduced its new generation of gaming graphics cards based on the novel Ada Lovelace architecture. Dubbed the NVIDIA GeForce RTX 40 series, it brings various updates like more CUDA cores, the new DLSS 3, 4th-generation Tensor cores, 3rd-generation Ray Tracing cores, and much more, which you can read about here. However, today we also got a new Ada Lovelace card intended for the data center. Called the L40, it updates NVIDIA's previous Ampere-based A40 design. While the NVIDIA website provides only sparse details, the new L40 GPU uses 48 GB of GDDR6 memory with ECC error correction; using NVLink, you can get 96 GB of VRAM. NVIDIA has not disclosed the exact SKU, but we assume it uses AD102 with adjusted frequencies to lower the TDP and allow for passive cooling.

NVIDIA is calling this its Omniverse GPU, as it is part of the push to separate the GPUs used for graphics from those used for AI/HPC workloads. The "L" models in the current product stack are used to accelerate graphics, with display outputs installed on the card, while the "H" models (H100) are there to accelerate HPC/AI deployments where visual output is a secondary task. This is a further bifurcation of the GPU market, where the HPC/AI SKUs get their own architecture, and GPUs for graphics processing are built on a new architecture as well. You can see the specifications provided by NVIDIA below.

NVIDIA Jetson Orin Nano Sets New Standard for Entry-Level Edge AI and Robotics With 80x Performance Leap

NVIDIA today expanded the NVIDIA Jetson lineup with the launch of new Jetson Orin Nano system-on-modules that deliver up to 80x the performance over the prior generation, setting a new standard for entry-level edge AI and robotics. For the first time, the NVIDIA Jetson family spans six Orin-based production modules to support a full range of edge AI and robotics applications. This includes the Orin Nano—which delivers up to 40 trillion operations per second (TOPS) of AI performance in the smallest Jetson form factor—up to the AGX Orin, delivering 275 TOPS for advanced autonomous machines.

Jetson Orin features an NVIDIA Ampere architecture GPU, Arm-based CPUs, next-generation deep learning and vision accelerators, high-speed interfaces, fast memory bandwidth and multimodal sensor support. This performance and versatility empower more customers to commercialize products that once seemed impossible, from engineers deploying edge AI applications to Robotics Operating System (ROS) developers building next-generation intelligent machines.

ASUS Servers Announce AI Developments at NVIDIA GTC

ASUS, the leading IT company in server systems, server motherboards and workstations, today announced its presence at NVIDIA GTC - a developer conference for the era of AI and the metaverse. ASUS will focus on three demonstrations outlining its strategic developments in AI, including: the methodology behind ASUS MLPerf Training v2.0 results that achieved multiple breakthrough records; a success story exploring the building of an academic AI data center at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia; and a research AI data center created in conjunction with the National Health Research Institute in Taiwan.

MLPerf benchmark results help advance machine-learning performance and efficiency, allowing researchers to evaluate the efficacy of AI training and inference based on specific server configurations. Since joining MLCommons in 2021, ASUS has gained multiple breakthrough records in the data center closed division across six AI-benchmark tasks in AI training and inferencing MLPerf Training v2.0. At the ASUS GTC session, senior ASUS software engineers will share the methodology for achieving these world-class results—as well as the company's efforts to deliver more efficient AI workflows through machine learning.

MSI Unveils its First Custom NVIDIA GeForce RTX 40 Series Graphics Cards

As a leading brand in True Gaming hardware, MSI is proud to share its take on NVIDIA's exciting new GeForce RTX 4090 and RTX 4080 series GPUs, with graphics cards that unite the latest in graphics technology, high-performance circuit board design, and advanced cooling.

Powered by the new ultra-efficient NVIDIA Ada Lovelace architecture, the 3rd generation of RTX, GeForce RTX 40 Series graphics cards are beyond fast, giving gamers and creators a quantum leap in performance, neural rendering, and many more leading platform capabilities. This massive advancement in GPU technology is the gateway to the most immersive gaming experiences, incredible AI features and the fastest content creation workflows. These GPUs push state-of-the-art graphics into the future.

NVIDIA Rush-Orders A100 and H100 AI-GPUs with TSMC Before US Sanctions Hit

Early this month, the US Government banned American companies from exporting AI-acceleration GPUs to China and Russia, but these restrictions don't take effect before March 2023. This gives NVIDIA time to take rush-orders from Chinese companies for its AI-accelerators before the sanctions hit. The company has placed "rush orders" for a large quantity of A100 "Ampere" and H100 "Hopper" chips with TSMC, so they could be delivered to firms in China before March 2023, according to a report by Chinese business news publication UDN. The rush-orders for high-margin products such as AI-GPUs, could come as a shot in the arm for NVIDIA, which is facing a sudden loss in gaming GPU revenues, as those chips are no longer in demand from crypto-currency miners.

Intel Arc A770 Overclocks Up to 2.70 GHz on Stock Cooling, with Minimal Effort

In its latest video presentation dealing with the reference board design and overclocking architecture of the Arc A770 Limited Edition graphics card, Intel revealed that the cards should be "monster overclockers," and that they've been able to get their randomly selected card to run at 2.70 GHz (up from 2.10 GHz reference), without the need for custom-cooling, just by using the overclocking controls on the Arc Control software. The cooler has a noise output of up to 39 dBA, and even with the overclocked GPU, Intel claims, the temperatures never crossed the 80-90 °C range. The GPU power was claimed to be around 228 W.

Intel clarified that the "GPU Clock" advertised with the A770 is the guaranteed clock-speed sustained by the GPU at least 50% of the time, even on the "least performing" silicon. The actual clock will vary around this point. This is represented as a bell-curve on top of the voltage-frequency curve of the GPU. There are two ways to go about increasing the performance of the GPU—increasing the voltage, which would increase the clock residency (sustainability of elevated clock-states); and by increasing the frequency itself. Both of these can be accomplished using Arc Control.
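Intel's definition - the clock sustained at least 50% of the time - can be read as a quantile of clock samples over time. A minimal sketch of that reading, using hypothetical sample data rather than Intel's actual tooling:

```python
def guaranteed_clock(samples_mhz, residency=0.50):
    """Highest clock such that at least `residency` of samples are at or above it."""
    s = sorted(samples_mhz)
    return s[int(len(s) * (1 - residency))]

# Hypothetical clock samples (MHz), for illustration only
samples = [2000, 2050, 2100, 2150, 2200, 2250, 2300, 2350]
print(guaranteed_clock(samples, 0.50))  # 2200: half the samples sit at or above it
print(guaranteed_clock(samples, 0.75))  # 2100: demanding higher residency lowers the guaranteed clock
```

This also illustrates why raising voltage can lift performance without touching the frequency limit: it shifts more samples toward the top of the curve, improving residency at the elevated clock-states.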

Supermicro Adds New 8U Universal GPU Server for AI Training, NVIDIA Omniverse, and Meta

Super Micro Computer, Inc. (SMCI), a global leader in enterprise computing, storage, networking solutions, and green computing technology, is announcing its most advanced GPU server, incorporating eight NVIDIA H100 Tensor Core GPUs. Due to its advanced airflow design, the new high-end GPU system will allow increased inlet temperatures, reducing a data center's overall Power Usage Effectiveness (PUE) while maintaining the absolute highest performance profile. In addition, Supermicro is expanding its GPU server lineup with this new Universal GPU server, which is already the largest in the industry. Supermicro now offers three distinct Universal GPU systems: the 4U, 5U, and new 8U 8-GPU servers. The Universal GPU platforms support both current and future Intel and AMD CPUs -- up to 400 W, 350 W, and higher.

"Supermicro is leading the industry with an extremely flexible and high-performance GPU server, which features the powerful NVIDIA A100 and H100 GPUs," said Charles Liang, president and CEO of Supermicro. "This new server will support the next generation of CPUs and GPUs and is designed with maximum cooling capacity using the same chassis. We constantly look for innovative ways to deliver total IT Solutions to our growing customer base."

ZOTAC RTX 4090 Graphics Card Pictured

The tentative day of announcement for NVIDIA's next-gen RTX 4000 series is fast approaching, with an expected announcement from NVIDIA through its GeForce Beyond broadcast, scheduled for September 20th at GTC. And with time running out until we see what NVIDIA has laid in store for us, photographs of ZOTAC's iteration of the RTX 4090 are already leaking out - specifically on Baidu.

The photographs showcase a production run of ZOTAC's RTX 4090 cards, featuring a complete cooling and shroud redesign for NVIDIA's next generation. Gone are the typical straight, boxy lines of a high-tier GPU; ZOTAC seems to be taking a more curvaceous approach to design this time, with more organic lines enveloping a more mundane heatsink. The card features ZOTAC's IceStorm 3.0 cooling solution, which houses a triple-fan, triple-slot design that extends more than a third of its area beyond the PCB itself. There's still no confirmation on board power or the GPU powering these cards, but we have some very (very) educated guesses.

Global Top Ten IC Design House Revenue Spikes 32% in 2Q22, Ability to Destock Inventory to be Tested in 2H22, Says TrendForce

According to the latest TrendForce statistics, revenue of the top ten global IC design houses reached US$39.56 billion in 2Q22, growing 32% YoY. Growth was primarily driven by demand for data centers, networking, IoT, and high-end product portfolios. AMD achieved synergy through mergers and acquisitions. In addition to climbing to third place, the company also posted the highest annual revenue growth rate in 2Q22 at 70%.

Qualcomm continues in the No. 1 position worldwide, exhibiting growth in the mobile phone, RF front-end, automotive, and IoT sectors. Sales of mid/low-end mobile phone APs were weak but demand for high-end mobile phone APs was relatively stable. Company revenue reached US$9.38 billion, or 45% growth YoY. NVIDIA benefitted from expanded application of GPUs in data centers to expand this product category's revenue share past the 50% mark to 53.5%, making up for the 13% YoY slump in its game application business, bringing total revenue to US$7.09 billion, though annual growth rate slowed to 21%. AMD reorganized its business after the addition of Xilinx and Pensando. The company's embedded division revenue increased by 2,228% YoY. In addition, its data center department also made a considerable contribution. AMD posted revenue of US$6.55 billion, achieving 70% growth YoY, highest amongst the top ten. Broadcom's sales performance in semiconductor solutions remained solid and demand for cloud services, data centers, and networking is quite strong. The company's purchase order backlog is still increasing with 2Q22 revenue reaching US$6.49 billion, an annual growth rate of 31%.

Qualcomm Announces the Snapdragon 6 and 4 Gen 1

Qualcomm Technologies, Inc. announces Snapdragon 6 Gen 1 and Snapdragon 4 Gen 1 Mobile Platforms, providing advanced technology solutions to address the mid-tier and mass-volume segment. The Snapdragon 6 Gen 1 provides illuminating capture, hard-hitting game play, and intuitive AI assistance. It extends users' reach with expansive connectivity and sustained, efficient power and performance across the board. The latest 4-series platform, Snapdragon 4 Gen 1, offers impressive performance and AI to make interactions seamless and intuitive. Plus, this platform provides advanced photography features to enable striking capture, as well as improved connectivity so users can share endlessly.

"Both Snapdragon 6 and Snapdragon 4 provide upgrades in their respective series to enable advancements in capture, connectivity, entertainment, and AI. These new mobile platforms help our customers to deliver advanced solutions for consumers," said Deepu John, senior director, product management, Qualcomm Technologies, Inc.

Intel Meteor Lake Can Play Videos Without a GPU, Thanks to the new Standalone Media Unit

Intel's upcoming Meteor Lake (MTL) processor is set to deliver a wide range of exciting solutions, the first being the Intel 4 manufacturing node. However, today we have some interesting Linux kernel patches indicating that Meteor Lake will have a dedicated "Standalone Media" Graphics Technology (GT) block to process video/audio. Moving encoding and decoding off the GPU to a dedicated media engine will allow MTL to play back video without engaging the GPU, leaving the GPU free as a parallel processing powerhouse. Features like Intel QuickSync will be built into this unit. What is interesting is that this unit will sit on a separate tile, fused with the rest using the tile-based manufacturing found in Ponte Vecchio (which has 47 tiles).
From the Intel Linux patches: "Starting with [Meteor Lake], media functionality has moved into a new, second GT at the hardware level. This new GT, referred to as "standalone media" in the spec, has its own GuC, power management/forcewake, etc. The general non-engine GT registers for standalone media start at 0x380000, but otherwise use the same MMIO offsets as the primary GT.

Standalone media has a lot of similarity to the remote tiles present on platforms like [Xe HP Software Development Vehicle] and [Ponte Vecchio], and our i915 [kernel graphics driver] implementation can share much of the general "multi GT" infrastructure between the two types of platforms.
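Per the patch notes, a standalone-media register address is simply the primary GT's MMIO offset relocated to the 0x380000 base. A toy sketch of that mapping (the example offset is hypothetical, purely for illustration; this is not actual driver code):

```python
# Base address for standalone-media GT registers, per the quoted patch
STANDALONE_MEDIA_BASE = 0x380000

def media_gt_reg(primary_gt_offset: int) -> int:
    """Map a primary-GT MMIO register offset to its standalone-media GT address."""
    return STANDALONE_MEDIA_BASE + primary_gt_offset

print(hex(media_gt_reg(0x1234)))  # hypothetical offset -> 0x381234
```

This is why the i915 driver can reuse its "multi GT" infrastructure: only the base differs, not the per-register layout.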

NVIDIA & Dell Deliver New Data Center Solution for Zero-Trust Security and the Era of AI

NVIDIA today announced a new data center solution with Dell Technologies designed for the era of AI, bringing state-of-the-art AI training, AI inference, data processing, data science and zero-trust security capabilities to enterprises worldwide. The solution combines Dell PowerEdge servers with NVIDIA BlueField DPUs, NVIDIA GPUs and NVIDIA AI Enterprise software, and is optimized for VMware vSphere 8 enterprise workload platform, also announced today.

"AI and zero-trust security are powerful forces driving the world's enterprises to rearchitect their data centers as computing and networking workloads are skyrocketing," said Manuvir Das, head of Enterprise Computing at NVIDIA. "VMware vSphere 8 offloads, accelerates, isolates and better secures data center infrastructure services onto the NVIDIA BlueField DPU, and frees the computing resources to process the intelligence factories of the world's enterprises."

AMD B650E "Extreme" Chipset Confirmed, Brings PCIe 5.0 for GPU and SSD

AMD's upcoming launch of Ryzen 7000 series processors will bring an entirely new AM5 platform that will enable newer technologies and protocols. We have DDR5 memory and PCIe 5.0 connectivity, with everything at generation five. However, the chipsets AMD has designed to work alongside the new processors will be available in several variants. There will be regular X670 and B650 versions that support either a PCIe 5.0 GPU or a PCIe 5.0 M.2 NVMe SSD. Today, we got confirmation that not only the big X670 chipset has an "E" or "Extreme" version, but its smaller brother B650 does as well. With X670E and B650E, users get PCIe 5.0 connectivity for both their GPU and M.2 NVMe SSD. For more information, we have to wait for AMD's official launch announcement later today.

Microsoft: No Plans to Increase Xbox Console Pricing

Considering Sony's recently announced price hike for the PS5 (in markets outside the U.S.), the question remained whether Microsoft would follow suit. Sony's claimed reasons for the price hike - rising inflation and increased production costs - are certainly broad enough that they could apply to any business. Yet it seems that Microsoft is either not operating in the same global landscape as Sony, or perhaps the company is merely more willing to shoulder the additional costs so as not to increase pricing.

Speaking with Windows Central, Microsoft clarified that "We are constantly evaluating our business to offer our fans great gaming options. Our Xbox Series S suggested retail price remains at $299 (£250, €300) [and] the Xbox Series X is $499 (£450, €500)." This is actually a great thing, especially considering that gamers around the world are still underserved by the limited availability of PS5 and Xbox consoles. The Xbox stock situation has improved faster than that of the PS5, but there are still millions of gamers who haven't been able to get their hands on one or the other - and those still waiting for a PS5 console through no fault of their own are now dealing with increased pricing on an almost two-year-old console.

Ansys and AMD Collaborate to Speed Simulation of Large Structural Mechanical Models Up to 6x Faster

Ansys announced that Ansys Mechanical is one of the first commercial finite element analysis (FEA) programs supporting AMD Instinct accelerators, the newest data center GPUs from AMD. The AMD Instinct accelerators are designed to provide exceptional performance for data centers and supercomputers to help solve the world's most complex problems. To support the AMD Instinct accelerators, Ansys developed APDL code in Ansys Mechanical to interface with AMD ROCm libraries on Linux, which will support performance and scaling on the AMD accelerators.

Ansys' latest collaboration with AMD resulted in a solution that, according to Ansys' tests, significantly speeds up simulation of large structural mechanical models—between three and six times faster for Ansys Mechanical applications using the sparse direct solver. Adding support for AMD Instinct accelerators in Ansys Mechanical gives customers greater flexibility in their choice of high-performance computing (HPC) hardware.

NVIDIA GeForce RTX 4080 Could Get 23 Gbps GDDR6X Memory with 340 Watt Total Board Power

NVIDIA's upcoming GeForce RTX 40 series graphics cards are less than two months from the official launch. As we near the final specification draft, we are constantly getting updates from hardware leakers claiming that the specification is ever-changing. Today, @kopite7kimi has updated his GeForce RTX 4080 GPU predictions with some exciting changes. First off, the GPU memory gets an upgrade over the previously believed specification: before, we thought that the SKU used GDDR6X running at 21 Gbps; now, it is assumed to use a 23 Gbps variant. Faster memory should result in better overall performance, and we have yet to see what it can achieve with overclocking.
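The rumored bump from 21 to 23 Gbps translates directly into peak bandwidth. The article does not state the bus width, so the 256-bit figure in this sketch is an assumption for illustration only:

```python
def gddr6x_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width x per-pin rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# NOTE: 256-bit bus width is an assumption, not from the leak
print(gddr6x_bandwidth_gbs(256, 21.0))  # 672.0 GB/s at 21 Gbps
print(gddr6x_bandwidth_gbs(256, 23.0))  # 736.0 GB/s at 23 Gbps, a ~9.5% uplift
```

Whatever the actual bus width turns out to be, the relative uplift from the faster memory chips is the same ~9.5%.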

Next, another update for the NVIDIA GeForce RTX 4080 concerns the SKU's total board power (TBP). Previously, we believed it came with a 420 Watt TBP; however, kopite7kimi's sources claim that it has a 340 Watt TBP. This 80 Watt reduction is rather significant and could be attributed to NVIDIA's optimization to have the most efficient design possible.

TSMC has Seven Major Customers Lined Up for its 3 nm Node

Based on media reports out of Taiwan, TSMC seems to have plenty of customers lined up for its 3 nm node, with Apple being the first customer out the gates when production starts sometime next month. However, TSMC is only expected to start the production with a mere 1,000 wafer starts a month, which seems like a very low figure, especially as this is said to remain unchanged through all of Q4. On the plus side, yields are expected to be better than the initial 5 nm node yields. Full-on mass production for the 3 nm node isn't expected to happen until the second half of 2023 and TSMC will also kick off its N3E node sometime in 2023.

Apart from Apple, major customers for the 3 nm node include AMD, Broadcom, Intel, MediaTek, NVIDIA and Qualcomm. Contrary to earlier reports by TrendForce, it appears that TSMC will continue its rollout of the 3 nm node as previously planned. Apple is expected to produce the A17 smartphone and tablet SoC, as well as advanced versions of the M2 and the M3 laptop and desktop processors, on the 3 nm node. Intel is still said to be producing its graphics chiplets with TSMC, with the potential for GPU and FPGA products in the future. There's no word on what the other customers are planning to produce on the 3 nm node, but MediaTek and Qualcomm are obviously looking at using the node for future smartphone and tablet SoCs, with AMD and NVIDIA most likely aiming for upcoming GPUs, and Broadcom for some kind of HPC-related hardware.

Intel Arc A380 Desktop Graphics Card Pre-Orders Open in USA for 139 USD

The Intel Arc Alchemist A380 desktop graphics card is now available to pre-order in the USA, with Newegg listing ASRock's Challenger ITX model for 139.99 USD and shipping from August 22nd. The ASRock Arc A380 Challenger ITX 6GB OC is a custom design featuring a single cooling fan and a GPU clock speed of 2,250 MHz at a 75 W TDP, paired with a single 8-pin power connector. The card features PCIe 4.0 connectivity and 8 Xe-Cores, alongside triple DisplayPort 2.0 connectors and a single HDMI 2.0b. The card will compete with the similarly priced NVIDIA GTX 1650 and the AMD Radeon RX 6400, as seen in our review of the GUNNIR Photon Arc A380 model.

MSI Introduces Prestige 16 Mini-LED Laptops with Alder Lake-P

MSI has updated its Prestige series lineup with new members, Prestige 16 and Prestige 16 EVO. Both in Urban Silver color and equipped with Intel 12th Gen Core i7 Processor, they are powerful productivity tools that business users can really appreciate.

The MSI Prestige 16 has decent discrete GPU performance from the NVIDIA GeForce RTX 3050 Ti, and is the first Prestige laptop with a 16:10-ratio mini-LED panel. With MSI True Color Technology, it achieves high dynamic range (HDR) to the DisplayHDR 1000 standard, which significantly expands the range of two important factors: contrast ratio and color accuracy. Thanks to Dynamic Cooler Boost, MSI's patented dual-fan thermal technology, the Prestige 16 and Prestige 16 EVO are powerful laptops that maintain less than 35 dB of background noise. Business users who hold online conferences frequently can expect a smooth video conferencing experience thanks to the quadruple microphone array and Ambient Light Sensor, paired with an AI noise-canceling solution.

Due to Chip Oversupply, NVIDIA Reportedly Resumes Production of RTX 3080 12 GB

NVIDIA has reportedly resumed production of its GA102-based RTX 3080 12 GB graphics cards, according to a Tweet from GPU leaker Zed__Wang. The reason cited has to do with oversupply of the company's GA102 chip, which powers the company's high-end lineup from the RTX 3080 through the RTX 3090 Ti (in all, there are six RTX 30-series cards powered by this chip, alongside the CMP 90HX mining-specific card, datacenter and AI inferencing accelerators A10, A10G, and A40, as well as the company's RTX A4500, A5000, A5500, and A6000 series, for a total of 14 SKUs).

Oversupply, in this case, has more to do with contracting demand - not only is NVIDIA's next-gen RTX 40-series right around the corner, but the already-announced death of Ethereum's Proof of Work mining has flooded the market with second-hand RTX 30-series cards. This, alongside the already lengthy shelf life of the RTX 30-series - which hit the market back in September 2020 - has led to contracting demand for NVIDIA's GPUs. Rampant inflation and general macroeconomic indicators also do little to instill confidence in the purchase of non-essential products.

Flagship Intel Arc A770 GPU Showcased in Blender with Ray Tracing and Live Denoising

Intel Arc Alchemist graphics cards span both the gamer and creator/professional market sectors, where we witnessed Intel announce gaming and pro-vis GPU SKUs. Today, we are seeing the flagship Arc Alchemist SKU, called the A770, used in Blender rendering with ray tracing enabled. The GPU is built on the DG2-512 design with 512 EUs (32 Xe cores) and 4,096 shading units, paired with 16 GB of GDDR6 memory, and is meant to be a powerhouse for games while handling some professional software as well. At SIGGRAPH 2022, Bob Duffy, Intel's Director of Graphics Community Engagement, showcased a system with the Arc A770 GPU running Blender Cycles with ray tracing and denoising.

While we don't have any comparable data to showcase, the system managed to produce a decent rendering in the Blender 3.3 LTS release, using Intel's oneAPI. The demo scene had 4,369,466 vertices, 8,702,031 edges, 4,349,606 faces, and 8,682,950 triangles, backed by ray tracing and live denoising. We have yet to see more detailed benchmarks and how the GPU fares against the competition.

Intel Asks Xe-HPG Scavenger Hunt Winners to Accept a CPU In Lieu of Graphics Card

Remember the Xe-HPG Scavenger Hunt that Intel hosted last year? If you somehow missed it, Intel promised Arc graphics cards to 300 lucky winners. There were two different tiers of prizes, grand prize and first prize, which later ended up translating to an Arc A770 and an Arc A750 graphics card respectively. Now news via VideoCardz suggests that Intel is trying to get out of giving these 300 people their prize - well, at least the promised graphics card - in exchange for an Alder Lake CPU.

Intel has apparently sent out an email to the winners, asking them to accept an Intel Core i7-12700K if they were a grand prize winner, or a Core i5-12600K if they were a first prize winner, instead of the promised graphics card. The winners have until Friday the 19th of August to decide if they want a CPU instead of a GPU, although Intel is apparently still allowing them to wait for a GPU; the company just doesn't say how long the wait will be. As the prize has to have a similar retail price, it's also possible to get a ballpark figure for the MSRP of Intel's supposedly upcoming Arc 700-series graphics cards. The Arc A770 should end up at around the $410 mark and the A750 around the $290 mark, as these are the ballpark MSRPs of the CPUs being offered. It would be interesting to know how many people would be willing to do the trade, but sadly we're unlikely to ever find out.

Biren Technology Unveils BR100 7 nm HPC GPU with 77 Billion Transistors

Chinese company Biren Technology has recently unveiled the Biren BR100 HPC GPU during its Biren Explore Summit 2022 event. The Biren BR100 features an in-house chiplet architecture with 77 billion transistors and is manufactured on a 7 nm process using TSMC's 2.5D CoWoS packaging technology. The card is equipped with 300 MB of onboard cache alongside 64 GB of HBM2E memory delivering 2.3 TB/s of bandwidth. This combination delivers performance above that of the NVIDIA Ampere A100, achieving 1024 TFLOPs in 16-bit floating-point operations.
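Taking the reported 1024 TFLOPS FP16 figure against 2.3 TB/s of memory bandwidth (reading the quoted memory figure as bandwidth) gives a rough roofline break-even point, i.e. how many FLOPs per byte a kernel needs before it becomes compute-bound rather than bandwidth-bound:

```python
def flops_per_byte(peak_tflops: float, bandwidth_tbs: float) -> float:
    """Roofline break-even arithmetic intensity: TFLOP/s over TB/s = FLOPs per byte."""
    return peak_tflops / bandwidth_tbs

print(round(flops_per_byte(1024, 2.3)))  # ~445 FLOPs per byte
```

This is a back-of-envelope sketch from the announced figures, not a benchmark; it simply shows that, like other AI accelerators, the BR100 leans heavily on high-intensity workloads such as large matrix multiplies.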

The company also announced the BR104 which features a monolithic design and should offer approximately half the performance of the BR100 at a TDP of 300 W. The Biren BR104 will be available as a standard PCIe card while the BR100 will come in the form of an OAM compatible board with a custom tower cooler. The pricing and availability information for these cards is currently unknown.