News Posts matching #GPU


Researchers Exploit GPU Fingerprinting to Track Users Online

Online tracking happens when third-party services collect information about people and use it to identify them among the sea of other online users. This collection of identifying information is often called "fingerprinting," and attackers frequently exploit it to profile users. Today, researchers announced that they managed to use WebGL (Web Graphics Library) to create a unique fingerprint for every GPU and track users online. The exploit works because every piece of silicon carries its own variations and unique characteristics from manufacturing, just like each human has a unique fingerprint. Even among units of the exact same processor model, silicon differences make each chip distinct. That is why you cannot overclock every processor to the same frequency, and why binning exists.

What would happen if someone were to precisely measure the differences between GPUs and use those differences to identify online users? That is exactly what the researchers who created DrawnApart set out to do. Using WebGL, they run a GPU workload that collects more than 176 measurements across 16 data collection points. This is done using vertex operations in GLSL (OpenGL Shading Language), with workloads constructed so they are not randomly distributed across the pool of processing units. DrawnApart can measure and record the time to complete vertex renders, record the exact route the rendering took, handle stall functions, and more. This lets the framework produce unique combinations of data that serve as GPU fingerprints, which can be exploited online. Below you can see the data trace recordings of two GPUs (same model) showing variations.
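To illustrate the idea only (this is not DrawnApart's actual code, which drives WebGL vertex shaders in the browser), here is a minimal Python simulation of timing-based fingerprinting: two hypothetical cards of the same model are given slightly different per-unit timing biases, and repeated timing traces of the same card cluster more tightly than traces from different cards.

```python
import random

def make_gpu(seed, n_units=16):
    """Simulated per-unit timing bias for one card (manufacturing variation)."""
    rng = random.Random(seed)
    return [1.0 + rng.uniform(-0.05, 0.05) for _ in range(n_units)]

def trace(gpu, rng, runs=200):
    """Average completion time per unit over many timed runs (the 'trace')."""
    sums = [0.0] * len(gpu)
    for _ in range(runs):
        for i, bias in enumerate(gpu):
            sums[i] += bias + rng.gauss(0, 0.01)  # per-run measurement noise
    return [s / runs for s in sums]

def distance(a, b):
    """Euclidean distance between two fingerprint vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

rng = random.Random(0)
gpu_a, gpu_b = make_gpu(1), make_gpu(2)          # two cards, same "model"
fp_a1, fp_a2 = trace(gpu_a, rng), trace(gpu_a, rng)
fp_b = trace(gpu_b, rng)

# repeat traces of one card cluster together; a different card stands apart
assert distance(fp_a1, fp_a2) < distance(fp_a1, fp_b)
```

The averaging step mirrors why the attack works in practice: measurement noise shrinks with repeated runs, while the per-unit manufacturing bias does not.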

AMD Radeon RX 6900 XT Scores Top Spot in 3DMark Fire Strike Hall of Fame with 3.1 GHz Overclock

The 3DMark Fire Strike Hall of Fame is where overclockers submit their best benchmark runs and try to beat the very best. For years, one record held, and today it was finally beaten. The top spot now belongs to the AMD Radeon RX 6900 XT graphics card, courtesy of the extreme overclocker "biso biso" from South Korea, part of the EVGA OC team. The previous 3DMark Fire Strike world record was set on April 22nd, 2020, when Vince Lucido, also known as K|NGP|N, used four-way SLI of NVIDIA GeForce GTX 1080 Ti GPUs. That record is old news since January 27th, when biso biso made history with an AMD Radeon RX 6900 XT GPU.

The overclocker scored 62,389 points, just 1,183 more than the previous record. He pushed the Navi 21 XTX silicon that powers the Radeon RX 6900 XT to an impressive 3,147 MHz. Paired with a GPU memory clock of 2,370 MHz, the GPU was most likely LN2-cooled to achieve these results. The platform of choice was EVGA's Z690 DARK KINGPIN motherboard with an Intel Core i9-12900K processor. You can check out the result for yourself on the 3DMark Fire Strike Hall of Fame website.

NVIDIA "Hopper" Might Have Huge 1000 mm² Die, Monolithic Design

Renowned hardware leaker kopite7kimi on Twitter revealed some purported details of NVIDIA's next-generation architecture for HPC (High Performance Computing), Hopper. According to the leaker, Hopper still sports a classic monolithic die design despite previous rumors, and it appears that NVIDIA's performance targets have led to a monstrous ~1,000 mm² die for the GH100 chip, close to the maximum size that can be manufactured on a given process. This is despite the fact that Hopper is also rumored to be manufactured on TSMC's 5 nm technology, achieving higher transistor density and power efficiency than the 8 nm Samsung process NVIDIA currently contracts. At the very least, it means the final die will be bigger than the already enormous 826 mm² of NVIDIA's GA100.

If this is indeed the case and NVIDIA isn't deploying an MCM (Multi-Chip Module) design on Hopper, which targets a market with high profit margins, it likely means that less profitable consumer-oriented products from NVIDIA won't feature the technology either. MCM designs also make more sense in NVIDIA's HPC products, as they enable higher theoretical performance when scaling, exactly what that market demands. Of course, NVIDIA could still develop an MCM version of the GH100; if that were to happen, the company could pair two of these chips together as another HPC product (the rumored GH-102). Roughly 2,000 mm² of silicon in a single GPU package, paired with increased density and architectural improvements, might be what NVIDIA needs to achieve the 3x performance jump over the Ampere-based A100 that the company is reportedly targeting.

MAINGEAR Launches New NVIDIA GeForce RTX 3050 Desktops, Offering Next-Gen Gaming Features

MAINGEAR—an award-winning PC system integrator of custom gaming desktops, notebooks, and workstations—today announced that new NVIDIA GeForce RTX 3050 graphics cards are now available to configure within MAINGEAR's line of custom gaming desktop PCs and workstations. Featuring support for real-time ray tracing effects and AI technologies, MAINGEAR PCs equipped with the NVIDIA GeForce RTX 3050 offer gamers next-generation ray-traced graphics and performance comparable to the latest consoles.

Powered by Ampere, the NVIDIA GeForce RTX 3050 features NVIDIA's 2nd-generation Ray Tracing Cores and 3rd-generation Tensor Cores. Combined with new streaming multiprocessors and high-speed GDDR6 memory, the NVIDIA GeForce RTX 3050 can power the latest and greatest games. NVIDIA RTX on 30 Series GPUs delivers real-time ray tracing effects, including shadows, reflections, and Ambient Occlusion (AO). The groundbreaking NVIDIA DLSS (Deep Learning Super Sampling) 2.0 AI technology utilizes Tensor Core AI processors to boost frame rates while producing sharp, uncompromised visual fidelity comparable to high native resolutions.

Alphacool Launches Aurora Vertical GPU Mount & Eisblock Acryl GPX for Zotac RTX 3070Ti

With the new Aurora Vertical GPU Mount, Alphacool now offers the possibility to install the graphics card vertically inside compatible PC cases, whether they're air or liquid cooled. In addition, the mount features 11 digitally addressable 5V RGB LEDs that create a unique, very classy looking illumination. The digital aRGB LED lighting can be controlled either with a digital RGB controller or a digital RGB capable motherboard.

There is more new blood within the Eisblock range! In the future, the Zotac RTX 3070 Ti AMP Holo, Trinity, and Trinity OC can also be water-cooled with Alphacool's Eisblock Aurora Acryl GPX custom cooler.

EVGA Introduces E1 Gaming Rig

With the launch of the EVGA E1, EVGA is once again taking extreme gaming to the next level, making a statement with our new gaming rig. The E1 will be extremely limited and available to EVGA Members only.

Back in 1999, when EVGA was first founded, graphics cards were basic. EVGA was the first to introduce Heat Pipe Cooling and iCX Technology, along with 4-Way SLI, bringing more benefits to gamers. In 2012, EVGA introduced its first power supply. Through quality and features, EVGA has become a top choice for power supplies when gamers build their systems. The E1 continues the tradition of top-tier hardware for gamers.

NVIDIA "GA103" GeForce RTX 3080 Ti Laptop GPU SKU Pictured

When NVIDIA announced the GeForce RTX 3080 Ti mobile graphics card, we were left wanting to see what the GA103 silicon powering the GPU looks like. Thanks to the Chinese YouTuber Geekerwan, we now have the first pictures of the GPU. Pictured below is the GA103S/GA103M SKU with GN20-E8-A1 labeling. It features 58 SMs, making up 7424 CUDA cores in total. The number of Tensor cores for this SKU is 232, while there are 58 RT cores. NVIDIA has decided to pair this GPU with a 256-bit memory bus and 16 GB of GDDR6 memory.

As it turns out, the full GA103 silicon has a total of 7680 CUDA cores and a 320-bit memory bus, so this mobile version is a slightly cut-down variant. It sits between the GA104 and GA102 SKUs, providing a significant improvement over the core count of GA104 silicon. Power consumption of the GA103 SKU for the GeForce RTX 3080 Ti mobile is set to a variable 80-150 Watt range, which can be adjusted according to the system's cooling capacity. An interesting thing to point out is the die size of 496 mm², roughly a quarter larger than GA104, in line with the roughly quarter-higher CUDA core count.

Tachyum Selected for Pan-European Project Enabling 1 AI Zettaflop in 2024

Tachyum today announced that it was selected by the Slovak Republic to participate in the latest submission for the Important Projects of Common European Interest (IPCEI) to develop Prodigy 2 for HPC/AI. Prodigy 2 for HPC/AI will enable 1 AI Zettaflop and more than 10 DP Exaflops computers to support superhuman brain-scale computing by 2024 for under €1B. As part of this selection, Tachyum could receive a €49 million grant to accelerate a second generation of its Prodigy processor for HPC/AI on a 3-nanometer process.

The IPCEI program can make a very important contribution to sustainable economic growth, jobs, competitiveness and resilience for industry and the economy in the European Union. IPCEI will strengthen the EU's open strategic autonomy by enabling breakthrough innovation and infrastructure projects through cross-border cooperation and with positive spill-over effects on the internal market and society as a whole.

Intel Arc Alchemist DG2 GPU Memory Configurations Leak

Intel's upcoming Arc Alchemist lineup of discrete graphics cards is generating a lot of attention from consumers. Leaks of these cards' performance and detailed specifications appear more and more as we enter the countdown to launch day, sometime in Q1 of this year. Today, a slide from @9550pro on Twitter shows the laptop memory configurations of Intel's DG2 GPU. As the picture suggests, the top-end SKU1 with 512 EUs supports 16 GB of GDDR6 memory running at 16 Gbps. The memory sits on a 256-bit bus and delivers 512 GB/s of bandwidth from eight VRAM modules.

When it comes to SKU2, a variant with 384 EUs, this configuration supports six VRAM modules on a 192-bit bus, also running at 16 Gbps, for a total capacity of 12 GB and a bandwidth of 384 GB/s. Going down the stack, the SKU3 DG2 GPU features 256 EUs, four VRAM modules on a 128-bit bus, 8 GB of capacity, and 256 GB/s of bandwidth. Last but not least, the smallest DG2 variants come in the form of SKU4 and SKU5, featuring 128 EUs and 96 EUs, respectively. Intel envisions these lower-end SKUs with two VRAM modules on a 64-bit bus and slower GDDR6 memory running at 14 Gbps, paired with 4 GB of total capacity and a bandwidth of 112 GB/s.
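The bandwidth figures follow directly from bus width and per-pin data rate: multiply the number of bus pins by the per-pin speed in Gbps and divide by eight bits per byte. A quick sanity check of the leaked numbers:

```python
def gddr6_bandwidth_gbps(bus_width_bits, speed_gbps_per_pin):
    """Peak memory bandwidth in GB/s: pins * per-pin data rate / 8 bits per byte."""
    return bus_width_bits * speed_gbps_per_pin / 8

# the leaked DG2 configurations
assert gddr6_bandwidth_gbps(256, 16) == 512   # SKU1: 512 EUs, 16 GB
assert gddr6_bandwidth_gbps(192, 16) == 384   # SKU2: 384 EUs, 12 GB
assert gddr6_bandwidth_gbps(128, 16) == 256   # SKU3: 256 EUs, 8 GB
assert gddr6_bandwidth_gbps(64, 14) == 112    # SKU4/SKU5: 128/96 EUs, 4 GB
```

The same formula reproduces the figures quoted elsewhere in these posts, e.g. a 384-bit bus at 19 Gbps yields 912 GB/s.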

Intel Arc Alchemist Xe-HPG Graphics Card with 512 EUs Outperforms NVIDIA GeForce RTX 3070 Ti

Intel's Arc Alchemist discrete lineup of graphics cards is scheduled for launch this quarter. We are getting some performance benchmarks of the DG2-512EU silicon, which represents the top-end Xe-HPG configuration. Thanks to a discovery by the famous hardware leaker TUM_APISAK, we have a result in the SiSoftware database that shows an Intel Arc Alchemist GPU with 4096 cores and, according to the benchmark report, just 12.8 GB of GDDR6 VRAM. This is simply an error in the report, as this GPU SKU should be coupled with 16 GB of GDDR6 VRAM. The card was reportedly running at 2.1 GHz; however, we don't know whether this represents base or boost speeds.

When it comes to actual performance, the DG2-512EU GPU scored 9017.52 Mpix/s, while NVIDIA's GeForce RTX 3070 Ti managed 8369.51 Mpix/s in the same test group, a roughly 7.7% advantage for Intel's GPU. Comparing the two cards in floating-point operations, Intel leads in the half-float, double-float, and quad-float tests, while NVIDIA holds the single-float crown. Arc Alchemist thus shows the potential to stand up to NVIDIA's offerings.

Sapphire Announces PULSE Radeon RX 6500 XT Graphics Card

SAPPHIRE Technology announces the new PULSE AMD Radeon RX 6500 XT Graphics Card targeting quiet and effective 1080p gaming graphics. With the signature PULSE red accents on a classic black minimalist shroud, it is designed to be an eye-catching addition to any PC. Play AAA games with striking performance and cooling with the PULSE AMD Radeon RX 6500 XT Graphics Card.

The PULSE AMD Radeon RX 6500 XT Graphics Card headlines with 1024 stream processors running at a Boost Clock of up to 2825 MHz and a Game Clock of up to 2685 MHz. The latest GDDR6 high-speed memory is clocked at 18 Gbps effective and backed by 16 MB of AMD Infinity Cache, which reduces latency and power consumption, enabling high overall gaming performance. To support the latest displays on the market, it is equipped with HDMI and DisplayPort 1.4 with DSC outputs. The PULSE AMD Radeon RX 6500 XT Graphics Card series features 16 powerful Compute Units and 16 Ray Accelerators.

Chinese GPU Makers Aiming for 5 and 7 nm GPUs in 2022

According to DigiTimes, several Chinese GPU makers are aiming to build GPUs on 5 or 7 nm nodes this year, something that might prove challenging, considering that TSMC's key customers are already said to have pre-paid for preferential access to these nodes. The companies in question are Innosilicon, which works with Imagination Technologies IP, Changsha Jingjia Microelectronics, and Biren Technology, all largely unknown players in the GPU market.

Innosilicon's Fantasy One is said to offer performance similar to a GeForce RTX 2070, which means it might even be a competitor for Intel's Arc GPUs, assuming the Fantasy One doesn't end up being a pipe dream. Changsha Jingjia Microelectronics is said to have announced its first GPU back in December last year with the JM9 series, which reportedly offers around 80 percent of the performance of a GeForce GTX 950, putting it in the office PC category these days and almost making you wonder why they bothered. Finally, Biren Technology announced the BR100 in October last year, apparently already manufactured on TSMC's 7 nm node, although no word on performance is available. The bigger question is whether any of these products will have an impact on the GPU market, since at best they might divert some customers in the PRC from buying GPUs from AMD, Intel, and NVIDIA, once these companies have proven that they can deliver viable drivers alongside their hardware.

NVIDIA Unlocks GPU System Processor (GSP) for Improved System Performance

In 2016, NVIDIA announced that it was working on replacing its Falcon (Fast Logic Controller) processor with a new GPU System Processor (GSP) based on the RISC-V Instruction Set Architecture (ISA). This RISC-V processor, codenamed NV-RISCV, has been used as the GPU's controller core, coordinating work across the massive pool of GPU cores. Today, NVIDIA has decided to open this NV-RISCV CPU to a broader spectrum of applications, starting with the 510.39 drivers. According to NVIDIA's documentation, this is only available on select GPUs for now, mainly data-center Tesla accelerators.
NVIDIA documentation: "Some GPUs include a GPU System Processor (GSP) which can be used to offload GPU initialization and management tasks. This processor is driven by the firmware file /lib/firmware/nvidia/510.39.01/gsp.bin. A few select products currently use GSP by default, and more products will take advantage of GSP in future driver releases. Offloading tasks which were traditionally performed by the driver on the CPU can improve performance due to lower latency access to GPU hardware internals."

Intel "Bonanza Mine" is a Bitcoin Mining ASIC, Intel Finally Sees Where the Money is

Intel is reportedly looking to disrupt the cryptocurrency mining hardware business with fixed-function ASICs that either outperform GPUs outright or offer enough of a performance/Watt and performance/Dollar advantage to make GPUs unviable as mining hardware. The company is planning to unveil its first such product, codenamed "Bonanza Mine," an ASIC purpose-built for Bitcoin mining.

Since it's an ASIC, "Bonanza Mine" doesn't appear to be a re-purposed Xe-HPC processor, or even an FPGA programmed to mine Bitcoin; it's a purpose-built piece of silicon. Intel will unveil "Bonanza Mine" at the 2022 ISSCC conference, describing the chip as an "ultra low-voltage energy-efficient Bitcoin mining ASIC," putting power-guzzling GPUs on notice. If Intel can clinch Bitcoin with "Bonanza Mine," designing ASICs for other cryptocurrencies is straightforward. With demand from crypto-miners slashed, graphics cards could see a tremendous fall in value, forcing scalpers to cut prices.

The Power of AI Arrives in Upcoming NVIDIA Game-Ready Driver Release with Deep Learning Dynamic Super Resolution (DLDSR)

Among the broad range of new game titles getting support, we are in for a surprise. NVIDIA yesterday announced the feature list of its upcoming game-ready GeForce driver, scheduled for public release on January 14th. According to a new blog post on NVIDIA's website, the forthcoming game-ready driver will feature an AI-enhanced version of Dynamic Super Resolution (DSR), a feature that has been available in GeForce drivers for a while. The new AI-powered tech is what the company calls Deep Learning Dynamic Super Resolution, or DLDSR for short. It uses neural networks that require fewer input pixels while producing stunning image quality on your monitor.
NVIDIA: "Our January 14th Game Ready Driver updates the NVIDIA DSR feature with AI. DLDSR (Deep Learning Dynamic Super Resolution) renders a game at a higher, more detailed resolution before intelligently shrinking the result back down to the resolution of your monitor. This downsampling method improves image quality by enhancing detail, smoothing edges, and reducing shimmering."

DLDSR improves upon DSR by adding an AI network that requires fewer input pixels, making the image quality of DLDSR 2.25X comparable to that of DSR 4X, but with higher performance. DLDSR works in most games on GeForce RTX GPUs, thanks to their Tensor Cores.
NVIDIA Deep Learning Dynamic Super Resolution
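The relationship between a DSR/DLDSR factor and the internal render resolution is straightforward: the factor scales the total pixel count, so each axis scales by its square root. A small sketch:

```python
def dsr_render_resolution(width, height, factor):
    """DSR/DLDSR factor scales total pixel count; each axis scales by sqrt(factor)."""
    scale = factor ** 0.5
    return round(width * scale), round(height * scale)

# 2.25x on a 1080p monitor renders internally at 1440p-class resolution,
# while 4x renders at full 4K before downsampling
assert dsr_render_resolution(1920, 1080, 2.25) == (2880, 1620)
assert dsr_render_resolution(1920, 1080, 4.0) == (3840, 2160)
```

This is why a 2.25X DLDSR factor is far cheaper than DSR 4X: it shades 4.66 million pixels per frame instead of 8.29 million, and the AI filter recovers the quality gap.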

DeepCool Launches CK Series Mid-Tower ATX Cases

DeepCool, a global brand designing and manufacturing high-performance computer components for enthusiasts worldwide, announces the CK SERIES of Mid-Tower Cases, consisting of the CK500, CK500WH, CK560, and CK560WH. Available in black and white, all cases deliver a balance of airflow, silent performance, and exceptional cooling for the modern-day builder.

The CK SERIES delivers a no-nonsense approach to a clean, sleek chassis design. Minimalist builders who appreciate the solid front panel of the CK500 and CK500WH can rest assured that airflow performance is not hindered, thanks to enlarged ventilation outlets throughout the front, top, and rear panels. For builders who insist on even more airflow, the CK560 and CK560WH feature a unique cross-hair patterned steel front panel. All cases offer a clean aesthetic that fits a modern look while being feature-packed for additional hardware expansion and upgrades.

NVIDIA GeForce RTX 3080 12 GB Edition Rumored to Launch on January 11th

During the CES 2022 keynote, we witnessed NVIDIA update its GeForce RTX 30 series family with the GeForce RTX 3050 and RTX 3090 Ti. However, this is not the end of NVIDIA's updates to the Ampere generation, as industry sources speaking to Wccftech suggest we could see a GeForce RTX 3080 GPU with 12 GB of GDDR6X VRAM launched as a separate product. Compared to the regular RTX 3080, which carries only 10 GB of GDDR6X, the new 12 GB version brings a slight bump to the specification list. The GA102-220 GPU SKU found inside the 12 GB variant will feature 70 SMs with 8960 CUDA cores, 70 RT cores, and 280 TMUs.

This represents a minor improvement over the regular GA102-200 silicon inside the 10 GB model. The significant difference, however, is the memory organization. The new 12 GB model has a 384-bit memory bus that allows the GDDR6X modules, running at 19 Gbps, to achieve a bandwidth of 912 GB/s. The overall TDP also receives a bump to 350 Watts, compared to 320 Watts for the regular RTX 3080. For final clock speeds and pricing, we have to wait for the alleged launch date of January 11th.

congatec launches 10 new COM-HPC and COM Express Computer-on-Modules with 12th Gen Intel Core processors

congatec - a leading vendor of embedded and edge computing technology - introduces the 12th Generation Intel Core mobile and desktop processors (formerly code-named Alder Lake) on 10 new COM-HPC and COM Express Computer-on-Modules. Featuring the latest high-performance cores from Intel, the new modules in COM-HPC Size A and C as well as COM Express Type 6 form factors offer major performance gains and improvements for the world of embedded and edge computing systems. Most impressive is the fact that engineers can now leverage Intel's innovative performance hybrid architecture. Offering up to 14 cores/20 threads on BGA and 16 cores/24 threads on desktop variants (LGA mounted), 12th Gen Intel Core processors provide a quantum leap in multitasking and scalability. Next-gen IoT and edge applications benefit from up to 6 or 8 (BGA/LGA) optimized Performance-cores (P-cores), up to 8 low-power Efficient-cores (E-cores), and DDR5 memory support to accelerate multithreaded applications and execute background tasks more efficiently.

Razer Announces All-New Blade Gaming Laptops at CES 2022

Razer, the leading global lifestyle brand for gamers (Hong Kong Stock Code: 1337), is kicking off 2022 with new Razer Blade gaming laptop models including the Razer Blade 14, Razer Blade 15, and Razer Blade 17. The world's fastest laptops for gamers and creators are equipped with the recently announced NVIDIA GeForce RTX 30 Series Laptop GPUs, up to an RTX 3080 Ti, making the new Blades better than ever, now shipping with Windows 11. All new Razer Blade gaming laptops also include groundbreaking DDR5 memory, providing blistering clock speeds of up to 4800 MHz, a frequency increase of up to 50% over the previous generation.

"The Razer Blade series continues to be the best gaming laptop by providing desktop-class performance on-the-go," says Travis Furst, Senior Director of Razer's Systems business unit. "Additionally, we've enabled creators to work anywhere with gorgeous displays, available NVIDIA Studio drivers, and up to 14-Core CPUs. Users will have the ability to choose any model or configuration that best fits their gaming or creating needs, while getting the latest and greatest in graphics, memory and processing technology."

Intel's NUC 12 Extreme Edition to Feature Non-Soldered LGA1700 Socket for Alder Lake

For a long time, Intel's Next Unit of Computing (NUC) series has featured processors soldered to the PC's motherboard. However, according to the latest leak from Twitter hardware leaker @9550pro, a potential Alder Lake-based NUC will feature desktop processors and a dedicated LGA1700 socket. As the leaked image shows, Intel's NUC 12 Extreme Edition will apparently carry an LGA1700 socket with support for desktop-class Alder Lake processors. If this leak is correct, we could see a compelling all-Intel NUC, pairing an Alder Lake CPU with an Arc Alchemist discrete graphics card.

There is room for PCIe expansion, which means that, in theory, you could connect any GPU to the mainboard. However, it is natural to assume that Intel could favor its own GPU SKUs for this mini PC's launch. We have to wait and see what Intel presents at tomorrow's CES 2022 event.

NVIDIA GeForce RTX 3080 Ti Mobile Brings 16 Gbps Memory and TGP of 175 Watts

NVIDIA is preparing to launch an ultimate solution for high-end laptops and gamers who could benefit from high-performance graphics in mobile systems such as gaming laptops. Rumored to launch sometime in January, the GeForce RTX 3080 Ti mobile GPU SKU supposedly offers the highest performance in the Ampere mobile family. According to sources close to VideoCardz, team green has prepared an RTX 3080 Ti mobile design with faster memory and higher total graphics power (TGP). The memory speed gets an upgrade to 16 Gbps, compared to 14 Gbps on the RTX 3080 mobile SKU.

The TGP also receives a bump, to 175 Watts, just a tad higher than the 165 Watt TGP of the RTX 3080 mobile. The Ti version will also upgrade the CUDA core count and other specifications such as TMUs, which remain undetermined. Currently, it is rumored that the Ti version could carry 7424 CUDA cores, up from 6144 on the regular RTX 3080 mobile.

Leaked Document Confirms That MSI GeForce RTX 3090 Ti SUPRIM X Graphics Card Launches January 27th

In the past few months, we have heard rumors of NVIDIA launching an upgraded version of the GA102 silicon, called the GeForce RTX 3090 Ti. The upgraded version is supposed to max out the chip and bring additional performance to the table. According to anonymous sources of VideoCardz, MSI, one of NVIDIA's add-in board (AIB) partners, is preparing to update its SUPRIM X lineup with the MSI GeForce RTX 3090 Ti SUPRIM X, scheduled for a January 27th launch. This suggests that the official NDA on these RTX 3090 Ti GPUs lifts on January 27th, meaning we could see AIBs teasing their models very soon.

As a reminder, the GeForce RTX 3090 Ti graphics card should use a GA102-350 silicon SKU with 84 SMs, 10752 CUDA cores, 336 TMUs, and 24 GB of GDDR6X memory on a 384-bit bus at 21 Gbps for 1008 GB/s of bandwidth, with a TBP of a whopping 450 Watts. If these specifications hold, the GPU could become the top contender in the market, albeit with the massive drawback of pulling nearly half a kilowatt of power.

Lightelligence's Optical Processor Outperforms GPUs by 100 Times in Some of The Hardest Math Problems

Optical computing has been a research topic for many startups and tech companies like Intel and IBM, searching for a practical approach to a new way of computing. However, the most innovative solutions often come out of startups, and today is no exception. According to a report from EETimes, optical computing startup Lightelligence has developed a processor that outperforms regular GPUs by 100 times on some of the most challenging mathematical problems. As the report indicates, the Photonic Arithmetic Computing Engine (PACE) from Lightelligence manages to outperform regular GPUs, like NVIDIA's GeForce RTX 3080, by almost 100 times on the NP-complete class of problems.

More precisely, the PACE accelerator was tackling the Ising model, an example of a thermodynamic system used for understanding phase transitions, and it achieved some impressive results: compared to the RTX 3080, a 100-times speed-up. All of this was performed using 12,000 optical devices integrated onto a circuit running at 1 GHz. Compared to Toshiba's purpose-built, FPGA-based simulated bifurcation machine, designed specifically to tackle the Ising computation, the PACE is still 25 times faster. The PACE chip uses standard silicon photonics integration of Mach-Zehnder Interferometers (MZI) for computing, and MEMS to change the waveguide shape in the MZI.
Lightelligence Photonic Arithmetic Computing Engine

Intel CEO Planning Trip to Taiwan and Malaysia, Meeting with TSMC

Pat Gelsinger is planning a trip to Asia next week, where he'll stop over in Taiwan and Malaysia, according to Bloomberg. There he's apparently planning to hold talks showing that manufacturing in Asia is a key part of his efforts to turn Intel's fortunes around. He's also said to be meeting with TSMC.

This will be Gelsinger's first trip to Asia as Intel's CEO, largely due to the pandemic. Outside of the meeting with TSMC, his schedule wasn't detailed, but he will likely meet with key partners and suppliers. Intel does some of its chip packaging in Malaysia, specifically on the island of Penang, where plants have been temporarily closed due to the pandemic, which in turn has hurt supply for the tech companies located there.

AMD Allegedly Preparing Refreshed 6 nm RDNA 2 Radeon RX 6000S GPU

AMD is allegedly preparing to announce Radeon RX 6000S mobile graphics cards based on a refreshed RDNA 2 architecture. The new cards will be manufactured on TSMC's N6 process, which offers an 18% logic density improvement over the N7 process currently used for RDNA 2 products, resulting in increased efficiency or performance. The switch to the IP-compatible N6 node should also improve yields and shorten production cycles, allowing AMD to remain competitive with new cards from NVIDIA and Intel. We have limited information on these alleged cards, except that they will likely be announced in early 2022 at CES and that AMD may also release discrete RX 6000S series desktop graphics cards.