Apr 24th, 2025 06:09 EDT


News Posts matching #limitations


Quantum Machines OPX+ Platform Enabled Breaking of Entanglement Qubit Bottleneck, via Multiplexing

Quantum networks—where entanglement is distributed across distant nodes—promise to revolutionize quantum computing, communication, and sensing. However, a major bottleneck has been scalability, as the entanglement rate in most existing systems is limited by a network design of a single qubit per node. A new study, led by Prof. A. Faraon at Caltech and conducted by A. Ruskuc et al., recently published in Nature (ref: 1-2), presents a groundbreaking solution: multiplexed entanglement using multiple emitters in quantum network nodes. By harnessing rare-earth ions coupled to nanophotonic cavities, researchers at Caltech and Stanford have demonstrated a scalable platform that significantly enhances entanglement rates and network efficiency. Let's take a closer look at the two key challenges they tackled—multiplexing to boost entanglement rates and dynamic control strategies to ensure qubit indistinguishability—and how they overcame them.

Breaking the Entanglement Bottleneck via Multiplexing
One of the biggest challenges in scaling quantum networks is the entanglement rate bottleneck, which arises due to the fundamental constraints of long-distance quantum communication. When two distant qubits are entangled via photon interference, the rate of entanglement distribution is typically limited by the speed of light and the node separation distance. In typical systems with a single qubit per node, this rate scales as c/L (where c is the speed of light and L is the distance between nodes), leading to long waiting times between successful entanglement events. This severely limits the scalability of quantum networks.
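As a rough back-of-the-envelope illustration of that scaling (with hypothetical distances and emitter counts, not figures from the paper), the attempt-rate ceiling and the multiplexing gain can be sketched as:

```python
# Back-of-the-envelope sketch of the entanglement-rate bottleneck.
# All numbers are illustrative, not taken from the Nature paper.

C = 2.0e8  # speed of light in optical fiber, m/s (roughly 2/3 of vacuum c)

def max_attempt_rate(node_separation_m: float, num_emitters: int = 1) -> float:
    """Upper bound on entanglement attempts per second.

    With one qubit per node, each attempt must wait one heralding
    time L/c before the next can start, so the rate scales as c/L.
    With N emitters multiplexed in the same node, N attempts share
    each waiting window, scaling the ceiling to N*c/L.
    """
    return num_emitters * C / node_separation_m

L = 50_000.0  # hypothetical 50 km node separation
single = max_attempt_rate(L)            # ~4,000 attempts/s
multiplexed = max_attempt_rate(L, 20)   # 20 emitters -> ~80,000 attempts/s
print(f"single-emitter ceiling: {single:.0f} attempts/s")
print(f"20-emitter ceiling:     {multiplexed:.0f} attempts/s")
```

The point of the sketch is that the c/L ceiling is fixed by geometry, so adding emitters per node is the only way to raise the attempt rate without moving the nodes closer together.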

FuturLab Announces PowerWash Simulator 2, Teases 2025 Launch

What's better than first-person sprayer PowerWash Simulator? A second PowerWash Simulator! That's right, we're thrilled to finally be able to talk about the next soap-erior sequel in our beloved franchise, PowerWash Simulator 2, coming in 2025 to Steam, Xbox Series X|S, PlayStation 5 and Epic Games Store. Available to wish list now on your platform of choice. Those of you who are eagle-eyed Game Pass fans may have enjoyed the original already, but for those unfamiliar with the concept, let me give you the rinse. PowerWash Simulator is what it says on the tin: find yourself in a mess, clean it up, build your business, become deeply immersed in soothing satisfaction, and indulge in some fascinating lore, including a mayor, a cat and… aliens?

That sounds random, but this game has been so beloved (by over 17 million of you!) that we wanted to give you more. Design Director Dan Chequer tells us a little more about why a sequel was the natural next step. "PowerWash Simulator has exceeded our expectations by taking off in the incredible way it has. Although we've added a huge amount of incredible content to the original game, we built and designed the game over four years ago when we never expected it to become so large. As a result, it has some technological limitations keeping us from adding new and interesting features. So, it made sense to start discussing where we could go from here."

Chinese Researchers Develop No-Silicon 2D GAAFET Transistor Technology

Scientists from Peking University have developed the world's first two-dimensional gate-all-around field-effect transistor (GAAFET), establishing a new performance benchmark in domestic semiconductor design. The design, documented in Nature, represents a departure in transistor architecture that could reshape the future of Chinese microelectronics design. Given the reported characteristics of 40% higher performance and 10% improved efficiency compared to TSMC's 3 nm N3 node, it looks rather promising. The research team, headed by Professors Peng Hailin and Qiu Chenguang, engineered a "wafer-scale multi-layer-stacked single-crystalline 2D GAA configuration" that demonstrated superior performance metrics when benchmarked against current industry leaders. The innovation leverages bismuth oxyselenide (Bi₂O₂Se), a novel semiconductor material that maintains exceptional carrier mobility at sub-nanometer dimensions—a critical advantage as the industry struggles to push into angstrom-era semiconductor nodes.

"Traditional silicon-based transistors face fundamental physical limitations at extreme scales," explained Professor Peng, who characterized the technology as "the fastest, most efficient transistor ever developed." The 2D GAAFET architecture circumvents the mobility degradation that plagues silicon in ultra-small geometries, allowing for continued performance scaling beyond current nodes. The development comes during China's intensified efforts to achieve semiconductor self-sufficiency, as trade restrictions have limited access to advanced lithography equipment and other critical manufacturing technologies. Even as China develops domestic EUV technology, it remains unproven in production. Rather than competing directly with established fabrication processes—where China cannot compete in the near term—the Beijing-based team has pioneered an entirely different technological approach, what Professor Peng described as "changing lanes entirely" rather than seeking incremental improvements.

UK Retailer to Limit GeForce RTX 5090 Pre-orders, Current Inventory in Single-digits

Yesterday evening (GMT), Overclockers UK's product purchasing manager set expectations for his store's day one inventory of GeForce RTX 5090 and 5080 graphics cards. Taking to the OCUK forum, Gibbo (aka Andrew Gibson) revealed that the flagship stock count was in: "single digits at present, maybe double for launch." His "TL;DR" also pointed to the store having a "few hundred" RTX 5080 models ready for launch day, with pre-orders starting on January 30 (for both Blackwell GPU product tiers). Gibbo warned potential customers about anticipated tight conditions: "we are expecting greater demand than (the RTX) 40 series, but with the launch just prior to CNY and lots of other rumors circulating initial waves of supply are poor and will probably take some time to build up. So the stock we have will be made available from the launch via the webshop, but I know what we have is likely to last only seconds, minutes at most."

Similar (predicted) circumstances have been reported across Europe and the Far East—certain outlets believe that GeForce RTX 50 series shortages will last up to three months post-launch. Potential "Blackwell" GPU customers are very likely dreading a forthcoming buying experience riddled with scalper bots, price gouging and all sorts of shady shenanigans. OCUK's product manager recommends taking a pragmatic approach when faced with a chaotic state of affairs: "to put it simply patience and expectations need to be realistic if the UK has—say 10,000 cards, and 500,000 people want one—well it is going to take time so plan ahead and also act like adults. I shall try and keep these forums updated with stock drops with heads up on the site etc. Do not call Sales or Customer service for any info or try to place orders, it shall be strictly via website only and all information will be posted on forums and on product display pages for the products as and when we have it."

Report Suggests "Extreme" Stock Limits for GeForce RTX 5090 & 5080 GPUs in Germany

A moderator on the PC Games Hardware (PCGH.de) discussion board disclosed worrying details regarding stock limitations—presumably affecting the upcoming GeForce RTX 50 series launch in Germany. In turn, this disclosure was picked up by PCGH's news department. The predicted circumstances will—reportedly—make matters difficult for customers looking to acquire higher-end "Blackwell" GPUs. The forum moderator gathered damning evidence from his network of contacts: "I was able to learn from well-informed dealer circles, the available contingent of graphics cards will be extremely limited! This applies in particular to the GeForce RTX 5090. Accordingly, NVIDIA determines where and who exactly will offer graphics cards at market launch. B2B dealers and the entire local wholesale trade, which primarily also works with business customers, will most likely come away empty-handed."

A bit of humor was sprinkled in with this informative post—the moderator joked about customers resorting to "cheerful" repetitive pressings of their F5 keys. They posit that the online buying experience for flagship Blackwell GPUs will be tiring and frustrating: "...so anyone who wants to get a GeForce RTX 5090 or GeForce RTX 5080 at market launch will have to queue digitally at the end customer dealers together with waiting (private) customers. Scalpers and bots will probably also get involved here. The quantities that can be purchased are likely to be limited to a maximum of one unit." Several stores are listed as being prime sources of stock (see below)—they reckon that the likes of Amazon will not be receiving initial batches. "Second, third, or even fourth" waves of stock are anticipated, with some retailers set to act as resellers—inevitably opening the door to predicted price gouging. It is not clear whether these alleged restrictions will come into effect in markets beyond German borders—additionally, the VideoCardz insider network has not discovered any behind-the-scenes information regarding Team Green's launch period supply strategy.

Apple Silicon Macs Gain x86 Emulation Capability, Run x86 Windows Apps on macOS

Parallels has announced the introduction of x86 emulation support in Parallels Desktop 20.2.0 for Apple Silicon Macs. This new feature enables users to run x86-based virtual machines on their M-series Mac computers, addressing a longstanding limitation since Apple's transition to its custom Arm-based processors. The early technology preview allows users to run Windows 10, Windows 11 (with some restrictions), Windows Server 2019/2022, and various Linux distributions through a proprietary emulation engine. This development particularly benefits developers and users who need to run 32-bit Windows applications or prefer x86-64 Linux virtual machines as an alternative to Apple Rosetta-based solutions.

However, Parallels is transparent about the current limitations of this preview release. Performance is notably slow, with Windows boot times ranging from 2 to 7 minutes, and overall system responsiveness remains low. The emulation only supports 64-bit operating systems, though it can run 32-bit applications. Additionally, USB device support is not available, and users must rely on Apple's hypervisor, as the Parallels hypervisor isn't compatible. Despite these constraints, the release is a crucial step toward bridging the compatibility gap for Apple Silicon Mac users, allowing legacy software to remain usable. To manage expectations while the feature is still imperfect, the option to start such virtual machines is hidden in the user interface.

NVIDIA's Bryan Catanzaro Discusses Future of AI Personal Computing

Imagine a world where you can whisper your digital wishes into your device, and poof, it happens. That world may be coming sooner than you think. But if you're worried about AI doing your thinking for you, you might be waiting for a while. In a fireside chat Wednesday (March 20) at NVIDIA GTC, the global AI conference, Kanjun Qiu, CEO of Imbue, and Bryan Catanzaro, VP of applied deep learning research at NVIDIA, challenged many of the clichés that have long dominated conversations about AI. Launched in October 2022, Imbue made headlines with its Series B fundraiser last year, raising over $200 million at a $1 billion valuation.

The Future of Personal Computing
Qiu and Catanzaro discussed the role that virtual worlds will play in this, and how they could serve as interfaces for human-technology interaction. "I think it's pretty clear that AI is going to help build virtual worlds," said Catanzaro. "I think the maybe more controversial part is virtual worlds are going to be necessary for humans to interact with AI." People have an almost primal fear of being displaced, Catanzaro said, but what's much more likely is that our capabilities will be amplified as the technology fades into the background. Catanzaro compared it to the adoption of electricity. A century ago, people talked a lot about electricity. Now that it's ubiquitous, it's no longer the focus of broader conversations, even as it makes our day-to-day lives better.

AMD 24.3.1 Drivers Unlock RX 7900 GRE Memory OC Limits, Additional Performance Boost Tested

Without making much noise, AMD lifted the memory overclocking limits of the Radeon RX 7900 GRE graphics card with its latest Adrenalin 24.3.1 WHQL drivers, TechPowerUp found. The changelog is a bit vague and states "The maximum memory tuning limit may be incorrectly reported on AMD Radeon RX 7900 GRE graphics products."—we tested it. The RX 7900 GRE has been around since mid-2023, but gained prominence as the company gave it a global launch in February 2024, to help AMD better compete with the NVIDIA GeForce RTX 4070 Super. Before this, the RX 7900 GRE had started out its lifecycle as a special edition product confined to China, and its designers had ensured that it came with just the right performance positioning that didn't end up disrupting other products in the AMD stack. One of these limitations had to do with the memory overclocking potential, which was probably put in place to ensure that the RX 7900 GRE has a near-identical total board power as the RX 7800 XT.

Shortly after the global launch of the RX 7900 GRE, and responding to drama online, AMD declared the limited memory overclocking range a bug and promised a fix. The overclocking limits are defined in the graphics card VBIOS, so increasing those limits would mean shipping BIOS updates for over a dozen SKUs from all the major vendors, and requiring users to upgrade it by themselves. Such a solution isn't very practical, so AMD implemented a clock limit override in their new drivers, which reprograms the power limits on the GPU during boot-up. Nicely done, good job AMD!
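To see why a higher memory-tuning ceiling matters, recall that peak memory bandwidth scales linearly with the effective data rate. A minimal sketch, using the RX 7900 GRE's public 256-bit bus and 18 Gbps stock GDDR6 spec; the overclocked rate below is an illustrative value, not a tested result:

```python
# Why raising the memory-tuning ceiling matters: GPU memory bandwidth
# scales linearly with the effective per-pin data rate.
# Bus width and stock rate are the RX 7900 GRE's public specs; the
# overclocked rate is an illustrative value, not a tested result.

def bandwidth_gbs(bus_width_bits: int, effective_gbps: float) -> float:
    """Peak memory bandwidth in GB/s = (bus width in bytes) * data rate."""
    return bus_width_bits / 8 * effective_gbps

BUS = 256                          # RX 7900 GRE memory bus width, bits
stock = bandwidth_gbs(BUS, 18.0)   # 18 Gbps GDDR6 -> 576 GB/s
oc = bandwidth_gbs(BUS, 20.0)      # hypothetical 20 Gbps OC -> 640 GB/s
print(f"stock: {stock:.0f} GB/s, OC: {oc:.0f} GB/s "
      f"(+{100 * (oc / stock - 1):.1f}%)")
```

Since the GRE's relatively narrow bus makes it bandwidth-sensitive, even a modest raise in the memory clock ceiling can translate into a measurable performance gain.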

Nightdive Studios Discusses Remastering of "Star Wars: Dark Forces"

Last year, it was revealed that the masters of remasters at Nightdive Studios have taken on the task of bringing the beloved 90s classic Star Wars: Dark Forces to modern audiences. The remaster is set to release February 28 on PS5 and PS4, nearly 30 years after the release of the original game from LucasArts in 1995. Similar to Nightdive's previous endeavors with titles like Quake II and Turok 3: Shadow of Oblivion Remastered, Star Wars: Dark Forces Remaster honors the strong foundation of the original while updating it for modern consoles through the studio's proprietary KEX engine, allowing the game to run at up to 4K resolution at 120 FPS on PlayStation 5. With this, fans of the original, as well as a whole new generation of gamers, will be able to experience Star Wars: Dark Forces and appreciate what made it such an essential title within LucasArts' (now Lucasfilm Games) impressive catalog. Further honoring the work that went into its initial development, it's been revealed that Star Wars: Dark Forces Remaster will feature a special Vault jam-packed with never-before-seen content from the making of the 1995 original!

With improved spritework and remastered cutscenes, those looking to dig deeper into a truly unique story within the Star Wars galaxy will be able to enjoy a visually pleasing narrative experience as they join protagonist Kyle Katarn, a defector turned mercenary for hire working for the Rebel Alliance, in foiling the Galactic Empire and its secret Dark Troopers Project. As much as we'd love to continue gushing over why this has been such an exciting project for Nightdive and must-play title for fans and newcomers alike, let's dive deeper into the fascinating history and behind-the-scenes work of breathing new life into Star Wars: Dark Forces with Nightdive's Project Lead and Producer, Max Waine.

Intel Unveils Industry-Leading Glass Substrates to Meet Demand for More Powerful Compute

What's New: Intel today announced one of the industry's first glass substrates for next-generation advanced packaging, planned for the latter part of this decade. This breakthrough achievement will enable the continued scaling of transistors in a package and advance Moore's Law to deliver data-centric applications.

"After a decade of research, Intel has achieved industry-leading glass substrates for advanced packaging. We look forward to delivering these cutting-edge technologies that will benefit our key players and foundry customers for decades to come."
-Babak Sabi, Intel senior vice president and general manager of Assembly and Test Development

Xbox Series S Hitting VRAM Limits, 8 GB is the Magic Number

Microsoft launched two flavors of its Xbox Series console back in November of 2020 - a more expensive and powerful "X" model appealing to hardcore enthusiasts arrived alongside an entry-level, budget-friendly "S" system that featured lesser hardware specifications. The current generation Xbox consoles share the same custom AMD 8-core Zen 2 processor, albeit with different clock configurations, but the key divergence lies in Microsoft's choice of graphical hardware. The Series X packs an AMD "Scarlett" graphics processor with access to 16 GB of VRAM, while the Series S makes do with only 8 GB of high-speed video memory with its "Lockhart" GPU.

Game studios have historically struggled to optimize their projects for the step-down Xbox model - with software engineers complaining about memory allocation issues thanks to a smaller pool of VRAM - the Series S CPU and GPU have to fight over a total of 10 GB of GDDR6 system memory. Microsoft listened to this feedback and made the necessary changes last year - an updated SDK was released and a video briefing explained: "Hundreds of additional megabytes of memory are now available to Xbox Series S developers...This gives developers more control over memory, which can improve graphics performance in memory-constrained conditions."
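A rough sketch of the contention described above: the pool split is Microsoft's published Series S spec, while the freed amount is a placeholder for the unspecified "hundreds of megabytes" (the exact figure was not disclosed):

```python
# Sketch of the Series S shared-memory budget.  The 8 GB / 2 GB split
# is Microsoft's published spec; the OS reservation and the SDK-freed
# amount are illustrative placeholders, not disclosed figures.

TOTAL_GDDR6_GB = 10.0    # single shared pool contested by CPU and GPU
FAST_PARTITION_GB = 8.0  # 224 GB/s partition, where GPU-heavy data lives
SLOW_PARTITION_GB = TOTAL_GDDR6_GB - FAST_PARTITION_GB  # 56 GB/s remainder

os_reserved_gb = 2.0     # assumed system reservation
sdk_freed_gb = 0.3       # placeholder for "hundreds of additional megabytes"

game_budget = TOTAL_GDDR6_GB - os_reserved_gb + sdk_freed_gb
print(f"game-visible budget: ~{game_budget:.1f} GB of {TOTAL_GDDR6_GB:.0f} GB total")
```

Even a few hundred megabytes matters here because textures and render targets compete with game logic for the same physical pool, unlike on a PC with separate system RAM and VRAM.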

AMD's A620 Chipset More Capable Than Early Motherboards Suggest

For whatever reason, all of the AMD A620 chipset-based motherboards announced on Friday fail to show off the capabilities of the chipset and in fact make it look worse than it is. AMD has no doubt limited the A620 platform, with some limitations that seem arbitrary, but the motherboard makers clearly haven't helped: they've made the platform look very unattractive, when in fact it could be entirely acceptable for a budget build. As you can see from AMD's feature matrix below, the company has removed a fair share of features compared to the B650 chipset, but, for example, two 10 Gbps USB 3.2 Gen 2 ports can be implemented. Despite this, only Biostar and Gigabyte have implemented one such port each, with ASUS, ASRock and MSI implementing none.

Yes, the platform is limited to 65 W CPUs—assuming you want your CPU's boost behaviour to work as intended—which is likely to cause some issues, as it might not be clear to potential buyers looking for a cheap motherboard for their system, and it's something AMD and its board partners need to communicate a lot better. However, the A620 platform has enough PCIe lanes for two M.2 drives, with enough left over for all the peripheral connectivity and some PCIe slots, yet most of the boards appear to shun a second M.2 slot for no apparent reason beyond the cost of the physical interface. It looks as if AMD's board partners have decided to cut back so much on features that we've ended up with boards no sensible person should be buying, as they are barely fit for purpose. Time will tell if we'll see some better boards down the road, but judging by the weak line-up that launched on Friday, it would appear that AMD's board partners would rather sell their potential customers a more expensive B650 board.
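The lane-budget point can be made concrete with a minimal sketch, assuming AMD's published Ryzen 7000 lane allocation—which the CPU provides regardless of chipset tier, so even an A620 board can wire up two CPU-attached M.2 slots:

```python
# Rough AM5 lane-budget sketch: the Ryzen 7000 CPU exposes 28 PCIe
# lanes regardless of chipset, per AMD's published allocation.  The
# per-slot assignments below reflect the typical board wiring, so
# skipping a second M.2 slot is a board-vendor choice, not a chipset limit.

CPU_LANES = 28
allocation = {
    "PCIe x16 slot (GPU)": 16,
    "M.2 slot #1 (CPU-attached)": 4,
    "M.2 slot #2 (CPU-attached)": 4,
    "chipset downlink": 4,
}

assert sum(allocation.values()) == CPU_LANES  # budget exactly accounted for
for use, lanes in allocation.items():
    print(f"{use}: x{lanes}")
```

Any additional SATA ports, USB controllers, or small PCIe slots then hang off the chipset's own downstream lanes, which is why a second M.2 slot costs the vendor little more than the physical connector.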

Halo Infinite's Latest PC Patch Shifts Minimum GPU Spec Requirements, Below 4 GB of VRAM Insufficient

The latest patch for Halo Infinite has introduced an undesired side effect for a select portion of its PC platform playerbase. Changes to minimum system specification requirements were not clarified by 343 Industries in their patch notes, but it appears that the game now refuses to launch for owners of older GPU hardware. A limit of 4 GB of VRAM has been listed as the bare minimum since Halo Infinite's launch in late 2021, with the AMD Radeon RX 570 and NVIDIA GeForce GTX 1050 Ti cards representing the entry-level GPU tier; basic versions of both were fitted with 4 GB of VRAM as standard.

Apparently users running the GTX 1060 3 GB model were able to launch and play the game just fine prior to the latest patch, due to it being more powerful than the entry level cards, but now it seems that the advertised hard VRAM limit has finally gone into full effect. The weaker RX 570 and GTX 1050 Ti cards are still capable of running Halo Infinite after the introduction of season 3 content, but a technically superior piece of hardware cannot, which is unfortunate for owners of the GTX 1060 3 GB model who want to play Halo Infinite in its current state.
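The behavior described above is consistent with a strict capacity gate rather than a performance check. A hypothetical sketch of such a gate—this is not 343 Industries' actual code, just an illustration of why a faster card with less VRAM gets rejected:

```python
# Hypothetical sketch of a hard VRAM launch gate consistent with the
# patch's behavior; this is NOT 343 Industries' actual code.

MIN_VRAM_MB = 4096  # the advertised 4 GB minimum since launch

def can_launch(detected_vram_mb: int) -> bool:
    """Refuse to start if the GPU reports less than the minimum VRAM."""
    return detected_vram_mb >= MIN_VRAM_MB

# The GTX 1060 3 GB out-performs the RX 570 4 GB, but a pure
# capacity check rejects it anyway:
print(can_launch(3072))  # GTX 1060 3 GB -> False
print(can_launch(4096))  # RX 570 4 GB   -> True
```

Because the check keys on reported memory capacity alone, raw GPU throughput never enters the decision—exactly the situation GTX 1060 3 GB owners now find themselves in.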