News Posts matching #specification


NVIDIA GeForce RTX 50 Series "Blackwell" Features Similar L1/L2 Cache Architecture to RTX 40 Series

NVIDIA's upcoming RTX 5090 and 5080 graphics cards maintain L1 cache architectures similar to their predecessors' while introducing marginal improvements to L2 cache capacity, according to recent specifications reported by HardwareLuxx. The flagship RTX 5090 keeps the same 128 KB of L1 cache per SM as the RTX 4090 but reaches a higher total of 21.7 MB thanks to its increased SM count of 170. This is a notable improvement over the RTX 4090, whose 128 SMs yield 16.3 MB of total L1 cache. In terms of L2 cache, the RTX 5090 sees a 33.3% increase over its predecessor, boasting 96 MB compared to the RTX 4090's 72 MB; since the SM count grows by a similar 32.8%, L2 capacity per SM is essentially unchanged.

However, this improvement is modest compared to the previous generation's leap, when the RTX 4090 arrived with twelve times more L2 cache than the RTX 3090. The RTX 5080 shows more conservative gains: its total L1 cache exceeds its predecessor's by just 1 MB (10.7 MB vs. 9.7 MB), and its L2 cache holds steady at 64 MB, matching the RTX 4080 and 4080 Super. To compensate for these incremental cache improvements, NVIDIA is implementing faster GDDR7 memory across the RTX 50 series. Most models will feature 28 Gbps modules, with the RTX 5080 receiving special treatment in the form of 30 Gbps memory. Additionally, some models are getting wider memory buses, with the RTX 5090 featuring a 512-bit bus and the RTX 5070 Ti upgrading to a 256-bit interface.
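As a quick sanity check on these figures, total L1 capacity is just the per-SM cache multiplied by the SM count, and peak memory bandwidth is the bus width times the per-pin data rate divided by eight. A minimal sketch in Python, using the leaked numbers quoted above (reported figures, not confirmed specifications):

```python
# Back-of-the-envelope check of the reported cache and bandwidth figures.
# All inputs are leaked/reported numbers, not confirmed NVIDIA specs.

def total_l1_mb(sm_count: int, l1_per_sm_kb: int = 128) -> float:
    """Total L1 cache in decimal MB: SM count x per-SM L1 size."""
    return sm_count * l1_per_sm_kb / 1000

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin Gbps / 8."""
    return bus_width_bits * data_rate_gbps / 8

print(total_l1_mb(170))          # RTX 5090: 21.76 -> the reported 21.7 MB
print(total_l1_mb(128))          # RTX 4090: 16.38 -> the reported 16.3 MB
print(bandwidth_gbs(512, 28.0))  # RTX 5090: 1792.0 GB/s
print(bandwidth_gbs(256, 30.0))  # RTX 5080: 960.0 GB/s
```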

UK Retailer Inadvertently Posts Radeon RX 9070 XT & 9070 GPU Specs

The majority of AMD Radeon RX 9070 XT GPU-related leaks have emerged thanks to insiders playing around with pre-launch PowerColor RDNA 4 sample models. During and since CES, Team Red and its board partners have kept mum about specifications and performance figures—but happy accidents have allowed tech enthusiasts to pore over NDA-busting information. As reported by VideoCardz yesterday, Overclockers UK (OCUK) published a landing page that provided a brief look at basic Radeon RX 9070 XT and RX 9070 (non-XT) specs—the British retailer has since scrubbed this entry from its site.

Leaks have revealed alleged core counts—4096 for XT, and 3584 for non-XT—but Overclockers UK's charts listed a count of 4096 for both Navi 48 GPUs. They both sport 16 GB GDDR6 VRAM and 256-bit memory buses, and the leak reveals another shared trait: a 260 W TDP rating. VideoCardz reckons that this is an error—based on previous clock speed insider info, the Radeon RX 9070 non-XT's power consumption figure should be rated lower. The accidentally published clock speeds appear to be sourced from overclocked examples—AMD is reportedly not going to release full/finalized information until closer to launch, so OCUK could have relied on preliminary product guides. The FAQ section states that Team Red's RDNA 4 generation is sticking with a PCI-Express 4.0 x16 host interface—PCIe 5.0 systems are "thankfully" backwards compatible. NVIDIA's GeForce RTX 50 series will be leading the way into PCIe 5.0 spec territories.

ZOTAC Publishes GeForce RTX 5090 Tech Specs on Product Pages

ZOTAC's website has been updated with technical specifications for its GeForce RTX 5090-based custom models—the Hong Kong-based hardware company is perhaps the first NVIDIA board partner to publicly disclose these details. According to VideoCardz, the discovery was made by a loyal reader—product pages for the SOLID, SOLID OC and AMP Extreme INFINITY appear to have gone online within the past couple of days. The resulting report suggests that Team Green has only recently communicated (potentially) finalized specs to its AIBs.

The aforementioned discoverer of ZOTAC's GeForce RTX 5090 spec sheets noticed an unusual memory clock figure on the SOLID OC model's listing—30 Gbps instead of 28 Gbps. This was an error—ZOTAC has since amended that particular data point—and the presumably more expensive AMP Extreme INFINITY card's memory clock is likewise set at 28 Gbps. Interestingly, ZOTAC's upcoming flagship is the first example of an NVIDIA GPU configured with a power consumption rating of 600 W. It is not immediately apparent whether this TDP figure is an out-of-the-box default—VideoCardz reckons that the GeForce RTX 5090 AMP Extreme INFINITY will arrive with dual-BIOS functionality, with a high-performance mode that can be user-selected. Will rival flagship GeForce RTX 5090 custom cards roll out with similar TDPs? TechPowerUp anticipates the emergence of pre-launch technical details—from other brands/manufacturers—over the next two weeks.

HDMI Forum Announces Version 2.2 of the HDMI Specification with 96 Gbps of Bandwidth

HDMI Forum, Inc. today announced the upcoming release of Version 2.2 of the HDMI Specification. The new HDMI Specification delivers enhanced options for the vast HDMI ecosystem, with more advanced solutions to create, distribute and experience the best end-user outcome.

New technology enables higher quality options now and in the future for content producers such as TV, movie and game studios, while enabling multiple distribution platforms. Higher 96 Gbps bandwidth and next-gen HDMI Fixed Rate Link technology provide optimal audio and video for a wide range of device applications. An end-user can be assured that their displays support a native video format in the best way possible and can deliver a seamless and reliable experience.

VESA to Update DisplayPort 2.1 With New Active Cable Specification for Up to 3X Longer DP80 Cables

The Video Electronics Standards Association (VESA) today announced that it is working with members to introduce new DP80LL ("low loss") ultra-high-bit-rate (UHBR) cables that enable up to four-lane UHBR20 link rate support - a maximum throughput of 80 Gbps - over an active cable up to three meters in length. The spec for these new cables will be a key highlight of DisplayPort version 2.1b, which will be released in the spring of 2025. As a result, the DisplayPort 2.1b update will provide up to 3X the cable length for UHBR20 GPU-to-display connections compared to existing VESA certified DP80 passive cables. VESA certified DP80LL cables are expected to roll out into the market within the next several months.

UHBR20 and DP80LL Product Demos at CES 2025
VESA will showcase samples of pre-certified DP80LL cables alongside product demonstrations of DisplayPort 2.1 UHBR20 - the highest-performance UHBR tier in the DisplayPort 2.1 spec - as well as other VESA standards at the Consumer Electronics Show (CES), taking place this week in Las Vegas, January 7-10.

VESA Adds New Performance Levels to ClearMR and DisplayHDR True Black Standards

The Video Electronics Standards Association (VESA) today announced that it has published updates to several of its front-of-screen performance standards and logo programs, which are designed to help drive the industry to produce better-quality display products for consumers and content creators. These updates include new performance tiers for VESA's ClearMR standard for measuring motion blur, designed for the next generation of ultra-high-refresh-rate displays (480 Hz and above) aimed at competitive gamers. They also include a new 1000 luminance performance tier for VESA's DisplayHDR True Black standard to validate new OLED displays featuring the high luminance and greater color accuracy necessary to support professional video content creation.

VESA and its member companies will showcase product demonstrations of its latest video standards, including the new ClearMR and DisplayHDR True Black performance tiers, at the Consumer Electronics Show (CES), taking place next week in Las Vegas, January 7-10.

NVIDIA GeForce RTX 5070 and RTX 5070 Ti Final Specifications Seemingly Confirmed

Thanks to kopite7kimi, we can piece together the final leaked specifications of NVIDIA's upcoming GeForce RTX 5070 and RTX 5070 Ti graphics cards. Starting with the RTX 5070 Ti: it will feature 8,960 CUDA cores and come equipped with 16 GB of GDDR7 memory on a 256-bit memory bus, offering 896 GB/s of bandwidth. The card is reportedly designed with a total board power (TBP) of 300 W. The Ti variant appears to use the PG147-SKU60 board design with a GB203-300-A1 GPU. The standard RTX 5070 is positioned as a more power-efficient option, with specifications pointing to 6,144 CUDA cores and 12 GB of GDDR7 memory on a 192-bit bus, with 627 GB/s of memory bandwidth. This model is expected to operate at a slightly lower 250 W TBP.
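The leaked bandwidth figures can be cross-checked against the bus widths: 896 GB/s on a 256-bit bus works out to exactly 28 Gbps per pin, while the RTX 5070's 627 GB/s on a 192-bit bus implies a slower rate of roughly 26 Gbps. A small sketch of that arithmetic (leaked numbers, not official specs):

```python
# Cross-check the leaked bandwidth figures against bus width.
# bandwidth (GB/s) = bus_width (bits) * per-pin rate (Gbps) / 8

def implied_rate_gbps(bandwidth_gbs: float, bus_width_bits: int) -> float:
    """Per-pin data rate implied by a bandwidth figure and bus width."""
    return bandwidth_gbs * 8 / bus_width_bits

print(implied_rate_gbps(896, 256))  # RTX 5070 Ti: 28.0 Gbps
print(implied_rate_gbps(627, 192))  # RTX 5070: ~26.1 Gbps
```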

Interestingly, the non-Ti RTX 5070 will be available in two board variants, PG146 and PG147, both utilizing the GB205-300-A1 GPU. While we don't know what the pricing structure looks like, NVIDIA has clearly chosen to differentiate its SKUs more substantially this generation. The Ti variant not only gets an extra 4 GB of GDDR7 memory, it also gets a whopping 45% increase in CUDA core count, going from 6,144 to 8,960 cores. While we wait for CES to bring the initial wave of GeForce RTX 50 series cards, the GeForce RTX 5070 and RTX 5070 Ti are expected to arrive later, possibly after the RTX 5080 and RTX 5090.

Intel Abandons "x86S" Plans to Focus on the Regular x86-64 ISA and the x86 Ecosystem Advisory Group

Intel has announced it will not proceed with X86S, an experimental instruction set architecture that aimed to simplify its processor design by removing legacy support for older 32-bit and 16-bit operating modes. The decision comes after gathering feedback from the technology ecosystem on a draft specification that was released for evaluation. x86, and the 64-bit x86-64 we use today, is a giant cluster of specifications containing so many instructions that hardly anyone can say with precision how many there are. All of this stems from the era of the original 8086 processor, with its own 16-bit instructions. Later we transitioned to 32-bit and then 64-bit systems, each of which brought its own specific instructions. Adding support for processing vector, matrix, and other data types has grown the ISA specification so much that no one outside a select few engineers at Intel (and AMD) understands it in full. From that, the X86S idea was born: solve the problem of supporting legacy systems and legacy code by moving on to the X86S ISA, where "S" stands for simplified.

The X86S proposal included several notable modifications, such as eliminating support for rings 1 and 2 in the processor's protection model, removing 16-bit addressing capabilities, and discontinuing legacy interrupt controller support. These changes would have potentially reduced hardware complexity and modernized the platform's architecture. A key feature of the proposed design was a simplified boot process that would have allowed processors to start directly in 64-bit mode, eliminating the current requirement for systems to boot through various legacy modes before reaching 64-bit operation. The architecture also promised improvements in handling modern features like 5-level paging. "Intel will continue to maintain its longstanding commitment to software compatibility," the company stated in the official document on its website, acknowledging that the x86S dream is over.

UEFI Forum Releases the UEFI 2.11 and the PI 1.9 Specifications

The UEFI Forum today announced the release of the Unified Extensible Firmware Interface (UEFI) 2.11 specification and the Platform Initialization (PI) 1.9 specification. The goal of these specification updates is to streamline user implementation by providing increased compatibility across hardware architectures, including security updates, new algorithm support, and improved alignment implementation guidance.

"We have created a vibrant firmware community, and these specification updates provide maintenance and enhancement of fundamental capabilities in order to help increase the momentum of the UEFI specifications and add value to the ecosystem," said Mark Doran, UEFI Forum President. "The latest specifications continue the UEFI Forum's commitment to developing standards for all significant CPU architectures as underscored by additions such as the new LoongArch support in the PI 1.9 specification."

Nintendo Switch Leak Tips LCD, Hall Effect Joysticks

With the upcoming Nintendo Switch 2 all but a given, much has been said about the new handheld gaming console in leaks and rumors. Now, a new set of supposed leaks from Decky Wizard on X has caused a bit of consternation among the Switch community, given the seemingly random mix of upgrades and perceived downgrades coming to the next-gen Switch. Minor details include larger buttons, a redesigned dock, and three colorways at launch; details around the new display and controls, however, are more significant. The leaks suggest that the Switch 2 will have larger buttons and Hall-effect joysticks, both of which would likely be a massive upgrade in the eyes of most gamers; however, it also seems the base model Switch 2 will use an LCD, as opposed to an OLED display.

It seems as though most Switch fans were expecting an OLED panel right out of the gate, given the Nintendo Switch OLED has been available for quite some time now. Hall-effect joysticks will also likely solve one of the community's biggest complaints about the Switch controls—stick drift—though Nintendo would also have to provide a calibration tool in the Switch software to correct for wear and tear on the joysticks. Larger buttons may also be a welcome change for most Switch gamers, since cramped controls are a fairly common criticism of the original Switch. In addition to the news of the physical and technical specifications, Decky Wizard also claims that the Switch 2 will be lighter than the Steam Deck and launch as soon as January 2025. In a subsequent post, Decky Wizard uploaded leaked images of the new Switch 2, showing off not only a fresh-looking chassis with smaller screen bezels and a new built-in kickstand design, but also the design of the new first-party dock and two new, larger Joy-Con release buttons on the back of each Joy-Con.

Next-Gen HDMI Specifications to Be Announced in January Before CES 2025

The HDMI Forum has confirmed the development of the next-generation HDMI standard with increased bandwidth. According to various media reports, including VideoCardz and Dday, the press release from the HDMI Forum points to new cables or a refinement of existing specifications—possibly a new HDMI 2.2 spec. The current HDMI 2.1 specification, established in 2017, provides bandwidth up to 48 Gbps and supports native, non-DSC configurations of 4K at 144 Hz and 8K at 30 Hz. When combined with Display Stream Compression (DSC) technology, the current standard can handle up to 10K at 120 Hz. A bandwidth increase could enable higher resolutions and refresh rates without DSC compression.
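For context, the raw pixel data rate of a video mode is simply width x height x refresh rate x bits per pixel. Real links also carry blanking intervals and encoding overhead, which is why 4K 144 Hz sits near the edge of HDMI 2.1's 48 Gbps link. A rough, illustrative estimate:

```python
# Rough uncompressed video data-rate estimate (active pixels only;
# real links add blanking intervals and link-layer encoding overhead).

def video_gbps(h: int, v: int, hz: int, bits_per_px: int = 24) -> float:
    """Raw pixel data rate in Gbps (24 bpp = 8-bit RGB)."""
    return h * v * hz * bits_per_px / 1e9

print(video_gbps(3840, 2160, 144))     # 4K144, 8-bit RGB: ~28.7 Gbps
print(video_gbps(7680, 4320, 30))      # 8K30, 8-bit RGB: ~23.9 Gbps
print(video_gbps(7680, 4320, 60, 30))  # 8K60, 10-bit: ~59.7 Gbps, needs DSC at 48 Gbps
```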

This development of new HDMI specifications comes in response to the emergence of other display interface standards such as DisplayPort 2.1, which offers up to 80 Gbps over UHBR20. AMD's Radeon RX 7000 series and Intel's recently launched Arc Battlemage GPUs support UHBR 13.5, while the Radeon PRO series supports UHBR20. The HDMI Forum is scheduled to release the new specifications on January 6th, one day before the official CES 2025 opening on January 7th. With NVIDIA's GeForce RTX 50 and AMD's Radeon RX 8000 series launching at CES 2025, it will be interesting to see whether the latest graphics cards support the HDMI 2.2 specs.

Firefox Ditches 'Do Not Track' Feature in Version 135 in Favor of 'Global Privacy Control'

Mozilla says that "many sites do not respect" Do Not Track requests, as they rely on voluntary compliance, adding that the feature may actually harm user privacy—likely alluding to the fact that it makes it easier for sites to fingerprint and track you. As such, as of Firefox version 135, Mozilla will disable the Do Not Track feature. As a replacement for the feature, Mozilla recommends using the more advanced "Tell websites not to sell or share my data" toggle built into Global Privacy Control, which it says is more widely respected and backed by law in some regions.

This is also just the latest in a long line of changes to both Firefox and web privacy at large. For one, Google has recently moved to eliminate third-party cookies in its Chrome browser—a move it claims is in support of user privacy, but one that has been widely criticized for putting Google in something of a monopoly position when it comes to tracking the data of Chrome users. Overall, community feedback on Reddit seems to be either positive or indifferent, although one criticism of the new reliance on Global Privacy Control is that GPC doesn't block Google Analytics tracking requests. The reasoning behind leaving Google Analytics intact is that many sites don't function correctly when it is blocked or disabled.
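On the technical side, Global Privacy Control is a simple signal: participating browsers send a `Sec-GPC: 1` request header and expose `navigator.globalPrivacyControl` to scripts. Below is a minimal illustrative sketch of honoring the header server-side; Flask is used purely as an example framework, not anything mandated by the spec:

```python
# Minimal sketch of honoring the GPC opt-out signal server-side.
# Flask is an illustrative choice; the "Sec-GPC" header comes from the GPC spec.
from flask import Flask, request

app = Flask(__name__)

def gpc_opt_out() -> bool:
    # Browsers with GPC enabled send "Sec-GPC: 1" on every request.
    return request.headers.get("Sec-GPC") == "1"

@app.route("/")
def index():
    if gpc_opt_out():
        # Treat as a do-not-sell/share request: serve without trackers.
        return "Welcome. Tracking disabled per your GPC preference."
    return "Welcome. Default analytics enabled."
```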

CXL Consortium Announces Compute Express Link 3.2 Specification Release

The CXL Consortium, an industry standard body advancing coherent connectivity, announces the release of its Compute Express Link (CXL) 3.2 Specification. The 3.2 Specification optimizes CXL Memory Device monitoring and management, enhances functionality of CXL Memory Devices for OS and Applications, and extends security with the Trusted Security Protocol (TSP).

"We are excited to announce the release of the CXL 3.2 Specification to advance the CXL ecosystem by providing enhancements to security, compliance, and functionality of CXL Memory Devices," said Larrie Carr, CXL Consortium President. "The Consortium continues to develop an open, coherent interconnect and enable an interoperable ecosystem for heterogeneous memory and computing solutions."

JEDEC Announces Enhanced NAND Flash Interface Standard With Increased Speeds and Efficiency

JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, today announced the publication of JESD230G: NAND Flash Interface Interoperability Standard. JESD230G introduces speeds of up to 4800 MT/s, as compared to 400 MT/s in the first version of JESD230 published in 2011. Also, JESD230G adds a separate Command/Address Bus Protocol (SCA), delivering enhanced throughput and efficiency by allowing hosts and NAND devices to take maximum advantage of the latest interface speeds. JESD230G is available for free download from the JEDEC website.
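Since NAND interfaces move one byte per transfer over their conventional 8-bit data bus, the MT/s figure maps directly to MB/s of peak per-channel throughput. A quick illustration of the generational jump (assuming that standard bus width):

```python
# Peak per-channel interface throughput: NAND uses an 8-bit data bus,
# so each transfer moves one byte and MT/s equals MB/s.

def channel_mbs(mt_per_s: int, bytes_per_transfer: int = 1) -> int:
    return mt_per_s * bytes_per_transfer

print(channel_mbs(400))   # original JESD230 (2011): 400 MB/s per channel
print(channel_mbs(4800))  # JESD230G: 4800 MB/s per channel, a 12x increase
```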

"JEDEC is excited to release JESD230G," said David Landsman, Distinguished Engineer at Western Digital and Chair of the JEDEC NAND TG. He added, "This version of JESD230 further advances the capabilities of NAND flash devices to meet the growing demands of their expanding range of applications and continues the JEDEC tradition of building interoperable ecosystems through open industry standards."

Interview with RISC-V International: High-Performance Chips, AI, Ecosystem Fragmentation, and The Future

RISC-V is an industry-standard instruction set architecture (ISA) born at UC Berkeley. RISC-V is the fifth iteration in the lineage of historic RISC processors. The core value of the RISC-V ISA is the freedom of usage it offers: any organization can leverage the ISA to design the best possible core for its specific needs, with no regional restrictions or licensing costs. This attracts a massive ecosystem of developers and companies building systems on the RISC-V ISA. To support these efforts and grow the ecosystem, the brains behind RISC-V formed RISC-V International—a non-profit foundation that governs the ISA and guides the ecosystem.

We had the privilege of talking with Andrea Gallo, Vice President of Technology at RISC-V International. Andrea oversees the technological advancement of RISC-V, collaborating with vendors and institutions to overcome challenges and expand its global presence. Andrea's career in technology spans several influential roles at major companies. Before joining RISC-V International, he worked at Linaro, where he pioneered Arm data center engineering initiatives, later overseeing diverse technological sectors as Vice President of Segment Groups, and ultimately managing crucial business development activities as executive Vice President. During his earlier tenure as a Fellow at ST-Ericsson, he focused on smartphone and application processor technology, and at STMicroelectronics he optimized hardware-software architectures and established international development teams.

NVIDIA "Blackwell" GB200 Server Dedicates Two-Thirds of Space to Cooling at Microsoft Azure

Late Tuesday, Microsoft Azure shared an interesting picture on social media platform X, showcasing the pinnacle of GPU-accelerated servers—NVIDIA "Blackwell" GB200-powered AI systems. Microsoft is one of NVIDIA's largest customers, and the company often receives products first to integrate into its cloud and company infrastructure. NVIDIA even listens to feedback from companies like Microsoft when designing future products, especially ones like the now-canceled NVL36x2 system. The picture shows a massive cluster in which the compute hardware occupies roughly one-third of the system, with the remaining two-thirds dedicated to closed-loop liquid cooling.

The entire system is connected using InfiniBand networking, a standard for GPU-accelerated systems thanks to its low-latency packet transfer. While details of the system are scarce, we can see that the integrated closed-loop liquid cooling allows the GPU racks to use a 1U form factor for increased density. Given that these systems will go into the wider Microsoft Azure data centers, a system needs to be easily maintained and cooled. There are limits to the power and heat output that Microsoft's data centers can handle, so these systems are typically designed to fit internal specifications that Microsoft sets. There are more compute-dense systems, of course, like NVIDIA's NVL72, but hyperscalers usually opt for custom solutions that fit their data center specifications. Finally, Microsoft noted that we can expect more details at the upcoming Microsoft Ignite conference in November, where it will share more about its GB200-powered AI systems.

24-Core Intel Core Ultra 9 285 Falls Short of 8-Core Ryzen 7 9700X in Geekbench Leak

The leaks and rumors surrounding Intel's upcoming Arrow Lake desktop CPU line-up are starting to heat up, with recent rumors tipping the existence of the Core Ultra 9 285K as the top-end chip in the upcoming launch. A new set of Geekbench 6 scores spotted by BenchLeaks on X, however, suggests the Core Ultra 9 285 non-K variant of this CPU might lag its Ryzen 9 counterparts significantly.

The Geekbench 6 test results, which were apparently achieved on an ASUS Prime Z890-P motherboard, reveal performance that falls short of even the current-generation AMD Ryzen 7 9700X, never mind any of the Ryzen 9 variants. The Geekbench 6 multicore score came in at an unimpressive 14,150, while the single-core score was a mere 3,081, falling short of the likes of the AMD Ryzen 7 9700X, which scored up to 19,381 and 3,624 in multi- and single-core tests, respectively. However, there appears to be more to this story—namely an odd test configuration that could heavily skew the test results, since the "stock" Intel Core Ultra 9 285K scores significantly higher in the Geekbench 6 charts than this particular 285 seems to.
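For a sense of scale, here is the arithmetic behind the gap in those scores (a single leaked run, so treat the percentages as rough):

```python
# Percentage deficit of the leaked Core Ultra 9 285 result vs. the
# Ryzen 7 9700X scores quoted above (one leaked run, so noisy data).

def deficit_pct(score: int, reference: int) -> float:
    return (1 - score / reference) * 100

print(f"{deficit_pct(14150, 19381):.1f}% behind in multi-core")   # ~27.0%
print(f"{deficit_pct(3081, 3624):.1f}% behind in single-core")    # ~15.0%
```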

Intel Updates 64-Bit Only "X86S" Instruction Set Architecture Specification to Version 1.2

Intel has released version 1.2 of its X86S architecture specification. The X86S project, first announced last year, aims to modernize the x86 architecture that has been the heart of PCs since the late 1970s. Over the decades, Intel and AMD have continually expanded x86's capabilities, resulting in a complex instruction set that Intel now sees as partially outdated. The latest specification primarily focuses on removing legacy features, particularly 16-bit and 32-bit support. This radical departure from x86's long-standing commitment to backward compatibility is in line with the simplification goal. While the specification does mention a "32-bit compatibility mode," it has yet to be explained how 32-bit apps would run. This ambiguity raises questions about how X86S might handle existing 32-bit applications, which, despite declining relevance, still play a role in many computing environments.

The potential transition to X86S comes at a time when the industry is already moving away from 32-bit support. However, the proposed changes are subject to controversy. The x86 architecture's strength has long been its extensive legacy support, allowing older software to run on modern hardware. A move to X86S could disrupt this ecosystem, particularly for users relying on older applications. Furthermore, introducing X86S raises questions about the future relationship between Intel and AMD, the two primary x86 CPU designers. While Intel leads the initiative, AMD's role in the potential transition remains uncertain, given its significant contributions to the current x86-64 standard.

JEDEC Adds Two New Standards Supporting Compute Express Link (CXL) Technology

JEDEC Solid State Technology Association, the global leader in standards development for the microelectronics industry, today announced the publication of two new standards supporting Compute Express Link (CXL) technology. These additions complete a comprehensive family of four standards that provide the industry with unparalleled flexibility to develop a wide range of CXL memory products. All four standards are available for free download from the JEDEC website.

JESD319: JEDEC Memory Controller Standard - for Compute Express Link (CXL) defines the overall specifications, interface parameters, signaling protocols, and features for a CXL Memory Controller ASIC. Key aspects include pinout reference information and a functional description that covers the CXL interface, memory controller, memory RAS, metadata, clocking, reset, performance, and controller configuration requirements. JESD319 focuses on the CXL 3.1-based direct-attached memory expansion application, providing a baseline of standardized functionality while allowing for additional innovations and customizations.

Bluetooth SIG Introduces True Distance Awareness

The Bluetooth Special Interest Group (SIG), the organization that oversees Bluetooth technology, announced the release of Bluetooth Channel Sounding, a new secure, fine-ranging feature that promises to enhance the convenience, safety, and security of Bluetooth connected devices. By enabling true distance awareness in billions of everyday devices, Bluetooth Channel Sounding opens countless possibilities for developers and users alike.

"Bluetooth technology has become an ingredient of everyday life," said Neville Meijers, CEO, Bluetooth Special Interest Group. "When connected devices are distance-aware, a range of new possibilities emerge. Adding true distance awareness to Bluetooth technology exemplifies the ongoing commitment of the Bluetooth SIG community to continuously enhance our connection with our devices, one another, and the world around us."

JEDEC Releases New Standard for LPDDR5/5X Serial Presence Detect (SPD) Contents

JEDEC Solid State Technology Association, the global leader in standards development for the microelectronics industry, today announced the publication of the JESD406-5 LPDDR5/5X Serial Presence Detect (SPD) Contents V1.0, consistent with the updated contents of JESD401-5B DDR5 DIMM Label and JESD318 DDR5/LPDDR5 Compression Attached Memory Module (CAMM2) Common Standard.

JESD406-5 documents the contents of the SPD non-volatile configuration device included on all JEDEC standard memory modules using LPDDR5/5X SDRAMs, including the CAMM2 standard designs outlined in JESD318. The JESD401-5B standard defines the content of standard memory module labels using the other two standards, assisting end users in selecting compatible modules for their applications.

"Black Myth: Wukong" Game Gets Benchmarking Tool Companion Designed to Evaluate PC Performance

Game Science, the developer behind the highly anticipated action RPG "Black Myth: Wukong," has released a free benchmark tool on Steam for its upcoming game. This standalone application, separate from the main game, allows PC users to evaluate their hardware performance and system compatibility in preparation for the game's launch. The "Black Myth: Wukong Benchmark Tool" offers a unique glimpse into the game's visuals by rendering a real-time in-game sequence. While not playable, it provides valuable insights into how well a user's system will handle the game's demanding graphics and performance requirements. One of the tool's standout features is its customization options. Users can tweak various graphics settings to preview the game's visuals and performance under different configurations. This flexibility allows gamers to find the optimal balance between visual fidelity and smooth gameplay for their specific hardware setup.

However, Game Science has cautioned that, due to the complexity and variability of gaming scenarios, the benchmark results may not fully represent the final gaming experience. This caveat underscores the tool's role as a guide rather than a definitive measure of performance. The benchmark tool's system requirements offer a clear picture of the hardware needed to run "Black Myth: Wukong." At a minimum, users will need a Windows 10 system with an Intel Core i5-8400 or AMD Ryzen 5 1600 processor, 16 GB of RAM, and either an NVIDIA GeForce GTX 1060 6 GB or AMD Radeon RX 580 8 GB graphics card. For an optimal experience, the recommended specifications include an Intel Core i7-9700 or AMD Ryzen 5 5500 processor and an NVIDIA GeForce RTX 2060, AMD Radeon RX 5700 XT, or Intel Arc A750 graphics card. Interestingly, the benchmark tool supports DLSS, FSR, and XeSS technologies, indicating that the final game will likely include these performance-enhancing features. The developers also strongly recommend using an SSD for storage.

NVM Express Releases NVMe 2.1 Specifications

NVM Express, Inc. today announced the release of three new specifications and eight updated specifications. This update to NVMe technology builds on the strengths of previous NVMe specifications, introducing significant new features for modern computing environments while also streamlining development and time to market.

"Beginning as a single PCIe SSD specification, NVMe technology has grown into nearly a dozen specifications, including multiple command sets, that provide pivotal support for NVMe technology across all major transports and standardize many aspects of storage," said Peter Onufryk, NVM Express Technical Workgroup Chair. "NVMe technology adoption continues to grow and has succeeded in unifying client, cloud, AI and enterprise storage around a common architecture. The future of NVMe technology is bright and we have 75 new authorized technical proposals underway."

JEDEC Publishes Compute Express Link (CXL) Support Standards

JEDEC Solid State Technology Association, the global leader in standards development for the microelectronics industry, today announced the publication of JESD405-1B JEDEC Memory Module Label - for Compute Express Link (CXL) V1.1. JESD405-1B joins JESD317A JEDEC Memory Module Reference Base Standard - for Compute Express Link (CXL) V1.0, first introduced in March 2023, in defining the function and configuration of memory modules that support CXL specifications, as well as the standardized content of labels for these modules. JESD405-1B and JESD317A were developed in coordination with the Compute Express Link standards organization. Both standards are available for free download from the JEDEC website.

JESD317A provides detailed guidelines for CXL memory modules including mechanical, electrical, pinout, power and thermal, and environmental guidelines for emerging CXL Memory Modules (CMMs). These modules conform to SNIA (Storage Networking Industry Association) EDSFF form factors E1.S and E3.S to provide end-user friendly hot pluggable assemblies for data centers and similar server applications.

Possible Specs of NVIDIA GeForce "Blackwell" GPU Lineup Leaked

Possible specifications of NVIDIA's various GeForce "Blackwell" gaming GPUs were leaked to the web by kopite7kimi, a reliable source for NVIDIA leaks. These are the specs of the maxed-out silicon; NVIDIA will carve out several GeForce RTX 50-series SKUs based on these chips, which could end up with lower shader counts than those shown here. We've known from older reports that there will be five chips in all: the GB202 being the largest, followed by the GB203, GB205, GB206, and GB207. There is a notable absence of a successor to the AD104, GA104, and TU104, because NVIDIA is taking a slightly different approach to the performance segment this generation.

The GB202 is the halo-segment chip that will drive the possible RTX 5090 (the RTX 4090's successor). The chip is endowed with 192 streaming multiprocessors (SMs), or 96 texture processing clusters (TPCs). These 96 TPCs are spread across 12 graphics processing clusters (GPCs), each of which has 8 of them. Assuming "Blackwell" has the same 256 CUDA cores per TPC as the past several generations of NVIDIA gaming GPUs, we end up with a total CUDA core count of 24,576. Another interesting aspect of this mega-chip is memory: the GPU implements next-generation GDDR7 memory and uses a mammoth 512-bit memory bus. Assuming the 28 Gbps memory speed rumored for NVIDIA's "Blackwell" generation, this chip has 1,792 GB/s of memory bandwidth on tap!
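Those headline numbers follow directly from the leaked topology. A short sketch reconstructing them (all counts are rumored values, and 256 cores per TPC is the assumption stated above):

```python
# Reconstructing the leaked GB202 figures from its rumored topology.
# All counts are leaked/rumored values, not confirmed NVIDIA specs.

gpcs          = 12
tpcs_per_gpc  = 8
cores_per_tpc = 256   # assumed, matching recent NVIDIA generations

tpcs  = gpcs * tpcs_per_gpc    # 96 TPCs
sms   = tpcs * 2               # 192 SMs (2 SMs per TPC)
cores = tpcs * cores_per_tpc   # 24,576 CUDA cores

bus_bits  = 512
rate_gbps = 28                         # rumored GDDR7 speed
bandwidth = bus_bits * rate_gbps / 8   # 1,792 GB/s

print(tpcs, sms, cores, bandwidth)     # 96 192 24576 1792.0
```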