News Posts matching #DirectX 12 Ultimate


Intel to Tease Arc "Battlemage" Discrete GPU in December?

Intel is expected to debut its next-generation Arc "Battlemage" discrete GPU in December 2024, or at any rate ahead of CES 2025, HotHardware reports, citing Golden Pig Upgrade, a reliable source of GPU leaks. The source says they expect "wonderful performance" from the GPU. Intel has a lot invested in its PC graphics division, spanning not just its two-year-old Arc "Alchemist" discrete GPUs, but also the integrated graphics solutions it has been launching with its Core Ultra processor generations. It debuted the DirectX 12 Ultimate-capable Xe-LPG graphics architecture with Core Ultra "Meteor Lake" under the Arc Graphics branding, and carried it forward to the Core Ultra Series 200 "Arrow Lake" on the desktop platform. Meanwhile, "Battlemage" made its debut as the iGPU of the Core Ultra 200V series "Lunar Lake" mobile processor, where it posted gaming performance beating that of the Ryzen 8000 "Hawk Point" processor, but falling short of the Ryzen AI 300 series "Strix Point."

Intel is expected to tap into a fairly new foundry node for the Arc "Battlemage" discrete GPU series. Its chips could strike a performance/Watt and performance/price inflection point in the performance segment, which drives the most volume for NVIDIA and AMD. It is this exact segment that AMD is withdrawing from the enthusiast class to focus on with its next-generation Radeon RDNA 4 GPUs. With "Alchemist," Intel already laid a strong foundation for hardware-accelerated ray tracing and AI, and the company is expected to advance further on these fronts. Could "Battlemage" and "Granite Rapids" go down as the most exciting products from Intel in 2024? We should find out next month.

NVIDIA's Arm-based AI PC Processor Could Leverage Arm Cortex X5 CPU Cores and Blackwell Graphics

Last week, we got confirmation from the highest levels of Dell and NVIDIA that the latter is making a client PC processor for the Windows on Arm (WoA) AI PC ecosystem, which currently has only one player in it, Qualcomm. Michael Dell hinted that this NVIDIA AI PC processor would be ready in 2025. Since then, speculation has been rife about the various IP blocks NVIDIA could use in the development of this chip; the two key areas of debate have been the CPU cores and the process node.

Given that NVIDIA is gunning for a 2025 launch of its AI PC processor, the company could implement reference Arm IP CPU cores, such as the Arm Cortex X5 "Blackhawk," and not venture into developing its own CPU cores on the Arm machine architecture, unlike Apple. Depending on how the market receives its chips, NVIDIA could eventually develop its own cores. Next up, the company could use the most advanced 3 nm-class foundry node available in 2025 for its chip, such as TSMC N3P. Given that even Apple and Qualcomm will build their contemporary notebook chips on this node, it would be a logical choice of node for NVIDIA. Then there's graphics and AI acceleration hardware.

AMD to Redesign Ray Tracing Hardware on RDNA 4

AMD's next generation RDNA 4 graphics architecture is expected to feature a completely new ray tracing engine, claims Kepler L2, a reliable source of GPU leaks. Currently, AMD uses a component called the Ray Accelerator, which performs the most compute-intensive portion of the ray intersection and testing pipeline, while the rest of AMD's hardware-level approach to ray tracing still relies greatly on the shader engines. The company debuted the ray accelerator with RDNA 2, its first architecture to meet the DirectX 12 Ultimate specs, and improved the component with RDNA 3 by optimizing certain aspects of its ray testing, bringing about a 50% improvement in ray intersection performance over RDNA 2.

The way Kepler L2 puts it, RDNA 4 will feature a fundamentally transformed ray tracing hardware solution compared to the ones on RDNA 2 and RDNA 3. This would likely delegate more of the ray tracing workflow to fixed-function hardware, unburdening the shader engines further. AMD is expected to debut RDNA 4 with its next line of discrete Radeon RX GPUs in the second half of 2024. Given the chatter about a power-packed AMD event at Computex, where the company is expected to unveil the "Zen 5" CPU microarchitecture for both server and client processors, we might expect some talk on RDNA 4, too.

BIOSTAR Introduces the Arc A380 ST Graphics Card

BIOSTAR, a leading manufacturer of motherboards, graphics cards, and storage devices, today is excited to introduce the all-new Intel Arc A380 ST graphics card. Engineered for the modern user, the Arc A380 ST is the pinnacle of performance and efficiency in the world of digital entertainment and casual gaming. Featuring the cutting-edge Intel Arc A380 ST Graphics Engine and built on the PCIe 4.0 interface, the Arc A380 ST reigns supreme in its category. With DirectX 12 Ultimate and OpenGL 4.6 support, the BIOSTAR Arc A380 ST graphics card promises unparalleled visual fidelity, allowing users to immerse themselves in the latest gaming titles and multimedia content with extraordinary clarity and detail.

With a robust 6 GB of GDDR6 memory and a 96-bit memory interface clocked at 15.5 Gbps, the Arc A380 ST provides the bandwidth and speed necessary for seamless gaming and multitasking. The card's energy-efficient design does not compromise performance, boasting a graphics clock of 2000 MHz while maintaining a low total board power of 75 W.
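The quoted memory specs imply the card's peak bandwidth directly: bus width in bytes multiplied by the per-pin data rate. A quick sketch of the arithmetic, for illustration:

```python
def gddr6_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) x (per-pin data rate in Gbps)."""
    return (bus_width_bits / 8) * data_rate_gbps

# Arc A380 ST: 96-bit interface at 15.5 Gbps per pin
print(gddr6_bandwidth_gbps(96, 15.5))  # 186.0 GB/s
```

That works out to roughly 186 GB/s of peak memory bandwidth for the A380 ST.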

Intel Lunar Lake Chiplet Arrangement Sees Fewer Tiles—Compute and SoC

Intel Core Ultra "Lunar Lake-MX" will be the company's bulwark against Apple's M-series Pro and Max chips, designed to power the next crop of performance ultraportables. The MX codename extension denotes MoP (memory-on-package), which sees stacked LPDDR5X memory chips share the package's fiberglass substrate with the chip, to conserve PCB footprint, and give Intel greater control over the right kind of memory speed, timings, and power-management features suited to its microarchitecture. This is essentially what Apple does with its M-series SoCs powering its MacBooks and iPad Pros. Igor's Lab scored the motherlode on the way Intel has restructured the various components across its chiplets, and the various I/O wired to the package.

When compared to "Meteor Lake," the "Lunar Lake" microarchitecture sees a small amount of "re-aggregation" of the various logic-heavy components of the processor. On "Meteor Lake," the CPU cores and the iGPU sat on separate tiles—Compute tile and Graphics tile, respectively, with a large SoC tile sitting between them, and a smaller I/O tile that serves as an extension of the SoC tile. All four tiles sat on top of a Foveros base tile, which is essentially an interposer—a silicon die that facilitates high-density microscopic wiring between the various tiles that are placed on top of it. With "Lunar Lake," there are only two tiles—the Compute tile, and the SoC tile.

AMD Ryzen 7 8700G Loves Memory Overclocking, which Vastly Favors its iGPU Performance

Entry-level discrete GPUs are in trouble, as the first reviews of the AMD Ryzen 7 8700G desktop APU show that its iGPU is capable of beating the discrete GeForce GTX 1650, which means it should also beat the Radeon RX 6500 XT that offers comparable performance. Based on the 4 nm "Hawk Point" monolithic silicon, the 8700G packs the powerful Radeon 780M iGPU based on the latest RDNA3 graphics architecture, with as many as 12 compute units, worth 768 stream processors, 48 TMUs, and an impressive 32 ROPs; and full support for the DirectX 12 Ultimate API requirements, including ray tracing. A review by a Chinese tech publication on BiliBili showed that it's possible for an overclocked 8700G to beat a discrete GTX 1650 in 3DMark Time Spy.

It's important to note here that both the iGPU engine clock and the APU's memory frequency were increased. The reviewer set the iGPU engine clock to 3400 MHz, up from its 2900 MHz reference speed. It turns out that much like its predecessor, the 5700G "Cezanne," the new 8700G "Hawk Point" features a more advanced memory controller than its chiplet-based counterpart (in this case the Ryzen 7000 "Raphael"). The reviewer succeeded in a DDR5-8400 memory overclock. The combination of the two resulted in a 17% increase in the Time Spy score over stock speeds, which is how the chip manages to beat the discrete GTX 1650 (comparable in performance to the RX 6500 XT at 1080p).
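The two overclocks compound, since an iGPU with no dedicated VRAM is typically bandwidth-bound. A rough sketch of the headroom involved, assuming a DDR5-5200 stock memory speed as the baseline (the article doesn't state it):

```python
# Illustrative arithmetic for the overclocks described above.
igpu_uplift = 3400 / 2900 - 1   # iGPU engine clock: 2900 MHz -> 3400 MHz
mem_uplift = 8400 / 5200 - 1    # memory: assumed DDR5-5200 stock -> DDR5-8400

print(f"iGPU clock uplift:       {igpu_uplift:.1%}")   # ~17.2%
print(f"memory bandwidth uplift: {mem_uplift:.1%}")    # ~61.5%
```

The reported ~17% Time Spy gain suggests the benchmark tracks the iGPU clock uplift more closely than the raw bandwidth uplift at these settings.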

SPARKLE Announces Arc A380 Genie and A310 Eco Low-profile Graphics Cards

SPARKLE is announcing the low-profile series: SPARKLE Intel Arc A380 GENIE graphics and SPARKLE Intel Arc A310 ECO graphics. Both graphics cards come as low-profile configurations with 1x HDMI and 2x mini-DP video outputs, a free additional short bracket in the box, and are packed with Intel Arc technologies. Advanced technologies include AI-enhanced Intel Xe Super Sampling (XeSS) for higher image quality and performance, DirectX 12 Ultimate support including hardware-accelerated ray tracing, full AV1 hardware encode and decode for the latest multimedia support, and Intel Deep Link Technologies for exclusive platform advantages combining Intel Core processor and Intel Arc graphics.

These cards are ready to fit into any magic lamp and make gaming wishes come alive! Furthermore, SPARKLE has built an exclusive Intel Arc A310 by successfully reducing the TBP (total board power) of the Intel Arc A310 from Intel's default 75 W to 50 W, providing the best balance of features, technologies and experiences in a small but advanced form-factor.

AMD Ryzen 8040 Series "Hawk Point" Mobile Processors Announced with a Faster NPU

AMD today announced the new Ryzen 8040 mobile processor series codenamed "Hawk Point." These chips are shipping to notebook manufacturers now, and the first notebooks powered by these should be available to consumers in Q1-2024. At the heart of this processor is a significantly faster neural processing unit (NPU), designed to accelerate AI applications that will become relevant next year, as Microsoft prepares to launch Windows 12, and software vendors make greater use of generative AI in consumer applications.

The Ryzen 8040 "Hawk Point" processor is almost identical in design and features to the Ryzen 7040 "Phoenix," except for a faster Ryzen AI NPU. While this is based on the same first-generation XDNA architecture, its NPU performance has been increased to 16 TOPS, compared to the 10 TOPS of the NPU on the "Phoenix" silicon. AMD is taking a whole-of-silicon approach to AI acceleration, which includes not just the NPU, but also the "Zen 4" CPU cores that support the AVX-512 VNNI instruction set relevant to AI; and the iGPU based on the RDNA 3 graphics architecture, with each of its compute units featuring two AI accelerators, components that make the SIMD cores crunch matrix math. The whole-of-silicon performance figure for "Phoenix" is 33 TOPS, while "Hawk Point" boasts 39 TOPS. In benchmarks by AMD, "Hawk Point" is shown delivering a 40% improvement in vision models and Llama 2 over the Ryzen 7040 "Phoenix" series.
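Notably, AMD's whole-of-silicon figures are internally consistent with the NPU upgrade being the only change: subtracting the NPU's contribution leaves the same CPU-plus-iGPU share on both chips.

```python
# Decomposing the whole-of-silicon AI performance figures cited above.
phoenix_total, phoenix_npu = 33, 10  # "Phoenix": 33 TOPS total, 10 TOPS NPU
hawk_total, hawk_npu = 39, 16        # "Hawk Point": 39 TOPS total, 16 TOPS NPU

phoenix_rest = phoenix_total - phoenix_npu  # CPU (VNNI) + iGPU share
hawk_rest = hawk_total - hawk_npu

print(phoenix_rest, hawk_rest)  # 23 23 -> the entire uplift comes from the NPU
```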

Intel Lunar Lake-MX SoC with On-Package LPDDR5X Memory Detailed

With the reality of high performance Arm processors from Apple and Qualcomm threatening Intel's market share in the client computing space, Intel is working on leaner, more PCB-efficient client SoCs that can take the fight to them, while holding onto the foundations of x86. The first such form-factor of processors is dubbed -MX. These are essentially -U segment processors with memory on package, to minimize PCB footprint. Intel fully integrated the PCH into the processor chip with "Meteor Lake," with PCH functions scattered across the SoC and I/O tiles of the processor. An SoC package with dimensions similar to those of the -UP4 packages meant for ultrabooks can now also cram in main memory, so the PCBs of next-generation notebooks can be compacted further.

Intel recently showed Meteor Lake-MX packages to the press as a packaging technology demonstration at its Arizona facility. It's unclear whether these will release as actual products, but a leaked company presentation confirms that the technology's first commercial outing will be with Lunar Lake-MX. The current "Alder Lake-UP4" package measures 19 mm x 28.5 mm, and is a classic multi-chip module that combines a monolithic "Alder Lake" SoC die with a PCH die. The "Meteor Lake-UP4" package measures 19 mm x 23 mm, and is a chiplet-based processor, with a Foveros base tile that holds the Compute (CPU cores), Graphics (iGPU), SoC, and I/O (platform core-logic) tiles. The "Lunar Lake-MX" package is slightly larger than its -UP4 predecessors, measuring 27 mm x 27.5 mm, but completely frees up space on the PCB for memory.
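For a sense of scale, the package areas work out as follows from the dimensions quoted above:

```python
# Package footprint comparison from the quoted dimensions (mm x mm).
packages = {
    "Alder Lake-UP4": (19, 28.5),
    "Meteor Lake-UP4": (19, 23),
    "Lunar Lake-MX": (27, 27.5),
}
for name, (w, h) in packages.items():
    print(f"{name}: {w * h:.1f} mm^2")
```

So the "Lunar Lake-MX" package is roughly 70% larger than "Meteor Lake-UP4" by area, but it absorbs the DRAM packages that would otherwise occupy PCB space next to the processor.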

AMD Ryzen 7000G APU Series Includes Lower End Models Based on "Phoenix 2"

AMD is putting the final touches on its Ryzen 7000G series desktop APUs, which bring the 4 nm "Phoenix" monolithic processor silicon to the Socket AM5 desktop package. The star attraction with these processors is their large iGPU based on the latest RDNA3 graphics architecture, featuring up to 12 compute units worth 768 stream processors, and full DirectX 12 Ultimate feature-set support. These processors should be able to provide 720p to 1080p gaming at low through medium settings, where you can take advantage of FSR for even better performance. At this point we don't know whether the Ryzen AI feature-set will make its way to the desktop platform. "Phoenix" features an 8-core/16-thread CPU based on the latest "Zen 4" microarchitecture.

An interesting development here is that not only is AMD bringing the "Phoenix" silicon to the desktop platform, but the processor models highlighted in this leak also reference the smaller "Phoenix 2" silicon. This chip is physically smaller, featuring a CPU with two "Zen 4" and four "Zen 4c" cores, and an iGPU with no more than 4 compute units worth 256 stream processors. The OPN codes of at least three processor models surfaced on the web. These include the Ryzen 5 PRO 7500G (100-000001183-00), the Ryzen 5 7500G (100-00000931-00), and the Ryzen 3 7300G (100-000001187-00). No specs for these chips are known at this point. The PRO 7500G and regular 7500G are expected to feature the full 2+4 core configuration, while the 7300G could feature a 2+2 core configuration. If the company does plan a 7600G and 7700G, those would likely be based on "Phoenix" with 6 or 8 regular "Zen 4" cores.

PSA: Alan Wake II Runs on Older GPUs, Mesh Shaders not Required

"Alan Wake II," released earlier this week, is the latest third person action adventure loaded with psychological thriller elements that call back to some of the best works of Remedy Entertainment, including "Control," "Max Payne 2," and "Alan Wake." It's also a visual feast, as our performance review of the game should show you, leveraging the full spectrum of the DirectX 12 Ultimate feature-set. In the run-up to the release, Remedy put out system requirements lists for "Alan Wake II" with clear segregation between experiences with and without ray tracing; what wasn't clear was just how much the game depended on hardware support for mesh shaders. Its bare-minimum list called for at least an NVIDIA RTX 2060 "Turing" or an AMD RX 6600 XT "RDNA 2," both of which are DirectX 12 Ultimate GPUs with hardware mesh shader support.

There was some confusion on online gaming forums over the requirement for hardware mesh shaders. Many people assumed that the game would not work on GPUs without mesh shader support, locking out lots of gamers. Through the course of our testing for our performance review, we learned that while "Alan Wake II" does rely on hardware support for mesh shaders, the lack of it does not break gameplay. You will, however, pay a heavy performance penalty on GPUs that lack hardware mesh shader support. On such GPUs, the game is designed to show users a warning dialog box that their GPU lacks mesh shader support (screenshot below), but you can choose to ignore this warning and go ahead and play the game. The game considers mesh shaders a "recommended GPU feature," not a requirement. Without mesh shaders, you can expect a severe performance loss, best illustrated with the AMD Radeon RX 5700 XT based on the RDNA architecture, which lacks hardware mesh shaders.

Intel Launches Arc A580 Graphics Card for 1080p AAA Gaming at $179

Intel today launched the Arc A580 "Alchemist" desktop graphics card, with general availability across both the prebuilt and DIY retail channels. The card starts at a price of USD $179.99. The A580 targets the lower end of the mid-range, aimed at AAA gaming at 1080p with medium-through-high settings. The card fully meets the DirectX 12 Ultimate feature requirements, and is based on the Xe HPG "Alchemist" graphics architecture that powers the current Arc A750 and A770.

The new A580 has a lot in common with the Arc 7-series, as it is based on the same 6 nm ACM-G10 (aka DG2-512) silicon that powers them. Intel carved out this SKU by enabling 24 out of 32 Xe Cores, across 6 out of 8 Render Slices. This results in 384 execution units, or 3,072 unified shaders, 384 XMX AI acceleration cores, 24 Ray Tracing engines, 192 TMUs, and 96 ROPs. Perhaps the best aspect of the A580 is its memory sub-system, which has been carried over from the A750—you get 8 GB of 16 Gbps GDDR6 memory across a 256-bit memory bus, yielding a segment-best 512 GB/s memory bandwidth. Intel claims that the Arc A580 should provide performance highly competitive with the GeForce RTX 3050, but there's more to this, so do check out our reviews.
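The quoted unit counts follow directly from Intel's Xe HPG building blocks: each Xe Core contains 16 Vector Engines (EUs) of 8 ALU lanes each, 16 XMX engines, and one Ray Tracing unit. A quick derivation:

```python
# Deriving the Arc A580's unit counts from its 24 enabled Xe Cores.
xe_cores = 24                 # 24 of 32 Xe Cores enabled
eus = xe_cores * 16           # 16 Vector Engines (EUs) per Xe Core -> 384
shaders = eus * 8             # 8 ALU lanes per EU -> 3,072 unified shaders
xmx = xe_cores * 16           # 16 XMX engines per Xe Core -> 384
rt_units = xe_cores           # 1 Ray Tracing unit per Xe Core -> 24
bandwidth = (256 / 8) * 16    # 256-bit bus x 16 Gbps -> 512.0 GB/s

print(eus, shaders, xmx, rt_units, bandwidth)
```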

ASRock Arc A580 Challenger OC | Sparkle Arc A580 Orc

Latest AMD AGESA Hints at Ryzen 7000G "Phoenix" Desktop APUs

AMD is preparing to launch its first APUs on the Socket AM5 desktop platform, with the Ryzen 7000G series. While the company has standardized integrated graphics with the Ryzen 7000 series, it does not consider the regular Ryzen 7000 series "Raphael" processors as APUs. AMD considers APUs to be processors with overpowered iGPUs that are fit for entry-mainstream PC gaming. As was expected for a while now, for the Ryzen 7000G series, AMD is tapping into its 4 nm "Phoenix" monolithic silicon, the same chip that powers the Ryzen 7040 series mobile processors. Proof of "Phoenix" making its way to desktop surfaced with CPU support lists for the latest AGESA SMUs (system management units) compiled by Reous, with the AGESA ComboAM5PI 1.0.8.0 listing support for "Raphael," as well as "Phoenix." Another piece of evidence was an ASUS B650 motherboard support page that listed a UEFI firmware update encapsulating 1.0.8.0, which references an "upcoming CPU."

Unlike "Raphael" and "Dragon Range," "Phoenix" is a monolithic processor die built on the TSMC 4 nm foundry node. Its CPU is based on the latest "Zen 4" microarchitecture, and features an 8-core/16-thread configuration, with 1 MB of L2 cache per core, and 16 MB of shared L3 cache. The star attraction here is the iGPU, which is based on the RDNA3 graphics architecture, meets the DirectX 12 Ultimate feature requirements, and is powered by 12 compute units worth 768 stream processors. Unlike "Raphael," the "Phoenix" silicon is known to feature an older PCI-Express Gen 4 root complex, with 24 lanes, so you get a PCI-Express 4.0 x16 PEG slot, one CPU-attached M.2 NVMe slot limited to Gen 4 x4, and a 4-lane chipset bus. "Phoenix" features a dual-channel (4 sub-channel) DDR5 memory controller, with native support for DDR5-5600. A big unknown with the Ryzen 7000G desktop APUs is whether they retain the Ryzen AI feature-set from the Ryzen 7040 series mobile processors.

Intel Arc A750 Pricing Sinks to New Low of $180

The Intel Arc A750 graphics card has regularly been shown by its designers to be comparable to the NVIDIA GeForce RTX 3060 in terms of performance, making it a formidable DirectX 12 Ultimate-capable 1080p-class gaming graphics card, with each new driver release ironing out game-specific performance issues. An ASRock A750 custom-design graphics card is now listed on Newegg for $199, with a coupon shaving off a further $20. At $179, the A750 is hard to pass up, considering that other cards in its class are a bit pricier. The cheapest Radeon RX 6650 XT is currently going for $240, and the cheapest RTX 3060 (12 GB original spec) for $280.

Intel "Arrow Lake-S" Desktop Processor Projected 6%-21% Faster than "Raptor Lake-S"

Intel's future-generation "Arrow Lake-S" desktop processor is already being sampled internally, and with some of the company's closest industry partners, and the first performance projections for the processor have surfaced, comparing it with the current "Raptor Lake-S" (Core i9-13900K) and the upcoming "Raptor Lake Refresh" desktop processor (probably the i9-14900K). First, while the "Raptor Lake Refresh" family sees core-count increases across the board for the Core i3, Core i5, and Core i7 brand extensions, the 14th Gen Core i9 series is widely expected to be a damp squib compared to the current i9-13900 series, and it shows in the performance projection graphs, where the supposed i9-14900K is barely 0% to 3% faster, probably on account of slightly higher clock speeds (100-300 MHz).

The "Arrow Lake-S" processor in these graphs has a core-configuration of 8P+16E. Since this is a projection, it does not reflect the final core-configuration of "Arrow Lake-S," but serves as a guideline for what performance increase to expect versus "Raptor Lake," assuming the same core-configuration and power limits. All said and done, "Arrow Lake-S" is projected to offer a performance increase ranging from 6% in the worst-case benchmark to 21% in the best-case benchmark over the current i9-13900K. The CPU benchmarks in the projection span the SPECrate2017 suite, CrossMark, SYSmark 25, WebXPRT 4, Chrome Speedometer 2.1, and Geekbench 5.4.5 ST and MT.

Maingear Ships Gaming Desktops Powered by Intel Arc A750 Graphics

Today, award-winning systems integrator MAINGEAR introduced a new line of PC gaming desktops equipped with Intel Arc A750 Limited Edition GPUs, opening up advanced high-performance gaming experiences at an affordable price point. With AI-enhanced Xe Super Sampling (XeSS) upscaling, and the latest breakthroughs in graphics technologies, MAINGEAR PCs with Intel Arc A750 Limited Edition GPUs offer the perfect entry point for those looking to take their game to the next level. Experience high-refresh gaming from the latest AAA games, high-octane esports titles, and then some. Pre-configured options, including the flagship MG-1, can be purchased through MAINGEAR starting at $999 USD.

"Intel is on a mission to bring balance back to the market by offering great performance per dollar with GPUs featuring modern technologies such as XeSS AI-based upscaling, powerful ray tracing hardware, and AV1 encoding," said Qi Lin, Intel Sr. Director Client Graphics Group. "MAINGEAR has a reputation of using the best quality parts and including the Intel Arc A750 Limited Edition GPUs in the MG-1 and VYBE systems validates the tremendous progress we've made with consistent driver updates to increase performance and support new games on day of release. We look forward to more gamers enjoying what Arc graphics has to offer."

3DMark Gets AMD FidelityFX Super Resolution 2 (FSR 2) Feature Test

UL Benchmarks today released an update to 3DMark that adds a Feature Test for AMD FidelityFX Super Resolution 2 (FSR 2), the company's popular upscaling-based performance enhancement. This was long overdue, as 3DMark has had a Feature Test for DLSS for years now; and as of October 2022, it even got one for Intel XeSS. The new FSR 2 Feature Test uses a scene from the Speed Way DirectX 12 Ultimate benchmark, where it compares fine details of a vehicle and a technic droid between native resolution with TAA and FSR 2, and highlights the performance uplift. To use the feature test, you'll need any GPU that supports DirectX 12 and FSR 2 (that covers AMD, NVIDIA, and Intel Arc). Owners who purchased 3DMark before October 12, 2022 will need to purchase the Speed Way upgrade to unlock the AMD FSR 2 feature test.
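For context, FSR 2 works by rendering internally at a reduced resolution and temporally upscaling to the output resolution; AMD publishes a per-axis scale factor for each quality mode. A small sketch of the resulting render resolutions:

```python
# AMD's published per-axis scale factors for the FSR 2 quality modes.
FSR2_MODES = {"Quality": 1.5, "Balanced": 1.7, "Performance": 2.0, "Ultra Performance": 3.0}

def render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Internal render resolution FSR 2 uses for a given output resolution and mode."""
    factor = FSR2_MODES[mode]
    return round(out_w / factor), round(out_h / factor)

print(render_resolution(3840, 2160, "Performance"))  # (1920, 1080): 4K output from a 1080p render
```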

Intel Talks "Battlemage" Xe2-LPG and Xe2-HPG Graphics Architectures

Intel in an interview with Hardwareluxx shed more light on its second generation Xe graphics architecture, codenamed "Battlemage." There will be two key variants of "Battlemage": Xe2-LPG and Xe2-HPG. The Xe2-LPG (low-power graphics) architecture is a slimmed-down derivative of "Battlemage" that's optimized for low power. It is meant for iGPUs (integrated graphics), particularly upcoming "disaggregated" Intel Core processors in which the iGPU exists on Graphics Tiles (chiplets). The iGPU powering the upcoming Core "Meteor Lake" processor is rumored to meet the full DirectX 12 Ultimate feature-set (something Xe-LP doesn't), and so it's likely that Xe2-LPG is getting its first outing with that processor. The Xe2-HPG (high performance graphics) architecture is designed squarely for discrete GPUs—either desktop graphics cards, or mobile discrete GPUs hardwired into laptops.

In the interview, Intel talked about how its first-generation Xe graphics IP had at least four separate product verticals based on the scalability of the product and the specific application (Xe-LP for iGPUs and tiny dGPUs, Xe-HPG for client and pro-vis discrete GPUs, Xe-HPC for scalar compute processors, and Xe-HP for data-center graphics). The company eventually axed Xe-HP as it felt the Xe-HPG and Xe-HPC architectures adequately addressed this segment. With AXG (accelerated compute group) being split up between CCG (client computing group) and DCG (data-center group), Xe2-LPG and Xe2-HPG will be developed primarily under CCG, with a client and pro-visualization focus, while Xe-HPC will be developed as a scalar-compute architecture by DCG. This effectively leaves the Intel Arc Graphics team with just two verticals: to deliver a feature-rich iGPU for its next-generation Core processors, and a performance discrete GPU lineup that can eat away market share from NVIDIA and AMD, hopefully with better time-to-market.

Intel Core "Meteor Lake" On Course for 2H-2023 Launch

Intel in its Q4-2022 financial release call reiterated that its Core "Meteor Lake" processor remains on course for a 2H-2023 launch. The company slide does not mention the client form-factor the architecture targets, and there are still rumors of a "Raptor Lake Refresh" desktop processor lineup for 2H, which would mean that "Meteor Lake" will debut as a high-performance mobile processor architecture attempting to dominate the 7 W, 15 W, 28 W, and 35 W device market-segments, with a 6P+16E CPU that introduces IPC increases on both the P-cores and E-cores, and a powerful new iGPU. The slide also mentions that the succeeding "Lunar Lake" architecture is on course for 2024.

"Meteor Lake" is Intel's first chiplet-based MCM processor, in which the key components of the processor are built on different silicon fabrication nodes based on their need for a cutting-edge node, so that cost optimization upholds the economic aspect of Moore's Law. The compute tile, the die with the CPU cores, features a 6P+16E setup: six "Redwood Cove" P-cores and sixteen "Crestmont" E-cores. At this point it's not known whether the "Crestmont" cores are arranged in clusters of 4 cores each. The graphics tile features a powerful iGPU based on the newer Xe-LPG graphics architecture that meets the full DirectX 12 Ultimate feature-set. The processor's I/O is expected to support even faster DDR5/LPDDR5 memory speeds, and feature PCIe Gen 5.

ICYMI, Intel Improved DirectX 9 API Performance for Arc "Alchemist" GPUs Spanning Several Popular Game Titles

The Intel Arc "Alchemist" graphics architecture was developed as a forward-facing PC GPU architecture with many contemporary graphics technologies, including full DirectX 12 Ultimate support; however, the GPU curiously lacks hardware support for DirectX 9. Released 20 years ago, DirectX 9 continued to power AAA PC titles well into the 2010s as game-console development lagged (the era of the Xbox 360 and PlayStation 3), and most e-sports titles of the time included either native or fallback DirectX 9 support for those on older GPUs. This is a problem for Intel, as many currently-popular e-sports titles still use DirectX 9, and so the Intel Graphics team set out to individually optimize DirectX 9 titles with each new Arc GPU driver release.

While Arc GPUs lack DirectX 9 support, mature API translation technologies exist, which convert DirectX 9 API calls into DirectX 12. This is not fundamentally unlike how 32-bit applications work on 64-bit Windows (using the WOW64 machine-architecture translation layer). It does, however, require per-game optimization to ensure that any engine-level special features are correctly translated. With the latest 101.3959 Beta drivers, Intel optimized the popular DirectX 9 titles "League of Legends," "Counter-Strike: Global Offensive," "Starcraft 2," "Payday 2," "Guild Wars 2," "Stellaris," "NiZhan," and "Moonlight Blade." The company seems to be going about this the smart way, relying on market analysis to select the games in need of optimization (understanding which DirectX 9 games are still being played).

Intel's Next-Gen Desktop Platform Intros Socket LGA1851, "Meteor Lake-S" to Feature 6P+16E Core Counts

Keeping up with the cadence of two generations of desktop processors per socket, Intel will turn the page on the current LGA1700 with the introduction of the new Socket LGA1851. The processor package will likely have the same dimensions as LGA1700, and the two sockets may share cooler compatibility. The first processor microarchitecture to debut on LGA1851 will be the 14th Gen Core "Meteor Lake-S." These chips will feature a generationally lower CPU core-count compared to "Raptor Lake," but significantly bump the IPC on both the P-cores and E-cores.

"Raptor Lake" is Intel's final monolithic silicon client processor before the company pivots to chiplets built on various foundry nodes, as part of its IDM 2.0 strategy. The client-desktop version of "Meteor Lake," dubbed "Meteor Lake-S," will have a maximum CPU core configuration of 6P+16E (that's 6 performance cores with 16 efficiency cores). The chip has 6 "Redwood Cove" P-cores, and 16 "Crestmont" E-cores. Both of these are expected to receive IPC uplifts, such that the processor will end up faster (and hopefully more efficient) than the top "Raptor Lake-S" part. Particularly, it should be able to overcome the deficit of 2 P-cores.

UL Benchmarks Launches 3DMark Speed Way DirectX 12 Ultimate Benchmark

UL Solutions is excited to announce that our new DirectX 12 Ultimate Benchmark 3DMark Speed Way is now available to download and buy on Steam and on the UL Solutions website. 3DMark Speed Way is sponsored by Lenovo Legion. Developed with input from AMD, Intel, NVIDIA, and other leading technology companies, Speed Way is an ideal benchmark for comparing the DirectX 12 Ultimate performance of the latest graphics cards.

DirectX 12 Ultimate is the next-generation application programming interface (API) for gaming graphics. It adds powerful new capabilities to DirectX 12, helping game developers improve visual quality, boost frame rates, reduce loading times and create vast, detailed worlds. 3DMark Speed Way's engine demonstrates what the latest DirectX API brings to ray-traced gaming, using DirectX Raytracing tier 1.1 for real-time global illumination and real-time ray-traced reflections, coupled with new performance optimizations like mesh shaders.

3DMark Speed Way DirectX 12 Ultimate Benchmark is Launching on October 12

3DMark Speed Way is a new GPU benchmark that showcases the graphics technology that will power the next generation of gaming experiences. We're excited to announce that Speed Way, sponsored by Lenovo Legion, is releasing on October 12. Our team has been working hard to get Speed Way ready for you to use for benchmarking, stress testing, and comparing the new PC hardware coming this fall.

From October 12 onward, Speed Way will be included in the price when you buy 3DMark from Steam or our own online store. Since we released Time Spy in 2016, 3DMark users have enjoyed many free updates, including Time Spy Extreme, the 3DMark CPU Profile, 3DMark Wild Life, and multiple tests demonstrating new DirectX features. With the addition of Speed Way, the price of 3DMark on Steam and 3DMark Advanced Edition will go up from $29.99 to $34.99.

Intel Arc A770 Launched at USD $329, Available from October 12

Intel today announced the pricing for the Arc A770 Limited Edition desktop graphics card: it is set at USD $329, offering a class of performance comparable to NVIDIA and AMD graphics cards in the $400 range. The A770 is a full-featured DirectX 12 Ultimate-capable graphics card. The Arc A770 Limited Edition maxes out the 6 nm ACM-G10 silicon, featuring 32 Xe Cores, 512 XMX matrix processors, and 512 EUs, which work out to 4,096 unified shaders. The card comes with 8 GB or 16 GB of 17.5 Gbps GDDR6 memory across a 256-bit wide memory bus. $329 could be the starting price of the A770, for its 8 GB model. It will be available from October 12.

Intel Launches the NUC 12 Enthusiast, its Most Powerful Mini-PC

Today, Intel announced the Intel NUC 12 Enthusiast Mini PC and Kit (code-named Serpent Canyon). Designed for gamers and content creators, the compact mini-PC is built to include Intel Arc graphics in the smallest form factor. The NUC 12 Enthusiast features the latest 12th Gen Intel Core processors and is the first Intel NUC to include Intel Arc A-series graphics in the form of the Intel Arc A770M graphics processing unit (GPU).

"The Intel NUC 12 Enthusiast Kit is one of the most exciting NUCs to launch because it's the first to pair an Intel processor with discrete Intel graphics. The system provides a strong combination of high performance in content creation and gaming usages, and a wide array of I/O - typically found in larger systems - all in a small form-factor design. More importantly, this NUC features helpful technologies like Intel Thread Director and Intel Deep Link that make it perfect for anyone trying to create and game in the convenience of a truly compact design," said Brian McCarson, Intel vice president and general manager of the NUC Group.