News Posts matching #GPU

Latest Beta Patch for Starfield Adds NVIDIA DLSS 3 Support and Brings CPU and GPU Optimizations

Bethesda has released its latest beta patch for Starfield, adding official support for NVIDIA DLSS 3 Frame Generation, fixing several issues, and implementing further CPU and GPU optimizations. The November 8th Beta Update, as it is called, is currently only available through Steam's beta branch, and it should go live for all Xbox and PC players later this month.

According to the release notes, the new update adds NVIDIA DLSS support covering DLSS Super Resolution, Deep Learning Anti-Aliasing (DLAA), NVIDIA Reflex Low Latency, and DLSS Frame Generation. It also addresses several performance and stability issues, including fixes for a number of memory-related issues and leaks, additional GPU optimizations that benefit higher-end graphics cards, and improvements to the renderer threading model that should improve CPU utilization on higher-end systems, alongside various other stability and performance improvements.

NVIDIA Turbocharges Generative AI Training in MLPerf Benchmarks

NVIDIA's AI platform raised the bar for AI training and high performance computing in the latest MLPerf industry benchmarks. Among many new records and milestones, one in generative AI stands out: NVIDIA Eos - an AI supercomputer powered by a whopping 10,752 NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking - completed a training benchmark based on a GPT-3 model with 175 billion parameters trained on one billion tokens in just 3.9 minutes. That's a nearly 3x gain from 10.9 minutes, the record NVIDIA set when the test was introduced less than six months ago.

The benchmark uses a portion of the full GPT-3 data set behind the popular ChatGPT service that, by extrapolation, Eos could now train in just eight days, 73x faster than a prior state-of-the-art system using 512 A100 GPUs. The acceleration in training time reduces costs, saves energy and speeds time-to-market. It's heavy lifting that makes large language models widely available so every business can adopt them with tools like NVIDIA NeMo, a framework for customizing LLMs. In a new generative AI test this round, 1,024 NVIDIA Hopper architecture GPUs completed a training benchmark based on the Stable Diffusion text-to-image model in 2.5 minutes, setting a high bar on this new workload. By adopting these two tests, MLPerf reinforces its leadership as the industry standard for measuring AI performance, since generative AI is the most transformative technology of our time.
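For readers who want to check the headline arithmetic, here is a minimal sketch (plain Python, using only the figures quoted above) of where the "nearly 3x" gain and the implied runtime of the prior 512-A100 system come from.

```python
# Arithmetic behind the MLPerf figures quoted above (illustrative only).

# Eos GPT-3 (175B) benchmark time this round vs. the record set ~6 months ago
new_minutes, old_minutes = 3.9, 10.9
speedup = old_minutes / new_minutes
print(f"Benchmark speedup: {speedup:.2f}x")  # ~2.79x, i.e. "nearly 3x"

# NVIDIA's extrapolation: Eos could train on the full data set in ~8 days,
# 73x faster than a prior state-of-the-art system using 512 A100 GPUs.
eos_full_training_days = 8
prior_system_days = eos_full_training_days * 73
print(f"Prior 512-A100 system: ~{prior_system_days} days "
      f"(~{prior_system_days / 365:.1f} years)")
```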

NZXT Announces the H6 Flow — A Compact Dual Chamber Mid-Tower ATX Case

NZXT, a leader in PC gaming hardware and services, today announces the H6 Flow and H6 Flow RGB, a compact dual-chamber mid-tower ATX case. The H6 Flow offers a harmonious blend of performance and visual appeal for PC enthusiasts. Designed for an expansive and uninterrupted view, the H6 Flow is adorned with tempered glass on the front and sides, granting a panoramic peek into the insides of your build. Leveraging its dual-chamber architecture, the new angled front panel directs airflow from the three pre-included 120 mm fans (or 120 mm RGB fans on the H6 Flow RGB version), while two 140 mm fans at the base of the case help cool your heat-generating components. The revamped perforated panels feature a design fine-tuned for optimal airflow and superior performance. The H6 Flow is also easy to build in, with generous cable-routing channels and straps ensuring organized cable management.

NVIDIA is Rushing GeForce RTX 4090 Orders to China Before Export Restrictions

NVIDIA is reportedly rushing shipments of GeForce RTX 4090 GPUs to China in anticipation of expected export restrictions. We have already reported that NVIDIA might be canceling US$5 billion worth of orders. The US government will require an export license for shipping RTX 4090s to China, effectively restricting sales to the country. NVIDIA's add-in-board (AIB) partners are reportedly working at full capacity to produce as many RTX 4090 products for the Chinese market as possible before the potential restriction takes effect on November 17. While it remains unclear whether the export restrictions will ultimately be implemented, the anticipation of such measures has prompted NVIDIA and its partners to accelerate production.

The information comes from a tweet by Zed Wang, a well-known hardware leaker with a track record of accurate insights into NVIDIA's operations, who claims that "NVIDIA has been shipping tons of AD102 for AICs this week to manufacture as much RTX 4090 as possible before the original restriction date of RTX 4090 in China. It is still unclear whether the restriction will become true or not. But all AICs are at their full power in producing RTX 4090, regardless of that."

AMD Reports Third Quarter 2023 Financial Results, Revenue Up 4% YoY

AMD (NASDAQ:AMD) today announced revenue for the third quarter of 2023 of $5.8 billion, gross margin of 47%, operating income of $224 million, net income of $299 million and diluted earnings per share of $0.18. On a non-GAAP basis, gross margin was 51%, operating income was $1.3 billion, net income was $1.1 billion and diluted earnings per share was $0.70.

"We delivered strong revenue and earnings growth driven by demand for our Ryzen 7000 series PC processors and record server processor sales," said AMD Chair and CEO Dr. Lisa Su. "Our data center business is on a significant growth trajectory based on the strength of our EPYC CPU portfolio and the ramp of Instinct MI300 accelerator shipments to support multiple deployments with hyperscale, enterprise and AI customers."

IBM Unleashes the Potential of Data and AI with its Next-Generation IBM Storage Scale System 6000

Today, IBM introduced the new IBM Storage Scale System 6000, a cloud-scale global data platform designed to meet today's data-intensive and AI workload demands, and the latest offering in the IBM Storage for Data and AI portfolio.

For the seventh consecutive year, IBM has been named a Leader in the 2022 Gartner Magic Quadrant for Distributed File Systems and Object Storage, recognized for its vision and execution. The new IBM Storage Scale System 6000 seeks to build on IBM's leadership position with an enhanced high-performance parallel file system designed for data-intensive use cases. It provides up to 7 million IOPS and up to 256 GB/s of throughput for read-only workloads per system in a 4U (four rack units) footprint.

Velocity Micro Announces ProMagix G480a and G480i, Two GPU Server Solutions for AI and HPC

Velocity Micro, the premier builder of award-winning enthusiast desktops, laptops, high performance computing solutions, and professional workstations, announces the immediate availability of the ProMagix G480a and G480i, two GPU servers optimized for High Performance Computing and Artificial Intelligence. Powered by either dual 4th Gen AMD EPYC or dual 4th Gen Intel Xeon Scalable processors, these 4U form factor servers support up to eight dual-slot PCIe Gen 5 GPUs, creating incredible compute power designed specifically for the highest-demand workflows, including simulation, rendering, analytics, deep learning, AI, and more. Shipments begin immediately.

"By putting emphasis on scalability, functionality, and performance, we've created a line of server solutions that tie in the legacy of our high-end brand while also offering businesses alternative options for more specialized solutions for the highest demand workflows," said Randy Copeland, President and CEO of Velocity Micro. "We're excited to introduce a whole new market to what we can do."

NVIDIA to Start Selling Arm-based CPUs to PC Clients by 2025

According to sources close to Reuters, NVIDIA is reportedly developing custom CPUs based on the Arm instruction set architecture (ISA), tailored specifically for the client (PC) ecosystem. NVIDIA has already developed an Arm-based CPU codenamed Grace, which is designed to handle server and HPC workloads in combination with the company's Hopper GPU. However, as we learn today, NVIDIA also wants to provide CPUs for PC users and to power Microsoft's Windows operating system. The push for more vendors of Arm-based CPUs is also supported by Microsoft, which is losing PC market share to Apple and its M-series of processors.

Custom Arm-based processors for PCs would render decades of x86 applications either obsolete or in need of recompilation. Apple lets users run x86 applications through an x86-to-Arm translation layer, and Microsoft offers similar emulation for Windows-on-Arm devices. It remains to be seen how NVIDIA's solution, expected to arrive in 2025, will compete in the broader PC processor market. Still, the company could deliver some compelling products given its incredible silicon engineering history and performant Arm designs like Grace. With the upcoming Arm-based processors hitting the market, we expect the Windows-on-Arm ecosystem to thrive and attract massive investment from independent software vendors.

NVIDIA and AMD Deliver Powerful Workstations to Accelerate AI, Rendering and Simulation

To enable professionals worldwide to build and run AI applications right from their desktops, NVIDIA and AMD are powering a new line of workstations equipped with NVIDIA RTX Ada Generation GPUs and AMD Ryzen Threadripper PRO 7000 WX-Series CPUs. Bringing together the highest levels of AI computing, rendering and simulation capabilities, these new platforms enable professionals to efficiently tackle the most resource-intensive, large-scale AI workflows locally.

Bringing AI Innovation to the Desktop
Advanced AI tasks typically require data-center-level performance. Training a large language model with a trillion parameters, for example, takes thousands of GPUs running for weeks, though research is underway to reduce model size and enable model training on smaller systems while still maintaining high levels of AI model accuracy. The new NVIDIA RTX GPU and AMD CPU-powered AI workstations provide the power and performance required for training such smaller models and for local fine-tuning, helping to offload data center and cloud resources for AI development tasks. The devices let users select single- or multi-GPU configurations as required for their workloads.

Lords of the Fallen Gets New Patches, Latest Patch 1.1.207 Brings Stability Improvements

Lords of the Fallen has received plenty of patches over the last few days, with two of them, Patch 1.1.199 and 1.1.203, launched yesterday, and the latest, Patch 1.1.207, launched earlier today. The previous two fixed GPU crashes on AMD graphics cards and addressed a communication issue between the drivers and DirectX 12. Patch 1.1.203 also brought a reduction in VRAM usage that should provide additional headroom for GPUs operating at the limit, which in turn should deliver a substantial performance improvement, at least according to the developer.

The latest Patch 1.1.207 brings further stability improvements, fixing several crash issues and implementing various optimization, multiplayer, gameplay, AI, quest, and other improvements. The release notes also state that a fix for the issue causing the game to crash on Steam Deck is ready and should be published as soon as it passes QA.

Moore Threads Prepares S90 and S4000 GPUs for Gaming and Data Center

Moore Threads Technology (MTT), a Chinese GPU manufacturer, is reportedly testing its next-generation graphics processors for client PCs and data centers. The products under scrutiny are the MTT S90 for client/gaming computers and the MTT S4000 for data centers. Their Device IDs, 0301 and 0323, could imply that these GPUs belong to MTT's third-generation GPU lineup. While few details about these GPUs are available, the new Device IDs suggest a possible introduction of a new microarchitecture following the MTT Chunxiao GPU series. The current-generation Chunxiao series, featuring the MTT S70, MTT S80, and MTT S3000, failed to compete effectively with AMD, Intel, and NVIDIA GPUs.

Thanks to @Löschzwerg, who found the Device Hunt submission, we can see the hardware identifiers in the PCI ID and USB ID repositories ahead of launch; such entries often signal that a company is testing new chips or drivers. In MTT's case, the latest developments are complicated by its recent inclusion on the U.S. Entity List, which limits its access to US-made technologies. This introduces a problem for the company, as it can no longer access TSMC's facilities for chip production and will likely have to turn to domestic manufacturing, with SMIC being the only leading option to consider.

AMD's Radeon RX 6750 GRE Specs and Pricing Revealed

There have been several rumours about AMD's upcoming RX 6750 GRE graphics cards, which may or may not be limited to the Chinese market. Details have now appeared of not one, but two different RX 6750 GRE SKUs, courtesy of @momomo_us on Twitter/X, and it seems like AMD has simply re-branded the RX 6700 XT and RX 6700, adjusted the clock speeds minimally, and slapped a new cooler on the cards. To call this disappointing would be an understatement, but then again, these cards weren't expected to bring anything new to the table.

The fact that AMD is calling the cards the RX 6750 GRE 10 GB and RX 6750 GRE 12 GB will only add to consumer confusion, especially when you consider the two cards were clearly different SKUs when they launched as the RX 6700 and RX 6700 XT. Now it just looks like one has less VRAM than the other, when in fact it also has a different GPU. At least the pricing difference between the two SKUs is minimal, with the 10 GB model having an MSRP of US$269 and the 12 GB model coming in at a mere $20 more, at US$289. The RX 6700 XT had a launch price of US$479 and still retails for over US$300, which at least makes these refreshed products somewhat more wallet-friendly.

NVIDIA Reflex Reduces Latency In Counter-Strike 2, Overwatch 2 Season 7 and Warhammer: Vermintide 2

NVIDIA Reflex is a must-have in games, reducing system latency so your actions occur more quickly, giving you a competitive edge in multiplayer matches and making single-player titles more responsive and enjoyable. NVIDIA Reflex is now used by over 50 million players each month, is available in 9 of the top 10 competitive shooters, including Counter-Strike 2, and is activated by 90% of GeForce gamers in over 80 supported titles. NVIDIA Reflex is synonymous with responsive gaming and can be found in the latest and greatest games. Counter-Strike 2 and Overwatch 2 Season 7: Rise of Darkness are reducing system latency with Reflex today, and Warhammer: Vermintide 2 adds support in the coming days.

Overwatch 2 Season 7 Available Now
NVIDIA Reflex enables GeForce gamers to play with minimal system latency in Overwatch 2's highly competitive multiplayer matches. Overwatch 2 Season 7: Rise of Darkness is now live. Defeat powerful bosses and level up your hero in the new Trials of Sanctuary game mode. Travel to a tropical paradise with the new Samoa map. Try out Sombra's new ability kit and look out for Roadhog's rework later in the season. Explore seasonal cosmetics in the shop or earn them with the Battle Pass. Using NVIDIA Reflex on a GeForce RTX 40 Series GPU, your actions have improved responsiveness, occurring virtually without delay, and performance is so high that you'll see the action at its best with maximum clarity.

NVIDIA RTX Video Super Resolution Update Enhances Video Quality, Detail Preservation and Expands to GeForce RTX 20 Series GPUs

NVIDIA today announced an update to RTX Video Super Resolution (VSR) that delivers greater overall graphical fidelity with preserved details, upscaling for native videos and support for GeForce RTX 20 Series desktop and laptop GPUs. For AI assists from RTX VSR and more - from enhanced creativity and productivity to blisteringly fast gaming - check out the RTX for AI page.

RTX VSR Update 1.5
RTX VSR's AI model has been retrained to more accurately distinguish between subtle details and compression artifacts, better preserving image details during the upscaling process. Finer details are more visible, and the overall image looks sharper and crisper than before. RTX VSR version 1.5 will also de-artifact videos played at their native resolution - previously, only upscaled video could be enhanced. Providing a leap in graphical fidelity for laptop owners with 1080p screens, the updated RTX VSR makes 1080p resolution, which is popular for content and displays, look smoother at its native resolution, even with heavy artifacts. And with expanded RTX VSR support, owners of GeForce RTX 20 Series GPUs can benefit from the same AI-enhanced video as those using RTX 30 and 40 Series GPUs. RTX VSR 1.5 is available as part of the latest Game Ready Driver, available for download today. Content creators using NVIDIA Studio Drivers - designed to enhance features, reduce repetitiveness and dramatically accelerate creative workflows - can install a driver with RTX VSR when it releases in early November.

Phison Introduces New High-Speed Signal Conditioner IC Products, Expanding its PCIe 5.0 Ecosystem for AI-Era Data Centers

Phison Electronics, a global leader in NAND controllers and storage solutions, announced today that the company has expanded its portfolio of PCIe 5.0 high-speed transmission solutions with PCIe 5.0, CXL 2.0-compatible redriver and retimer data signal conditioning IC products. Leveraging the company's deep expertise in PCIe engineering, Phison is the only signal conditioner provider to offer such a broad portfolio of multi-channel PCIe 5.0 redriver and retimer solutions alongside PCIe 5.0 storage solutions designed specifically to meet the data infrastructure demands of artificial intelligence and machine learning (AI+ML), edge computing, high-performance computing, and other data-intensive, next-gen applications. At the 2023 Open Compute Project Global Summit, the Phison team is showcasing its expansive PCIe 5.0 portfolio, demonstrating the redriver and retimer technologies alongside its enterprise NAND flash, illustrating a holistic vision for a PCIe 5.0 data ecosystem to address the most demanding applications of the AI-everywhere era.

"Phison has focused industry-leading R&D efforts on developing in-house, chip-to-chip communication technologies since the introduction of the PCIe 3.0 protocol, with PCIe 4.0 and PCIe 5.0 solutions now in mass production, and PCIe 6.0 solutions now in the design phase," said Michael Wu, President & General Manager, Phison US. "Phison's accumulated experience in high-speed signaling enables our team to deliver retimer and redriver design solutions that are optimized for top signal integration, low power usage, and high temperature endurance, to deliver interface speeds for the most challenging compute environments."

GALAX 20th Anniversary GeForce RTX 4090 GPU Pictured

When GALAX began its operations in China in 2003, the company broke into the market and established itself as one of the best GPU AIBs out there. Now, the company is celebrating its 20th anniversary with a special-edition graphics card to mark its two-decade run. We previously reported that the company was planning to show more of the special-edition card; however, all we got at the time was a teaser. Today, thanks to X/Twitter user CornerJack, we get to see the new card in full, along with its uniquely positioned 12VHPWR power connector.

In the images below, we see that GALAX decided to work around the problem of messy cable management with a stealthier approach. The 16-pin 12VHPWR connector is now hidden inside the card in an interesting place: it sits in line with the PCB, next to the PCIe connector, to prevent cable bending, which has proven infamously dangerous. As for the special-edition features, we expect the card to be a great performer and certainly the star of the show in any PC build thanks to its white aesthetics.

Samsung Notes: HBM4 Memory is Coming in 2025 with New Assembly and Bonding Technology

According to an editorial blog post published by SangJoon Hwang, Executive Vice President and Head of the DRAM Product & Technology Team at Samsung Electronics, High-Bandwidth Memory 4 (HBM4) is coming in 2025. In the recent timeline of HBM development, the first generation of HBM memory appeared in 2015 with the AMD Radeon R9 Fury X. The second-generation HBM2 arrived with the NVIDIA Tesla P100 in 2016, and the third-generation HBM3 saw the light of day with the NVIDIA Hopper GH100 GPU in 2022. Currently, Samsung has developed 9.8 Gbps HBM3E memory, which will start sampling to customers soon.

However, Samsung is more ambitious with its development timeline this time, and the company expects to announce HBM4 in 2025, possibly with commercial products in the same calendar year. Interestingly, HBM4 will incorporate technologies optimized for high thermal properties, such as non-conductive film (NCF) assembly and hybrid copper bonding (HCB). NCF is a polymer layer that enhances the stability of micro bumps and TSVs in the chip, protecting the memory dies' solder bumps from shock. Hybrid copper bonding is an advanced semiconductor packaging method that creates direct copper-to-copper connections between semiconductor components, enabling high-density, 3D-like packaging. It offers high I/O density, enhanced bandwidth, and improved power efficiency, using a copper layer as the conductor and an oxide insulator instead of regular micro bumps to increase the connection density needed for HBM-like structures.

AMD to Acquire Open-Source AI Software Expert Nod.ai

AMD today announced the signing of a definitive agreement to acquire Nod.ai to expand the company's open AI software capabilities. The addition of Nod.ai brings AMD an experienced team that has developed industry-leading software technology that accelerates the deployment of AI solutions optimized for AMD Instinct data center accelerators, Ryzen AI processors, EPYC processors, Versal SoCs and Radeon GPUs. The agreement strongly aligns with the AMD AI growth strategy centered on an open software ecosystem that lowers the barriers of entry for customers through developer tools, libraries and models.

"The acquisition of Nod.ai is expected to significantly enhance our ability to provide AI customers with open software that allows them to easily deploy highly performant AI models tuned for AMD hardware," said Vamsi Boppana, senior vice president, Artificial Intelligence Group at AMD. "The addition of the talented Nod.ai team accelerates our ability to advance open-source compiler technology and enable portable, high-performance AI solutions across the AMD product portfolio. Nod.ai's technologies are already widely deployed in the cloud, at the edge and across a broad range of end point devices today."

Starfield Gets New Update 1.7.36, Improving Intel Arc GPU Support and Adding FOV Slider

Bethesda has released a new Starfield Update 1.7.36, which brings previously promised updates to the game as well as fixes for general stability and performance. While the previous update resolved some issues with AMD GPUs, namely the issue that caused lens flares not to appear correctly, issues with upscaling, and more, the latest one focuses on Intel Arc GPUs.

Bethesda made quite a few update promises, including built-in mod support and top community requests, but so far, updates have focused on general performance and stability issues. The latest one is not an exception either, but it does bring the promised FOV slider as well as improved stability for Intel Arc GPUs. It also adds various additional stability and performance improvements.

OpenAI Could Make Custom Chips to Power Next-Generation AI Models

OpenAI, the company behind ChatGPT and the GPT-4 large language model, is reportedly exploring the possibility of creating custom silicon to power its next-generation AI models. According to Reuters, insider sources have even alluded to the firm evaluating potential acquisitions of chip design firms. While a final decision is yet to be cemented, conversations from as early as last year highlighted OpenAI's struggle with the growing scarcity and escalating costs of AI chips, with NVIDIA being its primary supplier. The CEO of OpenAI, Sam Altman, has been rather vocal about the shortage of GPUs, a sector dominated by NVIDIA, which holds control over an astounding 80% of the global market for AI-optimized chips.

Back in 2020, OpenAI banked on a colossal supercomputer built by Microsoft, a significant investor in OpenAI, which harnesses the power of 10,000 NVIDIA GPUs. This setup is instrumental in driving the operations of ChatGPT, which, per Bernstein analyst Stacy Rasgon, comes with a hefty price tag: each interaction with ChatGPT is estimated to cost around 4 cents. Drawing a comparison with Google search, if ChatGPT queries ever grew to a mere tenth of Google's search volume, the initial GPU investment would skyrocket to an overwhelming $48.1 billion, with a recurring annual expenditure of approximately $16 billion for sustained operations. When invited to comment, OpenAI declined to provide a statement. The potential entry into the world of custom silicon signals a strategic move toward greater self-reliance and cost optimization so that further AI development can be sustained.
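To make the scaling logic behind such estimates concrete, here is a minimal back-of-envelope sketch in Python. Only the ~4-cent-per-query figure comes from the article; the Google query volume is an illustrative assumption, so the output shows the shape of the math rather than reproducing the analyst's exact $16 billion figure, which likely folds in additional hardware and operating assumptions.

```python
# Back-of-envelope sketch of the query-cost scaling described above.
# Only the 4-cent-per-query figure comes from the article; the Google
# search volume below is an illustrative assumption, not a sourced number.

COST_PER_QUERY_USD = 0.04          # ~4 cents per ChatGPT interaction (from the article)
GOOGLE_QUERIES_PER_DAY = 8.5e9     # assumed daily Google search volume (illustrative)
CHATGPT_SHARE = 0.10               # "a mere tenth of Google's search volume"

chatgpt_queries_per_day = GOOGLE_QUERIES_PER_DAY * CHATGPT_SHARE
daily_serving_cost = chatgpt_queries_per_day * COST_PER_QUERY_USD
annual_serving_cost = daily_serving_cost * 365

print(f"Assumed ChatGPT queries/day: {chatgpt_queries_per_day:,.0f}")
print(f"Serving cost/day:  ${daily_serving_cost / 1e6:,.1f} million")
print(f"Serving cost/year: ${annual_serving_cost / 1e9:,.1f} billion")
```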

Intel Lunar Lake Processor Appears in SiSoftware Sandra Benchmark

Intel's next-generation Lunar Lake processor has appeared in the SiSoftware Sandra benchmarking suite, and the online database has revealed many details, thanks to a spotting by @Olrak29 of X/Twitter. Considering Intel's Meteor Lake is still two months away from its launch, the presence of Lunar Lake's benchmarks is indeed intriguing. Interestingly, Intel showcased a Lunar Lake laptop at the Intel Innovation 2023 event, and this SiSoft entry might be related to that demo. The data from SiSoft details the system as a "Genuine Intel(R) 0000 1.00 GHz (5M 20c 3.91 GHz + 2.61 GHz, 3.3 GHz IMC, 4x 2.5 MB + 4 MB L2, 2x 8 MB L3)," hinting at a "Lunar Lake Client System (Intel LNL-M LP5 RVP1)." Deciphering these details, the Lunar Lake system adopts a 4+4 core configuration, utilizing a mix of Lion Cove and Skymont architecture cores tailored for performance and efficiency.

Moreover, the benchmark report pegs this CPU as a low-power laptop variant with a 17 W TDP. While it operates at a 1.0 GHz base frequency, it reached a speed of 3.91 GHz during the testing. However, these numbers should be taken cautiously since it's likely an engineering sample. Cache details are outlined, suggesting a 2.5 MB L2 cache per P-core, an added 4 MB L2 cache for E-cores, and a 16 MB L3 cache. No details on the integrated GPU were revealed, although it's anticipated that Lunar Lake will house Intel's Xe2-LPG graphics and LPDDR5 system memory. Intel has shared that Lunar Lake is scheduled for a 2024 release in mobile/laptop devices, targeting performance-per-watt leadership. Arrow Lake processors, catering to desktops, might share the core architecture and are anticipated to launch around the same timeframe.

NVIDIA Reportedly in Talks to Lease Data Center Space for its own Cloud Service

The recent development of AI models that are more capable than ever has led to a massive demand for the hardware infrastructure that powers them. As the dominant player in the industry with its GPU and CPU-GPU solutions, NVIDIA has reportedly discussed leasing data center space to power its own cloud service for these AI applications. Called NVIDIA DGX Cloud, it would reportedly put the company right up against its clients, which are cloud service providers (CSPs) as well. Companies like Microsoft Azure, Amazon AWS, Google Cloud, and Oracle actively acquire NVIDIA GPUs to power their GPU-accelerated cloud instances. According to the report, this has been in development for a few years.

Additionally, it is worth noting that NVIDIA already owns parts of its potential data center infrastructure. This includes NVIDIA DGX and HGX units, which can simply be interconnected in a data center, with cloud provisioning so developers can access NVIDIA's instances. A major benefit that could attract end users is pricing: NVIDIA acquires GPUs for far less than the CSPs, which buy them with NVIDIA's profit margin built in, so the company could potentially undercut its competitors. This could attract customers and leave hyperscalers like Amazon, Microsoft, and Google without a moat in the cloud game. Of course, until this project is official, we should take this information with a grain of salt.

Microsoft Tech Chief Prefers Using NVIDIA AI GPUs, Keeping Tabs on AMD Alternatives

Kevin Scott, Microsoft's chief technology officer, was interviewed at last week's Code Conference (organized by Vox Media), where he was happy to reveal that his company is having an easier time acquiring Team Green's popular HPC GPU hardware: "Demand was far exceeding the supply of GPU capacity that the whole ecosystem could produce...That is resolving. It's still tight, but it's getting better every week, and we've got more good news ahead of us than bad on that front, which is great." Microsoft is investing heavily in its internal artificial intelligence endeavors and external interests alike (it is a main backer of OpenAI's ChatGPT system). Having a healthy budget certainly helps, but Scott has previously described his experience in this field as "a terrible job" spanning five years of misery (as of May 2023).

Last week's follow-up conversation on-stage in Dana Point, California revealed that conditions have improved since springtime: "It's easier now than when we talked last time." The improved supply circumstances have made his "job of adjudicating these very gnarly conflicts less terrible." Industry reports have Microsoft secretly working on proprietary AI chips with an unnamed partner (CNBC pinpointed Arm as a likely candidate). Scott acknowledged that something is happening behind the scenes, but said it will not be ready imminently: "I'm not confirming anything, but I will say that we've got a pretty substantial silicon investment that we've had for years...And the thing that we will do is we'll make sure that we're making the best choices for how we build these systems, using whatever options we have available. And the best option that's been available during the last handful of years has been NVIDIA."

Newegg Introduces Graphics Card Trade-In Program

Newegg Commerce, Inc., a global e-commerce leader for technology products, today announced the launch of Newegg's GPU Trade-In Program, allowing customers to trade in an eligible graphics card, also known as a graphics processing unit (GPU), and receive a trade-in credit toward the purchase of a new qualifying graphics card.

Newegg's GPU Trade-In Program not only helps customers upgrade to a newer GPU model but also helps limit electronic waste. By offering a resource for customers to exchange their unwanted GPUs for new ones, the program simultaneously contributes to waste reduction and facilitates cost-effective PC upgrades.

Cyberpunk 2077: Phantom Liberty Available Now With NVIDIA DLSS 3.5 and Full Ray Tracing

Cyberpunk 2077: Phantom Liberty is now available on PC with full ray tracing, DLSS 3.5, and Ray Reconstruction. Phantom Liberty is a new spy-thriller adventure for Cyberpunk 2077. When the orbital shuttle of the President of the New United States of America is shot down over the deadliest district of Night City, there's only one person who can save her - you. Become V, a cyberpunk for hire, and dive deep into a tangled web of espionage and political intrigue, unraveling a story that connects the highest echelons of power with the brutal world of black-market mercenaries. If you don't own the expansion, return to the neon lights of Night City in Cyberpunk 2077's 2.0 update, available now, featuring support for NVIDIA DLSS 3.5 and its new Ray Reconstruction technology, enhancing the quality of full ray tracing in Cyberpunk 2077's Ray Tracing: Overdrive Mode.

Crank Ray Tracing Up To Overdrive
Both Cyberpunk 2077 and Cyberpunk 2077: Phantom Liberty feature full ray tracing in the technology preview for the Ray Tracing: Overdrive Mode. Full ray tracing, also known as path tracing, accurately simulates light throughout an entire scene. It is used by visual effects artists to create film and TV graphics that are indistinguishable from reality, but until the arrival of GeForce RTX GPUs with RT Cores, and the AI-powered acceleration of NVIDIA DLSS, real-time video game full ray tracing was impossible because it is extremely GPU intensive.