News Posts matching #CUDA


NVIDIA Cracks Down on CUDA Translation Layers, Changes Licensing Terms

NVIDIA's Compute Unified Device Architecture (CUDA) has long been the de facto standard programming interface for developing GPU-accelerated software. Over the years, NVIDIA has built an entire ecosystem around CUDA, cementing its position as the leading GPU computing and AI manufacturer. However, rivals AMD and Intel have been trying to make inroads with their own open API offerings—ROCm from AMD and oneAPI from Intel. Translation layers promised to ease that transition by letting developers run existing CUDA code on non-NVIDIA GPUs: projects like ZLUDA translate CUDA to ROCm, and Intel's CUDA-to-SYCL tooling aims to do the same for oneAPI. Now, with the release of CUDA 11.5, NVIDIA appears to have cracked down on these translation efforts by modifying its terms of use, according to developer Longhorn on X.

"You may not reverse engineer, decompile or disassemble any portion of the output generated using Software elements for the purpose of translating such output artifacts to target a non-NVIDIA platform," says the CUDA 11.5 terms of service document. The changes don't seem to be technical in nature but rather licensing restrictions. The impact remains to be seen, depending on how much code still requires translation versus running natively on each vendor's API. While CUDA gave NVIDIA a unique selling point, its supremacy has diminished as more libraries work across hardware. Still, the move could slow the adoption of AMD and Intel offerings by making it harder for developers to port existing CUDA applications. As GPU-accelerated computing grows in fields like AI, the battle for developer mindshare between NVIDIA, AMD, and Intel is heating up.

NVIDIA Announces RTX 500 and 1000 Professional Ada Generation Laptop GPUs

With generative AI and hybrid work environments becoming the new standard, nearly every professional, whether a content creator, researcher or engineer, needs a powerful, AI-accelerated laptop to help tackle their industry's toughest challenges - even on the go. The new NVIDIA RTX 500 and 1000 Ada Generation Laptop GPUs will be available in new, highly portable mobile workstations, expanding the NVIDIA Ada Lovelace architecture-based lineup, which includes the RTX 2000, 3000, 3500, 4000 and 5000 Ada Generation Laptop GPUs.

AI is rapidly being adopted to drive efficiencies across professional design and content creation workflows and everyday productivity applications, underscoring the importance of having powerful local AI acceleration and sufficient processing power in systems. The next generation of mobile workstations with Ada Generation GPUs, including the RTX 500 and 1000 GPUs, will include both a neural processing unit (NPU), a component of the CPU, and an NVIDIA RTX GPU, which includes Tensor Cores for AI processing. The NPU helps offload light AI tasks, while the GPU provides up to an additional 682 TOPS of AI performance for more demanding day-to-day AI workflows.

NVIDIA Accelerates Quantum Computing Exploration at Australia's Pawsey Supercomputing Centre

NVIDIA today announced that Australia's Pawsey Supercomputing Research Centre will add the NVIDIA CUDA Quantum platform accelerated by NVIDIA Grace Hopper Superchips to its National Supercomputing and Quantum Computing Innovation Hub, furthering its work driving breakthroughs in quantum computing.

Researchers at the Perth-based center will leverage CUDA Quantum - an open-source hybrid quantum computing platform that features powerful simulation tools and capabilities to program hybrid CPU, GPU and QPU systems - as well as the NVIDIA cuQuantum software development kit of optimized libraries and tools for accelerating quantum computing workflows. The NVIDIA Grace Hopper Superchip - which combines the NVIDIA Grace CPU and Hopper GPU architectures - provides extreme performance to run high-fidelity and scalable quantum simulations on accelerators and seamlessly interface with future quantum hardware infrastructure.

AMD Develops ROCm-based Solution to Run Unmodified NVIDIA CUDA Binaries on AMD Graphics

AMD has quietly funded an effort over the past two years to enable binary compatibility for NVIDIA CUDA applications on their ROCm stack. This allows CUDA software to run on AMD Radeon GPUs without adapting the source code. The project responsible is ZLUDA, which was initially developed to provide CUDA support on Intel graphics. The developer behind ZLUDA, Andrzej Janik, was contracted by AMD in 2022 to adapt his project for use on Radeon GPUs with HIP/ROCm. He spent two years bringing functional CUDA support to AMD's platform, allowing many real-world CUDA workloads to run without modification. AMD decided not to productize this effort for unknown reasons but did open-source it once funding ended, per their agreement. Over at Phoronix, AMD's ZLUDA implementation was put through a wide variety of benchmarks.

Benchmarks found that proprietary CUDA renderers and software worked on Radeon GPUs out-of-the-box with the drop-in ZLUDA library replacements. CUDA-optimized Blender 4.0 rendering now runs faster on AMD Radeon GPUs than the native ROCm/HIP port, reducing render times by around 10-20%, depending on the scene. The implementation is surprisingly robust, considering it was a single-developer project. However, there are some limitations—OptiX and PTX assembly code are not yet fully supported. Overall, though, testing showed very promising results. Compared with the generic OpenCL runtimes in Geekbench, CUDA-optimized binaries produce up to 75% better results. With the ZLUDA libraries handling API translation, unmodified CUDA binaries can now run directly on top of ROCm and Radeon GPUs. Strangely, the ZLUDA port targets AMD ROCm 5.7, not the newest 6.x versions. Only time will tell if AMD continues investing in this approach to simplify porting of CUDA software. However, the open-sourced project now enables anyone to contribute and help improve compatibility. For a complete review, check out Phoronix tests.
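To illustrate what "unmodified" means in practice, below is a minimal, generic CUDA C++ sketch of the sort of program ZLUDA is reported to handle: it is written and built against NVIDIA's CUDA runtime only, with no AMD-specific code. Per the Phoronix findings summarized above, such a binary can then run on a Radeon GPU with ZLUDA's drop-in libraries standing in for NVIDIA's at load time; the exact library names and setup steps are ZLUDA-specific details not covered here.

#include <cstdio>
#include <cuda_runtime.h>

// Plain CUDA kernel: element-wise vector addition.
__global__ void vecAdd(const float* a, const float* b, float* c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Standard CUDA runtime calls - the only API this program knows about.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);  // expected: 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}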

Intel Open Image Denoise v2.2 Adds Metal Support & AArch64 Improvements

An Open Image Denoise 2.2 release candidate was released earlier today—as discovered by Phoronix's founder and principal writer, Michael Larabel. Intel's dedicated website has not been updated with any new documentation or changelogs (at the time of writing), but a GitHub release page shows all of the crucial information. Team Blue's open-source oneAPI stack has been kept up-to-date with the latest technologies—not limited to Intel's own stable of Xe-LP, Xe-HPG and Xe-HPC components—and the Phoronix article highlights updated support on competing platforms. The v2.2 preview adds support for Meteor Lake's integrated Arc graphics solution, plus additional "denoising quality enhancements and other improvements."

Non-Intel platform improvements include updates for Apple's M-series chipsets, AArch64 processors, and NVIDIA CUDA. According to Phoronix, OIDn 2.2-rc "adds Metal device support for Apple Silicon GPUs on recent versions of macOS. OIDn has already been supporting ARM64/AArch64 for Apple Silicon CPUs while now Open Image Denoise has extended that AArch64 support to work on Windows and Linux too. There is better performance in general for Open Image Denoise on CPUs with this forthcoming release." The changelog also highlights a general performance improvement across processors, and a fix that resolves a crash "when releasing a buffer after releasing the device."
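For context on what the library does inside an application, here is a minimal sketch of Open Image Denoise's C API (called from C++) denoising a single HDR color image. It follows the pattern from the project's public documentation, but the buffer handling and error checking are simplified, and backend-specific options (CUDA, Metal, SYCL) are omitted; treat it as an illustration rather than a complete integration.

#include <OpenImageDenoise/oidn.h>
#include <cstdio>
#include <vector>

int main()
{
    const int width = 1920, height = 1080;
    std::vector<float> color(width * height * 3);   // noisy beauty pass (RGB)
    std::vector<float> output(width * height * 3);  // denoised result

    // Create and commit a device; DEFAULT picks the best available backend.
    OIDNDevice device = oidnNewDevice(OIDN_DEVICE_TYPE_DEFAULT);
    oidnCommitDevice(device);

    // "RT" is the generic ray-tracing denoising filter.
    OIDNFilter filter = oidnNewFilter(device, "RT");
    oidnSetSharedFilterImage(filter, "color", color.data(),
                             OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
    oidnSetSharedFilterImage(filter, "output", output.data(),
                             OIDN_FORMAT_FLOAT3, width, height, 0, 0, 0);
    oidnCommitFilter(filter);
    oidnExecuteFilter(filter);

    // Report any error raised during filter execution.
    const char* message;
    if (oidnGetDeviceError(device, &message) != OIDN_ERROR_NONE)
        printf("OIDN error: %s\n", message);

    oidnReleaseFilter(filter);
    oidnReleaseDevice(device);
    return 0;
}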

Aetina Introduces New MXM GPUs Powered by NVIDIA Ada Lovelace for Enhanced AI Capabilities at the Edge

Aetina, a leading global Edge AI solution provider, announces the release of its new embedded MXM GPU series utilizing the NVIDIA Ada Lovelace architecture - MX2000A-VP, MX3500A-SP, and MX5000A-WP. Designed for real-time ray tracing and AI-based neural graphics, this series significantly enhances GPU performance, delivering outstanding gaming and creative, professional graphics, AI, and compute performance. It provides the ultimate AI processing and computing capabilities for applications in smart healthcare, autonomous machines, smart manufacturing, and commercial gaming.

The global GPU (graphics processing unit) market is expected to achieve a 34.4% compound annual growth rate from 2023 to 2028, with advancements in the artificial intelligence (AI) industry being a key driver of this growth. As the trend of AI applications expands from the cloud to edge devices, many businesses are seeking to maximize AI computing performance within minimal devices due to space constraints in deployment environments. Aetina's latest embedded MXM modules - MX2000A-VP, MX3500A-SP, and MX5000A-WP, adopting the NVIDIA Ada Lovelace architecture, not only make significant breakthroughs in performance and energy efficiency but also enhance the performance of ray tracing and AI-based neural graphics. The modules, with their compact design, efficiently save space, thereby opening up more possibilities for edge AI devices.

NVIDIA GeForce RTX 4080 SUPER GPUs Pop Up in Geekbench Browser

We are well aware that NVIDIA GeForce RTX 4080 SUPER graphics cards are next up on the review table (January 31)—TPU's W1zzard has so far toiled away on getting his evaluations published on time for options further down the Ada Lovelace SUPER food chain. This process was interrupted briefly by the appearance of custom Radeon RX 7600 XT models, but attention soon returned to another batch of GeForce RTX 4070 Ti SUPER cards. Reviewers are already toying around with driver-enabled GeForce RTX 4080 SUPER sample units—under strict confidentiality conditions—but the occasional leak is expected to happen. The appropriately named Benchleaks social media account has kept track of emerging test results.

The Geekbench Browser database was updated earlier today with premature GeForce RTX 4080 SUPER GPU test results—one entry highlighted by Benchleaks provides a quick look at the card's prowess in three of Geekbench 5.1's graphics API trials: Vulkan, CUDA and OpenCL. VideoCardz points out that all of the scores could be fundamentally flawed; in particular the Vulkan result of 100378 points—the regular (non-SUPER) GeForce RTX 4080 GPU can achieve almost double that figure in Geekbench 6. The SUPER's other results included a Geekbench 5 CUDA score of 309554 and 264806 points in OpenCL. A late morning entrant looks to be hitting the right mark—an ASUS testbed (PRIME Z790-A WIFI + Intel Core i9-13900KF) managed to score 210551 points in Geekbench 6.2.2 Vulkan.

Possible NVIDIA GeForce RTX 3050 6 GB Edition Specifications Appear

Alleged full specifications leaked for NVIDIA's upcoming GeForce RTX 3050 6 GB graphics card show extensive reductions beyond merely shrinking memory size versus the 8 GB model. If accurate, performance could lag the existing RTX 3050 8 GB SKU by up to 25%, making it weaker competition even for AMD's budget RX 6500 XT. Previous rumors suggested the two 3050 variants would differ only in capacity and bandwidth on a partially disabled memory bus, dropping from 8 GB on a 128-bit bus to 6 GB on a 96-bit bus. But the leaked specs indicate CUDA core counts, clock speeds, and TDP all see cuts for the upcoming 6 GB version. With 18 SMs and 2304 cores rather than 20 SMs and 2560 cores, at lower base and boost frequencies, the impact looks more severe than expected. A 70 W TDP does allow passive cooling but hurts performance versus the 3050 8 GB's 130 W design.

Some napkin math suggests the 3050 6 GB could deliver only 75% of its elder sibling's frame rates, putting it more in line with the entry-level 6500 XT. While having 50% more VRAM helps, dramatic core and clock downgrades counteract that memory advantage. According to rumors, the RTX 3050 6 GB is set to launch in February, bringing lower-end Ampere to even more budget-focused builders. But with specifications seemingly hobbled beyond just capacity, its real-world gaming value remains to be determined. NVIDIA likely intends the RTX 3050 6 GB primarily for less demanding esports titles. Given the scale of the cutbacks and the recommended specifications of modern AAA titles, mainstream AAA gaming performance seems improbable.
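As a rough sanity check on that napkin math, the sketch below recomputes relative shader throughput from the leaked figures quoted above. The clock speeds are placeholder assumptions (the leak only says they are lower), so the output is illustrative rather than a performance claim.

#include <cstdio>

int main()
{
    // Leaked core counts from the article.
    const double cores_8gb = 2560.0;   // RTX 3050 8 GB: 20 SMs
    const double cores_6gb = 2304.0;   // RTX 3050 6 GB: 18 SMs

    // Hypothetical boost clocks (GHz) - placeholders, not from the leak.
    const double clock_8gb = 1.78;
    const double clock_6gb = 1.47;

    // Relative FP32 shader throughput = cores x clock, normalized to the 8 GB card.
    const double relative = (cores_6gb * clock_6gb) / (cores_8gb * clock_8gb);
    printf("Estimated relative throughput: %.0f%%\n", relative * 100.0);
    // With these assumed clocks the ratio lands near 74%, in the same
    // ballpark as the ~75% frame-rate estimate above.
    return 0;
}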

No Overclocking and Lower TGP for NVIDIA GeForce RTX 4090 D Edition for China

NVIDIA is preparing to launch the GeForce RTX 4090 D, or "Dragon" edition, designed explicitly for China. Circumventing the US export rules on GPUs that could potentially be used for AI acceleration, the GeForce RTX 4090 D is reportedly cutting back on overclocking as a feature. According to BenchLife, the AD102-250 GPU used in the RTX 4090 D will not support overclocking at all, with the capability possibly disabled in firmware and/or physically in the die. The information from @Zed__Wang suggests that the Dragon version will be running at a 2280 MHz base frequency, higher than the 2235 MHz of the AD102-300 found in the regular RTX 4090, and a 2520 MHz boost, matching the regular version.

Interestingly, the RTX 4090 D for China will also feature a slightly lower Total Graphics Power (TGP) of 425 Watts, down from the 450 Watts of the regular model. With the memory configuration appearing to be the same, this new China-specific model will most likely perform within a few percent of the original design. The higher base frequency probably indicates that a few CUDA cores have been disabled to comply with the US export regulation policy while still serving the Chinese GPU market. The NVIDIA GeForce RTX 4090 D is scheduled for rollout in January 2024 in China, which is just a few weeks away.

NVIDIA and AMD Deliver Powerful Workstations to Accelerate AI, Rendering and Simulation

To enable professionals worldwide to build and run AI applications right from their desktops, NVIDIA and AMD are powering a new line of workstations equipped with NVIDIA RTX Ada Generation GPUs and AMD Ryzen Threadripper PRO 7000 WX-Series CPUs. Bringing together the highest levels of AI computing, rendering and simulation capabilities, these new platforms enable professionals to efficiently tackle the most resource-intensive, large-scale AI workflows locally.

Bringing AI Innovation to the Desktop
Advanced AI tasks typically require data-center-level performance. Training a large language model with a trillion parameters, for example, takes thousands of GPUs running for weeks, though research is underway to reduce model size and enable model training on smaller systems while still maintaining high levels of AI model accuracy. The new NVIDIA RTX GPU and AMD CPU-powered AI workstations provide the power and performance required for training such smaller models, as well as for local fine-tuning, helping to offload data center and cloud resources for AI development tasks. The devices let users select single- or multi-GPU configurations as required for their workloads.

NVIDIA Lends Support to Washington's Efforts to Ensure AI Safety

In an event at the White House today, NVIDIA announced support for voluntary commitments that the Biden Administration developed to ensure advanced AI systems are safe, secure and trustworthy. The news came the same day NVIDIA's chief scientist, Bill Dally, testified before a U.S. Senate subcommittee seeking input on potential legislation covering generative AI. Separately, NVIDIA founder and CEO Jensen Huang will join other industry leaders in a closed-door meeting on AI Wednesday with the full Senate.

Seven companies including Adobe, IBM, Palantir and Salesforce joined NVIDIA in supporting the eight agreements the Biden-Harris administration released in July with support from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.

NVIDIA GH200 Superchip Aces MLPerf Inference Benchmarks

In its debut on the MLPerf industry benchmarks, the NVIDIA GH200 Grace Hopper Superchip ran all data center inference tests, extending the leading performance of NVIDIA H100 Tensor Core GPUs. The overall results showed the exceptional performance and versatility of the NVIDIA AI platform from the cloud to the network's edge. Separately, NVIDIA announced inference software that will give users leaps in performance, energy efficiency and total cost of ownership.

GH200 Superchips Shine in MLPerf
The GH200 links a Hopper GPU with a Grace CPU in one superchip. The combination provides more memory, bandwidth and the ability to automatically shift power between the CPU and GPU to optimize performance. Separately, NVIDIA HGX H100 systems that pack eight H100 GPUs delivered the highest throughput on every MLPerf Inference test in this round. Grace Hopper Superchips and H100 GPUs led across all MLPerf's data center tests, including inference for computer vision, speech recognition and medical imaging, in addition to the more demanding use cases of recommendation systems and the large language models (LLMs) used in generative AI.

NVIDIA CEO Meets with India Prime Minister Narendra Modi

Underscoring NVIDIA's growing relationship with the global technology superpower, Indian Prime Minister Narendra Modi met with NVIDIA founder and CEO Jensen Huang Monday evening. The meeting at 7 Lok Kalyan Marg—as the Prime Minister's official residence in New Delhi is known—comes as Modi prepares to host a gathering of leaders from the G20 group of the world's largest economies, including U.S. President Joe Biden, later this week.

"Had an excellent meeting with Mr. Jensen Huang, the CEO of NVIDIA," Modi said in a social media post. "We talked at length about the rich potential India offers in the world of AI." The event marks the second meeting between Modi and Huang, highlighting NVIDIA's role in the country's fast-growing technology industry.

Strong Cloud AI Server Demand Propels NVIDIA's FY2Q24 Data Center Business to Surpass 76% for the First Time

NVIDIA's latest financial report for FY2Q24 reveals that its data center business reached US$10.32 billion—a QoQ growth of 141% and YoY increase of 171%. The company remains optimistic about its future growth. TrendForce believes that the primary driver behind NVIDIA's robust revenue growth stems from its data center's AI server-related solutions. Key products include AI-accelerated GPUs and AI server HGX reference architecture, which serve as the foundational AI infrastructure for large data centers.

TrendForce further anticipates that NVIDIA will integrate its software and hardware resources. Utilizing a refined approach, NVIDIA will align its high-end, mid-tier, and entry-level GPU AI accelerator chips with various ODMs and OEMs, establishing a collaborative system certification model. Beyond accelerating the deployment of CSP cloud AI server infrastructures, NVIDIA is also partnering with entities like VMware on solutions including the Private AI Foundation. This strategy extends NVIDIA's reach into the edge enterprise AI server market, underpinning steady growth in its data center business for the next two years.

PNY Announces Availability of New NVIDIA Ada Lovelace Workstation GPUs

PNY Technologies today announced it is now offering the latest NVIDIA RTX Ada Generation GPUs - the NVIDIA RTX 5000, NVIDIA RTX 4500 and NVIDIA RTX 4000 high-performance workstation graphics cards and the NVIDIA L40S GPU for data centers. These new GPUs are now available to order from PNY.

Joining the NVIDIA RTX 6000 Ada Generation and NVIDIA RTX 4000 SFF Ada Generation, the NVIDIA RTX 5000, NVIDIA RTX 4500 and NVIDIA RTX 4000 high-performance GPUs are based on the powerful and ultra-efficient NVIDIA Ada Lovelace architecture, making them ideal for real-time ray tracing, physically accurate simulation, neural graphics, and generative AI. These GPUs combine the latest-gen RT Cores, Tensor Cores, and CUDA cores with large GPU memory to offer unprecedented performance for creators and professionals, empowering them to unleash their imagination while maximizing productivity. Turnkey HW + Sync bundles are also available (NVIDIA RTX 5000 + HW Sync, NVIDIA RTX 4500 + HW Sync, NVIDIA RTX 4000 + HW Sync).

NVIDIA and Global Workstation Manufacturers Bring New NVIDIA RTX Workstations

NVIDIA and global manufacturers today announced powerful new NVIDIA RTX workstations designed for development and content creation in the age of generative AI and digitalization. The systems, including those from BOXX, Dell Technologies, HP and Lenovo, are based on NVIDIA RTX 6000 Ada Generation GPUs and incorporate NVIDIA AI Enterprise and NVIDIA Omniverse Enterprise software.

Separately, NVIDIA also released three new desktop workstation Ada Generation GPUs - the NVIDIA RTX 5000, RTX 4500 and RTX 4000 - to deliver the latest AI, graphics and real-time rendering technology to professionals worldwide. "Few workloads are as challenging as generative AI and digitalization applications, which require a full-stack approach to computing," said Bob Pette, vice president of professional visualization at NVIDIA. "Professionals can now tackle these on a desktop with the latest NVIDIA-powered RTX workstations, enabling them to build vast, digitalized worlds in the new age of generative AI."

NVIDIA GeForce GTX 1650 is Still the Most Popular GPU in the Steam Hardware Survey

NVIDIA GeForce GTX 1650 was released more than four years ago. With its TU117 graphics processor, it features 896 CUDA cores, 56 texture mapping units, and 32 ROPs. NVIDIA has paired 4 GB of GDDR5 memory with the GeForce GTX 1650, connected using a 128-bit memory interface. Interestingly, according to the latest Steam Hardware Survey results, this GPU still remains the most popular choice among gamers. While the exact number of survey participants is unknown, it is fair to assume that a large group takes part every month. The latest numbers for June 2023 indicate that the GeForce GTX 1650 is still the number one GPU, with 5.50% of users having that GPU. The second closest one was the GeForce RTX 3060, with 4.60%.

Other information in the survey remains similar, with CPUs mostly ranging from 2.3 GHz to 2.69 GHz in frequency and with six cores and twelve threads. Storage also recorded a small bump, with capacities over 1 TB up 1.48%, indicating that gamers are buying larger drives as game sizes get bigger.

NVIDIA H100 Hopper GPU Tested for Gaming, Slower Than Integrated GPU

NVIDIA's H100 Hopper GPU is a device designed for pure AI and other compute workloads, with the least amount of consideration for gaming workloads that involve graphics processing. However, it is still interesting to see how this 30,000 USD GPU fares in comparison to other gaming GPUs and whether it is even possible to run games on it. It turns out that it is technically feasible but doesn't make much sense, as the Chinese YouTube channel Geekerwan notes. Based on the GH100 GPU SKU with 14,592 CUDA cores, the H100 PCIe version tested here can achieve 204.9 TeraFLOPS at FP16, 51.22 TeraFLOPS at FP32, and 25.61 TeraFLOPS at FP64, with its natural strength lying in accelerating AI workloads.

However, how does it fare in gaming benchmarks? Not very well, as the testing shows. It scored 2681 points in 3DMark Time Spy, which is lower than AMD's integrated Radeon 680M, which managed to score 2710 points. Interestingly, the GH100 has only 24 ROPs (render output units), while the gaming-oriented GA102 (highest-end gaming GPU SKU) has 112 ROPs. This is self-explanatory and provides a clear picture as to why the H100 GPU is used for computing only. Since it doesn't have any display outputs, the system needed another regular GPU to provide the picture, while the computation happened on the H100 GPU.

Gigabyte Launches the AORUS RTX 4090 GAMING BOX

GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of premium gaming hardware, today launched its top-grade water-cooled external graphics solution - the AORUS RTX 4090 GAMING BOX. The AORUS RTX 4090 GAMING BOX pairs the most powerful NVIDIA Ada Lovelace architecture graphics card - the GeForce RTX 4090 - with the Thunderbolt 3 high-speed transmission interface. It endows ultrabooks with 3D computational performance beyond imagination, transforming them into gaming platforms with full ray tracing and a reliable assistant for creators, delivering an unprecedented work-efficiency experience. In addition, the AORUS WATERFORCE Cooling System is the only solution that combines performance and comfort, allowing users to enjoy a quiet and comfortable environment while handling heavy work.

The AORUS RTX 4090 GAMING BOX is the top-of-the-line water-cooled external graphics box on the market. It lets users enjoy top-level GeForce RTX 4090 performance with an independent, high-wattage and stable power supply in a quiet and comfortable environment. AORUS has minimized the size of the GAMING BOX, taking up minimal desktop space, making it the ideal companion for ultrabooks.

NVIDIA GeForce RTX 4070 Variant Could be Refreshed With AD103 GPU

Hardware tipster kopite7kimi has learned from insider sources that a variant of NVIDIA's GeForce RTX 4070 graphics card could be lined up with a different GPU - the AD103 instead of the currently utilized AD104-derived AD104-250-A1. The Ada Lovelace architecture is a staple across the RTX 40-series of graphics cards, but a fully unlocked AD103 is not yet attached to any product on the market - it would be a strange move for NVIDIA to refresh or expand the mid-range RTX 4070 lineup with a much larger GPU, albeit in a reduced form. A cut-down variant of the AD103 is currently housed within NVIDIA's GeForce RTX 4080 graphics card - its AD103-300-A1 GPU has 9728 CUDA cores, with Team Green's engineers having chosen to disable 5% of the full chip's capabilities.

The hardware boffins will need to do a lot of pruning if the larger GPU ends up in the rumored RTX 4070 sort-of upgrade - the SKU's 5,888 CUDA core count would require a roughly 42% reduction in GPU potency. It is somewhat curious that the RTX 4070 Ti has not been mentioned by the tipster - you would think that the more powerful card (than the standard 4070) would be the logical and immediate candidate for this type of treatment. In theory, NVIDIA could be re-purposing dies that do not meet RTX 4080-level standards, salvaging rejected silicon for step-down card models.
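The roughly 42% figure can be reproduced with quick arithmetic. Note that the 10,240-core count for a fully enabled AD103 is a commonly cited specification, not something stated in the report above.

#include <cstdio>

int main()
{
    const double ad103_full   = 10240.0;  // fully enabled AD103 (assumed spec, not from the report)
    const double rtx4080_used = 9728.0;   // AD103-300-A1 in the RTX 4080, per the article
    const double rtx4070_need = 5888.0;   // CUDA core count of the RTX 4070 SKU

    // Share of the full die the RTX 4080 keeps enabled (about 95%, i.e. 5% disabled).
    printf("RTX 4080 uses %.1f%% of the full AD103\n", 100.0 * rtx4080_used / ad103_full);

    // Reduction needed to turn a full AD103 into the RTX 4070 configuration (about 42%).
    printf("RTX 4070 config needs a %.1f%% cut\n", 100.0 * (1.0 - rtx4070_need / ad103_full));
    return 0;
}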

NVIDIA RTX 5000 Ada Generation Workstation GPU Mentioned in Official Driver Documents

NVIDIA's rumored RTX 5000 Ada Generation GPU has been outed once again, according to VideoCardz - the cited source being a keen-eyed member posting information dumps on a laptop discussion forum. Team Green has released new driver documentation that makes mention of hardware ID "26B2" under an entry for a now supported device: "NVIDIA RTX 5000 Ada Generation." Forum admin StefanG3D posted the small discovery on their favored forum in the small hours of Sunday morning (April 23).

As reported last month, the NVIDIA RTX 5000 Ada is destined to sit between existing sibling workstation GPUs - the AD102-based RTX 6000 and AD104-based RTX 4000 SFF. Hardware tipster kopite7kimi has learned enough to theorize that the NVIDIA RTX 5000 Ada Generation workstation graphics card will feature 15,360 CUDA cores and 32 GB of GDDR6 memory. The AD102 GPU is expected to sit at the heart of this unannounced card.

Square Enix Unearths Old Crime Puzzler - The Portopia Serial Murder Case, Remaster Features AI Interaction

At the turn of the 1980s, most PC adventure games were played using only the keyboard. In those days, adventure games didn't use action menus like more modern games, but simply presented the player with a command line where they could freely input text to decide the actions that characters would take and proceed through the story. Free text input systems like these allowed players to feel a great deal of freedom. However, they did come with one common source of frustration: players knowing what action they wanted to perform but being unable to do so because they could not find the right wording. This problem was caused by the limitations of PC performance and NLP technology of the time.

40 years have passed since then, and PC performance has drastically improved, as have the capabilities of NLP technology. Using "The Portopia Serial Murder Case" as a test case, we'd like to show you the capabilities of modern NLP and the impact it can have on adventure games, as well as deepen your understanding of NLP technologies.

NVIDIA's Tiny RTX 4000 Ada Lovelace Graphics Card is now Available

NVIDIA has begun selling its compact RTX 4000 Ada Lovelace graphics card, offering GeForce RTX 3070-like performance at a mere 70 W power consumption, allowing it to fit in almost all desktop PCs. The low-profile, dual-slot board is priced higher than the RTX 4080 as it targets professional users, but it can still be used in a regular gaming computer. PNY's RTX 4000 Ada generation graphics card is the first to reach consumer shelves, currently available for $1,444 at ShopBLT, a retailer known for obtaining hardware before its competitors. The card comes with four Mini-DisplayPort connectors, so an additional mDP-DP or mDP-HDMI adapter must be factored into the cost.

The NVIDIA RTX 4000 SFF Ada Generation board features an AD104 GPU with 6,144 CUDA cores, 20 GB of GDDR6 ECC memory, and a 160-bit interface. With a fixed boost frequency of around 1560 MHz to reduce overall board power consumption, the GPU is rated for just 70 Watts of power. To emphasize the efficiency, this card requires no external PCIe power connector, as all the juice is fed through the PCIe slot. The AD104 graphics processor in this configuration delivers a peak FP32 performance of 19.2 TFLOPS, comparable to the GeForce RTX 3070. The 20 GB of memory makes the card more valuable for professionals and AI researchers needing compact solutions. Although the card's performance is overshadowed by the recently launched GeForce RTX 4070, the RTX 4000 SFF Ada's professional drivers, support for professional software from ISVs, and additional features make it a strong contender in the semi-professional market. Availability and pricing are expected to improve in the coming weeks as the card becomes more widely accessible.
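The quoted 19.2 TFLOPS figure follows directly from the core count and clock listed above, assuming the usual two FP32 operations (a fused multiply-add) per CUDA core per clock, as the quick check below shows.

#include <cstdio>

int main()
{
    const double cuda_cores = 6144.0;   // RTX 4000 SFF Ada, per the article
    const double boost_ghz  = 1.56;     // ~1560 MHz fixed boost clock

    // Peak FP32 throughput = cores x 2 FLOPs (FMA) x clock, in TFLOPS.
    const double tflops = cuda_cores * 2.0 * boost_ghz / 1000.0;
    printf("Peak FP32: %.1f TFLOPS\n", tflops);  // ~19.2 TFLOPS, matching the quoted figure
    return 0;
}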


AMD Brings ROCm to Consumer GPUs on Windows OS

AMD has published an exciting development for its Radeon Open Compute Ecosystem (ROCm) users today. Now, ROCm is coming to the Windows operating system, and the company has extended ROCm support to consumer graphics cards instead of only supporting professional-grade GPUs. This development is essential for making AMD's GPU family more competitive with NVIDIA and its CUDA-accelerated GPUs. For those unaware, AMD ROCm is a software stack designed for GPU programming. Similarly to NVIDIA's CUDA, ROCm is designed for AMD GPUs and was historically limited to Linux-based OSes and GFX9, CDNA, and professional-grade RDNA GPUs.
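For readers unfamiliar with how the two stacks relate, the fragment below is a generic illustration (not taken from AMD's documentation) of how closely ROCm's HIP API mirrors the CUDA runtime API; the HIP equivalents are noted in the comments, and the near 1:1 naming is what makes automated porting tools practical.

#include <cuda_runtime.h>        // HIP: #include <hip/hip_runtime.h>

__global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // same built-ins exist in HIP
    if (i < n)
        data[i] *= factor;
}

void runOnGpu(float* host, int n)
{
    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));                           // HIP: hipMalloc
    cudaMemcpy(dev, host, n * sizeof(float),
               cudaMemcpyHostToDevice);                            // HIP: hipMemcpy, hipMemcpyHostToDevice
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);                 // HIP supports the same launch syntax
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);                                                 // HIP: hipFree
}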

However, according to documents obtained by Tom's Hardware (which are behind a login wall), AMD has brought ROCm support to the Radeon RX 6900 XT, Radeon RX 6600, and R9 Fury GPUs. What is interesting is not the inclusion of the RX 6900 XT and RX 6600 but the support for the R9 Fury, an eight-year-old graphics card. Also notable is that, out of these three GPUs, only the R9 Fury has full ROCm support, while the RX 6900 XT has HIP SDK support and the RX 6600 has only HIP runtime support. And to make matters even more complicated, the consumer-grade R9 Fury GPU has full ROCm support only on Linux and not Windows. The reason for this strange selection of support has yet to be discovered. However, it is a step in the right direction, as AMD still needs to enable more functionality on Windows and on more consumer GPUs to compete with NVIDIA.

ADLINK Launches Portable GPU Accelerator with NVIDIA RTX A500

ADLINK Technology Inc., a global leader in edge computing, today launched Pocket AI - the first ever ultra-portable GPU accelerator to offer exceptional power at a cost-effective price point. With hardware and software compatibility, Pocket AI is the perfect tool to boost performance and productivity. It provides plug-and-play scalability from development to deployment for AI developers, professional graphics users and embedded industrial applications.

Pocket AI is a simple, reliable route to impressive GPU acceleration at a fraction of the cost of a laptop with equivalent GPU power. Its many benefits include a perfect power/performance balance from the NVIDIA RTX A500 GPU; high functionality driven by NVIDIA CUDA-X and accelerated libraries; quick, easy delivery/power via the Thunderbolt 3 interface and USB PD; and compatibility supported by NVIDIA developer tools. For the ultimate portability, the Pocket AI is compact and lightweight, measuring approximately 106 x 72 x 25 mm and weighing 250 grams.