Wednesday, February 26th 2025

NVIDIA & Partners Will Discuss Supercharging of AI Development at GTC 2025

Generative AI is redefining computing, unlocking new ways to build, train and optimize AI models on PCs and workstations. From content creation and large and small language models to software development, AI-powered PCs and workstations are transforming workflows and enhancing productivity. At GTC 2025, running March 17-21 at the San Jose Convention Center, experts from across the AI ecosystem will share insights on deploying AI locally, optimizing models and harnessing cutting-edge hardware and software to enhance AI workloads—highlighting key advancements in RTX AI PCs and workstations.

Develop and Deploy on RTX
RTX GPUs are built with specialized AI hardware called Tensor Cores that provide the compute performance needed to run the latest and most demanding AI models. These high-performance GPUs can help build digital humans, chatbots, AI-generated podcasts and more. With more than 100 million GeForce RTX and NVIDIA RTX GPU users, developers have a large audience to target when new AI apps and features are deployed. In the session "Build Digital Humans, Chatbots, and AI-Generated Podcasts for RTX PCs and Workstations," Annamalai Chockalingam, senior product manager at NVIDIA, will showcase the end-to-end suite of tools developers can use to streamline development and deploy incredibly fast AI-enabled applications.
Model Behavior
Large language models (LLMs) can be used for an abundance of use cases—and scale to tackle complex tasks like writing code or translating Japanese into Greek. But since they're typically trained with a wide spectrum of knowledge for broad applications, they may not be the right fit for specific tasks, like nonplayer character dialog generation in a video game. Small language models, in contrast, balance capability against size, maintaining accuracy on targeted tasks while running locally on more devices.

In the session "Watch Your Language: Create Small Language Models That Run On-Device," Oluwatobi Olabiyi, senior engineering manager at NVIDIA, will present tools and techniques that developers and enthusiasts can use to generate, curate and distill a dataset—then train a small language model that can perform tasks designed for it.

Advancing Local AI Development
Building, testing and deploying AI models on local infrastructure ensures security and performance even without a connection to cloud-based services. Accelerated with NVIDIA RTX GPUs, Z by HP's AI solutions provide the tools needed to develop AI on premises while maintaining control over data and IP.

Learn more by attending Z by HP's sessions at GTC 2025.

Developers and enthusiasts can get started with AI development on RTX AI PCs and workstations using NVIDIA NIM microservices. Rolling out today, the initial public beta release includes the Llama 3.1 LLM, NVIDIA Riva Parakeet for automatic speech recognition (ASR), and YOLOX for computer vision.
NIM microservices are optimized, prepackaged models for generative AI. They span modalities important for PC development, and are easy to download and connect to via industry-standard application programming interfaces.
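As an illustration of what connecting through industry-standard APIs can look like, the snippet below assumes a NIM LLM microservice is already running locally and exposes an OpenAI-compatible endpoint. The port and model identifier are assumptions based on typical NIM defaults, not details from this announcement.

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed address of a locally running NIM microservice
    api_key="not-needed-for-local-use",   # local endpoints generally ignore the key, but the client requires one
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # assumed model name served by the Llama 3.1 NIM
    messages=[{"role": "user",
               "content": "Summarize what NIM microservices are in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

Because the endpoint follows the same conventions as hosted LLM APIs, existing applications can typically be pointed at the local microservice by changing only the base URL.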

Attend GTC 2025
From the keynote by NVIDIA founder and CEO Jensen Huang to over 1,000 inspiring sessions, 300+ exhibits, technical hands-on training and tons of unique networking events—GTC is set to put a spotlight on AI and all its benefits.
Source: NVIDIA Blog

1 Comment on NVIDIA & Partners Will Discuss Supercharging of AI Development at GTC 2025

#1
Wirko
They need to establish a company called Brownvidia, which would develop and then make small nuclear reactors, about two per week. If they don't want their AI to crash into the ceiling soon, that is.