NVIDIA, in collaboration with Google, today launched optimizations across all NVIDIA AI platforms for Gemma—Google's state-of-the-art new lightweight 2 billion- and 7 billion-parameter open language models that can be run anywhere, reducing costs and speeding innovative work for domain-specific use cases.
Teams from the companies worked closely together to accelerate the performance of Gemma—built from the same research and technology used to create the Gemini models—with NVIDIA TensorRT-LLM, an open-source library for optimizing large language model inference, when running on NVIDIA GPUs in the data center, in the cloud and on PCs with NVIDIA RTX GPUs. This allows developers to target the installed base of over 100 million NVIDIA RTX GPUs available in high-performance AI PCs globally.
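For readers who want to try the model before the optimized path, below is a minimal sketch of loading and running Gemma with the Hugging Face transformers library. The checkpoint name ("google/gemma-2b") and the half-precision/device settings are assumptions, and this plain PyTorch route is only an unoptimized baseline, not the TensorRT-LLM engine path the article describes.
```python
# Minimal sketch: run Gemma 2B with Hugging Face transformers (unoptimized baseline).
# Model id and dtype/device choices are assumptions; TensorRT-LLM would replace this
# with a compiled engine for faster inference on NVIDIA GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # gemma-7b is the larger sibling
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit consumer RTX GPUs
    device_map="auto",          # place layers on the available GPU(s)
)

prompt = "Explain retrieval-augmented generation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```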
Developers can also run Gemma on NVIDIA GPUs in the cloud, including on Google Cloud's A3 instances based on the H100 Tensor Core GPU and soon, NVIDIA's H200 Tensor Core GPUs—featuring 141 GB of HBM3e memory at 4.8 terabytes per second—which Google will deploy this year.
Enterprise developers can additionally take advantage of NVIDIA's rich ecosystem of tools—including NVIDIA AI Enterprise with the NeMo framework and TensorRT-LLM—to fine-tune Gemma and deploy the optimized model in their production application.
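As a rough illustration of what domain-specific fine-tuning can look like, here is a minimal LoRA sketch using the Hugging Face peft, transformers and datasets libraries. This is a generic substitute for, not a depiction of, the NVIDIA AI Enterprise / NeMo workflow mentioned above; the corpus path, target modules and hyperparameters are placeholders.
```python
# Minimal sketch: LoRA fine-tuning of Gemma with Hugging Face peft (generic illustration,
# not the NeMo/TensorRT-LLM enterprise workflow). Dataset path and settings are assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Attach small trainable LoRA adapters to the attention projections.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Toy domain corpus; replace with your own text files or dataset.
data = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gemma-lora", per_device_train_batch_size=1,
                           num_train_epochs=1, fp16=True),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("gemma-lora")  # saves only the small adapter weights
```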
Learn more about how TensorRT-LLM is revving up inference for Gemma, along with additional information for developers. This includes several model checkpoints of Gemma and the FP8-quantized version of the model, all optimized with TensorRT-LLM.
Experience Gemma 2B and Gemma 7B directly from your browser on the NVIDIA AI Playground.
Gemma Coming to Chat With RTX
Adding support for Gemma soon is Chat with RTX, an NVIDIA tech demo that uses retrieval-augmented generation and TensorRT-LLM software to give users generative AI capabilities on their local, RTX-powered Windows PCs.
Chat with RTX lets users personalize a chatbot with their own data by easily connecting local files on a PC to a large language model.
Since the model runs locally, it provides results fast, and user data stays on the device. Rather than relying on cloud-based LLM services, Chat with RTX lets users process sensitive data on a local PC without the need to share it with a third party or have an internet connection.
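To make the retrieval-augmented generation idea concrete, here is a minimal sketch of RAG over local text files using the sentence-transformers library for embeddings. It only illustrates the concept behind Chat with RTX and is not its actual implementation; the folder name, embedding model and chunk size are assumptions.
```python
# Minimal sketch: retrieval-augmented generation over local files (concept only,
# not the Chat with RTX implementation). Paths and model names are assumptions.
from pathlib import Path
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# 1. Index: split local documents into fixed-size chunks and embed them.
chunks = []
for path in Path("my_notes").glob("*.txt"):
    text = path.read_text(encoding="utf-8")
    chunks += [text[i:i + 500] for i in range(0, len(text), 500)]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

# 2. Retrieve: embed the question and take the most similar chunks.
question = "What did I write about GPU memory bandwidth?"
q_vec = embedder.encode([question], normalize_embeddings=True)[0]
top = np.argsort(chunk_vecs @ q_vec)[-3:][::-1]
context = "\n\n".join(chunks[i] for i in top)

# 3. Generate: hand the retrieved context plus the question to a local LLM,
#    e.g. the Gemma model loaded in the earlier sketch.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # feed this prompt to model.generate() from the earlier sketch
```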
View at TechPowerUp Main Site