News Posts matching #service


QNAP Launches myQNAPcloud One Beta: Shared Cloud Storage for NAS Backups

QNAP Systems, Inc., a leading computing, networking, and storage solution innovator, today announced the launch of myQNAPcloud One Beta, a subscription-based unified cloud storage solution. The service combines an advanced version of myQNAPcloud Storage, tailored for NAS backups, with myQNAPcloud Object, a newly introduced object storage service. Users can flexibly allocate their subscribed storage capacity across both services based on actual usage, eliminating the need for separate subscriptions.

Introducing myQNAPcloud Object
myQNAPcloud Object meets the growing demand for scalable cloud storage. This S3-compatible solution supports a wide range of applications, including data lakes, long-term archiving, and backups. Compatible with existing AWS S3 workflows, myQNAPcloud Object provides a seamless migration experience. Businesses can take advantage of immutable data storage with the object locking feature, while bucket versioning enables easy restoration of previous versions, safeguarding data even after changes. Data access logs help businesses meet security management, troubleshooting, and auditing requirements.
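In practice, S3-compatible versioning means every overwrite of a key creates a new, restorable version rather than destroying the old data. The sketch below models those semantics in memory purely for illustration (the class and method names are invented here, not part of myQNAPcloud Object); against a real S3-compatible endpoint the same flow would go through an S3 client instead.

```python
import itertools

class VersionedBucket:
    """Toy in-memory model of S3-style bucket versioning (illustrative only)."""

    def __init__(self):
        self._versions = {}           # key -> list of (version_id, data)
        self._ids = itertools.count(1)

    def put(self, key, data):
        # Every write appends a new version; nothing is overwritten in place.
        vid = f"v{next(self._ids)}"
        self._versions.setdefault(key, []).append((vid, data))
        return vid

    def get(self, key, version_id=None):
        history = self._versions[key]
        if version_id is None:
            return history[-1][1]     # default read returns the latest version
        return dict(history)[version_id]

bucket = VersionedBucket()
v1 = bucket.put("report.csv", b"original")
bucket.put("report.csv", b"corrupted by mistake")
# The latest read returns the bad write, but the old version is still restorable.
assert bucket.get("report.csv") == b"corrupted by mistake"
assert bucket.get("report.csv", version_id=v1) == b"original"
```

This is the property the press release highlights: an accidental or malicious overwrite never destroys the prior version, so restoration is a matter of reading an older version ID.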

Shadow Launches Neo: The Next Generation Cloud Gaming PC

SHADOW, the global leader in high-performance cloud computing, is proud to announce the launch of Neo, a brand-new cloud gaming PC offering designed to deliver next-level RTX experiences for gamers, creators, and professionals alike. Neo will officially roll out in Europe and North America starting June 16, 2025.

Building on the success of the company's previous offers, Neo replaces its widely adopted "Boost" tier and delivers major performance leaps—up to 150% more in gaming and 200% more in pro software performance. All existing Boost users are being upgraded to Neo at no additional cost, while rates for new users will start at $37.99 per month.

Europe Builds AI Infrastructure With NVIDIA to Fuel Region's Next Industrial Transformation

NVIDIA today announced it is working with European nations and technology and industry leaders to build NVIDIA Blackwell AI infrastructure that will strengthen digital sovereignty, support economic growth, and position the continent as a leader in the AI industrial revolution. France, Italy, Spain and the U.K. are among the nations building domestic AI infrastructure with an ecosystem of technology and cloud providers, including Domyn, Mistral AI, Nebius and Nscale, and telecommunications providers, including Orange, Swisscom, Telefónica and Telenor.

These deployments will deliver more than 3,000 exaflops of NVIDIA Blackwell compute resources for sovereign AI, enabling European enterprises, startups and public sector organizations to securely develop, train and deploy agentic and physical AI applications. NVIDIA is establishing and expanding AI technology centers in Germany, Sweden, Italy, Spain, the U.K. and Finland. These centers build on NVIDIA's history of collaborating with academic institutions and industry through the NVIDIA AI Technology Center program and NVIDIA Deep Learning Institute to develop the AI workforce and scientific discovery throughout the regions.

Funcom Details Dune: Awakening's Rentable Private Server System

Greetings, soon-to-be-awakened! Today, just about 72 hours before the floodgates open, we can finally share that rentable private servers will be available from the head start launch on June 5th! We previously communicated that private servers would come post-launch, but we're happy to share that progress has been faster than expected. We do, however, want to manage expectations about how private servers work in Dune: Awakening. As you know, this is not your typical survival game.

Why private servers work differently in Dune: Awakening
Dune: Awakening is powered by a unique server and world structure, something we went in-depth on in a recent blog post. In short: each server belongs to a World consisting of several other servers, and each of those shares the same social hubs and Deep Desert. This allows us to retain a neighborhood-like feel in the Hagga Basin and provide persistent, freeform building and other server-demanding mechanics you typically see in survival games. We combine this with the large-scale multiplayer mechanics you would expect from MMOs, where hundreds of players meet in social hubs and the Deep Desert to engage in social activities, trade, conflict, and more.
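The World/server structure described above can be sketched as a simple data model: several servers each hold their own player population, while one shared zone is visible to all of them. All names below are invented for illustration and are not Funcom's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SharedZone:
    """A zone (social hubs, Deep Desert) shared by every server in a World."""
    name: str
    players: set = field(default_factory=set)

@dataclass
class World:
    # server_id -> players currently on that Hagga Basin server
    servers: dict = field(default_factory=dict)
    deep_desert: SharedZone = field(
        default_factory=lambda: SharedZone("Deep Desert"))

    def join_server(self, server_id, player):
        self.servers.setdefault(server_id, set()).add(player)

    def enter_deep_desert(self, player):
        self.deep_desert.players.add(player)

world = World()
world.join_server("hagga-1", "Paul")
world.join_server("hagga-2", "Chani")
# Players from different Hagga Basin servers meet in the same shared zone.
world.enter_deep_desert("Paul")
world.enter_deep_desert("Chani")
assert world.deep_desert.players == {"Paul", "Chani"}
```

The design choice this illustrates: per-server state keeps the "neighborhood" feel and persistent building cheap, while the shared zone is where the MMO-scale interaction happens.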

Former Sony Gaming Head Decries Impact of Game Subscription Services As "Risky" for Developers

Shuhei Yoshida, the former head of Sony Interactive Entertainment, has decried the outsized impact of game subscription services, arguing that they risk stifling innovation, put too much emphasis on AAA and first-party games, and make it even more difficult for indie developers to break into the scene. In an interview with Game Developer, Yoshida shared his concerns about the rise of subscription services, adding that Sony's approach was less harmful than Xbox Game Pass, specifically because Sony wasn't launching AAA titles straight to the subscription model as Xbox does.

Yoshida's implication is that Sony's model of giving games a traditional release before they reach PlayStation Plus is likely healthier than the day-one AAA launches that became popular on Xbox Game Pass. His concerns center on innovation: "what [type] of games can be created will be dictated by the owner of the subscription services," he said, adding that "the big companies dictate what games can be created, I don't think that will advance the industry." He also takes issue with the financial side, implying that if gamers have day-one access to games on subscription services, they won't want to pay up-front for games. This last point has implications for innovation as well: if developers come to depend on subscription services for launches, they may grow more averse to trying new things. These comments seem all the more relevant in a modern gaming landscape where indie developers are largely responsible for pushing the envelope; one need only look at the popularity of games like Hades, Terraria, or the roguelike and survival-craft genres in general for evidence.

NVIDIA & Microsoft Accelerate Agentic AI Innovation - From Cloud to PC

Agentic AI is redefining scientific discovery and unlocking research breakthroughs and innovations across industries. Through deepened collaboration, NVIDIA and Microsoft are delivering advancements that accelerate agentic AI-powered applications from the cloud to the PC. At Microsoft Build, Microsoft unveiled Microsoft Discovery, an extensible platform built to empower researchers to transform the entire discovery process with agentic AI. This will help research and development departments across various industries accelerate the time to market for new products, as well as speed and expand the end-to-end discovery process for all scientists.

Microsoft Discovery will integrate the NVIDIA ALCHEMI NIM microservice, which optimizes AI inference for chemical simulations, to accelerate materials science research with property prediction and candidate recommendation. The platform will also integrate NVIDIA BioNeMo NIM microservices, tapping into pretrained AI workflows to speed up AI model development for drug discovery. These integrations equip researchers with accelerated performance for faster scientific discoveries. In testing, researchers at Microsoft used Microsoft Discovery to detect a novel coolant prototype with promising properties for immersion cooling in data centers in under 200 hours, rather than months or years with traditional methods.

NVIDIA Discusses the Revenue-Generating Potential of AI Factories

AI is creating value for everyone—from researchers in drug discovery to quantitative analysts navigating financial market changes. The faster an AI system can produce tokens (the units of data from which model outputs are assembled), the greater its impact. That's why AI factories are key, providing the most efficient path from "time to first token" to "time to first value." AI factories are redefining the economics of modern infrastructure. They produce intelligence by transforming data into valuable outputs—whether tokens, predictions, images, proteins or other forms—at massive scale.

They help enhance three key aspects of the AI journey—data ingestion, model training and high-volume inference. AI factories are being built to generate tokens faster and more accurately, using three critical technology stacks: AI models, accelerated computing infrastructure and enterprise-grade software. Read on to learn how AI factories are helping enterprises and organizations around the world convert the most valuable digital commodity—data—into revenue potential.

QNAP Unveils High Availability Solution (Beta)

QNAP Systems, Inc., a leading computing, networking, and storage solutions innovator, has launched the Beta version of its QNAP High Availability (HA) solution. By leveraging a dual-server architecture and failover technology, the QNAP HA solution minimizes system downtime risks and ensures uninterrupted critical services. This solution provides highly reliable data storage and continuous operation capabilities for businesses across various industries.

Modern enterprises rely heavily on critical services such as file servers, virtualization storage, and databases, where downtime tolerance is extremely low. Any unexpected disruption can lead to data loss, business interruptions, and reputational damage. QNAP's HA solution is not only for large enterprises, but also for small and medium-sized businesses and budget-conscious professional users, offering an easy-to-deploy and cost-effective HA architecture. This enables IT teams to deploy and manage HA environments efficiently while reducing downtime risks.
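The press release does not detail QNAP's failover mechanism, but the general dual-server pattern it describes (an active node, a standby node, and promotion when heartbeats stop) can be sketched as follows. All names and the timeout value here are illustrative, not QNAP's implementation.

```python
import time

class HAPair:
    """Toy model of dual-server failover: promote the standby node
    once the active node misses heartbeats for too long."""

    def __init__(self, heartbeat_timeout=3.0):
        self.active = "server-a"
        self.standby = "server-b"
        self.heartbeat_timeout = heartbeat_timeout
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        # Called by the active node to signal it is still alive.
        self.last_heartbeat = time.monotonic()

    def check(self, now=None):
        # Called periodically by a monitor; returns the currently active node.
        now = time.monotonic() if now is None else now
        if now - self.last_heartbeat > self.heartbeat_timeout:
            # Active node presumed dead: fail over to the standby.
            self.active, self.standby = self.standby, self.active
            self.last_heartbeat = now
        return self.active

pair = HAPair(heartbeat_timeout=3.0)
t0 = pair.last_heartbeat
assert pair.check(now=t0 + 1.0) == "server-a"   # heartbeat still fresh
assert pair.check(now=t0 + 5.0) == "server-b"   # timeout exceeded: standby promoted
```

Real HA stacks add fencing, quorum, and storage replication on top of this loop so the two nodes never both act as primary, but the heartbeat-then-promote cycle is the core of how downtime is bounded.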

NVIDIA Bringing Cybersecurity Platform to Every AI Factory

As enterprises increasingly adopt AI, securing AI factories—where complex, agentic workflows are executed—has never been more critical. NVIDIA is bringing runtime cybersecurity to every AI factory with a new NVIDIA DOCA software framework, part of the NVIDIA cybersecurity AI platform. Running on the NVIDIA BlueField networking platform, NVIDIA DOCA Argus operates on every node to immediately detect and respond to attacks on AI workloads, integrating seamlessly with enterprise security systems to deliver instant threat insights. The DOCA Argus framework provides runtime threat detection by using advanced memory forensics to monitor threats in real time, delivering detection speeds up to 1,000x faster than existing agentless solutions—without impacting system performance.

Unlike conventional tools, Argus runs independently of the host, requiring no agents, integration or reliance on host-based resources. This agentless, zero-overhead design enhances system efficiency and ensures resilient security in any AI compute environment, including containerized and multi-tenant infrastructures. By operating outside the host, Argus remains invisible to attackers—even in the event of a system compromise. Cybersecurity professionals can seamlessly integrate the framework with their SIEM, SOAR and XDR security platforms, enabling continuous monitoring and automated threat mitigation and extending their existing cybersecurity capabilities for AI infrastructure.

Oracle Cloud Infrastructure Bolstered by Thousands of NVIDIA Blackwell GPUs

Oracle has stood up and optimized its first wave of liquid-cooled NVIDIA GB200 NVL72 racks in its data centers. Thousands of NVIDIA Blackwell GPUs are now being deployed and ready for customer use on NVIDIA DGX Cloud and Oracle Cloud Infrastructure (OCI) to develop and run next-generation reasoning models and AI agents. Oracle's state-of-the-art GB200 deployment includes high-speed NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum-X Ethernet networking to enable scalable, low-latency performance, as well as a full stack of software and database integrations from NVIDIA and OCI.

OCI, one of the world's largest and fastest-growing cloud service providers, is among the first to deploy NVIDIA GB200 NVL72 systems. The company has ambitious plans to build one of the world's largest Blackwell clusters. OCI Superclusters will scale beyond 100,000 NVIDIA Blackwell GPUs to meet the world's skyrocketing need for inference tokens and accelerated computing. The torrid pace of AI innovation continues as several companies including OpenAI have released new reasoning models in the past few weeks.

GIGABYTE AORUS RTX 5080 MASTER Starts Leaking Thermal Gel After Four Weeks of Light MMO Gaming

An unlucky owner of a GIGABYTE AORUS GeForce RTX 5080 MASTER ICE 16 GB graphics card has reported a baffling instance of thermal gel leakage. A forum post—titled "5080 oh my god thermal problem"—on the Quasar Zone BBS alerted the wider world to this bizarre fault. The South Korean MMORPG enthusiast described the circumstances leading up to the point of critical liquefaction: "it's been exactly a month since I bought it. I use it for (Blizzard's) World of Warcraft. Two hours of use per day. I set up the card with a riser kit. Thermal (material) is crawling out?!" Early 2025 press coverage has largely focused on other types of unwanted high-temperature events involving GeForce RTX 50-series cards, but the seeping out of "server-grade thermal conductive gel" compound is something new. As reported by several PC hardware news outlets, GIGABYTE has utilized this fancy thermal conductive gel within flagship SKUs instead of traditional thermal pads. The gel was applied over the card's VRAM and MOSFET sections; following fairly light usage (as described above), some of the material began migrating downward, getting ever closer to the unit's PCIe interface.

Assisted by the AORUS RTX 5080 MASTER ICE's vertical orientation, the (apparently) highly deformable but non-fluid thermal gel was susceptible to the effects of gravity. JC Hyun System Co., Ltd.—GIGABYTE's official importer for South Korea—weighed in with a separate bulletin: "we are aware of the thermal gel issue with the GIGABYTE GeForce RTX 50 series, which was first posted on Quasar Zone—(we) are currently discussing the thermal gel issue with GIGABYTE HQ and future customer service regulations. In addition, we sincerely apologize for the confusion caused to many customers who love and use GIGABYTE products, owing to inaccurate guidance provided under unclear customer service regulations regarding this issue. Lastly, when the manufacturer's customer service policy regarding this thermal gel issue is finalized, we will forward the service policy to CS Innovation so that it can be processed smoothly. We will also provide information through a separate post so that more customers can be aware of it." As mentioned by Notebookcheck, GIGABYTE uses this special thermal gel solution on other highly expensive custom models: "RTX 50-series cards like the GeForce RTX 5090 XTREME WATERFORCE 32G, RTX 5090 MASTER ICE, RTX 5070 Ti MASTER, and others."

LG Announces First-Quarter 2025 Financial Results

LG Electronics Inc. (LG) today announced consolidated revenue of KRW 22.74 trillion and operating profit of KRW 1.26 trillion for the first quarter of 2025. This marks the highest first-quarter revenue in the company's history and the sixth consecutive year in which operating profit surpassed KRW 1 trillion during the first quarter. Strong performance was driven by qualitative growth across key business areas, especially in B2B, non-hardware segments (such as subscriptions and webOS platform) and direct-to-consumer (D2C) sales.

Vehicle solutions and heating, ventilation and air conditioning (HVAC) - both critical to LG's B2B strategy and future growth - delivered record-breaking quarterly revenue and operating profit. Combined operating profit from the Vehicle Solution Company and Eco Solution Company increased 37.2 percent year-over-year, with revenue climbing 12.3 percent.

NVIDIA Will Bring Agentic AI Reasoning to Enterprises with Google Cloud

NVIDIA is collaborating with Google Cloud to bring agentic AI to enterprises seeking to locally harness the Google Gemini family of AI models using the NVIDIA Blackwell HGX and DGX platforms and NVIDIA Confidential Computing for data safety. With the NVIDIA Blackwell platform on Google Distributed Cloud, on-premises data centers can stay aligned with regulatory requirements and data sovereignty laws by locking down access to sensitive information, such as patient records, financial transactions and classified government information. NVIDIA Confidential Computing also secures sensitive code in the Gemini models from unauthorized access and data leaks.

"By bringing our Gemini models on premises with NVIDIA Blackwell's breakthrough performance and confidential computing capabilities, we're enabling enterprises to unlock the full potential of agentic AI," said Sachin Gupta, vice president and general manager of infrastructure and solutions at Google Cloud. "This collaboration helps ensure customers can innovate securely without compromising on performance or operational ease." Confidential computing with NVIDIA Blackwell provides enterprises with the technical assurance that their user prompts to the Gemini models' application programming interface—as well as the data they used for fine-tuning—remain secure and cannot be viewed or modified. At the same time, model owners can protect against unauthorized access or tampering, providing dual-layer protection that enables enterprises to innovate with Gemini models while maintaining data privacy.

Intel's 18A Node Process Has Entered "Risk Production" - Foundry's Output Scaling Up

Intel's Vision 2025 conference ended yesterday—since then, media outlets have spent time poring over a multitude of announcements made during the two-day Las Vegas, Nevada event. Notably, Team Blue leadership confirmed that their Core Ultra 300 "Panther Lake" processor series is built to scale on 18A and is on track for production later this year. Prominently displayed presentation material indicated a roadmapped 2026 launch of "Panther Lake" client chips. The success of this next-gen mobile processor family is intertwined with Intel Foundry making marked progress. As summarized by the company's social media account, production teams are celebrating another milestone: "Intel 18A has entered risk production. This final stage is about stress-testing volume manufacturing before scaling up to high volume in the second half of 2025."

Under Pat Gelsinger's command, Team Blue set off on a "five nodes in four years" (5N4Y) adventure around mid-2021. This plan is set to conclude with the finalization of 18A at some point this year, under a newly refreshed regime with Lip-Bu Tan recently installed as CEO. During an on-stage Intel Vision 2025 session, Kevin O'Buckley, Senior VP of Foundry Services, explained the term: "risk production, while it sounds scary, is actually an industry standard terminology, and the importance of risk production is we've gotten the technology to a point where we're freezing it... Our customers have validated that; 'Yep, 18A is good enough for my product.' And we have to now do the 'risk' part, which is to scale it from making hundreds of units per day to thousands, tens of thousands, and then hundreds of thousands. So risk production... is scaling our manufacturing up and ensuring that we can meet not just the capabilities of the technology, but the capabilities at scale." Under the original "5N4Y" decree, top brass demanded that process nodes be fully available for production, rather than stuck in a not-quite-final high volume manufacturing (HVM) phase.

IBM & Intel Announce the Availability of Gaudi 3 AI Accelerators on IBM Cloud

Yesterday, at Intel Vision 2025, IBM announced the availability of Intel Gaudi 3 AI accelerators on IBM Cloud. This offering delivers Intel Gaudi 3 in a public cloud environment for production workloads. Through this collaboration, IBM Cloud aims to help clients more cost-effectively scale and deploy enterprise AI. Intel Gaudi 3 AI accelerators on IBM Cloud are currently available in Frankfurt (eu-de) and Washington, D.C. (us-east) IBM Cloud regions, with future availability for the Dallas (us-south) IBM Cloud region in Q2 2025.

IBM's AI in Action 2024 report found that 67% of surveyed leaders reported revenue increases of 25% or more due to including AI in business operations. Although AI is demonstrating promising revenue increases, enterprises are also balancing the costs associated with the infrastructure needed to drive performance. By leveraging Intel Gaudi 3 on IBM Cloud, the two companies aim to help clients more cost-effectively test, innovate and deploy generative AI solutions. "By bringing Intel Gaudi 3 AI accelerators to IBM Cloud, we're enabling businesses to help scale generative AI workloads with optimized performance for inferencing and fine-tuning. This collaboration underscores our shared commitment to making AI more accessible and cost-effective for enterprises worldwide," said Saurabh Kulkarni, Vice President, Datacenter AI Strategy and Product Management, Intel.

NVIDIA & Storage Industry Leaders Unveil New Class of Enterprise Infrastructure for the AI Era

At GTC 2025, NVIDIA announced the NVIDIA AI Data Platform, a customizable reference design that leading providers are using to build a new class of AI infrastructure for demanding AI inference workloads: enterprise storage platforms with AI query agents fueled by NVIDIA accelerated computing, networking and software. Using the NVIDIA AI Data Platform, NVIDIA-Certified Storage providers can build infrastructure to speed AI reasoning workloads with specialized AI query agents. These agents help businesses generate insights from data in near real time, using NVIDIA AI Enterprise software—including NVIDIA NIM microservices for the new NVIDIA Llama Nemotron models with reasoning capabilities—as well as the new NVIDIA AI-Q Blueprint.

Storage providers can optimize their infrastructure to power these agents with NVIDIA Blackwell GPUs, NVIDIA BlueField DPUs, NVIDIA Spectrum-X networking and the NVIDIA Dynamo open-source inference library. Leading data platform and storage providers—including DDN, Dell Technologies, Hewlett Packard Enterprise, Hitachi Vantara, IBM, NetApp, Nutanix, Pure Storage, VAST Data and WEKA—are collaborating with NVIDIA to create customized AI data platforms that can harness enterprise data to reason and respond to complex queries. "Data is the raw material powering industries in the age of AI," said Jensen Huang, founder and CEO of NVIDIA. "With the world's storage leaders, we're building a new class of enterprise infrastructure that companies need to deploy and scale agentic AI across hybrid data centers."

Amazon GameLift Streams Empowers Developers to Stream Games to Virtually Any Device

Amazon Web Services (AWS), an Amazon.com, Inc. company, today announced Amazon GameLift Streams, a fully managed capability that enables developers to deliver high-fidelity, low-latency game experiences to players using virtually any device with a browser. Game developers no longer need to spend time and resources modifying their games for streaming or building their own streaming infrastructure. Players around the world can begin playing games in seconds instead of waiting minutes for streams or hours for downloads. Amazon GameLift Streams is a new capability of Amazon GameLift, the AWS service that empowers developers to build and deliver the world's most demanding games. The new streaming capability opens opportunities for developers to deliver new experiences to more players, helping them grow engagement and sales of their games.

"With more than 750 million people playing games running on AWS every month, we have a long history of supporting the industry's game development, content creation, player acquisition, personalization, and more," said Chris Lee, general manager and head of Immersive Technology at AWS. "Amazon GameLift Streams can help the game industry transform billions of everyday devices around the world into gaming machines without rebuilding game code or managing your own infrastructure. For game developers, this creates exciting new revenue and monetization opportunities that weren't possible before."

NVIDIA Explains How CUDA Libraries Bolster Cybersecurity With AI

Traditional cybersecurity measures are proving insufficient for addressing emerging cyber threats such as malware, ransomware, phishing and data access attacks. Moreover, future quantum computers pose a security risk to today's data through "harvest now, decrypt later" attack strategies. Cybersecurity technology powered by NVIDIA accelerated computing and high-speed networking is transforming the way organizations protect their data, systems and operations. These advanced technologies not only enhance security but also drive operational efficiency, scalability and business growth.

Accelerated AI-Powered Cybersecurity
Modern cybersecurity relies heavily on AI for predictive analytics and automated threat mitigation. NVIDIA GPUs are essential for training and deploying AI models due to their exceptional computational power.

Intel Xeon 6 Processors With E-Cores Achieve Rapid Ecosystem Adoption by Industry-Leading 5G Core Solution Partners

Intel today showcased how Intel Xeon 6 processors with Efficient-cores (E-cores) have dramatically accelerated time-to-market for partner solutions developed in collaboration with the ecosystem. Since the product's introduction in June 2024, 5G core solution partners have independently validated a 3.2x performance improvement, a 3.8x performance-per-watt increase and, in combination with the Intel Infrastructure Power Manager launched at MWC 2024, a 60% reduction in run-time power consumption.

"As 5G core networks continue to build out using Intel Xeon processors, which are deployed in the vast majority of 5G networks worldwide, infrastructure efficiency, power savings and uncompromised performance are essential criteria for communication service providers (CoSPs). Intel is pleased to announce that our 5G core solution partners have accelerated the adoption of Intel Xeon 6 with E-cores and are immediately passing along these benefits to their customers. In addition, with Intel Infrastructure Power Manager, our partners have a run-time software solution that is showing tremendous progress in reducing server power in CoSP environments on existing and new infrastructure." -Alex Quach, Intel vice president and general manager of Wireline and Core Network Division
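The two headline multipliers above can be cross-checked with a little arithmetic: if throughput rises 3.2x while performance per watt rises 3.8x, the implied power draw at the new performance level follows directly. A minimal sketch, assuming both figures were measured on comparable workloads:

```python
# Figures quoted above; treated here as if measured on the same workload.
perf_gain = 3.2            # throughput vs. the prior generation
perf_per_watt_gain = 3.8   # efficiency vs. the prior generation

# power = performance / (performance per watt), so relative power draw is:
relative_power = perf_gain / perf_per_watt_gain
print(f"Implied power draw: {relative_power:.0%} of the previous generation")
```

That works out to roughly 84% of the previous generation's power while delivering 3.2x the throughput. The separately quoted 60% run-time power reduction comes from Intel Infrastructure Power Manager and is not derivable from these two figures.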

Lenovo Group: Third Quarter Financial Results 2024/25

Lenovo Group Limited (HKSE: 992) (ADR: LNVGY), together with its subsidiaries ('the Group'), today announced Q3 results for fiscal year 2024/25, reporting significant increases in overall group revenue and profit. Revenue grew 20% year-on-year to US$18.8 billion, marking the third consecutive quarter of double-digit growth. Net income more than doubled year-on-year to US$693 million (including a non-recurring income tax credit of US$282 million) on a Hong Kong Financial Reporting Standards (HKFRS) basis. The Group's diversified growth engines continue to accelerate, with non-PC revenue mix up more than four points year-on-year to 46%. The quarter's results were driven by the Group's focused hybrid-AI strategy, the turnaround of the Infrastructure Solutions Group, as well as double-digit growth for both the Intelligent Devices Group and Solutions and Services Group.

Lenovo continues to invest in R&D, with R&D expenses up nearly 14% year-on-year to US$621 million. At the recent global technology event CES 2025, Lenovo launched a series of innovative products, including the world's first rollable AI laptop, the world's first handheld gaming device that allows gamers free choice of Windows OS or Steam OS, as well as Moto AI - winning 185 industry awards for its portfolio of innovation.

MSI Unveils New Modern MD272UPSW Smart Monitor

MSI proudly announces the release of its first Google TV Smart Monitor, the Modern MD272UPSW. The Smart Monitor supports Multi Control and KVM functions, making it a versatile choice for both entertainment and work. Equipped with 4K UHD resolution, an IPS panel, and wide color gamut coverage of 94% Adobe RGB and 98% DCI-P3, it delivers vibrant and lifelike visuals. In addition, the monitor includes a USB Type-C port with 65 W Power Delivery and an ergonomic stand that tilts, swivels, rotates, and is height adjustable for seamless connectivity and comfortable use while working.

The entertainment you love. With a little help from Google
No more jumping from app to app. Google TV brings together 400,000+ movies, TV episodes, and more from across your streaming services - organized in one place. Need inspiration? Get curated recommendations and use Google's powerful search to find shows across 10,000+ apps or to browse 800+ free live TV channels and thousands of free movies. Ask Google Assistant to find movies, stream apps, play music, and control the monitor - all with your voice. Simply press the Google Assistant button on the remote to get started.

AMD & Nutanix Solutions Discuss Energy Efficient EPYC 9004 CPU Family

AMD and Nutanix have jointly developed virtualization/HCI solutions since 2019, working with major OEMs including Dell, HP, and Lenovo, as well as systems integrators, resellers, and other partners.

AMD EPYC Processors
The EPYC 9004 family of high-performance processors provides up to 128 cores per processor to meet the demands of a wide range of workloads and use cases. High core density allows you to consolidate older, inefficient servers at ratios as high as five to one. Systems based on AMD processors can also be more energy efficient than many competing systems: for example, running 2,000 VMs on eleven 2P AMD EPYC 9654-powered servers will use up to 29% less power annually than the seventeen 2P Intel Xeon Platinum 8490H-based servers required to deliver the same performance, while helping reduce CAPEX by up to 46%.
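The server-count arithmetic behind this claim is easy to reproduce; note that the 29% power and 46% CAPEX figures are AMD's measured results and are not derivable from the counts alone. A minimal sketch:

```python
# Server counts quoted above for delivering 2,000 VMs of capacity.
epyc_servers = 11   # 2P AMD EPYC 9654-powered servers
xeon_servers = 17   # 2P Intel Xeon Platinum 8490H-based servers

servers_eliminated = xeon_servers - epyc_servers
reduction = 1 - epyc_servers / xeon_servers
print(f"{servers_eliminated} fewer servers, a {reduction:.0%} smaller fleet")
```

A roughly one-third smaller fleet for the same VM count is where the downstream power and CAPEX savings originate, before any per-server efficiency differences are factored in.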

Xbox Celebrates Safer Internet Day with Minecraft Education, Digging Deeper into AI

People are using AI more and more at home, at work, in school, and everywhere in between. According to the most recent Microsoft Global Online Safety Survey, there has been a global increase in active generative AI users: in 2024, 51% of people were users or experimenters of generative AI, compared to 38% in 2023. Generation Z continues to drive this adoption, with 64% of young adults reporting having used the technology. That means it's up to us, especially those of us who work in technology and gaming, to make sure that young people have the support they need to navigate the world of AI safely while also fostering their curiosity and creativity in exploring these new technologies.

That's why, for Safer Internet Day 2025, Minecraft Education is releasing a new installment in the CyberSafe series where players can explore the risks and opportunities of AI use through fun, game-based challenges. In each instance, players are tasked with articulating guidelines for how to use AI safely and responsibly. Welcome to CyberSafe AI: Dig Deeper, available free on the Minecraft Marketplace and in Minecraft Education!

IBM & Lenovo Expand Strategic AI Technology Partnership in Saudi Arabia

IBM and Lenovo today announced at LEAP 2025 a planned expansion of their strategic technology partnership designed to help scale the impact of generative AI for clients in the Kingdom of Saudi Arabia. IDC expects annual worldwide spending on AI-centric systems to surpass $300 billion by 2026, with many leading organizations in Saudi Arabia exploring and investing in generative AI use cases as they prepare for the emergence of an "AI everywhere" world.

Building upon their 20-year partnership, IBM and Lenovo will collaborate to deliver AI solutions comprised of technology from the IBM watsonx portfolio of AI products, including the Saudi Data and Artificial Intelligence Authority (SDAIA) open-source Arabic Large Language Model (ALLaM), and Lenovo infrastructure. These solutions are expected to help government and business clients in the Kingdom to accelerate their use of AI to improve public services and make data-driven decisions in areas such as fraud detection, public safety, customer service, code modernization, and IT operations.

CoreWeave Launches Debut Wave of NVIDIA GB200 NVL72-based Cloud Instances

AI reasoning models and agents are set to transform industries, but delivering their full potential at scale requires massive compute and optimized software. The "reasoning" process involves multiple models, generating many additional tokens, and demands infrastructure with a combination of high-speed communication, memory and compute to ensure real-time, high-quality results. To meet this demand, CoreWeave has launched NVIDIA GB200 NVL72-based instances, becoming the first cloud service provider to make the NVIDIA Blackwell platform generally available. With rack-scale NVIDIA NVLink across 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs, scaling to up to 110,000 GPUs with NVIDIA Quantum-2 InfiniBand networking, these instances provide the scale and performance needed to build and deploy the next generation of AI reasoning models and agents.

NVIDIA GB200 NVL72 on CoreWeave
NVIDIA GB200 NVL72 is a liquid-cooled, rack-scale solution with a 72-GPU NVLink domain, which enables the six dozen GPUs to act as a single massive GPU. NVIDIA Blackwell features many technological breakthroughs that accelerate inference token generation, boosting performance while reducing service costs. For example, fifth-generation NVLink enables 130 TB/s of GPU bandwidth in one 72-GPU NVLink domain, and the second-generation Transformer Engine enables FP4 for faster AI performance while maintaining high accuracy. CoreWeave's portfolio of managed cloud services is purpose-built for Blackwell. CoreWeave Kubernetes Service optimizes workload orchestration by exposing NVLink domain IDs, ensuring efficient scheduling within the same rack. Slurm on Kubernetes (SUNK) supports the topology block plug-in, enabling intelligent workload distribution across GB200 NVL72 racks. In addition, CoreWeave's Observability Platform provides real-time insights into NVLink performance, GPU utilization and temperatures.
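CoreWeave's actual scheduler logic is not public, but the benefit of exposing NVLink domain IDs can be illustrated with a toy placement function: jobs are packed into a single domain so GPU-to-GPU traffic never leaves the rack-scale NVLink fabric. All names and data shapes below are invented for illustration.

```python
from collections import defaultdict

def place_job(nodes, gpus_needed):
    """Pick one NVLink domain with enough free GPUs, so a job never spans racks.

    `nodes` maps node name -> (nvlink_domain_id, free_gpus); a hypothetical
    shape standing in for what a real scheduler reads from node labels.
    """
    free_per_domain = defaultdict(int)
    for _, (domain, free) in nodes.items():
        free_per_domain[domain] += free
    candidates = [d for d, free in free_per_domain.items() if free >= gpus_needed]
    if not candidates:
        return None  # no single domain can host the job intact
    # Tightest fit first, to leave large domains free for large jobs.
    return min(candidates, key=lambda d: free_per_domain[d])

nodes = {
    "node-a": ("rack-1", 8),
    "node-b": ("rack-1", 8),
    "node-c": ("rack-2", 72),
}
assert place_job(nodes, 16) == "rack-1"   # fits in the smaller domain, saving rack-2
assert place_job(nodes, 40) == "rack-2"   # only the full NVL72 rack can host it
assert place_job(nodes, 200) is None
```

Keeping a job inside one 72-GPU NVLink domain is what lets the GPUs communicate at the quoted 130 TB/s fabric bandwidth instead of crossing the slower inter-rack network.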