News Posts matching #Artificial Intelligence


AMD Introduces GAIA - an Open-Source Project That Runs Local LLMs on Ryzen AI NPUs

AMD has launched a new open-source project called GAIA (pronounced /ˈɡaɪ.ə/), an awesome application that leverages the Ryzen AI Neural Processing Unit (NPU) to run private, local large language models (LLMs). In this blog, we'll dive into the features and benefits of GAIA and show how you can adapt the open-source project for use in your own applications.

Introduction to GAIA
GAIA is a generative AI application designed to run local, private LLMs on Windows PCs and is optimized for AMD Ryzen AI hardware (AMD Ryzen AI 300 Series processors). This integration allows for faster, more efficient (i.e., lower-power) processing while keeping your data local and secure. On Ryzen AI PCs, GAIA interacts with the NPU and iGPU to run models seamlessly, using the open-source Lemonade (LLM-Aid) SDK from ONNX TurnkeyML for LLM inference. GAIA supports a variety of local LLMs optimized to run on Ryzen AI PCs. Popular models like Llama and Phi derivatives can be tailored for different use cases, such as Q&A, summarization, and complex reasoning tasks.
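To make the local-inference pattern concrete, here is a minimal sketch of how a client might talk to a local LLM server of this kind. The endpoint URL, model name, and payload shape below are assumptions for illustration (many local LLM servers expose an OpenAI-compatible REST interface), not GAIA's documented API:

```python
import json

# Hypothetical local endpoint; local LLM servers commonly expose an
# OpenAI-compatible chat-completions route. This URL is an assumption.
LOCAL_ENDPOINT = "http://localhost:8000/api/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble an OpenAI-style chat-completions payload for a local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": False,
    }

# Model name is illustrative, not a specific GAIA model tag.
payload = build_chat_request("llama-3.2-3b", "Summarize this meeting transcript.")
print(json.dumps(payload, indent=2))

# Sending it would require a running local server, e.g.:
#   import urllib.request
#   req = urllib.request.Request(
#       LOCAL_ENDPOINT,
#       data=json.dumps(payload).encode(),
#       headers={"Content-Type": "application/json"})
#   reply = urllib.request.urlopen(req).read()
```

The data never leaves the machine: the request goes to localhost, where the NPU/iGPU-accelerated runtime serves the model.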

Pat Gelsinger Repeats Observation that NVIDIA CEO "Got Lucky" with AI Industry Boom

Pat Gelsinger has quite bravely stepped into the belly of the beast this week. The former Intel boss was an invited guest at NVIDIA's GTC 2025 conference, currently taking place in San Jose, California. Technology news outlets have extracted key quotes from Gelsinger's musings during an in-person appearance on Acquired's "Live at GTC" video podcast. In the past, the ex-Team Blue chief held the belief that NVIDIA was "extraordinarily lucky" with its market-leading position. Yesterday's panel discussion provided a repeat visit, where Gelsinger restated his long-held opinion: "the CPU was the king of the hill, and I applaud Jensen for his tenacity in just saying, 'No, I am not trying to build one of those; I am trying to deliver against the workload starting in graphics.' You know, it became this broader view. And then he got lucky with AI, and one time I was debating with him, he said: 'No, I got really lucky with AI workload because it just demanded that type of architecture.' That is where the center of application development is (right now)."

The American businessman and electrical engineer reckons that AI hardware costs are climbing to unreasonable levels: "today, if we think about the training workload, okay, but you have to give away something much more optimized for inferencing. You know a GPU is way too expensive; I argue it is 10,000 times too expensive to fully realize what we want to do with the deployment of inferencing for AI and then, of course, what's beyond that." Despite the "failure" of a much older Intel design, Gelsinger delved into some rose-tinted nostalgia: "I had a project that was well known in the industry called Larrabee, which was trying to bridge the programmability of the CPU with a throughput-oriented architecture (of a GPU), and I think had Intel stayed on that path, you know, the future could have been different...I give Jensen a lot of credit (as) he just stayed true to that throughput computing or accelerated (vision)." With the semi-recent cancellation of the "Falcon Shores" chip design, Intel's AI GPU division is likely regrouping around its next-generation "Jaguar Shores" project—industry watchdogs reckon that this rack-scale platform will arrive in 2026.

AMD Recommends EPYC Processors for Everyday AI Server Tasks

Ask a typical IT professional today whether they're leveraging AI, and there's a good chance they'll say yes—after all, they have reputations to protect! Kidding aside, many will report that their teams use Web-based tools like ChatGPT, or even have internal chatbots serving their employee base on the intranet, but that beyond this, not much AI is really implemented at the infrastructure level. As it turns out, the true answer is a bit different. AI tools and techniques have embedded themselves firmly into standard enterprise workloads and are a more common, everyday phenomenon than even many IT people may realize. Assembly line operations now include computer vision-powered inspections. Supply chains use AI for demand forecasting, making businesses move faster, and of course, AI note-taking and meeting summaries are embedded in virtually all variants of collaboration and meeting software.

Increasingly, critical enterprise software tools incorporate built-in recommendation systems, virtual agents, or some other form of AI-enabled assistance. AI is truly becoming a pervasive, complementary tool for everyday business. At the same time, today's enterprises are navigating a hybrid landscape where traditional, mission-critical workloads coexist with innovative AI-driven tasks. This "mixed enterprise and AI" workload environment calls for infrastructure that can handle both types of processing seamlessly. Robust, general-purpose CPUs like the AMD EPYC processors are designed to be powerful, secure, and flexible enough to address this need. They handle everyday tasks—running databases, web servers, ERP systems—and offer strong security features crucial for enterprise operations augmented with AI workloads. In essence, modern enterprise infrastructure is about creating a balanced ecosystem. AMD EPYC CPUs play a pivotal role in creating this balance, delivering high performance, efficiency, and security features that underpin both traditional enterprise workloads and advanced AI operations.

Meta Reportedly Reaches Test Phase with First In-house AI Training Chip

According to a Reuters technology report, Meta's engineering department is testing the company's "first in-house chip for training artificial intelligence systems." Two inside sources describe this as a significant development milestone, involving a small-scale deployment of early samples. The owner of Facebook could ramp up production upon initial batches passing muster. Despite a recent showcasing of an open-architecture NVIDIA "Blackwell" GB200 system for enterprise, Meta leadership is reported to be pursuing proprietary solutions. Multiple big players—in the field of artificial intelligence—are attempting to break away from a total reliance on Team Green. Last month, press outlets concentrated on OpenAI's alleged finalization of an in-house design, with rumored involvement coming from Broadcom and TSMC.

One of the Reuters industry moles believes that Meta has signed up with TSMC—supposedly, the Taiwanese foundry was responsible for the production of test batches. Tom's Hardware reckons that Meta and Broadcom worked together on the tape-out of the social media giant's "first AI training accelerator." Development of the company's "Meta Training and Inference Accelerator" (MTIA) series stretches back a couple of years—according to Reuters, this multi-part project: "had a wobbly start for years, and at one point scrapped a chip at a similar phase of development...Meta last year started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds." Leadership is reportedly aiming to get custom silicon solutions up and running for AI training by next year. Past examples of MTIA hardware were deployed with open-source RISC-V cores (for inference tasks), but it is not clear whether this architecture will form the basis of Meta's latest AI chip design.

OpenAI Has "Run Out of GPUs" - Sam Altman Mentions Incoming Delivery of "Tens of Thousands"

Yesterday, OpenAI introduced its "strongest" GPT-4.5 model. A research preview build is only available to paying customers—Pro-tier subscribers fork out $200 a month for early access privileges. The non-profit organization's CEO shared an update via social media post; complete with a "necessary" hyping up of version 4.5: "it is the first model that feels like talking to a thoughtful person to me. I have had several moments where I've sat back in my chair and been astonished at getting actual good advice from an AI." There are apparent performance caveats—Sam Altman proceeded to add a short addendum: "this isn't a reasoning model and won't crush benchmarks. It's a different kind of intelligence, and there's a magic to it (that) I haven't felt before. Really excited for people to try it!" OpenAI had plans to make GPT-4.5 available to its audience of "Plus" subscribers, but major hardware shortages have delayed a roll-out to the $20 per month tier.

Altman disclosed his personal disappointment: "bad news: it is a giant, expensive model. We really wanted to launch it to Plus and Pro (customers) at the same time, but we've been growing a lot and are out of GPUs. We will add tens of thousands of GPUs next week, and roll it out to the plus tier then...Hundreds of thousands coming soon, and I'm pretty sure y'all will use every one we can rack up." Insiders believe that OpenAI is finalizing a proprietary AI-crunching solution, but a rumored mass production phase is not expected to kick off until 2026. In the meantime, Altman & Co. are still reliant on NVIDIA for new shipments of AI GPUs. Despite being a very important customer, OpenAI is reportedly not satisfied with the "slow" flow of Team Green's latest DGX B200 and DGX H200 platforms into server facilities. Several big players are developing in-house designs in an attempt to wean themselves off prevalent NVIDIA technologies.

DeepSeek Reportedly Pursuing Development of Proprietary AI Chip

The notion of designing tailor-made AI-crunching chips is nothing new; several major organizations—with access to big coffers—are engaged in the formulation of proprietary hardware. A new DigiTimes Asia report suggests that DeepSeek is the latest company to jump on the in-house design bandwagon. The publication's insider network believes that the open-source large language model development house has: "initiated a major recruitment drive for semiconductor design talent, signaling potential plans to develop its proprietary processors." The recent news cycle has highlighted DeepSeek's deep reliance on an NVIDIA ecosystem, despite alternative options emerging from local sources.

Industry watchdogs believe that DeepSeek has access to 10,000 sanction-approved Team Green "Hopper" H800 AI chips, as well as 10,000 (now banned) H100 AI GPUs. Around late January, Scale AI CEO Alexandr Wang claimed that the organization could utilize up to 50,000 H100 chips for model training purposes. This unsubstantiated declaration raised eyebrows, given current global political tensions. Press outlets have speculated that DeepSeek is in no rush to reveal its full deck of cards, but it appears to have a competitive volume of resources when lined up against Western competitors. The DigiTimes news article did not provide any detailed insight into the rumored in-house chip design. DeepSeek faces a major challenge: the Chinese semiconductor industry trails behind market-leading regions. Will local foundries be able to provide an advanced enough node process for the required purposes?

AMD Reiterates Belief that 2025 is the Year of the AI PC

AI PC capabilities have evolved rapidly in the two years since AMD introduced the first x86 AI PC CPUs at CES 2023. New neural processing units have debuted, pushing available performance from a peak of 10 AI TOPS at the launch of the AMD Ryzen 7 7840U processor to peak 50+ TOPS on the latest AMD Ryzen AI Max PRO 300 Series processors. A wide range of software and hardware companies have announced various AI development plans or brought AI-infused products to market, while major operating system vendors like Microsoft are actively working to integrate AI into the operating system via its Copilot+ PC capabilities. AMD is on the forefront of those efforts and is working closely with Microsoft to deliver Copilot+ for Ryzen AI and Ryzen AI PRO PCs.

In the report "The Year of the AI PC is 2025," Forrester lays out its argument for why this year is likely to bring significant changes for AI PCs. Forrester defines the term "AI PC" to mean any system "embedded with an AI chip and algorithms specifically designed to improve the experience of AI workloads across the computer processing unit (CPU), graphics processing unit (GPU), and neural processing unit (NPU)." This includes AMD products, as well as competing products made by both x86 and non-x86 CPU manufacturers. 2025 represents a turning point for these efforts, both in terms of hardware and software, and this Forrester report is an excellent deep dive into why AI PCs represent the future for enterprise computing.

IBM & Lenovo Expand Strategic AI Technology Partnership in Saudi Arabia

IBM and Lenovo today announced at LEAP 2025 a planned expansion of their strategic technology partnership designed to help scale the impact of generative AI for clients in the Kingdom of Saudi Arabia. IDC expects annual worldwide spending on AI-centric systems to surpass $300 billion by 2026, with many leading organizations in Saudi Arabia exploring and investing in generative AI use cases as they prepare for the emergence of an "AI everywhere" world.

Building upon their 20-year partnership, IBM and Lenovo will collaborate to deliver AI solutions comprised of technology from the IBM watsonx portfolio of AI products, including the Saudi Data and Artificial Intelligence Authority (SDAIA) open-source Arabic Large Language Model (ALLaM), and Lenovo infrastructure. These solutions are expected to help government and business clients in the Kingdom to accelerate their use of AI to improve public services and make data-driven decisions in areas such as fraud detection, public safety, customer service, code modernization, and IT operations.

Moore Threads Teases Excellent Performance of DeepSeek-R1 Model on MTT GPUs

Moore Threads, a Chinese manufacturer of proprietary GPU designs, is (reportedly) the latest company to jump onto the DeepSeek-R1 bandwagon. Since late January, NVIDIA, Microsoft and AMD have swooped in with their own interpretations/deployments. By global standards, Moore Threads GPUs trail behind Western-developed offerings—early 2024 evaluations showed the firm's MTT S80 dedicated desktop graphics card struggling against an AMD integrated solution: the Radeon 760M. The recent emergence of DeepSeek's open-source models has signalled a shift away from reliance on extremely powerful and expensive AI-crunching hardware (often accessed via the cloud)—widespread excitement has been generated by DeepSeek solutions being relatively frugal in terms of processing requirements. Tom's Hardware has observed cases of open-source AI models running (locally) on: "inexpensive hardware, like the Raspberry Pi."

According to recent Chinese press coverage, Moore Threads has announced a successful deployment of DeepSeek's R1-Distill-Qwen-7B distilled model on the aforementioned MTT S80 GPU. The company also revealed that it had taken similar steps with its MTT S4000 datacenter-oriented graphics hardware. On the subject of adaptation, a Moore Threads spokesperson stated: "based on the Ollama open source framework, Moore Threads completed the deployment of the DeepSeek-R1-Distill-Qwen-7B distillation model and demonstrated excellent performance in a variety of Chinese tasks, verifying the versatility and CUDA compatibility of Moore Threads' self-developed full-featured GPU." Exact performance figures, benchmark results and technical details were not disclosed to the Chinese public, so Moore Threads appears to be teasing the prowess of its MTT GPU designs. ITHome reported that: "users can also perform inference deployment of the DeepSeek-R1 distillation model based on MTT S80 and MTT S4000. Some users have previously completed the practice manually on MTT S80." Moore Threads believes that its: "self-developed high-performance inference engine, combined with software and hardware co-optimization technology, significantly improves the model's computing efficiency and resource utilization through customized operator acceleration and memory management. This engine not only supports the efficient operation of the DeepSeek distillation model, but also provides technical support for the deployment of more large-scale models in the future."
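For context on the Ollama framework named above: its REST API (POST /api/generate) streams newline-delimited JSON objects back to the client. The sketch below shows the client side of that pattern; the model tag and the sample stream are illustrative assumptions, not Moore Threads' actual deployment or real model output:

```python
import json

# Request body in Ollama's documented /api/generate shape.
# The model tag here is a hypothetical placeholder for the
# DeepSeek-R1-Distill-Qwen-7B model, not a verified tag.
request_body = {
    "model": "deepseek-r1:7b",
    "prompt": "Explain model distillation in one sentence.",
    "stream": True,
}

def collect_stream(raw_lines):
    """Concatenate the 'response' fragments of a streamed Ollama reply."""
    text = []
    for line in raw_lines:
        chunk = json.loads(line)
        text.append(chunk.get("response", ""))
        if chunk.get("done"):  # server marks the final chunk
            break
    return "".join(text)

# Illustrative stream, as the server would send it line by line:
sample = [
    '{"response": "Distillation", "done": false}',
    '{"response": " compresses a large model", "done": false}',
    '{"response": " into a smaller one.", "done": true}',
]
print(collect_stream(sample))
```

Because Ollama abstracts the backend, a deployment like Moore Threads' only needs the framework's runtime ported to its GPUs; client code stays unchanged.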

Aetina & Qualcomm Collaborate on Flagship MegaEdge AIP-FR68 Edge AI Solution

Aetina, a leading provider of edge AI solutions and a subsidiary of Innodisk Group, today announced a collaboration with Qualcomm Technologies, Inc., which unveiled a revolutionary Qualcomm AI On-Prem Appliance Solution and Qualcomm AI Inference Suite for On-Prem. This collaboration combines Qualcomm Technologies' cutting-edge inference accelerators and advanced software with Aetina's edge computing hardware to deliver unprecedented computing power and ready-to-use AI applications for enterprises and industrial organizations.

The flagship offering, the Aetina MegaEdge AIP-FR68, sets a new industry benchmark by integrating the Qualcomm Cloud AI family of accelerator cards. Each Cloud AI 100 Ultra card delivers an impressive 870 TOPS of AI computing power at 8-bit integer (INT8) precision while maintaining remarkable energy efficiency at just 150 W power consumption. The system supports dual Cloud AI 100 Ultra cards in a single desktop workstation. This groundbreaking combination of power and efficiency in a compact form factor revolutionizes on-premises AI processing, making enterprise-grade computing more accessible than ever.
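The efficiency claim is easy to check from the quoted figures alone. A quick calculation of per-card TOPS-per-watt and the aggregate throughput of the dual-card configuration:

```python
# Figures quoted above for one Qualcomm Cloud AI 100 Ultra card.
tops_int8 = 870      # INT8 AI compute, per card
power_watts = 150    # power consumption, per card

tops_per_watt = tops_int8 / power_watts   # per-card efficiency
dual_card_tops = 2 * tops_int8            # dual-card workstation total

print(f"{tops_per_watt:.1f} TOPS/W per card, {dual_card_tops} TOPS total")
# 5.8 TOPS/W per card, 1740 TOPS total
```

That 5.8 TOPS/W at INT8 is what the "remarkable energy efficiency" claim amounts to in concrete terms.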

Intel Launches Gaudi 3 AI Accelerator and P-Core Xeon 6 CPU

As AI continues to revolutionize industries, enterprises are increasingly in need of infrastructure that is both cost-effective and available for rapid development and deployment. To meet this demand head-on, Intel today launched Xeon 6 with Performance-cores (P-cores) and Gaudi 3 AI accelerators, bolstering the company's commitment to deliver powerful AI systems with optimal performance per watt and lower total cost of ownership (TCO).

"Demand for AI is leading to a massive transformation in the data center, and the industry is asking for choice in hardware, software and developer tools," said Justin Hotard, Intel executive vice president and general manager of the Data Center and Artificial Intelligence Group. "With our launch of Xeon 6 with P-cores and Gaudi 3 AI accelerators, Intel is enabling an open ecosystem that allows our customers to implement all of their workloads with greater performance, efficiency and security."

Netgear Announces Three New WiFi 7 Routers to Join the Industry-leading Nighthawk Lineup

NETGEAR, Inc., a leading provider of innovative and secure solutions for people to connect and manage their digital lives, today expanded its Nighthawk WiFi 7 standalone router line to include the new RS600, RS500, and RS200. The lineup of Nighthawk routers is built on the company's promise to deliver the latest WiFi 7 technology combined with powerful WiFi performance and secure connectivity for homes of any size.

Powerful, Secure WiFi 7 for Everyone
Since the launch of NETGEAR's first WiFi 7 offering - Nighthawk RS700S - multi-gig internet speeds have become more accessible and affordable, and IoT devices that require extremely low latency and higher throughput continue to be adopted at high rates. WiFi 7, which is 2.4x faster than WiFi 6, offers enhanced performance to address the ever-growing demands of modern connectivity. This next-generation technology supports increased capacity across multiple connected smart home devices, providing faster speeds, near-zero latency and the ability to handle the demands of high-quality video streaming as well as advanced computing, including AI-driven applications and AR/VR experiences.

AMD Completes Acquisition of Silo AI

AMD today announced the completion of its acquisition of Silo AI, the largest private AI lab in Europe. The all-cash transaction valued at approximately $665 million furthers the company's commitment to deliver end-to-end AI solutions based on open standards and in strong partnership with the global AI ecosystem. Silo AI brings a team of world-class AI scientists and engineers to AMD experienced in developing cutting-edge AI models, platforms and solutions for large enterprise customers including Allianz, Philips, Rolls-Royce and Unilever. Their expertise spans diverse markets and they have created state-of-the-art open source multilingual Large Language Models (LLMs) including Poro and Viking on AMD platforms. The Silo AI team will join the AMD Artificial Intelligence Group (AIG), led by AMD Senior Vice President Vamsi Boppana.

"AI is our number one strategic priority, and we continue to invest in both the talent and software capabilities to support our growing customer deployments and roadmaps," said Vamsi Boppana, AMD senior vice president, AIG. "The Silo AI team has developed state-of-the-art language models that have been trained at scale on AMD Instinct accelerators and they have broad experience developing and integrating AI models to solve critical problems for end customers. We expect their expertise and software capabilities will directly improve the experience for customers in delivering the best performing AI solutions on AMD platforms."

Report: AI Software Sales to Experience Massive Growth with 40.6% CAGR Over the Next Five Years

The market for artificial intelligence (AI) platforms software grew at a rapid pace in 2023 and is projected to maintain its remarkable momentum, driven by the increasing adoption of AI across many industries. A new International Data Corporation (IDC) forecast shows that worldwide revenue for AI platforms software will grow to $153.0 billion in 2028 with a compound annual growth rate (CAGR) of 40.6% over the 2023-2028 forecast period.
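The forecast's two headline numbers pin down the implied 2023 base-year revenue, since a compound annual growth rate relates the endpoints by revenue_2028 = revenue_2023 × (1 + CAGR)^5. A quick check:

```python
# IDC forecast figures quoted above.
revenue_2028 = 153.0   # worldwide AI platforms software revenue in 2028, $ billions
cagr = 0.406           # 40.6% compound annual growth rate, 2023-2028
years = 5

# Invert the compound-growth relation to recover the implied 2023 base.
revenue_2023 = revenue_2028 / (1 + cagr) ** years
print(f"Implied 2023 revenue: ${revenue_2023:.1f}B")
# Implied 2023 revenue: $27.8B
```

In other words, the forecast implies the market roughly quintuples from about $28 billion in 2023 to $153 billion in 2028.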

"The AI platforms market shows no signs of slowing down. Rapid innovations in generative AI is changing how companies think about their products, how they develop and deploy AI applications, and how they leverage technology themselves for reinventing their business models and competitive positioning," said Ritu Jyoti, group vice president and general manager of IDC's Artificial Intelligence, Automation, Data and Analytics research. "IDC expects this upward trajectory will continue to accelerate with the emergence of unified platforms for predictive and generative AI that supports interoperating APIs, ecosystem extensibility, and responsible AI adoption at scale."

AMD to Acquire Silo AI to Expand Enterprise AI Solutions Globally

AMD today announced the signing of a definitive agreement to acquire Silo AI, the largest private AI lab in Europe, in an all-cash transaction valued at approximately $665 million. The agreement represents another significant step in the company's strategy to deliver end-to-end AI solutions based on open standards and in strong partnership with the global AI ecosystem. The Silo AI team consists of world-class AI scientists and engineers with extensive experience developing tailored AI models, platforms and solutions for leading enterprises spanning cloud, embedded and endpoint computing markets.

Silo AI CEO and co-founder Peter Sarlin will continue to lead the Silo AI team as part of the AMD Artificial Intelligence Group, reporting to AMD senior vice president Vamsi Boppana. The acquisition is expected to close in the second half of 2024.

AMD, Broadcom, Cisco, Google, HPE, Intel, Meta and Microsoft Form Ultra Accelerator Link (UALink) Promoter Group to Combat NVIDIA NVLink

AMD, Broadcom, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta and Microsoft today announced they have aligned to develop a new industry standard dedicated to advancing high-speed, low-latency communication for scale-up AI systems in data centers.

Called the Ultra Accelerator Link (UALink), this initial group will define and establish an open industry standard that will enable AI accelerators to communicate more effectively. By creating an interconnect based upon open standards, UALink will enable system OEMs, IT professionals and system integrators to create a pathway for easier integration, greater flexibility and scalability of their AI-connected data centers.

Lenovo Announces its New AI PC ThinkPad P14s Gen 5 Mobile Workstation Powered by AMD Ryzen PRO Processors

Today, Lenovo launched the Lenovo ThinkPad P14s Gen 5, designed for professionals who need top-notch performance in a portable 14-inch chassis. Featuring a stunning 16:10 display, this mobile workstation is powered by AMD Ryzen PRO 8040 HS-Series processors. These processors are ultra-advanced and energy-efficient, making them perfect for use in thin and light mobile workstations. The AMD Ryzen PRO HS-Series processors also come with built-in Artificial Intelligence (AI) capabilities, including an integrated Neural Processing Unit (NPU) for optimized performance in AI workflows.

The Lenovo ThinkPad P14s Gen 5 comes with independent software vendor (ISV) certifications and integrated AMD Radeon graphics, making it ideal for running applications like AutoCAD, Revit, and SOLIDWORKS with seamless performance. This mobile workstation is well suited to mobile power users, offering advanced ThinkShield security features and passing comprehensive MIL-SPEC testing for ultimate durability.

Taiwan Dominates Global AI Server Supply - Government Reportedly Estimates 90% Share

The Taiwanese Ministry of Economic Affairs (MOEA) managed to herd government representatives and leading Information and Communication Technology (ICT) industry figures together for an important meeting, according to DigiTimes Asia. The report suggests that the main topic of discussion focused on an anticipated growth of Taiwan's ICT industry—current market trends were analyzed, revealing that the nation absolutely dominates in the AI server segment. The MOEA has (allegedly) determined that Taiwan has shipped 90% of global AI server equipment—DigiTimes claims (based on insider info) that: "American brand vendors are expected to source their AI servers from Taiwanese partners." North American customers could be (presently) 100% reliant on supplies of Taiwanese-produced equipment—a scenario that potentially complicates ongoing international tensions.

The report posits that involved parties have formed plans to seize opportunities within an ever-growing global demand for AI hardware—a 90% market dominance is clearly not enough for some very ambitious industry bosses—although manufacturers will need to jump over several (rising) cost hurdles. Key components for AI servers are reported to cost much more than vanilla server parts—DigiTimes believes that AI processor/accelerator chips are priced close to ten times higher than general-purpose server CPUs. Similar price hikes have reportedly affected AI-adjacent component supply chains—notably cooling, power supplies and passive parts. Taiwanese manufacturers have spread operations around the world, but industry watchdogs (largely) believe that the best stuff gets produced on home ground—global expansions are underway, perhaps inching closer to better-balanced supply conditions.

Tiny Corp. Pauses Development of AMD Radeon GPU-based Tinybox AI Cluster

George Hotz and his Tiny Corporation colleagues were pinning their hopes on AMD delivering some good news earlier this month. The development of the "TinyBox" AI compute cluster hit some major roadblocks a couple of weeks ago—at the time, Radeon RX 7900 XTX GPU firmware was not gelling with Tiny Corp.'s setup. Hotz expressed "70% confidence" in AMD approving the open-sourcing of certain bits of firmware. At the time of writing, this has not transpired—this week the Tiny Corp. social media account has, once again, switched to an "all guns blazing" mode. Hotz and Co. have publicly disclosed that they were dabbling with Intel Arc graphics cards as of a few weeks ago. NVIDIA hardware is another possible route, according to freshly posted open thoughts.

Yesterday, it was confirmed that the young startup organization had paused its utilization of XFX Speedster MERC310 RX 7900 XTX graphics cards: "the driver is still very unstable, and when it crashes or hangs we have no way of debugging it. We have no way of dumping the state of a GPU. Apparently it isn't just the MES causing these issues, it's also the Command Processor (CP). After seeing how open Tenstorrent is, it's hard to deal with this. With Tenstorrent, I feel confident that if there's an issue, I can debug and fix it. With AMD, I don't." The $15,000 TinyBox system relies on "cheaper" gaming-oriented GPUs, rather than traditional enterprise solutions—this oddball approach has attracted a number of customers, but the latest announcements likely signal another delay. Yesterday's tweet continued to state: "we are exploring Intel, working on adding Level Zero support to tinygrad. We also added a $400 bounty for XMX support. We are also (sadly) exploring a 6x GeForce RTX 4090 GPU box. At least we know the software is good there. We will revisit AMD once we have an open and reproducible build process for the driver and firmware. We are willing to dive really deep into hardware to make it amazing. But without access, we can't."

Ubisoft Exploring Generative AI, Could Revolutionize NPC Narratives

Have you ever dreamed of having a real conversation with an NPC in a video game? Not just one gated within a dialogue tree of pre-determined answers, but an actual conversation, conducted through spontaneous action and reaction? Lately, a small R&D team at Ubisoft's Paris studio, in collaboration with NVIDIA's Audio2Face application and Inworld's Large Language Model (LLM), has been experimenting with generative AI in an attempt to turn this dream into a reality. Their project, NEO NPC, uses GenAI to prod at the limits of how a player can interact with an NPC without breaking the authenticity of the situation they are in, or the character of the NPC itself.

Considering that word—authenticity—the project has had to be a hugely collaborative effort across artistic and scientific disciplines. Generative AI is a hot topic of conversation in the videogame industry, and Senior Vice President of Production Technology Guillemette Picard is keen to stress that the goal behind all genAI projects at Ubisoft is to bring value to the player; and that means continuing to focus on human creativity behind the scenes. "The way we worked on this project, is always with our players and our developers in mind," says Picard. "With the player in mind, we know that developers and their creativity must still drive our projects. Generative AI is only of value if it has value for them."

IBM Praises EU Parliament's Approval of EU AI Act

IBM applauds the EU Parliament's decision to adopt the EU AI Act, a significant milestone in establishing responsible AI regulation in the European Union. The EU AI Act provides a much-needed framework for ensuring transparency, accountability, and human oversight in developing and deploying AI technologies. While important work must be done to ensure the Act is successfully implemented, IBM believes the regulation will foster trust and confidence in AI systems while promoting innovation and competitiveness.

"I commend the EU for its leadership in passing comprehensive, smart AI legislation. The risk-based approach aligns with IBM's commitment to ethical AI practices and will contribute to building open and trustworthy AI ecosystems," said Christina Montgomery, Vice President and Chief Privacy & Trust Officer at IBM. "IBM stands ready to lend our technology and expertise - including our watsonx.governance product - to help our clients and other stakeholders comply with the EU AI Act and upcoming legislation worldwide so we can all unlock the incredible potential of responsible AI." For more information, visit watsonx.governance and ibm.com/consulting/ai-governance.

NVIDIA to Showcase AI-generated "Large Nature Model" at GTC 2024

The ecosystem around NVIDIA's technologies has always been verdant—but this is absurd. After a stunning premiere at the World Economic Forum in Davos, immersive artworks based on Refik Anadol Studio's Large Nature Model will come to the U.S. for the first time at NVIDIA GTC. Offering a deep dive into the synergy between AI and the natural world, Anadol's multisensory work, "Large Nature Model: A Living Archive," will be situated prominently on the main concourse of the San Jose Convention Center, where the global AI event is taking place, from March 18-21.

Fueled by NVIDIA's advanced AI technology, including powerful DGX A100 stations and high-performance GPUs, the exhibit offers a captivating journey through our planet's ecosystems with stunning visuals, sounds and scents. These scenes are rendered in breathtaking clarity across screens with a total output of 12.5 million pixels, immersing attendees in an unprecedented digital portrayal of Earth's ecosystems. Refik Anadol, recognized by The Economist as "the artist of the moment," has emerged as a key figure in AI art. His work, notable for its use of data and machine learning, places him at the forefront of a generation pushing the boundaries between technology, interdisciplinary research and aesthetics. Anadol's influence reflects a wider movement in the art world towards embracing digital innovation, setting new precedents in how art is created and experienced.

MiTAC Unleashes Revolutionary Server Solutions, Powering Ahead with 5th Gen Intel Xeon Scalable Processors Accelerated by Intel Data Center GPUs

MiTAC Computing Technology, a subsidiary of MiTAC Holdings Corp., proudly reveals its groundbreaking suite of server solutions that deliver unsurpassed capabilities with the 5th Gen Intel Xeon Scalable Processors. MiTAC introduces its cutting-edge signature platforms that seamlessly integrate Intel Data Center GPUs, both the Intel Max Series and the Intel Flex Series, unleashing an unparalleled leap in computing performance for HPC and AI applications.

MiTAC Announces its Full Array of Platforms Supporting the Latest 5th Gen Intel Xeon Scalable Processors
Last year, Intel transitioned the right to manufacture and sell products based on Intel Data Center Solution Group designs to MiTAC. MiTAC confidently announces a transformative upgrade to its product offerings, unveiling advanced platforms that epitomize the future of computing. Featuring up to 64 cores, expanded shared cache, increased UPI and DDR5 support, the latest 5th Gen Intel Xeon Scalable Processors deliver remarkable performance per watt gains across various workloads. MiTAC's Intel Server M50FCP Family and Intel Server D50DNP Family fully support the latest 5th Gen Intel Xeon Scalable Processors, made possible through a quick BIOS update and easy technical resource revisions, which provide unsurpassed performance to diverse computing environments.

NVIDIA Announces Q4 and Fiscal 2024 Results, Clocks 126% YoY Revenue Growth, Gaming Just 1/6th of Data Center Revenues

NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 28, 2024, of $22.1 billion, up 22% from the previous quarter and up 265% from a year ago. For the quarter, GAAP earnings per diluted share was $4.93, up 33% from the previous quarter and up 765% from a year ago. Non-GAAP earnings per diluted share was $5.16, up 28% from the previous quarter and up 486% from a year ago.

For fiscal 2024, revenue was up 126% to $60.9 billion. GAAP earnings per diluted share was $11.93, up 586% from a year ago. Non-GAAP earnings per diluted share was $12.96, up 288% from a year ago. "Accelerated computing and generative AI have hit the tipping point. Demand is surging worldwide across companies, industries and nations," said Jensen Huang, founder and CEO of NVIDIA.
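The reported growth rates imply year-ago baselines that can be back-calculated from the figures above. As a quick sanity check (figures in billions of dollars; the derived values are approximations, not numbers reported by NVIDIA):

```python
# Back out the year-ago figures implied by NVIDIA's reported growth rates.
# Reported values come from the press release; derived baselines are
# approximations for illustration only.

def implied_base(current: float, growth_pct: float) -> float:
    """Return the prior-period value implied by a percentage increase."""
    return current / (1 + growth_pct / 100)

q4_fy24_revenue = 22.1    # $B, quarter ended January 28, 2024
fy24_revenue = 60.9       # $B, full fiscal 2024

q4_fy23 = implied_base(q4_fy24_revenue, 265)   # implied Q4 FY2023 revenue
fy23 = implied_base(fy24_revenue, 126)         # implied FY2023 revenue

print(round(q4_fy23, 1), round(fy23, 1))  # → 6.1 26.9
```

In other words, a 265% year-over-year gain means the same quarter a year earlier brought in roughly $6.1 billion, and fiscal 2023 revenue was roughly $26.9 billion.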

NVIDIA Joins US Artificial Intelligence Safety Institute Consortium

NVIDIA has joined the National Institute of Standards and Technology's new U.S. Artificial Intelligence Safety Institute Consortium as part of the company's effort to advance safe, secure and trustworthy AI. AISIC will work to create tools, methodologies and standards to promote the safe and trustworthy development and deployment of AI. As a member, NVIDIA will work with NIST—an agency of the U.S. Department of Commerce—and fellow consortium members to advance the consortium's mandate. NVIDIA's participation builds on a record of working with governments, researchers and industries of all sizes to help ensure AI is developed and deployed safely and responsibly.

Through a broad range of development initiatives, including NeMo Guardrails, open-source software for ensuring large language model responses are accurate, appropriate, on topic and secure, NVIDIA actively works to make AI safety a reality. In 2023, NVIDIA endorsed the Biden Administration's voluntary AI safety commitments. Last month, the company announced a $30 million contribution to the U.S. National Science Foundation's National Artificial Intelligence Research Resource pilot program, which aims to broaden access to the tools needed to power responsible AI discovery and innovation.
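NeMo Guardrails expresses the rails described above in Colang configuration files that pair user intents with constrained bot responses. A minimal sketch of an on-topic rail might look like the following (the flow and message names here are illustrative, not taken from the source):

```
define user ask off_topic
  "what do you think about the election?"

define bot refuse off_topic
  "I can only help with questions about our products."

define flow off_topic
  user ask off_topic
  bot refuse off_topic
```

At runtime, the library matches incoming messages against the defined user intents and steers the LLM toward the corresponding canned or constrained reply, keeping responses on topic.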