News Posts matching #Generative AI


SK Hynix's LPDDR5T, World's Fastest Mobile DRAM, Completes Compatibility Validation with Qualcomm

SK hynix Inc. announced today that it has started commercialization of the LPDDR5T (Low Power Double Data Rate 5 Turbo), the world's fastest mobile DRAM with a speed of 9.6 Gbps. The company said the LPDDR5T has been validated as compatible with Qualcomm Technologies' new Snapdragon 8 Gen 3 Mobile Platform, making it the first product of its kind to be verified by the U.S. company.

SK hynix carried out the compatibility validation of the LPDDR5T with support from Qualcomm Technologies, following the completion of the product's development in January; the completed process confirms that the memory works with Snapdragon 8 Gen 3. With validation by Qualcomm Technologies, a leader in wireless telecommunication products and services, and other major mobile AP (Application Processor) providers now complete, SK hynix expects adoption of the LPDDR5T to grow rapidly.

Qualcomm Launches Premium Snapdragon 8 Gen 3 to Bring Generative AI to the Next Wave of Flagship Smartphones

At Snapdragon Summit, Qualcomm Technologies, Inc. today announced its latest premium mobile platform, the Snapdragon 8 Gen 3—a true titan of on-device intelligence, premium-tier performance, and power efficiency. As the premium Android smartphone SoC leader, Qualcomm Technologies' latest processor will be adopted for flagship devices by global OEMs and smartphone brands including ASUS, Honor, iQOO, MEIZU, NIO, Nubia, OnePlus, OPPO, realme, Redmi, RedMagic, Sony, vivo, Xiaomi, and ZTE.

"Snapdragon 8 Gen 3 infuses high-performance AI across the entire system to deliver premium-level performance and extraordinary experiences to consumers. This platform unlocks a new era of generative AI enabling users to generate unique content, help with productivity, and other breakthrough use cases." said Chris Patrick, senior vice president and general manager of mobile handsets, Qualcomm Technologies, Inc. "Each year, we set out to design leading features and technologies that will power our latest Snapdragon 8-series mobile platform and the next generation of flagship Android devices. The Snapdragon 8 Gen 3 delivers."

IDC Forecasts Spending on GenAI Solutions Will Reach $143 Billion in 2027 with a Five-Year Compound Annual Growth Rate of 73.3%

A new forecast from International Data Corporation (IDC) shows that enterprises will invest nearly $16 billion worldwide on GenAI solutions in 2023. This spending, which includes GenAI software as well as related infrastructure hardware and IT/business services, is expected to reach $143 billion in 2027 with a compound annual growth rate (CAGR) of 73.3% over the 2023-2027 forecast period. This is more than twice the rate of growth in overall AI spending and almost 13 times greater than the CAGR for worldwide IT spending over the same period.
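
As a quick sanity check on the headline figure, the stated CAGR follows directly from the two spending endpoints IDC provides. A minimal sketch in Python, using the rounded $16 billion and $143 billion values, so the result only approximates the published 73.3%:

```python
# Back-of-the-envelope check of IDC's GenAI spending CAGR.
# Inputs are the rounded endpoints from the forecast, so the result is approximate.
start_2023 = 16.0      # worldwide GenAI spending in 2023, $ billions
end_2027 = 143.0       # forecast spending in 2027, $ billions
periods = 2027 - 2023  # four compounding periods

cagr = (end_2027 / start_2023) ** (1 / periods) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~72.9%, in line with IDC's published 73.3%
```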

"Generative AI is more than a fleeting trend or mere hype. It is a transformative technology with far-reaching implications and business impact," says Ritu Jyoti, group vice president, Worldwide Artificial Intelligence and Automation market research and advisory services at IDC. "With ethical and responsible implementation, GenAI is poised to reshape industries, changing the way we work, play, and interact with the world."

Baidu Launches ERNIE 4.0 Foundation Model, Leading a New Wave of AI-Native Applications

Baidu, Inc., a leading AI company with a strong Internet foundation, today hosted its annual flagship technology conference Baidu World 2023 in Beijing, marking the conference's return to an offline format after four years. With the theme "Prompt the World," this year's Baidu World conference saw Baidu launch ERNIE 4.0, Baidu's next-generation and most powerful foundation model offering drastically enhanced core AI capabilities. Baidu also showcased some of its most popular applications, solutions, and products re-built around the company's state-of-the-art generative AI.

"ERNIE 4.0 has achieved a full upgrade with drastically improved performance in understanding, generation, reasoning, and memory," Robin Li, Co-founder, Chairman and CEO of Baidu, said at the event. "These four core capabilities form the foundation of AI-native applications and have now unleashed unlimited opportunities for new innovations."

Report: Global PC Shipments Decline Again in the Third Quarter of 2023 Amid Signs of Market Improvement

The downward spiral for PC shipments continued during the third quarter of 2023 (3Q23) as global volumes declined 7.6% year over year with 68.2 million PCs shipped, according to preliminary results from the International Data Corporation (IDC) Worldwide Quarterly Personal Computing Device Tracker. Though demand and the global economy remain subdued, PC shipments have increased in each of the last two quarters, slowing the rate of annual decline and indicating that the market has moved past the bottom of the trough.
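
To illustrate what the 7.6% decline implies for the year-ago base, a minimal back-of-the-envelope sketch (IDC's figures are rounded, so the result is approximate):

```python
# Roughly back out the 3Q22 shipment base implied by IDC's rounded 3Q23 figures.
q3_2023_shipments_m = 68.2  # million PCs shipped in 3Q23
yoy_decline = 0.076         # 7.6% year-over-year decline

q3_2022_implied_m = q3_2023_shipments_m / (1 - yoy_decline)
print(f"Implied 3Q22 shipments: {q3_2022_implied_m:.1f} million units")  # ~73.8 million
```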

PC inventory has also become leaner in the past few months and is near healthy levels in most channels. However, downward pressure on pricing persists and will likely remain an issue within the consumer and business sectors. While most of the top 5 vendors experienced double-digit declines during the quarter, Apple's outsized decline was the result of unfavorable year-over-year comparisons as the company recovered from a COVID-related halt in production during 3Q22. Meanwhile, HP's growth was largely due to the normalizing of inventory.

Dell Technologies Expands Generative AI Portfolio

Dell Technologies expands its Dell Generative AI Solutions portfolio, helping businesses transform how they work along every step of their generative AI (GenAI) journeys. "To maximize AI efforts and support workloads across public clouds, on-premises environments and at the edge, companies need a robust data foundation with the right infrastructure, software and services," said Jeff Boudreau, chief AI officer, Dell Technologies. "That's what we are building with our expanded validated designs, professional services, modern data lakehouse and the world's broadest GenAI solutions portfolio."

Customizing GenAI models to maximize proprietary data
The Dell Validated Design for Generative AI with NVIDIA for Model Customization offers pre-trained models that extract intelligence from data without building models from scratch. This solution provides best practices for customizing and fine-tuning GenAI models based on desired outcomes while helping keep information secure and on-premises. With a scalable blueprint for customization, organizations now have multiple ways to tailor GenAI models to accomplish specific tasks with their proprietary data. Its modular and flexible design supports a wide range of computational requirements and use cases, spanning the training of diffusion models, transfer learning, and prompt tuning.
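
Dell does not publish code with this announcement, but the customization techniques it names (fine-tuning pre-trained models and prompt tuning against proprietary data) are commonly implemented with parameter-efficient methods. A minimal, illustrative sketch using the open-source Hugging Face transformers and peft libraries; the base model name, target modules, and hyperparameters are placeholder assumptions, not details of the Dell Validated Design:

```python
# Illustrative parameter-efficient fine-tuning (LoRA) of a pre-trained causal LM.
# Model choice and hyperparameters are placeholder assumptions for this sketch;
# they are not taken from the Dell Validated Design for Generative AI.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # hypothetical on-premises base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trained
```

Because only the adapter weights are updated, the proprietary training data and the customized model can both remain on-premises, which is the point Dell emphasizes.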

TSMC Announces Breakthrough Set to Redefine the Future of 3D IC

TSMC today announced the new 3Dblox 2.0 open standard and major achievements of its Open Innovation Platform (OIP) 3DFabric Alliance at the TSMC 2023 OIP Ecosystem Forum. The 3Dblox 2.0 features early 3D IC design capability that aims to significantly boost design efficiency, while the 3DFabric Alliance continues to drive memory, substrate, testing, manufacturing, and packaging integration. TSMC continues to push the envelope of 3D IC innovation, making its comprehensive 3D silicon stacking and advanced packaging technologies more accessible to every customer.

"As the industry shifted toward embracing 3D IC and system-level innovation, the need for industry-wide collaboration has become even more essential than it was when we launched OIP 15 years ago," said Dr. L.C. Lu, TSMC fellow and vice president of Design and Technology Platform. "As our sustained collaboration with OIP ecosystem partners continues to flourish, we're enabling customers to harness TSMC's leading process and 3DFabric technologies to reach an entirely new level of performance and power efficiency for the next-generation artificial intelligence (AI), high-performance computing (HPC), and mobile applications."

Run AI on Your PC? NVIDIA GeForce Users Are Ahead of the Curve

Generative AI is no longer just for tech giants. With GeForce, it's already at your fingertips. Gone are the days when AI was the domain of sprawling data centers or elite researchers. For GeForce RTX users, AI is now running on your PC. It's personal, enhancing every keystroke, every frame and every moment. Gamers are already enjoying the benefits of AI in over 300 RTX games. Meanwhile, content creators have access to over 100 RTX creative and design apps, with AI enhancing everything from video and photo editing to asset generation. And for GeForce enthusiasts, it's just the beginning. RTX is the platform for today and the accelerator that will power the AI of tomorrow.

How Did AI and Gaming Converge?
NVIDIA pioneered the integration of AI and gaming with DLSS, a technique that uses AI to automatically generate pixels in video games and has increased frame rates by up to 4x. And with the recent introduction of DLSS 3.5, NVIDIA has enhanced the visual quality in some of the world's top titles, setting a new standard for visually richer and more immersive gameplay. But NVIDIA's AI integration doesn't stop there. Tools like RTX Remix empower game modders to remaster classic content using high-quality textures and materials generated by AI.

ITRI Leads Global Semiconductor Collaboration for Heterogeneous Integration to Pioneer Pilot Production Solutions

The introduction of Generative AI (GAI) has significantly increased the demand for advanced semiconductor chips, drawing greater attention to the development of complex calculations for large-scale AI models and high-speed transmission interfaces. To help the industry grasp the keys to high-end semiconductor manufacturing and integration capabilities, the Heterogeneous Integrated Chiplet System Package (Hi-CHIP) Alliance brings together leading semiconductor companies from Taiwan and around the world to provide comprehensive services, spanning packaging design, testing and verification, and pilot production. Since its establishment in 2021, the alliance has attracted important industry players as members, including EVG, Kulicke and Soffa (K&S), USI, Raytek Semiconductor, Unimicron, DuPont, and Brewer Science. Looking forward, the alliance is set to actively explore its global market potential.

Dr. Shih-Chieh Chang, General Director of Electronic and Optoelectronic System Research Laboratories at ITRI and Chairman of the Hi-CHIP Alliance, indicated that advanced manufacturing processes have led to a considerable increase in IC design cycles and costs. Multi-dimensional chip design and heterogeneous integrated packaging architectures are key tools for tackling this demand in semiconductors. On top of that, the advent of GAI such as ChatGPT, which demands substantial computing power and transmission speed, requires even higher levels of integration capacity in chip manufacturing. ITRI has been committed to developing manufacturing technologies and upgrading materials and equipment to enhance heterogeneous integration. Achievements include fan-out wafer-level packaging (FOWLP), 2.5D and 3D chips, embedded interposer connections (EIC), and programmable packages. With both local and foreign semiconductor manufacturers as members, the Hi-CHIP Alliance is establishing an advanced packaging process production line to provide an integrated one-stop service platform.

SK hynix Debuts Prototype of First GDDR6-AiM Accelerator Card 'AiMX' for Generative AI

SK hynix unveiled and demonstrated a prototype of AiMX (Accelerator-in-Memory based Accelerator), a generative AI accelerator card based on GDDR6-AiM, at the AI Hardware & Edge AI Summit 2023, held September 12-14 at the Santa Clara Marriott in California. Hosted annually by the UK marketing firm Kisaco Research, the AI Hardware & Edge AI Summit brings together global IT companies and high-profile startups to share their developments in artificial intelligence and machine learning. This is SK hynix's third time participating in the summit.

At the event, the company showcased the prototype of AiMX, an accelerator card that combines multiple GDDR6-AiM chips to further enhance performance, alongside the GDDR6-AiM itself, under the slogan "Boost Your AI: Discover the Power of PIM (Processing-In-Memory) with SK hynix's AiM (Accelerator in Memory)." As a low-power, high-speed memory solution capable of handling large amounts of data, AiMX is set to play a key role in the advancement of data-intensive generative AI systems. The performance of generative AI improves as it is trained on more data, highlighting the need for high-performance products that can be applied to an array of generative AI systems.

Dell Delivers Second Quarter Fiscal 2024 Financial Results

Dell Technologies announces financial results for its fiscal 2024 second quarter. Revenue was $22.9 billion, down 13% year-over-year and up 10% sequentially. The company generated operating income of $1.2 billion and non-GAAP operating income of $2 billion, down 8% and up 1% year-over-year, respectively. Diluted earnings per share was $0.63, and non-GAAP diluted earnings per share was $1.74, down 7% and up 4% year-over-year, respectively. Cash flow from operations for the second quarter was $3.2 billion, driven by working capital improvements, sequential growth and profitability. The company has generated $8.1 billion of cash flow from operations throughout the last 12 months.

Dell ended the quarter with remaining performance obligations of $39 billion, recurring revenue of $5.6 billion, up 8% year-over-year, and deferred revenue of $30.3 billion, up 8% year-over-year, primarily due to increases in service and software maintenance agreements. Cash and investments were $9.9 billion, and the company returned $525 million to shareholders in the second quarter through share repurchases and dividends.

Google Introduces Cloud TPU v5e and Announces A3 Instance Availability

We're at a once-in-a-generation inflection point in computing. The traditional ways of designing and building computing infrastructure are no longer adequate for the exponentially growing demands of workloads like generative AI and LLMs. In fact, the number of parameters in LLMs has increased by 10x per year over the past five years. As a result, customers need AI-optimized infrastructure that is both cost effective and scalable.
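
Taken at face value, a 10x annual increase compounds into a very large cumulative factor over five years; a one-line illustration:

```python
# Cumulative parameter growth implied by "10x per year over the past five years".
annual_factor = 10
years = 5
print(f"Cumulative growth: {annual_factor ** years:,}x")  # 100,000x
```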

For two decades, Google has built some of the industry's leading AI capabilities: from the creation of Google's Transformer architecture that makes gen AI possible, to our AI-optimized infrastructure, which is built to deliver the global scale and performance required by Google products that serve billions of users like YouTube, Gmail, Google Maps, Google Play, and Android. We are excited to bring decades of innovation and research to Google Cloud customers as they pursue transformative opportunities in AI. We offer a complete solution for AI, from computing infrastructure optimized for AI to the end-to-end software and services that support the full lifecycle of model training, tuning, and serving at global scale.

NVIDIA Announces NVIDIA OVX Servers Featuring New NVIDIA L40S GPU for Generative AI and Industrial Digitalization

NVIDIA today announced NVIDIA OVX servers featuring the new NVIDIA L40S GPU, a powerful, universal data center processor designed to accelerate the most compute-intensive, complex applications, including AI training and inference, 3D design and visualization, video processing and industrial digitalization with the NVIDIA Omniverse platform. The new GPU powers accelerated computing workloads for generative AI, which is transforming workflows and services across industries, including text, image and video generation, chatbots, game development, product design and healthcare.

"As generative AI transforms every industry, enterprises are increasingly seeking large-scale compute resources in the data center," said Bob Pette, vice president of professional visualization at NVIDIA. "OVX systems with NVIDIA L40S GPUs accelerate AI, graphics and video processing workloads, and meet the demanding performance requirements of an ever-increasing set of complex and diverse applications."

Dell Technologies Expands AI Offerings, in Collaboration with NVIDIA

Dell Technologies introduces new offerings to help customers quickly and securely build generative AI (GenAI) models on-premises to accelerate improved outcomes and drive new levels of intelligence. The new Dell Generative AI Solutions, expanding upon May's Project Helix announcement, span IT infrastructure, PCs and professional services to simplify the adoption of full-stack GenAI with large language models (LLMs), meeting organizations wherever they are in their GenAI journey. These solutions help organizations of all sizes and across industries securely transform and deliver better outcomes.

"Generative AI represents an inflection point that is driving fundamental change in the pace of innovation while improving the customer experience and enabling new ways to work," Jeff Clarke, vice chairman and co-chief operating officer, Dell Technologies, said on a recent investor call. "Customers, big and small, are using their own data and business context to train, fine-tune and inference on Dell infrastructure solutions to incorporate advanced AI into their core business processes effectively and efficiently."

Micron Delivers Industry's Fastest, Highest-Capacity HBM to Advance Generative AI Innovation

Micron Technology, Inc. today announced it has begun sampling the industry's first 8-high 24 GB HBM3 Gen2 memory with bandwidth greater than 1.2 TB/s and pin speed over 9.2 Gb/s, which is up to a 50% improvement over currently shipping HBM3 solutions. With a 2.5 times performance per watt improvement over previous generations, Micron's HBM3 Gen2 offering sets new records for the critical artificial intelligence (AI) data center metrics of performance, capacity and power efficiency. These Micron improvements reduce training times of large language models like GPT-4 and beyond, deliver efficient infrastructure use for AI inference and provide superior total cost of ownership (TCO).

The foundation of Micron's high-bandwidth memory (HBM) solution is Micron's industry-leading 1β (1-beta) DRAM process node, which allows a 24 Gb DRAM die to be assembled into an 8-high cube within an industry-standard package dimension. Moreover, Micron's 12-high stack with 36 GB capacity will begin sampling in the first quarter of calendar 2024. Micron provides 50% more capacity for a given stack height compared to existing competitive solutions. Micron's HBM3 Gen2 performance-to-power ratio and pin speed improvements are critical for managing the extreme power demands of today's AI data centers. The improved power efficiency is possible because of Micron advancements such as a doubling of the through-silicon vias (TSVs) over competitive HBM3 offerings, thermal impedance reduction through a fivefold increase in metal density, and an energy-efficient data path design.
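
The headline bandwidth figure follows from the pin speed and the stack's data interface width; HBM3 stacks expose a 1024-bit interface. A minimal check, assuming an illustrative pin speed just above the quoted 9.2 Gb/s:

```python
# Per-stack HBM bandwidth = pin speed x interface width.
# HBM3 uses a 1024-bit data interface per stack; Micron quotes only "over 9.2 Gb/s",
# so the 9.4 Gb/s value here is an illustrative assumption.
interface_width_bits = 1024
pin_speed_gbps = 9.4

bandwidth_GBps = pin_speed_gbps * interface_width_bits / 8  # gigabytes per second
print(f"Approximate stack bandwidth: {bandwidth_GBps / 1000:.2f} TB/s")  # ~1.20 TB/s
```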

NVIDIA DGX Cloud Now Available to Supercharge Generative AI Training

NVIDIA DGX Cloud - which delivers tools that can turn nearly any company into an AI company - is now broadly available, with thousands of NVIDIA GPUs online on Oracle Cloud Infrastructure, as well as NVIDIA infrastructure located in the U.S. and U.K. Unveiled at NVIDIA's GTC conference in March, DGX Cloud is an AI supercomputing service that gives enterprises immediate access to the infrastructure and software needed to train advanced models for generative AI and other groundbreaking applications.

"Generative AI has made the rapid adoption of AI a business imperative for leading companies in every industry, driving many enterprises to seek more accelerated computing infrastructure," said Pat Moorhead, chief analyst at Moor Insights & Strategy. Generative AI could add more than $4 trillion to the economy annually, turning proprietary business knowledge across a vast swath of the world's industries into next-generation AI applications, according to recent estimates by global management consultancy McKinsey.

Cerebras and G42 Unveil World's Largest Supercomputer for AI Training with 4 ExaFLOPS

Cerebras Systems, the pioneer in accelerating generative AI, and G42, the UAE-based technology holding group, today announced Condor Galaxy, a network of nine interconnected supercomputers, offering a new approach to AI compute that promises to significantly reduce AI model training time. The first AI supercomputer on this network, Condor Galaxy 1 (CG-1), has 4 exaFLOPs and 54 million cores. Cerebras and G42 are planning to deploy two more such supercomputers, CG-2 and CG-3, in the U.S. in early 2024. With a planned capacity of 36 exaFLOPs in total, this unprecedented supercomputing network will revolutionize the advancement of AI globally.
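
The planned network capacity is straightforward arithmetic over the nine systems, and the per-core figure gives a rough sense of the wafer-scale design (these use the AI-training FLOPS quoted above, so the per-core number is only indicative):

```python
# Quick arithmetic on the Condor Galaxy figures quoted above.
per_system_exaflops = 4        # CG-1 capacity
planned_systems = 9            # nine interconnected supercomputers
cores_per_system = 54_000_000  # CG-1 core count

total_exaflops = per_system_exaflops * planned_systems
flops_per_core = per_system_exaflops * 1e18 / cores_per_system
print(f"Planned network capacity: {total_exaflops} exaFLOPs")           # 36
print(f"Rough per-core throughput: {flops_per_core / 1e9:.0f} GFLOPS")  # ~74
```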

"Collaborating with Cerebras to rapidly deliver the world's fastest AI training supercomputer and laying the foundation for interconnecting a constellation of these supercomputers across the world has been enormously exciting. This partnership brings together Cerebras' extraordinary compute capabilities, together with G42's multi-industry AI expertise. G42 and Cerebras' shared vision is that Condor Galaxy will be used to address society's most pressing challenges across healthcare, energy, climate action and more," said Talal Alkaissi, CEO of G42 Cloud, a subsidiary of G42.

NVIDIA Espouses Generative AI for Improved Productivity Across Industries

A watershed moment on Nov. 22, 2022, was mostly virtual, yet it shook the foundations of nearly every industry on the planet. On that day, OpenAI released ChatGPT, the most advanced artificial intelligence chatbot ever developed. This set off demand for generative AI applications that help businesses become more efficient, from providing consumers with answers to their questions to accelerating the work of researchers as they seek scientific breakthroughs, and much, much more.

Businesses that previously dabbled in AI are now rushing to adopt and deploy the latest applications. Generative AI—the ability of algorithms to create new text, images, sounds, animations, 3D models and even computer code—is moving at warp speed, transforming the way people work and play. By employing large language models (LLMs) to handle queries, the technology can dramatically reduce the time people devote to manual tasks like searching for and compiling information.

Jensen Huang & Leading EU Generative AI Execs Participated in Fireside Chat

Three leading European generative AI startups joined NVIDIA founder and CEO Jensen Huang this week to talk about the new era of computing. More than 500 developers, researchers, entrepreneurs and executives from across Europe and further afield packed into the Spindler and Klatt, a sleek, riverside gathering spot in Berlin. Huang started the reception by touching on the message he delivered Monday at the Berlin Summit for Earth Virtualization Engines (EVE), an international collaboration focused on climate science. He shared details of NVIDIA's Earth-2 initiative and how accelerated computing, AI-augmented simulation and interactive digital twins drive climate science research.

Before sitting down for a fireside chat with the founders of the three startups, Huang introduced some "special guests" to the audience—four of the world's leading climate modeling scientists, whom he called the "unsung heroes" of saving the planet. "These scientists have dedicated their careers to advancing climate science," said Huang. "With the vision of EVE, they are the architects of the new era of climate science."

Oracle Fusion Cloud HCM Enhanced with Generative AI, Projected to Boost HR Productivity

Oracle today announced the addition of generative AI-powered capabilities within Oracle Fusion Cloud Human Capital Management (HCM). Supported by the Oracle Cloud Infrastructure (OCI) generative AI service, the new capabilities are embedded in existing HR processes to drive faster business value, improve productivity, enhance the candidate and employee experience, and streamline HR processes.

"Generative AI is boosting productivity and unlocking a new world of skills, ideas, and creativity that can have an immediate impact in the workplace," said Chris Leone, executive vice president, applications development, Oracle Cloud HCM. "With the ability to summarize, author, and recommend content, generative AI helps to reduce friction as employees complete important HR functions. For example, with the new embedded generative AI capabilities in Oracle Cloud HCM, our customers will be able to take advantage of large language models to drastically reduce the time required to complete tasks, improve the employee experience, enhance the accuracy of workforce insights, and ultimately increase business value."

IBM Study Finds That CEOs are Embracing Generative AI

A new global study by the IBM Institute for Business Value found that nearly half of CEOs surveyed identify productivity as their highest business priority—up from sixth place in 2022. They recognize that technology modernization is key to achieving their productivity goals, ranking it as their second-highest priority. Yet CEOs can face key barriers as they race to modernize and adopt new technologies like generative AI.

The annual CEO study, "CEO decision-making in the age of AI: Act with intention," found that three-quarters of CEO respondents believe competitive advantage will depend on who has the most advanced generative AI. However, executives are also weighing potential risks or barriers of the technology, such as bias, ethics, and security. More than half (57%) of CEOs surveyed are concerned about data security and 48% worry about bias or data accuracy.