News Posts matching #Artificial Intelligence


Taiwan Dominates Global AI Server Supply - Government Reportedly Estimates 90% Share

The Taiwanese Ministry of Economic Affairs (MOEA) managed to herd government representatives and leading Information and Communication Technology (ICT) industry figures together for an important meeting, according to DigiTimes Asia. The report suggests that discussion focused on the anticipated growth of Taiwan's ICT industry—an analysis of current market trends revealed that the nation dominates the AI server segment. The MOEA has (allegedly) determined that Taiwan ships 90% of global AI server equipment—DigiTimes claims (based on insider info) that "American brand vendors are expected to source their AI servers from Taiwanese partners." North American customers could presently be 100% reliant on Taiwanese-produced equipment—a scenario that potentially complicates ongoing international tensions.

The report posits that involved parties have formed plans to seize opportunities within ever-growing global demand for AI hardware—a 90% market share is apparently not enough for some very ambitious industry bosses—although manufacturers will need to clear several rising cost hurdles. Key components for AI servers are reportedly priced much higher than vanilla server parts—DigiTimes believes that AI processor/accelerator chips cost close to ten times more than general-purpose server CPUs. Similar price hikes have reportedly affected AI-adjacent component supply chains—notably cooling, power supplies and passive parts. Taiwanese manufacturers have spread operations around the world, but industry watchers (largely) believe that the best equipment gets produced on home ground—global expansions are underway, perhaps inching closer to better-balanced supply conditions.

Tiny Corp. Pauses Development of AMD Radeon GPU-based Tinybox AI Cluster

George Hotz and his Tiny Corporation colleagues were pinning their hopes on AMD delivering some good news earlier this month. The development of the "TinyBox" AI compute cluster hit some major roadblocks a couple of weeks ago—at the time, Radeon RX 7900 XTX GPU firmware was not gelling with Tiny Corp.'s setup. Hotz expressed "70% confidence" in AMD approving the open-sourcing of certain bits of firmware. At the time of writing, this has not transpired—this week the Tiny Corp. social media account has, once again, switched to an "all guns blazing" mode. Hotz and Co. disclosed a few weeks ago that they were dabbling with Intel Arc graphics cards, and NVIDIA hardware is another possible route, according to freshly posted open thoughts.

Yesterday, it was confirmed that the young startup organization had paused its utilization of XFX Speedster MERC310 RX 7900 XTX graphics cards: "the driver is still very unstable, and when it crashes or hangs we have no way of debugging it. We have no way of dumping the state of a GPU. Apparently it isn't just the MES causing these issues, it's also the Command Processor (CP). After seeing how open Tenstorrent is, it's hard to deal with this. With Tenstorrent, I feel confident that if there's an issue, I can debug and fix it. With AMD, I don't." The $15,000 TinyBox system relies on "cheaper" gaming-oriented GPUs, rather than traditional enterprise solutions—this oddball approach has attracted a number of customers, but the latest announcements likely signal another delay. Yesterday's tweet continued to state: "we are exploring Intel, working on adding Level Zero support to tinygrad. We also added a $400 bounty for XMX support. We are also (sadly) exploring a 6x GeForce RTX 4090 GPU box. At least we know the software is good there. We will revisit AMD once we have an open and reproducible build process for the driver and firmware. We are willing to dive really deep into hardware to make it amazing. But without access, we can't."

Ubisoft Exploring Generative AI, Could Revolutionize NPC Narratives

Have you ever dreamed of having a real conversation with an NPC in a video game? Not just one gated within a dialogue tree of pre-determined answers, but an actual conversation, conducted through spontaneous action and reaction? Lately, a small R&D team at Ubisoft's Paris studio, working with Nvidia's Audio2Face application and Inworld's Large Language Model (LLM), has been experimenting with generative AI in an attempt to turn this dream into a reality. Their project, NEO NPC, uses GenAI to prod at the limits of how a player can interact with an NPC without breaking the authenticity of the situation they are in, or the character of the NPC itself.

Considering that word—authenticity—the project has had to be a hugely collaborative effort across artistic and scientific disciplines. Generative AI is a hot topic of conversation in the videogame industry, and Senior Vice President of Production Technology Guillemette Picard is keen to stress that the goal behind all genAI projects at Ubisoft is to bring value to the player; and that means continuing to focus on human creativity behind the scenes. "The way we worked on this project, is always with our players and our developers in mind," says Picard. "With the player in mind, we know that developers and their creativity must still drive our projects. Generative AI is only of value if it has value for them."

IBM Praises EU Parliament's Approval of EU AI Act

IBM applauds the EU Parliament's decision to adopt the EU AI Act, a significant milestone in establishing responsible AI regulation in the European Union. The EU AI Act provides a much-needed framework for ensuring transparency, accountability, and human oversight in developing and deploying AI technologies. While important work must be done to ensure the Act is successfully implemented, IBM believes the regulation will foster trust and confidence in AI systems while promoting innovation and competitiveness.

"I commend the EU for its leadership in passing comprehensive, smart AI legislation. The risk-based approach aligns with IBM's commitment to ethical AI practices and will contribute to building open and trustworthy AI ecosystems," said Christina Montgomery, Vice President and Chief Privacy & Trust Officer at IBM. "IBM stands ready to lend our technology and expertise - including our watsonx.governance product - to help our clients and other stakeholders comply with the EU AI Act and upcoming legislation worldwide so we can all unlock the incredible potential of responsible AI." For more information, visit watsonx.governance and ibm.com/consulting/ai-governance.

NVIDIA to Showcase AI-generated "Large Nature Model" at GTC 2024

The ecosystem around NVIDIA's technologies has always been verdant—but this is absurd. After a stunning premiere at the World Economic Forum in Davos, immersive artworks based on Refik Anadol Studio's Large Nature Model will come to the U.S. for the first time at NVIDIA GTC. Offering a deep dive into the synergy between AI and the natural world, Anadol's multisensory work, "Large Nature Model: A Living Archive," will be situated prominently on the main concourse of the San Jose Convention Center, where the global AI event is taking place, from March 18-21.

Fueled by NVIDIA's advanced AI technology, including powerful DGX A100 stations and high-performance GPUs, the exhibit offers a captivating journey through our planet's ecosystems with stunning visuals, sounds and scents. These scenes are rendered in breathtaking clarity across screens with a total output of 12.5 million pixels, immersing attendees in an unprecedented digital portrayal of Earth's ecosystems. Refik Anadol, recognized by The Economist as "the artist of the moment," has emerged as a key figure in AI art. His work, notable for its use of data and machine learning, places him at the forefront of a generation pushing the boundaries between technology, interdisciplinary research and aesthetics. Anadol's influence reflects a wider movement in the art world towards embracing digital innovation, setting new precedents in how art is created and experienced.

MiTAC Unleashes Revolutionary Server Solutions, Powering Ahead with 5th Gen Intel Xeon Scalable Processors Accelerated by Intel Data Center GPUs

MiTAC Computing Technology, a subsidiary of MiTAC Holdings Corp., proudly reveals its groundbreaking suite of server solutions that deliver unsurpassed capabilities with the 5th Gen Intel Xeon Scalable Processors. MiTAC introduces cutting-edge signature platforms that seamlessly integrate Intel Data Center GPUs (both the Intel Max Series and the Intel Flex Series), unleashing an unparalleled leap in computing performance for HPC and AI applications.

MiTAC Announces its Full Array of Platforms Supporting the Latest 5th Gen Intel Xeon Scalable Processors
Last year, Intel transitioned the right to manufacture and sell products based on Intel Data Center Solution Group designs to MiTAC. MiTAC confidently announces a transformative upgrade to its product offerings, unveiling advanced platforms that epitomize the future of computing. Featuring up to 64 cores, expanded shared cache, increased UPI and DDR5 support, the latest 5th Gen Intel Xeon Scalable Processors deliver remarkable performance-per-watt gains across various workloads. MiTAC's Intel Server M50FCP Family and Intel Server D50DNP Family fully support the latest 5th Gen Intel Xeon Scalable Processors through a quick BIOS update and minor technical resource revisions, bringing unsurpassed performance to diverse computing environments.

NVIDIA Announces Q4 and Fiscal 2024 Results, Clocks 126% YoY Revenue Growth, Gaming Just 1/6th of Data Center Revenues

NVIDIA (NASDAQ: NVDA) today reported revenue for the fourth quarter ended January 28, 2024, of $22.1 billion, up 22% from the previous quarter and up 265% from a year ago. For the quarter, GAAP earnings per diluted share was $4.93, up 33% from the previous quarter and up 765% from a year ago. Non-GAAP earnings per diluted share was $5.16, up 28% from the previous quarter and up 486% from a year ago.

For fiscal 2024, revenue was up 126% to $60.9 billion. GAAP earnings per diluted share was $11.93, up 586% from a year ago. Non-GAAP earnings per diluted share was $12.96, up 288% from a year ago. "Accelerated computing and generative AI have hit the tipping point. Demand is surging worldwide across companies, industries and nations," said Jensen Huang, founder and CEO of NVIDIA.
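As a quick sanity check of the headline numbers (a rough back-calculation, not a figure from NVIDIA's filings), the reported 126% full-year growth to $60.9 billion implies a prior-year revenue of roughly $27 billion:

```python
# Back out the implied prior-period figure from a reported % increase.
# Illustrative arithmetic only; inputs come from the press release above.
def implied_prior(current: float, growth_pct: float) -> float:
    """Prior-period value implied by `current` after `growth_pct` growth."""
    return current / (1 + growth_pct / 100)

fy23_implied = implied_prior(60.9, 126)  # fiscal 2024 revenue was $60.9 bn
print(f"Implied fiscal 2023 revenue: ${fy23_implied:.1f} bn")  # ~ $26.9 bn
```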

NVIDIA Joins US Artificial Intelligence Safety Institute Consortium

NVIDIA has joined the National Institute of Standards and Technology's new U.S. Artificial Intelligence Safety Institute Consortium (AISIC) as part of the company's effort to advance safe, secure and trustworthy AI. AISIC will work to create tools, methodologies and standards to promote the safe and trustworthy development and deployment of AI. As a member, NVIDIA will work with NIST—an agency of the U.S. Department of Commerce—and fellow consortium members to advance the consortium's mandate. NVIDIA's participation builds on a record of working with governments, researchers and industries of all sizes to help ensure AI is developed and deployed safely and responsibly.

Through a broad range of development initiatives, including NeMo Guardrails, open-source software for ensuring large language model responses are accurate, appropriate, on topic and secure, NVIDIA actively works to make AI safety a reality. In 2023, NVIDIA endorsed the Biden Administration's voluntary AI safety commitments. Last month, the company announced a $30 million contribution to the U.S. National Science Foundation's National Artificial Intelligence Research Resource pilot program, which aims to broaden access to the tools needed to power responsible AI discovery and innovation.

IDC Forecasts Artificial Intelligence PCs to Account for Nearly 60% of All PC Shipments by 2027

A new forecast from International Data Corporation (IDC) shows shipments of artificial intelligence (AI) PCs - personal computers with specific system-on-a-chip (SoC) capabilities designed to run generative AI tasks locally - growing from nearly 50 million units in 2024 to more than 167 million in 2027. By the end of the forecast, IDC expects AI PCs will represent nearly 60% of all PC shipments worldwide.

"As we enter a new year, the hype around generative AI has reached a fever pitch, and the PC industry is running fast to capitalize on the expected benefits of bringing AI capabilities down from the cloud to the client," said Tom Mainelli, group vice president, Devices and Consumer Research. "Promises around enhanced user productivity via faster performance, plus lower inferencing costs, and the benefit of on-device privacy and security, have driven strong IT decision-maker interest in AI PCs. In 2024, we'll see AI PC shipments begin to ramp, and over the next few years, we expect the technology to move from niche to a majority."

Nubis Communications and Alphawave Semi Showcase First Demonstration of Optical PCI Express 6.0 Technology

Nubis Communications, Inc., provider of low-latency high-density optical interconnect (HDI/O), and Alphawave Semi (LN: AWE), a global leader in high-speed connectivity and compute silicon for the world's technology infrastructure, today announced their upcoming demonstration of PCI Express 6.0 technology driving over an optical link at 64 GT/s per lane. Data Center providers are exploring the use of PCIe over optics to greatly expand the reach and flexibility of the interconnect for memory, CPUs, GPUs, and custom silicon accelerators, enabling more scalable and energy-efficient clusters for Artificial Intelligence and Machine Learning (AI/ML) architectures.

Nubis Communications and Alphawave Semi will be showing a live demonstration in the Tektronix booth at DesignCon, the leading conference for advanced chip, board, and system design technologies. An Alphawave Semi PCIe Subsystem with PiCORE Controller IP and PipeCORE PHY will directly drive and receive PCIe 6.0 traffic through a Nubis XT1600 linear optical engine to demonstrate a PCIe 6.0 optical link at 64GT/s per fiber, with optical output waveform measured on a Tektronix sampling scope with a high-speed optical probe.
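For a sense of scale, 64 GT/s per lane corresponds to 8 GB/s of raw unidirectional bandwidth per lane. This is a back-of-envelope sketch: PCIe 6.0's FLIT and FEC framing overhead, which trims throughput by a few percent, is ignored here.

```python
# Raw PCIe 6.0 bandwidth at 64 GT/s per lane (1 bit per transfer).
# Framing overhead is ignored, so real-world throughput is slightly lower.
def raw_gbytes_per_s(gt_per_s: float, lanes: int = 1) -> float:
    """Unidirectional raw bandwidth in GB/s for a PCIe link."""
    return gt_per_s * lanes / 8  # 8 bits per byte

print(raw_gbytes_per_s(64))      # 8.0 GB/s per lane
print(raw_gbytes_per_s(64, 16))  # 128.0 GB/s for a full x16 link
```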

Korea Quantum Computing Signs IBM watsonx Deal

IBM has announced (on January 29) that Korea Quantum Computing (KQC) has engaged IBM to offer IBM's most advanced AI software and infrastructure, as well as quantum computing services. KQC's ecosystem of users will have access to IBM's full stack solution for AI, including watsonx, an AI and data platform to train, tune and deploy advanced AI models and software for enterprises. KQC is also expanding its quantum computing collaboration with IBM. Having operated as an IBM Quantum Innovation Center since 2022, KQC will continue to offer access to IBM's global fleet of utility-scale quantum systems over the cloud. Additionally, IBM and KQC plan to deploy an IBM Quantum System Two on-site at KQC in Busan, South Korea by 2028.

"We are excited to work with KQC to deploy AI and quantum systems to drive innovation across Korean industries. With this engagement, KQC clients will have the ability to train, fine-tune, and deploy advanced AI models, using IBM watsonx and advanced AI infrastructure. Additionally, by having the opportunity to access IBM quantum systems over the cloud, today—and a next-generation quantum system in the coming years—KQC members will be able to combine the power of AI and quantum to develop new applications to address their industries' toughest problems," said Darío Gil, IBM Senior Vice President and Director of Research. This collaboration includes an investment in infrastructure to support the development and deployment of generative AI. Plans for the AI-optimized infrastructure include advanced GPUs and IBM's Artificial Intelligence Unit (AIU), managed with Red Hat OpenShift to provide a cloud-native environment. Together, the GPU system and AIU combination is being engineered to offer members state-of-the-art hardware to power AI research and business opportunities.

MSI Unveils AI-Driven Gaming Desktops with NVIDIA GeForce RTX 40 SUPER Series

MSI, a leading innovator in True Gaming hardware, proudly announces the incorporation of NVIDIA GeForce RTX 40 SUPER Series graphics cards into its latest 14th generation AI gaming desktops. The MEG Trident X2 14th, MPG Infinite X2 14th, MPG Trident AS 14th, MAG Infinite S3 14th, and MAG Codex 6 14th, initially featuring NVIDIA GeForce RTX 40 Series graphics cards, now boast the cutting-edge RTX 40 SUPER Series, ushering in a new era of gaming excellence.

At the heart of these 14th gen AI gaming desktops lies the revolutionary RTX 40 SUPER Series: the GeForce RTX 4080 SUPER, GeForce RTX 4070 Ti SUPER, and GeForce RTX 4070 SUPER. The series reshapes the gaming experience with cutting-edge AI capabilities, surpassing the speed of its predecessors. Equipped with RTX platform superpowers, these GPUs elevate the performance of games, applications, and AI tasks, marking a significant advancement in the gaming landscape.

Microsoft Announces Participation in National AI Research Resource Pilot

We are delighted to announce our support for the National AI Research Resource (NAIRR) pilot, a vital initiative highlighted in the President's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This initiative aligns with our commitment to broaden AI research and spur innovation by providing greater computing resources to AI researchers and engineers in academia and non-profit sectors. We look forward to contributing to the pilot and sharing insights that can help inform the envisioned full-scale NAIRR.

The NAIRR's objective is to democratize access to the computational tools essential for advancing AI in critical areas such as safety, reliability, security, privacy, environmental challenges, infrastructure, health care, and education. Advocating for such a resource has been a longstanding goal of ours, one that promises to equalize the field of AI research and stimulate innovation across diverse sectors. As a commissioner on the National Security Commission on AI (NSCAI), I worked with colleagues on the committee to propose an early conception of the NAIRR, underlining our nation's need for this resource as detailed in the NSCAI Final Report. Concurrently, we enthusiastically supported a university-led initiative pursuing a national computing resource. It's rewarding to see these early ideas and endeavors now materialize into a tangible entity.

NVIDIA Contributes $30 Million of Tech to NAIRR Pilot Program

In a major stride toward building a shared national research infrastructure, the U.S. National Science Foundation has launched the National Artificial Intelligence Research Resource pilot program with significant support from NVIDIA. The initiative aims to broaden access to the tools needed to power responsible AI discovery and innovation. It was announced Wednesday in partnership with 10 other federal agencies as well as private-sector, nonprofit and philanthropic organizations. "The breadth of partners that have come together for this pilot underscores the urgency of developing a National AI Research Resource for the future of AI in America," said NSF Director Sethuraman Panchanathan. "By investing in AI research through the NAIRR pilot, the United States unleashes discovery and impact and bolsters its global competitiveness."

NVIDIA's commitment of $30 million in technology contributions over two years is a key factor in enlarging the scale of the pilot, fueling the potential for broader achievements and accelerating the momentum toward full-scale implementation. "The NAIRR is a vision of a national research infrastructure that will provide access to computing, data, models and software to empower researchers and communities," said Katie Antypas, director of the Office of Advanced Cyberinfrastructure at the NSF. "Our primary goals for the NAIRR pilot are to support fundamental AI research and domain-specific research applying AI, reach broader communities, particularly those currently unable to participate in the AI innovation ecosystem, and refine the design for the future full NAIRR," Antypas added.

HBM Industry Revenue Could Double by 2025 - Growth Driven by Next-gen AI GPUs

Samsung, SK hynix, and Micron are considered the top manufacturers of High Bandwidth Memory (HBM)—the HBM3 and HBM3E standards are increasingly in demand, due to widespread deployment of GPUs and accelerators by generative AI companies. Taiwan's Commercial Times proposes that there is an ongoing shortage of HBM components—but this presents a growth opportunity for smaller manufacturers in the region. Naturally, the big-name producers are expected to dive in head first with the development of next-generation models. The aforementioned financial news article cites research conducted by the Gartner group—it predicts that the HBM market will hit an all-time high of $4.976 billion (USD) by 2025.

This estimate is almost double the projected revenue (just over $2 billion) generated by the HBM market in 2023—the explosive growth of generative AI applications has "boosted" demand for the most performant memory standards. The Commercial Times report states that SK hynix is the current HBM3E leader, with Micron and Samsung trailing behind—industry experts believe that stragglers will need to "expand HBM production capacity" in order to stay competitive. SK hynix has partnered with NVIDIA—the GH200 Grace Hopper platform was unveiled last summer, outfitted with the South Korean firm's HBM3e parts. In a similar timeframe, Samsung was named as AMD's preferred supplier of HBM3 packages—as featured within the recently launched Instinct MI300X accelerator. NVIDIA's HBM3E deal with SK hynix is believed to extend to the internal makeup of Blackwell GB100 data-center GPUs. The HBM4 memory standard is expected to be the next major battleground for the industry's hardest hitters.
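Taking those figures at face value, roughly $2 billion in 2023 growing to $4.976 billion in 2025 implies annual growth of just under 60% (back-of-envelope arithmetic from the cited endpoints, not a Gartner figure):

```python
# Implied annual growth rate for the HBM market, 2023 -> 2025.
revenue_2023 = 2.0    # $bn, "just over $2 billion" per the report
revenue_2025 = 4.976  # $bn, Gartner estimate cited above

# Two-year span, so take the square root of the overall ratio.
annual_growth = (revenue_2025 / revenue_2023) ** 0.5 - 1
print(f"Implied annual growth: {annual_growth:.1%}")  # about 58%
```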

LG Ushers in 'Zero Labor Home' With Its Smart Home AI Agent at CES 2024

LG Electronics (LG) is ready to unveil its innovative smart home Artificial Intelligence (AI) agent at CES 2024. LG's smart home AI agent boasts robotic, AI and multi-modal technologies that enable it to move, learn, comprehend and engage in complex conversations. An all-around home manager and companion rolled into one, LG's smart life solution enhances users' daily lives and showcases the company's commitment to realizing its "Zero Labor Home" vision.

With its advanced 'two-legged' wheel design, LG's smart home AI agent is able to navigate the home independently. The intelligent device can verbally interact with users and express emotions through movements made possible by its articulated leg joints. Moreover, the use of multi-modal AI technology, which combines voice and image recognition along with natural language processing, enables the smart home AI agent to understand context and intentions as well as actively communicate with users.

Phison Predicts 2024: Security is Paramount, PCIe 5.0 NAND Flash Infrastructure Imminent as AI Requires More Balanced AI Data Ecosystem

Phison Electronics Corp., a global leader in NAND flash controller and storage solutions, today announced the company's predictions for 2024 trends in NAND flash infrastructure deployment. The company predicts that rapid proliferation of artificial intelligence (AI) technologies will continue apace, with PCIe 5.0-based infrastructure providing high-performance, sustainable support for AI workload consistency as adoption rapidly expands. PCIe 5.0 NAND flash solutions will be at the core of a well-balanced hardware ecosystem, with private AI deployments such as on-premise large language models (LLMs) driving significant growth in both everyday AI and the infrastructure required to support it.

"We are moving past initial excitement over AI toward wider everyday deployment of the technology. In these configurations, high-quality AI output must be achieved by infrastructure designed to be secure, while also being affordable. The organizations that leverage AI to boost productivity will be incredibly successful," said Sebastien Jean, CTO, Phison US. "Building on the widespread proliferation of AI applications, infrastructure providers will be responsible for making certain that AI models do not run up against the limitations of memory - and NAND flash will become central to how we configure data center architectures to support today's developing AI market while laying the foundation for success in our fast-evolving digital future."

Velocity Micro Announces ProMagix G480a and G480i, Two GPU Server Solutions for AI and HPC

Velocity Micro, the premier builder of award-winning enthusiast desktops, laptops, high-performance computing solutions, and professional workstations, announces the immediate availability of the ProMagix G480a and G480i - two GPU servers optimized for High Performance Computing and Artificial Intelligence. Powered by either dual 4th Gen AMD EPYC or dual 4th Gen Intel Xeon Scalable processors, these 4U form factor servers support up to eight dual-slot PCIe Gen 5 GPUs, creating incredible compute power designed specifically for the highest-demand workflows including simulation, rendering, analytics, deep learning, AI, and more. Shipments begin immediately.

"By putting emphasis on scalability, functionality, and performance, we've created a line of server solutions that tie in the legacy of our high-end brand while also offering businesses alternative options for more specialized solutions for the highest demand workflows," said Randy Copeland, President and CEO of Velocity Micro. "We're excited to introduce a whole new market to what we can do."

IDC Forecasts Spending on GenAI Solutions Will Reach $143 Billion in 2027 with a Five-Year Compound Annual Growth Rate of 73.3%

A new forecast from International Data Corporation (IDC) shows that enterprises will invest nearly $16 billion worldwide on GenAI solutions in 2023. This spending, which includes GenAI software as well as related infrastructure hardware and IT/business services, is expected to reach $143 billion in 2027 with a compound annual growth rate (CAGR) of 73.3% over the 2023-2027 forecast period. This is more than twice the rate of growth in overall AI spending and almost 13 times greater than the CAGR for worldwide IT spending over the same period.

"Generative AI is more than a fleeting trend or mere hype. It is a transformative technology with far-reaching implications and business impact," says Ritu Jyoti, group vice president, Worldwide Artificial Intelligence and Automation market research and advisory services at IDC. "With ethical and responsible implementation, GenAI is poised to reshape industries, changing the way we work, play, and interact with the world."

AMD to Acquire Open-Source AI Software Expert Nod.ai

AMD today announced the signing of a definitive agreement to acquire Nod.ai to expand the company's open AI software capabilities. The addition of Nod.ai will bring to AMD an experienced team that has developed industry-leading software technology that accelerates the deployment of AI solutions optimized for AMD Instinct data center accelerators, Ryzen AI processors, EPYC processors, Versal SoCs and Radeon GPUs. The agreement strongly aligns with the AMD AI growth strategy, centered on an open software ecosystem that lowers the barriers of entry for customers through developer tools, libraries and models.

"The acquisition of Nod.ai is expected to significantly enhance our ability to provide AI customers with open software that allows them to easily deploy highly performant AI models tuned for AMD hardware," said Vamsi Boppana, senior vice president, Artificial Intelligence Group at AMD. "The addition of the talented Nod.ai team accelerates our ability to advance open-source compiler technology and enable portable, high-performance AI solutions across the AMD product portfolio. Nod.ai's technologies are already widely deployed in the cloud, at the edge and across a broad range of end point devices today."

NVIDIA Lends Support to Washington's Efforts to Ensure AI Safety

In an event at the White House today, NVIDIA announced support for voluntary commitments that the Biden Administration developed to ensure advanced AI systems are safe, secure and trustworthy. The news came the same day NVIDIA's chief scientist, Bill Dally, testified before a U.S. Senate subcommittee seeking input on potential legislation covering generative AI. Separately, NVIDIA founder and CEO Jensen Huang will join other industry leaders in a closed-door meeting on AI Wednesday with the full Senate.

Seven companies, including Adobe, IBM, Palantir and Salesforce, joined NVIDIA in supporting the eight agreements the Biden-Harris administration released in July with support from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.

d-Matrix Announces $110 Million in Funding for Corsair Inference Compute Platform

d-Matrix, the leader in high-efficiency generative AI compute for data centers, has closed $110 million in a Series B funding round led by Singapore-based global investment firm Temasek. The goal of the fundraise is to enable d-Matrix to begin commercializing Corsair, the world's first Digital In-Memory Compute (DIMC), chiplet-based inference compute platform, following the successful launches of its prior Nighthawk, Jayhawk-I and Jayhawk II chiplets.

d-Matrix's recent silicon announcement, Jayhawk II, is the latest example of how the company is working to fundamentally change the physics of memory-bound compute workloads common in generative AI and large language model (LLM) applications. With the explosion of this revolutionary technology over the past nine months, there has never been a greater need to overcome the memory bottleneck and current technology approaches that limit performance and drive up AI compute costs.

IBM Introduces Watsonx, an Innovative AI Solution Tailored to Business

IBM has formally introduced watsonx, the company's next-generation enterprise-focused artificial intelligence and data platform, together with a marketing campaign promoting it. Global business leaders remain uncertain about the real, transformative power of AI and how to leverage it. The campaign is designed to define and differentiate watsonx as a force multiplier that can accelerate impact for global business leaders as they look to apply AI solutions in new and innovative ways.

The two distinct spots feature a fast-paced, multi-media technique that aims to provide inspiration and guidance around the value proposition of watsonx, while underscoring the need to identify the right AI that will empower businesses to advance objectives and accelerate workloads. These concepts come to life through potential use cases that spotlight the importance of applying AI that is trusted, targeted, and built on the best open technology available.

Samsung Electronics Unveils Industry's Highest-Capacity 12nm-Class 32Gb DDR5 DRAM

Samsung Electronics, a world leader in advanced memory technology, today announced that it has developed the industry's first and highest-capacity 32-gigabit (Gb) DDR5 DRAM using 12 nanometer (nm)-class process technology. This achievement comes after Samsung began mass production of its 12 nm-class 16Gb DDR5 DRAM in May 2023. It solidifies Samsung's leadership in next-generation DRAM technology and signals the next chapter of high-capacity memory.

"With our 12 nm-class 32Gb DRAM, we have secured a solution that will enable DRAM modules of up to 1-terabyte (TB), allowing us to be ideally positioned to serve the growing need for high-capacity DRAM in the era of AI (Artificial Intelligence) and big data," said SangJoon Hwang, Executive Vice President of DRAM Product & Technology at Samsung Electronics. "We will continue to develop DRAM solutions through differentiated process and design technologies to break the boundaries of memory technology."
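The jump from 32 Gb dies to 1 TB modules is straightforward capacity arithmetic; the 8-high stacking layout below is an illustrative assumption, not Samsung's disclosed module design:

```python
# How many 32 Gb DRAM dies a 1 TB module implies (illustrative only).
module_gbit = 1 * 1024 * 8   # 1 TB expressed in gigabits (8192 Gb)
dies = module_gbit // 32     # 256 dies of 32 Gb each
stacks_8_high = dies // 8    # 32 packages if dies are TSV-stacked 8-high
print(dies, stacks_8_high)   # 256 32
```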

NVIDIA Predicted to Pull in $300 Billion AI Revenues by 2027

NVIDIA has been raking in lots of cash this year and hit a major milestone back in late May, with a trillion dollar valuation—its stock price doubled thanks to upward trends in the artificial intelligence market, with growing global demand for AI-hardware. Business Insider believes that Team Green will continue to do very well for itself over the next couple of years: "Mizuho analyst Vijay Rakesh has given NVIDIA's stock price another 20% upside to run—and even this new target of $530 is "conservative," according to a Sunday client note seen by Insider. Rakesh's previous price target for NVIDIA was $400. NVIDIA shares closed 0.7% higher at $446.12 apiece on Monday. The stock has surged 205% so far this year."

Despite the emergence of competing hardware from the likes of AMD and Intel, Rakesh predicts that NVIDIA will maintain a dominant position in the AI chip market until at least 2027: "With demand for generative AI accelerating, we see significant opportunities for hardware suppliers powering the higher compute needs for large-language models, particularly AI powerhouse NVIDIA." Insider reports that the company "could generate around $300 billion in AI-specific revenue by 2027 with a 75% market share of AI server units... That's 10 times his projection of $25 billion to $30 billion in AI revenues this year." Rakesh has reportedly stuck with a $140 buy rating and price target for AMD shares.