News Posts matching #AI


IBM Launches Its Most Advanced Quantum Computers, Fueling New Scientific Value and Progress towards Quantum Advantage

Today at its inaugural IBM Quantum Developer Conference, IBM announced quantum hardware and software advancements that enable complex algorithms to be executed on IBM quantum computers with record levels of scale, speed, and accuracy.

IBM Quantum Heron, the company's most performant quantum processor to date and available in IBM's global quantum data centers, can now leverage Qiskit to accurately run certain classes of quantum circuits with up to 5,000 two-qubit gate operations. Users can apply these capabilities to expand explorations of how quantum computers can tackle scientific problems across materials, chemistry, life sciences, high-energy physics, and more.
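As a rough illustration of the metric IBM quotes, circuit size here is measured by counting two-qubit (entangling) gate operations. The sketch below uses a hypothetical list-of-tuples circuit representation for illustration only; it is not Qiskit's actual data model.

```python
# Toy circuit: a list of (gate_name, qubit_indices) tuples. Hypothetical
# representation, not Qiskit's API; it only illustrates how a "5,000
# two-qubit gates" figure is counted.
def count_two_qubit_gates(circuit):
    """Count gates acting on exactly two qubits (the entangling operations)."""
    return sum(1 for _gate, qubits in circuit if len(qubits) == 2)

# One layer: a Hadamard, two CNOTs (the two-qubit gates), and a rotation.
layer = [("h", (0,)), ("cx", (0, 1)), ("cx", (1, 2)), ("rz", (2,))]
circuit = layer * 2500  # 2 two-qubit gates per layer -> 5,000 total
two_qubit_ops = count_two_qubit_gates(circuit)
```

Single-qubit gates are excluded from the count because two-qubit gates dominate error rates on current hardware, which is why vendors quote them as the capability figure.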

Solidigm Launches D5-P5336 PCIe Data Center SSDs With 122 TB Capacity

Solidigm, a leading provider of innovative NAND flash memory solutions, today announced the world's highest-capacity PCIe solid-state drive (SSD): the 122 TB (terabyte) Solidigm D5-P5336 data center SSD. The D5-P5336 doubles the capacity of Solidigm's earlier 61.44 TB version of the drive and is the world's first SSD with unlimited random-write endurance for five years, making it an ideal solution for AI and data-intensive workloads. Just how much storage is 122.88 TB? Roughly enough for 4K-quality copies of every movie theatrically released in the 1990s, 2.6 times over.
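The capacity claims above can be sanity-checked with quick arithmetic. The movie count and per-file size below are illustrative assumptions, not Solidigm's figures:

```python
# Back-of-envelope check on the D5-P5336 capacity claims.
TB = 1e12  # decimal terabyte, as drive vendors count capacity

drive_capacity = 122.88 * TB
previous_drive = 61.44 * TB
capacity_ratio = drive_capacity / previous_drive  # "doubles the storage space"

movies_1990s = 3000       # assumed number of 1990s theatrical releases
gb_per_4k_movie = 15      # assumed average 4K movie file size in GB
full_set = movies_1990s * gb_per_4k_movie * 1e9
copies = drive_capacity / full_set  # in the ballpark of the "2.6 times over" claim
```

With these assumed inputs the drive holds the full set roughly 2.7 times over, close to the marketing figure; the exact multiple depends on the release count and encode size used.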

Power, thermal, and space constraints on data storage are intensifying as AI adoption increases. Power- and space-efficient, the new 122 TB D5-P5336 delivers industry-leading storage efficiency from the core data center to the edge. Data center operators can deploy the 122 TB D5-P5336 with confidence: Solidigm is the proven QLC (quad-level cell) density leader, with more than 100 EB (exabytes) of QLC-based product shipped since 2018.

Report: GPU Market Records Explosive Growth, Reaching $98.5 Billion in 2024

With the latest industry boom in AI, the demand for more compute power is greater than ever, and Jon Peddie Research's 2024 supply-side GPU market summary report puts the global GPU market at more than $98.5 billion in value for 2024. This staggering figure shows how far the GPU market has come. Once primarily associated with powering consumer gaming rigs with AMD or NVIDIA inside, GPUs have become a key part of the modern tech stack, worth almost $100 billion in 2024 alone. Nowadays, GPUs are found in many products, from smartphones and vehicles to internet-connected devices and data centers.

"Graphics processor units (GPUs) have become ubiquitous and can be found in almost every industrial, scientific, commercial, and consumer product made today," said Dr. Jon Peddie, founder of JPR. "Some market segments, like AI, have grabbed headlines because of their rapid growth and high average selling price (ASP), but they are low-volume compared to other market segments." The report also shows the wide range of companies actively participating in the GPU marketplace, including industry giants like AMD, NVIDIA, and Intel, as well as smaller players from China like Loongson Zhongke, Siroyw, and Lingjiu Micro. Besides discrete GPU solutions, the GPU IP market is very competitive, and millions of chips are shipped with GPU IP every year. Some revenue figures for Chinese companies are not public, but JPR measures them from the supply-chain side, so its estimates are reasonably reliable.

NEC to Build Japan's Newest Supercomputer Based on Intel Xeon 6900P and AMD Instinct MI300A

NEC Corporation (NEC; TSE: 6701) has received an order for a next-generation supercomputer system from Japan's National Institutes for Quantum Science and Technology (QST), under the National Research and Development Agency, and the National Institute for Fusion Science (NIFS), part of the National Institutes of Natural Sciences under the Inter-University Research Institute Corporation. The new supercomputer system is scheduled to be operational from July 2025. The next-generation supercomputer system will feature multi-architecture with the latest CPUs and GPUs and will consist of large storage capacity and a high-speed network. This system is expected to be used for various research and development in the field of fusion science research.

Specifically, the system will be used for precise prediction of experiments and creation of operation scenarios in the ITER project, which is being promoted as an international project, and the Satellite Tokamak (JT-60SA) project, which is being promoted as a Broader Approach activity, and for design of DEMO reactors. The DEMO project promotes large-scale numerical calculations for DEMO design and R&D to accelerate the realization of a DEMO reactor that contributes to carbon neutrality. In addition, NIFS will conduct numerical simulation research using the supercomputer for multi-scale and multi-physics systems, including fusion plasmas, to broadly accelerate research on the science and applications of fusion plasmas, and as an Inter-University Research Institute, will provide universities and research institutes nationwide with opportunities for collaborative research using the state-of-the-art supercomputer.

Japan Plans to Invest $65 Billion to Boost Its Chip Industry

Japan has proposed a $65 billion (or more) plan to strengthen the semiconductor and AI industries in the country through grants and financial support by fiscal year 2030. The government plans to present this proposal at the next parliamentary session. The draft includes support for mass production of next-generation chips, focusing on AI chipmakers such as Rapidus; the government estimates an economic impact of about 160 trillion yen from this investment. Rapidus plans to start mass production of advanced chips in Hokkaido from 2027 and will work with IBM and Belgian research organization Imec.

According to the report from Reuters, Prime Minister Shigeru Ishiba said the government would not issue deficit-financing bonds to fund the support plan, although specific financial details are not yet known. The new initiative builds on last year's 2 trillion yen investment in the chip industry, and it is part of a broader economic package. Expected to be approved by the Cabinet on November 22, the plan calls for combined public and private investment in the semiconductor industry of more than 50 trillion yen over the next decade.

LG and Tenstorrent Expand Partnership to Enhance AI Chip Capabilities

LG Electronics (LG) and Tenstorrent are pleased to announce an expanded collaboration, building on their initial chiplet project to develop System-on-Chips (SoCs) and systems for the global market. Through this partnership, LG aims to enhance its design and development capabilities for AI chips tailored to its products and services, aligning with its vision of "Affectionate Intelligence." LG is dedicated to advancing AI-driven innovation, with a focus on enhancing its AI-powered home appliances and smart home solutions, as well as expanding its capabilities in future mobility and commercial applications.

Recognizing the critical role of high-performance AI semiconductors in implementing AI technology, LG plans to strengthen its in-house development capabilities while collaborating with leading global companies, including Tenstorrent, to boost its AI competitiveness.

AMD Captures 28.7% Desktop Market Share in Q3 2024, Intel Maintains Lead

According to market research firm Mercury Research, the desktop CPU market has witnessed a remarkable transformation, with AMD seizing a substantial 28.7% market share in Q3 2024, a giant leap since the launch of the original Zen architecture in 2017. This 5.7 percentage point surge from the previous quarter is a testament to the company's continuous innovation against the long-standing industry leader, Intel. AMD's year-over-year growth of nearly ten percentage points, fueled by the success of its Ryzen 7000 and 9000 series processors, contrasts starkly with Intel's Raptor Lake processors, which encountered technical hurdles like stability issues. AMD's revenue share soared by 8.5 percentage points, indicating robust performance in premium processor segments. Intel, whose desktop market share declined to 71.3%, attributes the shift to inventory adjustments rather than competitive pressure and still holds the majority.
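The share figures above are internally consistent, as a quick check shows:

```python
# Reconstructing the quarter-over-quarter figures stated in the report.
amd_q3_2024 = 28.7                      # AMD desktop unit share, percent
qoq_gain = 5.7                          # surge from the previous quarter
amd_q2_2024 = amd_q3_2024 - qoq_gain    # implied prior-quarter share: 23.0%

# Desktop share is effectively a two-player split in this report.
intel_q3_2024 = 100.0 - amd_q3_2024     # 71.3%, matching the reported figure
```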

AMD's success story extends beyond desktops, with the company claiming 22.3% of the laptop processor market and 24.2% of the server segment. A significant milestone was reached as AMD's data center division generated $3.549 billion in quarterly revenue, a new record for a company that had barely any data center presence just a decade ago. Driven by strong EPYC processor sales to hyperscalers and cloud providers, along with Instinct MI300X accelerators for AI applications, AMD's data center deployments are accelerating massively. Despite these shifts, Intel continues to hold its dominant position in client computing, with 76.1% of the overall PC market, buoyed by its strong corporate relationships and extensive manufacturing infrastructure. OEM partners like Dell, HP, and Lenovo rely heavily on Intel for their CPU choices, equipping institutions like schools, universities, and government agencies.

Anthropic and Palantir Partner to Bring Claude AI Models to AWS for U.S. Government Intelligence and Defense Operations

Anthropic and Palantir Technologies Inc. (NYSE: PLTR) today announced a partnership with Amazon Web Services (AWS) to provide U.S. intelligence and defense agencies access to the Claude 3 and 3.5 family of models on AWS. This partnership allows for an integrated suite of technology to operationalize the use of Claude within Palantir's AI Platform (AIP) while leveraging the security, agility, flexibility, and sustainability benefits provided by AWS.

The partnership facilitates the responsible application of AI, enabling the use of Claude within Palantir's products to support government operations such as processing vast amounts of complex data rapidly, elevating data-driven insights, identifying patterns and trends more effectively, streamlining document review and preparation, and helping U.S. officials to make more informed decisions in time-sensitive situations while preserving their decision-making authorities. Claude became accessible within Palantir AIP on AWS earlier this month.

Axiomtek Announces MMB541 Micro ATX Motherboard for AI and Industrial Applications

Axiomtek - a world-renowned leader relentlessly devoted to the research, development, and manufacture of innovative, highly efficient, and reliable industrial computer products - is pleased to introduce the MMB541, a Micro ATX motherboard specifically optimized for high-performance AI, automation, and industrial applications. The MMB541 supports LGA1700-socket 14th, 13th, and 12th Gen Intel Core i9/i7/i5/i3, Pentium, and Celeron processors paired with the Intel H610 chipset. With support for high-performance CPUs of up to 125 W, it delivers exceptional performance to handle demanding tasks with ease. The powerful industrial motherboard delivers advanced connectivity options with four USB 3.2 Gen 1 ports, four USB 2.0 ports, and three LAN ports. With support for PCIe x16 for GPU or accelerator cards and PCIe x4 for additional add-on cards, the MMB541 is ideal for AI workstations and factory automation setups, providing robust computational power and high-speed data processing.

"The MMB541 represents our commitment to innovation and high-quality industrial-grade solutions," said Wayne Chung, the product manager of the AIoT Team at Axiomtek. "With its cost-effective design, the MMB541 simplifies the development of custom AI and automation systems by providing ample I/O interfaces and expansion capabilities, making it easy to integrate GPU cards and other accelerators for enhanced performance. Engineered for versatility, this motherboard can handle applications ranging from AI workstations to complex automation tasks while ensuring the durability and stability needed for long-term industrial use. The MMB541 empowers our customers to build efficient, scalable solutions tailored to today's data-intensive environments."

Sony Interactive Entertainment Launches the PlayStation 5 Pro

Today, Sony Interactive Entertainment expands the PlayStation 5 (PS5) family of products with the release of the new PlayStation 5 Pro (PS5 Pro) console - the company's most advanced and innovative gaming console to date. PlayStation 5 Pro was designed with deeply engaged players and game creators in mind and includes key performance features that allow games to run with higher fidelity graphics at smoother frame rates.

"With PlayStation 5 Pro, we wanted to make sure that the most dedicated gamers, as well as game creators, could utilize the most advanced console technology, taking the PlayStation 5 experience even farther," said Hideaki Nishino, CEO Platform Business Group, Sony Interactive Entertainment. "This is our most advanced PlayStation to date, and it gives our community of players the opportunity to experience games the way that developers intended for them to be. Players will be thrilled with how this console enhances some of their favorite titles, while opening avenues to discover new ones."

Microsoft Brings Copilot AI Assistant to Windows Terminal

Microsoft has taken another significant step in its AI integration strategy by introducing "Terminal Chat," an AI assistant now available in Windows Terminal. This latest feature brings conversational AI capabilities directly to the command-line interface, marking a notable advancement in making terminal operations more accessible to users of all skill levels. The new feature, currently available in Windows Terminal (Canary), leverages various AI services, including ChatGPT, GitHub Copilot, and Azure OpenAI, to provide interactive assistance for command-line operations. What sets Terminal Chat apart is its context-aware functionality, which automatically recognizes the specific shell environment being used—whether it's PowerShell, Command Prompt, WSL Ubuntu, or Azure Cloud Shell—and tailors its responses accordingly.

Users can interact with Terminal Chat through a dedicated interface within Windows Terminal, where they can ask questions, troubleshoot errors, and request guidance on specific commands. The system provides shell-specific suggestions, automatically adjusting its recommendations based on whether a user is working in Windows PowerShell, Linux, or other environments. For example, when asked about creating a directory, Terminal Chat will suggest "New-Item -ItemType Directory" for PowerShell users while providing "mkdir" as the appropriate command for Linux environments. This intelligent adaptation helps bridge the knowledge gap between different command-line interfaces. Below are some examples courtesy of Windows Latest and their testing:
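The shell-aware behavior described above amounts to keying suggestions off the detected shell. A minimal hypothetical sketch of that idea, not Microsoft's implementation, might look like:

```python
# Hypothetical shell-aware suggestion table; the New-Item and mkdir commands
# come from the article's example, the structure is illustrative only.
SUGGESTIONS = {
    "powershell": {"make directory": "New-Item -ItemType Directory -Name <name>"},
    "bash":       {"make directory": "mkdir <name>"},
    "cmd":        {"make directory": "mkdir <name>"},
}

def suggest(shell: str, task: str) -> str:
    """Return a command suggestion tailored to the detected shell.

    Unknown shells fall back to POSIX-style suggestions.
    """
    table = SUGGESTIONS.get(shell.lower(), SUGGESTIONS["bash"])
    return table.get(task, "")
```

The real feature additionally routes the question through an AI backend; the table only illustrates why the same task yields different answers per shell.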

Samsung Hopes PIM Memory Technology Can Replace HBM in Next-Gen AI Applications

The 8th edition of the Samsung AI Forum was held on November 4th and 5th in Seoul, and among all the presentations and keynote speeches, one piece of information caught our attention. As reported by The Chosun Daily, Samsung is (again) turning its attention to Processing-in-Memory (PIM) technology, in what appears to be the company's latest attempt to keep up with its rival SK Hynix in this area. In 2021, Samsung introduced the world's first HBM-PIM, with the chips showing impressive performance gains (nearly double) while reducing energy consumption by almost 50% on average. PIM technology essentially adds the processor functions necessary for computational tasks inside the memory, reducing data transfer between the CPU and memory.

Now, the company hopes that PIM memory chips could replace HBM in the future, based on the advantages this next-generation memory technology possesses, mainly for artificial intelligence (AI) applications. "AI is transforming our lives at an unprecedented rate, and the question of how to use AI more responsibly is becoming increasingly important," said Samsung Electronics CEO Han Jong-hee in his opening remarks. "Samsung Electronics is committed to fostering a more efficient and sustainable AI ecosystem." During the event, Samsung also highlighted its partnership with AMD, which Samsung reportedly supplies with its fifth-generation HBM, the HBM3E.

Sony's PS5 Pro To Launch on November 7 With Over 50 Enhanced Games

Many gamers have been skeptical of Sony's PS5 Pro since the day it was announced, largely due to the high price and the perceived lack of meaningful improvements. To many, the PS5 Pro seemed like a meaningless mid-cycle cash grab with a few extra features tacked on top. However, it looks like Sony and its development partners have put in the work to make the PS5 Pro experience fresh and worthwhile. According to a new post on the official PlayStation Blog, the new console will launch with at least 50 confirmed "Enhanced" games.

What exactly Sony means by Enhanced is rather nebulous, since many of the Enhanced games for the PS5 Pro have a mishmash of different Pro features. For example, Resident Evil Village gets the full 120 FPS treatment, while Horizon Forbidden West only gets a bump up to 4K at 60 FPS. Stellar Blade, on the other hand, only gets an FPS boost to 80 FPS, or 50 FPS at 4K. It's likely that, like Stellar Blade, all the titles aiming for higher refresh rates on the PS5 Pro are using some mix of PSSR, dedicated AI acceleration, and traditional rasterization rendering techniques to achieve the increased frame rates. Both The Last of Us Part I and The Last of Us Part II Remastered will run at 60 FPS on the PS5 Pro, but they will render at 1440p and use PSSR to upscale to 4K output.
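The pixel arithmetic behind the render-then-upscale approach is straightforward: rendering at 1440p and letting PSSR fill in a 4K output means shading 2.25 times fewer pixels per frame, which is where the frame-rate headroom comes from.

```python
# Pixel counts for internal render vs. upscaled output resolution.
render_pixels = 2560 * 1440   # 1440p internal render: 3,686,400 pixels
output_pixels = 3840 * 2160   # 4K output: 8,294,400 pixels
upscale_factor = output_pixels / render_pixels  # 2.25x fewer pixels shaded
```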

AAEON Unveils BOXER-8642AI Edge AI Box Featuring 8 Independent 10Gbps USB and up to 275 TOPS

AAEON Technology Inc. (stock code: 6579), a leading designer and manufacturer of industrial and embedded computers, today announced the release of the BOXER-8642AI, a fanless embedded AI computer featuring eight independent 10 Gbps USB ports, powered by either a 32 GB or 64 GB NVIDIA Jetson AGX Orin module.
Built for the smart retail market, the BOXER-8642AI supports multiple Intel RealSense D405 3D cameras for high-bandwidth image processing, facilitating inferencing tasks for applications such as AI-assisted self-checkout kiosks.

The BOXER-8642AI's design is suitable for reliable, around-the-clock operation in various retail environments, as indicated by its 12 V to 24 V power input range and -25°C to 55°C temperature tolerance. Moreover, AAEON claims the device's standout feature, its selection of 10 Gbps USB ports, has been tested to ensure it can withstand continuous on/off cycling while maintaining a drop rate of less than 0.1%.

AMD and Fujitsu to Begin Strategic Partnership to Create Computing Platforms for AI and High-Performance Computing (HPC)

AMD and Fujitsu Limited today announced that they have signed a memorandum of understanding (MOU) to form a strategic partnership to create computing platforms for AI and high-performance computing (HPC). The partnership, encompassing aspects from technology development to commercialization, will seek to facilitate the creation of open source and energy efficient platforms comprised of advanced processors with superior power performance and highly flexible AI/HPC software and aims to accelerate open-source AI and/or HPC initiatives.

Due to the rapid spread of AI, including generative AI, cloud service providers and end-users are seeking optimized architectures at various price and power-per-performance configurations. From end to end, AMD supports an open ecosystem and strongly believes in giving customers choice. Fujitsu has worked to develop FUJITSU-MONAKA, a next-generation Arm-based processor that aims to achieve both high performance and low power consumption. With FUJITSU-MONAKA together with AMD Instinct accelerators, customers gain an additional choice for large-scale AI workload processing while attempting to reduce data center total cost of ownership.

New Arm CPUs from NVIDIA Coming in 2025

According to DigiTimes, NVIDIA is reportedly targeting the high-end segment for its first consumer CPU attempt. Slated to arrive in 2025, NVIDIA is partnering with MediaTek to break into the AI PC market, currently being popularized by Qualcomm, Intel, and AMD. With Microsoft and Qualcomm laying the foundation for Windows-on-Arm (WoA) development, NVIDIA plans to join and leverage its massive ecosystem of partners to design and deliver regular applications and games for its Arm-based processors. At the same time, NVIDIA is also scheduled to launch "Blackwell" GPUs for consumers, which could end up in these AI PCs with an Arm CPU at their core.

NVIDIA's partner, MediaTek, has recently launched a big-core SoC for mobile called Dimensity 9400. NVIDIA could use something like that as a base for its SoC and add its Blackwell IP to the mix. This would be similar to what Apple is doing with Apple Silicon and the recent M4 Max chip, which is apparently the fastest CPU in single-threaded and multithreaded workloads, as per recent Geekbench results. NVIDIA already has a team of CPU designers that delivered its Grace CPU to enterprise and server customers. Built on off-the-shelf Arm Neoverse IP, systems with Grace CPUs are being acquired as fast as they are produced. This puts a lot of hope in NVIDIA's upcoming AI PC chip, which could offer a selling point no other WoA device currently provides: a tried-and-tested gaming-grade GPU with AI accelerators.

Etched Introduces AI-Powered Games Without GPUs, Displays Minecraft Replica

The gaming industry is about to get massively disrupted. Instead of using game engines to power games, we are now witnessing an entirely new and crazy concept. A startup specializing in designing ASICs specifically for the Transformer architecture, the foundation behind generative AI models like GPT/Claude/Stable Diffusion, has showcased a demo in partnership with Decart of a Minecraft clone being entirely generated and operated by AI instead of a traditional game engine. While we use AI to create images and videos based on specific descriptions and output pretty realistic content, having an AI model spit out an entire playable game is something different. Oasis is the first playable, real-time, open-world AI model that takes user input and generates live gameplay, including physics, game rules, and graphics.

An interesting thing to point out is the hardware that powers this setup. Using a single NVIDIA H100 GPU, this 500-million-parameter Oasis model can run at 720p resolution at 20 generated frames per second. Due to limitations of accelerators like NVIDIA's H100/B200, gameplay at 4K is almost impossible. However, Etched has its own accelerator called Sohu, which specializes in accelerating transformer architectures. Eight NVIDIA H100 GPUs can serve five Oasis models to five users, while eight Sohu cards are capable of serving 65 Oasis instances to 65 users. This is more than a 10x increase in inference capability compared to NVIDIA's hardware on a single use case alone. The accelerator is designed to run much larger models, like future 100-billion-parameter generative AI video game models that can output 4K at 30 FPS, thanks to 144 GB of HBM3E memory, yielding 1,152 GB in an eight-accelerator server configuration.
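The capacity comparison works out as follows, using the per-server figures stated above:

```python
# Throughput per eight-accelerator server, as claimed by Etched.
h100_users_per_8 = 5     # eight H100 GPUs serve five Oasis users
sohu_users_per_8 = 65    # eight Sohu cards serve 65 Oasis users
speedup = sohu_users_per_8 / h100_users_per_8  # 13x, i.e. "more than 10x"

# Memory pool per server from the stated per-card HBM3E capacity.
hbm3e_per_sohu_gb = 144
total_hbm_gb = hbm3e_per_sohu_gb * 8           # 1,152 GB per server
```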

Apple Reports Q4 2024 Financial Results

Apple today announced financial results for its fiscal 2024 fourth quarter ended September 28, 2024. The Company posted quarterly revenue of $94.9 billion, up 6 percent year over year, and quarterly diluted earnings per share of $0.97. Diluted earnings per share was $1.64, up 12 percent year over year, when excluding the one-time charge recognized during the fourth quarter of 2024 related to the impact of the reversal of the European General Court's State Aid decision.
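The two EPS figures can be related with simple arithmetic; the derived values below follow directly from the reported numbers:

```python
# Relating Apple's reported GAAP and charge-adjusted EPS figures.
gaap_eps = 0.97        # diluted EPS including the one-time charge
adjusted_eps = 1.64    # diluted EPS excluding the charge
charge_per_share = adjusted_eps - gaap_eps      # ~$0.67 per-share impact

yoy_growth = 0.12
prior_year_eps = adjusted_eps / (1 + yoy_growth)  # ~$1.46 a year earlier
```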

"Today Apple is reporting a new September quarter revenue record of $94.9 billion, up 6 percent from a year ago," said Tim Cook, Apple's CEO. "During the quarter, we were excited to announce our best products yet, with the all-new iPhone 16 lineup, Apple Watch Series 10, AirPods 4, and remarkable features for hearing health and sleep apnea detection. And this week, we released our first set of features for Apple Intelligence, which sets a new standard for privacy in AI and supercharges our lineup heading into the holiday season."

AI Contributes to 25% of Google's New Code, CEO Sundar Pichai Confirms

During Alphabet's Q3 earnings call, CEO Sundar Pichai announced that AI now generates more than a quarter of the company's new code, marking a significant milestone for AI advancement and for the tech giant. This development comes alongside impressive financial results, with the company reporting $88.2 billion in revenue, representing a 15% year-over-year increase. Implementing AI in code generation has raised concerns, though Google maintains rigorous safety protocols. Every AI-generated code segment undergoes thorough review by human (natural intelligence) engineers before deployment, ensuring quality and security standards are met. This hybrid approach helps Google balance productivity with reliability. The tech giant's commitment to AI development extends beyond code generation.

Recent achievements include the revolutionary AI Overviews feature, which has undergone significant optimization. Through optimizing hardware solutions and technical improvements, Google has managed to reduce query costs by over 90% while simultaneously doubling the capacity of their custom Gemini models. Google's AI push has also garnered prestigious recognition, with DeepMind researchers Demis Hassabis and John Jumper receiving the Nobel Prize in Chemistry for their groundbreaking AlphaFold project. Former Google researcher Geoff Hinton also achieved Nobel recognition in Physics. The impact of Google's AI integration is evident across its product ecosystem, with Gemini models now powering seven platforms that each serve over two billion monthly users. Google Maps recently joined this elite group, while the company has expanded its AI capabilities to external developers through partnerships with platforms like GitHub Copilot to help developers write code with AI assistance.
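Assuming the two stated gains compound, the combination of a more-than-90% query cost reduction with doubled model capacity implies roughly a 95% drop in cost per unit of serving capacity:

```python
# Back-of-envelope on Google's stated AI Overviews efficiency gains.
baseline_cost = 1.0
cost_after = baseline_cost * (1 - 0.90)  # at most 0.10 of the original per query
capacity_multiplier = 2.0                # capacity of custom Gemini models doubled
cost_per_unit_capacity = cost_after / capacity_multiplier  # at most 0.05
```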

OpenAI Designs its First AI Chip in Collaboration with Broadcom and TSMC

According to a recent Reuters report, OpenAI is continuing with its moves in the custom silicon space, expanding beyond its reported talks with Broadcom to include a broader strategy involving multiple industry leaders. Broadcom is a fabless chip designer known for a wide range of silicon solutions, spanning networking, PCIe, SSD controllers, and PHYs, all the way up to custom ASICs. The company behind ChatGPT is actively working with both Broadcom and TSMC to develop its first proprietary AI chip, specifically focused on inference operations. Getting a custom chip to handle training runs is a more complex task, and OpenAI is leaving that to its current partners until the company works out all the details. Even with an inference-only chip, the scale at which OpenAI serves its models makes it financially sensible for the company to develop custom solutions tailored to its infrastructure needs.

This time, the initiative represents a more concrete and nuanced approach than previously understood. Rather than just exploratory discussions, OpenAI has assembled a dedicated chip team of approximately 20 people, led by former Google TPU engineers Thomas Norrie and Richard Ho. The company has secured manufacturing capacity with TSMC, targeting a 2026 timeline for its first custom-designed chip. While Broadcom's involvement leverages its expertise in helping companies optimize chip designs for manufacturing and manage data movement between chips, crucial for AI systems running thousands of processors in parallel, OpenAI is simultaneously diversifying its compute strategy. This includes adding AMD's Instinct MI300X chips to its infrastructure alongside its existing NVIDIA deployments. Meta takes a similar approach: it trains its models on NVIDIA GPUs and serves them to the public (inference) using AMD Instinct MI300X.

MSI Introduces Modern AM273QP AI and AM273Q AI Series All-in-One PCs

MSI launches its latest All-in-One PCs: the Modern AM273QP AI and Modern AM273Q AI Series. These devices integrate the latest Intel Core Ultra processor, featuring Intel AI Boost NPU, along with DDR5 memory, Microsoft Copilot, and the MSI AI Engine.

Enhance Your Productivity
The Modern AM273Q/QP AI Series features a 27-inch WQHD (2560 x 1440) IPS panel that delivers crystal-clear visuals and an immersive viewing experience. Powered by the latest Intel Core Ultra processor with Intel AI Boost NPU, these PCs offer smooth multitasking and advanced AI capabilities. With Microsoft Copilot integration, users can simplify their daily tasks and enhance productivity using intuitive AI-driven features.

AMD Reports Third Quarter 2024 Financial Results, Revenue Up 18 Percent YoY

AMD today announced revenue for the third quarter of 2024 of $6.8 billion, gross margin of 50%, operating income of $724 million, net income of $771 million and diluted earnings per share of $0.47. On a non-GAAP basis, gross margin was 54%, operating income was $1.7 billion, net income was $1.5 billion and diluted earnings per share was $0.92.

"We delivered strong third quarter financial results with record revenue led by higher sales of EPYC and Instinct data center products and robust demand for our Ryzen PC processors," said AMD Chair and CEO Dr. Lisa Su. "Looking forward, we see significant growth opportunities across our data center, client and embedded businesses driven by the insatiable demand for more compute."

Ultra Accelerator Link Consortium Plans Year-End Launch of UALink v1.0

The Ultra Accelerator Link (UALink) Consortium, led by board members from AMD, Amazon Web Services (AWS), Astera Labs, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta, and Microsoft, has announced the incorporation of the Consortium and is extending an invitation for membership to the community. The UALink Promoter Group was founded in May 2024 to define a high-speed, low-latency interconnect for scale-up communications between accelerators and switches in AI pods and clusters. "The UALink standard defines high-speed and low-latency communication for scale-up AI systems in data centers."

Cisco Unveils Plug-and-Play AI Solutions Powered by NVIDIA H100 and H200 Tensor Core GPUs

Today, Cisco announced new additions to its data center infrastructure portfolio: an AI server family purpose-built for GPU-intensive AI workloads with NVIDIA accelerated computing, and AI PODs to simplify and de-risk AI infrastructure investment. They give organizations an adaptable and scalable path to AI, supported by Cisco's industry-leading networking capabilities.

"Enterprise customers are under pressure to deploy AI workloads, especially as we move toward agentic workflows and AI begins solving problems on its own," said Jeetu Patel, Chief Product Officer, Cisco. "Cisco innovations like AI PODs and the GPU server strengthen the security, compliance, and processing power of those workloads as customers navigate their AI journeys from inferencing to training."

BenQ Introduces the Revolutionary W2720i 4K Home Cinema Projector with AI-Powered Technology

BenQ, the global leader in DLP 4K projectors, proudly announces the launch of the W2720i, a cutting-edge AI-powered home cinema projector. This innovative projector is designed to transform modern living spaces into theatre-like settings with unparalleled ease and sophistication, offering first-time home theatre projector buyers an unmatched viewing experience.

Effortless Setup and Automatic Optimisation with AI Cinema Mode
The W2720i's revolutionary AI Cinema Mode is engineered to provide a seamless setup and superior picture quality, making it ideal for newcomers to home theatre projectors. With features like real-time ambient brightness adjustment and content-specific enhancements, the W2720i adapts to varying light conditions, providing a tailored experience for every user. This smart technology ensures that users can enjoy director-level movies without extensive colour calibration knowledge.
Nov 21st, 2024 06:39 EST