News Posts matching #HPC


Synopsys and Samsung Collaborate to Deliver Broad IP Portfolio Across All Advanced Samsung Foundry Processes

Synopsys, Inc. today announced an expanded agreement with Samsung Foundry to develop a broad portfolio of IP to reduce design risk and accelerate silicon success for automotive, mobile, high-performance computing (HPC) and multi-die designs. This agreement expands Synopsys' collaboration with Samsung to enhance the Synopsys IP offering for Samsung's advanced 8LPU, SF5, SF4 and SF3 processes and includes Foundation IP, USB, PCI Express, 112G Ethernet, UCIe, LPDDR, DDR, MIPI and more. In addition, Synopsys will optimize IP for Samsung's SF5A and SF4A automotive process nodes to meet stringent Grade 1 or Grade 2 temperature and AEC-Q100 reliability requirements, enabling automotive chip designers to reduce their design effort and accelerate AEC-Q100 qualification. The auto-grade IP for ADAS SoCs will include design failure mode and effect analysis (DFMEA) reports that can save months of development effort for automotive SoC applications.

"Our extensive co-optimization efforts with Samsung across both EDA and IP help automotive, mobile, HPC, and multi-die system architects cope with the inherent challenges of designing chips for advanced process technologies," said John Koeter, senior vice president of product management and strategy for IP at Synopsys. "This extension of our decades-long collaboration provides designers with a low-risk path to achieving their design requirements and quickly launching differentiated products to the market."

AMD Details New EPYC CPUs, Next-Generation AMD Instinct Accelerator, and Networking Portfolio for Cloud and Enterprise

Today, at the "Data Center and AI Technology Premiere," AMD announced the products, strategy and ecosystem partners that will shape the future of computing, highlighting the next phase of data center innovation. AMD was joined on stage by executives from Amazon Web Services (AWS), Citadel, Hugging Face, Meta, Microsoft Azure and PyTorch to showcase the technology partnerships bringing the next generation of high-performance CPU and AI accelerator solutions to market.

"Today, we took another significant step forward in our data center strategy as we expanded our 4th Gen EPYC processor family with new leadership solutions for cloud and technical computing workloads and announced new public instances and internal deployments with the largest cloud providers," said AMD Chair and CEO Dr. Lisa Su. "AI is the defining technology shaping the next generation of computing and the largest strategic growth opportunity for AMD. We are laser focused on accelerating the deployment of AMD AI platforms at scale in the data center, led by the launch of our Instinct MI300 accelerators planned for later this year and the growing ecosystem of enterprise-ready AI software optimized for our hardware."

4th Gen Intel Xeon Outperforms Competition on Real-World Workloads

With the launch of 4th Gen Intel Xeon Scalable processors in January 2023, Intel delivered significant advancements in performance with industry-leading Intel accelerator engines and improved performance per watt across key workloads like AI, data analytics, high performance computing (HPC) and others. The industry has taken notice: 4th Gen Xeon has seen a rapid ramp, global customer adoption and leadership performance on a myriad of critical workloads for a broad range of business use cases.

Today, after weeks of rigorous and comprehensive head-to-head testing against the most comparable competitive processors, Intel is sharing compelling results that go far beyond simple industry benchmarks.

ASUS Unveils ESC N8-E11, an HGX H100 Eight-GPU Server

ASUS today announced ESC N8-E11, its most advanced HGX H100 eight-GPU AI server, along with a comprehensive PCI Express (PCIe) GPU server portfolio—the ESC8000 and ESC4000 series empowered by Intel and AMD platforms to support higher CPU and GPU TDPs to accelerate the development of AI and data science.

ASUS is one of the few HPC solution providers with comprehensive in-house resources, consisting of the ASUS server business unit, Taiwan Web Service (TWS) and ASUS Cloud—all part of the ASUS group. This uniquely positions ASUS to deliver in-house AI server design, data-center infrastructure and AI software-development capabilities, plus a diverse ecosystem of industrial hardware and software partners.

Gigabyte Shows AI/HPC and Data Center Servers at Computex

GIGABYTE is exhibiting cutting-edge technologies and solutions at COMPUTEX 2023, presenting the theme "Future of COMPUTING". From May 30th to June 2nd, GIGABYTE is showcasing over 110 products that are driving future industry transformation, demonstrating the emerging trends of AI technology and sustainability, on the 1st floor, Taipei Nangang Exhibition Center, Hall 1.

GIGABYTE and its subsidiary, Giga Computing, are introducing unparalleled AI/HPC server lineups, leading the era of exascale supercomputing. One of the stars is the industry's first NVIDIA-certified HGX H100 8-GPU SXM5 server, the G593-SD0. Equipped with 4th Gen Intel Xeon Scalable processors and GIGABYTE's industry-leading thermal design, the G593-SD0 can handle the extremely intensive workloads of generative AI and deep-learning model training within a density-optimized 5U server chassis, making it a top choice for data centers aiming for AI breakthroughs. In addition, GIGABYTE is debuting AI computing servers supporting the NVIDIA Grace CPU and Grace Hopper Superchips. The high-density servers are accelerated with NVLink-C2C technology on the Arm Neoverse V2 platform, setting a new standard for AI/HPC computing efficiency and bandwidth.

TYAN Server Platforms to Boost Data Center Computing Performance with 4th Gen AMD EPYC Processors at Computex 2023

TYAN, an industry-leading server platform design manufacturer and a subsidiary of MiTAC Computing Technology Corporation, will be showcasing its latest HPC, cloud and storage platforms at Computex 2023, Booth #M0701a in Taipei, Taiwan from May 30 to June 2. These platforms are powered by AMD EPYC 9004 Series processors, which offer superior energy efficiency and are designed to enhance data center computing performance.

"As businesses increasingly prioritize sustainability in their operations, data centers - which serve as the computational core of an organization - offer a significant opportunity to improve efficiency and support ambitious sustainability targets," said Eric Kuo, Vice President of the Server Infrastructure Business Unit at MiTAC Computing Technology Corporation. "TYAN's server platforms powered by 4th Gen AMD EPYC processor enable IT organizations to achieve high performance while remaining cost-effective and contributing to environmental sustainability."

Giga Computing Goes Big with Green Computing and HPC and AI at Computex

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced a major presence at Computex 2023, held May 30 to June 2. The GIGABYTE booth showcases more than fifty servers spanning GIGABYTE's comprehensive enterprise portfolio, including green computing solutions that feature liquid-cooled servers and immersion cooling technology. The international computer expo attracts over 100,000 visitors annually, and GIGABYTE will be ready with a spacious, attractive booth to draw in curious minds, staffed by knowledgeable personnel ready to answer questions about how its products are being used today.

The slogan for Computex 2023 is "Together we create." And just like parts that make a whole, GIGABYTE's slogan of "Future of COMPUTING" embodies all of its distinct computing products, from consumer to enterprise applications. For the enterprise business unit, there will be themed sections: "Win Big with AI HPC," "Advance Data Centers," and "Embrace Sustainability." Each theme will show off cutting-edge technologies spanning x86 and Arm platforms, with great attention placed on solutions that address the challenges that come with more powerful computing.

Molex Unveils 224 Gbps PAM4 Chip-to-Chip Connectors

Molex, a company known for its electronics and connectors, today announced that it has developed a first-of-its-kind chip-to-chip connector. Designed mainly for the data center, the Molex 224G product portfolio includes next-generation cables, backplanes, board-to-board connectors, and near-ASIC connector-to-cable solutions. Running at 224 Gbps speeds, these products use PAM4 signaling and boast "the highest levels of electrical, mechanical, physical and signal integrity." As the company states, future high-performance computing (HPC) data centers require a lot of board-to-board, chip-to-chip, and other types of communication to improve overall efficiency and remove bottlenecks in data transfer. To tackle this problem, Molex has a range of products, including Mirror Mezz Enhanced, Inception, and CX2 Dual Speed products.

Future generative AI, 1.6T (1.6 Tb/s) Ethernet, and other data center challenges need a dedicated communication standard, which Molex is aiming to provide. Working with various data center and enterprise customers, the company claims to have set the pace for products based on this 224G PAM4 chip-to-chip technology. We suspect that the Open Compute Project (OCP) will be first in line to adopt it, as Molex has historically worked with the organization as it adopted Mirror Mezz and Mirror Mezz Pro board-to-board connectors. The new products can be seen below, and we expect to hear more announcements from Molex's partners. Solutions like OSFP 1600, QSFP 800, and QSFP-DD 1600 already use 224G products.
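The appeal of PAM4 is easy to illustrate: instead of the two voltage levels of NRZ, each symbol carries one of four amplitude levels, encoding two bits per symbol, so a 224 Gbps lane only needs to signal at 112 GBd. The sketch below shows the idea in miniature; the Gray-coded level map is a common convention, not Molex's actual implementation.

```python
# PAM4 maps each 2-bit pair to one of four amplitude levels,
# halving the symbol rate relative to the bit rate.
# Gray coding means adjacent levels differ by one bit,
# limiting errors from level misdetection to a single bit.
GRAY_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def pam4_encode(bits):
    """Encode an even-length bit sequence into PAM4 symbol levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits in pairs"
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bit_rate = 224e9             # 224 Gbps per lane
symbol_rate = bit_rate / 2   # 2 bits/symbol -> 112 GBd

print(pam4_encode([0, 0, 0, 1, 1, 1, 1, 0]))  # [-3, -1, 1, 3]
print(symbol_rate)                             # 1.12e+11
```

The halved symbol rate is what makes 224 Gbps feasible over copper at all, though the tighter level spacing is exactly why the connector's signal integrity matters so much.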

Intel Falcon Shores is Initially a GPU, Gaudi Accelerators to Disappear

During the ISC High Performance 2023 international conference, Intel announced interesting updates to its high-performance computing (HPC) and artificial intelligence (AI) roadmap. With the scrapping of Rialto Bridge and Lancaster Sound, Intel merged these accelerator lines into the Falcon Shores processor for HPC and AI, initially billed as a CPU+GPU solution on a single package. However, during its ISC 2023 talk, the company announced a change of plans: Falcon Shores is now a GPU-only solution destined for a 2025 launch. Originally, Intel wanted to combine x86-64 cores with Xe GPU cores to form an "XPU" module powering HPC and AI workloads. However, Intel did not see a point in forcing customers to choose between the specific CPU-to-GPU core ratios that an XPU accelerator would fix in silicon. Instead, a regular GPU paired with a separate CPU is Intel's choice for now. In the future, as workloads become more defined, XPU solutions are still a possibility, just delayed from the original plan.

Regarding Intel's Gaudi accelerators, the story is about to end. The company originally paid two billion US dollars for Habana Labs and its Gaudi hardware. However, Intel now plans to stop developing Gaudi as a standalone accelerator and instead integrate its IP into the Falcon Shores GPU. Using a modular, tile-based architecture, the Falcon Shores GPU features standard Ethernet switching, up to 288 GB of HBM3 with 9.8 TB/s of throughput, I/O optimized for scaling, and support for the FP8 and FP16 floating-point precisions needed for AI and other workloads. As noted, the creation of an XPU was premature, and now the initial Falcon Shores GPU will serve as an accelerator for HPC, AI, or a mix of both, depending on the application. You can see the roadmap below for more information.

Intel Delivers AI-Accelerated HPC Performance

At the ISC High Performance Conference, Intel showcased leadership performance for high performance computing (HPC) and artificial intelligence (AI) workloads; shared its portfolio of future HPC and AI products, unified by the oneAPI open programming model; and announced an ambitious international effort to use the Aurora supercomputer to develop generative AI models for science and society.

"Intel is committed to serving the HPC and AI community with products that help customers and end-users make breakthrough discoveries faster," said Jeff McVeigh, Intel corporate vice president and general manager of the Super Compute Group. "Our product portfolio spanning Intel Xeon CPU Max Series, Intel Data Center GPU Max Series, 4th Generation Intel Xeon Scalable Processors and Habana Gaudi 2 are outperforming the competition on a variety of workloads, offering energy and total cost of ownership advantages, democratizing AI and providing choice, openness and flexibility."

Intel Launches Agilex 7 FPGAs with R-Tile, First FPGA with PCIe 5.0 and CXL Capabilities

Intel's Programmable Solutions Group today announced that the Intel Agilex 7 with the R-Tile chiplet is shipping production-qualified devices in volume - bringing customers the first FPGA with PCIe 5.0 and CXL capabilities and the only FPGA with hard intellectual property (IP) supporting these interfaces. "Customers are demanding cutting-edge technology that offers the scalability and customization needed to not only efficiently manage current workloads, but also pivot capabilities and functions as their needs evolve. Our Agilex products offer the programmable innovation with the speed, power and capabilities our customers need while providing flexibility and resilience for the future. For example, customers are leveraging R-Tile, with PCIe Gen 5 and CXL, to accelerate software and data analytics, cutting the processing time from hours to minutes," said Shannon Poulin, Intel corporate vice president and general manager of the Programmable Solutions Group.

Faced with time, budget and power constraints, organizations across industries including data centers, telecommunications and financial services turn to FPGAs as flexible, programmable and efficient solutions. Using Agilex 7 with R-Tile, customers can seamlessly connect their FPGAs to processors, such as 4th Gen Intel Xeon Scalable processors, over the highest-bandwidth processor interfaces to accelerate targeted data center and high performance computing (HPC) workloads. Agilex 7's configurable and scalable architecture enables customers to quickly deploy customized technology at scale, with hardware speeds matched to their specific needs, reducing overall design costs, streamlining development and expediting execution for optimal data center performance.

Frontier Remains As Sole Exaflop Machine on TOP500 List

Increasing its HPL score from 1.02 Eflop/s in November 2022 to an impressive 1.194 Eflop/s on the latest list, Frontier improved upon its score after a stagnation between June 2022 and November 2022. Considering that exascale was only a goal to aspire to just a few years ago, a roughly 17% increase here is an enormous success. Additionally, Frontier earned a score of 9.95 Eflop/s on the HPL-MxP benchmark, which measures performance for mixed-precision calculation. This is also an increase over the 7.94 Eflop/s that the system achieved on the previous list, and roughly eight times the machine's HPL score. Frontier is based on the HPE Cray EX235a architecture and utilizes AMD EPYC 64C 2 GHz processors. It has 8,699,904 cores and an incredible energy-efficiency rating of 52.59 Gflops/watt, and it relies on HPE's Slingshot-11 interconnect for data transfer.
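The published figures can be cross-checked with a few lines of arithmetic: dividing the HPL score by the efficiency rating recovers the system's approximate power draw during the run, and dividing the HPL-MxP score by the HPL score gives the mixed-precision speedup.

```python
hpl = 1.194e18          # HPL score, flop/s (FP64)
mxp = 9.95e18           # HPL-MxP score, flop/s (mixed precision)
efficiency = 52.59e9    # flop/s per watt

power_mw = hpl / efficiency / 1e6   # implied power draw in megawatts
ratio = mxp / hpl                   # mixed-precision speedup over HPL

print(f"{power_mw:.1f} MW")   # ~22.7 MW
print(f"{ratio:.1f}x")        # ~8.3x
```

Note that the implied ~22.7 MW is the power measured during the benchmark run, not the facility's total capacity.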

NVIDIA Grace Drives Wave of New Energy-Efficient Arm Supercomputers

NVIDIA today announced a supercomputer built on the NVIDIA Grace CPU Superchip, adding to a wave of new energy-efficient supercomputers based on the Arm Neoverse platform. The Isambard 3 supercomputer to be based at the Bristol & Bath Science Park, in the U.K., will feature 384 Arm-based NVIDIA Grace CPU Superchips to power medical and scientific research, and is expected to deliver 6x the performance and energy efficiency of Isambard 2, placing it among Europe's most energy-efficient systems.

It will achieve about 2.7 petaflops of FP64 peak performance and consume less than 270 kilowatts of power, ranking it among the world's three greenest non-accelerated supercomputers. The project is being led by the University of Bristol, as part of the research consortium the GW4 Alliance, together with the universities of Bath, Cardiff and Exeter.

Samsung Trademark Applications Hint at Next Gen DRAM for HPC & AI Platforms

The Korea Intellectual Property Rights Information Service (KIPRIS) has been processing a number of trademark applications submitted by Samsung Electronics in recent weeks. News outlets pointed out earlier this month that the South Korean multinational manufacturing conglomerate was attempting to secure the term "Snowbolt" as a moniker for an unreleased HBM3P DRAM-based product. Industry insiders and Samsung representatives have indicated that the high-bandwidth memory (with 5 TB/s of bandwidth per stack) will be featured in upcoming cloud servers and high-performance and AI computing platforms slated for release later in 2023.

A Samsung-focused news outlet, SamMobile, reported on May 15 further trademark applications for next-generation DRAM (Dynamic Random Access Memory) products. Samsung has filed for two additional monikers, "Shinebolt" and "Flamebolt," and details published online show that these products share the same "designated goods" descriptors as the preceding "Snowbolt" registration: "DRAM modules with high bandwidth for use in high-performance computing equipment, artificial intelligence, and supercomputing equipment" and "DRAM with high bandwidth for use in graphic cards." Kye Hyun Kyung, CEO of Samsung Semiconductor, has been talking up his company's ambitions of competing with rival TSMC in providing cutting-edge component technology, especially in the field of AI computing. It is too early to determine whether these "-bolt" DRAM products will be part of that competitive move, but it is good to know that speedier memory is on the way - future generations of GPUs are set to benefit.

Samsung to Detail SF4X Process for High-Performance Chips

Samsung has invested heavily in semiconductor manufacturing technology to provide clients with a viable alternative to TSMC and its portfolio of nodes spanning everything from mobile to high-performance computing (HPC) applications. Today, we have information that Samsung will present its SF4X node to the public at this year's VLSI Symposium. Previously known as 4HPC, it is a 4 nm-class node specialized for HPC processors, in contrast to the standard SF4 (4LPP) node, whose 4 nm transistors are designed for the low-power standards applicable to the mobile/laptop space. According to the VLSI Symposium schedule, Samsung is set to present more info about the paper titled "Highly Reliable/Manufacturable 4nm FinFET Platform Technology (SF4X) for HPC Application with Dual-CPP/HP-HD Standard Cells."

As the brief introduction notes, "In this paper, the most upgraded 4nm (SF4X) ensuring HPC application was successfully demonstrated. Key features are (1) Significant performance +10% boosting with Power -23% reduction via advanced SD stress engineering, Transistor level DTCO (T-DTCO) and [middle-of-line] MOL scheme, (2) New HPC options: Ultra-Low-Vt device (ULVT), high speed SRAM and high Vdd operation guarantee with a newly developed MOL scheme. SF4X enhancement has been proved by a product to bring CPU Vmin reduction -60mV / IDDQ -10% variation reduction together with improved SRAM process margin. Moreover, to secure high Vdd operation, Contact-Gate breakdown voltage is improved by >1V without Performance degradation. This SF4X technology provides a tremendous performance benefits for various applications in a wide operation range." While we have no information on the reference for these claims, we suspect it is likely the regular SF4 node. More performance figures and an in-depth look will be available on Thursday, June 15, at Technology Session 16 at the symposium.

Nfina Technologies Releases Two New 3rd Gen Intel Xeon Scalable Processor-based Systems

Nfina announces the addition of two new server systems to its lineup, customized for small to medium businesses and virtualized environments. Featuring 3rd Gen Intel Xeon Scalable Processors, these scalable server systems fill a void in the marketplace, bringing exceptional multi-socket processing performance, easy setup, operability, and Nfina's five-year warranty.

"We are excited to add two new 3rd generation Intel systems to Nfina's lineup. Performance, scalability, and flexibility are key deciding factors when expanding our offerings," says Warren Nicholson, President and CEO of Nfina. "Both servers are optimized for high-performance computing, virtualized environments, and growing data needs." He continues by saying, "The two servers can also be leased through our managed services division. We provide customers with choices that fit the size of their application and budget - not a one size fits all approach."

Investment Firm KKR to Acquire CoolIT Systems for $270 Million

KKR, a leading global investment firm, and CoolIT Systems, a leading provider of scalable liquid cooling solutions for the world's most demanding computing environments, today announced the signing of a definitive agreement under which KKR will acquire CoolIT. The deal, valued at $270 million, will give CoolIT Systems added capital and other resources to scale up to meet growing demand for cooling systems from data-center operators, including giant cloud-computing providers such as Amazon.com's Amazon Web Services and Microsoft's Azure cloud unit. CoolIT also works with individual companies running AI applications and other business software in their own data centers.

Founded in 2001, CoolIT designs, engineers and manufactures advanced liquid cooling solutions for the data center and desktop markets. CoolIT's patented Split-Flow Direct Liquid Cooling technology is designed to improve equipment reliability and lifespan, decrease operating cost, lower energy demand and carbon emissions, reduce water consumption and allow for higher server density than legacy air-cooling methods.

"Our business has evolved tremendously over the past few years and today we are proud to be one of the most trusted providers of liquid cooling solutions to the global data center market," said Steve Walton, Chief Executive Officer of CoolIT. "KKR shares our perspective on the significant opportunity ahead for liquid cooling. Having access to KKR's expertise, capital and resources will put us in an even better position to keep scaling, innovating and delivering for our customers."

MIT Researchers Grow Transistors on Top of Silicon Wafers

MIT researchers have developed a groundbreaking technology that allows for the growth of 2D transition metal dichalcogenide (TMD) materials directly on fully fabricated silicon chips, enabling denser integrations. Conventional methods require temperatures of about 600°C, which can damage silicon transistors and circuits as they break down above 400°C. The MIT team overcame this challenge by creating a low-temperature growth process that preserves the chip's integrity, allowing 2D semiconductor transistors to be directly integrated on top of standard silicon circuits. The new approach grows a smooth, highly uniform layer across an entire 8-inch wafer, unlike previous methods that involved growing 2D materials elsewhere before transferring them to a chip or wafer. This process often led to imperfections that negatively impacted device and chip performance.

Additionally, the novel technology can grow a uniform layer of TMD material in less than an hour over 8-inch wafers, a significant improvement from previous methods that required over a day for a single layer. The enhanced speed and uniformity of this technology make it suitable for commercial applications, where 8-inch or larger wafers are essential. The researchers focused on molybdenum disulfide, a flexible, transparent 2D material with powerful electronic and photonic properties ideal for semiconductor transistors. They designed a new furnace for the metal-organic chemical vapor deposition process, which has separate low and high-temperature regions. The silicon wafer is placed in the low-temperature region while vaporized molybdenum and sulfur precursors flow into the furnace. Molybdenum remains in the low-temperature region, while the sulfur precursor decomposes in the high-temperature region before flowing back into the low-temperature region to grow molybdenum disulfide on the wafer surface.

Samsung Electronics Announces First Quarter 2023 Results, Profits Lowest in 14 Years

Samsung Electronics today reported financial results for the first quarter ended March 31, 2023. The Company posted KRW 63.75 trillion in consolidated revenue, a 10% decline from the previous quarter, as overall consumer spending slowed amid the uncertain global macroeconomic environment. Operating profit was KRW 0.64 trillion as the DS (Device Solutions) Division faced decreased demand, while profit in the DX (Device eXperience) Division increased.

The DS Division's profit declined from the previous quarter due to weak demand in the Memory Business, a decline in utilization rates in the Foundry Business and continued weak demand and inventory adjustments from customers. Samsung Display Corporation (SDC) saw earnings in the mobile panel business decline quarter-on-quarter amid a market contraction, while the large panel business slightly narrowed its losses. The DX Division's results improved on the back of strong sales of the premium Galaxy S23 series as well as an enhanced sales mix focusing on premium TVs.

TSMC Showcases New Technology Developments at 2023 Technology Symposium

TSMC today showcased its latest technology developments at its 2023 North America Technology Symposium, including progress in 2 nm technology and new members of its industry-leading 3 nm technology family, offering a range of processes tuned to meet diverse customer demands. These include N3P, an enhanced 3 nm process for better power, performance and density, N3X, a process tailored for high performance computing (HPC) applications, and N3AE, enabling early start of automotive applications on the most advanced silicon technology.

With more than 1,600 customers and partners registered to attend, the North America Technology Symposium in Santa Clara, California is the first of TSMC's Technology Symposiums around the world in the coming months. The North America symposium also features an Innovation Zone spotlighting the exciting technologies of 18 emerging start-up customers.

Samsung Hit With $303 Million Fine, Sued Over Alleged Memory Patent Infringements

Netlist Inc., an enterprise solid-state storage drive specialist, was awarded over $303 million in damages by a federal jury in Texas on April 21 over apparent patent infringement on Samsung's part. Netlist alleged that the South Korean multinational electronics corporation had knowingly infringed on five patents, all relating to improvements in data processing within the design of memory modules intended for high performance computing (HPC) purposes. The Irvine, CA-based computer-memory specialist has sued Samsung in the past, with a legal suit filed at the Federal District Court for the Central District of California.

Netlist was seemingly pleased by the verdict reached at the time (2021) when the court: "granted summary judgements in favor of Netlist and against Samsung for material breach of various obligations under the Joint Development and License Agreement (JDLA), which the parties executed in November 2015. A summary judgment is a final determination rendered by the judge and has the same force and effect as a final ruling after a jury trial in litigation."

SK hynix Develops Industry's First 12-Layer HBM3, Provides Samples To Customers

SK hynix announced today that it has become the first in the industry to develop a 12-layer HBM3 product with a 24 gigabyte (GB) memory capacity, currently the largest in the industry, and said customers' performance evaluations of samples are underway. HBM (High Bandwidth Memory) is a high-value, high-performance memory that vertically interconnects multiple DRAM chips and dramatically increases data processing speed compared to traditional DRAM products. HBM3 is the fourth-generation product, succeeding the previous generations HBM, HBM2 and HBM2E.

"The company succeeded in developing the 24 GB package product that increased the memory capacity by 50% from the previous product, following the mass production of the world's first HBM3 in June last year," SK hynix said. "We will be able to supply the new products to the market from the second half of the year, in line with growing demand for premium memory products driven by the AI-powered chatbot industry." SK hynix engineers improved process efficiency and performance stability by applying Advanced Mass Reflow Molded Underfill (MR-MUF) technology to the latest product, while Through Silicon Via (TSV) technology reduced the thickness of a single DRAM chip by 40%, achieving the same stack height as the 16 GB product.
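The stack arithmetic is worth spelling out. Assuming the previous 16 GB part was an eight-high stack of the same-capacity dies (as with SK hynix's original HBM3), each die holds 2 GB, and fitting 50% more layers into the same stack height requires dies no thicker than two-thirds of the old thickness, which the quoted 40% thickness reduction comfortably clears:

```python
new_capacity = 24    # GB, new 12-layer stack
layers_new = 12
old_capacity = 16    # GB, previous product
layers_old = 8       # assumption: prior part was an 8-high stack

per_die = new_capacity / layers_new          # capacity per DRAM die, GB
capacity_gain = new_capacity / old_capacity  # 1.5 -> the quoted +50%

# 12 dies in the height of 8 requires dies at most 8/12 ~= 67% of the
# old thickness; TSV thinning to 60% (a 40% reduction) clears that bar.
max_relative_thickness = layers_old / layers_new
tsv_relative_thickness = 1 - 0.40

print(per_die, capacity_gain)                              # 2.0 1.5
print(tsv_relative_thickness <= max_relative_thickness)    # True
```

The hypothetical eight-layer baseline is an assumption for illustration; the announcement itself only states the 50% capacity increase and the 40% die thinning.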

AMD Joins AWS ISV Accelerate Program

AMD announced it has joined the Amazon Web Services (AWS) Independent Software Vendor (ISV) Accelerate Program, a co-sell program for AWS Partners - like AMD - who provide integrated solutions on AWS. The program helps AWS Partners drive new business by directly connecting participating ISVs with the AWS Sales organization.

Through the AWS ISV Accelerate Program, AMD will receive focused co-selling support from AWS, including access to further sales-enablement resources, reduced AWS Marketplace listing fees, and incentives for AWS sales teams. The program also gives participating ISVs access to millions of active AWS customers globally.

Bulk Order of GPUs Points to Twitter Tapping Big Time into AI Potential

According to Business Insider, Twitter has made a substantial investment in hardware upgrades at its North American datacenter operation. The company has purchased somewhere in the region of 10,000 GPUs, destined for the social media giant's two remaining datacenter locations. Insider sources claim that Elon Musk has committed to a large language model (LLM) project in an effort to rival OpenAI's ChatGPT system. The GPUs will not provide much computational value for normal day-to-day tasks at Twitter; the source reckons the extra processing power will be utilized for deep learning purposes.

Twitter has not revealed any concrete plans for its relatively new in-house artificial intelligence project, but something was afoot when, earlier this year, Musk recruited several researchers from Alphabet's DeepMind division. It was theorized at the time that he was incubating a resident AI research lab, following his public criticism of former colleagues at OpenAI and their very popular, widely adopted chatbot.

Intel Discontinues Brand New Max 1350 Data Center GPU, Successor Targets Alternative Markets

Intel has decided to reorganize its Max series of data center GPUs (codenamed Ponte Vecchio), as revealed to Tom's Hardware this week, with one model, the Data Center GPU Max 1350, set for removal from the lineup. Industry experts are puzzled by this decision, given that the 1350 has been officially "available" on the market since January 2023, following soon after the announcement of the entire Max range in November 2022. Intel has removed listings and entries for the Data Center GPU Max 1350 from its various web presences.

A successor of sorts is in the works: Intel has lined up the Data Center GPU Max 1450 for release later in the year. This model will have trimmed I/O bandwidth, a modification likely targeting companies in China, where performance standards are capped at a certain level by U.S. sanctions on GPU exports. An Intel spokesperson provided further details and reasons for rearranging the Max product range: "We launched the Intel Data Center Max GPU 1550 (600 W), which was initially targeted for liquid-cooled solutions only. We have since expanded our support by offering Intel Data Center Max GPU 1550 (600 W) to include air-cooled solutions."