News Posts matching #ASIC


Intel Launches Gaudi 3 AI Accelerator: 70% Faster Training, 50% Faster Inference Compared to NVIDIA H100, Promises Better Efficiency Too

During the Vision 2024 event, Intel announced its latest Gaudi 3 AI accelerator, promising significant improvements over its predecessor. Intel claims the Gaudi 3 offers up to 70% better training performance, 50% better inference, and 40% better efficiency than NVIDIA's H100 processors. The new AI accelerator is presented as a PCIe Gen 5 dual-slot add-in card with a 600 W TDP or as an OAM module with a 900 W TDP. The PCIe card has the same peak 1,835 TeraFLOPS of FP8 performance as the OAM module despite a 300 W lower TDP. The PCIe version works in groups of four per system, while the OAM HL-325L modules can run in an eight-accelerator configuration per server. The lower TDP will likely result in lower sustained performance, but it confirms that the same silicon is used, just fine-tuned to a lower frequency. Built on TSMC's 5 nm N5 node, the AI accelerator features 64 Tensor Cores, delivering double the FP8 and quadruple the FP16 performance of the previous-generation Gaudi 2.

The Gaudi 3 AI chip comes with 128 GB of HBM2E delivering 3.7 TB/s of bandwidth, plus twenty-four 200 Gbps Ethernet NICs, with dual 400 Gbps NICs used for scale-out. All of that is laid out on the 10 tiles that make up the Gaudi 3 accelerator, which you can see pictured below. There is 96 MB of SRAM split between two compute tiles, acting as a low-level cache that bridges data communication between the Tensor Cores and HBM memory. Intel also announced support for the new performance-boosting standardized MXFP4 data format and is developing an AI NIC ASIC for Ultra Ethernet Consortium-compliant networking. The Gaudi 3 supports clusters of up to 8,192 cards, built from 1,024 nodes of eight accelerators each. It is on track for volume production in Q3, offering a cost-effective alternative to NVIDIA accelerators with the additional promise of a more open ecosystem. More information and a deeper dive can be found in the Gaudi 3 Whitepaper.
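The scale-out figures are easy to sanity-check with simple arithmetic; here is a quick sketch using only numbers stated above (Python used purely as a calculator):

```python
# Gaudi 3 cluster and per-card network figures, as stated in the article.
ACCELERATORS_PER_NODE = 8          # OAM HL-325L modules per server
NODES = 1024                       # maximum supported node count

cluster_cards = ACCELERATORS_PER_NODE * NODES
print(cluster_cards)               # 8192 cards per cluster

# Aggregate Ethernet I/O per card: 24 NICs at 200 Gbps each.
per_card_gbps = 24 * 200
print(per_card_gbps / 1000)        # 4.8 Tbps per card
```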

Report Suggests Naver Siding with Samsung in $752 Million "Mach-1" AI Chip Deal

Samsung debuted its Mach-1 generation of AI processors during a recent shareholder meeting—the South Korean megacorp anticipates an early 2025 launch window. The application-specific integrated circuit (ASIC) design is expected to "excel in edge computing applications," with a focus on low-power, efficiency-oriented operating environments. Naver Corporation was a key NVIDIA high-end AI customer in South Korea (and Japan), but the leading search platform firm and creator of the HyperCLOVA X LLM reportedly deliberated adopting alternative hardware last October. The Korea Economic Daily believes that Naver's relationship with Samsung is set to grow, courtesy of a proposed $752 million investment: Samsung, "the world's top memory chipmaker, will supply its next-generation Mach-1 artificial intelligence chips to Naver Corp. by the end of this year."

Reports from last December indicated that the two companies were deep into the process of co-designing power-efficient AI accelerators—Naver's main goal is to finalize a product offering eight times the energy efficiency of NVIDIA's H100 AI accelerator. Naver's alleged bulk order—of roughly 150,000 to 200,000 Samsung Mach-1 AI chips—appears to be a stopgap. Industry insiders reckon that Samsung's first-gen AI accelerator is much cheaper than NVIDIA H100 price points—a per-unit figure of $3,756 is mentioned in the KED Global article. Samsung is speculated to be shopping its fledgling AI tech to Microsoft and Meta.
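The quoted per-unit price lines up with the deal size and the rumored order volume; a quick check (all figures from the report):

```python
# Implied per-chip price for the reported $752M Naver order.
deal_usd = 752_000_000
order_low, order_high = 150_000, 200_000   # rumored chip count range

print(deal_usd / order_high)   # -> close to the $3,756 per-unit figure KED cites
print(deal_usd / order_low)    # upper bound if the order lands at the low end
```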

Tenstorrent and MosChip Partner on High Performance RISC-V Design

Tenstorrent and MosChip Technologies announced today that they are partnering on design for Tenstorrent's cutting-edge RISC-V solutions. In selecting MosChip Technologies, Tenstorrent stands to strongly advance both its own and its customers' development of RISC-V solutions as they work together on Physical Design, DFT, Verification, and RTL Design services.

"MosChip Technologies is special in that they have unparalleled tape out expertise in design services, with more than 200 multi-million gate ASICs under their belt", said David Bennett, CCO of Tenstorrent. "Partnering with MosChip enables us to design the strongest RISC-V solution we can to serve ourselves, our partners, and our customers alike."

MICLEDI Microdisplays Raises Series A Funding to Advance Best-in-Class microLED Display Design and Manufacturing

MICLEDI Microdisplays today announced a first closing of its Series A funding round with participation from imec.xpand, PMV, imec, KBC and SFPIM demonstrating strong support for the company's value proposition and commercial and technological progress achieved in the seed round. Series A follows a significant seed round award and additional non-dilutive funding in the form of grants and other vehicles from VLAIO. This brings the company's total funding to date to nearly $30 million.

"The company's achievements during this seed round have been astounding," said Sean Lord, CEO of MICLEDI. "Our door is open to engagements with some of the world's largest and most innovative electronic product manufacturing companies, most of whom are working on their own internal development projects for augmented reality (AR) displays in such diverse use cases as smart-wearable devices and automotive HUDs. This level of total funding to date is almost unheard of for a four-year-old startup."

Global Server Shipments Expected to Increase by 2.05% in 2024, with AI Servers Accounting For Around 12.1%

TrendForce underscores that the primary momentum for server shipments this year remains with American CSPs. However, due to persistently high inflation and elevated corporate financing costs curtailing capital expenditures, overall demand has not yet returned to pre-pandemic growth levels. Global server shipments are estimated to reach approximately 13.654 million units in 2024, an increase of about 2.05% YoY. Meanwhile, the market continues to focus on the deployment of AI servers, with their shipment share estimated at around 12.1%.

Foxconn is expected to see the highest growth rate, with an estimated annual increase of about 5-7%. This growth includes significant orders such as Dell's 16G platform, AWS Graviton 3 and 4, Google Genoa, and Microsoft Gen9. In terms of AI server orders, Foxconn has made notable inroads with Oracle and has also secured some AWS ASIC orders.

AI's Rocketing Demand to Drive Server DRAM—2024 Predictions Show a 17.3% Annual Increase in Content per Box, Outpacing Other Applications

In 2024, the tech industry remains steadfastly focused on AI, with the continued rollout of advanced AI chips leading to significant enhancements in processing speeds. TrendForce posits that this advancement is set to drive growth in both DRAM and NAND Flash across various AI applications, including smartphones, servers, and notebooks. The server sector is expected to see the most significant growth, with content per box for server DRAM projected to rise by 17.3% annually, while enterprise SSDs are forecast to increase by 13.2%. The market penetration rate for AI smartphones and AI PCs is expected to experience noticeable growth in 2025 and is anticipated to further drive the average content per box upward.

Looking first at smartphones, despite chipmakers focusing on improving processing performance, the absence of new AI functionalities has somewhat constrained the impact of AI. Memory prices plummeted in 2023 due to oversupply, making lower-priced options attractive and leading to a 17.5% increase in average DRAM capacity and a 19.2% increase in NAND Flash capacity per smartphone. However, with no new applications expected in 2024, the growth rate in content per box for both DRAM and NAND Flash in smartphones is set to slow down, estimated at 14.1% and 9.3%, respectively.
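To see how these growth rates compound year over year, here is a small sketch; the 8 GB baseline is a hypothetical starting capacity for illustration only, not a figure from TrendForce:

```python
# Compounding the reported annual growth in average smartphone DRAM capacity.
base_gb = 8.0                       # hypothetical 2022 average capacity (assumed)
after_2023 = base_gb * 1.175        # +17.5% growth in 2023
after_2024 = after_2023 * 1.141     # +14.1% estimated growth in 2024

print(round(after_2023, 2))         # 9.4 GB
print(round(after_2024, 2))         # ~10.73 GB
```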

Intel Foundry Services Get 18A Order: Arm-based 64-Core Neoverse SoC

Faraday Technology Corporation, a Taiwanese silicon IP designer, has announced plans to develop a new 64-core system-on-chip (SoC) utilizing Intel's most advanced 18A process technology. The Arm-based SoC will integrate Arm Neoverse compute subsystems (CSS) to deliver high performance and efficiency for data centers, infrastructure edge, and 5G networks. This collaboration brings together Faraday, Arm, and Intel Foundry Services. Faraday will leverage its ASIC design and IP solutions expertise to build the SoC. Arm will provide the Neoverse compute subsystem IP to enable scalable computing. Intel Foundry Services will manufacture the chip using its cutting-edge 18A process, which promises best-in-class transistor performance.

The new 64-core SoC will be a key component of Faraday's upcoming SoC evaluation platform. This platform aims to accelerate customer development of data center servers, high-performance computing ASICs, and custom SoCs. The platform will also incorporate interface IPs from the Arm Total Design ecosystem for complete implementation and verification. Both Arm and Intel Foundry Services expressed excitement about working with Faraday on this advanced Arm-based custom silicon project. "We're thrilled to see industry leaders like Faraday and Intel on the cutting edge of Arm-based custom silicon development," said an Arm spokesperson. Intel SVP Stuart Pann said, "We are pleased to work with Faraday in the development of the SoC based on Arm Neoverse CSS utilizing our most competitive Intel 18A process technology." The collaboration represents Faraday's strategic focus on leading-edge technologies to meet evolving application requirements. With its extensive silicon IP portfolio and design capabilities, Faraday wants to deliver innovative solutions and break into next-generation computing design.

Neuchips to Showcase Industry-Leading Gen AI Inferencing Accelerators at CES 2024

Neuchips, a leading AI Application-Specific Integrated Circuits (ASIC) solutions provider, will demo its revolutionary Raptor Gen AI accelerator chip (previously named N3000) and Evo PCIe accelerator card LLM solutions at CES 2024. Raptor, the new chip solution, enables enterprises to deploy large language models (LLMs) inference at a fraction of the cost of existing solutions.

"We are thrilled to unveil our Raptor chip and Evo card to the industry at CES 2024," said Ken Lau, CEO of Neuchips. "Neuchips' solutions represent a massive leap in price to performance for natural language processing. With Neuchips, any organisation can now access the power of LLMs for a wide range of AI applications."

Top Ten IC Design Houses Ride Wave of Seasonal Consumer Demand and Continued AI Boom to See 17.8% Increase in Quarterly Revenue in 3Q23

TrendForce reports that 3Q23 has been a historic quarter for the world's leading IC design houses as total revenue soared 17.8% to reach a record-breaking US$44.7 billion. This remarkable growth is fueled by a robust season of stockpiling for smartphones and laptops, combined with a rapid acceleration in the shipment of generative AI chips and components. NVIDIA, capitalizing on the AI boom, emerged as the top performer in revenue and market share. Notably, analog IC supplier Cirrus Logic overtook US PMIC manufacturer MPS to snatch the tenth spot, driven by strong demand for smartphone stockpiling.

NVIDIA's revenue soared 45.7% to US$16.5 billion in the third quarter, bolstered by sustained demand for generative AI and LLMs. Its data center business—accounting for nearly 80% of its revenue—was a key driver in this exceptional growth.

China Continues to Enhance AI Chip Self-Sufficiency, but High-End AI Chip Development Remains Constrained

Huawei's subsidiary HiSilicon has made significant strides in the independent R&D of AI chips, launching the next-gen Ascend 910B. These chips are utilized not only in Huawei's public cloud infrastructure but also sold to other Chinese companies. This year, Baidu ordered over a thousand Ascend 910B chips from Huawei to build approximately 200 AI servers. Additionally, in August, Chinese company iFlytek, in partnership with Huawei, released the "Gemini Star Program," a hardware and software integrated device for exclusive enterprise LLMs, equipped with the Ascend 910B AI acceleration chip, according to TrendForce's research.

TrendForce conjectures that the next-generation Ascend 910B chip is likely manufactured using SMIC's N+2 process. However, the production faces two potential risks. Firstly, as Huawei recently focused on expanding its smartphone business, the N+2 process capacity at SMIC is almost entirely allocated to Huawei's smartphone products, potentially limiting future capacity for AI chips. Secondly, SMIC remains on the Entity List, possibly restricting access to advanced process equipment.

Zero ASIC Democratizing Chip Making

Zero ASIC, a semiconductor startup, came out of stealth today to announce early access to its one-of-a-kind ChipMaker platform, demonstrating a number of world firsts:
  • 3D chiplet composability enabling billions of new silicon products
  • Fully automated no-code chiplet-based chip design
  • Zero install interactive RTL-based chip emulation
  • Roadmap to 100X reduction in chip development costs

"Custom Application Specific Integrated Circuits (ASICs) offer a 10-100X cost and energy advantage over commercial off-the-shelf (COTS) devices, but the enormous development cost makes ASICs non-viable for most applications," said Andreas Olofsson, CEO and founder of Zero ASIC. "To build the next wave of world-changing silicon devices, we need to reduce the barrier to ASICs by orders of magnitude. Our mission at Zero ASIC is to make ordering an ASIC as easy as ordering catalog parts from an electronics distributor."

Phison Introduces New High-Speed Signal Conditioner IC Products, Expanding its PCIe 5.0 Ecosystem for AI-Era Data Centers

Phison Electronics, a global leader in NAND controllers and storage solutions, announced today that the company has expanded its portfolio of PCIe 5.0 high-speed transmission solutions with PCIe 5.0, CXL 2.0-compatible redriver and retimer data signal conditioning IC products. Leveraging the company's deep expertise in PCIe engineering, Phison offers the industry's widest portfolio of multi-channel PCIe 5.0 redriver and retimer solutions and PCIe 5.0 storage solutions designed specifically to meet the data infrastructure demands of artificial intelligence and machine learning (AI+ML), edge computing, high-performance computing, and other data-intensive, next-gen applications. At the 2023 Open Compute Project Global Summit, the Phison team is showcasing its expansive PCIe 5.0 portfolio, demonstrating the redriver and retimer technologies alongside its enterprise NAND flash and illustrating a holistic vision for a PCIe 5.0 data ecosystem that addresses the most demanding applications of the AI-everywhere era.

"Phison has focused industry-leading R&D efforts on developing in-house, chip-to-chip communication technologies since the introduction of the PCIe 3.0 protocol, with PCIe 4.0 and PCIe 5.0 solutions now in mass production, and PCIe 6.0 solutions now in the design phase," said Michael Wu, President & General Manager, Phison US. "Phison's accumulated experience in high-speed signaling enables our team to deliver retimer and redriver design solutions that are optimized for top signal integration, low power usage, and high temperature endurance, to deliver interface speeds for the most challenging compute environments."

Avicena Demonstrates First microLED Based Transceiver IC in 16 nm finFET CMOS for Chip-to-Chip Communications

Avicena, a privately held company headquartered in Sunnyvale, CA, is demonstrating its LightBundle multi-Tbps chip-to-chip interconnect technology at the European Conference on Optical Communication (ECOC) 2023 in Glasgow, Scotland (https://www.ecocexhibition.com/). Avicena's microLED-based LightBundle architecture breaks new ground by unlocking the performance of processors, memory, and sensors, removing key bandwidth and proximity constraints while simultaneously offering class-leading energy efficiency.

"As generative AI continues to evolve, the role of high bandwidth-density, low-power and low latency interconnects between xPUs and HBM modules cannot be overstated", says Chris Pfistner, VP Sales & Marketing of Avicena. "Avicena's innovative LightBundle interconnects have the potential to fundamentally change the way processors connect to each other and to memory because their inherent parallelism is well-matched to the internal wide and slow bus architecture within ICs. With a roadmap to multi-terabit per second capacity and sub-pJ/bit efficiency these interconnects are poised to enable the next era of AI innovation, paving the way for even more capable models and a wide range of AI applications that will shape the future."

Strong Cloud AI Server Demand Propels NVIDIA's FY2Q24 Data Center Business to Surpass 76% for the First Time

NVIDIA's latest financial report for FY2Q24 reveals that its data center business reached US$10.32 billion—a QoQ growth of 141% and YoY increase of 171%. The company remains optimistic about its future growth. TrendForce believes that the primary driver behind NVIDIA's robust revenue growth stems from its data center's AI server-related solutions. Key products include AI-accelerated GPUs and AI server HGX reference architecture, which serve as the foundational AI infrastructure for large data centers.

TrendForce further anticipates that NVIDIA will integrate its software and hardware resources. Utilizing a refined approach, NVIDIA will align its high-end, mid-tier, and entry-level GPU AI accelerator chips with various ODMs and OEMs, establishing a collaborative system certification model. Beyond accelerating the deployment of CSP cloud AI server infrastructures, NVIDIA is also partnering with entities like VMware on solutions including the Private AI Foundation. This strategy extends NVIDIA's reach into the edge enterprise AI server market, underpinning steady growth in its data center business for the next two years.

Samsung's 3 nm GAA Process Identified in a Crypto-mining ASIC Designed by China Startup MicroBT

Semiconductor industry research firm TechInsights said it has found that Samsung's 3 nm GAA (gate-all-around) process has been incorporated into the crypto miner ASIC (Whatsminer M56S++) from a Chinese manufacturer, MicroBT. In a Disruptive Technology Event Brief exclusively provided to DIGITIMES Asia, TechInsights points out that the significance of this development lies in the commercial utilization of GAA technology, which facilitates the scaling of transistors to 2 nm and beyond. "This development is crucial because it has the potential to enhance performance, improve energy efficiency, keep up with Moore's Law, and enable advanced applications," said TechInsights, identifying the MicroBT ASIC chip as the first commercialized product using GAA technology in the industry.

But this would also reveal that Samsung is the foundry for MicroBT, using the 3 nm GAA process. DIGITIMES Research semiconductor analyst Eric Chen pointed out that Samsung indeed has started producing chips using the 3 nm GAA process, but the capacity is still small. "Getting revenues from shipment can be defined as 'commercialization', but ASIC is a relatively simple kind of chip to produce, in terms of architecture."

AMD Introduces World's Largest FPGA-Based Adaptive SoC for Emulation and Prototyping

AMD today announced the AMD Versal Premium VP1902 adaptive system-on-chip (SoC), the world's largest adaptive SoC. The VP1902 adaptive SoC is an emulation-class, chiplet-based device designed to streamline the verification of increasingly complex semiconductor designs. Offering 2X the capacity of the prior generation, it lets designers confidently innovate and validate application-specific integrated circuit (ASIC) and SoC designs, helping bring next-generation technologies to market faster.

AI workloads are driving increased complexity in chipmaking, requiring next-generation solutions to develop the chips of tomorrow. FPGA-based emulation and prototyping provides the highest level of performance, allowing faster silicon verification and enabling developers to shift left in the design cycle and begin software development well before silicon tape-out. AMD, through Xilinx, brings over 17 years of leadership and six generations of the industry's highest capacity emulation devices, which have nearly doubled in capacity each generation.

More Pictures of NVIDIA's Cinder Block-sized RTX 4090 Ti Cooler Surface

Back in January, we got our first look at the cinder block-like 4-slot cooling solution of NVIDIA's upcoming flagship graphics card (called either the RTX 4090 Ti or the TITAN Ada). "ExperteVallah" on Twitter scored additional pictures of the cooler. Its design pushes the heat dissipation surface out to the entire thickness of the cooler, ventilated along its entire length.

The card's PCB isn't conventional—it isn't perpendicular to the plane of the motherboard like any other add-in card—but rather lies along the plane of the motherboard, with additional breakaway daughter cards interfacing with the sole 12VHPWR power connector and the PCIe slot. This slender, ruler-shaped PCB spans the entire length of the card without getting in the way of its heat dissipation surfaces. The length is used for the large AD102 ASIC, which is probably maxed out (all 144 SM enabled), twelve GDDR6X memory chips (possibly at a faster 23 Gbps), and a mammoth VRM that nearly maxes out the 600 W continuous power delivery limit of the 12VHPWR connector.
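Twelve memory chips imply a 384-bit bus if each GDDR6X chip uses the standard 32-bit interface (an assumption; the report doesn't state the bus width), which would put memory bandwidth at just over 1.1 TB/s at the rumored data rate:

```python
# Estimated memory bandwidth for the rumored configuration.
chips = 12
bits_per_chip = 32          # standard GDDR6X interface width (assumed)
gbps_per_pin = 23           # rumored per-pin data rate

bus_width = chips * bits_per_chip
bandwidth_gb_s = bus_width * gbps_per_pin / 8

print(bus_width)            # 384-bit bus
print(bandwidth_gb_s)       # 1104.0 GB/s
```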

Radeon RX 7800 XT Based on New ASIC with Navi 31 GCD on Navi 32 Package?

AMD Radeon RX 7800 XT will be a much-needed performance-segment addition to the company's Radeon RX 7000-series, which has a massive performance gap between the enthusiast-class RX 7900 series, and the mainstream RX 7600. A report by "Moore's Law is Dead" makes a sensational claim that it is based on a whole new ASIC that's neither the "Navi 31" powering the RX 7900 series, nor the "Navi 32" designed for lower performance tiers, but something in between. This GPU will be AMD's answer to the "AD103." Apparently, the GPU features the same exact 350 mm² graphics compute die (GCD) as the "Navi 31," but on a smaller package resembling that of the "Navi 32." This large GCD is surrounded by four MCDs (memory cache dies), which amount to a 256-bit wide GDDR6 memory interface, and 64 MB of 2nd Gen Infinity Cache memory.

The GCD physically features 96 RDNA3 compute units, but AMD's product managers now have the ability to give the RX 7800 XT a much higher CU count than the "Navi 32" offers, while staying below that of the RX 7900 XT (which is configured with 84). It's rumored that the smaller "Navi 32" GCD tops out at 60 CU (3,840 stream processors), so the new ASIC would enable the RX 7800 XT to have a CU count anywhere between 60 and 84. The resulting RX 7800 XT could have an ASIC with a lower manufacturing cost than a theoretical Navi 31 with two disabled MCDs (>60 mm² of wasted 6 nm dies), and even if it ends up performing within 10% of the RX 7900 XT (matching the GeForce RTX 4070 Ti in the process), it would do so with better pricing headroom. The same ASIC could even power mobile RX 7900 series products, where the smaller package and narrower memory bus conserve precious PCB footprint.
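The figures above imply 64 stream processors per RDNA3 CU (3,840 SP at 60 CU), which makes the possible RX 7800 XT configurations easy to enumerate:

```python
# Stream-processor counts for plausible RX 7800 XT CU configurations.
SP_PER_CU = 3840 // 60      # 64, derived from the article's Navi 32 figures

# CU counts spanning the rumored 60-84 window, plus the full 96-CU GCD.
for cu in (60, 70, 84, 96):
    print(cu, cu * SP_PER_CU)
```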

Molex Unveils 224 Gbps PAM4 Chip-to-Chip Connectors

Molex, a company known for making various electronics and connectors, has today announced that it has developed a first-of-its-kind chip-to-chip connector. Designed mainly for the data center, the Molex 224G product portfolio includes next-generation cables, backplanes, board-to-board connectors, and near-ASIC connector-to-cable solutions. Running at 224 Gbps, these products use PAM4 signaling and boast "the highest levels of electrical, mechanical, physical and signal integrity." As the company states, future high-performance computing (HPC) data centers require a lot of board-to-board, chip-to-chip, and other types of communication to improve overall efficiency and remove bottlenecks in data transfer. To tackle this problem, Molex has a range of products, including Mirror Mezz Enhanced, Inception, and CX2 Dual Speed products.

Future generative AI, 1.6T (1.6 Tb/s) Ethernet, and other data center challenges need a dedicated communication standard, which Molex is aiming to provide. Working with various data center and enterprise customers, the company claims to have set the pace for products based on this 224G PAM4 chip-to-chip technology. We suspect that the Open Compute Project (OCP) will be first in line for adoption, as Molex has historically worked with them as they adopted the Mirror Mezz and Mirror Mezz Pro board-to-board connectors. The new products can be seen below, and we expect to hear more announcements from Molex's partners. Solutions like OSFP 1600, QSFP 800, and QSFP-DD 1600 already use 224G products.
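PAM4 encodes two bits per symbol (four amplitude levels), so a 224 Gbps lane runs at half the symbol rate that plain NRZ signaling would need; a short illustration:

```python
import math

# Symbol rates for a 224 Gbps lane under different line modulations.
bitrate_gbps = 224

nrz_baud = bitrate_gbps / math.log2(2)    # NRZ: 2 levels -> 1 bit/symbol
pam4_baud = bitrate_gbps / math.log2(4)   # PAM4: 4 levels -> 2 bits/symbol

print(nrz_baud)    # 224.0 GBaud
print(pam4_baud)   # 112.0 GBaud
```

The halved symbol rate is what keeps channel losses manageable at these speeds, at the cost of a tighter signal-to-noise budget per level.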

Bosch Plans to Acquire U.S. Chipmaker TSI Semiconductors

Bosch is expanding its semiconductor business with silicon carbide chips. The technology company plans to acquire assets of the U.S. chipmaker TSI Semiconductors, based in Roseville, California. With a workforce of 250, the company is a foundry for application-specific integrated circuits, or ASICs. Currently, it mainly develops and produces large volumes of chips on 200-millimeter silicon wafers for applications in the mobility, telecommunications, energy, and life sciences industries. Over the next few years, Bosch intends to invest more than 1.5 billion USD in the Roseville site and convert the TSI Semiconductors manufacturing facilities to state-of-the-art processes. Starting in 2026, the first chips will be produced on 200-millimeter wafers based on the innovative material silicon carbide (SiC).

In this way, Bosch is systematically reinforcing its semiconductor business and will have significantly extended its global portfolio of SiC chips by the end of 2030. Above all, the global boom and ramp-up of electromobility are resulting in huge demand for such special semiconductors. The full scope of the planned investment will be heavily dependent on federal funding opportunities available via the CHIPS and Science Act, as well as economic development opportunities within the State of California. Bosch and TSI Semiconductors have agreed not to disclose any financial details of the transaction, which is subject to regulatory approval.

Intel Discontinues Blockscale Crypto Mining ASICs

Today Intel announced that it is discontinuing production of its Blockscale 1000 series of ASICs built for cryptocurrency mining. Blockscale was designed by the Custom Compute Group within what was Intel's AXG graphics division at the time, and launched back in April 2022 when the value of Bitcoin was still above $40K USD. Blockscale initially succeeded with efficiency and supply advantages over competing ASICs, as Intel leveraged its manufacturing capacity to produce the chips; however, the cryptocurrency market experienced a major slump over the second half of 2022. Intel's AXG has also recently seen a major restructuring, though there has been no word on the current status of the Custom Compute Group. Support for existing Blockscale customers is set to continue for some time. Intel has not announced any possible follow-up crypto ASIC generation, saying only, "We continue to monitor market opportunities."

Intel's Blockscale was rather late to the market as far as crypto mining ASICs go. Early mining ASICs began hitting the scene in mid-2012 as FPGAs started to reach their limits in efficiency, and investment funds began to funnel into crypto startups. Intel's interest in cryptocurrency hardware lagged behind even their contemporaries at NVIDIA and AMD, both of which had crypto-focused variants of consumer GPUs on the market as early as 2017 during the first major mining-induced hardware shortages. Intel hasn't mentioned whether the timing of Blockscale contributed to its short shelf life, but Bitcoin is on its way back up after the recent slump, shooting up to around $30K USD just prior to Intel's announcement.

AMD Introduces Alveo MA35D Media Accelerator

AMD today announced the AMD Alveo MA35D media accelerator featuring two 5 nm, ASIC-based video processing units (VPUs) supporting the AV1 compression standard and purpose-built to power a new era of live interactive streaming services at scale. With over 70% of the global video market being dominated by live content, a new class of low-latency, high-volume interactive streaming applications are emerging such as watch parties, live shopping, online auctions, and social streaming.

The Alveo MA35D media accelerator delivers the high channel density (up to 32x 1080p60 streams per card), power efficiency, and ultra-low-latency performance critical to reducing the skyrocketing infrastructure costs of scaling such compute-intensive content delivery. Compared to the previous-generation Alveo U30 media accelerator, the Alveo MA35D delivers up to 4x higher channel density, up to 4x lower latency in 4K, and 1.8x greater compression efficiency at the same VMAF score—a common video quality metric.
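Taken together, the stated 4x density gain implies the previous-generation U30 handled roughly eight such streams per card; a trivial check of the claim:

```python
# Implied previous-generation channel density from the stated 4x improvement.
ma35d_streams = 32        # 1080p60 streams per MA35D card
density_gain = 4          # claimed improvement over the Alveo U30

u30_streams = ma35d_streams / density_gain
print(u30_streams)        # streams per U30 card
```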

Phison Introduces Upgraded IMAGIN+ Platform For Customized NAND Storage, ASIC Design Services

Phison Electronics, a global leader in NAND flash and storage solutions, announced today the launch of IMAGIN+, an upgraded platform offering R&D resource sharing and ASIC (Application-Specific Integrated Circuit) design services for NAND flash controllers, storage solutions, PMIC, and Redrivers/Retimers. The introduction of IMAGIN+ comes during the Embedded World Exhibition & Conference (March 14-16) in Nuremberg, a premier global event for the embedded community.

Phison's rejuvenated platform, bolstered by more than two decades of research and development expertise, empowers global partners and customers to create not just ASIC chips and NAND flash storage solutions but also to participate in the growth of a thriving ecosystem of emerging technologies. Phison understands that success in today's fast-paced market requires more than just providing NAND storage solutions; it requires the ability to influence and shape the industry through signal integrity and power management ICs, Compute Express Link and other value-added offerings.

Aetina to Showcase Its New AI Solutions at Embedded World 2023

Aetina Corporation, a leading provider of AI solutions for different types of vertical AI applications, will showcase its new embedded computers, AI inference platforms, GPUs, AI accelerators, and edge device management software at the upcoming Embedded World 2023. Aetina provides different form factors based on GPUs or ASICs, such as MXM modules, graphics cards, and edge computing systems. The MXM modules powered by NVIDIA Ampere architecture-based GPUs offer extra computing power to existing AI systems, ensuring low-latency data analytics. The MXM modules and systems built with ASICs, on the other hand, are ideal for specific applications or AI systems that involve multi-inference processes.

As an Elite member of the NVIDIA Partner Network, Aetina offers a variety of edge computing systems and platforms powered by the NVIDIA Jetson edge AI and robotics platform. Aetina's newly released embedded computers are built with the Jetson Orin series SoMs—Jetson AGX Orin, Jetson Orin NX, and Jetson Orin Nano; these small-sized systems and platforms, supporting different peripherals, can be easily integrated into larger AI-powered systems while also being able to function as standalone AI computers.

Wi-Fi 7 Cryptomining Router - A Fresh Scam (Ab)using a Friendly Name

An entity calling itself "TP-Link ASIC" recently announced a Wi-Fi 7 capable ASIC cryptocurrency miner with claims of hashing rates above even the mighty RTX 4090... in one specific ASIC-friendly algorithm. If the concept of a router that mines crypto sounds strange, deranged, or downright questionable to you, you're not alone. The consumer market for crypto mining has waned heavily in the last handful of months, due in no small part to Ethereum's switch to Proof of Stake last September, which left GPUs ineffectual for mining the previously profitable coin. However, ASIC mining does remain prevalent across the sea of algorithms and alt-coins that exist. One such alt-coin is Kadena, a smaller Proof of Work cryptocurrency that hovers around the $1 USD range. This is where "TP-Link ASIC" has placed its engineering efforts with the "TP-Link NX31 31.2 THS Router Miner," which, as the name implies, offers 31.2 TH/s of Kadena hashing power at a cool $1,990 USD (previously $1,440).
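For context, the asking price works out to roughly $64 per TH/s at the current listing, up from about $46 at the earlier one; a quick calculation from the listed figures:

```python
# Cost per terahash at both listed NX31 prices.
hashrate_ths = 31.2
costs = {price: round(price / hashrate_ths, 2) for price in (1440, 1990)}

print(costs)   # USD per TH/s at each price point
```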

If you've been around long enough to remember the Butterfly Labs fiasco and fallout, you're probably already groaning and holding your head in your hands. Buying ASICs from new and unproven vendors promising the moon is never a good idea. But TP-Link isn't an unproven company; it has a successful business and sells legitimate products. Surely this would be handled properly since this is a well-established brand, right? Well, here's the twist: "TP-Link ASIC" is NOT TP-Link. When questioned about the launch by Tom's Hardware, a TP-Link representative responded that "TP-Link ASIC" has no affiliation with TP-Link, nor does its NX31 mining router.