News Posts matching #AI


Arm and Synopsys Strengthen Partnership to Accelerate Custom Silicon on Advanced Nodes

Synopsys today announced it has expanded its collaboration with Arm to provide optimized IP and EDA solutions for the newest Arm technology, including the Arm Neoverse V2 platform and Arm Neoverse Compute Subsystem (CSS). Synopsys has joined Arm Total Design, where it will leverage its deep design expertise, the Synopsys.ai full-stack AI-driven EDA suite, and Synopsys Interface, Security, and Silicon Lifecycle Management IP to help mutual customers speed development of their Arm-based CSS solutions. The expanded partnership builds on three decades of collaboration to enable mutual customers to quickly develop specialized silicon at lower cost, with less risk and faster time to market.

"With Arm Total Design, our aim is to enable rapid innovation on Arm Neoverse CSS and engage critical ecosystem expertise at every stage of SoC development," said Mohamed Awad, senior vice president and general manager, Infrastructure Line of Business at Arm. "Our deep technical collaboration with Synopsys to deliver pre-integrated and validated IP and EDA tools will help our mutual customers address the industry's most complex computing challenges with specialized compute."

Comcast and Broadcom to Develop the World's First AI-Powered Access Network With Pioneering New Chipset

Comcast and Broadcom today announced joint efforts to develop the world's first AI-powered access network with a new chipset that embeds artificial intelligence (AI) and machine learning (ML) within the nodes, amps and modems that comprise the last few miles of Comcast's network. With these new capabilities broadly deployed throughout the network, Comcast will be able to transform its operations by automating more network functions and delivering an improved customer experience through better and more actionable intelligence.

Additionally, the new chipset will be the first in the world to incorporate DOCSIS 4.0 Full Duplex (FDX), Extended Spectrum DOCSIS (ESD) and the ability to run both simultaneously, enabling Internet service providers across the globe to deliver DOCSIS 4.0 services using a toolkit with technology options to meet their business needs. DOCSIS 4.0 is the next-generation network technology that will introduce symmetrical multi-gigabit Internet speeds, lower latency, and even better security and reliability to hundreds of millions of people and businesses over their existing connections, without the need for major construction of new network infrastructure.

NVIDIA RTX Video Super Resolution Update Enhances Video Quality, Detail Preservation and Expands to GeForce RTX 20 Series GPUs

NVIDIA today announced an update to RTX Video Super Resolution (VSR) that delivers greater overall graphical fidelity with preserved details, upscaling for native videos and support for GeForce RTX 20 Series desktop and laptop GPUs. For AI assists from RTX VSR and more - from enhanced creativity and productivity to blisteringly fast gaming - check out the RTX for AI page.

The RTX VSR 1.5 Update
RTX VSR's AI model has been retrained to more accurately distinguish between subtle details and compression artifacts, better preserving image detail during the upscaling process. Finer details are more visible, and the overall image looks sharper and crisper than before. RTX VSR version 1.5 will also de-artifact videos played at their native resolution - previously, only upscaled video could be enhanced. This provides a leap in graphical fidelity for laptop owners with 1080p screens: the updated RTX VSR makes 1080p, a popular resolution for both content and displays, look smoother at its native resolution, even with heavy artifacts. And with expanded RTX VSR support, owners of GeForce RTX 20 Series GPUs can benefit from the same AI-enhanced video as those using RTX 30 and 40 Series GPUs.

RTX VSR 1.5 is available as part of the latest Game Ready Driver, available for download today. Content creators using NVIDIA Studio Drivers - designed to enhance features, reduce repetitiveness and dramatically accelerate creative workflows - will receive RTX VSR 1.5 in the Studio Driver releasing in early November.
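The pipeline NVIDIA describes - an AI model that upscales a frame while separating genuine detail from compression artifacts - follows the general shape of learned super-resolution. Below is a minimal, hypothetical PyTorch sketch of that shape; the ToyUpscaler model, its layer sizes, and its (untrained) weights are illustrative placeholders, not NVIDIA's actual network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUpscaler(nn.Module):
    """Bicubic upscale plus a small residual CNN that predicts detail corrections."""
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.refine = nn.Sequential(          # placeholder network, not NVIDIA's
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, frame):
        up = F.interpolate(frame, scale_factor=self.scale,
                           mode="bicubic", align_corners=False)
        return (up + self.refine(up)).clamp(0.0, 1.0)

frame = torch.rand(1, 3, 540, 960)            # one 960x540 RGB frame in [0, 1]
model = ToyUpscaler(scale=2).eval()
with torch.no_grad():
    enhanced = model(frame)                   # -> shape (1, 3, 1080, 1920)
print(enhanced.shape)
```

In VSR 1.5's native-resolution mode, the analogous sketch would skip the interpolation step and apply only the learned correction pass.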

Baidu Launches ERNIE 4.0 Foundation Model, Leading a New Wave of AI-Native Applications

Baidu, Inc., a leading AI company with a strong Internet foundation, today hosted its annual flagship technology conference Baidu World 2023 in Beijing, marking the conference's return to an offline format after four years. With the theme "Prompt the World," this year's Baidu World conference saw Baidu launch ERNIE 4.0, Baidu's next-generation and most powerful foundation model offering drastically enhanced core AI capabilities. Baidu also showcased some of its most popular applications, solutions, and products re-built around the company's state-of-the-art generative AI.

"ERNIE 4.0 has achieved a full upgrade with drastically improved performance in understanding, generation, reasoning, and memory," Robin Li, Co-founder, Chairman and CEO of Baidu, said at the event. "These four core capabilities form the foundation of AI-native applications and have now unleashed unlimited opportunities for new innovations."

Phison Introduces New High-Speed Signal Conditioner IC Products, Expanding its PCIe 5.0 Ecosystem for AI-Era Data Centers

Phison Electronics, a global leader in NAND controllers and storage solutions, announced today that the company has expanded its portfolio of PCIe 5.0 high-speed transmission solutions with PCIe 5.0- and CXL 2.0-compatible redriver and retimer data signal conditioning IC products. Leveraging the company's deep expertise in PCIe engineering, Phison is the only signal conditioner provider to offer such a broad portfolio of multi-channel PCIe 5.0 redriver and retimer solutions alongside PCIe 5.0 storage solutions designed specifically to meet the data infrastructure demands of artificial intelligence and machine learning (AI+ML), edge computing, high-performance computing, and other data-intensive, next-generation applications. At the 2023 Open Compute Project Global Summit, the Phison team is showcasing its expansive PCIe 5.0 portfolio, demonstrating the redriver and retimer technologies alongside its enterprise NAND flash and illustrating a holistic vision for a PCIe 5.0 data ecosystem that addresses the most demanding applications of the AI-everywhere era.

"Phison has focused industry-leading R&D efforts on developing in-house, chip-to-chip communication technologies since the introduction of the PCIe 3.0 protocol, with PCIe 4.0 and PCIe 5.0 solutions now in mass production, and PCIe 6.0 solutions now in the design phase," said Michael Wu, President & General Manager, Phison US. "Phison's accumulated experience in high-speed signaling enables our team to deliver retimer and redriver design solutions that are optimized for top signal integration, low power usage, and high temperature endurance, to deliver interface speeds for the most challenging compute environments."

Advantech Announces New Edge Computing Solutions Powered by 13th Gen Intel Core Processors

Advantech has skillfully incorporated 13th Gen Intel Core processors across a wide array of products, setting new benchmarks for performance and capability. These enhanced solutions harness cutting-edge CPU architectures in their latest designs, delivering a noticeable performance boost over the previous processor generation. This upgrade significantly enhances application efficiency in smart factories, computer vision, intelligent retail, healthcare, and edge AI applications. At Advantech, we are unwavering in our commitment to providing long-term support, ensuring that customers can rely on our products for uninterrupted operation.

Advantech's new solutions offer Intel processors with up to 24 cores and 32 threads, up to 128 GB of DDR5 memory, and a PCIe 4.0/5.0 x16 slot for efficient multitasking. Experience optimized productivity with Advantech's Embedded Design-In Services and quality solutions featuring industrial-grade components. Advantech's embedded OS and value-added software options include Windows, Ubuntu, and Yocto BSP, along with the Edge AI Suite for AI performance evaluation and DeviceOn for device management.

Fujitsu Details Monaka: 150-core Armv9 CPU for AI and Data Center

Ever since the creation of A64FX for the Fugaku supercomputer, Fujitsu has been plotting the development of a next-generation CPU design for accelerating AI and general-purpose HPC workloads in the data center. Codenamed Monaka, the CPU will be built on TSMC's 2 nm semiconductor manufacturing node. Based on the Armv9-A ISA, the CPU will feature up to 150 cores with Scalable Vector Extensions 2 (SVE2), so it can process a wide variety of vector data sets in parallel. Using a 3D chiplet design, the 150 cores will be split across multiple dies and placed alongside SRAM and an I/O controller. The width of the SVE2 implementation is currently unknown.
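The practical appeal of SVE2 is that code is vector-length-agnostic: a predicate mask keeps the same loop correct whatever vector width the silicon implements, which is why the unknown width above matters less to software than it would for fixed-width SIMD. Here is a rough NumPy sketch of that predicated-loop pattern; the vl parameter is an arbitrary stand-in for the hardware vector length, and real SVE2 code would use predicate registers and intrinsics rather than index masks.

```python
import numpy as np

def saxpy_predicated(y, x, a, vl=16):
    """y += a*x in vl-wide chunks; the mask ('predicate') disables the
    out-of-range lanes in the final chunk, so the same loop is correct
    for any vector length, with no separate scalar tail loop."""
    n = len(y)
    for i in range(0, n, vl):
        lanes = np.arange(i, i + vl)
        active = lanes < n               # predicate: which lanes are in bounds
        idx = lanes[active]
        y[idx] += a * x[idx]
    return y

x = np.arange(100, dtype=np.float32)
y = np.ones(100, dtype=np.float32)
print(saxpy_predicated(y, x, 2.0, vl=16)[:5])   # [1. 3. 5. 7. 9.]
```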

The CPU is designed to support DDR5 memory and PCIe 6.0 connectivity for attaching storage and other accelerators. To bring cache coherency to application-specific accelerators, CXL 3.0 is present as well. Interestingly, Monaka is planned to arrive in FY2027, which on Fujitsu's fiscal calendar begins in 2026. The CPU will supposedly use air cooling, meaning the design aims for power efficiency. It is also essential to note that Monaka is not the processor that will power the post-Fugaku supercomputer; that system will use a post-Monaka design, likely iterating on Monaka's design principles and refining them ahead of the post-Fugaku machine's scheduled 2030 launch. Fujitsu's presentation slides, in Japanese, highlight the design goals of the CPU.

Acer's New SpatialLabs View Pro 27 Display Elevates Glasses-Free Stereoscopic 3D Experiences

Acer unveiled its largest and most advanced glasses-free stereoscopic 3D display to date, the Acer SpatialLabs View Pro 27. Crafted as a state-of-the-art 3D canvas for creators and developers, the display elevates the way ideas and audiovisual elements take shape without the need for specialized glasses or accessories. The device is powered by SpatialLabs' proven stereoscopic 3D solution and is complemented by the new Acer Immerse Audio system, along with a suite of advanced developer tools to bring out creations in their truest 3D forms. Users can also fully maximize its vast 27-inch 4K panel for magnified, lifelike visuals, while its ergonomic design and detachable hood provide comfortable viewing even under extremely low-light conditions.

Expanded Design for Mesmerizing 3D Illustrations
The Acer SpatialLabs View Pro 27 harmoniously combines cutting-edge 3D technology and stereo real-time rendering capabilities in an expanded landscape to support creators in bringing 3D experiences to life. The optimized 3D display uses an eye-tracking module to follow the position and movement of users in real time, even in dim environments. Crystal-clear details and image depth are projected as envisioned thanks to the 27-inch 4K UHD panel with switchable 2D and stereoscopic 3D modes, a 160 Hz refresh rate, 400 nits of brightness, and Delta E < 2 color accuracy. A detachable hood on the monitor enhances perceived color accuracy and lessens distractions, helping users stay focused and maintain image quality when viewing their designs on screen.

AMD to Acquire Open-Source AI Software Expert Nod.ai

AMD today announced the signing of a definitive agreement to acquire Nod.ai to expand the company's open AI software capabilities. The addition of Nod.ai will bring AMD an experienced team that has developed industry-leading software technology for accelerating the deployment of AI solutions optimized for AMD Instinct data center accelerators, Ryzen AI processors, EPYC processors, Versal SoCs and Radeon GPUs. The agreement strongly aligns with the AMD AI growth strategy, centered on an open software ecosystem that lowers the barrier to entry for customers through developer tools, libraries and models.

"The acquisition of Nod.ai is expected to significantly enhance our ability to provide AI customers with open software that allows them to easily deploy highly performant AI models tuned for AMD hardware," said Vamsi Boppana, senior vice president, Artificial Intelligence Group at AMD. "The addition of the talented Nod.ai team accelerates our ability to advance open-source compiler technology and enable portable, high-performance AI solutions across the AMD product portfolio. Nod.ai's technologies are already widely deployed in the cloud, at the edge and across a broad range of end point devices today."

Microsoft to Unveil Custom AI Chips to Fight NVIDIA's Monopoly

According to sources close to The Information, Microsoft is expected to unveil details about its upcoming custom silicon design for accelerating AI workloads. Allegedly, the chip announcement is scheduled for November, during Microsoft's annual Ignite conference. Held in Seattle from November 14 to 17, the conference is expected to showcase all of the work the company has been doing in the field of AI. The alleged launch of an AI chip will undoubtedly take center stage, as demand for AI accelerators has been so great that companies can't get their hands on GPUs. The sector is dominated mainly by NVIDIA, with its H100 and A100 GPUs powering most of the AI infrastructure worldwide.

With the launch of a custom AI chip codenamed Athena, Microsoft hopes to match or beat the performance of NVIDIA's offerings and reduce the cost of AI infrastructure. As the price of a single H100 GPU can reach $30,000, building a data center filled with H100s can cost hundreds of millions of dollars. That cost could be whittled down using in-house chips, and Microsoft could become less dependent on NVIDIA to provide the backbone of the AI servers needed in the coming years. Nevertheless, we are excited to see what the company has prepared, and we will report on the Microsoft Ignite announcement in November.
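As a back-of-the-envelope check on those figures: at roughly $30,000 per H100, a cluster on the scale of the recently reported ~10,000-GPU systems lands in the hundreds of millions of dollars for the GPUs alone. The counts below are illustrative round numbers, not Microsoft's actual procurement.

```python
gpu_price = 30_000    # USD, the upper-end H100 price cited above
gpu_count = 10_000    # round number, roughly the scale of reported clusters
total = gpu_price * gpu_count
print(f"GPU cost alone: ${total / 1e6:,.0f} million")   # -> $300 million
```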

NVIDIA Reportedly in Talks to Lease Data Center Space for its own Cloud Service

The recent development of AI models that are more capable than ever has led to massive demand for the hardware infrastructure that powers them. As the dominant player in the industry with its GPU and CPU-GPU solutions, NVIDIA has reportedly discussed leasing data center space to power its own cloud service for these AI applications. Called NVIDIA DGX Cloud, the service would reportedly put the company in direct competition with its own clients, the cloud service providers (CSPs). Companies like Microsoft Azure, Amazon AWS, Google Cloud, and Oracle actively acquire NVIDIA GPUs to power their GPU-accelerated cloud instances. According to the report, this plan has been developing for a few years.

Additionally, it is worth noting that NVIDIA already owns the building blocks for its potential data center infrastructure, including NVIDIA DGX and HGX units, which can simply be interconnected in a data center and provisioned for the cloud so developers can access NVIDIA's instances. A significant benefit for end users is that NVIDIA could potentially undercut its competitors on price, since it acquires GPUs at cost while CSPs pay the margin NVIDIA imposes. This could attract potential customers and leave hyperscalers like Amazon, Microsoft, and Google without a moat in the cloud game. Of course, until the project is official, we should take this information with a grain of salt.

Intel Announces Intent to Operate Programmable Solutions Group as Standalone Business Under Leadership of Sandra Rivera

Intel Corporation today announced its intent to separate its Programmable Solutions Group (PSG) operations into a standalone business. This will give PSG the autonomy and flexibility it needs to fully accelerate its growth and more effectively compete in the FPGA industry, which serves a broad array of markets, including the data center, communications, industrial, automotive, aerospace and defense sectors. Intel also announced that Sandra Rivera, executive vice president at Intel, will assume leadership of PSG as chief executive officer; Shannon Poulin has been named chief operating officer.

Standalone operations for PSG are expected to begin Jan. 1, 2024, with ongoing support from Intel. Intel expects to report PSG as a separate business unit when it releases first-quarter 2024 financials. Over the next two to three years, Intel intends to conduct an IPO for PSG and may explore opportunities with private investors to accelerate the business's growth, with Intel retaining a majority stake.

Tenstorrent Selects Samsung Foundry to Manufacture Next-Generation AI Chiplet

Tenstorrent, a company that sells AI processors and licenses AI and RISC-V IP, announced today that it selected Samsung Foundry to bring Tenstorrent's next generation of AI chiplets to market. Tenstorrent builds powerful RISC-V CPU and AI acceleration chiplets, aiming to push the boundaries of compute in multiple industries such as data center, automotive and robotics. These chiplets are designed to deliver scalable power from milliwatts to megawatts, catering to a wide range of applications from edge devices to data centers.

To ensure the highest quality and cutting-edge manufacturing capabilities for its chiplets, Tenstorrent has selected Samsung's Foundry Design Service team, known for its expertise in silicon manufacturing. The chiplets will be manufactured using Samsung's state-of-the-art SF4X process, a 4 nm-class node.

Google Introduces Chromebook Plus Lineup: Better Performance and AI Capabilities

Today, Google announced its next generation of Chromebook devices, called Chromebook Plus, said to improve upon the legacy set by Chromebooks over a decade ago. Starting at an enticing price point of $399, this new breed of Chromebooks integrates powerful AI capabilities and a range of built-in Google apps. Notably, it features tools like the Google Photos Magic Eraser and web-based Adobe Photoshop, positioning itself as a dynamic tool for productivity and creative exploration. In collaboration with hardware manufacturers such as Acer, ASUS, HP, and Lenovo, Google is debuting a lineup of eight Chromebook Plus devices, with more possibly coming in the future.

Each model boasts improved hardware over the regular Chromebook, including processors like a 12th Gen Intel Core i3 or an AMD Ryzen 3 7000-series chip, a minimum of 8 GB of RAM, and 128 GB of storage. Users are also in for a visual treat with a 1080p IPS display, ensuring crisp visuals for entertainment and work. And for the modern remote workforce, video conferencing gets a substantial upgrade: every Chromebook Plus comes equipped with a 1080p camera and uses AI enhancements to improve video call clarity, with compatibility spanning various platforms, including Google Meet, Zoom, and Microsoft Teams. Set to be available from October 8, 2023, in the US and October 9 in Canada and Europe, the Chromebook Plus is positioning itself as a go-to device for many users. The AI features, however, are slated to arrive in 2024, once companies have ensured their software is compatible.

Microsoft Tech Chief Prefers Using NVIDIA AI GPUs, Keeping Tabs on AMD Alternatives

Kevin Scott, Microsoft's chief technology officer, was interviewed at last week's Code Conference (organized by Vox Media), where he was happy to reveal that his company is having an easier time acquiring Team Green's popular HPC GPU hardware: "Demand was far exceeding the supply of GPU capacity that the whole ecosystem could produce...That is resolving. It's still tight, but it's getting better every week, and we've got more good news ahead of us than bad on that front, which is great." Microsoft is investing heavily in its internal artificial intelligence endeavors and external interests alike (it is a main backer of OpenAI's ChatGPT system). Having a healthy budget certainly helps, but Scott has previously described his experience in this field as "a terrible job" spanning five years of misery (as of May 2023).

Last week's follow-up conversation on stage in Dana Point, California revealed that conditions have improved since springtime: "It's easier now than when we talked last time." The improved supply circumstances have made his "job of adjudicating these very gnarly conflicts less terrible." Industry reports have Microsoft secretly working on proprietary AI chips with an unnamed partner, and CNBC has pinpointed Arm as a likely candidate. Scott acknowledged that something is happening behind the scenes, but said it will not be ready imminently: "I'm not confirming anything, but I will say that we've got a pretty substantial silicon investment that we've had for years...And the thing that we will do is we'll make sure that we're making the best choices for how we build these systems, using whatever options we have available. And the best option that's been available during the last handful of years has been NVIDIA."

Quantinuum's H1 Quantum Computer Successfully Executes a Fully Fault-tolerant Algorithm

Fault-tolerant quantum computers, which promise radical new solutions to some of the world's most pressing problems in medicine, finance and the environment, as well as facilitating truly widespread use of AI, are driving global interest in quantum technologies. Yet the various timetables established for achieving this paradigm depend on major breakthroughs and innovations, and none is more pressing than the move from merely physical qubits to fault-tolerant ones.

In one of the first meaningful steps along this path, scientists from Quantinuum, the world's largest integrated quantum computing company, along with collaborators, have demonstrated the first fault-tolerant method using three logically-encoded qubits on the Quantinuum H1 quantum computer, Powered by Honeywell, to perform a mathematical procedure.
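To give a flavor of what "logically encoded" means, here is a toy classical analogy: a three-way repetition code with majority-vote decoding, which suppresses the error rate of a single noisy bit. Real fault-tolerant quantum codes, such as the one demonstrated on the H1, must additionally correct phase errors and extract error syndromes without disturbing the encoded state, none of which this sketch captures.

```python
import random

def encode(bit):
    return [bit, bit, bit]                    # 1 logical bit -> 3 physical bits

def noisy(bits, p=0.1):
    return [b ^ (random.random() < p) for b in bits]   # independent bit flips

def decode(bits):
    return int(sum(bits) >= 2)                # majority vote corrects one flip

trials = 100_000
fails = sum(decode(noisy(encode(1))) != 1 for _ in range(trials))
print(f"logical error rate ~{fails / trials:.3f} vs physical rate 0.100")
# Expected logical rate: 3*p^2*(1-p) + p^3 = 0.028 for p = 0.1
```

The point of the exercise: encoding only pays off when the physical error rate is low enough, which is exactly why the transition from physical to fault-tolerant qubits is treated as a threshold to cross.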

Qualcomm Launches Its Next Generation XR and AR Platforms, Enabling Immersive Experiences and Slimmer Devices

Qualcomm Technologies, Inc. today announced two new spatial computing platforms - Snapdragon XR2 Gen 2 and Snapdragon AR1 Gen 1 - that will enable the next generation of mixed reality (MR), virtual reality (VR) devices and smart glasses. Snapdragon XR2 Gen 2 Platform: The platform brings premium MR and VR technology into a single chip architecture to unlock next level immersive experiences in thinner and more comfortable headsets, that don't require an external battery pack.

Engineered to deliver a lag free experience with breathtaking visuals and fully immersive sound, the platform allows users to blend virtual content with their physical surroundings and transition seamlessly between MR and VR experiences.

TSMC Announces Breakthrough Set to Redefine the Future of 3D IC

TSMC today announced the new 3Dblox 2.0 open standard and major achievements of its Open Innovation Platform (OIP) 3DFabric Alliance at the TSMC 2023 OIP Ecosystem Forum. The 3Dblox 2.0 features early 3D IC design capability that aims to significantly boost design efficiency, while the 3DFabric Alliance continues to drive memory, substrate, testing, manufacturing, and packaging integration. TSMC continues to push the envelope of 3D IC innovation, making its comprehensive 3D silicon stacking and advanced packaging technologies more accessible to every customer.

"As the industry shifted toward embracing 3D IC and system-level innovation, the need for industry-wide collaboration has become even more essential than it was when we launched OIP 15 years ago," said Dr. L.C. Lu, TSMC fellow and vice president of Design and Technology Platform. "As our sustained collaboration with OIP ecosystem partners continues to flourish, we're enabling customers to harness TSMC's leading process and 3DFabric technologies to reach an entirely new level of performance and power efficiency for the next-generation artificial intelligence (AI), high-performance computing (HPC), and mobile applications."

Winbond Introduces Innovative CUBE Architecture for Powerful Edge AI Devices

Winbond Electronics Corporation, a leading global supplier of semiconductor memory solutions, has unveiled a powerful enabling technology for affordable edge AI computing in mainstream use cases. The company's new customized ultra-bandwidth elements (CUBE) allow memory technology to be optimized for seamless performance when running generative AI in hybrid edge/cloud applications.

CUBE enhances the performance of front-end 3D structures such as chip on wafer (CoW) and wafer on wafer (WoW), as well as back-end 2.5D/3D chip on Si-interposer on substrate and fan-out solutions. Designed to meet the growing demands of edge AI computing devices, it is compatible with memory density from 256 Mb to 8 Gb with a single die, and it can also be 3D stacked to enhance bandwidth while reducing data transfer power consumption.
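For scale, the quoted per-die densities convert as follows. This is a rough sketch: the four-die stack is an illustrative configuration, not an announced Winbond product, and the bandwidth gains from stacking come from wide die-to-die interfaces that this capacity arithmetic does not model.

```python
def die_bytes(megabits):
    """Convert a quoted density in megabits to bytes."""
    return megabits * 1024 * 1024 // 8

for mb in (256, 8 * 1024):                    # 256 Mb and 8 Gb dies
    print(f"{mb:>5} Mb die = {die_bytes(mb) // 2**20:>4} MiB")

# Hypothetical 4-high stack of 8 Gb dies:
print(f"4-high stack = {4 * die_bytes(8 * 1024) // 2**30} GiB")
```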

Broadcom Partners with Google Cloud to Strengthen Gen AI-Powered Cybersecurity

Symantec, a division of Broadcom Inc., is partnering with Google Cloud to embed generative AI (gen AI) into the Symantec Security platform in a phased rollout that will give customers a significant technical edge for detecting, understanding, and remediating sophisticated cyber attacks.

Symantec is leveraging the Google Cloud Security AI Workbench and its security-specific large language model (LLM), Sec-PaLM 2, across its portfolio to enable natural language interfaces and generate more comprehensive and easy-to-understand threat analyses. With Security AI Workbench-powered summarization of complex incidents and alignment to MITRE ATT&CK context, security operations center (SOC) analysts of all levels can better understand threats and respond faster. That, in turn, translates into greater security and higher SOC productivity.
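The integration pattern described above looks roughly like the sketch below: incident telemetry is serialized into a prompt alongside ATT&CK context and sent to the model for a plain-language summary. Everything here is hypothetical: call_security_llm is a stand-in placeholder, the telemetry is invented, and the real Sec-PaLM 2 interface (accessed through Security AI Workbench) is not reproduced.

```python
import json

incident = {                                  # hypothetical telemetry
    "host": "FIN-WS-042",
    "alerts": ["powershell.exe spawned by winword.exe",
               "outbound beacon to 203.0.113.7"],
    "mitre_attack": ["T1566.001 Spearphishing Attachment",
                     "T1059.001 PowerShell"],
}

prompt = ("Summarize this incident for a junior SOC analyst, citing the "
          "MITRE ATT&CK techniques involved:\n" + json.dumps(incident, indent=2))

def call_security_llm(prompt: str) -> str:
    """Hypothetical stand-in for a Security AI Workbench / Sec-PaLM 2 call."""
    return ("Likely phishing-delivered PowerShell payload (T1566.001 -> "
            "T1059.001) beaconing out; isolate FIN-WS-042 and block the IP.")

print(call_security_llm(prompt))
```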

Useful Sensors Launches AI-In-A-Box Module, a Low Cost Offline Solution

Useful Sensors, an AI-focused start-up, today launched the world's first low-cost, off-the-shelf AI module to enable intuitive, natural language interaction with electronic devices, locally and privately, with no need for an account or internet connection. The new AI-In-A-Box module can answer queries and solve problems in a way similar to well-known AI tools based on a large language model (LLM). But thanks to compression and acceleration technologies developed by Useful Sensors, the module hosts its LLM file locally, enabling its low-cost microprocessor to understand and respond instantly to spoken natural language queries or commands without reference to a data center.
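For a sense of what account-free, fully offline LLM inference looks like in code, here is a minimal sketch using the open-source llama-cpp-python bindings against a locally stored quantized model. The model path is a placeholder, and Useful Sensors' proprietary compression and acceleration stack is not what is shown here; this is only the general pattern of on-device inference.

```python
from llama_cpp import Llama   # pip install llama-cpp-python

# Load a quantized model from local storage: no account, no network
# connection, everything runs on the device itself.
llm = Llama(model_path="./models/small-chat.gguf", n_ctx=2048)  # placeholder path

out = llm("Q: Why is the sky blue?\nA:", max_tokens=48, stop=["Q:"])
print(out["choices"][0]["text"].strip())      # answered without a cloud round-trip
```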

Disconnected from the internet, the AI-In-A-Box module definitively eliminates user concerns about privacy, snooping, or dependence on third-party cloud services that are prevalent with conventional LLM-based AI products and services marketed by large technology companies. The AI-In-A-Box module is available to buy now at CrowdSupply, priced at $299.

Tesla Reportedly Doubling Dojo D1 Supercomputer Chip Orders

Tesla first revealed plans for its Dojo D1 training chip back in 2021, with hopes of it powering self-driving technology in the near future. The automaker has relied mostly on NVIDIA over the ensuing years, but is seemingly keen to move on to proprietary solutions. Media reports from two years ago suggest that 5,760 NVIDIA A100 GPUs were in play to develop Tesla's advanced driver-assistance system (Autopilot ADAS). Tom's Hardware believes that a $300 million AI supercomputer cluster—comprising roughly 10,000 NVIDIA H100 GPUs—was powered on last month. Recent reports emerging from Taiwan suggest that Tesla is doubling its Dojo D1 supercomputer chip orders with TSMC.

An Economic Daily report posits that 10,000 Dojo D1 chips are in the production queue for next year, with insiders believing that Tesla is quietly expressing confidence in its custom application-specific integrated circuit (ASIC). The order count could increase for the next batch (in 2025). The article hints that TSMC's "HPC-related order momentum has increased thanks to Tesla." Neither organization has publicly commented on these developments, but insider sources have disclosed some technical details—most notably that the finalized Dojo design "mainly uses TSMC's 7 nm family process and combines it with InFO-level system-on-wafer (SoW) advanced packaging."

Samsung and AMD Collaborate To Advance Network Transformation With vRAN

Samsung Electronics today announced a new collaboration with AMD to advance 5G virtualized RAN (vRAN) for network transformation. This collaboration represents Samsung's ongoing commitment to enriching vRAN and Open RAN ecosystems to help operators build and modernize mobile networks with unmatched flexibility and optimized performance. The two companies have completed several rounds of tests at Samsung's lab to verify high-capacity and telco-grade performance using FDD bands and TDD Massive MIMO wide-bands, while significantly reducing power consumption. In this joint collaboration, Samsung used its versatile vRAN software integrated with the new AMD EPYC 8004 processors, focused on telco and intelligent edge. During technical verification, the EPYC 8004 processors combined with Samsung's vRAN solutions delivered optimized cell capacity per server as well as high power efficiency.

"This technical collaboration demonstrates Samsung's commitment to delivering network flexibility and high performance for service providers by building a larger vRAN and Open RAN ecosystem," said Henrik Jansson, Vice President and Head of SI Business Group, Networks Business at Samsung Electronics. "Samsung has been at the forefront of unleashing the full potential of 5G vRAN technology to meet rising demands, and we look forward to collaborating with industry leaders like AMD to provide operators the capabilities to transform their networks."

Amazon to Invest $4 Billion into Anthropic AI

Today, we're announcing that Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop the most reliable and high-performing foundation models in the industry. Our frontier safety research and products, together with Amazon Web Services' (AWS) expertise in running secure, reliable infrastructure, will make Anthropic's safe and steerable AI widely accessible to AWS customers.

AWS will become Anthropic's primary cloud provider for mission critical workloads, providing our team with access to leading compute infrastructure in the form of AWS Trainium and Inferentia chips, which will be used in addition to existing solutions for model training and deployment. Together, we'll combine our respective expertise to collaborate on the development of future Trainium and Inferentia technology.

AAEON Unveils BOXER-8651AI Mini PC Powered by NVIDIA Jetson Orin NX

Industry-leading designer and manufacturer of edge AI solutions AAEON has released the BOXER-8651AI, a compact fanless embedded AI system powered by the NVIDIA Jetson Orin NX module. The BOXER-8651AI takes advantage of the module's NVIDIA Ampere architecture GPU, featuring 1,024 CUDA cores and 32 Tensor Cores, along with support for NVIDIA JetPack 5.0 and above, to provide users with accelerated graphics, data processing, and image classification.

With a fanless chassis measuring just 105 mm x 90 mm x 52 mm, the BOXER-8651AI is an extremely small solution that houses a dense range of interfaces, including DB-9 and DB-15 ports for RS-232 (Rx/Tx/CTS/RTS)/RS-485, CANBus, and DIO functions. Additionally, the device provides HDMI 2.1 display output, GbE LAN, and a variety of USB Type-A ports, supporting both USB 3.2 Gen 2 and USB 2.0 functionality.

Despite packing such AI performance into a small footprint, the BOXER-8651AI is built to operate in rugged conditions, boasting a 5°F to 131°F (-15°C to 55°C) operating temperature range alongside anti-shock and vibration resistance features. Consequently, the PC is ideally suited for wall-mounted deployment across a range of environments.
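A quick arithmetic check of that operating range, converting the Celsius limits the spec sheet leads with into Fahrenheit:

```python
def c_to_f(celsius):
    """Standard Celsius-to-Fahrenheit conversion: F = C * 9/5 + 32."""
    return celsius * 9 / 5 + 32

print(c_to_f(-15), c_to_f(55))   # -> 5.0 131.0, i.e. 5°F to 131°F
```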