News Posts matching #TensorFlow

BrainChip Introduces Lowest-Power AI Acceleration Co-Processor, the Akida Pico

BrainChip Holdings Ltd, the world's first commercial producer of ultra-low power, fully digital, event-based, brain-inspired AI, today introduced the Akida Pico, the lowest-power acceleration co-processor, which enables the creation of very compact, ultra-low-power, portable and intelligent devices that bring wearable and sensor-integrated AI to consumer, healthcare, IoT, defense, and wake-up applications.

The Akida Pico accelerates compact, use-case-specific neural network models within an ultra-energy-efficient, purely digital architecture. It enables secure personalization for applications including voice wake detection, keyword spotting, speech noise reduction, audio enhancement, presence detection, personal voice assistants, automatic doorbells, wearable AI, appliance voice interfaces, and more.
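
Keyword spotting is a representative example of the small, use-case-specific models that such a co-processor targets. The sketch below defines a tiny convolutional keyword classifier over audio features in TensorFlow/Keras; the input shape, layer sizes and keyword count are illustrative assumptions, and this is generic framework code rather than BrainChip's Akida development flow.

```python
import tensorflow as tf

# Illustrative keyword-spotting model: a tiny CNN over MFCC-style audio
# features. All shapes and sizes are assumptions, not Akida Pico specifications.
NUM_KEYWORDS = 8             # hypothetical vocabulary size ("yes", "no", "stop", ...)
FEATURE_SHAPE = (49, 10, 1)  # ~1 s of audio as 49 frames x 10 MFCC coefficients

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=FEATURE_SHAPE),
    tf.keras.layers.Conv2D(8, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_KEYWORDS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # only a few thousand parameters, the scale endpoint NPUs target
```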

Altera Announces Agilex 3 Series FPGAs and Agilex 5 Development Kits

Altera, an Intel Company, today unveiled an array of FPGA hardware, software and development tools that make its programmable solutions more accessible across a broader range of use cases and markets. At its annual developer conference, Altera revealed new details on its next-generation, power- and cost-optimized Agilex 3 FPGAs and announced new development kits and software support for its Agilex 5 FPGAs.

"Working closely with our ecosystem and distribution partners, Altera remains committed to delivering FPGA-based solutions that empower innovators with leading-edge programmable technologies that are easy to design and deploy. With these key announcements, we continue to execute on our vision of shaping the future by using programmable logic to help customers unlock greater value across a broad range of use cases within the data center, aerospace and defense sectors, communications infrastructure, automotive, industrial, test, medical and embedded markets," said Sandra Rivera, CEO of Altera.

Nuvoton Unveils New Production-Ready Endpoint AI Platform for Machine Learning

Nuvoton is pleased to announce its new Endpoint AI Platform to accelerate the development of fully-featured microcontroller (MCU) AI products. These solutions are enabled by Nuvoton's powerful new MCU and MPU silicon, including the NuMicro M55M1 equipped with Ethos U55 NPU, NuMicro MA35D1, and NuMicro M467 series. These MCUs are a valuable addition to the modern AI-centric computing toolkit and demonstrate how Nuvoton continues to work closely with Arm and other companies to develop a user-friendly and complete Endpoint AI Ecosystem.

Development on these platforms is made easy by Nuvoton's NuEdgeWise: a well-rounded, simple-to-adopt tool for machine learning (ML) development that is nonetheless suitable for cutting-edge tasks. This powerful core hardware, combined with uniquely rich development tools, cements Nuvoton's reputation as a leading microcontroller platform provider. These new single-chip platforms are ideal for applications including smart home appliances and security, smart city services, industry, agriculture, entertainment, environmental protection, education, highly accurate voice-control tasks, and sports, health, and fitness.

Intel Accelerates AI Everywhere with Launch of Powerful Next-Gen Products

At its "AI Everywhere" launch in New York City today, Intel introduced an unmatched portfolio of AI products to enable customers' AI solutions everywhere—across the data center, cloud, network, edge and PC. "AI innovation is poised to raise the digital economy's impact up to as much as one-third of global gross domestic product," Gelsinger said. "Intel is developing the technologies and solutions that empower customers to seamlessly integrate and effectively run AI in all their applications—in the cloud and, increasingly, locally at the PC and edge, where data is generated and used."

Gelsinger showcased Intel's expansive AI footprint, spanning cloud and enterprise servers to networks, volume clients and ubiquitous edge environments. He also reinforced that Intel is on track to deliver five new process technology nodes in four years. "Intel is on a mission to bring AI everywhere through exceptionally engineered platforms, secure solutions and support for open ecosystems. Our AI portfolio gets even stronger with today's launch of Intel Core Ultra ushering in the age of the AI PC and AI-accelerated 5th Gen Xeon for the enterprise," Gelsinger said.

Google Introduces Cloud TPU v5e and Announces A3 Instance Availability

We're at a once-in-a-generation inflection point in computing. The traditional ways of designing and building computing infrastructure are no longer adequate for the exponentially growing demands of workloads like generative AI and LLMs. In fact, the number of parameters in LLMs has increased by 10x per year over the past five years. As a result, customers need AI-optimized infrastructure that is both cost effective and scalable.

For two decades, Google has built some of the industry's leading AI capabilities: from the creation of Google's Transformer architecture that makes gen AI possible, to our AI-optimized infrastructure, which is built to deliver the global scale and performance required by Google products that serve billions of users like YouTube, Gmail, Google Maps, Google Play, and Android. We are excited to bring decades of innovation and research to Google Cloud customers as they pursue transformative opportunities in AI. We offer a complete solution for AI, from computing infrastructure optimized for AI to the end-to-end software and services that support the full lifecycle of model training, tuning, and serving at global scale.

Google Merges its AI Subsidiaries into Google DeepMind

Google has announced that it is officially merging its subsidiaries focused on artificial intelligence to form a single group. More specifically, the Google Brain team and DeepMind are joining forces to become a single unit called Google DeepMind. As Google CEO Sundar Pichai notes: "This group, called Google DeepMind, will bring together two leading research groups in the AI field: the Brain team from Google Research, and DeepMind. Their collective accomplishments in AI over the last decade span AlphaGo, Transformers, word2vec, WaveNet, AlphaFold, sequence to sequence models, distillation, deep reinforcement learning, and distributed systems and software frameworks like TensorFlow and JAX for expressing, training and deploying large scale ML models."

As CEO of this group, Demis Hassabis, previously CEO of DeepMind, will work together with Jeff Dean, who has been promoted to Google's Chief Scientist and will report to Sundar Pichai. In his new role, Jeff Dean will serve as Chief Scientist for both Google Research and Google DeepMind, setting the direction of AI research across both units. This corporate restructuring should help the two previously separate teams work to a single plan and advance AI capabilities faster. We are eager to see what these teams accomplish next.

NVIDIA Announces Microsoft, Tencent, Baidu Adopting CV-CUDA for Computer Vision AI

Microsoft, Tencent and Baidu are adopting NVIDIA CV-CUDA for computer vision AI. NVIDIA CEO Jensen Huang highlighted work in content understanding, visual search and deep learning Tuesday as he announced the beta release for NVIDIA's CV-CUDA—an open-source, GPU-accelerated library for computer vision at cloud scale. "Eighty percent of internet traffic is video; user-generated video content is driving significant growth and consuming massive amounts of power," said Huang in his keynote at NVIDIA's GTC technology conference. "We should accelerate all video processing and reclaim the power."

CV-CUDA promises to help companies across the world build and scale end-to-end, AI-based computer vision and image processing pipelines on GPUs. The majority of internet traffic is video and image data, driving incredible scale in applications such as content creation, visual search and recommendation, and mapping. These applications use a specialized, recurring set of computer vision and image-processing algorithms to process image and video data before and after they're processed by neural networks.
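
The recurring pre- and post-processing steps described above (decode, resize, normalize on the way in; score formatting on the way out) can be sketched as follows. This example uses standard TensorFlow image ops purely to illustrate the pipeline pattern that CV-CUDA accelerates; it does not call the CV-CUDA API, and the model and file path are placeholders.

```python
import tensorflow as tf

def preprocess(jpeg_bytes, size=(224, 224)):
    # Typical pre-processing stage: decode, resize, normalize to [0, 1].
    img = tf.io.decode_jpeg(jpeg_bytes, channels=3)
    img = tf.image.resize(img, size)
    return tf.cast(img, tf.float32) / 255.0

def postprocess(logits, top_k=5):
    # Typical post-processing stage: turn raw network output into top-k results.
    probs = tf.nn.softmax(logits, axis=-1)
    return tf.math.top_k(probs, k=top_k)

# Placeholder network and input; in production these steps run for every frame,
# which is why moving them onto the GPU (as CV-CUDA does) matters at scale.
model = tf.keras.applications.MobileNetV2(weights=None, classifier_activation=None)
jpeg = tf.io.read_file("frame.jpg")  # hypothetical input file
batch = tf.expand_dims(preprocess(jpeg), 0)
scores, classes = postprocess(model(batch))
print(classes.numpy(), scores.numpy())
```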

Intel Delivers Leading AI Performance Results on MLPerf v2.1 Industry Benchmark for DL Training

Today, MLCommons published results of its MLPerf Training v2.1 industry AI performance benchmark, in which both the 4th Generation Intel Xeon Scalable processor (code-named Sapphire Rapids) and the Habana Gaudi 2 dedicated deep learning accelerator logged impressive training results.


"I'm proud of our team's continued progress since we last submitted leadership results on MLPerf in June. Intel's 4th gen Xeon Scalable processor and Gaudi 2 AI accelerator support a wide array of AI functions and deliver leadership performance for customers who require deep learning training and large-scale workloads." Sandra Rivera, Intel executive vice president and general manager of the Datacenter and AI Group

Intel Accelerates Developer Innovation with Open, Software-First Approach

On Day 2 of Intel Innovation, Intel illustrated how its efforts and investments to foster an open ecosystem catalyze community innovation, from silicon to systems to apps and across all levels of the software stack. Through an expanding array of platforms, tools and solutions, Intel is focused on helping developers become more productive and more capable of realizing their potential for positive social good. The company introduced new tools to support developers in artificial intelligence, security and quantum computing, and announced the first customers of its new Project Amber attestation service.

"We are making good on our software-first strategy by empowering an open ecosystem that will enable us to collectively and continuously innovate," said Intel Chief Technology Officer Greg Lavender. "We are committed members of the developer community and our breadth and depth of hardware and software assets facilitate the scaling of opportunities for all through co-innovation and collaboration."

Arm Announces Next-Generation Neoverse Cores for High Performance Computing

The demand for data is insatiable, from 5G to the cloud to smart cities. As a society we want more autonomy, information to fuel our decisions and habits, and connection - to people, stories, and experiences.

To address these demands, the cloud infrastructure of tomorrow will need to handle the coming data explosion and the effective processing of ever more complex workloads … all while increasing power efficiency and minimizing carbon footprint. It's why the industry is increasingly looking to the performance, power efficiency, specialized processing and workload acceleration enabled by Arm Neoverse to redefine and transform the world's computing infrastructure.

Habana Labs Launches Second-generation AI Deep Learning Processors

Today at the Intel Vision conference, Habana Labs, an Intel company, announced its second-generation deep learning processors, the Habana Gaudi 2 Training and Habana Greco Inference processors. The processors are purpose-built for AI deep learning applications, are implemented in 7 nm technology, and build upon Habana's high-efficiency architecture to provide customers with higher-performance model training and inferencing for computer vision and natural language applications in the data center. At Intel Vision, Habana Labs revealed that Gaudi2 delivers twice the training throughput of the Nvidia A100-80GB GPU on the ResNet-50 computer vision model and the BERT natural language processing model.

"The launch of Habana's new deep learning processors is a prime example of Intel executing on its AI strategy to give customers a wide array of solution choices - from cloud to edge - addressing the growing number and complex nature of AI workloads. Gaudi2 can help Intel customers train increasingly large and complex deep learning workloads with speed and efficiency, and we're anticipating the inference efficiencies that Greco will bring."—Sandra Rivera, Intel executive vice president and general manager of the Datacenter and AI Group

Lambda Teams Up With Razer to Launch the World's Most Powerful Laptop for Deep Learning

Lambda, the Deep Learning Company, today, in collaboration with Razer, released the new Lambda Tensorbook, the world's most powerful laptop designed for deep learning, available with Linux and Lambda's deep learning software. The sleek laptop, coupled with the Lambda GPU Cloud, gives engineers all the software tools and compute performance they need to create, train, and test deep learning models locally. Since its launch in 2012, Lambda has quickly become the de facto deep learning infrastructure provider for the world's leading research and engineering teams. Thousands of businesses and organizations use Lambda, including all of the top five tech companies, 97 percent of the top research universities in the U.S. (including MIT and Caltech), and the Department of Defense. These teams use Lambda's GPU clusters, servers, workstations, and cloud instances to train neural networks for cancer detection, autonomous aircraft, drug discovery, self-driving cars, and much more.

"Most ML engineers don't have a dedicated GPU laptop, which forces them to use shared resources on a remote machine, slowing down their development cycle." said Stephen Balaban, co-founder and CEO of Lambda. "When you're stuck SSHing into a remote server, you don't have any of your local data or code and even have a hard time demoing your model to colleagues. The Razer x Lambda Tensorbook solves this. It's pre-installed with PyTorch and TensorFlow and lets you quickly train and demo your models: all from a local GUI interface. No more SSH!"

Intel Releases oneAPI 2022 Toolkits to Developers

Intel today released oneAPI 2022 toolkits. Newly enhanced toolkits expand cross-architecture features to provide developers greater utility and architectural choice to accelerate computing. "I am impressed by the breadth of more than 900 technical improvements that the oneAPI software engineering team has done to accelerate development time and performance for critical application workloads across Intel's client and server CPUs and GPUs. The rich set of oneAPI technologies conforms to key industry standards, with deep technical innovations that enable applications developers to obtain the best possible run-time performance from the cloud to the edge. Multi-language support and cross-architecture performance acceleration are ready today in our oneAPI 2022 release to further enable programmer productivity on Intel platforms," said Greg Lavender, Intel chief technology officer, senior vice president and general manager of the Software and Advanced Technology Group.

New capabilities include the world's first unified compiler implementing C++, SYCL and Fortran, data parallel Python for CPUs and GPUs, advanced accelerator performance modeling and tuning, and performance acceleration for AI and ray tracing visualization workloads. The oneAPI cross-architecture programming model provides developers with tools that aim to improve the productivity and velocity of code development when building cross-architecture applications.
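
As an illustration of the "data parallel Python" component, the sketch below assumes Intel's dpnp package (Data Parallel Extensions for Python), which mirrors the NumPy API while executing array operations on SYCL devices such as CPUs and GPUs. Treat it as a rough sketch under that assumption, not a definitive oneAPI recipe.

```python
# Sketch only: assumes the dpnp package from the oneAPI AI tools is installed.
# dpnp mirrors NumPy's API but dispatches array operations to SYCL devices.
import dpnp as np

a = np.ones((1024, 1024), dtype=np.float32)
b = np.ones((1024, 1024), dtype=np.float32)

c = np.matmul(a, b)   # runs on the default SYCL device (CPU or GPU)
print(np.sum(c))      # expected: 1024**3 = 1073741824.0
```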

AMD Details Instinct MI200 Series Compute Accelerator Lineup

AMD today announced the new AMD Instinct MI200 series accelerators, the first exascale-class GPU accelerators. The AMD Instinct MI200 series includes the world's fastest high performance computing (HPC) and artificial intelligence (AI) accelerator, the AMD Instinct MI250X.

Built on AMD CDNA 2 architecture, AMD Instinct MI200 series accelerators deliver leading application performance for a broad set of HPC workloads. The AMD Instinct MI250X accelerator provides up to 4.9X better performance than competitive accelerators for double precision (FP64) HPC applications and surpasses 380 teraflops of peak theoretical half-precision (FP16) for AI workloads to enable disruptive approaches in further accelerating data-driven research.

AAEON Announces BOXER-8521AI Edge AI Computing Platform

AAEON, an industry leader in rugged AI Edge platforms, announces that the BOXER-8521AI is now available on a mass-market scale. Winner of the 2021 Taiwan Excellence Award, the BOXER-8521AI combines the flexibility of PoE PD deployment with the Google Edge TPU in a rugged, fanless system designed to bring AI edge computing to where it's needed.

The BOXER-8521AI is focused on providing flexibility in deployment and connectivity. It features a PoE PD port, allowing the system to be deployed further away from its power source while enabling network connectivity and remote monitoring of the system over the same single cable, reducing installation complexity. Additionally, by utilizing both the PoE PD port and the DC input, the system can continue operating even if one power supply is cut off.
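
Inference on the Google Edge TPU inside a system like this typically goes through TensorFlow Lite with the Edge TPU delegate. The sketch below follows that documented pattern; the model filename is a placeholder, and the delegate library name assumes a Linux installation of the Edge TPU runtime.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load a model compiled for the Edge TPU and attach the Edge TPU delegate.
# "model_edgetpu.tflite" is a placeholder; libedgetpu.so.1 assumes Linux.
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one dummy frame matching the model's expected input shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```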

AWS Leverages Habana Gaudi AI Processors

Today at AWS re:Invent 2020, AWS CEO Andy Jassy announced EC2 instances that will leverage up to eight Habana Gaudi accelerators and deliver up to 40% better price performance than current graphics processing unit-based EC2 instances for machine learning workloads. Gaudi accelerators are specifically designed for training deep learning models for workloads that include natural language processing, object detection and machine learning training, classification, recommendation and personalization.

"We are proud that AWS has chosen Habana Gaudi processors for its forthcoming EC2 training instances. The Habana team looks forward to our continued collaboration with AWS to deliver on a roadmap that will provide customers with continuity and advances over time." -David Dahan, chief executive officer at Habana Labs, an Intel Company.

Apple Announces New Line of MacBooks and Mac Minis Powered by M1

On a momentous day for the Mac, Apple today introduced a new MacBook Air, 13-inch MacBook Pro, and Mac mini powered by the revolutionary M1, the first in a family of chips designed by Apple specifically for the Mac. By far the most powerful chip Apple has ever made, M1 transforms the Mac experience. With its industry-leading performance per watt, together with macOS Big Sur, M1 delivers up to 3.5x faster CPU, up to 6x faster GPU, up to 15x faster machine learning (ML) capabilities, and battery life up to 2x longer than before. And with M1 and Big Sur, users get access to the biggest collection of apps ever for Mac. With amazing performance and remarkable new features, the new lineup of M1-powered Macs is an incredible value, and all are available to order today.

"The introduction of three new Macs featuring Apple's breakthrough M1 chip represents a bold change that was years in the making, and marks a truly historic day for the Mac and for Apple," said Tim Cook, Apple's CEO. "M1 is by far the most powerful chip we've ever created, and combined with Big Sur, delivers mind-blowing performance, extraordinary battery life, and access to more software and apps than ever before. We can't wait for our customers to experience this new generation of Mac, and we have no doubt it will help them continue to change the world."

Tachyum Prodigy Native AI Supports TensorFlow and PyTorch

Tachyum Inc. today announced that it has further expanded the capabilities of its Prodigy Universal Processor through support for TensorFlow and PyTorch environments, enabling a faster, less expensive and more dynamic solution for the most challenging artificial intelligence/machine learning workloads.

Analysts predict that AI revenue will surpass $300 billion by 2024, with a compound annual growth rate (CAGR) of up to 42 percent through 2027. Technology giants are investing heavily in AI to make the technology more accessible for enterprise use cases, which range from self-driving vehicles to more sophisticated and control-intensive disciplines like Spiking Neural Nets, Explainable AI, Symbolic AI and Bio AI. When deployed into AI environments, Prodigy is able to simplify software processes, accelerate performance, save energy and better incorporate rich data sets to allow for faster innovation.

Lightmatter Introduces Optical Processor to Speed Compute for Next-Gen AI

Lightmatter, a leader in silicon photonics processors, today announces its artificial intelligence (AI) photonic processor, a general-purpose AI inference accelerator that uses light to compute and transport data. Using light to calculate and communicate within the chip reduces heat—leading to orders of magnitude reduction in energy consumption per chip and dramatic improvements in processor speed. Since 2010, the amount of compute power needed to train a state-of-the-art AI algorithm has grown at five times the rate of Moore's Law scaling—doubling approximately every three and a half months. Lightmatter's processor solves the growing need for computation to support next-generation AI algorithms.
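
The growth rate quoted above can be made concrete with a quick calculation: doubling every 3.5 months compounds to roughly 11x per year, while a classic Moore's Law cadence of doubling every two years compounds to about 1.4x per year. The short script below reproduces that arithmetic; the 24-month Moore's Law doubling period is a conventional assumption, not a figure from the announcement.

```python
def growth_per_year(doubling_period_months):
    """Compound growth factor over 12 months for a given doubling period."""
    return 2 ** (12 / doubling_period_months)

ai_compute = growth_per_year(3.5)   # ~10.8x per year (per the quoted trend)
moores_law = growth_per_year(24)    # ~1.4x per year (assumed 24-month doubling)
print(f"AI training compute: ~{ai_compute:.1f}x per year")
print(f"Moore's Law pace (24-month doubling assumed): ~{moores_law:.1f}x per year")
```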

"The Department of Energy estimates that by 2030, computing and communications technology will consume more than 8 percent of the world's power. Transistors, the workhorse of traditional processors, aren't improving; they're simply too hot. Building larger and larger datacenters is a dead end path along the road of computational progress," said Nicholas Harris, PhD, founder and CEO at Lightmatter. "We need a new computing paradigm. Lightmatter's optical processors are dramatically faster and more energy efficient than traditional processors. We're simultaneously enabling the growth of computing and reducing its impact on our planet."

GIGABYTE Introduces a Broad Portfolio of G-series Servers Powered by NVIDIA A100 PCIe

GIGABYTE, an industry leader in high-performance servers and workstations, announced its G-series servers' validation plan. Following today's NVIDIA A100 PCIe GPU announcement, GIGABYTE has completed compatibility validation of the G481-HA0 and G292-Z40 and added the NVIDIA A100 to the support list for these two servers. The remaining G-series servers will be divided into two waves to complete their respective compatibility tests soon. At the same time, GIGABYTE also launched a new G492 series server based on the AMD EPYC 7002 processor family, which provides PCIe Gen4 support for up to 10 NVIDIA A100 PCIe GPUs. The G492 is the server with the highest computing power for AI model training on the market today. GIGABYTE will offer two SKUs for the G492: the G492-Z50 will be at a more approachable price point, whereas the G492-Z51 will be geared towards higher performance.

The G492 is GIGABYTE's second-generation 4U G-series server. Based on the first generation G481 (Intel architecture) / G482 (AMD architecture) servers, the user-friendly design and scalability have been further optimized. In addition to supporting two 280 W 2nd Gen AMD EPYC 7002 processors, the 32 DDR4 memory slots support up to 8 TB of memory and maintain data transmission at 3200 MHz. The G492 has built-in PCIe Gen4 switches, which can provide more PCIe Gen4 lanes. PCIe Gen4 has twice the I/O performance of PCIe Gen3 and fully enables the computing power of the NVIDIA A100 Tensor Core GPU, or it can be applied to PCIe storage to help provide a storage upgrade path that is native to the G492.
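
The "twice the I/O performance" figure for PCIe Gen4 follows directly from the link rates: 16 GT/s per lane for Gen4 versus 8 GT/s for Gen3, both with 128b/130b encoding. A quick calculation for an x16 slot, as used by the A100 cards here, is shown below.

```python
def pcie_x16_bandwidth_gbs(transfer_rate_gts, lanes=16, encoding=128 / 130):
    """Approximate per-direction PCIe bandwidth in GB/s for an x16 link."""
    return transfer_rate_gts * encoding / 8 * lanes  # GT/s -> GB/s per lane, times lanes

gen3 = pcie_x16_bandwidth_gbs(8)    # ~15.8 GB/s per direction
gen4 = pcie_x16_bandwidth_gbs(16)   # ~31.5 GB/s per direction
print(f"PCIe Gen3 x16: ~{gen3:.1f} GB/s per direction")
print(f"PCIe Gen4 x16: ~{gen4:.1f} GB/s per direction ({gen4 / gen3:.0f}x Gen3)")
```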

Intel Announces "Cooper Lake" 4P-8P Xeons, New Optane Memory, PCIe 4.0 SSDs, and FPGAs for AI

Intel today introduced its 3rd Gen Intel Xeon Scalable processors and additions to its hardware and software AI portfolio, enabling customers to accelerate the development and use of AI and analytics workloads running in data center, network and intelligent-edge environments. As the industry's first mainstream server processors with built-in bfloat16 support, Intel's new 3rd Gen Xeon Scalable processors make artificial intelligence (AI) inference and training more widely deployable on general-purpose CPUs for applications that include image classification, recommendation engines, speech recognition and language modeling.
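
On hardware with native bfloat16 support, frameworks such as TensorFlow can exploit it through a mixed-precision policy. The sketch below shows the standard Keras mixed_bfloat16 setting; it is a generic illustration of training with bfloat16, not Intel-specific code, and actual speedups depend on the underlying libraries and CPU.

```python
import tensorflow as tf

# Ask Keras to compute in bfloat16 while keeping variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(64,)),
    # Keep the final layer's output in float32 for numerically stable losses.
    tf.keras.layers.Dense(10, dtype="float32"),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

x = tf.random.normal((1024, 64))
y = tf.random.uniform((1024,), maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1, batch_size=128)
print(tf.keras.mixed_precision.global_policy())  # mixed_bfloat16
```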

"The ability to rapidly deploy AI and data analytics is essential for today's businesses. We remain committed to enhancing built-in AI acceleration and software optimizations within the processor that powers the world's data center and edge solutions, as well as delivering an unmatched silicon foundation to unleash insight from data," said Lisa Spelman, Intel corporate vice president and general manager, Xeon and Memory Group.

AMD Announces Radeon Pro VII Graphics Card, Brings Back Multi-GPU Bridge

AMD today announced its Radeon Pro VII professional graphics card targeting 3D artists, engineering professionals, broadcast media professionals, and HPC researchers. The card is based on AMD's "Vega 20" multi-chip module that incorporates a 7 nm (TSMC N7) GPU die, along with a 4096-bit wide HBM2 memory interface, and four memory stacks adding up to 16 GB of video memory. The GPU die is configured with 3,840 stream processors across 60 compute units, 240 TMUs, and 64 ROPs. The card is built in a workstation-optimized add-on card form-factor (rear-facing power connectors and lateral-blower cooling solution).

What separates the Radeon Pro VII from last year's Radeon VII is full double-precision floating-point support at 1:2 the FP32 throughput, whereas the Radeon VII is locked to 1:4 FP32. Specifically, the Radeon Pro VII offers 6.55 TFLOPs of double-precision floating point performance (vs. 3.36 TFLOPs on the Radeon VII). Another major difference is the physical Infinity Fabric bridge interface, which lets you pair up to two of these cards in a multi-GPU setup to double the memory capacity to 32 GB. Each GPU has two Infinity Fabric links, running at 1333 MHz, with a per-direction bandwidth of 42 GB/s. This brings the total bidirectional bandwidth to a whopping 168 GB/s—more than twice the PCIe 4.0 x16 limit of 64 GB/s.
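
The 168 GB/s figure is simply the two Infinity Fabric links counted in both directions, as the short calculation below reproduces, together with the PCIe 4.0 x16 comparison from the text.

```python
links = 2                 # Infinity Fabric links per GPU
per_direction_gbs = 42    # GB/s per link, per direction (from the spec above)

total_bidirectional = links * per_direction_gbs * 2   # 168 GB/s
pcie4_x16_bidirectional = 64                          # GB/s, as cited in the text

print(f"Infinity Fabric total: {total_bidirectional} GB/s")
print(f"Versus PCIe 4.0 x16: {total_bidirectional / pcie4_x16_bidirectional:.2f}x")
```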

ASUS Announces Tinker Edge R with AI Machine-Learning Capabilities

ASUS today announced Tinker Edge R, a single-board computer (SBC) specially designed for AI applications. It uses a Rockchip RK3399Pro NPU, a machine-learning (ML) accelerator that speeds up processing efficiency, lowers power demands and makes it easier to build connected devices and intelligent applications.

With this integrated ML accelerator, Tinker Edge R can perform three tera-operations per second (3 TOPS) at low power consumption. It also features an optimized neural-network (NN) architecture, which means Tinker Edge R can support multiple ML frameworks and allows many common ML models to be compiled and run easily.
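
Supporting multiple ML frameworks on boards like this usually means exporting a trained model to a portable format and then compiling it with the vendor's NPU toolchain. The sketch below shows only the first, framework-side step, exporting a Keras model to TensorFlow Lite; the Rockchip NPU itself is programmed through Rockchip's own conversion tools, which are not shown, and the model and output path are placeholders.

```python
import tensorflow as tf

# Train or load a model in the framework of your choice (Keras shown here).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])

# Export to the portable TFLite format as an intermediate step before the
# board-specific NPU compiler (vendor toolchain not shown; paths are placeholders).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```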

Arm Delivers New Edge Processor IPs for IoT

Today, Arm announced significant additions to its artificial intelligence (AI) platform, including new machine learning (ML) IP, the Arm Cortex-M55 processor and Arm Ethos-U55 NPU, the industry's first microNPU (Neural Processing Unit) for Cortex-M, designed to deliver a combined 480x leap in ML performance to microcontrollers. The new IP and supporting unified toolchain give AI hardware and software developers more ways to innovate, thanks to unprecedented levels of on-device ML processing for billions of small, power-constrained IoT and embedded devices.
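
Deploying models to a Cortex-M-class device with a microNPU generally requires full integer quantization before the model is compiled for the NPU. The sketch below shows the standard TensorFlow Lite int8 post-training quantization flow; the model and calibration data are synthetic placeholders, and the Ethos-U-specific compilation step (Arm's Vela compiler) is not shown.

```python
import numpy as np
import tensorflow as tf

# A small placeholder model standing in for whatever was trained for the device.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 10, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():
    # Synthetic calibration samples; in practice, use real input data.
    for _ in range(100):
        yield [np.random.rand(1, 49, 10, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
# The int8 .tflite file would then be compiled for the Ethos-U NPU with Arm's
# Vela tool before being linked into the microcontroller firmware.
```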

Intel Announces Broadest Product Portfolio for Moving, Storing, and Processing Data

Intel Tuesday unveiled a new portfolio of data-centric solutions consisting of 2nd-Generation Intel Xeon Scalable processors, Intel Optane DC memory and storage solutions, and software and platform technologies optimized to help its customers extract more value from their data. Intel's latest data center solutions target a wide range of use cases within cloud computing, network infrastructure and intelligent edge applications, and support high-growth workloads, including AI and 5G.

Building on more than 20 years of world-class data center platforms and deep customer collaboration, Intel's data center solutions target server, network, storage, internet of things (IoT) applications and workstations. The portfolio of products advances Intel's data-centric strategy to pursue a massive $300 billion data-driven market opportunity.