News Posts matching #ML

Google Teases Upcoming Custom Tensor Processor in Pixel 6

In 2016, we launched the first Pixel. Our goal was to give people a more helpful, smarter phone. Over the years, we introduced features like HDR+ and Night Sight, which used artificial intelligence (AI) to create beautiful images with computational photography. In later years, we applied powerful speech recognition models to build Recorder, which can record, transcribe and search for audio clips, all on device.

AI is the future of our innovation work, but we have run into computing limitations that prevented us from fully pursuing our mission. So we set about building a technology platform for mobile that would enable us to bring our most innovative AI and machine learning (ML) to our Pixel users. We set out to make our own System on a Chip (SoC) to power Pixel 6. And now, years later, it's almost here. Tensor is our first custom-built SoC specifically for Pixel phones, and it will power the Pixel 6 and Pixel 6 Pro later this fall.

Microsoft Seemingly Looking to Develop AI-based Upscaling Tech via DirectML

Microsoft seems to be throwing its hat in the image upscale battle that's currently raging between NVIDIA and AMD. The company has added two new job openings to its careers page: one for a Senior Software Engineer and another for a Principal Software Engineer for Graphics. Those job openings would be quite innocent by themselves; however, once we cut through the chaff, it becomes clear that the Senior Software Engineer is expected to "implement machine learning algorithms in graphics software to delight millions of gamers," while working closely with "partners" to develop software for "future machine learning hardware" - partners here could be first-party titles or even the hardware providers themselves (read, AMD). AMD themselves have touted a DirectML upscaling solution back when they first introduced their FidelityFX program - and FSR clearly isn't it.

It is interesting that Microsoft posted these job openings on June 30th - a few days after AMD revealed its FidelityFX Super Resolution (FSR) solution for all graphics cards, which Microsoft itself confirmed would be implemented in the Xbox product stack where applicable. Of course, the availability of one solution does not mean companies should rest on their laurels - AMD is surely at work improving its FSR tech as we speak, and Microsoft has seen the advantages of a pure ML-powered image upscaling solution thanks to NVIDIA's DLSS. Whether Microsoft's DirectML solution will improve on DLSS as it exists at the time of launch (if it ever launches) is, of course, unknowable at this point.
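For context on what an ML-based upscaler has to beat: purely spatial techniques reconstruct missing pixels by interpolating their neighbours. Below is a minimal sketch of bilinear interpolation - illustrative only, not Microsoft's, AMD's, or NVIDIA's method; the `bilinear_upscale` function and sample image are hypothetical:

```python
def bilinear_upscale(img, scale):
    """Upscale a 2D grayscale image (list of lists of floats) by `scale`
    using bilinear interpolation - the kind of fixed spatial filtering
    that ML-based upscalers such as DLSS aim to outperform."""
    h, w = len(img), len(img[0])
    out_h, out_w = int(h * scale), int(w * scale)
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            # Map the output pixel back into source coordinates.
            src_y = min(y / scale, h - 1)
            src_x = min(x / scale, w - 1)
            y0, x0 = int(src_y), int(src_x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = src_y - y0, src_x - x0
            # Blend the four neighbouring source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

small = [[0.0, 1.0],
         [1.0, 0.0]]
big = bilinear_upscale(small, 2)  # 2x2 input becomes 4x4
```

An ML-based approach replaces this fixed interpolation kernel with a learned network that can reconstruct detail interpolation alone cannot recover - which is why DLSS-style results are hard to match with purely spatial filters.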

SiFive Performance P550 Core Sets New Standard as Highest Performance RISC-V Processor IP

SiFive, Inc., the industry leader in RISC-V processors and silicon solutions, today announced the new SiFive Performance family of processors. The SiFive Performance family debuts with two new processor cores: the P270, SiFive's first Linux-capable processor with full support for the RISC-V vector extension v1.0 rc, and the SiFive Performance P550 core, SiFive's highest-performance processor to date. The new SiFive Performance P550 delivers a SPECInt 2006 score of 8.65/GHz, making it the highest performance RISC-V processor available today, and comparable to existing proprietary solutions in the application processor space.
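Since the quoted SPECInt 2006 figure is normalized per GHz, the absolute score scales (to a first approximation) linearly with clock frequency. A quick sketch - the 2.4 GHz clock below is hypothetical, not a SiFive specification:

```python
# SPECInt 2006 is quoted per GHz in the announcement; the absolute score
# scales roughly linearly with clock frequency.
SPECINT_PER_GHZ = 8.65  # P550 figure from the announcement

def specint_at_clock(ghz):
    """Estimated SPECInt 2006 score at a given clock
    (the clock is a hypothetical input, not a SiFive spec)."""
    return SPECINT_PER_GHZ * ghz

print(round(specint_at_clock(2.4), 2))  # hypothetical 2.4 GHz part -> 20.76
```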

"SiFive Performance is a significant milestone in our commitment to deliver a complete, scalable portfolio of RISC-V cores to customers in all markets who are at the vanguard of SOC design and are dissatisfied with the status quo," said Dr. Yunsup Lee, Co-Founder and CTO of SiFive. "These two new products cover new performance points and a wide range of application areas, from efficient vector processors that easily displace yesterday's SIMD architectures, to the bleeding edge that the P550 represents. SiFive is proud to set the standard for RISC-V processing and is ready to deliver these products to customers today."

Intel Collaborates with Microsoft against Cryptojacking

Starting today, Microsoft Defender for Endpoint expands its use of Intel Threat Detection Technology (Intel TDT) beyond accelerated memory scanning capabilities to activate central processing unit (CPU) based cryptomining machine learning (ML) detection. This move further accelerates endpoint detection and response for millions of customers without compromising experience.

"This is a true inflection point for the security industry as well as our SMB, mid-market and enterprise customers that have rapidly adopted Windows 10 with built-in endpoint protections. Customers who choose Intel vPro with the exclusive Intel Hardware Shield now gain full-stack visibility to detect threats out of the box with no need for IT configuration. The scale of this CPU-based threat detection rollout across customer systems is unmatched and helps close gaps in corporate defenses," said Michael Nordquist, senior director of Strategic Planning and Architecture in the Business Client Group at Intel.

Apple Announces New Line of MacBooks and Mac Minis Powered by M1

On a momentous day for the Mac, Apple today introduced a new MacBook Air, 13-inch MacBook Pro, and Mac mini powered by the revolutionary M1, the first in a family of chips designed by Apple specifically for the Mac. By far the most powerful chip Apple has ever made, M1 transforms the Mac experience. With its industry-leading performance per watt, together with macOS Big Sur, M1 delivers up to 3.5x faster CPU, up to 6x faster GPU, up to 15x faster machine learning (ML) capabilities, and battery life up to 2x longer than before. And with M1 and Big Sur, users get access to the biggest collection of apps ever for Mac. With amazing performance and remarkable new features, the new lineup of M1-powered Macs is an incredible value, and all are available to order today.

"The introduction of three new Macs featuring Apple's breakthrough M1 chip represents a bold change that was years in the making, and marks a truly historic day for the Mac and for Apple," said Tim Cook, Apple's CEO. "M1 is by far the most powerful chip we've ever created, and combined with Big Sur, delivers mind-blowing performance, extraordinary battery life, and access to more software and apps than ever before. We can't wait for our customers to experience this new generation of Mac, and we have no doubt it will help them continue to change the world."

Arm Highlights its Next Two Generations of CPUs, codenamed Matterhorn and Makalu, with up to a 30% Performance Uplift

Editor's Note: This is written by Arm vice president and general manager Paul Williamson.

Over the last year, I have been inspired by the innovators who are dreaming up solutions to improve and enrich our daily lives. Tomorrow's mobile applications will be even more imaginative, immersive, and intelligent. To that point, the industry has come such a long way in making this happen. Take app stores for instance - we had the choice of roughly 500 apps when smartphones first began shipping in volume in 2007 and today there are 8.9 million apps available to choose from.

Mobile has transformed from a simple utility to the most powerful, pervasive device we engage with daily, much like Arm-based chips have progressed to more powerful but still energy-efficient SoCs. Although the chip-level innovation has already evolved significantly, more is still required as use cases become more complex, with more AI and ML workloads being processed locally on our devices.

Rambus Advances HBM2E Performance to 4.0 Gbps for AI/ML Training Applications

Rambus Inc. (NASDAQ: RMBS), a premier silicon IP and chip provider making data faster and safer, today announced it has achieved a record 4 Gbps performance with the Rambus HBM2E memory interface solution consisting of a fully-integrated PHY and controller. Paired with the industry's fastest HBM2E DRAM from SK hynix operating at 3.6 Gbps, the solution can deliver 460 GB/s of bandwidth from a single HBM2E device. This performance meets the terabyte-scale bandwidth needs of accelerators targeting the most demanding AI/ML training and high-performance computing (HPC) applications.
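The 460 GB/s figure follows directly from HBM2E's 1024-bit device interface: bandwidth is pins times per-pin data rate, divided by 8 bits per byte. A quick check (the function name is my own):

```python
def hbm2e_bandwidth_gbs(gbps_per_pin, bus_width_bits=1024):
    """Peak bandwidth of one HBM2E device in GB/s.
    HBM2E stacks expose a 1024-bit interface (8 channels x 128 bits);
    bandwidth = pins * per-pin data rate / 8 bits per byte."""
    return gbps_per_pin * bus_width_bits / 8

print(hbm2e_bandwidth_gbs(3.6))  # 460.8 GB/s, matching the ~460 GB/s quoted
```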

"With this achievement by Rambus, designers of AI and HPC systems can now implement systems using the world's fastest HBM2E DRAM running at 3.6 Gbps from SK hynix," said Uksong Kang, vice president of product planning at SK hynix. "In July, we announced full-scale mass-production of HBM2E for state-of-the-art computing applications demanding the highest bandwidth available."

Arm Announces Cortex-R82: The First 64-bit Real Time Processor to Power the Future of Computational Storage

There is expected to be more than 79 zettabytes of IoT data in 2025, but the real value of this data lies in the insights it generates. The closer to the data source we can produce these insights, the better, because of the improved security, lower latency and greater energy efficiency this enables. Computational storage is emerging as a critical piece of the data storage puzzle because it puts processing power directly on the storage device, giving companies secure, quick and easy access to vital information.

Our expertise and legacy in storage puts Arm in a strong position to address the changing needs of this market - with around 85% of hard disk drive controllers and solid-state drive controllers based on Arm, we are already a trusted partner for billions of storage devices. Today, we're announcing Arm Cortex-R82, our first 64-bit, Linux-capable Cortex-R processor designed to accelerate the development and deployment of next-generation enterprise and computational storage solutions.

NVIDIA A100 Ampere GPU Benchmarked on MLPerf

When NVIDIA announced its Ampere lineup of graphics cards, the A100 GPU represented the high-performance end of the lineup. The GPU is optimized for heavy computing workloads as well as machine learning and AI tasks. Today, NVIDIA has submitted MLPerf results for the A100 GPU to the MLPerf database. What is MLPerf, and why does it matter? MLPerf is a system benchmark designed to test a system's capability for machine learning tasks and enable comparisons between systems. The A100 GPU was benchmarked in the latest 0.7 version of the benchmark.

The baseline for the results was the previous-generation flagship, the V100 Volta GPU. The new A100 GPU is on average 1.5 to 2.5 times faster than the V100, and so far the A100-based system beats all available offerings. It is worth pointing out, however, that not all competing systems have submitted results; among those that have, the A100 GPU is the fastest.
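Per-benchmark speedups like the 1.5 to 2.5 times range quoted above are conventionally summarized with a geometric mean rather than an arithmetic one, so that no single benchmark dominates the headline number. A sketch with illustrative figures (not actual MLPerf data):

```python
import math

def geomean(xs):
    """Geometric mean - the conventional way to summarize a set of
    per-benchmark speedups without letting one outlier dominate."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical A100-vs-V100 per-benchmark speedups in the 1.5-2.5x
# range mentioned above (illustrative numbers only).
speedups = [1.5, 1.8, 2.0, 2.2, 2.5]
print(round(geomean(speedups), 2))
```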

Qualcomm Launches World's First 5G and AI-Enabled Robotics Platform

Qualcomm Technologies, Inc., today announced the Qualcomm Robotics RB5 platform - the Company's most advanced, integrated, comprehensive offering designed specifically for robotics. Building on the successful Qualcomm Robotics RB3 platform and its broad adoption in a wide array of robotics and drone products available today, the Qualcomm Robotics RB5 platform comprises an extensive set of hardware, software and development tools.

The Qualcomm Robotics RB5 platform is the first of its kind to bring together the Company's deep expertise in 5G and AI to empower developers and manufacturers to create the next generation of high-compute, low-power robots and drones for the consumer, enterprise, defense, industrial and professional service sectors - and the comprehensive Qualcomm Robotics RB5 Development Kit helps ensure developers have the customization and flexibility they need to make their visions a commercial reality. To date, Qualcomm Technologies has engaged many leading companies that have endorsed the Qualcomm Robotics RB5 platform, including 20+ early adopters in the process of evaluating the platform.

Microsoft is Replacing MSN Journalists with Artificial Intelligence

Microsoft is working on bringing the latest artificial intelligence technology everywhere it can, wherever it works. According to reports from Business Insider and the Seattle Times, Microsoft is terminating its contracts with journalists and replacing them with artificial intelligence software. Between Wednesday and Thursday of last week, around 50 employees were informed that their contracts will not be renewed after June 30th. The journalists in question were responsible for Microsoft's MSN Web portal, which will now use machine learning (ML) models to generate its news stream. For an application like this, Microsoft is presumably utilizing its Azure infrastructure to process everything in the cloud.

One former employee said that the MSN platform has been semi-automated for some time now and that this completes the automation. "Like all companies, we evaluate our business on a regular basis. This can result in increased investment in some places and, from time to time, re-deployment in others. These decisions are not the result of the current pandemic," a Microsoft spokesman told the Seattle Times.

Arm Announces new IP Portfolio with Cortex-A78 CPU

During this unprecedented global health crisis, we have experienced rapid societal changes in how we interact with and rely on technology to connect, aid, and support us. As a result of this we are increasingly living our lives on our smartphones, which have been essential in helping feed our families through application-based grocery or meal delivery services, as well as virtually seeing our colleagues and loved ones daily. Without question, our Arm-based smartphones are the computing hub of our lives.

However, even before this increased reliance on our smartphones, there was already growing interest among users in exploring the limits of what is possible. The combination of these factors with the convergence of 5G and AI is generating greater demand for more performance and efficiency in the palm of our hands.

Arm Delivers New Edge Processor IPs for IoT

Today, Arm announced significant additions to its artificial intelligence (AI) platform, including new machine learning (ML) IP: the Arm Cortex-M55 processor and the Arm Ethos-U55 NPU, the industry's first microNPU (Neural Processing Unit) for Cortex-M, designed to deliver a combined 480x leap in ML performance for microcontrollers. The new IP and supporting unified toolchain give AI hardware and software developers more ways to innovate, thanks to unprecedented levels of on-device ML processing for billions of small, power-constrained IoT and embedded devices.
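microNPUs like the Ethos-U55 earn their efficiency by executing neural-network layers as quantized int8 multiply-accumulate operations in a wide integer accumulator. A minimal sketch of that arithmetic (illustrative only, not Arm's implementation; the function and scale values are hypothetical):

```python
def int8_dot(a, b, scale_a, scale_b):
    """Quantized int8 dot product of the kind a microNPU accelerates:
    multiply-accumulate in a wide integer accumulator, then rescale
    the result back to real values (dequantization).
    Illustrative sketch, not Arm's implementation."""
    assert all(-128 <= v <= 127 for v in a + b), "values must fit in int8"
    acc = sum(x * y for x, y in zip(a, b))  # wide (e.g. 32-bit) accumulator
    return acc * scale_a * scale_b          # dequantize to a float result

# Two small quantized vectors with hypothetical per-tensor scales.
print(int8_dot([10, -20, 30], [5, 5, 5], 0.1, 0.1))
```

Keeping weights and activations in int8 shrinks both memory traffic and multiplier area, which is what makes ML feasible on power-constrained microcontrollers at all.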

Rambus Achieves Industry-Leading GDDR6 Performance at 18 Gbps

Rambus Inc., a premier silicon IP and chip provider making data faster and safer, today announced it has achieved industry-leading 18 Gbps performance with the Rambus GDDR6 Memory PHY. Running at the industry's fastest data rate of 18 Gbps, the Rambus GDDR6 PHY IP delivers peak performance four-to-five times faster than current DDR4 solutions and continues the company's longstanding tradition of developing leading-edge products. The Rambus GDDR6 PHY pairs with the companion GDDR6 memory controller from the recent acquisition of Northwest Logic to provide a complete and optimized memory subsystem solution.
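Per-device bandwidth at these data rates is straightforward to compute: a GDDR6 device exposes a 32-bit interface (two 16-bit channels). A quick check (the function name is my own):

```python
def device_bandwidth_gbs(gbps_per_pin, bus_width_bits):
    """Peak bandwidth of a memory device in GB/s:
    pins * per-pin data rate / 8 bits per byte."""
    return gbps_per_pin * bus_width_bits / 8

# One GDDR6 device has a 32-bit (2x16-channel) interface.
print(device_bandwidth_gbs(18, 32))  # 72.0 GB/s per device at 18 Gbps
```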

Increased data usage in applications such as AI, ML, data center, networking and automotive systems is driving a need for higher bandwidth memory. The coming introduction of high-bandwidth 5G networks will exacerbate this challenge. Working closely with our memory partners, the Rambus GDDR6 solution gives system designers more options in selecting the memory system that meets both their bandwidth and cost requirements.

India First Country to Deploy AI Machine Learning to Fight Income Tax Evasion

India is building a large AI machine-learning data center that can crunch through trillions of financial transactions per hour to process income tax returns from India's billion-strong income tax assessee base. India's Income Tax Department has relied on human tax assessment officers, randomly selected by a computer, to assess tax returns filed by individuals - an increasingly inefficient system that is prone to both evasion and corruption. India has already been using machine learning since 2017 to fish out cases of tax evasion for further human scrutiny. The AI now replaces human assessment officers, moving them up the escalation matrix.

The AI/ML assessment system is a logical next step to two big policy decisions the Indian government has taken in recent years: one of 100% data-localization by foreign entities conducting commerce in India; and getting India's vast population to use electronic payment instruments, away from paper-money, by de-monetizing high-value currency, and replacing it with a scarce supply of newer bank-notes that effectively force people to use electronic instruments. Contributing to these efforts are some of the lowest 4G mobile data prices in the world (as low as $1.50 for 40 GB of 4G LTE data), and low-cost smartphone handsets. It's also free to open a basic bank account with no minimum balance requirements.

Google Cloud Introduces NVIDIA Tesla P4 GPUs, for $430 per Month

Today, we are excited to announce a new addition to the Google Cloud Platform (GCP) GPU family that's optimized for graphics-intensive applications and machine learning inference: the NVIDIA Tesla P4 GPU.

We've come a long way since we introduced our first-generation compute accelerator, the K80 GPU, adding along the way P100 and V100 GPUs that are optimized for machine learning and HPC workloads. The new P4 accelerators, now in beta, provide a good balance of price/performance for remote display applications and real-time machine learning inference.
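GCP GPUs are billed by usage, so the $430-per-month headline figure implies an hourly rate if the instance runs around the clock. A back-of-the-envelope sketch (the ~730 hours/month figure is a common averaging approximation, not GCP's official billing formula):

```python
HOURS_PER_MONTH = 730  # ~365.25 days * 24 h / 12 months, a common approximation

def implied_hourly_rate(monthly_usd):
    """Back out the hourly on-demand rate implied by a monthly price,
    assuming the instance runs 24/7 (illustrative, not GCP's rate card)."""
    return monthly_usd / HOURS_PER_MONTH

print(round(implied_hourly_rate(430), 2))  # ~0.59 USD/hour for the P4 figure
```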

ARM Reveals Its Plan for World Domination: Announces DynamIQ Technology

ARM processors have been making forays into hitherto shallow markets, with its technology and processor architectures securing an ever-increasing number of design wins. Most recently, Microsoft itself announced a platform meant to use ARM processors in a server environment. Now, ARM has put forward its plan to ship a grand total of 100 billion chips in the 2017-2021 time frame.

To put that goal in perspective, ARM is looking to ship as many ARM-powered processors in this 2017-2021 time frame as it did between 1991 and 2017. This is no easy task - at least if ARM were to stay in its known markets, where it has already achieved almost total saturation. The plan: to widen the appeal of its processor design, with big bets in the AI, Automotive, XR (which encompasses the Virtual Reality, Augmented Reality, and Mixed Reality markets), leveraged by what ARM does best: hyper-efficient processors.