News Posts matching #Artificial Intelligence


SMART Modular Announces the SMART Kestral PCIe Optane Memory Add-in-Card to Enable Memory Expansion and Acceleration

SMART Modular Technologies, Inc. ("SMART"), a division of SGH and a global leader in memory solutions, solid-state drives, and hybrid storage products, announces its new SMART Kestral PCIe Optane Memory Add-in-Card (AIC), which adds up to 2 TB of Optane memory expansion over a PCIe Gen4 x16 or PCIe Gen3 x16 interface, independent of the motherboard CPU. SMART's Kestral AICs accelerate selected algorithms by offloading software-defined storage functions from the host CPU to the Intel FPGA on the AIC. SMART's Kestral memory AICs are ideal for hyperscale, data center, and other similar environments that run large-memory applications and would benefit from memory acceleration or system acceleration through computational storage.

"With the advancement of new interconnect standards such as CXL and OpenCAPI, SMART's new family of SMART Kestral AICs addresses the industry's need for a variety of new memory module form factors and interfaces for memory expansion and acceleration," stated Mike Rubino, SMART Modular's vice president of engineering. "SMART is able to leverage our many years of experience in developing and productizing controller-based memory solutions to meet today's emerging and continually evolving memory add-on needs of server and storage system customers."

Supermicro Breakthrough Universal GPU System - Supports All Major CPU, GPU, and Fabric Architectures

Super Micro Computer, Inc. (SMCI), a global leader in enterprise computing, storage, networking solutions, and green computing technology, has announced a revolutionary technology that simplifies large-scale GPU deployments with a future-proof design that supports yet-to-be-announced technologies. The Universal GPU server provides the ultimate flexibility in a resource-saving server.

The Universal GPU system architecture combines the latest technologies supporting multiple GPU form factors, CPU choices, storage, and networking options, optimized together to deliver uniquely configured and highly scalable systems. Systems can be optimized for each customer's specific Artificial Intelligence (AI), Machine Learning (ML), and High-Performance Computing (HPC) applications. Organizations worldwide are demanding new options for their next generation of computing environments, options that provide the thermal headroom for the next generation of CPUs and GPUs.

Google Uses Artificial Intelligence to Develop Faster and Smaller Hardware Accelerators

Designing Artificial Intelligence / Machine Learning hardware accelerators takes effort from hardware engineers working in conjunction with scientists in the AI/ML field itself. A few years ago, we started seeing AI incorporated into electronic design automation (EDA) software tools, helping chip designers speed up the process of creating hardware. Until now, AI has handled just a couple of tasks, such as placement and routing, and having those automated is already a huge deal. However, the power of AI for chip design is not going to stop there. Researchers at Google and UC Berkeley have published a research project in which AI designs AI-tailored accelerators that are smaller and faster than anything designed by humans.

In the published paper, the researchers present PRIME, a framework that creates AI processors based on a database of blueprints. The PRIME framework feeds off an offline database containing accelerator designs and their corresponding performance metrics (e.g., latency, power) to design next-generation hardware accelerators. According to Google, PRIME can do so without further hardware simulation and produces processors that are ready for use. As per the paper, PRIME improves performance over state-of-the-art simulation-driven methods by as much as 1.2x-1.5x. It also reduces the required total simulation time by 93% to 99%. The framework is also capable of architecting accelerators for unseen applications.
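For a sense of what such offline, data-driven design optimization looks like, below is a minimal sketch of the general idea: fit a surrogate model to a logged database of designs and their measured latencies, then search new candidate designs against the surrogate instead of running fresh simulations. The dataset, design parameters, and distance-weighted surrogate here are hypothetical illustrations, not Google's actual PRIME implementation.

```python
# Minimal sketch of offline, data-driven accelerator design search
# (the general idea behind approaches like PRIME; NOT Google's actual code).
# Assumption: a logged database of (design, measured latency) pairs exists.
import itertools

# Hypothetical offline database: processing elements, buffer size, measured latency.
offline_db = [
    {"pes": 32,  "buffer_kb": 256,  "latency_ms": 9.1},
    {"pes": 64,  "buffer_kb": 256,  "latency_ms": 6.4},
    {"pes": 64,  "buffer_kb": 512,  "latency_ms": 5.2},
    {"pes": 128, "buffer_kb": 512,  "latency_ms": 4.8},
    {"pes": 128, "buffer_kb": 1024, "latency_ms": 4.1},
]

def surrogate_latency(pes, buffer_kb):
    """Predict latency by distance-weighted averaging over logged designs
    (a toy stand-in for the learned surrogate model such frameworks use)."""
    weights, total = 0.0, 0.0
    for row in offline_db:
        d = abs(row["pes"] - pes) / 128 + abs(row["buffer_kb"] - buffer_kb) / 1024
        w = 1.0 / (d + 1e-3)
        weights += w
        total += w * row["latency_ms"]
    return total / weights

# Search the design space against the surrogate only -- no new simulations.
candidates = itertools.product([32, 64, 96, 128, 192], [256, 512, 768, 1024])
best = min(candidates, key=lambda c: surrogate_latency(*c))
print("surrogate-optimal design (PEs, buffer KB):", best)
```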

Intel, AMD, Arm, and Others Collaborate on UCIe (Universal Chiplet Interconnect Express)

Intel, along with Advanced Semiconductor Engineering Inc. (ASE), AMD, Arm, Google Cloud, Meta, Microsoft Corp., Qualcomm Inc., Samsung and Taiwan Semiconductor Manufacturing Co., have announced the establishment of an industry consortium to promote an open die-to-die interconnect standard called Universal Chiplet Interconnect Express (UCIe). Building on its work on the open Advanced Interface Bus (AIB), Intel developed the UCIe standard and donated it to the group of founding members as an open specification that defines the interconnect between chiplets within a package, enabling an open chiplet ecosystem and ubiquitous interconnect at the package level.

"Integrating multiple chiplets in a package to deliver product innovation across market segments is the future of the semiconductor industry and a pillar of Intel's IDM 2.0 strategy," said Sandra Rivera, executive vice president and general manager of the Datacenter and Artificial Intelligence Group at Intel. "Critical to this future is an open chiplet ecosystem with key industry partners working together under the UCIe Consortium toward a common goal of transforming the way the industry delivers new products and continues to deliver on the promise of Moore's Law."

AAEON Partners with AI Chipmaker Hailo to Enable Next-Gen AI Applications at the Edge

UP Bridge the Gap, a brand of AAEON, is pleased to announce a partnership with Hailo, a leading Artificial Intelligence (AI) chipmaker, to meet skyrocketing demand for next-generation AI applications at the edge. The latest UP Bridge the Gap platforms are compatible with Hailo's Hailo-8 M.2 AI Acceleration Module, offering unprecedented AI performance with best-in-class power efficiency.

Edge computing involves increasingly intensive workloads for computer vision and other artificial intelligence tasks, making it ever more important to move deep learning workloads from the cloud to the edge. Running AI applications at the edge ensures real-time inferencing, data privacy, and low latency for smart city, smart retail, Industry 4.0, and many other applications across various markets.

EuroHPC Joint Undertaking Launches Three New Research and Innovation Projects

The European High Performance Computing Joint Undertaking (EuroHPC JU) has launched three new research and innovation projects. The projects aim to bring the EU and its partners in the EuroHPC JU closer to developing independent microprocessor and HPC technology and to advance a sovereign European HPC ecosystem. The European Processor Initiative (EPI SGA2), the European PILOT, and the European Pilot for Exascale (EUPEX) are interlinked projects and an important milestone towards a more autonomous European supply chain for digital technologies, and specifically for HPC.

With joint investments of €140 million from the European Union (EU) and the EuroHPC JU Participating States, the three projects will carry out research and innovation activities to contribute to the overarching goal of securing European autonomy and sovereignty in HPC components and technologies, especially in anticipation of the European exascale supercomputers.

Lightelligence's Optical Processor Outperforms GPUs by 100 Times in Some of The Hardest Math Problems

Optical computing has been a research topic for many startups and tech companies such as Intel and IBM, all searching for a practical approach to a new way of computing. However, the most innovative solutions often come from startups, and today is no exception. According to a report from EETimes, optical computing startup Lightelligence has developed a processor that outperforms regular GPUs by roughly 100 times on some of the most challenging mathematical problems. As the report indicates, Lightelligence's Photonic Arithmetic Computing Engine (PACE) outperforms regular GPUs, such as NVIDIA's GeForce RTX 3080, by almost 100 times on problems in the NP-complete class.

More precisely, the PACE accelerator tackled the Ising model, an example of a thermodynamic system used to understand phase transitions, and achieved some impressive results: a 100-times speed-up over the RTX 3080. All of this was performed using 12,000 optical devices integrated onto a circuit running at a 1 GHz frequency. Even compared to Toshiba's purpose-built, FPGA-based simulated bifurcation machine, which was designed specifically to tackle the Ising computation, the PACE is still 25 times faster. The PACE chip uses standard silicon photonics integration of Mach-Zehnder Interferometers (MZIs) for computing and MEMS to change the waveguide shape in the MZI.
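To make the benchmark concrete, the Ising problem asks for a spin assignment that minimizes the energy E(s) = -Σ J_ij s_i s_j over a matrix of couplings J. Below is a toy CPU reference in Python, using a hypothetical random coupling matrix and a simple greedy spin-flip descent, purely to illustrate the problem class that PACE accelerates; it is not Lightelligence's method.

```python
# Toy CPU reference for the Ising problem class PACE targets:
# find spins s_i in {-1, +1} minimizing E(s) = -sum_{i<j} J_ij * s_i * s_j.
import random

N = 16
random.seed(0)
# Hypothetical random coupling matrix (symmetric, zero diagonal).
J = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        J[i][j] = J[j][i] = random.uniform(-1, 1)

def energy(spins):
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(N) for j in range(i + 1, N))

# Greedy single-spin-flip descent: keep any flip that lowers the energy.
spins = [random.choice([-1, 1]) for _ in range(N)]
current = energy(spins)
improved = True
while improved:
    improved = False
    for i in range(N):
        spins[i] *= -1
        trial = energy(spins)
        if trial < current:
            current = trial
            improved = True
        else:
            spins[i] *= -1  # revert the flip
print("local-minimum energy:", round(current, 3))
```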
Lightelligence Photonic Arithmetic Computing Engine

QNAP and ULINK Release DA Drive Analyzer, AI-powered Drive Failure Prediction Tool for NAS

QNAP has partnered with ULINK Technology to launch the DA Drive Analyzer. By leveraging cloud-based AI, this drive failure prediction tool allows users to take proactive steps to protect against server downtime and data loss by replacing drives before they fail. The DA Drive Analyzer leverages statistics generated from ULINK's cloud AI portal. Driven by the historical usage data of millions of drives provided by users, the DA Drive Analyzer's drive health prediction applies machine learning to track historical behavior and can catch impending drive failures that won't be flagged by traditional diagnostic tools relying on S.M.A.R.T. thresholds. Its user interface is also much friendlier and more intuitive, allowing you to plan drive replacements based on clearly presented drive information.
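As a rough illustration of the general technique (and not ULINK's actual model, data, or features), the sketch below trains a small classifier on hypothetical per-drive telemetry and then scores a drive that fixed S.M.A.R.T. thresholds might not flag yet.

```python
# Toy illustration of ML-based drive-failure prediction from historical telemetry
# (the general technique only; NOT ULINK's DA Drive Analyzer model or data).
# Feature columns below are hypothetical stand-ins for per-drive usage statistics.
from sklearn.ensemble import RandomForestClassifier

# columns: power_on_hours, reallocated_sectors, read_error_rate, temperature_c
X_train = [
    [12000,   0, 1.0e-9, 34],
    [30000,   2, 4.0e-9, 41],
    [45000,  58, 9.0e-8, 47],   # degraded drive
    [52000, 120, 3.0e-7, 49],   # degraded drive
    [ 8000,   0, 8.0e-10, 30],
]
y_train = [0, 0, 1, 1, 0]  # 1 = drive failed within the following 30 days

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a drive that a fixed S.M.A.R.T. threshold might not flag yet.
candidate = [[40000, 35, 5.0e-8, 45]]
print("estimated failure probability:", model.predict_proba(candidate)[0][1])
```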

"Artificial Intelligence is a new technology that has tackled many real-life problems. By applying this technology to disk failure prediction, ULINK can actively and continuously monitor drives, detect problems, predict failures, and notify end users with our unique cloud-based data processing system. We are fortunate to have worked with QNAP to create this service, and we believe that many will benefit from it," said Joseph Chen, CEO of ULINK Technology.

AMD Announces Ambitious Goal to Increase Energy Efficiency of Processors Running AI Training and High Performance Computing Applications 30x by 2025

AMD today announced a goal to deliver a 30x increase in energy efficiency for AMD EPYC CPUs and AMD Instinct accelerators in Artificial Intelligence (AI) training and High Performance Computing (HPC) applications running on accelerated compute nodes by 2025. Accomplishing this ambitious goal will require AMD to increase the energy efficiency of a compute node at a rate that is more than 2.5x faster than the aggregate industry-wide improvement made during the last five years.

Accelerated compute nodes are the most powerful and advanced computing systems in the world used for scientific research and large-scale supercomputer simulations. They provide the computing capability used by scientists to achieve breakthroughs across many fields including material sciences, climate predictions, genomics, drug discovery and alternative energy. Accelerated nodes are also integral for training AI neural networks that are currently used for activities including speech recognition, language translation and expert recommendation systems, with similar promising uses over the coming decade. The 30x goal would save billions of kilowatt hours of electricity in 2025, reducing the power required for these systems to complete a single calculation by 97% over five years.
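The 97% figure follows directly from the 30x goal: if efficiency improves 30-fold, each calculation needs roughly 1/30 of the baseline energy. A quick arithmetic check:

```python
# Quick check of the arithmetic behind AMD's claim: a 30x gain in energy
# efficiency means each calculation uses about 1/30 of the baseline energy.
baseline_energy = 1.0
energy_at_goal = baseline_energy / 30
reduction = 1 - energy_at_goal / baseline_energy
print(f"energy per calculation reduced by {reduction:.1%}")  # ~96.7%, i.e. ~97%
```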

IBM Unveils On-Chip Accelerated Artificial Intelligence Processor

At the annual Hot Chips conference, IBM (NYSE: IBM) today unveiled details of the upcoming new IBM Telum Processor, designed to bring deep learning inference to enterprise workloads to help address fraud in real-time. Telum is IBM's first processor that contains on-chip acceleration for AI inferencing while a transaction is taking place. Three years in development, the breakthrough of this new on-chip hardware acceleration is designed to help customers achieve business insights at scale across banking, finance, trading, insurance applications and customer interactions. A Telum-based system is planned for the first half of 2022.

Today, businesses typically apply detection techniques to catch fraud after it occurs, a process that can be time consuming and compute-intensive due to the limitations of today's technology, particularly when fraud analysis and detection is conducted far away from mission-critical transactions and data. Due to latency requirements, complex fraud detection often cannot be completed in real time, meaning a bad actor could have already successfully purchased goods with a stolen credit card before the retailer is aware fraud has taken place.

IDC Forecasts Companies to Spend Almost $342 Billion on AI Solutions in 2021

Worldwide revenues for the artificial intelligence (AI) market, including software, hardware, and services, are estimated to grow 15.2% year over year in 2021 to $341.8 billion, according to the latest release of the International Data Corporation (IDC) Worldwide Semiannual Artificial Intelligence Tracker. The market is forecast to accelerate further in 2022 with 18.8% growth and remains on track to break the $500 billion mark by 2024. Among the three technology categories, AI Software occupies 88% of the overall AI market. However, in terms of growth, AI Hardware is estimated to grow the fastest in the next several years. From 2023 onwards, AI Services is forecast to become the fastest growing category.

Within the AI Software category, AI Applications has the lion's share at nearly 50% of revenues. In terms of growth, AI Platforms is the strongest with a five-year compound annual growth rate (CAGR) of 33.2%. The slowest will be AI System Infrastructure Software with a five-year CAGR of 14.4% while accounting for roughly 35% of all AI Software revenues. Within the AI Applications market, AI ERM is expected to grow slightly stronger than AI CRM over the next five years. Meanwhile, AI Lifecycle Software is forecast to grow the fastest among the markets within AI Platforms.
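For context, a compound annual growth rate compounds multiplicatively, so the quoted five-year CAGRs imply the following cumulative growth over the forecast window (a simple arithmetic illustration, not additional IDC data):

```python
# Translate the quoted five-year CAGRs into cumulative growth over five years.
def cumulative_growth(cagr, years):
    return (1 + cagr) ** years - 1

for name, cagr in [("AI Platforms", 0.332),
                   ("AI System Infrastructure Software", 0.144)]:
    print(f"{name}: {cagr:.1%} CAGR -> "
          f"{cumulative_growth(cagr, 5):.0%} cumulative growth over 5 years")
```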

All Team Group Industrial Products Pass Military-Grade Certification

Leading memory provider TEAMGROUP today announced that all of its industrial storage products have passed military standards testing for shock and vibration resistance. The company's unique graphene-coated copper heatsink was also awarded a U.S. utility patent and is the world's first heat sink to be used for industrial SSDs. In response to the growing Artificial Intelligence of Things (AIoT) and edge computing trends, TEAMGROUP continues to improve the durability, reliability, and safety of its industrial products, striving to create innovative solutions in the field of industrial control.

TEAMGROUP's industrial product series, including industrial memory modules, SSDs and memory cards, have all been tested and certified to meet military shock (MIL-STD-202G and MIL-STD-883K) and vibration (MIL-STD-810G) standards. Whether focused on edge computing or 5G related applications, these products are guaranteed to handle high-speed data processing and computing for long periods of continuous operation. TEAMGROUP has once again proven the stability and durability of its industrial products, which meet the demanding needs for data security and industrial control in extreme conditions.

Qualcomm Introduces New 5G Distributed Unit Accelerator Card to Drive Global 5G Virtualized RAN Growth

Qualcomm Technologies, Inc. today announced the expansion of its 5G RAN Platforms portfolio with the addition of the Qualcomm 5G DU X100 Accelerator Card. The Qualcomm 5G DU X100 is designed to give operators and infrastructure vendors the ability to readily reap the benefits of high-performance, low-latency, and power-efficient 5G, while accelerating the cellular ecosystem's transition towards virtualized radio access networks.

The Qualcomm 5G DU X100 is a PCIe inline accelerator card with concurrent Sub-6 GHz and mmWave baseband support which is designed to simplify 5G deployments by offering a turnkey solution for ease of deployment with O-RAN fronthaul and 5G NR layer 1 High (L1 High) processing. The PCIe card is designed to seamlessly plug into standard Commercial-Off-The-Shelf (COTS) servers to offload CPUs from latency-sensitive and compute-intensive 5G baseband functions such as demodulation, beamforming, channel coding, and Massive MIMO computation needed for high-capacity deployments. For use in public or private networks, this accelerator card aims to give carriers the ability to increase overall network capacity and fully realize the transformative potential of 5G.

Seagate Launches SkyHawk AI 18TB Hard Drive

Seagate Technology plc, a world leader in data storage and management solutions, today announced it is shipping 18 TB SkyHawk Artificial Intelligence drives in volume. SkyHawk AI is the world's first purpose-built hard drive for artificial intelligence (AI)-enabled surveillance solutions, enabling quicker and smarter decisions. The new drive supports deep learning and machine learning workload streams for edge applications with ImagePerfectAI.

The capacity to retain more data over time is required for deep learning systems to become smarter and more accurate in their predictive analysis, and behavior analysis requires significantly more data than traditional video capture. SkyHawk AI simultaneously sustains 32 AI streams alongside 64 video streams and supports multi-bay NVR and AI-enabled NVR. SkyHawk AI offers a 550 TB/year workload rate, more than 3× the workload rate of standard surveillance hard drives, in order to manage the data deluge in complex video security system environments without sacrificing performance. The drive intelligently adapts between traditional video workloads and video+AI workloads.
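To put the 550 TB/year workload rating in perspective, spreading it evenly over a year works out to a modest average sustained data rate (a rough conversion, assuming decimal terabytes):

```python
# Rough conversion of the 550 TB/year workload rating into an average
# sustained data rate, for a sense of scale (decimal TB assumed).
tb_per_year = 550
bytes_per_year = tb_per_year * 1e12
seconds_per_year = 365 * 24 * 3600
print(f"average rate: {bytes_per_year / seconds_per_year / 1e6:.1f} MB/s")
```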

Asetek Collaborates With HPE to Deliver Next-Gen HPC Server Cooling Solutions

Asetek today announced a collaboration with Hewlett Packard Enterprise (HPE) to deliver its premium data center liquid cooling solutions in HPE Apollo Systems, which are high-performing and density-optimized to target high-performance computing (HPC) and Artificial Intelligence (AI) needs. The integration enables deployment of high wattage processors in high density configurations to support compute-intense workloads.

When developing its next-generation HPC server solutions, HPE worked closely with Asetek to define a plug and play HPC system that is integrated, installed, and serviced by HPE that serves as the ideal complement to HPE's Gen10 Plus platform. With the resulting solution, HPE is able to maximize processor and interconnect performance by efficiently cooling high density computing clusters. HPE will be deploying these DLC systems, which support warm water cooling, this calendar year.

Qualcomm Launches World's First 5G and AI-Enabled Robotics Platform

Qualcomm Technologies, Inc., today announced the Qualcomm Robotics RB5 platform - the Company's most advanced, integrated, comprehensive offering designed specifically for robotics. Building on the successful Qualcomm Robotics RB3 platform and its broad adoption in a wide array of robotics and drone products available today, the Qualcomm Robotics RB5 platform comprises an extensive set of hardware, software, and development tools.

The Qualcomm Robotics RB5 platform is the first of its kind to bring together the Company's deep expertise in 5G and AI to empower developers and manufacturers to create the next generation of high-compute, low-power robots and drones for the consumer, enterprise, defense, industrial and professional service sectors - and the comprehensive Qualcomm Robotics RB5 Development Kit helps ensure developers have the customization and flexibility they need to make their visions a commercial reality. To date, Qualcomm Technologies has engaged many leading companies that have endorsed the Qualcomm Robotics RB5 platform, including 20+ early adopters in the process of evaluating the platform.
Qualcomm Robotics RB5 Platform

Intel Showcases Intelligent Edge and Energy-efficient Performance Research

This week at the 2020 Symposia on VLSI Technology and Circuits, Intel will present a body of research and technical perspectives on the computing transformation driven by data that is increasingly distributed across the core, edge and endpoints. Chief Technology Officer Mike Mayberry will deliver a plenary keynote, "The Future of Compute: How Data Transformation is Reshaping VLSI," that highlights the importance of transitioning computing from a hardware/program-centric approach to a data/information-centric approach.

"The sheer volume of data flowing across distributed edge, network and cloud infrastructure demands energy-efficient, powerful processing to happen close to where the data is generated, but is often limited by bandwidth, memory and power resources. The research Intel Labs is showcasing at the VLSI Symposia highlights several novel approaches to more efficient computation that show promise for a range of applications - from robotics and augmented reality to machine vision and video analytics. This body of research is focused on addressing barriers to the movement and computation of data, which represent the biggest data challenges of the future," said Vivek K. De, Intel fellow and director of Circuit Technology Research, Intel Labs.

Microsoft is Replacing MSN Journalists with Artificial Intelligence

Microsoft is working on bringing the latest artificial intelligence technology everywhere it can, wherever it works. According to reports from Business Insider and the Seattle Times, Microsoft is working on terminating its contracts with journalists and replacing them with artificial intelligence software. Between Wednesday and Thursday of last week, around 50 employees were informed that their contracts will not be renewed after June 30th. The journalists in question were responsible for Microsoft's MSN web portal, which will now use machine learning (ML) models to generate its news stream. For an application like this, Microsoft is presumably utilizing its Azure infrastructure to process everything in the cloud.

One former employee said that the MSN platform has been semi-automated for some time and that this completes the automation. "Like all companies, we evaluate our business on a regular basis. This can result in increased investment in some places and, from time to time, re-deployment in others. These decisions are not the result of the current pandemic," a Microsoft spokesperson told the Seattle Times.

Dell Announces New Generation Latitude, OptiPlex, and Precision Commercial Notebooks, Desktops, and Services

Dell Technologies unveiled the world's most intelligent and secure business PCs across its award-winning Latitude, Precision and OptiPlex portfolios to make work more efficient and safe - no matter the location. As the industry's most sustainable commercial PC portfolio, the new devices further advance Dell's commitment to sustainability with recycled materials, sustainable packaging, energy efficient designs and EPEAT Gold registrations.

Professionals can work smarter with Dell Optimizer, the automated Artificial Intelligence (AI)-based optimization technology, now available across Latitude, Precision and OptiPlex devices. The built-in software learns how each person works and adapts to their behavior to help them focus on the tasks that matter most. It works behind the scenes to improve overall application performance; enable faster log-in and secure lock outs; eliminate echoes and reduce background noise on conference calls; and extend battery run time.

GLOBALFOUNDRIES Delivers Industry's First Production-ready eMRAM on 22FDX Platform

GLOBALFOUNDRIES (GF) today announced that its embedded magnetoresistive non-volatile memory (eMRAM) on the company's 22 nm FD-SOI (22FDX) platform has entered production, and GF is working with several clients on multiple production tape-outs scheduled in 2020. Today's announcement represents a significant industry milestone, demonstrating the scalability of eMRAM as a cost-effective option at advanced process nodes for Internet of Things (IoT), general-purpose microcontrollers, automotive, edge AI (Artificial Intelligence), and other low-power applications.

Designed as a replacement for high-volume embedded NOR flash (eFlash), GF's eMRAM allows designers to extend their existing IoT and microcontroller unit architectures to access the power and density benefits of technology nodes below 28 nm.

Intel Acquires Artificial Intelligence Chipmaker Habana Labs

Intel Corporation today announced that it has acquired Habana Labs, an Israel-based developer of programmable deep learning accelerators for the data center, for approximately $2 billion. The combination strengthens Intel's artificial intelligence (AI) portfolio and accelerates its efforts in the nascent, fast-growing AI silicon market, which Intel expects to be greater than $25 billion by 2024.

"This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need - from the intelligent edge to the data center," said Navin Shenoy, executive vice president and general manager of the Data Platforms Group at Intel. "More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads."

Intel's AI strategy is grounded in the belief that harnessing the power of AI to improve business outcomes requires a broad mix of technology - hardware and software - and full ecosystem support. Today, Intel AI solutions are helping customers turn data into business value and driving meaningful revenue for the company. In 2019, Intel expects to generate over $3.5 billion in AI-driven revenue, up more than 20 percent year-over-year. Together, Intel and Habana can accelerate the delivery of best-in-class AI products for the data center, addressing customers' evolving needs.

NVIDIA Leads the Edge AI Chipset Market but Competition is Intensifying: ABI Research

Diversity is the name of the game when it comes to the edge Artificial Intelligence (AI) chipset industry. In 2019, the AI industry is witnessing the continual migration of AI workloads, particularly AI inference, to edge devices, including on-premise servers, gateways, and end-devices and sensors. Based on the AI development in 17 vertical markets, ABI Research, a global tech market advisory firm, estimates that the edge AI chipset market will grow from US $2.6 billion in 2019 to US $7.6 billion by 2024, with no vendor commanding more than 40% of the market.

The frontrunner of this market is NVIDIA, with a 39% revenue share in the first half of 2019. The GPU vendor has a strong presence in key AI verticals that are currently leading in AI deployments, such as automotive, camera systems, robotics, and smart manufacturing. "In the face of different use cases, NVIDIA chooses to release GPU chipsets with different computational and power budgets. In combination with its large developer ecosystem and partnerships with academic and research institutions, the chipset vendor has developed a strong foothold in the edge AI industry," said Lian Jye Su, Principal Analyst at ABI Research.

NVIDIA is facing stiff competition from Intel with its comprehensive chipset portfolio, from Xeon CPU to Mobileye and Movidius Myriad. At the same time, FPGA vendors, such as Xilinx, QuickLogic, and Lattice Semiconductor, are creating compelling solutions for industrial AI applications. One missing vertical from NVIDIA's wide footprint is consumer electronics, specifically smartphones. In recent years, AI processing in smartphones has been driven by smartphone chipset manufacturers and smartphone vendors, such as Qualcomm, Huawei, and Apple. In smart home applications, MediaTek and Amlogic are making their presence known through the widespread adoption of voice control front ends and smart appliances.

Compute Express Link Consortium (CXL) Officially Incorporates

Today, Alibaba, Cisco, Dell EMC, Facebook, Google, Hewlett Packard Enterprise, Huawei, Intel Corporation, and Microsoft announced the incorporation of the Compute Express Link (CXL) Consortium and unveiled the newly elected members of its Board of Directors. The core group of key industry partners announced their intent to incorporate in March 2019 and remains dedicated to advancing the CXL standard, a new high-speed CPU-to-device and CPU-to-memory interconnect that accelerates next-generation data center performance.

The five new CXL board members are as follows: Steve Fields, Fellow and Chief Engineer of Power Systems, IBM; Gaurav Singh, Corporate Vice President, Xilinx; Dong Wei, Standards Architect and Fellow at ARM Holdings; Nathan Kalyanasundharam, Senior Fellow at AMD Semiconductor; and Larrie Carr, Fellow, Technical Strategy and Architecture, Data Center Solutions, Microchip Technology Inc.

Intel's CEO Blames 10 nm Delay on being "Too Aggressive"

During Fortune's Brainstorm Tech conference in Aspen, Colorado, Intel's CEO Bob Swan took the stage and talked about where Intel is now, where it is headed, and how the company plans to evolve. Particular focus was put on how Intel became "data centric" from "PC centric," and the struggles it encountered along the way.

However, when asked about the demise of Moore's Law, Swan detailed the aggressiveness with which Intel approached the challenge. Instead of the regular two-fold improvement in transistor density every two years, Swan said that Intel has always targeted even greater densities so that it would stay the leader in the business.

SHERPA Consortium: If AI Could Feel, it Would Fear Cyber-attacks from People

Artificial intelligence (AI) is rapidly finding applications in nearly every walk of life. Self-driving cars, social media networks, cybersecurity companies, and everything in between use it. But a new report published by the SHERPA consortium - an EU project studying the impact of AI on ethics and human rights that F-Secure joined in 2018 - finds that while human attackers have access to machine learning techniques, they currently focus most of their efforts on manipulating existing AI systems for malicious purposes instead of creating new attacks that use machine learning.

The study's primary focus is on how malicious actors can abuse AI, machine learning, and smart information systems. The researchers identify a variety of potentially malicious uses for AI that are well within reach of today's attackers, including the creation of sophisticated disinformation and social engineering campaigns.