News Posts matching #ML


Open Compute Project Foundation and JEDEC Announce a New Collaboration

Today, the Open Compute Project Foundation (OCP), the nonprofit organization bringing hyperscale innovations to all, and JEDEC Solid State Technology Association, the global leader in the development of standards for the microelectronics industry, announce a new collaboration to establish a framework for the transfer of technology captured in an OCP-approved specification to JEDEC for inclusion in one of its standards. This alliance brings together members from both the OCP and JEDEC communities to share efforts in developing and maintaining global standards needed to advance the electronics industry.

Under this new alliance, the initial effort will provide a mechanism to standardize Chiplet part descriptions by leveraging the OCP Chiplet Data Extensible Markup Language (CDXML) specification, which will become part of JEDEC JEP30: Part Model Guidelines for use with today's EDA tools. With this updated JEDEC standard, expected to be published in 2023, Chiplet builders will be able to electronically provide a standardized Chiplet part description to their customers, paving the way for automating System in Package (SiP) design and build using Chiplets. The description will include information needed by SiP builders, such as Chiplet thermal properties, physical and mechanical requirements, behavior specifications, power and signal integrity properties, in-package test requirements, and security parameters.
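
To make the idea concrete, a machine-readable part description of this kind could be consumed by SiP design tools roughly as sketched below. Note that the element and attribute names here are invented for illustration; they are not the actual CDXML or JEP30 schema, which is not reproduced in this article.

```python
import xml.etree.ElementTree as ET

# Hypothetical CDXML-style chiplet part description. The tag and
# attribute names are illustrative only, not the real CDXML/JEP30 schema.
CDXML_SAMPLE = """\
<chiplet name="example-phy" vendor="ExampleCorp">
  <thermal max_junction_c="105" tdp_w="4.5"/>
  <physical width_mm="3.2" height_mm="2.1" bump_pitch_um="45"/>
  <power vdd_nominal_v="0.75"/>
</chiplet>
"""

def load_chiplet(xml_text):
    """Flatten a chiplet description into a dict an SiP design tool could consume."""
    root = ET.fromstring(xml_text)
    part = {"name": root.get("name"), "vendor": root.get("vendor")}
    for section in root:  # thermal, physical, power, ...
        for key, value in section.attrib.items():
            part[f"{section.tag}.{key}"] = value
    return part

print(load_chiplet(CDXML_SAMPLE)["thermal.tdp_w"])  # -> 4.5
```

The point of the standard is exactly this kind of automation: every vendor's part description parses the same way, so SiP assembly tools need no per-vendor import code.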

NVIDIA Could Release AI-Optimized Drivers, Improving Overall Performance

NVIDIA is already using artificial intelligence to design and develop parts of its chips, as we have seen in the past, making optimization much more efficient. Today, however, a new rumor suggests that NVIDIA will use AI to optimize its driver performance to a degree that a human workforce cannot. According to CapFrameX, NVIDIA is allegedly preparing special drivers with optimizations done by AI algorithms. The source claims an average improvement of around 10%, with up to 30% in best-case scenarios. Presumably, AI can optimize on two fronts: shader compilation and per-game optimization, or power management, which includes clocks, voltages, and boost frequency curves.

It is still unclear which of these aspects the company's AI would optimize; given the drastic improvement expected, it could be a combination of the two. Special tuning of code for more efficient execution and a better power/frequency curve would lift efficiency a notch above current releases. We have already seen AI solve problems of this kind: last year, NVIDIA's PrefixRL model produced circuit designs up to 25% smaller. It also remains to be seen which cards NVIDIA plans to target; we can only assume that the latest-generation GeForce RTX 40 series will be the goal if the project becomes public in Q1 of this year.
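
As a toy illustration of the power-management side of such tuning, the problem can be framed as picking the boost point on a voltage/frequency curve that maximizes performance per watt while still meeting a performance floor. Every number and the simple power model below are invented for illustration; real driver tuning is far more involved.

```python
# Toy sketch: choose a (clock, voltage) boost point that maximizes
# performance per watt subject to a minimum frame-rate target.
# All constants are invented for illustration.

def dynamic_power_w(clock_mhz, voltage_v, k=1.0e-7):
    # Classic CMOS approximation: dynamic power ~ k * f * V^2.
    return k * clock_mhz * 1e6 * voltage_v ** 2

def pick_boost_point(points, perf_per_mhz=0.06, target_fps=60.0):
    best = None
    for clock, volt in points:
        fps = perf_per_mhz * clock          # assume FPS scales with clock
        if fps < target_fps:
            continue                        # violates the performance floor
        eff = fps / dynamic_power_w(clock, volt)
        if best is None or eff > best[3]:
            best = (clock, volt, fps, eff)
    return best

# Hypothetical voltage/frequency curve (MHz, volts).
curve = [(1500, 0.80), (1800, 0.90), (2100, 1.00), (2400, 1.10)]
print(pick_boost_point(curve)[:2])  # -> (1500, 0.8): most efficient point hitting 60 FPS
```

Raising the target (say, `target_fps=100.0`) pushes the chosen point up the curve; the trade-off between frame-rate floor and efficiency is exactly the kind of multi-variable search that lends itself to ML-driven optimization.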

AMD Shows Instinct MI300 Exascale APU with 146 Billion Transistors

During its CES 2023 keynote, AMD announced its latest Instinct MI300 APU, a first of its kind in the data center world. Combining the CPU, GPU, and memory elements in a single package eliminates the latency imposed by the long distances data must travel from CPU to memory and from CPU to GPU across the PCIe connector. Besides reducing latency, less power is needed to move the data, yielding greater efficiency. The Instinct MI300 features 24 Zen 4 cores with simultaneous multi-threading enabled, CDNA3 GPU IP, and 128 GB of HBM3 memory on a single package. The memory bus is 8192 bits wide, providing unified memory access for CPU and GPU cores. CXL 3.0 is also supported, making cache-coherent interconnects a reality.

The Instinct MI300 APU package is an engineering marvel of its own, with advanced chiplet techniques used. AMD managed to do 3D stacking: nine 5 nm logic chiplets are stacked on top of four 6 nm chiplets, with HBM surrounding them. All of this brings the transistor count up to 146 billion, representing the sheer complexity of such a design. For performance figures, AMD provided a comparison to the Instinct MI250X GPU: in raw AI performance, the MI300 claims an 8x improvement over the MI250X, while the performance-per-watt gain is a smaller 5x. While we do not know which benchmark applications were used, standard benchmarks like MLPerf are a likely candidate. For availability, AMD targets the end of 2023, when the "El Capitan" exascale supercomputer using these Instinct MI300 APU accelerators will arrive. Pricing is unknown and will be unveiled to enterprise customers first, around launch.
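
The two quoted figures jointly imply a power increase, since performance per watt is performance divided by power. A quick check of the arithmetic:

```python
# perf/watt = perf / power, so power_ratio = perf_ratio / perf_per_watt_ratio.
perf_ratio = 8.0           # MI300 vs MI250X raw AI performance (AMD's figure)
perf_per_watt_ratio = 5.0  # MI300 vs MI250X efficiency (AMD's figure)

power_ratio = perf_ratio / perf_per_watt_ratio
print(power_ratio)  # -> 1.6, i.e. the MI300 would draw ~1.6x the power of the MI250X
```

In other words, the "reduced" 5x efficiency figure simply reflects that the 8x performance comes at roughly 1.6 times the power draw.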

DPVR E4 Announced with November Launch, Aims to Dominate the Consumer Market for Tethered PC VR Headsets

DPVR, a leading provider of virtual reality (VR) devices, today announced its newest PC VR gaming headset, the 'DPVR E4,' which is aimed at dominating the consumer market for tethered PC VR headsets. In a different category altogether from standalone VR headsets such as the Meta Quest 2 and Pico 4, the DPVR E4 offers PC VR gamers a tethered alternative with a wider field of view (FoV) in a more compact, lighter form factor, at a more affordable price than high-end devices such as the VIVE Pro 2.

DPVR has been making VR headsets for seven years. Prior to E4's launch, the company's efforts were primarily directed towards the B2B market, with a specialized focus on the education and medical industries. Over the last decade, DPVR has completed three successful funding rounds, which the company has used for its research and development efforts into furthering its VR hardware and software offerings. This latest announcement from DPVR marks the company's first step into the consumer VR headset market.

Cerebras Unveils Andromeda, a 13.5 Million Core AI Supercomputer that Delivers Near-Perfect Linear Scaling for Large Language Models

Cerebras Systems, the pioneer in accelerating artificial intelligence (AI) compute, today unveiled Andromeda, a 13.5 million core AI supercomputer, now available and being used for commercial and academic work. Built with a cluster of 16 Cerebras CS-2 systems and leveraging Cerebras MemoryX and SwarmX technologies, Andromeda delivers more than 1 Exaflop of AI compute and 120 Petaflops of dense compute at 16-bit half precision. It is the only AI supercomputer to ever demonstrate near-perfect linear scaling on large language model workloads relying on simple data parallelism alone.

With more than 13.5 million AI-optimized compute cores and fed by 18,176 3rd Gen AMD EPYC processors, Andromeda features more cores than 1,953 Nvidia A100 GPUs and 1.6 times as many cores as the largest supercomputer in the world, Frontier, which has 8.7 million cores. Unlike any known GPU-based cluster, Andromeda delivers near-perfect scaling via simple data parallelism across GPT-class large language models, including GPT-3, GPT-J and GPT-NeoX.
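
The data parallelism the scaling claim relies on is conceptually simple: each worker computes gradients on its own shard of the batch, the gradients are averaged, and one shared weight update is applied. The pure-Python sketch below illustrates the scheme on a toy 1-D least-squares model; a real cluster does the averaging with collective operations across accelerators, and all numbers here are invented.

```python
# Minimal data-parallel training sketch: per-shard gradients are
# averaged before a single shared weight update.

def worker_gradient(weights, shard):
    # Toy gradient of sum((w*x - y)^2) for a 1-D model y = w * x.
    w = weights[0]
    g = sum(2 * (w * x - y) * x for x, y in shard) / len(shard)
    return [g]

def data_parallel_step(weights, shards, lr=0.01):
    # In practice each shard's gradient is computed on its own device.
    grads = [worker_gradient(weights, s) for s in shards]
    avg = [sum(g[i] for g in grads) / len(grads) for i in range(len(weights))]
    return [w - lr * a for w, a in zip(weights, avg)]

# Data generated by w_true = 3, split across 4 simulated workers.
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]

weights = [0.0]
for _ in range(200):
    weights = data_parallel_step(weights, shards)
print(round(weights[0], 3))  # -> 3.0: converges to the true weight
```

Because each step only exchanges gradients, adding workers splits the data without changing the update, which is why this scheme can scale close to linearly when communication keeps up.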

Intel Delivers Leading AI Performance Results on MLPerf v2.1 Industry Benchmark for DL Training

Today, MLCommons published results of its industry AI performance benchmark in which both the 4th Generation Intel Xeon Scalable processor (code-named Sapphire Rapids) and Habana Gaudi 2 dedicated deep learning accelerator logged impressive training results.


"I'm proud of our team's continued progress since we last submitted leadership results on MLPerf in June. Intel's 4th gen Xeon Scalable processor and Gaudi 2 AI accelerator support a wide array of AI functions and deliver leadership performance for customers who require deep learning training and large-scale workloads," said Sandra Rivera, Intel executive vice president and general manager of the Datacenter and AI Group.

Rescale Teams with NVIDIA to Unite HPC and AI for Optimized Engineering in the Cloud

Rescale, the leader in high performance computing built for the cloud to accelerate engineering innovation, today announced it is teaming with NVIDIA to integrate the NVIDIA AI platform into Rescale's HPC-as-a-Service offering. The integration is designed to advance computational engineering simulation with AI and machine learning, helping enterprises commercialize new product innovations faster, more efficiently and at less cost.

Additionally, Rescale announced the world's first Compute Recommendation Engine (CRE) to power Intelligent Computing for HPC and AI workloads. Optimizing workload performance can be prohibitively complex as organizations seek to balance decisions among architectures, geographic regions, price points, scalability, service levels, compliance, and sustainability objectives. Developed using machine learning on NVIDIA architectures with infrastructure telemetry, industry benchmarks, and full-stack metadata spanning over 100 million production HPC workloads, Rescale CRE provides customers unprecedented insight to optimize overall performance.

ASUS Announces AMD EPYC 9004-Powered Rack Servers and Liquid-Cooling Solutions

ASUS, a leading provider of server systems, server motherboards and workstations, today announced new best-in-class server solutions powered by the latest AMD EPYC 9004 Series processors. ASUS also launched superior liquid-cooling solutions that dramatically improve the data-center power-usage effectiveness (PUE).

The breakthrough thermal design in this new generation delivers superior power and thermal capabilities to support class-leading features, including up to 400-watt CPUs, up to 350-watt GPUs, and 400 Gbps networking. All ASUS liquid-cooling solutions will be demonstrated in the ASUS booth (number 3816) at SC22 from November 14-17, 2022, at Kay Bailey Hutchison Convention Center in Dallas, Texas.

Cincoze's Embedded Computer DV-1000 Plays a Key Role in Predictive Maintenance

With the evolution of IIoT and AI technology, more and more manufacturing industries are introducing predictive maintenance to collect equipment data on-site. Predictive maintenance gathers machine data and uses AI and machine learning (ML) to analyze it, avoiding manual decision-making and strengthening automation. This reduces the average cost and downtime of maintenance and increases the accuracy of machine downtime predictions; these factors extend machine lifetime, enhance efficiency, and in turn increase profits. Cincoze's Rugged Computing - DIAMOND product line includes the high-performance DV-1000 embedded computer series, which integrates high-performance computing and flexible expansion capabilities into a small, compact body. The system can process large amounts of data in real time for data analytics, making it an ideal system for bringing predictive maintenance to industrial sites with challenging conditions.

Performance is the top priority. It lies at the core of predictive maintenance, where data from different devices must be collected, processed, and analyzed in real time and then connected to a cloud platform. The DV-1000 series offers high-performance computing, supporting 9th/8th Gen Intel Core i7/i5/i3 (Coffee Lake Refresh, S series) processors and up to 32 GB of DDR4-2666 memory. Storage options include a 2.5" SATA HDD/SSD tray, two mSATA slots, and one M.2 Key M 2280 slot for high-speed NVMe SSDs. The DV-1000 meets the performance requirements for multi-tasking and on-site data analytics in smart manufacturing and can also be used in fields such as machine vision and railway transportation.
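
The on-site analytics such a box runs often boil down to flagging sensor readings that drift beyond a rolling statistical baseline, so a technician can intervene before failure. The sketch below shows the idea with a simple k-sigma rule; the window size, threshold, and sensor trace are all invented for illustration.

```python
# Toy predictive-maintenance check: flag a reading as anomalous when
# it falls more than k standard deviations from a rolling baseline.
from collections import deque
from statistics import mean, pstdev

def detect_anomalies(readings, window=8, k=3.0):
    baseline = deque(maxlen=window)
    alerts = []
    for i, value in enumerate(readings):
        if len(baseline) == baseline.maxlen:
            mu, sigma = mean(baseline), pstdev(baseline)
            if sigma > 0 and abs(value - mu) > k * sigma:
                alerts.append(i)  # candidate failure precursor
        baseline.append(value)
    return alerts

# Vibration-like sensor trace with one spike injected at index 12.
trace = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1,
         0.9, 1.0, 1.05, 0.95, 5.0, 1.0]
print(detect_anomalies(trace))  # -> [12]
```

Production systems replace the k-sigma rule with trained ML models, but the pipeline shape (collect, window, score, alert) is the same, which is why sustained real-time throughput is the binding requirement.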

ASML Reports €5.4 Billion Net Sales and €1.4 Billion Net Income in Q2 2022

Today ASML Holding NV (ASML) published its 2022 second-quarter results: Q2 net sales of €5.4 billion, a gross margin of 49.1%, and net income of €1.4 billion, with record quarterly net bookings of €8.5 billion. ASML expects Q3 2022 net sales between €5.1 billion and €5.4 billion and a gross margin between 49% and 50%, with expected sales growth for the full year of around 10%.

The value of fast shipments* in 2022 leading to delayed revenue recognition into 2023 is expected to increase from around €1 billion to around €2.8 billion.
"Our second-quarter net sales came in at €5.4 billion with a gross margin of 49.1%. Demand from our customers remains very strong, as reflected by record net bookings in the second quarter of €8.5 billion, including €5.4 billion from 0.33 NA and 0.55 NA EUV systems as well as strong DUV bookings."

Ayar Labs Partners with NVIDIA to Deliver Light-Based Interconnect for AI Architectures

Ayar Labs, the leader in chip-to-chip optical connectivity, is working with NVIDIA to develop groundbreaking artificial intelligence (AI) infrastructure based on optical I/O technology to meet the future demands of AI and high performance computing (HPC) workloads. The collaboration will focus on integrating Ayar Labs' technology to develop scale-out architectures enabled by high-bandwidth, low-latency and ultra-low-power optical-based interconnects for future NVIDIA products. Together, the companies plan to accelerate the development and adoption of optical I/O technology to support the explosive growth of AI and machine learning (ML) applications and data volumes.

Optical I/O uniquely changes the performance and power trajectories of system designs by enabling compute, memory and networking ASICs to communicate with dramatically increased bandwidth, at lower latency, over longer distances and at a fraction of the power of existing electrical I/O solutions. The technology is also foundational to enabling emerging heterogeneous compute systems, disaggregated/pooled designs, and unified memory architectures that are critical to accelerating future data center innovation.

HPE Builds Supercomputer Factory in the Czech Republic

Hewlett Packard Enterprise (NYSE: HPE) today announced its ongoing commitment in Europe by building its first factory in the region for next-generation high performance computing (HPC) and artificial intelligence (AI) systems to accelerate delivery to customers and strengthen the region's supplier ecosystem. The new site will manufacture HPE's industry-leading systems as custom-designed solutions to advance scientific research, mature AI/ML initiatives, and bolster innovation.

The dedicated HPC factory, which will become the fourth of HPE's global HPC sites, will be located in Kutná Hora, Czech Republic, next to HPE's existing European site for manufacturing its industry-standard servers and storage solutions. Operations will begin in summer 2022.

GrAI Matter Labs Unveils Sparsity-Native AI SoC

GrAI Matter Labs, a pioneer of brain-inspired ultra-low latency computing, announced today that it will be unveiling GrAI VIP, a full-stack AI system-on-chip platform, to partners and customers at GLOBAL INDUSTRIE, May 17th-20th, 2022. At GLOBAL INDUSTRIE, GML will demonstrate a live event-based, brain-inspired computing solution for purpose-built, efficient inference in a real-world application of robotics using the Life-Ready GrAI VIP chip. GrAI VIP is an industry-first near-sensor AI solution with 16-bit floating-point capability that achieves best-in-class performance with a low-power envelope. It opens up unparalleled applications that rely on understanding and transformations of signals produced by a multitude of sensors at the edge in Robotics, AR/VR, Smart Homes, Infotainment in automobiles and more.

"GrAI VIP is ready to deliver Life-Ready AI to industrial automation applications and revolutionize systems such as pick & place robots, cobots, and warehouse robots, as demonstrated at the show," said Ingolf Held, CEO of GrAI Matter Labs. "GrAI Matter Labs has a pipeline of over $1 million in pre-orders, and we are thrilled to enable our early-access partners and customers in industrial automation, consumer electronics, defence and more, with our GrAI VIP M.2 cards sampling today." "GML is targeting the $1 billion+ fast-growing market (20%+ per year) of endpoint AI with a unique approach backed by innovative technology," said Karl Freund, Founder and Principal Analyst at Cambrian-AI Research. "GML's 'Life-Ready' AI provides solutions that were heretofore simply impossible at such a low footprint and power." AI application developers looking for high-fidelity, low-latency responses for their edge algorithms can now get early access to the GrAI VIP platform and drive game-changing products in industrial automation, consumer electronics, and more.

Google Uses Artificial Intelligence to Develop Faster and Smaller Hardware Accelerators

Designing artificial intelligence / machine learning hardware accelerators takes effort from hardware engineers working in conjunction with scientists in the AI/ML field itself. A few years ago, we started seeing AI incorporated into parts of electronic design automation (EDA) software tools, helping chip designers speed up the process of creating hardware. What we were used to seeing AI do were just a couple of things, like placement and routing - and having those automated is a huge deal. However, the power of AI for chip design is not going to stop there. Researchers at Google and UC Berkeley have developed a research project in which AI helps design AI-tailored accelerators that are smaller and faster than anything designed by humans.

In the published paper, the researchers present PRIME, a framework that creates AI processors based on a database of blueprints. PRIME feeds off an offline database containing accelerator designs and their corresponding performance metrics (e.g., latency, power) to design next-generation hardware accelerators. According to Google, PRIME can do so without further hardware simulation and produces processors ready for use. As per the paper, PRIME improves performance over state-of-the-art simulation-driven methods by as much as 1.2x-1.5x, while reducing the required total simulation time by 93% and 99%, respectively. The framework is also capable of architecting accelerators for unseen applications.
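
The offline idea can be sketched in a few lines: build a predictor from logged (design, performance) pairs, then rank new candidate designs without ever invoking the simulator again. PRIME itself learns a conservative neural surrogate; here a simple nearest-neighbor lookup stands in for that model, and the database entries are invented for illustration.

```python
# Simplified sketch of offline, data-driven accelerator design:
# rank candidates using a surrogate fit to logged results instead of
# running new hardware simulations. A 1-nearest-neighbor lookup stands
# in for PRIME's learned model; all numbers are invented.

def nearest_neighbor_latency(logged, candidate):
    """Predict a candidate design's latency from the closest logged design."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, latency = min(logged, key=lambda rec: dist(rec[0], candidate))
    return latency

# Logged database: ((num_processing_elements, buffer_kb), measured_latency_ms).
database = [
    ((16, 256), 9.0),
    ((32, 256), 6.5),
    ((32, 512), 5.1),
    ((64, 512), 4.8),
]

candidates = [(24, 256), (48, 512), (64, 256)]
best = min(candidates, key=lambda c: nearest_neighbor_latency(database, c))
print(best)  # -> (48, 512): lowest predicted latency among the candidates
```

The 93-99% simulation-time savings reported in the paper come precisely from this substitution: scoring a candidate against a model is vastly cheaper than simulating it.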

Marvell Introduces Industry's First 800G Multimode Electro-Optics Platform for Cloud Data Centers

Marvell (NASDAQ: MRVL) today announced the industry's first 800 Gbps (8x 100 Gbps) multimode platform solution that enables data center infrastructure to achieve dramatically higher speeds for short-reach optical modules and Active Optical Cable (AOC) applications. As artificial intelligence (AI), machine learning (ML) and high-performance computing (HPC) applications continue to drive greater bandwidth requirements, cloud-optimized solutions are needed that bring lower power, latency and cost to short-range data center interconnections. The new 800G platform, which includes Marvell's PAM4 DSP with a multimode transimpedance amplifier (TIA) and driver, enables faster data center speeds scaling to 800 Gbps using conventional, cost-effective vertical-cavity surface-emitting laser (VCSEL) technology while accelerating time-to-market with plug-and-play deployment.
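
The lane arithmetic behind the headline number is straightforward: PAM4 encodes two bits per symbol, so a 100 Gbps lane runs at roughly half that symbol rate, and eight lanes aggregate to 800 Gbps. The sketch below ignores FEC and encoding overhead, which reduce usable throughput somewhat in practice.

```python
# PAM4 lane math: 4 amplitude levels -> 2 bits per symbol, so a
# 100 Gbps lane signals at ~50 GBd; eight lanes give 800 Gbps.
# FEC/encoding overhead is ignored in this sketch.
import math

bits_per_symbol = math.log2(4)   # PAM4 uses 4 amplitude levels
lane_rate_gbps = 100
num_lanes = 8

symbol_rate_gbd = lane_rate_gbps / bits_per_symbol
total_gbps = num_lanes * lane_rate_gbps
print(symbol_rate_gbd, total_gbps)  # -> 50.0 800
```

Halving the symbol rate relative to simple NRZ signaling is what lets cost-effective VCSELs and multimode fiber keep up at these aggregate speeds.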

Today's data centers are packed with equipment utilizing optical modules or AOCs connected by multimode optical fiber optimized for communication over short distances within data centers. This 100G-per-lane multimode fiber provides cost-effective, low-power, short-reach connectivity. To support multi-gigabit transmissions, multimode architectures often use VCSEL transmitters, which offer reliability, power efficiency, and easy deployment at low cost.

Ceremorphic Exits Stealth Mode; Unveils Technology Plans to Deliver a New Architecture Specifically Designed for Reliable Performance Computing

Armed with more than 100 patents and leveraging multi-decade expertise in creating industry-leading silicon systems, Ceremorphic Inc. today announced its plans to deliver a complete silicon system that provides the performance needed for next-generation applications such as AI model training, HPC, automotive processing, drug discovery, and metaverse processing. Designed in advanced silicon geometry (TSMC 5 nm node), this new architecture was built from the ground up to solve today's high-performance computing problems in reliability, security and energy consumption to serve all performance-demanding market segments.

Ceremorphic was founded in April 2020 by industry veteran Dr. Venkat Mattela, the founding CEO of Redpine Signals, which sold its wireless assets to Silicon Labs, Inc. in March 2020 for $308 million. Under his leadership, the team at Redpine Signals delivered breakthrough innovations and industry-first products, including an ultra-low-power wireless solution that outperformed products from industry giants in the wireless space by as much as 26 times on energy consumption. Ceremorphic combines its own patented ThreadArch multi-thread processor technology with cutting-edge new technology developed by its silicon, algorithm and software engineers. This team is leveraging its deep expertise and patented technology to design an ultra-low-power training supercomputing chip.

congatec launches 10 new COM-HPC and COM Express Computer-on-Modules with 12th Gen Intel Core processors

congatec - a leading vendor of embedded and edge computing technology - introduces the 12th Generation Intel Core mobile and desktop processors (formerly code named Alder Lake) on 10 new COM-HPC and COM Express Computer-on-Modules. Featuring the latest high performance cores from Intel, the new modules in COM-HPC Size A and C as well as COM Express Type 6 form factors offer major performance gains and improvements for the world of embedded and edge computing systems. Most impressive is the fact that engineers can now leverage Intel's innovative performance hybrid architecture. Offering up to 14 cores/20 threads on BGA and 16 cores/24 threads on desktop (LGA-mounted) variants, 12th Gen Intel Core processors provide a quantum leap [1] in multitasking and scalability levels. Next-gen IoT and edge applications benefit from up to 6 or 8 (BGA/LGA) optimized Performance-cores (P-cores) plus up to 8 low power Efficient-cores (E-cores) and DDR5 memory support to accelerate multithreaded applications and execute background tasks more efficiently.

Extreme CPU Cooling with Your Own Digital Dashboard - CORSAIR Launches ELITE LCD CPU Coolers

CORSAIR, a world leader in enthusiast components for gamers, creators, and PC builders, today announced new, highly customizable additions to its ELITE line of all-in-one CPU coolers: iCUE ELITE LCD Display Liquid CPU Coolers. With a vivid 2.1" LCD screen on the pump head to display anything from system vitals to animated GIFs, ELITE LCD coolers offer a unique window into both your PC's performance and your own style and personality. The ultra-bright LCD screen is also available as an upgrade kit for CORSAIR iCUE ELITE CAPELLIX coolers, letting you add a digital dashboard to your existing cooler.

The new H100i ELITE LCD, H150i ELITE LCD, and H170i ELITE LCD are also equipped with new ML RGB ELITE Series fans, delivering powerful concentrated airflow with the performance of magnetic levitation bearings and AirGuide technology, illuminated by eight individually addressable RGB LEDs per fan. ML RGB ELITE fans are also available separately in both 120 mm and 140 mm sizes and either black or white frames, so you can take advantage of their high performance to cool your entire system as well.

Kubuntu Focus Team Announces the 3rd Gen M2 Linux Mobile Workstation

The Kubuntu Focus Team announces the availability of the third-generation M2 Linux mobile workstation with multiple performance enhancements. RTX 3080 and RTX 3070 models are in stock now. RTX 3060 models can be reserved now and ship in the first week of November. The thin-and-light M2 laptop is a superb choice for anyone looking for the best out-of-the-box Linux experience with the most powerful mobile hardware. Customers include ML scientists, developers, and creators. Improvements to the third generation include:
  • Cooler and faster Intel 11th generation Core i7-11800H. Geekbench scores improve by 19% and 29%.
  • Double the iGPU performance with Intel Iris Xe 32EU Graphics.
  • Increased RAM speed from 2933 to 3200 MHz, up to 64 GB dual-channel.
  • BIOS switchable USB-C GPU output.
  • Upgrade from Thunderbolt 3 to version 4.

NVIDIA Announces Financial Results for Second Quarter Fiscal 2022

NVIDIA (NASDAQ: NVDA) today reported record revenue for the second quarter ended August 1, 2021, of $6.51 billion, up 68 percent from a year earlier and up 15 percent from the previous quarter, with record revenue from the company's Gaming, Data Center and Professional Visualization platforms. GAAP earnings per diluted share for the quarter were $0.94, up 276 percent from a year ago and up 24 percent from the previous quarter. Non-GAAP earnings per diluted share were $1.04, up 89 percent from a year ago and up 14 percent from the previous quarter.

"NVIDIA's pioneering work in accelerated computing continues to advance graphics, scientific computing and AI," said Jensen Huang, founder and CEO of NVIDIA. "Enabled by the NVIDIA platform, developers are creating the most impactful technologies of our time - from natural language understanding and recommender systems, to autonomous vehicles and logistic centers, to digital biology and climate science, to metaverse worlds that obey the laws of physics.

Google Teases Upcoming Custom Tensor Processor in Pixel 6

In 2016, we launched the first Pixel. Our goal was to give people a more helpful, smarter phone. Over the years, we introduced features like HDR+ and Night Sight, which used artificial intelligence (AI) to create beautiful images with computational photography. In later years, we applied powerful speech recognition models to build Recorder, which can record, transcribe and search for audio clips, all on device.

AI is the future of our innovation work, but the problem is we've run into computing limitations that prevented us from fully pursuing our mission. So we set about building a technology platform built for mobile that enabled us to bring our most innovative AI and machine learning (ML) to our Pixel users. We set out to make our own System on a Chip (SoC) to power Pixel 6. And now, years later, it's almost here. Tensor is our first custom-built SoC specifically for Pixel phones, and it will power the Pixel 6 and Pixel 6 Pro later this fall.

Microsoft Seemingly Looking to Develop AI-based Upscaling Tech via DirectML

Microsoft seems to be throwing its hat into the image-upscaling battle currently raging between NVIDIA and AMD. The company has added two new job openings to its careers page: one for a Senior Software Engineer and another for a Principal Software Engineer for Graphics. These openings would be quite innocent by themselves; however, once we cut through the chaff, it becomes clear that the Senior Software Engineer is expected to "implement machine learning algorithms in graphics software to delight millions of gamers," while working closely with "partners" to develop software for "future machine learning hardware" - partners here could be first-party studios or even the hardware providers themselves (read: AMD). AMD itself touted a DirectML upscaling solution back when it first introduced its FidelityFX program - and FSR clearly isn't it.

It is interesting that Microsoft posted these job openings on June 30th - a few days after AMD revealed its FidelityFX Super Resolution (FSR) solution for all graphics cards, which Microsoft itself confirmed would be implemented in the Xbox product stack where applicable. Of course, the availability of one solution does not mean companies should rest on their laurels: AMD is surely at work improving FSR as we speak, and Microsoft has seen the advantages of a pure ML-powered image upscaling solution thanks to NVIDIA's DLSS. Whether Microsoft's DirectML solution will improve on DLSS as it exists at the time of launch (if ever) is, of course, unknowable at this point.

SiFive Performance P550 Core Sets New Standard as Highest Performance RISC-V Processor IP

SiFive, Inc., the industry leader in RISC-V processors and silicon solutions, today announced the new SiFive Performance family of processors. The family debuts with two new processor cores: the P270, SiFive's first Linux-capable processor with full support for the RISC-V vector extension v1.0-rc, and the SiFive Performance P550, SiFive's highest-performance processor to date. The new P550 delivers a SPECint 2006 score of 8.65/GHz, making it the highest-performance RISC-V processor available today, comparable to existing proprietary solutions in the application processor space.

"SiFive Performance is a significant milestone in our commitment to deliver a complete, scalable portfolio of RISC-V cores to customers in all markets who are at the vanguard of SOC design and are dissatisfied with the status quo," said Dr. Yunsup Lee, Co-Founder and CTO of SiFive. "These two new products cover new performance points and a wide range of application areas, from efficient vector processors that easily displace yesterday's SIMD architectures, to the bleeding edge that the P550 represents. SiFive is proud to set the standard for RISC-V processing and is ready to deliver these products to customers today."

Intel Collaborates with Microsoft against Cryptojacking

Starting today, Microsoft Defender for Endpoint expands its use of Intel Threat Detection Technology (Intel TDT) beyond accelerated memory scanning to central processing unit (CPU) based cryptomining detection using machine learning (ML). This move further accelerates endpoint detection and response for millions of customers without compromising the user experience.

"This is a true inflection point for the security industry as well as our SMB, mid-market and enterprise customers that have rapidly adopted Windows 10 with built-in endpoint protections. Customers who choose Intel vPro with the exclusive Intel Hardware Shield now gain full-stack visibility to detect threats out of the box with no need for IT configuration. The scale of this CPU-based threat detection rollout across customer systems is unmatched and helps close gaps in corporate defenses," said Michael Nordquist, senior director of Strategic Planning and Architecture in the Business Client Group at Intel.

Apple Announces New Line of MacBooks and Mac Minis Powered by M1

On a momentous day for the Mac, Apple today introduced a new MacBook Air, 13-inch MacBook Pro, and Mac mini powered by the revolutionary M1, the first in a family of chips designed by Apple specifically for the Mac. By far the most powerful chip Apple has ever made, M1 transforms the Mac experience. With its industry-leading performance per watt, together with macOS Big Sur, M1 delivers up to 3.5x faster CPU, up to 6x faster GPU, up to 15x faster machine learning (ML) capabilities, and battery life up to 2x longer than before. And with M1 and Big Sur, users get access to the biggest collection of apps ever for Mac. With amazing performance and remarkable new features, the new lineup of M1-powered Macs are an incredible value, and all are available to order today.

"The introduction of three new Macs featuring Apple's breakthrough M1 chip represents a bold change that was years in the making, and marks a truly historic day for the Mac and for Apple," said Tim Cook, Apple's CEO. "M1 is by far the most powerful chip we've ever created, and combined with Big Sur, delivers mind-blowing performance, extraordinary battery life, and access to more software and apps than ever before. We can't wait for our customers to experience this new generation of Mac, and we have no doubt it will help them continue to change the world."