News Posts matching #AI

PALIT Announces GeForce RTX 3090, 3080, 3070 GamingPro and GameRock Series

Palit Microsystems Ltd., the leading graphics card manufacturer, today launched the GeForce RTX 3090, RTX 3080, and RTX 3070 GameRock and GamingPro Series, powered by the NVIDIA Ampere architecture.

The new NVIDIA GeForce RTX 30 Series GPUs, the 2nd generation of RTX, feature new RT Cores, Tensor Cores and streaming multiprocessors, bringing stunning visuals, amazingly fast frame rates, and AI acceleration to games and creative applications. Powered by the NVIDIA Ampere architecture, which delivers increases of up to 1.9X performance-per-watt over the previous generation, the RTX 30 Series effortlessly powers graphics experiences at all resolutions, even up to 8K at the top end. The GeForce RTX 3090, 3080, and 3070 represent the greatest GPU generational leap in the history of NVIDIA.

Tachyum Prodigy Native AI Supports TensorFlow and PyTorch

Tachyum Inc. today announced that it has further expanded the capabilities of its Prodigy Universal Processor through support for TensorFlow and PyTorch environments, enabling a faster, less expensive and more dynamic solution for the most challenging artificial intelligence/machine learning workloads.

Analysts predict that AI revenue will surpass $300 billion by 2024, with a compound annual growth rate (CAGR) of up to 42 percent through 2027. Technology giants are investing heavily in AI to make the technology more accessible for enterprise use cases, which range from self-driving vehicles to more sophisticated and control-intensive disciplines like Spiking Neural Nets, Explainable AI, Symbolic AI and Bio AI. When deployed into AI environments, Prodigy is able to simplify software processes, accelerate performance, save energy and better incorporate rich data sets to allow for faster innovation.
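
For a sense of what that forecast implies, here is a quick back-of-the-envelope sketch. The $300 billion base and the 42 percent rate come from the analyst forecast above; extending the base year forward at that rate is purely our own illustration.

```python
# Project the analyst forecast forward: $300B in 2024 compounding at the
# quoted upper-bound 42% CAGR through 2027 (illustrative only).
base_revenue_b = 300.0  # USD billions in 2024, per the forecast
cagr = 0.42             # upper-bound CAGR quoted through 2027

for year in range(2024, 2028):
    revenue = base_revenue_b * (1 + cagr) ** (year - 2024)
    print(f"{year}: ${revenue:,.0f}B")
# 2027 works out to 300 * 1.42**3 ≈ $859B under these assumptions.
```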

Elon Musk to Show Working Neuralink Device This Friday

Elon Musk, via his company Neuralink, is set to reveal a working device this Friday. Neuralink Corporation was founded back in 2016 with the mission to develop a BMI (Brain-Machine Interface), ultimately allowing for integration of a computer with the human mind. Work has gone on in relative secrecy until now, but the announcement from Elon Musk shows that the company has been diligently working behind closed doors - as one would expect for such a fundamental technology. The first step is for Neuralink to serve as a "treatment" of sorts for brain diseases and assorted conditions. The device works by implanting threads into the brain, for which Neuralink is developing a "sewing machine-like" device that can manipulate and insert threads measuring 4 to 6 μm in width throughout a recipient's brain (note that patient wasn't the word used there).

The basis behind Neuralink's foundation, and its ultimate goal, is the belief in a need for human augmentation (sometimes referred to as transhumanism). This aims to keep up with the increasingly entrenched Dataist interpretation of humankind, and the advent of increasingly complex algorithms - and even AI - throughout the sphere of our lives. Apart from showing off a working Neuralink prototype, which will supposedly demonstrate the ability to "fire neurons in real time", the company is unveiling a second-generation robot for sewing the threads into the brain. The objective is to develop flexible threads that replace the rigid threads currently employed in BMIs, which always run the risk of damaging the brain. Eventually, the aim is for the surgery to be as non-invasive as LASIK eye surgery. Being a Musk-backed project, lofty claims and ambitious deadlines abound; the company first expected to start human trials by the end of this year. For now, no more information on that milestone has been shared.

Lightmatter Introduces Optical Processor to Speed Compute for Next-Gen AI

Lightmatter, a leader in silicon photonics processors, today announces its artificial intelligence (AI) photonic processor, a general-purpose AI inference accelerator that uses light to compute and transport data. Using light to calculate and communicate within the chip reduces heat—leading to orders of magnitude reduction in energy consumption per chip and dramatic improvements in processor speed. Since 2010, the amount of compute power needed to train a state-of-the-art AI algorithm has grown at five times the rate of Moore's Law scaling—doubling approximately every three and a half months. Lightmatter's processor solves the growing need for computation to support next-generation AI algorithms.
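
To illustrate the gap those growth rates imply, here is a minimal sketch. The 3.5-month doubling period is from the paragraph above; the 18-month Moore's Law cadence is a commonly used assumption, not a figure from Lightmatter.

```python
# Compare compute demand doubling every 3.5 months against a classic
# ~18-month Moore's Law doubling cadence over the same period.
def growth_factor(months: float, doubling_period: float) -> float:
    return 2 ** (months / doubling_period)

horizon = 24  # months
ai = growth_factor(horizon, 3.5)      # AI training compute demand
moore = growth_factor(horizon, 18.0)  # assumed Moore's Law cadence
print(f"Over {horizon} months: demand grows {ai:,.0f}x, transistors {moore:.1f}x")
# Over 24 months: demand grows ~116x, transistor scaling delivers ~2.5x.
```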

"The Department of Energy estimates that by 2030, computing and communications technology will consume more than 8 percent of the world's power. Transistors, the workhorse of traditional processors, aren't improving; they're simply too hot. Building larger and larger datacenters is a dead end path along the road of computational progress," said Nicholas Harris, PhD, founder and CEO at Lightmatter. "We need a new computing paradigm. Lightmatter's optical processors are dramatically faster and more energy efficient than traditional processors. We're simultaneously enabling the growth of computing and reducing its impact on our planet."

Raja Koduri Previews "PetaFLOPs Scale" 4-Tile Intel Xe HP GPU

Raja Koduri, Intel's chief architect and senior vice president of its discrete graphics division, today gave a talk at Hot Chips 32, this year's online edition of the conference, showing off the latest architectural advancements in the semiconductor industry. Intel prepared two talks: one about Ice Lake-SP server CPUs and one about its efforts in the upcoming graphics card launch. So what has Intel been working on all this time? Raja Koduri benchmarked the upcoming GPU and recorded just how much raw power it possesses - possibly measured in PetaFLOPs.

During the talk, Mr. Koduri pulled the 4-tile Xe HP GPU out of his pocket and showed the chip for the first time - and it is one big chip. Featuring four tiles, the GPU represents Intel's fastest and biggest variant of the Xe HP GPUs. The benchmark Intel ran was made to show off scaling on the Xe architecture, and how increasing the number of tiles results in a scalable increase in performance. Running on a single tile, the GPU delivered 10,588 GFLOPs, or around 10.588 TeraFLOPs. With two tiles, performance scales almost perfectly to 21,161 GFLOPs (21.161 TeraFLOPs), a 1.999X improvement. At four tiles, the GPU achieves 3.993X scaling and scores 41,908 GFLOPs (41.908 TeraFLOPs), all measured in single-precision FP32.
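
From the rounded figures above, the scaling efficiency is easy to reproduce (the small difference from the quoted 3.993X at four tiles presumably reflects unrounded source data):

```python
# Scaling efficiency implied by Intel's demo numbers: measured FP32 GFLOPs
# per tile count, normalized against perfect linear scaling from one tile.
measured = {1: 10588, 2: 21161, 4: 41908}  # GFLOPs shown in the demo
single = measured[1]

for tiles, gflops in measured.items():
    speedup = gflops / single
    print(f"{tiles} tile(s): {speedup:.3f}x speedup, {speedup / tiles:.1%} of linear")
# 2 tiles: 1.999x (99.9%); 4 tiles: ~3.958x (~99.0%) from these rounded figures.
```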

IBM Reveals Next-Generation IBM POWER10 Processor

IBM today revealed the next generation of its IBM POWER central processing unit (CPU) family: IBM POWER10. Designed to offer a platform to meet the unique needs of enterprise hybrid cloud computing, the IBM POWER10 processor uses a design focused on energy efficiency and performance in a 7 nm form factor with an expected improvement of up to 3x greater processor energy efficiency, workload capacity, and container density than the IBM POWER9 processor.

Designed over five years with hundreds of new and pending patents, the IBM POWER10 processor is an important evolution in IBM's roadmap for POWER. Systems taking advantage of IBM POWER10 are expected to be available in the second half of 2021.

Blaize Delivers Breakthrough for AI Edge Computing

Blaize today announced the company's first AI computing hardware and software products built to overcome today's unmet requirements for compute and productization of AI applications at the edge. With multiple feature advancements vs. legacy GPU/CPU solutions, the Blaize Pathfinder and Xplorer platforms coupled with the Blaize AI Software Suite enable developers to usher in a new era of more practical and commercially viable edge AI products across a wide range of edge use cases and industries.

"Today's edge solutions are either too small to compute the load or too costly and too hard to productize," says Dinakar Munagala, Co-founder and CEO, Blaize. "Blaize AI edge computing products overcome these limitations of power, complexity and cost to unleash the adoption of AI at the edge, facilitating the migration of AI computing out of the data center to the edge."

SiFive Secures $61 Million in Series E Funding Led by SK Hynix

SiFive, Inc., the leading provider of commercial RISC-V processor IP and silicon solutions, today announced it raised $61 million in a Series E round led by SK hynix, joined by new investor Prosperity7 Ventures, with additional funding from existing investors, Sutter Hill Ventures, Western Digital Capital, Qualcomm Ventures, Intel Capital, Osage University Partners, and Spark Capital.

"Global demand for storage and memory in the data center is increasing as AI-powered business intelligence and data processing growth continues", said Youjong Kang, VP of Growth Strategy, SK hynix. "SiFive is well-positioned to grow with opportunities created from data center, enterprise, storage and networking requirements for workload-focused processor IP."

NVIDIA A100 Ampere GPU Benchmarked on MLPerf

When NVIDIA announced its Ampere lineup of graphics cards, the A100 GPU was there to represent the top performance of the lineup. The GPU is optimized for heavy computing workloads as well as machine learning and AI tasks. Today, NVIDIA submitted results for the A100 GPU to the MLPerf database. What is MLPerf, and why does it matter, you might ask? MLPerf is a system benchmark designed to test the capability of a system for machine learning tasks and enable comparability between systems. The A100 GPU was benchmarked in the latest 0.7 version of the benchmark.

The baseline for the results was the previous-generation king, the V100 Volta GPU. The new A100 GPU is on average 1.5 to 2.5 times faster than the V100, and so far an A100-based system beats all available offerings. It is worth pointing out, however, that not all competing systems have been submitted; for now, the A100 GPU is the fastest.

MediaTek Announces Dimensity 720, its Newest 5G Chip

MediaTek today announced the Dimensity 720, its latest 5G SoC that will give consumers access to premium 5G experiences on mid-tier smartphones. The Dimensity 720 is part of MediaTek's 5G chipset family, which includes a range of chipsets from the Dimensity 1000 for flagship 5G smartphones to the Dimensity 800 and 700 series for more accessible 5G mid-tier devices.

"The Dimensity 720 sets a new standard, delivering feature-packed 5G experiences and technology to devices that are more accessible to mass market consumers," said Dr. Yenchi Lee, Deputy General Manager, Wireless Communications Business Unit, MediaTek. "This chip is highly power-efficient, has impressive performance and advanced display and imaging technologies. All of that combined will help brands usher in differentiated 5G devices for consumers around the globe."

NVIDIA to Build Fastest AI Supercomputer in Academia

The University of Florida and NVIDIA Tuesday unveiled a plan to build the world's fastest AI supercomputer in academia, delivering 700 petaflops of AI performance. The effort is anchored by a $50 million gift: $25 million from alumnus and NVIDIA co-founder Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA.

"We've created a replicable, powerful model of public-private cooperation for everyone's benefit," said Malachowsky, who serves as an NVIDIA Fellow, in an online event featuring leaders from both the UF and NVIDIA. UF will invest an additional $20 million to create an AI-centric supercomputing and data center.

AAEON Launches BOXER-8251AI AI Edge Computing System

AAEON, a leader in AI edge solutions, announces the release of the BOXER-8251AI AI edge box PC powered by the NVIDIA Jetson Xavier NX. With greater performance and a compact size, the BOXER-8251AI offers greater flexibility to bring even more smart applications to life.

The BOXER-8251AI is powered by the innovative Jetson Xavier NX from NVIDIA. Featuring a six-core 64-bit ARM processor, it boasts 384 CUDA cores, 48 Tensor Cores, and two NVDLA engines capable of running multiple neural networks in parallel, delivering accelerated computing performance up to 21 TOPS. Built to bring dedicated AI processing to the edge, the system also features 8 GB of LPDDR4 memory and 16 GB of onboard eMMC memory that's expandable through the Micro-SD card slot.

NVIDIA Surpasses Intel in Market Cap Size

Yesterday, after the stock market closed, NVIDIA officially surpassed Intel in market capitalization. After hours, NVIDIA's stock (ticker: NVDA) trades at $411.20, giving a market cap of $251.31 billion. It marks a historic day for NVIDIA, as the company has historically been smaller than Intel (ticker: INTC) - in the past, some even speculated that Intel could buy NVIDIA while the company was much smaller. Intel's market cap now stands at $248.15 billion, a bit lower than NVIDIA's. However, market cap does not tell the whole story: NVIDIA's stock is fueled by the hype around machine learning and AI, while Intel's is not riding any potential bubble.

Comparing revenues, Intel performs much better: it recorded revenue of $71.9 billion in 2019, versus NVIDIA's $11.72 billion. Still, NVIDIA has done a remarkable job, nearly doubling its revenue from $6.91 billion in 2017 to $11.72 billion in 2019 - an amazing feat, and market predictions suggest the growth is not stopping. With the recent acquisition of Mellanox, the company now has much bigger opportunities for expansion and growth.
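
For reference, the growth rate implied by those two revenue figures is straightforward to compute; the only assumption below is treating 2017 to 2019 as two compounding periods.

```python
# Implied compound annual growth rate of NVIDIA's revenue between the
# two reported years: $6.91B (2017) to $11.72B (2019), two periods.
start, end, years = 6.91, 11.72, 2
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ≈ 30.2% per year
```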

Qualcomm Announces Snapdragon 865 Plus 5G Mobile Platform, Breaking the 3 GHz Barrier

Qualcomm Technologies, Inc. unveiled the Qualcomm Snapdragon 865 Plus 5G Mobile Platform, a follow-on to the flagship Snapdragon 865 that has powered more than 140 devices (announced or in development) - the most individual premium-tier designs powered by a single mobile platform this year. The new Snapdragon 865 Plus is designed to deliver increased performance across the board for superior gameplay and insanely fast Qualcomm Snapdragon Elite Gaming experiences, truly global 5G, and ultra-intuitive AI.

"As we work to scale 5G, we continue to invest in our premium tier, 8-series mobile platforms, to push the envelope in terms of performance and power efficiency and deliver the next generation of camera, AI and gaming experiences," said Alex Katouzian, senior vice president and general manager, mobile, Qualcomm Technologies, Inc. "Building upon the success of Snapdragon 865, the new Snapdragon 865 Plus will deliver enhanced performance for the next wave of flagship smartphones."

SK hynix Starts Mass-Production of HBM2E High-Speed DRAM

SK hynix announced that it has started full-scale mass production of its high-speed 'HBM2E' DRAM, only ten months after the Company announced the development of the new product in August last year. SK hynix's HBM2E supports over 460 GB (gigabytes) per second with 1,024 I/Os (inputs/outputs), based on a 3.6 Gbps (gigabits-per-second) speed per pin. It is the fastest DRAM solution in the industry, able to transmit 124 FHD (full-HD) movies (3.7 GB each) per second. The density is 16 GB, achieved by vertically stacking eight 16 Gb chips with TSV (Through Silicon Via) technology - more than double that of the previous generation (HBM2).
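
The quoted figures check out with simple pin math; all inputs below are SK hynix's own numbers from the paragraph above.

```python
# Sanity-check SK hynix's HBM2E figures: per-pin speed times I/O count,
# converted from gigabits to gigabytes.
pins = 1024          # I/O count of the HBM2E interface
gbps_per_pin = 3.6   # per-pin transfer rate
bandwidth = pins * gbps_per_pin / 8  # Gb/s -> GB/s
print(f"Peak bandwidth: {bandwidth:.1f} GB/s")                # 460.8 GB/s

movie_gb = 3.7       # one FHD movie, per SK hynix's example
print(f"FHD movies per second: {int(bandwidth / movie_gb)}")  # 124

dies, die_gbit = 8, 16  # eight 16 Gb dies stacked via TSV
print(f"Stack capacity: {dies * die_gbit // 8} GB")           # 16 GB
```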

HBM2E boasts high-speed, high-capacity, and low-power characteristics; it is an optimal memory solution for next-generation AI (Artificial Intelligence) systems, including deep learning accelerators and high-performance computing, which all require high-level computing performance. Furthermore, it is expected to be applied to Exascale supercomputers - high-performance computing systems that can perform a quintillion calculations per second - which will lead the research of next-generation basic and applied science, such as climate change, biomedicine, and space exploration.

Death Stranding with DLSS 2.0 Enables 4K-60 FPS on Any RTX 20-series GPU: Report

Ahead of its PC platform release on July 14, testing of a pre-release build by Tom's Hardware reveals that "Death Stranding" will offer 4K 60 frames per second on any NVIDIA RTX 20-series graphics card if DLSS 2.0 is enabled. NVIDIA's performance-enhancing feature renders the game at a resolution lower than that of the display head, and uses AI to reconstruct details. We've detailed DLSS 2.0 in an older article. The PC version has a frame-rate limit of 240 FPS, ultra-wide resolution support, and a photo mode (unsure if it's an Ansel implementation). It has rather relaxed recommended system requirements for 1080p 60 FPS gaming (sans DLSS).
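
For a sense of how much work DLSS saves, here is a small sketch using the commonly cited DLSS 2.0 per-axis scale factors; treat the exact factors as approximations rather than guarantees for this title.

```python
# Internal render resolution DLSS 2.0 uses for a given output resolution,
# based on commonly cited per-axis scale factors (approximate).
DLSS_MODES = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.5}

def render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    scale = DLSS_MODES[mode]
    return round(out_w * scale), round(out_h * scale)

for mode in DLSS_MODES:
    w, h = render_resolution(3840, 2160, mode)
    print(f"4K {mode}: renders {w}x{h}, reconstructs to 3840x2160")
# Performance mode renders 1920x1080 -- a quarter of the pixels -- which is
# how a mid-range RTX 20-series card can reach 4K 60 FPS with DLSS enabled.
```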

NVIDIA Unveils AI Platform to Minimize Downtime in Supercomputing Data Centers

NVIDIA today unveiled the NVIDIA Mellanox UFM Cyber-AI platform, which minimizes downtime in InfiniBand data centers by harnessing AI-powered analytics to detect security threats and operational issues, as well as predict network failures.

This extension of the UFM platform product portfolio — which has managed InfiniBand systems for nearly a decade — applies AI to learn a data center's operational cadence and network workload patterns, drawing on both real-time and historic telemetry and workload data. Against this baseline, it tracks the system's health and network modifications, and detects performance degradations, usage and profile changes.
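
NVIDIA has not published the platform's internals, but the "learn a baseline, flag deviations" idea can be sketched in a few lines. Everything below is illustrative, not the UFM Cyber-AI implementation.

```python
# Illustrative baseline-and-deviation monitor for network telemetry:
# learn a rolling mean/std, flag samples that drift beyond a z-score.
from collections import deque
import math

class BaselineMonitor:
    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Record a telemetry sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 10:  # wait for some history first
            mean = sum(self.samples) / len(self.samples)
            var = sum((x - mean) ** 2 for x in self.samples) / len(self.samples)
            std = math.sqrt(var) or 1e-9
            anomalous = abs(value - mean) / std > self.threshold
        self.samples.append(value)
        return anomalous

# Example: steady port latency, then a sudden degradation.
monitor = BaselineMonitor()
for latency_us in [1.1, 1.0, 1.2] * 10 + [9.5]:
    if monitor.observe(latency_us):
        print(f"anomaly detected: {latency_us} us")  # fires on 9.5
```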

Qualcomm Launches World's First 5G and AI-Enabled Robotics Platform

Qualcomm Technologies, Inc., today announced the Qualcomm Robotics RB5 platform - the Company's most advanced, integrated, comprehensive offering designed specifically for robotics. Building on the successful Qualcomm Robotics RB3 platform and its broad adoption in a wide array of robotics and drone products available today, the Qualcomm Robotics RB5 platform comprises an extensive set of hardware, software and development tools.

The Qualcomm Robotics RB5 platform is the first of its kind to bring together the Company's deep expertise in 5G and AI to empower developers and manufacturers to create the next generation of high-compute, low-power robots and drones for the consumer, enterprise, defense, industrial and professional service sectors - and the comprehensive Qualcomm Robotics RB5 Development Kit helps ensure developers have the customization and flexibility they need to make their visions a commercial reality. To date, Qualcomm Technologies has engaged many leading companies that have endorsed the Qualcomm Robotics RB5 platform, including 20+ early adopters in the process of evaluating the platform.

AAEON Releases Body Temperature Monitoring IoT Board

AAEON, a leading manufacturer of AI and IoT platforms, has worked with partners and developers to create solutions designed to address COVID-19 and other infectious diseases. While the current pandemic is winding down in some areas, many companies are rethinking what it means to do business in a world where it is key to be aware of how disease spreads. One key sector is banking, where many kinds of customers may visit on a daily basis. In Southeast Asia, AAEON is partnering with developers to deploy the BOXER-8170AI and SCA-M01 IoT Node Board for body temperature and meeting room monitoring.

HTC Vive Launches VIVE XR Suite

HTC VIVE, a global leader in innovative technology, today officially announced it will enter the cloud software business with the VIVE XR Suite, unveiled at its hybrid event, "Journey into the Next Normal", held physically in Shanghai and online through the Engage virtual events platform. Comprising five separate applications covering remote collaboration, productivity, events, social and culture, the VIVE XR Suite gives users the tools they need to overcome the new challenges faced while working and living in a socially distant world. The VIVE XR Suite is targeted to launch in Q3 2020 in China, with additional regions to follow throughout the year.

The VIVE XR Suite comprises five major applications (VIVE Sync, VIVE Sessions, VIVE Campus, VIVE Social, and VIVE Museum) that meet the daily needs of users around the world who are working, learning and living remotely. Although it is called an XR Suite, it is important to note that this software is not dependent on VR/AR devices to function. All the applications will function on existing PCs/laptops and some apps will even support modern smartphones, but for a superior immersive experience, PC VR or standalone VR devices are recommended. Users will be able to log in to all apps in the suite using a single account and across the various devices they own. This integrated application bundle, created in partnership with leading software companies in their respective areas, will provide a seamless experience for consumer and business users. The CEOs of all the software partners in the VIVE XR Suite (Immersive VR Education, VirBELA, VRChat, and Museum of Other Realities) attended the event live via video and within VR in avatar form.

ASUS Announces AI Noise-canceling Microphone Technology

New technology eliminates background noise for clear voice communication with ASUS AI Noise-Canceling Mic Adapter, ROG Strix Go, ROG Theta 7.1 headsets and more. ASUS today announced new AI Noise-Canceling Microphone (AI Mic) technology that intelligently eliminates unwanted background noise for clear voice communication for work or play. The new technology uses chipset-based machine learning to filter out and remove other human voices and ambient sounds like wind or traffic noise. This new technology is now available on the ASUS AI Noise Canceling Mic Adapter and the latest ROG headsets.

ASUS AI Noise-Canceling Mic Adapter is the world's first USB-C to 3.5 mm adapter with integrated AI Mic technology. It connects to any headset via a 3.5 mm audio jack to provide users with crystal-clear voice communication. The built‑in chipset handles all of the sound processing, so the adapter does not affect the performance of the mobile device, PC or laptop it is connected to. Weighing just 8 grams, the AI Noise-Canceling Mic Adapter includes exclusive ASUS Hyper-Grounding technology to prevent electromagnetic interference for noise-free audio. In select markets, it is available with a USB Type-C-to-Type‑A adapter.
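
ASUS's implementation is proprietary and ML-based, but the basic shape of the problem can be shown with a classical spectral gate: estimate a noise profile, then attenuate frequency bins that don't rise above it. This is a deliberately simplified stand-in, not the AI Mic algorithm.

```python
# Classical spectral gating (not ASUS's ML approach): measure the noise
# spectrum from a noise-only clip, then zero out STFT bins that stay below
# that profile plus a margin. Amplitude normalization is omitted for brevity.
import numpy as np

def spectral_gate(audio, noise_clip, frame=1024, hop=512, margin_db=6.0):
    window = np.hanning(frame)

    def stft(x):
        n = 1 + (len(x) - frame) // hop
        frames = np.stack([x[i * hop:i * hop + frame] * window for i in range(n)])
        return np.fft.rfft(frames, axis=1)

    noise_profile = np.abs(stft(noise_clip)).mean(axis=0)  # per-bin noise level
    gate = noise_profile * 10 ** (margin_db / 20)          # threshold per bin

    spec = stft(audio)
    spec *= np.abs(spec) > gate                            # keep only loud bins

    out = np.zeros(len(audio))                             # overlap-add resynthesis
    for i, f in enumerate(np.fft.irfft(spec, n=frame, axis=1)):
        out[i * hop:i * hop + frame] += f * window
    return out
```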

AMD EPYC Processors Ecosystem Continues to Grow with Integration into New NVIDIA DGX A100

AMD today announced that the NVIDIA DGX A100, the third generation of the world's most advanced AI system, is the latest high-performance computing system featuring 2nd Gen AMD EPYC processors. Delivering 5 petaflops of AI performance, the elastic architecture of the NVIDIA DGX A100 enables enterprises to accelerate diverse AI workloads such as data analytics, training, and inference.

NVIDIA DGX A100 leverages the high-performance capabilities, 128 cores, DDR4-3200 MHz and PCIe 4 support from two AMD EPYC 7742 processors running at speeds up to 3.4 GHz¹. The 2nd Gen AMD EPYC processor is the first and only current x86-architecture server processor that supports PCIe 4, providing leadership high-bandwidth I/O that's critical for high performance computing and connections between the CPU and other devices like GPUs.

Aetina Launches New Edge AI Computer Powered by the NVIDIA Jetson

Aetina Corp., a provider of high-performance GPGPU solutions, announced the new AN110-XNX edge AI computer leveraging the powerful capabilities of the NVIDIA Jetson Xavier NX, expanding its range of edge AI systems built on the Jetson platform for applications in smart transportation, factories, retail, healthcare, AIoT, robotics, and more.

The AN110-XNX combines the NVIDIA Jetson Xavier NX and the Aetina AN110 carrier board in a compact form factor of 87.4 x 68.2 x 52 mm (with fan). The AN110-XNX supports the MIPI CSI-2 interface for 1x4K or 2xFHD cameras to handle intensive AI workloads from ultra-high-resolution cameras, enabling more accurate image analysis. It is as small as Aetina's AN110-NAO based on the NVIDIA Jetson Nano platform, but delivers more powerful AI computing via the new Jetson Xavier NX. With 384 CUDA cores, 48 Tensor Cores, and cloud-native capability, the Jetson Xavier NX delivers up to 21 TOPS and is the ideal platform to accelerate AI applications. Bundled with the latest NVIDIA JetPack 4.4 SDK, the energy-efficient module significantly expands the choices now available for developers and customers looking for embedded edge-computing options that demand increased performance to support AI workloads but are constrained by size, weight, power budget, or cost.

LG's 48-inch OLED Gaming TV with G-SYNC Goes on Sale This Month

LG is preparing to launch the latest addition to its gaming lineup of panels, and this time it goes big. Launching this month is LG's 48-inch OLED Gaming TV with a 120 Hz refresh rate and G-SYNC support. To round out the impressive feature set, LG has priced this panel at $1,499 - pricey, but a tempting buy. Featuring 1 ms response time and low input lag, the 48CX TV is designed for gaming and fits into NVIDIA's Big Format Gaming Display (BFGD) philosophy. Interestingly, the TV uses LG's a9 Gen3 AI processor, which upscales content so everything can look nice and crisp. AI is used to "authentically upscale lower resolution content, translating the source to 4K's 8.3+ million pixels. The technology is so good, you might mistake non-4K for true 4K".
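
The pixel math behind that "8.3+ million pixels" claim is easy to verify; the source resolutions below are standard formats chosen as examples, not ones LG lists.

```python
# How many pixels the a9 Gen3's upscaler has to synthesize when fed
# common lower-resolution sources, relative to the 4K panel.
sources = {"720p": (1280, 720), "1080p": (1920, 1080), "1440p": (2560, 1440)}
target_w, target_h = 3840, 2160
target = target_w * target_h
print(f"4K panel: {target:,} pixels")  # 8,294,400

for name, (w, h) in sources.items():
    print(f"{name} -> 4K: {target / (w * h):.1f}x the pixels")
# 720p: 9.0x, 1080p: 4.0x, 1440p: ~2.2x
```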

XRSpace Launches 5G-connected VR headset

XRSPACE, the company pioneering the next generation of social reality, has today announced the launch of the world's first social VR platform designed for mass market users, combined with the first 5G mobile VR headset, delivering on the promise of a true social VR experience for all.

XRSPACE aims to create the social reality of the future - a world where people can interact both physically and virtually in a way that is contextual, familiar, immersive, interactive and personal. At a time when face-to-face interaction is restricted due to social distancing measures, XRSPACE is aiming to bring people together in a virtual world that is powered by cutting edge XR, AI, and computer vision technology, creating a new experience through 5G which is meaningful, anytime, anywhere.