News Posts matching #Deep Learning


Basemark Releases Breaking Limit Cross-Platform Ray Tracing Benchmark

Basemark announced today the release of a groundbreaking cross-platform ray tracing benchmark, GPUScore: Breaking Limit. This new benchmark is designed to evaluate the performance of the full range of ray tracing capable devices, including smartphones, tablets, laptops and high-end desktops with discrete GPUs. With support for multiple operating systems and graphics APIs, Breaking Limit provides a comprehensive performance evaluation across various platforms and devices.

As ray tracing technology becomes increasingly prevalent in consumer electronics, from high-end desktops to portable devices like laptops and smartphones, there is a critical need for a benchmark that can accurately assess and compare performance across different devices and platforms. Breaking Limit addresses this gap, providing valuable insights into how various devices handle hardware-accelerated graphics rendering. The benchmark is an essential tool for developers, manufacturers, and consumers to measure and compare the performance of real-time ray tracing rendering across different hardware and software environments reliably.

NVIDIA Releases DLSS 3.7.0 With Quality E Preset for Image Quality Improvements

Yesterday, NVIDIA released version 3.7.0 of its Deep Learning Super Sampling (DLSS) technology. The newest version promises to improve image quality. Among the most notable additions is the new "E" quality preset, which is now the default. It builds upon previous DLSS versions but introduces noticeably sharper images, generally improved fine-detail stability, reduced ghosting, and better temporal stability overall compared to DLSS 3.5. It has been tested in Cyberpunk 2077 in a YouTube video comparing DLSS 3.5.10, DLSS 3.6.0, and the newest DLSS 3.7.0. Additionally, some Reddit users reported seeing a noticeable difference in Horizon Forbidden West at 1440p.

Generally, DLSS 3.7.0 can act as a drop-in replacement for older DLSS versions. Using DLSS Tweaks, or even manually, users can patch in the latest DLSS 3.7.0 DLL and force games that did not ship with, or have not been updated to, the latest version to use the new DLL file. We have the latest DLL up in our Downloads section on TechPowerUp, so users can install DLSSTweaks and grab the desired file version from our website.
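For readers who prefer the manual route, the swap itself amounts to backing up the game's bundled nvngx_dlss.dll and copying the newer build over it. Below is a minimal sketch of that process; the folder paths are placeholders, and the exact location of the DLL varies from game to game.

```python
# Minimal sketch of the manual DLL-swap approach. Paths are examples only;
# the game's install folder and the location of the downloaded DLL will
# differ per system, and the DLL may live in a subfolder rather than the root.
import shutil
from pathlib import Path

game_dir = Path(r"C:\Games\SomeGame")           # hypothetical game install folder
new_dll = Path(r"C:\Downloads\nvngx_dlss.dll")  # DLSS 3.7.0 DLL downloaded from TechPowerUp

target = game_dir / "nvngx_dlss.dll"            # DLSS library shipped with the game

if target.exists():
    # keep a backup of the original so it can be restored if needed
    shutil.copy2(target, target.with_name(target.name + ".bak"))
shutil.copy2(new_dll, target)                   # drop in the newer DLL
print(f"Replaced {target} with the DLSS 3.7.0 build")
```

Keeping the backup makes it trivial to restore the original file if a game patch or launcher objects to the newer DLL.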

Grab the latest DLSS 3.7.0 DLL file here.

MSI Unveils AI-Driven Gaming Desktops with NVIDIA GeForce RTX 40 SUPER Series

MSI, a leading brand in True Gaming hardware, proudly announces the incorporation of NVIDIA GeForce RTX 40 SUPER Series graphics cards into its latest 14th generation AI gaming desktops. The MEG Trident X2 14th, MPG Infinite X2 14th, MPG Trident AS 14th, MAG Infinite S3 14th, and MAG Codex 6 14th, which initially featured NVIDIA GeForce RTX 40 Series graphics cards, now boast the cutting-edge RTX 40 SUPER Series, ushering in a new era of gaming excellence.

At the heart of these 14th gen AI gaming desktops lies the revolutionary RTX 40 SUPER Series, comprising the GeForce RTX 4080 SUPER, GeForce RTX 4070 Ti SUPER, and GeForce RTX 4070 SUPER. This series reshapes the gaming experience with cutting-edge AI capabilities, surpassing the speed of its predecessors. Equipped with RTX platform superpowers, these GPUs elevate the performance of games, applications, and AI tasks, marking a significant advancement in the gaming landscape.

GIGABYTE Announces its GeForce RTX 40 SUPER Series Graphics Cards

GIGABYTE TECHNOLOGY Co. Ltd, a leading manufacturer of premium gaming hardware, today launched its GeForce RTX 40 SUPER series graphics cards powered by the NVIDIA Ada Lovelace architecture. Featuring the AORUS, AERO, GAMING, EAGLE, and WINDFORCE series, the lineup serves the needs of every customer, from gamers and creators to AI developers.

The new GeForce RTX SUPER GPUs are the ultimate way to experience AI on PCs. Specialized AI Tensor Cores deliver up to 836 AI TOPS, bringing transformative capabilities for AI in gaming, creation, and everyday productivity. PC gamers demand the very best in visual quality, and AI-powered NVIDIA Deep Learning Super Sampling (DLSS) Super Resolution, Frame Generation, and Ray Reconstruction combine with ray tracing to offer stunning worlds. With DLSS, seven out of eight pixels can be AI-generated, accelerating full ray tracing by up to 4x with better image quality.
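The "seven out of eight pixels" figure follows from combining DLSS Super Resolution in Performance mode (which renders one quarter of the output pixels) with Frame Generation (which synthesizes every second frame). A quick back-of-the-envelope check, under those assumptions:

```python
# Back-of-the-envelope check of the "7 of 8 pixels AI-generated" claim,
# assuming DLSS Performance mode (1/4 of output pixels rendered) plus
# Frame Generation (every second frame entirely AI-generated).
rendered_fraction_per_frame = 1 / 4   # Performance mode renders at half width and half height
rendered_frame_ratio = 1 / 2          # only every other displayed frame is rendered at all

rendered_pixels = rendered_fraction_per_frame * rendered_frame_ratio  # 1/8 of all displayed pixels
ai_generated_pixels = 1 - rendered_pixels                             # the remaining 7/8

print(f"Rendered: {rendered_pixels:.3f}, AI-generated: {ai_generated_pixels:.3f}")
# Rendered: 0.125, AI-generated: 0.875 -> seven out of eight pixels
```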

Samsung Announces the Galaxy Book4 Series: The Most Intelligent and Powerful Galaxy Book Yet

Samsung Electronics Co., Ltd. today announced the release of its most intelligent PC lineup yet: Galaxy Book4 Ultra, Book4 Pro and Book4 Pro 360. The latest series comes with a new intelligent processor, a more vivid and interactive display and a robust security system—beginning a new era of AI PCs that offers ultimate productivity, mobility and connectivity. These enhancements not only improve the device itself but also elevate the entire Samsung Galaxy ecosystem, advancing the PC category and accelerating Samsung's vision of AI innovation—for both today and tomorrow.

"Samsung is committed to empowering people to experience new possibilities that enhance their everyday lives. This new paradigm can be achieved through our expansive Galaxy ecosystem and open collaboration with other industry leaders," said TM Roh, President and Head of Mobile eXperience Business at Samsung Electronics. "The Galaxy Book4 series plays a key role in bringing best-in-class connectivity to our ecosystem that will broaden how people interact with their PC, phone, tablet and other devices for truly intelligent and connected experiences."

AMD FidelityFX Super Resolution Could Come to Samsung and Qualcomm SoCs

AMD FidelityFX Super Resolution (FSR) is an open-source upscaling technology that takes lower-resolution input and combines temporal super-resolution upscaling, frame generation via AMD Fluid Motion Frames (AFMF), and built-in latency reduction to produce higher-resolution output images. While the technology is open source, it competes for market share with NVIDIA's Deep Learning Super Sampling (DLSS). In the mobile space, however, there hasn't been much talk about implementing upscaling technology until now. According to popular leaker @Tech_Reve on X/Twitter, AMD is collaborating with Samsung and Qualcomm to standardize upscaling implementations in mobile SoCs.

Not only does the leak imply that AMD's FSR technology will be used in Samsung's upcoming Exynos SoC, but some AMD ray tracing will be present as well. The leaker also mentioned Qualcomm, which suggests that future iterations of Snapdragon are set to adopt FSR's algorithmic approach to resolution upscaling. How and when remains to be seen, but with mobile games growing in size and demand, FSR could come in handy to provide mobile gamers with a better experience. This primarily targets Android devices powered by Qualcomm silicon; on the Apple side, the company recently announced MetalFX Upscaling for the iPhone with its A17 Pro chip.

Jensen Huang & Leading EU Generative AI Execs Participated in Fireside Chat

Three leading European generative AI startups joined NVIDIA founder and CEO Jensen Huang this week to talk about the new era of computing. More than 500 developers, researchers, entrepreneurs and executives from across Europe and further afield packed into the Spindler and Klatt, a sleek, riverside gathering spot in Berlin. Huang started the reception by touching on the message he delivered Monday at the Berlin Summit for Earth Virtualization Engines (EVE), an international collaboration focused on climate science. He shared details of NVIDIA's Earth-2 initiative and how accelerated computing, AI-augmented simulation and interactive digital twins drive climate science research.

Before sitting down for a fireside chat with the founders of the three startups, Huang introduced some "special guests" to the audience—four of the world's leading climate modeling scientists, whom he called the "unsung heroes" of saving the planet. "These scientists have dedicated their careers to advancing climate science," said Huang. "With the vision of EVE, they are the architects of the new era of climate science."

"Godfather of AI" Geoffrey Hinton Departs Google, Voices Concern Over Dangers of AI

Geoffrey Hinton, British-Canadian psychologist, computer scientist, and winner of the 2018 Turing Award for his work on deep learning, has departed the Google Brain team after a decade-long tenure. His research on AI and neural networks, dating back to the 1980s, has helped shape the current landscape of deep learning, neural processing, and artificial intelligence algorithms with direct and indirect contributions over the years. 2012's AlexNet, designed and developed in collaboration with his students Alex Krizhevsky and Ilya Sutskever, formed the modern backbone of computer vision and AI image recognition used today in generative AI. Hinton joined Google when the company won the bid for the tiny startup he and his two students formed in the months following the reveal of AlexNet. Ilya Sutskever left their cohort at Google in 2015 to become co-founder and Chief Scientist of OpenAI, the creator of ChatGPT and one of Google's most prominent competitors.

In an interview with The New York Times, Hinton said that he quit his position at Google so that he may speak freely about the risks of AI, and that a part of him regrets his life's work in the field. He said that during his time there, Google acted as a "proper steward" of AI development and was careful about releasing anything that might be harmful. His view of the industry shifted within the last year as Microsoft's Bing Chat took shots at Google's core business, web search, leading Google to respond with Bard in a manner more reactionary than deliberate. His concern is that as these companies battle for AI supremacy, they won't take proper precautions against bad-faith actors using the technologies to flood the internet with false photos, text, and even videos, to the point that the average person can no longer tell what is real and what was manufactured by an AI prompt.

NVIDIA Announces Microsoft, Tencent, Baidu Adopting CV-CUDA for Computer Vision AI

Microsoft, Tencent and Baidu are adopting NVIDIA CV-CUDA for computer vision AI. NVIDIA CEO Jensen Huang highlighted work in content understanding, visual search and deep learning Tuesday as he announced the beta release for NVIDIA's CV-CUDA—an open-source, GPU-accelerated library for computer vision at cloud scale. "Eighty percent of internet traffic is video, user-generated video content is driving significant growth and consuming massive amounts of power," said Huang in his keynote at NVIDIA's GTC technology conference. "We should accelerate all video processing and reclaim the power."

CV-CUDA promises to help companies across the world build and scale end-to-end, AI-based computer vision and image processing pipelines on GPUs. The majority of internet traffic is video and image data, driving incredible scale in applications such as content creation, visual search and recommendation, and mapping. These applications use a specialized, recurring set of computer vision and image-processing algorithms to process image and video data before and after they're processed by neural networks.
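For illustration only, the sketch below shows the shape of such a pre-processing stage (resize, color conversion, normalization, batching) using plain CPU-side OpenCV and NumPy; it is not the CV-CUDA API, which runs equivalent operators on the GPU at cloud scale.

```python
# CPU-only illustration of a typical vision pre-processing pipeline
# (decode -> resize -> color convert -> normalize -> batch). CV-CUDA's value
# is accelerating these recurring steps on the GPU; this is not its API.
import cv2
import numpy as np

def preprocess(frame_bgr: np.ndarray, size=(224, 224)) -> np.ndarray:
    resized = cv2.resize(frame_bgr, size, interpolation=cv2.INTER_LINEAR)
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)
    normalized = rgb.astype(np.float32) / 255.0        # scale pixel values to [0, 1]
    return np.transpose(normalized, (2, 0, 1))[None]   # NCHW batch for a neural network

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)       # stand-in for a decoded video frame
tensor = preprocess(frame)
print(tensor.shape)  # (1, 3, 224, 224)
```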

NVIDIA GTC 2023 to Feature Latest Advances in AI Computing Systems, Generative AI, Industrial Metaverse, Robotics; Keynote by Jensen Huang

NVIDIA today announced that company founder and CEO Jensen Huang will deliver the opening keynote at GTC 2023, covering the latest advancements in generative AI, the metaverse, large language models, robotics, cloud computing and more. More than 250,000 people are expected to register for the four-day event, which will include 650+ sessions from researchers, developers and industry leaders in virtually every computing domain. GTC will also feature a fireside chat with Huang and OpenAI co-founder Ilya Sutskever, plus talks by DeepMind's Demis Hassabis, Stability AI's Emad Mostaque and many others.

"This is the most extraordinary moment we have witnessed in the history of AI," Huang said. "New AI technologies and rapidly spreading adoption are transforming science and industry, and opening new frontiers for thousands of new companies. This will be our most important GTC yet."

Intel Launches New Xeon Workstation Processors - the Ultimate Solution for Professionals

Intel today announced the new Intel Xeon W-3400 and Intel Xeon W-2400 desktop workstation processors (code-named Sapphire Rapids), led by the Intel Xeon w9-3495X, Intel's most powerful desktop workstation processor ever designed. Built for professional creators, these new Xeon processors provide massive performance for media and entertainment, engineering and data science professionals. With a breakthrough new compute architecture, faster cores and new embedded multi-die interconnect bridge (EMIB) packaging, the Xeon W-3400 and Xeon W-2400 series of processors enable unprecedented scalability for increased performance.

"For more than 20 years, Intel has been committed to delivering the highest quality workstation platforms - combining high-performance compute and rock-solid stability - for professional PC users across the globe. Our new Intel Xeon desktop workstation platform is uniquely designed to unleash the innovation and creativity of professional creators, artists, engineers, designers, data scientists and power users - built to tackle both today's most demanding workloads as well as the professional workloads of the future." -Roger Chandler, Intel vice president and general manager, Creator and Workstation Solutions, Client Computing Group

Next-Generation Dell PowerEdge Servers Deliver Advanced Performance and Energy Efficient Design

Dell Technologies expands the industry's top selling server portfolio, with an additional 13 next-generation Dell PowerEdge servers, designed to accelerate performance and reliability for powerful computing across core data centers, large-scale public clouds and edge locations. Next-generation rack, tower and multi-node PowerEdge servers, with 4th Gen Intel Xeon Scalable processors, include Dell software and engineering advancements, such as a new Smart Flow design, to improve energy and cost efficiency. Expanded Dell APEX capabilities will help organizations take an as-a-Service approach, allowing for more effective IT operations that make the most of compute resources while minimizing risk.

"Customers come to Dell for easily managed yet sophisticated and efficient servers with advanced capabilities to power their business-critical workloads," said Jeff Boudreau, president and general manager, Infrastructure Solutions Group, Dell Technologies. "Our next-generation Dell PowerEdge servers offer unmatched innovation that raises the bar in power efficiency, performance and reliability while simplifying how customers can implement a Zero Trust approach for greater security throughout their IT environments."

Supermicro Unveils a Broad Portfolio of Performance Optimized and Energy Efficient Systems Incorporating 4th Gen Intel Xeon Scalable Processors

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, is unveiling at the 2022 Super Computing Conference the most extensive portfolio of servers and storage systems in the industry based on the upcoming 4th Gen Intel Xeon Scalable processor, formerly codenamed Sapphire Rapids. Supermicro continues to use its Building Block Solutions approach to deliver state-of-the-art and secure systems for the most demanding AI, Cloud, and 5G Edge requirements. The systems support high-performance CPUs, DDR5 memory with up to 2X the performance and capacities of up to 512 GB per DIMM, and PCIe 5.0, which doubles I/O bandwidth. Intel Xeon CPU Max Series processors (formerly codenamed Sapphire Rapids HBM, featuring High Bandwidth Memory) are also available on a range of Supermicro X13 systems. In addition, the systems support high ambient temperature environments of up to 40°C (104°F), are designed for air and liquid cooling for optimal efficiency, and are rack-scale optimized with open industry-standard designs and improved security and manageability.

"Supermicro is once again at the forefront of delivering the broadest portfolio of systems based on the latest technology from Intel," stated Charles Liang, president and CEO of Supermicro. "Our Total IT Solutions strategy enables us to deliver a complete solution to our customers, which includes hardware, software, rack-scale testing, and liquid cooling. Our innovative platform design and architecture bring the best from the 4th Gen Intel Xeon Scalable processors, delivering maximum performance, configurability, and power savings to tackle the growing demand for performance and energy efficiency. The systems are rack-scale optimized with Supermicro's significant growth of rack-scale manufacturing of up to 3X rack capacity."

Intel Introduces Real-Time Deepfake Detector

As part of Intel's Responsible AI work, the company has developed FakeCatcher, a technology that can detect fake videos with a 96% accuracy rate. Intel's deepfake detection platform is the world's first real-time deepfake detector that returns results in milliseconds. "Deepfake videos are everywhere now. You have probably already seen them; videos of celebrities doing or saying things they never actually did," said Ilke Demir, senior staff research scientist in Intel Labs.

Intel's real-time deepfake detection uses Intel hardware and software, runs on a server, and interfaces through a web-based platform. On the software side, an orchestra of specialist tools forms the optimized FakeCatcher architecture. Teams used OpenVINO to run AI models for face and landmark detection algorithms. Computer vision blocks were optimized with Intel Integrated Performance Primitives (a multi-threaded software library) and OpenCV (a toolkit for processing real-time images and videos), while inference blocks were optimized with Intel Deep Learning Boost and Intel Advanced Vector Extensions 512, and media blocks were optimized with Intel Advanced Vector Extensions 2. Teams also leaned on the Open Visual Cloud project to provide an integrated software stack for the Intel Xeon Scalable processor family. On the hardware side, the real-time detection platform can run up to 72 different detection streams simultaneously on 3rd Gen Intel Xeon Scalable processors.
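Intel has not published FakeCatcher's models or source code, but the general structure described above, a face detector feeding crops into an inference model via OpenCV and OpenVINO, can be sketched roughly as follows; the classifier model file and the "fakeness" score are hypothetical placeholders, not Intel's actual implementation.

```python
# Rough sketch of the pipeline structure described above: OpenCV finds faces in
# a frame and an OpenVINO-compiled model scores each crop. The model file name
# and the real-vs-fake classifier itself are placeholders only.
import cv2
import numpy as np
from openvino.runtime import Core

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

core = Core()
compiled = core.compile_model(core.read_model("deepfake_classifier.xml"), "CPU")  # hypothetical model

def score_frame(frame_bgr: np.ndarray) -> list[float]:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    scores = []
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        crop = cv2.resize(frame_bgr[y:y + h, x:x + w], (224, 224))
        blob = crop.astype(np.float32).transpose(2, 0, 1)[None]   # NCHW input tensor
        result = compiled([blob])[compiled.output(0)]              # run inference
        scores.append(float(result[0][0]))                         # one "fakeness" score per face
    return scores
```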

DFI Unveils ATX Motherboard ICX610-C621A

DFI, a global leading provider of high-performance computing technology across multiple embedded industries, unveils a server-grade ATX motherboard designed for the Intel Ice Lake platform, powered by 3rd Generation Intel Xeon Scalable processors and supporting CPUs with TDPs of up to 205 W. The ICX610-C621A also comes with built-in Intel Speed Select Technology (Intel SST), which provides excellent load balancing between the CPU and multiple accelerator cards to effectively distribute CPU resources, stabilize computation loads, and maximize computing power. As a result, it improves performance by 1.46 times compared to the previous generation.

Featuring powerful performance, the board offers three PCIe x16 slots, two PCIe x8 slots, and one M.2 key, enabling ultra-performance computing, AI workloads, and deep learning, specifically for high-end inspection equipment such as AOI, CT, and MRI applications. The ICX610 also supports up to 512 GB of ECC RDIMM at 3200 MHz, enhancing high-end performance for advanced inspection equipment and improving efficiency.

Intel Unveils First Socketed SoC Processors for Edge Innovation

Intel has announced the availability of its 12th Gen Intel Core SoC processors for IoT Edge. Representing a new lineup of purpose-built edge products optimized for Internet of Things (IoT) applications, this first-of-its-kind socketed system-on-chip (SoC) delivers high performance integrated graphics and media processing for visual compute workloads, a compact footprint to enable smaller innovative form factor designs, and a wide operating thermal design power (TDP) that enables fanless designs and helps customers achieve product sustainability goals.

"As the digitization of business processes continues to accelerate—fueled by workforce demand, supply chain constraints and changing consumer behavior—the amount of data created at the edge and the need for it to be processed and analyzed locally continues to explode. Intel understands the challenges that businesses face—across a wide range of vertical industries—and is committed to help them continue to deliver innovative use cases," said Jeni Panhorst, Intel vice president and general manager of the Network and Edge Compute Division.

Intel Xeon W9-3495 Sapphire Rapids HEDT CPU with 56 Cores and 112 Threads Appears

Intel's upcoming Sapphire Rapids processors will not only be present in the server sector but will also span the high-end desktop (HEDT) platform. Today, Twitter user @InstLatX64 spotted Intel's upcoming Sapphire Rapids HEDT SKU in Kernel.org boot logs. Named Intel Xeon W9-3495, this model features 56 cores and 112 threads. While there is no specific information about base and boost frequencies, we know that the SKU supports AVX-512 and AMX instructions. This is a welcome addition, as Intel has disabled AVX-512 on consumer chips altogether.

With a high core count and additional instructions for deep learning, this CPU will power workstations sometime in the future. Given the late arrival of Sapphire Rapids for servers, the HEDT variant should follow shortly after.
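On Linux, whether a given chip actually exposes those instruction sets can be checked from the flags reported in /proc/cpuinfo; AVX-512 shows up as avx512f and related flags, and AMX as amx_tile, amx_bf16, and amx_int8 on sufficiently new kernels. A minimal sketch:

```python
# Quick Linux-side check for the instruction sets mentioned above by reading
# /proc/cpuinfo. AVX-512 appears as flags such as avx512f; AMX appears as
# amx_tile, amx_bf16 and amx_int8 when the kernel is new enough to report it.
def cpu_flags() -> set[str]:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("AVX-512:", any(f.startswith("avx512") for f in flags))
print("AMX    :", any(f.startswith("amx") for f in flags))
```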

Habana Labs Launches Second-generation AI Deep Learning Processors

Today at the Intel Vision conference, Habana Labs, an Intel company, announced its second-generation deep learning processors, the Habana Gaudi 2 Training and Habana Greco Inference processors. The processors are purpose-built for AI deep learning applications, implemented in 7 nm technology, and build upon Habana's high-efficiency architecture to provide customers with higher-performance model training and inferencing for computer vision and natural language applications in the data center. At Intel Vision, Habana Labs revealed that Gaudi2 delivers twice the training throughput of the NVIDIA A100-80GB GPU on the ResNet-50 computer vision model and the BERT natural language processing model.

"The launch of Habana's new deep learning processors is a prime example of Intel executing on its AI strategy to give customers a wide array of solution choices - from cloud to edge - addressing the growing number and complex nature of AI workloads. Gaudi2 can help Intel customers train increasingly large and complex deep learning workloads with speed and efficiency, and we're anticipating the inference efficiencies that Greco will bring."—Sandra Rivera, Intel executive vice president and general manager of the Datacenter and AI Group

Lambda Teams Up With Razer to Launch the World's Most Powerful Laptop for Deep Learning

Lambda, the Deep Learning Company, today released, in collaboration with Razer, the new Lambda Tensorbook, the world's most powerful laptop designed for deep learning, available with Linux and Lambda's deep learning software. The sleek laptop, coupled with the Lambda GPU Cloud, gives engineers all the software tools and compute performance they need to create, train, and test deep learning models locally. Since its launch in 2012, Lambda has quickly become the de facto deep learning infrastructure provider for the world's leading research and engineering teams. Thousands of businesses and organizations use Lambda, including all of the top five tech companies, 97 percent of the top research universities in the U.S. (including MIT and Caltech), and the Department of Defense. These teams use Lambda's GPU clusters, servers, workstations, and cloud instances to train neural networks for cancer detection, autonomous aircraft, drug discovery, self-driving cars, and much more.

"Most ML engineers don't have a dedicated GPU laptop, which forces them to use shared resources on a remote machine, slowing down their development cycle." said Stephen Balaban, co-founder and CEO of Lambda. "When you're stuck SSHing into a remote server, you don't have any of your local data or code and even have a hard time demoing your model to colleagues. The Razer x Lambda Tensorbook solves this. It's pre-installed with PyTorch and TensorFlow and lets you quickly train and demo your models: all from a local GUI interface. No more SSH!"

AAEON Partners with AI Chipmaker Hailo to Enable Next-Gen AI Applications at the Edge

UP Bridge the Gap, a brand of AAEON, is pleased to announce a partnership with Hailo, a leading Artificial Intelligence (AI) chipmaker, to meet skyrocketing demand for next-generation AI applications at the edge. The latest UP Bridge the Gap platforms are compatible with Hailo's Hailo-8 M.2 AI Acceleration Module, offering unprecedented AI performance with best-in-class power efficiency.

Edge computing requires increasingly intensive workloads for computer vision and other artificial intelligence tasks, making it increasingly important to move Deep Learning workloads from the cloud to the edge. Running AI applications at the edge ensures real-time inferencing, data privacy, and low latency for smart city, smart retail, Industry 4.0, and many other applications across various markets.

MAINGEAR Launches New NVIDIA GeForce RTX 3050 Desktops, Offering Next-Gen Gaming Features

MAINGEAR—an award-winning PC system integrator of custom gaming desktops, notebooks, and workstations—today announced that new NVIDIA GeForce RTX 3050 graphics cards are now available to configure within MAINGEAR's product line of award-winning custom gaming desktop PCs and workstations. Featuring support for real-time ray tracing effects and AI technologies, MAINGEAR PCs equipped with the NVIDIA GeForce RTX 3050 offer gamers next-generation ray-traced graphics and performance comparable to the latest consoles.

Powered by Ampere, the NVIDIA GeForce RTX 3050 features NVIDIA's 2nd generation Ray Tracing Cores and 3rd generation Tensor Cores. Combined with new streaming multiprocessors and high-speed G6 memory, the NVIDIA GeForce RTX 3050 can power the latest and greatest games. NVIDIA RTX on 30 Series GPUs delivers real-time ray tracing effects, including shadows, reflections, and ambient occlusion (AO). The groundbreaking NVIDIA DLSS (Deep Learning Super Sampling) 2.0 AI technology utilizes Tensor Core AI processors to boost frame rates while producing sharp, uncompromised visual fidelity comparable to high native resolutions.

The Power of AI Arrives in Upcoming NVIDIA Game-Ready Driver Release with Deep Learning Dynamic Super Resolution (DLDSR)

Among the broad range of new game titles getting support, we are in for a surprise. NVIDIA yesterday announced the feature list of its upcoming Game Ready GeForce driver scheduled for public release on January 14th. According to a new blog post on NVIDIA's website, the forthcoming Game Ready driver will feature an AI-enhanced version of Dynamic Super Resolution (DSR), a feature that has been available in GeForce drivers for a while. The new AI-powered tech is what the company calls Deep Learning Dynamic Super Resolution, or DLDSR for short. It uses neural networks that require fewer input pixels to produce stunning image quality on your monitor.
NVIDIA explains: "Our January 14th Game Ready Driver updates the NVIDIA DSR feature with AI. DLDSR (Deep Learning Dynamic Super Resolution) renders a game at higher, more detailed resolution before intelligently shrinking the result back down to the resolution of your monitor. This downsampling method improves image quality by enhancing detail, smoothing edges, and reducing shimmering."

DLDSR improves upon DSR by adding an AI network that requires fewer input pixels, making the image quality of DLDSR 2.25X comparable to that of DSR 4X, but with higher performance. DLDSR works in most games on GeForce RTX GPUs, thanks to their Tensor Cores.
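The neural downsampler behind DLDSR is not public, but the underlying DSR idea, rendering above native resolution and filtering the result back down to the display, can be illustrated with a plain resampling filter; the AI network is what lets a 2.25X buffer approach 4X quality.

```python
# Plain illustration of the DSR idea: render at a higher internal resolution,
# then filter back down to the monitor's resolution. DLDSR replaces the simple
# downsampling filter with a neural network so a 2.25x buffer can approach the
# quality of a 4x buffer; that network is not reproduced here.
import cv2
import numpy as np

native_w, native_h = 1920, 1080
scale = 1.5                                    # 1.5x per axis = 2.25x the pixel count

render_w, render_h = int(native_w * scale), int(native_h * scale)
rendered = np.random.randint(0, 256, (render_h, render_w, 3), dtype=np.uint8)  # stand-in frame

# Area filtering stands in for the smoothing filter classic DSR applies when downsampling.
output = cv2.resize(rendered, (native_w, native_h), interpolation=cv2.INTER_AREA)
print(rendered.shape, "->", output.shape)
```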

congatec launches 10 new COM-HPC and COM Express Computer-on-Modules with 12th Gen Intel Core processors

congatec - a leading vendor of embedded and edge computing technology - introduces the 12th Generation Intel Core mobile and desktop processors (formerly code named Alder Lake) on 10 new COM-HPC and COM Express Computer-on-Modules. Featuring the latest high performance cores from Intel, the new modules in COM-HPC Size A and C as well as COM Express Type 6 form factors offer major performance gains and improvements for the world of embedded and edge computing systems. Most impressive is the fact that engineers can now leverage Intel's innovative performance hybrid architecture. Offering up to 14 cores/20 threads on BGA and 16 cores/24 threads on desktop variants (LGA mounted), 12th Gen Intel Core processors provide a quantum leap [1] in multitasking and scalability levels. Next-gen IoT and edge applications benefit from up to 6 or 8 (BGA/LGA) optimized Performance-cores (P-cores) plus up to 8 low power Efficient-cores (E-cores) and DDR5 memory support to accelerate multithreaded applications and execute background tasks more efficiently.

Axiomtek Partners with Leading AI Chipmaker Hailo to Launch Edge AI Computer for Smart City Applications

Axiomtek - a world-renowned leader relentlessly devoted to the research, development, and manufacturing of innovative, reliable, and highly efficient industrial computer products - is pleased to announce its partnership with leading AI chipmaker Hailo to launch the RSC100, a state-of-the-art ARM-based edge AI computer (also named Plato). The ARM-based RSC100 (Plato) supports the Hailo-8 edge AI processor, which features up to 26 TOPS for running deep learning applications at full scale efficiently, intelligently, and sustainably. The partnership with Hailo offers customers a new level of AI solutions across a wide range of market segments such as smart city, smart retail, Industry 4.0, and smart transportation.

"Hailo has gained a lot of experience in developing high-performance AI processors suitable for edge devices. The powerful yet affordable RSC100 (Plato) is our first fanless edge AI computing system that adopts the Hailo-8. The RSC100 (Plato) provides system developers with highly efficient implementation of innovative AI solutions at reduced time-to-market and engineering costs. With the advanced AI performance, the RSC100 (Plato) is well suited for use in smart city applications, including smart surveillance, smart factory, smart agriculture, and smart transportation," said Ken Pan, the product manager at Axiomtek.

God of War PC Port Arrives on January 14, 2022

Santa Monica Studio, a video game developer based in Los Angeles and owned by PlayStation Studios, is the creator of the highly successful game God of War. Today, the company announced that it will be releasing a God of War port for PC, entering a whole new market. The PC port of the game will allow thousands of players to enjoy the story of Kratos and his adventures with a considerable boost to graphics. According to the company's announcement, the PC port will allow fine-tuning of graphics settings and includes a range of new technologies to back it.

Some essential upgrades over the console port include native 4K rendering and an unlimited frame rate. The company stated that the ambient occlusion pipeline had been upgraded with GTAO and SSDO tech, creating unique visuals. In addition to that, the game will feature support for NVIDIA's Deep Learning Super Sampling (DLSS) and Reflex low-latency technology. For controllers, the game will feature support for Sony's DualSense and DualShock4 controllers. And last but not least, ultrawide gamers are in luck as well, as the game will support a 21:9 aspect ratio. The game is going to be available on January 14, 2022, on the Steam storefront.

For some PC visuals, check out the images and the video below.