News Posts matching #Generative AI

Extropic Intends to Accelerate AI through Thermodynamic Computing

Extropic, a pioneer in physics-based computing, this week emerged from stealth mode and announced the release of its Litepaper, which outlines the company's revolutionary approach to AI acceleration through thermodynamic computing. Founded in 2022 by Guillaume Verdon, Extropic has been developing novel chips and algorithms that leverage the natural properties of out-of-equilibrium thermodynamic systems to perform probabilistic computations for generative AI applications in a highly efficient manner. The Litepaper delves into Extropic's groundbreaking computational paradigm, which aims to address the limitations of current digital hardware in handling the complex probability distributions required for generative AI.

Today's algorithms spend around 25% of their time moving numbers around in memory, limiting the speedup achievable by accelerating specific operations. In contrast, Extropic's chips natively accelerate a broad class of probabilistic algorithms by running them physically as a rapid and energy-efficient, physics-based process in their entirety, unlocking a new regime of AI acceleration well beyond what was previously thought achievable. In coming out of stealth, the company has announced the fabrication of a superconducting prototype processor and developments surrounding room-temperature semiconductor-based devices for the broader market, with the goal of revolutionizing the field of AI acceleration and enabling new possibilities in generative AI.
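Extropic's Litepaper frames generative AI as, at its core, sampling from complex probability distributions. To ground what that class of probabilistic algorithms looks like when run on conventional digital hardware, below is a minimal, illustrative Gibbs-sampling sketch for a small Ising-style energy-based model. This is our own Python example, not Extropic code; the model size, couplings, and temperature are arbitrary placeholders.

```python
# Illustrative only: a tiny Ising-model Gibbs sampler showing the kind of
# probabilistic sampling workload Extropic aims to run physically in hardware.
# Not Extropic code; all names and parameters here are our own.
import numpy as np

rng = np.random.default_rng(0)
n = 16                                   # number of binary spins
J = rng.normal(scale=0.5, size=(n, n))   # random pairwise couplings
J = (J + J.T) / 2                        # symmetric interaction matrix
np.fill_diagonal(J, 0.0)
beta = 1.0                               # inverse temperature

state = rng.choice([-1, 1], size=n)

def gibbs_sweep(state):
    """One full sweep: resample each spin from its conditional distribution."""
    for i in range(len(state)):
        local_field = J[i] @ state                             # neighbor influence
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * local_field))
        state[i] = 1 if rng.random() < p_up else -1
    return state

# On a CPU or GPU each sweep is many memory-bound multiply-accumulates; the
# Litepaper's argument is that a physical sampling device can sidestep this.
for _ in range(1000):
    state = gibbs_sweep(state)
print("final sample:", state)
```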

Acer Reports FY2023 Net Income of NT$4.93 Billion and Announces NT$1.6 Cash Dividend Per Share

Acer Inc. (TWSE: 2353) announced today its financial results for the fourth quarter of 2023 and fiscal 2023 ended December 31. In the fourth quarter, Acer reported consolidated revenues of NT$63.15 billion, gross profits of NT$6.91 billion with 10.9% margin, operating income of NT$1.39 billion with 2.2% margin, and net income [1] of NT$1.02 billion with earnings per share (EPS) of NT$0.34.

For the full year of 2023, consolidated revenues reached NT$241.31 billion, gross profits were NT$25.82 billion with 10.7% margin, operating income was NT$4.23 billion with 1.8% margin, and net income was NT$4.93 billion with earnings per share (EPS) of NT$1.64. Acer's computer and display business has returned to profitability and normal seasonality, and inventory is under control. The company is optimistic about the business opportunities that artificial intelligence brings and expects generative AI to become a megatrend in 2024 and beyond.

NVIDIA Introduces Generative AI Professional Certification

NVIDIA is offering a new professional certification in generative AI to enable developers to establish technical credibility in this important domain. Generative AI is revolutionizing industries worldwide, yet there is a critical skills gap and a need to upskill employees to more fully harness the technology. Available for the first time from NVIDIA, this new professional certification enables developers, career professionals, and others to validate and showcase their generative AI skills and expertise. The program introduces two associate-level generative AI certifications, focusing on proficiency in large language models and multimodal workflow skills.

"Generative AI has moved to center stage as governments, industries and organizations everywhere look to harness its transformative capabilities," NVIDIA founder and CEO Jensen Huang recently said. The certification will become available starting at GTC, where in-person attendees can also access recommended training to prepare for a certification exam. "Organizations in every industry need to increase their expertise in this transformative technology," said Greg Estes, VP of developer programs at NVIDIA. "Our goals are to assist in upskilling workforces, sharpen the skills of qualified professionals, and enable individuals to demonstrate their proficiency in order to gain a competitive advantage in the job market."

NVIDIA and HP Supercharge Data Science and Generative AI on Workstations

NVIDIA and HP Inc. today announced that NVIDIA CUDA-X data processing libraries will be integrated with HP AI workstation solutions to turbocharge the data preparation and processing work that forms the foundation of generative AI development.

Built on the NVIDIA CUDA compute platform, CUDA-X libraries speed data processing for a broad range of data types, including tables, text, images and video. They include the NVIDIA RAPIDS cuDF library, which accelerates the work of the nearly 10 million data scientists using pandas software by up to 110x using an NVIDIA RTX 6000 Ada Generation GPU instead of a CPU-only system, without requiring any code changes.
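The "without requiring any code changes" claim refers to RAPIDS cuDF's pandas accelerator mode, which intercepts standard pandas calls and runs them on the GPU where supported, falling back to the CPU otherwise. A minimal sketch of the typical usage pattern is below; the CSV path and column names are placeholders chosen for illustration, and actual speedups depend on the workload and GPU.

```python
# Hedged sketch: enabling the RAPIDS cuDF pandas accelerator so that ordinary
# pandas code runs GPU-accelerated where supported, with CPU fallback.
# The file path and column names are illustrative placeholders.
import cudf.pandas
cudf.pandas.install()      # must run before pandas is imported

import pandas as pd

df = pd.read_csv("transactions.csv")             # loaded via cuDF under the hood
summary = (
    df.groupby("customer_id")["amount"]
      .agg(["count", "sum", "mean"])
      .sort_values("sum", ascending=False)
)
print(summary.head())
```

The same unmodified script can also be launched with `python -m cudf.pandas script.py`, which is how existing pandas workloads are typically accelerated without edits.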

Qualcomm AI Hub Introduced at MWC 2024

Qualcomm Technologies, Inc. unveiled its latest advancements in artificial intelligence (AI) at Mobile World Congress (MWC) Barcelona. From the new Qualcomm AI Hub, to cutting-edge research breakthroughs and a display of commercial AI-enabled devices, Qualcomm Technologies is empowering developers and revolutionizing user experiences across a wide range of devices powered by Snapdragon and Qualcomm platforms.

"With Snapdragon 8 Gen 3 for smartphones and Snapdragon X Elite for PCs, we sparked commercialization of on-device AI at scale. Now with the Qualcomm AI Hub, we will empower developers to fully harness the potential of these cutting-edge technologies and create captivating AI-enabled apps," said Durga Malladi, senior vice president and general manager, technology planning and edge solutions, Qualcomm Technologies, Inc. "The Qualcomm AI Hub provides developers with a comprehensive AI model library to quickly and easily integrate pre-optimized AI models into their applications, leading to faster, more reliable and private user experiences."

Supermicro Accelerates Performance of 5G and Telco Cloud Workloads with New and Expanded Portfolio of Infrastructure Solutions

Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, delivers an expanded portfolio of purpose-built infrastructure solutions to accelerate performance and increase efficiency in 5G and telecom workloads. With one of the industry's most diverse offerings, Supermicro enables customers to expand public and private 5G infrastructures with improved performance per watt and support for new and innovative AI applications. As a long-term advocate of open networking platforms and a member of the O-RAN Alliance, Supermicro's portfolio incorporates systems featuring 5th Gen Intel Xeon processors, AMD EPYC 8004 Series processors, and the NVIDIA Grace Hopper Superchip.

"Supermicro is expanding our broad portfolio of sustainable and state-of-the-art servers to address the demanding requirements of 5G and telco markets and Edge AI," said Charles Liang, president and CEO of Supermicro. "Our products are not just about technology, they are about delivering tangible customer benefits. We quickly bring data center AI capabilities to the network's edge using our Building Block architecture. Our products enable operators to offer new capabilities to their customers with improved performance and lower energy consumption. Our edge servers contain up to 2 TB of high-speed DDR5 memory, 6 PCIe slots, and a range of networking options. These systems are designed for increased power efficiency and performance-per-watt, enabling operators to create high-performance, customized solutions for their unique requirements. This reassures our customers that they are investing in reliable and efficient solutions."

Jensen Huang to Unveil Latest AI Breakthroughs at GTC 2024 Conference

NVIDIA today announced it will host its flagship GTC 2024 conference at the San Jose Convention Center from March 18-21. More than 300,000 people are expected to register to attend in person or virtually. NVIDIA founder and CEO Jensen Huang will deliver the keynote from the SAP Center on Monday, March 18, at 1 p.m. Pacific time. It will be livestreamed and available on demand. Registration is not required to view the keynote online. Since Huang first highlighted machine learning in his 2014 GTC keynote, NVIDIA has been at the forefront of the AI revolution. The company's platforms have played a crucial role in enabling AI across numerous domains including large language models, biology, cybersecurity, data center and cloud computing, conversational AI, networking, physics, robotics, and quantum, scientific and edge computing.

The event's 900 sessions and over 300 exhibitors will showcase how organizations are deploying NVIDIA platforms to achieve remarkable breakthroughs across industries, including aerospace, agriculture, automotive and transportation, cloud services, financial services, healthcare and life sciences, manufacturing, retail and telecommunications. "Generative AI has moved to center stage as governments, industries and organizations everywhere look to harness its transformative capabilities," Huang said. "GTC has become the world's most important AI conference because the entire ecosystem is there to share knowledge and advance the state of the art. Come join us."

Samsung Electronics Collaborates with Arm on Optimized Next Gen Cortex-X CPU Using 2nm SF2 GAAFET Process

Samsung Electronics Co., Ltd., a world leader in advanced semiconductor technology, today announced a collaboration to deliver an optimized next-generation Arm Cortex-X CPU developed on Samsung Foundry's latest Gate-All-Around (GAA) process technology. This initiative builds on years of partnership, with millions of devices shipped using Arm CPU intellectual property (IP) on various process nodes offered by Samsung Foundry.

This collaboration sets the stage for a series of announcements and planned innovations between Samsung and Arm. The companies have bold plans to reinvent 2-nanometer (nm) GAA for next-generation data center and infrastructure custom silicon, and to deliver a groundbreaking AI chiplet solution that they say will revolutionize the future generative artificial intelligence (AI) mobile computing market.

NVIDIA Introduces NVIDIA RTX 2000 Ada Generation GPU

Generative AI is driving change across industries—and to take advantage of its benefits, businesses must select the right hardware to power their workflows. The new NVIDIA RTX 2000 Ada Generation GPU delivers the latest AI, graphics and compute technology to compact workstations, offering up to 1.5x the performance of the previous-generation RTX A2000 12 GB in professional workflows. From crafting stunning 3D environments to streamlining complex design reviews to refining industrial designs, the card's capabilities pave the way for an AI-accelerated future, empowering professionals to achieve more without compromising on performance or capabilities. Modern multi-application workflows, such as AI-powered tools, multi-display setups and high-resolution content, put significant demands on GPU memory. With 16 GB of memory in the RTX 2000 Ada, professionals can tap the latest technologies and tools to work faster and better with their data.

Powered by NVIDIA RTX technology, the new GPU delivers impressive realism in graphics with NVIDIA DLSS, delivering ultra-high-quality, photorealistic ray-traced images more than 3x faster than before. In addition, the RTX 2000 Ada enables an immersive experience for enterprise virtual-reality workflows, such as for product design and engineering design reviews. With its blend of performance, versatility and AI capabilities, the RTX 2000 Ada helps professionals across industries achieve efficiencies. Architects and urban planners can use it to accelerate visualization workflows and structural analysis, enhancing design precision. Product designers and engineers using industrial PCs can iterate rapidly on product designs with fast, photorealistic rendering and AI-powered generative design. Content creators can edit high-resolution videos and images seamlessly, and use AI for realistic visual effects and content creation assistance. And in vital embedded applications and edge computing, the RTX 2000 Ada can power real-time data processing for medical devices, optimize manufacturing processes with predictive maintenance and enable AI-driven intelligence in retail environments.

Cisco & NVIDIA Announce Easy to Deploy & Manage Secure AI Solutions for Enterprise

This week, Cisco and NVIDIA have announced plans to deliver AI infrastructure solutions for the data center that are easy to deploy and manage, enabling the massive computing power that enterprises need to succeed in the AI era. "AI is fundamentally changing how we work and live, and history has shown that a shift of this magnitude is going to require enterprises to rethink and re-architect their infrastructures," said Chuck Robbins, Chair and CEO, Cisco. "Strengthening our great partnership with NVIDIA is going to arm enterprises with the technology and the expertise they need to build, deploy, manage, and secure AI solutions at scale." Jensen Huang, founder and CEO of NVIDIA said: "Companies everywhere are racing to transform their businesses with generative AI. Working closely with Cisco, we're making it easier than ever for enterprises to obtain the infrastructure they need to benefit from AI, the most powerful technology force of our lifetime."

A Powerful Partnership
Cisco, with its industry-leading expertise in Ethernet networking and extensive partner ecosystem, together with NVIDIA, the inventor of the GPU that fueled the AI boom, share a vision and commitment to help customers navigate the transitions for AI with highly secure Ethernet-based infrastructure. Cisco and NVIDIA have offered a broad range of integrated product solutions over the past several years across Webex collaboration devices and data center compute environments to enable hybrid workforces with flexible workspaces, AI-powered meetings and virtual desktop infrastructure.

Huawei Reportedly Prioritizing Ascend AI GPU Production

Huawei's Ascend 910B AI GPU is reportedly in high demand in China—we last learned that NVIDIA's latest US sanction-busting H20 "Hopper" model is lined up as a main competitor, allegedly in terms of both pricing and performance. A recent Reuters report proposes that Huawei is reacting to native enterprise market trends by shifting its production priorities—in favor of Ascend product ranges, while demoting their Kirin smartphone chipset family. Generative AI industry experts believe that the likes of Alibaba and Tencent have rejected Team Green's latest batch of re-jigged AI chips (H20, L20 and L2)—tastes have gradually shifted to locally developed alternatives.

Huawei leadership is seemingly keen to seize these growth opportunities—their Ascend 910B is supposedly ideal for workloads "that require low-to-mid inferencing power." Reuters has spoken to three anonymous sources—all with insider knowledge of goings-on at a single facility that manufactures Ascend AI chips and Kirin smartphone SoCs. Two of the leakers claim that this unnamed fabrication location is facing many "production quality" challenges, namely output being "hamstrung by a low yield rate." The report claims that Huawei has pivoted by deprioritizing Kirin 9000S (7 nm) production, thus creating a knock-on effect for its premium Mate 60 smartphone range.

FTC Launches Inquiry into Generative AI Investments and Partnerships

The Federal Trade Commission announced today that it issued orders to five companies requiring them to provide information regarding recent investments and partnerships involving generative AI companies and major cloud service providers. The agency's 6(b) inquiry will scrutinize corporate partnerships and investments with AI providers to build a better internal understanding of these relationships and their impact on the competitive landscape. The compulsory orders were sent to Alphabet, Inc., Amazon.com, Inc., Anthropic PBC, Microsoft Corp., and OpenAI, Inc.

"History shows that new technologies can create new markets and healthy competition. As companies race to develop and monetize AI, we must guard against tactics that foreclose this opportunity, "said FTC Chair Lina M. Khan. "Our study will shed light on whether investments and partnerships pursued by dominant companies risk distorting innovation and undermining fair competition."

Team Group Launches the Industrial P745 Gen 4 SSD

The leading industrial memory and storage provider, Team Group, has employed its advanced R&D capabilities and manufacturing processes to launch the industrial P745 SSD, which combines 112-layer 3D NAND flash memory, PCIe Gen 4x4 speeds, and an 8-channel controller. Emphasizing high transfer speeds, power efficiency, and low latency, the P745 SSD delivers sequential read and write speeds of up to 7,000 MB/s and 6,200 MB/s, respectively, along with excellent IOPS performance. To meet the demands of AI applications, temperature control was enhanced to maintain stable and high-speed performance. In the rapidly developing era of AI and high-performance computing, Team Group continues to provide the best industrial storage solutions.

The P745 SSD is available in both standard temperature (0 to 70°C) and wide temperature (-40 to 85°C) models. It integrates Team Group's cooling technology, the patented graphene and fin heat sinks, resulting in a significant temperature reduction of about 8-15% compared to common products without fin heat sinks. The P745 can be configured to meet the needs of different application environments, enabling the product to maintain stable operation at high temperatures and high performance. The P745 is also equipped with advanced firmware that protects data by automatically adjusting speeds when temperatures exceed the safe range. With a maximum capacity of 4 TB, the P745 is an NVMe 1.4 drive that uses the PCIe Gen 4x4 interface and is backward compatible with PCIe 3.0 platforms. It features a built-in DRAM cache buffer for high-speed AI computing that enhances system loading and data caching, reducing NAND flash wear and increasing product life span. In addition, the P745 is equipped with an LDPC error correction function and AES 256-bit high-level encryption technology to ensure the accuracy and security of data transmission.
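For a rough sense of what the rated sequential figures mean in practice, here is a back-of-the-envelope calculation, using only the numbers quoted above, of how long a full-capacity read or write of the 4 TB model would take; this is our own arithmetic, not a vendor figure.

```python
# Back-of-the-envelope check of the quoted P745 figures (our own arithmetic).
capacity_tb = 4
capacity_mb = capacity_tb * 1_000_000          # decimal TB -> MB
seq_read_mbps = 7000                           # quoted sequential read, MB/s
seq_write_mbps = 6200                          # quoted sequential write, MB/s

read_minutes = capacity_mb / seq_read_mbps / 60
write_minutes = capacity_mb / seq_write_mbps / 60
print(f"Full-drive read:  ~{read_minutes:.1f} min")   # ~9.5 min
print(f"Full-drive write: ~{write_minutes:.1f} min")  # ~10.8 min
```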

Intel and DigitalBridge Launch Articul8, an Enterprise Generative AI Company

Intel Corp and DigitalBridge Group, Inc., a global investment firm, today announced the formation of Articul8 AI, Inc. (Articul8), an independent company offering enterprise customers a full-stack, vertically-optimized and secure generative artificial intelligence (GenAI) software platform. The platform delivers AI capabilities that keep customer data, training and inference within the enterprise security perimeter. The platform also provides customers the choice of cloud, on-prem or hybrid deployment.

Articul8 was created with intellectual property (IP) and technology developed at Intel, and the two companies will remain strategically aligned on go-to-market opportunities and collaborate on driving GenAI adoption in the enterprise. Arun Subramaniyan, formerly vice president and general manager in Intel's Data Center and AI Group, has assumed leadership of Articul8 as its CEO.

Dell Generative AI Open Ecosystem with AMD Instinct Accelerators

Generative AI (GenAI) is the decade's most promising accelerator for innovation, with 78% of IT decision makers reporting they are largely excited about the potential impact GenAI can have on their organizations.¹ Most see GenAI as a means to provide productivity gains, streamline processes and achieve cost savings. Harnessing this technology is critical to ensure organizations can compete in this new digital era.

Dell Technologies and AMD are coming together to unveil an expansion to the Dell Generative AI Solutions portfolio, continuing the work of accelerating advanced workloads and offering businesses more choice to continue their unique GenAI journeys. This new technology highlights a pivotal role played by open ecosystems and silicon diversity in empowering customers with simple, trusted and tailored solutions to bring AI to their data.

AMD Ryzen 8040 Series "Hawk Point" Mobile Processors Announced with a Faster NPU

AMD today announced the new Ryzen 8040 mobile processor series codenamed "Hawk Point." These chips are shipping to notebook manufacturers now, and the first notebooks powered by these should be available to consumers in Q1-2024. At the heart of this processor is a significantly faster neural processing unit (NPU), designed to accelerate AI applications that will become relevant next year, as Microsoft prepares to launch Windows 12, and software vendors make greater use of generative AI in consumer applications.

The Ryzen 8040 "Hawk Point" processor is almost identical in design and features to the Ryzen 7040 "Phoenix," except for a faster Ryzen AI NPU. While this is based on the same first-generation XDNA architecture, its NPU performance has been increased to 16 TOPS, compared to 10 TOPS for the NPU on the "Phoenix" silicon. AMD is taking a whole-of-silicon approach to AI acceleration, which includes not just the NPU, but also the "Zen 4" CPU cores that support the AI-relevant AVX-512 VNNI instruction set, and the iGPU based on the RDNA 3 graphics architecture, with each of its compute units featuring two AI accelerators, components that let the SIMD cores crunch matrix math. The whole-of-silicon performance figure for "Phoenix" is 33 TOPS, while "Hawk Point" boasts 39 TOPS. In AMD's benchmarks, "Hawk Point" is shown delivering a 40% improvement in vision models and Llama 2 over the Ryzen 7040 "Phoenix" series.
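Taking the TOPS figures quoted above at face value, the generation-over-generation uplift works out as follows; this is our own arithmetic on the stated numbers, not an AMD benchmark.

```python
# Our own arithmetic on the TOPS figures quoted above for "Phoenix" vs "Hawk Point".
phoenix_npu, hawk_npu = 10, 16          # XDNA NPU TOPS
phoenix_total, hawk_total = 33, 39      # whole-of-silicon TOPS (NPU + CPU + iGPU)

print(f"NPU uplift:              {hawk_npu / phoenix_npu - 1:+.0%}")        # +60%
print(f"Whole-of-silicon uplift: {hawk_total / phoenix_total - 1:+.0%}")    # ~+18%
```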

AWS and NVIDIA Partner to Deliver 65 ExaFLOP AI Supercomputer, Other Solutions

Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), and NVIDIA (NASDAQ: NVDA) today announced an expansion of their strategic collaboration to deliver the most-advanced infrastructure, software and services to power customers' generative artificial intelligence (AI) innovations. The companies will bring together the best of NVIDIA and AWS technologies—from NVIDIA's newest multi-node systems featuring next-generation GPUs, CPUs and AI software, to AWS Nitro System advanced virtualization and security, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability—that are ideal for training foundation models and building generative AI applications.

The expanded collaboration builds on a longstanding relationship that has fueled the generative AI era by offering early machine learning (ML) pioneers the compute performance required to advance the state-of-the-art in these technologies.

AWS Unveils Next Generation AWS-Designed Graviton4 and Trainium2 Chips

At AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), today announced the next generation of two AWS-designed chip families—AWS Graviton4 and AWS Trainium2—delivering advancements in price performance and energy efficiency for a broad range of customer workloads, including machine learning (ML) training and generative artificial intelligence (AI) applications. Graviton4 and Trainium2 mark the latest innovations in chip design from AWS. With each successive generation of chip, AWS delivers better price performance and energy efficiency, giving customers even more options—in addition to chip/instance combinations featuring the latest chips from third parties like AMD, Intel, and NVIDIA—to run virtually any application or workload on Amazon Elastic Compute Cloud (Amazon EC2).

NVIDIA's New Ethernet Networking Platform for AI Available Soon From Dell Technologies, Hewlett Packard Enterprise, Lenovo

NVIDIA today announced that Dell Technologies, Hewlett Packard Enterprise and Lenovo will be the first to integrate NVIDIA Spectrum-X Ethernet networking technologies for AI into their server lineups to help enterprise customers speed up generative AI workloads. Purpose-built for generative AI, Spectrum-X offers enterprises a new class of Ethernet networking that can achieve 1.6x higher networking performance for AI communication versus traditional Ethernet offerings. The new systems coming from three of the top system makers bring together Spectrum-X with NVIDIA Tensor Core GPUs, NVIDIA AI Enterprise software and NVIDIA AI Workbench software to provide enterprises the building blocks to transform their businesses with generative AI.

"Generative AI and accelerated computing are driving a generational transition as enterprises upgrade their data centers to serve these workloads," said Jensen Huang, founder and CEO of NVIDIA. "Accelerated networking is the catalyst for a new wave of systems from NVIDIA's leading server manufacturer partners to speed the shift to the era of generative AI."

TYAN Announces New Server Line-Up Powered by 4th Gen AMD EPYC (9004/8004 Series) and AMD Ryzen (7000 Series) Processors at SC23

TYAN, an industry leader in server platform design and a subsidiary of MiTAC Computing Technology Corporation, debuts its new server line-up for 4th Gen AMD EPYC & AMD Ryzen Processors at SC23, Booth #1917, in the Colorado Convention Center, Denver, CO, November 13-16.

The AMD EPYC 9004 processor delivers leadership performance and is optimized for a wide range of HPC, cloud-native computing, and generative AI workloads. TYAN offers server platforms supporting the AMD EPYC 9004 processors that provide up to 128 "Zen 4c" cores and 256 MB of L3 cache for dynamic cloud-native applications, with high performance, density, energy efficiency, and compatibility.

NVIDIA Introduces Generative AI Foundry Service on Microsoft Azure for Enterprises and Startups Worldwide

NVIDIA today introduced an AI foundry service to supercharge the development and tuning of custom generative AI applications for enterprises and startups deploying on Microsoft Azure.

The NVIDIA AI foundry service pulls together three elements—a collection of NVIDIA AI Foundation Models, NVIDIA NeMo framework and tools, and NVIDIA DGX Cloud AI supercomputing services—that give enterprises an end-to-end solution for creating custom generative AI models. Businesses can then deploy their customized models with NVIDIA AI Enterprise software to power generative AI applications, including intelligent search, summarization and content generation.

QNAP Introduces the New Half-width Rackmount 100GbE Managed Switch with 1.2 Tbps Capacity

QNAP Systems, Inc., a leading computing, networking, and storage solutions innovator, today released the new 100GbE QSFP28/25GbE SFP28 managed switch QSW-M7308R-4X. Offering up to 1200 Gbps of switching capacity in a compact half-width rackmount case, the QSW-M7308R-4X satisfies the high-bandwidth demands of big data storage, video editing, virtualization, and AI applications to accelerate deploying high-speed data storage centers, smart medicine, and professional multimedia studios.

"According to market research, the demand for 100GbE+ ultra-high-speed switches has taken off in 2023, in order to support the bandwidth demands of new applications such as Generative AI workloads and cluster servers." said Jerry Deng, Product Manager of QNAP, adding "QNAP's first 100GbE switch is a cost-optimized and space-saving 100GbE solution for SMBs, and is ideally paired with QNAP 25GbE NAS or other NAS with 25GbE/100GbE network cards to fully maximize your IT potential."

NVIDIA Turbocharges Generative AI Training in MLPerf Benchmarks

NVIDIA's AI platform raised the bar for AI training and high performance computing in the latest MLPerf industry benchmarks. Among many new records and milestones, one in generative AI stands out: NVIDIA Eos - an AI supercomputer powered by a whopping 10,752 NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking - completed a training benchmark based on a GPT-3 model with 175 billion parameters trained on one billion tokens in just 3.9 minutes. That's a nearly 3x gain from 10.9 minutes, the record NVIDIA set when the test was introduced less than six months ago.

The benchmark uses a portion of the full GPT-3 data set behind the popular ChatGPT service that, by extrapolation, Eos could now train in just eight days, 73x faster than a prior state-of-the-art system using 512 A100 GPUs. The acceleration in training time reduces costs, saves energy and speeds time-to-market. It's heavy lifting that makes large language models widely available so every business can adopt them with tools like NVIDIA NeMo, a framework for customizing LLMs. In a new generative AI test ‌this round, 1,024 NVIDIA Hopper architecture GPUs completed a training benchmark based on the Stable Diffusion text-to-image model in 2.5 minutes, setting a high bar on this new workload. By adopting these two tests, MLPerf reinforces its leadership as the industry standard for measuring AI performance, since generative AI is the most transformative technology of our time.
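The headline figures above are self-consistent; a quick sanity check of the arithmetic, using only the numbers quoted in the text, is shown below. Note that the eight-day and 73x values are NVIDIA's extrapolation to the full data set, not a measured run.

```python
# Our own sanity check of the MLPerf figures quoted above.
new_minutes, old_minutes = 3.9, 10.9
print(f"Gain over prior record: {old_minutes / new_minutes:.1f}x")   # ~2.8x ("nearly 3x")

# NVIDIA's extrapolation: Eos trains the full GPT-3 data set in ~8 days,
# 73x faster than a prior 512-A100 system would take for the same job.
eos_days = 8
a100_days = eos_days * 73
print(f"Extrapolated 512-A100 time: ~{a100_days} days (~{a100_days / 365:.1f} years)")
```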

Microsoft Windows 11 23H2 Major Update Begins Rolling Out, Bets Big on Generative AI

Microsoft on Tuesday began rolling out Windows 11 23H2, the year's major update to its PC operating system. This release sees a major integration of AI into several features across the OS. To begin with, Microsoft Copilot, which made its debut with 365 and Office, is getting integrated with Windows. Powered by Bing Chat, Copilot is a GPT-based chatbot that not only gathers information from web search, but can also perform a variety of OS-level functions. For example, you can ask it to pair a Bluetooth device, or to find files and content on your machine. The WIN+C key now brings up Copilot. Next up, Microsoft Paint gets its biggest feature update, with the generative AI-based Paint Cocreator feature. Not only will Paint assist your brush strokes in getting shapes and contents right, but, much like Stable Diffusion and Midjourney, Paint now has a prompt-based image-generation feature. For now, Paint Cocreator is being released as a preview feature.

Microsoft Clipchamp, the video editor included with Windows, now has a set of generative AI enhancements of its own, with tools such as Auto Compose, which assists in building a movie by getting the sequence of clips, transitions, effects, and filters right, plus audio features such as narration and background score. Clipchamp also integrates with social platforms including TikTok, YouTube, and LinkedIn. Snipping Tool, the screengrab application of Windows, gets a couple of AI enhancements too, such as scanning an image to extract and redact information. Photos gets AI-accelerated image recognition and categorization; much like Google Photos, you can look for a picture by describing what you are looking for. As with each annual major release, Microsoft will roll out 23H2 in a phased manner through Windows Update, but if you are impatient and want to immediately update, or perform a clean installation, visit the link below.

DOWNLOAD: Windows 11 23H2 (Installation Assistant, Media Creator, ISOs)

NVIDIA NeMo: Designers Tap Generative AI for a Chip Assist

A research paper released this week describes ways generative AI can assist one of the most complex engineering efforts: designing semiconductors. The work demonstrates how companies in highly specialized fields can train large language models (LLMs) on their internal data to build assistants that increase productivity.

Few pursuits are as challenging as semiconductor design. Under a microscope, a state-of-the-art chip like an NVIDIA H100 Tensor Core GPU (above) looks like a well-planned metropolis, built with tens of billions of transistors, connected on streets 10,000x thinner than a human hair. Multiple engineering teams coordinate for as long as two years to construct one of these digital mega cities. Some groups define the chip's overall architecture, some craft and place a variety of ultra-small circuits, and others test their work. Each job requires specialized methods, software programs and computer languages.
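The general pattern the paper describes, adapting a large language model to a company's internal corpus, typically boils down to continued pretraining or fine-tuning of an open checkpoint on domain documents. Below is a generic, minimal sketch of that pattern using Hugging Face Transformers; it is not the method or code from the NVIDIA research paper, and the model name, file paths, and hyperparameters are placeholders.

```python
# Generic illustration of domain-adaptive fine-tuning of a causal LLM on an
# internal text corpus. Not the NVIDIA paper's code; model name, paths and
# hyperparameters below are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"                       # placeholder; any causal LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Internal documents (design specs, tool docs, bug reports) as plain text files.
dataset = load_dataset("text", data_files={"train": "internal_docs/*.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-adapted-lm",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

The resulting checkpoint can then serve as the base for an internal assistant, which is the productivity use case the paper highlights.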