News Posts matching #Google


Google Launches Axion Arm-based CPU for Data Center and Cloud

Google has officially joined the club of companies developing custom Arm-based CPUs in-house. As of today, Google's in-house semiconductor development team has launched the "Axion" CPU, based on the Arm instruction set architecture. Using Arm Neoverse V2 cores, Google claims the Axion CPU delivers 30% better performance than comparable general-purpose Arm chips and 50% better than Intel's processors. This custom silicon will fuel various Google Cloud offerings, including Compute Engine, Kubernetes Engine, Dataproc, Dataflow, and Cloud Batch. The Axion CPU, designed from the ground up, will initially support Google's AI-driven services like YouTube ads and Google Earth Engine. According to Mark Lohmeyer, Google Cloud's VP and GM of compute and machine learning infrastructure, Axion will soon be available to cloud customers, enabling them to leverage its performance without overhauling their existing applications.

Google's foray into custom silicon aligns with the strategies of its cloud rivals, Microsoft and Amazon. Microsoft recently unveiled its own AI chip for training large language models and an Arm-based CPU called Cobalt 100 for cloud and AI workloads. Amazon, on the other hand, has been offering Arm-based servers through its custom Graviton CPUs for several years. While Google won't sell these chips directly to customers, it plans to make them available through its cloud services, enabling businesses to rent and leverage their capabilities. As Amin Vahdat, the executive overseeing Google's in-house chip operations, stated, "Becoming a great hardware company is very different from becoming a great cloud company or a great organizer of the world's information."

US Government Wants Nuclear Plants to Offload AI Data Center Expansion

The expansion of AI technology affects not only the production and demand for graphics cards but also the electricity grid that powers them. Data centers hosting thousands of GPUs are becoming more common, and the industry has been building new facilities for GPU-enhanced servers to serve the growing demand for AI. However, these powerful GPUs often consume over 500 W each, and NVIDIA's latest Blackwell B200 GPU has a TGP of 1,000 W, a full kilowatt. These kilowatt-class GPUs will be deployed in data centers by the tens of thousands, resulting in multi-megawatt facilities. To manage the load on the national electricity grid, US President Joe Biden's administration has been discussing with big tech a re-evaluation of their power sources, possibly using smaller nuclear plants. In an Axios interview, Energy Secretary Jennifer Granholm noted that "AI itself isn't a problem because AI could help to solve the problem." The problem, rather, is the load on the national electricity grid, which can't sustain the rapid expansion of AI data centers.
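The grid-scale arithmetic behind these figures is straightforward. In the sketch below, the PUE overhead factor of 1.3 is an illustrative assumption covering cooling and other facility loads, not a figure from the report:

```python
# Back-of-the-envelope data center power estimate from GPU count alone.
# The PUE (Power Usage Effectiveness) factor is an illustrative assumption
# that folds in cooling and other facility overhead on top of the GPUs.
def facility_power_mw(num_gpus: int, tgp_watts: float = 1000.0, pue: float = 1.3) -> float:
    """Estimate total facility power in megawatts for a GPU fleet."""
    return round(num_gpus * tgp_watts * pue / 1_000_000, 1)

# Tens of thousands of 1 kW Blackwell-class GPUs land in multi-megawatt territory:
print(facility_power_mw(20_000))  # -> 26.0
```

Even before counting CPUs, networking, and storage, a 20,000-GPU facility at these assumptions sits in the tens of megawatts, which is the scale driving the grid concerns above.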

The Department of Energy (DOE) has reportedly been talking with firms, most notably hyperscalers like Microsoft, Google, and Amazon, about considering nuclear fission and fusion power plants to satisfy the needs of AI expansion. We have already discussed Microsoft's plan to embed a nuclear reactor near one of its data center facilities to help carry the load of thousands of GPUs running AI training/inference. This time, however, it is not just Microsoft: other tech giants are reportedly considering nuclear as well. They all need to offload their AI expansion from the US national power grid and develop a nuclear solution. Nuclear power supplies a mere 20% of US electricity, and the DOE is currently financing the restoration and return to service of Holtec's 800 MWe Palisades nuclear generating station with $1.52 billion. Microsoft is investing in a small modular reactor (SMR) energy strategy, which could serve as an example for other big tech companies to follow.

Google Launches Arm-Optimized Chrome for Windows, in Time for Qualcomm Snapdragon X Elite Processors

Google has just released an Arm-optimized version of its popular Chrome browser for Windows PCs. This new version is designed to take full advantage of Arm-based devices' hardware and operating system, promising users a faster and smoother browsing experience. The Arm-optimized Chrome for Windows has been developed in close collaboration with Qualcomm, ensuring that Chrome users get the best possible experience on current Arm-compatible PCs. Hiroshi Lockheimer, Senior Vice President at Google, stated, "We've designed Chrome browser to be fast, secure, and easy to use across desktops and mobile devices, and we're always looking for ways to bring this experience to more people." Early testers of the Arm-optimized Chrome have reported significant performance improvements compared to the x86-emulated version. The new browser is rolling out starting today and will be available on existing Arm devices, including PCs powered by Snapdragon 8cx, 8c, and 7c processors.

Soon, Chrome will receive a further performance boost with the launch of Qualcomm's upcoming Snapdragon X Elite SoC. Cristiano Amon, President and CEO of Qualcomm, expressed his excitement about the collaboration, saying, "As we enter the era of the AI PC, we can't wait to see Chrome shine by taking advantage of the powerful Snapdragon X Elite system." Qualcomm's Snapdragon X Elite devices are expected to hit the market in mid-2024 with "dramatic performance improvement in the Speedometer 2.0 benchmark" on reference hardware. Chrome is one of the most essential applications, so a native build running on Windows-on-Arm is a significant step for the platform, promising more investment from software makers.

Report: Apple to Use Google's Gemini AI for iPhones

In a world where the largest companies are riding the AI train, the biggest of them all—Apple—has seemed to stay quiet for a while. Even as many companies announce their systems/models, Apple has stayed relatively silent about the use of LLMs in its products. However, according to Bloomberg, Apple is not pushing out an AI model of its own; rather, it will license Google's leading Gemini models for its iPhone smartphones. Gemini is Google's leading AI model family with three tiers: Gemini Nano 1/2, Gemini Pro, and Gemini Ultra. Gemini Nano 1 and Nano 2 are designed to run locally on hardware like smartphones, while Gemini Pro and Ultra run inference on Google's servers and deliver results to the local device over an API.

Apple could use a local Gemini Nano for basic tasks while utilizing Gemini Pro or Ultra for more complex ones, with a router sending user input to the appropriate model. That way, users could use AI capabilities both online and offline. Since Apple is readying a suite of changes for iOS 18, backed by the Neural Engine inside its A-series Bionic chips, the LLM game of Apple's iPhones might get a significant upgrade from the Google partnership. While we still don't know the size of the deal, it is surely massive: Google gets to tap into the millions of devices Apple ships every year, and Apple gives its users a more optimized experience.
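The router pattern described above can be sketched as follows. The model names, thresholds, and routing heuristic are purely illustrative assumptions, not Apple's or Google's actual design:

```python
# Hypothetical sketch of the "router" pattern: send a request to a local
# on-device model when possible, fall back to a cloud model over an API.
# All names and thresholds here are illustrative, not real product APIs.
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    complexity: int  # e.g. estimated from prompt length / task type
    online: bool     # does the device currently have connectivity?

def route(req: Request) -> str:
    """Pick a model tier for the request."""
    if req.complexity <= 3:       # simple tasks: autocorrect, short summaries
        return "gemini-nano-local"
    if req.online:                # complex tasks go to the bigger cloud models
        return "gemini-pro-cloud" if req.complexity <= 7 else "gemini-ultra-cloud"
    return "gemini-nano-local"    # offline fallback: best effort on-device

print(route(Request("fix this typo", complexity=1, online=True)))   # gemini-nano-local
print(route(Request("write an essay", complexity=8, online=True)))  # gemini-ultra-cloud
```

The offline fallback branch is what would let AI features keep working without connectivity, as the paragraph above suggests.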

Google: CPUs are Leading AI Inference Workloads, Not GPUs

Today's AI infrastructure expansion relies mostly on GPU-accelerated servers. However, Google, one of the world's largest hyperscalers, has noted that, according to its internal Google Cloud analysis, CPUs still carry a leading share of AI/ML workloads. During the TechFieldDay event, Brandon Royal, product manager at Google Cloud, explained the position of CPUs in today's AI game. The AI lifecycle is divided into two parts: training and inference. During training, massive compute capacity is needed, along with enormous memory capacity, to fit ever-expanding AI models into memory. The latest models, like GPT-4 and Gemini, contain billions of parameters and require thousands of GPUs or other accelerators working in parallel to train efficiently.

Inference, on the other hand, requires less compute intensity but still benefits from acceleration. During inference, the pre-trained model is optimized and deployed to make predictions on new data. While less compute is needed than for training, latency and throughput are essential for real-time inference. Google found that, while GPUs are ideal for the training phase, models are often optimized and run for inference on CPUs. This means there are customers who choose CPUs as their platform for AI inference for a wide variety of reasons.
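At its core, inference is just a forward pass through the trained weights. The tiny dense network below is a generic illustration with random stand-in weights (not any Google model), showing the kind of matrix arithmetic a CPU can serve when latency budgets allow:

```python
import numpy as np

# Minimal CPU inference illustration: a forward pass is matrix multiplies
# plus nonlinearities. Weights are random stand-ins, not a trained model.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 4)), np.zeros(4)

def infer(x: np.ndarray) -> np.ndarray:
    """One forward pass: two dense layers with a ReLU in between."""
    h = np.maximum(x @ W1 + b1, 0.0)  # ReLU activation
    logits = h @ W2 + b2
    return logits.argmax(axis=-1)     # predicted class per input row

batch = rng.standard_normal((32, 8))  # a batch of 32 "requests"
preds = infer(batch)
print(preds.shape)  # one prediction per request
```

Batching requests like this is a common way to trade a little latency for much higher CPU throughput, which is one reason CPU inference remains viable for many workloads.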

Global Server Shipments Expected to Increase by 2.05% in 2024, with AI Servers Accounting For Around 12.1%

TrendForce underscores that the primary momentum for server shipments this year remains with American CSPs. However, due to persistently high inflation and elevated corporate financing costs curtailing capital expenditures, overall demand has not yet returned to pre-pandemic growth levels. Global server shipments are estimated to reach approximately 13.654 million units in 2024, an increase of about 2.05% YoY. Meanwhile, the market continues to focus on the deployment of AI servers, with their shipment share estimated at around 12.1%.

Foxconn is expected to see the highest growth rate, with an estimated annual increase of about 5-7%. This growth includes significant orders such as Dell's 16G platform, AWS Graviton 3 and 4, Google Genoa, and Microsoft Gen9. In terms of AI server orders, Foxconn has made notable inroads with Oracle and has also secured some AWS ASIC orders.

NVIDIA Accused of Acting as "GPU Cartel" and Controlling Supply

NVIDIA, the world's most important supplier fueling the AI frenzy, is facing accusations of acting as a "GPU cartel" and controlling supply in the data center market, according to statements made by executives at rival chipmaker Groq and former AMD executive Scott Herkelman. In an interview with the Wall Street Journal, Groq CEO Jonathan Ross alleged that some of NVIDIA's data center customers are afraid to even meet with rival AI chipmakers out of fear that NVIDIA will retaliate by delaying shipments of already ordered GPUs. This is despite NVIDIA's claims that it is trying to allocate supply fairly during global shortages. "This happens more than you expect, NVIDIA does this with DC customers, OEMs, AIBs, press, and resellers. They learned from GPP to not put it into writing. They just don't ship after a customer has ordered. They are the GPU cartel, and they control all supply," said former Senior Vice President and General Manager at AMD Radeon, Scott Herkelman, in response to the accusations on X/Twitter.

Google's Gemma Optimized to Run on NVIDIA GPUs, Gemma Coming to Chat with RTX

NVIDIA, in collaboration with Google, today launched optimizations across all NVIDIA AI platforms for Gemma—Google's state-of-the-art new lightweight 2 billion- and 7 billion-parameter open language models that can be run anywhere, reducing costs and speeding innovative work for domain-specific use cases.

Teams from the companies worked closely together to accelerate the performance of Gemma—built from the same research and technology used to create the Gemini models—with NVIDIA TensorRT-LLM, an open-source library for optimizing large language model inference, when running on NVIDIA GPUs in the data center, in the cloud and on PCs with NVIDIA RTX GPUs. This allows developers to target the installed base of over 100 million NVIDIA RTX GPUs available in high-performance AI PCs globally.

Samsung Lands Significant 2 nm AI Chip Order from Unnamed Hyperscaler

This week in its earnings call, Samsung announced that its foundry business has received a significant order for 2 nm AI chips, marking a major win for its advanced fabrication technology. The unnamed customer has contracted Samsung to produce AI accelerators on its upcoming 2 nm process node, which promises significant gains in performance and efficiency over today's leading-edge chips. Along with the AI chips, the deal includes supporting HBM and advanced packaging, indicating a large-scale and complex project. Industry sources speculate the order may be from a major hyperscaler like Google, Microsoft, or Alibaba, which are aggressively expanding their AI capabilities. Competition for AI chip contracts has heated up as the field becomes crucial for data centers, autonomous vehicles, and other emerging applications. Samsung said demand recovery in 2023 across smartphones, PCs, and enterprise hardware will fuel growth for its broader foundry business. It is forging ahead with 3 nm production while eyeing 2 nm for launch around 2025.

Compared to its 3 nm process, 2 nm aims to increase power efficiency by 25%, boost performance by 12%, and reduce chip area by 5%. The new order validates Samsung's billion-dollar investments in next-generation manufacturing. It also bolsters Samsung's position against Taiwan-based TSMC, which holds a large portion of the foundry market share. TSMC landed Apple as its first 2 nm customer, while Intel announced 5G infrastructure chip orders from Ericsson and Faraday Technology using its "Intel 18A" node. With rivals securing major customers, Samsung is aggressively pricing 2 nm to attract clients. Reports indicate Qualcomm may shift some flagship mobile chips to Samsung's foundry at the 2 nm node, so if yields are good, the node has great potential to attract customers.

FTC Launches Inquiry into Generative AI Investments and Partnerships

The Federal Trade Commission announced today that it issued orders to five companies requiring them to provide information regarding recent investments and partnerships involving generative AI companies and major cloud service providers. The agency's 6(b) inquiry will scrutinize corporate partnerships and investments with AI providers to build a better internal understanding of these relationships and their impact on the competitive landscape. The compulsory orders were sent to Alphabet, Inc., Amazon.com, Inc., Anthropic PBC, Microsoft Corp., and OpenAI, Inc.

"History shows that new technologies can create new markets and healthy competition. As companies race to develop and monetize AI, we must guard against tactics that foreclose this opportunity, "said FTC Chair Lina M. Khan. "Our study will shed light on whether investments and partnerships pursued by dominant companies risk distorting innovation and undermining fair competition."

Google Faces Potential Billion-Dollar Damages in TPU Patent Dispute

Tech giant Google is embroiled in a high-stakes legal battle over the alleged infringement of patents related to its Tensor Processing Units (TPUs), custom AI accelerator chips used to power machine learning applications. Massachusetts-based startup Singular Computing has accused Google of incorporating architectures described in several of its patents into the design of the TPU without permission. The disputed patents, first filed in 2009, outline computer architectures optimized for executing a high volume of low-precision calculations per cycle - an approach well-suited for neural network-based AI. In a 2019 lawsuit, Singular argues that Google knowingly infringed on these patents in developing its TPU v2 and TPU v3 chips introduced in 2017 and 2018. Singular Computing is seeking between $1.6 billion and $5.19 billion in damages from Google.
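As an illustration of the low-precision idea at the heart of the dispute, the snippet below affine-quantizes float32 values to int8 and back. This generic textbook scheme is only an assumption-free stand-in; it is not the method described in Singular's patents or used in Google's TPUs:

```python
import numpy as np

# Low-precision computation in miniature: trade per-value precision for
# many more operations per cycle. Float32 values are quantized to int8
# with a simple affine scheme, then reconstructed. This is a generic
# example, not Singular's or Google's actual architecture.
def quantize_int8(x: np.ndarray):
    """Affine-quantize a float array to int8, returning (values, scale)."""
    scale = float(np.abs(x).max()) / 127.0 or 1.0  # avoid zero scale
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

x = np.array([0.5, -1.25, 3.0, -0.01], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
print(np.max(np.abs(x - x_hat)))  # small reconstruction error, bounded by the scale
```

The appeal for neural networks is that int8 values are 4x smaller than float32 and much cheaper to multiply, while the bounded reconstruction error is usually tolerable for inference accuracy.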

Google denies these claims, stating that its TPUs were independently developed over many years. The company is currently appealing to have Singular's patents invalidated, which would undermine the infringement allegations. The high-profile case highlights mounting legal tensions as tech giants race to dominate the burgeoning field of AI hardware. With billions in potential damages at stake, the outcome could have major implications for the competitive landscape in cloud-based machine learning services. As both sides prepare for court, the dispute underscores the massive investments tech leaders like Google make to integrate specialized AI accelerators into their cloud infrastructures. Dominance in this sphere is a crucial strategic advantage as more industries embrace data-hungry neural network applications.

Update 17:25 UTC: According to Reuters, Google and Singular Computing have settled the case with details remaining private for the time being.

Samsung Announces the Galaxy S24 Series with Mobile AI

Samsung Electronics today unveiled the Galaxy S24 Ultra, Galaxy S24+ and Galaxy S24, unleashing new mobile experiences with Galaxy AI. Galaxy S series leads the way into a new era that will forever change how mobile devices empower users. AI amplifies nearly every experience on Galaxy S24 series, from enabling barrier-free communication with intelligent text and call translations, to maximizing creative freedom with Galaxy's ProVisual Engine, to setting a new standard for search that will change how Galaxy users discover the world around them.

"The Galaxy S24 series transforms our connection with the world and ignites the next decade of mobile innovation," said TM Roh, President and Head of Mobile eXperience (MX) Business at Samsung Electronics. "Galaxy AI is built on our innovation heritage and deep understanding of how people use their phones. We're excited to see how our users around the world empower their everyday lives with Galaxy AI to open up new possibilities."

Epic Wins Store Spat Against Google, Jury Holds Google Play Guilty of Monopolistic Practices

Epic Games won a pivotal anti-trust dispute against industry giant Google, with a jury holding Google Play and its billing service guilty of running an illegal monopoly for the sale of software and digital assets. Epic had sued both Google and Apple for running restrictive, walled-garden marketplaces on their mobile platforms, which forced people to buy, subscribe, or pay for its products only through those marketplaces, namely Google Play and the App Store, while taking huge revenue shares. Epic had sought to release its own marketplace, the Epic Games Store, on these platforms, so it could sell its wares just the way it does on the PC. With this favorable verdict, Epic stands to save "hundreds of millions or even billions of Dollars" in fees to Google. Meanwhile, Google stated that it is preparing to appeal in a higher court, on the basis that the Play Store isn't the only software/content marketplace, and that it competes with Apple's App Store (although not on the same devices).

Lenovo Introduces New Chromebox Micro for Digital and Interactive Display Solutions

Today, Lenovo announces its new Chromebox Micro media player during the Digital Signage Experience (DSE). The Lenovo Chromebox Micro is a new breed of Chromebox: an ultra-thin and affordable media player offering high performance, proven data security, and easier remote control and device management with ChromeOS.

The global digital signage market size was estimated at nearly USD 25 billion in 2022 and is expected to expand at a compound annual growth rate of 8 percent from 2023 to 2030. Growth in the industry is being driven by innovation in consumer experiences in retail and hospitality, as well as smarter infrastructure in healthcare, education, and the enterprise.

NVIDIA Experiences Strong Cloud AI Demand but Faces Challenges in China, with High-End AI Server Shipments Expected to Be Below 4% in 2024

NVIDIA's most recent FY3Q24 financial reports reveal record-high revenue from its data center segment, driven by escalating demand for AI servers from major North American CSPs. However, TrendForce points out that recent US government sanctions targeting China have impacted NVIDIA's business in the region. Despite strong shipments of NVIDIA's high-end GPUs—and the rapid introduction of compliant products such as the H20, L20, and L2—Chinese cloud operators are still in the testing phase, making substantial revenue contributions to NVIDIA unlikely in Q4. Gradual shipment increases are expected from the first quarter of 2024.

The US ban continues to influence China's foundry market as Chinese CSPs' high-end AI server shipments potentially drop below 4% next year
TrendForce reports that North American CSPs like Microsoft, Google, and AWS will remain key drivers of high-end AI servers (including those with NVIDIA, AMD, or other high-end ASIC chips) from 2023 to 2024. Their estimated shipment shares for 2024 are 24%, 18.6%, and 16.3%, respectively. Chinese CSPs such as ByteDance, Baidu, Alibaba, and Tencent (BBAT) are projected to have a combined shipment share of approximately 6.3% in 2023. However, this could decrease to less than 4% in 2024, considering the current and potential future impacts of the ban.

AMD Extends 3rd Gen EPYC CPU Lineup to Deliver New Levels of Value for Mainstream Applications

Today, AMD announced the extension of its 3rd Gen AMD EPYC processor family with six new offerings providing a robust suite of data center CPUs to meet the needs of general IT and mainstream computing for businesses seeking to leverage the economics of established platforms. The complete family of 3rd Gen AMD EPYC CPUs complements the leadership performance and efficiency of the latest 4th Gen AMD EPYC processors with impressive price-performance, modern security features and energy efficiency for less technically demanding business critical workloads.

The race to deliver AI and high performance computing is creating a technology gap for IT decision-makers seeking mainstream performance. To meet the growing demand for widely deployed, cost effective and proven mainstream solutions in the mid-market and in the channel, AMD is extending the 3rd Gen EPYC CPU offering to provide excellent value, performance, energy efficiency and security features for business-critical applications. The 3rd Gen AMD EPYC CPU portfolio enables a wide array of broadly deployed enterprise server solutions, supported by trusted channel sellers and OEMs such as Cisco, Dell Technologies, Gigabyte, HPE, Lenovo and Supermicro.

Qualcomm to Bring RISC-V Based Wearable Platform to Wear OS by Google

Qualcomm Technologies, Inc. announced today that they are building on their long-standing collaboration with Google by bringing a RISC-V based wearables solution for use with Wear OS by Google. This expanded framework will help pave the way for more products within the ecosystem to take advantage of custom CPUs that are low power and high performance. Leading up to this, the companies will continue to invest in Snapdragon Wear platforms as the leading smartwatch silicon provider for the Wear OS ecosystem.

"Qualcomm Technologies have been a pillar of the Wear OS ecosystem, providing high performance, low power systems for many of our OEM partners," said Bjorn Kilburn, GM of Wear OS by Google. "We are excited to extend our work with Qualcomm Technologies and bring a RISC-V wearable solution to market."
"We are excited to leverage RISC-V and expand our Snapdragon Wear platform as a leading silicon provider for Wear OS. Our Snapdragon Wear platform innovations will help the Wear OS ecosystem rapidly evolve and streamline new device launches globally," said Dino Bekis, vice president and general manager, Wearables and Mixed Signal Solutions, Qualcomm Technologies, Inc.

OpenAI Could Make Custom Chips to Power Next-Generation AI Models

OpenAI, the company behind ChatGPT and the GPT-4 large language model, is reportedly exploring the possibility of creating custom silicon to power its next-generation AI models. According to Reuters, insider sources have even alluded to the firm evaluating potential acquisitions of chip design firms. While a final decision is yet to be made, conversations from as early as last year highlight OpenAI's struggle with the growing scarcity and escalating costs of AI chips, with NVIDIA being its primary supplier. The CEO of OpenAI, Sam Altman, has been rather vocal about the shortage of GPUs, a market NVIDIA dominates with control of roughly 80% of the global supply of AI-optimized chips.

Back in 2020, OpenAI banked on a colossal supercomputer built by Microsoft, a significant investor in OpenAI, which harnesses the power of 10,000 NVIDIA GPUs. This setup is instrumental in driving the operations of ChatGPT, which, as per Bernstein analyst Stacy Rasgon, comes with a hefty price tag: each interaction with ChatGPT is estimated to cost around 4 cents. Drawing a comparison with Google search, if ChatGPT queries ever grew to a mere tenth of Google's search volume, the initial GPU investment would skyrocket to an overwhelming $48.1 billion, with a recurring annual expenditure of approximately $16 billion for sustained operations. Invited to comment, OpenAI declined to provide a statement. The potential entry into custom silicon signals a strategic move toward greater self-reliance and cost optimization, so further development of AI can be sustained.
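The shape of that back-of-the-envelope calculation can be reproduced. The per-query cost is the Bernstein estimate quoted above, while the daily Google search volume is an outside assumption (a commonly cited public estimate); Rasgon's exact methodology isn't given, so the result differs from the article's totals:

```python
# Sketch of the serving-cost scale comparison. COST_PER_QUERY is the
# Bernstein estimate quoted above; the daily Google search volume is an
# assumed figure, so this only shows the shape of the calculation.
COST_PER_QUERY = 0.04             # dollars per ChatGPT interaction (quoted)
GOOGLE_SEARCHES_PER_DAY = 8.5e9   # assumed, commonly cited public estimate
SHARE = 0.10                      # "a mere tenth" of Google's volume

daily_queries = GOOGLE_SEARCHES_PER_DAY * SHARE
annual_cost = daily_queries * COST_PER_QUERY * 365
print(f"${annual_cost / 1e9:.1f}B per year")  # prints "$12.4B per year" under these assumptions
```

Whatever the exact inputs, multiplying per-query cost by web-search-scale volume quickly lands in the tens of billions of dollars per year, which is the economic pressure driving OpenAI toward custom silicon.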

Google Announces the Pixel 8 and Pixel 8 Pro

Meet Pixel 8 and Pixel 8 Pro, engineered by Google and built with AI at the center for a more helpful and personal experience. These phones are packed with first-of-their-kind features, all powered by Google Tensor G3. And they'll get seven years of software updates, including Android OS upgrades, security updates and regular Feature Drops. Take a closer look at the new phones—everything from the beautiful design and new sensors to updated cameras.

A polished look made for your everyday
Pixel 8 and Pixel 8 Pro are elegantly designed with softer silhouettes, beautiful metal finishes and recycled materials. Pixel 8, with its contoured edges and smaller size than Pixel 7, feels great in your hand. It has a 6.2-inch Actua display, which gives you real-world clarity and is 42% brighter than Pixel 7's display. Pixel 8 features satin metal finishes, a polished glass back and comes in Rose, Hazel and Obsidian.

NVIDIA Reportedly in Talks to Lease Data Center Space for its own Cloud Service

The recent development of AI models that are more capable than ever has led to a massive demand for hardware infrastructure that powers them. As the dominant player in the industry with its GPU and CPU-GPU solutions, NVIDIA has reportedly discussed leasing data center space to power its own cloud service for these AI applications. Called NVIDIA Cloud DGX, it will reportedly put the company right up against its clients, which are cloud service providers (CSPs) as well. Companies like Microsoft Azure, Amazon AWS, Google Cloud, and Oracle actively acquire NVIDIA GPUs to power their GPU-accelerated cloud instances. According to the report, this has been developing for a few years.

Additionally, it is worth noting that NVIDIA already owns the building blocks of a potential data center infrastructure: its DGX and HGX units can simply be interconnected in a data center, with cloud provisioning so developers can access NVIDIA's instances. A significant draw for end users is that NVIDIA could potentially undercut the prices of competing offerings, as it acquires GPUs at far lower cost than the CSPs, which pay the profit margin that NVIDIA imposes. This could attract potential customers, leaving hyperscalers like Amazon, Microsoft, and Google without a moat in the cloud game. Of course, until this project is official, we should take this information with a grain of salt.

Google Introduces Chromebook Plus Lineup: Better Performance and AI Capabilities

Today, Google announced its next generation of Chromebook devices, called the Chromebook Plus, said to improve upon the legacy set by Chromebooks over a decade ago. Starting at an enticing price point of $399, this new breed of Chromebooks integrates powerful AI capabilities and a range of built-in Google apps. Notably, it features tools like the Google Photos Magic Eraser and web-based Adobe Photoshop, positioning itself as a dynamic tool for productivity and creative exploration. In collaboration with hardware manufacturers such as Acer, ASUS, HP, and Lenovo, Google is launching a lineup of eight Chromebook Plus devices on the launch date, with more possibly coming in the future.

Each model boasts improved hardware configurations over the regular Chromebook, including processors like the Intel Core i3 12th Gen or the AMD Ryzen 3 7000 series, a minimum of 8 GB RAM, and 128 GB storage. Users are also in for a visual treat with a 1080p IPS display, ensuring crisp visuals for entertainment and work. And for the modern remote workforce, video conferencing gets a substantial upgrade. Every Chromebook Plus comes equipped with a 1080p camera and utilizes AI enhancements to elevate video call clarity, with compatibility spanning various platforms, including Google Meet, Zoom, and Microsoft Teams. Set to be available from October 8, 2023, in the US and October 9 in Canada and Europe, the Chromebook Plus is positioning itself as the go-to device for many users. On the other hand, the AI features are slated for arrival in 2024, when companies ensure their software is compatible.
Below you can see the upcoming models.

Get Ready to Do More with Lenovo's New IdeaPad Chromebook Plus Laptops

Lenovo unveils a selection of IdeaPad Chromebook Plus laptops, bringing together elevated hardware components with AI-powered tools for more productivity, creativity, and ease of use.

Lenovo's lineup of IdeaPad Chromebook Plus laptops continues the legacy of showcasing the ideal balance of value and performance, the ability to do more from anywhere, and reliability for peace of mind. Now equipped with more exclusive tools and premium services, the new selection of laptops comes with File Sync to access Google Drive files offline, a 1080p webcam with AI-powered video calling tools for crystal clear video conferencing, and other advanced features including the AI-powered Google Photos Magic Eraser that easily removes unwanted distractions from photos.

Broadcom Partners with Google Cloud to Strengthen Gen AI-Powered Cybersecurity

Symantec, a division of Broadcom Inc., is partnering with Google Cloud to embed generative AI (gen AI) into the Symantec Security platform in a phased rollout that will give customers a significant technical edge for detecting, understanding, and remediating sophisticated cyber attacks.

Symantec is leveraging the Google Cloud Security AI Workbench and a security-specific large language model (LLM), Sec-PaLM 2, across its portfolio to enable natural language interfaces and generate more comprehensive and easy-to-understand threat analyses. With Security AI Workbench-powered summarization of complex incidents and alignment to MITRE ATT&CK context, security operations center (SOC) analysts of all levels can better understand threats and be able to respond faster. That, in turn, translates into greater security and higher SOC productivity.

Bose Announces New QuietComfort Ultra Headphones and Earbuds

Today, Bose announces the next generation of its iconic QuietComfort line: the QuietComfort Ultra Headphones, QuietComfort Ultra Earbuds, and the QuietComfort Headphones, all featuring the world-renowned hallmarks that the QuietComfort name has become synonymous with—world-class noise cancellation, high-quality audio, and legendary comfort and stability. And now, the QC Ultra Headphones and Earbuds both feature an all-new premium design and debut Bose Immersive Audio, taking audio performance to an entirely new level.

The QuietComfort Ultra Headphones and QuietComfort Ultra Earbuds will be available beginning early October for $429 and $299 respectively. Both are available in Black and White Smoke. The QuietComfort Headphones will be available on September 21st for $349 also available in Black and White Smoke, plus a limited-edition Cypress Green. Pre-orders for all products begin today in the U.S. on Bose.com.

Arm Prepares for IPO: Apple, NVIDIA, Intel, and Samsung are Strategic Partners

Arm's impending IPO, valued between $60 billion and $70 billion, has reportedly garnered substantial backing from industry giants such as Apple, NVIDIA, Intel, and Samsung, as per sources cited in a Bloomberg report. This much-anticipated public offering serves as a litmus test for investor interest in new chip-related stocks and could reshape the tech industry landscape. While the information remains unofficial, it underscores the significant support Arm has received from major licensees, including Apple, AMD, Cadence, Intel, Google, NVIDIA, Samsung, and Synopsys, with each potentially contributing between $25 million and $100 million, a testament to their confidence in Arm's future prospects. Originally, SoftBank aimed to raise $8 billion to $10 billion through the IPO, but a strategic shift to retain a larger Arm stake revised the target to $5 billion to $7 billion.

This IPO's success holds paramount importance for SoftBank and its CEO, Masayoshi Son, particularly following the Vision Fund's substantial $30 billion loss in the previous fiscal year. Masayoshi Son is reportedly committed to maintaining significant control over Arm, planning to release no more than 10% of the company's shares during this initial phase, aligning with SoftBank's recent acquisition of the Vision Fund's Arm stake and reinforcing their belief in Arm's long-term potential. Arm has enlisted renowned global financial institutions such as Barclays, Goldman Sachs Group, JPMorgan Chase & Co., and Mizuho Financial Group to prepare for the IPO, highlighting the widespread interest in the offering and the anticipated benefits for these financial institutions.