News Posts matching #AWS


Ultra Accelerator Link Consortium Plans Year-End Launch of UALink v1.0

The Ultra Accelerator Link (UALink) Consortium, led by board members from AMD, Amazon Web Services (AWS), Astera Labs, Cisco, Google, Hewlett Packard Enterprise (HPE), Intel, Meta, and Microsoft, has announced the incorporation of the Consortium and is extending an invitation for membership to the community. The UALink Promoter Group was founded in May 2024 to define a high-speed, low-latency interconnect for scale-up communications between accelerators and switches in AI pods and clusters. "The UALink standard defines high-speed and low-latency communication for scale-up AI systems in data centers."

Pat Gelsinger Writes to Employees on Foundry Momentum, Progress on Plan

All eyes have been on Intel since we announced Q2 earnings. There has been no shortage of rumors and speculation about the company, including around last week's Board of Directors meeting, so I'm writing today to provide some updates and outline what comes next. Let me start by saying we had a highly productive and supportive Board meeting. We have a strong Board composed of independent directors whose job it is to challenge and push us to perform at our best. And we had deep discussions about our strategy, our portfolio and the immediate progress we are making against the plan we announced on August 1.

The Board and I agreed that we have a lot of work ahead to drive greater efficiency, improve our profitability and enhance our market competitiveness—and there are three key takeaways from last week's meeting that I want to focus on:
  • We must build on our momentum in Foundry as we near the launch of Intel 18A and drive greater capital efficiency across this part of our business.
  • We must continue acting with urgency to create a more competitive cost structure and deliver on the $10B savings target we announced last month.
  • We must refocus on our strong x86 franchise as we drive our AI strategy while streamlining our product portfolio in service to Intel customers and partners.
We have several pieces of news to share that support these priorities.

Intel to Produce Custom AI Chips and Xeon 6 Processors for AWS

Intel Corp. and Amazon Web Services, Inc., an Amazon.com company, today announced a co-investment in custom chip designs under a multi-year, multi-billion-dollar framework covering products and wafers from Intel. This is a significant expansion of the two companies' longstanding strategic collaboration to help customers power virtually any workload and accelerate the performance of artificial intelligence (AI) applications.

As part of the expanded collaboration, Intel will produce an AI fabric chip for AWS on Intel 18A, the company's most advanced process node. Intel will also produce a custom Xeon 6 chip on Intel 3, building on the existing partnership under which Intel produces Xeon Scalable processors for AWS.

Global AI Server Demand Surge Expected to Drive 2024 Market Value to US$187 Billion; Represents 65% of Server Market

TrendForce's latest industry report on AI servers reveals that high demand for advanced AI servers from major CSPs and brand clients is expected to continue in 2024. Meanwhile, TSMC, SK hynix, Samsung, and Micron's gradual production expansion has significantly eased shortages in 2Q24. Consequently, the lead time for NVIDIA's flagship H100 solution has decreased from the previous 40-50 weeks to less than 16 weeks.

TrendForce estimates that AI server shipments in the second quarter will increase by nearly 20% QoQ, and has revised the annual shipment forecast up to 1.67 million units—marking a 41.5% YoY growth.
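As a quick sanity check on these figures, the implied 2023 baseline can be derived from the forecast and the growth rate; a minimal sketch (the 2023 base is not stated above and is derived here, so treat it as an estimate):

```python
# Derive the implied 2023 AI server shipment base from TrendForce's
# 2024 forecast (1.67 million units) and the stated 41.5% YoY growth.
shipments_2024 = 1_670_000
yoy_growth = 0.415

implied_2023 = shipments_2024 / (1 + yoy_growth)
print(f"Implied 2023 shipments: {implied_2023:,.0f} units")  # ~1.18 million
```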

CSPs to Expand into Edge AI, Driving Average NB DRAM Capacity Growth by at Least 7% in 2025

TrendForce has observed that in 2024, major CSPs such as Microsoft, Google, Meta, and AWS will continue to be the primary buyers of high-end AI servers, which are crucial for LLMs and AI modeling. Having established significant AI training server infrastructure in 2024, these CSPs are expected to actively expand into edge AI in 2025. This expansion will include the development of smaller LLM models and the deployment of edge AI servers to facilitate AI applications across various sectors, such as manufacturing, finance, healthcare, and business.

Moreover, AI PCs and notebooks share an architecture similar to that of AI servers, offering substantial computational power and the ability to run smaller LLMs and generative AI applications. These devices are anticipated to serve as the final bridge between cloud AI infrastructure and edge AI for small-scale training or inference applications.

AWS Launches 896-Core Instance, Double What Competitors Offer

Liftr Insights, a pioneer in market intelligence driven by unique data, revealed today that it detected AWS's recent launch of an 896-core instance type, surpassing the highest core count previously offered by any cloud provider. This matters to companies looking to improve performance: if they are not using these instances, their competitors might be.

Liftr data show the previous AWS high core-count instance had 448 cores and first appeared in May 2021. Prior to that, the largest instance available in the six largest cloud providers (representing over 75% of the public cloud space) was a 384-core instance first offered by Azure in 2019.

AWS and NVIDIA Extend Collaboration to Advance Generative AI Innovation

Amazon Web Services (AWS), an Amazon.com company, and NVIDIA today announced that the new NVIDIA Blackwell GPU platform - unveiled by NVIDIA at GTC 2024 - is coming to AWS. AWS will offer the NVIDIA GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs, extending the companies' longstanding strategic collaboration to deliver the most secure and advanced infrastructure, software, and services to help customers unlock new generative artificial intelligence (AI) capabilities.

NVIDIA and AWS continue to bring together the best of their technologies, including NVIDIA's newest multi-node systems featuring the next-generation NVIDIA Blackwell platform and AI software, AWS's Nitro System and AWS Key Management Service (AWS KMS) advanced security, Elastic Fabric Adapter (EFA) petabit scale networking, and Amazon Elastic Compute Cloud (Amazon EC2) UltraCluster hyper-scale clustering. Together, they deliver the infrastructure and tools that enable customers to build and run real-time inference on multi-trillion parameter large language models (LLMs) faster, at massive scale, and at a lower cost than previous-generation NVIDIA GPUs on Amazon EC2.

GOG Partners Up with Amazon's Luna Cloud Streaming Service

Soon, you'll be able to play your favorite games from GOG, like the Witcher series or Cyberpunk 2077, on multiple devices of your choice. We're teaming up with the Amazon Luna cloud gaming service to give you even more ways of enjoying your titles, while still keeping our mission of DRM-free gaming. Let's dive into it and take a look at how it works!

What exactly is Amazon Luna?
It is a cloud gaming service developed and operated by Amazon. The service first launched in March 2022 in the United States and has since expanded to other countries, with availability in the USA, Canada, UK, Germany, France, Italy, and Spain. Luna works by streaming games from cloud servers and runs on Amazon's cloud computing platform, Amazon Web Services (AWS). This means customers can enjoy gaming on the go, on the couch, or anywhere else they have an internet connection. No lengthy downloads or updates, no need for an expensive gaming PC, complicated setup, or heavy local processing - just the pure joy of running your games on a device of your choice in high quality.

Global Server Shipments Expected to Increase by 2.05% in 2024, with AI Servers Accounting For Around 12.1%

TrendForce underscores that the primary momentum for server shipments this year remains with American CSPs. However, due to persistently high inflation and elevated corporate financing costs curtailing capital expenditures, overall demand has not yet returned to pre-pandemic growth levels. Global server shipments are estimated to reach approximately 13.654 million units in 2024, an increase of about 2.05% YoY. Meanwhile, the market continues to focus on the deployment of AI servers, with their shipment share estimated at around 12.1%.

Foxconn is expected to see the highest growth rate, with an estimated annual increase of about 5-7%. This growth includes significant orders such as Dell's 16G platform, AWS Graviton 3 and 4, Google Genoa, and Microsoft Gen9. In terms of AI server orders, Foxconn has made notable inroads with Oracle and has also secured some AWS ASIC orders.

Arm Launches Next-Generation Neoverse CSS V3 and N3 Designs for Cloud, HPC, and AI Acceleration

Last year, Arm introduced its Neoverse Compute Subsystem (CSS) for the N2 and V2 series of data center processors, providing a reference platform for the development of efficient Arm-based chips. Major cloud service providers like AWS with Graviton 4 and Trainium 2, Microsoft with Cobalt 100 and Maia 100, and even NVIDIA with Grace CPU and BlueField DPUs are already utilizing custom Arm server CPU and accelerator designs based on the CSS foundation in their data centers. The CSS allows hyperscalers to optimize Arm processor designs specifically for their workloads, focusing on efficiency rather than outright performance. Today, Arm has unveiled the next-generation CSS N3 and V3 for even greater efficiency and AI inferencing capabilities. The N3 design provides up to 32 high-efficiency cores per die with improved branch prediction and larger caches to boost AI performance by 196%, while the V3 design scales up to 64 cores and is 50% faster overall than previous generations.

Both the N3 and V3 leverage advanced features like DDR5, PCIe 5.0, CXL 3.0, and chiplet architecture, continuing Arm's push to make chiplets the standard for data center and cloud architectures. The chiplet approach enables customers to connect their own accelerators and other chiplets to the Arm cores via UCIe interfaces, reducing costs and time-to-market. Looking ahead, Arm has a clear roadmap for its Neoverse platform. The upcoming CSS V4 "Adonis" and N4 "Dionysus" designs will build on the improvements in the N3 and V3, advancing Arm's goal of greater efficiency and performance using optimized chiplet architectures. As more major data center operators introduce custom Arm-based designs, the Neoverse CSS aims to provide a flexible, efficient foundation to power the next generation of cloud computing.

Wacom Takes Care of Artists with Digital Rights Management and the new Cintiq Pro 27 and Wacom One Tablets

During the CES 2024 international show, Wacom, one of the leaders in the digital design space, unveiled the new Wacom Cintiq Pro and Wacom One tablets. The company also showcased its digital rights management software, Yuify, and introduced Wacom Bridge, a tool designed to enhance remote collaborative workflows for studios. The new Wacom Cintiq Pro line, including the Pro 27, 22, and 17, was developed in collaboration with professionals in virtual production, VFX, CG, and animation. The latest Wacom Cintiq Pro 27, with its precision and best-in-class color fidelity, is poised to take virtual production workflows to the next level. Color accuracy is crucial in virtual production workflows, and the Wacom Cintiq Pro 27 delivers 100% Rec. 709 and 98% DCI-P3 color accuracy. Its 4K display, with 10-bit color, offers high color performance and calibration options, reducing the traditional setup footprint without compromising performance.

The new Wacom Pro Pen 3, redesigned for ergonomic comfort and customization, complements the Cintiq Pro 27's eight Express Keys and multi-touch screen, offering a harmonious workflow. Wacom Bridge, developed in partnership with AWS NICE DCV and Splashtop, is a technology solution that enhances the use of Wacom products on supported remote desktop connections, catering to the needs of remote and hybrid work environments. The Wacom One line, first launched in 2019, has been redesigned and upgraded, offering more options and customization opportunities. The line includes the Wacom One 13 and 12 displays and the Wacom One Medium and Small pen tablets. Finally, Wacom's commitment to protecting artists' work is embodied in "Yuify", a service that allows artists to protect their artwork, manage usage rights, and establish legally binding license permissions. This digital rights management platform enables creators to conveniently manage their authorship records and sign licenses and contracts.

AWS and NVIDIA Partner to Deliver 65 ExaFLOP AI Supercomputer, Other Solutions

Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), and NVIDIA (NASDAQ: NVDA) today announced an expansion of their strategic collaboration to deliver the most advanced infrastructure, software, and services to power customers' generative artificial intelligence (AI) innovations. The companies will bring together the best of NVIDIA and AWS technologies—from NVIDIA's newest multi-node systems featuring next-generation GPUs, CPUs, and AI software, to AWS Nitro System advanced virtualization and security, Elastic Fabric Adapter (EFA) interconnect, and UltraCluster scalability—that are ideal for training foundation models and building generative AI applications.

The expanded collaboration builds on a longstanding relationship that has fueled the generative AI era by offering early machine learning (ML) pioneers the compute performance required to advance the state-of-the-art in these technologies.

AWS Unveils Next Generation AWS-Designed Graviton4 and Trainium2 Chips

At AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company (NASDAQ: AMZN), today announced the next generation of two AWS-designed chip families—AWS Graviton4 and AWS Trainium2—delivering advancements in price performance and energy efficiency for a broad range of customer workloads, including machine learning (ML) training and generative artificial intelligence (AI) applications. Graviton4 and Trainium2 mark the latest innovations in chip design from AWS. With each successive generation of chip, AWS delivers better price performance and energy efficiency, giving customers even more options—in addition to chip/instance combinations featuring the latest chips from third parties like AMD, Intel, and NVIDIA—to run virtually any application or workload on Amazon Elastic Compute Cloud (Amazon EC2).
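For readers who want to check which Graviton-backed (Arm64) instance types a region offers, a minimal boto3 sketch follows; the region and the processor-architecture filter value are illustrative assumptions rather than anything from the announcement:

```python
# List Arm64 (Graviton-family) EC2 instance types in one region.
# Region choice and filter value are assumptions for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instance_types")
pages = paginator.paginate(
    Filters=[{"Name": "processor-info.supported-architecture", "Values": ["arm64"]}]
)

for page in pages:
    for itype in page["InstanceTypes"]:
        print(itype["InstanceType"], itype["VCpuInfo"]["DefaultVCpus"], "vCPUs")
```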

NVIDIA Experiences Strong Cloud AI Demand but Faces Challenges in China, with High-End AI Server Shipments Expected to Be Below 4% in 2024

NVIDIA's most recent FY3Q24 financial report reveals record-high revenue from its data center segment, driven by escalating demand for AI servers from major North American CSPs. However, TrendForce points out that recent US government sanctions targeting China have impacted NVIDIA's business in the region. Despite strong shipments of NVIDIA's high-end GPUs—and the rapid introduction of compliant products such as the H20, L20, and L2—Chinese cloud operators are still in the testing phase, making substantial revenue contributions to NVIDIA unlikely in Q4. Gradual shipment increases are expected from the first quarter of 2024.

The US ban continues to influence China's foundry market as Chinese CSPs' high-end AI server shipments potentially drop below 4% next year
TrendForce reports that North American CSPs like Microsoft, Google, and AWS will remain key drivers of high-end AI server demand (including servers with NVIDIA, AMD, or other high-end ASIC chips) from 2023 to 2024. Their estimated shipment shares for 2024 are 24%, 18.6%, and 16.3%, respectively. Chinese CSPs such as ByteDance, Baidu, Alibaba, and Tencent (BBAT) are projected to have a combined shipment share of approximately 6.3% in 2023. However, this could decrease to less than 4% in 2024, considering the current and potential future impacts of the ban.

NVIDIA Reportedly in Talks to Lease Data Center Space for its own Cloud Service

The recent development of AI models that are more capable than ever has led to massive demand for the hardware infrastructure that powers them. As the dominant player in the industry with its GPU and CPU-GPU solutions, NVIDIA has reportedly discussed leasing data center space to power its own cloud service for these AI applications. Reportedly called NVIDIA DGX Cloud, it would put the company in direct competition with its own clients, the cloud service providers (CSPs). Companies like Microsoft Azure, Amazon AWS, Google Cloud, and Oracle actively acquire NVIDIA GPUs to power their GPU-accelerated cloud instances. According to the report, this has been in development for a few years.

Additionally, it is worth noting that NVIDIA already owns parts of a potential data center infrastructure, including NVIDIA DGX and HGX units, which could simply be interconnected in a data center and provisioned for the cloud so that developers can access NVIDIA's instances. A major benefit that could attract end users is price: NVIDIA acquires its GPUs for far less than the CSPs, which buy them with NVIDIA's profit margin built in, so the company could potentially undercut its clients' offerings. This could draw in potential customers and leave hyperscalers like Amazon, Microsoft, and Google without a moat in the cloud game. Of course, until this project is official, we should take this information with a grain of salt.

Amazon to Invest $4 Billion into Anthropic AI

Today, we're announcing that Amazon will invest up to $4 billion in Anthropic. The agreement is part of a broader collaboration to develop the most reliable and high-performing foundation models in the industry. Our frontier safety research and products, together with Amazon Web Services' (AWS) expertise in running secure, reliable infrastructure, will make Anthropic's safe and steerable AI widely accessible to AWS customers.

AWS will become Anthropic's primary cloud provider for mission-critical workloads, providing our team with access to leading compute infrastructure in the form of AWS Trainium and Inferentia chips, which will be used in addition to existing solutions for model training and deployment. Together, we'll combine our respective expertise to collaborate on the development of future Trainium and Inferentia technology.
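One route by which AWS customers reach Anthropic's models is Amazon Bedrock. The sketch below is a hedged illustration, assuming the anthropic.claude-v2 model ID and Bedrock's legacy Claude text-completion request format; both can vary by region and model generation:

```python
import json
import boto3

# Hedged sketch: model ID and body format follow Bedrock's legacy
# Claude text-completion API and may differ for newer model versions.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarize this partnership in one sentence.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    accept="application/json",
    body=body,
)

print(json.loads(response["body"].read())["completion"])
```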

Intel 4th Gen Xeon Powers New Amazon EC2 M7i-flex and M7i Instances

Today, Amazon Web Services (AWS) announced the general availability of new Amazon Elastic Compute Cloud (Amazon EC2) instances powered by custom 4th Gen Intel Xeon Scalable processors. This launch is the latest in a growing list of 4th Gen Xeon-powered instances that deliver leading total cost of ownership (TCO) and the most built-in accelerators of any CPU to fuel key workloads like AI, database, networking, and enterprise applications.

"Intel worked closely with AWS to bring our feature-rich 4th Gen Xeon processors to its cloud customers, many of which have benefited from its performance and value for months in private and public preview. Today, we're happy to bring that same real-world value to cloud customers around the globe," said Lisa Spelman, Intel corporate vice president and general manager of the Xeon Products and Solutions Group.

China Hosts 40% of all Arm-based Servers in the World

The escalating challenges in acquiring high-performance x86 servers have prompted Chinese data center companies to accelerate the shift to Arm-based system-on-chips (SoCs). Investment banking firm Bernstein reports that approximately 40% of all Arm-powered servers globally are currently being used in China. While most servers operate on x86 processors from AMD and Intel, there is a growing preference for Arm-based SoCs, especially in the Chinese market. Several global tech giants, including AWS, Ampere, Google, Fujitsu, Microsoft, and NVIDIA, have already adopted or developed Arm-powered SoCs. Arm-based SoCs are increasingly attractive to Chinese firms in particular, given the difficulty of consistently sourcing Intel's Xeon or AMD's EPYC processors. Chinese companies like Alibaba, Huawei, and Phytium are pioneering the development of Arm-based SoCs for client and data center processors.

However, the US government's restrictions present some challenges. Both Huawei and Phytium, blacklisted by the US, cannot access TSMC's cutting-edge process technologies, limiting their ability to produce competitive processors. Although Alibaba's T-Head can leverage TSMC's latest innovations, it can't license Arm's high-performance computing Neoverse V-series CPU cores due to various export control rules. Despite these challenges, many chip designers are considering alternatives such as RISC-V, an unrestricted, rapidly evolving open-source instruction set architecture (ISA) suitable for designing highly customized general-purpose cores for specific workloads. Still, with the backing of influential firms like AWS, Google, Nvidia, Microsoft, Qualcomm, and Samsung, the Armv8 and Armv9 instruction set architectures continue to hold an edge over RISC-V. These companies' support ensures that the software ecosystem remains compatible with their CPUs, which will likely continue to drive the adoption of Arm in the data center space.

NVIDIA H100 GPUs Now Available on AWS Cloud

AWS users can now access the leading AI training and inference performance demonstrated in industry benchmarks. The cloud giant has officially switched on a new Amazon EC2 P5 instance powered by NVIDIA H100 Tensor Core GPUs. The service lets users scale generative AI, high-performance computing (HPC), and other applications with a click from a browser.

The news comes in the wake of AI's iPhone moment. Developers and researchers are using large language models (LLMs) to uncover new applications for AI almost daily. Bringing these new use cases to market requires the efficiency of accelerated computing. The NVIDIA H100 GPU delivers supercomputing-class performance through architectural innovations including fourth-generation Tensor Cores, a new Transformer Engine for accelerating LLMs and the latest NVLink technology that lets GPUs talk to each other at 900 GB/sec.
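To inspect the GPU configuration EC2 reports for the P5 instance type, a short boto3 sketch follows; the region is an assumption, and the field names reflect the DescribeInstanceTypes response shape:

```python
# Print the GPU configuration EC2 advertises for p5.48xlarge.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_instance_types(InstanceTypes=["p5.48xlarge"])
gpu_info = resp["InstanceTypes"][0]["GpuInfo"]

for gpu in gpu_info["Gpus"]:
    print(f'{gpu["Count"]}x {gpu["Manufacturer"]} {gpu["Name"]}, '
          f'{gpu["MemoryInfo"]["SizeInMiB"]} MiB each')
```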

Major CSPs Aggressively Constructing AI Servers and Boosting Demand for AI Chips and HBM, Advanced Packaging Capacity Forecasted to Surge 30~40%

TrendForce reports that explosive growth in generative AI applications like chatbots has spurred significant expansion in AI server development in 2023. Major CSPs including Microsoft, Google, AWS, as well as Chinese enterprises like Baidu and ByteDance, have invested heavily in high-end AI servers to continuously train and optimize their AI models. This reliance on high-end AI servers necessitates the use of high-end AI chips, which in turn will not only drive up demand for HBM during 2023~2024, but is also expected to boost growth in advanced packaging capacity by 30~40% in 2024.

TrendForce highlights that to augment the computational efficiency of AI servers and enhance memory transmission bandwidth, leading AI chip makers such as Nvidia, AMD, and Intel have opted to incorporate HBM. Presently, Nvidia's A100 and H100 chips boast up to 80 GB of HBM2e and HBM3, respectively. In its latest integrated CPU and GPU, the Grace Hopper Superchip, Nvidia expanded a single chip's HBM capacity by 20%, hitting a mark of 96 GB. AMD's MI300 also uses HBM3, with the MI300A capacity remaining at 128 GB like its predecessor, while the more advanced MI300X has ramped up to 192 GB, marking a 50% increase. Google is expected to broaden its partnership with Broadcom in late 2023 to produce the ASIC AI accelerator chip TPU, which will also incorporate HBM memory, in order to extend its AI infrastructure.

AMD Details New EPYC CPUs, Next-Generation AMD Instinct Accelerator, and Networking Portfolio for Cloud and Enterprise

Today, at the "Data Center and AI Technology Premiere," AMD announced the products, strategy, and ecosystem partners that will shape the future of computing, highlighting the next phase of data center innovation. AMD was joined on stage by executives from Amazon Web Services (AWS), Citadel, Hugging Face, Meta, Microsoft Azure, and PyTorch to showcase the technological partnerships with industry leaders bringing the next generation of high-performance CPU and AI accelerator solutions to market.

"Today, we took another significant step forward in our data center strategy as we expanded our 4th Gen EPYC processor family with new leadership solutions for cloud and technical computing workloads and announced new public instances and internal deployments with the largest cloud providers," said AMD Chair and CEO Dr. Lisa Su. "AI is the defining technology shaping the next generation of computing and the largest strategic growth opportunity for AMD. We are laser focused on accelerating the deployment of AMD AI platforms at scale in the data center, led by the launch of our Instinct MI300 accelerators planned for later this year and the growing ecosystem of enterprise-ready AI software optimized for our hardware."

IonQ Aria Now Available on Amazon Braket Cloud Quantum Computing Service

Today at Commercialising Quantum Global 2023, IonQ (NYSE: IONQ), an industry leader in quantum computing, announced the availability of IonQ Aria on Amazon Braket, AWS's quantum computing service. This expands upon IonQ's existing presence on Amazon Braket, following the debut of IonQ's Harmony system on the platform in 2020. With broader access to IonQ Aria, IonQ's flagship system with 25 algorithmic qubits (#AQ)—more than 65,000 times more powerful than IonQ Harmony—users can now explore, design, and run more complex quantum algorithms to tackle some of the most challenging problems of today.

"We are excited for IonQ Aria to become available on Amazon Braket, as we expand the ways users can access our leading quantum computer on the most broadly adopted cloud service provider," said Peter Chapman, CEO and President, IonQ. "Amazon Braket has been instrumental in commercializing quantum, and we look forward to seeing what new approaches will come from the brightest, most curious, minds in the space."

Microsoft Activision Blizzard Merger Blocked by UK Market Regulator Citing "Cloud Gaming Concerns"

The United Kingdom Competition and Markets Authority (UK-CMA) on Wednesday blocked the proposed $68.7 billion merger of Microsoft and Activision-Blizzard. In its press release announcing the final decision of its investigation into how the merger would affect consumer choice and innovation in the market, the CMA says that the merger would alter the future of cloud gaming and lead to "reduced innovation and less choice for United Kingdom gamers over the years to come." Cloud gaming in this context means games rendered on the cloud and consumed on the edge by gamers; NVIDIA's GeForce NOW is one such service.

Microsoft Azure is one of the big-three cloud computing providers (besides AWS and Google Cloud), and the CMA fears that Microsoft's acquisition of Activision-Blizzard IP (besides its control over the Xbox and Windows PC ecosystems), would "strengthen that advantage giving it the ability to undermine new and innovative competitors." The CMA report continues: "Cloud gaming needs a free, competitive market to drive innovation and choice. That is best achieved by allowing the current competitive dynamics in cloud gaming to continue to do their job." Microsoft and Activision-Blizzard are unsurprisingly unhappy with the verdict.

Linux Foundation Launches New TLA+ Organization

SAN FRANCISCO, April 21, 2023 -- The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the launch of the TLA+ Foundation to promote the adoption and development of the TLA+ programming language and its community of TLA+ practitioners. Inaugural members include Amazon Web Services (AWS), Oracle and Microsoft. TLA+ is a high-level language for modeling programs and systems, especially concurrent and distributed ones. TLA+ has been successfully used by companies to verify complex software systems, reducing errors and improving reliability. The language helps detect design flaws early in the development process, saving time and resources.

TLA+ and its tools are useful for eliminating fundamental design errors, which are hard to find and expensive to correct in code. The language is based on the idea that the best way to describe things precisely is with simple mathematics. The language was invented decades ago by the pioneering computer scientist Leslie Lamport, now a distinguished scientist with Microsoft Research. After years of Lamport's stewardship and Microsoft's support, TLA+ has found a new home at the Linux Foundation.

AMD Joins AWS ISV Accelerate Program

AMD announced it has joined the Amazon Web Services (AWS) Independent Software Vendor (ISV) Accelerate Program, a co-sell program for AWS Partners - like AMD - who provide integrated solutions on AWS. The program helps AWS Partners drive new business by directly connecting participating ISVs with the AWS Sales organization.

Through the AWS ISV Accelerate Program, AMD will receive focused co-selling support from AWS, including access to further sales enablement resources, reduced AWS Marketplace listing fees, and incentives for AWS Sales teams. The program also gives participating ISVs access to millions of active AWS customers globally.