News Posts matching #Oracle

Intel and AMD Form x86 Ecosystem Advisory Group

Intel Corp. (INTC) and AMD (NASDAQ: AMD) today announced the creation of an x86 ecosystem advisory group bringing together technology leaders to shape the future of the world's most widely used computing architecture. x86 is uniquely positioned to meet customers' emerging needs by delivering superior performance and seamless interoperability across hardware and software platforms. The group will focus on identifying new ways to expand the x86 ecosystem by enabling compatibility across platforms, simplifying software development, and providing developers with a platform to identify architectural needs and features to create innovative and scalable solutions for the future.

For over four decades, x86 has served as the bedrock of modern computing, establishing itself as the preferred architecture in data centers and PCs worldwide. In today's evolving landscape - characterized by dynamic AI workloads, custom chiplets, and advancements in 3D packaging and system architectures - the importance of a robust and expanding x86 ecosystem is more crucial than ever.

AMD Instinct MI300X Accelerators Available on Oracle Cloud Infrastructure

AMD today announced that Oracle Cloud Infrastructure (OCI) has chosen AMD Instinct MI300X accelerators with ROCm open software to power its newest OCI Compute Supercluster instance called BM.GPU.MI300X.8. For AI models that can comprise hundreds of billions of parameters, the OCI Supercluster with AMD MI300X supports up to 16,384 GPUs in a single cluster by harnessing the same ultrafast network fabric technology used by other accelerators on OCI. Designed to run demanding AI workloads including large language model (LLM) inference and training that requires high throughput with leading memory capacity and bandwidth, these OCI bare metal instances have already been adopted by companies including Fireworks AI.
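
For a sense of scale, here is a back-of-the-envelope sketch in Python, assuming (per OCI's shape naming) that each BM.GPU.MI300X.8 instance carries eight MI300X GPUs, each with 192 GB of HBM3:

```python
# Rough arithmetic for the OCI MI300X Supercluster described above.
# Assumptions: 8 GPUs per BM.GPU.MI300X.8 bare-metal instance (per the shape
# name) and 192 GB of HBM3 per MI300X GPU (AMD's published spec).
gpus_per_instance = 8
max_gpus = 16_384
hbm_per_gpu_gb = 192

instances = max_gpus // gpus_per_instance           # bare-metal nodes in the cluster
hbm_per_instance_gb = gpus_per_instance * hbm_per_gpu_gb
total_hbm_tb = max_gpus * hbm_per_gpu_gb / 1024     # aggregate HBM across the cluster

print(f"{instances} instances, {hbm_per_instance_gb} GB HBM3 per instance, "
      f"~{total_hbm_tb:.0f} TB HBM3 cluster-wide")
# -> 2048 instances, 1536 GB HBM3 per instance, ~3072 TB HBM3 cluster-wide
```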

"AMD Instinct MI300X and ROCm open software continue to gain momentum as trusted solutions for powering the most critical OCI AI workloads," said Andrew Dieckmann, corporate vice president and general manager, Data Center GPU Business, AMD. "As these solutions expand further into growing AI-intensive markets, the combination will benefit OCI customers with high performance, efficiency, and greater system design flexibility."

Oracle Offers First Zettascale Cloud Computing Cluster

Oracle today announced the first zettascale cloud computing clusters accelerated by the NVIDIA Blackwell platform. Oracle Cloud Infrastructure (OCI) is now taking orders for the largest AI supercomputer in the cloud—available with up to 131,072 NVIDIA Blackwell GPUs.
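
The "zettascale" label follows from simple arithmetic. As a hedged sketch, assume on the order of 18 petaFLOPS of peak low-precision (sparse FP4) throughput per Blackwell GPU, a ballpark figure rather than a number quoted in the announcement:

```python
# Back-of-the-envelope check of the zettascale claim. The per-GPU figure is an
# assumption (~18 PFLOPS of peak sparse-FP4 throughput per Blackwell GPU);
# actual performance depends on the exact SKU, precision, and sparsity.
gpus = 131_072
pflops_per_gpu = 18              # peak low-precision PFLOPS, assumed

total_flops = gpus * pflops_per_gpu * 1e15
print(f"{total_flops:.2e} FLOPS")    # ~2.36e+21, i.e. roughly 2.4 zettaFLOPS
```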

"We have one of the broadest AI infrastructure offerings and are supporting customers that are running some of the most demanding AI workloads in the cloud," said Mahesh Thiagarajan, executive vice president, Oracle Cloud Infrastructure. "With Oracle's distributed cloud, customers have the flexibility to deploy cloud and AI services wherever they choose while preserving the highest levels of data and AI sovereignty."

NVIDIA Announces New Switches Optimized for Trillion-Parameter GPU Computing and AI Infrastructure

NVIDIA today announced a new wave of networking switches, the X800 series, designed for massive-scale AI. The world's first networking platforms capable of end-to-end 800 Gb/s throughput, NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum-X800 Ethernet push the boundaries of networking performance for computing and AI workloads. They feature software that further accelerates AI, cloud, data processing and HPC applications in every type of data center, including those that incorporate the newly released NVIDIA Blackwell architecture-based product lineup.

"NVIDIA Networking is central to the scalability of our AI supercomputing infrastructure," said Gilad Shainer, senior vice president of Networking at NVIDIA. "NVIDIA X800 switches are end-to-end networking platforms that enable us to achieve trillion-parameter-scale generative AI essential for new AI infrastructures."

Global Server Shipments Expected to Increase by 2.05% in 2024, with AI Servers Accounting For Around 12.1%

TrendForce underscores that the primary momentum for server shipments this year remains with American CSPs. However, due to persistently high inflation and elevated corporate financing costs curtailing capital expenditures, overall demand has not yet returned to pre-pandemic growth levels. Global server shipments are estimated to reach approximately 13.654 million units in 2024, an increase of about 2.05% YoY. Meanwhile, the market continues to focus on the deployment of AI servers, with their shipment share estimated at around 12.1%.
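
The quoted figures imply straightforward unit counts. A minimal sketch of the arithmetic, using only the numbers above:

```python
# Derive the implied 2023 base and AI-server unit count from TrendForce's figures.
shipments_2024_m = 13.654    # million units, 2024 estimate
yoy_growth = 0.0205          # +2.05% year over year
ai_share = 0.121             # AI servers ~12.1% of shipments

shipments_2023_m = shipments_2024_m / (1 + yoy_growth)
ai_servers_m = shipments_2024_m * ai_share

print(f"2023 base: ~{shipments_2023_m:.2f}M units")       # ~13.38M
print(f"AI servers in 2024: ~{ai_servers_m:.2f}M units")  # ~1.65M
```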

Foxconn is expected to see the highest growth rate, with an estimated annual increase of about 5-7%. This growth includes significant orders such as Dell's 16G platform, AWS Graviton 3 and 4, Google Genoa, and Microsoft Gen9. In terms of AI server orders, Foxconn has made notable inroads with Oracle and has also secured some AWS ASIC orders.

AMD Showcases Growing Momentum for AMD Powered AI Solutions from the Data Center to PCs

Today at the "Advancing AI" event, AMD was joined by industry leaders including Microsoft, Meta, Oracle, Dell Technologies, HPE, Lenovo, Supermicro, Arista, Broadcom and Cisco to showcase how these companies are working with AMD to deliver advanced AI solutions spanning from cloud to enterprise and PCs. AMD launched multiple new products at the event, including the AMD Instinct MI300 Series data center AI accelerators, ROCm 6 open software stack with significant optimizations and new features supporting Large Language Models (LLMs) and Ryzen 8040 Series processors with Ryzen AI.

"AI is the future of computing and AMD is uniquely positioned to power the end-to-end infrastructure that will define this AI era, from massive cloud installations to enterprise clusters and AI-enabled intelligent embedded devices and PCs," said AMD Chair and CEO Dr. Lisa Su. "We are seeing very strong demand for our new Instinct MI300 GPUs, which are the highest-performance accelerators in the world for generative AI. We are also building significant momentum for our data center AI solutions with the largest cloud companies, the industry's top server providers, and the most innovative AI startups ꟷ who we are working closely with to rapidly bring Instinct MI300 solutions to market that will dramatically accelerate the pace of innovation across the entire AI ecosystem."

NVIDIA AI Now Available in Oracle Cloud Marketplace

Training generative AI models just got easier. The NVIDIA DGX Cloud AI supercomputing platform and NVIDIA AI Enterprise software are now available in Oracle Cloud Marketplace, making it possible for Oracle Cloud Infrastructure customers to access high-performance accelerated computing and software to run secure, stable and supported production AI in just a few clicks. The addition - an industry first - brings new capabilities for end-to-end development and deployment on Oracle Cloud. Enterprises can get started from the Oracle Cloud Marketplace to train models on DGX Cloud, and then deploy their applications on OCI with NVIDIA AI Enterprise.

Oracle Cloud and NVIDIA Lift Industries Into Era of AI

Thousands of enterprises around the world rely on OCI to power the applications that drive their businesses. Its customers include leaders across industries such as healthcare, scientific research, financial services, telecommunications and more. Oracle Cloud Marketplace is a catalog of solutions that offers customers flexible consumption models and simple billing. Its addition of DGX Cloud and NVIDIA AI Enterprise lets OCI customers use their existing cloud credits to integrate NVIDIA's leading AI supercomputing platform and software into their development and deployment pipelines. With DGX Cloud, OCI customers can train models for generative AI applications like intelligent chatbots, search, summarization and content generation.

GIGABYTE Introduces New Servers for Cloud-Native Deployments on Arm Architecture with AmpereOne Family of Processors

Giga Computing, a subsidiary of GIGABYTE Technology and an industry leader in high-performance servers, server motherboards, and workstations, today announced four new GIGABYTE R-series servers for the AmpereOne Family of processors, designed for cloud-native computing where high compute density per rack and power efficiency matter.

For cloud-native computing, hyperscalers and cloud service providers (CSPs) rely on predictable high performance, scalable infrastructure, and power-efficient nodes. GIGABYTE servers running the AmpereOne Family platform meet those expectations, and this is not the first time GIGABYTE has worked with Ampere Computing: the partnership began in 2020 with the launch of the Ampere Altra platform. The new AmpereOne family does not supersede the Altra platform; rather, it extends what Ampere Computing can achieve with the Arm architecture. For instance, the CPU core count grows from up to 128 cores in Altra to 136-192 cores in AmpereOne, enabling new levels of performance and VM density. On top of that, the private L2 cache per core has doubled, and there is support for DDR5 memory and PCIe Gen 5.

NVIDIA Reportedly in Talks to Lease Data Center Space for its own Cloud Service

The recent development of AI models that are more capable than ever has led to massive demand for the hardware infrastructure that powers them. As the dominant player in the industry with its GPU and CPU-GPU solutions, NVIDIA has reportedly discussed leasing data center space to power its own cloud service for these AI applications. Called NVIDIA DGX Cloud, it would reportedly put the company in direct competition with its clients, which are cloud service providers (CSPs) as well. Companies like Microsoft Azure, Amazon AWS, Google Cloud, and Oracle actively acquire NVIDIA GPUs to power their GPU-accelerated cloud instances. According to the report, this has been in development for a few years.

Additionally, it is worth noting that NVIDIA already owns much of the hardware for a potential data center build-out. This includes NVIDIA DGX and HGX systems, which can be interconnected in a data center and provisioned as cloud instances that developers access directly from NVIDIA. A major draw for end users would be price: NVIDIA could potentially undercut its partners, since it acquires GPUs at cost while CSPs buy them with NVIDIA's margin added. That could attract customers and leave hyperscalers like Amazon, Microsoft, and Google without a moat in the cloud game. Of course, until the project is official, this information should be taken with a grain of salt.

Oracle Cloud Adds AmpereOne Processor and Broad Set of New Services on Ampere

Oracle has announced its next-generation Ampere A2 Compute Instances based on the latest AmpereOne processor, with availability starting later this year. According to Oracle, the new instances will deliver up to 44% better price-performance compared to x86 offerings and are ideal for AI inference, databases, web services, media transcoding workloads, and run-time language support such as Go and Java.

In related news, several new customers, including industry-leading real-time video service companies 8x8 and Phenix along with AI startups like Wallaroo, said they are migrating to Oracle Cloud Infrastructure (OCI) and Ampere as more and more companies seek to maximize price, performance, and energy efficiency.

Dutch Government Renews Oracle Cloud Infrastructure Deal

The Government of the Netherlands has agreed to incorporate Oracle Cloud Infrastructure (OCI) in its cloud service offerings for government agencies as part of a renewal of its existing service contract with Oracle. OCI's commercial public cloud regions will enable the National Government to take advantage of the many benefits cloud computing offers, including scalability, security, flexibility, and reliable performance.

The renewal of the agreement includes a version of the standard cloud terms and conditions as well as a Data Processing Agreement based on the government's Data Protection Impact Assessment (DPIA) of available cloud services. "This renewed agreement with Oracle marks an important milestone in our strategic collaboration," said Richard Wiersema, director of operations, DICTU of the Ministry of Economic Affairs and Climate, and strategic supplier manager for Oracle for the Dutch government. "With Oracle, we as the national government have an important partner in house that helps us achieve our digital goals and enables us to meet the needs of Dutch society. The cloud plays a crucial role in meeting these objectives."

Oracle Advocates Keeping Linux Open and Free, Calls Out IBM

Oracle has been part of the Linux community for 25 years. Our goal has remained the same over all those years: help make Linux the best server operating system for everyone, freely available to all, with high-quality, low-cost support provided to those who need it. Our Linux engineering team makes significant contributions to the kernel, file systems, and tools. We push all that work back to mainline so that every Linux distribution can include it. We are proud those contributions are part of the reason Linux is now so very capable, benefiting not just Oracle customers, but all users.

In 2006, we launched what is now called Oracle Linux, a RHEL compatible distribution and support offering that is used widely, and powers Oracle's engineered systems and our cloud infrastructure. We chose to be RHEL compatible because we did not want to fragment the Linux community. Our effort to remain compatible has been enormously successful. In all the years since launch, we have had almost no compatibility bugs filed. Customers and ISVs can switch to Oracle Linux from RHEL without modifying their applications, and we certify Oracle software products on RHEL even though they are built and tested on Oracle Linux only, never on RHEL.

Oracle to Spend Billions on NVIDIA Data Center GPUs, Even More on Ampere & AMD CPUs

Oracle founder and Chairman Larry Ellison last week announced a substantial spending spree on new equipment as he prepares his business for a cloud computing service expansion that will be aimed at attracting a "new wave" of artificial intelligence (AI) companies. He made this announcement at a recent Ampere event: "This year, Oracle will buy GPUs and CPUs from three companies...We will buy GPUs from NVIDIA, and we're buying billions of dollars of those. We will spend three times that on CPUs from Ampere and AMD. We still spend more money on conventional compute." His cloud division is said to be gearing up to take on larger competition—namely Amazon Web Services and Microsoft Corp. Oracle is hoping to outmaneuver these major players by focusing on the construction of fast networks, capable of shifting around huge volumes of data—the end goal being the creation of its own ChatGPT-type model.

Ellison expressed that he was leaving Team Blue behind—Oracle has invested heavily in Ampere Computing—a startup founded by ex-Intel folks: "It's a major commitment to move to a new supplier. We've moved to a new architecture...We think that this is the future. The old Intel x86 architecture, after many decades in the market, is reaching its limit." Oracle's database software has been updated to run on Ampere's Arm-based chips, and Ellison posits that these grant greater power efficiency when compared to AMD and NVIDIA enterprise processors. There will be some reliance on "x86-64" going forward, since Oracle's next-gen Exadata X10M platform was recently announced with the integration of Team Red's EPYC 9004 series processors—a company spokesman stated that these server CPUs offer higher core counts and "extreme scale and dramatically improved price performance," when compared to older Intel Xeon systems.

Oracle Fusion Cloud HCM Enhanced with Generative AI, Projected to Boost HR Productivity

Oracle today announced the addition of generative AI-powered capabilities within Oracle Fusion Cloud Human Capital Management (HCM). Supported by the Oracle Cloud Infrastructure (OCI) generative AI service, the new capabilities are embedded in existing HR processes to drive faster business value, improve productivity, enhance the candidate and employee experience, and streamline HR processes.

"Generative AI is boosting productivity and unlocking a new world of skills, ideas, and creativity that can have an immediate impact in the workplace," said Chris Leone, executive vice president, applications development, Oracle Cloud HCM. "With the ability to summarize, author, and recommend content, generative AI helps to reduce friction as employees complete important HR functions. For example, with the new embedded generative AI capabilities in Oracle Cloud HCM, our customers will be able to take advantage of large language models to drastically reduce the time required to complete tasks, improve the employee experience, enhance the accuracy of workforce insights, and ultimately increase business value."

Oracle Introduces Next-Gen Exadata X10M Platforms

Oracle today introduced the latest generation of the Oracle Exadata platforms, the X10M, delivering unrivaled performance and availability for all Oracle Database workloads. Starting at the same price as the previous generation, these platforms support higher levels of database consolidation with more capacity and offer dramatically greater value than previous generations. Thousands of organizations, large and small, run their most critical and demanding workloads on Oracle Exadata including the majority of the largest financial, telecom, and retail businesses in the world.

"Our 12th generation Oracle Exadata X10M continues our strategy to provide customers with extreme scale, performance, and value, and we will make it available everywhere—in the cloud and on-premises," said Juan Loaiza, executive vice president, Mission-Critical Database Technologies, Oracle. "Customers that choose cloud deployments also benefit from running Oracle Autonomous Database, which further lowers costs by delivering true pay-per-use and eliminating database and infrastructure administration."

Ampere Computing Unveils New AmpereOne Processor Family with 192 Custom Cores

Ampere Computing today announced a new AmpereOne Family of processors with up to 192 single-threaded Ampere cores - the highest core count in the industry. This is the first product from Ampere based on the company's new custom core, built from the ground up and leveraging the company's internal IP. CEO Renée James, who founded Ampere Computing to offer a modern alternative to the industry with processors designed specifically for both efficiency and performance in the Cloud, said there was a fundamental shift happening that required a new approach.

"Every few decades of compute there has emerged a driving application or use of performance that sets a new bar of what is required of performance," James said. "The current driving uses are AI and connected everything combined with our continued use and desire for streaming media. We cannot continue to use power as a proxy for performance in the data center. At Ampere, we design our products to maximize performance at a sustainable power, so we can continue to drive the future of the industry."

Linux Foundation Launches New TLA+ Organization

SAN FRANCISCO, April 21, 2023 -- The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the launch of the TLA+ Foundation to promote the adoption and development of the TLA+ specification language and its community of TLA+ practitioners. Inaugural members include Amazon Web Services (AWS), Oracle and Microsoft. TLA+ is a high-level language for modeling programs and systems, especially concurrent and distributed ones. TLA+ has been successfully used by companies to verify complex software systems, reducing errors and improving reliability. The language helps detect design flaws early in the development process, saving time and resources.

TLA+ and its tools are useful for eliminating fundamental design errors, which are hard to find and expensive to correct in code. The language is based on the idea that the best way to describe things precisely is with simple mathematics. The language was invented decades ago by the pioneering computer scientist Leslie Lamport, now a distinguished scientist with Microsoft Research. After years of Lamport's stewardship and Microsoft's support, TLA+ has found a new home at the Linux Foundation.
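
To illustrate what "eliminating fundamental design errors" looks like in practice, here is a minimal sketch in Python (not TLA+ itself) of the exhaustive state-space search that a model checker such as TLC performs on a spec. The "design" being checked is a hypothetical, deliberately flawed two-process lock in which observing and taking the lock are separate, non-atomic steps:

```python
# Minimal, illustrative analogue of model checking: enumerate every reachable
# state of a flawed two-process mutex and check a mutual-exclusion invariant.
from collections import deque

PROCS = (0, 1)
# A state is (pc0, pc1, flag): pc in {"idle", "want", "crit"}, flag = lock held?
INIT = ("idle", "idle", False)

def next_states(state):
    """Yield every successor state: one non-atomic step by one process."""
    pcs, flag = list(state[:2]), state[2]
    for p in PROCS:
        if pcs[p] == "idle" and not flag:    # step 1: observe that the lock is free
            s = pcs.copy(); s[p] = "want"
            yield (s[0], s[1], flag)
        elif pcs[p] == "want":               # step 2: take the lock, enter the CS
            s = pcs.copy(); s[p] = "crit"    # (no re-check: this is the design flaw)
            yield (s[0], s[1], True)
        elif pcs[p] == "crit":               # step 3: leave the CS, release the lock
            s = pcs.copy(); s[p] = "idle"
            yield (s[0], s[1], False)

def mutual_exclusion(state):
    """Invariant: the two processes are never in the critical section together."""
    return not (state[0] == "crit" and state[1] == "crit")

# Breadth-first exploration of every reachable state, as TLC does for a TLA+ spec.
seen, queue = {INIT}, deque([INIT])
while queue:
    s = queue.popleft()
    if not mutual_exclusion(s):
        print("invariant violated in state:", s)
        break
    for t in next_states(s):
        if t not in seen:
            seen.add(t)
            queue.append(t)
else:
    print("invariant holds in all", len(seen), "reachable states")
```

A TLA+ specification would express the same transitions and invariant in simple mathematics, and the model checker would report the same violating trace, before any production code is written.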

NVIDIA to Put DGX Computers in the Cloud, Becomes AI-as-a-Service Provider

NVIDIA has recently reported its Q4 earnings, and the earnings call following the report contains exciting details about the company and its plans to open up to new possibilities. NVIDIA's CEO Jensen Huang has stated that the company is on track to become an AI-as-a-Service (AIaaS) provider, which technically makes it a cloud service provider (CSP). "Today, I want to share with you the next level of our business model to help put AI within reach of every enterprise customer. We are partnering with major service -- cloud service providers to offer NVIDIA AI cloud services, offered directly by NVIDIA and through our network of go-to-market partners, and hosted within the world's largest clouds," said Mr. Huang, adding that "NVIDIA AI as a service offers enterprises easy access to the world's most advanced AI platform, while remaining close to the storage, networking, security and cloud services offered by the world's most advanced clouds. Customers can engage NVIDIA AI cloud services at the AI supercomputer, acceleration library software, or pretrained AI model layers."

In addition to enlisting other CSPs, NVIDIA is also going to offer DGX machines on demand in the cloud. Through select CSPs, customers can get access to an entire DGX system and harness its computing power for AI research purposes. Mr. Huang noted "NVIDIA DGX is an AI supercomputer, and the blueprint of AI factories being built around the world. AI supercomputers are hard and time-consuming to build. Today, we are announcing the NVIDIA DGX Cloud, the fastest and easiest way to have your own DGX AI supercomputer, just open your browser. NVIDIA DGX Cloud is already available through Oracle Cloud Infrastructure and Microsoft Azure, Google GCP, and others on the way."

AMD EPYC Processors Power New Oracle Cloud Infrastructure Compute Instances and Enable Hybrid Cloud Environment

AMD (NASDAQ: AMD) today announced the expansion of the AMD EPYC processor footprint within the cloud ecosystem, powering the new Oracle Cloud Infrastructure (OCI) E4 Dense instances. These new instances are part of the Oracle Cloud VMware Solution offerings and enable customers to build and run a hybrid-cloud environment for their VMware-based workloads.

Based on 3rd Gen AMD EPYC processors, the new E4 Dense instances expand the AMD EPYC presence at OCI and are designed to support memory- and storage-intensive VMware workloads. The E4 Dense instances use the core density and performance of EPYC processors to give customers a fast path to a cloud environment, delivering performance similar to their on-premises deployments along with advanced security through the AMD Secure Encrypted Virtualization (SEV) feature for VMware workloads.
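
As a side note on SEV itself: OCI enables it within the VMware stack, but on a generic Linux/KVM host the same processor feature can be confirmed with a quick check of standard kernel interfaces. A minimal sketch, assuming the kvm_amd module is loaded; this is illustrative only and not specific to OCI or VMware:

```python
# Minimal sketch: check whether AMD SEV is enabled on a Linux/KVM host.
# Reads standard kernel interfaces; it does not reflect how Oracle Cloud
# VMware Solution configures SEV internally.
from pathlib import Path

def sev_enabled() -> bool:
    param = Path("/sys/module/kvm_amd/parameters/sev")
    if param.exists():
        return param.read_text().strip().lower() in ("1", "y")
    # Fallback: look for the CPU flag advertised in /proc/cpuinfo
    # (loose substring match; also matches related flags such as sev_es).
    return " sev" in Path("/proc/cpuinfo").read_text()

print("SEV enabled:", sev_enabled())
```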

AMD EPYC Processors Hit by 22 Security Vulnerabilities, Patch is Already Out

AMD's EPYC class of enterprise processors is affected by as many as 22 different security vulnerabilities. These vulnerabilities range anywhere from medium to high severity and span all three generations of AMD EPYC processors: Naples, Rome, and Milan. Nearly all 22 issues apply to all three generations, with a few exceptions listed on AMD's website. However, not all is bad. AMD says that "During security reviews in collaboration with Google, Microsoft, and Oracle, potential vulnerabilities in the AMD Platform Security Processor (PSP), AMD System Management Unit (SMU), AMD Secure Encrypted Virtualization (SEV) and other platform components were discovered and have been mitigated in AMD EPYC AGESA PI packages."

AMD has already shipped new mitigations in the form of AGESA updates, and users need not worry as long as they keep their firmware up to date. If you or your organization runs AMD EPYC processors, you should update the firmware to avoid any exploits. The updates in question are the NaplesPI-SP3_1.0.0.G, RomePI-SP3_1.0.0.C, and MilanPI-SP3_1.0.0.4 AGESA versions, which fix all 22 security holes.

NVIDIA and Global Partners Launch New HGX A100 Systems to Accelerate Industrial AI and HPC

NVIDIA today announced it is turbocharging the NVIDIA HGX AI supercomputing platform with new technologies that fuse AI with high performance computing, making supercomputing more useful to a growing number of industries.

To accelerate the new era of industrial AI and HPC, NVIDIA has added three key technologies to its HGX platform: the NVIDIA A100 80 GB PCIe GPU, NVIDIA NDR 400G InfiniBand networking, and NVIDIA Magnum IO GPUDirect Storage software. Together, they provide the extreme performance to enable industrial HPC innovation.

Arm Announces Neoverse N2 and V1 Server Platforms

The demands of data center workloads and internet traffic are growing exponentially, and new solutions are needed to keep up with these demands while reducing the current and anticipated growth of power consumption. But the variety of workloads and applications being run today means the traditional one-size-fits-all approach to computing is not the answer. The industry demands flexibility: design freedom to achieve the right level of compute for the right application.

As Moore's Law comes to an end, solution providers are seeking specialized processing. Enabling specialized processing has been a focal point since the inception of our Neoverse line of platforms, and we expect these latest additions to accelerate this trend.

Oracle Chosen as TikTok's Secure Cloud Provider

Oracle Corporation (NYSE: ORCL) announced today that it was chosen to become TikTok's secure cloud technology provider. This technical decision by TikTok was heavily influenced by Zoom's recent success in moving a large portion of its video conferencing capacity to the Oracle Public Cloud.

"TikTok picked Oracle's new Generation 2 Cloud infrastructure because it's much faster, more reliable, and more secure than the first generation technology currently offered by all the other major cloud providers," said Oracle Chief Technology Officer Larry Ellison. "In the 2020 Industry CloudPath survey that IDC recently released where it surveyed 935 Infrastructure as a Service (IaaS) customers on their satisfaction with the top IaaS vendors including Oracle, Amazon Web Services, Microsoft, IBM and Google Cloud.... Oracle IaaS received the highest satisfaction score."

AMD 2nd Gen EPYC Processors Set to Power Oracle Cloud Infrastructure Compute E3 Platform

Today, AMD announced that 2nd Gen AMD EPYC processors are powering the Oracle Cloud Infrastructure Compute E3 platform, bringing a new level of high-performance computing to Oracle Cloud. Using the AMD EPYC 7742 processor, the Oracle Cloud "E3 standard" and bare metal compute instances are available today and leverage key features of the 2nd Gen AMD EPYC processors, including class-leading memory bandwidth and the highest core count for an x86 data center processor. These features make the Oracle Cloud E3 platform well suited for both general purpose and high-bandwidth workloads such as big data analytics, memory-intensive workloads, and Oracle business applications.
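
The memory bandwidth claim follows from the platform's memory configuration. A hedged sketch of the theoretical peak, assuming the E3 bare metal shape pairs two EPYC 7742 CPUs, each with eight DDR4-3200 channels (the actual OCI configuration may differ):

```python
# Theoretical peak memory bandwidth for a dual-socket EPYC 7742 (Rome) node.
# Assumptions: 8 DDR4-3200 channels per socket, 8 bytes per transfer, 2 sockets.
channels_per_socket = 8
transfer_rate_mt_s = 3200        # DDR4-3200, mega-transfers per second
bytes_per_transfer = 8
sockets = 2

per_socket_gb_s = channels_per_socket * transfer_rate_mt_s * bytes_per_transfer / 1000
total_gb_s = per_socket_gb_s * sockets
print(f"{per_socket_gb_s:.1f} GB/s per socket, {total_gb_s:.1f} GB/s per node")
# -> 204.8 GB/s per socket, 409.6 GB/s per node (theoretical peak)
```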

AMD and Oracle Collaborate to Provide AMD EPYC Processor-Based Offering in the Cloud

Today at Oracle OpenWorld 2018, AMD (NASDAQ: AMD) announced the availability of the first AMD EPYC processor-based instance on Oracle Cloud Infrastructure. With this announcement, Oracle becomes the largest public cloud provider to offer a bare metal instance on AMD EPYC processors. The AMD EPYC processor-based "E" series will lead with the bare metal Standard "E2", available immediately as the first instance type within the series. At $0.03/core hour, the AMD EPYC instance is up to 66 percent less expensive on average per core than general purpose instances offered by the competition, and is the most cost-effective instance available on any public cloud.
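
The pricing claim is easy to sanity-check. A minimal sketch of the implied comparison; the competitor price here is derived from AMD's "66 percent less" figure, not an independently sourced rate:

```python
# Back out the implied competitor per-core price from the "up to 66% less" claim.
epyc_price_per_core_hour = 0.03     # Oracle's quoted E2 price
claimed_discount = 0.66             # "up to 66 percent less on average per core"

implied_competitor_price = epyc_price_per_core_hour / (1 - claimed_discount)
print(f"implied competitor price: ~${implied_competitor_price:.3f}/core-hour")
# -> ~$0.088/core-hour
```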

"With the launch of the AMD instance, Oracle has once again demonstrated that we are focused on getting the best value and performance to our customers," said Clay Magouyrk, senior vice president, software development, Oracle Cloud Infrastructure. "At greater than 269 GB/Sec, the AMD EPYC platform3, offers the highest memory bandwidth of any public cloud instance. Combined with increased performance, these cost advantages help customers maximize their IT dollars as they make the move to the cloud."