News Posts matching #partners

Infineon and Quantinuum Partner to Advance Quantum Computing

Infineon Technologies AG, a global leader in semiconductor solutions, and Quantinuum, a global leader in integrated, full-stack quantum computing, today announced a strategic partnership to develop the future generation of ion traps. This partnership will drive the acceleration of quantum computing and enable progress in fields such as generative chemistry, material science, and artificial intelligence.

"We are thrilled to partner with Quantinuum, a leader in quantum computing, to push the boundaries of quantum computing and generate larger, more powerful machines that solve meaningful real-life problems," said Richard Kuncic, Senior Vice President and General Manager Power Systems at Infineon Technologies. "This collaboration brings together Infineon's state-of-the-art knowledge in process development, fabrication, and quantum processing unit (QPU) technology with Quantinuum's cutting-edge ion-trap design expertise and experience with operating high-performance commercial quantum computers."

Fractal Design partners with Best Buy to expand retail in North America

Today, Fractal Design announced a new retailer partnership with Best Buy, one of the world's largest specialty computer and hardware retailers. Starting today, Best Buy customers will be able to find Fractal's award-winning PC cases online at BestBuy.com. This new retail expansion ensures gamers and enthusiasts have ready access to the Scandinavian design, user-centric innovation, and premium quality that Fractal infuses into each of its products.

"We're excited to expand the retail options available to customers throughout the US by making Fractal gaming products purchasable at BestBuy.com." said Keith Washo, Fractal Design Senior Director of Sales. "We believe that combining our highly acclaimed gaming hardware and equipment with Best Buy's first-rate ecommerce platform will ensure greater access to our best-selling products and a quality shopping experience for Fractal customers and new shoppers alike."

Microsoft Schedules Next Xbox Partner Preview for October 17

We're thrilled to announce the next Xbox Partner Preview - our no-fluff, all-games broadcast - is coming this Thursday, October 17. In the latest installment, we'll feature a mix of new and upcoming games from incredible partners like Remedy Entertainment, Sega, 505 Games, and many more, packing over a dozen new trailers into roughly 25 minutes. During Xbox Partner Preview, you'll get a first look at gameplay from Alan Wake II's next expansion, The Lake House, an action-packed new trailer for Like A Dragon: Pirate Yakuza in Hawaii, a peek at multiple bosses in dark-fantasy action game Wuchang: Fallen Feathers, multiple world premieres, and other great titles coming to Xbox consoles, Windows PC, and Game Pass.

As always, Xbox Partner Preview is all about sharing exciting games news from our talented partners across the globe: you'll get new game reveals, release date announcements, and fresh new gameplay from upcoming games. And to sweeten the deal, during the broadcast, Xbox Wire will post exclusive behind-the-scenes stories about select titles shown. This event will be broadcast digitally on Thursday, October 17, at 10am Pacific / 1pm Eastern / 6pm UK across our Xbox channels on YouTube and Twitch.

AMI Partners with Samsung to Bring Firmware Security to PCs

AMI, the global leader in Dynamic Firmware for worldwide computing, has partnered with Samsung Electronics, the global leader in consumer technology, to create an enhanced joint security solution available in Samsung's Galaxy Book PCs. Alongside Samsung's multi-layer security platform Samsung Knox, AMI's Tektagon - the industry-leading Platform Root of Trust firmware security solution - is now integrated into Samsung PCs including the Galaxy Book5 Pro 360, Galaxy Book4 Pro, Galaxy Book4 Pro 360, and Galaxy Book4 Ultra.

Through this collaborative partnership, AMI's Tektagon seamlessly integrates with Samsung Knox to ensure that confidential and sensitive data stays safe at every layer of the device through real-time threat detection and collaborative protection, while providing the highest level of security against firmware-injected malware to help prevent ransomware and denial of service attacks.

Intel to Produce Custom AI Chips and Xeon 6 Processors for AWS

Intel Corp. and Amazon Web Services, Inc., an Amazon.com company, today announced a co-investment in custom chip designs under a multi-year, multi-billion-dollar framework covering products and wafers from Intel. This is a significant expansion of the two companies' longstanding strategic collaboration to help customers power virtually any workload and accelerate the performance of artificial intelligence (AI) applications.

As part of the expanded collaboration, Intel will produce an AI fabric chip for AWS on Intel 18A, the company's most advanced process node. Intel will also produce a custom Xeon 6 chip on Intel 3, building on the existing partnership under which Intel produces Xeon Scalable processors for AWS.

Efficient Teams Up with GlobalFoundries to Develop Ultra-Low Power MRAM Processors

Today, Efficient announced a strategic partnership with GlobalFoundries (GF) to bring to market a new high-performance computer processor that is up to 166x more energy-efficient than industry-standard embedded CPUs. Efficient is already working with select customers for early access and customer sampling by summer 2025. The official introduction of the category-creating processor will mark a new era in computing, free from restrictive energy limitations.

The partnership will combine Efficient's novel architecture and technology with GF's U.S.-based manufacturing, global reach, and market expertise to enable a quantum leap in edge device capabilities and battery lifetime. Through this partnership, Efficient will provide the computing power for smarter, longer-lasting devices and applications across the Internet of Things, wearable and implantable health devices, space systems, and security and defense.

AMD Acquires Hyperscale Solutions Provider ZT Systems

AMD today announced the signing of a definitive agreement to acquire ZT Systems, a leading provider of AI infrastructure for the world's largest hyperscale computing companies. The strategic transaction marks the next major step in AMD's AI strategy to deliver leadership AI training and inferencing solutions based on innovating across silicon, software and systems. ZT Systems' extensive experience designing and optimizing cloud computing solutions will also help cloud and enterprise customers significantly accelerate the deployment of AMD-powered AI infrastructure at scale. AMD has agreed to acquire ZT Systems in a cash and stock transaction valued at $4.9 billion, inclusive of a contingent payment of up to $400 million based on certain post-closing milestones. AMD expects the transaction to be accretive on a non-GAAP basis by the end of 2025.

"Our acquisition of ZT Systems is the next major step in our long-term AI strategy to deliver leadership training and inferencing solutions that can be rapidly deployed at scale across cloud and enterprise customers," said AMD Chair and CEO Dr. Lisa Su. "ZT adds world-class systems design and rack-scale solutions expertise that will significantly strengthen our data center AI systems and customer enablement capabilities. This acquisition also builds on the investments we have made to accelerate our AI hardware and software roadmaps. Combining our high-performance Instinct AI accelerator, EPYC CPU, and networking product portfolios with ZT Systems' industry-leading data center systems expertise will enable AMD to deliver end-to-end data center AI infrastructure at scale with our ecosystem of OEM and ODM partners."

Cooler Master Announces Collaboration with AMD for Ryzen 9000 Series

Cooler Master, a leading provider of PC components, gaming peripherals, and tech lifestyle solutions, today announced its collaboration with AMD to integrate AMD's recently announced Ryzen 9000 Series processors into a select range of Cooler Master PC systems. Cooler Master is leveraging AMD's cutting-edge technology to deliver unparalleled performance in its latest products, including the Sneaker X, Ncore 1 Pro, and MasterBox 6 Pro. The collaboration underscores Cooler Master's commitment to pushing the boundaries of what's possible in gaming and professional computing.

The inclusion of AMD's CPUs within Cooler Master products will not only elevate gaming and professional experiences but also highlight the symbiotic relationship between powerful processing and efficient cooling. This is a testament to Cooler Master's relentless pursuit of excellence in thermal management, ensuring that each system operates at peak efficiency.

AIC Partners with Unigen to Launch Power-Efficient AI Inference Server

AIC, a global leader in the design and manufacturing of industrial-strength servers, in partnership with Unigen Corporation, has launched the EB202-CP-UG, an ultra-efficient Artificial Intelligence (AI) inference server boasting over 400 trillion operations per second (TOPS) of performance. This innovative server is designed around the robust EB202-CP, a 2U Genoa-based storage server featuring a removable storage cage. By integrating eight Unigen Biscotti E1.S AI modules in place of standard E1.S SSDs, AIC is offering a specialized configuration for AI, the EB202-CP-UG: an air-cooled AI inference server characterized by an exceptional performance-per-watt ratio that ensures long-term cost savings.

"We are excited to partner with AIC to introduce innovative AI solutions," said Paul W. Heng, Founder and CEO of Unigen. "Their commitment to excellence in every product, especially their storage servers, made it clear that our AI technology would integrate seamlessly."

ByteDance and Broadcom to Collaborate on Advanced AI Chip

ByteDance, TikTok's parent company, is reportedly working with American chip designer Broadcom to develop a cutting-edge AI processor. This collaboration could secure a stable supply of high-performance chips for ByteDance, according to Reuters. Sources claim the joint project involves a 5 nm Application-Specific Integrated Circuit (ASIC), designed to comply with U.S. export regulations. TSMC is slated to manufacture the chip, though production is not expected to begin this year.

This partnership marks a significant development in U.S.-China tech relations, as no public announcements of such collaborations on advanced chips have been made since Washington implemented stricter export controls in 2022. For ByteDance, this move could reduce procurement costs and ensure a steady chip supply, crucial for powering its array of popular apps, including TikTok and the ChatGPT-like AI chatbot "Doubao." The company has already invested heavily in AI chips, reportedly spending $2 billion on NVIDIA processors in 2023.

Lenovo Releases Fiscal Year 2023/24 Earnings Report

Lenovo Group today announced Q4 and full-year results for fiscal year 2023/24. After resuming growth in Q3, the Group reported year-on-year revenue growth across all business groups in Q4, with Group revenue increasing nearly 10% year-on-year to US$13.8 billion, net income doubling year-on-year to US$248 million, and non-PC revenue mix reaching a historic high of 45%. The Group's Q4 and overall second-half performance demonstrates how Lenovo has navigated the past year's industry downturn, captured the tremendous growth opportunities presented by AI, and accelerated momentum across the business. Revenue for the full fiscal year was US$56.9 billion, and net income was US$1 billion. In the second half of the fiscal year, Lenovo achieved year-on-year revenue growth of 6%, and net margin recovered from a year-on-year decline in the first half to flat in the second half.

The Group is leading in an era of unprecedented AI opportunities with its pocket-to-cloud portfolio, strong ecosystem and partnerships, and full-stack AI capabilities. Since announcing its AI strategy in October 2023 at its annual Tech World event, Lenovo has launched its first wave of AI PCs as well as AI capabilities covering other smart devices, smart infrastructure, and smart solutions and services. The Group expects the AI PC - which it defines as a PC equipped with a personal AI agent based on natural interaction, heterogeneous computing, a personal knowledge base, connection to an open AI application ecosystem, and privacy and security protection - to grow from its current premium position to mainstream over the next three years, driving a new refresh cycle for the industry. Hybrid AI is also driving greater demand for AI infrastructure, and customers are increasingly asking for customized AI solutions and services, particularly consulting, design, deployment, and maintenance of AI.

Report Suggests Naver Siding with Samsung in $752 Million "Mach-1" AI Chip Deal

Samsung debuted its Mach-1 generation of AI processors during a recent shareholder meeting—the South Korean megacorp anticipates an early 2025 launch window. Their application-specific integrated circuit (ASIC) design is expected to "excel in edge computing applications," with a focus on low-power and efficiency-oriented operating environments. Naver Corporation was a key NVIDIA high-end AI customer in South Korea (and Japan), but the leading search platform firm and creator of the HyperCLOVA X LLM reportedly deliberated on adopting alternative hardware last October. The Korea Economic Daily believes that Naver's relationship with Samsung is set to grow, courtesy of a proposed $752 million investment: Samsung, "the world's top memory chipmaker, will supply its next-generation Mach-1 artificial intelligence chips to Naver Corp. by the end of this year."

Reports from last December indicated that the two companies were deep into the process of co-designing power-efficient AI accelerators—Naver's main goal is to finalize a product that will offer eight times more energy efficiency than NVIDIA's H100 AI accelerator. Naver's alleged bulk order—of roughly 150,000 to 200,000 Samsung Mach-1 AI chips—appears to be a stopgap. Industry insiders reckon that Samsung's first-gen AI accelerator is much cheaper when compared to NVIDIA H100 GPU price points—a per-unit figure of $3756 is mentioned in the KED Global article. Samsung is speculated to be shopping its fledgling AI tech to Microsoft and Meta.
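As a rough sanity check of the reported figures (which are only as reliable as the KED Global sourcing), the quoted per-unit price and order volume line up with the headline deal size: 200,000 chips at $3,756 apiece comes to roughly $751 million, close to the reported $752 million investment. A minimal sketch of that arithmetic, using only the numbers stated above:

```python
# Back-of-the-envelope check of the reported Samsung Mach-1 deal figures.
# Assumptions: the $3,756 per-unit price and the 150,000-200,000 unit order
# range reported by KED Global; the actual contract terms are not public.

UNIT_PRICE_USD = 3756
ORDER_RANGE_UNITS = (150_000, 200_000)
REPORTED_DEAL_USD = 752_000_000

for units in ORDER_RANGE_UNITS:
    total = units * UNIT_PRICE_USD
    share = total / REPORTED_DEAL_USD
    print(f"{units:>7,} units x ${UNIT_PRICE_USD:,} = ${total:,} "
          f"({share:.0%} of the reported $752M)")
```

The upper end of the rumored order volume essentially accounts for the full reported investment, which suggests the figure refers to the chip purchase itself rather than a broader co-development budget.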

NVIDIA Hopper Leaps Ahead in Generative AI at MLPerf

It's official: NVIDIA delivered the world's fastest platform in industry-standard tests for inference on generative AI. In the latest MLPerf benchmarks, NVIDIA TensorRT-LLM—software that speeds and simplifies the complex job of inference on large language models—boosted the performance of NVIDIA Hopper architecture GPUs on the GPT-J LLM nearly 3x over their results just six months ago. The dramatic speedup demonstrates the power of NVIDIA's full-stack platform of chips, systems and software to handle the demanding requirements of running generative AI. Leading companies are using TensorRT-LLM to optimize their models. And NVIDIA NIM—a set of inference microservices that includes inferencing engines like TensorRT-LLM—makes it easier than ever for businesses to deploy NVIDIA's inference platform.

Raising the Bar in Generative AI
TensorRT-LLM running on NVIDIA H200 Tensor Core GPUs—the latest, memory-enhanced Hopper GPUs—delivered the fastest performance running inference in MLPerf's biggest test of generative AI to date. The new benchmark uses the largest version of Llama 2, a state-of-the-art large language model packing 70 billion parameters. The model is more than 10x larger than the GPT-J LLM first used in the September benchmarks. The memory-enhanced H200 GPUs, in their MLPerf debut, used TensorRT-LLM to produce up to 31,000 tokens/second, a record on MLPerf's Llama 2 benchmark. The H200 GPU results include up to 14% gains from a custom thermal solution. It's one example of innovations beyond standard air cooling that systems builders are applying to their NVIDIA MGX designs to take the performance of Hopper GPUs to new heights.

Silicon Box Announces $3.6 Billion Foundry Deal - New Facility Marked for Northern Italy

Silicon Box, a cutting-edge, advanced panel-level packaging foundry, announced its intention to collaborate with the Italian government to invest up to $3.6 billion (€3.2 billion) in Northern Italy as the site of a new, state-of-the-art semiconductor assembly and test facility. This facility will help meet critical demand for advanced packaging capacity to enable the next-generation technologies that Silicon Box anticipates by 2028. The multi-year investment will replicate Silicon Box's flagship foundry in Singapore, which has proven capability and capacity for the world's most advanced semiconductor packaging solutions, then expand further into 3D integration and testing. When completed, the new facility will support approximately 1,600 Silicon Box employees in Italy. The construction of the facility is also expected to create several thousand more jobs, including eventual hiring by suppliers. Design and planning for the facility will begin immediately, with construction to commence pending European Commission approval of planned financial support by the Italian State.

As well as bringing the most advanced chiplet integration, packaging, and testing to Italy, Silicon Box's manufacturing process is based on panel-level production, a world-leading, first-of-its-kind combination that is already shipping product to customers from its Singapore foundry. Through the investment, Silicon Box plans greater innovation and expansion in Europe and globally. The new integrated production facility is expected to serve as a catalyst for broader ecosystem investments and innovation in Italy, as well as the rest of the European Union.

Cerebras & G42 Break Ground on Condor Galaxy 3 - an 8 exaFLOPs AI Supercomputer

Cerebras Systems, the pioneer in accelerating generative AI, and G42, the Abu Dhabi-based leading technology holding group, today announced the build of Condor Galaxy 3 (CG-3), the third cluster of their constellation of AI supercomputers, the Condor Galaxy. Featuring 64 of Cerebras' newly announced CS-3 systems - all powered by the industry's fastest AI chip, the Wafer-Scale Engine 3 (WSE-3) - Condor Galaxy 3 will deliver 8 exaFLOPs of AI with 58 million AI-optimized cores. The Cerebras and G42 strategic partnership already delivered 8 exaFLOPs of AI supercomputing performance via Condor Galaxy 1 and Condor Galaxy 2, each amongst the largest AI supercomputers in the world. Located in Dallas, Texas, Condor Galaxy 3 brings the current total of the Condor Galaxy network to 16 exaFLOPs.

"With Condor Galaxy 3, we continue to achieve our joint vision of transforming the worldwide inventory of AI compute through the development of the world's largest and fastest AI supercomputers," said Kiril Evtimov, Group CTO of G42. "The existing Condor Galaxy network has trained some of the leading open-source models in the industry, with tens of thousands of downloads. By doubling the capacity to 16exaFLOPs, we look forward to seeing the next wave of innovation Condor Galaxy supercomputers can enable." At the heart of Condor Galaxy 3 are 64 Cerebras CS-3 Systems. Each CS-3 is powered by the new 4 trillion transistor, 900,000 AI core WSE-3. Manufactured at TSMC at the 5-nanometer node, the WSE-3 delivers twice the performance at the same power and for the same price as the previous generation part. Purpose built for training the industry's largest AI models, WSE-3 delivers an astounding 125 petaflops of peak AI performance per chip.

Ghost of Tsushima Director's Cut is Coming to PC on May 16

Ghost of Tsushima Director's Cut is coming to PC! We at Nixxes are thrilled to collaborate with the talented team at Sucker Punch Productions who created this critically acclaimed open-world action adventure. We are excited to bring Jin's story to a new audience and to bring a Sucker Punch title to PC for the first time. The PC version of Ghost of Tsushima Director's Cut includes the full game, the Iki Island expansion, and the cooperative online multiplayer Legends mode. For the past year, the team at Nixxes has been working hard to bring the Sucker Punch in-house engine technology to PC and implement PC features such as unlocked frame rates, a variety of graphics settings and presets, and customizable mouse and keyboard controls.

With ultrawide monitor support, you can take in the expansive terrain and ancient landmarks with a cinematic field of view. Ghost of Tsushima Director's Cut on PC is fully optimized for 21:9 and 32:9 resolutions and even supports 48:9 resolutions for triple monitor setups. Ghost of Tsushima Director's Cut on PC features the latest performance-enhancing technologies. NVIDIA DLSS 3 and AMD FSR 3 are available with both upscaling and frame generation options. Intel XeSS upscaling is also supported and if your hardware has headroom to spare, you can use NVIDIA DLAA or FSR 3 Native AA to further boost image quality.

Microsoft DirectSR Super Resolution API Brings Together DLSS, FSR and XeSS

Microsoft has just announced that its new DirectSR Super Resolution API for DirectX will provide a unified interface for developers to implement super resolution in their games. This means that game studios no longer have to choose between DLSS, FSR, and XeSS, or spend additional resources to implement, bug-test, and support multiple upscalers. For gamers this is huge news, too, because they will be able to run upscaling in all DirectSR games—no matter the hardware they own. While AMD FSR and Intel XeSS run on GPUs from all vendors, NVIDIA DLSS is exclusive to Team Green's hardware. In its post, Microsoft also confirms that DirectSR will not replace FSR/DLSS/XeSS with a new Microsoft upscaler, but rather builds on existing technologies that are already available, unifying access to them.

While we have to wait until March 21 for more details to be revealed at GDC 2024, Microsoft's Joshua Tucker stated in a blog post: "We're thrilled to announce DirectSR, our new API designed in partnership with GPU hardware vendors to enable seamless integration of Super Resolution (SR) into the next generation of games. Super Resolution is a cutting-edge technique that increases the resolution and visual quality in games. DirectSR is the missing link developers have been waiting for when approaching SR integration, providing a smoother, more efficient experience that scales across hardware. This API enables multi-vendor SR through a common set of inputs and outputs, allowing a single code path to activate a variety of solutions including NVIDIA DLSS Super Resolution, AMD FidelityFX Super Resolution, and Intel XeSS. DirectSR will be available soon in the Agility SDK as a public preview, which will enable developers to test it out and provide feedback. Don't miss our DirectX State of the Union at GDC to catch a sneak peek at how DirectSR can be used with your games!"
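The actual DirectSR interfaces were not public ahead of the GDC reveal, but the core idea Microsoft describes ("a single code path to activate a variety of solutions") is essentially a common upscaler abstraction that dispatches to whichever vendor backend the hardware supports. The sketch below is purely illustrative Python pseudocode of that pattern; the names (`FrameInputs`, `Upscaler`, `select_backend`, the backend classes) are hypothetical and are not part of the real DirectSR API.

```python
# Illustrative sketch only: one upscaling code path dispatching to vendor
# backends. Class and function names are hypothetical, NOT the DirectSR API.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class FrameInputs:
    """The common inputs a super-resolution solution typically consumes."""
    color: bytes           # low-resolution color buffer
    depth: bytes           # depth buffer
    motion_vectors: bytes  # per-pixel motion vectors
    target_width: int
    target_height: int


class Upscaler(Protocol):
    name: str
    def supported(self, gpu_vendor: str) -> bool: ...
    def upscale(self, frame: FrameInputs) -> bytes: ...


class DLSSBackend:
    name = "NVIDIA DLSS Super Resolution"
    def supported(self, gpu_vendor: str) -> bool:
        return gpu_vendor == "nvidia"          # vendor-exclusive
    def upscale(self, frame: FrameInputs) -> bytes:
        return frame.color                     # placeholder for the vendor call


class FSRBackend:
    name = "AMD FidelityFX Super Resolution"
    def supported(self, gpu_vendor: str) -> bool:
        return True                            # cross-vendor fallback
    def upscale(self, frame: FrameInputs) -> bytes:
        return frame.color                     # placeholder for the vendor call


def select_backend(gpu_vendor: str, backends: list[Upscaler]) -> Upscaler:
    """Game code picks a backend once; the same upscale() path then runs everywhere."""
    for backend in backends:
        if backend.supported(gpu_vendor):
            return backend
    raise RuntimeError("no super-resolution backend available")


backend = select_backend("amd", [DLSSBackend(), FSRBackend()])
print(f"Using: {backend.name}")
```

The practical appeal for studios is exactly this shape: the game feeds one set of inputs and ships one integration, while the runtime decides whether DLSS, FSR, or XeSS services the request.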

ServiceNow, Hugging Face & NVIDIA Release StarCoder2 - a New Open-Access LLM Family

ServiceNow, Hugging Face, and NVIDIA today announced the release of StarCoder2, a family of open-access large language models for code generation that sets new standards for performance, transparency, and cost-effectiveness. StarCoder2 was developed in partnership with the BigCode Community, managed by ServiceNow, the leading digital workflow company making the world work better for everyone, and Hugging Face, the most-used open-source platform, where the machine learning community collaborates on models, datasets, and applications. Trained on 619 programming languages, StarCoder2 can be further trained and embedded in enterprise applications to perform specialized tasks such as application source code generation, workflow generation, text summarization, and more. Developers can use its code completion, advanced code summarization, code snippets retrieval, and other capabilities to accelerate innovation and improve productivity.

StarCoder2 offers three model sizes: a 3-billion-parameter model trained by ServiceNow; a 7-billion-parameter model trained by Hugging Face; and a 15-billion-parameter model built by NVIDIA with NVIDIA NeMo and trained on NVIDIA accelerated infrastructure. The smaller variants provide powerful performance while saving on compute costs, as fewer parameters require less computing during inference. In fact, the new 3-billion-parameter model matches the performance of the original StarCoder 15-billion-parameter model. "StarCoder2 stands as a testament to the combined power of open scientific collaboration and responsible AI practices with an ethical data supply chain," emphasized Harm de Vries, lead of ServiceNow's StarCoder2 development team and co-lead of BigCode. "The state-of-the-art open-access model improves on prior generative AI performance to increase developer productivity and provides developers equal access to the benefits of code generation AI, which in turn enables organizations of any size to more easily meet their full business potential."
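Because the models are open-access on Hugging Face, trying one out takes only a few lines with the transformers library. The sketch below is a minimal example under the assumption that the 3-billion-parameter checkpoint is published under the `bigcode/starcoder2-3b` ID and that a recent transformers release plus PyTorch are installed; adjust the checkpoint name if the hosted ID differs.

```python
# Minimal code-completion sketch with the smallest StarCoder2 variant.
# Assumptions: transformers and torch are installed, and the
# "bigcode/starcoder2-3b" checkpoint ID is available on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-3b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Give the model the start of a function and let it complete the body.
prompt = "def fibonacci(n: int) -> int:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The larger 7B and 15B variants follow the same pattern with different checkpoint IDs; the trade-off, as noted above, is higher inference cost for stronger completions.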

Xbox & Microsoft Schedule GDC 2024 Presentations

As GDC, the world's largest game developer conference, returns to San Francisco, Microsoft and Xbox will be there to engage and empower developers, publishers, and technology partners across the industry. We are committed to supporting game developers on any platform, anywhere in the world, at every stage of development. Our message is simple: Microsoft and Xbox are here to help power your games and empower your teams. From March 18 - 22, the Xbox Lobby Lounge in the Moscone Center South can't be missed—an easy meeting point, and a first step toward learning more about the ID@Xbox publishing program, the Developer Acceleration Program (DAP) for underrepresented creators, Azure cloud gaming services, and anything else developers might need.

GDC features dozens of speakers from across Xbox, Activision, Blizzard, King, and ZeniMax who will demonstrate groundbreaking in-game innovations and share community-building strategies. Microsoft technology teams, with support from partners, will also host talks spotlighting new tools, software, and services that increase developer velocity, grow player engagement, and help creators succeed. See below for the conference programming details.

Samsung & Vodafone "Open RAN Ecosystem" Bolstered by AMD EPYC 8004 Series

Samsung Electronics and Vodafone, in collaboration with AMD, today announced that the three companies have successfully demonstrated an end-to-end call with the latest AMD processors enabling Open RAN technology, a first for the industry. This joint achievement represents the companies' technical leadership in enriching the Open RAN ecosystem throughout the industry. Conducted in Samsung's R&D lab in Korea, the first call was completed using Samsung's versatile, O-RAN-compliant, virtualized RAN (vRAN) software, powered by AMD EPYC 8004 Series processors on Supermicro's Telco/Edge servers, supported by Wind River Studio Container-as-a-Service (CaaS) platform. This demonstration aimed to verify optimized performance, energy efficiency and interoperability among partners' solutions.

The joint demonstration represents Samsung and Vodafone's ongoing commitment to reinforce their position in the Open RAN market and expand their ecosystem with industry-leading partners. This broader and growing Open RAN ecosystem helps operators to build and modernize mobile networks with greater flexibility, faster time-to-market (TTM), and unmatched performance. "Open RAN represents the forthcoming major transformation in advancing mobile networks for the future. Reaching this milestone with top industry partners like Samsung and AMD shows Vodafone's dedication to delivering on the promise of Open RAN innovation," said Nadia Benabdallah, Network Strategy and Engineering Director at Vodafone Group. "Vodafone is continually looking to innovate its network by exploring the potential and diversity of the ecosystem."

IBM Introduces LinuxONE 4 Express, a Value-oriented Hybrid Cloud & AI Platform

IBM has announced IBM LinuxONE 4 Express, extending the latest performance, security, and AI capabilities of LinuxONE to small and medium-sized businesses and to new data center environments. The pre-configured rack mount system is designed to offer cost savings and to remove client guesswork when spinning up workloads quickly and getting started with the platform to address new and traditional use cases such as digital assets, medical imaging with AI, and workload consolidation.

Building an integrated hybrid cloud strategy for today and years to come
As businesses move their products and services online quickly, they are often left with a hybrid cloud environment created by default, with siloed stacks that are not conducive to alignment across businesses or the introduction of AI. In a recent IBM IBV survey, 84% of executives surveyed acknowledged that their enterprise struggles to eliminate silo-to-silo handoffs, and 78% of responding executives said that an inadequate operating model impedes successful adoption of their multicloud platform. With the pressure to accelerate and scale the impact of data and AI across the enterprise - and improve business outcomes - another approach organizations can take is to more carefully identify which workloads should run on premises versus in the cloud.

GIGABYTE Enterprise Servers & Motherboards Roll Out on European E-commerce Platform

GIGABYTE Technology, a pioneer in computer hardware, has taken a significant stride in shaping its European business model. Today, GIGABYTE has broadened its e-commerce platform, shop.gigabyte.eu, by integrating enterprise server and server motherboard solutions into its product portfolio. Being at the forefront of computer hardware manufacturing, GIGABYTE recognizes that it is imperative to expand its presence in the EMEA region to maintain its leadership across all markets. With the introduction of our enterprise-level server and motherboard solutions, we are dedicated to delivering a diverse range of high-performance products directly to our B2B clients.

GIGABYTE offers a complete product portfolio that addresses all workloads from the data center to edge including traditional and emerging workloads in HPC and AI to data analytics, 5G/edge, cloud computing, and more. Our enduring partnerships with key technology leaders ensure that our new products are at the forefront of innovation and launch with new partner platforms. Our systems embody performance, security, scalability, and sustainability. Within the e-commerce product portfolio, we offer a selection of models from our Edge, Rack, GPU, and Storage series. Additionally, the platform provides server motherboards for custom integration. The current selection comprises a mix of solutions tailored to online sales. For more complex solutions, customers can get in touch via the integrated contact form.

Bungie Announces Destiny 2 x BioWare Crossover, Normandy Crew Landing February 13

Bungie has announced its new collaboration with EA and BioWare to allow Guardians to join the crew of the Normandy with new cosmetics and in-game items, launching February 13, 2024. The Normandy Crew Bundle will be available at the Eververse store in-game and will include a Commander Shepard-inspired N7 armor set for Titans, a Garrus-inspired Vakarian set for Hunters, and a Liara-inspired Shadow Broker set for Warlocks. In celebration of the partnership, all players will be able to claim the Alliance Requisitions Bundle, including the Enhanced Defense Ghost Shell, Alliance Scout Frigate ship, and Alliance Drop Ship Sparrow, which will be available at no cost. Players can also get the Omni Strike finisher and Flux Dance emote for Silver.

Now released in Destiny 2, Riven's Wishes are new weekly quests open to all players from January 30 until March 12. During this period, Guardians will be tasked with completing a pursuit each week to earn a token redeemable for a treasure trove of rewards. Choose from the Wish for Strength to earn Last Wish raid Deepsight weapons, Wish for Protection to armor up with Exotic gear from the Lightfall year, or Wish for Beauty to collect event mementos and essential Ascendant crafting materials.

MSI MPOWER Motherboard Series Resurrected After Long Absence

An exclusive report provides an initial tease of MSI's relaunch of MPOWER—a beloved product line of high-performance yet wallet-friendly motherboards. Wccftech published their Z790MPOWER model coverage only a few hours ago. MSI's final batch of MPOWER-branded boards landed back in 2017, with Z170 and Z270 chipsets (on Intel Socket 1151). Here is Wccftech's statement on the matter: "MSI is marking the return of the MPOWER series with a new and cost-effective Z790 product, the Z790MPOWER. This motherboard may look like a very mainstream design but it has something that only a few high-end motherboards can do and that is support for the best DDR5 memory out there."

They moved on to showcasing the board's feature set: "Starting with the details, the MSI Z790MPOWER motherboard features the LGA 1700/1800 socket & supports 12th, 13th, and 14th Gen CPUs from Intel. It is powered by a 15-Phase VRM design which is provided power through dual 8-pin connectors. There are large heatsinks over the VRMs and the Mini-ATX with a silver and black finish looks great." The usual bits of overclocking terminology adorn the Z790MPOWER's various heatsinks—including "Overclock, Frequency, MHz, Voltages and Clock."

OpenAI CEO Reportedly Seeking Funds for Purpose-built Chip Foundries

OpenAI CEO Sam Altman had a turbulent winter 2023 career moment, but appears to be going all in on his company's future interests. A Bloomberg report suggests that the tech visionary has initiated a major fundraising initiative for the construction of OpenAI-specific semiconductor production plants. The AI evangelist reckons that his industry will become prevalent enough to demand a dedicated network of manufacturing facilities—the U.S.-based artificial intelligence (AI) research organization is reportedly exploring custom artificial intelligence chip designs. Proprietary AI-focused GPUs and accelerators are not novelties at this point—many top tech companies rely on NVIDIA solutions, but are keen to deploy custom-built hardware in the near future.

OpenAI's popular ChatGPT system is reliant on NVIDIA H100 and A100 GPUs, but tailor-made alternatives seem to be the desired route for Altman & Co. The "on their own terms" pathway seemingly skips the traditional chip manufacturing process—the big foundries could struggle to keep up with demand for AI-oriented silicon. G42 (an Abu Dhabi-based AI development holding company) and SoftBank Group are mentioned as prime investment partners in OpenAI's fledgling scheme—Bloomberg proposes that Altman's team is negotiating an $8 to $10 billion deal with top brass at G42. OpenAI's planned creation of its own foundry network is certainly a lofty and costly goal—the report does not specify whether existing facilities would be purchased and overhauled, or new plants constructed entirely from scratch.