News Posts matching #Efficient


ADLINK EMP-520 Series Compact Box PC Wins Best-in-Show at Embedded World 2025

ADLINK Technology Inc., a global leader in edge computing, proudly announces that its newly released EMP-520 Series industrial compact box PC has been recognized for its excellence, securing the Best-in-Show award in the Computer Boards, Systems, Components & Peripherals category at embedded world 2025. It was selected for its innovative design, user-centric features, and exceptional performance in industrial environments.

Smart, Reliable, and Efficient Industrial Computing Solution
Designed to meet the needs of modern industrial applications, the EMP-520 Series delivers high computing performance with minimal maintenance, helping businesses maximize productivity and reduce downtime. Powered by Intel 14th Gen Core processors, it ensures energy-efficient operation, making it ideal for high-demand environments that require continuous, reliable computing. With support for four simultaneous 4K video outputs and EDID emulation, it delivers seamless visual performance. The compact design enables easy integration into space-constrained environments while ensuring reliable operation in mission-critical applications.

Raspberry Pi Announces New 45 W USB-C Power Supply

Whether you're running a Raspberry Pi or charging a laptop, the quality of your power supply makes all the difference. Today, we're excited to introduce our best power supply yet, perfect for either task: the $15 Raspberry Pi 45 W USB-C Power Supply.

Efficient regulation
Every Raspberry Pi single-board computer we've ever sold needs flash storage and a power supply. And not just any flash storage or power supply: buying the cheapest SD card or USB wall wart you can find on Amazon is a guaranteed way to have a bad experience. So over time, we started to regulate the accessories offered by our Approved Resellers. We would test resellers' SD cards, to ensure that they had sufficient random-access performance and were resilient against thousands of unplanned power loss events. Last year, we took this to the next level, launching Raspberry Pi-branded A2-class SD cards and NVMe SSDs, which are now the only storage options promoted alongside our computers.

MangoBoost Achieves Record-Breaking MLPerf Inference v5.0 Results with AMD Instinct MI300X

MangoBoost, a provider of cutting-edge system solutions designed to maximize AI data center efficiency, has set a new industry benchmark with its latest MLPerf Inference v5.0 submission. The company's Mango LLMBoost AI Enterprise MLOps software has demonstrated unparalleled performance on AMD Instinct MI300X GPUs, delivering the highest-ever recorded results for Llama2-70B in the offline inference category. This milestone marks the first-ever multi-node MLPerf inference result on AMD Instinct MI300X GPUs. By harnessing the power of 32 MI300X GPUs across four server nodes, Mango LLMBoost has surpassed all previous MLPerf inference results, including those from competitors using NVIDIA H100 GPUs.

Unmatched Performance and Cost Efficiency
MangoBoost's MLPerf submission demonstrates a 24% performance advantage over the best-published MLPerf result from Juniper Networks utilizing 32 NVIDIA H100 GPUs. Mango LLMBoost achieved 103,182 tokens per second (TPS) in the offline scenario and 93,039 TPS in the server scenario on AMD MI300X GPUs, outperforming the previous best result of 82,749 TPS on NVIDIA H100 GPUs. In addition to superior performance, Mango LLMBoost + MI300X offers significant cost advantages. With AMD MI300X GPUs priced between $15,000 and $17,000—compared to the $32,000-$40,000 cost of NVIDIA H100 GPUs (source: Tom's Hardware—H100 vs. MI300X Pricing)—Mango LLMBoost delivers up to 62% cost savings while maintaining industry-leading inference throughput.
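
As a quick sanity check on those numbers (a hedged back-of-envelope sketch using only the figures quoted above):

# Back-of-envelope check of the figures quoted above; all inputs are the numbers
# cited in the text, so treat the outputs as estimates rather than new results.
mi300x_tps = 103_182          # Mango LLMBoost on 32x MI300X, offline scenario
h100_tps = 82_749             # previous best published result on 32x H100

throughput_gain = mi300x_tps / h100_tps - 1
print(f"Throughput advantage: {throughput_gain:.1%}")           # ~24.7%, matching the ~24% claim

# "Up to 62% cost savings" follows from the quoted per-GPU price ranges alone:
best_case_savings = 1 - 15_000 / 40_000                          # cheapest MI300X vs. priciest H100
print(f"Best-case GPU price savings: {best_case_savings:.0%}")   # ~62%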

Cervoz Announces T455 NVMe SSD Series - Offering Advanced Endurance for Demanding Industries

In industrial environments, equipment runs 24/7, handling frequent and intensive write operations. To support these demanding workloads, storage solutions must offer high endurance to extend lifespan, minimize downtime, and lower Total Cost of Ownership (TCO). Designed to meet these demands, the Cervoz T455 Series, M.2 2280 NVMe SSD delivers 35% greater endurance through a refined firmware architecture and proven storage technologies. Ideal for industrial automation, edge computing, and high-performance computing (HPC), it ensures reliable performance under heavy workloads.

Enhanced Endurance for Demanding Industrial Applications: Over-Provisioning Technology Enhances SSD Performance and Longevity
Cervoz's Over-Provisioning technology optimizes SSD performance by reserving extra storage space, boosting efficiency and extending lifespan (see the sketch after this list). How SSD Over-Provisioning delivers benefits:
  • Extended Lifespan: Reserving additional storage space enhances NAND durability by at least 35%.
  • Consistent Performance: Maintains steady performance levels even under high-intensity workloads.
  • Optimized Resource Management: Reserved space is allocated for optimal write efficiency.
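
Below is a rough illustration of how reserving space translates into endurance headroom (a hedged sketch; the capacity splits and write-amplification figures are hypothetical examples, not Cervoz specifications):

# Illustrative over-provisioning (OP) math. OP is the share of raw NAND that is
# reserved and never exposed to the host; more OP generally lowers write
# amplification and so stretches the NAND program/erase budget further.
# The specific figures here are hypothetical, not Cervoz datasheet values.

def op_ratio(raw_gb: float, user_gb: float) -> float:
    """Over-provisioning as a fraction of the user-visible capacity."""
    return (raw_gb - user_gb) / user_gb

def relative_endurance(waf_baseline: float, waf_with_op: float) -> float:
    """Endurance gain when extra OP reduces write amplification (WAF)."""
    return waf_baseline / waf_with_op - 1

print(f"OP with 512 GB raw exposed as 480 GB: {op_ratio(512, 480):.1%}")   # ~6.7%
print(f"OP with 512 GB raw exposed as 400 GB: {op_ratio(512, 400):.1%}")   # ~28%

# If added OP drops WAF from, say, 2.7 to 2.0, host-visible endurance rises ~35%.
print(f"Endurance gain: {relative_endurance(2.7, 2.0):.0%}")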

SK hynix Reportedly Developing "LPDDR5M" Memory, More Power Efficient than LPDDR5X Standard

According to South Korea's Money Today, SK hynix is currently engaged in the development of yet another variation of LPDDR5. The mega-supplier of DRAM and flash memory chips publicly disclosed its LPDDR5 Turbo (LPDDR5T) design back in late 2023, advertising that iteration as the "world's fastest mobile memory standard." The first public demonstration of LPDDR5T (10533) was performed at last February's IEEE International Solid-State Circuits Conference (ISSCC). Currently, the familiar LPDDR5X standard is prevalent throughout commercial channels. Insiders believe that a proposed new "LPDDR5M" design will be released as a lower-power alternative to LPDDR5X.

Insiders reckon that the unannounced LPDDR5M standard operates at a lower voltage (reportedly 0.98 V) than current offerings (LPDDR5X: 1.05 V). Given the nature of its acronym (Low Power Double Data Rate), this memory type was devised with efficient operation in mind, making it ideal for mobile applications. An industry mole proposes that internal company discussions have highlighted a key figure: "at maximum speed, LPDDR5M is ~8% more power efficient than LPDDR5X." The Money Today article mentions that older LPDDR4 standards are classed as "legacy products" by company leadership, while LPDDR5 variants are (allegedly) categorized as "high value-added products." The rumored addition of LPDDR5M is viewed by regional memory industry watchers as a fortification (and diversification) of SK hynix's lineup, which already encompasses LPDDR5X and LPDDR5T. Tipsters posit that LPDDR5M memory is destined to feature inside next-gen smartphones with on-board AI capabilities.
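
As a rough, hedged sanity check on those voltage figures (a back-of-envelope estimate, not an SK hynix disclosure; the share of power that scales with voltage is a hypothetical assumption):

# Back-of-envelope check of the rumored LPDDR5M vs. LPDDR5X voltages.
# Dynamic (switching) power scales roughly with V^2; static and I/O terms do not,
# so the realizable saving is smaller than the pure V^2 ratio suggests.

v_lpddr5x = 1.05   # volts, current standard (as quoted above)
v_lpddr5m = 0.98   # volts, rumored LPDDR5M figure

dynamic_scaling = (v_lpddr5m / v_lpddr5x) ** 2
print(f"Dynamic-power ratio: {dynamic_scaling:.2f}")        # ~0.87, i.e. ~13% lower

# If only a portion of total device power tracks V^2 (a hypothetical 60% here),
# the overall saving lands near the ~8% figure attributed to insiders.
dynamic_share = 0.60
total_ratio = dynamic_share * dynamic_scaling + (1 - dynamic_share)
print(f"Estimated total-power ratio: {total_ratio:.2f}")     # ~0.92, i.e. ~8% lower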

Imagination's New DXTP GPU for Mobile and Laptop: 20% More Power Efficient

Today Imagination Technologies announces its latest GPU IP, Imagination DXTP, which sets a new standard for the efficient acceleration of graphics and compute workloads on smartphones and other power-constrained devices. Thanks to an array of micro-architectural improvements, DXTP delivers up to 20% improved power efficiency (FPS/W) on popular graphics workloads when compared to its DXT equivalent.

"The global smartphone market is experiencing a resurgence, propelled by cutting-edge AI features such as personal agents and enhanced photography," says Peter Richardson, Partner & VP at Counterpoint Research. "However, the success of this AI-driven revolution hinges on maintaining the high standards users expect: smooth interfaces, sleek designs, and all-day battery life. As the market matures, consumers are gravitating towards premium devices that seamlessly integrate these advanced AI capabilities without compromising on essential smartphone qualities."

EIZO Unveils FlexScan FLT, Its Most Power Efficient and Lightweight Monitor

EIZO Corporation today unveiled the FlexScan FLT, the world's most power-efficient monitor, embodying EIZO's vision for the future of monitor design, work style, and sustainability. The FLT is a lightweight, 23.8-inch monitor with Full HD (1920 x 1080 pixels) resolution designed for business users.

For more than 30 years, EIZO has led the development of power-efficient, ergonomic, and eco-conscious monitors with its FlexScan series for business enterprise. In 2023, the company accelerated its efforts to achieve carbon neutrality across the entire value chain by 2040, as part of its "Transition to Net Zero" plan, aiming to realize a low-carbon society. The FLT, or "Future-Leading Technology," was designed with this vision in mind. It embodies the philosophies driving FlexScan's evolution and paves the way to a future that EIZO envisions with coming generations of its monitors. Its sustainability-focused approach, robust functionality, and sleek design reflect EIZO's commitment to continuous innovation - not just for today's challenges, but for a brighter tomorrow.

Efficient Teams Up with GlobalFoundries to Develop Ultra-Low Power MRAM Processors

Today, Efficient announced a strategic partnership with GlobalFoundries (GF) to bring to market a new high-performance computer processor that is up to 166x more energy-efficient than industry-standard embedded CPUs. Efficient is already working with select customers for early access and customer sampling by summer 2025. The official introduction of the category-creating processor will mark a new era in computing, free from restrictive energy limitations.

The partnership will combine Efficient's novel architecture and technology with GF's U.S.-based manufacturing, global reach, and market expertise to enable a quantum leap in edge device capabilities and battery lifetime. Through this partnership, Efficient will provide the computing power for smarter, longer-lasting devices and applications across the Internet of Things, wearable and implantable health devices, space systems, and security and defense.

FuriosaAI Unveils RNGD Power-Efficient AI Processor at Hot Chips 2024

Today at Hot Chips 2024, FuriosaAI is pulling back the curtain on RNGD (pronounced "Renegade"), our new AI accelerator designed for high-performance, highly efficient large language model (LLM) and multimodal model inference in data centers. As part of his Hot Chips presentation, Furiosa co-founder and CEO June Paik is sharing technical details and providing the first hands-on look at the fully functioning RNGD card.

With a TDP of 150 watts, a novel chip architecture, and advanced memory technology like HBM3, RNGD is optimized for inference with demanding LLMs and multimodal models. It's built to deliver high performance, power efficiency, and programmability all in a single product - a trifecta that the industry has struggled to achieve in GPUs and other AI chips.

AIC Partners with Unigen to Launch Power-Efficient AI Inference Server

AIC, a global leader in the design and manufacturing of industrial-strength servers, in partnership with Unigen Corporation, has launched the EB202-CP-UG, an ultra-efficient Artificial Intelligence (AI) inference server boasting over 400 trillion operations per second (TOPS) of performance. This innovative server is designed around the robust EB202-CP, a 2U Genoa-based storage server featuring a removable storage cage. By integrating eight Unigen Biscotti E1.S AI modules in place of standard E1.S SSDs, AIC offers a specialized configuration for AI, the EB202-CP-UG: an air-cooled AI inference server characterized by an exceptional performance-per-watt ratio that ensures long-term cost savings.

"We are excited to partner with AIC to introduce innovative AI solutions," said Paul W. Heng, Founder and CEO of Unigen. "Their commitment to excellence in every product, especially their storage servers, made it clear that our AI technology would integrate seamlessly."

Applied Materials Unveils Chip Wiring Innovations for More Energy-Efficient Computing

Applied Materials, Inc. today introduced materials engineering innovations designed to increase the performance-per-watt of computer systems by enabling copper wiring to scale to the 2 nm logic node and beyond. "The AI era needs more energy-efficient computing, and chip wiring and stacking are critical to performance and power consumption," said Dr. Prabu Raja, President of the Semiconductor Products Group at Applied Materials. "Applied's newest integrated materials solution enables the industry to scale low-resistance copper wiring to the emerging angstrom nodes, while our latest low-k dielectric material simultaneously reduces capacitance and strengthens chips to take 3D stacking to new heights."

Overcoming the Physics Challenges of Classic Moore's Law Scaling
Today's most advanced logic chips can contain tens of billions of transistors connected by more than 60 miles of microscopic copper wiring. Each layer of a chip's wiring begins with a thin film of dielectric material, which is etched to create channels that are filled with copper. Low-k dielectrics and copper have been the industry's workhorse wiring combination for decades, allowing chipmakers to deliver improvements in scaling, performance and power-efficiency with each generation.
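
To make the resistance/capacitance trade-off concrete, here is a hedged, first-order sketch; the segment resistance and capacitance values are hypothetical, not Applied Materials data:

# Why wiring resistance (R) and dielectric capacitance (C) both matter:
# interconnect signal delay scales roughly with the R*C product, so scaling
# copper to finer pitches (which raises R) must be offset by lower-k dielectrics
# (which lower C) to keep delay and switching energy in check.

def rc_delay(resistance_ohm: float, capacitance_f: float) -> float:
    """First-order interconnect delay estimate (seconds), ~0.69 * R * C."""
    return 0.69 * resistance_ohm * capacitance_f

baseline = rc_delay(2_000, 2.0e-15)        # hypothetical 2 kohm, 2 fF wire segment
scaled   = rc_delay(3_000, 2.0e-15)        # thinner wire: resistance up ~50%
low_k    = rc_delay(3_000, 1.4e-15)        # lower-k dielectric claws capacitance back

print(f"baseline: {baseline*1e12:.2f} ps, scaled: {scaled*1e12:.2f} ps, "
      f"scaled + low-k: {low_k*1e12:.2f} ps")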

NVIDIA Modulus & Omniverse Drive Physics-informed Models and Simulations

A manufacturing plant near Hsinchu, Taiwan's Silicon Valley, is among facilities worldwide boosting energy efficiency with AI-enabled digital twins. A virtual model can help streamline operations, maximizing throughput for its physical counterpart, say engineers at Wistron, a global designer and manufacturer of computers and electronics systems. In the first of several use cases, the company built a digital copy of a room where NVIDIA DGX systems undergo thermal stress tests (pictured above). Early results were impressive.

Making Smart Simulations
Using NVIDIA Modulus, a framework for building AI models that understand the laws of physics, Wistron created digital twins that let them accurately predict the airflow and temperature in test facilities that must remain between 27 and 32 degrees C. A simulation that would've taken nearly 15 hours with traditional methods on a CPU took just 3.3 seconds on an NVIDIA GPU running inference with an AI model developed using Modulus, a whopping 15,000x speedup. The results were fed into tools and applications built by Wistron developers with NVIDIA Omniverse, a platform for creating 3D workflows and applications based on OpenUSD.
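
The quoted speedup checks out from the figures alone (a simple arithmetic sketch):

# Sanity check of the quoted simulation speedup.
cpu_time_s = 15 * 3600      # "nearly 15 hours" with traditional CPU methods
gpu_time_s = 3.3            # AI-model inference on an NVIDIA GPU

speedup = cpu_time_s / gpu_time_s
print(f"Speedup: ~{speedup:,.0f}x")   # ~16,364x, in line with the quoted ~15,000x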

Cervoz Introduces T425 Series of Industrial M.2 NVMe SSDs

Cervoz brings its new storage solution to industrial applications with the launch of its new T425 Series M.2 NVMe SSDs: M.2 2230 (B+M) and M.2 2242 (B+M). Available in the compact 2230 and 2242 form factors, these PCIe Gen3 x2 SSDs pack impressive performance into small footprints. Engineered for reliability and efficiency, the T425 Series provides industrial-grade solutions for embedded systems and space-constrained applications.

Space-Saving Form Factors for Seamless Integration
The tiny size of the T425 Series SSDs enables easy integration into small, fanless devices where internal space is limited. From in-vehicle systems and handheld scanners to medical equipment and industrial PCs, these SSDs allow seamless upgrades without compromising capacity or performance.

Cervoz Embraces Edge Computing with its M.2 Compact Solutions

Seizing the Edge: Cervoz Adapts to Shifting Data Landscape—The rapid emergence of technologies like AIoT and 5G and their demand for high-speed data processing has accelerated the data transition from the cloud to the edge. This shift exposes data to unpredictable environments with extreme temperature variations, vibrations, and space constraints, making it critical for edge devices to thrive in these settings. Cervoz strategically targets the blooming edge computing sector by introducing an extensive array of compact product lines, enhancing its existing SSDs, DRAM, and Modular Expansion Cards to meet the unique needs of edge computing.

Cervoz Reveals NVMe M.2 SSDs and Connectivity Solutions to Power the Edge
Cervoz introduces its latest compact PCIe Gen3 x2 SSD offerings, the T421 M.2 2242 (B+M key) and T425 M.2 2230 (A+E key). Their space-efficient design and low power consumption deliver exceptional performance, catering to the storage needs of fanless embedded PCs and motherboards for purpose-built edge applications. Cervoz is also leading the way in developing connectivity solutions, including Ethernet, Wi-Fi, Serial, USB, and CAN Bus, all available in M.2 2230 (A+E key) and M.2 2242/2260/2280 (B+M) form factors. The M.2 (B+M key) 2242/2260/2280 card is a versatile three-in-one solution designed for maximum adaptability: while it initially comes in a 2280 form factor, it can easily be adjusted to fit 2260 or 2242 sizes, offering an effortless upgrade of existing systems without sacrificing connection capability, especially in edge devices.

Edged Energy Launches Four Ultra-Efficient AI-Ready Data Centers in USA

Edged Energy, a subsidiary of Endeavour devoted to carbon neutral data center infrastructure, announced today the launch of its first four U.S. data centers, all designed for today's high-density AI workloads and equipped with advanced waterless cooling and ultra-efficient energy systems. The facilities will bring more than 300 MW of critical capacity with an industry-leading average Power Usage Effectiveness (PUE) of 1.15 portfolio-wide. Edged has nearly a dozen new data centers operating or under construction across Europe and North America and a gigawatt-scale project pipeline.
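
For context on what that PUE figure means (a simple worked example; the IT loads and the comparison PUE below are hypothetical, chosen only for illustration):

# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# A PUE of 1.15 means only 15% overhead (cooling, power conversion, lighting)
# on top of the IT load. The 100 MW IT load and 1.60 comparison are hypothetical.

it_load_mw = 100.0
overhead_at_1_15 = it_load_mw * (1.15 - 1.0)      # 15 MW of non-IT power
overhead_at_1_60 = it_load_mw * (1.60 - 1.0)      # a hypothetical older facility

print(f"Overhead at PUE 1.15: {overhead_at_1_15:.0f} MW")
print(f"Overhead at PUE 1.60: {overhead_at_1_60:.0f} MW")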

The first phase of this U.S. expansion includes a 168 MW campus in Atlanta, a 96 MW campus in the Chicago area, 36 MW in Phoenix and 24 MW in Kansas City. At a time of growing water scarcity where rivers, aquifers and watersheds are at dangerously low levels, it is more critical than ever that IT infrastructure conserve precious water resources. The new Edged facilities are expected to save more than 1.2 billion gallons of water each year compared to conventional data centers. "The rise of AI and machine learning is requiring more power, and often more water, to cool outdated servers. While traditional data centers struggle to adapt, Edged facilities are ready for the advanced computing of today and tomorrow without consuming any water for cooling," said Bryant Farland, Chief Executive Officer for Edged. "Sustainability is at the core of our platform. It is why our data centers are uniquely optimized for energy efficiency and water conservation. We are excited to be partnering with local communities to bring future-proof solutions to a growing digital economy."

EdgeCortix Foresees Barrier Breaking Efficient Next-gen Edge AI Chips

EdgeCortix, the Japan-based fabless semiconductor company focused on energy-efficient AI processing, predicts that 2024 is set to be a watershed moment for Edge AI. In its predictions for the year, EdgeCortix expects the Edge AI landscape to be transformed: next-gen AI chips, hybrid edge-cloud architectures, software supremacy, and the rise of new generative-AI applications "at the edge" will revolutionize the world of business as we know it.

1. Next-Gen efficient Edge AI Chips will break barriers:
Prepare for a hardware uprising! EdgeCortix foresees next-gen energy-efficient AI chips that not only break the barriers of processing power but redefine them. These chips are not just powerful; they are customized for multi-modal generative AI and efficient language models, enabling cutting-edge AI capabilities at low power for a whole new spectrum of applications.

Intel Developing Efficient Solution for Path Tracing on Integrated GPUs

Intel's software engineers are working on path-traced light simulation and conducting neural graphics research, as documented in a recent company news article, with an ambition to create a more efficient solution for integrated graphics cards. The company's Graphics Research Organization is set to present their path-traced optimizations at SIGGRAPH 2023. Their papers have been showcased at recent EGSR and HPG events. The team is aiming to get iGPUs running path-tracing in real time, by reducing the number of calculations required to simulate light bounces.

The article covers three different techniques, all designed to improve GPU performance: "Across the process of path tracing, the research presented in these papers demonstrates improvements in efficiency in path tracing's main building blocks, namely ray tracing, shading, and sampling. These are important components to make photorealistic rendering with path tracing available on more affordable GPUs, such as Intel Arc GPUs, and a step toward real-time performance on integrated GPUs." Although there is an emphasis on in-house products in the article, Intel's "open source-first mindset" hints that their R&D could be shared with others—NVIDIA and AMD are likely still struggling to make ray tracing practical on their modest graphics card models.
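
One classic, generic way to reduce the number of calculations per light path is probabilistic path termination ("Russian roulette"). The sketch below illustrates that idea only; it is not claimed to be Intel's specific optimization, and the reflectance value and termination threshold are hypothetical:

# Generic illustration of cutting path-tracing work: Russian roulette termination.
# Instead of tracing every bounce to a fixed depth, paths are killed early with a
# probability tied to how much light they can still contribute; survivors are
# reweighted so the image estimate stays unbiased.
import random

def trace_path(throughput: float, depth: int, max_depth: int = 16) -> int:
    """Return how many bounces were actually evaluated for one path."""
    bounces = 0
    while depth < max_depth:
        bounces += 1
        throughput *= 0.7                      # hypothetical surface reflectance
        survive_p = min(1.0, max(0.05, throughput))
        if random.random() > survive_p:        # terminate dim paths early
            break
        throughput /= survive_p                # reweight survivors to stay unbiased
        depth += 1
    return bounces

random.seed(0)
avg = sum(trace_path(1.0, 0) for _ in range(10_000)) / 10_000
print(f"Average bounces per path: {avg:.1f} (vs. a fixed 16-bounce budget)")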

CXL Memory Pooling will Save Millions in DRAM Cost

Hyperscalers such as Microsoft, Google, and Amazon all run their cloud divisions with a specific goal: to provide their hardware to someone else in a form called an instance and have the user pay for it by the hour. However, instances are usually bound to specific CPU and memory configurations that you cannot tailor yourself; you can only choose from the few options listed. For example, selecting one virtual CPU core might come with 2 GB of RAM, and while you can scale the core count as high as you want, the allocated RAM doubles along with it even if you do not need it. When renting an instance, the allocated CPU cores and memory are yours until the instance is turned off.

This is precisely the problem hyperscalers are grappling with: many instances never fully utilize their DRAM, making data center usage inefficient. Microsoft Azure, one of the largest cloud providers, measured that 50% of all VMs never touch 50% of their rented memory. That memory is stranded inside a rented VM, unusable for anything else. As Azure's researchers put it:
At Azure, we find that a major contributor to DRAM inefficiency is platform-level memory stranding. Memory stranding occurs when a server's cores are fully rented to virtual machines (VMs), but unrented memory remains. With the cores exhausted, the remaining memory is unrentable on its own, and is thus stranded. Surprisingly, we find that up to 25% of DRAM may become stranded at any given moment.
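
A toy illustration of how stranding arises from fixed core-to-memory ratios (a hedged sketch; the server and VM shapes below are hypothetical, not Azure fleet data):

# Toy model of platform-level memory stranding: once a server's cores are fully
# rented, any memory left over cannot be rented on its own and is "stranded".

SERVER_CORES, SERVER_MEM_GB = 64, 256

def stranded_memory(vms: list[tuple[int, int]]) -> int:
    """vms: list of (cores, mem_gb) placed on one server."""
    used_cores = sum(c for c, _ in vms)
    used_mem = sum(m for _, m in vms)
    if used_cores >= SERVER_CORES:            # cores exhausted
        return SERVER_MEM_GB - used_mem       # leftover memory is stranded
    return 0

# 16 small VMs at 4 cores / 8 GB each fill all 64 cores but only 128 GB of RAM.
vms = [(4, 8)] * 16
print(f"Stranded memory: {stranded_memory(vms)} GB of {SERVER_MEM_GB} GB")  # 128 GB

# CXL memory pooling aims to let that leftover DRAM be lent to other servers
# instead of sitting idle behind exhausted cores.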

NVIDIA PrefixRL Model Designs 25% Smaller Circuits, Making GPUs More Efficient

When designing integrated circuits, engineers aim to produce an efficient design that is easier to manufacture. If they manage to keep the circuit size down, the cost of manufacturing that circuit also goes down. NVIDIA has posted on its technical blog a description of a technique where the company uses an artificial intelligence model called PrefixRL. Using deep reinforcement learning, NVIDIA uses the PrefixRL model to outperform traditional EDA (Electronic Design Automation) tools from major vendors such as Cadence, Synopsys, or Siemens/Mentor. EDA vendors usually implement their own in-house AI solutions for silicon placement and routing (PnR); however, NVIDIA's PrefixRL solution seems to be doing wonders in the company's workflow.

The goal of PrefixRL is a deep reinforcement learning model that keeps latency on par with the EDA tool's PnR result while achieving a smaller die area. According to the technical blog, the latest Hopper H100 GPU architecture uses 13,000 instances of arithmetic circuits designed by the PrefixRL AI model. NVIDIA produced a model that outputs a 25% smaller circuit than comparable EDA output, all while achieving similar or better latency. The original post compares a 64-bit adder design made by PrefixRL with the same design made by an industry-leading EDA tool.
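
PrefixRL operates on parallel prefix circuits, the family that includes fast adder carry networks. As a generic, hedged illustration of what such a circuit computes (a textbook formulation, not NVIDIA's designs or training code), an adder's carries can be expressed as a prefix scan over (generate, propagate) pairs:

# Generic parallel-prefix adder math (the circuit family PrefixRL optimizes).
# Each bit i produces generate g_i = a_i & b_i and propagate p_i = a_i ^ b_i;
# carries come from an associative "prefix" combine of (g, p) pairs. Different
# prefix-tree shapes trade area for delay, which is the design space an RL
# agent can explore.

def combine(left: tuple[int, int], right: tuple[int, int]) -> tuple[int, int]:
    """Associative (generate, propagate) combine operator."""
    gl, pl = left
    gr, pr = right
    return gr | (pr & gl), pr & pl

def prefix_add(a: int, b: int, width: int = 64) -> int:
    gp = [((a >> i) & (b >> i) & 1, ((a >> i) ^ (b >> i)) & 1) for i in range(width)]
    carries = [0]                                 # carry into bit 0
    acc = gp[0]
    for i in range(1, width):
        carries.append(acc[0])                    # carry into bit i
        acc = combine(acc, gp[i])                 # serial scan; prefix trees parallelize this
    s = 0
    for i in range(width):
        s |= ((((a >> i) ^ (b >> i)) ^ carries[i]) & 1) << i
    return s

assert prefix_add(123_456_789, 987_654_321) == 123_456_789 + 987_654_321
print("prefix adder matches built-in addition")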

ASRock Industrial Announces New Range of Industrial Motherboards with 12th Gen Intel Core Processors

ASRock Industrial launches a new range of industrial motherboards powered by 12th Gen Intel Core processors (Alder Lake-S) with up to 16 cores and 24 threads, supporting the new Intel 600 Series W680, Q670, and H610 chipsets. The boards feature high computing power with a performance hybrid architecture and enhanced AI capabilities, plus rich I/Os and expansion options: up to quad 4K@60 Hz displays, USB 3.2 Gen2x2 (20 Gbit/s), triple Intel 2.5 GbE LANs with real-time TSN, multiple M.2 Key M slots, ECC memory, TPM 2.0, and wide voltage support. The new series covers comprehensive form factors, including industrial Mini-ITX, Micro-ATX, and ATX motherboards for diverse applications such as factory automation, kiosks, digital signage, smart cities, medical, and Edge AIoT.

congatec launches 10 new COM-HPC and COM Express Computer-on-Modules with 12th Gen Intel Core processors

congatec - a leading vendor of embedded and edge computing technology - introduces the 12th Generation Intel Core mobile and desktop processors (formerly code-named Alder Lake) on 10 new COM-HPC and COM Express Computer-on-Modules. Featuring the latest high-performance cores from Intel, the new modules in COM-HPC Size A and C as well as COM Express Type 6 form factors offer major performance gains and improvements for the world of embedded and edge computing systems. Most impressive is the fact that engineers can now leverage Intel's innovative performance hybrid architecture. Offering up to 14 cores/20 threads on BGA and 16 cores/24 threads on desktop variants (LGA mounted), 12th Gen Intel Core processors provide a quantum leap in multitasking and scalability. Next-gen IoT and edge applications benefit from up to 6 or 8 (BGA/LGA) optimized Performance-cores (P-cores) plus up to 8 low-power Efficient-cores (E-cores) and DDR5 memory support to accelerate multithreaded applications and execute background tasks more efficiently.

Apple Introduces M1 Pro and M1 Max: the Most Powerful Chips Apple Has Ever Built

Apple today announced M1 Pro and M1 Max, the next breakthrough chips for the Mac. Scaling up M1's transformational architecture, M1 Pro offers amazing performance with industry-leading power efficiency, while M1 Max takes these capabilities to new heights. The CPU in M1 Pro and M1 Max delivers up to 70 percent faster CPU performance than M1, so tasks like compiling projects in Xcode are faster than ever. The GPU in M1 Pro is up to 2x faster than M1, while M1 Max is up to an astonishing 4x faster than M1, allowing pro users to fly through the most demanding graphics workflows.

M1 Pro and M1 Max introduce a system-on-a-chip (SoC) architecture to pro systems for the first time. The chips feature fast unified memory, industry-leading performance per watt, and incredible power efficiency, along with increased memory bandwidth and capacity. M1 Pro offers up to 200 GB/s of memory bandwidth with support for up to 32 GB of unified memory. M1 Max delivers up to 400 GB/s of memory bandwidth—2x that of M1 Pro and nearly 6x that of M1—and support for up to 64 GB of unified memory. And while the latest PC laptops top out at 16 GB of graphics memory, having this huge amount of memory enables graphics-intensive workflows previously unimaginable on a notebook. The efficient architecture of M1 Pro and M1 Max means they deliver the same level of performance whether MacBook Pro is plugged in or using the battery. M1 Pro and M1 Max also feature enhanced media engines with dedicated ProRes accelerators specifically for pro video processing. M1 Pro and M1 Max are by far the most powerful chips Apple has ever built.
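
A quick arithmetic note on the bandwidth multiples quoted above (the implied M1 figure is derived purely from those ratios, not an Apple specification):

# Check of the memory-bandwidth multiples quoted above.
m1_pro_gbps = 200
m1_max_gbps = 400

print(f"M1 Max vs. M1 Pro: {m1_max_gbps / m1_pro_gbps:.1f}x")   # 2.0x, as stated
# "Nearly 6x that of M1" implies the original M1 sits near 400 / 6 GB/s.
print(f"Implied M1 bandwidth: ~{m1_max_gbps / 6:.0f} GB/s")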

​LIAN LI Launches Fully Modular 750W SFX PSU - SP750

LIAN LI Industrial Co. Ltd., a leading manufacturer of aluminium chassis and PC accessories, announces the SP750, a fully modular SFX size PSU. Perfect for power-hungry small form factor builds, the new 750 watt PSU features reliable Japanese capacitors, an 80 PLUS Gold certification, and a 5-year warranty. Built in a sleek and classic brushed aluminium housing with braided cables, the SP750 also runs quietly with its ZERO RPM mode under 40% load.

LIAN LI presents an elegant, classic-looking SFX PSU with a brushed aluminium finish and braided modular cables, giving users the flexibility to connect only the cables needed to power their PC components. The flexible, braided motherboard, CPU, and PCIe cables further enhance the system's aesthetics.

BIOSTAR iMiner A578X8D Now Available Stateside

BIOSTAR iMiner A578X8D, an easy-to-set-up all-in-one solution for home and professional miners, is now available on Newegg for US$3,499. The BIOSTAR iMiner A578X8D is the world's first riser-card-free, all-in-one crypto mining solution, offering ultra-mining flexibility for different cryptocurrencies (Ethereum, Monero, Bitcoin Gold, Zcash, etc.). This plug-and-mine system requires no additional hardware installation; simply power on to start mining. The BIOSTAR iMiner A578X8D supports Windows 10, Linux, and ethOS for different types of miners, and as an exclusive bundle with Newegg, it also comes with an optional ethOS mining operating system so users can set up and start mining immediately.

EK Launches Full-Cover Water Blocks for EVGA FTW2 Graphics Cards

EK Water Blocks, the Slovenia-based premium computer liquid cooling gear manufacturer, is releasing two EK-FC1080 GTX FTW2 water blocks that are compatible with multiple EVGA GeForce GTX 1080 and 1070 FTW2 series graphics cards. This kind of efficient cooling allows a high-end graphics card to reach higher boost clocks, providing more performance during gaming or other GPU-intensive tasks.