News Posts matching #Linux


Kubuntu Focus Team Announces the 3rd Gen M2 Linux Mobile Workstation

The Kubuntu Focus Team announces the availability of the third-generation M2 Linux mobile workstation with multiple performance enhancements. RTX 3080 and RTX 3070 models are in stock now. RTX 3060 models can be reserved now and ship in the first week of November. The thin-and-light M2 laptop is a superb choice for anyone looking for the best out-of-the-box Linux experience with the most powerful mobile hardware. Customers include ML scientists, developers, and creators. Improvements to the third generation include:
  • Cooler and faster 11th Gen Intel Core i7-11800H. Geekbench scores improve by 19% and 29%.
  • Double the iGPU performance with Intel Iris Xe 32EU Graphics.
  • Increased RAM speed from 2933 to 3200 MHz, up to 64 GB dual-channel.
  • BIOS switchable USB-C GPU output.
  • Upgrade from Thunderbolt 3 to version 4.

Update for "Yet Another Hardware Trainwreck" Lands in Linux Kernel as an Urgent Fix for x86 Processors

The x86 instruction set architecture has experienced many issues, and today's announcement is no exception. Yesterday morning, the Linux kernel received an urgent set of patches that are supposed to fix "yet another hardware trainwreck," as kernel developer Thomas Gleixner describes it. This time, the problem concerns the High Precision Event Timer (HPET), which stops counting once x86 processors reach the PC10 idle state. When that happens, the timer stops even while the OS/kernel is using it, which could potentially open a hole in the processor that an attacker can exploit. The problem has been known for quite a while; as far back as 2019, the Linux kernel began removing HPET functionality on some Intel processors.

This patch set for Linux kernel version 5.15-rc5 is marked as an urgent update. Reliable hardware timers and interrupts are a must for the proper functioning of a processor. A hardware fix will not arrive any time soon, so the Linux kernel has to adapt and work around the problem in software. According to Mr. Gleixner, "The probability that this problem is going to be solved in the forseeable future is close to zero, so the kernel has to be cluttered with heuristics to keep up with the ever growing amount of hardware and firmware trainwrecks. Hopefully some day hardware people will understand that the approach of "This can be fixed in software" is not sustainable. Hope dies last..."
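
As a rough illustration of the kind of heuristic involved, the sketch below is a user-space analogue of a clocksource watchdog: it samples two independent clocks around an idle period and flags the clock under test if its measured interval drifts too far from the reference. This is only an approximation of the idea; the kernel's real logic lives in its clocksource and timer code and is considerably more involved.

```c
/* User-space analogue of a clocksource-watchdog heuristic (illustrative only). */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>

static int64_t ns_now(clockid_t id)
{
    struct timespec ts;
    clock_gettime(id, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
    const int64_t margin_ns = 5 * 1000000LL;            /* 5 ms tolerance */
    int64_t ref_start  = ns_now(CLOCK_MONOTONIC_RAW);   /* reference clock (stand-in for TSC) */
    int64_t test_start = ns_now(CLOCK_MONOTONIC);       /* clock under test (stand-in for HPET) */

    usleep(100 * 1000);                                  /* idle period */

    int64_t ref_delta  = ns_now(CLOCK_MONOTONIC_RAW) - ref_start;
    int64_t test_delta = ns_now(CLOCK_MONOTONIC)     - test_start;
    int64_t skew = test_delta - ref_delta;

    if (skew > margin_ns || skew < -margin_ns)
        printf("clock under test skewed by %lld ns -> mark unstable, switch clocksource\n",
               (long long)skew);
    else
        printf("clock under test tracked the reference (skew %lld ns)\n", (long long)skew);
    return 0;
}
```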

Is Intel Working on CPU-Features-as-a-Service Xeon Processors?

Some of you might remember Intel's Upgrade Service, aka software-locked CPUs, which launched back in 2010 with the Pentium G6951: for a mere $50, you could unlock an extra 1 MB of cache and Hyper-Threading. Well, it seems Intel is working on something similar, but for Xeon CPUs this time around, although the exact details aren't yet clear.

Phoronix spotted a Linux patch on GitHub for something called Intel Software Defined Silicon, or SDSi for short. It's clearly meant for Xeon CPUs, and the GitHub page mentions that SDSi "allows the configuration of additional CPU features through a license activation process." There's very little to go on beyond this, but it's not hard to draw parallels with Intel's Upgrade Service from last decade, only this time Intel is targeting its business customers rather than consumers.
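
Nothing about the interface is public beyond the patch itself, but conceptually a license-activation flow could look like the hedged sketch below: a small userspace tool that pushes a signed license blob into a driver-provided attribute, with the driver doing the validation. Every path and name here is hypothetical, meant only to illustrate the concept, not the actual SDSi interface.

```c
/* Hypothetical sketch of a userspace feature-licensing tool.
 * File names and attribute paths are invented for illustration. */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <license-blob> <driver-attribute>\n", argv[0]);
        return 1;
    }
    FILE *blob = fopen(argv[1], "rb");
    FILE *attr = fopen(argv[2], "wb");   /* e.g. a per-socket attribute under /sys (hypothetical) */
    if (!blob || !attr) { perror("fopen"); return 1; }

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, blob)) > 0)
        fwrite(buf, 1, n, attr);         /* the driver would validate and apply the license */

    fclose(blob);
    fclose(attr);
    return 0;
}
```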

Epic Games Announces Linux Support for Easy Anti-Cheat

When Valve claimed that its Linux-powered Steam Deck would be able to run any game from the Steam library, most of us assumed this was simply a statement about the power of the device. We assumed that the Linux OS wouldn't be compatible with certain games, such as those using Easy Anti-Cheat (EAC) or BattlEye; however, Valve confirmed that it would work with those companies to add support. This has culminated in Epic Games recently introducing Linux and Mac support for its EAC software, noting the Steam Deck in its announcement.

The addition of Linux support has been specifically designed to work with the Wine and Proton compatibility layers, ensuring that all games using the software should run correctly. This means that titles such as Apex Legends, Dead by Daylight, War Thunder, 7 Days to Die, Fall Guys, Black Desert, Hunt: Showdown, Paladins, and Halo: The Master Chief Collection can now be easily updated to include Linux support. The rival BattlEye software isn't currently available for Linux, but its CEO has confirmed that support will be added, with the first game featuring it coming soon. These moves will drastically improve the Linux gaming landscape and will hopefully encourage more developers to support the platform natively.

Intel Prepares Seamless Updating of Firmware Without a Need for Reboot

Intel has been working on a technology that will improve the lives of all users who have an Intel-based processor in their system. According to a recent round of patches for the Linux kernel, Intel's engineers have been developing a feature called Intel Seamless Update, which promises to deliver system firmware updates without a reboot. Until now, firmware updates have required a reboot to apply, which takes systems down and slows infrastructure considerably, as an update can last several minutes during which the machine is rebooting and cannot be used.

Intel's idea is a technology that updates system firmware, such as UEFI, at runtime. That means the system can apply firmware patches without ever needing a reboot, minimizing downtime. This is especially valuable for customers with very strict service level agreements (SLAs) around downtime, where close to 100% uptime is required (true 100% uptime being generally impossible). An example would be medical server infrastructure, which has to be constantly available. Using this technology, such systems could update their firmware and stay online non-stop, perhaps without ever needing to reboot. The feature is expected to arrive in time for the launch of Intel "Sapphire Rapids" Xeon processors.
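
To put the downtime argument in perspective, the short sketch below computes the yearly downtime budget implied by a few common uptime targets and compares it against an assumed five-minute firmware-update reboot; the reboot duration is an illustrative assumption, not an Intel figure.

```c
/* Back-of-the-envelope: yearly downtime budget for a given uptime SLA
 * versus an assumed multi-minute firmware-update reboot. */
#include <stdio.h>

int main(void)
{
    const double minutes_per_year = 365.0 * 24.0 * 60.0;
    const double slas[] = { 99.9, 99.99, 99.999 };
    const double reboot_minutes = 5.0;   /* assumed reboot duration */

    for (int i = 0; i < 3; i++) {
        double budget = minutes_per_year * (1.0 - slas[i] / 100.0);
        printf("%.3f%% uptime allows %.1f min/year; a %.0f-minute reboot uses %.0f%% of that budget\n",
               slas[i], budget, reboot_minutes, 100.0 * reboot_minutes / budget);
    }
    return 0;
}
```

At the "five nines" level the entire yearly budget is only around five minutes, which is why a single reboot-based firmware update can consume nearly all of it.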

IBM Unveils New Generation of IBM Power Servers for Frictionless, Scalable Hybrid Cloud

IBM (NYSE: IBM) today announced the new IBM Power E1080 server, the first in a new family of servers based on the new IBM Power10 processor, designed specifically for hybrid cloud environments. The IBM Power10-equipped E1080 server is engineered to be one of the most secured server platforms and is designed to help clients operate a secured, frictionless hybrid cloud experience across their entire IT infrastructure.

The IBM Power E1080 server is launching at a critical time for IT. As organizations around the world continue to adapt to unpredictable changes in consumer behaviors and needs, they need a platform that can deliver their applications and insights securely where and when they need them. The IBM Institute for Business Value's 2021 CEO Study found that, of the 3,000 CEOs surveyed, 56% emphasized the need to enhance operational agility and flexibility when asked what they'll most aggressively pursue over the next two to three years.

Tachyum Boots Linux on Prodigy FPGA

Tachyum Inc. today announced that it has successfully executed the Linux boot process on the field-programmable gate array (FPGA) prototype of its Prodigy Universal Processor, two months after taking delivery of the IO motherboard from manufacturing. This achievement proves the stability of the Prodigy emulation system and allows the company to move forward with additional testing before advancing to tape-out.

Tachyum engineers were able to perform the Linux boot, execute a short user-mode program, and shut down the system on the fully functional FPGA emulation system. Not only does this successful test prove that the basic processor is stable, but also that interrupts, exceptions, timing, and system-mode transitions work as well. This is a key milestone that dramatically reduces risk: booting and running a large, complex piece of software like Linux reliably on the Tachyum FPGA processor prototype shows that verification and hardware stability are past the most difficult turning point, and that verification and testing should complete successfully in the coming months. Designers are now shifting their attention to debug and verification, running hundreds of trillions of test cycles over the next few months, and running large-scale user-mode applications with compatibility testing to bring the processor to production quality.

Valve's Steam Hardware Survey Shows Progress for Gaming on Linux, Breaking 1% Marketshare

When Valve debuted Proton for Steam on Linux, the company committed to enabling Linux gamers across the globe to play the latest Windows games on their Linux distributions. Since the announcement, the share of people who game on Linux stagnated for a while. When Proton was announced, the Linux gaming market share jumped to 2%, according to Valve's survey; later, however, it dropped and hovered around the 0.8-0.9% mark. Today, according to the latest Steam Hardware Survey data, the Linux gaming market share reached 1.0% in July, a +0.14% increase. What drove the uptick is unknown, but the new trend is interesting to watch. You can check out the Steam Hardware Survey data here.

Linux Foundation to Form New Open 3D Foundation

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced an intent to form the Open 3D Foundation to accelerate developer collaboration on 3D game and simulation technology. The Open 3D Foundation will support open source projects that advance capabilities related to 3D graphics, rendering, authoring, and development. As the first project governed by the new foundation, Amazon Web Services, Inc. (AWS) is contributing an updated version of the Amazon Lumberyard game engine as the Open 3D Engine (O3DE), under the permissive Apache 2.0 license. The Open 3D Engine enables developers and content creators to build 3D experiences unencumbered by commercial terms and will provide the support and infrastructure of an open source community through forums, code repositories, and developer events. A developer preview of O3DE is available on GitHub today. For more information and/or to contribute, please visit: https://o3de.org

3D engines are used to create a range of virtual experiences, including games and simulations, by providing capabilities such as 3D rendering, content authoring tools, animation, physics systems, and asset processing. Many developers are seeking ways to build their intellectual property on top of an open source engine where the roadmap is highly visible, openly governed, and collaborative to the community as a whole. More developers look to be able to create or augment their current technological foundations with highly collaborative solutions that can be used in any development environment. O3DE introduces a new ecosystem for developers and content creators to innovate, build, share, and distribute immersive 3D worlds that will inspire their users with rich experiences that bring the imaginations of their creators to life.

SiFive Performance P550 Core Sets New Standard as Highest Performance RISC-V Processor IP

SiFive, Inc., the industry leader in RISC-V processors and silicon solutions, today announced the new SiFive Performance family of processors. The family debuts with two new processor cores: the P270, SiFive's first Linux-capable processor with full support for the RISC-V vector extension v1.0-rc, and the SiFive Performance P550, SiFive's highest-performance processor to date. The new P550 delivers a SPECint 2006 score of 8.65/GHz, making it the highest-performance RISC-V processor available today, and comparable to existing proprietary solutions in the application-processor space.

"SiFive Performance is a significant milestone in our commitment to deliver a complete, scalable portfolio of RISC-V cores to customers in all markets who are at the vanguard of SOC design and are dissatisfied with the status quo," said Dr. Yunsup Lee, Co-Founder and CTO of SiFive. "These two new products cover new performance points and a wide range of application areas, from efficient vector processors that easily displace yesterday's SIMD architectures, to the bleeding edge that the P550 represents. SiFive is proud to set the standard for RISC-V processing and is ready to deliver these products to customers today."

ASUSTOR Launches AS-T10G2 10 Gigabit Ethernet Card

The all-new AS-T10G2 is here, bringing increased efficiency and speed over the much-beloved AS-T10G. The AS-T10G2 uses the AQC-107 controller, which offers increased performance and lower power requirements. Using the Lockerstor 16R Pro, transfer rates were found to be up to 1127 MB/s when reading and 1124 MB/s when writing. The AS-T10G2 also supports IP, TCP, and UDP checksum offload to reduce CPU usage for a more efficient experience.

The AS-T10G2 is equipped with a 10 Gbps 8P8C (RJ-45) Ethernet port. It supports automatic switching between all major Ethernet speeds and uses four lanes of PCI Express 3.0. Pop it into an ASUSTOR NAS running ADM 4.0 or a PC to upgrade network speeds to 10-Gigabit Ethernet. The AS-T10G2 fits both full-height and half-height installations, making it compatible with almost any device featuring a PCI Express slot and ensuring affordable yet high-speed networking for both homes and businesses.

AMD Confirms CDNA2 Instinct MI200 GPU Will Feature at Least Two Dies in MCM Design

Today we've got the first genuine piece of information confirming AMD's MCM approach to CDNA2, the next-gen compute architecture meant for ML/HPC/exascale computing. This comes courtesy of a Linux kernel update, where AMD engineers annotated the latest kernel patch with considerations specific to their upcoming Aldebaran, CDNA2-based compute cards. Namely, the engineers clarify the existence of a "Die0" and a "Die1", where power-data fetching should be allocated to Die0 of the accelerator card, and the power limit shouldn't be set on the secondary die.

This confirms that Aldebaran will be made of at least two CDNA2 compute dies and, as is (almost) always the case in computing, one of them appears to be tasked with general administration of both. It is unclear as yet whether the HBM2 memory controller will be allocated to the primary die, or whether there will be an external I/O die (much like in Zen) that AMD can leverage for off-chip communication. AMD's approach to CDNA2 will eventually find its way, in an updated form, into AMD's consumer-geared next-generation graphics architecture, RDNA3.
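
A minimal sketch of the policy described in the patch annotations might look like the following: package power is read from Die0 only, and attempts to set a power limit on the secondary die are rejected. The structure and function names are invented for illustration; the actual logic lives in the amdgpu Linux driver.

```c
/* Illustrative sketch: primary die handles power telemetry and limits. */
#include <stdio.h>
#include <stdbool.h>

struct die {
    int index;       /* 0 = primary, 1 = secondary */
    int power_uw;    /* telemetry, microwatts */
};

static int read_package_power(const struct die *dies, int count)
{
    (void)count;
    return dies[0].power_uw;   /* power data is fetched from Die0 only */
}

static bool set_power_limit(struct die *d, int limit_uw)
{
    if (d->index != 0) {
        fprintf(stderr, "power limit must not be set on the secondary die\n");
        return false;
    }
    printf("applied %d uW limit on die %d\n", limit_uw, d->index);
    return true;
}

int main(void)
{
    struct die dies[2] = { { 0, 250000000 }, { 1, 245000000 } };
    printf("package power: %d uW\n", read_package_power(dies, 2));
    set_power_limit(&dies[1], 300000000);   /* rejected */
    set_power_limit(&dies[0], 300000000);   /* accepted */
    return 0;
}
```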

Tachyum Receives Prodigy FPGA DDR-IO Motherboard to Create Full System Emulation

Tachyum Inc. today announced that it has taken delivery of an IO motherboard for its Prodigy Universal Processor hardware emulator from manufacturing. This provides the company with a complete system prototype integrating CPU, memory, PCI Express, networking and BMC management subsystems when connected to the previously announced field-programmable gate array (FPGA) emulation system board.

The Tachyum Prodigy FPGA DDR-IO Board connects to the Prodigy FPGA CPU Board to provide memory and IO connectivity for the FPGA-based CPU tiles. The fully functional Prodigy emulation system is now ready for further build-out, including Linux boot and the incorporation of additional test chips. It is available to customers for early testing and software development ahead of a full four-socket reference-design motherboard, which is expected to be available in Q4 2021.

Valve Working with NVIDIA to Bring DLSS Support to Linux through Proton

NVIDIA has announced that it is partnering with Valve to bring its Deep Learning Super Sampling (DLSS) technology to Linux via Steam Proton. This will allow Linux gamers with an RTX GPU to take advantage of the AI upscaler to improve their framerates in games through Steam. Proton is an open-source tool from Valve that allows Windows games to run on Linux; it is built into the Linux Steam Client Beta. This news comes after AMD announced its open-source DLSS competitor, FidelityFX Super Resolution, which supports both AMD and NVIDIA graphics cards.
NVIDIA: NVIDIA, Valve, and the Linux gaming community are collaborating to bring NVIDIA DLSS to Proton - Linux gamers will be able to use the dedicated AI cores on GeForce RTX GPUs to boost frame rates for their favorite Windows Games running on the Linux operating system. Support for Vulkan titles is coming this month with DirectX support coming in the Fall.

Apple M1 Processor Receives Preliminary Support in Linux Kernel

Apple's M1 custom processor has been widely adopted by the developer community. However, it is exactly this part of the M1 customer base that wants something more. For months, various developers have been working to bring the M1 processor to the Linux kernel, which has today received preliminary support for it. The latest 5.13-rc1 release of the Linux kernel is out, and it adds basic functionality for the M1. For now this is basic bring-up, and much more remains to be added. For example, GPU support is still not done - not even half-done. The M1 SoC is now able to boot, but it will take a lot more work to get the full SoC working correctly.

Linus Torvalds, the creator of the Linux kernel, highlights that "This was - as expected - a fairly big merge window, but things seem to have proceeded fairly smoothly. Famous last words." According to Hector Martin, one of the main drivers of Linux on M1, "This is just basic bring-up, but it lays a solid foundation and is probably the most challenging up-streaming step we'll have to do, at least until the GPU stuff is done." So there is still a long way to go before the M1 processor takes a full Linux kernel for a spin and the software becomes usable.

AAEON Announces Official Support for NVIDIA Ubuntu, Jetpack 4.5 and Secureboot on BOXER-8200 Systems

AAEON, an industry leader in embedded AI edge systems, announces new software support for the BOXER-8200 series of embedded PCs featuring NVIDIA Jetson System-on-Modules (SOMs). AAEON has officially signed an agreement with Canonical to provide customers with the NVIDIA Ubuntu operating system pre-installed on new BOXER-8200 systems. Systems with the NVIDIA Ubuntu OS will also ship with the Jetpack 4.5 drivers and toolkit package preinstalled. Additionally, AAEON announces a new customization service to provide Secureboot to clients, in addition to other customization options.

AAEON is dedicated to delivering the most comprehensive platform solutions powered by NVIDIA Jetson SOMs. To meet the needs of their clients, AAEON has signed an agreement with Canonical to provide the official NVIDIA Ubuntu OS image on the entire range of BOXER-8200 series systems. Developers and customers who purchase new BOXER-8200 series systems can receive the system with the OS preinstalled, with no need to flash the image before starting the system up for the first time. The BOXER-8200 series includes the BOXER-822x platforms with Jetson Nano, BOXER-8240AI with Jetson AGX Xavier, BOXER-825x platforms with Jetson Xavier NX, and BOXER-823x platforms with Jetson TX2 NX (currently under development).

AMD Ryzen 5000 Series CPUs with Zen 3 Cores Could be Vulnerable to Spectre-Like Exploit

The AMD Ryzen 5000 series of processors features the new Zen 3 core design, which uses many techniques to deliver the best possible performance. One of those techniques is called Predictive Store Forwarding (PSF). According to AMD, "PSF is a hardware-based micro-architectural optimization designed to improve the performance of code execution by predicting dependencies between loads and stores." That makes PSF yet another "prediction" feature in a microprocessor that could be exploited. Just like Spectre, it could result in a vulnerability in the new processors. Speculative execution has been at the heart of much bigger problems in CPU microarchitecture design, showing that every design choice has its trade-offs.

AMD's CPU architects have found that software relying on isolation, aka "sandboxing," is most at risk. PSF predictions can sometimes miss, and it is exactly these applications that are exposed: a mispredicted dependency between a load and a store can reportedly lead to a vulnerability similar to Spectre v4. So what would a solution be? You can simply turn the feature off and be safe. Phoronix ran a suite of tests on Linux and concluded that turning PSF off costs between half a percent and one percent of performance, which is very low. You can see more of that testing here, and read AMD's whitepaper describing PSF.
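
For context, the code shape PSF speculates on is simply a store through one pointer followed closely by a load through another that may or may not alias it, as in the deliberately harmless sketch below. It shows only the pattern, not an exploit, and the compiler attribute is there merely to keep the function from being optimized away.

```c
/* Illustrative store-then-load pattern: if the two pointers alias, a predictor
 * like PSF may speculatively treat the load as independent of the store before
 * the addresses are resolved -- the class of behavior discussed above. */
#include <stdio.h>

static int __attribute__((noinline)) store_then_load(int *store_ptr, int *load_ptr, int value)
{
    *store_ptr = value;   /* store whose address may alias the load below */
    return *load_ptr;     /* load the CPU may speculatively run ahead of the store */
}

int main(void)
{
    int slot = 0;
    /* Same address passed twice: architecturally the load must return 42,
     * but speculation may briefly proceed with the stale value. */
    printf("%d\n", store_then_load(&slot, &slot, 42));
    return 0;
}
```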

NVIDIA Enables GPU Passthrough for Virtual Machines on Consumer-Grade GeForce GPUs

Editor's note: This is not a part of April Fools.

NVIDIA has long separated professional users and regular gamers with its graphics card offerings. The GeForce lineup is the gaming-oriented line, whose main task is simply to play games, display graphics, and run some basic CUDA-accelerated software. However, what happens if you start experimenting with your GPU? For example, if you are running Linux and want to spin up a virtual machine with Windows on it for gaming, you could only pass through your integrated GPU, as GeForce cards didn't allow virtual GPU passthrough. For these purposes, NVIDIA points to its professional graphics card lineups like Quadro and Tesla.

However, this feature is now arriving even for the GeForce lineup. NVIDIA has announced that the company is finally bringing basic virtual machine passthrough functionality to its gaming GPUs. While the feature represents a step in the right direction, it is still limited. For example, GeForce GPU passthrough supports only one virtual machine, and SR-IOV is still not supported on GeForce. "If you want to enable multiple virtual machines to have direct access to a single GPU or want the GPU to be able to assign virtual functions to multiple virtual machines, you will need to use NVIDIA Tesla, Quadro, or RTX enterprise GPUs," says the NVIDIA FAQ. GeForce virtualization, which is still in beta, is supported on R465 and later drivers.
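
For Linux users planning to try passthrough once the driver support is in place, a common first sanity check is confirming that the GPU sits in an IOMMU group. The small program below resolves the device's iommu_group symlink under sysfs; the PCI address is a placeholder, and the actual passthrough setup (vfio-pci binding, VM configuration) is outside its scope.

```c
/* Check that a PCI device is assigned to an IOMMU group via sysfs. */
#include <stdio.h>
#include <limits.h>
#include <unistd.h>

int main(void)
{
    const char *dev = "0000:01:00.0";   /* placeholder PCI address: substitute your GPU's (see lspci) */
    char path[PATH_MAX], target[PATH_MAX];

    snprintf(path, sizeof path, "/sys/bus/pci/devices/%s/iommu_group", dev);
    ssize_t n = readlink(path, target, sizeof target - 1);
    if (n < 0) {
        perror("readlink (is the IOMMU enabled and the address correct?)");
        return 1;
    }
    target[n] = '\0';
    printf("%s -> %s\n", dev, target);
    return 0;
}
```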
The full content from NVIDIA's website is written below.

AMD Outs 32 MB Infinity Cache on Navi 23, No Cache on Upcoming Van Gogh APUs

AMD has revealed the Infinity Cache size of the upcoming Navi 23 GPU, as well as its absence from the next-generation Van Gogh APU, which features Zen 2 cores and an RDNA GPU. The reveal comes via a new AMD patch to AMDKFD, the Linux kernel HSA driver for AMD APUs. The patch file doesn't list Infinity Cache per se, but it does specify the last-level cache (L3) for AMD's GPUs, which is essentially the same thing.

The patch reveals the L3 size for Sienna Cichlid (Navi 21), Navy Flounder (Navi 22), and Dimgrey Cavefish (Navi 23). Navi 21 features 128*1024 KB (128 MB) of Infinity Cache, the just-released Navi 22 has 96 MB, as we know, and according to the file, Navi 23 is bound to feature 32 MB of it. Considering that Van Gogh lacks Infinity Cache, it would seem that it makes use of previous-gen Navi graphics and won't leverage RDNA2, of which the Infinity Cache is a big part. It remains to be seen whether Van Gogh will materialize in an APU product lineup or whether it's a part for a specific customer. It also remains to be seen which RX products Navi 23 will power - an RX 6600 series, or a 6500 series.
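
For reference, the cache entries in the patch are expressed in KiB, which is where the 128*1024 figure for Navi 21 comes from. The small table below restates the sizes cited above and converts them to MiB; it covers only the GPUs named in this article.

```c
/* Convert the KiB cache entries cited above into MiB. */
#include <stdio.h>

struct gpu_l3 {
    const char *codename;
    unsigned    l3_kib;   /* as written in the patch: KiB */
};

int main(void)
{
    const struct gpu_l3 table[] = {
        { "Sienna Cichlid (Navi 21)",   128 * 1024 },
        { "Navy Flounder (Navi 22)",     96 * 1024 },
        { "Dimgrey Cavefish (Navi 23)",  32 * 1024 },
        { "Van Gogh (APU)",               0        },
    };

    for (unsigned i = 0; i < sizeof table / sizeof table[0]; i++)
        printf("%-28s %3u MiB L3 / Infinity Cache\n",
               table[i].codename, table[i].l3_kib / 1024);
    return 0;
}
```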

Intel "Lunar Lake" Microarchitecture Hits the Radar, Possible "Meteor Lake" Successor

Intel published Linux kernel driver patches that reference a new CPU microarchitecture, codenamed "Lunar Lake." The patch comments refer to "Lunar Lake" as a client platform, and VideoCardz predicts that it could succeed "Meteor Lake," the microarchitecture that follows the recently announced "Alder Lake."

Targeting both mobile and desktop platforms, "Alder Lake" will herald a new 1,700-pin LGA socket for client desktops and debut hybrid CPU cores in that segment. Expected to be built on a newer silicon fabrication node, such as 10 nm SuperFin, the chip will combine high-performance "Golden Cove" big cores with low-power "Gracemont" cores. Its commercial success will determine whether Intel continues the hybrid-core approach to client processors with future "Meteor Lake" and "Lunar Lake" parts, or whether it will have sorted out its foundry woes and build "Lunar Lake" with a homogeneous CPU core type. With "Alder Lake" expected to debut toward the end of 2021 and "Meteor Lake" [hopefully] by 2022, "Lunar Lake" would only follow in 2023-24.

AMD is Preparing RDNA-Based Cryptomining GPU SKUs

Back in February, NVIDIA announced GPU SKUs dedicated to cryptocurrency mining, without any graphics outputs present on the cards. Today, we are getting information that AMD is rumored to introduce its own lineup of graphics cards dedicated to cryptocurrency mining. In the latest patch for AMD's Direct Rendering Manager (DRM) code, the Linux kernel subsystem responsible for interfacing with GPUs, we see the appearance of Navi 12. This GPU SKU was not used for anything except Apple's Mac devices, in the form of the Radeon Pro 5600M. However, it seems that Navi 12 could join forces with the Navi 10 GPU SKU and become part of a line of special "blockchain" GPUs.

Back in November, popular hardware leaker KOMACHI noted that AMD is preparing three additional Radeon SKUs called Radeon RX 5700XTB, RX 5700B, and RX 5500XTB. The "B" added to the end of each name denotes the blockchain revision, made specifically for crypto-mining. As for the specifications of the upcoming mining-specific AMD GPUs, we know that both use the first-generation RDNA architecture and have 2560 Stream Processors (40 Compute Units). Memory configurations for these cards remain unknown, as AMD surely won't be fitting HBM2 stacks for mining like it did with the Navi 12 GPU. All that remains is to wait and see what AMD announces in the coming months.

Linux Gets Ported to Apple's M1-Based Devices

When Apple introduced its lineup of devices based on custom Apple Silicon, many people thought it spelled the end of further device customization and that Apple was effectively locking up its ecosystem even more. That has turned out not to be the case. Developers working on Macs often need another operating system to test their software, which usually means running virtualization software such as virtual machines to try out another OS like Linux or possibly Windows. It would be a lot easier if they could just boot that OS directly on the device, and that is exactly what today's news is about.

Researchers from Corellium, a Florida-based startup working on Arm device virtualization, have pulled off an impressive feat: they have managed to get Linux running on devices based on Apple's M1 custom silicon. The CTO of Corellium, Chris Wade, has announced that Linux is now fully usable on the M1. The port can take full advantage of the CPU; however, there is no GPU acceleration for now, and graphics fall back to software rendering. Corellium also promises to upstream its changes to the Linux kernel itself under an open-source, permissive license model. Below you can find an image of an Apple M1 Mac Mini running the latest Ubuntu build.

Linus Torvalds Calls Out Intel for ECC Memory Market Stagnation

Linus Torvalds, the creator of the Linux kernel and the Git version-control system, has posted another one of his famous rants, this time addressing the lack of ECC memory in consumer devices. Mr. Torvalds posted his views on the Linux kernel mailing list, where he usually comments on the development of the kernel. ECC, or Error-Correcting Code memory, is a special kind of DRAM that adds check bits so that a bit flipped inside the memory can be detected and corrected before it corrupts stored data and produces false results. According to Mr. Torvalds, it is a technology that needs to be implemented everywhere, not just in the server space as Intel imagines.
Linus Torvalds: Intel has been instrumental in killing the whole ECC industry with it's horribly bad market segmentation... Intel has been detrimental to the whole industry and to users because of their bad and misguided policies wrt ECC. Seriously...The arguments against ECC were always complete and utter garbage... Now even the memory manufacturers are starting do do ECC internally because they finally owned up to the fact that they absolutely have to. And the memory manufacturers claim it's because of economics and lower power. And they are lying bastards - let me once again point to row-hammer about how those problems have existed for several generations already, but these f***** happily sold broken hardware to consumers and claimed it was an "attack", when it always was "we're cutting corners".
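
To illustrate the principle Torvalds is defending, the toy example below implements a Hamming(7,4) code, which can locate and fix any single flipped bit in a 4-bit value. Real ECC DIMMs use wider SECDED codes over 64-bit words, but the mechanism is the same: store a few extra check bits, recompute them on read, and use the mismatch pattern (the syndrome) to repair the error. This is a didactic sketch, not how any particular memory controller implements it.

```c
/* Toy Hamming(7,4) encoder/decoder: corrects any single-bit error. */
#include <stdio.h>
#include <stdint.h>

/* Codeword bit positions 1..7 hold: p1 p2 d1 p3 d2 d3 d4 */
static uint8_t encode(uint8_t data)            /* data: 4 bits */
{
    uint8_t d1 = data & 1, d2 = (data >> 1) & 1, d3 = (data >> 2) & 1, d4 = (data >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;                 /* covers positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;                 /* covers positions 2,3,6,7 */
    uint8_t p3 = d2 ^ d3 ^ d4;                 /* covers positions 4,5,6,7 */
    return (uint8_t)(p1 | p2 << 1 | d1 << 2 | p3 << 3 | d2 << 4 | d3 << 5 | d4 << 6);
}

static uint8_t decode(uint8_t code)            /* returns corrected 4-bit data */
{
    uint8_t b[8];
    for (int i = 1; i <= 7; i++) b[i] = (code >> (i - 1)) & 1;
    uint8_t s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
    uint8_t s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
    uint8_t s3 = b[4] ^ b[5] ^ b[6] ^ b[7];
    int bad = s3 * 4 + s2 * 2 + s1;            /* 0 = no error, else 1-based bit position */
    if (bad) {
        b[bad] ^= 1;
        printf("corrected a single-bit error at position %d\n", bad);
    }
    return (uint8_t)(b[3] | b[5] << 1 | b[6] << 2 | b[7] << 3);
}

int main(void)
{
    uint8_t word = 0xB;                        /* 4-bit payload */
    uint8_t stored = encode(word);
    stored ^= 1 << 4;                          /* simulate a flipped DRAM bit (position 5) */
    printf("recovered 0x%X (original 0x%X)\n", decode(stored), word);
    return 0;
}
```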

Tachyum Prodigy Software Emulation Systems Now Available for Pre-Order

Tachyum Inc. today announced that it is signing early-adopter customers for the software emulation system for its Prodigy Universal Processor. Customers may now begin native software development (i.e., using the Prodigy Instruction Set Architecture) and porting applications to run on Prodigy. Prodigy software emulation systems will be available at the end of January 2021.

Customers and partners can use Prodigy's software emulation for evaluation, development and debug, and with it, they can begin to transition existing applications that demand high performance and low power to run optimally on Prodigy processors. Pre-built systems include a Prodigy emulator, native Linux, toolchains, compilers, user mode applications, x86, ARM and RISC-V emulators. Software updates will be issued as needed.

RISC-V Comes to PC: SiFive Introduces HiFive Unmatched Development Board

The RISC-V architecture is a relatively new Instruction Set Architecture (ISA) developed at the University of California, Berkeley. Started as a "short, three-month project," the RISC-V ISA is a fifth generation of the Reduced Instruction Set Computing (RISC) philosophy. One company working on this technology and helping grow the ecosystem is SiFive. Today, it announced a big step forward that will enable developers to write and optimize even more software for this architecture and platform. Called the HiFive Unmatched, the development board marks the first entry of the RISC-V ISA into the world of personal computing, with its Mini-ITX form factor and PC-like power and I/O connectors.

The board is home to SiFive's FU740 SoC, a five-core heterogeneous, coherent processor with four SiFive U74 cores and one SiFive S7 core. This SoC is capable of running a Linux OS smoothly, giving developers a good platform to optimize for. There is 8 GB of onboard DDR4 RAM (frequency and timings unknown), a MicroSD card slot, and one PCIe 3.0 x4 M.2 slot for system storage. To connect the board to the outside world, there is one Gigabit Ethernet port. For user I/O there are four USB 3.2 Gen 1 Type-A ports (one of them a charging port) and one Micro-USB console port. To power the board, you need a power supply with a 24-pin connector. If you plan to build a PC based on the Unmatched board, you would need a standard ITX case, as the board comes in the standard Mini-ITX (170 x 170 mm) form factor. For more information, please check out SiFive's website.