News Posts matching #Linux


Valve's Steam Hardware Survey Shows Progress for Gaming on Linux, Breaking 1% Marketshare

When Valve debuted Proton for Steam on Linux, the company committed to enabling Linux gamers across the globe to play the latest Windows games on their Linux distributions. Since the announcement, the share of people who game on Linux had largely stagnated. When Proton was announced, the Linux gaming market share jumped to 2%, according to a Valve survey; later, however, it dropped and hovered around the 0.8-0.9% mark. Today, according to the latest data from the Steam Hardware Survey, the Linux gaming market share reached 1.0% in July, a +0.14% increase. What drove the spike in usage is unknown, but the new trend is interesting to watch. You can check out the Steam Hardware Survey data here.

Linux Foundation to Form New Open 3D Foundation

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced an intent to form the Open 3D Foundation to accelerate developer collaboration on 3D game and simulation technology. The Open 3D Foundation will support open source projects that advance capabilities related to 3D graphics, rendering, authoring, and development. As the first project governed by the new foundation, Amazon Web Services, Inc. (AWS) is contributing an updated version of the Amazon Lumberyard game engine as the Open 3D Engine (O3DE), under the permissive Apache 2.0 license. The Open 3D Engine enables developers and content creators to build 3D experiences unencumbered by commercial terms and will provide the support and infrastructure of an open source community through forums, code repositories, and developer events. A developer preview of O3DE is available on GitHub today. For more information and/or to contribute, please visit: https://o3de.org

3D engines are used to create a range of virtual experiences, including games and simulations, by providing capabilities such as 3D rendering, content authoring tools, animation, physics systems, and asset processing. Many developers are seeking ways to build their intellectual property on top of an open-source engine whose roadmap is highly visible, openly governed, and collaborative with the community as a whole. More and more developers want to create or augment their current technological foundations with highly collaborative solutions that can be used in any development environment. O3DE introduces a new ecosystem for developers and content creators to innovate, build, share, and distribute immersive 3D worlds that will inspire their users with rich experiences that bring the imaginations of their creators to life.

SiFive Performance P550 Core Sets New Standard as Highest Performance RISC-V Processor IP

SiFive, Inc., the industry leader in RISC-V processors and silicon solutions, today announced the launch of the new SiFive Performance family of processors. The SiFive Performance family debuts with two new processor cores: the P270, SiFive's first Linux-capable processor with full support for the RISC-V vector extension v1.0 rc, and the SiFive Performance P550, SiFive's highest-performance processor to date. The new SiFive Performance P550 delivers a SPECint 2006 score of 8.65/GHz, making it the highest-performance RISC-V processor available today, and comparable to existing proprietary solutions in the application processor space.

"SiFive Performance is a significant milestone in our commitment to deliver a complete, scalable portfolio of RISC-V cores to customers in all markets who are at the vanguard of SOC design and are dissatisfied with the status quo," said Dr. Yunsup Lee, Co-Founder and CTO of SiFive. "These two new products cover new performance points and a wide range of application areas, from efficient vector processors that easily displace yesterday's SIMD architectures, to the bleeding edge that the P550 represents. SiFive is proud to set the standard for RISC-V processing and is ready to deliver these products to customers today."

ASUSTOR Launches AS-T10G2 10 Gigabit Ethernet Card

The all-new AS-T10G2 is here, bringing increased efficiency and speed over the much-beloved AS-T10G. The AS-T10G2 uses the AQC-107 controller, which offers increased performance and lower power requirements. Using the Lockerstor 16R Pro, transfer rates were found to be up to 1127 MB/s when reading and 1124 MB/s when writing. The AS-T10G2 also supports IP, TCP, and UDP checksum offload to reduce CPU usage for a more efficient experience.

The AS-T10G2 is equipped with a 10 Gbps 8P8C (RJ-45) Ethernet port. It supports automatic switching between all major Ethernet speeds and connects over four lanes of PCI Express 3.0. Pop it into an ASUSTOR NAS running ADM 4.0 or a PC to upgrade network speeds to 10-Gigabit Ethernet. The AS-T10G2 ships with both full-height and half-height brackets, making it compatible with almost any device featuring a PCI Express slot and ensuring affordable yet high-speed networking for both homes and businesses.

AMD Confirms CDNA2 Instinct MI200 GPU Will Feature at Least Two Dies in MCM Design

Today we have the first genuine piece of information confirming AMD's MCM approach to CDNA2, the next-gen compute architecture meant for ML/HPC/exascale computing. It comes courtesy of a Linux kernel update, where AMD engineers annotated the latest kernel patch with considerations specific to their upcoming Aldebaran, CDNA2-based compute cards. Namely, the engineers clarify the existence of a "Die0" and a "Die1": power-data fetching should be directed to Die0 of the accelerator card, and the power limit should not be set on the secondary die.

This confirms that Aldebaran will be made up of at least two CDNA2 compute dies, and, as (almost) always in computing, one seems to be tasked with general administration of both. It is as yet unclear whether the HBM2 memory controller will be allocated to the primary die, or whether there will be an external I/O die (much like in Zen) that AMD can leverage for off-chip communication. AMD's approach to CDNA2 will eventually find its way (in an updated form) into AMD's consumer-geared next-generation graphics architecture with RDNA3.

Tachyum Receives Prodigy FPGA DDR-IO Motherboard to Create Full System Emulation

Tachyum Inc. today announced that it has taken delivery of an IO motherboard for its Prodigy Universal Processor hardware emulator from manufacturing. This provides the company with a complete system prototype integrating CPU, memory, PCI Express, networking and BMC management subsystems when connected to the previously announced field-programmable gate array (FPGA) emulation system board.

The Tachyum Prodigy FPGA DDR-IO Board connects to the Prodigy FPGA CPU Board to provide memory and IO connectivity for the FPGA-based CPU tiles. The fully functional Prodigy emulation system is now ready for further build out, including Linux boot and incorporation of additional test chips. It is available to customers to perform early testing and software development prior to a full four-socket reference design motherboard, which is expected to be available Q4 2021.

Valve Working with NVIDIA to Bring DLSS Support to Linux through Proton

NVIDIA has announced that it is partnering with Valve to bring its Deep Learning Super Sampling (DLSS) graphics technology to Linux via Steam Proton. This will allow Linux gamers with an RTX GPU to take advantage of the AI tool to improve their framerates in games through Steam. Proton is an open-source tool from Valve that allows Windows games to run on Linux; it is built into the Linux Steam Client Beta. This news comes after AMD announced its open-source DLSS competitor, FidelityFX Super Resolution, which supports both AMD and NVIDIA graphics cards.
NVIDIA: "NVIDIA, Valve, and the Linux gaming community are collaborating to bring NVIDIA DLSS to Proton - Linux gamers will be able to use the dedicated AI cores on GeForce RTX GPUs to boost frame rates for their favorite Windows games running on the Linux operating system. Support for Vulkan titles is coming this month, with DirectX support coming in the fall."

Apple M1 Processor Receives Preliminary Support in Linux Kernel

Apple's M1 custom processor has been widely adopted among the developer community. However, it is exactly this part of the M1 customer base that tends to want something different. For months, various developers have been working on bringing the M1 processor to the Linux kernel, which has today received preliminary support for it. The latest 5.13-rc1 release of the Linux kernel is out, and it adds basic functionality for the M1. For now, this is basic bring-up work, and much more remains to be added. GPU support, for example, is not done - not even half-done. The M1 SoC is now able to boot, but it will take a lot more work to get the full SoC working correctly.

Linus Torvalds, the Linux kernel's creator and lead developer, notes that "This was - as expected - a fairly big merge window, but things seem to have proceeded fairly smoothly. Famous last words." According to Hector Martin, one of the main drivers of the Linux-on-M1 effort, "This is just basic bring-up, but it lays a solid foundation and is probably the most challenging up-streaming step we'll have to do, at least until the GPU stuff is done." So there is still a long way to go before the M1 can take a full Linux kernel for a spin and the software becomes usable.

AAEON Announces Official Support for NVIDIA Ubuntu, Jetpack 4.5 and Secureboot on BOXER-8200 Systems

AAEON, an industry leader in embedded AI edge systems, announces new software support for the BOXER-8200 series of embedded PCs featuring NVIDIA Jetson System on Modules (SOMs). AAEON has officially signed an agreement with Canonical to provide customers with the NVIDIA Ubuntu operating system pre-installed on new BOXER-8200 systems. Systems with the NVIDIA Ubuntu OS will also ship with the Jetpack 4.5 drivers and toolkit package preinstalled. Additionally, AAEON announces a new customization service to provide Secureboot to clients, in addition to other customization options.

AAEON is dedicated to delivering the most comprehensive platform solutions powered by NVIDIA Jetson SOMs. To meet the needs of their clients, AAEON has signed an agreement with Canonical to provide the official NVIDIA Ubuntu OS image on the entire range of BOXER-8200 series systems. Developers and customers who purchase new BOXER-8200 series systems can receive the system with the OS preinstalled, with no need to flash the image before starting the system up for the first time. The BOXER-8200 series includes the BOXER-822x platforms with Jetson Nano, BOXER-8240AI with Jetson AGX Xavier, BOXER-825x platforms with Jetson Xavier NX, and BOXER-823x platforms with Jetson TX2 NX (currently under development).

AMD Ryzen 5000 Series CPUs with Zen 3 Cores Could be Vulnerable to Spectre-Like Exploit

AMD's Ryzen 5000 series of processors features the new Zen 3 core design, which uses many techniques to deliver the best possible performance. One of those techniques is called Predictive Store Forwarding (PSF). According to AMD, "PSF is a hardware-based micro-architectural optimization designed to improve the performance of code execution by predicting dependencies between loads and stores." In other words, PSF is another "prediction" feature built into the processor, and, just like the mechanisms behind Spectre, it could be exploited, resulting in a vulnerability in the new processors. Speculative execution has been at the root of much bigger problems in CPU microarchitecture design, showing that each design choice has its flaws.

AMD's CPU architects have found that software relying on isolation, aka "sandboxing," is most at risk. PSF predictions can sometimes miss, and it is exactly these applications that are exposed: a mispredicted dependency between a load and a store can lead to a vulnerability similar to Spectre v4. So what would a solution be? You could simply turn the feature off and be safe. Phoronix ran a suite of tests on Linux and concluded that turning it off costs between half a percent and one percent of performance, which is very low. You can see more of that testing here, and read AMD's whitepaper describing PSF.
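The kernel reports its current stance on this family of issues through the stable sysfs interface at /sys/devices/system/cpu/vulnerabilities. Whether a PSF-specific entry appears depends on kernel version, so the sketch below simply dumps whatever entries are present and classifies each status line:

```python
"""Inspect the kernel's view of speculative-execution mitigations.

The files under /sys/devices/system/cpu/vulnerabilities are a stable
kernel ABI; each contains a one-line status string.
"""
from pathlib import Path


def classify(status: str) -> str:
    """Reduce a kernel vulnerability status string to a coarse verdict."""
    if status.startswith("Not affected"):
        return "not-affected"
    if status.startswith("Mitigation"):
        return "mitigated"
    if status.startswith("Vulnerable"):
        return "vulnerable"
    return "unknown"


def report() -> dict[str, str]:
    """Map each vulnerability name to its verdict (empty off-Linux)."""
    vulns = Path("/sys/devices/system/cpu/vulnerabilities")
    if not vulns.is_dir():
        return {}
    return {f.name: classify(f.read_text().strip()) for f in sorted(vulns.glob("*"))}


if __name__ == "__main__":
    for name, verdict in report().items():
        print(f"{name}: {verdict}")
```

On an affected system, the Spectre v4 entry (`spec_store_bypass`) is the one most closely related to the PSF discussion above.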

NVIDIA Enables GPU Passthrough for Virtual Machines on Consumer-Grade GeForce GPUs

Editor's note: This is not a part of April Fools.

NVIDIA has long separated professional users and regular gamers with the company's graphics card offerings. The GeForce lineup of GPUs represents the gaming-oriented segment, whose main tasks are to play games, display graphics, and run some basic CUDA-accelerated software. But what would happen if you were to start experimenting with your GPU? For example, if you are running Linux and want to spin up a virtual machine with Windows on it for gaming, you would have had to use your integrated GPU, as GeForce cards did not allow virtual GPU passthrough. For those purposes, NVIDIA has its professional graphics card lineups, like Quadro and Tesla.

However, this specific feature is now arriving even in the GeForce lineup. NVIDIA has announced that it is finally bringing basic virtual machine passthrough functionality to its gaming GPUs. While the feature represents a step in the right direction, it is still limited: GeForce GPU passthrough supports only one virtual machine, and SR-IOV is still not supported on GeForce. "If you want to enable multiple virtual machines to have direct access to a single GPU or want the GPU to be able to assign virtual functions to multiple virtual machines, you will need to use NVIDIA Tesla, Quadro, or RTX enterprise GPUs," says the NVIDIA FAQ. GeForce virtualization, which is still in beta, is supported on R465 or higher drivers.
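On Linux, passthrough of this kind is built on VFIO, and the unit handed to a virtual machine is an IOMMU group, which the kernel enumerates under /sys/kernel/iommu_groups. A minimal sketch for listing them (the path-parsing helper is split out so it can be exercised without real hardware):

```python
"""Enumerate IOMMU groups, the isolation unit that VFIO-based GPU
passthrough assigns to a virtual machine. The sysfs layout is the
standard Linux one: /sys/kernel/iommu_groups/<N>/devices/<PCI address>.
"""
from pathlib import Path


def parse_device_path(path: str) -> tuple[int, str]:
    """Extract (group number, PCI address) from an iommu_groups device path."""
    parts = Path(path).parts
    i = parts.index("iommu_groups")
    return int(parts[i + 1]), parts[i + 3]


def iommu_groups() -> dict[int, list[str]]:
    """Map each IOMMU group number to the PCI devices it contains."""
    groups: dict[int, list[str]] = {}
    for dev in Path("/sys/kernel/iommu_groups").glob("*/devices/*"):
        group, addr = parse_device_path(str(dev))
        groups.setdefault(group, []).append(addr)
    return groups


if __name__ == "__main__":
    for group, devices in sorted(iommu_groups().items()):
        print(group, devices)
```

A GPU that shares its group with other devices must be passed through together with them, which is why group layout is the first thing to check before attempting passthrough.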

AMD Outs 32 MB Infinity Cache on Navi 23, No Cache on Upcoming Van Gogh APUs

AMD has revealed the Infinity Cache size for the upcoming Navi 23 GPU, as well as its absence in the next-generation Van Gogh APU, which features Zen 2 cores and an RDNA GPU. The reveal comes via a new AMD patch to amdkfd, the Linux kernel HSA driver for AMD APUs. The patch file doesn't list Infinity Cache per se, but it does specify the last-level cache (L3) for AMD's GPUs, which is essentially the same thing.

The patch reveals the L3 size for Sienna Cichlid (Navi 21), Navy Flounder (Navi 22), and Dimgrey Cavefish (Navi 23). Navi 21 features 128*1024 KB (128 MB) of Infinity Cache, the just-released Navi 22 has 96 MB, as we know, and according to the file, Navi 23 is bound to feature 32 MB. Considering that Van Gogh lacks an Infinity Cache, it would seem that it makes use of previous-gen Navi graphics and won't leverage RDNA2, of which the Infinity Cache is a big part. It remains to be seen whether Van Gogh will materialize in an APU product lineup or is a part for a specific customer. It also remains to be seen which RX product Navi 23 will power - an RX 6600 series, or 6500 series.
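Since the patch expresses each last-level cache in KB (hence the 128*1024 figure), the MB values above follow from a simple conversion; a tiny sketch using only the sizes named in the patch:

```python
"""Convert the last-level (L3) cache sizes listed in the kernel patch,
given in KB (e.g. 128*1024 for Sienna Cichlid), into the Infinity Cache
sizes in MB quoted in the article.
"""

# Sizes as the patch expresses them: KB per GPU codename.
L3_CACHE_KB = {
    "Sienna Cichlid (Navi 21)": 128 * 1024,
    "Navy Flounder (Navi 22)": 96 * 1024,
    "Dimgrey Cavefish (Navi 23)": 32 * 1024,
}


def kb_to_mb(kb: int) -> int:
    """1 MB = 1024 KB in the binary units the patch uses."""
    return kb // 1024


if __name__ == "__main__":
    for chip, kb in L3_CACHE_KB.items():
        print(f"{chip}: {kb_to_mb(kb)} MB Infinity Cache")
```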

Intel "Lunar Lake" Microarchitecture Hits the Radar, Possible "Meteor Lake" Successor

Intel published Linux kernel driver patches that reference a new CPU microarchitecture, codenamed "Lunar Lake." The patch comments refer to "Lunar Lake" as a client platform, and VideoCardz predicts that it could succeed "Meteor Lake," the microarchitecture that follows "Alder Lake," which was recently announced by Intel.

Targeting both mobile and desktop platforms, "Alder Lake" will herald a new 1,700-pin LGA socket for the client desktop and debut hybrid CPU cores on that form factor. Expected to be built on a newer silicon fabrication node, such as 10 nm SuperFin, the chip will combine high-performance "Golden Cove" big cores with "Gracemont" low-power cores. Its commercial success will determine whether Intel continues to take the hybrid-core approach to client processors with the future "Meteor Lake" and "Lunar Lake," or whether it will have sorted out its foundry woes and can build "Lunar Lake" with a homogeneous CPU core type. With "Alder Lake" expected to debut toward the end of 2021 and "Meteor Lake" [hopefully] by 2022, "Lunar Lake" would only follow by 2023-24.

AMD is Preparing RDNA-Based Cryptomining GPU SKUs

Back in February, NVIDIA announced GPU SKUs dedicated to cryptocurrency mining, without any graphics outputs present on the cards. Today, we are getting information that AMD is rumored to be introducing its own lineup of graphics cards dedicated to cryptocurrency mining. In the latest patch for AMD's Direct Rendering Manager (DRM), the subsystem of the Linux kernel responsible for interfacing with GPUs, we see the appearance of Navi 12. This GPU SKU has not been used for anything except Apple's Mac devices, in the form of the Radeon Pro 5600M. However, it seems that Navi 12 could join forces with the Navi 10 SKU and become part of a special lineup of "blockchain" GPUs.

Way back in November, the popular hardware leaker KOMACHI noted that AMD was preparing three additional Radeon SKUs called the Radeon RX 5700XTB, RX 5700B, and RX 5500XTB. The "B" appended to each name denotes the blockchain revision, made specifically for crypto-mining. When it comes to the specifications of the upcoming mining-specific AMD GPUs, we know that both use the first-generation RDNA architecture and have 2560 Stream Processors (40 Compute Units). The memory configuration of these cards remains unknown, as AMD surely won't be fitting HBM2 stacks for mining as it did with the Navi 12 GPU. All that remains is to wait and see what AMD announces in the coming months.

Linux Gets Ported to Apple's M1-Based Devices

When Apple introduced its lineup of devices based on custom Apple Silicon, many people thought it represented the end of any further device customization and that Apple was effectively locking up the ecosystem even more. That has turned out not to be the case. Developers working on Macs are often in need of another operating system to test their software, which means running virtualization software such as virtual machines to try out another OS like Linux or possibly Windows. It would be a lot easier if they could just boot that OS directly on the device, and that is exactly why we are here today.

Researchers from Corellium, a Florida-based startup working on ARM device virtualization, have pulled off an incredible feat: they have managed to get Linux running on Apple's M1 custom-silicon devices. Corellium CTO Chris Wade has announced that Linux is now fully usable on M1 silicon. The port can take full advantage of the CPU; however, there is no GPU acceleration for now, and graphics fall back to software rendering. Corellium also promises to upstream its changes to the Linux kernel itself under an open-source, permissive licensing model. Below you can find an image of an Apple M1 Mac Mini running the latest Ubuntu build.

Linus Torvalds Calls Out Intel for ECC Memory Market Stagnation

Linus Torvalds, the creator of the Linux kernel and the git version-control system, has posted another one of his famous rants, this time addressing the lack of ECC memory in consumer devices. Mr. Torvalds posted his views on the Linux kernel mailing list, where he usually comments on the development of the kernel. ECC (Error-Correcting Code) memory is a special kind of DRAM that detects and corrects bit flips occurring inside the memory itself, which would otherwise silently corrupt stored data and produce false results. According to Mr. Torvalds, it is a technology that needs to be implemented everywhere, not just in the server space as Intel imagines.
Linus Torvalds: "Intel has been instrumental in killing the whole ECC industry with it's horribly bad market segmentation... Intel has been detrimental to the whole industry and to users because of their bad and misguided policies wrt ECC. Seriously... The arguments against ECC were always complete and utter garbage... Now even the memory manufacturers are starting do do ECC internally because they finally owned up to the fact that they absolutely have to. And the memory manufacturers claim it's because of economics and lower power. And they are lying bastards - let me once again point to row-hammer about how those problems have existed for several generations already, but these f***** happily sold broken hardware to consumers and claimed it was an "attack", when it always was "we're cutting corners"."
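The correct-a-flipped-bit idea behind ECC can be illustrated with the classic Hamming(7,4) code, which protects 4 data bits with 3 parity bits and can locate and fix any single bit flip. This is only an illustration: real ECC DIMMs use wider SECDED codes over 64-bit words, not this toy code.

```python
"""Toy Hamming(7,4) code: the textbook single-error-correcting scheme
that illustrates how ECC memory repairs a flipped bit.

Codeword layout (1-based positions): p1, p2, d1, p3, d2, d3, d4,
with parity bit p_i covering the positions whose index has bit i set.
"""


def encode(data: list[int]) -> list[int]:
    """Encode 4 data bits into a 7-bit Hamming codeword (even parity)."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4  # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]


def correct(code: list[int]) -> tuple[list[int], int]:
    """Return (corrected codeword, 1-based error position; 0 = no error)."""
    s1 = code[0] ^ code[2] ^ code[4] ^ code[6]
    s2 = code[1] ^ code[2] ^ code[5] ^ code[6]
    s3 = code[3] ^ code[4] ^ code[5] ^ code[6]
    pos = s1 + 2 * s2 + 4 * s3  # the syndrome spells out the error position
    fixed = code[:]
    if pos:
        fixed[pos - 1] ^= 1
    return fixed, pos


def decode(code: list[int]) -> list[int]:
    """Recover the 4 data bits, correcting a single bit flip if present."""
    fixed, _ = correct(code)
    return [fixed[2], fixed[4], fixed[5], fixed[6]]


if __name__ == "__main__":
    word = encode([1, 0, 1, 1])
    word[4] ^= 1  # simulate a single bit flip, as in a row-hammer upset
    print(decode(word))  # the original data comes back intact
```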

Tachyum Prodigy Software Emulation Systems Now Available for Pre-Order

Tachyum Inc. today announced that it is signing early adopter customers for the software emulation system for its Prodigy Universal Processor; customers may now begin the process of native software development (i.e., using the Prodigy instruction set architecture) and porting applications to run on Prodigy. Prodigy software emulation systems will be available at the end of January 2021.

Customers and partners can use Prodigy's software emulation for evaluation, development, and debugging; with it, they can begin to transition existing applications that demand high performance and low power to run optimally on Prodigy processors. Pre-built systems include a Prodigy emulator, native Linux, toolchains, compilers, user-mode applications, and x86, ARM, and RISC-V emulators. Software updates will be issued as needed.

RISC-V Comes to PC: SiFive Introduces HiFive Unmatched Development Board

The RISC-V architecture is a relatively new instruction set architecture (ISA) developed at the University of California, Berkeley. Started as a "short, three-month project," the RISC-V ISA is the fifth generation of the Reduced Instruction Set Computing (RISC) ideology. One company working on this technology and helping grow the ecosystem is SiFive. Today, it announced a big step forward for the ecosystem that will enable developers to create and optimize even more software for this architecture and platform. Called the HiFive Unmatched, the development board represents the RISC-V ISA's first entry into the world of personal computing, with its Mini-ITX form factor and PC-like power-supply and I/O connectors.

The board is home to SiFive's FU740 SoC, a five-core heterogeneous, coherent processor with four SiFive U74 cores and one SiFive S7 core. This SoC is capable of running a Linux OS smoothly, giving developers a good platform to optimize for. There is 8 GB of onboard DDR4 RAM (frequency and timings unknown), a MicroSD card slot, and one PCIe 3.0 x4 M.2 slot for system storage. To connect the board to the outside world, you get one Gigabit Ethernet port. For user I/O there are four USB 3.2 Gen 1 Type-A ports (one a charging port) and one MicroUSB console port. To power the board, you need a power supply with a standard 24-pin connector. If you plan to build a PC around the Unmatched, a standard Mini-ITX (170x170 mm) case will fit it. For more information, please check out SiFive's website.

AMD Seemingly Working on Cryptocurrency-focused Navi 10 GPU

New Linux patches seem to point towards a cryptocurrency-focused graphics card from AMD. First spotted by Phoronix, the patches add descriptions for a "navi10 blockchain SKU" - a pretty self-describing, well, description. The device ID is reported as 0x731E, and Phoronix says that the major difference between this graphics card and the other Navi 10 offerings on the market (namely the RX 5700 XT and RX 5700) is the absence of the Display Core Next (DCN) and Video Core Next (VCN) engines. Whether these are absent from the silicon or simply disabled by other means is currently unclear. Their absence, however, points towards cards with no graphics outputs - an obvious practicality for cryptocurrency-focused mining products.

Phoronix estimates a release of no sooner than early 2021, considering the timing of the patch information on Linux. While the market for GPU-accelerated cryptocurrency mining isn't what it used to be (luckily), there is still a market opportunity to be taken advantage of here - while ASICs have become more commonplace, there are still many GPU-mining alternatives within the realm of crypto. A crypto-focused product might steer users away from gaming-oriented consumer products, thus easing strain on supply for AMD's upcoming RX 6000 series - especially if this Navi 10-based GPU (or should we call it a CHU - Cryptocurrency Hashing Unit?) features some voltage and power adjustments to increase power efficiency on these workloads.

Basemark Launches GPUScore Relic of Life RayTracing Benchmark

Basemark is a pioneer in GPU benchmarking. Our current product, Basemark GPU, has been improving the 3D graphics industry since 2016. After releasing GPU 1.2 in March, the Basemark development team has been busy developing a brand-new benchmark: GPUScore. GPUScore will introduce hyper-realistic, true gaming-type content in three different workloads: Relic of Life, Sacred Path, and Expedition.

GPUScore Relic of Life targets high-end graphics cards. It is a completely new benchmark with many new features, the key one being real-time ray-traced reflections, including reflections of reflections. The benchmark will support not only Windows & DirectX 12, but also Linux & Vulkan ray tracing.

Marvell Launches Industry's First Native NVMe RAID Accelerator

Marvell (NASDAQ: MRVL) today introduced the industry's first native NVMe RAID 1 accelerator, a state-of-the-art technology for virtualized, multi-tenant cloud and enterprise data center environments which demand optimized reliability, efficiency, and performance. Hewlett Packard Enterprise (HPE) is the first of Marvell's partners to support the new accelerator in the HPE NS204i-p NVMe OS Boot Device offered on select HPE ProLiant servers and HPE Apollo systems.

As the industry transitions from legacy SAS and SATA to NVMe SSDs, Marvell's offering helps data centers fast-track the move to higher performance flash storage. The innovative accelerator lowers data center total cost of ownership (TCO) by offloading RAID 1 processing from costly and precious server CPU resources, maximizing application processing performance. IT organizations can now deploy a "plug-and-play," NVMe-based OS boot solution, like the HPE NS204i-p NVMe OS Boot Device, that protects the integrity of flash data storage while delivering an optimized, application-level user experience.

Intel Starts Hardware Enablement of Meteor Lake 7 nm Architecture

In a report by Phoronix, we have the latest information about Intel's efforts to prepare its next generation of hardware for a launch sometime in the future. In the latest Linux kernel patches, prepared to go mainline soon, Intel has been adding support for its "Meteor Lake" processor architecture, to be manufactured on Intel's most advanced 7 nm node. While there are no official patches in the mainline kernel yet, the first signs of Meteor Lake are expected to show up in version 5.10. This way, Intel is ensuring that the Meteor Lake platform will see the best software support, even though it is a few years away from launch.

Meteor Lake is expected to debut in late 2022 or 2023 and will replace the Alder Lake platform coming soon. Like Alder Lake, Meteor Lake will use hybrid core technology, combining small and big cores. The platform will pair new big "Ocean Cove" cores with small "Gracemont" cores. The processor will be manufactured on Intel's 7 nm node, making it the company's first 7 nm design. With all the delays to the node, we are in for an interesting period watching how the company copes and how the design IP turns out.

NVIDIA Introduces New Family of BlueField DPUs to Bring Breakthrough Networking, Storage and Security Performance to Every Data Center

NVIDIA today announced a new kind of processor—DPUs, or data processing units—supported by DOCA, a novel data-center-infrastructure-on-a-chip architecture that enables breakthrough networking, storage and security performance.

NVIDIA founder and CEO Jensen Huang revealed the company's three-year DPU roadmap in today's GPU Technology Conference keynote. It features the new NVIDIA BlueField-2 family of DPUs and the NVIDIA DOCA software development kit for building applications on DPU-accelerated data center infrastructure services.

Lenovo Announces the Lightest ThinkPad Ever - ThinkPad X1 Nano

Lenovo is very excited to unveil the latest addition to our premium X1 portfolio, the ThinkPad X1 Nano. The lightest ThinkPad ever at just 1.99 pounds (907 g), it breaks new ground for performance and functionality in an incredibly featherweight package. Lenovo's first ThinkPad based on the Intel Evo platform and powered by 11th Gen Intel Core processors, the X1 Nano delivers supreme speed and intelligence while maintaining outstanding battery life. Stunning visuals are delivered through a narrow-bezel 13-inch 2K display with a 16:10 aspect ratio, and four speakers and four 360-degree microphones enhance the audio-visual capabilities. For a truly immersive user experience, the X1 Nano supports Dolby Vision and Dolby Atmos. State-of-the-art connectivity is provided by Wi-Fi 6, and optional 5G will deliver higher bandwidth capability and drive new levels of always-on, always-connected efficiency and collaboration in a new hybrid working world.

Lenovo today is also delighted to announce that the world's first foldable PC, the ThinkPad X1 Fold, is available to order and will ship in a few weeks. A pinnacle of engineering innovation, the X1 Fold offers a revolutionary mix of portability and versatility that defines a new computing category, enabled by Intel Core processors with Intel Hybrid Technology and made possible by Intel's Project Athena innovation program. It blends familiar functionality that we all know from smartphones, tablets, and laptops into a single foldable PC that will forever reshape the way you work, play, create, and connect. With optional 5G, you can trust that your connection is more secure and optimized where available and that you are better protected with ThinkShield security features. Find out more about how the ThinkPad X1 Fold is pioneering a new category: A Game Changing Category

SiFive To Introduce New RISC-V Processor Architecture and RISC-V PC at Linley Fall Virtual Processor Conference

SiFive, Inc., the leading provider of commercial RISC-V processor IP and silicon solutions, today announced that Dr. Yunsup Lee, CTO of SiFive, and Dr. Krste Asanovic, Chief Architect of SiFive, will present at the technology industry's premier processor conference, the Linley Fall Virtual Processor Conference. The conference will be held on October 20th - 22nd and 27th - 29th, 2020 and will feature high-quality technical content from leading semiconductor companies worldwide.

"Industry demand for AI performance has skyrocketed over the last few years driven by rapid adoption from the data center to the edge. This year's Linley Fall Processor Conference will feature our biggest program yet and will introduce a host of new technology disclosures and product announcements of innovative processor architectures and IP technologies," said Linley Gwennap, principal analyst and conference chairperson. "In spite of the challenges posed by the pandemic, development of these technologies continues to accelerate and we're excited to be sharing these presentations with a global audience via our live-streamed format."