
Banana Pi Announces BPI-R4 Open Source Router SBC With WiFi 7 and 5G Capabilities

I wonder what makes people feel so inclined to know better what someone should do with their own equipment?
Truly, I'm getting sick and tired of that attitude everywhere.

Here's just a few things off the top of my head:
-Multiple systems w/ Gen3/4/5 NVMe/RAID drives.
-Imaging.
-Game Installs.
-Media transfers.
-(uncompressed, low-latency) KVM/Game Streaming
-On-Property WAN (fibre would be appropriate here)

I have at least 2 dual-RJ45 NICs. I don't think either cost me more than $40. I'd imagine if you got crafty with the 'reverse slot' cards, it could be even less $$.
For imaging, just plug a drive into a USB3 dock and copy your backup images there. It's safer since you can keep it offsite, and you get the same speed for much less money than a 10 gbit networked solution (the drive(s) will be the bottleneck). An external SATA dock is also an option; I have one of those back panels with an eSATA port and a molex-to-SATA power connector, so I can plug a drive in directly without opening the case.
For media transfers, getting above 110 MByte/sec will be exciting for about a day; after that you realize you just launch the transfer, minimize it, do something else while it runs, and don't even notice when it finishes. I'm speaking from experience (quick math on where that ceiling comes from below).
Game streaming wouldn't even push a 100mbit connection.
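
For the curious, here's roughly where that ~110 MByte/sec ceiling comes from. A quick sketch in Python, assuming plain TCP/IPv4 over gigabit Ethernet with standard header sizes (real SMB/NFS transfers lose a little more on top):

```python
# Back-of-the-envelope goodput ceiling for 1 Gbit/s Ethernet at the standard 1500-byte MTU.
# Assumes plain TCP/IPv4 with no options; SMB/NFS protocol overhead is ignored.

LINE_RATE_BYTES = 1_000_000_000 / 8     # 125,000,000 bytes/s on the wire
MTU = 1500                              # standard Ethernet payload size
WIRE_OVERHEAD = 7 + 1 + 14 + 4 + 12     # preamble + SFD + Ethernet header + FCS + inter-frame gap = 38 bytes
TCPIP_HEADERS = 20 + 20                 # IPv4 + TCP headers carried inside each packet

payload = MTU - TCPIP_HEADERS           # 1460 bytes of actual file data per frame
wire = MTU + WIRE_OVERHEAD              # 1538 bytes occupied on the wire per frame
efficiency = payload / wire

print(f"efficiency: {efficiency:.1%}")                               # ~94.9%
print(f"goodput:    {LINE_RATE_BYTES * efficiency / 1e6:.0f} MB/s")  # ~119 MB/s before SMB overhead
```

So a copy sitting at ~110 MByte/sec means the gigabit link itself is already basically saturated.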

The rest would not benefit from higher-speed networking, since it all depends on your internet connection, which is capped at 1gbit everywhere except a handful of places around the globe. Off the top of my head there is one ISP each in Finland, Spain and Romania, plus who knows how many in South Korea and Japan, where you can get 10gbit lines. Or are you telling me you have an internet line faster than 1gbit?

And if you have multiple systems with gen3/4/5 NVMe drives in RAID, and expect to have them interconnected through a local network so you can transfer stuff between them faster, then you already have a setup far in excess of what most consumers require. You also most likely spent so much money on it that an extra $400 for a 2.5gbit or 10gbit switch wouldn't put a dent in your budget. Or do you just want a 10 gbit line so you can watch the speeds go vroom vroom in iperf3 benchmarks? Why do you even have multiple gen3/4/5 NVMe RAID setups? What do you do on them? Do you make money editing uncompressed 4K video?

I wonder what makes people feel so inclined to know better what someone should do with their own equipment?
The fact that I do have a 10gbit home network. Trust me, it's overkill. If you truly NEEDED one, you wouldn't be bitching about the enterprise prices on network equipment.
 
The second link is nearly perfect. Clearly, there have been some improvements in the market since I last took a serious look.
There's an 8-port version as well, but it has a fan in it, which is apparently quite noisy.
 
The fact that I do have a 10gbit home network. Trust me, it's overkill. If you truly NEEDED one, you wouldn't be bitching about the enterprise prices on network equipment.
Psst, hey, have you heard of jumbo frames? ;)
 
Jumbo frames aren't recommended any more.
Eh? What did I miss?

Header overhead elimination for large file transfers alone is worth it. Granted, it does nothing for a typical gamer/streaming household.
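
To put a rough number on that, here's a quick sketch (same simplifying assumptions as the gigabit math above: plain TCP/IPv4, no VLAN tags or TCP options):

```python
# Compare wire efficiency of standard frames (MTU 1500) vs jumbo frames (MTU 9000).
# Assumes plain TCP/IPv4 with no VLAN tags and no TCP options.

WIRE_OVERHEAD = 38      # preamble + SFD + Ethernet header + FCS + inter-frame gap, per frame
TCPIP_HEADERS = 40      # IPv4 (20) + TCP (20) headers, per packet

def efficiency(mtu: int) -> float:
    """Fraction of the wire bits that carry actual payload."""
    return (mtu - TCPIP_HEADERS) / (mtu + WIRE_OVERHEAD)

std, jumbo = efficiency(1500), efficiency(9000)
print(f"MTU 1500: {std:.1%}   MTU 9000: {jumbo:.1%}")   # ~94.9% vs ~99.1%
print(f"relative gain: {jumbo / std - 1:.1%}")          # ~4.4% more goodput, best case
```

Which is roughly the "about 4%" figure quoted further down: measurable on big sequential transfers, invisible for everything else.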

Can you share some docs on the topic?
 
For imaging, just plug a drive into a USB3 dock and copy your backup images there. It's safer since you can keep it offsite, and you get the same speed for much less money than a 10 gbit networked solution (the drive(s) will be the bottleneck). An external SATA dock is also an option; I have one of those back panels with an eSATA port and a molex-to-SATA power connector, so I can plug a drive in directly without opening the case.
For media transfers, getting above 110 MByte/sec will be exciting for about a day; after that you realize you just launch the transfer, minimize it, do something else while it runs, and don't even notice when it finishes. I'm speaking from experience.

It seems you are the dude that really doesn't understand that time means money. The faster the job is done, the faster you move on. External drives are not for backup; they are unsafe. You can carry your pet photos on them, but not job-related data that has no other backup.

10Gb is too much for the time being; we discussed this last year too with Swede. The SoCs are still MIA and it simply hasn't taken off yet. Even 2.5Gb switches are pretty rare because there are no mainstream solutions, and I have read pretty mixed reviews of the current ones. We have been waiting on Realtek for years... but something is amiss.

I kinda managed to put together a 2.5Gb 3-port switch + stable 2Gb WiFi 6 solution for under 200€, but the MediaTek chipset is still under development; it will really only be mature next year. Can I recommend BPI solutions for the average Joe? Absolutely not. Is it safe? Mostly yes; the main OpenWrt contributors to the code are Europeans. Is it stable? For the most part yes; it does not have any stable OS release yet, only snapshots, and some people had problems with some pieces, but they work. I have another sister MT7981B device running a multi-WAN 5G modem handler, and it is more capricious.

Eh? What did I miss?

He's a hardware guy; the new HW SoCs just don't support it anymore (or it isn't implemented). HW offload engines just don't do jumbo frames IMHO, at least the ones I've met. Not needed. The only thing you should do is SQM Cake your WAN. Most routers shit their pants doing that though, but not these; these MediaTeks have a lot of horsepower.
 
Psst, hey, have you heard of jumbo frames? ;)
They've never worked on any equipment I've owned; they're definitely low down on the compatibility side.

According to the wiki it's only about 4% more bandwidth in a perfect environment, so it's been ignored a lot. And the fragmentation issues when going from hardware with jumbo frame support to hardware without it make the penalty far worse.
It's trading latency for bandwidth, which is not something home users want.
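
The latency half of that trade is easy to put numbers on too. A sketch assuming a store-and-forward switch, which has to receive a whole frame before it can start forwarding it:

```python
# Serialization delay: how long a single frame occupies the link.
# A store-and-forward switch adds roughly this much delay per hop, per frame,
# and a small packet queued behind a jumbo frame waits at least this long.

def serialization_us(frame_bytes: int, link_bps: float) -> float:
    """Microseconds needed to clock one frame onto a link of the given speed."""
    return frame_bytes * 8 / link_bps * 1e6

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {serialization_us(mtu, 1e9):5.1f} us at 1 GbE, "
          f"{serialization_us(mtu, 10e9):4.1f} us at 10 GbE")
# MTU 1500:  12.0 us at 1 GbE,  1.2 us at 10 GbE
# MTU 9000:  72.0 us at 1 GbE,  7.2 us at 10 GbE
```

Microseconds per hop won't ruin anyone's gaming, but it shows the direction of the trade: bigger frames help bulk throughput and slightly hurt the small, latency-sensitive traffic a home network mostly carries.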
 
He's a hardware guy; the new HW SoCs just don't support it anymore (or it isn't implemented). HW offload engines just don't do jumbo frames IMHO, at least the ones I've met. Not needed. The only thing you should do is SQM Cake your WAN. Most routers shit their pants doing that though, but not these; these MediaTeks have a lot of horsepower.
For SOHO routers with no dedicated switch chip it makes no sense to even have jumbo frames in the first place. As I wrote earlier, a typical home environment doesn't benefit from jumbos.

There are pro-grade switches that don't store frames before forwarding them (cut-through switching), and these switches can do wirespeed jumbos even with LAG and VLANs.

If you want to route jumbo frames, however, that is another problem: a conceptual one.
 
For SOHO routers with no dedicated switch chip it makes no sense to even have jumbo frames in the first place

My router has a dedicated MT7531 switch chip, but it does not work there. You are talking about enterprise grade, but I haven't heard of them touching it either, because it depends on the whole infrastructure. It takes only one bitchy server NIC like a Mellanox to bring it all down; server maintainers will choose stability over speed any day, and jumbo frames are the first thing they leave out, with HW offloading being the second.
 
They've never worked on any equipment I've owned; they're definitely low down on the compatibility side.

According to the wiki it's only about 4% more bandwidth in a perfect environment, so it's been ignored a lot. And the fragmentation issues when going from hardware with jumbo frame support to hardware without it make the penalty far worse.
It's trading latency for bandwidth, which is not something home users want.
They are a bit fiddly to get working, and the main misconception is that they can help you on a 1Gbps interface. You need 10Gbps and beyond to start noticing the gains. Here is a cool paper on the topic, since it would take me a while to do a fresh batch of benchmark recordings on my setups.
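
A rough illustration of why the gains only really show up at 10 Gbps and up: the win isn't bandwidth so much as the number of frames the host has to process per second (same simplified framing numbers as the sketches above):

```python
# Frames per second required to keep a link saturated.
# Per-frame work (interrupts, protocol processing) is the cost jumbo frames cut down.

WIRE_OVERHEAD = 38   # Ethernet framing overhead per frame, in bytes

def frames_per_second(link_bps: float, mtu: int) -> float:
    return link_bps / 8 / (mtu + WIRE_OVERHEAD)

for link_bps, name in ((1e9, "1 GbE"), (10e9, "10 GbE")):
    std = frames_per_second(link_bps, 1500)
    jumbo = frames_per_second(link_bps, 9000)
    print(f"{name:>6}: {std:>9,.0f} frames/s at MTU 1500 vs {jumbo:>9,.0f} at MTU 9000")
#  1 GbE:    81,274 frames/s at MTU 1500 vs    13,830 at MTU 9000
# 10 GbE:   812,744 frames/s at MTU 1500 vs   138,305 at MTU 9000
```

At gigabit speeds a modern CPU shrugs off ~80k frames per second, so there's little to gain; at 10 GbE, cutting the per-frame work by roughly 6x is where jumbo frames (or offloads like GRO/LRO) start to matter.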


My router has a dedicated MT7531 switch chip, but it does not work there. You are talking about enterprise grade, but I haven't heard of them touching it either, because it depends on the whole infrastructure. It takes only one bitchy server NIC like a Mellanox to bring it all down; server maintainers will choose stability over speed any day, and jumbo frames are the first thing they leave out, with HW offloading being the second.
I'm talking about switches starting in the 5-figure range. ;) And that's for small enterprises. The heavy hitters start at 6 figures apiece. And jumbo frames work just fine there, because you don't just plop any old server and NIC into those networks.

Not to mention the use cases with SANs and the rest.
 
I'm talking about switches starting in the 5-figure range. ;) And that's for small enterprises. The heavy hitters start at 6 figures apiece. And jumbo frames work just fine there, because you don't just plop any old server and NIC into those networks.

Not to mention the use cases with SANs and the rest.

It is all elder knowledge now. For one thing, IPv6 handles MTU discovery automatically. Second, it potentially breaks NPU/HW acceleration strategies; the HW has to be tailored for it (not only the switch/router). It ain't the year 2000 with its brute-force approach to things.

But the main reason, even for the iSCSI guys... setting jumbo frames just doesn't improve anything over 1500, because the CPUs are fast enough but the storage backbone actually isn't; you end up at a file system + SQL bottleneck instead. People often screw around thinking bigger is better but fail to realize that it doesn't change anything. That's why jumbo frames have always been a questionable thing with very limited usage; they only fit special scenarios where the CPU is the bottleneck. That especially applies to enterprise gear or routing to AWS etc.
 
It is all elder knowledge now. For one thing, IPv6 handles MTU discovery automatically. Second, it potentially breaks NPU/HW acceleration strategies; the HW has to be tailored for it (not only the switch/router). It ain't the year 2000 with its brute-force approach to things.

But the main reason, even for the iSCSI guys... setting jumbo frames just doesn't improve anything over 1500, because the CPUs are fast enough but the storage backbone actually isn't; you end up at a file system + SQL bottleneck instead. People often screw around thinking bigger is better but fail to realize that it doesn't change anything. That's why jumbo frames have always been a questionable thing with very limited usage; they only fit special scenarios where the CPU is the bottleneck. That especially applies to enterprise gear or routing to AWS etc.
Really, really far off if we're talking IPv4.

It is clear to me that you haven't had the pleasure of seeing a properly designed and implemented network with jumbo frames. If you had, you wouldn't be saying the things you are saying.

If you have the need, the hardware and the know-how to set up jumbos right, it's a night-and-day difference in performance.
 
I'm talking about switches starting in the 5-figure range. ;) And that's for small enterprises. The heavy hitters start at 6 figures apiece. And jumbo frames work just fine there, because you don't just plop any old server and NIC into those networks.
You responded to someone saying 100Mb to 1Gb is overkill for most users by talking about jumbo frames, so it doesn't quite fit in that context.

It doesn't work over WiFi (frames always end up fragmented), so it's a niche tech for very limited-scope networks with high-end equipment at each end; the situations where it's of benefit are pretty limited.
 
Sorry if this has been covered already but any concerns or testing done to confirm this hardware is not phoning home to the CCP or other governmental entities?
 
Sorry if this has been covered already but any concerns or testing done to confirm this hardware is not phoning home to the CCP or other governmental entities?

Git clone it and look through the code yourself. This thing is not aimed at casual users either way. You have to read a lot and get familiar with mt76.
 