My experience with the OrangePi 4 LTS has been poor, and I'm unwilling to purchase more of their hardware. Mine is now running Armbian because I didn't care for the instability, or for the Chinese repos.
They seem uninterested in trying to get their hardware supported by submitting their patches for inclusion in the Linux kernel, and popular distros. Instead, you have to trust their repos (based in PRC).
I opened the review and immediately ctrl-F'd "kernel". It said no upstream support so I closed the article.
I would never buy one of these things without upstream kernel support for the SoC and a sane bootloader. Even the Raspberry Pi is not great on this front TBH (kernel is mostly OK but the fucked up boot chain is a PITA, requires special distro support).
"Chinese repos" is a very charitable interpretation of the Google drive links they used to distribute the os. It seemed like it was on the free plan too, it often didn't work because it tripped the maximum downloads per month limit.
It's always better than a link in the sticky post on the manufacturer's phpBB forum. I bought some audio equipment directly from a Chinese company, and everything looked like a hobbyist/student project.
Is it? A Google Drive link to an OS image is worse, IMO.
I bought a MiniPC directly from a Chinese company (an AOOSTAR G37) and the driver downloads on their website are MEGA links. I thought only piracy and child porn sites used those..
I am somewhat amazed that you can manufacture such expensive high-tech equipment yet are too cheap to set up a proper download service for the software, which would be very simple and cheap compared to making the hardware itself.
Maybe it is a Chinese mentality thing where the first question is always "What is the absolutely cheapest way to do this?" and all other concerns are secondary at best.
...which does not inspire confidence in the hardware either.
Maybe Chinese customers are different: they see this and think, "These people are smart! Why pay more if you don't have to?"
> "Chinese repos" is a very charitable interpretation of the Google Drive links they used to distribute the OS.
"Chinese repos" refers to the fact that the Debian repo links for updates point to custom Huawei servers.
> it often didn't work because it tripped the maximum downloads per month limit.
it always work if you login into a Google account prior to downloading. If you don't, indeed the downloads will regularly fail.
> it always work[s]
That was not my experience, at least for very large files (100+ GB). There was a workaround (that has since been patched) where you could link files into your own Google drive and circumvent the bandwidth restriction that way. The current workaround is to link the files into a directory and then download the directory containing the link as an archive, which does not count against the bandwidth limit.
I see. I never had to download such large files from Drive. For files up to 10 GB I never had any issues, though.
That's always the problem with these non-Pi SBCs. They never have good software support.
Even bigger brands such as Nvidia seem to expect us to recycle SBCs every couple of years.
The Jetson Nano launched with Ubuntu 18.04; today, that is still the only officially supported distro for it. I have no reason to think this would be different with the Orin and Thor series, or even with the DGX Spark with its customized Ubuntu/"DGX OS".
I still don't understand why they couldn't support them properly. There are so many situations in which they could be better than the alternatives, only to be hamstrung by the poorest OS support.
You see, a small startup like NVIDIA just doesn't have the budget to support their older devices the same way a multi-trillion dollar company like Raspberry Pi can.
The NanoPi models from FriendlyElec tend to have better support.
I have this experience with most of these SBCs. The new Radxa board boots 50% of the time. The only reliable SBCs I have are the RPi 3 and 4.
I have a Radxa zero 3E that boots and runs fine.
Sounds like a faulty SD card.
I have two NVMe drives and I have tried it with several SD cards.
I don't have any Radxa model, but I have a bunch of SBCs from different makers and I have never seen a problem where boot only works half of the time.
You keep insinuating the PRC is the problem, yet you don't realize you're already pwned just by running their hardware, no matter the OS.
Directly stating something twice is not insinuating…
Point to the spot on the board where China hurt you.
...and you would point at a backdoor. (If it is there.)
I'd namedrop Salt Typhoon, but it feels a bit unfair to rely on American SigInt.
This is hilarious
The review shows ARM64 software support is still painful vs x86. At $200 for the 16 GB model, this is the price point where you could just get an Intel N150 mini PC in the same form factor. And those usually come with cases. They also tend to pull 5-8 W at idle, while this is 15 W. Cool if you really want ARM64, but at this end of the performance spectrum, why not stick with the x86 stack, where everything just works a lot more easily?
From the article: "[...] the Linux support for various parts of the boards, not being upstreamed and mainlined, is very likely to be stuck on an older version. This is usually what causes headaches down the road [...]".
The problem isn't support for the ARM architecture in general, it's the support for this particular board.
Other boards like the Raspberry Pi and many boards based on Rockchip SoCs have most of the necessary support mainlined, so the experience is quite painless. Many are starting to get support for UEFI as well.
The exception (and even those are questionable, as running plain Debian did not work right on the Pi 3B and others when I tried recently) proves the rule. You have to look really hard to find an x86 computer where things don't just basically work; the reverse is true for ARM. The power draw between the two is comparable these days, so I don't understand why anyone would bother with ARM for anything that needs more than minimally powerful hardware.
The Pi 3B doesn't have UEFI support, so it requires special support on the distro side for the boot process. For the 4 and newer, you can flash firmware on the board to support UEFI and USB boot (or it'll already be there, depending on luck and the age of the device), though installing is a bit of a pain since there are no easy images to do it with. https://wiki.debian.org/RaspberryPi4
I believe some other distros also have UEFI booting/installers set up for Pi 4 and newer devices because of this, though there's a good chance you'll still want some of the other libraries that come with Raspberry Pi OS (aka Raspbian) for some of the hardware-specific features like CSI/DSI and some of the GPIO features that might not be fully upstreamed yet.
There's also a port of Proxmox called PXVirt (Formerly Proxmox Port) that exists to use a number of similar ARM systems now as a virtualization host with a nice ui and automation around it.
My uninformed normie view of the ecosystem suggests that it's the support for almost every particular board, and that's exactly the issue. For some reason, ARM devices always have some custom OS or Android and can't run off-the-shelf Linux. Meanwhile you can just buy an x86/amd64 device and assume it will just work. I presume there is some fundamental reason why ARM devices are so bad about this? Like they're just missing standardization and every device requires some custom firmware to be loaded by the OS that's inevitably always packaged in a hacky way?
It's the kernel drivers, not firmware. There is no BIOS or ACPI, so the kernel itself has to support a specific board. In practice it means there is a DTB file that configures it and the actual drivers in the kernel.
Manufacturers hack it together, flash it to the device and publish the sources, but don't bother with upstreaming and move on.
Same story as Android devices not having updates two years after release.
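To make the DTB point above concrete, here is a tiny userspace sketch (plain C, nothing board-specific, not from the article) that prints what the kernel was told about the board via the device tree; /proc/device-tree is where a device-tree-booted kernel exposes the DTB it received:

```c
/*
 * Illustrative sketch only: on a device-tree-booted ARM board the kernel
 * exposes the DTB it was given under /proc/device-tree. Printing "model"
 * and "compatible" shows how the board identifies itself -- there is no
 * BIOS/ACPI enumeration behind this, just the DTB from the bootloader.
 */
#include <stdio.h>

static void dump(const char *path)
{
    FILE *f = fopen(path, "r");
    int c;

    if (!f) {
        printf("%s: not present (probably not a device-tree boot)\n", path);
        return;
    }
    printf("%s: ", path);
    /* DT string-list properties are NUL-separated; print NULs as spaces. */
    while ((c = fgetc(f)) != EOF)
        putchar(c ? c : ' ');
    putchar('\n');
    fclose(f);
}

int main(void)
{
    dump("/proc/device-tree/model");
    dump("/proc/device-tree/compatible");
    return 0;
}
```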
But "no BOIS or ACPI" and requiring the kernel to support each individual board sounds exactly like the problem is the ARM architecture in general. Until that's sorted it makes sense to be wary of ARM.
It's not a problem with ARM servers or vendors that care about building well designed ARM workstations.
It's a problem that's inherent to mobile computing and will likely never change without regulation, or without an open-standards device line somehow hitting it out of the park and setting new expectations a la PCs.
The problem is zero expectation of ever running anything other than the vendor-supplied support package/image, and how fast/cheap it is to just wire shit together instead of worrying about standards and interoperability with third-party integrators.
How so? The Steam Deck is an x86 mobile PC, with all the implications of that: everything (well, all the generic hardware, e.g. WiFi and GPU, IIRC) works out of the box.
When I say mobile, I mean ARM SoCs in the phone, embedded and IoT lineage, not so much full-featured PCs in a mobile form factor.
What is ACPI other than a DTB baked into the firmware/bootloader?
Any SBC maker could add an extra flash chip and burn an outdated U-Boot with the manufacturer's DTB baked in. Then U-Boot would boot Linux, just like UEFI does, and Linux would read the firmware's fixed DTB, just like it reads x86 firmware's fixed ACPI tables.
But - cui bono?
You need drivers in your main OS either way. On x86 you are not generally relying on your EFI's drivers for storage, video or networking.
It's actually nice that you can go without, and have one less layer.
It is more or less like the Wi-Fi problem on laptops, but multiplied by the number of chips. In a way it's more of a Linux problem than an ARM problem.
At some point the "good" boards get enough support and the situation slowly improves.
We've reached the point where you don't need to spec-check a laptop if you want to run Linux on it; I hope the same will happen with ARM SBCs.
It's a decision Linux made about how to handle hardware in the ARM world, so it sits somewhere in the middle.
It's the shape of the delivered artifact that's driven the way things are implemented in the ecosystem, not a really fundamental architecture difference.
The shape of historically delivered ARM artifacts has been embedded devices. Embedded devices usually work once in one specific configuration. The shape of historically delivered ARM Linux products is a Thing that boots and runs. This only requires a kernel that works on one single device in one single configuration.
The shape of historically delivered x86 artifacts is socketed processors that plug into a variety of motherboards with a variety of downstream hardware, and the shape of historically delivered x86 operating systems is floppies, CDs, or install media that is expected to work on any x86 machine.
As ARM moves out of this historical system, things improve; I believe that for example you could run the same aarch64 Linux kernel on Pi 2B 1.2+, 3, and 4, with either UEFI/ACPI or just different DTBs for each device, because the drivers for these devices are mainline-quality and capable of discovering the environment in which they are running at runtime.
People commonly point to ACPI+UEFI vs DeviceTree as causes for these differences, but I think this is wrong; these are symptoms, not causes, and are broadly Not The Problem. With properly constructed drivers you could load a different DTB for each device and achieve similar results as ACPI; it's just different formats (and different levels of complexity + dynamic behavior). In some ways ACPI is "superior" since it enables runtime dynamism (ie - power events or even keystrokes can trigger behavior changes) without driver knowledge, but in some ways it's worse since it's a complex bytecode system and usually full of weird bugs and edge cases, versus DTB where what you see is what you get.
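A minimal, hypothetical sketch of the "properly constructed drivers" idea: one kernel driver that binds to whichever board's DTB names it, so the same kernel binary serves several boards. The "acme,..." compatible strings are made up; the rest is the standard Linux platform-driver boilerplate:

```c
/*
 * Hypothetical example (the "acme" device does not exist): a single Linux
 * platform driver that matches several boards purely through device-tree
 * "compatible" strings, which is how one generic arm64 kernel can cover
 * many SBCs when the drivers are upstream.
 */
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int acme_widget_probe(struct platform_device *pdev)
{
    /* The matched DT node tells the driver which board variant it is on. */
    dev_info(&pdev->dev, "probed from DT node %pOF\n", pdev->dev.of_node);
    return 0;
}

static const struct of_device_id acme_widget_of_match[] = {
    { .compatible = "acme,widget-v1" },   /* listed in board A's DTB */
    { .compatible = "acme,widget-v2" },   /* listed in board B's DTB */
    { /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, acme_widget_of_match);

static struct platform_driver acme_widget_driver = {
    .probe = acme_widget_probe,
    .driver = {
        .name = "acme-widget",
        .of_match_table = acme_widget_of_match,
    },
};
module_platform_driver(acme_widget_driver);

MODULE_DESCRIPTION("Sketch: one driver, many boards, selected by DTB");
MODULE_LICENSE("GPL");
```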
This has often been the case in the past but the situation is much improved now.
For example I have an Orange Pi 5 Plus running the totally generic aarch64 image of Home Assistant OS [0]. Zero customization was needed, it just works with mainline everything.
There's even UEFI [1].
Granted this isn't the case for all boards but Rockchip at least seems to have great upstream support.
[0]: https://github.com/home-assistant/operating-system/releases
[1]: https://github.com/edk2-porting/edk2-rk3588
Yeah, but you can get an N100 on sale for about the same price, and it comes with a case, NVMe storage (way better than an SD card), a power supply, a proper cooling solution, and less maintenance…
So, I agree but less than I did a few months ago. I purchased an Orange Pi 5 Ultra and was put off by the pre-built image and custom kernel. The "patch" for the provided kernel was inscrutable as well. Now I'm running a vanilla 6.18 kernel on a vanilla U-Boot firmware (still a binary blob required to build that, though) with a vanilla install of Debian. That support includes the NPU, GPU, 2.5G Ethernet and NVMe root/boot. I don't have performance numbers but it's definitely fast enough for what I use it for.
Interesting, where did you get an image with a 6.18 kernel that has NPU support?
NPU support in general seems to be moving pretty fast, it shares a lot of code with the graphics drivers.
I started with the published Debian image and then just built my own... and then installed onto an NVMe SSD.
There's also a risk of your DeviceTree getting pruned from the kernel in X years when it's decided that "no one uses that board anymore", which is something that's happened to several boards I bought in the 2010's, but not something that's happened to any PC I've ever owned.
It’s weirded me out for a long time that we’ve gone from ‘we will probe the hardware in a standard way and automatically load the appropriate drivers at boot’ ideal we seemed to have settled on for computers in the 2000s - and still use on x86 - back to ‘we’ll write a specific description file for every configuration of hardware’ for ARM.
Isn't this one of the benefits of ACPI? That the kernel asks the motherboard for the hardware information that on ARM SoCs is stored in the device tree?
Yep
That makes sense, as the Pi is as easy as x86 at this point. I almost never have to compile from scratch.
I'm not a compiler expert... But it seems each ARM64 board needs its own custom kernel support, but once that is done, it can support anything compiled to ARM64 as a general target? Or will we still need to have separate builds for RPi, for this board, etc?
Little bit of both. The Pi still uses a sort of unique boot sequence due to its heritage. Most devices will have the CPU load the bootloader and then have the OS bring up the GPU. The Pi sort of inverts this, having the GPU lead the charge with the CPU held at reset until after the GPU has finished its boot sequence.
Once you get into the CPU, though, the AArch64 registers become more standardized. You still have drivers and such to worry about and differing memory offsets for the peripherals - but since you have the kernel running it's easier to kind of poke around until you find it. The Pi 5 added some complexity to this with the RP1 south bridge, which adds another layer of abstraction.
Hopefully that all makes sense. Basically the Pi itself is backwards while everything else should conform. It's not ARM-specific, it's just how the Pi does things.
Apart from very rare cases, this will run any Linux arm64 binary.
For the Pi you have to rely on the manufacturer's image too. It does not run a vanilla arm64 distro.
With this board the SoC is the main problem.
CIX has been working on mainlining that stuff for over a year and we still don't have GPU and NPU support in mainline.
I still have to run my own kernel build on the Opi 5+, so that unfortunately tracks. At least I don't have to write the drivers this decade.
Why? I'm running an Orange Pi 5+ with a fully generic aarch64 image of Home Assistant OS and it works great. Is there some particular feature that doesn't work on mainline?
For server use you can live with generic images. When you want stuff like HDMI audio out and all, generic images usually won't do.
This. The issue is the culture inside many of these HW companies that is oppositional to upstreaming changes and developing in the open in general.
Often an outright mediocre software development culture generally, that sees software as a pure cost centre, in fact. The "product" is seen as the chip, the software "just" a sideshow (or worse, a channel by which their IP could leak).
The Rockchip stuff is better, but still has similar problems.
These companies need to learn that their hardware will be adopted more aggressively for products if the experience of integrating with it isn't sub-par.
They exist in a strange space. They want to be a Linux host but they also want to be an embedded host. The two cultures are pretty different in terms of expectations around kernels. A Linux sysadmin will (rightly) balk at not having an upgrade path for the kernel, while a lot of embedded stuff that just happens to use Linux often has a single kernel released… ever.
I'm not saying one approach is better than the other, but there is definitely a lot of art in each camp. I know the one I innately prefer, but I've definitely had eyebrows raised at me in a professional setting when expressing that view; some places value upgrading dependencies while others value extreme stability at the potential cost of security.
> Some places value upgrading dependencies while others value extreme stability at the potential cost of security.
Both are valid. The latter is often used as an excuse, though. No, your $50 WiFi-connected camera does not need the same level of stability as the WiFi-connected medical device that allows a doctor to remotely monitor medication. Yes, you should have a moderately robust way to build, update, and distribute a new FW image for that camera.
I can't tell you the number of times I've gotten a shell on some device only to find that the kernel/os-image/app-binary or whatever has build strings that CLEARLY feature `some-user@their-laptop` betraying that if there's ever going to be an updated firmware, it's going to be down to that one guy's laptop still working and being able to build the artifact and not because a PR was merged.
The obvious counterpoint is that a PR system is also likely to break unless it is exercised+maintained often enough to catch little issues as they appear. Without a set of robust tests the new artifact is also potentially useless to a company that has already sold their last $50 WiFi camera. If the artifact is also used for their upcoming $54.99 camera then often they will have one good version there too. The artifact might work on the old camera but the risk/reward ratio is pretty high for updating the abandonware.
No it's definitely a problem with the ARM architecture, specifically that it's standard to make black box SoCs that nobody can write drivers for and the manufacturer gives you one binary version and then fucks off forever. It's a problem with the ARM ecosystem as a whole for literally every board (except Raspberry Pi), likely stemming from the bulk of ARM being throwaway smartphones with proprietary designs.
If ARM cannot outdo x86 on power draw anymore then it really is entirely pointless to use it because you're trading off a lot, and it's basically guaranteed that the board will be a useless brick a few years down the line.
> The problem isn't support for the ARM architecture in general,
Of course it is not. That's why almost every ARM board comes with its own distro, and sometimes its own bootloader and kernel version. Because "it is supported". /s
With RAM it will cost notably more, and comes with 4 cores instead of 12. I'd expect this to run circles around an N150 for single-threaded perf too.
They are not in the same class, which is reflected in the power envelope.
BTW what's up with people pushing N150 and N300 in every single ARM SBC thread? Y'all Intel shareholders or something? I run both but not to the exclusion of everything else. There is nothing I've failed to run successfully on my ARM ones and the only thing I haven't tried is gaming.
> I'd expect this to run circles around an N150 for single-threaded perf too
It has basically the same single-core performance as an N150 box.
Random N150 result: https://browser.geekbench.com/v6/cpu/10992465
> BTW what's up with people pushing N150 and N300 in every single ARM SBC thread?
At this point I expect a lot of people have been enticed by niche SBCs and then discovered that driver support is a nightmare, as this article shows. So in time, everyone discovers that cheap x86-64 boxes accomplish their generic computing goals easier than these niche SBCs, even if the multi-core performance isn't the same.
Being able to install a mainline OS and common drivers and just get to work is valuable.
> BTW what's up with people pushing N150 and N300 in every single ARM SBC thread?
Because they have a great performance-per-watt ratio, along with a GPU that is very well supported across a wide range of devices, and mainline kernel support. In other words, a great general-purpose SBC.
Meanwhile people are using ARM SBCs, with SoCs designed for embedded or mobile devices, as general purpose computers.
I will admit that with RAM and SSD prices skyrocketing, these ARM SBCs look more attractive.
Because most ARM SBCs are still limited to whatever Linux distro they added support to. Intel SBCs might underperform, but you can be sure they will run anything built for x86-64.
ARM SBCs that cost over $90 are totally not worth it considering those Nxxx options exist.
Many of the NXXX options are, sadly, going up in price a lot right now due to the RAM shortages.
> BTW what's up with people pushing N150 and N300 in every single ARM SBC thread?
For 90% of use cases, ARM SBCs are not appropriate and will not meet expectations over time.
People expect them to be little PCs, and intend to use them that way, but they are not. Mini PCs, on the other hand, are literally little PCs and will meet the expectations users have when dealing with PCs.
1. Wow, never thought I'd need to do an investment disclosure for an HN comment. But sure thing: I'm sure Intel is somewhere in my 401K's index funds, but also probably Qualcomm. But I'm not a corporate shill, thank you very much for the good faith. Just a hobbyist looking to not get seduced by the latest trend. If I were an ARM developer that'd be different, I get that.
2. The review says single core Geekbench performance is 1290, same as i5-10500 which is also similar to N150, which is 1235.
3. You can still get N150s with 16 GB of RAM in a case for $200 all in.
> review says single core Geekbench performance is 1290, same as i5-10500 which is also similar to N150, which is 1235.
Single core, yes. The multi-core score is much higher for this SBC vs the N150.
But realistically, most workloads of the kind you would run on these machines don't benefit from multithreading as much as from single-core performance. At least at home these machines will do things like video streaming, routing, or serving files. Even if you want to use it in the living room as a console/emulator, you are better off with higher single-core performance and fewer cores than the opposite.
> But realistically, most workloads of the kind you would run on these machines don't benefit from multithreading as much as from single-core performance. At least at home these machines will do things like video streaming, routing, or serving files.
You're probably right about "most workloads", but as a single counter-example, I added several seasons of shows to my N305 Plex server last night, and it pinned all eight threads for quite a while doing its intro/credit detection.
I actually went and checked if it would be at all practical to move my Plex server to a VM on my bigger home server where it could get 16 Skymont threads (at 4.6ghz vs 8 Gracemont threads at ~3ghz - so something like 3x the multithreaded potential on E-cores). Doesn't really seem workable to use Intel Quick Sync on Linux guests with a Hyper-V host though.
if you are talking about ancient hardware, yes, it's mostly driven by single core performance. But any console more recent than the 2000s will hugely benefit from multiple cores (because of the split between CPU and GPU, and the fact that more recent consoles also had multiple cores, too).
Are you sure you don't have single-threaded and multi-threaded backwards?
Why would the A720 at 2.8 GHz run circles around the N150 that boosts up to 3.6 GHz in single-threaded workloads, while the 12-core chip wouldn't beat the 4-core chip in multithreaded workloads?
I can't speak to why other people bring up the N150 in ARM SBC threads any more than "AMD doesn't compete in the ~$200 SBC segment".
FWIW, as far as SBC/NUCs go, I've had a Pi 4, an RK3399 board, an RK3568 board, an N100 NUC from GMKTec, and a N150 NUC from Geekom, and the N150 has by far been my favorite out of those for real-world workloads rather than tinkering. The gap between the x86 software ecosystem and the ARM software ecosystem is no joke.
P.S. Stay away from GMKTec. Even if you don't get burned, your SODIMM cards will. There are stoves, ovens, and hot plates with better heat dissipation and thermals than GMKTec NUCs.
x86 based small computers are just so much easier to work with than most second- and third-string ARM vendors. The x86 scene has had standards in place for a long time, like PCIe and the PC BIOS (now UEFI) for hardware initialization and mapping, that make it a doddle to just boot a kernel and let it get the hardware working. ARM boards don't have that yet, requiring per-board support in the kernel which board manufacturers famously drag their feet on implementing openly let alone upstreaming. Raspberry Pi has its own setup, which means kernel support for the Pi series is pretty good, but it doesn't generalize to other boards, which means users and integrators may be stuck with whatever last version of Ubuntu or Android the vendor thought to ship. Which means if you want a little network appliance like a router, firewall, Jellyfin server, etc. it often makes more sense to go with an N150 bitty box than an ARM SBC because the former is going to be price- and power-draw-competitive with the latter while being able to draw on the OS support of the well-tested PC ecosystem.
ARM actually has a spec in place called SystemReady that standardizes on UEFI, which should make bringup of ARM systems much less jank. But few have implemented it yet. I keep saying, the first cheap Chinese vendor that ships a SystemReady-compliant SBC is gonna make a killing.
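As a rough way to see which firmware interface a given box actually booted with (UEFI/ACPI as SystemReady expects, a bare device tree, or both), a small check of the standard /sys/firmware paths works; this is just a sketch, not anything vendor-specific:

```c
/*
 * Sketch: report which firmware interfaces the running kernel was handed.
 * /sys/firmware/efi appears on UEFI boots, /sys/firmware/acpi/tables when
 * ACPI tables were provided, /sys/firmware/devicetree/base when a DTB was.
 * Some ARM systems legitimately expose EFI plus a device tree.
 */
#include <stdio.h>
#include <sys/stat.h>

static const char *present(const char *path)
{
    struct stat st;
    return stat(path, &st) == 0 ? "yes" : "no";
}

int main(void)
{
    printf("UEFI runtime:  %s\n", present("/sys/firmware/efi"));
    printf("ACPI tables:   %s\n", present("/sys/firmware/acpi/tables"));
    printf("Device tree:   %s\n", present("/sys/firmware/devicetree/base"));
    return 0;
}
```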
> I keep saying, the first cheap Chinese vendor that ships a SystemReady-compliant SBC is gonna make a killing.
Agree. When ARM announced the initiative, I thought that the raspberry pi people would be quick but they haven't even announced a plan to eventually support it.
I don't know what the hold up is! Is it really that difficult to implement?
Apparently Pine64 and Radxa sell SystemReady-compliant SBCs; even a Raspberry Pi 4 can be made compliant (presumably by booting a UEFI firmware from the Raspberry's GPU-based custom-schmustom boot procedure, which then loads your OS).
Depends on what you need - for pure performance regardless of power usage and 3D use cases like gaming, agreed. For performance per watt under load and video transcoding use cases, the 12th-gen E-core CPUs ala the N100 are _really_ hard to beat.
Agreed, at least for a likely "home use" case, such as a TV box, router, or general purpose file server or Docker host, I don't see how this board is better than something like a Beelink mini PC. The Orange Pi does not even come with a case, power supply or cooler. Contrast that with a Beelink that has a built-in power supply (no external brick) and of course a case and cooler.
Fair enough, but I suppose it does not come with storage (NVMe). Typically, ready-to-use NUCs that retail for around $200 do. That's often only about 0.5 TB, so not a huge amount of storage, but more than enough for a streaming box or retro console, say.
Yes x86 will win for convenience on about every metric (at least for now), but this SoC's CPU is much faster than a mere Intel N150 (especially for multicore use cases).
I've got i3 and i5 systems that do 15W or better idle, and I don't have to worry about the absolute clusterfuck of ARM hardware (and those systems used can be had for less and will probably long outlive mystery meat ARM SBCs).
One of my Arm systems idles at less than 1W and has a max TDP (10W) lower than your idle draw. I also have an N200 box, and a 16-core workstation with an obscene power draw - each platform has its pros and cons.
I've noticed nuance is the first thing discarded in the recurring x86 vs Arm flame wars, with each side minimizing the strengths of the "opposing" platform. Pick the right tool for the right job; there are use cases where the Orange Pi 6 is the right choice.
I was soured on ARM SBCs by the Orange Pi 5, which does not have an option to ignore its SD card during boot. Something trivial on basically every x86 platform I had been taking for granted.
The high end of the performance is impressive, and this has idle power similar to the processors in its performance range (AMD Ryzen 7 4800H idles at 45W). This is certainly not meant for low-power computing.
I've got two RK3588 boards here doing Linux-y things around my place (Jellyfin, Forgejo builders, Immich, etc) and ... I don't think I've run into pain? They're running various debian and just ... work? I can't think of a single package that I couldn't get for ARM64.
Likewise my VPS @ Hetzner is running Aarch64. No drama. Only pain is how brutal the Rust cross-compile is from my x86 machine.
(Those things are expensive, but I just ordered one [the ASUS variant] for myself.)
Meanwhile Apple is pushing the ARM64 architecture hard, and Windows is apparently actually quite viable now?
Personally... it's totally irrational, but I have always had a grudge against x86 since it "won" in the early 90s and I had to switch from 68k. I want diversity in ISAs. RISC-V would be nice, but I'll settle for ARM for now.
We need an acronym for these types of boards: Yet Another SBC With Poor Longterm Support. YASBCWPLS. Really rolls off the tongue.
Or we should just have "STS" (Short Term Support) after the board names to let others know the board will be essentially obsolete (based on lack of software updates) in two months.
Without mainline Linux support I have no interest in these more obscure SBCs. Mainline Linux is the bare minimum, put in some effort please manufacturers.
Buying one of these Pi knockoffs taught me one thing: software support is the key to Raspberry Pi's success.
Whenever I would have a problem, and it was more often than not, I would search for a solution and come across something that worked for the RPi that I could try to port across.
Double the hardware spec matters little if you can't get the software to even compile.
I'm not sure I'm gonna grab another OrangePi board again. I was happy to grab the RV2 just to experiment around with, but I didn't realize that the Linux kernel they provided to build their Ubuntu distro doesn't actually build properly. I got it to build after throwing a version of Ubuntu onto an unused PC, but then no matter which options I selected for the build (like the GUI options), it seemed like the GUI just didn't exist at all in the final binary. I've yet to try and build a third-party OS with support, since I spent so much time just trying to get the official distro to work properly.
When something has a 30 TOPS NPU, what are the implications? Do NPUs like this have some common backend that ggml/llama.cpp targets? Is it proprietary and only works with some specific software? Does it have access to all the system RAM, and at what bandwidth?
I know the concept has been around for a while but no idea if it actually means anything. I assume that people are targeting ones in common devices like Apple, but what about here?
Ignorant of this NPU, but in my experience, you're expected to use some cursed stack of proprietary tools/runtimes/SDKs/etc and no, it will not play nicely with anything you want it to unless you write the support yourself.
The specific NPU doesn't seem to be mentioned in TFA, but my guess is that the blessed way to deal with it is the Neon SDK: https://www.arm.com/technologies/neon
I've not found Neon to be fun or easy to use, and I frequently see devices ignoring the NPU and inferring on CPU because it's easier. Maybe you get lucky and someone has made a backend for something specific you want, but it's not common.
TFA does directly mention the NPU "Arm-China Zhouyi: 30 TOPS (Dedicated)"
"you cannot simply use standard versions of PyTorch or TensorFlow out of the box. You must use the NeuralONE AI SDK."
Neon is a SIMD instruction set for the CPU, not a separate accelerator. It doesn't need an SDK to use, it's supported by compiler intrinsics and assembly language in any modern ARM compiler.
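To illustrate that distinction: Neon is ordinary SIMD executed by the CPU cores, so it needs nothing beyond a normal compiler. A minimal, illustrative example using the standard arm_neon.h intrinsics:

```c
/*
 * Minimal illustration that NEON is ordinary CPU code, not a separate
 * accelerator: four float additions done in one SIMD instruction. Builds
 * with any AArch64 compiler, e.g. gcc -O2 neon_add.c.
 */
#include <arm_neon.h>
#include <stdio.h>

int main(void)
{
    float a[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    float b[4] = { 10.0f, 20.0f, 30.0f, 40.0f };
    float out[4];

    float32x4_t va = vld1q_f32(a);        /* load 4 lanes */
    float32x4_t vb = vld1q_f32(b);
    float32x4_t vsum = vaddq_f32(va, vb); /* one vector add */
    vst1q_f32(out, vsum);

    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```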
A 30 TOPS NPU is the almost-useful minimum for a device, but as we've seen, even Microsoft couldn't come up with anything useful to do with it in the AI laptops. This has all but disappeared; they are pushing cloud licensing over local AI now.
Can't speak to this specific NPU, but these kinds of accelerators are really made for more general ML tasks like machine vision, etc. For example, while people have made the (6 TOPS) NPU in the (similar board) RK3588 work with llama.cpp, it isn't super useful because of the RAM constraints. I believe it has some sort of 32-bit memory addressing limit, so you can never give it more than 3 or 4 GB, for example. So for LLMs, not all that useful.
It needs specific support, and for example llama.cpp would have support for some of them. But that comes with limitations in how much RAM they can allocate. But when they work, you see a flat CPU usage and the NPU does everything for inference.
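A back-of-the-envelope sketch of why such an addressing limit matters more than raw TOPS for LLMs; the 4 GB window and the model sizes below are illustrative assumptions, not measured numbers for this board's NPU:

```c
/*
 * Rough arithmetic only: which model sizes fit in an assumed ~4 GB NPU
 * addressing window at ~4-bit quantization (about 0.5 bytes per weight,
 * ignoring KV cache and runtime overhead).
 */
#include <stdio.h>

int main(void)
{
    const double window_gb = 4.0;          /* assumed NPU addressing limit */
    const double bytes_per_weight = 0.5;   /* ~4-bit quantization */
    const double params_billion[] = { 1.5, 3.0, 7.0, 13.0 };

    for (int i = 0; i < 4; i++) {
        /* 1e9 weights per "B" of parameters, converted back to GB */
        double weights_gb = params_billion[i] * bytes_per_weight;
        printf("%5.1fB params ~ %4.1f GB of weights -> %s\n",
               params_billion[i], weights_gb,
               weights_gb <= window_gb ? "could fit in the NPU window"
                                       : "won't fit; falls back to CPU/GPU");
    }
    return 0;
}
```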
I think the sweet spot for ARM SBCs are smaller, less powerful and cheaper for headless IOT edge cases. I use a couple of them that way when I need LAN connectivity, either by ethernet or wifi, and things wired to GPIO pins. I don't need a powerful CPU or lots of RAM for that. The SBC makers are caught up in a horsepower race and I just shrug, it's not for me.
This is my experience as well. I have a couple PINE64 devices, a Rock64 (Rockchip RK3328) and a RockPro64 (RK3399). And an N150 device.
Both ARM64 devices run headless, make use of GPIO, and have more than enough CPU. In fact, these are stable enough that I run BSDs on them and don't bother with Linux.
The Rock64 runs FreeBSD for SDR applications (e.g. ADS-B receiver). FreeBSD has stable USB support for RTL-SDR devices.
The RockPro64 runs NetBSD with ZFS with a PCIe SSD. NetBSD can handle ARM big.LITTLE well. I run several home lab workloads on this. Fun device.
I also have an N150 device running the latest Debian 13 as my main home lab server for home automation, Docker, MQTT broker, etc.
In short: SBCs are cheap enough that you can choose more than one, each for the right task, including IoT.
My Orange Pi RV2 sucks :( The available distros, drivers, kernel, and tools do work, but they're crappy and poorly maintained. There's no support and very little documentation, which is a real shame. From a hardware point of view, it's a nice board, and when I properly compiled some software myself I actually got really interesting performance, but it was a pain in the ass.
So I ended up buying a Raspberry Pi 4, much better supported and documented.
Their approach to software support does leave a lot to be desired.
For what it's worth though, the v5 did have Talos support, so you could just throw that on there, connect it to a cluster, and have a decent ARM node that is fanless and has 32 GB.
I have some software that needs to be built for aarch64 (for an aarch64 box with a 4-core CPU); currently I'm using an Oracle Cloud 4-core/24 GB Arm Neoverse N1 instance as a GitHub self-hosted runner to build it.
This machine seems more powerful than that, so it's definitely attractive to me as a physical aarch64 self-hosted runner.
I am newly interested in Compute Module style SBCs after I bought one to toy around with. I was surprised to learn that the PCBs that interface with them are open specs, and I can probably build myself more custom PCB solutions to match different form factors instead of being stuck with a bulky normal Raspberry Pi.
I was pleased to learn that Radxa and Orange Pi have similar compatible boards.
I have wanted to see more RISC SBCs, so I may toy with these, but I'd rather wait for the software support to get much richer.
I am not a kernel developer, so I don't really have any idea what this means, but CIX appears to have patches in the Linux kernel[0], so I assume mainlining more stuff is in the works?
I really wish the Raspberry Pi Foundation released a Pi with built-in NVMe instead of using a HAT. I think using flash memory is the true bottleneck on the system.
Heh, I'm the opposite. I wish the rpi stayed the course of cheapest "working" SBC, and move their high-end boards to a different brand. Raspberry Sigma, or 67 or whatever gets the younguns crazy these days.
After the pandemic, the "25$" SBC suddenly became 100+ with low availability. The main thing that made rpis worth it is gone now, and they're all chasing number go up on benchmarks.
Actually the Orange Pi 5 Ultra would be the most recent board from Orange Pi to compare it with. You can see a comparison between the Orange Pi 5 Ultra and the Raspberry Pi 5 here: https://boilingsteam.com/orange-pi-5-ultra-review/
In a nutshell, this new Orange Pi 6 Plus is much faster than Orange Pi 5 Ultra and anything that came before.
The Plus and Ultra are almost identical, with the exception of an HDMI-in port on the latter. I've used the same HAL on both boards; they are effectively the same.
Yet another board which will never have proper upstream support because the SoC vendor refused to implement the ARM BSA standard which would provide EFI/ACPI support instead of relying on undiscoverable devices only exposed through device tree. ACPI isn't perfect but it's way better than device trees which are seldom updated so the device will remain stuck with old kernels.
How are we still in a world where there are breathless, hand-waving blog posts written about the theoretical potential of super-fast SBCs for which the manufacturer shows fuck all interest in competent OS support?
Yet again, OrangePi crank out half-baked products and tech enthusiasts who quite understandably lack the deep knowledge to do more than follow others' instructions on how to compile stuff talk about it as if their specifications actually matter.
Yet again the HN discourse will likely gather around stuff like "why not just use an N1x0" and side quests about how the Raspberry Pi Foundation has abandoned its principles / is just a cynical Broadcom psyop / is "lagging behind" in hardware.
This stuff can be done better and the geek world should be done excusing OrangePi producing hardware abandonware time after time. Stop buying this crap and maybe they will finally start focussing on doing more than shipping support for one or two old kernels and last year's OS while kicking vague commitments about future support just far enough down the road that they can release another board first.
Please stop falling for it :-/
ETA: I think what grinds my gears the most is that OrangePi, BananaPi etc., are largely free-riding off the Linux community while producing products that only "beat" the market-defining manufacturers (Raspberry Pi, BeagleBoard) because they treat software support as an uncosted externality.
This kind of "build it and they will use it" logic works well for microcontrollers, where a manufacturer can reasonably expect to produce a chip with a couple of tech demos, a spec sheet and a limited C SDK and people will find uses for it.
But for "near-desktop class" SBCs it is not much better than misrepresentation. Consequently these things are e-waste in a way that even the global desk drawer population of the Raspberry Pi does not reach.
And yet they are graded on a curve and never live up to their potential.
I wouldn't be surprised if it performs adequately in this context - isn't the manufacturer a VoIP device maker?
The reality is that they spam the market with a large number of products with little consistency, poor (if labyrinthine) documentation, random google drive links for firmware etc., and there are the same issues with hardware support.
I dunno, maybe the situation there is better than it was. But the broad picture is the same: better hardware but you are basically on your own.
Prices are considerably higher through the links than quoted in the article. This usually happens when someone posts about a great deal for surplus hardware on Ebay or a hidden gem on aliexpress. Just the thundering herd of traffic causes algorithmic pricing to spike the price.
My experience with the OrangePi 4 LTS has been poor, and I'm unwilling to purchase more of their hardware. Mine is now running Armbian because I didn't care for the instability, or for the Chinese repos.
They seem uninterested in trying to get their hardware supported by submitting their patches for inclusion in the Linux kernel, and popular distros. Instead, you have to trust their repos (based in PRC).
I opened the review and immediately ctrl-F'd "kernel". It said no upstream support so I closed the article.
I would never buy one of these things without upstream kernel support for the SoC and a sane bootloader. Even the Raspberry Pi is not great on this front TBH (kernel is mostly OK but the fucked up boot chain is a PITA, requires special distro support).
"Chinese repos" is a very charitable interpretation of the Google drive links they used to distribute the os. It seemed like it was on the free plan too, it often didn't work because it tripped the maximum downloads per month limit.
It's always better than a link in the sticky post on the manufacturer's phpbb forum. I bought some audio equipment directly from a Chinese company, and everything look like a hobbies/student project.
Is it? A google drive link to an OS image is worse IMO
I bought a MiniPC directly from a Chinese company (an AOOSTAR G37) and the driver downloads on their website are MEGA links. I thought only piracy and child porn sites used those..
I am somewhat amazed how you can manufacture such expensive high tech equipment yet are too cheap to setup a proper download service for the software, which would be very simple and cheap compared to making the hardware itself.
Maybe it is a Chinese mentality thing where the first question is always "What is the absolutely cheapest way to do this?" and all other concerns are secondary at best.
..which does not inspire confidence in the hardware either.
Maybe Chinese customers are different, see this, and think "These people are smart! Why pay more if you don't have to!".
> "Chinese repos" is a very charitable interpretation of the Google drive links they used to distribute the os.
"Chinese repos" refer to the fact that the debian repos links for updates point to custom Huawei servers.
> it often didn't work because it tripped the maximum downloads per month limit.
it always work if you login into a Google account prior to downloading. If you don't, indeed the downloads will regularly fail.
> it always work[s]
That was not my experience, at least for very large files (100+ GB). There was a workaround (that has since been patched) where you could link files into your own Google drive and circumvent the bandwidth restriction that way. The current workaround is to link the files into a directory and then download the directory containing the link as an archive, which does not count against the bandwidth limit.
I see. I never had to download such large files from Drive. For files up to 10Gb I never had any issue though.
That's always the problem with these non-Pi SBCs. They never have good software support.
Even bigger brands such as Nvidia seem to expect us to recycle SBCs every couple years.
The Jetson Nano launched with Ubuntu 18.04, today, this is still the only officially supported distro for it. I have no reason to think this would be different with the Orin and Thor series, or even with the DGX Spark with its customized Ubuntu/"DGX OS".
I still don't understand why they couldn't support them properly. There are so many situations in which they could be better than alternatives, only to be hamstring by the poorest OS support.
You see, a small startup like NVIDIA just doesn't have the budget to support their older devices the same way a multi-trillion dollar company like Raspberry Pi can.
The NanoPi models from FriendlyElec tend to have better support.
I have this experience with most of these SBC-s. The new Radxa board boots 50% of the time. The only reliable SBCs I have are RPI3|4.
I have a Radxa zero 3E that boots and runs fine.
Sounds like a faulty SD card.
I have 2 nvmes and i have tried it with several sd card.
I don't have any Radxa model, but I have a bunch of SBCs from different makers and I have never seen a problem with boot working half of the time only.
you keep insinuating PRC yet you don't realize you're already pwned just running their hardware no matter the OS.
Directly stating something twice is not insinuating…
Point to the spot on the board where China hurt you.
...and you would point at a backdoor. (If it is there.)
I'd namedrop Salt Typhoon, but it feels a bit unfair to rely on American SigInt.
This is hilarious
The review shows ARM64 software support is still painful vs x86. For $200 for the 16gb model, this is the price point where you could just get an Intel N150 mini PC in the same form factor. And those usually come with cases. They also tend to pull 5-8w at idle, while this is 15w. Cool if you really want ARM64, but at this end of the performance spectrum, why not stick with the x86 stack where everything just works a lot easier?
From the article: "[...] the Linux support for various parts of the boards, not being upstreamed and mainlined, is very likely to be stuck on an older version. This is usually what causes headaches down the road [...]".
The problem isn't support for the ARM architecture in general, it's the support for this particular board.
Other boards like the Raspberry Pi and many boards based on Rockchip SoCs have most of the necessary support mainlined, so the experience is quite painless. Many are starting to get support for UEFI as well.
The exception (even those are questionable as running plain Debian did not work right on Pi 3B and others when I tried recently) proves the rule. You have to look really hard to find an x86 computer where things don't just basically work, the reverse is true for ARM. The power draw between the two is comparable these days, so I don't understand why anyone would bother with ARM when you've got something where you need more than minimally powerful hardware.
The Pi 3B doesn't have UEFI support, so it requires special support on the distro side for the boot process but for the 4 and newer you can flash (or it'll already be there, depending on luck and age of the device) the firmware on the board to support UEFI and USB boot, though installing is a bit of a pain since there's no easy images to do it with. https://wiki.debian.org/RaspberryPi4
I believe some other distros also have UEFI booting/installers setup for PI4 and newer devices because of this, though there's a good chance you'll want some of the other libraries that come with Raspberry PI OS (aka Raspbian) still for some of the hardware specific features like CSI/DSI and some of the GPIO features that might not be fully upstreamed yet.
There's also a port of Proxmox called PXVirt (Formerly Proxmox Port) that exists to use a number of similar ARM systems now as a virtualization host with a nice ui and automation around it.
My uninformed normie view of the ecosystem suggests that it's the support for almost every particular board, and that's exactly the issue. For some reason, ARM devices always have some custom OS or Android and can't run off-the-shelf Linux. Meanwhile you can just buy an x86/amd64 device and assume it will just work. I presume there is some fundamental reason why ARM devices are so bad about this? Like they're just missing standardization and every device requires some custom firmware to be loaded by the OS that's inevitably always packaged in a hacky way?
Its the kernel drivers, not firmware. There is no bios or acpi, so the kernel itself has to support a specifc board. In practice it means there is a dtb file that configures it and the actual drivers in the kernel.
Manufacturers hack it together, flash to device and publish the sources, but dont bother with upstreaming and move on.
Same story as android devices not having updates two years after release.
But "no BOIS or ACPI" and requiring the kernel to support each individual board sounds exactly like the problem is the ARM architecture in general. Until that's sorted it makes sense to be wary of ARM.
It's not a problem with ARM servers or vendors that care about building well designed ARM workstations.
It's a problem that's inherit to mobile computing and will likely never change unless with regulation or an open standards device line somehow hitting it out of the park and setting new expectations a la PCs.
The problem is zero expectation of ever running anything other than the vendor supplied support package/image and how fast/cheap it is to just wire shit together instead of worrying about standards and interoperability with 3rd party integrators.
How so? The Steam Deck is an x86 mobile PC with all the implications of everything (well, all the generic hardware e.g. WiFi, GPU IIRC) work out of the box.
When I say mobile, I mean ARM SoCs in the phone, embedded and IoT lineage, not so much full featured PCs in mobile form factor.
What is ACPI other than a DTB baked into the firmware/bootloader?
Any SBC could buy an extra flash chip and burn an outdated U-Boot with the manufacturer's DTB baked in. Then U-Boot would boot Linux, just like UEFI does, and Linux would read the firmware's fixed DTB, just like it reads x86 firmware's fixed ACPI tables.
But - cui bono?
You need drivers in your main OS either way. On x86 you are not generally relying on your EFI's drivers for storage, video or networking.
It's actually nice that you can go without, and have one less layer.
It is more or less like wifi problem on laptops, but multiplied by the number of chips. In a way it's more of a lunux problem than arm problem.
At some point the "good" boards get enough support and the situation slowly improves.
We reached the state where you dont need to spec-check the laptop if you want to run linux on it, the same will happen to arm sbc I hope.
Is a decision of linux about how to handle HW in the ARM world. So is a little like in the middle.
It's the shape of the delivered artifact that's driven the way things are implemented in the ecosystem, not a really fundamental architecture difference.
The shape of historically delivered ARM artifacts has been embedded devices. Embedded devices usually work once in one specific configuration. The shape of historically delivered ARM Linux products is a Thing that boots and runs. This only requires a kernel that works on one single device in one single configuration.
The shape of historically delivered x86 artifacts is socketed processors that plug into a variety of motherboards with a variety of downstream hardware, and the shape of historically delivered x86 operating systems is floppies, CDs, or install media that is expected to work on any x86 machine.
As ARM moves out of this historical system, things improve; I believe that for example you could run the same aarch64 Linux kernel on Pi 2B 1.2+, 3, and 4, with either UEFI/ACPI or just different DTBs for each device, because the drivers for these devices are mainline-quality and capable of discovering the environment in which they are running at runtime.
People commonly point to ACPI+UEFI vs DeviceTree as causes for these differences, but I think this is wrong; these are symptoms, not causes, and are broadly Not The Problem. With properly constructed drivers you could load a different DTB for each device and achieve similar results as ACPI; it's just different formats (and different levels of complexity + dynamic behavior). In some ways ACPI is "superior" since it enables runtime dynamism (ie - power events or even keystrokes can trigger behavior changes) without driver knowledge, but in some ways it's worse since it's a complex bytecode system and usually full of weird bugs and edge cases, versus DTB where what you see is what you get.
This has often been the case in the past but the situation is much improved now.
For example I have an Orange Pi 5 Plus running the totally generic aarch64 image of Home Assistant OS [0]. Zero customization was needed, it just works with mainline everything.
There's even UEFI [1].
Granted this isn't the case for all boards but Rockchip at least seems to have great upstream support.
[0]: https://github.com/home-assistant/operating-system/releases
[1]: https://github.com/edk2-porting/edk2-rk3588
Yeah but you can get a n100 on sale for about the same price, and it comes with a case, nvme storage (way better then sd card), power supply, proper cooling solution, and less maintanance…
So, I agree but less than I did a few months ago. I purchased an Orange Pi 5 Ultra and was put off by the pre-built image and custom kernel. The “patch“ for the provided kernel was inscrutable as well. Now I’m running a vanilla 6.18 kernel on a vanilla uboot firmware (still a binary blob required to build that though) with a vanilla install of Debian. That support includes the NPU, GPU, 2.5G Ethernet and NVMe root/boot. I don’t have performance numbers but it’s definitely fast enough for what I use it for.
Interesting, where did you get an image with a 6.18 kernel that has NPU support?
NPU support in general seems to be moving pretty fast, it shares a lot of code with the graphics drivers.
I started with the published Debian image and then just built my own... and then installed onto an NVMe SSD.
There's also a risk of your DeviceTree getting pruned from the kernel in X years when it's decided that "no one uses that board anymore", which is something that's happened to several boards I bought in the 2010's, but not something that's happened to any PC I've ever owned.
It’s weirded me out for a long time that we’ve gone from ‘we will probe the hardware in a standard way and automatically load the appropriate drivers at boot’ ideal we seemed to have settled on for computers in the 2000s - and still use on x86 - back to ‘we’ll write a specific description file for every configuration of hardware’ for ARM.
Isn't this one of the benefits of ACPI? That the kernel asks the motherboard for the hardware information that on ARM SoCs is stored in the device tree?
Yep
That makes sense, as the Pi is as easy as x86 at this point. I almost never have to compile from scratch.
I'm not a compiler expert... But it seems each ARM64 board needs its own custom kernel support, but once that is done, it can support anything compiled to ARM64 as a general target? Or will we still need to have separate builds for RPi, for this board, etc?
Little bit of both. Pi still uses a sort of unique boot sequence due to it’s heritage. Most devices will have the CPU load the bootloader and then have the OS bring up the GPU. Pi sort of inverts this, having the GPU leading the charge with the CPU held at reset until after the GPU has finished it’s boot sequence.
Once you get into the CPU though the Aarch64 registers become more standardized. You still have drivers and such to worry about and differing memory offsets for the peripherals - but since you have the kernel running it’s easier to kind of poke around until you find it. Pi 5 added someone complexity to this with the RP1 South Bridge which adds another layer of abstraction.
Hopefully that all makes sense. Basically the Pi itself is backwards while everything else should conform. It’s not Arm specific, but how the Pi does things.
Apart from very rare cases, this will run any linux-arm64 binary.
Fot the Pi you have to rely on the manufacturer's image too. It does not run a vanilla arm64 distro
With this board the SoC is the main problem. CIX is working on mainlining that stuff for over a year and we still dont have gpu and npu support in mainline
I still have to run my own build of kernel on Opi5+, so that unfortunately tracks. At least I dont have to write the drivers this decade
Why? I'm running an Orange Pi 5+ with a fully generic aarch64 image of Home Assistant OS and it works great. Is there some particular feature that doesn't work on mainline?
for server use you can live with generic images. When you want stuff like HDMI audio out and all, generic images usually won't do.
This. The issue is the culture inside many of these HW companies that is oppositional to upstreaming changes and developing in the open in general.
Often an outright mediocre software development culture generally, that sees software as a pure cost centre, in fact. The "product" is seem to be the chip, the software "just" a side show (or worse, a channel by which their IP could leak).
The Rockchip stuff is better, but still has similar problems.
These companies need to learn that their hardware will be adopted more aggressively for products if the experience of integrating with it isn't sub-par.
They exist in a strange space. They want to be a Linux host but they also want to be an embedded host. The two cultures are pretty different in terms of expectations around kernels. A Linux sysadmin will (rightly) balk at not having an upgrade path for the kernel while a lot of embedded stuff that just happens to use Linux, often has a single kernel released… ever.
I’m not saying one approach is better than the other but there is definitely a lot of art in each camp. I know the one I innately prefer but I’ve definitely had eyebrows raised at me in a professional setting when expressing that view; Some places value upgrading dependencies while others value extreme stability at the potential cost of security.
> Some places value upgrading dependencies while others value extreme stability at the potential cost of security.
Both are valid. The latter is often used as an excuse, though. No, your $50 wifi connected camera does not need the same level of stability as the WiFi connected medical device that allows doctor to remotely monitor medication. Yes, you should have a moderately robust way to update and build and distribute a new FW image for that camera.
I can't tell you the number of times I've gotten a shell on some device only to find that the kernel/os-image/app-binary or whatever has build strings that CLEARLY feature `some-user@their-laptop` betraying that if there's ever going to be an updated firmware, it's going to be down to that one guy's laptop still working and being able to build the artifact and not because a PR was merged.
The obvious counterpoint is that a PR system is also likely to break unless it is exercised+maintained often enough to catch little issues as they appear. Without a set of robust tests the new artifact is also potentially useless to a company that has already sold their last $50 WiFi camera. If the artifact is also used for their upcoming $54.99 camera then often they will have one good version there too. The artifact might work on the old camera but the risk/reward ratio is pretty high for updating the abandonware.
No it's definitely a problem with the ARM architecture, specifically that it's standard to make black box SoCs that nobody can write drivers for and the manufacturer gives you one binary version and then fucks off forever. It's a problem with the ARM ecosystem as a whole for literally every board (except Raspberry Pi), likely stemming from the bulk of ARM being throwaway smartphones with proprietary designs.
If ARM cannot outdo x86 on power draw anymore then it really is entirely pointless to use it because you're trading off a lot, and it's basically guaranteed that the board will be a useless brick a few years down the line.
> The problem isn't support for the ARM architecture in general,
Of course it is not. That's why almost every ARM board comes with its own distro, and sometimes its own bootloader and kernel version. Because "it is supported". /s
With RAM it will cost notably more, and with 4 cores instead of 12. I'd expect this to run circles around an N150 for single-threaded perf too.
They are not in the same class, which is reflected in the power envelope.
BTW what's up with people pushing N150 and N300 in every single ARM SBC thread? Y'all Intel shareholders or something? I run both but not to the exclusion of everything else. There is nothing I've failed to run successfully on my ARM ones and the only thing I haven't tried is gaming.
> I'd expect this to run circles around an N150 for single-threaded perf too
It has basically the same single-core performance as an N150 box.
Random N150 result: https://browser.geekbench.com/v6/cpu/10992465
> BTW what's up with people pushing N150 and N300 in every single ARM SBC thread?
At this point I expect a lot of people have been enticed by niche SBCs and then discovered that driver support is a nightmare, as this article shows. So in time, everyone discovers that cheap x86-64 boxes accomplish their generic computing goals easier than these niche SBCs, even if the multi-core performance isn't the same.
Being able to install a mainline OS and common drivers and just get to work is valuable.
> BTW what's up with people pushing N150 and N300 in every single ARM SBC thread?
Because they have a great performance-per-watt ratio along with a GPU that is very well supported across a wide range of software and has mainline kernel support. In other words, a great general-purpose SBC.
Meanwhile people are using ARM SBCs, with SoCs designed for embedded or mobile devices, as general purpose computers.
I will admit that with RAM and SSD prices skyrocketing, these ARM SBCs look more attractive.
Because most ARM SBCs are still limited to whatever Linux distro the vendor added support for. Intel SBCs might underperform, but you can be sure they will run anything built for x86-64.
ARM SBCs that cost over $90 are totally not worth it considering those Nxxx options exist.
Many of the NXXX options are, sadly, going up in price a lot right now due to the RAM shortages.
> BTW what's up with people pushing N150 and N300 in every single ARM SBC thread?
For 90% of use cases, ARM SBCs are not appropriate and will not meet expectations over time.
People expect them to be little PCs, and intend to use them that way, but they are not. Mini PCs, on the other hand, are literally little PCs and will meet the expectations users have when dealing with PCs.
1. Wow, never thought I'd need to do an investment disclosure for an HN comment. But sure thing: I'm sure Intel is somewhere in my 401K's index funds, but also probably Qualcomm. But I'm not a corporate shill, thank you very much for the good faith. Just a hobbyist looking to not get seduced by the latest trend. If I were an ARM developer that'd be different, I get that.
2. The review says single-core Geekbench performance is 1290, about the same as an i5-10500, which is also similar to the N150 at 1235.
3. You can still get N150s with 16GB RAM in a case for $200 all-in.
> The review says single-core Geekbench performance is 1290, about the same as an i5-10500, which is also similar to the N150 at 1235.
Single core, yes. Multi core score is much higher for this SBC vs the N150.
But realistically, most workloads of the kind you would run on these machines don't benefit from multithreading as much as from single-core performance. At least at home, these machines will do things like video streaming, routing, or serving files. Even if you want to use it in the living room as a console/emulator, you are better off with higher single-core performance and fewer cores than the opposite.
> But realistically, most workloads of the kind you would run on these machines don't benefit from multithreading as much as from single-core performance. At least at home, these machines will do things like video streaming, routing, or serving files.
You're probably right about "most workloads", but as a single counter-example, I added several seasons of shows to my N305 Plex server last night, and it pinned all eight threads for quite a while doing its intro/credit detection.
I actually went and checked if it would be at all practical to move my Plex server to a VM on my bigger home server where it could get 16 Skymont threads (at 4.6ghz vs 8 Gracemont threads at ~3ghz - so something like 3x the multithreaded potential on E-cores). Doesn't really seem workable to use Intel Quick Sync on Linux guests with a Hyper-V host though.
> in the living room as a console/emulator,
If you are talking about ancient hardware, yes, it's mostly driven by single-core performance. But any console more recent than the 2000s will hugely benefit from multiple cores (because of the split between CPU and GPU, and the fact that more recent consoles had multiple cores themselves).
Are you sure you don't have single-threaded and multi-threaded backwards?
Why would the A720 at 2.8 GHz run circles around the N150 that boosts up to 3.6 GHz in single-threaded workloads, while the 12-core chip wouldn't beat the 4-core chip in multithreaded workloads?
Obviously, the Intel chip wins in single-threaded performance while losing in multi-threaded: https://www.cpubenchmark.net/compare/6304vs6617/Intel-N150-v...
I can't speak to why other people bring up the N150 in ARM SBC threads any more than "AMD doesn't compete in the ~$200 SBC segment".
FWIW, as far as SBC/NUCs go, I've had a Pi 4, an RK3399 board, an RK3568 board, an N100 NUC from GMKTec, and an N150 NUC from Geekom, and the N150 has by far been my favorite out of those for real-world workloads rather than tinkering. The gap between the x86 software ecosystem and the ARM software ecosystem is no joke.
P.S. Stay away from GMKTec. Even if you don't get burned, your SODIMMs will. There are stoves, ovens, and hot plates with better heat dissipation and thermals than GMKTec NUCs.
x86-based small computers are just so much easier to work with than most second- and third-string ARM vendors' boards. The x86 scene has had standards in place for a long time, like PCIe and the PC BIOS (now UEFI) for hardware initialization and mapping, that make it a doddle to just boot a kernel and let it get the hardware working. ARM boards don't have that yet; they require per-board support in the kernel, which board manufacturers famously drag their feet on implementing openly, let alone upstreaming. Raspberry Pi has its own setup, which means kernel support for the Pi series is pretty good, but it doesn't generalize to other boards, so users and integrators may be stuck with whatever last version of Ubuntu or Android the vendor thought to ship. So if you want a little network appliance like a router, firewall, Jellyfin server, etc., it often makes more sense to go with an N150 bitty box than an ARM SBC, because the former is going to be price- and power-draw-competitive with the latter while being able to draw on the OS support of the well-tested PC ecosystem.
ARM actually has a spec in place called SystemReady that standardizes on UEFI, which should make bringup of ARM systems much less janky. But few vendors have implemented it yet. I keep saying, the first cheap Chinese vendor that ships a SystemReady-compliant SBC is gonna make a killing.
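A quick way to see which world a given Linux box actually booted in (rough sketch; these are the standard sysfs locations, but treat the snippet as illustrative rather than exhaustive):

    # Rough sketch: report whether the running kernel was handed UEFI/ACPI
    # tables or a flattened device tree by its firmware.
    import os

    def firmware_interfaces():
        return {
            "UEFI": os.path.isdir("/sys/firmware/efi"),                    # EFI runtime services exposed
            "ACPI": os.path.isdir("/sys/firmware/acpi/tables"),            # ACPI tables passed to the kernel
            "devicetree": os.path.isdir("/sys/firmware/devicetree/base"),  # flattened device tree passed in
        }

    for name, present in firmware_interfaces().items():
        print(f"{name}: {'yes' if present else 'no'}")

A typical x86 mini PC will report UEFI and ACPI; most ARM SBCs report only the device tree, which is exactly the gap SystemReady is meant to close.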
> I keep saying, the first cheap Chinese vendor that ships a SystemReady-compliant SBC is gonna make a killing.
Agree. When ARM announced the initiative, I thought that the raspberry pi people would be quick but they haven't even announced a plan to eventually support it. I don't know what the hold up is! Is it really that difficult to implement?
The Pi boots on its GPU, which is a closed off Broadcom design. Likely complicates things a bit.
Apparently Pine64 and Radxa sell SystemReady-compliant SBCs; even a Raspberry Pi 4 can be made compliant (presumably by booting a UEFI firmware from the Raspberry's GPU-based custom-schmustom boot procedure, which then loads your OS).
No idea - the Ryzen-based ones are better!
Depends on what you need - for pure performance regardless of power usage and 3D use cases like gaming, agreed. For performance per watt under load and video transcoding use cases, the 12th-gen E-core CPUs a la the N100 are _really_ hard to beat.
Agreed, at least for a likely "home use" case, such as a TV box, router, or general purpose file server or Docker host, I don't see how this board is better than something like a Beelink mini PC. The Orange Pi does not even come with a case, power supply or cooler. Contrast that with a Beelink that has a built-in power supply (no external brick) and of course a case and cooler.
This OrangePi 6 Plus board comes with cooling and a power supply (usb-c). No case, though.
Fair enough, but I suppose it does not come with storage (NVMe). Typically, ready-to-use NUCs that retail for around $200 do. That's often only about 0.5TB, so not a huge amount of storage, but more than enough for a streaming box or retro console, say.
Correct, you have to buy the NVME storage separately.
It allows you to build for what is coming. In a couple of years, ARM hardware this powerful will be cheap and common.
I use the RPi Zero 2 for the IO pins.
The 4B/5 for the camera stuff.
I don't think using these boards for just compute makes a lot of sense unless it's for toy stuff like an SSH shell or Pi-hole.
Yes x86 will win for convenience on about every metric (at least for now), but this SoC's CPU is much faster than a mere Intel N150 (especially for multicore use cases).
I've got i3 and i5 systems that do 15W or better at idle, and I don't have to worry about the absolute clusterfuck of ARM hardware (and those systems, bought used, can be had for less and will probably long outlive mystery-meat ARM SBCs).
One of my Arm systems idles at less than 1W and has a max TDP (10W) lower than your idle draw. I also have an N200 box, and a 16-core workstation with an obscene power draw - each platform has its pros and cons.
I noticed nuance is the first thing discarded in the recurring x86 vs Arm flame wars, with each side minimizing the strengths of the "opposing" platform. Pick the right tool for the job; there are use cases where the Orange Pi 6 is the right choice.
I was soured on ARM SBCs by the Orange Pi 5, which does not have an option to ignore its SD card during boot. Something trivial on basically every x86 platform I had been taking for granted.
The high end of the performance is impressive, and this has idle power similar to the processors in its performance range (an AMD Ryzen 7 4800H idles at 45W). This is certainly not meant for low-power computing.
I've got two RK3588 boards here doing Linux-y things around my place (Jellyfin, Forgejo builders, Immich, etc) and ... I don't think I've run into pain? They're running various debian and just ... work? I can't think of a single package that I couldn't get for ARM64.
Likewise my VPS @ Hetzner is running Aarch64. No drama. Only pain is how brutal the Rust cross-compile is from my x86 machine.
I mean, here's Geerling running a bunch of Steam games flawlessly on an aarch64 NVIDIA GB10 machine: https://www.youtube.com/watch?v=FjRKvKC4ntw
(Those things are expensive, but I just ordered one [the ASUS variant] for myself.)
Meanwhile Apple is pushing the ARM64 architecture hard, and Windows is apparently actually quite viable now?
Personally... it's totally irrational, but I have always had a grudge against x86 since it "won" in the early 90s and I had to switch from 68k. I want diversity in ISAs. RISC-V would be nice, but I'll settle for ARM for now.
We need an acronym for these types of boards: Yet Another SBC With Poor Longterm Support. YASBCWPLS. Really rolls off the tongue.
Or we should just have "STS" (Short Term Support) after the board names to let others know the board will be essentially obsolete (based on lack of software updates) in two months.
> We need an acronym for these types of boards: Yet Another SBC With Poor Longterm Support. YASBCWPLS.
Deadend is how I describe it.
STS - Shit Tier Support
Without mainline Linux support I have no interest in these more obscure SBCs. Mainline Linux is the bare minimum, put in some effort please manufacturers.
Note that this is also happening with Nvidia and the Jetson boards.
Buying one of these Pi knockoffs taught me one thing: software support is the key to Raspberry Pi's success.
Whenever I had a problem (which was more often than not), I would search for a solution and come across something that worked for the RPi that I could try to port across.
Double the hardware spec matters little if you can’t get the software to even compile
> Double the hardware spec matters little if you can’t get the software to even compile
You can get any software to compile on this SBC. On the Raspberry Pi platform you usually don't need to compile anything.
I'm not sure I'm going to grab another OrangePi board again. I was happy to grab the RV2 just to experiment with, but I didn't realize that the Linux kernel they provide to build their Ubuntu distro doesn't actually build properly. I got it to build after throwing a version of Ubuntu onto an unused PC, but then no matter which options I selected for the build (like GUI options), it seemed like the GUI just didn't exist at all in the final binary. I've yet to try to build a third-party OS with support, since I spent so much time just trying to get the official distro to work properly.
When something has a 30 TOPS NPU, what are the implications? Do NPUs like this have some common backend that ggml/llama.cpp targets? Is it proprietary and does it only work with some specific software? Does it have access to all the system RAM, and at what bandwidth?
I know the concept has been around for a while, but I have no idea if it actually means anything. I assume that people are targeting the ones in common devices like Apple's, but what about here?
Ignorant of this NPU, but in my experience, you're expected to use some cursed stack of proprietary tools/runtimes/SDKs/etc and no, it will not play nicely with anything you want it to unless you write the support yourself.
The specific NPU doesn't seem to be mentioned in TFA, but my guess is that the blessed way to deal with it is the Neon SDK: https://www.arm.com/technologies/neon
I've not found Neon to be fun or easy to use, and I frequently see devices ignoring the NPU and inferring on CPU because it's easier. Maybe you get lucky and someone has made a backend for something specific you want, but it's not common.
TFA does directly mention the NPU "Arm-China Zhouyi: 30 TOPS (Dedicated)"
"you cannot simply use standard versions of PyTorch or TensorFlow out of the box. You must use the NeuralONE AI SDK."
Neon is a SIMD instruction set for the CPU, not a separate accelerator. It doesn't need an SDK to use, it's supported by compiler intrinsics and assembly language in any modern ARM compiler.
Quite right, I mixed up Neon with NN:
https://www.arm.com/products/silicon-ip-cpu/ethos/arm-nn
NPUs like this tend to have one thing in common: being decorative without drivers and support 9 times out of 10.
Even if it worked though, they're usually heavily bandwidth bottlenecked and near useless for LLM inference. CPU wins every time.
A 30 TOPS NPU is the almost-useful minimum for a device, but as we've seen, even Microsoft couldn't come up with anything useful to do with it in the AI laptops. That push has all but disappeared; they are pushing cloud licensing over local AI now.
Can't speak to this specific NPU, but these kinds of accelerators are really made for more general ML tasks like machine vision. For example, while people have made the (6 TOPS) NPU in the (similar board) RK3588 work with llama.cpp, it isn't super useful because of the RAM constraints. I believe it has some sort of 32-bit memory addressing limit, so you can never give it more than 3 or 4 GB, for example. So for LLMs, not all that useful.
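Back-of-envelope on why that limit hurts, assuming the 32-bit claim is right (the model-size figure below is a rough estimate, not a measurement):

    # A 32-bit address window caps what the NPU can see at 4 GiB, before any
    # firmware/driver carve-outs that shrink it to the 3-4 GB mentioned above.
    window = 2 ** 32
    print(window / 2**30, "GiB addressable")                     # -> 4.0

    # A 7B model at ~4-bit quantization is very roughly params * bits / 8 bytes
    # of weights alone, leaving little or no room for KV cache in that window.
    weights = 7e9 * 4 / 8
    print(round(weights / 2**30, 2), "GiB of weights (rough)")   # -> ~3.26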
They need specific support; llama.cpp, for example, has backends for some of them. But that comes with limits on how much RAM they can allocate. When they do work, you see flat CPU usage while the NPU does everything for inference.
E-*ing-waste if you have to wait for the manufacturer to provide supported images.
Upstream the drivers to the mainline kernel or go bankrupt. Nobody should buy these.
This. Warning: do not buy any SBC without mainline kernel support, or one where you have to download an unverified image from a Google Drive link.
I think the sweet spot for ARM SBCs is smaller, less powerful, and cheaper boards for headless IoT edge cases. I use a couple of them that way when I need LAN connectivity, either by ethernet or wifi, and things wired to GPIO pins. I don't need a powerful CPU or lots of RAM for that. The SBC makers are caught up in a horsepower race and I just shrug; it's not for me.
This is my experience as well. I have a couple PINE64 devices, a Rock64 (Rockchip RK3328) and a RockPro64 (RK3399). And an N150 device.
Both ARM64 devices run headless, make use of GPIO, and have more than enough CPU. In fact, these are stable enough that I run BSDs on them and don't bother with Linux.
The Rock64 runs FreeBSD for SDR applications (e.g. ADS-B receiver). FreeBSD has stable USB support for RTL-SDR devices.
The RockPro64 runs NetBSD with ZFS on a PCIe SSD. NetBSD can handle ARM big.LITTLE well. I run several home lab workloads on this. Fun device.
I also have an N150 device running the latest Debian 13 as my main home lab server for home automation, Docker, MQTT broker, etc.
In short: SBCs are cheap enough that you can choose more than one, each for the right task, including IoT.
I'm setting up to run an APRS iGate. Is Rock64 a decent alternative to Pi with Linux?
Unfortunately, this board seems to be using the CIX CPU that has power management issues:
> 15W at idle, which is fairly high
For comparison, an N150 mini PC uses around 6 watts at idle.
I have a pre-Ryzen AMD thin client which draws MAXIMUM 10W. Idle 5W. A lot less CPU power but still. 15W for a modern SBC is a joke.
And here I was thinking the Pi 5 which idles at 3W was unreasonably high.
My Orange Pi RV2 sucks :( The available distros, drivers, kernel, and tools do work, but they’re crappy and poorly maintained. There’s no support and very little documentation, which is a real shame. From a hardware point of view, it’s a nice board, and when I properly compiled some software myself I actually got really interesting performance, but it was a pain in the ass. So I ended up buying a Raspberry Pi 4, which is much better supported and documented.
Their approach to software support does leave a lot to be desired.
For what it's worth, though, the v5 did have Talos support, so you could just throw that on there, connect it to a cluster, and have a decent ARM node that is fanless and has 32GB
https://docs.siderolabs.com/talos/v1.12/platform-specific-in...
I have some software that needs to be built for aarch64 (for an aarch64 box with a 4-core CPU); currently I'm using an Oracle Cloud 4-core/24GB Arm Neoverse N1 instance as a GitHub self-hosted runner to build it.
This machine seems more powerful than that, so it's definitely attractive to me as a physical aarch64 self-hosted runner.
I am newly interested in Compute Module style SBCs after I bought one to toy around with. I was surprised to learn that the PCBs that interface with them are open specs, and I can probably build myself more custom PCB solutions to match different form factors instead of being stuck with a bulky normal Raspberry Pi.
I was pleased to learn that Radxa and Orange Pi have similar, compatible boards.
I have wanted to see more RISC SBCs, so I may toy with these, but I'd rather wait for the software support to get much richer.
I am not a kernel developer, so I don't really have any idea what this means, but CIX appears to have patches in the Linux kernel[0], so I assume mainlining more stuff is in the works?
[0] https://lwn.net/ml/all/20250609031627.1605851-1-peter.chen@c...
That is correct.
I thought the /. effect was to hug a site to death, not cause the price of the product under review to skyrocket. I see $414 instead of $258 now!
This happens with algorithmic marketplaces.
Is the OrangePi 6 Plus really almost 4x as fast as an Intel N100?
There are faster Intel chips though.
I really wish the Raspberry Pi Foundation released a Pi with built-in NVMe instead of using a HAT. I think using flash memory is the true bottleneck on the system.
Heh, I'm the opposite. I wish the RPi had stayed the course of the cheapest "working" SBC and moved their high-end boards to a different brand. Raspberry Sigma, or 67, or whatever gets the younguns crazy these days.
After the pandemic, the "$25" SBC suddenly became $100+ with low availability. The main thing that made RPis worth it is gone now, and they're all chasing number-go-up on benchmarks.
The 500+ model has NVMe storage and comes with a 256GB drive.
Weird article: it compares with the Raspberry Pi 5 instead of the OrangePi 5 Plus, the predecessor.
Actually the Orange Pi 5 Ultra would be the most recent board from Orange Pi to compare it with. You can see a comparison between the Orange Pi 5 Ultra and the Raspberry Pi 5 here: https://boilingsteam.com/orange-pi-5-ultra-review/
In a nutshell, this new Orange Pi 6 Plus is much faster than Orange Pi 5 Ultra and anything that came before.
The Plus and Ultra are almost identical, with the exception of an HDMI-in port on the latter. I've used the same HAL on both boards; they are effectively the same.
> at the beginning, my OrangePi did not boot ... Turns out that the firmware required an update to be able to boot
No thanks.
Wow, this website just keeps crashing my Firefox, like 3 times in a row.
Which version of Firefox and OS are you on?
146.0 (Build #2016129543) on Android.
How to trigger: just follow the link and start scrolling down. Totally reproducible; just did it again.
Does U-Boot work on this?
Yet another board which will never have proper upstream support, because the SoC vendor refused to implement the Arm BSA standard, which would provide EFI/ACPI support instead of relying on undiscoverable devices exposed only through a device tree. ACPI isn't perfect, but it's way better than device trees, which are seldom updated, so the device will remain stuck on old kernels.
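To make the "undiscoverable" point concrete: on a device-tree system the kernel only knows what hardware exists because a board-specific blob tells it. A rough sketch of reading that binding on a running board (standard procfs path; it will simply fail with FileNotFoundError on an ACPI system, and the example output values are illustrative):

    # Rough sketch: print the board/SoC "compatible" strings from the device tree.
    # The property is a NUL-separated list the kernel matches drivers against;
    # if no blob matches your exact board, nothing works.
    def board_compatible(path="/proc/device-tree/compatible"):
        with open(path, "rb") as f:
            raw = f.read()
        return [s.decode() for s in raw.split(b"\x00") if s]

    print(board_compatible())   # e.g. ['xunlong,orangepi-5-plus', 'rockchip,rk3588'] - varies per board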
Devicetree continues to be a massive crutch for ARM SoC vendors.
How are we still in a world where there are breathless, hand-waving blog posts written about the theoretical potential of super-fast SBCs for which the manufacturer shows fuck all interest in competent OS support?
Yet again, OrangePi cranks out half-baked products, and tech enthusiasts, who quite understandably lack the deep knowledge to do more than follow others' instructions on how to compile stuff, talk about them as if their specifications actually matter.
Yet again the HN discourse will likely gather around stuff like "why not just use an N1x0" and side quests about how the Raspberry Pi Foundation has abandoned its principles / is just a cynical Broadcom psyop / is "lagging behind" in hardware.
This stuff can be done better and the geek world should be done excusing OrangePi producing hardware abandonware time after time. Stop buying this crap and maybe they will finally start focussing on doing more than shipping support for one or two old kernels and last year's OS while kicking vague commitments about future support just far enough down the road that they can release another board first.
Please stop falling for it :-/
ETA: I think what grinds my gears the most is that OrangePi, BananaPi etc., are largely free-riding off the Linux community while producing products that only "beat" the market-defining manufacturers (Raspberry Pi, BeagleBoard) because they treat software support as an uncosted externality.
This kind of "build it and they will use it" logic works well for microcontrollers, where a manufacturer can reasonably expect to produce a chip with a couple of tech demos, a spec sheet and a limited C SDK and people will find uses for it.
But for "near-desktop class" SBCs it is not much better than misrepresentation. Consequently these things are e-waste in a way that even the global desk drawer population of the Raspberry Pi does not reach.
And yet they are graded on a curve and never live up to their potential.
What is wrong with BananaPi? My understanding is it was used as the basis for the OpenWRT community flagship router.
I wouldn’t be surprised if it performs adequately in this context -- isn't the manufacturer a VoIP device maker?
The reality is that they spam the market with a large number of products with little consistency, poor (if labyrinthine) documentation, random Google Drive links for firmware, etc., and there are the same issues with hardware support.
I dunno, maybe the situation there is better than it was. But the broad picture is the same: better hardware but you are basically on your own.
Prices are considerably higher through the links than quoted in the article. This usually happens when someone posts about a great deal for surplus hardware on Ebay or a hidden gem on aliexpress. Just the thundering herd of traffic causes algorithmic pricing to spike the price.
I have the ITX board from Radxa. This CIX chip is a disappointment; you'll never see the 2.8GHz.
> you'll never see the 2.8GHz.
What do you mean?
Why bother with these obscure boards with spotty software support when you can get a better deal all around with an x86 mini PC with a N150 CPU?
Exactly! Just grab a mini PC such as an Optiplex; it will be so much better.
The half-baked hardware comments are humorous, because pretty much any piece of software is half-baked if we are lucky.