Friday, January 9, 2026

"Because they didn't know what to do with them."

I am Intel's biggest fan, but also one of its biggest critics. Last week I purchased a brand new 1st Generation Intel Galileo board, which was sold from 2013 to 2017; surplus stock can still be found on eBay from a few sellers ($18 shipped, a deal if you're into embedded devices). There is also a rarer Quark D2000 Dev Kit, which occasionally pops up for a decent price. The seller quickly mailed the board, and I received it today. I plugged it in and its lights came on, although I haven't tested an OS yet. I had made plans to install a Yocto-based image, either one someone already developed or a custom one from scratch.


While reading forums about the chip, I came across some comments stating that it didn't have an MMU. While I was aware that the D2000 didn't have an FPU (x87), I had assumed that the X1000 had at least an MMU. It also has an FPU. This morning I stumbled on some comments saying that there was no mention of one in the datasheets (which were fairly labyrinthine to find on Intel's website, now more or less requiring the search box, since direct links are often expired).

Some experienced developers confirmed that it actually does contain an MMU: a single page (p. 137) mentions the word "MMU," while another datasheet refers to a memory manager and paging, but never uses the full phrase.

 Some highlights of the Element14 forum comments:

  in reply to morgaine

This thread on the Intel Quark SoC is quite interesting, but there's one big piece of Quark that has been overlooked.  Having worked with Intel architectures for some time now, I'm familiar with the way they write their manuals.  And what isn't there is always more important than what is!

 

Unfortunately Intel confuses the discussion here quite a bit, because in the opening paragraph (page 37) of their full 920-page Quark datasheet they call the Quark SoC X1000 an "application processor".

 

and

Being a conventional microcontroller, the Quark has no MMU to support the flexible process separation and virtual memory of full operating systems like Linux. 

 

Actually, Quark does have an MMU, it's just not described in the datasheet.

 

The MMU is part of the CPU core, which is described in the Core Hardware Reference Manual and the Core Developer's Manual.  The Quark CPU core is a complete IA32 implementation, a little slow but with a few extras for the embedded micro market.  Check out page 20 of the hardware ref; it shows a block diagram of the core.  The three manuals together make up the Quark documentation.  You almost need a monitor for the datasheet, the hardware ref manual printed out, and a second monitor for the developer's manual to make sense of it all.

 

Morgaine Dinova wrote:

 

Many thanks, Walt.  That makes all the difference in the world.  I guess Intel technical authors just have a warped sense of humour in not even mentioning the existence of an MMU as a bullet point in the SoC datasheet.  Perhaps there just wasn't room, they had only 920 pages after all ...

You have to view that datasheet much as you would the chipset hardware datasheet that Intel would provide for something like the PCH, as that's really what it is.  There's a single page (121) with CPU core details. Intel has traditionally done things this way: one doc covering the hardware side, another for the CPU internals/software/programming side.

 

Walt did well finding those manuals, I was coming up blank searching for them on the Intel site.

 And from this EEVblog forum:

 Re: Intel Quark / Curie vs PIC32, ARM etc...

« Reply #6 on: October 23, 2016, 12:29:01 pm »
Seems Intel have been trying to get into embedded for decades and haven't really succeeded yet
You are forgetting Intel came up with the 8051 and before that the 8048. The latter was used in every PC and PC keyboard until USB came around! And let's not forget you can find x86/x88 microcontrollers in various electronics (I've seen them in hard drives, for example). All in all they must have sold billions of their microcontrollers. IMHO it is more accurate to say Intel lost the embedded market a couple of decades ago.

edit: As coppice noted: Intel seems to lack the patience to grow a new business. IIRC Intel acquired several companies in the past decades and either killed them or sold them off again because they didn't know what to do with them.
« Last Edit: October 23, 2016, 12:31:05 pm by nctnico »

Wednesday, December 24, 2025

Announcing Pokey Linux, A Yocto-based distribution of distributions for embedded systems




Pokey Linux is a platform for developing many single-application OSes using the fewest resources possible. The name is of course a reference to both the Poky build tool in Yocto and Pokey, the character from EarthBound, the 1995 SNES game. I have read about Yocto for years, and the name just struck me as a convenient and funny mnemonic for the tool. While I haven't developed any code for it yet, it aims to be a repository of binaries that one can search and download, while also exposing the package dependencies and the Yocto methods used to create each one (in other words, non-technical users can work their way from a GUI-based software manager like Synaptic or Cubic, see the steps taken to develop an image, or perhaps just the few steps needed to make a modification). I realized this might be a better approach to indecision, since agonizing over which single-application OS to develop first is beside the point. Someone might want a binary with just VLC, or Thunderbird, or Midori, and nothing else (and of course, board support and a GUI). It's also a way to learn to build without knowing in advance which OS I need to use. In a way, it's kind of like a sideshow OS (a secondary character, not the daily driver, or a side driver).

Friday, December 19, 2025

The Smartest Companies are in the Same Room, but are not Building New Products Together...for you

This is a brief follow-up to my previous post, "The State of Stateless Linux (And the Future of Solar Computing)," and it is about the technology industry as a whole.

Obviously, money drives product development: anticipated revenue streams from new products. But sometimes "newer" is just cheaper materials rather than a new feature, which, of course, isn't always a bad thing, since the savings can be passed on to the consumer.

In the Qualcomm case, the UNO Q isn't really innovative in terms of features. It might be a cash cow if Raspberry Pi wants to get out of the consumer division and focus only on corporate/industry customers. That, too, isn't always a bad thing: serving consumers where a former provider is unable or less willing. After all, Qualcomm has a large patent portfolio and wouldn't need to outsource everything, or anything. This is the same Qualcomm that wanted to make a bid to buy Intel out; however, Intel and the U.S. had "other plans."



Now, sometimes it might be a good idea to choose your battles wisely. For example, Qualcomm is known more for its mobile chipsets and wireless IP than for single-board computers, so amassing a war chest was probably prudent if it decided against that market 10 years ago. And of course there are also potential benefits to avoiding tariffs, since Qualcomm is an American company, whereas the UK, however unlikely it is to face tariffs to the extent of other countries, could see a surcharge on even a $35 Raspberry Pi.

Even so, product development often follows other successful products, and sometimes it is simpler to develop a product that does the same thing at the same energy consumption for a lower cost than one that uses significantly less power for the same price, because energy is still cheap, especially at the low end where devices consume just a couple of watts.

There are certainly countless instances where companies work together to build a product; an EUV machine isn't built by one company, for example, but uses a laser from Germany and materials from the U.S. and Asia.


There are other kinds of "cooperation," of course, such as "no-poach" agreements.
A lot of the time, success depends on a few players who have the time to bring a product to market before anyone else, and true competition in a free market is rarely as fair as it is encouraged to be.

But what happens when the successful ones become complacent? They rarely innovate proactively, unless they have some guilt or a conscience.

So instead, they, the smartest people and companies in the room, gather at conferences, exchange notes, and informally focus on non-competing products in markets that don't interact, or that interact as little as possible. It's far easier to control competition when, as Lincoln would, you keep your friends close and your enemies closer (inside the tent).

AI in 2025, despite all its hype, is already considered a safer investment than legacy 32-bit Linux products for the consumer market, because the reach is far greater. When you can sell 100,000 IP cameras with AI-enhanced motion detection, your most reliable and highest-paying customers are surveillance customers who need panopticon technology for small-business warehouses in a crime-ridden town.

In other words, semiconductor companies aren't going to profit as much from a smaller market of 8-9 billion humans when they can sell a trillion IP cameras to a few thousand companies. So that's all Big Tech is doing these days: meeting at conferences like a plenary session for planetary domination. It's not enough to just follow mainstream Linux anymore. Ordinary individuals have been left behind.

Open source/free software hobbyists who want anything more than retro technology products are going to need to consider more leading-edge solutions. And forking, if necessary.







Tuesday, December 16, 2025

The State of Stateless Linux OSes, and the future of Solar Computing

The title is more of a gag in the tradition of parodied State of the Union addresses, like the State of the Onion by Perl developer Larry Wall: https://www.perl.com/pub/2006/09/21/onion.html/

Stateless applications, or even machines, may involve some amount of storage, even if it is not local. The purpose of stateless systems may have diverged as options and capabilities increased, but their adoption in embedded systems can still have a lot of utility.

https://www.redhat.com/en/topics/cloud-native-apps/stateful-vs-stateless

A stateless application may comprise only one part of a lightweight system, whereas the rest of the kernel and OS might have stateful applications.

What about Stateless OSes?

Depending on the need, such as a LiveCD/USB, a stateless OS isn't going to save information to ROM, but it can still serve a useful purpose, as on Puppy Linux, which boots into RAM and offers a persistent storage option. One theory is that developing a new, lightweight system might be easier if you select off-the-shelf stateless applications/modules and interfaces, then integrate them into a single OS that limits where storage must take place.

Rather than starting from scratch, like Gentoo (which itself isn't technically from scratch), building an OS from predefined, pretested, benchmarked applications can produce a list of memory requirements, and then the applications can be loaded as separate, single-application OSes. This might allow bypassing higher memory needs, at the expense of a "truly" userspace OS.

Stateful Userspace, just not all at once

"The sum of the parts is less than the whole" can alternately be written as "The whole is greater than the sum of the parts." Unless multitasking and time is of essence. The tradeoff of serial applications (RISC analogy and CISC comparisons are somewhat congruent) is at the expense of time and processing needs. 

Deliberately limiting IPC bandwidth/memory cache and clock rate is only to meet energy constraints, not to artificially limit processing for useless reasons. Not "use less," although that sounds like a joke. Get it? While there are certainly efficiency cases where a higher clock rate improves throughput, there are many edge/niche cases where that may not apply.
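As a rough back-of-the-envelope illustration of why the clock and voltage ceilings matter, here is the standard first-order CMOS dynamic power model; the 1.2V/400MHz baseline and the 0.7V/60MHz target below are purely illustrative assumptions, not measured figures for any particular chip:

\[ P_{dyn} \approx \alpha \, C \, V^{2} f \]

\[ \frac{P(0.7\,\mathrm{V},\,60\,\mathrm{MHz})}{P(1.2\,\mathrm{V},\,400\,\mathrm{MHz})} = \left(\frac{0.7}{1.2}\right)^{2} \cdot \frac{60}{400} \approx 0.34 \times 0.15 \approx 0.05 \]

In other words, dropping the core voltage into the 0.6-0.8V range and capping the clock around 60MHz cuts switching power to roughly 5% of that baseline, which is the kind of budget a pocket-sized solar panel can plausibly supply.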

The purpose of the solar femtoTX motherboard and Solar Kernel is to explore those edge cases.

Another Four Core A53... in 2025?

Just two months ago, Qualcomm, one of the wealthiest chipmakers in the world, just behind Apple, Nvidia, and Intel (historically), with a market capitalization of $188 billion, decided to release yet another quad-core single-board computer, to win perhaps a 10-15% share of the single-board computer market.


My guess is that someone in the product development meeting at Qualcomm had this idea:

Developer Jerry: "Hey, let's take on Raspberry Pi!"

Manager Tom: "That's a great idea! We've got the cash! SEO assistant, let's get on the first page of Google Search results."

Developer Berry (SEO Whiz): "Sure thing! On it."

Developer Jerry: "We'll have the Raspberry Pi cornered in time for our 2nd quarterly results!"

What I think is needed

More 16MB-128MB SoCs with display interfaces & GUIs, and boring bootloaders: Tow-Boot, Coreboot (for 386 and 486, etc.), or kexecboot. Bootloaders that are standardized and don't require highly proprietary or convoluted boot processes across boards, especially boards using the same architecture.


Memory-in-Pixel display controllers (present on the Ambiq Micro Apollo 510): https://contentportal.ambiq.com/documents/20123/387733/Apollo-SoC-Selector-Guide.pdf


SAM9X60 https://ww1.microchip.com/downloads/aemDocuments/documents/MPU32/ProductDocuments/DataSheets/SAM9X60-SIP-Data-Sheet-DS60001580.pdf



Why 16-128MB? Because the era of Solar is upon us.


Today you can solar power 4MB without much of a sweat. Five years ago you could solar power around 384KB of RAM. The Apollo 3 was released in 2020, the Apollo 510 in 2025. I'm referring to portable solar panels that can fit inside a pocket, or maybe a briefcase, not a foldout panel as large as a newspaper. The purpose of portable solar mobile devices is just that, as most commuters aren't setting up a camping spot in the middle of rush hour on 5th Avenue.

That's only when paired with a lightweight processor no larger than a Pentium, at 32nm or less, and at 60MHz or less. That was in 2011, but Intel never released it, and that didn't include RAM. The Quark was released, and Intel even partnered with Arduino to create the Galileo board. But it had very little RAM, and it was a microcontroller for all intents and purposes (Windows IoT, not 10, or 98). Has anyone soldered 4MB of low-power MRAM to a standalone chip like the D2000, with 0.025W (25mW) power consumption, and installed Windows 3.1? Maybe. But it was never sold separately, like a loose diamond (because it's a diamond, silly!). Intel knows that, but just won't admit it. It was available to Intel partners for development only, and today Intel Foundry advertises its services, but the Quark is not on the menu (I've tried to reach out to them multiple times but never got a response).
 
(Edit 1/7/26: A correction regarding the X1000: the lowest-power Lakemont Quarks were actually the "D" series, Silver Butte and the Mint Valley D2000, along with the Atlas Peak SE C1000 (Curie), as the Clanton-based X1000 used 2.5W, which is still relatively low, but not as low as the D-series, which resembled the Claremont. The D2000s were sold (and still are, by third parties from the remaining stock), though the D1000 was not. Yet so much of the emphasis on the Edison and Galileo boards was on underselling them as "microcontrollers" rather than as computers that could once display much more graphically rich user interfaces. Almost in the sense of "Use this 586 to blink LEDs!" instead of "You can run Netscape Navigator!" Or maybe that was their challenge?)

By comparison, a Cortex-M4 uses around the same number of transistors as, or slightly more than, an ARM1 processor (25,000 transistors).


The 80386 had 275,000. The 486 had 1.2 million. The Pentium, 3.3 million. When RAM is 90% of your SBC's energy consumption, the motivation to create a low-RAM board (with ultra-low voltage and power, 0.6V-0.8V) increases.

Because then you don't need a USB port in your bill of materials to recharge/power it (unless you want to).

Millions of computer users worldwide could type on a solar-powered laptop, with a solar-powered keyboard by ONiO and a board that uses 10mW of power. Set the ceiling, and the applications will follow.

A microcontroller such as an Arduino, or a single-board computer, requires access to either another PC/laptop or a power supply; one can plug a microcontroller into a USB or serial TTL interface. Boards should be standalone and require just a lightweight (low-power) monitor and keyboard to run.

Remember when these could run on their own? (Some had a backup battery, but still)




Low-power memory makers typically sell just a few MB at most: https://www.sure-core.com/memory-products/ (SRAM), https://www.weebit-nano.com/ (ReRAM). I am not really sure which memory suppliers are developing for the high end (many MB), but I imagine if it's really leading edge, their partners aren't publicly advertising it, especially if they are using it internally to research some further competitive advantage.

When I was a kid, my uncle took me strawberry picking. The farm charged by the basket. Whatever you could fit in the basket was yours. Software development should follow that principle. 






Monday, September 1, 2025

The future of 32-bit linux support

An interesting article was posted on LWN.net on Linux support for 32-bit systems: https://lwn.net/SubscriberLink/1035727/4837b0d3dccf1cbb/

"Arnd Bergmann started his Open Source Summit Europe 2025 talk with a clear statement of position: 32-bit systems are obsolete when it comes to use in any sort of new products."

This statement doesn't really make a lot of sense for embedded developers who plan to use no more than 64-128MB, on systems that use only a tiny fraction of the Linux kernel and drivers.

"Currently the kernel is able to support 32-bit systems with up to 16GB of installed memory. Such systems are exceedingly rare, though, and support for them will be going away soon. There are a few 4GB systems out there, including some Chromebooks. Systems with 2GB are a bit more common. Even these systems, he said, are "a bit silly" since the memory costs more than the CPU does. There are some use cases for such systems, though. Most 32-bit systems now have less than 1GB of installed memory. The kernel, soon, will not try to support systems with more than 4GB." 

This is another odd statement, and it doesn't seem to really matter. Memory once cost much more (before the 2000s), so if it costs more than the CPU, then it's not really a big deal if the premium is on something else, such as legacy support or power consumption.

Edited on 12/27/2025:

From: https://infosec.exchange/@JessTheUnstill/115786136251963231

"We'll just fork it" is privileged mindset. It means you think you can gather enough clout and like minded people to put up a bunch of unplanned work and time and passion to make a whole new project despite the old one still working "well enough".

The only problem with this understandable criticism is that 64-bit is in some ways considered a fork too, and the people on 32-bit aren't exactly the forkers.

When the increase in energy consumption isn't statistically significant, it makes sense to use 64-bit addressing and architecture. But 64-bit on a 1GB RAM machine might use 100MB more for the same instructions (possibly executed more efficiently). I have an Atom N450 netbook from 2011, and I've run both 32-bit and 64-bit Windows 7 and 10; the idle RAM use was lower on 32-bit.
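A minimal sketch of where that overhead comes from, using a hypothetical pointer-heavy structure as the example (illustrative only; how much a real workload grows depends on how pointer-dense its data is):

```c
#include <stdio.h>
#include <stdint.h>

/* A typical pointer-heavy node: two pointers plus a small payload. */
struct node {
    struct node *next;
    struct node *prev;
    int32_t      value;
};

int main(void)
{
    /* On a 32-bit (ILP32) build pointers are 4 bytes, so this struct is
     * typically 12 bytes; on a 64-bit (LP64) build they are 8 bytes and,
     * with alignment padding, the struct grows to 24 bytes. */
    printf("sizeof(void *)      = %zu bytes\n", sizeof(void *));
    printf("sizeof(struct node) = %zu bytes\n", sizeof(struct node));

    /* A million such nodes roughly doubles in footprint on 64-bit,
     * before counting larger stack frames and code size. */
    printf("1,000,000 nodes     = %zu bytes\n",
           sizeof(struct node) * (size_t)1000000);
    return 0;
}
```

(The Linux x32 ABI was one attempt to keep 32-bit pointers on 64-bit hardware for exactly this reason, though it never saw wide adoption.)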

The systems I'd like to develop for may have just 16-64MB. What happens when one needs to design a microcontroller or application processor for 32MB, and a 64-bit ISA now requires 40MB? If you're Apple and have $100 billion in liquid cash, you can afford a 2nm chip, and that L2 cache doesn't cost much more power. But if you're a small startup, 32MB of 22nm RAM might use barely less than 40mW, and 40MB of 22nm RAM might use 50mW. That's going to result in fewer hours of battery life. Not quite a butterfly effect in terms of dramatic appearance from afar, but up close it makes a lot more of a difference on IoT devices (and consumer electronics like laptops).

TL;DR: We still need 32-bit to save ~10mW on some new chip designs' TDPs for at least the next three decades (unless you're Apple or Intel and offer free/low-cost access to your leading-edge PDK).

The same can be said of 8-bit and 16-bit microcontrollers, but at some point developers realized that a 16MB address space was too limiting, and 32-bit is more future-proof. There will likely be new 32-bit microcontrollers in the year 2100, because it makes no sense to use an 8GB chip for a simple sensor or appliance. I realize you might be thinking that any startup will be able to afford a 2nm chip in 2100, and yes, that might be true if other architectures such as quantum or photonics aren't adopted; but consider that carburetors are over 150 years old and still have a use at a small scale.

That might not be the best example, but the fact is that wider pointers use more space, and in hard real-time (RT) environments, 8-bit and 16-bit microcontrollers with fewer instructions can get their operations completed faster, which could benefit "time to complete" windows for scheduling tasks, since there would be less latency loading new instruction caches.

 

Sources: https://qr.ae/pC9Yse
https://www.stata.com/support/faqs/windows/32-bit-versus-64-bit-performance/

 "The question is “Is a 64-bit operating system twice as fast as a 32-bit OS?

No. It’s almost always slower if either OS can satisfy the same requirements on the same hardware - largely because storing memory references (pointers) will require twice as much memory. (Not all the work the CPU does involves storing and retrieving such memory addresses but a lot does.)

But a 64-bit operating system (on hardware with more than 4GB of RAM) solves so many more problems than a 32-bit OS can - and badly written active web pages running too much javascript in a browser is a big and perennial problem."

 Edit 1/3/2026:

Linux developers who say everyone should upgrade to 64-bit (for every new build) are like Ford or GM saying they will stop manufacturing cars, SUVs, vans, and pickup trucks because only an 18-wheeler can fit everyone or everything, except wind turbines.


Thursday, August 28, 2025

In a Voyage

 If I spell my name backwards, it's "innavoig." Phonetically, it's closest to "in a voyage." Hence the naming of this blog.

"Because they didn't know what to do with them."

I am Intel's biggest fan, but also one of its biggest critics. Last week I purchased a brand new 1st Generation Intel Galileo board, whi...