Tag Archives: programming

Linux Kernel Programming – my second book

I’ve recently completed a project – the writing of the Linux Kernel Programming book, published by Packt (it was announced by the publisher on 01 March 2021). This project took just over two years…

All those long days and nights poring over the writing and the code have, I now feel, been well worth it; I believe the book will be a useful contribution to the Linux programming community.

A key point: I’ve ensured that all the material and code examples are based on the 5.4 LTS Linux kernel; it’s slated to be maintained right through Dec 2025, thus keeping the book’s content very relevant for a long while!

Due to its sheer size and depth, the publisher suggested we split the original tome into two books. That’s what has happened: 

  • the first part, Linux Kernel Programming, covers the essentials and, in my opinion, should be read first (of course, if you’re already very familiar with the topics it covers, feel free to begin with either one)
  • the second part, Linux Kernel Programming Part 2, covers a subset of device driver topics, focusing on the basics and the character ‘misc’ class device driver framework.

There are many cross-references, especially from the second book to topics in the first; hence the suggestion to read them in order.

Here’s a quick run-down of what’s covered in each book.


Let’s begin with the Linux Kernel Programming book; firstly, it’s targeted at people who are quite new to the world of Linux kernel development and makes no assumptions regarding knowledge of the kernel. The prerequisite is a working knowledge of programming on Linux with ‘C’; it’s the medium we use throughout (along with a few bash scripts). The book is divided into three major sections, each containing appropriate chapters:

  • Section 1 covers the basics: firstly, the appropriate setup of the kernel development workspace on your system; next, two chapters cover the building of the Linux kernel from scratch, from code. (It includes the cross compile as well, using the popular Raspberry Pi board as a ‘live’ example).
    • The following two chapters delve in-depth into the kernel’s powerful Loadable Kernel Module (LKM) framework – how to program it, along with its more advanced features. I also take great pains throughout to point out how one should code with security in mind!
  • In Section 2 we deal with having you, the reader, gain a deeper understanding (to the practical extent required) of key kernel internals topics. A big reason why many struggle with kernel development is a lack of understanding of its internals.
    • Here, Chapter 6 covers the kernel architecture, focusing on how the kernel maintains attribute information on processes/threads and their associated stacks.
    • The next chapter – a really key one, again – delves into a difficult topic for many – memory management internals. I try to keep the coverage focused on what matters to a kernel and/or driver developer.
    • The following two chapters dive into the many and varied ways to allocate and deallocate memory when working within the kernel – an area where you can make a big difference performance-wise by knowing which kernel APIs and methods to use when.
    • The remaining two chapters here round off kernel internals with a discussion of the kernel-level CPU scheduler; several concepts and practical code examples help the reader learn what’s required.
  • Section 3 is where the book dives into what folks new to it consider to be difficult and arcane matters – how and why synchronization matters, how data races occur, and how you can protect critical sections in your kernel / driver code!
    • The amount of material here requires two chapters to do justice to: the first of them focuses on critical sections, concurrency concerns, the understanding and the practical usage of the mutex and the spinlock.
    • The book’s last chapter continues this discussion on kernel synchronization covering more areas relevant to the modern kernel and/or driver developer – atomic (and refcount) operators, cache effects, a primer on ‘lock-free’ programming techniques, with one of them – the percpu one – covered in some detail. Lock debugging within the kernel – using the powerful lockdep validator – as well as other techniques is covered as well!

The second book – Linux Kernel Programming Part 2 – Char Device Drivers and Kernel Synchronization deliberately covers just a small section of ‘how to write a device driver on Linux’. It does not purport to cover the many types and aspects of device driver development, instead focusing on the basics of teaching the reader how to write a simple yet complete character device driver belonging to the ‘misc’ class.

Great news! This book – Linux Kernel Programming Part 2 – Char Device Drivers and Kernel Synchronization – is downloadable for FREE. Enjoy!

Access it now!

Having said that, the material covering user-kernel communication pathways, working with peripheral I/O memory, and, especially, dealing with hardware interrupts, is very detailed and will prove useful in pretty much all kinds of Linux device driver projects.

A quick chapter-wise run-down of the second book:

  • In Chapter 1, we cover the essentials – the reader learns the basics of the Linux Device Model (LDM) and ends up writing a small, simple, yet complete ‘misc’ class character driver. Security awareness is built in too: we demonstrate a simple “privesc” – privilege escalation – attack
  • Chapter 2 shows the reader something every driver author will at one time or the other have to do: efficiently communicate between user and kernel address spaces. You’ll learn to use various technologies to do so – via procfs, sysfs, debugfs (especially useful to insert debug hooks as well), netlink sockets and the ioctl system call
  • The next chapter has the reader understand the nuances of reading and writing peripheral (hardware) I/O memory, via both the memory-mapped I/O (MMIO) as well as the Port I/O (PIO) technique
  • Chapter 4 covers dealing with hardware interrupts in-depth; the reader will learn how the kernel works with hardware interrupts, then move on to how one is expected to allocate an IRQ line (covering modern resource-managed APIs), and how to correctly implement the interrupt handler routine. The modern approach of using threaded handlers (and the why of it) is then covered. The reasons for, and the use of, both “top half” and “bottom half” interrupt mechanisms (hardirq, tasklet, and softirq) in code, as well as key dos and don’ts of hardware interrupt handling, are covered. Measuring interrupt latencies with the modern [e]BPF toolset, as well as with Ftrace, concludes this key chapter
  • Common kernel mechanisms – setting up delays, kernel timers, working with kernel threads and kernel workqueues – are the subject matter of Chapter 5. Several example kernel modules, including three versions of a ‘simple encrypt decrypt’ (‘sed’) example driver, serve to illustrate the concepts learned in code
  • The final two chapters of this book deal with the really important topic of kernel synchronization (the same material in fact as the last two chapters of the first book). 

I think you’ll find that both books have a fairly large number of high quality, relevant code examples, all of which are based on the 5.4 LTS kernel.

[ LKP : code on GitHub ] [ LKP Part 2 : code on GitHub ]

Thanks for taking the time to read this post; more, I really hope you will read and enjoy these books!

Get Linux Kernel Programming, Kaiwan N Billimoria, Packt, Mar 2021 :

[ On Amazon (US)  ]    [ On Amazon (India) ]    [ On Packt ]

Pthreads Dev – Common Programming Mistakes to Avoid

Disclaimer: Okay, let me say this straight away: most of the points below are from various books and online sources. One that stands out in my mind – an excellent tome, though quite old now – is “Multithreaded Programming with Pthreads” by Bil Lewis and Daniel J. Berg.

Common Programming Errors one (often?) makes when programming MT apps with Pthreads

  • Failure to check return values for errors
  • Using errno without actually checking that an error has occurred
WRONG:

    syscall_foo();
    if (errno) { ... }

Correct:

    if (syscall_foo() < 0) {
        if (errno) { ... }
    }

(Also note that the Pthread APIs do not necessarily set errno; most return the error code directly as their return value)

  • Not joining on joinable threads
  • A critical one: Failure to verify that library calls are MT Safe
    Use the foo_r API version if it exists, over the foo.
    Use TLS, TSD, etc.
  • Falling off the bottom of main()
    you must call pthread_exit(NULL); yes, in main() as well!
  • Forgetting to define the _POSIX_C_SOURCE feature-test macro
  • Depending upon Scheduling Order
    Write your programs to depend upon synchronization. Don’t do :

          sleep(5); /* Enough time for manager to start */

Instead, wait until the event in question actually occurs; synchronize on it, perhaps using CVs (condition variables)

  • Not Recognizing Shared Data
    Especially true when manipulating complex data structures – for example, a list in which each element (node), as well as the list itself, has a separate mutex for protection. To search the list, you would then have to acquire, and release, each lock as the thread moves down the list. It would work, but would be very expensive. Perhaps reconsider the design?
  • Assuming bit, byte or word stores are atomic
    Maybe, maybe not. Don’t assume – protect shared data
  • Not blocking signals when using sigwait(3)
    Any signals you’re waiting upon with sigwait(3) should never be delivered asynchronously to another thread; block ’em, then wait on ’em
  • Passing pointers to data on the stack to another thread

(Case a) Simple – an integer value:

Correct (below):

main()
{
 ...
 // thread creation loop
 for (i = 0; i < NUM_THREADS; i++) {
    pthread_create(&thrd[i], &attr, worker, (void *)(long)i);
 }
 ...
 // join loop...

 pthread_exit(NULL);
}

The integer is passed to each thread by value (cast into the void * parameter); no issues.

Wrong approach (below):

main()
{
 ...
 // thread creation loop
 for (i = 0; i < NUM_THREADS; i++) {
     pthread_create(&thrd[i], &attr, worker, &i);
 }
  ...
  // join loop...

  pthread_exit(NULL);
}

Passing the integer by address implies that some thread B could be reading it while thread main is writing to it! A race, a bug.

(Case b) More complex – a data structure:

Correct (below):

main()
{
my_struct *pstr[NUM_THREADS];
...
// thread creation loop
for (i = 0; i < NUM_THREADS; i++) {
   pstr[i] = malloc(sizeof(my_struct));
   pstr[i]->data = <whatever>;
   pstr[i]->... = ...; // and so on...
   pthread_create(&thrd[i], &attr, worker, pstr[i]);
}
...
// in the join loop: after joining each thread...
  free(pstr[i]);

pthread_exit(NULL);
}

The per-thread malloc ensures each thread receives its own private copy of the structure, which is freed once that thread has been joined. Thread safe.

Wrong! (below)

my_struct *pstr;   // one global instance only!

main()
{
pstr = malloc(...);
...
for (i = 0; i < NUM_THREADS; i++) {
   pstr->data = <whatever>;
   pstr->... = ...; // and so on...
   pthread_create(&thrd[i], &attr, worker, pstr);
}
// join

free(pstr);
pthread_exit(NULL);
}

If you do this (the wrong one, above), then the global pointer (one instance of the data structure only) is being passed around without protection – threads will step on “each other’s toes” corrupting the data and the app. Thread Unsafe.

  • Avoid the above problems: use TLS (Thread-Local Storage) – a simple and elegant approach to making your code thread-safe.

Resource: GCC page on TLS.

 

 

Application Binary Interface (ABI) Docs and Their Meaning

Have you, the programmer, ever really thought about how it all actually works? I’m sure you have…

We write

printf("Hello, world! value = %d\n", 41+1);

and it works. But it’s ‘C’ code – the microprocessor cannot possibly understand it; all it “understands” is a stream of binary digits – machine language. So, who or what transforms source code into this machine language?

The compiler of course! How? It just does (cheeky). So who wrote the compiler? How?
Ah. Compiler authors figure out how by reading a document provided by the microprocessor (cpu) folks – the ABI – Application Binary Interface.

People often ask “But what exactly is an ABI?”. I like the answer provided here by JesperE:

"... If you know assembly and how things work at the OS-level, you are conforming to a certain ABI. The ABI govern things like
how parameters are passed, where return values are placed. For many platforms there is only one ABI to choose from, and in those
cases the ABI is just "how things work".

However, the ABI also govern things like how classes/objects are laid out in C++. This is necessary if you want to be able to pass
object references across module boundaries or if you want to mix code compiled with different compilers. ..."

Another way to state it:
The ABI describes the underlying nuts and bolts of the mechanisms  that systems software such as the compiler, linker, loader – IOW, the toolchain – needs to be aware of: data representation, function calling and return conventions, register usage conventions, stack construction, stack frame layout, argument passing – formal linkage, encoding of object files (eg. ELF), etc.

Having a minimal understanding of :

  • a CPU’s ABI – which includes stuff like
    • its procedure calling convention
    • stack frame layout
    • ISA (Instruction Set Architecture)
    • registers and their internal usage, and,
  • bare minimal assembly language for that CPU,

helps to no end when debugging a complex situation at the level of the “metal”.

With this in mind, here are a few links to various CPU ABI documents, and other related tutorials:

However, especially for folks new to it, reading the ABI docs can be quite a daunting task! Below, I hope to provide some simplifications which help one gain the essentials without getting completely lost in details (that probably do not matter).

Often, when debugging, one finds that the issue lies with how exactly a function is being called – we need to examine the function parameters, locals, return value. This can even be done when all we have is a binary dump – like the well known core file (see man 5 core for details).

Intel x86 – the IA-32

On the IA-32, the stack is used for function calling, parameter passing, locals.

Stack Frame Layout on IA-32

[ ...                      <-- bottom of frame; higher addresses
  PARAMS
  ... ]
RET addr
[SFP]                      <-- SFP = saved frame pointer [EBP]; points to the previous stack frame [optional]
[ ...
  LOCALS
  ... ]                    <-- ESP: top of stack; in effect, the lowest stack address


Intel 64-bit – the x86_64

On this processor family, the situation is far more optimized. Registers are used to pass along the first six arguments to a function; the seventh onwards is passed on the stack. The stack layout is very similar to that on IA-32.

Register Set

[Figure: the x86_64 register set]

<Original image: from Intel manuals>

Actually, the above register-set image applies to all x86 processors – it’s an overlay model:

  • the 32-bit registers are literally the lower half of the 64-bit ones; the prefix changes from R to E (e.g. RAX → EAX)
  • the 16-bit registers are the lower half of the 32-bit ones; the E prefix is simply dropped (EAX → AX)
  • the 8-bit registers are the high and low bytes of the 16-bit ones, named with an H or L suffix (AX → AH, AL).

The first six (integer or pointer) arguments are passed in the following registers, in this order:

RDI, RSI, RDX, RCX, R8, R9

(By the way, looking up the registers is easy from within GDB: just use its info registers command).

An example from this excellent blog “Stack frame layout on x86-64” will help illustrate:

On the x86_64, call a function that receives 8 parameters – ‘a, b, c, d, e, f, g, h’. The situation looks like this now:

[Figure: x86_64 stack frame when calling a function with 8 parameters]

What is this “red zone” thing above? From the ABI doc:

The 128-byte area beyond the location pointed to by %rsp is considered to be reserved and shall not be modified by signal or interrupt handlers. Therefore, functions may use this area for temporary data that is not needed across function calls. In particular, leaf functions may use this area for their entire stack frame, rather than adjusting the stack pointer in the prologue and epilogue. This area is known as the red zone.

Basically it’s an optimization for the compiler folks: when a ‘leaf’ function is called (one that does not invoke any other functions), the compiler will generate code to use the 128 byte area as ‘scratch’ for the locals. This way we save two machine instructions to lower and raise the stack on function prologue (entry) and epilogue (return).

ARM-32 (Aarch32)

<Credits: some pics shown below are from here : ‘ARM University Program’, YouTube. Please see it for details>.

The Aarch32 processor family has seven modes of operation: of these, six of them are privileged and only one – ‘User’ – is the non-privileged mode, in which user application processes run.

[Figure: the seven Aarch32 processor modes]

When a process or thread makes a system call, the compiler has the code issue the SWI machine instruction which puts the CPU into Supervisor (SVC) mode.

The Aarch32 Register Set:

[Figure: the Aarch32 register set]

Register usage conventions are mentioned below.

Function Calling on the ARM-32

The Aarch32 ABI reveals that its registers are used as follows:

Register   APCS name   Purpose
R0–R3      a1–a4       Argument registers: used to pass values into a function
                       (they need not be preserved); results are usually
                       returned in R0
R4–R9      v1–v6       Variable registers: used internally by functions; must
                       be preserved if used. Essentially, r4 to r9 hold local
                       variables as register variables.
                       (Also, in the case of the SWI machine instruction
                       (syscall), r7 holds the syscall number.)
R10        sl          Stack limit / stack chunk handle
R11        fp          Frame pointer: contains zero, or points to a stack
                       backtrace structure
R12        ip          Procedure entry temporary workspace
R13        sp          Stack pointer: fully descending stack, points to the
                       lowest free word
R14        lr          Link register: return address at function exit
R15        pc          Program counter

(APCS = ARM Procedure Calling Standard)

When a function is called on the ARM-32 family, the compiler generates assembly code such that the first four integer or pointer arguments are placed in the registers r0, r1, r2 and r3. If the function is to receive more than four parameters, the fifth one onwards goes onto the stack. If enabled, the frame pointer (very useful for accurate stack unwinding/backtracing) is in r11. The last three registers are always used for special purposes:

  • r13: stack pointer register
  • r14: link register; in effect, return (text/code) address
  • r15: the program counter (the PC)

The PSR – Processor State Register – holds the system ‘state’; it is constructed like this:

[Figure: layout of the CPSR register]

[Update: 24 Sept 2021]

ARM 64-bit

Ref: ARM Cortex-A Series Programmer’s Guide for ARMv8-A

Execution on the ARMv8 is at one of four Exception Levels (ELn; n=0,1,2,3). It determines the privilege level (just as x86 has 4 rings, and the ARM has seven modes). […] Exception Levels provide a logical separation of software execution privilege that applies across all operating states of the ARMv8 architecture. […] The following is a typical example of what software runs at each Exception level:

  • EL0 – normal user applications
  • EL1 – the operating system kernel; typically described as privileged
  • EL2 – hypervisor
  • EL3 – low-level firmware, including the Secure Monitor

Figure 3.1. Exception levels

ARMv8 Registers and their Usage (ABI)

[Figure: ARMv8 general-purpose registers and their usage]

In addition, the ‘special’ registers:

[Figure: the ARMv8 ‘special’ registers]

ARM-64 / A64 / Aarch64 ABI calling conventions

(The following is directly excerpted from the Wikipedia page here: https://en.wikipedia.org/wiki/Calling_convention#ARM_(A64)).

The 64-bit ARM (AArch64) calling convention allocates the 31 general-purpose registers as:

  • x31 (SP): Stack pointer or a zero register, depending on context.
  • x30 (LR): Procedure link register, used to return from subroutines.
  • x29 (FP): Frame pointer.
  • x19 to x29: Callee-saved.
  • x18 (PR): Platform register. Used for some operating-system-specific special purpose, or an additional caller-saved register.
  • x16 (IP0) and x17 (IP1): Intra-Procedure-call scratch registers.
  • x9 to x15: Local variables, caller saved.
  • x8 (XR): Indirect return value address.
  • x0 to x7: Argument values passed to and results returned from a subroutine.

All registers starting with x have a corresponding 32-bit register prefixed with w. Thus, a 32-bit x0 is called w0.

Similarly, the 32 floating-point registers are allocated as:

  • v0 to v7: Argument values passed to and results returned from a subroutine.
  • v8 to v15: callee-saved, but only the bottom 64 bits need to be preserved.
  • v16 to v31: Local variables, caller saved.

Hope this helps!

Exploring Linux procfs via shell scripts

Very often, while working on a Linux project, we’d like information about the system we’re working on: both at a global scope and a local (process) scope.

Haven’t we all wondered: is there a quick way to query which kernel version I’m running, which interrupts are enabled and being hit, what my processor(s) are, details about kernel subsystems, memory usage; file, network and IPC usage, and so on? Linux’s proc filesystem makes this easy.

So what exactly is the proc filesystem all about?

Essentially, some quick salient points about the proc filesystem:

  • it’s a RAM-based filesystem (think ramdisk; yup, it’s volatile)
  • it’s a kernel feature, not userspace – proc is a filesystem supported by the Linux kernel VFS
  • it serves two primary purposes
    • proc serves as a “view” deep into the kernel internals; we can see details about hardware and software subsystems that userspace otherwise would have no access to (no syscalls)
    • certain “files” under proc, typically anchored under /proc/sys, can be written into: these basically are the “tuning knobs” of the Linux kernel. Sysads, developers, apps, etc exploit this feature
  • proc is mounted on start-up under /proc
  • a quick peek under /proc will show you several “files” and “folders”. These are pseudo-entries in the sense that they exist only in RAM while power is applied. The “folders” whose names are numbers are in fact the PIDs of the processes alive at the instant you typed ‘ls’ – a snapshot of the system at that moment in time
  • in fact, the name “proc” suggests “process”

At this point, if you’re not really familiar with this stuff, I’d urge you to poke around /proc on your Linux box, cat-ting things as you go. (Also, lest I forget, it’s better to run as root (sudo /bin/bash) so that you don’t get annoying ‘permission denied’ messages. Of course, be careful when you run as root!)

For example, to get one started off:

Continue reading Exploring Linux procfs via shell scripts

SIGRTMIN or SIGRTMAX – who’s higher up (in signal delivery priority order)?

Linux supports Realtime (RT) signals (part of the Posix IEEE 1003.1b [POSIX.1b Realtime] draft standard). The only other GPOS (General Purpose OS) that does so is Solaris (TODO- verify. Right? Anyone??).

The Linux RT signals can be seen with the usual ‘kill’ command (‘kill -l’; that’s the letter lowercase L).

Continue reading SIGRTMIN or SIGRTMAX – who’s higher up (in signal delivery priority order)?