When writing a technical blog, the first 90% of every article is a lot easier than the final 10%. Sometimes, the challenge is collecting your own thoughts; I remember walking through the forest and talking to myself about the articles about Gödel’s beavers or infinity. Other times, the difficulty is the implementation of an idea. I sometimes spend days in the workshop or writing code to get, say, the throwaway image of a square-wave spectrogram at the end of a whimsical post.
That said, by far the most consistent challenge is art. Illustrations are important, easy to half-ass, and fiendishly difficult to get right. I’m fortunate enough that photography has been my lifelong hobby, so I have little difficulty capturing good photos of the physical items I want to talk about:
A macro photo of a photodiode sensor. By author.
Similarly, because I’ve been interested in CAD and CAM for nearly two decades, I know how to draw shapes in 3D and know enough about rendering tech to make the result look good:
An explanation of resin casting, by author.
Alas, both approaches have their limits. Photography just doesn’t work for conceptual diagrams; 3D could, but it’s slow and makes little sense for two-dimensional diagrams, such as circuit schematics or most function plots.
Over the past three years, this forced me to step outside my comfort zone and develop a new toolkit for simple, technical visualizations. If you’re a long-time subscriber, you might have seen the changing art style of the posts. What you probably don’t know is that I often revise older articles to try out new visualizations and hone my skills. So, let’s talk shop!
Circuit schematics
Electronic circuits are a common theme of my posts; circuit schematics are the lifeblood of this trade. I’m old enough to remember the beautiful look of hand-drawn schematics in the era before the advent of electronic design automation (EDA) software:
An old circuit schematic.
Unfortunately, the industry no longer takes pride in this craft; the output from modern schematic capture tools, such as KiCad, is uniformly hideous:
An example of KiCad schematic capture.
I used this style for some of the electronics-related articles I published in the 2010s, but for this Substack, I wanted to do better. This meant ditching EDA for general-purpose drawing software. At first, I experimented with the same CAD software I use for 3D part design, Rhino3D:
Chicken coop controller in Rhino3D. By author.
This approach had several advantages. First, I was already familiar with the software. Second, CAD tools are tailored for technical drawings: it’s a breeze to precisely align shapes, parametrically transform and duplicate objects, and so forth. At the same time, while the schematics looked more readable, they were nothing to write home about.
In a quest for software that would allow me to give the schematics a more organic look, I eventually came across Excalidraw. Excalidraw is an exceedingly simple, web-based vector drawing tool. It’s limited and clunky, but with time, I’ve gotten good at working around many of its flaws:
A schematic of a microphone amplifier in Excalidraw, by author.
What I learned from these two tools is that consistency is key. There is a temptation to start every new diagram with a clean slate, but it’s almost always the wrong call. You need to develop a set of conventions you follow every time: scale, line thickness, font colors, a library of reusable design elements to copy-and-paste into new designs. This both makes the tool faster to use — rivaling any EDA package — and allows you to refine the style over time, discarding failed ideas and preserving the tricks that worked well.
This brings us to Affinity. Affinity is a “grown-up” image editing suite that supports bitmap and vector files; I’ve been using it for photo editing ever since Adobe moved to a predatory subscription model for Photoshop. It took me longer to figure out the vector features, in part because of the overwhelming feature set. This is where the lessons from Rhino3D and Excalidraw paid off: on the latest attempt, I knew not to get distracted and to focus on a simple, reusable workflow first.
My own library of electronic components in Affinity.
This allowed me to finally get in the groove and replicate the hand-drawn vibe I’ve been after. The new style hasn’t been featured in any recent articles yet, but I’ve gone ahead and updated some older posts. For example, the earlier microphone amplifier circuit now looks like this:
A decent microphone amplifier. By author.
Explanatory illustrations
Electronic schematics are about the simplest case of technical illustrations. They’re just a map of connections between standard symbols, laid out according to simple rules. There’s no need to make use of depth, color, or motion.
Many other technical drawings aren’t as easy; the challenge isn’t putting lines on paper, it’s figuring out the most effective way to convey the information in the first place. You need to decide which elements you want to draw attention to, and how to provide visual hints of the dynamics you’re trying to illustrate.
I confess that I wasn’t putting much thought into it early on. For example, here’s the original 2024 illustration for an article on photodiodes:
Photodiode structure.
It’s not unusable, but it’s also not good. It’s hard to read and doesn’t make a clear distinction between different materials (solid color) and an electrical region that forms at the junction (hatched overlay).
Here’s my more recent take:
A better version of the same.
Once again, the trick isn’t pulling off a single illustration like this; it’s building a standardized workflow that lets you crank out dozens of them. You need to converge on backgrounds, line styles, shading, typefaces, arrows, and so on. With this done, you can take an old and janky illustration, such as the following visual from an article on magnetism:
A simple model of a conductor.
…and then turn it into the following:
A prettier model of the same. By author.
As hinted earlier, in many 2D drawings, it’s a challenge to imply a specific three-dimensional order of objects or to suggest that some of them are in motion. Arrows and annotations don’t always cut it. After a fair amount of trial and error, I settled on subtle outlines, nonlinear shadows, and “afterimages”, as shown in this illustration of a simple rotary encoder:
Explaining a rotary encoder.
The next time you see a blog illustration that doesn’t look like 💩 and wasn’t cranked out by AI, remember that more time might have gone into making that single picture than into writing all of the surrounding text.
This blog has a history of answering questions that no one should be asking. Today, we continue that proud legacy.
Jan 10, 2026
For the past couple of weeks, I couldn’t shake off an intrusive thought: raster graphics and audio files are awfully similar — they’re sequences of analog measurements — so what would happen if we apply the same transformations to both?…
Let’s start with downsampling: what if we divide the data stream into buckets of n samples each, and then map the entire bucket to a single, averaged value?
/* Assumes len is a multiple of win_size. */
for (int pos = 0; pos < len; pos += win_size) {
  float sum = 0;
  for (int i = 0; i < win_size; i++) sum += buf[pos + i];
  for (int i = 0; i < win_size; i++) buf[pos + i] = sum / win_size;
}
For images, the result is aesthetically pleasing pixel art. But if we do the same to audio… well, put your headphones on, you’re in for a treat:
The model for the images is our dog, Skye. The song fragment is a cover of “It Must Have Been Love” performed by Effie Passero.
If you’re familiar with audio formats, you might’ve expected this to sound different: a muffled but neutral rendition associated with low sample rates. Yet, the result of the “audio pixelation” filter is different: it adds unpleasant, metallic-sounding overtones. The culprit is the stairstep pattern in the resulting waveform:
Not great, not terrible.
Our eyes don’t mind the pattern on the computer screen, but the cochlea is a complex mechanical structure that doesn’t measure sound pressure levels per se; instead, it has clusters of different nerve cells sensitive to different sine-wave frequencies. Abrupt jumps in the waveform are perceived as wideband noise that wasn’t present in the original audio stream.
The problem is easy to solve: we can run the jagged waveform through a rolling-average filter, the equivalent of blurring the pixelated image to remove the artifacts:
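A minimal sketch of such a filter, operating on the same float buf[] and len as in the earlier snippet; the averaging width (taps) and the separate output buffer (out) are placeholder names of my own:
/* Centered rolling average that smooths out the stairsteps. Writing to a
   separate "out" buffer avoids contaminating the averages that follow. */
for (int i = 0; i < len; i++) {
  float sum = 0;
  int n = 0;
  for (int j = i - taps / 2; j <= i + taps / 2; j++) {
    if (j < 0 || j >= len) continue;  /* skip samples outside the buffer */
    sum += buf[j];
    n++;
  }
  out[i] = sum / n;
}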
But this brings up another question: is the effect similar if we keep the original 44.1 kHz sample rate but reduce the bit depth of each sample in the file?
/* Assumes a signed int16_t buffer; produces levels + 1 output steps for even
   values of levels. round() comes from <math.h>. */
int div = 32767 / (levels / 2);
for (int i = 0; i < len; i++)
  buf[i] = (int16_t)(round(((float)buf[i]) / div) * div);
The answer is yes and no: because the frequency of the injected errors will be on average much higher, we get hiss instead of squeals:
Also note that the loss of fidelity is far more rapid for audio than for quantized images!
As for the hiss itself, it’s inherent to any attempt to play back quantized audio; it’s why digital-to-analog converters in your computer and audio gear typically need to incorporate some form of lowpass filtering. Your sound card has that, but we injected errors greater than what the circuitry was designed to mask.
But enough with image filters that ruin audio: we can also try some audio filters that ruin images! Let’s start by adding a slightly delayed and attenuated copy of the data stream to itself:
/* Note: processing in place, so already-mixed samples feed back into later ones. */
for (int i = shift; i < len; i++)
  buf[i] = (5 * buf[i] + 4 * buf[i - shift]) / 9;
Check it out:
For photos, small offsets result in an unappealing blur, while large offsets produce a weird “double exposure” look. For audio, the approach gives birth to a large and important family of filters. Small delays give the impression of a live performance in a small room; large delays sound like an echo in a large hall. Phase-shifted signals create effects such as “flanger” or “phaser”, a pitch-shifted echo sounds like a chorus, and so on.
So far, we’ve been working in the time domain, but we can also analyze data in the frequency domain; any finite signal can be deconstructed into a sum of sine waves with different amplitudes, phases, and frequencies. The two most common conversion methods are the discrete Fourier transform and the discrete cosine transform, but there are more wacky options to choose from if you’re so inclined.
For images, the frequency-domain view is rarely used for editing because almost all changes tend to produce visual artifacts; the technique is used for compression, feature detection, and noise removal, but not much more; it can be used for sharpening or blurring images, but there are easier ways of doing it without FFT.
For audio, the story is different. For example, the approach makes it fairly easy to build vocoders that modulate the output from other instruments to resemble human speech, or to develop systems such as Auto-Tune, which make out-of-tune singing sound passable.
In the earlier article, I shared a simple implementation of the fast Fourier transform (FFT) in C:
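For reference, a minimal radix-2, decimation-in-time sketch of the same idea — not the exact listing from that article — might look like this:
#include <complex.h>
#include <math.h>

/* Minimal recursive radix-2 FFT sketch; n must be a power of two. */
static void fft(double complex *x, int n) {
  if (n < 2) return;
  double complex even[n / 2], odd[n / 2];
  for (int i = 0; i < n / 2; i++) {
    even[i] = x[2 * i];      /* split into even- and odd-indexed samples */
    odd[i] = x[2 * i + 1];
  }
  fft(even, n / 2);
  fft(odd, n / 2);
  for (int k = 0; k < n / 2; k++) {
    double complex t = cexp(-2.0 * I * M_PI * k / n) * odd[k];
    x[k] = even[k] + t;      /* combine the two half-size transforms */
    x[k + n / 2] = even[k] - t;
  }
}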
Unfortunately, the transform gives us decent output only if the input buffer contains nearly-steady signals; the more change there is in the analysis window, the more smeared and unintelligible the frequency-domain image. This means we can’t just take the entire song, run it through the aforementioned C function, and expect useful results.
Instead, we need to chop up the track into small slices, typically somewhere around 20-100 ms. This is long enough for each slice to contain a reasonable number of samples, but short enough to more or less represent a momentary “steady state” of the underlying waveform.
An example of FFT windowing.
If we run the FFT function on each of these windows separately, each output will tell us about the distribution of frequencies in that time slice; we can also string these outputs together into a spectrogram, plotting how frequencies (vertical axis) change over time (horizontal axis):
Audio waveform (top) and its FFT spectrogram view.
Alas, the method isn’t conducive to audio editing: if we make separate frequency-domain changes to each window and then convert the data back to the time domain, there’s no guarantee that the tail end of the reconstituted waveform for window n will still line up perfectly with the front of the waveform for window n + 1. We’re likely to end up with clicks and other audible artifacts where the FFT windows meet.
A clever solution to the problem is to use the Hann function for windowing. In essence, we multiply the waveform in every time slice by the value of y = sin²(t), where t is scaled so that each window begins at t = 0 and ends at t = π. This yields a sinusoidal shape that has a value of zero near the edges of the buffer and peaks at 1 in the middle:
The Hann function for FFT windows.
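In code, weighting a single slice might look like the sketch below; buf, pos, win_size, and the scratch buffer window are the same kind of placeholder names used in the earlier snippets, and sinf()/M_PI come from <math.h>:
/* Apply the Hann weight sin²(t) to one slice of win_size samples,
   with t sweeping the 0..π range across the window. */
for (int i = 0; i < win_size; i++) {
  float t = M_PI * i / win_size;
  float w = sinf(t) * sinf(t);    /* 0 at the edges, 1 in the middle */
  window[i] = buf[pos + i] * w;   /* weighted copy handed to the FFT */
}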
At first blush, it’s hard to see how this multiplication would help: the consequence of the operation is that the input waveform is attenuated by a cyclic sinusoidal pattern, and the attenuation pattern will carry over to any waveform reconstructed from the FFT data (bottom row).
The trick is to also calculate another sequence of “halfway” FFT windows of the same size that are shifted 50% in relation to the existing ones (second row below):
Overlapping FFT windows with Hann weighting.
This leaves us with one output waveform (A in the bottom row) that’s attenuated by the repeating sin² pattern that starts at the beginning of the clip, and then another waveform (B) that’s attenuated by an identical sin² pattern shifted one-half of the cycle. The second pattern can be also written as cos².
With this in mind, we can write the equations for the two waveforms we can reconstruct from the FFT streams as:
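\(A(t) = s(t) \cdot \sin^2(t) \qquad B(t) = s(t) \cdot \cos^2(t)\)
\(A(t) + B(t) = s(t) \cdot \underline{\left(\sin^2(t) + \cos^2(t)\right)}\)
Here, s(t) denotes the original, unwindowed signal, and t is scaled per window as before.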
This is where we wheel out the Pythagorean identity, an easily-derived rule that tells us that the following must hold for any x:
\(\sin^2(x) + \cos^2(x) = 1\)
If you’re unfamiliar with this identity, recall that in a right triangle, sin(α) is the ratio of the opposite to the hypotenuse (a/c), while cos(α) is the ratio of the adjacent to the hypotenuse (b/c). If we choose c = 1, this simplifies to sin(α) = a and cos(α) = b. Further, from the Pythagorean theorem, a² + b² = c², so we can assert that sin²(α) + cos²(α) = 1 for any angle α.
In effect, the underlined multiplier in the earlier equation for the summed waveform is always 1; in the A + B sum, the Hann-induced attenuation cancels out.
At the same time, because the signal at the edges of each FFT window is attenuated to zero, we get rid of the waveform-merging discontinuities. Instead, the transitions between windows involve gradual shifts between A and B signals, masking any editing artifacts.
Where was I going with this? Ah, right! With this trick up our sleeve, we can goof around in the frequency domain to — for example — selectively shift the pitch of the vocals in our clip:
Source code for the effect is available here. It’s short and easy to experiment with.
I also spent some time approximating the transform for the dog image. In the first instance, some low-frequency components are shifted to higher FFT bins, causing spurious additional edges to crop up and making Skye look jittery. In the second instance, the bins are moved in the other direction, producing a distinctive type of blur.
PS. Before I get hate mail from DSP folks, I should note that high-quality pitch shifting is usually done in a more complex way. For example, many systems actively track the dominant frequency of the vocal track and add correction for voiceless consonants such as “s”. If you want to go down a massive rabbit hole, this text is a pretty accessible summary.
As for the 20 minutes spent reading this article, you’re not getting that back.
How do you turn 1 MHz into 100 MHz? With magic, of course.
Dec 26, 2025
Welcome to another installment of Cursed Circuits. My goal for the series is to highlight a small collection of common yet mind-bending circuits that must’ve taken a stroke of genius to invent, but that are usually presented on the internet without explaining how or why they work.
In today’s episode, let’s have a look at a phase-locked loop clock multiplier: a circuit that, among other things, can take a 20 MHz timing signal produced by a quartz crystal and turn it into a perfectly-synchronized computer clock that’s running at 500 MHz, 3 GHz, or any other frequency of your choice.
A primer on latches
To understand the PLL frequency multiplier, it’s probably good to cover latches first. A latch is a fundamental data-storage circuit capable of holding a single bit. The simplest variant is the set-reset (S-R) latch, which can be constructed from basic logic gates in a couple of ways. Perhaps the most intuitive layout is the following three-gate approach:
A three-gate S-R latch.
To analyze the circuit, let’s assume that the “set” signal (S) is high and the “reset” signal (R) is low. In this case, the output of the OR gate is a logical one regardless of the looped-back signal present on the gate’s other terminal; this produces a logical one on the first input of the downstream AND gate. The other input of that AND gate is also equal to one, because it’s just an inverted copy of R = 0. All in all, in the S = 1 and R = 0 scenario, both inputs of the AND gate are high; therefore, so is the signal on the circuit’s output leg (Q).
Next, let’s imagine that S transitions to a logical zero. This puts one of the OR inputs at zero volts, but the other is still high because it’s the looped-back output signal Q. The circuit is latched: it keeps outputting the same voltage as before, even though the original driving signal is gone.
The only thing that can break the cycle is the “reset” line being pulled high. This causes one of the AND inputs to go low, thus forcing the output signal to zero and breaking the loop that kept the OR gate latched. From now on, the output remains low even if R returns to zero volts.
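If it helps, the gate logic can be sketched in a few lines of C; the names are mine, and the loop-back is modeled by keeping Q in a static variable:
/* A toy model of the three-gate S-R latch: the OR gate sees the looped-back Q,
   and the AND gate combines it with the inverted reset line. */
static int q = 0;
void latch_update(int s, int r) {
  int or_out = s | q;   /* high if set is high or the latch is already set */
  q = or_out & !r;      /* reset forces the output (and the loop) low      */
}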
This two-lever latch can be fashioned into a more practical data (D) latch, which stores an arbitrary input bit supplied on the data line whenever the enable signal (E) is high, and keeps it when E is low:
A conceptual illustration of a D latch.
In this circuit, a pair of input-side AND gates ensures that when E is at zero volts, the underlying S and R lines remain low regardless of the value presented on the data line. Conversely, if enable is high, the gates pass through a logical one either to the S line (if D is high) or the R line (if D is low).
Going further down that path, we can turn a D latch into a clocked D flip-flop, which stores a bit of data on the rising edge of the clock signal:
A clocked D flip-flop.
In this circuit, the latch on the left passes through the input data when the clock signal is low, or keeps the previous value if the clock is high. The latch on the right works the opposite way: it passes through the output from the first latch if the clock is high or holds the last value otherwise.
In effect, the value on the input line appears to propagate to the circuit’s output only during the 0 → 1 transition (rising edge) of the clock signal. More to the point, the propagation happens in two stages and there is never a direct signal path between D and Q, which prevents the cell from misbehaving if Q is looped back onto D.
Phase error detector
Once we have a clocked D flip-flop — and make a trivial modification to furnish it with an additional reset input — we can build a digital phase error detector circuit. One type of such a detector is shown below:
A simple phase error detector.
The purpose of the detector is to compare clock signal B to a reference clock provided on input A. If the positive edge on input A arrives before a positive edge on input B, the output of the upper flip-flop (QA) goes high before the output of the bottom flip-flop (QB); this signals that clock B is running too slow. Conversely, if the edge on B arrives before the edge on A, the circuit generates a complementary output indicating that B is running too fast. As soon as both flip-flops are latched high — i.e., after encountering a positive edge on whichever of the two clock signals is running slower — the circuit is reset.
The following plot shows the behavior of the circuit when the clock supplied on the B leg is running too slow (left) or too fast (right) in relation to the reference signal on leg A:
The basic behavior of the phase error detector circuit.
In effect, the detector generates longer pulses on the output labeled P if the analyzed clock signal is lagging behind the reference; and longer pulses on the other output (R) if the signal is rushing ahead.
It’s worth noting that the frequencies in the plot are not cherry-picked; although a rigorous mathematical analysis of phase detectors is fairly involved and they have transient failure modes, the following simulation shows what happens if the frequency of B is changing continuously:
A continuously-variable-frequency variant of the simulation.
PLL loop
The detector can serve as the fundamental building block of a circuit known as a phase-locked loop. Despite the name, the main forte of phase-locked loops is that they can generate output frequencies that match an input signal of some sort, even if that signal is noisy or faint:
The basic architecture of a digital PLL.
The output stage of the PLL is a voltage-controlled oscillator (VCO). We’ve briefly covered VCOs before: they generate an output waveform with a frequency proportional to the supplied input voltage.
The voltage for the VCO is selected by a simple switched capacitor section in the middle; the section has two digital inputs, marked “+” and “-”. Supplying a digital signal on the “+” input turns on a high-side transistor that gradually charges the output capacitor to a higher voltage, thus increasing the output frequency of the VCO. Supplying a signal on the “-” leg turns on a low-side transistor that slowly discharges the capacitor, achieving the opposite effect.
The last part of the circuit is the now-familiar phase error detector; it compares the externally-supplied clock to the looped-back output from the VCO. The detector outputs long pulses on the P output if the VCO frequency is lower than the reference clock, or on the R output if the VCO is running too fast. In doing so, the circuit adjusts the capacitor voltage and nudges the VCO to match the frequency and phase of the input waveform.
Toward the frequency multiplier
So far, we have a circuit that synchronizes the VCO with an external clock; that has some uses in communications, but doesn’t seem all that interesting on its own. To take it to the next level, we need to add a small but ingenious tweak:
A PLL-based frequency multiplier.
In this new circuit, we incorporated a frequency divider in the feedback loop. A frequency divider is not a complicated concept; most simply, it can be a binary counter (e.g., 74HC393) that advances by one with every cycle of the input clock. For a three-bit counter, the outputs will be:
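A quick C sketch of the count sequence (clk and q are just illustrative names; printf comes from <stdio.h>):
/* The state sequence of a free-running 3-bit binary counter. */
for (int clk = 0; clk < 16; clk++) {
  int q = clk % 8;  /* the counter wraps around after eight states */
  printf("clock %2d: Q2=%d Q1=%d Q0=%d\n", clk, (q >> 2) & 1, (q >> 1) & 1, q & 1);
}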
Note that the counter produces a square wave with half the clock frequency on the LSB output (Q0); with one-fourth the frequency on the second output (Q1); and with one-eighth on the MSB leg (Q2).
If we choose Q0 for the divider, the phase error detector will be presented with a looped-back signal that’s equal to one half the running frequency of the VCO; it will then work to get the VCO frequency high enough so that the divided signal matches the reference clock. This will cause the VCO to run exactly twice as fast — and yet, precisely in lockstep with the input clock.
👉 Previous installments: one, two, three. If you like the content, please subscribe. I’m not selling anything; it’s just a good way to stay in touch with the authors you like.
In today’s episode, I’d like to talk about the use of operational amplifiers to do something other than amplification: to solve analog math. Analog computing at scale is wildly impractical because errors tend to accumulate every step along the way; nevertheless, individual techniques find a number of specialized uses, perhaps most prominently in analog-to-digital converters. Let’s have a look at how it’s done.
Before we get to less obvious circuits, let’s start with a brief recap: operational amplifiers are to analog electronics what logic gates are to digital logic. They are simple but remarkably versatile building blocks that let you accomplish far more than appears possible at first blush.
Unfortunately, in introductory texts, their operation is often explained in confusing ways. All that an op-amp does is take two input voltages — Vin- (“inverting input”) and Vin+ (“non-inverting input”) — and then output a voltage that’s equal to the difference between the two, amplified by a huge factor (AOL, often 100,000 or more) and then referenced to the midpoint of the supply (Vmid). You can write it the following way:
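\(V_{out} = V_{mid} + A_{OL} \cdot (V_{in+} - V_{in-})\)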
That’s all the chip does. Because the gain is massive, there is a very narrow linear region near Vin- = Vin+; a difference greater than a couple of microvolts will send the output toward one of the supply rails. The chip doesn’t care about the absolute value of Vin- or Vin+; it can’t “see” any external components you connect to it, and its internal gain can’t be changed.
To show the versatility of the component, we can have a quick look at the following circuit that you might be already familiar with — a non-inverting amplifier:
The basic non-inverting voltage amplifier.
One input of the op-amp is connected to the external signal source: Vin+ = Vsignal. The other input is hooked up to a two-resistor voltage divider that straddles the ground and the output leg; the divider’s midpoint voltage is:
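\(V_{in-} = V_{out} \cdot {R_g \over R_f + R_g}\)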
As discussed earlier, the only way for the op-amp to output voltages other than 0 V or Vsupply is for Vin+ to be very close to Vin-. We can assume that we’re operating near that equilibrium point, combine the equations for the voltages on the two input legs, and write:
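\(V_{signal} \approx V_{out} \cdot {R_g \over R_f + R_g} \quad \Rightarrow \quad V_{out} \approx V_{signal} \cdot \left(1 + {R_f \over R_g}\right)\)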
In other words, the output voltage is the input signal amplified by a factor of 1 + Rf/Rg. We have a near-ideal single-ended voltage amplifier with a configurable gain. Again, the circuit is probably familiar to most folks dabbling in analog electronics, but it’s worth pondering that we implemented it by adding a couple of resistors to a chip that does something conceptually quite different.
Note: there’s a bit more to op-amp lore when dealing with high-frequency signals; a more rigorous analysis of their frequency characteristics can be found in this article.
Addition
Now that we have the basics covered, we can show that op-amps can do more than just amplify signals. The first contender is the following summing layout that differs from what’s usually covered in textbooks, but that’s well-suited for single-supply use:
A three-way non-inverting summing amplifier.
Assuming well-behaved signal sources that can supply and sink currents, it should be pretty intuitive that the voltage on the Vin+ leg is just an average of three input signals:
\(V_{in+} = {V_A + V_B + V_C \over 3}\)
For readers who are unpersuaded, we can show this from Kirchhoff’s current law (KCL); the law essentially just says “what comes in must come out” — i.e., the currents flowing into and out of the three-resistor junction must balance out. If we use Vjct to denote the voltage at the junction, then from Ohm’s law, we can write the following current equations for each resistor branch:
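\(I_1 = {V_A - V_{jct} \over R} \qquad I_2 = {V_B - V_{jct} \over R} \qquad I_3 = {V_C - V_{jct} \over R}\)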
Further, from KCL, we can assert that the currents must balance out: I1 + I2 + I3 = 0 A. Combining all these equations and multiplying both sides by R, we get:
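\((V_A - V_{jct}) + (V_B - V_{jct}) + (V_C - V_{jct}) = 0\)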
Solving for Vjct, we get (VA + VB + VC) / 3. We have a confirmation that the input-side resistor section averages the input voltages.
To be fair, the averaging portion of the circuit has a minor weakness: it depends on some inputs sinking current while others source it. Some signal sources might not have that ability. That said, compared to the alternative design, it has the benefit of being more useful in single-supply circuits, so let’s stick with that.
Moving on to the op-amp section: this is just another sighting of the non-inverting amplifier. The gain of the amplifier circuit is set by the Rf and Rg resistors, and in this instance, works out to A = 1 + Rf/Rg = 3. In other words, the signal on the output leg is:
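\(V_{out} = 3 \cdot {V_A + V_B + V_C \over 3} = V_A + V_B + V_C\)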
That looks like a sum! But it also feels like we cheated in some way: it just so happens that we could implement averaging using passive components, and then tack on an amplifier for some gain. Surely, resistor magic can’t get us much further than that?
Subtraction
It can! The next stop is subtraction, which can be achieved with the following circuit topology:
A simple difference amplifier (A - B).
We can start the analysis with the non-inverting input of the amplifier. The signal on this leg is generated by a voltage divider consisting of two identical resistances connected in series between VA and the ground. In other words, the voltage here is Vin+ = ½ · VA.
The inverting input is a voltage divider too, except it produces a voltage that’s halfway between VB and Vout: Vin- = ½ · (VB + Vout).
As with any op-amp topology, linear operation can happen only when Vin- ≈ Vin+. In other words, we can assert that for the circuit to function, the following must be true:
\(½ \cdot V_A \approx ½ \cdot (V_B + V_{out})\)
We can cancel out the repeated ½ term on both sides, and then reorder the equation to:
\(V_{out} \approx V_A - V_B\)
Neat: that’s precisely what we’ve been trying to do.
To be fair, not all is roses: in a single-supply circuit, an op-amp can’t output negative voltages, so the topology we’ve just analyzed works only if VA > VB; otherwise, Vout just hits the lower rail and stays there until the input voltages change.
To accommodate use cases where VA < VB, we’d need to use a higher output voltage as the “zero” point (Vzero). For example, if Vzero = 2.5 V, then a computed difference of -1 V could be represented by Vout = Vzero - 1 V = 1.5 V; in the same vein, a difference of +2 V could correspond to Vout = 4.5 V.
To do this, we just need to disconnect the bottom voltage divider from the ground and replace 0 V with a fixed “zero” voltage of our choice. This changes the equation for the positive leg to Vin+ = ½ · (VA + Vzero). The overall equilibrium condition becomes:
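\(½ \cdot (V_A + V_{zero}) \approx ½ \cdot (V_B + V_{out})\)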
After tidying up and solving for the output signal, we get:
\(V_{out} \approx V_{zero} + (V_A - V_B)\)
A common choice of a reference point would be the midpoint of the supply (Vmid = ½ · Vsupply).
Multiplication and division
The concept of analog computation can be also extended to multiplication and division. The most common and mildly mind-bending approach hinges on the fact that any positive number can be rewritten as a constant base n raised to some power; for example, 8 can be written as 2³, while 42 is approximately 2 raised to the power of 5.3924.
From the basic properties of exponentiation, it’s easy to show that nᵃ · nᵇ is the same as nᵃ⁺ᵇ; it follows that if we have two numbers represented as exponents of a common base, we can reduce the problem of multiplication to the addition of these exponents.
We already know how to build a summing circuit, so all we’re missing is a way to convert a number to an exponent. We don’t really care what base we’re using, as long as the base remains constant over time.
This brings us to the following design:
A logarithmic amplifier.
As before, the linear equilibrium condition requires Vin- ≈ Vin+. Let’s assume that the initial input voltage is about equal to Vzero; in this case, the output settles in the same vicinity.
Next, let’s analyze what would happen if the input voltage increased by vs = 100 mV. In such a scenario, for the op-amp to stay at an equilibrium of Vin- ≈ Vin+, we would need a sufficient current to flow through the resistor to create a 100 mV voltage drop:
\(I_R = {v_{s} \over R}\)
The op-amp has a very high input impedance, so the current must flow through the diode; if it doesn’t, that’d move the circuit toward a condition of Vin- ≫ Vin+, which would cause Vout to move toward the negative supply rail. That would forward-bias the diode and thus motivate it to conduct better. In other words, the circuit has an automatic mechanism that coerces the diode to admit the current matching IR, and the amount of convincing is reflected in how much the output voltage has been reduced from the midpoint. We can denote this relative shift as vo.
From an earlier feature about diodes, you might recall that although the relationship between the applied diode voltage and the resulting current is complicated, there is an initial region where the component’s V-I curve is exponential. In the following plot for a 1N4148 diode, this property holds up for currents up to about 1 mA:
V-I curve for 1N4148, normal (left) and log scale current (right). By author.
In other words, if the input resistor is large enough (10 kΩ or so), we can say that vo will be dictated by the magnitude of an exponent of some constant base n that yields the correct diode current: ID = n^vₒ.
We also know that the current that must flow through the diode is proportional to the shift in the input signal (vs) divided by R. This means that we’ve accomplished the number-to-exponent conversion between vs and vo. Or, in the mathematical parlance, we’ve calculated a logarithm.
To implement multiplication, we need two logarithmic converters on the input side, a summing amplifier to add the exponents, and then an exponential converter that goes from the summed exponent back to a normal value. That last part can be accomplished by switching the location of the diode and the resistor in the log converter circuit we already have.
Integration
Integration is just a fancy word for summing values over time; if you want to sound posh, you can say that a bucket in your backyard “integrates” rainfall over the duration of a storm.
Although integration is important in calculus, analog integrators have down-to-earth uses too. For example, the circuits can convert square waves into triangular shapes that are useful in electronic musical instruments. The circuit’s ability to produce very linear up and down slopes also comes in handy in slope-based and delta-sigma ADCs.
The simplest, textbook integrator is shown below:
Basic integrator.
Once again, we can note that the linear operation condition is Vin- ≈ Vin+. Further, let’s assume that the input signal is equal to Vzero and the capacitor is discharged, so both op-amp inputs and the output are at about the midpoint.
Next, similarly to the analysis we’ve done for the log amplifier, let’s assume that the input signal shifts up by vs = 100 mV. For the op-amp to stay at an equilibrium, we would need a sufficient current to flow through the resistor to create a 100 mV voltage drop: IR = vs/R.
The only possible path for this current is the capacitor; a capacitor doesn’t admit steady currents, but it will allow the movement of charges during the charging process, which will kick off when the op-amp’s output voltage begins to drop; this drop causes a voltage differential to appear across the capacitor’s terminals.
From the fundamental capacitor equation, charging the capacitor with a constant current IR for a time t will produce the following voltage across its terminals:
\(V_{cap} = {I_R \cdot t \over C} = {v_s \cdot t \over RC}\)
To keep Vin- steady, the voltage to which the capacitor gets charged must be accounted for by a directionally opposite shift of the output voltage (vo). The shift will persist after Vsignal returns to the midpoint, because with no charging or discharging current, the capacitor just retains charge. The shift can be undone if vs swings the other way around.
From the earlier formula for the capacitor voltage, it should be clear that the circuit keeps a running sum of (midpoint-relative) input voltages over time.
The textbook integrator we’ve been working with has an inverted output: Vout moves down whenever Vsignal moves up; this makes it somewhat clunky to use in single-supply applications. The problem can be addressed in a couple of intuitive ways, but a particularly efficient — if positively cursed — solution is shown below:
The single-supply, non-inverting integrator.
As in all other cases, the prerequisite for linear operation is Vin- ≈ Vin+.
We can start the analysis with the two-resistor divider on the top: it simply ensures that Vin- is equal to ½ · Vout. As for the bottom portion of the circuit, the instantaneous voltage on the non-inverting input is decided by the capacitor’s charge state (Vcap). The bottom resistors will influence the charge of the capacitor over time, but if we’re living in the moment, we can combine the equations and write the following equilibrium rule:
\(V_{cap} \approx ½ \cdot V_{out}\)
Equivalently, we can say that Vout ≈ 2 · Vcap.
We have established that Vout is equal to twice the value of Vcap, but if so, the resistor on the bottom right is subjected to a voltage differential between these two points (always equal to Vcap). From Ohm’s law, the resistor will admit the following current:
\(I_1 = {V_{cap} \over R}\)
If the input voltage is zero, the neighboring resistor to the left is subjected to the same voltage differential, so the current flowing into the junction (I1) is the same as the current flowing out (I2). With the currents in balance, the capacitor holds its previous charge and the output voltage doesn’t change.
That said, if the input voltage (Vsignal) is non-zero, the voltage differential across the terminals of the resistor on the left is different, and the formula for I2 becomes:
\(I_2 = {V_{cap} -V_{signal} \over R}\)
In this case, there is a non-zero balance of the currents flowing in and out via the resistors:
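\(I_1 - I_2 = {V_{cap} \over R} - {V_{cap} - V_{signal} \over R} = {V_{signal} \over R}\)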
This current is flowing in via the resistor on the right but not flowing out via the resistor on the left, so it necessarily charges the capacitor. Note that the capacitor charging current is independent of Vcap; it remains constant as long as the input voltage is constant too.
As before, from the fundamental capacitor equation (V = I·t/C), we can tell that a constant charging current will cause the voltage on the output leg to ramp up in a straight line. Of course, this will come to an end once we hit the output voltage limit of the amplifier. To reset the circuit, we’d need to short the terminals of the capacitor.
👉 For another installment of the series, click here.
If you enjoy the content, please subscribe. I don’t sell anything; it’s just a good way to stay in touch with the authors you like.
In the previous post on this Substack, we looked at charge pump circuits and the mildly cursed example of a capacitor-based voltage halver. I find these topologies interesting because they are very simple, yet they subvert the usual way of thinking about what a capacitor can or cannot do.
In today’s episode, let’s continue down that path and consider an even more perplexing example: a switched capacitor lowpass filter. The usual way to design an analog lowpass filter is to combine a resistor with a capacitor, as shown below:
A standard R-C lowpass filter.
The filter can be thought of as a voltage divider in which R is constant and C begins to conduct better as the sine-wave frequency of the input signal increases. This frequency-dependent resistor-like behavior of a capacitor is known as reactance and is described by the following formula:
\(X_C = {1 \over 2 \pi f C}\)
Most sources give this equation without explaining where it comes from, but it can be derived with basic trigonometry; if you’re unfamiliar with its origins, you might enjoy this foundational article posted here back in 2023.
In an R-C lowpass circuit, the reactance is initially much larger than the value of R, so up to a certain frequency, the capacitor can be ignored and the input voltage is more or less equal to the output voltage. Past a certain point, however, the reactance of the capacitor becomes low enough so that the signal is markedly pulled toward the ground, attenuating it and producing a filter response plot similar to the following:
R-C lowpass filter behavior for R = 100 kΩ and C = 10 nF.
It’s easy to find the frequency at which R = XC. The solution corresponds to the “knee” in the logarithmic plot shown before:
\(f_{knee} = {1 \over 2 \pi R C}\)
This is basic analog electronics, and something we covered on the blog before. But did you know that you can construct a perfectly good lowpass filter with a pair of capacitors and a toggle switch? The architecture is shown below:
A conceptual illustration of a switched capacitor lowpass filter.
In practical circuits, the “switch” would be a pair of MOSFETs driven by a timing signal, but nothing stops us from using a real switch or an electromechanical relay to experiment with this topology on a breadboard.
In the first half of the timing cycle, the switch is in position A. This connects a small input capacitor, Cin, to the input terminal. As soon as the connection is made, the capacitor charges to a voltage equal to the momentary level of the input waveform, which we can denote as VA.
In the second half of the cycle, the two-way switch is moved to position B. This causes the voltages across Cin and Cout to equalize. We don’t need to solve for the relative shifts in their terminal voltages; it suffices to say that Cin started at VA and ended up at VB.
With this in mind, we can apply a form of the fundamental capacitor equation (I = C·Δv/t) to find the average current that must have flowed out of Cin to shift its voltage by that amount in time t:
\(I_B = {(V_A - V_B) \ \cdot \ C_{in} \over t}\)
The switching period t is the reciprocal of the switching frequency fs, so we can also restate this as IB = (VA - VB) · Cin · fs.
The formulas tell us that the average current is proportional to the voltage present across the A and B terminals of the switch. The actual current is pulsed, but it otherwise looks like the ohmic behavior of a series resistor. In fact, from Ohm’s law, we can find the equivalent resistance that appears to be feeding Cout:
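\(R_{eq} = {V_A - V_B \over I_B} = {1 \over C_{in} \cdot f_s}\)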
If the circuit can be modeled as a series resistor feeding a shunt capacitor, we’re essentially looking at a bog-standard lowpass R-C architecture. The knee frequency of this R-C filter can be written as:
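\(f_{knee} = {1 \over 2 \pi \cdot R_{eq} \cdot C_{out}} = {C_{in} \cdot f_s \over 2 \pi \cdot C_{out}}\)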
The math might seem improbable, but the circuit works in practice. The following oscilloscope traces show a filter constructed with Cin = 100 nF and Cout = 1 µF, toggled at fs = 50 Hz. This corresponds to fknee ≈ 0.8 Hz — and indeed, we see some attenuation for a 1 Hz input sine, and a lot more for a 5 Hz one:
A switched capacitor lowpass filter, showing variable attenuation.
The plot also offers an intuitive interpretation of the math: in each clock cycle, only a certain amount of charge can move between the capacitors; this corresponds to the maximum height of a single step in the output waveform, proportional to the relative sizing of the caps (i.e., the ratio of Cin to Cout). The larger the vertical step, the easier it is for the output waveform to track a fast-moving input signal.
The switching frequency fs controls how many charge transfers occur per second, which gives us another method of controlling the filter’s center frequency: to shift it, we can speed up or slow down the supplied clock. That’s a major boon for digitally-controlled analog signal processing.
In practice, to minimize the stairstep pattern, we tend to choose the switching frequency fs to be about two orders of magnitude higher than the filter’s intended center frequency. To achieve this, from the earlier fknee formula, we need to aim for Cout ≈ 16 · Cin.
For a third installment of the series, click here.