47 Concertina Again – A Gift that keeps on giving

On 22-07-2014, I published a post about some favourite questions that I had used when interviewing electronic engineers and technicians for a job. One of the favourites that I described was a single transistor circuit with outputs from both the collector and the emitter. Later, a friend identified this as a “Concertina Circuit”.

That 2014 Blog Post can be seen at

Over the years, I had been amazed by how, on the one hand, the circuit was so simple, and on the other, it was so powerful in providing for a candidate to expose his merit (or lack thereof).

This is the circuit that I used.

You will see that I used an npn transistor, but just about any active device except an SCR or a TRIAC could be used. Old fogies will remember it being used as a phase splitter in valve amplifiers. I don’t know whether it is popular in latter-day valve amplifiers such as those used in electric guitar service.

I recently saw it in a final year project report that was written by a friend in 1962.

The “Concertina” is the second triode. Note the twin triode to the right acting as a buffer on the concertina stage outputs. Maybe this is significant. See below. Note that in those days, circuits were simpler, and the argument that one might make these days that it is important to show active devices the right way up for clarity (an argument that I hold to), had not really come into effect.

My title for this post included the words “A gift that keeps on giving”. The reason for this is that there are two extra aspects of complexity that never arose when I was using this circuit for job interviewing. In all its simplicity, it is really more complex than I realized.

Gift 1.

I had actually been aware of this years before I ever conducted a job interview, but I hadn’t thought of it in this context. The argument goes that one possible problem with the circuit is that the two outputs have different impedances, so that they will suffer to different extents if capacitively loaded. For this discussion, I will stick to the npn transistor case, but the argument holds for the 12AU7 as well. The output at the collector will have an impedance which is the parallel combination of the 1k load resistor and the output resistance of the collector itself. For practical purposes we can take this to be the load resistor alone. That is 1k.

The output impedance at the emitter on the other hand will be the load resistor in parallel with Re

Re = 25E-3/Ic

= (approx) 25E-3/3.15E-3

= 8 ohms

In this case, it is the load resistor that becomes insignificant, and we can say that the output impedance at the emitter is eight ohms.

Let us imagine that we load each output with a 100nF capacitor. We might expect that the Collector output will droop at high frequencies with a pole constructed of the 1k output impedance and the capacitor.

RC = 1E3 * 100E-9

= 100 us

Break freq. = 1.59 kHz

Similarly, at the emitter, R = 8 ohms

RC = 8 * 100E-9

= 800 ns

Break freq. = 199 kHz
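These sums are simple enough to script. Here is a sketch in Python using the values from the example (Ic ≈ 3.15 mA, 1k loads, 100 nF); note that the emitter figure comes out nearer 200 kHz once Re is taken in parallel with the 1k load, which is why I rounded to 8 ohms and 199 kHz above.

```python
# Sketch of the sums above, using the example values:
# 1k collector load, 1k emitter load, Ic ~ 3.15 mA, 100 nF loads.
from math import pi

Ic = 3.15e-3             # collector current (A)
re = 25e-3 / Ic          # intrinsic emitter resistance ~ 25mV/Ic (~8 ohms)
Rc, Re_load, C = 1e3, 1e3, 100e-9

def parallel(a, b):
    return a * b / (a + b)

Zout_collector = Rc                    # collector output resistance >> Rc, so ~1k
Zout_emitter = parallel(Re_load, re)   # ~8 ohms: re dominates the 1k load

f_collector = 1 / (2 * pi * Zout_collector * C)   # ~1.59 kHz
f_emitter = 1 / (2 * pi * Zout_emitter * C)       # ~200 kHz

print(round(Zout_emitter, 1), round(f_collector), round(f_emitter))
```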

This is not how it works out, however. The capacitor on the emitter provides emitter bypassing, so that as the frequency at which the capacitor breaks with the emitter load resistor is passed, the gain of the transistor as a common emitter stage starts to rise. Over the next decade, this rise cancels exactly the fall in output at the collector due to the shunting by the capacitor there.

Here are three cases:

Case 1. 100nF Capacitor on Collector.

Here are the Bode plots for the outputs.

At the low frequency end, the gain is limited by the coupling capacitor on the input.

The Green line represents the Collector. The pole at 1.59 kHz is evident.

The Blue line represents the Emitter. No high frequency attenuation in the frequency range of interest.

Case 2. 100nF Capacitor on Emitter

Here are the Bode plots for the outputs.

The Blue line represents the Emitter. Notice that the bandwidth at the Emitter is limited by the pole at 199 kHz as predicted in the above sums.

The Green line shows the voltage at the Collector. Note the zero at 1.59 kHz which takes effect as the capacitance on the Emitter breaks with the 1k Emitter load resistor.

Above 1.59kHz, the gain rises as Xc on the emitter falls with increasing frequency. This continues up to 199kHz where Xc has fallen to the same magnitude as Re. Above that, Re dominates and the gain flattens out.

Case 3. 100nF Capacitor on Collector and Emitter

Here is the Bode plot for the two outputs:

The simulator has drawn the Collector response (Green) first. Then the Emitter response in Blue. The Emitter response overwrites the Collector response for the whole plot, except for up near 100 MHz where the Green line peeps out from under the blue.

The Emitter response (Blue) is the same as in Case 2.

For the Collector response, the pole that was evident in Case 1. is exactly cancelled by the zero that we saw in Case 2.
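For anyone who wants to check the cancellation without a simulator, here is a small-signal sketch in Python. It assumes an idealized transistor (infinite beta, re = 8 ohms) with 1k loads on both electrodes, so it models the argument rather than any particular device.

```python
# Small-signal sketch of the pole/zero cancellation, assuming an ideal
# transistor (infinite beta) with re = 8 ohms, 1k loads on collector and
# emitter, and a 100 nF capacitor on each output.
from math import pi

re, R, C = 8.0, 1e3, 100e-9

def Z_RC(R, C, f):
    """R in parallel with C at frequency f (complex impedance)."""
    s = 2j * pi * f
    return R / (1 + s * R * C)

def collector_gain(f, Ce=True, Cc=True):
    """|Vc/Vin| with optional emitter bypass and collector load capacitors."""
    Ze = Z_RC(R, C, f) if Ce else R      # emitter leg impedance
    Zc = Z_RC(R, C, f) if Cc else R      # collector load impedance
    return abs(Zc / (re + Ze))

# Case 1: collector cap only -> pole near 1.59 kHz, ~10x down a decade above it
droop_case1 = collector_gain(15.9e3, Ce=False) / collector_gain(100, Ce=False)

# Case 3: caps on both outputs -> the zero cancels the pole, flat to ~200 kHz
droop_case3 = collector_gain(15.9e3) / collector_gain(100)

print(droop_case1, droop_case3)
```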

This result, which is at first blush unexpected, provides a rewarding little bit of sparkle to this circuit. I have to admit that I did not initially think this out for myself. I read an explanation similar to the above in the Wireless World magazine in the 1970s. I found that it fell to me to explain it to a friend recently, so having thought out the explanation properly, I share it with you here. My friend, by the way, suffered from no disadvantage with respect to me except that he is not old enough to have read Wireless World in the 1970s.

Gift 2.

Here is another gem that emerges from the apparent simplicity of this circuit. Unlike Gift 1. I have thought this all out for myself.

One possible application for a phase splitter is to provide a “balanced” output for a connection between items of equipment where there might be some common mode signal between the different local grounds. A “balanced” signal path might be used even where the two grounds are solid, but where magnetically induced or capacitively coupled interference might advantageously be cancelled.

The concertina circuit is NOT suitable for use as the phase splitter to provide the balanced signal in such an application. I have seen it so used (many years ago), but at that time, I didn’t realize what the problem was.

There are two distinct factors at play here. First, the two outputs have different output impedances. Any common mode current into both outputs will give rise to a much higher noise voltage on the Collector, where the output impedance is about 1k in our example, than at the Emitter, where it is about 8 ohms.

Earlier in this post, I made a snide remark about drawing active devices lying on their side. One problem with this is that some circuit configuration might inadvertently arise and not be immediately recognized, as it does not show the active device in the way we are used to. My habit is to strictly constrain the way I draw these things. Although it is common to draw a transistor lying on its front with its nose in the mud when wired in the common base configuration, it need not be. I drew it the right way up when I designed a common base circuit in an earlier Blog post.

Post number 29 “I did the test question – and failed!” 23-07-2014

This was how I drew my common base circuit on that occasion:

The similarity to our concertina circuit is immediately apparent. To apply a noise current to the Emitter is to apply it to the input of a common base stage. Whatever current we feed into the Emitter will come out of the Collector (ignoring the a.c. component of base current). If the current is a “common mode” current, applied to both outputs, then the Collector will get a double dose: one directly from the noise source, and one delivered by the transistor acting as a common base stage. All this is clearly seen in simulation.

I used a sinusoid of amplitude 1 millivolt and frequency of 10 kHz to provide the signal “Common”.
Here is what we get:

Don’t you worry about absolute magnitudes here. What we are looking at is how the noise voltages on the two outputs compare. The Green trace is the noise voltage on the Emitter.

The Blue trace is the noise voltage on the collector with noise current applied to the emitter only. That is, the signal we see on the collector is there as the transistor is acting as a common base amplifier.

The Red trace is the voltage on the Collector when identical noise currents are applied to both outputs.

I found it interesting that if I placed a 2k resistor in the connection between C3 and the current source G1, then the two outputs have matched noise voltages. This means that if the receiver at the other end of the “balanced” line has good common mode rejection, then the noise will not appear. This would be a very unsatisfactory circuit though. The balancing of the two noise voltages would be precarious, and anyway, the introduction of the 2k resistor would spoil the matching of frequency responses that I discussed under the heading of “Gift 1.”.

The best way to utilize the Concertina in a circuit with any interference or capacitive loading on the outputs is to isolate it with a buffer stage as shown in the vacuum tube example above. This is almost exactly what Williamson did in his celebrated audio amplifier design (https://en.wikipedia.org/wiki/Williamson_amplifier) in 1947.

Sorry about those old 12AU7s, Cyril!

In my earlier discussion of the Concertina circuit, I referred the reader to an article on the Concertina circuit at
Looking at it again now, I am concerned that it might contain piffle. Go into it with care.

46 Even More Oliver Heaviside

The purpose of this post is to mop up a few bits and bobs that turned up whilst I was researching the previous two posts 44 and 45.

Just as electrical concepts had not fully revealed themselves in the 1800s, the language for discussing these things had not evolved. I read in the Kelvin biography that at the time that the first Atlantic cable was laid, the names of the electrical units had not been agreed on.

Heaviside invented the word “impedance” in 1886. This word seems so normal to us today, but to learned latinists it seemed barbaric at the time.

Even in the 1920s, when Henry Fowler wrote “Modern English Usage”, “impedance” still had the power to raise the philologist’s ire. In that work, Fowler had three entries that mentioned “impedance”, and they were all negative. Here is an example:

Although Heaviside was not the sort of bloke to pander to the sensibilities of latinists, others were more inclined to take care. Michael Faraday, for instance.

In the Wren Library at Trinity College Cambridge there is a collection of the correspondence of William Whewell. Librarian Nicholas Bell read from a letter from Whewell to Michael Faraday on a recent radio program. There is an MP3 at:

It ran (in part) like this…

“My Dear Sir,

I still think anode and cathode the best terms beyond comparison for the two electrodes. The terms which you mention … show that you have come to the conviction that the essential thing is to express a difference and nothing more. This conviction is nearly correct but I think one may say that it is very desirable in this case to express an opposition: a contrariety, as well as a difference. The terms you suggest are objectional in not doing this.”

Whewell was responsible for the coining of “anode”, “cathode” and “electrode”.

It interests me that Fowler should get so hot under the collar about “impedance”, yet ignore other electrical terms that seem (on the face of it) to be worse. He does not mention “voltage”, for instance. OED2 (the second edition of the Oxford English Dictionary) records the first use of the word “voltage” in 1890 in Pall Mall magazine.

Wikipedia says: The Pall Mall Magazine was a monthly British literary magazine published between 1893 and 1914. Started by William Waldorf Astor as an offshoot of the Pall Mall Gazette, the magazine included poetry, short stories, serialized fiction, and general commentaries, along with extensive artwork.

In other words, the source for the coining of this new technical term was about as technologically sophisticated as Women’s Weekly. It seems that at the time, the word would have been about as acceptable amongst those who took an interest in electrical matters as “amperage” is today. Yet “voltage” somehow has become widely acceptable. Why would this be?

In the 1800s two distinct concepts crystallized which had the same unit: the volt. The first was “electromotive force”, and the second was “potential”. A year or so ago, I wrote a regular magazine column that had a tutorial aspect for some who needed to strengthen their understanding of electrical matters. I invented a circuit to help make the distinction between electromotive force and potential clear. It had four 1.5 volt primary cells and four incandescent lamps in series.

The emf in this circuit is six volts, and yet if you poke around with a volt meter, you will not find a potential difference anywhere that exceeds 1.5 volts.
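As a minimal sketch of that circuit (my assumptions: the cells and lamps alternate around the loop, and the lamps are matched so the drops share equally):

```python
# Walk the loop: a cell contributes +1.5 V of EMF, a lamp drops 1.5 V,
# repeated four times. The interleaving is the point: the total EMF is
# 6 V, yet no two node potentials ever differ by more than 1.5 V.
steps = [+1.5, -1.5] * 4              # cell, lamp, cell, lamp, ...
emf = sum(s for s in steps if s > 0)  # 6 V of electromotive force

node = 0.0
potentials = [node]
for s in steps:
    node += s
    potentials.append(node)

spread = max(potentials) - min(potentials)
print(emf, spread)                    # EMF is 6 V, but no PD exceeds 1.5 V
```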

As conceptualization advanced, this distinction between emf and potential came to be seen differently. Now, we speak of a Thévenin equivalent circuit. When we do this, we mean (of course) “Linear Circuit Model”. In this context, we speak of the circuit’s “Open Circuit Voltage”, or the “voltage at the terminals”. We do somehow need this more general concept “voltage”, and we then use other words to set the details and show what we really mean by it. “Amperage” has no corresponding utility.

As I indicated in recent posts, in the early days of telegraphy, the speeds were so low that transmission lines could be modelled with (distributed) resistance and (distributed) capacitance, and inductance could be ignored entirely. It was Heaviside who first worked out the significance of inductance where it did have to be taken into account, and how to address it. He worked out the criterion for distortionless transmission.

I find it interesting that in the early days of transmission line research, the inductance was completely ignored. It wasn’t until telephony made its demands that inductance became important. Here is a note about the properties of a transmission line in which inductance can be ignored. This is from “Life of Lord Kelvin” by Silvanus Thompson, P329:

You don’t hear much about this “square law” these days!

Back to the Present.

(My mate Cyril complains that this Blog dwells too much on the “very old”.)

The modelling of a transmission line in which inductance does not appear at all is not common. There are circumstances in which it is still completely appropriate. One example is in the probes used to measure the electrical potential inside a living cell. This interesting electrical measurement problem is mentioned in ADALPAD (P845), where it is stated that “high impedance is essential in these applications, since living cells are destroyed by the passage of quite minute currents”, but that is only a part of the story. The electrical activity in living cells involves the movement of ionized molecules. It does matter exactly what these molecules are, as every species of molecule will have its own characteristic propensity to take up or dispose of charge. The introduction of a metallic probe would involve the doping of the cell interior with metal ions, which would invalidate the investigation.

For this reason, probes are made of very fine glass tubes which are filled with an aqueous liquid charged with ionized molecules to match or mimic the liquid in the cell where the potential is to be measured. A metal electrode is installed in the other end of the tube, but the tube life is limited to the time before metal contaminants reach the active end.

This construction gives a probe with a very high series resistance, which is in itself a reason for a very high input impedance on the attached equipment. As well as that, the shunt capacitance in the glass tube wall is distributed along the probe resistance. Here, in the most up-to-date biological research work, we find an analogue for the submarine cables of 160 years ago.

I first read about design solutions to the problem of such a high impedance probe in the lamented Wireless World magazine. (This was very different from its successors in that design details and the design process were discussed.) The idea that I read about there was to apply a negative capacitance to the probe to partially cancel the probe capacitance. This was done with a non-inverting amplifier (I think the voltage gain was 3) with a capacitor to apply capacitive positive feedback.

When I try this now, I find evidence that the old idea of modelling the line with all its capacitance lumped in one spot doesn’t look that good.

Years ago, before I had either circuit modelling or filter design software available, I did some work on ladder networks in which R (series), C (shunt) sections are strung together in cascade. Of course, for any particular RC, the subsequent members of the cascade provide loading and spoil the simple determination of a pole frequency. A usual trick here is to make the impedance of each stage higher than that of the preceding one. If a stage has ten times the impedance of its predecessor, it will not make a significant impact on the pass response of the earlier one.

In the following picture, I show the amplitude and phase responses for three networks. The green lines represent a network with three stages of RC low pass filter with RC = 100us. Corner frequency = 1591 Hz. The stages are isolated from each other with unity gain buffers. That is, each stage suffers no loading from following stages.

The Blue lines represent a “ladder network” with three stages of RC low pass filter. These are directly coupled, but the second and third stages each have an impedance that is 10 times that of the previous stage. The Blue line is a little less sharp in the knee than the green one, but the difference is not great.

The red line represents a “ladder network” with three identical RC low pass filters. The second and third stages impose a load on the previous stage. For the red line, the three poles are “spread out”, and the result is a much less clearly defined knee in the response.

I have taken an interest in extending this to the situation where there are a very large number of identical RC stages. Such a circuit might serve as a model for a transmission line with distributed resistance, and distributed capacitance, such as a glass biological probe discussed above.

Maybe I will go into this a little more in a later post.

45 More Oliver Heaviside and Getting the best speed out of your submarine cable

In the previous post, I wrote about a biography of Oliver Heaviside I had been reading. It is a biography, and not a technical treatise, but it does have end notes to some chapters with pages of maths. Maybe the author is more comfortable with maths than he is with the engineering application of it, so we have to allow that he might have omitted some important details.

My interest piqued, I also turned to my copy of “Life of Lord Kelvin” by Silvanus Thompson.

Kelvin started life in 1824 as William Thomson. He did not become Lord Kelvin until 1892, but Kelvin is the name we know him by, so I will refer to him thus. He died in 1907.

The book shows us that in Heaviside and Kelvin’s early times, transmission lines were modelled as distributed series R and shunt capacitance. No inductance!

Was it just that the significance of inductance was not understood at the time, or was it that the particular applications of transmission lines in those days had the character that line inductance was not significant? The Heaviside book does not tell us, but in the Lord Kelvin book we read:

The “but now” in the above snippet from the book, appears to be when the telephone was introduced and greater bandwidth was required.

When inductance is ignored, then the transmission of a change in voltage down a line is analogous to the transmission of a change in temperature in a solid. This problem had been tackled and resolved by Fourier. Lord Kelvin specifically acknowledged Fourier.

From “Life of Lord Kelvin”

In addition, we are led to believe that Heaviside stated that for analysis purposes, all the shunt capacitance of a transmission line such as a submarine cable could be modelled as concentrated at the centre. For practical purposes, he and his contemporaries seem to have found that simplification justified. In 1850 overland telegraphy was well established, and by 1856 a large number of short submarine cables were in service.

What could be done to increase the speed of a cable that behaved like a hot poker (Fourier)? In looking at this on the simulator, I have chosen a very simple Morse code message: the one that, until just a few years ago, I used to hear from my mobile phone: “SMS”. As most people who know very little of Morse code know, “S” is three dots. We know this from the Morse for “SOS”. “M” is two dashes. My reasons for choosing this as a test message rest both with its familiarity, and with the fact that it is exceedingly easy to generate with pulse generators in the simulator.

My method (This is MY method: there is no hint that Heaviside or Kelvin ever did this.) is to write out my model circuit, and then look at its response in the frequency domain. After making some changes to the model in an attempt to improve this, I look at how it might come up in the time domain. For my models I have chosen my time scale completely arbitrarily. As an artefact of one of my “mucking around” sessions, I have different time scales for my frequency and time domain analyses. Don’t you worry about this. You can do all this with whatever time scale you wish. Just change the capacitor values and the speed of the pulses accordingly.

Before I start, a little note on the way speed is characterized in Morse. Heaviside and Kelvin had not heard of Baudot at this time! (The modern measure of symbol rate is the “Baud”. Computer buffs please note that this is pronounced “Bode”. Named after Baudot, of course.) In the paragraph that follows, the two symbols are referred to as “dit” and “dah”. I believe that these names would have arisen when Morse came to be used for radio communication. They represent the sounds of the symbols when heard on a receiver with a beat frequency oscillator. They were “dot” and “dash” in Heaviside and Kelvin’s day, when the symbols were seen on a paper tape.

Paper tape inker for recording Morse code traffic. The tape is moved by a clockwork mechanism. The received signal appears as dots and dashes on the tape.

From: http://en.wikipedia.org/wiki/Morse_code

“The speed of Morse code is typically specified in “words per minute” (WPM). In text-book, full-speed Morse, a dah is conventionally 3 times as long as a dit. The spacing between dits and dahs within a character is the length of one dit; between letters in a word it is the length of a dah (3 dits); and between words it is 7 dits. The Paris standard defines the speed of Morse transmission as the dot and dash timing needed to send the word “Paris” a given number of times per minute. The word Paris is used because it is precisely 50 “dits” based on the text book timing.
Under this standard, the time for one “dit” can be computed by the formula:
T = 1200 / W
Where: W is the desired speed in words-per-minute, and T is one dit-time in milliseconds.”
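The quoted standard is easy to verify. A short sketch, using the textbook element timings quoted above, confirms that PARIS (with its trailing word gap) is exactly 50 dit units, which is where the 1200/W formula comes from.

```python
# Check of the Paris standard: PARIS plus its word gap is 50 dit units,
# so at W words per minute there are 50*W dits in 60 seconds.
MORSE = {'P': '.--.', 'A': '.-', 'R': '.-.', 'I': '..', 'S': '...'}

def word_units(word):
    units = 0
    for i, letter in enumerate(word):
        code = MORSE[letter]
        units += sum(1 if sym == '.' else 3 for sym in code)  # dit=1, dah=3
        units += len(code) - 1            # 1-unit gaps inside a letter
        if i < len(word) - 1:
            units += 3                    # 3-unit gap between letters
    return units + 7                      # 7-unit gap between words

units = word_units('PARIS')               # 50

def dit_ms(W):
    """Dit time in milliseconds at W words per minute."""
    return 60_000 / (units * W)           # = 1200 / W when units == 50

print(units, dit_ms(20))                  # 50 units; 60 ms dit at 20 WPM
```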

Here is a model of a submarine cable. The model is built according to Heaviside’s simplification: that is, it has all the capacitance lumped in the middle.

I have called the signal on the output “Preece” as this represents a badly designed system (see the last post for information about Preece).

Telegraphers had noticed that when a marine cable developed a fault (resistance in shunt), the speed at which it could be worked increased. The problem with such faults (caused by salt water ingress, I suppose) is that they inevitably got worse. Heaviside said that if only one could have a low resistance across the line that stayed fixed (did not get worse), then the speed of the circuit would be permanently increased. He proposed a resistance of a value as low as one thirty-second of the series resistance of the cable.
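The lumped model makes the trade that Heaviside was offering easy to quantify. The arithmetic below is mine, not a figure from Heaviside, and assumes the lumped model throughout: a low impedance sender, an open-circuit (high impedance) receiver, and all the cable capacitance C at the mid-point between the two halves (R/2) of the cable resistance.

```python
# Lumped-model arithmetic for Heaviside's mid-cable shunt resistor.
# Normalized values: only the ratios matter here.
from math import pi

R, C = 1.0, 1.0          # total cable series resistance and capacitance
Rh = R / 32              # Heaviside's proposed shunt at the mid-point

def parallel(a, b):
    return a * b / (a + b)

# Without the shunt: a single pole from R/2 driving C.
f_plain = 1 / (2 * pi * (R / 2) * C)

# With the shunt: a DC divider (R/2 vs Rh), and a pole from (R/2 || Rh)
# driving C. The shunt swamps the capacitance with a low resistance.
dc_loss = Rh / (R / 2 + Rh)                       # 1/17, about -24.6 dB
f_shunt = 1 / (2 * pi * parallel(R / 2, Rh) * C)

print(1 / dc_loss, f_shunt / f_plain)   # 17x the attenuation, 17x the bandwidth
```

The gain-bandwidth trade is even: the shunt costs a factor of 17 in signal (about 24.6 dB) and buys the same factor in bandwidth, which squares with “lots more attenuation, but a decade more bandwidth” below.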

Here is a frequency domain representation of the performance of the Preece and the Heaviside cable.

The Heaviside cable has lots more attenuation, but a decade more bandwidth. For my time-domain simulations, I am going to assume “Double Current Working”, and a sensitive balanced relay on the receiving end. I am assuming that the relay acts like a zero crossing detector. I have added DC offsets just for display so that the traces are separated.

Note that I have chosen the time scale in the simulation so that the Heaviside trace is good, but the Preece trace is not. I left the time scale the same for all following time-domain simulations.

On all three traces, “Mark” is positive, and “Space” is negative. You can see that the signal on the contacts of the balanced relay in the Heaviside case is a faithful reproduction of what was transmitted. Easily read as “SMS” by the operator.

The “Preece” signal, however, suffers from severe Telegraph Distortion. Telegraph Distortion was the term that used to enjoy official sanction in Standards as the name for distortion of a two-state signal in which the relative timing of the transitions deviates from what was sent.

The first dot is missing entirely, as the sending key has not had time to remove the negative charge on the line capacitance due to the preceding extended Space state. The two dashes of the letter “M” would hardly be recognized as such, and the three dots from the second “S”, sent once the original bias on the line has been neutralized a little, are not much better than the first group of three.

Heaviside had done a good job of improving the performance of this marine cable. The installation of a shunt resistor at the half-way mark in the cable must have been a bothersome thing, though. Best to eliminate that. So I had a crack at it myself. I have an advantage over Heaviside: the spice simulator. My approach was to see what sort of compensator I could add at the receiving end of the line to get the same effect. I started in the frequency domain. In the first instance, I restricted myself to components that would have been available in Heaviside’s day. (NO op-amps!)

Here is my first effort.

Here is how this comes up against Preece and Heaviside in the frequency domain.

You can see that the “Richard” model has a frequency response that is almost exactly coincident with the “Heaviside” one. At this point I am going to stop showing the time domain responses for the receiving of “SMS”, as they are all the same except the “Preece” one.

The Richard circuit model worked as well as Heaviside’s, and it would seem that it would have been much more convenient to place a resistor and inductor across the line at the receiving end, than to install and maintain a resistor under the sea.

The fact that a shunt inductor is of value is of interest in the light of a subsequent event as well.

A question arose: what if the simplification of the cable model by placing all the capacitance in the middle, although shown to be valid for Heaviside’s purposes, is not valid for my process of tweaking my compensation in the frequency domain and then testing in the time domain?

I repeated all the above model runs with a cable model consisting of the cable resistance divided into four equal parts and three capacitances to ground from the junctions. The results were identical, although I had to change the values of resistance and inductance for my terminator. It did not seem to be necessary to try extending the model further towards fully distributed capacitance.

I decided to go a little further. What if I modelled a compensator without the restrictions that the components must have been available to Heaviside? It happens that this gives us a start to an approach that when followed through gives us a scheme that would have been realizable in Heaviside or Kelvin’s day.

I decided to try high frequency boost. This requires voltage gain, which was not available to Heaviside, but follow me like a leopard, and we will see what we could have offered him.

The circular symbols are voltage controlled voltage sources (idealized amplifiers). The number to the right, and below the symbol is the voltage gain.

I won’t cover the process for arriving at this compensator circuit here (although I will account for it if asked).

Here is the frequency response of the compensator.

As with all these plots, the solid line is amplitude, dotted line is phase.

Here is the frequency response of the whole shooting match with compensator included.

Almost 20 dB more gain than the Heaviside scheme of the mid-cable shunt resistor, and a little bit of extra bandwidth. The output from the balanced relay is identical. The extra gain might be important on a very long cable where signal strength at the receiving end might not quite meet the balanced relay requirements.

Let us look at the time domain response of this compensator and consider what it is doing for our long, high capacitance submarine cable. What I have done here is put the compensator first: that is at the transmit end of the cable.

What we see here is the SMS waveform with an amplitude of about 2 volts, and superimposed, 110 volt spikes accentuating the transitions or edges in the data stream. It is easy to see in this time domain representation what the compensator is doing for the cable. At the beginning of each positive pulse, the large spike adds charge to the cable capacitance, and at the end of each pulse, a negative going spike removes that charge again.

After the first trans-Atlantic cable was laid, it was destroyed when one of the engineers set up an induction coil that imposed 2kV impulses on the line. I have only read about this in de-technicalized writing aimed at non-electronic engineers. Such writing leaves the interested reader guessing as to what had really been going on. Perhaps my compensator gives a clue. Perhaps attempts were made to generate a waveform like the above to increase the signalling rate. Perhaps it appeared that if 110 volts was good, then 2kV would be better. Whatever the reasoning, the 2kV was too much for the gutta percha insulation of the day, and the cable was destroyed.

It occurred to me that if this waveform, which was derived from a circuit model that could not have been realized with hardware available in the 1850s, has the desired effect, then maybe we can introduce short pulses to quickly charge and discharge the cable capacitance without necessarily following the waveform exactly.

This graph shows a waveform presented to the send end of the cable (in red). This could have been generated by some mechanical momentary action relay that switches in the higher voltage momentarily at each transition. Nothing here that could not have been done in the 1850s.

The voltage of the spikes is only eleven times that of the raw data (as opposed to 55 times from the linear model). However, the height and width of the spike are adjusted to have about the same impact on the charge of the line capacitance. The receive end in this model has no sort of compensator or special termination: just the model of the balanced relay (zero crossing detector). The result is a perfect Morse rendition with a little delay (green).
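Here is a crude time-domain sketch of the idea, with a single first-order RC standing in for the lumped cable. The numbers are illustrative assumptions of mine, not Heaviside's: a kick of 11 times the Mark level for a tenth of a time constant.

```python
# A first-order RC stands in for the (lumped) cable. The drive is either
# the raw Mark step, or the step with a brief higher-voltage kick at the
# transition. The kick charges the line capacitance far sooner.
tau = 1.0                 # cable time constant (normalized)
dt = tau / 10000          # Euler integration step

def settle_time(drive, target=0.5):
    """Time for the RC output to first reach `target` of the Mark level."""
    v, t = 0.0, 0.0
    while v < target:
        v += (drive(t) - v) * dt / tau
        t += dt
    return t

plain = lambda t: 1.0                               # raw Mark level
spiked = lambda t: 11.0 if t < tau / 10 else 1.0    # 11x kick for tau/10

t_plain = settle_time(plain)      # ~0.69 tau (the usual RC ln 2)
t_spiked = settle_time(spiked)    # a small fraction of that
print(t_plain, t_spiked)
```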

A little more Heaviside to come.

44 A look at Oliver Heaviside and a defence of Lord Rayleigh

I recently found a biography of Oliver Heaviside (1) in a second-hand bookshop. It was written by one Paul Nahin, who claims (!) to be a professor of electrical engineering at the University of New Hampshire. There were a few gems in there. Heaviside was involved in the design of early submarine telegraph cables. We have all read about the attempts and eventual success in placing the trans-Atlantic cable. As you would imagine (when you think about it), there were shorter cables before that, so the technology was established before the attempts to cross an actual big ocean were undertaken.

Heaviside was involved with a company that installed a cable between Great Britain and Scandinavia. My interest rose when I read that it fell to Heaviside to explain why the data sending speed attainable in one direction in the cable was higher than in the other direction. Was I reading of a real phenomenon, or as I read on would I find that the observers had been mistaken? Then again, maybe my leg was being pulled. No. It was real. Wouldn’t the unidirectional – 100% pure copper – long grained rice conductors with homeopathically treated dielectric – speaker cable loonies love to know about this. Please let’s hope that they do not get wind of it.

As I read on, I encountered a slightly confused explanation, but I was able to get a feel for what Heaviside discovered. First, I read, he invoked the reciprocity theorem to establish that the cable itself could not support different speeds in different directions. I was familiar with the idea of reciprocity from the field of acoustics. However, when I look it up now, (here) (but also see here) I find that it is considered to be a circuit network theorem. The thing about it is that the conditions at the sending and receiving ends have to be identical. In a telegraph line in the 1800s, the conditions at each end of the line were not identical. I will make some assumptions to make my exposition simpler. You will be able to imagine what the impact of changing assumptions would be.

One early scheme for getting the most out of one’s telegraph line investment was what was called “Double Current Working”. In this scheme, instead of just using the morse key to turn a supply of current on and off, two batteries were provided, and the key switched the line from one polarity to the other.

Double Current Working Telegraph

Advantages were that one could apply twice the voltage for a given voltage rating of the line, and the threshold between the Mark state and the Space state at the receiving end was always zero, no matter what the line attenuation was. A feature of double current working is that at the sending end, the impedance to line is always low. At the receiving end, there would be a balanced relay of high resistance. This means that the sending end of the line is terminated with a low impedance, and the receiving end with a high impedance.

“Double Current Working” has come down to us and is only fading away now. It is found embodied in the up until recently ubiquitous RS232.

It is surprising these days to see that the telegraph lines of that era could be modelled without reference to inductance. An aerial line was modelled by a series resistance with a small shunt capacitance. The marine cable was modelled with a series resistance and a much larger shunt capacitance. In the case of the telegraph to Scandinavia, there was a land line from the telegraph office to the coast at each end, and critically these two land lines were not the same length.

I have modelled this without any attempt to have realistic values for the components in the model, as that is not necessary to make the point. In my model, I have imagined that the land line at one end has a resistance of 250 ohms, and the (longer) land line at the other end has a resistance of 1000 ohms.

There are two models of the line here. To the left is a model where sending is taking place from the shorter (250 ohm) land line end, and to the right, sending is taking place from the long land line end. I have used the frequency domain to contrast the bandwidths of the two models.

… About an octave difference.
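The asymmetry can be sketched with a lumped model. The values below are as arbitrary as the ones in my model; the point is only that with a low-impedance sender and a high-impedance receiver, the cable capacitance only "sees" the resistance on the sending side of it, so the corner frequency depends on which end you drive.

```python
import math

# Assumed illustrative values (my model made no attempt at realism either)
R_SHORT = 250.0    # ohms, short land line
R_LONG = 1000.0    # ohms, long land line
R_CABLE = 1000.0   # ohms, cable series resistance (assumed)
C_CABLE = 1e-6     # farads, cable shunt capacitance lumped at its midpoint (assumed)

def corner_hz(r_send_land):
    # Low-impedance sender, high-impedance (effectively open) receiver:
    # the shunt capacitance sees only the sending-side resistance.
    r_seen = r_send_land + R_CABLE / 2
    return 1.0 / (2 * math.pi * r_seen * C_CABLE)

f_from_short = corner_hz(R_SHORT)   # sending from the 250 ohm end
f_from_long = corner_hz(R_LONG)     # sending from the 1000 ohm end
octaves = math.log2(f_from_short / f_from_long)
print(f_from_short, f_from_long, octaves)
```

With these (invented) values the two corner frequencies come out exactly one octave apart, in the spirit of the frequency domain plots above.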

There you are… if you want to go into the exotic speaker cable business – you can cite Oliver Heaviside!

Another matter caught my eye, and this worried me more.

Heaviside had a running battle with a William Henry Preece. Preece fancied himself as a competent engineer, which he was not, but he was successful with personal advancement. He rose to the position of Engineer In Chief of the (British) General Post Office. After mentioning some of Preece’s crack-pot ideas, the author attempts to temper our scorn. I think that his point was that these (mid 1800s) were early days for the development of electrical technology, and there were many ideas about that might seem silly today. At least I think that that is the point the professor was trying to make. At this point there was a reference to note number 5. at the end of the chapter.

Note number 5. reads:

“In fairness to Preece I should admit that a-c circuit analysis provided some interesting surprises even for a genius like (sic) Lord Rayleigh. In Volume 1. of his Theory of Sound (London Macmillan, 1894, pp. 442-443), for example he discussed the curious ability of an alternating current in a main circuit to divide into two parallel branch circuits in such a way that each branch current, individually, is numerically greater than the main current!”

I put the “(sic)” in to indicate that I am quoting him exactly. Not my words. There was no genius like Lord Rayleigh. Nahin meant “such as Lord Rayleigh”. Nahin is an American, so allowances made.

It happens that I have some familiarity with Theory of Sound. Most of it is not suitable for bed-time reading, as the maths gets a little heavy for that. The sections of the book about electrical matters are very interesting, and not heavy going for the modern reader, and I recently flipped through the book reading only the electrical bits over a few bedtimes.

What could Nahin have meant by the words he chose in that Note 5.? First of all, that exclamation mark is not Lord Rayleigh’s: it is Nahin’s. I can’t tell what Nahin is getting at here, but if he is having a snigger at Lord Rayleigh, then he has me to answer to.

Could it be that Professor Nahin is unaware that a situation where “ an alternating current in a main circuit to divide into two parallel branch circuits in such a way that each branch current, individually, is numerically greater than the main current” is commonplace? If he is unaware of this, then he is hardly competent to write this book.

Did he think that Rayleigh’s readers of the day would be unaware of such a circumstance, and that it might have been appropriate in the mid 1800s to inform them with an exclamation mark? If he did, he is misrepresenting Rayleigh, as there is no exclamation mark, and no element of “Gee Whizz” in Rayleigh’s account. Remember, this is an end-note to support the notion that there were people other than Preece who had crazy ideas.

It is interesting that in the early days the grasp of how electrical things worked was only just crystallizing. Added to that, people’s thinking tended to follow what they saw around them. In those days a telegraph line was modelled with what we would call resistance and capacitance, with no consideration for inductance, but the apparatus on the laboratory bench was limited. Inductances were there: called coils. Capacitors were more of a problem. Only low values were available. When an investigator thought of the way a transmission line worked, he (I would like to say “or she”, but I think it was always a “he”) thought of capacitance. When he planned an experiment on the bench, he thought of what he could get.

These days, if we think of a circuit in which “ an alternating current in a main circuit to divide into two parallel branch circuits in such a way that each branch current, individually, is numerically greater than the main current”, we think of one branch containing a capacitor and the other, an inductor.

Here is a circuit that performs according to professor Nahin’s exclamation. I just slapped this together on the simulator to make the point. The current amplitudes stated are just measured from the waveforms. I have not done the vector analysis to ensure that they agree to any particular accuracy.

Here are the waveforms:

The inductor current and the capacitor current are clearly “ individually, … numerically greater than the main current”

However, this is not the type of circuit that Rayleigh used as his example in the indicated passage of Theory of Sound. I think that rounding up a whole microfarad might have been a problem. It would have taken a whole lecture theatre full of Leyden Jars. (It says here that a Leyden Jar had a capacitance of about 1 nF, so a thousand of them would be required.)

He chose an example that was much easier for him.

He describes a multifilar wound coil with five wires. Three of these are placed in series to make one inductor. The other two are in series to make a second inductor. In modern terms, he had made a transformer with a 3:2 turns ratio, and a very low leakage inductance. The Theory of Sound was published in 1877, but OED2 records the first use of the word “Transformer” in the modern sense by Hospitalier in 1883.

Again, I have just slapped something up on the simulator to make the point.

Here the line above the inductors “K1 L1 L2 1” is a directive to the simulator to establish mutual inductance with a Mutual Coupling Coefficient of 1. L1 and L2 are consequently a transformer.

Here are the waveforms:

You will see that the amplitude of the current in L1 and the amplitude of the current in L2 exceed the amplitude of the supply current. No need for an exclamation mark here.
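For anyone who wants the arithmetic rather than the simulator, here is a hedged sketch of the same effect. The inductance values are in the 9:4 ratio implied by the 3:2 turns ratio, with the mutual inductance set for (near) unity coupling; the winding resistances are invented, and they are what keep the currents finite.

```python
import cmath, math

# Two tightly coupled coils in parallel branches across one source.
# 3:2 turns ratio, so L1:L2 = 9:4 and M = 6 on the same scale. All values assumed.
w = 2 * math.pi * 1000          # rad/s, assumed test frequency
L1, L2, M = 9e-3, 4e-3, 6e-3    # henries
r1, r2 = 3.0, 2.0               # ohms, winding resistances (assumed, proportional to turns)
V = 1.0                         # volts across the parallel pair

# Branch equations:
#   (r1 + jwL1) I1 + jwM I2 = V
#   jwM I1 + (r2 + jwL2) I2 = V
a11 = r1 + 1j * w * L1
a12 = 1j * w * M
a22 = r2 + 1j * w * L2
det = a11 * a22 - a12 * a12
I1 = (V * a22 - V * a12) / det   # Cramer's rule
I2 = (V * a11 - V * a12) / det
I_main = I1 + I2

print(abs(I1), abs(I2), abs(I_main))
```

Each branch current comes out numerically greater than the main current, and the two branch currents stand in the 3:2 turns ratio, just as ampere-turns balance suggests.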

Note (1) Paul J. Nahin, Oliver Heaviside – The Life, Work, and Times of an Electrical Genius of the Victorian Age. Johns Hopkins University Press, paperback 2002. ISBN 0-8018-6909-9

Post 43. I was Cross!

Posted 25-03-2018

This post is made up of old material. It had been stored away in several emails that I had sent off years ago.

Last night I had dinner with an old electronic engineering friend. Several interesting points came up in our discussion, and in many cases, I was able to refer to a post in this blog where I had expanded on the matter. (It was George Bernard Shaw who wrote “I often quote myself. It adds spice to my conversation.”) There was one exception: and that is the subject of this post. So…. here it is, out of the shadows and where I can refer people to it.

This post does not expand on some technical matter for my engineering peers. I expect that the general thrust of what I write here is already known to you.

It all started at a different dinner party. NOT a gathering of engineers. One fellow was expanding with raucousness and passion on why he hated digital audio. As he described the quantization process, his face screwed up as if he was being forced to sniff fresh dog shit. I took the view, and I still do, that engineers have put a lot of work into developing modern audio systems, and for a person who does not understand any of the issues at stake to give his baseless opinion priority over the work of the many people who have gone into the design problems in detail IS OFFENSIVE.

I “blew my top”.

Later, I tried to be a little constructive with what follows. I took a two pronged approach. One was an “in principle” approach, and the other was a report of actual results.

Prong 1.

Here is a (slightly edited) version of what I wrote over several emails.

First, I did a little circuit modelling.

This circuit model (I explained) is to generate some demonstration signals. This circuit takes a 1 kHz sine wave, samples it at 42 kHz (a sampling frequency, I believe, of the sort used in digital audio) and then limits the frequency content to that which is discernible to human ears.

This is the sort of situation in which one could easily get buried in detail. This sampler samples the signal in time. I have not introduced the complexity of quantizing it in voltage as well. I just left out any attempt to present some maths to justify this, as my reader would not have appreciated that. He might not have understood the distinction between quantizing in time (sampling) and quantizing in voltage.

Here are the signals we get.

The red trace is the original audio.
The brown trace is the digitized audio.
The blue trace is the result passed through the low pass filter.
The thing is that NO ONE can tell the sound of the blue trace from the sound of the red trace.
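For the sceptical reader, the demonstration can be roughed out numerically. This is my own crude stand-in, not the original circuit model: a 1 kHz sine, held at 42 kHz sample instants, then passed through a single-pole 10 kHz low pass. The worst-case difference from the (suitably delayed) original comes out at a couple of percent of the 1 volt signal, and most of that is the residual "steppiness" discussed below.

```python
import math

F_SIG = 1000.0       # Hz, test sine
F_SAMPLE = 42000.0   # Hz, sampling rate as in the model above
F_CORNER = 10000.0   # Hz, single-pole low-pass stand-in for the filter (assumed)

tau = 1.0 / (2 * math.pi * F_CORNER)   # filter time constant, ~16 us
DT = 1e-7                              # simulation time step
delay = tau + 0.5 / F_SAMPLE           # filter lag plus half a sample of hold lag

v_filt = 0.0
worst = 0.0
for i in range(int(0.005 / DT)):       # simulate 5 ms
    t = i * DT
    k = int(t * F_SAMPLE)              # sample-and-hold: quantized in time only
    hold = math.sin(2 * math.pi * F_SIG * k / F_SAMPLE)
    v_filt += (hold - v_filt) * DT / tau   # one-pole RC low pass
    if t > 0.001:                      # ignore the filter's start-up transient
        ref = math.sin(2 * math.pi * F_SIG * (t - delay))
        worst = max(worst, abs(v_filt - ref))

print(worst)   # worst-case difference from the delayed original, in volts
```

Note that the comparison is against a delayed copy of the original, for exactly the reason given below: the delay carries no audible information at all.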

In a following email, I covered myself a little for simplifying the matter as follows:

I have been thinking a little more about what I sent you earlier. I was trying to be helpful, but the task is really very difficult. It would REALLY NOT be helpful to you if I said that you needed an electronics degree to understand, or to set up tutorials to cover the material to the same level of detail that brought you up to electronics degree standard. So I have thought up a much simplified model to demonstrate the point.

You need to be aware that what I have done is DEMONSTRATE the point: not PROVED it.
If you were to focus your attention on the simplicity of my model and search for weaknesses in it, I could counter every discovery of weakness that you made, with the addition of complexity to overcome that weakness, and we could continue that process until our correspondence had almost amounted to a degree in electronic engineering by correspondence.

However, there is a little I want to add to the exegesis of the simple model so as to make the point clearer.

In the above trace, we see the output of the circuit that is quantizing the signal in time (brown) and the signal at the output of the filter, which I have previously identified as indistinguishable from the original signal (blue).

You will notice two things about the blue trace:
1. The “steppiness” is smoothed out.
2. The blue trace is offset a little to the right. This means that the blue trace is delayed by about 100 microseconds or so.

You will see that the delay doesn’t matter at all, as it can be completely overcome by just starting the playback of the recording 100 microseconds earlier!

The original sine wave signal had an amplitude of 1 volt. (2 volts “peak to peak”)
The 135 mV peak to peak of the “steppy” artifact corresponds to an amplitude of (roughly) 70 mV.
This is 1/14.2 of the amplitude of the incoming signal.
This is at a signal level of – 23 dB with respect to the incoming signal.
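The arithmetic above can be checked in a couple of lines; the small differences from my figures are just the rounding of the 70 mV estimate.

```python
import math

amp_signal = 1.0          # 1 V amplitude (2 V peak to peak) test sine
v_pp_step = 0.135         # measured "steppy" artifact, peak to peak
amp_step = v_pp_step / 2  # roughly 70 mV amplitude

ratio = amp_signal / amp_step
db = 20 * math.log10(amp_step / amp_signal)
print(ratio, db)   # about 1/15 and about -23 dB
```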

We can observe that the filter appears to “smooth out” the “steppy” signal. However it is actually wrong to characterize it this way. If you were to print out the “steppy” line, on a huge piece of paper, you could draw a line that appeared to “smooth” it out, but then again, someone else could draw a slightly different line which also represented a “smoothing” out of the steppy line.

The line (blue) is actually a VERY SPECIAL case of “smoothing out”. It is the case where all the high frequency components have been (for all practical purposes) removed.

Remember that I just chose this particular filter design to make a simple point. If you were to complain that it does not filter enough, then I can increase the filter performance WITHOUT LIMIT, so, in this way I can eliminate that objection, so don’t go there.

The filter I have chosen is just to be illustrative.
The graphs that I have sent you are said to be in the “time domain”. The horizontal axis is time. An alternative way of looking at the same thing is to look at the frequency domain. It is conventional (and with good reason that is outside the scope of this email) to do this with log-log axes.

Here is a representation in what we call the “frequency domain”

In the graph above, the dark line is the filter performance.

I notice that I had chosen a filter with a 10 kHz corner frequency. This could give rise to an objection. I will work out how it looks with all the high frequency stuff 30 dB stronger as they would be with a 20 kHz filter.

The red line represents our demo test signal.
The green lines represent the “steppiness” in the frequency domain. I have drawn them in by hand. The green lines continue on to the right until the point is reached where they have gone down below the sort of “electron per fortnight” current levels, and you are into quantum physics.
If you look at the first green line, you will see that it is -65.3 dB with respect to the signal. However the “steppy” signal was already at -23 dB with respect to the signal, so the steppiness comes out to be at (-23)+(-65.3) dB = -88.3 dB. This is certainly below the noise and undetectable, but as I said before, if you want it to meet some stricter criteria, I can extend the filtering WITHOUT LIMIT.

Another way of looking at it, is to say that the filter separates everything to the left of the purple line from everything to the right of it.

It turns out that your ears do this filtering for you and for nothing. That is, if you were doing ABX tests and there were ultrasound transducers in the room putting out the green line signal, you could not detect them. However there IS still a good reason for doing that filtering. That is that if the filtering is not done, then the presence of the out-of-band signal could degrade the performance of the electronics between the D to A converter and your ear.

Since that correspondence took place, I have looked into this a little further. The sampling noise will be a sawtooth waveform, but the polarity of it depends on the sign of the slope of the signal, and the magnitude on the steepness of the slope. In real life, it will have sidebands! Let us assume that it is a constant amplitude sawtooth with a peak to peak value of 135 mV. Is this a worst case?

Here is the spectrum of it:

I have extracted the level of the first three spectral lines from the simulator and added them to the plot. Note that the voltage levels in the simulator FFT package are dB re a volt RMS. Our imagined signal was 1 volt amplitude which is -3 dB on the above scale.

Sampling noise spectrum:

Spectral line    Frequency   Level re      Level re 1 V       Level re 1 V amplitude
                             1 volt RMS    amplitude sine     if the filter corner
                                           at 1 kHz           frequency had been 20 kHz

Fundamental      42 kHz      -96 dB        -93 dB             -63 dB
2nd harmonic     84 kHz      -125 dB       -122 dB            -92 dB
3rd harmonic     126 kHz     -128 dB       -125 dB            -95 dB

Note that the -93 dB figure in this table is 4.7 dB different from the figure arrived at (above) following different assumptions. I am not going to attempt to track the origins of the differences. The assumptions are very crude to start with. My point is made without having to resort to a precision as fine as 4.7 dB!

If a person such as my correspondent reckons he can hear that sampling noise, he is having a wank. He needs to get a good solid ABX test up his jacksie. This brings us to:

Prong 2.

Remember that this is NOT about record players vs some other sound reproduction system. The statement I was making was all about CD quality sound. It does not apply to MP3 quality sound.

Imagine this system.

A person can listen to the music with both switches in the A position, in which case digital audio doesn’t come into it. The same person can listen to the same music with both switches in the B position in which case, the analogue signal is converted to digital and then back to analogue again. The thing is, it is not possible to hear the difference. Very well documented.
There is actually no information flowing up the wire to the speaker or the headphones that could be utilized to discriminate.

I found the following for my correspondent.

Lipshitz and Vanderkooy are both audio equipment designers. I think that Lipshitz is an electronics engineer. Vanderkooy is a physics professor in Canada.

My correspondent did respond to my emails, but in his responses, he only addressed “Prong 2”. The substance of “Prong 1” was not to his taste. Do you reckon that a person who will not make the effort to get his head around this stuff, is entitled to an opinion about it?

Post 42 Two and a Half Times Part 2

There has been a delay of many months in getting this blog post out the door. This has not been because of flagging interest, but the opposite: there have been interesting revelations and discoveries that have caused the goal posts for this write-up to move about. Several times, I had thought that this post was complete, and then new information came in, or a new insight was gained that seemed to redefine the problem.

The factors that have come to bear, fall (in part) outside the scope of this blog, as I have tried to keep this blog focussed on the technical aspects of electronic engineering. The subject of how to order the affairs of an engineering team for synergistic productivity has been kept aside for publication in a different forum. However we can never really divorce the consideration of how we design something from the consideration of how that design task will fit into an overall plan.

I will touch lightly here on one aspect of project management that has had an impact in this case. This is risk management. Almost by definition, until some development work has been performed, we do not know what the outcome will be. If we knew the outcome in every detail, there would be no development work to do. One possible path through the development work is that a design approach is thought up and documented sufficiently for a prototype to be made. Testing of the prototype will yield results that might:

(a) Prove that the idea is without flaw and can be incorporated into the design without modification.

(b) Show that the idea is workable, but that changes have to be made in breathing life into the prototype. These changes might represent a discovery that there was a mistake in the concept, or a mistake in the documentation that served to represent an invocation of the concept.

(c) Show that the idea is not suitable to proceed with.

Of course, one hopes for an (a), but one has to be prepared to proceed with the project whatever the outcome of the prototype building and testing (the “technicianing”) effort.

Note that if no competent technicianing effort is brought to bear, then the conclusion will be that we have a (c). This might be an error, and a really good (a) or (b) opportunity can be missed.

It falls to the project manager to conduct realistic risk analysis before the technician sets to work, and to have plans up his sleeve for all outcomes. Part of his work is to evaluate the competence of the technician workforce. If the technician assigned cannot distinguish between a failure on his part, and a failure of the concept, then a different plan is called for.

During explorations with spice during the preparation of this blog post, I discovered a simple mistake in a concept implementation that I had made in 1987. [Spice was not available (to me) at that time. The first implementation of spice that stepped out of the mainframe/Fortran environment seems to have been released in 1985. (See here.) Another reference says 1989, but it looks as if that can’t be right. James Fenech has a “Pspice” book that is dated 1988. In my next job, (1988) I was using MC3, an early schematic entry based spice invocation.] The mistake was immediately obvious, and very easy to fix. It would have been very obvious, and easy to fix by the person charged with the task of getting the prototype going in 1987.

The concept described in this blog post was wrongly categorized as a (c) in 1987 by the team technician and my boss. A big mistake.

Here we go then:

My second example of the exploitation of the relationship of two frequencies in the ratio of 2.5 to 1 is also a communications one.

The application was a network of stations that were to be linked by radio. There was one central station which was to gather information from all the others. Some radio links were short and could be expected to provide a strong low-noise signal at the receiver. Other links were longer or involved difficult paths, and were expected to present a noisy or degraded signal at the receiver. We were aware that it would take more time to send an error free packet of data down a noisy and degraded link than through a high fidelity one.

The amount of traffic was such, that it looked as if we could not slow the traffic down on all links to meet the needs of the worst one. We needed a system with a variable bit rate. The rate on each link could be set at commissioning time, or (and this was my hope) set dynamically to adjust to the needs of each link. Dynamic bit rate adjustment might require expensive or tricky software. We did not have any idea of an algorithm for it. It looked as if “set up” bit rate for each channel would be workable, if not complying with all my flights of fancy.

Many readers in this forum know the difference between bit rate and symbol rate. For those who are not quite clear on this, here is a brief diversion.

The symbol rate, for which the unit is the baud (pronounced “bode”), is the rate at which symbols are passed. Each symbol, however, can carry more than one bit of information. The number of bits per symbol need not even be an integer. Imagine a channel that transfers equally likely symbols that are a number between 0 and 9.

Number of choices, N = 2^n, where n is the number of bits.

n = log2(N) = ln(N)/ln(2)

For N = 10, n = log2(10) = 3.321928

At first, it might look like a really difficult problem to deal with symbols with that number of bits each, but it is in fact very easy.

1. Look at the first symbol

2. Get the value of the symbol.

3. Multiply by 10

4. Add the value of the next symbol.

5. Loop back to step 3. until the end of the message is reached.
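In Python, the bits-per-symbol figure and the accumulation loop above look like this:

```python
import math

# Bits of information per equally likely decimal digit
n = math.log2(10)
print(n)   # 3.321928...

# The five steps above, as a decoding loop
def decode(symbols):
    value = symbols[0]           # steps 1 and 2: take the first symbol's value
    for s in symbols[1:]:
        value = value * 10 + s   # steps 3, 4 and 5: multiply by 10, add the next
    return value

print(decode([3, 1, 4, 1, 5]))   # 31415
```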

I first encountered a simple and practical application of a non-integer number of bits in the Digital Equipment Corporation Radix-50 system. https://en.wikipedia.org/wiki/DEC_Radix-50

In this system, characters were drawn from a character set of 40 characters

(space, A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y, Z, $, ., %, 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9). Forty is “50” when expressed in octal, the preferred number base on DEC machines; hence the name. The number of bits of information embodied in one of these characters is 5.3219 (or thereabouts). Packing three of these characters into a 16 bit word is very simple.

(using decimal notation)

The characters are assigned values from 0 to 39

Take the first character and multiply its value by 40

Add the second character.

Multiply the sum by 40

Add the third character.

The maximum value of the result is 63999. This is less than 65535, the maximum value we can store in 16 bits (binary 1111 1111 1111 1111), and will thus fit in a 16 bit word.

Extracting the characters is easily accomplished by dividing by 40 a couple of times and saving the remainders. All this was worthwhile in the days when non-volatile memory was made of magnetic cores, and a 32k x 16 memory board was a big expensive memory!
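Here is a sketch of the packing and unpacking just described, with the character ordering taken from the list above:

```python
# The 40-character set in the order given above, values 0 to 39
CHARS = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"

def pack(three):
    """Pack three Radix-50 characters into one 16-bit word."""
    a, b, c = (CHARS.index(ch) for ch in three)
    return (a * 40 + b) * 40 + c

def unpack(word):
    """Divide by 40 a couple of times and keep the remainders."""
    word, c = divmod(word, 40)
    a, b = divmod(word, 40)
    return CHARS[a] + CHARS[b] + CHARS[c]

print(pack("999"))           # 63999, the maximum -- fits in 16 bits
print(unpack(pack("ABC")))   # ABC
```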

In a variable bit rate system, it is a simple matter to vary the multiplier to suit the number of choices represented by the symbols in each link.

[Note that we might regard this as an extension to the simplified concept we apply when we say (for example) that a 74LS374 is an eight bit register. We say this without regard to what information it might hold in any particular implementation. If it only ever holds the ASCII representations of the numerals 0 to 9 then there is a sense in which it is a 3.321928 bit register. The use of bit is thus context sensitive. I have never seen this lead to ambiguity. There is a third level of sophistication that we could apply when we consider the probabilities of occurrence of the particular values (following Shannon). The communication system being described here employed a layered architecture, and the layer being considered (lower layers) did not have information about the meaning of the data that was being handled, or the probability of occurrence of any particular values.]

The data was encoded by phase shift keying a tone with a frequency that was placed in the middle of the baseband. This was 1650 Hz. The symbol rate was 660 baud. The time required to set up the carrier with a new phase, let the bandwidth-limited channel settle, and then measure the phase at the receiving end was just 2.5 times the carrier period.

The receiver was phase locked to a pilot tone with a frequency at the symbol rate, that is 660 Hz. The symbol time was divided up like this:

In this receiver the channel was gated so as to accept only the last two fifths of the symbol time. The gate period is exactly one cycle of the carrier frequency. The symbol data is determined from the timing of the positive going zero crossing during that time. This was done with a technique that I think I must have pinched from Ed Cherry’s “Omtrack” Omega receiver. An eight bit counter is clocked at 256 times the carrier frequency. When a trigger circuit detects the positive going zero crossing, the contents of the counter are transferred to an eight bit latch.
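The counter-and-latch phase capture can be sketched as follows. This is my own idealization, not the original hardware: the counter is assumed to start from zero in step with the carrier, and round() is used where a real counter would truncate.

```python
F_CARRIER = 1650.0   # Hz, carrier in the middle of the baseband
COUNTS = 256         # 8-bit counter, clocked at 256 x the carrier frequency

def captured_count(phase_deg):
    """Count latched at the positive-going zero crossing of
    sin(2*pi*F_CARRIER*t + phase). The counter free-runs in step with
    the carrier, so the latched count is a direct phase reading with
    360/256 ~ 1.4 degree resolution."""
    t_zc = (-phase_deg % 360.0) / 360.0 / F_CARRIER   # first zero crossing time
    return round(t_zc * F_CARRIER * COUNTS) % COUNTS

for phase in (0.0, 90.0, 180.0, 270.0):
    count = captured_count(phase)
    recovered = (-count * 360.0 / COUNTS) % 360.0
    print(phase, count, recovered)
```

The recovered phase matches the applied phase at every test point, which is all the latched count has to provide.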

The pilot tone will add some error, although this could easily be corrected for by cancelling it out. In the first instance, (pilot tone 15 dB below carrier) we showed that the system would work without correction.

One task was the establishment of message timing at the receiver. This was a task for which an approach was proposed but several back up approaches were formulated in case there were problems with the first.

The proposed message format consisted of an initializing period during which a 660 Hz (symbol rate) pilot tone was transmitted at the maximum amplitude provided for by the channel. After sufficient time for the receiver phase locked loop to capture the message pilot tone phase trajectory, the pilot tone amplitude was reduced to a small fraction of the initial amplitude, and the receiver phase locked loop speed was reduced, as from then on it only had to correct for errors due to phase drift, a low frequency phenomenon.

I knew less about feedback loops then than I came to know later, and the provision I made for adjusting the speed of response of the phase locked loop (fast for acquisition: slow for phase maintenance) was very naïve by the standards of today. I used CMOS switches to select between two different loop filter alignments. Interestingly, I have seen that “naive” scheme published by others, so I was not the only one to miss an opportunity. Perhaps varying the speed of a control loop could be the subject for another blog post (but see below).

The interesting trick is that the gate circuit which passes the channel signal for exactly one cycle of the carrier, also forms the first block of a neat phase comparator for the phase locked loop that keeps track of symbol timing. This phase comparator has very good resistance to interference from the carrier. The signal from the gate is fed to an integrator. Over the one carrier cycle time, the net contribution to the charge on the integrator from the carrier is zero. The sample of the pilot tone, is however a measure of the phase error. The output of the integrator is captured at the end of the integrate time by a sample and hold circuit.
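The "carrier immune" property is easy to check numerically: over a gate of exactly one carrier cycle, the carrier integrates to nothing, while the (smaller) pilot tone leaves a residue that moves with its phase. The amplitudes and the simple rectangular integration below are my own choices, not the original circuit values.

```python
import math

F_CARRIER = 1650.0
F_PILOT = 660.0
T_GATE = 1.0 / F_CARRIER        # the gate passes exactly one carrier cycle

def gated_integral(signal, n=20000):
    """Rectangular (Riemann) approximation of the gated integrator."""
    dt = T_GATE / n
    return sum(signal(i * dt) for i in range(n)) * dt

# Over exactly one cycle the carrier contributes (essentially) nothing...
carrier = lambda t: math.sin(2 * math.pi * F_CARRIER * t)
print(gated_integral(carrier))   # ~0

# ...while the pilot's contribution varies with its phase error theta
for theta in (-0.2, 0.0, 0.2):   # radians
    pilot = lambda t, th=theta: 0.2 * math.sin(2 * math.pi * F_PILOT * t + th)
    print(theta, gated_integral(pilot))
```

The pilot residue grows monotonically with the phase error over this range, which is what lets the integrator output serve as the phase comparator signal.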

I have shown the narrow “Sample” pulse occurring right at the end of the integrate time. Of course, the integrate time could be brought to an end, and then the static voltage on the integrator sampled after that. Analysis showed that this was not necessary. The situation is analogous to an A/D converter following an amplifier with no sample and hold. If the slewing rate of the amplifier output is low enough, and the A/D is fast enough, the special saving of a static value is not required.

This is the block diagram of the “Carrier Immune Phase Comparator”.

For this phase comparator, we have:

RC is the time constant of the integrator.

Note that this expression has two components that depend on the pilot tone signal. θ is the phase error, and a is the amplitude. If we like, we can represent these as being introduced to the loop separately. That is, we can take a out of KD and apply it to the loop separately.

You can see that the (B) block diagram representation is the same as the (A) representation, except that the θ and the a components are shown separately. This clarifies an interesting point. There is a multiplier in the loop, and the loop gain depends on the amplitude of the pilot tone.

Here I show a “canonical” phase locked loop representation on which I have added the signal names from the discussion above.

If the loop filter has a characteristic which has falling gain with increasing frequency (which it will inevitably have), then the speed of the loop will change as the loop gain changes.

Here I show an arbitrarily chosen loop filter characteristic. The brown line represents the loop gain that we would have during the initializing period when the pilot tone is at full strength. The blue line represents the loop when the pilot tone has reduced by 15 dB during the data transfer part of the message. This means that a has reduced by a factor of 0.178. Note that as the loop gain is reduced (Arrow 1. on diagram), the unity gain crossover frequency is reduced (Arrow 2. on diagram) by a factor of 3.2/16 = 0.2 (which is about 2.3 octaves)
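The figures quoted can be checked with a couple of lines of arithmetic; this is just the standard dB-to-amplitude conversion and an octave count for the crossover shift, nothing specific to the project:

```python
import math

# The pilot tone drops 15 dB during the data phase: convert to an
# amplitude factor (dB here describes amplitude, hence the /20).
pilot_drop_db = 15.0
a_factor = 10 ** (-pilot_drop_db / 20)
print(round(a_factor, 3))            # 0.178, as quoted

# The crossover shift read off the plot is 3.2/16 = 0.2; express that
# as octaves to check the "about 2.3 octaves" figure.
crossover_shift = 3.2 / 16
octaves = math.log2(1 / crossover_shift)
print(round(octaves, 1))             # 2.3
```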

The red line is the pilot tone frequency.

I was very pleased with this phase comparator concept, although these days, I would approach that original problem very differently. It seemed to me that the line of thinking behind this phase comparator might have other applications, and indeed this has been the case.

Blog Post 41 Two and a Half Times 1B bis

10-01-2017       Happy New Year!

I thought I had done with “Two and a Half Times 1B”, but recently, I was sorting some old papers and I found circuit diagrams of the receiver that I described in the last post. A lot of what I reported in Post 40 was from memory, so it is interesting to see this material and see how good my memory has been. If you are coming here from scratch, this won’t make much sense, so read Blog Post 40 first.

To present this blog to you, I use WordPress. This seems to have lots of nice features, but it does limit the width of the part of the screen that I can use for diagrams. For those of you with moderate screens, I wanted to use 100% of the screen width, so I have stepped out of WordPress for this.

To read on, follow this link here



It has been some time since the last post, so I will reiterate the background to this one. The context is a requirement for two way communications between driverless tractors following a wire guidepath set in a warehouse concrete floor. In the last post, I discussed the traffic from the central controller to the tractors, which we called “down” traffic. As the carrier for the data signal could be of large amplitude, the only limit being that it did not interfere with the tractor guidance system, we had no trouble getting the signal up out of the noise. We used frequency shift keying using off-the-shelf PLL chips. The frequency deviation was determined by the chip manufacturers. It was not ideal for our purposes but it worked. This was all described in the last post.

In this post I turn to data traffic from a tractor to the central controller (“Up” traffic).

The tractor transmit antenna was designed by Roger Riordan. I have described this in this blog before (“Time Passes 1” posted in August 2013. You can find it easily by placing “Riordan” in the search box.). This was a second attempt: the first was an un-tuned transmit coil driven by a square wave at carrier frequency. This was unsuccessful, and could probably have been made successful, but when I called Roger Riordan in, he wanted to do it “his way” and I (wisely) let him meet the requirements in whatever way he liked.

He chose the Class C output stage with his clever tuning scheme. There was a potential problem with the idea of a tuned antenna. A portion of the magnetic circuit was through the concrete floor. We had discovered that at the sorts of frequencies we were dealing with, the permeability of concrete is much higher than μ0 and varies between concretes with different aggregates. The tractors had to work over the floors of an old building and a new building, so the tank circuit antenna might have detuned a bit as the tractor moved from building to building. Fortunately, we didn’t waste time worrying about this. I realise now, as I am writing this about 38 years later, that the difference between the concretes might have caused a problem that we faced – and then solved – later on.

Riordan’s choice of a Class C stage with tuned tank came after my decisions about modulation scheme. The Riordan antenna design does not have the bandwidth to carry the FSK type of signal used for the Up traffic.

I had been concerned about bandwidth for a completely different reason. The receiver had to pick out a small signal from a channel with other large signals on it. One aspect of our efforts to find the signal in amongst all the noise was to place the signal in a quiet place in the frequency spectrum, and use no more bandwidth than necessary. The FSK encoding used for the Down traffic had to use much more bandwidth than the theoretical minimum, as the characteristic relationship between frequency and voltage at the phase locked loop chip VCOs, and the need for voltage swing, dictated a large difference between the frequencies representing Mark and Space. A different scheme was called for.

When the signal is weak, and of narrow band, it becomes important to synchronize the transmitter and the receiver. The scheme chosen was to use frequency synthesizers clocked from the guidepath signal at the tractor and at the receiver, and place a phase modulated carrier at 2.5 times the guidepath frequency. The choice of two and a half times seemed to maximize the frequency difference between the signal and the guidepath harmonics. I seem to recall that experiments showed that we had about 100mV of signal at the central station receiver, but we wanted to allow for a worst case position (physical position of the tractor) where the signal was very much weaker. I believe that we had a particular value in mind for tests, but I do not recall what that was. That the communication actually worked in the real warehouse was, when it came to it, a more important test.

The task of designing a suitable receiver was a daunting one. It had to be made according to a robust plan, and that plan had to be executed intelligently with carefully chosen details.

I mapped out a plan for a double conversion receiver. I perceived that analogue filter design was a weakness in our team (there was no spice, and no filter design programs you could run on your PC in those days. There were no PCs!), so I came up with a scheme that kept filtering involving anything as complicated as a conjugate pair of poles to an absolute minimum.

The block diagram of the receiver in a later form is shown here.


The plan was to use phase inversion keying (phase shift keying where the phase shift is 180 degrees). The source of carrier was a phase locked loop locked on to the guidance signal. The modulator was an exclusive OR gate.

I set up project nomenclature for frequency in which the guidepath frequency was nominated ONE frequency unit (“1U”) and the data carrier, 2.5U. The actual frequencies were 6.25kHz for the guidepath, and 15.625kHz for the carrier.

At the receiver, the first stage was a very simple high pass filter to boost the carrier to guidepath signal ratio. These have a ratio of frequencies of 2.5, which is about 1.3 octaves. Three zeros can give us (at 6 dB per octave each) about 23 dB of preference for the carrier over the guidepath. I selected a Motorola part containing a Gilbert Cell (https://en.wikipedia.org/wiki/Gilbert_cell) (designed for use as an RF mixer) as the analogue multiplier for the first mixer. The signal at the input to the first mixer could not be fed through a trigger, as there were expected to be more zero crossings due to the guidepath and its harmonics than from the carrier itself. The first local oscillator (LO) was at a frequency of 3U, giving an IF of 0.5U or 3.125kHz. The output from the first mixer was capacitively coupled to subsequent stages, giving a very satisfactory and very deep notch in the passband at zero frequency. Interfering signals from the third harmonic of the guidepath are at zero frequency here, so we get a lot of filtering out of a single capacitor just by carefully choosing the local oscillator frequency.
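The frequency plan and the front-end filter budget can be sketched in a few lines. The helper function name is mine, not project nomenclature; the arithmetic simply follows the “U” scheme described above.

```python
import math

# "U" nomenclature: 1U is the guidepath frequency.
U = 6.25e3                      # guidepath fundamental, Hz
carrier = 2.5 * U               # 15.625 kHz
lo1 = 3.0 * U                   # first local oscillator, 18.75 kHz

def if_freq(f_in):
    """Difference-frequency mixer product against the first LO."""
    return abs(lo1 - f_in)

print(if_freq(carrier))         # 3125.0 Hz: the 0.5U IF
print(if_freq(3 * U))           # 0.0: the 3rd harmonic lands at DC,
                                # removed by a single coupling capacitor
print(if_freq(2 * U))           # 6250.0 Hz: 2nd harmonic, above the IF

# Front-end high-pass: three zeros at 18 dB/octave over the 2.5:1
# (about 1.32 octave) spacing between carrier and guidepath.
octaves = math.log2(carrier / U)
print(round(18 * octaves, 1))   # about 23.8 dB of carrier preference
```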

The next step in the block diagram is a bandpass filter utilizing a Riordan Gyrator. This was not part of the original concept plan and was not required according to my signal budgeting. After the gyrator, we really do have a signal that is stronger than the various interferences that we were planning for. This meant that the signal could be passed through a Schmitt trigger. Interfering signals might alter the timing of the edges on the output of this, but (according to the budget) they would not alter it enough to introduce ambiguities in subsequent circuits.

The second mixer was an exclusive OR gate. The output of this could have many glitches at the changes in state, and these were easily removed with a single pole low pass filter and a second Schmitt trigger.

The bloke who was assigned the task of putting this into effect had trouble and the task had to be rescued. Roger Riordan came to the rescue again. On this occasion, I did not give him freedom to do the task however he liked. I thought that I had too much investment in the planning of the topology, which seemed valid, even though the attempts to put it into practice had not borne fruit.

Roger wanted to change to an Analog Devices analogue multiplier, and I gave him free rein, even though the price seemed crippling (Forty five 1978 dollars, if I recall: AU$306 in 2016 money!), and he added the Riordan Gyrator to give a useful bandpass characteristic to the IF strip. Whereas we had been nervous of introducing active filters, this was no problem to Roger. After all, he had invented the Riordan Gyrator! At the time, I thought that this was adding an improvement that was not necessary, although we never conducted tests of the circuit without it. Recently when I have created a spice model (not a trick that was available to us in 1975) it looks as if the gyrator was necessary, so I must have made an error in my calculations at that time.

(Riordan Gyrator http://corybas.com/?ident=10W0G)

The receiver worked on the bench, and it worked in the warehouse when the transmitter was tuned, but it didn’t work when the tractor was out in the warehouse!

Some quick checks with an oscilloscope showed that the phase of the incoming carrier that represented a “1” (The MARK state) was different for each message. The challenge was to adjust the receiver to wide variations in reference phase. This is the matter that I suspect the different concrete formulations might have contributed to.

The microprocessor at the central station knew when a message was to come in, as the incoming messages were all in response to polling from the central station. The message format included a “front porch”: a prolonged burst of carrier at the phase that represented the MARK state. The solution was to direct the signal at the end of the IF strip into a shift register that was clocked at eight times the IF frequency. At a time calculated to be about half way through the “front porch”, the shift register output was switched back to the input, making it into a ring counter that contained the last cycle of MARK-phase signal. This was then used as the second local oscillator. For the rest of that particular message, whenever the phase of the signal was the phase that represented MARK, it was in phase with the output from the ring counter, and was 180 degrees out for a SPACE. This worked splendidly.
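A minimal behavioural sketch of that shift-register-turned-ring-counter idea, assuming eight samples per IF cycle and a hard-limited (1-bit) input. The function name and switch-over mechanics are illustrative, not the actual hardware design.

```python
from collections import deque

def regenerate_mark_phase(limited_if, switch_index, length=8):
    """Shift register clocked at 8x IF; recirculates after switch_index."""
    reg = deque([0] * length, maxlen=length)
    out = []
    for i, bit in enumerate(limited_if):
        recirc = reg[0]              # oldest bit, about to shift out
        reg.popleft()
        # Before switch-over: track the input (shift register mode).
        # After: feed the output back to the input (ring counter mode).
        reg.append(bit if i < switch_index else recirc)
        out.append(reg[-1])
    return out

cycle = [1, 1, 1, 1, 0, 0, 0, 0]     # one IF cycle, 8 samples per cycle
front_porch = cycle * 4              # prolonged burst of MARK-phase carrier
lo = regenerate_mark_phase(front_porch + [0] * 16,
                           switch_index=len(front_porch))
# After switch-over the output keeps replaying the captured MARK cycle,
# regardless of the (here, all-zero) later input.
print(lo[-16:])
```

Once recirculating, the register is a free-running local oscillator locked to the MARK phase of that particular message, which is what makes the per-message phase variation harmless.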

I added some (quite unnecessary) complication to remove a small phase shift caused by the quantization of the IF frequency cycle time. Not necessary, but it did no harm.

Only two of these receivers were made. Recently, I was sorting through some old stuff, and I found one of them! In this project, we made prototypes of many of the boards using a popular prototyping board that had gold edge connector finger at one end. We stuck to this format, so that when printed circuit boards were made, they were plug compatible with the prototypes.


The flyback converter was to provide a minus rail for the multiplier. I think it might have run from plus and minus 12 volt supplies.

This board was found in a garage, and was covered with a thick layer of dirt and fluff. I only really wanted it to show here, so I washed it in the dishwasher! Here is the rear view.


The small board attached to the rear is the shift register/ring counter MARK phase regenerator. As you can see, this was built up as a “one-off” prototype board. There never was a case made to “do this properly”.

In the case of this receiver, it worked without a hitch. We couldn’t do bit error rate tests because we didn’t detect any bit errors.

When looked at with modern tools, the scheme holds up well.

I have broken the simulation circuit up to make it fit on this page a little better.


Here the signal on the guidepath is made up of the sum of the guidance signal of amplitude 15 volts, its second harmonic at 0.5 volt, its third harmonic at 1 volt (I am making it hard for the model; I don’t think the distortion of the guidance signal was really that bad!) and the modulated carrier with an amplitude of 50 mV.
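For anyone wanting to reproduce the experiment, the composite test signal is easy to build from the amplitudes quoted above. The harmonic and carrier phases are my assumptions; the carrier-to-fundamental ratio shows just how deeply the data is buried.

```python
import math

F = 6.25e3                                  # guidepath fundamental, Hz

def guidepath_signal(t, data_bit):
    """Composite signal on the wire; phases are assumptions."""
    phase = 0.0 if data_bit else math.pi    # phase-inversion keying
    return (15.0 * math.sin(2 * math.pi * F * t)          # guidance, 15 V
            + 0.5 * math.sin(2 * math.pi * 2 * F * t)     # 2nd harmonic
            + 1.0 * math.sin(2 * math.pi * 3 * F * t)     # 3rd harmonic
            + 0.05 * math.sin(2 * math.pi * 2.5 * F * t + phase))  # carrier

# The carrier sits about 50 dB below the guidance fundamental, which is
# why the data is completely invisible on the raw oscilloscope trace.
ratio_db = 20 * math.log10(15.0 / 0.05)
print(round(ratio_db, 1))   # 49.5
```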


The initial filtering “front end filter” of three zeros provides a preliminary reduction in the guidance signal. The arbitrary voltage source B1 represents the analogue multiplier. The first local oscillator V5 is at the third harmonic frequency. The LC tuned circuit plays the role of the Riordan Gyrator. The LT1017 comparator is used as a trigger (RF people might say “limiter”)


The arbitrary voltage generator B4 is set up as another multiplier, but it is only multiplying digital signals. It plays the role that was played by an EXCLUSIVE-OR gate in the real hardware.

Even 30 years later, I love the way the waveform of the data emerges in stages as we look at various points of the circuit. This was very exciting at the oscilloscope in 1978.


Top Left: The guidance signal is so large that the data is completely invisible on it. Top Right: Even a passive filter of three zeros gives a very different view. Middle Left: After the multiplier. The data is at 3.125 kHz, the second harmonic of the guidance signal is at 6.25 kHz, the third harmonic is now DC, and the remnant guidepath signal is now at 12.5 kHz. Middle Right: After the gyrator. The lowest frequency we see here is our signal. Bottom Left: The trigger picks the signal out (at IF frequency). Bottom Right: The red trace is the “raw” data. Notice that it is low for most of the time to the left of the trace, “mucky” in the middle, and high for most of the time to the right of the trace. The blue line is the “cleaned up” data.

The data was ISO1177 format (for a UART) at 300 bits per second.

Unfortunately, the driverless tractor system that this receiver played a part in was short lived. This was not a reflection on the driverless system but the fact that the warehouse workforce was not ready to share their work space with an automated system. This is why the “piggy back” board on the back of the receiver was never laid out as a printed circuit board. There were many interesting aspects to this project, but the thread that links it to my next story is in the title “Two and a Half Times”. It is very easy with a phase locked loop to generate two waveforms that are locked together with a non-integer relationship between the frequencies. In this case, the key was to place a carrier so it could be picked up with synchronous demodulation but which was remote from the harmonics of a large interfering signal. In the next case, the application for the number “Two and a Half” was quite different, but there were common threads as you shall see.


Posted 25-03-2016

Driverless Tractor

The Hyteco Driverless Tractor System was a scheme for running towing tractors around a circuit in a large warehouse. The tractors followed a guidepath which consisted of a wire set in the concrete floor. The wire carried a direct current with a 6.25 kHz sinusoid superimposed. The wire also carried other signals for communication purposes. The guidance system was an adaptation of a proprietary system that Hyteco had the agency for. Indeed, part of the justification for the development of the system was to sell more guidance equipment. The system had many interesting aspects to it. One of these was the provision for communications between the tractors and the central controller. Each tractor and the Controller had a central processor which consisted of a Motorola “D2Kit” which used a 6800 processor. This was 1978.

We had a “DECwriter” Printer/keyboard terminal which had a fixed speed serial port at 300 bits per second. I made a decision very early in the project that the system communications would all be ISO1177 style (UART Compatible – Start/Stop serial) so that the DECwriter could be used for sending or receiving test messages. Indeed I remember at one time a tractor with the DECwriter mounted on it was charging up the factory with a bevy of blokes running along behind feeding out the extension cord which was powering the DECwriter which was printing out “The Quick Brown Fox Jumps over the Lazy Dog” or something like that.

We had communication from the Controller to the tractor in a realistic environment, and we could prove it!

The decision to format the data stream in this way thus had an advantage (for exercising, proving and testing), but it also caused us some problems. In some places in the system, we had to derive a bit clock from the guidance signal, and that was possible, but in the technology of the day, it took quite a number of CMOS SSI integrated circuits.

ISO 1177 data is not always a good choice where signal to noise (and consequently non-zero bit error rates) is an issue. A UART only has to mis-read one START BIT, and probably the whole of the rest of the message is lost. Nevertheless, this was an early decision (as it had to be) and we were stuck with it and we stuck to it.

There were four distinct communications paths in this system.

1. Location Loops

One was communication to tractors from particular places so that the tractor had location information. This might have been seen as a design challenge, but it was one of those things that worked first time and never gave any trouble. The scheme was to provide an auxiliary guidepath signal which was phase-reverse keyed. The demodulator required no more than a Schmitt trigger on the guidepath signal and the auxiliary signal, an exclusive OR gate and a low pass filter. Maybe there was a third Schmitt trigger at the end, but the result was a channel that was trouble free when fed directly to a UART.
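A behavioural sketch of that demodulator: the guidepath and loop signals are hard-limited (the Schmitt triggers), XORed, low-pass filtered, and re-triggered. The sample rate, thresholds and filter constant are illustrative choices of mine.

```python
import math

def schmitt(samples, hi=0.5, lo=-0.5):
    """Hard limiter with hysteresis, standing in for a Schmitt trigger."""
    out, state = [], 0
    for v in samples:
        if v > hi:
            state = 1
        elif v < lo:
            state = 0
        out.append(state)
    return out

F, FS = 6.25e3, 1.0e6                   # guidepath frequency, sample rate
N = 4000                                # 25 guidepath cycles
data = [0 if i < N // 2 else 1 for i in range(N)]   # one data transition

guide = [math.sin(2 * math.pi * F * i / FS) for i in range(N)]
# Loop signal: the guidepath current, reversed when the data bit is 1.
loop = [g * (-1 if d else 1) for g, d in zip(guide, data)]

xored = [a ^ b for a, b in zip(schmitt(guide), schmitt(loop))]

# Single-pole RC low-pass plus a final trigger removes edge glitches.
y, alpha, recovered = 0.0, 0.02, []
for x in xored:
    y += alpha * (x - y)
    recovered.append(1 if y > 0.5 else 0)

print(recovered[100], recovered[-100])  # 0 1: the data bit is recovered
```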
Location Loop – set in the concrete floor, and connected to a box attached to a nearby wall. The box contains electronics that imposes a data byte on the loop by reversing the current in the loop. The two coils shown are mounted on a passing tractor.
Block Diagram of the tractor mounted Location Loop decoder. Unequal delay in the Guidepath and Loop sensing circuits can lead to glitches and multiple edges on a change of logic state. The duration of these is very short compared to a bit period, enabling the very simple RC and trigger clean-up circuit to be used.

2. Turnouts

There were small circuit boards in lineside boxes that switched the guidepath current from one wire to another, and thus provided the function of a “turnout” or “pair of points”. The receive end of the communication to these from the central control was designed by a bloke named Peter Resmer, who made a good job of it. Like the location loop circuit, it was one of the aspects of the project that fell into place quickly and didn’t need constant revisiting.

Each of the other two communication paths was much more troublesome. These were Communication from the Central controller to the Tractors, which we called the “Down” direction, and communication from the tractor to the central controller which we called “Up” traffic. (One needs terms, and we chose these from railway practice. Our system was a railway system in every respect but one: it didn’t have rails.)

3. The Down Channel Communication from the Central controller to the Tractors

This was tackled first. There were several factors at play in determining the choices here. We were a bunch of young, inexperienced engineers and technicians. We were supposed to be designing a practical system: not making world shattering advances in the technology of the components. Any established designs that could be brought into play with minimum mucking around looked good. These were the early days of telephone line modems. Frequency shift keying was very popular, and there were many ICs available. Most of the data sheets had beaut looking circuits in the applications section. I was a bit naive in those days about how reliable application note circuits are. We decided to use frequency shift keying and adapt a data book application circuit to suit our need.

The story of how we encountered problems and learned a lot and got this to work is a story in itself. We got it to work, but there had been troubles, and I didn’t want to launch into another troublesome task. Added to this was the fact that for the Down Channel, we could transmit a strong signal. The signal was added to the guidepath signal, and the only upper limit to the data signal power was the power at which it interfered with the guidance system.

In this respect, the Up signal path was completely different. The tractor ran on a concrete floor with the guidepath wire set in it. It was required that the tractor had about 100 mm clearance, so however the tractor was to convey the signal into the wire, it was most unlikely to be a high powered signal when established in the wire.

Whenever a communication system is asymmetrical in that one station communicates with the others, but they don’t communicate with each other, then the optimum focus for investment is also asymmetrical. A broadcast transmitter working with a large number of receivers is the classic case. If improvements to the system can be applied at the transmitter, then they have to be applied only once. The result is that we have large powerful expensive broadcast transmitters, and large numbers of cheap, possibly not very sensitive or selective receivers. The same asymmetry rule applies whatever the direction of the traffic. The equipment in a cellular mobile phone base station will be vastly more expensive than any individual phone, but might not be vastly more expensive than the population of phones that it serves. There were no mobile phones in 1978, but this point was clear: the central controller circuit could be complex, but the tractor circuit should be simple and cheap.

The tractor transmit antenna was designed by Roger Riordan. I have described this in this blog before (“Time Passes 1” posted in August 2013. You can find it easily by placing “Riordan” in the search box.). It happens that that transmit antenna design does not have the bandwidth to carry the FSK type of signal used for the Up traffic.

I had been concerned about bandwidth for a completely different reason. The receiver had to pick out a small signal from a channel with other large signals on it. One aspect of our efforts to find the signal in amongst all the noise was to place the signal in a quiet place in the frequency spectrum, and use no more bandwidth than necessary. The FSK encoding used for the Down traffic had to use much more bandwidth than the theoretical minimum, as the characteristic relationship between frequency and voltage at the phase locked loop chip VCOs, and the need for voltage swing, dictated a large difference between the frequencies representing Mark and Space. A different scheme was called for.

When the signal is weak, and of narrow band, it becomes important to synchronize the transmitter and the receiver. The scheme chosen was to use frequency synthesizers clocked from the guidepath signal at the tractor and at the receiver, and place a phase modulated carrier at 2.5 times the guidepath frequency. The choice of two and a half times seemed to maximize the frequency difference between the signal and the guidepath harmonics. There were other advantages as well, which I will go into next time.

Blog Post 38 Class C with Immediate Feedback

The story so far…

I have postulated that for high efficiency, one needs to drive a tank circuit in one of two ways: with a current drive,

Tuned Circuit Current drive

or with a voltage drive:

Tuned Circuit with Voltage drive

It had not escaped my notice that traditional Class C stages did neither of these. Since posting those musings, information has come my way about how real Class C stages really did it, and I will pass that information on in a later post. For now, I continue to use the second of these.

I have investigated the introduction of amplitude modulation by pulse width modulation. Unfortunately, there is not a linear relationship between pulse width and amplitude of the ringing of the tank circuit, and I have introduced two schemes to overcome this non linearity. The first was negative feedback around the modulator and a following demodulator. Modelling showed that this could, in principle, give good results. I have since been informed that negative feedback around an amplitude modulated transmitter is a trick that has been used in practice.

The second scheme was a pulse width modulator that did not give pulse width proportional to instantaneous baseband voltage, but contrived a pulse width that would give the desired amplitude at the tank circuit. This gave good results as well.

I can see no reason why, if a really high fidelity amplitude modulator were required, both of these techniques could not be used together.

However a third technique has arisen. It is simpler than the other two. Maybe it could stand on its own in a low fidelity speech band modulator, or in combination with feedback around a demodulator for a higher fidelity application.

In developing the explanation of this third technique, I will not follow the path that led to its accidental discovery. However, by way of confession, I will tell you about the discovery.

When I was spice modelling the feedback around the modulator – demodulator combination, I was troubled by a crazy wobble that arose in the waveforms. For a while I thought that I had discovered a circuit instability that was being exposed by LTspice, and I explored a few ways to deal with it. Soon it became evident that it was an artifact of LTspice itself, and was completely cured by reducing the Maximum Step Time. I had learned a valuable lesson about setting up spice parameters, but more than that, I discovered that one of my attempts to quell the instability had revealed itself as a really valuable trick.

This “design by serendipity” is an interesting phenomenon: not to be dismissed as one of the paths to good design ideas. This is a discussion for another time.

My initial pulse width modulator scheme was very simple. The carrier was generated as a triangle waveform. This was added to the baseband signal, and the sum offered to a threshold detector. The instantaneous value of the baseband determined how far up the triangle the threshold detector threshold was, and thus the width of the pulse.

In Post number 35, I reduced the carrier to 5 kHz so that it and a 1 kHz sinusoidal baseband signal could both be clearly seen on the same axes, and presented a graph to explain this modulator action.

I reproduce it here.

Demo waveforms
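The same modulator action can be sketched numerically, with the 5 kHz carrier and 1 kHz sinusoidal baseband used for the demonstration. The amplitudes and sample rate here are my own choices for the sketch.

```python
import math

def triangle(t, f):
    """Unit triangle carrier, swinging -1..+1."""
    x = (t * f) % 1.0
    return 4 * x - 1 if x < 0.5 else 3 - 4 * x

f_carrier, f_baseband = 5e3, 1e3    # the demonstration frequencies
FS = 1e6                            # sample rate for the sketch
N = int(FS / f_baseband)            # one full baseband cycle

pulses = []
for i in range(N):
    t = i / FS
    baseband = 0.5 * math.sin(2 * math.pi * f_baseband * t)
    # Threshold detector: pulse is high while (triangle + baseband) > 0,
    # so the instantaneous baseband level sets the pulse width.
    pulses.append(1 if triangle(t, f_carrier) + baseband > 0 else 0)

# Individual pulse widths follow the baseband, while the average duty
# cycle over a whole baseband cycle stays near 50%.
duty = sum(pulses) / len(pulses)
print(duty)
```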

The next important detail in the explanation of this scheme, is the time or phase relationship between the excitation pulse and the voltage waveform at the tank circuit. I had shown this relationship in Post 37, and I reproduce that here as well:

1 Tank waveform

The voltage on the tank circuit lags the fundamental component of the pulse waveform by 90 degrees.

If we place an RC circuit with a break frequency that is much lower than the carrier on the tank, that will give a further ninety degrees lag. Overall, 180 degrees lag, or a suitable signal for negative feedback. Alternatively (and this is what I have done below) we can set up the RC to give about 90 degrees lead, which gives us an output that is in phase with the excitation, and then subtract this from the modulated carrier. Here is my circuit model:


The tank circuit is made up of L1 and C1. The resistor R1 represents the load (antenna if this is a transmitter). C2 and R16 are my lead network. They provide a feedback sinusoid at the carrier frequency and in phase with the excitation.
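The phase claims for the RC networks are easy to verify from the transfer functions. The component values below are illustrative assumptions, not the actual C2/R16 values in the model.

```python
import cmath, math

f_carrier = 1e6     # the model's carrier frequency

def rc_lag_phase(R, C, f):
    """Phase (degrees) of the low-pass response 1/(1 + jwRC)."""
    return math.degrees(cmath.phase(1 / (1 + 1j * 2 * math.pi * f * R * C)))

def rc_lead_phase(R, C, f):
    """Phase (degrees) of the high-pass response jwRC/(1 + jwRC)."""
    s = 1j * 2 * math.pi * f
    return math.degrees(cmath.phase(s * R * C / (1 + s * R * C)))

# Break frequency far below the carrier: nearly 90 degrees of lag.
print(round(rc_lag_phase(10e3, 10e-9, f_carrier), 1))    # about -89.9
# Break frequency far above the carrier: nearly 90 degrees of lead.
print(round(rc_lead_phase(1e3, 10e-12, f_carrier), 1))   # about +86.4
```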

V1 is the carrier generator, which is of a triangle waveform at 1 MHz.

V3 is the baseband signal. In this case, it is a triangle waveform as well. I had developed the habit of doing this, as it is very easy to visualize the distortion if there is any curvature in the reproduction of the ramps of a triangle baseband signal. I will quickly go through the way my model represents a pulse width modulated drive for the tank circuit. A comparator is represented by the switch model, which is closed for positive voltage on the differential inputs and open otherwise. V2 and R2 convert the switch state to a voltage waveform, and this is presented to the tank circuit by the voltage controlled voltage source E1 (which represents the power stage). As before, the pulse width modulation is executed by just adding the baseband to the triangle carrier. The switch/comparator converts this to a pulse width modulated signal as the baseband shifts the carrier triangle up and down and varies the time during which it presents a positive value to the switch/comparator.

It will be seen that the switch is closed by a positive voltage, but that a closed switch gives the negative state to the output drive. Thus the switch – voltage follower combination is inverting to the carrier signal. The feedback from the lead network, being fed to the plus input of the switch’s differential input, is thus negative feedback.

This circuit is the same as in previous postings except for the lead network and the application of this negative feedback to the modulator.

Here are the revealing traces.


The red trace is the output from the power stage (voltage controlled voltage source).

The brown trace is the voltage on the tank circuit (output of the transmitter).

The blue trace is the feedback signal derived from the lead network.

It can be seen that as the baseband signal applies a bias to the triangle carrier to provide a wider part of the triangle to the switch, and to broaden the pulse, the feedback will apply an opposite bias and tend to make the pulse narrower – that is, reduce the drive to the tank.

The result is shown below. The blue trace is the baseband signal. The brown is the output from a synchronous demodulator. The thickness of the brown line represents carrier feed-through. Notice that the brown line is “thicker” when the voltage is high: that is, when the carrier amplitude is highest.


There IS some distortion visible. The brown line is seen to be slightly curved when shown in close proximity to the baseband signal. Note that I have used the circuit exactly as it was when I was trying to eliminate what I thought at the time to be an instability. I have not explored how much negative feedback can be applied here. When you consider that this negative feedback has been applied at a total cost of one capacitor and one resistor, it is seen to be a cost-effective result. It would most probably be worth incorporating in conjunction with overall feedback around a demodulator. I will leave this as an exercise for the interested reader.

This “Class C” business has gone on for longer than I had originally planned – and it is still yielding interesting material. I will leave it for a while now, and come back to it later.