I’ve reached a milestone

After writing the previous post, I realized that I was at a point where I could actually test one of the memory board devices: the real-time clock. This chip will allow LEO-1 to keep track of the date and time, and it supports a battery backup so it can remember them even when the power is turned off. Since all this was already soldered up and I had tested the address decoding, I put the chip in its socket, set up the test probes and set my test DIP switches to the address of the chip. Then I switched the /OE test switch low; /OE is the active-low signal that tells the addressed unit to enable its outputs and put its data on the data bus. I switched on the power and the LEDs on the breadboard started flashing… 0001, 0010, 0011, 0100… a sign that the real-time clock was outputting the seconds onto the data bus as it should. It was working!

The chip I chose is the RTC 72421 which is made by Epson. The data sheet for this is one of the good ones; it’s really well written and explains everything completely. I was able to use my test board to visit each of the chip’s 16 registers and set it to 24-hour mode, and set the date and time. After that I put the battery in and switched off the main power. The chip outputs a pulse called STD.P on pin 1 which works even when it’s on standby. I set it to one-second-per-pulse mode. You can feed it to an LED driver or a piezo speaker to hear the clock ticking. I plan to leave the battery in overnight and check in the morning if it kept the right time. You can tell that I’m nuts about dates and times. That’s one of my ‘special things’.
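The registers are only 4 bits wide, with the time stored as BCD digit pairs, so reading the time back means glueing the units and tens nibbles together. Here's a tiny sketch of that; the register order is my reading of the data sheet (seconds, tens-of-seconds, minutes, tens-of-minutes, hours, tens-of-hours), so double-check it against your copy:

```python
def read_time(nibbles):
    """Decode (hours, minutes, seconds) from the chip's first six 4-bit
    registers, read in address order. Assumed layout: S1, S10, MI1, MI10,
    H1, H10 -- verify against the data sheet before trusting it."""
    def pair(i):
        return nibbles[i + 1] * 10 + nibbles[i]  # BCD: tens * 10 + units
    return pair(4), pair(2), pair(0)
```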

So, my first proper test was successful. That’s a good sign.

Real-time clock test



I glimpse the fullness of it now

Well, I have started construction of this monster… and a monster it indeed is. I am reminded of the joke in The Hitchhiker’s Guide to the Galaxy about the terrible error in scale which caused an entire battle-fleet to be eaten by a small dog. I have made such an error myself, a huge error in scale on several levels.

First of all, I made a bit of a mistake with the scale of the whole project. This project is colossal. I started to get a sense of this while I was drawing the schematics, but I really had no idea until I noticed it was taking me a couple of minutes to solder a single wire and then decided to estimate how many wires I was going to need. Well, although I haven’t finished numbering the chips on the schematics, I think there’s going to be around 200 chips. Average number of pins per chip… about 18. So that makes about 3600 solder joints just for the chip pins and there are many other things to consider besides those. Working at one joint every 3 minutes for 4 hours a day, that will take about 45 days just for the soldering. Not to mention all the other things that need doing. In reality, I can at least double that, so about 3 months soldering time. Some time next May I think I will laugh at that estimate. I had originally decided to use PCBs since that would be the easiest way to build it. Simply solder the parts onto the board and you’re done. No need for masses and masses of individual wire connections. I did a test for one register and it came to over $100 for a small 5″ board. Even using a cheap Chinese PCB house, the large boards I need would probably cost an absolute fortune which is why I went with prototyping boards. I just couldn’t justify spending that kind of money on the boards alone.
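The estimate above works out like this; same numbers, just written down so I can laugh at them later:

```python
# Back-of-envelope soldering estimate
chips = 200
pins_per_chip = 18
joints = chips * pins_per_chip      # 3600 joints for the chip pins alone
minutes = joints * 3                # 3 minutes per joint
days = minutes / 60 / 4             # soldering 4 hours a day
print(joints, days)                 # -> 3600 45.0
```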

So, I’m now locked into using prototyping boards and the second error I made was how much space this stuff takes up. I had done a number of layout tests and was able to pack a lot of chips into a small space. My boards are 10.6″ square and I can sensibly fit about 80 or 90 chips on a board (or non-sensibly, about 150). But even being sensible about it, I’m finding I’m needing a lot more space between the chips than I have allowed for, due to the thickness of the wires and the fact that a lot of signals are bussed and therefore need 16 wires. The 24 AWG wire I’m using has quite thick insulation which means you can only fit about 12 wires side-by-side in an inch. With my planned distance of only about ½ an inch between the chips, there simply isn’t enough room.

I started construction on the memory board since that is the simplest of the four main boards. This board has some trivial address decoding, a real-time clock, I/O ports, DIP switches and some rows of RAM and ROM chips. By starting with this, I figured I could get something testable pretty quickly. In reality, it took 2½ weeks before I was able to test anything at all. To test one board without relying on another board, one needs a method of testing. I decided to build a ‘throw-away’ test board (which turned out quite nice so I won’t be throwing it away). This board is basically a fake control board to fool the memory board into thinking it’s being controlled by a CPU. It has six 8-way DIP switches on it, 48 positions in all: 24 for the address bus, 16 for the data bus and 8 for other control signals. It also has a few bus drivers and so on to make sure I can get it off the data bus when it’s telling a memory unit to be on the data bus. It took me a couple of weeks to get the parts and build this board. This was the first inkling I had that I had underestimated the scale of the project and the amount of space required. Here’s a picture of the finished test board.

Test Board


It looks quite tidy until you turn it over.

Test Board - back


Clearly the wires are taking up far too much space and I confirmed that when I started soldering the memory board itself. There is simply no way with my planned layout that I will be able to fit all the wires in such that I can actually still get to the board to solder it. I’ve already run into problems with wires getting in the way of chip pins and I haven’t even started on anything that uses the whole data bus. Looking around online for how other people do this kind of thing, the solution seems to be to use thinner wire. I have ordered a couple of reels of ‘magnet’ wire, the kind that is used for making coils in motors and power supplies. This wire is very thin (mine will be 28 and 32 AWG) and you can solder through the insulation which burns away. I’m planning to switch to this wire for all the busses and possibly more. I’ll continue to use my current thicker wire for power and ground lines and for other odd connections here and there.

Thick wire problem


In other news, I have finished drawing all the schematics including the ALU. It was a bit tricky, especially the shifter section. It’s pretty easy to make a shifter that shifts by a single bit, but my design calls for the ability to shift by 1 to 8 bits instantly. I used a 1-of-8 selector (mux) to choose the signals based on the shift amount and then had to wrap my head around how to choose the single-bit signals to route to each of the eight inputs on each of the 16 chips. I suddenly realized that it should be possible to make a generalized shifter that shifts one way and then re-use it to shift the other way by rearranging the signals as they go in and out of it. A few hours and a lot of brain twisting later, I had drawn the schematic and found that I had saved myself ten chips. That doesn’t save any money because I had already bought the chips in bulk, but it saves on a couple of hundred solder joints that I won’t have to do. I verified this optimization by knocking it up in Logisim and found that it does indeed work. I went to bed that night feeling ever so clever, like I had invented something. But later I found, as I usually do, that this technique is pretty much standard for shifters and I had simply reinvented the wheel.
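The reuse trick is easier to see in code than in a schematic. Here's a sketch of a right-only mux shifter, with the left shift built by reversing the bus on the way in and out; the function names are mine, not anything from the schematic:

```python
WIDTH = 16

def reverse_bits(x):
    # Reversing the bus is free in hardware: just cross the wires over
    return int(f"{x:0{WIDTH}b}"[::-1], 2)

def shift_right(x, n):
    # Each output bit is a 1-of-8 mux choosing input bit i+n (n = 0..7),
    # or 0 if that falls off the end of the word
    assert 0 <= n <= 7
    out = 0
    for i in range(WIDTH):
        src = i + n
        if src < WIDTH and (x >> src) & 1:
            out |= 1 << i
    return out

def shift_left(x, n):
    # Reuse the same right shifter by reversing the bus in and out
    return reverse_bits(shift_right(reverse_bits(x), n))
```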

ALU - Shifter


Finally, I had a very hard time deciding how to connect the four main boards together. The connections require rather a lot of wires. I had already decided to use ribbon cables because I had found a supplier that would make the cables to my requested size at a very reasonable cost. But the layout had me stumped for weeks, I just couldn’t seem to nail it down. Eventually, with the help of Google Drawings, I made the final plan. As you can see, this is going to need a lot of soldering and a lot of wiring.

Ribbon cables


There are two things my mum used to say all the time when I was a kid that seem appropriate now. “Patience is a virtue.” and “Little by little a bird builds his nest.”

I can do this. There is no ‘try’.

Prior planning and preparation…

I haven’t updated this for ages and anyone reading it would be forgiven for thinking I had given up on the LEO-1. Nothing could be further from the truth. I finally got some answers about the worrying stuff I mentioned earlier, partly from some friendly guys on the Electronics Point forum. You can read the thread here.

To cut a long story short, it seems I have been overthinking this issue a little bit too much. I shouldn’t run into any trouble at the speeds my circuit will be running at. Some of the spiky stuff I was worried about even seems to be generated by the act of measuring it, due to reflections inside the scope’s probes. Some of it is also caused by doing tests on a breadboard with long ratty wires all over the place. When I build the real thing on real boards with short wires, it should be fine. I’m going to go on that assumption for now.

During this time I’ve also been figuring out what other parts I’ll be needing and getting them together. You may recall that I was worried that HCT parts were not the best parts to use and I actually did decide to back up on that and switch over to HC parts. It just wasn’t worth the risk of buying hundreds of chips only to find they don’t work the way I expected. So I cut my losses and reordered the original prototyping parts in HC. I now have a bunch of HCT chips that I won’t use but I’ll hold on to them for a rainy day. Once I had figured out what I was going to need, I ordered a ton of chips. There’s a company on eBay that sells unused surplus parts amazingly cheaply. For example, I was able to get about fifty 74HC32s for about $7. The rest of the stuff I’ve been getting from Mouser and some (like the circuit boards) from DigiKey. The EEPROM I’ve chosen is the Greenliant GLS29EE010 which is a 1Mbit device organised as 128K x 8 bits. They only cost $2 each; two of those in parallel and I’ve got a 16-bit ROM for the monitor program. At this point I decided I was going to have to get a reliable EEPROM programmer. I’d seen cheap Chinese device programmers on eBay but I’d also read appalling things about their reliability and usability. It sounded like a false economy that I couldn’t risk. Perhaps when I was a poor teenager but not now. So I bought a Phyton ChipProg 40, mainly because it has the GLS29EE010 on its supported device list, but also because it was available, and I could afford it. I’m happy to report that it works perfectly and I was able to burn some test garbage into my ROM chips. I was also able to use it to have a look at the old PICs that I’d programmed in 2008 on a PIC development board. The code was all still there. An amazing thing is Flash memory.
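Pairing two byte-wide EEPROMs into a 16-bit ROM just means splitting each instruction word into its two bytes before burning, one image per chip. A trivial sketch (the function name is mine):

```python
def split_rom_image(words):
    # One GLS29EE010 holds the low bytes, the other the high bytes;
    # wired side by side on the data bus they read back as 16-bit words
    low = bytes(w & 0xFF for w in words)
    high = bytes((w >> 8) & 0xFF for w in words)
    return low, high
```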

In other news, I wanted to have some red LED digits on the front panel for debugging and discovered some really nice smart hex display chips (HP 5082-7340) — but they turned out to be obsolete, very expensive and difficult to get. They look so nice that I don’t know why they would be obsolete. I’ve never seen this kind of thing on any equipment before and wonder where they were used. Everything has the ubiquitous seven-segment displays, but not these. The last time I saw anything like them was on my first digital watch in 1978, but they were much tinier. Anyway, I found something similar, the TIL311, on eBay and acquired four of them. That’s enough to display a 16-bit value. Here’s a couple of pictures:

TIL311 smart display


TIL311 in action


When I haven’t been experimenting with the actual parts, I’ve been drawing the schematics. So far I have drawn two of the four boards and I’ve been finding design flaws while doing it. As soon as I started drawing with real components I noticed I had missed a line driver here and there. I also found a potential race hazard that meant I had to revise the simulation. I hadn’t realised that the memory address decoding would take a finite amount of time to settle and that during that time, it would be possible to select multiple devices onto the data bus. If that happens even for 10 nanoseconds, it won’t be good for the devices or the power consumption, not to mention the stability of the machine. The solution is to wait an extra tick for the decoding to finish and only then actually assert the chip select signal. When I spotted this I found another similar issue and realised my instruction cycle of only 4 states was too simple. I had to increase it to 8 states for memory operations and 5 states for non-memory operations. Very disappointing, but it makes sense since I haven’t seen any other designs out there with only 4 states for an instruction. This means that all instructions no longer take the same amount of time to execute, which seems a bit weird. Still, I think it will work just fine.
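A toy model of the fix, assuming chip select is simply gated until a tick after the address was driven (the decoder and names here are hypothetical stand-ins, not my actual decoding):

```python
def decode(addr):
    # Trivial stand-in decoder: top two address bits pick 1 of 4 devices
    return addr >> 14

def gated_selects(ticks):
    """ticks: a list of (address, is_memory_cycle) per clock tick.
    The raw decode is combinational and may glitch while settling, so
    chip select is only asserted one tick after the address appeared."""
    out, prev = [], None
    for addr, mem in ticks:
        settled = (addr == prev)  # decode has had a full tick to settle
        out.append(decode(addr) if (mem and settled) else None)
        prev = addr
    return out
```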

I also figured it would be nice to have a means to switch off the main clock and be able to single step instructions with a button. I spent some time experimenting with ways to achieve that and added it to the schematic for the clock section. During this time I revisited the 555 timer, a familiar friend from my early digital learning days. I still have my old ‘Babani’ book IC 555 Projects by the very droll Mr. E.A. Parr B.Sc, C.Eng, M.I.E.E — that’s a lot of letters 🙂 In the end, I used a 555 for the ‘slow’ clock (a crystal oscillator will be used for the ‘fast’ or normal clock) and didn’t need one for the single step circuit.

Prototyping single step


I smell a mistake

Using the PB-503 proto-board I did a bit of testing with the 74HCT parts that I had ordered. Things worked very well. I got a 4-bit counter going and was able to feed 4-bit data into a register and use the bus drivers to get the data out. Doing it in slow-motion (1 to 5 Hz clock) I could see the data was correct on a row of LEDs. Then I did something that I have never done with digital chips before; I connected an output to my oscilloscope, and my jaw hit the ground. This is what I saw:

Output of 74HCT chip at 100KHz


Now I didn’t get into university to study electronics like I wanted to, I’m self-taught by experimentation and the Internet, so I didn’t get a very rounded education. I always thought digital signals were… well… digital. Like, square waves. That thing on my screen was not a proper square wave. It had spiky things in it. What troubled me was the max and min voltage readings. Almost seven volts from a 5V power supply? Where was that coming from? And what the hell was that -1.68V negative spike? I had used bypass capacitors like you’re meant to. I never really understood why you needed them but always did it anyway. Wasn’t that meant to prevent this kind of thing? Just for fun I increased the clock to 1MHz and got this:

Output of 74HCT chip at 1MHz


This made my blood run cold. How was this possible? If you are an experienced digital electronics engineer, you are laughing at me because you know why this was happening. Well, I didn’t, and I had to find out. Zooming in, I recognised the effect as a kind of ‘ringing’ which I am used to seeing coming out of my analogue synth – but my synth is meant to do that; it has filters in it designed to muck up your square wave so it sounds cool. I want my digital square waves to be perfect. Why was there analogue stuff in them? I Googled something like “TTL output ringing” and over the next few days I read a lot about things I had never even dreamed existed. ‘Ringing’, ‘ground bounce’, ‘noise’, ‘crosstalk’, ‘stray capacitance’ and a few other horrors that I forget. I also found out why you need bypass caps, which is nothing to do with this ringing problem. I realised I was in for a ton of random problems if I didn’t learn how to avoid those things. It seems you can’t just plug a load of digital stuff into each other without considering all kinds of weird analogue stuff that can happen to your signals. But I had done that. I had built digital stuff before, clocks and things. I had no scope to show me scary stuff and I always visualised the pulses as perfect square waves. What a poor fool I had been all these years. But wait — my circuits had worked, right? So my circuit would still work; just pretend I’d never looked at the waveform and move on. But I couldn’t. That would be like me pretending my variables were properly initialised in my code. No way, I couldn’t ignore this. I was going to have to find out and try to follow all the established rules for minimising this and any other horrors that were waiting for me. After I calmed down a bit it started to seem rather straightforward. Keep wires as short as possible, don’t run signal wires too close to each other, separate ribbon cable signals with interleaved ground signals, and so on.
But this test was on a single output. It wasn’t crosstalk, it was most likely an impedance mismatch from what I could understand. Apparently fixable with resistors. But you don’t see tons of resistors in digital circuits preventing this kind of thing so that couldn’t be it either.

After a while I found that this particular ringing issue is probably nothing much to worry about. I’ve read other people’s CPU blogs and no one seems to care or mention this stuff. I’m probably over-analysing it as I have a tendency to do. For one thing, the evil-looking negative spike is dealt with by internal diodes in the chip inputs. I’m still not sure how the over-supply spike is handled by the chip, if at all. Still, maybe I’ll just have to ignore it and follow best practices.

My reading led me to find out a lot more things that I wouldn’t have known. Without some understanding of these issues I might have ended up with a CPU that kind of worked a bit, sometimes, at slow clock speeds or something equally useless. At least now I have a fighting chance of making it work properly. But all the reading led me to a new issue, the issue of choosing HCT chips over HC. I read a paper by Texas Instruments entitled SN54/74HCT CMOS Logic Family Applications and Restrictions and found, at the end, the following: “…employing HCT instead of HC devices in pure CMOS systems cannot be recommended. […] Due to the lower noise margin, there is an increased risk of interference caused by crosstalk, especially when the lines on the printed circuit board exceed a certain length. Moreover, the reduced switching threshold no longer ensures faultless operation of advanced bus systems used in microprocessor applications today.” I started to think I had made a mistake choosing HCT parts, as I had suspected earlier. My decision was stupid. I should have just found out if I could get all the parts in HC and then built the whole thing with HC instead of wondering if I would be able to, or if I would need to fall back on LS parts. I’m kind of troubled by how I let this happen. It’s not like me to plan something so badly. I had become a bit excited and got carried away. Not the sign of a good engineer. I had to stop and think about this all some more. I went to bed and slept on it.

Design decisions: Electronics

By the time I had finished the first draft of the simulation, I really felt like it would be possible to implement LEO-1 with real electronics. I had tested the simulator by spending a few days writing an assembler so that it would be easier to test by copying and pasting the assembler output into Logisim instead of having to work it out on paper and type the 16-bit instruction codes in. Now I had a simple assembler which I could use to program the real thing. I had to try and build it for real.

As I’ve mentioned before, I’m not totally new to digital electronics. I designed and built a digital clock in 1978 or so and I designed and built a decoder and display for the (now defunct) ‘Maplin Rugby Clock receiver’. I felt confident that I could get back into this fairly easily. I already had a soldering iron, multimeter, wire, basic tools and a cabinet full of spare electronic parts that I’d collected over the years. How hard could it be? I decided to find out by seeing if I could come up with a design for a single one of the 16-bit registers LEO-1 needs.

I had already decided that the 74 series was the way to go, just because I’ve always liked those chips (after I got over hating them), I’ve never really felt comfortable with the 4000 series CMOS parts, and many mini and mainframe computers like the ones I worked with in the 80s were made out of them. I read through this page to refamiliarise myself with some of the parts I was going to need and started to realise that in the last 20 years, things have changed a bit. Originally, in the 70s, I used plain old 74xx parts. These were the real TTLs. They got warm during use and were rather robust. Some were even made of ceramic instead of plastic.

Pictures I took of the clock project I did in 1993 show that I used 74HCxx parts. I’m not sure that at the time I knew that these parts are actually implemented with CMOS and are not really TTL compatible. The project worked because I used all the same kind (HC) and 5 volts (the standard TTL supply) works for CMOS as well. What’s changed for the better is that there now also exist 74HCTxx parts which are implemented as CMOS but have a completely TTL-compatible interface. By using these, you get compatibility with old TTL parts like 74LSxx but they use less power. I needed to choose between HC and HCT. My decision was influenced by something I should have expected, but didn’t: many of the 74 series are now obsolete and either very difficult or impossible to get. If I went with HC parts (which are ‘preferred’ for new designs), I would be locked into them and if a part was not available, I would be royally screwed. If however, I went with HCT parts, I would have the option of falling back on LS for difficult-to-get items. I decided to go with HCT for this reason. The only downside I could see was HCT’s ‘lower immunity to noise’. I’m not sure if that will affect what I’m doing; I hope not. In retrospect I think I may have made a mistake as I probably won’t need to interface to any LS chips and HC would have been a better choice, but I’m not going back now.

Anyway, after choosing 74HCT chips I ordered a few parts for prototyping the register board. While I was at it I looked into the kind of memory (RAM, ROM) I might want to try. I found a static RAM chip which provides 512k x 8 bits and figured I could use two of those to make a 16-bit memory. As for ROM, I’d like to try EEPROM (i.e., Flash memory in a chip) but I’m still doing research on that because of the need for a programming device to get the code into the chip. There are two options for this: build my own, or buy one. I think I’m going to have to go for option 2 on that. Still thinking about it though. Once I know which device I want to use, I can try and find an affordable programmer that supports it.

So, what do I need for a single register? It turns out that a pair of 74273 octal registers will do the trick. They latch the data in on a positive-going clock edge which is what my design needs, and they have two-state outputs. LEO-1 has four internal register buses which I called RIN, ABUS, BBUS and CBUS. The RIN bus is the register input bus and it will be constantly connected to all register inputs. The other three are the register output buses. ABUS and BBUS go to the inputs of the ALU and CBUS is used for writing a register to memory. This means that the output of every register has to be connected to three bus drivers which will enable any register value to be output to any of the A, B or C buses. The instruction decoder will ensure that only one register at a time can get onto a bus by selecting only one of the eight registers for each case of A, B and C. I chose the 74244 octal bus driver chip for this purpose. The 273 and 244 being octal (8-bit) chips means I have to ‘bit-slice’ to get 16 bits from pairs of 8-bit chips. So, one half (low 8 bits) of a register will need a 74273 register and three 74244 bus drivers, and the other half (high 8 bits) will need the same. This gives a total of eight chips per register for a total of 64 chips across all eight registers. While I was designing LEO-1 in the simulator, I didn’t give this kind of thing much thought. I’m now glad I didn’t try to design a 32-bit machine! Here’s a picture of the chips attached to a bit of static-proof foam:

Register chips

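In software terms, the whole register block behaves like this. A sketch only, assuming idealised chips with no timing; the class and method names are mine:

```python
class RegisterFile:
    """Eight 16-bit registers. RIN feeds every register's inputs; a 3-bit
    select decides which one latches on the clock edge, and three more
    3-bit selects decide which register's bus drivers are enabled onto
    ABUS, BBUS and CBUS respectively."""

    def __init__(self):
        self.regs = [0] * 8

    def clock(self, write_sel, rin):
        # Only the selected '273 pair sees the clock edge
        self.regs[write_sel] = rin & 0xFFFF

    def drive(self, a_sel, b_sel, c_sel):
        # 1-of-8 decoding enables exactly one '244 pair per bus, so two
        # registers can never fight over the same bus
        return self.regs[a_sel], self.regs[b_sel], self.regs[c_sel]
```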

Design decisions: Architecture

One of the first design decisions I made for LEO-1 was that it would be a pure 16-bit machine. I wanted to make something that would be unlike anything I had prior experience of and yet would still be useful, while at the same time not being too terribly expensive to build for real if I decided to do that. It was pretty clear to me that it would be more expensive to build, say, a 32-bit machine than it would be to build an 8-bit machine, because the cost pretty much scales with the number of bits you want. For example, if you want to make a 4-bit CPU, you could use a 4-bit adder chip in the ALU. But if you wanted 32-bits, you would need eight of those chips in a cascaded arrangement. Similarly, registers, bus drivers and all that kind of stuff scales in cost according to the number of bits. More bits needs more chips. Chips are cheap, but the more chips you need, the more circuit board space you need. Circuit boards are not cheap, especially if you want proper ones made by a PCB house (which is something I’m considering as I’m very tired of messy veroboard circuits). Also, doubling the number of bits about doubles the power requirements and the amount of heat generated… and so on. So I made a compromise. I would have loved to build a 32-bit monster but it would have been slightly over-the-top for a first CPU. However, 8 bits just wasn’t enough and I’m a bit bored by 8-bit computers now. 16 bits is in the middle (and it also makes me think of the early minis like the PDP-11), so I decided on 16 bits. That also gave me a good chance of making a clean instruction set with one 16-bit word for every instruction.
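The scaling argument in chip counts: a 16-bit add built from 4-bit adder chips looks like this, with the carry rippling from one chip into the next. A behavioural model only, of course, with none of the real propagation delays:

```python
def add16(a, b, carry_in=0):
    # Four cascaded 4-bit adder chips; each loop pass is one chip,
    # its carry-out wired to the next chip's carry-in
    result, carry = 0, carry_in
    for i in range(4):
        nibble = ((a >> 4 * i) & 0xF) + ((b >> 4 * i) & 0xF) + carry
        result |= (nibble & 0xF) << (4 * i)
        carry = nibble >> 4
    return result, carry
```

A 32-bit adder is the same loop with eight passes, i.e. twice the chips; that is the cost scaling in miniature.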

The second decision followed quickly after realising that 16 bits was good for packing a lot of codes and switches into an instruction word. That was the decision to not use microcode. The main reason for this is that I didn’t want to have to deal with writing microcode, fixing bugs in it and having to store it in a ROM somewhere. Microcode was invented to simplify the design of complex-instruction CPUs, but in order to simplify my design, I decided not to use it. This meant my instruction set was going to have to be pretty simple. The simplest instruction set to build hardware for would have instructions that were all the same length and which were directly ‘wired’ to the registers and ALU and other circuitry. In other words, no microcode. The simplest hardware would also have only one ALU, and things like registers would be generalised. Having a bunch of general purpose registers which can all do the same things, instead of having an accumulator here and a stack pointer there and an index register somewhere else, would just make it easier to design. With general purpose registers, you can design one and then clone it as many times as you like. Of course, the number of registers you can have depends on how many you can address from within an instruction. I decided on eight, which requires three instruction bits to provide a register identifier from 0 to 7. I called the registers R0 to R7 and they are all exactly the same.
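With 16-bit words and eight registers, a register-to-register instruction packs neatly into fixed fields. The field positions below are purely illustrative, not LEO-1's actual encoding:

```python
def encode(opcode, rd, ra, rb):
    # Illustrative layout: [15:12] opcode, [11:9] rd, [8:6] ra, [5:3] rb
    assert 0 <= opcode < 16 and all(0 <= r < 8 for r in (rd, ra, rb))
    return (opcode << 12) | (rd << 9) | (ra << 6) | (rb << 3)

def fields(word):
    # The decoder just taps the right wires; no microcode needed
    return (word >> 12) & 0xF, (word >> 9) & 7, (word >> 6) & 7, (word >> 3) & 7
```

The nice thing about fixed fields is exactly what the paragraph says: decoding is just wiring, with each field routed straight to a register select or ALU control.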

I also gave a bit of thought to performance. Using TTL chips I’m not likely to be able to get into the megahertz clock range, I shouldn’t think. I’ll be happy if it runs at 1MHz. In fact, I’ll be happy if it runs at any speed. But I decided early on to make a conscious effort to minimise the number of clock cycles needed to run one instruction. The fewer cycles per instruction, the faster it can go. That was when things got really tricky in the simulator. I found you have to be really careful to make sure that ‘stuff has time to happen’ after each clock tick. For example, you can’t put a memory address on the address bus and expect the memory to be ready at the same moment. There is a real delay, in the order of tens of nanoseconds, before the requested data will be stable on the data bus. So you have to wait at least one clock cycle after addressing the memory before trying to read the data out. Also, the clock ticks have to be far enough apart which is why you can’t just crank up the clock speed indefinitely and expect it to just ‘go faster’. There are also other hazards to think about like race conditions and bus contention. I started to get freaked out by how much you have to consider and how the problem gets worse the faster the circuit is clocked. A few times I almost decided I couldn’t do this and it might be wise to stop. It really made me appreciate the ingenuity of the engineers who designed the computer I’m writing this on. The CPU in my computer is (at least) thousands of times faster than LEO-1 will ever be and probably thousands of times more efficient with its caches and pipelines and branch prediction and whatever else miracles they managed to squeeze into it. I couldn’t even figure out how to implement a simple pipeline and I gave up thinking about it pretty quickly.

Anyway, as I designed the instructions and played around in Logisim I realised that the instructions were reminding me of something I’d seen before, namely RISC instructions. In particular, when I came to design the memory access instructions I found that since I only had one ALU, I could calculate a memory address to load from, but I couldn’t use the loaded value to do more maths in the same instruction. This meant it was only possible to load or store to memory at some calculated address; in other words, LEO-1 has a load-store architecture. Although at first I was just experimenting without much planning ahead, I kind of gravitated towards a RISC design because I was trying to keep things simple. I let the limitations of the architecture guide the design of the instruction set. After a while I started getting inspiration from real RISC machines like the MIPS and I started letting the MIPS design guide me somewhat. I started to see why MIPS has no stack (and therefore no built-in nestable subroutine calls). All that kind of stuff requires either the use of microcode or else insanely complex electronics. Since this is complex enough already, I decided to go the same way. No stack pointer. No automatic subroutine calls. And no interrupts. That’s a decision that is still troubling me as I’m scared it might make the thing less usable in the end. Interrupts are needed for efficient interfacing to external devices and peripherals. Without them, LEO-1 will have to poll devices in a wait-loop and preemptive multitasking is entirely out of the question. Well, I’m not trying to design a real mini-computer with disc drives and tapes and I don’t plan on trying to port Minix to it either. I’m planning on having a simple hex keypad and an LCD display for debugging. I also have a fantasy of making a simple video card like the one on my MK14 if I can wrap my brain around doing that in TTL. 
Assuming the thing even works, if I ever do need to interface it to a UART or something, I’m sure there’s a way of doing it by polling instead of relying on getting an interrupt.

Another design decision was to eliminate the concept of Condition Codes. At first, my simulation had the ‘traditional’ condition codes (Zero, Negative, Carry and Overflow) but they started to cause trouble. Since I was trying to limit the number of clock cycles needed to execute one instruction, I didn’t like the fact that there didn’t seem to be any reliable way to update the condition code register without burning a clock cycle to do it. I also didn’t like the way I had to ‘wire up’ the carry flag to check it, and the fact that if it was provided as a programmer-visible flag, I would need instructions to set and clear it. As for the overflow flag, its absence is not of much concern to me. In 35 years of programming I have never typed a single instruction that checked for overflow, but then again, I have never written a compiler or a maths library (or built a space rocket). Since I’m not planning to use LEO-1 for anything mission-critical, I decided I could live without overflow handling as well. Other CPUs that don’t have condition codes, such as MIPS, handle overflow by using a trap, which is like a software interrupt. Since I’m not doing interrupts, I can’t very well do traps either. So checking for overflow will be completely out of the question without some heavy-duty programming jiggery-pokery around every operation that cares about overflow. Hopefully it won’t be a problem. As for zero and negative, that’s easy. You can check for zero with a wide NOR gate and you can check for negative by just looking at bit 15 of a register. I was able to add instructions for branching if a particular register is ‘not zero’, ‘positive’ or ‘negative’. (It turns out you need a wide OR gate for checking ‘not zero’, but hey).
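The zero and negative checks are simple enough to express in Python terms, as a sketch of the gate-level idea rather than the actual circuit: ‘not zero’ is a wide OR across all 16 bits, and ‘negative’ in two's complement is just bit 15.

```python
# Branch-condition checks without a flags register, tested directly
# against the register value (a sketch of the gate logic, not hardware).

def not_zero(value):
    # Equivalent to a wide OR gate across all 16 bits of the register.
    return (value & 0xFFFF) != 0

def negative(value):
    # Just look at bit 15, the two's-complement sign bit.
    return (value >> 15) & 1 == 1

print(not_zero(0x0000), negative(0x8000))  # False True
```

Because the condition is computed from the register itself, no clock cycle is ever spent updating a separate condition-code register.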

Things were working out very well when I hit a pretty obvious problem that I would have to work around somehow. A nice clean pure 16-bit CPU has an annoying limitation: 16 bits can only address 64K (words) of memory. That’s the entire memory space: RAM, ROM and memory-mapped devices. I hummed and hawed about it for a few hours, wondering if I should just take the easy way out and limit LEO-1 to 64K, but I couldn’t do it. Solving the problem turned out to be quite tricky and will increase the cost and difficulty of building the machine, but it should be worth it. At least I’m not alone: the PDP-11 had exactly the same problem 🙂
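The arithmetic behind the limitation, plus one classic workaround of the PDP-11 kind: pairing a small bank register with the 16-bit address to form a wider physical address. The bank width here is a made-up number for illustration, not necessarily the scheme LEO-1 ends up with.

```python
# 16 address bits reach 2**16 = 65,536 words -- RAM, ROM and
# memory-mapped devices all have to fit in that.
WORDS = 2 ** 16
print(WORDS)  # 65536

# Hypothetical banking sketch: a 3-bit bank register extends the
# 16-bit address to 19 bits, giving 8 banks of 64K words.
BANK_BITS = 3

def physical_address(bank, addr16):
    return (bank << 16) | (addr16 & 0xFFFF)

print(hex(physical_address(0b101, 0x1234)))  # 0x51234
```

The cost hinted at in the text is visible even here: every memory access now involves extra hardware (the bank register and wider address bus), which is what drives up the chip count.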

I smell a device

I started this blog as a way of documenting something crazy I’ve just started doing in my spare time. Perhaps my wife will see it and it might help her understand what’s going on. Perhaps it will help someone else out there who is having similar thoughts about doing crazy things like this. So what’s it all about? To find out, you’ll have to wait while I explain to myself why I’m doing this.

I’ve been interested in computers for most of my life. I mean, I’m 54 right now and I’ve been interested in them since I was about 9. Wait, what? That means I was interested in computers as early as 1970. Yep, that’s right. When I was 9 or 10, I got a book from somewhere (did someone give me it?) called Teach Yourself Computer Programming. It was yellow and black like all books in the Teach Yourself series were. I still remember lying in bed reading it and not understanding what in blazes it was about. The book was basically an introduction to the FORTRAN programming language, pretty heavy stuff for a kid of that age. Back in 1970 the only computers I’d seen were the massive boxes with panels of flashing lights and tape reels that one would see in Lost In Space and The Time Tunnel. I couldn’t relate the text in the book to any of that. But something hooked me. The one part of the book I understood was the flowcharts; the diagrams with diamonds and squares that give step-by-step instructions for doing something. Is the fish cooked yet? Yes: Eat it. No: Go back and wait. Is the fish cooked yet…?

Fast forward to the mid to late 70s. I was an avid reader of the UK electronics magazine Practical Electronics and used to build some of the simpler circuits I found there. I had become interested in electronics at the age of 12 when my parents had bought me an electronics kit, the educational kind with a board you could put components on and make a flashing light or a beeper. In the mid to late 70s, Practical Electronics started publishing projects that used these annoying little things called integrated circuits. They were annoying because they were new and I have always hated change. I was used to transistors and resistors, and now they wanted me to learn how to use these silly little black plastic things with 14 little legs that all looked the same and didn’t seem to have any well-defined purpose. I resisted for a while and eventually decided to buy a few of these things and see what all the fuss was about. I remember the first chips I played with were the 74 series TTL chips, 7400, 7404 and 7490. Using those you could make a counter with an LED digit display that counted up to 9. With more, you could make a digital clock. Suddenly I understood why this change was happening. These things were powerful. (I recently found out that these chips were actually invented in the early 60s. I’m not sure how I managed to miss or ignore them until the late 70s but I think my magazine was partly to blame.)

Around 1977 or 1978, another magazine whose name I forget started publishing a project about building your own computer. I jumped right in and started reading with glee — only to find that I couldn’t make head nor tail of it. “Read? Write? Bit? Byte? RAM? ROM? Bus?” I would mutter to myself in anger. What is this stuff? Hell, forget it!

Around the same time, at school, the maths teacher introduced us to computers by letting us use a teletype which was connected to a university computer through a modem. I got my first real taste of programming in BASIC. I was terrible at it but once a week or so the teacher let me stay after school for an hour and practice. This experience really got the juices flowing. I wanted one of these things at home!

In the summer of 1979, I saw an advert for a home computer kit which was about £40 and looked like a big calculator that had been taken out of its case. It was called the Science of Cambridge MK14 and it changed my life. It only had 256 bytes of memory and no BASIC, with a simple digit display and a machine code ‘monitor’. Learning to program it was a huge challenge with only the manual to go on and no one to ask for help. I remember being completely unable to understand why they would tell you to type in ‘-1’ as ‘FF’. I’m sure anyone reading this who does not understand computers very deeply would also ask ‘why FF?’ and I can answer that now, but I couldn’t understand it back then and had no one to ask. “Mum, why is -1 typed as FF?” Yeah, right.
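For anyone still wondering ‘why FF?’: in 8-bit two's-complement arithmetic, -1 and 255 share the same bit pattern, 1111 1111. A few lines of Python show the idea.

```python
# -1 masked to 8 bits gives the bit pattern FF.
print(format(-1 & 0xFF, '02X'))   # FF

# Counting down from 0 in an 8-bit register wraps around to 255.
print((0 - 1) % 256)              # 255

# And adding 1 to FF wraps back to 00 -- exactly how -1 + 1 = 0 works.
print((0xFF + 1) & 0xFF)          # 0
```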

I can barely understand how anyone did anything before we had the Internet. To unravel this FF thing and other mysteries, I went up to Foyle’s in London and scoured the shelves for books about computers. I managed to find two books that looked promising. One was called The Architecture of Small Computer Systems and the other was Small Computer Systems Handbook. I learned the meaning of all those mysterious bits and bytes, why -1 is FF and a whole lot more from those books. I was hooked and I wanted more.

Shortly after building the MK14, I got my first proper job which happened to be in a computer room at an insurance company. The computers they had were huge and absolutely fascinating machines, full of flashing lights and with tape reels just like on TV. Although a few of the people I had to work with were abusive arseholes, I still enjoyed working on those machines and miss them bitterly even today.

So where was I at home? Yes, I remember drooling over a picture of an Apple II in bed one night in 1980 or so (other young men were probably drooling over Mayfair, but hey). I couldn’t justify buying the Apple but I found something else — an Ohio Scientific Superboard II. This was a ‘proper’ computer with a real keyboard and the BASIC language built in. I had to have one! I ordered it and waited weeks for it to arrive. When it arrived, there was no power supply, so I had to order a power supply for it and wait a few more weeks. Every day I got up and typed on the keyboard FOR I=1 TO 10 : PRINT I : NEXT. What a sad nerd! But the power supply never showed up — or something — I don’t really remember now but I had to go out and find a surplus electronics shop and buy a 5 volt power supply so I could use my computer. I powered it up and the display was all garbled. I opened up the (very simple plastic) case and found that they had soldered a small board with extra chips over the top of the circuit board. I had read in the magazines that this was routinely done to convert the US video circuitry to work on British TVs. I had no test equipment and no way to know what was wrong, and the bastards had sanded the chip designations off so I didn’t even know what chips they were. In desperation I simply cut the board off — and the display started working normally! It was an unbelievable piece of luck. This ‘convert US to UK’ stuff didn’t work and wasn’t even needed. The thing worked just fine on my portable black and white TV. With that, my life changed again. I learned 6502 machine code to the point where I could (and still can) write the machine code straight out on paper without an assembler (didn’t have one of those). I’m sure I’m not the only one who has 6502 machine code stuck in their head. Even Robocop (or was it Terminator?) had that.

Time to fast forward or I’ll never get through this. Atari 400, Atari 800 XL, Atari ST, Amiga 500, and finally a PC at home, mainframes and minis at work… and then a job actually writing games on Amiga and later Nintendo consoles. Assembler, C, C++… I was snowed under by computer programming for the whole of the 80s and most of the 90s. My electronics hobby was left in the dust and I didn’t really do any electronics again until 1993 when I made a radio-controlled digital clock from a Maplin radio and my own decoder / display design. After that, electronics was left behind again until 2005 when I built an analogue modular synth. Programming really took over my whole life. I lost my first wife because of my coding obsession and really only stopped programming in my spare time once I got the day job I have now. I love my job but it’s sometimes so gruelling that I don’t want to even see any code in my spare time 😎

Bear with me, I’m almost done. I have this recurring dream. Every now and then I have a dream that I’m looking at an old computer of some kind, something with lights, a small screen, and printouts with listings of some kind of unknown assembler code. In the dream, I know that this computer is something amazing, something that I was somehow responsible for creating. I certainly wrote the code, and I even know what it does. It’s code for a CPU I’m not familiar with but I know I wrote it somehow. When I wake up, I’m always left with a sense of loss because the thing was so real and now it’s gone and I can’t remember the details. During the dream I relive that early fascination I had with using a computer that I had soldered together myself from a pile of little plastic chips.

Every now and then I get the itch to ‘surf old computers’ and start reading about the old ICL and DEC computers that I used to work with. I tend to listen to Music For Airports while doing this because for some unknown reason that music evokes in me the feeling of ‘old computers in a clean room’. One day, about 18 months ago, I found a web page written by a guy who had designed and built his own computer with a custom-designed CPU. This nutter genius had actually constructed a working CPU at home, from 74 series TTL chips. I found that he was not alone; there are quite a few people out there who have done this. I remember looking at this work and thinking I could never, ever, do that. So I went back to my normal life and occasionally I would surf old computers and listen to Music For Airports.

Then, about 6 weeks ago, I had been up late ‘surfing old computers’ again, and the next morning I woke up with an idea in my head. I wanted to see if it was possible to design a CPU instruction set that would have only 8-bit-wide instructions. That was when I started typing up a document called Design for CPU with 8-bit opcodes (which is incorrectly named, as an 8-bit opcode is not the same thing as an 8-bit instruction). I soon found that I was designing an abomination which was a cross between a 6502 and a Z80 and was most likely impossible to actually build. I decided it was not possible to design a useful instruction set with only 8 bits to play with in a single instruction, not unless you are doing it just for fun or to learn about the concepts. So I abandoned that idea and decided to have a go at designing a 16-bit instruction set for no reason other than the intellectual challenge.
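A rough bit-budget shows why 8-bit instructions are so cramped. The field widths below are illustrative numbers only, not the abandoned design's actual encoding.

```python
# Whatever the opcode doesn't use is all that's left for operands.
def operand_bits(instr_bits, opcode_bits):
    return instr_bits - opcode_bits

# 8-bit instructions with a 4-bit opcode (16 instructions) leave only
# 4 bits: two 2-bit register fields, i.e. just 4 registers and no room
# for an immediate value or an address.
print(operand_bits(8, 4))    # 4

# 16-bit instructions with the same opcode leave 12 bits: e.g. three
# 4-bit register fields, or a register plus an 8-bit immediate.
print(operand_bits(16, 4))   # 12
```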

Shortly after I started on that, I found this lovely little logic simulation program called Logisim. I started playing with it and thought I would try to just make a couple of registers that could move an 8-bit value between them. I figured out how to do it pretty quickly and then decided to see if I could make an actual ALU. By the end of the day on which I installed Logisim, I had made a simulation of an electronic circuit that could add or subtract two numbers. I didn’t know I had it in me to do that; it wasn’t even terribly hard. It was fascinating and I got rather excited by having done this. I just had to take the next step and see if I could make the circuit do something with the ALU automatically. Over the next few days I started developing a simulation that was turning into a real (albeit simple) CPU. I showed it to my boss over Skype and he said “You should build it out of transistors (lol)!”. Of course, I told him “That would be very hard. Even making it out of logic chips would be too hard for me.”
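For the curious, the kind of add/subtract circuit an ALU is built around can be sketched in Python: a ripple-carry chain of full adders, with subtraction done by inverting the second operand and setting the carry-in to 1 (the two's-complement trick). This is a generic textbook sketch, not the actual Logisim circuit.

```python
# One full adder: sum and carry-out for a single bit position.
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

# Ripple-carry add/subtract: subtract by inverting b and feeding in
# a carry of 1, which computes a + (~b) + 1 = a - b.
def add_sub(a, b, subtract, width=8):
    if subtract:
        b = ~b & ((1 << width) - 1)  # invert every bit of b
    carry = 1 if subtract else 0
    result = 0
    for i in range(width):
        bit, carry = full_adder((a >> i) & 1, (b >> i) & 1, carry)
        result |= bit << i
    return result

print(add_sub(5, 3, subtract=False))  # 8
print(add_sub(5, 3, subtract=True))   # 2
```

The same adder serves both operations, which is why real ALUs need only a row of XOR gates and a mode line to switch between add and subtract.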

Then came a trip to Yosemite that my wife had planned and I found myself in a car with four other people and a lot of time to think on the journey. So I thought about my CPU design. I had my laptop with Logisim installed on it and during the coming days, I worked the instruction set out in my head while I was supposed to be ‘looking at rocks and trees’, and in the evening tested it out on Logisim while everyone else played with their phones or slept. By the time we got back from the trip, I had a simulation of a working CPU. I could hardly believe it. In order to test it properly I had to type instructions into it as 16-bit codes. I couldn’t handle it without making mistakes so I spent a few days writing an assembler for the instruction set and that made it easier to test programs by pasting the assembler output into Logisim’s ROM simulation. I found and fixed some problems and got jumps and branches working. Because of the way Logisim works, I found I had actually designed a whole computer with my own CPU as the centrepiece of it. I called the CPU LEO-1 and the computer LEO-1-HC. Since Leo is my wife’s nickname for me and everyone here calls me Leo, I thought that would be a good name for my CPU.

So I have a working CPU in an educational simulator. What can I do with that? Not much really. But I now know I have the ability to design a CPU from scratch at least in theory, so what’s to stop me seeing if I can make a prototype? A real prototype using those ‘silly little black plastic things’, the 74 series ICs.

This blog is a diary of my attempt to do something I didn’t think I was smart enough to do 18 months ago. If I fail, well, what the hell. At least I tried. I know there’s no real point in spending money on making something that looks and works like it’s from 1976. I know I’m going to come up against annoying problems that I can’t foresee, as Logisim is not designed to completely simulate real electronics in a way suitable for production. But I don’t care. This is probably the most interesting (technical) thing I’ve done since I built my MK14. That’s why I just ordered a PB-503 prototyping workstation 😉

Let’s see how far I get… 🙂