Last November, Scott Swazey asked why I made my own boot loader instead of using the original M9312 code.
I knew that the sources for the M9312 were available, and I did have a look at them a long time ago. At that point, I was not sure I would ever get the CPU running, let alone booting from disks. And the code looked, well, complex – and unlikely to run unless the hardware mimicked the original exactly, which seemed hardly possible at the time.
Later, when I got the first disk controller working, I just copied the boot loader from the simh sources – which I had studied to get an idea of which parts of the disk controller were essential and which I could skip. And after the second and third disk controllers came into being, I just followed that pattern. Eventually that turned into T44 – the boot loader so far, the one that announces itself with ‘Hello world’ and then proceeds to boot from the first disk on the first available controller it knows about – RK05, RL02, RP06, in that order. Since in most cases the systems have one SD card only, and thus only one controller, that conveniently works for most cases. But Scott was building a system with both an RL and an RH controller, so wanting to boot from a specific disk made total sense. So we looked into the challenge of making the original M9312 code work.
The first issue was that the M9312 code uses absolute psects – as in, code fixed at a specific address. I knew there was an issue with that in my macro11 toolchain, but I never found out what it was. Scott found it quickly though: a rather embarrassing mistake I had made in the replacement for the macro11 linker that I modified to output the VHDL source for the boot roms.
After that, it was surprisingly simple. Just a matter of adding the secondary boot rom at 165000, and restructuring the original device boot roms into one source – so it all fits into a single rom image. A bit later, I also changed the interpreter to accept lower case input – the original only works with upper case, which is a bit awkward.
M9312 commands

  • L <octal value> : set address
  • D <octal value> : deposit value at address
  • E <space> : examine data at address
  • S : start program

It also accepts the name of the four device bootroms as command:

  • DL<#> : boot from RL disk #
  • DK<#> : boot from RK disk #
  • DB<#> : boot from RP disk #
  • ZZ : run diagnostic
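Folding the input to upper case is really all the lower case support amounts to. Here is a toy model of the command loop – hypothetical Python to illustrate the interpreter's behaviour, not the actual ROM source:

```python
# Toy model of the M9312 console commands; not the original ROM code.
# Device bootstrap names are matched first, so that 'DL0' is not
# mistaken for a D (deposit) command.
def m9312_command(line, state):
    """state holds 'addr' (current address) and 'mem' (address -> value)."""
    cmd = line.strip().upper()                   # accept lower case input
    if cmd[:2] in ('DL', 'DK', 'DB', 'ZZ'):
        return ('boot', cmd[:2], cmd[2:] or '0')
    if cmd.startswith('L'):
        state['addr'] = int(cmd[1:], 8)          # L: set address (octal)
        return None
    if cmd.startswith('D'):
        state['mem'][state['addr']] = int(cmd[1:], 8)  # D: deposit
        return None
    if cmd.startswith('E'):
        return state['mem'].get(state['addr'], 0)      # E: examine
    if cmd == 'S':
        return ('start', state['addr'])                # S: start program
    raise ValueError('?')                              # anything else: error
```

With this ordering, lower case works for free: `dl0` boots RL unit 0 just like `DL0`.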

To make space for the lower case input, I had to remove some of the code from the original interpreter source – a bit of diagnostic code that would run on first boot. That also leaves some leftover room to reintroduce the ‘Hello world’ message – I’ve become used to it, and I’m missing it now. Or maybe add some more user friendliness to the command interpreter – it’s very historic in its original state, and although it is somewhat fun to have it work that way, it also makes for a lot of typing mistakes.
Besides his work on the boot code, Scott also found a mistake in the RL controller: the adders for the sector address were not wide enough, so an access to the fourth disk could wrap around to the first. That’s fixed now. He also suggested offsetting the disk images on the card, and using a standard MBR to address those images. After some long and hard thought, I decided not to include this – it may be convenient in some cases, but it conflicts with the future plans I have for the disk controllers.
I haven’t decided yet if I will include the new boot loader into all prebuilt bitstreams. For the simple setups at least, the old boot loader scheme still makes sense. What I’ll definitely do is integrate both to use a joint code base for the device boot roms.
Updated sources will be published in a couple of weeks, I’m currently working to include Terasic’s C5G board into the distribution. After that is finished, I’ll post the new sources.

RSTS and J-11

Some time ago, Paul Koning contacted me about the issue that RSTS did not correctly detect the CPU type when the cpu was configured as a J-11 type – 11/84 or 11/94. He had already identified a problem in the cpu sources: the MFPT instruction would set the CPU code in the primary register set, instead of the currently active register set according to the PSW.
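As a sketch of what the fix amounts to – Python standing in for the VHDL, with the register-set select taken to be PSW bit 11 and 5 assumed as the J-11’s MFPT type code:

```python
# Toy model of MFPT: the processor type code must land in R0 of the
# register set currently selected by PSW bit 11 -- the bug always used set 0.
PSW_REGSET = 0o4000        # PSW bit 11: current register set select

def mfpt(regs, psw, cpu_code=5, buggy=False):
    """regs is a pair [set0, set1] of 8-element register files."""
    rset = 0 if buggy else (1 if psw & PSW_REGSET else 0)
    regs[rset][0] = cpu_code   # MFPT leaves the type code in R0
```

With the bug, code running on register set 1 never sees the type code MFPT just produced.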

I built the core for the MFPT instruction a long time ago, at the point where I was working with a copy of the ZKDJ test to verify that the regular instructions were working correctly. I added the MFPT mainly because I liked the idea of sticking as close as possible to the original ZKDJ source – at the time, I did not anticipate the system becoming as complete as it is now. Why I chose to write the CPU type value in the primary register set I don’t really remember – it seems illogical now.

Anyway. The fix did solve the CPU type detection problem, but immediately revealed another: the startup code in RSTS went into a halt. Paul quickly found the reason: RSTS would overwrite its memory sizing code while trying to find out how much memory was available. The cause of this was that the J-11 models have 2044 KW of memory, and do not implement the unibus remap of the top 128K back into low memory – as the 11/44 and 11/70 do.

After I fixed that issue, yet another appeared: the startup would proceed further, but would now issue the message:

This DCJ11 cannot be used in conjunction with an FPJ11 accelerator.
Contact Field Service for FCO kit EQ-01440-01 to correct the problem.
INIT will continue, but timesharing cannot be started.
RSTS V10.1-L RSTS (DB0) INIT V10.1-0L

Which I could easily suppress by setting the 8th bit of the control register at 17 777 750 to zero – stating that no FPJ-11 floating point accelerator is present. Since the J-11 always includes the floating point instruction set in its microcode, functionally there is no difference whether or not the FPJ-11 is present – it should only speed up the floating point instructions. But then, the message shows that there is a difference…

Diving deeper into the issue, Paul was able to find that the test that produced the message failed on a check involving the ASHC instruction. Sure enough, in the manual for the 11/84 – EK-1184E-TM-001_Dec87.pdf, to be found on Bitsavers – page B-17 lists two model differences for the ASH and ASHC instructions, which I had already implemented a long time ago – but incorrectly applied to all models. As a test, I disabled this specific behaviour – and the result was that RSTS booted up, and recognized an FPJ-11 without complaining.
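For reference, the base behaviour of ASHC – common to all models, without the B-17 quirks – can be sketched like this. Python as a behavioural model; condition codes are omitted:

```python
# Behavioural sketch of ASHC: the 32-bit value in registers R and R|1 is
# shifted by the low 6 bits of the source, taken as a signed count in
# -32..+31 (negative means arithmetic shift right). The model-specific
# differences listed on page B-17 are deliberately not modelled here.
def ashc(r_hi, r_lo, src):
    val = (r_hi << 16) | r_lo
    if val & 0x80000000:
        val -= 1 << 32              # treat the pair as a signed 32-bit value
    count = src & 0o77
    if count & 0o40:
        count -= 0o100              # sign-extend the 6-bit shift count
    val = val << count if count >= 0 else val >> -count
    val &= 0xFFFFFFFF
    return (val >> 16) & 0xFFFF, val & 0xFFFF
```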

Apparently, the FPJ-11 played some role in fixing the incorrect implementation of ASHC – and probably ASH – in the J-11. Maybe the accelerator actually executed these instructions? Or maybe its presence implied different microcode, or a different path through the microcode?

I’m not sure there is a way to find out – none of the documentation I’ve found so far includes this level of detail on the original hardware. Whatever the case, the RSTS CPU recognition bug is now fixed. Thanks Paul!

Besides fixing these bugs, I also made the bit setting in 17 777 750 a configurable item – including the corresponding behaviour of the ASHC and ASH instructions. The parameter is called have_fpa, and its default setting is 0, meaning no FPJ-11. I don’t think there is any use for having this, other than looking at the differences in the hardware listing in the RSTS startup…

have_fpa => 0
Start timesharing?  HA
  HARDWR suboption? LI
  Name  Address Vector  Comments
  TT0:   177560   060
  RB0:   176700   254   Units: 0(RP06)
  XE0:   174510   120   DELUA Address: 00-04-A3-1A-70-E1
  KW11L  177546   100   (Write-only)
  SR     177570
  DR     177570
  Hertz = 60.
  Other: FPU, 22-Bit, Data space, J11-E CPU
  HARDWR suboption?
have_fpa => 1
Start timesharing?  HA
  HARDWR suboption? LI
  Name  Address Vector  Comments
  TT0:   177560   060
  RB0:   176700   254   Units: 0(RP06)
  XE0:   174510   120   DELUA Address: 00-04-A3-1A-70-E1
  KW11L  177546   100   (Write-only)
  SR     177570
  DR     177570
  Hertz = 60.
  Other: FPU with FPA, 22-Bit, Data space, J11-E CPU
  HARDWR suboption?

As usual, I’ll post the updated sources to the download page some time later this weekend.

FPU and DEUNA fixes

Since I found the problem with the BAE register in the RH70 that prevented RSX-11MP from running, I’ve been working on straightening out the timing between the CPU and the main memory. It’s nowhere near finished, but the first tests show that when it is, the CPU will be capable of much higher speeds – one experiment even ran at 90 MHz on the latest FPGA models. Not bad at all, compared to the current baseline of 10 MHz, even considering that the new timing needs slightly more cycles per instruction.

In the meantime people have been looking at RSX. Especially Paul Anokhin, who has helped me find several issues. Firstly in bringing the somewhat forgotten de1dram board variant back to life. The DE1 board has two memory chips: an sram of 512KB and a dram of 8MB. Obviously the sram is a bit too small to really exercise 22-bit CPUs, and very limiting if you want to run the later versions of RSX or Unix on them. Years ago I made the de1dram version as an experiment while I was waiting for my first DE0 board to arrive, but it was never quite finished – the DE0 arrived before it was done, so I finished the work on that board instead and forgot about the de1dram. But now it works.

Secondly, Johnny Billquist has been working on BQTCP – a TCP stack for RSX that coexists with Decnet. It is afaik the only case that really requires the buffer chaining in the DEUNA to work correctly – Decnet uses fairly small buffers, but TCP by default uses an MTU of 1500, so if a packet of that size arrives and the buffers are smaller than that, the buffer chaining needs to be correct. This was not a problem before, since all other cases – Decnet itself, TCP on 2.11BSD – appear to use buffer sizes slightly larger than the maximum packet size they expect to receive.

I wrote the DEUNA microcode as a sort of proof-of-concept – meaning, it is not really cleanly structured code. But it appeared to work well enough, so the somewhat more complex case of correct buffer chaining was never completely finished; it triggered a warning message, and it would switch to the next buffer when needed. What I forgot was the case where a chunk of data from the ENC424J600 chip – the data is copied from the chip in 16-byte chunks – did not completely fit in the buffer; in that case, the whole 16-byte chunk would be placed in the new buffer instead of filling up the old one first. Obviously, that caused errors for BQTCP. Luckily, it was surprisingly easy to fix: I only needed to restructure the receive flow in the microcode a bit and add a couple of tests to make the buffer chaining work correctly.

The latest issue Paul reported was slightly more complex to find – he wrote an F77 program to do some floating point calculations, and the results were not correct. The same algorithm in BP2 on RSX was also wrong, but translated to C on 2.11BSD it worked correctly. Since I did not have much time to look into this, I asked Paul to look at the instructions generated from the F77 program, and try to find the difference in flow between SIMH – which worked correctly – and the FPGA hardware. What he came up with was that the flow diverged near the execution of the ABSF instruction.

That provided a nice clue for me to start chewing on. And soon enough, it became clear that there was an issue in the way that the addressing mode 0 for the group of instructions called ‘fp single operand group 2’ was handled – ABSF, NEGF, TSTF, and CLRF. For mode 0, I implemented a fast path in the instruction sequencer to bypass reading the input operand – because the input operand is in a register, it does not require memory access and thus does not need memory cycles. However, the register read occurred in the same cycle as picking up the output from the ALU – so, in effect, the output of the ALU was not based on the input. For the CLRF instruction, that makes no difference since the input is irrelevant anyway, and I would speculate that the TSTF instruction is not used much – but for the ABSF and NEGF instruction this is obviously not the case.
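A toy model of the timing problem, with Python standing in for the instruction sequencer and ‘latch’ playing the role of the ALU input register:

```python
# Toy model of the mode-0 fast path for the fp single operand group.
# In the buggy version the register read happens in the same cycle in which
# the ALU result is latched, so the ALU operates on whatever the operand
# latch still held from before.
def fp_mode0(regs, r, op, latch, buggy=False):
    if buggy:
        result = op(latch)      # ALU output picked up this cycle...
        latch = regs[r]         # ...while the register read lands too late
    else:
        latch = regs[r]         # fixed: operand read during instruction decode
        result = op(latch)
    regs[r] = result
    return result, latch
```

With `op=abs` this is ABSF; with `op=lambda x: -x` it is NEGF. CLRF ignores the operand, which is why it was unaffected.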

Apparently the addressing mode 0 ABSF and NEGF instructions are not used much. I checked 2.11BSD; at least the C implementation hides these instructions through library calls, so the compiler does not appear to generate these instructions directly. And the library implementation works with the operands on the stack, so it will never use mode 0. Also the MAINDECs seem to omit checking this part – maybe it did not use a separate data flow in the original machines, so that it would not make sense to specifically test for it. Whatever the case, none of FFPA, FFPB, FFPC, KFPA, KFPB, KFPC, or ZKDL picked up this issue – all of these run quite nicely even when the bug in the CPU is present.

Here too, once I understood the nature of the problem, the fix was quite easy: I only needed to advance the register read to the main instruction decode state. Where it should have been in the first place, obviously – the whole point of the fast path was that the register would already have been read during instruction decode.

Anyway, I’ll be posting the updated sources to the download page, and later this weekend I’ll post updated bitstreams as well. Big thanks to Paul for his help in finding and fixing these!

Fixes for DEUNA

Over the last months, I had a couple of occurrences of the problem where 2.11BSD would lose its network connection, reporting that there were no transmit buffers available on the DEUNA. All in all, I’ve seen this problem three or four times over the last year, but maybe ten times in the last month or so. No idea why – the only thing that changed is that I have a new Ethernet switch, which could make for a subtle change in the timing.
Anyway, now that it occurred more often, I had the opportunity to find out what was actually wrong. I enabled the debug code in the if_de.c driver for the DEUNA and added some more debug statements. Next morning I was surprised by the debug output – showing that, in effect, all transmit buffers were free…
So, that got me thinking about the interrupt controller core in the DEUNA. It contained a somewhat strange edge-trigger construction that could potentially result in a deadlock. I changed it, and set up my venerable old 20 MHz oscilloscope to show the interrupt signals – br and bg.
This time I had to wait three days for the problem to occur again – and to my disappointment, it did, and it still locked up in the same way. However, it was also clear that no interrupts were taking place, so I was definitely looking in the right place – even if the interrupt controller was not locked up itself, no interrupts came through. More evidence against the interrupt controller and the edge-trigger in it.
A couple of experiments showed that an easy solution would be to generate interrupts on the level instead of the edge. But this would also cause the DEUNA to keep interrupting until the software disabled interrupts or cleared the originating bit. Not very elegant, but it did work – and after some time, I realised that the software will in all likely scenarios examine the interrupt bits, and most likely reset them. So would it maybe work if I went back to the edge triggering system, and reset the trigger on writes into the PCSR0 register?
Of course it did.
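In Python pseudocode, the scheme works out to something like this – a toy model of the idea, not the VHDL:

```python
# Toy model of the reworked interrupt request logic: a rising edge on any
# event bit sets the request flip-flop, and any write to PCSR0 -- where the
# software acknowledges the interrupt bits -- clears it again, re-arming
# the edge detector.
class IntRequest:
    def __init__(self):
        self.prev = 0          # event bits seen last cycle
        self.pending = False   # the request flip-flop

    def clock(self, events):
        if events & ~self.prev:
            self.pending = True
        self.prev = events

    def write_pcsr0(self, value):
        self.pending = False
```

The key point is that the flip-flop can never stay stuck: the software’s own acknowledge write re-arms it for the next edge.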
And another minor thing comes to mind: I keep saying DEUNA, but it’s actually a DELUA now. The difference is only in the PCSR1 ID bits; no logic has been changed at all. I did this because Decnet on RSX-11M-Plus tries to load microcode into the DEUNA – which will not work, because in reality the controller does not look like a real DEUNA at all. But it will leave a DELUA alone. And because all the other software – 2.11BSD and RSTS – does not seem to distinguish between DEUNA and DELUA, there seemed to be no reason not to change the thing into a DELUA.
I changed several subtle things in the microcode as well, mostly around buffer chaining and resetting the chip if it becomes disconnected for some reason. Buffer chaining probably still is not correct, but it doesn’t really seem to be used extensively by the operating systems – it’s only when broadcast frames arrive that are longer than what the network stack expects that the code seems to be triggered.
The updates – including the fix for RSX-11M-Plus – are on the download page now, and several pregenerated bitstreams as well. Enjoy!

RSX11M-Plus. Finally.

A couple of weeks ago someone mentioned that there were some FPGA related articles in the December issue of Circuit Cellar. So I checked it, and one of the articles pointed me to the built-in logic analyzers that the leading tool chains now all seem to have. At least, the Circuit Cellar article is about Chipscope, which is the Xilinx variant, and Altera has something similar called SignalTap.
Since most of my Xilinx stuff has been stored away since last year’s spring cleaning, I decided to go and play with SignalTap. And as usual with the FPGA tooling, the first impression was not that favourable. But a couple of days later I thought I’d try again, and this time around I started to appreciate some of the things that the software can do. For instance, tap into an enormous number of signals at a time – certainly compared to my old ‘real’ analyzer, which can do only 32 signals. And the amount of capture memory is also decent, provided you’ve got some room left in your FPGA memories.
But more interesting is the trick where you can let the analyzer capture when some subset of the signals change state. And you can assign names to bit pattern values in a capture. Those two tricks I used to finally find the problem that prevented RSX-11M-Plus from booting – first, I used the address match signal within the RH11 controller logic as a trigger for the analyzer to capture state, and second, I assigned the register names of the control registers within the RH11 to the address signal.
So, I thought that would give me a nice and easy overview of exactly what RSX-11M-Plus was doing to the RH, and what was causing it to get wrong results. And that is exactly what it did – only, not in the way I expected. Took me some time to see something that in retrospect is very obvious: there is a write to a register in the RH11 space that isn’t decoded into a register name – even though I added register names for all registers that I knew about.
Aha. So, something going on here… The first thing I checked was whether it could be a controller register or a disk register – in a real setup with RH and RP, some of the registers reside in the disk, others in the controller. I decided to verify all the controller side registers first – and the one that I was consistently missing was BAE, the register that holds the bits 21-16 of the address for the controller. A quick change to the controller source proved that to be correct; if I assigned BAE to this register address, suddenly RSX-11M-Plus would boot happily… And it seems to run quite happily as well, including running complete sysgens, and also running Decnet and other software.
A couple of things still need some clarification; mostly, whether other registers also live at addresses other than I would expect. Once that is done, and I’ve completed my usual regression tests, I’ll be posting the new vhdl to the download page.

This output from the SignalTap-II analyzer shows the unexpected address for the BAE register


Last week Al Kossow posted some new sources for XXDP tests on Bitsavers. Included are the sources for the MMU tests for the 11/34. I tried to run those, and was surprised by a list of error messages… Turns out, there were a number of issues left in my MMU implementation that none of the other tests I’ve used so far had picked up. To be more specific: FKTA, FKTB, and FKTC all complained about issues that FKTH, KKTA, KKTB, and ZKDK let pass…

  • mtpi worked in the wrong order. Because the instruction follows the more or less regular destination pattern, I did the address calculation for the destination first, and only after that the special part for mtpi – finding the implied operand from the stack, and popping the stack in the process. That gives wrong results if the stack pointer is updated in the destination address calculation… An obscure case, probably, and I’m not at all sure that this exact process is used by all PDP models. Nevertheless, there is a fix.
  • the a and w bits in the pdr registers should be reset on a write. However, my implementation only did that for word or even byte writes – not for odd byte writes.
  • and probably the most promising of all, the a and w bits in the pdr registers should not be set if the access was aborted by the mmu.
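The a and w bit rules boil down to something like the following sketch – Python as pseudocode, and note that the bit positions for A and W are an assumption of this sketch, not taken from the tests:

```python
# Toy model of the PDR A and W bit handling the FKT tests insist on:
# any write to the PDR clears both bits -- including odd byte writes --
# and an access that the mmu aborts must not set them.
A_BIT, W_BIT = 0o200, 0o100    # assumed bit positions for this sketch

def write_pdr(pdr, value, odd_byte=False):
    if odd_byte:
        pdr = (pdr & 0o377) | ((value & 0o377) << 8)   # high byte only
    else:
        pdr = value
    return pdr & ~(A_BIT | W_BIT)        # any write resets A and W

def access(pdr, is_write, aborted):
    if aborted:
        return pdr                       # aborted access: leave A and W alone
    pdr |= A_BIT                         # page accessed
    if is_write:
        pdr |= W_BIT                     # page written into
    return pdr
```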

I had some hope that these fixes might also have an impact on the still mysterious problem in booting RSX-11M-Plus. No luck though… Still, it never ceases to amaze me how good and how devious some of these test programs are; it sometimes takes hours to find out what a test does, and how to make the cpu and mmu work as intended. And, obscure though these issues may seem, they may in some way impact how some old software runs on the VHDL.
Anyway, I’m running a number of regression tests now, and will post the latest versions some time later. And maybe play with some of the other tests as well – there could still be more to find there.
And of course a big thankyou to Al!


Since DEUNA works, I more or less constantly have one or more boards running 2.11BSD. One of these has only been down twice since November 2 – the first time after three months, when I accidentally touched the power switch while cleaning, and the other time last week, when a transformer exploded across the street – close to 10,000 homes without power. So only two reboots since the beginning of November – and both because of power problems. I have modern systems that do worse…
Not much has happened on new developments. Partly because my boards were all showing increasingly impressive uptimes, partly because I was not sure what to do next – and other priorities demanded a lot of time. What I did do was restructure the DEUNA microcode a bit to make it more robust. There were a couple of bugs in there as well – for instance, the bit in the control register that should cause a reset of the controller hardware actually didn’t. But a more major improvement is that it is now no longer fatal if the pmod becomes disconnected. The controller will just reconnect when the chip becomes available to it again, and continue where it left off.
There is one problem remaining in DEUNA that I’m aware of; very occasionally – so far, I’ve seen it happen 3 times in almost 6 months of running – 2.11BSD will start complaining that there are no free buffers. At the same time, activity on the blinkenlights shows that something is going on – more instructions appear to be processed than normal during idle-waiting. However, vmstat shows no unusual activity that I can detect. The solution is to ifconfig the interface down and up – and things are normal again.
In the meantime, I’ve been thinking on what to do next – if anything. One item on the roadmap that is a bit overdue is the split disk controller – I mean, the change to the disk controller that allows the cpu to run while the sd cards are active. Would be useful to make this change – I expect that 2.11BSD and probably RSTS and RSX would benefit from the cpu being able to run during card activity, and thus become faster. But also interrupt response and clock stability would improve. The downside is that it is relatively boring work…
Another idea I have is to change my mind and implement the Qbus systems after all. The only really difficult bit is that it would require a DEQNA – which shares surprisingly little with the DEUNA, so it would be a lot of work. And there really is only one reason to do it: it would run Xinu.
Anyway. Summer is here, time to go out and play. But maybe it will rain some days.

DEUNA works!

Terasic's de0 board with Digilent's pmodnic100 attached

I already posted that I was working on it, so it should not be a very big surprise. DEUNA now works. At least, it works well enough for 2.11BSD – and very stably as well. I’ve even written a new web server for 2.11 – well, you’ve got to understand that 2.11 is a bit older than the concept of web servers. Even though there is a lot of code for simple web servers around on the net, most seem to assume the ‘new’ style of C – which is not that big a deal to convert, until you encounter the intricacies of varargs. Anyway, I decided that I would just roll my own. It seems only right to run your own web server on your own hardware, after all. And, in the process, I decided that I was not really interested in a ‘normal’ web server – I wanted something to browse the operating system with, not something that would serve me content. So, the web server I made gives you the sources, the text files, the directories, and if I get around to it, it will give you octal dumps of the binaries. Would have been so brilliantly useful, 30 years ago. And in a way, it still is.
Anyway, Decnet also works. That is, I’ve not had time to test anything beyond a trivially simple 2-system network, with two nodes each running RSTS version 10. That was challenging enough – Decnet encodes its node address into the Ethernet card’s mac address, and as it turns out, I had misread the specs of the Ethernet chip on the byte order of the mac address. It works now, but finding the mistake took me several nights of debugging. And the show counters command also basically works – at least, it doesn’t hang the system anymore like it did when I first tried it, but it still only produces random numbers. All the required hardware is there though – even the clock that is required for the seconds-since-last-reset fits in. It just needs some more microcode to implement the different counters.
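For reference, the encoding that bit me: Decnet’s station address is the fixed prefix AA-00-04-00 followed by the 16-bit node address (area × 1024 + node), low byte first. In Python:

```python
# DECnet Ethernet address: the fixed prefix AA-00-04-00 plus the 16-bit
# DECnet address (area * 1024 + node) in little-endian byte order --
# low byte first, which is exactly what's easy to get reversed.
def decnet_mac(area, node):
    addr = (area << 10) | node           # area 1..63, node 1..1023
    return 'AA-00-04-00-%02X-%02X' % (addr & 0xFF, addr >> 8)
```

So node 1.13 becomes AA-00-04-00-0D-04 – with the two address bytes swapped, nothing on the wire matches.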
Writing that microcode is not yet on the agenda though… All boards are now tasked with running 2.11BSD, or are in use for tweaking minor things in the DEUNA microcode. Even though I now have plenty of PMODNIC100s and already had a lot of boards – I should be able to run more than two systems at the same time. However, I forgot that my Ethernet switch hasn’t got any free ports left… so, yes, well, eeehm, I could very well run tests with other OSes as well, but then I’d have to break off the stability tests of the systems I have running now. Would be a shame. I’ll get around to it, eventually. But not this month. Lots of other things demand attention as well, and rewiring my network isn’t even on the to-do list yet.
Which brings me to the DEUNA itself. It’s an interesting collection of bits and pieces. First of all, the DEUNA that I implemented is a front end for the PMODNIC100 that Digilent sells. Or, more specifically, for the ENC424J600 from Microchip – a brilliant tiny thing that has an SPI interface and does everything that needs doing to be Ethernet. The DEUNA front end is basically a Unibus interface, with a cpu that runs microcode, a busmaster that copies data from the Unibus to the DEUNA, and a busmaster that copies data from the local DEUNA memories to the ENC424J600. Nifty hardware, that is… but the hardware, and by extension also the microcode for the DEUNA, is not compatible in any way with the real deal. Which means that running XXDP for the DEUNA is out of the question. And in turn, that makes verifying whether the DEUNA-to-be works correctly a bit difficult. The microcode is also somewhat difficult to debug – it seems like stepping back into the early days of PDP-11 work – I have mainly had to rely on theory to find bugs, with only the occasional printf to the console to help find a clue as to what is happening. There’s not much debugging infrastructure in the DEUNA hardware and microcode just yet – probably because it’s not been a priority, and that’s because a lot of things already seem to work just fine.
Thus far, I’ve been able to find the answers for 2.11BSD and RSTS v10. At least, 2.11BSD works great – I’ve seen uptimes of several weeks so far, and I’m not really able to break things without cheating – for instance, flood pings work just fine, and all normal protocols work as they should and seem stable – also with uptimes ranging into weeks. And RSTS also seems to work fine – even though details like the read counters command are not really implemented yet. Ultrix-11 however has a problem; it fails because something appears to go wrong with the TCP acknowledge frames – even though the DEUNA microcode receives them and signals the frames to the upper layers, Ultrix appears not to be aware that the frames were received, so a session will stall after the TCP window is filled. I’m clueless as to what goes wrong. It might not even be a problem in my DEUNA; I’ve heard a suggestion that the same issue may also exist on ‘real’ hardware.
Most amazing about the whole DEUNA setup is that I’ve been able to fit an 11/70, a terminal including its own PDP-11 cpu to run the terminal microcode, and the DEUNA adapter including its own PDP-11 cpu for its microcode – all in the single smallish low cost FPGA that sits on the DE0 board. And the same thing also fits on the N2B1200 board. It should come as no surprise that for these configurations the 11/70 does not include floating point hardware – but still, three PDP-11 CPUs in a single small and low cost FPGA. Luckily, thanks to Walter’s brilliant work on fixing the FPU simulator code, you don’t need floating point hardware to run a 2.11BSD system – so you can have an 11/70 without FP11, but with Ethernet and an embedded console, and have it run a complete 2.11BSD system on the DE0 board – with the ENC424J600 to connect it to the Internet. Just make sure that you install the patches on a system that does include the FPU… then set FPSIM YES in your kernel configuration file, run make, probably tweak the makefile, make install, move the image to an SD card, and then you can boot it on your FPGA.
Curious? Surf to my de0 board at

PDP2011 has been to VCF

Jack Rubin sent me this picture of his DE0 board running RT11 in front of a real PDP11/70. Taken during the VCF East 8.0, which took place last weekend. It’s something of a classic ‘old meets new’ picture – there being something like 40 years of age difference in it. Would have liked to be there and see it for real – but it’s on the other side of the ocean from where I live. Still makes me proud to see my work on display. Thanks Jack!
Over the last weeks, I’ve been working on an Ethernet interface for the PDP – a DEUNA, to be more exact. And it’s basically only the implementation of the controller; the real Ethernet stuff is implemented in a tiny Microchip ENC424J600 device that I interface via SPI. The DEUNA is working a bit already, but there is still a major bug in it – it appears to cause memory corruption. I’ve been able to do some pings to a board running 2.11BSD, and even telnetted to it a couple of times, but it tends to crash often. I expect that there is a bug in the bus master logic, and that will take me some time to find. Especially since summer is here, and it’s time to go outside and spend time on outdoor projects.
PDP2011 in front of a real PDP11/70

Fixes and Nexys3

Since the last post I’ve been fixing some more bugs in the new serial controller. The most obvious difference is that its speed is configurable now. Less obvious, but equally if not more important, is that the stability is much improved, especially in the rx channel. At the time of the last post, a flood ping would still cause the controller to lock up. Not any more. It now runs quite stably with SLIP at 38400 BPS. At that speed, the CPU is already doing ~4000 interrupts per second and is very near 100% busy – it will not go much faster than that: the link will run at 57600 BPS, but the actual throughput will decrease, and the number of packets received in error rises considerably at that speed.
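The ~4000 figure follows directly from the line rate: a character on an async line costs 10 bit times (start bit, 8 data bits, stop bit), and SLIP on a character-at-a-time serial interface interrupts once per character.

```python
# Back-of-the-envelope interrupt rate for a character-at-a-time serial line.
def char_interrupts_per_second(bps, bits_per_char=10):
    # 1 start bit + 8 data bits + 1 stop bit = 10 bit times per character
    return bps // bits_per_char
```

38400 BPS works out to 3840 interrupts per second; 57600 BPS would demand 5760, which is where the CPU gives up.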
Anyway, at the same time I received my order of new goodies – a Nexys3 board and assorted pmods, including the pmodnic100. The pmodnic100 is the first step towards creating a DEUNA-type ethernet controller – that is, it looks like it will be possible to do, what I’m not yet sure of is how much time it is going to take. The interfacing is not very difficult, but it is still going to be a lot of work, and DEUNA is a complicated thing to be compatible with.
The Nexys3 is somewhat of a mixed joy, I must say. From the pricing, I had expected the FPGA to be huge. It is not – it’s effectively smaller than the one on the Nexys2-1200 board I already had. Also, it has only 3rd party prom parts – not the Xilinx proms – so prom programming is only really possible with the Digilent tools, and those only run on Windows. My last remaining Windows pc runs fine, but with one minor issue: I need to keep pushing down on the CPU fan – if I let go, it comes loose, because the plastic retainer bracket is broken. If that happens, after about 5 seconds the cpu overheats and shuts down – causing Windows to crash. Oh joy.
Digilent in the meantime is very slow about answering support questions – it took them almost two weeks to come up with what I already knew: it’s not possible to program the proms with the Xilinx tools. And, to finish my rant, the difference in build quality between Terasic’s products and Digilent’s is rather big – and the pricing is too, but the wrong way around. Digilent used to do very well – the S3Board was and is a fine product: well thought out, well-engineered, and lovingly produced to decent standards. Apparently something got lost along the way.
What the Nexys3 also brought was somewhat of a surprise – the VGA core did not work on it. And apparently it never worked on the Nexys2 either – I must have forgotten that I never implemented it there. Anyway, the issue with the Nexys3 was that the spartan-6 is unhappy about asynchronous access to blockram; it should be possible, but the synthesizer does not seem to generate a core for it. So, since it didn’t really need to be asynchronous anyway, I changed it to synchronous – and fixed one of the older minor issues, which caused the last scan line to fall off the edge of the VGA screen. And I fixed another bug in the vga core as well – it was impossible to enter a capital U. Also, I made lots of updates to the font table – it should look better now, and it will also show control characters in a style similar to a vt100. It’s nowhere near a ‘real’ vt100 yet, though.
Together with one of the nice things about the Nexys3 – the USB Host port for the keyboard – that does create a rather fancy system; I can now connect my new fancy Rapoo wireless mini keyboard. No more big old ugly PS2 keyboards!
I’ll be posting the latest sources later this weekend. Besides the fixes to the terminal core and the serial line controller, there’s also the floating point updates in there. Definitely worth the effort of an upgrade.