

RISC World

The RISC OS Time Machine

Richard Hallas interviews Dave Walker

In early 1998 Richard Hallas, then editor of RISC User, interviewed Acorn's Dave Walker about the future of Acorn and Acorn machines. A cut-down version of this interview was published in RISC User volume 11 issue 4. This is the full text of the original interview. Just a few short months after this interview took place, Acorn collapsed. This article provides a fascinating insight into what could have been; as Dave Walker himself admitted, a year is a long time in computing...

Dave Walker photographed in 1998

Acorn's Future Technologies

Risc PC II and the future of desktop machines

The specifications of the Risc PC II have changed quite a lot since it was first announced. For the sake of accuracy, would you please give a brief summary of its most important features? Also, how much faster in real terms will it be than the current StrongARM Risc PC?

Well, it's still to be determined precisely which components are going into the system; everything hinges on the support which can be squeezed into IOMD2 before we freeze the design. The processor you get at the moment is a StrongARM 110 Revision T, which is slightly different in the memory management area to what's in the Risc PC: DEC have done a few modifications, but it's still clocked at 233MHz. If you were to put a Revision T in a Risc PC, you wouldn't be able to tell the difference between it and what we're currently shipping, which is a Revision S.

The video system has had a major overhaul; a new iteration of VIDC20, designated VIDC20R, is to be used as standard. VIDC20R is essentially the VIDC2L cell as used in the ARM7500 and 7500FE, with the components which were originally removed from the 20 cell to make the 2L cell added back in. The reason for taking this route is that VIDC20 was fabricated with a feature size of about 1.0 to 1.1 micron, whereas VIDC2L was re-laid out to make it manufacturable on a 0.6 micron process. This feature shrink results in a significant pixel clock speed increase; in fact, it's a doubling to 200MHz from the original 100MHz. When coupled with 4Mb of fast VRAM soldered to the motherboard, this will give you 1600x1200x32K colours at 75Hz; we've actually tried this on a test harness we built when we received the first VIDC20R samples, and we had to borrow the monitor from our top-end CAD workstation to track the results, as it was the only monitor in the building which would keep up!
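The figures quoted here can be sanity-checked with a little back-of-the-envelope arithmetic (my own, not Acorn's; the blanking-overhead remark at the end is an assumption about typical monitor timings):

```python
# Back-of-the-envelope check: does 1600x1200 in 32K colours (packed
# 16-bit pixels) at 75Hz fit inside 4Mb of VRAM and a 200MHz pixel clock?

width, height = 1600, 1200
bytes_per_pixel = 2            # 32K colours = 15 bits used of a 16-bit pixel
refresh_hz = 75

framebuffer_mb = width * height * bytes_per_pixel / (1024 * 1024)
pixel_rate_mhz = width * height * refresh_hz / 1e6   # visible pixels only

print(f"framebuffer: {framebuffer_mb:.2f}Mb")   # ~3.66Mb, inside 4Mb of VRAM
print(f"pixel rate:  {pixel_rate_mhz:.0f}MHz")  # 144MHz for visible pixels
```

Blanking intervals push the actual dot clock some way above the 144MHz visible-pixel rate, but it still lands comfortably under the 200MHz VIDC20R ceiling.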

The audio system has also had a significant rework. VIDC won't be providing the bulk of sound output; instead there is a new codec chip going on the motherboard which emulates a SoundBlaster card. I don't believe it supports wavetable synthesis, but it does have a microphone feed (which you'll find on the front panel, along with the headphone jack, power switch and reset button). Also connected with the codec chip are a couple of sockets on the back panel; you'll find a PC-style game port which will double as a MIDI port (in and out, not thru); as part of the main I/O system you'll also find two serial port sockets, each of which should be able to move data at least as quickly as the serial port on the current Risc PC. You'll also find a parallel port, as per the current Risc PC, and PS/2 mouse and joystick ports; it's expected that the current Risc PC will be the last Acorn machine to support the quadrature mouse interface.

Finishing off the I/O side, there are three slots on a backplane to take existing podule-style expansion cards, and four PCI slots. The Risc PC/A7000(+)-style NIC slot has been done away with. There will be a single 1600K floppy drive fitted, it's looking like all machines will ship with a 24x-speed CD-ROM drive as standard, and hard drives are catered for with EIDE Mode 4 support which will handle two master-slave device pairs.

Other than the video performance, the biggest difference the user will spot in use is the machine's speed; raising a wet finger in the air, I'd reckon that, using a 233MHz StrongARM, Phoebe will be somewhere between 2.5 and 3.5 times faster than a current 233MHz StrongARM Risc PC.

On a standard Risc PC, the address and data buses are clocked at 16MHz; on the new machine, you're looking at a bus speed of 66MHz. There are two reasons why it's 66MHz: the first is that things get ugly at speeds above that; you get signals jumping off PCB tracks at corners (no kidding!) and you have problems with sockets involving massively increased radio frequency (RF) emission and the fast transients reflecting from the physical track-to-pin interface back down the track. The second reason is that 66MHz is the fastest external clock speed which StrongARM can synchronise to; hence there is little point in producing a PCB which is clocked at a higher rate.

To keep up with this bus speed, RAM is moving from being 70ns refresh on 72-pin SIMMs to significantly quicker SDRAM on DIMMs; the box will ship with a single 16Mb DIMM as standard, and a second DIMM socket left vacant.

The StrongARM Risc PC is still a powerful machine, and the need for a replacement for it is much less urgent than it was when the Risc PC replaced the A5000. If the StrongARM Risc PC is adequate for most users' needs, then who is going to buy the expensive new machine?

People who need Even More Power(TM). For example, StrongARM Risc PCs still only have the same screen rendering capability as ARM610 Risc PCs, and owing to the speed constraints of the podule bus which any network card is currently hung off, don't make particularly wonderful network servers. VIDC20R sorts out the first problem, PCI support fixes the second, and the accelerated main bus, SRAM and SDRAM make everything happen significantly quicker. If the current StrongARM Risc PC can be likened to an Archimedes 440/1, Risc PC II is the spiritual successor to the 540 (and then some).

What will happen to the existing Risc PC? Will it continue to be made, and for how long?

We're expecting to continue to produce the StrongARM Risc PC, in J233 hardware spec, at least, for the foreseeable future. It'll still be more than adequate for a large number of users.

Many users have expressed disappointment that the Risc PC's modular case design has been dropped. Why have you taken this decision?

Two reasons. First of all, once you push the speed of a board up, it starts radiating more and more radio frequency interference. If you were to take the existing Risc PC case, which is made of plastic, with a metal-spray layer to cut RF from its board down to acceptable levels, and try to shoehorn the Risc PC II board into it, not only would you have enough RF coming out of the case to violate both CE and FCC regulations, but the picture on your monitor wouldn't be exactly free from interference!

Second, producing a case design from scratch and running a smallish-volume production line to make them costs an absolute fortune, and that's not taking into account the overheads of the safety compliance testing. It's far more cost-effective to take an existing case which is large and well-shielded enough for our needs, and then make whatever cosmetic and minor functional changes to it we need to. We're taking an NLX tower case, having an extra cut-out tooled into the back panel to accommodate the three podules, and replacing the remarkably dull and boring standard plastic facia with something a little more distinctive!

What's your position regarding compatibility with other platforms, particularly DOS/Windows and Unix? Does Acorn have any plans to provide support for the use of other operating systems on the new machine itself?

Unix is something very close to my heart, and I'm certainly going to do my utmost to enable Causality (RiscBSD) and Russell King (ARM Linux) to have access to docs and prototype hardware at the same time as other registered developers. While Galileo is closing out development, Unix is likely to be the only way to get multiple ARMs doing symmetric multiprocessing in one of these machines, and I'm up for extended testing if they're up for writing the code. Put it this way: I don't expect the Risc PC II I'll be buying for home use to be spending most of its time running RISC OS!

As far as DOS/Windows compatibility is concerned, this is of vastly less importance to me personally since I don't use either; however, it seems to be something which a lot of users want. The idea for supporting this is to interface to PCI-based PC cards, which are already available; you'll find ads in the back of Byte which advertise cards comprising an IAPX86 CPU (everything from a P90 upwards), local DRAM, a bus-mastering PCI controller and often a 2D graphics controller. Video output from the RISC OS side of the system would be looped through such a card to a monitor; you'd probably lose the capacity to have both displays on screen simultaneously without some cunning scaling and blitting, but we tend to find that users use their systems wholly within one environment or the other anyway.

We don't have any plans ourselves to support other operating systems; however, that's never stopped developers in the past!

RISC OS and Galileo

What significant improvements will be apparent in the next version of RISC OS, and how will it take advantage of the improvements in the Risc PC II?

The first improvement you're going to see isn't necessarily an improvement; it's a requirement, which is that it runs on the new hardware, and is also capable of doing things like addressing the PCI bus that we're going to get on the Risc PC II. As far as actual improvements are concerned, Mike Stephens, who's Mr Kernel-God, has made some interesting enhancements when it comes to memory management (he's actually implemented a few new algorithms that he's come up with), so this means the kernel's going to run quicker. Also, as far as filing systems are concerned, there's a few limitations that the existing Filecore has which have got on quite a few people's nerves for the last ten years. Fundamentally, the 77-objects-per-directory limit is gone. We have the software running in the office now (still very much alpha-test, but appearing stable) and we've found empirically that directory opening slows down somewhat once you start trying to open a directory containing 3000 or more objects. The maximum size of directory that we've built so far, and which has been tested to retain its integrity, has 80,000 objects in it, but that's just on the test bench; I'm not sure whether a high-water mark will be imposed for the final OS build. Also, filenames have had some restrictions removed: we're retaining case insensitivity, and we're also keeping the existing filetyping system, but filename length in theory goes up to 255 characters. This may, for various reasons, be shortened to 191 for the production build.

So you're not extending the existing filetype system?

There are still plenty of unregistered filetypes out there. If all goes to plan, the typing system will also be a bit smarter about translating DOS-type extensions to RISC OS filetypes.

There's going to be a little bit of a facelift to the desktop. It's more a matter of bells and whistles than a fundamental rework, but some things are going to have to have a rework. For instance, the finding system in the filer is not going to be of much use if you highlight a load of directories, go 'find whatever the filename is', go 'OK, there's my file', hit Open and you're presented with a directory with 3000 files in it, so I expect that'll get some rework.

!Configure is going to get reworked so that it becomes modular and can effectively have plug-ins registering with it. It'll still be familiar to Risc PC users; it'll just do more.

Can you do anything about the Large File Allocation Unit (LFAU) that makes writing files to large discs so wasteful of disc space?

Yes; indeed we have done. The situation as it stands is that you hit a point of diminishing returns at about 8Gb. Your LFAU winds up so big that there's not much point in having extra disc space considering the average size of a file and the amount of space which has to be wasted owing to LFAU granularity. With the new system, you still have the issue that as discs get bigger LFAUs have to grow with them, but now there's a lot more file allocation blocks on a disc, so for a given size of disc, the LFAU is sixteen times smaller. This means that the point of diminishing returns is deferred until you get to 128Gb partitions, when you start hitting trouble again.
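The arithmetic behind this can be illustrated with a toy model (entirely my own, with made-up disc-map sizes, not Acorn's actual Filecore layout): if the disc map holds a fixed number of allocation units, the LFAU must grow with the disc, and each file wastes on average half an LFAU of slack at its tail.

```python
# Toy model of LFAU growth (hypothetical map sizes, for illustration only):
# a disc map with N allocation units forces the LFAU up as discs grow.

def min_lfau(disc_bytes, map_units):
    """Smallest power-of-two allocation unit letting map_units cover the disc."""
    lfau = 1
    while lfau * map_units < disc_bytes:
        lfau *= 2
    return lfau

GB = 1024 ** 3
old_lfau = min_lfau(8 * GB, 1 << 21)   # hypothetical old map size
new_lfau = min_lfau(8 * GB, 1 << 25)   # sixteen times more allocation blocks

# prints: 4096 256 16 -- sixteen times more blocks, sixteen times smaller LFAU
print(old_lfau, new_lfau, old_lfau // new_lfau)
```

With a sixteen-times-smaller LFAU at a given disc size, the per-file waste shrinks by the same factor, which is why the point of diminishing returns moves from 8Gb out to 128Gb.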

People will still be able to use their standard podules. While we were looking at the standard podule bus, and which bits we needed to implement, we constructed a questionnaire, which I sent out to all the developers, along the lines of, "OK, you guys develop hardware; please list the lines that all your cards use." When we added it all together, it turned out that there weren't any lines unused, so what you get is a completely equivalent-to-Risc PC, as-it-always-has-been podule bus. It is basically DMA on the first two slots, and a three-slot backplane (three slots primarily because of case size).

Do you have any plans to incorporate any third party enhancements?

That's currently being determined. This time we are talking about getting hold of demo versions of commercial things from developers, or indeed taking the best of the freeware off the Net (with authors' agreement, of course). Certainly I for one, in addition to Edit, would like to see every machine bundled with a copy of Zap. But StrongEd also has a big following; the editor wars inside Acorn do go on! Therefore I will disclaim that by saying that just because I use Zap does not mean that it's Acorn policy!

How is work progressing on Galileo? When will we be able to use it instead of RISC OS in desktop machines?

Work's progressing rather well, actually. The principal engineer on Galileo is a very, very knowledgeable chap called Sunil Kittur. He's not come from an Acorn background, and is not versed in RISC OS, so we can be certain that we're taking a completely clean approach. He's very much a computer scientist, who has done operating system design and implementation before (he has a track record of it), and fundamentally the guy knows his onions and has his head screwed on! Indeed, he did so much work on Galileo to start with that for a while it was known within Acorn as Sunil OS! (Of course, that was just a leg-pull.)

We have a stripped-down kernel. It will do multi-threading and process scheduling, and people are now working on slightly higher-level things like the graphics libraries. The kernel will boot; compilers and tools have started porting, but I don't know whether Galileo's capable of building itself yet. There's also a simple command line executive being done. I would like (whether I'll be able to, I don't know yet) to be able to pull some of the wraps off Galileo to the developers at the developer conference which is scheduled for slightly pre-Wakefield. When it's actually going to ship is, right now, anyone's guess, but certainly the world will have seen it by this time next year; whether the world will be able to buy it by then is another matter.

What language is Galileo being written in?

A combination of C and C++.

Given that Galileo is highly modular and scalable, where do you envisage it being used?

Fundamentally, anywhere that requires an operating system rather than just hardware. It could end up running on your Psion; it could end up running on your mobile phone; you could find it running on your video recorder. It's scalable right up to your desktop system, from the point of view of being able to do multitasking processing eventually, and it could go even higher.

Is it the case that you will be able to release the kernel to licensees initially, and then finish up by writing things like the desktop later?

Exactly. Galileo is primarily intended for portability and embedding, but just because it's intended for embedding doesn't mean that we can't go all the way up to building a desktop on it. Certainly most of the people we expect to be licensing it to won't be needing a lot of really, really high-level stuff like a desktop, so as long as we implement the right things in the order that they're likely to be needed for licensing, then we can just go on with development transparently. But it's looking good. It's looking very good. Another thing that's needed for an embedded technology is a reasonable degree of fault tolerance. It can't crash! Or should it crash fatally, for one reason or another, it's got to be able to recover itself far enough to reboot itself to its previous state.

Will Galileo be able to run existing RISC OS applications and, if so, how?

Fundamentally there's no reason why not, but it would mean having to implement most of a virtual RISC OS machine on top of the Galileo OS. To write a full RISC OS emulator is about the only way you could be absolutely certain to do it, as I don't have details yet on how Galileo would take to having a RISC OS SWI layer veneered on top of it. In fact, a virtual RISC OS box has already been done, but not by us: there exists a thing called XArc (which I have seen), which takes the freeware Architecture 2 ARMulator, which was written by ARM Ltd, and runs a RISC OS ROM image on it. It's slow, and that's partly because ARMs don't actually virtualise well, but if we can do something like that on Galileo OS without sacrificing QoS, we would be running on a real ARM (and hopefully be able to tunnel down to it) rather than a simulated one. Whether we're going to do it this way, or what Galileo is to support in terms of veneering, is still to be determined!

Can you outline how the Guaranteed Quality of Service (QoS) concept works?

OK, I can have a go at it anyway! I'll be liberally quoting from a paper written by Andy Hopper, who's head of the formerly Olivetti, now Oracle, Research Labs in Cambridge. He's the guy who invented ATM, and he's also one of the nicest multi-millionaires that you could hope to meet! ATM was really the first implementation of the QoS concept. What QoS means is that signals and processes are scheduled to happen within a time limit that they specify, according to their declared level of importance. Once you start hitting the top end of your CPU limit, you wind up saying to things that are of low importance, and don't actually need to claim a lot of bandwidth, "No, I'm not going to do this," and so you can continue guaranteed service to the things of highest priority. Fundamentally, the idea is that you deliver guaranteed service to as many processes as you can within the performance limits of your system; once you hit the top end of the CPU or I/O, you cease serving the very low-priority things altogether, and continue guaranteed service as far as possible for the higher-priority things.

Andy Hopper's idea, to take it to extremes (which is his way when illustrating things), is: imagine, if you will, that your entire house is plumbed with ATM, including your door bell! Now, let's say that someone presses your door bell. It will send a signal to your house's operating system, saying, "Hi, I'm your door bell. My QoS is 30 seconds and I'm pretty important." Your house then thinks, "Right, I've got this signal from my door bell. I'm going to have enough CPU and I/O spare for a process of this priority at some point in the next 30 seconds to ring the bell, so that's OK." So the basic idea is that if something is to happen, it has to happen within a certain time limit; this time limit is the QoS. When it happens within that time limit is up to the scheduler, and if it isn't going to happen for some reason (such as a more important process needing so much CPU at the same time that the new process can't coexist comfortably), the scheduler should tell the calling device about this.
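The door-bell story can be sketched as a toy admission-control loop (entirely my own illustration, with made-up names and numbers; Galileo's actual scheduler is of course far more sophisticated): requests declare a priority and a cost, and when capacity runs out the lowest-priority requests are refused outright rather than degrading service for everyone.

```python
# Toy QoS-style admission control (illustrative only, not Galileo's design):
# serve the most important requests first; refuse, and report, the rest.

def schedule(requests, capacity):
    """requests: list of (name, priority, cost), higher priority = more
    important. Returns (accepted, refused) within a fixed capacity budget."""
    accepted, refused = [], []
    for name, priority, cost in sorted(requests, key=lambda r: -r[1]):
        if cost <= capacity:
            capacity -= cost           # service is guaranteed for this one
            accepted.append(name)
        else:
            refused.append(name)       # tell the caller it won't happen

    return accepted, refused

reqs = [("doorbell", 8, 10), ("video", 9, 70), ("log-rotate", 1, 30)]
print(schedule(reqs, 100))  # video and doorbell fit; log-rotate is refused
```

The key property is the one described in the interview: the refused request learns immediately that it won't be serviced, instead of everything silently slowing down together.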

A lot of attention is being focused on Windows CE version 2 at present, which is aimed at some of the markets at which Galileo could be targeted. What can you offer that will persuade licensees to choose Galileo instead of another system such as this?

CE isn't the only other fish in the pond, nor the most significant from the point of view of competition for Galileo; there's a lot of stuff (such as VxWorks) which is built from the ground up to be easily portable, easily embeddable and (most importantly) fault tolerant, while also providing something which calls itself 'real-time processing'.

Real-time processing, and the whole concept of a Real-Time Operating System (RTOS), is deceptively easy to define; the idea behind it is simple enough, in that a system running an RTOS should be able to gather data from its inputs, process this data, and send appropriate signals based on the data to its outputs 'in real time'. The problem arises when you sit back and try to think of what 'real time' actually is from the point of view of response times and tolerances within the system to their variation, and how you're going to guarantee that your embedded system will always have enough bandwidth spare to process that extra important signal when it comes in, even though it may be working on something else; currently, this means that embedded control systems are often over-spec'ed in terms of CPU power. Add a QoS model, and at least half of this uncertainty gets rationalised immediately; you get to specify the maximum tolerable response times directly, rather than as a function of the load that's already on the system.

Bear this in mind, along with the facts that RTOS appears to be one of the current crop of fashionable buzzwords (some systems which claim to be 'real-time' are probably stretching the point a bit), that it's a de facto requirement for any embedded OS to be fault-tolerant, and that it's extremely useful for an RTOS to be modular and scalable as well as small, and you'll realise why Galileo is set to go places.

NCs

Oracle was quick to announce the NC, but then appeared to waste a year while NCI wrote NC OS 2 as a Unix derivative. Why did they feel the need to do this, and have their actions damaged Acorn's chances as a technological innovator in the NC market?

Well, NC OS versus NC OS 2: really, you're looking at two different sides of the same coin. At the end of the day, an NC is, simplistically, a disc-less box with a dedicated network link, some software in ROM and an HTML/Java front-end. NC OS 1 runs on a small, cheap, moderately powerful NC 1, whereas NC OS 2 was originally intended to run on the hardware resulting from the DEC Shark project, which unfortunately has now been canned. As far as running Unix on NC 1 hardware is concerned, being something of a RiscBSD fan, I don't see a great deal of trouble in doing it; 7500FE kernels have just about been done, so you should be able to arrange network booting and NFS-mountable root and swap filesystems just like we did for the R225 or like Sun did for the 3/60. Probably the reason why Shark was canned was that it cost a lot: you're looking at a StrongARM; you're looking at the Footbridge chip (which admittedly was reasonably cheap, and enabled you to use nice commodity components for the rest of the board), but then you're looking at a wedge of SRAM in there, for cache flushing, so if I remember rightly, the bill of materials cost for Shark was significantly higher than it was for the NC model 1. NC model 1, running Acorn NC OS 1, is a thinner client than a Shark is, running NC OS 2; however, once you add an X server to an NC Model 1, you're looking at much the same functionality, only it's the server which is doing the application executing.

But this NC OS 2 is what appears to be being pushed now by NCI, and they seem to have dropped the Acorn NC OS 1; is that correct?

I don't know whether NCI are actually pushing that at the moment; certainly it's a great solution for intranet-type stuff (businesses especially), but then again, it's a case of where you want to split the thinness of your client. As we have X capability now on the NC, we can effectively run everything that the NC OS 2 box would be able to run; it's just a matter of where it actually runs, whether it's client- or server-execute. Admittedly, of course, the NC OS 2 box is significantly quicker at Java, but we are addressing that; we are ourselves designing a StrongARM-based NC. If you remember the coNCord prototype, we're coming up with something that uses that technology. So you've got the StrongARM; you've got the IOMD in there; you've got all the necessary other bits in there.

It just seems at the moment that there are two distinct bands of NCs: the consumer ones, which are based on Acorn technology, and the corporate ones which aren't. Are you hoping to change that situation?

That's entirely so at the moment, as far as it seems, although corporates are looking very hard at our stuff as well. We're having all sorts of talks with some interesting corporates regarding their possible deployment of NCs: lots and lots and lots of NCs. Hopefully it won't be that long (say year-end, maybe) before we'll be looking at two bands of NCs: the 'not especially Java' NC OS 1.06 box, which is the NC as you know and love it with the 7500FE in it, and a higher-end (ideally corporate) NC, optimised for Java with a StrongARM in it, that's also running NC OS (our flavour); so things are really fun on the NC front.

How would you say your design is faring against other similar things that are coming out from other people?

Well, in a lot of cases the other similar things that are coming out from other people are so similar that they've actually licensed them from us! The NetProducts NC is a straight licence; the Proton NC is a straight licence, and I think there's another couple cooking as well. It's no great secret that the RCA/Thomson box is almost a straight licence, although it's something bespoke that we did for them, and Boca, now, are manufacturing standard NC OS 1.06 NCs. So a lot of the NCs that are out there are actually ours under the hood, and the only other NCs really are Sun's JavaStation (which of course is significantly more expensive but significantly more potent) and IBM's offering (again, significantly more potent from the point of view of processor, but significantly more expensive).

How do you see something like the WebTV from Microsoft as impacting on the consumer market for the Acorn-type NCs?

Well, the thing is that WebTV isn't an NC; it's an STB. So it's more like WebTV going up against our STB22, and developments thereof.

But these things will be perceived as being pretty similar by the general public.

Well, the whole thing is that when you get into the specs, it transpires that STB22 actually has (today, and has had for the last six months) a lot of the features that are only promised for WebTV Plus, and aren't in WebTV. We sent a couple of our guys over to CES in Vegas a couple of months back, to see what everyone's getting up to, and admittedly there are some people who have got things going that we haven't necessarily got sorted out yet (things like multiple levels of transparency on overlays), but basically the STB22 still acquits itself very well against those, and indeed development on the 22 is progressing well.

Other things worth bearing in mind about the STB22: we're getting very friendly with Oracle on the STB side of things when it comes to partnering our kit with their latest OVS3 video server. We've also struck up partnerships with Silicon Graphics for their MediaBase servers, so we're shipping them STBs to play with, and they're shipping us O2s with MediaBase on to play with, and everything's getting all nice and pally, and of course we've always had a long-standing relationship with Sun on video servers, among other things.

It's worth getting in at this point a little bit about the Acorn reorganisation. We reckon that all this digital interactive TV (and STBs) is going to be a massive thing; it's just a case that with Online Media we were ahead of the herd (or so it seems in retrospect), but one of the reorganisations within Acorn is actually to have a dedicated business unit that does nothing but DITV, so we're going to be pushing that hard.

When does that start?

It already has. We're actually in the middle of shuffling people around between different divisions and different new business areas so that everything can be kicked into serious life.

How will your NC Reference Profile version 2 differ from version 1?

Fundamentally, if you imagine an NC as being analogous to an A7000+ without a hard drive and with some OS changes, the new one will be almost analogous to a Risc PC without a hard drive and with a few OS changes; quite a few OS enhancements over NC OS 1.06 as well, you'll find.

So you're talking about something which is basically faster but much of the same?

That's right. The whole point about the new reference NC, and the ethos behind it, is to enable it to run Java quickly, which by all accounts and testing on Risc PCs it's going to do.

Does Profile 2 replace Profile 1? Are they effectively two different classes of product?

I think they'll probably wind up living side by side. If you don't actually need to run much in Java, an NC OS 1.06 box will do you pretty nicely.

What sort of extra software enhancements are you going to get into the new version 2 other than Java?

Java is going to be the big thing. It's a moot point currently as to whether any of the Director or Shockwave renderers are going to make it into ROM. There's certainly going to be more networking support in there so that you don't necessarily have to boot the NC from Unix, because quite a lot of people, although they may realise eventually that serving from Unix is the right thing to do, currently want to do it off something else! But it's still a little bit up in the air as to what the changes are actually going to be; I'm in the team which will be tying all this down, so I'm going to be in for some major fun over the next few months.

You licensed various third party things, like the word processor and the Web browser, in version 1. Will there be more in version 2, or will it be much of the same?

It's going to be much of the same, with enhancements.

The future

How important do you see Java as being in Acorn's future plans?

Central.

How are you going to attract new software developers, then, or is that less of an issue if Java is so central?

Interesting question. The thing about Java (and this is me rambling rather than official Acorn policy, because I don't make official Acorn policy!), is that Java is going to polarise things; huge companies like Corel and Lotus will develop Java-based office applications which run on everything, so you have to ask "if you've got a big machine that runs Java well, are people going to run the big things from Claris or Lotus or whoever, or are they going to run a RISC OS package on it?"

On the other hand, there are certain things that our developers do better than any other people in the world, so if they start writing in Java, they have a real good chance of taking on the world and winning. Java is important, and actually a lot of the developers are sitting down with their books and learning it. Many of them have already asked me for book references for useful Java tutorials!

On the subject of your Internet software, you already have a couple of plug-ins, like Shockwave and Java; are you working on others?

There is a plug-in that already exists that most people seem to have overlooked, which is actually in the NC ROM, to play µ-law audio and .WAV files. RealAudio is also very much on the cards; there's a bit of a problem getting it running on NC OS 1.06, but it runs well enough on RISC OS. There's also a couple of deals going on regarding plug-ins for other common media types, but I can't talk about these yet.

It needs floating point, though, doesn't it?

Yes. Currently it actually runs much better on an A7000+ than on a StrongARM for that reason!

Are you working on further plug-ins, such as JavaScript, or are you leaving such things to third parties?

Well, obviously there's a lot of media types out there, so making existing things plug-in compliant is a good idea. JavaScript is something we've recognised as being an important thing that we're currently missing; we intend to implement it. You will see JavaScript support from Acorn at some point. I can't say when, but the networking group is looking a little interesting at the moment, in that resources are starting to free up to work on new projects as existing projects are completed (for instance, Java 1.0.2), which is actually quite good, because JavaScript needs people who know compilers well.

Is Acorn licensing any other common technologies for use within its other products, be they NCs or desktop machines?

Well, we've got the Java licence; we've done Director 4; we're doing Director 6: the stand-alone (as opposed to plugin-compliant) Director player is in alpha right now. I've got it and played some movies on it, and there are still some areas that aren't implemented at all and some problems in other areas, but that's what you'd expect in alpha code; it plays Director 4 and 5 movies very well. JavaScript isn't really a licensable thing, because the specs are actually out there, but it will be implemented. There are strange possibilities involving video codecs and Replay. Java 1.2: we're building it, which will give you Java 1.x backward compatibility. I'm expecting to see alpha code this month [February] if I'm lucky.

The source code to Netscape Navigator is now available; is that of any interest to Acorn directly?

Our network guys are having a think, basically. There's actually a lot of Netscape Navigator that isn't being released, for obvious reasons; things like the Java VM. But I would expect, if nothing else, that we'll take a copy of the code and look through it to see if we can find anything useful.

When you announced the relaunch of Acornsoft last year, it caused a lot of excitement among users who remembered the excellent Acornsoft games from the BBC Micro days. Is Acornsoft going to be licensing games in a big way for today's Acorn platform?

We hope so. Strange things are afoot regarding the number of games projects we're actually involved in. Also, of course, the whole games thing seems to have woken up recently anyway; witness the final sorting out and bringing to commercial sale of Doom by R-Comp.

Is this to some extent at least riding off the NC? We've heard about distributed games across the network.

Yes, I'm in the middle of writing a paper to come up with some solid network protocols for doing this kind of thing! The idea is that network gaming is really going to take off. Custom graphics engines, like the one SGI designed for the N64, notwithstanding, an NC can be viewed in one light as a games console with a network interface. I can see that online gaming is going to become a big thing; well, a bigger thing than it is already! And it's growing: it started from nothing when Quake came out, and is now getting colossal.

Do you think there's much of an argument against it because of phone bill considerations? Is it actually realistic for UK users, who don't have free local calls?

Well, there's lots of pressure being put on BT to look into making local calls free, so you never know. But what you have to bear in mind is that, although Acorn is UK-centric in that the English you get in RISC OS is English and not American, we're not UK-centric regarding who we sell to, and there are other countries that do have free local calls. We've just opened our Palo Alto office: as of last month, Acorn Palo Alto is back, and it'll be a sales office this time rather than an OS development centre.

What are your plans for other countries at present?

We have agencies in Korea and a distributor in Japan. These people are principally licensing technologies: certainly it appears that just about every high-tech manufacturer in Korea now has an ARM licence and a partnership with us. There may be a few that haven't (just a few that we've missed), but all the big ones have. Certainly ETRI, who were at Acorn World, are very influential people. If you were to think of an analogy for this country, well, there isn't really an ETRI equivalent in this country, but if there were to be, they would probably be defined as being the commercial wing of the Ministry of Technology. So, this is long-term good news for getting Acorn lots of money, and getting Acorn lots of money is long-term good news for making RISC OS boxes! ETRI's HandyComBi uses RISC OS as an embedded solution.

Is Acorn able to feed back from technologies, such as handwriting recognition, produced by such licensees?

Well, the thing about handwriting recognition, although we're doing no work on it ourselves, is that you have to bear in mind that the first iteration of the Newton operating system was written on an Archimedes A540! That doesn't mean anything significant; it's just a little trivia!

Are you able to say anything about what Acorn has been working on for other licensees?

There's the new little fax box which was launched last week [NaxPort 100]. The company launched last week; they're called NetFax (they're American), and what this box does principally is: imagine your fax machine, which is sitting there spewing out 9K6 or 14K4 run-length encoded fax data. (This is actually where the dual serial port technology for the Risc PC II came from!) You take your serial stream out of your fax, plug it into your NetFax, have that take the data and recompress it more efficiently, and then you have it spit the data out of the other end. Now it can spit the data out of the other end in two forms: one, you can just do it as a more highly compressed fax at a higher data rate, which is going to save money anyway, but you can also actually use the top model as a fax-to-IP gateway, so if the guy at the other end has one of these boxes, you both just dial your local ISP rather than doing international faxing. Speed improvement with the IP gateway version, of course, isn't the main factor: it's cost. Instead of sending a mega-fax internationally, you're sending a mega-email locally! This device is a RISC OS-based box which effectively sits, almost like a dongle, in the chain that is the phone line.
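The data flow Dave describes can be sketched very simply: take the already run-length-encoded fax stream, squeeze it further with a general-purpose compressor, and hand the smaller payload to the IP side. The sketch below is purely illustrative (Python with zlib), not NetFax's actual encoding, and the function names are invented for the example:

```python
import zlib

def recompress_fax_page(rle_bytes: bytes) -> bytes:
    """Recompress a (simulated) run-length-encoded fax page more
    efficiently before handing it to the IP gateway side."""
    return zlib.compress(rle_bytes, level=9)

def restore_fax_page(payload: bytes) -> bytes:
    """At the receiving gateway, recover the original fax stream
    so it can be replayed to an ordinary fax machine."""
    return zlib.decompress(payload)

# A fake fax page: real pages are dominated by long runs of white,
# which is why a second compression pass still pays off.
page = (b"\x00" * 400 + b"\xff" * 8) * 50

smaller = recompress_fax_page(page)
assert restore_fax_page(smaller) == page
assert len(smaller) < len(page)
print(f"original {len(page)} bytes, recompressed {len(smaller)} bytes")
```

The point of the exercise, as Dave notes, is cost rather than speed: the smaller payload travels as local IP traffic instead of an international phone call.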

Where is Acorn going? Where do you see your business and your computers in a year's time?

A year is a long time in computing! I think the thing that's going to get big very shortly is NC deployment. The idea of NCs has actually been around for a long time (even before Larry Ellison started kicking up a fuss). At the end of the day, the ultimate expression of a thin client is an X terminal, which is, ironically enough, what I use my NC as when I'm not developing on it. When I use my NC for use's sake, I just use it to connect to our Solaris boxes. So, NC deployment into education and corporates; NetFax hitting the roof and going through it; desktop systems continuing to be desktop systems (sold to the kinds of people who buy Acorn desktop systems, and maybe a few more now that they're getting more powerful); value-added resellers embedding our boards and our technology in things; and by that point Galileo will be coming online, at which point we'll get even deeper into embedding.

What's your opinion of PCs?

PCs have their place. They make nice, cheap Unix boxes!
