I want to depart from the lofty world of semantic networks for a few posts, in order to investigate the world of bit-based computers. By bit-based computer, I mean one in which the entire state of the computer is based on an ordered set of bits. The computer I describe below was suggested by Digi-Comp I, a classic bit-based computer made almost entirely of plastic, but mine is based on a somewhat different design.
Both Digi-Comp I and the computer described below keep their program memory separate from their working memory (strictly speaking, that makes them Harvard machines rather than Von Neumann machines). However, they both differ from more familiar computing devices in that the operation is based on modifying the state using simple production rules.
My implementation is a “32-bit” computer, in the sense that the bits are stored in a single 32-bit integer. I could have used some other storage format, say an array of Boolean values, but the use of a single integer both simplifies the implementation and provides a bit of challenge when writing programs. Because it’s a 32-bit, bit-based computer, I’m going to name it “32BC.” (See the last paragraph for this name as a pun.) The entire computer consists of a single 32-bit integer for working memory, a set of production rules, a simple computing engine, and some I/O stuff. In fact, the computer is so simple, the I/O comprises the bulk of the code.
The basic operation is as follows:
1) Test for a halt; if true goto step 5, else goto step 2.
2) Test the state against the “if” of all the rules.
3) For each of the rules that succeeds, apply the “then” portion of that rule to the state, and set that value as the state. Do this in the order in which the rules are listed.
Note that EVERY rule is tested and (if its test succeeds) applied on every program loop.
4) Goto step 1.
5) Halt and return.
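The loop above is small enough to sketch in a few lines. The author's actual implementation is in F# (see the next post), so the Python below is only an illustration; in particular, the halt predicate and the step cap are assumptions of this sketch, not details from the post:

```python
def run(state, rules, test, apply_rule, halted, max_steps=10_000):
    """Sketch of the engine loop (steps 1-5 above).

    rules      -- ordered list of (condition, action) pairs from program memory
    test       -- test(condition, state) -> bool           (the rule's "if")
    apply_rule -- apply_rule(action, state) -> new state   (the rule's "then")
    halted     -- halted(state) -> bool; its exact form is unspecified in the
                  post, so it is taken here as a caller-supplied predicate
    """
    for _ in range(max_steps):        # step cap: an added guard, not in the post
        if halted(state):             # step 1: test for a halt
            return state              # step 5: halt and return
        # Step 2: test the state against the "if" of ALL the rules first...
        fired = [action for condition, action in rules if test(condition, state)]
        # Step 3: ...then apply the "then" of each success, in listed order.
        for action in fired:
            state = apply_rule(action, state)
        # Step 4: loop back to step 1.
    raise RuntimeError("no halt reached within max_steps")
```

As a smoke test, a single always-true rule whose action sets bit 0, with a halt predicate that checks bit 0, runs one pass and halts.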
The only thing left to do is to define the test and apply operations. In fact, they will be the same operation, only with different data. They are both based on the following particularly useful logic gate, where A and X are bits supplied by the program memory, and N is the bit from the program state. (Using 32-bit integers makes it easy to do these operations in parallel on all the bits).
As can be seen, the logic gate above can essentially compute any single-bit function of the state bit.
We will consider a test to have succeeded if, after feeding the program test bits (A0-A31, X0-X31) and the state bits (N0-N31) through this gate, all of the output bits are zero. A rule's action is applied by feeding the program's action bits and the current state bits through the same gate, then replacing the current state with the output.
And, except for the details (see F# code in the next post), that’s it.
What can this compute? Anything – anything – within its bit capacity. The production rules function as a “universal” computer in the sense that, if one is willing to write as many production rules as there are states, any possible state->state function can be computed. But that technique is no fun (not to mention inefficient); the real fun comes from trying to write much smaller sets of production rules to accomplish a given task.
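To make “much smaller sets of production rules” concrete, here is a self-contained toy program: three rules implementing a binary counter with carry, halting when bit 2 is first set. The gate (A AND N) XOR X and the halt condition are assumptions of this sketch, not details from the post:

```python
# A 3-bit counter written as production rules. The gate (A AND N) XOR X is
# an assumption (the original gate diagram is not reproduced in this post).

MASK = 0xFFFFFFFF

def gate(a, x, n):
    return ((a & n) ^ x) & MASK

# Each rule is ((A_test, X_test), (A_act, X_act)).
# Test (per bit):   A=1,X=0 requires 0; A=1,X=1 requires 1; A=0,X=0 don't-care.
# Action (per bit): A=1,X=0 keeps; A=1,X=1 flips; A=0,X=0 clears; A=0,X=1 sets.
RULES = [
    ((0b000, 0b000), (MASK, 0b001)),  # always: flip bit 0
    ((0b001, 0b001), (MASK, 0b010)),  # if bit 0 == 1: flip bit 1 (carry)
    ((0b011, 0b011), (MASK, 0b100)),  # if bits 0 and 1 == 1: flip bit 2 (carry)
]

def run(state, rules, max_steps=100):
    for _ in range(max_steps):
        if state & 0b100:             # assumed halt condition: bit 2 is set
            return state
        # Test all rules against the current state, then apply in listed order.
        fired = [act for cond, act in rules if gate(cond[0], cond[1], state) == 0]
        for a, x in fired:
            state = gate(a, x, state)
    raise RuntimeError("did not halt")

final_state = run(0, RULES)  # counts 0, 1, 2, 3, then reaches 4 and halts
```

Starting from 0, the state passes through 1, 2, and 3 before halting at 4: five states covered by just three rules, rather than one rule per state.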
What good is it? Not much really, except as a mental exercise and as a demonstration. But what it does demonstrate is something very interesting:
The computer described above is simple enough to be encoded in very basic hardware. Not just in terms of electronic hardware, like a PLA-type device or as part of a larger circuit (where “mini-computers” like the one above are sometimes actually used). But it is even simple enough to implement – like Digi-Comp I – in “real” hardware, as in plastic, metal, wood, nuts and bolts.
Which prompts a thought: if the Greeks were able to produce, c.125 BC, a device as complex as the Antikythera mechanism, they could no doubt have managed a computer as simple as the one described here by 32 BC. In fact, my “32BC” is actually much simpler than the Antikythera mechanism. What if they had done so and gone on to produce all sorts of other mechanisms based on binary logic? How would the world look today?
-Neil
Next, source code. Hopefully today, if not, early tomorrow. (It depends on whether I decide to spend time today reviewing the code or doing my income taxes, lol.)