Talk:Counter machine
This article is rated C-class on Wikipedia's content assessment scale.
Suggestion for Referential model
A reference model is necessary to:
- simplify model comparisons;
- be didactic in examples and comparisons;
- be objective and "clean";
- fix a "standard style" (so the reader "sees the same as the same") for the algorithms and examples in the articles.
Is this a good idea? Others have tried to do something like this on Wikipedia for a computer "pseudocode" and failed. They gave it up. We will just end up in argument. Do the mnemonics really matter? For example, I usually use D ("Down") for "Decrement" and U ("Up") for "Increment". But I changed them because they are easier to remember as DEC and INC, Knuth used them in his MIX computer, and they are common, as are CLR and CPY. Some folks use DCR for decrement. What is really important is to define them in more primitive terms (somehow) before you use them in the text. This is touched on in the algorithm characterizations article. See the four-variable table of "func(i/d,rS,i/d,rD)" below. wvbaileyWvbailey 04:57, 25 October 2006 (UTC)
Also, if I had a choice I would prefer to use the authors' original mnemonics, rather than a standard set. But, a standard table would help translate them.
I would prefer < rS > or < rD > (Source and Destination) where < > means "contents of". If you write < rS > then you can write, for immediate constants, "k → rS" or better "k → <rS>" means "constant k replaces contents of rS." I have also seen [ ] used in a similar way in a textbook (Boolos-Burgess-Jeffrey 2002). Except they write "[ rS ] → rD".
The "RefTable", if used, probably should go on register machine, perhaps on a separate page if it gets too big. wvbaileyWvbailey 04:43, 25 October 2006 (UTC)
The "Counter machine's referencial model" (for wikipedia articles) consists of a finite set of registers r1 ... rn, each of which can hold a non-negative integer, r0=0 (always zero), and a finite list of instructions I0 ... Im. Each of the instructions in this list is one of the following:
- INC(j) — increment the value of register rj by 1, then go to the next instruction.
- DEC(j) — decrement the value of register rj by 1, then go to the next instruction.
- JZ(j, z) — check if the value of rj is zero. If so, jump to instruction Iz; otherwise go to the next instruction.
- HALT — halts the computation.
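For concreteness, here is a minimal Python sketch of an interpreter for this reference model (my own illustration, not taken from any of the sources above; the function name run and the tuple encoding are assumptions). It also assumes that DEC applied to a register holding zero leaves it at zero, which the definition above does not specify:

```python
# Hypothetical sketch of the reference model above: registers r0..rn (r0 always 0),
# instructions INC(j), DEC(j), JZ(j, z), HALT.  DEC on a zero register is assumed
# to leave it at zero -- the model text above does not pin this case down.

def run(program, registers):
    """program: list of tuples like ('INC', 1), ('JZ', 2, 5), ('HALT',).
       registers: dict {register_number: non-negative int}; r0 is forced to 0."""
    registers[0] = 0
    pc = 0                                   # index into the instruction list
    while True:
        op, *args = program[pc]
        if op == 'HALT':
            return registers
        if op == 'INC':
            registers[args[0]] += 1
        elif op == 'DEC':
            registers[args[0]] = max(0, registers[args[0]] - 1)
        elif op == 'JZ':
            j, z = args
            if registers[j] == 0:
                pc = z                       # jump to instruction I_z
                continue
        pc += 1                              # otherwise fall through to the next instruction
```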
Algorithm presentation style (scripting style)
Here we can fix standards (for the articles). They are like those used in the Assembly language articles. There are optional "clean" styles.
"Ease to write" std styles
[edit]The simplest and ease to write (using wiki "#" mark for auto-addres):
- DEC (2)
- JZ (2,5)
- INC (1)
- JZ (0,1)
- HALT
Or, for arbitrary addresses (and to use optional "goto" labels):
10 DEC (2)
20 JZ (2,50)
30 INC (1)
40 JZ (0,10)
50 HALT
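Assuming the run() sketch given after the reference model above, the five-instruction example can be executed like this (with 0-based instruction indices in place of the 10/20/... addresses; this is an illustration only):

```python
# The five-instruction example above, re-addressed for the 0-based run() sketch:
# JZ (2,50) becomes ('JZ', 2, 4) and JZ (0,10) becomes ('JZ', 0, 0).
prog = [('DEC', 2), ('JZ', 2, 4), ('INC', 1), ('JZ', 0, 0), ('HALT',)]
print(run(prog, {1: 0, 2: 3}))   # with the assumed DEC-on-zero rule this ends with r1 = 2, r2 = 0
```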
"Better layout" std style
For a better layout and OPTIONAL use of "goto labels":
Addr | Label | Instruction |
---|---|---|
10 | begin | DEC (r2) |
20 | | JZ (r2, end) |
30 | | INC (r1) |
40 | | JZ (r0, begin) |
50 | end | HALT |
Referential Library
With the "referential library" (RefLib), the "referential model" can implement ALL other instruction sets from similar register machine models.
- Indirect addressing must be dealt with. Any instruction can be turned into an indirect one, even an unconditional jump. Here "S" denotes the source register and "D" the destination register, "A" denotes the accumulator or any other special register, "i" (or "iD", "iS") denotes "indirect", and "d" = NOT(i) (or "dD", "dS") denotes "direct". We might have to use e.g. "c" or "k" for an "immediate constant". Here "func" is some two-operand function such as ADD or COPY or INC r (a small sketch of this four-parameter scheme follows the list below):
- func( iS, rS, iD, rD) e.g.
- func( d, rS, i, rD) e.g. CPY (d, A, i, rD) = STAi rD,
- func( i, rS, i, rD) e.g.
- func( d, rS, d, rD) e.g. CPY (rS,rD), INC (d,r,d,r) = INC r; DEC (d,r,d,r) = DEC r
- func( i, A, i, rD) ??
- func( i, A, d, rD) ??
- func( d, A, i, rD) e.g. STAi rD CPY (<A>,<<rD>>)
- func( d, A, d, rD) e.g. STA rD
- func( i, rS, i, A) ??
- func( i, rS, d, A) e.g. STAi rS
- func( d, rS, i, A) ??
- func( d, rS, d, A) e.g. LDAi rS
- func( d, A, d, A) e.g. INCA, DECA, CLRA, LDAA = nop,
- func( i, A, i, A) e.g. LDAiA (this is used by Schonhage!)
- this is "irregular"
- func( d, A, k) e.g. LDA k; ADDA k; SUBA k; etc. CPY (d,k,A)= LDA k
- The reason I have not used parentheses is that the assemblers I have used for microcontrollers do not use them. But maybe parentheses are okay, like a function in Excel.
- I find the following is much more expressive: <rS> means "contents of register S", <<rS>> indicates indirect addressing. But the "i" and "d" are pretty good because they act like parameters too. And they are Boolean (i.e. for example, perhaps "i"=1, "d"=0). I have simulated this on a spreadsheet recently -- all four parameters describe the functions nicely. wvbaileyWvbailey 04:19, 25 October 2006 (UTC)
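As a plain illustration of the four-parameter func(i/d, rS, i/d, rD) scheme in the list above (a hypothetical sketch only; the helper name copy and the dict-of-registers representation are my own assumptions, with COPY standing in for "func"):

```python
# 'd' = direct (the operand is register rX itself),
# 'i' = indirect (the operand is the register whose number is stored in rX).

def copy(regs, src_mode, rS, dst_mode, rD):
    src = rS if src_mode == 'd' else regs[rS]      # resolve the source register number
    dst = rD if dst_mode == 'd' else regs[rD]      # resolve the destination register number
    regs[dst] = regs[src]

regs = {1: 7, 2: 0, 3: 2}
copy(regs, 'd', 1, 'i', 3)    # CPY (d, r1, i, r3): <r1> -> <<r3>>, i.e. r2 receives 7
print(regs)                   # {1: 7, 2: 7, 3: 2}
```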
Different register machine models (e.g. with a stack or an accumulator) may be fixed, with simple changes, as different "referential models".
- I have not seen a stack in any models. Some other accumulator instructions are:
- { LDA r; STA r; INCA; DECA; CLRA; CPYAN (copy accumulator to the address pointer/iNdex register); ADA r (ADD (r,A)), etc., etc. } wvbaileyWvbailey 04:19, 25 October 2006 (UTC)
The "referential library" is a sinple list of "new instructions" implementarions, they may be see as microcoded instructions for a emulator strategy: each similar model with a different instruction set may be "emulated" by the RefLib.
The scripts are "near to formal". For formal ones we can imagine the use of a C preprocessor to expand the RefLib script templates into standard instructions. The scripts are for explaination and demonstration.
Emulated instruction | Implementation (script) | Comments |
---|---|---|
J (i) | | Go to i (unconditional jump); register #0 must contain 0. |
JZ (rX, i1, i2) | | IF rX=0 THEN i1 ELSE i2 |
DECJZ (r, i) | | DEC and JZ. |
INCJ (r, i) | | INC and J. |
CLR (r) | | CLEAR r; set r = 0. |
MOV (rX, rY) | 1 CLR(rY); 2 JZ (rX,6); 3 INC(rY); 4 DEC(rX); 5 J (2); 6 CONTINUE | Move rX to rY, clearing the contents of rX. |
CPY (rX, rY) | 1 CLR(rY); 2 CLR(rW); 3 JZ (rX,8); 4 INC(rY); 5 INC(rW); 6 DEC(rX); 7 J (3); 8 ??; 9 JZ (rW,13); 10 INC(rX); 11 DEC(rW); 12 J (9); 13 CONTINUE | Copy rX into rY; rW must be free (at the end rW = 0). |
CMP (rX, rY, r) | 1 CPY(rX,r); 2 CPY(rY,rW); 3 JZ(r,7); 4 DEC(r); 5 DEC(rW); 6 JZ(r0,3); 7 JZ(rW,9); 8 INC(r); 9 CONTINUE | Compare rX with rY and return the result in r (r = 0 if rX equals rY). |
ADD (rX, rY, r) | ... in terms of JZ, DEC, J, CLR, INC, CPY. | r = rX + rY; perhaps preserving the contents of rX and rY. |
MUL (rX, rY, r) | ... in terms of JZ, DEC, J, CLR, INC, CPY, ADD. | MULtiply, r = rX * rY; perhaps preserving the contents of rX and rY. |
EXP (rX, rY, r) | ... in terms of JZ, DEC, J, CLR, INC, CPY, ADD, MUL. | EXPonential, r = rX ** rY; perhaps preserving the contents of rX and rY. |
SUB (rX, rY, r) | ... in terms of ... | SUBtract, r = rX - rY; perhaps preserving the contents of rX and rY. |
Mnemonic | Implementation script | Comments |
---|---|---|
LDA (r) | CPY (r), A implied | LoaD A; special register called the "accumulator" |
LDA k | constant k → A | Immediate constant k from the instruction |
STA (r) | CPY implied_A (r) | STore A in r: store the contents of A in r |
LDA (i, r) | CPY <<r>> to <A> | Indirectly copy the contents of r to the accumulator |
STA (i, r) | CPY <A> to <<r>> | Indirectly copy the contents of the accumulator to r |
CPY (i, rS, i, rD) | CPY <<rS>>, <<rD>> | Indirectly copy the source contents to the indirect destination |
Notes:
- <<r>> indicates indirect addressing; <r> indicates direct addressing; k indicates immediate addressing (a constant derived from the instruction)
- The CONTINUE instruction is a "template instruction" for continuing into the context where the RefLib will be used.
- A convention may be to fix a "reserved register" for use as rW (a scratch or "waste" register).
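A rough Python sketch of the macro-expansion ("microcode") idea behind the RefLib, promised before the tables above: each emulated instruction expands into base instructions for the run() interpreter sketched earlier. The helper names (J, CLR, MOV) and the absolute-address convention used here are my own assumptions for illustration, not a fixed proposal:

```python
# Each macro returns a list of base instructions (INC, DEC, JZ) for the run()
# sketch further up.  'base' is the address at which the expansion will be placed.

def J(target, base):                       # unconditional jump, using r0 == 0
    return [('JZ', 0, target)]

def CLR(r, base):                          # r := 0
    return [('JZ', r, base + 3),           # done when r == 0
            ('DEC', r),
            ('JZ', 0, base)]               # loop back

def MOV(rX, rY, base):                     # move rX into rY, leaving rX == 0
    code = CLR(rY, base)
    loop = base + len(code)
    code += [('JZ', rX, loop + 4),         # exit when rX == 0
             ('INC', rY),
             ('DEC', rX),
             ('JZ', 0, loop)]              # loop back
    return code

program = MOV(2, 1, 0) + [('HALT',)]
print(run(program, {1: 5, 2: 3}))          # r1 ends as 3, r2 as 0 (r1's old value is cleared first)
```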
Working
Suggestions:
- Create a Counter machine:Reference model article.
- Transfer the basics there.
- Update the Counter machine article with the style (tables) and examples compatible with the recommendations fixed (by consensus) here.
- The Counter machine article has to be reworked. The Counter-machine model article should be merged into the 'Counter machine' article, and deleted. To avoid overloading the article, one should remove the discussion of Turing equivalence and just mention that counter machines can compute all computable functions,[1] which, by the way, is obvious, as counter machines allow spaghetti code, and with spaghetti code everything can be done. The article should not list lengthy example programs, but give only the code to calculate the predecessor function without using DEC(r), which is short and interesting. Maybe also the code to copy one variable to another without using CPY(m,n). And leave it there. The different models should be given in tabular form, with references to the literature where they are found (this information is now in the 'Counter-machine model' article, but not nice and uniform, and very sloppy). Remove also the 'Problems with the counter machine model' section, because those are only beauty problems. This information can be given elsewhere, for instance in the Random-access machine article.
References
- ^ if the Church-Turing thesis is true
Clean Lead section
[edit]Style for Algorithms!
See Wikipedia:Algorithms on Wikipedia.
Sorry, but the above is bogus. I am sticking with the convention in Boolos-Burgess-Jeffrey (2002) because it is a convention to be found in a published text. wvbaileyWvbailey 20:56, 13 November 2006 (UTC)
Counter machines discussion moved here from Talk:Tag system
I ran into something important with regards to the inadequacies of a 2-counter Collatz counter-machine. As I noted above, I've "built", in fewer than 16 instructions, a two-counter machine that generates Collatz sequences given any starting number. In register A it counts down N by twos to find whether the unary number in the register is ODD or EVEN, and at the same time it counts UP a second register B to EVEN(N/2). Then, depending on the ODD-EVEN outcome, it either moves 3*EVEN(N/2)+2 or EVEN(N/2) back into register A. (Thereby doing the { (3*N+1)/2, N/2 } form of Collatz). But... a few months ago I ran into (via a bibliography of a Blass-Gurevich paper) a paper by Rich Schroeppel 1972, A Two Counter Machine Cannot Calculate 2^N, MIT A.I. Laboratory, Artificial Intelligence Memo #257. This is available on the web at ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-257.pdf . While he proves that a three counter machine can simulate a Turing machine, what he is really after here is that any counter machine can be simulated by a 2-counter machine, provided an obscure coding is accepted for the input and output. And by this he means 2^W * 3^X * 5^Y * 7^Z, where a 4-counter machine made of counters W, X, Y, Z is simulated by the two-counter machine.
A corollary is that a 2-counter machine can compute any partial recursive function if its input is coded as 2^N and its output is encoded as 2^answer (I think this is in Minsky). He goes on to his major proof ("given N, a two-counter machine cannot compute 2^N") and then adds more corollaries: that no 2-counter machine can compute N^2, 2^N, log_2(N), sqrt(N), Fib(N), Smallest_Prime_Factor(N), etc. He comes to the Collatz question but notes it is unsolved (his 2-counter algorithm seems to be the same as the one I'm using except it is doing { 6*EVEN(N/2)+4, N/2 }), i.e. the classical { 3*N+1, N/2 } version. There's a lot more. He notes that Frances Yao independently proved the major theorem in April 1971, and he claims his proof was discovered in September 1970.
Given all this, we can conclude that a 2-counter machine, while quite adequate for a single-instance Collatz sequence (i.e. given a starting N), is insufficient to handle all sorts of computations required for Godelization, given straight-up unary-numerical inputs (i.e. not encoded at the input and output as Goedel numbers). I'm sure this has a bearing on what you've reported, but I don't know quite what it is (and suspect I don't have the know-how to figure it out). Bill Wvbailey (talk) 20:01, 20 November 2007 (UTC)
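For readers following along, the shortcut form of the Collatz map that the two-counter program described above computes can be written out in ordinary arithmetic (a plain illustration of the map itself, not the counter-machine code; collatz_step is a hypothetical helper name):

```python
# The "{ (3*N+1)/2, N/2 }" shortcut form of the Collatz map described above,
# written as plain arithmetic rather than as counter-machine instructions.
# k = n // 2 plays the role of EVEN(N/2) accumulated in register B.

def collatz_step(n):
    k = n // 2
    return 3 * k + 2 if n % 2 else k    # odd: (3n+1)/2 = 3k+2 ; even: n/2 = k

n = 7
seq = [n]
while n != 1:
    n = collatz_step(n)
    seq.append(n)
print(seq)    # [7, 11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1]
```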
- Here's my take on Schroeppel's paper: I see it primarily as showing that what he so readily accepts as the i/o convention in CMs — namely, that the argument and computed value of a function are just the numbers stored in certain registers — is not adequate for universality in a CM with insufficiently-many registers. (It's only on the basis of this inadequate i/o convention that he claims a 2CM "can't calculate" 2^n.) But the inadequacy is not surprising, as 2CMs were proved universal (by Minsky) only for a rather elaborate (unsurprisingly "obscure") coding that simulates the configuration of a UTM by means of a Gödel number, whose factorization is of the form 2^a 3^m 5^n, in one of the registers. Although that might raise tractability issues, all recursive functions are certainly computable by a 2CM. If a 2CM is used with some other i/o scheme, it may be able to compute some, but not necessarily all, recursive functions. --r.e.s. (talk) 13:50, 21 November 2007 (UTC)
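A small sketch of the kind of coding being discussed (my own illustration of the general idea, not Minsky's or Schroeppel's exact construction; the helper names encode, inc, dec, is_zero are assumptions): several counters packed into one register as 2^w * 3^x * 5^y * 7^z, on which increment, decrement, and zero-test become multiplication, division, and a divisibility check.

```python
# Four counters (w, x, y, z) packed into one number 2^w * 3^x * 5^y * 7^z.
# These are the operations a 2-counter machine can carry out on the packed
# number, using its second register as scratch space.

PRIMES = {'w': 2, 'x': 3, 'y': 5, 'z': 7}

def encode(w, x, y, z):
    return 2**w * 3**x * 5**y * 7**z

def inc(code, counter):
    return code * PRIMES[counter]

def dec(code, counter):
    p = PRIMES[counter]
    return code // p if code % p == 0 else code   # no-op on an already-zero counter

def is_zero(code, counter):
    return code % PRIMES[counter] != 0

c = encode(1, 0, 2, 0)        # w=1, x=0, y=2, z=0  ->  2 * 25 = 50
c = inc(c, 'x')               # now 150
print(is_zero(c, 'x'), dec(c, 'y') == encode(1, 1, 1, 0))   # False True
```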
I agree with your interpretation. But there are complications ... There's more I want to write about this but it got complicated and I ran out of time. I'll go away and draw something and think about this. (See the email I sent Schroeppel back on ~1 Aug and his response, below.)
- In a 2007 paper by Dershowitz and Gurevich I ran into a citation to your 1972 paper (that you wrote while at the MIT AI labs) called "A Two Counter Machine Cannot Calculate 2^N". I was then able to download it off the 'net.
- Did you ever run into a "better" algorithm for your little challenge to write a multiply program for a 3CM? (I don't have one, but it sounds like a fun challenge. If I can figure your proof out, I may add a brief sketch of it to a wikipedia page titled "Proof of impossibility". Your observation about the Collatz Conjecture/Ulam problem really caught my eye; that problem is mentioned there too as an "open" problem.)
- The DG2007 paper has the most illustrious bibliography I've ever appeared in.
- I don't know of a better multiplication algorithm. There hasn't been much work with counter machines, before or since my paper. There has been a fair amount of work on the 3N+1 problem; look up Jeff Lagarias, who does a review every few years. I didn't know the 1972 paper was on the net; I'll have to go take a peek. The impossibility argument can be summarized in a sentence or two: A 2CM that halts must map certain arithmetic progressions in a linear fashion. This is incompatible with most interesting functions like 2^N. The paper spends many pages filling in the details. You might also check with Frances Yao (nee Chu) who discovered the result at the same time as I did.
- Rich Schroeppel rcs@cs.arizona.edu
Bill Wvbailey (talk) 22:08, 21 November 2007 (UTC)
(Image inserted by Wvbailey)
Bill, I hope this move of the material is satisfactory. You wrote "But there are complications ... There's more I want to write about this" — I'm interested in what those complications might be. BTW, using Minsky's 2CM from his book (Computation, 1967), I can get an 11-instruction Collatz program, but I think it's probably just the one you have, with the goto info placed in the increment instruction (as Minsky did). --r.e.s. (talk) 02:09, 23 November 2007 (UTC)
- Here's one version with instruction set { INC(r), DCR(r), jZ(A,addr); j(addr) } thus it assumes that instructions are executed in sequence unless a jump occurs.
Instruction #: | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 |
Instruction: | out | jZ | dcr | jZ | dcr | inc | j | inc | inc | inc | jz | dcr | j | jz | dcr | inc | j | H |
Register: | A | A | A | A | A | B | A | A | A | B | B | B | B | A | ||||
Jump-to #: | 14 | 9 | 2 | 1 | 8 | 1 | 14 |
I need to reboot. I'll be back. Bill Wvbailey (talk) 16:41, 23 November 2007 (UTC)
- Here's a state-machine-like version with 11 instructions (not counting HALT) if you count the "output" instruction that appears at the start. I inserted it so I could see better what was going on. I use Excel to simulate these things. So this one uses this instruction set. "Test" is really just a NOP or Turing "N", but it specifies a register to test for 0 or 1:
- { INC(r,address_if_0,address_if_1); DCR(r,address_if_0,address_if_1); TEST(r,address_if_0,address_if_1); HALT }
Instruction #: | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
Instruction: | out | dcr | inc | dcr | inc | inc | inc | test | dcr | inc | dcr |
Register: | A | A | B | A | A | A | A | B | B | A | B |
Jump-if-0 #: | 2 | 6 | 4 | 10 | 6 | 7 | 8 | 1 | 5 | 11 | 1 |
Jump-if-1 #: | 2 | 3 | 4 | 2 | 6 | 7 | 8 | 9 | 5 | 11 | 10 |
Bill Wvbailey (talk) 18:31, 23 November 2007 (UTC)
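A hedged Python sketch of the "state-machine-like" instruction format described just above, in which every instruction names a register and two successor addresses. It assumes that the zero test is applied after the operation and that DCR of an empty register leaves it at zero, neither of which is pinned down above, and it does not attempt to reproduce the 11-instruction Collatz program in the table (run_statemachine and the toy program are my own illustrations):

```python
def run_statemachine(program, regs, start=0):
    """program: dict {addr: (op, reg, addr_if_0, addr_if_1)} plus ('HALT',) entries."""
    pc = start
    while True:
        inst = program[pc]
        if inst[0] == 'HALT':
            return regs
        op, r, if0, if1 = inst
        if op == 'INC':
            regs[r] += 1
        elif op == 'DCR':
            regs[r] = max(0, regs[r] - 1)     # assumed: DCR of 0 stays 0
        # 'TEST' changes nothing; it only selects the next address
        pc = if0 if regs[r] == 0 else if1     # assumed: test after the operation

# Toy program (not the Collatz one above): drain register A into register B.
prog = {1: ('TEST', 'A', 4, 2),               # A == 0 ? halt : enter the loop
        2: ('DCR',  'A', 3, 3),
        3: ('INC',  'B', 1, 1),
        4: ('HALT',)}
print(run_statemachine(prog, {'A': 4, 'B': 0}, start=1))   # {'A': 0, 'B': 4}
```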
The following touches on my cryptic note and the 2^N business. I don't understand it yet. In the back of Kleene 1952, p. 526, under the bibliographic reference for Laszlo Kalmar, is a note: "Kalmar takes, as his basis for elementary functions the variables, 1, +, *, |a-b|, [a/b], Σ^z_(y=w), Π^z_(y=w), but remarks that then * and [a/b] are redundant." (p. 526). Kleene refers us back to his section §57 where he is developing the μ-operator (the unbounded version). In §57 there's an extended example I. "A function is "elementary", if it can be expressed explicitly in terms of variable natural numbers, the constant 1, the functions +, * and [a/b] and the operations Σ^z_(y<w), Π^z_(y<w)" [bounded sum and product]. Kleene's point seems to be that the primitive recursive functions are all elementary and vice versa ... except this: "From this it follows that there are non-elementary primitive recursive predicates..." (p. 272). Kleene refers us back to §55 where he observes: "Ackermann's investigation shows that ℰ(a) grows faster with increasing a than any primitive recursive function of a (just as 2^a grows faster than any polynomial in a)...." (cf p. 272).
I wanted to find out more about Kalmar's elementary functions. In a book [?] online by Helmut Schwichtenberg, Logic of Computation, Schwichtenberg starts with the notion of "register machines" (aka counter machines) with three instructions { zero(x); Successor(x); IF x=y THEN Im ELSE In }. He develops the notion of "computable functions" from this model. Then he develops all the machinery he needs to simulate the "elementary functions", e.g. the "bounded sum Σ" and "bounded product Π", etc., and then he proves that the number 2^(2^N) is not elementary (at least I think that's what he's doing; I am confused here by the weird symbolism, it could be Ackermann's function, or just 2^N ... cf the "Lemma" on page 59).
Schwichtenberg's version of Kalmar's elementary functions includes "the variables", the constants 0 and 1, addition +, "modified" (proper) subtraction ∸, and bounded sums and products. He defines as sub-elementary those which omit the bounded product.
Then, he discusses the class ℰ (script-E), i.e. "those number theoretic functions which can be defined from the initial functions: constant 0, successor S, projections (onto the ith coordinate), addition +, modified [proper] subtraction ∸, multiplication * and exponentiation 2^x, by applications of composition and bounded minimization." (p. 60). It is the exponentiation 2^x that caught my eye.
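As a plain-arithmetic illustration (not taken from Kleene or Schwichtenberg; the helper names bounded_sum and bounded_prod are assumptions) of why the bounded product is the ingredient that brings in exponential growth: 2^x is itself just a bounded product of the constant 2, which is what the "sub-elementary" class gives up.

```python
from math import prod

def bounded_sum(f, z):      # sum of f(y) for y < z
    return sum(f(y) for y in range(z))

def bounded_prod(f, z):     # product of f(y) for y < z
    return prod(f(y) for y in range(z))

print(bounded_prod(lambda y: 2, 10))     # 1024 == 2**10
print(bounded_sum(lambda y: y, 5))       # 10 == 0+1+2+3+4
```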
All this is probably in Minsky's text but I haven't looked back at it yet.
I guess all that my cryptic comment was, was this (it was lost when my Excel froze): it appears that tag machines are more powerful than what we need for a Collatz sequence (can a tag machine calculate 2^N given N encoded as unary on its tape??),
- Tag machine:
- single-tape with two roller-sets,
- left-hand read-only head
- right-hand write-only head,
- both roller-sets left-only
A weaker [??] but Collatz-capable machine (that cannot calculate 2^N given N encoded as unary on its tape),
- Collatz capable machine:
- single-tape with two roller-sets,
- left-hand read-only head,
- right-hand read-only head,
- both roller-sets right- and left-capable
Out of curiosity I've sketched an algorithm to do Collatz on the "tag machine" using an algorithm similar to the ones posted above for the counter machine (i.e. divide by two while calculating EVEN(N/2)). (I need to simulate it to check it.) It requires about the same number of instructions (on the order of 20) as the counter machine. I would not be surprised to find out that it can be "morphed" into the tag-algorithm.
I have no idea where this is leading. I'm just following the scent. Bill Wvbailey (talk) 22:54, 23 November 2007 (UTC)
- I've replied to some of your questions about tag machines, including "can a tag machine calculate 2^N given N encoded as unary on its tape??" at Talk:Tag system. I prefer to avoid detailed discussion of tag machines here (or of counter machines over there). --r.e.s. (talk) 01:46, 29 November 2007 (UTC)
DEC
What does DEC(r) do if register r contains zero? This doesn't seem to be defined anywhere. — Preceding unsigned comment added by 95.151.103.43 (talk) 19:23, 21 November 2020 (UTC)
Merge proposal
Support: The Counter automaton page gives "counter machine" as a synonym, so there's no reason for two separate pages. (But maybe a "counter automaton" is a subset of "counter machine" with just one register? The page is unclear.) Moreover, "Counter automaton" is mostly a stub. Finally, the merge has been proposed since 2021 without objection, so I'd say go ahead and be WP:BOLD. (There wasn't a section on the talk page to discuss the merge proposal, so I created one.) KenShirriff (talk) 20:30, 16 February 2024 (UTC)
- Counter automata are a simpler kind of automaton, but, while the two are in theory similar structures (a finite state machine with one or more counters), they have very different applications (counter automata in formal language theory, counter machines as models of computation in computer science), and the articles appear to present them from these different points of view. Chaotıċ Enby (talk · contribs) 09:35, 24 April 2024 (UTC)
- Oppose: I'm convinced by your explanation that counter automata are for formal language theory while counter machines are for models of computation. I withdraw my merge support. KenShirriff (talk) 19:02, 2 May 2024 (UTC)