PROGRAMMING TECHNIQUES FOR TURING MACHINES


INTRODUCTION:

The Turing machine was introduced by Alan Turing in 1936, and it accepts the recursively enumerable languages (those generated by Type-0 grammars).


A Turing machine consists of a tape of infinite length on which read and write operations can be performed. The tape is divided into infinitely many cells, each of which holds either an input symbol or a special symbol called the blank. The machine also has a head that points to the cell currently being read and can move in either direction.


A Turing Machine is expressed as a 7-tuple (Q, T, B, ∑, δ, q0, F) where: 

 

  • Q is a finite set of states

  • T is the tape alphabet (symbols which can be written on Tape)

  • B is the blank symbol (initially, every cell other than those holding the input string contains B)


  • ∑ is the input alphabet (the symbols from which input strings are built; ∑ is a subset of T)

  • δ is the transition function, which maps Q × T → Q × T × {L, R}. Depending on the present state and the tape symbol under the head, the machine moves to a new state, may change the tape symbol, and moves the head one cell to the left or right (a small simulator sketch is given below).

  • q0 is the initial state

  • F is the set of final states. If any state in F is reached, the input string is accepted.


There are Turing machines which accept regular languages, and there are Turing machines which accept languages that are not regular.
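To make the 7-tuple definition concrete, here is a minimal sketch of a dictionary-based simulator in Python. The particular machine, its state names, and the language it happens to accept (strings over {0, 1} that end in 1) are assumptions chosen for illustration, not part of the definition above; L and R are encoded as head moves of -1 and +1.

    # delta maps (state, tape symbol) -> (new state, symbol written, head move).
    # BLANK plays the role of B; any (state, symbol) pair not listed halts the machine.
    BLANK = "B"

    delta = {
        ("q0", "0"): ("q0", "0", +1),        # scan right over the input
        ("q0", "1"): ("q0", "1", +1),
        ("q0", BLANK): ("q1", BLANK, -1),    # hit the first blank, step back
        ("q1", "1"): ("q_accept", "1", +1),  # last symbol is 1: accept
    }

    def run(w, delta, q0="q0", accept=("q_accept",)):
        tape = dict(enumerate(w))            # cells not in the dict are blank
        q, head = q0, 0
        while (q, tape.get(head, BLANK)) in delta:
            q, sym, move = delta[(q, tape.get(head, BLANK))]
            tape[head] = sym
            head += move
        return q in accept

    print(run("0101", delta))   # True  -- ends in 1
    print(run("0110", delta))   # False -- ends in 0

Because δ is partial, the machine halts as soon as no transition applies; the input is accepted exactly when the state reached at that point lies in F (here, q_accept).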


Some programming techniques for Turing Machines are-

  • Storage in the state.

  • Multiple tracks.

  • Sets of states as subroutines.

Let us now look at each of these techniques more closely.


MULTI-TRACK TURING MACHINE

Multi-track Turing machines are a variation of the basic model in which the single tape is divided into several tracks, with one tape head that reads and writes all tracks at the same position: in a single step the head reads n symbols from n tracks and may rewrite all of them. Like an ordinary single-track, single-tape Turing machine, a multi-track machine accepts exactly the recursively enumerable languages.

A Multi-track Turing machine can be formally described as a 6-tuple (Q, X, ∑, δ, q0, F) where −

  • Q is a finite set of states

  • X is the tape alphabet

  • ∑ is the input alphabet

  • δ is the transition function on states and symbol tuples, where
    δ(qi, [a1, a2, ..., an]) = (qj, [b1, b2, ..., bn], Left_shift or Right_shift)

  • q0 is the initial state

  • F is the set of final states

For every single-track Turing machine S, there is an equivalent multi-track Turing machine M such that L(S) = L(M).

It is sometimes convenient to endow the Turing machine tape with multiple tracks, each with its own tape alphabet, and allow the machine to read from and write to the same position on all tracks simultaneously. For example, to define a Turing machine with three tracks, we need three tape alphabets Γ1, Γ2, and Γ3, each with its own blank symbol, where (say) Γ1 contains the input alphabet Σ as a subset; we also need a transition function of the form

δ: Q × Γ1 × Γ2 × Γ3 → Q × Γ1 × Γ2 × Γ3 × {−1,+1}

Describing a configuration of this machine requires a quintuple (q, x1, x2, x3, i), indicating that the machine is in state q with its head at position i, and that each track k contains the string xk followed by an infinite sequence of blanks. The initial configuration is (start, w, ε, ε, 0), with the input string written on the first track and the other two tracks completely blank.

But any such machine is equivalent (if not identical) to a single-track Turing machine with the (still finite!) tape alphabet Γ := Γ1 × Γ2 × Γ3 . Instead of thinking of the tape as three infinite sequences of symbols, we think of it as a single infinite sequence of “records”, each containing three symbols.
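As a quick illustration of this "records" view, here is a sketch (with arbitrarily chosen blank symbols, not taken from the text) of the initial configuration of a three-track machine represented as a single tape of triples:

    # Each track has its own blank symbol, chosen arbitrarily for this sketch.
    B1, B2, B3 = "B", "#", "-"

    w = "0110"                       # the input string, written on track 1
    # Initially track 1 holds w and tracks 2 and 3 are blank, so the single
    # tape over the product alphabet Gamma1 x Gamma2 x Gamma3 is a sequence
    # of records, one triple per cell.
    tape = [(a, B2, B3) for a in w]
    print(tape)  # [('0', '#', '-'), ('1', '#', '-'), ('1', '#', '-'), ('0', '#', '-')]

    # A single transition reads and may rewrite all three components at once:
    #   delta[(q, record)] = (new_state, new_record, move)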

Multi-tape Turing machines are sometimes compared to the von Neumann architecture (the design that almost all real-world computers are based on): practical computers are computationally equivalent to multi-tape Turing machines. Adding tapes or tracks adds no computational power, but it vastly improves performance and makes machines easier to describe.

A common application of this idea is to use one track for the "real" data and a second track for one or more "markers" that conveniently mark certain positions in the string.
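Below is a minimal sketch of this marker technique, assuming a two-track tape whose cells are (data, marker) pairs; the helper names and the routine itself are illustrative choices, not a standard construction from the text.

    BLANK = "B"

    def make_tape(w):
        # Track 1 carries the input; track 2 (the marker track) starts blank.
        return {i: (c, BLANK) for i, c in enumerate(w)}

    def mark_first_unmarked_zero(tape):
        # Scan right and put a '*' on the marker track above the first
        # unmarked 0, leaving the data track untouched.
        head = 0
        while True:
            data, marker = tape.get(head, (BLANK, BLANK))
            if data == BLANK:                 # ran off the input: nothing to mark
                return False
            if data == "0" and marker == BLANK:
                tape[head] = (data, "*")      # rewrite only the marker component
                return True
            head += 1

    tape = make_tape("1001")
    mark_first_unmarked_zero(tape)
    print(tape)   # cell 1 now holds ('0', '*'): the data plus a marker above it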

SUBROUTINES:

A subroutine of a Turing machine is a small set of states that performs a useful piece of computation; typically it has a single entry state and a single exit state. By breaking a task into smaller subroutines, a Turing machine can perform quite complicated computations. Under fairly general conditions, one Turing machine can serve as a "subroutine" of another, and small specialized Turing machines can be embedded into a larger machine, much as subroutines are embedded in programs written in ordinary programming languages.

Suppose we wish T1 to be a subroutine of T2. For this we require that the states of T1 be disjoint from the states of T2. To "call" T1, T2 enters the initial state of T1. The rules of T1 then become part of the rules of T2, and from a halting state of T1, T2 enters one of its own states and proceeds with its computation.

Let us take an example. Consider a "shift right and then reposition" Turing machine. It can be rewritten as a combination of two smaller machines: a "shift right" machine and a machine that shifts left over a block of 0s and 1s. Combining these machines as subroutines and substituting them into the original machine gives the final construction.

In essence, we "call" one of these subroutines by entering its initial state; however, a Turing machine has no mechanism for returning to its caller, so a better analogy is to think of the subroutines as "building blocks". As a side note, if we wanted to use one of the blocks twice in a larger computation, we would have to duplicate its entire set of states.
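The same idea can be sketched in Python (all state names and the tiny machines below are assumptions for illustration): the caller's transition table simply absorbs the subroutine's rules, with disjoint state names, and the subroutine's halting state is wired back into a state of the caller.

    # A tiny "shift right" subroutine: from its entry state it moves the head
    # one cell to the right, leaving the symbol unchanged, and then halts.
    shift_right = {
        ("sr_start", "0"): ("sr_done", "0", +1),
        ("sr_start", "1"): ("sr_done", "1", +1),
        ("sr_start", "B"): ("sr_done", "B", +1),
    }

    # The caller: on reading 1 in state q0 it writes 1, moves right, and
    # "calls" the subroutine by entering the subroutine's initial state.
    caller = {
        ("q0", "1"): ("sr_start", "1", +1),
    }

    # Embedding the block: the subroutine's rules become part of the caller's rules...
    caller.update(shift_right)

    # ...and from the subroutine's halting state the caller resumes in a state
    # of its own.  Using the block a second time would require a fresh copy of
    # the sr_* states under new names.
    for sym in ("0", "1", "B"):
        caller[("sr_done", sym)] = ("q1", sym, +1)

    print(caller[("sr_done", "0")])   # ('q1', '0', 1)

The combined table has the same form as any other transition table, so it can be run on the simulator sketched in the introduction.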





STORAGE IN THE STATE

In this technique, the finite control of the Turing machine is used to hold a finite amount of data in addition to the state. For example, the state can be written as [q, A, B, C] when the finite control holds three data elements A, B, and C.

 

Suppose you wish to keep track of one additional symbol. If the extra data can be either 'A' or 'B', and the control states are q0 and q1, then the actual set of states of the Turing machine is {[q0, A], [q1, A], [q0, B], [q1, B]}.

Let us take another analogy to understand this. Pretend that the finite control has some finite number of storage cells, each of which can hold one symbol.

In this situation, if instead of labeling the states simply as q0, q1, ... we label them with a tuple consisting of a state identifier together with the symbols in the storage cells, we see that the Turing machine requires no change to its formal definition.
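As a minimal sketch of this technique, assume each state is written as a Python pair (control state, stored symbol). The machine below is a classic illustration (not taken from the text above): it remembers the first input symbol in its state and accepts if that symbol never appears again, i.e. it accepts the language 01* + 10*.

    BLANK = "B"

    delta = {}
    for first in ("0", "1"):
        other = "1" if first == "0" else "0"
        # Read the first symbol and remember it in the state's storage cell.
        delta[(("q0", BLANK), first)] = (("q1", first), first, +1)
        # Keep moving right as long as only the *other* symbol appears.
        delta[(("q1", first), other)] = (("q1", first), other, +1)
        # Reached a blank without seeing the stored symbol again: accept.
        delta[(("q1", first), BLANK)] = (("q_acc", BLANK), BLANK, +1)

    # A small runner (the same idea as the simulator sketched earlier).
    def run(w, delta, q0, accept):
        tape, q, head = dict(enumerate(w)), q0, 0
        while (q, tape.get(head, BLANK)) in delta:
            q, sym, move = delta[(q, tape.get(head, BLANK))]
            tape[head] = sym
            head += move
        return q in accept

    start, acc = ("q0", BLANK), {("q_acc", BLANK)}
    print(run("0111", delta, start, acc))   # True  -- the stored 0 never reappears
    print(run("0110", delta, start, acc))   # False -- the stored 0 reappears

Nothing in the underlying model has changed; the "storage" is only a convention for naming the (still finitely many) states.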

CONCLUSION:

A Turing machine is a theoretical abstraction of a computing engine. Turing machines find applications in algorithmic information theory and complexity studies, software testing, high-performance computing, machine learning, software engineering, computer networks, and evolutionary computation. We conclude with the Church–Turing thesis, which states that every effective method of computation is either equivalent to or weaker than a Turing machine.
















