CSE202: Introduction to Formal Languages and Automata Theory

Post on 04-Jan-2016



Chapter 10

Other Models of Turing Machines

These class notes are based on material from our textbook, An Introduction to Formal Languages and Automata, 4th ed., by Peter Linz.

Variations on TMs

• Most variations don’t add to or subtract from the power of the standard TM

• Additions:
  • Stay instruction: in each move, the R/W head moves right, moves left, or stays under the same cell
  • Tape is infinite in both directions
  • Multiple tapes (see proof in textbook)

• Restrictions:
  • Each move either writes to the tape or moves the head, but not both
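The stay instruction is easy to see in code. Here is a minimal sketch (all names are mine, not the textbook's) of a single TM step that supports "S" alongside "L" and "R":

```python
# Sketch of one TM step with a "stay" move; the tape is a dict so it is
# unbounded in both directions, another variation mentioned above.
def step(tape, head, state, delta, blank="."):
    """Apply one transition; delta maps (state, symbol) -> (state, symbol, move)."""
    symbol = tape.get(head, blank)
    if (state, symbol) not in delta:
        return None                          # no applicable rule: crash/halt
    new_state, write, move = delta[(state, symbol)]
    tape[head] = write
    head += {"L": -1, "R": 1, "S": 0}[move]  # "S" = stay under the same cell
    return tape, head, new_state

# A stay move q0 --a/b,S--> q1 can always be replaced by two ordinary
# moves (write and go right, then come back left), so it adds no power.
delta = {("q0", "a"): ("q1", "b", "S")}
tape, head, state = step({0: "a"}, 0, "q0", delta)
print(tape[0], head, state)   # b 0 q1
```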

Computational power

• No attempt to extend the computational power of Turing machines yields a model of computation more powerful than the standard one-tape, one-head, deterministic Turing machine

• By computational power, we mean what can be computed -- not how fast it can be computed. Your desktop may run faster than a Turing machine, but it can’t compute anything that a Turing machine can’t also compute.

Off-line Turing machine

• What if the TM has a second tape that holds the original input string, while the main tape is used for processing? You never have to write over the original string. Does this add any power to the TM?

• No. Imagine writing the string onto the main tape, then inserting a special mark on the tape, then copying the string after the mark and doing all the processing after the mark.

Multiple tapes

• Consider a TM with k tapes and a separate head for each tape. It reads and writes on these tapes in parallel.

• We can show that this does not increase the computational power of a TM by showing that any multi-tape TM can be simulated by a standard, single-tape TM.

• The details of the simulation involve dividing a single tape into multiple tracks -- using an alphabet consisting of tuples, with one element for each track.

a b c . . .
l m n . . .
x y z . . .

Multiple tapes

TM with 3 tapes, simulated by a TM with 1 tape, but 3 tracks on the tape:

a b c ...
l m n ...
x y z ...

or a TM with 1 track, with tuples in each cell:

(a,l,x) (b,m,y) (c,n,z) ...

or a TM with “words” instead of tuples:

alx bmy cnz ...

or a TM with the “words” replaced by individual symbols:

@ $ % ...
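The tuple-per-cell idea can be sketched directly (the variable names are mine): each cell of the single tape holds one symbol from each of the three tracks, and the tuple alphabet stays finite.

```python
# Sketch: three tapes packed into one tape whose cells are 3-tuples,
# one component ("track") per original tape.
tapes = [["a", "b", "c"], ["l", "m", "n"], ["x", "y", "z"]]

# One cell of the single-tape machine holds one symbol from each track.
packed = list(zip(*tapes))
print(packed)   # [('a', 'l', 'x'), ('b', 'm', 'y'), ('c', 'n', 'z')]

# Reading "tape 2, cell 1" of the 3-tape machine means reading
# component 1 of cell 1 on the packed tape:
assert packed[1][1] == tapes[1][1] == "m"
```

If each tape uses alphabet Σ, the packed tape uses Σ³, which is still a finite alphabet, so the result is still a standard TM.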

Multiple heads

• In this case, there is a single tape but k heads that can read/write at different places on the tape at the same time.

• We show that this does not increase the computational power of TMs by showing that a multiple-head TM can be simulated by a standard single-head TM.

• The simulation details are similar to those for a multi-tape TM.

Multiple heads

x y z . . .

A tape with several heads is simulated by a tape with a single head that reads and writes tuples, pairing each symbol with a marker for any head currently on that cell, e.g.

(x, ^) (y, ) (z, ^) . . .
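The head-marker encoding can be sketched as follows (the encoding and names are mine, in the spirit of the simulation described above):

```python
# Sketch: a single-head machine simulates k heads by storing, in each
# cell, the tape symbol plus the set of heads currently on that cell.
tape = ["x", "y", "z"]
heads = {1: 0, 2: 2}        # head 1 on cell 0, head 2 on cell 2

sim = [(s, frozenset(h for h, pos in heads.items() if pos == i))
       for i, s in enumerate(tape)]
print(sim)  # [('x', frozenset({1})), ('y', frozenset()), ('z', frozenset({2}))]

# One move of the k-head machine becomes a sweep of the single head:
# it scans for the marked cells, reads/writes them, and moves the markers.
```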

Two-dimensional tapes

• A 2-dimensional tape is a grid that extends infinitely downward as well as to the right.

• The head can move in 4 directions: right, left, up, and down.

• This TM can also be simulated by a TM with a single, one-dimensional tape

Two-dimensional tapes

a b c ...
l m n ...
x y z ...
...

simulated by a single one-dimensional tape, with # separating the rows:

a b c # l m n # x y z # ...
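The row-by-row linearization above can be sketched in a few lines (names are mine; for a truly unbounded grid the simulator must re-space the tape as rows grow, which this fixed-size sketch ignores):

```python
# Sketch: each row of the 2-D grid is laid on the single tape in order,
# with '#' separating rows.
grid = [["a", "b", "c"], ["l", "m", "n"], ["x", "y", "z"]]

linear = []
for row in grid:
    linear.extend(row)
    linear.append("#")
print("".join(linear))      # abc#lmn#xyz#

# Cell (r, c) of the grid sits at position r*(width+1) + c on the tape,
# so an up/down move of the 2-D head becomes a bounded left/right walk.
width = len(grid[0])
def pos(r, c):
    return r * (width + 1) + c
assert linear[pos(1, 2)] == grid[1][2] == "n"
```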

Random-access Turing machine

• Instead of accessing data on the tape sequentially, imagine a TM that has random-access memory and can go to any cell of the tape in one step. To allow this, the TM has registers that can store memory addresses.

• We can simulate this by a multi-tape TM in which one tape is used as memory and the extra tapes are used as registers.

Random-access Turing machine

Tape:       a b c a b c a b c ...

Register 1: 1 1 ...

Register 2: 1 1 1 1 1 ...

Nondeterministic Turing machine

• A nondeterministic TM (NTM) may have more than one transition with the same left-hand side, which means more than one transition can be applied in the same configuration.

• Nondeterminism allows a TM to have different outputs for the same input. This does not make sense when computing a function, but makes sense for language-recognition in the same way as before. A string is accepted if some computation leads to the halting state.

Non-determinism

Back when we looked at finite state machines, we discovered that, although it might take fewer moves to process a string in a regular language with a non-deterministic finite automaton, we could always build a deterministic finite automaton to recognize the same strings.

Non-determinism

The situation with Turing machines is similar: it may be possible to do things faster with a non-deterministic TM, but it is always possible to build an equivalent deterministic TM that recognizes the same language.

Nondeterminism and computational power

• Nondeterminism does not increase the computational power of a TM.

• We can show this by showing that any NTM can be simulated by a DTM using a technique that the book calls “dovetailing.”
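The flavor of the simulation can be sketched abstractly (this is a breadth-first version of the idea; function and variable names are mine, and configurations are abstracted to arbitrary values): explore all nondeterministic choice sequences one step at a time, so no single infinite branch can starve the others.

```python
# Sketch: deterministic, exhaustive exploration of an NTM's computations.
from collections import deque

def ntm_accepts(start, successors, accepting, max_steps=10_000):
    """successors(config) -> list of next configs (several = nondeterminism)."""
    queue = deque([start])
    seen = {start}
    for _ in range(max_steps):          # bounded here only to keep the demo finite
        if not queue:
            return False
        config = queue.popleft()
        if accepting(config):
            return True                 # some computation reaches acceptance
        for nxt in successors(config):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Toy "machine": from n, nondeterministically go to n+2 or n+3; accept 7.
print(ntm_accepts(0, lambda n: [n + 2, n + 3], lambda n: n == 7))  # True
```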

Nondeterminism and efficiency

• Although nondeterminism does not increase the computational power of a TM, it lets it compute some things more efficiently by guessing the right thing to do.

• Although a DTM can always simulate a NTM, the DTM may be much more inefficient because it has to try all possibilities to find the right one.

Nondeterminism and efficiency

• Surprisingly, the question whether a DTM can simulate an NTM efficiently is still unresolved. It is the famous question of whether P = NP.

• P stands for “can be solved by a standard deterministic Turing machine in polynomial time”

• NP stands for “can be solved by a non-deterministic Turing machine in polynomial time”
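A standard illustration of the gap (subset-sum; this example is mine, not from the slides): checking a guessed answer is fast, which matches the NTM definition of NP, while the only known deterministic approaches amount to searching exponentially many candidates.

```python
# Sketch: verifying a certificate vs. deterministic brute-force search.
from itertools import combinations

def verify(nums, subset, target):
    """Deterministic, polynomial-time check of a guessed subset."""
    return sum(subset) == target and all(x in nums for x in subset)

def search(nums, target):
    """Deterministic search over all 2^n subsets."""
    return any(sum(c) == target
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

nums = [3, 9, 8, 4]
print(verify(nums, (3, 4), 7))   # True: checking a guessed subset is easy
print(search(nums, 7))           # True: finding one may take 2^n tries
```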

Nondeterministic TMs

Non-determinism doesn’t add any power to a TM to solve harder problems.

We can always simulate an ordinary TM on a non-deterministic Turing Machine (NTM) by not using the freedom to be non-deterministic.

Theorem: A non-deterministic Turing Machine (NTM) can be simulated exactly by a deterministic Turing Machine.

So TMs and NTMs are equivalent.

Variations of TM that limit its power

• What if we change the transition rules so that the read/write head can only move right? Or delete the finite state controller? Wouldn’t those changes limit the power of a TM?

• Yes! In fact, those changes would limit the power of the TM so much that you really couldn’t call it a TM any more.

Variations of TM that limit its power

• Restricting the amount of tape that a TM can use limits its computational power.

• Theory tells us that this is the only modification to the standard TM that can limit the power of the TM.

Variations of TM that limit its power

• What if we limit the size of the tape to some arbitrary constant limit, no matter what language we are trying to recognize?

• We may have a string too long to fit on the tape. The resulting machine is weaker than a Push-Down Automaton, which has an infinite stack. The advantages of a tape can’t make up for lack of adequate storage.

• In the extreme case (tape size = 0), the machine is equivalent to a Finite State Automaton.

Variations of TM that limit its power

• What if we limit the size of tape to the size of the input string?

• This gives us a Linear-Bounded Automaton (LBA). An LBA can accept all context-free languages plus other languages like {aⁿbⁿcⁿ | n ≥ 0} and {ww | w ∈ {a,b}*}, but not some of the other languages accepted by a standard TM.

• It is more powerful than a PDA but less powerful than a TM.
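To see the linear-bounded idea concretely, here is a sketch (my own, not from the slides) of recognizing {aⁿbⁿcⁿ | n ≥ 0} using only the cells that hold the input, by repeatedly marking one a, one b, and one c in place:

```python
import re

def lba_accepts(s):
    # The shape check a*b*c* needs no extra space (a finite-state scan).
    if not re.fullmatch(r"a*b*c*", s):
        return False
    tape = list(s)                       # the only storage: the input cells
    while "a" in tape:
        for ch in "abc":                 # mark one of each, left to right
            try:
                tape[tape.index(ch)] = "X"
            except ValueError:
                return False             # ran out of b's or c's
    return all(c == "X" for c in tape)   # nothing unmarked may remain

print(lba_accepts("aabbcc"))   # True
print(lba_accepts("aabbc"))    # False
print(lba_accepts(""))         # True
```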

Universal Turing Machines

The Universal Turing machine simulates any other TM with any tape.

The UTM tape has a description of another TM on it, followed by an encoding of the tape that the machine will run on.

The Universal Turing machine decodes and simulates the represented TM.

This corresponds to Turing’s and von Neumann’s “stored program machine”.

Encoding function

A specific TM is defined primarily by its transition function δ. Each move of a TM is described by the formula:

δ(p, a) = (q, b, D)

where:
• p is the current state
• a is the current character on the tape
• q is the state moved to
• b is the character written on the tape
• D is the direction the tape head moves

Encoding function

Suppose that we represent a move, such as

δ(q3, a) = (q4, Δ, R)

like this:

q3 a q4 Δ R

where the five fields are, in order: initial state, current character on the tape, state moved to, character written on the tape, and direction the tape head moves.

q3 a q4 Δ R — Can you tell what this is supposed to represent?

Encoding function

So here is our “condensed” rule:

q3 a q4 Δ R

Now let’s encode each of these 5 components as a sequence of 0’s, separated by 1’s.

For example:
• the halt state will be represented by a single 0
• q0 will be represented by two 0’s
• q1 will be represented by three 0’s
• etc.

Encoding function

Characters:
Δ = 0
a = 00
b = 000

States:
halt = 0
q0 = 00
q1 = 000

Direction:
Stay = 0
Left = 00
Right = 000
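The tables above amount to a simple unary scheme (the dictionaries and function name below are mine): every object is a run of 0's, and in general state qᵢ gets i+2 zeros while the halt state gets a single zero.

```python
# Sketch of the slide's unary encoding tables.
characters = {"Δ": "0", "a": "00", "b": "000"}
states     = {"halt": "0", "q0": "00", "q1": "000", "q2": "0000"}
direction  = {"S": "0", "L": "00", "R": "000"}   # Stay, Left, Right

def encode_state(i):
    """State q_i is represented by i+2 zeros."""
    return "0" * (i + 2)

assert encode_state(0) == states["q0"]
assert encode_state(1) == states["q1"]
```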

Encoding function

But 00 can stand for both the character a and state q0; won’t we get confused?

No, because there are 5 parts to each rule, the parts are separated by 1’s, and the parts always come in the same order.

So:

001010001010001

unambiguously represents:

q0 Δ q1 Δ R

Change leftmost a to b

This TM has 6 transition rules.

Change leftmost a to b

This TM has 6 transition rules:

q0 Δ q1 Δ R
q1 b q1 b R
q1 a q2 b L
q1 Δ q2 Δ L
q2 b qh b L
q2 Δ qh Δ S

(State diagram: q0 --Δ/Δ,R--> q1; q1 --b/b,R--> q1; q1 --a/b,L--> q2; q1 --Δ/Δ,L--> q2; q2 --b/b,L--> qhalt; q2 --Δ/Δ,S--> qhalt)

Change leftmost a to b

We use 11 to separate the rules from each other.

So this TM can be represented by:

• q0 Δ q1 Δ R = 0010100010100011

• q1 b q1 b R = 000100010001000100011

• q1 a q2 b L = 00010010000100010011

• q1 Δ q2 Δ L = 00010100001010011

• q2 b qh b L = 0000100010100010011

• q2 Δ qh Δ S = 00001010101011
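These encodings can be generated mechanically (the function name and table are mine): join the five unary codes with single 1's, then append the 11 rule separator.

```python
# Sketch: reproduce the rule encodings above.
CODE = {"Δ": "0", "a": "00", "b": "000",                    # characters
        "qh": "0", "q0": "00", "q1": "000", "q2": "0000",   # states
        "S": "0", "L": "00", "R": "000"}                    # directions

def encode_rule(p, read, q, write, move):
    """Five components per rule: 1's inside a rule, 11 after each rule."""
    return "1".join(CODE[x] for x in (p, read, q, write, move)) + "11"

print(encode_rule("q0", "Δ", "q1", "Δ", "R"))  # 0010100010100011
print(encode_rule("q2", "Δ", "qh", "Δ", "S"))  # 00001010101011
```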

Change leftmost a to b

We can also encode the input string. The string baa would be encoded as:

11000100100

We use 11 to separate this string from the TM.

So, an encoding of the entire TM, plus the string that it is supposed to process, looks like this:

001010001010001100010001000100010001100010010000100010011000101000010100110000100010100010011000010101010111111000100100

How does the Tu work?

The universal Turing machine, Tu, will have 3 tapes.

The first tape will be the input/output tape, and initially it contains the entire string, representing both the specific TM we want to simulate, plus the string the TM is supposed to process.

The second tape is the work tape. We will move the encoded string to this tape.

How does the Tu work?

The third tape will be used to represent the state that the simulated TM is currently in. We start off by copying the initial state of the TM (q0, or 00, in this case) to tape 3.

How does the Tu work?

Tape 1: input/output tape

Tape 2: work tape; contains the encoded string

Tape 3: state the simulated TM is in

How does the Tu work?

You can see how the Tu is going to work:

The precondition of any transition rule is the current state the TM is in (available on tape 3), and the character on the TM’s tape that we are currently reading (available on tape 2).

We then look on tape 1 to find the rule whose precondition matches this one.

How does the Tu work?

Finally, we execute the postcondition part, changing the TM’s state to the new state (replacing the old state on tape 3), writing a character onto the TM’s tape (on Tu’s tape 2), and moving the tape head (on tape 2) left, right, or staying.
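The loop just described can be sketched in ordinary code (a flat interpreter rather than three tapes, and without the binary decoding; all names are mine): look up the rule matching the current state and symbol, apply its postcondition, and repeat until the halt state.

```python
# Sketch of Tu's main loop, run directly on a decoded rule table.
def run_tm(rules, tape, state="q0", blank="Δ", halt="qh", limit=1000):
    tape, head = dict(enumerate(tape)), 0
    for _ in range(limit):
        if state == halt:                       # Tu notices the halt state
            cells = [tape[i] for i in sorted(tape)]
            return "".join(cells).strip(blank)  # the output is the tape
        key = (state, tape.get(head, blank))
        if key not in rules:
            raise RuntimeError("crash: no matching rule")
        state, write, move = rules[key]
        tape[head] = write
        head += {"L": -1, "R": 1, "S": 0}[move]
    raise RuntimeError("step limit reached")

# The "change leftmost a to b" machine from earlier, as a rule table:
rules = {("q0", "Δ"): ("q1", "Δ", "R"), ("q1", "b"): ("q1", "b", "R"),
         ("q1", "a"): ("q2", "b", "L"), ("q1", "Δ"): ("q2", "Δ", "L"),
         ("q2", "b"): ("qh", "b", "L"), ("q2", "Δ"): ("qh", "Δ", "S")}
print(run_tm(rules, "Δbaa"))   # bba
```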

Does the Tu model the encoded TM?

Yes.

Why? Because it is deterministic and mirrors the encoded TM step for step, there are only three possible behaviors: it crashes when the encoded TM crashes, halts when the encoded TM halts, and runs forever when the encoded TM runs forever.

Does the Tu model the encoded TM?

Crash:
• If the encoded TM crashes, Tu will not find a matching transition and will crash.

Halt:
• If the encoded TM halts, Tu notices this when it tries to write a single 0 (the halt state) to tape 3 (which keeps track of the current state the TM is in). At this point it erases tape 1, copies tape 2 onto tape 1, and halts.

Conclusion:
• Anything that is effectively calculable can be executed on a TM.
• A universal TM can compute anything that any other Turing machine can compute.
• The universal TM is itself a standard TM.
• A CPU with RAM is a finite version of a TM; it has the power of a TM up to the point that it runs out of memory.
• Languages or hardware that provide comparisons, loops, and increments are termed Turing complete and can also compute anything that is effectively calculable.