Problem 1 (A&B 1.1):
===================

We get to specify a few things here that are left unstated to begin with. I assume that "numbers" refers to nonnegative integers. I assume that the input is guaranteed to contain the binary representations of two such integers, so the TM never has to worry about an empty string. I assume that the two integer representations provided are separated by exactly one blank space, with the bits laid out from MSB on the left to LSB on the right in each, and with the read head of the input tape starting out at the MSB end of the leftmost of the two inputs. For an alphabet, we can make do with just { s, 0, 1 }, in which, for typographic simplicity, s stands for the blank space.

ADDITION: The top-level view of addition is this: We will use the standard 3-tape machine. The machine will first copy the left addend to the scratch tape and backspace the scratch tape head by one, then space over the right addend and backspace the input tape head by one, so that the heads of both the input tape and the scratch tape end up at the LSB of one or the other of the addends. The machine will then perform bitwise addition with writes to the output tape, using the choice of internal state to reflect whether or not there is a carry bit. There will be some finagling that has to do with what happens when one input is shorter than the other.

Here is a description of the states and transition function. Only things that change during a transition are mentioned.

From the start state: Unconditionally enter state "copy left addend". Move no tape heads and write nothing.

From state "copy left addend":
- If the character under the input tape head is 1 (resp. 0), write 1 (resp. 0) to the scratch tape, and move both the input tape head and the scratch tape head one to the right. Remain in state "copy left addend".
- If the character under the input tape head is s, move the scratch tape head to the left, move the input tape head to the right, and enter state "skip right addend".

From state "skip right addend":
- If the character under the input tape head is 1 (resp. 0), move the input tape head one to the right. Remain in state "skip right addend".
- If the character under the input tape head is s, move the input tape head to the left and enter state "add with no carry".

From state "add with no carry":
- If the character under the input tape head is s and the character under the scratch tape head is s, move the output tape head to the right and enter state "halt". (We are done.)
- If the character under the input tape head is s and the character under the scratch tape head is 1 (resp. 0), write 1 (resp. 0) to the output tape, and move both the scratch tape head and the output tape head to the left. Remain in state "add with no carry". (The left input, now copied to the scratch tape, is longer than the right input.)
- If the character under the scratch tape head is s and the character under the input tape head is 1 (resp. 0), write 1 (resp. 0) to the output tape, and move both the input tape head and the output tape head to the left. Remain in state "add with no carry". (The left input, now copied to the scratch tape, is shorter than the right input.)
- If the character under the input tape head is 0 and the character under the scratch tape head is 0, write 0 to the output tape, and move all three tape heads to the left. Remain in state "add with no carry".
- If the character under the input tape head is 1 and the character under the scratch tape head is 0, write 1 to the output tape, and move all three tape heads to the left. Remain in state "add with no carry".
- If the character under the input tape head is 0 and the character under the scratch tape head is 1, write 1 to the output tape, and move all three tape heads to the left. Remain in state "add with no carry".
- If the character under the input tape head is 1 and the character under the scratch tape head is 1, write 0 to the output tape, and move all three tape heads to the left. Enter state "add with carry".

From state "add with carry":
- If the character under the input tape head is s and the character under the scratch tape head is s, write 1 to the output tape and enter state "halt". (We are done.)
- If the character under the input tape head is s and the character under the scratch tape head is 1, write 0 to the output tape, and move both the scratch tape head and the output tape head to the left. Remain in state "add with carry". (The left input, now copied to the scratch tape, is longer than the right input.)
- If the character under the input tape head is s and the character under the scratch tape head is 0, write 1 to the output tape, and move both the scratch tape head and the output tape head to the left. Enter state "add with no carry". (The left input, now copied to the scratch tape, is longer than the right input.)
- If the character under the scratch tape head is s and the character under the input tape head is 1, write 0 to the output tape, and move both the input tape head and the output tape head to the left. Remain in state "add with carry". (The left input, now copied to the scratch tape, is shorter than the right input.)
- If the character under the scratch tape head is s and the character under the input tape head is 0, write 1 to the output tape, and move both the input tape head and the output tape head to the left. Enter state "add with no carry". (The left input, now copied to the scratch tape, is shorter than the right input.)
- If the character under the input tape head is 0 and the character under the scratch tape head is 0, write 1 to the output tape, and move all three tape heads to the left. Enter state "add with no carry".
- If the character under the input tape head is 1 and the character under the scratch tape head is 0, write 0 to the output tape, and move all three tape heads to the left. Remain in state "add with carry".
- If the character under the input tape head is 0 and the character under the scratch tape head is 1, write 0 to the output tape, and move all three tape heads to the left. Remain in state "add with carry".
- If the character under the input tape head is 1 and the character under the scratch tape head is 1, write 1 to the output tape, and move all three tape heads to the left. Remain in state "add with carry".

From state "halt": Do nothing; stop; we are done!
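
To make the case analysis above easier to check, here is a minimal Python sketch of the two addition states, with the three tapes collapsed into strings and the carry held as a flag standing in for the choice between "add with no carry" and "add with carry". The function name and data layout are my own; this is an informal check of the logic, not the formal machine.

    # Informal sketch of the addition phase: 's' is the blank, the left addend
    # is assumed already copied to the scratch tape, and both heads start at
    # the LSBs, exactly as the setup states above arrange.
    def tm_add(left_addend: str, right_addend: str) -> str:
        scratch = left_addend            # contents of the scratch tape
        tape_in = right_addend           # the right addend, still on the input tape
        i = len(tape_in) - 1             # input-tape head, at the LSB
        j = len(scratch) - 1             # scratch-tape head, at the LSB
        out = []                         # bits written to the output tape (LSB first)
        carry = 0                        # 0 = "add with no carry", 1 = "add with carry"
        while True:
            a = tape_in[i] if i >= 0 else 's'    # blank once an addend is exhausted
            b = scratch[j] if j >= 0 else 's'
            if a == 's' and b == 's':            # both addends exhausted: halt
                if carry:
                    out.append('1')
                break
            total = (0 if a == 's' else int(a)) + (0 if b == 's' else int(b)) + carry
            out.append(str(total % 2))           # bit written to the output tape
            carry = total // 2                   # next internal state
            if a != 's':
                i -= 1                           # move the input tape head left
            if b != 's':
                j -= 1                           # move the scratch tape head left
        return ''.join(reversed(out))            # MSB..LSB, as left on the output tape

    assert tm_add('1011', '110') == bin(0b1011 + 0b110)[2:]   # 11 + 6 = 17 -> '10001'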

MULTIPLICATION: The top-level view of multiplication is this: We will use a four-tape machine, with two scratch tapes, namely scratch0 and scratch1. We start by copying the left input to scratch0 and skipping over the right input, just as for addition, and we also write a zero to scratch1. We will do shift-adds of the left input to a running total which we maintain in scratch1, temporarily storing the result of each shift-add in the output tape as we construct it, but then copying it back to scratch1 for the next shift-add. We will of course only do adds when the relevant bit of the right input is 1. When we are finally done, we will simply leave the result of the last shift-add in the output tape.

Each shift and possible add is done as a loop, which is entered with the following conditions:
- The current running total (and a potential result of the entire multiply) is in the output tape, with the output tape head positioned over its MSB.
- The possibly shifted left input is in scratch0, and the scratch0 tape head is positioned over its LSB. Any extra zero required for the forthcoming shift has already been added to the right end of the left input (no zero is added before the first iteration of the loop).
- The tape head of scratch1 is positioned over the MSB of the quantity in scratch1.
- The input tape head is positioned over the bit used to determine whether an add is about to be performed; a 1 means do the add.

In the interest of brevity, I will not repeat setup steps that are the same as for addition, and I will invoke a portion of the addition routine developed above as a subroutine or procedure call.

From the start state: Write a zero to scratch1 without moving the scratch1 tape head, write a zero to the output tape without moving the output tape head, and unconditionally enter state "copy left addend".

States "copy left addend" and "skip right addend" work as for addition, with the left addend being copied to scratch0. On final exit from state "skip right addend", enter state "check bit for add".

From state "check bit for add": (start of the shift-and-possible-add loop)
- If the character under the input tape head is 1, move the input tape head to the left and enter state "set up add".
- If the character under the input tape head is 0, move the input tape head to the left and enter state "do shift".
- If the character under the input tape head is s, enter state "halt"; we are done.

From state "do shift": (do the shift in preparation for the next loop)
- If the character under the scratch0 tape head is 1 (resp. 0), move the scratch0 tape head to the right, and remain in state "do shift". (This should happen only once per loop of the multiply routine; the loop is entered with the scratch0 tape head over the LSB of the shifted left addend.)
- If the character under the scratch0 tape head is s, write a 0, and enter state "check bit for add".

From state "set up add": (do an add and then the next shift) (What we are about to do is copy back the result of the last shift-add from the output tape to scratch1; note that the content of the output tape is guaranteed to be at least as long as the content of scratch1.)
- If the character under the output tape head is a 1 (resp. 0), write a 1 (resp. 0) to scratch1, write an s to the output tape, move both the output tape head and the scratch1 tape head to the right, and remain in state "set up add".
- If the character under the output tape head is s, write an s to scratch1, move the scratch1 tape head to the left, and enter state "do add".

From state "do add": Perform addition of the quantities in scratch0 and scratch1, placing the result in the output tape, by using the code for addition (starting with entry to "add with no carry") with sources and destinations appropriately modified. On completion of the add, enter state "do shift".
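
Continuing the informal sketch, the whole multiply routine is just the shift-and-add loop above, driven by the bits of the right input from LSB upward; tm_add is the sketch given after the addition states, and again the names are my own shorthand rather than part of the formal machine.

    # Informal sketch of the shift-add loop: the right input is scanned from
    # its LSB ("check bit for add"), scratch0 gains a trailing 0 every pass
    # ("do shift"), and an add is performed only when the bit is 1.
    def tm_multiply(left_input: str, right_input: str) -> str:
        scratch0 = left_input        # possibly shifted copy of the left input
        output = '0'                 # running total, as initialized on the output tape
        for bit in reversed(right_input):           # "check bit for add", LSB first
            if bit == '1':
                output = tm_add(scratch0, output)   # "set up add" / "do add"
            scratch0 = scratch0 + '0'               # "do shift": append a 0 on the right
        return output                # last shift-add result is left on the output tape

    assert int(tm_multiply('101', '110'), 2) == 0b101 * 0b110   # 5 * 6 = 30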

Problem 2 (A&B 1.5):
===================

The construction used in the proof of Claim 1.6 in the text provides most of what we need; it already gives an O( (T(n))**2 ) procedure for doing very nearly what we want, in a one-tape machine at that. To get from what we have to what we want, we need to fine-tune the proof in three ways.

First, we split the input tape off from the single tape of the Claim 1.6 construction. The issue here is that, depending on the actual input, the original TM may undertake to read different portions of its input at different times: for example, with input string s, the original TM might end up reading the third character of s at instruction #102, whereas with another input string, s', it might be reading a different character at that time, or otherwise have the input tape head left at a different position, thus violating obliviousness. The fix is to construct a modified TM that scans the entire input tape every time it simulates any instruction of the original TM, and records whatever symbol it may actually need to read at that instruction, if any. The time required for this additional operation is clearly n (the length of the input string), or perhaps 2n if we require that the input tape head start and finish at the same point of the input tape; and since T is time-constructible, it follows that this time is O( T(n) ) per instruction. Furthermore, of course, the very first thing the modified Turing machine must do is read the entire input tape in order to find out the length of the input; it will use that information in deciding how to use its scratch/output tape, as we shall shortly see.

Second, we must make sure that the work tape is large enough to hold any possible scratch work required by any possible input string of length n. Since the original TM runs in time T(n), no single workspace head can possibly get farther than T(n) squares from its original location during the run of the program, so that after the folding indicated in Claim 1.6 to allow for a k-tape machine, the required tape length will be k * T(n).

Third, in the actual scanning of the workspace tape, we must arrange always to scan the entire length of the tape -- full end-to-end, with no stopping and turning around when we find the particular location we need; we have to look over the entire tape to preserve obliviousness, and only make modifications to the particular tape squares required for the particular TM input.

The second and third modifications, just discussed, allow the operation of the scratch tape of the modified TM to be oblivious. Once the calculation has finished, the machine can easily move to some specific predetermined location of the scratch tape and there write the 0 or 1 indicating the result; it can also write blanks to the rest of the used portion of the scratch tape, if we so wish. The calculation of the time required to operate the scratch tape proceeds as in Claim 1.6; it takes at most 5k * T(n) operations per instruction emulated. Adding in 2 * n operations for the scanning of the input tape, and using the inequality n <= T(n), we have at most (5k + 2) * T(n) operations per instruction emulated, or at most (5k + 2) * (T(n))**2 operations in all, so the decision is indeed made in time O( (T(n))**2 ).

We conclude by remarking that the essence of this proof is to make the modified Turing machine very dumb: every time it emulates an instruction of the original TM, it scans everything it could possibly want to know about the entire system, and thus obtains obliviousness by overkill, so to speak.
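
As a toy illustration (my own framing, not the book's construction) of the "scan everything, every step" discipline, the following Python sketch sweeps a fixed-size folded work region end to end on every simulated step, so the squares visited depend only on the region length and never on the tape contents.

    # Hypothetical helper: one oblivious simulated step over the folded work
    # region of length k*T_n.  The sweep pattern is the same on every step;
    # only the squares the simulated machine actually needs are changed.
    def oblivious_step(work, head_positions, writes):
        under_heads = {}
        for pos in range(len(work)):             # forward sweep: read every square
            if pos in head_positions:
                under_heads[pos] = work[pos]     # symbols the control unit needs
        for pos in reversed(range(len(work))):   # backward sweep, same trajectory
            if pos in writes:
                work[pos] = writes[pos]          # modify only the required squares
        return under_heads

Each call costs two full passes, i.e. O( k * T(n) ) work per simulated instruction, which is where the O( (T(n))**2 ) total comes from.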

Problem 3 (A&B 1.10):
====================

Note to start with that the specification of the problem seems to be incomplete: although there are instructions to increment and decrement i, there is no instruction to set it. I am going to assume that i starts out initialized to zero, and take that to be a formal requirement of every program written in the little programming language that is described.

The problem is also perhaps a little vague in stating what is meant by "f is computable [in time so-and-so]", when, as is clear from the problem, we are talking about a computation of f that does not involve a Turing machine. I take the interpretation that we simply count the number of machine instructions needed to execute the program, and for the sake of definiteness I will assume that it takes one instruction to read the if statement that is part of each line of the program, one more to compare A[i] to sigma and branch if necessary, and then one instruction for each of the different commands (1) through (5) which we are told comprise the cmds part of the source line. I am also assuming implicitly that the list of cmds, for a given source line, may contain at most one of each of the five command types. (That last admits the possibility that one set of cmds might contain both an instruction to increment i and an instruction to decrement i, which would be a little odd, but the safest thing is to allow for it, if only on the grounds that programmers are themselves often a little odd.) On that basis, a single source line of the programming language might take between two and seven instructions to execute, as the number of cmds varies between zero and five. Other interpretations of what a machine instruction is in this small programming language are possible, but they give results similar to what follows.

On that basis, if we know that f is computable in time T(n) by a program in the language, we conclude that the number of source lines of the program that are actually executed is at most ceiling( T(n)/2 ).

The infinite array A sounds scary, but the important thing to realize is that the variable used to index it can change by at most one during each source line. Thus the actual range of the array that can be used during any one run of the program is limited to the indices [ -T(n)/2 ... T(n)/2 ]. What is more, if we start by loading the input data onto a scratch tape, and then set up the head of the scratch tape over A[0] to begin with, all we need to do to make the slot of the array that we need readily available is to move that scratch tape head one to the right whenever i is incremented, and one to the left whenever it is decremented. Strictly, we don't even need to keep track of what value i actually has: in programming terms, it is only used for addressing, and the address arithmetic performed is simple enough that we can always maintain a pointer to the correct address, in the form of a correctly positioned scratch tape head.

The source lines of any particular program in our programming language map to the states and transition function of a TM in a rather natural way. To each label (say, label #42) of the program there corresponds a state q-label (that would be q-label-42), whose transitions depend solely on the value under the scratch tape head: either to the next q-label (q-label-43) if that value is not the hard-coded sigma, or otherwise to a hard-coded run of states that execute the commands given in that source line. In that hard-coded run of states, we use four TM operations, one for each of instruction types (1) through (4), falling through between them, with each of (1) through (4) taking one TM operation: writing A[i] is just writing what is under the scratch tape head, incrementing or decrementing i is moving the head, and a goto statement is just an unconditional transition to q-label-whatever. "Output b and halt" is also just one operation: I have implicitly assumed a standard 3-tape TM, with a separate output tape. We could in principle do a little optimization, possibly combining several of these operations into just one, but for purposes of the problem we do not need to, and careful optimization would require considering the order of the cmds, as in pre-incrementing i versus post-incrementing it in a set of cmds that contained a write to A[i].

So let's get an upper bound for the number of TM operations required:
- To copy the input to the scratch tape, then back up the scratch tape head to the start of the input: 2n operations. Since T is time-constructible, an upper bound for this time is certainly 2 * T(n).
- To perform the if statement and run of commands for each source line actually executed: at most 6 operations. (Doing the if takes only one TM operation; it is a little faster than the simple machine language I assumed for computing f without a TM.)

And since, as we have seen, the number of source lines actually executed is at most T(n)/2, it follows that the total run time of the TM does not exceed 2 * T(n) + 6 * ( T(n)/2 ), or 5 * T(n), so that f is indeed in DTIME( T(n) ). The constant multiplying T(n) would be different depending on precisely how many machine instructions we assumed were required for the various operations used in calculating f without a TM.
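
To make the pointer idea concrete, here is a small Python sketch of an interpreter for the little language in which the index i is never stored at all; the head position over the scratch tape plays its role. The tuple encoding of source lines used here (sigma, commands, goto target, output) is my own hypothetical layout, not anything given in the problem.

    # Hypothetical interpreter sketch: each source line is
    # (sigma, cmds, goto, output), meaning "if A[i] == sigma then cmds",
    # with cmds drawn from 'write0', 'write1', 'inc', 'dec'.
    def run_program(lines, input_bits, max_steps=10_000):
        tape = {k: b for k, b in enumerate(input_bits)}  # the array A, input at A[0..n-1]
        head = 0      # the scratch-tape head *is* the index i; i itself is never stored
        pc = 0
        for _ in range(max_steps):
            sigma, cmds, goto, output = lines[pc]
            if tape.get(head, ' ') == sigma:         # the per-line "if A[i] == sigma" test
                for c in cmds:
                    if c == 'write0':   tape[head] = '0'
                    elif c == 'write1': tape[head] = '1'
                    elif c == 'inc':    head += 1    # i := i + 1  ==  move head one right
                    elif c == 'dec':    head -= 1    # i := i - 1  ==  move head one left
                if output is not None:               # "output b and halt"
                    return output
                if goto is not None:                 # transition to q-label-goto
                    pc = goto
                    continue
            pc += 1                                  # fall through to the next source line
        raise RuntimeError('step budget exceeded')

    # Two-line program: output '1' if A[0] is 1, else output '0'.
    is_one = [('1', [], None, '1'), ('0', [], None, '0')]
    assert run_program(is_one, '1') == '1' and run_program(is_one, '0') == '0'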

Problem 4 (A&B 1.11):
====================

Take the Unicode representing the text description of a Turing machine, load it into memory in a computer architecture of your choice (with sufficient address space), and read out the bits in any specific well-defined order (say, LSB through MSB of each word in turn, starting from the low address and increasing). Since Unicode-to-text is in fact the inverse of text-to-Unicode, the recovered TM will be as precise and as well-defined as was the original description.

That answer begs the question, I think; I suspect the idea was to show how to turn a text description of any specific Turing machine -- call it M -- into a string that a general-purpose Turing-machine-string interpreter Turing machine (what the text calls U) can read and execute. Yet I think this problem has more to do with proper and systematic formatting of the original text description of M than with a particular mechanism of going from text to binary. The transition function is clearly the heart of a TM, and it is easy to see that a text description of M in which each specific transition is a tab-delimited line, with the input state, character(s) seen, and output actions all given in a completely specified order, and with the separate transitions in turn delimited by a different character (e.g., newline), is a data structure which a simple Turing machine interpreter U could easily scan and use to execute whatever algorithm M was set up to do. As indicated in the text, U would use one or more tapes to model the scratch tapes of M, and would use an additional tape to record the state of M. I have glossed over how the representation of M contains the states and alphabet used; they could easily be scanned in from the separate text representations in the formal TM model, or could be derived by reading the appropriate columns of the transition-function table. Since the likely text-to-binary transformations used in constructing the string just described are all reversible, the string could trivially be used to reconstruct the original text description of M.
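
As a small illustration of how such a tab/newline format round-trips, here is a Python sketch; the field order and the text-to-bits step are my own illustrative choices, not a fixed standard.

    # Hypothetical encoding: one transition per line, fields tab-delimited in a
    # fixed order (state, symbols read, new state, symbols written, head moves).
    def encode_tm(transitions):
        return '\n'.join('\t'.join(row) for row in transitions)

    def decode_tm(text):
        return [tuple(line.split('\t')) for line in text.splitlines()]

    rows = [('q0', '1', 'q0', '1', 'R'), ('q0', 's', 'halt', 's', 'S')]
    blob = encode_tm(rows)
    assert decode_tm(blob) == rows                  # the text form round-trips

    # Text-to-binary is likewise reversible (here via UTF-8 bytes, MSB first).
    bits = ''.join(format(byte, '08b') for byte in blob.encode('utf-8'))
    recovered = bytes(int(bits[k:k+8], 2) for k in range(0, len(bits), 8)).decode('utf-8')
    assert recovered == blob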

Problem 5 (As given in problem set handout):
===========================================

The assignment is to show that the problem of determining whether the second work tape of a TM ever has the string 0110 on it during the course of its computation is undecidable. The word "second" in the sentence just given is a little vague, but I will take it in a conservative sense and assume that we are dealing with a three-tape TM in which the first tape is for the input, the second is for scratch work, and the third is for the output; I take the intent of the problem to be that the scratch tape is the "second work tape" in question.

With that said, I think it is clear that for any TM M with such tapes, we can straightforwardly modify its transition function to create a new, equivalent TM, M', that writes binary data on its work tape in pairs of bits, the first of which is the bit originally used and the second of which is always a zero, and that erases and overwrites such pairs integrally, the two bits of a pair one after another. Thus if at some point the work tape of the original machine M contained 01100101, then at the corresponding point in the algorithm for M', the work tape would contain 0010100000100010 (pairs: 00 10 10 00 00 10 00 10, the first bit of each pair being an original bit and the second the extra zero). With this construction the work tape clearly can never contain the sequence 0110, but we are not done.

Let us further modify M' into M'', which does exactly what M' did except that it writes 0110 on its work tape just before it halts; that is, we replace the halt state of M' with a series of states that unconditionally write 0110 to the work tape and then halt. Now M'' halts if and only if M does, and the string 0110 appears on the work tape of M'' at some point if and only if M'' halts. A procedure that could decide whether the second work tape of a TM ever has the string 0110 on it during the course of its computation would therefore decide the halting problem, which is known to be undecidable. QED.
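
A quick sanity check of the pair encoding (mine, not part of the proof): 0 becomes 00 and 1 becomes 10, and no concatenation of such pairs can contain 0110, while appending 0110 at the halt step obviously does.

    from itertools import product

    def encode_pairs(bits: str) -> str:
        return ''.join(b + '0' for b in bits)    # original bit first, padding zero second

    assert encode_pairs('01100101') == '0010100000100010'   # the example above
    # Exhaustively: no encoded 12-bit string ever contains 0110.
    assert all('0110' not in encode_pairs(''.join(w)) for w in product('01', repeat=12))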