Recent Advances in Algorithmic Learning Theory of the Kanban Cell Neuron Network


Proceedings of International Joint Conference on Neural Networks, Dallas, Texas, USA, August 4-9, 2013

Recent Advances in Algorithmic Learning Theory of the Kanban Cell Neuron Network

Colin James III

Abstract

A novel algorithm of learning is defined as the Kanban cell neuron model (KCNM). The analysis captures the salient properties of the concrete application of the associated look up table (LUT). The purpose of this system is to foster a structure for machine cognition. The Kanban cell (KC) forms the basis of mapping human neurons into logical networks. The Kanban cell neuron (KCN) maps nine input signals of the dendrites into one output signal of the axon. The logical mechanism is the multivalued logic of four-valued bit code as a 2-tuple of the set {00, 01, 10, 11}. The LUT is indexed by 18-bits as input for output of 2-bits. In a preferred hardware implementation, the rate of processing is 1.8 BB KCNs per second on a $29 device.

I. INTRODUCTION

The KCN maps the biological mechanism of the human neuron as a series of 9-dendrites that process input signals concurrently into 1-axon, the path of the output signal. The synapse and its neural-transmitter fluid allow input signals to be made available to the dendrites for submission to the receptors. The input signals are the results of ion transfers involving the chemical elements Ca, K, and Na. The input signals are equivalent to the logical values of contradiction, true, false, and tautology. These are respectively assigned as the set {0, 1, 2, 3} or as the set of 2-tuples {"00", "01", "10", "11"} [3], depending on which set of valid results is required.

To implement a structure for machine cognition, the system described below is a collection of trees with branches of nodes. A node is the Kanban cell (KC) from [1], [2]. A branch is a Kanban cell neuron (KCN). A tree is cascaded KCNs, each with a Kanban cell level (KCL).
The collection of trees is a Kanban cell neuron network (KCNN, KCN2, or KCN^2, pronounced KCN-Two or KCN-Squared). The KC is effectively a sieve that processes signals in a highly selective way. The KC is the atomic and logical mechanism of the system as a ternary, 3-ary, or radix-3 tree. Three KCs make up a KCN branch. Multiple KCs as nodes at the same level form a KCL, to better describe the KCN to which the KCs belong. The KCNN contains nine KCNs to produce one KCN result. Hence the KCNN is also a 9-ary or radix-9 tree.

As a sieve the KC filters three input signals into one output signal. The number of all input signals needed to produce one valid output signal as a result depends upon the logic used. The computational or machine mechanism to accomplish this is basically the same for software or hardware implementation. Look up tables (LUTs) contain the logic of the KCNN system. LUTs produce output results at a much faster rate than the brute force of logical arithmetic.

This paper is set out in nine subsequent sections: theory of input signals; software implementation; hardware production; excluded art; application of the KCNN to predicate logic; application of the KCNN to a financial series; statistical analysis of the KCNN; the Kanban cell neuron model for learning; and conclusion.

II. THEORY OF INPUT SIGNALS

The KC is defined by the logic contained in the LUTs. The KCN is four of these connected KCs. The utility of the KCN is that it matches the human neuron with 9-inputs as dendrites and 1-output as axon. The input to the KC is in the form of a 2-tuple x 3 for three dibit values, effectively a 3-tuple set of {ii, pp, qq}. To produce a single dibit value as the kk output, three input values are required. Hence the expression 3-inputs to 1-output accurately describes the KC. When KCs are chained, three inputs are required to produce each of the three outputs accepted in the next level of KCs, for a total of 9-input signals. This defines the KCN as the expression of 9-inputs to 1-output.
The number of inputs required in KCL-1 to produce KCL-n is given by the last formula in Table I. It follows that KCs in parallel and chained in succession represent a permutation of exponential complexity as KC^n = 3^n, where n > 0. Each successively complex level of KCs has its number of KCs as a power of 9 (= 3^2), such as (3^2)^0 = 1, (3^2)^1 = 9, (3^2)^2 = 81, ..., (3^2)^n.

Table I lists the number of groups of signals of {ii, pp, qq} required for levels of KCNs named as KCLs. The number of separate signals to produce levels of KCLs is listed. The cumulative number of groups of signals to produce one output signal result for each KCL is listed. The number of groups where three groups are processed concurrently for a KCL is the result of reducing the cumulative number of groups of signals by a factor of three. The column for KCNs is described farther below.

Manuscript received April 15, 2013. Colin James III is Director of Ersatz Systems Machine Cognition, LLC, Colorado Springs, CO 80904 USA (phone: 719-210-9534; e-mail: info@ersatz-systems.com).

978-1-4673-6129-3/13/$31.00 2013 IEEE

TABLE I
NUMBER OF GROUPED AND DISCRETE INPUT SIGNALS FOR KCLS

KCL  Discrete signals  Groups {ii,pp,qq}  Cumulative to one  Reduced by 1/3     KCNs Cumulative/3
     of KCs = 3^n,     of KCLs            signal = 3^n / 2   = CEIL(3^(n-1)/3)  = INT((3^n)/2/3)
     n > 0             = 3^(n-1), n > 0
 1                 3                 1                 1                 1                 0
 2                 9                 3                 4                 1                 1
 3                27                 9                13                 3                 4
 4                81                27                40                 9                13
 5               243                81               121                27                40
 6               729               243               364                81               121
 7             2 187               729             1 093               243               364
 8             6 561             2 187             3 280               729             1 093
 9            19 683             6 561             9 841             2 187             3 280
10            59 049            19 683            29 524             6 561             9 841
11           177 147            59 049            88 573            19 683            29 524
12           531 441           177 147           265 720            59 049            88 573
13         1 594 323           531 441           797 161           177 147           265 720
14         4 782 969         1 594 323         2 391 484           531 441           797 161
15        14 348 907         4 782 969         7 174 453         1 594 323         2 391 484
16        43 046 721        14 348 907        21 523 360         4 782 969         7 174 454
17       129 140 163        43 046 721        64 570 081        14 348 907        21 523 361
18       387 420 489       129 140 163       193 710 244        43 046 721        64 570 082
19     1 162 261 467       387 420 489       581 130 733       129 140 163       193 710 244
20     3 486 784 401     1 162 261 467     1 743 392 200       387 420 489       581 130 734
21    10 460 353 203     3 486 784 401     5 230 176 601     1 162 261 467     1 743 392 201
22    31 381 059 609    10 460 353 203    15 690 529 804     3 486 784 401     5 230 176 602
23    94 143 178 827    31 381 059 609    47 071 589 413    10 460 353 203    15 690 529 805
24   282 429 536 481    94 143 178 827   141 214 768 240    31 381 059 609    47 071 589 413
25   847 288 609 443   282 429 536 481   423 644 304 721    94 143 178 827   141 214 768 241

In Table I, KCL-1 requires one KC. KCL-2 requires three KCs because each supplies one signal to the consecutive KCL-1. KCL-18 requires 129 MM groups, each with three signals of input, or 387 MM total input signals. KCL-24 contains 94 BB KC groups of three input signals or 282 BB total input signals. The human brain has about 120 BB neurons [4]. A commonly published statistic is that on average there are seven to nine dendrites per neuron [6].
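The column formulas in the Table I headers can be regenerated directly from n. A sketch in Python (not the paper's True BASIC):

```python
def kcl_row(n):
    """One Table I row for Kanban cell level n (n > 0)."""
    discrete = 3 ** n                    # discrete signals of KCs = 3^n
    groups = 3 ** (n - 1)                # groups {ii, pp, qq} = 3^(n-1)
    cumulative = (3 ** n) // 2           # cumulative to one signal = 3^n / 2
    reduced = -(-(3 ** (n - 1)) // 3)    # reduced by 1/3 = CEIL(3^(n-1) / 3)
    kcns = (3 ** n) // 2 // 3            # KCNs = INT((3^n) / 2 / 3)
    return (n, discrete, groups, cumulative, reduced, kcns)

for n in range(1, 26):
    print("{:2d}  {:>15,}  {:>15,}  {:>15,}  {:>15,}  {:>15,}".format(*kcl_row(n)))
```

A few of the later printed rows differ from these formulas by one, which reads as a rounding choice in the original.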
This means three complete groups of signals (six to nine discrete signals) could be processed concurrently at receptor points for dendrites along a neuron. Therefore the number of groups of signals in Table I could be reduced by a factor of three, as in the Table I column heading Reduced by 1/3. In other words, each KCL now processes three concurrent input signals in groups of three, or nine concurrent input signals in total for each one output signal. The KCN column heading designates the cumulative number of KCNs as a running sum. Hence to map 120 BB neurons requires KCL-24 with its 141 BB cumulative KCNs to produce one result, or 47 BB cumulative KCNs processing nine signals per KCN.

III. SOFTWARE IMPLEMENTATION

For the formula of a KC, there are 64-combinations of the 2-tuple or dibits of the set {00, 01, 10, 11}.

A. Look Up Tables (LUTs)

The software implementation of the KCN in KCLs builds a LUT from 64-elements of data of results. The 64-elements are indexed in the interval range of [0, 63]. The values may represent the same things but in three different formats, such as a natural number, the character string of that natural number, or character digits representing exponential powers of arbitrary radix bases. As numeric symbols, the four valid results are in the set {0, 1, 2, 3}. As character string symbols, the four valid results are in the set {"0", "1", "2", "3"}. As character string exponents, the four valid results are in the set {"00", "01", "10", "11"}. The representation of the data elements within a LUT is important because the type of format affects the size of the LUT and the speed at which the data is manipulated. The size of the LUT is also smaller for a literal character string: 64-elements as natural numbers occupy 64 * 8 = 512-characters, whereas 64-elements as 8-bit characters occupy one-eighth as much.

B. Design of N-dimensional LUTs

In implementation practice, a combination of numerical and string arrays is desirable for speed and clarity of exposition. In this process, the counter for the hierarchy tree of KCLs is reset to the largest value of that tree level before being decremented. This method is known as PWR3 because three powers of radix-64 are manipulated. The purpose is not to overwrite the indexed value of any node in the tree at a particular level and not to mix indexes by level.

A LUT for these 9-inputs [LUT9] consists of a 2-tuple of 2-bits to make 9 * 2 = 18-bits. The binary number 2^18 is decimal 262,144, or those numbers in the range of the inclusive interval [0, 262143]. This means LUT9 is an array indexed from 0 to 262,143 that is populated with kk results as {"0", "1", "2", "3"} for the binary {"00", "01", "10", "11"}.

The design flow of the software implementation is in three parts: build the LUT (as above); populate the top-tier of the KCL with random input values for testing; and process the subsequent lower-tier KCLs. The random values are generated in the range interval [0, 262143], that is, at the rate of 9-input signals at once. These are used to populate the top-tier KCL. The size of the top-tier level is determined by the maximum memory available in the programming environment. In the case of True BASIC, the maximum array size is determined by the maximum natural number in the IEEE format, which is (2^32) - 1. The largest radix-3 number to fit within that maximum is (3^20) - 1. However the compiler system allows two exponents less at 3^18 (3^18.5, to be exact). Hence the top-tier KCL is set as KCL-18. Subsequent lower-tier KCLs are processed by string manipulation. Consecutive blocks of 9-signal inputs are evaluated for results. The results as single characters are multiplied by the respective exponent power of four and summed into an index value for the LUT. The result is stored at that point in the KCL tier. This is the KCN performance.

C. LUT 241K

Exactly how the LUT is built has instructional value. A single dimensioned, 64-element array is filled with single character literals. Again, the array is indexed sequentially in the interval range of [0, 63] by result. Three such arrays are combined into a three dimensional array of 64 * 64 * 64 = 262,144 elements. This serves to index the three inputs of {ii, pp, qq} into all possible combinations of results. To access the three dimensional array faster, it is rewritten as a one dimensional array. This is because, while the three indexes of ii, pp, and qq are conceptually easier to digest, a single index of 262,144 elements in the range interval [0, 262143] requires only one index value to be accessed. This is named LUT 241K, where each element stores a 2-tuple of ordinal characters. Implemented in hardware as a 2-tuple of 2-bits, the size of LUT 241K is 262,144 * 2-bits / 8-bits per byte for 64K-bytes.

D. Performance Results

Software simulation used pseudo code in the educator's language of True BASIC on a no-load, quad-core laptop with 8 GB RAM. The performance results of the LUT schema are presented in Table II below.

TABLE II
PERFORMANCE RESULTS OF LUTS

Venue     LUTs   Process   Time (secs)   KCNs / sec   Memory
Software  241K   Radix-4     19.890       1 082 119   241 KB + tree
Hardware  241K   Radix-4      0.032      31 637 176    64 KB + tree

The rate is based on processing 21,523,361 KCNs. This number of KCNs is derived from the Sigma permutation in (1) below for the number of KCNs processed.

Σ 3^(i-2), i = 1 to 17     (1)

The best performance is the 241K LUT at about 1.1 MM KCNs per second. Performance in hardware is 13.25 ns per access of the LUT, or 1,631 times faster than software at about 1.8 BB KCNs per second.

IV. HARDWARE PRODUCTION

The hardware description is written in the strongly typed programming language VHDL, which is based on Ada.
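The radix-4 index build-up described above, and its equivalence to flattening the three-dimensional 64 * 64 * 64 array, can be sketched as follows (Python here, not the paper's True BASIC; the function names are illustrative, not the author's):

```python
def lut9_index(signals):
    """Pack nine dibit values (each 0..3 for 00, 01, 10, 11)
    into the single 18-bit LUT9 index in [0, 262143]."""
    assert len(signals) == 9 and all(0 <= s <= 3 for s in signals)
    index = 0
    for s in signals:          # radix-4 positional summation
        index = index * 4 + s
    return index

def kc_index(ii, pp, qq):
    """One KC: three dibits -> one 6-bit index in [0, 63]."""
    return (ii * 4 + pp) * 4 + qq

def flat_index(a, b, c):
    """Three KC indices -> one flat index into the 262,144-element LUT 241K."""
    return (a * 64 + b) * 64 + c
```

For example, the nine dibits 10 00 00 00 00 00 01 10 11 pack to index 131 099, the same value reached by flattening the three 6-bit KC indices, since 4^9 = 64^3 = 262,144.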
The hardware implementation of the KCN in KCLs builds the same LUT as in the software implementation, but in a different representation. Instead of numbers in 8-characters or literal strings in 1-character, the format is a 2-tuple of two bits, such as the data type bit_vector. The four valid results are in the set {00, 01, 10, 11}. The advantage here is again in size. While a literal character occupies 1-byte or 8-bits, the 2-tuple occupies 1/4-byte or 2-bits. In programming hardware with VHDL it is further convenient that bit-manipulation is easy to implement at an abstract level for IN, OUT, and BUFFER signals. Implementation techniques for sequential concurrency [7] may be used for multiple instances of LUTs.

V. EXCLUDED ART

Further art is excluded here due to pending patent protection, for example: the contents of the LUTs whereby input values limit the number of signals processed to about 7%; the theory of how the KCNN commences and terminates based on certain input and output values; and model extension to further multivalued logics (MVLs).

VI. APPLICATION OF THE KCNN TO PREDICATE LOGIC

How the 18-bits of nine 2-tuples, in groups of three 2-tuples or otherwise, are assigned is determined by the user. For a minimal example, the statement "The Emperor of China is bald, and snow in Hong Kong is falling" may be assigned into two groups of three 2-tuples as 12-bits to yield an equivalent result as follows. The Emperor of China [ii1 = 11 tautology, because Chinese history has monarchs] is [pp1 = 00, because the head of state is now elected, rendering monarchy moot] bald [qq1 = 00, because a bald monarchy is similarly moot], with kk1 = 00 absurdum, AND snow [ii2 = 11] in Hong Kong [pp2 = 10 false, because snow is unknown there] is falling [qq2 = 00, because snow is not possible there], with kk2 = 10 false. These combined expressions as kk1 = 00 AND kk2 = 10 reduce to 00 absurdum.
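The reductions in this section behave like a bitwise AND over the dibit codes (00 AND 10 = 00; 01 AND 10 = 00). A minimal sketch under that assumption, in Python; the paper does not define the connective explicitly, so this is an illustration consistent with the worked examples, not the author's method:

```python
# Combine kk results as bitwise AND over the 2-bit codes (an assumption
# consistent with this section's examples; the full connective logic
# belongs to the excluded art).
NAMES = {0b00: "contradiction", 0b01: "true", 0b10: "false", 0b11: "tautology"}

def kk_and(a, b):
    """Reduce two kk dibit strings, e.g. '01' AND '10' -> '00'."""
    value = int(a, 2) & int(b, 2)
    return format(value, "02b")

print(kk_and("00", "10"), NAMES[int(kk_and("00", "10"), 2)])
print(kk_and("01", "10"), NAMES[int(kk_and("01", "10"), 2)])
```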
Note also that should the state of affairs be expressed as the Emperor of China [ii3 = 11] was [pp3 = 01 true] bald [qq3 = 01], with kk3 = 01 true, then the combined expressions as kk3 = 01 AND kk2 = 10 continue to reduce to 00 absurdum.

VII. APPLICATION OF THE KCNN TO FINANCIAL SERIES

The formula of the KCN is applied to the stress test yields of the 10-year T-Bond for selected variables from 2008.3 to 2013.2 over 20-quarters (Qs). This experiment classifies input ranges of variables into four logical groups. The KCN finds which groups are logically sound. The input data process test results to verify or falsify the KCN logical formula as a tool of prediction.

The sample size examined was the 50-Qs from 2001.1 to 2013.2. Of the stress test statistics [9], only the Supervisory Adverse Scenarios (%SAS) in the 20-Qs from 2008.3 to 2013.2 were found to be remarkable and hence tested. The variables are percent of yield, dollar amount of the gold price, and monthly duration of the instrument as %y, $g, and md. The logical circuit tested for input of the 2-tuple binary values {00, 01, 10, 11} results in kk. The arithmetic circuit tested for input of {%y, $g, md} results in k.

As a variable, the gold price is selected as a universal price statistic that withstands errors introduced or masked by models. The spreads of duration and yield are abandoned to test the simpler and more readily available md and %y for the monthly term of an instrument and its annual percent yield (APY). The results processed are in the logical form [kk] or in the arithmetic form [k]. The gold price is taken from monthly graphs [5] and arbitrarily on the first day of the quarter from 3Q 2008. The percent yield statistics are from public records [8]. The 10-year T-Bond duration is described herein as 120-months. The logical range is 0- to 360-months for a 30-year T-Bond. Only adverse scenarios were found to be remarkable, not the baseline or highly adverse. For yield rates, the logical range for 120-months is 0.000 to 5.499% to accommodate the margin of error and rounding. The 120-month instrument ranges are applied to yield percent and gold price ranges of the period 2008.3 to 2013.2 for the logical circuit from the LUT of Table III.

TABLE III
LOGICAL CIRCUIT VARIABLE RANGES

Code   %y            $g          md
00     0.000-1.374      0- 449     1- 90
01     1.375-2.749    450- 899    91-180
10     2.750-4.124    900-1349   181-270
11     4.125-5.499   1350-1799   271-360

An instance for Supervisory Adverse Scenarios (SAS) is in Table IV.
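The Table III assignment is four equal-width buckets per variable, which can be sketched as a classifier (Python, with illustrative names; the paper's actual circuit is the LUT):

```python
def code_of(value, lo, width):
    """Bucket a value into one of four equal ranges -> '00', '01', '10', '11'."""
    bucket = int((value - lo) // width)
    assert 0 <= bucket <= 3, "value outside the logical range"
    return format(bucket, "02b")

def classify(y_pct, g_usd, md_months):
    """Map (%y, $g, md) to their Table III dibit codes."""
    return (code_of(y_pct, 0.0, 1.375),   # %y: 0.000-5.499 in widths of 1.375
            code_of(g_usd, 0, 450),       # $g: 0-1799 in widths of 450
            code_of(md_months, 1, 90))    # md: 1-360 in widths of 90
```

For the 2008.3 row of Table IV, classify(4.1, 940, 120) gives ('10', '10', '01'), matching the printed codes.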
TABLE IV
STATISTICS OF SUPERVISORY ADVERSE SCENARIOS

Year.Q    $g  (2t)   %SAS (2t)    md  (2t)   kk
2008.3   940   10     4.1   10   120   01    11
2008.4   880   10     3.7   10   120   01    11
2009.1   870   10     3.2   10   120   01    11
2009.2   925   10     3.7   10   120   01    11
2009.3   940   10     3.8   10   120   01    11
2009.4  1005   10     3.7   10   120   01    11
2010.1  1120   10     3.9   10   120   01    11
2010.2  1125   10     3.6   10   120   01    11
2010.3  1235   10     2.9   10   120   01    11
2010.4  1315   10     3.0   10   120   01    11
2011.1  1390   10     3.5   10   120   01    11
2011.2  1419   10     3.3   10   120   01    11
2011.3  1482   10     2.5   01   120   01    00
2011.4  1659   10     2.1   01   120   01    00
2012.1  1599   10     2.1   01   120   01    00
2012.2  1678   10     1.8   01   120   01    00
2012.3  1592   10     1.6   01   120   01    00
2012.4  1789   10     2.5   01   120   01    00
2013.1  1695   10     2.9   10   120   01    11
2013.2  1585   10     3.3   10   120   01    11

For the logical analysis, the kk results of 00 are invalid logical results. This means those 6-Qs are not considered to contain meaningful yield percents from SAS, and hence are ignored. The kk results of 11 are valid logical results. This means those 14-Qs are considered to contain meaningful %SAS and hence are suitable for financial manipulation. The %SAS is based on Black-Scholes algorithms that are vector spaces and hence probabilistic and not bivalent.

For the numerical analysis, the test method is an N-by-M contingency test (the χ2 chi-square test is a subset). The advantage is that the expected values are derived directly from the observed values. The 20-Qs are evaluated to k as result. Because the term md is a constant, it is ignored as a numerical value here. In Table V three tests are based on the logical kk result in 12-columns (N) and 2-rows (M). Degrees of freedom (df) are (N-1)*(M-1), and p-values are less than 0.001.

TABLE V
STATISTICAL ANALYSIS OF SUPERVISORY ADVERSE SCENARIOS

Test   6-Qs 00   14-Qs 11   χ2       df   P
1.     %y $g     %y $g       84.94    9   < 0.001
2.     $g        %y $g       79.86    9   < 0.001
3.     %y $g     $g         101.10    9   < 0.001

The three tests are significant. The least significant Test 2 here is taken to mean the least disparate series under test.
As expected, the invalid 00 results are most disparate and hence dilute the significance of the series. Therefore, excluding the invalid results gives a more realistic view of the valid 11 results in the series. In Test 2, the 6-Qs with invalid 00 results ignore those %SAS values, and the 14-Qs with valid 11 results are taken to have meaningful %SAS values.

This experiment turns on an assumption of accuracy. Supervisory series percent yields are assumed to be valid as APY. Because these admit a margin of error of ± 0.1%, the logical evaluation of a range includes a rounding capacity. For example, if the range is [0.0, 5.5] then the range is extended for rounding to [0.00, 5.49]. The maximum range value of 5.47 includes 5.47 ± 0.2%.

This experiment also turns on an assumption of meaningfulness. The calculation of the stress test statistics as described in Supervisory Scenarios [8] should survive academic scrutiny. However, these guidelines are apparently followed without formal oversight because no provision mandates the independent verification and validation of the outputs.

While the logical formula of the KCN is sensitive to the order in which inputs are processed, the arithmetic formula is not sensitive to the input order. This suggests that the logical formula is both a constrained case of, yet at the same time more abstract than, the arithmetic model. The extended mechanism by which the KCNN learns from this experiment is to archive the valid results of the series and contrast those to the Supervisory Baseline and Very Adverse Scenarios for the same period, or other periods, same yield percent ranges, and contingency test results.

VIII. STATISTICAL ANALYSIS OF THE KCNN

The distribution of output values in the LUT of 2^18 entries (262 144) is presented in Table VI. The values evaluated are in the set of {01, 10, 11}. The N-by-M contingency test is used where N = 4, M = 2, and the p-value is less than 0.001.
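The N-by-M contingency computation, with expected values derived from the observed row and column sums, can be sketched as follows (Python, illustrative; the counts are those reported for Table VI):

```python
def contingency(table):
    """N-by-M contingency test: returns (chi-square, degrees of freedom)."""
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    total = sum(row_sums)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_sums[i] * col_sums[j] / total  # from observed sums
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

# Table VI counts: acceptable vs unacceptable for outputs 01, 10, 11, other.
chi2, df = contingency([[7936, 0], [7936, 0], [34560, 0], [0, 6912]])
print(round(chi2), df)
```

This reproduces the χ2 = 57 344 and df = 3 reported with Table VI.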

TABLE VI
OUTPUT EVALUATION FOR THE LUT OF THE KCNN WITH P-VALUE

Value   Acceptable   Unacceptable
01           7 936              0
10           7 936              0
11          34 560              0
Other            0          6 912

df = 3   χ2 = 57 344   P < 0.001

The random input to the KCNN of 129 million signals (3^17) produces values for 43 million KCNs (3^16). A single subsequent result as produced at the next lower level of 3^15 KCNs is infrequent, at the rate of 119/10,194 or about 1.2% for 10,000 random tests. This means the KCNN performs effectively as a two-level network, or a tree of depth-2, after the input is randomly submitted in test. Table VII presents examples of the four possible result values, in 2-tuple and ordinal formats.

TABLE VII
EXAMPLES OF FOUR RESULTS IN 2-TUPLE AND ORDINAL FORMATS

2-Tuple:             Ordinal:
ii pp qq kk          ii pp qq kk
11 11 10 11           3  3  2  3
11 11 11 01           3  3  3  1
01 01 11 11           1  1  3  3
11 11 11 11           3  3  3  3
11 10 10 10           3  2  2  2
11 10 01 11           3  2  1  3
01 11 01 11           1  3  1  3
11 11 11 11           3  3  3  3
11 11 11 11           3  3  3  3
11 11 11 00           3  3  3  0

Table VII shows the feature that KCNNs favor the collective processing of tautologies, because out of 40 2-tuple values, 28 are tautologies (70%).

IX. THE KANBAN CELL NEURON MODEL FOR LEARNING

From the LUT, the 18-bit input and 2-bit output are by definition in a binary alphabet and thus of a minimum description length (MDL). Hence the LUT satisfies Occam's Razor. A sample from the LUT is in Table VIII below.

TABLE VIII
SAMPLE OF MINIMUM DESCRIPTION LENGTH (MDL) CODE FROM LUT

Input radix-10   Input radix-2           Output radix-2
131 097          100000 000000 011001    00
131 098          100000 000000 011010    10
131 099          100000 000000 011011    11
131 100          100000 000000 011100    00
131 101          100000 000000 011101    00
131 102          100000 000000 011110    11
131 103          100000 000000 011111    11
131 104          100000 000000 100000    00
131 105          100000 000000 100001    00
131 106          100000 000000 100010    00
131 107          100000 000000 100011    00
131 108          100000 000000 100100    00
131 109          100000 000000 100101    01
131 110          100000 000000 100110    00
131 111          100000 000000 100111    11
131 112          100000 000000 101000    00

Table VIII shows the Kanban cell neuron model (KCNM) as being only that of the contents of its LUT. An algorithm of learning theory can be this minimalistic with a MVL such as 4vbc. Here the input and output values are contained in the set {00, 01, 10, 11} and correspond respectively to {contradiction, true, false, tautology}. Two example cases follow. At index entry 131 099 the equivalent input binary value is 100000 000000 011011 with an output binary result of 11. Similarly, at index entry 131 102 the equivalent input binary value is 100000 000000 011110 with an output binary result also of 11. This means that the user assigned nine 2-tuples to items of interest in each case. The respective input combination when processed through a KCNN produces the result of tautology or proof. Either example is sufficient to qualify the novel KCNM as an algorithm of learning theory because in MDL an output result from an input is testable, verifiable, and reproducible.

X. CONCLUSION

Previous LUT methods in the literature were universally not complete and not purely standalone implementations, but rather were conjoined for utility to implement other theories. The KCNM is disparate from and bears no resemblance to previous deterministic or probabilistic models. Hence, the KCNM deserves in its own right the designation as a new algorithm of learning theory.
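The radix-10 and radix-2 columns of Table VIII are the same number; a short check in Python (illustrative, not the author's code):

```python
def to_dibits(index):
    """Render a LUT index as its 18-bit input, grouped as nine dibits."""
    bits = format(index, "018b")
    return " ".join(bits[i:i + 2] for i in range(0, 18, 2))

print(to_dibits(131099))   # the Table VIII entry whose output is 11
```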
ACKNOWLEDGMENTS

Thanks are due for helpful discussions and comments to Tom Bock, Mary Evans, Eric Jensen, Tony Storey, and the staff of Ersatz Systems Machine Cognition, LLC [ESMC].

REFERENCES

[1] C. James, "A Reusable Database Engine for Accounting Arithmetic," Proceedings of the Third Biennial World Conference on Integrated Design & Process Technology, 2:25-30, 1998.
[2] C. James, "Recent Advances in Logic Tables for Reusable Database Engines," Proceedings of the American Society of Mechanical Engineers International, Petroleum Division, 1999.
[3] C. James, "Proof of Four Valued Bit Code (4vbc) as a Group, Ring, and Module," World Congress and School on Universal Logic III, 2010.
[4] S. Herculano-Houzel, "Coordinated scaling of cortical and cerebellar numbers of neurons," Front. Neuroanat. 4:12, 2010.
[5] Kitco.com, historical monthly gold price graphs, 2013.
[6] C. Koch, Biophysics of Computation: Information Processing in Single Neurons. New York: Oxford Univ. Press, 1999.
[7] A. M. Smith, G. M. Constantinides, P. Y. K. Cheung, "FPGA architecture using geometric programming," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 29:8, 2010.
[8] U.S. Federal Reserve, federalreserve.gov file bcreg20121115a2, 2013.
[9] U.S. Federal Reserve, "Dodd-Frank Act Stress Test 2013: Supervisory Stress Test Methodology and Results," federalreserve.gov file dfast_2013_results_2040314, March 2013.