Toronto/Project Description

From 2007.igem.org


Latest revision as of 23:12, 26 October 2007


Project: E. coli Neural Network

INTRODUCTION

Neural networks (NNs) are nonlinear systems capable of distributed processing over a number of simple interconnected units. By adjusting the connections between these units, NNs are capable of learning. Similar to NNs, a culture of Escherichia coli cells can be viewed as a distributed processing system with connections formed by intercellular signaling pathways. By engineering the cells to interact with each other in predetermined ways, it should be feasible to develop a bacteria-based neural network.

OBJECTIVES

The goal of the proposed project is to develop a simple two-input-one-output feedforward NN using E. coli cells. This network will be capable of being trained to function as different types of digital logic gates.

REVIEW OF PERTINENT LITERATURE

The neural network topology being examined in this proposal is known as a perceptron. Perceptrons are linear classifiers, which output a function of a linear combination of their inputs [1]. The factor by which each input is multiplied is known as its weight. A perceptron is visually depicted as a layer of input units connected by network weights to a single output unit, which performs a function on the weighted sum of the inputs (Fig. 1). For the proposed project, two inputs (x1 and x2) will be used.

Training of feedforward neural networks typically employs a method known as backpropagation [2]. Backpropagation involves a forward pass, where the output of the neural network is computed for a given set of inputs, and a backward pass, where the network weights are adjusted based on the error between the actual output and the expected output. The delta rule is commonly employed to determine the amount of weight change. For perceptrons, a simplification of the delta rule can be made where the weight change is

Δw_i = α x_i (d − y),    (1)

where d is the target output, y is the actual output, and α is a constant that controls the amplitude of the weight adjustment [1]. Training is done by exposing the network to a training set until the error between the actual and expected output falls within a defined tolerance level.


Fig. 1. Generalized topology of a perceptron.
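As a point of reference, the delta-rule training in (1) can be sketched in software. This is a minimal illustration, not part of the wet-lab design; the threshold, learning rate, and epoch count are arbitrary choices.

```python
# Minimal software illustration of perceptron training with the delta
# rule (1): delta_w_i = alpha * x_i * (d - y). Threshold, learning rate,
# and epoch count are arbitrary illustrative choices.

def step(z, threshold=0.5):
    """Hard-threshold activation applied to the weighted input sum."""
    return 1 if z >= threshold else 0

def train(training_set, alpha=0.2, epochs=50):
    """Train a two-input perceptron; weights start at zero."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, d in training_set:
            y = step(w[0] * x[0] + w[1] * x[1])  # forward pass
            for i in range(2):
                w[i] += alpha * x[i] * (d - y)   # delta rule (1)
    return w

# Training set for an OR gate: all input/target combinations.
or_set = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train(or_set)
outputs = [step(w[0] * x[0] + w[1] * x[1]) for x, _ in or_set]
```

Presenting all four input/target pairs repeatedly drives the weights up until the output matches the OR truth table, which is the behavior the wet-lab training procedure is meant to reproduce.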

METHODS

A. Basic E. Coli Neural Network Structure

The input to the proposed network will be light and the output will be ECFP expression. Two red light sources will comprise the input nodes. The cell population will consist of output cells and weight cells. The output cells, which represent the output node, will be physically separated into three neighboring groups by a membrane to prevent cell migration. The first two groups detect light from the two network inputs. The weight cells will be mounted above two groups of output cells and will act as a light filter to control the amount of light received by the output cells. The weight cell groups will be separated via a membrane to prevent cell migration. The amount of light picked up by each individual output cell group is integrated via production of HSL signaling molecules. These molecules are picked up by all output cell groups. The third group of output cells has no weight cells above it and exists only for the measurement of ECFP expression. For each input-output measurement and between training sets, the media should be replaced to remove HSL and blue substrate molecules. An abstracted diagram of the topology is illustrated in Fig. 2. The genetic parts for the output cells are shown in Fig. 3.

Fig. 2. Abstracted diagram of the proposed neural network structure.
Fig. 3. The genetic parts for constructing the output cells.
(a) The light receptor is constitutively expressed.
(b) OmpR is activated in darkness when the EnvZ domain on the light receptor is phosphorylated. The activation of OmpR is inverted via cI lam so that light activity is being measured, instead of darkness. The luxI gene is expressed in the presence of light. Its gene product produces 3OC6HSL. Note that cI lam and luxI both have LVA degradation tags to ensure they don’t accumulate.
(c) The luxR gene is constitutively expressed. The gene product detects 3OC6HSL. When the HSL concentration is high enough, the lux pR promoter is activated, resulting in the production of ECFP. Note that ECFP has a degradation tag.
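Under the simplifying assumption that each weight cell group acts as a linear light filter and that HSL diffusion sums the light received by the output groups, the physical layout maps onto a perceptron's weighted sum roughly as follows. The HSL threshold and transmittance values are illustrative only:

```python
# Assumed simplification of the physical layout: each light input x_i
# passes through a weight cell filter with transmittance w_i, and HSL
# diffusion integrates the light received by the output cell groups.
# The HSL threshold and transmittance values are illustrative only.

def network_output(x, w, threshold=0.8):
    """x: light inputs (0/1); w: weight-cell transmittances in [0, 1]."""
    # Total light reaching the output groups, pooled via HSL diffusion.
    hsl = sum(xi * wi for xi, wi in zip(x, w))
    # The reporter group produces ECFP once HSL crosses a threshold.
    return 1 if hsl >= threshold else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
and_like = [network_output(x, (0.5, 0.5)) for x in inputs]  # only (1,1) fires
or_like = [network_output(x, (1.0, 1.0)) for x in inputs]   # any input fires
```

With low transmittances the network behaves like an AND gate, and with high transmittances like an OR gate; moving the transmittances between these regimes is what training is meant to achieve.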

B. Weight Cells

The weight cells control the amount of light permitted to pass via lacZ expression and cell death. When lacZ is expressed, less light passes; when cell death occurs, any lacZ in the dead cell diffuses away, allowing more light to pass. The delta rule (1) dictates how the weights should change: when there is input and a target output signal, the weight should increase; when there is input and actual output, the weight should decrease. Low heat will be used as the target output signal. Hence, when there is light and low heat, cell death will occur via suppression of the kanR gene, provided the media contains kanamycin. When there is light and HSL molecules, lacZ will be produced.

A light filter may be placed between the weight cells and the output cells to ensure the weight cells receive the maximum light in the on-state, while the output cells receive a degree of light that is closer to the on/off transition point. This way, the weight cells will have more effect on the output. The system will be designed so that the output cells and the weight cells do not share the same media. This way, the media from the output cells can be transferred to the weight cells to initiate network training once the HSL molecules have been generated and evenly distributed.

Fig. 4. Genetic parts for the weight cells.
(a) Light receptors are constitutively expressed. The activation of OmpR is inverted via tetR.
(b) In the presence of light, luxR and lac ts are expressed. The luxR gene product detects 3OC6HSL and activates lux pR. When lux pR is activated, lacZ is expressed. The lac ts gene product inhibits kanR expression when the temperature is reduced, resulting in cell death in kanamycin-containing media.
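The lacZ/cell-death mechanism can be abstracted as a transmittance update that mirrors the delta rule. The rate constants below are placeholders, not measured values:

```python
# Abstraction of the weight cell mechanism as a transmittance update
# mirroring the delta rule. The rate constants are placeholders, not
# measured values.

def update_transmittance(t, light, hsl, low_heat, k_lacz=0.1, k_death=0.1):
    """One training step for a weight cell group's light transmittance.

    light:    1 if the input light was on (the x term)
    hsl:      actual-output signal carried by 3OC6HSL (the y term)
    low_heat: 1 if the target-output signal is present (the d term)
    """
    # lacZ production requires light AND HSL; it darkens the filter.
    t -= k_lacz * light * hsl
    # Cell death requires light AND low heat; dead cells pass more light.
    t += k_death * light * low_heat
    return min(1.0, max(0.0, t))  # transmittance stays within [0, 1]
```

With k_lacz equal to k_death, matching actual and target signals leave the transmittance unchanged, which corresponds to the "zeroed" condition required of the weight cells (see the Table 1 note).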

C. Training

This network can be trained either to allow one input to propagate to the output or to function as an AND or OR gate. Training is performed by presenting the network with all input and target output combinations for the logic function desired of the network. A typical training run would proceed as follows:

  1. Presentation of input to network: Light input is presented to the output cells for the predetermined exposure time. The weight cells will not have kanamycin or X-gal in the media to prevent blue color production or cell death. The output cells should have kanamycin and X-gal in the media since the media will be transferred to the weight cells for training.
  2. Blocking of output cells to further input: The light input should be blocked from reaching the output cells (but not the weight cells) in order to allow the system to reach steady state.
  3. Presentation of target output to weight cells: The appropriate temperature should now be set for the desired target output.
  4. Enabling of training mode: Training mode is enabled by transferring the media from the output cells to the weight cells.
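The four steps above, repeated for each input/target combination, can be summarized as an ordered protocol. The operation strings below are shorthand for wet-lab manipulations, not an executable interface, and the AND truth table is just one example target:

```python
# Shorthand protocol generator for one training pass over all four
# input/target combinations (here, an AND gate). The operation strings
# stand in for wet-lab manipulations; they are not an executable interface.

AND_TARGETS = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def training_pass(targets=AND_TARGETS):
    """Return the ordered operations for one pass over the training set."""
    ops = []
    for (x1, x2), d in sorted(targets.items()):
        ops.append("replace media to remove residual HSL and blue substrate")
        ops.append(f"1. present light input ({x1}, {x2}) for the exposure time")
        ops.append("2. block light to the output cells; wait for steady state")
        ops.append(f"3. set temperature for target output d={d} (low heat = 1)")
        ops.append("4. transfer output-cell media to the weight cells")
    return ops
```

Repeating such passes until the measured ECFP output matches the target truth table within tolerance completes training, in analogy with the software procedure of (1).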

Table 1 lists the parameters that can be adjusted to control how the network behaves. Note that for equal actual and target output levels, the weight cells should be “zeroed” so that the rates of lacZ production and cell death cancel each other with respect to light transmittance.

Table 1. Adjustable network parameters.

D. Simulation

Simulation will need to model the internal chemistry of the cell, as well as the propagation of HSL signals outside the cell. This will aid in determining appropriate values for the network parameters listed in Table 1. The modeling process will first involve constructing a model of the basic neural network functionality. Once this works when tested with hypothetical inputs and weight cell transmittances, the full weight cell dynamics can be implemented. Table 2 provides a guideline on some of the experimental plots that will be needed to develop the model.

Table 2. Experimental plots needed to develop the model.
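A minimal numerical sketch of the output-cell chemistry (light → luxI → 3OC6HSL → ECFP) illustrates the kind of model intended here. All rate constants and the HSL threshold are placeholders to be fit from the Table 2 measurements:

```python
# Minimal Euler-integration sketch of the output-cell chemistry: light
# drives luxI expression, luxI synthesizes 3OC6HSL, and HSL above a
# threshold drives ECFP expression. All rate constants and the HSL
# threshold are placeholders to be fit from the Table 2 plots.

def simulate(light, t_end=100.0, dt=0.01,
             k_luxi=1.0, d_luxi=0.5,   # luxI synthesis / LVA degradation
             k_hsl=0.8, d_hsl=0.1,     # HSL production / loss
             k_ecfp=1.0, d_ecfp=0.3,   # ECFP expression / degradation
             hsl_threshold=2.0):
    """Return the ECFP level at t_end for a constant light input (0 or 1)."""
    luxi = hsl = ecfp = 0.0
    for _ in range(int(t_end / dt)):
        luxi += dt * (k_luxi * light - d_luxi * luxi)
        hsl += dt * (k_hsl * luxi - d_hsl * hsl)
        # lux pR fires only above the HSL threshold (simplified switch).
        ecfp += dt * (k_ecfp * (hsl > hsl_threshold) - d_ecfp * ecfp)
    return ecfp
```

With the light on, HSL accumulates past the threshold and ECFP is produced; with the light off, no ECFP appears. Replacing the placeholder constants with fitted values would also yield the wait times needed between the training steps.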

SIGNIFICANCE OF WORK

Developing a NN using a genetic regulatory network would provide a stepping stone for the creation of biological systems that operate on fuzzy logic. Such systems would have the ability to adapt to changing environmental conditions without the need to design traditional predicate-logic-based genetic circuitry. Moreover, by implementing a neural network with biological components, one can take advantage of the parallel processing capabilities of biological systems.

While relatively simple in functionality, this design will provide a basis for more complicated NN topologies.

REFERENCES

[1] O. Weisman, The Perceptron. [Online]. Available: http://www.cs.bgu.ac.il/~omri/Perceptron

[2] B. Bardakjian, Cellular Bioelectricity Course Notes. 2005.
