Toronto

From 2007.igem.org

[[Image:blue genes logo.jpg|frame|center]]
== iGEM Project: Bacterial Neural Network ==
[[Image:Fig2.png|thumb|A visual abstraction of the system. Red light inputs pass through filter cells (grey in this image) and hit photoreceptors on the reporter cells (depicted by solar panels). The reporters sum multiple inputs and fluoresce (blue lightbulb).]]
Our project aims to build a bacterial (E. coli) neural network composed of two cell types, where filter cells (type A) modulate input to reporter cells (type B). The first cell type is stimulated with red light of a specific intensity and duration, and will turn blue in proportion to that "pulse" of light. Populations of type A will be physically mounted above those of type B, acting as a light filter. Type B cells are receptive to the same wavelength of light, and will fluoresce in proportion to the amount of light they receive.
Neural networks are unique in that signalling both forward (type A influencing type B) and backward (type B influencing type A) gives them the ability to learn. Training sessions can be performed with predefined inputs and outputs, and repeated iterations increase the probability that a given input will produce the desired output. Our network's training functionality will be implemented through cell-cell signalling from type B cells back to type A cells, adjusting the strength of the light filter.
Depending on the training strategy used, our neural network can learn to function as either an AND or an OR gate. Essentially, our neural network will be able to sum a number of inputs and provide a proportionate output. Once training is completed with a few inputs, we should be able to provide novel inputs to the network and produce appropriate responses. This is a step towards demonstrating fuzzy logic (as opposed to traditional digital logic) in genetic circuits.
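
To make the training idea concrete, the following is a minimal software sketch (Python) of the perceptron-style learning that our wetware approximates: a single unit sums weighted inputs, and each training example nudges the weights toward the desired output. The learning rate, threshold, and epoch count are arbitrary illustrative values, not anything measured from cells.

<pre>
# Minimal perceptron sketch of the behaviour the bacterial network
# approximates: weighted inputs are summed against a threshold, and
# each training example adjusts the weights toward the target output.
# All numbers are illustrative, not measured from cells.

def train(examples, rate=0.1, epochs=50):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - y           # desired minus actual output
            w[0] += rate * error * x1    # strengthen or weaken each
            w[1] += rate * error * x2    # connection in proportion to
            b += rate * error            # its input's contribution
    return w, b

# The training set alone decides whether the same unit becomes an
# AND gate or an OR gate; the topology never changes.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

for name, data in (("AND", AND), ("OR", OR)):
    w, b = train(data)
    print(name, [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                 for (x1, x2), _ in data])
</pre>

Only the training set differs between the two gates; the "circuit" itself is untouched, which is exactly the flexibility we want the adjustable light filter to provide.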
For a more comprehensive description including genetic circuit diagrams, please see [[Toronto/Project Description|E. Coli Neural Network]]. (Updated proposal!)
'''Background:''' What is a neural network, anyway? See the [http://en.wikipedia.org/wiki/Neural_network Wikipedia definition of a neural network].
== Simulated Models ==
We are currently working on two types of simulations.
=== Ordinary Differential Equations ===
<center>[[Image:ODEpic.gif]]</center>
An example of the differential equations used to model the neural network. This set defines the luxR-eCFP pathway in the second cell type.
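
As a rough illustration of what such a set looks like in code, the toy model below integrates one plausible form of the pathway with forward Euler: HSL binds free LuxR, the complex activates the lux promoter, and eCFP mRNA and protein are produced and degraded. Every rate constant here is a placeholder, not one of our fitted parameters.

<pre>
# Toy ODE model of a luxR-eCFP pathway, integrated by forward Euler.
# States: c = LuxR-HSL complex, m = eCFP mRNA, p = eCFP protein.
# All constants are placeholders for illustration only.

HSL = 1.0                  # incoming signal, treated as constant
LUXR_TOTAL = 2.0           # total LuxR (free + bound), conserved
kon, koff = 1.0, 0.2       # HSL-LuxR binding / unbinding
a, K = 5.0, 0.5            # max transcription rate, activation constant
deg_m = 0.5                # mRNA degradation
kp, deg_p = 2.0, 0.3       # translation rate, eCFP degradation

def derivatives(c, m, p):
    dc = kon * HSL * (LUXR_TOTAL - c) - koff * c   # complex formation
    dm = a * c / (K + c) - deg_m * m               # activated transcription
    dp = kp * m - deg_p * p                        # translation and decay
    return dc, dm, dp

c = m = p = 0.0
dt = 0.01
for step in range(5001):
    if step % 1000 == 0:
        print(f"t={step * dt:5.1f}  complex={c:.2f}  mRNA={m:.2f}  eCFP={p:.2f}")
    dc, dm, dp = derivatives(c, m, p)
    c, m, p = c + dc * dt, m + dm * dt, p + dp * dt
</pre>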
=== Monte Carlo Stochastic Simulation ===
<center>[[Image:output cell layer.gif]]</center>
This graph depicts the second cell type responding to light. The blue and purple lines indicate two cell populations producing LuxI, while the red line indicates total HSL production. In our neural network, LuxI is produced in response to light input, and the amount of LuxI controls the rate of HSL synthesis; therefore, this cell type can "add" the light inputs to produce a proportionate HSL output. The x-axis depicts iterations of the simulation, while the y-axis is a measure of output (e.g. concentration in the system).
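
For readers unfamiliar with this class of simulation, the sketch below shows the core Gillespie-style loop on a deliberately simplified version of this system: propensities are computed from current molecule counts, an exponentially distributed waiting time advances the clock, and one reaction fires in proportion to its propensity. The species and rate values are stand-ins, not our actual reaction network.

<pre>
import random

# Toy stochastic (Gillespie-style) simulation of one reporter cell:
# light drives LuxI expression, LuxI synthesizes HSL, both decay.
# Rates are invented for illustration only.
light = 1.0
k_expr, k_syn = 5.0, 2.0      # LuxI expression; HSL synthesis per LuxI
d_luxi, d_hsl = 0.1, 0.05     # degradation rates

luxi, hsl, t = 0, 0, 0.0
random.seed(1)
while t < 100.0:
    props = [k_expr * light,   # 0: produce one LuxI
             d_luxi * luxi,    # 1: degrade one LuxI
             k_syn * luxi,     # 2: LuxI produces one HSL
             d_hsl * hsl]      # 3: degrade one HSL
    total = sum(props)
    t += random.expovariate(total)   # exponential waiting time
    r = random.uniform(0.0, total)   # choose which reaction fired
    if r < props[0]:
        luxi += 1
    elif r < props[0] + props[1]:
        luxi -= 1
    elif r < props[0] + props[1] + props[2]:
        hsl += 1
    else:
        hsl -= 1

print("final LuxI:", luxi, " final HSL:", hsl)
</pre>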
<center>[[Image:one input output simulation.gif]]</center>
This figure demonstrates the predicted behaviour of the complete neural network. The blue line indicates the size of the light-absorbing cell population (i.e. the first cell type), which quickly reaches a steady state. The light yellow line represents a steady red input light, while the orange line shows the amount of light passed through to the second cell type (shown in red in the diagram below the graph). Finally, the red line on the graph shows the final output increasing over successive iterations and then settling at a consistent level.
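
The qualitative shape of these curves can be reproduced with a few lines of arithmetic, sketched below: the filter population grows logistically to its steady state, the transmitted light is the steady input attenuated by that population, and the reporter output relaxes toward the transmitted level over successive iterations. All parameters are arbitrary values chosen to mimic the figure, not outputs of our models.

<pre>
# Sketch of the whole-network curves: a steady red input, a filter
# population that saturates, and a reporter output that rises and
# levels off.  Parameters are arbitrary stand-ins.
input_light = 1.0     # steady red input (light yellow line)
filter_pop = 0.05     # light-absorbing population (blue line)
output = 0.0          # reporter output (red line)

for step in range(60):
    # Logistic growth: the filter population quickly reaches steady state.
    filter_pop += 0.3 * filter_pop * (1.0 - filter_pop)
    # Light reaching the second cell type (orange line).
    transmitted = input_light * (1.0 - 0.6 * filter_pop)
    # Reporter output relaxes toward the transmitted light level.
    output += 0.15 * (transmitted - output)
    if step % 10 == 0:
        print(f"step {step:2d}  filter={filter_pop:.2f}  "
              f"transmitted={transmitted:.2f}  output={output:.2f}")
</pre>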
== Construction ==
'''Construction strategy:''' Using classical DNA transformation and ligation techniques, we plan to build six testing constructs to provide experimental constants for our simulated models of the neural network. The simulations will then assist us in optimizing our experimental conditions for training the final circuits. Building test constructs also results in reliable intermediate parts that can be used to quickly assemble the complete neural network.
[[Toronto/Lab Notebook|Lab Notebook]] - watch our daily wetlab progress online
[[Toronto/Design Updates|Design Updates]] - the circuit diagrams for the neural network and testing constructs
Lab schedule - the schedule formerly hosted on this wiki is now obsolete; the most recent version is available at [http://igem.skule.ca/lab/schedule.htm BlueGenes lab schedules].
[[Toronto/Lab Protocols|Lab Protocols]] - online versions of our lab protocols. These can also be found on our static site, igem.skule.ca.
== The Team ==
[[Toronto/Team Name|What is BlueGenes?]] - a brief overview of who we are, and where the name comes from (not pants!)
[[Toronto/Roster|Meet the Team]] - team roster, photos, and profiles
iGEM 2006 Wiki: [http://parts2.mit.edu/wiki/index.php/University_of_Toronto_2006 Blue Water]. Of particular note is the lab notebook (click on Construction under Committees).
== Sponsors ==
[[Toronto/Sponsors|Sponsors]]
If you would like to support us, please go to [http://igem.skule.ca/finance/opportunities.htm Sponsorship] for more on becoming a sponsor.  
== Miscellany ==
'''Website:''' [http://www.igem.skule.ca igem.skule.ca] - other information about BlueGenes can be found here.
'''Contact Us:''' igem[at]skule[dot]ca