Wednesday, December 3, 2008

# Odor Communication System

Contents

(1) Introduction
(2) Difficulty in odor communication
(3) An Odor Communication System
(4) Odor Spaces
(5) The MTM Algorithm
(6) Applications
(7) Conclusion
(8) References

Abstract

In today's world most of us are immersed in the world of computers, which engages mainly the human senses of sight and hearing. Here the importance of smell and its application to the electronic world is emphasized: the different parts of an odor communication system are described, and ways to realize them are outlined. In this era, fragrances and flavors have an ever greater influence, as exemplified by their intensive use in the blooming industries of food and beverage, perfumes and cosmetics, detergents, and many more. These many applications require some means of controlling the odor world. A repertoire of methods in fragrance production and synthesis has been developed, aiming at safe, cheap, and reproducible odor fabrication techniques.

Introduction

It is generally accepted that the sensory world of most humans is built up mainly from visual and auditory impressions, and that other senses, such as smell, have smaller impact. Nevertheless, it seems that the sense of smell is often underestimated, and its impact actually may be overwhelming, directly influencing ancient, primitive brain paths. Interestingly, humanity recognized this long ago, perhaps subconsciously, with scents already playing a significant role in ancient religious rituals.

Many applications require some means of controlling the odor world. A repertoire of methods in fragrance production and synthesis has been developed, aiming at safe, cheap, and reproducible odor fabrication techniques. Still, hard labor is required for each individual odor fabrication process, involving tedious, expensive, time-consuming research. In the last few decades, there have been efforts to integrate odors into the rapidly evolving world of modern communication. Adding smells to a personal computer, a video, a television set, or a mobile phone would give rise to a vast number of possible applications in the fields of commerce, marketing, computer games, and many others. However, available odor technologies seem to be incapable of supporting such applications, making it necessary to develop novel technologies. Today, only simple odor manipulations can be carried out. For example, scented cards are often inserted as sales promoters in magazines, dispensing a fragrance when scratched. Similar "scratch and sniff" devices sometimes accompany movies or home television. Some recent models of mobile phones contain small capsules that emit pre-determined scents when certain people call. There have even been attempts to introduce odors by means of air conditioning systems in movie theaters and in the workplace. Still, none of the above comes close to the technological advances in vision and audition. One of the most salient expressions of this gap is in modern multimedia: pictures and sound are routinely transmitted and exhibited on television, video, or the personal computer, but this has not yet happened with odors.

Difficulty in odor communication

Some of the major problems seem to be the following:

The underlying physics is complex. Vision and audition also involve complex physical phenomena, but photons and sound waves are well-defined physical objects that follow well-known equations of a simple basic nature. Specifically, in both cases sensory quality is related to well-known physical quantities (color to wavelength, pitch to frequency). On the other hand, the smell of an odorant is determined by the complex, and only partially understood, interactions between the ligand molecule and the olfactory receptor molecule.

The biological detection system is high-dimensional. The nose contains hundreds of different types of olfactory receptors, each of them interacting in different ways with different kinds of odorants. Thus, the dimensionality of the sense of smell is at least two orders of magnitude larger than that of vision, which can make do with only three types of color receptors.

Odor delivery technology is immature. While artificial generation of desired visual and auditory stimuli is done at high speed and with high quality, smells cannot be easily reproduced. Nowadays, the best that can be done is to interactively release extracts that were prepared in advance.

Much effort has been invested in trying to better understand the sense of smell and its means of expression. Relating the smell of a molecule to its three-dimensional structure, as well as characterizing ligand-receptor interactions are the subject of intensive research. However, while much progress is constantly reported, no theory adequately dealing with olfaction is currently at hand.

An Odor Communication System

The most general building blocks of such an odor communication system are depicted in Figure 1. At a remote location (Figure 1a), an input device, the sniffer, is used to take in the odor and transform it into a digital fingerprint. At a different location (Figure 1b), the fingerprint is analyzed by the mix-to-mimic (MTM) algorithm, which instructs an output device, the whiffer, to emit a mixture of odorants that mimics the input odor well enough to fool a human into thinking that he/she actually smells it. Prior to all of this there is also a considerable amount of preprocessing and preparation, which will be discussed later on.

This setup is in direct analogy with other communication systems. For example, if we replace the sniffer by a camera, and the whiffer by a printer, we get a visual communication system, with the various color coding (RGB, CMYK, etc.) being analogous to our mixing technique.

The sniffer

In the most general sense, a sniffer is a physical device that can record, or digitize, odorants. In other words, it takes chemical data and turns it into numbers. Upon the introduction of an odorant at its inlet, the sniffer produces a numerical output, which becomes (usually after some further manipulation) a representation, or a fingerprint, of the odorant. To be useful in our odor communication system, we shall further require a sniffer to be sufficiently discriminatory, in that it produces unique fingerprints for all odorants. Moreover, we would like its fingerprints to exhibit some correlations with the smell perception of their sources. Any instrument that quantifies a certain property of chemicals in a unique and reproducible way suffices. In principle, an apparatus capable of measuring the boiling point of an odorant could become a sniffer. However, we can expect the correlation of boiling points with odor perception to be rather weak.


A more realistic example is the combination of a gas chromatograph (GC) and a mass spectrometer (MS). The GC/MS combination is very popular in analytical chemistry, and is used to precisely identify the compounds of a mixture. However, we doubt that it would make a good sniffer, since we have no reason to believe that the output it produces has anything to do with smell perception. From a commercial point of view, GC/MS suffers from additional disadvantages: it is expensive, large and bulky, and complicated to use, requiring carefully trained operators. Moreover, analyzing its results is time consuming, and often sample preparation is tedious too. In our opinion, the best candidates to serve as sniffers are the instruments collectively grouped under the term electronic noses (eNoses). These are analytic devices whose main component is an array of non-specific chemical sensors, i.e., sensors that interact with a broad range of chemicals with varying strengths. Consequently, an incoming analyte stimulates many of the sensors in the array, and elicits a characteristic response pattern. These patterns are then further analyzed for the benefit of the specific application. The fact that the biological smelling system also relies on an array of non-specific receptors gives hope that we may be able to find significant relationships between the biological nose and its artificial counterpart. In some eNoses currently under development, the usual chemical sensors are replaced by biosensors that are supposed to work in essentially the same way as the biological receptors in the nose. From a commercial point of view, eNoses enjoy several desired properties: they can be made small and cheap; they are easy to use, fast to operate, and for most applications they do not require any special sample preparation.

In the electronic realm, as in the biological one, the desire for sensitivity does not always go well with the desire for non-specificity. Sensors (or receptors) that are designed to respond to an assortment of stimuli are normally characterized by low sensitivity. Indeed, eNoses are typified by relatively high detection thresholds, on the order of 1-10 ppm. Although seemingly problematic, this is not a true stumbling block for an odor communication system. First, many odor sources release higher concentrations than this in their immediate vicinity. Second, a preliminary step of concentration enrichment can always be carried out if necessary.

The whiffer

The whiffer is the part of the system that emits the smell imitation to the surroundings. It must include a palette of reservoirs containing the odorants it can mix, a technology to accurately mix them, and means for releasing them to the outside world in accurate quantities and with precise timing. For use by mass consumers, the whiffer should also have small physical dimensions and be of low cost. This definition of a whiffer strongly relies on the assumption that mixtures from within a set of odorants can mimic, to a reasonable level, any desired smell. This is reminiscent of the characteristics of RGB color mixing in vision.

The requirements for a whiffer seem simple, but it turns out that numerous technological barriers must be overcome in order to satisfy them. In fact, whiffers, as we have defined them, are not commercially available. The devices that come closest are the olfactometers, which have been in use for many years and are capable of accurately mixing gas samples and releasing the mixture to the surroundings. They are most often used together with human panelists for the purpose of assessing odor emission levels. However, an olfactometer is not a true whiffer, since it is designed mainly for diluting carefully prepared gaseous samples. We think of a whiffer as being more akin to a printer (say, an ink-jet), with the palette of odorants being analogous to the color cartridge.

The mix-to-mimic (MTM) algorithm

The heart of the system, however, is in its mathematical and algorithmic parts. The ultimate role of these is to instruct the whiffer, based on the input odor detected by the sniffer, as to how to mix the palette odorants so as to produce the desired odor perception.

Odor Spaces

For a proper formulation of the mixing algorithm and the algorithmic processes around it, it is important to introduce the notion of an odor space. We use the term odor space for any end product of a process that numerically represents the olfactory information stored in odor ligands; our brain carries out such an operation when we sniff, producing a measurable electrical neuronal activity pattern. Specifically, there are three kinds of odor spaces: the sniffer space, the sensory space, and the psychophysical space.

To start with, we use (o; c) to describe an odorant o in concentration c. An odor space represents (o; c) by the set of numbers d(o; c), which we call the odorant vector; the length of this vector is the dimension of the odor space.

The sensory space

The sense of smell is a primeval sense, originating in early single-cell organisms. In principle, it functions by taking a sample of the ambient environment and analyzing its chemical contents. In air-breathing organisms, volatile odorants enter the nasal cavity, where the primary organ of smell, the olfactory epithelium, resides. This pseudostratified neuroepithelium contains 10-100 million bipolar sensory neurons, each having a few dozen mucus-bathed hair-like cellular extensions known as olfactory cilia. The ciliary membranes harbor the olfactory receptor (OR) proteins, as well as components responsible for the chemoelectric transduction process. ORs have all been identified as belonging to the 7-transmembrane superfamily of G-protein coupled receptors. The stereospecific binding of odorant molecules to the ORs initiates a cascade of biochemical events that result in action potentials that reach higher brain centers. The number of distinct types of ORs, r, called the olfactory repertoire size, is believed to be around 1000 in all mammals. Only recently, the full sequence of more than 900 human OR genes has been reported, based on genomic databases. Only about 300 of them are functional in humans, and the rest are pseudogenes. However, in other mammals the pseudogene fraction could be much smaller. The recognition of odorant molecules occurs by a non-covalent binding process akin to that encountered in many other receptor types, including hormone and neurotransmitter receptors. However, while for "standard" receptors there is usually only one, or very few, natural ligands, olfactory receptors are functionally promiscuous. Therefore, when an odorant (o; c) approaches the epithelium, it interacts with many receptor types, and can be characterized by the vector

    dB(o; c) = (R1(o; c), R2(o; c), ..., Rr(o; c))

with Ri(o; c) being the response of the i'th type of receptor molecule to the odorant (o; c). We deliberately do not specify the details of the response, which can be the fraction of bound receptors, the concentration of some second messenger, or some other relevant entity. It is often, in fact, a dynamic function of time. We shall see later that the exact definition of Ri(o; c) is irrelevant to our algorithm. The r-dimensional odorant vector dB(o; c) describes the way by which the biological sensory machinery responds to the odorant, so that terming this odor space the sensory space is appropriate. An important observation is that all the 10^5 to 10^6 OR molecules in the same sensory cell are of the same type, and thus r is also the number of distinct types of olfactory sensory neurons.

The olfactory neurons send their axons to the olfactory bulb (OB), passing in bundles through the cribriform plate. Here, the first, and rather significant, stage of the higher processing takes place. It is widely believed that important aspects of odor quality and strength (concentration) perception are carried out in the OB, and studies have in fact shown that the OB responds with odor-specific spatio-temporal patterns. Successive stimulations with the same odorant have been shown to lead to reproducible patterns of activity. Patterns evoked by low concentrations were topologically nearly identical to those evoked by high concentrations, but with reduced signal amplitude. Within the OB, the OR axons form contacts with secondary neurons inside ellipsoidal synaptic conglomerates, called glomeruli. A glomerulus serves as a synaptic target for neurons expressing only a single OR type. Consequently, it is not surprising that the number of glomeruli, estimated to be between 1000 and 2000, is of the same order of magnitude as r. From our point of view, the important conclusion is that the OB is stimulated by approximately r distinct types of nerve cells, which tells us that the entire olfactory pathway is triggered by the vector dB(o; c).

The psychophysical space

Upon sniffing, three major tasks are performed by the brain: a qualitative classification of the incoming odorant, a quantitative estimation of its strength, and a hedonic decision about its acceptability. The first two are objective tasks (measuring molecule types and concentrations), while the last one is more subjective and will not be dealt with here.

Olfactory classification of a pure chemical or a mixture is a rather elaborate task. Unlike vision, audition, and even gustation, olfaction is multidimensional, and is believed to involve dozens, if not hundreds, of quality descriptors. Quantitative assessment of these qualities poses real challenges to research in olfactory psychophysics. Methods have been developed to assign descriptors to an odor, and to give relative weights of dominance to the different descriptors. The entire procedure is normally carried out by a human panel of experts who are familiar with the technique, and who are capable of distinguishing the different descriptors with a high degree of accuracy. As appealing as this might sound, it is quite difficult to obtain coherent results with profiling, since exact verbal descriptions of odor perception are too demanding. Human subjects often find it difficult to describe odor quality verbally, an observation supported by the fact that most natural languages have a poor vocabulary for odors, which are sometimes described using words borrowed from other sensory modalities (e.g., cool, green).

Alternatives to the profiling technique use panels to accomplish simpler, thus perhaps more reliable, tasks, such as various ways of sorting a group of odors, comparing pairs or triples of odors, pointing out exceptions within groups of odors, etc. Some techniques collect enough statistics from the panels to be able to create a distance matrix that quantitatively expresses the level of dissimilarity between pairs of odors. Various kinds of multidimensional scaling (MDS) algorithms can then be applied to the data, resulting in a vector representation of the odors.
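As a concrete illustration of the MDS step, a panel's dissimilarity matrix can be embedded into a low-dimensional odor space with classical (Torgerson) scaling. The sketch below uses NumPy only; the three-odor dissimilarity matrix is invented for illustration and is not real panel data.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical (Torgerson) MDS: embed items with pairwise
    dissimilarity matrix D into `dims`-dimensional coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(B)      # eigenvalues in ascending order
    idx = np.argsort(evals)[::-1][:dims]  # keep the largest components
    scale = np.sqrt(np.maximum(evals[idx], 0.0))
    return evecs[:, idx] * scale          # n x dims coordinates

# toy panel data: odors 0 and 1 judged similar, odor 2 distinct
D = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 4.2],
              [4.0, 4.2, 0.0]])
X = classical_mds(D, dims=2)
```

For three odors a two-dimensional embedding reproduces the panel's dissimilarities exactly; with more odors, MDS finds the best low-dimensional approximation.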

Whatever quantitative quality assessment technique is used, an odorant (o; c) is eventually represented by the odorant vector dP(o; c). We use the symbol l to denote the dimensionality of the resulting odor space, which we call the psychophysical space. If one uses odor profiling, then l is normally in the range 20-200, and the i'th element of dP(o; c) is the human panel's opinion regarding the weight of the i'th descriptor. If one uses MDS, l is typically much lower (< 10), and the elements of dP(o; c) do not have a precisely describable meaning. We should emphasize that dP(o; c) is concentration dependent, since the perception of an odorant might change with concentration.

We might say that while (o; c) represents the chemical o in concentration c, the odorant vector dP (o; c) represents the human perception of this odorant, or simply its odor. From this perspective, the psychophysical space is the one on which we should focus, since the odor communication system is designed to directly work within it.

There are profound inter-relations between the psychophysical space and the sensory space. The brain itself is the tool that maps the r-dimensional odorant vectors dB(o; c) into their corresponding l-dimensional odorant vectors dP(o; c). Ignoring dynamical phenomena, such as adaptation, this mapping is considered robust, in the sense that identical inputs dB(o; c) evoke approximately the same outputs dP(o; c). This suggests a way to "fool" the human brain: if a certain odorant with a smell dP(o; c) elicits a neuronal response dB(o; c), then the same smell would be perceived if we succeed in developing a mixture of palette odorants that elicits the same neuronal response. The problem is that gathering data on the behavior of the olfactory neurons is hard, and not much information is currently available. Moreover, the effect of mixtures on neuronal response has not yet been completely unravelled, making the prediction of mixture perception impossible. For this reason we would like to avoid the necessity of working with the odorant vectors dB(o; c), which leads to working with sniffers and human panels, as we shall see.

The sniffer space

The sensors inside an eNose are made using diverse technologies. Depending on the type of sensor, a certain physical property changes as a result of exposure to a gaseous chemical. During the measurement process a signal is obtained by constantly recording the value of this physical property. Since a typical signal comprises a few hundred measured values, a process of feature extraction is frequently required: finding a small set of parameters that somehow represent the entire signal.
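A minimal sketch of such a feature-extraction step for a single sensor's response curve. The particular features chosen here (peak amplitude, time-to-peak, integrated response, steepest rise) are illustrative assumptions, not a standard set.

```python
import numpy as np

def extract_features(signal, dt=0.1):
    """Reduce one sensor's response curve, sampled every dt seconds,
    to a handful of scalar features (illustrative choices only)."""
    s = np.asarray(signal, dtype=float)
    r = s - s[0]                     # baseline-corrected response
    peak = r.max()                   # maximum response amplitude
    t_peak = r.argmax() * dt         # time at which the peak occurs
    area = r.sum() * dt              # integrated response (rectangle rule)
    slope = np.diff(r).max() / dt    # steepest rise of the curve
    return np.array([peak, t_peak, area, slope])

# a synthetic rise-and-decay response curve standing in for sensor data
t = np.arange(0, 10, 0.1)
signal = 2.0 * (1 - np.exp(-t)) * np.exp(-0.2 * t)
features = extract_features(signal)
```

Concatenating such per-sensor features across the whole array yields the feature vector discussed next.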

The set of features extracted from all the signals in a single measurement is called the feature vector, and if there are m features the vector can be viewed as an odorant vector in the m-dimensional sniffer space. When exposed to mixtures of chemicals, e-Noses produce a feature vector that reflects the combined effect of the mixture constituents. Yet, the feature vectors of a mixture do not noticeably differ in any aspect from those of pure chemicals, and in this sense e-Noses do not distinguish pure chemicals from mixtures.

As the brain maps the sensory space into the psychophysical space, we can think of an analogous algorithm that maps odorant vectors in the sniffer space to their corresponding odorant vectors in the psychophysical space. We shall call this the mapping algorithm, and denote it by the function f; hence, dP(o; c) = f(dS(o; c)).
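If both spaces behave approximately linearly, the mapping f can be approximated by a matrix learned from paired sniffer/panel measurements via least squares. The sketch below fabricates the paired data; in a real system it would come from a measured odorant database and panel sessions.

```python
import numpy as np

# Hypothetical training data: each row pairs one odorant's sniffer
# fingerprint (m features) with its panel-derived psychophysical
# vector (l descriptors). Synthetic numbers for illustration.
rng = np.random.default_rng(0)
m, l, n_odorants = 16, 4, 200
F_true = rng.normal(size=(l, m))       # unknown "true" linear map
X = rng.normal(size=(n_odorants, m))   # sniffer-space vectors dS
Y = X @ F_true.T                       # psychophysical vectors dP

# learn f as the least-squares linear map satisfying dP ~ F @ dS
F, *_ = np.linalg.lstsq(X, Y, rcond=None)
F = F.T

def f(dS):
    """Map a sniffer fingerprint to a psychophysical vector."""
    return F @ dS
```

With exactly linear data the fitted map reproduces the panel vectors; real data would of course carry noise and nonlinearity.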

The MTM Algorithm

Now that we are equipped with the notion of odor spaces, we can restate the algorithmic scheme in more accurate terms. Let the whiffer contain n palette odorants, and let ti stand for the i'th of these. We use the generic term pEi.vi to denote an odorant vector that represents palette odorant i in concentration vi in some odor space E. For example, if E is the sniffer space S, then pSi.vi would be the m-dimensional odorant vector dS(ti; vi). If E is the psychophysical space P, then pPi.vi would be the l-dimensional odorant vector dP(ti; vi). In this way, pEi can be viewed as an operator that is applied to the concentration vi to yield some representation of the i'th palette odorant in concentration vi. Notice that we use the symbol vi, rather than c, to denote the concentration of the i'th palette odorant; this is to distinguish the palette odorants from other odorants, for which we use c. We define the mixing vector v = (v1, ..., vn)^T to be the list of palette odorant concentrations in a particular mixture. In accordance with our earlier notations, we represent a palette mixture in the odor space E by PE.v, with v being the mixing vector and PE being an as-of-yet unspecified operator.

Let (o; c) be an arbitrary odorant. The role of the mixing algorithm is to find a mixing vector v, such that the perception of PE.v is as similar as possible to that of (o; c). More formally, we would like dP(o; c) to be as close as possible to PP.v; i.e., we are seeking

    min over v of ||dP(o; c) - PP.v||        (1)

with ||.|| some appropriately chosen norm. The general scheme of the mixing algorithm discussed above is described in Figure 2. The sniffer provides the algorithm with a measured odorant vector dS(o; c). The mapping algorithm then transforms this vector into the odorant vector dP(o; c) in the psychophysical space. Following this, based on the specific palette that resides in the whiffer, the algorithm calculates from (1) the mixing vector v, and transmits it to the whiffer. The whiffer then prepares the corresponding mixture and releases it.

We are now in a position to describe our algorithm. In the interest of clarifying its dynamics, we have chosen to describe its development in three stages, each adding a further complication.

Fooling the sniffer

Let us consider first the problem of "fooling" the sniffer. We want to find a way of presenting an eNose S with a palette mixture that mimics the original odor it was given. Formally, let (o; c) be an odorant, represented by the m-dimensional odorant vector dS(o; c). We want to find a mixing vector v such that when given PS.v the sniffer S will produce a fingerprint as similar as possible to the one elicited by (o; c) itself. This is a simplified version of the mixing problem. First, it does not require any space-to-space mapping, since we are working in a single space, the sniffer space. Second, fooling an eNose, whose fingerprints are relatively controllable and are easily measured and studied, seems on the face of it to be simpler than fooling human perception. Dealing with this problem first will provide us with insight regarding the solution of the more general problem.



In analogy with (1), our task is to find a vector v that satisfies

    min over v of ||dS(o; c) - PS.v||        (2)

Notice that unlike (1), here both odorant vectors are taken in the sniffer space.
Let us now discuss such a PS in a relatively simple special case. An m-dimensional sniffer space for a sniffer S is called linear if it has the following properties:

(1) Linearity of response: For an odorant (o; c), each of the elements dSi(o; c) is proportional to the odorant's concentration. That is, dSi(o; c) = aSi(o).c, where aSi(o) is an odorant-dependent constant. Denoting aS(o) = (aS1(o), ..., aSm(o))^T, we can write this property in the compact form dS(o; c) = aS(o).c.

(2) Additivity of mixtures: The odorant vector describing a mixture is the vector sum of the odorant vectors of the individual constituents.

For a linear sniffer, the operators pSi are simply multiplications by constant vectors, pSi.vi = aS(ti).vi. Similarly, the operator PS is just a multiplication by a matrix, PS.v = AS.v, where the columns of AS are the vectors aS(t1), ..., aS(tn). If we take ||.|| to be the standard Euclidean norm, then finding v is equivalent to solving the well-known least squares problem

    min over v of ||dS(o; c) - AS.v||

Actually, v is constrained to be a non-negative vector. Thus, had the sniffer space been linear, the mixing vector would have been easily calculated as the minimizer of a constrained least squares problem.
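This constrained least squares problem can be solved with any non-negative least squares (NNLS) routine; the sketch below uses a simple projected gradient iteration to stay dependency-free (scipy.optimize.nnls would be the usual off-the-shelf choice). The palette matrix and fingerprints here are synthetic.

```python
import numpy as np

def nnls_pg(A, d, iters=10000):
    """Solve min_v ||d - A v||^2 subject to v >= 0 by projected
    gradient descent (a simple stand-in for a proper NNLS solver)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / largest eigenvalue of A^T A
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ v - d)             # gradient of the squared residual
        v = np.maximum(v - step * grad, 0.0) # step, then project onto v >= 0
    return v

# palette matrix AS: columns are the sniffer fingerprints of n palette
# odorants at unit concentration (synthetic numbers for illustration)
rng = np.random.default_rng(1)
A = rng.normal(size=(16, 5))
v_true = np.array([0.3, 0.0, 1.2, 0.0, 0.7])  # a sparse mixture
d = A @ v_true                                 # fingerprint of that mixture
v = nnls_pg(A, d)
```

Because the true mixture is itself non-negative, the iteration recovers it; for fingerprints of odorants outside the palette's span, it returns the closest non-negative mixture instead.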

Fooling a different sniffer

Suppose now that we have two different sniffer spaces, S1 and S2, with odorant vectors dS1(o; c) and dS2(o; c) of dimensions m1 and m2, respectively. Can we digitize an odorant (o; c) with the first sniffer and then produce a mixture of palette odorants such that the second sniffer will be fooled into thinking it is (o; c)? To the best of our knowledge, no such mapping between sniffer spaces has ever been proposed, and it need not exist in general; for example, the data provided by a single QMB sensor will probably not suffice to predict the response of some MOX sensor. Single-sensor eNoses are, however, not realistic. We claim that for reasonable sniffers, with an adequate multitude of sensors, a good mapping can indeed be found. When a sniffer consists of an array of diverse sensors, it is likely to capture the physical information needed to characterize a certain odorant. At least in theory, this information is all that is needed in order to predict the response of another sniffer with similar information content. Put differently, finding the mapping g : S1 -> S2 is more likely to be possible when m1 is large, and when the sensors are as diverse as possible. In our ongoing research, S1 is the MOSES II eNose, with its 16 different sensors based on two completely different technologies.

Once this mapping is found, we would read in the input odor in S1, yielding the m1-dimensional odorant vector dS1(o; c), and then compute the mapping into the space S2, yielding the m2-dimensional odorant vector dS2(o; c). This vector would then be used to fool the second sniffer, S2. Our experiments show that the sniffer space is adequately linear.

Fooling the human brain

The human nose, with its hundreds of receptor types and complex biological machinery, can be viewed simply as a special case of a sniffer. Like any other sniffer, it takes an odorant (o; c) and represents it by an odorant vector dP(o; c).



However, mapping vectors from an artificial sniffer into the biological human "sniffer" will probably be far more challenging than mapping one eNose into another. The difficulty lies in the fact that the two systems, the biological and the artificial, differ greatly in their detection mechanisms: the olfactory receptors (ORs) operate on very different principles than chemical sensors. As mentioned earlier, biosensors for eNoses are being developed by several research groups; once they are eventually incorporated in eNoses, this difficulty can be expected to be removed. Our point here is that even for "standard" eNoses that use conventional chemical sensors, there is evidence that the resulting fingerprints can be used to infer psychophysical data.

Over a wide range of concentrations between the threshold value and the saturation value, the perceived intensity usually obeys a power law I(o; c) = k.c^n, with k and n being odor-specific constants. This is definitely not linear, but it has also been observed that n is usually close enough to 1 to allow the linear approximation I(o; c) ~ k.c to hold in a reasonable range of low concentrations. As explained earlier, real-world applications require only low concentrations, so this linear approximation might very well be adequate for the kind of odor communication system we propose.
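The odor-specific constants k and n of the power law can be estimated from intensity ratings by linear regression in log-log coordinates, since log I = log k + n log c. A sketch on synthetic ratings generated from an assumed k and n:

```python
import numpy as np

# synthetic psychophysical data obeying I = k * c**n exactly
k, n = 2.0, 0.6
c = np.logspace(-2, 1, 20)   # concentrations spanning three decades
I = k * c ** n               # perceived intensities

# log I = log k + n * log c  ->  fit a straight line in log-log space
n_est, logk_est = np.polyfit(np.log(c), np.log(I), 1)
k_est = np.exp(logk_est)
```

With noisy panel ratings the same fit yields least-squares estimates of k and n rather than exact values.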

Palette odorants

Our odor communication system is based on the belief that there exists a set of palette odorants that can be mixed so as to mimic (up to a certain tolerance) any desired odor perception. Since to the best of our knowledge such an odorant palette has never been realized, the belief in its existence requires some justification. We start with a somewhat philosophical argument, and then provide some experimental observations to support it.

Relevant research indicates that we may assume that if two different stimuli elicit identical responses of the ORs, the human perception thereof will be identical. Thus, it is the response of the receptors that has to be mimicked. An incoming stimulus elicits a spatio-temporal response of the olfactory nerve cells in the epithelium. This response is the combined result of many factors (such as the type of the odorant, its concentration, and its temporal behavior), and it reflects the entire available information regarding the specific stimulus. This information is encoded into the odorant vector dB, which is considered to be the input for the cerebral analysis process. Since this process ends up with the ability to classify the odorant, to estimate its concentration, and to describe it, all this information must be somehow included in the response pattern, yielding the conclusion that identical response patterns will result in the same sensation regardless of the way they were formed. It is now reasonable to assume that any such set of responses can be viewed as a (possibly nonlinear) superposition of patterns, which, when deciphered, can be reformulated as mixtures of suitably chosen palette odorants. Thus, if we can prepare a mixture of palette odorants whose collective effect on the olfactory nerve cells is similar to the effect of the original odorant, the perception of the mixture will very closely resemble the perception of that odorant.
The fundamental experimental observation that should be considered here is the fact that a mixture is usually perceived by humans as a new odor. This is actually experienced by every individual on a daily basis, with the distinct aroma of food products, beverages, coffee, perfumes, etc., all being odorant mixtures comprising usually hundreds of different odorous volatile chemicals.

Furthermore, the number of glomeruli activated when sniffing a mixture is similar to that activated when sniffing pure chemicals. Similarly, the number of odor qualities perceived by a human panel responding to a mixture is similar to that perceived when responding to pure chemicals. Had a typical smell been adequately described only by its full complement of roughly 100 unique compounds on average, the number of palette odorants would have been driven impractically large. Fortunately, this is not the case. It is known that even the most complex odors can be mimicked by mixtures of a relatively small number of ingredients. This is nicely seen in the food industry, where people are interested in generating certain smell perceptions using simple artificial blends, known as aroma models. Very complex aromas, such as those in wines, coffee brews, tomato paste, boiled beef and the like, are made of mixtures of many hundreds of chemicals. Yet, certain techniques have been designed to extract those compounds that have the strongest impact on the smell, and only those are used in the aroma models. Typically, the original smell is reproduced with 10-30 compounds at most.

Summary of the MTM algorithm

Devising:
(1)Prepare a (preferably large) database of odorants, and pass them through an appropriate human panel, obtaining the odorant vectors dP(o; c) in the psychophysical space.
(2)Measure the same odorants by a sniffer S, obtaining the odorant vectors dS(o; c).
(3)Learn the mapping f between the sniffer space and the psychophysical space.
(4)Choose a whiffer palette of size n.
(5)Compute the operator PP for the palette odorants. (This is best done by measuring the palette odors directly by the human panel; alternatively, they can be measured by the sniffer S and then subjected to the mapping f.)


Using:

(1) Sample an input odorant (o; c) using the sniffer S, thus obtaining dS(o; c).
(2) Map the resulting fingerprint from the sniffer space to the psychophysical space, dP (o; c) = f(dS(o; c)).
(3) Find the non-negative mixing vector v as the minimizer of ||PP·v - dP(o; c)|| over all v >= 0, i.e. the mixture whose predicted perception is closest to that of the input odorant.

(4) Prepare and release a mixture of the palette odorants according to the vector v.
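Step (3) of the "using" phase is a non-negative least-squares problem. The sketch below (Python, with illustrative toy data; the names mtm_mix, P, d and the projected-gradient solver are our assumptions, not from the paper) shows one simple way to compute the mixing vector:

```python
import numpy as np

def mtm_mix(P, d, steps=5000, lr=None):
    """Find a non-negative mixing vector v minimising ||P @ v - d||.

    P : (m, n) matrix whose columns are the palette odorants'
        vectors in the psychophysical space (the operator PP).
    d : (m,) target odorant vector dP(o; c), already mapped from
        the sniffer space by the learned mapping f.
    A simple projected-gradient solver; illustrative only.
    """
    if lr is None:
        # A step size below 1/L, with L the largest eigenvalue of
        # P^T P, guarantees convergence of gradient descent.
        lr = 1.0 / np.linalg.norm(P, 2) ** 2
    v = np.zeros(P.shape[1])
    for _ in range(steps):
        grad = P.T @ (P @ v - d)          # gradient of 0.5*||Pv - d||^2
        v = np.maximum(v - lr * grad, 0)  # project onto v >= 0
    return v

# Toy palette: 3 palette odorants in a 4-dimensional perception space.
P = np.array([[1.0, 0.0, 0.2],
              [0.0, 1.0, 0.1],
              [0.5, 0.5, 1.0],
              [0.2, 0.3, 0.4]])
true_v = np.array([0.7, 0.0, 1.2])   # a mixture of odorants 1 and 3
d = P @ true_v                       # the perception to reproduce
v = mtm_mix(P, d)                    # recovers approximately true_v
```

In practice step (4) would then drive the whiffer's reservoirs according to v; any components of v clamped to zero simply mean those palette odorants are left out of the mixture.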

Choosing the hardware

The algorithmic scheme outlined above can work with any sniffer and any whiffer. Even an extremely poor sniffer that yields very little information, and a primitive whiffer with a small number of palette odorants and a coarse mixing ability, can be used; the MTM algorithm will produce results and the whiffer will emit the computed mixture, the best possible under the circumstances.

The point is that the results will be only as good as the hardware, and vice versa: better hardware will cause our scheme to produce better results. The situation regarding sniffers is good: more and more eNose types are being developed, using continuously improving sensor technologies. We hope that the ideas presented in this paper will have a productive effect on eNose manufacturers, since we envision a far broader spectrum of applications thereof.

Whiffers seem to evolve much more slowly. But, as we have shown in our design and construction of iSmell, the technology is available and the job can be done. We are confident that building and marketing high-quality commercial whiffing devices is possible. Indeed, it is inevitable.

Choosing the palette

One major aspect of the whiffer can benefit from the ideas presented here: the construction of the palette. The two key features of the palette are its size n and the particular palette odorants it contains. A palette designer should be concerned with determining both of these.

In a typical application of our scheme, we expect n to be given, being constrained by the limitations of the technology used, by the desired accuracy and by cost. Let us use the term tolerance to denote a measure of the extent to which the perception of the computed mixture PP·v deviates from that of the original odorant, dP(o; c). The exact formulation of the tolerance depends on the specific structure of the odor spaces involved.

In principle, a larger palette allows for a smaller tolerance. However, large palettes are more expensive and more difficult to build, hence a compromise between palette size and tolerance must be made. If there were no constraints on the palette, we could simply choose n to be large enough for the palette to contain all possible distinct aromas, which is at least on the order of 10^4, and very far from the ability of current whiffer technology. To be realistic, we must assume that for the near future n will be under 300.

As to choosing the palette odorants themselves, we envision an algorithm which, given the desired size n and a large collection of candidate odorants, computes the “best" n odorants for the palette. Such an algorithm can indeed be constructed, based on ideas similar to the ones reported upon here, and taking into account accumulated information about the psychophysical space (such as the density distribution of the various odorants). It is not out of the question that such an algorithm could also be used to tailor special palettes to specific application areas, to desired tolerance, to constraints on mixing ratios or quantities, etc.
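The palette-selection algorithm itself is not spelled out here. Purely as an illustration of the idea, below is a hypothetical greedy heuristic in Python that picks the n candidates which best reconstruct a sample of the psychophysical space; the function name, the sample-based error measure, and the use of an unconstrained least-squares proxy are all our assumptions:

```python
import numpy as np

def choose_palette(candidates, n, samples):
    """Greedily pick n palette odorants from `candidates` so that the
    chosen set reconstructs a sample of the psychophysical space well.

    candidates : (k, m) array, one candidate odorant vector per row.
    samples    : (s, m) array of odorant vectors representing the
                 density distribution of the psychophysical space.
    At each step, add the candidate that most reduces the total
    least-squares reconstruction error of the samples.
    """
    chosen = []
    remaining = list(range(len(candidates)))
    for _ in range(n):
        best, best_err = None, np.inf
        for i in remaining:
            cols = candidates[chosen + [i]].T        # (m, |chosen|+1)
            # Unconstrained least-squares residual as a cheap proxy
            # for the non-negative mixing error used at run time.
            coef, *_ = np.linalg.lstsq(cols, samples.T, rcond=None)
            err = np.linalg.norm(cols @ coef - samples.T)
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(0)
candidates = rng.random((8, 5))      # 8 candidate odorants, 5-D space
samples = rng.random((20, 5))        # 20 representative odorants
palette = choose_palette(candidates, 3, samples)
```

A real version would use the non-negative mixing error directly and could incorporate the application-specific constraints mentioned above (tolerance targets, mixing ratios, quantities).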

Another interesting option in palette design is to adopt a multi-tier approach. There might be advantages in building the palette so that the palette odorants are arranged in tiers. In this way, mixtures can be prepared by taking larger quantities from the higher levels (catering for coarser descriptions), adding lower level odorants to fine-tune the output -as a kind of “salt-and-pepper" stage. Of course, the physical reservoirs for the palette odorants inside the whiffer can then be of different sizes, reflecting the differences in the typical use-rates of the various levels.

Choosing a tolerance

What is a reasonable value for the tolerance? We cannot give a number at this stage, but we can claim that for many applications, reasonably good performance is expected even with a high tolerance (less accurate mixtures). As human beings, we are mainly driven by visual and verbal stimuli. Whenever these are in conflict with olfactory impressions, the brain tends to “twist” these impressions so that they fit the visual or verbal input. This leads to the phenomena known as olfactory illusions, which can be as severe as causing subjects to think they are actually smelling an odorless liquid. Consequently, for the average consumer, poor mimicking ability can be compensated by visual and verbal cues, at least to some extent. For example, sniffing a garlic-like substance while watching a TV pizza commercial might suffice to convince many viewers that they are actually smelling pizza.

Applications

The applications of odor communication are far-reaching and diverse, and include scented movies, scented computer games, scented email attachments, scented commercials, and electronic purchase of odorous products (foods, perfumes, detergents, etc.). Some of the applications do not require the entire setup, and can do with only portions of the system. For example, sniffers can be left out of day-to-day usage in cases where the output is known to be a member of a pre-determined set of odors; a preprocessing stage can be carried out that computes the required mixtures in advance.

Conclusion

From the above description we can conclude that, although there are certain difficulties in implementation, odor communication could make our daily lives noticeably easier if widely adopted. It could be used in online business such as e-shopping: instead of merely seeing and imagining products, we could also smell them, getting a better idea and deciding more satisfactorily. It can also be used in other areas of entertainment and study.

References

D. Harel et al.
Amoore, J. E.; Johnson
www.elsevier.com

Tuesday, December 2, 2008

ABSTRACT

MIMO is a technique for boosting wireless bandwidth and range by taking advantage of spatial multiplexing.

MIMO algorithms in a radio chipset send information out over two or more antennas. The radio signals reflect off objects, creating multiple paths that in conventional radios cause interference and fading. But MIMO uses these paths to carry more information, which is recombined on the receiving side by the MIMO algorithms.

A conventional radio uses one antenna to transmit a data stream. A typical smart antenna radio, on the other hand, uses multiple antennas. This design helps combat distortion and interference. Examples of multiple-antenna techniques include switched antenna diversity selection, radio-frequency beamforming, digital beamforming and adaptive diversity combining.

These smart antenna techniques are one-dimensional, whereas MIMO is multi-dimensional. It builds on one-dimensional smart antenna technology by simultaneously transmitting multiple data streams through the same channel, which increases wireless capacity.

INDEX

1.TRENDS IN WIRELESS AND MOBILE COMMUNICATION
2.MIMO Technology in wireless communications
3.MULTIPATH FADING
4.ANTENNAS IN WIRELESS COMMUNICATION
5.WHAT IS MIMO?
6.MIMO SCALABILITY
7.EXAMPLES OF MIMO TECHNOLOGY
8.HSDPA and MIMO
9.MIMO HARDWARE REQUIREMENTS
10.APPLICATION AND BENEFITS OF MIMO
11.COMPANIES USING MIMO TECHNOLOGY
12.CHALLENGES IN MIMO DESIGN
13.UNDERSTANDING MIMO
14.PROMISES MADE BY MIMO
15.LIMITATIONS IN MIMO
16.BIBLIOGRAPHY

TRENDS IN WIRELESS AND MOBILE COMMUNICATION

The last few years have seen rapid development of wireless technologies. The stage is set for third-generation (3G) technology, and R&D is already aiming at fourth-generation (4G) technology (see fig 1 and 2).

Fig 1: PCN evolution/migration

The 2G technology for mobile communication revolved mainly around GSM for voice communication. It was focused on voice services with circuit switching, whereas the current 2.5G technology is focused on circuit-switched voice service and packet-switched data service.

The 2G technology offered quite satisfactory voice communication services, but with growing data traffic, the 3G technology has mainly targeted data services, particularly Internet traffic. Thus the main service component of the 3G technology is quality, reliable Internet data traffic. The migration from 2G to 3G was started to provide new reliable services with minimal investment. To offer quality service and advanced traffic management, the asynchronous transfer mode (ATM) technology is being explored with the IMT 2000 core network node system.

The 3G technology still evolves around GSM-based IMT 2000, or the universal mobile telecommunication system (UMTS), although alternatives like the freedom of mobile multimedia access (FOMA) of Japan and the GSM-evolved core network (CN) exist. Comprehensive, broadband, integrated mobile communication will step forward into all-mobile 4G services and communication. The 4G technology will be a migration from the earlier generations of mobile services to overcome the limitation of boundaries and achieve total integration.

The evolutionary approach towards a wireless information age is shown in fig 1. The 4G systems will be developed to provide high-speed transmission, next-generation Internet support (IPv6, VoIP and mobile IP), high capacity, seamless integrated services and coverage, utilization of higher frequencies, lower system cost, seamless personal mobility, standard mobile multimedia, efficient spectrum use, quality of service (QoS), reconfigurable networks and end-to-end IP systems.

MIMO Technology in wireless communications

Digital communication using MIMO (multiple-input multiple-output), also called volume-to-volume wireless links, is emerging as one of the most promising research areas in wireless communications. In wireless MIMO, the transmitting end as well as the receiving end is equipped with multiple antenna elements; as such, MIMO can be viewed as an extension of the very popular ‘smart antennas’. In MIMO, though, the transmit and receive antennas are jointly combined in such a way that the quality (bit error rate) or the rate (bits/sec) of the communication is improved. At the system level, careful design of MIMO signal processing and coding algorithms can dramatically increase capacity and coverage, and thus improve the economics of network deployment for operators. Today, MIMO wireless is widely recognized as one of three or four key technologies in the forthcoming high-speed, high-spectral-efficiency wireless networks (4G, and to some extent 3G). Applications also exist in fixed wireless and wireless LAN networks.

Progress in MIMO research poses strong scientific challenges in the areas of modeling (of mobile space-time wireless channels), information theory (coding, channel capacity and other bounds on information transfer rates), signal processing (signaling and modulation design, receiver algorithms), and finally the design of the wireless fixed or mobile networks that will incorporate those MIMO links in order to maximize their gain. More specifically, joint design of sensible multiple access solutions (CDMA, OFDMA, TDMA and variants) as well as medium access (MAC) protocol for wireless MIMO is challenging.

MULTIPATH FADING

Wireless technologies are not free from problems like the limited available frequency spectrum, fading and multipath fading. Fading results in a sudden drop of signal power at the receiver. Multipath fading results when the transmitted signal bounces off objects like buildings, office cabinets and hills, creating multiple paths for the signal to reach the receiver. The same transmitted signal that follows the different paths reaches the receiver at different times with different phases. Added together, the several incidences of the same signal with different phases and amplitudes may cancel each other, causing signal loss or a drop of signal power.
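The cancellation effect described above can be seen numerically. The following Python snippet (with an illustrative low frequency chosen purely so the waveform is well sampled; the effect is the same at real carrier frequencies) adds two equal-amplitude copies of a signal that arrive half a wavelength apart:

```python
import numpy as np

# Two copies of the same signal arriving over different paths. When the
# extra path delay shifts the phase by half a wavelength (pi radians),
# the copies cancel almost completely: a deep multipath fade.
t = np.linspace(0.0, 1e-6, 1000)                # 1 microsecond window
f = 5e6                                         # illustrative 5 MHz tone
direct = np.sin(2 * np.pi * f * t)
reflected = np.sin(2 * np.pi * f * t + np.pi)   # delayed path, pi out of phase
combined = direct + reflected
power_direct = np.mean(direct ** 2)             # about 0.5
power_combined = np.mean(combined ** 2)         # essentially zero
```

With other phase offsets the two paths only partially cancel (or reinforce), which is why received power fluctuates as the receiver moves.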

The consequences of multipath fading (fig 3) may be delay spread, short-term fading, long-term fading and the Doppler effect. Delay spread results in spreading of the transmitted pulse on the time axis and even in generation of multiple low-amplitude pulse trains. It occurs in fixed radio stations.

In mobile environments, as the channel condition changes with the motion of the receiver, fading causes a short-term effect, resulting in fluctuation of the received power over time. The receiver may not adapt to the changes, which degrades the service quality. Short-term fading occurs over short durations. Long-term fading results in decreased received power over long times/distances, as the moving receiver usually goes further away over time.

The Doppler effect occurs in fast-moving mobiles. It results in a random shift of the frequency. Multipath fading, in effect, causes either low received signal power or degraded quality of service, both of which are highly undesirable in the future all-wireless and mobile communication. The low received power increases the bit error rate, which, in turn, limits the data rate.

ANTENNAS IN WIRELESS COMMUNICATION

A conventional radio uses one antenna to transmit a data stream.
A smart antenna radio uses multiple antennas to transmit a data stream.
Smart antenna techniques are one-dimensional.
MIMO uses multi-dimensional antenna techniques.
It builds on one-dimensional smart antenna technology by simultaneously transmitting multiple data streams through the same channel, which increases wireless capacity.
The use of antennas at both transmitter and receiver allows –
1.Multiplicative increase in capacity and spectral efficiency
2.Dramatic reductions of fading thanks to diversity
3.Increased system capacity (number of users)
4.Lower probability of detection
5.Improved resistance to interference

Smart antenna techniques use multiple antennas to improve wireless performance and reliability
– Antennas themselves are “dumb” pieces of metal
– “Smartness” comes from signal processing that is applied to the multiple antennas
– There are differing degrees of smartness

Conventional, “single-dimension” (1D) smart antenna techniques transmit just one data stream per channel
– RF beamforming
– Digital beamforming
– Digital receive diversity combining

• MIMO makes smart antennas “multi-dimension”
– Multiple data streams in the same channel
– 2-D signals

WHAT IS MIMO?

Multiple Input Multiple Output (MIMO) is a smart antenna technique that increases the speed, range, reliability and spectral efficiency of wireless systems. MIMO is a wireless technology conceived in the mid-90s: a technique for boosting wireless bandwidth and range by taking advantage of spatial multiplexing. It is based on an entirely new paradigm for digital signal processing that multiplies the data rate throughput achievable in wireless communication products. It greatly improves the reliability, range and robustness of the connection, providing a much better user experience that is closer to “wired” Ethernet quality.

MIMO is one technology being considered for 802.11n, a standard for next-generation 802.11 that boosts throughput to 100M bit/sec. In the meantime, proprietary MIMO technology improves performance of existing 802.11a/b/g networks.

During the 1990s, Stanford University researchers Greg Raleigh and VK Jones showed that a characteristic of radio transmission called multipath, which had previously been considered an impairment to radio transmission, is actually a gift of nature. Multipath occurs when signals sent from a transmitter reflect off objects in the environment and take multiple paths to the receiver. The researchers showed that multipath can be exploited to multiplicatively increase the capacity of a radio system.

If each multipath route could be treated as a separate channel, it would be as if each route were a separate virtual wire. A channel with multipath then would be like a bundle of virtual wires. MIMO uses multiple, spatially separated antennas. MIMO encodes a high-speed datastream across multiple antennas. Each antenna carries a separate, lower-speed stream. Multipath virtual wires are utilized to send the lower-speed streams simultaneously.
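The idea of encoding one high-speed stream across several antennas can be sketched as a simple round-robin demultiplexer. This is an illustrative toy only (function names are ours), ignoring modulation, coding and the channel estimation that a real MIMO receiver needs to separate the streams:

```python
def split_stream(bits, n_antennas):
    """Demultiplex a high-speed bit stream into n lower-speed streams,
    one per transmit antenna (round-robin), as in spatial multiplexing."""
    return [bits[i::n_antennas] for i in range(n_antennas)]

def merge_streams(streams):
    """Receiver side: interleave the recovered per-antenna streams
    back into the original order."""
    out = []
    for i in range(max(len(s) for s in streams)):
        for s in streams:
            if i < len(s):
                out.append(s[i])
    return out

bits = [1, 0, 1, 1, 0, 0, 1, 0, 1]
streams = split_stream(bits, 3)   # three antennas, three slower streams
recovered = merge_streams(streams)
# recovered == bits: each "virtual wire" carried a third of the data
```

Each slower stream rides its own spatial path, so the aggregate rate scales with the number of streams, which is exactly the scalability discussed next.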

MIMO SCALABILITY

One of the benefits of MIMO technology is its ability to scale data transmission speed with the number of antennas and radio and signal processing hardware. When coupled with the increasing integration levels governed by Moore’s law, it provides a communications roadmap to the future.

The data rate of a SISO system is determined by:

R = ES * BW

Where R is the data rate (bits/second or bps),
ES is the spectral efficiency (bits/second/Hertz or bps/Hz),
and BW is the communications bandwidth (Hz).

For instance, for 802.11a, the peak data rate is obtained by:
BW = 20MHz
ES = 2.7 bps/Hz
yielding R = 54Mbps

SISO systems obtain greater performance by using greater
bandwidth. For instance, Atheros’ Turbo® mode allows for:
BW = 40MHz
ES = 2.7 bps/Hz
yielding R = 108Mbps

Using MIMO, an additional variable is introduced – the number of independent data streams, NS, that are communicated simultaneously in the same bandwidth, in different spatial paths. The spectral efficiency is now measured per-stream as ESS. The data rate of a MIMO system becomes:
R = ESS * BW * NS


For the current 802.11n proposal, there are 10, 20, and 40MHz modes allowed, yielding peak rates with the following parameters:
BW = 10, 20, or 40MHz
ESS = 3.6 bps/Hz (BW = 10 or 20)
ESS = 3.75 bps/Hz (BW = 40)
NS = 2, 3, 4
yielding R=144Mbps (20MHz, Ns = 2)
yielding R=300Mbps (40MHz, Ns = 2)
yielding R=600Mbps (40MHz, Ns = 4)

Thus peak data rates ranging from 144Mbps to 600Mbps can be obtained by modifying the bandwidth and number of spatial streams.
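The rate formula R = ESS * BW * NS and the figures above are easy to check with a few lines of Python (the function name is ours):

```python
def mimo_rate_mbps(ess_bps_hz, bw_mhz, n_streams=1):
    """Peak data rate R = ESS * BW * NS, in Mbps.
    ess_bps_hz: per-stream spectral efficiency (bps/Hz)
    bw_mhz:     channel bandwidth (MHz)
    n_streams:  number of spatial streams NS (1 for SISO)
    """
    return ess_bps_hz * bw_mhz * n_streams

# SISO 802.11a: 2.7 bps/Hz over 20 MHz -> 54 Mbps
r_80211a = mimo_rate_mbps(2.7, 20)
# 802.11n proposal examples from the text:
r_144 = mimo_rate_mbps(3.6, 20, n_streams=2)    # 144 Mbps
r_300 = mimo_rate_mbps(3.75, 40, n_streams=2)   # 300 Mbps
r_600 = mimo_rate_mbps(3.75, 40, n_streams=4)   # 600 Mbps
```

Note that the per-stream efficiency ESS stays in the same few-bps/Hz range as SISO; the multiplicative gain comes entirely from BW and NS.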

HSDPA and MIMO

HSDPA is a packet-based data service in the W-CDMA downlink with data transmission up to approximately 10 Mbps over a 5MHz bandwidth.
MIMO technology is used, in which multiple antennas are implemented at both base stations and mobile terminals, to attain a speed of 20Mbps.
At the transmitter, the information bits are divided into several bit streams and transmitted through different antennas.
The transmitted information is recovered from the signals received at multiple receive antennas by using an advanced receiver.
Due to the high data rate transmission, the trade-off between complexity and system performance becomes an important issue, especially for UE designs.

MIMO HARDWARE REQUIREMENTS

In order to maintain multiple independent data streams, multiple RF and baseband chains are required. There must be at least as many chains on each side as the number of spatial streams. I.e.:
NS = min(NR, NT)

In practice, to obtain better radio link robustness, NR and/or NT are typically chosen to be larger than NS for greater spatial diversity and link budget margin. For example, for a robust NS = 2 system, NR could be 3; or, for increased link margin, diversity, and performance with a single-stream system, NR = NT = 2 could be used.
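The stream-count rule can be stated as a trivial helper (illustrative; the function name is ours):

```python
def spatial_streams(n_rx, n_tx):
    """Maximum number of independent spatial streams: NS = min(NR, NT)."""
    return min(n_rx, n_tx)

# Robust 2-stream design from the text: NR = 3 receive chains, NT = 2.
# The extra receive chain does not add a stream; it adds diversity.
ns = spatial_streams(3, 2)   # ns == 2
```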

Figures 3 and 4 show block diagrams of the MIMO transmitter and receiver, indicating the parallelism and required data rate scaling.

The scaling factors indicate the growth in complexity of each of the blocks as a function of the design variables. This complexity in turn scales the power consumption and area of each block. The complexity scaling is due to both sample rates as well as required sample precision.

As a reference point, an 802.11g single-chip transceiver fabricated in 0.18µm CMOS, reported at ISSCC 2005, occupies 41mm2 of total area, with 72% in digital logic. In transmit mode, the system-on-a-chip consumes 498mW of power, 226mW from the digital components. In receive mode, it consumes 513mW total, 330mW from the digital components.

APPLICATION AND BENEFITS OF MIMO

For Business:

• Enables a truly wireless office – replaces Ethernet
– Improves wireless reliability and robustness
– Reduces infrastructure cost – doubles the coverage area of each AP
– Rates up to 108 Mbps in each channel – similar to wired Ethernet speed

• Improves VoIP performance
– Extends handset battery life
– Increases call capacity

For Consumers:

• One AP covers your whole home with reliable service
– Penetrates more walls at higher rates
– No need to sit in the right place to use your laptop

• Supports new wireless multimedia applications
– Whole-home coverage for high-speed broadband access
– Reliable SDTV and HDTV video transport in home networks
– Multi-service applications – voice, video, data

COMPANIES USING MIMO TECHNOLOGY

Airgo Networks Inc.'s True MIMO
Asian manufacturers Taiyo Yuden Inc. and Askey Computer Corp. are also readying products based on True MIMO
Atheros Communications Inc., Sunnyvale, Calif.
Intel Corp. has MIMO in the labs as well, with plans to include it in its Centrino chip sets.
Cisco Systems Inc.
Generation IX Technologies Inc., Los Angeles.
Broadcom releases "prestandard 802.11n" products.

CHALLENGES IN MIMO DESIGN

MIMO systems deliver greater performance, but with additional cost and power consumption. Competitive pressures of consumer markets impact tolerable cost (area), while thermal and battery life constraints limit tolerable power consumption in wireless portable devices. Additionally, mixed signal issues including coupling and cross-talk become critical in integrated high performance wireless systems which co-locate the digital circuitry with the analog RF electronics. Lastly, the quest for ultra-low cost solutions leads to additional systems-level integration of CPUs and other peripherals.

UNDERSTANDING MIMO

The first step in clarifying the confusion is to understand the major approaches taken by the different MIMO camps. In general, all agree that MIMO uses multiple antennas to send multiple distinct signals across different spatial paths at the same time, increasing throughput.

They also agree that multipath reflection isn't the enemy. In any enclosed space, radio signals propagate at different speeds through different materials and are partially or fully reflected by some materials. If you took high school physics, you might remember an experiment with a laser beam and a tank of water that showed how light can be deflected through materials of different refractive index. A simpler experiment is to look through a bottle of water (or old window glass) to see distortions as light travels at slower speeds.

Radio signals are more susceptible than light to diffraction, reflection and absorption, which has traditionally limited speed and range. The higher the data rate, the more likely it is that multiple paths for transmitted signals will emerge and have to be reassembled at the receiver.

Enter OFDM (orthogonal frequency division multiplexing), which was the bridge technology that took wireless networking from the old 802.11b standard to 802.11a and g. Instead of having data symbols - constellations of information in a signal forming a retrievable chunk - spread across a whole Wi-Fi channel, OFDM subdivides a frequency into a set of slower subchannels. A byte sent over each subchannel is much easier to recognize because it takes longer to transmit, and thus many slightly time-offset versions can be reconciled more easily.

One player at the heart of this debate is Airgo, a company that provides MIMO technology to WLAN equipment vendors such as Linksys and Belkin. The company's founder, Greg Raleigh, was an early voice in the wilderness about MIMO and he is still defending his definition of the technology. The company has even trademarked the term "true MIMO" to describe its approach.

Airgo's MIMO builds on OFDM by using spatial multiplexing in which different radio signals are sent over the same frequencies at the same time. Multipath reflection allows the transmitting and receiving antennas to essentially create a unique path in space for each signal using separate radios.

The MIMO delivered by Video54 - appearing first in equipment from Netgear - uses several antennas that can be switched off and on in 50 combinations on a per-packet basis, but with a single radio. Video54 says its technology dramatically improves the ability of a receiver to reassemble signals. This increases range and throughput.

Selina Lo, Video54's CEO, noted that "if you have a lot of redundant routes, you can always find the route that has the lowest latency or the least cost" in terms of signal usage. Atheros will offer similar technology to its OEM partner D-Link. Spatial multiplexing will be incorporated, at least in large part, into the 802.11n standard, and the folks pursuing just multiple antennas say that that's the time to add it: when both adapters and gateways can take advantage of multiple signal paths. And both approaches are currently available in the marketplace, retrofitted on top of 802.11g equipment and protocols. Besides Linksys and Belkin, SOHOware also uses the Airgo technology. Netgear has adopted the Video54 technology.

In addition, D-Link reportedly will use MIMO technology from a third vendor, Atheros. Atheros has reportedly developed a beam-forming MIMO chipset that offers features like Video54. All the technologies are different, none is standardized and each uses a different theory for delivering greater speed and range over wireless LANs.

PROMISES MADE BY MIMO

MIMO technology promises a higher data rate, higher quality of service and better reliability by exploiting antenna arrays at both sides (transmitter and receiver), which are combined such that they either generate multiple parallel spatial bit-pipes and/or add diversity to decrease the bit error rate.

The fundamental gain in MIMO is the increased data rate. MIMO is currently the only strategy for achieving both a higher data rate and better quality of service. By spreading the transmitted signal over multiple paths, MIMO technology increases the chances of signal reception at the receiver. It also increases the range of operation.


In fig 6, MIMO covers all three base regions of conventional cellular telephony. The transmitter can adjust the power and phase of the signal fed to the antennas, which allows the best transmission quality.

LIMITATIONS IN MIMO

Complex design requirements
Higher energy consumption
Complex algorithms and designs are required for the operation of multiple antennas.
Handsets and other mobile devices become costlier ($199 for the router and $129 for each laptop card, vs. $79 and $69, respectively, for comparable non-MIMO equipment).
Capacity of MIMO is low for uncorrelated signals.
Requires robust encryption

BIBLIOGRAPHY

Books:

Electronics For You (January 2006)
Wireless Communications: Principles and Practice, Theodore S. Rappaport

Links from google search:

MIMO Research @ UT Austin
carltemme@airgonetworks.com
Research on Multi-Antenna and MIMO Wireless Systems Signals
MIMO (multiple-input multiple-output)

CONTENTS

INTRODUCTION
HISTORY
General Syntax
Communicating Process Paradigm
Occam & CSP
Mobile Channel Bundles
RMoX Operating System
Conclusion & Future Work
References

ABSTRACT

The occam [Inm84a, Inm84b, Inm95, Bar92] programming language is a concurrent programming language based on the CSP [Hoa78, Hoa85, Ros97] process algebra. CSP provides a rigorous parallel model, whereby parallel processes engage in synchronous events. A process may also select between several events offered by others (external choice). In occam, CSP events appear mostly as channels (where two processes synchronise and data is copied) and at the ends of PARallel blocks — to ensure that all nested parallel processes have terminated. occam was developed between Oxford University and Inmos, with the Transputer [Inm93] in mind. The Transputer combined an on-chip micro-coded scheduler with on-chip external communications (on DS links), resulting in highly scalable multi-processor platforms for parallel processing. The T9000 Transputer additionally provided a virtual channel capability, allowing several virtual links to be multiplexed onto one physical link [MTW93]. Coupled with crossbar routing, namely the C104, this provided a solid platform for the distribution and execution of occam programs. Then we can have a look at the RMoX operating system and its architecture.

INTRODUCTION

The occam programming language is designed to express concurrent algorithms and their implementation on a network of processing components. The occam reference manual serves as a single reference and definition of the occam language. The manual describes each aspect of the language, starting with the most primitive components of an occam program, and moving on to cover the whole language in detail. The manual is addressed to a wide audience, including not only the computer scientist, software engineer and programmer, but also the electronics engineer and system designer.

Programming in occam is easy. occam enables an application to be described as a collection of processes, where each process executes concurrently and communicates with other processes through channels. Each process in such an application describes the behaviour of a particular aspect of the implementation, and each channel describes the connection between processes. This approach has two important consequences. Firstly, it gives the program a clearly defined and simple structure. Secondly, it allows the application to exploit the performance of a system which consists of many parts. Concurrency and communication are the prime concepts of the occam model. occam captures the hierarchical structure of a system by allowing an interconnected set of processes to be regarded as a unified, single process. At any level of detail, the programmer is only concerned with a small, manageable set of processes.
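occam itself is not shown here, but the process/channel model just described can be loosely imitated in Python with threads and a queue-based channel. This is a rough analogy only: real occam channels are unbuffered synchronised rendezvous, and PAR is a language construct, not a library; the end-of-stream convention below is ours.

```python
import threading
import queue

def producer(chan):
    """One concurrent process: sends values down the channel,
    roughly like  chan ! x*x  in occam."""
    for x in range(3):
        chan.put(x * x)
    chan.put(None)           # end-of-stream marker (our convention)

def consumer(chan, results):
    """A second process: receives from the channel,
    roughly like  chan ? v  in occam."""
    while True:
        v = chan.get()
        if v is None:
            break
        results.append(v)

chan = queue.Queue(maxsize=1)    # maxsize=1 approximates a rendezvous
results = []
# Like an occam PAR block: run both processes concurrently, then wait
# for both to terminate before continuing.
p = threading.Thread(target=producer, args=(chan,))
c = threading.Thread(target=consumer, args=(chan, results))
p.start(); c.start()
p.join(); c.join()
# results == [0, 1, 4]
```

In occam the same structure would be two named processes composed under PAR, with the channel declared as CHAN OF INT; the join at the end of a PAR is what the abstract above means by "ensuring that all nested parallel processes have terminated".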

HISTORY

Occam is an ideal introduction to a number of key methodologies in modern computer science. occam programs can provide a degree of security unknown in conventional programming languages such as C, FORTRAN or Pascal. occam simplifies the task of program verification by allowing the application of mathematical proof techniques to prove the correctness of programs. Transformations, which convert a process from one form to a directly equivalent form, can be applied to the source of an occam program to improve its efficiency in any particular environment. occam makes an ideal language for specification and behavioural description. occam programs are easily configured onto the hardware of a system or, indeed, may specify the hardware of a system. The founding principle of occam is a minimalist approach which avoids unnecessary duplication of language mechanisms; the language is named after the 14th-century philosopher William of Occam, who proposed that invented entities should not be duplicated beyond necessity. This proposition has become known as “Occam’s razor”. The occam programming language arises from the concepts founded by David May in EPL (Experimental Programming Language) and Tony Hoare in CSP (Communicating Sequential Processes). Since its conception in 1982, occam has been, and continues to be, under development at INMOS Limited in the United Kingdom, under the direction of David May. The support for large programs provided by occam3 is based on principles found in many current programming languages, which have been refined at INMOS by Geoff Barrett. The development of the INMOS transputer, a device which places a microcomputer on a single chip, has been closely related to occam, its design and implementation. The transputer reflects the occam architectural model, and may be considered an occam machine. occam is the language of the transputer and, as such, when used to program a single transputer or a network of transputers, provides efficiency equivalent to programming a conventional computer at assembler level. However, this manual does not make any assumptions about the hardware implementation of the language or the target system. occam is a trademark of the INMOS group of companies.

Aims of This Work

The primary objective of this work is to provide support, at the language, kernel and operating-system level, for highly concurrent dynamic parallel systems based on occam/CSP. This objective is sought in a number of ways. Firstly, by the extension and general enhancement of the occam programming language, largely by adding dynamic capabilities, plus a number of other (often trivial) extensions that bring it closer to languages such as C and Java (which rely heavily on dynamic memory allocation), whilst remaining secure against aliasing and parallel-usage errors. Secondly, by providing support for data, channel and process mobility, using a movement semantics. And finally, by improving the interface between occam programs and the operating-system environment, allowing programmers to make full use of the UNIX/POSIX [Int96] environment. As a further objective, this work aims to improve the maintainability and safety of occam code, particularly in light of the new facilities added.

Other Approaches to Compiling occam

This thesis concentrates on the KRoC occam system. However, other ways of compiling occam programs exist. The most common of these is SPoC [DHWN94] — the Southampton Portable occam Compiler. SPoC takes a different approach to compiling occam programs from KRoC. Instead of using the Inmos occam compiler, SPoC uses a compiler written from scratch using the GMD compiler-compiler toolkit [GE90], which generates portable ANSI C code from the occam source.

In a Google search for “occam compiler”, circa October 2002, SPoC appeared at the top of the list, followed by KRoC in 2nd place. Currently, KRoC appears in first place, followed by SPoC in second. This change of ordering is most likely due to the addition of KRoC to the popular ‘FreshMeat’ website (http://freshmeat.net/), that acts as a public project-management and bulletin-board system, mostly for free software.

Parallel programming is too often seen as a “hard” discipline, and one area which the majority of programmers try to avoid. This has not come about through the non-availability of parallel hardware, such as the Transputer — current hardware is more than adequate — but through the lack of appropriate software tools and infrastructures to build concurrent systems. Traditional languages like C, and more recently Java, suffer when parallelism is “bolted on”, frequently resulting in more harm than good. This arises because there is little or no control over how that parallelism is used — the programmer can write all sorts of race-hazardous code, often without realizing it. The problem of managing parallelism in these languages increases with the size of the system, leading to catastrophic failures in large systems that are almost impossible to pin down. Large non-parallel systems also suffer from such problems, that can easily be caused by variable/pointer aliasing errors. The occam language, with CSP semantics for parallelism, solves the majority of these problems: occam does not permit uncontrolled aliasing and CSP provides composable semantics for building parallel systems.

Despite this clear advantage, occam suffers from an inability to interact fully with the surrounding operating-system and hardware environments. The principal reason for this, in the KRoC occam system, is twofold. Firstly, adding significant amounts of code to the hand-coded assembly language in which the majority of KRoC run-time kernels are implemented is a difficult and daunting task. Secondly, the surrounding operating-system environment is often poorly adapted for fine-grain parallelism — for instance, executing a blocking system call from within a (KRoC) occam process will cause all parallel occam processes to be suspended while the program is blocked in the OS kernel. This makes writing programs that utilise inter-process communication (IPC) and networking difficult — difficult to the point where it may be preferable to use C, or another language, and risk race-hazard and aliasing errors.

Another of occam’s limiting factors is its lack of dynamic behaviour. Traditional occam programs can be viewed as static process graphs, with nodes representing processes and arcs representing channels. Even though parts of a program may come into existence and then disappear, all the graphs can be defined statically. One of the reasons for this static model stems from the Transputer, which had real finite memory (as opposed to virtual memory) and, by today’s standards, a much lower processing capacity. Transputers were designed to be assembled in networks to increase processing yield.

General Syntax

Syntactic notation

The syntax of occam programs is described in a modified Backus-Naur Form (BNF). As an example, the following shows the syntax of assignment
assignment = variable := expression

This means “An assignment is a variable followed by the symbol :=, followed by an expression”. A vertical bar (|) means “or”, so for example:

action = assignment
| input
| output
is the same as
action = assignment
action = input
action = output

The meaning of this syntax is “An action is an assignment, an input, or an output”.

The written structure of occam programs is specified by the syntax. Each statement in an occam program normally occupies a single line, and the indentation of each statement forms an intrinsic part of the syntax of the language. The following example shows the syntax for sequence:
sequence = SEQ
{process}

The syntax here means “A sequence is the keyword SEQ followed by zero or more processes, each on a separate line, and indented two spaces beyond SEQ”. Curly brackets { and } are used to indicate the number of times some syntactic object occurs. {process} means “zero or more processes, each on a separate line”. Similarly, {0 , expression} means “A list of zero or more expressions, separated by commas”, and {1 , expression} means “A list of one or more expressions, separated by commas”.

A complete summary of the syntax of the language is given at the end of the main body of the manual.

Continuation lines

A long statement may be broken immediately after one of the following:
an operator, e.g. +, -, *, /
a comma ,
a semi-colon ;
assignment :=
the keyword IS, FROM or FOR

A statement can be broken over several lines providing the continuation is indented at least as much as the first line of the statement.

The annotation of occam programs

As the format of occam programs is significant, there are a number of rules concerning how programs are annotated. A comment is introduced by a double dash symbol (--), and extends to the end of the line.

Consider the following sequence:
SEQ
  -- This example illustrates the use of comments
  -- A comment may not be indented less than
  -- the following statement
  ...
SEQ -- A sequence
  ...
Comments may not be indented less than the following statement.

Names and keywords used in occam programs

Names used in occam programs must begin with an alphabetic character. Names consist of a sequence of alphanumeric characters and dots. There is no length restriction. Occam is sensitive to the case of names, i.e. Say is considered different from say. With the exception of the names of channel protocols, names in the examples presented in this manual are all lower case. However, the following are all valid names in occam:
PACKETS
vector6
LinkOut
NOT.A.NUMBER
transputer
terminal.in
terminalOut

All keywords are upper case (e.g. SEQ). All keywords are reserved, and thus may not be used by the programmer.

Primitive processes

1.1 Assignment

occam programs are built from processes. The simplest process in an occam program is an action. An action is either an assignment, an input or an output. Consider the following example:
x := y + 2
This simple example is an assignment, which assigns the value of the expression y + 2 to the variable x. The syntax of an assignment is:

assignment = variable := expression

The variable on the left of the assignment symbol (:=) is assigned the value of the expression on the right of the symbol. The value of the expression must be of the same data type as the variable to which it is to be assigned, otherwise the assignment is not valid.

A multiple assignment assigns values to several variables, as illustrated in the following example:
a, b, c := x, y + 1, z + 2

This assignment assigns the values of x, y + 1 and z + 2 to the variables a, b and c respectively. The expressions on the right of the assignment are evaluated, and the assignments are then performed in parallel. Consider the following example:
x, y := y, x

The effect of this multiple assignment is to swap the values of the variables x and y.
The syntax of multiple assignment extends the syntax for assignment:

assignment = variable.list := expression.list
variable.list = {1 , variable}
expression.list = {1 , expression}

A list of expressions appearing to the right of the assignment symbol (:=) is evaluated in parallel, and then each value is assigned (in parallel) to the corresponding variable of the list to the left of the symbol. The rules which govern the names used in a multiple assignment therefore follow from those for names used in parallel constructions (see page 16). Practically, this means that no name may appear twice on the left side of a multiple assignment, either as the name of a variable, or as the name of a variable and the name of a subscript expression which selects a component from an array. The expression on the right of the assignment symbol (:=) may be a function; a function with multiple results can supply the expression list of a multiple assignment.

1.2 Communication

Communication is an essential part of occam programming. Values are passed between concurrent processes by communication on channels. Each channel provides unbuffered, unidirectional point-to-point communication between two concurrent processes. The format and type of communication on a channel is specified by its channel protocol. Two actions exist in occam which perform communication on a channel: input and output.



1.2.1 Input

An input receives a value from a channel and assigns the received value to a variable. Consider the following
example:
keyboard ? char

This simple example receives a value from the channel named keyboard and assigns the value to the variable char. The input waits until a value is received.
The syntax of an input is:

input = channel ? variable

An input receives a value from the channel on the left of the input symbol (?), and assigns that value to the variable on the right of the symbol. The value input must be of the same data type as the variable to which it is assigned, otherwise the input is not valid.

1.2.2 Output

An output transmits the value of an expression to a channel. Consider the following example:
screen ! char

This simple example transmits the value of the variable char to the channel named screen. The output waits until the value has been received by a corresponding input. The syntax of an output is:
output = channel ! expression
An output transmits the value of the expression on the right of the output symbol (!) to the channel named on the left of the symbol.

1.3 SKIP and STOP

The primitive process SKIP starts, performs no action and terminates. The primitive process STOP starts, performs no action and never terminates. To explain how SKIP behaves, consider the following sequence:
SEQ
  keyboard ? char
  SKIP
  screen ! char
This sequence executes the input keyboard ? char, then executes SKIP, which performs no action. The sequence continues, and the output screen ! char is executed. The behaviour of STOP is illustrated by the following sequence:
SEQ
  keyboard ? char
  STOP
  screen ! char
This sequence performs the input keyboard ? char, then executes STOP, which starts but does not terminate and so does not allow the sequence to continue. The output screen ! char is never executed.

1.4 Summary

The primitive occam processes are assignments, inputs, outputs, SKIP and STOP:
process = assignment
| input
| output
| SKIP
| STOP

Communicating Process Paradigm

Systems are built from layered networks of communicating parallel processes
• synchronous point-to-point communication via ‘channels’
• the three sub-components here (‘plus’, ‘prefix’ and ‘delta’) could themselves be process networks
• implementation in occam; semantics from Hoare’s CSP
Individual processes are completely isolated, interacting with their environment only through visible channel interfaces.

The synchronous nature of channel communication means that implementations may behave differently with respect to the environment
• asynchronous behaviour is, however, possible

Occam & CSP

The occam language provides for ‘clean’ implementations of such processes

• developed by David May (and others) at Inmos [1] (1983), last commercial revision was occam2.1 [2] (1995)

• strict parallel-usage and aliasing checks give strong safety guarantees
i.e. freedom from race-hazard errors

• remaining issues include deadlock, livelock and starvation

CSP is the underlying process algebra, used to formally reason about the behavior of occam processes

• developed by Tony Hoare [3, 4]; the standard text (currently) is by Bill Roscoe [5]

The mapping between CSP and occam is not an exact fit

• but is sufficiently complete for reasoning about occam programs.

Building Process Networks

The implementation of process networks such as ‘integrate’ is trivial:


PROC integrate (CHAN INT in?, out!)
  CHAN INT a, b, c:
  PAR
    plus (in?, c?, a!)
    prefix (0, b?, c!)
    delta (a?, b!, out!)
:

Occam and Occam pi Programming Language

Occam can express a wide range of behaviours, but some aspects are lacking

• largely as a result of occam’s original application for the Transputer
• dynamic process creation
• channel mobility
• shared channels
Primary aims of occam-pi:
• to provide the above features at a language level
• not break the existing good safety guarantees of occam
• make the language “programmer friendly”, i.e. something that people will actually want to use

Overview:

Extending the classical occam language with ideas of mobility and dynamic network reconfiguration — ideas taken from Milner’s pi-calculus [6]

• we have found ways of implementing these extensions that involve significantly less resource overhead than that imposed by the higher-level — but less structured and non-compositional — concurrency primitives of existing languages (such as Java) or libraries (such as POSIX threads)

As a result, we can run applications with of the order of millions of concurrent processes on modestly powered PCs

• we have plans to extend the system, without sacrificing too much efficiency and none of the logic, to simple clusters of workstations, wider networks such as the Grid, and small embedded devices

In the interests of provability, we have been careful to preserve the distinction between the original point-to-point synchronised communication of occam and the dynamic asynchronous multiplexed communication of the pi-calculus; in this, we have been prepared to sacrifice the elegant sparsity of the pi-calculus

• we conjecture that the extra complexity and discipline introduced will make the task of developing, proving and maintaining concurrent and distributed programs easier

occam-pi features:

• mobile data, channels and processes; dynamic process creation
• shared channels, channel bundles, recursion, no race-hazards, no garbage, protocol inheritance, extended rendezvous, process priority, ...

Mobile Channel Bundles

Defined as ‘client’ and ‘server’ ends of a “mobile channel-type”,



Directions specified in the type are server-relative

• and may carry data both ways

Allowing data to flow both ways is merely a convenience — one can get (mostly) the same effect using many individual mobile channels. Grouping channels together, for related use, makes good sense however.

Communicating Mobile Bundles

The main use of mobile channel bundles is to support the run-time reconfiguration of process networks

P and Q are now directly connected

Process networks may be arbitrarily reconfigured using mobile channels, but still only reconfiguring static networks

• Dynamic process creation makes this much more interesting



Communication and assignment in traditional occam systems incur an O(n) run-time cost for data copying. In small localised systems (i.e. those which share a common memory), this run-time cost is often undesirable and often unnecessary. A common solution to this, in occam, is to implement a memory pool which uses RETYPEs and various compiler built-ins to generate pointers from variables. These pointers (usually represented by the INT type) can then be communicated and assigned in O(1) time. The disadvantage of this method is that the compiler can no longer perform parallel-usage and alias checks on the pointers — it just treats them as plain INTs. Thus the potential for introducing parallel race-hazards is high. However, in traditional occam, this is often the only solution. Mobiles, with their movement semantics, provide a simple and effective solution to this — with an implementation that uses communication and assignment of references and has aliasing strictly controlled. These semantics are enforced by the compiler both at compile-time (in the undefinedness checker, section 4.6), and also at run-time, by invalidating ‘lost’ references.

Mobiles are not a new concept — movement semantics exist in other languages, for example NIL [SY85], although these tend to be few in number. The programming language Icarus [MM98b, MM98a, MM01] also supports a movement semantics, for both data and channels. Icarus also introduces the concept of a borrowing semantics, which provides an explicit mechanism for temporarily moving data or channels to another process and back at defined points. A similar mechanism also exists for mobile types in occam, although it is somewhat implicit — in procedure calls and abbreviations. For channels, communication and assignment using a movement semantics is the only option — copy semantics have no meaning when applied to channels, and are therefore banned in the standard occam language. Mobile channel-types provide this functionality, and also extend it — by allowing explicitly shared channel-types (for the construction of one-to-any, any-to-one and any-to-any channels).

Dynamic Process Creation

Recursion in occam

Recursion in occam has traditionally been prevented for two main reasons — one practical and one engineered. Firstly, the lack of dynamic memory allocation would have imposed a restriction on the depth of recursion. Secondly, the scoping of names in occam is such that they only become visible at the end of their declaration (where the ‘:’ appears). For PROCs, this means that any use of its own name inside the code body is invalid, unless a different PROC of the same name is in scope — in which case it will be used. It is possible to fake recursion however, often quite convincingly, by using the scoping of names to an advantage [Poo92]. In an arbitrary PROC called ‘foo’, any previously defined PROCs also called ‘foo’ are in scope and perfectly valid. This technique will fail on some KRoC implementations, however — when the same name is defined at the outermost level multiple times, the UNIX linker will most likely stop with a “multiply defined symbol” error. The tranx86 translator partially handles this by only allowing one entry-point for any given name per file processed. However, the same top-level PROC name in separate source files will also usually result in a similar error on UNIX platforms — which is a problem generally, not just for recursion. A version of unbounded recursion using a special locally defined PROC — with a very similar name — has been implemented in a development version of KRoC/Sparc by Wood in [Woo00]. The Linux version of KRoC implements recursion using a different mechanism, by adding support directly to the occam compiler. The support for recursion in KRoC/Sparc also uses a Brinch-Hansen style memory allocator, but with all support for such in the translator (octran) and run-time kernel only. For this implementation, recursive PROCs are declared using the ‘RECURSIVE’ or ‘REC’ keywords — the latter being shorthand for the former. For example:

RECURSIVE PROC thing (...)
  ...  -- body of thing
:
This has the effect of bringing the name ‘thing’ into scope early, thereby permitting its use within the body of ‘thing’. A classic example for parallel recursion in occam is the parallel recursive Sieve of Eratosthenes. The original code for this is given in Appendix A of Moores’s CCSP paper [Moo99], and requires very little change — just the addition of the ‘RECURSIVE’ keyword. The new code for the ‘sieve’ process, augmented with channel-direction specifiers, is:

RECURSIVE PROC sieve (VAL INT count, CHAN INT in?, out!)
  IF
    count = 0
      id (in?, out!)  -- just copy in to out
    TRUE
      INT n:
      SEQ
        in ? n
        out ! n
        CHAN INT c:
        PAR
          filter (n, in?, c!)
          sieve (count-1, c?, out!)
:

The ‘count’ parameter is used to limit the recursion. An initial value of 169 will allow all the primes between 2 and 1000000 to be generated. When the ‘sieve’ process receives its first input (from ‘in?’), it outputs it down the prime output channel ‘out!’, then starts a network of sub-processes: a filter to remove all multiples of ‘n’, in parallel with a recursive call to sieve. When ‘count’ reaches zero, the recursion stops. Some time after new filter processes stop being created, the pipeline will start to produce invalid primes.

RECURSION IN OCCAM

RECURSIVE PROC sieve (VAL INT count, CHAN INT in?, out!)
  INT n:
  SEQ
    in ? n
    IF
      count = 1
        end.sieve (n, in?, out!)
      TRUE
        SEQ
          out ! n
          CHAN INT c:
          PAR
            filter (n, in?, c!)
            sieve (count - 1, c?, out!)
:

Process Creation

Three mechanisms provided in occam-pi:

• self-recursive procedures
• n-replicated parallel
• asynchronous process invocation (fork)

All three can be modelled in CSP, but require dynamic memory for implementation

The ‘fork’ is examined here:
• essentially a procedure instance that runs in parallel with the invoking process
• parameter passing is strictly uni-directional and uses a communication semantics



The only way to communicate with a forked process is to pass to it a mobile channel-end

• ordinary channels are non-communicable

Worker processes are created on demand. Process farms are one application of dynamic process creation that turns out to be useful for many things:

• e.g. the occam(-pi) web-server (http://wotug.kent.ac.uk/ocweb/)
• more generally, almost any server that must handle multiple clients concurrently
A more interesting example is the “occam-pi adventure”
• an interactive text-based multi-user game (MUD)
• built as an interconnected matrix of ‘rooms’
• ‘objects’ and ‘users’ connect to rooms using mobile channels, also used to link rooms
• exercises channel mobility.




-> Game matrix is constructed dynamically
• may add, remove and re-arrange rooms whilst running
-> User processes are connected to the room the user is in
-> Objects lie around in rooms or are held by users.
RMoX Operating System

-> On the whole, operating systems are large, complex concurrent systems.
-> Real-world operating systems, even without attempting to exploit multiprocessor hardware, suffer from concurrency-related problems:

• subtle race-hazard errors, which in some cases lead to serious security problems — e.g. local and remote exploits.
• or concurrency is “locked-down” to the point where it damages performance.

-> Multiprocessor architectures (typically SMP) just scale these problems.
->The programming language, in most cases C, does not help:

• synchronisation and locking must be programmed correctly, compiler cannot police usage.

• lack of appropriate encapsulation — e.g. a device-driver may inadvertently (or deliberately) interfere with another part of the system.

-> The theory is: construct an operating system using occam-pi and it will be:

• scalable: from small embedded devices to distributed multicomputers
• safe: freedom from race-hazard and aliasing errors (incl. array-bounds, etc.)
• fast: due to the extremely light concurrency overheads.








-> The various “core” components utilise internal concurrency (e.g. the ramdisk, parallel-port, floppy and network drivers).

->Drivers themselves may be concurrent internally.

->Some drivers mostly complete, others under construction.







Putting it together



-> With suitable interfacing, RMoX will also run in a Linux environment (user-mode RMoX)

-> We have an experimental version that runs directly on hardware
• running with Linux provides various useful features for development.

Conclusion & Future Work

Language not quite finished — but mostly complete
• need a formal semantics for the behavior of mobile channels
For what is supported, performance is good
Compiler and run-time system being developed

• particularly for other platforms/architectures
• investigating support for mostly transparent network distribution

Also in development are demonstrator applications (e.g. servers) and graphics libraries. Recently funded for a feasibility study into modelling and theories of nanite assemblies — a gigabit-switched 32-node 3.2 GHz P4 cluster is ready to go.

References

[1] David May. OCCAM. ACM SIGPLAN Notices, 18(4):69–79, April 1983.

[2] Inmos Limited. occam 2.1 Reference Manual. Technical report, Inmos Limited, May 1995. Available at: http://wotug.org/occam/.

[3] C.A.R. Hoare. Communicating Sequential Processes. Communications of the ACM, 21(8):666–677, August 1978.

[4] C.A.R. Hoare. Communicating Sequential Processes. Prentice-Hall, London, 1985. ISBN: 0-13-153271-5.

[5] A.W. Roscoe. The Theory and Practice of Concurrency. Prentice Hall, 1997. ISBN: 0-13-674409-5.

[6] R. Milner, J. Parrow, and D. Walker. A Calculus of Mobile Processes – parts I and II. Journal of Information and Computation, 100:1–77, 1992. Available as technical report: ECS-LFCS-89-85/86, University of Edinburgh, UK.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS

1. INTRODUCTION
2. EVOLUTION
3. POLYMER VISION
4. WHAT IS A ROLLABLE DISPLAY
5. THE TECHNOLOGY BEHIND
a. ORGANIC THIN FILM TRANSISTOR
b. POLYMER ELECTRONICS
c. E-INK
i. GYRICON TECHNOLOGY
6. HOW DO YOU MAKE A ROLLABLE DISPLAY
a. ACTIVE MATRIX BACKPLANE
b. E-INK FRONT PLANE
7. BENEFITS
8. APPLICATIONS
a. DISTRIBUTED SIGNAGE
b. ENTERTAINMENT
c. GPS
d. ENTERPRISE
9. COMMERCIAL REALISATIONS
a. FIRST GENERATION ELECTRONIC PAPER DISPLAY
b. WORLD’S THINNEST ACTIVE MATRIX DISPLAY
c. PHILIPS CONCEPT READIUS
10. FUTURE POSSIBILITIES
11. CONCLUSION
12. REFERENCES

ABSTRACT

Rollable displays are lightweight, large-area displays that are unbreakable and can be rolled up into a small housing when not actively used. The displays combine active-matrix polymer driving electronics with a reflective ‘electronic ink’ front plane on an extremely thin sheet of plastic. The availability of such displays would greatly stimulate the advance of electronic books, newspapers and magazines, as well as new services offered by (third-generation) mobile network operators. These applications currently depend on fragile, heavy and bulky laptops or the small, low-resolution displays of mobile phones, both of which have clear drawbacks. Within the Philips Technology Incubator, an internal venture has been formed with this aim. The venture is called Polymer Vision.

INTRODUCTION

The display is the interface to the networked society; it forms the essential link between information and the human being. The major trends to be noted in this respect are the increasing demand for pervasive Mobile/Wearable terminal displays, requiring the introduction of rugged, lightweight and flexible display interfaces, and ubiquitous displays of any shape and size based on new enabling process technologies, with cost-effective production methods.

Reflective displays with paper-like viewing characteristics are rapidly emerging, with applications like electronic books and newspapers, roaming data access, and, on the longer term, full-color video access on the move.

For years, the "holy grail" of the display industry has been a thin, clear, flexible substrate with barrier properties equal to those of a sheet of glass. Flexible displays offer many potential benefits over other display technologies, including reductions in weight and thickness, improved ruggedness, and nonlinear form factors. These features make flexible displays attractive for a variety of electronic products ranging from cell phones and PDAs to computers, toys, electronic books, and "wearables."

EVOLUTION

The manufacturing of flat panel displays is a dynamic and continuously evolving industry. Improvements of flat panel displays are made rapidly as technology improves and new discoveries are made by display scientists and engineers. The cathode ray tube and active matrix liquid crystal display (LCD) recently celebrated their 100th and 25th anniversary, respectively. The arrival of portable electronic devices has put an increasing premium on durable, lightweight and inexpensive display components. In recent years, there has been significant research investment in the development of a flexible display technology.





Organic LEDs on glass substrates are already making their way into consumer products such as digital cameras and electric shavers, but the first products to incorporate flexible displays will likely be electronic books, paper, and signage. While products are initially being built on glass substrates, the shift to flexible substrates is under way.


Rollable-display research initially focused on electrophoretics, a low-power-consumption display technology better known for its role in e-paper, e-books, and e-signage. Gyricon's Smart Paper, for example, is produced in a roll like conventional paper but is actually two sheets of thin plastic with millions of tiny bichromal beads embedded in between. E Ink uses stationary microcapsules that contain white particles, black particles, and a clear fluid. OLEDs are self-luminous and do not require backlighting, polarizers, or diffusers, which reduces size and weight. In addition, they offer a wide viewing angle and low power consumption. While OLEDs are not yet as bright as other displays, efforts are under way to improve this. When it comes to putting OLEDs on polymer or metal-foil substrates, proponents say there is a symbiotic relationship between the materials and the production processes that makes OLEDs a natural fit for flexible displays.

It turns out that the OLED manufacturing process, because it is all chemical, is much more amenable to retaining optimum performance on a flexible surface than other display technologies such as LCDs. There are still some key technology limitations to be overcome — most notably the extreme sensitivity of OLEDs to moisture and oxygen. Another key factor in transitioning OLED, LCD, and electrophoretic displays onto polymer or metal involves the TFT backplanes, which must also eventually reside on plastic in order to achieve a truly flexible display.

Another approach is to eliminate the temperature issue altogether by using organic transistor materials for TFTs that can be printed directly on plastic.
Polymer Vision, a division of Philips, has been able to make organics-based QVGA (320 × 240 pixels) active-matrix displays with a diagonal of 5 in., a resolution of 85 dpi, and a bending radius of 2 cm. The displays combine a 25-µm thick active-matrix backplane containing the polymer electronics-based pixel drivers, with a 200-µm front plane of reflective "electronic ink" developed by E Ink (see photo, [right]). With nearly 80,000 TFTs, the resulting display is the largest organic electronics–based display yet.
