

Thursday, July 14, 2011

Ontology in Computer Science by Sinuhe Arroyo

Reshared under Creative Commons Attribution-Noncommercial 3.0 License
(There should not be any exchange of money between the receiver of this document and giver or distributor of this document)

Source Knol: Ontology
by Sinuhe Arroyo

The core concept behind the Semantic Web is the representation of data in a machine-interpretable way. Ontologies provide the means to realize such a representation. They characterize formal and consensual specifications of conceptualizations, providing a shared and common understanding of a domain as machine-processable semantics of data and information, which can be communicated among agents (organizations, individuals, and software) [10]. Ontologies put in place the means to describe the basic categories and relationships of things by defining entities and types of entities within their framework.

Ontologies bring together two essential aspects that are necessary to enhance the Web with semantic technology. Firstly, they provide machine processability by defining formal information semantics. Secondly, they provide machine-human understanding due to their ability to specify conceptualizations of the real world. By these means, ontologies link machine-processable content with human meaning, using a consensual terminology as the connecting element [10].

This knol explores the concepts and ideas behind metadata glossaries. It depicts the most relevant paradigms with the aim of showing their different pros and cons, while evolving towards the concept of ontology. It pays special attention to the relation between ontologies and knowledge bases and to the differences between lightweight and heavyweight ontologies. Furthermore, the knol introduces relevant ontology languages, i.e. RDF, OWL and WSML, examining their main features, applications and core characteristics.

Metadata Glossary: From Controlled Vocabularies to Ontologies
Controlled vocabularies
A controlled vocabulary is a finite list of preferred terms used for the purpose of easing content retrieval. Controlled vocabularies consist of pre-defined, authorized terms, in sharp contrast to natural language vocabularies, which typically evolve freely without restrictions. Controlled vocabularies can be used for categorizing content, building labeling systems or defining database schemas, among other uses. A catalogue is a good example of a controlled vocabulary.
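The idea of a fixed list of authorized terms can be sketched in a few lines of code. The vocabulary and the `validate()` helper below are illustrative, not taken from the knol:

```python
# Sketch: a controlled vocabulary as a fixed set of authorized terms.
# The terms themselves are made up for illustration.
AUTHORIZED_TERMS = {"painting", "sculpture", "photograph", "drawing"}

def validate(tags):
    """Return the tags that are not in the controlled vocabulary."""
    return [t for t in tags if t not in AUTHORIZED_TERMS]

# Free-text tagging drifts ("photo" vs "photograph"); a controlled
# vocabulary flags unauthorized variants so retrieval stays consistent.
print(validate(["painting", "photo"]))  # ['photo']
```

The point of the sketch is that membership in the list, not free choice of words, decides which terms may be used for labeling.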

The discipline of taxonomy refers to the science of classification. The term has its etymological roots in the Greek words “taxis”, meaning “order”, and “nomos”, meaning “law” or “science”.

In our context, a taxonomy is best defined as a set of controlled vocabulary terms. Each individual vocabulary term is known as a “taxon” (plural “taxa”). Taxa identify units of meaningful content in a given domain. Taxonomies are usually arranged in a hierarchical structure, grouping kinds of things into some order (e.g. an alphabetical list).

A good example for a taxonomy is the Wikispecies [1] project, which aims at creating a directory of species. In the “Taxonavigation” the path in the taxonomy leading to the species is depicted.
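The hierarchical arrangement of taxa, and the “Taxonavigation”-style path through it, can be sketched as a nested structure. The Linnaean path below follows the familiar biological ranks down to Homo sapiens; the data structure and the `path_to()` helper are illustrative:

```python
# Sketch: a taxonomy as a hierarchy of taxa, modeled as nested dicts.
taxonomy = {
    "Animalia": {
        "Chordata": {
            "Mammalia": {
                "Primates": {
                    "Hominidae": {
                        "Homo": {"Homo sapiens": {}},
                    },
                },
            },
        },
    },
}

def path_to(tree, taxon, trail=()):
    """Return the path of taxa from the root down to the given taxon."""
    for name, children in tree.items():
        if name == taxon:
            return trail + (name,)
        found = path_to(children, taxon, trail + (name,))
        if found:
            return found
    return None

print(" > ".join(path_to(taxonomy, "Homo sapiens")))
# Animalia > Chordata > Mammalia > Primates > Hominidae > Homo > Homo sapiens
```

Each taxon appears exactly once and has exactly one parent, which is what makes the structure a hierarchy rather than a general graph.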

The term "thesaurus" has its etymological root in the ancient Greek word “θησαυρός”, which evolved into the Latin word “thesaurus”. In both cultures, thesaurus meant "storehouse" or "treasury", in the sense of a repository of words [1]. A thesaurus is therefore similar to a dictionary, with the differences that it does not provide word definitions, its scope is limited to a particular domain, entry terms are single-word or multi-word entries, and it facilitates limited cross-referencing among the contained terms, e.g. synonyms and antonyms [2], [3].

A thesaurus should not be considered an exhaustive list of terms. Rather, thesauri are intended to help differentiate among similar meanings, so that the most appropriate one for the intended purpose can be chosen. Finally, thesauri also include scope notes, which are textual annotations used to clarify the meaning and context of terms.

In a nutshell, a thesaurus can be defined as a taxonomy expressed using natural language that makes explicit a limited set of relations among the codified terms.

The AGROVOC Thesaurus [4] developed by the Food and Agriculture Organization of the United Nations (FAO) is a good example of a thesaurus.
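A thesaurus entry, with its limited set of relations and its scope note, might be modeled like this. The entry shown is illustrative, not drawn from AGROVOC:

```python
# Sketch: a thesaurus entry = a term plus a limited set of relations
# (synonyms, antonyms) and a scope note clarifying its intended use.
thesaurus = {
    "storehouse": {
        "synonyms": ["repository", "depot", "warehouse"],
        "antonyms": [],
        "scope_note": "A place where things are kept; here, of words.",
    },
}

entry = thesaurus["storehouse"]
print(entry["synonyms"], "-", entry["scope_note"])
```

Unlike a dictionary, the entry carries no definition; it only cross-references related terms and notes the context in which the term applies.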

In philosophy, ontology is the study of being or existence. It constitutes the basic subject matter of metaphysics [3], which has the objective of explaining existence in a systematic manner, by dealing with the types and structures of objects, properties, events, processes and relations pertaining to each part of reality.

More recently, the term ontology was adopted in computer science, where ontologies are similar to taxonomies in that they represent relations among terms. However, ontologies offer a much richer mechanism for representing the relationships among concepts, i.e. terms, and attributes. This is why they are nowadays the preferred mechanism for representing knowledge.

In 1993 Gruber provided one of the most widely adopted definitions of Ontology.

“An ontology is an explicit specification of a conceptualization”.

Gruber’s definition was further extended by Borst in 1997. In his work Construction of Engineering Ontologies [5] he defines ontology as follows.

“Ontologies are defined as a formal specification of a shared conceptualization”.

Studer, Benjamins and Fensel [6] further refined and explained this definition in 1998. In their work, the authors defined an ontology as:
“a formal, explicit specification of a shared conceptualization".

Formal: Refers to the fact that an ontology should be machine-readable.

Explicit: Means that the type of concepts used, and the restrictions on their use are explicitly defined.

Shared: Reflects the notion that the ontology captures consensual knowledge, that is, it is not the privilege of some individual, but accepted by a group.

Conceptualization: Refers to an abstract model of some phenomenon in the world by having identified the relevant concepts of that phenomenon.

Ontologies and knowledge bases
The relation between ontologies and knowledge bases is a controversial topic. It is not clear whether an ontology can only contain the abstract schema (example: concept Person) or also the concrete instances of the abstract concepts (example: Tim Berners-Lee). When drawing a clear line between abstract schema definitions and the instance level, one runs into the problem that in some cases instances are required in order to specify abstract concepts. An example that illustrates this problem is the definition of the concept “New Yorker” as the concept of persons living in New York: in this case, the instance “New York” of city is required in order to specify the concept.
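The schema/instance distinction described above can be sketched with plain classes. The names `Person`, `City` and `is_new_yorker()` are illustrative, chosen to mirror the example in the text:

```python
# Sketch of the schema/instance problem: the concept "New Yorker"
# cannot be specified without the instance "New York".

class City:                      # abstract concept (schema level)
    def __init__(self, name):
        self.name = name

class Person:                    # abstract concept (schema level)
    def __init__(self, name, lives_in):
        self.name = name
        self.lives_in = lives_in

new_york = City("New York")      # instance level

def is_new_yorker(person):
    """Defining this concept requires a concrete instance of City."""
    return person.lives_in is new_york

tim = Person("Tim Berners-Lee", new_york)
print(is_new_yorker(tim))  # True
```

If the ontology were restricted to the abstract schema alone, `is_new_yorker()` could not be written, which is exactly the difficulty of drawing a clean line between ontology and knowledge base.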

A number of authors have tackled this problem and identified the limits of and relationships among existing definitions.

Two definitions, the first one provided by Bernaras et al. [8], and the second one by Swartout [7], clearly identify the relationship between ontologies and knowledge bases.

"An ontology provides the means for describing explicitly the conceptualization behind the knowledge represented in a knowledge base".

“An ontology is a set of structured terms that describes some domain or topic. The idea is that an ontology provides a skeletal structure for a knowledge base“.

Lightweight vs. heavyweight ontologies
Depending on the richness of their axiomatization, one can distinguish between heavyweight and lightweight ontologies. Those that make intensive use of axioms to model knowledge and restrict domain semantics are referred to as heavyweight ontologies [10]. Those that make scarce or no use of axioms to model knowledge and clarify the meaning of concepts in the domain are referred to as lightweight ontologies. A lightweight ontology is typically predominantly a taxonomy, with very few cross-taxonomical links (also known as “properties”) and very few logical relations between the classes. Davies, Fensel et al. [9] emphasize the importance of such lightweight ontologies:

"We expect the majority of the ontologies on the Semantic Web to be lightweight. […] Our experiences to date in a variety of Semantic Web applications (knowledge management, document retrieval, communities of practice, data integration) all point to lightweight ontologies as the most commonly occurring type.”

Ontologies and folksonomies

Ontology Languages
Resource Description Framework
Web Ontology Language
OWL Lite
OWL Full
Web Service Modeling Language


2. National Information Standards Organization (NISO). (2003). (ANSI/NISO Z39.19-2003, 2003: 1).
3. Wikipedia.
4. Food and Agriculture Organization of the United Nations (FAO).AGROVOC Thesaurus. (1980).
5. Borst, W. N. (1997). Construction of Engineering Ontologies. Centre for Telematics and Information Technology, University of Twente. Enschede, The Netherlands.
6. Studer, R., Benjamins, V. R. and Fensel, D. (1998). Knowledge engineering: principles and methods. Data & Knowledge Engineering 25(1-2):161-197.
7. Swartout, B., Patil, R., Knight, K., and Russ, T. (1996). Toward distributed use of large-scale ontologies. In Proceedings of the 10th Knowledge Acquisition for Knowledge-Based Systems Workshop. Banff, Canada.
8. Bernaras, A., Laresgoiti, I. and Corera, J. (1996). Building and reusing ontologies for electrical network applications. In: Wahlster, W. (ed.) European Conference on Artificial Intelligence (ECAI'96). Budapest, Hungary. John Wiley and Sons, Chichester.
9. Davies, J., D. Fensel, et al., Eds. (2002). Towards the Semantic Web: Ontology-driven Knowledge Management, Wiley.
10. Fensel, D. (2001). Ontologies: Silver Bullet for Knowledge Management and Electronic Commerce, Springer-Verlag, Berlin, 2001.

Wednesday, July 13, 2011

The Five Senses by Kevin Spaulding

Knol Reshared Under Creative Common 3.0 Attribution License

Source Knol: The Five Senses
by Kevin Spaulding, Sunnyvale, CA

Turn off all the lights, electronics, or other contraptions in your room, close your eyes, sit very still and, if you can, try to imagine that you cannot see, hear, taste, smell, or feel. You would be nothing but a brain hovering in the middle of an endless abyss of space. The world or your existence would have absolutely no meaning. Lacking feedback from your surroundings you would most likely die of starvation unless someone was there to take care of you. If you were born like this then you would hardly think and would never learn anything at all. This is why the senses are of utmost importance, for they connect our brains to the outside world.
Humans have five senses that are based on signals we receive through the skin, eyes, nose, tongue, and ears. [1] The brain takes these signals and makes sense of them, creating our interpretation of the world. Thinking about what this means can be fun because it raises philosophical questions. For instance, how can we really be sure that this supposed world we perceive actually exists outside of our own brain's imagination? After all, everything we've ever come to know since conception has been based on sensory experiences. Let's explore what the senses are and where they come from.

Sensory Receptors
Sensory experiences start with special cells called sensory receptors, also called afferent nerves because they take information to the brain. [2] A sensory receptor will detect stimuli such as heat, light, sweetness, etc. and then convert it into an electrical signal that courses its way into the brain, where the signal is translated into perception like hot, bright, sweet, etc. We are constantly being bombarded by sensory information. The brain decides what information to keep and promote to consciousness and what to discard or keep in unconsciousness. By ignoring information from the senses we are able to focus on doing specific tasks so that we are not always overwhelmed by the world around us.
Lysergic acid diethylamide, better known as LSD, is a chemical compound that unlocks the barrier between the unconscious and conscious sensory information. This leads to psychedelic experiences that interfere with a person's ability to function. [3] It makes sense that the brain has evolved to block off some of the sensory receptors' messages, since we would otherwise be drowned in sensory overload.
Sensory information is processed by the side of the brain opposite to the sensory receptors it is coming from. For instance, things that we see with our right eye are processed by the left side of the brain, and when we touch something with our left hand the information is processed by the right side of the brain. [4]
The study of the relationship between physical stimulus and a person's conscious experience of that stimulus is called psychophysics. Studies in this field are done with the intent of understanding how much stimulus is needed to produce a psychological reaction to that stimulus. This helps us understand how and why we interpret the world around us like we do. [12]

Sight
The eyes send sensory information to the brain, which is then translated into vision. The eyes contain around 70% of all the body's sensory receptors, making sight the most information-heavy of all the senses. [13] When light first reaches the eye it passes through the cornea, a transparent covering over the iris. The iris constricts or dilates to make the pupil smaller or larger so the eye can focus on objects. [14] Behind the pupil is a four-millimeter-thick crystalline lens, which works with the pupil to form two-dimensional imagery of the world. [15] A constricting iris will sharpen the image hitting the retina and also decrease the amount of light entering the eye.
To understand how the image gets from the eye to the brain, we must understand the retina. The retina is the section of the eye that is covered by photoreceptors called rods and cones. These rods and cones code light (electromagnetic radiation) into electrical signals through a process called transduction. There are around 120 to 125 million rod cells and 6 to 7 million cone cells in each eye. [5] Rod cells are shaped like rods and perform best in dimly lit conditions. Cone cells are shaped like cones and perform best in bright conditions.
There is a specific cone cell type for each of the colors blue, green and red. Together these cone cells interpret all the possible colors in the world by measuring the level of blue, green, or red light coming from any object. This means that there are three kinds of cone cells but just one kind of rod cell. Inside cone cells there are color detectors called photopsins. [6] These photopsins contain a light-sensitive chemical called retinal, which is made from vitamin A. [7] Retinal is found inside shells that are made of opsin protein. [8] The type of opsin determines whether the cell will detect the color blue, green, or red: cyanolabe is the opsin for blue, chlorolabe the opsin for green, and erythrolabe the opsin for red. The retinal changes molecular shape when light hits it, which then causes the opsin to change shape. These reactions excite a series of nerves, helping electrical information reach the optic nerve.
The optic nerve is made of many neurons and runs out from the back of the retina and into the brain. [9] The optic nerve goes through the thalamus and ends at the back of the cortex in the occipital lobe. The visual cortex has cortical representations of the retina called retinotopic maps. [10] There are separate retinotopic maps for motion, depth, color, and form. [11] When our eyes send information to the visual cortex it is in the form of a two-dimensional pattern. The retinotopic maps and temporal lobe will then work together to build the three-dimensional representation we actually 'see'. We don't consciously recognize this because it happens at such a fast speed.
Now that we understand the electrochemical and biological processes that give us our vision, we should acknowledge the psychological processes that go along with sight. We form memories about how things look and how the world works, aiding in our visual perception. For instance, we know that when we place something down and walk away it will look like it is getting smaller but that it is not actually shrinking in size. We can also recognize a dog no matter what angle we see one at. These psychological truths are explained as size constancy and shape constancy, which are the abilities of the perceptual system to know that an object remains constant in size and shape regardless of distance or angle orientation. [20][21] Both of these things require that a person form memories of the object. For instance, someone who spots an elephant in the far distance, and has never seen an elephant before, may think that the elephant is small. If they know elephants are large, then the size they appear at from a distance won't trick them. It can also help to have other objects act as references. Someone who knows their basketball hoop is ten feet tall will automatically be able to determine the height of someone standing next to that basketball hoop.
There are other facets of perception that form through experience and the formation of memories, such as depth perception. [22] When we look at photographs we are able to determine what is up close and what is far away even though the picture only portrays two dimensions. This is because our brains do this constantly, all of our lives, even while we are looking at three-dimensional environments. Cues that give us perception of depth and only require one eye are called monocular depth cues. [23] Monocular depth cues include motion parallax, kinetic depth effect, linear perspective, interposition, texture, clearness, and shadowing. Depth cues that require both eyes are called binocular depth cues, and include retinal disparity and convergence. [24]
Eye Movement
Eyes are almost always in motion, even when we sleep. Rapid movements, called saccades, are the most common type of eye movement. [19] Saccades occur when people read, drive, or just look around. The eye can make four or five saccades a second. Each movement takes just 20 to 50 milliseconds, but it takes 200 to 250 milliseconds before the eye is able to make its next movement. These rapid movements create snapshots in our memory that fuse together and create a stable view of the world. [13]
Imperfect Eyes
When a person's eyes are not perfectly shaped their vision will be affected. Eyes shaped in such a way that they allow a person to see things that are close to them but not far away are called myopic, or nearsighted. The opposite of myopic eyes are hypermetropic, or farsighted eyes. Hypermetropic eyes can see things at a distance but have trouble seeing things up close. Both of these problems result in eyes sending images to the brain that are not well focused. [16]
Sometimes the lens of an eye will be irregularly shaped, causing visual distortions. This is called astigmatism. [17] It can sometimes accompany nearsightedness and farsightedness. Often a person's astigmatism will not be pronounced enough to need correction, but when it is, it can be treated with corrective lenses or eye surgery.
As people age they will usually get fuzzy vision as their lenses thicken and become less pliable. [18] This usually results in people having to get glasses or contact lenses of some kind.
Sometimes a person will be born without the ability to see, or will suffer heavy damage to the eyes through infection or disease that leaves them visually impaired. [31] Someone with no visual capability is called totally blind. If a person is able to see, but their vision is no better than 20/200 after the best correction, then they are declared legally blind and treated as someone who is totally blind.

Hearing
Hearing allows us to listen to music, the voices of loved ones, and many other things. In fact, humans probably wouldn't have evolved spoken language if it were not for the ability to hear. Our ears are the tools which collect sound. Because we have two of them, and because they are so sensitive, we are able to make out volume, pitch, direction, and distance of noise. [25] By measuring these factors we can even determine whether or not something is moving toward or away from us and at what rate.
The visible skin of the ear directs air vibrations into the ear canal. These vibrations are then picked up by the tympanic membrane, or eardrum. The eardrum then sends the vibrations to three very tiny bones: the malleus (hammer), incus (anvil), and stapes (stirrup). [10] The stirrup connects to the cochlea, which is spiral shaped and contains three fluid-filled cavities. The stirrup acts like a piston on the cochlea, causing a soft part called the oval window to move fluid around. The movement created by the fluid in the cochlea is detected by microscopic hair cells. Each hair cell has 20 to 30 hairs known as stereocilia that are arranged in a semi-circle from small to large. [26] The stereocilia are flexible, and vibrations cause protein channels to open up between them and the hair cell, which results in the formation of chemical signals that will eventually be sent to the brain.
Hair cells in the cochlea are arranged in such a way that certain frequencies will affect specific hairs only. This is called a tonotopic map. [27] Each hair cell can send information to ten nerve fibers which carry the signal to the brain stem, where it is briefly analyzed. The brain stem then passes the information to the primary auditory cortex (temporal lobes) where it is analyzed in full. [28] The front of the temporal lobes will work with low frequency signals while high frequency signals are sent to the back. This will tell us the volume, pitch, and direction of the sound.
Sound Localization
We can determine the direction a noise is coming from by way of sound localization. Because we have two ears, we can usually hear something in one ear before we hear it in the other, which helps us determine where the sound is coming from. This is called interaural time difference. [29] Sound can also enter one ear at a higher intensity than it enters the other. By registering that one ear found the frequency more intense, we can decide which direction the sound must be coming from. This is called interaural intensity difference. [30] When we are uncertain of a sound's direction we will turn our head and body to use interaural time and intensity difference.
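As a rough sketch of interaural time difference, one can compute the extra travel time a sound takes to reach the far ear. The head width, the speed of sound, and the simplified straight-line path model are assumptions for illustration, not figures from the knol:

```python
import math

# Sketch: interaural time difference (ITD) for a sound source at angle
# theta (0 = straight ahead, 90 = directly to one side).
HEAD_WIDTH_M = 0.2       # assumed distance between the ears
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature (assumed)

def interaural_time_difference(theta_degrees):
    """Extra travel time (in seconds) for sound to reach the far ear."""
    path_difference = HEAD_WIDTH_M * math.sin(math.radians(theta_degrees))
    return path_difference / SPEED_OF_SOUND

# A source directly to one side arrives roughly half a millisecond
# earlier at the near ear; a source straight ahead arrives simultaneously.
print(f"{interaural_time_difference(90) * 1000:.2f} ms")
```

Even such sub-millisecond differences are enough for the auditory system to judge direction, which is why having two ears matters.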
Imperfect Hearing
Hearing impairment is another way of describing damaged or incorrectly functioning auditory systems. There are different kinds of impairment ranging from minor hearing loss to total deafness. [32] The two most common forms of hearing impairment are conduction deafness and nerve deafness.
Conduction deafness is defined as interference in the delivery of sound to the neural mechanism of the inner ear. [33] This interference can be caused by hardening of the tympanic membrane, destruction of the tiny bones in the ears, diseases that create pressure in the middle ear, head colds, or buildup of wax in the outer ear canal.
Nerve deafness is damage to the ear that usually results from very high intensity sound emitted by things like rock bands and jet planes. [35] Constant presence of loud noise can increase a person's sound threshold. This means that a higher-amplitude sound will be needed to create the same effect that a lower-amplitude sound has on someone with normal hearing. Of course, this also means that the person with nerve deafness will have trouble making out sounds that are at a normal decibel level. It is important to keep headphones at a reasonable volume and to protect the ears when around loud machinery. [34]
Older people will often have hearing impairment, especially in the high-frequency ranges. Sometimes this causes difficulties that can generally be helped with the use of hearing aids. Serious deafness, such as the kind caused by genetics, is much harder to help. Though hearing aids can help slightly, these people generally find it very difficult to communicate with hearing people. Since speech and language are directly tied to the auditory systems of our brain, those with severe hearing impairment are often unable to use their voice successfully for communication. Though they can read lips and hearing aids can sometimes help, deaf people often feel alienated unless they can talk with someone who knows sign language. [36]

Smell and Taste

Smell and taste are perhaps the most important senses in terms of evolutionary significance. They cause us to want to eat, and help us know what we should be eating. Smell actually works with taste to help us decide whether we're eating something pleasing or disgusting, which can save our lives! [37] When we eat, the tongue is stimulated, of course, and molecules from the food will travel up the nose. These molecules excite millions of nasal sensors (neurons) that are on a sheet of tissue called the olfactory epithelium. The nasal sensors will live only 30 to 60 days before they are replaced by a sheet of stem cells that are waiting in line for their turn as nasal neurons. At the end of each of these cells there are five to twenty little hairs, called cilia. These hairs extend into the nasal cavity where they're protected and aided by mucus. [10]
The human nose can detect millions of different kinds of scents. We have at least one thousand olfactory neuron types, each one capable of detecting a different kind of odorant molecule. [38] When one of these neurons binds with an odorant molecule it will send an electrical signal to its axon. These axons go up into the skull through something called the cribriform plate and connect with the brain. The signals are passed along through the thalamus and into the temporal lobe, or olfactory cortex. [39] Since the pathways of the brain that analyze smell are closely connected with parts of the brain that are responsible for emotions (amygdala) and memories (hippocampus), smell has a way of bringing up emotions and memories from the past. [40]
The tongue has sensory cells, called taste buds, that determine whether something is bitter, sweet, salty, sour, or savory. [41] Each taste bud has about one hundred taste cells with little projections called microvilli. [42] Chemical signals created by these sensory cells are converted to electrical signals and sent to the cortex. Smell and taste information is analyzed by the cortex and transformed into a single perception we call taste. Of course, we can smell things without having to taste them, but taste would be very different without the ability to smell.

Touch
The sensation of feeling things touch the skin of our bodies is called touch. Skin is made of three different layers: the epidermis, dermis, and hypodermis. The sense of touch occurs because of sensory receptors built into our skin, called mechanoreceptors. [43] These mechanoreceptors are neurons with stretch-sensitive gateways that open up when pressure is applied. [44] When the skin is touched it will deform, opening up the gates that cause receptors to fire electrical signals through the nervous system and into the brain.
We also have thermoreceptors, which contain a protein that varies the cell's activity depending on the amount of heat it is exposed to. [45] Cold receptors will fire signals at around 77 degrees Fahrenheit, while warm receptors will fire signals at around 113 degrees. [46] When skin is exposed to dangerous temperatures, nociceptors will activate.
Nociceptors are sensory cells responsible for creating pain. When activated they will release a range of chemical messengers which bind to and activate nearby nerves whose sole purpose is to transmit pain information to the brain. These pain signals are routed through the thalamus and into the cerebral cortex. In fact, all the information collected by the skin is sent to the cerebral cortex by way of the spinal cord and thalamus. This information is then processed in a sequential map by nerve cells that specialize in texture, shape, orientation, temperature, and more. [10]

These are the main five senses that humans recognize. Some of them obviously overlap, like smell and taste, while others work together in more subtle ways. There is still a lot to learn about how humans perceive the world around them, and there may be senses that the scientific community has not yet proven exist. For example, there is the idea that some people possess extrasensory perception (ESP), or heightened perceptual abilities that the normal person doesn't have. ESP includes telepathy, which is the transfer of thoughts from one person to another without the use of anything external, and clairvoyance, which is the ability to recognize objects or events without the use of normal sensory receptors. Then there is precognition, which is the ability to see into the future, and psychokinesis, which is the ability to move objects using only the mind. None of these abilities are tangible or proven, but there is some evidence which hints that there are senses we don't fully understand yet. In any case, the five senses are both incredible and undeniably important. It will be interesting to see what future research reveals about our senses and our ability to perceive the world.

1. The Senses. Eric H. Chudler. Neuroscience For Kids.
2. What is a sensory receptor?. Dawn Tamarkin. STCC Foundation Press.
3. LSD — The Problem-Solving Psychedelic. P.G. Stafford and B.H. Golightly. Psychedelic Library.
4. Right Brain - Left Brain. Catalase.
5. Rods and Cones. HyperPhysics. Georgia State University.
6. Photopsin. Encyclopedia. NationMaster.
7. Vitamin A (retinol). Mayo Foundation for Medical Education and Research.

Images in the source knol not included here.
Visit source knol for images.
Source Knol: The Five Senses

How to calculate pressure drop and friction losses in a pipe

Reshared under Creative Commons Attribution 3.0 License

Source Knol: How to calculate pressure drop and friction losses in a pipe

by PipeFlow Software, Manchester, England

Laminar Flow and Turbulent Flow of Fluids

Resistance to flow in a pipe

When a fluid flows through a pipe the internal roughness (e) of the pipe wall can create local eddy currents within the fluid adding a resistance to flow of the fluid. Pipes with smooth walls such as glass, copper, brass and polyethylene have only a small effect on the frictional resistance. Pipes with less smooth walls such as concrete, cast iron and steel will create larger eddy currents which will sometimes have a significant effect on the frictional resistance.
The velocity profile in a pipe will show that the fluid at the centre of the stream will move more quickly than the fluid towards the edge of the stream. Therefore friction will occur between layers within the fluid.
Fluids with a high viscosity flow more slowly and generally do not support eddy currents, so the internal roughness of the pipe has no effect on the frictional resistance. This condition is known as laminar flow.

Reynolds Number

The Reynolds number (Re) of a flowing fluid is the ratio of the fluid's inertial forces (characterized by velocity x diameter) to its viscous forces (characterized by the kinematic viscosity).

Kinematic viscosity = dynamic viscosity / fluid density

Reynolds number = (Fluid velocity x Internal pipe diameter) / Kinematic viscosity

Note: Information on Viscosity and Density Units and formula are included at the end of this article.

Laminar Flow

Where the Reynolds number is less than 2300, laminar flow will occur and the resistance to flow will be independent of the pipe wall roughness.

The friction factor for laminar flow can be calculated as 64 / Re.
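The two relations above can be sketched in a few lines of Python. The fluid properties below (water at about 20 °C in a 50 mm pipe) are illustrative assumptions, not values from the article:

```python
def reynolds_number(velocity, diameter, kinematic_viscosity):
    """Re = (fluid velocity x internal pipe diameter) / kinematic viscosity."""
    return velocity * diameter / kinematic_viscosity

def laminar_friction_factor(re):
    """Darcy friction factor for laminar flow (Re < 2300): f = 64 / Re."""
    return 64.0 / re

mu = 1.0e-3        # dynamic viscosity of water, Pa.s (assumed)
rho = 998.0        # density of water, kg/m^3 (assumed)
nu = mu / rho      # kinematic viscosity = dynamic viscosity / density, m^2/s

re = reynolds_number(velocity=0.02, diameter=0.05, kinematic_viscosity=nu)
print(f"Re = {re:.0f}")                        # well below 2300, so laminar
print(f"f  = {laminar_friction_factor(re):.4f}")
```

At this low velocity Re comes out near 1,000, confirming laminar flow, so the 64/Re friction factor applies.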

Turbulent Flow

Turbulent flow occurs when the Reynolds number exceeds 4000.

Eddy currents are present within the flow and the ratio of the internal roughness of the pipe to the internal diameter of the pipe needs to be considered to be able to determine the friction factor. In large diameter pipes the overall effect of the eddy currents is less significant. In small diameter pipes the internal roughness can have a major influence on the friction factor.

The ‘relative roughness’ of the pipe and the Reynolds number can be used to plot the friction factor on a friction factor chart.

The friction factor can be used with the Darcy-Weisbach formula to calculate the frictional resistance in the pipe. (See separate article on the Darcy-Weisbach Formula).

Between the laminar and turbulent flow conditions (Re 2300 to Re 4000) the flow is known as critical: it is neither wholly laminar nor wholly turbulent and may be considered a combination of the two flow conditions.

The friction factor for turbulent flow can be calculated from the Colebrook-White equation, which is not reproduced in this reshare; see the source knol.
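As a sketch of what the source article covers, the standard form of the Colebrook-White equation is 1/sqrt(f) = -2 log10((e/D)/3.7 + 2.51/(Re sqrt(f))), which is implicit in f and is usually solved by iteration. The pipe values below are illustrative assumptions:

```python
import math

def colebrook_friction_factor(re, roughness, diameter, tol=1e-10):
    """Solve the Colebrook-White equation for the Darcy friction factor f
    (turbulent flow, Re > 4000) by fixed-point iteration:
        1/sqrt(f) = -2 * log10( (e/D)/3.7 + 2.51 / (Re * sqrt(f)) )
    """
    rel = roughness / diameter      # relative roughness e/D
    f = 0.02                        # reasonable initial guess
    for _ in range(100):
        rhs = -2.0 * math.log10(rel / 3.7 + 2.51 / (re * math.sqrt(f)))
        f_new = 1.0 / rhs ** 2
        if abs(f_new - f) < tol:
            return f_new
        f = f_new
    return f

# Illustrative case: commercial steel pipe (e ~ 0.045 mm assumed),
# internal diameter 100 mm, Reynolds number 100,000.
f = colebrook_friction_factor(re=1e5, roughness=0.045e-3, diameter=0.1)
print(f"Darcy friction factor f = {f:.4f}")
```

The iteration converges in a handful of steps for typical pipe-flow values; the resulting f can then be used in the Darcy-Weisbach formula mentioned above.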

It is a long article with many graphs and formulas. For more, visit the source knol: How to calculate pressure drop and friction losses in a pipe

Solar Energy - A Synopsis by William Pentland

Knol Shared under Creative Commons Attribution 3.0 License

Source Knol: Solar Energy

by William Pentland, Senior Energy Systems Analyst at Pace Energy & Climate Center
New York

Solar Energy

The Mechanics of Energy
Energy is the capacity of matter to perform work.[2] Energy exists in multiple forms - mechanical, thermal, chemical, electrical, radiant, and atomic. One form of energy can be converted into any other form of energy if exposed to the appropriate processes. For example, sunlight, a form of radiant energy, is converted into carbohydrates, a form of chemical energy, by plants through a process called photosynthesis.[3] Animals transform chemical energy stored in plants into either kinetic energy (physical movement) or the chemical bonds - a second form of chemical energy - that hold together a living person's body. Plants that die unconsumed can, over eons of time, morph into fossil fuels like oil and natural gas.[4]

Synopsis of Solar Energy
The Sun is about 900,000 miles across, and its core temperature is on the order of 10 million degrees. The surface of the sun is roughly 6,000°C, and its hot gases emit light with a spectrum ranging from the ultraviolet, through the visible, into the infrared. Photovoltaic (solar) cells convert solar power directly into electrical power. Light consists of discrete particle-like packets of energy called photons, whose energies (usually expressed in electron volts) reflect the sun's surface temperature. The energy carried by individual photons varies, but the visible region of the light spectrum contains among the highest concentrations of the energy that reaches the planet.[5]
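The photon energies mentioned above follow from the standard relation E = hc/λ. A short sketch, using standard physical constants and illustrative wavelengths from the three spectral regions named in the paragraph:

```python
# Standard physical constants; the wavelengths below are illustrative.
H = 6.62607015e-34    # Planck constant, J.s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron volt

def photon_energy_ev(wavelength_nm):
    """Photon energy E = h*c / wavelength, converted to electron volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

for label, nm in [("ultraviolet", 300), ("visible (green)", 550), ("infrared", 1000)]:
    print(f"{label:>15} ({nm} nm): {photon_energy_ev(nm):.2f} eV")
```

Shorter wavelengths carry more energy per photon, which is why ultraviolet photons (about 4 eV here) pack more punch than infrared ones (about 1.2 eV).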

More energy from sunlight strikes the Earth in one hour than all the energy consumed on the planet in a year. At high noon on a cloudless day, the surface of the Earth receives 1,000 watts of solar power per square meter. Sunlight provides by far the largest of all carbon-neutral or clean-energy sources. Heat travels in all directions from the Sun and is the ultimate source of all energy on Earth. This energy is responsible for all sorts of weather events, not only scorching heat waves. For instance, wind occurs when sunlight heats the ground, which heats the air above it, which rises, so that cool air whisks in to take its place.

In the past decade, solar energy has attracted significant attention from investors, policymakers and the public generally because it is widely available, geopolitically secure and environmentally sustainable. Indeed, solar energy does not create greenhouse gases as a byproduct of generating electricity. Not surprisingly, it is widely considered among the most compelling solutions available for the world's need for clean, abundant sources of energy. Skeptics need only consider the $7.5 billion solar-energy industry, still growing at a rate of more than 30% every year, to appreciate the growing popularity of solar energy in mainstream electricity markets. Still, in 2001, solar electricity provided less than 0.1% of the world's electricity.

What "Efficiency" Means in the Solar-Energy Sector
The efficiency of a solar cell is a measure of its ability to convert the energy that falls on it in the form of EM radiation into electrical energy, expressed as a percent. The power rating of a solar cell is expressed in watts, either as a peak watt (Wp), which is a measure of maximum possible performance under ideal conditions, or under more realistic conditions that include normal operating cell temperature and AMPM (whole-day rather than peak-sunshine) standard ratings. The source knol includes a chart of solar efficiencies for several leading-edge solar cell technologies, not reproduced here.
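The efficiency definition above can be sketched as a small helper. The cell size and output power below are hypothetical, not figures from the article:

```python
def cell_efficiency(power_out_w, irradiance_w_m2, area_m2):
    """Electrical power out divided by radiant power in, as a percent."""
    return 100.0 * power_out_w / (irradiance_w_m2 * area_m2)

# Hypothetical 15.6 cm x 15.6 cm cell producing 4.9 W under 1,000 W/m^2:
eff = cell_efficiency(power_out_w=4.9, irradiance_w_m2=1000.0, area_m2=0.156 ** 2)
print(f"efficiency = {eff:.1f}%")   # roughly 20%
```

The same ratio underlies both the percent efficiency figure and the peak-watt rating: Wp is simply the power out under the standard 1,000 W/m² ideal-condition input.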

Solar energy is the conversion of the sun’s energy into electricity. Light emitted by the sun is a form of electromagnetic (EM) radiation, and the visible spectrum comprises the majority of solar radiation. EM radiation that falls below the visible spectrum (the infrared region) contains less energy while radiation above the visible spectrum (the ultraviolet region) contains more energy. Solar cells respond to various forms of EM radiation in different ways, depending on the material used to construct the cells.

Crystalline silicon, for example, is able to use the entire visible spectrum, plus a portion of the infrared spectrum. Energy in EM radiation that is outside of the useable region of a solar cell is generally lost as heat. Insolation is the amount of energy present in sunlight falling on a specific geographical region, which is determined by a range of factors that include time of day, time of year, climate, air pollution and several other factors. As a result, the economics of solar energy depend heavily on appropriate geographic siting.

A Very Short History of Solar Energy
“I’d put my money on the sun & solar energy. What a source of power! I hope we don’t have to wait until oil and coal run out before we tackle that. I wish I had more years left.”
-Thomas Edison, 1931

In 1767, Swiss scientist Horace de Saussure built the world's first solar collector, which was used years later by Sir John Herschel to cook food during his South African expedition in the 1830s. Meanwhile, on September 27, 1816, Robert Stirling applied for a patent for his economiser at the Chancery in Edinburgh, Scotland. This engine was later used in the dish/Stirling system, a solar thermal electric technology that concentrates the sun's thermal energy to produce electric power. In 1839, Alexandre-Edmond Becquerel, a French physicist, discovered the so-called photovoltaic effect[7] when he built a device that could measure the intensity of light by observing the strength of an electric current between two metal plates. When sunlight is absorbed by a solar cell, the solar energy knocks electrons loose from their atoms, allowing the electrons to flow through the material to produce electricity. This process of converting light (photons) to electricity (voltage) is called the photovoltaic (PV) effect.

Becquerel's conversion process transformed only 1% of the sunlight that fell on the submerged electrode into electricity. In other words, the conversion process was only 1% efficient. Following the initial discovery of the PV effect, scientists experimented with different materials in an attempt to find a practical use for PV systems. In the late nineteenth century, scientists discovered that the metal selenium was particularly sensitive to sunlight, and during the 1880s Charles Fritts constructed the first selenium solar cell. His device, however, was inefficient, converting less than one percent of the received light into usable electricity.

John Ericsson, a Swedish inventor who lived and worked for most of his adult life in the United States, designed and built the world's first solar-energy engine/dish in Pasadena, Calif. Ericsson presented the concept design for the solar machine (pictured in the source knol) in 1876 at the centennial celebration in Philadelphia.

The Fritts selenium solar cell was mostly forgotten until the 1950s, when the drive to produce an efficient solar cell was renewed. It was known that the key to the photovoltaic cell lay in creating a semiconductor that would release electrons when exposed to radiation within the visible spectrum. During this time, researchers at the Bell Telephone Laboratories were developing similar semiconductors to be used in communication systems. By accident, Bell scientists Calvin Fuller and Daryl Chapin found the perfect semiconductor: a hybridized crystal called a "doped" cell, which was made with phosphorus and boron. The first solar cells using these new crystals debuted in 1954 and yielded a conversion efficiency of nearly six percent. Later improvements in the design increased the efficiency to almost 15 percent.

In 1957, Bell Telephone used a silicon solar cell to power a telephone repeater station in Georgia. The process was considered a success although it was still too inefficient to penetrate the general marketplace. The first real application of silicon solar cells came in 1958, when a solar array was used to provide electricity for the radio transmitter of Vanguard 1, the second American satellite to orbit Earth. Solar cells have been used on almost every satellite launched since.

Between 1980 and 2004, global solar cell production increased from less than 10 MW annually to about 1,200 MW annually. Total global installed PV capacity now exceeds 3 gigawatts.

By the 1960s, photovoltaic cells were used to power U.S. space satellites. By the 1980s, the simplest photovoltaic systems were being used commercially to power small calculators and wrist watches. Today, advanced solar-energy systems provide electricity to pump water, power communications equipment and increasingly generate electricity on a commercial scale. Two solar-energy technologies currently dominate the market for solar-based electricity production. Concentrating solar power systems direct sunlight through a magnifying lens, which increases the heat energy and drives a generator that produces electricity. Photovoltaics systems (PV) convert solar energy into electricity with semiconductors. A third technology, solar heating, absorbs the sun's energy with solar collectors and provides low-grade heat used directly for solar water heating, solar space heating in buildings, and solar pool heaters.

From the mid 1950s to the early 1970s, PV research and development (R&D) was directed primarily toward space applications and satellite power. Large-scale development of solar collectors began in the United States in the mid-1970s under the Energy Research and Development Administration and continued under the auspices of the U.S. Department of Energy after 1976. In 1973, a greatly increased level of R&D on solar cells was initiated following the oil embargo in that year, which caused widespread concern regarding energy supply.

In 1976, the U.S. Department of Energy, along with its Photovoltaics Program, was created. DOE, as well as many other international organizations, began funding PV R&D at appreciable levels, and a terrestrial solar cell industry quickly evolved.[8]

By the late twentieth century, solar energy had become practical and affordable enough to warrant its broad-scale marketing as one of the primary energy sources of the future. During the 1990s, the price of solar energy plunged 50 percent as technology improved. Meanwhile, PV applications went from a niche source of electricity to bringing solar technology into the margins of the mainstream. More than 10,000 homes in the United States were powered exclusively by solar energy in the late 1990s while an additional 200,000 homes supplemented electricity consumption with some form of photovoltaic system, according to the Solar Energy Industries Association. Although the solar power industry was valued globally at $5 billion in 2003, solar power still represented only about one percent of all electric power in the United States that year, primarily due to its persistently high costs and the continuing availability of cheap energy via traditional sources. As you will discover in the following sections, these barriers have fallen dramatically in recent years and unleashed a small revolution in the role the sun plays in humanity's daily life.

Principal Solar-Energy Technologies
In 2004, solar energy accounted for only 0.039 percent of the world's total primary energy supply of 11,059 million metric tons of oil equivalent, according to the International Energy Agency. In other words, solar energy provided about 4 terawatt-hours of electricity generation, out of an estimated overall total production of some 17,450 terawatt-hours (1 terawatt = 1 trillion watts). The strength of the solar energy available at any point on the earth depends on the day of the year, the time of day, and the latitude at which it strikes the Earth.

Sunlight is composed of photons, or particles of solar energy. These photons contain various amounts of energy corresponding to the different wavelengths of the solar spectrum. When photons strike a photovoltaic cell, they may be reflected, pass right through, or be absorbed. Only the absorbed photons provide energy to generate electricity. When enough sunlight is absorbed by the material, electrons are dislodged from the material's atoms. Special treatment of the material surface during manufacturing makes the front surface of the cell more receptive to free electrons, so the electrons naturally migrate to the surface.

When the electrons leave their position, holes are formed. When many electrons, each carrying a negative charge, travel toward the front surface of the cell, the resulting imbalance of charge between the cell's front and back surfaces creates a voltage potential like the negative and positive terminals of a battery. When the two surfaces are connected through an external load, electricity flows. To increase power output, cells are electrically connected into a packaged weather-tight module. Modules can be further connected to form an array. The term array refers to the entire generating plant, which can consist of as few as one solar module or several thousand modules. The number of modules connected together in an array depends on the amount of power output needed.

Several technologies have been developed to harness that energy, including concentrating solar-power systems, passive solar heating and daylighting, photovoltaic systems, solar hot water, and solar process heat and space heating and cooling. To understand the mechanics of these technologies, the best place to begin is the beginning of solar-energy technologies - photovoltaics.

Photovoltaics (PV)

Photovoltaic solar cells convert solar radiation, or sunlight, directly into electrical power. Solar cells are the basic building blocks of photovoltaic systems. PV-based solar energy has become one of the most successful energy technologies the world has ever seen, achieving cost-reductions similar to those achieved by Ford during the era of the Model-T.

There are two types of photovoltaic solar cells: crystalline silicon cells and thin-film solar cells. Crystalline silicon solar cells typically use silicon or polysilicon substrates. Individual cells vary in size from about 1/2 inch to about 4 inches across and include additional layers placed on top of the silicon to enhance light capture. In "thin-film" solar cells, the substrate is made of glass, metal, or polymer, with small deposits of gallium or other semiconductor materials placed on top. The deposited layer may be just a few micrometers thick. Thin-film solar cells are typically less efficient than crystalline silicon solar cells.

When sunlight strikes a solar panel, electricity is produced because sunlight releases electrons. Solar cells are frequently combined to produce a large amount of electrical energy in solar-modules and ultimately solar arrays. Solar cells with conversion efficiencies in the neighborhood of 20% were readily available at the beginning of the 21st century, with efficiencies twice as high or more achieved with experimental cells.

Energy conversion efficiency is an expression of the amount of energy produced in proportion to the amount of energy available to a device. The sun produces a lot of energy in a wide light spectrum, but we have so far learned to capture only small portions of that spectrum and convert them to electricity using photovoltaics. So, today's commercial PV systems are about 20% efficient. And many PV systems degrade a little bit (lose efficiency) each year upon prolonged exposure to sunlight. For comparison, a typical fossil fuel generator has an efficiency of about 28%.

Solar cells are typically combined into modules that hold about 40 cells; about 10 of these modules are mounted in PV arrays that can measure up to several meters on a side. These flat-plate PV arrays can be mounted at a fixed angle facing south, or they can be mounted on a tracking device that follows the sun, allowing them to capture the most sunlight over the course of a day. About 10 to 20 PV arrays can provide enough power for a household; for large electric utility or industrial applications, hundreds of arrays can be interconnected to form a single, large PV system.
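The figures above (about 1,000 W/m² of solar power at high noon and roughly 20% commercial efficiency) can be combined into a rough sizing sketch. The module area and household load below are illustrative assumptions, not article figures:

```python
import math

PEAK_IRRADIANCE = 1000.0   # W per square metre at high noon (article figure)
EFFICIENCY = 0.20          # ~20% commercial PV efficiency (article figure)

def module_peak_watts(area_m2):
    """Peak electrical output of one module under full noon sun."""
    return PEAK_IRRADIANCE * EFFICIENCY * area_m2

module_area = 1.6          # m^2 per module (assumed)
household_load = 5000.0    # W of peak demand (assumed)

per_module = module_peak_watts(module_area)
modules_needed = math.ceil(household_load / per_module)
print(f"{per_module:.0f} Wp per module; about {modules_needed} modules for a 5 kW load")
```

Under these assumptions a module delivers 320 Wp, so a few rows of modules in an array cover a household's peak demand, consistent with the 10 to 20 arrays per household quoted above for smaller modules.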

The performance of a photovoltaic array is dependent upon sunlight. Climate conditions and environmental factors have a huge impact on the amount of solar energy received by a photovoltaic array. The current record for solar-cell conversion efficiency, established in August 2008 by the National Renewable Energy Laboratory, is 40.8 percent.

Photovoltaic cells, like batteries, produce direct electric current (DC) which is generally used to power fairly small loads like those usually required by electronic equipment. When DC from photovoltaic cells is used for commercial applications or sold to electric utilities using the electric grid, it must be converted to alternating current (AC) using inverters, solid state devices that convert DC power to AC.

Concentrating Solar Power (CSP) or Solar Thermal

Solar cells are often placed under a lens that focuses or concentrates the sunlight before it hits the cells. This approach has both advantages and disadvantages compared with flat-plate PV arrays. The main idea is to use very little of the expensive semiconducting PV material while collecting as much sunlight as possible. But because the lenses must be pointed at the sun, the use of concentrating collectors is limited to the sunniest parts of the country. Some concentrating collectors are designed to be mounted on simple tracking devices, but most require sophisticated tracking devices, which further limit their use to electric utilities, industries, and large buildings.

Concentrating solar power (CSP) systems channel sunlight through an optical lens that amplifies the strength and heat of the sun. There are currently three principal types of concentrating solar energy systems: trough systems, dish/engine systems and power towers. CSP plants deploy these systems in large numbers of mirror configurations that convert the sun's energy into high-temperature heat. The heat energy turns water into steam, which then powers a turbine and generates electricity.

On a large scale, and as a means of generating power, CSP has several advantages over photovoltaic cells. Power from concentrating solar heat is less variable than from photovoltaic solar (or from wind), an important consideration for a full-scale utility. Solar thermal facilities can be designed to store energy for several hours after sundown, helping a utility meet evening spikes in demand. And since solar thermal plants use the same steam turbines to generate power that other generating stations use, the plants can be hybridized to burn natural gas or other fuels during nighttime hours, to keep output constant and maximize use of the turbines.

Concentrated solar power is currently the fastest-growing, utility-scale renewable energy alternative after wind power, according to a December 2007 report by Emerging Energy Research, a Cambridge, Mass.-based consulting firm. The study describes the technique as "well-positioned to compete against other electricity generation technologies" and estimates that $20 billion will be spent on solar thermal power projects around the world from 2008 to 2013.

Concentrating collectors reflect solar energy falling on a large area and focus it onto a small receiving area, which amplifies the intensity of the solar energy. The temperatures that can be achieved at the receiver can reach over 1,000 degrees Celsius. The concentrators must move to track the sun if they are to perform effectively; the devices used to achieve this are called heliostats. There are three main types of concentrating solar power systems: parabolic-trough, dish/engine, and power tower.

It is a very long article. Still some more sections are there. For these additional sections as well as images please visit source knol.

Source Knol: Solar Energy

Tuesday, July 12, 2011

Genital Warts - Background and Management - by Dr. Daniela Carusi

Reshared Knol
Source Knol: Genital Warts

by Daniela Carusi, MD, MSc
Instructor of Obstetrics, Gynecology & Reproductive Biology at Brigham & Women's Hospital & Harvard Medical School, Boston


Genital warts, like warts that occur on the hands and feet, develop when a group of skin cells divides excessively, producing a raised, firm bump. They usually measure from one to a few millimeters, though occasionally they can cover a wide area of the genitalia. When very large they can take on a cauliflower-like appearance. In women the lesions are usually found at the base of the vagina or on the labia, though they can also occur within the vagina and on the cervix. In men, they can occur on the penile shaft or on the glans (tip) of the penis. In either gender they may occur in the anal area, particularly in those who receive anal intercourse.

Genital warts often cause no symptoms, and may be found incidentally during a physical exam. Larger warts may cause discomfort, itching, burning, or vaginal discharge in women. Very large warts may actually obstruct the vagina, urethra, or anus, or may occasionally cause skin cracking or bleeding.

All warts are caused by an infection with the human papilloma virus, or HPV. HPV is the most common sexually transmitted virus in the United States, affecting an estimated 80% of sexually active adults at some time. Most people who acquire HPV have no symptoms, and it is usually impossible to know when an individual was infected. HPV is transmitted sexually, through skin-to-skin or oral-skin contact. An individual may transmit the virus even when there is no visible wart.

Well over 100 strains of the virus have been identified, which differ in the location of infection (in terms of genital or non-genital infection) and their tendency to cause cancers. HPV types 6 and 11 cause most genital warts but do not cause cancers, and thus they are considered “low-risk” types. Individuals may be infected by multiple virus types, so those who have non-cancerous warts may also have a strain of “high-risk,” cancer-causing HPV. Individuals with warts should have regular exams, and women should have cervical cancer screening according to standard guidelines.[1]

Risk factors for HPV infection include young age (with the highest incidence among women 20 to 24 years old), history of multiple sexual partners, and a weakened immune system. Warts may occur from weeks to months after the initial infection, and in general the viral infection will clear within a year of acquisition.[2] Those who contract HPV at an older age and those with weakened immune systems are less likely to eliminate the infection.

How are Genital Warts Diagnosed?

Trained health care providers can identify warts by inspecting the genitalia. Usually no other testing is necessary. The warts are firm, raised, flesh-colored or pale bumps. They often occur in clusters or as scattered small lesions. They must be distinguished from micropapillomatosis, which are normal, fine bumps on the genitalia. Genital warts will have multiple tiny lobules coming from a single base, while micropapillomatosis has only one tiny bump that arises from a single base.

The source knol includes a picture showing a few small genital warts on a female patient (two vulvar and one peri-anal wart, indicated by arrows), as well as an image of more extensive warts; the images are not reproduced here.

If the skin lesion has an unusual appearance, such as an unusual color or texture, or if it appears as an open or bleeding sore, then it should be biopsied. This involves giving numbing medicine to the skin and then removing a small piece of the lesion, which can be accomplished in a medical office. This should be done to exclude the possibility of a cancer or pre-cancer. A biopsy should also be performed if the wart does not respond to routine treatment or if the patient has a weakened immune system (and would therefore be more susceptible to cancers).

How are Genital Warts Treated?

Genital wart treatment is aimed at relieving a patient’s symptoms. Small warts found on physical exam may be left alone if they do not bother the patient. Approximately 20-30% of these lesions will resolve on their own.

Genital warts can be treated with medications or by physically removing the lesions. Treatments that eliminate the warts may not remove the virus. Consequently, the lesions will commonly recur after successful treatment, and individuals may still transmit the virus even when the warts are no longer present. Currently there is no medical cure available for HPV, although as noted, many individuals will clear the infection on their own a few years after exposure.

Medical Therapy

Medical treatments for genital warts work in one of two ways: they either destroy the cells that make up the wart, or they activate the patient’s immune system to clear the lesion. Some medical treatments can be used by the patient at home, while others require application by a medical provider. There are only a few studies comparing one type of therapy to another, and thus the treatment should be selected based on availability, cost, and convenience. Any type of medical therapy usually requires multiple applications.

Common Medical Therapies:

· Trichloroacetic acid (TCA): This acid solution is applied directly to the wart with a cotton swab, and works by destroying cell proteins. Care must be taken to avoid touching the healthy skin, and it must be applied by a health care professional. Applications are usually performed weekly until the warts resolve, usually within 4-6 weeks.

· Podophyllin: This medication works by blocking cell division so that the wart can no longer grow. It is similarly applied by a health care provider on a weekly basis, and should be washed off a few hours later. It should not be used within the vagina or on a woman’s cervix, as it can cause burns.

· Podophyllotoxin (Condylox®): This medication is derived from podophyllin, but can be applied by the patient at home. Twice daily for 3 consecutive days the patient applies the drug to the warts, and then takes 4 days off of treatment. This cycle can be repeated weekly for up to 4 weeks. A study comparing this medication to podophyllin showed that the home therapy had better results.[3]

· Imiquimod (Aldara®): This medication works by activating an individual’s immune system in the area of the wart. The immune cells then destroy the wart tissue. The patient applies a cream directly to the wart three times per week, washing off the medication 6 – 10 hours later. This can be done for up to 16 weeks. The treatment normally causes some redness and inflammation at the treatment site, but these symptoms will resolve.

Uncommon Medical Therapies

· 5-Fluorouracil: This is another medication that blocks cell division, and can be injected at the base of the wart. This is performed weekly for up to 6 weeks. Patients may notice pain and ulceration at the injection site.

· Interferon: This medication stimulates the patient’s immune system, and is also injected at the base of the wart. It may be given as an intramuscular injection as well. In either case it may cause pain and flu-like symptoms, and is not very well tolerated by patients.

Wart Removal

Genital warts may be physically removed by freezing, cutting them off with a knife or scissors, or destroying them with laser or ultrasound. As with medical therapy, this treatment may not remove the underlying virus. These physical treatments are often performed when medical treatments fail, or when the warts are very large.

· Freezing/ Cryotherapy: Wart tissue may be destroyed by freezing it with liquid nitrogen or nitrous oxide. This is performed in a medical office, and may require multiple treatments on a weekly basis. It is usually reserved for smaller lesions.

· Surgical removal: Warts may be removed by cutting them off at their base. This may be desirable with very large lesions, or in situations where a biopsy is needed to exclude a pre-cancer or cancer. Depending on the size of the warts, this procedure requires local (numbing medicine given only to the area involved) or general (patient goes to sleep) anesthesia, and the procedure may produce some pain or scarring.

· Laser therapy: A trained provider may destroy the wart tissue with a laser. This must be done in an operating room with anesthesia, and can also produce pain and scarring after the procedure. It is often the treatment of choice with very large lesions.

· Ultrasound aspiration: This is a specialized treatment that destroys the wart tissue with an ultrasonic aspirator. It also must be performed in an operating room by a trained clinician.

Surgical removal, laser, and ultrasound treatment should ideally remove the wart without damaging the underlying skin. This will minimize scarring later on. Follow-up care involves soaking the treated area, and often applying antibacterial creams.

Is it Safe to Have Sex during Treatment for Genital Warts?

This is a common concern, especially as a patient may need many weeks of treatment. In general, contact should be avoided while a medication is on the skin, or if the patient is experiencing pain or inflammation in the treated area. An individual is capable of transmitting HPV both during and after treatment, so waiting for wart clearance will not prevent virus transmission. Patients should not have intercourse after surgical treatment until cleared by a health care provider.

How can one Prevent Genital Warts?

Because the warts are caused by a sexually transmitted virus, avoiding sexual contact with new partners can prevent them. However, this may not be realistic for many people. Covering the lesions prior to skin-to-skin contact may block transmission, and thus condom use is advisable for men. However, it is more difficult to block contact with vulvar or labial lesions, or to block oral-genital spread. A female condom may be placed within the vagina and over part of the woman’s vulva, and a dental dam (a piece of latex placed over an individual’s mouth and tongue) may be used to avoid oral-genital transmission. These methods have not been formally studied as a means to block HPV transmission.

Currently, a vaccine (Gardasil®) is available which causes immunity to the HPV strains that most commonly cause warts (types 6 and 11), as well as to the strains that most commonly cause cervical cancers (types 16 and 18). The vaccine is effective when given prior to first contact with the virus, and thus it should ideally be given prior to any sexual activity. The Centers for Disease Control and Prevention (CDC) recommends vaccine administration in girls 11-12 years old, with catch-up vaccine for 13-26 year-old unvaccinated females.[4] There are currently no official vaccine guidelines for older women, or for boys and men. Importantly, the vaccine will not cure an infection that has already been established.

What are the Implications of Genital Warts in Pregnancy?

There are two major concerns when pregnant women have genital warts: safe treatment for the mother, and potential transmission to the baby.

Genital warts may grow more rapidly during pregnancy, presumably due to impaired immunity in the pregnant state. Pregnant women may desire treatment to avoid discomfort. Additionally, large warts may block the vaginal opening, or lead to skin abrasions and tears during birth. Podophyllin, podophyllotoxin, 5-fluorouracil, and interferon should not be used during pregnancy, as they can potentially affect the fetus. Imiquimod has not been well studied during pregnancy, and is thus often avoided. This leaves either TCA or cryotherapy as first line therapies for pregnant women, and both are considered safe in this situation.

HPV, including types 6 and 11, may transmit to the fetus during birth. Rarely this can cause a condition called respiratory papillomatosis, where HPV causes disease in the baby’s respiratory tract. This is estimated to occur in 7 out of 1000 women with genital warts.[5] Because medical treatment does not eliminate the virus, it is not recommended solely as a means of preventing transmission. Furthermore, transmission has been documented after delivery by Cesarean section.[6] Due to this finding, the rarity of transmission, and the implications of surgery for the mother, Cesarean section is not recommended for the purpose of preventing transmission.


1. American College of Obstetricians and Gynecologists, ACOG Practice Bulletin: clinical management guidelines for obstetrician-gynecologists. Number 45, August 2003. Cervical cytology screening. Obstet Gynecol, 2003. 102(2): p. 417-27.

2. Cox, J.T., The development of cervical cancer and its precursors: what is the role of human papillomavirus infection? Curr Opin Obstet Gynecol, 2006. 18 Suppl 1: p. s5-s13.

3. Hellberg, D., et al., Self-treatment of female external genital warts with 0.5% podophyllotoxin cream (Condyline) vs weekly applications of 20% podophyllin solution. Int J STD AIDS, 1995. 6(4): p. 257-61.

4. Markowitz, L.E., et al., Quadrivalent Human Papillomavirus Vaccine: Recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR Recomm Rep, 2007. 56(RR-2): p. 1-24.

5. Silverberg, M.J., et al., Condyloma in pregnancy is strongly predictive of juvenile-onset recurrent respiratory papillomatosis. Obstet Gynecol, 2003. 101(4): p. 645-52.

6. Rogo, K.O. and P.N. Nyansera, Congenital condylomata acuminata with meconium staining of amniotic fluid and fetal hydrocephalus: case report. East Afr Med J, 1989. 66(6): p. 411-3.

Article/Knol reshared under Creative Commons Attribution 3.0 License

Marketing Strategy - Differentiating and Positioning the Market Offering - Reshared Article

Reshared Creative Commons 3.0 Attribution License Knol
Source Knol: Marketing Strategy - Differentiating and Positioning the Market Offering

by Narayana Rao K.V.S.S.

Marketing Strategy

Philip Kotler discussed five issues of marketing strategy in the 9th edition of his Marketing Management:

Differentiating and Positioning the Market Offering
Developing New Products
Managing Life cycle Strategies
Designing Marketing Strategies for Market Leaders, Challengers, Followers, and Nichers
Designing and Managing Global Marketing Strategies

These issues are covered in different knols by me. This knol describes differentiating and positioning.

Differentiating and Positioning the Market Offering

The issues discussed in the area of differentiating and positioning the market offering are:

•Tools for Competitive Differentiation
•Developing a Positioning Strategy
•Communicating the Company’s Positioning

Tools for Competitive Differentiation
Differentiation: the act of designing a set of meaningful differences to distinguish the company's offering from competitors' offerings.

Boston Consulting Group's differentiation opportunities matrix: Actually it is a competitive advantage matrix applicable to differentiation opportunities.

Four types of industries identified by BCG matrix are:

Volume industry: only a few, but very large, competitive advantages are possible. The benefit of the advantage is proportional to company size and market share. Example: the construction industry.

Stalemated industry: in this type there are only a few opportunities, and the benefit from each is small. The benefit is also not proportional to size or market share. Example: the steel industry, where it is hard to differentiate the product or decrease its manufacturing cost.

Fragmented industry: in this type, there are many opportunities, but the benefit of each of them is small. Benefit does not depend on size or market share.

Specialized industry: in this type, the opportunities are more and benefit of each opportunity is high. The benefit is not related to size or market share.

Kotler mentions Milind Lele's observation that companies differ in their potential maneuverability along five dimensions: their target market, product, place (channels), promotion, and price. The freedom to maneuver is affected by the industry structure and the firm's position in the industry. For each competitive option permitted by this maneuverability, the company needs to estimate the return. The opportunities that promise the highest return define the company's strategic leverage. The concept of maneuverability brings out the fact that a strategic option that worked very well in one industry may not work equally well in another, because that option offers little maneuverability in the different industry and to the firm in question.

Five Dimensions of Differentiation

Five dimensions can be utilized to provide differentiation:

Product
Services that accompany marketing, sales, and after-sales activity
Personnel that interact with the customer
Channel through which the offering reaches the customer
Image of the company or product

Differentiating a Product


Quality: performance and conformance
Performance - the performance of the prototype or the exhibited sample.

Conformance - The performance of every item made by the company under the same specification


Services differentiation

Ordering ease
Customer training
Customer consulting
Miscellaneous services

Personnel Differentiation


Channel differentiation

Expertise of the channel managers
Performance of the channel in ease of ordering, and service, and personnel

Image differentiation

First, a distinction between identity and image: identity is designed by the company, and through its various actions the company tries to make it known to the market.

Image is the market's understanding and view of the company.

An effective image does three things for a product or company.

1. It establishes the product's planned character and value proposition.
2. It distinguishes the product from competing products.
3. It delivers emotional power and stirs the hearts as well as the minds of buyers.

The identity of the company or product is communicated to the market by

Written and audiovisual media
Atmosphere of the physical place with which customer comes into contact
Events organized or sponsored by the company.

Developing a Positioning Strategy

Levitt and others have pointed out dozens of ways to differentiate an offering (Theodore Levitt, "Marketing Success Through Differentiation - of Anything," Harvard Business Review, Jan-Feb 1980).

While a company can create many differences, each difference created has a cost as well as consumer benefit. A difference is worth establishing when the benefit exceeds the cost. More generally, a difference is worth establishing to the extent that it satisfies the following criteria.

Important: The difference delivers a highly valued benefit to a sufficient number of buyers.

Distinctive: The difference either isn't offered by others or is offered in a more distinctive way by the company.

Superior: The difference is superior to the ways of obtaining the same benefit.

Communicable: The difference is communicable and visible to the buyers.

Preemptive: The difference cannot be easily copied by competitors.

Affordable: The buyer can afford to pay the higher price

Profitable: The Company will make profit by introducing the difference.

Positioning is the result of differentiation decisions. It is the act of designing the company's offering and identity (that will create a planned image) so that they occupy a meaningful and distinct competitive position in the target customer's minds.

The end result of positioning is the creation of a market-focused value proposition, a simple clear statement of why the target market should buy the product.


Volvo (station wagon)

Target customer-Safety conscious upscale families,

Benefit - Durability and Safety,

Price - 20% premium,

Value proposition - The safest, most durable wagon in which your family can ride.

How many differences to promote?

Many marketers advocate promoting only one benefit in the market (even though your market offering may have, and indeed should have, many differentiators across product, service, personnel, channel, and image).

Kotler mentions that double-benefit promotion may be necessary if two or more firms claim to be best on the same attribute. He gives the example of Volvo, which claims to be both the "safest" and the most "durable".

Four major positioning errors

1. Underpositioning: Market only has a vague idea of the product.
2. Overpositioning: Only a narrow group of customers identify with the product.
3. Confused positioning: Buyers have a confused image of the product as it claims too many benefits or it changes the claim too often.
4. Doubtful positioning: Buyers find it difficult to believe the brand’s claims in view of the product’s features, price, or manufacturer.

Different positioning strategies or themes

1. Attribute positioning: The message highlights one or two of the attributes of the product.
2. Benefit positioning: The message highlights one or two of the benefits to the customer.
3. Use/application positioning: Claim the product as best for some application.
4. User positioning: Claim the product as best for a group of users. - Children, women, working women etc.
5. Competitor positioning: Claim that the product is better than a competitor.
6. Product category positioning: Claim to be the best in a product category. Example: mutual fund rankings by Lipper.
7. Quality/Price positioning: Claim best value for price

Which differences to promote:

This issue is related to the earlier discussion of which differences are worth incorporating into the market offering. But now competitors' positioning also needs to be considered, in order to highlight one or two exclusive benefits offered by the product under consideration.

Communicating the Company’s Positioning

Once the company has developed a clear positioning strategy, the company must choose various signs and cues that buyers use to confirm that the product delivers the promise made by the company.

Related Articles

•Marketing Strategy for New Industry Products
Pioneer in a Product - Issues When a product is new in the industry life cycle, the firm starting the production and sale ...

•Marketing Strategies for Challenger Firms
Firms take the role of challengers when they make aggressive efforts to further their market share.
•Marketing plan
To become operational, a marketing strategy needs to be derived into a marketing plan for the ongoing period.

Marketing Article Series Directory

Knols are updated periodically. Visit the source knol for updates, if any.
Source Knol: Marketing Strategy - Differentiating and Positioning the Market Offering

Monday, July 11, 2011

Brief History of Computers by Kevin Spaulding

Source Knol: Brief History of Computers

By Kevin Spaulding, Sunnyvale, CA

The Early days (1,000 B.C. to 1940)

Ancient Civilizations
Computers are named so because they make mathematical computations at fast speeds. As a result, the history of computing goes back at least 3,000 years, to when ancient civilizations were making great strides in arithmetic and mathematics. The Greeks, Egyptians, Babylonians, Indians, Chinese, and Persians were all interested in logic and numerical computation. The Greeks focused on geometry and rationality [1], the Egyptians on simple addition and subtraction [2], the Babylonians on multiplication and division [3], the Indians on the base-10 decimal numbering system and the concept of zero [4], the Chinese on trigonometry, and the Persians on algorithmic problem solving. [5] These developments carried over into later centuries, fueling advancements in areas like astronomy, chemistry, and medicine.
Pascal, Leibnitz, and Jacquard
During the first half of the 17th century there were very important advancements in the automation and simplification of arithmetic computation. John Napier invented logarithms to simplify difficult mathematical computations. [6] The slide rule was introduced in the year 1622 [7], and Blaise Pascal spent much of his life in the 1600's working on a calculator called the Pascaline. [9] The Pascaline was mostly finished by 1645 and was able to do addition and subtraction by way of mechanical cogs and gears. [8] In 1674 the German mathematician Gottfried Leibnitz created a mechanical calculator called the Leibnitz Wheel. [10] This 'wheel' could perform addition, subtraction, multiplication, and division, albeit not very well in all instances.
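Napier's insight can be shown with a line of modern arithmetic: since log(ab) = log(a) + log(b), a table of logarithms reduces a difficult multiplication to an addition plus table lookups. A minimal sketch in Python, using the standard math library in place of a printed log table:

```python
import math

# Multiplying via logarithms: look up log(a) and log(b), add them,
# then look up the antilog of the sum. The only "hard" step is addition.
a, b = 1234.0, 5678.0
product_via_logs = math.exp(math.log(a) + math.log(b))

print(round(product_via_logs))  # 7006652, the same as 1234 * 5678
```

Before mechanical and electronic calculators, this trick (worked with printed tables or a slide rule rather than a computer) was the standard way to handle the large multiplications needed in astronomy and navigation.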
Neither the Pascaline nor the Leibnitz Wheel can be categorized as a computer, because they had no memory where information could be stored and because they were not programmable. [5] The first device that did satisfy these requirements was a loom developed in 1801 by Joseph Jacquard. [11] Jacquard built his loom to automate the process of weaving rugs and clothing. It did this using punched cards that told the machine what pattern to weave: where there was a hole in the card the machine would weave, and where there was no hole it would not. Jacquard's idea of punched cards was later used by computer companies like IBM to program software.


Charles Babbage was a mathematics professor at Cambridge University who was interested in automated computation. In 1823 he introduced the Difference Engine, the largest and most sophisticated mechanical calculator of his time. Along with addition, subtraction, multiplication, and division to 6 digits, the Difference Engine could also solve polynomial equations. [12] It was never actually completed, because the British Government cut off funding for the project in 1842. [15] After this, Babbage began to draw up plans for the Analytical Engine, a general-purpose programmable computing machine. [13] Many people consider this to be the first true computer design, even though it only ever existed on paper. The Analytical Engine had all the same basic parts that modern computer systems have. [5] While designing the Analytical Engine, Babbage noticed that he could perfect his Difference Engine by using 8,000 parts rather than 25,000, and could handle numbers of up to 20 digits instead of just 6. He drew schematics for a Difference Engine No. 2 between 1847 and 1849.
After twelve years spent trying to get his Difference Engine No. 2 built, Babbage had to give up. The British Government was not interested in funding the machine, and the technology to build its gears, cogs, and levers did not exist in that time period. Babbage's plans for the Difference Engine and Difference Engine No. 2 were hidden away after his death, and finally resurfaced around 150 years after they had been conceived. In 1991 a team of engineers at the Science Museum in London completed the calculating section of Babbage's Difference Engine. [14] In 2002 the same museum created a full-fledged model of the Difference Engine No. 2 that weighs 5 tons and has 8,000 parts. [16] Remarkably, it worked just as Babbage had envisioned. A duplicate of this engine was built and sent to the Computer History Museum in Mountain View, CA, to be demonstrated and displayed until May 2009.


In America during the late 1800's, immigrants were pouring in from all over the world. Officials at the U.S. Census Bureau estimated that it would take ten to twelve years to do the 1890 census. By the time they finished, it would be 1900, and they would have to do the census all over again! The problem was that all of the calculations for the census were performed manually. To solve this problem, the U.S. Census Bureau held a competition that called for proposals outlining a better way to do the census. [17] The winner of the competition was Herman Hollerith, a statistician, who proposed that the use of automated machines would greatly reduce the time needed to do the census. He then designed and built programmable card-processing machines that would read, tally, and sort data entered on punched cards. The census data was coded onto cards using a keypunch. Then these cards were taken to a tabulator (for counting and tallying) or a sorter (for ordering alphabetically or numerically). [18]

Hollerith's machines were not all-purpose computers, but they were a step in that direction. They successfully completed the census in just 2 years. The 1880 census had taken 8 years to complete, and the population was 30% smaller then, which meant that automated processing was definitely more efficient for large-scale operations. [5] Hollerith saw the potential in his tabulating and sorting machines, so he left the U.S. Census Bureau to found the Tabulating Machine Company. His punch-card machines became national bestsellers, and after a series of mergers with similar companies, the combined firm was renamed IBM in 1924. [19] The computer age was about to begin.

Birth of Computers (1940-1950)


World War II brought concerns about how to calculate the logistics of such a large-scale conflict. The United States needed to calculate ballistics, deploy massive numbers of troops, and crack secret codes. The military started a number of research projects to try to build computers that could help with these tasks and more. Beginning in 1939, IBM and Harvard University worked together to build a general-purpose computer called the Mark 1, with the U.S. Navy among its principal users. It was programmable and electromechanical, made of relays, switches, magnets, and gears, and was completed in 1944. [20] The Mark 1 had memory for 72 numbers and could perform 23-digit multiplication in 4 seconds. [5] It was operational for 15 years and performed many calculations for the U.S. Navy during WWII.

The Mark 1 was still a mix of the electronic and the mechanical. At the same time as the Mark 1, however, there was another project taking place. During WWII the United States Army was building new artillery that required firing tables. These firing tables were created by way of intense mathematical calculation that took a very long time to compute manually. To make this process quicker, the Army started a project in 1943 to build a completely electronic computing device. [21] J. Presper Eckert and John Mauchly headed the project and eventually created the Electronic Numerical Integrator and Calculator (ENIAC), which was completed in 1946. The ENIAC had 18,000 vacuum tubes and was absolutely gigantic: 100 feet long, 10 feet high, and 30 tons. It was about a thousand times faster than the Mark 1 at multiplying numbers and 300 times faster at addition. [22]
Another computer built during WWII was the Colossus, developed at Bletchley Park in Britain, where Alan Turing also worked on breaking the German Enigma code. These codebreaking machines helped the Allies win the war. In Germany, Konrad Zuse headed computer projects of his own, beginning with the Z1; his machines were little known at the time, and several were destroyed during the war. [23]
Von Neumann
Though the machines developed during the Second World War were definitely computers, they were not the kind of computers we are used to in modern times. John von Neumann helped work on the ENIAC and figured out how to make computers even better. The ENIAC was programmed externally with wires, connectors, and plugs. Von Neumann wanted programming to be internalized: instead of rerouting wires and plugs, a person could write a different sequence of instructions, stored in the machine's memory, that changes the way the computer runs. Von Neumann formulated the idea of the stored program, which is still implemented today in computers that use the 'von Neumann architecture'. [24]
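The stored-program idea can be sketched in a few lines of Python. This is a toy illustration, not von Neumann's actual design: the instruction set and memory layout here are invented for the example. The key point it shows is that instructions and data live in the same memory, so changing the program means changing memory contents rather than rewiring the machine.

```python
# A minimal sketch of a stored-program machine (hypothetical instruction set).
def run(memory):
    """Execute the instructions stored in memory until HALT."""
    acc = 0  # accumulator register
    pc = 0   # program counter: address of the next instruction
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":     # put the constant arg into the accumulator
            acc = arg
        elif op == "ADD":    # add the value stored at address arg
            acc += memory[arg]
        elif op == "STORE":  # write the accumulator to address arg
            memory[arg] = acc
        elif op == "HALT":
            return acc

# Program and data share one memory: cells 0-3 hold instructions,
# cell 4 holds a data value, cell 5 receives the result.
memory = {
    0: ("LOAD", 2),
    1: ("ADD", 4),
    2: ("STORE", 5),
    3: ("HALT", None),
    4: 40,
}
print(run(memory))  # 2 + 40 = 42
```

To make this machine do something different, one would simply store a different sequence of instruction tuples in memory; nothing about `run` itself changes. That, in miniature, is the shift von Neumann proposed over the ENIAC's plugboard programming.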

First Generation (1950 - 1957)

One of the first computers to implement von Neumann's idea was the EDVAC, completed in 1951 in a project von Neumann himself contributed to. A stored-program computer called the EDSAC had been developed in England at about the same time. [25] Eckert and Mauchly went on to commercialize the design as the UNIVAC 1, which was sold to the U.S. Bureau of the Census in March 1951 and was the first computer built for sale in the United States. [26] The UNIVAC 1 made a famous appearance on CBS in November 1952, during the presidential election. [27] The television network had rented the computer to boost ratings, planning to have it predict who would win the election. The UNIVAC predicted very early on that Eisenhower would beat Stevenson, which was correct. Network executives were skeptical and did not go live with the prediction until they had arrived at the same conclusion using manual methods. The UNIVAC sat right behind CBS staff during the broadcast, and it was the first time many people had the chance to see this elusive new technology called the computer.
IBM's first production computer was the IBM 701 Defense Calculator, introduced in April 1952. [28] The IBM 701 was used mostly for scientific calculation. The EDVAC, EDSAC, UNIVAC 1, and IBM 701 were all large, expensive, slow, and unreliable pieces of technology, like all computers of this time. [29] Some other computers of this time worth mentioning are the Whirlwind, developed at the Massachusetts Institute of Technology, and the JOHNNIAC, by the Rand Corporation. The Whirlwind was the first computer to display real-time video and use core memory. [33] The JOHNNIAC was named in honor of John von Neumann. Computers at this time were usually kept in special locations like government and university research labs or military compounds, and only specially trained personnel were granted access to them. Because they used vacuum tubes to calculate and store information, these computers were also very hard to maintain. First generation computers also used punched cards to store symbolic programming languages. [5] Most people were only indirectly affected by this first generation of computing machines and knew little of their existence.

Second Generation (1957 - 1965)

The second generation of computing took place between 1957 and 1965. Computers now implemented transistors, which had been invented in 1947 by a group of researchers at Bell Laboratories, instead of vacuum tubes. [30] Because of the transistor and advances in electrical engineering, computers were now smaller, faster, more reliable, and cheaper than ever before. More universities, businesses, and government agencies could actually afford computers.
In 1957 the first FORTRAN compiler was released. FORTRAN was the first high-level programming language ever made. [31] It was developed by IBM for scientific and engineering use. In 1959, the COmmon Business-Oriented Language (COBOL) programming language was released. Where FORTRAN was designed for science and engineering, COBOL was designed to serve business environments with their finances and administrative tasks. [32] These two programming languages essentially helped to create the occupation of a programmer. Before these languages, programming computers required electrical engineering knowledge.
This generation of computers also had an increase in the use of core memory and disks for mass storage. A notable computer to mention from this time period is the IBM System/360, a mainframe computer that is considered one of the important milestones in the industry. It was actually a family of computer models that could be sold to a wide variety of businesses and institutions. [37]

Third Generation (1965 - 1975)

The third generation of computing spanned from 1965 to 1975. During this time, integrated circuits with transistors, resistors, and capacitors were etched onto a piece of silicon. This reduced the price and size of computers, adding to a general trend in the computer industry toward miniaturization. In 1960 the Digital Equipment Corporation introduced the Programmed Data Processor-1 (PDP-1), which can be called the first minicomputer due to its relatively small size. [34] It is classified as a third generation computer because of the way it was built, even though it was made before 1965. The PDP-1 was also the computer that ran the very first video game, called Spacewar (written in 1962). [35]
The software industry came into existence in the mid 1970's as companies formed to write programs that would satisfy the increasing number of computer users. Computers were being used everywhere in business, government, military, and education environments. Because of their target market, the first software companies mostly offered accounting and statistical programs. [5] This time period also saw the first set of computing standards created for compatibility between systems.
E-mail originated sometime between 1961 and 1966, allowing computer users to send messages to each other as long as they were connected through a network. [38] This is closely tied to the work that was being done on Advanced Research Projects Agency Network (ARPANET), networking technology and innovation that would one day bring the internet. [50]

Fourth Generation (1975 - 1985)
The fourth generation of computing spanned from 1975 to 1985. Computer technology had advanced so rapidly that computers could fit in something the size of a typewriter. These were called microcomputers, the first one being the Altair 8800. The Altair 8800 debuted in 1975 as a mail-order hobby kit. Many people acknowledge the Altair 8800 as the computer that sparked the modern computer revolution, especially since Bill Gates and Paul Allen founded Microsoft with a programming language called Altair BASIC-- made specifically for the 8800. [36] Now that computers could fit on desks they became much more common.
A small company called Apple Computer, Inc. was established in 1976 and single handedly changed the industry forever. Steve Wozniak and Steve Jobs began to sell their Apple 1 computer that same year, and it quickly gained popularity. It came with a keyboard and only required a monitor to be plugged into the back of the system, which was a novel idea for computers at that time. The Apple II was released the next year and was the first mass produced microcomputer to be commercially sold, and also ushered in the era of personal computing.
In 1981, the Microsoft Disk Operating System (MS-DOS) was released to run on the Intel 8086 microprocessor. [39] Over the next few years MS-DOS became the most popular operating system in the world, eventually leading to Microsoft Windows 1.0 being released in 1985. [40] In 1984 Apple introduced the Mac OS, one of the first widely sold operating systems with a completely graphical interface. Both Mac OS and Windows used pull-down menus, icons, and windows to make computing more user-friendly. Computers were now being controlled with a mouse as well as a keyboard. The mouse had been invented by Douglas Engelbart in the 1960s and was popularized by Xerox and Apple in the early 1980s. [41]
Software became much more common and diverse during this period with the development of spreadsheets, databases, and drawing programs. Computer networks and e-mail became much more prevalent as well.
The first truly portable computer, called the Osborne 1, was released in 1981. [37] Portable computers like the TRS-80 Model 100 / 102 and IBM 5155 followed afterward. [38]
Not all the computers of the time were small, of course. Supercomputers were still being built with the aim of being as fast as possible, and were sold to companies, universities, and the military. An example of one such supercomputer is the Cray-1, which was released in 1976 by Cray Research. [39] It became one of the best known and most successful supercomputers ever for its unique design and fast speed of 250 MFLOPS.
This generation was also important for the development of embedded systems. These are special systems, usually very tiny, that have computers inside to control their operation. [42] These embedded systems were put into things like cars, thermostats, microwave ovens, wristwatches, and more.

Fifth Generation (1985 - Present)

The changes that have occurred since 1985 are plentiful. Computers have gotten tinier, more reliable, and many times faster. Computers are mostly built using components from many different corporations. For this reason, it is easier to focus on specific component advancements. Intel and AMD are the main computer processor companies in the world today and are constant rivals. [42] There are many different personal computer companies that usually sell their hardware with a Microsoft Windows operating system preinstalled. Apple has a wide line of hardware and software as well. [45] Computer graphics have gotten very powerful and are able to display full three dimensional graphics at high resolution. [41] Nvidia and ATI are two companies in constant battle with one another to be the computer graphics hardware king.
The software industry has grown a great deal as well, offering programs for almost anything you can think of. Microsoft Windows still dominates the operating system scene; the 1995 release of Windows 95 catapulted Microsoft to a new level of dominance. [46] In 2001 Apple revamped its operating system with the release of Mac OS X. [47] In 1991 Linus Torvalds wrote the Linux kernel, which has since spawned countless open source operating systems and applications. [44]
Computers have become more and more online-oriented in modern times, especially with the development of the World Wide Web. Popular companies like Google and Yahoo! were started because of the internet. [43]
In 2008 the IBM Roadrunner was introduced as the fastest computer in the world at 1.026 PFLOPS. [40] Fast supercomputers aid in the production of movie special effects and the making of computer animated movies. [48][49]
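To put the units in this section side by side: the Cray-1's 250 MFLOPS and Roadrunner's 1.026 PFLOPS differ by roughly a factor of four million. A quick back-of-the-envelope check (peak figures only, and raw FLOPS is a crude yardstick across three decades of hardware):

```python
# 1 MFLOPS = 1e6 floating-point operations per second; 1 PFLOPS = 1e15.
cray1_flops = 250e6          # Cray-1 (1976): 250 MFLOPS peak
roadrunner_flops = 1.026e15  # IBM Roadrunner (2008): 1.026 PFLOPS peak

speedup = roadrunner_flops / cray1_flops
print(f"Roadrunner's peak is roughly {speedup:,.0f} times the Cray-1's")
# prints roughly 4,104,000
```

In other words, the headline supercomputer of 2008 was on the order of four million times faster than the headline supercomputer of 1976.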

This is an exciting time to be alive, since we all get to see how quickly computer technology is evolving and how much it is changing our lives for the better. I recommend visiting a computer history museum to see some of the machines mentioned in this knol, and doing in-depth research into any specific areas of computing that interest you. It is a vast and exciting world that is always changing, and we are lucky to witness computers past and present.

1. Ancient Greek Mathematics. Kidipede. Portland State University.
2. Ancient Egyptian Number Hieroglyphs. Egyptian Math. Eyelid Productions.
3. An overview of Babylonian mathematics. School of Mathematical and Computational Sciences. University of St Andrews.
4. An overview of Indian mathematics. School of Mathematical and Computational Sciences. University of St Andrews.
5. Gersting, J. (2004). Invitation to Computer Science. Pacific Grove: Brooks Cole.
6. John Napier. School of Mathematical and Computational Sciences. University of St Andrews.
7. Weisstein, Eric W. "Slide Rule." From MathWorld--A Wolfram Web Resource.
8. About Pascaline. School of Mathematical and Computer Sciences (MACS). Heriot-Watt University.
9. Blaise Pascal (1623-1662). Inventors. The New York Times Company.
10. MIT5312: Systems Analysis and Design. Department of Information and Decision Sciences. The University of Texas at El Paso.
11. History of the Jacquard automated loom.
12. Swade, Doron (2002). The Difference Engine: Charles Babbage and the Quest to Build the First Computer. Penguin.
13. The Babbage Engine: A Brief History. Computer History Museum.
14. The Babbage Engine: A Modern Sequel. Computer History Museum.
15. The Babbage Engine: The Engines. Computer History Museum.
16. The Babbage Engine: Overview. Computer History Museum.
17. Herman Hollerith: The World's First Statistical Engineer. Mark Russo. University of Rochester.
18. Herman Hollerith. Computing History. Columbia University.
19. From the U.S. Constitution to IBM. Wittenberg University.
20. The IBM Automatic Sequence Controlled Calculator. Computing History. Columbia University.
21. ENIAC. IEEE Virtual Museum.
22. Programming the ENIAC. Computer History. Columbia University.
23. Part 3: Konrad Zuse's First Computer -- The Z1. EPE.
24. John Louis von Neumann. CS Dept. Virginia Tech/Norfolk State University.
25. The First Stored Program Computer -- EDVAC. Maxfield & Montrose Interactive Inc.
26. The Univac was the First Commercial Computer Circa 1950. Associated Content, Inc.
27. In '52, huge computer called Univac changed election night. USA TODAY.
28. The IBM 701 Defense Calculator. Computing History. Columbia University.
29. First-Generation Computers. The Development of Computers. Hagar.
30. The Invention of the Transistor. Following the Path of Discovery. Julian Rubin.
31. The FORTRAN Programming Language. College of Engineering & Computer Science. University of Michigan.
32. The COBOL Programming Language. College of Engineering & Computer Science. The University of Michigan - Dearborn.
33. 1951: Whirlwind Computer - The First to Display Real Time Video. CED in the History of Media Technology.
34. 1960: DEC PDP-1 Precursor to the Minicomputer. CED in the History of Media Technology.
35. Spacewar. sympatico.
36. Altair BASIC programming language.
37. Osborne 1. Inventors.
38. History of Laptop Computers. Inventors.
39. CRAY 1. The History of Computing Project.
40. Computer Science Reaches Historic Breakthrough.
41. Slater, M., Steed, A., & Chrysanthou, Y. (2002). Computer Graphics and Virtual Environments: From Realism to Real-Time. Addison-Wesley.
42. Is the Intel vs. AMD Chip War Back On?. Sharon Gaudin. Computerworld. PC World.
43. The Secret To Google's Success. The McGraw-Hill Companies Inc.
44. Linux: the big picture. Lars Wirzenius.
45. The Apple Store. Apple Inc.
46. The Unusual History of Microsoft Windows. Inventors.
47. Mac OS X 10.0. Ars Technica, LLC.
48. Paik, K., Catmull, E., Jobs, S., Lasseter, J., & Iwerks, L. (2007). To Infinity and Beyond! The Story of Pixar Animation Studios. San Francisco: Chronicle.
49. The History of Special Effects. utminers. University of Texas at El Paso.
50. ARPANET -- The First Internet. livinginternet.

The source knol includes interesting images; please visit the source knol for images and updates.

Source Knol: Brief History of Computers

Articles reshared under Creative Commons Attribution 3.0 License