Monday, July 21, 2008

Language Translation

The world has become like a village, but one problem remains: the lack of a perfect global language. Although English is the most popular global language, many people in non-English-speaking countries do not feel comfortable with it. So if we want to communicate with them, we have to translate our words into their language. This is a must for business people who want to popularize their products in the international market.
India is a country with many languages, so to communicate with all Indians we have to translate into many of them: Indian translation work needs Punjabi translation, Hindi translation, and more. Persian-speaking countries need Farsi translation, Pakistan needs Urdu translation, and Saudi Arabia needs Arabic translation. These are only a few examples. Every person feels most comfortable with his or her mother tongue, so translating product material into the local language will definitely increase every company's sales.


Tuesday, July 1, 2008

Digital Signal Processing

DSP, or Digital Signal Processing, as the term suggests, is the processing of signals by digital means. A signal in this context can mean a number of different things: an electrical signal carried by a wire or telephone line, or perhaps by a radio wave. More generally, however, a signal is a stream of information representing anything from stock prices to data from a remote-sensing satellite.
Analog and digital signals
In many cases, the signal is initially in the form of an analog electrical voltage or current, produced for example by a microphone or some other type of transducer. In some situations the data is already in digital form - such as the output from the readout system of a CD (compact disc) player. An analog signal must be converted into digital (i.e. numerical) form before DSP techniques can be applied. An analog electrical voltage signal, for example, can be digitized using an integrated electronic circuit (IC) device called an analog-to-digital converter or ADC. This generates a digital output in the form of a binary number whose value represents the electrical voltage input to the device.
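As a minimal illustration of the ADC step described above, the following Python sketch quantizes an analog-style sine voltage into integer codes the way an idealized ADC would. The sampling rate, 0-5 V input range, and 8-bit resolution are illustrative assumptions, not values from the text.

import numpy as np

# Illustrative assumptions: 0-5 V input range, 8-bit ADC, 1 kHz sampling.
FS_HZ = 1000
N_BITS = 8
V_REF = 5.0

t = np.arange(0, 0.01, 1.0 / FS_HZ)               # 10 ms of sample instants
analog_v = 2.5 + 2.0 * np.sin(2 * np.pi * 50 * t) # a 50 Hz "analog" voltage

# Each sample becomes a binary number whose value represents the input voltage.
codes = np.round(analog_v / V_REF * (2**N_BITS - 1)).astype(int)
print(codes)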


Thursday, June 12, 2008

SEMANTIC EVOLUTION

Semantic evolution is a form of adaptation. Before we develop concepts for semantic evolution, we give a set of basic definitions for adaptation.
1. Adaptation definition
Adaptation of a software system (S) is caused by a change (dE) from an old environment (E) to a new environment (E′), and results in a new system (S′) that ideally meets the needs of its new environment (E′). Adaptation involves three tasks:

· The ability to recognize dE.
· The ability to determine the change dS to be made to the system S according to dE.
· The ability to effect the change in order to generate the new system S′.

Adaptability then refers to the ability of the system to carry out such adaptation.
2. Semantic evolution
System S evolves semantically when the evolved system S′ responds differently to the same input or accepts a different set of inputs. In this application domain, the inputs are the commands sent to the embedded system (ES) by the external controller (EC), and the responses are the strings received by the EC from the ES.
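As a toy illustration of this definition, consider a command dispatcher standing in for the ES: the evolved version responds differently to the same input and accepts a new input. The command names and responses below are invented for the example and are not taken from the paper.

class System:                        # stands in for S
    def respond(self, command):
        if command == "STATUS":
            return "OK"
        return "ERROR: unknown command"

class EvolvedSystem(System):         # stands in for S'
    def respond(self, command):
        if command == "STATUS":
            return "OK, uptime=42s"  # same input, different response
        if command == "RESET":       # a newly accepted input
            return "RESET DONE"
        return super().respond(command)

for s in (System(), EvolvedSystem()):
    print(type(s).__name__, s.respond("STATUS"), s.respond("RESET"))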


Tuesday, June 10, 2008

Remotely Controlled Embedded Systems

Evolution of a software system is a natural process. In many systems, evolution occurs during the working phase of their lifecycle, so such systems need to be designed to evolve, i.e., to be adaptable. Semantically adaptable systems are of particular interest to industry, as such systems adapt themselves to environmental change with little or no intervention from their developers. Research in embedded systems is now widespread, but developing semantically adaptable embedded systems presents challenges of its own: embedded systems usually have a restricted hardware configuration, so techniques developed for other types of systems cannot be directly applied to them. This paper briefly presents work done on the semantic adaptation of embedded systems, using remotely controlled embedded systems as an application. In this domain, an embedded system is connected to an external controller via a communication link such as Ethernet, serial, or radio frequency, and receives commands from, and sends responses to, the external controller. Techniques for semantic evolution in this application domain give a glimpse of the complexity involved in tackling the problem of semantic evolution in embedded systems. The techniques developed in this paper were validated by applying them to a real embedded system: a test instrument used for testing cell phones.


Thursday, June 5, 2008

ASYMMETRIC DIGITAL SUBSCRIBER LINE TECHNOLOGY

Asymmetric digital subscriber line (ADSL) uses existing twisted-pair telephone lines to create access paths for high-speed data communications and transmits at speeds up to 8.1 Mbps to a subscriber. This exciting technology is in the process of overcoming the technology limits of the public telephone network by enabling the delivery of high-speed Internet access to the vast majority of subscribers' homes at a very affordable cost. ADSL can literally transform the existing public information network from one limited to voice, text, and low-resolution graphics into a powerful, ubiquitous system capable of bringing multimedia, including full-motion video, to every home this century. New broadband cabling will take decades to reach all prospective subscribers, and the success of these new services will depend on reaching as many subscribers as possible during the first few years. By bringing movies, television, video catalogs, remote CD-ROMs, corporate LANs, and the Internet into homes and small businesses, ADSL will make these markets viable and profitable for telephone companies and application suppliers alike.

ADSL technology is asymmetric: it allows more bandwidth downstream (from an NSP's central office to the customer site) than upstream (from the subscriber to the central office). This asymmetry, combined with always-on access (which eliminates call setup), makes ADSL ideal for Internet/intranet surfing, video on demand, and remote LAN access. Users of these applications typically download much more information than they send. ADSL transmits more than 6 Mbps to a subscriber, and as much as 640 kbps in both directions. Such rates expand existing access capacity by a factor of 50 or more without new cabling.
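To make the "factor of 50" claim concrete, here is a one-line check. The 160 kbps ISDN basic-rate baseline is an assumption for illustration; the text itself does not name the reference rate.

# Hypothetical baseline: ISDN basic-rate access at 160 kbps on the same copper pair.
baseline_kbps = 160
adsl_downstream_kbps = 8_000                 # ~8 Mbps ADSL downstream

print(adsl_downstream_kbps / baseline_kbps)  # 50.0 -> "a factor of 50 or more"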


Monday, June 2, 2008

WORKING OF ARTIFICIAL VISION SYSTEM:

The main parts of this system are a miniature video camera, a signal processor, and the brain implants. The tiny pinhole camera, mounted on a pair of eyeglasses, captures the scene in front of the wearer and sends it to a small computer on the patient's belt. The processor translates the image into a series of signals that the brain can understand, and then sends the information to the brain implant that is placed in the patient's visual cortex. If everything goes according to plan, the brain will "see" the image. Light enters the camera, which then sends the image to a wireless, wallet-sized computer for processing. The computer transmits this information to an infrared LED screen on the goggles. The goggles reflect an infrared image into the eye and onto the retinal chip, stimulating photodiodes on the chip. The photodiodes mimic the retinal cells by converting light into electrical signals, which are then transmitted by cells in the inner retina via nerve pulses to the brain. The goggles are transparent, so if the user still has some vision, they can match it with the new information; the device would cover about 10° of the wearer's field of vision.


ARTIFICIAL VISION

Blindness is more feared by the public than any other ailment. Artificial vision for the blind was once the stuff of science fiction, but now a limited form of artificial vision is a reality, and with this type of technology we are at the beginning of the end of blindness. In an effort to illuminate the perpetually dark world of the blind, researchers are turning to technology. They are investigating several electronic-based strategies designed to bypass various defects or missing links along the brain's image-processing pathway and provide some form of artificial sight.
HOW TO CREATE ARTIFICIAL VISION
The current path that scientists are taking to create artificial vision received a jolt in 1988, when Dr. Mark Humayun demonstrated that a blind person could be made to see light by stimulating the nerve ganglia behind the retina with an electrical current. This test proved that the nerves behind the retina still functioned even when the retina had degenerated. Based on this information, scientists set out to create a device that could translate images into electrical pulses capable of restoring vision. Today, such a device is very close to being available to the millions of people who have lost their vision to retinal disease. In fact, there are at least two silicon microchip devices being developed. The concept for both devices is similar, with each being:
· Small enough to be implanted in the eye
· Supplied with a continuous source of power
· Biocompatible with the surrounding eye tissue


Monday, May 26, 2008

SMART SENSOR FOR LABELING

In keeping pace with fast-emerging global trends, we have introduced sticker (self-adhesive) labeling machines suitable for almost all sizes and types of containers.
The sticker labeling machines are the first of their kind in India. They eliminate most of the disadvantages of wet-glue labeling machines. The sticker labeling machines are 100% user friendly, virtually maintenance free, and do not require any data input or retrieval for label size and coding impression. They have a built-in no-bottle-no-label system, a production counter and a synchronized on-line speed control system. The machines are compatible with any contact coding and inkjet coding system.
The machine settings are easy and can be made by the operating staff without much fuss. Moreover, these sticker labeling machines have speeds ranging from 50 to 400 labels per minute. In the following pages, we describe the methods for proper setting and troubleshooting.
APPLICATIONS:
These can label up to about 400 containers per minute.
These sensors are very compact (roughly 200 mm to 400 mm in size) and can be fixed anywhere.
They are controlled and sensed by simple photodiodes.
They can label top, bottom, top/bottom, front/back, side and wrap-around.
They can handle label widths from a minimum of 8 mm up to a maximum of 90-180 mm, a maximum outer diameter of 400 mm, and an inner diameter of 70/76 mm.
Many such systems have been introduced by various companies, but we introduce a smart sensing technique that makes the task of labeling containers easier and more effective. This smart sensor is very eco-friendly and, above all, since its construction is very simple, it is inexpensive. So this labeling sensor will be of great use to pharmaceutical industries that manufacture chemicals and pack them in containers that need to be labeled.


Saturday, May 24, 2008

CHARACTER RECOGNITION METHODS

1 Template Matching and Correlation Techniques

In 1929 Tausheck obtained a patent on OCR in Germany; this was the first conceived idea of an OCR. The approach was what is referred to as template matching in the literature. The template matching process can be roughly divided into two sub-processes: superimposing an input shape on a template and measuring the degree of coincidence between the input shape and the template. The template that matches the unknown most closely provides the recognition. Two-dimensional template matching is very sensitive to noise and difficult to adapt to a different font. A variation of the template matching approach is to test only selected pixels and employ a decision tree for further analysis; the peephole method is one of the simplest methods based on this selected-pixels matching approach. In this approach, the main difficulty lies in selecting an invariant, discriminating set of pixels for the alphabet. Moreover, from an Artificial Intelligence perspective, template matching has been ruled out as an explanation for human performance [1, 2].
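A minimal sketch of the correlation step in template matching, assuming binary character images stored as NumPy arrays. The tiny 3x3 "characters" and the normalized cross-correlation score are illustrative choices, not the historical implementation.

import numpy as np

def match_score(image, template):
    # Normalized cross-correlation between an input shape and a template.
    a = image.astype(float) - image.mean()
    b = template.astype(float) - template.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def classify(image, templates):
    # The template that matches the unknown most closely provides recognition.
    return max(templates, key=lambda label: match_score(image, templates[label]))

# Toy 3x3 "characters": a vertical bar vs. a horizontal bar.
templates = {"I": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]]),
             "-": np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]])}
print(classify(np.array([[0, 1, 0], [0, 1, 0], [0, 1, 1]]), templates))  # -> "I"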

2 Features Derived from the Statistical Distribution of Points

This technique is based on matching in feature planes or spaces distributed in an n-dimensional space, where n is the number of features. This approach is referred to as the statistical or decision-theoretic approach. Unlike template matching, where an input character is directly compared with a standard set of stored prototypes, here many samples of a pattern are used for collecting statistics. This phase is known as the training phase, and its objective is to expose the system to the natural variants of a character. The recognition process then uses these statistics for partitioning the feature space and identifying an unknown character. For instance, in the K-L expansion, one of the first attempts at statistical feature extraction, orthogonal vectors are generated from a data set: the covariance matrix is constructed and its eigenvectors are solved for, and these form the coordinates of the given pattern space. Initially the correlation was pixel-based, which led to a large number of covariance matrices; this was later refined to class-based correlation, which led to a more compact space. However, the approach was very sensitive to noise and to variation in stroke thickness. To make it tolerant to variation and noise, a tree structure was used for making decisions and multiple prototypes were stored for each class. Researchers have also used Fourier, Walsh, Haar, and Hadamard series expansions for classification.
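A compact sketch of the K-L (Karhunen-Loeve) expansion step described above, using NumPy on a toy set of flattened character images. The data here is random and purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
samples = rng.random((100, 64))             # 100 flattened 8x8 character images

mean = samples.mean(axis=0)
cov = np.cov(samples - mean, rowvar=False)  # covariance matrix of the training set
eigvals, eigvecs = np.linalg.eigh(cov)      # its eigenvectors form the new coordinates

# Keep the axes with the largest eigenvalues and project a pattern onto them.
top = eigvecs[:, np.argsort(eigvals)[::-1][:10]]
features = (samples[0] - mean) @ top        # 10-dimensional statistical feature vector
print(features.shape)                       # (10,)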

3 Geometrical and Topological Features

The classifier is expected to recognize the natural variants of a character but to discriminate between similar-looking characters such as ‘k’ – ‘ph’, ‘p’ - ‘Sh’, etc. This is a contradictory requirement, which makes the classification task challenging. The structural approach has the capability of meeting this requirement. Multiple prototypes are stored for each class to take care of the natural variants of the character; however, a large number of prototypes for the same class are required to cover the natural variants when the prototypes are generated automatically. Alternatively, the descriptions may be handcrafted, and a suitable matching strategy incorporating the expected variations is relied upon to yield the true class. The matching strategies include dynamic programming, tests for isomorphism, inexact matching, relaxation techniques and multiple-to-one matching. Rocha has used a conceptual model of variations and noise along with multiple-to-one mapping. Yet another class of structural approach is to use a phrase-structured grammar for prototype descriptions and parse the unknown pattern syntactically using the grammar. Here the terminal symbols of the grammar are the stroke primitives and the non-terminals represent the pattern classes, while the production rules give the spatial relationships of the constituent primitives.

4 Hybrid Approach

The statistical and structural approaches both have their advantages and shortcomings. Statistical features are more tolerant to noise (provided the sample space over which training has been performed is representative and realistic) than structural descriptions, whereas variation due to font or writing style can be more easily abstracted in structural descriptions. The two approaches are complementary in terms of their strengths and have been combined: the primitives are ultimately classified using a statistical approach, and the approaches are combined by mapping variable-length, unordered sets of geometrical shapes to fixed-length numerical vectors. This hybrid approach has been used for omnifont, variable-size character recognition systems.
5 Neural Networks
In the beginning, character recognition was regarded as a problem that could be easily solved, but it turned out to be more challenging than most researchers in the field expected. The challenge still exists, and an unconstrained document recognition system matching human performance is still nowhere in sight. The performance of a system deteriorates very rapidly with deterioration in the quality of the input or with the introduction of new fonts or handwriting styles; in other words, the systems do not adapt to a changed environment easily. The training phase aims at exposing the system to a large number of fonts and their natural variants.
Neural networks are based on the theory of learning from known inputs. A back-propagation neural network is composed of several layers of interconnected elements, each of which computes an output that is a function of the weighted sum of its inputs. The weights are modified until the desired output is obtained. Neural networks have been employed for character recognition with varying degrees of success, and they have also been used for integrating the results of several classifiers by adjusting weights to obtain the desired output.
The main weakness of systems based on neural networks is their poor generalization: there is always a risk of under-training or over-training the system. Besides this, a neural network does not provide a structural description, which is vital from an artificial intelligence viewpoint. The neural network approach has not solved the problem of character classification any more than the earlier approaches. Recent research results call for the use of multiple features and intelligent ways of combining them; the combination of potentially conflicting decisions by multiple classifiers should take advantage of the strengths of the individual classifiers, avoid their weaknesses, and improve classification accuracy. The intersection and union of decision regions are the two most obvious methods for combining classifiers.
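A minimal back-propagation network of the kind described above, sketched in NumPy. The tiny synthetic "character" data, the network sizes, and the learning rate are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
X = rng.random((20, 64))                  # 20 flattened 8x8 character images
y = np.eye(2)[rng.integers(0, 2, 20)]     # one-hot labels for 2 toy classes

W1 = rng.normal(0, 0.1, (64, 16))
W2 = rng.normal(0, 0.1, (16, 2))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(500):                      # weights are modified until the output fits
    h = sigmoid(X @ W1)                   # each element: function of a weighted sum
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)   # back-propagate the output error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(np.mean((out > 0.5) == (y > 0.5)))  # training accuracy on the toy data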


I signed up for PPP!


Hello readers! Today while surfing the net I found a great website that pays for blogging. The site is "PayPerPost". Without any second thought I signed up for the PayPerPost website. The great thing about PPP (PayPerPost) is that we get paid for blogging about our own interests, so there is no need to worry about our readers.
Nowadays everyone is blogging, so it is better to sign up for this PPP site and earn some extra money. You can make decent money with PPP. The most important thing about PPP is that it is a genuine site which pays its members without fail; it is not a fake site like many others, and it has already paid its members plenty.
The concept behind PPP is that it acts as a bridge between bloggers and advertisers. Advertisers are those who pay to popularize their site, blog or product. PPP shows bloggers opportunities to write about the advertisers' products in their blogs. In doing so, the blogger drives traffic to the advertiser's site and brings popularity to its product, and for this the blogger gets paid after a certain period of time, so both advertiser and blogger benefit at the same time. PPP will not accept every blog into its blogger network; a blog should meet some requirements to get approved. After your blog is approved, you write in your blog about the advertiser opportunities available to you. So maintain a good blog, join PPP and earn some money for your needs.
The money I earn from this site will help me buy some accessories and other small needs. I recommend it to you as well.



Friday, May 23, 2008

ULTRA WIDEBAND COMMUNICATIONS APPLICATIONS

The trade-off between data rate and range in UWB systems holds great promise for a wide variety of applications in the military, civilian, and commercial sectors. The FCC categorizes UWB applications as radar, imaging, or communications devices. Radar is considered one of the most powerful applications of UWB technology. The fine positioning characteristics of narrow UWB pulses enable them to offer high-resolution radar (within centimeters) for military and civilian applications. Also, because of the very wide frequency spectrum, UWB signals can easily penetrate various obstacles. This property makes UWB-based ground-penetrating radar (GPR) a useful asset for rescue and disaster-recovery teams for detecting survivors buried under rubble.
In the commercial sector, such radar systems can be used on construction sites to locate pipes, studs, and electrical wiring. The same technology, under different regulations, can be used for various types of medical imaging, such as remote heart-monitoring systems. In addition, UWB radar is used in the automotive industry for collision-avoidance systems.
Moreover, the low transmission power of UWB pulses makes them ideal candidates for covert military communications. UWB pulses are extremely difficult to detect or intercept; therefore, unauthorized parties will not get access to secure military information. Also, because UWB devices have simpler transceiver circuitry than narrowband transceivers, they can be manufactured in small sizes at a lower price than narrowband systems. Small and inexpensive UWB transceivers are excellent candidates for wireless sensor network applications, for both military and civilian use. Such sensor networks are used to detect a physical phenomenon in an inaccessible area and transfer the information to a destination. A military application could be the detection of biological agents or enemy tracking on the battlefield. Civilian applications might include habitat monitoring, environment observation, health monitoring, and home automation.
The precise location-finding ability of UWB systems can be used in inventory control and asset management applications, such as tagging and identification systems (for example, RFID tags). Also, the good performance of UWB devices in multipath channels can provide accurate geolocation capability in indoor and obscured environments where GPS receivers won't work. The high-data-rate capability of UWB systems over short distances has numerous applications for home networking and multimedia-rich communications in the form of WPAN applications. UWB systems could replace the cables connecting camcorders and VCRs, as well as other consumer electronics such as laptops, DVD players, digital cameras, and portable HDTV monitors. No other available wireless technology, such as Bluetooth or 802.11a/b, is capable of transferring streaming video at these rates.


ADVANTAGES OF ULTRA WIDEBAND COMMUNICATIONS

The nature of the short-duration pulses used in UWB technology offers several advantages over narrowband communications systems. In this section, we discuss some of the key benefits that UWB brings to wireless communications.

5.1 Ability to share the frequency spectrum

The FCC's (Federal Communications Commission) power requirement of -41.3 dBm/MHz, equal to 75 nanowatts/MHz, for UWB systems puts them in the category of unintentional radiators, such as TVs and computer monitors. Such a power restriction allows UWB systems to reside below the noise floor of a typical narrowband receiver and enables UWB signals to coexist with current radio services with minimal or no interference. However, this all depends on the type of modulation used for data transfer in a UWB system. Some modulation schemes generate undesirable discrete spectral lines in their PSD (power spectral density), which can both increase the chance of interference to other systems and increase the vulnerability of the UWB system to interference from other radio services. Figure 1-6 illustrates the general idea of UWB's coexistence with narrowband and wideband technologies.
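A quick check of the quoted limit, converting dBm/MHz to watts; this one-line calculation is included only to make the 75 nW/MHz figure concrete.

# -41.3 dBm/MHz -> milliwatts per MHz -> nanowatts per MHz
p_mw_per_mhz = 10 ** (-41.3 / 10)
print(p_mw_per_mhz * 1e6)   # ~74.1 nW/MHz, i.e. roughly 75 nanowatts per MHz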

5.2 Large channel capacity

One of the major advantages of the large bandwidth of UWB pulses is improved channel capacity. Channel capacity, or data rate, is defined as the maximum amount of data that can be transmitted per second over a communication channel. The large channel capacity of UWB communication systems is evident from the Hartley-Shannon capacity formula:

C = B log2(1 + SNR)

where C represents the maximum channel capacity, B is the bandwidth, and SNR is the signal-to-noise power ratio. As shown in Equation 1-5, channel capacity C increases linearly with bandwidth B. Therefore, with several gigahertz of bandwidth available for UWB signals, a data rate of gigabits per second (Gbps) can be expected. However, due to the FCC's current power limitation on UWB transmissions, such a high data rate is available only over short ranges, up to 10 meters. This makes UWB systems perfect candidates for short-range, high-data-rate wireless applications such as wireless personal area networks (WPANs). The trade-off between range and data rate makes UWB technology ideal for a wide array of applications in the military, civil, and commercial sectors.
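Plugging representative numbers into the formula; the 7.5 GHz bandwidth (corresponding to the FCC's 3.1-10.6 GHz UWB band) and the deliberately low SNR of 0.1 are assumptions for illustration. This also previews the point of Section 5.3: capacity stays high even at low SNR because of the huge bandwidth.

import math

B = 7.5e9        # Hz, assumed UWB bandwidth (3.1-10.6 GHz band)
snr = 0.1        # a deliberately low signal-to-noise ratio
C = B * math.log2(1 + snr)
print(C / 1e9)   # ~1.03 Gbps even at this low SNR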

5.3 Ability to work with low signal-to-noise ratios

The Hartley-Shannon formula for maximum capacity (Equation 1-5) also indicates that the channel capacity is only logarithmically dependent on the signal-to-noise ratio (SNR). Therefore, UWB communications systems are capable of working in harsh communication channels with low SNR and still offer a large channel capacity as a result of their large bandwidth.

5.4 Low probability of intercept and detection
Because of their low average transmission power, as discussed in previous sections, UWB communications systems have an inherent immunity to detection and interception. With such low transmission power, an eavesdropper has to be very close to the transmitter (about 1 meter) to be able to detect the transmitted information. In addition, UWB pulses are time-modulated with codes unique to each transmitter/receiver pair. The time modulation of extremely narrow pulses adds further security to UWB transmission, because detecting picosecond pulses without knowing when they will arrive is next to impossible. Therefore, UWB systems hold significant promise for highly secure, low-probability-of-intercept-and-detection (LPI/D) communications, which is a critical need for military operations.

5.5 Resistance to jamming

Unlike the well-defined narrowband frequency spectrum, the UWB spectrum covers a vast range of frequencies from near DC to several gigahertz and offers high processing gain for UWB signals. Processing gain (PG) is a measure of a radio system's resistance to jamming and is defined as the ratio of the RF bandwidth to the information bandwidth of a signal: PG = B_RF / B_info.
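For example, with illustrative bandwidth figures (neither value is taken from the text), the processing gain works out as:

import math

rf_bandwidth_hz = 7.5e9       # assumed UWB RF bandwidth
info_bandwidth_hz = 1e6       # assumed 1 MHz information bandwidth
pg = rf_bandwidth_hz / info_bandwidth_hz
print(pg, 10 * math.log10(pg))   # 7500.0, ~38.8 dB of processing gain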


INTRODUCTION TO ULTRA WIDEBAND COMMUNICATIONS

Ultra-wideband communications is fundamentally different from all other communication techniques because it employs extremely narrow RF pulses to communicate between transmitters and receivers. Utilizing short-duration pulses as the building blocks for communications directly generates a very wide bandwidth and offers several advantages, such as large throughput, covertness, robustness to jamming, and coexistence with current radio services.

2. HISTORY AND BACKGROUND

Ultra-wideband communications is not a new technology; in fact, it was first employed by Guglielmo Marconi in 1901 to transmit Morse code sequences across the Atlantic Ocean using spark gap radio transmitters. However, the benefit of a large bandwidth and the capability of implementing multi-user systems provided by electromagnetic pulses were never considered at that time.
Approximately fifty years after Marconi, modern pulse-based transmission gained momentum in military applications in the form of impulse radars. From the 1960s to the 1990s, this technology was restricted to military and Department of Defense (DoD) applications under classified programs such as highly secure communications. However, recent advances in microprocessing and fast switching in semiconductor technology have made UWB ready for commercial applications.
Therefore, it is more appropriate to consider UWB as a new name for a long-existing technology.


Wireless Charging of Mobile

With mobile phones becoming a basic part of life, recharging their batteries has become a big problem, and with the prospect of mobile devices being equipped with additional peripherals like cameras, the problem worsens. As already mentioned, this paper suggests a novel method to charge mobile phones on the move. The main principle in this proposal is to convert the incoming microwave energy to DC through a rectenna, and to use this voltage to charge the battery of the mobile.

A rectenna (RECTifying anTENNA) rectifies received microwaves into DC current. It comprises a mesh of dipoles and diodes for absorbing microwave energy from a transmitter and converting it into electric power. Its elements are usually arranged in a mesh pattern, giving it a distinct appearance from most antennae. A simple rectenna can be constructed from a Schottky diode placed between antenna dipoles. Rectennae are highly efficient at converting microwave energy to electricity; in laboratory environments, efficiencies above 90% have been observed with regularity.

The transmitter can be any high-frequency oscillator, such as a klystron, a traveling-wave tube, or a magnetron. In our proposal the use of a magnetron as the high-frequency oscillator is explained; magnetrons are preferred because they are highly stable.
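A back-of-the-envelope sketch of the charging principle. All the numbers here (received RF power, battery capacity and voltage) are hypothetical placeholders, not values from the paper; only the >90% conversion efficiency echoes the figure quoted above.

received_rf_mw = 500.0                 # hypothetical microwave power at the rectenna
efficiency = 0.9                       # ">90%" rectenna conversion efficiency cited above
battery_mah, battery_v = 1000.0, 3.7   # hypothetical phone battery

dc_power_mw = received_rf_mw * efficiency
charge_current_ma = dc_power_mw / battery_v
print(battery_mah / charge_current_ma)   # rough hours to charge (~8.2 h)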


Tuesday, May 20, 2008

Applications of Digital Image Processing

The applications of digital image processing are practically limitless; there is hardly any area where image processing is not used. The major applications in medicine and space are discussed briefly below:

Medical:
In 1895, Wilhelm Conrad Röntgen discovered that x-rays could pass through substantial amounts of matter. Medicine was revolutionized by the ability to look inside the living human body, and medical x-ray systems spread throughout the world in only a few years. In spite of its obvious success, medical x-ray imaging was limited by four problems until DSP and related techniques came along in the 1970s. First, overlapping structures in the body can hide behind each other; for example, portions of the heart might not be visible behind the ribs. Second, it is not always possible to distinguish between similar tissues; for example, it may be possible to separate bone from soft tissue, but not to distinguish a tumor from the liver. Third, x-ray images show anatomy, the body's structure, and not physiology, the body's operation: the x-ray image of a living person looks exactly like the x-ray image of a dead one. Fourth, x-ray exposure can cause cancer, requiring it to be used sparingly and only with proper justification.
Space:
Sometimes we face a situation where we have to make the most out of a bad picture. This is frequently the case with images taken from unmanned satellites and space exploration vehicles. DIP can improve the quality of images taken under extremely unfavorable conditions in several ways: brightness and contrast adjustment, edge detection, noise reduction, focus adjustment, motion blur reduction, and so on. Images that have spatial distortion, such as those encountered when a flat image is taken of a spherical planet, can also be warped into a correct representation. Many individual images can also be combined into a single database, allowing the information to be displayed in unique ways, for example as a video sequence simulating an aerial flight over the surface of a distant planet.
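A small illustration of two of the operations listed above, brightness/contrast adjustment and edge detection, applied to a synthetic image with NumPy only. The gain, offset, and Sobel kernel are the usual textbook choices, used here purely as an example.

import numpy as np

img = np.clip(np.add.outer(np.arange(64), np.arange(64)) / 126.0, 0, 1)  # synthetic ramp image

# Brightness/contrast adjustment: scale about mid-grey, then shift brighter.
adjusted = np.clip(1.5 * (img - 0.5) + 0.5 + 0.1, 0, 1)

# Edge detection with a horizontal Sobel kernel (valid-region convolution).
kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
edges = np.zeros((62, 62))
for i in range(62):
    for j in range(62):
        edges[i, j] = np.sum(img[i:i + 3, j:j + 3] * kx)
print(adjusted.mean(), np.abs(edges).max())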

Commercial Imaging Products: The large information content of images is a problem for systems sold in mass quantities to the general public. Commercial systems must be cheap, and this does not mesh well with large memories and high data transfer rates. One answer to this dilemma is image compression. Just as with voice signals, images contain a tremendous amount of redundant information and can be run through algorithms that reduce the number of bits needed to represent them. Television and other moving pictures are especially suitable for compression, since most of the image remains the same from frame to frame. Commercial imaging products that take advantage of this technology include video telephones, computer programs that display moving pictures, and digital television.


Digital Image Processing

Digital image processing is electronic data processing on a 2-D array of numbers. The array is a numeric representation of an image. An image processing system consists of a source of image data, a processing element and a destination for the processed results.

Steps and Tools:
After the image has been digitized, it is sent for image processing. Image processing operations can be roughly divided into three major categories: Image Compression, Image Enhancement and Restoration, and Measurement Extraction.

Image Compression:
As a result of digitizing an average-size document at a medium resolution and color depth, a huge file, typically about 20-50 MB, is produced. The quality of such an image is high, but the transmission time, the computational burden and the hardware requirements are simply enormous; the bandwidth of the transmission line required to carry the information would also be very large. Thus image compression is necessary. Some of the major compression formats that can be used when transmitting or storing digital images are GIF (Graphics Interchange Format), PNG (Portable Network Graphics), JBIG (Joint Bi-level Image Experts Group) and JPEG (Joint Photographic Experts Group). Various new techniques are also being devised for quicker transmission and more efficient compression.
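To see where file sizes of that order come from, here is a quick calculation of the raw (uncompressed) size of a scanned page. The page size, resolution, and color depth below are assumptions chosen for illustration.

# Assumed: an A4-size page scanned at 300 dpi in 24-bit color.
width_px, height_px = int(8.3 * 300), int(11.7 * 300)
bytes_per_pixel = 3
raw_bytes = width_px * height_px * bytes_per_pixel
print(raw_bytes / 1e6)   # ~26 MB uncompressed, within the 20-50 MB range quoted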


Monday, May 19, 2008

CT scan working

Inside the CT scanner is a rotating gantry that has an x-ray tube mounted on one side and an arc-shaped detector mounted on the opposite side. An x-ray beam is emitted in a fan shape as the rotating frame spins the x-ray tube and detector around the patient. Each time the x-ray tube and detector make a 360-degree rotation and the x-rays pass through the patient's body, the image of a thin section is acquired. During each rotation the detector records about 1,000 profiles of the expanded x-ray beam. A dedicated computer then reconstructs each set of profiles into a two-dimensional image of the section that was scanned. Multiple computers are typically used to control the entire CT system.
When the computer reassembles the image slices, the result is a very detailed, multidimensional view of the body's interior.
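A heavily simplified sketch of how recorded profiles become a cross-sectional image, using unfiltered back-projection over a synthetic phantom. Real scanners use filtered back-projection or iterative methods; the grid size, angles, and square "organ" here are illustrative assumptions.

import numpy as np
from scipy.ndimage import rotate

N = 64
phantom = np.zeros((N, N))
phantom[24:40, 24:40] = 1.0                                  # a square "organ"

angles = np.linspace(0, 180, 90, endpoint=False)
recon = np.zeros((N, N))
for theta in angles:
    view = rotate(phantom, theta, reshape=False, order=1)    # gantry at angle theta
    profile = view.sum(axis=0)                               # one detector profile
    smear = np.tile(profile, (N, 1))                         # back-project the profile
    recon += rotate(smear, -theta, reshape=False, order=1)   # rotate back and accumulate

recon /= len(angles)
print(recon[32, 32] > recon[5, 5])   # True: the "organ" region reconstructs brighter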
A relatively new technique, spiral (helical) CT, has improved the accuracy of CT for many diseases. A new vascular imaging technique, spiral CT angiography, is non-invasive, is less expensive than conventional angiography, and allows doctors to see blood vessels without the need for more invasive procedures.
The term "spiral CT" comes from the shape of the path taken by the x-ray beam during scanning. The examination table advances at a constant rate through the scanner gantry while the x-ray tube rotates continuously around the patient, tracing a spiral path through the patient. This spiral path gathers continuous data with no gaps between images. With spiral CT, refinements in detector technology support faster, higher-quality image acquisition with less radiation exposure. Current spiral CT scanners are called multidetector CT and are most commonly four- or 16-slice systems; CT scanners with 64 detectors are now available. These instruments should provide either faster scanning or higher-resolution images. Using 16-slice scanner systems, the radiologist can acquire 32 image slices per second, and a spiral scan can usually be obtained during a single breath hold. This allows scanning of the chest or abdomen in 10 seconds or less. Such speed is beneficial for all patients, but especially for elderly, pediatric or critically ill patients, populations in whom the length of scanning was often problematic. Multidetector CT also allows applications like CT angiography to be more successful.


COMPUTERIZED AXIAL TOMOGRAPHY

Computerized axial tomography (CT) is a technique that uses x-rays or ultrasound waves to produce an image of the interior parts of the body.
It is also a medical imaging method employing tomography, where digital processing is used to generate a three-dimensional image of the internals of an object from a large series of two-dimensional x-ray images taken around a single axis of rotation.
CT imaging is particularly useful because it can show several types of tissue (lung, bone, soft tissue and blood vessels) with great clarity. Using specialized equipment and expertise to create and interpret CT scans of the body, radiologists can more easily diagnose problems such as cancers, cardiovascular disease, infectious disease, trauma and musculoskeletal disorders. CT images anatomical information from a cross-sectional plane of the body, each image being generated by a computer synthesis of x-ray transmission data obtained in many different directions in a given plane. Developed by British electronics engineer Godfrey Hounsfield, CT has revolutionized diagnostic medicine. Hounsfield linked x-ray sensors to a computer and worked out a mathematical technique called algebraic reconstruction for assembling images from transmission data. Subsequently, the speed and accuracy of the machines have improved many times over. CT scans reveal both bone and soft tissues, including organs, muscles, and tumors. Image tones can be adjusted to highlight tissues of similar density, and, through graphics software, the data from multiple cross-sections can be assembled into 3-D images. Because it provides detailed, cross-sectional views of all types of tissue, CT aids diagnosis and surgery or other treatment, including radiation therapy, in which the effective dosage is highly dependent on the precise density, size, and location of a tumor.
About the machine:
The CT scanner is a large, square machine with a hole in the center. The patient lies still on a table that can move up or down and slide into and out of the center of the hole. Within the machine, an x-ray tube on a rotating gantry moves around the patient's body to produce the images, making clicking and whirring noises as the table moves. In many ways CT scanning works very much like other x-ray examinations: very small, controlled amounts of x-ray radiation are passed through the body, and different tissues absorb radiation at different rates. With plain radiography, an image of the inside of the body is captured when special film is exposed to the absorbed x-rays. With CT, the film is replaced by an array of detectors that measure the x-ray profile.


Sunday, May 18, 2008

Spintronic quantum gates

The idea of using a single electron or nuclear spin to encode a qubit, and then utilizing this to realize a universal quantum gate, has taken hold. The motivation for this is the realization that spin coherence times in solids are much longer than charge coherence times. Charge coherence times in semiconductors tend to saturate at about 1 ns as the temperature is lowered, presumably due to coupling to the zero-point motion of phonons, which cannot be eliminated by lowering the temperature. On the other hand, electron spin coherence times of 100 ns in GaAs at 5 K have already been reported, and much longer coherence times are expected for nuclear spins in silicon. Therefore, spin is obviously the preferred vehicle for encoding qubits in solids.
Using spin to carry out all-optical quantum computing has also emerged as a viable and intriguing idea. The advantage of the all-optical scheme over the electronic scheme is that we do not have to read single electron spins electrically to read a qubit. Electrical readout is extremely difficult, although some schemes have been proposed for this purpose. Recently, some experimental progress has been made in this direction, but reading a single qubit in the solid state still remains elusive. The difficult part is that electrical readout requires making contacts to individual quantum dots, which is an engineering challenge. In contrast, optical readout does not require contacts: the qubit is read out using a quantum-jump technique, which requires monitoring the fluorescence from a quantum dot. Recently, it has been verified experimentally that the spin state of an electron in a quantum dot can be read with circularly polarized light. Therefore, optical readout appears to be the more practical approach.


SPINTRONICS

The visionary who first thought of using the spin polarization of a single electron to encode a binary bit of information has never been identified conclusively. Folklore has it that Feynman mentioned this notion in casual conversations (circa 1985), but to this author's knowledge no concrete schemes for implementing spintronic logic gates existed until the mid-1990s. Encoding information in spin may have certain advantages.
First, there is the possibility of lower power dissipation when switching logic gates. In charge-based devices, such as metal-oxide-semiconductor field-effect transistors, switching between logic 0 and logic 1 is accomplished by moving charges into and out of the transistor channel. The motion of charges is induced by creating a potential gradient (an electric field), and the associated potential energy is ultimately dissipated as heat and irretrievably lost. In the case of spin, we do not have to move charges: to switch a bit from 0 to 1, or vice versa, we merely have to toggle the spin, which may require much less energy. Second, spin does not couple easily to stray electric fields (unless there is strong spin-orbit interaction in the host material), so spin is likely to be relatively immune to noise. Finally, it is possible that spin devices may be faster: if we do not have to move electrons around, we are not limited by the transit time of charges but by the spin-flip time, which could be smaller.
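As a minimal numerical picture of "toggling the spin" to switch a bit, here is the standard two-level description with a Pauli-X operator; it is included only as an illustration and is not taken from the article.

import numpy as np

# Spin-up and spin-down basis states standing in for logic 0 and logic 1.
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# The Pauli-X operator flips the spin: it toggles 0 <-> 1 without moving charge.
pauli_x = np.array([[0, 1], [1, 0]], dtype=complex)

print(np.allclose(pauli_x @ up, down), np.allclose(pauli_x @ down, up))  # True True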


Saturday, May 17, 2008

CONFIGURATION OF TACTILE SENSING SYSTEM

This system uses a conductive elastomer as a sensor, which has the property that its conductivity changes as a function of the applied pressure. Fig 1 shows the configuration of the system. The conductive elastomer is mounted on an 8x8 array of force-sensing sites for the measurement of the pressure distribution over the object. These force-sensing sites are connected through a PC add-on data acquisition card. A stepper motor is used to apply a specific amount of pressure for proper identification of the objects. Hardware for scanning the matrix and the related signal conditioning is designed along with the stepper-motor interface circuitry. Once the image of the object is acquired through the Tactile Sensing System, it is further processed using image processing concepts for proper identification and for inferring other properties of the object. Tactile data for object identification is acquired using a row-scanning technique. After the removal of noise from this tactile data, it passes to the different modules of the system for further processing.
TACTILE DATA PROCESSING

The main modules of this system are pre-processing, data acquisition, matrix representation, graphical representation, edge detection and moment calculation for generating a feature vector. Tactile image acquisition involves conversion of the pressure image into an array of numbers that can be manipulated by the computer. In this system, a tactile sensor is used to obtain the pressure data of the object, and this data is acquired with the help of data acquisition and data input/output cards interfaced with the computer. The pre-processing module removes noise, which is essential for acquiring a clean image of the object under consideration. The image is then analyzed as a set of numerical features to remove redundancy from the data and reduce its dimensions. Invariant moments are calculated in this module; they are required by the artificial neural network in the next module for identification of the object independent of scale, rotation and position.
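A small sketch of the invariant-moment step: computing normalized central moments and the first Hu invariant for an 8x8 tactile image. The toy pressure matrix is invented for illustration; the system's actual feature set is not specified here.

import numpy as np

def hu1(image):
    # First Hu moment invariant (eta20 + eta02) of a 2-D pressure image.
    y, x = np.mgrid[:image.shape[0], :image.shape[1]]
    m00 = image.sum()
    xc, yc = (x * image).sum() / m00, (y * image).sum() / m00
    mu20 = ((x - xc) ** 2 * image).sum()
    mu02 = ((y - yc) ** 2 * image).sum()
    # Centering removes position dependence; normalization removes scale dependence.
    return (mu20 + mu02) / m00 ** 2

tactile = np.zeros((8, 8))
tactile[2:6, 3:5] = 1.0                       # toy 8x8 pressure pattern
print(hu1(tactile), hu1(np.rot90(tactile)))   # equal values even after rotation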


TACTILE SENSING SYSTEM USING ARTIFICIAL NEURAL NETWORK

Of the many sensing operations performed by human beings, the one that is probably most taken for granted is touch. Touch is not only complementary to vision; it offers many powerful sensing capabilities of its own. Tactile sensing is another name for touch sensing, which deals with the acquisition of information about an object simply by touching it. Touch sensing gives information about the object such as its shape, hardness, surface details and height.
Tactile sensing is required when an intelligent robot has to perform delicate assembly operations. During such an assembly operation, an industrial robot must be capable of recognizing parts, determining their position and orientation, and sensing any problems encountered during the assembly from the interfacing of the parts. Keeping all this in mind, a Tactile Sensing System was developed which uses image processing techniques for pre-processing and analysis along with ANNs for object identification. Classification of an object independent of translation, scale and rotation is a difficult task; the concept of an Artificial Neural Network is used in this system for proper identification of the object, irrespective of its size, position and orientation.


Friday, May 16, 2008

Applications of Speech Recognition

The specific use of speech recognition technology will depend on the application. Some target applications that are good candidates for integrating speech recognition include:

Games and Edutainment

Speech recognition offers game and edutainment developers the potential to bring their applications to a new level of play. With games, for example, traditional computer-based characters could evolve into characters that the user can actually talk to.
While speech recognition enhances the realism and fun in many computer games, it also provides a useful alternative to keyboard-based control, and voice commands provide new freedom for the user in any sort of application, from entertainment to office productivity.

Data Entry

Applications that require users to key paper-based data into the computer (such as database front-ends and spreadsheets) are good candidates for a speech recognition application. Reading data directly into the computer is much easier for most users and can significantly speed up data entry.
While speech recognition technology cannot effectively be used to enter names, it can enter numbers or items selected from a small (less than 100 items) list. Some recognizers can even handle spelling fairly well. If an application has fields with mutually exclusive data types (for example, one field allows "male" or "female", another is for age, and a third is for city), the speech recognition engine can process the command and automatically determine which field to fill in.

Document Editing

This is a scenario in which one or both modes of speech recognition could be used to dramatically improve productivity. Dictation would allow users to dictate entire documents without typing. Command and control would allow users to modify formatting or change views without using the mouse or keyboard. For example, a word processor might provide commands like "bold", "italic", "change to Times New Roman font", "use bullet list text style," and "use 18 point type." A paint package might have "select eraser" or "choose a wider brush."


SPEECH RECOGNITION

One of the most important inventions of the nineteenth century was the telephone. Then, at the midpoint of the twentieth century, the invention of the digital computer amplified the power of our minds, enabled us to think and work more efficiently, and made us more imaginative than we could ever have imagined. Now several new technologies have empowered us to teach computers to talk to us in our native languages and to listen to us when we speak (recognition); haltingly, computers have begun to understand what we say. Having given our computers both oral and aural abilities, we have been able to produce innumerable computer applications that further enhance our productivity. Such capabilities enable us to route phone calls automatically and to obtain and update computer-based information by telephone, using a group of activities collectively referred to as voice processing.

SPEECH TECHNOLOGY

Three primary speech technologies are used in voice processing applications: stored speech, text-to-speech, and speech recognition. Stored speech involves the production of computer speech from an actual human voice that is stored in a computer's memory and used in any of several ways. Speech can also be synthesized from plain text in a process known as text-to-speech, which also enables voice processing applications to read from textual databases. Speech recognition is the process of deriving either a textual transcription or some form of meaning from a spoken input.
Speech analysis can be thought of as the part of voice processing that converts human speech to digital forms suitable for transmission or storage by computers. Speech synthesis functions are essentially the inverse of speech analysis: they reconvert speech data from a digital form to one that is similar to the original recording and suitable for playback. Speech analysis is also referred to as digital speech encoding (or simply coding), and speech synthesis as speech decoding.


Wireless Application Testing

Wireless applications are booming in the modern PDA and cell phone market. Standalone applications and value-added extensions of existing applications are both excellent ways to attract customers and increase the value of the hardware. Many companies use these applications as a way to retain customers and improve customer experience and satisfaction. Testing these applications, however, continues to be a roadblock for many developers: securing all platforms is difficult, and testing the many disparate configurations is cumbersome and time consuming.
2. TYPES OF TESTING:
a) Functional Testing
A set of functional tests will be developed based on client documentation. Functional tests verify that the application performs the tasks correctly and accurately as explained in the client documentation. Dedicated QA Engineers will write test cases for each component of functionality and execute those tests. Tests will be clearly defined prior to execution, with an emphasis on actions and expected corresponding behaviors.
b) Compatibility Testing

The wireless market has a myriad of devices available. The differences in each device can be extensive. Each device can have a different operating system, support different technologies, and have different hardware. Presenting a similar end-user experience on each device is challenging. Generally, each platform requires a separate version of the application, and in some cases, the same platform may have different versions for each different device. This often happens when developing applications for cell phones. Different screen sizes/resolutions and differences in other physical hardware often make it necessary to have different versions for each platform.


Thursday, May 15, 2008


Biometric Applications- Speech Recognition

The speaker-specific characteristics of speech are due to differences in physiological and behavioral aspects of the speech production system in humans. The main physiological aspect of the human speech production system is the vocal tract shape. The vocal tract is generally considered as the speech production organ above the vocal folds, which consists of the following: (i) laryngeal pharynx (beneath the epiglottis), (ii) oral pharynx (behind the tongue, between the epiglottis and velum), (iii) oral cavity (forward of the velum and bounded by the lips, tongue, and palate), (iv) nasal pharynx (above the velum, rear end of nasal cavity), and (v) nasal cavity (above the palate and extending from the pharynx to the nostrils). The shaded area in the following figure depicts the vocal tract.
The vocal tract modifies the spectral content of an acoustic wave as it passes through it, thereby producing speech. Hence, it is common in speaker verification systems to make use of features derived only from the vocal tract. In order to characterize the features of the vocal tract, the human speech production mechanism is represented as a discrete-time system of the form depicted in following figure.
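A common way to make this discrete-time source-filter picture concrete is an all-pole (LPC-style) filter excited by a pulse train. The sketch below, with an assumed sampling rate and arbitrary illustrative filter coefficients, stands in for the figure the text refers to; it is not the verification system's actual model.

import numpy as np
from scipy.signal import lfilter

fs = 8000                                    # assumed sampling rate (Hz)
excitation = np.zeros(fs // 10)              # 100 ms of excitation
excitation[::80] = 1.0                       # ~100 Hz glottal pulse train (the source)

# Vocal-tract model: all-pole filter 1/A(z); coefficients are illustrative only.
a = [1.0, -1.3, 0.8]                         # A(z) = 1 - 1.3 z^-1 + 0.8 z^-2
speech_like = lfilter([1.0], a, excitation)  # the "vocal tract" shapes the spectrum
print(speech_like[:5])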


Biometric Applications - FINGERPRINT MATCHING

Biometrics is a rapidly evolving technology, which has been widely used in forensics such as criminal identification and prison security. Recent advancements in biometric sensors and matching algorithms have led to the deployment of biometric authentication in a large number of civilian applications. Biometrics can be used to prevent unauthorized access to ATMs, cellular phones, smart cards, desktop PCs, workstations, and computer networks. It can be used during transactions conducted via telephone and Internet (electronic commerce and electronic banking). In automobiles, biometrics can replace keys with key-less entry and key-less ignition.
FINGERPRINT MATCHING:
Among all the biometric techniques, fingerprint-based identification is the oldest method, which has been successfully used in numerous applications. Everyone is known to have unique, immutable fingerprints. A fingerprint is made of a series of ridges and furrows on the surface of the finger. The uniqueness of a fingerprint can be determined by the pattern of ridges and furrows as well as the minutiae points. Minutiae points are local ridge characteristics that occur at either a ridge bifurcation or a ridge ending. Fingerprint matching techniques can be placed into two categories:
· Minutiae-based
· Correlation-based
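A toy sketch of the minutiae-based idea described above: represent each print as a set of (x, y, angle) minutiae points and count how many points from one set find a close counterpart in the other. The tolerance values and sample points are invented for illustration; real matchers also handle rotation, translation and distortion.

import math

def matched_minutiae(set_a, set_b, dist_tol=10.0, angle_tol=0.3):
    # Count minutiae in set_a that have a nearby, similarly oriented mate in set_b.
    count = 0
    for (xa, ya, ta) in set_a:
        for (xb, yb, tb) in set_b:
            close = math.hypot(xa - xb, ya - yb) <= dist_tol
            aligned = abs(ta - tb) <= angle_tol
            if close and aligned:
                count += 1
                break
    return count

probe = [(100, 120, 0.5), (150, 80, 1.2), (60, 200, 2.0)]      # hypothetical minutiae
template = [(102, 118, 0.45), (60, 205, 2.1), (300, 300, 0.0)]
print(matched_minutiae(probe, template))                       # 2 of 3 minutiae matched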


BIOMETRICS

Biometrics refers to automated methods of recognizing a person based on a physiological or behavioral characteristic. Among the features measured are the face, fingerprints, hand geometry, handwriting, iris, retina, veins, and voice. Biometric technologies are becoming the foundation of an extensive array of highly secure identification and personal verification solutions. As the level of security breaches and transaction fraud increases, the need for highly secure identification and personal verification technologies is becoming apparent.
Biometric-based authentication applications include workstation, network, and domain access, single sign-on, application log-on, data protection, remote access to resources, transaction security and Web security. Trust in these electronic transactions is essential to the healthy growth of the global economy. Utilized alone or integrated with other technologies such as smart cards, encryption keys and digital signatures, biometrics is set to pervade nearly all aspects of the economy and our daily lives. Utilizing biometrics for personal authentication is becoming more convenient and considerably more accurate than current methods (such as passwords or PINs). This is because biometrics links the event to a particular individual (a password or token may be used by someone other than the authorized user), is convenient (nothing to carry or remember), is accurate (it provides positive authentication), can provide an audit trail, and is becoming socially acceptable and inexpensive.


Tuesday, May 6, 2008

ADVANTAGES OF FIBER CABLES OVER WIRES

There are various advantages which make optical fibres better than copper systems. The crucial operating difference between a fibre optic communication system and other types is that signals are transmitted as light: conventional electronic communication relies on electrons passing through wires, while radio frequency and microwave communications rely on radio waves and microwaves travelling through open space. The major points that distinguish optical fibre on the positive side are:
→ Low transmission loss and wide Bandwidth:
Optical Fibres have lower transmission losses and wider bandwidth than copper wires. This means that with optical fibre cable systems more data can be sent over longer distances, thereby decreasing the number of wires and reducing the number of repeaters needed for these spans. This reduction in equipment and components decreases the system cost and complexity.
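As a rough, back-of-the-envelope illustration of the repeater saving, with assumed loss figures (about 0.2 dB/km for single-mode fibre, a few dB/km for a copper line at high frequency) and an assumed 30 dB loss budget per hop:

import math

# Rough repeater-count comparison over a 100 km span.
# Assumed figures, for illustration only: ~0.2 dB/km for single-mode fibre,
# a few dB/km for copper at high frequency, and a 30 dB loss budget per hop.
span_km = 100
loss_budget_db = 30

for medium, loss_db_per_km in [("optical fibre", 0.2), ("copper line", 3.0)]:
    reach_km = loss_budget_db / loss_db_per_km              # distance one hop can cover
    repeaters = max(0, math.ceil(span_km / reach_km) - 1)   # regenerators inside the span
    print(f"{medium}: reach per hop ≈ {reach_km:.0f} km, repeaters over {span_km} km ≈ {repeaters}")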
→ Small size and Weight:

The low weight and the small (hair sized) dimensions of fibres offer a distinct advantage over heavy, bulky wire cables in crowded underground city ducts or in ceiling-mounted cable trays. This is also of importance in aircraft, satellites, and ships, where small, lightweight cables are advantageous, and in tactical military applications, where large amounts of cable must be unreeled and retrieved rapidly.
→ Immunity to interference:

An especially important feature of optical fibres relates to their dielectric nature. This provides optical waveguides with immunity to electromagnetic interference (EMI), such as inductive pickup from signal carrying wires and lightning. It also ensures freedom from electromagnetic pulse (EMP) effects, which is of particular interest in military application.
→ Electrical Isolation:
Since optical fibres are constructed of glass, which is an electrical insulator, there is no need to worry about ground loops; fibre-to-fibre crosstalk is very low, and equipment interface problems are simplified. This also makes the use of fibres attractive in electrically hazardous environments, since the fibre creates no arcing or sparking.
→ Signal Security:

By using an optical fibre, a high degree of data security is afforded, since the optical signal is well confined within the waveguide (with any emanations being absorbed by an opaque jacketing around the fibre). This makes fibre attractive in applications where information security is important, such as banking, computer networks and military systems.

→ Abundant raw materials:

Of additional importance is the fact that silica is the principal material of which optical fibres are made. This raw material is abundant and inexpensive, since it is found in ordinary sand. The expense in making the actual fibre arises in the process required to make ultrapure glass from this raw material.

→ Fibres are not prone to Thefts:
Last but not least, optical fibre cables have the distinctive advantage of not being prone to theft, which is a common problem with conventional copper cables.


OPTICAL FIBRE COMMUNICATIONS

Optical Fibre: A flexible optically transparent fiber, as of glass or plastic, through which light can be transmitted by successive internal reflections.
Optical fibres are fine threads of glass which are capable of transmitting light at about 2/3 the speed of light in vacuum.
Optical Fibre provides the greatest "Information - Carrying - Potential" of any available medium today.
Optical fibres are a key element in light wave communication.
A fibre is thinner than a human hair but is very strong and takes less room than an ordinary cable.

The main job of an optical fibre is to guide the light (which carries the information) with minimum attenuation or loss of signal. It offers the most efficient medium for transmitting digitally formatted information from one point to another. An optical fibre guides a coded series of light pulses down a path in which little of the information-carrying light signal is lost. Optical fibre cables have a much greater bandwidth than metal cables (that is, they transmit more data) and they are less susceptible to interference. Fibre optic technology is relatively new, and because of its high efficiency, optical fibre cables are steadily replacing traditional lines. Currently all new undersea cables are made of optical fibres, and with the arrival of cable TV systems, optical fibre is getting closer to your home.
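A quick check of the "2/3 the speed of light" figure above, and of the total-internal-reflection condition that guides the light, using assumed refractive indices typical of silica fibre (about 1.48 for the core and 1.46 for the cladding):

import math

c = 299_792_458              # speed of light in vacuum, m/s
n_core, n_clad = 1.48, 1.46  # assumed refractive indices of core and cladding

# Light slows to c/n inside the glass core, giving roughly 2/3 of c.
v = c / n_core
print(f"speed in core ≈ {v/1e8:.2f} x 10^8 m/s ({v/c:.2f} c)")

# Total internal reflection (the guiding mechanism) requires rays to hit the
# core/cladding boundary above the critical angle given by Snell's law.
critical_angle = math.degrees(math.asin(n_clad / n_core))
print(f"critical angle ≈ {critical_angle:.1f} degrees")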


INTELLIGENT DATA MINING TECHNIQUES

Traditional data mining techniques such as association analysis, classification and prediction, cluster analysis, and outlier analysis identify patterns in structured data [3]. Newer techniques identify patterns from both structured and unstructured data. As with other forms of data mining, crime data mining raises privacy concerns [4]. Nevertheless, researchers have developed various automated data mining techniques for both local law enforcement and national security applications.

Entity extraction identifies particular patterns from data such as text, images, or audio materials. It has been used to automatically identify persons, addresses, vehicles, and personal characteristics from police narrative reports [5]. In computer forensics, the extraction of software metrics [6] (which include the data structure, program flow, organization and quantity of comments, and use of variable names) can facilitate further investigation by, for example, grouping similar programs written by hackers and tracing their behavior. Entity extraction provides basic information for crime analysis, but its performance depends greatly on the availability of extensive amounts of clean input data.

Clustering techniques group data items into classes with similar characteristics, maximizing intraclass similarity and minimizing interclass similarity, for example to identify suspects who conduct crimes in similar ways or to distinguish among groups belonging to different gangs. These techniques do not have a set of predefined classes for assigning items. Some researchers use the statistics-based concept space algorithm to automatically associate different objects such as persons, organizations, and vehicles in crime records. Using link analysis techniques to identify similar transactions, the Financial Crimes Enforcement Network AI System [8] exploits Bank Secrecy Act data to support the detection and analysis of money laundering and other financial crimes. Clustering crime incidents can automate a major part of crime analysis but is limited by the high computational intensity typically required.

Association rule mining discovers frequently occurring item sets in a database and presents the patterns as rules. This technique has been applied in network intrusion detection to derive association rules from users' interaction history. Investigators can also apply this technique to network intruders' profiles to help detect potential future network attacks. Similar to association rule mining, sequential pattern mining finds frequently occurring sequences of items over a set of transactions that occurred at different times. In network intrusion detection, this approach can identify intrusion patterns among time-stamped data. Surfacing hidden patterns benefits crime analysis, but obtaining meaningful results requires rich and highly structured data.
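A minimal sketch of the association-rule idea described above, counting frequently co-occurring items in a toy set of incident records; the records, item names and support threshold are made up for illustration and are not from any real data set.

from itertools import combinations
from collections import Counter

# Toy "transactions": items observed together in individual incident records.
incidents = [
    {"port_scan", "failed_login", "night"},
    {"port_scan", "failed_login", "vpn"},
    {"failed_login", "night"},
    {"port_scan", "failed_login", "night"},
]
min_support = 3  # an item pair must appear in at least this many records

pair_counts = Counter()
for record in incidents:
    for pair in combinations(sorted(record), 2):
        pair_counts[pair] += 1

# Frequent pairs become simple rules such as "port_scan => failed_login".
for (a, b), count in pair_counts.items():
    if count >= min_support:
        print(f"{a} => {b}  (support {count}/{len(incidents)})")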


Tuesday, April 15, 2008

Worldwide Interoperability for Microwave Access

New and increasingly advanced data services are driving up wireless traffic, which is being further boosted by growth in voice applications in advanced market segments as the migration from fixed to mobile voice continues. This is already putting pressure on some networks and may be leading to difficulties in maintaining acceptable levels of service to subscribers.
For the past few decades lower-bandwidth applications have been growing, but the growth of broadband data applications has been slow. Hence we require a technology which helps the growth of broadband data applications. WiMAX is such a technology: it provides point-to-multipoint broadband wireless access without the need for direct line-of-sight connectivity with a base station.

WiMAX is an acronym that stands for “Worldwide Interoperability for Microwave Access”. WiMAX does not conflict with WiFi but actually complements it. WiMAX is a wireless metropolitan area network (MAN) technology that will connect IEEE 802.11 (WiFi) hotspots to the Internet and provide a wireless extension to cable and DSL for last-mile broadband access. IEEE 802.16 provides up to 50 km of linear service area range and allows users connectivity without a direct line of sight to a base station. The technology also provides shared data rates of up to 70 Mbit/s.


Monday, March 31, 2008

SIMULATION AND ITS IMPORTANCE IN THE VLSI DESIGN

Simulation is a design validation tool used to check a circuit’s function and other parameters such as performance. These tools are available for all levels of design. Four types of simulation tools are:
1 CIRCUIT SIMULATION: Performs a detailed analysis of voltages and currents in the circuit.
2 TIMING SIMULATION: Simulates large circuits that are too big to handle with circuit simulation.
3 SWITCH SIMULATION: Treats the transistors as ideal or semi-ideal switches.
4 GATE SIMULATION: Uses the logic gate as the fundamental unit of simulation (a toy gate-level example appears at the end of this post).

Simulation can be done at two levels:
1 Before the layout is complete.
2 After the completion of layout.
Circuit-level simulation can check hundreds of transistors on a computer, but a VLSI chip contains many thousands of transistors.
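A toy illustration of gate-level simulation (type 4 in the list above), where the logic gate is the unit of evaluation; the circuit, a one-bit half-adder, is an assumed example.

# Toy gate-level simulation of a half-adder (an assumed example circuit).
# Each gate is evaluated as a unit, which is what distinguishes gate-level
# simulation from transistor- or switch-level simulation.

def AND(a, b): return a & b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    """Returns (sum, carry) for one-bit inputs a and b."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"a={a} b={b} -> sum={s} carry={c}")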


VLSI Design

VLSI design has become a major driving force in modern technology. It provides the basis for computing and telecommunications, and the field continues to grow at an amazing pace. The field of VLSI design is a resource-intense engineering discipline. Project and product definitions are economically motivated, and competition on a worldwide basis is very keen. The market potential for innovative designs is very large, but the market window is often short due to both competition and changing consumer demands. The microscopic dimensions of current silicon integrated circuitry make possible the design of digital circuits which may be very complex and yet extremely economical in space, power requirements and cost, and potentially very fast. The space, power and cost aspects have made silicon the dominant fabrication technology for electronics in very wide-ranging areas of application. The combination of complexity and speed is finding ready applications for VLSI systems in digital processing, particularly in those application areas requiring sophisticated high-speed digital processing.


Sunday, March 30, 2008

4G APPLICATIONS AND THEIR BENEFITS TO PUBLIC SAFETY

One of the most notable advanced applications for 4G systems is location-based services. 4G location applications would be based on visualized, virtual navigation schemes relying on a remote database containing graphical representations of streets, buildings, and other physical characteristics of a large metropolitan area. This database could be accessed by a subscriber in a moving vehicle equipped with the appropriate wireless device, which would provide the platform on which a virtual representation of the environment ahead would appear.
For example, one would be able to see the internal layout of a building during an emergency rescue. This type of application is sometimes referred to as "Telegeoprocessing", which is a combination of Geographical Information Systems (GIS) and Global Positioning Systems (GPS) working in concert over a high-capacity wireless mobile system.
Telegeoprocessing over 4G networks will make it possible for the public safety community to have wireless operational functionality and specialized applications for everyday operations, as well as for crisis management.
The emergence of next generation wireless technologies will enhance the effectiveness of the existing methods used by public safety. 3G technologies and beyond could possibly bring the following new features to public safety:


5.1. Virtual navigation:

As described, a remote database contains the graphical representation of streets, buildings, and physical characteristics of a large metropolis. Blocks of this database are transmitted in rapid sequence to a vehicle, where a rendering program permits the occupants to visualize the environment ahead. They may also "virtually" see the internal layout of buildings to plan an emergency rescue, or to plan to engage hostile elements hidden in the building.


5.2. Tele-medicine:

A paramedic assisting a victim of a traffic accident in a remote location could access medical records (e.g., x-rays) and establish a video conference so that a remotely based surgeon could provide “on-scene” assistance. In such a circumstance, the paramedic could relay the victim's vital information (recorded locally) back to the hospital in real time, for review by the surgeon.


5.3. Crisis-management applications:

These arise, for example, as a result of natural disasters where the entire communications infrastructure is in disarray. In such circumstances, restoring communications quickly is essential. With wideband wireless mobile communications, both limited and complete communications capabilities, including Internet and video services, could be set up in a matter of hours. In comparison, it may take days or even weeks to re-establish communications capabilities when a wire-line network is rendered inoperable.


4G -Technology

The approaching 4G (fourth generation) mobile communication systems are projected to solve the still-remaining problems of 3G (third generation) systems and to provide a wide variety of new services, from high-quality voice to high-definition video to high-data-rate wireless channels. The term 4G is used broadly to include several types of broadband wireless access communication systems, not only cellular telephone systems. One of the terms used to describe 4G is MAGIC: Mobile multimedia, Anytime anywhere, Global mobility support, Integrated wireless solution, and Customized personal service. As a promise for the future, 4G systems, that is, cellular broadband wireless access systems, have been attracting much interest in the mobile communication arena. 4G systems will not only support the next generation of mobile services but will also support fixed wireless networks.


Wednesday, March 26, 2008

MEMS and Nanotechnology Applications

There are numerous possible applications for MEMS and Nanotechnology. As a breakthrough technology, allowing unparalleled synergy between previously unrelated fields such as biology and microelectronics, many new MEMS and Nanotechnology applications will emerge, expanding beyond that which is currently identified or known. Here are a few applications of current interest:
Biotechnology
MEMS and Nanotechnology is enabling new discoveries in science and engineering such as the Polymerase Chain Reaction (PCR) microsystems for DNA amplification and identification, micromachined Scanning Tunneling Microscopes (STMs), biochips for detection of hazardous chemical and biological agents, and microsystems for high-throughput drug screening and selection.
Communications
High frequency circuits will benefit considerably from the advent of the RF-MEMS technology. Electrical components such as inductors and tunable capacitors can be improved significantly compared to their integrated counterparts if they are made using MEMS and Nanotechnology. With the integration of such components, the performance of communication circuits will improve, while the total circuit area, power consumption and cost will be reduced. In addition, the mechanical switch, as developed by several research groups, is a key component with huge potential in various microwave circuits. The demonstrated samples of mechanical switches have quality factors much higher than anything previously available.
Reliability and packaging of RF-MEMS components seem to be the two critical issues that need to be solved before they receive wider acceptance by the market.
Accelerometers
MEMS accelerometers are quickly replacing conventional accelerometers for crash air-bag deployment systems in automobiles. The conventional approach uses several bulky accelerometers made of discrete components mounted in the front of the car, with separate electronics near the air bag; this approach costs over $50 per automobile. MEMS and Nanotechnology has made it possible to integrate the accelerometer and electronics onto a single silicon chip at a cost of between $5 and $10. These MEMS accelerometers are much smaller, more functional, lighter and more reliable, and are produced for a fraction of the cost of the conventional macroscale accelerometer element.



Thursday, March 20, 2008

Advantages of MEMS and Nano Manufacturing

First, MEMS and Nanotechnology are extremely diverse technologies that could significantly affect every category of commercial and military product. MEMS and Nanotechnology are already used for tasks ranging from in-dwelling blood pressure monitoring to active suspension systems for automobiles. The nature of MEMS and Nanotechnology and its diversity of useful applications make it potentially a far more pervasive technology than even integrated circuit microchips.
Second, MEMS and Nanotechnology blurs the distinction between complex mechanical systems and integrated circuit electronics. Historically, sensors and actuators are the most costly and unreliable part of a macroscale sensor-actuator-electronics system. MEMS and Nanotechnology allows these complex electromechanical systems to be manufactured using batch fabrication techniques, decreasing the cost and increasing the reliability of the sensors and actuators to equal those of integrated circuits. Yet, even though the performance of MEMS and Nano devices is expected to be superior to macroscale components and systems, the price is predicted to be much lower.


Friday, March 14, 2008

Fabricating MEMS and Nanotechnology

MEMS and Nano devices are extremely small; for example, MEMS and Nanotechnology has made possible electrically driven motors smaller than the diameter of a human hair. But MEMS and Nanotechnology is not primarily about size.
MEMS and Nanotechnology is also not about making things out of silicon, even though silicon possesses excellent material properties which make it an attractive choice for many high-performance mechanical applications; for example, the strength-to-weight ratio of silicon is higher than that of many other engineering materials, which allows very high-bandwidth mechanical devices to be realized.
Instead, the deep insight of MEMS and Nano is that it is a new manufacturing technology: a way of making complex electromechanical systems using batch fabrication techniques similar to those used for integrated circuits, and uniting these electromechanical elements together with electronics.


Saturday, March 8, 2008

Next Generation DATA STORAGE on Nanometre-scale

Imagine having all the information recorded on a stack of 1,540 CDs on a disk the size of a single CD. Or visualise having all the information recorded on a stack of 154 CDs written on a one-inch-square chip.
New probe microscopy techniques and new organic materials could be combined in the next generation of data storage technology, which will be a nanometre-scale technology with a major impact on related storage technologies.
Nanotech organic films could well be the data storage medium of the near future, using micro-electro-mechanical systems (MEMS) probe devices to read and write on the medium. Information will be written, read and stored in clusters of molecules within the inexpensive films. By current convention, a CD (optical disc) holds about 500 megabits of data per square inch.
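A back-of-the-envelope check of the densities quoted above, under the assumption that a CD holds roughly 650 MB in total; the 500 Mb per square inch and 154-CDs-per-square-inch figures are taken from the text.

# Rough density comparison using the figures quoted above.
# Assumption: a CD holds roughly 650 MB (about 5,200 megabits) in total.
cd_capacity_megabits = 650 * 8
cds_per_square_inch = 154                 # figure quoted for the one-inch chip
conventional_density = 500                # megabits per square inch for a CD

new_density = cds_per_square_inch * cd_capacity_megabits
print(f"probe-storage density ≈ {new_density:,} Mb/in^2")
print(f"improvement over a CD ≈ {new_density / conventional_density:.0f}x")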


Sunday, March 2, 2008

NANOSCROLLS - an alternative to nanotubes

Carbon nanoscrolls have emerged as a new alternative to nanotubes. UCLA chemists reported a room-temperature chemical method for producing this new type of carbon, called carbon nanoscrolls. Now, what are these nanoscrolls? Nanoscrolls are very closely related to the much-touted nanotubes but have significant advantages over them.


Sunday, February 24, 2008

Replacing COPPER conductors with NANOTUBES

The life of the silicon chip industry depends on the ability to keep adding ever smaller components to silicon chips to increase computer capability. Because copper’s resistance to electricity increases greatly as the metal’s dimensions decrease, there is a limit to how small copper conductors can be.
In contrast, extremely tiny carbon nanotubes can substitute for copper conductors in smaller computer chip electronic configurations because carbon nanotube electrical resistance is not high.
The new process involves ‘growing’ microscopic, whisker-like carbon nanotubes on the surface of a silicon wafer by means of a chemical process. Researchers deposit a layer of silica over the nanotubes grown on the chip to fill the spaces between the tubes. Then the surface is polished flat. Scientists can build up multiple, cake-like layers with vertical carbon nanotube ‘wires’ that interconnect the layers of electronics that make up the chip.


Monday, February 18, 2008

Introduction to CDMA


Cellular networks are wireless WANs that establish connections between cellular users. The cellular network is comprised of many "cells". The users communicate within a cell through wireless communications. The mobile units in each cell communicate wirelessly with a Base Transceiver Station (BTS). One BTS is assigned to each cell. Regular cable communication channels can be used to connect the BTSs to the Mobile Switching Center (MSC). The MSC determines the destination of the call received from a BTS and routes it to a proper destination, either by sending it to another BTS or to a regular telephone network. Keep in mind that the communication is wireless within a cell only.
The words "code" and "division" are important parts of how CDMA works. CDMA uses codes to convert between analog voice signals and digital signals. CDMA also uses codes to separate (or divide) voice and control data into data streams called "channels." These digital data stream channels should not be confused with frequency channels.



Sunday, February 10, 2008

CODE DIVISION MULTIPLE ACCESS (CDMA)

The current cellular networks use many different, incompatible standards which rely on different frequency modulation techniques. Earlier wireless systems were predominantly analog because mobile communications were primarily developed for voice users. However, the use of cellular networks to support mobile computing applications is increasing rapidly due to the emphasis on mobile commerce.

A collection of technologies is emerging to provide analog as well as digital services over cellular networks to support mobile users and applications. Cellular networks are evolving through first, second and third generations. The CDMA technology belongs to the third generation of cellular networks.

This is an attempt to give you a clear idea of how to generate a CDMA signal, what stages are involved in generating it, and what the call processing stages are.


Tuesday, February 5, 2008

Worldwide Interoperability for Microwave Access(Wi-MAX)

The two driving forces of the modern Internet are broadband and wireless. The WiMAX standard combines the two, delivering high-speed broadband Internet access over a wireless connection. The main problems with broadband access are that it is fairly expensive and it doesn't reach all areas. The main problem with WiFi access is that hot spots are very small, so coverage is sparse.

Wi-MAX is short for Worldwide Interoperability for Microwave Access, and it also goes by the IEEE name 802.16. WiMAX has the potential to do to broadband Internet access what cell phones have done to phone access. In the same way that many people have given up their "land lines" in favor of cell phones, WiMAX could replace cable and DSL services, providing universal Internet access just about anywhere you go. Wi-MAX delivers a point-to-multipoint architecture, making it an ideal method for carriers to deliver broadband to locations where wired connections would be difficult or costly. It may also provide a useful solution for delivering broadband to rural areas where high-speed lines have not yet become available. A WiMAX connection can also be bridged or routed to a standard wired or wireless Local Area Network (LAN).


Thursday, January 24, 2008

NANOTECHNOLOGY


Nanotechnology is molecular manufacturing or, more simply, building things one atom or molecule at a time with programmed nanoscopic robot arms. A nanometer is one billionth of a meter (3 to 4 atoms wide). Utilizing the well-understood chemical properties of atoms and molecules (how they stick together), nanotechnology proposes the construction of novel molecular devices possessing extraordinary properties. The trick is to manipulate atoms individually and place them exactly where needed to produce the desired structure.

Nanotechnology broadly refers to the manipulation of matter on the atomic and molecular scales, i.e. where the objects of interest are 0.1 to 100 nanometers in size. Atomic diameters represent the lower end of this range at tenths of nanometers. Transistors used in the present generation of microprocessors, with dimensions on the order of 100 nanometers, are at the upper end of the nanotechnology range. As atoms come together to form molecules and molecules come together to form clusters or crystals, the inherent macro-scale properties are determined. By controlling molecular structure in material synthesis, mankind has gained unprecedented control over basic material properties such as conductivity, strength, capacity, ductility and reactivity, yielding innovative applications ranging from batteries to automotive materials. This is a passive nano technique that primarily focuses on tuning the properties of the resulting bulk materials. The active nano technique facilitates the creation of functional electronic and ultimately mechanical devices at the nano scale.



Wednesday, January 23, 2008

NANOTECHNOLOGY, A BREAKTHROUGH IN ALL FIELDS

Nanotechnology has been successful in almost all fields. With the help of this technology it is possible to perform operations ranging from manipulating molecules to separate impurities, all the way to providing an easy method for the production of power. In molecular manufacturing systems, nanotechnology makes it possible to transform raw materials, in molecular form, into finished products. Impurities could be separated from feedstock molecules using a sorting rotor.
The purified molecules can be transported away from the sorter system using the molecular equivalent of a conveyor belt. Once on a conveyor belt, the molecules can be transferred to other belts, changing speed or frequency if necessary. The estimated belt speed is 0.5 cm/s and the transition time from belt to belt is less than 0.2 µs. A system for transforming a stream of small feedstock molecules into a stream of reagent moieties would be between one million and three million atoms in size, and it could deliver the equivalent of its own mass in about 3 seconds. The error rate is estimated to be less than 1 in 10^15 operations at 10^6 operations per second, which gives a mean time to failure of about 3,000 years. Another possible scheme has reagent moieties transported up through the centre of a hollow manipulator arm to a working tip for positional synthesis.

Nanotechnology is also playing an important role in electronic displays, aiming to replace existing CROs and CRTs, and it is even competing with LCDs (Liquid Crystal Displays), which are the current trend in electronic displays.
The process involved for electronic display using nanotubes is as follows:
• First, a mixture of C60 and nickel is ‘steered’ to specific surface sites by evaporating it through a mask. The mask has an array of 300 nm holes and can be moved with a precision of 1 nm.
• The C60/nickel mixture is evaporated sequentially in ultra-high vacuum so as to form alternating layers of C60 and nickel with no impurities.
• The stack is then heated in the presence of a magnetic field; in this step, the C60 molecules are transformed into bundles of perfectly aligned nanotubes.



Monday, January 21, 2008

Micro Electro Mechanical Systems

Micro-Electro-Mechanical Systems (MEMS) is the integration of mechanical elements, sensors, actuators, and electronics on a common silicon substrate through micro fabrication technology. While the electronics are fabricated using integrated circuit (IC) process sequences (e.g., CMOS, Bipolar, or BICMOS processes), the micro mechanical components are fabricated using compatible “micro machining” processes that selectively etch away parts of the silicon wafer or add new structural layers to form the mechanical and electromechanical devices.
MEMS promises to revolutionize nearly every product category by bringing together silicon-based microelectronics with micro machining technology, making possible the realization of complete systems-on-a-chip. MEMS is an enabling technology allowing the development of smart products, augmenting the computational ability of microelectronics with the perception and control capabilities of micro sensors and micro actuators and expanding the space of possible designs and applications.


Friday, January 18, 2008

Advantages of Nanotechnology

  • The major advantage is miniaturization: nanotechnology makes it possible to build very small memory products with high storage capacity.
  • Such chips offer high-speed access to the stored data.
  • It is mainly used for making large-capacity database storage devices, such as terabyte-scale storage devices.



Disadvantages of Nanotechnology

  • The manufacturing cost is very high because the product is built up molecule by molecule.
  • The initial investment is very high, so a failed product means a large loss; the technology therefore carries a lot of risk.
  • A damaged nanotechnology product may not be recoverable, and the maintenance cost is high.



Wednesday, January 9, 2008

Nanotechnology Medicine ( Nanomedicine)

Nanotechnology will make us healthy and wealthy though not necessarily wise. In a few decades, this emerging manufacturing technology will let us inexpensively arrange atoms and molecules in most of the ways permitted by physical law. It will let us make supercomputers that fit on the head of a pin and fleets of medical nano robots smaller than a human cell able to eliminate cancer, infections, clogged arteries, and even old age. People will look back on this era with the same feelings we have toward medieval times--when technology was primitive and almost everyone lived in poverty and died young. Besides computers billions of times more powerful than today's, and new medical capabilities that will heal and cure in cases that are now viewed as utterly hopeless, this new and very precise way of fabricating products will also eliminate the pollution from current manufacturing methods. Molecular manufacturing will make exactly what it is supposed to make, no more and no less, and therefore won't make pollutants.


Sunday, January 6, 2008

Advances in medical nanotechnology

Advances in medical nanotechnology depend on our understanding of living systems. With nano-devices, we would be able to explore and analyze living systems in greater detail than ever before. Autonomous molecular machines, operating in the human body, could monitor levels of different compounds and store that information in internal memory. They could determine both their time and location. Thus, information could be gathered about changing conditions inside the body, and that information could be tied to both the location and the time of collection.
These molecular machines could then be filtered out of the blood supply and the stored information could be analyzed. This would provide a picture of activities within healthy or injured tissue. This new information would, in turn, provide us with deeper insights and new approaches to curing the sick and healing the injured. Nano-robots could help us to change our physical appearance. They could be programmed to perform cosmetic surgery, rearranging our atoms to change our ear/nose shape, eye colors or any other physical feature we wish. These are all the possibilities with nanotechnology.
Super medicine:-
If we combine microscopic motors, gears, levers, bearings, plates, sensors, and power and communication cables with powerful microscopic computers, we will have a new class of materials. Programmable and microscopic, these smart materials could be used in medicine.
For example, medical nanites could patrol the body, armed with complete knowledge of a person’s DNA, and dispatch any foreign invader; such cell sentinels would provide immunity not only to the common cold but also to AIDS and any future viral or bacterial mutation. These nanites would do what the plastic surgeon does, but in a better way: no pain, no bruising and results overnight. People would be able to sculpt their own bodies. Those who feel they were born the wrong sex could take on the full reproductive attributes of the opposite sex, so men could bear children.
How about painless childbirth? With mature nanotechnology capable of cellular manipulation, there is no reason a woman should experience torturous hours of labour for that miraculous moment of birth. Trauma is not a great way for the newborn to enter the world either. Birth need not be traumatic with nanotechnology, and dilation could be controlled by the mother without pain.
Life extension:-
A finch lives two years, a parrot ninety, a gecko one year and a Galapagos Islands turtle two hundred. The difference is the genetic programming. Geneticists are quickly unraveling the human genetic code. Life is molecular machinery, with atoms arranged in dynamic, complex relationships controlled by DNA. If we have tools small enough to work on a machine and understand its controlling software, the machine and its behavior can be modified. Even without nanotechnology, genetic therapies for stopping aging, and even reversing it, will be developed soon.
Cryonics-raising the dead:-
When a patient’s heart stops beating, but before the structure of his brain starts to degenerate, the patient is attached to a heart-lung machine and progressively infused with ‘anti-freeze’ and other cellular stabilizers, and then his body temperature is lowered until the patient is at liquid nitrogen temperatures. At this point, all molecular activity stops indefinitely and the patient is put in storage. Later, when nanotechnology cell repair devices become available, the fatal disease that caused ‘death’ is reversed, the anti-freeze toxicity is removed, and the patient is warmed back up, alive and well.



IMPACT OF NANOTECHNOLOGY ON MEDICINE

Nanotechnology may have its biggest impact on the medical industry. For instance, consider patients drinking medical fluids containing nano-robots programmed to attack and reconstruct the molecular structure of cancer cells and viruses to make them harmless.
Diseases are caused largely by damage at the molecular and cellular level. Today’s surgical tools are, at this scale, large and crude. Modern surgery works only because cells have a remarkable ability to regroup, bury their dead and heal over the injury. Nano-robots could also be programmed to perform delicate surgeries. Nano-surgeons could work at a level a thousand times more precise than the sharpest scalpel available today. By working on such a small scale, a nano-robot could operate seamlessly without leaving the scars that conventional surgery does.



