Thursday, December 8, 2011

Touchscreen Technology

Call it an effort of the human mind or a miracle from the human heart, but it is all happening with technology. We have landed in an era where almost anything that can be thought of can also be put into practice, and quite affordably too..!! Just move your hand or walk your fingers over a thing and it works. Yes, it is the interactive, gesture-based technology I am talking about.
 
Touchscreen Technology
Touchscreen technology is a direct-manipulation, gesture-based technology. Direct manipulation is the ability to manipulate the digital world inside a screen without typed command-line commands. A device which works on this technology is called a touchscreen. A touchscreen is an electronic visual display capable of ‘detecting’ and effectively ‘locating’ a touch over its display area. It is sensitive to the touch of a human finger, hand, pointed fingernail and passive objects like a stylus. Users can simply move things on the screen, scroll them, make them bigger and much more.
 
Hailing the History..!!
The first ever touchscreen was developed by E.A. Johnson at the Royal Radar Establishment, Malvern, UK in the late 1960s. Evidently, the first touchscreen was a capacitive type, the kind widely used in smartphones nowadays. In 1971, a milestone in touchscreen technology was developed by Dr. Sam Hurst, an instructor at the University of Kentucky Research Foundation. It was a touch sensor named the ‘Elograph’. Later, in 1974, Hurst, in association with his company Elographics, came up with the first real touchscreen featuring a transparent surface. In 1977, Elographics developed and patented resistive touchscreen technology, one of the most popular touchscreen technologies in use today.
 
Ever since then, touchscreen displays have been widely used in computers, interactive machines, public kiosks, point-of-sale applications, gaming consoles, PDAs, smartphones, tablets, etc.
Types of Touchscreen Technology
Let us now look at this revolutionary technology through an engineer’s eyes. A touchscreen is a two-dimensional sensing device made of two sheets of material separated by spacers. There are four main touchscreen technologies:
1) Resistive
2) Capacitive
3) Surface Acoustic Wave
4) Infrared
 
1.      Resistive Touchscreen Technology
The resistive touchscreen consists of a flexible top layer made of polyethylene terephthalate (PET) and a rigid bottom layer made of glass. Both layers are coated with a conducting compound called indium tin oxide (ITO) and separated by spacers. While the monitor is operational, an electric current flows between the two layers. When a touch is made, the flexible top layer presses down and contacts the bottom layer. The resulting change in electrical current is detected, and the coordinates of the point of touch are calculated by the controller and parsed into readable signals for the operating system to react accordingly.
                                     
Resistive Touch Screen Technology
Some of the popular devices that use Resistive Touchscreen are Nintendo DS, Nokia N97, HTC Touch Pro2, HTC Tattoo, Sony Ericsson Satio, etc.
 
These systems transmit only about 75% of the light from the monitor. Resistive touchscreens are further divided into 4-, 5-, 6-, 7- and 8-wired variants. While the construction of all these modules is similar, each has its own method of determining the coordinates of a touch.
 
The four-wire resistive touchscreen uses both layers to calculate the axis information of the touch. Touch measurement in the 4-wire design is a two-step process. The x-coordinate of the touch point is calculated by creating a voltage gradient across the flexible layer, and the y-coordinate is determined by creating a voltage gradient along the bottom layer.
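In controller terms, each step simply drives one layer with a voltage gradient, reads the voltage picked up by the other layer through an ADC, and scales that reading to screen coordinates. The sketch below is only a minimal illustration of that two-step read: drive_gradient() and read_adc() are hypothetical stand-ins for real controller hardware (here they return canned values), and the 12-bit ADC and screen size are assumptions.

#include <stdio.h>

#define ADC_MAX 4095            /* assuming a 12-bit ADC */

enum layer { TOP_LAYER, BOTTOM_LAYER };

static void drive_gradient(enum layer l)
{
    (void)l;                    /* real firmware would switch the drive voltages here */
}

static unsigned read_adc(enum layer l)
{
    /* Simulated readings: a touch roughly one third across and half way down. */
    return (l == BOTTOM_LAYER) ? 1365u : 2048u;
}

int main(void)
{
    unsigned screen_w = 480, screen_h = 320;

    /* Step 1: voltage gradient across the flexible top layer, sense on the bottom layer. */
    drive_gradient(TOP_LAYER);
    unsigned x = read_adc(BOTTOM_LAYER) * screen_w / ADC_MAX;

    /* Step 2: voltage gradient across the glass bottom layer, sense on the top layer. */
    drive_gradient(BOTTOM_LAYER);
    unsigned y = read_adc(TOP_LAYER) * screen_h / ADC_MAX;

    printf("touch at (%u, %u)\n", x, y);
    return 0;
}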
 
Pros and Cons: The 4-wire resistive touchscreen is less durable, only moderately accurate and can drift with environmental changes. However, these drawbacks really show up only with large-sized touchscreens. 4-wire screens are relatively cheap, easily available and consume little power.
 
The eight-wire resistive touchscreen is simply a variation of the 4-wire design with the addition of four sense wires, two for each layer. The sensing points help reduce environmental drift and so increase the stability of the system. 8-wire systems are employed in sizes of 10.4” or larger, where the drift can be significant.
 
The five-wire resistive touchscreen does not use the coversheet (flexible layer) in determining the touch coordinates. All position sensing is performed on the stable glass layer. In this design, one wire goes to the coversheet and four wires are connected to the four corners of the bottom sheet. The coversheet acts only as a voltage-measuring probe. The functioning of the touchscreen therefore remains unaffected even by changes in the uniformity of the conductive coating on the coversheet.
 
Pros and Cons: Highly durable, accurate and reliable, but the technology involves more complex electronics and is more expensive. It can be used in sizes up to 22”.
 
The six- and seven-wire resistive touchscreens are variants of the 5- and 4-wire technologies respectively. In the 6-wire version an extra ground layer is added behind the glass plate, which is said to improve the system’s performance, while the 7-wire variant has two sense lines on the bottom plate. These technologies are not as prevalent as their counterparts.
 
The Resistive Touchscreen works well with almost any stylus-like object.
2.      Capacitive Touchscreen Technology
Capacitive touchscreen technology is the most popular and durable touchscreen technology in use around the world today. It consists of a glass panel coated with a capacitive (conductive) material, indium tin oxide (ITO). Capacitive systems transmit almost 90% of the light from the monitor. Some of the devices using capacitive touchscreens are the Motorola Xoom, Samsung Galaxy Tab, Samsung Galaxy S II and Apple’s iPad. The main capacitive technologies are explained below.
 
Surface-capacitive screens: in this technique only one side of the insulator is coated with a conductive layer. While the monitor is operational, a uniform electrostatic field is formed over the conductive layer. Whenever a human finger touches the screen, charge is drawn away at the point of contact, forming a dynamic capacitor. The computer or controller then determines the position of the touch by measuring the change in capacitance (the small currents drawn) at the four corners of the screen.
 
Pros and Cons: The surface-capacitive touchscreen is moderately durable and needs calibration during manufacture. Since a conductive object is required to operate the screen, a passive stylus cannot be used with it.
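Conceptually, the controller sees four small currents, one per corner, and the touch lies closer to the corners that draw more current. The sketch below is a deliberately simplified, illustrative weighting of four corner readings into a normalized (x, y) position; real controllers apply calibration curves, and the corner values here are simulated demonstration numbers, not measurements.

#include <stdio.h>

/* Simplified surface-capacitive position estimate (illustrative only).
 * i_tl, i_tr, i_bl, i_br are currents measured at the top-left, top-right,
 * bottom-left and bottom-right corners. The touch pulls more current
 * through the nearer corners, so a weighted ratio gives an approximate
 * normalized position in [0, 1]. */
static void estimate_position(double i_tl, double i_tr,
                              double i_bl, double i_br,
                              double *x, double *y)
{
    double total = i_tl + i_tr + i_bl + i_br;
    *x = (i_tr + i_br) / total;   /* right-hand corners pull x toward 1 */
    *y = (i_bl + i_br) / total;   /* bottom corners pull y toward 1 */
}

int main(void)
{
    double x, y;
    /* simulated corner currents for a touch near the bottom-right of the screen */
    estimate_position(0.10, 0.20, 0.25, 0.45, &x, &y);
    printf("normalized touch at (%.2f, %.2f)\n", x, y);
    return 0;
}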
 
Capacitive Touch Screen Technology
In projected-capacitive touchscreen technology, the conductive ITO layer is etched to form a grid of horizontal and vertical electrodes. Sensing is performed along both the X and Y axes using this finely etched ITO pattern.
 
Projected Touchscreen technology
The projected-capacitive screen effectively contains a sensor at every intersection of a row and a column, which increases the accuracy of the system. There are two types of projected-capacitive sensing: mutual capacitance and self capacitance.
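In a mutual-capacitance design the controller typically drives one row at a time and measures the coupling into every column; a finger near an intersection reduces that coupling. The sketch below illustrates that row-by-row scan over a small grid; the grid size, threshold and measured values are made-up demonstration data, not a real controller interface.

#include <stdio.h>

#define ROWS 4
#define COLS 4
#define TOUCH_DROP 30   /* counts of lost coupling that count as a touch (arbitrary) */

/* Baseline (untouched) mutual-capacitance counts, and a simulated frame
 * in which a finger sits near the intersection of row 2 and column 1. */
static const int baseline[ROWS][COLS] = {
    {200, 200, 200, 200},
    {200, 200, 200, 200},
    {200, 200, 200, 200},
    {200, 200, 200, 200},
};
static const int frame[ROWS][COLS] = {
    {200, 199, 200, 200},
    {198, 196, 199, 200},
    {190, 155, 192, 200},
    {199, 197, 199, 200},
};

int main(void)
{
    /* Scan: for each driven row, compare every column's coupling against
     * its baseline and report the intersections whose coupling dropped. */
    for (int r = 0; r < ROWS; r++) {
        for (int c = 0; c < COLS; c++) {
            int drop = baseline[r][c] - frame[r][c];
            if (drop >= TOUCH_DROP)
                printf("touch detected at row %d, column %d (drop %d)\n", r, c, drop);
        }
    }
    return 0;
}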
3.      Surface Acoustic Wave Touchscreen technology
Surface acoustic wave (SAW) touchscreen technology uses pairs of transducers (one transmitting, one receiving) placed along the X and Y axes of the monitor’s glass plate, together with reflectors. The reflectors redirect the ultrasonic waves sent from the transmitting transducer across the glass toward the receiving transducer. When the screen is touched, part of the wave is absorbed, and the touch is located from the point at which the received wave is attenuated. This technology provides excellent throughput and image clarity.
Pros and Cons: 100% clarity is obtained as no metallic layers sit on the screen, and it can be operated with passive objects like a stylus, gloved hand or fingernail. However, the screen can become contaminated with dirt and oil, which may hamper its smooth functioning.
Surface Acoustic Wave Touchscreen
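Because the reflectors launch the wave across the glass at evenly spaced points, the moment at which the received signal dips tells the controller how far along the axis the finger sits. The snippet below is a toy illustration of that idea for a single axis; the sample count, dip threshold, waveform and screen width are invented demonstration values.

#include <stdio.h>

#define SAMPLES 10
#define DIP_THRESHOLD 0.5   /* fraction of expected amplitude (arbitrary) */

int main(void)
{
    /* Simulated received amplitude along one axis: each sample corresponds to
     * a position across the screen; the touch absorbs energy around index 6. */
    const double expected = 1.0;
    const double received[SAMPLES] = {1.0, 0.98, 1.0, 0.99, 1.0, 0.97, 0.35, 0.96, 1.0, 1.0};
    const double screen_width = 300.0;  /* millimetres, for illustration */

    for (int i = 0; i < SAMPLES; i++) {
        if (received[i] < expected * DIP_THRESHOLD) {
            double x = (double)i / (SAMPLES - 1) * screen_width;
            printf("touch detected at about %.0f mm along the axis\n", x);
        }
    }
    return 0;
}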
4.      Infrared Touchscreen Technology
In infrared touchscreen technology, arrays of IR LEDs and photodetectors are fitted in pairs along the X and Y axes of the screen. The photodetectors detect any change in the pattern of light emitted by the LEDs whenever the user touches the monitor/screen and interrupts the beams crossing that point.
Infrared Touchscreen
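Locating the touch then reduces to finding which horizontal beam and which vertical beam are interrupted at the same time. The sketch below shows that lookup over simulated beam states; the beam counts and the blocked flags are demonstration data only.

#include <stdio.h>

#define X_BEAMS 8
#define Y_BEAMS 6

int main(void)
{
    /* 1 = beam reaches its photodetector, 0 = beam interrupted.
     * Here a finger blocks vertical beam 3 and horizontal beam 2. */
    const int x_beam_clear[X_BEAMS] = {1, 1, 1, 0, 1, 1, 1, 1};
    const int y_beam_clear[Y_BEAMS] = {1, 1, 0, 1, 1, 1};

    for (int x = 0; x < X_BEAMS; x++) {
        for (int y = 0; y < Y_BEAMS; y++) {
            if (!x_beam_clear[x] && !y_beam_clear[y])
                printf("touch detected at beam crossing (%d, %d)\n", x, y);
        }
    }
    return 0;
}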
The starred assets..!!
This promising young touchscreen technology has many advantages over the conventional QWERTY keyboard and monitor. It is far more flexible than its physical counterparts, since a digital display can be reconfigured at any time at the will of the user and as the functionality demands. A touchscreen allows users to customize the interface, for example by altering the language or the size of controls. By adjusting the size of the on-screen keyboard, the user can free the spare area for display and other uses. With the decreasing size of computers and tablets these days, a touchscreen is an added advantage: multiple functions have to be performed on a small screen, and a touchscreen lets the user switch to a function at will. For example, the virtual keyboard, itself an application of touchscreen technology, is displayed on the screen only when the user wants it.
 
However, there is also the other side of the coin: some functions, such as cut-and-paste, right-click menu options and drop-down menus, cannot be performed as easily on a regular touchscreen.
The Plural Touch Technology..!!
Plural touch, or multi-touch, is a variant of touchscreen technology which can detect two or more touches on its display area at the same time. Some of the common functions that require a multi-touch interface are zooming in, zooming out, rotating objects, panning through a document and the virtual keyboard. Multi-touch technology is found in smartphones such as the iPhone, Samsung Galaxy, Nokia N8 and Nexus S, as well as in the Microsoft Touchtable, Apple’s iPad and many more devices.
 
Apple iPhone: ‘Multi-Touch’ is now a trademark of Apple, who proved the concept with a bang with the most successful multi-touch device ever: the iPhone. The first iPhone was unveiled on January 9, 2007. The iPhone is nothing less than a revolution in the touchscreen industry with its masterful functionality and applications. Its touchscreen uses mutual-capacitance technology, and being capacitive it can only be operated with a bare finger, or with multiple fingers for multi-touch.
Microsoft Surface: a multi-touch product from Microsoft that allows multiple users to manipulate digital content through surface computing. The main features of the product are its interface: direct interaction, multi-touch contact, object recognition and a multi-user experience. It is not based on, and therefore not limited by, conventional touch technology; the Surface uses frustrated total internal reflection and projectors underneath the surface for its display and sensing. It is indeed a milestone in the multi-touch scenario.
 

SAP (Systems, Applications & Products in Data Processing)

Necessity is the mother of all inventions; whether they are profitable or not is debatable. Innovation, on the other hand, requires a vision, with the profitability of the enterprise at its heart. Back in the 1970s different companies, big and small, used IBM machines for their business needs and built management programs on them. Five system analysts at IBM observed that everyone was basically building the same management software for themselves, programming along similar lines and investing lots of money in in-house development. They thought that if they could provide a single solution to answer the needs of different enterprises, it would be more profitable and the setup time for these companies could be drastically reduced. Hence, these five IBMers, Dietmar Hopp, Klaus Tschira, Hans-Werner Hector, Hasso Plattner and Claus Wellenreuther, devoted their nights and weekends to developing market-standard enterprise software for real-time data processing that integrated all the business processes. As a result of their efforts, SAP was born.
SAP 
SAP today has subsidiaries in more than 50 countries around the globe and is one of the largest software companies (the third largest independent software provider by revenue) in the world, employing more than 27,000 people and serving over 17,500 customers, which include more than half of the world’s top 500 companies. It has more than 44,000 installations in more than 120 countries and more than 10 million people benefiting from SAP ECC. It mainly focuses on six major industry sectors: Consumer, Process, Financial, Discrete Industries, Public Services and Service Industries. It has industry partners in strong companies like Adobe, CA Technologies, HP, IDS Scheer, Open Text and SmartOps, and is backed by a strong SAP Developer Network (SDN) community sharing knowledge via blogs, forums, training materials and libraries. SAP claims to grow by providing quality solutions, unlike many of its prominent competitors, such as Oracle, which spend huge sums of money acquiring competitors.
 
When Xerox decided to move out of the computer industry, it wanted to retain IBM technology in its business systems. As part of the migration costs, IBM acquired software named SDS/SAPE, which was later given to the founding members of SAP in exchange for about 8% of the company’s founding stock. The company established its headquarters in Weinheim and an office in Mannheim, Germany on April 1, 1972, registered as a private partnership under the German civil code as ‘Systemanalyse und Programmentwicklung’ (Systems Analysis and Program Development), though most of the founders’ time was spent in the offices of their first customer, the local branch of Imperial Chemical Industries (ICI). By the end of its first year of operation, SAP employed nine people and had generated DM 620,000 in revenue.
 
1973 saw the completion of the first accounting system by SAP, named RF, which proved to be a strong foundation for the subsequent software modules that came to be known as SAP R/1. In addition, the company grew from a regional player to a much wider level, winning customers in other parts of Germany such as the tobacco company Roth-Händle and the pharmaceutical firm Knoll. Within two years, SAP had the support of 40 reference customers and the trademark began to emerge. In 1976, a limited-liability company, ‘SAP GmbH, Systeme, Anwendungen und Produkte in der Datenverarbeitung’, was founded, and five years later the private partnership was dissolved, with its rights passing to SAP GmbH. By the end of 1976, SAP’s 25 employees had generated DM 3.81 million in revenue. SAP’s history is sprinkled with success stories of growth from a regional private partnership to a multinational software firm.
 
The next major leap in the company’s profile came in 1978 when the R/2 system was released. R/2 ran on mainframe computers and was the first integrated enterprise package. It was extremely popular with large European multinationals requiring soft-real-time business applications with multi-currency and multilingual support. As sales headed towards the DM 10 million mark, SAP brought all its teams under one roof in its new computer centre in Walldorf, which is now the company’s headquarters. By that time, 50 of the 100 largest industrial companies in Germany were being served by SAP. Working in close coordination with its customers, SAP added various modules to R/2 before it went international. The parallel evolution of computer hardware improved price/performance ratios, which worked in the company’s favour. By 1982, sales were up by 48% and more than 236 companies in Switzerland, Germany and Austria were working with SAP programs.
SAP (International) AG was founded in Switzerland, focusing on increasing sales of the R/2 system internationally. The development teams started to work on newer modules like Personnel Management, Plant Maintenance, and Production Planning and Control Systems. By 1985, SAP had become a well-known name in all European countries, and the company continued to grow with the opening of new subsidiaries in new places. A major part of the improvements in SAP’s solutions is attributed to its partnerships with educational institutes like California State University. In 1988, amid high growth in its international business, SAP GmbH was converted into the stock corporation SAP AG and floated its stock. In 1989, SAP won the ‘Company of the Year’ award from Success Manager Magazine for the first time (it would win it twice more).
 
By the end of the 1980s, the world had started to migrate towards client-server architecture. Keeping up with its past record of flexibility, SAP released its R/3 system to cater to these client-server configurations. In R/3, ‘R’ stands for real-time and ‘3’ for the three-tier architecture. The three tiers are the presentation server (GUI), the application server and the database server. Launched in 1992, it was an instant hit, especially in North America, where SAP’s market share shot up from virtually zero to a whopping 44% of SAP’s worldwide sales, winning the confidence of many Fortune 500 companies. The list is very impressive, with 8 of the top 10 semiconductor companies, 7 of the top 10 pharmaceutical companies and giants like Microsoft appearing on it. This release was mainly aimed at the mid-sized market segment. It was arranged into distinct interlinked modules which covered different explicit functions in an organization, the most popular being Financial Accounting and Controlling (FICO), Materials Management (MM), Sales and Distribution (SD), Human Resources (HR) and Production Planning (PP). SAP has focussed on best-practice methodologies in its software processes, and to cater to particular industries it has developed Industry Specific (IS) modules. By 1997, SAP had partnerships with more than 25 educational institutes, including MIT, which greatly contributed to its improvement.
 
The application server interprets ‘Advanced Business Application Programming / 4th Generation’ (ABAP/4) programs through a collection of executable processes and manages input/output. All executables start at the same time and stop at the same time, and an inventory of these processes is maintained in the application server in a file called the single configuration file. The server may be a single standalone machine, or the work may be distributed over different servers with dedicated functions such as message servers. The application server formats and forwards database requests to the database server, which handles data storage and manipulation such as insertion, retrieval and updating. Server-to-server transactions are encrypted by the SAP cryptographic library. At its core, SAP R/3 had about 10,000 tables which controlled process execution.
The major difference between SAP R/3 and SAP ERP (Enterprise Resource Planning) is that SAP ERP is based on SAP NetWeaver, where core components can be implemented in Java and ABAP and each new component is developed independently in a self-contained manner. The first release of mySAP ERP, launched in 2003, bundled separate products such as SAP R/3 Enterprise and SAP SEM, and was an important move in embracing the internet. The application server was wrapped into NetWeaver, introduced in 2003. SAP ERP was later renamed ECC (ERP Central Component) in subsequent releases, accompanied by architectural changes such as the merging of SAP SEM and the SAP Internet Transaction Server into ECC. Every SAP system communicates with other clients using SAP-specific and HTTP/HTTPS protocols. Along with ECC, the SAP Business Suite comprises four other applications:
 
1.      Customer Relationship Management (CRM)
2.      Product Lifecycle Management (PLM)
3.      Supply Chain Management (SCM)
4.      Supplier Relationship Management (SRM) 
 
SAP provides its solutions in the form of modules where the customer has the flexibility to buy only the relevant modules. The most prominent modules offered are:
1. Controlling
2. Financial Accounting
3. Financial Supply Chain Management
4. Human Resources
5. Logistics Execution
6. Materials Management
7. Plant Maintenance
8. Project System
9. Production Planning
10. Quality Management
11. Sales and Distribution
 
A SAP implementation is generally a large-scale project that can span months and sometimes years, depending on the complexity of the organization. The implementation methodology was originally called Accelerated SAP (ASAP) but has since migrated to Solution Manager. The SOLMAN tool is used for functions such as project management, system support and defect tracking, all of which are essential for a successful SAP implementation. The entire implementation is divided into phases with set goals; in the end, the users should be able to start performing their daily business on the new SAP system.
 
SAP implementations can be pretty expensive. The product is sold on a price per user basis and the actual cost may depend on a variety of factors like number of users, modules etc. There are risks involved and a thorough cost-benefit analysis is necessary before any decision on migration is made. The implementation cost depends on three major factors:
1.      Timeframe: SAP may be implemented over a few days and may even take 5 to 10 years for total migration and cost varies accordingly.
2.      People: The implementation can be done with totally in house workforce or may require dozens of external consultants, project managers and technical people.
3.      Hardware: SAP may be implemented on as few as three machines, one each for the production, test and development systems, or it might span more than 100 instances.
 
Advantages of SAP:
1.      Global integration without linguistic or currency barriers.
2.      Single update applicable companywide, one time affair.
3.      Real Time information, reduced redundancy errors.
4.      Increased efficiency.
 
Disadvantages of SAP:
1.      Bound into a legal contract with the vendor.
2.      Could be a cause of inflexibility.
3.      ROI may take too long, which is a huge risk.
4.      Risks involved like project failure.
 
But despite it all, SAP is known to have provided excellent service to its customers, to which its ever-increasing customer base stands witness.
 
The speed of business has not just increased but skyrocketed in terms of innovation, competition, finance and regulation. The ability of ERP solutions to handle such changes, on the other hand, is rather slow. To find a way around this, SAP has started to focus on in-memory computing. At the recent SAPPHIRE conference, the company highlighted its efforts in fields like in-memory computing, analytics and mobile devices, stressing consistency and success models based on Best Practices and Rapid Deployment Solutions (RDS). In-memory computing allows billions of records, including data captured through sensors such as RFID, to be parsed, sorted and totalled in memory, so that everyone in the firm can dynamically access and update the repository. With the focus on mobile computing, users can now use their mobile devices to access and update the system. With more than 20 RDS offerings since the first shipment in September 2010, the company has shown a very aggressive attitude and commitment.
 
In modern times, when time is at a premium, packaged services offer pay-as-you-go consultancy solutions, allowing companies to deliver services faster while keeping maximum control over their infrastructure. In contrast to open-ended arrangements, which are often prone to problems like scope creep, packaged solutions can deliver rapid returns on technology investments. SAP solutions have historically been flexible enough to adapt to user requirements. With such aggressive policies for taming the market and a commitment to utilizing its vast knowledge base, SAP can be expected to stay strong for many years to come.
 

Sixth Sense Technology

It’s the beginning of a new era of technology in which engineering will reach new milestones. Just like in the science fiction movies, where computer displays appear on walls, commands are given by gestures and a smart digital environment talks to us and does our work, all of this will be possible very soon. If you can imagine it, sixth sense technology will make it possible. Isn’t that futuristic? Now it’s time for sci-fi movie directors to think further ahead, because the technology shown in their films will soon become household stuff. Only a few years ago it was considered supernatural, a tantalizing fantasy, but now it has been made possible, thanks to Pranav Mistry, the genius who introduced mankind to this futuristic technology.
What is sixth sense?
Sixth Sense is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information. It is based on the concepts of augmented reality and implements them well, integrating real-world objects with the digital world. The fabulous 6th sense technology is a blend of many exquisite technologies; what makes it magnificent is the marvellous integration of all of them into a single portable and economical product. It brings together technologies like hand gesture recognition, image capturing, processing and manipulation, superimposing the digital world on the real world.
Sixth sense technology is one realization of the augmented reality concept. Just as our senses let us perceive information about the environment in different ways, it too aims at perceiving information; in fact, it is about comprehending more information than our existing senses provide. Today there is not just the physical world from which we get information but also the digital world, which has become as important a part of our life and, with the internet, can be expanded to many times the size of the physical world. God hasn’t given us senses to interact with the digital world, so we have created devices for it: smartphones, tablets, computers, laptops, netbooks, PDAs, music players and other gadgets. These gadgets enable us to communicate with the digital world around us.
But we’re humans and our physical bodies aren’t meant for the digital world, so we can’t interact with it directly. For instance, we press keys to dial a number; we type text to search for it, and so on. This means that to communicate with the digital world, an individual must first learn how. We don’t communicate with the digital world as directly and efficiently as we do with the real world. Sixth sense technology is all about interacting with the digital world in the most efficient and direct way, so it wouldn’t be wrong to describe it as a gateway between the digital and real worlds. Before Wear Ur World (WuW) there were other methods, like speech recognition and touch recognition software, which gave us more direct interfacing.
The WuW, or sixth sense device, invented by Pranav Mistry is a prototype of the next level of digital-to-real-world interfacing. It comprises a camera, a projector, a mobile computing device and colored markers worn on the fingers of a human being. The device efficiently senses the motion of the colored markers and uses them to give us the freedom of interacting with the digital world directly, as if we were interacting with the real world.


Why choose sixth sense technology?
Humans take decisions after acquiring input from the senses, but the information we collect isn’t always enough to lead to the right decisions. Information that could help us make a good decision is, however, largely available on the internet. Although that information can be gathered through connected devices like computers and mobiles, they are restricted to the screen, and there is no direct interaction between the tangible physical world and the intangible digital world. Sixth sense technology gives us the freedom of interacting with the digital world through hand gestures. It also has wide application in the field of artificial intelligence: the methodology could aid in building bots that are able to interact with humans.
How does sixth sense work?
Sixth sense technology uses several technologies such as gesture recognition and image processing. At present no commercial product has been launched, but a prototype has been built using very common and easily available equipment: a pocket projector, a mirror, mobile components, color markers and a camera.

The projector projects visual images onto a surface. This surface can be a wall, a table, a book or even your hand; thus, the entire world is now available as your screen. When the user moves their hands to form gestures with the colored markers on their fingertips, the camera captures these movements. Both the projector and the camera are connected to the mobile computing device in the user’s pocket. Recognition is done using computer vision techniques: the markers act as visual tracking fiducials, and the software processes the video stream and interprets the movements as gestures. Each gesture is distinct and is assigned a command, and these gestures act as input to the application being projected. Since the projector points downwards for compactness, the image would be formed at the user’s feet if a mirror weren’t used; the mirror reflects the projected image to the front. The entire hardware is fabricated in the form of a pendant. The whole prototype costs around $350, most of that being the projector. It works much like a touchscreen phone, with the entire world as the screen.
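At its simplest, tracking a colored fingertip marker means scanning each camera frame for pixels of that color and taking their centroid; the gesture recognizer then works on how those centroids move over time. The sketch below shows that idea for one marker on a single toy RGB frame; the frame contents, color thresholds and image size are invented demonstration values, not Mistry’s actual software.

#include <stdio.h>

#define W 8
#define H 6

/* One pixel of a toy RGB frame. */
struct pixel { unsigned char r, g, b; };

/* Is this pixel "red enough" to belong to a red fingertip marker?
 * The thresholds are arbitrary demonstration values. */
static int is_red_marker(struct pixel p)
{
    return p.r > 200 && p.g < 80 && p.b < 80;
}

int main(void)
{
    static struct pixel frame[H][W];     /* zero-initialized black frame */

    /* Simulate a red marker occupying a small patch around (x = 5, y = 2). */
    frame[2][5] = (struct pixel){ 255, 10, 10 };
    frame[2][6] = (struct pixel){ 250, 20, 15 };
    frame[3][5] = (struct pixel){ 245, 30, 25 };

    long sum_x = 0, sum_y = 0, count = 0;
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            if (is_red_marker(frame[y][x])) {
                sum_x += x;
                sum_y += y;
                count++;
            }

    if (count > 0)
        printf("marker centroid at (%.1f, %.1f)\n",
               (double)sum_x / count, (double)sum_y / count);
    else
        printf("marker not visible in this frame\n");
    return 0;
}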
Evolution of Sixth Sense Technology
Steve Mann is considered the father of sixth sense technology; he built a wearable computer in 1990 and implemented the idea as a neck-worn projector with a camera system while he was a Media Lab student. His work was carried forward by Pranav Mistry, an Indian research assistant at the MIT Media Lab, who came up with exciting new applications for the technology. The system was developed at the MIT Media Lab and named Wear Ur World (WUW), and the inventors filed a patent under that name in February 2010.
“Rather than waiting for that time to come, I want people to make their own system. Why not?” Mistry says in an article on Rediff Business. “People will be able to make their own hardware. I will give them instructions how to make it. And also provide them key software… give them basic key software layers… they will be able to build their own applications. They will be able to modify the base level and do anything.”
So it can be expected that the software will be open source and there will be a wide market of apps too.
Applications
Fingers as a brush: the user can draw anything in a paint application with their fingers, even in 3D, so there is no need for a mouse.
Capture photos with your fingers: the user can frame a scene with their fingers to capture a photo, so there is no need to carry an additional gizmo; the box formed by the fingers acts as the frame.
The palm is the new dialer: the user can make calls without a dialer; the keypad is projected onto the palm and the number is dialled with the other hand.
Read books easily: check the ratings of the book you are about to buy, fetched from the internet, and the system can even read the book out to you.
Video newspapers: like the moving newspapers of Harry Potter, the technology identifies a news headline and projects the relevant video onto the page.
Check your flight status: just hold the ticket in front of the system and it checks the status on the internet.
Clock: the user just makes the gesture of looking at a watch and a clock is projected onto the user’s hand.
Access the internet anywhere: the user can browse the internet on any surface, even on their palm.
Conclusion
This technology has countless applications. It can act as a substitute for missing senses for handicapped people, and it can provide easy control over machinery in industry. It will mean different things to different developers, depending on what they imagine and want to build. Considering its widespread applications, the inventor Pranav Mistry has decided to make its software open source, enabling individuals to build their own applications according to their needs and imagination. As this technology emerges, new devices and new markets will evolve, and some existing devices and technologies will be discontinued, but one thing is guaranteed: it will write a new chapter in the history of science and technology.
TED Video

Compilers

Compiling is a term often heard by everyone who is associated with programming, even if only remotely. A compiler is a program which converts a high-level language program/code into binary instructions (machine language) that our computer can interpret, understand and execute.
Let us take an example to understand what the above definition means.
If you ask any person associated with programming what the first program he or she wrote was, the obvious answer is “Hello World”. So let us also start with the same.
#include <stdio.h>

int main(void)
{
    printf("Hello, world!\n");
    return 0;
}
 
This most basic program prints the words "Hello, world!" on the computer screen. But it is not that simple: behind the curtains a lot of complex things are going on. Let us peep inside. The hard truth is that our computer cannot understand the commands/instructions contained in a source file (helloworld.c), because C is a high-level language; it contains various characters, symbols and words (e.g. printf, main, header files) that represent complex, number-based instructions. The only instructions a computer can execute are those written in machine language, consisting entirely of numbers, that is, binary 0s and 1s. Before our computer can run our C program, our compiler must convert helloworld.c into an object file; then a program called a linker converts the object file into an executable file.
The drawing below illustrates the process.
Compilers 
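As a concrete illustration of those two stages (assuming the GCC toolchain is installed; other compilers use different commands), they can be run separately from the command line:

gcc -c helloworld.c -o helloworld.o        compile: source file to object file
gcc helloworld.o -o helloworld             link: object file to executable

Running ./helloworld then prints the greeting. In everyday use a single command such as gcc helloworld.c -o helloworld performs both steps at once.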

What is a compiler?
What we defined earlier is only half true. The notion of a compiler as a program which converts input (a high-level language) into output (assembly language or machine code) for some processor is a very limited definition. In a broader sense, a compiler takes a string and outputs another string. This definition covers all manner of software which converts one string into another, such as text formatters which convert an input language into printable output, programs which convert among various file formats or between programming languages, and even web browsers.
History
In earlier times only machine-dependent programming languages were used, so a program which ran on one machine could not run on any other, being specific to that machine. When high-level, machine-independent languages were first invented in the 1940s and 50s, no compilers had yet been written. The first compiler was written by Grace Hopper in the fifties, and the FORTRAN team led by John Backus at IBM introduced the first complete compiler in 1957.
Development
Early compilers were complex in their code and had long compile times. With time, several developments took place which led to more advanced compilers, the main one being the splitting of the compiler into parts. In broad terms a compiler can be divided into three parts:
·   The front end - It understands the syntax of the source language.
·   The mid-end - Its role is to perform the high level optimizations.
·   The back end - It produces the assembly language.
Front end
The front end’s job is to analyze the source code file and then build an internal picture of the program (code), called the intermediate representation or IR. It also manages the symbol table, a data structure which maps each symbol in the program to the corresponding information (its location and type, among other things). This is done over several phases:
·         Line reconstruction - This phase converts the input character sequence into a form ready for the parsing phase.
·         Lexical analysis - This phase divides the code into small pieces known as tokens (for example a keyword, identifier or symbol name). It is also called lexing or scanning, and the corresponding software is called a lexical analyzer or scanner.
·         Preprocessing - Some languages, for example C, require a preprocessing phase which performs macro substitution and sometimes conditional compilation.
·         Syntax analysis - This phase parses the tokens to identify the syntactic structure of the code, building a parse tree according to the rules of the grammar of the language.
.   Grammar: The grammar of a language is needed to extract a meaningful outcome from the language. It doesn’t define the meaning itself; rather, it defines rules for arriving at a meaning. Grammars are specified using "productions". A couple of example productions are:
Statement -> if (expression) statement else statement
Statement -> while (expression) statement
.   Tree: The tree is the data structure used internally by the compiler to represent the meaning of the code. It is built after the parsing phase, in which the grammar of the language is matched against the code.
·         Semantic analysis - This phase adds semantic information to the parse tree and builds the symbol table. It performs semantic checks and rejects incorrect programs or issues warnings.
.   Symbol Table: Every subroutine and variable has information associated with it.
Variable names – information regarding type, storage location and scope.
Subroutine names – information regarding location, arguments and types. The information is associated via a "hash map", a table that associates a string with the corresponding information.
 

Back end
In layman’s terms, one can say that the back end is associated with analyzing and optimizing the intermediate representation and generating the code for the target machine.
The main phases of the back end include the following:
·        Analysis: This phase is the basis for optimization of the code. Typical analysis methods are data-flow analysis, dependence analysis, pointer analysis, etc. The call graph and control-flow graph are also built during this phase.
·        Optimization: This intermediate phase converts the intermediate representation into equivalent but faster forms. Common optimization methods are inline expansion, dead-code elimination, loop transformation, register allocation, etc. (a small before-and-after example follows this list).
·        Code generation: This is generally the last phase in the process, as it produces the output language, that is, the machine language. It involves resource and storage decisions. Debug data, if requested, is also generated during this phase.
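To make "equivalent but faster forms" concrete, here is a small hand-worked illustration of two of the optimizations named above, constant folding combined with dead-code elimination. It shows what an optimizer could do to a trivial function; it is not the output of any particular compiler.

#include <stdio.h>

/* Before optimization: the compiler sees a constant expression and a
 * branch that can never be taken. */
int area_before(void)
{
    int width  = 6;
    int height = 7;
    int area   = width * height;   /* both operands are known constants */
    if (area < 0)                  /* can never be true: dead code */
        area = 0;
    return area;
}

/* After constant folding (6 * 7 evaluated at compile time) and dead-code
 * elimination (the impossible branch removed), the whole function
 * effectively reduces to returning a constant. */
int area_after(void)
{
    return 42;
}

int main(void)
{
    printf("%d %d\n", area_before(), area_after());   /* both print 42 */
    return 0;
}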
If we break these three parts down further, there are a total of seven levels. Of these, a few major stages are explained below:
The Lexer (or lexical analyzer)
The lexer is the first process in the compiling pipeline. Its purpose is to decompose the stream of input characters into discrete units known as "tokens".
Let us take an example to understand it:
char str[] = "Compiler.";
Decomposes into:
token 1: Keyword, "char"
token 2: Identifier, "str"
token 3: Left square bracket
token 4: Right square bracket
token 5: Equals sign
token 6: String, "Compiler."    (a whole string literal is a single token)
token 7: Semicolon
From the above example we can see that a token is a string of characters categorized according to rules: it may be an identifier, a number, a comma, and so on. The role of the lexer is to categorize each token by its symbol type. Tokens make the subsequent processing of the source text easy.
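A bare-bones lexer really is just a loop that looks at the next character and decides which token class it starts. The following sketch tokenizes a tiny input string in that spirit; it recognizes only identifiers/keywords, integer literals and single-character symbols, which is far less than a real C lexer handles (no string literals, comments or multi-character operators).

#include <stdio.h>
#include <ctype.h>

/* A toy lexer: prints one token per line for a small hard-coded input. */
int main(void)
{
    const char *src = "char str = 42 ;";
    const char *p = src;

    while (*p != '\0') {
        if (isspace((unsigned char)*p)) {           /* skip whitespace */
            p++;
        } else if (isalpha((unsigned char)*p) || *p == '_') {
            printf("identifier/keyword: ");
            while (isalnum((unsigned char)*p) || *p == '_')
                putchar(*p++);
            putchar('\n');
        } else if (isdigit((unsigned char)*p)) {
            printf("number: ");
            while (isdigit((unsigned char)*p))
                putchar(*p++);
            putchar('\n');
        } else {
            printf("symbol: %c\n", *p++);           /* =, ;, [, ] ... */
        }
    }
    return 0;
}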

The Parser
In today’s world, when dependency on machines is constantly increasing and lots of complicated tasks are performed by them, the parsing phase is very important: it is what gives the computer the ability to understand the code and act on it.
Parsing is the process of understanding the syntax of a language by representing the code with data structures the compiler understands.
Generally there are two main methods of parsing: top-down parsing and bottom-up parsing.
·         Top-down parsing partitions a program from the top down: programs into modules, modules into subroutines, subroutines into blocks (a small recursive-descent sketch follows this list).
·         Bottom-up parsing groups tokens together into terms, then expressions, then statements, and finally blocks and subroutines.
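As an illustration of the top-down approach, here is a tiny recursive-descent parser for arithmetic expressions with + and *, evaluating as it parses. It is a sketch of the general technique over a fixed input string, not part of any real compiler, and it omits error handling.

#include <stdio.h>
#include <ctype.h>

/* Grammar (parsed top-down, one function per rule):
 *   expression -> term   { '+' term }
 *   term       -> factor { '*' factor }
 *   factor     -> number                                        */

static const char *p;              /* cursor into the input string */

static int parse_factor(void)
{
    int value = 0;
    while (isdigit((unsigned char)*p))
        value = value * 10 + (*p++ - '0');
    return value;
}

static int parse_term(void)
{
    int value = parse_factor();
    while (*p == '*') {
        p++;                        /* consume '*' */
        value *= parse_factor();
    }
    return value;
}

static int parse_expression(void)
{
    int value = parse_term();
    while (*p == '+') {
        p++;                        /* consume '+' */
        value += parse_term();
    }
    return value;
}

int main(void)
{
    p = "2+3*4";
    printf("2+3*4 = %d\n", parse_expression());   /* prints 2+3*4 = 14 */
    return 0;
}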
Error detection
Error detection is not really a separate phase but a process which keeps running in the background during the various phases; it is ongoing throughout compilation.
·         Lexer - detects malformed tokens.
·         Parser - detects syntax errors.
·         Tree - detects annotation type mismatches. 
Types of Compilers
Compilers are classified according to the target machine, the input and the output. Some of the types of compilers are:
·         One-pass compiler
·         Threaded code compiler
·         Incremental compiler
·         Stage compiler
·         Just-in-time compiler
·         A retargetable compiler
·         A parallelizing compiler
 Compiler Benefits
·         The main benefit of a compiler is that it allows you to write code that is not machine dependent.
·         The compiler converts a high-level language into machine code, and it also analyzes the source code to make it efficient (by collecting, reorganizing and generating a new set of instructions so that the program runs faster on the computer).
·         Compilers, and the tools built around them, help in debugging the code: font coloring and indentation help the programmer catch errors, and the compiler displays warning and error messages.