Archive for May, 2011

RFID Technology

RFID has become one of the most talked-about topics in electronics because of its revolutionary potential. Some of its applications are described below.

Long checkout lines at the grocery store are one of the biggest complaints about the shopping experience. Soon, these lines could disappear when the ubiquitous Universal Product Code (UPC) bar code is replaced by smart labels, also called radio frequency identification (RFID) tags. RFID tags are intelligent bar codes that can talk to a networked system to track every product that you put in your shopping cart.

Imagine going to the grocery store, filling up your cart and walking right out the door. No longer will you have to wait as someone rings up each item in your cart one at a time. Instead, these RFID tags will communicate with an electronic reader that will detect every item in the cart and ring each up almost instantly. The reader will be connected to a large network that will send information on your products to the retailer and product manufacturers. Your bank will then be notified and the amount of the bill will be deducted from your account. No lines, no waiting.

Outside the realm of retail merchandise, RFID tags are tracking vehicles, airline passengers, Alzheimer’s patients and pets. Soon, they may even track your preference for chunky or creamy peanut butter. Some critics say RFID technology is becoming too much a part of our lives — that is, if we’re even aware of all the parts of our lives that it affects.

In this article, you’ll learn about the types of RFID tags and how these tags can be tracked through the entire supply chain. We’ll also look at the non-commercial uses of RFID tags and how the Departments of State and Homeland Security are using them. Lastly, we’ll examine what some critics consider an Orwellian application of RFID tags in animals, humans and our society.

Reinventing the Bar Code

Almost everything that you buy from retailers has a UPC bar code printed on it. These bar codes help manufacturers and retailers keep track of inventory. They also give valuable information about the quantity of products being bought and, to some extent, by whom. These codes serve as product fingerprints, made of machine-readable parallel bars that store binary code.

Created in the early 1970s to speed up the checkout process, bar codes have a few disadvantages:

  • In order to keep up with inventories, companies must scan each bar code on every box of a particular product.
  • Going through the checkout line involves the same process of scanning each bar code on each item.
  • Bar codes are a read-only technology, meaning that they cannot send out any information.
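As a concrete example of the binary data a bar code carries, the last digit of a UPC-A code is a checksum computed from the other eleven. A minimal Python sketch of that calculation (using the well-known example of the first product ever scanned, a pack of Wrigley's gum):

```python
def upc_check_digit(first11):
    """UPC-A check digit: 3x the digits in odd positions plus the even ones, mod 10."""
    odd = sum(int(d) for d in first11[0::2])    # positions 1, 3, ..., 11
    even = sum(int(d) for d in first11[1::2])   # positions 2, 4, ..., 10
    return (10 - (3 * odd + even) % 10) % 10

# Wrigley's gum carries the full UPC 036000291452: the final 2 is the check digit
print(upc_check_digit("03600029145"))           # -> 2
```

A scanner recomputes this digit on every read, which is how it detects a misread bar.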

RFID tags are an improvement over bar codes because the tags have read and write capabilities. Data stored on RFID tags can be changed, updated and locked. Some stores that have begun using RFID tags have found that the technology offers a better way to track merchandise for stocking and marketing purposes. Through RFID tags, stores can see how quickly the products leave the shelves and who’s buying them.

In addition to retail merchandise, RFID tags have also been added to transportation devices like highway toll passcards and subway passes. Because of their ability to store data so efficiently, RFID tags can tabulate the cost of tolls and fares and deduct the cost electronically from the amount of money that the user places on the card. Rather than waiting to pay a toll at a tollbooth or shelling out coins at a token counter, passengers use RFID chip-embedded passes like debit cards.


RFID Tags Past and Present

RFID technology has been around since the early 1970s, but until recently it was too expensive to use on a large scale. Originally, RFID tags were used to track large items, like cows, railroad cars and airline luggage, that were shipped over long distances. These original tags, called inductively coupled RFID tags, were complex systems of metal coils, antennae and glass.

Inductively coupled RFID tags were powered by a magnetic field generated by the RFID reader. Electrical current has an electrical component and a magnetic component — it is electromagnetic. Because of this, you can create a magnetic field with electricity, and you can create electrical current with a magnetic field. The name “inductively coupled” comes from this process — the magnetic field induces a current in the wire. You can learn more in How Electromagnets Work.
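The induced current follows Faraday's law: the voltage in a coil is proportional to the number of turns and the rate of change of magnetic flux. A quick illustrative calculation (the coil and flux values here are made up purely for illustration, not taken from any real tag):

```python
def induced_emf(turns, flux_change_wb, time_s):
    """|EMF| = N * (dPhi/dt): more turns or a faster flux change means more voltage."""
    return turns * flux_change_wb / time_s

# A 100-turn coil seeing a 2 mWb flux change over 10 ms induces about 20 V
print(round(induced_emf(100, 2e-3, 10e-3), 3))    # -> 20.0
```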

Capacitively coupled tags were created next in an attempt to lower the technology’s cost. These were meant to be disposable tags that could be applied to less expensive merchandise and made as universal as bar codes. Capacitively coupled tags used conductive carbon ink instead of metal coils to transmit data. The ink was printed on paper labels and scanned by readers. Motorola’s BiStatix RFID tags were the frontrunners in this technology. They used a silicon chip that was only 3 mm wide and stored 96 bits of information. This technology didn’t catch on with retailers, and BiStatix was shut down in 2001.

Newer innovations in the RFID industry include active, semi-passive, and passive RFID tags. These tags can store up to 2 kilobytes of data and are composed of a microchip, antenna, and, in the case of active and semi-passive tags, a battery. The tag’s components are enclosed within plastic, silicon or sometimes glass.

At a basic level, each tag works in the same way:

  • Data stored within an RFID tag’s microchip waits to be read.
  • The tag’s antenna receives electromagnetic energy from an RFID reader’s antenna.
  • Using power from its internal battery or power harvested from the reader’s electromagnetic field, the tag sends radio waves back to the reader.
  • The reader picks up the tag’s radio waves and interprets the frequencies as meaningful data.
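The exchange above can be sketched as a toy simulation. This is not a real RFID air protocol, just an illustration of a reader energizing passive tags and collecting the IDs that answer (all class and method names are invented for the sketch):

```python
class PassiveTag:
    """Toy model of a passive RFID tag: no battery, powered by the reader's field."""
    def __init__(self, tag_id):
        self.tag_id = tag_id                 # data stored in the microchip

    def respond(self, field_power):
        # The tag only backscatters its data if it harvests enough energy
        return self.tag_id if field_power >= 1.0 else None

class Reader:
    def __init__(self, field_power):
        self.field_power = field_power       # strength of the emitted field

    def scan(self, tags):
        # Energize every tag in range and collect the IDs that answer
        replies = [tag.respond(self.field_power) for tag in tags]
        return [r for r in replies if r is not None]

cart = [PassiveTag("milk-0471"), PassiveTag("bread-1102")]
print(Reader(field_power=2.0).scan(cart))    # -> ['milk-0471', 'bread-1102']
```

Note how nothing in the cart is scanned one item at a time: the reader collects every reply in a single pass, which is the whole advantage over bar codes.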

Inductively coupled and capacitively coupled RFID tags aren’t used as commonly today because they are expensive and bulky. In the next section, we’ll learn more about active, semi-passive and passive RFID tags.


  • Automotive – Auto-makers have added security and convenience to automobiles by using RFID technology for anti-theft immobilizers and passive-entry systems.
  • Animal Tracking – Ranchers and livestock producers use RFID technology to meet export regulations and optimize livestock value. Wild animals are tracked in ecological studies, and many pets who are tagged are returned to their owners.
  • Asset Tracking – Hospitals and pharmacies meet tough product accountability legislation with RFID; libraries limit theft and keep books in circulation more efficiently; and sports and entertainment entrepreneurs find that “smart tickets” are their ticket to a better bottom line and happier customers.
  • Contactless Payments – Blue-chip companies such as American Express, ExxonMobil, and MasterCard use innovative form factors enabled by TI RFID technology to strengthen brand loyalty and boost revenue per customer.
  • Supply Chain – WalMart, Target, BestBuy, and other retailers have discovered that RFID technology can keep inventories at the optimal level, reduce out-of-stock losses, limit shoplifting, and speed customers through check-out lines.

Posted by

Mahesh ( MGIT ECE 3rd year)


NASA Is Making Hot ‘Way Cool’

The more advanced the electronics, the more power they use. The more power they use, the hotter they get. The hotter they get, the more likely they’ll overheat. It doesn’t take a rocket scientist to understand what typically happens next: The electronics fry.

In the world of electronics, thermal control is always one of the limiting factors — particularly in space where there is no air to help cool down electronic components.

Called electrohydrodynamic (EHD)-based thermal control, the technology promises to make it easier and more efficient to remove heat from small spaces — a particular challenge for engineers building advanced space instruments and microprocessors that could fail if the heat they generate is not removed.

“Today, higher-power computer chips are available, but they generate too much heat,” said Didion, who is leading the technology-development effort also involving Matthew Showalter, associate branch chief of Goddard’s Advanced Manufacturing Branch, and Mario Martins of Edge Space Systems, an engineering company specializing in thermal systems in Glenelg, Md. “If I can carry away more heat, engineers will be able to use higher-power components. In other words, they will be able to do more things.”

The project, a joint activity between NASA Goddard and its partners, received support from the Goddard Internal Research and Development (IRAD) program, which funds the development of promising new technologies that could advance NASA’s scientific and exploration goals. It is being demonstrated in June on a Terrier-Improved Orion sounding rocket mission, which also is flying the Small Rocket/Spacecraft Technology (SMART) platform, a microsatellite also developed at Goddard. This new microsatellite measures about 16 inches in diameter and was specifically designed to give scientific users less expensive access to space.

The main objective of the EHD demonstration is showing that a prototype pump can withstand the extreme launch loads as the rocket lifts off and hurtles toward space. Should it survive the vibration, the technology will have achieved a major milestone in its development, Didion said. It will mean that it is at or near operational status, making it a viable technology for use on spaceflight instruments.

“Any electronic device that generates a lot of heat is going to benefit from this technology,” said Ted Swanson, assistant chief for technology for Goddard’s Mechanical Systems Division. This could include everything from sensors flown in space to those used in automobiles and aircraft.

No Moving Parts

The technology promises significant advantages over more traditional cooling techniques. Unlike current technologies used today by instrument and component developers, EHD does not rely on mechanical pumps and other moving parts. Instead, it uses electric fields to pump coolant through tiny ducts inside a thermal cold plate. From there, the waste heat is dumped onto a radiator and dispersed far from heat-sensitive circuitry that must operate within certain temperature ranges. “Its architecture, therefore, is relatively straightforward,” Didion said. Electrodes apply the voltage that pushes the coolant through the ducts.

“The advantages are many,” he added. “Without mechanical parts, the system is lighter and consumes less power, roughly half a watt. But perhaps more importantly, the system can be scaled to different sizes, from larger cold plates to microscale electronic components and lab-on-a-chip devices.”

Posted by

Mahesh (MGIT ECE 3rd year)


When satellite television first hit the market in the early 1990s, home dishes were expensive metal units that took up a huge chunk of yard space. In these early years, only the most die-hard TV fans would go through all the hassle and expense of putting in their own dish. Satellite TV was a lot harder to get than broadcast and cable TV.

Today, you see compact satellite dishes perched on rooftops all over the United States. Drive through rural areas beyond the reach of the cable companies, and you’ll find dishes on just about every house. The major satellite TV companies are luring in more consumers every day with movies, sporting events and news from around the world and the promise of movie-quality picture and sound.

Satellite TV offers many solutions to broadcast and cable TV problems. Though satellite TV technology is still evolving, it has already become a popular choice for many TV viewers.



Conceptually, satellite TV is a lot like broadcast TV. It’s a wireless system for delivering television programming directly to a viewer’s house. Both broadcast television and satellite stations transmit programming via a radio signal.

Broadcast stations use a powerful antenna to transmit radio waves to the surrounding area. Viewers can pick up the signal with a much smaller antenna. The main limitation of broadcast TV is range. The radio signals used to broadcast television shoot out from the broadcast antenna in a straight line. In order to receive these signals, you have to be in the direct line of sight of the antenna. Small obstacles like trees or small buildings aren’t a problem, but a big obstacle, such as the Earth itself, will block these radio waves.

If the Earth were perfectly flat, you could pick up broadcast TV thousands of miles from the source. But because the planet is curved, it eventually breaks the signal’s line of sight. The other problem with broadcast TV is that the signal is often distorted, even in the viewing area. To get a perfectly clear signal like you find on cable, you have to be pretty close to the broadcast antenna without too many obstacles in the way.
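The effect of the Earth's curvature on range can be estimated with the standard horizon approximation, d ≈ √(2Rh). A short sketch (the 300 m antenna height is an arbitrary example, not a figure from the article):

```python
import math

EARTH_RADIUS_M = 6_371_000

def horizon_km(antenna_height_m):
    """Approximate line-of-sight range to the horizon: d = sqrt(2 * R * h)."""
    return math.sqrt(2 * EARTH_RADIUS_M * antenna_height_m) / 1000

# Even a 300 m broadcast mast reaches only ~62 km before the curvature cuts it off
print(round(horizon_km(300)))   # -> 62
```

This is why broadcast coverage is measured in tens of kilometers, while a satellite 35,700 km up can see nearly half the planet at once.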

The Satellite TV Solution:

Satellite TV solves the problems of range and distortion by transmitting broadcast signals from satellites orbiting the Earth. Since satellites are high in the sky, there are a lot more customers in the line of sight. Satellite TV systems transmit and receive radio signals using specialized antennas called satellite dishes.

The TV satellites are all in geosynchronous orbit, meaning that they stay in one place in the sky relative to the Earth. Each satellite is launched into space at about 7,000 mph (11,000 kph), reaching approximately 22,200 miles (35,700 km) above the Earth. At this speed and altitude, the satellite will revolve around the planet once every 24 hours — the same period of time it takes the Earth to make one full rotation. In other words, the satellite keeps pace with our moving planet exactly. This way, you only have to direct the dish at the satellite once, and from then on it picks up the signal without adjustment, at least when everything works right.
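The roughly 24-hour figure can be checked with Kepler's third law, T = 2π√(a³/μ), using the altitude quoted above (standard textbook values for Earth's radius and gravitational parameter are assumed):

```python
import math

MU_EARTH = 3.986e14       # Earth's gravitational parameter, m^3 / s^2
EARTH_RADIUS_M = 6.371e6

def orbital_period_hours(altitude_m):
    """Kepler's third law: T = 2*pi*sqrt(a^3 / mu), with a measured from Earth's center."""
    a = EARTH_RADIUS_M + altitude_m
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 3600

# At roughly 35,700 km up, one orbit takes about a day -- geosynchronous
print(round(orbital_period_hours(35_700_000), 1))   # -> 23.9
```

The result is just under 24 hours (one sidereal day), which is exactly what keeps the satellite fixed over one spot on the rotating Earth.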


Early satellite TV viewers were explorers of sorts. They used their expensive dishes to discover unique programming that wasn’t necessarily intended for mass audiences. The dish and receiving equipment gave viewers the tools to pick up foreign stations, live feeds between different broadcast stations, NASA activities and a lot of other stuff transmitted using satellites.
Some satellite owners still seek out this sort of programming on their own, but today, most satellite TV customers get their programming through a direct broadcast satellite (DBS) provider, such as DirecTV or DISH Network. The provider selects programs and broadcasts them to subscribers as a set package. Basically, the provider’s goal is to bring dozens or even hundreds of channels to your TV in a form that approximates the competition, cable TV.

Unlike earlier programming, the provider’s broadcast is completely digital, which means it has much better picture and sound quality. Early satellite television was broadcast in C-band radio — radio in the 3.7-gigahertz (GHz) to 6.4-GHz frequency range. Digital broadcast satellite transmits programming in the Ku frequency range (11.7 GHz to 14.5 GHz).

The Components:

There are five major components involved in a direct-to-home (DTH) or direct broadcast satellite (DBS) system: the programming source, the broadcast center, the satellite, the satellite dish and the receiver.

  • Programming sources are simply the channels that provide programming for broadcast. The provider doesn’t create original programming itself; it pays other companies (HBO, for example, or ESPN) for the right to broadcast their content via satellite. In this way, the provider is kind of like a broker between you and the actual programming sources. (Cable TV companies work on the same principle.)
  • The broadcast center is the central hub of the system. At the broadcast center, the TV provider receives signals from various programming sources and beams a broadcast signal to satellites in geosynchronous orbit.
  • The satellites receive the signals from the broadcast station and rebroadcast them to Earth.
  • The viewer’s dish picks up the signal from the satellite (or multiple satellites in the same part of the sky) and passes it on to the receiver in the viewer’s house.
  • The receiver processes the signal and passes it on to a standard TV.



Satellite TV providers get programming from two major sources: national turnaround channels (such as HBO, ESPN and CNN) and various local channels (the ABC, CBS, Fox, NBC and PBS affiliates in a particular area). Most of the turnaround channels also provide programming for cable TV, and the local channels typically broadcast their programming over the airwaves.

Turnaround channels usually have a distribution center that beams their programming to a geosynchronous satellite. The broadcast center uses large satellite dishes to pick up these analog and digital signals from several sources.

Most local stations don’t transmit their programming to satellites, so the provider has to get it another way. If the provider includes local programming in a particular area, it will have a small local facility consisting of a few racks of communications equipment. The equipment receives local signals directly from the broadcaster through fibre-optic cable or an antenna and then transmits them to the central broadcast center.

The broadcast center converts all of this programming into a high-quality, uncompressed digital stream. At this point, the stream contains a vast quantity of data — about 270 megabits per second (Mbps) for each channel. In order to transmit the signal from there, the broadcast center has to compress it. Otherwise, it would be too big for the satellite to handle. In the next section, we’ll find out how the signal is compressed.


Satellite signals have a pretty long path to follow before they appear on your TV screen in the form of your favorite TV show. Because satellite signals contain such high-quality digital data, it would be impossible to transmit them without compression. Compression simply means that unnecessary or repetitive information is removed from the signal before it is transmitted. The signal is reconstructed after transmission.
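As a toy illustration of removing repetitive information, here is run-length encoding, one of the simplest lossless compression schemes (real satellite systems use MPEG, which is far more sophisticated, but the principle of squeezing out repetition and then reconstructing the signal is the same):

```python
def rle_encode(s):
    """Run-length encoding: collapse repeated characters into [char, count] pairs."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1
        else:
            runs.append([ch, 1])
    return runs

def rle_decode(runs):
    """Reconstruct the original signal exactly -- the compression is lossless."""
    return "".join(ch * n for ch, n in runs)

row = "AAAABBBCCD"                  # a run of repetitive 'pixel' values
packed = rle_encode(row)
print(packed)                       # -> [['A', 4], ['B', 3], ['C', 2], ['D', 1]]
assert rle_decode(packed) == row    # the signal is reconstructed after transmission
```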

Standards of Compression:

Satellite TV uses a special type of video file compression standardized by the Moving Picture Experts Group (MPEG). With MPEG compression, the provider is able to transmit significantly more channels. There are currently five of these MPEG standards, each serving a different purpose. DirecTV and DISH Network, the two major satellite TV providers in the United States, once used MPEG-2, which is still used to store movies on DVDs and for digital cable television. With MPEG-2, the TV provider can reduce the 270-Mbps stream to about 5 or 10 Mbps (depending on the type of programming).
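A quick back-of-the-envelope check of those figures: reducing 270 Mbps to 5 Mbps is roughly a 54:1 compression ratio, which is what lets one satellite link carry several channels at once (the 40 Mbps transponder capacity below is a hypothetical figure for illustration, not from the article):

```python
RAW_MBPS = 270        # uncompressed digital stream per channel (from the article)
MPEG2_MBPS = 5        # typical MPEG-2 output for film-style programming

print(RAW_MBPS / MPEG2_MBPS)        # -> 54.0, i.e. roughly 54:1 compression

# With a hypothetical 40 Mbps transponder, compression is the difference
# between carrying several channels and carrying none at all:
TRANSPONDER_MBPS = 40
print(TRANSPONDER_MBPS // MPEG2_MBPS, TRANSPONDER_MBPS // RAW_MBPS)  # -> 8 0
```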

Now, DirecTV and DISH Network use MPEG-4 compression. Because MPEG-4 was originally designed for streaming video on small-screen media like computers, it can encode more efficiently and deliver more channels in the same bandwidth than MPEG-2. MPEG-2 remains the official standard for digital TV compression, but it is better suited to static images, like those you see on a talk show or newscast, than to moving, dynamic images. MPEG-4 can produce a better picture of dynamic images through its use of spatial (space) and temporal (time) compression. This is why satellite TV using MPEG-4 compression provides high definition of quickly moving objects that constantly change place and direction on the screen, as in a basketball game.


Posted by

Ravi teja ( MGIT ECE 3rd year)



For some computer owners, finding enough storage space to hold all the data they’ve acquired is a real challenge. Some people invest in larger hard drives. Others prefer external storage devices like thumb drives or compact discs. Desperate computer owners might delete entire folders worth of old files in order to make space for new information. But some are choosing to rely on a growing trend: cloud storage.

While cloud storage sounds like it has something to do with weather fronts and storm systems, it really refers to saving data to an off-site storage system maintained by a third party. Instead of storing information to your computer’s hard drive or other local storage device, you save it to a remote database. The Internet provides the connection between your computer and the database.

Here, the data center operators virtualize the resources in the background according to the customers’ requirements and expose them as storage pools, which the customers can use to store files or data objects. In other words, using Internet interfaces, we can store our data in virtual storage on a remote server’s database.


Actually, cloud storage is a sub-category of cloud computing, which offers users access not only to storage, but also to processing power and computer applications installed on a remote network.
There are hundreds of different cloud storage systems. Some have a very specific focus, such as storing Web e-mail messages or digital pictures. Others are available to store all forms of digital data. Some cloud storage systems are small operations, while others are so large that the physical equipment can fill up an entire warehouse. The facilities that house cloud storage systems are called data centers.
At its most basic level, a cloud storage system needs just one data server connected to the Internet. A client (e.g., a computer user subscribing to a cloud storage service) sends copies of files over the Internet to the data server, which then records the information. When the client wishes to retrieve the information, he or she accesses the data server through a Web-based interface. The server then either sends the files back to the client or allows the client to access and manipulate the files on the server itself.
Cloud storage systems generally rely on hundreds of data servers. Because computers occasionally require maintenance or repair, it’s important to store the same information on multiple machines. This is called redundancy. Without redundancy, a cloud storage system couldn’t ensure clients that they could access their information at any given time. Most systems store the same data on servers that use different power supplies. That way, clients can access their data even if one power supply fails.
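The redundancy idea can be sketched in a few lines. This toy model (all class and method names are invented for illustration, not any real provider's API) writes every object to several in-memory "servers" and reads back from the first healthy one:

```python
class RedundantStore:
    """Toy model of redundancy: every object is written to several 'servers'."""
    def __init__(self, n_servers=3):
        self.servers = [dict() for _ in range(n_servers)]  # each dict = one machine
        self.up = [True] * n_servers                       # power-supply status

    def put(self, key, blob):
        for server in self.servers:        # redundancy: same data on every server
            server[key] = blob

    def get(self, key):
        for server, alive in zip(self.servers, self.up):
            if alive and key in server:    # first healthy replica answers
                return server[key]
        raise KeyError(key)

store = RedundantStore()
store.put("vacation.jpg", "...binary data...")
store.up[0] = False                        # one machine loses power...
print(store.get("vacation.jpg"))           # ...the client still gets the data back
```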


The two main factors affecting cloud storage are reliability and security. Clients naturally worry about the privacy of data stored over the Internet, and about whether their data could be lost.
To secure data, most systems use a combination of techniques, including:
Encryption: complex algorithms are used to encode the information. To decode the encrypted files, the client needs a unique encryption key, which prevents attackers from accessing the data without the password.

Authentication: a process that requires the client to create a user name and password.

Authorization: the client lists the people who are authorized to access information stored on the cloud system.
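A minimal sketch of the authentication step, using only Python's standard library (no real cloud service's API is shown here): the server stores a salted, slow hash of the password, never the password itself, so a leaked database does not immediately reveal user credentials.

```python
import hashlib
import hmac
import os

def register(password):
    """Store only a salted, slow hash of the password (PBKDF2 with SHA-256)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password, salt, digest):
    """Re-derive the hash from the login attempt and compare in constant time."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)

salt, digest = register("correct horse")
print(authenticate("correct horse", salt, digest))   # -> True
print(authenticate("wrong guess", salt, digest))     # -> False
```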

These preventive measures help secure the client’s information from hackers, and cloud storage companies stake their reputations on doing so.
Some examples of cloud storage are:

  • Web e-mail providers like Gmail, Hotmail and Yahoo Mail store e-mail messages on their own servers. Users can access their e-mail from computers and other devices connected to the Internet.
  • Sites like Flickr and Picasa host millions of digital photographs. Their users create online photo albums by uploading pictures directly to the services’ servers.
  • YouTube hosts millions of user-uploaded video files.

Nowadays many websites offer this cloud storage facility. There are also sites like Ninefold, where we can store up to 10 GB of data, with more space available for premium users.

Posted by

Gopi chand ( MGIT ECE 3rd year)


Solar Power Without Solar Cells

A dramatic and surprising magnetic effect of light discovered by University of Michigan researchers could lead to solar power without traditional semiconductor-based solar cells.

The researchers found a way to make an “optical battery,” said Stephen Rand, a professor in the departments of Electrical Engineering and Computer Science, Physics and Applied Physics.

In the process, they overturned a century-old tenet of physics.

“You could stare at the equations of motion all day and you will not see this possibility. We’ve all been taught that this doesn’t happen,” said Rand, an author of a paper on the work published in the Journal of Applied Physics. “It’s a very odd interaction. That’s why it’s been overlooked for more than 100 years.”

Light has electric and magnetic components. Until now, scientists thought the effects of the magnetic field were so weak that they could be ignored. What Rand and his colleagues found is that at the right intensity, when light is traveling through a material that does not conduct electricity, the light field can generate magnetic effects that are 100 million times stronger than previously expected. Under these circumstances, the magnetic effects develop strength equivalent to a strong electric effect.

“This could lead to a new kind of solar cell without semiconductors and without absorption to produce charge separation,” Rand said. “In solar cells, the light goes into a material, gets absorbed and creates heat. Here, we expect to have a very low heat load. Instead of the light being absorbed, energy is stored in the magnetic moment. Intense magnetization can be induced by intense light and then it is ultimately capable of providing a capacitive power source.”

What makes this possible is a previously undetected brand of “optical rectification,” says William Fisher, a doctoral student in applied physics. In traditional optical rectification, light’s electric field causes a charge separation, or a pulling apart of the positive and negative charges in a material. This sets up a voltage, similar to that in a battery. This electric effect had previously been detected only in crystalline materials that possessed a certain symmetry.

Rand and Fisher found that under the right circumstances and in other types of materials, the light’s magnetic field can also create optical rectification.

“It turns out that the magnetic field starts curving the electrons into a C-shape and they move forward a little each time,” Fisher said. “That C-shape of charge motion generates both an electric dipole and a magnetic dipole. If we can set up many of these in a row in a long fiber, we can make a huge voltage and by extracting that voltage, we can use it as a power source.”

The light must be shone through a material that does not conduct electricity, such as glass. And it must be focused to an intensity of 10 million watts per square centimeter. Sunlight isn’t this intense on its own, but new materials are being sought that would work at lower intensities, Fisher said.

“In our most recent paper, we show that incoherent light like sunlight is theoretically almost as effective in producing charge separation as laser light is,” Fisher said.

This new technique could make solar power cheaper, the researchers say. They predict that with improved materials they could achieve 10 percent efficiency in converting solar power to useable energy. That’s equivalent to today’s commercial-grade solar cells.

“To manufacture modern solar cells, you have to do extensive semiconductor processing,” Fisher said. “All we would need are lenses to focus the light and a fiber to guide it. Glass works for both. It’s already made in bulk, and it doesn’t require as much processing. Transparent ceramics might be even better.”

In experiments this summer, the researchers will work on harnessing this power with laser light, and then with sunlight.

courtesy: howstuffworks

Posted by

Mahesh ( MGIT ECE 3rd year)

By the end of 2011 there are going to be no flying cars, no plots of land built on Mars and none of the other sci-fi predictions. But there will be major advancements in smarter Internet devices and great leaps forward in emerging technologies. This year has already given us some fruits like the Motorola Xoom, Android Honeycomb and other cool stuff, and there is a lot more to come, like the iPad 2, the PlayStation Phone and so on.

So let’s talk about some of those new products which are going to hit the market and lead a revolution.


The tablets that got tech lovers addicted continue their legacy in 2011. They had a good breakthrough with the release of the Motorola Xoom and BlackBerry PlayBook, which may be just a preview of what tablets are capable of. By the end of the year there will be a few surprises from Apple’s iPad 2, Dell, Acer and many others. Google’s next-generation tablet OS, Android Honeycomb, is also one to watch out for. Microsoft might reveal a touch-optimized Windows OS for tablets by mid-2011, but the most anticipated will be the Palm OS on HP PalmPads.
The tablet war will be fierce with these new tablets, which might bring steep price drops into the tablet PC segment. At the same time, there will be huge improvements in tablet screens; probably more OLED adoption. Intel might also launch its Oak Trail processor tailored for tablet PCs, and without doubt there will be improvements in processor speed, RAM, storage space and cameras.

[Images: the Motorola Xoom and BlackBerry PlayBook, respectively]

Broadband to speed up with 4G this year:

All of us want a faster connection, and 2011 will bring more bandwidth to our mobile world with 4G. It is believed to be 10 times faster than current 3G networks and to provide speeds equivalent to (or greater than) a standard wired landline connection. The launch of 4G around the world will vary greatly as telecoms seek to balance their 3G business with 4G, and as infrastructure is still being set up in many areas. Whether it’s 4G or not, mobile broadband will definitely speed up as more manufacturers launch cheaper 3G devices and more people switch to 3G.

Motion control gaming:

65% of US households play video games, and gaming will grow rapidly in 2011 across all devices: smart phones, tablets, laptops, desktops and gaming consoles. More mobile devices will come equipped with gravity sensors and motion-sensitive controls.
Nintendo’s Wii Plus, Sony’s PlayStation Move and Microsoft’s Xbox 360 Kinect will become more widespread in 2011. Gaming consoles will start tracking your movements through cameras and following voice commands through microphones (examples include the Xbox Controller and PlayStation Motion Controller).
Social gaming was on the rise in 2010, and it will soon get more popular on mobile phones and tablets. Smart phones and tablet PCs will primarily connect your social life in 2011, but with better graphics, more power and more proficient augmented reality apps.

Google in the PC:

Google entered our lives in 1998 and might revisit the PC with Google Chrome OS in 2011. Chrome OS is currently in its public beta-testing phase on 12-inch Cr-48 notebooks, but might make it into commercial netbooks by mid-2011. Acer, Samsung and other big players are planning to launch netbooks powered by the lighter and faster Chrome OS. The highly secure Chrome netbooks will run on 3G connectivity and could bring netbook prices down to USD 100 or less. Watch out, Microsoft: Windows has a lot to fear from Chrome OS in 2011.

Near field communication:

Near Field Communication, or NFC, technology will turn your phone into a digital wallet. Although the Google Nexus S became the first smart phone to embrace NFC in 2010, NFC won’t be mainstream until later in 2011. Google, Apple, LG and loads of other manufacturers have lined up to get their gadgets packed with NFC, a high-frequency wireless communications technology that enables the exchange of data between devices at a distance of 10 cm or less. So you can simply flash your NFC-enabled smart phone at a retail outlet instead of your credit card.

Nokia plans to bring NFC to all its smart phones. Google's Android 2.3 already supports NFC, and LG will roll out various NFC-enabled gadgets this year as well. Apple is also taking NFC seriously: 57% of iPhone users have shown interest in NFC, and the iPhone 5 is rumored to feature it.

Location-based networking and augmented reality:

Twitter, Facebook, FourSquare, Gowalla, Yelp: everyone wants to share your location and publish recommendations for friends, followers and even strangers. Nokia has inked a deal with China's leading web portals Sina and Tencent to enable Chinese social users to tag locations in their updates. FourSquare currently has over 5 million users, 25,000 new users daily and 2 million check-ins. Analysts expect that to quadruple in 2011, so it's only going to get bigger.

More Surprises:

Apple surprised everyone last year with the iPad. What will make us go wow in 2011? The Sony PlayStation phone tops my list, but there are other contenders, like the BlackBerry Presenter phone for showing PowerPoint presentations on projectors and monitors over Bluetooth. The Sony Vaio 3D, with a button to switch to 3D mode for playing 3D games and watching 3D movies with glasses, puts another Sony product near the top of my must-play-with list. Then there's the MintPass dual-screen, dual-boot tablet with Windows and Android, the iPhone 5, PalmPads and Verizon LTE phones. HTC is experimenting with E-Ink displays, and so is Amazon; there might be radical e-readers in the pipeline for 2011. Voice control will improve, and more gadgets, from gaming consoles to televisions, will start recognizing your voice. Better Internet connections and improved FaceTime and Skype apps will improve video calls in 2011.
The Shanzhai will continue to draw inspiration from all of these advancements, striving to offer devices at unique local price points and often improving on brand-name designs in the process. What's potentially very compelling is the possibility of more Shanzhai companies developing into brands themselves. As the market for smartphones, tablets and other devices keeps expanding, so does demand, and with it the room for new players to make an impact.

Let’s watch history in the making and be a part of it…

Courtesy: various websites and anonymous views

Posted by

Gopi chand ( MGIT ECE 3rd year)

Imagine you cannot find or use your phone when it starts ringing. Perhaps it’s fallen into the depths of your sofa, or your hands are wet from washing up or greasy from baking. To decline the call and send it to your messaging service, you press the area of your palm corresponding to where the relevant button would be if you had the phone in your hand. Or you could press the buttons to answer the call and turn on the speakerphone.

Sean Gustafson unlocks his iPhone by swishing a finger across its screen and pecking out a four-digit PIN on its keypad. It’s unremarkable save for one small thing: there is no phone in his hand. Instead, he’s pressing invisible “buttons” on his palm – to operate an imaginary cellphone. And, astonishingly, it works.

The idea is certainly strange, but Gustafson and his colleagues at Potsdam University in Germany think there is a gap in the market for phones and TV remotes like this that don’t actually exist.

For it to work, they reason, you’d need two things: people who know precisely where the apps are on their physical phone, and a technology that can sense where they are pressing on their hand so a computer can respond and send commands to the phone – wherever it is.

From iPhone to iPalm

To find out how well people know their modern touchscreen phones, the Potsdam trio recruited 12 volunteers from among the iPhone users they spotted in their cafeteria and tested how well they knew the positions of their favoured apps without the phone in front of them. “We found 68 per cent of iPhone users can locate the majority of their home screen apps on their hand. This means that iPhone use inadvertently prepares users for the use of an imaginary version,” says team leader Patrick Baudisch.

Having established a reasonable chance of successfully finding an app’s position on someone’s palm, they then decided to use “depth cameras” – similar to those at the heart of Microsoft’s Kinect motion-sensing gaming system – to detect where someone is pressing on their palm.

The depth camera they used in their tests is a “time-of-flight” device that flashes an invisible infrared pattern onto the scene and uses ultrafast receiver circuitry to time how long the light bathing different parts of the scene takes to return to a sensor. That way, it knows how far every object in the scene is from the camera, so when a user’s finger presses on their palm, it registers where and when it does so. The signal is sent to a computer, which processes it and then sends the relevant command to your cellphone.
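The "time-of-flight" principle is simple arithmetic: the camera knows the speed of light, so half the measured round-trip time gives the distance to each surface. A minimal sketch (the timing value is illustrative, not taken from the actual camera's firmware):

```python
# Time-of-flight ranging: distance = speed of light * round-trip time / 2.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance in metres to the surface that reflected the light pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A surface about half a metre away returns light in roughly 3.3 nanoseconds,
# which shows why the receiver circuitry has to be ultrafast.
print(tof_distance(3.336e-9))  # about 0.5 m
```

The nanosecond-scale round trips are the reason this needs special hardware rather than an ordinary camera sensor.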

Going ‘all-imaginary’

In their tests, the depth camera was a clunky head mounted device. “But ultimately, we envision the camera becoming so small that it integrates into clothing, such as the button of a shirt, a brooch, or a pendant. So people would not even notice if someone carries an imaginary phone,” Baudisch told New Scientist.

“We envision that users will initially use imaginary phones as a shortcut to operate the physical phones in their pockets. As users get more experienced, it might even become possible to leave the device at home and spend the day ‘all-imaginary’.”

Answering calls on the phone would still require the physical device – but it would be possible to access apps and forward calls to voicemail with the imaginary version.

It’s not all about phones, however: Gustafson is now working out how a TV remote control could be replaced by an imaginary zapper. The team hope to present their work at a conference on user interfaces later this year.

Posted by

C.S. Sumanth ( MGIT ECE 1st year)




Most people have or are familiar with the “Thermos” (also known as a vacuum flask or a Dewar). We remember as kids having one that came with our lunch box. One day we might put grape juice in it and at lunch we would have nice, cold grape juice. The next day we would put hot soup in it and we would have hot soup for lunch. And we can remember asking, “How does it know whether to keep stuff hot or cold?” Where’s the switch, in other words…

Or, similarly, “You heat things up in an oven and cool them down in a refrigerator — how come this thing can do both?” The answer to both questions lies in the phenomenon of HEAT TRANSFER.



 Let’s say you take a glass of ice water or a bowl of hot soup and let them sit out on the kitchen table. You know what will happen: The bowl of soup will cool down to room temperature, and the glass of ice water will warm up to room temperature. This is a thermodynamic fact of life. If you put any two objects with different temperatures together, then heat transfer will cause them to reach the same temperature. So a “room” and a “hot bowl of soup” reach the same temperature by the heat transfer process… the room gets slightly warmer and the bowl of soup gets a lot colder.
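That drift toward room temperature can be sketched with Newton's law of cooling, a standard simplification in which the rate of temperature change is proportional to the difference from the surroundings. The cooling constant below is made up purely for illustration:

```python
# Newton's law of cooling: temperature drifts exponentially toward the
# surroundings, so hot soup cools down and ice water warms up.
import math

def temperature(t_initial, t_room, k, minutes):
    """Temperature after `minutes`, with cooling constant k (per minute)."""
    return t_room + (t_initial - t_room) * math.exp(-k * minutes)

ROOM = 22.0  # degrees C
K = 0.05     # illustrative cooling constant, not a measured value

for label, start in [("hot soup", 80.0), ("ice water", 2.0)]:
    after_hour = temperature(start, ROOM, K, 60)
    print(f"{label}: {start:.0f}C -> {after_hour:.1f}C after an hour")
```

Both curves approach 22 °C from opposite sides, which is exactly the "thermodynamic fact of life" described above.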

If you want to keep a bowl of soup hot as long as possible (that is, if you want to slow down the natural heat transfer process as much as you can), you have to slow down the three processes that cause heat transfer. The processes are:

  • Conduction – Let’s start with a simple question: What is heat? Heat is atomic motion. An atom expresses its “heat” as its speed. At absolute zero, there is no atomic motion, but as atoms get warmer they move faster. Heat is transferred when one atom runs into another: a little like billiard balls colliding, the second atom picks up some of the motion of the first. Heat is transferred by these collisions.

The best example of this phenomenon would be to take a metal bar and heat one end of it. The other end will get warm and then hot through conduction. When you put a metal pan on the stove, the inside of the pan gets hot through conduction of the heat through the metal in the bottom of the pan. Some materials (namely metals) are better heat conductors than others (for example, plastics).

  • Convection – Convection is a property of liquids and gases. It occurs because when a liquid or gas gets hot, it tends to rise above the rest of the body of liquid or gas. So, if you have a hot bowl of soup on the table, it heats a layer of air surrounding the bowl. That layer then rises because it is hotter than the surrounding air. Cold air fills in the space left by the rising hot air. This new cold air then heats up and rises, and the cycle repeats. It is possible to speed up convection — that is why you blow on hot soup to cool it down. If it weren’t for convection your soup would stay hot a lot longer, because it turns out that air is a pretty poor heat conductor.

  • Radiation – Warm objects also give off heat as infrared light. Unlike conduction and convection, radiation needs no atoms to carry it, so it can cross even a vacuum; it is why you can feel the warmth of a fire from a distance.

You can see all three of these heat transfer processes occurring when you stand next to a bonfire.

You probably need to stand at least 20 feet away from a big bonfire like this one. What keeps you away is infrared radiation from the fire. The flames and smoke are carried upward by convection: air around the fire heats up and rises. The ground 3 feet beneath the fire gets hot through conduction: the top layer of soil is heated directly by radiation, and the heat is then conducted through layers of dirt deep into the ground.

To build a good thermos, what you want to do is reduce these three heat transfer phenomena as much as possible.


One way to build a thermos-like container would be to take a jar and wrap it in, for example, foam insulation. Insulation works by two principles. First, the plastic in the foam is not a very good heat conductor. Second, the air trapped in the foam is an even worse heat conductor. So conduction has been reduced. Because the air is broken into tiny bubbles, the other thing foam insulation does is largely eliminate convection inside the foam. Heat transfer through foam is therefore pretty small.
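The conduction part can be put in numbers with Fourier's law for a flat wall, Q = k·A·ΔT/d, where k is the material's thermal conductivity. A sketch with rough textbook conductivity values (not measurements of any particular container) shows why foam is such a good insulator compared with a metal wall:

```python
def conduction_watts(k, area_m2, delta_t, thickness_m):
    """Steady-state heat flow through a flat wall (Fourier's law)."""
    return k * area_m2 * delta_t / thickness_m

# Rough thermal conductivities in W/(m*K) -- approximate textbook values.
FOAM = 0.03
ALUMINIUM = 205.0

# Same geometry and temperature difference for both materials.
area, dT, thickness = 0.05, 60.0, 0.01  # m^2, kelvin, metres

print(conduction_watts(FOAM, area, dT, thickness))       # ~9 W
print(conduction_watts(ALUMINIUM, area, dT, thickness))  # ~61,500 W
```

Several thousand times less heat leaks through the foam wall than through an aluminium one of the same shape, and a vacuum does better still.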

It turns out that there is an even better insulator than foam: a vacuum. A vacuum is a lack of atoms. A “perfect vacuum” contains zero atoms. It is nearly impossible to create a perfect vacuum, but you can get close. Without atoms you eliminate conduction and convection completely.

What you find in a thermos is a double-walled glass envelope with a vacuum between the walls. The glass envelope is fragile, so it is encased in a plastic or metal case. In many thermoses you can actually unscrew and remove this glass envelope.

A thermos then goes one step further. The glass is silvered (like a mirror) to reduce infrared radiation. The combination of a vacuum and the silvering greatly reduces heat transfer by convection, conduction and radiation.

So why do hot things in a thermos ever cool down? You can see in the figure two paths for heat transfer. The big one is the cap. The other one is the glass, which provides a conduction path at the top of the flask where the inner and outer walls meet. Although heat transfer through these paths is small, it is not zero.

Does the thermos know whether the fluid inside it is hot or cold? No. All the thermos is doing is limiting heat transfer through the walls of the thermos. That lets the fluid inside the thermos keep its temperature nearly constant for a long period of time (whether the temperature is hot or cold).


Posted by

RaviTeja ( MGIT ECE 3rd year)


Most of today's solar devices are only about 20% efficient… this could become a major problem, because the non-renewable energy sources are depleting day by day… hence there is a need to increase the efficiency of solar power….

This can be done using a nanotechnology concept… this article helps you understand how more than 90% efficiency can be obtained, and how these devices could reach the market in around 5 years….

Efficiency is a problem with today’s solar panels; they only collect about 20 percent of available light. Now, a University of Missouri engineer has developed a flexible solar sheet that captures more than 90 percent of available light, and he plans to make prototypes available to consumers within the next five years.

Patrick Pinhero, an associate professor in the MU Chemical Engineering Department, says energy generated using traditional photovoltaic (PV) methods of solar collection is inefficient and neglects much of the available solar electromagnetic (sunlight) spectrum. The device his team has developed — essentially a thin, moldable sheet of small antennas called nantenna — can harvest the heat from industrial processes and convert it into usable electricity. Their ambition is to extend this concept to a direct solar facing nantenna device capable of collecting solar irradiation in the near infrared and optical regions of the solar spectrum.

Working with his former team at the Idaho National Laboratory and Garrett Moddel, an electrical engineering professor at the University of Colorado, Pinhero and his team have now developed a way to extract electricity from the collected heat and sunlight using special high-speed electrical circuitry. This team also partners with Dennis Slafer of MicroContinuum, Inc., of Cambridge, Mass., to immediately port laboratory bench-scale technologies into manufacturable devices that can be inexpensively mass-produced.

“Our overall goal is to collect and utilize as much solar energy as is theoretically possible and bring it to the commercial market in an inexpensive package that is accessible to everyone,” Pinhero said. “If successful, this product will put us orders of magnitude ahead of the current solar energy technologies we have available to us today.”

As part of a rollout plan, the team is securing funding from the U.S. Department of Energy and private investors. The second phase features an energy-harvesting device for existing industrial infrastructure, including heat-process factories and solar farms.

Within five years, the research team believes they will have a product that complements conventional PV solar panels. Because it’s a flexible film, Pinhero believes it could be incorporated into roof shingle products, or be custom-made to power vehicles.

Once the funding is secure, Pinhero envisions several commercial product spin-offs, including infrared (IR) detection. These include improved contraband-identifying products for airports and the military, optical computing, and infrared line-of-sight telecommunications.

Posted by

Mahesh ( MGIT ECE 3rd year)



The world has changed a lot in the past decade. We used to have really big analog cameras that look like giants next to today's digital cameras. We didn't have most of the applications we now take for granted on a cell phone, and the latest hit among them is the smart phone. Digital technology has shrunk the TV dramatically, leading to LED screens. We had never used our computers this much for talking with friends and sharing our photos, videos and all that stuff (yes, it's about social networking). And another big hit is cloud computing.

Now let's take a look at these common terms we hear frequently, because according to some great engineers and scientists across the world, these are some of the greatest hits of the decade.


Digital photography is a form of photography that uses an array of light-sensitive sensors to capture the image focused by the lens, as opposed to an exposure on light-sensitive film. The captured image is then stored as a digital file ready for viewing, printing, cropping or color correcting. Before this technology, photographs were developed in darkrooms by soaking prints in chemical baths and drying them, a time-consuming process. With digital photography, images can be displayed, printed, stored and manipulated using digital and computer techniques, without chemical processing.

The Presidential Portrait of Barack Obama was the first official U.S. Presidential Portrait to be taken with a digital camera.

Nearly all digital cameras use removable solid state flash memory.

Pixel count (typically listed in megapixels, or millions of pixels) is the most heavily marketed figure of merit, but the processing system inside the camera that turns the raw data into a color-balanced, pleasing photograph is usually more important, which is why some 4+ megapixel cameras outperform cameras with far higher pixel counts.

Pixel counts:

The number of pixels n for a given maximum resolution (w horizontal pixels by h vertical pixels) is the product n = w × h. The pixel count quoted by manufacturers can be misleading as it may not be the number of full-color pixels. For cameras using single-chip image sensors the number claimed is the total number of single-color-sensitive photo sensors, though they have different locations in the plane.
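The formula itself is a one-liner. The sketch below, with an assumed 3264 × 2448 sensor, also shows why a camera marketed as "8 megapixels" actually delivers slightly fewer than 8 million pixels:

```python
def megapixels(width: int, height: int) -> float:
    """Total pixel count n = w * h, expressed in megapixels."""
    return width * height / 1_000_000

# A 3264 x 2448 sensor is what marketing calls an "8-megapixel" camera.
print(megapixels(3264, 2448))  # 7.990272
```

And, as the paragraph above notes, even this number may count single-color photo sensors rather than full-color pixels.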

Dynamic range:

Practical imaging systems, digital and film, have a limited “dynamic range”: the range of luminosity that can be reproduced accurately. Highlights of the subject that are too bright are rendered as white, with no detail; shadows that are too dark are rendered as black. With film, and in the dark shadows of digital sensors, the loss of detail is not abrupt: some detail is retained as brightness moves out of the dynamic range.
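Clipping at the ends of the dynamic range is easy to picture numerically: once a scene luminance maps beyond what the sensor can record, every such pixel is stored as the same white or black, and the detail between them is lost. A toy sketch with 8-bit values (the scene numbers are made up):

```python
def record(luminance_values, max_level=255, min_level=0):
    """Clamp scene luminances to the sensor's recordable 8-bit range."""
    return [min(max(v, min_level), max_level) for v in luminance_values]

# Two bright highlights that differ in the scene...
scene = [120, 250, 300, 400]
print(record(scene))  # [120, 250, 255, 255]
```

The scene values 300 and 400 both come out as 255: two distinct highlights rendered as identical featureless white.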


A smart phone is a mobile phone that offers more advanced computing ability and connectivity than a contemporary basic phone. Smart phones run complete operating system software, providing a platform for application developers. Thus, they combine the functions of a camera phone and a personal digital assistant (PDA).

The first smart phone was the IBM Simon, which was designed and released in the early 1990s.

The Nokia Communicator line was the first of Nokia’s smart phones starting with the Nokia 9000, released in 1996.

The latest and leading smart phone is the iPhone.

It was the first mobile phone to use a multi-touch interface, and it featured a web browser that Ars Technica then described as “far superior” to anything offered by its competitors. A process called “jailbreaking” emerged quickly to provide unofficial third-party applications.

Now let’s talk about the biggest invention of the time… Android operating system…

The Android operating system for smart phones was released in 2008. Android is an open source platform backed by Google, along with major hardware and software developers (such as Intel, HTC, ARM, Motorola and Samsung, to name a few), that form the Open Handset Alliance. The first phone to use Android was the HTC Dream. The software suite included on the phone consists of integration with Google’s proprietary applications, such as Maps, Calendar, and Gmail, and a full HTML web browser.

Other operating systems sharing the market include Symbian OS, Samsung’s Bada, Palm’s webOS and Windows Phone…


An LED display is a video display that uses light-emitting diodes. An LED panel is a small display, or a component of a larger display. They are typically used outdoors in store signs and billboards, and in recent years have also become common in destination signs on public transport vehicles, and even as part of transparent glass areas. LED panels are sometimes used as a form of lighting (general illumination, task lighting, or even stage lighting) rather than as displays.

There are two types of LED panels: conventional (using discrete LEDs) and surface-mounted device (SMD) panels. Most outdoor screens and some indoor screens are built around discrete LEDs, also known as individually mounted LEDs.

Flat Panel LED Television Display

Possibly the first true all-LED flat-panel television screen was developed, demonstrated and documented by J. P. Mitchell in 1977. The modular, scalable display was initially designed with hundreds of MV50 LEDs and a newly available TTL memory-addressing circuit from National Semiconductor. The ¼-inch-thick flat-panel prototype and the accompanying scientific paper were displayed at the 29th ISEF expo, sponsored by the Society for Science and the Public, in Washington, D.C., in May 1978. The technical display received awards and recognition, including awards from NASA and General Motors Corporation and recognition from university faculty and the IEEE. The monochromatic LED prototype remains operational. The LED paper also cited an LCD (liquid crystal display) matrix design as an alternative x-y scan technology and a possible future television display method. The replacement of the 70-year-old high-voltage analog system (cathode-ray tube technology) with a digital x-y scan system has been significant: displacing the electromagnetic scan system removed the inductive deflection, electron beam and color convergence circuits, and has helped the modern television “collapse” into its current thin form factor.

In 1978, Mitchell also submitted his paper to the Westinghouse Science Talent Search contest, where he received an Honorable Mention.

Mitchell also presented his paper at the 90th Session of The Iowa Academy of Science April 21–22, 1978, at the University of Northern Iowa, Cedar Falls, Iowa.

The 1977 model was monochromatic by design. Efficient blue LEDs did not arrive for another decade. Large displays now use high-brightness diodes to generate a wide spectrum color palette. It took three decades and organic electroluminescent materials for Sony to introduce an LED TV: the Sony XEL-1.

Mobile LED display


A social network is a social structure made up of individuals (or organizations) called “nodes”, which are tied (connected) by one or more specific types of interdependency, such as friendship, kinship, common interest, financial exchange, dislike, or relationships of beliefs, knowledge or prestige.

Social network analysis views social relationships in terms of network theory, consisting of nodes and ties (also called edges, links, or connections). Nodes are the individual actors within the networks, and ties are the relationships between the actors. The resulting graph-based structures are often very complex. There can be many kinds of ties between the nodes. Research in a number of academic fields has shown that social networks operate on many levels, from families up to the level of nations, and play a critical role in determining the way problems are solved, organizations are run, and the degree to which individuals succeed in achieving their goals.

In its simplest form, a social network is a map of specified ties, such as friendship, between the nodes being studied. The nodes to which an individual is thus connected are the social contacts of that individual. The network can also be used to measure social capital – the value that an individual gets from the social network. These concepts are often displayed in a social network diagram, where nodes are the points and ties are the lines.
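The node-and-tie description maps directly onto a graph data structure. A small sketch with made-up people, building an adjacency list and computing each node's degree (its number of social contacts):

```python
# A social network as an adjacency list: nodes are people, ties are edges.
ties = [
    ("Alice", "Bob"), ("Alice", "Carol"),
    ("Bob", "Carol"), ("Carol", "Dave"),
]

network = {}
for a, b in ties:
    network.setdefault(a, set()).add(b)
    network.setdefault(b, set()).add(a)  # friendship ties are symmetric

# A node's degree is the size of its set of social contacts.
for person, contacts in sorted(network.items()):
    print(person, len(contacts))
```

In this made-up network Carol has the most ties (3), making her the best-connected node: exactly the kind of question social network analysis asks at much larger scale.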

With nearly everyone in the world now using the Internet, these social networking sites have been a tremendous hit in the last decade.

If we consider all Facebook users as the population of a country, that country would be the third biggest in the world, behind only China and India. This gives a sense of the effect of the Internet and its social networking sites on the people of the decade.


Cloud computing refers to the provision of computational resources on demand via a computer network. Users or clients can submit a task, such as word processing, to the service provider, such as Google, without actually possessing the software or hardware. The consumer’s computer may contain very little software or data (perhaps a minimal operating system and web browser only), serving as little more than a display terminal connected to the Internet. Since the cloud is the underlying delivery mechanism, cloud based applications and services may support any type of software application or service in use today.

In the past, both data and software had to be stored and processed on or near the computer. The development of local area networks allowed multiple CPUs and storage devices to be organized to increase the performance of the entire system. Extending that concept, cloud computing allows a functional separation between the resources used and the user’s computer, with the resources usually residing outside the local network, for example in a remote datacenter. Consumers now routinely use data-intensive applications driven by cloud technology that were previously unavailable due to cost and deployment complexity. In many companies, employees and departments are bringing a flood of consumer technology into the workplace, and this raises legal compliance and security concerns for the corporation.

The term “software as a service” is sometimes used to describe programs offered through “The Cloud”.

Common shorthand for a provided cloud computing service (or even an aggregation of all existing cloud services) is “The Cloud”.

An analogy for cloud computing is that of public utilities such as electricity, gas, and water. Centralized and standardized utilities freed individuals from the difficulties of generating electricity or pumping water; all of the development and maintenance tasks involved were taken care of by the utility. With cloud computing, this translates into reduced software distribution costs for providers who still use hard media such as DVDs. The consumer benefits are that software no longer has to be installed and is automatically updated, though savings in terms of dollars are yet to be seen.

The principle behind the cloud is that any computer connected to the Internet is connected to the same pool of computing power, applications, and files. Users can store and access personal files such as music, pictures, videos, and bookmarks or play games or do word processing on a remote server rather than physically carrying around a storage medium such as a DVD or thumb drive. Even those who use web-based email such as Gmail, Hotmail, Yahoo, a company owned email or even an e-mail client program such as Outlook, Evolution, Mozilla Thunderbird or Entourage are making use of cloud email servers. Hence, desktop applications which connect to cloud email can also be considered cloud applications.

(Figure: a sample cloud computing architecture.)

Those are the top five hits of the previous decade.

In the next article we will look at some more recent hits…

Source: self ideas and the Internet.

Posted by

Gopi chand ( MGIT ECE 3rd year)