How augmented reality can turn traditional switches into far more interesting domotic systems.
Last summer, a research group of MIT scientists debuted a new video amplification algorithm that exaggerates slight changes in movement or color, like a magnifying glass for moving images. Since then, they've made the open-source code available and started allowing anyone to upload videos and see the effect for themselves. The New York Times got inside the lab to see what the project is doing in this video.
Read their full paper, including the source code, here.
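For readers who want the gist without opening the paper: the core of the effect is a temporal band-pass filter applied to every pixel, with the filtered variation amplified and added back. Here's a minimal Python sketch of that idea, assuming the video is already loaded as a (frames, height, width) array with intensities in [0, 1]; the function name, frequency band and amplification factor are illustrative choices, not the paper's exact parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_color(frames, fps, low_hz=0.8, high_hz=1.2, alpha=50.0):
    """Exaggerate subtle temporal color changes (e.g. a pulse) in a video.

    frames: float array of shape (T, H, W), intensities in [0, 1]
    fps:    frames per second of the source video
    low_hz, high_hz: temporal band to amplify (set here around a heartbeat)
    alpha:  amplification factor
    """
    # Band-pass filter every pixel's intensity over time.
    b, a = butter(2, [low_hz, high_hz], btype="band", fs=fps)
    variation = filtfilt(b, a, frames, axis=0)
    # Add the exaggerated variation back onto the original frames.
    return np.clip(frames + alpha * variation, 0.0, 1.0)
```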
Computers will come to understand human behavior better than we do ourselves. They'll sense every small motion, every small change in our behavior, and apply the lessons of physiognomy. They will be the best possible communication partners. Applied as brand agents, virtual characters representing brands with these motion-magnifying capabilities will revolutionize the world of marketing communication.
Most gesture-control systems require some kind of external sensor that "sees" you, with optical sensors, depth sensors or cameras. They're on the outside, measuring your movements the same way human eyes do. That's fine, but a new wristband advertises itself as a more internal system: it's directly controlled by you.
You make gestures similar to the ones you'd use on an Apple trackpad, except in the air: you'd wave a couple fingers to rewind or pause a video, scroll through pages, that kind of thing. It's compatible with Windows and Mac OS X to start, but since it connects via Bluetooth, it could conceivably connect to just about any mobile device as well: smartphones, tablets, or even drones.
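To make the interaction concrete, here's a toy sketch of the kind of gesture-to-command mapping such a wristband could expose to an application. The gesture names and the dispatch interface are invented for illustration, not the vendor's actual API.

```python
# Hypothetical gesture names mapped to media commands; none of these
# identifiers come from the actual product.
GESTURE_ACTIONS = {
    "two_finger_wave_left": "rewind",
    "two_finger_wave_right": "fast_forward",
    "palm_push": "pause",
    "finger_scroll": "scroll_page",
}

def handle_gesture(gesture: str) -> str:
    """Translate a recognized in-air gesture into a media command."""
    # Unrecognized gestures are simply ignored.
    return GESTURE_ACTIONS.get(gesture, "ignored")

print(handle_gesture("palm_push"))  # -> "pause"
```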
Now computers are getting additional input from humans: gestures in the air instead of mouse and keyboard. The next step will be the recognition of the gestures we already use in our day-to-day lives, so we can hold conversations with chatbots the way we would with other human beings. That will feel truly natural.
Researchers have created a lithium-ion battery that keeps on working when stretched to four times its initial length--and bounces back into shape once you let go.
In the future, stretchy batteries such as these could help power solar-energy generating clothes, tattoos that monitor your vital signs, robot skin that's sensitive to touch and other futuristic, flexible devices.
And what about making roofs out of these batteries? Any man-made object could carry batteries, displays and cameras. Your whole house could be a display too, camouflaging itself in the rainforest while charging itself in the sun. Or an aircraft, constantly recharging itself at altitude. Or robotic humanoid skin, powering all its transistors and sensors.
Touchscreens treat all fleshy finger pads alike: Most detect a simple change in electrical current or in sound or light waves regardless of who is swiping. Researchers at Disney Research, Pittsburgh, have built a touchscreen that can discriminate between users. Every person’s body has its own bone density, muscle mass, blood volume, and water content. The device, called Touché, sends a series of harmless currents through a user’s body. Physiological differences produce differences in the body’s impedance of that current. Touché measures this unique capacitive signature. Scientists could apply capacitive fingerprinting to any touchscreen, or to other ubiquitous objects, such as doorknobs and furniture, turning the world into an interactive device. Touché is still in development, and plans for commercialization, alas, are top secret.
All brands will be able to recognize consumers and continue the dialog where it left off last time. That begins with authentication technology, and this is a perfect step in that direction! Today we still have to touch a screen; soon we'll be recognized through the unique field around our bodies.
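As a sketch of the idea behind capacitive fingerprinting: each enrolled user is stored as a response profile across the swept frequencies, and a new measurement is matched to the nearest stored profile. Everything below (the profiles, the names, the four-point sweep) is synthetic illustration, not Disney's implementation.

```python
import numpy as np

def identify_user(sweep, known_profiles):
    """Return the enrolled user whose stored frequency sweep is closest.

    sweep:          measured response amplitudes at N probe frequencies
    known_profiles: dict mapping user name -> stored N-point sweep
    """
    sweep = np.asarray(sweep, dtype=float)
    # Euclidean distance between the measurement and each stored signature.
    distances = {
        name: np.linalg.norm(sweep - np.asarray(profile, dtype=float))
        for name, profile in known_profiles.items()
    }
    return min(distances, key=distances.get)

# Two enrolled users with distinct (made-up) capacitive signatures.
profiles = {"alice": [0.9, 0.7, 0.4, 0.2], "bob": [0.5, 0.6, 0.7, 0.8]}
print(identify_user([0.88, 0.72, 0.38, 0.25], profiles))  # -> "alice"
```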
We've all seen 3D sensors before: in the Microsoft Kinect, for instance. That sensor's design was licensed to Microsoft by PrimeSense.
At CES this year, Robert Scoble visited PrimeSense to get a look at its latest 3D sensor. What's big about it? First, it's small: small enough to fit into tablet PCs. Second, it's lower cost: it will sell for under $100. Third, it's more accurate and higher resolution than the one in the Kinect (so accurate it can tell how hard you are pressing on a surface).
Why is this world-changing? Because nothing tracks human behavior quite as well as a 3D sensor. Expect to see these start to appear everywhere: in cars, in games, in tablets and TVs, and more.
ALL screens will be 3D. In transparent mode we'll be able to look through them and notice, very naturally, that the perspective changes when we move our head or rotate the screen even slightly. In non-transparent mode we'll look onto a virtual 3D world that responds to those same head movements and rotations. And obviously there will be a mixed mode. That's the essence of the media-completion trend: the virtual world will be as natural as the real world.
This LED grid, created by students of the Delft University of Technology, contains 76,032 pixels. It shows animations of rotating flowers and other artistic expressions.
In the future, grids like these will have virtually unlimited detail. Today 76,032 pixels seems like a lot, but by then we'll easily have 76,032,000,000. The pixels will be so small that the human eye won't be able to distinguish them from each other.
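A rough back-of-the-envelope check of that last claim: the human eye resolves roughly one arcminute, so at a given viewing distance any pixel pitch below distance × tan(1 arcminute) blends together. The sketch below uses that approximation; the one-arcminute figure is a common rule of thumb, not a precise limit.

```python
import math

def max_invisible_pitch_mm(viewing_distance_m):
    """Largest pixel pitch (in mm) the eye cannot resolve at this distance."""
    one_arcminute = math.radians(1 / 60)  # ~0.00029 rad
    return viewing_distance_m * math.tan(one_arcminute) * 1000

print(round(max_invisible_pitch_mm(1.0), 2))  # ~0.29 mm at one meter
```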
Applications will be countless: conversations with other humans at a distance, or even interaction with virtual humans; prototyping, education, and brand interaction. Everything will change with technologies like these.
This video shows the future vision of Corning, a New York-based company specializing in specialty glass and ceramics. Very interesting! Below you'll also find 'the making of', with many more details and an explanation of what's possible today and what's not.
It will take a while before this technology is seen everywhere in our society, but it will be revolutionary, especially once we also interact with intelligent yet virtual humans.
But again, the biggest shift will be in developing countries. The poorest countries in the world will suddenly get access to these extremely reliable devices (no connectors, no keys, and thus insensitive to sand and perhaps even to water). It will change the level of education in the world. Forever.
Although this is a 2006 video, it definitely shows the future of TV.
All screens, including the BIG screens in our living rooms, will respond to EVERYTHING we do: where we point our fingers, how our whole body expresses an emotion, what we shout as we walk past various screens. All our input will be added up, modeled and interpreted by brands: virtual entities on the other side of the screen.
This example might be funny, but it is absolutely part of our future.
A revolutionary interactive 3DTV system is being created by researchers at De Montfort University Leicester (DMU) in England. The €4.2 million (approx £3.7 million) project aims to develop a television that can recognise where somebody is sitting in a room and what they wish to view and interact with on their television.
Researchers believe it is a step towards truly interactive 3D video games where gamers use their bodies to control the action without the need for a controller. It could be the next step for Microsoft's Project Natal.
The project, called HELIUM3D (high efficiency, laser-based, multi-user, multi-modal 3D display) is also exploring ways of allowing viewers who are watching the same television to each view a different channel at the same time and could even let them choose different viewing positions within the image.
For example, groups of people watching a football match in the same room could each pick the part of the stadium from which they would like to experience the action.
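As a toy model of that multi-viewer idea: the display tracks where each viewer sits and serves each position its own channel or viewpoint. The data structures and channel names below are invented for illustration; HELIUM3D's actual architecture is not described in this detail.

```python
# Invented example: two tracked viewers, each assigned their own feed.
viewers = [
    {"seat": "left",  "position_deg": -20, "channel": "stadium_north_stand"},
    {"seat": "right", "position_deg":  15, "channel": "stadium_south_stand"},
]

def channel_for_angle(angle_deg):
    """Serve the feed registered for the viewer nearest this viewing angle."""
    nearest = min(viewers, key=lambda v: abs(v["position_deg"] - angle_deg))
    return nearest["channel"]

print(channel_for_angle(-18))  # -> "stadium_north_stand"
```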
German researchers at the Fraunhofer Institute for Photonic Microsystems have embedded a head-mounted microdisplay into a pair of glasses—allowing the user to access and manipulate data with simple eye movements.
The [CMOS] chip measuring 19.3 by 17 millimeters is fitted on the prototype eyeglasses behind the hinge on the temple. From the temple the image on the microdisplay is projected onto the retina of the user so that it appears to be viewed from a distance of about one meter. The image has to outshine the ambient light to ensure that it can be seen clearly against changing and highly contrasting backgrounds. For this reason the research scientists use OLEDs, organic light-emitting diodes, to produce microdisplays of particularly high luminance.
Wearers could scroll through menus, shift elements and pull up new info by simply focusing on a particular area or moving their eyes in a specific way.
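One plausible way to turn "focusing on a particular area" into a selection is dwell-time detection: fire an event once the gaze has stayed inside a target region long enough. The sketch below assumes a stream of (x, y) gaze points at a fixed sample rate; the thresholds and the sample format are assumptions, not Fraunhofer's design.

```python
def detect_dwell(gaze_samples, region, dwell_s=0.8, sample_hz=60):
    """Return True when gaze stays inside a region long enough to 'select' it.

    gaze_samples: iterable of (x, y) gaze points sampled at sample_hz
    region:       (x_min, y_min, x_max, y_max) of the on-screen target
    """
    needed = int(dwell_s * sample_hz)  # consecutive samples required
    inside = 0
    for x, y in gaze_samples:
        if region[0] <= x <= region[2] and region[1] <= y <= region[3]:
            inside += 1
            if inside >= needed:
                return True  # the user dwelled on the target: select it
        else:
            inside = 0  # gaze left the region, so reset the counter
    return False
```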
The next generation of plug-in hybrid cars could recharge in minutes, thanks to a new type of battery.
Lithium ion cells are used in portable gadgets and the latest hybrid cars as they are light and can be repeatedly charged and discharged with little degradation. But as with all batteries, charging takes some time. That's because it involves detaching lithium ions from the cathode at one end of the battery and absorbing them at the anode; pulling the ions from the cathode is normally a slow process.
Now Byoungwoo Kang and Gerbrand Ceder at the Massachusetts Institute of Technology have revealed an experimental battery that charges about 100 times as fast as normal lithium ion batteries. Their battery contains a cathode made up of tiny balls of lithium iron phosphate, each just 50 nanometres across. The balls quickly release lithium ions as the battery charges, which travel across an electrolyte towards the anode. As the battery discharges, the lithium ions move back across the cell to be re-absorbed by the nanoballs.
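To put the claimed speed-up in perspective, a quick illustrative calculation: a pack that normally needs two hours to charge would, at 100 times the rate, finish in a little over a minute. The numbers are examples, not measurements from the paper.

```python
def fast_charge_time_s(normal_hours, speedup=100):
    """Charge time in seconds after applying the claimed speed-up factor."""
    return normal_hours * 3600 / speedup

print(fast_charge_time_s(2.0))  # -> 72.0 seconds
```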
Apple has filed a patent for biometric authentication (verifying identity after identification), including a hidden sensor behind the screen that would recognize the user's fingerprint when touched, and/or a front-facing camera for retinal recognition. The filing also suggests further possibilities, such as the device recognizing the user's voice, or even collecting DNA samples for recognition via genetic code.
Researchers from the universities of Edinburgh and Manchester have created a molecular machine that could be used to develop quantum computers for making "intricate calculations" far more quickly than current supercomputers. Essentially, the researchers relied on molecular-scale technology instead of silicon chips; more specifically, they achieved the so-called breakthrough by "combining tiny magnets with molecular machines that can shuttle between two locations without the use of external force." Not surprisingly, there's still more work to be done, with Professor David Leigh of Edinburgh University noting that "the major challenges we face now are to bring many of these qubits together to build a device that could perform calculations, and to discover how to communicate between them."
Contact: Erwin van Lun, +31 621 567 657 (GMT +1), firstname.lastname@example.org