
Sunday 16 October 2016

Solar-powered smart pole keeps commuters powered-up and surfing

07:34 Posted by Anonymous No comments

Smartphones have made it easier than ever to keep occupied while commuting, but all it takes is a dead battery to make for a tedious waiting game. A recently installed piece of street furniture in the Turkish city of Istanbul, however, lets commuters keep their devices charged with an extra jolt of juice while they surf the web.

The Mito was designed by Art Lebedev studio at the request of Verisun, a Turkish tech company that deals in smart city solutions, among other things. The two firms previously worked on a solar-powered smart pole back in 2013, but began work on a new design in September of last year.

There are eight USB charging ports mounted in the Mito, allowing for up to eight mobile devices to be charged at any one time. In addition, wireless internet access allows commuters to check their emails, read the news or browse social media while they wait.

Transport information is provided via a built-in 7-in outdoor LCD screen. This includes the station or stop and route name, the expected arrival time of the bus or tram and the current temperature. The system is powered by Android content management software.

In addition to these features, the Mito also has an eye-catching design, with graceful curves and patterned wood covering an internal metal frame. It has to be said that the Mito doesn't really fit in with the typical perfunctory vernacular of city street design, but it's a good-looking installation nonetheless.

The name Mito derives from the energy-generating mitochondrion found in cells and refers to the 240-W top-mounted solar panel that powers the unit. Verisun tells New Atlas that, in winter, the Mito can produce up to 600 Wh of electricity a day, which rises to 1,920 Wh in summer.


Both those figures are ample for the 360 Wh of energy that Verisun says the Mito needs to run each day, but should generation fall short at any given moment, there's also a 60-Ah battery from which power can be pulled.
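
Those numbers are easy to sanity-check. Here's a minimal sketch using the article's figures; the battery's nominal voltage is an assumption, since Verisun doesn't state it:

```python
# Quick check of the Mito's energy budget, using the figures quoted above.
daily_load_wh = 360      # energy the Mito needs per day
winter_gen_wh = 600      # daily solar generation in winter
summer_gen_wh = 1920     # daily solar generation in summer

for season, gen_wh in (("winter", winter_gen_wh), ("summer", summer_gen_wh)):
    print(f"{season}: {gen_wh} Wh generated, {gen_wh - daily_load_wh} Wh to spare")

# Reserve if the panel produced nothing at all. The 12 V nominal battery
# voltage is an assumption; only the 60 Ah capacity is quoted.
battery_wh = 60 * 12
print(f"battery alone: ~{battery_wh / daily_load_wh:.0f} days of operation")
```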

The first Mito was installed near a tram stop at Taksim Square in downtown Istanbul in February. Verisun says it plans to roll more out in different cities in the future.

Verizon trials drones as flying cell towers to plug holes in internet coverage

07:34 Posted by Anonymous No comments

Verizon has joined the likes of Facebook, Google and fellow telecommunications giant AT&T in exploring the potential of internet-connected unmanned aircraft. While its vision involves expanding 4G coverage across the US, it has an immediate focus on shoring up communications for first responders in emergency situations, and recently carried out trials to that effect.

Verizon has dubbed the initiative Airborne LTE Operations (ALO) and says it has actually been in the pipeline for around two years. The company has been working to integrate internet connectivity into unmanned aerial vehicles and hook them up to its 4G network, daisy chaining coverage and beaming it down to unconnected areas in the process. This is similar to how Facebook hopes its Aquila drones will work.

Verizon recently teamed up with American Aerospace Technologies to see what impact using drones as gliding cell towers could have in disaster relief scenarios. In a simulated mission in New Jersey, the team sent a drone with a 17-foot (5.2 m) wingspan into flight to put the onboard technologies through their paces.


"We are testing a 'flying cell site' placed inside the drone to determine how we can provide wireless service in a weather-related emergency from the air," Verizon's Howie Waterman tells New Atlas. "This is the first such wireless network test in an emergency response simulation, leveraging our 4G LTE network."

Verizon says this is just one of a series of successful technical trials it has conducted around the country, involving both unmanned and manned aircraft. It imagines connected aerial vehicles eventually being used to image crops, carry out inspections of pipelines and high-voltage power lines, and monitor the physical extent of threats like wildfires and tornadoes.

As it stands, businesses in the US can't legally fly drones beyond the operator's line of sight, but regulators say laws that will accommodate these types of applications are in the works. If and when that happens, Verizon says it will have a certification process whereby it will approve other businesses' drones to hook up to its flying 4G network and get in on the action.

Source: New Atlas, Verizon

Saturday 15 October 2016

Disney's one-legged robot hops into action without a tether

00:33 Posted by Anonymous No comments

Bipedal robots, such as Boston Dynamics' Atlas, may be able to balance on one leg, but Disney Research has gone one better and built a one-legged hopping robot. This unidexter automaton isn't the first hopping robot, but it is the first that doesn't rely on a tether or external power source to keep bouncing.

Hopping robots have been around since at least the early 1980s, and though a robotic pogo stick may seem as pointless as an ejection seat on a helicopter, they have helped engineers develop and refine robotic locomotion. While robots with two or more legs can handle rough terrain and use different gaits for different tasks, single-legged robots have a much simpler topology and are limited to hopping to move and stay upright.

According to the Disney Research team, this makes them ideal for testing robot leg mechanics and algorithms. A single-legged robot needs to respond at high speed, and with much higher forces and stresses than a multi-legged robot experiences, if it's going to stay upright and not trip over things. Meeting these requirements is much more difficult than making a two-legged robot walk and provides valuable information to designers on how to overcome problems.


Early hopping robots, such as those built by Marc Raibert at MIT in 1983, were heavy affairs with massive metal crash cages. Relying on pulleys, cams, and electric solenoids, they and later hopbots required tethers and external power sources to work without damaging themselves. Building such robots was so hard that the Disney team says no real progress has been made on increasing the portability of hopping robots since 2007.

The Disney approach for its hopping robot was to make a much lighter, simpler design. In this case, the robot is built around a Linear Elastic Actuator in Parallel (LEAP) for greater agility. This mechanism uses two parallel compression springs to steady the central leg, which provides thrust by means of a voice coil. As the name suggests, the voice coil was originally designed for certain types of speakers, but the coil's ability to provide easy-to-control proportional thrust without direct contact between coil and shaft made it ideal for this purpose.


The robot has a special hip made out of a gimbal joint incorporating a pair of servo motors. In addition, there's an onboard microcontroller that controls thrust, orientation, and foot placement. Put them all together and it adds up to a robot that can maintain its balance for up to 19 hops over seven seconds on its own. Though the robot did have a safety tether, this was slack during the tests and served mainly as a telemetry link to record data.
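
The team's paper spells out the actual controller, but the classic way to structure one-legged hopping control, going back to Raibert's machines, splits it into three loops: thrust for hop height, foot placement for horizontal speed, and hip torque for body attitude. A rough sketch of that textbook decomposition (the gains and control laws here are illustrative placeholders, not Disney's code):

```python
# Illustrative Raibert-style control decomposition for a one-legged hopper.
# The structure (thrust, foot placement, attitude) is the textbook approach;
# the gains and laws are placeholders, not Disney's actual controller.

def hopping_controller(state, target_speed, stance_time,
                       k_speed=0.05, kp_att=8.0, kd_att=0.5):
    v = state["horizontal_velocity"]
    # 1. Foot placement: aim for the "neutral point" under the hip, plus a
    #    correction proportional to the speed error (Raibert's law).
    foot_x = v * stance_time / 2 + k_speed * (v - target_speed)
    # 2. Thrust: inject an energy boost during stance to offset losses;
    #    on the Disney robot this is the voice coil's job.
    thrust = 1.0 if state["phase"] == "stance" else 0.0
    # 3. Attitude: PD servo on body pitch, the gimbal-hip servos' job.
    hip_torque = -kp_att * state["pitch"] - kd_att * state["pitch_rate"]
    return foot_x, thrust, hip_torque

print(hopping_controller(
    {"horizontal_velocity": 0.3, "phase": "stance",
     "pitch": 0.02, "pitch_rate": -0.1},
    target_speed=0.0, stance_time=0.15))
```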


The Disney Research team says there's still room for improvement. The software assumes the foot is static once it lands, which causes it to slip, and the sensors could be more reliable and accurate. The team is also working on a better way to model the robot's velocity mid-hop, and hopes to make the mechanism more modular and compact so it can be adapted to multi-legged robots.

The Disney Research team's findings can be found in this paper (PDF).

The video below shows the Disney Research robot going for a quick hop.


Friday 14 October 2016

Intel unveils its own commercial drone, the Falcon 8+

01:04 Posted by Anonymous No comments

Intel is slapping its name on an advanced drone designed for commercial and professional uses in North America. The Falcon 8+ is the first Intel-branded commercial drone and it's outfitted for industrial inspection, surveying and mapping.


Intel's expert-level drone builds on the AscTec Falcon 8, a V-form octocopter that boasts high stability, precision GPS, and triple-redundant flight control electronics and components, while also laying claim to the best weight-to-payload ratio around (empty weight 1.1 kg, max. payload 0.8 kg). This is basically a working drone designed for some of the most intense field applications.

The two companies had already used the Falcon 8 to create a custom drone for Boeing modified with Intel's RealSense cameras. We've also seen Intel make a concerted push into the drone world lately with a drone specifically for developers and another that makes use of the company's 3D-mapping technology for collision avoidance.

The Falcon 8+ builds on the Falcon 8 further by adding Intel's advanced, water-resistant ground Cockpit system for control and an Intel PowerPack to keep it flying.

At the center of the Cockpit is an Intel chipset-powered tablet for planning and conducting complex flight patterns, as well as monitoring the live video feed via a low-latency digital link at up to 1080p resolution with a 1-km range. Flight control can also be managed with a single-hand joystick.


The Falcon 8+ comes loaded with on-board sensors that can map surfaces down to the millimeter, which the company says allows for routes to be replicated with a high degree of accuracy.

The whole UAV is arranged in a V-form measuring 768 x 817 x 160 mm, with a takeoff weight of 2.8 kg (6.2 lb) when loaded with a camera and gimbal capable of transmitting up to 1080p HD video.

Intel announced the Falcon 8+ at the InterGeo conference in Germany, and it hasn't yet been approved by the Federal Communications Commission for sale or use in the United States. There's no word from Intel on how soon that could happen or what pricing will look like.

Sources: New Atlas, Intel, AscTec

Thursday 13 October 2016

BMW Motorrad's futuristic motorcycle concept keeps the rider in control

08:53 Posted by Anonymous No comments

The BMW Group, celebrating its centenary this year, unveiled its latest vision of the motoring future in Los Angeles. The BMW Motorrad Vision Next 100 is a concept motorcycle that keeps the rider in control in an autonomous world.

Edgar Heinrich, Head of Design at BMW Motorrad, describes a motorcycle as a "Great Escape" from the mundane world. This Steve McQueen vision is apparently at the heart of the Vision Next 100's design. Added to that are several futuristic ideas about where motoring will be, in general, in 100 years and how those changes will affect motorcycles and those who ride them.

Unsurprisingly, safety concerns are first and foremost in the design. Illustrating this, BMW says no protective clothing, not even a helmet, is required by the rider as assistive systems will keep them safe. Chief among these is a self-balancing mechanism that keeps the bike upright, even when parked, but allows for riding angles suited to the skill level of the rider, tilting into turns and leaning fore and aft when braking or accelerating just as a motorcycle of today would, but without the risk of laying the bike down.

The BMW Group's futurists see a world in which self-driving cars are the norm, but the Motorrad concept keeps the rider in control. To help in this regard, special rider's gear has been designed to form a Digital Companion that supports the rider with situational information when required. This information is relayed via glasses worn by the rider, called the Visor. Content is triggered by the rider's eyes: looking up or down calls up different displays, while looking straight ahead clears the screen so the rider can concentrate on the experience of riding, except when alerts are required.

The rider would also wear the Vision Next 100's companion suit, which is tuned to provide thermal support by adjusting to provide ventilation or heat. The suit's external design is inspired by the musculature of the human body and bands in the suit can be adjusted by both rider preference and posture to allow for added or loosened support as needed. Riding speeds also adjust the suit, with higher speeds adding support to the upper vertebrae, for example.

The bike itself is designed to evoke nostalgic thoughts of BMW two-wheelers of the past, with a boxer-style engine cover in a naked-bike style and outward-facing handle joints as an homage to BMWs of days gone by. The black frame triangle is designed to be reminiscent of the very first BMW Motorrad motorcycle, the R32, built in 1923. Although the engine cover appears to sit over a boxer-style engine, in reality it hides a zero-emissions powertrain. The polished aluminum finish of the engine covers is dynamic, moving outward during use to improve aerodynamics around the rider's legs, and sucking in when the bike is at rest for a slim, clean profile.

That engine cover is part of the Motorrad Vision Next 100's overall Flexframe structure. This is a futuristic, flexible construct that allows the entire one-piece frame to provide full-body steering for the motorcycle. Turning the handlebars bends the frame rather than just turning the front wheel. The amount of force required to create a turn is adjusted according to the Motorrad Vision's speed - the higher the speed, the more force required. This is intended to prevent over-steering and constant corrections.

Integrated into that futuristic frame are the riding lights. A U-shaped element at front is the daytime running light and wind deflector in one piece. The integrated windshield protects the rider at speed and also acts as a heads-up display of information, as needed. Two red, illuminated strips beneath the seat shell are the rear lights and indicators in one piece, made to be reminiscent of Motorrad bikes of today.

A look at the BMW Motorrad Vision Next 100 shows that there are few joints and no visible bolts, screws, springs, or shocks. Damping is controlled by the tires, which feature a futuristic "variable tread" that actively adjusts to maximize grip and minimize impacts.

The BMW Motorrad design team sees a future world in which digital elements are more common than analog ones. In that world, most aspects of human life are about virtual control, with robotic machines doing the mundane tasks of everyday living. There, a motorcycle that allows the rider to be in control would become a Great Escape.

The BMW Motorrad Vision Next 100 is on display in Los Angeles at the Iconic Impulses: BMW Group Future Experience exhibition until the 16th of October.


Source: New Atlas, BMW

Wednesday 12 October 2016

Indian ROV monitors the health of coral reefs

00:06 Posted by Anonymous No comments

Scuba divers who plunge to the ocean floor to study coral reefs can now take a break. An indigenously developed remotely operated vehicle (ROV) is taking up their role with more efficiency and accuracy, and it's expected to contribute significantly to the conservation and management of corals.

The National Institute of Ocean Technology (NIOT), Chennai, recently deployed the ROV to study the coral reefs of the Andaman and Nicobar Islands, which are facing survival threats due to global warming.

While it would take a scuba diver weeks to diagnose the health of corals, the ROV can map a larger area in a day.

“The images of corals recorded by the ROV are useful for studying the biodiversity of coral reefs and their evolution. The underwater visuals had shown the coral debris and boulders caused by the 2004 tsunami and the rejuvenation of the colonies of branching corals, stony coral and brain corals at some locations,” explained G.A. Ramadass, Head, Deep Sea Technologies Group, NIOT.

“The coral reef biodiversity at Andaman region, which spreads across an area of 11,000 sq km, was seriously affected during the 2004 tsunami. The increasing sea surface temperature added to the stress. Currently, there is no mechanism other than scuba diving to examine the corals and assess the extent of damage or rejuvenation,” explained Dr. Ramadass.

According to the experts, no evidence of coral bleaching was seen in the Andaman reef in April 2016, when the ROV carried out a survey. However, the ecosystem needs to be monitored constantly to understand the impact of the rise in temperature, he said.

Development of ROV
NIOT had earlier developed a deep-water work-class ROV, ROSUB 6000, suitable for exploration in deep waters. It was successfully operated at a maximum depth of 5,289 metres in the Central Indian Ocean Basin. It also contributed to the exploration of deep-ocean minerals such as gas hydrates, polymetallic nodules and hydrothermal sulphides, which occur at water depths ranging between 1,000 and 6,000 metres, said a communication from the institute.

A new miniaturised version of the ROV, which can be used effectively for exploration and inspection down to 500-metre water depths, caters to the needs of the research community and industry. It was also deployed for scientific research in Antarctica as part of the 34th Indian Scientific Expedition to Antarctica during Jan-Apr 2015, operating in Lake Priyadarshini near the permanent Indian station Maitri and in the New Indian barrier ice shelf region.

Source: The Hindu

Monday 10 October 2016

Komatsu's robotic mining truck completely dumps the driver

21:34 Posted by Anonymous No comments

Komatsu's latest autonomous truck fully embraces the notion of unmanned operation by ditching the cabin and adopting a design that optimizes load distribution and doesn't distinguish between forwards and backwards.

Komatsu began trials of its Autonomous Haulage Systems (AHS) in a partnership with mining company Rio Tinto in 2008, and since then the technology has hauled hundreds of millions of tonnes of material in Chile and Australia's Pilbara region.


Autonomous haul trucks like the 930E model used by Rio Tinto incorporate controls, wireless networking and obstacle detection to enable unmanned operation, but they still look like conventional mining trucks, complete with driver cabins.

The new 2,014-kW (2,700-hp), 15-m (49-ft) long and 8.5-m (27-ft) wide "Innovative Autonomous Haulage Vehicle" takes things up a notch. The cabin is completely gone, allowing for a design that better distributes weight across all four wheels, and it uses four-wheel drive and four-wheel steering for better grip and maneuverability.


Without the need for a driver squinting in the rear-view mirror, the truck is also designed to move as efficiently backwards as it does forwards, meaning no three-point turns and therefore increased productivity and less wear and tear on the 59/80R63 tires. It can handle a payload of 230 metric tons and reaches a maximum speed of 64 km/h (40 mph).

The robot monster truck is being unveiled at Minexpo International in Las Vegas this week and Komatsu says it "plans a market introduction in the near future."

The video below is Komatsu's animation of the Innovative Autonomous Haulage Vehicle at work.


Source: New Atlas, Komatsu

Sunday 9 October 2016

SenseFly gives its eBee agriculture drone bigger wings

07:32 Posted by Anonymous No comments
A few years back, drone-maker senseFly launched its eBee fixed-wing drone with a firm eye on agricultural applications, though the drone's mapping capabilities drew the attention of those working on search and rescue, too. Now the company has announced an upgrade, the eBee Plus, which, along with longer flight times, packs a new sensor promised to deliver more precise mapping.

The eBee Plus looks much like the original drone on the outside, but it packs a little more functionality inside that senseFly and parent company Parrot will hope can expand its appeal. This includes a new proprietary sensor dubbed S.O.D.A (Sensor Optimized for Drone Applications), an RGB camera with a 1-inch sensor and global shutter that is capable of snapping images with a spatial resolution of 2.9 cm (1.1 in) while flying at an altitude of 122 m (400 ft).

The wingspan has grown from 96 cm (37.8 in) to 110 cm (43.3 in) and flight time is boosted from 45 minutes to 59 minutes, allowing the drone to map 220 ha (540 ac) in a single flight - more area, the company claims, than any drone in its weight class. Routes are planned through senseFly's accompanying flight-control software, eMotion 3, which allows for multi-drone missions and also handles the data management.
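
A quick bit of arithmetic puts those coverage claims in perspective (figures straight from the specs above):

```python
# Coverage arithmetic from the quoted specs.
area_ha, flight_min = 220, 59
print(f"eBee Plus: {area_ha / flight_min:.1f} ha mapped per flight minute")

# What the original 45-minute endurance would cover at the same rate
# (a rough comparison; it assumes an identical mapping rate).
print(f"equivalent 45-min coverage: {45 * area_ha / flight_min:.0f} ha")
```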

While the drone comes loaded with the S.O.D.A sensor, this can be swapped out for any of the company's other RGB, thermal or multispectral sensors, such as the Sequoia released earlier this year. RTK/PPK functionality is also available as an optional extra, drawing on satellite data for more precise positioning information about objects on the ground.


The company will be showing off the new drone at mapping conference Intergeo in Hamburg next week. Though there is no information on the pricing for the eBee Plus available at this stage, we have enquired with senseFly and will update the story when we hear back.

Source: New Atlas, senseFly

Panasonic uses human touch to transfer data

03:53 Posted by Anonymous No comments


In an age when digital information can fly around the connected networks of the world in the blink of an eye, it may seem a little old timey to consider delivering messages by hand. But that's precisely what Panasonic is doing at CEATEC this week. The company is demonstrating a prototype communication system where data is transmitted from one person to another through touch.

There's very little information on the system available, but Panasonic says the prototype uses electric field communication technology to move data from "thing-to-thing, human-to-human and human-to-thing." Data transfer and authentication occur when the objects or people touch, with digital information stored in a source tag instantaneously moving to a receiver module – kind of like NFC tap-to-connect technology, but with people in the equation as well as devices.

The LEDs under one staff member's skirt change to match the color of a bracelet worn by another when they shake hands

It has the potential to allow business types to exchange contact information with a handshake, mood lighting in a room to change to match or contrast with clothing when a lamp is touched, or access to a building to be granted by placing a hand or object on a lock interface or door handle. And Panasonic suggests that because the data travels through the body and not over the air, secure transmission is assured.

The CEATEC demos are quite basic, but serve to show that the system works. There's no word at the moment on whether it will make it to enterprise or commercial availability, but the video below shows the Human Body Communication Device in action.

Source: PanasonicNew Atlas

Saturday 8 October 2016

Meet NASA's robot destined to mine Martian soil

03:11 Posted by Anonymous No comments

After all, Curiosity could get some much-needed company!


Elon Musk and his private spaceflight company SpaceX recently outlined their plan to make space travel to Mars an affordable reality—just $500,000 for a one-way ticket to the Red Planet. To shuttle people to Mars (within the next decade, if ambitious goals can be met), SpaceX is working on a carbon fiber fuel tank for a massive 400-foot-tall reusable rocket that only exists on the drawing board at this point.

But getting people to Mars is only half the battle. Making sure that they can survive, possibly for decades, is a whole different challenge. SpaceX might be the perfect organization to launch people to the Red Planet on massive rockets, but they are going to need some help from NASA to build a sustainable colony, which is its proposed goal.

Fortunately, NASA has been quietly working on ways to harvest Martian resources for some years—a necessary step to ultimately realize a self-sustained Martian colony. In April 2016, NASA published a scientific and technical information (STI) paper titled "Frontier In-Situ Resource Utilization for Enabling Sustained Human Presence on Mars." The paper outlines various ways that minerals, water, and atmospheric gasses could be harvested and used to manufacture plastics, rocket propellants, habitat-heating fuels, and even more complicated materials like fiberglass—all with materials that are already on Mars.



This is where the Regolith Advanced Surface Systems Operations Robot (RASSOR) comes in (see video above). The robot, which could be affixed to a rover or made into a rover itself, uses a rotating digging device to scoop up soil that could then be used for resource extraction. As NASA writes on its website, the RASSOR's "design incorporates net-zero reaction force, thus allowing it to load, haul, and dump space regolith under extremely low gravity conditions with high reliability."

The bot in the video above is actually the RASSOR 2.0, a scaled-up prototype of the original 2013 design. If we are going to build a self-sustained colony on Mars in the foreseeable future, the first step will be sending a host of robots like RASSOR to the Red Planet to get to work building our Martian home for us. As the NASA STI paper states regarding a Martian colony: "The crew is there to explore, and to colonize, not maintain and repair. Any time spent on 'living there' and 'housekeeping' should be minimized to an oversight role of robotic automated tasks."

Friday 7 October 2016

Omnidirectional robot moves on an electrically charged ball to keep things simple

01:06 Posted by Anonymous No comments



In a field where highly complex machinery often pulls off highly complex maneuvers, a system that relies on a single ball to get around is certainly at the simpler end of the spectrum. Ten years ago we were introduced to such an idea, and now the team behind the original Ballbot is back with an even less complicated system. The upgraded machine is dubbed SIMbot and uses an experimental induction motor rather than a mechanical drive system for mobility, resulting in a robot with minimal moving parts.

The original Ballbot was invented by Professor Ralph Hollis, a robotics researcher at Carnegie Mellon University. The tall and slender robot was battery operated and omnidirectional, a set of characteristics its creator says lends itself particularly well to working with people in busy environments.

Because of its slender form and great agility, the robot can roll through doorways and between furniture, and can quite easily be moved out of the way when needed. A few years ago, a company spun out of Carnegie Mellon sought to make use of these capabilities with a version called mObi, eyeing hospitals and offices as its first port of call. The machine has also inspired a number of ballbots around the world, from roboticists in Japan, Switzerland and Spain.

But ballbots have relied on mechanical parts to move the ball at their base and keep the robot upright. Described as an "inverse mouse-ball drive," this setup sees motors actuate rollers that press against the ball, move it in the required direction and keep the robot from tipping over. These act on information gathered by internal sensors that track the robot's balance.

The SIMbot design could make ballbots cheaper and more accessible

"But the belts that drive the rollers wear out and need to be replaced," says Michael Shomin, a Ph.D. student in robotics at Carnegie Mellon. "And when the belts are replaced, the system needs to be recalibrated."

So the team explored mechanically simpler ways of keeping the ballbot on the move. This led them to induction motors, which use magnetic fields to induce electrical currents and generate torque rather than relying on direct electrical connections. They are actually fairly commonplace and can be found in ceiling fans, industrial machinery and Elon Musk's Hyperloop plans.


But applying the technology to a spherical form is a challenge. While the team says progress has been made in this area before with spherical induction motors (SIMs) that can move back and forth a few degrees, their design, combined with advanced software and mathematics, allows for a spherical motor that can spin freely in any direction.

The SIM rests on a hollow iron ball inside a copper shell. Six laminated steel stators sit alongside the ball and produce traveling magnetic waves that drag the ball along in the direction of the wave. By altering the currents produced by the stators, the SIMbot can be steered in different directions.
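
As a conceptual illustration only (the stator layout and drive law below are invented for the sketch, not taken from the CMU design), steering boils down to weighting each stator's traveling-wave drive by how well its axis lines up with the direction you want the ball to roll:

```python
import math

# Conceptual sketch: weight each stator's traveling-wave drive by the
# alignment of its drive axis with the desired roll direction.
# The six-stator geometry here is hypothetical, not the paper's layout.
stator_axes = [(math.cos(a), math.sin(a))
               for a in (math.radians(60 * i) for i in range(6))]

def stator_commands(desired_dir, max_current=1.0):
    dx, dy = desired_dir
    norm = math.hypot(dx, dy) or 1.0
    dx, dy = dx / norm, dy / norm
    # Positive projection drives the wave forward; negative reverses it.
    return [max_current * (ax * dx + ay * dy) for ax, ay in stator_axes]

print([round(c, 2) for c in stator_commands((1.0, 0.0))])
```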

Replacing the belt drives with induced electric currents cuts down on friction and makes the machine more efficient, and it may also make for more reliable ballbots that require less routine maintenance. And because of the shift away from mechanical components, ballbots may become cheaper in the long run.

"This motor relies on a lot of electronics and software," says Hollis. "Electronics and software are getting cheaper. Mechanical systems are not getting cheaper, or at least not as fast as electronics and software are."

You can see the SIMbot in action in the video below.


Wednesday 5 October 2016

Google teaches robots to teach each other

23:19 Posted by Anonymous No comments

Poet John Donne said, "no man is an island," and that is even more true for robots. While we humans can share our experiences and expertise through language and demonstration, robots have the potential to instantly share all the skills they've learned with other robots simply by transmitting the information over a network. It is this "cloud robotics" that Google researchers are looking to take advantage of to help robots gain skills more quickly.

The human brain has billions of neurons, and between them they form an unfathomable number of connections. As we think and learn, neurons interact with each other and certain pathways that correspond to rewarding behavior will be reinforced over time so that those pathways are more likely to be chosen again in future, teaching us and shaping our actions.

An artificial neural network follows a similar structure on a smaller scale. Robots may be given a task and instructed to employ trial and error to determine the best way to achieve it. Early on, their behavior may look totally random to an outside observer, but by trying out different things, over time they'll learn which actions get them closer to their goals and will focus on those, continually improving their abilities.

While effective, this process is time-consuming, which is where cloud robotics comes in. Rather than have every robot go through this experimentation phase individually, their collective experiences can be shared, effectively allowing one robot to teach another how to perform a simple task, like opening a door or moving an object. Periodically, the robots upload what they've learned to the server, and download the latest version, meaning each one has a more comprehensive picture than any would through their individual experience.
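
In outline, this is the same pattern as today's federated and distributed learning systems: each robot refines a local copy of the shared network and a server periodically merges the copies. A bare-bones sketch of that upload-average-download loop (plain averaging is a common choice here, not necessarily Google's exact method):

```python
import numpy as np

# Bare-bones sketch of the shared-experience loop: each robot trains a
# local copy of the policy weights, a server averages the copies, and
# everyone downloads the result.

def local_update(weights, rng):
    # Stand-in for one robot's trial-and-error learning; in reality this
    # would be a reinforcement-learning update from its own attempts.
    return weights + 0.01 * rng.standard_normal(weights.shape)

rng = np.random.default_rng(0)
server_weights = np.zeros(8)                      # the shared policy
robots = [server_weights.copy() for _ in range(4)]

for sync_round in range(10):
    robots = [local_update(w, rng) for w in robots]   # explore locally
    server_weights = np.mean(robots, axis=0)          # upload and average
    robots = [server_weights.copy() for _ in robots]  # download the latest
```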

Using this cloud-based learning, the Google Research team ran three different types of experiments, teaching the robots in different ways to find the most efficient and accurate way for them to build a common model of a skill.

First, multiple robots on a shared neural network were tasked with opening a door through trial and error alone. As the video below shows, at first they seem to be blindly fumbling around as they explore actions and figure out which ones get them closer to the goal.

After a few hours of experimentation, the robots collectively worked out how to open the door: reach for the handle, turn and pull. They understand that these actions lead to success, without necessarily building an explicit model of why that works.

In the second experiment, the researchers tested a predictive model. Robots were given a tray full of everyday objects to play with, and as they push and poke them around, they develop a basic understanding of cause and effect. Once again, their findings are shared, and the group can then use their ever-improving cause and effect model to predict which actions will lead to which results.

Using a computer interface showing the test area, the researchers could then tell the robots where to move an object by clicking on it, and then clicking a location. With the desired effect known, the robot can draw on its shared past experiences to find the best actions to achieve that goal.
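
That selection step can be sketched simply: sample candidate actions, run each through the learned forward model, and keep the one whose predicted outcome lands closest to the clicked target. In the sketch below, predict_next() is a stand-in for the learned cause-and-effect model, with toy dynamics:

```python
import numpy as np

# Choosing an action with a forward model: predict the outcome of many
# candidate pushes and keep the best one.

def predict_next(obj_pos, action):
    # Placeholder for the learned model: here a push simply moves the
    # object by the action vector.
    return obj_pos + action

def choose_action(obj_pos, goal_pos, n_candidates=256, rng=None):
    rng = rng or np.random.default_rng()
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, 2))
    predicted = np.array([predict_next(obj_pos, a) for a in candidates])
    errors = np.linalg.norm(predicted - goal_pos, axis=1)
    return candidates[np.argmin(errors)]

best = choose_action(np.array([0.0, 0.0]), np.array([0.5, -0.2]))
print(best)
```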

Finally, the team employed human guidance to teach another batch of robots the old "open the door" trick. Each robot was physically moved by a researcher through the steps to the goal, and playing that chain of actions back forms a "policy" for opening the door that the robots can draw from.

Then, the robots were tasked with using trial and error to improve this policy. As each robot explored slight variations in how to perform the task, they got better at the job, to the point where their collective experience allowed them to account for slight variations in door and handle positions and, eventually, to cope with a type of handle that none of the robots had encountered before.

So what's the point of all this? For neural networks, the more data the better, so a team of robots learning simultaneously and teaching each other can produce better results much faster than a single robot working alone. Speeding up that process could open the door for robots to tackle more complex tasks sooner.

The New York Public Library Unveils a Cutting-Edge Train That Delivers Books

06:32 Posted by Anonymous No comments

The New York Public Library has installed a new, state-of-the-art conveyor system in its Stephen A. Schwarzman Building on Fifth Avenue and 42nd Street to transport requested research materials from newly-expanded storage under Bryant Park to researchers throughout the library.


The conveyor, developed by New Jersey-based company Teledynamic, will begin delivering requested materials to two locations in the building during the week of Oct. 3. One of the locations – the iconic Rose Main Reading Room on the third floor – is reopening on Oct. 5 after a closure of more than two years for repairs and restoration.

The new system – installed utilizing an innovative design developed by global design firm Gensler – consists of 24 individual red cars that run on rails and can seamlessly and automatically transition from horizontal to vertical motion. The cars pick up requested materials from the newly-expanded Milstein Research Stacks – which now have two levels that can hold up to 4 million research volumes – and deliver the materials to library staff in two locations: one on the first floor and the other in the Rose Main Reading Room. Staff then provide the materials to researchers for use in the library.

The new conveyor system replaces an outdated setup in which boxes of research materials were placed on a series of conveyor belts. The new system is easier to maintain and more efficient.

The new system:

  • Runs on 950 feet of vertical and horizontal track
  • Includes 24 cars that can each carry 30 pounds of material
  • Moves at 75 feet per minute
  • Tracks cars using electronic sensors installed on the rails
  • Moves materials through 11 levels of the library, totaling 375 feet of travel
  • Is electric, with the cars running on 24 VDC
  • Takes approximately five minutes to go from the Milstein Stacks to the Rose Main Reading Room, a figure checked below (actual delivery takes longer, as the request needs to be received, the materials pulled by staff, and then placed on the system)
  • Costs about $2.6 million
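
As a quick check, the five-minute figure follows directly from the speed and distance quoted above:

```python
# The quoted travel time matches the specs: 375 ft of travel at 75 ft/min.
print(f"{375 / 75:.0f} minutes from the Milstein Stacks to the Reading Room")
```
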
You can learn more about the system in the video below.


Source: Open Culture, NYPL

Tuesday 4 October 2016

Combat helmet-mounted HUD combines information and infrared

06:29 Posted by Anonymous 1 comment

Information can be a soldier's most important weapon, and Rockwell Collins has unveiled a new system that puts that weapon front and center. The company's Integrated Digital Vision System (IDVS) is a heads-up display that attaches to combat helmets and relays information from a command center, other warfighters or drones, as well as augmenting the wearer's vision with multispectral sensors.


Rockwell Collins' IDVS sounds a lot like BAE Systems' Q-Warrior HUD, with both designed to provide key data without distracting a warfighter from the task at hand. That data can take the form of maps, compass headings, and markers on people and objects, which are displayed on a transparent 1,920 x 1,200 pixel display before the wearer's eyes. This display flips up out of the way if need be, and can be adjusted to suit each wearer's unique interpupillary distance (the distance between their eyes).

Meanwhile, the unit can increase visibility in smoky, foggy, dusty or dark conditions through multiple sensors on the top. Two of these detect light in visible and near-infrared wavelengths, while another adds thermal infrared, and the company claims that the system can quickly transition between different light levels. Those sensors display their readings across a 40-degree field-of-view on the HUD, leaving the soldier's peripheral vision clear of any distortion.


Power comes from four 18650 batteries or eight CR123 batteries, which should grant the unit up to six hours of full-sensor operation. To help future-proof the system, the company says it's built on an open architecture, allowing the data input, output and processing to be upgraded as technology progresses.
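
Rockwell Collins doesn't quote a power figure, but you can back one out of the battery spec if you assume typical 18650 cells; the ~3,000 mAh and 3.6 V values below are assumptions, not from the announcement:

```python
# Back-of-envelope average draw from the quoted six-hour runtime.
# Cell capacity and voltage are typical 18650 values, assumed not quoted.
cells, cell_mah, cell_v = 4, 3000, 3.6
pack_wh = cells * (cell_mah / 1000) * cell_v        # ~43 Wh
print(f"~{pack_wh / 6:.0f} W average full-sensor draw")  # roughly 7 W
```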

Rockwell Collins made the announcement at the Land Forces Australia exhibition in Adelaide, South Australia.

Saturday 1 October 2016

ESLOV: The amazing IoT Invention Kit from Arduino

03:53 Posted by Anonymous No comments

For years, the open-source philosophy of Arduino has been the inspiration behind robots, drones, medical and space research, interactive art, musical instruments, 3D printers, and so much more. Now, Arduino is on a mission to radically simplify the way you build smart devices. It has introduced ESLOV, a plug-and-play IoT invention kit.

ESLOV consists of intelligent modules that join together to create projects in minutes, with no prior hardware or programming knowledge necessary. Just connect the modules using cables, or mount them on the back of the WiFi and motion hub. When done, plug the hub into your PC.


ESLOV’s visual code editor automatically recognizes each module and displays it on the user’s screen. Connections can then be drawn between the modules in the editor, bringing the project to life on screen. The project can also be uploaded to the Arduino Cloud, letting users interact with it remotely from anywhere, including via mobile phones. The Arduino Cloud’s user-friendly interface simplifies complex interactions with sliders, buttons, value fields, and more.
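
Conceptually, the visual editor is just letting you draw a signal graph between single-purpose modules. Here's a toy model of that idea in Python; it's purely an illustration of the concept, not ESLOV's actual editor or firmware:

```python
# Toy model of ESLOV's draw-a-wire idea: modules expose signals, and a
# "connection" forwards events from one module to the next.

class Module:
    def __init__(self, name):
        self.name = name
        self.listeners = []

    def connect(self, other):
        """The equivalent of drawing a wire between two modules."""
        self.listeners.append(other)

    def emit(self, value):
        for target in self.listeners:
            target.receive(self.name, value)

    def receive(self, source, value):
        pass  # sensor-only modules ignore incoming events


class LedModule(Module):
    def receive(self, source, value):
        print(f"LED {'on' if value else 'off'} (triggered by {source})")


button = Module("button")
led = LedModule("led")
button.connect(led)   # the "wire" drawn in the visual editor
button.emit(True)     # pressing the button lights the LED
```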

The ESLOV modules and hub can also be programmed with the wildly popular Arduino Editor. With the provided libraries, one can customize the behavior of the existing modules, enhance the hub’s functionalities, as well as modify the protocols of both the hub and the modules.

With a total of 25 modules — buttons, LEDs, air quality sensors, microphones, servos, and several others — the possibilities are endless: think baby monitors, a washing-machine notifier that informs users when their laundry is finished, or an IoT-enabled thermostat that can be adjusted while away from home.

In line with the core values of the Arduino community, ESLOV’s hardware and software are open-source, enabling users to produce their own modules. To accelerate its development in the open-source spirit, ESLOV — which began as part of a three-year EU-funded PELARS project — is now live on Kickstarter and needs your support.

The toolkit is offered in a variety of sizes, depending on the number of modules. Prices range from ~$55 USD to ~$499 USD, with multipacks and other opportunities available as well. Delivery is expected to get underway in June 2017.

In terms of hardware, the main hub is currently equipped with a Microchip SAM D21 ARM Cortex-M0+ MCU at 48 MHz and built-in WiFi (just like the MKR1000). Each module is a small (2.5 x 2.5 cm), low-power (3.3 V), single-purpose board featuring the same processor found at the heart of the Arduino/Genuino UNO: Microchip’s ATmega328P.

The modules’ firmware and the hub’s software can be updated both over the USB cable and over-the-air (OTA).

Those heading to World Maker Faire in New York on October 1st-2nd can learn more about the kit inside the Microchip booth in Zone 3, as well as during Massimo Banzi’s “State of Arduino” presentation on Saturday at 1:30pm in the New York Hall of Science Auditorium.

Want to learn more or back ESLOV for yourself? Check out its Kickstarter page!

Source: Arduino

Thursday 29 September 2016

First self-driving vehicle produced for Volvo's public trial

22:25 Posted by Anonymous No comments


The first car built to take part in Volvo's Drive Me trial has rolled off the production line in Torslanda, Sweden. Described by Volvo as "the world's most ambitious and advanced public autonomous driving experiment," Drive Me will see real people using fully autonomous cars on public roads.


Although the cars aren't scheduled to hit the road until next year, Volvo sees this as the beginning of the project, which will be run in Gothenburg with special "hands-off and feet-off" zones allowing for full autonomous use.


The carmaker has been at the forefront of the autonomous driving revolution, most recently in partnering with Uber to ferry passengers around in self-driving taxis and in trialing a self-driving truck in an underground mine.

For the Drive Me trial, Volvo XC90 SUVs are being fitted with a variety of sensors, including LiDAR, radar and traditional cameras. The information from these sensors is then brought together by a powerful computer that Volvo calls the Autonomous Driving Brain, in a process called data fusion. The fused data is used to inform the actions the cars take.
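
As a generic illustration of what data fusion means here (not Volvo's actual Autonomous Driving Brain), independent noisy estimates of the same quantity, say an obstacle's distance, are often combined by weighting each sensor by the inverse of its variance:

```python
# Generic inverse-variance fusion of independent distance estimates, as an
# illustration of "data fusion"; the sensor noise figures are invented.

readings = {                # sensor: (distance estimate in m, std dev in m)
    "lidar":  (42.1, 0.1),
    "radar":  (41.8, 0.5),
    "camera": (43.0, 1.0),
}

weights = {s: 1 / sigma**2 for s, (_, sigma) in readings.items()}
fused = sum(w * readings[s][0] for s, w in weights.items()) / sum(weights.values())
print(f"fused distance: {fused:.2f} m")   # dominated by the precise lidar
```
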
Volvo says that the Drive Me project differs from others in its customer-focused approach. By researching with real drivers in real-world situations, it hopes to gain insights that more controlled research approaches may not yield.

Once the Drive Me project gets underway in Gothenburg, another leg is planned for launch in London. Volvo says it is considering interest from cities in China, too.

Volvo has high hopes for autonomous driving tech. It's already introducing semi-autonomous technologies to help with its aim for no-one to be seriously injured or killed in a new Volvo by the year 2020, and is looking to begin introducing fully autonomous cars commercially by around 2021.

Here's Volvo's run-down on the project:




Source: Volvo, New Atlas