Over the past few months, we’ve worked hard to share articles with you through this page. This is where sections such as Geek Speak, Tech Mania and Kit Creatives have appeared, and where some of the best achievements of our students and our community have been shared.
But it’s time we moved on and reached a wider audience. Things have changed, times have changed. We felt that integrating this page with our own website would give our readers all the content they need in one, more convenient location.
The new blog on Kidobotikz’s own website will give our users all the content they require in one place. They can access a host of new features there and also stay in touch with the activities of our vibrant community.
So, as of today, we are ending our association with the Blogger page. All the old articles will remain here, though, so you can relive your memories while we move them over to the new page.
While the page may change, what hasn't changed is our approach. You can expect the same kind of articles and the same standard of writing that you’ve grown used to.
From tomorrow, you can read about us and the Kidobotikz community at kidobotikz.com/blog.
As always, if you have feedback, we'd love to hear it. Please email us at info@kidobotikz.com.
If one has been part of a multinational conglomerate with several business verticals, the latest fad one can probably relate to is the term “Open Innovation”. Open innovation as a concept is fairly new, and its adoption is yet to take off on a massive scale. One can blame this on the mindset of big institutions that are risk-averse and tend to lock up their innovation with patents.
While safeguarding ideas with patents may have been lucrative, given the royalties it has earned the industry until now, it has had a significant impact on the innovation culture within the ecosystem.
Innovation is no longer disruptive. At best it can be termed incremental and gradual. This means the innovation engine has run dry across most sectors and needs to be restarted.
While the concept of open innovation has been promoted by big players and small players alike, there has been quite a struggle in getting it to move forward. This can be attributed to the general climate, in which innovation and creativity are no longer words of importance outside R&D and product development. A typical employee, perhaps working in testing, has no medium through which to express or explore his or her creative and technical prowess.
While it is understandable that industries encourage employees who think outside the box or go the extra mile to bring uniqueness to their products, more often than not an employee’s thought processes are locked up in the rigours of routine.
What is of paramount importance is the discovery of an activity or curriculum that not only promotes creative thinking, but also helps one think along lines that bring an element of futurism to ideas.
This can be made possible only if technologies of the future are embraced and employees are made “early adopters”.
With the advent of regimes such as Industry 4.0, the Internet of Things (IoT) and cyber-physical processes, the industry, and employees in particular, need to acquaint themselves with the nitty-gritty of these technologies. Also, with the “Hardware Revolution” underway, more computing is being extracted from non-conventional platforms, with devices such as 3D printers, SBCs (Single Board Computers) and microcontrollers coming to the fore.
Such being the case, a knowledge of robotics and the subjects under its gamut, namely electronics, mechanics, algorithms and programming, provides employees with the right skill sets to think across the spectrum and deliver innovative solutions for many a problem. More importantly, it removes the need to look for solutions elsewhere. With a climate that embraces ideas within the system, solutions can be found within the industry. Even where solutions exist elsewhere, employees empowered with the necessary skills will be able to cross-pollinate them to meet the requirements of their firms.
NASA's Deep Space Network (DSN) has received the final piece of data collected by the agency's New Horizons probe during its encounter with the dwarf planet Pluto, which took place on July 14, 2015. Data from the New Horizons mission has revolutionized our understanding of Pluto, revealing the planetoid to be a surprisingly dynamic and active member of our solar system.
Unlike NASA's Dawn mission, which has now spent over a year exploring the dwarf planet Ceres, New Horizons never made orbit around its target. Instead, the probe had only a brief window in which to harvest as much information as possible, before barreling past Pluto into the outer reaches of our solar system.
During the pass, New Horizons' meager power supply of only 200 watts ran a sophisticated suite of seven scientific instruments, which worked to collect over 50 gigabits of data. This information was safely stored in two solid-state digital recorders that form part of the probe's command and data-handling system.
Sending this data back to Earth would prove to be a lesson in patience. New Horizons began transmitting the stored data in September 2015 at a rate of around seven megabits per hour. The information was received back on Earth by NASA's Deep Space Network (DSN).
The transmission was not continuous. Earth's rotation meant that New Horizons could transmit only in eight-hour stints, when the DSN was available, resulting in a transfer rate of around 173 megabits per day.
Furthermore, the New Horizons team had to share the capabilities of the DSN with other exploration endeavors such as the Dawn mission, further slowing the data transmission.
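A quick back-of-the-envelope calculation with the figures above shows why the downlink took so long. This sketch uses only the numbers from the article (roughly 50 gigabits stored, roughly 173 megabits received per day) and assumes ideal conditions with no DSN sharing:

```python
# Back-of-the-envelope New Horizons downlink estimate.
# Figures are from the article; assumes no DSN sharing or overhead.

stored_data_megabits = 50 * 1000   # ~50 gigabits collected during the flyby
daily_rate_megabits = 173          # ~173 megabits received per day

days_needed = stored_data_megabits / daily_rate_megabits
print(f"Minimum downlink time: {days_needed:.0f} days")  # ~289 days
```

Even under these ideal assumptions the transfer needs the better part of a year; sharing the DSN with missions like Dawn stretched it to well over a year.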
However, slow and steady wins the race, and at 5:48 a.m. EDT on the 25th of October, over a year after beginning the process, the DSN station located in Canberra, Australia, relayed the final piece of Pluto data from New Horizons to the probe's mission operations center at the Johns Hopkins Applied Physics Laboratory in Maryland.
The last data transmission contained part of an observation sequence of Pluto and its large moon Charon, as captured by the spacecraft's Ralph/LEISA imager.
"The Pluto system data that New Horizons collected has amazed us over and over again with the beauty and complexity of Pluto and its system of moons," comments Alan Stern, New Horizons principal investigator from Southwest Research Institute in Boulder, Colorado. "There's a great deal of work ahead for us to understand the 400-plus scientific observations that have all been sent to Earth. And that's exactly what we're going to do –after all, who knows when the next data from a spacecraft visiting Pluto will be sent?"
When we think about the year 2020, we usually imagine a time somewhere in the future where flying cars would be the norm and contact with alien civilizations would’ve been established. Followers of our former president Shri Abdul Kalam would probably think of the vision he portrayed in his book “Vision 2020”.
Of course, in our eternal love for procrastination, most of us forget that 2020 is not the far future. It is very much the near future: just four years away!
With such proximity to the promising year, let’s examine the possibilities that could exist in the third decade of the 21st century, particularly in the field of education.
Well, we’re no soothsayers or oracles able to predict the state of education in the 2020s. But what we can do is examine the current trends and plot the graph that ends at the year 2020.
So, what indeed are the trends in the education sector vis-a-vis 2016?
Mobile Learning: Mobile learning, or mLearning, is booming. It’s estimated that the mobile learning industry alone will grow to over $37 billion by 2020. Clearly, this is an eLearning trend people just can’t get enough of.
Gamification: Gamification in the course curriculum promotes knowledge retention and also provides a certain degree of entertainment. It does a great job of bringing elements of entertainment and relaxation to any type of online training. It’s possible that the gamification market will top out at about $2.8 billion in 2016.
Video-Based Training: eLearning designers made a wise move when they began to implement video into online learning. Video-based training is becoming so popular that about 98 percent of ALL organizations will include video in their digital learning strategies in 2016.
Big Data: Big Data, for eLearning, would be the data generated by learners who interact with the learning content as a part of their course. This data is collected through Learning Management Systems.
Now that we have such interesting trends on hand, can we get to the task of predicting how the education scenario would look in the future?
A wild guess couldn’t hurt our chances.
Prediction: Individualized training will be the norm, as opposed to classroom-based learning. Courses will proceed at a pace optimised to match the student’s learning speed. While tutors will continue to play a role in the learning process, their role will be limited to that of a supervisor; the knowledge itself will not be imparted by them. It will likely be digitized and taught using interactive video content. Tutors will instead take the candidate’s knowledge forward and direct them towards their areas of strength. Performance and areas of strength will themselves be determined using metrics based not on memorization of concepts, but on their application. More importantly, technologies such as Big Data will help evaluate the learning attributes of students and generate reports that help tutors understand their students better.
A new NTU robot will soon be spray-painting the interiors of industrial buildings in Singapore, saving time and manpower while improving safety. Known as PictoBot, the robot was invented by scientists from Nanyang Technological University (NTU Singapore) and co-developed with JTC Corporation (JTC) and local start-up Aitech Robotics and Automation (Aitech). PictoBot can paint a high interior wall 25 per cent faster than a crew of two painters, improving both productivity and safety. Industrial buildings are designed with high ceilings to accommodate bulky industrial equipment and materials. Currently, painting the interiors of industrial buildings requires at least two painters using a scissor lift. Working at such heights exposes painters to various safety risks.
In comparison, PictoBot needs only one human supervisor, as it can automatically scan its environment using its optical camera and laser scanner to navigate and paint walls up to 10 metres high with its robotic arm. It can work four hours on one battery charge, giving walls an even coat of paint that matches industry standards. Equipped with advanced sensors, PictoBot can also operate in the dark, enabling 24-hour continuous painting. Developed in a year at NTU's Robotic Research Centre, PictoBot is supported by the National Research Foundation (NRF) Singapore, under its Test-Bedding and Demonstration of Innovative Research funding initiative.
The initiative provides funding to facilitate the public sector's development and use of technologies that have the potential to improve service delivery. This is done through Government-led demand projects, where agencies use research findings to address a capability gap and quickly deploy the new technology upon successful demonstration.
Painting large industrial spaces is repetitive, labour intensive and time-consuming. PictoBot can paint while a supervisor focuses on operating it. The autonomous behaviour also means that a single operator can handle multiple robots and refill their paint reservoirs.
Using PictoBot to automate spray painting helps us mitigate the risks of working at heights when painting high walls typically found in industrial buildings. In addition, it helps to reduce labour-intensive work, thus improving productivity and ensuring the quality of interior finishes. PictoBot is an example of how autonomous robots can be deployed to boost productivity and overcome the manpower constraints that Singapore faces in the construction industry.
The shapes and sizes of drones have changed a lot in recent times, but most serious quadcopters are still controlled by way of a dual-joystick controller (autonomous flyers notwithstanding). A new crowdfunding campaign is coming at it from a different angle, by developing a drone that can be flown with a single hand using a stick and thumb ring.
It is true that getting up to speed as a drone pilot using joystick controllers can take some time. The team behind Shift is aiming to give novices an easier way to earn their wings through what it claims is a more intuitive way to fly.
The drone itself has a pretty standard and respectable enough list of specs: a 4K camera that shoots 13-megapixel stills, 8 GB of onboard memory and even a claimed 30 minutes of flight time.
But the controller is something we haven't seen before. It is basically a short, fat stick that you hold in one hand, sliding your thumb through a ring on top. By moving your thumb around you can then guide the drone through the air: push left and the drone flies left, push forward and the drone flies forward, and push up to have the drone increase its altitude. There is also a separate toggle on the front that can be used to change the drone's orientation with your index finger.
This does sound like a simpler way to control a drone, but we'd be interested to see how well it works in practice. Our encounters with drones controlled via smartphones and watches quickly reminded us how reliable joysticks are once you get a handle on them, and perhaps there is a reason they have remained the controllers of choice for so long.
We'd also question the value of being able to fly one-handed, which is billed as Shift's big advantage. Piloting a drone can take some concentration, so trying to multitask and make phone calls or take a sip of coffee at the same time might just be a recipe for a busted aircraft.
The Shift controller is said to be compatible with some existing drones, including models from Syma and WLtoys, and can be pre-ordered with a camera-less mini-drone. The company hopes to ship in May 2017 if all goes to plan.
There have been significant advances in developing new prostheses with a simple sense of touch, but researchers are looking to go further. Scientists and engineers are working on a way to provide prosthetic users and those suffering from spinal cord injuries with the ability to both feel and control their limbs or robotic replacements by means of directly stimulating the cortex of the brain.
For decades, a major goal of neuroscientists has been to develop new technologies to create more advanced prostheses or ways to help people who have suffered spinal cord injuries to regain the use of their limbs. Part of this has involved creating a means of sending brain signals to disconnected nerves in damaged limbs or to robotic prostheses, so that they can be moved by thought, making control simple and natural.
However, all this has had only limited application because, as well as being able to tell a robotic or natural limb to move, a sense of touch is also required, so the patient knows whether something has been grasped properly or whether the hand or arm is in the right position. Without this feedback, it's very difficult to control an artificial limb properly, even with constant concentration or computer assistance.
Bioengineers, computer scientists, and medical researchers from the University of Washington's (UW) GRIDLab and the National Science Foundation Center for Sensorimotor Neural Engineering (CSNE) are looking to develop electronics that allow for two-way communication between parts of the nervous system.
The bi-directional brain-computer interface system sends motor signals to the limb or prosthesis and returns sensory feedback through direct stimulation of the cerebral cortex – something that the researchers say they've done for the first time with a conscious patient carrying out a task.
In developing the new system, the researchers recruited volunteer patients who were being treated for a severe form of epilepsy through brain surgery. As a precursor to this surgery, the patients were fitted with a set of electrocorticographic (ECoG) electrodes. These were implanted on the surface of the brain to provide a pre-operational evaluation of the patient's condition and to stimulate areas of the brain to speed rehabilitation afterwards.
According to the UW, this allowed for stronger signals to be received than if the electrodes were placed on the scalp, but wasn't as invasive as when the electrodes are put into the brain tissue itself.
While the electrode grid was still installed, the patients were fitted with a glove equipped with sensors that could track the position of their hand, use different electrical current strengths to indicate that position, and stimulate their brain through the ECoG electrodes.
The patients then used those artificial signals delivered to the brain to "sense" how to move their hand under direction from the researchers. However, this isn't a plug-and-play situation. The sensation is very unnatural and is a bit like artificial vision experiments of the 1970s where blind patients were given "sight" by means of a device that covered their back and formed geometric patterns. It worked in a simple way, but it was like learning another language.
"The question is: Can humans use novel electrical sensations that they've never felt before, perceive them at different levels and use this to do a task?" says UW bioengineering doctoral student James Wu. "And the answer seems to be yes. Whether this type of sensation can be as diverse as the textures and feelings that we can sense tactilely is an open question."
For the test, three patients were asked to move their hand into a target position using only the sensory feedback from the glove. If they opened their fingers too far off the mark, no stimulation would occur, but as they closed their hand, the stimulus would begin and increase in intensity. As a control, these feedback sessions were interspersed with others where random signals were sent.
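The feedback rule described above, no stimulation while the fingers are opened too far off the mark, then intensity ramping up as the hand closes on the target, can be sketched in a few lines. This is purely illustrative: the function name, units and thresholds are our own assumptions, not details from the study.

```python
def stimulus_intensity(aperture, target, max_intensity=1.0, dead_zone=0.5):
    """Map hand aperture to a stimulation intensity (illustrative sketch).

    aperture, target: hand openness in arbitrary units (0 = fully closed).
    Returns 0 while the hand is opened beyond the dead zone, then a value
    that ramps up linearly to max_intensity as the hand closes on target.
    """
    error = aperture - target
    if error >= dead_zone:   # fingers too far open: no stimulation
        return 0.0
    if error <= 0:           # at or past the target: full intensity
        return max_intensity
    # Between the dead zone and the target: ramp intensity up linearly
    return max_intensity * (1 - error / dead_zone)
```

For example, with a target of 0.2 and a dead zone of 0.5, a wide-open hand at 1.0 gets no stimulus, while one closing through 0.45 gets half intensity.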
According to the team, the hope is that one day such artificial feedback devices could lead to improved prostheses, neural implants, and other techniques to provide sensation and movement to artificial or damaged limbs.
"Right now we're using very primitive kinds of codes where we're changing only frequency or intensity of the stimulation, but eventually it might be more like a symphony," says Rajesh Rao, CSNE director. "That's what you'd need to do to have a very natural grip for tasks such as preparing a dish in the kitchen. When you want to pick up the salt shaker and all your ingredients, you need to exert just the right amount of pressure. Any day-to-day task like opening a cupboard or lifting a plate or breaking an egg requires this complex sensory feedback."
Given that the battery life of most multicopter drones typically doesn't exceed 30 minutes of flight time per charge, there are many tasks that they simply can't perform. Feeding them power through a hard-wired tether is one option, although that only works for applications where they're hovering in place. Scientists at Imperial College London, however, are developing an alternative – they're wirelessly transferring power to a drone as it's flying.
For their study, the scientists started with an off-the-shelf mini quadcopter. They proceeded to remove its battery, add a copper coil to its body, and alter its electronics.
The researchers also built a separate transmitting platform that uses a circuit board, power source and copper coil of its own to produce a magnetic field. When placed near that platform, the drone's coil acts as a receiving antenna for that magnetic field, inducing an alternating electrical current. The quadcopter's rejigged electronics then convert that alternating current to direct current, which is used to power its flight.
Known as inductive coupling, the technique has been around since the time of Nikola Tesla. According to Imperial College, however, this is the first time that it has been used to power a flying vehicle. While it currently only works if the drone is within 10 cm (3.9 in) of the transmitter, it is hoped that the range can be greatly increased.
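To get a feel for how inductive coupling works, here is a toy calculation of the EMF induced in a receiving coil by a sinusoidally driven transmitter, using Faraday's law expressed through mutual inductance (V_rms = 2*pi*f*M*I_rms). Every component value below is an assumption chosen for illustration, not a figure from the Imperial College setup:

```python
import math

# Toy inductive-coupling estimate: RMS voltage induced in a receiving
# coil by a sinusoidal transmitter current, via mutual inductance.
# All values below are illustrative assumptions.

f = 1.0e6      # drive frequency, Hz (mutual inductance M falls off with distance)
M = 1.0e-6     # mutual inductance between the coils, henries
I_rms = 0.5    # RMS current in the transmitter coil, amperes

V_rms = 2 * math.pi * f * M * I_rms
print(f"Induced EMF: {V_rms:.2f} V rms")  # ~3.14 V
```

The key point the numbers illustrate: the induced voltage scales with frequency and with mutual inductance, which is why the coupling drops off so quickly beyond a few centimetres.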
Additionally, instead of continuously powering battery-less copters, it is envisioned that the technology could be used to recharge drones' onboard batteries as they hover over ground support vehicles equipped with the transmitters – this would allow them to remain airborne while recharging, instead of having to land.
It's also possible that the drones could be wired to serve as flying transmitters themselves, "beaming" power from their battery to recipient devices such as hard-to-reach environmental or structural stress sensors, or even to other drones that need a mid-air recharge. A team at the University of Nebraska-Lincoln, in fact, has already used a quadcopter as a flying wireless charger.
If you don't like the thought of bugs crawling all over you, then you might not like one possible direction in which the field of wearable electronics is heading. Researchers from MIT and Stanford University recently showcased their new Rovables robots, which are tiny devices that roam up and down a person's clothing – and yes, that's as the clothing is being worn.
The centimeter-sized robots hang on by pinching the fabric between their wheels, with the physically-unconnected wheel on the underside of the material held against the others simply by magnetic attraction.
Each Rovable contains a battery, microcontroller, and a wireless communications module that lets it track the movements and locations of its fellow little robots. It also has an inertial measurement unit (IMU), which includes a gyroscope and accelerometer. By using that IMU and by counting its wheel revolutions, the robot is able to keep track of its own location, allowing for limited autonomous navigation on the wearer's body.
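The dead-reckoning idea described above, distance from wheel revolutions plus heading from the gyro, can be sketched as a simple planar pose update. All names and parameter values here are illustrative assumptions, not details of the Rovables firmware:

```python
import math

def update_position(x, y, heading, revolutions, gyro_yaw_rate, dt,
                    wheel_circumference=0.03):
    """Dead-reckoning pose update sketch (all values illustrative).

    Advances a planar pose using wheel revolutions for distance
    travelled and an integrated gyro yaw rate for heading.
    wheel_circumference is in metres; heading in radians.
    """
    distance = revolutions * wheel_circumference   # odometry from wheel count
    heading += gyro_yaw_rate * dt                  # integrate gyro for heading
    x += distance * math.cos(heading)
    y += distance * math.sin(heading)
    return x, y, heading
```

Counting errors and wheel slip accumulate over time, which is one reason this kind of dead reckoning only supports the limited autonomous navigation mentioned above.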
In lab tests, one battery was sufficient for up to 45 minutes of continuous clothes-roaming.
Once the technology is developed further, suggested applications for it include interactive clothing/jewellery, tactile feedback systems, and changeable modular displays such as name tags.
The Rovables were recently described at the User Interface Software and Technology Symposium, and can be seen in action in the video below.
We’ve seen a variety of car related themes on Kit Creatives over the last two weeks. How about taking the fun indoors and making some interesting themes that can make your living room tech savvy?
It certainly would be cool if we could make some projects that help us with the activities we do on a daily basis. Not just that, you could also win a few words of appreciation from guests who come calling.
Following are three fun “Kit-creatives” that you can make using your foundation and beginner level kits.
1) Post Box Indicator-F
In the age of email and WhatsApp, the humble letter is not exactly glamorous, and we tend not to check the post box regularly for mail. Let's make a circuit to detect the presence of a letter inside the box and indicate it with the help of an LED.
Tired of searching for your lost items in the dark corners under the bed? Well, you could use a torch light, but why reach your arms into the darkness when you can send a robot to search for you? Part utilitarian, part prank, this will be a fun project to try out.
You probably have a vacuum cleaner at home, but this cute little innovation could still come in handy if you don’t. Build a cute little robotic cleaner using the stock robot.
Rubik's-cube-sized CubeSats are a nifty, cheap way for scientists to put a research vessel into space, but they're limited to orbiting where they're launched – until now. Los Alamos researchers have created and tested a safe and innovative rocket motor concept that could soon see CubeSats zooming around space and even steering themselves back to Earth when they've finished their mission.
Consisting of modules measuring 10 x 10 x 11.35 cm (3.9 x 3.9 x 4.5 in), these mini-satellites were first launched in 2003, but they currently lack propulsion because they're designed to hitch a ride into space with larger, more expensive space missions. They're usually deployed along with routine pressurized cargo launches, typically into low orbits that limit the kinds of studies that CubeSats can perform.
This limitation is, of course, frustrating for space researchers. In fact, the National Academy of Science recently identified propulsion as one of the main areas of technology that needs to be developed for CubeSats.
Bryce Tappan, lead researcher on the Los Alamos National Laboratory CubeSat Propulsion Concept team, says propulsion would greatly expand the mission-space that these small, low-cost satellites can cover. "It would allow CubeSats to enter higher orbits or achieve multiple orbital planes in a single mission, and extend mission lifetimes," he says.
The roadblock to building a self-propelling CubeSat is the inherent risk in the way conventional spacecraft propel themselves through space. Usually, spacecraft use mixed liquid fuel and oxidizer systems to achieve propulsion – methods that are somewhat unstable. This poses a level of risk that would make self-propelling CubeSats unacceptable aboard another organization's space mission.
"Obviously, someone who's paying half a billion dollars to do a satellite launch is not going to accept the risk," says Tappan. "So, anything that is taken on that rideshare would have to be inherently safe; no hazardous liquids."
The rocket propulsion concept that the researchers have developed is a solid-based chemical fuel technology that is completely non-detonable. They're calling the new concept a "segregated fuel oxidizer system," with solid fuel and a solid oxidizer kept completely separate inside the rocket assembly.
The researchers recently tested a six-motor CubeSat-compatible propulsion array with great success.
"I think we're very close to being able to put this propulsion system onto a satellite for a simple demonstration propulsion capability in space," says Tappan.
The system works in many of the same ways as a conventional chemical rocket motor works, with a pyrotechnic igniter initiating burn in a high nitrogen, high hydrogen fuel section, releasing hydrogen rich gases that flow into the oxidizer section. The chemical reaction there creates tremendous heat and expanding gases that flow through a nozzle to provide thrust.
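The thrust such a motor produces can be approximated with the basic momentum-thrust relation F = mdot * v_e, ignoring the nozzle pressure term. A toy calculation follows, with every number an assumption chosen for illustration rather than a Los Alamos measurement:

```python
# Toy thrust estimate via the momentum-thrust relation F = mdot * v_e.
# Both values below are illustrative assumptions, not measured figures.

mdot = 0.002    # propellant mass flow rate, kg/s
v_e = 2500.0    # effective exhaust velocity, m/s (higher-energy ingredients
                # raise this, which is the advantage Tappan describes)

thrust = mdot * v_e
print(f"Thrust: {thrust:.1f} N")  # prints Thrust: 5.0 N
```

The relation makes Tappan's point concrete: for a given mass flow, a higher exhaust velocity translates directly into more thrust.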
"Because the fuel and oxidizer are separate," said Tappan, "it enables you to use higher-energy ingredients than you could use in a classic propellant architecture. This chemical propulsion mechanism produces very fast, high-velocity thrust, something not available with most electrical or compressed gas concepts."
As well as expanding research capabilities, Tappan says that another desirable application for a self-propelling CubeSat would be a de-orbit capability.
With more than half a million individual pieces of "space junk" now in various orbits around the Earth, small satellites may eventually have to demonstrate a compelling mission before they can be launched, or have a de-orbit capability so they can burn up in the atmosphere without adding to the space junk problem.
If CubeSats were self-propelling, they could send themselves back towards Earth after their mission is complete and burn up in the atmosphere, so they don't add to the space junk issue.
Tappan would eventually like to see their new rocket motor concept used in more ambitious space missions. "Not only simple things like de-orbiting, but in groundbreaking missions like taking a small spacecraft to the moon, or even to somewhere as far away as Mars," he says.
To learn more about the new propulsion system, watch the video below:
Microphysiological systems, or organs-on-chips, are emerging as a way for scientists to study the effect that drugs, cosmetics and diseases may have on the human body, without needing to test on animals. The problem is, manufacturing and retrieving data from them can be a costly and time-consuming process. Now researchers at Harvard have developed new materials to enable them to 3D print the devices, including the integrated sensors to easily gather data from them over time.
At around the size of a USB stick, organs-on-chips use living human cells to mimic the functions of organs like the lungs, intestines, placenta and heart, as well as emulate and study afflictions like heart disease. But as promising as the technology is, making the chips is a delicate, complicated process, and microscopes and high-speed cameras are needed to collect data from them.
"Our approach was to address these two challenges simultaneously via digital manufacturing," says Travis Busbee, co-author of the paper. "By developing new printable inks for multi-material 3D printing, we were able to automate the fabrication process while increasing the complexity of the devices."
In all, the Harvard team developed six custom 3D-printable materials that could replicate the structure of human heart tissue, with soft strain sensors embedded inside. These are printable in one continuous, automated process, with separate wells in the chip hosting different tissues.
"We are pushing the boundaries of three-dimensional printing by developing and integrating multiple functional materials within printed devices," says Jennifer Lewis, another of the paper's co-authors. "This study is a powerful demonstration of how our platform can be used to create fully functional, instrumented chips for drug screening and disease modeling."
The incorporated sensors allow the researchers to study the tissue over time, particularly how their contractile stress changes, and how long-term exposure to toxins may affect the organs.
"Researchers are often left working in the dark when it comes to gradual changes that occur during cardiac tissue development and maturation because there has been a lack of easy, non-invasive ways to measure the tissue functional performance," says Johan Ulrik Lind, first author of the study and postdoctoral fellow at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). "These integrated sensors allow researchers to continuously collect data while tissues mature and improve their contractility. Similarly, they will enable studies of gradual effects of chronic exposure to toxins."
The research was published in the journal, Nature Materials, and a time-lapse of the 3D printing process can be seen in the video below.
It’s a warm Tuesday morning, and an evening has elapsed since the victorious contingent returned to Chennai after a memorable journey! Their bags probably still remain unpacked, but their spirits surely remain high, for it was an event that will be talked about for a long time at Kidobotikz. While it would only be befitting to have minstrels sing of such a performance, we are sticking to writing a blog about it.
After a three-day campaign at NIT-Calicut’s Tathva ‘16, the students of the Kidobotikz community returned home with a string of victories. When the event drew to a close, Kidobotikzians had left a profound impact on everyone from the organizers to the students and the audience.
The techfest was, in every measure of the word, an extravaganza. Hosting seven events related to electronics and electrical engineering and about seven events related to robotics alone, Tathva ‘16 was a cracker of an event, with participants from across the country. These participants, most of whom were from esteemed institutions such as the IITs, NITs and BITS, faced fierce competition from a bunch of school students: our own Kidobotikzians. It is nothing less than a wonder that a bunch of school kids competed in an event hosted for college students and put them all to rout.
The techfest offered a maze of competitions meant to test the knowledge and skills of students. Events such as Amazed, Collision Course, Death Race, Dirt Race, Accelero-Bot X, Circuit Race, Coil Gun and E-Racer were spread over a span of two days.
Some of these events had truly interesting themes. The ‘Amazed’ event, in which Kidobotikzians grabbed all three places, was a truly challenging one to take part in. It required the participants to develop an autonomous maze-solving robot that had to find its way out by following black lines. Another interesting event was the ‘League of Machines’, where six Kidobotikzians split the prizes for all three places. Each of these events saw quite active participation and involved several elimination rounds. Despite all this, Kidobotikzians dominated the leaderboards in every one of them, a testament to the deftness and planning in their preparation. With the final tally standing at 15 prizes, one can truly say it was an out-and-out Kidobotikz show!
The next stop for this marauding band of roboticists is the FTC, for which preparation is underway. Along with the next KRG, tentatively expected to be held early next year, the academic year of these geeks is packed with action.
What will the Royal Navy look like in 2036? This month's Unmanned Warrior 2016 exercise taking place off the West Coast of Scotland might provide some of the answers. The Navy's first ever large scale demonstration of marine robotic systems not only showcases new technology, but tests the ability of unmanned vehicles to work with one another as well as with conventional naval ships.
The brainchild of then First Sea Lord Admiral Zambellas in 2014, Unmanned Warrior is part of Joint Warrior – a tri-service exercise involving forces from Britain, NATO and allied nations. Including 5,700 personnel, 31 warships, and almost 70 aircraft, it's a major international effort to develop tactics and skills to deal with conflicts in the air, on the surface, underwater, and in amphibious operations.
Unmanned Warrior assesses the rapidly emerging autonomous and remote controlled technologies that could play a major part in wars of the future. With operations spread over the West Coast of Scotland and West Wales, Unmanned Warrior is playing host to over 50 aerial, surface and underwater Maritime Autonomous Systems (MAS) as they explore the areas of surveillance, intelligence-gathering, and mine countermeasures.
Unmanned Warrior is operating in four ranges: the Hebrides around Benbecula in the Western Isles and Stornoway to the north, the British Underwater Test and Evaluation Centre (BUTEC) at the Kyle of Lochalsh by Skye, and Applecross, where dummy minefields have been laid down.
The machines used in the exercise are a remarkable spectrum of aircraft, surface vessels, and underwater craft. The star of the show is the British Army's Watchkeeper Unmanned Aerial Vehicle (UAV) operated by the Royal Artillery 47th Regiment, which is not only part of the tests, but also provides support for ships heading for Joint Warrior.
Other aircraft include the hand-launchable Black Star winged drone, the Schiebel Camcopter S100 mini helicopter, the US Navy's NRQ 21 fixed wing UAV, the twin engined Sea Hunter, self-landing unmanned aerial vehicles, the Boeing ScanEagle with a new visual detection and ranging system, and the pilot-optional Leonardo Solo helicopter.
One craft of particular interest is the Blue Bear Blackstart fixed wing UAV, which is being used as a communications link to mission control in the Command and Control centers. The latter are mostly a collection of undistinguished white ISO containers built for portability, but they can handle data feeds from 40 different systems at once.
One of these centers is aboard the support ship MV Northern River, which did double duty as the target of a "pirate attack." Watchkeeper helped foil this mock attack before going on to catch a "smuggler" by following him as he drove off after collecting stolen goods from an accomplice on the beach.
In addition to the flying drones, Unmanned Warrior also hosts a fleet of robotic surface boats and submersibles. There's the Pacific 950, a Rigid Inflatable Boat (RIB) equipped with a remote control kit, thermal imaging and all-around vision, so it can act as a watchdog for ships at anchor or making slow passages through harbors. Then there's the Maritime Autonomy Surface Testbed (MAST) for evaluating new robotic technologies, and the Hydrographic Survey, which is using Sea Gliders and Wave Gliders to study the sea bottom and monitor salinity, temperature, and how these change with depth.
For the minehunting challenge, actual Royal Navy minehunter ships tested the Remus 100 and Remus 600 robotic submersibles, which use advanced sonar to seek out dummy mines. The Remus vehicles are designed to be lightweight and easily customizable, so they can be quickly adapted to different tasks. The challenge also tested unmanned surface minesweepers, such as the Atlas ARCIMS.
"The technologies demonstrated in Unmanned Warrior have the potential to fundamentally change the future of Royal Navy operations just as the advent of steam propulsion or submarines did," says Royal Navy Fleet Robotics Officer Commander Peter Pipkin. "This is a chance to take a great leap forward in Maritime Systems – not to take people out of the loop, but to enhance everything they do, extending our reach and efficiency using intelligent robotics at sea."
The Royal Navy video below discusses the importance of the event.
Self-driving cars, trucks and buses might get the bulk of the headlines, but a team at the University of Washington Bothell (UWB) is developing a smaller kind of autonomous vehicle. With the aim of providing a relatively inexpensive alternative to owning an autonomous car, the team is creating a self-driving trike that may even open up the possibility of an automated ride-sharing network, like a bike version of Uber's or NuTonomy's proposed services.
The team, headed up by Tyler Folsom, has been experimenting with fitting autonomous systems into tricycle frames and this work culminated in August with a test that saw a bright orange recumbent trike drive itself in a circle. That modest command, entered via remote control, demonstrated the vehicle's ability to stop, start and turn itself to reach a destination, but Folsom says it's just a "baby step" on the way to deeper autonomy.
"I'm trying to shift the talk about self-driving cars to self-driving bicycles and making sure bicycles are part of the automation equation," says Folsom.
The outcome of that equation, the team hopes, is to eventually produce autonomous vehicles that are much lighter and more environmentally friendly than self-driving cars. With a targeted price tag of around US$10,000, ideally they'd be cheap enough to replace the family car or current public transport options. To keep that price down, the team is trying to maximize the efficiency of the electronics driving the trikes.
"We're using things much less powerful than a smartphone," says Folsom. "Part of the concept is that you don't have to spend as much money as the big car companies are spending. My contention is you don't need all that much processing power to make autonomy happen."
Reducing the required computational power may be easier to achieve if human error is removed from the picture by setting up a better autonomous infrastructure, which is a goal Folsom has been vocal about for years with his Elcano Project. Along with dedicated lanes for autonomous vehicles, he puts forward the idea of renewable energy-powered self-driving taxi systems, possibly with a fleet of velomobiles like Organic Transit's ELF, which could ferry people around cities without impacting too heavily on the environment.
"The big thing for me is the effect this could have on global warming," says Folsom. "If we can push transportation in this direction – very light vehicles – it's a major win for the environment. I want to have the technology that lets people make that choice if we decide, yes, by the way, survival would be a nice thing."
The project, which involves over 20 people, has received a $75,000 grant from Amazon Catalyst.
What word has two ‘O’s and an ‘R’ in it? You are thinking of the word “Robot”. Nope, we are talking of the other word: “Hooray”!
Yes, “Hooray” it was!
The bandwagon of roboticists at NIT-C has finished its campaign and is now retiring to the stables!
Yep. It was the third and final day of the Kidobotikzians’ tour of Tathva ‘16. The event, conducted ever so smoothly by the college, was a wonderful experience for the students, their accompanying faculty and the visitors too. Especially considering that Kidobotikzians revelled in an event pitched at the standard of graduates!
All the effort put in by Kidobotikzians leading up to the event paid off in full measure, as they return home tonight with a string of trophies, each a hard-earned, hard-fought and wonderfully cherished victory!
The third day began with a great deal of promise, with a string of interesting events lined up. Being the final day, many of the events from the previous days held their concluding rounds, and Kidobotikzians, by virtue of being seasoned roboticists, dominated most of them. Even the final results tally hardly does justice to the calibre of these young roboticists. To think that school-going students travelled to a college fest and competed with such success against the best college students from across the nation is quite an accomplishment. A major share of the credit goes to the students and their faculty, who spent many an hour helping the students understand the concepts of robotics to such a degree of professionalism that these kids know them like the back of their hands. A lot of credit is also owed to the parents, who have been a constant support to the wishes of these young students and who help them try such interesting activities as part of their personal development.
As we sign off, our bunch of roboteers is already gearing up for the next big event: the FTC. With detailed planning and preparation underway, Kidobotikzians are expected to work wonders at every major event henceforth!