Sustainable and Green Human Robots

Dr. S. Enjavi, Chief Technology Officer of the European Federation of Robotics, has invented a new generation of Sustainable and Green Human Robots that can do everything a human can do without needing conventional recharging: they draw their energy from nature.

We have conducted a short interview with him:



How many hours, and how often, do these Sustainable and Green Human Robots need to be recharged if powered by solar energy? And how are they charged in cold weather?

Especially for service robots that will be active in supporting roles at night (guards, nurses, etc.), we use the full daytime/daylight charging cycle. Of course, we have not yet managed to get energy out of nothing! We are planning to partner with high-density, green battery technologies such as the urine-powered fuel cell developed in the UK by the Greek researcher Ieropoulos.
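
To put the daylight-charging idea into perspective, here is a rough, purely illustrative energy-budget calculation in Python; every figure (panel area, efficiency, irradiance, duty hours, power draw) is an assumed placeholder rather than a Robofed specification.

```python
# Illustrative back-of-the-envelope energy budget for a daylight-charged
# humanoid that works through the night. All numbers are hypothetical
# assumptions, not Robofed specifications.

PANEL_AREA_M2 = 0.6          # assumed solar collection area on the robot/dock
PANEL_EFFICIENCY = 0.20      # assumed photovoltaic efficiency
DAYLIGHT_HOURS = 8           # assumed usable daylight per day
IRRADIANCE_W_M2 = 500        # assumed average irradiance (clouds, angle, season)

NIGHT_DUTY_HOURS = 10        # assumed active night shift
AVG_POWER_DRAW_W = 120       # assumed average power draw while on duty

harvested_wh = PANEL_AREA_M2 * PANEL_EFFICIENCY * IRRADIANCE_W_M2 * DAYLIGHT_HOURS
required_wh = NIGHT_DUTY_HOURS * AVG_POWER_DRAW_W

print(f"Harvested per day: {harvested_wh:.0f} Wh")
print(f"Required per night: {required_wh:.0f} Wh")
if harvested_wh < required_wh:
    deficit = required_wh - harvested_wh
    print(f"Deficit of {deficit:.0f} Wh to be covered by auxiliary sources "
          "(e.g. high-density green batteries such as urine fuel cells).")
```

With these assumed numbers, daylight harvesting alone would fall well short of a full night shift, which is exactly why auxiliary high-density sources such as the urine fuel cell are being considered.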

What are the top 5 abilities of these Humanoid Robots?

The purpose of humanoids is to be a drop-in replacement for human beings in all their daily functions, the most important being (see the sketch after this list):

a) moving, including obstacle avoidance, door handling, and step climbing 
b) detecting human identities in their immediate environment through audio and visual processing 
c) engaging in interaction scenarios with humans, whether enabling scenarios like helping seniors to walk or disabling scenarios like blocking a criminal's escape route 
d) call-for-help scenarios, where the correct human agent(s) are brought to the scene and given sufficient information to engage quickly and effectively 
e) the companion scenario, in which an artificial voice and face provide a suitable "partner" for anyone from toddlers to the elderly, and even for their pets, covering baby-sitting and pet-sitting
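
To make this scenario-driven design a little more concrete, here is a minimal Python sketch of how the five abilities might be organised behind a single dispatcher; all class, function, and scenario names are our own illustrative assumptions, not Robofed's actual software.

```python
# Minimal sketch: the five core abilities as named "scenarios" behind one
# dispatcher. Names are illustrative assumptions only.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Scenario:
    name: str
    handler: Callable[[dict], str]   # takes sensor/context data, returns an action

def move_to(context: dict) -> str:
    # a) locomotion with obstacle avoidance, door handling, step climbing
    return f"navigate to {context.get('target', 'unknown')} avoiding obstacles"

def identify_humans(context: dict) -> str:
    # b) audio/visual identification of people nearby
    return f"identified {len(context.get('faces', []))} people"

def interact(context: dict) -> str:
    # c) enabling or disabling interaction with a person
    return f"assist or block: {context.get('mode', 'assist')}"

def call_for_help(context: dict) -> str:
    # d) summon the correct human agent(s) with enough information
    return f"alert {context.get('responder', 'nurse')} with situation summary"

def companion(context: dict) -> str:
    # e) artificial voice/face companionship, baby-sitting, pet-sitting
    return "start conversation and monitoring"

SCENARIOS: Dict[str, Scenario] = {
    s.name: s for s in [
        Scenario("move", move_to),
        Scenario("identify", identify_humans),
        Scenario("interact", interact),
        Scenario("call_for_help", call_for_help),
        Scenario("companion", companion),
    ]
}

def dispatch(name: str, context: dict) -> str:
    return SCENARIOS[name].handler(context)

print(dispatch("call_for_help", {"responder": "night nurse"}))
```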

Do you agree that the complexity of the human brain makes any kind of passable man-made imitation seem totally unrealistic? 

We view the machine as our equal in learning and in tackling personality and social complexities. The amount of dysfunctionality and criminality in our societies is proof that humans have not mastered the secrets of productive human interaction, and some of their limitations are very human: fatigue, limited memory, conflicts, and personal priorities all make a human agent only moderately effective at achieving their objectives. Robots face a different set of limitations, but they will learn in the "cloud"/hive brain, simultaneously recording and analysing millions of interactions and getting the chance to understand humans and serve them with superhuman abilities.
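
The "cloud"/hive-brain idea can be pictured as a shared store that pools interaction records from every robot in the fleet, so that each unit benefits from what all the others have experienced. The toy sketch below uses hypothetical names and structure; it is an illustration of the idea, not Robofed's architecture.

```python
# Toy sketch of a hive brain: all robots upload interaction records to one
# shared store, and statistics learned from the pool are available to all.
from collections import Counter
from typing import Dict, List

class HiveBrain:
    def __init__(self) -> None:
        self.interactions: List[Dict] = []   # pooled records from the whole fleet
        self.outcome_stats: Counter = Counter()

    def record(self, robot_id: str, scenario: str, outcome: str) -> None:
        self.interactions.append(
            {"robot": robot_id, "scenario": scenario, "outcome": outcome}
        )
        self.outcome_stats[(scenario, outcome)] += 1

    def success_rate(self, scenario: str) -> float:
        ok = self.outcome_stats[(scenario, "success")]
        bad = self.outcome_stats[(scenario, "failure")]
        return ok / (ok + bad) if (ok + bad) else 0.0

hive = HiveBrain()
hive.record("unit-001", "help_senior_walk", "success")
hive.record("unit-042", "help_senior_walk", "failure")
hive.record("unit-007", "help_senior_walk", "success")
print(hive.success_rate("help_senior_walk"))  # every robot sees the pooled statistic
```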

How do robots learn to adapt to the unpredictable decisions and actions of humans?

When the Greeks created the Hippocratic oath, with its principle of primum non nocere (first, do no harm), they established a powerful paradigm for the laws of robotics. Life is a game of chance, and there will be cases where smaller or larger harm comes to a human who diverges from the expected norms of man-machine interaction. But we in the European Federation of Robotics (Robofed) can mathematically guarantee that, unless someone runs at speed and brutally rams her head into the beautiful titanium alloys of our humanoids, the robot's first priority is to minimize the chance that a human comes into contact with physical, biological or chemical risks. In a similar way, an offender running towards or away from a crime scene will never be as safe with a startled, scared or angry human law enforcer as with a calm, accurate law-enforcing machine.
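
One way to read "the robot's first priority is to minimize risk to humans" is as a hard constraint applied before any action is executed. The sketch below shows such a priority rule under assumed names and an assumed risk threshold; it illustrates the idea, not Robofed's actual control code.

```python
# Sketch of a "minimize risk to humans first" rule: candidate actions above
# an assumed risk ceiling are discarded, and the safest remaining action wins.
from typing import List, NamedTuple

class Action(NamedTuple):
    name: str
    task_value: float   # how well the action serves the current objective
    human_risk: float   # estimated chance of physical/biological/chemical harm

MAX_ACCEPTABLE_RISK = 0.01  # assumed hard ceiling on risk to humans

def choose_action(candidates: List[Action]) -> Action:
    safe = [a for a in candidates if a.human_risk <= MAX_ACCEPTABLE_RISK]
    if not safe:
        # no candidate is safe enough: hold position rather than risk harm
        return Action("hold_position", 0.0, 0.0)
    # among safe actions, prefer lowest risk first, task value second
    return min(safe, key=lambda a: (a.human_risk, -a.task_value))

options = [
    Action("block_exit_at_speed", task_value=0.9, human_risk=0.05),
    Action("block_exit_slowly", task_value=0.7, human_risk=0.005),
]
print(choose_action(options).name)  # -> "block_exit_slowly"
```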

What will happen if robots start to learn "bad behaviours" from humans? Are there ways to make them "unlearn" these behaviours?

Indeed, the most powerful approach to robot training and evolution is learning-by-watching, a learning system that in humans depends on so-called "mirror neurons" and that for our machines is tentatively called MiMicMe. In all truth, we expect that the complexity of massively deployed humanoids interacting with numerous humans will generate some questionable machine behaviours, but we also expect many, if not most, humans to view those behaviours with understanding. Judging by people's deep involvement with game characters, for example, we would not be surprised if an old lady in a nursing home soon said, "Oh yes, the humanoid switched the TV to adult channels while I was trying to sleep, but it is only a humanoid after all, and it has served me well for months, so I forgive it."

As most interactions take place in the context of "scenarios", scenarios that generate complaints and controversy, such as "buy ganja at the street corner and roll me a joint", would end up being temporarily or permanently banned, possibly after legal scrutiny: not unlearned as such, but made inactive. Of course, for most purposes no risky scenario will be entertained at all, and risk aversion will be even higher in the presence of third parties (so no rap language from the humanoid to shock and offend passers-by!).
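
The "not unlearned but inactive" mechanism can be pictured as a registry in which questionable scenarios stay in the learned repertoire but are flagged as temporarily or permanently banned and refused at dispatch time. The sketch below uses hypothetical names throughout and is only an illustration of that idea.

```python
# Sketch of scenario banning: banned scenarios are not erased, they are
# flagged (optionally with an expiry) and refused when requested.
from datetime import datetime, timedelta
from typing import Dict, Optional

class ScenarioRegistry:
    def __init__(self) -> None:
        self.bans: Dict[str, Optional[datetime]] = {}  # None = permanent ban

    def ban(self, scenario: str, days: Optional[int] = None) -> None:
        self.bans[scenario] = (datetime.now() + timedelta(days=days)) if days else None

    def is_allowed(self, scenario: str) -> bool:
        if scenario not in self.bans:
            return True
        expiry = self.bans[scenario]
        if expiry is not None and datetime.now() > expiry:
            del self.bans[scenario]          # temporary ban has lapsed
            return True
        return False

registry = ScenarioRegistry()
registry.ban("buy_ganja_and_roll_joint")        # permanent ban after legal scrutiny
registry.ban("switch_tv_to_adult_channels", 30) # temporary ban pending review
print(registry.is_allowed("help_senior_walk"))          # True
print(registry.is_allowed("buy_ganja_and_roll_joint"))  # False
```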

Dr. S. Enjavi 
Chief Technology Officer, 
European Federation of Robotics 
Email: info@enjavi.com 
Twitter: enjavi_com 
Phone: +45.42417244
