Development and Design of a Humanoid Robot System MIRAA: Intelligent & Autonomous Mobile Assistant

This study represents work in progress towards a complete humanoid robot system. With this aim, the system must be capable of interacting with humans, answering questions, performing gestures, assisting with finding information, remembering different types of things, retrieving internet data related to the user's query, detecting errors in its own system and saving a log file for further development and debugging, and controlling its servos autonomously with PID (proportional-integral-derivative) control. The system integrates facial and object recognition through OpenCV (camera) with PID-based servo control of the head to achieve its objectives, among other advanced capabilities. Details of the implementation of the currently developed system are presented. Encouraged by the success of the preliminary results obtained on our campus, we intend to develop a complete prototype.

A humanoid robot is a robot based on the general structure of a human, such as a robot that walks on two legs and has an upper torso, or a robot that has two arms, two legs and a head. A humanoid robot does not necessarily look like a real person; the ASIMO humanoid robot, for example, has a helmet instead of a face. An android (male) or gynoid (female) is a humanoid robot designed to look as human as possible, although these words are often treated as synonyms of humanoid. While there are many humanoid robots in fiction, real humanoid robots have been built since the 1990s, and realistic human-looking android robots have been developed since 2002. Like robots, virtual avatars can be called humanoid when they resemble humans (Mamun et al., 2020).

Review of Literature
An autonomous human-like robot that is able to adapt to changes in its environment while continuing to pursue its goals is considered a humanoid robot. These features set humanoids apart from other types of robots. There has been a lot of progress in the development of humanoids in recent years, and there are still many opportunities in this field. Several research groups are trying to design and develop various humanoid platforms based on mechanical and biological concepts. Many researchers focus on the design of the lower torso to make the robot navigate the way a normal human being does. Incorporating the waist, buttocks, knees, ankles and lower legs is a more complex and challenging task. Upper torso design, which includes arm and neck design, is complex but interesting work. Walking gait analysis, optimal control of multiple motors and other actuators, degree-of-freedom (DOF) control, adaptability control, and intelligence are also challenging tasks in making humanoids behave like humans (Humanoid Robot, 2019).
Leonardo da Vinci drew a humanoid automaton in 1495, considered the first humanoid design. It was designed to sit up, move its arms, move its head, and close its jaw. The 18th century can be considered a fertile time for the development of many automata able to reproduce some human movements. In 1773, Pierre and Henri-Louis Jaquet-Droz invented the first automaton able to write. A mechanical trumpeter was built by Friedrich Kaufmann in 1810; the trumpeter had a pinned drum that was used to activate valves controlling the airflow.
The period of construction and development of humanoids began in the 19th century, when John Brainard introduced the Steam Man in 1865. It was powered by a steam engine and used to pull a cart. The Electric Man was created by Frank Reade Jr. in 1885, which was more or less an electric version of the Steam Man. A prototype soldier named Boilerplate was built in 1893 by Dr. Archibald Campion. A growing number of humanoid systems appeared in the 20th century. The Westinghouse company developed a humanoid robot called Elektro in 1938, which was able to move, talk and smoke.
From the 1960s to the 1990s, a wide variety of legged robot platforms began to appear in the United States, Russia, France, and especially Japan. A great deal of work was done on hopping robots at the Massachusetts Institute of Technology (MIT) in the 1980s. The planar biped, Spring Flamingo, Spring Turkey, Uniroo and the 3D biped were built at MIT and excelled at walking in a dynamic and stable way (Akhtaruzzaman and Shafie, 2010).

Hardware
In this project we have used various types of hardware components for different purposes, described below:

Servo Motor
A servomotor is a rotary actuator or linear actuator that allows precise control of angular or linear position, speed and acceleration. It consists of a suitable motor coupled to a sensor for position feedback. It requires a relatively sophisticated controller, often a dedicated module designed specifically for use with servomotors. Servomotors are not a specific class of motor; the term is often used to refer to any motor suitable for use in a closed-loop control system. Servomotors are used in applications such as robotics, CNC machinery and automated manufacturing. They are commonly used as a high-efficiency alternative to stepper motors. Stepper motors have some inherent ability to control position, as their output moves in discrete internal steps. This allows them to be used for open-loop position control without any feedback encoder, as their drive signal specifies the number of steps to move. However, this requires the controller to 'know' the position of the stepper motor at power-up. So, after the first power-up, the controller must activate the stepper motor and turn it to a known position, e.g. until it activates a limit switch. This can be observed when switching on an inkjet printer: the controller will move the print-head carrier to the extreme left and right to establish the end positions. A servomotor, by contrast, will immediately turn to whatever angle the controller commands, regardless of its position at power-up (Akhtaruzzaman and Shafie, 2010).
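The contrast can be sketched in a few lines of Python. This is a toy simulation of our own devising (not code from the robot): the stepper must first be "homed" against a limit switch, while the closed-loop servo uses its position sensor to converge on any commanded angle.

```python
# Toy simulation (illustrative only): open-loop stepper homing vs.
# closed-loop servo positioning.

def home_stepper(position, limit_switch_at=0):
    """Step toward the limit switch until it triggers; position is then known."""
    steps = 0
    while position > limit_switch_at:
        position -= 1          # one blind step toward the switch
        steps += 1
    return position, steps     # controller now 'knows' the motor is at 0

def servo_move(current_angle, target_angle, gain=0.5, tolerance=0.1):
    """Closed loop: the sensor reports the angle, so no homing is needed."""
    while abs(target_angle - current_angle) > tolerance:
        error = target_angle - current_angle   # feedback from position sensor
        current_angle += gain * error          # correct toward the target
    return round(current_angle)

print(home_stepper(position=37))   # -> (0, 37): 37 blind steps to the switch
print(servo_move(0.0, 90.0))       # -> 90, regardless of starting angle
```

The point of the sketch is the asymmetry: the stepper's final position is only known because it was driven to a reference, whereas the servo's loop corrects from any unknown starting angle.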

LED Dot Matrix
An LED dot matrix is an electronic digital display device that displays information on machines, clocks and watches, public transport departure indicators, and many other devices that need to display simple alphanumeric text (and/or graphics) at limited resolution. The display consists of LEDs arranged in a rectangular dot-matrix configuration (other shapes are possible, though not common), so that text or graphics can be displayed by turning selected lights on or off.
The dot-matrix driver converts the instructions of the controller processor into signals that turn the individual elements of the matrix on or off to produce the required display. In this project we use an 8x8 LED dot matrix, which contains a total of 64 LEDs (Cheng, 2016).
 A camera was used so that the robot could see: a 2-megapixel wide-angle camera that allows the OpenCV program to perceive the environment.
 An external 360-degree microphone was used so that the robot could hear more accurately.
 Plastic, PVC, metal and acrylic were used for the external appearance of the body.
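As a sketch of how the 8x8 dot-matrix display above can be driven, a frame can be described as 64 on/off bits, one byte per row. The bitmap below (a crude letter "H") is a made-up example for illustration, not the robot's actual firmware.

```python
# Illustrative sketch: an 8x8 dot-matrix frame as 8 bytes, one byte per row.
# Each '1' bit turns one of the 64 LEDs on. The "H" bitmap is an example
# pattern, not taken from the robot's firmware.

FRAME = [
    0b01000010,
    0b01000010,
    0b01000010,
    0b01111110,
    0b01000010,
    0b01000010,
    0b01000010,
    0b00000000,
]

def render(frame):
    """Return the frame as text, '#' for a lit LED and '.' for off."""
    rows = []
    for byte in frame:
        rows.append("".join("#" if byte & (1 << (7 - col)) else "."
                            for col in range(8)))
    return "\n".join(rows)

print(render(FRAME))
```

A real driver would shift these same row bytes out to the matrix one row at a time; only the output device differs.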

Software
Different software services are used in this study. The list of required software is below:
 WebGUI Service - With MRL's WebGUI service you can very quickly and easily set up a web interface for an Arduino.
AudioFile Service - This service can play different types of audio files. We use this service to play the robot's audio feedback output files. CLI Service - The command line interface (CLI) service allows you to interact directly with MyRobotLab (MRL) when it is started from an operating system shell (MRLComm to the Arduino, 2019).
For example, /runtime/start/python/Python will start a service named python of type Python.

Clock Service -
The clock can generate regular messages to trigger other services.
HtmlFilter Service - HtmlFilter is a service used either to strip HTML tags out of text or to wrap input text in HTML tags. For example, we use it to filter HTML code from Program AB output before sending text to the speech synthesis service.
Local Speech Service - This text-to-speech synthesis service uses the local operating system's text-to-speech engine. We used this service as the voice of the robot and also installed a custom teen female voice from VE.
Log Service - Log is a helpful diagnostic service that knows how to connect itself to the user interface. It can display messages from other services. It can also detect different types of errors and save the log to a text file so that developers can carry out further bug fixing and development.
OpenWeatherMap Service - This service gathers current weather data as well as forecasts for MRL. We created an API token on the website for this project; an API key can be obtained from https://home.openweathermap.org/
PID Service - A PID service computes an output from a tracking-related input. Tracking services currently use it as a tracking strategy. Input is sent to the PID service, a "compute" method is invoked, and the appropriate output is sent to a servo (PID Controller, 2019).
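The compute step that the PID service performs can be sketched as a generic textbook PID loop. This is not MRL's actual implementation, and the gains and setpoint below are illustrative values, not the ones used in the robot.

```python
# Generic discrete PID controller sketch -- illustrates the input -> compute
# -> output-to-servo cycle described above. Gains and setpoint are
# illustrative, not MRL's actual values.

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def compute(self, measurement, dt=1.0):
        error = self.setpoint - measurement
        self.integral += error * dt                   # accumulated error (I)
        derivative = (error - self.prev_error) / dt   # rate of change (D)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simulated servo toward a target at 90 (e.g. a tracked face position).
pid = PID(kp=0.4, ki=0.05, kd=0.1, setpoint=90.0)
position = 20.0
for _ in range(50):
    position += pid.compute(position)   # the output moves the servo
print(round(position, 1))
```

In the robot, the "measurement" would be the tracked object's position from OpenCV and the output would be sent to the head servo rather than added to a simulated variable.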

PIR Service - A passive infrared sensor (PIR sensor) is an electronic sensor that measures infrared (IR) light radiating from objects in its field of view. PIR sensors are often used in motion detectors. We connect it through the Arduino (another compatible controller can also be used).

Program AB Service - AIML stands for Artificial Intelligence Markup Language. It was created by Dr. Richard Wallace to build the A.L.I.C.E. bot. AIML is a recursive language that allows natural language text input to be broken down and matched to responses that the chatbot can send.
Servo Service - The Servo service is used to control servos through a microcontroller such as an Arduino or a Raspberry Pi (Servo Controller). The service allows attaching or detaching a device, controlling its position and the speed at which it moves to that position, and turning the servo on or off.

WebkitSpeechRecognition Service
WebkitSpeechRecognition uses the speech recognition built into the Chrome web browser. This service requires the WebGUI to be running in the Chrome browser. We used this service for speech recognition.

WikiDataFetcher Service
This service grabs data from wiki websites (for now, Wikidata). Wikidata stores data as entities, each with an ID. For example, Adam Sandler has the ID Q132952. Each entity contains several elements. To use WikiDataFetcher, we query by the entity ID, not by the label.
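The ID-based lookup can be sketched as follows. The tiny in-memory "store" and its fields are hypothetical stand-in data of our own; the real service fetches entities from wikidata.org over the network.

```python
# Illustrative sketch of querying Wikidata-style entities by ID rather than
# by label. WIKIDATA_STORE is hypothetical stand-in data; the real
# WikiDataFetcher service retrieves entities from wikidata.org.

WIKIDATA_STORE = {
    "Q132952": {
        "label": "Adam Sandler",
        "description": "American actor and comedian",   # example field only
    },
}

def fetch_entity(entity_id):
    """Look up an entity by its ID (e.g. 'Q132952')."""
    entity = WIKIDATA_STORE.get(entity_id)
    if entity is None:
        raise KeyError("unknown entity ID: " + entity_id)
    return entity

print(fetch_entity("Q132952")["label"])   # -> Adam Sandler
```

Querying by ID rather than label avoids ambiguity, since many distinct entities can share the same human-readable label.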

Natural Language Processing
Natural language processing, usually abbreviated NLP, is a branch of artificial intelligence that enables computers to interact with humans using natural language. The ultimate goal of NLP is to read, decipher, understand and make sense of human languages in a valuable way (Ashish Singh Bhatia, 2018). A typical natural-language conversation between a human and a machine proceeds as follows: a) A human speaks to the machine. b) The device captures the audio. c) The audio is converted to text. d) The text data is processed. e) The processed data is converted back to audio. f) The machine responds to the human by playing the audio file.
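The six-step loop above can be sketched as a chain of stages. Each function below is a stub standing in for a real service (STT, the Program AB chatbot, TTS), and the canned question and reply are illustrative examples of our own, not the robot's actual response set.

```python
# Sketch of the conversation loop a)-f). Each stage is a stub standing in
# for a real service; the canned reply is an illustrative example.

def capture_audio():                       # b) the device captures audio
    return "<audio: 'what is your name'>"

def speech_to_text(audio):                 # c) audio is converted to text
    return audio.split("'")[1]

def process_text(text):                    # d) the text data is processed
    replies = {"what is your name": "My name is MIRAA."}
    return replies.get(text, "I do not understand.")

def text_to_speech(reply):                 # e) the reply is converted to audio
    return "<audio: '" + reply + "'>"

def conversation_turn():                   # one full turn of the loop
    audio = capture_audio()
    text = speech_to_text(audio)
    reply = process_text(text)
    return text_to_speech(reply)           # f) the machine plays the audio

print(conversation_turn())   # -> <audio: 'My name is MIRAA.'>
```

In the actual system, WebkitSpeechRecognition, Program AB and the Local Speech service play these three roles.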

Use of Natural Language Processing
Natural Language Processing drives the following common applications:  Language translation applications such as Google Translate.  Word processors such as Microsoft Word and tools such as Grammarly, which employ NLP to check the grammatical accuracy of text.  Interactive Voice Response (IVR) applications used in call centers to respond to specific user requests.  Personal assistant applications such as OK Google, Siri, Cortana, and Alexa.

Model of NLP project
The basic model of NLP we have used in this project can be divided into three parts, illustrated in the figure below (Mike Barlow, 2016).

STT
We have used a service from the framework called "WebkitSpeechRecognition" for speech-to-text conversion; this service requires the WebGUI to be running in the Chrome web browser. The service was actually developed by Google and is often called Web Speech (Mike Barlow, 2016).
AIML, or Artificial Intelligence Markup Language, is an XML dialect for creating natural language software agents. It was developed between 1995 and 2002 by Richard Wallace and the worldwide free software community. AIML formed the basis for A.L.I.C.E. ("Artificial Linguistic Internet Computer Entity"), a successor to the earlier ELIZA program, which won the annual Loebner Prize for artificial intelligence three times and was also the Chatterbox Challenge champion in 2004.

Processing
•The processing part takes the converted text as input and processes it to produce output.

TTS
• Text-to-Speech receives an output text and renders it as a computer-generated voice.

Elements of AIML
AIML contains a variety of elements. The most important of these are described below:
Categories - In AIML, categories form the basic unit of knowledge. A category contains at least two further elements: a pattern and a template.
Patterns - A pattern is a string of characters intended to match the inputs of one or more users.
Processing - In the processing stage we used a service called "Program AB" to process the data coming from STT. STT sends the data to the Program AB service; Program AB contains a variety of pre-added AIML files, comprising various categories, templates, patterns, sets, etc. The program then matches the input against the loaded AIML files by priority. Finally it sends the output to TTS to be produced as a voice.
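The category/pattern/template matching described above can be sketched in miniature. The categories below are made-up examples, and Program AB's real matcher is considerably more elaborate (priorities, sets, recursion via srai, etc.); this only shows the core idea of wildcard patterns paired with templates.

```python
# Miniature sketch of AIML-style matching: each category pairs a pattern
# (which may contain the '*' wildcard) with a template. The categories are
# illustrative; Program AB's real matcher is far more sophisticated.

import re

CATEGORIES = [
    ("HELLO", "Hi there!"),
    ("MY NAME IS *", "Nice to meet you, <star/>."),
    ("*", "I am not sure I understand."),       # catch-all, lowest priority
]

def respond(user_input):
    text = user_input.upper().strip()
    for pattern, template in CATEGORIES:        # first match wins
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        m = re.match(regex, text)
        if m:
            star = m.group(1).title() if m.groups() else ""
            return template.replace("<star/>", star)
    return ""

print(respond("hello"))             # -> Hi there!
print(respond("my name is alice"))  # -> Nice to meet you, Alice.
```

Note how the wildcard capture plays the role of AIML's `<star/>` element, substituting the matched words into the template.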

TTS - For Text to Speech (TTS) we have used the Local Speech Service; this text-to-speech synthesis service uses the local operating system's text-to-speech engine. Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech computer or speech synthesizer, and can be implemented in software or hardware. A text-to-speech (TTS) system converts ordinary language text into speech; other systems render symbolic linguistic representations, such as phonetic transcriptions, into speech. In our project we have used the "American Noelle VE" voice of a teenage girl (Mike Barlow, 2016). A diagram of a typical text-to-speech synthesis system is given below:

Servo Control
As described in the hardware section, a servomotor allows precise control of angular or linear position, speed and acceleration by coupling a motor to a position-feedback sensor and a relatively sophisticated controller. To control the servos we added two services from the framework, "Arduino" and "Servo". The Arduino service communicates with the Arduino board to which the servo motors are connected, and the Servo service sends positioning signals to the servos through the Arduino board. The figure below shows how the servos are connected.
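The positioning signal itself is a PWM pulse whose width encodes the commanded angle. A minimal sketch of that mapping follows; the 1000-2000 microsecond endpoints are the common hobby-servo convention and are an assumption here, since the exact range varies between servo models.

```python
# Sketch of how a servo position command becomes a PWM pulse width. Hobby
# servos typically map roughly 1000-2000 microseconds to 0-180 degrees;
# the exact endpoints vary by model, so these values are an assumption.

MIN_PULSE_US = 1000   # pulse width at 0 degrees
MAX_PULSE_US = 2000   # pulse width at 180 degrees

def angle_to_pulse(angle_deg):
    """Clamp the angle to 0-180 and map it linearly to a pulse width in us."""
    angle_deg = max(0.0, min(180.0, angle_deg))
    span = MAX_PULSE_US - MIN_PULSE_US
    return int(MIN_PULSE_US + span * angle_deg / 180.0)

print(angle_to_pulse(0))     # -> 1000
print(angle_to_pulse(90))    # -> 1500
print(angle_to_pulse(180))   # -> 2000
print(angle_to_pulse(270))   # out of range, clamped -> 2000
```

On the robot, the microcontroller generates this pulse repeatedly (typically every 20 ms) on the pin to which the servo is attached.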

RESULTS AND DISCUSSION:
Performance Testing - This project went through all forms of hardware testing; all of the hardware was tested at different voltage, current and resistance values. In software testing, the project was tested by running the robot's configuration and calibration code with different servo values.

CONCLUSION AND FUTURE DIRECTION:
This is a humanoid project in which we have implemented a variety of abilities, such as listening, speaking back, answering questions, remembering different things, learning from questions, retrieving internet data and speaking it back, moving different parts of the body, making different gestures, moving the body randomly, blinking the eyes, etc. Since this is a humanoid project, we have implemented these basic capabilities in the robot.
There are many more features we would like to add in future work:  Our project currently takes the form of an upper torso (upper body); next we can build the lower part to create a full-body humanoid system.  We can also make this robot walk after completing the lower part of the body.  We can add telepresence and teleoperation capabilities (teleoperation: moving the body of the robot remotely; telepresence: allowing a human to see and interact with others without being physically present, with the robot doing the job).  We can add the capability to understand, process and speak Bengali.  We can build an interactive UI so that the user can control, configure and give input to the robot.  The voice can be developed to express more emotions, with additional capabilities such as pitch and note shifting, delay flexibility for expressing emotion, singing, and mimicry of different voices.  Remote control with authentication.

ACKNOWLEDGEMENT:
Thanks to the supervisor and the Head of the Department, who supported this work with proper assistance and help in the analysis and writing needed to conduct a successful research study. Special thanks to the Vice-Chancellor of Gono Bishwabidyalay, Professor Dr. Laila Parveen Banu, for helping us with the Robot MIRAA exhibition.

CONFLICTS OF INTEREST:
The authors declare that they have no conflicts of interest regarding the publication of this research.