Artificial Intelligence: Ethics, Innovation, Investments, and Responsibilities

The India Mobile Congress 2018 will be held on 25th-27th October 2018 at Aerocity, New Delhi, and is set to be South Asia’s most impressive ICT and TMT event. The theme for this year is ‘New Digital Horizons. Connect. Create. Innovate’. When it comes to exploring, understanding and using future technologies, few themes are as exciting as artificial intelligence ethics and innovation. Mankind has long been fascinated with the idea of creating machines that possess human-level intelligence and ability. Each new generation of technology has expanded what machines can do with less and less human involvement. We are now at a stage where staggering developments in the fields of artificial intelligence and robotics are making us reimagine our notions of innovation itself.

AI Uses, Industries & Investments

At the IMC 2018, global industry leaders, governmental representatives, technology experts and enthusiasts, developers, sponsors, brands, businesses, media persons and visitors will come together to discuss and experience digital communication technology, products, services, future innovations, policies and more. If you’re interested in learning about AI uses and applications, how you can expect artificial intelligence to change your life, AI investments, or building a business in the niche, the India Mobile Congress will be your one-stop destination for all these questions, interests and more.

Artificial Intelligence Ethics

Conversations about AI and its applications invariably come around to the issue of ethics. Who’s going to be responsible for a machine that does something wrong? How does one define right and wrong for machines? Can something that ‘runs’ on man-made intelligence be expected to also adhere to human standards of morality? Is it fair to expect humanity from something that is not even human? What are the values that go into ‘creating’ intelligent non-beings? The discussion on artificial intelligence ethics can be divided into two subsets:

• Robot ethics: Deals with the ethics of humans creating, constructing, using and addressing robots and other machines with AI.
• Machine ethics: Deals with concerns regarding building machines that act ethically or as if ethical.

The conversation on robot ethics is a challenging one. It begins with robot rights and the notion of extending rights to machines: it deals with the responsibility human beings must show to the machines they create, much like they show towards other human beings and living creatures. The conversation treads several grey areas, such as what would constitute robot rights: should machines be granted rights such as the right to life, to freedom, to speech and expression? Who would implement these rights and how would they be upheld? As the field of robotics advances and we draw closer to humanoid robots becoming a mainstream reality, the need to define the rights and duties of robots and AI within human society becomes a more pressing issue. In October 2017 the android Sophia was awarded citizenship of Saudi Arabia, sparking a global debate on robot rights, AI ethics and the blurring lines between mankind and its creations.

Some of the common concerns that arise regarding human responsibility while designing and using AI are:

• The intentional use of AI and robots to harm other humans.
• Replacing humans with robots in capacities, professions and tasks that actually require a ‘human touch,’ such as jobs like policing, nursing, mental healthcare intervention and therapy.
• Any threats to human dignity caused by the absence of qualities such as empathy, understanding, kindness and the positive applications of a subjective bias.
• The need for promoting and maintaining transparency when designing and using AI, since it will have massive ramifications for human existence.
• Weaponization of AI along with autonomy may create war machines that act beyond human control and human limits of reason and acceptable violence.
If you thought the discussion on robot ethics was challenging, it gets even stranger when we venture into machine ethics. How do you even begin to ascribe something as intangible as human morality to a machine? Do we expect (and thus design) AI machines to imbibe human value systems, or do we hold them to a higher standard? Sci-fi enthusiasts will remember Isaac Asimov’s Three Laws of Robotics, which, though rooted in fiction, are widely cited as core principles of robotics:
• Robots must not harm human beings or allow them to be harmed.
• Robots must follow commands given by humans except when these commands contravene the First Law.
• Robots must protect their own safety and existence as long as doing so does not conflict with the First or Second Law.

Asimov later introduced a ‘Zeroth Law’ that precedes the others: robots must not harm humanity or, through inaction, allow humanity to come to harm.
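The strict precedence among these laws can be pictured as a simple priority check. The sketch below is purely illustrative (the `Action` class and `permitted` function are hypothetical names invented here, not any real robotics API), showing how a lower-ranked law yields whenever a higher-ranked one is at stake:

```python
# Illustrative sketch of Asimov's laws as a strict priority ordering.
# All names here (Action, permitted) are hypothetical, for explanation only.
from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False   # Zeroth Law concern (highest priority)
    harms_human: bool = False      # First Law concern
    obeys_order: bool = True       # Second Law concern
    preserves_self: bool = True    # Third Law concern (lowest priority)

def permitted(action: Action) -> bool:
    """An action is permitted only if no higher-priority law is violated."""
    if action.harms_humanity:      # Zeroth Law outranks everything
        return False
    if action.harms_human:         # First Law outranks obedience
        return False
    if not action.obeys_order:     # Second Law outranks self-preservation
        return False
    return True                    # Third Law: self-preservation last
```

For example, an order to harm a human (`Action(harms_human=True)`) is refused even though it obeys a command, because the First Law outranks the Second.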

The debate on machine morality stems from a single overarching concern: there may come a day when AI and machines are smarter than humans, and on that day human beings may lose control of their inventions. Should such a day be realized, what is the degree of danger that would face humankind, and what can be done to prevent it? Most global scientists and tech experts agree that to continue work on AI, creators must prioritize building friendly AI, so that AI autonomy is tempered with an inherently people-friendly core.

The conversation today raises the question: with the line between science fiction and reality blurring so much that we’re drawing inspiration from entertainment, how do we define accountability with respect to AI? What is the potential of proposals such as installing neuromorphic chips to mimic biological neural processes and non-linear ‘thinking’ in AI? Should we design learning algorithms that explicitly instruct machines in a detailed understanding of right and wrong and of human behavior?

At the IMC 2018, telecom and tech experts will be delving into the fascinating world of Artificial Intelligence ethics and responsibilities, the future of AI in India and use cases of AI.

Talking Artificial Intelligence Ethics At The 2018 India Mobile Congress

Interested in learning more about artificial intelligence ethics? Have questions about how you can invest in AI or what you can expect from AI home solutions? Come to the India Mobile Congress 2018 and learn from the experts themselves. This year’s panel will include industry leaders, such as:

  • Mr. Ajit Rao, Sr. Director of Engineering, Qualcomm
  • Mr. Badri Gomatam, Chief Technology Officer, Sterlite Technologies Limited
  • Mr. Ajay Sharma, Director, Intel India Strategy Office, Intel

Also participating in the conference session on AI will be representatives from Google, Mediatek and IBM. The IMC 2018 discussion on artificial intelligence and its applications, potential and moral considerations will be moderated by representatives from KPMG.

Log on to www.indiamobilecongress.com to register for the India Mobile Congress 2018.