Artificial Intelligence (AI), Machine Learning, Ethics and Economics
by Alfio Cerami September 28, 2020
This blog post discusses Mr Drone’s dilemmas from the drone’s perspective (the Artificial Intelligence – AI – machine perspective), including those concerning the wrong signature behavior of a target individual, robot visual intelligence, cognitive AI algorithms, holographic radars, advanced smart machine technologies, smart situational awareness, AI judgment technologies, and bio-inspired signal recognition, as well as their impact on the economy and their repercussions on post-modern societies.
Drones are claimed to decrease the cost of bombing in war (see the Rand Corporation’s set of blog posts on the effectiveness of drones). But Mr Drone is now a thinking machine thanks to recent Artificial Intelligence (AI) developments. Like living creatures, drones face ethical and practical dilemmas, such as in the drone-target bombing of ISIS-affiliated members. Drones can accidentally target civilians. A report by the White House has raised questions about the psychological condition of drone pilots, who are exposed to excessive working hours that can limit their judgement capabilities (see also Rand Corporation). The possible negative consequences of “video game wars” and the subsequent de-personalization of targeting individuals on a TV screen are also an issue of concern (see Hijazi et al. 2017). In the future, drones will have no remote-control pilots (Chatham House 2019). As a video by Tomas van Houtryve has shown, there is also a dilemma of “behavioral signature recognition”. The drone pilot, or the drone itself/himself/herself, might not be able to distinguish individuals sitting and doing yoga from individuals sitting in prayer before a terrorist attack. Will the drone’s artificial intelligence be able to recognize suspicious behavior? (See the video Blue Sky Days – A Sky Full of Cameras by Tomas van Houtryve on Vimeo.)
Mr Drone’s dilemmas in future counter-terrorism include the Artificial Intelligence (AI) design of the machine’s emotions and feelings in decision-making – or, in the words of Man and Damasio (2019), whether the machine (in my case Mr Drone, or maybe Mrs Drone – the 0-1 binary Matrix) is able to care about what it (maybe better he/she) does or thinks (including consciousness, intelligence and the feeling process itself). As far as possible sub-topics related to AI are concerned, I like to think about the reduction of some civic rights (such as the right to privacy) in the context of a politics of fear. This can provide more substance, and a more nuanced political science and international relations flavor, to these dilemmas.
In his book “Fear Itself: The New Deal and the Origins of Our Time”, Katznelson (2014) reminds us that the reduction of some civic rights and liberties (such as the right to privacy) has been temporarily admissible to preserve democracy during extremely difficult times (such as World War II or the period immediately after the 9/11 attacks). But one of the greatest successes of American democracy was not to abandon the formal rules and practices of parliamentary discussion and decision-making, but to reinforce them even in times of emergency and crisis.