Network researcher

Dominic Martin
Université du Québec à Montréal (UQÀM)
École des sciences de la gestion
Département d'organisation et ressources humaines
Research interests
  • Artificial intelligence
  • Technological innovation
  • Ethical issues
  • Business ethics
  • Political economy
General information
Phone number: (514) 987-3000 x2226
Main achievements
2018 - L'éthique animale va-t-elle sauver les robots ? [Will animal ethics save robots?]

If we wanted to grant moral or legal status to an artificial intelligence, what criterion should we choose? A few months ago, Saudi Arabia granted citizenship to Sophia, a humanoid robot equipped with artificial intelligence (AI) that allows it to converse, more or less well, with human beings. Beyond its media impact and its irony (many immigrant workers in Saudi Arabia are denied citizenship), the case of the robot Sophia raises a fundamental question: what moral or legal status should we grant an AI?

2018 - Shedding light on confusion around AI and work

There are many questions about AI's impact on our work. How big will the change be? How do we prepare for it? How can we address rising inequality? Recent developments in artificial intelligence (AI), particularly machine learning (ML), are impressive, and there is increasing awareness of the potential impacts of these new technologies on work and the economy in general. According to a 2018 Gallup poll, nearly three-quarters of adults in the US (73 percent) say an increased use of AI will eliminate more jobs than it creates. Are we making the right assumptions about AI and the transformation of work?

2016 - Preparing for a future with artificial intelligence

Artificial intelligence technology is advancing rapidly, and society is not sufficiently prepared for the changes it could bring. The year 2016 will be a memorable one for artificial intelligence (AI). First, there were important breakthroughs: companies like Google, Mercedes-Benz and Toyota are now building self-driving cars. As well, we now have software that can administer psychotherapy or write a press release. Deep-learning approaches using systems similar to the neural networks in the human brain have produced systems that can perform facial recognition as accurately as humans can, detect lung cancers using X-ray images and learn Chinese in two weeks, among other tasks.

2016 - Who Should Decide How Machines Make Morally Laden Decisions?

Who should decide how a machine will decide what to do when it is driving a car, performing a medical procedure, or, more generally, when it is facing any kind of morally laden decision? More and more, machines are making complex decisions with a considerable level of autonomy. We should be much more preoccupied by this problem than we currently are. After a series of preliminary remarks, this paper will go over four possible answers to the question raised above. First, we may claim that it is the maker of a machine that gets to decide how it will behave in morally laden scenarios. Second, we may claim that the users of a machine should decide. Third, that decision may have to be made collectively or, fourth, by other machines built for this special purpose. The paper argues that each of these approaches suffers from its own shortcomings, and it concludes by showing, among other things, which approaches should be emphasized for different types of machines, situations, and/or morally laden decisions.