iBorderCtrl is an AI-based lie-detector project funded by the European Union’s Horizon 2020 programme. The tool is intended to be used on people crossing the borders of some European countries, officially in order to speed up border control. It is being tested in Hungary, Greece and Latvia until August 2019 and should then be officially deployed.
The project analyzes facial micro-expressions to detect lies. We have serious concerns about such a project. To people without a background in AI or computer science, the idea of using a computer to detect lies can sound appealing: computers are widely believed to be perfectly objective.
But the AI community knows this is far from true: biases are nearly omnipresent, and we have no idea how the dataset used by iBorderCtrl was built.
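As a minimal, purely hypothetical sketch (not based on iBorderCtrl’s actual data or model), the snippet below shows how this kind of dataset bias arises in practice: a classifier trained on data in which a demographic attribute is spuriously correlated with the “deceptive” label ends up flagging that group more often, even though the attribute carries no real information about deception.

```python
# Hypothetical illustration of dataset bias -- not iBorderCtrl's data or model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Feature 0: a genuine (noisy) behavioural signal; feature 1: group membership.
signal = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Ground truth depends only on the behavioural signal, never on the group.
y_true = (signal + rng.normal(scale=1.0, size=n) > 0).astype(int)

# Biased training set: most "deceptive" examples from group 0 are dropped,
# so the label becomes correlated with group membership in the training data.
keep = ~((group == 0) & (y_true == 1) & (rng.random(n) < 0.7))
X_train = np.column_stack([signal, group])[keep]
y_train = y_true[keep]

model = LogisticRegression().fit(X_train, y_train)

# On a balanced test set, the model flags group 1 far more often than group 0.
X_test = np.column_stack([signal, group])
pred = model.predict(X_test)
for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {g}: flagged as 'deceptive' {rate:.1%} of the time")
```

The point of the sketch is simply that, without knowing how the training data were collected and labelled, there is no way to rule out this kind of systematic skew in the resulting predictions.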
More generally, we have to remember that AI has no understanding of humans (to be honest, it has no understanding at all). It is only beginning to be able to recognize the words we pronounce, but it does not understand their meaning.
Lies rely on complex psychological mechanisms. Detecting them would require a lot more than a simple literal understanding. Trying to detect them from a handful of key facial expressions looks utopian, especially as facial expressions can vary from one culture to another. For example, nodding the head usually means “yes” in the Western world, but it means “no” in countries such as Greece, Bulgaria and Turkey.
The ‘iBorderCtrl’ AI system uses a variety of ‘at home’ pre-registration systems and real-time ‘at the airport’ automatic deception-detection systems. One of the critical methods used in automated deception detection is the analysis of micro-expressions. In this opinion article, we argue that, given the psychological sciences’ current understanding of micro-expressions and their association with deception, such in vivo testing is naïve and misinformed. We consider the lack of empirical research supporting the use of micro-expressions in the detection of deception and question the current understanding of the validity of specific cues to deception. In the absence of clearly defined and reliable cues to deception, we question the validity of using artificial intelligence built on cues to deception that have no current empirical support.