Can Robots or AI Replace Airport Security Staff?

Airport security is done by humans, but could computers do better?

At airports, we trust scanners, security cameras and special security staff to detect weapons, narcotics or suspicious people before we board a plane. And when going through the security gate, the thought may have crossed your mind: Couldn’t modern computer technology (AI) do this on its own, far better than humans?

When a human has to sit and look at thousands of bags or camera images over a day, we may wonder how many potential dangers are overlooked because the person's concentration inevitably lapses now and then, if only for a brief moment. Wouldn't artificial intelligence be able to do the job more safely?

Artificial intelligence is not so intelligent

The short answer is no, but it could take some pressure off staff. Artificial intelligence, or AI, is nowhere near advanced enough for a scanner, a robot or a camera to do the job alone. So far, AI is a good tool for flagging that a bag on the conveyor belt could contain something dangerous or illegal.

But assessing a bag's contents is still such a complex task for artificial intelligence that humans have to step in and make the final judgement.


2 reasons why artificial intelligence can’t stand alone

Overall, we need to understand why artificial intelligence can’t do airport security on its own:

The difference between a gun and a hair dryer can be big

So let's start with the development of the technology. Research into artificial intelligence's ability to deduce whether a malicious person is walking around an airport has not come very far yet; we are a long way from that scenario.

Detecting such a person would require us to explain to the computer something extremely complex, with myriad variables, which, incidentally, occurs exceptionally rarely. Computers are good at overseeing situations with many familiar variables; rare situations full of unfamiliar ones are another matter, so we are far from computers taking over security alone.

The complex human brain is one thing, however. At the very least, shouldn't artificial intelligence be able to tell the difference between a gun and a hair dryer? Well, certainly. But this is where the game of cat and mouse begins, because a gun doesn't have to be assembled to get through security.

It could be broken up into smaller pieces. It might be covered up, or wrapped in tape so it no longer looks like a gun.

Someone determined to smuggle a gun will do anything to hide it, for instance by inventing entirely new shapes for it.

A pedagogical explanation: why artificial intelligence can be tricked

Let's look at an example of what we mean by the cat playing with the mouse, using two variables: shape and appearance. Suppose you've built an artificial-intelligence hobby project at home, shaped like the carry-on baggage belts used in airport security.

Let's name the hobby project AI. You'd like AI to know the difference between lemons and oranges. Oranges are round, lemons are more elongated, and they come in shades of orange and yellow. So you tell AI to pay attention to two variables: shape and color.

You go to the greengrocer, buy 11 oranges and 10 lemons, and present 10 of each to AI's algorithms. Now AI knows 10 different shapes and color shades of lemons and of oranges.

You then tell AI to trigger its built-in alarm whenever it encounters a lemon or an orange. But you did buy 11 oranges, and the last one you cut into a square and color green.

You now run the green, square orange through AI's hand-luggage belt, and even though it is an orange, AI does not trigger its alarm. You can then tell AI that this was indeed an orange, so that it responds the next time it sees an orange of that shape and color.
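The fruit example above can be sketched in a few lines of code. This is a hypothetical toy, not a real scanner: it represents each fruit by two made-up numbers (elongation for shape, hue for color) and uses a simple nearest-neighbour rule, raising the alarm only when a new item lies close to something it has already seen.

```python
import math

# Hypothetical training data: (elongation, hue) pairs.
# Oranges: round (elongation near 1.0), orange hue (around 30 degrees).
# Lemons: elongated (near 1.4), yellow hue (around 55 degrees).
oranges = [(1.0 + i * 0.01, 30 + i) for i in range(10)]
lemons = [(1.4 + i * 0.01, 55 + i) for i in range(10)]
training = [(f, "orange") for f in oranges] + [(f, "lemon") for f in lemons]

def classify(sample, max_distance=15.0):
    """Return the label of the nearest known example,
    or None if nothing seen in training is close enough."""
    label, dist = min(
        ((lbl, math.dist(sample, feat)) for feat, lbl in training),
        key=lambda pair: pair[1],
    )
    return label if dist <= max_distance else None

print(classify((1.02, 33)))   # an ordinary orange -> 'orange', alarm triggers
print(classify((1.0, 120)))   # green, square "orange" -> None, no alarm
```

The square green orange sails through because its hue lies far from anything in the training data; only after you add it as a new training example would this kind of model recognise it.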

In the same way, you can infinitely change the shape or material of a gun. And this is what we mean by 'the cat playing with the mouse'. Once an artificial intelligence knows the existing variations of guns, anyone who wants to take a gun on board a plane will simply try to camouflage it so that the artificial intelligence doesn't detect it.

This, incidentally, is also why it's difficult for machines to sort rubbish: it comes in many different shapes, colors, materials and so on. And it is crucial that these systems don't make too many mistakes. Otherwise, we humans lose confidence in them.

Artificial intelligence is built to discriminate

This example uses a simple model to show how algorithms dictate a computer's limited capabilities. But it also shows how an algorithm discriminates based on its variables.

Because that is what algorithms do: discriminate. So we have to be careful, for it is ultimately democratic rights that are at stake if we do not learn to understand artificial intelligence before we use it unfettered!

In the US, there are numerous examples of algorithms that, for example, put black people at a disadvantage because the algorithms are trained with a specific type of data. There are simply too many complex human variables for artificial intelligence to do the job alone.

Computers are good at, for example, playing chess and other tasks where the job is to survey many predefined combinations and try out many options simultaneously. But in situations where many factors are involved, and we want to explain causal relationships of cause and effect, the computer still comes up short.

So a lack of technical competence and the ethical issues that would arise are two main arguments for not letting artificial intelligence run airport security alone.
