From body scans in prisons to plant lice on potatoes: 3 brilliant projects that use smart cameras to widen our take on things

Klaas Dijkstra

How do you get a body scanner to learn to recognise smuggled goods? Can a camera spot the difference between cotton and nylon? How can AI be used to grow a digital orchard that apple growers can reap the benefits from? These are all questions Klaas Dijkstra, our professor of applied sciences in Computer Vision & Data Science, likes to sink his teeth into. He shares three of his visionary projects that use smart cameras to widen our take on things.


From industrial manufacturing to potato growing, more and more industries are discovering the benefits of smart cameras that can see things better than the naked eye. In the Computer Vision & Data Science professorship, they’ve turned training artificial intelligence models into an art form. “From disease detection to quality control and from precision farming to health care: we can teach AI models practically anything,” says brand-new professor of applied sciences Klaas Dijkstra. “We work with our students on a wide range of projects from industry.” So let’s check some of them out.

1. A self-learning body scanner 

Airports, law courts, prisons: the body scanner has for years been the proven method for checking for weapons and smuggled goods. In Klaas’ research, his team and medical technology manufacturer OD Security take the body scanner a step further. “Security staff currently detect contraband using X-ray images. In our research, we teach the scanner to flag abnormalities itself,” explains Klaas. “We do this by collating all kinds of photos of the human body and training the model to recognise abnormalities that could indicate contraband. We use, for instance, a medical dummy with the same characteristics as the human body. The aim is for the body scanner and the smart camera to immediately detect every single difference and see what the human eye cannot. And that means we can make prisons, airports and law courts even safer.”
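The core idea, flagging every deviation from a known “normal” body, can be sketched in a few lines. This is a hypothetical toy illustration, not the team’s actual model: the baseline profile (as might come from the medical dummy), the scan values and the tolerance are all invented for demonstration.

```python
import numpy as np

# Hypothetical sketch: compare a scan against a baseline "normal body"
# profile (e.g. measured on a medical dummy) and flag regions whose
# intensity deviates beyond a tolerance. All values are invented.
baseline = np.array([0.2, 0.2, 0.3, 0.3, 0.2, 0.2])  # assumed normal profile
scan     = np.array([0.2, 0.2, 0.3, 0.9, 0.2, 0.2])  # a dense object at index 3

def flag_anomalies(scan, baseline, tolerance=0.3):
    """Return the indices where the scan deviates from the baseline by
    more than the tolerance: candidates for a contraband alert."""
    return np.flatnonzero(np.abs(scan - baseline) > tolerance).tolist()

print(flag_anomalies(scan, baseline))  # [3]
```

In practice the trained model learns what “normal” looks like from thousands of images rather than a single fixed baseline, but the flag-the-difference principle is the same.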

2. Better recycling using the textile camera 

From tea towels to underpants: any object made from textile carries a label with its fibre composition, be it cotton, viscose, spandex, polyester or anything else from the wide choice of fibres available. “If there’s no label on the product, the human eye can’t distinguish between different types of fabric. But a hyperspectral camera can,” says Klaas. “We’re drawing on the expertise our colleagues in the professorship of Circular Plastics already possess: together we’ve developed a camera that recognises different plastics, and we’re now teaching it to do the same trick with textile. Because the camera recognises the type of fibre, we’re better able to recycle discarded textile. The purer the waste, the higher the quality of the recycled textile products we can make. And that means we can give textiles a second life. It’s child’s play!”
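A hyperspectral camera records how strongly each pixel reflects light at many wavelengths, and each fibre type has its own characteristic reflectance curve. A minimal sketch of the classification step might match a pixel’s spectrum to the nearest reference curve. The spectra below are invented placeholder numbers, not real measurements, and a production system would use far more wavelength bands and a trained model rather than this simple nearest-match rule.

```python
import numpy as np

# Invented reference spectra: reflectance at a handful of wavelength bands
# per fibre type. Real hyperspectral cameras record hundreds of bands.
REFERENCE_SPECTRA = {
    "cotton":    np.array([0.82, 0.75, 0.60, 0.55, 0.40]),
    "polyester": np.array([0.30, 0.45, 0.70, 0.65, 0.80]),
    "viscose":   np.array([0.70, 0.68, 0.50, 0.62, 0.45]),
}

def classify_fibre(pixel_spectrum):
    """Nearest-centroid match: return the fibre whose reference spectrum
    is closest (in Euclidean distance) to the observed pixel spectrum."""
    best_label, best_dist = None, float("inf")
    for label, ref in REFERENCE_SPECTRA.items():
        dist = np.linalg.norm(pixel_spectrum - ref)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# A slightly noisy observation that should still land on "cotton".
observed = np.array([0.80, 0.77, 0.58, 0.56, 0.42])
print(classify_fibre(observed))  # cotton
```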

3. Smart cameras help potato growers 

At only 1.2 to 1.6 millimetres long, the green peach louse can hardly be seen by the naked eye. Yet this minuscule aphid can cause havoc in a field of potatoes. “Which is why so many potato growers spray their crops preventively with pesticides,” explains Klaas. “But that’s a waste of money, and it isn’t good for people or planet either. Our smart camera can not only spot the aphids but also take crystal-clear photos of them. We then teach the cameras to recognise the green peach louse and send a signal to the farmer, who then only has to spray where it’s really needed.”

Apple growers can also reap the benefits of Klaas’ smart cameras. “We’re training AI models to recognise diseased apples in a tree. From Elstar to Granny Smith, we feed the model thousands of photos of every kind of apple. The challenge in this project is that we can only take photos during a short period each year; after all, you only get trees full of apples in the autumn. So we’re using AI to generate photos of different types of apple tree ourselves. For instance, we ask AI to generate photos of an Elstar apple tree and then use those photos to train an AI recognition model. We basically grow a digital orchard that we can harvest data from the whole year round.”

Synthetic data: fast-forwarding the future

The application of this kind of synthetic data opens up a huge range of possibilities for training smart cameras better in the future, predicts Klaas. “To train a model, you need a data set of at least ten thousand photos. Using generative AI techniques such as Stable Diffusion and fine-tuning methods such as DreamBooth, you can generate those photos far more quickly and then use them to train another AI model. One form of artificial intelligence is improving another, and that fast-forwards developments massively. I’m looking forward to seeing what the future holds for us!”
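To get a feel for how a “digital orchard” is seeded, consider the prompt side of the pipeline: each combination of apple variety, condition and variation becomes one request to a text-to-image model such as Stable Diffusion. The sketch below only builds those prompt strings; the variety names, conditions and prompt wording are assumptions for illustration, and in practice each prompt would be sent to a generation pipeline and the resulting images used to train the recognition model.

```python
# Hypothetical sketch: enumerate prompts for a text-to-image model to
# generate a synthetic training set. Names and wording are invented.
VARIETIES  = ["Elstar", "Granny Smith", "Jonagold"]
CONDITIONS = ["healthy", "with diseased apples"]

def orchard_prompts(varieties, conditions, per_combo=2):
    """One prompt per (variety, condition, variation) combination."""
    prompts = []
    for variety in varieties:
        for condition in conditions:
            for i in range(per_combo):
                prompts.append(
                    f"{variety} apple tree, {condition}, autumn orchard, variation {i}"
                )
    return prompts

prompts = orchard_prompts(VARIETIES, CONDITIONS)
print(len(prompts))  # 12 = 3 varieties x 2 conditions x 2 variations
```

Scaling the variation count is how a handful of prompt templates can grow into the ten-thousand-photo data sets Klaas mentions.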

Find out more about our research 

Find out what the professorship in Computer Vision & Data Science can do for you and get in touch with Klaas Dijkstra. Did you know that you can also follow a minor or master’s in Computer Vision & Data Science?
