
Research Focus

This research focus is part of the Professorship of Applied Sciences in Computer Vision and Data Science.

A strong trend in computer science that extends to the field of artificial intelligence is Moore's law, which states that the number of transistors in a microchip doubles roughly every two years. From this it follows that the cost of computing power keeps decreasing, and because current AI advancements rely heavily on the available processing power, AI advances rapidly along with it.

Since the inception of computing, this trend has shifted how we approach practical problems. Where in the past technical solutions were sought by logically defining them through programming and rule-based systems, nowadays problems are described in the form of annotated data, exemplar images or written natural language, from which the AI system learns or on which it acts.
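To make this contrast concrete, the minimal sketch below compares a hand-written rule with a classifier learned from annotated examples on a hypothetical defect-detection task; the brightness threshold, patch size and synthetic data are illustrative assumptions, not part of the professorship's work.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical task: decide whether an image patch shows a defect.
# Rule-based approach: a hand-written brightness threshold.
def rule_based_is_defect(patch: np.ndarray, threshold: float = 0.6) -> bool:
    return patch.mean() > threshold

# Data-driven approach: the same decision is learned from annotated examples.
rng = np.random.default_rng(0)
patches = rng.random((200, 32 * 32))                # 200 flattened example patches (synthetic)
labels = (patches.mean(axis=1) > 0.5).astype(int)   # annotations (here generated for the sketch)

model = LogisticRegression(max_iter=1000).fit(patches, labels)

new_patch = rng.random((1, 32 * 32))
print("rule-based:", rule_based_is_defect(new_patch.reshape(32, 32)))
print("learned:   ", model.predict(new_patch)[0])
```

The point of the comparison is that the rule encodes the solution explicitly, while the learned model derives it from the annotated data that describes the problem.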

Through extreme amounts of data and seemingly abundant computing resources, foundation models have emerged that can solve long-standing problems like image segmentation, tracking and chatting in a general way. These models can only be trained with large investments, mainly in the electricity bill. This energy is not infinitely available: we either need Moore's law to keep cutting the overall cost of AI, or we need to think of cleverer solutions.
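As an illustration of how far such general, pre-trained models reach out of the box, the sketch below applies an off-the-shelf segmentation network without any task-specific training; the choice of torchvision's DeepLabV3 and the dummy input are assumptions made for the example (the pre-trained weights are downloaded on first use, and real images would additionally need resizing and normalization).

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Load a general-purpose, pre-trained segmentation model.
model = deeplabv3_resnet50(weights="DEFAULT").eval()

# A dummy RGB tensor stands in for a real, preprocessed photo (batch of 1, 3 channels, 520x520).
image = torch.rand(1, 3, 520, 520)

with torch.no_grad():
    output = model(image)["out"]   # shape: (1, num_classes, 520, 520)
    mask = output.argmax(dim=1)    # per-pixel class prediction

print(mask.shape)                  # torch.Size([1, 520, 520])
```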

Because, in AI, the solution to a practical problem is defined by the data, the Achilles heel remains the quality of that data. The higher the quality of the data, the more effectively a model can be trained. Similarly, following a proven problem-solving strategy, breaking a problem into sub-tasks makes it easier to solve. Cleverly combining several models therefore makes the whole system less dependent on large amounts of data: some sub-tasks can be solved readily by existing pre-trained models, while others remain too problem-specific and require further fine-tuning.
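One common way to exploit this decomposition is to keep a general pre-trained backbone fixed and fine-tune only a small, problem-specific head on the scarce annotated data. The sketch below illustrates the idea with a torchvision ResNet-18 and dummy data; the number of classes, batch size and training hyperparameters are placeholder assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# The pre-trained backbone handles the generic sub-task (feature extraction);
# only a small, problem-specific head is trained on the scarce annotated data.
backbone = resnet18(weights="DEFAULT")
for param in backbone.parameters():
    param.requires_grad = False                                   # keep general features fixed

num_classes = 3                                                    # hypothetical task-specific classes
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)      # new trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.rand(8, 3, 224, 224)
targets = torch.randint(0, num_classes, (8,))
loss = criterion(backbone(images), targets)
loss.backward()
optimizer.step()
print(float(loss))
```

Freezing the backbone keeps the number of trainable parameters small, which is exactly what makes training feasible with little data.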

This data-centric strategy has proven invaluable for solving practical applications with AI and computer vision. While the most straightforward solution might be to keep increasing data and computing power, for real-life applications this is often not feasible. Taking X-ray images of humans requires low dosages, harmful insects like the green peach aphid are rare, wind-turbine blades are hard to reach, and seasonal products like fruits are unavailable for most of the year. In most of these applications there is an inherent data shortage that needs to be addressed.

The research focus will be on developing strategies for optimally using the available data to solve specific real-life tasks, through either a model-centric or a data-centric approach. On the one hand there is the effect of the data itself: the quality of the images and annotations, the availability of images and annotations, the distribution of the classes, or missing and unknown classes. On the other hand there is the architecture of the models themselves: how do models handle missing data for anomalous classes, how can models be trained with low-quality or small data, and how can automatically selecting appropriate images help improve overall performance?
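As a simplified example of such automatic image selection, the sketch below scores a pool of unlabelled images by prediction entropy and picks the most uncertain ones for annotation; the tiny stand-in model, pool size and selection budget are assumptions made purely for illustration.

```python
import torch

# A tiny stand-in classifier keeps the sketch runnable; in practice this would be
# the current task model returning class logits.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 5))
pool = torch.rand(100, 3, 64, 64)          # 100 candidate (unlabelled) images

with torch.no_grad():
    probs = torch.softmax(model(pool), dim=1)
    # Entropy as an uncertainty score: high entropy means the model is unsure.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

# Select the 10 most uncertain images to be annotated next.
selected = torch.topk(entropy, k=10).indices
print(selected.tolist())
```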

Apart from established strategies like data augmentation and fine-tuning, which create models from relatively small amounts of data, a new and exciting research direction is the combination of model-centric and data-centric approaches. This revolves around using synthetic images when not enough real data is available. It is a research topic that involves creating visually convincing digital twins of processes, using pre-trained deep learning models, 3D graphics, or a combination of both. This can be seen as an extension of data augmentation and requires handling the interplay between big data, small data and models, as well as clever solutions for combining and adapting general models for specific tasks.
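For reference, the established augmentation baseline that synthetic data extends can be sketched in a few lines; the specific transforms and parameters below are arbitrary examples, and a recent torchvision version that accepts tensor images is assumed.

```python
import torch
from torchvision import transforms

# Standard data augmentation: each real image is randomly transformed every epoch,
# effectively enlarging a small dataset without collecting new photographs.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

image = torch.rand(3, 256, 256)            # stands in for one real RGB image
augmented_views = [augment(image) for _ in range(4)]
print([v.shape for v in augmented_views])
```

Synthetic rendering and generative models push this further: instead of perturbing existing images, entirely new, visually convincing examples are produced from a digital twin of the process.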

For more information

Want to learn more about the professorship Computer Vision & Data Science and our research lines?