How can we "simulate" VR?

    Having proper data is crucial when developing an ML/DL application. In some cases, the amount of available data is limited. For example, if we are trying to detect a rare disease, collecting a million samples may simply be impossible. And when collecting a proper dataset is too expensive, we cannot afford it either. What shall we do in such a case?


    The answer is simple: we have to generate data. Generative Adversarial Networks are one way to generate data synthetically. However, a GAN requires at least some data from the domain of interest to generate new samples. This is where augmentations can help.

 Data augmentation is the artificial creation of training data for machine learning by transformations (c)


It means that we can transform the original data to enlarge the training sample, and/or apply transformations to another dataset to obtain something in the domain of our interest. The second technique can be used, for example, to add fog to street images to simulate foggy weather conditions.
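    As a minimal sketch, the fog idea could look like this with the albumentations library (the image path is hypothetical):

        import albumentations as A
        import cv2

        # Load a street image (hypothetical path)
        image = cv2.imread("street.jpg")

        # RandomFog overlays a fog-like haze on the image
        transform = A.Compose([A.RandomFog(p=1.0)])
        foggy_image = transform(image=image)["image"]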


The issue I have faced

    For my GSoC project I need AR/VR medical images. The issue is that such datasets are not widely available. For several days I was reading papers related to AR/VR, ML, and medicine, but in most cases they did not provide the datasets, since the data is something like the researchers' "know-how".

    Failing to collect a good dataset, I came up with the idea of simulating an AR/VR environment using an augmentation technique.

    The technique was proposed in my very first post. Here I will just show what issues my approach has and how I am going to deal with them.

    The main issue is that for each of the three tasks (classification, segmentation, and detection), my approach should affect not only the input image but also, in some way, the targets.

Distorting images for classification.

    Actually, this is the simplest case. In a classification problem, a change to the input image has nothing to do with the target (even if you rotate a cat image by 90 degrees, it still remains a cat). That is why no issues arise in this task, as was clearly shown in my test task (see the post).
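    A minimal sketch of this case (the file name and label are hypothetical): the image is distorted, while the class label stays untouched.

        import albumentations as A
        import cv2

        image = cv2.imread("cat.jpg")  # hypothetical input image
        label = "cat"                  # the class label

        # Distort the image; in classification the label needs no change
        transform = A.Compose([
            A.Rotate(limit=90, p=1.0),
            A.OpticalDistortion(p=1.0),
        ])
        distorted = transform(image=image)["image"]
        # (distorted, label) is a new training sample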


Distorting images for segmentation.

    In the case of segmentation, in contrast, the target depends heavily on the input image. For example, if you rotate a cat image by 90 degrees, the mask should also be rotated so that it matches the transformed input image. This is where the widely known albumentations library is going to help me. Examples of how I am going to use this library are in my notebook.
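    A minimal sketch of the joint image/mask transform with albumentations (file names are hypothetical):

        import albumentations as A
        import cv2

        image = cv2.imread("cat.jpg")                            # hypothetical image
        mask = cv2.imread("cat_mask.png", cv2.IMREAD_GRAYSCALE)  # hypothetical mask

        # The same geometric transform is applied to both image and mask
        transform = A.Compose([A.Rotate(limit=90, p=1.0)])
        augmented = transform(image=image, mask=mask)
        rotated_image, rotated_mask = augmented["image"], augmented["mask"]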

Distorting images for detection.

    Here I will be dealing with the hardest case. The main idea is to find a way to map the original bounding-box coordinates onto the distorted image. I will write about how to achieve that next week.
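    One option I am considering is to let albumentations remap the boxes itself via BboxParams; a minimal sketch under that assumption (image, boxes, and labels are hypothetical):

        import albumentations as A
        import cv2

        image = cv2.imread("surgery.jpg")  # hypothetical image
        bboxes = [[50, 60, 200, 220]]      # hypothetical box, pascal_voc format (x_min, y_min, x_max, y_max)
        labels = ["tool"]

        # bbox_params tells albumentations to transform the boxes along with the image
        transform = A.Compose(
            [A.Rotate(limit=90, p=1.0)],
            bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
        )
        augmented = transform(image=image, bboxes=bboxes, labels=labels)
        new_image, new_bboxes = augmented["image"], augmented["bboxes"]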

For now, please suggest some data sources with AR medical images, and check out my "VR simulation" for the classification and segmentation tasks.
