How can we "simulate" VR?
Having proper data is essential when developing an ML/DL application. In some cases, however, the amount of available data is limited. For example, if we are trying to detect a rare disease, collecting a million samples may simply be impossible. And when collecting a proper dataset is too expensive, we cannot afford it either. What shall we do in such a case?
The answer is simple: we have to generate data. Generative Adversarial Networks are one way to synthesize new data, but they require at least some data from the domain of interest to begin with. This is where augmentations can help.
Data augmentation is the artificial creation of training data for machine learning by transformations (c)
This means we can change the original data to increase the training sample, and / or apply some transformations to another dataset to get something in the domain of our interest. The second technique can be used, for example, to add fog to street images to simulate foggy weather conditions.
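As a rough illustration of the fog example, here is a minimal sketch of such an augmentation: blending an image with a uniform white layer. The function name and the blending approach are my own assumptions for illustration; real fog augmentations (e.g. depth-aware ones) are more involved.

```python
import numpy as np

def add_fog(image: np.ndarray, intensity: float = 0.5) -> np.ndarray:
    """Mimic fog by blending an image with a uniform white layer.

    `intensity` in [0, 1]: 0 leaves the image unchanged, 1 gives pure white.
    Hypothetical sketch, not a production augmentation.
    """
    fog_layer = np.full_like(image, 255)
    foggy = (1 - intensity) * image.astype(np.float32) + intensity * fog_layer
    return foggy.clip(0, 255).astype(np.uint8)
```

Libraries such as albumentations ship ready-made weather augmentations, but the idea is the same: transform data from one domain so that it resembles the domain of interest.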
The issue I have had
For my GSoC project I need AR/VR medical images. The issue is that such datasets are not widely available. For several days I read papers related to AR/VR, ML, and medicine, but in most cases they did not provide their datasets, since the data is often part of the researchers' "know-how".
Failing to collect a good dataset, I came up with the idea of simulating an AR/VR environment using an augmentation technique.
The technique was proposed in my very first post. Here I will just show what issues exist in my approach and how I am going to deal with them.
The main issue is that for each of the three tasks (classification, segmentation, and detection), my approach must affect not only the input image but also the targets.
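To make the issue concrete, here is a minimal sketch of what transforming image and targets together means, using a horizontal flip as a stand-in for the actual AR/VR augmentation. The function and the box format (`[x_min, y_min, x_max, y_max]` in pixels) are my own assumptions for illustration:

```python
import numpy as np

def hflip_with_targets(image, mask=None, boxes=None):
    """Horizontally flip an image together with its targets.

    - mask: per-pixel segmentation labels, same height/width as the image
    - boxes: list of [x_min, y_min, x_max, y_max] detection boxes in pixels
    Hypothetical sketch of a joint image/target transform.
    """
    width = image.shape[1]
    flipped_image = image[:, ::-1]
    # The segmentation mask must be flipped exactly like the image.
    flipped_mask = mask[:, ::-1] if mask is not None else None
    # Detection boxes must have their x coordinates mirrored.
    flipped_boxes = None
    if boxes is not None:
        flipped_boxes = [
            [width - x_max, y_min, width - x_min, y_max]
            for x_min, y_min, x_max, y_max in boxes
        ]
    return flipped_image, flipped_mask, flipped_boxes
```

For classification the label is unchanged by a geometric transform, but segmentation masks and detection boxes have to follow the image, which is exactly why the augmentation cannot operate on the input alone.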