CT-Scan Analysis

At Michigan Aerospace Corporation, one of the major projects I worked on besides ARGOS involved removing artifacts of nonlinear radiation attenuation in cone-beam computed tomography (CT) scans for a client company. We had a set of 2D scans from the lower-quality cone-beam imaging device, a set of 2D scans from the industry-standard imaging device (example), and the 3D scan data from both devices.

The part of the project I worked on was generating labels for the cone-beam images so that we could train a convolutional neural network to translate from the lower quality cone-beam images to the higher quality images. I approached this in several different ways, but the main problem was that the high quality images were not matches for the lower quality images; they came from completely different patients. The first approach I used was the neural style transfer algorithm (here's an image I made of my dog with my implementation of the algorithm). The idea was that if I could synthesize the "style" of the lower quality images (hopefully the artifacts I was trying to remove), I could apply that style to the higher quality images and train a neural network to translate from the synthetically degraded images back to their high quality matches. I could then apply this network to the actual lower quality cone-beam images and (hopefully) get corresponding high quality images with the content of the lower quality images. There's a lot that could go wrong here: I had to be extremely careful about introducing or removing structure in the images, since a small change could remove or introduce a tumor. This approach didn't end up working very well, both for the reasons mentioned above and because the style was much more difficult to synthesize than I had expected.
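
To give a flavor of the approach, here's a minimal sketch of the general Gatys-style recipe in PyTorch. This is not the code from the project: the VGG-19 layer choices, loss weights, optimizer settings, and single-channel-to-RGB handling are all illustrative assumptions, and the CT slices are assumed to arrive as single-channel tensors in [0, 1].

```python
# Minimal neural style transfer sketch (illustrative assumptions, not the project code).
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen VGG-19 feature extractor; it expects 3-channel input, so single-channel
# CT slices are repeated across channels below.
vgg = vgg19(weights="DEFAULT").features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {21}               # conv4_2
STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 .. conv5_1

def extract_features(x):
    """Run x through VGG and collect activations at the chosen layers."""
    feats, h = {}, x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in CONTENT_LAYERS or i in STYLE_LAYERS:
            feats[i] = h
    return feats

def gram_matrix(feat):
    """Channel-by-channel correlations: the 'style' statistic."""
    b, c, hgt, wid = feat.shape
    f = feat.view(b, c, hgt * wid)
    return f @ f.transpose(1, 2) / (c * hgt * wid)

def stylize(content_img, style_img, steps=300, style_weight=1e6):
    """Optimize an image that keeps the content of content_img (a high quality
    slice) while matching the style statistics of style_img (a cone-beam slice).
    Both inputs are assumed to be (1, 1, H, W) tensors."""
    content = content_img.repeat(1, 3, 1, 1).to(device)
    style = style_img.repeat(1, 3, 1, 1).to(device)
    target = content.clone().requires_grad_(True)

    content_feats = extract_features(content)
    style_grams = {i: gram_matrix(f) for i, f in extract_features(style).items()
                   if i in STYLE_LAYERS}

    opt = torch.optim.Adam([target], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        feats = extract_features(target)
        c_loss = sum(F.mse_loss(feats[i], content_feats[i]) for i in CONTENT_LAYERS)
        s_loss = sum(F.mse_loss(gram_matrix(feats[i]), style_grams[i])
                     for i in STYLE_LAYERS)
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return target.detach().mean(dim=1, keepdim=True)  # back to single channel
```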

The next step was to use a generative adversarial network (GAN) to create the labels, this time for the low quality images instead of the high quality images. The "real" data in this GAN formulation was the high quality images, and I substituted the low quality images for the noise that traditionally conditions the generator, to encourage the generator to begin with that structure. The discriminator then classified between the high quality images and the generated images, which (hopefully) contained the structure of the low quality images but without the artifacts I wanted to remove.
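
To make that setup concrete, here's a minimal sketch of the training loop in PyTorch. The Generator and Discriminator below are small placeholder networks, and train_step is a hypothetical helper; none of it is the project's code. The key point is only that the generator's input is a low quality slice rather than a noise vector, and the discriminator scores real high quality slices against the generator's outputs.

```python
# Minimal GAN sketch with low quality slices in place of the usual noise input
# (placeholder architectures; illustrative, not the project code).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Placeholder network that maps a low quality slice to a cleaned slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Placeholder critic scoring slices as real (high quality) or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(G, D, opt_G, opt_D, low_q, high_q, bce=nn.BCEWithLogitsLoss()):
    """One update of D and G. low_q and high_q are unpaired batches of slices."""
    fake = G(low_q)

    # Discriminator: real high quality slices -> 1, generated slices -> 0.
    opt_D.zero_grad()
    d_real = D(high_q)
    d_fake = D(fake.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    opt_D.step()

    # Generator: fool the discriminator while starting from the low quality structure.
    opt_G.zero_grad()
    g_out = D(fake)
    g_loss = bce(g_out, torch.ones_like(g_out))
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()

# Typical usage (shapes and hyperparameters are illustrative):
# G, D = Generator(), Discriminator()
# opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
# opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
# d_loss, g_loss = train_step(G, D, opt_G, opt_D, low_q_batch, high_q_batch)
```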

Unfortunately, the project lost funding before I was able to arrive at results with the GAN, even though I had made some promising progress on the problem.
