Guest Post: A New Take on a Comet’s Impact for Hubble’s 30th

Judy Schmidt is an amateur astronomer and tireless image processor of the Hubble archive. This post details her recent work on the impact of Shoemaker-Levy 9 with Jupiter for the BBC/Discovery Science Channel’s documentary Hubble: Thirty Years of Discovery.

Last September I found myself facing an unusual request. I’d just gotten off the phone with David Briggs, a producer and director with BBC Science. He wanted to feature one of Hubble’s earliest big successes—the Shoemaker-Levy 9 event. Unfortunately, there were few images to use, and some were fairly rough. What was needed was something more sequential to show during the narration. Who could do this specialized work in a relatively short amount of time, though? Dr. Heidi Hammel, one of the principal investigators, suggested to David that I might be able to help, and I was happy to oblige.

What did I just agree to do?

It was a daunting task in more than one way. Typically, I may do three images in a productive week, but for this project I needed to assemble around ninety of them within only a few months. Even though they were small images, they were complicated to put together because Jupiter rotates rapidly, Hubble can only take one picture at a time, and each black and white frame was separated by several minutes’ worth of rotation. Furthermore, only some of the frames were suitable for producing visible light color imagery. To create a color image that depicts what Hubble saw, it is necessary to warp the data in such a way that Jupiter’s rotation is undone, simulating an approximate color view.

Take a gander at the following array of images, showing a series of captures during the biggest impact event, when the G fragment tore through Jupiter’s atmosphere. It takes a full set of Red, Green, and Blue filters to create a color image, but notice how there aren’t enough of them to go around. There’s only one Blue filter for the entire set! It’s possible to substitute an Ultraviolet filter in the blue channel, but it doesn’t look quite the same. Even worse, sometimes there were no Blue or Ultraviolet filters, and I had to do everything with just Red and Green. In those cases, I had to fake it a bit by manually reddening the cloud band and the Great Red Spot, if it was present. It really took a lot of effort to get Jupiter’s coloration looking coherent without very noticeable shifts in color.
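The stacking itself is simple once the frames are aligned; the headache is what to do when a channel is missing. Here is a minimal NumPy sketch of the idea (the function and its fallback logic are just my own illustration, not the exact tool used):

```python
import numpy as np

def color_composite(red, green, blue=None, ultraviolet=None):
    """Stack aligned single-filter frames (2-D float arrays) into an RGB image.
    If there is no Blue frame, fall back to the Ultraviolet frame for the blue
    channel; as noted above, it doesn't look quite the same."""
    if blue is None:
        blue = ultraviolet
    if blue is None:
        raise ValueError("need a Blue or an Ultraviolet frame for the blue channel")
    rgb = np.dstack([red, green, blue]).astype(float)
    rgb /= rgb.max()               # crude normalization just for display
    return np.clip(rgb, 0.0, 1.0)
```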

What the raw frames from Hubble really looked like. How is this supposed to become a color animation?

I knew right away that my existing workflow was barely adequate at best, and the result would either be my loss of sanity or giving up short of reaching the objective. I’d been using my image editing program to manually push and pull pixels around until they matched closely. That’s fine for one image here and there, but dozens? Forget it.

Make a model

My first thought was that I needed a map. Planetary scientists routinely create maps of planets by taking 2D imagery and converting it to a cylindrical projection, essentially rolling a planet out into a rectangle. But wait, what about the impact? The plume is not only coming into view over the horizon, it’s expanding up and outward rapidly, and then sinking down into a wave. Following that, the impact site also spreads out at an amazing speed. Any map would have to somehow include these dynamic elements, or it would be pointless. I had to figure out how to colorize the frames themselves and use as much real Hubble data as possible.

So I took a risk and decided to create a new technique that I’d been mulling over previously, unsure if I could achieve what I was aiming for. My biggest worry was that the 3D software would not-so-subtly alter the data in some way by introducing blur, noise, or an unwanted warp or movement. It is 3D software, after all, not a 2D-image editor. I figured this would be incredibly hacky, but you know what? I love a good hack.

The idea is to use 3D software to take an image of the planet and project it onto a Jupiter-shaped spheroid aligned to the image data, and then use an orthographic camera to render the scene in a way that would result in pixel-perfect matches to the original Hubble imagery. Rotating the spheroid should result in a perfect warp to match the other frames. To my surprise, and I mean this as humbly as possible, I got it working in less than a day. Maybe the software I was using—Blender (it’s free! it’s open-source! it’s amazingly powerful!)—was an ideal tool for the job, or maybe it was desperation, but there were fewer problems than I expected, and the ones I did encounter were solvable without great effort.

Conceptually simple: Take a 2D image, and project it straight onto a spheroid. The spheroid can then be rotated, and the 2D image will rotate with it.
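For the curious, the same trick can be written as plain pixel math outside of any 3D package. This is a simplified NumPy rendition of the idea, not the actual Blender setup, and it assumes an orthographic view with the rotation axis vertical in the frame and Jupiter’s roughly 6.5% flattening:

```python
import numpy as np

def rotate_disk(frame, theta_deg, radius_px, center, flattening=0.065):
    """Warp a single grayscale frame as if the planet had rotated by theta_deg.
    Each pixel on the disk is lifted onto an oblate spheroid, rotated about the
    vertical axis, and projected straight back down (orthographic inverse map)."""
    h, w = frame.shape
    cy, cx = center
    theta = np.radians(theta_deg)
    ys, xs = np.mgrid[0:h, 0:w]

    # Normalize so the squashed disk becomes a unit sphere for the rotation math.
    X = (xs - cx) / radius_px
    Y = (ys - cy) / (radius_px * (1.0 - flattening))
    r2 = X**2 + Y**2
    Z = np.sqrt(np.clip(1.0 - r2, 0.0, None))    # height of the spheroid surface

    # Inverse-rotate about the vertical axis to find where each pixel came from.
    Xs = X * np.cos(theta) - Z * np.sin(theta)
    Zs = X * np.sin(theta) + Z * np.cos(theta)
    visible = (r2 < 1.0) & (Zs > 0)              # the source point must face the camera

    src_x = np.clip(np.round(Xs * radius_px + cx).astype(int), 0, w - 1)
    out = np.zeros_like(frame)
    out[visible] = frame[ys[visible], src_x[visible]]   # y is unchanged by the spin
    return out
```

Pixels that rotate into view from the unseen hemisphere simply stay black, which is one reason real renders still needed hand finishing.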

Once I had my 3D scene set up, it was just a matter of sorting through the metadata for each image, using a spreadsheet that automatically calculated the angle Jupiter had rotated: count the minutes between observations and multiply by Jupiter’s degrees of rotation per minute. In an ideal situation one could further automate the process of rotating and rendering, but at this point I was satisfied with manually copying and pasting the rotation values and rendering each frame myself. I had a few mix-ups, but it was still phenomenally faster than my previous method.
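The arithmetic behind that spreadsheet is nothing fancy. Roughly this (the timestamps below are made up for illustration; Jupiter’s System III rotation period of about 9 h 55.5 min works out to roughly 0.6 degrees per minute):

```python
from datetime import datetime

# Jupiter spins once in roughly 9 h 55.5 min (System III), i.e. about 0.6 deg/min.
JUPITER_DEG_PER_MIN = 360.0 / (9 * 60 + 55.5)

def rotation_between(obs_a, obs_b, fmt="%Y-%m-%d %H:%M"):
    """Degrees Jupiter rotated between two observation timestamps."""
    minutes = (datetime.strptime(obs_b, fmt) - datetime.strptime(obs_a, fmt)).total_seconds() / 60.0
    return minutes * JUPITER_DEG_PER_MIN

# Two frames taken 37 minutes apart differ by roughly 22 degrees of rotation.
print(round(rotation_between("1994-07-18 07:30", "1994-07-18 08:07"), 1))
```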

A side-by-side comparison of the original, cleaned-up data versus the 3D render. Spot the difference? No? Good. Ok, the Blender render is a little darker. I think it has something to do with the way Blender manages color.

Now that the model has been created, it can be rotated easily to match up to the other frames. There are still problems, however. Examine:

This is what it looks like when the spheroid is rotated.

Aside from the planet being cut off at the edges of the frame, it’s also shaded, and any rotation pulls that shading along. The edges of the planet also don’t look quite right. Rotation isn’t nearly as noticeable along the limb of the planet, so it’s easy to fix by using the non-rotated image for the outer edge of the planet, and feathering it in to avoid hard lines where the two meet.
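That fix boils down to a soft radial mask: keep the rotated render well inside the disk and fade back to the non-rotated frame near the limb. Something along these lines, with the feather width being my own guess:

```python
import numpy as np

def blend_limb(rotated, original, radius_px, center, feather_px=8):
    """Keep the rotated render inside the disk, but fade to the original
    (non-rotated) grayscale frame near the limb so no hard seam shows."""
    h, w = rotated.shape
    cy, cx = center
    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(xs - cx, ys - cy)

    # 1 well inside the disk, ramping to 0 across the last feather_px before the limb.
    weight = np.clip((radius_px - r) / feather_px, 0.0, 1.0)
    return weight * rotated + (1.0 - weight) * original
```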

Refining the results

Here’s how the original data compares to the Red, Blue, and Green frames that were warped to match it and then combined to create a color composite.

It’s not an exact match, but it’s pretty close. My model wasn’t perfectly aligned because Jupiter was partially cut off, making it hard for me to guess at the unseen edges. I eventually got better at this by using the handy Jupiter Viewer to generate a wire mesh for alignment, which was remarkably accurate. At the top of the planet, the timing mismatch between frames is most apparent: the color frames show a plume, while the original frame has a flattened wave.

The plume is now removed, and the shading of the planet has been corrected.

Finally, the wave must be added back in its place. This looks easy, but some delicate work is required to make it look like it belongs there. It’s very easy to leave hard edges that scream “FAKE!” at even the least skeptical viewer. I also made sure to include the subtle effect of light bouncing off the rim of the wave back down to the planet. My next conundrum was to decide how to colorize this thing. Should I even try? What color is it?

Adding the wave back in from the original frame. Brightness levels had to be adjusted to make it blend smoothly.
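In practice, the blend is mostly a matter of rescaling the cut-out so its brightness and contrast match the pixels it replaces, then compositing it through a feathered mask. A rough sketch, where the alpha mask and offsets are placeholders rather than my actual settings:

```python
import numpy as np

def paste_wave(base, patch, alpha, top, left):
    """Blend a grayscale cut-out (the wave from the original frame) into the
    composite. The patch is first rescaled so its brightness and contrast match
    the pixels it replaces, then feather-blended through the alpha mask (0..1).
    All arrays are floats; alpha has the same shape as patch."""
    h, w = patch.shape
    region = base[top:top + h, left:left + w].astype(float)
    core = alpha > 0.5                                   # pixels dominated by the patch
    scale = region[core].std() / (patch[core].std() + 1e-6)
    matched = (patch - patch[core].mean()) * scale + region[core].mean()
    base[top:top + h, left:left + w] = alpha * matched + (1.0 - alpha) * region
    return base
```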

I wasn’t sure how closely the folks putting the documentary together would end up zooming in on the impact. If they enlarged it quite a bit, would it be obvious that the plume and wave were black and white? I decided to give it a slight yellow color. I asked Dr. Hammel if this was reasonable, and her response was that it was an ok guess.

I obsessed over it just a little further, deciding that it might have its own atmospheric scattering, and gave it a slight blue edge at the top. Since it was rising above Jupiter’s atmosphere, the bottom ought to be reddened like light at a sunrise. We may never know how it really looked, but these tiny details did make it look more believable to me. I was pretty sure no one would notice unless I pointed it out, but that’s just how I am.
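If you wanted to reproduce that tint, it amounts to a per-row color ramp over the grayscale plume. Every number below is my own guess rather than anything physical:

```python
import numpy as np

def tint_plume(gray):
    """Colorize a grayscale plume cut-out (values 0..1): a faintly yellow base,
    nudged blue along the top edge and reddened toward the bottom."""
    h, w = gray.shape
    t = np.linspace(0.0, 1.0, h)[:, None]        # 0 at the top, 1 at the bottom
    r = gray * (1.00 + 0.08 * t)                 # a touch redder toward the bottom
    g = gray * 0.98
    b = gray * (0.88 + 0.10 * (1.0 - t))         # a touch bluer toward the top
    return np.clip(np.dstack([r, g, b]), 0.0, 1.0)
```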

Done is better than perfect

There were still small imperfections when I submitted the final version of the animations, but it was getting to the point that it was hard for me to look at it anymore. I’d put some truly solid hours into it, likely more than I’d ever put into a Hubble project before. Could it be better? Probably. Would it really matter? Not much. Like many things in life it’s best to clean up and go home before one suffers a complete burnout trying to polish that last half percent to absolute perfection.

I am very thankful to Dr. Hammel and David Briggs for including me in this project, and, as always, am thankful for Hubble, that wonderful eye in the sky, and everyone who tends to it.

[youtube: https://www.youtube.com/watch?v=fq4PnAabL3Y]
