Wednesday, July 14, 2021

Adventures in Exploring the Planetary Science Data Archives

 

This week, Conor discusses that wonderful repository of US-generated planetary science data: the Planetary Data System. These data, provided for free on the web at https://pds.nasa.gov/, allow any researcher - professional or amateur - to benefit from the space missions that have been funded by US taxpayer money. Sometimes, this means that discoveries made by a mission can arrive decades after that mission has ended, in studies led by researchers who may not even have been alive when that mission was dispatched!

by Conor Hayes

One of my favourite occurrences in astronomy (and in science in general) is when someone manages to pull new information out of old data. For example, data collected by the Galileo spacecraft in 1997 were used in a 2018 paper (https://www.nature.com/articles/s41550-018-0450-z) to argue that Europa might have plumes of water similar to those seen on Enceladus. Of course, in order for discoveries like these to be made, old data have to be archived in a way that is easily accessible to someone who may not have intimate knowledge of how the data were originally gathered.

In an attempt to solve this problem, NASA’s Planetary Science Division founded the Planetary Data System (PDS) in 1989. The PDS was not NASA’s first attempt at an archive for its planetary missions. During the 1960s and 1970s, mission data were primarily archived at the National Space Science Data Center and the Regional Planetary Image Facilities. However, these archives were not always the most robust, focusing primarily on data storage rather than organization and documentation.

The PDS, by contrast, was designed not just to archive data, but also to present it to future researchers in a standardized format that wouldn’t require highly specialized knowledge to use. To this end, the PDS archiving standards were developed. The standards are painfully specific and in-depth (the “basic concepts” document is nearly 50 pages long, and the core reference manuals run to over 650 pages), so I won’t even attempt to explain them in full here. Instead, let’s look at an archived data product from my research to see how the standards are actually implemented.

The basic premise of the PDS archiving standards is that the data have to be accessible to any plausible future researcher. This means that the data absolutely cannot be archived in a proprietary format. Any time that you write a NumPy array to disk as a NPY file, save an image as a PNG, or export a document as a PDF, you are assuming that the technology to read those files will continue to exist. If those formats are deprecated at some point down the line and the general knowledge about how to use them is lost, then the data contained within are, for all intents and purposes, gone forever.

Of course, you have to make some assumptions somewhere, otherwise developing a standard will be nearly impossible. In this case, the PDS decided to assume that future researchers would be accessing their data using computers that could understand ASCII characters. Given that the ASCII standard itself has been a fundamental part of every computer since its creation in the 1960s, this seems like a pretty safe assumption to make.

 

Figure 1 : Some of the information you would find in a PDS label file.

Now, let’s take a look at an actual PDS data product. This product is one frame of an MSL suprahorizon movie (described elsewhere in this blog), and is archived on the PDS Cartography and Imaging Sciences Node. (The other science nodes, if you were curious, are Atmospheres, Geosciences, Planetary Plasma Interactions, Ring-Moon Systems, and Small Bodies). Each product comes in two parts: the label and the actual data. The label (seen in Figure 1) contains information about the format of the data, such as the number of bytes it contains, which byte the image data begins on, the image shape, the bit depth, and the number of bands in the image. It also lists information about the instrument used to collect the data, like the azimuth and elevation that the camera was pointed at, where on the planet the rover was located when the image was taken, and other useful information like the time of day the image was taken and the units associated with the data.
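Because the label is just ASCII text, reading it doesn't require any special tooling. Here is a minimal sketch of parsing PDS3-style KEYWORD = VALUE lines; the keyword names and values below are illustrative stand-ins, not copied from a real MSL label, and real labels use the richer ODL syntax with nested objects and units:

```python
# Illustrative PDS3-style label text (values are made up for this sketch).
label_text = """\
RECORD_TYPE        = FIXED_LENGTH
RECORD_BYTES       = 1344
LINES              = 1200
LINE_SAMPLES       = 1344
SAMPLE_BITS        = 8
BANDS              = 1
END
"""

def parse_label(text):
    """Parse simple KEYWORD = VALUE pairs from a PDS3-style label."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if line == "END":       # END terminates a PDS3 label
            break
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key.strip()] = value.strip()
    return fields

label = parse_label(label_text)
# Shape and bit depth tell us how many bytes of image data to expect.
image_bytes = int(label["LINES"]) * int(label["LINE_SAMPLES"]) * int(label["SAMPLE_BITS"]) // 8
print(image_bytes)  # 1612800
```

With the number of bytes, the start byte, the shape, and the bit depth in hand, the raw image data can be decoded without any guesswork.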

Unlike the label, which is presented in a plaintext format, the image data cannot be understood just by looking at it. If you open it in a text editor, you’ll probably get something that just looks like an incomprehensible mess of random characters (see Figure 2). That’s probably not surprising though. You wouldn’t try to open a PNG in a text editor, so why would this be any different? Well, if you try to open it in your favourite image viewing application, you likely won’t have much luck there either. 

Figure 2 : Opening a PDS image file in a text editor – a bunch of nonsense!

As it happens, neither the label nor the image data file contains any embedded information that would help an application interpret it. A text editor assumes that you’re trying to open a text file, so the label, which is plain ASCII, opens just fine. (This is also the reason why opening the image file in a text editor displays a bunch of random letters and symbols - the editor is interpreting the raw image bytes as ASCII characters.) But displaying an image is much more complex than displaying plaintext, so without the structural guidance that your typical PNG or JPG includes, it’s unlikely that any mainstream application would be able to open a PDS image file.

This is the downside of the PDS archiving standard. Because it has to make as few assumptions as possible about the application being used to open it, the data are presented in such a general format that most common applications, used to being presented with highly structured files, have no idea what to do with them. The upside is that because the standards are so well-documented, it’s not exceptionally difficult to write your own code to read PDS files. In the interest of time, I ultimately decided to use code someone else had already written (the planetaryimage package distributed by the PlanetaryPy Project - it can be downloaded from their GitHub at https://github.com/planetarypy/planetaryimage, if you’re interested), but it could be a fun challenge to create an image viewer yourself in your language of choice.
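To see why the raw data are easy to handle once you have the label, here is a toy example (not the planetaryimage API, and the byte values are invented) that interprets a handful of raw bytes as an 8-bit grayscale image using the shape information a label would provide:

```python
import array

# Toy stand-ins for values a real label would supply.
lines, samples = 2, 3
# Raw bytes: this is the "incomprehensible mess" a text editor would show.
raw = bytes([10, 20, 30, 40, 50, 60])

# The label's SAMPLE_BITS = 8 tells us each pixel is one unsigned byte ("B").
pixels = array.array("B", raw)

# Reshape the flat pixel stream into rows using LINES and LINE_SAMPLES.
image = [list(pixels[row * samples:(row + 1) * samples]) for row in range(lines)]
print(image)  # [[10, 20, 30], [40, 50, 60]]
```

With the planetaryimage package itself, the equivalent is roughly `PDS3Image.open('frame.IMG')`, which parses the label and returns the image as an array for you.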

 

Figure 3 : The results of opening a PDS image file with a tool designed specifically for the task – a beautiful image from the surface of Mars!

The PDS data archiving standards might not be as intuitive or out-of-the-box easy to use as other file formats that we might be used to, but it’s for a good cause. By standardizing our data archives, we are ensuring that future researchers will continue to have access to the vast volumes of information we have collected about our Solar System, information that may be hiding discoveries awaiting reanalysis by some scientist who might not even be born yet.

Monday, July 5, 2021

Where are all the microbes?

 

This week, our Research Associate, Dr. Haley Sapers, introduces us to the enormous hidden world of microbes all around us. Studying these organisms, the niches they inhabit, and the strategies they use to survive provides clues to the adaptability of life writ large. That, in turn, helps us to understand what kinds of planetary environments might be clement to some form of life. Above, microbes from 2.8 km below the surface of our world. (image credit: Luc Riolon, https://commons.wikimedia.org/wiki/File:Candidatus_Desulforudis_audaxviator.jpg CC-BY-SA-2.5)

by Dr. Haley Sapers

If I asked you where most of the life on Earth was, you would probably tell me it’s all around us. On the surface in forests and jungles, in the oceans around coral reefs and out there swimming around as whales and sharks. And you wouldn’t be wrong.

Macroflora and fauna – that is, the large plants and animals that we can see with the unaided eye – have a lot of mass. Plants alone are massive: the cumulative weight of plant life on Earth accounts for a whopping 450 Gt (450 billion tons) of carbon. To put that in perspective, all of the cars in the world only weigh in at about 2.5 billion tons. And because of their large mass, plants and animals comprise most of the biomass on Earth. But mass isn’t the whole story – the total number of all living organisms that we can see pales in comparison to the extraordinarily high numbers of microbes that inhabit the Earth.

A phylogenetic tree of all life on Earth showing relationships between large groups of organisms. Bacteria are in blue, Archaea in green, and Eukaryotes in red.
(image by TimVickers https://commons.wikimedia.org/wiki/File:Collapsed_tree_labels_simplified.png)

There are 3 domains of life; the domain that we, along with all plants, animals, fungi, and insects, belong to is called Eukarya. Prokaryotes, or “microbes” as they are colloquially known, form the Bacterial and Archaeal domains. Although bacteria and archaea are both microscopic, they are as different from each other as E. coli is from us! There are about 10^30 individual bacterial and archaeal cells on Earth (that’s 1 nonillion, or a thousand billion billion billion!). To throw a few more astonishingly large numbers out there, there are only an estimated 10^24 stars in the observable Universe, and a measly 10^21 grains of sand on all the beaches and deserts of the Earth. All those thousand billion billion billion cells weigh in at approximately 77 Gt of carbon. The ~10^10 (10 billion) people on Earth only comprise 0.06 Gt of carbon, or less than 0.1% of the weight of the microbes.

So, where are all those microbial cells?

You might be surprised to learn that you’re only half human. Of all the cells that are part of your body, about half are microbial. They live in our mouths, stomachs, intestines, and skin (among other places…). Don’t be alarmed – we need all these microbes – in fact, we wouldn’t be able to get any nutrients from our food without them. But even if each of the 10 billion people on Earth is home to 10^14 microbial cells, we still only end up with 10^24 microbes, a full 6 orders of magnitude short! In 2018, a group of scientists decided to count up all the life on Earth and figure out where most of it is. There’s a great (free) publicly available book that looks at a bunch of biological statistics, authored by the same group (http://book.bionumbers.org/).
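A quick back-of-the-envelope check of that bookkeeping, using the rounded order-of-magnitude figures quoted above (they are estimates, not exact counts):

```python
# Order-of-magnitude estimates from the text.
cells_total = 10**30           # bacterial + archaeal cells on Earth
people = 10**10                # human population, to the nearest power of ten
microbes_per_person = 10**14   # human-associated microbial cells per person

human_associated = people * microbes_per_person   # 10**24 cells
shortfall = cells_total // human_associated       # 10**6: six orders of magnitude
print(human_associated, shortfall)
```

Even with every person carrying a hundred trillion microbes, the human-associated total is a millionth of the global count, so the rest must live somewhere else.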

But back to where the microbes are.  

All that macroflora and fauna that we see around us every day (us included) is in some way dependent on the sun for energy. Life on the surface of the Earth is fueled by the sun, and all life needs energy. So where else could life be? It turns out almost all of those bacterial and archaeal cells (over 95% of them) are actually living deep in the Earth’s subsurface, far, far away from the energy of the sun. How is that even possible? That’s actually a really good question, and one many scientists are still trying to figure out.

There are many different metabolic strategies, or ways for life to get energy. Getting energy from the sun, or consuming other organic matter, are only two – perhaps the most familiar to us, but by far not the most common considering the vast diversity of life on Earth. There are bacteria, for example, that ‘breathe’ iron the same way that we breathe oxygen. In anaerobic (oxygen-free) environments, the iron provides a different electron dumping ground in place of oxygen. Many of the microbial subsurface dwellers use strategies like this, gaining energy directly from rocks in a metabolic process known as chemoautolithotrophy (chemo = chemical, auto = self, litho = rock: the self-production of chemical energy from rocks). They’re literally living geo-electrical circuits!

In fact, these seemingly strange metabolisms may have been the first to evolve on Earth (long before photosynthesis, the ability to harvest energy from sunlight). The very first life on Earth may have been similar to the microbes that now live deep in the Earth’s subsurface. Because of the diverse energy-harvesting strategies that subsurface microbes use, and the evidence suggesting that these are some of the earliest metabolisms to have evolved, it’s possible that the subsurface of other planets such as Mars is also habitable in the same way. Who knows – maybe there are even microorganisms living off rocks in the deep subsurface of Mars today!

Tuesday, June 22, 2021

Mars is made of Swiss cheese

  

If the Moon is made of Green Cheese, then what cultured dairy confection makes up Mars? Why Swiss Cheese, of course! This week, Alex takes us on a tour of the pitted south polar terrain of Mars whose interplay of sunlight, water and carbon dioxide ices result in something that looks visibly similar to Swiss Cheese. Naming planetary terrains after food is not new, nor is it limited to the inner solar system. If you were putting together a platter of hors d'oeuvres, Cantaloupe makes an excellent accompaniment to Swiss Cheese. Perhaps we will have to take a closer look at Neptune's moon Triton in the future...

By Alex Innanen

Long-time PVL blog enthusiasts may recall that my planetary journey began at the Martian north pole looking at many, many HiRISE images. Over the past year I’ve returned to the Martian poles – the south pole this time.

Both poles have layered deposits of mostly water ice and dust, and residual ice caps left behind when the winter layer of CO2 ice sublimates in the summer. The south polar residual cap (or SPRC, for the acronym fans) is mostly made up of carbon dioxide ice overlying water ice. The terrain of the SPRC is as varied as the north pole’s, but it has some features that are unique to it. One of these is circular or circular-ish pits with steep sides and flat bottoms. The terrain they carve out is similar to a piece of Swiss cheese, giving the features their nickname.

The distinctive pits of Swiss cheese terrain, from the HiRISE instrument.
[NASA/JPL/University of Arizona]

In Swiss cheese – the kind you can eat – the distinctive holes are formed by carbon dioxide bubbles that are released by the cheese-making bacteria. The Swiss cheese features of the SPRC are much larger than the ‘eyes’ in a piece of cheese – on the order of tens to a few hundreds of metres in diameter. No bacteria are forming these holes; instead, they’re likely formed from fractures in the residual cap, which are widened into pits through sublimation from their walls. In the southern spring and summer, the steep, dark sides of the pits get more sunlight than the flat floors, causing the walls to sublimate and the pits to grow outwards by a few metres per year.

If the pits grow large enough, they can even grow into each other, creating intricate, branching features that can cover large swaths of the residual cap, like you can see in the HiRISE image here. It’s been suggested that based on this rate of growth, every century or so the entire SPRC could be entirely carved out by Swiss cheese features, causing a total resurfacing. 
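That century-scale resurfacing estimate is easy to sanity-check. Here is a rough sketch; both numbers are illustrative values drawn from the scales quoted above, not measured rates:

```python
# Illustrative values from the text: wall retreat of "a few metres per year",
# and pits spaced (or sized) on the order of a couple hundred metres.
wall_retreat_m_per_year = 2.0   # outward growth rate of a pit wall
typical_pit_spacing_m = 200.0   # assumed cap a pit must erode to merge with a neighbour

years_to_merge = typical_pit_spacing_m / wall_retreat_m_per_year
print(years_to_merge)  # 100.0
```

A few metres per year chewing through a couple hundred metres of cap gives a merging (and hence resurfacing) timescale on the order of a century, consistent with the suggestion above.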

[NASA/JPL/University of Arizona]

The Swiss cheese features occasionally show more ephemeral features such as bright surrounding halos or dark fans emanating from higher-standing areas. There’s a fairly clear halo around the feature shown at the top of this post – sometimes nicknamed the ‘Happy Face’. It looks almost like the feature is glowing, but what we’re really seeing is a localized region of higher albedo (i.e. more white) surrounding the Swiss cheese feature. These halos have only been observed during the southern summer of Mars year 28 (2007, for Earthlings), and their appearance happened to follow a global dust storm. It’s likely, though, that these halos aren’t actually a ring of material getting lighter, but rather the SPRC as a whole getting darker from settling dust, except in the areas close to the pit walls. The mechanism proposed in a 2014 paper to explain this is that the sublimation from the pit walls that I discussed above raises the amount of CO2 in the atmosphere and pushes the settling dust from the storm away from the edges of the pits. Lower rates of sublimation on flat areas allow the dust to settle normally.

The dark fans are much smaller and harder to pick out of even HiRISE images – on the scale of 1-10 m². They tend to appear at the edges of high-standing areas, ‘fanning’ into the lower areas. They appear in the southern spring, and unlike the halos they have been seen over multiple Mars years. Moving into the summer, as CO2 ice sublimates, the terrain around the fans darkens until the fans disappear. Their formation is also much more exciting – they’re formed when jets of gas rupture through the CO2 ice layer, lifting dust and depositing it outward in the fan shape. Dust can then get trapped in layers of ice, making the ice darker, absorbing more sunlight, and leading to more sublimation, creating more trapped gas to explode out and create more fans.

Until now I’ve been talking about the CO2 ice which makes up the majority of the SPRC. But what about water ice? The polar layered deposits are composed mostly of water ice and dust, and in the southern summer the SPRC shrinks and exposes some of the water ice of the south polar layered deposits. It is possible that the flat floors of Swiss cheese pits also expose water ice in the summer. There have been detections of water vapour associated with the pits, but this could also come from their walls, which may be layers of CO2 and water ice. In any event, the work I’ve been doing looks at whether it is possible for the water ice in the Swiss cheese pits to make any appreciable contribution to atmospheric water vapour. The polar caps are the major source of surface water ice, and the yearly formation and retreat of the overlying CO2 ice, exposing water ice, drives Mars’ water cycle. I’m interested in finding out how much, if any, water vapour could be released from the Swiss cheese pits, and, in the event of most or all of the SPRC being removed by Swiss cheese pits, whether this could have a significant impact on the amount of atmospheric water vapour.

Sunday, June 13, 2021

Modelling the atmosphere of K2-141b: June update

 

A model of a planetary environment doesn’t spring forth in all of its detail. Typically we start with the simplest model that captures the essential physics, but which also leaves out important details. Sometimes the description of such a model even fits on the back of an envelope! We then build in the complexity piece by piece. This is a process that PhD student Giang has been pursuing over the past couple of years as his models of K2-141b become ever more sophisticated. At each stage, we learn something new as we proceed from a solution accurate to an order of magnitude, to a 10%-level solution, to a 1%-level solution. There is benefit in the complexity - but it’s important not to outrun the data by too much. If we make a prediction or add a minor process that cannot be verified through the data, we run the risk of inventing stories about these worlds that are mere delusions.

By Giang Nguyen

In my previous post, I showed what happened when I introduced UV radiation absorption to K2-141b’s atmosphere. The results from the model became bizarre, as the atmosphere kept heating up until it was essentially plasma. Although numerically sound within our mathematical construct, this ultra-hot atmosphere simply isn’t realistic, as it would make the atmosphere of the planet even hotter than its star.

As I suspected, there was an issue with how I dealt with radiative cooling. Originally, the atmosphere cooled exclusively through infrared emission. Although most of the energy does radiate at infrared wavelengths, the emissivity of silicon monoxide in that spectral range is very small compared to its emissivity in the UV. Therefore, there was some unaccounted-for UV emission that would significantly cool the atmosphere.

The solution to this problem is to separately calculate the blackbody radiation of the atmosphere in both the infrared and the UV. This is done by integrating the Planck function over the desired wavelength range and multiplying it by the corresponding emissivity. Here’s the thing with blackbody radiation, especially at temperatures of thousands of kelvins: most of the radiance comes from a very small sliver of wavelengths, and it is pretty much negligible in comparison everywhere else. Therefore, when you have low spectral resolution, the estimate of the radiance becomes very inaccurate once you do your integration.
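The resolution problem can be sketched numerically: integrate the Planck function over a band with a fine versus a very coarse wavelength grid and compare. The band edges, temperature, and grid sizes below are illustrative choices for the sketch, not the model's actual values:

```python
import math

# Physical constants in SI units.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Planck spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

def band_radiance(T, lam_lo, lam_hi, n):
    """Trapezoidal integral of B over [lam_lo, lam_hi] with n intervals."""
    dlam = (lam_hi - lam_lo) / n
    total = 0.5 * (planck(lam_lo, T) + planck(lam_hi, T))
    for i in range(1, n):
        total += planck(lam_lo + i * dlam, T)
    return total * dlam

T = 3000.0  # K, roughly the dayside temperatures discussed in this post
fine = band_radiance(T, 200e-9, 400e-9, 10_000)  # UV band, fine sampling
coarse = band_radiance(T, 200e-9, 400e-9, 2)     # same band, only 2 intervals
print(abs(coarse - fine) / fine)  # substantial relative error at low resolution
```

Because the Planck curve varies steeply (near-exponentially) across the band, a handful of sample points lands tens of percent away from the converged answer, which is exactly the low-resolution inaccuracy described above.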

My next step was to do the Planck integration separately, solely as a function of temperature and with adequate spectral resolution, and then to fit the result to a polynomial. As the integration then becomes a single line of calculation instead of a bunch of for loops, we’re back to our old speedy model. However, we are at the mercy of our fit coefficients, and it seems that our temperature range is too large for a polynomial fit to be accurate; note that our temperatures can range from 0 to 3000 K.

All hope seemed lost. I was going to have to run the slow model, which I estimated would take weeks to produce a solution - one that might not even be correct. Thankfully, some scientists in the 1970s ran into the same problem and solved it. When you integrate the Planck function by parts, you end up with an infinite sum (a few mathematical identities are needed here as well). Computing this sum is much faster than the classic way, as it converges quickly. Finally, with the finite Planck integral taken care of, we can deal with radiative cooling.
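The trick can be sketched as follows: integrating by parts (and expanding 1/(e^t - 1) as a geometric series of exponentials) turns the dimensionless Planck integral into a rapidly converging sum; a scheme along these lines was published by Widger and Woodall in 1976. The brute-force quadrature here is only a cross-check, not part of the fast method:

```python
import math

def planck_integral(x, terms=20):
    """int_0^x t^3/(e^t - 1) dt via pi^4/15 minus a fast-converging tail sum."""
    tail = 0.0
    for n in range(1, terms + 1):
        # Each term is int_x^inf t^3 e^{-n t} dt, from integration by parts.
        tail += math.exp(-n * x) * (x**3 / n + 3 * x**2 / n**2
                                    + 6 * x / n**3 + 6 / n**4)
    return math.pi**4 / 15 - tail  # pi^4/15 is the full integral from 0 to inf

def quadrature(x, n=200_000):
    """Slow brute-force Riemann sum of the same integral, for comparison."""
    dt = x / n
    total = 0.0
    for i in range(1, n + 1):
        t = i * dt
        total += t**3 / math.expm1(t)  # integrand -> 0 as t -> 0
    return total * dt

x = 2.0
print(planck_integral(x), quadrature(x))  # the two agree to several decimals
```

The sum shrinks like e^(-nx) per term, so twenty terms already match a two-hundred-thousand-point quadrature, which is why replacing the for-loop integration with this identity restores the model's speed.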

As expected, UV emissions capped the temperature of the atmosphere – but it was still hot. The temperature hovers around 2900K across the dayside almost uniformly. Because UV emission only becomes significant when the atmosphere is hot, it never forces the temperature to drop further at low temperatures. When UV absorption and emission cancel each other out at a specific temperature, a very stable sort of radiative balance occurs. This turns out to be important as the atmosphere becomes too thin for IR radiation to take effect.

A warm SiO atmosphere is expected, but for it to be so horizontally uniform and warmer than the surface is a surprise. A welcome surprise. For emission spectra, a warmer atmosphere means a brighter signal. Using SiO spectral features, we could ultimately see K2-141b’s atmosphere instead of the ground beneath it. Also, the scale height is larger, even near the terminator (the part of the planet where you would see the star on the horizon). This means that during a transit, the planet’s atmosphere is optically thick enough to absorb the star’s light that travels through it on the way to Earth. With supersonic winds, this might induce an observable Doppler shift when measuring K2-141b’s transmission spectra.

Ultimately, when considering UV absorption and emission, the atmosphere of K2-141b is easier to detect, for both low- and high-resolution spectral instruments. This is very good news, as K2-141b is slotted for observation time with the James Webb Space Telescope (JWST). Along with possible future observations from ground-based telescopes, we may definitively detect and characterize K2-141b’s atmosphere - a first for terrestrial exoplanets.

This concludes my update on my current research project. Using a convenient numerical method to evaluate definite Planck integrals, we solved the problem of K2-141b’s atmospheric radiative cooling. The resulting atmosphere with the full radiative transfer is almost uniformly hot across the planet’s dayside. This suggests that K2-141b’s atmosphere is a lot easier to detect than anticipated. This is exciting, as K2-141b is a high-value target for observation, and it might be the first terrestrial exoplanet where we have observed an atmosphere. Although a small step, it is still a step towards finding habitable worlds and life beyond the Solar System.

Sunday, May 30, 2021

Testing a new desktop’s computational power with a video game

With each passing year, we depend more and more upon computational simulations for our research work at PVL. Recently, we decided to acquire a new workstation to increase our capacity. This week, Charissa Campbell writes about her efforts to test-drive the new machine using a piece of software that would challenge its simulation capabilities: the video game Stellaris.

by Charissa Campbell

Now that I am fully back to work, several projects have come up that may test the capabilities of the laptop I’m currently using. To help, I was able to request a desktop PC with lots of processing power that should be able to handle anything. As a grad student, money is tight in most situations, so getting a brand new piece of hardware is a luxury. I was quite excited to see how well this computer performed and decided to look into a suitable test.

My partner and I have had our gaming computers for several years, so they are getting on the slower side. We had the idea of testing the new desktop with a specific game that is notorious for running slowly on average gaming PCs. The chosen game, Stellaris, is a 4X real-time grand strategy game in which you guide your customizable civilization through a randomized galaxy. It is infamous for creating a universe so populated that, on old hardware, the game slows to a crawl near its end due to the heavy CPU load.

Stellaris is set in a galaxy that is populated with hundreds of star systems with their own planets. Each empire has a unique species and has a randomly placed starting star system where the goal is to explore the nearby cosmos. You are free to expand your empire while also researching new technology or ancient alien artifacts. This also includes colonizing any habitable planets you come across, assuming you get there first. You can make new friends or enemies across the galaxy with the ultimate goal of surviving an extra-galactic invasion that happens near the end of the game.

To play the game, you can choose and/or design any type of civilization with whatever traits you’d like. Species range from humans to plants to robots and more. You can customize even further by choosing specific traits such as Adaptive (habitability +10%), Strong (army damage + 20%, worker output +5%), Industrious (minerals +15%), and many more. Certain traits can be useful depending on how you want to play the game: do you want to explore, complete science objectives or try taking over the entire galaxy?

At the beginning of the game, each empire has one planet with a handful of "Pops," the unit of population. Over time, as each empire expands, more and more Pops populate habitable planets and eventually space-borne habitats and ring-worlds. Each Pop is assigned a job based on planetary buildings to produce the resources needed for its empire. Each job’s output is affected by a multitude of modifiers from either the job type itself or the Pop working it. Since each modifier needs to be checked before the actual output can be calculated, there are a lot of calculations going on behind the scenes every in-game month. And since these calculations need to be done for each individual Pop, the time they take adds up, which can slow average PCs significantly between the start and end of the game. The gaming PCs we have in our house add several minutes to the computation time as the end game nears. However, the new computer has more RAM and a much better processor and video card, so it should be able to handle these tasks more quickly.
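The per-Pop bookkeeping can be illustrated with a toy sketch: every month, every Pop's output is a base value adjusted by each applicable modifier, so the cost grows with the number of Pops times the number of modifiers. The job names, trait names, and numbers below are invented for illustration, not Stellaris's actual data:

```python
# A toy empire: 1000 identical Pops, each with one trait (values invented).
pops = [{"job": "miner", "traits": ["industrious"]} for _ in range(1000)]
modifiers = {"industrious": 1.15, "strong": 1.05}  # multiplicative bonuses

def monthly_output(pop, base=4.0):
    """Base output adjusted by every modifier that applies to this Pop."""
    out = base
    for trait in pop["traits"]:
        out *= modifiers.get(trait, 1.0)  # each modifier: a lookup + multiply
    return out

# The monthly tick: one pass over every Pop, checking every modifier.
total = sum(monthly_output(p) for p in pops)
print(round(total, 6))  # 4600.0
```

With a couple dozen Pops this loop is trivial, but with thousands of Pops per empire, dozens of empires, and many more modifiers per Pop than shown here, the same pass every in-game month is what drags the late game to a crawl.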

The game we set up has 1000 stars in our galaxy, each with their own set of planets. You can also adjust how many habitable planets you encounter. We maxed it to 5x to encourage a higher population to really test the computer. For this run, we went with the United Nations of Earth. They are a peaceful, democratic civilization with the goal of making friends and building a community that can be beneficial to all.


Starting in our own solar system on Earth, you can expand further by terraforming Mars or by being bold and colonizing a nearby star system. Alpha Centauri is nearby with the possibility of habitable planets so it seemed like a suitable choice. In order to colonize, you must send a science ship to survey the nearby system to find any habitable planets, resources or any alien anomalies. Depending on your civilization you may want to concentrate on exploiting mining resources or studying the science from various anomalies detected by your science ship. Once a habitable planet was found within Alpha Centauri’s system, a colony ship was sent to claim it for the United Nations of Earth (see image below).

After this point you are free to keep exploring and claiming more star systems for yourself, but you must also consider your own population. With few exceptions, the majority of all resources are produced by your Pops; therefore, you always want as many as you can get, working where you want them, to fuel your empire. So to keep expanding means growing the number of Pops in your empire and having worlds for them to live on. While this is manageable for most computers when each empire has only a couple dozen Pops, by the endgame your own empire can reach numbers in the thousands, to say nothing of all the other empires of similar size in the galaxy.

To determine how well the new computer runs Stellaris, we ran the same game on both machines and timed how long a month took over the course of the game. We started at year 2200 and timed a month every 20 years until the end game at 2400. We expected the new computer to outperform the old one: as shown in the figure below, the new (white) computer has a better processor and significantly more RAM than the old (black) computer.

Shown in the figure below, the results were graphed together to easily compare computational power. At the start of the game (year 2200), both computers took a similar amount of time to run one in-game month. Since the majority of the civilizations were still in their beginning stages, the population was low, so minimal computational power was needed. Over time, the population grew and more computational power was needed. The two computers diverged significantly, with the duration of one in-game month at year 2400 roughly double on the old computer compared to the new one. Compared to the beginning of the game, the old computer sees a difference of 13 seconds while the new computer only has a difference of 4 seconds. Such a small difference means the new computer can definitely handle the majority, if not all, of what I’ll throw at it during the rest of my PhD. A before and after of the game has been included at the bottom.

Figure: Graph of the computational time for one in-game month between the start and end game. Blue shows the old computer, which has a large difference of 13 seconds. Orange shows the new computer and only differs by 4 seconds because of its better processor and more RAM. This is promising for any heavy computational research our group will perform.

Before: The galaxy at the starting stage of our game. The different colours represent different civilizations and you can see all the star systems which are represented by the white dots connected by blue hyperlanes. Any dots not in the coloured blobs are free to be claimed by nearby civilizations. Our civilization, the United Nations of Earth, is located near the top shown by a red arrow. 

After: This is what our galaxy looks like at the end stage of our game. The civilizations have greatly expanded with most star systems claimed. You can see the United Nations of Earth still at the top but they have significantly expanded (red circle).

Sunday, April 11, 2021

What is the geocorona, and how can modelling it help us find habitable exoplanets?

 

As a planet passes in front of a star, the size of the shadow it casts differs at different wavelengths. The solid surface of the planet blocks all light. Above that, the wavelength-dependent absorption in the thin sliver of atmosphere seen against the star helps us understand the environment of that planet. But at Lyman-alpha (121.6 nm) wavelengths, the greatest absorber is hydrogen escaping from the top of the atmosphere. This sparse hydrogen region (called the geocorona in Earth's case) can be many times larger than the radius of the solid surface, blocking a much larger fraction of the star's light. The image above shows Earth's geocorona as seen from the Moon, taken by Apollo 16 astronauts in 1972. https://www.esa.int/Science_Exploration/Space_Science/Earth_s_atmosphere_stretches_out_to_the_Moon_and_beyond

by Justin Kerr

For today’s PVL blog post, I am going to be giving you a brief introduction to my main research project with the team and eventual Master’s thesis. My research is focused on developing a better understanding of the hydrogen coronae that we expect to surround exoplanets in order to direct future searches for life with UV telescopes. 

But just what is a hydrogen corona? 

It may surprise you to hear that a small portion of the Earth's atmosphere extends past the orbit of the Moon. This very outermost portion of Earth's atmosphere consists of atomic hydrogen (not the H2 gas that we are familiar with at the surface) that has yet to fully escape the planet's gravity, and it is known as the geocorona. The geocorona is the hydrogen portion of the exosphere, which itself is defined as the region of the atmosphere where densities of atmospheric particles are so low that they can be described as collisionless. Recent studies have shown that at the orbit of the Moon the density is so low that you would find only about one single atom of hydrogen per cubic meter. The geocorona extends past the Moon to a radius of at least 100 times that of the Earth before eventually merging with empty space, as the influence of the solar wind makes it impossible for the Earth to keep its grip on the atoms.
    
Now that we know what the geocorona is, how can we actually detect it? A single atom per cubic meter of empty space isn’t exactly something you could count visually or easily collect for sampling, after all. Luckily for us, hydrogen happens to be an exceptionally well studied element and we can borrow one of the most famous ways astronomers detect it throughout our galaxy. From quantum mechanics we know that the electrons in atoms can only occupy specific energy levels, and when they move between these levels they will either release or absorb light of a specific wavelength related to the energy difference between the two levels. 

For hydrogen, we know that when an electron moves between the first and second energy levels it will absorb or release light with a wavelength of 121.6 nm. This specific spectral line is known as the Lyman-alpha line. Since this wavelength falls within the ultraviolet (UV) portion of the electromagnetic spectrum, we can detect it with UV telescopes. In the case of the geocorona, we can see this Lyman-alpha light around the Earth when 121.6 nm light from the Sun is absorbed by the geocoronal hydrogen and re-emitted in different directions. This is the light captured in the picture taken by the Apollo 16 astronauts shown above. Things work slightly differently for exoplanets, since the Lyman-alpha light produced by the planet's hydrogen corona won't make it all the way to Earth. Instead of looking for emission from the corona, we look for drops in the Lyman-alpha light that should normally reach us from the star, as some of it will have been absorbed by the corona and re-emitted in random directions.
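The 121.6 nm figure falls straight out of the energy-level picture described above. Here is a short sketch using the standard Bohr-model energies for hydrogen; the constants are textbook values, and this simple estimate gives about 121.5 nm (the familiar 121.6 nm emerges once small corrections such as the finite nuclear mass are included).

```python
# Wavelength of the hydrogen Lyman-alpha transition (n = 2 -> n = 1),
# from the Bohr-model energy levels E_n = -13.6057 / n**2 eV.
RYDBERG_EV = 13.605693   # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.841984   # h * c expressed in eV * nm

def transition_wavelength_nm(n_lower: int, n_upper: int) -> float:
    """Photon wavelength (nm) for an electron moving n_upper -> n_lower."""
    delta_e = RYDBERG_EV * (1 / n_lower**2 - 1 / n_upper**2)
    return HC_EV_NM / delta_e

print(round(transition_wavelength_nm(1, 2), 1))  # Lyman-alpha: ~121.5 nm
```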

So, we can see a hydrogen corona floating around an exoplanet. How does this relate to finding habitable worlds? It comes down to how a hydrogen corona is actually produced around a planet. From studying the geocorona, we know that its size is directly linked to the amount of water vapour located at the base of the exosphere. This is because the atomic hydrogen making up the geocorona is produced by the photodissociation (breaking apart of a molecule by light) of water. The size of the hydrogen corona around an exoplanet with an Earth-like atmosphere should likewise be linked to the amount of water vapour in the same location. While we can currently detect the hydrogen coronae of gas giants with hydrogen-dominated atmospheres using the Hubble Space Telescope, we will need to wait for more powerful instruments, such as the proposed LUVOIR telescope, to see them around potentially habitable exoplanets.
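A rough bit of geometry shows why an extended corona is so much easier to spot in transit than the planet itself. The sketch below treats the blocker as an opaque disk, which is only an upper bound (a real hydrogen corona is optically thin, so the true Lyman-alpha absorption is smaller), and the 10-Earth-radius corona size is purely an illustrative assumption.

```python
# Transit-depth geometry: an opaque disk of radius R in front of a star
# of radius R_star blocks a fraction (R / R_star)**2 of the starlight.
# A real hydrogen corona is optically thin, so these are upper bounds,
# but the scaling shows why an extended corona dominates the signal.
R_SUN_M = 6.957e8    # solar radius, meters
R_EARTH_M = 6.371e6  # Earth radius, meters

def transit_depth(r_blocker_m: float, r_star_m: float = R_SUN_M) -> float:
    """Fraction of stellar flux blocked by an opaque disk in transit."""
    return (r_blocker_m / r_star_m) ** 2

print(f"{transit_depth(R_EARTH_M):.1e}")       # bare Earth: ~8e-5
print(f"{transit_depth(10 * R_EARTH_M):.1e}")  # corona to 10 Earth radii: 100x deeper
```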

The goal of my research is to use computer models of exoplanet atmospheres to understand how the size of the hydrogen corona changes with different atmospheric characteristics below the exobase. While telescopes such as LUVOIR, which could be used to confirm our models, are still a long way from launching, this work will help develop analysis plans and research proposals so that we are ready when the necessary data become available for study. Even though it is just one potential piece of the massive effort to find alien life, I hope that my efforts here with PVL will one day play a role in this grand undertaking to better understand our place in the universe.

Tuesday, April 6, 2021

The Missing Shorelines of Mars

This week, PVL MSc Conor Hayes considers the Martian ocean hypothesis. This hypothesis has several strong lines of evidence in its favour. However, some of the expected geological remnants, such as a clear shoreline lying along an equipotential surface, are lacking. Above: features described by some authors as putative "mega-tsunami" backwash channels on Mars, perhaps caused by a meteor impacting an ancient Martian ocean. (NASA/JPL/University of Arizona)

by Conor Hayes

Although it is pretty clear that liquid water once existed on the surface of Mars, there is still ongoing debate over how much water was present, as well as how long it lasted. One of the more exciting theories is the “Mars ocean hypothesis,” which posits that the planet’s northern hemisphere hosted a large ocean that covered about a third of its surface. If you look at a terrain map of Mars, this theory intuitively makes sense. Much of the northern hemisphere is a large basin, about five kilometers below the average terrain elevation (sometimes called the datum) and comparatively lacking in features like impact craters. The distribution of former stream channels and river deltas also appears to be consistent with the idea of these primordial rivers flowing into an ocean. The presence of such a large body of water would have significant implications for the potential habitability of Mars in the past, particularly given the thicker atmosphere and higher temperatures needed to sustain that volume of liquid water for an extended period of time.

One problem facing this theory is the lack of an obvious shoreline. Although several potential shorelines have been identified, none has been particularly convincing. In addition to having alternative geological explanations, these proposed shorelines show substantial changes in elevation along their lengths, on the order of several kilometers. Because water settles along surfaces of constant gravitational potential, this would suggest either that some kind of geological rearrangement occurred between the formation of the shorelines and the present day or, more likely, that these features aren't shorelines at all.

In addition to providing clear direct evidence for a Martian ocean, finding shorelines would also help us constrain the properties of Mars’ early atmosphere. Here on Earth, shorelines are largely cut by wind-driven waves, in addition to other phenomena like tides. If ancient shorelines do exist on Mars, then that necessarily implies that the atmosphere was once dense enough to allow the wind to form significant waves. Conversely, the lack of shorelines does not necessarily imply the lack of an ocean, but rather suggests that the atmosphere may have been too thin for wave formation.

Interestingly, the atmospheric pressure required for winds similar to those observed on Mars today to generate ocean waves is much lower than on Earth. This is because gravity plays an important role in the behaviour of fluids: in a lower-gravity environment, like that found on Mars, waves form much more easily at a given surface pressure. A pressure of 50 millibars (about 5% of Earth's sea-level pressure) would require winds of 30 kilometers per hour to form waves, while an Earth-like atmospheric pressure on Mars could sustain waves with a wind speed of only five kilometers per hour.

In 2016, another theory was put forward to explain the missing shorelines: massive tsunamis caused by two meteor impacts. The authors of this theory present evidence of extant backwash channels, formed when the ocean suddenly rushed inland before slowly draining back out. These would have been dramatic events, as evidenced by maximum inland run-up distances of 500+ kilometers that would have required typical wave heights of 50 meters, possibly up to 120 meters in some areas. Such violent events would have obliterated much of the existing shoreline, resulting in the situation we see today.

The Mars ocean hypothesis has a number of other problems that it must address. For example, we would expect that a Martian ocean would undergo a carbon cycle much like Earth’s oceans do, perhaps even to a greater extent due to the higher concentration of carbon dioxide in Mars’ atmosphere. This process would have resulted in the deposition of carbonate minerals on the ocean floor, something that we have not yet observed in meaningful amounts. One could explain this discrepancy by making the ocean more acidic, which would inhibit carbonate formation.

Regardless of whether or not Mars did have an ocean on its surface at some point in the past, it’s still fun to think about sitting on a Martian beach in your spacesuit, watching as a gentle breeze stirs up large, slow-moving waves that break against the shoreline with less force than you might expect given their size. When I read about things like this, I am reminded why I decided to study astronomy in the first place. The universe is a big place full of sights that can be simultaneously familiar and entirely alien, and you don’t even need to go far from home to experience them. True, Mars may not have oceans now, but being able to explore the ghosts of what once was is perhaps just as awe-inspiring as seeing those ancient waves myself.

Sources:
Rodriguez et al. 2016 (https://doi.org/10.1038/srep25106)
Banfield et al. 2015 (https://doi.org/10.1016/j.icarus.2014.12.001)
Fairén et al. 2004 (https://doi.org/10.1038/nature02911)