Sunday, June 13, 2021

Modelling the atmosphere of K2-141b: June update

 

A model of a planetary environment doesn't spring forth in all of its detail. Typically we start with the simplest model that captures the essential physics, but which also leaves out important details. Sometimes the description of such a model even fits on the back of an envelope! We then build in the complexity piece by piece. This is a process that PhD student Giang has been pursuing over the past couple of years as his models of K2-141b become ever more sophisticated. At each stage, we learn something new as we proceed from a solution accurate to a particular order of magnitude, to a 10%-level solution, to a 1%-level solution. There is benefit in the complexity - but it's important not to outrun the data by too much. If we make a prediction or add a minor process that cannot be verified through the data, we run the risk of inventing stories about these worlds that are mere delusions.

By Giang Nguyen

In my previous post, I showed what happened when I introduced UV radiation absorption to K2-141b’s atmosphere. The results from the model turned bizarre as the atmosphere kept heating up until it essentially became plasma. Although numerically sound within our mathematical construct, this ultra-hot atmosphere simply isn’t realistic, as it would make the planet’s atmosphere even hotter than its star.

As I suspected, there was an issue with how I dealt with radiative cooling. Originally, the atmosphere could cool exclusively through infrared emission. Although most of the energy does radiate at infrared wavelengths, the emissivity of silicon monoxide in that spectral range is very small compared to its emissivity in the UV. Therefore, there was unaccounted-for UV emission that would significantly cool the atmosphere.

The solution to this problem is to calculate the blackbody radiation of the atmosphere separately in the infrared and the UV. This is done by integrating the Planck function over the desired wavelength range and multiplying it by the corresponding emissivity. Here’s the thing about blackbody radiation, especially at temperatures of thousands of kelvins: most of the radiance comes from a very small sliver of wavelengths and is comparatively negligible everywhere else. Therefore, at low spectral resolution, the integration yields a very inaccurate estimate of the radiance.
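
To make that concrete, here is a minimal sketch of the direct band integration (in Python; the function names and the trapezoidal quadrature are my own illustration, not the model's code). The emitted flux follows by multiplying by the band emissivity:

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck(lam, T):
    """Planck spectral radiance B_lambda at wavelength lam [m] and
    temperature T [K], written to avoid overflow at large hc/(lam*k*T)."""
    x = H * C / (lam * KB * T)
    return (2.0 * H * C**2 / lam**5) * np.exp(-x) / (-np.expm1(-x))

def band_radiance(T, lam1, lam2, n=100_000):
    """Brute-force trapezoidal integration of the Planck function over a band.
    The wavelength grid must be fine enough to resolve the sharp peak at
    thousands of kelvins; too coarse a grid badly misestimates the integral."""
    lam = np.linspace(lam1, lam2, n)
    B = planck(lam, T)
    dlam = lam[1] - lam[0]
    return (B.sum() - 0.5 * (B[0] + B[-1])) * dlam  # trapezoidal rule

# e.g. UV vs. infrared bands for a ~3000 K atmosphere:
print(band_radiance(3000.0, 0.1e-6, 0.4e-6))   # UV band
print(band_radiance(3000.0, 0.7e-6, 50.0e-6))  # IR band
```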

My next step was to precompute the Planck integration separately, solely as a function of temperature and with adequate spectral resolution, and then to fit the result to a polynomial. As the integration now becomes a single line of calculation instead of a bunch of for loops, we’re back to our old speedy model. However, we are at the mercy of our fit coefficients, and it seems that our temperature range is too large for a polynomial fit to be accurate; note that our temperatures can range from 0 to 3000 K.
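
A sketch of that polynomial shortcut (again my own construction; the 0.1-0.4 micron band and the polynomial degree are arbitrary choices for illustration):

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def band_radiance(T, lam1=0.1e-6, lam2=0.4e-6, n=20_000):
    """Finely resolved band integral of the Planck function (slow but accurate)."""
    lam = np.linspace(lam1, lam2, n)
    x = H * C / (lam * KB * T)
    B = (2.0 * H * C**2 / lam**5) * np.exp(-x) / (-np.expm1(-x))
    return (B.sum() - 0.5 * (B[0] + B[-1])) * (lam[1] - lam[0])

# Precompute on a temperature grid once, offline...
T_grid = np.linspace(1.0, 3000.0, 600)
F_grid = np.array([band_radiance(T) for T in T_grid])

# ...then fit a polynomial so the model evaluates the band radiance in one
# line. Over 0-3000 K the function spans many orders of magnitude, which is
# why a polynomial fit struggles to stay accurate everywhere.
fit = np.polynomial.Polynomial.fit(T_grid, F_grid, deg=9)
print(fit(2400.0), band_radiance(2400.0))  # fast estimate vs. direct integral
```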

All hope seemed lost. I was going to have to run the slow model, which I estimated would take weeks to pump out a solution - one that might not even be correct. Thankfully, some scientists in the 1970s ran into the same problem and solved it themselves. When you integrate the Planck function by parts, you end up with an infinite sum (a few mathematical identities are needed here as well). Computing this infinite sum is much faster than the classic approach because the series converges very quickly. Finally, with the Planck finite integral taken care of, we could deal with radiative cooling.
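
Here is a minimal sketch of that series solution, using the standard integration-by-parts expansion of the Planck integral (function names are mine; the 2900 K example anticipates the result below):

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]
H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck_tail(x, nterms=30):
    """Fraction of sigma*T^4 radiated above dimensionless energy x, i.e.
    (15/pi^4) * integral_x..inf u^3/(e^u - 1) du, evaluated with the rapidly
    converging series obtained by integrating the Planck function by parts:
    sum_n exp(-n*x) * (x^3/n + 3x^2/n^2 + 6x/n^3 + 6/n^4)."""
    n = np.arange(1, nterms + 1)
    terms = np.exp(-n * x) * (x**3 / n + 3 * x**2 / n**2
                              + 6 * x / n**3 + 6 / n**4)
    return terms.sum() * 15.0 / np.pi**4

def band_flux(T, lam1, lam2):
    """Blackbody flux [W/m^2] emitted between lam1 < lam2 [m] at T [K]."""
    x_hi = H * C / (lam1 * KB * T)  # short-wavelength edge -> large x
    x_lo = H * C / (lam2 * KB * T)  # long-wavelength edge -> small x
    return SIGMA * T**4 * (planck_tail(x_lo) - planck_tail(x_hi))

# e.g. the UV band of a 2900 K atmosphere, with no for loops over wavelength:
print(band_flux(2900.0, 0.1e-6, 0.4e-6))
```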

As expected, UV emission capped the temperature of the atmosphere – but it was still hot. The temperature hovers around 2900 K almost uniformly across the dayside. Because UV emission only becomes significant when the atmosphere is hot, it never forces the temperature to drop further at low temperatures. When UV absorption and emission cancel each other out at a specific temperature, a very stable sort of radiative balance occurs. This turns out to be important where the atmosphere becomes too thin for IR radiation to take effect.

A warm SiO atmosphere is expected, but for it to be so horizontally uniform and warmer than the surface is a surprise - a welcome one. For emission spectra, a warmer atmosphere means a brighter signal. Using SiO spectral features, we could ultimately see K2-141b’s atmosphere instead of the ground beneath it. Also, the scale height is larger, even near the terminator (the part of the planet where you would see the star on the horizon). This means that during a transit, the planet’s atmosphere is optically thick enough to absorb starlight that passes through it on the way to Earth. With supersonic winds, this might induce an observable Doppler shift in K2-141b’s transmission spectra.

Ultimately, when UV absorption and emission are considered, the atmosphere of K2-141b is easier to detect with both low-resolution and high-resolution spectral instruments. This is very good news, as K2-141b is slated for observation time with the James Webb Space Telescope (JWST). Along with possible future observations from ground-based telescopes, we may definitively detect and characterize K2-141b’s atmosphere - a first for terrestrial exoplanets.

This concludes the update on my current research project. Using a convenient numerical method to evaluate definite Planck integrals, we solved the problem of K2-141b’s atmospheric radiative cooling. The resulting atmosphere, with the full radiative transfer, is almost uniformly hot across the planet’s dayside. This suggests that K2-141b’s atmosphere is a lot easier to detect than anticipated. This is exciting, as K2-141b is a high-value target for observation, and it might be the first terrestrial exoplanet where we observe an atmosphere. Although a small step, it is still a step towards finding habitable worlds and life beyond the solar system.

Sunday, May 30, 2021

Testing a new desktop’s computational power with a video game

With each passing year, we depend more and more upon computational simulations for our research work at PVL. Recently, we decided to acquire a new workstation to increase our capacity. This week, Charissa Campbell writes about her efforts to test-drive the new machine using a piece of software that would challenge its simulation capabilities: the video game Stellaris.

by Charissa Campbell

Now that I am fully back to work, several projects have come up that may test the capabilities of the laptop I’m currently using. To help, I was able to request a desktop PC with enough processing power to handle anything. For a grad student, money is tight in most situations, so getting a brand-new piece of hardware is a luxury. I was quite excited to see how well this computer performed and decided to look into a suitable test.

My partner and I have had our gaming computers for several years, so they are getting on the slower side. We had the idea of testing the new desktop with a specific game that is notorious for running slowly on average gaming PCs. The chosen game, Stellaris, is a 4X RTS (Real-Time Strategy) grand strategy game in which you guide your customizable civilization through a randomized galaxy. It is infamous for creating a universe so populated that, on old hardware, the heavy CPU load slows the endgame to a crawl.

Stellaris is set in a galaxy populated with hundreds of star systems, each with its own planets. Each empire has a unique species and a randomly placed starting star system, and the goal is to explore the nearby cosmos. You are free to expand your empire while also researching new technology or ancient alien artifacts. This also includes colonizing any habitable planets you come across, assuming you get there first. You can make new friends or enemies across the galaxy, with the ultimate goal of surviving an extra-galactic invasion near the end of the game.

To play the game, you can choose and/or design any type of civilization with whatever traits you’d like. Species range from humans to plants to robots and more. You can customize even further by choosing specific traits such as Adaptive (habitability +10%), Strong (army damage + 20%, worker output +5%), Industrious (minerals +15%), and many more. Certain traits can be useful depending on how you want to play the game: do you want to explore, complete science objectives or try taking over the entire galaxy?

At the beginning of the game, each empire has one planet with a handful of "Pops," the unit of people. Over time, as each empire expands, more and more people populate habitable planets and eventually space-borne habitats and ring-worlds. Each Pop is assigned a job based on planetary buildings to produce the resources its empire needs. Each job's output is affected by a multitude of modifiers from either the job type itself or the Pop working it. Since every modifier must be checked before the actual output can be calculated, a lot of computation happens behind the scenes every in-game month. Since these calculations need to be done for each individual Pop, the time they take adds up, which can make average PCs slow down significantly between the start and end of the game. The gaming PCs in our house add several minutes to the computation time as the endgame nears. However, the new computer has more RAM and a much better processor and video card, so it should be able to handle these tasks more quickly.
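
To make that scaling concrete, here is a toy sketch (my own construction, not Stellaris's actual code) of why per-Pop modifier checks add up:

```python
from dataclasses import dataclass, field

@dataclass
class Pop:
    base_output: float                             # resources per month from the job
    modifiers: list = field(default_factory=list)  # e.g. 0.15 for a +15% bonus

def monthly_output(pops):
    """Sum each Pop's production after checking every one of its modifiers.
    Cost grows as O(num_pops * num_modifiers), every single in-game month."""
    total = 0.0
    for pop in pops:
        output = pop.base_output
        for bonus in pop.modifiers:  # every modifier checked each month
            output *= 1.0 + bonus
        total += output
    return total

# A couple dozen Pops is trivial; tens of thousands across all empires, each
# with a stack of modifiers, is what drags the endgame to a crawl.
empire = [Pop(4.0, [0.15, 0.10, 0.05]) for _ in range(10_000)]
print(monthly_output(empire))
```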

The game we set up has 1000 stars in our galaxy, each with its own set of planets. You can also adjust how many habitable planets you encounter; we maxed the setting at 5x to encourage a higher population and really test the computer. For this run, we went with the United Nations of Earth, a peaceful, democratic civilization with the goal of making friends and building a community that benefits all.


Starting from Earth in our own solar system, you can expand further by terraforming Mars or by being bold and colonizing a nearby star system. Alpha Centauri is close by, with the possibility of habitable planets, so it seemed like a suitable choice. In order to colonize, you must send a science ship to survey the nearby system for habitable planets, resources, or alien anomalies. Depending on your civilization, you may want to concentrate on exploiting mining resources or on studying the various anomalies your science ship detects. Once a habitable planet was found within Alpha Centauri’s system, a colony ship was sent to claim it for the United Nations of Earth (see image below).

After this point you are free to keep exploring and claiming more star systems for yourself, but you must also consider your own population. With few exceptions, the majority of all resources are produced by your Pops; therefore, you always want as many of them as possible working where you need them to fuel your empire. To keep expanding means growing the number of Pops in your empire and having worlds for them to live on. While this is manageable for most computers when each empire has only a couple dozen Pops, by the endgame your own empire can reach numbers in the thousands, to say nothing of all the other empires of similar size in the galaxy.

To determine how well the new computer runs Stellaris, we ran the same game on both machines and timed how long a month took over the course of the game. We started at year 2200 and timed a month every 20 years until the end of the game at 2400. We expected the new computer to outperform the old one: as pictured below, the new (white) computer has a better processor and significantly more RAM than the old (black) computer.

The results are graphed together in the figure below to make the comparison easy. At the start of the game (year 2200), both computers took similar times to compute one in-game month: since most civilizations were still in their beginning stages, populations were low and minimal computational power was needed. Over time, populations grew and more computational power was required, and the two computers diverged significantly, with the duration of one in-game month at year 2400 roughly doubled. Compared to the beginning of the game, the old computer slowed by 13 seconds while the new computer slowed by only 4 seconds. Such a small difference suggests that the new computer can handle most, if not all, of what I’ll throw at it during the rest of my PhD. A before and after of the game is included at the bottom.

Figure: Graph of the computational time for one in-game month at the start and end of the game. Blue shows the old computer, which slows by a large margin of 13 seconds. Orange shows the new computer, which differs by only 4 seconds thanks to its better processor and additional RAM. This is promising for any computationally heavy research our group will perform.

Before: The galaxy at the starting stage of our game. The different colours represent different civilizations and you can see all the star systems which are represented by the white dots connected by blue hyperlanes. Any dots not in the coloured blobs are free to be claimed by nearby civilizations. Our civilization, the United Nations of Earth, is located near the top shown by a red arrow. 

After: This is what our galaxy looks like at the end stage of our game. The civilizations have greatly expanded with most star systems claimed. You can see the United Nations of Earth still at the top but they have significantly expanded (red circle).

Sunday, April 11, 2021

What is the geocorona, and how can modelling it help us find habitable exoplanets?

 

As a planet passes in front of a star, the size of the shadow it casts is different at different wavelengths. Typically the solid surface of the planet blocks all light. Above that, the wavelength variation of the absorption in the thin sliver of atmosphere seen against the star helps us understand the environment of that planet. But if you go to Lyman-alpha (121.6 nm) wavelengths, the greatest absorber is hydrogen escaping from the top of the atmosphere. This sparse hydrogen region, called the geocorona, can be many times larger than the radius of the solid surface, blocking a much larger fraction of the star's light. The image above shows Earth's geocorona as seen from the Moon, taken by Apollo 16 astronauts in 1972. https://www.esa.int/Science_Exploration/Space_Science/Earth_s_atmosphere_stretches_out_to_the_Moon_and_beyond

by Justin Kerr

For today’s PVL blog post, I am going to give you a brief introduction to my main research project with the team and my eventual Master’s thesis. My research is focused on developing a better understanding of the hydrogen coronae that we expect to surround exoplanets, in order to direct future searches for life with UV telescopes.

But just what is a hydrogen corona? 

It may surprise you to hear that a small portion of the Earth's atmosphere extends past the orbit of the Moon. This very outermost portion of Earth’s atmosphere, known as the geocorona, consists of atomic hydrogen (so not the H2 gas that we are familiar with at the surface) which has yet to fully escape the planet's gravity. The geocorona is the hydrogen portion of the exosphere, which itself is defined as the region of the atmosphere where densities of atmospheric particles are so low that they can be described as collision-less. At the orbit of the Moon, recent studies have shown that the density is so low that you would find only about a single atom of hydrogen per cubic meter. The geocorona extends past the Moon to a radius of at least 100 times that of the Earth before eventually merging with empty space, as the influence of the solar wind makes it impossible for the Earth to keep its grip on the atoms.
    
Now that we know what the geocorona is, how can we actually detect it? A single atom per cubic meter of empty space isn’t exactly something you could count visually or easily collect for sampling, after all. Luckily for us, hydrogen happens to be an exceptionally well-studied element, and we can borrow one of the most famous ways astronomers detect it throughout our galaxy. From quantum mechanics we know that the electrons in atoms can only occupy specific energy levels, and when they move between these levels they will either release or absorb light of a specific wavelength related to the energy difference between the two levels.

For hydrogen, we know that when an electron moves between the first and second energy levels it will absorb or release light with a wavelength of 121.6 nm. This specific line in the spectrum is known as the Lyman-alpha line. Since this wavelength falls within the ultraviolet (UV) portion of the electromagnetic spectrum, we can detect it with UV telescopes. In the case of the geocorona, we can see this Lyman-alpha light around the Earth when 121.6 nm light from the Sun is absorbed by the geocoronal hydrogen and re-emitted in different directions. It is this light that the picture taken by the Apollo 16 astronauts, shown above, is seeing. This works slightly differently for exoplanets, since the Lyman-alpha light produced by the planet’s hydrogen corona won’t make it all the way to Earth. Instead of looking for emissions from them, we look for drops in the Lyman-alpha light that should normally be produced by the star, as some of it will have been absorbed by the corona and re-emitted in random directions.
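
That 121.6 nm figure drops straight out of the Rydberg formula; here is a quick sketch (the function name is mine):

```python
R_H = 1.0967758e7  # Rydberg constant for hydrogen [1/m]

def hydrogen_line(n_low, n_high):
    """Wavelength [m] of the photon emitted (or absorbed) when hydrogen's
    electron moves between energy levels n_high and n_low."""
    inv_wavelength = R_H * (1.0 / n_low**2 - 1.0 / n_high**2)
    return 1.0 / inv_wavelength

print(hydrogen_line(1, 2) * 1e9)  # ~121.6 nm: the Lyman-alpha line
```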

So, we can see a hydrogen corona floating around an exoplanet. How does this relate to finding habitable worlds? It comes down to how a hydrogen corona is actually produced around a planet. From studying the geocorona, we know that its size is directly linked to the amount of water vapour located at the base of the exosphere. This is because the atomic hydrogen making up the geocorona is produced by the photo-dissociation (breaking apart of the molecule by light) of water. The size of the hydrogen corona around an exoplanet with an Earth-like atmosphere should likewise be linked to the amount of water vapour in the same location. While we can currently detect the hydrogen coronae of gas giants with hydrogen-dominated atmospheres using the Hubble Space Telescope, we will need to wait for more powerful instruments, such as the proposed LUVOIR telescope, to see them around potentially habitable exoplanets.

The goal of my research is to use computer models of exoplanet atmospheres to understand how the size of the hydrogen corona changes with different atmospheric characteristics below the exobase. While telescopes such as LUVOIR, which could be used to confirm our models, are still a long way from launching, this work will help develop analysis plans and research proposals so that we are ready when the necessary data become available. Even though it is just one potential piece of the massive effort to find alien life, I hope that my work here at PVL will one day play a role in this grand undertaking to better understand our place in the universe.

Tuesday, April 6, 2021

The Missing Shorelines of Mars

This week, PVL MSc student Conor Hayes considers the Martian ocean hypothesis. This theory has several strong lines of evidence in its favour. However, some of the expected geological remnants, such as a clear shoreline lying along a geopotential, are lacking. Above: features described by some authors as putative "mega-tsunami" backwash channels on Mars, perhaps caused by a meteor impacting an ancient Martian ocean. (NASA/JPL/University of Arizona)

by Conor Hayes

Although it is pretty clear that liquid water once existed on the surface of Mars, there is still ongoing debate over how much water was present, as well as how long it lasted. One of the more exciting theories is the “Mars ocean hypothesis,” which posits that the planet’s northern hemisphere hosted a large ocean that covered about a third of its surface. If you look at a terrain map of Mars, this theory intuitively makes sense. Much of the northern hemisphere is a large basin, about five kilometers below the average terrain elevation (sometimes called the datum) and comparatively lacking in features like impact craters. The distribution of former stream channels and river deltas also appears to be consistent with the idea of these primordial rivers flowing into an ocean. The presence of such a large body of water would have significant implications for the potential habitability of Mars in the past, particularly given the thicker atmosphere and higher temperatures needed to sustain that volume of liquid water for an extended period of time.

One problem facing this theory is the lack of an obvious shoreline. Although several potential shorelines have been identified, none have been particularly convincing. In addition to having alternative geological explanations, these proposed shorelines show substantial changes in elevation along their lengths, on the order of several kilometers. Because water settles along gravitational equipotentials, this would suggest either that some kind of geological rearrangement occurred between the formation of the shorelines and the present day or, more likely, that these features aren’t shorelines at all.

In addition to providing clear direct evidence for a Martian ocean, finding shorelines would also help us constrain the properties of Mars’ early atmosphere. Here on Earth, shorelines are largely cut by wind-driven waves, in addition to other phenomena like tides. If ancient shorelines do exist on Mars, then that necessarily implies that the atmosphere was once dense enough to allow the wind to form significant waves. Conversely, the lack of shorelines does not necessarily imply the lack of an ocean, but rather suggests that the atmosphere may have been too thin for wave formation.

Interestingly, the atmospheric pressure required for winds like those observed on Mars today to generate ocean waves is much lower than on Earth. This is because gravity plays an important role in the behaviour of fluids: in a lower-gravity environment like that of Mars, waves form much more easily at a given surface pressure. A pressure of 50 millibars (about 5% of sea-level pressure on Earth) would require winds of 30 kilometers per hour to form waves, while an Earth-like atmosphere on Mars could sustain waves with a wind speed of only five kilometers per hour.
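
One back-of-envelope way to see this scaling is the classic Kelvin-Helmholtz threshold for wind raising waves on water. It is known to overestimate the true onset speed (real waves form below it, and Banfield et al. 2015, cited below, use a more detailed model, so the absolute numbers here differ from those quoted above), but the dependence on gravity and air density comes through. The CO2 gas constant and the 210 K temperature are my assumptions:

```python
import math

def kh_threshold_wind(g, rho_air, rho_water=1000.0, sigma=0.072):
    """Minimum wind speed [m/s] for the Kelvin-Helmholtz instability to raise
    waves on water: U^2 = 2*(rho_w+rho_a)/(rho_w*rho_a) * sqrt(sigma*g*(rho_w-rho_a))."""
    return math.sqrt(2.0 * (rho_water + rho_air) / (rho_water * rho_air)
                     * math.sqrt(sigma * g * (rho_water - rho_air)))

R_CO2, T = 189.0, 210.0          # CO2 gas constant [J/kg/K], temperature [K]
rho = lambda P: P / (R_CO2 * T)  # ideal-gas air density [kg/m^3] at pressure P [Pa]

print(kh_threshold_wind(9.81, 1.2) * 3.6)        # Earth: ~24 km/h
print(kh_threshold_wind(3.71, rho(5000)) * 3.6)  # Mars, 50 mbar: ~58 km/h
print(kh_threshold_wind(3.71, rho(1e5)) * 3.6)   # Mars, 1 bar: ~13 km/h
```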

In 2016, another theory was put forward to explain the missing shorelines: massive tsunamis caused by two meteor impacts. The authors of this theory present evidence of extant backwash channels, formed when the ocean suddenly rushed inland before slowly draining back out. These would have been dramatic events, as evidenced by maximum inland run-up distances of 500+ kilometers that would have required typical wave heights of 50 meters, possibly up to 120 meters in some areas. Such violent events would have obliterated much of the existing shoreline, resulting in the situation we see today.

The Mars ocean hypothesis has a number of other problems that it must address. For example, we would expect that a Martian ocean would undergo a carbon cycle much like Earth’s oceans do, perhaps even to a greater extent due to the higher concentration of carbon dioxide in Mars’ atmosphere. This process would have resulted in the deposition of carbonate minerals on the ocean floor, something that we have not yet observed in meaningful amounts. One could explain this discrepancy by making the ocean more acidic, which would inhibit carbonate formation.

Regardless of whether or not Mars did have an ocean on its surface at some point in the past, it’s still fun to think about sitting on a Martian beach in your spacesuit, watching as a gentle breeze stirs up large, slow-moving waves that break against the shoreline with less force than you might expect given their size. When I read about things like this, I am reminded why I decided to study astronomy in the first place. The universe is a big place full of sights that can be simultaneously familiar and entirely alien, and you don’t even need to go far from home to experience them. True, Mars may not have oceans now, but being able to explore the ghosts of what once was is perhaps just as awe-inspiring as seeing those ancient waves myself.

Sources:
Rodriguez et al. 2016 (https://doi.org/10.1038/srep25106)
Banfield et al. 2015 (https://doi.org/10.1016/j.icarus.2014.12.001)
Fairén et al. 2004 (https://doi.org/10.1038/nature02911)

Tuesday, March 30, 2021

The “Waiting Game” in Space Research

This week, PVL MSc student Grace Bischof considers the time required for space missions to come to fruition. For Martian missions, this process can take a few years. But for spacecraft headed to the outer planets, getting from proposal to science results can take the better part of a career.
(Image:
https://pixabay.com/illustrations/calendar-date-mark-day-hand-4159913/ )

by Grace Bischof

When applying to grad school, I knew that space research was the only area of research I felt truly enthusiastic about pursuing. So far, this research experience with PVL at York has been great. There are many wonderful aspects of planetary science research – and astronomy as a whole – that I’ve learned about over the past few months. However, I’ve also come to realize one of its overwhelming downsides: how long it takes for anything to happen.

For a Mars researcher, it isn’t quite so bad. The Mars 2020 mission was first announced by NASA in 2012, and the rover's payload was determined only a couple of years later. Now, as of March 2021, Perseverance has been wandering through its home in Jezero Crater for over 30 sols. On average, an instrument sent from Earth arrives at the red planet in 7 months. While the roughly 8-year period from proposal to landing might seem long initially, this is only the tip of the iceberg for the “waiting game” in planetary science and astronomy.

I recently listened to a talk given by Dr. Jason Barnes, who is a Deputy Principal Investigator on the Dragonfly mission. This mission involves sending a quad-copter to Titan – an icy moon of Saturn – to look for potential signs of life. First announced in 2019, Dragonfly is currently scheduled for a 2027 launch after being pushed back a year due to budget restrictions caused by the pandemic. By the time of launch, the Dragonfly mission will be the same age Mars 2020 was when it touched down on Mars’ surface. If that seems like a long wait, it gets even longer. Because Saturn is much farther away than Mars, Dragonfly will take another 6-10 years to arrive at Titan, rounding the mission's lifespan off to nearly 20 years before any primary science objectives can be carried out.

Perhaps the most infamous long wait in astronomy is the construction and launch of the James Webb Space Telescope (JWST). The JWST is an incredibly powerful and complex infrared telescope, built to probe space for the earliest galaxies and planetary systems. It was proposed in the 1990s as a successor to the Hubble telescope. The JWST has seen many launch dates come and pass -- 2007, 2011, 2014, 2018 -- and was even in danger of being cancelled in 2011. The launch is now expected to occur on October 31st, 2021. After only a few months in space, the JWST will start its planned 5-year science mission. There is a fun way I like to think about it: a baby born the day the JWST was proposed could be old enough to analyze the first data it returns in 2022.

Unfortunately, one of the biggest disadvantages of the long wait in space research is the advancement of technology after an instrument has been constructed but before the primary science occurs. The main objective of the New Horizons mission was to characterize Pluto by performing a flyby. After launching in 2006, the probe finally reached its target in 2015; technology had advanced by nine years by the time New Horizons made it to Pluto. What else could we have learned if technology developed during that time had been included in the mission? The vastness of space is both one of its most interesting characteristics and one of the most frustrating aspects of studying it.

So far, I haven’t had to play this “waiting game” with my research. The data I work with primarily comes from the Phoenix mission, which completed its operations over 10 years ago. When campus opens back up (hopefully within the next few months), I will be testing a spectrometer under Mars-analog conditions. With these tests, we hope to use this instrument to measure methane on Mars in the future. Who knows where I’ll be if/when that time comes, but there is one thing I can be certain about: it will take a long time to happen.

Saturday, March 6, 2021

How do you power a vehicle on Mars?

 

How is power generated for the vehicles that explore other worlds? This is a problem that PVL MSc student Justin Kerr is considering this term in his research. In the inner solar system, solar power tends to dominate, but once we move outward, electricity generated from the heat of nuclear decay in plutonium is the only viable option. Even on Mars, however, the latter method can be attractive because it guarantees a more stable source of power, unperturbed by environmental factors such as the dust covering the Spirit rover in the image above. (image: NASA/JPL-Caltech/Cornell)

by Justin Kerr


With the recent landing of the Perseverance rover in Jezero crater, exploration of the Martian surface is now all over both the news and general scientific conversation. Perseverance is the latest in the ever-expanding list of vehicles successfully landed on Mars, and a large one at that. It weighs in at 1025 kg and measures 3 x 2.7 x 2.2 meters, making it only a little smaller than the average car. But unlike a car, Perseverance obviously cannot just drive up to the local gas station for a refill. The amount of gasoline needed for a decently long mission would also be far too heavy and volatile to bring along for the launch from Earth and subsequent landing, so just how do we go about powering Perseverance and the vehicles before it while they explore the red planet?

Most past rovers and landers have utilized solar power to generate the electricity they needed to operate. Some recent examples of solar-powered vehicles include the InSight lander and the Spirit and Opportunity rovers. While Mars is farther from the Sun than Earth and thus receives less sunlight, solar panels on the surface can still produce enough energy to power a rover. They also have the huge benefit of never running out of fuel so long as the panels keep functioning, which allowed Opportunity to last for 14 Earth years, far longer than its expected lifespan. That said, the power outputs from the panels may seem quite low compared to the usage of everyday devices on Earth. The panels on InSight are capable of an output of 600 watts, and those on the Spirit and Opportunity rovers only 140 W – a pittance compared to the 850 W power supply in the computer on which I am currently typing this! The Mars vehicles make up for this relatively low generation rate by storing power in lithium-ion batteries, much like those in modern smartphones, for larger expenditures or for use during the night – which brings us to some of the problems associated with solar panels.
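
For a sense of just how much less sunlight Mars receives, here is a quick inverse-square estimate (standard constants; the variable names are mine):

```python
SOLAR_CONSTANT_EARTH = 1361.0  # W/m^2 at 1 au
MARS_DISTANCE_AU = 1.524       # Mars' mean distance from the Sun

# Sunlight falls off with the square of distance from the Sun.
flux_at_mars = SOLAR_CONSTANT_EARTH / MARS_DISTANCE_AU**2
print(flux_at_mars)  # ~586 W/m^2, about 43% of what Earth receives
```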

The biggest problem with solar energy production is the inconsistent availability of sunlight with which to generate power. The most obvious source of this problem is nighttime, but there are others. Seasonal variations cause decreases in solar power output, with power generation being more difficult during winter. Solar power is most effective at the equator where the most sunlight is received, making it much more difficult to power vehicles closer to the Martian poles (you can actually expect a short paper related to this topic from me in the future!). Dust is a major problem on Mars, able to settle in a fine layer on top of vehicles we land there even during normal weather. Spirit had its solar panel efficiency drop to roughly 60% due to dust coverage in its first year – although the rover actually got lucky by having its panels cleaned off by a dust devil in early 2005. Of even larger concern are the global dust storms that can occur on Mars which put so much dust into the atmosphere that solar power generation becomes essentially impossible. You can see the extensive dust buildup on Spirit from one of these storms in 2007 at the top of this article. One such storm was famously responsible for the loss of Opportunity in 2018.   

While solar power may seem like the obvious solution for power on another planet and is indeed effective in many situations, it clearly isn’t perfect – so what else could we use? Perseverance, Curiosity, and the Viking landers of ages past instead utilize the radioactive decay of plutonium-238 for power generation. Specifically, the power is generated by a device known as a radioisotope thermoelectric generator (RTG). When the plutonium in the MMRTG (Multi-Mission RTG) decays into uranium, the released radiation is absorbed by surrounding materials and produces significant amounts of heat, which can then be converted into electricity. In the Perseverance and Curiosity rovers, the excess heat lost in the conversion process is even put to use keeping the rovers' delicate electrical components warm. This method gets around the issues of solar power not working at night, during winter, or when covered by dust – radioactive decay proceeds regardless of the environmental conditions.

 
A warm and glowing Pu-238 pellet (image: US Dept. of Energy)

Nuclear power generation with MMRTGs still has some downsides compared to solar, such as the amount of power that an appropriately sized RTG can generate. Perseverance can currently generate only 110 W of power, which is used to charge batteries in the same manner as on the solar-powered vehicles. This output will also decline over time as the plutonium decays away. Plutonium-238 has a relatively short half-life of 87.7 years, meaning there will be a noticeable drop in the available power by the end of the rover’s 14-year lifespan (see the sketch at the end of this post). There is also concern about the amount of plutonium-238 available for future missions, as the United States only has enough left in its Cold War-era stockpiles for a few more missions. Thankfully, there are plans to begin producing the isotope right here in Ontario, at the Darlington nuclear power plant, in the near future. In the end, neither RTGs nor solar power provides a perfect solution to the power requirements of the vehicles we send to Mars. We can expect to see a mix of these two methods in upcoming Mars missions, with the next two vehicles set to land (China’s Tianwen-1 and the ESA’s ExoMars) both utilizing solar power. Just like here on Earth, there seems to be no single best answer for power generation on Mars.
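
Here is a minimal sketch of that drop-off, modelling the radioactive decay only (the 110 W starting point is Perseverance's figure from above; in practice the electrical output falls somewhat faster because the thermocouples also degrade):

```python
PU238_HALF_LIFE_YEARS = 87.7  # half-life of plutonium-238 [years]

def rtg_power(initial_watts, years):
    """Power remaining after `years` of exponential radioactive decay."""
    return initial_watts * 2.0 ** (-years / PU238_HALF_LIFE_YEARS)

print(rtg_power(110.0, 14.0))  # ~98 W left at the end of a 14-year mission
```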

Friday, February 19, 2021

SpaceX, Starlink, and the Commercialization of Space

This week, masters student Conor Hayes tackles a thorny issue: how to balance the expansion of private actors in space, and the benefits their work can have for those of us on Earth, with the needs of astronomical research. It's not inconceivable that such an expansion could change the night sky forever, not just for scientists working with sensitive instruments, but also for most of the world's city-bound population, for whom a starry sky could be replaced by criss-crossing lights. Must we give up wonder to achieve a better life for one another? Image: 19 Starlink satellites unintentionally imaged by the Blanco Telescope at the Cerro Tololo Inter-American Observatory. (CC BY 4.0, NOIRLab/CTIO/AURA/DELVE, https://nationalastro.org/news/starlink-satellites-imaged-from-ctio/)

by Conor Hayes

One of the consequences of the way that our economic systems are structured is an ongoing competition between public and private interests to exploit various resources. This competition rolled through astronomical Twitter like a bowling ball through a set of pins in November 2019, when the image above first made its way onto the internet.

Taken using the Victor M. Blanco 4-metre telescope at the Cerro Tololo Inter-American Observatory in Chile, it shows 19 bright streaks caused by a train of Starlink satellites passing through the telescope’s field of view. Unsurprisingly, this greatly reduced the quality of the data, leading to widespread concern about the long-term impact of Starlink on astronomical observations.

The mere fact that Starlink satellites are visible in telescope imagery isn’t the problem. Outside contamination of CCD images is nearly inevitable with a long-enough exposure time. If you ever get a chance to look at raw data from a telescope, you will probably see similar, though shorter, bright streaks caused by cosmic rays impacting the detector. Furthermore, artificial satellites have been occasionally ruining images for as long as there have been a significant number of them in orbit. So what is the problem then?

Part of what concerns astronomers about the Starlink constellation is the sheer number of satellites involved. SpaceX currently has authorization to launch 12,000 (!) Starlink satellites, and has submitted paperwork for approval of another 30,000 (!!). For comparison, the United Nations Office for Outer Space Affairs currently lists about 10,400 objects launched into space since 1957. When the constellation is completed, visible Starlink satellites may even outnumber visible stars in heavily light-polluted areas like Toronto. Given that the constellation is intended to surround the Earth in many different orbital planes, having a small handful of Starlink satellites streak across your telescope’s field of view may become a regular occurrence.

In addition to antagonizing astronomers who work in the optical, the development of Starlink has also worried radio astronomers. Ground-based radio astronomy is already hard enough thanks to the fact that many of our modern-day technological conveniences are constantly blasting radio waves into the environment. Consequently, much like how optical telescopes are located in dark areas away from major population centers, radio telescopes are often surrounded by “radio quiet zones”, large swaths of land where radio emissions are strictly regulated. But when those radio sources are passing overhead, as the Starlink satellites will be, those radio quiet zones may become significantly louder.

If nothing else, the conflict over Starlink shows how vital it will be for the scientific community and private businesses to communicate with each other to find a mutually beneficial way forward. Though SpaceX is now looking into ways to make their satellites less bright, including darker paint, sunshields, and shutting off transmissions when passing over radio quiet zones, these kinds of after-the-fact adjustments are not sustainable in the long term. Though I personally find the commercialization of space somewhat distasteful, I also recognize that as the barrier to entry gets lower, thanks in large part to the innovations championed by companies like SpaceX, it is almost inevitable that commercial interests will want to spread outward. Because astronomers have held a near-total monopoly on space for so long, learning to let other people in will be a difficult process, one that will require sustained, genuine cooperation from all interested parties.

I didn’t start writing this post with the intent to argue for the termination of the Starlink program. It’s a difficult needle for me to thread because on one hand, I am an astronomy grad student whose future career could be hindered by a poorly-managed privatization of space. On the other hand, I recognize that SpaceX’s goal with Starlink is an admirable one. The past year has demonstrated how global access to reliable, high-speed internet is now more of a necessity than a luxury, and demanding that Starlink be shut down just because of the challenges it presents for astronomy would be irresponsible and short-sighted. This goes both ways, of course. It was incredibly disheartening to scroll through some of the replies to the original tweet and see how many people were calling ground-based astronomy little more than a vanity project with no real worth to humanity (Elon’s tweets dismissing astronomers’ concerns out of hand and telling them that they were overreacting certainly didn’t help, either).

Though I don’t have any concrete solutions right now, it seems increasingly likely that, as is the case in so many other areas of our society, the responsibility for dealing with the monumental shifts in the way that the private and public spheres interact with each other beyond Earth will ultimately fall upon the next generation of astronomers currently working their way through their undergraduate and graduate educations. I do believe that we can eventually strike the right balance, but I hope that time comes before unregulated, antagonistic competition severely damages our ability to look up at the sky and wonder what lies beyond our home.