Looking to the future, astronomers are excited to see how machine learning – i.e., deep learning and artificial intelligence (AI) – will enhance surveys. One field that is already benefiting is the search for extrasolar planets, where researchers rely on machine-learning algorithms to distinguish faint signals from background noise. As this field continues to transition from discovery to characterization, the role of machine intelligence is likely to become even more critical.
Take the Kepler Space Telescope, which accounted for 2,879 confirmed discoveries (out of the 4,575 exoplanets discovered to date) during its nearly ten years of service. After examining the data collected by Kepler with a new deep-learning neural network called ExoMiner, a research team at NASA’s Ames Research Center was able to detect 301 more planetary signals and add them to the growing census of exoplanets.
These newly detected exoplanets and the ExoMiner algorithm were described in a paper recently accepted for publication in The Astrophysical Journal. The paper and project team were led by Hamed Valizadegan, a machine-learning manager with the Universities Space Research Association (USRA) at NASA Ames, and included multiple researchers from the USRA, the SETI Institute, and universities from around the world.
As they indicate in their paper, all 301 of the machine-validated planets were originally detected by the Kepler Science Operations Center pipeline. These planets were also promoted to the status of planet “candidate” by the Kepler Science Office (in other words, not confirmed). However, before the Kepler archive was examined using ExoMiner, no one was able to verify that these candidate signals were exoplanets.
Like all machine-learning techniques, this new deep neural network learns to identify patterns in the data it is fed. In the case of ExoMiner, researchers at NASA Ames designed it around the various tests and properties that human experts use to confirm the presence of exoplanets. Running on NASA’s Pleiades supercomputer, it uses this knowledge to distinguish actual exoplanets from various types of “false positives.”
The paper also shows that ExoMiner is more precise and consistent than existing classifiers at ruling out false positives and identifying the signatures of planets, while showing science teams how it arrived at its conclusions. As Valizadegan explained:
“When ExoMiner says something is a planet, you can be sure it’s a planet. ExoMiner is highly accurate and in some ways more reliable than both existing machine classifiers and the human experts it’s meant to emulate because of the biases that come with human labeling. Now that we’ve trained ExoMiner using Kepler data, with a little fine-tuning, we can transfer that learning to other missions, including TESS, which we’re currently working on. There’s room to grow.”
When a planet crosses directly between us and its star, the light curve is altered slightly, which astronomers use to determine the presence of planets. Credit: NASA’s Goddard Space Flight Center
ExoMiner was specifically designed to assist experts who search through the data gathered during the Kepler and K2 campaigns. The reason has to do with the exoplanet-hunting method used by Kepler and its successor, the Transiting Exoplanet Survey Satellite (TESS): monitoring thousands of stars for periodic dips in luminosity, which can be caused by exoplanets passing in front of them (transiting) relative to the observer.
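The periodic-dip search described above can be sketched as a naive box-folding scan over trial periods. This is a toy stand-in for the far more sophisticated pipelines the missions actually use, and every number in it is synthetic:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic light curve: flat star with noise, plus periodic box-shaped transits
n_points = 2000
t = np.arange(n_points) * 0.5          # observation times in hours
true_period, duration, depth = 48.0, 3.0, 0.01
flux = 1.0 + rng.normal(0.0, 0.002, n_points)
flux[(t % true_period) < duration] -= depth

def box_score(t, flux, trial_period, n_bins=32):
    """Fold at a trial period and score how far the faintest phase bin dips."""
    phase_bin = ((t % trial_period) / trial_period * n_bins).astype(int)
    phase_bin = np.minimum(phase_bin, n_bins - 1)
    bin_means = np.array([flux[phase_bin == b].mean() for b in range(n_bins)])
    return flux.mean() - bin_means.min()

trial_periods = np.linspace(40.0, 60.0, 201)   # 0.1-hour steps
scores = [box_score(t, flux, p) for p in trial_periods]
best_period = trial_periods[int(np.argmax(scores))]
print(f"recovered period: {best_period:.1f} h (true: {true_period} h)")
```

Folding at the true period stacks every transit into the same phase bins, so the dip reinforces; at a wrong period the dips smear out. It is exactly this kind of candidate signal, buried in noise and false positives, that classifiers like ExoMiner are then asked to vet.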
Known as the Transit Method (aka Transit Photometry), this technique is the most effective means of exoplanet detection, accounting for over 75% of all discoveries made to date. However, it is also subject to a substantial rate of false positives, which can be as high as 40% in single-planet systems (based on a 2012 study of Kepler mission data). What’s more, it is only effective for about 10% of star systems, since a planet’s orbit must be edge-on relative to the observer for transits to be visible.
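The transit numbers above can be reproduced with two textbook approximations (not taken from the article): the fractional dip in starlight is roughly (Rp/R★)², and the geometric chance that a randomly oriented orbit shows transits is roughly R★/a, where a is the orbital distance:

```python
# Back-of-envelope transit geometry (textbook approximations)
R_SUN_KM = 6.96e5
R_JUP_KM = 7.1492e4
R_EARTH_KM = 6.371e3
AU_KM = 1.496e8

def transit_depth(r_planet_km, r_star_km):
    """Fractional dip in starlight: the planet blocks (Rp/Rs)^2 of the stellar disk."""
    return (r_planet_km / r_star_km) ** 2

def transit_probability(r_star_km, a_km):
    """Geometric chance a randomly oriented orbit produces transits: ~ Rs / a."""
    return r_star_km / a_km

print(f"Jupiter across the Sun: {transit_depth(R_JUP_KM, R_SUN_KM):.2%} dip")
print(f"Earth across the Sun:   {transit_depth(R_EARTH_KM, R_SUN_KM):.4%} dip")
print(f"P(transit), 1 AU orbit:    {transit_probability(R_SUN_KM, AU_KM):.2%}")
print(f"P(transit), 0.05 AU orbit: {transit_probability(R_SUN_KM, 0.05 * AU_KM):.1%}")
```

A Jupiter-sized planet dims a Sun-like star by only about 1%, and an Earth by less than 0.01%, which is why faint signals are so easily confused with noise. The ~9% transit probability for a close-in (0.05 AU) orbit is in line with the article’s figure of about 10% of systems being favorably aligned; wider orbits are far less likely to transit.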
The primary way of getting around this is to monitor thousands of stars in a single field, which creates the data-mining burden (mentioned above). For all of these reasons, having an automated helper that can process the data reliably (by knowing exactly what to look for) is a huge game-changer. As Jon Jenkins, an exoplanet scientist at NASA’s Ames Research Center, said in a recent NASA press release:
“Unlike other exoplanet-detecting machine learning programs, ExoMiner isn’t a black box – there is no mystery as to why it decides something is a planet or not. We can easily explain which features in the data lead ExoMiner to reject or confirm a planet… These 301 discoveries help us better understand planets and solar…”
The James Webb is Measuring Distant Galaxies 5-10 Times Better Than any Other Telescope
On December 25th, 2021, after many years of waiting, the James Webb Space Telescope (JWST) finally launched to space. In the six-month period that followed, this next-generation observatory unfurled its sunshield, deployed its primary and secondary mirrors, aligned its mirror segments, and flew to its current position at the Earth-Sun Lagrange 2 (L2) point. On July 12th, 2022, the first images were released, presenting the most detailed views of the Universe to date. Shortly thereafter, NASA released an image of the most distant galaxy ever observed (which existed just 300 million years after the Big Bang).
According to a new study by an international team of scientists, the JWST will allow astronomers to obtain accurate mass measurements of early galaxies. Using data from James Webb’s Near-Infrared Camera (NIRCam), provided through the GLASS-JWST Early Release Science (GLASS-ERS) program, the team obtained mass estimates of some distant galaxies that were many times more accurate than previous measurements. Their findings illustrate how Webb will revolutionize our understanding of how the earliest galaxies in the Universe grew and evolved.
The research team (led by Paola Santini of the Astronomical Observatory of Rome) included members from the Istituto Nazionale di Astrofisica (INAF) in Italy, the ASTRO 3D collaboration (Australia), the National Astronomical Research Institute of Thailand (NARIT), the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), the Cosmic Dawn Center (DAWN), the Niels Bohr Institute, the Carnegie Institution for Science, the Infrared Processing and Analysis Center at Caltech, and universities and institutes in the U.S., Europe, Australia, and Asia.
As they indicate in their study, stellar mass is one of the most important physical properties (if not the most important) for understanding galaxy formation and evolution. It measures the total mass of stars in a galaxy, which grows as gas and dust are continually converted into new stars, making it the most direct means of tracing a galaxy’s growth. By comparing observations of the oldest galaxies in the Universe (those more than 13 billion light-years away), astronomers can study how galaxies evolved.
Unfortunately, obtaining accurate measurements of these early galaxies has been an ongoing problem for astronomers. Typically, astronomers will conduct mass-to-light (M/L) ratio measurements – where the light produced by a galaxy is used to estimate the total mass of stars within it – rather than computing the stellar masses on a source-by-source basis. To date, studies conducted with Hubble of the most distant galaxies – like GN-z11, which we see as it was about 13.4 billion years ago – were limited to the ultraviolet (UV) spectrum.
This is because the light from these ancient galaxies experiences significant redshift by the time it reaches us. This means that as the light travels through spacetime, its wavelength is lengthened due to the expansion of the cosmos, effectively shifting it towards the red end of the spectrum. For galaxies whose redshift value (z) is seven or higher – at a distance of 13.46 billion light-years or more – much of the light is shifted to the point where it is only visible in the infrared part of the spectrum. As Santini explained to Universe Today via email:
“The bulk of the stars in galaxies, those that mostly contribute to its stellar mass, emit at optical-near infrared (NIR) wavelengths… [B]y the time the light travels from a distant galaxy to our telescopes, the light emitted by its stars is no longer in the optical regime. E.g., for a z=7 galaxy, the light originally emitted at 0.6 micron reaches our telescope with a wavelength of 4.8 micron. The higher the redshift (i.e., the more distant the galaxy), the stronger this effect is.”
“This implies that we need infrared detectors to measure galaxy stellar masses (the light emitted by the bulk of their stars is out of reach of the Hubble Space Telescope). The only IR telescope we had before the advent of JWST was the Spitzer Space Telescope, [decommissioned] a few years ago. However, its 85 cm mirror was not comparable with the 6.5 m mirror of JWST. Most of the distant galaxies were out of reach of Spitzer too: due to its limited sensitivity and angular resolution, they were not detected (or were affected by high levels of noise) in its images.”
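Santini’s numerical example follows directly from the basic redshift relation, in which every wavelength is stretched by a factor of (1 + z):

```python
def observed_wavelength_um(lambda_emit_um, z):
    """Cosmological redshift stretches every wavelength by a factor (1 + z)."""
    return lambda_emit_um * (1.0 + z)

# Light emitted at 0.6 micron by a z = 7 galaxy arrives at 4.8 micron --
# beyond Hubble's reach, but within NIRCam's 0.6-5.0 micron coverage.
print(observed_wavelength_um(0.6, 7))  # 4.8
```

The same relation shows why the effect grows with distance: at z = 10, that 0.6-micron light would arrive at 6.6 microns, requiring JWST’s mid-infrared instrument instead.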
A Black Hole can Tear a Neutron Star Apart in Less Than 2 Seconds
Almost seven years ago (September 14th, 2015), researchers at the Laser Interferometer Gravitational-wave Observatory (LIGO) detected gravitational waves (GWs) for the first time. Their results were shared with the world six months later and earned the discovery team the Nobel Prize in Physics the following year. Since then, a total of 90 signals have been observed from binary systems of two black holes, two neutron stars, or one of each. This latter scenario presents some very interesting opportunities for astronomers.
If a merger involves a black hole and neutron star, the event will produce GWs and a serious light display! Using data collected from the three black hole-neutron star mergers we’ve detected so far, a team of astrophysicists from Japan and Germany was able to model the complete process of the collision of a black hole with a neutron star, which included everything from the final orbits of the binary to the merger and post-merger phase. Their results could help inform future surveys that are sensitive enough to study mergers and GW events in much greater detail.
The research team was led by Kota Hayashi, a researcher with Kyoto University’s Yukawa Institute for Theoretical Physics (YITP). He was joined by multiple colleagues from YITP and Toho University in Japan, and from the Albert Einstein Institute at the Max Planck Institute for Gravitational Physics (MPIGP) in Potsdam, Germany. The paper describing their findings recently appeared in the scientific journal Physical Review D.
The mergers of compact objects discovered so far by LIGO and Virgo (in O1, O2, and O3a). Credit: LIGO Virgo Collaboration / Frank Elavsky, Aaron Geller / Northwestern
To recap, GWs are ripples in spacetime originally predicted by Einstein’s General Theory of Relativity. They are created when massive objects merge and disturb the very fabric of the Universe, disturbances that can be detected from many millions of light-years away. To date, only three mergers have been observed involving a binary system consisting of a black hole and a neutron star. Meanwhile, the binary neutron star merger GW170817, detected on August 17th, 2017, showed that astronomers can also detect an electromagnetic counterpart to the GWs such an event produces.
In the coming years, telescopes and interferometers of greater sensitivity are expected to see much more from these events. Based on the mechanics involved, scientists anticipate that black hole-neutron star mergers will include matter ejected from the system and a tremendous release of radiation (which might include short gamma-ray bursts). For their study, the team modeled what black hole-neutron star mergers would look like to test these predictions.
They selected two different model systems consisting of a rotating black hole and a neutron star, with the black hole set at 5.4 and 8.1 solar masses and the neutron star at 1.35 solar masses. These parameters were selected so that the neutron star was likely to be torn apart by tidal forces. The merger process was simulated using the computer cluster “Sakura” at the MPIGP’s Department of Computational Relativistic Astrophysics. In an MPIGP press release, Department director and co-author Masaru Shibata explained:
“We get insights into a process that lasts one to two seconds – that sounds short, but in fact a lot happens during that time: from the final orbits and the disruption of the neutron star by the tidal forces, the ejection of matter, to the formation of an accretion disk around the nascent black hole, and further ejection of matter in a jet. This high-energy jet is probably also a reason for short gamma-ray bursts, whose origin is still mysterious. The simulation results also indicate that the ejected matter should synthesize heavy elements such as gold and platinum.”
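Why these particular masses (and the black hole’s spin) permit disruption can be illustrated with two textbook estimates that are no substitute for the team’s numerical-relativity simulations: a Newtonian tidal-disruption radius, and the Kerr innermost stable circular orbit (ISCO) from the Bardeen-Press-Teukolsky formula. The 12 km neutron-star radius below is an assumed value:

```python
import math

G_MSUN_OVER_C2_KM = 1.4766  # GM_sun / c^2, expressed in kilometers

def kerr_isco_radius_km(m_bh_msun, chi):
    """Bardeen-Press-Teukolsky ISCO radius for a prograde orbit around a Kerr BH."""
    a = chi
    z1 = 1 + (1 - a**2) ** (1 / 3) * ((1 + a) ** (1 / 3) + (1 - a) ** (1 / 3))
    z2 = math.sqrt(3 * a**2 + z1**2)
    r_over_m = 3 + z2 - math.sqrt((3 - z1) * (3 + z1 + 2 * z2))
    return r_over_m * G_MSUN_OVER_C2_KM * m_bh_msun

def tidal_radius_km(m_bh_msun, m_ns_msun, r_ns_km):
    """Newtonian estimate: disruption where the BH tide beats NS self-gravity."""
    return r_ns_km * (2 * m_bh_msun / m_ns_msun) ** (1 / 3)

m_bh, m_ns, r_ns = 5.4, 1.35, 12.0  # masses in M_sun; NS radius in km (assumed)
print(f"tidal radius:    ~{tidal_radius_km(m_bh, m_ns, r_ns):.0f} km")
print(f"ISCO (chi=0):    ~{kerr_isco_radius_km(m_bh, 0.0):.0f} km")
print(f"ISCO (chi=0.75): ~{kerr_isco_radius_km(m_bh, 0.75):.0f} km")
```

For the 5.4 solar-mass case, the tidal radius (~24 km) lies inside a non-spinning hole’s ISCO (~48 km), so the star would plunge in whole; a rapid prograde spin shrinks the ISCO to roughly the tidal radius, which is broadly why spinning black holes make tidal disruption, ejecta, and an accretion disk possible.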
The team also shared the details of their simulation in an animation (shown above) via the Max Planck Institute for Gravitational Physics’ YouTube channel. On the left side, the simulation shows the density profile as blue and green contours, the magnetic field lines that penetrate the black hole as pink curves, and the matter ejected from the system as cloudy white material.
Lava Tubes on the Moon Maintain Comfortable Room Temperatures Inside
Searching for a comfortable place to set up a research station on the Moon? Look no further than the interiors of lunar pits and caves. While the lack of air will be an issue, new research indicates these underground sanctuaries maintain steady temperatures that hover around 17 C (63 F), even though the Moon’s surface heats up to about 127 C (260 F) during the day and cools to minus 173 C (minus 280 F) at night.
Lunar pits, or lava tubes, were discovered in 2009 by the Lunar Reconnaissance Orbiter and Japan’s Kaguya spacecraft. These are deep holes on the Moon that could open into vast underground tunnels. They could serve as safe shielding from cosmic rays, solar radiation, and micrometeorites for future human lunar explorers. And now we know they could also provide thermally stable sites for lunar exploration.
These long, winding lava tubes are similar to structures found on Earth. They are created when the top of a stream of molten rock solidifies and the lava inside drains away, leaving a hollow tube of rock. For years before their existence was confirmed, scientists suspected the Moon had lava tubes, based on observations of long, winding depressions carved into the lunar surface by flowing lava, called sinuous rilles.
Thurston Lava Tube on the Big Island of Hawaii. Credit: P. Mouginis-Mark, LPI
So far, about 200 lunar pits have been found and at least 16 of these are probably collapsed lava tubes, with the potential for ‘livable’ space, said Tyler Horvath, a UCLA doctoral student in planetary science, who led the new research. Two of the most prominent pits have visible overhangs that clearly lead to some sort of cave or void, and there is strong evidence that another’s overhang may also lead to a large cave.
Horvath processed images from the Diviner Lunar Radiometer Experiment – a thermal camera and one of six instruments on LRO – to find out whether the temperatures within the pits diverged from those on the surface. Diviner is designed to measure surface temperatures on the Moon, and Horvath’s team had to focus on extremely small areas to get their data.
They focused on a pit found in the Sea of Tranquility (Mare Tranquillitatis). This image, below, was taken as the Sun was almost straight overhead, illuminating the region. By comparing this image with previous images that have different lighting, scientists can estimate the depth of the pit. They believe it to be over 100 meters.
This is a spectacular high-Sun view of the Mare Tranquillitatis pit crater, revealing the overhang and deep, dark pit. This image from LRO’s Narrow Angle Camera is 400 meters (1,312 feet) wide, north is up.
Credits: NASA/Goddard/Arizona State University
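The depth-from-lighting technique described above amounts to simple trigonometry: the horizontal extent of the rim’s shadow on the pit floor, combined with the Sun’s elevation angle, gives the vertical drop. A sketch with purely hypothetical numbers (not the actual LROC measurements):

```python
import math

sun_elevation_deg = 58.0  # hypothetical solar elevation at image time
shadow_length_m = 65.0    # hypothetical shadow length measured on the pit floor

# Sunlight arrives at the elevation angle, so the floor where the rim's shadow
# ends lies shadow_length * tan(elevation) below the rim.
depth_m = shadow_length_m * math.tan(math.radians(sun_elevation_deg))
print(f"estimated depth: ~{depth_m:.0f} m")
```

Comparing images under several lighting geometries, as the researchers did, over-constrains the same triangle and tightens the depth estimate.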
The researchers used computer modeling to analyze the thermal properties of the rock and lunar dust and to chart the pit’s temperatures over a period of time. Their research, recently published in the journal Geophysical Research Letters, revealed that temperatures within the permanently shadowed reaches of the pit fluctuate only slightly throughout the lunar day, remaining at around 17 C (63 F). If a cave extends from the bottom of the pit, as images taken by the Lunar Reconnaissance Orbiter Camera suggest, it too would have this relatively comfortable temperature. The researchers think the overhang is responsible for the steady temperature, limiting how hot things get during the day and preventing heat from radiating away at night.
However, if this particular pit were to be used as a habitat or research station, there would likely be a heat problem just inside the pit.