Last year I published a review of a book entitled ‘Recursion’ by author Blake Crouch. In that review I praised ‘Recursion’ for having a very unusual slant on the old SF theme of time travel. Like ‘Recursion’, the plot of ‘The Dent in the Universe’ by author E. W. Doc Parris also concerns a very different, and interesting, kind of time travel, although as you might guess the results are every bit as chaotic.
One Corporation is a high-tech company operating out of California’s Silicon Valley in the near future, the 2030s. The company specializes in developing video games and its chief claim to fame is the sChip, an integrated circuit that uses quantum entanglement to achieve Faster Than Light (FTL) communications with other sChips. This property allows gamers all over the world to play One Corporation’s video games together without any nasty time delays due to distance. (Actually there are some theorists who think something like that might be possible.)
About ten years after the sChip is first introduced an accident causes a large portion of the network to crash when a gamer spills his coke onto his terminal. An investigation by One Corporation’s chief scientist, the guy who invented the sChip in the first place, reveals that the crash originated when the coke-spilling gamer’s sChip sent a confirmation signal to his buddy’s sChip BEFORE it was asked for the confirmation. It seems sChips are not only capable of FTL communication, they can also send messages into the past.
That’s the neat part about ‘The Dent in the Universe’. Here time travel is limited to information being sent through time, not material objects. Another constraint on time travel in ‘The Dent in the Universe’ is that it is only possible through sChips, and therefore the farthest back it is possible to go is ten years, to when the first sChip was made.
Of course it was part of Stephen Hawking’s work over decades that showed that information is still energy, so it is a material object. Think about it: in a computer information is stored by flipping magnetic fields, something that requires energy to do. So sending information back in time is still sending a material object, the energy to flip a magnetic field, back in time. Nevertheless the unique take on time travel, and the consequences thereof, is the best part of ‘The Dent in the Universe’.
The worst part is the villain, a serial killer of the Bind, Torture, Kill or Jeffrey Dahmer type. I don’t consider myself to have a weak stomach but there were several sections of ‘The Dent in the Universe’ that were simply unpleasant to read, and that’s being kind. There were a lot of gory details that simply weren’t necessary to the plot as far as I was concerned. By the way, the idea of a serial killer getting his hands on a time machine isn’t new. Back in 1979 there was a movie called ‘Time After Time’ where Jack the Ripper, played by David Warner, got his hands on H. G. Wells’ time machine and traveled to 1979 San Francisco. Wells was played by Malcolm McDowell.
All of that is quite a shame because much of ‘The Dent in the Universe’ is well plotted, something very necessary in a time travel story, and rather exciting. The story could have worked just as well without so much graphic gore.
I do have one other complaint as well. Like many SF stories that take place in the near future ‘The Dent in the Universe’ is filled with techno-talk. The computer gamers all say things like “Rashad’s device processed a D-pad signal at the I/O bus”. Meanwhile the detectives hunting the serial killer all say things like “That’s inside the feeding zone. Walking distance to the MPWS station. Good eyes Detective Baker, good eyes.” Sometimes I wonder if authors are just trying to impress their readers with how in tune they are with the language spoken by experts in various fields.
And finally it turns out that ‘The Dent in the Universe’ is just the first installment in another series of novels. I haven’t made up my mind as to whether I’ll read the next installment. As I said ‘The Dent in the Universe’ had some really interesting parts, as well as some very unpleasant ones.
Several news stories about the history of life here on Earth have caught my attention. One concerns the discovery of a fossil creature from the Cambrian period, when animals with hard parts first evolved, that links together two groups of the huge phylum Arthropoda. The other two are studies of important groups of animals: one a little known family of dinosaurs, while the other concerns the origins of a very important family of insects, the bees. As usual I will discuss the oldest fossil animal first and then go forward in time.
As far as we can tell the arthropods have been the largest and most diverse grouping of animals going all the way back to the very first animals with hard parts. While the name arthropod means ‘jointed leg’, the arthropods are also known for their hard exoskeletons and segmented bodies.
Earth’s ancient oceans were filled with arthropods like the trilobites, the eurypterids or water scorpions, and horseshoe crabs, just as today’s are filled with shrimp, lobsters and crabs. In the early Cambrian period, 520 to 550 million years ago, there was a lot more experimentation going on in evolution, as illustrated by some of the ‘weird wonders’ that have been discovered at the famous Burgess Shale in British Columbia.
Another fossil site where strange creatures from the Cambrian period have been discovered is outside the town of Chengjiang in China’s southern Yunnan Province. Recently a new species of shrimp-like arthropod has been discovered there that truly is a ‘weird wonder’, but one which shows characteristics of two different groups of arthropods and therefore serves as something of a missing link between them.
The animal has been given the name Kylinxia zhangi and measures about 5 cm in length by 1.2 cm in width. While the body resembles that of a modern shrimp, K. zhangi possesses two large grasping appendages at its front and three large compound eyes on its head, that’s right, three eyes!
Using the best preserved specimens of K. zhangi, researchers at the University of Leicester in the UK carried out a CT scan of the fossils to better visualize the animal’s anatomy. One of the surprises they discovered was that the head of K. zhangi was composed of a fusion of six segments, the same number as in modern insects. While it is too early to suggest any definite relationship between K. zhangi and insects, the discovery shows how the basic building blocks of arthropods were being experimented with, leading to the immense diversity we see today.
Speaking of insects, some of the best known and most valuable of the six-legged creatures are the bees. Everyone knows, or should know, that bees not only produce honey but also pollinate a wide variety of the plants that we grow to eat. Much of our agricultural industry is dependent on pollination by bees, with honey just a special side bonus.
Since bees are so important it’s understandable that paleontologists would like to understand their evolution, where and under what conditions they first appeared. For such a widespread group of small animals, however, gathering enough data to see the big picture has been difficult.
Now a new study from researchers at Washington State University has combined data, including DNA studies, of modern bees with fossil evidence to generate a genealogical map of bee evolution. According to the study bees first evolved from their cousins the wasps around 120 million years ago on the ancient super-continent of Gondwana, which has since broken up into Africa, South America, Antarctica, Australia and India. What appears to have caused the predatory wasps to become peaceful gatherers of pollen and nectar was the evolution of the first flowering plants. In fact the study seems to argue for a kind of co-evolution back and forth between flowering plants and bees.
In the study, published in the journal Current Biology, the team analyzed the DNA of over 200 species of bees living today while comparing their anatomical characteristics with those of 185 fossil bee specimens. Based on this data the researchers developed a genomic map of bee evolution and distribution from the early Cretaceous period to today. One surprising fact the team discovered was that all seven bee families developed before the end of the Cretaceous; in other words the bees appear to have survived the extinction of the dinosaurs rather well!
The question now is whether or not bees can survive the extinction event going on right now thanks to human destruction of the environment. Many species of bees are in danger of extinction because of global warming, pollution and invasive species like the so-called ‘murder hornet’ that prey on bees. Hopefully studies like Washington State University’s will help us to protect these busy little creatures, rather than making them one more victim of our ignorance and greed.
Finally today I’d like to stay in the Cretaceous period to discuss a little known group of dinosaurs called the Rhabdodontidae. The Rhabdodontidae consist of nine species, all of which seem to have been confined to the continent of Europe and the period 86 to 66 million years ago. Europe at that time consisted of an archipelago of large and small islands, and the Rhabdodontidae seem to have suffered from the phenomenon of ‘island dwarfism’ because as a group they were rather small dinosaurs at 2-6 meters in length. Plant eaters like the hadrosaurs, the Rhabdodontidae had a similar body shape but the head was quite different: instead of the familiar ‘duck bill’ of the hadrosaurs, the Rhabdodontidae had a pointy beak covered in keratin.
Although the first named species of the Rhabdodontidae was discovered more than 150 years ago there is still a lot about them that paleontologists don’t know, such as their posture and mode of walking and eating. In fact no complete fossil specimen of a rhabdodontid has ever been found, so most of our knowledge of them comes from putting the pieces together like a jigsaw puzzle. Ever since the late 19th century Europe’s dinosaurs have taken a back seat to the iconic dinosaurs of North America like T. rex or Triceratops. Perhaps it’s time for European paleontologists to do a little more digging in their own backyard and find that complete Rhabdodontidae.
The big news in space this month is the return of the OSIRIS-REx probe from its seven-year-long mission to the asteroid Bennu, see my posts of 21 October 2020 and 1 May 2021. During the probe’s more than year-long study of the asteroid, in October of 2020 the spacecraft made a pogo-stick style bounce off of Bennu that succeeded in collecting an estimated 250 grams of the asteroid’s material. Once the spacecraft had gathered its precious cargo it ignited its rockets once more for the three-year journey back home.
On September 24th, as the school-bus-sized main probe passed by the Earth it dropped off a suitcase-sized capsule that entered our atmosphere at around 8:40 AM Mountain Daylight Time. The capsule’s descent, including both drogue and main parachute deployment, was flawless, and at 8:53 MDT the capsule landed at the US Army’s proving ground in Utah. Within 30 minutes a NASA recovery team was on the spot and the capsule secured.
Taking the utmost care to prevent the capsule’s precious contents from becoming contaminated by anything of this Earth, the NASA personnel took it to a small, specially prepared clean room at the Army base. There the capsule underwent more procedures designed to prevent contamination in order to prepare it for its plane ride to the Johnson Space Center in Houston.
That plane ride took place the very next day and now the samples of asteroid dirt are in Texas undergoing their initial evaluation. A public announcement of the results of those initial tests took place later in the month. In the years to come scientists all over the world will have their chance to study some of the material brought back from Bennu in the hopes of learning clues as to how our Solar system came into being as well as how some of the chemicals of life, basically carbon and water, came to our Earth.
OK, so the capsule containing material from Bennu landed safely back here on Earth, but what about the main OSIRIS-REx space probe, what’s going to happen to it? Well, it’s still out there; after dropping off the capsule it fired its engines again and is now on its way to another asteroid, one named Apophis, which the probe is scheduled to reach in 2029. By the by, that same year Apophis will also pass by our planet at one tenth the distance of the Moon.
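To put that flyby in perspective, here is a quick back-of-the-envelope check, a minimal sketch using the mean Earth-Moon distance and the radius of geostationary orbit (both assumed round figures, not values from any mission document):

```python
# Rough arithmetic only: what does "one tenth the distance of the Moon" mean in km?
moon_distance_km = 384_400      # mean Earth-Moon distance, center to center (assumed)
geo_radius_km = 42_164          # geostationary orbit radius from Earth's center (assumed)

apophis_pass_km = moon_distance_km / 10
print(f"Apophis flyby distance ≈ {apophis_pass_km:,.0f} km from Earth's center")
print(f"Geostationary satellites orbit at {geo_radius_km:,} km, so the asteroid passes inside that ring")
```

In other words the 2029 pass will bring Apophis closer to Earth than the satellites that relay our TV broadcasts.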
Another NASA interplanetary probe that has been making some dramatic headlines is the Parker Solar Probe, which continues to adjust its orbit, taking it closer and ever closer to the Sun, see my posts of 7 June 2017 and 18 December 2019. Now just getting to the Sun is dangerous enough, its surface temperature is over 5,000º C after all, and in September of 2022 the hazards of getting too close to the Sun increased dramatically.
You see the Sun can be quite violent at times; remember it is really a hydrogen bomb a million and a half kilometers wide that’s been going off for over 4 billion years now. Explosions on the Sun’s surface are common and can result in what are called Coronal Mass Ejections (CMEs) that can hurl billions of tons of plasma away from the Sun. And the closer you get to the Sun the more likely it is that sooner or later you’ll get hit by a CME.
That’s exactly what happened to the Parker Solar Probe that September. In fact that CME was one of the most powerful ever observed. Well protected by its massive heat shield, Parker not only survived the two-day-long ordeal but actually succeeded in filming the CME as it went by. You can watch that video by clicking on the link below. https://www.youtube.com/watch?v=FF_e5eYgJ3Y
The Sun’s eleven-year sunspot cycle is expected to peak in 2025 or 2026 and the Parker probe’s trajectory was designed so that it will make its closest approaches at just that time. So Parker will almost certainly encounter even more violent CMEs in the years to come. It’s important to learn all that we can about these powerful events because, as our society grows ever more dependent on electrical power and electronics in general, the threat of a CME striking our planet and causing massive damage to our infrastructure grows as well.
While the Parker Solar Probe faces extraordinary hazards as it gets ever closer to the Sun, space is a dangerous place for any spacecraft. That danger was illustrated by what appears to be the fate of India’s Chandrayaan-3 probe, which landed at the Moon’s south polar region just last month.
The success of Chandrayaan-3 made India only the fourth nation to land a probe on the Lunar surface and the first to land near the south pole, where it is hoped water ice may be hidden at the bottom of some craters, see my post of 9 September 2023. Chandrayaan landed at the start of the two-week-long lunar day, sending back priceless data on conditions at the south pole. Chandrayaan even deployed a small rover vehicle that puttered around the main lander making further measurements.
At the end of the lunar day both the rover and the main lander were ordered to go into a sleep mode for the two-week-long lunar night, during which time the probe’s solar cells would not be able to generate power and the outside temperature could drop to well below -200º C. Even so, there was no guarantee that either the lander or rover would survive the ordeal.
At the moment it appears Chandrayaan-3 has not survived. Engineers at the Indian Space Research Organisation (ISRO) report that they have not received any signals from the spacecraft and hopes are diminishing that it will revive. Nevertheless Chandrayaan’s mission was a success, a success that told us a great deal about our Moon’s south polar region.
The knowledge sent back to Earth by missions like OSIRIS-REx, Parker and Chandrayaan makes taking the risks of those missions well worth the effort.
Lifted into orbit back in December of 2021, the James Webb Space Telescope (JWST) spent its first months away from Earth calibrating its instruments while the world’s astronomers waited eagerly. Well, JWST has been in operation for a little over a year now and NASA has taken the opportunity to release some of the more spectacular images sent back by the space telescope.
First a bit of a reminder, JWST operates as most large astronomical telescopes do by taking long exposure digital images of whatever astronomical object it is studying. Most of those ‘deep space’ objects are actually very dim and the only way to get good images is to open up the telescope’s camera and allow the light to gather photon by photon over a long period of time. The images are then computer enhanced to bring out the details the astronomers are interested in. In other words the pictures released by NASA are not what you would see if you actually looked into a telescope at the same object.
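As a toy illustration of why those long exposures work, here is a minimal sketch (my own, not anything from the JWST pipeline) showing how co-adding many short, noisy frames lets a faint source climb out of the noise:

```python
# Toy model: a faint source buried in noise, recovered by stacking many frames.
import numpy as np

rng = np.random.default_rng(0)
true_signal = 0.05                  # faint source brightness per frame (arbitrary units, assumed)
n_frames = 10_000                   # a "long exposure" built from many short reads
frames = true_signal + rng.normal(0.0, 1.0, size=n_frames)   # each individual frame is mostly noise

single_frame_snr = true_signal / 1.0                     # signal-to-noise of one frame
stacked_mean = frames.mean()                             # the co-added measurement
stacked_snr = true_signal / (1.0 / np.sqrt(n_frames))    # noise averages down as 1/sqrt(N)

print(f"one frame:    measured {frames[0]:+.2f}, SNR ≈ {single_frame_snr:.2f}")
print(f"{n_frames} frames: measured {stacked_mean:+.3f}, SNR ≈ {stacked_snr:.0f}")
```

The real instruments are vastly more sophisticated, but the principle, collect photons long enough and the signal beats the noise, is the same.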
Another big difference between JWST and other telescopes, even the Hubble Space Telescope, is that JWST views objects primarily in the infrared portion of the electromagnetic spectrum. This allows JWST to see details that are completely invisible to our eyes. That is also the reason JWST had to be placed more than a million kilometers from the Earth: the infrared light coming from both the Sun and the Earth would blind it if it weren’t protected. Again, the digital images taken by JWST in the infrared are then converted by a computer into visible images for astronomers, and the rest of us, to see.
The first set of images released by the JWST team at Johns Hopkins was of the well known ‘Whirlpool Galaxy’, often referred to as Messier 51 or just M51. At a distance of 27 million light years from Earth, this galaxy, found in the sky not far from the Big Dipper, is a favourite target of amateur astronomers. While M51 is a typical spiral galaxy it happens to be facing our galaxy almost full on, so that our view of its spiral arms is simply magnificent. A very beautiful image of M51 was taken by Hubble a dozen years ago and astronomers have been itching to get a view with JWST ever since.
Now they’ve done just that and the image is beyond expectations. One of the reasons JWST operates in the infrared is that infrared light can pass through the gas and dust that tend to blur the details in the spiral arms of galaxies like M51 in visible light. That means that JWST sees deeper into the galaxy, imaging structure never seen before. The same is also true of the small dwarf galaxy NGC 5195, located at the end of M51’s ‘tail’, whose gravitational field is actually responsible for much of the structure of the Whirlpool’s spiral arms. Images such as JWST’s of the Whirlpool are not only beautiful but also give astrophysicists a lot of data to use in their efforts to understand how galaxies are structured and how they change with time.
The next astronomical object that the JWST team released images of was a lot closer to home, a mere 2,600 light years away. The Ring Nebula, or M57 as it is known, is located in the night sky near the bright star Vega and is in many ways a glimpse into the future fate of our own Sun. The star at the center of the ring was once about the same mass as our Sun, but about a billion years ago it used up all of its hydrogen fuel and began to burn helium. In order to do that the star’s core had to get smaller and hotter, which caused its outer regions to puff up, making the star a ‘Red Giant’.
Then, less than a million years ago the star started to run out of helium so again its core got smaller and hotter, so much so that its outer regions were pushed out from the star into interstellar space. This material was mostly ejected from the star’s equatorial region so it formed a ring around the original star, the Ring Nebula.
Since the ring itself is made up of gas and dust, JWST’s ability to see in the infrared makes it the perfect instrument with which to study M57. The images taken by JWST show an enormous amount of detail that was never seen before, including about 20,000 dense clumps of matter and a halo of 10 concentric arcs with 400 spikes. JWST also discovered that the central star causing the ring is not alone; it has two smaller companion stars, one about 35 astronomical units (AU) from the central star (an astronomical unit is Earth’s distance from our Sun) and the other more distant at 14,400 AU.
As with the images of the Whirlpool Galaxy, astrophysicists will have plenty to keep them busy analyzing what JWST has found at the Ring Nebula. Nebulas like the Ring are important not only because they show our Sun’s future but also because the material ejected from such nebulas is how heavier elements like oxygen, carbon, nitrogen and silicon get spread around the galaxy so that they can form planets like our Earth.
The final set of images taken by JWST is of Supernova 1987A (SN1987A), the closest supernova to Earth in the last 400 years and the only supernova to date for which we have a picture of the star taken before it blew up. Supernovas are rare events that only happen when a huge star, many times the mass of our Sun, has used up all of the nuclear fuel available to it. When that happens the star’s core collapses into a neutron star or even a black hole. The rest of the star explodes in one of the most powerful events in the Universe.
Obviously studying supernovas is a lot of fun, but the problem is that they are so rare that detailed data is hard to get; most of the supernovas observed by astronomers are in galaxies millions or even billions of light years away. That’s why astronomers were so anxious for JWST to observe SN1987A. The Hubble Space Telescope had been observing the supernova for years and had watched as the shock wave from the explosion caught up to and slammed into material ejected from the star before it exploded.
The images from JWST show that collision in even greater detail with a cluster of material that looks like a string of pearls. The JWST will continue to observe the dynamic changes around SN1987A while also searching for the neutron star that must have formed in the explosion but which so far has eluded detection.
The images released by the team at Johns Hopkins are just the beginning of the marvels that astronomers hope JWST will reveal in the years to come. Just as Hubble altered and illuminated our view of the Universe, JWST is sure to do the same.
Every year during the first week of October the Nobel prizes are awarded for the sciences and this year the order of announcement was Physiology or Medicine on Monday the second with Physics on Tuesday the third and Chemistry on Wednesday the fourth. Not only did the Medicine prize lead off this year but the award was also arguably the most important and controversial of the three prizes. I’ll discuss each award in the order in which it was announced.
The announcement on Monday that the Physiology prize was awarded to University of Pennsylvania (UofP) researchers Katalin Karikó and Drew Weissman was hardly a surprise. You see the pair’s research on messenger RNA (mRNA) as a means to develop vaccines is what allowed the quick fabrication of the Covid-19 vaccines by both Pfizer and Moderna. To date more than 650 million people have received a Covid-19 vaccine and the work of Drs. Karikó and Weissman is credited with saving millions of lives.
Thirty years ago such a result would have seemed very unlikely. Back then the problems of working with mRNA were so great that the possibility of using it as a vaccine appeared hopeless. RNA is a much more delicate chemical than its cousin DNA, which is why our bodies use DNA for long term storage of genetic information while RNA is used as a short-term messenger. At the same time experiments had shown that when RNA was injected into a lab animal the result was often a severe inflammation at the area of injection.
It was for these reasons that in the mid-1990s Dr. Karikó lost all of the funding for her work and was refused a tenure track position at UofP. In fact she was almost kicked out of the university and forced to return to her home in Hungary. Only a chance meeting with Dr. Weissman, who was working on the human immune system and who had a secure source of funding, enabled Karikó to continue working on mRNA.
Even when the two researchers published their key results of how to modify mRNA and deliver it successfully into the body in 2005 few people took notice. It really is something of a miracle that the pharmaceutical community did begin to pay attention in time so that the Covid-19 vaccines could be developed and tested quickly enough to save millions of lives.
Now for the controversy. As I mentioned above, Dr. Karikó very nearly lost her place at UofP when she lost her funding, and only managed to remain in the US thanks to her collaboration with Dr. Weissman. The question is, how much of her problems were also due to her being a woman, and an immigrant? Right now the university is justly praising Dr. Karikó for her work there despite having tried several times to push her out. Hopefully that was because of Dr. Karikó’s lack of funding, not her sex or nationality. Still, the UofP and academia in general may want to take a moment to review their criteria for who gets funding and why!
The awarding of the Physics Nobel on Tuesday was a lot less divisive. This year’s award went to Pierre Agostini and Anne L’Huillier, both originally from France, along with the Hungarian-born Ferenc Krausz, for their work in generating high-speed laser pulses at the attosecond scale. Like a strobe light that captures movements so fast that they are just a blur to human eyes, the team’s attosecond lasers allow scientists to actually see the movements of electrons in chemical reactions and solid state electronics.
Consider a water molecule for a moment, a single oxygen atom that “shares” the electrons of two hydrogen atoms. Well, back when I was in college we were taught that the electrons in a water molecule behaved something like a cloud, quantum mechanics allowed you to calculate probabilities of where they’d be but trying to actually see them, forget it, they just moved too fast.
It wasn’t until the early 2000s that Drs. Agostini, L’Huillier and Krausz developed lasers that could flash at the attosecond scale, fast enough to capture a solid image of an electron in motion. An attosecond, by the way, is one quintillionth of a second, that’s 10^-18 or 0.000000000000000001 seconds. As a comparison, there are about as many attoseconds in a single second as there are seconds in the current age of the Universe, 13.8 billion years.
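That comparison is easy to check for yourself; here is the arithmetic as a quick sketch (using the commonly quoted 13.8-billion-year age of the Universe):

```python
# Sanity check: attoseconds per second vs. seconds since the Big Bang.
seconds_per_year = 365.25 * 24 * 3600
age_of_universe_s = 13.8e9 * seconds_per_year    # assumed age: 13.8 billion years
attoseconds_per_second = 1e18                    # 1 attosecond = 1e-18 s

print(f"seconds since the Big Bang  ≈ {age_of_universe_s:.1e}")
print(f"attoseconds in one second   = {attoseconds_per_second:.1e}")
# The two figures agree to within a factor of two or so, so the comparison holds.
```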
The development of attosecond light pulses has already enabled chemists to better understand how chemical reactions happen and therefore how to better predict their properties. At the same time a better understanding of how electrons behave in semi-conductor materials should help lead to better solid-state electronics.
Finally on Wednesday the Chemistry prize was announced, and as with Physics it was a celebration of the small, only this time small in size rather than duration. The recipients of the 2023 Nobel Prize in Chemistry were Moungi Bawendi, Louis Brus and Alexei Ekimov, all three working in the United States. These three scientists were honoured for their pioneering work in the development of nanocrystals, crystals whose size is measured in billionths of a meter and which are also known as “quantum dots”.
It was back in the 1980s that Drs. Brus and Ekimov first created quantum dots independently of each other and studied their properties. Then in the 1990s Bawendi discovered techniques to manufacture high quality nanocrystals in large quantities, thereby establishing one of the sectors of the current field of nano-technology. Today quantum dots are used in a wide range of products, from QLED TV screens to imaging in biochemistry and medicine, and even increasing the efficiency of solar cells.
So we celebrate the achievements of the best in the fields of Medicine, Physics and Chemistry. Throughout the year the various sports each get their separate seasons and it seems like politics just goes on year round so I suppose we should be grateful that pure science at least gets some notice one week out of the year.
It was ninety-five years ago, in 1928, that British physicist Paul Dirac first suggested the possibility of a form of anti-electron, that is an electron with a positive rather than a negative electrical charge. The idea did not attract much attention until four years later in 1932 when physicist Carl Anderson, who knew nothing about Dirac’s prediction, discovered just such a positively charged electron in the cosmic rays he was studying. In the years that followed many of the particles that physicists studied were found to have an opposite, anti-particle. Soon the whole ensemble was being referred to as anti-matter.
As I said above, anti-particles have the opposite electric charge of their ‘normal’ counterparts, so when exposed to an electro-magnetic field they always behave in exactly the opposite way to their matter counterparts. This led physicist Richard Feynman to suggest in 1949 that anti-particles could be described as normal particles going backward in time. If such a thing were real, however, then wouldn’t anti-matter behave in the opposite way that matter does in a gravitational field as well? Shouldn’t anti-matter go upward in Earth’s gravitational field?
It’s not that easy to determine how anti-matter behaves in a gravitational field. You see anti-particles annihilate instantly as soon as they come into contact with their particle counterparts, so they usually only last a tiny fraction of a second. Plus, since they are generated in high-energy collisions, in particle accelerators or cosmic rays, they are moving at close to the speed of light. Combined, those factors make it all but impossible to measure the tiny effect of gravity on anti-particles.
If anti-matter did possess anti-gravity, however, that would be a tremendous discovery, and not just because anti-gravity is something people have wondered about, and written about, for hundreds of years. You see all of the models of particle physics we have tell us that the Universe should contain exactly the same amount of anti-matter as it does matter. In fact back in the big bang matter and anti-matter should have been created in exactly equal amounts and then quickly annihilated each other, leaving a Universe of only particles of light.
As far as we can tell, however, our Universe is almost entirely composed of matter; certainly our galaxy is only matter. If it weren’t, we would detect the telltale signs of matter anti-matter annihilation in the interstellar medium. The same goes for other galaxies as well. For example, if Andromeda were composed of anti-matter then the tiny amount of gas and dust between Andromeda and our Milky Way would again show the signs of annihilation.
If anti-matter had anti-gravity, however, then it would be repulsed by matter; eventually matter and anti-matter would segregate into a Universe and an anti-Universe. However, any kind of anti-gravity would violate Einstein’s principle of equivalence, which is the basis for his General Theory of Relativity. But maybe the equivalence principle just doesn’t hold when you mix matter and anti-matter.
So physicists have long wanted to find out: does anti-matter have anti-gravity? As I said above it’s not an easy experiment to carry out. First you’d need a lot of anti-particles. Then you’d have to slow down your anti-particles, all while keeping them in a vacuum so that they don’t come into contact with, and annihilate, normal particles. You also have the problem of the electric charge of the anti-particles, because the electromagnetic field is so much stronger than gravity that even a refrigerator magnet, or the potential of a 9-volt battery, would be enough to completely ruin your measurement. Combining charged anti-particles to form neutral anti-atoms would be the best way to solve that problem, but again, easier said than done.
The best place to find anti-particles is at a particle accelerator, like the Large Hadron Collider (LHC) at CERN, currently the largest, most powerful atom smasher in the world. That makes CERN the best place in the world to try to measure the effect of gravity on anti-matter, and physicist Jeffrey Hangst has spent the last thirty years designing and building the experiment to make that measurement.
One of the pieces of equipment they’ve built at CERN to accompany the LHC is the Extra Low ENergy Antiproton (ELENA) ring, which is capable of delivering about seven and a half million anti-protons every 120 seconds while the accelerator complex is operating. About half a million of those are successfully captured in a solenoid magnet; I said handling anti-particles wasn’t easy.
After being captured the anti-protons are cooled and injected into a device named ALPHA-g where they are combined with anti-electrons, the two combining to form electrically neutral atoms of anti-hydrogen. These atoms are then cooled further, to about four degrees above absolute zero. At the end of this operation only about one hundred anti-atoms remain to be tested; like I said, anti-matter isn’t easy to handle.
Once cooling is completed, the magnetic field confining the anti-hydrogen atoms is turned off, allowing them to either fall or rise in Earth’s gravitational field. Which direction the anti-atoms went was ascertained by detecting the annihilation of the anti-atoms with normal atoms in plugs positioned above and below the containment solenoid.
To make certain of the result the experiment was repeated a dozen times, but each test showed the same result: anti-hydrogen, and hence anti-protons and anti-electrons, falls in Earth’s gravitational field. Anti-matter does not possess anti-gravity, although the measurement did suggest that anti-matter falls more slowly than normal matter, only 75% as fast. The errors in the experiment are so large, however, that anti-matter falling at exactly the same rate as normal matter cannot be ruled out. The physicists at CERN are planning further experiments, and further refinements of their experiment, to measure more accurately just how fast anti-matter does fall.
If anti-matter does fall more slowly than matter that would still violate the Principle of Equivalence, and any difference would still be a clue as to why anti-matter is so rare in our Universe. But for now at least we finally know that anti-matter does not possess anti-gravity. A shame really, that would have been so cool!
It may not seem like it to such short lived creatures as we humans, but the Earth is really a very dynamic place. Yes, it’s true that we do notice the occasional outburst like an earthquake or volcanic eruption, but we are hardly aware of the constant and steady but slow, emphasis on slow, movements of the ground beneath our feet. That movement is called plate tectonics and as an example the entire North American continent is moving westward at a rate of about five centimeters per year. Now that may not sound like a lot, but remember the Earth has had a lot of time for little movements to add up to big changes.
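Just how big those changes can get is simple arithmetic; here is a quick sketch using the five-centimeter-per-year figure mentioned above:

```python
# How far does a plate moving at ~5 cm/year drift over geological spans of time?
rate_cm_per_year = 5.0

for years in (1_000_000, 100_000_000, 1_000_000_000):
    km = rate_cm_per_year * years / 100 / 1000     # centimeters -> meters -> kilometers
    print(f"{years:>13,} years  ->  {km:>7,.0f} km")
```

At that rate a continent covers about 50 kilometers in a million years and a full 5,000 kilometers, roughly the width of an ocean, in a hundred million years.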
Today the surface of the globe consists of about fifteen different sections or plates, some big, some smaller, that push and squeeze against each other. Sometimes the plates grow, as when seafloor spreading forces North America and Europe apart. Sometimes they shrink, as when subduction around the edge of the Pacific eats away at the largest plate.
Geologists studying plate tectonics of course ask themselves just when in Earth’s history the process of plate tectonics began. They know, for example, that about 250 million years ago plate tectonics caused all of the land masses to come together to form one giant super-continent that’s been named ‘Pangaea’. However, four and a half billion years ago the Earth’s surface was still molten, so there certainly weren’t any tectonic plates back then.
Did plate tectonics begin as soon as Earth had a solid surface? Or were there other processes at work on the early Earth before plate tectonics started? Just when did plate tectonics begin to reshape Earth’s surface?
Recent evidence found in the Pilbara Craton region of Western Australia sheds light on that question. The rocks of the Craton are among the oldest on Earth’s surface; some are dated back to about 3.2 billion years ago. Using instruments and techniques of their own invention, a team of geologists from Harvard University in the US led by Alec Brenner and Roger Fu showed that 3.2 billion years ago the entire Pilbara Craton region was moving at a speed of 6.1 centimeters per year, a rate very similar to that at which our modern tectonic plates are moving.
Doctors Brenner and Fu also found evidence for another of our planet’s dynamic processes, the flipping of Earth’s magnetic poles, north becoming south and south, north, see my posts of 8 February 2017 and 16 January 2017. While there is still a great deal that we don’t understand about how the magnetic poles flip, or why our planet even has magnetic poles for that matter, there is overwhelming evidence that they have flipped 183 times in the last 83 million years. Now the evidence that Brenner and Fu have uncovered shows that the poles have been switching for over three billion years.
Speaking of their discoveries Doctor Brenner remarked: “It paints this picture of an early Earth that was already really geodynamically mature. It had a lot of the same sorts of dynamic processes that result in an Earth that has essentially more stable environmental and surface conditions, making it more feasible for life to evolve and develop.”
So plate tectonics has been causing Earth’s land masses to push and collide and bounce off of each other for over 3 billion years now. And as I mentioned above 250 million years ago all of the planet’s land masses were jammed together in a single super-continent. What about the future? Is another super-continent going to happen some day?
Yes, according to a new study conducted by researchers led by Australia’s Curtin University. In fact according to their model super-continents occur on Earth about every 600 million years, so the next one should form about 280 million years from now around the North Pole.
The researchers have already given the coming super-continent a name, ‘Amasia’, because, according to lead author Dr. Chuan Huang, it will form when North America and Asia collide, causing the Pacific Ocean to vanish. Of course that’s not going to happen for a long time. A long time, that is, to such short lived mayflies as we humans.
Why does it seem that, whenever we humans can’t make any progress in solving a problem, we just change its name in order to make it appear that we’re getting somewhere? Take abortion as an example: it’s been a contentious issue now for over fifty years, which is why nobody refers to themselves as either pro-abortion or anti-abortion anymore. No, you’re either pro-choice or pro-life; public relations wise it’s always best to be pro-something rather than anti-anything.
Now we’re doing the same thing with Unidentified Flying Objects or UFOs, which over the last seventy-five years have been the biggest and longest lasting conspiracy theory, see my post of 30 June 2022. Decade after decade has gone by with no better evidence for the existence of flying saucers than we had back in the 1950s. Still, people insist that there has to be something going on, so Congress decided to get involved and back on May 17th held hearings to discuss the latest sightings, including two films taken by naval aviators. Both the Pentagon and the intelligence services were called upon to testify, and just to let everybody know that they weren’t just sitting on their butts it was decided to change the name of UFOs to Unidentified Anomalous Phenomena or UAP.
Despite all of the ‘new evidence’, which once again is no better than the evidence we had in the 1950s, the military had to admit that it couldn’t explain all of the sightings but that there was no evidence that UFOs were either advanced aircraft built by our enemies, Russia or China, or Extra-Terrestrial in origin. The only decision made at the hearings was to have NASA get involved and see what the space agency thought about the whole matter. Which if you think about it is something the government probably should have done sixty years ago.
So NASA gathered a panel of scientists, academicians and other technical experts, including an astronaut, to form an ‘Independent Study Team’ to look into the matter. As a group the team is highly qualified; several are professors from major universities like George Mason University, Boston University and UC San Diego. There are also several scientists from both private corporations and the US government, including the FAA. The team chair is Dr. David Spergel of the Simons Foundation think tank, and the team also included former astronaut Scott Kelly, who spent a year living on the International Space Station and whose brother, another former astronaut, is now a Senator from Arizona.
On the 14th of September NASA released a report from the Independent Study Team that included analysis of some of the best known recent sightings but that also served to inform Congress of the expertise and scientific assets that NASA could bring to the study of UFOs. Indeed, much of the report, which you can read for yourself at the web address: https://www.nasa.gov/sites/default/files/atoms/files/uap_independent_study_team_-_final_report_0.pdf
reads something like a sales brochure for NASA. Throughout the report phrases like “NASA – with its extensive expertise in these domains and global reputation for scientific openness” or “NASA’s assets can play a vital role” show that the space agency would be happy to undertake a long term study of UFOs, with adequate funding of course.
To show what NASA can do the report does include analysis of several well known recent sightings, especially the ‘Go Fast’ video taken by a naval aviator flying off of the USS Theodore Roosevelt aircraft carrier, a video that was shown at the congressional hearing back in May. From the measured data shown on the video itself, and using a little simple mathematics, the team was able to demonstrate that it was the Navy plane that was going fast, not the object it filmed. In fact the object was traveling no faster than 40 mph, and with the wind. The report confidently states that in all likelihood the UFO was drifting with the wind; in other words it was nothing more than a misidentified balloon.
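For the curious, the kind of ‘simple mathematics’ involved is just look-down geometry; the sketch below uses made-up numbers of my own (not the values in the NASA report) simply to show how a fast-moving camera platform makes even a slow object seem to race across the frame:

```python
# Illustrative look-down geometry for a 'Go Fast' style observation.
# All numbers are assumptions for the sake of the example, not the report's data.
import math

aircraft_alt_m = 7600.0        # ~25,000 ft cruise altitude (assumed)
aircraft_speed_ms = 240.0      # ~470 knots (assumed)
slant_range_m = 6000.0         # range to the object from the targeting pod (assumed)
depression_deg = 35.0          # look-down angle below horizontal (assumed)

# 1) The object's altitude: the look-down geometry puts it kilometers above the sea,
#    not skimming the surface as the video seems to suggest.
object_alt_m = aircraft_alt_m - slant_range_m * math.sin(math.radians(depression_deg))
print(f"object altitude ≈ {object_alt_m / 0.3048:,.0f} ft")

# 2) The apparent angular drift of a completely STATIONARY object, caused purely by
#    the aircraft's own motion across the line of sight: parallax does the racing.
angular_rate_deg_s = math.degrees(aircraft_speed_ms / slant_range_m)
print(f"apparent drift of a stationary object ≈ {angular_rate_deg_s:.1f} degrees per second")
```

Run with those assumed numbers the object sits at roughly 13,600 feet, and a perfectly motionless balloon would still appear to sweep past at more than two degrees per second, which is exactly why the clip looks so dramatic.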
So, should the federal government fund NASA to conduct a long term examination of UFOs in the hope that when the final report is published everyone will accept it? As I said above, this is something that might have made sense sixty years ago, but today the ‘true believers’ on either side will never accept any answer but the one they want. My fear is that NASA will explain the great majority of the UFO sightings while at the same time there will always be a small percentage of ‘unknowns’. This is the same situation that Project Blue Book wound up in sixty years ago, and which only served to generate rumors that the federal government was ‘hiding the truth’ about flying saucers, an accusation that NASA may very soon find itself the target of.
If space aliens were actually visiting Earth in such great numbers as UFO believers insist, wouldn’t they have contacted us by now, or conquered us by now, or at least have DONE SOMETHING BY NOW!
Whenever we think of the objects that archaeologists discover at ancient sites what usually comes to mind are items made of precious metals or jewels. Artifacts like King Tutankhamun’s death mask or the jewels of Helen found at Troy are certainly among the most famous of archaeological finds. These are the sorts of archaeological treasure that we commonly find displayed in museums.
However, there is another class of ancient ‘find’ that is far more valuable to archaeologists in their study of past cultures: language. Written records, whether on papyrus or vellum or even inscribed in stone, can tell us much more about ancient peoples than gold or jeweled trinkets, if we can read them. Additionally there are the remains of bygone languages in the very words we use today, the study of which has its own special class of scholars, philologists.
Today’s post is about two examples of how archaeologists study long forgotten languages, and how, using the most modern of techniques they are learning more about ancient peoples by better understanding their languages.
One of the earliest forms of writing is cuneiform, a technique that was developed to record the language of the ancient people of Mesopotamia, the Sumerians, Akkadians, Assyrians and Babylonians. Basically a scribe would take a soft clay tablet and, using a reed cut to a special wedge shape, make triangular marks in the clay that could be read by another scribe. The clay tablets were then fired just like a piece of pottery, producing a written record that can last for millennia.
The Mesopotamians recorded everything, inventories of livestock or grain, tax revenues, speeches by their kings and of course stories like the saga of Gilgamesh. Hundreds of thousands of cuneiform tablets have been excavated from scores of different sites along the valley of the Tigris and Euphrates rivers, writings that could contain a wealth of knowledge of the peoples of Mesopotamia.
The problem is that very few modern scholars can read Mesopotamian cuneiform, and because of damage to the tablets, with large sections often simply gone, it can take even an expert weeks to decipher a single record. Because of this only a small percentage of cuneiform tablets have ever been read. No one knows what priceless piece of history remains unknown simply because the tablet on which it is written lies unread in the basement of some museum.
Enter a computer, an Artificial Intelligence algorithm to be exact. An interdisciplinary team of computer scientists and language historians, led by a Google software engineer and an Assyriologist from Ariel University, has trained an AI to perform translations of Mesopotamian cuneiform. After its development the AI was given a test of its abilities known as the Bilingual Evaluation Understudy 4 (BLEU4), the same evaluation by which a human student’s cuneiform translations would be judged. The AI’s score was better than the team had expected and good enough to be considered ‘High Quality Translations’.
The AI model did show difficulty in understanding some of the nuances of translation, idioms that have no exact counterpart in English for example. Even with the occasional error, however, the fact that the AI can produce translations in seconds has led the researchers to suggest that it be used to translate inventories and other mundane writings. If the AI’s translation indicates that a tablet is something more interesting, a peace treaty between two warring cities for example, then a human translator can quickly check the work, just to be certain. With more experience the translation AI will get progressively better so that, in perhaps just a few years, the massive backlog of cuneiform tablets will finally be read.
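For readers curious about what a ‘BLEU’ score actually measures, here is a minimal sketch using NLTK; the sentences are invented English stand-ins, purely to show the mechanics of comparing a machine translation against a human reference:

```python
# BLEU-4 compares the n-gram overlap (n = 1..4) between a candidate translation
# and one or more reference translations. The sentences here are made up.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["ten", "sheep", "were", "delivered", "to", "the", "temple", "of", "the", "god"]]
candidate = ["ten", "sheep", "were", "brought", "to", "the", "temple", "of", "the", "god"]

# The default weights score 1- through 4-gram matches equally, i.e. BLEU-4.
score = sentence_bleu(reference, candidate, smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4 ≈ {score:.2f}")   # 1.0 would mean a word-for-word match with the reference
```

A score near 1 means the machine’s output overlaps heavily with what a human translator produced, which is the sense in which the cuneiform AI’s translations were judged ‘high quality’.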
Clay tablets containing cuneiform markings are actual physical objects whose survival after thousands of years allows modern scholars to recreate the dead languages of ancient Mesopotamia, but how can anyone reconstruct a dead language for which there are no written records at all? That is the problem facing scholars who try to understand the origins of the large group of modern languages known as Indo-European.
So how do philologists know that certain languages, say English and Persian for example, are actually related, that they evolved from a common language, proto-Indo-European, that was spoken in prehistoric times? The historic records give us a start. We know that in the 5th century CE Germanic tribes called the Angles and the Saxons invaded Roman Britain and the language they spoke became modern English, so English is a Germanic language. Or we can look at the number of similar words, like how the English Water = German Wasser, or Mother = Mutter, or Morning = Morgen, etc. In the same way we know that modern Italian, Spanish and French all descend from Latin.
Then, in the late 18th century, British officials serving in India came upon a number of ancient texts written in Sanskrit, the ancient ancestor of modern Hindi. Those officials, educated at either Oxford or Cambridge, were shocked to realize that Sanskrit was an awful lot like Greek, so Sanskrit, and therefore Hindi, are a part of the Indo-European languages.
Today there are over 160 languages that are recognized as being a part of the Indo-European group, including more than 50 that are officially dead, that is, no one alive today speaks them as their primary tongue. Nearly half the people in the world speak an Indo-European language.
So the question of where and when the original proto-Indo-European language was spoken is one of the most important in all of anthropology. Many theories have been proposed, often without much evidence, and much blood has been spilled in the arguments.
I’m not kidding about the blood: before World War 2 the Nazis maintained that Germany was the homeland of the original proto-Indo-Europeans, whom they called the ‘Aryans’, and that they themselves were the pure descendants of the Aryans, everyone else being of ‘inferior’ blood. Such was the basis for their racial cleansing and the Holocaust.
Most modern scholarship, however, considers that the steppes of Russia and Ukraine directly north of the Black Sea, or the area of the Caucasus Mountains where Turkey, Iran and Armenia come together, are the likeliest homes of the proto-Indo-Europeans, with a time frame between 5,000 and 9,000 years ago. Now an international team of linguists and geneticists led by the Max Planck Institute for Evolutionary Anthropology in Leipzig has established the largest ever dataset of both language and genetic correlations of Indo-European words and peoples.
Using a computer algorithm the team performed a Bayesian phylogenetic analysis of all of their data. What the analysis concluded was that the original home of the Indo-European languages was in the Caucasus mountains of eastern Anatolia about 8,100 years ago, but that by 7,000 years ago there was already a split into five main branches, with many speakers moving to the steppe region north of the Black Sea. In other words the answer appears to be a mixture of the two leading theories.
Language is very much a part of the foundation of civilization; as such it is one of the primary concerns of anthropology and archaeology. Studying the language of ancient peoples is essential in order to understand their lives.
When it comes to getting your money’s worth NASA certainly can’t complain about the two Voyager space probes that were launched way back in 1977. After having accomplished all of their mission objectives by visiting, between them, the gas giant planets Jupiter, Saturn, Uranus and Neptune, the spacecraft are still operating, sending back priceless data after 45 years in space. Now the two probes have left our solar system and are in interstellar space, giving scientists their first in situ measurements of conditions in the void between the stars.
So when a problem occurs with such a venerable spacecraft it gets a lot of attention from the engineers at the Jet Propulsion Laboratory (JPL) who built and have managed the Voyager missions from the beginning. Especially when they caused the problem. You see the trouble happened on the 21st of July when a series of routine orders meant to align Voyager 2’s antenna so that it was correctly aimed at Earth contained a typo that instead caused the antenna to point in the wrong direction, a full 2º away from our planet. As a result Voyager 2 was unable to either send or receive any signal from Earth.
Thankfully Voyager 1 was unaffected and the good news was that Voyager 2 is programmed to automatically realign back to Earth several times a year so whatever else happened the spacecraft would try to reconnect on October 15th. Nevertheless NASA was determined to reestablish communication with Voyager 2 before then.
The first thing for NASA to try was to see if they could pick up Voyager 2’s signal using the biggest antenna they had, the giant dish antenna outside Australia’s capital, Canberra. That large dish is a part of NASA’s Deep Space Network, which keeps in contact with our most distant probes. On the 2nd of August the Canberra receivers succeeded in picking up what JPL termed Voyager 2’s “heartbeat”. It was therefore decided to try to send the correct signal to the probe in the hopes of restoring full communications. Adding to the complications is the fact that the spacecraft is so far away from Earth that it takes 18 hours for a radio signal to reach Voyager 2, and another 18 hours for any reply to come back.
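That 18-hour figure is itself a handy way to picture just how far away the spacecraft is; here is a quick sketch of the conversion, using only the speed of light and the length of an astronomical unit:

```python
# Convert Voyager 2's one-way light time into a rough distance.
light_speed_km_s = 299_792.458       # speed of light
one_way_hours = 18                   # one-way signal travel time quoted above
au_km = 149_597_870.7                # one astronomical unit, the Earth-Sun distance

distance_km = light_speed_km_s * one_way_hours * 3600
print(f"Voyager 2 is roughly {distance_km:.2e} km away, about {distance_km / au_km:.0f} AU")
```

That works out to around 130 times the Earth-Sun distance, which is why every command and every reply involves a day and a half of waiting.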
Despite all the difficulties, on the 7th of August NASA succeeded in regaining full communications with Voyager 2, a marvel considering how old, and how far away, Voyager 2 is. So we are still getting priceless data about interstellar space from humanity’s oldest still active spacecraft; Voyager 2 was actually launched before Voyager 1.
Two other unmanned space probes have also been making some news this past month as both Russia and India attempted to land spacecraft on the Moon. For Russia this was the first time in 47 years that they had tried to land on our nearest neighbor, while for India it was their growing space program’s first attempt at a landing on any other world.
Although India’s Chandrayaan-3 probe was launched first, back on July 14th, it was Russia’s Luna 25 that first attempted a landing, on the 19th of August, with a result that was a complete disaster for the Russian space program. As the lander was starting its descent an engine misfire caused a crash landing with the complete loss of the spacecraft.
Chandrayaan-3’s attempt four days later was more successful, making India only the fourth nation to soft land a spacecraft on the Moon. The landing also marked the first time any spacecraft had landed near the Moon’s south pole, a region that may become very important in the coming years as it is thought that deposits of water ice could be hidden there, water that could help sustain a future Lunar base or colony. In addition to an array of instruments to study the surface, Chandrayaan-3 also carries a small rover that will operate for at least one Lunar day, which lasts 14 Earth days.
The results of the two Moon landings may be a sign of what is to come for both countries, with India on the way up while Russia is on its way down. As the Soviet Union, Russia was once the clear leader in space exploration, but the country’s last major achievement by itself in space was the Mir space station back in the 1990s. Since then Russia has been in a downward spiral, with Vladimir Putin robbing its treasury to keep his oligarchs happy while starting fruitless wars against his neighbors.
India, on the other hand, has been steadily growing both in terms of its economy and its access to technology. India’s space program is a sign of that growth and a source of national pride. With the landing of Chandrayaan-3 India becomes part of an elite group of space-faring nations, with even more ambitious plans for the future.
There is also some news about manned spaceflight and again it’s all about Space X and Boeing. On August 25th the Crew Seven mission was successfully launched to the International Space Station (ISS) with one of Space X’s Dragon capsules atop one of their Falcon 9 rockets. This mission is Space X’s 11th manned spaceflight and the seventh to send a full crew to the ISS under NASA’s commercial crew program.
The four astronauts aboard Dragon come from four different nations, all a part of the ISS consortium. Commander Jasmin Moghbeli is a former US Marine pilot now with NASA, while the European Space Agency’s (ESA) Andreas Mogensen is from Denmark. Rounding out the crew are mission specialist Satoshi Furukawa from Japan and Russia’s Konstantin Borisov. All four will remain aboard the ISS for six months in what has become standard operating procedure thanks to Space X.
Meanwhile Boeing, which was expected to compete with Space X for missions to and from the ISS, has had a seemingly endless series of problems with its Starliner space capsule. Last year Boeing finally succeeded in sending an unmanned Starliner capsule to the ISS as a test flight, and it was hoped that a final, crewed test flight would take place by the end of this year.
Complications arose with the capsule’s parachute system, however, along with some adhesive that could pose a fire risk under certain circumstances. The work of satisfying NASA’s rigid safety protocols grew and grew until now it seems as if that final manned test flight of Starliner will not take place until March of 2024 at the earliest.
Despite all of their difficulties with Starliner, Boeing insists they are committed to the program, having secured enough parts to build a further six capsules. If Starliner does succeed in taking astronauts to the ISS early next year it is hoped that the first operational crew mission could take place just about a year from now. Going forward, Space X and Boeing would then alternate crew missions until the end of the ISS program, which is expected to come in 2030.
Even with all of the problems, space exploration is expanding with more countries like India becoming involved, with private companies like Space X reducing costs and with new missions to explore our solar system and the infinite beyond.