Paleontology News for June 2020.

There have been a number of small but important discoveries recently illuminating portions of the history of life here on Earth. As usual I think I’ll start with the earliest and move forward in time.

One of the most common modes of life in the natural world is parasitism, where an individual of one species spends a large part of its life literally living off of a member of another species. While parasitism is technically a form of symbiosis it differs from mutually beneficial symbiosis in that the parasite gains at the expense of its host.

In addition to feeding off of our blood, external parasites such as this tick can carry illnesses like Lyme disease. (Credit: Science Insider)
Internal parasites, such as this tapeworm, may also cause illness. (Credit: WebMD)

A very large number of different species, spread across every major taxonomic group of both animals and plants, are parasites for at least a part of their lives. There are so many parasites out there that you would expect to find a lot of evidence of them in the fossil record.

A louse from the Cretaceous period preserved in amber. Could it contain any dino DNA? (Credit: Daily Mail)

It’s not that easy: a lot of parasites don’t fossilize well; think of a tapeworm. Or consider a dinosaur infested with fleas. If that dino dies the fleas will quickly leave to find another host, so they won’t be fossilized with the dinosaur.

Even if you do find two different species fossilized together you have the problem of determining whether your fossil is a true example of parasitism. For example, in my collection I have a small clamshell from the Cretaceous period that has the tube of a feather duster worm attached to it. For all I know the worm could have built its home on the shell after the clam had died. So trying to figure out when one creature was benefiting by harming the other isn’t easy.

Fragment of clam shell from the Cretaceous period in my collection with the tube of a feather duster worm attached. This is not an example of parasitism because the tube is on the inside of the shell, which means the clam was already dead when the worm attached itself. (Credit: R. A. Lawler)

Nevertheless a team of paleontologists from Northwest University in Xi’an, China, the Swedish Museum of Natural History and Macquarie University in Sydney, Australia has announced what they assert is the earliest known example of parasitism. Their evidence comes from the Cambrian period, approximately 515 million years ago, and resembles in many ways my fossil mentioned above.

The fossils consisted of a large number of shells of a species of brachiopod, a creature whose shell resembles that of a clam although the animal inside is totally different. While brachiopods today are quite rare, in the early period of life’s history, more than 250 million years ago, they were more common than clams.

Some of the Brachiopod shells used in the study of ancient parasitism. (Credit: Macquarie University)

Examining the brachiopod shells, the paleontologists found that approximately half were encrusted with the tubes of worms, just like my fossil, while the other half were not. Measuring the shells of the brachiopods and using that as an indication of the animals’ health, the researchers discovered that the encrusted brachiopods were consistently smaller, by about 26%. This is clear evidence that the worms were harming the brachiopods. In other words, the worms were parasites.

Artists impression of a Brachiopod shell infested with parasitic worms. (Credit: Ars Technica)

Not only that, but because, like a clam’s, the shells of brachiopods grow outward from their edges, the scientists were able to determine how early in the life of each brachiopod it had become encrusted. Again, those brachiopods that were encrusted earlier in their lives showed the most pronounced size reduction, further evidence of parasitism.

So it appears that parasitism as a mode of life has existed for nearly as long as multi-cellular creatures have. Another common mode of life that has also recently been found to have ancient roots is suspension feeding: animals that swim with their mouths wide open, filtering plankton and other small creatures out of the water. In today’s oceans baleen whales and basking sharks are the best known suspension feeders and are among the largest creatures on Earth.

Now a new study by paleontologists at the Universities of Bristol and Zurich of an ancient fish from the Devonian period, about 380 million years ago, has provided strong evidence that at least one of the ocean’s largest inhabitants back then lived in much the same way. The animal in question belongs to the group of armored fish known as placoderms and is formally called Titanichthys. A giant for its time, Titanichthys measured more than five meters in length, and crucially its jaw alone was more than a meter long. Modern suspension feeders also have greatly elongated lower jaws, allowing them to scoop up the greatest amount of water as they swim.

School of Titanichthys feeding as they swim. At five meters in length Titanichthys was one of the largest living things during the Devonian age. (Credit: Sci-News.com)

The new research also found that while the lower jaw of Titanichthys was long it wasn’t very strong; neither the bones themselves nor the muscles attached to the jawbones would have been sufficient to deliver a strong bite, further evidence of the fish’s lifestyle as a suspension feeder.

The fossilized skull of Titanichthys. That huge open mouth certainly could have collected a lot of food. (Credit: Black Hills Institute)

Moving forward in time we come to my final story for this month, which concerns the asteroid that is presumed to have caused the extinction of the dinosaurs. It was only about thirty years ago that geologists succeeded in finding the actual site of that impact, the Chicxulub crater in the Yucatan peninsula of Mexico.

The Chicxulub crater in the Yucatan peninsula of Mexico. The asteroid that struck here is generally considered to have caused the extinction of the dinosaurs. (Credit: Wikipedia)

Ever since that discovery geologists have surveyed Chicxulub, hoping to learn as much as they can about how the 10 kilometer wide space rock caused so much damage, destruction so great that it led to the extinction of about 75% of all of the species on Earth. In a paper published in Nature Communications, scientists from the University of Texas at Austin, Imperial College London and the University of Freiburg in Germany have used computer simulations to investigate what the likely initial conditions of that asteroid strike must have been in order for it to have produced the effects seen in the Yucatan today.

Based upon the diameter of the Chicxulub crater, its depth and the observed distribution of ejected material from sites around the world, the team of geologists have concluded that the asteroid struck the Earth at an angle of 60°, an angle that they argue produced the greatest amount of destruction. According to the simulations, a steeper angle, say 70-90°, would have produced a deeper crater but one where the ejecta was more confined to the area around the crater; in other words the other side of the Earth might have been subjected to considerably less devastation. On the other hand, if the asteroid had struck at a shallower angle, say 30° or less, the crater would have also been shallower and the distribution of ejecta would have been much more concentrated in the direction of the asteroid’s motion, which again might have spared some parts of the Earth from bearing the full brunt of the asteroid’s destructive power.

Computer simulation of the asteroid strike with the asteroid coming in at an angle of 60 degrees. (Credit: Collins, Patel, Davidson et al.)

If the simulations produced by the team of geologists do in fact correspond to what actually happened 66 million years ago then the dinosaurs were doubly unlucky. Not only did the asteroid strike suddenly from out of the depths of space but it also struck in just the right way to both produce the maximum destruction and to spread that destruction evenly around the entire world.

Of course as mammals we should remember that what was bad luck for the dinosaurs was good luck for us!

Astronomers debate whether or not there is a ninth planet in our Solar System, and I’m not talking about the argument over Pluto.

The argument over whether or not Pluto is a full-fledged planet has been going on now for almost 14 years and there appears to be no end in sight. I grew up without ever questioning Pluto’s designation, but admittedly even back in the 1960s Pluto was considered something of an oddball for a planet. Smaller even than Mercury, and much smaller than its gas giant neighbors, Pluto even has an orbit that occasionally brings it closer to the Sun than the eighth planet, Neptune. Crossing the orbit of another planet seemed like something no self-respecting planet would ever do.

The dwarf planet Pluto with its largest moon Charon to the upper left. Both are now considered to be Kuiper Belt Objects (KBOs). (Credit: Astronomy Magazine)

Then, starting in the 1990s, a number of other icy bodies with even more unusual orbits were discovered not far beyond Pluto. These objects were grouped together as Kuiper Belt Objects (KBOs) and the debate over what kind of body Pluto was, a planet or a KBO, began.

The dwarf planet Eris, artist’s impression shown, is also a KBO and may even be a bit larger than Pluto. (Credit: Space.com)

The current definition of a planet basically consists of two criteria. One, a planet must be large enough, massive enough, that its gravity pulls it into a nice spherical shape. Pluto passes this criterion easily, as does Ceres in the asteroid belt.

The second criterion is that a planet’s gravity must be strong enough to sweep out any other object from its orbital region. This is the test that Pluto and Ceres both fail. Ceres fails because of the other asteroids while Pluto fails because of the other KBOs. That is the official position and I don’t intend to take sides one way or the other. I’ve never liked arguing over definitions. To me Pluto is what it is no matter what we choose to call it.

The eight recognized planets in our solar system. This image clearly shows the great difference between the rocky inner planets like Earth and the enormous gas giants like Jupiter. Maybe these eight should be split into two groups? (Credit: Britannica)

All of which has nothing to do with today’s actual topic, the continuing search by astronomers for an as yet undiscovered ninth planet, a tenth planet if you insist on Pluto being a planet. So why do astronomers think that there could be another planet out there, and how are they going about looking for it?

It all has to do with the pulling and tugging that the gravities of the planets have on each other’s orbits around the Sun. Because the Sun’s mass is so huge, hundreds of times the mass of all of the planets, moons and everything else in the solar system added together, the orbits of the planets are pretty close to ellipses, just as Kepler’s first law requires.

Kepler’s First Law states that planets orbit the Sun in ellipses. This isn’t absolutely accurate because of the gravitational pulls of the other planets. (Credit: Quora)

Nevertheless the pulling of the gravities of the other planets does have an effect that astronomers can measure and compare to their calculations. If any discrepancy is found, even the tiniest, astronomers will start searching for its cause.

This happened in the first half of the 19th century when the measurements of the orbit of Uranus, the seventh planet, did not match calculations. It was suggested that another planet beyond Uranus might be the cause, and after twenty years of calculations planet number eight, Neptune, was found exactly where the math said it would be.

Then the same thing happened to the orbit of Neptune; the planet wasn’t quite moving as the calculations said it should. So the hunt was on for a ninth planet, which finally led to the discovery of Pluto in 1930. Pluto was so small, however, that it didn’t seem able to account for all of the discrepancy in Neptune’s orbit. So, for the next five decades astronomers kept looking for a tenth planet beyond even Pluto, without success.

Clyde Tombaugh discovered Pluto by noticing the movement of a tiny dot of light, indicated by the arrows, between images taken a week apart. (Credit: The Planetary Society)

Things have gotten a lot more complicated since then, and I’m not talking about whether or not Pluto is a planet. I mean with all of those Kuiper Belt Objects orbiting around out there it’s difficult to calculate just what all is going on. Do the KBOs together account for the problem with Neptune’s orbit or do we still need another planet, or would several more KBOs do the trick? And what about the orbits of the KBOs themselves? Are their orbits matching the calculations or does it seem as if there could be another big body out there affecting their motions? Plotting the orbit for one object in our Solar System is a lot of math, even using a computer. I know, I had to do it back in grad school. Trying to do the same for the over 1500 known KBOs is beyond my programming skills.
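Just to give a sense of what even the simplest of those orbit calculations looks like, here is a minimal sketch in Python (rather than my old FORTRAN) of numerically integrating a single planet's orbit around the Sun. It treats the Sun as the only source of gravity; a real Kuiper belt calculation would have to add in the pulls of Neptune and the other giant planets, which is where all the hard work lies.

```python
import math

# Toy two-body integrator: one planet around the Sun, in units where
# G*M_sun = 4*pi^2 (distances in AU, times in years). This is only an
# illustrative sketch of the kind of orbit calculation described above,
# not the code used by any actual KBO survey.
GM = 4.0 * math.pi ** 2  # AU^3 / yr^2

def integrate_orbit(x, y, vx, vy, dt=0.001, t_end=1.0):
    """Advance a planet with a simple leapfrog (kick-drift-kick) scheme."""
    steps = int(t_end / dt)
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        vx += 0.5 * dt * (-GM * x / r3)   # half kick
        vy += 0.5 * dt * (-GM * y / r3)
        x += dt * vx                      # drift
        y += dt * vy
        r3 = (x * x + y * y) ** 1.5
        vx += 0.5 * dt * (-GM * x / r3)   # half kick
        vy += 0.5 * dt * (-GM * y / r3)
    return x, y, vx, vy

# An Earth-like circular orbit: radius 1 AU, circular speed 2*pi AU/yr.
x, y, vx, vy = integrate_orbit(1.0, 0.0, 0.0, 2.0 * math.pi, t_end=1.0)
print(round(x, 2), round(y, 2))  # back near (1.0, 0.0) after one year
```

Run over one year with a small time step, the orbit comes back almost exactly to its starting point, just as Kepler's first law says it should; the hard part is repeating this for thousands of bodies that all tug on one another.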

Sample of an old fashioned FORTRAN computer program. This is the way I had to calculate planetary orbits back in grad school! (Credit: Stack Overflow)

Fortunately it’s not beyond the skills of Samantha Lawler, Assistant Professor of Astronomy at the University of Regina in Canada. Using observations and discoveries made by the Outer Solar System Origins Survey Dr. Lawler, no relation, has calculated the orbits of the known KBOs for the purpose of finding where that ninth planet could be hiding. If it exists that is.

A fair amount of Dr. Lawler’s work actually consisted of recognizing observational biases in earlier searches for KBOs. First of all, since KBOs are so far from the Sun they hardly move at all against the background of fixed stars. Because of that, the regions of the Kuiper belt that lie in the same direction as the Milky Way have been ignored because of the enormous difficulty of distinguishing a small icy body from one of the millions of distant stars in our galaxy. Other biases arose when certain telescopes were employed in searching the Kuiper belt only during certain times of the year, again neglecting whole sections of the solar system.

The orbits of KBOs as discovered by the best conducted surveys to date. Notice how the region to the lower left is empty. Is this because there are no KBOs in the region or is it because of unconscious bias in the surveys? (Credit: Sci-News.com)

Adjusting for these biases in her simulations, Dr. Lawler has shown that KBOs are actually more uniformly distributed than other surveys had indicated and that the orbits of the known KBOs can be explained without the need for the gravity of a ninth planet to shepherd them.

So it appears that there probably isn’t another planet out there beyond Neptune and Pluto waiting to be discovered after all. Instead there are thousands of small, icy KBOs. Tiny little worlds that never managed to come together to form a single big planet.

Maybe, in some ways that’s even more interesting.

U.S. Navy successfully tests shipboard Solid-State Laser system for anti-aircraft defense. Is the time of Gunpowder’s dominance on the battlefield coming to an end?

On the 29th of May in the year 1453 CE one of the turning points in world history occurred as the great city of Constantinople fell to the might of the Turkish military. The fall of the city is often referred to as the final end of the Roman Empire, and it is fitting that Constantinople’s massive walls were breached not by any weapon that Julius Caesar would have recognized but rather by the cutting-edge, high-tech weapon of its day, the cannon.

Contemporary depiction of the fall of Constantinople emphasizing the Turkish use of artillery. (Credit: Ancient History Encyclopedia)

For nearly 600 years wars have been fought with guns, cannons, shells, mines and rockets of various kinds, all of which derive their lethal force from the explosive release of chemical energy. It is true that bayonets, lances and even swords can still be seen today at parades and other military pageants, but it is gunpowder and its derivatives that dominate today’s battlefield.

With today’s plastic explosives you can mold your bomb into almost any shape you want and yes that’s an explosive penis he is holding. (Credit: Reddit)

That may not be true for much longer. You see, for the last decade or so the U.S. Military, particularly the Navy, has been putting a great deal of money and effort into the development of what are officially known as ‘Directed Energy Weapons’ or DEWs, weapons that derive their power from electricity instead of explosives.

In an earlier post, see my post of August 2nd 2017, I discussed the Navy’s Rail Gun, which employs magnetic fields to hurl a shell up to 400 kilometers at a velocity of 5 to 6 times the speed of sound. The shells fired by the rail gun travel so fast that they don’t even need an explosive warhead to destroy their target. The shell is solid metal; kinetic energy does all the damage. Meanwhile the Army has been testing an anti-personnel microwave generator that uses radio waves to cause a painful heating sensation in the skin.

The U.S. Navy’s Rail Gun being tested. No explosives are needed, it’s all electricity and magnetic fields! (Credit: YouTube)

Now the Navy has tested a shipboard solid-state laser, using it to intercept, that is shoot down, a robotic drone aircraft. Many of the details of the test are secret but it is known that the laser was mounted aboard the U.S.S. Portland, an Amphibious Transport Dock Ship, and that the test took place on the 16th of May 2020 in the ocean somewhere south of the Hawaiian Islands.

USS Portland firing the Navy’s new Laser Weapons System Demonstrator. (Credit: US Naval Institute)

The two most important parameters of the test, and therefore the most secret, are the power of the laser and the range at which it destroyed its target. Based on a 2018 report from the International Institute for Strategic Studies however it is estimated that the laser’s power was somewhere in the range of 150 kilowatts while from the released images of the test the target was destroyed at a distance of at least several kilometers.

Earlier version of the LWSD mounted aboard the USS Ponce. (Credit: Wikipedia)

Officially the laser on board the Portland is called a ‘Laser Weapons System Demonstrator’ (LWSD) and current plans are for the LWSD to be used to provide protection for naval vessels against small attacking boats as well as aircraft. According to the Portland’s Commanding Officer, Captain Karrey Sanders, “With this new advanced capability, we are redefining war at sea for the Navy.”

Official Navy image of drone aircraft being shot down by Laser aboard USS Portland. (Credit: US Navy)

Currently most of the effort being carried out to develop these new DEWs is being undertaken by the Navy. Still, you have to know that in some defense contractor’s laboratory somewhere they’re looking at putting a laser, or perhaps a rail gun, on a tank. Slowly but surely the new high-tech weapons of war are becoming powered by electricity, not explosives.

Science Fiction has had ray guns for decades. I guess we’re finally catching up to Buck Rogers! (Credit: NASA Science and Entertainment Exchange)

“…Redefining war…,” that’s what Captain Sanders said. And maybe he’s right; maybe gunpowder’s dominance of the battlefield is nearing its end. Too bad we just can’t get rid of battlefields instead!

NASA’s Commercial Crew Program finally begins with the first manned launch of a space mission by a private company, Space X.

At the beginning of the Obama administration NASA, the U.S. space agency, faced a major dilemma. Its remaining fleet of three Space Shuttles was growing older, increasing the possibility of another space disaster. At the same time the International Space Station (ISS), which NASA had spent so many years and so many billions of dollars constructing, was only starting its useful lifetime.

The International Space Station (ISS). NASA would like to keep it manned while at the same time exploring beyond Low Earth Orbit (LEO). (Credit: NASA)

To make matters worse, during the Bush administration NASA had been directed to develop a program called Constellation for returning America to the Moon, a program whose enormous cost Obama had little liking for. Without the shuttle, or an equivalent man-capable launch system, how would NASA astronauts get to their brand new ISS?

Often referred to as ‘Apollo on Steroids’, the Constellation program was an ambitious program for a return to the Moon. Its huge cost caused it to be canceled. (Credit: Wikimedia Commons)
The Space Launch System (SLS) is a remnant of the Constellation program. Plagued by delays and cost overruns it may fly someday, maybe. (Credit: NASA)

It was decided that NASA would use launch systems that would be developed and operated by commercial aerospace corporations. Contracts had already been given to several such companies to develop robotic capsules to ferry supplies to the ISS. Why not fund those companies to develop man-capable capsules that could take astronauts to Low Earth Orbit (LEO) as well? NASA could then ‘hire’ space capsules to take its astronauts to the ISS while the companies would be free to use their technology to further the commercial development of space.

So it was that in 2011 four aerospace companies, Boeing, Space X, Blue Origin and Sierra Nevada, submitted design proposals for a man-capable space capsule, and after two rounds of review and competition in 2014 Boeing was awarded a contract for $4.2 billion while Space X was awarded a contract for $2.6 billion to aid them in the design and development of their manned space capsules.

Despite failing to win a contract from NASA’s Commercial Crew Program, Sierra Nevada Corporation continues to work on its version of the shuttle called the ‘Dream Chaser’. (Credit: Space News)

With the retirement of the shuttle in 2011 NASA became dependent on Russian Soyuz rockets to take its astronauts to the ISS, so it was hoped that either Boeing or Space X would be ready to begin manned operations by 2017. Developing a man-capable space system is not that easy, however, and the delays mounted.

Taking Cosmonauts to orbit since the mid 1960s the venerable Soyuz has been the only way to reach the ISS for the past 9 years. (Credit: The Verge)

At first everyone expected that Boeing, with its long history in aerospace technology, and with the larger amount of money, would be the first to actually succeed in taking astronauts into space. Over the last several years however the aerospace giant has been plagued with a series of problems. So it was that the mini-space race between Boeing and Space X was finally won by the younger, more aggressive company. See my post of 28 December 2019.

Boeing’s Starliner has flown into orbit on an unmanned test flight but problems with the craft’s software caused the test flight to be considered a failure and the necessary fixes are ongoing. (Credit: Boeing)

Designated Demonstration Mission 2 (DM-2), the flight of the Space X Crew Dragon capsule was originally scheduled to take off from Kennedy Space Center in Florida on the 27th of May. Less than twenty minutes before takeoff, however, bad weather caused the flight to be scrubbed for the day. The next possible launch date was three days later, but again the Florida weather was questionable. This time however the rain and winds held off, and at 3:22:45 EDT the engines on the Space X Falcon 9 rocket ignited and astronauts Bob Behnken and Doug Hurley had a flawless eight-minute ride into LEO. To make their success complete Space X even managed to recover the Falcon 9 first stage so that it could be used again, an operation that has now become routine for Space X.

The launch of the Space X Falcon 9 rocket with the Crew Dragon capsule carrying astronauts Bob Behnken and Doug Hurley to the ISS. (Credit: SciTech Daily)

About 45 minutes after take off the Dragon capsule, crewed by veteran space shuttle astronauts Bob Behnken and Doug Hurley, completed an orbital adjustment burn, the first of five that would bring them to their rendezvous with the ISS. On the morning of the 30th of May, just 19 hours after lift off the Dragon capsule smoothly docked with the ISS.

The Crew Dragon with Astronauts Behnken and Hurley as seen from the ISS moments before docking. (Credit: NASA)

Now the mission of astronauts Behnken and Hurley is ongoing. For at least the next month they will function as members of the ISS crew but NASA could extend their mission to as much as three months. Then astronauts Behnken and Hurley will complete their mission with a return to Earth in the Dragon capsule, splashing down in the Atlantic off the Florida coast. The next manned launch of the Dragon is currently scheduled for September and will be the first official mission of NASA’s Commercial Crew Program. A little late perhaps but nevertheless, so far so good!

Extraterrestrial Life and Extraterrestrial Intelligence: how likely could they be, and what are the chances that we may soon discover one or the other?

Certainly one of the biggest questions that anyone can ask is: is there life out there? Are there other planets that have life, or even intelligent life, living on them? At the present time we really have no idea; our exploration of the Universe has only just begun. We have landed robotic probes on only a very few celestial bodies and even on those we have seen so little that some form of life could be hiding from us! Still, as the famed science fiction author Arthur C. Clarke once observed, the question of whether we are alone in the Universe can have only two answers and either one is awe inspiring.

Thanks to Steven Spielberg this is most people’s idea of an Extraterrestrial. (Credit: Dread Central)
Unless that is you prefer this one! (Credit: Paramount)

Many would say that the Universe is so large, and there are so many places that life could exist and evolve into intelligence, that surely there must be some life out there. That position, however reasonable, isn’t evidence. So the study of extraterrestrial life remains a science without a subject, a science of conjecture and hypothesis rather than solid fact.

Every little dot in this image is an entire galaxy with billions of stars. In such a huge Universe how can we possibly be alone? (Credit: NASA)

When I was an undergraduate all of that conjecture was summed up in ‘Drake’s Equation’, named for the U.S. astronomer Frank Drake, who first explicitly wrote down all of the factors in one equation. Using Drake’s equation it is possible to calculate the number of intelligent species in a galaxy, assuming you have accurate numbers for all of the factors in the equation.

                                I = N × FP × FH × FL × FI                                (Equation 1)

In this equation I is the number of intelligent species in a galaxy, say our own Milky Way. You calculate I by multiplying the factors on the right hand side.

N is the number of stars in that galaxy, about 200 billion for the Milky Way.

FP is the fraction of those stars that have planets orbiting them. Therefore FP must have a value between zero and one.

FH is the fraction of planets that orbit in a ‘habitable zone’ around their star; I’ll explain what that means below. Again, FH is somewhere between zero and one.

FL is the fraction of habitable planets where life actually arises. Again, zero to one.

FI is the fraction of planets with life on them where intelligence evolves. Zero to one.

Back when I was in college the only factor on the right hand side of Drake’s equation that astronomers had any accurate measurement for was N, the number of stars in the Milky Way. Every other factor was totally unknown so any attempt to actually use the Drake equation was just pure guesswork.
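Just how wide open that guesswork is can be demonstrated with a little sketch that multiplies Equation 1 through twice, once with optimistic guesses and once with pessimistic ones. Every factor value below is simply made up for illustration, which is exactly the problem.

```python
# Drake-style estimate from Equation 1 above: I = N * FP * FH * FL * FI.
# Every factor value below is an invented guess, for illustration only.

def intelligent_species(N, FP, FH, FL, FI):
    """The number of intelligent species in a galaxy, per Equation 1."""
    return N * FP * FH * FL * FI

N = 200e9  # stars in the Milky Way

optimistic = intelligent_species(N, FP=0.9, FH=0.2, FL=0.5, FI=0.01)
pessimistic = intelligent_species(N, FP=0.5, FH=0.01, FL=1e-4, FI=1e-6)

print(f"optimistic guesses:  {optimistic:,.0f}")   # hundreds of millions
print(f"pessimistic guesses: {pessimistic:.1f}")   # a fraction of one
```

Equally defensible guesses give answers that differ by nine orders of magnitude, from a galaxy teeming with civilizations to one in which we are almost certainly alone.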

Our Milky Way itself contains 200 billion stars, any one of which could have planets with life on them! (Credit: Forbes)

We’ve made some progress since then. In particular thanks to the discoveries made by the Kepler space telescope and other astronomical programs we now know of the existence of thousands of planets outside of our solar system. Because of these discoveries we can now say with reasonable confidence that at least half of all stars must have planets orbiting them, perhaps 90% or even more. So if even half of the Milky Way’s 200 billion stars have planets, then there are an awful lot of planets out there.

Thanks to the Kepler Space Telescope we know of the existence of thousands of planets outside of our solar system. (Credit: Vox)

We’ve also made some progress with FH, the fraction of planets that could be habitable for life. Thirty to forty years ago ‘habitable’ would have meant liquid water on the planet’s surface, which in our solar system meant only Earth, one out of eight planets. However our space probes have discovered that Mars once had oceans and may still have water beneath its surface. Also, data from probes to the outer planets have raised the possibility that Europa and Enceladus, moons of Jupiter and Saturn respectively, may have large oceans of liquid water beneath their icy surfaces. That means that our solar system might actually have at least four habitable bodies, not just the Earth. So it appears that FH might actually be larger than we thought just a few decades ago.

Both Jupiter’s Moon Europa (L) and Saturn’s Moon Enceladus (R) are believed to have oceans of water beneath their icy surfaces. This means that more planets than we thought might actually be ‘habitable’. (Credit: NASA)

That leaves us with just the last two factors: FL, the fraction of planets with a habitable environment that possess life, and FI, the fraction of planets with life where intelligence evolves. The only way to get an accurate measurement for these two numbers would be to closely study a few hundred or more habitable planets or moons and just see how many have developed life and how many went on to evolve intelligence.

The evidence from geology is that it didn’t take long for Earth’s Primordial Soup to evolve into living things. (Credit: Scoopnest)

We can’t do that however; it will probably take decades for our space technology to even find life on Mars or Europa, if it’s there. The only real example we have to study is Earth. Can we learn anything about FL and FI from studying the history of life here?

A new study says that we can. Authored by David Kipping of Columbia University’s Department of Astronomy, “An objective Bayesian analysis of life’s early start and our late arrival” uses probability mathematics to calculate the values for FL and FI that would best simulate life’s history here on Earth.

Bayesian analysis is a mathematical technique for studying complex problems with a large number of parameters. Heavy on calculations it’s often performed by computers. (Credit: Mondo 2000)

You see, we know that our planet is about 4.5 billion years old and there is growing evidence that life was well established here as far back as 4 billion years ago. Indeed it looks as though life began on Earth as soon as its surface had cooled enough for life to exist. On the other hand complex, multi-cellular life took 4 billion years to evolve, and even then intelligence took another half billion years.

Life may have existed early in Earth’s history but it took a very long time to evolve into complex multi-cellular forms. (Credit: Expii)

So what Doctor Kipping did was to develop a computer program that would vary FL and FI across all of their possible values and see which values succeeded in reproducing life’s history here on Earth. The result that Dr. Kipping obtained is that while life itself could be quite common in the Universe, intelligence is very rare. Mathematically what he found was that FL is close to one but FI is very, very close to zero. Thousands of planets may have life on them for every one that possesses an intelligent species.
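To get a feel for the logic, here is a toy Monte Carlo in the same spirit as, though enormously cruder than, Dr. Kipping's actual Bayesian analysis. It treats abiogenesis and the later emergence of intelligence as chance events with assumed average waiting times, and asks how often each combination reproduces Earth's history of early life but late intelligence. All of the numbers below are my own illustrative assumptions, not values from the paper.

```python
import random

# Toy model: abiogenesis and intelligence as random events with average
# waiting times T_life and T_intel (in billions of years). We then ask
# how often a 4.5-billion-year-old planet reproduces Earth's history:
# life very early, intelligence only at the very end.

def fraction_earth_like(T_life, T_intel, window=4.5, trials=100_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        t_life = rng.expovariate(1.0 / T_life)              # when life arises
        t_intel = t_life + rng.expovariate(1.0 / T_intel)   # when intelligence follows
        # "Earth-like": life within the first 0.5 billion years,
        # intelligence only in the last 0.5 billion years of the window.
        if t_life < 0.5 and window - 0.5 < t_intel < window:
            hits += 1
    return hits / trials

# Fast abiogenesis with slow-to-evolve intelligence (FL near one, FI
# near zero) versus the reverse combination.
print(fraction_earth_like(T_life=0.2, T_intel=10.0))
print(fraction_earth_like(T_life=10.0, T_intel=0.2))
```

Fast abiogenesis combined with slow intelligence reproduces Earth's timeline far more often than the reverse combination, which essentially never does; that is the qualitative shape of Dr. Kipping's result.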

I have to admit that I agree with Dr. Kipping. The more we learn about life at the biochemical level the more it seems to be something that will inevitably happen, at least once, on any planet where it can happen, and once it happens it spreads everywhere on that planet. Intelligence, however, is so complex, so dependent on the twists and turns of evolution, that mind may be the rarest thing in the Universe.

The philosopher Socrates advised us all to “Know Thyself”, the world would still be a better place if more of us followed his suggestion! (Credit: New Intrigue)

Maybe we should take a lesson from Dr. Kipping’s work. If intelligence is the rarest, most valuable thing in the Universe it might behoove us to use ours a little more often, to appreciate it a little more, to realize that it is all that really separates us from… just biochemistry.

Hurricanes are growing stronger because of Global Warming.

Hurricane season in the Atlantic doesn’t even start until the first of June but already we have had our first named tropical storm of the year. Over the past week Tropical Storm Arthur formed just south of the Florida Keys and then moved north paralleling the US east coast before brushing Cape Hatteras and finally turning east into the mid-Atlantic. It seems that Arthur is just a preview of what is expected to be a rather active hurricane season.

When viewed from space a hurricane can be a thing of beauty. They’re not so nice up close! (Credit: Houston Chronicle)

This year’s official hurricane forecast, published by the Colorado State University’s Tropical Meteorology Project calls for 16 named storms of which eight are expected to develop into hurricanes with four of those becoming major hurricanes, category 3 or higher. This prediction is about 33% higher than the average number of storms over the last thirty years but slightly below the actual number of Atlantic storms that occurred last year in 2019.

They’ve already got the names selected for this year’s tropical storms and hurricanes. And we’ve already had Arthur! (Credit: The Weather Channel)

And that’s only the Atlantic Ocean. The Pacific Ocean has already seen one massive typhoon that caused considerable damage to the Philippines, while in the Indian Ocean a large cyclone named Amphan has struck near the Indian city of Calcutta with winds of over 160 kph, causing major damage and the loss of close to 100 lives.

By the way, hurricanes, typhoons and cyclones are all the same general phenomenon. The only real difference is the ocean basin in which they form and cause their damage, although the details of where they typically form and where they usually go vary from basin to basin.

Hurricanes spin because of the Coriolis Effect and high pressure systems, nice weather, spin in the opposite direction of low pressure systems, storms! (Credit: InCarto)

Now a new study from the National Oceanic and Atmospheric Administration (NOAA) is providing more evidence for the hypothesis that hurricanes and other kinds of tropical storms are slowly getting stronger because of global warming. Not each individual storm, they can vary up and down considerably, but the average strength of all the storms each year is growing with time.

The National Oceanic and Atmospheric Administration (NOAA) operates a fleet of aircraft designed to study hurricanes and other kinds of weather. (Credit: Slate.com)

(I’d like to take a minute here to discuss the controversy over the terms Global Warming and Climate Change. Some deniers have even gone so far as to assert that the fact that scientists use two names is proof that it’s all just a hoax. Well I use both terms, but not interchangeably, and here is the reasoning behind my choice of which term I use in a given circumstance. Greenhouse gasses in our atmosphere are raising the temperature of the Earth’s surface, its oceans as well as its atmosphere; that direct effect of the greenhouse gasses I refer to as Global Warming. Indirect effects caused by that warming, such as stronger storms, droughts and floods, I refer to as Climate Change. Got it? The direct effects of greenhouse gasses are Global Warming, while the effects of Global Warming, that is the indirect effects of greenhouse gasses, I call Climate Change!)

Whether you call it Global Warming or Climate Change the pollution we are dumping into our atmosphere is melting the sea ice and generating stronger storms! (Credit: NASA)

The NOAA study was based on data obtained from every Atlantic hurricane over the past 40 years, including wind speeds, barometric pressures and storm sizes. Much of the data used in the study, storm size in particular, was obtained from satellite images. According to study lead author James Kossin of NOAA, “our results show that these storms have become stronger on global and regional levels, which is consistent with expectations of how hurricanes respond to a warming world.”

NOAA’s Dr. James Kossin, author of the report on the effect of global warming on hurricanes. (Credit: U-W Madison)

In fact the study details an 8% rise per decade in the likelihood that a storm will reach major hurricane strength. “In other words,” Kossin continued, “during its lifetime a hurricane is 8% more likely to be a major hurricane in this decade compared to the last decade.”
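It is worth pausing on what the study’s 8% per decade figure compounds to over time; a quick back-of-the-envelope calculation (just arithmetic on the headline number):

```python
# Compound an 8% rise per decade in the chance that a storm
# reaches major-hurricane strength.
per_decade = 1.08

for decades in (1, 2, 3, 4):
    growth = per_decade ** decades
    print(f"after {decades} decade(s): {100 * (growth - 1):.0f}% more likely")
```

Over the four decades covered by the data set, that compounds to a storm being roughly a third again more likely to become a major hurricane.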

It’s easy to understand how global warming could lead to stronger hurricanes. Warmer ocean water evaporates more quickly, pumping more moisture and heat into a storm’s winds. In essence warmth is energy, and by putting more energy into Earth’s surface global warming is also putting more energy into the storms in Earth’s atmosphere.

So it’s a fair bet that this year’s hurricane season will turn out to be a fraction above average, as will many of the years to come. Then as the decades go by the average will increase until what we now consider an active hurricane season becomes average. And of course when combined with sea level rise, another effect of global warming, those stronger storms can be expected to cause much more damage.

Hurricane season starts soon. Are you ready? (Credit: Daily Express)

Just one more way in which global warming is making our future look uncertain and bleak.

P.S. I barely managed to publish this post when I saw a weather report that tropical storm Bertha has formed off of the South Carolina coast. That’s two named storms and it’s still another four days BEFORE hurricane season ‘officially’ starts!

Acids, Bases and pH.

Arguably the two most well known classes of chemical compounds are acids and bases. Pretty much everybody knows that there is citric acid in our orange juice, carbonic acid in our soda, and of course the antacids we take for an upset stomach are actually mild bases. Stronger acids help power our batteries and strong bases like lye make strong soap. So it’s worth asking just what acids and bases are, and why it is that they seem to be similar in some ways but complete opposites in others.

A few common acids and bases. (Credit: Chemistry LibreTexts)

Simply put, many elements and compounds, when dissolved in water, will form either an acid or a base. When non-metals like sulfur or carbon are dissolved in water they create an acid, while when a metal like sodium is dissolved it will form a base. You have to be careful however, because you often have to first form an oxide of the element, say carbon dioxide, in order to dissolve it in water.

The Periodic Table of Elements showing the direction of increased acidity, less basic. (Credit: Periodic Table)

Since both acids and bases basically form when chemicals are dissolved in water, in order to understand them it is first necessary to understand a little bit about water at the molecular level. Everybody knows that a molecule of water has one oxygen atom and two hydrogen atoms, giving it the familiar chemical formula of H2O. All of those trillions of trillions of water molecules are moving about in the water, banging into each other, so that a few of the molecules get broken up into a hydroxide ion, an OH group with an extra electron, formally written as OH⁻, along with a hydrogen atom minus its electron, H⁺, really just a bare proton.

Water normally has a few ionized water molecules in it, this makes it partly acidic, partly basic with a pH of 7. (Credit: askIITians)

Now these OH⁻ and H⁺ ions don’t last for very long, they reform normal water molecules very quickly, but new ones are always being made, so at any time a small, constant fraction of the water molecules is split into ions. At room temperature, 25ºC, the concentration of free hydrogen ions in pure water is one ten-millionth of a mole per liter, which expressed as a power of ten is ten to the minus seventh power, 10⁻⁷, and that is why pure, neutral water is said to have a pH of 7! That is what the chemical quantity pH stands for, ‘Power of Hydrogen’ or ‘Potential of Hydrogen’: the pH is just the power of ten of the hydrogen ion concentration with the minus sign dropped.

The more hydrogen ions dissolved in water the more acidic, the lower the pH. The opposite is true of OH⁻ ions: the more OH⁻ the more basic, the higher the pH. (Credit: Quizlet)

If for some reason there are more free hydrogen ions, say a concentration of one millionth of a mole per liter, 10⁻⁶, then the pH goes down to 6. Or if something should cause the number of free hydrogen ions to go down, say to one hundred-millionth of a mole per liter, 10⁻⁸, then the pH goes up to 8. I know this sounds kind of backwards, but the more free hydrogen ions the lower the pH, the fewer the higher the pH.
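For readers who like to see the arithmetic, the rule can be written as a tiny Python function (a hypothetical helper for illustration, not from any chemistry package): pH is the negative base-ten logarithm of the hydrogen ion concentration in moles per liter.

```python
import math

def pH(h_ion_moles_per_liter):
    """pH = minus the base-10 logarithm of the H+ concentration."""
    return -math.log10(h_ion_moles_per_liter)

print(pH(1e-7))  # neutral water: pH 7
print(pH(1e-6))  # ten times more H+ ions: pH 6, more acidic
print(pH(1e-8))  # ten times fewer H+ ions: pH 8, more basic
```

Each step of one on the pH scale is a factor of ten in hydrogen ion concentration, which is why the scale runs ‘backwards’.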

By the way, the pH of even the purest water varies with temperature. Water at 0ºC has a higher pH, about 7.47, because at this colder temperature the water molecules are less energetic and don’t split up as often, so there are fewer free hydrogen ions. On the other hand, in boiling water at 100ºC the molecules are more energetic and split up more often, forming more free hydrogen ions, which gives the water a lower pH of around 6.14.

Now let’s see what happens when a chemical is dissolved in water, starting with the poisonous non-metallic gas chlorine. Without going into the quantum mechanics, suffice it to say that the arrangement of chlorine’s electrons in their orbitals is such that an atom of chlorine needs one more electron in order to fill its shells. When dissolved in water a chlorine atom will grab an electron, becoming a negatively charged chloride ion, Cl⁻, and leaving the water with extra free hydrogen ions, H⁺. The more chlorine the more free hydrogen ions, so chlorine lowers the pH, forming hydrochloric acid. It is the presence of the highly reactive chloride ions and free hydrogen ions that, depending on the concentration, causes hydrochloric acid to be so reactive, so dangerous.

Litmus paper is commonly used in chemistry to determine whether a solution is acidic, in which case the paper turns red, or basic, in which case the paper turns blue. (Credit: i RK Yadav)

Metals work in the opposite way. Metals would like to give up an electron in order to have completed electron shells, so when a metal, let’s say the dangerous metal sodium, is dissolved in water it gives its spare electron to a water molecule, becoming a positively charged sodium ion, Na⁺. Meanwhile the water molecule splits into an OH⁻ ion and a neutral hydrogen atom, not a free hydrogen ion. This reduces the number of free hydrogen ions, raising the pH. Again it is the presence of the highly reactive Na⁺ and OH⁻ ions that causes sodium hydroxide to be such a reactive, dangerous solution.

The fact is that acids and bases are both highly reactive substances, but in sort of opposite ways. So it’s not hard to guess that something interesting should happen when you mix an acid and a base in the proper proportions. Using our two examples above, what happens when you mix hydrochloric acid and sodium hydroxide is that the Cl⁻ ions and the Na⁺ ions combine to form ordinary table salt, NaCl, while the remaining OH⁻ ions and hydrogen all just recombine into water. This is the typical result: combining an acid and a base in equal quantities produces a salt and water, plus a lot of energy depending on the concentration.
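The bookkeeping of a neutralization can be sketched numerically. The function below is a hypothetical illustration for strong acids and bases only (which dissociate completely); it balances moles of H⁺ against moles of OH⁻ and reports the pH of the mixture at 25ºC.

```python
import math

def mix_pH(acid_molarity, acid_liters, base_molarity, base_liters):
    """pH after mixing a strong acid (e.g. HCl) with a strong base (e.g. NaOH)."""
    h_moles = acid_molarity * acid_liters     # H+ supplied by the acid
    oh_moles = base_molarity * base_liters    # OH- supplied by the base
    volume = acid_liters + base_liters
    net = h_moles - oh_moles
    if net > 0:                               # leftover acid
        return -math.log10(net / volume)
    if net < 0:                               # leftover base
        return 14.0 - (-math.log10(-net / volume))
    return 7.0                                # exact neutralization: salt + water

print(mix_pH(0.1, 1.0, 0.1, 1.0))  # equal amounts: pH 7, just salt water
print(mix_pH(0.1, 1.0, 0.1, 0.5))  # excess acid: pH well below 7
```

Mix the two in exactly equal amounts and all the reactive ions pair off, leaving nothing but salt water at pH 7.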

When acids and bases are mixed in the proper proportion all that’s left is water and a salt. The dangerous chemicals hydrochloric acid and sodium hydroxide form ordinary table salt! (Credit: IMU Home Learning)

Because they are so reactive, acids and bases play a crucial role in a huge variety of chemical processes. In fact life itself would be impossible without both acids and bases, the A in DNA stands for acid after all.

Acids and bases are everywhere, in our industry, our food even inside us. It’s worth taking some time in order to know a little bit about them.

Social Distancing, Herd Immunity, and R-naught, just a few of the concepts developed by the science of Epidemiology.

With the Covid-19 virus continuing to spread, causing an ever growing number of illnesses and deaths across our planet, the science of epidemiology has gone from being a little known branch of medicine to arguably the most vital topic in the world. Literally ‘the study of what is upon the people’, epidemiology was once the most successful branch of medicine, helping to eliminate such deadly diseases as cholera, typhus and yellow fever. Indeed the doctors and scientists who developed epidemiology succeeded in controlling many infectious diseases without any kind of a cure, in some cases without having the slightest idea what was causing the illness.

It’s all Greek to me! (Credit: Pinterest)

The ancient Greeks recognized that while some diseases could spread from person to person throughout a population, other illnesses like epilepsy or cancer were not infectious. It wasn’t until 1546 however that an Italian doctor named Girolamo Fracastoro speculated that diseases could be spread by living particles too small to be seen that floated through the air. The invention of the microscope and the discovery that there actually were microscopic living creatures lent considerable weight to Fracastoro’s theory.

Fracastoro and a few other early researchers into the germ theory of disease. (Credit: Open Textbooks)

About a hundred years later, in 1662, a part-time mathematician named John Graunt, whose day job was haberdasher, performed a statistical analysis of the mortality rolls of the city of London, including its periodic plague years. Graunt’s work provided much evidence supporting some theories about the spread of infection while at the same time disproving others, and it established the use of mathematics in the study of diseases.

During the 16th and 17th centuries the city of London had so many plagues that the one of 1665-1666 is known as ‘The Great Plague’. (Credit: The Lost City of London)

Another Londoner, John Snow, became known as the father of modern epidemiology thanks to his work in 1854 discovering the cause of the cholera outbreaks that struck the Soho section of London every few years. By simply marking the home addresses of cholera victims on a street map of London, see map below, Snow correctly concluded that the source of the infection was a water pump located on Broad Street. By disinfecting the water with chlorine and removing the pump’s handle Snow succeeded in ending the outbreak.

John Snow and his map of the distribution of cholera in London. (Credit: The Vintage News)

Another early pioneer was the Hungarian doctor Ignaz Semmelweis, who dramatically reduced the maternal mortality rate at his Viennese hospital by insisting on rules that promoted cleanliness, in particular that doctors wash their hands. Then in the first decade of the 20th century Walter Reed achieved great success in fighting yellow fever in Cuba, not by curing patients who had contracted the deadly disease but by eradicating the mosquitoes that carried it from person to person.

Comic book describing how Walter Reed discovered it was mosquitoes that transmitted yellow fever. Yes they used to print comic books about real superheros! (Credit: news.hsl.virginia.edu)

You get the point: the purpose of epidemiology is not to treat the sick but instead to stop the spread of a disease in order to keep other people from becoming sick! That means that oftentimes great advances in epidemiology are made by mathematicians rather than physicians. It has also allowed epidemiology to become the technique used to study societal ills such as obesity, deaths caused by smoking and even gun violence.

The science of Epidemiology being used to study homicides in the city of Detroit. (Credit: Alex B. Hill)

Right now of course the lessons learned from epidemiology are the only weapons we have with which to fight the viral disease Covid-19. Until we have either a vaccine or some really effective anti-viral drug all that each of us can do to protect ourselves is to practice the guidelines developed by epidemiology.

With that in mind it would be a good idea for all of us to understand some of the technical concepts that epidemiologists use to understand how a disease spreads and how we can reduce and control that spread. Probably the most important factor in determining, and controlling, the spread of a disease is known as its Basic Reproduction Number, oftentimes referred to as R-naught or just R0.

Simply put, for each person who becomes infected with a disease, R-naught is the average number of healthy people they will in turn infect. In other words, if you catch a cold and become infectious, R-naught is the number of family members, co-workers or just people you come into contact with who will catch the cold from you. This also means that if R0 for a disease is greater than one, then the number of people infected is going to grow. For example if R0 for a disease is two then one person will infect two people, those two will go on to infect four, the four will infect eight, and so on until almost everyone has, or has had, the disease.
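The doubling described above is easy to see in a few lines of Python (a hypothetical sketch that treats spread in simple ‘generations’ of infection):

```python
# Each infected person passes the disease to R0 others per generation.
def generations_of_spread(r0, generations, initial=1):
    infected = initial
    totals = []
    for _ in range(generations):
        infected *= r0
        totals.append(infected)
    return totals

print(generations_of_spread(2, 5))    # R0 = 2: 2, 4, 8, 16, 32 ...
print(generations_of_spread(0.8, 5))  # R0 below one: the outbreak fizzles
```

The same code shows why pushing R0 below one matters so much: the chain of infection then shrinks with every generation instead of growing.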

A small change in R-naught, say from 2 to 3, can make a huge difference in the number of infected people in a very short period of time. (Credit: University of Scranton)

Under normal conditions in human society there are many diseases that have an R0 much greater than one.  The table below shows the estimated R0 numbers for some well-known diseases.

Table of R-naught for several well known diseases. (Credit: Wikipedia)

Obviously the goal of epidemiology is to find methods and procedures that a community can take that will reduce R-naught for a disease below one. Perhaps the simplest technique is called ‘Social Distancing’ and it just means having everyone in a community reduce the amount of contact that they have with everyone else. No shaking hands when you meet someone, no hugs for friends you haven’t seen in years, also no parties and no big crowds at sports events or concerts. Social distancing works because less contact between people makes it less likely that a germ will pass between them.

Some of the rules of Social Distancing. (Credit: Orange County N. C.)

Looking back at the table you can see how many diseases spread through particles or droplets in the air. Those particles can only travel through the air for about three or four meters, so if everyone stayed more than four meters apart those diseases could not spread; R0 would drop very close to zero.

Of course such extreme social distancing is not really possible, we live in families and the jobs of many people are so essential that society cannot get along without them. We live in a society and that society requires a certain amount of contact between its members. That’s why other procedures, such as washing hands, disinfecting everything other people touch, and wearing face masks become so important. In fact anything that we can do to reduce R-naught is important, it is at present the only way we have to fight Covid-19. 

Now for many viral diseases those people who are infected and recover acquire a degree of immunity to being re-infected. In such cases, once a majority of the population has been infected the spread of the disease is inhibited because there are now fewer victims left to infect. Not only that, but the people who have become immune actually get in the disease’s way, standing between those who are infectious and those who have not yet been infected, effectively generating a macabre form of social distancing. This acquired immunity of the majority of a population is known as ‘Herd Immunity’.

Herd immunity should be considered the last resort in fighting a disease however, because it results in the maximum number of deaths and hospitalizations of sick people. Basically getting to herd immunity means not fighting a disease at all, just letting people get infected.
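A standard epidemiological rule of thumb says herd immunity kicks in once a fraction 1 - 1/R0 of the population is immune, because each case then infects, on average, fewer than one susceptible person. A quick sketch (the R0 values below are rough literature estimates for illustration):

```python
# Herd immunity threshold: the immune fraction of the population at
# which each case infects fewer than one susceptible person on average.
def herd_immunity_threshold(r0):
    return 1.0 - 1.0 / r0

rough_r0 = {"measles": 15.0, "polio": 6.0, "seasonal flu": 1.3}
for disease, r0 in rough_r0.items():
    print(f"{disease}: about {100 * herd_immunity_threshold(r0):.0f}% must be immune")
```

The more contagious the disease, the higher the bar, which is one reason reaching herd immunity ‘the hard way’, by infection, is so costly.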

Herd Immunity without a vaccine, top. With a few people getting a vaccine, middle and with a large majority getting a vaccine. Which do you prefer? (Credit: Wikipedia)

Surprisingly there are many people who believe that herd immunity is the best solution to Covid-19. Indeed the entire nation of Sweden has decided to forego most social distancing measures and just let the disease burn itself out.

One last point: when and if a vaccine is developed that is effective against Covid-19, it will grant immunity to people who have not yet been infected by the disease. In epidemiological terms a vaccine therefore works by getting a population to herd immunity without people dying or being admitted to a hospital, without them getting sick at all. Something I’m certain we are all looking forward to!

Medical researchers are making great strides in the development of Induced Pluripotent Stem Cells (iPS Cells). Will they soon be able to use them to repair or even replace diseased organs in our bodies?

Every human being, indeed every animal, begins its life as a fertilized egg cell that divides and grows into many cells. As more and more cells are generated they begin to differentiate into certain types of cells: heart cells, stomach cells, muscle cells, brain cells, over 200 kinds of specialized cells making up every organ in the body. Those early cells, the ones generated before specialization into organ cells sets in, are given the name embryonic stem cells, or sometimes just stem cells.

Male sperm cells surround a female egg cell trying to get inside. Once one of them succeeds the egg will be fertilized and will develop into a fetus. (Credit: Pinterest)
After fertilization the egg cell begins to divide to form a blastocyst. At this stage the cells are all embryonic stem cells. (Credit: Assisted Fertility Program)

Research into the properties of these undifferentiated stem cells began back in the 1960s at the University of Toronto with biologists Ernest McCulloch and James Till. However it wasn’t until 1981 that British biologists Martin Evans and Matthew Kaufman succeeded in isolating and culturing embryonic stem cells from mice. This advance enabled researchers to begin experimenting with stem cells, altering or deleting some of their genes in order to investigate the processes that turn them into specialized cells.

Stem cell pioneers Ernest McCulloch (l) and James Till (r). (Credit: University of Toronto Magazine)

Since stem cells are capable of becoming any type of cell in the body, a property technically referred to as pluripotency, the possibility that they could be used to help repair, perhaps even replace, damaged organs has been the driving force in stem cell research. The adult body has few stem cells remaining however, mainly in the bone marrow and gonads, and those stem cells are only capable of turning into a few types of cells, either blood cells or sex cells.

This is the reason why stem cell researchers were so anxious to obtain embryonic stem cells, in order to understand the processes that change a stem cell into a particular type of body cell. From the 1980s through the early 2000s many biologists conducted an enormous amount of work using embryonic stem cells obtained from animal, primarily mouse, embryos. Unfortunately human embryonic stem cells could only be obtained from human embryos or fetal tissue, sources that brought with them a tremendous amount of controversy. Because of stem cell research’s association with the practice of abortion even scientists who worked with animal stem cells had difficulties in obtaining funding, and the entire field of stem cell research in the U.S. suffered as a result.

A human embryo at four weeks after fertilization, a time when many abortions are performed. At this stage there are millions of embryonic stem cells remaining. (Credit: Abort73.com)

At the same time the researchers all knew that in order to really fulfill the promise of stem cells it was going to be necessary to find a method to reverse the process, to take differentiated body cells, say blood cells or muscle cells, and turn them back into embryonic stem cells. After all, think about it: if you had a heart problem and doctors tried to use the stem cells from an aborted fetus to repair your heart, wouldn’t your immune system reject those stem cells just as it would try to reject a heart transplant? But if your own adult cells could be turned back into stem cells, and those stem cells then used to repair the diseased heart tissue, there would be no problem of rejection.

The breakthrough came in 2006 when a Japanese team led by Shinya Yamanaka succeeded in converting adult fibroblast cells into pluripotent stem cells by modifying only four genes. These converted cells were given the name Induced Pluripotent Stem Cells, or iPS Cells, and Shinya Yamanaka was awarded the 2012 Nobel Prize in Medicine for his achievement.

Discoverer of iPS cells Shinya Yamanaka at work in his laboratory. (Credit: UCSF)

With the development of iPS cells biologists could now take the adult cells of any individual, convert them into stem cells and culture them into as many stem cells as needed. The focus of stem cell research now shifted from the study of stem cells themselves to learning how to use stem cells to help patients with damaged or diseased organs, a field of research that has become known as ‘regenerative medicine’.

Converting adult Fibroblast cells back into stem cells (iPS Cells) allows many different kinds of cells to be regenerated in the lab. (Credit: R&D Systems)

At present there are several distinct lines of ongoing research. The ‘Holy Grail’ of regenerative medicine would be the ‘manufacture’ of entire organs that could replace damaged ones. For example, for a patient suffering from a diseased kidney, instead of getting a kidney transplant from a donor, which would carry with it the problem of organ rejection, cells from the patient’s own body would be converted into iPS cells. Those iPS cells would then be induced to generate a brand new kidney, that patient’s kidney since their cells were used. That new kidney could then be transplanted into the patient’s body without any fear of rejection.

The promise of Regenerative Medicine, using stem cells to grow brand new organs to replace damaged or worn out ones! (Credit: DL3 Spa Services)

Working towards that long range goal, biologists have been moving forward with the idea of repairing rather than replacing damaged organs. In an ongoing study at Osaka University in Japan, conducted by Professor Yoshiki Sawa, blood cells were taken from test animals and converted into iPS cells. The iPS cells were then induced into becoming heart muscle cells, which were grown into a sheet of heart muscle tissue that beat, just like a normal heart. The sheet of heart muscle was then surgically placed onto the test animal’s heart, strengthening it and increasing heart function.

Sheet of heart muscle tissue manufactured from iPS Cells. (Credit: NHK)

Over a hundred such experimental surgeries were performed on animals in order to refine the technique and make certain that everything possible was done to maintain safety before any human trials were attempted. It wasn’t until the 27th of January of 2020 that the first surgery was performed to graft a 4 cm circular section of manufactured heart tissue onto a damaged area of a human patient’s heart. That patient is recovering and being constantly monitored to determine how much improvement in heart function the new heart tissue is providing, and for how long. Nevertheless this clinical trial gives a little glimpse into the potential of iPS Cells.

Heart surgery performed for first time on 27 January 2020. Sheet of heart muscle tissue employed to strengthen patient’s weakened heart. (Credit: www.asahi.com)

Another possible use of iPS cells would be to greatly increase the supply of blood available for operations and other medical procedures. Blood banks are chronically short of blood, so the possibility that iPS cells could be grown in large quantities and then turned into blood cells is very attractive.

The use of iPS cells is not without its problems however. First of all, at present the efficiency of converting adult cells into iPS cells is less than 1%, making the process both slow and expensive. Another major difficulty is the tendency of iPS cells to form cancerous tumors, a danger that has severely limited the number of human experiments using iPS cells.

One serious problem with iPS Cells is that they can lead to the formation of cancerous tumors. (Credit: Irish Times)

Despite these difficulties, advances in the use of iPS cells in the field of regenerative medicine are accelerating. Who knows what new medical procedures will be developed in the next 10 to 20 years using iPS cells?

Paleontology News for May 2020. What’s there to do when you’re ordered to stay at home during a pandemic? Why study dinosaurs of course!

We tend to think of paleontologists as working out in the field, digging around in some barren, rocky terrain unearthing the remains of long extinct forms of life. That’s partly true of course, after all you have to find some fossils before you can study them. And most paleontologists do prefer being on site where the discoveries are made, never knowing what they’ll see in the very next rock they turn over.

Although it is often hard, dirty, sweaty work take it from me fossil hunting is the pure joy of discovery. (Credit: CBS Denver)

Still, a lot of the work in studying ancient life can only be accomplished back in the lab or in the office. Cleaning fossils, examining fossils, comparing them to similar fossils and of course, writing up the papers that will tell your colleagues, and interested laymen like me, what you’ve found. A lot of that work can safely be accomplished even during the ‘social distancing’ needed to stop the spread of Covid-19. So let’s take a look at some of the work that’s being accomplished by paleontologists even in the shadow of a deadly disease.

Cleaning fossils has to be done in the lab where you can take your time and do a meticulous thorough job. (Credit: Wikimedia Commons)

Spinosaurus aegyptiacus is one of the most intriguing dinosaur species known to science. Originally discovered in Egypt back in 1912, Spinosaurus is a large predatory dinosaur belonging to the group known as theropods, the group that includes the mighty T rex and Allosaurus along with the smaller raptors. Spinosaurus lived during the middle to late Cretaceous period (112 to 93 million years ago) and had one distinguishing feature that set it apart from its relatives: a broad, sail like flap of skin along its back that was held up by spines coming off of the animal’s vertebrae. See image below. Large, floppy skin features like Spinosaurus’ sail are usually for thermal regulation or display, or both.

Artist’s impression of a Spinosaurus with a human figure to give scale. (Credit: New York Times)

The loss of the only known skeleton of Spinosaurus during World War II brought all research into the creature to a halt, and Spinosaurus was almost forgotten by science. Then in the 1990s further fossils, belonging to another species of Spinosaurus, S. maroccanus, were discovered in Morocco by a National Geographic team led by Doctor Nizar Ibrahim of the University of Detroit Mercy along with Professor Paul Sereno of the University of Chicago. Exploring a layer of rock named the Kem Kem group, which is exposed across a wide area of Morocco, the team has unearthed fossils of many different species, including specimens of Spinosaurus that have allowed paleontologists to resume the study of this odd dinosaur.

University of Chicago paleontologist Paul Sereno with a skeleton of Spinosaurus. (Credit: The Telegraph)

Actually there is a lot of disagreement over whether S. maroccanus is a second species. With the original S. aegyptiacus destroyed it is impossible to make a direct comparison, and the drawings that remain of the bones of S. aegyptiacus are insufficient to determine with certainty just how different the new specimens are.

The new specimens have re-ignited several debates about the nature of Spinosaurus, including whether or not the predator was actually larger than the famous T. rex and whether or not Spinosaurus was at least semi-aquatic, spending a large fraction of its life in the water. Based on the examination of the fossils discovered during the 1990s, the full length of Spinosaurus was between 12.5 and 18 meters while the animal’s weight was between 6.5 and 7.5 tonnes. If these estimates are accurate, that would in fact make Spinosaurus a fraction larger than the venerable T. rex.

As to the question of Spinosaurus being semi-aquatic, the dinosaur’s long, narrow, crocodile-like snout along with its short, powerful legs do indicate a lifestyle similar to that of… well, crocodiles. Add in the fact that the fossils of Spinosaurus were discovered in the same rock beds that yielded numerous specimens of an ancient and extinct sawfish named Onchopristis, and it seems clear that Spinosaurus lived in an environment that was as much water as land, such as a swampy river delta.

The extinct fish Onchopristis. Measuring eight meters in maximum length, this creature was a monster itself! (Credit: Prehistoric Life Wiki)
Artist’s impression of the sort of environment in which Spinosaurus lived. (Credit: BBC)

Now perhaps the crucial piece of evidence has been unearthed, as bones from the tail of Spinosaurus have recently been discovered. Based on those bones, the tail of Spinosaurus was long, flexible and fin-like, well suited to providing propulsion in the water. This latest discovery pretty much clinches the hypothesis that Spinosaurus is the first type of dinosaur known to have evolved into a swimming creature.

Tail bones tell the story. The tail of Spinosaurus was big and powerful, perfect for propulsion underwater! (Credit: Sci-news.com)

These new discoveries make Spinosaurus an example of just how varied and diverse the group we call dinosaurs really was, and the research published by Ibrahim and Sereno provides an example of how scientists can continue their work even during a pandemic.