Extraterrestrial Life and Extraterrestrial Intelligence: how likely are they, and what are the chances that we may soon discover one or the other?

Certainly one of the biggest questions that anyone can ask is, is there life out there? Are there other planets that have life, or even intelligent life, living on them? At the present time we really have no idea; our exploration of the Universe has only just begun. We have landed robotic probes on only a very few celestial bodies, and even on those we have seen so little that some form of life could be hiding from us! Still, as the famed science fiction author Arthur C. Clarke once observed, the question of whether we are alone in the Universe can have only two answers, and either one is awe inspiring.

Thanks to Steven Spielberg this is most people’s idea of an Extraterrestrial. (Credit: Dred Central)
Unless that is you prefer this one! (Credit: Paramount)

Many would say that the Universe is so large, and there are so many places where life could exist and evolve into intelligence, that surely there must be some life out there. That position, however reasonable, isn’t evidence. So the study of extraterrestrial life remains a science without a subject, a science of conjecture and hypothesis rather than solid fact.

Every little dot in this image is an entire galaxy with billions of stars. In such a huge Universe how can we possibly be alone? (Credit: NASA)

When I was an undergraduate all of that conjecture was summed up in ‘Drake’s Equation’, named for Frank Drake, the U.S. astronomer who first explicitly wrote down all of the factors in one equation. Using Drake’s equation it is possible to calculate the number of intelligent species in a galaxy, assuming you have accurate numbers for all of the factors in the equation.

                                I = N × FP × FH × FL × FI                                (Equation 1)

In this equation I is the number of intelligent species in a galaxy, say our own Milky Way. You calculate I by multiplying the factors on the right hand side.

N is the number of stars in that galaxy, about 200 billion for the Milky Way.

FP is the fraction of those stars that have planets orbiting them, so FP must have a value between zero and one.

FH is the fraction of planets that orbit in a ‘habitable zone’ around their star; I’ll explain what that means below. Again, FH is somewhere between zero and one.

FL is the fraction of habitable planets where life actually arises. Again, zero to one.

FI is the fraction of planets with life on them where intelligence evolves. Zero to one.

Back when I was in college the only factor on the right hand side of Drake’s equation that astronomers had any accurate measurement for was N, the number of stars in the Milky Way. Every other factor was totally unknown so any attempt to actually use the Drake equation was just pure guesswork.
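Just to make the arithmetic concrete, here is a minimal sketch of that multiplication in Python. The numbers plugged in are purely illustrative guesses of my own, not measured values, and the equation itself is the simplified version used in this post.

```python
# A minimal sketch of the simplified Drake-style equation used above:
# I = N * FP * FH * FL * FI.  The fractions below are illustrative
# guesses only, not measured values.

def intelligent_species(N, FP, FH, FL, FI):
    """Estimate the number of intelligent species in a galaxy."""
    return N * FP * FH * FL * FI

# Example: 200 billion stars, half with planets, 1 in 10 in a habitable
# zone, life almost always arises (FL ~ 1), intelligence very rarely.
estimate = intelligent_species(N=200e9, FP=0.5, FH=0.1, FL=1.0, FI=1e-9)
print(f"Estimated intelligent species: {estimate:.0f}")  # -> 10
```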

Our Milky Way itself contains 200 Billion stars, any one of which could have planets with life on them! (Credit: Forbes)

We’ve made some progress since then. In particular thanks to the discoveries made by the Kepler space telescope and other astronomical programs we now know of the existence of thousands of planets outside of our solar system. Because of these discoveries we can now say with reasonable confidence that at least half of all stars must have planets orbiting them, perhaps 90% or even more. So if even half of the Milky Way’s 200 billion stars have planets, then there are an awful lot of planets out there.

Thanks to the Kepler Space Telescope we know of the existence of thousands of planets outside of our solar system. (Credit: Vox)

We’ve also made some progress with FH, the fraction of planets that could be habitable for life. Thirty to forty years ago ‘habitable’ would have meant liquid water on the planet’s surface, which in our solar system meant only Earth, one out of eight planets. However our space probes have discovered that Mars once had oceans and may still have water beneath its surface, while data from missions to the outer planets have raised the possibility that Europa and Enceladus, moons of Jupiter and Saturn respectively, have large oceans of liquid water beneath their icy surfaces. That means that our solar system might actually have at least four habitable bodies, not just the Earth. So it appears that FH might actually be larger than we thought just a few decades ago.

Both Jupiter’s Moon Europa (L) and Saturn’s Moon Enceladus (R) are believed to have oceans of water beneath their icy surfaces. This means that more planets than we thought might actually be ‘habitable’. (Credit: NASA)

That leaves us with just the last two factors: FL, the fraction of planets with a habitable environment that possess life, and FI, the fraction of planets with life where intelligence evolves. The only way to get an accurate measurement of these two numbers would be to closely study a few hundred or more habitable planets or moons and simply see how many have developed life and how many go on to evolve intelligence.

The evidence from geology is that it didn’t take long for Earth’s Primordial Soup to evolve into living things. (Credit: Scoopnest)

We can’t do that however; it will probably take decades for our space technology just to find life on Mars or Europa if it’s there. The only real example we have to study is Earth. Can we learn anything about FL and FI from studying the history of life here on our own planet?

A new study says that we can. Authored by David Kipping of Columbia University’s Department of Astronomy, “An objective Bayesian analysis of life’s early start and our late arrival” uses probability mathematics to calculate the values of FL and FI that best reproduce life’s history here on Earth.

Bayesian analysis is a mathematical technique for studying complex problems with a large number of parameters. Heavy on calculations it’s often performed by computers. (Credit: Mondo 2000)

You see we know that our planet is about 4.5 billion years old and there is growing evidence that life was well established here as far back as 4 billion years ago. Indeed it looks as though life began on Earth as soon as its surface had cooled enough for life to exist. On the other hand complex, multi-cellular life took 4 billion years to evolve and even then intelligence took another half billion years.

Life may have existed early in Earth’s history but it took a very long time to evolve into complex multi-cellular forms. (Credit: Expii)

So what Doctor Kipping did was to develop a computer program that would vary FL and FI across all of their possible values and see which values succeeded in reproducing life’s history here on Earth. The result that Dr. Kipping obtained is that while life itself could be quite common in the Universe, intelligence is very rare. Mathematically what he found was that FL is close to one but FI is very, very close to zero. Thousands of planets may have life on them for every one that possesses an intelligent species.
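To give a feel for the general approach, and only the general approach, this is emphatically not Dr. Kipping’s actual model, here is a toy rejection-sampling sketch: guess a typical waiting time for life to appear and for intelligence to follow, keep only the guesses that reproduce Earth’s rough timeline (life almost immediately, intelligence only now), and look at the guesses that survive.

```python
import random
import statistics

# A toy rejection-sampling sketch of the *idea* behind a Bayesian analysis
# of life's history -- NOT Dr. Kipping's actual model.

random.seed(42)
EARTH_AGE = 4.5    # billion years
LIFE_BY = 0.5      # life was established within ~0.5 Gyr of formation
INTEL_AFTER = 4.0  # intelligence appeared only in the last ~0.5 Gyr

kept_life, kept_intel = [], []
for _ in range(200_000):
    # Log-uniform priors on the mean waiting times, 0.01 to 100 Gyr.
    m_life = 10 ** random.uniform(-2, 2)
    m_intel = 10 ** random.uniform(-2, 2)
    t_life = random.expovariate(1 / m_life)             # when life appears
    t_intel = t_life + random.expovariate(1 / m_intel)  # when intelligence appears
    if t_life < LIFE_BY and INTEL_AFTER < t_intel < EARTH_AGE:
        kept_life.append(m_life)
        kept_intel.append(m_intel)

print(f"{len(kept_life)} of 200,000 guesses match Earth's history")
print(f"typical waiting time for life:         ~{statistics.median(kept_life):.2f} Gyr")
print(f"typical waiting time for intelligence: ~{statistics.median(kept_intel):.1f} Gyr")
# The surviving guesses lean toward fast abiogenesis (life is easy) and a
# long wait for intelligence -- a rough echo of the paper's conclusion.
```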

I have to admit that I agree with Dr. Kipping. The more we learn about life at the biochemical level the more it seems to be something that will inevitably happen at least once on any planet that it can happen on, and once it happens it spreads everywhere on that planet. However intelligence is so complex, so dependent on the twists and turns of evolution that intellect, mind may be the rarest thing in the Universe.

The philosopher Socrates advised us all to “Know Thyself”; the world would be a better place if more of us followed his suggestion! (Credit: New Intrigue)

Maybe we should take a lesson from Dr. Kipping’s work. If intelligence is the rarest, most valuable thing in the Universe it might behoove us to use ours a little more often, to appreciate it a little more, to realize that it is all that really separates us from… just biochemistry.

Hurricanes are growing stronger because of Global Warming.

Hurricane season in the Atlantic doesn’t even start until the first of June but already we have had our first named tropical storm of the year. Over the past week Tropical Storm Arthur formed just south of the Florida Keys and then moved north paralleling the US east coast before brushing Cape Hatteras and finally turning east into the mid-Atlantic. It seems that Arthur is just a preview of what is expected to be a rather active hurricane season.

When viewed from space a hurricane can be a thing of beauty. They’re not so nice up close! (Credit: Houston Chronicle)

This year’s official hurricane forecast, published by the Colorado State University’s Tropical Meteorology Project calls for 16 named storms of which eight are expected to develop into hurricanes with four of those becoming major hurricanes, category 3 or higher. This prediction is about 33% higher than the average number of storms over the last thirty years but slightly below the actual number of Atlantic storms that occurred last year in 2019.

They’ve already got the names selected for this year’s tropical storms and hurricanes. And we’ve already had Arthur! (Credit: The Weather Channel)

And that’s only the Atlantic Ocean. The Pacific Ocean has already seen one massive Typhoon that caused considerable damage to the Philippines while in the Indian Ocean a large cyclone named Amphan has struck near the Indian city of Calcutta with winds of over 160 kph causing major damage and the loss of close to 100 lives.

By the way, hurricanes, typhoons and cyclones are all the same general phenomenon; the only real difference is the ocean in which they form and do their damage. Storms in the Atlantic and eastern Pacific are called hurricanes, those in the western Pacific typhoons and those in the Indian Ocean cyclones.

Hurricanes spin because of the Coriolis Effect and high pressure systems, nice weather, spin in the opposite direction of low pressure systems, storms! (Credit: InCarto)

Now a new study from the National Oceanic and Atmospheric Administration (NOAA) is providing more evidence for the hypothesis that hurricanes and the other kinds of tropical storms are slowly getting stronger because of global warming. Not each individual storm, they can vary up and down considerably, but the average strength of all the storms each year is growing with time.

The National Oceanic and Atmospheric Administration (NOAA) operates a fleet of aircraft designed to study hurricanes and other kinds of weather. (Credit: Slate.com)

(I’d like to take a minute here to discuss the controversy over the terms Global Warming and Climate Change. Some deniers have even gone so far as to assert that the fact that scientists use two names is proof that it’s all just a hoax. Well, I use both terms, but not interchangeably, and here is my reasoning. Greenhouse gasses in our atmosphere are raising the temperature of the Earth’s surface, its oceans and its atmosphere; that direct effect of the greenhouse gasses is what I call Global Warming. Indirect effects caused by that warming, such as stronger storms, droughts and floods, are what I call Climate Change. In short: the direct effects of greenhouse gasses are Global Warming, while the effects of Global Warming, the indirect effects of greenhouse gasses, I call Climate Change!)

Whether you call it Global Warming or Climate Change the pollution we are dumping into our atmosphere is melting the sea ice and generating stronger storms! (Credit: NASA)

The NOAA study was based on data obtained from every Atlantic hurricane over the past 40 years, including wind speeds, barometric pressures and storm sizes. Much of the data used in the study, storm size in particular, was obtained from satellite images. According to study lead author James Kossin of NOAA, “our results show that these storms have become stronger on global and regional levels, which is consistent with expectations of how hurricanes respond to a warming world.”

NOAA’s Dr. James Kossin, author of the report on the effect of global warming on hurricanes. (Credit: U-W Madison)

In fact the study details an 8% rise in average storm strength with each decade that has passed. “In other words,” Kossin continued, “during its lifetime a hurricane is 8% more likely to be a major hurricane in this decade compared to the last decade.”
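As a quick back-of-the-envelope check of my own, not a figure from the study, an 8% increase per decade compounds to roughly a 36% increase over the forty years of data:

```python
# Back-of-the-envelope: how an 8% rise per decade compounds over the
# 40 years covered by the study (my arithmetic, not a number from the paper).
per_decade = 1.08
decades = 4
growth = per_decade ** decades
print(f"After {decades} decades: {(growth - 1) * 100:.0f}% higher")  # ~36% higher
```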

It’s easy to understand how global warming could lead to stronger hurricanes. Warmer ocean water evaporates more quickly, putting more moisture, and therefore more energy, into a storm’s winds. In essence warmth is energy, and by putting more energy into Earth’s surface global warming is also putting more energy into the storms in Earth’s atmosphere.

So it’s a fair bet that this year’s hurricane season will turn out to be a fraction above average, as will many of the years to come. Then as the decades go by the average will increase until what we now consider an active hurricane season becomes average. And of course when combined with sea level rise, another effect of global warming, those stronger storms can be expected to cause much more damage.

Hurricane season starts soon. Are you ready? (Credit: Daily Express)

Just one more way in which global warming is making our future look uncertain and bleak.

P.S. I had barely managed to publish this post when I saw a weather report that tropical storm Bertha had formed off the South Carolina coast. That’s two named storms and it’s still another four days BEFORE hurricane season ‘officially’ starts!

Acids, Bases and pH.

Arguably the two most well known classes of chemical compounds are acids and bases. Pretty much everybody knows that there is citric acid in our orange juice, carbonic acid in our soda, and of course the antacids we take for an upset stomach are actually mild bases. Stronger acids help power our batteries and strong bases like lye make strong soap. So it’s worth asking just what acids and bases are and why it is that they seem to be similar in some ways but complete opposites in others.

A few common acids and bases. (Credit: Chemistry LibreTexts)

Broadly speaking, when an element or compound is dissolved in water it will form either an acid or a base. Non-metals like sulfur or carbon tend to create acids, while metals like sodium form bases. You have to be careful however, because you often have to first form an oxide of the element, say carbon dioxide, in order to dissolve it in water.

The Periodic Table of Elements showing the direction of increased acidity, less basic. (Credit: Periodic Table)

Since both acids and bases form when chemicals are dissolved in water, in order to understand them it is first necessary to understand a little bit about water at the molecular level. Everybody knows that a molecule of water has one oxygen atom and two hydrogen atoms, giving it the familiar chemical formula H2O. All of those trillions of billions of water molecules are moving about, banging into each other, so that a few of the molecules get broken apart into a hydroxide ion, an OH group with an extra electron, formally written as OH⁻, along with a hydrogen atom minus its electron, H⁺, really just a bare proton.

Water normally has a few ionized water molecules in it, this makes it partly acidic, partly basic with a pH of 7. (Credit: askIITians)

Now these OH⁻ and H⁺ ions don’t last very long; they recombine into normal water molecules very quickly, but new ones are constantly being made, so at any given time a small, fixed proportion of the water is split into ions. At room temperature, 25ºC, the concentration of free hydrogen ions in pure water is 10⁻⁷ moles per litre, ten to the minus seventh power, and that is why pure, neutral water is said to have a pH of 7! That is what pH stands for, ‘power of hydrogen’ or ‘potential of hydrogen’: the pH is just the power of ten in the hydrogen ion concentration, with the minus sign dropped.

The more hydrogen ions dissolved in water the more acidic the solution and the lower the pH. The opposite is true of OH⁻ ions: the more OH⁻ the more basic the solution and the higher the pH. (Credit: Quizlet)

If for some reason there are more free hydrogen ions, say a concentration of 10⁻⁶ moles per litre, then the pH goes down to 6. If something causes the number of free hydrogen ions to drop, say to a concentration of 10⁻⁸ moles per litre, then the pH goes up to 8. I know this sounds kind of backwards, but the more free hydrogen ions the lower the pH, and the fewer the higher the pH.
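For those who like to see the bookkeeping spelled out, here is a minimal sketch in Python; pH is just the negative base-10 logarithm of the hydrogen ion concentration in moles per litre, and the example values are the ones from the paragraphs above.

```python
import math

# pH is minus the base-10 logarithm of the hydrogen ion concentration
# (in moles per litre).

def pH(h_ion_molar):
    return -math.log10(h_ion_molar)

print(f"{pH(1e-7):.1f}")  # neutral water  -> 7.0
print(f"{pH(1e-6):.1f}")  # more H+ ions   -> 6.0 (acidic)
print(f"{pH(1e-8):.1f}")  # fewer H+ ions  -> 8.0 (basic)
```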

By the way, the pH of even the purest water varies with temperature. Water at 0ºC has a higher pH, about 7.47, because at that colder temperature the water molecules are less energetic and don’t split up as often, so there are fewer free hydrogen ions. In water near boiling, on the other hand, the molecules are more energetic and split up more often, producing more free hydrogen ions and a lower pH of around 6.14. (Such water is still neutral, by the way, because the number of OH⁻ ions rises by exactly the same amount.)

Now let’s see what happens when a chemical is dissolved in water, starting with the poisonous non-metallic gas chlorine. Without going into the quantum mechanics, suffice it to say that the arrangement of chlorine’s electrons in their orbitals is such that an atom of chlorine needs one more electron in order to fill its outer shell. When dissolved in water a chlorine atom will grab an electron from a water molecule, becoming a negatively charged chloride ion, Cl⁻, while the water molecule is left to split into an OH group and a free hydrogen ion. The more chlorine the more free hydrogen ions, so chlorine lowers the pH, forming hydrochloric acid. It is the presence of these highly reactive ions that, depending on the concentration, makes hydrochloric acid so reactive, so dangerous.

Litmus paper is commonly used in chemistry to determine if a solution is acidic, in which case the paper turns red, or basic, in which case the paper turns blue. (Credit: i RK Yadav)

Metals work in the opposite way. Metals ‘want’ to give up an electron in order to have completed electron shells, so when a metal, let’s say the dangerous metal sodium, is dissolved in water it gives its spare electron to a water molecule, becoming a positively charged sodium ion, Na⁺. Meanwhile the water molecule splits into an OH⁻ ion and a neutral hydrogen atom, not a free hydrogen ion. This reduces the number of free hydrogen ions, raising the pH. Again, it is the presence of the highly reactive Na⁺ and OH⁻ ions that makes sodium hydroxide such a reactive, dangerous solution.

The fact is that acids and bases are both highly reactive substances, but in opposite ways. So it’s not hard to guess that something interesting should happen when you mix an acid and a base in the proper proportions. Using our two examples above, when you mix hydrochloric acid and sodium hydroxide the Cl⁻ ions and the Na⁺ ions combine to form ordinary table salt, NaCl, while the remaining OH⁻ ions and free hydrogen ions simply recombine into water. This is the typical result: combining an acid and a base in the right proportions produces a salt and water, plus a good deal of heat depending on the concentrations.
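Here is a rough illustration of that neutralization arithmetic, assuming a strong acid and a strong base that dissociate completely in water at 25ºC; the function and the concentrations are my own example, not anything taken from a specific source.

```python
import math

# pH after mixing a strong acid (e.g. HCl) with a strong base (e.g. NaOH),
# assuming complete dissociation at 25 deg C (where Kw = 1e-14).
# A rough illustration, not a lab-grade calculation.

def mix_pH(acid_molar, acid_litres, base_molar, base_litres):
    moles_h = acid_molar * acid_litres    # moles of H+ supplied by the acid
    moles_oh = base_molar * base_litres   # moles of OH- supplied by the base
    volume = acid_litres + base_litres
    if moles_h > moles_oh:                # leftover acid -> acidic
        return -math.log10((moles_h - moles_oh) / volume)
    if moles_oh > moles_h:                # leftover base -> basic
        return 14 + math.log10((moles_oh - moles_h) / volume)
    return 7.0                            # exact neutralization: salt water

print(f"{mix_pH(0.1, 1.0, 0.1, 1.0):.1f}")  # equal amounts -> pH 7.0
print(f"{mix_pH(0.1, 1.0, 0.1, 0.5):.2f}")  # excess acid   -> pH ~1.48
```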

When acids and bases are mixing in the proper proportion all that’s left is water and a salt. The dangerous chemicals Hydrochloric acid and Sodium Hydroxide form ordinary table salt! (Credit: IMU Home Learning)

Because they are so reactive, acids and bases play a crucial role in a huge variety of chemical processes. In fact life itself would be impossible without acids and bases; the A in DNA stands for acid after all.

Acids and bases are everywhere, in our industry, our food even inside us. It’s worth taking some time in order to know a little bit about them.

Social Distancing, Herd Immunity, and R-naught, just a few of the concepts developed by the science of Epidemiology.

With the Covid-19 virus continuing to spread, causing an ever growing number of illnesses and deaths across our planet the science of epidemiology has gone from being a little known branch of medicine to arguably becoming the most vital topic in the world. Literally ‘the study of what is on or among the people’ epidemiology was once the most successful branch of medicine, helping to eliminate such deadly diseases as cholera, typhus and yellow fever. Indeed the doctors and scientists who developed epidemiology succeeded in controlling many infectious diseases without any kind of a cure or in some cases having the slightest idea as to what was causing the illness.

It’s all Greek to me! (Credit: Pinterest)

The ancient Greeks recognized that while some diseases could spread from person to person throughout a population, other illnesses like epilepsy or cancer were not infectious. It wasn’t until 1543 however that an Italian doctor named Girolamo Fracastoro speculated that diseases could be spread by living particles too small to be seen that floated through the air. The invention of the microscope and the discovery that there actually were microscopic living creatures lent considerable weight to Fracastoro’s theory.

Fracastoro and a few other early researchers into the germ theory of disease. (Credit: Open Texbooks)

About a hundred years later, in 1662, a part-time mathematician, his day job was haberdasher, named John Graunt performed a statistical analysis of the mortality rolls, the ‘Bills of Mortality’, of the city of London. Graunt’s work provided much evidence supporting some theories about the spread of infection while at the same time disproving others, and it established the use of mathematics in the study of diseases.

During the 16th and 17th centuries the city of London had so many plagues that the one of 1665-1666 is known as ‘The Great Plague’. (Credit: The Lost City of London)

Another Londoner, John Snow, became known as the father of modern epidemiology thanks to his work in 1854 tracing the cause of the cholera outbreaks that struck the Soho section of London every few years. By simply marking the home addresses of cholera victims on a street map of London, see map below, Snow correctly concluded that the source of the infection was a water pump located on Broad Street. By disinfecting the water with chlorine and removing the pump’s handle Snow succeeded in ending the outbreak.

John Snow and his map of the distribution of cholera in London.(Credit: The Vintage News)

Another early pioneer was the Hungarian doctor Ignaz Semmelweis, who dramatically reduced the maternal mortality rate at his Viennese hospital by insisting on rules that promoted cleanliness. Then in the first decade of the 20th century Walter Reed achieved great success in fighting yellow fever in Cuba, not by curing the patients who had contracted the deadly disease but by proving that mosquitoes carried it from person to person, which allowed the insects to be eradicated.

Comic book describing how Walter Reed discovered it was mosquitoes that transmitted yellow fever. Yes, they used to print comic books about real superheroes! (Credit: news.hsl.virginia.edu)

You get the point; the purpose of epidemiology is not to treat the sick but instead to stop the spread of a disease in order to keep other people from becoming sick! That means that oftentimes great advances in epidemiology are made by mathematicians rather than physicians. Epidemiology’s methods have also been applied to social problems such as obesity, deaths caused by smoking and even gun violence.

The science of Epidemiology being used to study homicides in the city of Detroit. (Credit: Alex B. Hill)

Right now of course the lessons learned from epidemiology are the only weapons we have with which to fight the viral disease Covid-19. Until we have either a vaccine or some really effective anti-viral drug all that each of us can do to protect ourselves is to practice the guidelines developed by epidemiology.

With that in mind it would be a good idea for all of us to understand some of the technical concepts that epidemiologists use to understand how a disease spreads and how we can reduce and control that spread. Probably the factor that is most important in determining, and controlling the spread of a disease is known as its Basic Reproduction Number oftentimes referred to as R-naught or just R0.

Simply put, for each person who becomes infected with a disease, R-naught is the average number of healthy people they will in turn infect. In other words, if you catch a cold and become infectious, R-naught is the number of members of your family, co-workers or just people you come into contact with who will catch the cold from you. This also means that if R0 for a disease is greater than one, then the number of people infected is going to grow. For example, if R0 for a disease is two then one person will infect two people, those two will go on to infect four, the four will infect eight and so on until almost everyone has, or has had, the disease.
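Here is a bare-bones sketch of that generation-by-generation arithmetic; it ignores immunity, recovery and everything else a real epidemiological model would include, and the R0 values are just examples.

```python
# How the number of new infections grows, generation by generation, for a
# few values of R0.  A bare-bones illustration of the arithmetic above.

def cases_per_generation(r0, generations, initial_cases=1):
    cases = [initial_cases]
    for _ in range(generations):
        cases.append(cases[-1] * r0)
    return cases

for r0 in (0.8, 1.0, 2.0, 3.0):
    print(f"R0 = {r0}: {cases_per_generation(r0, 6)}")
# With R0 = 2 each generation doubles: 1, 2, 4, 8, ...
# With R0 = 0.8 the numbers shrink toward zero -- which is the whole goal.
```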

A small change in R-naught, say from 2 to 3, can make a huge difference in the number of infected people in a very short period of time. (Credit: University of Scranton)

Under normal conditions in human society there are many diseases that have an R0 much greater than one.  The table below shows the estimated R0 numbers for some well-known diseases.

Table of R-naught for several well known diseases. (Credit: Wikipedia)

Obviously the goal of epidemiology is to find methods and procedures that a community can take that will reduce R-naught for a disease below one. Perhaps the simplest technique is called ‘Social Distancing’ and it just means having everyone in a community reduce the amount of contact that they have with everyone else. No shaking hands when you meet someone, no hugs for friends you haven’t seen in years, also no parties and no big crowds at sports events or concerts. Social distancing works because less contact between people makes it less likely that a germ will pass between them.

Some of the rules of Social Distancing. (Credit: Orange County N. C.)

Looking back at the table you can see how many diseases spread through particles or droplets in the air. Those particles can only travel through the air for about three or four meters, so if everyone stayed more than four meters apart those diseases could not spread; R0 would drop to very nearly zero.

Of course such extreme social distancing is not really possible, we live in families and the jobs of many people are so essential that society cannot get along without them. We live in a society and that society requires a certain amount of contact between its members. That’s why other procedures, such as washing hands, disinfecting everything other people touch, and wearing face masks become so important. In fact anything that we can do to reduce R-naught is important, it is at present the only way we have to fight Covid-19. 

Now for many viral diseases those people who are infected and recover acquire a degree of immunity to being re-infected. In such cases, once a majority of the population has been infected the spread of the disease is inhibited because there are now fewer potential victims left to infect. Not only that, but the people who have become immune actually get in the disease’s way, standing between those who are infectious and those who have not yet been infected, effectively generating a macabre form of social distancing. This acquired immunity of the majority of a population is known as ‘Herd Immunity’. Herd immunity should be considered the last resort in fighting a disease, however, because it results in the maximum number of deaths and hospitalizations. Basically, getting to herd immunity means not fighting a disease at all and just letting people get infected.
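A rule of thumb that follows from the R-naught arithmetic above, though it isn’t spelled out in this post, is that the herd immunity threshold is roughly 1 - 1/R0: once that fraction of the population is immune, each case infects fewer than one new person on average and the outbreak fades. A quick sketch using rough, commonly quoted R0 values:

```python
# The standard rule-of-thumb herd immunity threshold, 1 - 1/R0.  This is a
# simplification that ignores real-world complications like uneven mixing,
# and the R0 values below are rough, commonly quoted estimates.

def herd_immunity_threshold(r0):
    return 1 - 1 / r0

for disease, r0 in [("seasonal flu (~1.3)", 1.3),
                    ("Covid-19 (~2.5, early estimate)", 2.5),
                    ("measles (~15)", 15)]:
    print(f"{disease}: about {herd_immunity_threshold(r0):.0%} must be immune")
```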

Herd Immunity without a vaccine, top; with a few people getting a vaccine, middle; and with a large majority getting a vaccine, bottom. Which do you prefer? (Credit: Wikipedia)

Surprisingly there are many people who believe that is the best solution to Covid-19. Indeed Sweden has decided to forgo most mandatory social distancing measures, relying on voluntary guidelines and hoping the disease will burn itself out on its own.

One last point: when and if a vaccine is developed that is effective against Covid-19 it will grant immunity to people who have not yet been infected by the disease. In epidemiological terms a vaccine therefore works by getting a population to herd immunity without people dying or being admitted to a hospital, without them getting sick at all. Something I’m certain we are all looking forward to!

Medical researchers are making great strides in the development of Induced Pluripotent Stem Cells (iPS Cells). Will they soon be able to use them to repair or even replace diseased organs in our bodies?

Every human being, indeed every animal, begins its life as a fertilized egg cell that divides and grows into many cells. As more and more cells are generated they begin to differentiate into particular types of cells: heart cells, stomach cells, muscle cells, brain cells, over 200 kinds of specialized cells making up every organ in the body. Those early cells, the cells generated before specialization into organ cells sets in, are given the name embryonic stem cells, or sometimes just stem cells.

Male sperm cells surround a female egg cell trying to get inside. Once one of them succeeds the egg will be fertilized and will develop into a fetus. (Credit: Pinterest)
After fertilization the egg cell begins to divide to form a blastocyst. At this stage the cells are all embryonic stem cells. (Credit: Assisted Fertility Program)

Research into the properties of these undifferentiated stem cells began back in the 1960s with the work of biologists Ernest McCulloch and James Till at the University of Toronto. However it wasn’t until 1981 that British biologists Martin Evans and Matthew Kaufman succeeded in isolating and culturing embryonic stem cells from mice. This advance enabled researchers to begin experimenting with stem cells, altering or deleting some of their genes in order to investigate the processes that turn them into specialized cells.

Stem cell pioneers Ernest McCulloch (l) and James Till (r). (Credit: University of Toronto Magazine)

Since stem cells are capable of becoming any type of cell in the body, a property technically referred to as pluripotency, the possibility that they could be used to help repair, perhaps even replace, damaged organs has been the driving force in stem cell research. The adult body retains few stem cells however, mainly in the bone marrow and gonads, and those stem cells are only capable of turning into a few types of cells, such as blood cells or sex cells.

This is the reason why stem cell researchers were so anxious to obtain embryonic stem cells, in order to understand the processes that change a stem cell into a particular type of body cell. From the 1980s through the early 2000s biologists conducted an enormous amount of work using embryonic stem cells obtained from animals, primarily mouse embryos. Human embryonic stem cells, however, could only be obtained from human embryos, chiefly surplus embryos from in-vitro fertilization along with tissue from surgical abortions, sources that brought with them a tremendous amount of controversy. Because of stem cell research’s association with the destruction of embryos and the practice of abortion even scientists who worked only with animal stem cells had difficulty obtaining funding, and the entire field of stem cell research in the U.S. suffered as a result.

A human embryo at four weeks after fertilization, a time when many abortions are performed. At this stage there are millions of embryonic stem cells remaining. (Credit: Abort73.com)

At the same time the researchers all knew that in order to really fulfill the promise of stem cells it was going to be necessary to find a way to reverse the process, to take differentiated body cells, say blood cells or muscle cells, and turn them back into embryonic stem cells. After all, think about it: if you had a heart problem and doctors tried to use stem cells from an aborted fetus to repair your heart, wouldn’t your immune system reject those stem cells just as it would try to reject a heart transplant? But if your own adult cells could be turned back into stem cells, and those stem cells then used to repair diseased heart tissue, there would be no problem of rejection.

The breakthrough came in 2006 when a Japanese team led by Shinya Yamanaka succeeded in converting adult fibroblast cells into pluripotent stem cells by modifying only four genes. These converted cells were given the name Induced Pluripotent Stem Cells, or iPS Cells, and Shinya Yamanaka was awarded a share of the 2012 Nobel Prize in Physiology or Medicine for his achievement.

Discoverer of iPS cells Shinya Yamanaka at work in his laboratory. (Credit: UCSF)

With the development of iPS cells biologists could now take the adult cells of any individual, convert them into stem cells and culture them into as many stem cells as needed. The focus of stem cell research now shifted from the study of stem cells themselves to learning how to use stem cells to help patients with damaged or diseased organs, a field of research that has become known as ‘regenerative medicine’.

Converting adult Fibroblast cells back into stem cells (iPS Cells) allows many different kinds of cells to be regenerated in the lab. (Credit: R&D Systems)

At present there are several distinct lines of ongoing research. The ‘Holy Grail’ of regenerative medicine would be the ‘manufacture’ of entire organs that could replace damaged ones. For example, for a patient suffering from a diseased kidney, instead of getting a kidney transplant from a donor, which would carry with it the problem of organ rejection, cells from the patient’s own body would be converted into iPS cells. Those iPS cells would then be induced to generate a brand new kidney, that patient’s kidney since their cells were used. That new kidney could then be transplanted into the patient’s body without any fear of rejection.

The promise of Regenerative Medicine, using stem cells to grow brand new organs to replace damaged or worn out ones! (Credit: DL3 Spa Services)

Working towards that long range goal, biologists have been moving forward with the idea of repairing rather than replacing damaged organs. In an ongoing study at Osaka University in Japan led by Professor Yoshiki Sawa, blood cells were taken from test animals and converted into iPS cells. The iPS cells were then induced into becoming heart muscle cells, which were grown into a sheet of heart muscle tissue that beat just like a normal heart. The sheet of heart muscle was then surgically placed onto the test animal’s heart, strengthening it and increasing heart function.

Sheet of heart muscle tissue manufactured from iPS Cells. (Credit: NHK)

Over a hundred such experimental surgeries were performed on animals in order to refine the technique and make certain that everything possible was done to ensure safety before any human trials were attempted. It wasn’t until the 27th of January 2020 that the first surgery was performed to attach a 4cm circular section of manufactured heart tissue onto a damaged area of a human patient’s heart. That patient is recovering and being constantly monitored to determine how much improvement in heart function the new heart tissue is providing, and for how long. Nevertheless this clinical trial gives a little glimpse into the potential of iPS Cells.

Heart surgery performed for first time on 27 January 2020. Sheet of heart muscle tissue employed to strengthen patient’s weakened heart. (Credit: www.asahi.com)

Another possible use of iPS cells would be to greatly increase the supply of blood available for operations and other medical procedures. Blood banks are chronically short of blood products, so the possibility that iPS cells could be grown in large quantities and then turned into blood cells is very attractive.

The use of iPS stem cells is not without its problems however. First of all at present the efficiency of converting adult cells into iPS cells is less than 1% making the process both slow and expensive. Another major difficulty is the tendency of iPS cells to form cancerous tumors, a danger that has severely limited the number of human experiments using iPS cells.

One serious problem with iPS Cells is that they can lead to the formation of cancerous tumors. (Credit: Irish Times)

Despite these difficulties, advances in the use of iPS cells in the field of regenerative medicine are accelerating. Who knows what new medical procedures will be developed over the next 10 to 20 years using iPS cells?

Paleontology News for May 2020. What’s there to do when you’re ordered to stay at home during a pandemic? Why study dinosaurs of course!

We tend to think of paleontologists as working out in the field, digging around in some barren, rocky terrain unearthing the remains of long extinct forms of life. That’s partly true of course, after all you have to find some fossils before you can study them. And most paleontologists do prefer being on site where the discoveries are made, never knowing what they’ll see in the very next rock they turn over.

Although it is often hard, dirty, sweaty work take it from me fossil hunting is the pure joy of discovery. (Credit: CBS Denver)

Still, a lot of the work in studying ancient life can only be accomplished back in the lab or in the office. Cleaning fossils, examining fossils, comparing them to similar fossils and of course, writing up the papers that will tell your colleagues, and interested laymen like me, what you’ve found. A lot of that work can safely be accomplished even during the ‘social distancing’ needed to stop the spread of Covid-19. So let’s take a look at some of the work that’s being accomplished by paleontologists even in the shadow of a deadly disease.

Cleaning fossils has to be done in the lab where you can take your time and do a meticulous thorough job. (Credit: Wikimedia Commons)

Spinosaurus aegyptiacus is one of the most intriguing dinosaur species known to science. Originally discovered in Egypt back in 1912, Spinosaurus is a large predatory dinosaur belonging to the group known as theropods, the group that includes the mighty T rex and Allosaurus along with the smaller raptors. Spinosaurus lived during the middle to late Cretaceous period (112 to 93 million years ago) and had one distinguishing feature that set it apart from its relatives: a broad, sail-like flap of skin along its back that was held up by spines coming off of the animal’s vertebrae. See image below. Large, floppy skin features like Spinosaurus’ sail are usually for thermal regulation, for display, or both.

Artist’s impression of a Spinosaurus with a human figure to give scale. (Credit: New York Times)

The loss of the only known skeleton of Spinosaurus during World War 2 brought all research into the creature to a halt, and Spinosaurus was almost forgotten by science. Then in the 1990s further fossils belonging to another species of Spinosaurus, S maroccanus, were discovered in Morocco by a National Geographic team led by Doctor Nizar Ibrahim of the University of Detroit Mercy along with Professor Paul Sereno of the University of Chicago. Exploring a layer of rock named the Kem Kem group, which is exposed across a wide area of Morocco, the team has unearthed fossils of many different species, including specimens of Spinosaurus that have allowed paleontologists to resume the study of this odd dinosaur.

University of Chicago paleontologist Paul Sereno with a skeleton of Spinosaurus. (Credit: The Telegraph)

Actually there is a lot of disagreement over whether S maroccanus is a second species. With the original S aegyptiacus destroyed it is impossible to make a direct comparison and the drawings that remain of the bones of S aegyptiacus are insufficient to determine just how different the new specimens are with certainty.

The new specimens have re-ignited several debates about the nature of Spinosaurus, these include whether or not the predator was actually larger than the famous T rex and whether or not Spinosaurus was at least semi-aquatic, spending a large fraction of its life in the water. Based on the examination of the fossils discovered during the 1990s the full length of Spinosaurus was between 12.5 and 18 meters while the animal’s weight was between 6.5 and 7.5 tonnes. If these estimates are true that would in fact make Spinosaurus a fraction larger than the venerable T rex.

As to the question of Spinosaurus being semi-aquatic, the dinosaur’s long, narrow, crocodile-like snout along with its short, powerful legs do indicate a lifestyle similar to that of…well, crocodiles. Add in the fact that the fossils of Spinosaurus were discovered in the same rock beds that yielded numerous specimens of an ancient and extinct sawfish named Onchopristis, and it seems clear that Spinosaurus lived in an environment that was as much water as land, such as a swampy river delta.

The extinct fish Onchopristis. Measuring eight meters in maximum length this creature was a monster itself! (Credit: Prehistoric Life -Wiki)
Artist’s impression of the sort of environment and life that Spinosaurus lived. (Credit: BBC)

Now perhaps the crucial piece of evidence has been unearthed, as bones from the tail of Spinosaurus have recently been discovered. Based on those bones the tail of Spinosaurus was long, flexible and fin-like, a tail well suited to providing propulsion in the water. This latest discovery pretty much clinches the hypothesis that Spinosaurus is the first type of dinosaur known to have evolved into a swimming creature.

Tail bones tell the story. The tail of Spinosaurus was big and powerful, perfect for propulsion underwater! (Credit: Sci-news.com)

These new discoveries make Spinosaurus an example of how varied and diverse the group we call dinosaurs was, and the research published by Ibrahim and Sereno provides an example of how scientists can continue their work even during a pandemic.

Space news for May 2020.

The big event in space this month will undoubtedly be the launch of the first manned mission for Space X’s Dragon capsule. This launch, to take place from NASA’s Kennedy Space Center in Florida, will not only represent the first ever manned space mission to be conducted by a commercial company but will also mark the return of manned space operations to American soil. Ever since the last flight of the space shuttle Atlantis launched on 8 July 2011 American astronauts have been dependent on purchased tickets aboard the Russian Soyuz spacecraft in order to get to the International Space Station (ISS) at a cost of as much as $80 million per seat.

Landing of shuttle Atlantis marking the end of NASA’s Space Shuttle program in July of 2011. Since this mission American astronauts have been dependent on the Russians to get into space. (Credit: NASA)

That dependence is scheduled to end on May the 27th with lift off at 4:32 PM EDT, although weather or technical problems could certainly lead to a delay. The two-man crew for this first manned mission, officially referred to as Demo-2, consists of veteran space shuttle astronauts Doug Hurley and Bob Behnken. Once in orbit Hurley and Behnken will pilot their Dragon capsule toward a docking with the ISS approximately 24hrs after launch.

NASA Astronauts Doug Hurley (foreground) and Bob Behnken (background) in training on a Space X Dragon Capsule simulator. (Credit: Geekwire)

How long Hurley and Behnken will remain at the ISS has yet to be decided. The original mission plan was for a stay of only a week but NASA is anxious to phase out using the Russian Soyuz to man the ISS so Hurley and Behnken’s mission has now been extended to at least a month and could last as long as 110 days. NASA intends to decide just how long the mission will last once the crew is aboard the ISS.

The goal of Space X’s mission, and NASA’s Commercial Crew Program is to deliver astronauts to the International Space Station (ISS) (Credit: Wikipedia)

Presently the American section of the ISS is being manned solely by NASA astronaut Chris Cassidy, so there is plenty of standard maintenance and upkeep work to keep Behnken and Hurley occupied. There’s one job in particular that Chris Cassidy cannot do alone because it requires a spacewalk, and NASA insists for safety’s sake that all spacewalks be conducted by at least two astronauts. The job consists of swapping out the station’s batteries. Of the two Space X crewmen Bob Behnken is the one with EVA experience, so he has spent the last few months getting in some extra training, learning his way around the outside of the station.

Like any home the ISS requires occasional outdoor maintenance. However an EVA requires a bit more planning and skill than mowing your lawn. (Credit: Spaceflight101)

 This first mission in NASA’s commercial crew program has been a long time in coming. Space X and its competitor Boeing were initially funded back in 2014 with a goal of a first mission in 2017 but numerous difficulties and testing setbacks have led to several years of delay.

In fact Boeing’s Starliner capsule is still not ready for its first manned launch. The spacecraft underwent what was hoped to be its final unmanned test flight back in December of 2019 but a series of software problems occurred during the mission, the capsule was unable to reach the ISS and had to be brought back to Earth early. Boeing is still in the midst of debugging the Starliner’s software and hopes to conduct a second unmanned mission later this fall. If that test flight is successful the Starliner’s first manned flight could take place early next year.

The launch of Boeing’s Starliner capsule on its unmanned orbital test flight in December 2019. Although the capsule’s hardware all worked as required there were a number of problems with the spacecraft’s software. (Credit: Business Insider)

Of course everything that happens these days takes place in the shadow of Covid-19 and the launch of Space X’s Dragon capsule is no exception. NASA personnel at the Kennedy Space Center have worked very hard to keep all activities dealing with the ISS active and fully staffed. That means that the May 27 launch has been given high priority and the space agency is determined to carry out the Dragon mission as soon as the equipment and weather are ready.

Some NASA programs, like the James Webb Space Telescope shown here, have been delayed because of the Covid-19 pandemic. However NASA is marshaling all its resources to launch the Space X Dragon capsule on schedule. (Credit: Spacenews)

However, unlike every American manned space mission since Alan Shepard in 1961 the Space X launch will take place without a crowd of visitors and VIPs to watch. In order to prevent the spread of the virus only a few reporters will be permitted to attend the takeoff. Of course crowds may gather along the nearby public beaches but personally I’ll be quite happy just watching the show on my computer.

Former President Lyndon Johnson watches the launch of Apollo 11. There will be no such crowd gathered to watch the launch of Space X’s first manned mission. (Credit: National Geographic)

Surprisingly enough there is some other space news happening and part of it concerns the Russian Soyuz spacecraft and Covid-19. Because of the spread of the disease in Russia that country’s space agency Roscosmos has temporarily discontinued production of the Soyuz launch vehicles. Not to worry however as there are currently 52 Soyuz rockets in storage ready for use so there’s little chance in the near future of a mission being delayed or cancelled because of the lack of a launch vehicle.

Due to the Covid-19 outbreak Russia has temporarily halted manufacture of its Soyuz launch system. (Credit: Russia Space Web)

Finally, even while we here on Earth are struggling with Covid-19 our robotic space probes throughout the Solar system are still busy exploring distant worlds. That includes the OSIRIS-REx mission to the asteroid Bennu. OSIRIS-REx has been orbiting the asteroid since 2018 and is scheduled to swoop down to the asteroid’s surface in order to grab a sample of Bennu in August. On April 14th the spacecraft conducted a practice run, coming within 75 meters of the asteroid before returning to its normal orbital distance of 1 kilometer.

NASA is conducting the final practice runs of the OSIRIS-REx space probe’s attempt to gather samples of the asteroid Bennu. (Credit: SpaceNews Magazine)

Once OSIRIS-REx has completed its sample acquisition procedure it will begin its 2.5 year journey back to Earth in March 2021. That means that by September of 2023 NASA will have samples of yet another body in our Solar system.

Progress, even as we deal with a pandemic.

A star orbiting the black hole at the center of our galaxy provides direct observational evidence that Einstein’s theory of gravity is more accurate than Newton’s.

One of the basic laws of physics that students learn in high school is that the planets orbit the Sun not in perfect circles but rather in the flattened circles formally known as ellipses, see image below. This idea of orbits being ellipses is Johannes Kepler’s first law of planetary motion.

Kepler’s first law is a direct consequence of Newton’s law of Gravity, but the gravity of a third body, not shown here, will cause the ellipse to wobble! (Credit: Quora)

A few decades after Kepler Sir Isaac Newton showed that it was the gravitational pull of the Sun that pulled the planets into those elliptical orbits. However, an orbit is only a precise ellipse if there is just a star and one planet. In our Solar system the other planets have their own gravitational pulls as well, although they are not nearly as strong as the Sun’s. Nevertheless because of the planets all pulling on each other those elliptical orbits aren’t exact, they all wobble around a bit.

In our Solar System the planet Jupiter weighs more than all the other planets combined, so it causes most of the wobble in the other planets’ orbits! (Credit: Hubble Space Telescope)

In fact after the planet Uranus was discovered astronomers found that its orbit had a wobble in it that couldn’t be explained by the gravitational pulls of the then known planets. In the year 1821 it was suggested that another planet, further out than Uranus could be the culprit and after a lot of math, more than 20 years of calculations by hand, the planet Neptune was discovered in 1846 right where Newton’s gravity said it would be.

It was a wobble in the orbit of Uranus (l) that enabled astronomers to find Neptune (r). (Credit: Daily Mail)

Just a few years later, 1859 to be exact, a peculiar kind of wobble, known as the precession of perihelion, was found in the orbit of Mercury. Now perihelion is the closest point to the Sun in the orbit of a planet and a precession would mean a shifting of where, relative to the Sun, perihelion occurs. See image below.

The precession of perihelion could not be explained by the pull of the other known planets. Was there another planet even closer to the Sun? (Credit: Independent BD)

By the way, this shift measured by the astronomers was tiny, amounting to only 43 seconds of arc per century. If you recall that a complete circle has 360 degrees and each degree is made up of 60 minutes and each minute has 60 seconds then you can see that a change of 43 seconds in a century is very small indeed.

Once again it was suggested that another planet, one even closer to the Sun than Mercury, was the cause of the precession. After their success with Neptune the astronomers were so certain that they gave this ‘new planet’ the name Vulcan before they even found it. In fact they never found it, despite searching for more than 30 years.

In ‘Star Trek’ Mister Spock’s home world Vulcan was named for the hypothetical planet inside the orbit of Mercury! (Credit: Pinterest)

It was Einstein who finally figured out what was going on. In his General Theory of Relativity in 1915 the physicist described gravity not as a force that passed between two massive bodies but rather as a bending of space-time itself. This bending of space-time causes the motion of objects to deflect from a straight line and if the bending is enough, if gravity is strong enough, the ‘straightest path’ for an object can be a closed elliptical orbit.

In Einstein’s General Theory of Relativity the mass of an object bends Space-Time itself, causing the paths of other objects to bend, even into an orbit! (Credit: Extreme Tech)

For a weak gravitational field the difference between Newton and Einstein is extremely small. So small that when NASA sends a space probe to another planet it uses Newton’s equations not Einstein’s. The math needed with Newton is just so much easier, trust me.

Einstein’s equation for the gravitational field. This is actually shorthand for a system of 16 equations all of which must be solved simultaneously! (Credit: WordPress.com)

As the strength of gravity grows, however, the difference between the two theories grows rapidly. That’s why Einstein’s theory predicts the existence of black holes, objects with gravity so strong that nothing can escape them, while Newton’s theory doesn’t. And if you get close enough to our Sun, say where Mercury is, the difference becomes large enough to be measured; it works out to be 43 seconds of arc per century. When Einstein solved his field equations the solution to Mercury’s precession just popped right out. This was in fact the first evidence that Einstein’s theory was correct.
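If you would like to check that famous number yourself, here is a back-of-the-envelope calculation using the standard first-order formula from General Relativity for the extra perihelion shift per orbit, 6πGM/(c²a(1-e²)). The constants are rounded values I have plugged in myself.

```python
import math

# Back-of-the-envelope check of the famous 43 arcseconds per century, using
# the standard first-order General Relativity formula for the extra
# perihelion precession per orbit: 6*pi*G*M / (c^2 * a * (1 - e^2)).

G_M_SUN = 1.327e20     # G times the Sun's mass, m^3/s^2
C = 2.998e8            # speed of light, m/s
A_MERCURY = 5.79e10    # semi-major axis of Mercury's orbit, m
E_MERCURY = 0.2056     # eccentricity of Mercury's orbit
ORBITS_PER_CENTURY = 36525 / 88.0   # Mercury's year is about 88 days

shift_per_orbit = 6 * math.pi * G_M_SUN / (C**2 * A_MERCURY * (1 - E_MERCURY**2))
arcsec_per_century = shift_per_orbit * ORBITS_PER_CENTURY * math.degrees(1) * 3600
print(f"{arcsec_per_century:.0f} arcseconds per century")  # -> ~43
```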

Now astronomers with the European Southern Observatory’s (ESO’s) Very Large Telescope (VLT), located in the Atacama Desert in Chile, have found another example of precession as predicted by Einstein. For the past 27 years the team has been studying a star called S2 as it orbits the supermassive black hole Sagittarius A* at the very center of our galaxy.

The center of our galaxy lies between the constellations of Sagittarius and Scorpius. Try to find it some clear night this summer! (Credit: EarthSky)
At the center of all large galaxies lies a supermassive black hole. Our galaxy’s is called Sagittarius A*. (Credit: Daily Mail)

S2 completes an orbit around Sagittarius A* once every 16 years and at its closest point the star comes within about 20 billion kilometers of the black hole, a distance roughly 120 times that between the Earth and the Sun. At that closest point S2 has to move at nearly 3% of the speed of light in order not to be swallowed by Sagittarius A*. Just imagine that, an object as big and massive as a star moving at close to 3% of the speed of light!
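As a sanity check on that speed, here is a rough Newtonian estimate using the vis-viva equation and the approximate figures quoted above: a black hole of about 4 million solar masses, a 16-year orbit and a closest approach of roughly 120 times the Earth-Sun distance. It comes out to roughly 7,500 kilometers per second, in the same ballpark as the quoted figure; of course the whole point of the measurement is that Newton isn’t quite right here, but he gets the speed about right.

```python
import math

# A rough Newtonian sanity check of S2's speed at closest approach, using
# the approximate numbers quoted above.  Real work on S2 uses General
# Relativity -- that is the whole point -- but Newton gets the speed close.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
AU = 1.496e11      # m
C = 2.998e8        # speed of light, m/s

M_BH = 4.0e6 * M_SUN                    # Sagittarius A*'s mass
# Kepler's third law in solar units: a[AU]^3 = P[years]^2 * M[solar masses]
a = (16.0**2 * 4.0e6) ** (1 / 3) * AU   # semi-major axis of S2's orbit
r_peri = 120 * AU                       # closest approach quoted above

# Vis-viva equation: v^2 = G * M * (2/r - 1/a)
v = math.sqrt(G * M_BH * (2 / r_peri - 1 / a))
print(f"speed at closest approach: {v / 1000:.0f} km/s, about {v / C:.1%} of c")
```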

Artist’s impression of the star S2 at its closest approach to the supermassive black hole Sagittarius A*. (Credit: Wikipedia)

Analyzing their data the astronomers, led by Reinhard Genzel, Director at the Max Planck Institute for Extraterrestrial Physics, have now published their results in an article in the journal Astronomy and Astrophysics. What the astronomers have found confirms Einstein’s theory once again. Even at a distance of 26,000 light years they were able to measure the precession of S2’s orbit around Sagittarius A* and it matches up with Einstein’s theory nicely. In fact their results have allowed them to make the most precise measurement yet of the mass of the black hole itself, 4 million times the mass of our Sun.

Because of the precession calculated from Einstein’s theory, S2’s orbit around the black hole will trace out a lovely rosette shape. (Credit: Syfy)

Future observations of S2 and the region around Sagittarius A* will be even more precise and detailed once construction is completed on the ESO’s new Extremely Large Telescope (ELT) in 2025. The astronomers hope to find fainter stars that come even closer to Sagittarius A*, perhaps even close enough to feel the dragging of space-time caused by the spin of the black hole, an effect of Einstein’s theory that has never been observed around a black hole. That would be further proof that General Relativity is the most accurate theory of space-time outside of a black hole.

But as for what goes on inside a black hole? That’s going to have to wait for the physics of the future.

What is Soap?

In this era of Covid-19 we hear one piece of advice dozens of times every day: ‘Wash Your Hands’, ‘Work up a good lather of Soap and Warm Water and Wash your Hands while singing Happy Birthday Twice!’ Which raises the question, what is soap? Why is soap so central to both cleanliness and good hygiene?

Just a little friendly advice!

Chemically, soaps are a class of compounds known as salts of fatty acids and are produced by combining fats or oils with an alkaline base in solution under heat, a process technically known as saponification. Soaps include a wide range of substances used for a variety of purposes, from thickening agents to lubricants, but the most familiar use of soap is undoubtedly as a cleaning agent, and in this post I will mainly be referring to those types of soaps.

The chemical reaction that produces soap, called saponification. (Credit: Thought Company)

Toilet soaps, as they are known, are produced by using either sodium hydroxide (NaOH) or potassium hydroxide (KOH) as the alkali. When sodium hydroxide is combined with a thick fat such as lard or tallow the result will be a hard soap, while potassium hydroxide and a light oil, such as olive oil, will produce a softer or even a liquid soap. When lithium hydroxide is used as the alkali the result is lithium stearate, a common industrial lubricant.

Making soap is actually pretty easy, many people do it as a hobby. (Credit: The Spruce Crafts)

So how do soaps perform their job as a cleaning agent? Well, remember that oil and water don’t mix because oils are non-polar molecules while water is a polar molecule. A soap molecule, however, is a combination of an alkali and a fat. That arrangement produces a molecule that is polar at one end, attracted to water, but non-polar at the other end, attracted to fats and oils.

Basic layout of a soap molecule. One end can dissolve in water while the other end can dissolve in fats or oils! (Credit: Nature on the Shelf)

Because of that when used in combination with water soap acts as a surfactant, a material that breaks the surface tension of water allowing the water to more easily dissolve dirt and grime, along with such polar molecules as proteins and sugars, so that they can be washed away.

Soap’s greatest trick however is its ability to encase droplets of oil or fat in tiny spheres of soap molecules called micelles. Unlike oils and fats that do not dissolve in water, these micelles do dissolve allowing the oils and fats to be washed away with the dirt and grime. In other words not only does soap help water to better dissolve the substances it usually can, it also enables water to dissolve substances it generally can’t.

Molecules of soap form ‘Micelles’ around fats and oils allowing them to be dissolved in water and washed away! (Credit: Quora)

This also makes soap an effective disinfectant, because the soap molecules can pry apart the fatty envelope that protects many viruses, including the coronavirus, while the micelles can absorb the fats in the cell membranes of bacteria, killing them. Of course modern, manufactured soaps often have various chemicals added to them in order to make them even more ‘anti-bacterial’, but it is worth remembering that plain soap by itself is a germ killer.

Soap by itself can help protect you from germs but modern soaps often have other chemicals added to make them true disinfectants. (Credit: Medium)

Archaeological evidence for the manufacture of soap dates all the way back to ancient Babylon with a cuneiform tablet dated to 2200 BCE that describes the earliest known recipe for soap making. The Egyptians, Greeks and Hebrews all had their own varieties of soaps, mostly produced with olive oil and potash, an alkaline solution made from the ashes of a fire, along with a bit of quicklime. This method of soap making produced a strong and particularly harsh soap.

Babylonian tablet with a recipe for making soap! (Credit: KU Chemistry)
Egyptian ladies washing themselves with soap! (Credit: Realm of History)

Surprisingly the Romans, who are well known for their baths, did not care very much for soap. They preferred to clean their bodies by rubbing them with olive oil and then scraping the oil off with a dull blade called a strigil. They considered the harsh types of soaps made in the eastern Mediterranean to be harmful to the skin. Only after becoming familiar with the milder soaps of the Celts and Germans did the Romans start using soap. (Imagine that, the fierce northern barbarians had the gentler soap!)

A Roman bronze strigil, used to scrape oil off of the body in a Roman bath. I think I’ll stick with soap and water! (Credit: Christie’s)

Both medieval Europe and the Islamic world had soaps but these soaps were generally harsh with an unpleasant smell and so expensive that only the very rich could afford to bathe frequently. Large-scale manufacture of soaps only began in the late 18th century and coincided with a campaign that linked daily washing with good hygiene, ‘cleanliness is next to Godliness!’

In the 19th Century Cleanliness, Hygiene and Morality were pretty much equated. (Credit: Alamy)

Today of course there is a tremendous variety of different kinds of soap available in your local supermarket. There are soaps that can remove the toughest dirt and grime, soaps that actually soften the skin and even soaps advertised as 99 44/100% pure soap. There are solid bar soaps, liquid soaps marketed as ‘body wash’, and even powdered soaps. Whatever kind of soap you prefer, we all know that regularly washing your hands with soap and warm water is our first line of defense against Covid-19. So wash up and remember, ‘I’m pulling for ya, we’re all in this together’!