It may be the oldest problem in mathematics, and it's one we deal with on a regular basis: how do we divide up a single object, let's say a pie or a cake, so that everyone gets a piece and nothing is left to go to waste? Remember, some people, like my brother and me, like big pieces, while others, like my sister, want a smaller piece. In the end all of the various fractions that we cut that cake into have to add up to one: that one cake.
Put in mathematical terms, the problem consists of finding a set of integers, say the set (2, 3, 6), the sum of whose reciprocals is 1: 1/2 + 1/3 + 1/6 = 1. We know from archaeological evidence that this problem has been considered since the time of the ancient Egyptians, but it must be far older than that. After all, even Neanderthals had to carve up the deer they killed into pieces that added up to one whole deer.
Now of course it's easy to cut a cake into n pieces, each of which is 1/n of the whole: eight pieces that are each 1/8 of the pie. Pizza chefs get a lot of practice at doing exactly that. Mathematicians, however, like to make things more complicated, so they consider solutions where each piece is a different size, and just to make things really interesting they prefer to use only fractions whose numerator is 1, like 1/2 or 1/8 or 1/124. Such fractions are technically known as unit fractions because of the 1 in the numerator. Using unit fractions, mathematicians can search for patterns in the denominators, like my example of 2, 3 and 6 above. In this way they can learn about the hidden structure in the numbers that we use every day.
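If you'd like to hunt for these sets yourself, here's a quick brute-force search, a minimal sketch in Python with the search bounds chosen purely for illustration, that finds every set of distinct unit fractions of a given size summing to exactly 1:

```python
# Brute-force search for sets of distinct unit fractions that sum to 1.
# Using Fraction keeps the arithmetic exact, with no rounding issues.
from fractions import Fraction
from itertools import combinations

def unit_fraction_sets(max_denominator, set_size):
    """Yield tuples of distinct denominators whose reciprocals sum to 1."""
    for denoms in combinations(range(2, max_denominator + 1), set_size):
        if sum(Fraction(1, d) for d in denoms) == 1:
            yield denoms

# Every 3-piece solution with denominators up to 20:
for solution in unit_fraction_sets(20, 3):
    print(solution)   # prints (2, 3, 6), the cake-cutting example above
```

Run it with a set_size of 4 and you'll also find solutions like (2, 4, 6, 12), exactly the kind of denominator pattern mathematicians go hunting for.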
Back in the 1970s this ancient problem got a new twist when the mathematicians Paul Erdős and Ronald Graham published a conjecture stating that any set of whole numbers that is large enough in a precise sense, a condition known as having positive density, must contain a subset whose reciprocals add up to 1.
The problem was that few mathematicians, including Erdős and Graham themselves, had any good idea of how to prove the conjecture. So the whole idea just sat there for almost fifty years, until a mathematician named Thomas Bloom of Oxford University in England was given an assignment to present a 20-year-old result by Ernie Croot on a 'colouring' version of the Erdős-Graham conjecture. In the colouring method, numbers are sorted into different baskets by designated colours. Using a branch of mathematics known as harmonic analysis, Croot was able to show that no matter how many baskets were used, at least one would contain a set of numbers fulfilling the Erdős-Graham condition. Croot used a type of integral called an exponential sum, which can count how many whole-number solutions there are to a problem. The trouble is that exponential sums are almost always impossible to evaluate exactly, so Croot's methodology was unable to answer the full, positive-density version of the conjecture as originally stated by Erdős and Graham.
But reading Croot's work got Thomas Bloom thinking about the Erdős-Graham conjecture, and he brought his own expertise in combinatorial and analytic number theory to the problem. Bloom's technique gave him greater control over the approximation of the exponential sum, so that in the end he succeeded, not by exhibiting a particular solution, but by showing that the number of solutions is positive; and since a count of solutions must be a whole number, there had to be at least one.
Just another example of how mathematicians can reexamine even the oldest of problems and still find new structure and new patterns, showing once again that mathematics is the queen of the sciences.
The new baseball season has begun and I got to attend my first ever opening day game. By the way, the Phillies defeated the Oakland Athletics by a score of 9 to 5. That kind of score should be typical of Phillies games this season, as the team looks to score a lot of runs but their pitching is kinda suspect.
One of the best things about the sport of baseball is that, with the action so spread out, it's easy to follow all of the physics happening down on the field. Whether it's the trajectory of a home run, a line drive up the middle, or even just a broken-bat groundout to shortstop, it's all physics.
Of course some of the most interesting physics comes as the pitcher prepares to throw the ball to his catcher, hoping that the batter will either swing and miss or at least hit the ball so weakly that one of the fielders can make a play and get an out. In order to accomplish this, pitchers try to deceive the batter about the kind of pitch that's coming. And pitchers have a wide variety of pitches they can throw, including fastballs, sinkers and curveballs, as well as the infamous knuckleball, along with variations on all of those.
Now simple trajectories, like that home run, are often discussed in freshman physics classes by ignoring the effect of air resistance, not a bad approximation if the wind isn't blowing too hard. The motions that pitchers put on a ball, however, cannot be approximated in that way, because they are due entirely to the interaction between the ball and the air molecules through which it moves. And the most important factor in determining how the trajectory of a pitch deviates from its airless trajectory is the direction and orientation of the spin that the pitcher puts on the ball as he releases it.
Everybody knows that spin has two distinct directions, sometimes called clockwise and counter-clockwise, or right-handed and left-handed. For a ball traveling more or less horizontally, with its axis of spin both horizontal and perpendicular to the direction of travel, those spin directions can be referred to as topspin, where the top of the ball rotates in the direction the ball is traveling, and backspin, where the bottom of the ball rotates in the direction the ball is moving. See the diagrams below. Later on we will consider what happens when that axis of spin is not horizontal and perpendicular to the ball's motion.
The spin on the ball as it moves through the air generates a difference in pressure between the top and bottom of the ball, producing a force due to what is known as the Magnus effect. In the Magnus effect the side of the ball moving in the direction of travel sees the greater pressure, and so the ball is pushed the other way. This means that topspin produces a downward force, causing the ball to drop faster than it would otherwise. This sort of pitch is known as the sinker because it does just that: it drops faster than the batter anticipates, causing him either to miss it entirely or to hit a weak ground ball somewhere.
Backspin does exactly the opposite, generating an upward force so that the ball seems to rise, hence the 'rising fastball'. In actuality the ball is still dropping due to gravity; it just doesn't drop as fast as it would without the spin. In this case the intent is to make the batter either miss the pitch or get under it, popping the ball up so that a fielder can catch it for an out.
Back in the late 1950s a physicist at the National Bureau of Standards named Lyman J. Briggs undertook a study of the way the Magnus effect could change the trajectory of a baseball under typical game conditions. What he found was that the change in the ball's position when it arrives at the plate is proportional to the amount of spin the pitcher puts on the ball and to the square of the ball's horizontal speed. For pitch speeds of 70 to 100 miles per hour and spins of 20 to 30 revolutions per second (1,200 to 1,800 rpm), the change in position ranged from 10.8 to 17.5 inches. (Yes, I know, I'm using Imperial units; please forgive me, but this is baseball, where the bases are 90 feet apart, the distance from the pitching rubber to home plate is 60 feet 6 inches and a baseball weighs between 5 and 5.25 ounces.)
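To get a feel for the sizes involved, here's a back-of-the-envelope sketch, my own toy calculation rather than Briggs's actual formula, that treats the Magnus force as a constant sideways acceleration acting over the whole flight. The acceleration value, a third of gravity, is an assumed but plausible magnitude for a hard-spun pitch:

```python
# Toy estimate of pitch movement: treat the Magnus force as a constant
# acceleration over the flight and use simple kinematics.
MPH_TO_FTS = 5280 / 3600   # miles per hour -> feet per second
G = 32.17                  # acceleration of gravity, ft/s^2

def magnus_deflection(speed_mph, magnus_accel, distance_ft=55.0):
    """Deflection in inches; ~55 ft of flight because the pitcher
    releases the ball a few feet in front of the rubber."""
    flight_time = distance_ft / (speed_mph * MPH_TO_FTS)  # seconds
    return 0.5 * magnus_accel * flight_time**2 * 12       # inches

# Assume a Magnus acceleration of about a third of gravity:
for mph in (70, 85, 100):
    print(f"{mph} mph -> {magnus_deflection(mph, G / 3):.1f} inches of break")
```

A slower pitch spends more time in the air, so the same sideways acceleration produces more total break; in reality the Magnus force itself also grows with speed, which tends to even things out. Either way, this crude model lands in the 9-to-18-inch range, right in line with the numbers Briggs reported.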
OK, so we've discussed the sinker and the rising fastball, pitches that seem to go either down or up depending on the spin, but what about pitches that move sideways, like the curveball or the screwball? Remember that I assumed above that the axis of rotation was horizontal and perpendicular to the direction the ball is moving. What if we remove that constraint and allow a right-handed pitcher to rotate the spin axis about 45º clockwise? In that case the Magnus effect will cause the ball to move laterally to the left: a standard curveball. For a left-handed pitcher the curveball is produced by rotating the spin axis about 45º counterclockwise, and the ball will move laterally to the right.
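The direction of the break falls straight out of a cross product: the Magnus force points along the spin vector crossed with the velocity vector. Here's a minimal sketch, my own illustration with assumed coordinate conventions, showing how tilting the spin axis turns pure drop into sideways break:

```python
# Magnus force direction ~ (spin vector) x (velocity vector).
# Coordinates: x = toward home plate, y = to the pitcher's left, z = up.
import numpy as np

def magnus_direction(axis_tilt_deg):
    """Unit force direction for a ball thrown along +x, with its spin
    axis in the y-z plane, tilted axis_tilt_deg away from pure topspin."""
    theta = np.radians(axis_tilt_deg)
    omega = np.array([0.0, np.cos(theta), np.sin(theta)])  # spin axis
    v = np.array([1.0, 0.0, 0.0])                          # direction of travel
    force = np.cross(omega, v)
    return force / np.linalg.norm(force)

print(magnus_direction(0))    # pure topspin  -> [0, 0, -1], straight down
print(magnus_direction(180))  # pure backspin -> [0, 0, +1], straight up
print(magnus_direction(45))   # tilted axis   -> equal parts down and sideways
```

Tilt the axis one way and the downward break picks up a sideways component, the curveball; tilt it the other way and the sideways component flips, the screwball discussed next.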
And when a right-handed pitcher rotates the spin axis of the ball counterclockwise, so that the ball moves to the right, or a left-handed pitcher rotates the spin axis clockwise to make it move left, you get a screwball. The pitch is known as a screwball because it is so rarely seen that its motion looks really weird, and the reason it's so rarely seen is that it's so damned hard to throw.
We've covered most of the standard, best-known pitches, but I'll finish off today with the pitch that every batter, and most pitchers, really hate: the knuckleball. The essence of the knuckleball is that the pitcher does his best to put no spin on the ball at all, eliminating any contribution to its motion from the Magnus effect.
That way, as the ball moves toward the plate it gets pushed about by every little breeze and every little pocket of turbulence. A well-thrown knuckleball floats and darts this way and that, so that neither the batter nor the pitcher knows where it's going to end up. A poorly thrown knuckleball does nothing, making it an easy target for the batter to drive out of the park. So as we begin another season of our national pastime it's worth remembering how baseball is really all about the physics!
Processed foods are nothing new; smoking, salting and pickling of meats and vegetables have been common practices for thousands of years. Much of early human chemistry was devoted to processing foods to keep them from spoiling. In our modern world we may be able to go to the supermarket to buy fresh food whenever we want, but for most of human history processing food during the summer and autumn was the only way to make certain that you'd have food to eat during the long winter.
One problem with any method of processing, however, is that it always removes or reduces some of the nutritional value of the food, especially the vitamins, which are rather delicate chemical compounds. Still, if the only thing you have to eat in the middle of January is some low-nutrient smoked bacon and pickled cabbage, also known as sauerkraut, you'll eat it and get your vitamins from fresh food during the summer.
Over the last two centuries there has been a revolution in methods for processing foods. Canned and frozen foods are now common, along with many kinds of chemical preservatives that help keep food from spoiling. Supermarkets of course love such preserved foods because they can sit on the shelves for months until somebody buys them, while any fresh food that isn't bought quickly has to be thrown away at a financial loss to the market.
As more and more of what we eat has become processed food, low nutrition has slowly become a bigger and bigger problem. To make matters worse, food manufacturers have found ways to make their processed foods actually taste better than fresh food, usually just by increasing the fat or sugar content, or even just by adding more salt, things that in large amounts are actually bad for our health.
Meanwhile convenience stores like 7-11, Wawa or Royal Farms are becoming ever more popular by selling a wide variety of processed foods without the added space and expense needed for fresh meats and vegetables. The same is true of the innumerable 'Mom and Pop' grocery stores that seem to exist on nearly every block in most cities. These two types of stores have in fact taken over much of inner-city America, so that large sections of many big cities have become 'food deserts' where the only food readily available is unhealthy processed food instead of fresh, nutritious food.
The result of this heavy reliance on high-calorie, low-nutrition food has been an epidemic of obesity in this country. And with obesity come all the health risks associated with it, especially heart disease.
So what can we do, go back to fresh foods with a very limited shelf life? Many health-conscious people are doing exactly that, even to the extent of growing some of their own food, either in their backyards or in an ever-increasing number of community gardens. However, there are simply too many people on this planet today for that to be a complete solution, if only because of the increase in waste caused by uneaten fresh food going bad.
So why can't the scientists and chemical engineers who develop processed foods find a way to make them more nutritious, lower in fat and just plain healthier? In fact there have been many attempts to do just that. Milk and orange juice have for many years been fortified with vitamins, while several brands of breakfast cereal provide needed fiber along with loads of vitamins.
The problem is that these healthy foods just don't taste as good as the 'bad' foods do, making it hard to convince people to switch. More work needs to be done to make processed foods better tasting and even more nourishing.
Enter David Edwards, Professor of Engineering at Harvard University, founder and board member of Incredible Foods Inc. and now operator of the restaurant Café ArtScience in Cambridge, Massachusetts. For years Professor Edwards has been at work developing new varieties of food products that are delicious, nutritious and have minimal impact on the environment.
Take WikiWater for example. Inside a hard shell made of a biodegradable corn-derived protein, no plastics, water is contained within an edible skin packed with vitamins and other nutrients. Edwards hopes that WikiWater will replace current plastic water bottles and help quench the thirst of people in third-world countries. Less trash with better nutrition; sounds like a good idea to me!
Professor Edwards made his first big contribution with an inhalable form of insulin for diabetics. Since founding Incredible Foods he and his team have been busy creating a new line of products they call 'Food Berries': small, fruit-flavoured snacks contained inside an edible skin that is not only packed with vitamins but also gives the Food Berry a considerable shelf life. There are also hummus and yogurt varieties, along with a frozen, ice-cream style.
So yes, we can develop new types of food that are tasty, healthy, long-lasting and environmentally friendly. Thanks to scientists like David Edwards we have the technology; we can have processed foods that are actually better than fresh foods. All we need is for our leaders to recognize the problem and do something to solve it.
Back about fifty years ago the science of geology underwent a revolution in thought as overwhelming evidence was uncovered supporting the theory of plate tectonics. The basic idea of plate tectonics is that the surface of the globe is broken into a number of plates upon which the continents sit. Those plates move extremely slowly, only centimeters per year, but they do move, and as they move they jostle and crash against one another, causing earthquakes and giving birth to mountain ranges and volcanoes.
Sometimes one plate is forced under another, creating a 'subduction zone'. One of the geologic features that can occur in such a zone is a deep-water trench, such as the Marianas Trench, the deepest place in all of the oceans. The Marianas Trench is in fact only one of about a dozen trenches that are part of the famous 'Ring of Fire' surrounding the Pacific Ocean. The precise mechanics of how these subduction zones are generated is very complicated, and several attempts have been made to develop numerical models for analyzing them with computers.
Now a new such model, developed at the Instituto Dom Luiz at the University of Lisbon in Portugal, has shown great promise in providing a more comprehensive and accurate picture of subduction-zone evolution. This new simulation differs from previous models in that it is a full-scale, three-dimensional reproduction of what goes on at a subduction zone. In the program all of the dynamic forces that affect the generation and evolution of subduction zones are realistically incorporated, including gravity.
Such large-scale simulations can require a lot of computer time; in fact each analysis using the new model takes as much as a full week to run on the supercomputer at Johannes Gutenberg University in Germany. Still, the results are well worth the effort. According to Jaime Almeida, first author on the study, "Subduction zones are one of the main features of our planet and the main driver of plate tectonics and the global dynamics of the planet."
Plate tectonics has taught us much about the broad outline of how the surface of our Earth has changed over billions of years. A more precise and accurate model of the processes involved, however, may help us better understand, and therefore predict, disasters like earthquakes and volcanic eruptions that are a common threat around the world.
Now I'd like to take a moment to update a geology story that I posted about back on the 24th of June 2020 and the 10th of April 2021. The story concerned the discovery of two huge, massive blobs that exist deep within the Earth's mantle. These blobs are formally known as Large Low-Shear-Velocity Provinces (LLSVPs) and differ in composition and viscosity from the surrounding material deep within the Earth. (Previously these blobs were known as Ultra Low Velocity Zones, or ULVZs.) The LLSVPs were detected because, being made of different material, the vibrations caused by earthquakes travel through them at a lower velocity, hence 'Low-Shear Velocity'. They were discovered by analyzing the data from hundreds of earthquakes as measured by seismographs around the world.
The two LLSVPs are situated one beneath Africa and the other beneath the Pacific Ocean; each is the size of a continent, with a thickness of greater than 500 km. It has also been speculated that the blobs may in fact be remnants of an ancient planet called Theia that collided with the Earth four and a half billion years ago, fragments of which then became our Moon.
Now a new analysis of the LLSVPs by Qian Yuan and Mingming Li of Arizona State University's School of Earth and Space Exploration has been published in the journal Nature Geoscience. In the article the researchers assert that the LLSVP under Africa extends almost 1,000 km further from the center of the Earth, and is therefore closer to the surface, than the one under the Pacific. To explain this difference in height the researchers hypothesize that the African LLSVP could be less dense, and may therefore be ever so slowly rising through the Earth's mantle. "The Africa LLSVP may have been rising in recent geological time," states author Li. "This may explain the elevating surface topography and intense volcanism in eastern Africa."
It is harder to study what goes on just a few hundred kilometers beneath our feet than it is to study the surface of the Moon or Mars; certainly we've sent more probes to the Moon and Mars than we have to a hundred kilometers down. Nevertheless, bit by bit, geologists are learning the secrets of the planet we all call home.
Twenty years ago the idea that some of the largest galaxies possess a 'supermassive black hole' at their center was a major discovery. Since that time more and more evidence has accumulated that every galaxy, even many small ones, possesses such a black hole, with a mass anywhere from tens of millions to billions of times that of our Sun. One of the major questions in astronomy today is whether supermassive black holes came first and formed galaxies around them, or whether the formation of galaxies leads to the creation of supermassive black holes. By the way, this is a question that, it is hoped, the new James Webb Space Telescope may provide some evidence to help answer.
One thing we do know is that big galaxies form by combining smaller galaxies, or more often by a big galaxy gobbling up a small one. Our own Milky Way is now known to have gobbled up as many as a half dozen smaller galaxies over the course of its history. So what happened to the supermassive black holes in those now-consumed galaxies? Are they wandering around somewhere in our galaxy, or were they absorbed by the Milky Way's own supermassive black hole?
Probably both. If the two galaxies strike each other a glancing blow, the black holes at their centers may never come within tens of thousands of light-years of each other, and may wander around separately for billions of years. On the other hand, astronomers think that the black holes can sometimes become entangled and begin to orbit each other. If that occurs, the two supermassive black holes will start to emit gravitational waves, slowly radiating away the energy of their orbit so that they move closer and closer together until they merge.
Evidence for the latter scenario has recently been uncovered and published in the Astrophysical Journal Letters. The evidence comes from a supermassive black hole in a galaxy about 9 billion light-years away, which, you will remember, means that the events we are watching actually took place 9 billion years ago. The black hole, in a system designated PKS 2131-021, is devouring a considerable amount of matter, a small fraction of which escapes from the black hole's vicinity in the form of a high-energy jet. Such objects are called blazars, and it so happens that PKS 2131-021's jet is pointing right at Earth, giving us an excellent look at what is going on.
And recent observations have shown that the energy from PKS 2131-021 fluctuates on a regular basis: around every two years the intensity dips slightly, only to soon recover. Checking data going back 45 years from five observatories, the researchers confirmed their own observations.
The astronomers hypothesize that the cause of the variation could be another supermassive black hole in a tight orbit around PKS 2131-021, the tightest known orbit for a pair of supermassive black holes. Using Einstein's general theory of relativity, the astronomers have calculated that the two black holes should merge in about 10,000 years, and when they do they will produce massive amounts of gravitational waves that will shake the fabric of space-time throughout the observable Universe.
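That 10,000-year figure is easy to sanity-check with the classic formula, derived from general relativity by Peters back in 1964, for how long a circular binary takes to spiral together by radiating gravitational waves. The masses and separation below are my own illustrative guesses, not values from the paper:

```python
# Peters (1964) merger time for a circular binary losing energy to
# gravitational waves: t = (5/256) * c^5 * a^4 / (G^3 * m1 * m2 * (m1 + m2))
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
PARSEC = 3.086e16    # parsec, m
YEAR = 3.156e7       # year, s

def merger_time_years(m1, m2, separation_m):
    """Time for the binary to merge by gravitational-wave emission, in years."""
    t = (5 / 256) * C**5 * separation_m**4 / (G**3 * m1 * m2 * (m1 + m2))
    return t / YEAR

# Two 500-million-solar-mass black holes about 0.008 parsec apart (assumed):
print(f"{merger_time_years(5e8 * M_SUN, 5e8 * M_SUN, 0.008 * PARSEC):.2e}")
# -> roughly 1e4 years, the same ballpark as the astronomers' estimate
```

Notice how brutally the separation matters: the fourth-power dependence means halving the distance cuts the remaining time by a factor of sixteen.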
In previous posts, on 7 October 2017, 22 October 2017 and 23 September 2020, I have talked about the LIGO and Virgo laser gravitational-wave observatories and how over the last ten years they have succeeded in capturing the final outbursts from the mergers of pairs of stellar-mass black holes, black holes with masses from a few to tens of times that of our Sun. So far, however, they haven't observed gravitational waves from pairs of supermassive black holes; such events are very rare, and their waves come at frequencies far below the band LIGO and Virgo can measure, so picking up the waves already coming from PKS 2131-021 will probably take a different kind of observatory.
Astronomers will continue to study PKS 2131-021 with both gravitational-wave observatories and more old-fashioned telescopes, hoping to learn more of its secrets. The more astronomers observe the Universe, the more common supermassive black holes have become, so it's a good question: does the Universe consist of galaxies of stars with supermassive black holes at their hearts, or of supermassive black holes with halos of stars around them?
Even while the Covid-19 pandemic continues to rage around the world, scientists have been reexamining the pandemics of the past in their efforts to uncover something useful for the fight against Covid. In these posts I have already mentioned the 'Spanish Flu' pandemic of 1918-1920 and its similarity to Covid-19.
Now a team of researchers at the Max Planck Institute for the Science of Human History has published a new study in the journal Nature Ecology and Evolution of arguably the best-known plague of all time, the 'Black Death' of the mid-14th century. Caused by the bacterium Yersinia pestis, the bubonic plague is considered to have been responsible for the deaths of as much as 50% of the population of Europe between the years 1347 and 1352.
Like all of our knowledge of history, what we know about the Black Death comes from the people who kept the records of that time: the literate people who lived in towns or monasteries. Those records tell us much about the heavy toll the plague took on those communities. Unfortunately they tell us very little about what was happening to the country people, the peasants, who made up more than 75% of Europe's population back then.
In order to correct for this urban bias in our knowledge of the Black Death's effects, the team from Max Planck used an archaeological technique called palynology: the study of fossil plant spores and pollen. The rationale for the study was this: if the death rate from the plague among the rural population was as high as it was in the cities and towns, about 50%, then large areas of once-cultivated land should have reverted to wilderness in the years after 1352. Such a large-scale change in the flora would be reflected in the kind of pollen deposited in the ground from that time.
The researchers collected and analyzed pollen samples from over 1,600 sites spread throughout Europe. What they found was that the mortality caused by the plague varied widely from place to place, with some rural areas, like those in Germany and Italy, hit just as hard as nearby cities, while other localities suffered far less. Ireland, for example, showed hardly any change at all.
These results correlate well with what epidemiologists are seeing today. Covid-19 may be a worldwide pandemic, but how it affects each and every human being depends very much on the local conditions where they live.
On a lighter note, another team of archaeologists with the Max Planck Institute for the Science of Human History has unearthed an Old Stone Age site only 160 kilometers from present-day Beijing in China. The site, which is in the Nihewan Basin to the northwest of the Chinese capital and has been given the name Xiamabei, was carbon dated to between 39,000 and 41,000 years ago and consists of a layer of remains buried about 2.5 meters beneath the surface. During their excavations the archaeologists found and removed 380 small stone tools and artifacts along with 430 mammal bones.
The site was also identifiable by several artifacts stained red by the mineral ochre, which is known to have been used by many early cultures as a dye, possibly because of its resemblance to the colour of blood. The Xiamabei site is the oldest ochre-culture site yet found in the Far East, but the pigment is known to have been used in Europe and Africa as long ago as 300,000 years.
According to co-author Shixia Yang, a scientist with the Chinese Academy of Sciences, “The remains seemed to be in their original spots after the site was abandoned by the residents. Based on this, we can reveal a vivid picture of how people lived 40,000 years ago in eastern Asia.”
One big question left unanswered by the investigation so far is exactly what kind of human beings lived at the Xiamabei locale. 40,000 years ago the residents could have been modern Homo sapiens, but they could also have been either Neanderthals or Denisovans; the lack of any human bones makes it impossible to be certain. However, a slightly younger site called Tianyuandong, lying about 110 kilometers away, has yielded remains identified as H. sapiens, so the likelihood is that the Xiamabei site was made by our direct ancestors.
Just another couple of stories about the science of archaeology uncovering small bits of our past.
If human civilization is to survive on this planet we must learn how to recycle the industrial chemicals that make modern society possible. That's all there is to it. We cannot continue to produce plastics and then just throw them away, letting them clog our rivers and oceans. We cannot go on manufacturing chlorofluorocarbons (CFCs) without them leaking out and destroying the ozone layer. And most of all we cannot persist in burning fossil fuels and simply releasing the CO2 into the atmosphere without catastrophic effects on our climate.
It shouldn't be too hard to accomplish. After all, nature has somehow managed to recycle the chemicals of life over and over again for billions of years without waste products accumulating and becoming a problem. Life here on Earth evolved into a well-tuned machine that takes energy from the Sun and uses it to cycle carbon through many different creatures. Much of that recycling is done by some of the simplest creatures, bacteria, which take the waste products, or corpses, of larger living things and break down the complex chemicals of 'higher' life forms so that they can be used again and again. Perhaps, then, it might be a good idea for us to find, or otherwise develop, strains of bacteria that can consume some of our waste products and convert them into substances that are not harmful, or perhaps even useful.
That is exactly what researchers at Northwestern University and the firm LanzaTech are doing. What the scientists have done is select and modify a strain of bacteria to enable it to absorb CO2 and convert it into two useful chemicals: acetone and isopropanol (IPA).
Both acetone and IPA are industrial chemicals currently manufactured in large quantities from petroleum, in processes that emit significant amounts of CO2. Acetone is a well-known solvent for both plastics and synthetic fibers, as well as being the most commonly used nail-polish remover. IPA, for its part, is the main ingredient in many disinfectants, including two that are recommended by the World Health Organization for their ability to kill the SARS-CoV-2 virus. Together these two compounds have a yearly sales market of over $10 billion, so techniques that could manufacture them in an environmentally friendly way would be a major step toward a sustainable economy.
And the new gas-fermentation process that produces acetone and IPA actually removes CO2 from the atmosphere, helping to reduce the greenhouse gases already put there by power stations and gas-burning vehicles. Starting with an anaerobic (non-oxygen-breathing) bacterium called Clostridium autoethanogenum, the researchers at LanzaTech succeeded in reprogramming the bacterium to ferment CO2 and convert it into IPA and acetone. As related by study co-author Michael Jewett, "By harnessing our capacity to partner with biology to make what is needed, on a sustainable and renewable basis, we can begin to take advantage of the available CO2 to transform the bioeconomy."
Just another example of how building a sustainable society doesn't have to mean going back to the Middle Ages. We can protect our planet, and all of the creatures on it, if we just use our brains and are willing to try new techniques for manufacturing the products a modern society requires. Bioengineering can help us develop a bioeconomy: an economy that works with the Earth instead of poisoning it.
If you think about it, the most impressive thing about life here on Earth is its enormous variety. Looking at some of the more unusual species around the world makes you wonder just what limits there are to the kinds of creatures evolution can come up with. Consider the nudibranch and the millipede, the flying squirrel and the flying snake, or how 'bout just the duck-billed platypus all by itself!
It's hardly surprising, therefore, that in the long history of life there should be many creatures that are even stranger. As I usually do, I'll begin my discussion of new unusual fossils in the distant past and work my way forward in time.
Many of the strangest creatures ever found have come from the Burgess Shale fossil site in British Columbia. Even the names of some of the species discovered there indicate how strange they are: Anomalocaris (literally 'strange shrimp') and Hallucigenia (literally 'a hallucination') are two of the best known, but over the last several decades both of these animals have had related species uncovered at other fossil sites, so that the taxonomy of these 'weird wonders' is now better understood.
Not so Opabinia, a five-eyed creature with a backward-facing mouth, a segmented body with flaps instead of legs, and a long elephant-like nose. Although Opabinia was first described by Walcott in 1912 as an unusual arthropod, it was only in 1975, when the paleontologist Harry Whittington dissected specimens of the creature using techniques he himself had developed, that Opabinia was recognized as the bizarre creature we now know. And in the years since then Opabinia has remained a unique creature with no known relatives.
Until now, that is, for a reinterpretation of a fossil from the 500-million-year-old middle Cambrian Wheeler Formation in Utah, led by Steven Pates of Harvard's Department of Organismic and Evolutionary Biology, has found another member of Opabinia's family. The fossil has been named Utaurora comosa; it was first described in 2008 as a relative of Anomalocaris.
While U. comosa does possess some similarities to Anomalocaris, the re-evaluation by the team at Harvard clearly shows that the species bears a striking resemblance to Opabinia. Unfortunately the proboscis of U. comosa has broken off, making an exact comparison to Opabinia's nose impossible; however, there does appear to be enough left to suggest that the proboscis of U. comosa was smaller. At the same time the tail flaps of U. comosa appear slightly different, more fan-like.
The Wheeler Formation is several million years younger than the Burgess Shale, so perhaps U. comosa is a slightly evolved descendant of Opabinia. In any case Opabinia is no longer unique; it has a relative, and as more such relatives are found the family's position in the tree of life will become clearer.
Moving about 50 million years forward in time, the dominant creatures of the Silurian seas were the giant sea scorpions, formally known as eurypterids. Close relatives, and possibly ancestors, of modern scorpions and spiders, sea scorpions were predators like their modern kin, and some species grew to over a meter in length, making them among the largest of all arthropods.
Now a new species of eurypterid has been identified in Australia, the largest specimen yet discovered in the land down under. The fossil itself was unearthed years ago and left stored in the Queensland Museum, and only recently has it been thoroughly examined. The creature, now recognized as a eurypterid, is estimated to have been as much as a meter in length and has been given the formal name Woodwardopterus freemanorum.
Of course even the largest creatures of the Cambrian and Silurian periods were small compared to the later dinosaurs. And the largest, best-known predator from the age of the dinosaurs is the famous Tyrannosaurus rex, or just T. rex. One thing about T. rex that sooner or later everybody finds curious is the pair of tiny, seemingly useless arms the giant meat-eater possessed. Did those petite appendages have any use at all, or were they vestigial organs, like our own appendix: useless but not yet eliminated by evolution?
Now a new species of large predatory dinosaur, whose arms are comparatively even shorter than T. rex's, has been discovered in the Los Blanquitos Formation in the Amblayo region of northern Argentina. Named Guemesia ochoai by its discoverers from the Natural History Museum in London, the animal belongs to a family of dinosaurs called the abelisaurs, which were distantly related to the tyrannosaurs that roamed North America at approximately the same time.
As a group, abelisaurs were 5 to 10 meters in length and used their powerful heads and jaws to seize and kill their victims. The researchers who described G. ochoai are not exactly certain of the creature's size because the specimen they unearthed could have been a juvenile. The major difference between the abelisaurs and the northern tyrannosaurs was that the southern theropods had shorter, deeper skulls that often bore crests or bumps.
Regardless of the actual adult size of G. ochoai, the fact that large predatory dinosaurs on two continents both evolved arms so small as to be practically useless tells us a lot about the way they attacked their prey. If you think about it, in our modern world wolves take down their prey without using their forelegs; it's all just teeth and jaws.
So maybe the animals from the past weren't that different from those of today; they faced the same challenges and came up with pretty much the same solutions.
The Standard Model of particle physics has several problems. For one thing it simply doesn't contain gravity in any way. Another problem is the masses of all the particles. For example, the muon resembles an electron in every respect except its mass, which is about 207 times that of its cousin. The Standard Model simply doesn't explain that ratio, or any of the other mass ratios. In fact the whole concept of generations, particles like the electron, muon and tau that behave in the same fashion except for their mass, is a complete mystery at present. Perhaps the biggest problem with the Standard Model, however, is that it works so well that we have very few clues pointing toward a more comprehensive theory that would answer our questions.
That's part of the reason why physicists are so busy studying the particle known as the neutrino. These ghost particles have mystified physicists ever since their existence was first predicted by the theoretician Wolfgang Pauli, who proposed the neutrino to explain some discrepancies in the type of radiation known as beta decay.
Now in beta decay a neutron splits into a proton and an electron. Conservation of electric charge works out: a neutron is neutral, while the positive proton and negative electron still add up to zero. The energies of the proton and electron did not always come out the same from decay to decay, however, an apparent violation of conservation of energy. And the spins of the particles didn't add up either, apparently violating conservation of angular momentum as well.
What Pauli proposed was that a third particle, electrically neutral and with zero rest mass, was emitted at the same time, and that the experimentalists simply hadn't detected it yet. At first Pauli called his particle the neutron, but when the bigger, far more massive neutron was discovered by James Chadwick, Enrico Fermi suggested that Pauli's neutral particle be called the neutrino, Italian for 'little neutral one'. It took more than twenty years, but Pauli's neutrino was finally discovered in 1956, and in fact physicists now know that there are three different types of neutrino, one each paired with the electron, the muon and the tau. Again, why each generation of electron-like particle should have its own neutrino is simply not explained in the Standard Model.
Now neutrinos interact very rarely with other particles; it's estimated that a neutrino could fly through a light-year of solid lead and still have a 50-50 chance of coming out the other side. At the same time neutrinos are generated in enormous numbers in nuclear reactions, such as the fusion reactions that power our own Sun and the other stars. Solar physicists therefore wanted to capture as many solar neutrinos as they could, hoping to learn from them about the interior of the Sun.
Instead they learned more about neutrinos. The first neutrino telescope was built deep beneath the Earth's surface, in the Homestake Mine in South Dakota, in order to eliminate contamination from cosmic rays. What the telescope found was that the number of neutrinos coming from the Sun was only about one-third the expected number. After wondering for some time whether something was wrong with their theories of solar fusion, or maybe even with the Sun itself, physicists eventually found that the three types of neutrino oscillate; that is, they change from one type to another over time. The neutrinos generated in the Sun are all electron neutrinos, but by the time they reach Earth roughly two-thirds have changed into muon or tau neutrinos.
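For the curious, the standard two-flavour textbook formula shows how oscillation works. The real solar story involves all three flavours plus interactions with matter inside the Sun, which is what pushes the surviving fraction down toward one-third, so treat this as a toy sketch with illustrative parameters:

```python
# Toy two-flavour neutrino oscillation, standard textbook formula:
#   P(survive) = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 in eV^2, distance L in km and energy E in GeV.
import math

def survival_probability(theta_rad, dm2_ev2, L_km, E_gev):
    """Probability that a neutrino still has its original flavour."""
    phase = 1.27 * dm2_ev2 * L_km / E_gev
    return 1 - math.sin(2 * theta_rad)**2 * math.sin(phase)**2

theta = math.radians(34)   # roughly the measured 'solar' mixing angle
# Over the Sun-Earth distance the phase winds through millions of cycles,
# so detectors see the average, where the second sin^2 term tends to 1/2:
print(1 - 0.5 * math.sin(2 * theta)**2)   # about 0.56 in this two-flavour toy
# A single evaluation just samples one point of the rapid wiggle:
print(survival_probability(theta, 7.5e-5, 1.496e8, 0.001))
```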
Which means that neutrinos must have a rest mass, because particles with zero rest mass move at the speed of light, and according to Einstein's theory of relativity time does not pass for anything moving at the speed of light: a truly massless neutrino could never change. So the questions now are: just what is the mass of a neutrino, and can we learn from it a clue about the masses of all the particles?
That's the purpose of the KATRIN experiment at the Karlsruhe Institute of Technology in Germany. KATRIN is trying to measure the mass of the neutrino by making the most precise measurements ever of beta decay, the original interaction for which Pauli first proposed the particle. Think about it: if the energy released by a decaying neutron gets shared in varying amounts between a proton, an electron and a neutrino, then the minimum amount of energy the neutrino can carry off is its rest-mass energy. So if you measure thousands, or better still millions, of decays, the maximum energy of the proton and electron taken together, subtracted from the total energy released, gives you the rest mass of the neutrino. Easier said than done; remember, we're talking about sub-atomic particles here, and previous experiments had already concluded that the neutrino rest mass is less than 1/100,000th that of the electron.
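Here's the endpoint logic in miniature, using tritium's numbers (a sketch of the principle only; the real analysis fits the detailed shape of the spectrum near its endpoint rather than a single maximum energy):

```python
# In tritium beta decay about 18.6 keV is released and shared between
# the electron and the neutrino. The electron's maximum possible energy
# falls short of that total by the neutrino's rest-mass energy, and
# that tiny deficit is exactly what KATRIN is hunting for.
Q_TRITIUM_EV = 18_574.0   # approximate decay energy in eV

def electron_endpoint(neutrino_mass_ev):
    """Maximum electron energy (eV) for a given neutrino rest mass."""
    return Q_TRITIUM_EV - neutrino_mass_ev

for m_nu in (0.0, 0.8, 2.0):
    print(f"m = {m_nu} eV/c^2 -> endpoint at {electron_endpoint(m_nu):.1f} eV")
```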
Let me take a moment here to mention the units by which particle physicists measure mass. Remembering Einstein's most famous equation, E = mc², physicists like to turn it around to get m = E/c². So to describe the mass of elementary particles physicists use a measure of energy known as the electron-volt, the energy an electron gains by accelerating across one volt of electrical potential, and divide it by c², getting eV/c², or kilo-eV/c² (keV/c²), or million-eV/c² (MeV/c²), or even GeV/c², a billion eV.
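To see just how small these masses are in everyday units, you can turn the eV/c² figures back into kilograms, a two-line conversion using the standard value of the electron-volt:

```python
# Convert a mass quoted in eV/c^2 into kilograms: m = E / c^2.
EV_TO_JOULES = 1.602e-19   # one electron-volt in joules
C = 2.998e8                # speed of light, m/s

def ev_per_c2_to_kg(mass_ev):
    return mass_ev * EV_TO_JOULES / C**2

print(ev_per_c2_to_kg(511_000))  # electron (511 keV/c^2) -> ~9.1e-31 kg
print(ev_per_c2_to_kg(0.8))      # KATRIN's neutrino limit -> ~1.4e-36 kg
```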
Now neutrons are themselves hard to handle; being neutral, you can't use an electric field to control them. So the KATRIN experiment uses the heavy isotope of hydrogen called tritium, whose nucleus consists of one proton and two neutrons. Tritium is a well-studied beta-decay source, and as a gas it is much easier to handle than free neutrons would be. Also, the proton that results when one of the neutrons decays remains in the nucleus, transforming it into a nucleus of helium-3. That means the only thing you really have to measure is the energy of the emitted electron.
Nevertheless it's still a difficult task, which is why KATRIN is an enormous instrument, 70 m in length, much of which is the main spectrometer for measuring the electron's energy. For the experiment the tritium gas is cooled down to a temperature of 30 K (-243 ºC) in order to minimize thermal motion, and a set of 24 superconducting magnets is used to collimate the emitted electrons into the spectrometer.
While KATRIN is still collecting data, an analysis of the measurements gathered by the end of 2019 has achieved a milestone: at the 90% confidence level the rest mass of the neutrino is less than 0.8 eV/c². An elementary particle with a rest mass of less than 1 eV/c² would have been a shocking result just a few decades ago, and in a sense a rest mass of around one-millionth that of the electron, or even less, only deepens the mystery of elementary particle masses.
Still, the results of KATRIN are reality, and the only way to get beyond the Standard Model is to gather more facts that don't fit the model. Who knows, maybe right now some grad student in some university somewhere is reading the article published in Nature Physics by the KATRIN collaboration and thinking, 'hey, wait a minute... that actually makes sense'! After all, that's how it started with Pauli, and Einstein, and Bohr, and all the others who built the Standard Model.
In the early morning hours of the 24th of February, military forces of the Russian Federation, at the orders of their President Vladimir Putin, began a full-scale assault on the neighboring country of Ukraine. The Russians had taken their time in organizing the attack: more than a month of preparation went into assembling a force of nearly 200,000 men with more than 1,000 tanks along with 2,000 aircraft. Such was the armored firepower of the Russian army that it was widely expected to sweep the much smaller Ukrainian military aside and occupy the capital Kiev, along with the country's other major cities, within days.
It hasn't worked out that way. As I write this post we are twelve days into the Ukraine war and Russian forces are bogged down on several fronts. The Russian units attacking Kiev are facing stiff resistance and have made no progress for the past week, while most other major cities are also still in the hands of the Ukrainian government. President Putin has unquestionably overestimated his own strength and underestimated the resolve of the Ukrainian people to resist him, while at the same time ignoring the determination of the international community to punish him and Russia for his blatant act of aggression.
Now I do not mean to imply that a Ukrainian victory is coming any time soon. Russia still has enormous forces to bring to bear in this fight, and unless Putin is willing to run back home with his tail between his legs this conflict is going to continue and become much more brutal and bloody. There are already signs that the Russians are shifting from a war of decisive battle, i.e. a quick, sharp fight with winner take all, to a war of attrition, in which the bigger combatant trades casualty for casualty and simply wears down his foe: the most brutal kind of warfare. There are also reports of ever-growing numbers of attacks against civilians, driving the casualty figures still higher.
The problem with that, for Russia, is that the longer the war goes on the likelier it is to devolve into a guerrilla war, in which, even after the organized Ukrainian army is defeated, the Ukrainian people continue to fight on in small bands. Such wars, also known as insurgencies, are quite common, and although both lengthy and bloody, they often succeed.
The term guerrilla, which means 'little war' in Spanish, comes from Napoleon's invasion of Spain in 1808, where a large, well-trained and well-supplied French army defeated several smaller Spanish armies, marched straight to the Spanish capital, seized it and installed Napoleon's brother Joseph as the new Spanish king. The problem was that most of the Spanish people didn't accept their defeat, and soon peasant farmers were taking potshots at French soldiers on guard duty, while small bands made up of Spanish soldiers who never surrendered, along with ordinary citizens, were attacking French supply wagons. Within a year the entire countryside of Spain was a battlefield, and with the British smuggling in military supplies, and eventually troops, to aid the guerrillas, by 1814 the French were defeated in what Napoleon referred to as his 'Spanish Ulcer'.
Since then other big countries have tried to use their powerful militaries against presumably weaker opponents only to find themselves bogged down in guerrilla wars. In World War 2 Hitler faced strong partisan resistance in both Yugoslavia and the Soviet Union; 'partisan', by the way, is just another word for guerrilla.
Famously, the United States was defeated in a guerrilla war by the Vietnamese, and more recently by the Afghans. And it's important to remember that the Russians, back when they called themselves the USSR, were also defeated by Afghan guerrillas during the 1980s, in a war that many historians think helped lead to the collapse of the Soviet Union. A war that Putin should remember well!
If the fighting in Ukraine continues there is every chance that it will evolve into just such a guerrilla conflict. For one thing, the Russians simply do not have enough troops to garrison the entire country. The US Army War College estimates that in order to really secure a hostile country an occupier must have one soldier for every 50 citizens. To occupy Ukraine, with its population of more than 40 million, would therefore require an army of somewhere between 800,000 and one million troops, a force the Russian Federation simply cannot field. The Russians may take all of Ukraine's major cities, but any potential Ukrainian guerrillas will have plenty of inadequately guarded forests and marshes in which to organize, or to retreat into whenever needed.
Meanwhile the Ukrainian people are already preparing themselves for a guerrilla war: ordinary citizens are lining up at police stations to receive guns so that they can help fight the Russian invaders, and companies that produced alcoholic beverages are now manufacturing Molotov cocktails, bottles filled with gasoline for use as primitive grenades. These and other activities are typical of a guerrilla war. Every day that they succeed in resisting the Russian advance, the morale of the Ukrainians grows, making them more likely to continue the fight even after organized resistance has ended.
On the other hand, Russian morale was rather poor at the very start of the war. For all of his propaganda, President Putin never succeeded in convincing the Russian people that Ukraine was any kind of threat to them, and while the majority of his people continue to support him there is a considerable minority who simply loathe their president. Thousands of Russians have already been arrested for protesting against their nation's invasion of its neighbor; not a good way to start a war that could go on for years.
Add to that the damage to the livelihoods of ordinary Russians from the massive sanctions that the international community has placed upon the country, and the morale of both the Russian troops and the Russian people can only decline still further. Already the value of the ruble has dropped by more than a third, while imports of critical goods into Russia have simply stopped. The unity that nations and corporations around the world have shown in their effort to make Russia pay for its aggression has been unexpectedly strong, even taking into account China's determination to play both sides against each other.
The fighting in Ukraine also brings with it a new potential horror: for the first time ever, nuclear power plants are on the front lines of a major war. On the very first day of their assault Russian forces seized Chernobyl, the site of the world's worst nuclear accident. Fortunately there was little fighting involved and the containment structure surrounding the damaged reactor was unharmed. Seven days later Russian units attacked and occupied the Zaporizhzhia Nuclear Plant, the largest nuclear power plant in Europe. This time the Russians appear to have used more force than necessary, and a training building caught fire, raising fears that a nuclear accident could occur.
Ukraine still has three more nuclear power plants, and as the fighting grows more intense the possibility of a real nuclear disaster cannot be ignored. Even if the Russians do manage to seize and secure all of Ukraine's nuclear facilities without incident, what about the guerrilla war that is almost certain to follow? Partisan units act independently, which is their great strength, but it also means they sometimes act against the wishes of their superiors. What if the leader of some guerrilla band convinces himself that a nuclear disaster would be just the thing to spur the Russians into leaving?
All of which means that the fighting over Ukraine has only just started, and is likely to get much worse. This war could drag on for years and, as far as I can see, will only result in terrible harm to both countries, harm that will take decades to repair.
What does Putin hope to accomplish with this war anyway? Well, like Napoleon, he hopes to install a government that will be subservient to his will. He hopes to make Ukraine a vassal state as a way of rebuilding the old Russian empire. But the Ukrainian people will have none of that; after centuries of Russian domination they have tasted independence and they like it. And the inspiration that the country is finding in its President, Volodymyr Zelensky, has fired their courage and resolve while impressing the entire world. In the end Russia simply cannot hold Ukraine; indeed most Russians don't want to. Only one man is responsible for all of this madness and bloodshed, and in the end Vladimir Putin will have achieved nothing with his war against Ukraine except to secure his place in history as a small and rather inferior version of Adolf Hitler.