I’m lucky enough to have a very vivid imagination. If I just shut my eyes, by an act of will I can see and hear President Kennedy giving his ‘…landing a man on the Moon…’ speech. And for an image that I see every day, like Washington on a dollar bill, I don’t even have to close my eyes: I can see George’s face superimposed on everything that’s actually there. Back in Shakespeare’s time the imagination was known as ‘the mind’s eye’ because of the images it can conjure up, hence the quote from Hamlet in this post’s title.
My imagination can even let me see things that I’ve never actually seen in real life. For instance whenever I’m reading a good novel my imagination goes into overdrive visualizing things that may never have existed. Consider Arthur C. Clarke’s novel ‘Earthlight’ for example. I haven’t read that book in at least ten years but I can still see the battle sequence in my mind any time I want; that’s the impression it made on my mind’s eye.
For a scientist and engineer a good imagination is definitely a benefit. I can often visualize what the results of an experiment, or a circuit, should be before I begin any testing, and if something isn’t right I know it immediately. And any time I’m doing one of those math ‘word problems’ that everyone hates I can visualize what the problem is really about, making it much easier to solve.
Not everyone has such a vivid imagination. For some people trying to conjure up images from their own past, the face of a deceased parent say, requires considerable mental effort. There is even a small percentage of people, estimated at 1-3% of the population, who are simply incapable of forming mental images of any kind, people who have no mind’s eye at all.
Such a condition is medically known as aphantasia and can usually only be detected by a long series of psychological tests, tests that are inherently subjective and often give ambiguous results. Now however a new study has been published in the journal eLife by researchers at the School of Psychology at the University of New South Wales in Sydney, Australia that details a direct, physiological technique for diagnosing aphantasia.
The test begins simply enough: the subjects are shown a chart with a bright figure set against a gray background and told to stare at the bright figure. Just as in a bright room, staring at the bright figure causes the subjects’ pupils to respond by contracting somewhat, and the size of the contracted pupil is then measured. The subjects are then shown a similar dark figure set against the same gray background. As they stare at the dark figure their pupils expand, as they would in a dark room. As before the size of the expanded pupil is measured and compared to the earlier contracted pupil size.
Now comes the interesting, even kinda weird part. The subjects are next asked simply to imagine the bright and dark figures they were shown earlier, and their pupils should react as before, although maybe not to the same extent. Comparing the second set of results to the first then gives a direct measure of the subject’s ability to form a visual image in their mind’s eye.
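As a toy illustration of that comparison, the little sketch below computes an ‘imagery strength’ score from four pupil diameters. The function name and the numbers are hypothetical, my own invention for the example, not the study’s actual data or analysis.

```python
# Toy "imagery strength" score from the pupil measurements described
# above. Purely illustrative; not the eLife authors' code or data.
def imagery_index(perceived_bright, perceived_dark,
                  imagined_bright, imagined_dark):
    """Ratio of the imagined pupil response to the perceived one.
    Near 1.0: imagery drives the pupil almost as strongly as real light.
    Near 0.0: no pupillary response to imagery, as reported in aphantasia."""
    perceived_range = perceived_dark - perceived_bright  # pupil diameters, mm
    imagined_range = imagined_dark - imagined_bright
    return imagined_range / perceived_range

print(imagery_index(3.1, 5.2, 3.8, 4.6))  # hypothetical sizes -> ~0.38
```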
If you’re thinking that all this smacks of mind over body, well that’s what I think is so interesting. The idea of our imagination causing actual changes to our body isn’t that hard to believe; after all just thinking about sex can certainly stimulate some organs. Still the notion that our eyes will react to our merely visualizing bright or dark objects is really rather eerie.
There’s an old expression that ‘the eyes are the window to the soul’. Well what the scientists in Australia have found is a way to use our eyes to measure the strength of our Mind’s Eye.
Physicists are always fascinated by symmetries in the world around us. For example there appears to be exactly the same number of positively charged particles as there are negatively charged particles. At the same time there are just as many north magnetic poles as south magnetic poles.
Another big symmetry appears when we look at the distribution of galaxies throughout the Universe as a whole. In whatever direction we look there are the same sorts of galaxies in roughly the same density. In terms of space the Universe appears to be very symmetrical.
Not so in time. We know that the Universe is expanding; Edwin Hubble made that discovery more than 90 years ago now. So in the distant past, billions of years ago, all of those galaxies would have been much closer together than they are today. And going even further back all of the matter in the Universe would have formed one big, dense, hot cloud, a big bang. So why should time be different from space?
After all Einstein’s Theory of Relativity tells us that time should really be treated mathematically in the same way as space, a principle known as covariance. And all of the experiments we perform in big atom smashers like the ones at CERN or Fermilab confirm Einstein’s ideas.
Another big lack of symmetry that has physicists confounded is that between matter and anti-matter, those mysterious mirror particles that have the same mass as, but opposite charge to, the matter particles that form everything we know. Another curious fact about anti-particles is that when they come in contact with their ‘normal’ matter counterparts the two annihilate each other, becoming photons of light. Matter into energy, just as Einstein said. Again, both our theories and the experiments performed at high-energy physics labs tell us that anti-particles should be generated just as often as particles, that there should be just as much anti-matter in the Universe as matter.
But there isn’t, certainly not in our Solar System, because the solar wind touches every planet and moon and we’d see the energy from matter anti-matter annihilation if, say, Jupiter were anti-matter. And that also means that our galaxy can’t contain anti-matter, since the interstellar medium touches every star system and again, we don’t see any sign of matter anti-matter annihilation.
What about different galaxies you ask? Couldn’t some of them be composed of anti-matter? Well maybe, but astronomers have also seen a number of galaxies colliding with other galaxies and once more there are no signs of the type of energy release that would indicate matter and anti-matter in contact. That leaves physicists with the question: where is all of the anti-matter?
So physicists are faced with two instances of non-symmetry, in time and in matter / anti-matter. And since physicists are clever people it isn’t surprising that someone thought to use one problem to solve the other. You see back in the 1940s the physicist Richard Feynman suggested that the best way to think about anti-particles, his paper was explicitly about anti-electrons, was to consider them as normal electrons going backward in time. That way when an electron, going forward in time, collides with an anti-electron, going backward in time, they turn into photons which, according to relativity, do not experience time at all. Perfect symmetry.
So let’s go with that thought: let’s assume that all anti-matter is just normal matter going backward in time. Then what happened to all of the anti-matter that should have been created by the big bang? Well it went backward in time and exists before the big bang. The Universe before the big bang was made up of an amount of anti-matter equal to the matter in the Universe after the big bang. Perfect symmetry.
Time symmetry is restored as well because whatever the Universe looks like at a certain time t after the big bang the Universe looked exactly the same way, on a large scale at least, at the same time t before the big bang. This new model of the Universe uses its anti-matter component as a mirror to fully restore symmetry.
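Put slightly more formally, the symmetry being invoked here is the combined CPT operation of particle physics. A minimal statement of the idea, as I understand the proposal, is that the Universe as a whole is unchanged under

$$ \mathcal{CPT}:\qquad t \to -t, \qquad \vec{x} \to -\vec{x}, \qquad \text{particle} \to \text{anti-particle}, $$

so that the Universe before the bang is the mirror image, in time, space and charge, of the one we live in.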
This is the basis of a new paper by physicists Latham Boyle, Kieran Finn and Neil Turok of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario in Canada along with the University of Manchester in the UK. In doing their calculations the physicists also discovered that their new, symmetric model of the Universe had a couple of other advantages as well. For one thing the period of rapid expansion immediately after the big bang called inflation, proposed by Alan Guth around 1980 to account for the almost perfect flatness of the Universe, is simply not needed. The model proposed by Boyle, Finn and Turok provides a flat Universe full of particles naturally, without the ‘ad hoc’ insertion of inflation.
Another feature of the model is that it requires a fourth type of neutrino, those mysterious ‘ghost’ particles that very rarely interact with more normal particles. The researchers think that their fourth neutrino species could provide the basis for the missing dark matter, maybe solving yet another problem in astrophysics.
So, how do we go about proving that this new model is the correct one? After all it seems like new models of the Universe are being proposed nearly every week. Well, finding that neutrino would be a good start but physicists have been looking for ‘sterile’ neutrinos for a long time now without success.
The researchers also propose another test. Theories of inflation all predict that the rapid expansion at the beginning of the Universe should have produced large amounts of gravitational waves, waves that the scientists at the LIGO and Virgo gravity wave observatories may soon be able to detect. But if inflation didn’t happen, if the Universe is symmetric instead, then the search for primordial gravity waves will fail.
Of course it would be so much simpler if we could somehow look back before the big bang to see if there was an anti-matter Universe back then. But that’s impossible! Isn’t it?
Back on August 9th of last year, 2021, the Intergovernmental Panel on Climate Change (IPCC), as directed by the Secretary General of the United Nations, released two reports concerning first the causes and secondly the impacts that can be expected from Global Warming over the rest of this century, see my post of August 21st 2021. The possible impacts were analyzed for five specific scenarios of human activity, ranging from eliminating carbon emissions immediately to continuing to increase our carbon footprint without any regard for the damage it is doing to our planet.
Those reports, like everything that deals with climate change, should have been a straightforward, empirically based assessment of the facts. Of course what actually happened was that the report quickly became politicized, with many nations insisting that the problem of climate change was not really urgent. In fact just a few months later at the COP26 climate conference held that November, nations such as Japan, Australia and Saudi Arabia refused to accept any language calling for a reduction in fossil fuel emissions. The nation of India, the world’s third largest emitter of greenhouse gases, went so far as to state that it had no plans to even consider reducing its use of coal, the worst energy source for carbon emissions, until at least the year 2050.
Now, on the 4th of April 2022, a third section of the IPCC report was published that deals with what we can do to solve the climate crisis. And if you listen to the scientists there’s no time to wait. As geoscientist Andrea Dutton of the University of Wisconsin declared, “We can’t kick this can down the road any longer.” In fact the scientists working on the IPCC report have identified five clear danger signs that will tell us when the worst outcomes of climate change have begun.
1. The Amazon rain forest becomes a savanna. The Amazon jungle has been called the planet’s lungs because of its enormous ability to absorb CO2. However, both human encroachment and increasing drought in Brazil are slowly turning it into an arid grassland. Without that absorption of greenhouse gases by the Amazon the problem of climate change will only get worse.
2. Coral Reefs die. Coral is actually a symbiosis between a hydra-like polyp and a species of algae, the polyp providing a home for the algae while the algae provides food for the polyp. If the water temperature rises too much however the polyp will often kick the algae out. This condition is known as bleaching and can lead to the death of the coral. Over the last ten years major portions of both the Great Barrier Reef and the Florida Keys have been subjected to periods of bleaching and it may only take a small additional rise in the world’s temperature to kill them off entirely.
3. Ice Sheets melt. Much of the world’s water is held captive in ice sheets and glaciers primarily in Antarctica and Greenland. Rising temperatures have already led to massive amounts of that ice melting, with the resulting rise in sea level. If the melting continues or even accelerates then every inhabited coastal area of the world is threatened.
4. Atlantic Circulation stops. The Gulf Stream was first charted by none other than Ben Franklin back at the end of the 18th century and its effect on the climate of both the east coast of North America and western Europe has been well documented. Over the last few years however studies of the Gulf Stream have suggested that its circulation could be imperiled by rising temperatures, and even a modest reduction in the strength of the Gulf Stream’s flow could have a major impact on the climate of both the US east coast and Europe.
5. The disappearance of the great northern forests. Just to the south of the Arctic circle, spread across several continents, lies the world’s last great forest. Actually composed of several forests stretching from Alaska across northern Canada, Scandinavia and Russian Siberia, these forests, like the Amazon, absorb a large fraction of the greenhouse gases we are generating, helping to reduce somewhat the effects of climate change. And as with the Amazon jungle these forests are now under threat, the three main threats being heat, fire and bark beetles. In my post of July 14th 2021 I discussed the huge heat dome that formed over British Columbia last summer, which not only led to dozens of all-time Canadian temperature records being smashed but also triggered large wildfires, like the one that all but destroyed the little town of Lytton.
And to make matters worse those higher temperatures are just perfect conditions for the spread of bark beetles that are devastating millions of trees. The trees killed by bark beetles then become fuel for further wildfires leading to more release of CO2 and more global warming, a vicious cycle.
So what solutions have the IPCC scientists come up with that will hopefully prevent such massive damage to the Earth’s environment? Needless to say the first thing we must do as a species is reduce CO2 emissions by 43% before 2030, just eight years from now. Right now renewable sources of power, primarily wind and solar, only produce about 10% of the energy we use; the rest comes from burning oil, gas and coal. So a reduction of 43% in greenhouse gas emissions is going to require a huge effort, with an accompanying huge cost. In fact, instead of decreasing, current projections predict a 14% increase in greenhouse gas emissions by 2030.
But the scientists say even more is required. They say that in order to keep the global temperature rise below 1.5ºC above pre-industrial times, a goal that was agreed to by nearly every country on Earth at the Paris climate summit in 2015, we must also start to remove CO2 from the atmosphere.
Scientists around the world have developed several different techniques for carbon removal, techniques that could, if adequately funded and implemented at an industrial scale, really reduce the levels of CO2 in the air. Of course the problem is that phrase, adequately funded, because we’re talking tens if not hundreds of billions of dollars, and who’s going to pay for it? As you might guess there are few volunteers.
So, what’s going to happen this time? Not much it seems. With the war in Ukraine along with inflation and crime and all of the other distractions few people are even paying attention to what is happening across the entire planet.
Postscript: A conference of government officials from 153 nations has convened and the attendees are congratulating themselves on their pledges to reduce carbon emissions so that the global temperature rise will remain below 2ºC. I know what you’re going to say. Didn’t officials from those same countries pledge to keep the temperature rise below 1.5ºC just seven years ago in Paris?
They sure did, and now they are patting themselves on the back for covering up their failure by making new, equally meaningless pledges. I don’t think we can hope for our ‘leaders’ to do anything to tackle climate change until the planet is actually on fire.
If you think about it, the very first living creatures lived by just absorbing the nutrients in the water around them, not interacting at all with the other simple creatures nearby. After a few million years however some of those early life forms must have evolved to feed off of the dead remains of other creatures. And not too long thereafter, in geologic time at least, some creatures evolved to prey on their living fellows, and so the war of all against all (Bellum omnium contra omnes) began.
And many if not most of the anatomical features of those living things we call animals are intended to optimize their consumption of other organisms, plants in the case of herbivores and other animals in the case of predators. Today’s stories are all about some of the ways that evolution solved the problem of ‘Eat or be Eaten.’ As usual I will begin in the distant past and work my way forward in time.
Without doubt the ultimate form of ‘Eat or be Eaten’ would have to be cannibalism, where an animal literally preys upon and eats another member of its own species. In modern human civilization cannibalism is considered to be one of the most evil and horrible acts that a person can commit. It is worth considering however that cannibalism has been observed in more than 1,500 species, including us humans, and whether we like it or not there are some pretty good evolutionary reasons for it.
You see by preying upon another member of your species you not only gain a meal, you also eliminate a competitor for precious resources. For that reason cannibalism is often found in circumstances where food or other resources are scarce. Cannibalism does have its downside however, because if you’re not careful you could be eliminating a relative, a potential mate or even your own children, thereby reducing your share in the gene pool. And of course any species that practices cannibalism too much runs the risk of literally eating itself into extinction.
But just how long has cannibalism been a behavioral strategy used by living creatures? Think about it, solid evidence for cannibalism isn’t exactly easy to find in the fossil record. Now a paper published in the journal Palaeogeography, Palaeoclimatology, Palaeoecology has announced that indications of cannibalism can be found at a 514-million-year-old Cambrian period fossil site at a place called Emu Bay, on an island off the South Australian coast.
Emu Bay is one of those rare fossil sites where the preservation of specimens is so pristine that things like injuries and fecal material, called coprolites when fossilized, are easy to identify and analyze. The specimens found at Emu Bay consisted primarily of two large species of trilobites, Redlichia takooensis and Redlichia rex, and many of them had injuries that had healed. Now both trilobite species were large animals for that time, as much as 25 centimeters in length, so anything preying on them had to be at least as big as they were. That’s why the researchers, from the University of New England in Australia, believe that the cause of the injuries could have been another member of the same species.
Additional evidence came from an analysis of the coprolites that were found, most of which were more than 10% of the length of the trilobites themselves. Careful examination of the feces showed that they contained bits and pieces of shell material like the shells of the trilobites, a further indication that the trilobites would, at least on occasion, chow down on their own kind. Between the injuries and the shell fragments in the coprolites the paleontologists feel they have a compelling case for the existence of cannibalism more than half a billion years ago.
For vertebrate animals like us humans the need to feed efficiently led, millions of years ago, to the development of the structure that we all associate with eating, the jaw. The first jawed vertebrates appeared in the fossil record more than 400 million years ago as bones that had been used to support the gills of the first fish moved toward the mouth. Before long jaws had evolved into a wide variety of sizes and shapes that depended on both the type of food an animal ate and the method it used to feed.
(By the way the jawbones of modern vertebrates, as they develop after fertilization, follow the same developmental path that evolution did 400 million years ago during the Devonian period, and this includes you. That is, about 5 weeks after fertilization you had gills, just like every other fish, and the four bones that developed to hold those gills in place then became your jawbone and the bones of your inner ear. The gills then simply disappear once the bones have formed since you no longer need them.)
Now a new study has been published in the journal Science Advances that examines the variety of jaws that evolved so quickly back in the Devonian period. What the researchers at the University of Bristol’s School of Earth Sciences found was that, despite all of the different sizes and shapes of jaws that evolved 400 million years ago, two factors predominated, speed and strength, and these two factors often opposed one another.
Think about it: a predator certainly needs a quick jaw in order to seize its prey before it can get away, but if the jaw becomes too quick it can also become weak and brittle, and a broken jaw is a virtual death sentence for any animal. So there has to be an evolutionary trade-off between speed and strength in order for a predator to be able to successfully grab its dinner without injuring itself.
A similar argument can be made for a herbivore: the animal needs a quick jaw to be able to bite off as much food as it can as quickly as it can, because remember plant material usually has a lower energy content. Then the plant eater usually has to grind its food in order to get all possible nourishment out of it, and that requires a good strong jaw.
The researchers used data about jaw size and shape from all of the known early jawed fishes and developed a computer model to compare each for speed and strength. They also included a few theoretical jaw shapes in their analysis. The results of the model clearly showed just how quickly the optimum blend of quickness and strength evolved.
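At bottom the speed-versus-strength trade-off is simple lever mechanics. The sketch below is only a toy illustration of that principle, not the Bristol team’s actual model, and the lever lengths are made-up numbers.

```python
# Toy lever model of a jaw: the mandible pivots at the jaw joint, the
# muscle pulls at distance in_lever, the bite point sits at out_lever.
def jaw_tradeoff(in_lever: float, out_lever: float) -> tuple[float, float]:
    force_adv = in_lever / out_lever   # strength: bite force per muscle force
    speed_adv = out_lever / in_lever   # quickness: tooth speed per muscle speed
    return force_adv, speed_adv

# Their product is always 1: any gain in speed is paid for in strength.
for name, li, lo in [("fast snapper (long out-lever)", 2.0, 10.0),
                     ("strong crusher (short out-lever)", 2.0, 4.0)]:
    f, s = jaw_tradeoff(li, lo)
    print(f"{name}: force x{f:.2f}, speed x{s:.2f}")
```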
Now the jaw of a predator is certainly an offensive weapon, and in order to protect themselves from predators many herbivores evolved some kind of defensive armour. One of the best known examples of such defensive evolution is the family of dinosaurs known as the stegosaurs, with Stegosaurus itself having the characteristic two rows of bony plates along its back and long, sharp spikes on its tail that make it a tough meal for any hungry theropod.
Stegosaurs date from the middle Jurassic period to the early Cretaceous period, 160 million to 100 million years ago, but their early evolution is unclear. Now a new specimen from the Chongqing region of China may hold some answers. Dated to about 168 million years ago the animal, which has been named Bashanosaurus primitivus, is the oldest stegosaur from Asia, and perhaps the oldest ever found anywhere.
In fact the animal was given the species name primitivus because of the peculiar, primitive set of characteristics it possesses. Smaller than other known stegosaurs, with thicker, narrower plates along its back, B. primitivus also had spines sticking out to the side of its shoulders. These features make B. primitivus look quite different from other stegosaurs, but at the same time quite similar to other types of armoured dinosaurs like the first ankylosaurs that evolved about 20 million years earlier.
The paleontologists from the Chinese Bureau of Geological and Mineral Resource Exploration and Development, along with the Natural History Museum of London, who discovered and described Bashanosaurus hope that the fossil will shed light on the evolution of the stegosaurs. In any case the fact that armoured dinosaurs evolved so quickly and diversified so rapidly is just the flip side of jaws and claws in the eternal struggle to ‘Eat or be Eaten’.
It’s on the launch pad, years late and billions of dollars over budget, but the Space Launch System (SLS), the most powerful rocket since the venerable Saturn V that took astronauts to the Moon, is finally at Pad 39B at the Kennedy Space Center, ready for launch. Well, almost ready, because the engineers and scientists at both NASA and prime contractor Boeing still have a long list of tests and safety checks to perform before the actual first flight in the space agency’s Artemis program begins. The biggest test, known as the Wet Dress Rehearsal or WDR, is now scheduled for April 1-3.
The rollout of the massive SLS with its crew-capable Orion capsule took place on March 17th as the door of the Vehicle Assembly Building opened and the SLS began its long, slow journey to the launch site. The current schedule is for launch to take place no earlier than sometime in May. That first flight will be unmanned, with the second Artemis mission, the first that will actually take astronauts back to orbit the Moon, coming no sooner than 2024.
Update: The SLS was on its launch pad but after failing to complete the WDR three times NASA has decided to return the rocket to the Vehicle Assembly Building for repairs. Just another in a long series of delays and problems for the Artemis program that is years behind schedule and billions of dollars over budget.
And even as NASA begins the Artemis program to take human beings back to the Moon the space agency is making plans to also return to destinations much further away, the outer planets Uranus and Neptune. The only space probe to have visited those cold, dark worlds was Voyager 2, which flew past them in the late 1980s. At the time the data sent back by Voyager taught us more about the two outermost planets in our Solar System than we’d learned in more than a hundred years of observing them with Earth-bound telescopes. In the years since Voyager however astronomers have come up with thousands of questions about conditions on Uranus and Neptune that they’d love to see answered.
So plans are now being discussed for a joint NASA-ESA mission to the outer planets. Details are sketchy at the moment, even as to which planet will be visited, or maybe both. The best upcoming launch window for Uranus is 2030-2034 while that for Neptune is 2029-2030, so the particulars of the mission, along with the basic space probe design, will probably have to be finalized in the next year or so. One thing that has been decided is that the main probe will carry with it a smaller ‘entry probe’ like the Huygens probe that landed on Titan after being carried to Saturn by the Cassini spacecraft.
The journey to Uranus or Neptune will be a long one, anywhere from 11 to 15 years depending on the specifics of both the probe and the mission. Because the journey will take the probe so far from the Sun, using solar arrays to power the spacecraft will be impossible; sunlight simply isn’t strong enough out there. So the Uranus / Neptune probe will have to get its power from radioisotope thermoelectric generators (RTGs), just as both Voyagers, along with Cassini and the Galileo probe to Jupiter, did.
Sounds like an exciting mission, wouldn’t it be nice if they could find the money to send identical probes to each planet!
A sad note before I sign off. Eugene Parker died on March 15th at the age of 94. The highly regarded University of Chicago astrophysicist is best remembered for his 1957 prediction of the solar wind, the stream of charged particles that are constantly being emitted from the Sun’s atmosphere. That prediction was confirmed just five years later when the Mariner 2 space probe was constantly bombarded during its journey to Venus by just the sort of particles that Parker had predicted.
Eugene Parker is also remembered as the namesake of NASA’s Parker Solar Probe, which since its launch in 2018 has approached closer to the Sun than any other man-made object. The Parker probe was the first, and thus far only, space probe to be named for a living scientist. A fitting tribute to a man who advanced our knowledge of the Sun so much.
It may be the oldest problem in mathematics, and it’s a problem we deal with on a regular basis. How do we divide up a single object, let’s say a pie or cake, so that everyone gets a piece and there’s nothing left to go to waste? Remember some people, like my brother and me, like big pieces while some people, like my sister, want a smaller piece. In the end all of the various fractions that we cut that cake into have to add up to one, that one cake.
Put in mathematical terms the problem consists of finding a set of integers, let’s say the set (2, 3 and 6), the sum of whose reciprocals is one: 1/2 + 1/3 + 1/6 = 1. We know from archaeological evidence that this problem has been considered since the time of the ancient Egyptians but it has to have been around much longer. After all even Neanderthals had to carve up that deer they killed into pieces that added up to one.
Now of course it’s easy to cut up our cake into n pieces, each of which is 1/n of the whole, 8 pieces that are each 1/8th of the pie; pizza chefs get a lot of experience at doing that. Mathematicians however like to make things more complicated, so they want to consider solutions where each piece is a different size, and just to make things really interesting they prefer to only use fractions whose numerator is 1, like 1/2 or 1/8 or 1/124; such fractions are technically known as unit fractions because of the number 1 in their numerator. Using unit fractions mathematicians can then search for patterns in the denominators, like my example of 2, 3 and 6 above. In this way they can learn about the hidden structure in the numbers that we use every day.
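If you’d like to hunt for such sets yourself, here is a small sketch that searches for distinct unit fractions summing to 1. The function name and the cut-off of 20 on the denominators are just my own choices for the illustration.

```python
from fractions import Fraction

def unit_fraction_sets(target=Fraction(1), max_denom=20, start=2, chosen=()):
    """Yield tuples of distinct denominators whose unit fractions sum to target."""
    if target == 0:
        yield chosen
        return
    for d in range(start, max_denom + 1):
        f = Fraction(1, d)
        if f > target:
            continue  # 1/d is too big to fit in what's left of the cake
        yield from unit_fraction_sets(target - f, max_denom, d + 1, chosen + (d,))

for denominators in unit_fraction_sets():
    print(denominators)  # (2, 3, 6), (2, 3, 9, 18), (2, 3, 10, 15), ... among others
```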
Back in the 1970s this ancient problem got a new twist when the mathematicians Paul Erdős and Ronald Graham published a conjecture stating that any set of integers that is sufficiently large, a condition known as positive density, must contain a subset of numbers whose reciprocals add up to 1.
Problem was that few mathematicians, including Erdős and Graham themselves, had any good idea of how to prove the conjecture. So the whole idea just sat there for almost fifty years, until a mathematician named Thomas Bloom of Oxford University in England was given an assignment to present a 20-year-old attempt by Ernie Croot at proving a ‘colouring’ version of the Erdős-Graham conjecture. In this method numbers are sorted into different baskets by a designated colour. Using a branch of mathematics known as harmonic analysis, Croot was able to show that no matter how many baskets were used, at least one would contain a set of numbers fulfilling the Erdős-Graham conjecture. Croot used a type of integral called an exponential sum, which can count how many integer solutions there are to a problem. The problem is that exponential sums are almost always impossible to evaluate exactly, so Croot’s methodology was unable to answer the full, positive density version of the conjecture as originally stated by Erdős and Graham.
But reading Croot’s attempt did get Thomas Bloom thinking about the Erdős-Graham conjecture, and he brought his own expertise in combinatorial and analytic number theory to the problem. Bloom’s technique allowed him greater control over the approximation of the exponential sum, so that in the end he succeeded in proving, not a particular solution, but that the number of solutions was a positive integer, meaning there had to be one or more of them.
Just another example of how mathematicians can reexamine even the oldest of problems and still find new structure, new patterns. Showing once again that mathematics is the queen of the sciences.
The new baseball season has begun and I got to attend my first ever opening day game. By the way the Phillies defeated the Oakland Athletics by a score of 9 to 5. That kind of score should be typical of Phillies games this season, as the team looks to score a lot of runs but their pitching is kinda suspect.
One of the best things about the sport of baseball is that, with the action so spread out, it’s easy to follow all of the physics that’s happening down on the field. Whether it be the trajectory of a home run, a line drive up the middle, hey, even just a broken-bat ground out to shortstop, it’s all physics.
Of course some of the most interesting physics comes as the pitcher prepares to throw the ball to his catcher hoping that the batter will either swing at it and miss or at least hit the ball so weakly that one of the fielders can make a play and get an out. In order to accomplish this pitchers try to deceive the batter about the kind of pitch that’s coming. And pitchers have a wide variety of pitches that they can throw including fastballs, sinkers and curveballs as well as the infamous knuckleball along with variations on those pitches.
Now simple trajectories, like that home run, are often discussed in freshman physics classes by ignoring the effect of air resistance, not a bad approximation if the wind isn’t blowing too hard. The motions that pitchers put on a ball, however, cannot be approximated in that way because they are all due to the interaction between the ball and the air molecules through which it moves. And the most important factor in determining how the trajectory of a pitch deviates from a trajectory without air is the direction and orientation of the spin that the pitcher puts on the ball as he releases it.
Everybody knows that spin has two distinct directions, sometimes called clockwise and counter-clockwise or right-handed and left-handed. For a ball traveling more or less horizontally, whose axis of spin is both horizontal to the ground and perpendicular to the direction in which the ball is traveling, those spin directions can be referred to as top-spin, where the top of the ball is rotating in the direction that the ball is traveling, and back-spin, where the bottom of the ball is rotating in the direction that the ball is moving. See diagrams below. Later on we will consider what happens when that axis of spin is not horizontal and perpendicular to the ball’s motion.
That spin on the ball as it moves through the air generates a difference in pressure on the top and bottom of the ball, causing a force on the ball due to what is known as the Magnus effect. In the Magnus effect the side of the ball moving in the direction of travel has the greater pressure, and so the ball is pushed the other way. This means that topspin produces a downward force causing the ball to drop faster than it would in a vacuum. This sort of pitch is known as the sinker because it does just that, dropping faster than the batter anticipates, causing him to either miss it entirely or hit a weak ground ball somewhere.
Backspin does exactly the opposite, generating an upward force so that the ball seems to rise, hence a rising fastball. In actuality however the ball is still dropping due to gravity but it doesn’t drop as fast as it would in a vacuum. In this case the intent is to make the batter either miss the pitch or get under it, popping the ball up so that a fielder can catch it for an out.
Back in the late 1950s a physicist at the National Bureau of Standards named Lyman J. Briggs undertook a study of the way in which the Magnus effect could change the trajectory of a baseball under typical game conditions. What he found was that the change in position when the ball arrives at the plate was proportional to the amount of spin the pitcher had put on the ball and to the square of the ball’s horizontal speed. For pitch speeds of 70 to 100 miles per hour and spins of 20-30 revolutions per second the change in position ranged from 10.8 to 17.5 inches. (Yes I know, I’m using Imperial units, please forgive me, but this is baseball, where the bases are 90 feet apart, the distance from the pitching rubber to home plate is 60 feet 6 inches and a baseball weighs between 5 and 5.25 ounces.)
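To get a feel for the numbers, here’s a very crude back-of-the-envelope sketch, not Briggs’s actual method: it assumes a constant Magnus force with a simple linear lift model and ignores drag, yet it lands in the same ballpark of roughly a foot of break.

```python
import math

# Crude Magnus-break estimate for a pitched baseball.  All parameter
# values and the linear lift model are assumptions for illustration.
RHO = 1.23          # air density, kg/m^3
MASS = 0.145        # baseball mass, kg (about 5.1 oz)
RADIUS = 0.0366     # baseball radius, m
AREA = math.pi * RADIUS ** 2
DIST = 18.44        # pitching rubber to home plate, m (60 ft 6 in)

def magnus_break_inches(v_mph: float, spin_rps: float) -> float:
    """Deflection at the plate from a constant Magnus force, drag ignored."""
    v = v_mph * 0.44704                    # mph -> m/s
    omega = 2 * math.pi * spin_rps         # spin rate, rad/s
    spin_param = RADIUS * omega / v        # dimensionless spin parameter
    c_lift = 0.6 * spin_param              # crude linear lift coefficient
    accel = 0.5 * RHO * c_lift * AREA * v ** 2 / MASS
    t = DIST / v                           # time of flight
    return 0.5 * accel * t ** 2 / 0.0254   # deflection, m -> inches

for v, s in [(70, 20), (85, 25), (100, 30)]:
    print(f"{v} mph at {s} rev/s: {magnus_break_inches(v, s):.1f} inches of break")
```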
OK, so we’ve discussed the sinker and the rising fastball, pitches that seem to go either down or up depending on the spin, but what about pitches that move sideways, like the curveball or screwball? Well, you remember I assumed above that the axis of rotation of the ball was horizontal and perpendicular to the direction that the ball is moving. What if we remove that constraint and allow a right-handed pitcher to rotate the spin axis about 45º clockwise? In that case the Magnus effect will cause the ball to move laterally to the left, a standard curveball. For a left-handed pitcher the curveball is produced by rotating the spin axis about 45º counterclockwise, and the ball will move laterally to the right.
And when a right-handed pitcher rotates the spin axis of the ball counterclockwise, so that it moves to the right, or a left-handed pitcher rotates the spin axis clockwise to make it move left, you get a screwball. The reason the pitch is known as a screwball is that it is so rarely seen that its motion seems really weird, and the reason it’s so rarely seen is that it’s so damned hard to throw.
We’ve covered most of the standard, best known pitches but I’ll finish off today with the pitch that every batter, and most pitchers really hate, the knuckleball. The essence of the knuckleball is that the pitcher does his best to put no spin on the ball, eliminating any contribution to the motion of the ball due to the Magnus effect.
That way, as the ball moves toward the plate it gets pushed about by every little breeze, every little pocket of turbulence. A well thrown knuckleball floats and darts this way and that so that neither the batter nor the pitcher knows where it’s going to end up. A poorly thrown knuckleball does nothing, making it an easy target for the batter to drive out of the park. So as we begin another season of our national pastime it’s worth remembering how baseball is really all about the physics!
Processed foods are nothing new; smoking, salting and pickling of meats and vegetables have been common practices for thousands of years. Much of early human chemistry was devoted to processing foods for the purpose of preventing them from spoiling. In our modern world we may be able to go to the supermarket to buy fresh food whenever we want, but for most of human history processing food during the summer and autumn was the only way to make certain that you’d have food to eat during the long winter.
One problem with any method of processing however is that it always removes or reduces some of the nutritional value of the food, especially the food’s vitamins, which are rather delicate chemical compounds. Still, if the only thing you have to eat in the middle of January is some low-nutrient smoked bacon and pickled cabbage, also known as sauerkraut, you’ll eat it and get your vitamins from fresh food during the summer.
Over the last two centuries there has been a revolution in new methods for processing foods. Canned foods and frozen foods are now common along with many kinds of chemical preservatives that help keep food from spoiling. Supermarkets of course love such preserved foods because they can sit on the store’s shelves for months until somebody buys them while any fresh food that isn’t bought quickly has to be thrown away at a financial loss to the market.
As more and more of the foods we eat have become processed foods, low nutrition has slowly become a bigger and bigger problem. To make matters worse the food manufacturers have found ways to make their processed foods actually taste better than fresh food, usually just by increasing the fat content or the sugar content or even just by adding more salt, things that in large amounts are actually bad for our health.
Meanwhile convenience stores like 7-11, Wawa or Royal Farms are becoming ever more popular by selling a wide variety of processed foods without the added space and expense necessary for fresh meats and vegetables. The same is true of the innumerable ‘Mom and Pop’ grocery stores that seem to exist on nearly every block in most cities. These two types of stores have in fact taken over much of inner-city America, so that large sections of many big cities have become ‘Food Deserts’ where the only food that is readily available is unhealthy processed food instead of fresh, nutritious food.
The result of this heavy reliance on high-calorie, low-nutrition food has been an epidemic of obesity in this country. And with obesity come all the health risks associated with it, especially heart disease.
So what can we do, go back to fresh foods with a very limited shelf life? Many health conscious people are doing exactly that, even to the extent of growing some of their own food, either in their backyard or in an ever-increasing number of community gardens. However there are simply too many people on this planet today for that to be a complete solution, if only because of the increase in waste caused by uneaten fresh food going bad.
So why can’t the scientists and chemical engineers who develop processed foods find a way to make them more nutritious, lower in fat and just plain healthier? In fact there have been many attempts to do just that. Milk and orange juice have for many years been fortified with vitamins, while several brands of breakfast cereal provide needed fiber along with loads of vitamins.
Problem is that these healthy foods just don’t taste as good as the ‘bad foods’ do, making it hard to convince people to switch. More work needs to be done to make processed foods better tasting and even more nourishing.
Enter David Edwards, Professor of Engineering at Harvard University, Founder and Board Member of Incredible Foods Inc. and now operator of the restaurant Café ArtScience in Cambridge Massachusetts. For years Professor Edwards has been at work developing new varieties of food products that are delicious, nutritious and have zero impact on the environment.
Take WikiWater for example. The water is contained within an edible, vitamin- and nutrient-packed skin, which in turn sits inside a hard shell made of a biodegradable corn-derived protein, no plastics at all. Edwards hopes that WikiWater will replace current plastic water bottles and help lessen the thirst of people in third world countries. Less trash with better nutrition, sounds like a good idea to me!
Professor Edwards made his first big contribution with an inhalable form of insulin for diabetics. Since founding Incredible Foods he and his team have been busy creating a new line of products they call ‘Food Berries’, small fruit-flavoured snacks contained inside an edible skin that is not only packed with vitamins but also gives the Food Berry a considerable shelf life. There are also hummus and yogurt varieties of Food Berries, along with a frozen, ice-cream style.
So yes we can develop new types of food that are tasty, healthy, long lasting and environmentally friendly. Thanks to scientists like David Edwards we have the technology, we can have processed foods that are actually better than fresh foods. All we need is for our leaders to recognize the problem and do something to solve it.
Back about fifty years ago the science of Geology underwent a revolution in thought as overwhelming evidence supporting the theory of ‘Plate Tectonics’ was uncovered. The basic idea of plate tectonics is that the surface of the globe is broken into a number of plates that the continents sit upon. Those plates move extremely slowly, only centimeters per year, but they do move, and as they move they jostle and crash against one another, causing earthquakes to occur and mountain ranges and volcanoes to be born.
Sometimes one plate is forced under another, and when that happens a ‘subduction zone’ is created. One of the geologic features that can occur in such a zone is a deep-water trench such as the Marianas Trench, the deepest place in all of the oceans. The Marianas Trench is in fact only one of about a dozen trenches that are a part of the famous ‘Ring of Fire’ surrounding the Pacific Ocean. The precise mechanics of how these subduction zones are generated is very complicated, and several attempts have been made to develop numerical models for analyzing them with computers.
Now a new such model, developed at the Instituto Dom Luiz at the University of Lisbon in Portugal, has shown great promise in providing a more comprehensive and accurate picture of subduction zone evolution. This new simulation differs from previous models in that it is a full scale, three-dimensional reproduction of what is going on at a subduction zone. In the program all of the dynamic forces that affect the generation and evolution of subduction zones are realistically incorporated, including gravity.
Such large scale simulations can require a lot of computer time; in fact each analysis using this new model takes as much as a full week to process on the supercomputer at Johannes Gutenberg University in Germany. Still the results are well worth the effort. According to Jaime Almeida, first author on the study: “Subduction zones are one of the main features of our planet and the main driver of plate tectonics and the global dynamics of the planet.”
Plate Tectonics has taught us much about the broad outline of how the surface of our Earth has changed over billions of years. A more precise and accurate model of the processes involved, however, may help us better understand, and therefore predict, disasters like the earthquakes and volcanic eruptions that are a common threat around the world.
Now I’d like to take a moment to update a geology story that I posted about back on the 24th of June 2020 and the 10th of April 2021. The story concerned the discovery of two huge, massive blobs that exist deep within the Earth’s mantle. These blobs are formally known as Large Low-Shear Velocity Provinces (LLSVPs) and differ in composition and viscosity from the surrounding material deep within the Earth. (Previously these blobs were known as Ultra Low Velocity Zones or ULVZs). The LLSVPs were detected because, being made of different materials, the vibrations caused by earthquakes travel through them at a lower velocity, hence Low-Shear Velocity. They were discovered by analyzing the data from hundreds of earthquakes as measured by seismographs around the world.
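The ‘shear velocity’ in the name is just the speed of the transverse, S-type earthquake waves, which for an elastic material is set by its rigidity and density:

$$ v_s = \sqrt{\frac{\mu}{\rho}} $$

where $\mu$ is the shear modulus (stiffness) and $\rho$ is the density. A blob that is less stiff, or denser, than the surrounding mantle therefore slows the S-waves down, which is exactly what the seismographs pick up.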
The two LLSVPs are situated one beneath southern Africa and the other beneath the Pacific Ocean, and each is the size of a continent with a thickness of greater than 500 km. It has also been speculated that the blobs may in fact be the remnants of an ancient planet called Theia that collided with the Earth four and a half billion years ago, fragments of which then became our Moon.
Now a new analysis of the LLSVPs by Qian Yuan and Mingming Li of Arizona State University’s School of Earth and Space Exploration has been published in the journal Nature Geoscience. In the article the researchers assert that the LLSVP under Africa extends almost 1000 km further from the center of the Earth, and is therefore closer to the surface, than the one under the Pacific. In an attempt to explain this difference in height the researchers hypothesize that the African LLSVP could be less dense, and therefore it may be ever so slowly rising through the Earth’s mantle. “The Africa LLSVP may have been rising in recent geological time,” states author Li. “This may explain the elevating surface topography and intense volcanism in eastern Africa.”
It is harder to study what goes on just a few hundred kilometers beneath our feet than it is to study the surface of the Moon or Mars; certainly we’ve sent more probes to the Moon and Mars than we have to a hundred kilometers down. Nevertheless, bit by bit, geologists are learning the secrets of the planet we all call home.
Twenty years ago the idea that some of the largest galaxies possessed a ‘Supermassive Black Hole’ at their center was a major discovery. Since that time more and more evidence has accumulated that every galaxy, even many small ones, possesses such a black hole, with a mass anywhere from tens of millions to billions of times that of our Sun. One of the major questions in astronomy today is whether supermassive black holes came first and formed galaxies around them, or whether the formation of galaxies leads to the creation of supermassive black holes. By the way, this is a question that it is hoped the new James Webb Space Telescope may provide some evidence to help answer.
One thing we do know is that big galaxies form by combining smaller galaxies, or more often by a big galaxy gobbling up a small one. Our own Milky Way is now known to have gobbled up as many as a half dozen smaller galaxies over the last billion years or so. So what happened to the supermassive black holes in those now consumed galaxies? Are they wandering around somewhere in our galaxy, or were they absorbed by the Milky Way’s own supermassive black hole?
Probably both. If the two galaxies strike each other in a glancing blow the black holes at their centers may never come within tens of thousands of light years of each other and may wander around separately for billions of years. On the other hand astronomers think that sometimes the black holes can become entangled and will then begin to orbit each other. If that occurs the two supermassive black holes will start to emit gravity waves so that slowly the energy of their orbit will radiate away causing them to move closer and closer until they merge.
Evidence for the latter scenario has recently been uncovered and published in the Astrophysical Journal Letters. The evidence comes from a black hole situated in a galaxy about 9 billion light years away, which you will remember means that the events we are watching actually took place 9 billion years ago. The supermassive black hole, which has been designated PKS 2131-021, is devouring a considerable amount of matter, a small amount of which is escaping from the black hole in the form of a high-energy jet. Such objects are called blazars, and it so happens that PKS 2131-021’s jet is pointing right at Earth, giving us an excellent look at what is going on.
And recent observations have shown that the energy from PKS 2131-021 fluctuates on a regular basis: around every two years the intensity dips slightly, only to soon recover. By checking data from five observatories going back 45 years the researchers confirmed their own observations.
The astronomers hypothesize that the cause of the variation could be another supermassive black hole in a tight orbit around PKS 2131-021, the tightest known orbit for a pair of supermassive black holes. Using Einstein’s Theory of Gravity the astronomers have calculated that the two black holes should merge in about 10,000 years or so, and when they do they will produce massive amounts of gravity waves that will shake the fabric of space-time throughout the observable Universe.
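For a flavour of where a number like 10,000 years comes from, the textbook result (due to Peters) for two black holes of masses $m_1$ and $m_2$ on a circular orbit of separation $a$ gives the time for gravitational radiation to bring them together, though the authors’ actual modeling is surely more detailed:

$$ t_{\mathrm{merge}} = \frac{5}{256}\,\frac{c^5\,a^4}{G^3\,m_1 m_2\,(m_1 + m_2)} $$

The fourth-power dependence on the separation $a$ is why only the tightest pairs, like this one, merge in what is, astronomically speaking, the blink of an eye.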
In previous posts, 7 October 2017, 22 October 2017 and 23 September 2020, I have talked about the LIGO and Virgo laser gravity wave observatories and how over the last ten years they have succeeded in capturing the final outbursts from mergers of several pairs of stellar mass black holes, black holes with masses a few to tens of times that of our Sun. So far however they haven’t observed gravity waves from pairs of supermassive black holes; such events are very rare, even in the entire Universe. Perhaps with a few more upgrades however they might be able to start picking up the gravity waves already coming from PKS 2131-021.
Astronomers will continue to study PKS 2131-021, with both gravity wave observatories and more old-fashioned telescopes, hoping to learn more of its secrets. The more astronomers observe the Universe the more common Supermassive Black Holes have proven to be, so it’s a good question: does the Universe consist of galaxies of stars with Supermassive Black Holes at their hearts, or of Supermassive Black Holes with halos of stars around them?