The Lionfish is one of the latest invasive species to threaten large-scale destruction of habitat within the US.

Invasive species are defined as populations of living creatures that have been transported from their natural habitat and become established in another ecosystem perhaps thousands of kilometers away. Sometimes this movement is a natural occurrence, such as when a few finches were somehow blown onto the Galapagos Islands, became established and evolved into some fifteen recognized species. Indeed such rare but natural transplanting of species is considered to be a driving force in evolution as the relocated population adapts to its new environment.

The 15 Species of Finches inhabiting the Galapagos Islands are all descended from a few finches that somehow survived being blown to those distant islands. (Credit: Pinterest)

More often, however, it is human beings who have transported the creatures, either intentionally or accidentally. One example is the common saltwater aquarium fish the lionfish, any member of the 12 species of the genus Pterois but particularly P. volitans and P. miles. See images below.

Pterois volitans, the Red Lionfish is a popular aquarium fish that should only be kept by a very experienced hobbyist. (Credit: Wildlife Society)
Pterois miles is another popular species of Lionfish. (Credit: Enalia Physis)

Lionfish are native to the Indian and western Pacific oceans, where they are a predatory species feeding on small fish and invertebrates. Adult lionfish are generally 20-40cm in length and can weigh more than a kilogram. Their numerous spiny fins and colourful stripes have made them popular aquarium fish, even though the animal’s spines are venomous and can produce a painful sting along with nausea, vomiting, convulsions and numerous other ill effects. Because of the danger posed by their spines, lionfish should only be kept by the most experienced of aquarium hobbyists.

The natural home of the lionfish is the blue (P. miles) and green (P. volitans) shaded regions. They are a destructive, invasive species in the red areas. (Credit: USGS)

Even though lionfish are popular pets, it appears that some aquarium keepers along Florida’s Atlantic coast decided their pets were more trouble than they were worth and released them into the ocean. Once free the lionfish began doing what fish do, and without their natural predators the lionfish population has exploded. Lionfish are now regularly found along the US coastline from Cape Hatteras in North Carolina to Texas and throughout the Caribbean islands.

The destruction caused by lionfish consists mainly of their preying on native species, particularly the young of valuable game species. It is estimated that the increasing lionfish population could lead to a reduction of 80% in the biodiversity of Gulf and Caribbean coral reefs.

To combat their spread government and private conservation groups are developing programs to eradicate the lionfish from the waters they are now infesting. Currently biologists and fishermen are working to develop special traps and even robotic hunters that will catch lionfish without harming native species. At present however the most efficient technique for dealing with lionfish is spearfishing by scuba divers.

Spearfishing is presently the best method for controlling the population of these predators. (Credit: Deeperblue.com)

One helpful fact is that lionfish are quite tasty if you fillet them properly; remember, the venom is in the spines, not the flesh. So if oceanic scientists do actually develop a technique for large-scale culling of lionfish, don’t be surprised if someday you see lionfish offered at your local fish market.

Broiled Lionfish with Paprika and Herbs

Until then contests and fishing tournaments are being organized to increase interest in harvesting lionfish all along the eastern and gulf coasts. The Florida Keys National Marine Sanctuary has even gone so far as to license divers to hunt lionfish within its boundaries, a thing almost unheard of for a wildlife sanctuary.

Poster for a lionfish catching tournament. (Credit: The Woody Foundation)

Eventually lionfish will simply become a normal part of the marine environment along the southern US coast. In time other animals will learn to prey on them and that will impose a control on their population. In fact it appears that sharks may be immune to the lionfish’s venom; some scientists are even trying to teach sharks to prey on lionfish.

How much damage the lionfish will do to the biodiversity of the Gulf and Caribbean before then, however, can only be guessed at right now. A lot of trouble because of a few people who didn’t want to take care of the animals they bought thinking they looked really cool!

Sum of three cubes problem finally solved for the number 42, the last of the numbers below 100 to be solved.

Mathematicians like to solve problems, that’s what doing math is after all, solving problems. In fact mathematicians enjoy problem solving so much that they often make some up just in order to have the fun of solving them.

Of course Mathematicians also enjoy a good joke! (Credit: SketchUp Community)

One such problem is the sum of three cubes for the integers between 0 and 100. The problem was initially posed in 1954 at Oxford University and is an example of what mathematicians call a Diophantine equation, an equation whose solutions are required to be whole numbers.

The problem, simply stated, is: can a solution be found for the equation:

x³+y³+z³=k                                                 (equation 1)

Where k is an integer between 0 and 100 and x, y, and z are integers not necessarily between 0 and 100 nor even positive.

One example is easy to construct:

1³+2³+3³=1+8+27=36                                 (equation 2)

Using that solution, and remembering that x, y or z can be negative quickly gives three more solutions.

(-1)³+2³+3³= -1+8+27=34                      (equation 3)

1³+(-2)³+3³=1-8+27=20                                   (equation 4)

(-1)³+(-2)³+3³=-1-8+27=18                          (equation 5)

I’ll give one more playful solution:

2³+3³+4³=8+27+64=99                                (equation 6)

Again using negative integers quickly allows three other solutions to be constructed but I’ll leave them for the student to discover as they say.

You can play at finding individual solutions for a while, but if you try to work methodically starting at zero you quickly run into problems. For example, zero itself only possesses the trivial solution:

(a)³+(-a)³+(0)³=0                                        (equation 7)

Where a is any integer. If a non-trivial solution existed for zero it would in fact be a counterexample to Fermat’s famous Last Theorem, since it would give a solution of x³+y³=(-z)³ in non-zero integers.

For k=1 or 2 there are in fact families of solutions. For k=1:

(9b⁴)³+(3b-9b⁴)³+(1-9b³)³=1                            (equation 8)

Where b can be any integer. The family of solutions for k=2 is:

(1+6b³)³+(1-6b³)³+(-6b²)³=2                            (equation 9)

Again b can be any integer. To check these solutions it’s instructive to give them a try for a nice small number like b=2 and see how they work out!
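These families are also easy to check by machine. Here is a minimal sketch in Python (the function names k1 and k2 are just my own labels for equations 8 and 9):

```python
def k1(b):
    # Family of solutions for k = 1 (equation 8)
    x, y, z = 9*b**4, 3*b - 9*b**4, 1 - 9*b**3
    return x**3 + y**3 + z**3

def k2(b):
    # Family of solutions for k = 2 (equation 9)
    x, y, z = 1 + 6*b**3, 1 - 6*b**3, -6*b**2
    return x**3 + y**3 + z**3

# Try b = 2, as suggested above:
print(k1(2))  # 1
print(k2(2))  # 2
```

Running this for any integer b, positive or negative, always returns 1 and 2 respectively.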

By the way you are allowed to use the same integer more than once as in this solution for k=3:

1³+1³+1³=3                                                 (equation 10)

OK, so we’ve found some of the easy solutions, but finding solutions for most integers quickly becomes very difficult. So difficult, in fact, that many solutions only became possible with the aid of electronic computers. Even with the assistance of the world’s best computers solutions for some integers proved elusive.
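A naive brute-force search makes the difficulty concrete. The sketch below (the helper name is my own invention) tries every combination with |x|, |y|, |z| up to some bound and records which k between 0 and 100 it can reach; even a generous bound leaves many values of k unaccounted for:

```python
def three_cubes_up_to(bound, kmax=100):
    """Return the set of k in [0, kmax] expressible as x^3 + y^3 + z^3
    with |x|, |y|, |z| <= bound."""
    found = set()
    rng = range(-bound, bound + 1)
    for x in rng:
        for y in rng:
            for z in rng:
                k = x**3 + y**3 + z**3
                if 0 <= k <= kmax:
                    found.add(k)
    return found

found = three_cubes_up_to(50)
missing = sorted(k for k in range(101) if k not in found)
# 'missing' includes every k of the form 9m+4 or 9m+5 (which can never
# be a sum of three cubes, since cubes are 0, 1 or 8 modulo 9), plus
# hard cases like 33 and 42 whose solutions involve enormous integers.
print(missing)
```

Raising the bound shrinks the list only very slowly, which is why the remaining cases needed supercomputers.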

Indeed, after 65 years two numbers remained for which there was no known solution, 33 and 42. Then earlier this year the University of Bristol mathematician Andrew Booker managed to grab a couple of weeks of time on the university’s supercomputer. The solution he obtained for 33 is:

Professor Andrew Booker of Bristol University found the solution to 33 and collaborated on the final solution, 42. (Credit: Phys.org)

8866128975287528³+(-8778405442862239)³+(-2736111468807040)³=33                                                                       (equation 11)

So now the only number left without a solution was 42. Fans of the British radio/TV/book series ‘The Hitchhiker’s Guide to the Galaxy’ by Douglas Adams may recognize 42 as the answer to the ultimate question of ‘Life, the Universe and Everything!’ That the final number lacking a solution should be a fan favourite is of course just a coincidence. Nevertheless a solution for 42 proved to be an order of magnitude more difficult to obtain.

The Hitchhiker’s Guide to the Galaxy began as a radio program on the BBC, was made into a television series, then a series of books and finally a movie. (Credit: Amazon)

Realizing he needed even more computing power, Booker teamed up with MIT professor Andrew Sutherland, an expert in parallel computing, a technique in which numerous computers work simultaneously on different parts of the same problem. Professor Sutherland set up a massive ‘planetary computing platform’ using the spare, unused time of half a million home PCs.

Another Andrew, Professor Andrew Sutherland of MIT who organized the computer system that succeeded in cracking the solution to 42. (Credit: MIT Math)

It took a million hours of computing time, which is still only about 2 hours per computer after all, but the solution for 42 was finally found.

(-80538738812075974)³+80435758145817515³+12602123297335631³=42                                 (equation 12)
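Finding these solutions took enormous effort, but checking them takes only a moment, since Python’s integers have arbitrary precision. A quick verification of equations 11 and 12:

```python
# The record solutions for 33 (Booker) and 42 (Booker & Sutherland)
x33 = (8866128975287528, -8778405442862239, -2736111468807040)
x42 = (-80538738812075974, 80435758145817515, 12602123297335631)

# Cube each term and sum; exact integer arithmetic, no rounding
print(sum(v**3 for v in x33))  # 33
print(sum(v**3 for v in x42))  # 42
```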

So solutions have now been found for all of the numbers k between 0 and 100 that can have one (numbers of the form 9m+4 or 9m+5 provably have no solution, since cubes can only equal 0, 1 or 8 modulo 9), but there’s no reason to stop at 100, is there? The smallest number still without a known solution is 114, so get out a pencil and paper and get busy!

The Loch Ness Monster is in the news again. Is there any actual evidence to support the existence of this legendary creature?

The Loch Ness Monster may not get as much publicity as Flying Saucers or Bigfoot do but it’s really the same sort of phenomenon: a few hints of something strange in the historical record, a few sketchy sightings of something that can’t be identified. Once a couple of stories are published in the press it suddenly seems as if everybody is talking about it. Then the number of people who claim to have seen it explodes. Before long the hoaxers join in and you lose all sense of what is legitimate evidence and what has been fabricated in order to make a quick buck.

Toss a hubcap in the air and you too can see a flying saucer! (Credit: SETI Institute)

Finally, after years of sightings with no hard physical evidence to back anything up the public splits into two distinct groups, those who are true believers and those who think it’s all a bunch of humbug. This state of affairs can go on for years with accusations of government cover-ups being added in as an excuse for the lack of real proof.

For the Loch Ness Monster the earliest known report of the creature comes from a biography of the Irish monk Saint Columba, describing events said to have occurred around the year 565 CE. In that account a ‘water beast’ in Loch Ness has killed a man and threatens one of the saint’s followers. Columba saves his companion by making the sign of the cross and commanding the beast to leave. Believers in the monster point to this story, along with other Celtic folklore about water ‘kelpies’, as evidence that the beast has lived in the loch for centuries.

Legend has it that St. Columba chased a ‘water beast’ from Loch Ness. (Credit: Anomalies)

The monster, commonly known as Nessie, first gained worldwide attention in the 1930s with a description of an encounter by George Spicer and his wife, who described the creature as a long snake- or eel-like animal some 8 meters in length and a bit over a meter in height. Although the Spicers saw no limbs on the creature, it crawled across the road and disappeared into the loch.

It was just a year later, on the 21st of April 1934, that the most famous picture of the Loch Ness Monster first appeared in the British newspaper the ‘Daily Mail’. The photo came to be known as ‘The Surgeon’s Photograph’ because the Daily Mail had obtained it from a London gynecologist named Robert Kenneth Wilson, although, significantly, Wilson refused to have his name associated with the image.

The Daily Mail headline showing the Surgeon’s Photograph of the Loch Ness Monster. Notice how there is nothing else in the image to give you an idea of the size of the ‘Monster’. (Credit: PBS)

The photo caused an immediate sensation and quickly led to the best-known explanation of the monster as a plesiosaur, an aquatic reptile that went extinct at the same time as the dinosaurs. The idea that a small population of these creatures had somehow survived extinction and was now inhabiting Loch Ness, and perhaps other lakes around the world gained considerable popularity.

Plesiosaurs are aquatic reptiles that are considered to have become extinct at the same time as the dinosaurs. (Credit: Dinosaur Jungle)
‘Champ’ in Lake Champlain is considered to be a relative of the Loch Ness Monster. (Credit: CBS News)

It was only decades later in 1994 that the photo was revealed as a complete fake. The body of the creature was nothing more than a toy submarine bought at Woolworth’s department store to which a neck and head made of wood putty were added. The one-meter long counterfeit was simply floated into Loch Ness and photographed, an object lesson in how easy it can be to fool millions of people who want to be fooled.

Of course one fake, however famous, doesn’t mean that there isn’t something unusual in Loch Ness. After all, a lot of people have reported seeing something, and they’re not all hoaxes.

Indeed they’re not; in fact there have been some legitimate scientific attempts to discover what, if anything, is hiding in Loch Ness, and a few of them have produced tantalizing hints of something. Perhaps the best known is the 1972 expedition organized by the Academy of Applied Science and led by Robert H. Rines. The team employed sonar apparatus in a methodical search of the loch for any large objects beneath the surface. Then, any time a large object was detected by the sonar, an underwater camera with a floodlight recorded an image of the object. On August 8th the sonar detected a moving target some 6 to 9 meters in length. At the same time the underwater camera took a picture of what looked like a diamond-shaped ‘fin’.

Two images of the ‘Fins’ of the Loch Ness Monster taken in 1972. (Credit: MIT)

That’s the best scientific evidence for the existence of the Loch Ness Monster. The problem is that the 6-9 meter target could very easily have been a school of small fish, while the picture of the fin is so blurry that it could be almost anything. Still, a half dozen other investigations have produced nothing better.

Now a new approach has been used in the search for Nessie: environmental DNA (eDNA). The idea behind eDNA is that samples from any body of water will contain some genetic material from all of the species of animal and plant that live in that body of water. Analyzing that DNA tells scientists what species live in the water without having to actually observe or capture a single specimen.

Any animal whose excretions wind up in a body of water can be discovered using eDNA. (Credit: WildlifeSNPits)

Researchers from the University of Otago in New Zealand have performed such an analysis on over 200 water samples from various places in Loch Ness. In particular the scientists were looking for the presence of reptile DNA that would provide evidence for the existence of a population of plesiosaurs.

The study found DNA from some 3,000 species of plant, animal and even bacteria, but no trace of reptile DNA of any kind. They also failed to find DNA from large species of fish such as shark, catfish or sturgeon, animals that have been suggested as possibly being responsible for the monster sightings.

Professor Neil Gemmell with a sample of water from Loch Ness. No Nessie DNA was found. (Credit: Time Magazine)

What the scientists did find was the DNA of the well-known animals of northern Scotland, strong evidence that there is nothing unusual in the loch. They also found what they considered to be a large amount of eel DNA in every sample tested, leading team leader Neil Gemmell to suggest that a giant eel might be the best candidate for Nessie. “It’s at least plausible,” Dr. Gemmell asserts.

The Loch Ness Monster, nothing more than a big eel? Not much to show for almost 1,500 years of hullabaloo.

What ever happened to the uranium fuel from Nazi Germany’s attempt to build a nuclear reactor?

Nearly everyone knows the basic outline of this story; it is, after all, one of the most important series of events of the 20th century. In the late 1930s, while the threat of a coming world war grew, physicists were learning the secrets of the atom and wondering if it could be possible to release the tremendous energy contained within the nucleus, both for power generation and for weapons.

The process of Uranium Fission. Started by a single neutron the process releases both energy and more neutrons to produce a chain reaction. (Credit: Nuclear-Power.net)

The countries that would become the allied nations feared that Nazi Germany could become the first to develop an atomic bomb. After all, both the theory of relativity and quantum mechanics were first conceived by Germans, and many of the leading researchers in sub-atomic physics were German. In fact the scientists who first succeeded in splitting atoms of uranium, Otto Hahn and Fritz Strassmann, were both German, and their fission experiment was performed in Berlin!

The Experimental Apparatus used to first split the nucleus of Uranium (Credit: J. Brew / Flickr)

Hoping to beat the Germans to the bomb the Americans, with help from the British, organized the massive ‘Manhattan Project’. The American program did succeed in producing the first nuclear weapons but not until several months after Nazi Germany had been defeated. In fact when allied scientists searched through the rubble of Hitler’s Reich for Nazi scientists and technology they were surprised to discover how little progress the German nuclear physicists had made.

The Manhattan Project succeeded in developing the first atomic bomb whose first test was the Trinity Test. (Credit: Wikipedia)

There were many reasons why the Nazi atomic bomb program failed. One reason worth considering in today’s political climate is how the Nazis’ own racism forced some of the world’s greatest minds to flee Europe. Men like Albert Einstein, Niels Bohr, Erwin Schrödinger, Hans Bethe, Max Born and many others all fled, and several of them would contribute to the Manhattan Project, helping America develop the bomb first.

Albert Einstein was just one of dozens of German scientists who fled their country to escape the Nazis. The loss of their talents weakened the German nuclear program. (Credit: Viva)

There were other reasons as well; one interesting one was the Nazis’ tendency towards an almost feudal disorganization in their nuclear program. In fact the German nuclear program was more like nine distinct programs, each with its own director, each setting its own agenda and goals with little coordination between the different groups. In contrast the Manhattan Project had one boss, Major General Leslie Groves, who, with his science advisor Robert Oppenheimer, made certain that everyone and everything under his command worked together toward one goal, an atom bomb.

Major General Leslie Groves brought a degree of military discipline to the scientists working on the Manhattan Project. (Credit: Wikipedia)

The German nuclear program’s greatest success was the construction of a nuclear reactor by the Uran-Maschine (uranium machine) group in the town of Haigerloch. This group was headed by the Nobel Prize-winning theoretician Werner Heisenberg along with his assistant, the experimentalist Robert Döpel. The reactor these two scientists designed consisted of some 664 uranium cubes, each weighing about 2kg and measuring about 5cm on a side. These cubes were hung from chains and immersed in heavy water, which acted as a moderator, slowing the neutrons in order to increase their chance of striking a uranium nucleus and maintaining the chain reaction. See image below.

German Physicist Werner Heisenberg led the German attempt to construct a nuclear reactor (Credit: IMDb)
The nuclear reactor designed and built by Heisenberg. The 664 uranium cubes are strung along aircraft cable (Credit: Atomicheritage.org)

Although the reactor was completed it never achieved criticality, that is, the condition where enough neutrons are being produced by the splitting of uranium nuclei to sustain the chain reaction indefinitely. Modern calculations indicate that the design would have required a 50% increase in the number of uranium cubes in order to work. By comparison, Enrico Fermi and his group had already succeeded in establishing the first sustained nuclear reaction with their reactor in December 1942.

Artist’s rendering of the moment the first nuclear reactor went critical. (Credit: Smithsonian Magazine)

With the fall of Nazi Germany the experimental reactor at Haigerloch was captured by the US Army, along with the scientists who worked there. The troops who seized Haigerloch were accompanied by members of a special mission known as Alsos, attached to the Manhattan Project and led by the physicist Samuel Goudsmit. The Alsos team both interrogated the German scientists and examined the reactor. The captured scientists, including Heisenberg, were later sent to Britain and incarcerated for a time. The reactor was dismantled and the equipment, along with the 664 uranium cubes, shipped to the US.

So what happened to those 664 uranium cubes? Well, it is likely that most were simply inserted into the Manhattan Project’s supply chain and eventually became part of American nuclear reactors or weapons. Some, however, definitely did not, instead becoming souvenirs that were passed from one person to another. Several of these cubes have found their way into museums, including one at Haigerloch, Germany dedicated to telling the story of Hitler’s reactor; others are held by Harvard University and the National Museum of American History in Washington DC. It is possible, however, that some are still out there, sitting in someone’s attic or garage.

One of the remaining uranium cubes from the Nazi nuclear reactor. (Credit: Science News)

Timothy Koeth, an associate research professor at the University of Maryland, is now trying to discover what happened to as many of the uranium cubes as he can. Professor Koeth has even established an email address so that anyone who may have information about the cubes can contact him. The address is:

uraniumcubes@umd.edu

So if you have an old black cube that your grandfather brought back from the war and kept for reasons he never made clear, contact Professor Koeth. Maybe it’s a real piece of Hitler’s nuclear reactor!

Gamma Ray Bursts are the most powerful events ever observed in the entire Universe. Could one ever be a threat to life here on Earth?

Ever since Galileo first pointed his telescope at the night sky, astronomers have continued to discover ever stranger and more fascinating objects inhabiting this Universe of ours. Surely among the most mysterious are the objects known as Gamma Ray Bursts (GRBs).

What is a GRB? Well, about once a day, somewhere in the Universe, an event occurs that releases as much energy in a few seconds as our Sun will generate in its entire life! This energy is observed as a bright burst of gamma rays. For decades little was known about GRBs, and it’s only in the last 22 years that astronomers and astrophysicists have begun to understand something about these strange entities.
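That energy claim is easy to sanity-check with round numbers. In the sketch below the solar luminosity, the Sun’s lifetime and the “typical” GRB energy are all rough assumed values, good only to an order of magnitude:

```python
# Rough order-of-magnitude comparison (assumed round values, not precise data)
SOLAR_LUMINOSITY = 3.8e26          # watts, approximate present-day value
SUN_LIFETIME_S = 10e9 * 3.15e7     # ~10 billion years, converted to seconds

# Total energy the Sun will radiate over its whole life
sun_total = SOLAR_LUMINOSITY * SUN_LIFETIME_S

# An assumed isotropic-equivalent energy for a long GRB, toward the
# low end of published estimates
GRB_ENERGY = 1e44                  # joules

print(f"Sun, entire lifetime: {sun_total:.1e} J")
print(f"GRB, a few seconds:   {GRB_ENERGY:.1e} J")
```

With these round numbers the two come out comparable, around 10⁴⁴ joules, which is just the comparison the paragraph above describes.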

Gamma Ray Bursts are thought to be the most energetic events in the entire Universe! (Credit: Futurism)

Even the discovery of GRBs was pretty unusual: GRBs are the first, and so far only, astronomical discovery to be made by US spy satellites. You see, it all started in 1963 when the old Soviet Union agreed to the Nuclear Test Ban Treaty that ended the above-ground testing of nuclear weapons. The US didn’t quite trust the Russians, however; it was thought that the Soviets might try to cheat the ban by testing their weapons in outer space. So the US launched a series of satellites known as Vela that were designed to detect the sort of gamma radiation that would accompany any nuclear explosion off the Earth.

With the signing of the Nuclear Test Ban Treaty in 1963 the World’s Atomic powers agreed to halt above ground tests of nuclear weapons. (Credit: YouTube)

On July 2, 1967 the Vela 4 and Vela 3 satellites detected a quick burst of gamma rays, but it was soon realized that the burst wasn’t caused by the Russians. Using the data from the two satellites, scientists at Los Alamos National Laboratory found that the radiation had come from somewhere outside the solar system. Other bursts were soon detected as well, but since the entire Vela program was classified Top Secret astronomers didn’t hear about the discovery until 1973.

The VELA gamma ray detecting satellites were launched into space to monitor the Soviet Union’s Compliance with the Nuclear Test Ban Treaty. Instead they discovered the existence of Gamma Ray Bursts. (Credit: Flickr)

Even after the world’s astronomers knew about the existence of gamma ray bursts, progress in understanding them was very slow. Think about it: since gamma rays are blocked by Earth’s atmosphere, GRBs can only be detected by specialized satellites. Add to that the fact that GRBs rarely last more than a minute and can appear in any part of the sky, and you can understand how hard it was to obtain any real data about them.

The Earth’s Atmosphere blocks most forms of electromagnetic radiation allowing only visible light and radio waves to reach the surface. (Credit: Pinterest)

What astronomers wanted to learn most of all was whether or not GRBs had any other electromagnetic component: did an optical, radio or perhaps X-ray flash accompany the gamma ray emission? To find out, astronomers had to develop a fast-reaction network that would quickly communicate the news that a GRB had been detected to astronomers around the world so that other instruments could be brought into action.

Success finally came in February 1997 when the satellite BeppoSAX detected GRB 970228 (GRBs are named by the date of their detection, in YYMMDD format). Within hours both an X-ray and an optical glow were detected from the same source, a very dim, distant galaxy. Further such detections soon confirmed that GRBs come from extremely distant galaxies, most of them many billions of light years away. So distant are GRBs that, in order to appear so bright in our sky, they must be the most powerful explosions in the entire Universe.

The BeppoSAX Satellite was designed and launched specifically to study GRBs. (Credit: SlidePlayer)

So what are these GRBs? What makes them so energetic? To be honest there’s still a lot to be learned but a consensus of opinion is growing that there are actually two distinct types of GRBs.

Those that last longer, more than about two seconds, are the initial stages of a core collapse supernova: the death of a star so massive that it never really settled down like a normal star but instead implodes after just a few million years into a black hole. All of the well-studied long GRBs fit this model remarkably well, including their location within galaxies undergoing rapid star formation, places where such massive, short-lived stars are far more common.

One interesting feature of this model is that as the star collapses it rotates much more rapidly, just as an ice skater does when they pull in their arms during a spin. This increase in rotation speed generates an enormous magnetic field at the star’s poles, causing the gamma rays to squirt out from the poles like the beams of light from a lighthouse. This concentrates the power of the gamma rays into two narrow beams, making the GRB look much brighter in the directions those beams travel.

The energy of long duration GRBs is concentrated into two narrow beams like the light from a lighthouse. (Credit: AAS Nova)

If this lighthouse feature of GRBs is correct, it implies that we are only seeing a small fraction of all GRBs, those whose beams happen to point at us. It also means that GRBs are not quite as powerful as their brightness suggests, since their energy is focused into the beams. Again, this model fits the data collected for the longer duration GRBs that make up about 70% of those observed.
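The size of that “small fraction” is easy to estimate from geometry. If the energy is funneled into two opposite cones of half-opening angle θ, the two cones together cover a fraction 1 − cos θ of the full sky. Assuming, purely for illustration, a typical half-opening angle of about 5 degrees:

```python
import math

def beaming_fraction(theta_deg):
    """Fraction of the full sky covered by two opposite cones of
    half-opening angle theta; equivalently, the fraction of randomly
    oriented GRBs whose beams sweep over the Earth."""
    theta = math.radians(theta_deg)
    # two cones: 2 * 2*pi*(1 - cos(theta)) steradians out of 4*pi total
    return 1.0 - math.cos(theta)

f = beaming_fraction(5.0)
print(f"fraction of sky: {f:.4f}, i.e. about 1 GRB in {1/f:.0f} points at us")
```

With a 5 degree beam only roughly one burst in a few hundred would be aimed our way, so the true GRB rate would be correspondingly higher than the observed rate.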

There are also short duration GRBs, which average less than half a second and make up about 30% of the total observed. Because they are fewer in number and shorter in duration these GRBs are harder to study and therefore less well understood. Several models have been suggested for them, but the recent observation of a GRB (GRB 170817A) only 1.7 seconds after a gravitational wave was detected by the LIGO observatories implies a direct connection. Based on the nature of the gravitational wave, the event was a merger of two neutron stars. Therefore at least some short duration GRBs are the result of neutron stars colliding to form a black hole, or of a black hole devouring a neutron star.

A merger of neutron stars releases both a GRB and powerful gravitational waves. (Credit: AAS Nova)

So, if GRBs are the most powerful explosions in the entire Universe, could they be any danger to us? Are there any stars in our galactic neighborhood that could collapse and generate a GRB? And what damage would a nearby GRB do?

In fact there are a couple of possible candidates known to astronomers. The stars Eta Carinae and WR 104 are both hugely massive stars that could collapse into black holes sometime in the next million or so years. Of the two, WR 104 is the closer, at a distance of only about 8,000 light years.

Eta Carinae (l.) and WR 104 (r.) are among the most massive and powerful stars known. Either could someday collapse into a black hole, triggering a GRB. (Credit: Gresham College)

If WR 104 were to generate a GRB, and if that GRB were aimed at Earth, our atmosphere would protect us from the initial burst of gamma and X-rays; only a spike in the ultraviolet lasting a few minutes would be seen at the surface. The long-term effects are much less pleasant, however, because the gamma and X-rays striking the atmosphere would cause oxygen and nitrogen to combine to form nitrogen oxide and nitrogen dioxide gases. Both of these gases are known destroyers of ozone, the form of oxygen in the upper atmosphere that protects us from the Sun’s UV rays. The gases could also combine with water vapour in the air to form droplets of nitric acid that would rain down, causing further damage.

The Earth’s ozone layer protects us from the cancer-causing UV light from the Sun. (Credit: UCAR)

Of course all of that is just speculation; we really have no idea what would happen if a GRB from a star as close as WR 104 should strike the Earth. Before you start to panic, however, remember that GRBs are very rare, only about one per day in the entire observable Universe. Let’s be honest, we’re a far greater danger to ourselves than Gamma Ray Bursts are!

Book Review: ‘Why Did the Chicken Cross the World?’ by Andrew Lawler

Human beings have a tendency to overlook or even ignore those things that are the most familiar to us. Because we see something all of the time we feel as if we know everything there is to know about it, it just isn’t interesting anymore.

The Familiar Barnyard bird. (Credit: IndiaMart)

The chicken has been treated that way throughout history. Entire cultures have been built around cattle or sheep or the bison but not the chicken. Even when a small flock was kept just outside the house for the occasional egg or a special meal it was always the bigger livestock that got all of the attention.

Nevertheless, it is the chicken that has become humanity’s largest supplier of protein. Today there are more domestic chickens being raised for food than any other animal. The chicken is the greatest success story of industrial food production, and as a living creature the chief victim of that success.

Andrew Lawler’s book ‘Why Did the Chicken Cross the World?’ is a journalistic investigation into the chicken, from its natural state as a wild bird spread across southern and southeastern Asia, to being little more than one of the farmer’s wife’s chores, to becoming one of the most valuable industrial commodities on the planet.

Front Cover of ‘Why did the Chicken Cross the World’ by Andrew Lawler (Credit: Amazon)

No one knows when human beings first began to keep this small wild relative of the pheasant, but the remains of chickens, along with primitive pictograms identified as chickens, indicate that our relationship dates back into the Stone Age. The earliest evidence for humans raising and breeding chickens is not for food, however; it is for cockfighting.

Wild Chickens still exist in the Kaziranga National Park in India (Credit: Pinterest)

Indeed much of the first third of ‘Why Did the Chicken Cross the World?’ deals with cockfighting, both as a vehicle for gambling and as a religious ritual! Andrew Lawler presents his evidence in a clear, enjoyable fashion that I quite frankly envy. Traveling around the world, Mr. Lawler visits a selection of people who raise roosters for the pit but whose affection for their fighters goes well beyond a source of income.

It is likely that chickens were first domesticated for the fun of watching them fight rather than as a source of food. (Credit: Daily Times)

Moving forward in history Mr. Lawler details how for centuries the chicken competed with ducks and geese, and later the American turkey, for a place in humanity’s farms. It was only in the late 19th and early 20th century that the chicken became the dominant barnyard fowl.

A few centuries ago any barnyard would have kept several species of poultry for food (Credit: MutualArt)

It is the story of how the chicken became the most numerously bred, raised and, finally, slaughtered animal that forms the main part of ‘Why Did the Chicken Cross the World?’. Starting about 1850 in England and the US, the importation of larger, meatier chickens from Asia kicked off a long-term breeding program to produce a chicken that would grow bigger in less time on less feed, making chicken more available and less expensive.

Queen Victoria’s poultry house. It was when Victoria became interested in raising chickens that the species became popular in England. (Credit: Poultry Pages)

A key moment came in 1948 when the world’s largest retailer, the A&P supermarket chain, joined with the US Department of Agriculture (USDA) to sponsor the ‘Chicken of Tomorrow’ contest. The winner of that contest became the sire of an industrial production line of chickens that grow to more than twice the weight of their wild ancestors. In as little as 47 days modern birds are fully grown, at a ratio of one kilo of chicken produced for every two kilos of feed, a feed conversion ratio nearly 50% better than that of any other meat-producing animal.
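The feed conversion ratio (FCR) quoted above is simple to work with: kilograms of feed per kilogram of animal produced. A minimal sketch of the arithmetic, using illustrative round numbers (the broiler weight and the beef FCR here are assumptions for comparison, not figures from the book):

```python
# Feed conversion ratio (FCR) arithmetic: kg of feed per kg of animal raised.
# All figures are illustrative round numbers, not industry data.

def feed_required(weight_kg, fcr):
    """Kilograms of feed needed to raise one animal to weight_kg at a given FCR."""
    return weight_kg * fcr

broiler_weight = 2.5   # assumed market weight of a modern broiler, kg
chicken_fcr = 2.0      # ~2 kg feed per 1 kg of chicken, as quoted above
beef_fcr = 6.0         # cattle need several times more feed per kg (assumed)

print(f"Feed for 2.5 kg of chicken: {feed_required(broiler_weight, chicken_fcr)} kg")
print(f"Feed for 2.5 kg of beef:    {feed_required(broiler_weight, beef_fcr)} kg")
```

The comparison makes plain why the chicken won the barnyard: at a 2:1 ratio, every kilo of meat costs far less feed than it does for larger livestock.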

The ‘Chicken of Tomorrow’ contest led to the industrialization of raising chickens (Credit: Flashbak)

None of this did the chickens any good. If they are bred for meat they are stuffed by the tens of thousands into industrial sized coops, see image below, where they are fattened up to the point where they can hardly stand. They are allowed to live for less than two months before being slaughtered.

Thousands of Chickens crammed into a modern chicken coop. Is this where your next meal is coming from? (Credit: YouTube)

The Selective breeding of chickens has led to Giant Chickens but at the cost of the animal’s health. (Credit: Insteading)

If they are bred for egg production they are squeezed into a tiny ‘battery cage’, see image. They lay an egg a day on average, a process that takes so much calcium out of their systems that their bones are extremely weak. After a year the hen is so exhausted that she is simply used for dog food.

Egg Laying Chickens in a ‘Battery Cage’. (Credit: Farm Sanctuary)

That’s the hens; the roosters, which are less valuable and harder to keep because of their tendency to fight, are simply separated from the hens after hatching and disposed of as cheaply as possible. To the modern food industry the chicken is no longer a living creature but just another commodity to be produced and packaged cheaply and efficiently.

A motif that Mr. Lawler often returns to is that for millennia the chicken was a familiar animal. Today it is virtually unknown as a living thing; it is just something we eat, a commodity rather than a fellow creature.

‘Why Did the Chicken Cross the World?’ is a thoroughly enjoyable book, a mixture of science, technology, history, sociology and politics in which you find yourself learning something on every page, and the knowledge sticks with you. And I’m not just saying that because Andrew Lawler and I share a surname. To the best of my knowledge we are totally unrelated; the book is just really good!

Space News for August 2019.

We generally think of a story in the news as a report of some sort of dramatic occurrence, a story about an event full of action and yes, even danger. Space news therefore would consist primarily of accounts about rocket launches and space probes landing on distant worlds.

Of course we know that isn’t quite true. In space exploration the calm, deliberate decisions that are made in engineering conferences are every bit as vital to accomplishing the mission as the more spectacular moments. In this post I will be discussing three such stories illustrating the kind of planning and decision making that will make future space missions possible.

Many ideas are developed, and problems solved, in Engineering Meetings (Credit: PSM.com)

One such important decision, announced by NASA on August 19, was to give the go-ahead to begin construction of its ambitious Europa Clipper space probe, named for its target, Jupiter’s moon Europa. The intended mission of the Europa Clipper is to study that icy world in an effort to determine whether the moon could actually be a home for life. Some 40 close flybys of Europa are planned, during which the probe will measure the thickness of the moon’s icy surface and seek to confirm the existence of a liquid ocean beneath the ice.

The Europa Clipper Space probe will make 40 flybys of the icy moon of Jupiter (Credit: ABC57.com)

The decision by NASA means that the design phase of the mission is now over and construction will begin at NASA’s Jet Propulsion Laboratory (JPL), with a planned launch date of 2023 or 2025. One decision about the Europa Clipper still remains to be made, however: what launch vehicle will be used to send the probe on its way to Jupiter?

Currently Congress has ordered NASA to use the Space Launch System (SLS), but that massive rocket is still not ready for its first test launch, and there is a real possibility that the SLS might not be ready by 2025. Also, launching the Europa Clipper on the SLS will cost over a billion dollars.

After many delays and budget overruns NASA’s massive Space Launch System (SLS) still has not flown (Credit: NASA)

NASA, on the other hand, would prefer to launch the Europa Clipper on a commercial launcher such as SpaceX’s Falcon Heavy. Launching the space probe with a commercial rocket would not only save hundreds of millions of dollars but also firm up the launch schedule, since the Falcon Heavy has already flown successfully three times. Unfortunately the decision here may be made by politics, because the SLS is being built at NASA’s Marshall Space Flight Center in Alabama and some very important Republican senators are strongly supporting it.

The SpaceX Falcon Heavy rocket has already flown successfully three times (Credit: The Verge)

Speaking of the Marshall Space Flight Center, NASA has made another decision, naming it the lead management center for the development of the lunar lander for the agency’s big Artemis program. Artemis is the name NASA has now given to its plans for returning astronauts to the Moon’s surface by 2024. Since Marshall is already developing the SLS as the Artemis launch vehicle, its selection as lead for the lander puts two big pieces of the Artemis pie on Alabama’s plate.

The Marshall Space Flight Center is where NASA has developed rockets like the Saturn V and Space Shuttle (Credit: Wikipedia)

Again the decision here was made on political, not engineering, grounds, and that’s never a good thing. In fact the decision could very well be changed. You see, the Johnson Space Center is in Houston, Texas, and there are a couple of powerful Texas senators, also Republican by the way, who think the Johnson center would be a much better selection as management lead for the lander’s development.

The Johnson Space Flight Center in Texas is Where NASA’s Manned Space Missions are developed (Credit: Wikipedia)

None of this arguing back and forth will make the lander perform any better, or be built any faster or cheaper. Indeed, that sort of political infighting is more likely to stall funding appropriations, leading to schedule delays and cost overruns.

On a more hopeful note, NASA has also decided to team up with SpaceX to develop the technology necessary for refueling spacecraft in space! Again the idea is to reuse spacecraft rather than just throw them away after one use and build another. In-space refueling has long been considered essential to developing a space infrastructure that will enable longer and more difficult space missions.

Refueling in space would extend the operational life of satellites, thereby reducing their cost (Credit: Engadget)

Take for example the communications satellites now in geostationary orbit roughly 36,000 km above the Earth’s equator. These multi-million-dollar radio relays must keep their antennas pointed very precisely at Earth in order to do their job at all. To do this the satellites have small station-keeping rocket engines that hold each satellite exactly where it’s supposed to be. After about 5-7 years, however, those engines run out of fuel and the craft soon begins to drift until the antennas are no longer directed at Earth. Once that happens the satellite becomes nothing more than a very expensive piece of junk. If you could refuel those satellites while in orbit, however, you could extend their useful life by years and save billions of dollars.
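That altitude isn’t arbitrary: a geostationary satellite must complete one orbit in exactly one rotation of the Earth, and Kepler’s third law then fixes the orbital radius. A short sketch of the calculation, using standard values for Earth’s gravitational parameter and radius:

```python
import math

# Geostationary altitude from Kepler's third law: an orbit whose period
# equals one sidereal day has one and only one possible radius.
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0       # Earth's equatorial radius, m
SIDEREAL_DAY = 86_164.1     # one rotation of the Earth, s

# T^2 = 4*pi^2 * a^3 / mu  ->  a = (mu * T^2 / (4*pi^2))^(1/3)
a = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (a - R_EARTH) / 1000

print(f"Geostationary altitude: {altitude_km:,.0f} km")  # ~35,786 km
```

Any satellite higher than this drifts westward across the sky, and any lower drifts eastward, which is why station-keeping fuel is what ultimately sets a communications satellite’s lifespan.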

For manned spaceflight, in-space refueling would allow the development of true spaceships that could travel back and forth to the Moon or Mars multiple times. Such spaceships would be refueled at the end of each mission, exactly the way you refuel your car after a long trip.

Developing the technology for refueling in space won’t be easy, however. Most of the chemicals used as rocket fuel, liquid oxygen, liquid hydrogen and liquid methane, have to be kept cryogenically cold, requiring both refrigeration equipment and power. And everything has to be kept airtight, or the fuel you spent so much money getting into orbit will simply boil off into space. That’s why NASA teaming up with SpaceX makes sense. While SpaceX is the leader in reusable spacecraft, NASA’s Glenn Research Center in Ohio and Marshall Space Flight Center are the recognized experts in handling and storing the various kinds of rocket fuel. Hopefully this pairing of skills will solve the problems of refueling in space, and one day soon, in addition to orbiting space stations, we will see orbiting gas stations as well.

Will there soon be a ‘Gas Station’ in orbit above the Earth? (Credit: Ars Technica)

The Transistor and Integrated Circuit, the story of the Miniaturization Revolution in Electronics

Earlier this year I celebrated the fiftieth anniversary of the Moon landing of Apollo 11 by publishing a series of eight articles about the ‘Space Race’ of the 1960s. I enjoyed that task so much that I decided to write a few more posts about some of the other cool technologies of that time. I hope you enjoy them.

In most homes today you’ll find that electronic devices outnumber human beings by a factor of three, four or even more. Add up all of the TVs, computers and smartphones; even our ovens and refrigerators have microprocessors in them nowadays! Electronics are so cheap, so versatile and so small that we’re putting them in just about everything.

Just some of the electronics that can be found in a modern home. (Credit: Santa Barbara Loan and Jewelry)

Back in the 60s, however, electronics were big and expensive. Most homes had one TV, one record player and one, maybe two, radios. The reason was simple: electronics were built around the vacuum tube, which was itself large and expensive. See image below.

An Electronic Vacuum Tube (Credit: Parts Express)

Now if you think that a vacuum tube looks something like an incandescent light bulb you’re quite right: vacuum tubes were developed from light bulbs and, like them, require a considerable amount of power, both voltage and current, just to turn on. This makes vacuum tubes wasteful of energy, hot and rather large.

Things started to change when the first transistor electronics came on the market in the late 1950s and early 60s, the small, hand-held AM transistor radio being the most popular. Now pretty much everyone knows that transistors are made primarily of silicon and that, like a vacuum tube, a transistor is an ultra-fast electrical switch. Unlike a tube, however, a transistor doesn’t have to be hot in order to work.

An antique six transistor radio. (Credit: ETSY)

This means a transistor needs only a small fraction of the power of a vacuum tube in order to function, and therefore transistors can be made much smaller and packed together more tightly. Whereas a vacuum tube radio was as large as a piece of furniture, a transistor radio could be held in one hand, and with the transistor radio the word ‘miniaturization’ came into common usage.

Vacuum Tube radios could hardly be considered mobile! (Credit: Flickr)

Still, my first little transistor radio was built from ‘discrete’ transistors. That is to say, each transistor was a separate object, an individual piece of silicon packaged in its own plastic coating. When I bought my second transistor radio I of course disassembled the first one, and inside I found six transistors along with numerous other components. The transistors were each about the size of a pea; I learned later that they were packaged in a standard format known as TO-92.

A single 2N3904 Bipolar NPN General Purpose Transistor packed in a TO-92 case. (Credit: Addicore)

Even as the first transistorized consumer products were becoming available, there were engineers who began to wonder whether it would be possible to fit two transistors, or even more, on a single piece of silicon, and just how many you could fit. The first experiments with Integrated Circuits (ICs), as these components came to be known, were carried out at Texas Instruments in 1958. See image below.

The world’s first integrated circuit contained two transistors on a single piece of germanium, not silicon (Credit: Texas Instruments)

The advantages of ICs were many: reduced cost, size and power requirements, along with increased operating speed. The drawback to ICs was their high initial start-up cost. The facilities needed for manufacturing ICs, known as ‘foundries’, are very expensive, even though, once you had a foundry, millions of ICs could then be made very cheaply. In the business this is known as a high Non-Recurring Expense (NRE) with a small Recurring Expense (RE).
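The NRE/RE economics can be captured in one line of arithmetic: the per-chip cost is the one-time foundry cost spread over every chip made, plus the small per-chip cost. A sketch with made-up round figures (the dollar amounts are illustrative assumptions, not historical prices):

```python
# High NRE, low RE: per-unit cost plunges with volume because the one-time
# foundry cost is amortized across every chip made.
# Dollar figures are hypothetical round numbers for illustration only.

def cost_per_unit(nre, re, volume):
    """Per-unit cost: one-time NRE amortized over volume, plus per-unit RE."""
    return nre / volume + re

NRE = 10_000_000   # hypothetical one-time foundry/tooling cost, $
RE = 0.50          # hypothetical per-chip recurring cost, $

for volume in (1_000, 100_000, 10_000_000):
    print(f"{volume:>10,} chips -> ${cost_per_unit(NRE, RE, volume):,.2f} each")
```

At a thousand chips each one costs thousands of dollars; at ten million chips the foundry cost all but vanishes, which is exactly why a huge first customer was needed to get the industry started.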

A look inside a foundry for the manufacture of Integrated Circuits. (Credit: SemiWiki)

So, who was going to pay for the first IC foundries? The U.S. government that’s who! In the 1960s both NASA and the military had a tremendous need for ever more sophisticated radios, radars, guidance systems and even computers. And all of these new electronics had to be smaller in order to fit into rockets, airplanes and ships. The IC was the only possible technology that could satisfy that need.

Then, once the first foundries were built, the miniaturization revolution really got under way. One of the pioneers of the IC industry, Gordon Moore, declared in 1965 that the number of transistors on a single silicon ‘chip’ would double every two years. This prediction is commonly called Moore’s Law, and it has held now for over 50 years, with current technology capable of placing billions of transistors on a chip of silicon no larger than a fingernail.
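The power of that doubling is easy to underestimate, so it is worth running the numbers. A minimal sketch of Moore's Law as stated above, doubling every two years:

```python
# Moore's Law as a formula: transistor count doubles every `doubling_period`
# years, i.e. exponential growth in the number of two-year intervals elapsed.

def moores_law(start_count, start_year, year, doubling_period=2):
    """Projected transistor count after (year - start_year) years of doubling."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# 50 years at one doubling per two years = 25 doublings
factor = moores_law(1, 1965, 2015)
print(f"Growth factor over 50 years: {factor:,.0f}x")  # 33,554,432x
```

Twenty-five doublings multiply the transistor count by more than 33 million, which is how a handful of transistors on Kilby’s germanium chip became the billions on a modern processor.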

Gordon Moore was an early pioneer in the development of Integrated Circuits. (Credit: Computer History Museum)
A Look inside a typical Integrated Circuit, this one is a Pentium 4 Microprocessor used in many personal computers. (Credit: Calvin College)

With this technological progress have come personal computers, smartphones, digital cameras, digital television and a myriad of other devices that we all have in our homes or carry on our person. The transistor and the Integrated Circuit have become the true symbols of our modern age, and their revolution began in the 1960s.

There’s some good news about the Environment for a Change. Plastic microparticles may not be a health danger after all!

Every day it seems as if we hear another news story about how all the pollutants and trash that we’re dumping into the environment are coming back to do us harm. If it isn’t climate change, it’s harmful chemicals in the air or water. One possible threat that’s been in the news recently is plastic microparticles.

Just a small part of the Great Pacific Garbage Patch. Most of this muck is plastic! (Credit: The Brag)

What are plastic microparticles? Well, you see, all those millions of tons of plastic we keep throwing away may be chemically inert, but ultraviolet light from the Sun, combined with mechanical action from ocean waves or weather, can break them down into particles less than 5mm in diameter.

Waste Plastic doesn’t decay in the environment but it does break down into small pieces the smallest of which are microparticles (Credit: Lifegate)

Environmental researchers are finding plastic microparticles nearly everywhere. In the oceans they have been discovered both in the Arctic regions and at the bottom of the Mariana Trench, the deepest part of the ocean. Scientists in both France and Colorado have even found plastic fibers in rainwater, while in Norway they’ve been found in snow. I suppose we’ll have to stop using the phrase ‘pure as the driven snow’. With plastic microparticles everywhere we are certainly going to be ingesting some as we eat and drink, so the question is: can they get from our stomachs into our bodies, and if so, what harm will they do there?

Plastic bags have even been found at the bottom of the deepest part of the Oceans (Credit: Science Alert)

Researchers have begun to study this possibility with the intent of determining the health threat posed by plastic microparticles. A leading scientist at the Center for Organismal Studies at the University of Heidelberg, Doctor Thomas Braunbeck has been investigating whether or not plastic microparticles can pass easily through the lining of the intestines of vertebrate animals. In other words if we ingest these particles will they get into us?

Professor Doctor Thomas Braunbeck of Heidelberg University (Credit: Researchgate)

The test animal Dr. Braunbeck chose for his work is the well-known freshwater aquarium fish the zebra danio (Danio rerio), because he could study many animals at once quite easily. Also, the zebrafish’s growth rate is so high that if plastic microparticles can be absorbed, a lot would be absorbed in a short time, making detection more certain.

Logo of the Center for Organismal Studies (COS) showing a zebra danio, the fish used in the study of plastic microparticles (Credit: COS Heidelberg)

To carry out his experiment Dr. Braunbeck used microparticles coated with a phosphorescent chemical that made them easier to track; the particle size he chose was around 10μm. First the particles were fed to a kind of small crustacean that is also well known to tropical fish hobbyists: brine shrimp. Once he was certain that the shrimp had indeed absorbed the microparticles he then fed the shrimp to his zebrafish.

Now here’s the good news. When Dr. Braunbeck checked the fish for signs that plastic microparticles had been absorbed he found none. The particles had been unable to pass through the lining of the zebrafish’s intestine. Instead the microparticles had simply passed all the way through the fish’s digestive system and out the back end.

Since this is one of the first experiments to determine if plastic microparticles can be absorbed through the intestine of a vertebrate the negative result is good news. Before you start celebrating however remember I mentioned above that the particles used in the study were 10μm in diameter. Dr. Braunbeck cautions that smaller particles might still be able to get through. Nevertheless it is nice to hear a little hopeful news about pollution for a change.

Of course, just because plastic microparticles may not be a very big health risk certainly doesn’t mean that we shouldn’t be concerned about the millions of tons of plastic waste that are turning our planet into a trash dump. Fortunately there are more and more people trying to find solutions to the problem. Earlier this year, see my post of 9 January 2019, I wrote about a young man from Holland named Boyan Slat who had invented a 700m long ‘U’-shaped boom to sweep up the Great Pacific Garbage Patch. The first test of Slat’s invention ran into some problems, but upgrades are in progress and a second test is coming soon!

The 700m floating boom used to remove plastic from the ocean still has some bugs to work out. (Credit: Twitter)

While Boyan Slat’s boom is intended to remove large pieces of plastic from the ocean, a teenager from Ireland has developed a technique for eliminating up to 88% of plastic microparticles from water. The teenager’s name is Fionn Ferreira, and his project won him the Grand Prize in Google’s annual science fair. The native of the town of Ballydehob is planning to use his $50,000 prize to pay for his college education. Fionn’s technique for collecting the plastic microparticles in water involves attracting and removing the particles with a magnet.

Fionn Ferreira, the winner of this year’s Google science prize for his technique to remove plastic microparticles from water. (Credit: ABC News)

Wait a minute, you say. Plastic isn’t magnetic; you can’t attract plastic with a magnet. That’s true. However, in water plastic microparticles are attracted to ferrofluids, mixtures of oil and magnetite, a magnetic iron oxide. The oil in the ferrofluid clumps together with the microplastic, and the magnetite can then be lifted out with a magnet, carrying the oil and plastic with it.

If this sounds almost too good to be true, you could be right. The biggest technical problem, as I see it, will be scaling up the whole process; there are a lot of plastic microparticles out there to be collected. In particular, separating the ferrofluid from the plastic so that it can be used again and again could prove difficult. Of course the real problem will be the cost; nobody is going to be making a profit off of this, you know.

And that’s the real problem with cleaning up the environment in general, the cost. There are many things we could do to clean up the mess we’re making of this planet of ours. The question is, who’s going to pay for it?

In a revolutionary experiment scientists are using the gene editing tool CRISPR to treat patients suffering from the genetic disorder Sickle Cell Anemia.

In all of modern science there is perhaps no more rapidly advancing field than that of genetic research. Much of that progress has come about because of the development of the molecular gene editing tool CRISPR (which stands for Clustered Regularly Interspaced Short Palindromic Repeats) that allows biochemists to literally cut and/or paste sections of DNA into the chromosomes of living cells. I have talked about CRISPR several times in previous articles, see posts of 2 March 2019, 12 January 2019, 1 December 2018, 1 September 2018 and 5 August 2017, and the full potential of CRISPR is still only being guessed at.

How CRISPR Works (Credit: Cambridge University Press)

Now the latest experiment is making a bold and daring attempt to treat adults suffering from the inherited genetic disorder Sickle Cell Anemia, a condition that affects about 100,000 people living here in the United States and millions of others worldwide. This is the first attempt ever to use CRISPR to modify the cells of adult patients, in the hope that the altered cells will allow those patients to live a more normal life.

The genetic disease Sickle Cell Anemia is a chronic ailment for millions of people (Credit: Familydoctor.org)

Before I continue, let me talk a little about the genetic disease Sickle Cell Anemia. This is a disorder that affects the bone marrow, leading to the production of red blood cells with a defective protein that causes the cells to be deformed, sickle-shaped. These deformed blood cells are unable to carry a normal amount of oxygen, leading to a permanent and in some cases crippling weakness in the affected person. Most sufferers of Sickle Cell Anemia are of African or African-American descent, and since the disease is inherited it can devastate a family for generations.

Sickle Cell Anemia is an inherited genetic disorder (Credit: Synthego)

The procedure being tested is to take cells from the patient’s bone marrow and modify the cells’ DNA using CRISPR in order to make them produce a protein that is normally only formed by the human body in the womb and during early childhood. It is hoped that the production of this protein will correct the deformation of blood cells caused by the defective protein, thereby alleviating the anemia caused by sickle cell.

Possible Techniques for using CRISPR to cure Sickle Cell (Credit: American Chemical Society)

The experimental treatment for Sickle Cell Anemia is being conducted at eight hospitals and clinics in North America and Europe and is being overseen by CRISPR Therapeutics of Cambridge, Massachusetts, in association with Vertex Pharmaceuticals of Boston. The current plan is to have up to 45 patients take part in the initial trials.

Patient undergoing CRISPR treatment for Sickle Cell Anemia (Credit: NPR)

It will be months before the researchers know for certain whether or not the modified cells are even producing the desired protein let alone if the protein is actually helping to improve the health of the study’s patients. Then, even if there is strong evidence that the procedure has worked, there is the question of how long will the benefits last? Will this technique produce a permanent cure or will the effect be only temporary?

These are questions that only time can answer, but we are at the threshold of a new medical technology. This may be the first attempt to treat patients with a genetic disease by using CRISPR but it certainly will not be the last.