What ever happened to the uranium fuel from Nazi Germany’s attempt to build a nuclear reactor?

Nearly everyone knows the basic outline of this story; it is, after all, one of the most important series of events of the 20th century. In the late 1930s, while the threat of a coming world war grew, physicists were learning the secrets of the atom and wondering whether it might be possible to release the tremendous energy contained within the nucleus, both for power generation and for weapons.

The process of uranium fission. Started by a single neutron, the process releases both energy and more neutrons to produce a chain reaction. (Credit: Nuclear-Power.net)

The countries that would become the Allied nations feared that Nazi Germany could become the first to develop an atomic bomb. After all, both the theories of relativity and quantum mechanics were first conceived by Germans, and many of the leading researchers in sub-atomic physics were German. In fact the scientists who first succeeded in splitting atoms of uranium, Otto Hahn and Fritz Strassmann, were both German, and their fission experiment was performed in Berlin!

The Experimental Apparatus used to first split the nucleus of Uranium (Credit: J. Brew / Flickr)

Hoping to beat the Germans to the bomb, the Americans, with help from the British, organized the massive ‘Manhattan Project’. The American program did succeed in producing the first nuclear weapons, but not until several months after Nazi Germany had been defeated. In fact, when Allied scientists searched through the rubble of Hitler’s Reich for Nazi scientists and technology, they were surprised to discover how little progress the German nuclear physicists had made.

The Manhattan Project succeeded in developing the first atomic bomb, first detonated in the Trinity test. (Credit: Wikipedia)

There were many reasons why the Nazi atomic bomb program failed. One reason worth considering in today’s political climate is how the Nazis’ own racism forced some of the world’s greatest minds to flee Europe. Albert Einstein, whose famous letter to President Roosevelt helped launch the American program, along with refugees like Niels Bohr, Hans Bethe and Enrico Fermi, who worked on the Manhattan Project directly, all helped America develop the bomb first.

Albert Einstein was just one of dozens of German scientists who fled their country to escape the Nazis. The loss of their talents weakened the German nuclear program. (Credit: Viva)

There were other reasons as well; one interesting one was the Nazis’ tendency toward an almost feudal disorganization in their nuclear program. In fact the German nuclear effort was really nine distinct programs, each with its own director, each setting its own agenda and goals with little coordination between the groups. In contrast the Manhattan Project had one boss, Major General Leslie Groves, who, with his scientific director J. Robert Oppenheimer, made certain that everyone and everything in his command worked together toward one goal: an atomic bomb.

Major General Leslie Groves brought a degree of military discipline to the scientists working on the Manhattan Project. (Credit: Wikipedia)

The German nuclear program’s greatest success was the construction of a nuclear reactor by the Uran-Maschine (Uranium Machine) group in the town of Haigerloch. This group was headed by the Nobel Prize-winning theoretician Werner Heisenberg along with his assistant, the experimentalist Robert Döpel. The reactor they designed consisted of some 664 uranium cubes, each weighing about 2 kg and measuring roughly 5 cm on a side. These cubes were hung from chains and then immersed in heavy water, which acted as a moderator, slowing the neutrons in order to increase their chance of striking a uranium nucleus and maintaining the chain reaction. See image below.

German Physicist Werner Heisenberg led the German attempt to construct a nuclear reactor (Credit: IMDb)
The nuclear reactor designed and built by Heisenberg. The 664 uranium cubes are strung along aircraft cable (Credit: Atomicheritage.org)

Although the reactor was completed it never achieved criticality. That is, the reaction never reached the condition where enough neutrons were being produced by the splitting of uranium nuclei to sustain the chain reaction indefinitely. Modern calculations indicate that the design would have required roughly 50% more uranium cubes in order to work. By comparison, Enrico Fermi and his group had already established the first self-sustaining nuclear reaction with their Chicago reactor in December 1942.
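The idea of criticality can be illustrated with a toy calculation. The sketch below (Python, with purely illustrative multiplication factors rather than the actual values for either reactor) tracks the average neutron population across fission generations: below k = 1 the reaction fizzles out, while even a tiny excess over k = 1 grows without limit.

```python
def neutron_population(k, generations, n0=1000):
    """Average neutron count over successive fission generations.
    k is the effective multiplication factor: the average number of
    new fission neutrons produced per neutron in the previous
    generation. k < 1 is subcritical and dies out; k >= 1 sustains."""
    history = [float(n0)]
    for _ in range(generations):
        history.append(history[-1] * k)
    return history

# A subcritical assembly, like the Haigerloch reactor, fizzles out...
subcritical = neutron_population(k=0.85, generations=50)
# ...while anything at or above k = 1, as in Fermi's pile, sustains.
critical = neutron_population(k=1.001, generations=50)
```

The k values here are illustrative only; the point is that criticality is a sharp threshold, which is why being 50% short on fuel meant no chain reaction at all rather than a weaker one.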

Artist’s rendering of the moment the first nuclear reactor went critical. (Credit: Smithsonian Magazine)

With the fall of Nazi Germany the experimental reactor at Haigerloch was captured by the US Army along with the scientists who worked there. The army troops who seized Haigerloch were accompanied by members of a special mission known as Alsos, who were attached to the Manhattan Project and led by the physicist Samuel Goudsmit. The Alsos team both interrogated the German scientists and examined the reactor. The captured scientists, including Heisenberg, were later sent to Britain and incarcerated for a time. The reactor was dismantled and its equipment, along with the 664 uranium cubes, was shipped to the US.

So what happened to those 664 uranium cubes? Well, it is likely that most were simply absorbed into the Manhattan Project’s supply chain and eventually became part of American nuclear reactors or weapons. Some, however, definitely were not, instead becoming souvenirs that were passed from one person to another. Several of these cubes have found their way into museums, including one at Haigerloch, Germany, dedicated to telling the story of Hitler’s reactor. Other known examples are held by Harvard University and the National Museum of American History in Washington, DC. It is possible, however, that some are still out there, sitting in someone’s attic or garage.

One of the remaining uranium cubes from the Nazi nuclear reactor. (Credit: Science News)

Timothy Koeth, an associate research professor at the University of Maryland, is now trying to discover what happened to as many of the uranium cubes as he can. Professor Koeth has even established an email address so that anyone who may have information about the cubes can contact him. The address is:

uraniumcubes@umd.edu

So if you have an old black cube that your grandfather brought back from the war and kept for reasons he never made clear, contact Professor Koeth. Maybe it’s a real piece of Hitler’s nuclear reactor!

Gamma Ray Bursts are the most powerful events ever observed in the entire Universe. Could one ever be a threat to life here on Earth?

Ever since Galileo first pointed his telescope at the night sky, astronomers have continued to discover ever stranger and more fascinating objects inhabiting this Universe of ours. Surely among the most mysterious are the objects known as Gamma Ray Bursts (GRBs).

What is a GRB? Well, about once a day, somewhere in the Universe, an event occurs that releases as much energy in a few seconds as our Sun will generate in its entire life! This energy is observed as a bright burst of gamma rays. For decades little was known about GRBs, and it is only in the last 22 years that astronomers and astrophysicists have begun to understand something about these strange entities.
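That comparison is easy to check with round numbers. Here is a back-of-envelope sketch in Python; the figures are textbook approximations, not measurements of any particular burst:

```python
SUN_LUMINOSITY_W = 3.8e26    # the Sun's power output in watts
SUN_LIFETIME_YR = 1.0e10     # ~10 billion years on the main sequence
SECONDS_PER_YEAR = 3.15e7

# Total energy the Sun will radiate over its whole life: about 1e44 joules.
sun_lifetime_energy = SUN_LUMINOSITY_W * SUN_LIFETIME_YR * SECONDS_PER_YEAR

# A bright GRB's isotropic-equivalent energy is of the same order,
# roughly 1e44 to 1e47 joules, released in seconds rather than eons.
grb_energy = 1e44
```

So a single burst really can pack a Sun's lifetime of output into a few seconds, which is what makes GRBs visible across billions of light years.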

Gamma Ray Bursts are thought to be the most energetic events in the entire Universe! (Credit: Futurism)

Even the discovery of GRBs was pretty unusual. GRBs are the first, and so far only, astronomical discovery to be made by nuclear-surveillance satellites. You see, it all started in 1963 when the Soviet Union agreed to the Nuclear Test Ban Treaty that ended the above-ground testing of nuclear weapons. The US didn’t quite trust the Russians, however; it was thought that the Soviets might try to cheat the ban by testing their weapons in outer space. So the US military launched a series of satellites known as Vela that were designed to detect the sort of gamma radiation that would accompany any nuclear explosion off the Earth.

With the signing of the Nuclear Test Ban Treaty in 1963 the World’s Atomic powers agreed to halt above ground tests of nuclear weapons. (Credit: YouTube)

On July 2, 1967, the Vela 4 and Vela 3 satellites detected a quick burst of gamma rays, but it was soon realized that the burst wasn’t caused by the Russians. Using the data from the two satellites, scientists at Los Alamos National Laboratory found that the radiation had come from somewhere outside the solar system. Other bursts were soon detected as well, but since the entire Vela program was classified Top Secret, astronomers didn’t get to hear about the discovery until 1973.

The VELA gamma ray detecting satellites were launched into space to monitor the Soviet Union’s Compliance with the Nuclear Test Ban Treaty. Instead they discovered the existence of Gamma Ray Bursts. (Credit: Flickr)

Even after the world’s astronomers knew about the existence of gamma ray bursts, progress in understanding them was very slow. Think about it: since gamma rays are blocked by Earth’s atmosphere, GRBs can only be detected by specialized satellites. Add to that the fact that GRBs rarely last more than a minute and that they can appear in any part of the sky, and you can understand how hard it was to obtain any real data about them.

The Earth’s Atmosphere blocks most forms of electromagnetic radiation allowing only visible light and radio waves to reach the surface. (Credit: Pinterest)

What astronomers wanted to learn most of all was whether GRBs had any other electromagnetic component. That is, did an optical, radio or perhaps X-ray flash accompany the gamma ray emission? To find out, astronomers had to develop a fast-reaction network that would quickly communicate the news that a GRB had been detected to astronomers around the world, so that other instruments could be brought into action.

Success finally came in February 1997 when the satellite BeppoSAX detected GRB 970228 (GRBs are named by the date of their detection, in YYMMDD format). Within hours both an X-ray and an optical afterglow were detected from the same source, a very dim, distant galaxy. Further such detections soon confirmed that GRBs come from extremely distant galaxies, most of them many billions of light years away. So distant are the locations of GRBs that, in order to appear so bright in our sky, they must be the most powerful explosions in the entire Universe.
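The naming convention is simple enough to decode mechanically. A small Python sketch (the two-digit year is interpreted the standard library's usual way, with 69-99 falling in the 1900s):

```python
from datetime import datetime

def grb_detection_date(name):
    """Decode a designation like 'GRB 970228' into its detection date.
    The six digits encode the date in YYMMDD form."""
    digits = name.split()[1][:6]
    return datetime.strptime(digits, "%y%m%d").date()

# GRB 970228 was detected on February 28, 1997.
when = grb_detection_date("GRB 970228")
```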

The BeppoSAX Satellite was designed and launched specifically to study GRBs. (Credit: SlidePlayer)

So what are these GRBs? What makes them so energetic? To be honest there is still a lot to be learned, but a consensus is growing that there are actually two distinct types of GRBs.

Those that last longer, more than about two seconds, are thought to be the initial stage of a core-collapse supernova: the death of a star so massive that it never really settles down like a normal star but instead implodes into a black hole after just a few million years. All of the well-studied long GRBs fit this model remarkably well, including their location within galaxies that are undergoing rapid star formation, places where such massive, short-lived stars are far more common.

One interesting feature of this model is that as the star collapses it rotates much more rapidly, just as an ice skater does when pulling in their arms during a spin. This rapid rotation generates an enormous magnetic field that channels the emitted gamma rays out from the star’s poles like the beams of light from a lighthouse. This concentrates the power of the gamma rays into two narrow beams, making the GRB look much brighter in the directions those beams travel.

The energy of long duration GRBs is concentrated into two narrow beams, like the light from a lighthouse. (Credit: AAS Nova)

If this lighthouse feature of GRBs is real, it implies that we are only seeing a small fraction of all GRBs, only those whose beams are pointed at us. It also means that GRBs are not quite as powerful as they first appear, since their energy is focused into the beams rather than radiated in all directions. Again, this model fits the data collected for the longer-duration GRBs that make up about 70% of those observed.
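The geometry behind that "small fraction" is straightforward: a pair of opposite cones of half-angle θ covers a fraction 1 − cos θ of the full sky. A sketch in Python, assuming an illustrative jet half-angle of about 5 degrees (actual GRB jet angles vary from burst to burst):

```python
import math

def beaming_fraction(half_angle_deg):
    """Fraction of the full sky covered by a pair of opposite cones
    of the given half-angle: f = 1 - cos(theta). Only this fraction
    of randomly oriented GRBs would be beamed toward Earth."""
    theta = math.radians(half_angle_deg)
    return 1.0 - math.cos(theta)

f = beaming_fraction(5.0)
# f is about 0.004, so for every burst we see, hundreds of others
# fire with their beams pointed elsewhere; the true energy of each
# burst is reduced from its apparent value by roughly this factor.
```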

There are also short-duration GRBs, which last less than half a second on average and make up about 30% of the total observed. Because they are fewer in number and shorter in duration, these GRBs are harder to study and therefore less well understood. Several models have been suggested for them, but the recent observation of a GRB (GRB 170817A) only 1.7 seconds after a gravitational wave was detected by the LIGO observatories implies a direct connection. Based on the nature of the gravitational wave, the event was a merger of two neutron stars. Therefore at least some short-duration GRBs are the result of neutron stars colliding to form a black hole, or of a black hole devouring a neutron star.

A merger of neutron stars releases both a GRB and powerful gravitational waves. (Credit: AAS Nova)

So, if these GRBs are the most powerful explosions in the entire Universe, could they be any danger to us? Are there any stars in our galactic neighborhood that could collapse and generate a GRB? And what damage would a nearby GRB do?

In fact there are a couple of possible candidates known to astronomers. The stars Eta Carinae and WR 104 are both hugely massive stars that could collapse into black holes sometime in the next million or so years; WR 104 lies at a distance of only about 8,000 light years.

Eta Carinae (l.) and WR 104 (r.) are among the most massive and powerful stars known. Either could someday collapse into a black hole, triggering a GRB. (Credit: Gresham College)

If WR 104 were to generate a GRB, and if that GRB were aimed at Earth, our atmosphere would protect us from the initial burst of gamma and X-rays; only a spike in ultraviolet light lasting a few minutes would reach the surface. The long-term effects are much less pleasant, however, because the gamma and X-rays striking the atmosphere would cause oxygen and nitrogen to combine into nitrogen oxide and nitrogen dioxide gasses. Both of these gasses are known destroyers of ozone, the form of oxygen in the upper atmosphere that protects us from the Sun’s UV rays. The gasses could also combine with water vapour in the air to form droplets of nitric acid that would rain down, causing further damage.

The Earth’s ozone layer protects us from the cancer-causing UV light from the Sun. (Credit: UCAR)

Of course all of that is just speculation; we really have no idea what would happen here if a GRB from a star as close as WR 104 were to strike the Earth. Before you start to panic, however, remember that GRBs are very rare, only about one per day in the entire observable Universe. Let’s be honest, we’re a far greater danger to ourselves than Gamma Ray Bursts are!

Book Review: ‘Why Did the Chicken Cross the World?’ by Andrew Lawler

Human beings have a tendency to overlook or even ignore those things that are the most familiar to us. Because we see something all of the time we feel as if we know everything there is to know about it, it just isn’t interesting anymore.

The Familiar Barnyard bird. (Credit: IndiaMart)

The chicken has been treated that way throughout history. Entire cultures have been built around cattle or sheep or the bison but not the chicken. Even when a small flock was kept just outside the house for the occasional egg or a special meal it was always the bigger livestock that got all of the attention.

Nevertheless, today the chicken has become humanity’s largest supplier of protein; more domestic chickens are being raised for food than any other animal. The chicken is the greatest success story of industrial food production and, as a living creature, the chief victim of that success.

Andrew Lawler’s book ‘Why Did the Chicken Cross the World?’ is a journalistic investigation into the chicken: from its natural state as a wild bird spread across southern and southeastern Asia, to being little more than one of the farmer’s wife’s chores, to becoming one of the most valuable industrial commodities on the planet.

Front cover of ‘Why Did the Chicken Cross the World?’ by Andrew Lawler (Credit: Amazon)

No one knows when human beings first began to keep the small wild relative of the pheasant, but the remains of chickens, along with primitive pictograms identified as chickens, indicate that our relationship dates back to the Stone Age. The earliest evidence for humans raising and breeding chickens is not for food, however; it was for cockfighting.

Wild Chickens still exist in the Kaziranga National Park in India (Credit: Pinterest)

Indeed, much of the first third of ‘Why Did the Chicken Cross the World?’ deals with cockfighting, both as a vehicle for gambling and as a religious ritual! Andrew Lawler presents his evidence in a clear, enjoyable fashion that I quite frankly envy. Traveling around the world, Mr. Lawler visits a selection of people who raise roosters for the pit but whose affection for their fighters goes far beyond a source of income.

It is likely that chickens were first domesticated for the fun of watching them fight rather than as a source of food. (Credit: Daily Times)

Moving forward in history, Mr. Lawler details how for centuries the chicken competed with ducks and geese, and later the American turkey, for a place on humanity’s farms. It was only in the late 19th and early 20th centuries that the chicken became the dominant barnyard fowl.

A few centuries ago any barnyard would have kept several species of poultry for food (Credit: MutualArt)

The main part of ‘Why Did the Chicken Cross the World?’ is the story of how the chicken became the most bred, raised and, finally, slaughtered animal on Earth. Starting around 1850 in England and the US, the importation of larger, meatier chickens from Asia began a long-term breeding program to produce a chicken that would grow bigger in less time on less feed, making chicken more available and less expensive.

Queen Victoria’s poultry house. It was when Victoria became interested in raising chickens that the species became popular in England. (Credit: Poultry Pages)

A key moment came in 1948 when the world’s largest retailer, the A&P supermarket chain, joined with the US Department of Agriculture (USDA) to sponsor the ‘Chicken of Tomorrow’ contest. The winner of that contest became the sire of an industrial production line of chickens that grow to more than twice the weight of their wild ancestors. In as little as 47 days modern birds are fully grown, at a ratio of one kilo of chicken produced for every two kilos of feed, a feed-conversion ratio nearly 50% better than that of any other meat-producing animal.

The ‘Chicken of Tomorrow’ contest led to the industrialization of raising chickens (Credit: Flashbak)

None of this did the chickens any good. Birds bred for meat are stuffed by the tens of thousands into industrial-sized coops, see image below, where they are fattened up to the point where they can hardly stand. They are allowed to live for less than two months before being slaughtered.

Thousands of Chickens crammed into a modern chicken coop. Is this where your next meal is coming from? (Credit: YouTube)

Selective breeding has produced giant chickens, but at the cost of the animals’ health. (Credit: Insteading)

Hens bred for egg production are squeezed into tiny ‘battery cages’, see image. They lay an egg a day on average, a process that drains so much calcium from their systems that their bones become extremely weak. After a year the hen is so exhausted that she is simply used for dog food.

Egg-laying chickens in a ‘battery cage’. (Credit: Farm Sanctuary)

That’s the hens; the roosters, which are less valuable and harder to keep because of their tendency to fight, are simply separated from the hens after hatching and disposed of as cheaply as possible. To the modern food industry the chicken is no longer a living creature but just another commodity to be produced and packaged cheaply and efficiently.

A motif that Mr. Lawler often returns to is that for millennia the chicken was a familiar animal. Today it is virtually unknown as a living thing; it is just something we eat, a commodity rather than a fellow creature.

‘Why Did the Chicken Cross the World?’ is a thoroughly enjoyable book, a mixture of science, technology, history, sociology and politics in which you find yourself learning something on every page, and the knowledge sticks with you. And I’m not just saying that because Andrew Lawler and I share a surname. To the best of my knowledge we are totally unrelated; the book is just really good!

Space News for August 2019.

We generally think of a story in the news as a report of some sort of dramatic occurrence, a story about an event full of action and yes, even danger. Space news therefore would consist primarily of accounts about rocket launches and space probes landing on distant worlds.

Of course we know that isn’t quite true. In space exploration the calm, deliberate decisions made in engineering conferences are every bit as vital to accomplishing a mission as the more spectacular moments. In this post I will discuss three such stories illustrating the kind of planning and decision making that will make future space missions possible.

Many ideas are developed, and problems solved, in Engineering Meetings (Credit: PSM.com)

One such important decision, announced by NASA on August 19, was the go-ahead to begin construction of the ambitious Europa Clipper space probe, named for its target, Jupiter’s moon Europa. The mission of the Europa Clipper is to study that icy world in an effort to determine whether the moon could actually be a home for life. Some 40 close flybys of Europa are planned, during which the probe will measure the thickness of the moon’s icy surface and seek to confirm the existence of a liquid ocean beneath the ice.

The Europa Clipper Space probe will make 40 flybys of the icy moon of Jupiter (Credit: ABC57.com)

The decision by NASA means that the design phase of the mission is now over and construction will begin at NASA’s Jet Propulsion Laboratory (JPL), with a planned launch date of 2023 or 2025. One decision about the Europa Clipper still remains to be made, however: what launch vehicle will send the probe on its way to Jupiter?

Currently Congress has ordered NASA to use the Space Launch System (SLS), but that massive rocket has still not made its first test launch, and there is a real possibility that the SLS might not be ready by 2025. Launching the Europa Clipper on the SLS would also cost over a billion dollars.

After many delays and budget overruns NASA’s massive Space Launch System (SLS) still has not flown (Credit: NASA)

NASA, on the other hand, would prefer to launch the Europa Clipper on a commercial rocket such as SpaceX’s Falcon Heavy. Doing so would not only save hundreds of millions of dollars but also firm up the launch schedule, since the Falcon Heavy has already flown successfully three times. Unfortunately the decision here may be made by politics, because the SLS is being built at NASA’s Marshall Space Flight Center in Alabama and some very important Republican senators strongly support it.

The SpaceX Falcon Heavy rocket has already flown successfully three times (Credit: The Verge)

Speaking of the Marshall Space Flight Center, NASA has made another decision, naming it the lead management center for development of the lunar lander for the agency’s big Artemis program. Artemis is the name NASA has now given to its plans for returning astronauts to the Moon’s surface by 2024. Since Marshall is already developing the SLS as the Artemis launch vehicle, its selection as lead for the lander puts two big pieces of the Artemis pie on Alabama’s plate.

The Marshall Space Flight Center is where NASA has developed rockets like the Saturn V and Space Shuttle (Credit: Wikipedia)

Again, the decision here was made on political, not engineering, grounds, and that’s never a good thing. In fact the decision could very well be changed. You see, the Johnson Space Center is in Houston, Texas, and there are a couple of powerful Texas senators, also Republican by the way, who think Johnson would be a much better choice to lead the lander’s development.

The Johnson Space Center in Texas is where NASA’s manned space missions are developed (Credit: Wikipedia)

None of this arguing back and forth will make the lander perform any better, or be built any faster or cheaper. Indeed, that sort of political infighting is more likely to stall funding appropriations, leading to schedule delays and cost overruns.

On a more hopeful note, NASA has also decided to team up with SpaceX to develop the technology necessary for refueling spacecraft in space! Again the idea is to reuse spacecraft rather than throw them away after one use and build another. In-space refueling has long been considered essential to developing a space infrastructure that will enable longer and more difficult space missions.

Refueling in space would extend the operational life of satellites, thereby reducing their cost (Credit: Engadget)

Take for example the communications satellites now in geostationary orbit, about 35,800 km above the Earth’s equator. These multi-million dollar radio relays must keep their antennas pointed very precisely at Earth in order to do their job at all. To do this the satellites have small station-keeping rocket engines that keep them exactly where they’re supposed to be. After about 5-7 years, however, those engines run out of fuel and the craft begins to drift until its antennas are no longer directed at Earth. Once that happens the satellite becomes nothing more than a very expensive piece of junk. If you could refuel those satellites in orbit, however, you could extend their useful life by years and save billions of dollars.
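The altitude of geostationary orbit isn't arbitrary; it follows directly from Kepler's third law, since a satellite whose orbital period matches one sidereal day hovers over a fixed point on the equator. A quick check in Python with standard physical constants:

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24      # mass of the Earth, kg
R_EARTH = 6.371e6       # mean radius of the Earth, m
T_SIDEREAL = 86164.1    # one sidereal day, seconds

# Kepler's third law for a circular orbit: r^3 = G*M*T^2 / (4*pi^2)
r = (G * M_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude_km = (r - R_EARTH) / 1000.0   # roughly 35,800 km
```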

For manned spaceflight, in-space refueling would allow the development of true spaceships that could travel back and forth to the Moon or Mars multiple times. Such spaceships would be refueled at the end of each mission, exactly the way you refuel your car after a long trip.

Developing the technology for refueling in space won’t be easy, however. Most of the chemicals used as rocket fuel, liquid oxygen, liquid hydrogen and liquid methane, have to be kept cryogenically cold, requiring both refrigeration equipment and power. And everything has to be kept airtight, or the fuel you spent so much money getting into orbit will simply boil off into space. That’s why NASA teaming up with SpaceX makes sense: while SpaceX is the leader in reusable spacecraft, NASA’s Glenn Research Center in Ohio and Marshall Space Flight Center are the recognized experts in handling and storing rocket fuels. Hopefully this pairing of skills will solve the problems of refueling in space, and one day soon, in addition to orbiting space stations, we will see orbiting gas stations as well.

Will there soon be a ‘Gas Station’ in orbit above the Earth? (Credit: Ars Technica)

The Transistor and Integrated Circuit: the Story of the Miniaturization Revolution in Electronics

Earlier this year I celebrated the fiftieth anniversary of the Moon landing of Apollo 11 by publishing a series of eight articles about the ‘Space Race’ of the 1960s. I enjoyed that task so much that I decided to write a few more posts about some of the other cool technologies of that time, I hope you enjoy them.

In most homes today electronic devices outnumber human beings by a factor of three, four or even more. Add up all of the TVs, computers and smartphones; even our ovens and refrigerators have microprocessors in them nowadays! Electronics are so cheap, so versatile and so small that we’re putting them in just about everything.

Just some of the electronics that can be found in a modern home. (Credit: Santa Barbara Loan and Jewelry)

Back in the 60s, however, electronics were big and expensive. Most homes had one TV, one record player and one, maybe two, radios. The reason was simple: electronics were built around vacuum tubes, which were themselves large and expensive. See image below.

An Electronic Vacuum Tube (Credit: Parts Express)

Now if you think that a vacuum tube looks something like an incandescent light bulb you’re quite right: vacuum tubes were developed from light bulbs and, like them, require a considerable amount of power, both voltage and current, just to turn on. This makes vacuum tubes wasteful of energy, hot and rather large.

Things started to change during the 60s when the first transistor electronics came on the market, the small, handheld AM transistor radio being the most popular. Pretty much everyone knows that transistors are made primarily of silicon and that, like a vacuum tube, a transistor is an ultra-fast electrical switch. Unlike a tube, however, a transistor doesn’t have to be hot in order to work.

An antique six transistor radio. (Credit: ETSY)

This means a transistor needs only a small fraction of the power of a vacuum tube in order to function, and therefore transistors can be made much smaller and packed together more tightly. Whereas a vacuum tube radio was as large as a piece of furniture, a transistor radio could be held in one hand, and with the transistor radio the word ‘miniaturization’ came into common usage.

Vacuum Tube radios could hardly be considered mobile! (Credit: Flickr)

Still, my first little transistor radio was built from ‘discrete’ transistors. That is to say, each transistor was a separate object, an individual piece of silicon packaged in its own plastic coating. When I bought my second transistor radio I of course disassembled the first one, and inside I found six transistors along with numerous other components. The transistors were each about the size of a pea; I learned later that they were packaged in a standard format known as TO-92.

A single 2N3904 Bipolar NPN General Purpose Transistor packed in a TO-92 case. (Credit: Addicore)

Even as the first transistorized consumer products were becoming available, some engineers began to wonder whether it would be possible to fit two transistors, or even more, on a single piece of silicon, and just how many you could fit. The first experiments with Integrated Circuits (ICs), as these components came to be known, were carried out at Texas Instruments in 1958. See image below.

The world’s first integrated circuit contained two transistors on a single piece of germanium, not silicon (Credit: Texas Instruments)

The advantages of ICs were many: reduced cost, size and power requirements, along with increased operating speed. The drawback was their high initial start-up cost. The facilities needed for manufacturing ICs, known as ‘foundries’, are very expensive, even though, once you have a foundry, millions of ICs can be made very cheaply. In the business this is known as a high Non-Recurring Expense (NRE) with a small Recurring Expense (RE).

A look inside a foundry for the manufacture of Integrated Circuits. (Credit: SemiWiki)

So, who was going to pay for the first IC foundries? The US government, that’s who! In the 1960s both NASA and the military had a tremendous need for ever more sophisticated radios, radars, guidance systems and even computers. And all of these new electronics had to be smaller in order to fit into rockets, airplanes and ships. The IC was the only technology that could possibly satisfy that need.

Then, once the first foundries were built, the miniaturization revolution really got under way. One of the pioneers of the IC industry, Gordon Moore, declared in 1965 that the number of transistors on a single silicon ‘chip’ would double every two years. This prediction is commonly called Moore’s Law and has held now for over 50 years, with current technology capable of placing billions of transistors on a chip of silicon no larger than a fingernail.
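Moore's Law is easy to play with as arithmetic: the transistor count grows as 2 raised to the power of (years elapsed divided by the doubling period). The 2,000-transistor starting chip below is hypothetical, chosen only to show how quickly the doubling compounds.

```python
# Moore's Law as stated above: transistor count doubles every two years.
# The 1970 starting count of 2,000 transistors is purely illustrative.

def moores_law(start_count, years, doubling_period=2):
    """Projected transistor count after the given number of years."""
    return start_count * 2 ** (years / doubling_period)

for year in (1970, 1990, 2010):
    print(year, int(moores_law(2_000, year - 1970)))
```

Even from that modest start, two decades of doubling reaches the millions and four decades reaches the billions.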

Gordon Moore was an early pioneer in the development of Integrated Circuits. (Credit: Computer History Museum)
A Look inside a typical Integrated Circuit, this one is a Pentium 4 Microprocessor used in many personal computers. (Credit: Calvin College)

With this technological progress has come personal computers, smartphones, digital cameras, digital television and myriad other devices that we all have in our homes or carry on our person. The transistor and Integrated Circuit have become the true symbols of our modern age and their revolution began in the 1960s.

There’s some good news about the Environment for a Change. Plastic microparticles may not be a health danger after all!

Every day it seems as if we hear another news story about how all the pollutants and trash that we’re dumping into the environment are coming back to do us harm. If it isn’t climate change it’s harmful chemicals in the air or water. One possible threat that’s been in the news recently is plastic microparticles.

Just a small part of the Great Pacific Garbage Patch. Most of this muck is plastic! (Credit: The Brag)

What are plastic microparticles? Well you see, all those millions of tons of plastic we keep throwing away may be chemically inert, but ultraviolet light from the Sun combined with mechanical action from ocean waves or weather can break them down into particles less than 5mm in diameter.

Waste Plastic doesn’t decay in the environment but it does break down into small pieces the smallest of which are microparticles (Credit: Lifegate)

Environmental researchers are finding plastic microparticles nearly everywhere. In the oceans they have been discovered both in the Arctic regions and at the bottom of the Mariana Trench, the deepest part of the ocean. Scientists in both France and Colorado have even found plastic fibers in rainwater, while in Norway they’ve been found in snow. I suppose we’ll have to stop using the phrase ‘Pure as the driven Snow’. With plastic microparticles everywhere we are certainly going to be ingesting some as we eat and drink, so the question is: can they get from our stomachs into our bodies, and if so what harm will they do there?

Plastic bags have even been found at the bottom of the deepest part of the Oceans (Credit: Science Alert)

Researchers have begun to study this possibility with the intent of determining the health threat posed by plastic microparticles. A leading scientist at the Center for Organismal Studies at the University of Heidelberg, Doctor Thomas Braunbeck has been investigating whether or not plastic microparticles can pass easily through the lining of the intestines of vertebrate animals. In other words if we ingest these particles will they get into us?

Professor Doctor Thomas Braunbeck of Heidelberg University (Credit: Researchgate)

The test animal Dr. Braunbeck chose for his work was the well-known freshwater aquarium fish the zebra danio (Danio rerio), because he could study many animals at once quite easily. Also, the zebrafish’s growth rate is so high that if plastic microparticles can be absorbed, a lot would be absorbed in a short time, making detection more certain.

Logo of the Center for Organismal Studies (COS) showing a Zebra Danio, the fish used in the study of plastic microparticles (Credit: COS Heidelberg)

To carry out his experiment Dr. Braunbeck used microparticles that were coated with a phosphorescent chemical that made them easier to track, and the particle size he chose was around 10μm. First the particles were fed to a kind of small crustacean that is also well known to tropical fish hobbyists as brine shrimp. Once he was certain that the shrimp had indeed absorbed the microparticles he then fed the shrimp to his zebrafish.

Now here’s the good news. When Dr. Braunbeck checked the fish for signs that plastic microparticles had been absorbed he found none. The particles had been unable to pass through the lining of the zebrafish’s intestine. Instead the microparticles had simply passed all the way through the fish’s digestive system and out the back end.

Since this is one of the first experiments to determine if plastic microparticles can be absorbed through the intestine of a vertebrate the negative result is good news. Before you start celebrating however remember I mentioned above that the particles used in the study were 10μm in diameter. Dr. Braunbeck cautions that smaller particles might still be able to get through. Nevertheless it is nice to hear a little hopeful news about pollution for a change.

Of course just because plastic microparticles may not be a very big health risk certainly doesn’t mean that we shouldn’t be concerned about the millions of tons of plastic waste that are turning our planet into a trash dump. Fortunately there are more and more people who are trying to find solutions to the problem. Earlier this year, see my post of 9 January 2019, I wrote about the young man from Holland named Boyan Slat who had invented a 700m long ‘U’ shaped boom to sweep up the Great Pacific Garbage Patch. The first test of Slat’s invention ran into some problems but the upgrades are in progress and a second test is coming soon!

The 700m floating boom used to remove plastic from the ocean still has some bugs to work out. (Credit: Twitter)

While Boyan Slat’s boom is intended to remove large pieces of plastic from the ocean, a teenager from Ireland has developed a technique for eliminating up to 88% of plastic microparticles from water. The teenager’s name is Fionn Ferreira and his project won him the Grand Prize in Google’s annual science fair. The native of the town of Ballydehob is planning on using his $50,000 prize to pay for his further education in college. Fionn’s technique for collecting the plastic microparticles in water involves attracting and removing the particles with a magnet.

Fionn Ferreira, the winner of this year’s Google science prize for his technique to remove plastic microparticles from water. (Credit: ABC News)

Wait a minute, you say. Plastic isn’t magnetic. You can’t attract plastic with a magnet. That’s true; however, in water plastic microparticles are attracted to ferrofluids, mixtures of oil and magnetite, an iron oxide. The oil in the ferrofluid clumps with the microplastic, and the magnetite can then be lifted out with a magnet, carrying the oil and plastic with it.

If this sounds almost too good to be true you could be right. The biggest technical problem as I see it will be scaling up the whole process; there are a lot of plastic microparticles out there to be collected. In particular, separating the ferrofluid from the plastic so that it can be used again and again could prove difficult. Of course the real problem will be the cost; nobody is going to be making a profit off of this, you know.

And that’s the real problem with cleaning up the environment in general, the cost. There are many things we could do to clean up the mess we’re making of this planet of ours. The question is, who’s going to pay for it?

In a revolutionary experiment scientists are using the Gene editing tool CRISPR to treat patients suffering with the genetic disorder Sickle Cell Anemia.

In all of modern science there is perhaps no more rapidly advancing field than that of genetic research. Much of that progress has come about because of the development of the molecular gene editing tool CRISPR (which stands for Clustered Regularly Interspaced Short Palindromic Repeats) that allows biochemists to literally cut and/or paste sections of DNA into the chromosomes of living cells. I have talked about CRISPR several times in previous articles, see posts of 2 March 2019, 12 January 2019, 1 December 2018, 1 September 2018 and 5 August 2017, and the full potential of CRISPR is still only being guessed at.

How CRISPR Works (Credit: Cambridge University Press)
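As a toy illustration of the ‘cut’ half of that cut-and-paste ability, and nothing like a real CRISPR workflow, here is a sketch of how Cas9 selects a target: it cuts where the DNA matches its roughly 20-nucleotide guide sequence and is immediately followed by an ‘NGG’ PAM motif. The sequences below are invented, and for simplicity only the forward strand is searched.

```python
# Toy sketch of CRISPR-Cas9 target selection: Cas9 cuts where the DNA
# matches the guide RNA sequence AND is immediately followed by an
# 'NGG' PAM motif. Forward strand only; sequences here are invented.

def find_cut_sites(dna, guide):
    """Return indices where the guide matches and an NGG PAM follows."""
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        after = dna[i + len(guide): i + len(guide) + 3]
        if dna[i:i + len(guide)] == guide and after[1:] == "GG":
            sites.append(i)
    return sites

dna = "TTACGATCGTAGCTAGGCCATGCA"
print(find_cut_sites(dna, "GATCGTAGCT"))  # match at index 4, 'AGG' PAM
```

The PAM requirement is what keeps Cas9 from cutting everywhere the guide happens to match; a match with no adjacent NGG is left alone.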

Now the latest experiment is making a bold and daring attempt to treat fully grown persons who are sufferers of the inherited genetic disorder Sickle Cell Anemia, a condition that affects about 100,000 people living here in the United States and millions of others worldwide. This is the first ever attempt to use CRISPR to modify the cells of adult patients in the hopes that the altered cells will allow those patients to live a more normal life.

The Genetic disease Sickle Cell Anemia is a chronic ailment for millions of people (Credit: Familydoctor.org)

Before I continue let me talk a little bit about the genetic disease Sickle Cell Anemia. This is a disorder that affects the bone marrow and leads to the production of red blood cells with a defective protein that causes the cells to be deformed, sickle shaped. These deformed blood cells are unable to carry a normal amount of oxygen, leading to a permanent and in some cases crippling weakness in the affected person. Most sufferers of Sickle Cell Anemia are ethnic African or African-American, and since the disease is inherited it can devastate a family for generations.

Sickle Cell Anemia is an inherited genetic disorder (Credit: Synthego)

The procedure being tested is to take cells from the patient’s bone marrow and modify the cells’ DNA using CRISPR in order to make them produce a protein that is normally only formed by the human body while in the womb and during early childhood. It is hoped that the production of this protein will correct the deformations of blood cells caused by the defective protein, thereby alleviating the anemia caused by sickle cell.

Possible Techniques for using CRISPR to cure Sickle Cell (Credit: American Chemical Society)

The experimental treatment for Sickle Cell Anemia is being conducted at eight hospitals and clinics in North America and Europe and is being overseen by CRISPR Therapeutics of Cambridge Massachusetts in association with Vertex Pharmaceuticals of Boston. The current plan is to have up to 45 patients take part in the initial trials of the experiment.

Patient undergoing CRISPR treatment for Sickle Cell Anemia (Credit: NPR)

It will be months before the researchers know for certain whether or not the modified cells are even producing the desired protein let alone if the protein is actually helping to improve the health of the study’s patients. Then, even if there is strong evidence that the procedure has worked, there is the question of how long will the benefits last? Will this technique produce a permanent cure or will the effect be only temporary?

These are questions that only time can answer, but we are at the threshold of a new medical technology. This may be the first attempt to treat patients with a genetic disease by using CRISPR but it certainly will not be the last.

Newly developed Prosthetic arm is so advanced it not only is controlled by the brain but it gives the user an almost natural sense of feeling.

The development of prosthetic limbs has advanced so quickly in the last few years that it really seems as if our technology is catching up to the science fiction ‘bionic’ limbs in movies and TV from 40 or 50 years ago. In fact a new prosthetic arm being developed at the University of Utah is so sophisticated that the researchers have named it the ‘Luke Arm’ for the ‘Star Wars’ character Luke Skywalker who famously lost his arm to Darth Vader and had it replaced with an artificial one.

Steve Austin, ‘The Six Million Dollar Man’ was a cyborg with an artificial arm and two artificial legs (Credit: Amazon)
In ‘The Empire Strikes Back’ Luke Skywalker loses his arm in a fight with Darth Vader and gets a replacement! (Credit: 20th century Fox)

The prosthetic undergoing testing is designed for people who have had their left arm amputated below the elbow, but the developers are confident that the design can be easily adapted to both the right arm and to amputations above the elbow. Mechanically the arm is constructed of metal motors overlaid by a clear silicone skin in which one hundred electronic sensors have been embedded.

The Luke arm being benchtested at the University of Utah (Credit: Unews, University of Utah)

So sophisticated is the prosthetic that not only can it move and grab in response to signals from the wearer’s brain but it can send sensory signals back to the brain that are interpreted as the feelings of touch, heat and even pain. This is accomplished by a system of microelectrodes and wires that are implanted in the wearer’s forearm and connected between the nerve endings of the lost limb and the arm’s one hundred sensors.

The Luke Arm’s Sensory electrode carries 100 sensory inputs between the arm and the Brain (Credit: Unews, University of Utah)

The system of microelectrodes has been named the Utah Slanted Electrode Array and was invented by Professor Emeritus Richard A. Normann. The array is connected to an external computer that serves as a translator between the biological and electronic signals.

Patients using the ‘Luke Arm’ have succeeded in performing even delicate activities such as removing a single grape from a bunch or picking up an egg. This is because the sensors in the arm allow the user to actually feel the softness or hardness of the objects they touch. One test subject, Keven Walgomott of West Valley City Utah, even asserted that he could feel the softness of his wife’s hand as he held it with the ‘Luke Arm’.

Volunteer amputee Keven Walgomott using the Luke Arm (Credit: Interesting Engineering)

According to Jacob George, the study’s main author and a doctoral candidate at the University of Utah: “We changed the way we are sending that information to the brain so that it matches the human body…We’re making more biologically realistic signals.”

The researchers are currently working on a portable version of the Luke arm that will allow in home trials to begin. Testing of the Luke arm could take another few years before FDA approval is granted and the prosthetic becomes commercially available. Nevertheless, the day is coming when artificial limbs will be providing amputees with a quality of life that is nearly equal to the natural ones they have lost.

If you’d like to see the Luke Arm in action click on the link below to be taken to a YouTube video from the University of Utah. https://www.youtube.com/watch?v=_Xl6rFvuR08

In the story above I mentioned that the Luke Arm has one hundred sensors embedded in its skin. Now that may sound like a lot, but of course a real arm has thousands of nerve endings, giving our brain a much more complete impression of everything that’s happening to that limb. Could an electronic skin, like that on the Luke Arm, ever be developed that possesses as many sensors as natural skin?

They’re already working on it! Scientists at the Department of Materials Science and Engineering at the National University of Singapore have developed a sampling architecture of sensor arrays that they have named the ‘Asynchronously Coded Electronic Skin’ (ACES). The engineers assert that ACES could work with arrays of up to 10,000 sensors and have even fabricated and tested a 250 sensor array to demonstrate their technique.

Some of the sensors used to test ACES (Credit: Edgy.app)

Now there are several problems that you’re going to need to overcome if you intend to greatly increase the number of sensors in your system. The first is simply data overload, that is, more data than a computer, or even our brain, can handle. The second is sampling speed. With thousands of sensors waiting to have their data taken, even at a high sampling rate a large fraction of a second or more can pass between each time a particular sensor is sampled. That means that an emergency signal, burning heat or a stabbing wound, could go unnoticed until real damage is done.

The technique used by the engineers in Singapore is actually modeled on the way the nerves in our body communicate with the brain, an ‘event based’ sampling protocol. Think about it, when you first sit down in a chair you feel a large area of pressure as your skin makes contact, but after a second or so you hardly feel the chair at all. This is because our brain only reacts to changes to the messages from our nerves. The brain only pays attention when our nerves tell it that something, an event, is happening.
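The event-based idea can be sketched in a few lines: report a sensor reading only when it has changed noticeably since the last reported value. The threshold and the readings below are invented for illustration, not taken from the ACES paper.

```python
# Sketch of 'event based' sampling as described above: instead of
# reporting every reading, report only those that changed by more than
# a threshold since the last reported value. Values are invented.

def events(readings, threshold=2):
    """Return (index, value) pairs only where a reading changed noticeably."""
    last = None
    out = []
    for i, v in enumerate(readings):
        if last is None or abs(v - last) > threshold:
            out.append((i, v))
            last = v
    return out

# Pressure sensor: contact at step 2, steady (no events), release at step 6.
print(events([0, 0, 10, 10, 11, 10, 0]))
```

The steady stretch in the middle generates no traffic at all, which is the chair-you-stop-feeling effect: only the onset and release of pressure get through.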

The electronic architecture of ACES (Credit: Cosmos Magazine)

The ACES system does much the same thing, only passing on the data of sensors that are measuring changes to their environment. In this way it prevents data overload while at the same time enabling important information to quickly become available to the controlling intelligence. The researchers in Singapore hope that their ACES system will prove to be applicable not only for sensors in prosthetic limbs but also for increasing the ability of Artificial Intelligence systems to sense and manipulate their environment. In that way ACES may be another step forward in shaping the human-machine interface of the future.

Who were the Philistines of the bible, modern DNA measurements may hold the answer.

The ancient writers of the bible liked to portray all of their neighbors in a very negative way, but undoubtedly the people they called the Philistines received more than their share of bad press. According to the bible the Philistines lived along the Mediterranean coast just to the west of the central highlands where the Hebrew people resided. Here they built five city-states: Gaza, Ashkelon, Ashdod, Ekron and Gath.

Map of Ancient Canaan showing the region ruled by the Philistines (Credit: Crystalinks)

So who were the Philistines? The original Hebrew word Philištim simply means ‘people of Plešt’, whoever or whatever Plešt may have been. In the Bible, however, the Philistines are always treated as somehow different from the Canaanites or Moabites or Egyptians who were the Semitic neighbors of the Hebrews. Philistine names, like Goliath or Delilah, and the customs recorded in the bible also point to a non-Semitic origin.

In the bible the Philistines are depicted as being continually at war with the Hebrews (Credit: Facts and Details)

The first clues to the origin of the Philistines were discovered during the early archaeological expeditions to Egypt in the 1840s. It was the Egyptologists Edward Hincks and William Osborn Jr. who published the history of the Pharaoh Ramesses III and his battles with the ‘sea peoples’, who were also known by the name of Peleset (Philistines???). According to Egyptian inscriptions Ramesses III defeated these Peleset in a naval battle in the River Nile as well as a land battle along the eastern Mediterranean not far from the city of Gaza!! The inscriptions go on to state that Ramesses III later settled his captives in a series of ‘strongholds’.

Inscription of Ramesses III defeating the ‘Sea People” (Credit: TheTorah.Com)
Captive Philistines as illustrated by the Egyptians (Credit: Encyclopedia Britannica)

There are those who doubt that theory, however, stating that the similarity of names is hardly conclusive evidence. At the same time there is little archaeological support for a large-scale settlement of people in the region around Gaza in the 11th century BCE. Some scholars point to the use of the term ‘allophyloi’ (of another tribe) in the Greek translation of the bible, the Septuagint, to indicate that the Philistines were simply ‘non-Hebrews’, any enemies of the Hebrews. However the bible’s own descriptions clearly seem to refer to a definite ethnic group living in a definite place.

None of which gets us any closer to answering the question: who were the Philistines? If they weren’t Semites like the Hebrews or Egyptians or Canaanites, who were they linguistically and culturally? Where did they come from?

Based upon the clues in the bible and the Egyptian inscriptions, the leading hypothesis is that the ‘sea peoples’ came mostly from the area around the Aegean Sea, including Crete, Cyprus, the western coast of modern Turkey as well as Greece itself! The idea that a large force of bronze age Greeks might have invaded the southeastern Mediterranean coast also fits in with the well-attested destruction of the Mycenaean cities at that time.

How can we ever know? That was three thousand years ago, and the records from that time are incomplete and inconclusive. Yes, there is some archeological evidence, such as the discovery at Ashkelon of Late Helladic IIIc Mycenaean pottery, but the pottery could have come via trade. The evidence may lean toward an Aegean origin but how can we be sure?

Examples of Philistine Pottery that resembles that of Pottery from the Aegean (Credit: Bible Odyssey)

Perhaps the modern science of DNA testing can give us the answer. After all, we’ve all seen the ads telling us how DNA can reveal our ancestry. And remember how DNA was used to prove that the body found in a parking lot was actually England’s King Richard III. Couldn’t the same techniques be employed on skeletons from the right area and time period by archeologists?

In fact scientists have now done just that. In 2016 archeologists working at Ashkelon announced that they had discovered the first known Philistine cemetery, the culmination of 30 years of digging. The team, led by co-director Daniel Master of Wheaton College, unearthed the remains of at least 108 individuals, from ten of which DNA was successfully extracted. The results of the DNA analysis clearly showed that the people buried in the cemetery were not related to any of the local ethnic groups but instead showed a strong European, probably a southern European, relationship.

Archaeologists at work at the Philistine cemetery at Ashkelon (credit: National Geographic)

So it appears as if the hypothesis of an Aegean origin for the people known in the bible as Philistines is true. Goliath and Delilah may have been the descendants of Odysseus or Agamemnon or some of the other well-known characters of the Greek heroic age.

Think about that for a moment: could the ‘sea peoples’ have brought with them the stories that would become the later Greek myths? We could speculate that the Philistines told their stories about Hercules and the Hebrews responded by imagining stories of their strongman Samson. After all, the first of Hercules’ twelve labours is strangling the Nemean lion, whose hide was so tough no weapon could pierce it, while in the bible Samson’s first feat of strength is strangling a lion with his bare hands!!

Hercules slaying the Nemean Lion (Credit: Theoi Greek Mythology)
Samson slaying his lion (Credit: Geni)

Could the story of Yahweh testing Abraham by demanding the sacrifice of Isaac, and then stopping the ritual once he was sure of Abraham’s faith, be the Hebrew answer to the Greek tale of the Goddess Artemis’s demand that Agamemnon sacrifice his daughter Iphigenia to her?

Iphigenia and Isaac being prepared for Sacrifice (Credit: PD)

We may never know the answer to these questions, cultural cross connections leave few traces in the archeological record. One thing we can be certain of however is that since this is the Middle East the DNA results would quickly become politicized!

In fact Israeli Prime Minister Benjamin Netanyahu has already used the results to declare that since the Philistines came from Europe the modern Palestinians have no claim to any of the lands of the Middle East. He is of course assuming that the Latin word Palaestina is the same as the Greek word Philistinoi, which may be true but is a subject of considerable contention among scholars.

I suppose the only thing we can really be sure of is that David didn’t defeat Goliath; they’re still fighting it out!!!

No matter what you’ve heard this war ain’t over yet!!!! (Credit: Leadership Platform)

Controversy erupts over claims that newly invented glasses can correct colourblindness.

Before I start to discuss colourblindness perhaps I should take a moment to talk briefly about how it is that we are able to see colour in the first place. Most people are familiar with the fact that at the back of our eyes, in the retina, there are two groups of light sensitive cells that are called rods and cones based upon their shape. The rods are sensitive to the intensity of the light whatever the colour; if we had only rods we’d see everything in black and white, complete colourblindness.

The Anatomy of the Human Eye (Credit: Wikipedia)

The cones on the other hand come in three types; some are sensitive to the longer wavelengths of visible light, the colour red. Others are sensitive to the shorter visible wavelengths, the colour blue. The final group is sensitive to the middle wavelengths, the colour green. Together these three types of cones give us the ability to distinguish thousands of shades of colour.

Colourblindness is defined as a decreased ability to discern the full range of the colour spectrum of visible light. In other words some of the cone cells are not functioning properly.

The most common form of colourblindness consists of some degree of difficulty in distinguishing between the colours red and green and is known clinically as dichromatic. Dichromatic colourblindness is both genetic in nature and sex related, since the gene for the red / green cone cells occurs on the X chromosome of the X-Y sex pair.

How people with Dichromatic colourblindness see the world (Credit: School Work Helper)

The defective gene for colourblindness is recessive in nature so since women have two X sex chromosomes both of the chromosomes must have the defective gene in order for her to be colourblind. A woman with only a single defective X chromosome can be a ‘carrier’ of colourblindness however.

The X-Y Chromosomes determine whether you’re a girl or a boy but can also carry sex related mutations like colourblindness and hair loss (Credit: Socratic)

A male on the other hand has only one X sex chromosome, which he gets from his mother. Therefore if a woman is colourblind all of her male offspring will be colourblind. If a woman is only a carrier of colourblindness then half of her sons will inherit the defective gene and develop colourblindness. Because of this many more men are colourblind, about 8%, than women, about 0.5%.
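That inheritance arithmetic is easy to check with a little simulation: each son receives one of his mother's two X chromosomes at random, so a carrier mother ('X' normal, 'x' defective) passes the defective gene to about half her sons. The sketch below deliberately ignores fathers, since a son's single X always comes from his mother.

```python
# Sketch of the X-linked inheritance arithmetic above. Each son inherits
# one of his mother's two X chromosomes at random; 'x' marks the
# defective copy. Fathers are ignored: a son's X comes from his mother.

import random

def sons_colourblind(mother, trials=100_000, seed=42):
    """Fraction of simulated sons inheriting a defective X chromosome."""
    rng = random.Random(seed)
    hits = sum(rng.choice(mother) == "x" for _ in range(trials))
    return hits / trials

print(sons_colourblind(("X", "x")))   # carrier mother: about half
print(sons_colourblind(("x", "x")))   # colourblind mother: all sons
```

The same reasoning explains why colourblind women are so much rarer: a daughter needs a defective X from both parents, not just one.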

Colourblindness varies in degree, with most colourblind people having only a small loss of colour vision. The chart below shows the different recognized ‘types’ of colourblindness along with the percentage of the population affected.

Types of Colourblindness and percent of population affected (Credit: Wikipedia)

Determining whether or not a person has colourblindness is usually accomplished by testing them with one or more Ishihara colour test plates, an example of which is given below. In this example a person with normal colour vision can clearly see the number 27 in the center of the design while a person with a slight colour deficiency will see the number 21. A person with total red / green colourblindness will not see any number at all!

An Ishihara test for Red-Green Colourblindness (Credit: Wikipedia)

There is no cure for colourblindness. Recently, however, there have been attempts to ‘correct’ colourblindness with specially designed glasses, in a manner similar to the way that near or farsightedness can be corrected. The glasses are commercially available under the trademarked name EnChroma® and were developed by a pair of scientists: Andrew Schmeder, a mathematician who studies the psychology of perception, and Don McPherson, a glass researcher who has invented specialized laser safety glasses for surgeons.

How the Enchroma glasses are supposed to work (Credit: All About Vision)

According to the inventors the EnChroma® glasses work by eliminating just those wavelengths of red and green light that confuse the eye’s cone cell receptor which allows the brain to perceive a greater colour contrast. EnChroma® Inc. estimates that 80% of colourblind people can see improved colour vision with the help of the glasses and judging by the reaction in Youtube videos of colourblind people who try them for the first time they succeed miraculously.
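A crude way to picture the claimed mechanism is as a notch filter: intensities in a narrow band of wavelengths, where the red and green cone responses overlap most, are blocked while everything else passes. The 545-575 nm band and the spectrum values below are illustrative guesses, not EnChroma's actual filter specification.

```python
# Toy notch-filter model of the idea described above: block a narrow
# band of wavelengths where red and green cone responses overlap most.
# The band edges and intensities are invented for illustration.

def notch_filter(spectrum, lo=545, hi=575):
    """Zero out intensities in the blocked wavelength band (nm)."""
    return {wl: (0.0 if lo <= wl <= hi else inten)
            for wl, inten in spectrum.items()}

spectrum = {450: 0.2, 530: 0.9, 560: 0.8, 610: 0.7}   # made-up spectrum
print(notch_filter(spectrum))
```

Removing the ambiguous middle band costs some brightness but, in principle, leaves the remaining light easier for the two confused cone types to tell apart.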

Not everybody is so convinced, however. Researchers at the University of Granada’s Department of Optics in Spain have conducted a test of the EnChroma® glasses with 48 colourblind individuals; over 200 people volunteered for the test. The results of this study, which were published in the journal Optics Express, seem to show that the EnChroma® glasses perform only marginally better than ordinary hunting glasses at increasing colour perception. The conclusion reached was that the wearers of EnChroma® do not perceive new colours so much as see the same colours in a new way.

Sounds a bit like we’re arguing semantics to me. While it’s true that the claims made by EnChroma® Inc. are exaggerated (they’re trying to sell glasses, after all), the company has never claimed to be able to ‘cure’ colourblindness. Reality is probably somewhere in the middle, with the EnChroma® glasses allowing people with a mild form of red / green colourblindness to separate the two colours more readily.

And at least that’s a start. The development of EnChroma® glasses is the first even slightly successful treatment for colourblindness. Hopefully in the years to come improved versions of the glasses will be developed that perform better, and for more types of colourblindness.

Looking further ahead, there has been some research conducted by Maureen Neitz at the University of Washington that has employed gene therapy to cure colourblindness in monkeys. It may in fact be only a few years before there are some treatments available that can significantly improve the ability of people born with colourblindness to see the world with all of the rich tapestry of colours the rest of us take for granted every day.