Vaping and E-Cigarettes: what are the dangers, and why are so many people suddenly getting sick using them?

I think I ought to start this post with a little honest disclosure about my own smoking history. I smoked old-fashioned cigarettes for almost 30 years and managed to finally quit about 15 years ago. My doctor tells me that I appear to have suffered no ill effects from all my years of smoking, so I count myself as being very lucky. And like many ex-smokers I now support any and all efforts to get people off tobacco, and in particular to keep young people from ever getting hooked in the first place.

Credit: Get Healthy Clark County

About ten years ago a new way of consuming nicotine came on the market: the e-cigarette, or 'vape' as it's also known. In essence, what an e-cigarette does is heat a small amount of liquid from an inserted container known as a 'pod' until it becomes an aerosol. In addition to water, these pods contain nicotine along with a variety of flavourings that the consumer can choose. The smoker then inhales the aerosol, and the nicotine passes through the lungs into the bloodstream just as it does when a person smokes an ordinary cigarette. You'll recall that nicotine is the reason people smoke in the first place: the chemical is a stimulant, but more importantly it's highly addictive.

At first e-cigarettes looked just like ordinary cigarettes but now they come in a variety of shapes and sizes! (Credit: The Continuum of Risk)

The rationale that the inventors and manufacturers of e-cigarettes gave for marketing their product is that since there's no actual smoke involved, there are none of the carcinogens that cause lung cancer. This is assumed to make e-cigarettes safer than ordinary cigarettes. Safer perhaps, but certainly not safe, since it's the nicotine that leads to heart disease among smokers, and more smokers actually die of heart disease than of lung cancer.

There is a 20-year lag between the rise in smoking and the rise in lung cancer, but otherwise the two curves are nearly identical. (Credit: Wikipedia Commons)

Still, that logic allowed e-cigarettes to be advertised as safer than cigarettes. At the same time the completely unfounded claim was also promoted that heavy smokers could use e-cigarettes as a pathway to quitting. Together these untested assertions were employed by the manufacturers of e-cigarettes to make their product seem almost like a treatment for smoking rather than just another way of getting poisons into your body.

None of those claims was backed up by any data whatsoever. No long-term studies of the health risks of using e-cigarettes have been completed, so the only thing that can be said for certain is that the nicotine still makes them dangerous. In fact chemical analysis has shown that many other dangerous substances are present in the aerosol generated by e-cigarettes, calling into question the claim that they are safer than ordinary cigarettes at all.

We still don’t know all of the toxic chemicals that are contained in the aerosol from an e-cigarette (Credit: Wikipedia Commons)

For a time the number of people using e-cigarettes was small and consisted almost exclusively of established smokers looking for a safer alternative. But in their efforts to increase sales the e-cigarette manufacturers started extensive advertising campaigns touting how 'safe' e-cigarettes were. Then the manufacturers began selling fruit-flavoured and even candy-flavoured vaping pods that are far more attractive to teenagers, and even children, than to the adults who are legally allowed to purchase the product.

The easiest way to measure the growth of e-cigarette use is by measuring the money being spent on them! (Credit: Heineventures)

Of course the vaping industry categorically denies that it is deliberately advertising to children. However it is an established fact that several overzealous e-cigarette salesmen have actually gone into high schools and promoted their product to students as being safe! Not safer than real cigarettes mind you, just plain safe…a complete misrepresentation of the available facts.

Not only are the advertisements for e-cigarettes geared toward teenagers but they make use of the social media platforms most frequented by teenagers. (Credit: Smithsonian Magazine)

The end result of all that sophisticated media hype was that, after years of declining smoking rates, we now have millions of people hooked on e-cigarettes, especially teenagers. Nearly 11 million adult Americans and two million teenagers are current e-cigarette users. That's a big market and it means that there's a lot of money to be made from e-cigarettes: $11.8 billion in 2018, with the market estimated to reach $16.5 billion by 2024.

And it's the same old companies making that money; the most popular brand of e-cigarettes, JUUL, is 35% owned by Altria, the parent company of Philip Morris, while second place Vuse is a subsidiary of R. J. Reynolds. Both are tobacco companies that for decades denied the overwhelming medical evidence linking their products to millions of deaths. Now these same companies are making the same old arguments to defend their new products.

Remember the Marlboro Man? Well, he died of lung cancer. Now the company that sold that poison is selling JUUL e-cigarettes. (Credit: Amazon)

Recently the problem of e-cigarettes has taken an even more dangerous turn. Just this year more than 500 people who use e-cigarettes have been diagnosed with severe lung problems, while seven people have died. At the moment doctors have no idea why e-cigarettes have suddenly turned so deadly. Remember, e-cigarettes have been around for ten years, so why have all these people gotten sick just this year? The working hypothesis is that some people have been adding THC, the psychoactive chemical in marijuana, to their pods and that it is the THC that is causing the injuries. However there is some evidence to indicate that THC cannot be the sole cause of all the recent cases.

We are only just learning the dangers of vaping! (Credit: TruLaw)

The plain fact is that e-cigarettes are unhealthy; maybe they are a little less unhealthy than real cigarettes, but they are unhealthy nevertheless. Such a dangerous product should never have been allowed to be openly sold before extensive studies had been conducted to quantify just how unhealthy they are.

E-cigarettes and other tobacco products are often sold right next to candy as a means of hooking children as new customers. (Credit: Counter Tobacco)

The tobacco companies behind e-cigarettes knew how to circumvent any sort of government oversight, however, and so now a whole new generation is getting hooked on the same old poison I got hooked on back in high school. I was lucky and quit before cigarettes killed me. I wonder how many of the users of e-cigarettes won't be so lucky!

Post Script: Only a week has gone by and the epidemic of lung injuries associated with vaping has more than doubled in size. More than 800 people have been diagnosed and 14 have died! Although a link to THC in vaping pods seems likely, doctors are still at a loss as to exactly what is going on. All that they can do at the moment is advise everyone: DON'T VAPE!

Astronomy News for September 2019.

There have been a couple of major discoveries in astronomy this past month, each in their own way teaching us something about the universe outside our solar system, and how similar that is to what goes on inside our solar system.

The first story concerns one of the now more than 4,000 planets that have been discovered orbiting other stars. Exoplanets, as astronomers call them, were mostly discovered by the no longer functioning Kepler space telescope. (See posts of 16Dec2017, 28Apr2018 and 3Nov2019)

A chart detailing some of the discoveries made by the Kepler Space Telescope. (Credit: Sky and Telescope)

Most of the exoplanets discovered to date are considerably larger than our Earth (let's be honest, the bigger anything is the easier it is to see) and most have been found orbiting rather close to their parent star. Neither of these conditions is expected to make these exoplanets hospitable to life, but astronomers know that if they find enough exoplanets they'll eventually start finding some that look more like Earth and could be inhabited.

Our techniques for discovering exoplanets are far more likely to find big ones. This is a classification as of 2013. (Credit: Universe Today)

In fact their latest candidate possesses an Earth-like feature never before seen on an alien world: water vapour in its atmosphere. The planet is officially known as K2-18b and it orbits the small red dwarf star K2-18, which resides about 110 light years away in the constellation of Leo. Although K2-18b orbits closer to its star than the Earth does to the Sun, K2-18 is so much dimmer than our Sun that the estimated temperature on K2-18b is between 0 and 40 degrees Celsius. That temperature is just right for liquid water to exist on the planet's surface and nearly perfect for life. Astronomers succeeded in detecting the water vapour by studying the light coming from K2-18 as K2-18b passed in front of the star. That light showed the characteristic absorption lines of water vapour.

An Artist’s impression of what the exoplanet K2-18b might look like. (Credit: ESA / Hubble)
When an element or chemical compound is heated it emits an emission spectrum; the top image is hydrogen's. When light passes through the same material while it is cool, the material absorbs those same frequencies, producing an absorption spectrum. (Credit: Physics Stack Exchange)

Before you start planning a visit to K2-18b, however, I should point out that the planet has a mass estimated at about eight times that of Earth and possesses a very thick atmosphere. Together these facts make the planet more like a warm version of Uranus or Neptune than our Earth. Additionally the planet's star K2-18 is, like many small stars, quite active, with a large number of stellar flares that might bathe the planet's surface in radiation. Still, that thick atmosphere would give the planet's surface some protection, and if it does have oceans it is possible that life could exist there.

Astronomers will keep searching the stars of our galaxy looking for worlds that may possess life. Indeed the new James Webb Space Telescope that is expected to be launched in March of 2021 has been designed in part to carry out much more detailed studies of planets like K2-18b. So perhaps in just the next decade or so astronomers may finally discover a planet that truly is Earth like.

The James Webb Space Telescope is nearing completion and launch is expected in 2021. (Credit: Popular Science)

My second story concerns the recent observation of a comet-like object that has entered our solar system from outside and is going to pass around our Sun before heading back out into interstellar space. You may recall hearing about the first such interstellar visitor ever observed, the object named Oumuamua, a little more than a year ago. (See my post of 23May18).

The interstellar object named Oumuamua passed through our solar system in 2017. (Credit: Twitter)

Our new visitor was discovered on August 30th by Gennady Borisov of the Crimean Astrophysical Observatory and has been given the temporary designation of C/2019 Q4 (Borisov). The object has since been observed by more than a half dozen other observers and its orbital parameters have been tentatively determined, with the result that the eccentricity of C/2019 Q4 is around 3.2. Now an object in a stable, closed orbit has an eccentricity of less than 1, so an eccentricity of 3.2 means that C/2019 Q4 will make one quick pass by our Sun and then head back out into interstellar space, just as Oumuamua did back in 2017.
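For readers who like to see that rule spelled out, here is a minimal Python sketch, purely for illustration; the function name and example values are my own, with Earth's well-known eccentricity of about 0.017 thrown in for comparison:

```python
# A toy classifier: the eccentricity e of an orbit determines its shape.
def orbit_type(e):
    if e < 1.0:
        return "elliptical: bound to the Sun, it keeps coming back"
    elif e == 1.0:
        return "parabolic: the theoretical borderline case"
    else:
        return "hyperbolic: one pass by the Sun, then gone for good"

print(orbit_type(0.017))  # Earth's nearly circular orbit
print(orbit_type(3.2))    # C/2019 Q4 (Borisov): clearly interstellar
```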

Unlike Oumuamua, the interstellar object C/2019 Q4 (Borisov) has already shown evidence that it is a comet. (Credit: Sci-News.com)

There are a couple of big differences between C/2019 Q4 and Oumuamua, however. For one, whereas all observations of Oumuamua indicated that it was a hard, solid object like an asteroid, C/2019 Q4 has already shown clear evidence of a comet's tail. In other words, Oumuamua was a rock while C/2019 Q4 is a dirty snowball.

The more important difference, however, may be that C/2019 Q4 was discovered well before it passes the Sun, and astronomers hope to have more than a year to study it. Oumuamua, on the other hand, was only discovered after it had passed the Sun and was on its way out of the solar system, leaving astronomers little more than a month to observe it. Click on the link below to be taken to a YouTube video of the estimated track of C/2019 Q4 through our solar system. https://www.youtube.com/watch?v=vqMJo3DHOfg

I’m certain there will be a lot more to learn about C/2019 Q4 during the next year, and I hope they come up with a real name before long. You can be certain that I’ll keep you well informed about it. 

The lionfish is one of the latest invasive species to threaten large-scale destruction of habitat in US waters.

Invasive species are defined as populations of living creatures that have been transported from their natural habitat and become established in another ecosystem perhaps thousands of kilometers away. Sometimes this movement is a natural occurrence, such as when a few finches were somehow blown onto the Galapagos Islands, became established and evolved into some fifteen recognized species. Indeed such rare but natural transplanting of species is considered to be a driving force in evolution as the relocated population adapts to its new environment.

The 15 Species of Finches inhabiting the Galapagos Islands are all descended from a few finches that somehow survived being blown to those distant islands. (Credit: Pinterest)

More often, however, it is human beings who have transported the creatures, either intentionally or accidentally. A prime example is the popular saltwater aquarium fish the lionfish: any member of the 12 species of the genus Pterois, but particularly P. volitans and P. miles. See images below.

Pterois volitans, the Red Lionfish is a popular aquarium fish that should only be kept by a very experienced hobbyist. (Credit: Wildlife Society)
Pterois miles is another popular species of Lionfish. (Credit: Enalia Physis)

Lionfish are native to the Indian and western Pacific oceans, where they are a predatory species feeding on small fish and invertebrates. Adult lionfish are generally 20-40 cm in length and can weigh more than a kilogram. Their numerous spiny fins and colourful stripes have made them a popular aquarium fish, even though the animal's spines are venomous and can produce a painful sting along with vomiting, nausea, convulsions and numerous other ill effects. Because of the danger of their spines, lionfish should only be kept by the most experienced of aquarium hobbyists.

The natural home of the lionfish comprises the blue (P. miles) and green (P. volitans) shaded regions. They are a destructive, invasive species in the red areas. (Credit: USGS)

Even though lionfish are popular pets, it appears that some aquarium keepers along Florida's Atlantic coast decided that they were more trouble than they were worth and released their pets into the ocean. Once free, the lionfish began doing what fish do, and without their natural predators the lionfish population has exploded. Lionfish are now regularly found along the US coastline from Cape Hatteras in North Carolina to Texas, and throughout the Caribbean islands.

The destruction caused by lionfish consists mainly of their preying on native species, particularly the young of valuable game fish. It is estimated that the growing lionfish population could lead to a reduction of 80% in the biodiversity of Gulf and Caribbean coral reefs.

To combat their spread, government and private conservation groups are developing programs to eradicate the lionfish from the waters they now infest. Currently biologists and fishermen are working to develop special traps and even robotic hunters that will catch lionfish without harming native species. At present, however, the most efficient technique for dealing with lionfish is spearfishing by scuba divers.

Spearfishing is presently the best method for controlling the population of these predators. (Credit: Deeperblue.com)

One helpful fact is that lionfish are quite tasty if you fillet them properly; remember, they are venomous. So if oceanic scientists do develop a technique for the large-scale culling of lionfish, don't be surprised if someday you see lionfish offered at your local fish market.

Broiled Lionfish with Paprika and Herbs

Until then contests and fishing tournaments are being organized to increase interest in harvesting lionfish all along the eastern and gulf coasts. The Florida Keys National Marine Sanctuary has even gone so far as to license divers to hunt lionfish within its boundaries, a thing almost unheard of for a wildlife sanctuary.

Poster for a lionfish catching tournament. (Credit: The Woody Foundation)

Eventually lionfish will simply become a normal part of the marine environment along the southern US coast. In time other animals will learn to prey on them and that will impose a control on their population. In fact it appears that sharks may be immune to the lionfish's venom; some scientists are even trying to teach sharks to prey on lionfish.

How much damage the lionfish will do to the biodiversity of the Gulf and Caribbean before then, however, can only be guessed at right now. A lot of trouble, all because a few people bought animals they thought looked really cool and then didn't want to take care of them!

Sum of three cubes problem finally solved for the number 42, the last of the numbers below 100 to be solved.

Mathematicians like to solve problems; that's what doing math is, after all: solving problems. In fact mathematicians enjoy problem solving so much that they often make problems up just in order to have the fun of solving them.

Of course Mathematicians also enjoy a good joke! (Credit: SketchUp Community)

One such problem is the sum of three cubes for the integers between 0 and 100. The problem was initially posed in 1954 at Oxford University, and it is an example of what mathematicians call a Diophantine equation: an equation for which only whole-number solutions are sought.

The problem, simply stated, is: can a solution be found for the equation

x³ + y³ + z³ = k                                                 (equation 1)

where k is an integer between 0 and 100, and x, y and z are integers, not necessarily between 0 and 100 nor even positive.

One example is easy to construct:

1³ + 2³ + 3³ = 1 + 8 + 27 = 36                                 (equation 2)

Using that solution, and remembering that x, y or z can be negative, quickly gives three more solutions.

(-1)³ + 2³ + 3³ = -1 + 8 + 27 = 34                      (equation 3)

1³ + (-2)³ + 3³ = 1 - 8 + 27 = 20                                   (equation 4)

(-1)³ + (-2)³ + 3³ = -1 - 8 + 27 = 18                          (equation 5)

I’ll give one more playful solution:

2³ + 3³ + 4³ = 8 + 27 + 64 = 99                                (equation 6)

Again, using negative integers quickly allows three other solutions to be constructed, but I'll leave those 'for the student to discover', as they say.

You can play at finding individual solutions for a while, but if you try to work methodically starting at zero you quickly run into problems. For example, zero itself possesses only the trivial solution:

a³ + (-a)³ + 0³ = 0                                        (equation 7)

where a is any integer. If any non-trivial solution existed for zero it would in fact be a counterexample to Fermat's famous last theorem.

For k=1 or 2 there are in fact families of solutions. For k=1:

(9b⁴)³ + (3b - 9b⁴)³ + (1 - 9b³)³ = 1                            (equation 8)

where b can be any integer. The family of solutions for k=2 is:

(1 + 6b³)³ + (1 - 6b³)³ + (-6b²)³ = 2                            (equation 9)

Again b can be any integer. To check these solutions it's instructive to give them a try for a nice small number like b=2 and see how they work out; a quick check appears below.
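Here, for instance, is a minimal Python sketch that simply types equations 8 and 9 in directly and tries the first few values of b:

```python
# Check the k=1 family (equation 8) and the k=2 family (equation 9)
# for the first few values of b.
for b in range(1, 5):
    k1 = (9*b**4)**3 + (3*b - 9*b**4)**3 + (1 - 9*b**3)**3
    k2 = (1 + 6*b**3)**3 + (1 - 6*b**3)**3 + (-6*b**2)**3
    print(b, k1, k2)   # prints: b 1 2 for every b
```

For b=2 the first family works out to 144³ + (-138)³ + (-71)³ = 1, just as equation 8 promises.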

By the way, you are allowed to use the same integer more than once, as in this solution for k=3:

1³ + 1³ + 1³ = 3                                                 (equation 10)

OK, so we've found some of the easy solutions, but finding solutions for most integers quickly becomes very difficult. So difficult, in fact, that many solutions only became possible with the aid of electronic computers. Even with the assistance of the world's best computers, solutions for some integers proved elusive.

Indeed, after 65 years two numbers remained for which there was no known solution: 33 and 42. Then, earlier this year, the University of Bristol mathematician Andrew Booker managed to grab a couple of weeks' time on the university's supercomputer. The solution he obtained for 33 is:

Professor Andrew Booker of Bristol University found the solution to 33 and collaborated on the final solution, 42. (Credit: Phys.org)

8866128975287528³ + (-8778405442862239)³ + (-2736111468807040)³ = 33                  (equation 11)

So now the only number left without a solution was 42. Fans of the British radio/TV/book series 'The Hitchhiker's Guide to the Galaxy' by Douglas Adams may recognize 42 as the answer to the ultimate question of 'Life, the Universe and Everything!' The fact that the final number lacking a solution should be a fan favourite is of course just a coincidence. Nevertheless a solution for 42 proved to be an order of magnitude more difficult to obtain.

The Hitchhiker's Guide to the Galaxy began as a radio program on the BBC, was made into a television series, then a series of books and finally a movie. (Credit: Amazon)

Realizing he needed even more computing power Booker teamed up with MIT professor Andrew Sutherland, an expert in parallel computing. This is a technique where numerous computers work simultaneously on different parts of the same problem. Professor Sutherland set up a massive ‘planetary computing platform’ consisting of the spare, unused time of half a million home PCs.

Another Andrew, Professor Andrew Sutherland of MIT who organized the computer system that succeeded in cracking the solution to 42. (Credit: MIT Math)

It took one million hours of computing time, which after all is still only two hours per computer, but a solution for 42 was finally found.

(-80538738812075974)³ + 80435758145817515³ + 12602123297335631³ = 42                 (equation 12)
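Numbers this size are far beyond an ordinary calculator, but Python's built-in arbitrary-precision integers make checking both record solutions trivial. This sketch simply cubes and sums the published values from equations 11 and 12:

```python
# Verify Booker's solution for 33 (equation 11).
x, y, z = 8866128975287528, -8778405442862239, -2736111468807040
print(x**3 + y**3 + z**3)   # prints 33

# Verify the Booker-Sutherland solution for 42 (equation 12).
x, y, z = -80538738812075974, 80435758145817515, 12602123297335631
print(x**3 + y**3 + z**3)   # prints 42
```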

So all of the numbers k between 0 and 100 that can be solved have now been solved; the numbers that equal 4 or 5 modulo 9, such as 13 or 14, can be proven to have no solution at all. But there's no reason to stop at 100, is there? The smallest number now without a known solution is 114, so get out a pencil and paper, or try the little program below, and get busy!
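If pencil and paper sounds tedious, here is a minimal brute-force sketch of how such a search can work. To be clear, this is only a toy of my own devising: the actual searches for numbers like 33 and 42 used far cleverer algorithms and astronomically larger bounds.

```python
# For each pair (x, y) within a bound, test whether k - x**3 - y**3
# is itself a perfect cube. Naive, but fine for small solutions.

def integer_cube_root(n):
    """Integer cube root of n, rounded toward zero; negative n allowed."""
    if n < 0:
        return -integer_cube_root(-n)
    r = round(n ** (1 / 3))
    while r ** 3 > n:            # correct any floating-point error
        r -= 1
    while (r + 1) ** 3 <= n:
        r += 1
    return r

def sum_of_three_cubes(k, bound=100):
    """Return integers (x, y, z) with x**3 + y**3 + z**3 == k, or None."""
    for x in range(-bound, bound + 1):
        for y in range(x, bound + 1):    # y >= x skips duplicate pairs
            z = integer_cube_root(k - x**3 - y**3)
            if x**3 + y**3 + z**3 == k:
                return (x, y, z)
    return None

for k in range(1, 30):
    if k % 9 in (4, 5):
        print(k, "impossible: k = 4 or 5 (mod 9)")
    else:
        print(k, sum_of_three_cubes(k))
```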

The Loch Ness Monster is in the news again. Is there any actual evidence to support the existence of this legendary creature?

The Loch Ness Monster may not get as much publicity as flying saucers or Bigfoot do, but it's really the same sort of phenomenon: a few hints of something strange in the historical record, a few sketchy sightings of something that can't be identified. Once a couple of stories are published in the press it suddenly seems as if everybody is talking about it, and the number of people who claim to have seen it explodes. Before long the hoaxers join in and you lose all sense of what is legitimate evidence and what has been fabricated in order to make a quick buck.

Toss a hubcap in the air and you too can see a flying saucer! (Credit: SETI Institute)

Finally, after years of sightings with no hard physical evidence to back anything up, the public splits into two distinct groups: the true believers and those who think it's all a bunch of humbug. This state of affairs can go on for years, with accusations of government cover-ups added in as an excuse for the lack of real proof.

For the Loch Ness Monster the earliest known report of the creature comes from a biography of the Irish monk Saint Columba, which describes an encounter said to have taken place around the year 565 CE. In that account a 'water beast' in Loch Ness has killed a man and threatens one of the saint's followers. Columba saves his companion by making the sign of the cross and commanding the beast to leave. Believers in the monster point to this story, along with other Celtic folklore about water 'kelpies', as evidence that the beast has lived in the loch for centuries.

Legend has it that St. Columba chased a ‘water beast’ from Loch Ness. (Credit: Anomalies)

The monster, commonly known as Nessie, first gained worldwide attention in the 1930s with an account by George Spicer and his wife, who described a long, snake- or eel-like creature some 8 meters in length and a bit over a meter in height. Although the Spicers saw no limbs on the creature, it crawled across the road and disappeared into the loch.

It was just a year later, on the 21st of April 1934, that the most famous picture of the Loch Ness Monster first appeared in the British newspaper the 'Daily Mail'. The photo came to be known as 'The Surgeon's Photograph' because the Daily Mail had obtained it from a London gynecologist named Robert Kenneth Wilson, although, significantly, Wilson refused to have his name associated with the image.

The Daily Mail headline showing the Surgeon’s Photograph of the Loch Ness Monster. Notice how there is nothing else in the image to give you an idea of the size of the ‘Monster’. (Credit: PBS)

The photo caused an immediate sensation and quickly led to the best-known explanation of the monster: a plesiosaur, an aquatic reptile that went extinct at the same time as the dinosaurs. The idea that a small population of these creatures had somehow survived extinction and was now inhabiting Loch Ness, and perhaps other lakes around the world, gained considerable popularity.

Plesiosaurs are aquatic reptiles that are considered to have become extinct at the same time as the dinosaurs. (Credit: Dinosaur Jungle)
‘Champ’ in Lake Champlain is considered to be a relative of the Loch Ness Monster. (Credit: CBS News)

It was only decades later, in 1994, that the photo was revealed to be a complete fake. The body of the creature was nothing more than a toy submarine bought at Woolworth's department store to which a neck and head made of wood putty had been added. The one-meter-long counterfeit was simply floated out into Loch Ness and photographed: an object lesson in how easy it can be to fool millions of people who want to be fooled.

Of course one fake, however famous, doesn't mean that there isn't something unusual in Loch Ness. After all, a lot of people have reported seeing something, and they're not all hoaxes.

Indeed they're not; in fact there have been some legitimate scientific attempts to discover what, if anything, is hiding in Loch Ness, and a few of them have produced tantalizing hints of something. Perhaps the best known is the 1972 expedition organized by the Academy of Applied Science and led by Robert H. Rines. The team employed sonar apparatus in a methodical search of the loch for any large objects beneath the surface. Then, any time a large object was detected by the sonar, an underwater camera with a floodlight recorded an image of it. On August 8th the sonar detected a moving target some 6 to 9 meters in length. At the same time the underwater camera took a picture of what looked like a diamond-shaped 'fin'.

Two images of the ‘Fins’ of the Loch Ness Monster taken in 1972. (Credit: MIT)

That's the best scientific evidence for the existence of the Loch Ness Monster. The problem is that the 6-9 meter target could very easily have been a school of small fish, while the picture of the fin is so blurry that it could be almost anything. Still, a half dozen other investigations have produced nothing better.

Now a new approach has been used in the search for Nessie: environmental DNA (eDNA). The idea behind eDNA is simple: samples from any body of water will contain some genetic material from all of the species of animals and plants that live in that body of water. Analyzing that DNA tells scientists what species live in the water without their having to actually observe or capture a single specimen.

Any animal whose excretions wind up in a body of water can be discovered using eDNA. (Credit: WildlifeSNPits)

Researchers from the University of Otago in New Zealand have performed such an analysis on over 200 water samples from various places in Loch Ness. In particular the scientists were looking for the presence of reptile DNA that would provide evidence for the existence of a population of plesiosaurs.

The study found DNA from some 3,000 species of plants, animals and even bacteria, but no reptile DNA of any kind. They also failed to find DNA from large species of fish such as shark, catfish or sturgeon, animals that have been suggested as possibly being responsible for the monster sightings.

Professor Neil Gemmell with a sample of water from Loch Ness. No Nessie DNA was found. (Credit: Time Magazine)

What the scientists did find was the DNA of the well-known animals of northern Scotland, strong evidence that there is nothing unusual in the loch. The scientists also found what they considered to be a large amount of eel DNA in every sample tested, leading team leader Neil Gemmell to suggest that a giant eel might be the best candidate for Nessie. "It's at least plausible," Dr. Gemmell asserts.

The Loch Ness Monster nothing more than a big eel? Not much to show for almost 1500 years of hullabaloo.

Whatever happened to the uranium fuel from Nazi Germany's attempt to build a nuclear reactor?

Nearly everyone knows the basic outline of this story; it is after all one of the most important series of events of the 20th century. In the late 1930s, as the threat of a coming world war grew, physicists were learning the secrets of the atom and wondering if it could be possible to release the tremendous energy contained within the nucleus, both for power generation and for weapons.

The process of Uranium Fission. Started by a single neutron the process releases both energy and more neutrons to produce a chain reaction. (Credit: Nuclear-Power.net)

The countries that would become the Allied nations feared that Nazi Germany could be the first to develop an atomic bomb. After all, both the theory of relativity and quantum mechanics were first conceived by Germans, and many of the leading researchers in sub-atomic physics were German. In fact the scientists who first succeeded in splitting atoms of uranium, Otto Hahn and Fritz Strassmann, were both German, and their fission experiment was performed in Berlin!

The Experimental Apparatus used to first split the nucleus of Uranium (Credit: J. Brew / Flickr)

Hoping to beat the Germans to the bomb, the Americans, with help from the British, organized the massive 'Manhattan Project'. The American program did succeed in producing the first nuclear weapons, but not until several months after Nazi Germany had been defeated. In fact when Allied scientists searched through the rubble of Hitler's Reich for Nazi scientists and technology they were surprised to discover how little progress the German nuclear physicists had made.

The Manhattan Project succeeded in developing the first atomic bomb, first detonated in the Trinity test. (Credit: Wikipedia)

There were many reasons why the Nazi atomic bomb program failed. One reason worth considering in today's political climate is how the Nazis' own racism forced some of the world's greatest minds to flee Europe for the safety of the United States and Britain. Men like Albert Einstein, Niels Bohr, Erwin Schrödinger, Hans Bethe and Max Born all fled, and several of them contributed, directly or indirectly, to the Manhattan Project, helping America develop the bomb first.

Albert Einstein was just one of dozens of German scientists who fled their country to escape the Nazis. The loss of their talents weakened the German nuclear program. (Credit: Viva)

There were other reasons as well; one interesting one was the Nazis' tendency toward an almost feudal disorganization in their nuclear program. In fact the German nuclear program was more like nine distinct programs, each with its own director, each setting its own agenda and goals, with little coordination between the different groups. In contrast the Manhattan Project had one boss, Major General Leslie Groves, who, with his scientific director Robert Oppenheimer, made certain that everyone and everything in his command worked together for one goal: an atom bomb.

Major General Leslie Groves brought a degree of military discipline to the scientists of the Manhattan Project. (Credit: Wikipedia)

The German nuclear program's greatest success was the construction of a nuclear reactor by the Uran-Maschine (Uranium Machine) group in the town of Haigerloch. This group was headed by the Nobel Prize winning theoretician Werner Heisenberg along with his assistant, the experimentalist Robert Döpel. The reactor these two scientists designed consisted of 664 uranium cubes, each weighing about 2 kg and measuring about 5 cm on a side. These cubes were hung from chains and immersed in heavy water, which acted as a moderator, slowing the neutrons in order to increase their chance of striking a uranium nucleus and maintaining the chain reaction. See image below.

German Physicist Werner Heisenberg led the German attempt to construct a nuclear reactor (Credit: IMDb)
The nuclear reactor designed and built by Heisenberg. The 664 uranium cubes are strung along aircraft cable (Credit: Atomicheritage.org)

Although the reactor was completed it never achieved criticality, that is, the condition where enough neutrons are being produced by the splitting of uranium nuclei to sustain the chain reaction indefinitely. Modern calculations indicate that the design would have required a 50% increase in the number of uranium cubes in order to work. By comparison, Enrico Fermi and his group had already succeeded in establishing the first sustained nuclear reaction with their reactor in December 1942.
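To make the idea of criticality concrete, here is a toy calculation of my own, not a real reactor physics model, just the geometric growth rule behind the term. Everything hinges on the factor k_eff, the average number of neutrons from one fission that go on to trigger another:

```python
# Toy chain-reaction model: after each generation the neutron
# population is multiplied by k_eff.
def neutron_population(k_eff, generations, start=1000.0):
    n = start
    for _ in range(generations):
        n *= k_eff
    return n

print(neutron_population(0.95, 100))  # subcritical: fizzles out, as at Haigerloch
print(neutron_population(1.00, 100))  # critical: self-sustaining, as Fermi achieved
```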

Artist’s rendering of the moment the first nuclear reactor went critical. (Credit: Smithsonian Magazine)

With the fall of Nazi Germany the experimental reactor at Haigerloch was captured by the US Army along with the scientists who worked there. The troops who seized Haigerloch were accompanied by members of a special mission known as Alsos, attached to the Manhattan Project and led by the physicist Samuel Goudsmit. The Alsos team both interrogated the German scientists and examined the reactor. The captured scientists, including Heisenberg, were later sent to Britain and incarcerated for a time. The reactor was dismantled and the equipment, along with the 664 uranium cubes, was shipped to the US.

So what happened to those 664 uranium cubes? Well, it is likely that most were simply inserted into the Manhattan Project's supply chain, the uranium eventually becoming part of American nuclear reactors or weapons. Some, however, definitely did not, instead becoming souvenirs that were passed from one person to another. Several of these cubes have found their way into museums, including one at Haigerloch in Germany dedicated to telling the story of Hitler's reactor. Other known examples reside at Harvard University and the National Museum of American History in Washington DC. It is possible, however, that there are still some out there sitting in someone's attic or garage.

One of the remaining uranium cubes from the Nazi nuclear reactor. (Credit: Science News)

Timothy Koeth, an associate research professor at the University of Maryland, is now trying to discover what happened to as many of the uranium cubes as he can. Professor Koeth has even established an email address so that anyone who may have information about the cubes can contact him. The address is:

uraniumcubes@umd.edu

So if you have an old black cube that your grandfather brought back from the war and kept for reasons he never made clear, contact Professor Koeth. Maybe it's a real piece of Hitler's nuclear reactor!

Gamma Ray Bursts are the most powerful events ever observed in the entire Universe. Could one ever be a threat to life here on Earth?

Ever since Galileo first pointed his telescope at the night sky astronomers have continued to discover ever stranger and more fascinating objects inhabiting this Universe of ours. Surely among the most mysterious are the events known as Gamma Ray Bursts (GRBs).

What is a GRB? Well, about once a day, somewhere in the Universe, an event occurs that releases as much energy in a few seconds as our Sun will generate in its entire life! This energy is observed as a bright burst of gamma rays. For decades little was known about GRBs, and it's only in the last 22 years that astronomers and astrophysicists feel they have begun to understand something about these strange events.
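That claim about the Sun is easy to sanity-check with rough, order-of-magnitude figures; the solar luminosity and ten-billion-year lifetime below are standard textbook values, and the comparison is only approximate:

```python
# Total energy the Sun will radiate over its entire ~10 billion year life.
solar_luminosity = 3.8e26            # watts
solar_lifetime = 10e9 * 3.15e7       # ten billion years, in seconds
sun_total_energy = solar_luminosity * solar_lifetime
print(f"{sun_total_energy:.1e} J")   # ~1.2e44 joules, roughly what a
                                     # bright GRB appears to emit in seconds
```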

Gamma Ray Bursts are thought to be the most energetic events in the entire Universe! (Credit: Futurism)

Even the discovery of GRBs was pretty unusual. GRBs are the first, and so far only, astronomical discovery to be made by spy satellites. You see, it all started in 1963 when the old Soviet Union agreed to the Nuclear Test Ban Treaty that ended the above-ground testing of nuclear weapons. The US didn't quite trust the Russians, however; it was thought that the Soviets might try to cheat the ban by testing their weapons in outer space. So the US military launched a series of satellites known as Vela that were designed to detect the sort of gamma radiation that would accompany any nuclear explosion off the Earth.

With the signing of the Nuclear Test Ban Treaty in 1963 the World’s Atomic powers agreed to halt above ground tests of nuclear weapons. (Credit: YouTube)

On July 2nd, 1967 two of the Vela satellites detected a quick burst of gamma rays, but it was soon realized that the burst wasn't caused by the Russians. Using the data from the two satellites, scientists at Los Alamos National Laboratory found that the radiation had come from somewhere outside of the solar system. Other bursts were soon detected as well, but since the entire Vela program was classified Top Secret, astronomers didn't get to hear about the discovery until 1973.

The Vela gamma ray detecting satellites were launched into space to monitor the Soviet Union's compliance with the Nuclear Test Ban Treaty. Instead they discovered the existence of Gamma Ray Bursts. (Credit: Flickr)

Even after the world's astronomers knew about the existence of gamma ray bursts, progress in understanding them was very slow. Think about it: since gamma rays are blocked by Earth's atmosphere, GRBs can only be detected by specialized satellites. Add to that the fact that GRBs rarely last more than a minute and that they can appear in any part of the sky, and you can understand how hard it was to obtain any real data about them.

The Earth’s Atmosphere blocks most forms of electromagnetic radiation allowing only visible light and radio waves to reach the surface. (Credit: Pinterest)

What astronomers wanted to learn most of all was whether or not GRBs had any other electromagnetic component to them. That is, did an optical, radio or perhaps X-ray flash accompany the gamma ray emission? To find out, astronomers had to develop a fast-reaction network that would quickly communicate the news that a GRB had been detected to astronomers around the world, so that other instruments could be brought into action.

Success finally came in February 1997 when the satellite BeppoSAX detected GRB 970228 (GRBs are named by the date of their detection, in YYMMDD format). Within hours both an X-ray and an optical glow were detected from the same source, a very dim, distant galaxy. Further such detections soon confirmed that GRBs come from extremely distant galaxies, most of them many billions of light years away. So distant are the locations of GRBs that, in order to appear so bright in our sky, they must be the most powerful explosions in the entire Universe.

The BeppoSAX Satellite was designed and launched specifically to study GRBs. (Credit: SlidePlayer)

So what are these GRBs? What makes them so energetic? To be honest there’s still a lot to be learned but a consensus of opinion is growing that there are actually two distinct types of GRBs.

Those that last longer, more than about two seconds, are thought to be the initial stage of a core-collapse supernova: the death of a star so massive that it never really settled down like a normal star but instead implodes after a few million years into a black hole. All of the well-studied GRBs fit this model remarkably well, including their location within galaxies that are undergoing rapid star formation, places where such massive, short-lived stars are far more common.

One interesting feature of this model is that as the star collapses it rotates much more rapidly, just as an ice skater does when they pull in their arms during a spin. This increase in rotation speed generates an enormous magnetic field at the star's poles, causing the gamma rays that are emitted to squirt out from the poles like the beams of light from a lighthouse. This concentrates the power of the gamma rays into two narrow beams, making the GRB look much brighter in the directions those beams travel.

The energy of long duration GRBs is concentrated into two narrow beams, like the light from a lighthouse. (Credit: AAS Nova)

If this lighthouse feature of GRBs is real, it implies that we are only seeing a small fraction of all GRBs: only those whose beams are pointing at us. It also means that GRBs are not quite as powerful as they appear, since their energy is focused into the beams. Again, this model fits the data collected for the longer duration GRBs, which make up about 70% of those that have been observed.

There are also short duration GRBs, which last on average less than half a second and make up about 30% of the total observed. Because they are fewer in number and shorter in duration these GRBs are harder to study and therefore less well understood. Several models have been suggested for them, but the recent observation of a GRB (GRB170817A) only 1.7 seconds after a gravitational wave was detected by the LIGO observatories implies a direct connection. Based on the nature of the gravitational wave, the event was a merger of two neutron stars. Therefore at least some short duration GRBs are the result of neutron stars colliding to form a black hole, or of a black hole devouring a neutron star.

A merger of neutron stars releases both a GRB and powerful gravitational waves. (Credit: AAS Nova)

So, if these GRBs are the most powerful explosions in the entire Universe, could they be any danger to us? Are there any stars in our galactic neighborhood that could collapse and generate a GRB? And what damage would a nearby GRB do?

In fact there are a couple of possible candidates known to astronomers. The stars Eta Carinae and WR 104 are both hugely massive stars that could collapse into black holes sometime in the next million or so years, and both lie within about 10,000 light years of Earth; WR 104 is at a distance of only about 8,000 light years.

Eta Carinae (l.) and WR 104 (r.) are among the most massive and powerful stars known. Either could someday collapse into a black hole, triggering a GRB. (Credit: Gresham College)

If WR 104 were to generate a GRB, and if that GRB were aimed at Earth, our atmosphere would protect us from the initial burst of gamma and X-rays; only a spike in the ultraviolet lasting a few minutes would be seen at the surface. The long-term effects are much less pleasant, however, because the gamma and X-rays striking the atmosphere would cause oxygen and nitrogen to combine into nitrogen oxide and nitrogen dioxide gases. Both of these gases are known destroyers of ozone, the form of oxygen in the upper atmosphere that protects us from the Sun's UV rays. The gases could also combine with water vapour in the air to form droplets of nitric acid that would rain down, causing further damage.

The Earth's ozone layer protects us from the cancer-causing UV light from the Sun. (Credit: UCAR)

Of course all of that is just speculation; we really have no idea what would happen here if a GRB from a star as close as WR 104 should strike the Earth. Before you start to panic, however, remember that GRBs are very rare, only about one per day in the entire observable Universe. Let's be honest, we're a far greater danger to ourselves than Gamma Ray Bursts are!

Book Review: 'Why Did the Chicken Cross the World?' by Andrew Lawler

Human beings have a tendency to overlook or even ignore the things that are most familiar to us. Because we see something all the time we feel as if we know everything there is to know about it; it just isn't interesting anymore.

The Familiar Barnyard bird. (Credit: IndiaMart)

The chicken has been treated that way throughout history. Entire cultures have been built around cattle or sheep or the bison but not the chicken. Even when a small flock was kept just outside the house for the occasional egg or a special meal it was always the bigger livestock that got all of the attention.

Nevertheless, today it is the chicken that has become humanity's largest supplier of protein; there are more domestic chickens being raised for food than any other animal. The chicken is the greatest success story of the technology of industrial food production and, as a living creature, the chief victim of that success.

Andrew Lawler's book 'Why Did the Chicken Cross the World?' is a journalistic investigation into the chicken, from its natural state as a wild bird spread across southern and southeastern Asia, to being little more than one of the farmer's wife's chores, to becoming one of the most valuable industrial commodities on the planet.

Front cover of 'Why Did the Chicken Cross the World?' by Andrew Lawler (Credit: Amazon)

No one knows when human beings first began to keep this small wild relative of the pheasant, but chicken remains, along with primitive pictograms identified as chickens, indicate that our relationship dates back to the Stone Age. The earliest evidence of humans raising and breeding chickens is not for food, however; it was for cockfighting.

Wild Chickens still exist in the Kaziranga National Park in India (Credit: Pinterest)

Indeed much of the first third of 'Why Did the Chicken Cross the World?' deals with cockfighting, both as a vehicle for gambling and as a religious ritual! Andrew Lawler presents his evidence in a clear, enjoyable fashion that I quite frankly envy. Traveling around the world, Mr. Lawler visits a selection of people who raise roosters for the pit but whose affection for their fighters makes the birds much more than just a source of income.

It is likely that chickens were first domesticated for the fun of watching them fight rather than as a source of food. (Credit: Daily Times)

Moving forward in history Mr. Lawler details how for centuries the chicken competed with ducks and geese, and later the American turkey, for a place in humanity’s farms. It was only in the late 19th and early 20th century that the chicken became the dominant barnyard fowl.

A few centuries ago any barnyard would have kept several species of poultry for food (Credit: MutualArt)

The story of how the chicken became the most numerously bred, raised and, finally, slaughtered animal on Earth is the main part of 'Why Did the Chicken Cross the World?'. Starting about 1850 in England and the US, the importation of larger, meatier chickens from Asia began a long-term breeding program to produce a chicken that would grow bigger in less time on less feed, making chicken more available and less expensive.

Queen Victoria's poultry house. It was when Victoria became interested in raising chickens that the species became popular in England. (Credit: Poultry Pages)

A key moment came in 1948 when the world's largest retailer, the A&P supermarket chain, joined with the US Department of Agriculture (USDA) to sponsor the 'Chicken of Tomorrow' contest. The winner of that contest became the sire of an industrial production line of chickens that grow to more than twice the weight of their wild ancestors. Modern birds are fully grown in as little as 47 days, at a ratio of one kilo of chicken produced for every two kilos of feed, a conversion rate nearly 50% better than that of any other meat-producing animal.

The ‘Chicken of Tomorrow’ contest led to the industrialization of raising chickens (Credit: Flashbak)

None of this did the chickens any good. If they are bred for meat they are stuffed by the tens of thousands into industrial-sized coops, see image below, where they are fattened up to the point where they can hardly stand. They are allowed to live for less than two months before being slaughtered.

Thousands of Chickens crammed into a modern chicken coop. Is this where your next meal is coming from? (Credit: YouTube)

The selective breeding of chickens has produced giant birds, but at the cost of the animals' health. (Credit: Insteading)

If they are bred for egg production they are squeezed into tiny 'battery cages', see image. They lay an egg a day on average, a process that takes so much calcium out of their systems that their bones become extremely weak. After a year the hen is so exhausted that she is simply used for dog food.

Egg Laying Chickens in a ‘Battery Cage’. (Credit: Farm Sanctuary)

That's the hens; the roosters, which are less valuable and harder to keep because of their tendency to fight, are simply separated from the hens after hatching and disposed of as cheaply as possible. To the modern food industry the chicken is no longer a living creature but just another commodity to be produced and packaged cheaply and efficiently.

A motif that Mr. Lawler often returns to is that for millennia the chicken was a familiar animal. Today it is virtually unknown as a living thing; it is just something we eat, a commodity rather than a fellow creature.

'Why Did the Chicken Cross the World?' is a thoroughly enjoyable book, a mixture of science, technology, history, sociology and politics in which you find yourself learning something on every page, and the knowledge sticks with you. And I'm not just saying that because Andrew Lawler and I share a surname. To the best of my knowledge we are totally unrelated; the book is just really good!

Space News for August 2019.

We generally think of a story in the news as a report of some sort of dramatic occurrence, a story about an event full of action and yes, even danger. Space news therefore would consist primarily of accounts about rocket launches and space probes landing on distant worlds.

Of course we know that isn’t quite true. In space exploration the calm, deliberate decisions that are made in engineering conferences are every bit as vital to accomplishing the mission as the more spectacular moments. In this post I will be discussing three such stories illustrating the kind of planning and decision making that will make future space missions possible.

Many ideas are developed, and problems solved, in Engineering Meetings (Credit: PSM.com)

One such important decision, announced by NASA on August 19th, was the go-ahead to begin construction of the ambitious Europa Clipper space probe, named for its target, Jupiter's moon Europa. The mission of the Europa Clipper is to study that icy world in an effort to determine whether the moon could actually be a home for life. Some 40 close flybys of Europa are planned, during which the probe will measure the thickness of the moon's icy crust and try to confirm the existence of a liquid ocean beneath the ice.

The Europa Clipper Space probe will make 40 flybys of the icy moon of Jupiter (Credit: ABC57.com)

The decision by NASA means that the design phase of the mission is now over and construction will begin at NASA's Jet Propulsion Laboratory (JPL), with a planned launch date of 2023 or 2025. One decision about the Europa Clipper still remains to be made, however: what launch vehicle will be used to send the probe on its way to Jupiter?

Currently Congress has ordered NASA to use the Space Launch System (SLS), but that massive rocket is still not ready for its first test launch, and there is a real possibility that the SLS might not be ready by 2025. Also, launching the Europa Clipper on the SLS will cost over a billion dollars.

After many delays and budget overruns NASA’s massive Space Launch System (SLS) still has not flown (Credit: NASA)

NASA, on the other hand, would prefer to launch the Europa Clipper on a commercial launcher such as SpaceX's Falcon Heavy. Launching the space probe with a commercial rocket would not only save hundreds of millions of dollars but also firm up the launch schedule, since the Falcon Heavy has already flown successfully three times. Unfortunately the decision here may be made by politics, because the SLS is being built at NASA's Marshall Space Flight Center in Alabama and some very important Republican senators strongly support it.

The SpaceX Falcon Heavy rocket has already flown successfully three times (Credit: The Verge)

Speaking of the Marshall Space Flight Center, NASA has made another decision, naming it as the lead management center for the development of the lunar lander for the space agency's big Artemis program. Artemis is the name NASA has now given to its plan for returning astronauts to the Moon's surface by 2024. Since Marshall is already developing the SLS as the Artemis launch vehicle, its selection as lead for the lander puts two big pieces of the Artemis pie on Alabama's plate.

The Marshall Space Flight Center is where NASA has developed rockets like the Saturn V and Space Shuttle (Credit: Wikipedia)

Again the decision here was made on political, not engineering grounds, and that's never a good thing. In fact the decision could very well be changed. You see, the Johnson Space Center is in Houston, Texas, and there are a couple of powerful Texas senators, also Republican by the way, who think the Johnson center would be a much better selection as management lead for the lander's development.

The Johnson Space Center in Texas is where NASA's manned space missions are developed (Credit: Wikipedia)

None of this arguing back and forth will make the lander perform any better, or be built any faster or cheaper. Indeed that sort of political infighting is more likely to stall funding appropriations, which could lead to schedule delays and cost overruns.

On a more hopeful note, NASA has also decided to team up with SpaceX to develop the technology necessary for refueling spacecraft in space! Again the idea is to reuse spacecraft rather than throw them away after one use and build another. In-space refueling has long been considered essential to developing a space infrastructure that will enable longer and more difficult space missions.

Refueling in space would extend the operational life of satellites, thereby reducing their cost (Credit: Engadget)

Take for example the communications satellites that are now in geostationary orbit about 35,800 km above the Earth's equator. These multi-million dollar radio relays must keep their antennas pointed very precisely at Earth in order to perform their job at all. To do this the satellites have small station-keeping rocket engines that keep each satellite exactly where it's supposed to be. After about five to seven years, however, those engines run out of fuel and the craft begins to drift until its antennas are no longer directed at Earth. Once that happens the satellite becomes nothing more than a very expensive piece of junk. If you could refuel those satellites in orbit, however, you could extend their useful life by years and save billions of dollars.
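Where does that altitude come from? A geostationary satellite must circle the Earth exactly once per (sidereal) day, and Kepler's third law then fixes the size of the orbit. Here is a quick sketch of the arithmetic, using standard values for Earth's gravitational parameter and radius:

```python
import math

# Kepler's third law: a**3 = GM * T**2 / (4 * pi**2)
GM = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
T = 86164.1                # one sidereal day, in seconds
earth_radius = 6.378e6     # Earth's equatorial radius, in meters

a = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)   # orbit radius from Earth's center
altitude = a - earth_radius
print(f"{altitude / 1000:.0f} km")              # about 35786 km
```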

For manned spaceflight, in-space refueling would allow the development of true spaceships that could travel back and forth to the Moon or Mars multiple times. Such spaceships would be refueled at the end of each mission in exactly the way you refuel your car after a long trip.

Developing the technology for refueling in space won't be easy, however. Most of the chemicals used as rocket fuel (liquid oxygen, liquid hydrogen, liquid methane) have to be kept cryogenically cold, requiring both refrigeration equipment and power. And everything has to be kept airtight, or the fuel you spent so much money getting into orbit will simply boil off into space. That's why NASA teaming up with SpaceX makes sense: while SpaceX is the leader in reusable spacecraft, NASA's Glenn Research Center in Ohio and the Marshall Space Flight Center are the recognized experts in handling and storing various kinds of rocket fuel. Hopefully this pooling of skills will solve the problems of refueling in space, and one day soon, in addition to orbiting space stations, we will see orbiting gas stations as well.

Will there soon be a ‘Gas Station’ in orbit above the Earth? (Credit: Ars Technica)

The Transistor and the Integrated Circuit: the story of the Miniaturization Revolution in Electronics

Earlier this year I celebrated the fiftieth anniversary of the Moon landing of Apollo 11 by publishing a series of eight articles about the 'Space Race' of the 1960s. I enjoyed that task so much that I decided to write a few more posts about some of the other cool technologies of that time. I hope you enjoy them.

In most homes today you'll find that electronic devices outnumber human beings by a factor of three, four or even more. Add up all of the TVs, computers and smartphones; hey, even our ovens and refrigerators have microprocessors in them nowadays! Electronics are so cheap, so versatile and so small that we're putting them in just about everything.

Just some of the electronics that can be found in a modern home. (Credit: Santa Barbara Loan and Jewelry)

Back in the 60s, however, electronics were big and expensive. Most homes had one TV, one record player and one, maybe two, radios. The reason was simple: electronics were built around vacuum tubes, which were themselves large and expensive. See image below.

An Electronic Vacuum Tube (Credit: Parts Express)

Now if you think that a vacuum tube looks something like an incandescent light bulb you're quite right; vacuum tubes were developed from light bulbs and, like them, require a considerable amount of power, both voltage and current, just to turn on. This makes vacuum tubes wasteful of energy, hot and rather large.

Things started to change during the 60s when the first transistor electronics came on the market, the small, hand-held AM transistor radio being the most popular. Now pretty much everyone knows that transistors are made primarily of silicon and that, like a vacuum tube, a transistor is an ultra-fast electrical switch. Unlike a tube, however, a transistor doesn't have to be hot in order to work.

An antique six transistor radio. (Credit: ETSY)

This means a transistor needs only a small fraction of the power of a vacuum tube in order to function, and therefore transistors can be made much smaller and packed together more tightly. Whereas a vacuum tube radio was as large as a piece of furniture, a transistor radio could be held in one hand, and with the transistor radio the word 'miniaturization' came into common usage.

Vacuum Tube radios could hardly be considered mobile! (Credit: Flickr)

Still, my first little transistor radio was built of 'discrete' transistors. That is to say, each transistor was a separate object, an individual piece of silicon packaged in its own plastic coating. When I bought my second transistor radio I of course disassembled the first one, and inside I found six transistors along with numerous other components. The transistors were each about the size of a pea; I learned later that they were packaged in a standard format known as TO-92.

A single 2N3904 bipolar NPN general purpose transistor packaged in a TO-92 case. (Credit: Addicore)

Even as the first transistorized consumer products were becoming available, some engineers began to wonder whether it would be possible to fit two transistors, or even more, on a single piece of silicon; indeed, just how many could you fit? The first experiments with Integrated Circuits (ICs), as these components came to be known, were carried out at Texas Instruments in 1958. See image below.

The world's first integrated circuit combined its components on a single piece of germanium, not silicon (Credit: Texas Instruments)

The advantages of ICs were many: reduced cost, size and power requirements along with increased operating speed. The drawback was the high initial start-up cost. The facilities needed for manufacturing ICs, known as 'foundries', are very expensive, even though, once you have a foundry, millions of ICs can then be made very cheaply. In the business this is known as a high Non-Recurring Expense (NRE) with a small Recurring Expense (RE).

A look inside a foundry for the manufacture of Integrated Circuits. (Credit: SemiWiki)

So, who was going to pay for the first IC foundries? The U.S. government, that's who! In the 1960s both NASA and the military had a tremendous need for ever more sophisticated radios, radars, guidance systems and even computers. And all of these new electronics had to be smaller in order to fit into rockets, airplanes and ships. The IC was the only possible technology that could satisfy that need.

Then, once the first foundries were built, the miniaturization revolution really got under way. One of the pioneers of the IC industry, Gordon Moore, declared in 1965 that the number of transistors on a single silicon 'chip' would double every two years. This prediction is commonly called Moore's Law and it has held for over 50 years, with current technology capable of placing billions of transistors on a chip of silicon no larger than a fingernail.
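It's worth pausing to appreciate what that doubling rule implies; the back-of-the-envelope arithmetic below is purely illustrative:

```python
# Moore's Law as arithmetic: doubling every two years for 50 years
# means 25 doublings.
doublings = 50 // 2
print(2 ** doublings)   # 33554432: a factor of roughly 33 million
```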

Gordon Moore was an early pioneer in the development of Integrated Circuits. (Credit: Computer History Museum)
A Look inside a typical Integrated Circuit, this one is a Pentium 4 Microprocessor used in many personal computers. (Credit: Calvin College)

With this technological progress have come personal computers, smartphones, digital cameras, digital television and a myriad of other devices that we all have in our homes or carry on our persons. The transistor and the Integrated Circuit have become true symbols of our modern age, and their revolution began in the 1960s.