Let me just take a moment before we start to address the nonsensical issue of whether we should call the damage done to our environment by the emission of huge amounts of greenhouse gases global warming or climate change. I look at it this way: the greenhouse gases are causing the Earth to warm; that’s global warming. That warming then directly causes a wide variety of problems, everything from sea level rise and more intense tropical storms to excessive droughts. That’s climate change.
In other words, greenhouse gases cause global warming, and global warming then causes the different aspects of climate change.
Honestly though, it really doesn’t matter what you call it so long as you recognize the damage that we are doing to the only planet we have and are willing to do something to solve the problem. Whether you call it global warming or climate change it’s still an ever-growing danger that we have to face.
And the evidence of how dangerous the situation is becoming grows every day. This year’s Atlantic hurricane season is demolishing all previous records for the number of storms, but today I’d like to talk about the crisis in the western part of the US caused by an unprecedented number and intensity of wildfires.
Now I used to live in California’s Silicon Valley, also known as the San Francisco Bay Area, during the 1980s, so I am personally familiar with how large areas of California can go from March to November without a single drop of rainfall. I can remember being warned about the dangers of drought conditions; I have seen how square kilometers of grass and brush will turn brown for lack of water; and I have myself witnessed several small wildfires. I know from personal experience that wildfires are just a natural part of California’s ecology.
However, the extent of the fires now burning, not just in California but throughout the western half of the US, is far beyond anything in human experience. When I see some of the images coming out of San Francisco, images of places I know very well turned orange by the smoke and distant glow of massive fires, I’m chilled. That hellscape is not the California I knew.
In just the past month of August the western US has seen a number of unprecedented weather and fire conditions. A fire tornado was observed for the first time ever just north of Lake Tahoe. The hottest temperature ever reliably recorded on Earth’s surface, 54.5ºC (130ºF), was measured in Death Valley. A dry thunderstorm swept across Northern California, sparking 11,000 lightning strikes that ignited over 300 fires, two of which grew to become the largest ever seen in the state. So far this year over 7 million acres of forest land have burned, a staggering amount far surpassing any previous year’s total, and the fire season isn’t over yet.
Meteorologically, what is happening out west is that increasing temperatures are feeding the growth of a massive ‘heat dome’, a high-pressure system that becomes stuck over the same region of the Earth by the jet stream. These heat domes have led to severe drought conditions, causing the death of millions of trees that in turn provide even more fuel for the fires triggered by the heat.
The statistics back up the idea that what we are seeing is an ongoing trend rather than a single extraordinary season. Although the National Interagency Fire Center only began keeping accurate records in 2000, those 20 years of records for California are enough to illustrate the alarming increase in the number of acres of forest burnt each year.
And remember, the total for 2020 was as of the 11th of September and has grown considerably since then. Adding in the land area burnt in the other western states, the total area of forest destroyed now comes to something larger than the entire state of New Jersey.
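For anyone who wants to check that comparison, here’s a quick back-of-envelope sketch in Python. The 7 million acre figure is the one quoted above; the area of New Jersey (roughly 22,600 square kilometers) is a number I’ve added myself.

```python
# Rough check: convert the burnt acreage to square kilometres and
# compare it to the total area of New Jersey.
ACRE_IN_KM2 = 0.00404686      # 1 acre = 4,046.86 square metres
burnt_acres = 7_000_000       # figure quoted above, still growing
new_jersey_km2 = 22_591       # approximate total area of New Jersey

burnt_km2 = burnt_acres * ACRE_IN_KM2
print(f"Area burnt: {burnt_km2:,.0f} km^2")              # ~28,300 km^2
print(f"New Jersey: {new_jersey_km2:,} km^2")            # ~22,600 km^2
print(f"Ratio:      {burnt_km2 / new_jersey_km2:.2f}")   # ~1.25
```

So the burnt area really is about a quarter again larger than the Garden State.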
But California isn’t unique; the Golden State is just a bit out in front of the rest of us in the changes brought on by climate change. Back in August the Midwestern states, especially Iowa, suffered badly from a rare storm system known as a derecho, a wall of storms hundreds of kilometers in width. The straight-line winds developed in a derecho can be as strong as those in a tornado, but the damage is spread out over a much wider path. In addition to massive destruction of homes and other structures, several hundred square kilometers of crops were destroyed, a real tragedy in the agricultural heartland of America.
Once more we know what is happening: Earth’s rising temperature simply means that more energy is being pumped into the weather systems around the world. More energy means more severe weather of all kinds: more severe hurricanes and more severe droughts, more and stronger tornadoes, and stronger storms in general.
And all just because we refuse to shift our energy production from quick, easy, cheap fossil fuels, which will run out eventually anyway, to longer-lasting, sustainable forms of energy production. We all know that the long-term cost of staying on our present path will be enormously greater than any short-term savings. When will we finally find the strength of will to do what we must?
I guess just about everyone in the world knows by now that we here in the U.S.A. are having a presidential election this November. The choice in this year’s election could not be starker, but in every election it’s important to get past the rhetoric and name-calling and look at the facts, and to me facts mean numbers. Therefore over the next several weeks I will be looking at the issues in the race between Donald Trump and Joe Biden as objectively as I can.
In this post I will take a look at the US economy, comparing its performance under Donald Trump both to his own campaign promises concerning economic growth and to its performance under our previous president Barack Obama, under whom Joe Biden served as Vice-President.
Now in most elections the economy is by far the most important issue. This year, however, other issues, particularly the Covid-19 pandemic, have pretty much pushed it aside. Nevertheless the economy remains one issue where it is possible to point to unequivocal facts: job growth, GDP growth and federal budget deficits. Because of the quantitative nature of the economy it is possible to make objective assessments.
This year, however, the economy has suffered a devastating blow from the Covid-19 outbreak, an event that Donald Trump cannot be blamed for. To be fair, therefore, I will only use the economic numbers from Trump’s first three years as president and compare them to the economy of Barack Obama’s last three years in office, as well as to Trump’s own promises about the economy made during his 2016 campaign.
Let’s start with the jobs figures, because that always seems to be the economic measurement most important to politicians. In the first three years of his presidency, again before Covid-19 hit, the US economy under Donald Trump gained six million, five hundred and nineteen thousand jobs (6,519,000), an impressive number. That’s an average of 181,000 jobs every month, bringing the unemployment rate down to a very low 3.6%.
However it has to be remembered that Trump inherited a very strong, job-creating economy from his predecessor Barack Obama. Unemployment when Trump took office was already very low, only 4.7%. In fact, during the last three years of his presidency the labour market under Obama gained over eight million jobs, 8,067,000 to be exact, more than a million and a half more than under Trump. The average monthly job gain for Obama works out to 224,000 jobs; nearly 25% more jobs were created during Obama’s last three years than during Trump’s first three. And remember, Obama inherited an economy in the throes of the worst financial crisis since the Great Depression; the unemployment rate was 7.8% when Obama took office and rising rapidly.
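Since I’m claiming that facts mean numbers, here’s a small Python sketch that reproduces the monthly averages and the percentage comparison from the job totals quoted above (both spans cover 36 months):

```python
# Monthly job-creation averages over each president's three-year span.
trump_jobs = 6_519_000   # Trump's first three years, pre-Covid
obama_jobs = 8_067_000   # Obama's last three years
months = 36

print(f"Trump average: {trump_jobs / months:,.0f} jobs/month")   # ~181,000
print(f"Obama average: {obama_jobs / months:,.0f} jobs/month")   # ~224,000
print(f"Obama's lead:  {(obama_jobs / trump_jobs - 1):.0%}")     # ~24%
```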
Not only that, but in an effort to create more jobs Trump used the old trick of lowering taxes, which of course led to a sharp rise in the federal deficit. Whereas Obama had inherited a budget deficit of over $1.44 trillion from President Bush, over his eight years in office he succeeded in lowering that figure to $585 billion, a 60% reduction.
Under Trump, however, the deficit has increased to $960 billion, a 64% increase in only three years. If we again compare Trump’s first three years to Obama’s last three we find that the Trump administration borrowed $2.4 trillion, 56% more than Obama’s $1.538 trillion. And that’s all before the Covid-19 pandemic.
The economic statistic that really captures the overall health of the economy is growth in Gross Domestic Product, or GDP. GDP is simply the sum total of all economic activity, in other words every time money legally changes hands. Every product sold, every bit of labour paid for, it all goes into GDP, and a healthy economy has a GDP that is growing faster than inflation.
For the first three years of his presidency Donald Trump succeeded in growing the economy at an average yearly rate of 2.5%, as compared to Obama’s yearly average of 2.36%. However it should be pointed out that Trump promised a yearly increase in GDP of more than 4%.
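To see what that broken promise amounts to, compound the two rates over the three years in question (my own arithmetic, not a figure from any official source):

\[
(1.025)^3 \approx 1.077 \qquad \text{versus the promised} \qquad (1.04)^3 \approx 1.125
\]

In other words, an economy growing at the promised 4% would have been about 12.5% bigger after three years, instead of the roughly 7.7% growth actually delivered.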
That was in many ways his biggest promise; he’s a businessman after all, and his ability to handle our economy is supposed to be the chief reason to vote for him. Throughout his career Donald Trump has bragged about his money-making abilities. However the reality of his four bankrupt casinos and the failures of Trump Airlines, Trump magazine and Trump Steaks tells a very different story.
If Donald Trump can’t deliver on his economic promises, if in fact the Trump economy is little different from the Obama economy, if Trump’s only real economic idea was a trillion-dollar tax cut, 67% of which went to the richest 1% of Americans, should the self-proclaimed ‘King of Debt’ be given another four years to give away more of our country’s wealth?
The plain fact of the matter is that whatever Donald Trump may say, he is not a businessman; he never has been and never will be. Donald Trump is a salesman, very good at talking people into doing what he wants, investing in his business ventures, but incapable of doing the actual work of managing a business. All of his projects start off in a blaze of publicity only to collapse in a chaos of mismanagement.
The facts show that Donald Trump is no ‘Stable Genius’ where handling our economy is concerned. Instead, as he has always done, Trump just exaggerates his own achievements while belittling those of anyone else. In the end his claim to being the one man who can restore our economy after Covid-19 has no actual data to back it up.
P.S. Just a few days after I published this post the New York Times announced that it had succeeded in obtaining Donald Trump’s tax returns and other financial documents for the past twenty years. The Times’ story details how Trump has lied to, cheated and manipulated his investors, the banks who lent him money, and the US government. I heartily recommend checking out the story by following this link: https://www.nytimes.com/interactive/2020/09/27/us/donald-trump-taxes.html
There were a couple of interesting stories about our Universe that caught my eye. Both deal with celestial objects and events that are among the largest and most powerful known to astronomy.
I’ve written several posts about the gravity wave observatories that are the newest field of research in astronomy. (See my posts of 14Jun17, 22Oct17, and 17Nov18.) To date the two Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors in the US, along with the Virgo observatory in Italy, have observed over fifty events, including the mergers of two black holes into a black hole, of two neutron stars into a black hole, and of a black hole and a neutron star into a black hole.
In all of the events observed thus far, however, the masses of the objects involved ranged from a couple of times to a few tens of times the mass of our Sun. This places them all within a class known as stellar black holes, black holes with masses comparable to that of our Sun.
At the same time astronomers are discovering more and more evidence of super-massive black holes at the centers of large galaxies, black holes estimated to have masses anywhere from several million to several billion times that of our Sun. Those observations left a gap, however: there was no direct evidence for the existence of black holes with masses between several tens and several thousand times that of our Sun.
Until now, that is, because on 21 May 2019 a new gravity wave event, given the designation GW190521 after that date, was detected whose characteristics were such that astronomers could determine the initial masses of the two black holes: 85 and 66 times that of our Sun. The resulting merger gave birth to a black hole of 142 solar masses, the remaining 9 solar masses being completely converted into the energy of gravity waves. That makes GW190521 by far the most powerful gravity wave event yet detected.
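To get a feel for just how powerful, apply Einstein’s famous mass-energy relation to those nine solar masses (a rough order-of-magnitude calculation of my own):

\[
E = \Delta m\,c^{2} \approx 9 \times (1.99 \times 10^{30}\,\mathrm{kg}) \times (3.0 \times 10^{8}\,\mathrm{m/s})^{2} \approx 1.6 \times 10^{48}\,\mathrm{J}
\]

All of that energy was radiated away as gravity waves in a fraction of a second.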
But what interests astronomers the most is that the masses involved, 66, 85 and 142 solar masses, all fit into that gap where no black holes had ever been observed. That makes GW190521 the first direct evidence for the existence of intermediate-mass black holes. While astronomers may have learned a great deal from these first observations, you can be certain they are eagerly awaiting the next signal from the merger of intermediate-sized black holes.
In another story, on an even larger scale, we have all heard of the Andromeda galaxy, the closest big galaxy to our own Milky Way and the most distant object visible to the naked eye. A typical spiral galaxy, Andromeda is a vast disk of more than 200 billion stars, some 100,000 light years in diameter, at a distance of about 2.5 million light years from us. And you may also have heard that Andromeda is heading straight at us! In fact astronomers estimate that our two galaxies are likely to collide in just a little over four billion years.
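You can get a crude version of that timescale yourself with nothing more than distance divided by speed. In the Python sketch below, the approach speed of roughly 110 kilometers per second is a number I’ve added from memory, not one taken from the study discussed here:

```python
# Naive, constant-velocity estimate of the Milky Way-Andromeda collision time.
KM_PER_LIGHT_YEAR = 9.461e12
distance_km = 2.5e6 * KM_PER_LIGHT_YEAR   # ~2.4e19 km to Andromeda
approach_km_s = 110.0                     # rough measured approach speed

seconds = distance_km / approach_km_s
years = seconds / (3600 * 24 * 365.25)
print(f"Straight-line estimate: {years / 1e9:.1f} billion years")  # ~6.8
```

The straight-line answer comes out near seven billion years; gravity accelerates the two galaxies as they close in on each other, which is why the astronomers’ detailed estimate is shorter, at a little over four billion.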
Most of our current theories of galactic evolution are built on such collisions, with small galaxies merging to build up ever larger ones. What kind of merger will result from the collision of our Milky Way with Andromeda is unknown at present; after all, it’s hard to predict the details of something that won’t happen for four billion years.
Now, however, a group of astronomers is asserting that the collision has already begun. Using the Hubble Space Telescope these astronomers, led by Professor Nicholas Lehner of the Physics Department at the University of Notre Dame, were trying to determine exactly how big Andromeda is. That’s not actually an easy task, since galaxies are not cohesive objects but vast collections not only of stars but of huge amounts of gas and dust. In other words galaxies don’t have nice, well-defined edges; they just trail off, becoming less and less dense the farther you get from their center.
The astronomers were able to study the halo surrounding Andromeda by measuring its effect on the light of even more distant quasars located behind it. Quasars are the very active cores of distant galaxies, powered by the feeding of the supermassive black holes at their centers.
As the light from those distant quasars passes through Andromeda’s halo, certain wavelengths are absorbed. By studying which wavelengths are absorbed, and by how much, the astronomers can learn a great deal about the material making up the halo. What Professor Lehner and his team found is that Andromeda is surrounded by a very, very thin halo that extends at least 1.3 million light years from the galaxy’s center, half the distance to our own Milky Way.
But if Andromeda has a big halo, reaching halfway to the Milky Way, shouldn’t our galaxy have just as big a halo? In fact the team has found evidence that it does, and further evidence that the two halos are already beginning to interact. So in a sense the collision between Andromeda and the Milky Way has already begun, even if the main event is still a long time off.
President Oprah declares war on an unauthorized Moon colony built by freedom-loving CEOs. Yes, it’s really almost that bad. In fact ‘The Powers of the Earth’ is only about 10% science fiction; another 30% starts off as a spy novel and evolves into a war novel. The remainder is more political pamphlet than anything else, and a rather cartoonish one at that.
In the short author’s bio at the back of the book, Travis J. Corcoran proudly proclaims himself a ‘Catholic Anarcho-Capitalist’, three ideologies that to my mind don’t fit together all that well. Still, Mr. Corcoran has every right to his opinions and his political views. The question is whether or not they serve to enhance a good science fiction novel.
In fact science fiction has long served as a vehicle for social criticism. H. G. Wells, for example, often brought his socialist ideas into his novels. The Morlocks in ‘The Time Machine’, the Martians in ‘The War of the Worlds’ and the Selenites in ‘The First Men in the Moon’ are all described in socio-economic terms, but briefly; the politics don’t get in the way of the story. In some of his later novels, ‘In the Days of the Comet’ or ‘The Shape of Things to Come’, Wells does become rather preachy, which is why those novels are not as popular as his earlier work.
In ‘The Powers of the Earth’, on the other hand, long rhetorical speeches appear on nearly every page. And there’s no attempt at evenhandedness: whenever an argument in favour of some form of organized government is presented, it is presented only to make it a target for attack. The characters back on Earth are all either idiots or self-serving hypocrites, cartoon villains in other words. The inhabitants of the Moon, fully half of whom are CEOs of some company or other, aren’t presented in much better terms: quarrelsome and unwilling to work together even when they agree. If they are Mr. Corcoran’s vision of an Anarcho-Capitalist utopia, he can keep it!
However the real problem is that the politics just keeps getting in the way of the science fiction. For example, the Lunar colony at Aristillus crater is only possible because of the invention of an anti-gravity drive that the CEOs on the Moon have and the Earth governments don’t. But we’re never told anything about that drive, nothing at all about how it works. At the same time, building a colony on the Moon appears to be simply a matter of drilling out enough big tunnels; no mention is ever made of where the air comes from, or the water.
When I began reading ‘The Powers of the Earth’ I first thought that Aristillus must be one of the craters near the Lunar South Pole where NASA has found evidence of ice. That would at least have served as a source for both the colony’s water and air. But the crater isn’t near the South Pole; it’s right in Mare Imbrium, an area that is dry as a bone. There’s no particular reason I can think of for Mr. Corcoran to put his colony there.
Even when Mr. Corcoran has an interesting idea he doesn’t develop it very well. As part of the story, five characters literally hike around the Lunar farside, a party made up of one human and four genetically enhanced, super-intelligent dogs. Now my ears perked up at the idea of super-intelligent dogs. I wanted some details: the anatomical changes that allowed a dog to have a bigger brain, the changes to the vocal cords so that the dogs could speak (those dogs actually have some of the longest political arguments in the entire book), plus the changes to their paws so that they can type on their computers (the dogs are all software whizzes, by the way).
But there’s nothing; no mention is ever made of anything about the dogs other than that they can think and talk just like a human. Oh, and there are numerous times when the dogs have to take off or put on their spacesuits. How, with their paws?
During the course of ‘The Powers of the Earth’ several of the characters mention the old Robert Heinlein novel ‘The Moon is a Harsh Mistress’, and I have an idea that Mr. Corcoran wrote ‘The Powers of the Earth’ intending it to be a reboot of that novel. Now it’s been nearly fifty years since I read ‘The Moon is a Harsh Mistress’, but I don’t recall Heinlein being so cavalier with the science, and neither were his characters so poorly drawn.
I do remember that Heinlein, like Wells, used science fiction as a way to describe different possible ways to build a society, each novel describing a different aspect, a different kind of society. But on the other hand ‘The Moon is a Harsh Mistress’ was never intended as a recipe for a political movement, at least I didn’t get that impression.
But most of all, I remember a Heinlein novel as being worth reading, which I just can’t say about ‘The Powers of the Earth’. By the end of the story I wasn’t even interested in the dogs.
Today’s post will be a bit out of the ordinary because I will not be discussing science or engineering so much as the places where our scientists and engineers receive their education: the colleges and universities of the world. I was prompted to write this post by the release of the annual Times Higher Education survey of the world’s best colleges and universities.
Now, which university was chosen as the best, which schools made it into the top 10, or which country had the most universities in the top 100 is really nothing more than a competitive exercise of no actual importance. What is important is whether new institutes of higher learning are being founded, and whether existing universities are getting better. Still, it’s worth taking a quick look at some of the annual survey’s results to get an idea of what is going on in the world of higher education.
At the very top of the Times Higher Education list, for the fifth straight year, is Oxford University in the United Kingdom. Britain also holds another spot in the top ten, with Cambridge University coming in at number six. All of the other spots in the top ten belong to universities in the United States, from Stanford University at number two to the University of Chicago at number ten. Indeed, the first university outside the US and UK is ETH Zurich in Switzerland at number 14, with the University of Toronto in Canada at number 18 and Tsinghua University in China at number 20 also appearing in the top 20.
Now I’m not trying to brag, and neither should these results come as a great surprise. The US and UK have pretty much dominated the world of higher education since the end of World War 2, when most of the world’s other universities lay in ruins.
During the years when the USSR was pushing education as a way to demonstrate the superiority of Communism, several Russian universities were recognized as among the best schools in the world. The current Russian government, however, appears to prefer keeping its population ignorant and gullible, and the quality of Russian education has declined noticeably.
Instead it’s now China whose institutes of higher learning are gaining the most ground. In addition to Tsinghua University, Peking University received a high ranking of 23, making China the only country other than the US and UK to place two universities in the top 25. China in fact succeeded in doubling its number of schools in the top 100, from three last year to six this year.
Most of these Chinese universities are in fact relatively new, babies compared to Oxford or Cambridge. They are a sign of China’s growing middle class, who want a good education for their children and are willing to pay for it. They are also a sign that the Chinese government recognizes that a larger, better-educated middle class will actually make China a stronger, more powerful nation.
Other Asian nations are also working hard to improve the quality of the education they provide to their people. Sixteen Asian universities placed within the top 100, the most since the Times Higher Education list began.
Of course the improvement in higher education in Asia doesn’t have to mean that education in the West is slipping. In the years to come the world is going to need all of its college and university graduates if we’re going to overcome the tremendous challenges facing our planet.
By the way, my old alma mater Drexel University came in at 351. Not great, but not bad considering that’s 351st in the entire world.
There are a number of small but nevertheless important items that have happened over the last month which deal with NASA’s Artemis program. So let’s get started.
If NASA’s Artemis program is going to successfully put Americans back on the Moon by 2024, or indeed ever, it is going to need a big rocket to put all of that hardware into space. The big rocket that NASA has now been building for nine years is called the Space Launch System (SLS), and although it may look superficially like the old Saturn V it is in fact a completely new design based on Space Shuttle hardware.
In fact the SLS employs four shuttle main engines in its first stage, with two shuttle solid-fuel boosters attached. Since the SLS makes use of a fair amount of existing components, you’d think that its design cost and schedule would be reasonable compared to those of a completely new large launch vehicle, say Space X’s Falcon 9.
Well, you’d be wrong. In fact the original cost of the central core first stage of the SLS was estimated at $6 billion. That amount was already ‘readjusted’ back in 2017 to $7.17 billion, and now NASA has quietly increased it to $9.1 billion. As for the schedule, the original launch date for an unmanned flight of the SLS was supposed to be back in 2017, a date that was pushed back first to December of 2019 and then to June of 2020. Needless to say June has come and gone, and the current schedule for the first, unmanned launch of the SLS is November of 2021.
Even that is not certain, however, because the SLS still has quite a lot of testing to finish first. One big test, a static firing of one of the big solid-fuel boosters, was carried out successfully on 2 September. During the test the 53m-long booster burned for the full 126 seconds required for an actual flight (see image below). While the data from the test is still being analyzed, the initial results indicate a very successful test.
The biggest test still remaining before next year’s unmanned flight is called the ‘Green Run Hot Fire’ and may occur as early as October. For this test the entire rocket, except for the solid boosters, is held down on a test stand while the four main engines fire for eight minutes, the duration of a normal launch. Although all of the different subsystems of the SLS have been tested separately, this will be the first time the entire rocket is tested together.
If any problems occur during the Green Run Hot Fire it would almost certainly cause yet another delay in that first unmanned test flight. And if that first test flight gets pushed back any further there’s little hope of Artemis reaching the Moon by 2024. In fact, with some members of Congress getting sick and tired of the delays and cost overruns associated with the SLS, it might just mean the end of the Artemis program entirely.
Thankfully there’s a bit of better news for Artemis. One of the aerospace companies preparing bids for the contract to build the Lunar lander that will actually take the Artemis astronauts down to the Moon’s surface is Blue Origin, the other two being Space X and Dynetics. In late August Blue Origin delivered a full-scale model of its planned lander to NASA’s Johnson Space Center in Houston.
The model is 12 meters in height and includes both the planned descent and ascent stages. Although the mock-up does not function in any sense, it will allow NASA astronauts to simulate getting down from the crew cabin in the ascent stage to the ground with all of their equipment, and back again. This sort of ergonomic testing is important at this stage: it not only lets the astronauts become familiar with the vehicle, but any design flaws discovered during these tests can be corrected before construction of the first lander begins.
Although Blue Origin will be the prime contractor should they win the contract, the lander design will actually be a team effort including Lockheed Martin, Northrop Grumman and Draper. While Blue Origin concentrates its efforts on the descent stage, Lockheed Martin will be primarily responsible for the ascent stage. The team members hope that splitting up the design efforts will speed up the design and development of the separate components.
So work is progressing, however slowly, on the hardware needed to get Americans back to the Moon. But what about the equipment they’ll use while on the Moon? For example, the old Apollo astronauts had a small Lunar rover that allowed them to explore more of the Moon’s surface than they could on foot. Are there any plans for an updated Lunar rover?
Well it turns out that the Japanese Aerospace Exploration Agency (JAXA) has been given the task of developing the rover as part of its contribution to the Artemis program. As you might guess, JAXA turned to a Japanese company well known for its expertise in motor vehicles, Toyota, for help in developing an initial Lunar rover design.
Named the Lunar Cruiser after Toyota’s famous Land Cruiser, the proposed rover would be considerably larger than the Apollo rover. Equipped with a pressurized cabin so that the astronauts can remove their spacesuits while driving across the Moon’s surface, the rover will be powered by hydrogen fuel cells and is expected to have a range of as much as 10,000 kilometers.
Currently all of these design specifications are preliminary; after all, we still have a lot of work to do just getting back to the Moon. The eventual goal of the Artemis program is to establish a permanent base on the Moon, and that’s when the Lunar Cruiser would become an important piece of equipment.
Still, it is nice to speculate about what kind of Lunar base we may have in another ten years or so. I do hope that NASA gets the Artemis program on track. It’s been almost 50 years since the last human set foot on the Moon; when Artemis succeeds in getting us back I hope this time it’s for good.
For decades now one of the dreams of science fiction has been the development of technology that would allow a direct connection between the human brain and an electronic computer, both a dream and a nightmare. The possibilities that such a technology could open up are beyond imagining. Just consider being able to access all of the stored knowledge on the Internet simply by thinking about it, or being able to see, in your mind, the images from cameras anywhere in the world. Such technology might even make the age-old dream of telepathy real, as two brains could speak to each other through a computer.
On the other hand, could that same technology be used to access your most private thoughts and opinions without your permission, so that the very idea of privacy no longer existed? Or what if advertisers could, on a regular schedule, implant an ad directly into your brain? No changing the channel or going to the bathroom during commercials; they’re inside you!
Such developments are at least decades away. Right now the major goal of Human-Machine Interface (HMI) technology is to develop methods for people with advanced prosthetics to control them directly from their brains, like Luke Skywalker’s robotic hand in Star Wars.
Currently one of the major problems facing researchers in HMI is finding the right materials for the interface. Nearly all electronic circuits use copper as a conductor, but when copper is implanted into living flesh, which is mostly a salty liquid, it corrodes very quickly, degrading if not actually blocking the performance of the circuit. The corroded metal in turn causes scarring of the flesh, which will irritate if not actually harm the person with the implant. So scientists and engineers have been searching for an organic conductor that will not only give good electrical performance but will not react in any harmful way inside the body.
Recently scientists at the University of Delaware announced that they have found such a material. The team, led by Doctor David Martin, has been investigating a class of organic materials known as conjugated polymers that are able to conduct electric current. The material they identified is known as PEDOT and is already commercially available as an anti-static coating for electronic displays; I actually think I’m familiar with it.
During testing PEDOT showed all of the qualities needed in an interface between electronics and living tissue, without any sign of scarring. In other tests PEDOT was even able to be infused with dopamine as a possible treatment for addiction, making it a candidate for other medical procedures as well.
So if scientists have found a material that will allow them to interface electronics directly to the human brain, what kind of electronics will be the first to be implanted? Well, Elon Musk of Space X and Tesla fame has funded a small bio-tech company, Neuralink, that is developing a chip-sized device that reads brain impulses and transmits them via Bluetooth to a smartphone or other computerized device. Last year Musk showcased a model that was implanted behind the ear of a patient and picked up brain impulses by means of thin wire electrodes laid along the top of the skull. This year’s model has just been announced and consists of a coin-sized disk implanted directly onto the skull.
Initial testing of this year’s model consisted of implanting the interface onto the skulls of three pigs, directly over the portion of the brain that deals with signals from the animal’s snout. Now pigs’ snouts are one of their main sensory organs, and when the pigs were given food or other objects to smell and rummage through, a display screen showed the firing of the neurons in the animals’ brains as they used their snouts.
Neuralink now hopes to begin testing on human volunteers sometime this year. The plan is to implant the device in patients with severe spinal cord injuries in the hope that a second device, implanted below the injury, would enable the patient’s brain signals to bypass the injury and once again allow them to control their arms and legs.
The future possibilities of such technology belong in science fiction novels, for now. Right now the biggest problem the engineers at Neuralink face is that their rather delicate thin-wire electrodes don’t last long inside the patient’s body; they degrade over time because of the corrosive chemicals in the body.
What do you want to bet that the people at Neuralink are contacting the team at the University of Delaware right now?
Everybody knows that our environment is in trouble. The waste and pollution generated by eight billion human beings is choking the planet, producing changes that have already caused the extinction of hundreds of species, and may yet lead to our own. If we are going to preserve the environment we cannot simply return to a simpler, less polluting level of technology, say that of the 18th century. As I said, there are eight billion of us now, and horsepower, waterwheels and ox-drawn plows will not sustain such a large population. Instead we must use our technology to develop solutions to the problems that, ironically, we used technology to cause.
Recently there have been three new technological breakthroughs, inventions if you like, that may play an important role in saving our planet. At least I hope so.
In many ways plastic is actually harmless: it’s neither poisonous nor cancer-causing. In fact it has many excellent qualities, and it’s so cheap that we use it in countless ways. Ironically it is the very fact that plastic is so useful and so cheap that makes it so great a danger. We manufacture enormous amounts of it, and despite what the plastics manufacturers tell us we recycle only a very few percent of what we make. The truth is that, aside from plastic 2-liter bottles, most single-use plastic items, like plastic bags, utensils and straws, are not even made of the right kind of plastic to be recycled. All of those items, and many others, just accumulate in our waste dumps which, since plastics don’t decay, are becoming an ever bigger problem both on land and in the sea.
To solve that problem chemists have for many years been searching for a kind of plastic, technically a polymer, that can easily and cheaply be broken back down into its constituent parts, chemically known as monomers. These reconstituted monomers could then be used to create new polymers, new pieces of plastic, over and over again.
A team of researchers from the United States, China and Saudi Arabia has recently announced the development of just such a polymer, which they call PBTL. According to the announcement, which appeared in the journal Science Advances, PBTL has all of the desirable qualities of current plastics, but in the presence of a catalyst it breaks down readily into its original monomers. After testing through multiple build-ups and breakdowns the team concluded that there was no reason the cycle could not be carried out over and over again; they have succeeded in developing a plastic that is designed to be recycled.
Of course there is one caveat: to make optimal use of PBTL’s reusability it must be separated not only from non-plastic waste but from all other kinds of plastic. That means more sorting, more manpower required in the recycling effort, and therefore more cost. What’s needed is some easily recognizable way to distinguish PBTL from everything else. It would also help if all plastic items were manufactured from PBTL, but that may be difficult to accomplish since there are so many plastic manufacturers.
Still, it is a step in the right direction. With PBTL we now can recycle all of our plastics, if we have the will to do so.
As bad as the problem of plastics is, an even greater threat to our planet must surely be the enormous amounts of CO2 that we have been releasing into the atmosphere. To make matters worse, at the same time we are cutting down the Earth’s forests, which are the best means of removing that CO2 from the air. The resulting buildup of greenhouse gases is the direct cause of global warming and the attendant changes in climate.
So if forests and other vegetation are one way of getting CO2 out of the atmosphere, shouldn’t we be planting more trees and other plants? Of course there are people trying to do just that; however, those efforts have so far been unable even to keep pace with deforestation, let alone bring down the level of greenhouse gases.
So scientists have been trying to develop an ‘artificial leaf’ which, like a real leaf, would use sunlight and water to convert CO2 into a usable fuel. Such a technology would mimic photosynthesis, and in large-scale operations could provide the energy we use, reducing if not eliminating our dependence on fossil fuels.
Some of the most advanced research toward an artificial leaf has come from the Department of Chemistry at Cambridge University, where Professor Erwin Reisner leads a team of chemists who last year succeeded in producing a device that converted CO2 into the fuel syngas, a fuel that is not easy to store for long periods of time. Another problem with the device was that it was constructed from materials similar to those in ordinary solar cells, making it expensive to scale up into a large power plant.
Now the team at Cambridge has developed a new artificial leaf that is manufactured on a photocatalyst sheet, a technology that can be scaled up much more easily, and therefore more cheaply. Also, the end fuel produced by the new ‘leaf’ is formic acid, which is easier to store and can be converted directly into hydrogen, as used in a hydrogen fuel cell.
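Schematically, and this is my own summary of the chemistry rather than equations taken from the Cambridge paper, the new leaf drives the first reaction below using sunlight, while the second can be run later to recover the hydrogen:

\[
\mathrm{CO_{2} + H_{2}O \;\xrightarrow{\;h\nu,\ \text{photocatalyst}\;}\; HCOOH + \tfrac{1}{2}O_{2}}
\]
\[
\mathrm{HCOOH \;\longrightarrow\; H_{2} + CO_{2}}
\]

Notice that the CO2 released when the formic acid gives up its hydrogen can itself be captured and fed back into the first reaction, closing the loop.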
The Cambridge team still has more work ahead of them; the efficiency of the entire system needs to be improved, for one thing. However, it is quite possible that in just a few years we may have a new form of solar technology that not only produces energy but actually removes CO2 from the atmosphere.
Of course we already have both solar and wind technologies that are producing a sizable fraction of our electricity. One big problem that has limited the usefulness of both solar and wind power is that the energy they generate varies significantly: when it’s a sunny day or there’s a good breeze they produce a lot of energy, which somehow has to be stored so it can be used at night or on a calm day. Most often that energy is stored in old-fashioned chemical batteries, a technology that has hardly improved in the last 100 years.
As any owner of an electric car will tell you, batteries absorb their energy slowly, taking a long time to charge up. Not only that, but batteries tend to be heavy and costly and have a limited useful lifespan: a large number of problems for such a critical component of modern technology.
There is another energy-storing electronic device that is cheap, lightweight, and can be charged and discharged thousands of times; not only that, but it can absorb or release its energy very quickly. These devices are called capacitors, descendants of the old Leyden jar, and even if you’ve never heard of them you own hundreds of them in your cell phone, TV, computer and other electronics. Capacitors, whose very name comes from their capacity to store electricity, are superior to chemical batteries in every way except one: they can’t store nearly as much electrical energy as a battery can.
As you might guess, there are engineers working on capacitor technologies in the hope of increasing the amount of energy they can store. One such group works out of Lawrence Berkeley National Laboratory and is headed by Lane Martin, a Professor of Materials Science at the University of California at Berkeley. Taking a common type of commercially available capacitor known as a ‘thin film’ capacitor, Martin and his associates introduced defects into the material of the thin film, a substance known as a ‘relaxor ferroelectric’.
Now, ferroelectric materials are non-conductive, which allows the capacitor to hold positive charges on one side of the film and negative charges on the other; that’s how the energy is stored. The higher the voltage across the thin film the more energy is stored, but if the voltage gets too high the film breaks down, the energy is released and the capacitor is destroyed.
The engineers at Berkeley hoped that by adding defects to the thin films they could increase the voltage the capacitor could withstand without breaking down. Doubling the voltage, by the way, would actually increase the energy stored by a factor of four. The team used an ion beam to bombard the ferroelectric material, creating isolated defects in the film, and the first results of testing have shown a 50% increase in the capacitor’s efficiency along with a doubling of its energy storage capacity.
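That factor of four isn’t magic; it follows directly from the standard formula for the energy stored in a capacitor of capacitance C charged to voltage V:

\[
E = \tfrac{1}{2}CV^{2} \quad\Longrightarrow\quad E(2V) = \tfrac{1}{2}C(2V)^{2} = 4\left(\tfrac{1}{2}CV^{2}\right)
\]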
As with the other two inventions described in this post, capacitors that can store twice as much energy are not going to solve all of our environmental problems, but they’ll help. That’s the takeaway from all of the technology developments I’ve discussed: each one is a step toward solving our energy and pollution problems. We have the scientists who can find the solutions; do we have the will to use their work and save our planet before it’s too late?
Like every other science, paleontology began with big discoveries, the existence of the dinosaurs being one example. As time passed, paleontologists began to fill in a few of the big details, such as the fact that some dinosaurs walked on two legs. As more and better preserved specimens were unearthed, ever finer details were uncovered, like the fact that some dinosaurs actually had a covering of fine feathers to help keep them warm. Finding the kind of pristine fossils needed to fill in those gaps in our knowledge, however, requires a lot of patience, hard work and, let’s be honest, luck.
Some of the best preserved fossils in recent years have come from amber deposits in the country of Myanmar; see my posts of 16 December 2016 and 1 June 2019. Now a new study in the journal ‘Current Biology’ by scientists at the New Jersey Institute of Technology, the University of Rennes in France and the Nanjing Institute of Geology and Paleontology has announced the discovery of a new fossil from Myanmar that answers a lot of questions about a unique group of extinct insects known as ‘hell ants’.
In the fossil record hell ants are one of the earliest known groups of ants, with 14 known species from the Cretaceous period; they appear to have become totally extinct in the same disaster that killed off the dinosaurs. Recognizing a hell ant is quite easy: they all have two very sharp, dagger-like mandibles extending out and curving upwards from their lower jaw, and most species also have a horn-like structure at the top of their head. The whole configuration strongly suggests that hell ants attacked their prey by sweeping it up with the dagger-like mandibles, trapping it against the horn.
There’s a problem with that idea, however: the mouthparts of the ants alive today, like those of virtually all insects, move not up and down as ours do but horizontally. (That’s one of the reasons why close-up movies of insects look so icky: their mouthparts move side to side.) The idea that hell ants somehow moved their jaws upward was quite controversial; many paleontologists refused to believe it until they saw it.
Well, they believe it now, for a piece of amber from Myanmar has recently been discovered that encases a hell ant caught in the act of attacking its prey. Looking at the image below, it is obvious that the hell ant, a new species that has been given the name Ceratomyrmex ellenbergeri, has grabbed its victim, an immature specimen of an ancient relative of the cockroaches called Caputoraptor elegans, from beneath with those dagger-like mandibles, capturing it in a fashion that could only be accomplished if the mandibles could move up and down.
Fossils like the hell ant from Myanmar that answer specific questions are of course rare; even the best researchers can spend years of their career looking for one. Just as often scientists make discoveries by using the newest, latest technology to examine fossils in new ways, answering important questions about the history of life.
One such question deals with the first appearance of the sense of sight in the fossil record, the first animals to have eyes. While paleontologists agree that the compound eyes of the ancient arthropods called trilobites were the first eyes to evolve, there are still many questions about those eyes. How exactly did they function, and were they as advanced as the compound eyes of modern arthropods like insects or crustaceans? In other words, how good was the vision of a trilobite?
Now paleontologists at the University of Cologne and the University of Edinburgh have employed a high-tech digital microscope to examine the eye of a particularly well-preserved specimen of a 429-million-year-old trilobite, Aulacopleura kionickii, from the Czech Republic. What the scientists found was that the trilobite’s eye was constructed from a honeycomb structure of 200 cells, each cell having its own lens and providing the animal with one pixel of information. The vision of this 429-million-year-old animal was therefore equivalent to a modern digital image with 200 pixels: vague and imprecise, but still the best in the world at that time.
Such an eye is also virtually identical to that of a modern bee or dragonfly, the only difference being the number of cells; a dragonfly’s compound eye, for instance, can have as many as 30,000 cells. The fact that the arthropod eye has remained so stable for so long is a testament both to the simplicity and versatility of the compound eye and to the conservatism of evolution: if an organ is doing its job quite well it can persist for many millions of years with only superficial changes.
As a final example of how, if you wait long enough, the fossil record will provide amazing evidence of how creatures lived long ago, a fossil ichthyosaur was recently unearthed in China with the remains of its last meal still recognizable in its stomach. Now ichthyosaurs were aquatic reptiles who lived during the age of the dinosaurs, see my posts of 28 October 2017 and 18 April 2018, and the fossil ichthyosaur found in China was dated to about 200 million years ago.
According to the paper published in the journal iScience, the skeleton of the ichthyosaur, a member of the genus Guizhouichthyosaurus, was nearly complete and measured about 5 meters in length. The big surprise was inside, however: the partial skeleton of another marine reptile, known as a thalattosaur.
In life the thalattosaur would have been about 4 meters in length, making this find the earliest known example of one large predator feeding on another. Although the thalattosaur’s head and tail were missing, the rest of the skeleton was intact, the four limbs still connected to the body. While the researchers cannot be certain, they consider the intact condition of the body to be evidence of predation, not scavenging. In either case it is a remarkable find: two fossils for the price of one, telling a story from long ago.
Bit by bit paleontologists are filling in the gaps in our knowledge of the history of life here on Earth. To use the trilobite’s eye as a metaphor, our image of the past started out with only a small number of pixels, vague and imprecise. Each new fossil discovery adds one more pixel to that image, and while we may not yet have reached the level of high definition, our view of the past is becoming clearer all the time.
Many of the foods that we buy in the supermarket are made more appetizing and longer lasting by the addition of a thickening agent that gives them more body and volume. Thickeners work by increasing the viscosity of a liquid, normally without altering its taste or colour, and one of the most common forms of thickening agent is the gel. In chemistry a gel is defined as a liquid held by surface tension within a cross-linked solid network of molecules, which prevents the liquid from flowing. In some respects a gel acts almost like a sponge, a lattice of fibers that holds in a liquid.
Commercially the two most common types of gel are pectin and gelatin, from which the word gel is derived. Both pectin and gelatin form their cross-linked networks from long chains of molecules, technically called polymers. The primary difference between the two is that in pectin the chains are made up of sugar molecules, while in gelatin they are composed of proteins. Those differences stem from the sources of the two classes of chemicals: pectin is derived from plant tissue, while gelatin is produced from animal tissue.
In plants, pectin consists of a large number of compounds derived from sugars, technically polysaccharides, which serve as structural components of the cell wall. Pectin not only strengthens the cell walls of the non-woody parts of plants but also allows for cell growth while holding plant cells together. The softening of fruit as it ripens is caused by the breakdown of pectin through the action of enzymes, as is the rotting of the leaves of deciduous trees.
Historically, pectin has been used in food production for many centuries, just how many is not precisely known; it was in 1825 that the chemist Henri Braconnot first succeeded in isolating pectin. Although pectin can be obtained from numerous fruits and vegetables, modern commercial production is primarily derived from the peels of citrus fruits.
Pectin is perhaps best known in food preparation for the production of jellies and jams; indeed, without pectin your favourite jelly would be nothing but a sweet juice. In addition to jellies, pectin is used to provide a bit of substance in low-fat baked goods and health drinks. Being both colourless and tasteless, pectin does not interfere with the natural appearance or flavour of the food it is adding body to.
Pectin is also frequently used in non-food products, being added to cosmetics and to drugs such as time-release capsules. In fact the increase in viscosity and volume provided by pectin has led to it being prescribed as a medication for constipation and diarrhea.
As mentioned earlier, unlike pectin, gelatins are produced from animal parts, specifically the protein collagen obtained from the hooves, bones and skins of pigs and cows. Despite being chemically so different from pectin, gelatin behaves in much the same fashion and is often used in much the same way.
Everyone is familiar with Jell-O, the brand name for fruit flavoured gelatin desserts. Other food products that obtain their firmness from gelatin include marshmallows, gummy candies, ice cream, cream cheese and even margarine. Like pectin, gelatin has no taste or colour and can be added to almost any food in order to give it some firmness without altering the flavour or appearance.
Gelatin also has a large number of non-food uses, including a variety of glues and other adhesives. In photography gelatin is by far the most commonly used material for holding the silver halide crystals onto the film or paper. Other uses include the capsules of some powdered drugs as well as the binder for matches and sandpaper. Ballistic gelatin is commonly used in measuring the performance of firearms and their projectiles.
The earliest known uses of gelatin come from 15th-century England, where the hooves of cows were boiled to obtain it. Modern gelatin production comes from the hides of pigs and cows (75%) or their bones (25%), although some cooks still produce their own gelatin at home from animal bones and cuts of meat containing a good deal of cartilage.
Because it is made from animals, the consumption of gelatin may be taboo for some people for religious or ethical reasons. Jews and Muslims are forbidden from eating gelatin made from pork, while Hindus are forbidden from eating gelatin made from cows. A vegan, of course, would refuse to consume gelatin of any kind. Pectin on the other hand, being produced from plants, raises no such moral conflicts.
Subjective opinions aside, pectin and gelatin are two very different classes of chemicals that are nevertheless used in very similar ways in the production of food, providing another reminder that cooking is really just chemistry.