In an effort to reduce acid emissions from the aviation industry, and thereby prevent between 1,000 and 4,000 deaths annually, it is planned to burn very low-sulphur jet fuel in planes. However, although better air quality is anticipated, such low-sulphur fuels would also reduce the formation of sulphate aerosols, particles which reflect solar energy back into space and help to cool the planet.
Such ultra-low sulphur jet fuel (ULSJ) contains just 15 ppm of sulphur, compared with a high of 3,000 ppm for some current jet fuels. Indeed, the sulphur content of aviation fuel has been increasing of late, thought to be due to an increasing reliance on high-sulphur crude oil obtained from the Middle East and Venezuela.
There are different means for removing sulphur from crude oil and fuels, of which the industry standard is hydrodesulphurisation (HDS). In HDS, the liquid oil or fuel is contacted with a catalyst under a relatively high pressure of H2 gas, removing the sulphur from compounds like benzothiophene and its higher homologues in the form of hydrogen sulphide, H2S. However, to deal with higher sulphur contents, a greater pressure of H2 must be used, and in general the contact time with the catalyst also needs to be longer, thus slowing the production process and increasing the amount of energy required to run it.
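As a rough illustration of why deeper desulphurisation costs more hydrogen and energy, the minimal Python sketch below estimates only the stoichiometric lower bound of H2 needed to carry the removed sulphur away as H2S when taking a fuel from 3,000 ppm down to 15 ppm. The one-tonne basis and the "one H2 per S" assumption are mine; real HDS consumes considerably more H2, since the aromatic rings of benzothiophenes are partially hydrogenated as well.

# Minimal sketch: lower-bound H2 demand for deep desulphurisation.
# Assumptions (not from the article): a 1 tonne fuel basis, and only the
# stoichiometric H2 needed to convert each removed S atom to H2S
# (S + H2 -> H2S); real HDS uses several times more hydrogen.

FUEL_MASS_KG = 1000.0          # one tonne of jet fuel (assumed basis)
S_BEFORE_PPM = 3000.0          # high-sulphur fuel (figure from the article)
S_AFTER_PPM = 15.0             # ULSJ specification (figure from the article)
M_S = 32.06                    # g/mol, sulphur
M_H2 = 2.016                   # g/mol, hydrogen

sulphur_removed_kg = FUEL_MASS_KG * (S_BEFORE_PPM - S_AFTER_PPM) / 1e6
moles_s = sulphur_removed_kg * 1000.0 / M_S
h2_min_kg = moles_s * M_H2 / 1000.0    # one H2 per S, as a lower bound

print(f"Sulphur removed per tonne of fuel: {sulphur_removed_kg:.2f} kg")
print(f"Minimum H2 consumed (as H2S only): {h2_min_kg:.3f} kg")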
There are other methods for removing sulphur from fuels, e.g. by adsorption onto metal oxides such as zinc oxide (ZnO), on which various transition metals are supported. Another route is called "oxidative desulphurisation", in which the benzothiophenes are converted to sulphones, which contain the SO2 functional group and are more easily removed. The sulphur compounds can also be removed using microporous adsorbents such as activated carbon and zeolites. Most refineries prefer to use HDS, and the other options are best regarded as fall-back, Plan-B strategies.
While there is no real disagreement that removing sulphur from jet fuel leads to better air quality and public health benefits, it can be argued that an additional 2% of CO2 emissions would be incurred, because energy derived from fossil fuels is needed to drive the desulphurisation process, which is thought likely to add between 2 and 7 cents per gallon to the cost of the fuel. It is estimated that desulphurising the fuel might recover perhaps one quarter of the health burden, while incurring an increased climate-change impact of maybe one tenth. While there remains some doubt as to the accuracy of the relatively simple models used to estimate health effects that are underpinned by complex mechanisms, all the evidence suggests that removing sulphur from fuels is a positive course of action.
Interestingly, rather than the expected cooling effect of sulphate aerosol particles, a study by Mark Jacobson at Stanford University suggests that there might actually be an increase in warming, because sulphate becomes coated onto the black-carbon particles in the exhaust and increases their warming effect. By reducing the concentration of sulphate, the low-sulphur fuels diminish this effect, so that cooling is experienced relative to the higher-sulphur fuels, although this may reflect an uncertainty in the model used. Presumably, any such warming effect must be to some degree counterbalanced by the reflection of solar energy from sulphate particles per se, generated from free SO2 liberated into the atmosphere.
It seems most likely that it is emissions from the combustion of low-sulphur fuels at ground level, rather than at cruising altitude, that will provide the greatest health improvement. Meanwhile, the model, and the analysis made with it, have revealed various factors likely to be important to the mechanisms and issues of both global warming and public health.
http://www.rsc.org/chemistryworld/2012/05/ultra-low-sulfur-jet-fuel-radar
Thursday, June 21, 2012
Agricultural phosphorus shortage made worse by biofuels?
This article will be published shortly in: "Australian Resources and Investment magazine".
Professor Christopher J. Rhodes, Director of Fresh-lands Environmental Actions, Reading UK. cjrhodes@fresh-lands.com
World rock phosphate production is set to peak by 2030. Since the material provides fertilizer for agriculture, the consequences are likely to be severe, and worsened by the increased production of biofuels, including those from algae.
Introduction.
The depletion of world rock phosphate reserves will restrict the amount of food that can be grown across the world, a situation that can only be compounded by the production of biofuels, including the potential large-scale generation of biodiesel from algae. The world population has risen to its present number of 7 billion in consequence of cheap fertilizers, pesticides and energy sources, particularly oil. Almost all modern farming has been engineered to depend on phosphate fertilizers, on those made from natural gas, e.g. ammonium nitrate, and on oil to run farm machinery and to distribute the final produce. A peak in worldwide production of rock phosphate is expected by 2030,1 which raises fears over how much food the world will be able to grow in the future, against a rising number of mouths to feed. The consensus of opinion is that we are close to the peak in world oil production too. Phosphorus is an essential element in all living things, along with nitrogen and potassium. These are known collectively as N, P, K, the major nutrients that drive growth in all plant and animal species, including humans. Global demand for phosphate rock is predicted to rise at 2.3% per year, but this is likely to increase further in order to grow crops for biofuel production. As a rider to this, if the transition is made to cellulosic ethanol production, still more phosphorus will be required, since less of the plant (the "chaff") is left to return as plant rubble after the harvest, a traditional and natural provider of K and P to the soil.
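To put that 2.3% annual growth rate in context, here is a minimal sketch of how demand compounds and of the implied doubling time; the baseline of around 140 million tonnes per year is an assumption, borrowed from the production figure quoted in the next paragraph as a stand-in for current demand.

# Minimal sketch: compound growth in phosphate-rock demand at 2.3 %/yr.
# Assumption: ~140 million tonnes/yr (the production figure quoted in the
# article) is used as a stand-in baseline for current demand.

import math

GROWTH = 0.023                 # 2.3 % per year (from the article)
BASELINE_MT = 140.0            # million tonnes per year (assumed baseline)

doubling_time = math.log(2) / math.log(1 + GROWTH)
print(f"Doubling time at 2.3 %/yr: {doubling_time:.0f} years")

for years in (10, 20, 30):
    demand = BASELINE_MT * (1 + GROWTH) ** years
    print(f"Demand after {years} years: {demand:.0f} million tonnes/yr")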
World rock phosphate production amounts to around 140 million tonnes per year. In comparison, we would need 352 million tonnes of the mineral to grow sufficient algae to replace all the oil-derived fuels used in the world.2 The US produces less than 40 million tonnes of rock phosphate annually, but to become self-sufficient in algal diesel it would require around 88 million tonnes of the mineral. Hence, for the US, security of fuel supply could not be met by algae-to-diesel production even using all of its indigenous rock phosphate output, and significant further imports would be needed. This is in addition to the amount of the mineral necessary to maintain existing agriculture. In principle, phosphate could be recycled from one batch of algae to the next, but how exactly this might be done remains a matter of some deliberation. For example, the algae could be dried and burned, and the phosphate extracted from the resulting "ash", or the algae could be converted to methane in a biodigester, releasing phosphate in the process. Clearly there are engineering and energy costs attendant on any and all such schemes, and none has been adopted as yet.
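The following sketch simply re-runs the arithmetic behind those figures, so that the scale of the shortfall is explicit; all the inputs are the numbers quoted above, and the shortfalls are straightforward subtractions.

# Minimal sketch: phosphate-rock arithmetic for algal diesel, using the
# figures quoted in the text (million tonnes per year throughout).

world_production = 140.0        # current world rock phosphate output
world_needed_for_algae = 352.0  # to replace all oil-derived fuels with algal fuel
us_production = 40.0            # "less than 40" in the text; upper bound used here
us_needed_for_algae = 88.0      # for US self-sufficiency in algal diesel

print(f"World shortfall: {world_needed_for_algae - world_production:.0f} Mt/yr "
      f"({world_needed_for_algae / world_production:.1f}x current output)")
print(f"US shortfall: {us_needed_for_algae - us_production:.0f} Mt/yr "
      f"({us_needed_for_algae / us_production:.1f}x current output)")
# Note: these totals ignore the phosphate already needed for agriculture,
# so the real deficit is larger still.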
Cleaning-up the Environment.
There is the further issue of the demand on freshwater, of which agriculture already struggles to secure enough to meet its needs, and in any sustainable picture of the future, supplies of water appear uncertain in the face of climate change. It is in the light of these considerations that algal fuels have begun to look very appealing3, especially given the claimed very high yields per hectare as compared, say, with biodiesel from rapeseed. Conventional algae production can be combined with water clean-up strategies3, to remove N and P from agricultural run-off water and sewage effluent, both to prevent eutrophication (nutrient build-up in water), which causes algal blooms, and to conserve the precious resource of phosphate. Algae might also be "fed" with CO2 from the smokestacks of power stations to reduce carbon emissions. The implementation of integrated strategies such as these, where the creation of a "carbon neutral" fuel is combined with pollution reduction, is thought to be the only way that the price of algal fuels can be brought down to a level comparable with conventional fuels refined from crude oil. As the price of oil rises inexorably, they are likely to become ever more attractive. "Peak phosphate" is connected to "peak oil", since phosphate is mined using oil-powered machinery, and in the absence of sufficient phosphorus we will be unable to feed the rising global human population, since modern industrialised farming depends on heavy inputs of phosphate, along with nitrogen fertilizers. Pesticides, too, derived chemically from crude oil, are essential, along with oil-refined fuels for farm machinery. It is, nonetheless, doubtful that the world's liquid transportation fuel requirements can be met entirely through standard methods of algae cultivation,4 though fuel production on a smaller scale does seem feasible. An analogy for the latter might be growing algae in a "village pond" for use by a community of limited numbers.
No solution to “fuel crops versus food crops” problem.
It is salutary that there remains a competition between growing crops (algae) for fuel and those for food, even if not directly in terms of land, then for the fertilizers that both depend upon. This illustrates for me the complex and interconnected nature of, indeed, Nature, which like any stressed chain will ultimately converge its forces onto the weakest link in the "it takes energy to extract energy" sequence. It seems quite clear that, with food production already stressed, the production of (algal) biofuels will never be accomplished on a scale anywhere close to matching current world petroleum fuel use (>20 billion barrels per annum). Thus, the days of a society based around personalized transport run on liquid fuels are numbered. We must reconsider too our methods of farming, to reduce inputs of fertilisers, pesticides and fuel. Freshwater supplies are also at issue, in the complex transition to a more localised age that uses its resources much more efficiently.
In contrast to fossil fuels, say, phosphorus can be recycled; but if phosphorus is wasted, there is no substitute for it. The evidence is that the world is using up its relatively limited supplies of phosphates in concentrated form. In Asia, agriculture has long been enabled by returning animal and human manure to the soil, for example in the form of sewage sludge, and it is suggested that by the use of composting toilets, urine diversion, more efficient ways of using fertilizer and more efficient technology, the potential problem of phosphorus depletion might be circumvented. It all seems to add up to the same thing: we will need to use less, and more efficiently, whether that be fossil resources or food products, including our own human waste. We are all taking a ride on spaceship Earth, and depend mutually on her various provisions to us. Our number is now so great that we cannot maintain our current global profligacy. For the localised communities into which the global village will devolve through the inevitable reduction in transportation, such strategies would seem sensible for food (and some fuel) production at the local level. "Small is beautiful", as Schumacher wrote those many years ago, emphasising a system of "economics as if people mattered".5
And if we try to continue with business as usual?
There is a Hubbert-type analysis of human population growth which indicates that, rather than rising to the putative "9 billion by 2050" scenario, it will instead peak around the year 2025 at 7.3 billion, and then fall. It is probably significant too that the population growth curve fits very closely both with that for world phosphate production and with that for world oil production. It seems to me highly indicative that it is the decline in resources that will underpin our decline in numbers, as is true of any species: from a colony of human beings growing on the Earth, to a colony of bacteria growing on agar nutrient in a Petri dish.
References.
(1) Rhodes, C.J. (2011) Science Progress 94, 323.
(2) Rhodes, C.J. http://ergobalance.blogspot.com/2012/02/achilles-heel-of-algal-biofuels-peak.html
(3) Rhodes, C.J. in Algal Fuels: Phycology, Geology, Biophotonics, Genomics and Nanotechnology, J.Seckbach (ed.), Springer, Dordrecht, in press.
(4) Rhodes, C.J. (2012) Science Progress 95, in press.
(5) Schumacher, E.F. (1973) Small is Beautiful: A Study of Economics as if People Mattered. Vintage, London.
Thursday, June 07, 2012
Current Commentary: Energy from Nuclear Fusion – Realities, Prospects and Fantasies?
Also published in the journal Science Progress, of which I am an editor. It may be downloaded for free via this link: http://www.ingentaconnect.com/content/stl/sciprg/2012/00000095/00000001/art00005
Feasible fusion power – the carrot before the donkey?
When I was about 10, I recall hearing that nuclear fusion power would become a reality "in about thirty years". The estimate has increased steadily since then, and now, forty-odd years on, we hear that fusion power will come on-stream "in about fifty years". So, what is the real likelihood of fusion-based power stations coming to our aid in averting the imminent energy crisis? Getting two nuclei to fuse is not easy, since both carry a positive charge and hence their natural propensity is to repel one another. Therefore, a lot of energy is required to force them together so that they can fuse. To achieve this, suitable conditions of extremely high temperature, comparable to those found in stars, must be met. A specific temperature must be reached in order for particular nuclei to fuse with one another. This is termed the "critical ignition temperature", and is around 400 million degrees centigrade for two deuterium nuclei to fuse, while a more modest 100 million degrees is sufficient for a deuterium nucleus to fuse with a tritium nucleus. For this reason, it is deuterium-tritium fusion that is most sought after, since it should be the most easily achieved and sustained.
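These ignition temperatures are easier to compare with the underlying nuclear physics if they are expressed as characteristic particle energies via E = kB·T, the usual plasma-physics convention (the mean kinetic energy is 3/2 times this). The keV figures below are implied by the quoted temperatures, not stated in the text; only the standard value of Boltzmann's constant is assumed.

# Minimal sketch: converting ignition temperatures to characteristic
# particle energies with E = k_B * T (standard conversion; the keV values
# are derived here, not quoted in the article).

K_B_EV_PER_K = 8.617e-5        # Boltzmann constant in eV/K

for label, temperature_k in (("D-T ignition", 100e6), ("D-D ignition", 400e6)):
    energy_kev = K_B_EV_PER_K * temperature_k / 1000.0
    print(f"{label}: {temperature_k:.0e} K  ~  {energy_kev:.0f} keV per particle")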
One disadvantage of tritium is that it is radioactive, decaying with a half-life of about 12 years, and consequently it exists naturally in only negligible amounts. However, tritium may be "bred" from lithium using neutrons produced in an initial deuterium-tritium fusion. Ideally, the process would become self-sustaining, with lithium fuel being burned via conversion to tritium, which then fuses with deuterium, releasing more neutrons. While not unlimited, there are sufficient known resources of lithium to fire a global fusion programme for about a thousand years, mindful that there are many other uses for lithium, ranging from various types of battery to medication for schizophrenics. The supply would be effectively limitless if lithium could be extracted from the oceans.
In a working scenario, some of the energy produced by fusion would be required to maintain the high temperature of the fuel, such that the fusion process becomes continuous. At temperatures of around 100 - 300 million degrees, the deuterium/lithium/tritium mixture will exist in the form of a plasma, in which the nuclei are naked (having lost their initial atomic electrons) and are hence exposed to fuse with one another.
The main difficulty that bedevils maintaining a working fusion reactor which might be used to fire a power station is containing the plasma, a process usually referred to as "confinement", the overall approach being "magnetic confinement fusion" (MCF). Essentially, the plasma is confined in a magnetic bottle, since its component charged nuclei and electrons tend to follow the lines of magnetic force, which can be so arranged that they occupy a prescribed region, thus centralising the plasma within a particular volume. However, the plasma is a "complex" system that readily becomes unstable and leaks away. Unlike a star, the plasma is highly rarefied (a low-pressure gas), so the proton-proton cycle that powers the sun could not be achieved on Earth in this way: it is only the intensely high density of nuclei in the sun's core that allows the process to occur sustainably, and the sun's plasma is contained within its own gravitational mass, isolated within the cold vacuum of space.
In June 2005, the EU, France, Japan, South Korea, China and the U.S. agreed to spend $12 billion to build an experimental fusion apparatus (called ITER)1 by 2014. It is planned that ITER will function as a research instrument for the following 20 years, and the knowledge gained will provide the basis for building a more advanced research machine. After another 30 years, if all goes well, the first commercial fusion powered electricity might come on-stream.
The Joint European Torus (JET)
I attended a fascinating event recently - a Café Scientifique2 meeting held in the town of Reading in South East England. I have also performed in this arena, talking about "What Happens When the Oil Runs Out?", which remains a pertinent question. This time it was the turn of Dr Chris Warrick from the Culham Centre for Fusion Energy3 based near Abingdon in Oxfordshire, which hosts both the MAST (Mega Amp Spherical Tokamak) and the better known JET (Joint European Torus) experiments. In the audience was a veteran engineer/physicist who had worked on the pioneering ZETA4 experiment in the late 1950s, in which neutrons were detected, leading to what proved later to be false claims that fusion had occurred; their true source turned out to be different versions of the same instability processes that had beset earlier machines.
Nonetheless, his comment was salient: "In the late 50s, we were told that fusion power was 20 years away and now, 50-odd years later it is maybe 60 years away." Indeed, JET has yet to produce a positive ratio of output power/input energy, and instability of the plasma is still a problem. Dr Warrick explained that while much of the plasma physics is now sorted-out, minor aberrations in the magnetic field allow some of the plasma to leak out, and if it touches the far colder walls of the confinement chamber, it simply "dies". In JET it is fusion of nuclei of the two hydrogen isotopes, deuterium and tritium that is being undertaken, a process that as noted earlier, requires a "temperature" of 100 million degrees.
I say "temperature" because the plasma is a rarefied (very low pressure) gas, and hence collisions between particles are not sufficiently frequent for the term to imply the same distribution of energy as occurs under conditions of thermal equilibrium. It is much the same as the temperatures that may be quoted for molecules in the atmospheric region known as the thermosphere, which lies some 80 kilometres above the surface of the Earth. Here too, the atmosphere is highly rarefied, and thus the derived temperatures refer to the translational motion of molecules and are more usefully expressed as velocities. However expressed, at 100 million degrees centigrade the nuclei of tritium and deuterium have sufficient translational velocity (enough energy) that they can overcome the mutual repulsion arising from their positive charges and come close enough to be drawn together by attractive nuclear forces and fuse, releasing vast amounts of energy in the process.
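As an illustration of "temperature expressed as velocity", the sketch below estimates the root-mean-square speed of a deuterium nucleus at 100 million degrees from the standard kinetic-theory relation v = sqrt(3 kB T / m); the resulting speed is a derived illustration using textbook constants, not a figure from the talk.

# Minimal sketch: rms speed of a deuteron in a 100-million-degree plasma,
# from kinetic theory (v_rms = sqrt(3 k_B T / m)). Standard constants;
# the speed itself is derived here, not quoted in the article.

import math

K_B = 1.380649e-23             # J/K
M_DEUTERON = 3.344e-27         # kg
T_PLASMA = 1.0e8               # K, the "temperature" quoted for D-T fusion

v_rms = math.sqrt(3 * K_B * T_PLASMA / M_DEUTERON)
print(f"rms speed of a deuteron at 1e8 K: {v_rms:.2e} m/s "
      f"(~{v_rms / 1000:.0f} km/s)")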
JET is not a small device, at 18 metres high, but bigger machines will be necessary before the technology is likely to give out more energy than it consumes. Despite the considerable volume of the chamber, it contains perhaps only one hundredth of a gram of gas, hence its very low pressure. There is another matter, and that is how long the plasma, and hence the energy emission, can be sustained. Presently it is fractions of a second, but a serious "power station" would need to run for some hours. There is also the problem of getting useful energy from the plasma to convert into electricity, even if the aforementioned and considerable problems can be overcome and a sustainable, large-scale plasma maintained.
The plan is to surround the chamber with a "blanket" of lithium with pipes running through it, through which a heat-exchanger fluid would pass. The heated fluid would then pass on its heat to water and drive a steam turbine, in the time-honoured fashion used for fossil-fuel-fired and nuclear power plants. My understanding is that this would not be lithium metal but some oxide material. The heat would be delivered in the form of very high energy neutrons, which would be slowed down as they encounter lithium nuclei on passing through the blanket. In principle this is a very neat trick, since absorption of a neutron by a lithium nucleus converts it to tritium, which could be fed back into the plasma as a fuel. Unlike deuterium, tritium does not exist in nature in any appreciable amount, being radioactive with a half-life of about 12 years. However it is produced, either separately or in the blanket, lithium is the ultimate fuel source, not tritium per se. Deuterium does exist in nature, but only to the extent of one part in about two thousand of ordinary hydrogen (protium), and hence the energy costs of its separation are not inconsiderable.
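A small sketch of the energy bookkeeping behind that scheme, using textbook reaction energies for D-T fusion and for neutron capture on lithium-6 (these Q-values are standard figures supplied here, not taken from the article), shows why the blanket doubles as the heat source: the neutron carries roughly 80% of the fusion energy out of the plasma.

# Minimal sketch: energy bookkeeping for the D-T / lithium-blanket cycle,
# using textbook Q-values in MeV (assumed standard figures).

E_DT_TOTAL = 17.6              # MeV per D + T -> He-4 + n
E_NEUTRON = 14.1               # MeV carried by the neutron into the blanket
E_ALPHA = 3.5                  # MeV carried by the helium nucleus (heats the plasma)
E_LI6_CAPTURE = 4.8            # MeV released by n + Li-6 -> T + He-4

energy_per_cycle = E_DT_TOTAL + E_LI6_CAPTURE   # one fusion + one breeding capture
print(f"Fraction of fusion energy carried by the neutron: {E_NEUTRON / E_DT_TOTAL:.0%}")
print(f"Energy per full D-T/Li-6 cycle: {energy_per_cycle:.1f} MeV")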
The neutron flux produced by the plasma is very high, and to enhance the overall efficiency of breeding lithium into tritium the reactor would be surrounded with a "lithium" blanket about three feet thick. The intense neutron flux will render the material used to construct the reactor highly radioactive, to the extent that it would not be feasible for operators to enter its vicinity for routine maintenance. The radioactive material will need to be disposed of in a manner similar to that required for nuclear waste generated by fission, and hence fusion is not as "clean" as is often claimed. Exposure to radiation of many of the potential materials needed to make the reactor, blanket, and other components such as the heat-exchanger pipes would render them brittle, and so compromise their structural integrity. There is also the possibility that the lithium blanket around the reactor might be replaced by uranium, so enabling the option of breeding plutonium for use in nuclear weapons.
Providing a fairly intense magnetic field to confine the plasma (maybe 4 tesla - similar to that in a hospital MRI scanner) needs power (DC, not AC, as switching the polarity of the field would cause the plasma to collapse) and large power-supply units containing a lot of metals, including rare earths, which are mined and processed using fossil fuels. The issue of rare earths is troublesome already, and whether enough of them can be recovered to meet existing planned wind and electric-car projects is debatable, let alone whether additional pressure should be placed upon an already fragile resource to build a first generation of fusion power stations.
World supplies of lithium are also already stressed, and hence getting enough of it not only to make blankets for fusion reactors and tritium production but also for the millions-scale fleet of electric vehicles needed to divert our transportation energy demand away from oil is probably a bridge too far, unless we try getting it from seawater, which takes far more energy than mining lithium minerals. The engineering requirements, too, will be formidable, most likely forcing the need to confront problems as yet unknown, and even according to the most favourable predictions of the experts, fusion power is still 60 years away, if it arrives at all. Given that the energy crisis will hit hard long before then, I suggest we look to more immediate solutions, mainly in terms of energy efficiency, for which there is ample scope.
To quote again the ZETA veteran, "I wonder if maybe man is not intended to have nuclear fusion," and, all in all, other than the fusion that powers the sun and gives us solar energy, I wonder if he is right. At any rate, garnering real electrical power from fusion is so far distant as to have no impact on the more immediately pressing fossil-fuels crisis, particularly for oil and natural gas. Fusion power is a long-range "holy grail", part of the illusion that humankind can continue in perpetuity to use energy on the scale that it presently does. Efficiency and conservation are the only real means to attenuate the impending crisis in energy and resources.
UK and US join forces on laser-fusion energy5
The UK company AWE and the Rutherford Appleton Laboratory have joined forces with the US-based National Ignition Facility (NIF) to help provide energy using inertial confinement fusion (ICF), in which a pellet of fuel is heated using powerful lasers. Since the late 1950s, UK scientists have been attempting to achieve the fusion of hydrogen nuclei (tritium and deuterium) using magnetic confinement (MCF). The UK-based Joint European Torus (JET) is the largest such facility in the world and may be regarded as a prototype for the International Thermonuclear Experimental Reactor (ITER) based in France. So far, the "breakeven point" has not been reached: the plasma has yet to yield more energy than it takes to maintain it; moreover, there are problems of instability, as already alluded to.
An alternative is inertial confinement fusion (ICF), in which fusion of nuclei is initiated by heating and compressing a fuel target, typically a pellet containing deuterium and tritium held in a device called a hohlraum (a hollow space or cavity), using an extremely powerful laser. Energy is delivered from the laser to the inner surface of the hohlraum, which produces high-energy X-rays. The impingement of these X-rays on the target causes its outer layer to explode and, by a Newtonian counter-reaction, drives the inner substance of the target inwards, compressing it massively. Shock waves are also produced that travel inward through the target.
If the shock-waves are intense enough, the fuel at the target centre is heated and compressed to the extent that nuclear fusion can occur. The energy released by the fusion reactions then heats the surrounding fuel, within which atomic nuclei may further begin to fuse. In comparison with "breakeven" in MCF, in ICF a state of "ignition" is sought, in which a self-sustaining chain-reaction is attained that consumes a significant portion of the fuel. The fuel pellets typically contain around 10 milligrams of fuel, and if all of that were consumed it would release an energy equivalent to that from burning a barrel of oil. In reality, only a small proportion of the fuel is "burned". That said, "ignition" would yield far more energy than the breakeven point value.
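The "barrel of oil from 10 milligrams" comparison can be checked to order of magnitude. D-T fusion releases about 17.6 MeV per reaction, and a barrel of oil holds roughly 6 GJ; both of these are standard figures supplied here as assumptions, not taken from the article. The sketch below lands at roughly half a barrel per fully burned pellet, which is the same order of magnitude as the claim.

# Minimal sketch: order-of-magnitude check of the "10 mg of D-T fuel ~ a
# barrel of oil" claim. The D-T reaction energy, nuclear masses and barrel
# energy content are standard values assumed here.

MEV_TO_J = 1.602e-13
E_PER_REACTION_J = 17.6 * MEV_TO_J       # D + T -> He-4 + n
FUEL_MASS_PER_REACTION_KG = 8.35e-27     # one D plus one T nucleus (~5 u)
PELLET_FUEL_KG = 10e-6                   # 10 milligrams, as quoted
BARREL_OF_OIL_J = 6.1e9                  # ~6 GJ per barrel (assumed standard value)

specific_energy = E_PER_REACTION_J / FUEL_MASS_PER_REACTION_KG   # J/kg
pellet_energy = specific_energy * PELLET_FUEL_KG
print(f"Specific energy of D-T fuel: {specific_energy:.1e} J/kg")
print(f"Energy in a fully burned 10 mg pellet: {pellet_energy:.1e} J "
      f"(~{pellet_energy / BARREL_OF_OIL_J:.1f} barrels of oil)")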
At the NIF it is hoped to have ignition within a couple of years, or far sooner than the carrot before the donkey "50 years away" for MCF, although there is much to be done yet. A single shot from the world's most powerful laser at NIF is reported to have released "a million billion neutrons" and for a tiny fraction of a second produced more power than was being consumed in the entire world, although to achieve ignition this would need to be increased a thousand-fold.
A real breakthrough, no doubt, but as with MCF, how long before this technology can be fabricated into actual power stations? There are many nontrivial ancillary challenges too, especially the secondary procedure of actually getting the energy out of the reactor into a useful form, i.e. heat to drive steam turbines, as with all other kinds of thermal power station, to generate electricity. This is very complex and untested technology compared, say, to coal- and gas-fired or nuclear power plants. Actual fusion power is still at best many decades away, and the concept should not be thrown out as a red herring suggesting that the world's impending energy crisis has been averted.
Most immediately, what fusion in any of its manifestations does not address is the problem of providing liquid fuels as conventional supplies of oil and gas decline, and it is this which is the greatest and most pressing matter to be dealt with, against a backdrop of mere years not a luxury of decades.
"Cold fusion" proven?
I remember well the phenomenon of "cold fusion", as it was dubbed.6 This was back in 1989, when Professors Stanley Pons and Martin Fleischmann claimed that they could extract 40% more energy in the form of heat than they had input in the form of electricity into an electrochemical cell containing deuterium oxide ("heavy water"). They proposed that the deuterium nuclei had undergone nuclear fusion. The potential implications of this were staggering: that rather than having to mimic the massively high temperatures of some hundred million degrees or so that are necessary to overcome the strong Coulombic forces which tend to keep two positively charged nuclei apart, as in "hot" plasma fusion, it was feasible to somehow overcome this barrier such that the process could occur at room temperature.
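For scale, the Coulomb barrier that the deuterium nuclei would have to defeat can be estimated from the electrostatic potential energy of two unit charges brought within a few femtometres of each other, V = e²/(4πε₀r). The sketch below is a textbook-style estimate with an assumed, rounded "contact" separation of about 4 fm; it gives a few hundred keV, vastly larger than room-temperature thermal energies of about 0.025 eV, which is why the claim was so startling.

# Minimal sketch: Coulomb barrier between two deuterons versus thermal
# energy at room temperature. The ~4 fm separation is an assumed round
# "nuclear contact" distance; the constants are standard.

import math

E_CHARGE = 1.602e-19           # C
EPSILON_0 = 8.854e-12          # F/m
K_B_EV_PER_K = 8.617e-5        # eV/K
R_CONTACT_M = 4.0e-15          # ~4 fm, assumed separation at which nuclei "touch"

barrier_j = E_CHARGE**2 / (4 * math.pi * EPSILON_0 * R_CONTACT_M)
barrier_kev = barrier_j / E_CHARGE / 1000.0
room_temp_ev = K_B_EV_PER_K * 300.0

print(f"Coulomb barrier at ~4 fm: ~{barrier_kev:.0f} keV")
print(f"Thermal energy at 300 K: ~{room_temp_ev:.3f} eV "
      f"(a factor of ~{barrier_kev * 1000 / room_temp_ev:.1e} smaller)")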
Pons and Fleischmann came to be largely dismissed as charlatans when many other research groups around the world found themselves unable to reproduce their results and confirm their claims, which were accordingly rejected as unfounded. However, note the comment below to the effect that the phenomenon has since been confirmed in many highly credible laboratories around the world. I remember there were some really quite bizarre effects found by other workers - for example, one young man was killed when a cold-fusion cell exploded while he was trying to demonstrate the phenomenon of "fusion in a test-tube", as the popular press described it.7 So, something real was happening, fusion or not. A senior scientist and champion of cold fusion, Dr Eugene Mallove, was murdered during the furore, which incited a number of conspiracy theories at the time.8
The matter never entirely went away, and I recall reading an article either in The Guardian or New Scientist (or both) to the effect that a scientist in the U.S. had claimed to have demonstrated fusion when he exposed hexadeuteroacetone (that's C3D6O, as opposed to the more common C3H6O) to ultrasound. He was vilified, as I recall, by the scientific community and its dogma that cold fusion did not exist, and could not, since there is no theory to explain it.9 However, a professor in Japan has apparently demonstrated that if deuterium gas is passed into a reactor containing composite palladium-zirconium oxide (Pd-ZrO2) nanoparticles, helium-4 is produced (a sure sign of fusion?), the temperature of the reactor rises and its centre remains warm for 50 hours.10
If this is true it is absolutely fascinating, and perhaps some accepted scientific laws will need to be substantially modified, as has been said. However, from a practical point of view, that of dealing with the energy crunch, even if cold fusion is a reality, have we found our salvation? I don't think so, frankly. I have not seen any figures for how much Pd and deuterium gas are used to run this cell, nor for how much excess heat is produced, and I have yet to be convinced that the energy needed to produce deuterium gas (by the electrolysis of deuterium oxide, "heavy water"), and to make enough heavy water in the first place to feed the electrolysis units, will be offset by the final thermal output of the "fusion" reactors. Then there is the matter of the availability of palladium metal, the energy for its fabrication into the composite nanoparticles and so on, and of how the heat energy would be extracted usefully, say to heat buildings or drive electricity turbines. The problem of energy extraction is even worse for "hot" fusion, whose plasma, even if it can be sustained, would produce ultra-high-energy neutrons, from which thermal energy must be extracted, that no known materials are yet able to withstand.
Energy-Catalyzer11
The issue of "cold fusion" has resurfaced in the guise of the Energy Catalyzer. This device, also referred to as the E-Cat, is claimed to be a Low-Energy Nuclear Reaction (LENR) heat source and is the creation of the inventor Andrea Rossi. Much has been written on the subject in the popular press, and the following highlights are taken from a useful Wikipedia article11 containing original references, which I have read and validated as being accurate. A patent approved in Italy on April 6, 2011 designates the E-Cat as "process and equipment to obtain exothermal reactions, in particular from nickel and hydrogen". Now this is where it gets interesting: Rossi and the physicist Sergio Focardi say the device works by infusing heated hydrogen into nickel, transmuting it into copper and producing heat. However, an international patent application has received an unfavourable international preliminary report on patentability, because it seemed to "offend against the generally accepted laws of physics and established theories", and it was concluded that the application lacked either experimental evidence or a firm theoretical basis according with current scientific understanding. The device has been demonstrated to a number of invited audiences, but it has not been independently verified. Writing in Forbes, Mark Gibbs concluded that "until a verifiably objective analysis is conducted by an independent third party that confirms the results match the claims, there's no real news".
Evaluation of the device
The University of Bologna, where Focardi is an emeritus professor, has made it very clear that it has not so far been involved at all with the device, but will begin experiments on the E-Cat as soon as the contract signed with Andrea Rossi's Italian company (EFA Srl) comes into force. On November 23, 2011, the Corriere della Sera reported in its Bologna edition that the University's contract with Rossi was expected to start "within a few weeks", and that the results of the research would be published in scientific journals, possibly "by summer [2012]". As Ny Teknik reports, Peter Ekström, a lecturer at the Department of Nuclear Physics at Lund University in Sweden, points out that a chemical reaction is unlikely to provide sufficient energy to overcome the Coulomb barrier, that gamma rays are absent, that there is no explanation for the origin of the extra energy, that the expected radioactivity after fusing a proton with 58Ni is not detected, that the occurrence of 11% iron in the spent fuel is unexplained, that the 10% copper in the spent fuel coincidentally has the same isotopic ratios as naturally occurring copper, and that there is no unstable copper isotope in the spent fuel, suggesting that only stable isotopes are produced. Kjell Aleklett, a physics professor at Uppsala University, said the proportion of copper was too high for any known nuclear reaction involving nickel, and, significantly, that the copper had the same isotopic ratio as natural copper (implying that this is where it came from, rather than from any process of transmutation).
Actual demonstrations of the E-Cat
Two such demonstrations were given in January and February, and others followed, as summarised in the list below. Reporting on the January demonstration, Benjamin Radford, an analyst on the Discovery Channel, wrote: "If this all sounds fishy to you, it should," and that "In many ways cold fusion is similar to perpetual motion machines. The principles defy the laws of physics, but that doesn't stop people from periodically claiming to have invented or discovered one."
- On the 29th of March, 2011, two Swedish physicists, Hanno Essén and Sven Kullander, witnessed a test of a smaller version of the Energy Catalyzer, which ran for six hours. It was claimed that a net power output of 4.4 kW had been achieved, with a total energy output of about 25 kWh. An analysis of the unused powder showed it to be pure nickel, while that taken from the reactor (reported as having been used for 2.5 months) contained 10 percent copper and 11 percent iron. Kullander said that the presence of copper is "a proof that nuclear reactions took place in the process". However, other researchers, Ekström and Aleklett, concluded that since the copper had the same isotopic ratios as natural copper, and since its proportion was too high, it most likely arose from contamination. Significantly, the formation of iron is not mentioned at all in the patent. Essén and Kullander were guarded in their evaluation, writing that: "Since we do not have access to the internal design of the central fuel container... we can only make very general comments." Essén later stated: "I am still very uncertain about this."
- A test run of the E-Cat was made on the 6th of October, 2011, which reportedly lasted for about eight hours, after which Roland Pettersson, an emeritus Associate Professor from the University of Uppsala, said: "I'm convinced that this works, but there is still room for more measurements".
Potential commercial exploitation
A Greek company, Defkalion, had intended to build a heating plant based on the Energy Catalyzer, but the deal fell through, although the company has announced that it plans to fabricate a similar device. Rossi made a deal in May 2011 with AmpEnergo in Ohio, to receive royalties on sales of licenses and products built on the Energy Catalyzer throughout North and South America. It was reported that an engineer, Domenico Fioravanti, had tested a 1 MW power plant based on the Energy Catalyzer on the 28th of October, 2011, although the name of the client was not disclosed. Fioravanti claimed that over a period of 5.5 hours the plant produced 2,635 kWh, which corresponds to an average power output of 479 kW. Independent observers were not permitted, and the plant remained connected to a power supply throughout the test, purportedly to run the fans and the water pumps. It is reported that the customer took possession of the plant afterwards. Rossi claims to have orders for thirteen more 1 MW units, which are on sale for $2 million each, in addition to the one owned by the unnamed customer from the 28th of October test. Focus, a popular science magazine in Italy, has stated that 12 additional units are to be provided to the same, undisclosed customer. Rossi commented: "We are building a 13 MW thermal plant, made of 13 plants such as the one you saw on October 28th: but it's a military research and I can't reveal any further detail, not the name, nor the place, nor the nationality of the customer".
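For what it is worth, the claimed figures are at least internally consistent, as a minimal arithmetic check (energy = average power × time) shows; all the inputs below are the numbers reported above, and nothing here says anything about where the energy actually came from.

# Minimal sketch: consistency checks on the claimed E-Cat test figures
# (energy = average power x time). All numbers are those reported above.

# October 28, 2011 plant test
energy_kwh = 2635.0
duration_h = 5.5
print(f"Implied average power: {energy_kwh / duration_h:.0f} kW")  # ~479 kW, as reported

# March 29, 2011 small-unit test
power_kw = 4.4
hours = 6.0
print(f"Implied energy output: {power_kw * hours:.1f} kWh")        # vs ~25 kWh claimed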
Four Swedish entrepreneurs, two of them particle physicists, have set up a website, Ecat.com, from which to sell the device. In response to a question about sceptical commentary regarding the device, one of the physicists, Magnus Holm, replied that "Until [Rossi] makes an independent test, there is obviously a small chance that it does not work. We are willing to take that risk because it's such an amazing technology if it works". When asked if he was "contributing to fraud", Holm said: "We are not engaged in any deception, and I do not think Rossi is engaged in any fraud either. If it would turn out that it does not work, in spite of everything, I would think it is about self-deception". This is all quite fascinating, and all I can say is: watch this space.
References.
(1) http://www.iter.org/
(2) http://www.cafescientifique.org/
(3) http://www.ccfe.ac.uk/
(4) http://en.wikipedia.org/wiki/ZETA_%28fusion_reactor%29
(5) http://www.bbc.co.uk/news/science-environment-14842720
(6) http://en.wikipedia.org/wiki/Cold_fusion
(7) http://www.science-frontiers.com/sf080/sf080u19.htm
(8) http://www.wanttoknow.info/eugenemallove
(9) http://en.wikipedia.org/wiki/Bubble_fusion
(10) http://www.rexresearch.com/arata/arata.htm
(11) http://en.wikipedia.org/wiki/Energy_Catalyzer