About the Precautionary Principle:
The precautionary principle sounds logical: when you aren’t sure whether something might cause harm, be careful and don’t do anything that could be dangerous, especially to anything really important like human lives or the environment. It also sounds like nothing new or revolutionary. In practice, however, the Precautionary Principle is a lot more extreme and a lot less common sense than one might think.
A widely cited formulation dates to 1998, when the Wingspread Conference on the Precautionary Principle, convened by the Science and Environmental Health Network, issued the statement:
“When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.”
And with this one statement, the “Precautionary Principle” became the next big thing and was totally the “in” concept for everyone in the enviro-political movement to attend workshops on and start talking about – just to show how up to date they are.
The concept was pushed as if it were somehow amazing and should be the guiding principle behind EVERYTHING. The EU formally adopted the Precautionary Principle in 2000 as a fundamental basis of environmental policy, without ever really defining what it was or how it should be applied. Not surprisingly, San Francisco in the US has adopted the policy as well.
But there’s a problem: the precautionary principle assumes that something should be considered harmful, or potentially harmful, until proven otherwise. Depending on your definition of “proof,” you run into trouble here. If one accepts that nothing in science is ever proven true beyond all doubt, then you automatically have a paradox: it becomes impossible to ever do anything, on the grounds that it might possibly, maybe, be harmful.
Under the precautionary principle, no evidence is needed that something is harmful, or even could be harmful. No plausible reason to believe it could be harmful is needed either. In many cases, no amount of scientific evidence against the claim that something is harmful ever seems sufficient to counter the objection that it is “not proven safe.” Good scientists are often reluctant to call something “impossible” – for example, the designer of a nuclear reactor may be highly confident that the reactor will never melt down, and that even if it did, the containment vessel would hold the material. Even so, the designer would understandably be reluctant to say it *cannot* happen. After all, it is not impossible that the containment structure could be breached by a massive meteor strike, even if that is astronomically unlikely.
In this circumstance, the precautionary principle shifts the burden of proof, creating a ridiculous requirement to prove beyond any shadow of a doubt that every claim of harm, no matter how far-fetched, is 100% false. Since no evidence is ever needed to make such a claim, and no reasoning is required either, it’s possible to claim that anything might be harmful in one way or another.
Therefore by this logic:
“I think CF lightbulbs will increase the number of herpes cases.”
“Why do you think that?”
“I just do. I think I had a dream about it or something.”
“Is there proof to the contrary?”
“No, no studies have ever been done on the use of CF lightbulbs and the transmission of herpes.”
“Can we do one?”
“Yes, but it’s hard, if not impossible, to be totally conclusive about that kind of link, especially if it’s small.”
“Therefore we must ban CF lightbulbs.”
On the risk of doing nothing:
Another big issue with the precautionary principle is that it assumes it is always best to do nothing unless an action is proven, beyond the smallest shadow of a doubt, to be safe. But because it always favors inaction, it can end up significantly worse than acting. For one thing, refusing to accept anything new, or anything which might possibly be harmful, tends to have economic and societal consequences which, although indirect, can lead to much greater harm to life, health and the environment.
Furthermore, refusing to adopt a new method or technology simply defaults to the status quo, which commonly carries a greater impact of its own. For example, if seat belts were a new technology, one could use the precautionary principle to argue that there is no proof they will not injure the body by stopping it too quickly, or that they will not trap people in burning cars. One might even go so far as to argue there is no proof they will not make people feel secure and therefore drive more erratically, costing lives. Thus, by the precautionary principle, no matter how many crash test studies are done and no matter how much theory and design goes into seat belts, there would be grounds for banning them.
This presents another paradox, because the precautionary principle does not allow for any kind of “risk management” or “acceptably small risk,” no matter how small. It can be taken to the point of being ridiculous, and often is. It does not allow for any assessment of the risk of inaction. Building a cell phone tower near a school would be considered against the precautionary principle because there “may be some risk,” but the principle does not consider that a tower built elsewhere might be harder to construct, putting workers’ lives in more danger, or might offer poorer coverage and therefore cost the life of a motorist unable to call for help in an emergency. These possibilities, though small, are certainly no smaller than the risk of building near a school.
Example of Applying Precautionary Principle to Inaction:
“The house is on fire. We had better put it out with this extinguisher before the fire spreads and consumes the house.”
“How do we know the extinguisher will not make it worse? Perhaps it is full of gasoline and pressurized with propane instead of CO2.”
“But how could that happen?”
“I don’t know. Perhaps someone filled it with gas as a joke. Perhaps someone who did not understand the English writing on it believed it was a gas container and filled it.”
“That sounds far fetched.”
“Yes, but you cannot prove it could not happen. We have no proof that this is not the case.”
“You are right. By the precautionary principle, we should not attempt to put out the fire.”
Some common sense and when to be cautious:
If you’re not absolutely certain something is safe, then you probably shouldn’t bet your life or anyone else’s on it. Sounds like common sense, right? And in general it is. This is why, even if an aircraft company is pretty damn sure their latest design is totally safe and reliable from wind tunnel testing and calculations, they still build a prototype and give it a good shakedown with an experienced test pilot before it gets the all clear to carry passengers. This is why drugs are tested on tissue cultures, animals and controlled volunteer groups many times before they are put out on the market. It’s also why lifeboats are installed on ships, even if the owners are really damn sure that the ship is not going to sink. It’s always a good idea to take that extra measure of security in case you’re wrong.
In engineering, a concept along these lines is called the “factor of safety.” It is essentially the margin between the stresses an item will actually be subjected to and the stresses that would cause it to fail. The factor of safety tends to be very high for items where there is uncertainty involved and where a failure could be catastrophic. Exactly how large a factor of safety is considered necessary depends on the application. In circumstances where a failure could result in loss of life, the factor of safety is generally high. This is especially true whenever there are uncertainties, such as when a type of structure is being built for the first time, or in high-risk environments like space flight or submarines.
For example, if a bridge is intended to carry a certain amount of traffic, the design will call for enough strength to support the maximum possible load expected on the bridge, plus an extra margin beyond that capacity. The reason is simple: to ensure a comfortable margin of safety, so that even if one of the calculations is a little off, or one of the girders has an undetected flaw, or the bridge has suffered some damage, everyone can rest assured it won’t come crashing down. (At least, it is not supposed to. If it isn’t maintained properly, or far too much load is placed on it... well, that’s another matter.)
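As a rough sketch, the factor of safety is just the ratio between the load that would cause failure and the maximum load the structure is expected to see. The numbers below are made up for illustration, not real bridge figures:

```python
# Rough illustration of a factor of safety calculation.
# All numbers are hypothetical, chosen only for the example.

failure_load_tons = 4000       # load at which the structure would fail
max_expected_load_tons = 1000  # heaviest credible traffic load

factor_of_safety = failure_load_tons / max_expected_load_tons
print(factor_of_safety)  # 4.0 -- it can carry 4x its expected load before failing

# In practice engineers often work the other way around: pick a required
# factor of safety for the application, then size the structure to match.
required_fos = 4.0
required_strength = required_fos * max_expected_load_tons
print(required_strength)  # 4000.0 tons
```

Note that the ratio is a design target, not a guarantee; the whole point of the margin is to absorb the uncertainties described above.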
Engineers are asked to work between the opposing forces of safety and cost. In many cases a large factor of safety is practical, because the cost of adding more material than is strictly necessary is nominal. In other cases, though, efficiency of materials and construction must be kept to a maximum. Aircraft are an example: building an aircraft considerably stronger than it needs to be for routine operation would add too much weight and could actually affect safety (as well as efficiency and performance) negatively. In such circumstances the factor of safety may be smaller, but to achieve this while still maintaining acceptable confidence in the safety of the aircraft, the degree of uncertainty must be reduced, which necessitates rigorous testing and quality control.
Another application of the “factor of safety” can be found in pharmaceuticals. In general, doctors are not permitted to prescribe an amount of a medication that would theoretically kill a patient; they are not even allowed to prescribe anything close to it. The greater the gap between a drug’s therapeutic dose and its lethal dose, the greater the factor of safety, and thus the safer the drug is considered. Drugs with small safety margins are always monitored very carefully, while those with very little chance of causing problems are not subjected to the same scrutiny. A drug with a small factor of safety might be considered unsuitable for situations where it is not strictly necessary to preserve life or health.
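In pharmacology this margin is commonly expressed as the therapeutic index: the ratio of the dose that causes harm to the dose that produces the desired effect. A quick sketch, with purely illustrative doses (no real drug is implied):

```python
# Therapeutic index: ratio of the median lethal dose (LD50) to the
# median effective dose (ED50). All numbers here are hypothetical.

def therapeutic_index(ld50_mg: float, ed50_mg: float) -> float:
    """Larger values mean a wider safety margin between helping and harming."""
    return ld50_mg / ed50_mg

# A forgiving drug: the lethal dose is far above the effective dose.
wide_margin = therapeutic_index(ld50_mg=1000, ed50_mg=10)

# A narrow-margin drug: effective and dangerous doses are uncomfortably close,
# so dosing must be monitored carefully.
narrow_margin = therapeutic_index(ld50_mg=30, ed50_mg=10)

print(wide_margin)    # 100.0
print(narrow_margin)  # 3.0
```

The same proportional reasoning the bridge engineer uses thus carries over directly to dosing decisions.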
In a few circumstances, there is a known high probability of danger, or there are great unknowns. In these cases it is considered justified to spend more on safety than would normally be necessary. A case in point is sending men to the moon. The Apollo program had rigorous safety measures, which were increased after the tragic Apollo 1 fire. In addition to static-stand tests and simulations, every rocket stage was flown without humans on board before being used for a manned flight. The command and lunar modules were likewise tested in Earth orbit and then in lunar orbit before the first attempt to land on the moon. The first landing was brief and carried sparse equipment to save capacity; later landings increased capability and duration. These safety measures proved worth their price when Apollo 13 was nearly lost. Despite all the precautions, it was still understood to be high-risk, undertaken by astronauts who understood the dangers. Richard Nixon’s speechwriters had even prepared a speech to be given in the event of the loss of a crew.
Precautionary Principle: This is where common sense gets twisted into something very nonsensical.
Case in point: Cell Phone Towers and Wireless Transmitters
According to some, the precautionary principle should be applied to cell phone transmitters and other RF devices. The proposals are to restrict the deployment of such towers, and especially to ensure that they are never placed anywhere near residential structures, schools, population centers and the like. Proposals have also been made to shield buildings in order to reduce exposure.
The consequences of doing so would include great expense for both mobile companies and customers, dramatically reduced quality of service, and the need for cell phones to transmit at higher power levels in many areas, thereby actually increasing local RF field intensity. Such restrictions would also dramatically reduce the potential lease revenue to site owners, shutting schools and public property out of lucrative site leases. Furthermore, the reduced quality of service would impact the use of cell phones for reporting emergencies, as well as the system’s ability to triangulate the location of emergency calls. Because many of these protests also target government and dispatch radio services, such as TETRA, the restrictions can also degrade the quality and reliability of communications for first responders, law enforcement and other emergency services.
Reasons not to worry:
-Extensive scientific study has failed to find any proof or even solid evidence showing any adverse health effects.
-Several extremely large and well controlled studies have been done on the subject and approved by well respected and credible scientific bodies.
-The inverse square law assures that any RF radiation exposure is extremely small at a normal distance from the transmitter.
-The levels at the base of a transmitter are often lower than those near a phone handset, or even around wireless headsets, baby monitors, remote controls and the like.
-More than half a century of UHF and microwave communications, much of it far more powerful, has failed to produce any noticeable effects on health at low levels.
-The exposure limits set for RF radiation from transmitters are significantly lower than the levels at which damage to health has even been shown to be possible.
-No credible mechanism by which low-level RF radiation could have chronic health effects has been proposed.
-All or nearly all claims of electrosensitivity, acute effects, health problems around transmitters and the like have failed to be verified by scientific tests, but they are easily explained by very well established psychological and sociological effects, analogous to numerous other cases seen throughout history.
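The inverse-square point in the list above is easy to check numerically. For an idealized isotropic transmitter, the radiated power spreads over the surface of a sphere, so power density falls off with the square of distance. The 100 W figure below is a hypothetical round number for illustration, not a real cell site specification:

```python
import math

def power_density_w_per_m2(tx_power_watts: float, distance_m: float) -> float:
    """Power density of an idealized isotropic transmitter: the radiated
    power spread over a sphere of radius distance_m (free space, no antenna
    gain or ground effects -- a deliberate simplification)."""
    return tx_power_watts / (4 * math.pi * distance_m ** 2)

# Hypothetical 100 W transmitter, purely for illustration.
near = power_density_w_per_m2(100, 1)   # density at 1 m from the antenna
far = power_density_w_per_m2(100, 50)   # density at 50 m, e.g. at ground level

print(near / far)  # ~2500 -- going from 1 m to 50 m cuts exposure 2500-fold
```

This is why exposure at the base of a tower is tiny compared with exposure near a handset held against the head: the geometry alone does most of the work.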
Reasons to worry:
-RF radiation fields are rare, though not unheard of, in the natural settings where humans evolved.
-A lot of people have claimed that they could be harmful, although no valid evidence has been produced.
-RF radiation is known to be hazardous at very high levels, although this is primarily due to dielectric heating.
- It is remotely possible that a long-term health effect from exposure to RF fields exists but is so weak, and affects so few people, that it has failed to stand out from the statistical noise despite the extensive studies done. Long-term associations with conditions like cancer are difficult to verify when the link is not statistically strong and clear-cut.
The Interest in Precautionary Principle:
One might think that something as general as the “Precautionary Principle” would not really be exciting enough to have any organizations devoted to it. This is not the case, however. Several organizations not only support the Precautionary Principle but have made it a major part of their reason for being, or are entirely dedicated to the idea. It seems a bit strange, really, to sit around and talk about precautions and how they can be stretched to the extreme, but that’s what they do!
The Precautionary Principle Project
Precaution.org – The Environmental Research Foundation
The Science And Environmental Health Network
Taking Precaution – The Bay Area Precautionary Principle Working Group
Be Safe Precautionary Campaign
Southeast Regional Precautionary Conference
The Center For Health, Environment and Justice
A Small Dose of Toxicology
Oregon Center for Environmental Health
This entry was posted on Saturday, April 26th, 2008 at 2:54 pm and is filed under Bad Science, Culture, Enviornment, Good Science, Not Even Wrong, Obfuscation, Politics.