Monday, September 26, 2011

UARS: Why Another Uncontrolled Satellite Crash?

Saturday, the 6-ton UARS satellite crashed. No one is sure exactly where: the odds favor the Pacific Ocean, but the uncertainty is large enough that some debris might have come down over the Pacific Northwest.

Official estimates were that about half a ton of this satellite would crash to Earth in 26 pieces, one as large as 300 pounds or so. My own estimate of surviving debris mass is larger.

I’ve been watching satellites and satellite re-entries all my life. In all those years, no person has been hurt, because most of these hit the sea (the only victims being fish).

But the risk is still there. So, why do we continue launching satellites that can crash uncontrolled? That's a good question!

Our 85-ton Skylab space station crashed onto Australia in 1979, leaving lighter debris on rooftops in a coastal town, with the heavier pieces carrying further inland. About 75 tons were eventually recovered!

Before that crash, the “official wisdom” was that it would mostly “burn up”. It clearly didn’t do that. Had its debris field been more centered on that Australian town, it is likely someone would have been hurt or killed.

That same year, the 10-ton Pegasus 2 satellite also came crashing back to Earth. This one hit the ocean, as do most.

In 1978, the Russian satellite Cosmos 954 crashed to Earth in north-central Canada. This one was the special case of a nuclear reactor-powered satellite, for which the preferred disposal method (a really high, decay-proof orbit) failed. There was serious radioactive contamination over an area of back-country Canada hundreds of miles wide.

And, most folks remember vividly the crash of Space Shuttle Columbia in Texas in 2003. Again, there were a lot of pieces, some quite large, on the ground from Dallas to Tyler. No one on the ground was hurt, but they very easily could have been.

In contrast, the Russians deliberately crashed their Mir space station into the Pacific in 2001. They used a rocket motor to de-orbit the station and put it down exactly where they wanted: away from land and people.

Excepting disasters and reactor disposals, most satellites could (and should) be equipped with a small rocket motor for a controlled crash in a safe place. Small solid rocket motors are cheap, light, compact, widely available, and they last for decades without any maintenance, waiting to be used.

So why don’t we do this, especially considering the nail-biting experience with Skylab?

Simply because no rule says we have to. That’s something very easy to fix, and without a new law.

For civilian / commercial satellites, the Federal Aviation Administration (FAA) should simply require controlled de-orbit provisions. It’s their jurisdiction, and they already have rule-making authority. A word from the President is all it would take.

For military satellites, a simple order from the Commander-in-Chief is all that is needed.

Mr. President, fix this. Give the word to the FAA and the Joint Chiefs. I bet most of the other satellite-launching nations would soon follow suit.

Friday, September 23, 2011

Air Races, Air Shows, and Risks

The recent fatal crash of the modified World War 2 P-51 “Galloping Ghost” at the Reno air races is a horrible incident. Lots of things have been said on the news and on the internet about it, but all of this is speculation based on incomplete information. The National Transportation Safety Board (NTSB) will investigate and determine the cause, using all the available information, including material not yet reported or on the internet. Until they publish their findings, perhaps a year from now, all else is mere speculation. That being said, some speculators are more informed than others. I would place more trust in the speculations of an actual aircraft engineer than in those of most other members of the public. Being such an engineer, here are my speculations.

Facts

“Galloping Ghost” suddenly pitched up and climbed, rolled over inverted, and dove to the ground, all in a matter of scant seconds. Impact was not directly upon, but was immediately adjacent to, spectators, many of whom were killed by pieces thrown from the wreck. There was no post-crash fire. The left trim tab was photographed departing from the airplane’s horizontal tail before impact. The pilot’s head was not visible in the canopy before impact. The retractable tail wheel was seen extended before impact.

This aircraft was modified in several ways from its World War 2 configuration to compete in the races. Most notable were “clipped wings”, reducing span (and aileron size) by 5 feet each side, and removal of the belly air scoop and radiator in favor of a sacrificial coolant system. These enable higher top speed, at the cost of higher landing speed and perhaps a reduced maximum roll rate, not a loss of basic stability. Less obvious were changes to canopy size, wing fillet size, and smoothing of protuberances, for drag reduction.

The race speeds significantly exceed 500 mph, when the original level-flight top speed for the P-51 during World War 2 was 435 mph. I do not know what the original “never exceed” speed was for the P-51, but these race speeds would be approaching or exceeding that limit. Flying too fast risks structural failures of wing and tail components by a phenomenon called flutter.

Similar Previous Incident

About a decade ago, a similarly-modified P-51 named “Voodoo-5” experienced a very similar incident: a sudden high-speed pitch-up (at high acceleration) into a climb, with the pilot losing consciousness briefly due to excessive gee forces. He woke up, with no memory of events, at 9000 feet altitude, regained control, and landed successfully. “Voodoo-5” was found to have lost the same left trim tab as “Galloping Ghost”. Loss of the tab at high speed caused failure of part of the elevator control linkage, leaving only the right elevator for pitch control. Aerodynamic flutter was blamed for loss of the trim tab.

At 400+ mph, P-51's exhibit a relatively unusual nose-up tendency that you fight with down trim and down stick. Other aircraft exhibit high-speed nose-down "tuck", or no trim-change tendencies at all. In the P-51, with that nose-up tendency, sudden loss of half your elevator effectiveness at very high speed causes the aircraft to suddenly and violently pitch up, at something near 10-15 gees. The pilot passes out, or can even be killed by a broken neck, depending on helmet weight and head restraints, or the lack thereof.

Speculations Regarding “Galloping Ghost”

“Galloping Ghost” lost the same left trim tab, and pitched up similarly at high gee. Some on the internet say telemetry from the aircraft indicated 11.5 gees. The differences between “Voodoo-5” and “Galloping Ghost” are (1) “Galloping Ghost” also experienced a roll, and (2) it appears her pilot never woke up, or was perhaps already dying of a broken neck. It also appears that the high pitch-up gee level forced deployment of her tail wheel. The roll motion on the way up caused her to peak in inverted flight, as photographed. I think I see a light-colored helmet on the dark dashboard in that internet inverted-flight photo, but I could be wrong. She then continued her pitch-roll motion into a dive to impact.

There has been speculation that the pilot’s seat failed in “Galloping Ghost”, which might explain why his head was not visible in the canopy. I would be surprised at seat failure in a fighter plane at only 10-15 gees, but I guess it could happen. It did not fail in “Voodoo-5”, though.

Waiting for the Truth

The NTSB will opine officially maybe a year from now, but I'd almost bet they say that P-51 trim tabs are vulnerable to flutter-induced departure at race speeds beyond the original design's never-exceed speed. Few designers provide aerodynamic or mass balancing, or any other anti-flutter structural treatment, to a trim tab. Maybe they should. If so, the NTSB will say so.

Inappropriate Fear-Mongering

The public safety issue raised by some reporters has less to do with any given aircraft being "pushed too far", or being modified "too radically", and more to do with simple spectator crowd placement. The wording in those reports seems deliberately chosen to inflame fears, and is a disservice to the public, much like yelling “fire” in a crowded theater when there isn’t one.

At air shows, spectators may not legally be located beneath expected aircraft flight paths. At the Reno air races, they can be (and are) located under flight paths. Perhaps they should not be, similar to the air show restrictions. While these are the first spectator deaths at Reno since the 1950's, that risk has always been there. There was a fatal crash at an air show the day following the “Galloping Ghost” incident. No one but the pilot was killed, because no one was underneath the falling plane.

Tuesday, September 6, 2011

Mars Mission Second Thoughts Illustrated

As I said in a previous posting (8-9-11), I had some second thoughts about the back-up propulsion for my fast trip Mars mission paper, presented at the Mars Society convention in Dallas, Texas, August 4-7, 2011. My backup had been the VASIMR electric propulsion scheme, thinking it a breakthrough in thrust for the power consumption. Based on what I saw at the meeting, it is no breakthrough, and is really mostly unsuitable for fast trips to Mars.

My second thoughts centered around an alternative slow-traveling vehicle requiring artificial gravity, because the manned mission duration would exceed the 1 year known to be tolerable. This vehicle would be powered by the same solid-core nuclear thermal technology I assumed in my landers, derived from the NERVA engines tested successfully four decades ago. I planned this alternative around simple minimum-energy Hohmann transfer orbits, because they are easy.

That still leaves the gas-core nuclear thermal-powered fast trip vehicle, which is still my baseline. I took a closer look at the orbits and the near-straight line “shots” across the solar system at the higher travel speeds. This verified my earlier crude ballpark estimate of the fast trip velocity requirements. All of this is illustrated here, at a level of analysis no deeper than is required to confirm the concepts and their feasibility. For example, I used circles to approximate the actually slightly-elliptical orbits of the planets. To first order for a feasibility check, this is “good enough”.

Baseline Trajectories

The baseline “fast trip mission” sends a fleet of three unmanned ships to Mars parking orbit ahead of the manned ship. This fleet is propelled by the landers themselves, and carries all the propellant required to support the landing operations, plus enough to send these assets one-way to Mars by Hohmann transfer. Figure 1 shows the initial Hohmann transfer for these unmanned assets. Note that there is an opposition during the unmanned flight to Mars.

I looked for ways to center my manned fast trip about that first opposition, without adding too much extra time in orbit to the manned mission. This did not prove feasible, so that the manned fast trip is centered about a second opposition some 779 days after the first one. The mission calls for 16 weeks at Mars making landings, which would be 56 days to either side of the opposition. A little time spent making rough calculations gave me an “optimal” one-way flight time pretty near 83 days for the “average” mission these approximations represent. This is shown in Figure 2.
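As a quick cross-check on that 779-day spacing, the time between oppositions (the synodic period) falls right out of the two orbital periods. Here is a minimal sketch, assuming circular, coplanar orbits and generic textbook period values (not numbers taken from the figures):

```python
# Time between successive Mars oppositions (the synodic period), assuming
# circular, coplanar orbits and approximate sidereal periods.
T_EARTH = 365.25   # days
T_MARS  = 687.0    # days

synodic = 1.0 / (1.0 / T_EARTH - 1.0 / T_MARS)
print(f"days between oppositions: {synodic:.0f}")   # ~780, matching the ~779-day spacing
```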

Baseline Vehicles

The total mission time (for the men) is under 9 months, so no artificial gravity is required. Note that the propellant tanks sent unmanned must keep their liquid hydrogen for well over two years – rather challenging! The vehicle designs are as shown in Figure 3, and are essentially unchanged from my paper. The direct launch costs are pretty much as I estimated in the original paper.

Guessing that total program costs are about 6 times the direct launch costs gives something like $50 billion to mount this mission, given the right team. Those figures are similar to the ones in the original paper. That “right team” issue is also discussed in more detail in that paper. See the 7-25-11 posting for an on-line version of that original paper.

Backup “Slowboat” Trajectories

If the manned vehicle is comprised of the same basic modules, but with solid core nuclear engines instead of the gas core engines, then single stage two-way flight, even on a Hohmann transfer, is not possible. But, a single stage transfer to Mars can be flown, and the empty tanks left there at Mars. In this way, a single stage return to Earth can be made, without relying on propellant already sent unmanned to Mars. This is a safety issue: what if rendezvous should fail in Mars orbit? The crew needs a way to return anyway.

The Hohmann transfer to Mars is identical to that in Figure 1. All four ships travel together as a single fleet: 3 unmanned and the one manned vessel. It is not possible to return by Hohmann transfer until the second opposition approaches, as illustrated in Figure 4. These oppositions are separated by 779 days, which leads to the timelines shown for the return in Figure 4. Thus, total manned mission duration is about 2.66 years, requiring the use of artificial gravity to protect the health of the crew, and considerably more packed supplies for the longer mission. Time at Mars about doubles, allowing for 16 2-week landings instead of 16 1-week landings, as in the baseline.
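As a cross-check on that 2.66-year total, the conjunction-class timeline (Hohmann transfer out, wait at Mars for the return window to open, Hohmann transfer back) can be reproduced with a quick circular-orbit sketch. The radii and periods below are generic textbook values, not numbers pulled from the figures:

```python
# Rough circular-coplanar-orbit estimate of the Hohmann ("slowboat") round trip:
# outbound transfer, wait at Mars for the return window, inbound transfer.
R_EARTH, R_MARS = 1.000, 1.524        # orbit radii, AU
T_EARTH, T_MARS = 365.25, 687.0       # sidereal periods, days

# One-way Hohmann transfer time = half the period of the transfer ellipse
# (Kepler's third law, with semi-major axes in AU and periods in days).
a_transfer = 0.5 * (R_EARTH + R_MARS)
t_transfer = 0.5 * a_transfer ** 1.5 * T_EARTH        # ~259 days

# Planet angular rates, degrees per day.
n_e, n_m = 360.0 / T_EARTH, 360.0 / T_MARS

# At Mars arrival, Earth leads Mars by this angle; for the return leg it must
# instead lag Mars by the same angle, so the ship waits at Mars while the
# phasing comes around at the Earth-minus-Mars relative rate.
lead_at_arrival = (n_e * t_transfer - 180.0) % 360.0
lead_needed = (-lead_at_arrival) % 360.0
wait_at_mars = ((lead_needed - lead_at_arrival) % 360.0) / (n_e - n_m)

total_days = 2.0 * t_transfer + wait_at_mars
print(f"one-way transfer: {t_transfer:.0f} days")            # ~259
print(f"wait at Mars:     {wait_at_mars:.0f} days")          # ~454
print(f"round trip:       {total_days / 365.25:.2f} years")  # ~2.66
```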

Backup “Slowboat” Vehicles

The unmanned vehicles are unchanged from my paper. The manned vehicle is necessarily bigger than the baseline design, driven by the substantially lower performance of solid core nuclear thermal rockets (SC-NTR) vs. gas core nuclear thermal rockets (GC-NTR). The solid core vehicle is substantially longer and about twice as heavy as the baseline gas core vehicle. The “payload” is larger, too, driven by the need to pack about 3 times as much supplies, with some of that bulky, heavy frozen food. These are depicted in Figure 5.

The return vehicles, command module (also the radiation shelter), habitat module, and supply storage modules are the same; I just needed 3 storage modules instead of one. I did take a closer look at the habitat module, since more space is needed for the longer mission to maintain psychological health. The easiest way to do that was to make the habitat an inflatable, along the lines of the Bigelow Aerospace modules already in experimental flight test now. Equipment and floor structure would be stowed along the axis for launch, and folded out into position once the module is inflated, as illustrated in Figure 6.

The same module could be used on the baseline fast trip vehicle; there is no need to build two different designs. It is imperative not to mount equipment on the module walls, as they need to be accessible for very rapid meteoroid puncture repairs. (The same is true of non-inflatable modules.)

I wrestled with several ideas on how to provide adequate radius at acceptable spin rates for artificial gravity, at the one gee level which we already know would be adequate. The breakthrough was to spin the long ship end-over-end, using the long module stack as its own spin diameter. For the trip to Mars, the propellant stack is 34 modules long, each figured as 5.2 m diameter and 13.9 m long, based on the payload shroud dimensions for the SpaceX Falcon-heavy launch vehicle. Spinning end-over-end at only 1.2 rpm provides right about 1 gee at the forward end of the inflatable habitat (at its lower deck as illustrated). The stack is shorter returning to Earth, but should be long enough to provide close to 1 gee at no more than the acceptable limit of 4 rpm.
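The spin-gravity arithmetic behind those rpm figures is just centripetal acceleration, a = ω²r. Here is a minimal sketch of that relation; where the habitat deck actually ends up relative to the spin axis (the stack's center of mass) is set by the vehicle layout in the figures, not by this snippet:

```python
import math

# Spin "gravity" is centripetal acceleration: a = omega^2 * r.
G0 = 9.80665   # m/s^2, one standard gee

def gees(rpm, radius_m):
    """Artificial gravity level, in gees, at the given spin rate and radius."""
    omega = rpm * 2.0 * math.pi / 60.0   # rad/s
    return omega ** 2 * radius_m / G0

def radius_for_one_gee(rpm):
    """Spin radius, in meters, needed for exactly 1 gee at the given spin rate."""
    return G0 / (rpm * 2.0 * math.pi / 60.0) ** 2

print(f"1 gee at 1.2 rpm needs {radius_for_one_gee(1.2):.0f} m of radius")   # ~620 m
print(f"1 gee at 4.0 rpm needs {radius_for_one_gee(4.0):.0f} m of radius")   # ~56 m
print(f"example: {gees(4.0, 56.0):.2f} gee at 4.0 rpm and 56 m")             # ~1.00
```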

About Radiation

The original paper covers solar flare radiation shielding in the command module. This is done by surrounding the flight deck with water and wastewater tanks, plus perhaps a little steel plate. One provides space in there for all 6 crew, and a day or two of supplies to outlast the typical solar storm. This enables critical maneuvers to be flown, no matter what the solar weather, a major flight safety issue.

A little research since then provided credible dose estimates for the cosmic ray background radiation, composed of particles so energetic that ordinary shielding is more-or-less impractical. The dose varies between 22 and 60 REM per year in a steady “drizzle”, depending upon the strength of the solar wind, which tends to deflect some of it. The original radiation dose limit for astronauts was set at 25 REM/year, which was the World War 2-vintage max dose for civilian adults. It has since been revised to 50 REM/year, based on what I can find on the internet. The actual dosage rate only sometimes exceeds the newer limit, and then only by a small amount. Trips to Mars thus appear quite feasible without incurring any immediate health risks from cosmic rays, or even any significant prospect of long-term effects.

The Program As Revised

Changing to SC-NTR backup propulsion puts the artificial gravity and frozen food storage issues into the design mix. Those have to be made to work, and they are things we have never done before. Using the baseline GC-NTR propulsion puts that very propulsion into the mix as something we have never done before, excepting some feasibility experiments. Those are the two development items to be worked in parallel, so that one or the other is ready in time to fly. (This is the same basic parallel-path development idea that was in the original paper, where the baseline was GC-NTR, and the backup was VASIMR and its power plant.) All the other items are simple design / build / checkout efforts based on known technologies, and that includes the SC-NTR. The high-level program plan is just a bunch of parallel paths, as illustrated in Figure 7.

So, we are looking at somewhere in the vicinity of $53 B to $70 B to send 6 men to Mars to make 16 widely-separated landings all over the planet, in the one trip, with maximum safety and self-rescue capability designed-in at every step, and with all-reusable assets left in space to be refueled and reused by subsequent missions. The whole thing could be done for prices like that, in only 5-10 years, given the right kind of contractor teams, and the right kind of an agency to lead them. That is one incredible amount of “bang for the buck”!

As I said in my original paper, right now we do not have that agency, and only a couple of the right kind of contractors, at best. But, if we fix those lacks, we could really do this. The numbers show it is definitely feasible.

The last time we as a nation embarked on a mission to explore another world (the moon), we had nearly two decades of sustained economic boom, from all the jobs created just to get the mission done. That may not be causal, but it is definitely correlated. Why not do it again?


Figure 1 – “Slowboat” Transfer to Mars, Baseline and Backup


Figure 2 – “Fast Trip” Transfer To and From Mars, Baseline


Figure 3 – Baseline Manned and Unmanned Vehicles


Figure 4 – “Slowboat” Transfers to Earth, Backup


Figure 5 – Backup Manned and Unmanned Vehicles


Figure 6 – Inflatable Habitat Module, Baseline and Backup Vehicles


Figure 7 – Program Outline Plan

Monday, September 5, 2011

Surprise, Surprise: Oil Boom in the Williston Basin (“the Bakken”)

Resources on the internet about this formation have been revised recently. There appears to be an oil drilling boom going on in eastern Montana and western North Dakota. They are horizontal-drilling and hydro-fracturing for light crude (meaning a low-viscosity liquid). One of the descriptions says the crude they can recover has just about the same gross physical properties (density, viscosity) as diesel.

That's a surprise to me. Two years ago I researched this formation as a "shale unit, very low porosity and microscopic permeability", and everything I read about the hydrocarbons in it described a consistency more like tar. Hydro-fracturing simply would not work on a near-solid resource like that. It would have to be mined, like coal.

What I read now says the Bakken comprises a dolomite layer around 100-140 feet thick, bounded above and below by shale layers. Typically, the shale is the “original” source for the hydrocarbons. The dolomite is listed as 5% porosity and microscopic permeability (1-10 microdarcies, nearly impermeable). It is in the dolomite layer (not the shale) that they are horizontal-drilling and hydro-fracturing. Estimates of how much of the total resource might be recovered this way vary by over an order of magnitude, depending upon who made the estimate and what agenda they have.

For the Barnett Shale natural gas hydro-fracturing here in Texas, the estimate is that about 3% of the gas down there is actually recoverable. For the liquid in the Bakken dolomite layer, I'd simply guess that factor as 3% or less, which is nearer the 1% end of the estimate range of 1% to 50% that I saw on-line yesterday. Almost-nil permeability just has that effect, hydro-fracturing notwithstanding.

I suspect that there are residual tars left behind in both of the shale units in the Bakken formation, and that the source for the light fractions in the sandwiched dolomite layer is the lower shale member. Somehow, I don't see light fractions migrating downward from the upper shale member, so its lighter fractions are most likely now lost to us.

So, how much recoverable light oil might there be, and how much good might it do, if we can recover around 2% of it?

Oil in the Dolomite Layer:

If you guess that there's something like 500 x 500 statute miles of this formation, averaging 100 ft thick, at 5% porosity, then there might be as many as 6 trillion barrels of light oil down there.

500 miles x 5280 ft per mile = 2.64E6 ft, so 500 mi x 500 mi = 6.97E12 sq ft. Multiply by the 100 ft thickness to obtain 6.97E14 cu.ft of dolomite rock. The hydrocarbon volume equals the pore space volume at 5% of rock volume, assuming the pores are 100% full: that's 3.48E13 cu.ft of hydrocarbons. At 7.48 gal per cu.ft, that's 2.61E14 gal of hydrocarbons; divide by 42 gal per barrel to get 6.2E12 (about 6 trillion) barrels of hydrocarbon volume down there in the pores of the dolomite layer, supposedly all hydro-fracturable, very light crude.

Assume we can recover 2% of it. That's about 1.24E11 barrels of light oil that could be recovered, or about 124 billion barrels in ordinary terms. That's quite significant. I could be off by a factor of 2-3 in rock volume assumptions, more likely toward the smaller than the larger, so these figures are rather optimistic.

At our USA consumption of 7-8 billion barrels per year, this could potentially power us for about 16-17 years. That really is significant, even if it is optimistic by a factor of 2-3. If it is all light oil. If we really can recover 2% of it. If the rock pores are really full. Lots of "ifs".

Let's say this oil boom lasts 20-30 years (typical for a very large field). The average production rate from the mature field (which takes several years to achieve) might be around 4-6 billion barrels a year, again possibly optimistic by a factor of 2-3. That's still a lot, optimistic or otherwise.
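For anyone who wants to vary these assumptions, here is the same back-of-the-envelope arithmetic as a small script. The area, thickness, porosity, recovery factor, field life, and consumption figures are just the guesses from the paragraphs above, not measured data:

```python
# Back-of-the-envelope Bakken oil-in-place and recovery estimate,
# reproducing the arithmetic above.  All inputs are rough guesses.
AREA_SQ_MILES  = 500 * 500     # formation footprint, square statute miles
THICKNESS_FT   = 100           # average dolomite thickness, ft
POROSITY       = 0.05          # pore fraction, assumed 100% oil-filled
RECOVERY       = 0.02          # fraction actually recoverable (a guess)
US_USE_BBL_YR  = 7.5e9         # US consumption, barrels/year (the 7-8 billion above)
FIELD_LIFE_YR  = 25            # middle of the 20-30 year field-life guess

GAL_PER_CUFT   = 7.48
GAL_PER_BBL    = 42.0
SQFT_PER_SQMI  = 5280.0 ** 2

rock_cuft        = AREA_SQ_MILES * SQFT_PER_SQMI * THICKNESS_FT
oil_in_place_bbl = rock_cuft * POROSITY * GAL_PER_CUFT / GAL_PER_BBL
recoverable_bbl  = oil_in_place_bbl * RECOVERY

print(f"oil in place:    {oil_in_place_bbl / 1e12:.1f} trillion bbl")         # ~6.2
print(f"recoverable:     {recoverable_bbl / 1e9:.0f} billion bbl")            # ~124
print(f"years of US use: {recoverable_bbl / US_USE_BBL_YR:.0f}")              # ~16-17
print(f"avg rate over {FIELD_LIFE_YR} yr: "
      f"{recoverable_bbl / FIELD_LIFE_YR / 1e9:.1f} billion bbl/yr")          # ~5
```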

Replacing Foreign Imports:
About 1/3 of our consumption is domestic production, about 1/3 comes from Mexico and Canada, and about 1/3 comes from OPEC (which includes Venezuela, along with that idiot running it; and our “friend” Iran, with that insane group of religious fanatics running it). That's about 2.5 billion barrels per year from each source. We might very well be able to replace much of the OPEC oil with domestic from the Bakken dolomite layer, even as the other sources decline. For a little while.

But, no matter how politically expedient, it is still clearly not at all wise to count on it “ending” our dependence on foreign oil. Although, you can bet more than one GOP/Tea Party candidate will run on "why not save ourselves from oil dependence with the Bakken, if the environmentalists and Democrats would just get out of the way?" They did exactly that in '08: remember “drill, baby, drill?”

Even with the new oil boom that I did not expect to see, it’s still a comic-opera puppet-theater issue intended to distract the public from the real truths that threaten us. It’s still just a fake electioneering issue for a bunch of comic-opera buffoon candidates. Beware! I warned you!

About the Tar Shale Layers:
I saw no thickness figures on the two shale units, in the new data that I found this year. I bet they're quite thick, though. You'd have to deep strip mine it, and what I saw said it averages 2 miles down. Figure shale at 0.5% or less porosity, for maybe another handful of trillions of barrels of potentially-recoverable hydrocarbon. This tar shale stuff would be very hard to extract and process, though, and so it would be a supremely expensive product.

And, we would get it for the environmental cost of a permanent crater some 500x500x2 miles in size, which is bigger by far than the volume of Lake Superior. That shale tar is what I was thinking about when I posted what I did about "the Bakken" last year (the 3-14-10 article). That’s still true, oil boom notwithstanding.

Conclusions:

Yep, we need to go get the hydro-fracturable light oil.

Yep, it’ll surely help with imports.

Nope, it will not “save” us.

There is no permanent answer among depletable (fossil) fuels, and never will be.

Update 6-5-2016:  here is an updated curve of US oil production versus time, obtained from the US EIA website.  I have sketched upon it the Hubbert curve for conventional oil production.  It is clear that the fracking boom is a new effect.  How tall this new peak could go, and how wide it will be over time, are completely unclear as yet.

-------------------------------------------------------------

Update 1-3-15:

The recent explosion of US “fracking” technology (hydraulic fracturing plus horizontal-turn drilling) has modified the picture of oil prices versus recessions.  Unexpectedly,  the US has become a leading producer of crude oils for the world market.  Plus,  there has been an associated massive production increase and price drop in natural gas.

OPEC has chosen to take the income “hit” and not cut back their production in response.  Their reasoning is twofold:  (1) fear of loss of market share,  and (2) hope that low oil prices will curtail US “fracking” recoveries.  We will see how that plays out.

Oil prices are now such (at around $55/barrel) that US regular gasoline prices are nearing $2.00/gal for the first time in a very long time.  This is very close to the price one would expect for a truly competitive commodity,  based on 1958 gasoline prices in the US,  and the inflation factor since then. 

It is no coincidence that the exceedingly-weak US “Great Recession” recovery has suddenly picked up steam.  The timing of the acceleration in our economic recovery versus the precipitous drop in oil prices is quite damning.  There can be no doubt that higher-than-competitive-commodity oil prices damage economies.  Oil prices are a superposition of the competitive commodity price,  overlain by an erratic increase from speculation,  and further overlain quite often by punitive price levels when OPEC is politically unhappy with the west.  That’s been the history. 

This economic improvement we are experiencing will persist as long as oil,  gas,  and fuel prices remain low.  (Government policies have almost nothing to do with this,  from either party.)  How long that improvement continues depends in part upon US “fracking” and in part upon OPEC.  Continued US “fracking” in the short term may depend upon adequate prices.  In the long term,  we need some solutions to some rather intractable problems to continue our big-time “fracking” activities. 

The long-term problems with “fracking” have to do with (1) contamination of groundwater with combustible natural gas,  (2) induced earthquake activity,  (3) lack of suitable freshwater supply to support the demand for “fracking”,  and (4) safety problems with the transport of the volatile crude that “fracking” inherently produces. 

Groundwater Contamination

Groundwater contamination is geology-dependent.  In Texas,  the rock layers lie relatively flat,  and are relatively undistorted and unfractured.  This is because the rocks are largely old sea bottom that was never subjected to mountain-building.  We Texans haven’t seen any significant contamination of ground water by methane freed from shale.  The exceptions trace to improperly-built wells whose casings leak.

This isn’t true in the shales being tapped in the Appalachians,  or in the shales being tapped in the eastern Rockies.  There the freed gas has multiple paths to reach the surface besides the well,  no matter how well-built it might have been.  Those paths are the vast multitudes of fractures in the highly-contorted rocks that were subjected to mountain-building in eons past.  That mountain-building may have ceased long ago,  but those cracks last forever.

This is why there are persistent reports of kitchen water taps bursting into flames or exploding,  from those very same regions of the country.   It’s very unwise to “frack” for gas in that kind of geology.

Induced Earthquake Activity

This does not seem to trace to the original “fracking” activity.  Instead it traces rather reliably to massive injections of “fracking” wastewater down disposal wells.  Wherever the injection quantities are large in a given well,  the frequent earthquakes cluster in that same region.  Most are pretty weak,  under Richter magnitude 3,  though some have approached magnitude 4.

There is nothing in our experience to suggest that magnitude 4 is the maximum we will see.  No one can rule out large quakes.   The risk is with us as long as there are massive amounts of “fracking” wastewater to dispose of,  in these wells.  As long as we never re-use “frack” water,  we will have this massive disposal problem,  and it will induce earthquakes. 

Lack of Freshwater Supply to Support “Fracking”

It takes immense amounts of fresh water to “frack” a single well.  None of this is ever re-used,  nor is it technologically possible to decontaminate water used in that way.  The additives vary from company to company,  but all use either sand or glass beads,  and usually a little diesel fuel.  Used “frack” water comes back at nearly 10 times the salinity of sea water,  and is contaminated by heavy metals and radioactive minerals,  in addition to the additives.  Only the sand or glass beads get left behind:  they hold the newly-fractured cracks in the rocks open,  so that natural gas and volatile crudes can percolate out.

The problem is lack of enough freshwater supplies.  In most areas of interest,  there is not enough fresh water available to support both people and “fracking”,  especially with the drought in recent years.  This assessment completely excludes the demand increases due to population growth.  That’s even worse.

This problem will persist as long as fresh water is used for “fracking”,  and will be much,  much worse as long as “frack” water is not reused.  The solution is to start with sea water,  not fresh water,  and then to re-use it.  This will require some R&D to develop a new additive package that works in salty water to carry sand or glass beads,  even in brines 10 times more salty than sea water. 

Nobody wants to pay for that R&D. 

Transport Safety with Volatile “Frack” Crudes

What “fracking” frees best from shales is natural gas,  which is inherently very mobile.  Some shales (by no means all of them) contain condensed-phase hydrocarbons volatile enough to percolate out after hydraulic fracturing,  albeit more slowly than natural gas.  Typically,  these resemble a light,  runny winter diesel fuel,  or even a kerosene,  in physical properties.  More commonly,  shale contains very immobile condensed hydrocarbons resembling tar.  These cannot be recovered by “fracking” at all. 

The shales in south Texas,  and some of the shales and adjacent dolomites in the Wyoming region actually do yield light,  volatile crudes.  The problem is what to transport them in.  There are not enough pipelines to do that job.  Pipelines are safer than rail transport,  all the spills and fires notwithstanding. 

The problem is that we are transporting these relatively-volatile materials in rail tank cars intended for normal (heavy) crude oils,  specifically DOT 111 tank cars.  Normal crudes are relatively non-volatile and rather hard to ignite in accidents.  DOT 111 cars puncture or leak frequently in derailment accidents,  but this isn’t that serious a problem as long as the contents are non-volatile.  These shale-“frack” light crude materials resemble nothing so much as No. 1 winter diesel,  which is illegal to ship in DOT 111 cars,  precisely because it is too volatile.

The problem is that no one wants to pay for expanding the fleet of tougher-rated tank cars.  So,  many outfits routinely mis-classify “frack” light crudes as non-volatile crudes,  in order to “legally” use the abundant but inadequate DOT 111 cars.  We’ve already seen the result of this kind of bottom-line-only thinking,  in a series of rather serious rail fire-and-explosion disasters,  the most deadly (so far) in Lac Megantic,  Quebec.

Volatile shale-“fracked” crudes simply should not be shipped in vulnerable DOT 111 cars,  period.  It is demonstrably too dangerous. 

Conclusions

“Fracking” shales for natural gas and light crudes has had a very beneficial effect on the US economy and its export-import picture.  We should continue this activity as a reliable bridge to things in the near future that are even better. 


But,  we must address the four problem areas I just outlined.  And I also just told you what the solutions are.  The problem is,  as always,  who pays.  What is the value of a human life?  What is the value of a livable environment?  It’s not an either-or decision;  it’s a matter of striking the appropriate balance!

Friday, September 2, 2011

Balanced News

TRY CHECKING YOUR PERSONAL IDEOLOGICAL BELIEFS AGAINST THE DATA

Left-wing, right-wing, doesn’t matter. Here are the actual data:

Spending

So who increased spending the most?

(No doubt it’s too much.)

(War spending does not explain all of this.)

Deficits

So which is the better trend?


(And this is before any of the recently-negotiated cuts “kick in”.)

The Stimulus and Jobs

So did the stimulus help job growth or not? (And remember, each man spent about half of the total stimulus)


Did we stop too soon?


Could we have better-targeted the money to steepen the job growth trend?

A Very Good Question:

If your beliefs about what has been happening do not square with the actual data, then what are you going to do?

Change your thinking and use the data? Or keep your beliefs?