Friday, December 30, 2011

The Old Train Still Runs!

For the first time in a few years, I set up my old electric train in the shop. I got this train as a very young boy, Christmas of 1952. My dad and his next door neighbor built the train board layout in 1954. Some of the items, including two more freight cars, were added between then and about 1960.

The engine and its tender model a 4-6-4 coal-fired steam locomotive of the type used to pull passenger trains on the old New York Central railroad. Lionel called this particular engine the 2046, and used it in more than one of their train sets. My original set included a silver tank car, a silver box car, a black gondola car, and a caboose. The yellow barrel car and the red explosives box car were added later.

The first image shows a good close-up of the engine and tender:



The second image shows the whole train board layout. The inner (third) loop of track is something I added about a decade ago. The original setup had two concentric loops connected by 4 switches, all 1954-ish vintage track and switches.

Along the way, the two extra freight cars, the crossing equipment, the water tower, and the beacon got added, as Christmas presents, if I remember correctly. My paternal grandmother gave me the gantry crane, which still works. It rotates, moves up and down, and the electromagnet still picks up iron things.

I put some miscellaneous small toy cars into this set-up. I also built the loading ramps out of scrap wood from an old VW bus wooden headliner. My painted-paper landscape simulation from over 20 years ago has deteriorated past repair. I need to replace it, and add some more hand-made buildings. I'll do it, once I retire.



The third image is my finger pointing at the entry in an original 1953 Lionel catalog for the exact set that is my original train: 1505WS, which was $49.95 in 1952-dollars, and still the same price in 1953. My good friend Harry Petersen in Minnesota found that catalog and sent it to me. He, too, is a model railroad enthusiast.




Below I have embedded a video clip, taken with my wife's camera, of this train running on that train board. It no longer smokes, but the whistle still blows. This thing is 59 years old this year, and it still runs, in spite of all the mistreatment I gave it as a child. Lionel certainly made a good product.

This posting is just for fun. Hope you enjoyed looking at it.

GW

Wednesday, December 28, 2011

Latest Production Version of the Kactus Kicker

Update 7-30-15:  The new website is fully operational.  It has all the information,  photos,  and videos anyone could ever need.  It is a turnkey site for selecting,  customizing,  and purchasing a production tool.  Shipping is available,  so sales of plans have been discontinued.  Some additional parts and labor have been farmed out to appropriate vendors,  to adjust to higher production rates,  so prices posted previously are now obsolete.  Go to http://www.killyourcactusnow.com

For those of you wanting to know about my cactus tools, here are some pictures of the latest production model with the tougher snout and bigger barge front. These are from my wife's computer, file number 2010-04-25. I believe they were taken after construction of serial numbers 047 and 048.

This is a "machine" with no moving parts, towed on a simple chain bridle behind any tractor with a drawbar. It kills prickly pear cactus "in situ", without pick-up and disposal of the debris, and without chemicals. It's just driving-a-tractor work. You do it several times, for a full eradication. See http://www.txideafarm.com and go to the cactus eradication sub-page for a good description of how it really works.


photo 040 How to Hitch-Up

It really is just that simple. Flip the loop in the tow bridle over a trailer ball on your towbar. If you do this with a 3-point rig, be sure it is braced for sideways loads over 1000 pounds per tool (you can tow more than one at a time). You will incur forces like that when you turn.


photo 041 What the Bridle Looks Like All Hitched-Up and Ready to Tow

Be sure the bridle is not in the lift configuration, pinned up with a bolt over the tool's center of gravity. It needs to make a big Vee; you tow from the corners of the deck. The snout just stabilizes the tool, like a gigantic, super-tough sled runner out front. The chain through the snout braces just limits up/down and side-to-side travel on really rough ground. It should be slack otherwise.


photo 042 How-To Pry-Up the Tool to Get at What's Underneath, or Store It

Back up the tractor and slack the chain, then un-hitch it. You will need about a 6-foot prybar and a 2-foot piece of small angle iron. Use the prybar as the photo shows to get leverage to lift the tool up onto its rear edge, then prop it in place with the angle iron under one of the skids. The "tongue load" on the snout is just too high to do this without a good prybar.


photo 045 Proper Stowage Without Killing Grass

Once propped up, you can remove any debris accumulated under the tool that makes it ride off the ground. Old barbed wire and certain kinds of vine-like weeds are prone to do this. Just kick or hoe them out from underneath, and you can lower the tool with the prybar, re-hitch, and resume work. This is also a very good way to store the tool in the pasture between treatments, since it cannot kill a whole big patch of grass while tipped up on edge like this. This is how I store mine.


photo 037 How to Pick-Up the Tool with Its Own Bridle

If you pull the tow bridle aft, you can pin it together with the extra 2"L 3/8 UNC bolt, nuts, and washers that I provide with every tool. If you have serial number 047 or 048, you might have to re-rig the snout travel-limiter chain slightly to do this, but I generally already have it rigged for lifting easily, right from the shop (from serial number 049-on). The center of gravity is just between the rear of the snout tube and the front edge of the big ballast bar flat. Pin the bridle together there, and pick it up at the pin point as shown.

The snout travel-limiter picks up the forward of three lift points, the chain towers being the other two. Be careful: this thing weighs 600-700 pounds. Most tractors now have hydraulic buckets, though. Just use a tow chain with hooks, pick the tool up with the bucket, and put it right where you want it (pick-up bed or flat trailer).

GW

Other Related Articles on this Site (date highlighted on this one)


Date........title/content
2-9-17......Time Lapse Proof It Works
............watch cactus being crushed and composted
7-30-15.....New Cactus Tool Website
............turnkey site for info, photos, videos, purchases
1-8-15......Kactus Kicker Development
............production prototype & 1st production article
1-8-14......Kactus Kicker: Recent Progress
............testing a revised wheeled design (experimental)
10-12-13....Construction of the Tool
............building a “Kactus Kicker” (plain tool)
5-19-13.....Loading Steel Safely
............transport and storage of materials
12-19-12....Using the Cactus Tool or Tools
............how the tool is employed (applies to any model)
11-1-12.....About the Kactus Kicker
............painting and rigging finished tools (plain tool)
12-28-11....Latest Production Version
............new bigger snout and barge front (plain tool)

Wednesday, December 21, 2011

FTL Neutrinos Update

I posted an update to the faster-than-light neutrinos article. Scroll down to it dated 10-9-11 and titled “Faster-Than-Light Neutrinos? Maybe! Their Meaning? Arguable!”

GW

Wednesday, December 14, 2011

Reusability in Launch Rockets

A group of folks I correspond with (at the forums on NewMars.com) has been discussing reusable launch rocket possibilities. One of the names they use is “big dumb booster”, or BDB. My own opinion is that reusability is incompatible with the low inert mass fractions used in the stages of typical launch rockets today: too light is simply too fragile. I do know from their website that Spacex is interested in reusing the first stage of their Falcon-9 booster, but that their attempts so far have been unsuccessful. So, my analysis results here should be of interest, both to my correspondents and to Spacex.

Spacex’s Falcon-9 is a two-stage rocket with kerosene-oxygen engines in both stages. It features an interstage ring and a payload shroud (on the satellite version) that I assume both get jettisoned at staging. The same engines are used in both stages, except that the one in the second stage has a longer bell than the nine in the first stage, and the first stage engines see atmospheric backpressure.

Baseline Falcon-9 Performance Estimate

I looked up most of the basic engine and vehicle data from Spacex’s website, for Falcon-9 as a baseline case, and reverse-engineered the rest. Here it is, summarized, in Figure 1:


Figure 1 – Baseline Falcon-9 Data

These performance data were computed with the simple rocket equation, and some experiential “jigger factors” that knock down ideal velocity increments to more realistic values. The other choice for analysis is a real trajectory computer code, either two-dimensional or three-dimensional, which is a complicated thing to set up and to use. I used the simple analysis approach to set up actual computer trajectory analyses, for the Scout launch vehicle at LTV Aerospace, about 4 decades ago.

Here, I used a “jigger factor” of 1.10 to knock down the first stage ideal velocity increment, because that stage sees air drag, and flies mostly vertically, so that gravity drag is significant. For the second stage, I used 1.05, reflecting flight in vacuum, mostly but not entirely horizontal. The final summed velocity increment I estimate for Falcon-9 is about 26,900 feet/second, or 8.19 km/second, which is remarkably close to the orbital velocity at low altitudes (about 7.9 km/second). It’s close enough that any simplified design trades made under these assumptions are realistic enough to be useful.
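This arithmetic is easy to reproduce. Below is a minimal Python sketch (my own, not part of the original analysis) that applies the rocket equation with the stage figures quoted in this article: Isp 289.5 sec and mass ratio 4 for the first stage, Isp 304 sec and mass ratio 5 for the second, with jigger factors 1.10 and 1.05. It lands within about 1% of the quoted 26,900 feet/second; the small difference is round-off in intermediate figures.

```python
import math

G0_FT = 32.174  # standard gravity, ft/s^2

def stage_dv(isp_sec, mass_ratio, jigger):
    """Rocket-equation ideal velocity increment, knocked down by a jigger factor."""
    ve = isp_sec * G0_FT                  # effective exhaust velocity, ft/s
    dv_ideal = ve * math.log(mass_ratio)  # ideal increment
    return dv_ideal / jigger              # more realistic estimate

dv1 = stage_dv(289.5, 4.0, 1.10)  # stage 1: air drag plus mostly-vertical gravity drag
dv2 = stage_dv(304.0, 5.0, 1.05)  # stage 2: vacuum, mostly horizontal
total = dv1 + dv2
print(f"stage 1: {dv1:,.0f} ft/s, stage 2: {dv2:,.0f} ft/s, total: {total:,.0f} ft/s")
```

The total comes out near 26,700 ft/s, or about 8.15 km/s, close enough to low orbital speed for simplified design trades.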

I looked at two potential solutions to the trade-off between extra structural weight for reusability, and reduced payload fraction that increases the price per unit payload delivered to orbit. One was to retain the basic two-stage design, and increase the size of the first stage to compensate for added inert fraction, at constant mass ratio. The other approach was to replace the two-stage design with an equivalent three stage design, keep the top two stages as throwaways, and increase the first stage size to compensate for increased first stage inert weight fractions. Both were done at constant delivered payload weight.

Two Stage Analysis with Heavier Structural Inert Fractions in the 1st Stage

The payload is exactly the same as baseline. I assumed the payload shroud weight to be proportional to the maximum payload weight it contains, at 15.18%. There are no changes to the second stage weight statement or performance values. The interstage ring weight I assumed proportional, at 0.815%, to the weight it carries, in this case the second stage ignition weight. It is the first stage weight statement that varies, but at constant mass ratio, so the propellant weight fraction is the same as baseline in all cases. The equation relating mass ratio MR and propellant weight fraction fprop is:

fprop = (MR – 1)/MR

Now, 1 – fprop is the total of the inert mass fraction and the stage payload mass fraction, where the first stage payload comprises the ready-to-ignite second stage, the interstage ring, and the payload shroud. I looked at baseline, twice-baseline, and three-times-baseline values of the first stage inert weight fraction, scaling up the first stage ignition weight to match. The resulting weight statements are given in Figure 2. Bear in mind that the delivered stage performance data are identical to baseline, since the mass fractions are identical to baseline.
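That scaling can be sketched in a few lines of Python. The payload (23,050 lb), the 15.18% shroud, the 0.815% ring, and mass ratio 4 are the figures used in this article; the 150,000 lb stage 2 ignition weight is my own round-number assumption, chosen only so that the baseline case lands near the quoted 3.1% payload fraction. It is not the article's actual Figure 2 weight statement.

```python
def stage1_ignition_weight(w_stage_payload, mass_ratio, f_inert):
    """Stage ignition weight at constant mass ratio and a given inert fraction."""
    f_prop = (mass_ratio - 1.0) / mass_ratio  # propellant weight fraction
    f_pay = 1.0 - f_prop - f_inert            # stage payload weight fraction
    return w_stage_payload / f_pay

# Illustrative numbers; W_STAGE2 is an assumption, not the article's value
W_PAYLOAD = 23_050.0            # delivered payload, lb
W_SHROUD = 0.1518 * W_PAYLOAD   # shroud at 15.18% of payload
W_STAGE2 = 150_000.0            # ASSUMED stage 2 ignition weight, lb
W_RING = 0.00815 * W_STAGE2     # interstage ring at 0.815% of the weight it carries

w1_pay = W_STAGE2 + W_RING + W_SHROUD
for f_inert in (0.04, 0.08, 0.12):  # baseline, twice, three times baseline inerts
    w_launch = stage1_ignition_weight(w1_pay, 4.0, f_inert)
    print(f"inerts {f_inert:.0%}: launch {w_launch:,.0f} lb, "
          f"payload fraction {W_PAYLOAD / w_launch:.2%}")
```

The qualitative trend is the point: at constant mass ratio, every added point of first stage inert fraction comes straight out of the stage payload fraction, so the launch weight grows and the overall payload fraction falls.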

Three Stage Analysis with Heavier Structural Inert Fractions in the 1st Stage (Only)

I had to allocate velocity increments among the three stages in some logical fashion. I chose to make the second and third stage mass ratios 5 like the Falcon-9 second stage, and my first stage mass ratio 4, like the Falcon-9 first stage. I used “jigger factors” of 1.10 and 1.05 on my first and third stages, similar to the Falcon-9 first and second stages. I used an intermediate factor of 1.07 for my second stage. My first stage Isp was 289.5 sec, like the Falcon-9 first stage. My second and third stages used Isp = 304 sec, like the Falcon-9 second stage. The corresponding exhaust velocities are 9314.4 and 9780.9 ft/sec.

I computed the sum of the estimated actual velocity increments to be a factor of 1.5406 too high, so I knocked down each stage’s velocity increment by this factor, and recomputed the mass ratios as 2.45935 for my first stage, and 2.84251 for my second and third stages. I ran the design study to the same payload as Falcon-9, with the same shroud weight, and two interstage rings at 0.815% of the stage weights above each ring. I assumed that interstage ring 1-2 and the payload shroud drop off with stage 1, and that interstage ring 2-3 drops off with stage 2.
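The rescaling step checks out numerically. This sketch (mine, using only numbers quoted in this article) sums the jigger-factored increments at mass ratios 4/5/5, computes the overshoot factor against the roughly 26,900 ft/s baseline requirement, and backs out the reduced mass ratios:

```python
import math

G0_FT = 32.174           # standard gravity, ft/s^2
DV_REQUIRED = 26_900.0   # baseline Falcon-9 summed increment estimate, ft/s

stages = [                 # (Isp sec, initial mass ratio, jigger factor)
    (289.5, 4.0, 1.10),    # stage 1
    (304.0, 5.0, 1.07),    # stage 2
    (304.0, 5.0, 1.05),    # stage 3
]

# Estimated actual (jigger-factored) increments at the initial mass ratios
dvs = [isp * G0_FT * math.log(mr) / jf for isp, mr, jf in stages]
factor = sum(dvs) / DV_REQUIRED  # how far the sum overshoots the requirement

# Knock each increment down by that factor, then back out the reduced mass ratios
new_mrs = [math.exp(dv / factor * jf / (isp * G0_FT))
           for (isp, _, jf), dv in zip(stages, dvs)]

print(f"overshoot factor {factor:.4f}, new mass ratios "
      + ", ".join(f"{mr:.5f}" for mr in new_mrs))
```

This reproduces the 1.5406 factor and the 2.45935 / 2.84251 mass ratios to within round-off. Note that stages 2 and 3 end up with the same new mass ratio, because the jigger factor cancels out of the knock-down-and-back-out arithmetic.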

The payload is exactly the same as baseline at 23,050 lb. I assumed the payload shroud weight to be proportional to the maximum payload weight it contains, at 15.18%, for 3500 lb. There are no changes to the second or third stage weight statements or performance values as I changed first stages. The interstage ring 2-3 weight I assumed proportional, at 0.815%, to the weight it carries (in this case the third stage ignition weight), for 606 lb. Interstage ring 1-2 is 0.815% of the stage 2 ignition weight, for 1999 lb. It is the first stage weight statement that varies, but at constant mass ratio, so the propellant weight fraction is the same as baseline in all cases, and so is the performance.

For the “baseline” three-stage inert fractions, I assumed 5% for my first stage, very similar to the multi-engine first stage of Falcon-9. I used the same 4.2% for my third stage as for the single-engine second stage of Falcon-9. My second stage has an intermediate inert fraction of 4.6%, chosen to reflect only a few engines in the second stage. The weight statements for the trade study are given in Figure 3. Bear in mind that all three versions of the three-stage vehicle have exactly the same estimated velocity performance, also shown in the figure.


Figure 2 – Weight Statements for the Two-Stage Reusability Trade Study


Figure 3 – Weight Statements for the Three-Stage Reusability Trade Study

Note that in both Figure 2 and Figure 3, I have included the overall payload weight fraction, computed as payload weight delivered to orbit Wpay, divided by the stage 1 ignition weight, which is the launch weight WL. (In the context of this analysis, the term “weight” really refers to mass.) In both trade studies, payload fraction decreases as stage 1 inert weight increases, exactly as expected. I was surprised and pleased to see that the baseline throwaway 3-stage option had a slightly higher payload fraction than the corresponding baseline throwaway 2-stage option. This and the slopes of the trends did seriously impact the final conclusions.

Trajectory Comparison

The final trajectories are compared in Figure 4. Both the 2-stage and 3-stage vehicles follow similar paths to the same orbital insertion conditions, at the same altitude (in the vicinity of 200-300 miles, or 300-500 km, up). Only potential re-use of the first stage was considered, for either configuration. A first stage fallback is indicated for each. Reentry velocity is simply assumed the same as the first stage burnout velocity. They would be comparable, in any event. Noting that reentry gets really challenging much above 10,000 feet/second (near Mach 10), I see little point to trying to make the second stage of the 3-stage vehicle reusable. It simply comes back too fast to be readily survivable.


Figure 4 – Comparison of Trajectories for 2-Stage and 3-Stage Vehicles


Figure 5 – Comparison of 2-Stage and 3-Stage Results


Payload Fraction Results Comparison

The payload fraction vs first stage inert fraction data are plotted in Figure 5 for both the 2-stage and 3-stage vehicles. The trends are reasonably linear-looking over the ranges computed, but at different slopes. As expected, the 3-stage vehicle design is less sensitive to first stage inert fraction than the 2-stage design (3-stage having the shallower slope). I did not really expect the baseline 3-stage vehicle to have a slightly-higher payload fraction at the baseline throwaway inert value, but it did.

Between the higher baseline throwaway payload fraction and the shallower slope with first stage inerts, it appears that at 10% inerts in the first stage, the 3-stage vehicle has a payload fraction near 2.8%, while the 2-stage vehicle is down near 2.2% at the same 10% inerts. A 10% inert fraction in the first stage is of enormous interest, because that is close to the inert fraction of the Space Shuttle solid booster motors, which actually were reusable most (but not all) of the time. That’s about the level where your tankage becomes strong enough to be pressure vessel-capable, as well as survivable for ocean impact on parachutes. Tankage that is pressure vessel-capable might as well be used as a pressure-fed system, eliminating the weight, cost, and reliability risks of turbopump machinery.
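Reading the two trend lines this way amounts to simple linear interpolation. The sketch below (mine) uses only the endpoint values quoted in this article: 3.3% at 5% inerts and about 2.8% at 10% for the 3-stage vehicle, and 3.1% at 4% inerts and about 2.2% at 10% for the 2-stage vehicle.

```python
def payload_fraction(f_inert, p0, p1):
    """Linear interpolation of payload fraction vs stage 1 inert fraction,
    between two (inert fraction, payload fraction) points read off a trend line."""
    (x0, y0), (x1, y1) = p0, p1
    return y0 + (y1 - y0) * (f_inert - x0) / (x1 - x0)

# Endpoint values quoted in this article (the Figure 5 trends):
three_stage = ((0.05, 0.033), (0.10, 0.028))
two_stage = ((0.04, 0.031), (0.10, 0.022))

for f in (0.06, 0.08, 0.10):
    p3 = payload_fraction(f, *three_stage)
    p2 = payload_fraction(f, *two_stage)
    print(f"inerts {f:.0%}: 3-stage {p3:.2%} vs 2-stage {p2:.2%}")
```

The 3-stage slope works out to about 0.1 point of payload fraction lost per point of first stage inerts, versus about 0.15 for the 2-stage, which is the "less sensitive" result stated above.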

Conclusions

It is clear the 3-stage option is more tolerant of higher inert weights in the first stage. Combine this with a lower first stage fall-back speed, and reusability seems more certain at 10% inerts, and with a higher payload fraction (nearly 3% 3-stage vs only a bit over 2% 2-stage).

Accordingly, 3 stages is a better option than 2 stages, if the first stage is to be reused. The drop from 2-stage non-reusable payload fraction is actually quite small (3.1% to about 2.8%). This is because the all-throwaway 3-stage vehicle actually has a better baseline throwaway payload fraction than the 2-stage (3.3% at 5% inerts, vs 3.1% at 4% inerts).

This does raise the question of whether 4 stages might allow first stage reusability at even better payload fraction, or else allow the same payload fraction with both first and second-stage reusability. I leave that for others to investigate.

The main lesson here is that you really do have to do something different in order to get a different result. Reusability will require a greater inert weight fraction to cover recovery gear, and to confer the strength to survive better. Practical reusability simply cannot happen in the 4-8% inert range.

This study points toward 10% inerts in the first stage, at the very least. The more first stage inerts you have to “cover”, the more stages you need to use, to be tolerant of lowered mass ratio in each stage.

But at least we know the job really can be done, and here is one well-proven way to do it (more stages).

Sunday, November 13, 2011

Gas Fracking: Good or Bad? Depends!

Recent news reports published by the internet news services tell how the EPA is seriously investigating complaints related to natural gas fracking (hydro-fracturing) near Pavillion, Wyoming. Those complaints include contamination of water supplies by methane and by toxic fracking chemicals.

I looked up the geography and geology of Pavillion: it lies in the western half of Wyoming, a region dominated by the Rocky Mountains. There will be sediments in the basins, but the fundamental underlying geology is contorted and fractured mountain zone rock.

As a result, I am entirely unsurprised that both natural gas and fracking chemicals are finding their ways into the groundwater. I am surprised that how this can be is still a matter of legal debate.

I am no geologist, but even I can understand what is happening, and how, and I published it as a guest column in the Waco Tribune Herald last May. Here is the original submitted text for that column, with some emphasis added now:

“Coming Even Cleaner on Fracking” (submitted 5-26-11/published 5-28-11)

The “Trib’s” editors recently ran a very nice editorial on the controversy surrounding the process of “fracking” (short for “hydraulic fracturing”) for natural gas in shale. This article neatly laid out the two sides of the public debate, which is centering mainly on whether or not there are undesirable side effects.

I find it very interesting that the studies are "still inconclusive", seeing as how the field data is very indicative of what actually happens. It's not a simple either-or situation, it’s geology-dependent, and this is completely left out of the current public debates.

Here in Texas and nearby states, the rock layers are old seabed sediments, more or less level, and are relatively intact. Few paths exist across these layers for oil and gas to migrate upward. That is why fracking has few side effects in this part of the country. The most notable exception has been very minor earthquake tremors induced from the disposal of used fracking fluids by deep well injection.

In Pennsylvania and the other states in the Appalachian mountain zone, there have been widespread complaints about natural gas getting into groundwater, leading to fire and explosion incidents when turning on the water tap. These are real incidents, and are easy to understand if one simply looks at the geology below the surface.

In a mountain zone, the rock layers are highly contorted, fractured, and thoroughly broken-up. There are many paths for oil, and especially the far-more-mobile gas, to migrate to the surface. It is entirely unsurprising, and in fact quite predictable, that this very mobile gas, once released from a deep shale, should migrate upward and contaminate near-surface water supplies. It does so by dissolving into the water under earth pressures, similar to a carbonated beverage.

The solution to the exploding kitchen faucet problem is simple: fracking for gas is OK in continuous-layered sea bottom sediment zones, but not OK in highly-fractured mountainous zones. So, we don't frack there, period. Those gas deposits await a still-undiscovered recovery technology with fewer side effects, more suited to that kind of geology.

This does mean that the agencies regulating gas leases actually do have to regulate, and sometimes to deny permits, unaccustomed as they apparently are to such activities.

The processes of fracking and fracking-fluid disposal were specifically exempted from EPA regulation under the Clean Water Act. This happened in that secretive energy company meeting at the White House during the last administration. It is known as the Halliburton exemption.

However, the injection of diesel fuel into the earth is actually still regulated. While fracking fluid is mostly water plus a little sand or glass beads, the most common liquid trace additive in all these "secret" recipes is diesel fuel. If those recipes were widely revealed, the use and disposal of these fluids would come under direct EPA regulation again, meaning only that they take a little better care doing what they already do.

In that event, fracking for gas would still be quite profitable, just not quite as much as it is without any regulation at all. But fewer folks suffer the side effects, and that’s a good thing.

Update 1-3-15:

The recent explosion of US “fracking” technology (hydraulic fracturing plus horizontal-turn drilling) has modified the picture of oil prices versus recessions.  Unexpectedly,  the US has become a leading producer of crude oils for the world market.  Plus,  there has been an associated massive production increase and price drop in natural gas.

OPEC has chosen to take the income “hit” and not cut back their production in response.  Their reasoning is twofold:  (1) fear of loss of market share,  and (2) hope that low oil prices will curtail US “fracking” recoveries.  We will see how that plays-out.

Oil prices are now such (at around $55/barrel) that US regular gasoline prices are nearing $2.00/gal for the first time in a very long time.  This is very close to the price one would expect for a truly competitive commodity,  based on 1958 gasoline prices in the US,  and the inflation factor since then. 

It is no coincidence that the exceedingly-weak US “Great Recession” recovery has suddenly picked up steam.  The timing of the acceleration in our economic recovery versus the precipitous drop in oil prices is quite damning.  There can be no doubt that higher-than-competitive-commodity oil prices damage economies.  Oil prices are a superposition of the competitive commodity price,  overlain by an erratic increase from speculation,  and further overlain quite often by punitive price levels when OPEC is politically unhappy with the west.  That’s been the history. 

This economic improvement we are experiencing will persist as long as oil,  gas,  and fuel prices remain low.  (Government policies have almost nothing to do with this,  from either party.)  How long that improvement continues depends in part upon US “fracking” and in part upon OPEC.  Continued US “fracking” in the short term may depend upon adequate prices.  In the long term,  we need some solutions to some rather intractable problems to continue our big-time “fracking” activities. 

The long-term problems with “fracking” have to do with (1) contamination of groundwater with combustible natural gas,  (2) induced earthquake activity,  (3) lack of suitable freshwater supply to support the demand for “fracking”,  and (4) safety problems with the transport of the volatile crude that “fracking” inherently produces. 

Groundwater Contamination

Groundwater contamination is geology-dependent.  In Texas,  the rock layers lie relatively flat,  and are relatively undistorted and unfractured.  This is because the rocks are largely old sea bottom that was never subjected to mountain-building.  We Texans haven’t seen any significant contamination of ground water by methane freed from shale.  The exceptions trace to improperly-built wells whose casings leak.

This isn’t true in the shales being tapped in the Appalachians, or in the shales being tapped in the eastern Rockies. There the freed gas has multiple paths to reach the surface besides the well, no matter how well-built it might have been. Those paths are the vast multitudes of fractures in the highly-contorted rocks that were subjected to mountain-building in eons past. That mountain-building may have ceased long ago, but those cracks last forever.

This is why there are persistent reports of kitchen water taps bursting into flames or exploding,  from those very same regions of the country.   It’s very unwise to “frack” for gas in that kind of geology.

Induced Earthquake Activity

This does not seem to trace to the original “fracking” activity. Instead it traces rather reliably to massive injections of “fracking” wastewater down disposal wells. Wherever the injection quantities are large in a given well, the frequent earthquakes cluster in that same region. Most are pretty weak, under Richter magnitude 3, but some have approached magnitude 4.

There is nothing in our experience to suggest that magnitude 4 is the maximum we will see.  No one can rule out large quakes.   The risk is with us as long as there are massive amounts of “fracking” wastewater to dispose of,  in these wells.  As long as we never re-use “frack” water,  we will have this massive disposal problem,  and it will induce earthquakes. 

Lack of Freshwater Supply to Support “Fracking”

It takes immense amounts of fresh water to “frack” a single well. None of this is ever re-used, nor is it technologically possible to decontaminate water used in that way. The additives vary from company to company, but all use either sand or glass beads, and usually a little diesel fuel. Used “frack” water comes back at near 10 times the salinity of sea water, and is contaminated by heavy metals and by radioactive minerals, in addition to the additives. Only the sand or glass beads get left behind: they hold the newly-fractured cracks in the rocks open, so that natural gas and volatile crudes can percolate out.

The problem is lack of enough freshwater supplies.  In most areas of interest,  there is not enough fresh water available to support both people and “fracking”,  especially with the drought in recent years.  This assessment completely excludes the demand increases due to population growth.  That’s even worse.

This problem will persist as long as fresh water is used for “fracking”,  and will be much,  much worse as long as “frack” water is not reused.  The solution is to start with sea water,  not fresh water,  and then to re-use it.  This will require some R&D to develop a new additive package that works in salty water to carry sand or glass beads,  even in brines 10 times more salty than sea water. 

Nobody wants to pay for that R&D. 

Transport Safety with Volatile “Frack” Crudes

What “fracking” frees best from shales is natural gas,  which is inherently very mobile.  Some shales (by no means all of them) contain condensed-phase hydrocarbons volatile enough to percolate out after hydraulic fracturing,  albeit more slowly than natural gas.  Typically,  these resemble a light,  runny winter diesel fuel,  or even a kerosene,  in physical properties.  More commonly,  shale contains very immobile condensed hydrocarbons resembling tar.  These cannot be recovered by “fracking” at all. 

The shales in south Texas,  and some of the shales and adjacent dolomites in the Wyoming region actually do yield light,  volatile crudes.  The problem is what to transport them in.  There are not enough pipelines to do that job.  Pipelines are safer than rail transport,  all the spills and fires notwithstanding. 

The problem is that we are transporting these relatively-volatile materials in rail tank cars intended for normal (heavy) crude oils,  specifically DOT 111 tank cars.  Normal crudes are relatively-nonvolatile and rather hard to ignite in accidents.  DOT 111 cars puncture or leak frequently in derail accidents,  but this isn’t that serious a problem as long as the contents are non-volatile.  These shale-“frack” light crude materials resemble nothing so much as No. 1 winter diesel,  which is illegal to ship in DOT 111 cars,  precisely since it is too volatile. 

The problem is that no one wants to pay for expanding the fleet of tougher-rated tank cars.  So,  many outfits routinely mis-classify “frack” light crudes as non-volatile crudes,  in order to “legally” use the abundant but inadequate DOT-111 cars.  We’ve already seen the result of this kind of bottom line-only thinking,  in a series of rather serious rail fire-and-explosion disasters,  the most deadly (so far) in Lac Megantic,  Quebec. 

Volatile shale-“fracked” crudes simply should not be shipped in vulnerable DOT 111 cars,  period.  It is demonstrably too dangerous. 

Conclusions

“Fracking” shales for natural gas and light crudes has had a very beneficial effect on the US economy and its export-import picture.  We should continue this activity as a reliable bridge to things in the near future that are even better. 


But,  we must address the four problem areas I just outlined.  And I also just told you what the solutions are.  The problem is,  as always,  who pays.   What is the value of a human life?  What is the value of a livable environment?  It’s not an either-or decision,  it’s striking the appropriate balance!

Saturday, November 12, 2011

Student Pulsejet Project

TSTC welding student Justin Friend made the Waco paper Friday 11-11-11, with a pulsejet thruster he built from plans he found on the internet. He had already tested this device himself, but brought it out to the TSTC airport apron for a demonstration test Thursday morning. Most of the attending crowd were aviation maintenance and welding program students and faculty. This was a personal project for Justin, not a class project. I called Bill Whitaker, the editor at the Waco “Trib”, and he sent a reporter.

Justin is a college algebra student this semester with my colleague Otto Wilke, who attended the demo test, as did I. (Another math department colleague, Doyle Ware, also went with me to see the test.) Justin sought sheet metal geometry help from Otto, and operating and safety advice from me, once he found out that Otto and I are engineers. His welds were obviously very good, as the pulsejet tube runs very hot in places. No cracks or flaws of any kind have turned up to date.

This pulsejet tube is valveless, so there are no moving parts at all. It is a “folded pulsejet”, so that the back-spit from the short inlet contributes to its thrust. This one is a nominal 50 pound thrust device, big enough to push a go-kart around. It runs on propane. Justin cut the parts from flat stainless steel sheet, rolled them up, and welded them together, excepting the return-bend tubing. This return bend is on the exhaust side, and is the hottest part of the structure. It glows in broad daylight when running throttled up. I got copies of photos Justin made during his first tests. Two are here.


P 1030654

This device has a spark plug on the side of its combustion chamber, and a propane injection manifold tube across the inlet right at the dump into the chamber. To light it off, one turns on the spark, some starting air from a leaf blower, and the propane. Once running, starting air and spark are no longer necessary. It throttles up and down a wide range of thrust by simply raising and lowering the propane feed pressure.


P1030658

This thing is dangerously noisy: I estimate around 130-135 decibels, so ear protection is a necessity. You can feel the sound waves beating on your stomach. At the TSTC demo runs, you could feel the concrete airport apron shake beneath your feet. Unlike all other forms of jet engine, pulsejets "sing" at a definite frequency: the rate of the pulsed fuel-air explosions inside the tube. This size tube "sang" at about 80 Hertz, like an earth-shakingly loud operatic bass.
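As a rough cross-check on that 80 Hertz figure, a valveless pulsejet behaves approximately like a quarter-wave acoustic resonator filled with hot combustion gas. The gas temperature and the implied tube length below are my own assumed illustration values, not measurements from Justin's device:

```python
import math

# Rough check of the ~80 Hz "singing" frequency, treating the pulsejet tube as
# a quarter-wave resonator full of hot combustion gas. Gas properties and
# temperature here are assumed typical values, not measured ones.
gamma, R = 1.33, 287.0        # hot-gas ratio of specific heats; gas constant, J/(kg*K)
T_gas = 1500.0                # K, assumed average gas temperature in the tube

a = math.sqrt(gamma * R * T_gas)   # speed of sound in the hot gas, m/s
f = 80.0                           # Hz, the observed pulse frequency
L = a / (4.0 * f)                  # implied acoustic length of the tube, m

print(f"hot-gas sound speed ~{a:.0f} m/s, implied tube length ~{L:.1f} m")
```

With an assumed average gas temperature near 1500 K, an 80 Hz tone implies an acoustic length somewhere around two and a half meters, which is at least the right order of magnitude for a 50-pound-thrust tube.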

I haven’t heard noises that loud in decades. Being a part of this young man’s project was a huge amount of fun. I actually knew something about this engine and could help Justin with it, because decades ago I researched the military work done on them in the 1950’s and 1960’s. I always wanted to build one myself, but never actually did it. (That may change, this was just too much fun.)

Added 11-13-11:

Here is the first of two QuickTime Movie (.MOV) files I got from Justin. You can see the tube slowly warm up and quit spewing unburned fuel from the inlet (a thin whitish spray). You can also get a sense of the 80 Hertz "singing" tonal quality of the sound, but no real hint of how loud it was. Upon shutdown, the flames from the inlet are propane residuals from the fuel line venting into a very hot environment.

video

Here is the second movie file from Justin. In this one, the tube is quite warm and operating very well at near-full thrust. You can see him disconnect the spark while it runs, with absolutely no effect upon the operation of the tube.

video

Added 12-7-11:

Since this article originally posted, Justin has mounted his 50-pound thruster to an old golf cart, and driven it at the Hearne, TX airport (an uncontrolled field). The video is of a pass he made after the tube was fully warmed up. The speed is close to the control limit for that cart.

video

Justin has since begun procuring parts and materials for a much larger thruster.

Saturday, October 29, 2011

October 29, 2011, Update

Here follow various updates on the several projects I have been pursuing. The list is not fully comprehensive.

Mars Mission / Paper Presentation at Mars Society Meeting August 2011

That project is completed. I stand by my earlier statements that we could put men safely on Mars and return them safely, for something under $50 billion. It would take a space agency that we do not currently have, and a contractor base that we do not currently have, to accomplish this. The real take-home lesson is that the agency and contractor base we have built and maintained all these decades is the wrong setup.

I did revamp the program back-up to solid-core nuclear “slowboat” from the original VASIMR-based fast-trip. The baseline is still gas-core nuclear fast-trip. As it turns out, VASIMR is just another electric ion drive, no better than the others in terms of performance potential. None of those is suitable for fast manned trips to Mars. I did add some better orbit trajectory estimates. See the 9-6-11 posting “Mars Mission Second Thoughts Illustrated” for those details. The original posting of paper content was 7-25-11 “Going to Mars (or Anywhere Else Nearby) the posting version”.

Ethanol Vehicle and Engine Work

That is now completed. I have decided that it is easier and more effective for most people just to use stiff gasohol blends E-20 to E-35, than it is to come up with shade-tree conversions for still-higher blend ratios. For some vehicles and engines, conversions are easy, for others, not so much.

This work is documented well enough in the postings 5-5-11 "Ethanol Does Not Hurt Engines" and 2-12-11 "'How-To' for Ethanol and Blend Vehicles" for others to use the information. Anyone can learn to make E-30-something blends for any vehicle. For me, this has become routine operation of an F-150, a Nissan Sentra, two lawnmowers, a wood chipper, and a garden tiller on E-30-something blend, all completely factory stock. I still run my slightly-modified Farmall tractor on straight E-85, but have re-mothballed the modified 1973 "ethanol VW" and the unmodified 1960 "blend VW" against future needs.
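For readers wanting the blend arithmetic, here is a minimal sketch of the linear volume-mixing calculation for hitting an E-30-something target from pump E-85 and E-10 regular unleaded. The 85% and 10% ethanol contents are nominal label values; real pump "E-85" varies seasonally, so treat the answer as approximate:

```python
# Simple linear volume mixing: what fraction of the fill should be E-85
# so that the overall blend hits a target ethanol content?
# Nominal 85% and 10% ethanol contents are assumed label values.
def e85_fraction(target, e85=0.85, e10=0.10):
    """Volume fraction of E-85 needed so the mix hits the target ethanol content."""
    return (target - e10) / (e85 - e10)

x = e85_fraction(0.30)
print(f"E-30 mix: {x:.0%} E-85, {1 - x:.0%} E-10 by volume")
# For a 20-gallon fill, that is roughly x * 20 gallons of E-85, the rest E-10.
```

In other words, a bit over a quarter of the tank as E-85, topped off with ordinary E-10, lands you near E-30.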

My two-stroke chain saw and weed eater seem to run just fine on the E-10 blend they now sell as regular unleaded gasoline, with one exception. The weed eater is having age-related fuel line replacement troubles, but also seems to suffer from some poor design choices, as well, regarding how these lines connect to the fuel tank and the other components.

I have never run stiffer blends in either the weed eater or the chain saw. Is there a materials incompatibility problem with the weed eater? I don't know yet. Is there a "bad design" issue with loosening connections? Yes, of that I am sure. Is there an overheat problem that stiffens fuel line hose? I think so, and it's another "bad design" issue unrelated to fuel composition.

Ramjet Engineering

In June of 2010 I paid a visit to Jeff Greason and his crew at XCOR Aerospace, Mojave, California. This was in regard to a future space launch project of theirs that involves ramjet propulsion. They had been unable to locate an all-around ramjet expert, until they ran across me by accident. I had not done such engineering since the old rocket plant in McGregor closed, at which time I was laid off: November 1994. It was not very long after that layoff that most serious military ramjet work simply dried up in this country (although not overseas).

As it turns out, of the few of us that I considered to be all-around experts with significant real design and test experience, most (or maybe all) the others are now dead. I seem to have outlived them all, thus becoming pretty much this country’s last living expert in that kind of propulsion.

Over the last year or so, I dug out some of my old ramjet stuff and got back into the "swing of it", in order to be "back up to speed" when XCOR needs my help, perhaps next year. I began by looking at high-speed systems for orbital launch, which aligns with their project. This took the form of pencil-and-paper work in the odd evenings, and eventually evolved into a two-stage horizontal takeoff/landing aircraft, the lower stage being parallel-burn rocket and ramjet. XCOR and I had been working toward similar concepts in parallel.

As best I can figure, the ramjet strap-on assist idea for vertical-launch rocket vehicles is more of a low-speed application. That idea is less worked-out than the high-speed two-stage airplane, which does seem both feasible and attractive. But I do believe that my top-level conclusions are correct. I do not yet have the software tools necessary to do this kind of work for XCOR or anybody else, seeing as how no computer vendors still support the old DOS-based programming languages that I learned long ago.

I have put a lot of "typical" ramjet performance estimates up on this "exrocketman" site, in articles too numerous to catalog here. More will be forthcoming, as I develop stuff into results one can truly trust. I have put some effort into converting my old "smarts" and programs into modern Excel spreadsheet format. I now have a sizing spreadsheet that works for the old lower-speed "stovepipe" designs, plus a performance mapping option verified to work as the nozzle unchokes. It has become clear that the nested iteration-loop character of these calculations demands a real computer code, not a manually-converged spreadsheet, which is simply too slow and labor-intensive to be practical.

I already had a sizing code, in an advanced BASIC language, that works for high-speed designs. The corresponding performance-mapping code does not yet work. The corresponding codes for low-speed designs are nowhere near working order. The only machine available to me that will even run the programming language is a single obsolete computer with an old Windows 98 operating system. This is a real practical problem yet to be solved.

More recently, I made contact with Aerojet, which is the current inheritor of the gas generator-fed ramjet work I did long ago at the old McGregor missile propulsion plant. It seems there might be a need for my skills once again. We'll see. I have heard nothing positive back since that initial contact, though.

Meanwhile, I have been documenting my old procedures and methods in a series of reports that I keep in a “ramjet how-to” notebook. It’s not complete, but I do have a document regarding the high-speed engine cycle analysis, and one giving estimated spike inlet performance, plus another one documenting low-speed engine cycle analysis, with flameholding and heat protection “how-to” for both speed regimes. I think eventually this notebook will become a book on the “how-to” of ramjets.

One thing I do know: it is very easy to document science, but it is very, very hard to document art. Most of ramjet engineering is art, not science! And the science is hard enough, being way more complicated than rocketry.

Cactus Tool Stuff

I get more email inquiries now, but still very few sales. Many folks seem to be finding the cactus page on the “txideafarm” site. These seem to be generally younger folks. But, as it was with the old ranchers looking at magazine ads, few seem willing to believe it really works.

But, it does work! My place, and everything my friend Dave Gross has done, proves it.

I build and sell about 1 or 2 tools a year these days, and about 2-4 plans sets per year. The rigors of fabricating piece parts are taking an increasing toll on my aging body. Steel is now about 3 times as expensive as when I started doing this, 6 years ago. All this makes me wonder if I should give up fabrication in favor of just selling plans. Maybe it is time to license construction to a bigger company.

Dear readers, please weigh in on this: should I continue selling tools, or just plans? There are comment buttons available; use them.

Reno Air Race Crash

I posted an article 9-23-11 about the fatal crash of the Galloping Ghost at the recent Reno air race. That article says the elevator trim tab failed, most likely due to aeroelastic or other structural divergence effects at high speed.

Tab failure in the P-51 at race speeds, modified or not, leads to a sudden pitch-up condition. That leads quickly to pilot gee-induced unconsciousness, at best. The cure for spectator fatalities is not to position any spectators under expected flight paths.

I still see no reason to revise this posting until the NTSB has a chance to report its findings, perhaps sometime in 2012. And I doubt there will be any need to revise it then.

Energy Resources

I had published an article here 3-14-10 "Drill Here, Drill Now, Pay Less?" that dealt with a purported vast US oil resource named "the Bakken". That resource is really shale tar, and is not (and will never be) "drillable oil". Since then, I have become aware that not all of that rock unit is shale. There is a substantially more porous dolomite layer in the Williston basin that actually does contain a light crude recoverable with hydro-fracturing technology.

In an article dated 9-5-11 “Surprise, Surprise: Oil Boom in the Williston Basin (“the Bakken”)”, I took on the size and recoverability of that resource. It is significant, but no “game-changer”, as some would have you believe. I concluded that yes, we should go get this oil. But, no, it will not save us from foreign oil dependence. I still see no reason to change those conclusions.

The fundamental economic problem is that our western economy was designed to run on cheap energy, chief among which is transportation fuel. Energy today, particularly transportation fuel, is no longer cheap. Therefore, we have economic recession/depression (choose your word). Government policies (from either side) have nothing to do with boom or bust conditions. Only energy prices really matter. The proof of this thesis is in the data posted 2-4-11 "Oil Prices, Recessions, and the War".

It is hard to argue with data, is it not?

My most important point is that the best way to win this “war on terror” is to not need middle eastern oil any more. It’s not so much about the economics, it’s about victory. What is so damned hard to understand about that concept?

Concluding Remarks

Enough rambling. Please weigh-in by means of the various comment buttons. It is the only way I know that anyone even sees this stuff at all.

Sunday, October 9, 2011

Faster-Than-Light Neutrinos? Maybe! Their Meaning? Arguable!

Recent news reports from the world of science indicate one team working in a particle accelerator has clocked neutrinos traveling slightly faster than lightspeed. They are begging other teams to confirm or disprove this result independently, as is proper and normal in the business of science.

Commentators and experts have been weighing in on what such a result might mean, if confirmed to be true. "Everybody" points at Einstein (specifically his 1905 Special Theory of Relativity) to say that there is a speed limit these neutrinos seem to be violating. In that view, either the measurement or the speed limit must be wrong; it's either/or, not both.

Speed limit? That is an interpretation of Einstein’s theory, not a result. It is a very old interpretation, and I personally disagree with it. Here’s why:

In Einstein’s original 1905 paper, he sets up and solves the equations that describe the appearance of object A to an observer in reference frame B, moving at some relative but constant velocity V, when seen by light photons traveling at vacuum lightspeed c.

He did this for speeding subatomic particles, just like those neutrinos. Others since have extended the theory to large objects.

The theory postulates that c is a value which all observers measure to be the same, no matter their motion, something that has actually been demonstrated experimentally with certainty. The theory's mathematical results (sometimes called the Lorentz-Fitzgerald contraction equations) describe the object's mass, dimension, and rate of time passage:

M = Mo/√(1-V^2/c^2) where M is what is seen and Mo is the resting value at V = 0

L = Lo*√(1-V^2/c^2) where L is what is seen and Lo is the resting value at V = 0

T = To*√(1-V^2/c^2) where T is what is seen and To is the resting value at V = 0

For these, M is the object’s mass, L its dimension in the direction of travel, and T is its local rate of time flow. Lateral dimensions are unaffected by V.

If one plugs a V greater than vacuum lightspeed c into these equations, the results are not real numbers, when a real-number result is what one seeks, being the only result that has meaning in the context of this problem. For almost a century now, the common interpretation has been that the not-real result means it is not possible to travel at speeds V exceeding vacuum lightspeed c. This is the origin of the common statement about “Einstein’s speed limit”.
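The not-real result is easy to see numerically. This little sketch evaluates the common factor √(1 - V²/c²) from the three equations above using complex arithmetic, so the V greater than c case shows its imaginary character directly:

```python
import cmath

# The common factor sqrt(1 - (V/c)^2) from the three equations above:
# it multiplies Lo and To, and divides into Mo. Complex arithmetic lets
# us see exactly what happens when V exceeds c.
def shrink_factor(v_over_c):
    """sqrt(1 - (V/c)^2), evaluated over the complex numbers."""
    return cmath.sqrt(1 - v_over_c**2)

print(shrink_factor(0.6))   # 0.8 -- a real number: the usual relativistic result
print(shrink_factor(1.5))   # purely imaginary: no real-number solution exists
```

At V = 0.6c the factor is a perfectly ordinary 0.8; at V = 1.5c it comes out purely imaginary, which is the "not-real result" discussed above.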

When solving formulas in any branch of science and engineering, there are always fundamental assumptions about the problem, even if they are unspoken. Getting a not-real result with a modeling equation can nearly always be traced back to violating a fundamental assumption, even if it is not obvious.

Why should not-real results from Einstein’s theory be any different? What was his fundamental assumption that we violate when we plug in V greater than c in those equations?

Remember, the equations describe the appearance of object A to an observer in reference frame B. That presupposes that observer B can actually see object A (there's the fundamental assumption!)

At V greater than c, we get a non-real result, which most likely simply means observer B cannot see object A, since we assumed he could. This interpretation is based on all those other experiences with formulas and problem-solving, and getting real versus not-real results.

When you think about it, how could observer B see object A traveling so fast? Object A is traveling faster than the photons with which observer B sees. The same sort of observational thing happens with supersonic aircraft: you cannot hear them coming because the sound waves by which you hear don’t arrive until much later.

Now, is the moving object A really heavier, shorter, and moving slower in time, or does he just look that way? How do you tell? You have to bring object A back to rest in observer B’s frame of reference.

Once the relative velocity V is back to zero, the equations say mass, length, and rates of time flow look completely normal and quite equal to both object A and observer B.

And furthermore, it does not matter who was really moving: V is relative only (that’s in part where the name of the theory came from).

Yet, somebody’s time flow rate really was slower. Their clocks (which are totaling devices, not rate of flow devices) will disagree. This is an experimentally validated and very certain result. Special Relativity does not resolve that problem, which is often called the Twin Paradox.

It is Einstein’s 1915 work on General Relativity that provides the answer: whoever did the accelerating to V and then back to zero is the one who experienced less total accumulated passage of time. Yet, his sense of time flow was entirely normal, to him, throughout the journey! This is the direct consequence of speed of light, not rate of time flow, being constant for all observers.

Could object A be observed if it were flying faster than light? To me, the equations say “no” with the not-real result; remember that they were derived on the assumption the object can be observed.

Could object A actually travel faster than light with respect to reference frame B? That’s a very good question! If the not-real result actually just means he cannot be seen, then that same not-real result says nothing about whether he can actually fly that fast!

So, that’s my maverick interpretation: Einstein says nothing about an actual speed limit. I seem to stand alone in this. But I always did like shouting from the corner that the Emperor has no clothes.

But, if I am right, we actually can travel faster than light, given sufficiently powerful technology. But, navigation will be hell if we see by photons, because the entire universe becomes unobservable!

We’re going to need some additional theory!

Update 12-21-11

The December 2, 2011 issue of “Science” (volume 334 issue 6060) has an interesting “News Focus” article on pages 1200-1201. This magazine is the peer-reviewed journal published by AAAS. The article title is “Where Does the Time Go?” and its topic line is “Superluminal Neutrinos”. There is now a lot of effort at a lot of places to either replicate the experiment or de-bug the procedures OPERA used to produce its faster-than-light results. Depending upon those outcomes, this result may never be explained. But this kind of activity is exactly what should be going on. The process of doing science really works.

The article describes the experimental concept as a simple timing across a fixed distance, although the elements of accomplishing that are not so simple. Pulses of neutrino creation at one location are correlated with pulses of neutrinos received at another location. Timing is by speed-of-light-corrected GPS measurements, with corrections for electrical transmission speeds in all the equipment. They are looking at how much the graphs of these pulses overlap.

One item questioned in the article is the GPS calibration, which really wasn’t done often enough. Countering the notion of miscalibrated GPS is the systematic time shift required to correct the calculated neutrino speed downward, when a more randomized error would be expected.

In the sixth paragraph (on page 1200) is an assumption the OPERA scientists made about where along the beamline the neutrinos actually get created, amounting to roughly a kilometer of uncertainty. That assumption should be investigated. While the article says the error associated with it is "small", it is a small effect we are arguing about (around 60 nanoseconds of time over a distance of about 700 km).

If it were a surveying error, the article says it would have to be on the order of 18 meters, which is not really credible. To me, this points right back to the creation-delay assumption.
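The 18-meter figure is easy to verify: at lightspeed, timing error and distance error are interchangeable. A quick check, using the actual CERN-to-Gran-Sasso baseline of roughly 730 km (slightly more than the round 700 km quoted above):

```python
# Consistency check on the article's numbers: how much baseline (surveying)
# error would it take to fake a 60-nanosecond early arrival at lightspeed?
c = 299_792_458.0     # m/s, vacuum lightspeed
dt = 60e-9            # s, the reported early arrival
baseline = 730e3      # m, approximate CERN-to-Gran-Sasso distance

error_m = c * dt                  # distance error equivalent to 60 ns
fraction = dt / (baseline / c)    # how small an effect this is, relatively
print(f"equivalent baseline error: {error_m:.1f} m "
      f"({fraction:.2e} of the light travel time)")
```

Sixty nanoseconds at lightspeed is indeed about 18 meters, and only a few parts in a hundred thousand of the total light travel time, which is why every small assumption in the timing chain matters.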

It has been my engineering experience that 90+% of the assumptions made are faulty or inappropriate. That's why I suggest investigating assumptions first, or at least putting them very high up on the priority list.

The other assumption that should be investigated is the interpretation of special relativity implying a speed limit, as discussed above. While not a popular topic in the physics community, it should still be done.

I would be delighted if the neutrinos were actually faster than light, and we needed some new theory. That would be a real breakthrough, and no telling where it might lead.

GW

Saturday, October 8, 2011

Comic Opera Buffoons and Puppet Theater

Take a few minutes to watch this cartoon; it is well worth it.

1948 Cartoon

http://nationaljuggernaut.blogspot.com/2009/09/this-cartoon-seemed-far-fetched-in-1948.html

I last saw this cartoon as the “movie day” movie in a public school classroom sometime in the late 1950’s or early 1960’s, but I don't remember which grade or whose class.

I will say this: the "ism" this cartoon warns against is exactly what is being preached today by BOTH our political parties, ON BOTH SIDES OF THE AISLE. The "issues" that divide them are nothing but puppet theater to distract us from the real and lethal problems that we face. Those are problems that require us to work together, but that cooperation does not contribute to re-election campaigning.

Holding public office has ceased to be a public service, and has instead become a high-dollar for-profit business, especially at the national level. It is also prevalent at the state level, and below in some places. As a result, we typically choose from among various comic-opera buffoon candidates, who are motivated only by selfish interest, instead of real statesmen who would do the people's business in preference to their own.

From the letters I see published in the newspaper, the ridiculous forwarded email "hit pieces", and most of the mainstream media opinion columns and broadcasts, it appears that a majority of the public has fallen for these lies. Furthermore, it appears most of the voting public actually believes the nonsense shouted by these buffoons. That scares the daylights out of me.

If I am right about this, and no one else out there wakes up and sees the real truth, then we are truly doomed.

GW

Monday, September 26, 2011

UARS: Why Another Uncontrolled Satellite Crash?

Saturday, the 6-ton UARS satellite crashed. No one is sure exactly where: the odds say it crashed in the Pacific Ocean, but the uncertainty says some debris might have come down over the Pacific Northwest.

Official estimates were that about half a ton of this satellite would crash to Earth in 26 pieces, one as large as 300 pounds or so. My own estimate of surviving debris mass is larger.

I’ve been watching satellites and satellite re-entries all my life. In all those years, no person has been hurt, because most of these hit the sea (the only victims being fish).

But the risk is still there. So, why do we continue launching satellites that can crash uncontrolled? That's a good question!

Our 85-ton Skylab space station crashed onto Australia in 1979, leaving lighter debris on rooftops in a coastal town, with the heavier pieces carrying further inland. About 75 tons were eventually recovered!

Before that crash, the “official wisdom” was that it would mostly “burn up”. It clearly didn’t do that. Had its debris field been more centered on that Australian town, it is likely someone would have been hurt or killed.

That same year, the 10-ton Pegasus 2 satellite also came crashing back to Earth. This one hit the ocean, as do most.

In 1978, the Russian satellite Cosmos 954 crashed to Earth in north-central Canada. This one was the special case of a nuclear reactor-powered satellite, for which the preferred disposal method (a really high, decay-proof orbit) failed. There was serious radioactive contamination over an area of back-country Canada hundreds of miles wide.

And, most folks remember vividly the crash of Space Shuttle Columbia in Texas in 2003. Again, there were a lot of pieces, some quite large, on the ground from Dallas to Tyler. No one on the ground was hurt, but they very easily could have been.

In contrast, the Russians deliberately crashed their Mir space station into the Pacific in 2001. They used a rocket motor to de-orbit the station and put it down exactly where they wanted: away from land and people.

Excepting disasters and reactor disposals, most satellites could (and should) be equipped with a small rocket motor for a controlled crash in a safe place. Small solid rocket motors are cheap, light, compact, widely available, and they last for decades without any maintenance, waiting to be used.
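To put a rough number on how small such a motor could be, here is a ballpark sizing sketch using the classical rocket equation. The de-orbit delta-v and motor specific impulse below are my own assumed typical values, not figures from any actual satellite program:

```python
import math

# Ballpark sizing for a small de-orbit solid motor. A delta-v near 120 m/s
# (enough to drop a low-orbit perigee into the atmosphere) and an Isp of
# 280 s are assumed typical values for a small solid rocket motor.
def deorbit_propellant(sat_mass_kg, dv=120.0, isp=280.0, g0=9.80665):
    """Propellant mass from the rocket equation: m_p = m * (1 - exp(-dv/(Isp*g0)))."""
    return sat_mass_kg * (1.0 - math.exp(-dv / (isp * g0)))

m_p = deorbit_propellant(6000.0)   # a UARS-class 6-ton satellite
print(f"propellant needed: ~{m_p:.0f} kg")
```

On this rough basis, something like 250 kg of solid propellant, only about 4% of the satellite's mass, would have sufficed to put a UARS-class satellite down in a safe place of our choosing.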

So why don’t we do this, especially considering the nail-biting experience with Skylab?

Simply because no rule says we have to. That’s something very easy to fix, and without a new law.

For civilian / commercial satellites, the Federal Aviation Administration (FAA) should simply require controlled de-orbit provisions. It's their jurisdiction, and they already have rule-making authority. A word from the President to do it is all it would take.

For military satellites, a simple order from the Commander-in-Chief is all that is needed.

Mr. President, fix this. Give the word to the FAA and the Joint Chiefs. I bet most of the other satellite-launching nations would soon follow suit.

Friday, September 23, 2011

Air Races, Air Shows, and Risks

The recent fatal crash of the modified World War 2 P-51 “Galloping Ghost” at the Reno air races is a horrible incident. Lots of things have been said on the news and on the internet about it, but all of this is speculation based on incomplete information.

The National Transportation Safety Board (NTSB) will investigate this and determine its cause. They will use all the available information, including stuff not yet reported or on the internet. Until they publish their findings, perhaps a year from now, all else is mere speculation.

That being said, some speculators are more informed than others. I would place more trust in the speculations of an actual aircraft engineer over the speculations of most other members of the public. Being such an engineer, here are my speculations:

Facts

"Galloping Ghost" suddenly pitched up and climbed, rolled over inverted, and dove to the ground, all in a matter of scant seconds. Impact was not directly upon, but was immediately adjacent to, spectators, many of whom were killed by pieces thrown from the wreck. There was no post-crash fire.

The left trim tab was photographed departing from the airplane’s horizontal tail before impact. The pilot’s head was not visible in the canopy before impact. The retractable tail wheel was seen extended before impact.

This aircraft was modified in several ways from its World War 2 configuration to compete in the races. Most notable were “clipped wings” reducing span (and aileron size) by 5 feet each side, and removal of the belly air scoop and radiator in favor of a sacrificial coolant system. These enable higher top speed, at the cost of higher landing speed and perhaps a reduced maximum roll rate, not a loss of basic stability.

Less obvious were changes to canopy size, wing fillet size, and smoothing of protuberances, for drag reduction. The race speeds significantly exceed 500 mph, when the original level-flight top speed for the P-51 during World War 2 was 435 mph.

I do not know what the original “never exceed” speed was for the P-51, but these race speeds would be approaching or exceeding that limit. Flying too fast risks structural failures of wing and tail components by a phenomenon called flutter.

Similar Previous Incident

About a decade ago, a similarly-modified P-51 named “Voodoo-5” experienced a very similar incident: sudden high speed pitch-up (at high acceleration) into a climb, with the pilot losing consciousness briefly due to excessive gee forces. He woke up, with no memory of events, at 9000 feet altitude, and regained control, landing successfully.

"Voodoo-5" was found to have lost the same left trim tab as "Galloping Ghost". Loss of the tab at high speed caused failure of part of the elevator control linkage, leaving only the right elevator for pitch control. Aerodynamic flutter was blamed for loss of the trim tab.

At 400+ mph, P-51s exhibit a relatively unusual nose-up tendency that you fight with down trim and down stick. Other aircraft exhibit high-speed nose-down "tuck", or no trim-change tendencies at all.

In the P-51 with that nose-up tendency, sudden loss of half your elevator effectiveness at very high speed causes the aircraft to suddenly and violently pitch up, at something near 10-15 gees. The pilot passes out, or can even be killed with a broken neck, depending on helmet weight and head restraints, or the lack thereof.

Speculations Regarding “Galloping Ghost”

“Galloping Ghost” lost the same left trim tab, and pitched up similarly at high gee. Some on the internet say telemetry from the aircraft indicated 11.5 gees. The differences between “Voodoo-5” and “Galloping Ghost” are (1) “Galloping Ghost” also experienced a roll, and (2) it appears her pilot never woke up, or was perhaps already dying of a broken neck.

It also appears that the high pitch-up gee level forced deployment of her tail wheel. The roll motion on the way up caused her to peak in inverted flight, as photographed. I think I see a light-colored helmet on the dark dashboard in that internet inverted-flight photo, but I could be wrong. She then continued her pitch-roll motion into a dive-to-impact.

There has been speculation that the pilot’s seat failed in “Galloping Ghost”, which might explain why his head was not visible in the canopy. I would be surprised at seat failure in a fighter plane at only 10-15 gees, but I guess it could happen. It did not in “Voodoo-5”, though.

Waiting for the Truth

The NTSB will opine officially maybe a year from now, but I'd almost bet they say that P-51 trim tabs are vulnerable to flutter-induced departure at race speeds beyond the original design's never-exceed speed. Few designers provide aerodynamic or mass balancing, or any other anti-flutter structural treatment, to a trim tab. Maybe they should. If so, the NTSB will say so.

Inappropriate Fear-Mongering

The public safety issue raised by some reporters has less to do with any given aircraft being "pushed too far", or being modified "too radically", and more to do with simple spectator crowd placement. The wording in those reports seems deliberately chosen to inflame fears, and is a disservice to the public, much like yelling “fire” in a crowded theater when there isn’t one.

At air shows, spectators may not legally be located beneath expected aircraft flight paths. At the Reno air races, they can be (and are) located under flight paths. Perhaps they should not be, as at air shows. While these are the first spectator deaths at Reno since the 1950s, that risk has always been there.

There was a fatal crash at an air show the day following the “Galloping Ghost” incident. No one but the pilot was killed, because no one was underneath the falling plane.

Tuesday, September 6, 2011

Mars Mission Second Thoughts Illustrated

As I said in a previous posting (8-9-11), I had some second thoughts about the back-up propulsion for my fast trip Mars mission paper, presented at the Mars Society convention in Dallas, Texas, August 4-7, 2011. My backup had been the VASIMR electric propulsion scheme, which I had thought a breakthrough in thrust per unit of power consumed. Based on what I saw at the meeting, it is no breakthrough, and is largely unsuitable for fast trips to Mars.

My second thoughts centered on an alternative slow-traveling vehicle requiring artificial gravity, because the manned mission duration would exceed the 1 year known to be tolerable. This vehicle would be powered by the same solid-core nuclear thermal technology I assumed in my landers, derived from the NERVA tested successfully 4 decades ago. I planned this alternative around simple minimum-energy Hohmann transfer orbits, because that is easy to do.

That still leaves the gas-core nuclear thermal-powered fast trip vehicle, which is still my baseline. I took a closer look at the orbits and the near-straight line “shots” across the solar system at the higher travel speeds. This verified my earlier crude ballpark estimate of the fast trip velocity requirements. All of this is illustrated here, at a level of analysis no deeper than is required to confirm the concepts and their feasibility. For example, I used circles to approximate the actually slightly-elliptical orbits of the planets. To first order for a feasibility check, this is “good enough”.

Baseline Trajectories

The baseline “fast trip mission” sends a fleet of three unmanned ships to Mars parking orbit ahead of the manned ship. This fleet is propelled by the landers themselves, and comprises all the propellant required to support the landing operations, plus enough to send these assets one-way to Mars by Hohmann transfer. Figure 1 shows the initial Hohmann transfer for these unmanned assets. Note that there is an opposition during the unmanned flight to Mars.

I looked for ways to center my manned fast trip about that first opposition, without adding too much extra time in orbit to the manned mission. This did not prove feasible, so the manned fast trip is centered on a second opposition some 779 days after the first one. The mission calls for 16 weeks at Mars making landings, which would be 56 days to either side of the opposition. A little time spent making rough calculations gave me an “optimal” one-way flight time pretty near 83 days for the “average” mission these approximations represent. This is shown in Figure 2.
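That 779-day spacing between oppositions is just the Earth-Mars synodic period, which anyone can check from the two orbital periods (the period values below are standard round numbers, not figures from my paper):

```python
# Synodic period of Mars as seen from Earth: 1/T_syn = 1/T_earth - 1/T_mars
T_earth = 365.25   # days, Earth orbital period
T_mars = 686.98    # days, Mars orbital period
T_syn = 1.0 / (1.0 / T_earth - 1.0 / T_mars)
print(round(T_syn, 1))  # about 779.9 days between successive oppositions
```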

Baseline Vehicles

The total mission time (for the men) is under 9 months, so no artificial gravity is required. Note that the total time the propellant tanks sent unmanned must maintain the liquid hydrogen is well over two years – rather challenging! The vehicle designs are as shown in Figure 3, and are essentially unchanged from my paper. The direct launch costs are pretty much as I estimated in the original paper.

Guessing that total program costs are about 6 times the direct launch costs gives something like $50 billion to mount this mission, given the right team. Those figures are similar to the ones in the original paper. That “right team” issue is also discussed in more detail in that paper. See the 7-25-11 posting for an on-line version of that original paper.

Backup “Slowboat” Trajectories

If the manned vehicle is built from the same basic modules, but with solid core nuclear engines instead of the gas core engines, then a single stage two-way flight, even on a Hohmann transfer, is not possible. But a single stage transfer to Mars can be flown, and the empty tanks left there at Mars. In this way, a single stage return to Earth can be made, without relying on propellant already sent unmanned to Mars. This is a safety issue: what if rendezvous should fail in Mars orbit? The crew needs a way to return anyway.

The Hohmann transfer to Mars is identical to that in Figure 1. All four ships travel together as a single fleet: 3 unmanned and the one manned vessel. It is not possible to return by Hohmann transfer until the second opposition approaches, as illustrated in Figure 4. These oppositions are separated by 779 days, which leads to the timelines shown for the return in Figure 4. Thus, total manned mission duration is about 2.66 years, requiring the use of artificial gravity to protect the health of the crew, and considerably more packed supplies for the longer mission. Time at Mars about doubles, allowing for 16 2-week landings instead of 16 1-week landings, as in the baseline.

Backup “Slowboat” Vehicles

The unmanned vehicles are unchanged from my paper. The manned vehicle is necessarily bigger than the baseline design, driven by the substantially lower performance of solid core nuclear thermal rockets (SC-NTR) vs. gas core nuclear thermal rockets (GC-NTR). The solid core vehicle is substantially longer and about twice as heavy as the baseline gas core vehicle. The “payload” is larger, too, driven by the need to pack about 3 times as much supplies, with some of that bulky, heavy frozen food. These are depicted in Figure 5.

The return vehicles, command module (also the radiation shelter), habitat module, and supply storage modules are the same; I just needed 3 storage modules instead of one. I did take a closer look at the habitat module, since more space is needed on the longer mission to maintain psychological health. The easiest way to do that was to make the habitat an inflatable, along the lines of the Bigelow Aerospace modules already in experimental flight test now. Equipment and floor structure would be stowed along the axis for launch, and folded out into position once the module is inflated, as illustrated in Figure 6.

The same module could be used on the baseline fast trip vehicle; there is no need to build two different designs. It is imperative not to mount equipment on the module walls, as the walls need to be accessible for very rapid meteoroid puncture repairs. (The same is true of non-inflatable modules.)

I wrestled with several ideas on how to provide adequate radius at acceptable spin rates for artificial gravity, at the one gee level which we already know would be adequate. The breakthrough was to spin the long ship end-over-end, using the long module stack as its own spin diameter. For the trip to Mars, the propellant stack is 34 modules long, each figured as 5.2 m diameter and 13.9 m long, based on the payload shroud dimensions for the SpaceX Falcon-heavy launch vehicle. Spinning end-over-end at only 1.2 rpm provides right about 1 gee at the forward end of the inflatable habitat (at its lower deck as illustrated). The stack is shorter returning to Earth, but should be long enough to provide close to 1 gee at no more than the acceptable limit of 4 rpm.
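The spin-rate versus radius trade comes straight from the standard centripetal relation a = ω²r. The sketch below treats spin radius as a free parameter, since the gee level actually achieved at the habitat deck depends on where the stack's center of mass sits along its length; the module count and length are the ones quoted above.

```python
import math

G0 = 9.81  # m/s^2, one gee

def centripetal_accel(radius_m, rpm):
    """Centripetal acceleration a = omega^2 * r for a spin rate given in rpm."""
    omega = rpm * 2.0 * math.pi / 60.0   # convert rpm to rad/s
    return omega * omega * radius_m

def rpm_for_gee(radius_m, gees=1.0):
    """Spin rate in rpm needed to produce the given gee level at radius r."""
    omega = math.sqrt(gees * G0 / radius_m)
    return omega * 60.0 / (2.0 * math.pi)

# 34 modules at 13.9 m each; the spin radius is half the stack length if the
# spin center is at mid-stack, and longer if the center of mass sits aft
# near the heavy engines.
half_stack = 34 * 13.9 / 2.0
print(round(rpm_for_gee(half_stack), 2), "rpm for 1 gee at mid-stack spin center")

# Radius needed for 1 gee at the 4 rpm acceptability limit (return-leg check):
print(round(G0 / (4.0 * 2.0 * math.pi / 60.0) ** 2, 1), "m for 1 gee at 4 rpm")
```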

About Radiation

The original paper covers solar flare radiation shielding in the command module. This is done by surrounding the flight deck with water and wastewater tanks, plus perhaps a little steel plate. One provides space in there for all 6 crew, and a day or two of supplies to outlast the typical solar storm. This enables critical maneuvers to be flown, no matter what the solar weather, a major flight safety issue.

A little research since then provided credible dose estimates for the cosmic ray background radiation, composed of particles so energetic that ordinary shielding is more-or-less impractical. The dose varies between 22 and 60 REM per year in a steady “drizzle”, depending upon the strength of the solar wind, which tends to deflect some of it. The original radiation dose limit for astronauts was set at 25 REM/year, which was the World War 2-vintage max dose for civilian adults. It has since been revised to 50 REM/year, based on what I can find on the internet. The actual dosage rate only sometimes exceeds the newer limit, and then only by a small amount. Trips to Mars thus appear quite feasible without incurring any immediate health risks from cosmic rays, or even any significant prospect of long-term effects.
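As a back-of-envelope check, multiplying that 22-60 REM/year background range by the two mission durations gives the total cosmic-ray dose each crew member would accumulate; the durations are the ones from the mission descriptions above.

```python
# Rough cosmic-ray dose totals; rates and durations are taken from the text.
rate_lo, rate_hi = 22.0, 60.0   # REM/year background range
missions = {
    "fast trip (baseline)": 9.0 / 12.0,  # just under 9 months, in years
    "slowboat (backup)": 2.66,           # about 2.66 years
}
for name, yrs in missions.items():
    print(f"{name}: {rate_lo * yrs:.1f} to {rate_hi * yrs:.1f} REM total")
```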

The Program As Revised

Changing to SC-NTR backup propulsion puts the artificial gravity and frozen food storage issues into the design mix. These have to be made to work, and they are things we have never done before. Using the baseline GC-NTR propulsion puts that very propulsion into the mix as something we have never done before, except for some feasibility experiments. Those are the two development items to be worked in parallel, so that one or the other is ready in time to fly. (This is the same basic parallel path development idea that was in the original paper, where the baseline was GC-NTR, and the backup was VASIMR and its power plant.) All the other items are simple design / build / checkout efforts based on known technologies, and that includes the SC-NTR. The high-level program plan is just a bunch of parallel paths, as illustrated in Figure 7.

So, we are looking at somewhere in the vicinity of $53 B to $70 B to send 6 men to Mars to make 16 widely-separated landings all over the planet, in the one trip, with maximum safety and self-rescue capability designed-in at every step, and with all-reusable assets left in space to be refueled and reused by subsequent missions. The whole thing could be done for prices like that, in only 5-10 years, given the right kind of contractor teams, and the right kind of an agency to lead them. That is one incredible amount of “bang for the buck”!

As I said in my original paper, right now we do not have that agency, and only a couple of the right kind of contractors, at best. But, if we fix those lacks, we could really do this. The numbers show it is definitely feasible.

The last time we as a nation embarked on a mission to explore another world (the moon), we had nearly two decades of sustained economic boom, from all the jobs created just to get the mission done. That may not be causal, but it is definitely correlated. Why not do it again?


Figure 1 – “Slowboat” Transfer to Mars, Baseline and Backup


Figure 2 – “Fast Trip” Transfer To and From Mars, Baseline


Figure 3 – Baseline Manned and Unmanned Vehicles


Figure 4 – “Slowboat” Transfers to Earth, Backup


Figure 5 – Backup Manned and Unmanned Vehicles


Figure 6 – Inflatable Habitat Module, Baseline and Backup Vehicles


Figure 7 – Program Outline Plan

Monday, September 5, 2011

Surprise, Surprise: Oil Boom in the Williston Basin (“the Bakken”)

Resources on the internet about this formation have been revised recently. There appears to be an oil drilling boom going on in eastern Montana and western North Dakota. They are horizontal-drilling and hydro-fracturing for light crude (meaning a low viscosity liquid). One description says the crude they can recover has just about the same gross physical properties (density, viscosity) as diesel.

That's a surprise to me. Two years ago I researched this formation as a "shale unit, very low porosity and microscopic permeability", and everything I read about the hydrocarbons in it described a consistency more like tar. Hydro-fracturing simply would not work on a near-solid resource like that. It would have to be mined, like coal.

What I read now says the Bakken comprises a dolomite layer around 100-140 feet thick, bounded above and below by shale layers. Typically, the shale is the “original” source for the hydrocarbons. The dolomite is listed as 5% porosity and microscopic permeability (1-10 microdarcys, very nearly impermeable). It is in the dolomite layer (not the shale) that they are horizontal-drilling and hydro-fracturing. Estimates of how much of the total resource might possibly be recovered this way vary by over an order of magnitude, depending upon who made the estimate and what agenda they have.

For the Barnett Shale natural gas hydro-fracturing here in Texas, the estimate is that about 3% of the gas down there is actually recoverable. For the liquid in the Bakken dolomite layer, I'd simply guess that factor as 3% or less, which is nearer the 1% end of the estimate range of 1% to 50% that I saw on-line yesterday. Almost-nil permeability just has that effect, hydro-fracturing notwithstanding.

I suspect that there are residual tars left behind in both of the shale units in the Bakken formation, and that the source for the light fractions in the sandwiched dolomite layer is the lower shale member. Somehow, I don't see light fractions migrating downward from the upper shale member, so its lighter fractions are most likely now lost to us.

So, how much recoverable light oil might there be, and how much good might it do, if we can recover around 2% of it?

Oil in the Dolomite Layer:

If you guess that there's something like 500 x 500 statute miles of this formation, averaging 100 ft thick, at 5% porosity, then there might be as many as 6 trillion barrels of light oil down there.

500 mile dimension x 5280 ft per mile = 2.64E6 ft. 500 mi x 500 mi is then 6.97E12 sq ft. Multiply by 100 ft thick to obtain 6.97E14 cu.ft of dolomite rock. The hydrocarbon volume equals the pore space volume at 5% of rock volume, assuming the pores are 100% full. That's 3.48E13 cu.ft of hydrocarbons. Cu.ft volume of hydrocarbons x 7.48 gal per cu.ft is 2.61E14 gal hydrocarbons; divide that by 42 gal/barrel. That's 6.2E12 (about 6 trillion) barrels of hydrocarbon volume down there in the pores of the dolomite layer, supposedly all hydro-fracturable, very light crude.

Assume we can recover 2% of it. That's about 1.24E11 barrels of light oil that could be recovered, or about 124 billion barrels in ordinary terms. That's quite significant. I could be off by a factor of 2-3 in rock volume assumptions, more likely toward the smaller than the larger, so these figures are rather optimistic.

At our US consumption of 7-8 billion barrels per year, this could potentially power us for about 16-17 years. That really is significant, even if it is optimistic by a factor of 2-3. If it is all light oil. If we really can recover 2% of it. If the rock pores are really full. Lots of "ifs".
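The whole chain of arithmetic above reproduces in a few lines; the 5% porosity, full pores, 2% recovery, and a mid-range 7.5 billion barrel/year consumption are the assumptions already stated in the text.

```python
# Step-by-step reproduction of the dolomite-layer oil estimate.
FT_PER_MILE = 5280.0
GAL_PER_CUFT = 7.48
GAL_PER_BBL = 42.0

side_ft = 500.0 * FT_PER_MILE            # one side of the 500-mile square
rock_cuft = side_ft ** 2 * 100.0         # 100 ft average thickness
pore_cuft = rock_cuft * 0.05             # 5% porosity, pores assumed 100% full
oil_bbl = pore_cuft * GAL_PER_CUFT / GAL_PER_BBL
recovered_bbl = oil_bbl * 0.02           # assume 2% recoverable
years = recovered_bbl / 7.5e9            # mid-range US consumption rate

print(f"oil in place: {oil_bbl:.2e} bbl")        # about 6.2 trillion
print(f"recoverable:  {recovered_bbl:.2e} bbl")  # about 124 billion
print(f"years of US consumption: {years:.1f}")   # about 16-17 years
```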

Let's say this oil boom lasts 20-30 years (typical for a very large field). The average production rate from the mature field (which takes several years to achieve) might be as much as around 4-6 billion barrels a year, again possibly optimistic by factor of 2-3. That's still a lot, optimistic or otherwise.

Replacing Foreign Imports:
About 1/3 of our consumption is domestic production, about 1/3 comes from Mexico and Canada, and about 1/3 comes from OPEC (which includes Venezuela, along with that idiot running it; and our “friend” Iran, with that insane group of religious fanatics running it). That's about 2.5 billion barrels per year from each source. We might very well be able to replace much of the OPEC oil with domestic from the Bakken dolomite layer, even as the other sources decline. For a little while.

But, no matter how politically expedient, it is still clearly not at all wise to count on it “ending” our dependence on foreign oil. Although, you can bet more than one GOP/Tea Party candidate will run on "why not save ourselves from oil dependence with the Bakken, if the environmentalists and Democrats would just get out of the way?" They did exactly that in '08: remember “drill, baby, drill?”

Even with the new oil boom that I did not expect to see, it’s still a comic-opera puppet-theater issue intended to distract the public from the real truths that threaten us. It’s still just a fake electioneering issue for a bunch of comic-opera buffoon candidates. Beware! I warned you!

About the Tar Shale Layers:
I saw no thickness figures on the two shale units, in the new data that I found this year. I bet they're quite thick, though. You'd have to deep strip mine it, and what I saw said it averages 2 miles down. Figure shale at 0.5% or less porosity, for maybe another handful of trillions of barrels of potentially-recoverable hydrocarbon. This tar shale stuff would be very hard to extract and process, though, and so it would be a supremely expensive product.

And, we would get it for the environmental cost of a permanent crater some 500x500x2 miles in size, which is bigger by far than the volume of Lake Superior. That shale tar is what I was thinking about when I posted what I did about "the Bakken" last year (the 3-14-10 article). That’s still true, oil boom notwithstanding.

Conclusions:

Yep, we need to go get the hydro-fracturable light oil.

Yep, it’ll surely help with imports.

Nope, it will not “save” us.

There is no permanent answer among depletable (fossil) fuels, and never will be.

Update 6-5-2016:  here is an updated curve of US oil production versus time obtained from the US EIA website.  I have sketched upon it the Hubbert curve for conventional oil production.  It is clear the fracking technology is a new effect.  How tall this could go,  and how wide this will be over time,  are things that are completely unclear as of yet.  

-------------------------------------------------------------

Update 1-3-15:

The recent explosion of US “fracking” technology (hydraulic fracturing plus horizontal-turn drilling) has modified the picture of oil prices versus recessions.  Unexpectedly,  the US has become a leading producer of crude oils for the world market.  Plus,  there has been an associated massive production increase and price drop in natural gas.

OPEC has chosen to take the income “hit” and not cut back their production in response.  Their reasoning is twofold:  (1) fear of loss of market share,  and (2) hope that low oil prices will curtail US “fracking” recoveries.  We will see how that plays out.

Oil prices are now such (at around $55/barrel) that US regular gasoline prices are nearing $2.00/gal for the first time in a very long time.  This is very close to the price one would expect for a truly competitive commodity,  based on 1958 gasoline prices in the US,  and the inflation factor since then. 
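That inflation comparison can be roughed out numerically. Note that both the 1958 pump price and the CPI ratio below are my own ballpark assumptions, not figures from this post:

```python
# Hypothetical sanity check of the "competitive commodity" gasoline price.
# Both inputs are assumed round numbers: US regular gasoline ran roughly
# $0.25/gal in 1958, and consumer prices rose roughly 8-fold from 1958
# to the mid-2010s.
price_1958 = 0.25   # $/gal, assumed
cpi_factor = 8.2    # assumed consumer-price inflation ratio, ~1958 to ~2015
price_now = price_1958 * cpi_factor
print(f"${price_now:.2f}/gal")  # about $2.05, near the observed ~$2.00
```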

It is no coincidence that the exceedingly-weak US “Great Recession” recovery has suddenly picked up steam.  The timing of the acceleration in our economic recovery versus the precipitous drop in oil prices is quite damning.  There can be no doubt that higher-than-competitive-commodity oil prices damage economies.  Oil prices are a superposition of the competitive commodity price,  overlain by an erratic increase from speculation,  and further overlain quite often by punitive price levels when OPEC is politically unhappy with the west.  That’s been the history. 

This economic improvement we are experiencing will persist as long as oil,  gas,  and fuel prices remain low.  (Government policies have almost nothing to do with this,  from either party.)  How long that improvement continues depends in part upon US “fracking” and in part upon OPEC.  Continued US “fracking” in the short term may depend upon adequate prices.  In the long term,  we need some solutions to some rather intractable problems to continue our big-time “fracking” activities. 

The long-term problems with “fracking” have to do with (1) contamination of groundwater with combustible natural gas,  (2) induced earthquake activity,  (3) lack of suitable freshwater supply to support the demand for “fracking”,  and (4) safety problems with the transport of the volatile crude that “fracking” inherently produces. 

Groundwater Contamination

Groundwater contamination is geology-dependent.  In Texas,  the rock layers lie relatively flat,  and are relatively undistorted and unfractured.  This is because the rocks are largely old sea bottom that was never subjected to mountain-building.  We Texans haven’t seen any significant contamination of ground water by methane freed from shale.  The exceptions trace to improperly-built wells whose casings leak.

This isn’t true in the shales being tapped in the Appalachians,  or in the shales being tapped in the eastern Rockies.  There the freed gas has multiple paths to reach the surface besides the well,  no matter how well-built it might have been.  Those paths are the vast multitudes of fractures in the highly-contorted rocks that were subjected to mountain-building in eons past.  That mountain-building may have ceased long ago,  but those cracks last forever.

This is why there are persistent reports of kitchen water taps bursting into flames or exploding,  from those very same regions of the country.   It’s very unwise to “frack” for gas in that kind of geology.

Induced Earthquake Activity

This does not seem to trace to the original “fracking” activity.  Instead it traces rather reliably to massive injections of “fracking” wastewater down disposal wells.  Wherever the injection quantities are large in a given well,  the frequent earthquakes cluster in that same region.  Most are pretty weak,  under Richter magnitude 3,  some have approached magnitude 4. 

There is nothing in our experience to suggest that magnitude 4 is the maximum we will see.  No one can rule out large quakes.   The risk is with us as long as there are massive amounts of “fracking” wastewater to dispose of,  in these wells.  As long as we never re-use “frack” water,  we will have this massive disposal problem,  and it will induce earthquakes. 

Lack of Freshwater Supply to Support “Fracking”

It takes immense amounts of fresh water to “frack” a single well.  None of this is ever re-used,  nor is it technologically possible to decontaminate water used in that way.  The additives vary from company to company,  but all use either sand or glass beads,  and usually a little diesel fuel.  Used “frack” water comes back at nearly 10 times the salinity of sea water,  and is contaminated by heavy metals and radioactive minerals,  in addition to the additives.  Only the sand or glass beads get left behind:  they hold the newly-fractured cracks in the rocks open,  so that natural gas and volatile crudes can percolate out.

The problem is lack of enough freshwater supplies.  In most areas of interest,  there is not enough fresh water available to support both people and “fracking”,  especially with the drought in recent years.  This assessment completely excludes the demand increases due to population growth.  That’s even worse.

This problem will persist as long as fresh water is used for “fracking”,  and will be much,  much worse as long as “frack” water is not reused.  The solution is to start with sea water,  not fresh water,  and then to re-use it.  This will require some R&D to develop a new additive package that works in salty water to carry sand or glass beads,  even in brines 10 times more salty than sea water. 

Nobody wants to pay for that R&D. 

Transport Safety with Volatile “Frack” Crudes

What “fracking” frees best from shales is natural gas,  which is inherently very mobile.  Some shales (by no means all of them) contain condensed-phase hydrocarbons volatile enough to percolate out after hydraulic fracturing,  albeit more slowly than natural gas.  Typically,  these resemble a light,  runny winter diesel fuel,  or even a kerosene,  in physical properties.  More commonly,  shale contains very immobile condensed hydrocarbons resembling tar.  These cannot be recovered by “fracking” at all. 

The shales in south Texas,  and some of the shales and adjacent dolomites in the Wyoming region actually do yield light,  volatile crudes.  The problem is what to transport them in.  There are not enough pipelines to do that job.  Pipelines are safer than rail transport,  all the spills and fires notwithstanding. 

The problem is that we are transporting these relatively-volatile materials in rail tank cars intended for normal (heavy) crude oils,  specifically DOT 111 tank cars.  Normal crudes are relatively-nonvolatile and rather hard to ignite in accidents.  DOT 111 cars puncture or leak frequently in derail accidents,  but this isn’t that serious a problem as long as the contents are non-volatile.  These shale-“frack” light crude materials resemble nothing so much as No. 1 winter diesel,  which is illegal to ship in DOT 111 cars,  precisely because it is too volatile.

The problem is that no one wants to pay for expanding the fleet of tougher-rated tank cars.  So,  many outfits routinely mis-classify “frack” light crudes as non-volatile crudes,  in order to “legally” use the abundant but inadequate DOT-111 cars.  We’ve already seen the result of this kind of bottom line-only thinking,  in a series of rather serious rail fire-and-explosion disasters,  the most deadly (so far) in Lac-Mégantic,  Quebec.

Volatile shale-“fracked” crudes simply should not be shipped in vulnerable DOT 111 cars,  period.  It is demonstrably too dangerous. 

Conclusions

“Fracking” shales for natural gas and light crudes has had a very beneficial effect on the US economy and its export-import picture.  We should continue this activity as a reliable bridge to things in the near future that are even better. 


But,  we must address the four problem areas I just outlined.  And I also just told you what the solutions are.  The problem is,  as always,  who pays.   What is the value of a human life?  What is the value of a livable environment?  It’s not an either-or decision,  it’s striking the appropriate balance!