Although I've been too busy to test the new motor over the past few months, I did have some time to look into some other issues. But first of all, it looks like I'll be able to resume static testing at the homestead. I worked with the homeowners' board to get permission to build a test facility in the back yard. Although I haven't constructed it yet, the plan is to dig a 4-foot-deep trench and lower the test stand into it. Going below ground and taking advantage of the resulting berm provides significant noise reduction and dramatically increases the margin of safety against any mishap. Here are some noise and safety references that I found helpful:

  • Design Guide for Highway Noise Barriers, FHWA/TX-04/0-1471-4, Texas Department of Transportation
  • Acoustic Loads Generated by the Propulsion System, NASA SP-8072
  • Noise Barrier Design Handbook, US DOT
  • Reductions in Multi-component Jet Noise by Water Injection, AIAA-2004-2976
  • Safety Standard for Explosives, Propellants, and Pyrotechnics (new), NASA-STD-8719.12
  • Safety Standard for Explosives, Propellants, and Pyrotechnics (old), NASA NSS 1740.12

For a 6 foot berm acting as a noise barrier, computing the Fresnel number for various frequencies gives an insertion loss of 19 dB at 1 kHz and 24 dB at 10 kHz. Figure 2 in SP-8072 indicates that the noise spectrum shifts to higher frequencies as the size of the motor decreases. That, combined with the fact that a noise barrier becomes more effective as frequency increases, works out well for a motor of my size. As a thought experiment, this makes sense: you can typically hear lower frequencies through walls better than higher ones. My goal is to get the noise level 400 feet away down to the same as or less than one of the F-18 or F-16 jets that routinely fly overhead. If I need it, another option is water injection just aft of the nozzle, which has been shown to knock off a few more dB.
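As a sanity check, the barrier math can be sketched with Maekawa's classic approximation, which maps the Fresnel number to insertion loss. The path-length difference below is a made-up value for illustration; the real number depends on the actual trench, berm, and receiver geometry:

```python
import math

def fresnel_number(delta_m, freq_hz, c=343.0):
    """Fresnel number N = 2 * (path length difference) / wavelength."""
    return 2.0 * delta_m / (c / freq_hz)

def maekawa_il_db(n):
    """Maekawa's approximation for barrier insertion loss, valid for N >~ 1."""
    return 10.0 * math.log10(20.0 * n)

# Assumed geometry: the extra distance the sound travels over the berm top
# versus the direct line of sight (0.7 m is a placeholder, not my layout).
delta = 0.7
for f in (1000, 10000):
    n = fresnel_number(delta, f)
    print(f"{f} Hz: N = {n:.1f}, insertion loss ~ {maekawa_il_db(n):.0f} dB")
```

Note that the simple formula keeps climbing 10 dB per decade of frequency, while real barriers are usually limited to roughly 20-25 dB by flanking and diffraction effects, which is consistent with the numbers above flattening out at high frequency.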

I decided to look further into the overall accuracy of my data system and uncovered some surprises. I'm using National Instruments hardware (SCXI-1520, SCXI-1125, PXI-6030E) and I was aware of the 0.1% accuracy spec on the SCXI-1520 (which isn't very good, BTW). However, actually achieving that figure is not trivial and is compounded by driver bugs and board behavior. The first problem I noticed is that the SCXI-1520's excitation supply is coupled with the bridge balance circuit, so changing the excitation value causes an offset that has to be nulled out. Unless you can put the bridge at rest, you can't null it out, so that's not an option in all cases. There is an autozero feature, but in the particular version of the NI-DAQ driver I was using, the feature was broken. Upgrading to a newer NI-DAQ driver required upgrading my version of LabVIEW, so it ended up being quite painful. After all that, I discovered that autozero (and autonull) doesn't work unless the channel is configured for bipolar mode, so I potentially had to throw away half my resolution just to use the autozero feature. I also ran into problems with wrong gains after changing the master timebase on the PXI-6030E.

Another lesson: you have to actually measure the excitation voltage to properly scale a ratiometric bridge-type transducer, since the driver's nominal value can be quite different from the actual value (9.978 V instead of 10.0 V on one channel, over 0.2% error).

A further problem involved disconnected and floating channels. I had a channel temporarily disconnected, the last bridge channel right before the start of the TC channels in the scan list. I noticed the temperature from the TC channel drifting around by a couple of degrees, and I traced it to the preceding disconnected channel floating and saturating the ADC. After some searching, I found an NI knowledge base article explaining that the PGIA needs more time to settle following a railed input voltage. Apparently those folks have never had a cable break or accidentally get disconnected during a test. It would be a shame to get bad data on adjacent channels just because of a bad cable on another channel. The workaround for me was to reduce the sample rate to a level where the crosstalk resulted in less than 0.2 degF variation on a temperature channel. So, while the NI hardware seems to have adequate performance, you have to be careful and check that you're really getting the rated accuracy.
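The ratiometric scaling point is easy to see numerically. This sketch uses a hypothetical 3 mV/V, 1000 lbf load cell (made-up sensitivity and range, not my actual transducer) and compares scaling against the nominal versus the measured excitation:

```python
def bridge_load(v_out, v_exc, sensitivity_mv_per_v, full_scale):
    """Ratiometric bridge scaling: convert bridge output voltage to load.

    v_out is the bridge output in volts, v_exc the excitation in volts.
    The transducer is characterized in mV of output per V of excitation,
    so the reading must be normalized by the *actual* excitation.
    """
    mv_per_v = (v_out * 1000.0) / v_exc
    return full_scale * mv_per_v / sensitivity_mv_per_v

v_out = 0.015  # 15 mV bridge output, roughly half scale for this example
nominal = bridge_load(v_out, 10.0, 3.0, 1000.0)   # assume 10.000 V excitation
measured = bridge_load(v_out, 9.978, 3.0, 1000.0)  # use the measured 9.978 V
print(f"nominal: {nominal:.2f} lbf, measured: {measured:.2f} lbf, "
      f"error: {100.0 * (measured - nominal) / nominal:.2f}%")
```

With the measured excitation the indicated load shifts by about 0.22%, which is exactly the kind of error the 0.1% board spec gets buried under if you trust the nominal 10.0 V.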

To help with transducer calibration and data system checkout, I bought a used HP 34401A DMM and a more stable voltage source, a General Resistance Dial-A-Source DAS-47AL, which I calibrated against the freshly calibrated 34401A. I'm slowly building up my calibration facilities to the point where I can now externally calibrate all my DAQ boards and transducers:

  • Voltage input - HP 34401A DMM
  • Voltage output - Dial-A-Source DAS-47AL
  • Pressure input - Paroscientific Digiquartz
  • Mass - Ohaus EB15 scale with 10 kg Class F calibration weight
  • Time/Frequency - Datum TymServe 2100 with IRIG and 10 MHz GPS disciplined frequency outputs
  • Length - various scales and DRO systems on shop equipment
  • Specific Gravity - Bellwether hydrometers for water and kerosene
  • Temperature - lab grade mercury thermometer

Calibrating the flowmeters proved to be an interesting adventure. I considered sending them off, but at $350 per cal (for Cox), I decided to buy the Ohaus scale with a calibration weight instead and perform the cal myself. I talked to a couple of different cal labs, and the most accurate method seems to be the positive displacement method. It uses a precision double-acting cylinder with air on one side of the piston and the fluid on the other. You move the piston at a constant rate, and if you can measure the displacement vs. time, then you know the volumetric flow rate into the meter. They do this at several different flow rates and give you a curve of K-factor as a function of pulse rate (it's not constant; it can vary almost 2% end to end). The other method is a time-volume calibration where you fill up a drum with water, weigh it, and count the total pulses from the flowmeter. The cost for this type of cal was $1195 for 12 points. I ended up performing my own time-volume cal with a bucket and scale.

The first time I ran a water cal of my LOX flowmeter (a Cox AN8-6), a problem was immediately apparent in the data, so I took the meter apart and cleaned it, which seemed to fix it. But the bigger problem is that I had already cleaned it once before, so I decided to set that one aside as a spare. I had been concerned for a while that my older Cox AN series flowmeters don't have a downstream snap ring, so I switched to a different flowmeter I had, a Flow Technology Inc. FT6-8, which has snap rings on both ends. In a stroke of luck, I was able to retrieve the original factory cal from 1990, and my calibration matches it within 0.2-0.3%. My fuel flowmeter, also a Cox AN8-6, had good repeatability across multiple cals, so I'll continue to use it.

My data system is primarily analog, so I'm using a frequency-to-voltage converter to condition the flowmeter output before it goes to the SCXI-1520. The F-V circuit is slightly non-linear, so I'm using a 3rd-order curvefit for it. However, the flowmeter K-factor also varies with flow rate, so I had two 3rd-order curvefits that needed to be combined. I started to do it algebraically, but it got messy pretty quickly. That's when I discovered polynomial composition and found that LabVIEW has a VI to do just that. After composing the curvefits, I decided that truncating the result to 3rd order was sufficient.
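The same composition is a one-liner with numpy's Polynomial class, since evaluating one polynomial at another composes them. The coefficients below are invented placeholders, not my actual curvefits:

```python
from numpy.polynomial import Polynomial as P

# Hypothetical 3rd-order fits (made-up coefficients, lowest order first):
# f_v:  DAQ voltage -> flowmeter frequency (Hz), the F-V converter cal
# flow: frequency   -> flow rate, absorbing the K-factor variation
f_v = P([2.0, 180.0, 1.5, -0.02])
flow = P([0.01, 0.005, 1e-6, -1e-10])

# Composing two cubics gives a 9th-order polynomial in DAQ voltage...
composed = flow(f_v)
# ...which can then be truncated back to 3rd order if the high-order
# terms are negligible over the operating range.
truncated = composed.cutdeg(3)
print(truncated.coef)
```

Whether the 3rd-order truncation is actually adequate depends on how much the dropped terms contribute over the voltage range you care about; comparing `composed` and `truncated` at a few sample voltages is a cheap way to check.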

I've never been happy with the way I was attaching my thermocouples to the chamber. Although I would like to measure the inside hot-wall temperature, that isn't practical with my design, so I just drilled small holes in between the cooling passages. In my previous tests, I used Omegabond-400 cement to hold the TCs in the holes, but it didn't work very well. Maybe I didn't cure it properly, but when I tried to LOX-chill the plumbing to cool things down, the cement turned to slush at the low temperatures. I came up with a way to mechanically fasten the TC to the chamber using a #6-32 aluminum cap screw with a hole drilled through the middle. The TC is threaded through the hole with a 90-degree bend at the bead end and epoxied into place. You can then screw the assembly finger-tight into the threaded hole in the side of the chamber. This lightly clamps the bead into place, and since the cap screw is aluminum, there are no thermal expansion issues. We'll see whether I can remove the cap screw after a hot fire; the assembly should not get hot enough to melt the epoxy, though. Another benefit of this arrangement is that it ensures the TC bead is grounded, which seems to be helping my EMI problem a bit.

There's been an occasional problem with my data acquisition program randomly hanging on startup. I discovered that it's not a good idea to reset the Datum bc635PCI every time, because the reset disrupts the IRIG lock detection. After a board reset, it takes about 8 sec to lock, 3.5 min to phase lock, and between 4 min and 1.5 hr to frequency lock. Since I was resetting the board and immediately continuing in the program, I would get unpredictable results. Also, after changing the timing mode (even to the same value it's already set to), it takes about 5 sec for the lock and phase bits to return to normal.
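The fix amounts to skipping the reset and polling the lock status with a timeout before the program moves on. A rough sketch; the `read_status` callable here is a stand-in for the real bc635PCI register interface, which differs:

```python
import time

def wait_for_irig_lock(read_status, timeout_s=30.0, poll_s=0.5):
    """Poll the timing card's status until the IRIG lock bit is set.

    read_status is a callable returning a dict of status bits
    (hypothetical interface; the actual bc635PCI driver API differs).
    Returns True once locked, False if the timeout expires.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if read_status().get("irig_lock"):
            return True
        time.sleep(poll_s)
    return False

# Usage with a stub that reports an immediate lock:
print(wait_for_irig_lock(lambda: {"irig_lock": True}))
```

The same pattern, with a longer timeout, would cover waiting for the phase-lock and frequency-lock bits if a run actually needs them.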

EMI from the igniter showing up in the data has been a problem for me on several tests. While annoying, there were bigger things to worry about so I hadn't had a chance to really take a good look at it until recently. I connected a good ground strap between the test stand and instrumentation chassis which seems to eliminate the problem. You can definitely see the difference with and without the strap. However, even with it disconnected, I didn't see the level of EMI that I had seen on previous tests so I'll have to keep an eye out for it in case it reappears.

The weakest part of my entire setup is the igniter. I'm currently using a spark coil driven through a relay, so it buzzes and generates continuous sparks. A pair of wires with a plastic spacer is inserted up into the chamber and taped externally so it stays in place until chamber pressure forces it out. While an integrated augmented-spark or torch igniter would be better, I'm committed to the existing design because I believe that a system used only for startup can be left behind on the ground instead of paying the weight and complexity penalty on the flight vehicle. The challenge is then to ensure adequate igniter energy, to keep the igniter in the chamber long enough, and to verify that the system is working when the main propellants enter the chamber, to avoid a hard start. For this next series of tests, I've constructed a transformer by wrapping 10 turns of wire around the ground lead of the igniter. With a full-wave rectifier and a suitable filter capacitor, I can get a noisy but useful signal into the data acquisition system. I've modified the test sequence to check for valid igniter spark feedback before introducing the propellants into the chamber.
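The feedback check in the sequence logic can be as simple as requiring most of the recent samples of the rectified-and-filtered signal to clear a threshold, which tolerates the noise. A sketch with made-up threshold and fraction values (the real ones would come from looking at actual spark-on and spark-off data):

```python
def igniter_ok(samples, threshold_v=0.5, min_fraction=0.8):
    """Spark-feedback go/no-go check before opening the main valves.

    samples: recent readings (volts) of the rectified/filtered feedback
    signal. Because the signal is noisy, require only min_fraction of
    the samples to exceed the threshold rather than every one.
    (threshold_v and min_fraction are illustrative, not tuned values.)
    """
    if not samples:
        return False
    above = sum(1 for v in samples if v >= threshold_v)
    return above / len(samples) >= min_fraction

print(igniter_ok([0.8, 0.9, 0.7, 0.85, 0.9]))  # healthy spark
print(igniter_ok([0.1, 0.2, 0.9, 0.1, 0.1]))   # mostly dead: abort
```

An empty sample list also returns False, so a wiring fault that produces no data fails safe rather than letting the sequence proceed.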

Something I've been meaning to do for a while is build a suitable blast shield in case the motor decides to disassemble itself due to a hard start. After some really crude analysis, I came up with this design. The sides use 12x12 inch sheets of LEXGARD MP1000 laminate with an extra 3/8 inch sheet of polycarbonate on the inside to cosmetically protect the expensive LEXGARD sheets in the event of a minor burnthrough. A washer on the bolts between the sheets should prevent moisture from getting trapped in between. For the top I used 1/2 inch hot-rolled plate steel. The whole assembly (which is a lot heavier than I intended) bolts to the front of the test stand with some extra unistrut. I spent a lot of time reading various articles about blast shields and just got a headache after a while so I settled on this design as probably "good enough". Some interesting references I found include: NASA TN D-4894 "Blast Shields Testing", SAND99-0634 "Secondary Containment Design for a High Speed Centrifuge", DOE/TIC-11268 "A Manual for the Prediction of Blast and Fragment Loadings on Structures", and BRL-405 "The Initial Velocities of Fragments from Bombs, Shell, and Grenades". Hopefully, I'll never get the chance to see how well the blast shield works.

I picked up a 0-500 psia Paroscientific Digiquartz pressure transducer from eBay that I plan to use as a calibration source for my tank and chamber pressure transducers. Paroscientific sensors are well known for amazing long-term stability, so I'm expecting the cal sheet I have from 1986 to still be good. It's an absolute transducer, and the zero looks to have drifted by about 0.5 psi (0.1% FS), which is reasonable over that period of time. These transducers have a frequency output in the range of 30-42 kHz, so the measurement accuracy depends on the accuracy of the frequency counter's reference oscillator. The newer versions provide both pressure and temperature-compensation outputs, but this older unit has only the single frequency output. They use an interesting calibration curvefit, and as a result the relative accuracy of the indicated pressure is about 10x worse than the relative accuracy of the frequency counter's reference oscillator. For example, a 100 ppm reference oscillator will yield about 0.1% FS accuracy from the transducer, so to get the full 0.01% FS accuracy of the Digiquartz, I need at least a 10 ppm reference oscillator. The equation is P = C*(1 - T0^2/Tau^2)*[1 - D*(1 - T0^2/Tau^2)], where C, D, and T0 come from the cal sheet and Tau is the period of the frequency output in microseconds. I don't have a real frequency counter, but I do have a National Instruments PXI-6030E DAQ card in a PXI chassis with a 25 ppm clock built in, which gets me pretty close. For a 500 psi transducer, that should get me within 0.125 psi or so, certainly good enough for amateur rocket work. I also just purchased a Datum TymServe 2100 from eBay, which in addition to its NTP network port has both an IRIG output and a 10 MHz reference clock that will come in handy for testing.
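The curvefit and its timebase sensitivity are easy to play with numerically. The C, D, and T0 values below are invented placeholders of roughly the right magnitude for a 500 psia unit; the real values come from the cal sheet:

```python
def digiquartz_psi(tau_us, C, D, T0):
    """Paroscientific curvefit: P = C*x*(1 - D*x), x = 1 - T0^2/tau^2.

    tau_us is the measured period of the frequency output in
    microseconds; C (psi), D (dimensionless), and T0 (microseconds)
    are the cal-sheet constants. At zero pressure, tau = T0.
    """
    x = 1.0 - (T0 * T0) / (tau_us * tau_us)
    return C * x * (1.0 - D * x)

# Made-up cal constants, roughly the right order of magnitude:
C, D, T0 = 2000.0, 0.03, 28.0

tau = 32.0  # microseconds, somewhere on the transducer's span
p = digiquartz_psi(tau, C, D, T0)
# Perturb the measured period by 25 ppm (the PXI chassis clock spec)
# to see how far the indicated pressure moves:
p_perturbed = digiquartz_psi(tau * (1.0 + 25e-6), C, D, T0)
print(f"P = {p:.2f} psi, 25 ppm timebase error shifts it by "
      f"{abs(p_perturbed - p):.3f} psi")
```

Because C is several times the full-scale pressure, a small relative error in tau gets multiplied into a much larger relative error in P, which is the source of the roughly 10x accuracy penalty mentioned above.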