Fuel Economy Testing

For the past 30 years or so, fleets have used a variety of methods to evaluate new fuel-saving devices in an attempt to simulate "real world" testing. These tests were born out of a disregard for automaker fuel economy data, which often does not reflect the actual experience of fleet owners. Generally, "real world" testing does not attempt to control driving habits or driving conditions; instead, it uses a variety of vehicles with differing missions.

"REAL WORLD" VS. REAL DATA

These "real world" tests range from a simple seat-of-the-pants, 1-week driving test that compares the results to historical data, to elaborate, A/B tests utilizing global positioning systems (GPS) and computers tapped into the onboard diagnostics (OBDII) sensors. In these tests the selected vehicles are first run for 3 months to base line the data (A) and then they are converted to the new fuel efficiency device and run for another 3 months (B). After about 6 months of testing the A/B data is compared often with no real conclusions. Why? Because something happened during the testing to throw all the results into doubt, sort of like "the dog ate my homework!"

So, what typically goes wrong? A lot. Some of the selected vehicles get replaced during the test, others are pulled out for maintenance, and some are repurposed to another task. Perhaps the most insidious data faults result from seasonal changes. The A test may be done in the spring and, 3 months later, the B test in the summer. It would not be surprising to find the B test vehicles idling more with the air conditioner running. Some fleet managers could explain this away as "real world" testing, but unless the data are normalized for this anomaly, the test is irrelevant.
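To make "normalized" concrete, here is a minimal sketch in Python, with invented mileage figures and an assumed idle-burn rate, of how extra idle time might be backed out of A/B data before comparing:

```python
# Hypothetical illustration: all figures are invented for the example.
# A real test would use measured idle-burn rates and logged idle hours.

IDLE_BURN_GAL_PER_HR = 0.5  # assumed idle fuel consumption rate

def normalized_mpg(miles, gallons, idle_hours):
    """Remove fuel burned at idle before computing MPG."""
    driving_gallons = gallons - idle_hours * IDLE_BURN_GAL_PER_HR
    return miles / driving_gallons

# Test A (spring, little A/C idling) vs. Test B (summer, heavy idling)
mpg_a = normalized_mpg(miles=9000, gallons=600, idle_hours=20)
mpg_b = normalized_mpg(miles=9000, gallons=640, idle_hours=110)
print(f"A: {mpg_a:.2f} MPG, B: {mpg_b:.2f} MPG")
```

On the raw numbers the B vehicles look worse (14.1 vs. 15.0 MPG); with the extra idle hours removed, the two periods are actually comparable.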

UNCONTROLLED VARIABLES

The problem with the "real world" approach to testing is that it lacks control over 4 key variables. Consider the first variable: actual testing conditions. These include weather, the topography of the locale, and even the condition of the road surface. Most people don't realize that oil companies change gasoline formulation every two months to compensate for seasonal temperature fluctuations. Combined with temperature-related changes in air density, this can significantly alter fuel economy results.

The second variable is the measurement of fuel consumption. In almost every "real world" test, the trip computer on the dashboard is the fuel measurement device. This method is useless in the short term because it varies widely until enough miles have accumulated to average out the instantaneous swings. It can be relatively accurate over a 2- or 3-month period. But measuring fuel consumption over such a long period brings in the first uncontrolled variable: weather.
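A toy simulation in Python (the noise figures are invented) illustrates the point: per-trip readings swing widely, while the cumulative figure only settles after many miles:

```python
import random

# Toy model: each 250-mile trip returns a noisy MPG reading around a
# "true" economy of ~20 MPG; the cumulative average converges slowly.
random.seed(1)
total_miles = total_gallons = 0.0
for trip in range(12):
    miles = 250.0
    trip_mpg = random.uniform(14, 26)  # invented trip-to-trip swing
    total_miles += miles
    total_gallons += miles / trip_mpg
    print(f"after {total_miles:5.0f} mi: {total_miles / total_gallons:5.1f} MPG cumulative")
```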

Third is the largest fuel economy variable of all: the driver. Unless you are a professional driver driving a fixed course to a tightly controlled time/waypoint standard, it is impossible to accurately duplicate driving between the A and B tests. Many fleet vehicles have multiple drivers, which further confounds any semblance of reproducibility.

The final variable is the data collection process itself. It usually culminates in a report published to the "powers that be" who will decide the fate of the new fuel economy device. Unless the data can be normalized for the conditions, measurement tolerances, and driver habits that differ between test A and test B, the results are not very relevant, yet they are often presented as fact in "real world" testing.

NEW TEST PROCEDURES

So, if "real world" testing is not very reliable, what is? Here is where we come face to face with the reason most fleet managers prefer "real world" testing in the first place. The tests used by the automakers and the EPA up until 2006 were based on a 1975 test protocol called the Federal Test Protocol 75 or FTP75. The FTP75 attempted to simulate driving as it was done in 1975 in a town in California. This route was fed into a computer and then driven on a dynamometer (more on this later). Those who remember the first oil embargo know that speed limits across the U.S. at that time were 55 MPH. So, too, the FTP75 is limited to a top speed of 55 MPH. Today some highway speed limits are 80 MPH. The FTP75 did not even use the same type gasoline available to consumers, but rather a highly refined version of gasoline called indolene. No wonder "real world" experiences did not reflect the MPG sticker on the car and why fleet managers came to disregard them.

In 2006 the U.S. Congress finally acknowledged this disparity, and the automakers were ordered to use another, more relevant test protocol, the EPA US06, which better simulates driving today with greater data collection accuracy. The problem with EPA testing is that vehicles have to be transported to one of the few independent, certified facilities around the country. Furthermore, the test can cost between $10,000 and $30,000 per vehicle. Put 10 vehicles in the test and you may well spend any potential fuel savings from a new device on testing alone.

NEW AND PRACTICAL TEST PROCEDURES

Albuquerque-based Enerpulse, Inc. studied the conflict between "real world" and scientific testing when introducing Pulstar™ pulse plugs to fleet managers. A pulse plug looks and fits exactly like a spark plug but incorporates an electrical device called a capacitor, which boosts the energy of the spark the way a camera flash boosts light. It's sort of like putting a flashcube in your engine. The more robust spark makes ignition more precise and combustion more efficient, improving engine performance and fuel efficiency by an average of 6%.

In 2008 Pulstar™ was introduced to a U.S.-based "green" fleet with a healthy mix of E85 and hybrid vehicles. By this time Enerpulse, Inc. had developed a cost-effective alternative to EPA testing called the Enerpulse Performance Evaluation Procedure, or E-PEP. E-PEP comprises 3 types of dynamometer tests: torque, acceleration, and fuel economy. The fuel economy test uses the same drive cycle as the EPA US06 test but, instead of costly gas analysis equipment, relies on relatively inexpensive yet very precise digital flow meters. The result is a test that can be completed in one day with reproducibility of +/- 2% and can be set up in or near the fleet operations. More importantly, E-PEP controls the variables associated with "real world" testing and is therefore far more accurate.

DYNAMOMETER DEFINED

To control the conditions, the test vehicle is lashed to a dynamometer (dyno). The dyno is a rolling roadbed tied to an electric motor. The motor, which is linked to a computer, senses the power and speed at the wheels. It is calibrated by inputting the weight of the vehicle so that load can be applied to the wheels to simulate actual driving for that specific vehicle. Because the testing is done indoors, all weather, topography, and road surface variables are eliminated.
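For a sense of what that calibration does, here is a simplified road-load model of the sort a dyno controller applies to the rollers; the mass and coefficients below are illustrative assumptions, not values from any E-PEP setup:

```python
# Simplified road-load model: steady-state drag plus inertia.
# Coefficients would normally come from a coast-down test; these
# are assumed values for a mid-size vehicle.

VEHICLE_MASS_KG = 1600.0     # entered during dyno calibration
A, B, C = 120.0, 1.5, 0.42   # rolling resistance / drag terms (assumed)

def road_load_newtons(speed_mps, accel_mps2):
    """Force the dyno must apply to mimic real driving at this state."""
    steady = A + B * speed_mps + C * speed_mps ** 2
    inertial = VEHICLE_MASS_KG * accel_mps2
    return steady + inertial

print(f"{road_load_newtons(27.0, 0.0):.0f} N at a steady ~60 MPH")
```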

Next, a digital flow meter is installed in the engine's fuel line and linked to a computer. This device samples fuel flow twice every second during the test; these samples are the data that will be collected and compared to determine the efficacy of the new fuel economy device.
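A minimal sketch of how those twice-per-second samples become a fuel total (the readings and units here are invented for illustration):

```python
SAMPLE_PERIOD_S = 0.5  # flow meter reports twice per second

def gallons_used(flow_samples_gph):
    """Integrate flow-rate samples (gallons per hour) over the run."""
    return sum(gph * SAMPLE_PERIOD_S / 3600.0 for gph in flow_samples_gph)

# A real 10-minute US06 run would log 1,200 samples; six stand in here.
samples = [2.1, 2.4, 6.8, 7.2, 3.0, 1.2]  # gallons/hour, illustrative
print(f"{gallons_used(samples):.5f} gallons")
```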

The most important element of the E-PEP test is the drive cycle; E-PEP uses the EPA US06 drive cycle. During the test, the driver constantly watches a display screen. At the bottom of the screen is the 10-minute cycle he will actually drive on the dyno, combining city and highway driving ranging from 0 to 81 miles per hour. The driver's main focus is the green line inside the two blue lines: he must stay within the blue lines or flunk the test. In our case, the test is first run 3 times with automaker-recommended spark plugs and then another 3 times with Pulstar™ pulse plugs.
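The pass/fail logic is easy to picture in code. In this sketch the +/- 2 MPH band width is an assumption for illustration; the actual tolerance is set by the test procedure:

```python
TOLERANCE_MPH = 2.0  # assumed half-width of the blue-line band

def within_band(target_trace, driven_trace):
    """True only if every driven sample stays inside the blue lines."""
    return all(abs(driven - target) <= TOLERANCE_MPH
               for target, driven in zip(target_trace, driven_trace))

target = [0, 15, 30, 45, 60, 81, 60, 30, 0]   # abbreviated US06-style trace
driven = [0, 14, 31, 44, 61, 80, 59, 31, 0]   # driver's actual speeds
print("PASS" if within_band(target, driven) else "FAIL")
```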

An enormous amount of data is generated, which is ultimately distilled into a comparison chart showing the Pulstar™ improvement over the recommended spark plug.
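The distillation itself is simple arithmetic; here is a sketch with placeholder MPG figures (not actual test results):

```python
# Placeholder numbers: three baseline runs vs. three pulse-plug runs.
baseline_mpg = [21.3, 21.1, 21.4]   # automaker-recommended spark plugs
pulstar_mpg  = [22.6, 22.4, 22.7]   # Pulstar pulse plugs

avg_base = sum(baseline_mpg) / len(baseline_mpg)
avg_test = sum(pulstar_mpg) / len(pulstar_mpg)
improvement = (avg_test - avg_base) / avg_base * 100
print(f"Fuel economy improvement: {improvement:+.1f}%")
```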


DYNO TESTING PROVES RELATIVE VALUE

The argument that dyno testing does not simulate the "real world" misses the point completely. Dyno testing is designed to prove relative value. If a fuel economy device tested on a dyno in a sterile environment improves fuel economy by 5%, then even if the car is driven up the side of a rocky mountain in the "real world," it should get 5% better fuel economy than it would without the device. A dyno test on a Hummer and a Prius will show dramatically different fuel consumption results, but there are conditions in the "real world" under which the Prius will use more fuel than the Hummer (the Hummer idling versus the Prius racing). That does not make the Hummer more fuel-efficient. The dyno proves the relative value, which translates directly into fleet average fuel costs. But unless test variables are controlled, no fuel efficiency device could survive the ambiguities of the "real world."

Perhaps part of the dilemma with fleet testing in general can be explained by the motivations of fleet managers. Most are judged on the reliability of the fleet, not on fuel efficiency, which is thought to be out of their control. After all, how can a fleet manager control the driving habits of others? A new fuel efficiency device, no matter how effective at reducing fuel consumption, is a potential concern to the fleet manager because it could create more maintenance issues and fleet downtime.

Now that we have experienced $4.00-per-gallon gasoline, and by most accounts will again, fleet operators are scrambling to find new ways to reduce fuel costs. Add a growing sensitivity to global warming and our dependence on foreign oil, and you have a powerful reason to consider fuel efficiency alternatives. Separating truly effective fuel economy alternatives from the "snake oil" requires accurate, cost-effective dyno testing procedures that are relative and relevant to the number one cost of operating a fleet: fuel.
