Designed for 50 Tons Per Hour—Why are We Still Running 38?

By Justin Price | August 13, 2025

 

Pellet operations are rarely short on improvement ideas.

What I rarely find, though, is evidence that the favored tweak, whether a die change, a faster feed screw or a bigger hammer mill, will reliably boost throughput and hit the claimed target.

When working with clients, we often hear about the plant’s nameplate capacity and how the plant has never produced at that level. When we ask operators about this gap, the answer usually comes with a thoughtful shrug: “We think it’s the dryer…or maybe the dry hammer mill horsepower is limiting us.”

In many mills, data already exists buried inside some programmable logic controller (PLC) historian, downtime log or half-forgotten Excel sheet. This article lays out a five-step program based on the Design of Experiments (DOE) to help transform that raw data into verified improvements.

Step 1: Collect and Clean Your Data
With today’s modern computers, PLCs and data collection historians, we capture thousands of data points every second. Most of these are noise. Focus on the high-impact items first and clean the inputs: synchronize the clocks, tag the units and remove the outliers.
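If you would rather script that cleanup than do it by hand, the sketch below shows one way to do it with Python and pandas. It assumes the historian can export a CSV with timestamp, tag and value columns; the file name and column names are placeholders, not any particular historian’s format.

```python
# A minimal cleanup sketch, assuming a hypothetical historian export
# with columns: timestamp, tag, value.
import pandas as pd

df = pd.read_csv("historian_export.csv", parse_dates=["timestamp"])

# Put every tag on a synchronized one-minute clock so the signals line up.
wide = (
    df.pivot_table(index="timestamp", columns="tag", values="value")
      .resample("1min")
      .mean()
)

# Flag obvious outliers (beyond 3 standard deviations) per tag and drop them.
zscores = (wide - wide.mean()) / wide.std()
clean = wide.mask(zscores.abs() > 3)

# Keep only rows where every key signal is present, then save for analysis.
clean = clean.dropna()
clean.to_csv("clean_process_data.csv")
```

The exact resample interval and outlier cutoff are judgment calls; the point is that every tag ends up on the same clock with the junk readings removed before any experiment is analyzed.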

Some key data sources include the machine center feed rates, die temperature and pressure, dryer inlet and outlet temperatures, and motor amps for all systems larger than 100 horsepower. These matter because they are direct control levers, real-time throughput indicators and energy proxies. Often, these data points already sit in the PLC historians.

Another place to look is the downtime log or CMMS (computerized maintenance management system). Here, focus on the stop reason, duration and corrective actions taken. When reviewing this information, prioritize the chronic failures for a DOE follow-up.

Energy is another plant resource that matters. Vibration and oil analysis data for the large motors should be monitored to link mechanical health to process settings. The final area to gather data from is the quality analysis lab and other process test points, such as fiber moisture and pellet durability. With this data, you can see whether throughput gains help or hurt pellet quality.

Step 2: Choose Your Variables 
Most of the classical textbooks heap praise on the full-factorial DOE, looking at every possible combination of factors and levels that can be tested. However, many variables can’t be controlled in the mill.

The middle ground is the fractional factorial DOE: test the most important settings with only a handful of runs. If a factor can’t be changed in under 30 minutes, or if changing it risks quality standards, park it for a later, larger study.

By screening for variables that can be moved or changed quickly, you can run shorter trials repeatedly. Most trial campaigns should be limited to somewhere between eight and 16 shifts. The most common method for this is the Taguchi method, a statistical approach to DOE that improves quality and performance by identifying the critical design parameters. We typically recommend an L8 orthogonal array, which lets you test up to seven 2-level factors in just eight runs using simple Excel tools.

The most common screening variables we see in the mills are the feed rates, temperature changes between inlets and outlets, fiber blend, pressure changes across flow systems, and energy consumption at larger motors. Choose two levels for each factor and build your Orthogonal array.
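For readers who prefer a script to a spreadsheet, here is a minimal sketch of laying out the L8 plan in Python. The factor names and the two levels per factor are illustrative placeholders; substitute the settings you can actually move in your own mill.

```python
# A sketch of the standard L8 orthogonal array (up to seven 2-level factors).
import pandas as pd

L8 = [
    [1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2],
    [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2],
    [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1],
    [2, 2, 1, 2, 1, 1, 2],
]

# Illustrative factors and levels -- use the control levers from your own screening.
factors = {
    "feed_rate":         {1: "low", 2: "high"},
    "dryer_outlet_temp": {1: "low", 2: "high"},
    "fiber_blend":       {1: "softwood", 2: "50/50"},
    # ...up to seven factors; unused columns stay as spare columns.
}

plan = pd.DataFrame(L8, columns=[f"col{i+1}" for i in range(7)])
for i, (name, levels) in enumerate(factors.items()):
    plan[name] = plan[f"col{i+1}"].map(levels)

print(plan[list(factors)])  # the eight runs to schedule on the floor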

Step 3: Design Your Experiments
Run the trials without killing throughput. Many operators fear experiments because they remember the last one that jammed the process. To mitigate the risk, let the process stabilize before logging any data: wait a couple of dryer residence times, or a couple of full die rotations after a set-point change, before you start recording.
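One way to enforce that stabilization rule is to keep only the settled window of each run. The sketch below assumes the cleaned historian export from Step 1; the run times and the 30-minute stabilization delay are made-up placeholders.

```python
# A sketch that logs only "settled" data: each run's window starts a fixed
# stabilization delay after the set-point change. Times and delay are illustrative.
import pandas as pd

clean = pd.read_csv("clean_process_data.csv", parse_dates=["timestamp"],
                    index_col="timestamp")

runs = pd.DataFrame({
    "run": [1, 2],
    "setpoint_change": pd.to_datetime(["2025-06-02 08:00", "2025-06-02 12:00"]),
    "run_end":         pd.to_datetime(["2025-06-02 11:30", "2025-06-02 15:30"]),
})

stabilization = pd.Timedelta(minutes=30)  # e.g. a couple of dryer residence times

windows = []
for r in runs.itertuples():
    start = r.setpoint_change + stabilization   # ignore the transition period
    window = clean.loc[start:r.run_end].copy()
    window["run"] = r.run
    windows.append(window)

run_data = pd.concat(windows)
print(run_data.groupby("run").mean())  # per-run averages for the analysis step
```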

You can also use paired samples. Collect the QA pellets at the start and end of each run so quality deltas align with the process data. If you are running the trials over multiple shifts, post up the data and let the production team know what is going on; they may see things that will help refine the next experiment.

Step 4: Analyze Your Results
Turning your pile of numbers into dollars and insight can feel overwhelming. There are powerful tools on the market for analyzing data sets, but with only a few trials completed, simple Excel tools will move you along the path quickly. The most common methods in Excel include main-effect plots, interaction plots and regression/response surface modeling.

A main-effect plot puts the factor you varied on one axis (typically the x-axis) and the average response at each level you tested on the other, with a straight line connecting the means. The steeper the slope of that line, the bigger the factor’s influence. A flat line means the factor had no influence on the outcome. Figure 1 shows this relationship.

Figure 1: Main-Effect Plot
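For those who would rather script the plot than build it in Excel, here is a minimal main-effects sketch in Python. The eight run results are made-up placeholders, not data from a real mill.

```python
# A minimal main-effects calculation: one row per run, factor levels plus the
# measured response (throughput in TPH). All numbers are illustrative.
import pandas as pd
import matplotlib.pyplot as plt

results = pd.DataFrame({
    "feed_rate":   ["low", "low", "high", "high", "low", "low", "high", "high"],
    "fiber_blend": ["softwood", "50/50", "softwood", "50/50"] * 2,
    "tph":         [41, 38, 47, 40, 42, 39, 48, 41],
})

fig, axes = plt.subplots(1, 2, figsize=(8, 3), sharey=True)
for ax, factor in zip(axes, ["feed_rate", "fiber_blend"]):
    means = results.groupby(factor)["tph"].mean()  # average response per level
    means.plot(marker="o", ax=ax)                  # steeper line = bigger influence
    ax.set_title(factor)
    ax.set_ylabel("mean TPH")
plt.tight_layout()
plt.show()
```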
An interaction plot shows how the effect of one factor on the response depends on the level of another. If the lines are parallel, the data suggests there is no interaction, while nonparallel lines suggest the effect of one factor depends on the level of the other. In Figure 2, the 50/50 blend and the softwood material respond differently as the feed rate changes. Because the lines are not parallel, throughput drops more sharply for the 50/50 blend when you push the feed rate. If this mill wanted to crank feed rate to hit nameplate capacity, sticking to the softwood blend would likely preserve throughput better than a 50/50 mix.

Figure 2: Interaction Plot
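The same illustrative run results can be turned into an interaction plot with a few lines of Python; again, the numbers are placeholders.

```python
# An interaction-plot sketch: mean throughput for every feed rate / blend
# combination, one line per blend. Values are illustrative only.
import pandas as pd
import matplotlib.pyplot as plt

results = pd.DataFrame({
    "feed_rate":   ["low", "low", "high", "high", "low", "low", "high", "high"],
    "fiber_blend": ["softwood", "50/50", "softwood", "50/50"] * 2,
    "tph":         [41, 38, 47, 40, 42, 39, 48, 41],
})

interaction = results.groupby(["feed_rate", "fiber_blend"])["tph"].mean().unstack()
ax = interaction.plot(marker="o")   # parallel lines = no interaction
ax.set_xlabel("feed rate")
ax.set_ylabel("mean TPH")
ax.set_title("Interaction: feed rate x fiber blend")
plt.show()
```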
Regression and response surface models are a bit more complex. They model the relationship between a dependent variable and one or more independent variables, usually visualized on a scatter plot. Figure 3 shows multiple y values from trials at each x value, with a regression line fitted to estimate the relationship.

Figure 3: Linear Regression Model
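And here is a minimal regression sketch in Python, fitting a straight line through illustrative trial data with a least-squares fit.

```python
# A linear-regression sketch: fit measured throughput (y) against the feed rate
# set-point (x) with numpy's least-squares polyfit. All values are illustrative.
import numpy as np
import matplotlib.pyplot as plt

feed_rate  = np.array([30, 30, 35, 35, 40, 40, 45, 45])   # set-point, TPH
throughput = np.array([29, 30, 33, 34, 36, 38, 38, 40])   # measured, TPH

slope, intercept = np.polyfit(feed_rate, throughput, deg=1)
print(f"estimated gain: {slope:.2f} TPH of output per TPH of feed set-point")

plt.scatter(feed_rate, throughput, label="trial data")
plt.plot(feed_rate, slope * feed_rate + intercept, label="fitted line")
plt.xlabel("feed rate set-point (TPH)")
plt.ylabel("measured throughput (TPH)")
plt.legend()
plt.show()
```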
If your data set or historian exceeds 100 gigabytes or noise factors outnumber controllable factors, it might be time to call on a data science toolkit (Python, R, ML clustering) or even artificial intelligence programs. These tools can compress analysis time from weeks to hours.

Step 5: Implement and Integrate
Repeat, refine and expand the data. DOE is not a one-off project but an ongoing rhythm. The baseline data establishes the comparison against your benchmarks or the equipment manufacturer’s rating. The experiments determine whether the lever you are investigating is the biggest one. Once that lever is pulled, the next DOE confirms the gains against the performance indicator you are measuring. Only then should you standardize the setting on the equipment, and then the cycle restarts. Most importantly, communicate with operators; they are your biggest ally.

When data sets are turned into disciplined experiments, the path from a nameplate 50 tons per hour (TPH) to a real 50 TPH is no longer guesswork. You get there by following a routine of repeatable, engineered Design of Experiments. If you’re short on time or analytical horsepower, outside experts can accelerate the journey. The roadmap, however, is yours.

Author: Justin Price
Co-CEO, Evergreen Engineering Inc.