I know I shouldn’t let it bother me, but it does. Hearing/reading the following really bugs me.
“Projections are a waste of time.”
“All projections do is average past seasons.”
“I don’t trust any projections that don’t go out on a limb.”
“Projections are silly. They’re just guesses. I can guess as well as a so-called expert.”
I shouldn't, but I take these personally. I'm frustrated that the comments are mostly naïve about what a projection truly is, as well as its purpose. That's the nice way of saying it. For some it goes beyond naivety. Some are too lazy to educate themselves on the concept, while others are too arrogant to care. Regardless, they're passing judgment on a subject while lacking the underlying understanding to tender a valid argument.
When someone contended they didn't use projections to draft their team, I'd counter that everyone uses projections. Some may not be spreadsheet-driven, but how we feel about a player is our projection.
Then I looked up the online definition of a projection:
An estimate or forecast of a future situation or trend based on a study of present ones.
And compared that to the definition of a prediction:
Something said or estimated that will happen in the future.
It turns out I was wrong: we don't all use projections; some use predictions. The difference is "based on a study." Projections have a theoretical basis. Predictions may, but they don't have to. It's cliché, but it works:
All projections are predictions but not all predictions are projections.
What follows is my view of a projection, what it is and how it should be used. This is all a precursor to bringing my projection methodology out from behind the paywall. Sorry, the 2018 projections themselves remain available only to subscribers, but hopefully understanding what they are and where they come from deepens your knowledge base and helps you win.
There are several ways to consider a projection. My favorite is a weighted average of all plausible outcomes. Think of it this way. Let’s say each season were played a gazillion times. The projection would be the average of the gazillion outcomes. The weighted average encompasses all combinations of skills and playing time.
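The weighted-average idea can be made concrete with a tiny Monte Carlo sketch. Everything below is invented for illustration (the playing time and home run rate distributions are assumptions, not the actual projection method): replay a season many times, drawing a random number of plate appearances and a noisy HR rate each time, then average the outcomes.

```python
import random

random.seed(1)

def simulate_season():
    # One hypothetical season: invented distributions for illustration only.
    pa = min(700, max(0, int(random.gauss(600, 80))))  # plate appearances
    hr_rate = max(0.0, random.gauss(0.045, 0.008))     # home runs per PA
    return pa * hr_rate                                # home runs this "season"

# The projection is the average outcome over many replayed seasons.
trials = 100_000
projection = sum(simulate_season() for _ in range(trials)) / trials
print(round(projection, 1))  # lands in the neighborhood of 600 * 0.045, ~27 HR
```

The specific numbers don't matter; the point is that the single projected value is the mean of a whole distribution of plausible seasons, not a bold guess about any one of them.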
Let's start with the latter. Possible playing time for everyone ranges from zero plate appearances (PA) or innings pitched (IP) to not missing a game or start. The chances of either extreme are remote, but over a gazillion seasons, all players will have some seasons where they get hurt in the spring. Very few hitters can reasonably be projected for 162 games, while a decent number of pitchers will start 32 or 33 times, though only a rare few of those starts will be complete games. The rest of the playing time projections will range in between, with a cluster around historical numbers. Players with an injury history will have a greater number on the lower side, with fewer above their recent amount.
Let’s forget performance and focus on just playing time of full-time hitters without an injury history, getting days off here and there. This group usually receives about 650 PA, spanning 150-155 games. What should the correct playing time expectation be?
- More than 650 PA
- Fewer than 650 PA
- Exactly 650 PA
Over a gazillion seasons, the average playing time will be fewer than 650 trips to the dish. There are about 150-155 chances for an injury truncating a season compared to only 7-12 opportunities for more playing time. Accounting for all gazillion seasons, the average is below 650. Keep this in mind when thinking about expected playing time.
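The asymmetry can be shown with back-of-the-envelope expected-value arithmetic. The probabilities and PA totals below are invented for illustration; the point is only that a modest injury risk with a large downside outweighs a small chance at a few extra games.

```python
# Toy expected-value check with assumed probabilities: a lost half-season
# hurts far more than a handful of extra games helps.
p_injury = 0.20          # assumed chance of a meaningful injury
pa_if_injured = 450      # assumed average PA in an injury-shortened season
p_extra = 0.05           # assumed chance of squeezing out extra playing time
pa_if_extra = 680        # near the practical ceiling
pa_baseline = 650        # the "healthy full-timer" norm

expected_pa = (p_injury * pa_if_injured
               + p_extra * pa_if_extra
               + (1 - p_injury - p_extra) * pa_baseline)
print(expected_pa)  # 611.5 -- below 650 even though 650 is the most likely outcome
```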
Shifting attention to a batter getting 200-250 PA, totaling maybe 75 contests, there are several pathways to get there. Some are part-time players, others lost time to injury. There are always mid-season call-ups, those bouncing between the majors and minors, etc. Each case is different, with a unique array of playing time possibilities. It's impossible to answer the above question with 250 PA without more info. However, it stands to reason that over a gazillion seasons, more 250 PA hitters will be projected for more than that as compared to 650 PA batters.
Let's put the theory to the test. I did a study using the previous six seasons, yielding five season-to-season data points. The test looked at the number of PA at different levels, determining how many players received more or fewer the following year. I used five and ten percent more (and fewer) as the targets. So, a 600 PA hitter needs more than 660 to be counted as more than ten percent, while a 200 PA guy needs only something above 220. Here are the results:
| PA | # players | 0-5% more | 5-10% more | >10% more | 0-5% fewer | 5-10% fewer | >10% fewer | overall more | overall fewer |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
The 651-plus subset encompasses the most players, yet only one hitter bettered their previous season's total by at least ten percent. In fact, only 18 percent beat the number at all. On the other hand, a whopping 44 percent dropped by at least 65 PA. That's pretty mind-boggling, at least to me, screaming extreme prudence when projecting playing time. However, keep in mind all players are subject to similar scrutiny, so if everyone is docked some PA from the previous campaign, their fantasy ranking is unchanged on a relative basis. Still, this helps explain why projections rarely expect a great deal more production for players at this level.
As anticipated, the fewer PA garnered one season, the better the chance of accruing more the next. Don't read too much into so many being either ten percent more or fewer at the lower levels. Again, 10 percent more than, say, 300 is 330, or seven to ten games.
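The bucketing used above can be sketched as a simple classifier. This is a minimal reconstruction of the scheme as described, not the actual study code, and the example players are made up:

```python
def pa_change_bucket(pa_year1, pa_year2):
    # Classify the year-over-year PA change into the study's bands:
    # 0-5%, 5-10%, or >10%, in the "more" or "fewer" direction.
    pct = (pa_year2 - pa_year1) / pa_year1 * 100
    direction = "more" if pct >= 0 else "fewer"
    size = abs(pct)
    if size <= 5:
        band = "0-5%"
    elif size <= 10:
        band = "5-10%"
    else:
        band = ">10%"
    return f"{band} {direction}"

print(pa_change_bucket(600, 661))  # >10% more: 661 clears the 660 threshold
print(pa_change_bucket(200, 221))  # >10% more: 221 clears the 220 threshold
print(pa_change_bucket(650, 600))  # 5-10% fewer
```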
Skills analysis is quirky. Even setting aside that outcomes aren't an exact translation of skills, we're really looking at a range, not a singular measure. That is, if a player's skill is "70", some days he'll play like a "65" and some like a "75", without any influence from outside factors like luck. Bringing in luck while still ignoring playing time combinations, we need to account for a range of skills affected by a slew of luck-induced scenarios. Now play all of this out for every playing time possibility, and maybe even a gazillion data points aren't enough.
To get an idea how repeatable performance is, I did a study like the above using hitters' home run and stolen base rates. It's important to note HR and SB rate aren't true skills, but they make a suitable proxy. The reason is there's some happenstance involved with each. Homers are influenced by park factors as well as some luck with respect to weather conditions and where in the park the ball was hit. Steals depend on how often a player reaches base in a scenario conducive to an attempt and is given the green light. Both could be distilled further into more of a skill metric, but that's not necessary for this discussion. It's part of making the projection sausage, which will be detailed in the follow-up discussion mentioned earlier.
To qualify, a hitter needed 200 PA and at least 10 HR or SB during the season in question. Their performance was converted to a rate using PA and compared to the average rate of everyone in the study that season, yielding an index. A player with a rate equal to the league average was scored as 100. Better than league average was over 100; worse was below. Doing it this way accounts for both playing time differences from season to season and the league context each year. Hitting the same number of homers in the same number of PA in 2017 as in 2016 isn't the same, since more homers were struck last season.
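The indexing works out like this. The function is a minimal sketch of the scheme described above, and the numbers plugged in are hypothetical:

```python
def performance_index(player_hr, player_pa, league_hr_rate):
    # Index a player's HR-per-PA rate against that season's league rate.
    # 100 = league average, above 100 = better, below = worse.
    player_rate = player_hr / player_pa
    return player_rate / league_hr_rate * 100

# Hypothetical example: 25 HR in 550 PA during a season where the league
# averaged one homer every 30 PA.
print(round(performance_index(25, 550, 1 / 30)))  # 136
```

Because each season is indexed against its own league rate, a 136 in a high-homer year and a 136 in a low-homer year describe the same relative skill, which is what lets seasons be compared at all.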
Incorporating six years of data (giving five data points) reveals 60 percent of qualified batters crack homers at a lower rate from one season to the next, while 69 percent swipe bags at a poorer pace. Admittedly, this is vague, and you no doubt have some granular questions concerning the results. Platinum subscribers will have their queries answered soon, as this will be written up in a chapter of the Z Book. For this discussion, suffice it to say more than half of fantasy-relevant batters perform at a lower rate than the previous season.
Keep in mind, this looks at rate of performance. More than half of hitters exhibit a rate worse than the previous season's while probably playing less. This isn't conjecture; it's research-based fact. Those chiding the efficacy of projections do so without any inkling of this, and that is the crux of what frustrates me. To berate projections without truly knowing what they are is ignorant.
I can appreciate disagreeing with their importance as it pertains to playing fantasy baseball. If you tell me your ballpark estimations of performance work for you, that's fine. It means you excel at other elements of game play. You can gauge draft flow and react accordingly. You know your opponents and their tendencies. You have a keen sense of where your expectations sit relative to the market, and know how to take advantage to the fullest. You work your tail off in-season, making sage pickups and managing your roster to achieve maximum points.
On the other hand, using projections as opposed to predictions isn't a guarantee of success – not even close. I don't want to put a percentage on it, but if each factor were a slice of pizza sized in proportion to its importance, eating the projection slice would leave me hungry. Very hungry. And envious of those eating the other slices, a metaphor for possessing the other talents.
As good as you are, all I’m saying is you could be even better with more refined player expectations.
Obviously, you can choose otherwise. All I ask is that you think twice before denigrating a process without an honest understanding of what it's all about and how it's supposed to be used. There's a method to the projection madness. Projections aren't a waste of time. They're more than guesses. They aren't supposed to go out on a limb. They're supposed to provide a plausible baseline of player performance. What you choose to do with them (which includes ignoring them) is your call.
Thanks for letting me vent a bit. For those interested, I’ll soon be posting my projection methodology in this space, as well as providing the complete research discussed above for Platinum subscribers.