Bill James: Judge and Altuve

Update 2/18/2018: I started writing this a couple of months ago and couldn’t finish after reading Bill James quote OPS, a very flawed baseball statistic and a tangent I don’t really care about.  If people want to throw around those kinds of stats, it only makes the results from this data model more valuable.

tl;dr This model reflects a team’s Win/Loss record based upon its players.  WAR does not.  This model uses the estimated Win/Loss record based upon Bill James’ own PE formula.  We could, as James suggests for the Yankees, adjust to real wins and losses very easily, but we don’t.  That is all….

——————————cut here——————————

I got directed to this article: Judge and Altuve | Articles | Bill James Online, written by Bill James, and there are some interesting tidbits I need to comment on.  It’s difficult thinking about baseball in the winter and I have been putting this off.  This post will be updated throughout the winter as I think of something different to say.

The article is about the value of Judge and Altuve as MVP.  This data model is clear and unambiguous: Aaron Judge is the AL MVP, ranked right behind Giancarlo Stanton, who is our NL MVP.  Here are our top 5 MLB players.

Rank WAA Name_TeamID Pos
+001+ 10.00 Corey_Kluber_CLE PITCH
+002+ 9.66 Giancarlo_Stanton_MIA RF
+003+ 8.92 Aaron_Judge_NYA RF-DH
+004+ 8.55 Max_Scherzer_WAS PITCH
+005+ 8.38 Paul_Goldschmidt_ARI 1B

AL, NL, pitchers, and batters are all ranked together in this data model.   Apparently Bill James agrees with the MVP voters that Altuve is AL MVP.  Whatever.  He has some interesting things to say in the article, which is a good read.  Here’s a blurb:

The first indication that there is a problem with applying the normal and general relationship is this.   The Yankees, by the normal and general relationship, should have won 102 games, when in fact they won only 91.   That’s a BIG gap. The Yankees played poorly in one-run games (18-26) and other close games, which is why they fell short of their expected wins.   I am getting ahead of my argument in making this statement now, but it is not right to give the Yankee players credit for winning 102 games when in fact they won only 91 games.   To give the Yankee players credit for winning 102 games when in fact they won only 91 games is what we would call an “error”.   It is not a “choice”; it is not an “option”.   It is an error.

When you express Judge’s RUNS. . .his run contributions. . . when you express his runs as a number of wins, you have to adjust for the fact that there are only 91 wins there, when there should be 102.  (The Astros should have won 101 games and did win 101 games, so that’s not an issue with Altuve.)  But back to the Yankees, one way to do that is to say that the Yankee win contributions, rather than being allowed to add up to 102, must add up to 91.

He makes an assumption which is not true.   WAR does not add up to anything, as we have shown here over and over.  This model has the sum of Yankees players adding up to 102 games exactly, according to Bill James’ Pythagorean Expectation formula.  Bill James is talking about this model, not WAR.
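For reference, here is a minimal sketch of that Pythagorean Expectation calculation in Python. The exponent of 2 is James’ classic form, and the 2017 Yankees run totals used here (858 scored, 660 allowed) are approximate and for illustration only:

def pythag_expected_wins(runs_scored, runs_allowed, games=162, exponent=2):
    # Bill James' Pythagorean Expectation: expected winning percentage
    # from runs scored and runs allowed, scaled to a full season.
    pct = runs_scored ** exponent / (runs_scored ** exponent + runs_allowed ** exponent)
    return games * pct

# 2017 Yankees, approximate totals: 858 runs scored, 660 allowed
print(round(pythag_expected_wins(858, 660)))   # ~102 expected wins vs. 91 actual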

There is a simple method to make this adjustment in this model.  We would tax NYA 11 games and ding every player according to playing time.  According to the table above, Aaron Judge has a WAA of 8.92.  He would lose 0.6 in the adjustment and drop to 8.32.  Since everyone in the league would be adjusted, the rankings could change, but in no way enough for Jose Altuve to move ahead.
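A rough sketch of how that tax could be allocated in code, assuming the adjustment is split in proportion to each player’s share of team playing time. How playing time would be measured (plate appearances, innings, or something else) is an assumption here, and the roster entry is illustrative:

def apply_team_tax(players, team_tax):
    # Spread a team-level shortfall (e.g. 102 expected minus 91 actual = 11
    # games for NYA) across players in proportion to playing time.
    total_time = sum(p['playing_time'] for p in players)
    return [
        {**p, 'waa_adj': p['waa'] - team_tax * p['playing_time'] / total_time}
        for p in players
    ]

# Illustrative only; the playing-time figure is made up:
roster = [
    {'name': 'Aaron_Judge_NYA', 'waa': 8.92, 'playing_time': 678},
    # ...rest of the NYA roster would go here
]

In the example above, Judge giving back 0.6 of the 11-game tax works out to a bit over a 5% share of the team’s total playing time.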

Right now I don’t want to do this.  Runs are the currency that achieves wins, and they are what players accumulate above or below average.  We can assign run production with virtually 100% accuracy.  That gets converted to wins according to Pythagorean Expectation, which is the WAA value players carry from team to team when they get traded.  This value measure is the same for all leagues, from MLB to A+ to the JPL to even little league, and this model must work the same for all of them.  The disparity between PE and real wins and losses can be magnified in the lower leagues, which could obfuscate players who are only there to prove themselves, on teams where wins and losses may not even matter.
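To make the runs-to-wins step concrete, here is one way such a conversion could look: rerun the Pythagorean Expectation with a player’s runs above average removed, and credit the player with the difference in expected wins. This is a hedged illustration of the idea, not necessarily this model’s exact bookkeeping:

def runs_to_wins(player_runs_above_avg, team_rs, team_ra, games=162):
    # Expected wins with and without the player's run contribution;
    # the difference is the player's share, expressed in win units.
    def pe_wins(rs, ra):
        return games * rs ** 2 / (rs ** 2 + ra ** 2)
    return pe_wins(team_rs, team_ra) - pe_wins(team_rs - player_runs_above_avg, team_ra)

# A batter 50 runs above average on a team scoring 800 and allowing 700:
print(round(runs_to_wins(50, 800, 700), 2))   # roughly 5 wins above average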

I’m torn on making this adjustment.  It can easily be done with this model, but it would create a split in valuations and, like Sabermetrics, raise the question of which value is correct.  I prefer the value that reflects the estimated wins and losses.  In the end I don’t think it would matter that much anyway.  Perhaps we’ll run some numbers and see.

The logic for applying the normal and usual relationship is that deviations from the normal and usual relationship should be attributed to luck. There is no such thing as an “ability” to hit better when the game is on the line, goes the argument; it is just luck.   It’s not a real ability.

We don’t know what causes a team to exceed or fall short of expectations.   We can’t predict the future; we can only estimate it.   Reality is the goal post; all estimates are a source of error.  Luck has nothing to do with it.

 

Update 2/18/2018:  This is where I need to stop commenting.