BIG BLUE FANS FOR BASKETBALL

ANALYSIS OF THE GAME OF BASKETBALL

 

NET GAME EFFICIENCY

After many years of examining efficiency data, both offensive and defensive, as well as more conventional measures of playing effectiveness such as scoring averages for and against, it is intuitively obvious that a normalized measure of scoring rates provides a more consistent basis for comparing teams, game to game and season to season. For example, a team that plays like Princeton may average only 64 possessions per game while holding its opponents to only 56 points per game. However, does that low opponent scoring average really indicate more effective defense than that of a team like North Carolina, which plays at a pace of about 90 possessions per game and allows its opponents to score an average of 72 points per game?

Many people look at the 56 ppg scoring average versus the 72 ppg scoring average, comment on the great effectiveness of the Princeton defense, and fail to give the 72 ppg defensive effort any credit. However, when one normalizes this data for pace, one finds that the Princeton-like team allows its opponents to score at a rate of 0.875 points per possession, whereas the North Carolina-like team allows its opponents to score only 0.800 points per possession.

A similar analysis of scoring averages produces similar conclusions about offensive effectiveness. If the Runnin' Rebels score 90 ppg on an average of 100 possessions per game, do they really have a more efficient offense than a Florida-like team that scores 80 points per game on 80 possessions per game? Again, in this hypothetical example the Runnin' Rebels score 0.900 points per possession while the Gators score at a rate of 1.000 points per possession.
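
A few lines of Python make the arithmetic explicit; the figures are simply the hypothetical numbers used above:

    # Normalize scoring for pace: points per possession instead of points per game.
    def points_per_possession(points_per_game, possessions_per_game):
        return points_per_game / possessions_per_game

    # Defensive comparison from the hypothetical example above
    print(points_per_possession(56, 64))    # Princeton-like defense allows 0.875 ppp
    print(points_per_possession(72, 90))    # North Carolina-like defense allows 0.800 ppp

    # Offensive comparison from the hypothetical example above
    print(points_per_possession(90, 100))   # Runnin' Rebels-like offense scores 0.900 ppp
    print(points_per_possession(80, 80))    # Florida-like offense scores 1.000 ppp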

Offensive and defensive efficiencies such as these, computed for real teams and their real results, provide a better basis for comparing teams' offensive and defensive effectiveness. However, being the best offensive team or the best defensive team does not necessarily equate to being the best team. The best teams are those that use their available resources at each end of the floor to maximize the spread between their offensive and defensive efficiencies.

For example, consider the following three hypothetical teams and their corresponding offensive and defensive efficiencies.

 

Team   Offensive Efficiency   Defensive Efficiency
A      1.000                  0.860
B      0.950                  0.750
C      0.860                  0.700

TABLE I

Clearly, Team A has the best offense and Team C the best defense. However, the best team is Team B, because Team B has a Net Game Efficiency [NGE] of 0.200 points per possession [ppp], while Teams A and C have NGE values of 0.140 ppp and 0.160 ppp, respectively. In this example, Team C is stronger than Team A as well.
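
In formula terms, NGE is simply offensive efficiency minus defensive efficiency, both in points per possession. A short sketch using the Table I values:

    # Net Game Efficiency: offensive efficiency minus defensive efficiency (ppp),
    # using the three hypothetical teams from Table I.
    teams = {"A": (1.000, 0.860), "B": (0.950, 0.750), "C": (0.860, 0.700)}

    for name, (off_eff, def_eff) in teams.items():
        print(f"Team {name}: NGE = {off_eff - def_eff:.3f} ppp")
    # Team A: 0.140, Team B: 0.200, Team C: 0.160, so Team B is the strongest by NGE.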

As a team's NGE increases, so does that team's winning percentage. Is this relationship valid? Absolutely. Are there other variables that produce scatter in the data when plotting NGE versus winning percentage? Yes. However, despite these other variables, it is clear that a team's goal should be to maximize its NGE for the season in order to maximize its competitiveness and its winning percentage.

Regardless of the NGE value a particular team has achieved, that team's performance varies over the course of a season. Every team will perform better than its average NGE in about half of its games, and correspondingly perform worse than its average NGE in about half of its games. Statisticians quantify the spread of this game-by-game data about the mean using the standard deviation.
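
For readers who want to compute these values themselves, a minimal sketch follows; the per-game NGE values here are invented purely to illustrate the calculation:

    import statistics

    # Hypothetical per-game NGE values (points per possession) for one season,
    # invented purely to illustrate the calculation.
    game_nge = [0.21, 0.05, 0.18, -0.04, 0.12, 0.30, 0.02, 0.15, -0.10, 0.22]

    mean_nge = statistics.mean(game_nge)   # season average NGE
    std_nge = statistics.stdev(game_nge)   # game-to-game spread about that mean

    print(f"Mean NGE: {mean_nge:.3f} ppp, Standard Deviation: {std_nge:.3f} ppp")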

When the mean NGE is positive and larger than the standard deviation, the team will nearly always win. As the mean NGE falls, the probability of winning falls, as does the winning percentage. When the mean NGE is zero, the team is likely to lose about as many games as it wins. When the NGE ventures into negative territory, the team is more likely to lose than win, and its winning percentage continues to fall. Ultimately, at the other end of the spectrum, when the average NGE is negative and its magnitude exceeds the standard deviation, the team will rarely win any games. The following table illustrates these relationships, and a rough calculation putting numbers on them appears after the table:

 

Avg NGE   Std Dev   P(Win)      P(Lose)     General Statement of Probable Winning Percentage
 0.200    0.160     Very High   Very Low    Only loses the rare upset, when the opponent plays exceedingly well and your team does not.
 0.100    0.160     Good        Low         Wins the majority of its games, but loses enough to leave fans feeling the team is inconsistent.
 0.000    0.160     Average     Average     Wins and loses about the same number of games; beats some better teams and loses to some inferior teams.
-0.100    0.160     Low         Good        Loses the majority of its games, but wins enough to leave fans hopeful, thinking "if only..."
-0.200    0.160     Very Low    Very High   Only wins the rare upset, when the opponent plays exceedingly poorly and your team plays very well.

TABLE II
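
To put rough numbers on Table II, one can assume that a team's game-to-game NGE is approximately normally distributed about its season mean (an assumption made purely for illustration; nothing above establishes normality). The chance of winning a given game is then roughly the chance that the game's NGE comes out above zero:

    import math

    def win_probability(mean_nge, std_dev):
        # Rough per-game win probability, assuming per-game NGE is approximately
        # Normal(mean_nge, std_dev): P(win) is the chance that a game's NGE > 0.
        z = mean_nge / std_dev
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    for mean in (0.200, 0.100, 0.000, -0.100, -0.200):
        print(f"Mean NGE {mean:+.3f}: P(win) ~ {win_probability(mean, 0.160):.2f}")
    # Roughly 0.89, 0.73, 0.50, 0.27, and 0.11, consistent with the qualitative labels in Table II.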

For several years, I have pointed out that only a few teams are capable of competing for national honors each season. These nationally competitive teams are those that post high NGE values for the season. Few teams can and do achieve at these levels each year. These teams are special.

I have even gone so far as to say that UK needs to post a season NGE of 0.160 ppp or higher to rise into this elite group in a given season. What is the basis for that statement? The following are the UK NGE and NGE standard deviation data for the 2005 through 2010 seasons.

 

Season   NGE     Std Dev   Comment
2005     0.138   0.158     A good team that won a majority of its games and made a nice NCAA run.
2006     0.049   0.211     A team that performed poorly and sporadically throughout the year, and flamed out of the NCAA tournament early.
2007     0.102   0.161     A team that performed more consistently than the 2006 team, but nonetheless lost a considerable number of games. It should not perform in the post season at the level of the 2005 team.
2008     0.070   0.230     A team that performed inconsistently throughout the season, particularly over the first 13 games. However, this team played much better basketball through the SEC regular season schedule, until Patrick Patterson was lost for the season to injury with 3 regular season games remaining.
2009     0.109   0.217     A team that performed inconsistently throughout the season; most of the high standard deviation can be attributed to this team's collapse near the end of January. After a 16-4 start, this team limped to the finish line with a 6-10 record over its last 16 games.
2010     0.198   0.189     A team that performed very well over the course of the season, improving throughout, and posting a 35-3 record before falling out of the tournament in the Elite 8. The NGE value was a little low for a legitimate Final Four and/or championship run, and the standard deviation, while improved, was still higher than the usual variance for most UK teams. Look for additional improvement in both NGE and standard deviation in future years.

TABLE III

Please notice that the standard deviations in 2005 and 2007 are nearly equal. These values align closely with what I have observed for UK teams over the last 12 to 15 seasons; 2006 is clearly the exception, an aberration. That is why an NGE value of 0.160 ppp or higher appears to be the threshold for greatness for UK teams. The very high standard deviations in 2008 and 2009 probably point to the reason Gillispie failed to survive as UK head coach for more than two years.
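
Under the same rough normal-distribution assumption sketched after Table II, the 2010 figures (mean 0.198, standard deviation 0.189) imply a per-game win probability of roughly 0.85, or about 32 expected wins over a 38-game schedule, broadly in line with the actual 35-3 record:

    # Reusing the win_probability() sketch from after Table II.
    print(round(win_probability(0.198, 0.189), 2))   # about 0.85 per game for the 2010 team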

Year-to-year comparisons of NGE are valid for a particular team, or for one team versus other teams, if the teams play schedules of comparable strength. However, it is clear that teams with equal NGE values may not be equal: the team that played the tougher schedule achieved the better overall result. This raises the question of how strength of schedule should factor into year-to-year and team-to-team NGE comparisons.

Work on this relationship is proceeding. Pomeroy and Sagarin both attempt to account for schedule strength, as does the NCAA with its RPI system. The approach that I have been working with and fine-tuning utilizes the NCAA RPI SOS values, and while the results are not entirely reliable, I am satisfied that neither the Sagarin nor the Pomeroy SOS values are any better, and they are perhaps less reliable. However, I continue to seek a more reliable SOS adjustment system, and I am willing to move away from the RPI SOS method if and when a more reliable SOS measurement system appears.

This leads to the concept of Adjusted NGE, which is the raw NGE modified to reflect differences in schedule strength. For the past two NCAA tournaments, the teams with the highest Adjusted NGE values have advanced to the NCAA Final Four and won the tournament.
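
The exact adjustment is not spelled out here, and the details are still being refined. Purely as an illustration of the idea (a hypothetical form, not the method actually used), one could shift a team's raw NGE in proportion to how far its RPI strength of schedule sits from the average schedule:

    # Hypothetical illustration only; the actual Adjusted NGE formula is not given above.
    # raw_nge:  season NGE in points per possession
    # sos:      the team's RPI strength-of-schedule value
    # avg_sos:  the average SOS across all teams
    # k:        a scaling constant that would have to be fit to real results
    def adjusted_nge(raw_nge, sos, avg_sos, k=1.0):
        return raw_nge + k * (sos - avg_sos)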

Submitted by Richard Cheeks

CHECK OUT THESE OTHER ANALYTICAL WRITINGS

What Is Basketball?

What is a Possession?

Change in Position on Definition of Possessions

What Is Net Game Efficiency?

Why Do "Upsets" Occur?

What Is the Theoretical Basis for the Game ANE Calculation?

Do Objective Performance Measures Like NGE Account for Intangibles?

 


Copyright 2006-10
SugarHill Communications of Kentucky
All Rights Reserved