By popular (?) demand, I have somewhat reduced the weight of the regression-based "poll" in determining our state-by-state polling averages. The weight is now set so as to be equal to the weighted average of the reliability ratings of polls that make up the regression. Confused yet?
Say we have three polls: one with a reliability rating of 1.0, a second with a rating of 0.8, and a third with a rating of 0.2. The regression analysis itself is calculated with these weights in mind: for example, the 1.0-rated poll will have five times more influence on the outcome of the regression than the 0.2-rated poll.
To determine the average rating of the polls in the regression, we essentially multiply each poll's rating by itself, sum the results, and divide by the sum of the ratings. In the example above, the weighted average poll rating would be calculated as follows:
1.0 x 1.0 = 1.00
0.8 x 0.8 = 0.64
0.2 x 0.2 = 0.04
Sum = 1.68
1.68 / (1.0 + 0.8 + 0.2) = 1.68 / 2.0 = .84

In this example, the weighted average poll rating would be .84. As it turns out, the weighted average rating of all the polls that make up the regression is presently .77, and so this is the weight you'll see applied to the regression in determining the state-by-state averages (this number may bob upward or downward slightly over time).
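For the programming-inclined, the calculation above can be sketched in a few lines of Python. This is just an illustration of the arithmetic described here, not the actual code behind the averages; the function name and the sample ratings are my own.

```python
def regression_weight(ratings):
    """Weighted average of poll reliability ratings, where each
    rating serves as its own weight: sum(r * r) / sum(r)."""
    return sum(r * r for r in ratings) / sum(ratings)

# The three hypothetical polls from the example above.
ratings = [1.0, 0.8, 0.2]
print(round(regression_weight(ratings), 2))  # 0.84
```

Note that a single high-rated poll pulls the weight up faster than a low-rated poll pulls it down, since each rating counts in proportion to itself.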
The principle here is that the regression is only as good as the sum of its parts. If the polls that make up the regression are old or unreliable, the regression itself will be less reliable.
It seems to me that this is a "fair" way to determine the weight given to the regression. It is not necessarily the optimal way from the standpoint of predicting the outcome of the election. Quite frankly, I suspect that if anything, we'd wind up with better predictions if we gave the regression more, rather than less, weight. But, since I haven't yet done the empirical work to back up this claim, the fairness concern prevails.