Thanks, Brent, for the reference to Goodhart. However, methinks the situation is reversed; Goodhart's Curse is an example of the more general statistical-thermodynamics-like principle. I suspect that Maxwell himself may have conjectured something like this same principle while working on his Equipartition Principle.

Other similar ideas:
- Optimizer's Curse
- Goodhart's Law
- Campbell's Law
- Observer Effect
- Perverse Incentive
- Lucas Critique
- Citation Impact
- Cobra Effect

Since the basic issue is really a statistical thermodynamics one, there's *no need to ascribe intelligence or agency or motive*. For example, dumb evolution will tend to explore the state space and will therefore most likely find itself in one of the states that exemplify these principles. In short, *Mother Nature* is to blame, not *human nature*.

At 08:58 PM 7/12/2018, Brent Meeker wrote:
Sounds like an instance of the more general "Goodhart's Curse".
https://agentfoundations.org/item?id=1621
Brent Meeker
On 7/12/2018 9:09 AM, Henry Baker wrote:
I seem to recall someone from NASA telling me about some famous theorem of control theory that goes something like this:
Suppose you have a system with multiple degrees of freedom, e.g., x,y,z in 3-space, and you have sensors capable of sensing the coordinate positions, but the *precisions of the different dimensions are different*. I.e., you might be able to get the x,y positions to within one meter, but the z position might have a precision of only 5 meters.
The classical optimal control algorithms will tend to operate in such a way that most of the position uncertainty will be *forced* into the most imprecise coordinate.
Example: the usual GPS system is more accurate in the lat&lon dimensions than it is in the *elevation* dimension. So you construct an optimal control system to fly an airplane using GPS. The system operates beautifully to get your plane to land on the correct runway, but unfortunately, it nearly always crashes the plane due to the altitude uncertainty.
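A toy calculation can make this concrete. This is my own sketch, not any actual flight-control system: suppose the controller must absorb a fixed total position error E across x, y, z, and it minimizes the *measured* cost, i.e. each coordinate's error weighted by that sensor's precision. A quick Lagrange-multiplier argument gives an optimal allocation proportional to sigma_i^2, so the error piles up in the least observable coordinate:

```python
# Toy model (hypothetical numbers): a controller allocates a fixed total
# error E across x, y, z so as to minimize the measured cost
#   sum_i (e_i / sigma_i)**2,
# where sigma_i is the noise of the sensor for coordinate i.
# Minimizing subject to sum_i e_i = E gives e_i proportional to sigma_i**2.

sigmas = {"x": 1.0, "y": 1.0, "z": 5.0}   # GPS-like: altitude is 5x noisier
E = 1.0                                    # total error to be absorbed

total = sum(s**2 for s in sigmas.values())
alloc = {dim: E * s**2 / total for dim, s in sigmas.items()}

for dim, e in alloc.items():
    # z gets 25/27, i.e. about 93% of the error
    print(f"{dim}: {e:.3f}")
```

With these (made-up) numbers, the cost-minimizing controller stuffs roughly 93% of the error into the altitude channel, which is exactly the crash scenario above: the lat/lon track looks beautiful precisely because the slack was dumped into the coordinate the sensors see worst.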
In other words, the optimal control strategy is indeed "tail-stuffing", whereby small uncertainties are moved into degrees of freedom which are less observable, and hence are magnified.
Thus, a banker who can control where credit risk resides can move it into degrees of freedom that are less observable; furthermore, any optimal algorithm would do precisely the same.