Thanks for the article. Another important point to add is that human systems are malleable: individually and collectively, people respond to being quantified and to the methods used to make predictions about them.
For example, say a credit scoring model finds that people who use lots of exclamation marks on social media are less likely to repay loans, and it scores them accordingly. Eventually some people learn that dropping exclamation marks raises their credit score without actually increasing their likelihood of repayment. Here, the very act of using exclamation marks as a predictor makes it an unreliable predictor. I guess no one will be hurt if exclamation marks go extinct, but in many other cases the unintended consequences can be grave.
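To make this concrete, here is a minimal sketch in Python of that feedback effect. The setup and all the numbers are made up for illustration: repayment behaviour stays fixed, but once people learn the rule, exclamation-mark use decouples from it and the feature stops predicting anything.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent trait: whether a person repays their loan (unchanged throughout).
repays = rng.random(n) < 0.8

# Before deployment: exclamation-mark use happens to correlate with default
# (defaulters use more of them, by assumption).
excl_before = np.where(repays, rng.poisson(1.0, n), rng.poisson(3.0, n))

# After people learn the rule: everyone drops exclamation marks,
# while their actual repayment behaviour stays exactly the same.
excl_after = rng.poisson(0.2, n)

def corr(feature, target):
    return np.corrcoef(feature, target.astype(float))[0, 1]

print("correlation before gaming:", corr(excl_before, repays))  # clearly negative
print("correlation after gaming: ", corr(excl_after, repays))   # roughly zero
```

The lender's model was trained on the "before" distribution, so once behaviour shifts it is effectively scoring noise.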
Say the police start relying on people's social networks to predict who is likely to commit a crime; that reliance will change the social networks themselves. Because certain communities have higher crime rates than others, it might increase segregation.
So in many circumstances, rather than predicting the future, AI might end up shaping it.
Well, this is another reason why causality is important: sooner or later the correlations will go away and your model will be doomed.
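A toy illustration of that last point, again with an invented setup: suppose income genuinely drives repayment, while exclamation-mark use only tracks repayment through its (assumed) link to income. When people game the proxy, the spurious feature dies but the causal one keeps working.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Causal feature: income genuinely drives repayment.
income = rng.normal(0, 1, n)
repays = (income + rng.normal(0, 1, n)) > 0

# Spurious feature: exclamation use merely tracks income at first.
excl_before = -income + rng.normal(0, 0.5, n)

# After gaming: exclamation use decouples from income entirely.
excl_after = rng.normal(0, 1, n)

def corr(x, y):
    return np.corrcoef(x, y.astype(float))[0, 1]

print("income vs repayment (causal, stable):", corr(income, repays))
print("exclamations before gaming:          ", corr(excl_before, repays))
print("exclamations after gaming:           ", corr(excl_after, repays))
```

A model built on the causal feature survives the behaviour shift; one built on the correlate does not.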