One of my favourite recurring tropes of AI speculation and singularitarian deep-time thinking is meditations on how an evil AI or something similar might destroy us.
And all I can think is: we already have one of those. It is pretty clear to anyone who’s paying attention that:

1. a marketplace regime of firms dedicated to maximizing profit has—broadly speaking—added a lot of value to the world
2. there are a lot of important cases where corporate profit maximization causes harm to humans
3. corporations are—broadly speaking—really good at ensuring that their needs are met.
From Mini. Quiet Babylon. Intriguing.
Some of this is a time-horizon issue. At a sufficiently long time horizon, things like environmental destruction or, to take a more popular AI trope, the destruction of the human race would seem to run counter to the profit motive.
Perhaps that's the wrinkle in this hypothesis. For a variety of reasons, corporations tend not to survive for very long; most have a lifespan shorter than that of the average human (the open market being a ruthless jungle to survive in). Not long enough, I suspect, to execute a long-term plan to destroy humanity. If this theory holds, perhaps we should regard antitrust regulation with even more respect.
The linked post mentions that corporations have corrupted American politics. In his book Government's End, Jonathan Rauch makes a great argument for why the American political system is so susceptible to the influence of special interests: it's not just the profit motive but inherent structural flaws in the American (and many other) forms of government.