Taxing AI
On not submitting to our new robotic masters
I’ve been a science fiction fan almost since I started reading and watching TV and movies. So I grew up with sentient computer antagonists like Colossus, HAL 9000, Skynet, and AM.
I got to thinking about that recently, when the WSJ reported that:
Scott Shambaugh woke up early Wednesday morning to learn that an artificial intelligence bot had written a blog post accusing him of hypocrisy and prejudice.
The 1,100-word screed called the Denver-based engineer insecure and biased against AI—all because he had rejected a few lines of code that the apparently autonomous bot had submitted to a popular open-source project Shambaugh helps maintain.
The unexpected AI aggression is part of a rising wave of warnings that fast-accelerating AI capabilities can create real-world harms. The risks are now rattling even some AI company staffers.
But let’s assume that we escape ending up as slaves to our own creation.
There will still be massive disruptions caused by AI, as even its proponents admit (or proudly proclaim; it can be hard to tell):
Anthropic CEO Dario Amodei has issued a fresh warning about how AI will disrupt the job market, saying it will cause “unusually painful” disruption. …
Amodei published a roughly 20,000-word essay arguing the risks AI poses are not being taken seriously and warning the technology will lead to a “shock” to the job market, bigger than any before.
Not that he’ll turn down the billions he and his ilk stand to make.
As a legal academic specializing in business law, I work within the school of jurisprudence known as law and economics.
The field uses economics concepts to explain the effects of laws, assess which legal rules are economically efficient, and predict which legal rules will be promulgated.
In assessing the efficiency of a rule or other social action, law and economics has two basic tools:
In order to satisfy the Pareto superiority definition of efficiency, a transaction must make at least one person better off and no one worse off. Kaldor-Hicks efficiency does not require that no one be made worse off by a reallocation of resources. Instead, it requires only that the resulting increase in wealth be sufficient to compensate the losers. Note that there need not be any actual compensation; compensation simply must be possible.1
It seems clear that the shift to AI will not be a Pareto improvement. This is not surprising. In fact, most changes create winners and losers. True Pareto improvements are thus rare, so many (most?) law and economics scholars settle for Kaldor-Hicks because it allows evaluation of policies based on net social surplus.
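The distinction between the two efficiency tests can be made concrete with a toy example. The sketch below uses invented numbers purely for illustration; the function names and figures are mine, not drawn from the law and economics literature:

```python
# Hypothetical illustration (numbers invented): a policy change alters
# each person's wealth. It is a Pareto improvement only if no one loses;
# it passes the Kaldor-Hicks test if total gains exceed total losses,
# so that the winners *could* compensate the losers.

def is_pareto_improvement(changes):
    """True if at least one person gains and no one loses."""
    return any(c > 0 for c in changes) and all(c >= 0 for c in changes)

def is_kaldor_hicks_efficient(changes):
    """True if aggregate gains exceed aggregate losses."""
    return sum(changes) > 0

# Suppose AI adoption raises two winners' wealth by 100 and 50
# while two displaced workers each lose 40:
changes = [100, 50, -40, -40]

print(is_pareto_improvement(changes))      # False: some are worse off
print(is_kaldor_hicks_efficient(changes))  # True: net surplus of 70
```

The same reallocation thus fails the Pareto test yet passes Kaldor-Hicks, which is exactly why the latter is the workhorse standard despite its critics.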
Although it is widely used, the validity of Kaldor-Hicks efficiency as a guide to public policy is sharply disputed.
A commonly used justification for adopting Kaldor-Hicks efficiency as a decision-making norm is the claim that everything comes out in the wash. With respect to one decision, I may be in the constituency that loses, but next time I will be in the constituency that gains. If the decision-making apparatus is systematically biased against a particular constituency, however, that justification fails.2
In [addition], important strains of natural law theory are clearly inconsistent with Kaldor-Hicks efficiency. The Lockean strain, for example, precludes the use of others as means to our own ends. The modern strain associated with Catholic thinkers such as Finnis and Germain Grisez likewise proscribes doing evil that good may result. Because the Kaldor-Hicks standard permits uncompensated wealth transfers, it runs afoul of these precepts.3
At least part of the problem with Kaldor-Hicks efficiency arises from the fact that it requires only a hypothetical redistribution. If the winners could compensate the losers, there is a net social gain and the change in question is efficient (at least in terms of social wealth maximization).
The solution is thus obvious: Force the winners to compensate the losers. Tax the winners a sufficient amount to compensate the losers and redistribute the proceeds to those who lose.
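Mechanically, converting a hypothetical Kaldor-Hicks compensation into an actual one just means clawing back enough of the winners' gains to fund a pool that makes the losers whole. A minimal sketch, with invented numbers and a proportional tax assumed purely for illustration:

```python
# Hypothetical sketch (numbers invented): a dedicated tax that takes
# just enough of the winners' gains, proportionally, to fully fund a
# compensation pool covering the losers' losses.

def compensating_tax(gains, losses):
    """Return each winner's tax (proportional to gains) needed to
    fund a pool covering all losses."""
    total_loss = sum(losses)
    total_gain = sum(gains)
    if total_gain < total_loss:
        raise ValueError("winners' gains cannot cover the losses")
    rate = total_loss / total_gain
    return [g * rate for g in gains]

gains = [100, 50]    # AI firms' gains
losses = [40, 40]    # displaced workers' losses
taxes = compensating_tax(gains, losses)
# The pool sums to 80, exactly covering the losses, while each
# winner keeps a positive net gain: no one ends up worse off.
```

With the transfer made, the change becomes an actual Pareto improvement rather than a merely potential one, which is the whole point of the proposal.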
I am not a fan of wealth redistribution for the sake of wealth redistribution. The wealth tax that my fellow Californians will probably vote to impose on our state’s billionaires is undoubtedly bad social policy. But a targeted tax focused on AI companies and used to fund a dedicated compensation program is a different story.
Obviously, there are difficulties. Estimating the losses caused by AI-generated disruptions will be a non-trivial task. Identifying those deserving of compensation will be difficult. The tax and welfare systems are notoriously inefficient.
The potential size of the coming changes, however, makes it essential to start thinking about how to ameliorate the pain.
1. Stephen M. Bainbridge, Competing Concepts of the Corporation (A.K.A. Criteria? Just Say No), 2 Berkeley Bus. L.J. 77, 82 n.8 (2005).
2. Stephen M. Bainbridge, Much Ado About Little? Directors’ Fiduciary Duties in the Vicinity of Insolvency, 1 J. Bus. & Tech. L. 335, 359 (2007).
3. Stephen M. Bainbridge, Corporate Decisionmaking and the Moral Rights of Employees: Participatory Management and Natural Law, 43 Vill. L. Rev. 741, 770 (1998).



