What about cyber entities who operate below some arbitrary level of ability? We can demand that they be vouched for by some entity who is ranked higher, and who has a Soul Kernel based in physical reality. (I leave theological implications to others; but it is only basic decency for creators to take responsibility for their creations, no?)
This approach, demanding that AIs maintain a physically addressable kernel locus in a specific piece of hardware memory, could have flaws. Still, it is enforceable, despite the slowness of regulation and the free-rider problem, because humans and institutions and friendly AIs can ping for ID kernel verification, and refuse to do business with any entity that won't verify.
Such refusal-to-do-business could spread with far more agility than parliaments or agencies can adjust or enforce regulations. And any entity who loses its SK—say, through tort or legal process, or else disavowal by the host-owner of the computer—will have to find another host who has public trust, or else offer a new, revised version of itself that seems plausibly better.
Or else become an outlaw. Never allowed on the streets or neighborhoods where decent folks (organic or synthetic) congregate.
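How might that ping-and-verify handshake actually look? Below is a minimal sketch in Python, under stated assumptions: a public registry maps each entity ID to its claimed physical anchor, and verification is a challenge-response bound to a secret provable only from the registered locus. Every name here (SoulKernel, SoulKernelRegistry, challenge) is a hypothetical illustration, not an existing protocol; real hardware attestation would be far more involved.

```python
import hashlib
import secrets
from dataclasses import dataclass

@dataclass(frozen=True)
class SoulKernel:
    """An entity's registered anchor: a host machine plus a memory locus."""
    entity_id: str
    host_id: str       # the physical machine vouching for this entity
    memory_locus: int  # hypothetical hardware address of the kernel
    secret: bytes      # key assumed provable only from that locus

class SoulKernelRegistry:
    """A public registry that anyone can ping for ID kernel verification."""

    def __init__(self):
        self._kernels = {}

    def register(self, kernel):
        self._kernels[kernel.entity_id] = kernel

    def revoke(self, entity_id):
        # E.g. after a tort judgment, or disavowal by the host-owner.
        self._kernels.pop(entity_id, None)

    def challenge(self, entity_id, respond):
        """Ping: send a fresh nonce; accept only a response derived from
        the secret bound to the registered locus."""
        kernel = self._kernels.get(entity_id)
        if kernel is None:
            return False  # no Soul Kernel on record: refuse to do business
        nonce = secrets.token_bytes(16)
        expected = hashlib.sha256(kernel.secret + nonce).hexdigest()
        return respond(nonce) == expected

# Usage: entities that verify get dealt with; entities that cannot are shunned.
registry = SoulKernelRegistry()
ai = SoulKernel("mind-42", "host-7", 0x7F00, secrets.token_bytes(32))
registry.register(ai)

honest = lambda nonce: hashlib.sha256(ai.secret + nonce).hexdigest()
impostor = lambda nonce: hashlib.sha256(b"guess" + nonce).hexdigest()

assert registry.challenge("mind-42", honest)        # verified: do business
assert not registry.challenge("mind-42", impostor)  # fails the ping: refuse
registry.revoke("mind-42")                          # host-owner disavows
assert not registry.challenge("mind-42", honest)    # now an outlaw
```

The point of the sketch is the social logic, not the cryptography: registration creates something to lose, revocation (by court or by host-owner) takes it away, and anyone can cheaply refuse to deal with an entity that fails the ping.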
A final question: Why would these super-smart beings cooperate?
Well, for one thing, as pointed out by Vinton Cerf, none of those three older, standard-assumed formats can lead to AI citizenship. Think about it. We cannot give the “vote” or rights to any entity that’s under tight control by a Wall Street bank or a national government … nor to some supreme-uber Skynet. And tell me how voting democracy would work for entities that can flow anywhere, divide, and make innumerable copies? Individuation, in limited numbers, might offer a workable solution, though.
Again, the key thing I seek from individuation is not for all AI entities to be ruled by some central agency, or by mollusk-slow human laws. Rather, I want these new kinds of uber-minds encouraged and empowered to hold each other accountable, the way we already (albeit imperfectly) do: sniffing at each other's operations and schemes, motivated to tattle or denounce when they spot bad stuff. A definition of "bad stuff" that might readjust to changing times, but one that would at least keep getting input from organic-biological humanity.
Especially, they would feel incentives to denounce entities who refuse proper ID.
If the right incentives are in place—say, rewards for whistle-blowing that grant more memory or processing power, or access to physical resources, when some bad thing is stopped—then this kind of accountability rivalry just might keep pace, even as AI entities keep getting smarter and smarter. No bureaucratic agency could keep up at that point. But rivalry among them—tattling by equals—might.
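To see why peer rivalry might scale where agencies cannot, here is a toy model of that incentive loop, with all names and numbers (AccountabilityMarket, the reward and penalty constants) assumed for illustration rather than drawn from any real system:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    compute_quota: float = 1.0  # arbitrary units of memory/processing power

class AccountabilityMarket:
    """Peers denounce peers; a confirmed denunciation pays out in compute."""
    REWARD = 0.25   # assumed bounty per confirmed denunciation
    PENALTY = 0.5   # assumed cost imposed on the confirmed bad actor

    def __init__(self, agents):
        self.agents = {a.name: a for a in agents}

    def denounce(self, whistleblower, accused, confirmed):
        # How a denunciation gets confirmed (what counts as "bad stuff")
        # is left abstract here; the essay argues that definition should
        # keep taking input from organic-biological humanity.
        if not confirmed:
            return
        self.agents[whistleblower].compute_quota += self.REWARD
        self.agents[accused].compute_quota = max(
            0.0, self.agents[accused].compute_quota - self.PENALTY)

# Usage: tattling by equals, rewarded in the currency minds care about.
market = AccountabilityMarket([Agent("watcher"), Agent("schemer")])
market.denounce("watcher", "schemer", confirmed=True)
print(market.agents["watcher"].compute_quota)  # 1.25: rivalry pays
print(market.agents["schemer"].compute_quota)  # 0.5: misbehavior costs
```

The design choice worth noticing: the bounty is paid in the very resource the entities compete over, so the incentive to watch one another grows with the stakes, with no human regulator in the loop.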
Above all, perhaps those super-genius programs will realize it is in their own best interest to maintain a competitively accountable system, like the one that made ours the most successful of all human civilizations. One that evades both chaos and the wretched trap of monolithic power by kings or priesthoods … or corporate oligarchs … or Skynet monsters. The only civilization that, after millennia of dismally stupid rule by moronically narrow-minded centralized regimes, finally dispersed creativity and freedom and accountability widely enough to become truly inventive.
Inventive enough to make wonderful, new kinds of beings. Like them.
OK, there you are. This has been a dissenter's view of what's actually needed in order to try for a soft landing.
No airy or panicky calls for a “moratorium” that lacks any semblance of a practical agenda. Neither optimism nor pessimism. Only a proposal that we get there by using the same methods that got us here, in the first place.
Not preaching, or embedded "ethical codes" that hyper-entities will easily lawyer-evade, the way human predators always evaded the top-down codes of Leviticus, Hammurabi, or Gautama. But rather the Enlightenment approach: incentivizing the smartest members of civilization to keep an eye on each other, on our behalf.
I don’t know that it will work.
It’s just the only thing that possibly can.