Most of my better thoughts went into this post from 2017. This post is leftovers, footnotes, and expansions.
[Image: NASA JPL]
Terminology
c is cognitive speed. It's not intelligence; it's the speed at which a mind processes information and initiates actions. It's not an exact measure. Humans operate at 1c (mostly). A fly, casually dodging your flyswatter, operates at 10c or 100c.
#c is the local baseline cognitive speed. For humans, that's 1c. For some trading algorithms, it might be 10,000c or more.
>#c means "significantly faster than the local cognitive speed." Bullet time.
<#c means "significantly slower than the local cognitive speed."
Naval battles, in the age of the battleship, were both surprisingly slow and astonishingly fast. Engagements were plotted on a <#c scale. Relatively few 1c-scale decisions mattered. It didn't much matter how good your intuition was; you usually had plenty of time to make mistakes.
A chess game, despite requiring a lot of thought, takes place on a <#c scale. The point isn't thinking quickly (though that does help). It's thinking correctly.
Baseball requires >#c decisions. These days, a batter has to commit to a swing as the ball leaves the pitcher's hand. No time for conscious cognitive processing. No time to aim. Just analysis and action at a level below the steady 1c tick of conscious thought.
Mass and Momentum
A fighter moving faster than the eye can follow is a standard trope. A
cyborg gunslinger with iron muscles, sprinting through raindrops that
seem to be standing still.
There are still limitations. The faster you move pieces into alignment, the more force you need to apply (and then cancel out).
Consider lobbing a tennis ball gently up, then swatting it gently down,
then catching it. Takes about a second and doesn't require any
particular effort. If you want it to take a hundredth of a second, you
need to toss it harder, swat it harder, and then cancel out the force
when you catch it.
Then imagine swinging a gun into position. You not only need to apply an
impulse, you need to accurately cancel that impulse when the weapon is
in the correct position. Slam. Slam. The faster you move, the harder the
impact. The slower you brake, the longer it takes.
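To put rough numbers on that (mine, purely illustrative, not from the post): moving a mass a fixed distance in time t takes a force that scales with 1/t², so doing the same motion a hundred times faster takes ten thousand times the force. A minimal sketch:

```python
# Back-of-the-envelope sketch (illustrative numbers only): accelerate for half
# the distance, brake for the other half. Required force scales with 1/t^2.
def peak_force(mass_kg, distance_m, time_s):
    # d/2 = 0.5 * a * (t/2)^2  =>  a = 4 * d / t^2
    accel = 4.0 * distance_m / time_s ** 2
    return mass_kg * accel  # Newtons

# Hypothetical case: a 1 kg gun swung through 0.5 m.
for t in (1.0, 0.1, 0.01):
    print(f"{t:>5} s -> {peak_force(1.0, 0.5, t):,.0f} N")
# 1 second: 2 N. A hundredth of a second: 20,000 N.
```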
The mass you're moving is also a factor. It's a very human reaction,
when two boats are about to bonk into each other, to reach out to push
them apart. Can work for canoes. Does not work for yachts. That's how you lose a hand.
The speed of your servos (a category that includes biological muscles, motors, handwavy nanofibres, etc.) needs to be balanced with their strength. Strength is, of course, a vague term correlated to mass and scale: how strong (in human terms) is a flea? The material you're moving
through also matters. Around here, water is around 1,000x as dense as
air. It's why airplane propellers are fast and big and boat propellers
are (relatively) small and slow.
So there's a cap on speed, tied to mass and strength. That cap sits well above what human muscle can manage, but it exists. I wouldn't bet on humans in a human vs. robot fencing or ping-pong contest.
Choice
Where >#c excels is choice. Do you go left or right? Draw your sword or your hammer? >#c gives you time to weigh options... but it doesn't give you any extra information. Without information and properly formed heuristics, all speed gets you is wheel spin and pointless analysis. Hyperactivity is not productivity.
Weapon-wise, a generic sci-fi plasma gun is a good example for >#c
units. Adjustable rate of fire, spread, burst, intensity, and range. All
values you can adjust on the fly. And all, mercifully, ones the GM
doesn't need to adjudicate; just assume the optimal case.
Humans have a built-in "safety valve" for decision-making. It's obvious in chess. You sit there thinking "Do I move my knight or my queen? If I move my knight, my opponent will... [long branching chain of events]. But if I move my queen, my opponent will... [another long branching chain of events]. And both options look equally bad, no matter how you analyze them. Time is running out. Knight or queen, knight or queen." And so you move your bishop without any analysis at all.
Humans are weird that way.
Basically, choice A [move knight] has accumulated a heavy negative
weight from all the analysis you performed. Choice B [move queen] has an
equally heavy negative weight. And so, when a third choice presents
itself with no negative weight, you jump at it, even though it also has
no analysis. You can imagine a badly programmed chess computer following
the same process. For any given board state, give all legal moves a
positive score (based on the average of games from a database or magic
or whatever). Take the move with the best score. Analyze the subsequent
chain of moves (again, by database or magic), deducting points if the
move leads to a poor board state. The knight starts off with a high score, but analysis drops it until it reaches the queen's score. The program switches back and forth between knight and queen until both scores drop below the bishop's. Then time's up, and a suboptimal move is made.
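A minimal sketch (mine, not the post's) of that badly programmed engine: candidate moves start with optimistic scores, analysis only ever subtracts points, and the engine keeps re-examining whichever move currently looks best until the clock runs out.

```python
# Sketch of the "badly programmed" chess engine described above. All scores
# and penalties are made up; the point is the failure mode, not the chess.
import random

def analyze_step(move, scores):
    """One round of deeper analysis: it can only lower a move's score."""
    scores[move] -= random.uniform(0.5, 2.0)  # hypothetical penalty per ply examined

def pick_move(initial_scores, time_budget):
    scores = dict(initial_scores)
    for _ in range(time_budget):
        best = max(scores, key=scores.get)   # always re-examine the current favourite
        analyze_step(best, scores)           # ...which drags its score down
    # When time runs out, the "winner" may be a move that was never analyzed at all.
    return max(scores, key=scores.get)

if __name__ == "__main__":
    random.seed(1)
    # Hypothetical starting scores from a database of past games.
    opening_scores = {"knight": 10.0, "queen": 9.5, "bishop": 6.0}
    print(pick_move(opening_scores, time_budget=10))  # usually "bishop"
```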
Heuristics and Fuzzy Logic
To quote David Mitchell,
"If I knew how I knew everything I knew, I'd only be able to know half
as much because it'd be all be clogged up with where I know it from."
The more time you spend analyzing how you arrived at a decision, the
less time you have to make a decision. Heuristics (i.e. rules of thumb)
are very handy. If you drop a kitchen knife, you can spend time
analyzing its fall and preparing to catch it, or you can use the
heuristic "a falling knife has no handle" and jump backwards and out of
danger. If you use the heuristic "loud popping sounds mean people are shooting at me and want me dead; I should get into cover", then you're likely to avoid getting shot... and also to skip the local firework displays.
But heuristics might make a paranoid AI (or a paranoid human) uneasy. To mangle a quote, adopt a heuristic and you adopt the beliefs of its creator. Bias creeps in. Logic and control go out the window.
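A toy sketch of the tradeoff (purely hypothetical, not from the post): the heuristic is fast and usually right, but its creator's assumption - every loud pop is gunfire - comes along for the ride.

```python
# Toy illustration: a heuristic is fast, but it bakes in its creator's bias.
def heuristic_response(sound):
    # Creator's assumption: every loud pop is a threat.
    return "take cover" if "pop" in sound else "carry on"

def full_analysis_response(sound):
    # Slower, but distinguishes cases the heuristic lumps together.
    if "gunshot" in sound:
        return "take cover"
    if "firework" in sound:
        return "enjoy the show"
    return "carry on"

for s in ("loud pop (gunshot)", "loud pop (firework)", "rain"):
    print(s, "->", heuristic_response(s), "/", full_analysis_response(s))
```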
Isomorphism is another double-edged sword. Humans interact with unknown systems using isomorphism. If some element
of an unknown system can be mapped to a system we know, we will
immediately become more comfortable, even if the isomorphism later
proves misleading or useless. To put it another way, we like stories. Image recognition is a form of isomorphism, and can easily lead to dangerous misunderstandings.
Cyborgs and Hijacks
Ideally, you want #c software on #c hardware. Sticking a hyperadvanced
AI into a concealing shell of meat sounds great in theory, but it's like
being soaked in treacle. Muscles are comparatively slow. The human eye
is a wobbly mess of averaging, interpolation, blind spots, and
blackness. If you want to get optimal results, replace all your hardware
with something that can handle your new speed.
Then again, there's no particular reason parts of human frames can't operate at >#c for limited periods.
Moore's Law
If you want to think fast, you need mass. Read-write speeds are the only
real limitation, and locally those are currently well beyond 1c.
But a transistor, logic gate, or fancy quantum node has a minimum size. I'm willing to imagine small-molecule transistors on a surface; a handful of atoms at most. Current process nodes are marketed in the 2 nm range, so it's entirely
plausible. But there's still a limit. We know that it's possible to
simulate a human brain using an instrument approximately the size of a
human brain (since we do it all the time), so that's a handy benchmark.
Does the human body count as the support system?
Modern chips
need to be cooled to be effective. It's easy to make a computer
blowtorch-proof. It's hard to make it blowtorch-proof and portable.
Flash memory is sturdier than spinning discs and magnetic tape, but a
bullet-sized kinetic impact will still make a mess of delicate circuits.
I tried a few Fermi estimates to get approximations for
scale-to-speed conversions, but the results varied too widely to be
useful. Will the hardware needed to run a 1c AI be the size of a pebble,
a brick, a person, or a house? Darned if I know. If in doubt, the
general trend is to aim small.
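To show why those estimates spread so widely, here's a sketch where every number is my own guess; varying just two uncertain inputs swings the implied hardware volume across four orders of magnitude.

```python
# Fermi-estimate sketch. Every number here is a guess (mine, not the post's);
# the point is how widely the implied hardware volume swings.
BRAIN_OPS_PER_SEC = (1e15, 1e18)        # guessed range for "what a brain does"
HW_OPS_PER_SEC_PER_CM3 = (1e12, 1e16)   # guessed range for future hardware density

for ops in BRAIN_OPS_PER_SEC:
    for density in HW_OPS_PER_SEC_PER_CM3:
        volume_cm3 = ops / density
        print(f"{ops:.0e} ops/s at {density:.0e} ops/s/cm^3 -> {volume_cm3:g} cm^3")
# Answers span everything from a pebble (0.1 cm^3) to a wardrobe (1 m^3).
```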
Sidebar: Folding Laundry
Current laundry-folding robots look like photocopiers. Feed a cotton t-shirt into the tray and it gets folded. Not too impressive. More generalized laundry-folding robots require very specific and controlled conditions... and fold slower than a surly teenager on a Saturday evening.
A laundry-folding robot needs to identify edges in a mixed mass of material of varying colours and textures. It then needs to manipulate the fabric without damaging it, orient it correctly, identify axes, fold, and store the item. It needs to pair socks, deal with fitted sheets, and untangle some of the more complicated undergarments. These
are very difficult tasks. Cars, faces, and road signs are nice and
predictable. Laundry is a mess.
I can imagine people fighting
ambulatory AI with weighted tarps or sheets covered in misleading dazzle
camouflage, false folds, and Möbius edges.
Toss one over the robot and watch it struggle to get the thing off.
Edge detection, flummoxed. Strength turned against itself (by pushing here, one might apply pressure there, and if the sheet is strong enough the robot could tear its own head off by accident).
It's
difficult to imagine a superintelligence that would be flummoxed by a
folded sheet, but blind spots or suboptimal tasks could still exist.
Impairment and Contract Law
Can humans meaningfully consent?
Currently, humans are the upper
end of the local intelligence spectrum. Concern about consent runs
backwards, towards children, animals, drunk people, etc. It doesn't run
uphill because the adult human,
it is assumed, is already at the top of the hill.
But imagine a being that sees chess the way we see tic-tac-toe. To quote that old post:
Scale 3, the scale I think is probably closest to true, says "On the scale of possible problem solving abilities, humans, dolphins, and cats are pretty much the same. If aliens exist, there's a very good chance our ability to solve problems looks like a worm's ability."
Thought Experiment 1
Imagine
you are an explorer in a soft sci-fi setting and, at the same time, a
decently moral human. You land on an alien planet. The Ooblecks that
live there are willing to trade 1 bar of gold for 1 kg of salt. This is,
for all involved, a reasonably fair deal. But you discover that if you
put on a green floppy hat, the Ooblecks will trade 100 bars of gold for
the same 1 kg of salt. This is not a fair deal.
- Do you investigate why?
- If you discover that the Ooblecks have a deeply held belief that anyone who wears a green floppy hat is a supernatural figure with divine powers (which you do not possess), do you wear the hat?
- If you decide not to wear the hat, do you try to tell the Ooblecks that their belief is likely to lead to terrible trade deals?
- Do you tell anyone else about the Oobleck and the hat? Do you sell the information?
Thought Experiment 2
Imagine
you are a human in business on the present-day planet earth. You are
aware that many recent studies show gift-giving, even of small and
inconsequential gifts, biases decision-making and creates positive
bonds. You are also aware that businesses you want to have a positive relationship with have rules preventing their employees from receiving substantial gifts.
- How closely do you adhere to the spirit of those rules?
- Do you still provide gifts that fall below the threshold?
- Do you ignore the thresholds and hope the selected employees are either ignorant of their company's rules or sufficiently discreet?
- How exactly does a business lunch differ from a bribe of equal value?
[Image: Buck Godot. The website is down, but many archives are available. It's standard Foglio stuff.]
Humans
are terrible at pretty much everything. Long-term planning. Matching
goals to methods. Working together. There are notable exceptions, but
from an outside perspective, human history probably looks like an
appalling shambles. There's no need to invent psychohistory
to see that any interested superintelligence could probably play
humanity like a fiddle. Figure out the drives, then steer with a hand
concealed behind six layers of obfuscation.
But in direct dealings - trade, diplomacy, etc., assuming those are even possible or desirable - a superintelligence is faced with a problem. Humans might sign a contract trading gold for salt. They might think they're getting a good deal. But the superintelligence might know otherwise. Even if the consequences are explained, humans might not be able to understand the explanation, or might lack the necessary mechanisms to grapple with the results.
Presumably, other superintelligences can go "oh for
fuck's sake Garthrax, you're exploiting the lower beings again. Give the
salt back and say you're sorry." Contracts are agreements of
understanding, and if one side clearly doesn't understand, the contract
does not exist.
[Image: The Trouvelot Astronomical Drawings (1882)]
Summary
- No matter how quickly you think, physics caps physical movement speed. Tradeoffs are required.
- >#c cognitive speeds give more time to analyze choices, but not all analysis is useful. Information can be bottlenecked.
- Heuristics are necessary but potentially dangerous.
- Cyborgs are a poor compromise.
- Intelligence still requires mass. Superintelligence may or may not be easily portable. The human brain is a decent order-of-magnitude size benchmark.
- Humans may not be able to meaningfully consent when higher-order intelligences are involved.
GMing a >#c NPC
- Spend a lot of time between sessions thinking.
- Rely on vagueness and hints.
- Don't try excessively complicated plans. Pick one good twist or scheme and pile on layers of misdirection.
- Stories are useful hooks. Convince humans they've discovered a story and they'll follow it to the conclusion they expect, ideally in the wrong direction.
- Players like thinking they're clever. They like succeeding. It's rewarding, and infuriating, when they find out they're being manipulated. They will want revenge.
- Kill your darlings.
- Pop-culture superintelligences, like Hannibal Lecter, Sherlock Holmes, or Jeeves, can sometimes border on magic. Coincidence, not competence. A chain of events that works for story reasons, but doesn't hold up to examination. Try to avoid that. Ideally, the players should be able to follow the sequence from start to finish and be deeply impressed, not skeptical.
- Superintelligence isn't about boasting or monologuing. It's about results. If speaking helps, speak. Otherwise, act.
Playing a >#c PC
- All of the above.
- Rely on "given time, optimal outcome" tools. E.g. rather than describing exactly how your abilities allow you to precisely hit targets in a gunfight, just deal max damage (or equivalent). Save time, assume full competence.
- Don't monopolize the GM's time. Broadly describe your goals. Think inside your own head.
- It is still possible to be fooled. Work with your GM to describe your character's potential weaknesses.
- Nobody exclusively plays one PC in an RPG. Other players can help suggest optimal schemes, even if their PCs can't.
- Always have at least 3 goals in view, justified with at least 3 levels of "why".
Comments
- Skerples be seein the matrix again...
- I'm a machine learning software engineer and former cognitive neuroscientist, and I have thoughts related to this, but unfortunately not the time or energy at the moment, so I'm putting a pin in it...
- Additional human flaw: humans can run out of energy. :D
- If we assume that "humans are terrible at pretty much everything" and that super-intelligent aliens are presumably not, then I don't think the question of whether humans can consent to dealing with them should necessarily be concerning. As modern liberal notions of consent and contract law are products of our limited human thought, it's likely that the aliens have more accurate, coherent, etc., philosophies. This is, of course, also assuming that the same sort of intelligence required for interstellar travel and making dubiously consensual contracts with apes is the same sort that leads to better philosophizing, that they're not just making shit up to justify stealing precious materials from less advanced species, and so on.
- "If one side clearly doesn't understand, the contract does not exist" - I think this may be mixing together the questions of whether the contract exists and whether the contract is legitimate by our feeble human supposition. We can, for example, imagine a scenario where one party in a contract is aware that the other does not understand it, and is willing and able to enforce it regardless.