© 06/30/92 Alexander Chislenko
Design for an infinitely fast computer, etc.
(This was my first message on the extropian list)
Limits to computation.
I once had an absolutely impractical idea on how to increase maximum computer speed beyond the evident physical limitations, and I hope it is appropriate to share it in this discussion.
'Conventional limitations' are determined by:
- the minimal possible size of computing elements, which determines the maximum density of memory; at our (my) current knowledge of physics, this can be put at around 10**100 elements (of Planck, not atomic! size) per cubic meter;
- speed of communications (let's assume c is the limit);
- architecture - let's assume it's parallel, and that 100% of useful transactions happen between physically adjacent computing elements. This gives us (we don't care about a few extra orders of magnitude here) 10**40 operations per second on each of 10**100 elements = 10**140 ops per second per cubic meter = the ultimate limit to the density of intelligence, times the volume of the Universe... (I am sure, though, that 1) this kind of intelligence would find further ways to improve itself, and 2) we cannot possibly make any relevant statement about the features of such complex objects)
- connectedness of the computer. No matter how well we arrange the elements, the computer should still be an integrated system, which means that any element should be able to communicate with any other one, and this can take up to a whole light-meter of time (the time light needs to cross one meter, about 3.3 nanoseconds) for just 10**100 elements' worth of complexity... (a back-of-envelope check of these numbers follows this list)
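As a sanity check on the round numbers above, here is a minimal back-of-envelope sketch. The Planck length and the speed of light are standard constants; the one-operation-per-light-crossing rule is my assumption, and the exact values come out a few orders of magnitude above the round figures in the list, which is consistent with the "few extra orders of magnitude" caveat:

```python
# Back-of-envelope check of the limits above (a sketch, not a rigorous bound).
PLANCK_LENGTH = 1.6e-35   # meters
C = 3.0e8                 # meters per second

# Planck-size elements packed into one cubic meter.
elements_per_m3 = (1.0 / PLANCK_LENGTH) ** 3      # ~2e104, i.e. ~10**100-ish

# One operation per light-crossing of a single element (my assumption).
ops_per_s_per_element = C / PLANCK_LENGTH         # ~2e43, i.e. ~10**40-ish

# Total operation rate per cubic meter.
ops_per_s_per_m3 = elements_per_m3 * ops_per_s_per_element

# Latency for a signal to cross the whole one-meter computer.
crossing_time = 1.0 / C                           # ~3.3 nanoseconds

print(f"elements per m^3:       {elements_per_m3:.1e}")
print(f"ops/s per element:      {ops_per_s_per_element:.1e}")
print(f"ops/s per m^3:          {ops_per_s_per_m3:.1e}")
print(f"light-meter of latency: {crossing_time:.1e} s")
```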
We can remove this last limitation by effectively increasing the dimensionality of the physical space, which can help squeeze larger volumes into smaller sizes (diameters) of 'computers'.
For that, we could create around a certain point (the 'center of the computer') a number of space-time bottle-shaped half-open bubbles (Planckeons). If each of them has an internal volume of 1 cubic meter, and their 'necks' are small enough to fit, say, a million of them within a 1-meter vicinity of our center, then we'll have a computing space with a radius of 2 meters and a volume of 1 million cubic meters.
By shrinking the necks further, putting zillions of new bubbles into each bubble, and so on, we can pack any volume into any size, and thus build a machine with NO SPEED LIMIT (a toy model of this recursion is sketched below).
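To make the geometry concrete, here is a toy model of the recursion. The branching factor and the per-bubble volume are the figures from the paragraph above; treating the outer radius as fixed while the nesting deepens is my simplification, not a derivation from any real physics:

```python
# Toy model of recursive bubble packing: internal volume grows geometrically
# with nesting depth while the outer hull stays the same size.
BUBBLES_PER_LEVEL = 10 ** 6   # bubble necks fitting around each center
VOLUME_PER_BUBBLE = 1.0       # cubic meters of internal volume per bubble

def packed_volume(depth: int) -> float:
    """Total internal volume after `depth` levels of nested bubbles."""
    total = 0.0
    for level in range(1, depth + 1):
        total += BUBBLES_PER_LEVEL ** level * VOLUME_PER_BUBBLE
    return total

# The outer radius stays at about 2 meters no matter how deep we nest
# (the extra volume hides inside the bubbles), so the signal-crossing
# time of the hull is constant while the internal volume explodes.
for depth in (1, 2, 3):
    print(f"depth {depth}: {packed_volume(depth):.1e} cubic meters "
          f"inside a hull of ~2-meter radius")
```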
As for the problems with energy dissipation, we can just close each sub-bubble as soon as we get the results of the computational task it has been assigned, and leave the spawned universes to digest their own heat, keeping our master-space clean and cool.
Well, it all just shows that we can easily solve theoretical problems with theoretical technologies...
The really interesting problem in developing extremely sophisticated computers (and other systems), in my opinion, is the evident fact that humans:
1) are increasingly unable to understand the skyrocketing complexity of the systems they develop;
2) are unwilling to incorporate superior design techniques into themselves (I did some polling on this), which is their only chance to catch up, at the expense of 'human nature' in its narrowly human sense;
3) are unwilling to let go of control.
This situation cannot last for long, and it will definitely explode long before the design techniques reach any fundamental physical limits.