“Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” – Stephen Hawking
…I used to think like that last year (months before September/13, when I figured out the universe was decompressor-like).
And I understand what he is saying: if we are a frail part of something powerful, in the long term an accident would happen that would make us disappear (completely). So even if a scientist were to conquer strong A.I. (as I planned to), in the long term only machines would remain. IF it were possible. It’s not.
In fact, shortly after, I thought exactly what Hawking just described, but before September/13 I realized that we would be doomed not only in the long term but also in the short term. Let me explain: the dynamics of international relations would culminate very fast in a scenario where one can either choose a short-term victory over one’s enemies by granting autonomy to A.I. and lose in the long term to A.I., or lose in both the short term and the long term. That would lead to A.I. gaining full independence very, very fast.
IF it were possible. It is not.
There is some evidence for my the-universe-as-a-decompressor-like-program idea that I have not yet disclosed:
-Humans getting better IQ scores (despite the decline of polygamy predicting a lower average (but a higher maximum, which is good in an interconnected world))
-The universe being able to transfer information “instantaneously” (from our POV); I have already mentioned this one, but am doing so again for completeness
Also, shortly after I was convinced that the universe was decompressor-like and that the combinatorial power of organic brains would never be surpassed, I discovered further evidence:
-Quantum mechanics looks random
-A decompressor-like universe would look random at its lowest levels
A deHash universe is just like a decompressor, but it uses new, outside information to “decompress”.
I could say “there is a cat on the couch” in the 1500s, and then have additional information “decompressed” from that statement thanks to advances in the anatomy and behavioral studies of cats, which would better describe the reality I hashed.
So, I could say: “This contains a lethal potion”
Then I could try describing that statement further: what lies inside “this”, and what is “this”? After gaining new outside knowledge (archeologists discovered that “this” was likely a rounded pot made of iron, for instance), I would test that against what I already knew. Sometimes I would reach a dead end, and my hypothetical world (my attempt to figure out what reality was) would “collapse”, and we would go back a few steps. Perhaps the iron pot didn’t fit the descriptions anymore, because the potion was an acid strong enough to dissolve iron, or because the person describing it came from a foreign land and was unlikely to possess an iron pot. We would then go a few steps back (not necessarily to the initial description) and continue the deHash attempts.
Collapsing. Rings a bell with me. But I’m no physicist, and I was only interested in knowing whether it was possible to conquer unlimited power in our universe with A.I. (it is not; I wouldn’t have made a blog otherwise).
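The refine-and-collapse loop described above can be sketched as a simple backtracking search. Everything here (the candidate guesses, the consistency rule about acid and iron) is a made-up illustration of the potion story, not a real algorithm:

```python
def dehash(base, layers, consistent):
    """Extend `base` with one guess per layer of outside knowledge; when no
    guess fits, the hypothetical world "collapses" and we backtrack."""
    def extend(world, remaining):
        if not remaining:
            return world  # fully deHashed description
        for guess in remaining[0]:
            if consistent(world + [guess]):
                result = extend(world + [guess], remaining[1:])
                if result is not None:
                    return result
        return None  # collapse: no guess fits, go back a step
    return extend([base], layers)

# Hypothetical layers of outside knowledge for the potion example:
layers = [
    ["iron pot", "clay pot"],         # what "this" might be
    ["strong acid", "hemlock brew"],  # what the potion might be
]

def consistent(world):
    # Assumed outside fact: a strong acid would dissolve an iron pot.
    return not ("iron pot" in world and "strong acid" in world)

print(dehash("this contains a lethal potion", layers, consistent))
# -> ['this contains a lethal potion', 'iron pot', 'hemlock brew']
```

Note how “iron pot” + “strong acid” is rejected and the search quietly backs up and tries “hemlock brew” instead, just like the collapse described above.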
Oh yeah, there are also the methods I used that I didn’t fully describe. I used decision functions.
So, I have this universe, that will need to be translated to bits so my A.I. can understand it. This opens up several questions:
1. Has evolution made so many calculations that I will need to give my A.I. an initial push to overcome humanity? (I assumed the answer was yes at the time)
2. Does intelligence (as we understand it) mean understanding (compressing) every bit in this universe with the same priority? (The answer is no; we emphasize some bits (related to survival/reproduction) way more than others (distant phenomena with no consequences to us))
If 1 applies, what patterns can I focus on? I discovered that colors in images tended to cluster, and that that locality also applied when describing sound, heat, and other information. So I classified universes (hypothetical and ours) in two groups:
1: Those where the color of pixel X is closer to the color of pixel X+1 (the next pixel) than max_difference_between_pixels/2
2: Those that do not have that characteristic
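A minimal sketch of that two-way test, assuming 1-D pixel values in 0–255 (the sample “universes” are made up):

```python
def is_type_1(pixels, max_difference=255):
    """Type 1: every pixel's color is closer to the next pixel's color
    than max_difference / 2; type 2 universes fail this test."""
    return all(abs(a - b) < max_difference / 2
               for a, b in zip(pixels, pixels[1:]))

smooth = [10, 12, 15, 14, 13, 11]  # colors cluster: type 1
noisy = [0, 255, 3, 250, 7, 248]   # no locality at all: type 2
print(is_type_1(smooth))  # True
print(is_type_1(noisy))   # False
```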
You can do that with Kolmogorov complexity as well: those universes that have decreasing Kolmogorov complexity and those that do not. Just replace a static (3D) universe with a temporal (4D) one.
011010 -> 0101010 -> 00000000 (decreasing, like ours)
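Kolmogorov complexity itself is uncomputable, so a common stand-in is the length of a compressed encoding. A toy version of the decreasing-complexity test, with made-up “states” of a 1-D universe at three moments in time, might look like:

```python
import random
import zlib

def proxy_complexity(state: str) -> int:
    """Rough stand-in for Kolmogorov complexity: zlib-compressed length."""
    return len(zlib.compress(state.encode()))

random.seed(0)
chaotic = "".join(random.choice("01") for _ in range(4096))  # messy early state
periodic = "01" * 2048                                       # more regular
constant = "0" * 4096                                        # fully settled

sizes = [proxy_complexity(s) for s in (chaotic, periodic, constant)]
print(sizes)  # the chaotic state compresses far worse than the regular ones
print(sizes[0] > sizes[-1])  # True: complexity shrank over time, like ours
```

A universe whose successive states keep shrinking under this proxy would fall in the “decreasing, like ours” class; one whose states stay incompressible would not.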
Of course, all that assumes the following:
1-The universe must be translated to bits in order for a machine to understand it
2-The universe is finite(both in size and information)
3-We have a means of gathering a representative sample of the information contained in the universe
1 is sort of straightforward. 2 should be straightforward after so many attempts at applying infinity have failed, and because infinity can’t be described in a computer. I could make an argument that the passion for infinity reflects a benign characteristic of our brains: functions that assume arbitrarily sized parameters tend not to break in an online decompressor. 3 is a catch-22: if we can’t tell a computer what a representative sample of our universe is, we can’t tell a human either.