Caveat: Entropy Engines

I found this rather mind-blowing article at a website called Physics Buzz. It's about some theoretical modeling work being done in the field of AI (artificial intelligence). I can't begin to claim to really understand it – and that's just the layman's article; I wouldn't dream of trying to read the actual published paper. Apparently there are some interesting results emerging from a simulation program they call "Entropica" which suggest that simply programming something to seek the "most possible future histories" (which sounds vaguely quantum-mechanical, but I don't think it really is) leads to intelligent-seeming behavior. Is it really intelligent, if it's just trying to maximize entropy? Very weird and interesting. A few paragraphs from the summary:

Entropica's intelligent behavior emerges from the "physical process of trying to capture as many future histories as possible," said Wissner-Gross. Future histories represent the complete set of possible future outcomes available to a system at any given moment.

Wissner-Gross calls the concept at the center of the research "causal entropic forces." These forces are the motivation for intelligent behavior. They encourage a system to preserve as many future histories as possible. For example, in the cart-and-rod exercise, Entropica controls the cart to keep the rod upright. Allowing the rod to fall would drastically reduce the number of remaining future histories, or, in other words, lower the entropy of the cart-and-rod system. Keeping the rod upright maximizes the entropy. It maintains all future histories that can begin from that state, including those that require the cart to let the rod fall.

"The universe exists in the present state that it has right now. It can go off in lots of different directions. My proposal is that intelligence is a process that attempts to capture future histories," said Wissner-Gross.

I predict that if the research behind this article turns out to be "real" – in the sense that it isn't later falsified or found to be lacking in rigor – it could be a more-than-incremental step in the development of AI (i.e. revolutionary).

Caveat: Leaders & Problems

I have two unconnected observations about "business" – I've been in a kind of involuntary "MBA" mode of thought lately. I don't really mean to be – let's just call it a relapse into an earlier life. This mode of thinking has been brought on by the many very serious conversations we've been having at work about the business of being an English hagwon in what is becoming an increasingly difficult context.

First, a meme-pic that was floating around the internet recently. I definitely agree with the concept here.

[image: Business]

Second, a quote I ran across – I'm not sure who said it. If you think about it carefully, you will see its meaning. And it puts a different perspective on solving business "problems."

"Everything you think is a problem is somebody else's income." – Anon
