The coming exponential increase in AI - discussion of exponential progress...
07-20-2018, 08:37 PM (This post was last modified: 07-20-2018 08:46 PM by pier4r.)
Post: #2
RE: The coming exponential increase in AI - discussion of exponential progress...
Nice. I agree with the article.

-----
AI discussion
About AIs, I see a lot of systems that get really optimized for specific tasks.

I see it like this: organic intelligence is more general purpose, since it has to compete with (survive) other intelligences and natural events.

AIs (or hardcoded heuristics, and heuristics with self-adjusting parameters; see n1) are very specific to one topic and may end up solving it pretty well. Pretty well compared to the performance of human brains, that is. For example, I can maybe add 2 numbers of 5 digits in 5 hours (I am slow), but the HP 35 destroys me easily. The HP 35 is no AI, but one could say it is an oracle, as it doesn't have to search for a solution in a space of possible solutions: it computes it directly.
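The oracle-versus-search distinction can be sketched in a few lines of Python. This is my own contrived illustration (the function names are mine, not from any real system): the "oracle" computes the answer directly, while the search version blindly tries candidate answers until one passes a check.

```python
# Toy contrast: direct computation ("oracle") vs. search over a
# solution space. Both return the same answer; the search does
# vastly more work.

def oracle_add(a, b):
    # Like a calculator: computes the result directly, no searching.
    return a + b

def search_add(a, b, limit=200_000):
    # Contrived search: test each candidate answer until one
    # satisfies the check. Here ~80,000 steps instead of one.
    for candidate in range(limit):
        if candidate - a == b:
            return candidate
    return None  # solution outside the searched space

print(oracle_add(12345, 67890))  # 80235, in one step
print(search_add(12345, 67890))  # 80235, after ~80k steps
```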

Now to the exponential thing. Surely the exponential growth is something to think about, but what if the complexity of the next problem grows exponentially too? What I mean is that some problems may be way more difficult than they look, and the next problem after the one just solved may be exponentially bigger.

Chess, for example, is a relatively "easy" problem: it is well defined, it is a game of complete information, it has few rules, and the size of the chessboard is limited.
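To give a feeling for how fast problem size blows up from one game to the next: a uniform game tree with branching factor b and depth d has about b^d positions. Using the commonly cited average branching factors (roughly 35 for chess, roughly 250 for Go) and an 80-ply game (these figures are standard approximations, not from this post):

```python
# Rough illustration of exponential game-tree growth. Branching
# factors ~35 (chess) and ~250 (Go) are widely quoted averages.
import math

def tree_size(branching, depth):
    # Leaf positions in a uniform game tree: b ** d.
    return branching ** depth

chess = tree_size(35, 80)
go = tree_size(250, 80)

print(f"chess ~ 10^{int(math.log10(chess))}")  # chess ~ 10^123
print(f"go    ~ 10^{int(math.log10(go))}")     # go    ~ 10^191
```

So Go's tree is not "a bit" bigger than chess's; it is bigger by a factor of about 10^68. Each "next problem" can dwarf the previous one.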

As far as I know, there were chess programs already in the '60s. Around 1990, the first dedicated combination of chess heuristic plus hardware could compete with human masters, although not the top ones. Between 1997 and 2003, chess heuristics on powerful hardware defeated the top players too. And for some 5-8 years now, any dual-core (or better) home computer has been enough to beat top players.

Therefore 1960-2010: 50 years of exponential growth (mostly in computing power) were needed to produce "total domination" in the field of chess. That does not look like a little to me.

For Go it is not yet so: Google did produce a self-adjusting heuristic, but powered by specific hardware, and that heuristic is not easily replicated outside Google (yet).

Google also trained their self-adjusting heuristic to beat one of the strongest chess programs of the moment, Stockfish 8 (although Stockfish had several components disabled). Stockfish is quite a hardcoded heuristic. After 44 million self-played games, AlphaZero played Stockfish 100 times with 28 wins and no losses.
After a while, a team started a similar project, Leela Chess Zero. The project has been running for more than a year, and it is still shattered by Stockfish (with all components enabled), even when Leela runs on high-end home hardware (two expensive video cards). https://icga.org/?page_id=2469
Leela manages to beat human grandmasters (not sure about the very top), but the message stands: replicating Google's feat takes time.

Once again, those are relatively specific problems. My point being: getting to an AI singularity or the like is not that easy, at least for some decades. This doesn't mean we should ignore the problem, of course.

n1: hardcoded example: a traditional chess engine with a minmax search function and so on. Self-adjusting example: neural networks, which still don't decide the size of the network, the ranges of the weights, the layers, the connections, the interpretation of input and output and so on, but which go through some iterations to fix certain parameters in a self-sufficient way.
All this executed really fast.

Wikis are great, Contribute :)