HP Forums
The coming exponential increase in AI - discussion of exponential progress... - Printable Version

+- HP Forums (https://www.hpmuseum.org/forum)
+-- Forum: Not HP Calculators (/forum-7.html)
+--- Forum: Not remotely HP Calculators (/forum-9.html)
+--- Thread: The coming exponential increase in AI - discussion of exponential progress... (/thread-11066.html)



The coming exponential increase in AI - discussion of exponential progress... - Gene - 07-16-2018 09:18 PM

I found this a very interesting read. FYI.

Moore's Law and exponential curves


RE: The coming exponential increase in AI - discussion of exponential progress... - pier4r - 07-20-2018 08:37 PM

Nice. I agree with the article.

-----
AI discussion
Regarding AIs, I see a lot of systems that get heavily optimized for specific tasks.

I see it like this: organic intelligence is more general purpose, since it has to compete with (survive against) other intelligences and natural events.

AIs (or hardcoded heuristics, and heuristics with self-adjusting parameters, see n1) are very specific to one topic and may end up solving it pretty well, at least compared to the performance of human brains. For example, I can maybe add 2 numbers of 5 digits in 5 hours (I am slow), but the HP-35 destroys me easily. The HP-35 is no AI, but one could say it is an oracle, as it doesn't have to search for a solution in a space of possible solutions; it computes the solution directly.

Now to the exponential thing. Surely the exponential growth is something to think about, but what if the complexity of the next problem grows exponentially too? What I mean is that some problems may be far more difficult than they look, and the next problem after the one just solved may be exponentially bigger.

Chess, for example, is a relatively "easy" problem: it is well defined, it has perfect information, it has few rules, and the size of the chessboard is limited.
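To give a feel for how the search space explodes even in a "limited" game like chess, here is a rough back-of-the-envelope sketch. It assumes the commonly cited average branching factor of about 35 legal moves per position; real engines prune most of this tree away, so this is only an upper-bound illustration.

```python
# Rough size of a naive chess game tree: about b**d positions for
# branching factor b and search depth d (in plies, i.e. half-moves).
# b = 35 is a commonly cited average; engines prune most of this away.
b = 35
for d in (2, 4, 6, 8, 10):
    print(f"depth {d:2d}: roughly {b**d:.2e} positions")
```

Already at 10 plies (5 full moves) the naive count is in the quadrillions, which is why brute force alone was never enough and pruning heuristics mattered so much.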

As far as I know, there were chess programs already in the 1960s. Around 1990, the first dedicated combination of chess heuristic + hardware could compete with human masters, although not the top ones. Between 1997 and 2003, chess heuristics on powerful hardware defeated even the top players. For the last 5-8 years, any dual-core (or better) home computer has been enough to beat top players.

Therefore 1960-2010: 50 years of exponential growth (mostly in computing power) were needed to produce "total domination" in the field of chess. That does not look like a small amount of time to me.

For Go it is not yet so: Google did produce a self-adjusting heuristic, but one powered by specialized hardware, and that heuristic is not easily replicated outside Google (yet).

Google trained their self-adjusting heuristic to beat one of the strongest chess programs of the moment, Stockfish 8 (although Stockfish had several components disabled). Stockfish is quite a hardcoded heuristic. After 44 million self-played games, AlphaZero played Stockfish 100 times, with no losses and 28 wins.
After a while a team started a similar project, Leela Chess Zero. The project has been running for more than a year, and it is still crushed by Stockfish (with all components enabled), even when Leela runs on high-end home hardware (two expensive video cards). https://icga.org/?page_id=2469
Leela manages to beat human grandmasters (I am not sure about the top ones), but the message remains: replicating Google's feat takes time.

Once again, those are relatively specific problems. My point being: reaching an AI singularity or the like is not that easy, at least for some decades. This doesn't mean we should ignore the problem, of course.

n1: hardcoded example: a traditional chess engine with a minimax search function and so on. Self-adjusting example: neural networks, which still don't decide the size of the network, the ranges of the weights, the layers, the connections, the interpretation of input and output, and so on; but they go through iterations to fix some parameters in a self-sufficient way.
All this executed really fast.


RE: The coming exponential increase in AI - discussion of exponential progress... - Valentin Albillo - 07-20-2018 11:21 PM

.
Hi, Pier:

(07-20-2018 08:37 PM)pier4r Wrote:  Google trained their self-adjusting heuristic to beat one of the strongest chess programs of the moment, Stockfish 8 (although Stockfish had several components disabled). Stockfish is quite a hardcoded heuristic. After 44 million self-played games, AlphaZero played Stockfish 100 times, with no losses and 28 wins.

Biased information: Google told the specialized press that AlphaZero won the 100-game match (which no knowledgeable expert was allowed to see) by a final score of 28 wins, 72 draws, 0 losses. Notice the "0 losses", which gives the impression that Stockfish was unable to defeat AlphaZero even once.

However, in another test of 1,200 games between the two, AlphaZero won 290, drew 886, and *lost* 24 games.

Thus, a buggy version of Stockfish 8, without an opening book (!!!), with dismayingly small transposition tables, configured with an excessive number of threads, without endgame tablebases and without advanced time management, and, last but not least, running on hardware orders of magnitude slower, still managed, despite being so maddeningly and unfairly handicapped, to win 24 games against the 180-teraflop supercomputer.

Of course everything is proprietary: no independent experts saw or supervised the games, there was no "peer review" of any kind, and Google neither sells the TPUs nor gives out information, nothing. So, in my humble opinion, this "AlphaZero versus Stockfish" match was but a marketing device to get as much worldwide publicity as possible, in order to try to get paid wholesale for their expensive products and services; some people made a lot of money either for their biased opinions or else for their silence, and that's it.

Oh, and they succeeded, the worldwide publicity was phenomenal.

Regards and have a nice weekend.
V.
.


RE: The coming exponential increase in AI - discussion of exponential progress... - pier4r - 07-21-2018 11:25 AM

(07-20-2018 11:21 PM)Valentin Albillo Wrote:  However, in another test of 1,200 games between the two, AlphaZero won 290, drew 886, and *lost* 24 games.

I did not know about this last bit; I read the paper again and noticed it. Surely it was PR, but it is also true that the Go performance, and the fact that AlphaZero trained for only "a few hours" (if that is true, and still on a supercomputer), are not to be underestimated.

Still, my point was something different: the speed of searching for solutions may improve exponentially, but the problems to solve also get exponentially harder. I read somewhere that Stockfish is open source, and every time it receives a candidate patch, the patch is tested with millions of games, which take time to complete.
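As a side note on interpreting such match scores, the standard logistic Elo model maps a score fraction to a rating difference. Applying it to the 1,200-game figure quoted above (290 wins, 886 draws, 24 losses) gives a rough idea of the size of the gap; the function name here is just an illustrative choice.

```python
import math

def elo_diff(wins, draws, losses):
    """Elo rating difference implied by a match result (logistic model)."""
    games = wins + draws + losses
    s = (wins + 0.5 * draws) / games     # score fraction for the stronger side
    return 400 * math.log10(s / (1 - s))

# The 1,200-game result quoted above: 290 wins, 886 draws, 24 losses.
print(round(elo_diff(290, 886, 24)))  # -> 78
```

So even that dominant-looking result corresponds to something on the order of an 80-Elo edge, which illustrates the point: a large investment of computation buys a real but bounded improvement on one specific, well-defined problem.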