How much memory may a HPPL program use on a G2?
09-22-2023, 07:29 AM (This post was last modified: 09-23-2023 05:17 AM by komame.)
Post: #88
RE: How much memory may a HPPL program use on a G2?
(09-21-2023 07:59 PM)jte Wrote:  While I didn't design or implement PPL, I've implemented special-case handling in some parts of the project. The Function app, for example, doesn't use the main PPL interpreter for the sorts of things typically graphed by high-school students. Instead, it uses something closer to a "bytecode" interpreter. Before graphing begins, the PPL (the part after the "F1(X)=") is analyzed. If it is of a simple-enough form to fit the special-case interpreter, a simple list of instructions is generated for an abstract machine (of a "push" form - things like "add register 1 to register 5, put the result in register 7"). Things like common subexpressions are recognized and only evaluated once (when evaluating a user-defined function for a particular value).
I don't fully understand: after all, PPL is itself a bytecode interpreter. Is what you analyze the result of the PPL mechanism's tokenization, or is a separate bytecode generated for your mechanism?
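For anyone following along, the kind of "simple list of instructions for an abstract machine" jte describes could look roughly like the sketch below. This is purely my own illustration (the names and instruction format are invented, not the actual firmware), showing how F1(X) = (X+1)*(X+1) + X becomes a flat instruction list where the common subexpression (X+1) is computed only once, with no heap use during evaluation:

```python
# Hypothetical sketch of a register-based instruction list, in the spirit
# of jte's description ("add register 1 to register 5, put the result in
# register 7"). Not actual HP Prime firmware code.

def run(program, x):
    regs = [0.0] * 8
    regs[0] = x                      # register 0 holds the argument X
    for op, a, b, dst in program:
        if op == "const":
            regs[dst] = a            # load a literal into dst
        elif op == "add":
            regs[dst] = regs[a] + regs[b]
        elif op == "mul":
            regs[dst] = regs[a] * regs[b]
    return regs[program[-1][3]]      # result of the last instruction

# F1(X) = (X+1)*(X+1) + X, with (X+1) evaluated once and reused.
program = [
    ("const", 1.0, None, 1),   # r1 = 1
    ("add",   0,   1,    2),   # r2 = X + 1   (common subexpression)
    ("mul",   2,   2,    3),   # r3 = r2 * r2
    ("add",   3,   0,    4),   # r4 = r3 + X
]

print(run(program, 3.0))  # (3+1)*(3+1) + 3 = 19.0
```

Evaluating the function for each plotted X then just replays this fixed list over a fixed register bank, which would explain why the special-case interpreter avoids the heap entirely.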

(09-21-2023 07:59 PM)jte Wrote:  My intuition is that part of the speed variability that can be seen with the standard PPL interpreter has to do with the heap. (The special-case interpreters I've just mentioned above do not use the heap during evaluations.)
I've always been puzzled by why program size affects performance in PPL. Even dead code slows a program down. That's a very peculiar implementation, even for an interpreter.
From what Cyrille explained here: https://www.hpmuseum.org/forum/thread-12585.html it seems that when handling variables there is some kind of linear scan to look up each variable, and that this scan happens repeatedly instead of a one-time pass that keeps references in a hash table or a binary tree for optimal performance. It's an unusual approach, and all of this negatively impacts PPL's performance.
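To make the cost difference concrete, here is a rough illustration (my own sketch, nothing to do with the actual firmware): a linear scan through an unsorted name list is O(n) on every single lookup, while a hash table built once in a one-time pass makes each subsequent lookup O(1):

```python
# Illustrative only: why per-lookup linear scans scale badly compared to
# a hash table built once. Names are invented for the example.

# An unsorted label list, like the one Cyrille describes: 100 dummy
# functions plus the one we actually want.
names = [f"Fn{i}" for i in range(100)] + ["Target"]

def linear_lookup(name):
    """O(n) scan on EVERY lookup - cost grows with program size."""
    for i, n in enumerate(names):
        if n == name:
            return i
    return -1

# One-time O(n) scan, then O(1) per lookup afterwards.
index = {n: i for i, n in enumerate(names)}

print(linear_lookup("Target"), index["Target"])  # both find position 100
```

This also matches the observation that even dead code slows things down: unused names still sit in the list and lengthen every linear scan, whereas with the dict they would only cost something once, at build time.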

That also makes me wonder:
(03-11-2019 06:53 AM)cyrille de brĂ©bisson Wrote:  - your 100 "dummy" function are of course slowing down any lookup of non local variables as they enter relatively high in the variable precedence order and the lookup is (sorry about this) linear (the name list is not sorted). So adding 100 extra items to look through will slow down any search that has to traverse it.
The order of functions in PPL doesn't matter, so a single scan of the code could cache the addresses used for calls in a structure sorted by name, allowing efficient lookup.

EDIT: Only the day after writing this post did I realize that Cyrille meant the labels are scanned in such a way that functions are searched first and global variables afterwards. This doesn't refer to scanning the code itself, but rather to the lists where the labels for functions and variables are stored. Nevertheless, it's still a linear search, which means it can't work efficiently, and what I wrote about a hash table or binary tree still applies here.
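For completeness, the binary-tree/sorted-list alternative I have in mind would look something like this (again just a sketch with made-up names, not a claim about how the firmware could literally be patched): sort the label list once, then each lookup is an O(log n) binary search instead of an O(n) scan:

```python
from bisect import bisect_left

# Sketch of the alternative: keep the label list sorted once, then look
# names up by binary search. Label names here are invented examples.

labels = sorted(["COUNTER", "EXPORT_A", "HELPER", "MYFUNC", "ZVAR"])

def lookup(name):
    """O(log n) binary search in the sorted label list."""
    i = bisect_left(labels, name)
    return i if i < len(labels) and labels[i] == name else -1

print(lookup("MYFUNC"))   # position in the sorted list: 3
print(lookup("MISSING"))  # not found: -1
```

With 100 extra dummy functions in the list, a lookup would then cost about 7 comparisons instead of up to 100, so adding unused names would barely affect search time.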

BR,
Piotr

