How much memory may a HPPL program use on a G2?
09-24-2023, 05:46 AM
(This post was last modified: 09-24-2023 07:18 AM by komame.)
Post: #97
RE: How much memory may a HPPL program use on a G2?
Tyann,
(09-23-2023 04:29 PM)komame Wrote: In some cases, this method works slower: [...]

When I noticed that the results are sometimes worse, I had a similar idea to yours about building a string, but I build it only from the pattern side. There's no simple comparison here, but this way the index is retrieved twice only for the dictionary word. It slightly improved the results (still compared to my rc1 release as 'previously'):

??e??r??? => 386 words found in 5.483 - 5.798s (previously 5.6 - 7.6s)
??e??r??s => 130 words found in 5.506 - 6.236s (previously 5.7 - 7.7s)
??e?????s => 1031 words found in 5.104 - 5.725s (previously 5.8 - 7.3s)
p?e??r??? => 93 words found in 1.038 - 1.158s (previously 1.3 - 2.0s)
a??????? => 3167 words found in 1.474 - 1.801s (previously 1.461 - 1.719s)
a???????? => 3839 words found in 2.371 - 2.597s (previously 2.363 - 2.725s)
sc???r??? => 9 words found in 0.589 - 0.601s (previously 0.597 - 0.618s)
a??? => 123 words found in 0.029 - 0.037s (previously 0.021 - 0.023s)
a????? => 1101 words found in 0.265 - 0.315s (previously 0.255 - 0.291s)
p????eurs => 29 words found in 1.146 - 1.376s (previously 0.975 - 1.171s)

Since comparing a single character is still somewhat more time-consuming, for long words with many known characters (and therefore many characters to compare) the total time is longer than with the old method. The same problem arises even when we have only one known letter but the word is short; in that case, the old method wins because the ratio of matching words to all words is relatively high. However, for long words with relatively few known letters, the new method clearly prevails. It also depends on where the known letters are positioned and whether they appear in a sequence or are scattered. For short words, I wouldn't be concerned about slightly worse performance, because they are searched almost instantly.
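For readers following along, the pattern-side idea can be illustrated with a rough sketch. This is Python rather than PPL, the function names are made up for illustration, and it only shows the general technique (compare only at the pattern's known-letter positions, so the dictionary word is indexed only where the pattern is specific), not komame's actual implementation:

```python
def matches(pattern: str, word: str) -> bool:
    """Check a wildcard pattern ('?' = any letter) against a word.

    Only the known-letter positions of the pattern are compared,
    so the word is indexed only where the pattern is specific.
    """
    if len(pattern) != len(word):
        return False
    return all(p == w for p, w in zip(pattern, word) if p != '?')

def find_words(pattern: str, dictionary: list[str]) -> list[str]:
    """Collect dictionary words matching the pattern.

    Precomputing the known positions once per pattern avoids
    re-scanning the pattern for every dictionary word.
    """
    known = [(i, c) for i, c in enumerate(pattern) if c != '?']
    n = len(pattern)
    return [w for w in dictionary
            if len(w) == n and all(w[i] == c for i, c in known)]
```

With few known letters relative to the word length, this does far less work per word than a full character-by-character comparison, which matches the pattern of results above.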
Taking the overall picture into account, the averaged results are better, especially for the more resource-intensive queries, so I believe it's worth permanently implementing this solution. At this stage we also need to consider measurement error: while it may not have been significant with longer times, it becomes crucial with very short ones. The variability of measurements (as is common in PPL) requires multiple runs under different conditions for reliable results.

(09-23-2023 11:15 AM)Tyann Wrote: I've improved my search routine and I'm now for ??e??r???? <10s.

Is this measurement really for ??e??r???? (10 characters), or rather for ??e??r??? (9 characters)? So far we haven't done it for 10 characters. Anyway, for 10 characters I managed to achieve a result ranging from 4.799 to 5.410 seconds.

EDIT: I've now measured ??e??r??? (9 characters) again and achieved an even better result: 4.718s. In the list above it was 5.483 - 5.798s for this case, based on approximately 30 runs under different conditions. As you can see, these measurements are so unstable that it's difficult to conclusively assess the impact of a specific change on performance, and I think further optimization efforts may not make sense, especially since everything could change dramatically in the next firmware version.

Piotr
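The min-max ranges quoted throughout the thread come from timing many repeated runs. A minimal sketch of that measurement style (again in Python for illustration, with an arbitrary placeholder workload standing in for the dictionary search):

```python
import time

def time_runs(fn, runs=30):
    """Run fn repeatedly and report the min and max wall-clock time,
    mirroring the 'x.xxx - y.yyys' ranges quoted in the post."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return min(times), max(times)

# Placeholder workload; a real benchmark would call the search routine.
lo, hi = time_runs(lambda: sum(range(10_000)), runs=5)
print(f"{lo:.3f} - {hi:.3f}s")
```

Reporting a range rather than a single number is exactly why a one-off 4.718s reading can fall outside an earlier 5.483 - 5.798s range without proving anything by itself.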