newRPL: [UPDATED April 27-2017] Firmware for testing available for download
07-25-2016, 11:41 AM (This post was last modified: 07-25-2016 11:05 PM by matthiaspaul.)
Post: #341
RE: newRPL: [UPDATED July-14-16] Firmware for testing available for download
(04-12-2016 01:17 PM)Claudio L. Wrote:  
(04-12-2016 10:09 AM)Nigel (UK) Wrote:  I've left the calculator turned off for about 2 hours and the current settled at about 4 mA. Removing the SD card causes the current to jump up to 10 mA. Re-inserting the card causes the current to drop back to 4 mA again, even before the card is properly seated. Simply having the card mostly in without pushing causes the current to drop.
Good info, thanks. I'll look into the SD card pins.
I don't have any special insights into the circuitry of the HP 49g+/50g (do we have a schematic available somewhere?), but to me this looks as if the interface has no pull-ups (or pull-downs). SD cards have internal pull-ups, so this isn't an issue for as long as a card is inserted, but without a card the interface lines may start floating and thereby cause unwanted currents to occur in the CMOS ports.
Some microcontrollers can keep internal pull-ups (or pull-downs) enabled in sleep modes, whereas others would disable them in sleep. I haven't checked the datasheets, but it might be worth looking into what applies to the processors used in these calculators.
Also, I seem to remember that the HP 49g+ uses a Samsung S3C2410X01, whereas the HP 50g uses a S3C2410A. Perhaps the minor differences between these revisions happen to affect this area?
Or does the HP 50g use actual resistors, whereas they are not populated in the HP 49g+?

The above description could also be read as if the behaviour occurs even before the SD card actually makes electrical contact with the card holder. If so, does the card holder feature a "card detect" switch? While for most card holders the metal frame (if any) is connected to GND, I have also seen card holders where the frame is connected to the switch contact, so without any pull-up or pull-down, capacitive coupling might cause the corresponding input line into the processor to start floating. It does not explain the different behaviour between the HP 49g+ and 50g, though.

Regarding other differences between the HP 49g+ and the 50g: The HP 50g has a 3.3V TTL serial port, whereas IIRC the HP 49g+ has not. Perhaps the TTL RX line in the 50g has a physical pull-down resistor (and therefore isn't specially treated in the firmware), whereas it is floating in the 49g+?

Hopefully, these "loose ends" can help tracking down the issue.

Greetings,

Matthias


--
"Programs are poems for computers."
07-25-2016, 01:12 PM
Post: #342
RE: newRPL: [UPDATED July-14-16] Firmware for testing available for download
(07-25-2016 11:41 AM)matthiaspaul Wrote:  I don't have any special insights into the circuitry of the HP 49g+/50g (do we have a schematic available somewhere?), but to me this looks as if the interface has no pull-ups (or pull-downs). SD cards have internal pull-ups, so this isn't an issue for as long as a card is inserted, but without a card the interface lines may start floating and thereby cause unwanted currents to occur in the CMOS ports.
Some microcontrollers can keep internal pull-ups (or pull-downs) enabled in sleep modes, whereas others would disable them in sleep. I haven't checked the datasheets, but it might be worth looking into what applies to the processors used in these calculators.
Also, I seem to remember that the HP 49g+ uses a Samsung S3C2410X01, whereas the HP 50g uses a S3C2410A. Perhaps the minor differences between these revisions happen to affect this area.
Or does the HP 50g use actual resistors, whereas they are not populated in the HP 49g+?
The above description could also be read as if the behaviour occurs even before the SD card actually makes electrical contact with the card holder. If so, does the card holder feature a "card detect" switch? While for most card holders the metal frame (if any) is connected to GND, I have also seen card holders where the frame is connected to the switch contact, so without any pull-up or pull-down, capacitive coupling might cause the corresponding input line into the processor to start floating. It does not explain the different behaviour between the HP 49g+ and 50g, though.

I think both processors behave the same; even other processors of a similar family have the exact same SD/MMC on-chip controller. The 50g and 49G+ both have a card detect pin, which as far as I can see is handled properly during power off (same as all other pins, following the recommended procedure from the S3C2410 manual; I also checked that the 50g stock ROM leaves the pins configured the same as newRPL).

(07-25-2016 11:41 AM)matthiaspaul Wrote:  Regarding other differences between the HP 49g+ and the 50g: The HP 50g has a 3.3V TTL serial port, whereas IIRC the HP 49g+ has not. Perhaps the TTL RX line in the 50g has a physical pull-down resistor (and therefore isn't specially treated in the firmware), whereas it is floating in the 49g+?

Hopefully, these "loose ends" can help tracking down the issue.
This is a good possibility, as it makes sense for the UART pins to be disconnected on the 49G+ and perhaps to have external resistors on the 50g. I'll have to investigate some more.
So far, has anybody else seen excessive battery drain? (I can't rule out a defective unit either.) My calc lasts longer than I expected on a set of batteries during normal use (and not-so-normal use too; my SD card burn tests are real battery hogs).
07-25-2016, 09:46 PM
Post: #343
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
July update!

This new update brings bugfixes and improvements mainly to the SD card module.

* New SDFREE command to get the free space in the card
* Finally! Support for SDHC cards!

* Fixed bug: 99999.99999 rounded to <5 figures displayed 10000
* Fixed bug: UPDIR and HOME not updating status area when using LS-UP and LS-HOLD-UP


Support for SDHC cards was tested on one card only, so your mileage may vary (please test and report!). Also, when testing SD cards, please follow this procedure:
* Put the card in a PC and run a disk checking utility to fix any pre-existing file system errors
* Play with newRPL SD card access at will
* Every now and then, put it back in a PC, run a disk check, and report if you find any file system errors or corrupted data (it shouldn't happen, but...).

I measured 750 kb/s effective write speed on a 16 GB card that runs at around 3 MB/s on a PC. I need to investigate why it's slower, but my stress tests worked great otherwise.

As always, please report back any issues.
07-26-2016, 01:01 PM
Post: #344
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
The PC simulator was updated to match the latest firmware. You can download the Windows installer here:

newRPL Sourceforge site


Sorry, no fancy keyboard image yet, but it can save/restore sessions, and can mount SD card images. On Windows, you can use the free software:

http://www.osforensics.com/tools/mount-disk-images.html

which I don't particularly endorse, but it makes it easy to create and mount an image, format it on the PC, copy some files onto it, and then leave it mounted for the newRPL simulator to use.
On Linux/BSD/Mac just use 'dd' to get an image of an actual card.
07-26-2016, 04:37 PM (This post was last modified: 07-27-2016 07:13 PM by matthiaspaul.)
Post: #345
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
(07-25-2016 09:46 PM)Claudio L. Wrote:  * New SDFREE command to get the free space in the card
Hi Claudio,

it's great to see you're working on SD card support!

Some unsorted comments/remarks/questions/requests: ;-)

Do you support the two entries in the FS info sector already? Do you maintain the media and volume mount flags during mount/unmount/startup/sleep/shutdown? I'm asking because I had a (very) quick look at the sources and couldn't find it in there (but it is quite possible that I have overlooked it, although this http://www.hpmuseum.org/forum/thread-656...l#pid58965 suggests that it isn't implemented yet). For some details, see:

http://www.hpmuseum.org/forum/thread-656...l#pid58783

I can provide more details if necessary.

Regarding your semicolon trick (http://www.hpmuseum.org/forum/thread-464...l#pid57755), do I take it right that this only affects the LFN, whereas the SFN will not get a semicolon appended? If an SFN conflicted with another SFN, this would be solved by changing the numeric tail (~1 etc.), wouldn't it? (I'm asking because SFNs with a trailing semicolon ring an alarm bell with me, as they may make such files inaccessible in some implementations - I will have to check this myself, though.)
Somewhat related, have you considered adding support for DR-style directory and file access passwords and (single-user) access permissions (read/write/execute/delete rights on a file-by-file basis), so that password-protected files and directories can be accessed only when the correct password is given (either as a global password, as part of the filespec: "C:\DIR.EXT;DIRPWD\FILE.EXT;FILEPWD", or in a pop-up window opening when trying to access such files)? This can also be used to put and process files in groups (using the same password) with wildcards (something like DEL *.HP;FILEPWD would delete only those files matching *.HP with the password FILEPWD). Password hashes and permissions are stored in reserved areas of the FAT file system. If this sounds interesting, I can provide the necessary details on how this FAT extension is implemented in operating systems of Digital Research origin, so that unaware OSes won't hiccup on it (they just see such files as hidden files).

Some of the SD-related commands are named SDCHDIR, SDMKDIR, SDPGDIR.
Wouldn't it be better to rename SDPGDIR to SDRMDIR to use the "standard" names? Perhaps they are used frequently enough to warrant shortcuts like SDCD, SDMD and SDRD as well?

For similar reasons I would rename SDPURGE to SDERASE, with the shortcut SDDEL (or even SDERA, although ERA is only supported by DR shells, not by MS ones).

How do you propose to ensure file system integrity if a program needs to prompt a user with something like "Please remove SD card from slot"? Is there something like an SDFLUSH or SDUNMOUNT command that returns only after any pending writes are written out and the file system is in a clean state? Or does the implementation write everything through immediately, leaving the file system in a semi-unmounted state that is ready for safe card removal but without trashing internal data, so that an on-demand remount can happen without a time penalty for as long as the card has not been removed?

The Unicode conversion table

const int const cp850toUnicode[128] = {
0x00C7, 0x00FC, 0x00E9, 0x00E2, 0x00E4, 0x00E0, 0x00E5, 0x00E7,
0xEEEA, 0x00EB, 0x00E8, 0x00EF, 0x00EE, 0x00EC, 0x00C4, 0x00C5,
...

appears to contain a bug. The code point at 0x88 should be 0x00EA rather
than 0xEEEA, shouldn't it?

Your Unicode <> OEM conversion is currently hardwired to codepage 850.
I would like to suggest making this configurable via a command like SDCHCP, SDCP, SDOEMCP, SDCODEPAGE or similar. While 850 is not uncommon, most LFN implementations default to codepage 437, as this is the default hardware codepage used on Western PCs. Most users of 850 would be better off with codepage 858 anyway - it is the same as 850 except that the Turkish dotless i (0x0131) at code point 0xD5 was replaced by the euro currency sign (0x20AC). Also important would be ISO 8859-1 as codepage 819 (0x0333) and a CDRA user-variant of it (58163, 0xE333) for a 1:1 implementation of the HP 48/49/50 RPL character set. Ideally, we'd have a number of predefined codepages and at least one user-definable vector.
There's one more potential problem: in some codepages, some code points correspond to more than one Unicode character - Greek beta and German sharp s being one example. For files created under newRPL, this could be solved by expanding/modifying the tables, but how can this be solved for files created externally with only an SFN? The calculator simply cannot know whether a character at code point 0xE1 in the codepage was meant to be a Greek beta or a German sharp s.
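Just to illustrate the kind of configurability I have in mind (a sketch of mine under my own assumptions, not a proposal for the actual newRPL code), supporting 858 could be as cheap as layering a single override on top of the cp850 table quoted above:

// Sketch only: reuse the cp850toUnicode table and patch the one code point
// where codepage 858 differs from 850.
extern const int cp850toUnicode[128];      // the table quoted above

int OEMCharToUnicode(unsigned char c, int codepage)
{
    if (c < 0x80) return c;                               // code points below 0x80 map 1:1 to ASCII/Unicode
    int uni = cp850toUnicode[c - 0x80];
    if ((codepage == 858) && (c == 0xD5)) uni = 0x20AC;   // cp858: euro sign instead of dotless i (0x0131)
    return uni;                                           // a full table per codepage (437 etc.) would replace this
}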

Greetings,

Matthias


--
"Programs are poems for computers."
07-26-2016, 06:45 PM
Post: #346
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
(07-26-2016 01:01 PM)Claudio L. Wrote:  The PC simulator was updated to match the latest firmware. You can download the Windows installer here:

newRPL Sourceforge site

After a while the simulator seems to shut down, with the screen showing vertical stripes. How can I wake it up again? Even better, it shouldn't do that.

Günter
07-26-2016, 09:27 PM
Post: #347
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
(07-26-2016 04:37 PM)matthiaspaul Wrote:  
(07-25-2016 09:46 PM)Claudio L. Wrote:  * New SDFREE command to get the free space in the card
Hi Claudio,

it's great to see you're working on SD card support!

Some unsorted comments/remarks/questions/requests: ;-)

Do you support the two entries in the FS info sector already? Do you maintain the media and volume mount flags during mount/unmount/startup/sleep/shutdown? I'm asking because I had a (very) quick look at the sources and couldn't find it in there (but it is well possible that I have overlooked it, although this http://www.hpmuseum.org/forum/thread-656...l#pid58965 suggests that it isn't implemented yet). For some details, see:

http://www.hpmuseum.org/forum/thread-656...l#pid58783

I can provide more details if necessary.

Not implemented for a few reasons:
a) It's not exactly reliable: if the card comes for example from another 50g with stock firmware (which doesn't update these fields), you get bad info. I figured exchanging cards between calcs would be quite common.
b) Implementing it requires proper mount/unmount, which means if the user just pulls the card out then putting on a PC will indicate a "dirty" file system and will suggest (or force) disk checking procedures.
c) To minimize fragmentation, the file system driver uses that "scan" to locate the largest block of free clusters in the disk.

But now that I've added SDHC support, with potentially much longer scan times, I may have to rethink that.

(07-26-2016 04:37 PM)matthiaspaul Wrote:  Regarding your semicolon trick (http://www.hpmuseum.org/forum/thread-464...l#pid57755), do I take it right, that this only affects the LFN, whereas the SFN will not get a semicolon appended? If a SFN would conflict with another SFN this would be solved by changing the numeric tail (~1 etc.), wouldn't it? (I'm asking because SFNs with trailing semicolon is ringing an alarm bell with me as it may make such files inaccessible in some implementations - I will have to check this myself, though.)

Semicolon is not an allowed character in 8.3 names, so this is only for long names. SFN names do use the numeric tail in the classic way. But since potentially conflicting names are allowed, the SFN should not be relied upon when the LFN exists.
One example:
Let's say I create "BigFile", which has a SFN "BIGFILE".
Now I want to create "BIGFILE", which conflicts with the previous SFN. newRPL will create LFN="BIGFILE;" and the SFN="BIGFIL~1" will be used. Now somebody reading only SFN names might think BIGFILE is "BIGFILE", when in reality it is "BigFile". Not sure if it makes sense, but that's why SFN names shouldn't be relied upon. Actually, modern Windows sometimes assigns a completely random SFN (I'm not sure under which conditions), but this is perfectly valid.
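For anyone not familiar with the numeric tail, the classic scheme boils down to something like this (a simplified sketch, not the actual newRPL code; FSEntryExists() is just a placeholder for a directory lookup, and the base name is assumed to be already uppercased and stripped of invalid characters):

#include <stdio.h>
#include <string.h>

int FSEntryExists(const char *sfn);   // placeholder: nonzero if that SFN is already taken

// Build an 8.3 SFN with a ~N tail, keeping as much of the base name as fits.
void MakeNumericTail(const char *base, const char *ext, char *out /* >= 13 bytes */)
{
    for (int n = 1; n < 1000000; ++n) {
        char tail[8];
        int taillen = snprintf(tail, sizeof(tail), "~%d", n);
        int keep = 8 - taillen;                       // characters of the base name that survive
        snprintf(out, 13, "%.*s%s%s%s", keep, base, tail,
                 (*ext != '\0') ? "." : "", ext);
        if (!FSEntryExists(out)) return;              // first free candidate wins: ~1, ~2, ...
    }
}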

(07-26-2016 04:37 PM)matthiaspaul Wrote:  Somewhat related, have you considered adding support for DR-style directory and file access passwords and (single-user) access permissions (read/write/execute/delete rights on a file-by-file basis), so that password-protected files and directories can be accessed only when the correct password was given (either as global password, as part of the filespec: "C:\DIR.EXT;DIRPWD\FILE.EXT;FILEPWD", or in a pop-up window opening when trying to access such files)? This can be used also to put and process files in groups (using the same password) with wildcards (something like DEL *.HP;FILEPWD would delete only those files in *.HP with a matching password FILEPWD). Password hashes and permissions are stored in reserved areas in the FAT file system. If this sounds interesting, I can provide the necessary details how this FAT extension is implemented in operating systems of Digital Research origin, so that unaware OSes won't hick up on it (they just see them as hidden files).
None of the above. I just can't picture a multi-user calculator... how do you place more than 2 fingers on the same keyboard? :-)

(07-26-2016 04:37 PM)matthiaspaul Wrote:  Some of the SD-related commands are named SDCHDIR, SDMKDIR, SDPGDIR.
Wouldn't it be better to rename SDPGDIR into SDRMDIR to use the "standard" names. Perhaps they are so frequently used to warrant shortcuts like SDCD, SDMD and SDRD as well?

For similar reasons I would rename SDPURGE into SDERASE with shortcut SDDEL (or even SDERA, although ERA is only supported by DR shells, not by MS ones).

I had them like that at first, but I renamed them all for consistency with the RPL names.
PGDIR and SDPGDIR do the same, CRDIR and SDCRDIR, etc.

(07-26-2016 04:37 PM)matthiaspaul Wrote:  How do you propose to ensure file system integrity if a program needs to prompt a user with something like "Please remove SD card from slot"? Is there something like an SDFLUSH or SDUNMOUNT command that returns only after any pending writes are written out and the file system is in a clean state? Or does the implementation write everything through immediately, leaving the file system in a semi-unmounted state that is ready for safe card removal but without trashing internal data, so that an on-demand remount can happen without a time penalty for as long as the card has not been removed?

The current implementation writes all information when requested. It is safe to pull the card anytime, as long as there are no open files for writing. But if the user has an open file, for example, data corruption can occur when the user pulls the card out. Not done yet, but I'm planning to simply have an IRQ on the card detection pin, so if the user pulls the card when there's data to be written the system will throw an exception, asking the user to reinsert the card immediately.

(07-26-2016 04:37 PM)matthiaspaul Wrote:  The Unicode conversion table

const int const cp850toUnicode[128] = {
0x00C7, 0x00FC, 0x00E9, 0x00E2, 0x00E4, 0x00E0, 0x00E5, 0x00E7,
0xEEEA, 0x00EB, 0x00E8, 0x00EF, 0x00EE, 0x00EC, 0x00C4, 0x00C5,
...

appears to contain a bug. The code point at 0x88 should be 0x00EA rather
than 0xEEEA, shouldn't it?
You're probably right, I'll double check.

(07-26-2016 04:37 PM)matthiaspaul Wrote:  Your Unicode <> OEM conversion is currently hardwired to codepage 850.

Yes, mainly because it's not meant to exchange information with DOS 6.20 anymore. The USA uses 437, most of Europe and Latin America use 850, so 850 is more widespread than 437, from what I researched.
This won't affect you in any way; it's only there to translate short names with strange characters into readable names for the calculator (these files would have been created by an older OS). As long as you have a long file name this is not used. I put it there for legacy support, to have some way to show a file with strange characters, but codepages are in the past; newRPL is Unicode compliant and not apologizing for it.
07-27-2016, 04:39 PM
Post: #348
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
(07-26-2016 09:27 PM)Claudio L. Wrote:  
(07-26-2016 04:37 PM)matthiaspaul Wrote:  Your Unicode <> OEM conversion is currently hardwired to codepage 850.

Yes, mainly because it's not meant to exchange information with DOS 6.20 anymore. The USA uses 437, most of Europe and Latin America use 850, so 850 is more widespread than 437, from what I researched.
This won't affect you in any way; it's only there to translate short names with strange characters into readable names for the calculator (these files would have been created by an older OS). As long as you have a long file name this is not used. I put it there for legacy support, to have some way to show a file with strange characters, but codepages are in the past; newRPL is Unicode compliant and not apologizing for it.

Now that I have some more time I'd like to clarify that a bit more:
* If you create a file in newRPL with a character > 127, it automatically decides a long name is needed, and the name is stored converted to UCS-2 (not to be confused with UTF-16!); there's no translation to any codepage (Windows does this as well, adding an LFN even if the name is short when it contains strange characters).
* If you open a file that includes an LFN, then only the LFN is used, so there's no codepage conversion.
* Only if you create an 8.3 name with characters >127 in some other OS will those characters be in the OEM codepage of that OS. For simplicity, newRPL is going to interpret those characters as CP850 and convert them to Unicode. This is a very rare occurrence.

I just tried creating a file on Windows 10, using a file name consisting of a single sharp s (0xDF), and the OS created both LFN and SFN entries. The SFN used CP437, and when I put the card in the calculator, newRPL displayed the file properly, as it only used the long name.
So you'd have to use an older OS that is not LFN-aware AND uses a CP other than 850 to run into problems.
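In other words, the decision comes down to a check roughly like this (a simplified sketch of the rules above, not the actual newRPL routine):

#include <ctype.h>
#include <string.h>

// Returns nonzero if 'name' cannot be stored as a plain uppercase 8.3 SFN
// and therefore needs an LFN record (stored as UCS-2) as well.
int NeedsLFN(const char *name)
{
    const char *dot = strrchr(name, '.');
    size_t baselen = dot ? (size_t)(dot - name) : strlen(name);
    size_t extlen = dot ? strlen(dot + 1) : 0;

    if (baselen == 0 || baselen > 8 || extlen > 3) return 1;     // doesn't fit 8.3
    if (dot && strchr(name, '.') != dot) return 1;               // more than one dot
    for (const unsigned char *p = (const unsigned char *)name; *p; ++p) {
        if (*p == '.') continue;
        if (*p > 127) return 1;                                  // non-ASCII: goes to UCS-2 in the LFN
        if (islower(*p)) return 1;                               // lowercase needs an LFN (or the 0x0C case bits)
        if (strchr("+,;=[] ", *p)) return 1;                     // legal in LFNs, not in SFNs
    }
    return 0;
}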
07-27-2016, 04:59 PM
Post: #349
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
(07-26-2016 04:37 PM)matthiaspaul Wrote:  it's great to see you're working on SD card support!

A bit off-topic, but since you took an interest in the file system's inner workings, you might be interested in taking a look at an older project of mine:

http://hpgcc3.org/projects/cleanfs

It's a file system that does everything FAT does but without a FAT table: all information about a file is stored in the directory. It saves a LOT of I/O compared to FAT.
It has other good qualities:
* Unlimited length file names (UTF-8)
* On-disk format much simpler than FAT, easier for embedded systems to use.
* 64-bit file sizes
* The file system is self-contained: each directory is a file system on its own.
* It's well suited to be used for example for TAR or ZIP style archives, as well as block devices.
* Can be case-sensitive or insensitive, this is a run-time choice.

I never used it for anything serious, it was just a proof of concept, but the demo of the reference implementation works quite well.
07-27-2016, 08:56 PM (This post was last modified: 09-10-2016 02:10 PM by matthiaspaul.)
Post: #350
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
(07-26-2016 09:27 PM)Claudio L. Wrote:  
Quote:Do you support the two entries in the FS info sector already? Do you maintain the media and volume mount flags during mount/unmount/startup/sleep/shutdown?
Not implemented for a few reasons:
a) It's not exactly reliable: if the card comes for example from another 50g with stock firmware (which doesn't update these fields), you get bad info. I figured exchanging cards between calcs would be quite common.
That's right, these values are nothing that can be relied upon; they can be used only with sanity checks in place. However, it's still possible to take advantage of them. The implementation first needs to check these conditions:

- 1. A valid FAT32 BPB is present (to be detailed)
- 2. The 16-bit "logical sector size" at BPB offset +0x00 (boot sector offset +0x0B) is larger than or equal to 512 bytes. (In general, FAT32 logical sector sizes can be as small as 128 bytes; however, if an FS info sector is present, the logical sector size must be at least 512 bytes.)
- 3. The 16-bit "FS info sector number" at FAT32 BPB offset +0x25 (boot sector offset +0x30) contains a value larger than 0x0000 and smaller than 0xFFFF. (These two boundary values indicate that no FS info sector is present.)
- 4. The FS info sector has valid signatures: sector offsets +0x00..+0x03 contain values 0x52 0x52 0x61 0x41, sector offsets +0x1E4..+0x1E7 contain values 0x72 0x72 0x41 0x61, and sector offsets +0x1FC..+0x1FF contain values 0x00 0x00 0x55 0xAA.
- 5. The 32-bit value at offset +0x1EC in the FS info sector is either equal to 0xFFFFFFFF or it is larger than 0x00000001 and smaller than the volume's highest cluster number.

If all these conditions are met, the "last allocated cluster pointer" can be set to the 32-bit value at offset +0x1EC. If it contains a valid value, using this value will avoid unnecessary fragmentation on future allocations and there's no need to scan over a possibly large number of already allocated clusters. If, however, the value is not valid (which is still possible at this stage), the system will find out when it attempts to allocate the next cluster as this won't be empty. However, the system will then smoothly continue to search for the next free cluster, so this potential error condition is resolved gracefully. On a not too fragmented volume it will typically still find the next free cluster much earlier than searching for it from the start of the FAT, as the outdated pointer is most probably still in the ballpark region of the last actual allocation. So, even an outdated pointer will not cause any harm, whereas a valid pointer will dramatically speed up the first allocation.

If one of these five conditions is not met, the "cluster allocation pointer" will have to be set to 0xFFFFFFFF (for "unknown"), therefore forcing the system to
search from the start on the next allocation.

For the free cluster counter, conditions 1..4 must be met as well. Additionally, the 32-bit value at offset +0x1E8 in the FS info sector must either be equal to 0xFFFFFFFF (for "unknown") or smaller than the volume's highest cluster number. Even if these conditions are met, the implementation must not rely
on this value to be correct unless it is known to be correct by other means (see below).
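To make the above concrete, here is a rough sketch of mine (not newRPL code), using absolute offsets within the 512-byte FS information sector and assuming conditions 1-3 have already been checked while parsing the BPB:

#include <stdint.h>

// 'fsinfo' is the 512-byte FS information sector, 'maxcluster' the highest valid cluster number.
static uint32_t rd32(const uint8_t *p)
{
    return p[0] | (p[1] << 8) | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

static int FSInfoValid(const uint8_t *fsinfo)
{
    return rd32(fsinfo + 0x000) == 0x41615252 &&   // 0x52 0x52 0x61 0x41 ("RRaA")
           rd32(fsinfo + 0x1E4) == 0x61417272 &&   // 0x72 0x72 0x41 0x61 ("rrAa")
           rd32(fsinfo + 0x1FC) == 0xAA550000;     // 0x00 0x00 0x55 0xAA
}

// Last allocated cluster pointer (offset +0x1EC): returns 0xFFFFFFFF ("unknown") unless the
// value passes the checks; even then it may be stale, which is harmless as described above.
static uint32_t LastAllocPointer(const uint8_t *fsinfo, uint32_t maxcluster)
{
    if (!FSInfoValid(fsinfo)) return 0xFFFFFFFFu;
    uint32_t ptr = rd32(fsinfo + 0x1EC);
    if (ptr == 0xFFFFFFFFu || ptr < 2 || ptr > maxcluster) return 0xFFFFFFFFu;
    return ptr;
}

// Free cluster count (offset +0x1E8): only a hint, never to be trusted as exact.
static uint32_t FreeClusterHint(const uint8_t *fsinfo, uint32_t maxcluster)
{
    if (!FSInfoValid(fsinfo)) return 0xFFFFFFFFu;
    uint32_t count = rd32(fsinfo + 0x1E8);
    if (count != 0xFFFFFFFFu && count > maxcluster - 1) return 0xFFFFFFFFu;
    return count;
}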

The actual free space can be calculated alongside other operations on the volume - as soon as the cluster pointer has wrapped around for the first time, the actual free space is known until the volume is unmounted, the medium is removed, or the system is shut down.

Once the free space has been determined this way, the system can immediately fail further allocations if the value is 0. But for as long as the free space is only based on the FS info sector value, the system would still have to scan the FAT for free clusters even if the value indicated 0 already - once it has finished that scan, the actual value is known, so this time consuming operation will happen only once until the next unmount.

When mounting a volume, it is not normally necessary to know the free space, so it is also not necessary to perform the scan immediately. This can be delayed until someone actually wants to know the exact value (SDFREE). In a multi-threading system, the free space scanning could be carried out as a non-blocking background process.

Quote:b) Implementing it requires proper mount/unmount, which means if the user just pulls the card out then putting on a PC will indicate a "dirty" file system and will suggest (or force) disk checking procedures.
Yes, but this is exactly what should happen in this scenario, as the integrity of the file system cannot be trusted any more until after running a disk check utility.
Quote:c) To minimize fragmentation, the file system driver uses that "scan" to locate the largest block of free clusters in the disk.
Yes. Not necessarily the largest block, but on a still unfragmented volume, the last allocation was most probably at the end of the allocated area.
Quote:But now that I added SDHC support, with potentially much slower scan times I may have to rethink that.
At least this is what I would propose as it is easy to implement (almost no memory and code overhead) and it can speed up things considerably if the values are (almost) valid, and does not cause actual problems, if they are not.

Of course, there are other methods to speed up certain access patterns, and there are various strategies for reducing fragmentation on FAT file systems (the above method is part of what is used by DOS and Windows). Unfortunately, they require considerably more complex implementations, more memory for various types of buffers and for dynamically built in-memory data structures, and background processes - way too complicated for an embedded system, IMHO.

One feature may be worth considering, though: A vast amount of fragmentation is caused by frequent allocations and deallocations of files, as the system would try to maintain the integrity even of void data (to allow later undeletion), and it would thereby effectively cause more fragmentation in this scenario. However, a good amount of such interim file operations could be carried out on temporary files. Therefore, some operating systems (including DOS) have special API functions for temporary files. They not only ensure the creation of unique file names (so the user does not have to be bothered with them); using these functions will also cause the file system to use different allocation/deallocation strategies. The file system would no longer attempt to maintain deleted directory entries and use "fresh" clusters for new allocations, but it would try to reuse previously freed entries.

Something like this could be implemented in newRPL as well. On the command line there could be a number of "reserved file names" which the system would recognize as temporary files. The on-disk file names could use a special pattern so that the system can recognize them as temporary files (even if they are left-overs from previous sessions). The file system could thereby automatically remove orphaned temporary files.

Quote:Semicolon is not an allowed character in 8.3 names, so this is only for long names. SFN names do use the numeric tail in the classic way. But since potentially conflicting names are allowed, the SFN should not be relied upon when the LFN exists.
One example:
Let's say I create "BigFile", which has a SFN "BIGFILE".
Now I want to create "BIGFILE", which conflicts with the previous SFN. newRPL will create LFN="BIGFILE;" and the SFN="BIGFIL~1" will be used. Now somebody reading only SFN names might think BIGFILE is "BIGFILE", when in reality it is "BigFile". Not sure if it makes sense, but that's why SFN names shouldn't be relied upon.
It does make sense. It's perfect this way.
Quote:Actually, modern windows sometimes assigns a completely random SFN, not sure under which conditions, but this is perfectly valid.
Yes, I know, there are also a number of other special cases:

- If a filename fits into the 8.3 format with all characters uppercase, Windows can be configured to only create a SFN and skip creating the unnecessary LFN (thereby avoiding unnecessary clutter in the filesystem).

- If a filename fits into the 8.3 scheme and either contains only lowercase letters or combines a lowercase filename and an uppercase extension or vice versa, the creation of an LFN can be suppressed as well. In this case only an SFN is created and the case information is stored in bits 4 and 3 at offset 0x0C in directory entries, so that the LFN can be recreated from the SFN later on (see the sketch after this list).

- Further, Windows can be configured to not start using numeric tails until actually necessary. It would simply truncate the name to fit into the 8.3 scheme, so the SFN for a file named "helloworld.txt" would be "HELLOWOR.TXT", not "HELLOW~1.TXT". Useful to keep as much of the original name available as SFN.
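A minimal sketch of how those case bits can be used to rebuild the displayed name (my own illustration, not newRPL code, assuming the usual interpretation: bit 3 / 0x08 marks a lowercase base name, bit 4 / 0x10 a lowercase extension):

#include <ctype.h>
#include <stddef.h>

// 'entry' is a 32-byte 8.3 directory entry; 'out' receives the display name.
static void SFNToDisplayName(const unsigned char *entry, char *out /* >= 13 bytes */)
{
    unsigned char flags = entry[0x0C];
    size_t o = 0;

    for (int i = 0; i < 8 && entry[i] != ' '; ++i)            // base name, space padded
        out[o++] = (flags & 0x08) ? (char)tolower(entry[i]) : (char)entry[i];
    if (entry[8] != ' ') {                                    // extension present
        out[o++] = '.';
        for (int i = 8; i < 11 && entry[i] != ' '; ++i)
            out[o++] = (flags & 0x10) ? (char)tolower(entry[i]) : (char)entry[i];
    }
    out[o] = '\0';
}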

Quote:None of the above. I just can't picture a multi-user calculator... how do you place more than 2 fingers on the same keyboard? :-)
That's why I wrote "single-user permissions".

Quote:Not done yet, but I'm planning to simply have an IRQ on the card detection pin, so if the user pulls the card when there's data to be written the system will throw an exception, asking the user to reinsert the card immediately.
This sounds like a good idea! (Comparing the BPB serial number can be used to ensure that the same medium was reinserted.)

Greetings,

Matthias


--
"Programs are poems for computers."
07-28-2016, 02:52 AM
Post: #351
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
(07-27-2016 08:56 PM)matthiaspaul Wrote:  That's right, these values are nothing that can be relied upon; they can be used only with sanity checks in place. However, it's still possible to take advantage of them.

The idea behind the initial scan was to find the largest pool of free clusters, and keep not only the starting cluster but also how many we have available before we need to scan for the next hole.
Since it's the largest hole on the disk, unless the disk is filled near capacity or fragmented beyond hope, it's unlikely you'll run out of clusters. This means you only need to read the FAT table at mount time, and hopefully not again for a long time.


(07-27-2016 08:56 PM)matthiaspaul Wrote:  
Quote:b) Implementing it requires proper mount/unmount, which means if the user just pulls the card out then putting on a PC will indicate a "dirty" file system and will suggest (or force) disk checking procedures.
Yes, but this is exactly what should happen in this scenario, as the integrity of the file system cannot be trusted any more until after running a disk check utility.

Yes, and no.
When you open a file, the FAT table is read to get the complete chain of clusters for that file. After that, it's only written when you close the file.
Writing to the FAT is done using a "patching" cache. It keeps only a list of clusters that need to be modified, and if anybody reads the FAT table, the sectors read get patched "on the fly" with the contents of the cache, so the process sees the "new" FAT table, even though it wasn't written to disk yet.
Only when you close a file does its directory entry get updated and the FAT cache get flushed. This also means that if you pull the card, the file system is normally in a consistent state (as long as you don't pull it during a write operation).
Well, this is usually the case but it isn't guaranteed (the FAT cache is of limited size, so if you write a large amount of data, it will have to perform partial flushes), so there might be lost clusters in the worst case, but other than that it's safe to pull the card (of course any open files would not be written).
Only if the card is pulled while actually writing there could be an inconsistent state.
And since it almost always is in a consistent state, I tried to avoid writing the boot record to change the dirty bit, because either I'd have to write it every time I closed a file, or you'd get bothered about a dirty state when the file system is perfectly fine.
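In case it helps to picture it: conceptually, the cache is just a small list of pending (cluster, value) pairs that gets overlaid onto FAT sectors as they are read. Something along these lines (an illustration only, not the actual newRPL structures or sizes):

#include <stdint.h>

#define FATCACHE_SIZE 32                      // illustrative size
struct FATPatch { uint32_t cluster; uint32_t value; };
static struct FATPatch fatcache[FATCACHE_SIZE];
static int fatcache_used;

// Queue a pending FAT32 update; returns 0 when full (caller would flush to the card first).
int FATCachePut(uint32_t cluster, uint32_t value)
{
    for (int i = 0; i < fatcache_used; ++i)
        if (fatcache[i].cluster == cluster) { fatcache[i].value = value; return 1; }
    if (fatcache_used == FATCACHE_SIZE) return 0;
    fatcache[fatcache_used].cluster = cluster;
    fatcache[fatcache_used].value = value;
    ++fatcache_used;
    return 1;
}

// Overlay pending updates onto a freshly read FAT sector ('fatsector' counted from the start
// of the FAT, 128 FAT32 entries per 512-byte sector), so readers see the "new" table.
void FATCachePatchSector(uint32_t fatsector, uint8_t *buffer)
{
    for (int i = 0; i < fatcache_used; ++i) {
        if (fatcache[i].cluster / 128 != fatsector) continue;
        uint32_t ofs = (fatcache[i].cluster % 128) * 4;
        buffer[ofs]     = (uint8_t)fatcache[i].value;
        buffer[ofs + 1] = (uint8_t)(fatcache[i].value >> 8);
        buffer[ofs + 2] = (uint8_t)(fatcache[i].value >> 16);
        // keep the reserved top 4 bits of the FAT32 entry intact
        buffer[ofs + 3] = (uint8_t)((buffer[ofs + 3] & 0xF0) | ((fatcache[i].value >> 24) & 0x0F));
    }
}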

(07-27-2016 08:56 PM)matthiaspaul Wrote:  At least this is what I would propose as it is easy to implement (almost no memory and code overhead) and it can speed up things considerably if the values are (almost) valid, and does not cause actual problems, if they are not.
It would speed up mounting, but slow down actual use, as the FAT table would have to be read more often than with the current implementation. In other words, it would spread out those initial 7 seconds into every single write operation.
Still, an idea worth considering, especially now with larger SDHC cards.

(07-27-2016 08:56 PM)matthiaspaul Wrote:  Of course, there are other methods to speed up certain access patterns and there are various strategies how to possibly reduce fragmentation on FAT file systems (the above method is part of what is used by DOS and Windows). Unfortunately, they require considerably more complex implementations, more memory for various types of buffers and to hold dynamically built in-memory data structures, background processes - way too complicated for an embedded system, IMHO.

Yes, I don't want to overcomplicate it. This is not a Disk Operating System. The file system is just a module of the calculator; the main goal is the calculator, not disk management.
newRPL's file system is not dumb by any means; it is quite optimized given the few resources it uses.

(07-27-2016 08:56 PM)matthiaspaul Wrote:  One feature may be worth considering, though: A vast amount of fragmentation is caused by frequent allocations and deallocations of files, as the system would try to maintain the integrity even of void data (to allow later undeletion), and it would thereby effectively cause more fragmentation in this scenario. However, a good amount of such interim file operations could be carried out on temporary files. Therefore, some operating systems (including DOS) have special API functions for temporary files. They do not only ensure the creation of unique file names (so the user does not have to be bothered with them), but using these functions will also cause the file system to use different allocation / deallocation strategies. The file system would no longer attempt to maintain deleted directory entries and use "fresh" clusters for new allocations, but it would try to reuse previously freed entries.

Something like this could be implemented in newRPL as well. On the command line there could be a number of "reserved file names" which the system would recognize as temporary files. The on-disk file names could use a special pattern so that the system can recognize them as temporary files (even if they are left-overs from previous sessions). The file system could thereby automatically remove orphanted temporary files.
I'm not sure the use case in newRPL would require so many temp files. That's more for a general-purpose operating system. newRPL's use case is more about writing lots of tiny 1-cluster files. This virtually eliminates fragmentation, as files are (almost all) atomic.

(07-27-2016 08:56 PM)matthiaspaul Wrote:  Yes, I know, there are also a number of other special cases:

- If a filename fits into the 8.3 format with all characters uppercase, Windows can be configured to only create a SFN and skip creating the unnecessary LFN (thereby avoiding unnecessary clutter in the filesystem).

- If a filename fits into the 8.3 scheme and either contains only lowercase letters or combines a lowercase filename and an uppercase extension or vice versa, the creation of an LFN can be suppressed as well. In this case only an SFN is created and the case information is stored in bits 4 and 3 at offset 0x0C in directory entries, so that the LFN can be recreated from the SFN later on.

- Further, Windows can be configured to not start using numeric tails until actually necessary. It would simply truncate the name to fit into the 8.3 scheme, so the SFN for a file named "helloworld.txt" would be "HELLOWOR.TXT", not "HELLOW~1.TXT". Useful to keep as much of the original name available as SFN.
All the cases above are implemented in newRPL exactly as described.

(07-27-2016 08:56 PM)matthiaspaul Wrote:  
Quote:None of the above. I just can't picture a multi-user calculator... how do you place more than 2 fingers on the same keyboard? :-)
That's why I wrote "single-user permissions".
While I have no intention of going above and beyond basic VFAT support for newRPL, I'll keep these ideas in mind if I ever decide to improve my CleanFS project.
07-28-2016, 10:36 AM
Post: #352
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
(07-28-2016 02:52 AM)Claudio L. Wrote:  The idea behind the initial scan was to find the largest pool of free clusters, and keep not only the cluster, but how many we have available until we need to scan for the next hole.
Being the largest hole in the disk, unless the disk is filled near capacity, or fragmented beyond hope, it's unlikely you'll run out of them. This means you only need to read the FAT table at mount, and hopefully never again for a long time.
That's fine, but on a not too fragmented filesystem the pointer in the FS info sector will likely point to (if the pointer is outdated: shortly before) the same location or at least be near the start of a larger free area (with fragmentation: not necessarily the largest one, though).
It will do so not by some inexplicable "magic", but simply because this is an artefact of the allocation strategy used by DOS and Windows (and likely most other implementations using the FS info sector), which almost always increases the pointer until it wraps around (in order to increase the timespan and likelihood that deleted files can be undeleted later on - and also for some "high-level" wear-leveling).

So, by taking advantage of the FS info sector, you can skip the initial scan (or, if the pointer was outdated, reduce it to a very short scan until it is back in "sync"), and likely still don't have to read the FAT for some long while afterwards (until the medium gets full or fragmented).

Of course, these assumptions don't work for highly fragmented volumes; there isn't much that can be done about that except performing some form of defragmentation every once in a while.
(I am aware of strategies to still maintain acceptable speed even on significantly fragmented volumes, but they aren't suited for embedded systems as they require a considerably more complicated in-memory representation of the mounted filesystem than the on-disk FAT structures.)

(07-28-2016 02:52 AM)Claudio L. Wrote:  
(07-27-2016 08:56 PM)matthiaspaul Wrote:  At least this is what I would propose as it is easy to implement (almost no memory and code overhead) and it can speed up things considerably if the values are (almost) valid, and does not cause actual problems, if they are not.
It would speed up mounting, but slow down actual use, as the FAT table would have to be read more often than with the current implementation. In other words, it would spread out those initial 7 seconds into every single write operation.
That's overly pessimistic, as this would happen only on a fragmented or nearly full volume (see above).

Well, if you prefer to let the filesystem work more locally and don't like the idea of increasing the pointer until it wraps around, you could still try some mixed approach to get rid of the initial delay:

Use the FS info sector only on volumes beyond a certain size threshold (when the delay starts "hurting" the user) and ignore it on smaller volumes.

Greetings,

Matthias


--
"Programs are poems for computers."
07-28-2016, 02:54 PM
Post: #353
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
(07-28-2016 10:36 AM)matthiaspaul Wrote:  So, by taking advantage of the FS info sector, you can skip the initial scan (or, if the pointer was outdated, reduce it to a very short scan until it is back in "sync"), and likely still don't have to read the FAT for some long while afterwards (until the medium gets full or fragmented).
(07-28-2016 02:52 AM)Claudio L. Wrote:  It would speed up mounting, but slow down actual use, as the FAT table would have to be read more often than with the current implementation. In other words, it would spread out those initial 7 seconds into every single write operation.
That's overly pessimistic, as this would happen only on a fragmented or nearly full volume (see above).
We are thinking of two different strategies. My strategy is to know the start and extent of the "hole"; your strategy is to know the start of the hole, but not the end.
If/when my strategy finds a large hole, it doesn't need to verify that the cluster is free each time it needs to allocate clusters (unless you run out of them, in which case it scans for the next hole).
Your strategy gets the next free cluster (or last used, same thing), but each time you need to allocate a new cluster, you need to go and check if the next cluster is actually free by reading the FAT.

That's why you say "short scan", and you are correct, as with your strategy you only need to "walk" a few clusters until you find a free one. With my strategy, the scan is not so short, because even if you quickly find the next hole, a big hole requires you to read potentially almost the whole FAT to determine its size. But once you find a big one, there's no need to check ever again (until you run out, that is).
On an embedded system where we don't have any read cache for the FAT, just having to read one sector to check whether the next cluster is free every time a new cluster is needed is quite a slowdown.

Even if I adopt the next cluster hint from the BPB, I'd still need to know the size of the hole, which potentially requires a large chunk of the FAT to be scanned (if it's a big hole), so with my strategy in mind, there's no big advantage in using the hint.

Perhaps the "best of both worlds" would be:
a) Use the hint of the BPB as a start point, scan from there until the next hole only (don't look for the largest).
b) Determine the size of the hole, but up to a limit of let's say 1024 clusters. On FAT32 it means we'd read 8 sectors max. during this limited scan. If the hole is bigger it doesn't matter, as the next scan will find the next 1024 clusters of it with another "short scan" of 8 sectors.
c) Add all the logic to write the BPB dirty bit when the first file is open, and write it again when the last file is closed.

This would reduce the initial scan time (not as much as you were suggesting, but a lot), and the penalty introduced by having to check the next cluster would happen only once every 1024 allocations instead of every time. I think it balances both strategies quite well.
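As a rough sketch of what a) and b) could look like (a pseudo-implementation of mine with a made-up helper: ReadFATSector() stands in for the real sector read, returning 128 host-order FAT32 entries from one 512-byte FAT sector):

#include <stdint.h>

#define ENTRIES_PER_SECTOR 128     // FAT32, 512-byte sectors
#define MAX_RUN            1024    // cap the measured hole size -> at most 8 sectors read

int ReadFATSector(uint32_t fatsector, uint32_t entries[ENTRIES_PER_SECTOR]);  // hypothetical helper

// Find the next run of free clusters at or after 'hint' (e.g. the FS info pointer, or 2 if
// unknown) and report its length, capped at MAX_RUN. Returns 0xFFFFFFFF if none was found.
uint32_t FindFreeRun(uint32_t hint, uint32_t maxcluster, uint32_t *runlength)
{
    uint32_t entries[ENTRIES_PER_SECTOR];
    uint32_t start = 0, run = 0;
    uint32_t first = (hint < 2) ? 2 : hint;

    for (uint32_t cl = first; cl <= maxcluster && run < MAX_RUN; ++cl) {
        if (cl == first || cl % ENTRIES_PER_SECTOR == 0) {
            if (!ReadFATSector(cl / ENTRIES_PER_SECTOR, entries)) break;   // I/O error: stop
        }
        if ((entries[cl % ENTRIES_PER_SECTOR] & 0x0FFFFFFF) == 0) {        // free cluster
            if (run == 0) start = cl;
            ++run;
        } else if (run > 0) {
            break;                                                         // end of the hole
        }
    }
    *runlength = run;
    return run ? start : 0xFFFFFFFFu;
}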

(07-28-2016 10:36 AM)matthiaspaul Wrote:  Use the FS info sector only on volumes beyond a certain size threshold (when the delay starts "hurting" the user) and ignore it on smaller volumes.

This is a good idea, although I think the "compromise" I proposed above defines a single threshold that performs the same on big and small volumes.

I should start working on these changes (unless you see something fundamentally wrong with my approach above).
And hey, thanks a lot for the highly technical discussion; I appreciate deep digging like this, as it always leads to better implementations.

Claudio
07-30-2016, 06:24 AM
Post: #354
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
Claudio, please forgive me if this has been asked before or is considered a silly question, but after a forum search I didn't find an answer, so here it goes.

Will newRPL improve battery life on the 50g?

Understandably, the 50g gets shorter battery life than the legacy 4 MHz and 2 MHz (and below) Saturn processor calculators, due to its high-speed processor. However, as we all know, most computers can reduce CPU speed when full speed is not needed, so as to increase battery life. Is that possible on the 50g hardware, and/or is there something that can be done in newRPL to ensure longer battery life versus stock RPL?

Thank you.
07-30-2016, 01:26 PM
Post: #355
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
(07-28-2016 02:54 PM)Claudio L. Wrote:  Being an embedded system where we don't have any read cache for the FAT, just having to read one sector to check if the next cluster is free every time a new cluster is needed is quite a slowdown.
[...]
b) Determine the size of the hole, but up to a limit of let's say 1024 clusters. On FAT32 it means we'd read 8 sectors max. during this limited scan. If the hole is bigger it doesn't matter, as the next scan will find the next 1024 clusters of it with another "short scan" of 8 sectors.
[...]
This would reduce the initial scan time (not as much as you were suggesting, but a lot) and the penalty introduced by having to check the next cluster will happen only every 1024 allocations instead of every time. I think it balances both strategies quite well.
Do you read the FAT on a sector-by-sector basis or in larger chunks (depending on how much memory is available)? If you do it on a sector-by-sector basis, perhaps the best compromise would be to look ahead only one sector's worth of cluster entries: that sector gets read anyway. Consequently, as the info is readily available in the buffer, it makes sense to evaluate it before the buffer is trashed and the same sector has to be read again later on when searching for the next free cluster. Reading ahead 8 sectors could speed things up some more if you do multi-sector I/O and have a buffer large enough to hold 8 sectors, but the gain won't be as large, and looking ahead always carries the risk that the data isn't needed at all. In the worst case, you would have read 7 sectors for nothing.

Assuming a sector size of 512 bytes (although this isn't fixed), this would be 128 clusters for FAT32, 256 clusters for FAT16, and ca. 340 clusters on FAT12. This may be more than enough already. Combining this with the cluster size, this would cover file growths of 64 KB and more already, so anything more might be overkill on a calculator.

Greetings,

Matthias


--
"Programs are poems for computers."
07-30-2016, 02:28 PM
Post: #356
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
(07-26-2016 09:27 PM)Claudio L. Wrote:  
(07-26-2016 04:37 PM)matthiaspaul Wrote:  Your Unicode <> OEM conversion is currently hardwired to codepage 850.
Yes, mainly because it's not meant to exchange information with DOS 6.20 anymore. [...]
This won't affect you in any way, it's only to translate short names with strange characters into readable names for the calculator (these files would've been created by older OS). [...] but code pages are in the past, newRPL is Unicode compliant and not apologizing for it.

(07-27-2016 04:39 PM)Claudio L. Wrote:  * Only if you create an 8.3 name with characters >127 in some other OS, those characters will be in the OEM codepage of that OS. newRPL is for simplicity going to interpret those characters as CP850 and convert them to Unicode. This is a very rare occurrence
[...]
So you'd have to use an older OS that is not LFN aware AND using a CP other than 850 to run into problems.
I'm afraid I have to disagree here. Unicode is great (although far from being perfect), but while it will be used on many new systems, I don't see codepages vanishing in the next few decades, either. Not only because many older systems exist and are still fully functional, but also because there are applications where Unicode does not offer benefits over 8-bit codepages, but just complicates a design.

What you declare as "rare occurrence" would actually be the most common use case for me, transferring files from plain FAT volumes to the calculator.

Regarding OEM character translation, I think, if only one codepage could be supported, codepage 437 would be the best choice, as this is the default hardware codepage used on most PCs. Adding support for a basic repertoire of other codepages does not seem like a waste of flash space to me. A more compact table representation could be found.
Another idea is to store the translation table(s) in a special binary system file in the root directory of the volume and have the filesystem use this table if present, and default to an internally stored table for 437 if not. This could be useful also for other implementations, so we could define a little FAT extension here.

Greetings,

Matthias


--
"Programs are poems for computers."
07-31-2016, 02:14 AM
Post: #357
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
(07-30-2016 06:24 AM)JDW Wrote:  Claudio, please forgive me if this has been asked before or is considered a silly question, but after a forum search I didn't find an answer, so here it goes.

Will newRPL improve battery life on the 50g?

Understandably, the 50g gets shorter battery life than legacy 4MHz and 2MHz (and below) Saturn processor calculators due to the high speed processor in the 50g. However, as we all very well know, most computers can reduce CPU speed at times when such is not needed so as to increase battery life. Is that possible in 50g hardware, and/or is there something that can be done in newRPL to ensure longer battery life versus stock RPL?

Thank you.

ARM processors all have programmable clocks, so you can vary the speed at will. The stock 50g runs the CPU at 12 MHz when idle (waiting for a key), and at 75 MHz when you run any program (basically, whenever you see the hourglass it's running at 75 MHz).
newRPL does it slightly differently:
* Runs at 6 MHz instead of 12 MHz. Because there's no emulator, 6 MHz is plenty for many tasks.
* Whenever it's idle waiting for a key, it actually stops the CPU altogether with a Wait-For-Interrupt.
* If you don't poll the keyboard for 300 ms (or did I change it to 500 ms? I can't remember), then it assumes it's running a program and goes full blast at 192 MHz. Whenever you see the hourglass in newRPL you are running at 192 MHz.
* As soon as your program ends and the calculator waits for a key again, the clock goes back to 6 MHz.

Basically, the difference is that because newRPL is so much faster than RPL, many programs will be able to run and finish within 500 ms even while running at only 6 MHz, so the CPU goes to full speed less often.
The stock firmware switches to the fastest clock as soon as the user presses a key, because the emulator itself needs to run at 75 MHz to emulate a Saturn at a decent speed (about 2.1x faster than the original when running at 75 MHz).
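In (very rough) pseudo-C, the policy boils down to something like this - the helper names are made up for illustration, they are not the real newRPL routines:

#define IDLE_MHZ        6
#define FULL_MHZ        192
#define BOOST_AFTER_MS  300   // or 500 ms, as noted above

void     SetCPUClockMHz(int mhz);           // made-up helper
unsigned MillisSinceLastKeyPoll(void);      // made-up helper

void SpeedPolicyTick(void)   // imagine this being evaluated periodically
{
    if (MillisSinceLastKeyPoll() > BOOST_AFTER_MS)
        SetCPUClockMHz(FULL_MHZ);   // nobody has asked for keys lately: a program is running
    else
        SetCPUClockMHz(IDLE_MHZ);   // interactive use: 6 MHz is plenty, and the idle loop
                                    // additionally stops the core with a wait-for-interrupt
}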
07-31-2016, 02:41 AM
Post: #358
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
(07-30-2016 02:28 PM)matthiaspaul Wrote:  I'm afraid I have to disagree here. Unicode is great (although far from being perfect), but while it will be used on many new systems, I don't see codepages vanishing in the next few decades, either. Not only because many older systems exist and are still fully functional, but also because there are applications where Unicode does not offer benefits over 8-bit codepages, but just complicates a design.

What you declare as "rare occurance" would actually be the most common use case for me, transferring files from plain FAT volumes to the calculator.
Does your system support LFN names? If so, it is rare because your system will generate a Unicode LFN for any strange names.
If it doesn't (because of a bad implementation), you can still guarantee an LFN will be created by appending a semicolon to the file name (you can do that for all files with a single rename command). That way you get rid of the code page problem forever, as the *source* system will do the OEM to Unicode conversion (in whatever code page it is using). newRPL will ignore the semicolon so you'll see your original names in the calc.
If you are using plain DOS without LFN support, then I'd say "why?" You wrote the LFN support for DR-DOS, right? So at least you should use your own creation :-).

(07-30-2016 02:28 PM)matthiaspaul Wrote:  Regarding OEM character translation, I think, if only one codepage could be supported, codepage 437 would be the best choice, as this is the default hardware codepage used on most PCs. Adding support for a basic repertoire of other codepages does not seem like a waste of flash space to me. A more compact table representation could be found.
Another idea is to store the translation table(s) in a special binary system file in the root directory of the volume and have the filesystem use this table if present, and default to an internally stored table for 437 if not. This could be useful also for other implementations, so we could define a little FAT extension here.

If it were the Prime, there's plenty of flash, so I'd say no problem. On the 50g we have 2 MB and newRPL already uses 1.5 MB, so by the time newRPL is ready it will be tight in there. I'd put this on the back burner until newRPL is more finished. If there's space left in ROM, then perhaps multi-CP support can be added.
Regarding loading it into RAM, I think that's worse: we only have 512 KB of RAM, and I'm leaving about 32 KB max for the file system to use - that's all, so loading the table and keeping it there permanently does more harm than good.
08-01-2016, 01:31 AM (This post was last modified: 08-01-2016 03:16 PM by matthiaspaul.)
Post: #359
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
While I don't have the time to really dig deep into the sources, I have had a closer look and already identified a few bugs and compatibility issues. In some cases, the necessary changes are local enough that I can provide improved/fixed source code excerpts; in other cases I'll just give some hints.

Some FAT-related data structures have evolved over time or have entries whose proper interpretation depends on various conditions. Several bugs in the implementation are down to not checking for the presence of particular data structure versions (e.g. BPB variants) or not testing the conditions under which they are valid.

https://sourceforge.net/p/newrpl/sources...extentry.c

...
while(FSReadLL(buffer,32,dir,fs)==32)
{
if(buffer[0]==0) return FS_EOF;
if(buffer[0]==0xe5) continue; // DELETED ENTRY, USE NEXT ENTRY
if( (buffer[11]&FSATTR_LONGMASK) == FSATTR_LONGNAME) {
// TREAT AS LONG FILENAME
...

The LONGMASK (0x0F) condition above is incomplete. On FAT12 and FAT16 volumes, the attribute combination 0x0F can also occur as part of a valid pending-delete file under DELWATCH; however, actual VFAT LFN entries always have the cluster value at 0x1A set to 0x0000, and the length entry at 0x1C happens to never become 0x00000000 under these conditions. This check does not work for FAT32 volumes, but released versions of DELWATCH didn't support FAT32 and would have had to use additional sanity checks anyway. Consequently, the clause above should be expanded as follows (pseudo-code) in order to avoid any misinterpretation:

if( ((buffer[0x0B] & FSATTR_LONGMASK) == FSATTR_LONGNAME) &&                                     // attr is 0x0F
    ( (type == TYPE_FAT32) ||                                                                    // additional test on FAT12/FAT16 only
      ( (buffer[0x1A] + (buffer[0x1B]<<8)) == 0 &&                                               // 16-bit cluster value at 0x1A is 0
        (buffer[0x1C] + (buffer[0x1D]<<8) + (buffer[0x1E]<<16) + (buffer[0x1F]<<24)) != 0 ) ) )  // 32-bit value at 0x1C is nonzero
{
// TREAT AS LONG FILENAME
...

Note that this code relies on boolean short-circuit evaluation and that it deliberately does not test the high part of the cluster value at offset 0x14, as that field is valid only on FAT32, and the additional test does not apply to FAT32.

Another similar test occurs somewhat further down in the source code:

...
// VERIFY THAT SHORT ENTRY FOLLOWS LONG NAME
if (((ptr[11] & FSATTR_LONGMASK) == FSATTR_LONGNAME) ||
(*ptr == 0) || (*ptr == 0xE5)) {
// VALID SHORT ENTRY NOT FOUND
...

I haven't checked whether similar tests exist in other files as well.

There's another bug when retrieving the start cluster value:

entry->FirstCluster = buffer[26] + (buffer[27]<<8) + (buffer[20]<<16) + (buffer[21]<<24);

The code above is only valid on FAT32 volumes; on FAT12 and FAT16 it would have to read as follows, as the 16-bit entry at 0x14 holds other information on those volumes:

entry->FirstCluster = buffer[0x1A] + (buffer[0x1B]<<8);

So, this could become something like:

entry->FirstCluster = buffer[0x1A] + (buffer[0x1B]<<8);
if (type == TYPE_FAT32)
entry->FirstCluster += (buffer[0x14]<<16) + (buffer[0x15]<<24);

I've seen similar code sections in quite a few other source files as well (you probably know them by heart, whereas I haven't written them down yet) and they need to be changed as well. In general, it is invalid to assume that the 16-bit entry at 0x14 in directory entries is 0 on FAT12 and FAT16 volumes.

There's another potential problem here: The contents of fields in directory entries that are not used by the implementation must not be changed. So, in the above example, the code would most probably still have to store away the value at offset 0x14 even on FAT12 and FAT16 volumes for later restoration (just not treat it as a cluster entry).

See, for example, this file:

https://sourceforge.net/p/newrpl/sources...direntry.c
fsupdatedirentry.c:

...
mainentry = buffer + 32*(file->DirEntryNum-1);
// write new properties
mainentry[11] = file->Attr;
// mainentry[12] = file->NTRes;
mainentry[13] = file->CrtTmTenth;
WriteInt32(mainentry+14, file->CreatTimeDate);
WriteInt16(mainentry+18, file->LastAccDate);
WriteInt16(mainentry+20, file->FirstCluster>>16);
WriteInt16(mainentry+26, file->FirstCluster);
WriteInt32(mainentry+28, (file->Attr&FSATTR_DIR) ? 0 : file->FileSize);
WriteInt32(mainentry+22, file->WriteTimeDate);
...

Unless this is a FAT32 volume, it is invalid to overwrite the contents of the 16-bit entry at offset 20 with the high word of the cluster (unless FirstCluster holds the original contents from when the file was opened - however, that would make it necessary to change how the FirstCluster variable is used as a cluster number).

For some reason, restoring the original value of the NTRes byte was commented out here. This byte is used for various purposes. It is important that the implementation does not change the contents of bits 7-5 and 2-0. It may change bits 3 and 4 when dealing with LFNs, and it may clear all bits when creating a new file. However, in an already existing file, the bits must not be changed.

In the following file:

https://sourceforge.net/p/newrpl/sources...direntry.c
fsdeletedirentry.c

the following excerpt can be found:

...
mainentry = buffer;

for (f = 0; f < file->DirEntryNum; ++f, mainentry += 32) {
mainentry[0] = 0xE5;
}
...

This should be changed to something like the following in order to allow files to be undeleted without having to enter the first character of the file name again (using the WriteInt32() helper seen above also avoids an unaligned 32-bit store):

...
mainentry = buffer;

for (f = file->DirEntryNum; f > 0; --f, mainentry += 32) {
    if (f == 1) {
        mainentry[0x0D] = mainentry[0];  // save 1st char of SFN at 0x0D for later undeletion (overwrites the no longer needed creation-time ms field)
        WriteInt32(mainentry + 0x0E, 0); // clear 0x0E..0x11, creation date & time of the deleted file
    }
    mainentry[0] = 0xE5;
}
...

So much for now, but there's more... ;-)

Hope it helps,

Matthias


--
"Programs are poems for computers."
08-02-2016, 02:30 AM
Post: #360
RE: newRPL: [UPDATED July-25-16] Firmware for testing available for download
On another note, I flashed my HP49g with the latest firmware, which ate a fresh set of batteries over the weekend while sitting untouched.
Strange?