OLD CS1

Random IO errors and undelete-able file on SCS2


57 minutes ago, OLD CS1 said:

Before dozing off, I took the plunge and set the file to an appropriate name for its location.  The file is now accessible, in that it can be read and appears to contain the data it was meant to contain.  I figure at this point deleting it should be fine, though I would still like to validate the sector bitmap.

 

That will have to wait until tomorrow. We had a heavy cold front move through the area, and while the storms are gone, strong winds are forecast until mid-morning. Power is flickering and I would rather not put the UPS to the test.

Good job.  Renaming is simple enough and you should be able to rename it through standard disk managers now that the index is 'sorted'.

 

Validating the bitmap for one file is relatively simple. Confirming that no other files are using the same allocation units (16 sectors/AU) is where TIImageTool will come in handy. In cases where I was uncertain whether a file was safe to delete, I have renamed the file to something like "ZZNODELETE", protected it, and gone on my merry way. The question that remains is, "why did the DSR do this in the first place?", and it isn't easily answered.
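To illustrate the kind of cross-check described above, here is a minimal sketch in Python. It is not TIImageTool's actual code; it assumes each file's allocation-unit list has already been parsed from its file descriptor record, and the filenames and AU numbers are invented for the example.

```python
# Hypothetical sketch of the allocation-unit cross-check described above.
# Assumes each file's allocation units (16 sectors/AU on this format) have
# already been parsed from its file descriptor record. Names and numbers
# here are invented for illustration.

def find_shared_aus(files):
    """Map each allocation unit to the files claiming it; return conflicts."""
    owners = {}
    for name, aus in files.items():
        for au in aus:
            owners.setdefault(au, []).append(name)
    # Any AU claimed by more than one file means deleting one file would
    # free sectors still in use by the other.
    return {au: names for au, names in owners.items() if len(names) > 1}

files = {
    "GOODFILE":   {10, 11, 12},
    "ZZNODELETE": {12, 13},      # overlaps GOODFILE at AU 12
    "OTHER":      {20, 21},
}
print(find_shared_aus(files))    # {12: ['GOODFILE', 'ZZNODELETE']}
```

If the conflict map comes back empty, the file's allocation units are exclusively its own and deleting it cannot orphan another file's sectors.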

21 hours ago, InsaneMultitasker said:

The question that remains is, "why did the DSR do this in the first place", and it isn't easily answered.

I believe the system locked up during creation of the file. I had assumed the random lock-ups were due to this bad entry in the filesystem, as the system has been stable since. But I am not so certain now.

 

I am more curious as to why the filesystem requires everything to be in alphabetical order.  It seems like a lot of work has to happen to keep this structure versus, say, Commodore's DOS, which does not care in what order files are listed and allows for some creative work on the directory.  I am also surprised as there appears to be no native defragmentation or bitmap validation/verification utility for the TI.

4 hours ago, OLD CS1 said:

I am more curious as to why the filesystem requires everything to be in alphabetical order.  It seems like a lot of work has to happen to keep this structure versus, say, Commodore's DOS, which does not care in what order files are listed and allows for some creative work on the directory.  I am also surprised as there appears to be no native defragmentation or bitmap validation/verification utility for the TI.

That's just the way the TI Disk Controller works: it is not a requirement of the TI Filesystem itself. The disk controller does a binary search to find the specified file, so the filenames must be kept sorted for that search to work.
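The constraint can be sketched in a few lines of Python. This is not actual DSR code; the 10-character, space-padded names mirror the TI on-disk naming convention, but the index contents are invented for the example.

```python
# Sketch of why the index must stay sorted: the lookup is a binary search
# over the file descriptor index, so an out-of-order entry can make an
# existing file unfindable. Not actual DSR code; data is invented.

def find_file(index, name):
    """Binary search a sorted list of 10-char space-padded filenames."""
    target = name.ljust(10)
    lo, hi = 0, len(index) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if index[mid] == target:
            return mid
        if index[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None

sorted_index = ["ALPHA".ljust(10), "DISKO".ljust(10), "ZEBRA".ljust(10)]
print(find_file(sorted_index, "DISKO"))   # 1

# Break the ordering and the same entry "disappears": the search walks
# right past the slot where the name actually sits.
broken_index = ["DISKO".ljust(10), "ALPHA".ljust(10), "ZEBRA".ljust(10)]
print(find_file(broken_index, "DISKO"))   # None
```

This is also why the mis-named entry in the earlier posts made the file inaccessible until it was renamed: once the entry sorts into the wrong position, the binary search never lands on it.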

 

The first versions of Classic99 didn't bother with that search and instead scanned the entire index sequentially, but that actually broke on some deliberately malformed disk images.

 

54 minutes ago, dhe said:

Hey @OLD CS1, did you ever pursue this problem further?

Not yet, but I believe I will have to.

