busy inodes and segfault?
Gregory Davis
GregDavis at umbc.edu
Mon Mar 3 14:55:04 CET 2003
On Sunday 02 March 2003 9:33 pm, Love wrote:
> Gregory Davis <GregDavis at umbc.edu> writes:
> > On Sunday 02 March 2003 8:27 pm, you wrote:
> >> Gregory Davis <GregDavis at umbc.edu> writes:
> >> > In the case of arla, I can still read files in the afs cache, but when
> >> > I try to write to an afs volume, the program that made the write call
> >> > hangs. The xfs device then becomes busy and I cannot umount afs.
> >>
> >> Try 0.36pre25, I think we started leaking inodes in some Linux 2.4.x
> >> version.
> >>
> >> Love
> >
> > Is that version any different from 0.35.12? I'm currently using
> > 0.36pre23 and see the segfaults, as I did in 0.35.11.
>
> 0.36pre25 is very different from 0.35.12.
>
> The fix that fixes the inode leak is in 0.36pre25 but not 0.36pre24.
>
> How do you make the segfault happen? I would very much like to fix that and
> add it to the regression suite.
>
> Love
I'm no Linux kernel expert, but...

Linux uses a paging mechanism to cache recently used data in main memory.
You may have only 2 MB of main memory actually in use, but 259 MB cached
alongside it (in case you open OpenOffice again). When main memory fills up
with cache, the kernel starts releasing old cache to make room for newly
requested memory that is not cached. It is when cached memory starts getting
recycled that the busy inodes begin causing trouble, presumably because they
sit in cached memory and get recycled when they shouldn't.

I test the situation by leaving the computer on for extended periods of time
to use up main memory; this is sped up by running huge apps like OpenOffice,
compiling big projects such as the kernel, GNOME, or KDE, or leaving an
audio stream running in xmms for a while. I suppose you could simulate the
test by making an infinite-loop program whose data grows exponentially (a
rough sketch of what I mean is below). When it bombs, you have used all
available memory. Then try to view an afs directory that you have not viewed
(cached) before, and see if anything is missing from it (everything will be
if arla encountered busy inodes).

I will try the newer release to see if it works better. Btw, Arla does seem
to handle busy inodes better than OpenAFS: I can simply kill the arlad
daemon, unmount /afs, delete the cache, and restart it all, and everything
works fine until the next busy inode.
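For reference, a minimal sketch (not from the original report) of the kind of
memory-exhausting program I have in mind; the starting size, doubling step,
and messages are just illustrative choices:

/* memhog.c - keep doubling an allocation and never free it, so the
 * kernel has to evict cached data to satisfy the requests.
 * Build with: cc -o memhog memhog.c */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t size = 1UL << 20;            /* start with 1 MB */

    for (;;) {
        char *buf = malloc(size);
        if (buf == NULL) {
            fprintf(stderr, "malloc(%lu) failed; memory exhausted\n",
                    (unsigned long)size);
            return 1;
        }
        memset(buf, 0xaa, size);        /* touch every page so it is really used */
        printf("holding another %lu bytes\n", (unsigned long)size);
        size *= 2;                      /* deliberately never freed; double next time */
    }
}

With Linux's default overcommit behaviour the process may get killed by the
OOM killer rather than seeing malloc() fail, but either way the page cache
should get squeezed out first, which appears to be the condition that
triggers the busy inodes.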
Thanks,
Greg