Question Answered By: Adah Miller on Dec 19

Yes, defragging can mess you up. For starters, you need to work on an unmounted
file system, and if you have only one bootable partition you must boot from a
live disk or something similar. Either way, check that the partition is not
mounted before working on it.
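
For example (just a sketch; /dev/sdb1 here is a placeholder for your own
partition), you can confirm a partition is unmounted before touching it:

mount | grep /dev/sdb1   # no output means it is not mounted
umount /dev/sdb1         # if it is mounted, unmount it first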

Basically, Windows file systems are based on older technology. Most people
assume NTFS is newer than FAT32, but NTFS came out in 1993 with the advent of NT
(the same year as ext2), while FAT32 came out in 1996. NTFS is of the same
generation as ext2; FAT32 has no journal at all, and NTFS journals only file
system metadata. Both ext3 (1999) and ReiserFS (2001) are newer, and ext4 and
Reiser4 are both in development.

However, the prevalence of NTFS on servers and its longevity are a sign that it
is a durable file system which can be depended on in the long run. It may not be
the best for everyday users who do things that cause problems later, such as
shutting down improperly, but it is certainly worthy of what it was made for:
servers. In the age of terabyte drives it may be time for M$ to look at updating
a file system that has so far stood them in good stead, but a lot has happened
since 1993.

Recent Linux file systems (ext3 and ReiserFS) are journaled, which makes them
more resilient: changes are written to a log, the journal, before they are
committed to disk. If the system goes down between the journal entry and the
commit, the incomplete transaction is simply discarded when the journal is
replayed, leaving the file system consistent. On FAT, by contrast, files are
truncated or, worse still, the file allocation table can become corrupted. The
one thing not to do in Linux is run fsck on a mounted disk, which can mess
things up royally. No file system can protect a user who is careless.
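
So the safe sequence, sketched here with /dev/sdb1 as a placeholder, is to
unmount first and only then check:

umount /dev/sdb1
e2fsck -f /dev/sdb1   # force a full check of an ext2/ext3 file system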

Ext2 allows the user to recover deleted files, but ext3 does not. When a file is
deleted, ext3 zeroes the block pointers in its inode, so undelete tools have
nothing to work from, and one must grep the raw partition to recover file
fragments as best one can.
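
As a rough sketch of that grep technique (the search string and device are
placeholders), you scan the raw partition for a phrase you know was in the lost
file and save the surrounding bytes:

grep -a -B 5 -A 100 'phrase from the lost file' /dev/sdb1 > recovered.txt
# -a treats the device as text; -B/-A keep lines before/after the match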

Journaling on Linux explanation:
www.ibm.com/developerworks/library/l-fs.html
FAT and NTFS explained:
support.microsoft.com/default.aspx

The other big difference is allocation strategy. Windows file systems fill the
disk in sequence from the beginning: each new file goes into the first available
location, right next to the previous one. If a file later grows, the extra data
has to be placed somewhere else, fragmenting the file; if it shrinks, a gap is
left behind.

Linux does it differently: rather than packing files in from the beginning, it
spreads them across the disk and leaves each one room to grow in place, which
keeps fragmentation low in normal use. However, as the disk fills, even Linux
must scramble to fit large files into the available spaces and will fragment
them. When fragmentation is an issue in Linux it usually shows itself on
servers, and since there is no decent defrag tool for ext3 it can become a
problem. The best solution is not to fill your drives too full (you can inspect
a file's fragmentation yourself; see the sketch below). Windows users at least
have a good tool for managing fragmentation.
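
A minimal way to check fragmentation on ext2/ext3 (filefrag ships with
e2fsprogs; the path is a placeholder):

filefrag -v /path/to/largefile   # -v lists extents; "1 extent found" means unfragmented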

I am told by file system aficionados that a file system to watch is ZFS from
Sun, which is used on Solaris. Not being so inclined, I have so far resisted the
temptation to try OpenSolaris. I prefer ReiserFS to ext3, but there is nothing
wrong with ext3. Both support volumes of up to 16 TB, so there is plenty of
potential left in these file systems.
