Heavily Fragmented (Large) NTFS Volume

Discussion in 'OT Technology' started by hax0rwax0r, Jul 22, 2007.

  1. hax0rwax0r

    hax0rwax0r New Member

    Joined:
    Oct 17, 2006
    Messages:
    24
    Likes Received:
    0
    I have a Windows Server 2003 R2 Standard server that is HEAVILY fragmented on the storage volume. The volume is 3.27TB in size and is NTFS format. This volume is the Backup2Disk area for our Backup Exec 11D server.

    Of the 3.27TB on the server, 120GB is free.

    The volume gets constant disk I/O because it handles nightly and weekly backups so there is rarely much idle time.

    The traditional rule of thumb was that you needed 15% free space to do an effective defragmentation, but I spoke with a Raxco salesman and he said that's not so true these days, since Server 2003 reserves part of the disk in its MFT zone (which their product can use as working space for defragmentation). I'm not sure of the validity of that statement.

    At this point we are ready to throw some money into software to fix this issue. Has anyone had any experience with doing this in the past? Whatever we buy needs to not only fix the MASSIVE fragmentation problem we currently have, but also be able to maintain the volume to prevent this in the future (while disk I/O from nightly/weekend backups is running).

    Any help or suggestions are appreciated.

    Here is the defragmentation report from Windows Server 2003 R2 defragmenter:

    Volume Backup Volume (D:)
    Volume size = 3,353 GB
    Cluster size = 4 KB
    Used space = 3,212 GB
    Free space = 140 GB
    Percent free space = 4 %
    Volume fragmentation
    Total fragmentation = 49 %
    File fragmentation = 99 %
    Free space fragmentation = 0 %
    File fragmentation
    Total files = 4,803
    Average file size = 727 MB
    Total fragmented files = 2,999
    Total excess fragments = 10,200,072
    Average fragments per file = 2124.68
    Pagefile fragmentation
    Pagefile size = 0 bytes
    Total fragments = 0
    Folder fragmentation
    Total folders = 55
    Fragmented folders = 4
    Excess folder fragments = 19
    Master File Table (MFT) fragmentation
    Total MFT size = 103 MB
    MFT record count = 58,099
    Percent MFT in use = 55 %
    Total MFT fragments = 3
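
    For anyone who wants to pull the same report, it comes straight from the built-in command-line analyzer:

    REM analyze only (-a), verbose report (-v); makes no changes to the volume
    defrag d: -a -v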
     
  2. Stealthy_C

    Stealthy_C my Vespa rocks.. OT Supporter

    Joined:
    Aug 28, 2003
    Messages:
    20,904
    Likes Received:
    8
    Location:
    .
    To fix this problem and prevent it in the future, do the following:

    Step 1 - Create a Windows Scheduled task to defrag every hour on the hour.
    Problem solved, I accept Paypal as payment for my consulting services.


    :o
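
    For the record, the non-joke version of that on 2003 would be something like this (the drive letter, task name, and SYSTEM account are my assumptions):

    REM sketch only: hourly forced defrag pass on the backup volume
    schtasks /create /tn "Hourly Defrag" /tr "defrag.exe D: -f" /sc HOURLY /ru SYSTEM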
     
  3. dorkultra

    dorkultra OT's resident crohns dude OT Supporter

    Joined:
    Oct 14, 2005
    Messages:
    22,743
    Likes Received:
    27
    Location:
    yinzer / nilbog, trollhio
    create a .bat file that says
    defrag driveletter: -f
    defrag driveletter: -f
    defrag driveletter: -f
    etc. (one pass rarely finishes on a volume this fragmented, so repeat it)
    run it from Task Scheduler
    profit
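
    Something like this, as a rough sketch (the drive letter, pass count, and log path are all assumptions):

    @echo off
    REM several consecutive forced passes; -f makes defrag run even though
    REM free space is below the usual 15% threshold
    for /L %%i in (1,1,5) do (
        defrag D: -f >> C:\defrag_passes.log
    )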
     
  4. hax0rwax0r

    hax0rwax0r New Member

    Joined:
    Oct 17, 2006
    Messages:
    24
    Likes Received:
    0
    It will never finish... We have tried this before, and that is why I was looking for a third-party solution to fix this problem.
     
  5. Harry Caray

    Harry Caray Fine purveyor of x.264, h.264 & TS HD-Video !!! HD

    Joined:
    Apr 19, 2001
    Messages:
    17,176
    Likes Received:
    5
    Location:
    MyCrews:4x4,SoCal,Tesla,EV's
    yea, but if there is rarely idle time like he says (constant disk I/O), how does it handle that?

    Have it shut down the backup / replication / duplication service first? Then defrag?

    Then just restart? I'd think after you get past this first full pass, it'll go quickly after that...
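
    A rough sketch of that stop/defrag/restart idea (the Backup Exec service name here is an assumption; verify yours with sc query state= all):

    @echo off
    REM stop the backup engine so the volume goes quiet, defrag, restart it
    net stop "Backup Exec Job Engine"
    defrag D: -f
    net start "Backup Exec Job Engine"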
     
  6. Harry Caray

    Harry Caray Fine purveyor of x.264, h.264 & TS HD-Video !!! HD

    Joined:
    Apr 19, 2001
    Messages:
    17,176
    Likes Received:
    5
    Location:
    MyCrews:4x4,SoCal,Tesla,EV's
    Well, I just got Raxco v8 Server, tested it on my 2003 AdvServ, and ran it on one of my RAID-320 setups that had like 8% room....

    It chugged right through it (took like 4 hrs), cleaned everything up, and now the volume has 14% free and is faster as well !!

    Plus, it kept working fine during HEAVY torrent use, so I'd say try it!
     
  7. deusexaethera

    deusexaethera OT Supporter

    Joined:
    Jan 27, 2005
    Messages:
    19,712
    Likes Received:
    0
    If the files are all significantly smaller than the amount of free space, then it will be able to defragment. However, you're just going to have to take it offline for a while so it can run. Plan to have the server offline for the next holiday weekend, which I think will be Labor Day.

    If you simply can't get it to defrag, you'll need to find a suitable place (possibly the hard drives on all the workstations in your office) to move as many files off the server as you can, then defrag (if the drive isn't empty after moving stuff off it), then move all the files back.

    EDIT: Running a defrag operation once every hour is too often for a drive that huge. Have it run twice a week instead.
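
    A rough sketch of the move-off-and-back approach (robocopy ships in the Server 2003 Resource Kit; the B2D folder and staging path are assumptions):

    @echo off
    REM stage the backup files elsewhere, defrag the emptied volume, move them back
    robocopy D:\B2D E:\staging /E /MOVE /R:1 /W:5
    defrag D: -f
    robocopy E:\staging D:\B2D /E /MOVE /R:1 /W:5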
     
  8. Stev

    Stev Active Member

    Joined:
    Mar 12, 2004
    Messages:
    11,409
    Likes Received:
    0
    Diskeeper 2007 can be set to run during idle disk time.

    But it won't really do much good; as you said, it needs to happen when there isn't any disk I/O, otherwise you shoot yourself in the foot.
     
  9. deusexaethera

    deusexaethera OT Supporter

    Joined:
    Jan 27, 2005
    Messages:
    19,712
    Likes Received:
    0
    I just had a thought, actually. :run:

    If it won't totally screw everyone over, you could make the drive read-only while the defrag operation is running -- that way people can still read the existing data, but they can't modify it until the defrag is done.
     
  10. Stev

    Stev Active Member

    Joined:
    Mar 12, 2004
    Messages:
    11,409
    Likes Received:
    0
    I don't see why you can't take it offline for 20 minutes at SOME point in 24 hours to do it. Just find the lowest-activity time and implement a policy where people understand this resource will not be available for a certain timeframe. Explain to a higher-up at work that it needs to happen to keep the performance of your datastore optimal.

    For a volume of that size, you will need a lot more time for the first pass, but 20-30 minutes a day should be enough for damage control and day-to-day upkeep.
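
    As a sketch, the scheduled window could look like this (the task name and 04:00 start are assumptions; point /st at your quietest hour):

    REM daily forced defrag pass during the low-activity window
    schtasks /create /tn "Nightly Defrag D" /tr "defrag.exe D: -f" /sc DAILY /st 04:00:00 /ru SYSTEM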
     
  11. deusexaethera

    deusexaethera OT Supporter

    Joined:
    Jan 27, 2005
    Messages:
    19,712
    Likes Received:
    0
    To hell with keeping the data store optimal; you need to tell the director of your business unit that you have to take the thing offline periodically, because if you don't, the fragmentation will overload the server's ability to keep track of all the file fragments and files will start disappearing permanently.

    It's not entirely accurate, but it's important to be able to market your needs to the people who can give you what you need. And you're going to need a long weekend plus an hour or so a few nights a week.
     
  12. crontab

    crontab (uid = 0)

    Joined:
    Nov 14, 2000
    Messages:
    23,443
    Likes Received:
    12
    I've never used the d2d2t backup2disk feature with Veritas, but I've dealt with d2d2t on Legato and Tivoli/TSM and a couple of Sun/EMC VTLs, all OEM'd from FalconStor. So I will make a lot of assumptions here...

    If this is the disk staging area for your backups, do you have a window within the week to export as much data as you can from disk to tape, and then free up that disk space? That in turn will give your defragmentation room to run, whichever software product you use. If someone needs to do a restore during that window, they will have to read it from tape.

    Is there any other reason you need to keep the backups on disk besides accelerated restores and backups?

    The only caveat in my experience is that restores will be slower. This is something you need to communicate to the rest of your teams and to management when you make this change. It's something they should be able to live with for about a week.

    Also, I assume you were already defragging this area periodically. You need to re-evaluate that scheme too, since otherwise you'll get right back into this predicament over time.
     
  13. Peyomp

    Peyomp New Member

    Joined:
    Jan 11, 2002
    Messages:
    14,017
    Likes Received:
    0
    Shouldn't have to defrag disks in this day and age.
     
  14. NoLiving

    NoLiving New Member

    Joined:
    Jul 7, 2007
    Messages:
    192
    Likes Received:
    0
    Location:
    Austin, TX.
    Buzzsaw defragmenter does on-the-fly defragmentation.

    Agree that defragmentation is old and busted.
     
  15. deusexaethera

    deusexaethera OT Supporter

    Joined:
    Jan 27, 2005
    Messages:
    19,712
    Likes Received:
    0
    Not on servers, anyway. I dunno if it really benefits desktop users to have on-the-fly defragging.
     
  16. P07r0457

    P07r0457 New Member

    Joined:
    Sep 20, 2004
    Messages:
    28,491
    Likes Received:
    0
    Location:
    Southern Oregon
    on-the-fly? No. But NTFS does benefit from periodic defrag.
     
  17. deusexaethera

    deusexaethera OT Supporter

    Joined:
    Jan 27, 2005
    Messages:
    19,712
    Likes Received:
    0
    Oh, absolutely. I didn't mean to imply that desktop machines shouldn't be defragged -- I cut people new assholes when they don't defrag once a month -- but on-the-fly doesn't do much for Joe User.
     
  18. P07r0457

    P07r0457 New Member

    Joined:
    Sep 20, 2004
    Messages:
    28,491
    Likes Received:
    0
    Location:
    Southern Oregon
    correct.
     
  19. Peyomp

    Peyomp New Member

    Joined:
    Jan 11, 2002
    Messages:
    14,017
    Likes Received:
    0
    Filesystems shouldn't ever store files in a fragmented manner.
     
  20. P07r0457

    P07r0457 New Member

    Joined:
    Sep 20, 2004
    Messages:
    28,491
    Likes Received:
    0
    Location:
    Southern Oregon
    I have yet to see one that does not suffer at all from fragmentation (although some suffer more than others, and not all can be safely defragged).
     
  21. NoLiving

    NoLiving New Member

    Joined:
    Jul 7, 2007
    Messages:
    192
    Likes Received:
    0
    Location:
    Austin, TX.
    Defragmentation, like memory garbage collection, should be done automatically by the system.
     
  22. P07r0457

    P07r0457 New Member

    Joined:
    Sep 20, 2004
    Messages:
    28,491
    Likes Received:
    0
    Location:
    Southern Oregon
    Not necessarily. It can make sense NOT to have the system perform on-the-fly defrag. However, garbage collection is essentially a requirement for stable system performance.
     
  23. deusexaethera

    deusexaethera OT Supporter

    Joined:
    Jan 27, 2005
    Messages:
    19,712
    Likes Received:
    0
    That puts overhead on the system that you can't be sure the user can afford to spare. On a file server, realtime defragmentation is great; on a database server, it's good but it eats into the database engine's search speed; on a production workstation like the ones I manage at my office, users run highly CPU-intensive processes that take hours to run and generate files several gigabytes in size, which they then move to the file server, so realtime defragmentation would slow down their work with no benefit whatsoever.

    The world isn't black and white.
     
  24. Peyomp

    Peyomp New Member

    Joined:
    Jan 11, 2002
    Messages:
    14,017
    Likes Received:
    0
    The newer filesystems avoid fragmentation completely, or almost completely (unless you fill them).
     
  25. P07r0457

    P07r0457 New Member

    Joined:
    Sep 20, 2004
    Messages:
    28,491
    Likes Received:
    0
    Location:
    Southern Oregon
    such as? Because I do not believe this is true.
     
