Over the years I have recovered many hard drives configured with NTFS. One of the leading reasons that data recovery is performed on these drives is an anomaly that has developed in the Master File Table. This area of the drive is the single most important set of data stored on your system. The Master File Table houses all attributes, as well as cluster placement, for every file on your system. It contains security attributes, file name attributes, date and time signatures, and a mini FAT called a run list that points to every cluster where a particular file is stored.
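To make the run list idea concrete, here is a minimal sketch of decoding one. It follows the published NTFS data-run encoding (a header byte whose nibbles give the sizes of the length and offset fields, with offsets signed and relative to the previous run); the sample bytes at the end are illustrative, not taken from a real volume.

```python
def decode_runs(raw: bytes):
    """Return a list of (start_cluster, cluster_count) extents from a run list."""
    runs, pos, prev_lcn = [], 0, 0
    while pos < len(raw) and raw[pos] != 0:          # a 0x00 byte terminates the list
        header = raw[pos]
        len_size = header & 0x0F                     # bytes holding the run length
        off_size = header >> 4                       # bytes holding the run offset
        pos += 1
        length = int.from_bytes(raw[pos:pos + len_size], "little")
        pos += len_size
        # Offsets are signed and relative to the previous run's start cluster,
        # which is how NTFS represents fragmented (even backward-pointing) extents.
        offset = int.from_bytes(raw[pos:pos + off_size], "little", signed=True)
        pos += off_size
        prev_lcn += offset
        runs.append((prev_lcn, length))
    return runs

# Illustrative run list: 0x38 clusters starting at cluster 0x173522,
# then 0x10 more clusters starting 16 clusters earlier (a backward jump).
extents = decode_runs(bytes([0x31, 0x38, 0x22, 0x35, 0x17,
                             0x21, 0x10, 0xF0, 0xFF, 0x00]))
```

Walk those extents, read the clusters they name, and you have reassembled the file; that is the whole trick, and it is also why losing the MFT is so devastating.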
Beyond the information stored in the Master File Table itself, it has been my experience that if a previous copy of the Master File Table had been saved to a file on a remote site, I could easily have imported that file and used it to recover the data. In other words, it is rarely the case that an entire file system gets totally wiped out. It is usually some small piece of information, either corrupted or omitted from the Master File Table, that causes the problem. Even a restore disk that totally destroys all remnants of a file system on a hard drive cannot keep a backup copy of the Master File Table from recovering some data.
How, you may ask, can this be? Well grasshopper, read on and see. Imagine a book. A reference book, preferably. Now, let us define the attributes of a reference book. Let's see: there is a foreword, where the author may offer a few remarks so we know how intelligent he is. There is a table of contents that gives you a general idea of what is in the book and where it is located. There is the body of the book, the actual information. Last but not least, an index: a detailed description, with page numbers, that tells you exactly where the data is that you are looking for. For illustration purposes we can say that the index of the book is the Master File Table, and the body of the book is the data on your hard drive. If the index of the book is ripped out of the back, how would it be possible to find the information you are looking for? I suppose you could wade through the entire book and possibly, after several hours of searching, find the answers you are looking for. I have done that with some of my older books where the back and the front of the book have disappeared. A book may have 200, 300, 400, maybe even 500 pages to look through, and if the information is important enough it is worth the look. However, wouldn't it have been easier if I had just photocopied the index and placed that in a nice safe place? Then, when the book gets old and I lose the index, I have this nice copy that I have kept to help me find my information.
Leafing through a 500-page book may be time consuming, but it is feasible. Now apply that same logic of the index and the book to a hard drive. Who wants to scan through 234,000,000 sectors looking for data? If the data is fragmented, then the data is probably lost. Wouldn't it have been nice to have a copy of the Master File Table to use to find all of your old tax returns, or doctoral thesis, or the only pictures of your grandson's birth? I would say, "Yeah!! It would've been nice!"
Please don't get the wrong idea. This is not the same as a full backup on another set of media. There are holes in this approach. First, if the drive itself goes bad, then it will be difficult if not impossible to get the data back. Secondly, anything that writes to the data portion of the drive will make the saved Master File Table useless. However, it takes a long time to overwrite the data area of a 250 GB hard drive. Lastly, I have not been able to find a piece of software that just dumps the Master File Table to a remote site. It looks like someone should write one.
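As a starting point, such a tool might look something like the sketch below: read the NTFS boot sector, locate the $MFT from its fields, and copy the front of the MFT out to a file. The offsets follow the published NTFS boot-sector layout; the raw device path and the administrator-rights requirement are Windows specifics, and the simple sectors-per-cluster read does not handle the special encoding some large-cluster volumes use. Treat it as a hedged sketch, not a finished recovery tool.

```python
def parse_boot_sector(bs: bytes):
    """Return (bytes_per_cluster, mft_byte_offset, mftmirr_byte_offset)
    from a 512-byte NTFS boot sector."""
    bytes_per_sector = int.from_bytes(bs[0x0B:0x0D], "little")
    sectors_per_cluster = bs[0x0D]           # NOTE: values >= 0xF5 encode 2^(256-n)
    cluster = bytes_per_sector * sectors_per_cluster
    mft_lcn = int.from_bytes(bs[0x30:0x38], "little")    # $MFT start cluster
    mirr_lcn = int.from_bytes(bs[0x38:0x40], "little")   # $MFTMirr start cluster
    return cluster, mft_lcn * cluster, mirr_lcn * cluster

def dump_mft(volume=r"\\.\C:", out_path="mft_backup.bin", size=64 * 2**20):
    """Copy the first `size` bytes of the MFT to a file.
    Windows-only sketch; needs administrator rights to open the raw volume.
    (This only grabs the start of the MFT -- a real tool would walk the
    MFT's own run list to capture fragmented pieces as well.)"""
    with open(volume, "rb") as vol, open(out_path, "wb") as out:
        cluster, mft_off, _ = parse_boot_sector(vol.read(512))
        vol.seek(mft_off)
        out.write(vol.read(size))
```

The `dump_mft` name, output path, and 64 MB default are my own placeholders, but the boot-sector fields are where any such tool would have to start.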
This is an interesting article and well put for those who don’t understand the many technical names attributed to what makes a computer, a computer.
While reading about the MFT in various resources, I have seen the MFT Mirror mentioned.
Is it possible to access this for copying and backup without touching the main MFT, and how would you suggest doing this?
That is an excellent question, and one that bears clarification. The Master File Table can be several thousand records in size. I have seen an MFT with as many as 7 million records. That being said, an average MFT covers between 150 and 200 thousand files, which would make the MFT approximately 200 megabytes. The overhead of maintaining a full mirror of the MFT records would be prohibitive, so Microsoft compromised.
The cluster that lies directly in the center of the partition is the mirror cluster. This mirror placement is used almost one hundred percent of the time. The mirrored data is only what can fit in one cluster, which is four records. The most important information in this mirror is the first record. The first record is the MFT pointer, which tells the file system where the entire MFT is stored. In other words, the first record of the Master File Table tells us which clusters are used to store the Master File Table itself.
Many times I have seen the first record of the MFT destroyed, which makes the entire file system unreadable. I always thought this was a weakness of the file system. Microsoft tries to overcome it by storing a copy of the first record in the center of the drive.
Linux does something similar, inasmuch as you have a superblock saved several times in ext, XFS, and other file systems. You just never know where a bad sector might pop up.
So, to answer your question, there is not a full copy of the Master File Table on the partition, but there is a backup of the first record to the MFT.
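A practical way to tell whether a given MFT record (the first record, or its copy from the mirror) is intact is NTFS's own consistency mechanism: every FILE record carries an update sequence ("fixup") array, and the last two bytes of each sector must match the sequence number before being swapped for the real data. Here is a minimal sketch of that check, using the published record-header offsets; the demo record at the end is synthetic, not real on-disk data.

```python
def apply_fixups(record: bytes, sector_size: int = 512) -> bytes:
    """Validate a FILE record's update-sequence array and return the repaired
    record bytes, or raise ValueError if the record is torn or corrupt."""
    if record[0:4] != b"FILE":
        raise ValueError("not an MFT FILE record")
    usa_off = int.from_bytes(record[4:6], "little")    # update sequence array offset
    usa_count = int.from_bytes(record[6:8], "little")  # USN + one entry per sector
    usn = record[usa_off:usa_off + 2]
    fixed = bytearray(record)
    for i in range(1, usa_count):
        end = i * sector_size
        if fixed[end - 2:end] != usn:
            raise ValueError(f"fixup mismatch in sector {i}: torn write or corruption")
        # Swap the check bytes back for the real data they displaced.
        fixed[end - 2:end] = record[usa_off + 2 * i: usa_off + 2 * i + 2]
    return bytes(fixed)

# Demo on a synthetic two-sector (1024-byte) record:
rec = bytearray(1024)
rec[0:4] = b"FILE"
rec[4:6] = (0x30).to_bytes(2, "little")   # array lives at offset 0x30 here
rec[6:8] = (3).to_bytes(2, "little")      # USN plus two sector entries
rec[0x30:0x36] = b"\x01\x00" + b"\xAA\xBB" + b"\xCC\xDD"
rec[510:512] = b"\x01\x00"                # sector tails must carry the USN
rec[1022:1024] = b"\x01\x00"
fixed = apply_fixups(bytes(rec))
```

If this check passes on the mirror's copy of record zero, you can pull the MFT's own run list out of it and start walking the real table; if it fails on both copies, you are into the sector-by-sector scan described earlier.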
I hope this answers your question.
So, why not write an MFT remote backup tool?
I have a friend who is pulling millions of pages from eBay, and several times the Master File Table has become damaged, even with 8 hard drives running to reduce the load (NOT RAID). So yes, it would be good if you could back up and restore this table, as running chkdsk /f takes ages, and then you end up copying millions of files back from the backup server; the whole process takes a day for 200 GB of data.