When a PC edits a file, does it delete the original file?
If code.txt (or whatever file) is edited and saved, I have two ideas of how a PC would handle the process:

1. The PC deletes code.txt completely and makes a new code.txt (edited version) from scratch.
2. The PC edits part of the hex of code.txt, so no delete happens.

Which idea represents how computers work?
editing
Greetings! Working from the excellent answer provided by user Grawity, here are some clarifying questions:
– Haakon Dahl
Jan 24 at 2:49
@HaakonDahl what clarifying questions? You posted nothing.
– The Great Duck
Jan 24 at 6:38
Dangit. Have to wait until I get back on my PC. But the gist is what level -- hardware, filesystem, OS, or app? And what app?
– Haakon Dahl
Jan 24 at 7:07
Why does it matter to you? Even programs that create a "new" file will probably change the creation time so that it matches the original. The only visible difference would be the inode number (or equivalent concept) which may matter (e.g. if you have hardlinks around they will get "out of sync").
– Bakuriu
Jan 26 at 8:56
Voting to close this question as too broad. It all depends on the OS, software and underlying file system’s capabilities.
– JakeGould
Jan 27 at 19:14
9 Answers
Could be either – it depends on the text editor that was used.
The concept of a 'text file' isn't built into computers – each operating system may manage files differently, and each text editor may use those files differently.
In practice, you'll find text editors which have both mechanisms. Practically all operating systems allow direct overwrite of an existing file's contents, so simple editors such as Notepad usually just ask the OS to write directly into the original file, as that's easiest to implement – but risky if you lose power mid-write. So for reliability reasons, many editors deliberately save the updated data to a new file and delete the original.
(I think in-place updates are more common among hex editors, where most edits don't insert/delete bytes but only change existing locations, so a full rewrite of the file is not needed.)
There's even a third mode of operation – the editor might first make a backup copy of the old file, then directly write new data into the file.
It also depends on the filesystem which keeps the file. With most traditional filesystems, if a program asks to write to an existing file, the filesystem will just overwrite old data in-place.
However, some filesystems do work in "copy-on-write" mode, where any new data is always written to a different location, whether the program wants it or not. Again, this has the possible advantage of increased reliability because an interrupted change can be fully reverted.
In some filesystems (such as Btrfs) this is an optional feature; in others (e.g. log-structured filesystems) it is part of the core design.
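To make the difference concrete, here is a minimal Python sketch of both save strategies (my own illustration, not code from any particular editor; the function names are made up):

    import os
    import tempfile

    def save_in_place(path, data):
        # Notepad-style: ask the OS to overwrite the original file directly.
        # Simple, but a crash mid-write can leave the file truncated or corrupt.
        with open(path, "w") as f:
            f.write(data)

    def save_via_new_file(path, data):
        # Safer: write a complete new file next to the original, flush it to
        # disk, then rename it over the original in one step. The old contents
        # are only replaced once the new copy is known to be complete.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
        try:
            with os.fdopen(fd, "w") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())
            os.replace(tmp, path)  # atomic rename on POSIX and modern Windows
        except BaseException:
            os.unlink(tmp)
            raise

With save_via_new_file the original file's data blocks are released rather than overwritten, which is exactly why the first mechanism from the question can look like "delete and recreate".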
It's not just on a filesystem level. Flash memory, for example, has to clear a block before it can write to it. So, in practice, it will often write to files simply by writing the new change to a new block, and invalidating it on the old block. By having this sort of thing handled automatically by the device itself, the OS can just use a normal hard drive file system.
– trlkly
Jan 22 at 23:41
@trlkly: All modern flash memory devices are divided into erase regions which are orders of magnitude larger than a disk sector, and cannot recycle any portion of such a region without erasing all of it. Consequently, if a region contains 32 obsolete sectors worth of data and 224 sectors of useful data, it will have to copy the 224 sectors of useful data somewhere else before it can free up the space from any of the obsolete sectors. Modern operating systems use a "trim" command to indicate disk sectors whose contents can be abandoned if the block they are on gets recycled.
– supercat
Jan 23 at 0:48
Some editors choose at run-time which behaviour to use (e.g. depending on whether a file has just one directory entry naming it, or many).
– Toby Speight
Jan 23 at 16:06
Many editors will simply read the file into memory and do all changes there. (Perhaps periodically autosaving a copy of ongoing work to a different file.) The original file is not changed at all until you save changes, e.g. with vi's :w command.
– jamesqf
Jan 23 at 18:58
@jamesqf: Well, the question was about what happens when a file is "edited and saved"...
– grawity
Jan 23 at 19:34
Since you are talking about "saving the file", the file will generally not be edited in-place on disk.
With a file in a usual filesystem, there are two things to consider. There is the directory entry, and then there is the actual file data somewhere on the disk.
When you edit a file in a normal editor, it will load the file data into RAM, and any editing will just happen on that copy of the data. Then when you save the file, there are basically two options:
Option 1: the original file is renamed, so both the original directory entry and the original data will remain on the disk. The rename might for example change the file suffix to .bak (usually removing any previous .bak file). Then a new file is created and the data from memory is written there.
Option 2: the original directory entry is modified so the file is truncated to 0 length. The area on disk used for file data will be marked as unused, but the old file contents will remain on disk until they are overwritten. Then new data is written. In this case the directory entry remains, just the data it points to is changed.
There are a few possible variations, a common one being that the edited data is first stored to a temporary file, so if your computer crashes at this point, the original file will likely not be damaged. Then the original file is deleted and the new file is renamed with the correct name. Or, the original file could just be deleted before writing the new one.
So your theory 1 is close to what most editors do.
Then there are special cases. The most obvious one is a disk editor, which allows reading and overwriting bytes directly on disk. Another might be a database file, where records might be fixed-size, so it's easy to just overwrite a record. But data can't be inserted in the middle of a file, so for text files, or any other files where the length of data in the middle commonly changes, these tricks can't really be used.
So your theory 2 is possible in some cases, but normal text editors and such don't do it.
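To illustrate the fixed-size-record special case, here is a short Python sketch (the record layout is made up for the example): opening the file in read/write mode lets a program overwrite bytes in the middle without touching the directory entry or the rest of the data.

    RECORD_SIZE = 32  # hypothetical fixed record width, in bytes

    def overwrite_record(path, index, payload):
        # Overwrite exactly one record in place; the file is not
        # truncated, renamed, or recreated.
        assert len(payload) <= RECORD_SIZE
        with open(path, "r+b") as f:   # read/write, no truncation
            f.seek(index * RECORD_SIZE)
            f.write(payload.ljust(RECORD_SIZE, b"\x00"))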
"Since you are talking about "saving the file", then file will not be edited in-place on disk." - I think that anytime you "open" a file, edit it, and write the changes back to disk, you are "saving the file", regardless of whether the file is "written in place" (overwritten), or the old file is deleted or renamed and a new file is created. Either way, you usually, at some point decide to "save the changes", or "discard the changes".
– Kevin Fegan
Jan 28 at 1:14
@KevinFegan Well, you can open a file in a suitable disk or hex editor, edit the contents, and save changes. Or, you might open a database file (such as an SQLite database file), modify the database, and have changes committed to the file. So just opening a file for modification can mean modifying it in-place, but "saving a file" usually implies creation of a new file, and these other alternatives have a differently named action for saving changes.
– hyde
Jan 29 at 17:03
Historically, drives were directly controlled by the OS, which was in turn controlled by the application. In that context, Theory 2 was the primary way PCs worked: the OS specified a physical location to put data, and it had full control over this process. As a result, early file systems had a "bad sector" table, so after your data was lost, the computer could tell you it was lost and mark the sector as unusable to avoid further data loss. Disk scans and defragmentation were the order of the day.
However, drives eventually moved to LBA (logical block addressing), so the OS would simply reference the "logical" block it wanted to read or write. The hard drive itself now had the intelligence to shuffle data around behind the OS's back without it noticing. This meant better reliability, since sectors that failed to verify could simply be moved to a new physical location without affecting the OS's knowledge of where that data was located.
In modern hardware, "platter" disk drives typically just overwrite whatever was there before with the new incoming data, and optionally remap the LBA if the sector looks like it might not retain the data (the sector is damaged or worn). "Flash" drives typically mark the old cells invalid and write the data to new cells, a process tied to wear-leveling.
In both cases, this is possible because there is always unused capacity beyond the reported value. This overprovisioning allows the drive to have a longer usable life than the rather unreliable technology of the previous century. The LBA scheme abstracts the physical medium from the OS, so the drive itself can take whatever measures it thinks are necessary to prevent data loss.
At the application level, you typically open a file in "WRITE" mode, which tells the OS to clear the file ("delete" the contents, but not the file itself), then write new data. All of this is buffered at the OS level, then "flushed" to the drive, which makes the requested changes.
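For example, in Python (the same distinction exists in most languages' file APIs):

    # "w" truncates: the directory entry survives, but the old contents
    # are cleared before the new data is written.
    with open("code.txt", "w") as f:
        f.write("entirely new contents\n")

    # "a" appends: nothing is cleared; new data is added at the end.
    with open("code.txt", "a") as f:
        f.write("one more line\n")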
Given that information, Theory 1 is what technically happens at the application programming level, at least by default, as there is also a "write with append" mode to avoid clearing the file contents. The OS itself will present the changes to be made more like Theory 2, but abstracted via LBA. The drive itself will then probably do something that's a mix of Theory 1 and Theory 2.
Yep. It's complicated, and very part-manufacturer/OS-developer/application-developer dependent. However, all of this complexity is aimed at making data storage more reliable while improving power usage/battery life.
Depends. AFAIK Microsoft Word, when saving .doc (not .docx) files with the Fast Save option enabled, appends the changes made to the document since the last save to the existing file.
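The general idea can be sketched like this (a toy illustration of append-style saving in Python, not Word's actual .doc format):

    import json, time

    def fast_save(log_path, changes):
        # Instead of rewriting the whole document, append a record
        # describing what changed since the last save.
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps({"time": time.time(), "changes": changes}) + "\n")

    fast_save("document.changes", [{"offset": 120, "insert": "new paragraph"}])

A full save (or a "save as") can then rewrite the document cleanly and discard the accumulated change records.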
Generally speaking, a computer will mark the memory where the original file resides as 'deleted', but all this really means is that the file won't show up in your file browser anymore, and the cells in the memory where it was written are allowed to be overwritten in the future.
Whether the new file is written into the same place comes down to a number of factors, primarily the software you are using and how it is designed to make use of the memory.
I think you might be confusing "memory" with the notion of file system unlink operations. And this doesn't really have anything to do with the stated question, which asks if concrete files are overwritten or if there is some sort of n-way update.
– jdv
Jan 23 at 19:16
Well, if software was designed to do that specifically then it's possible, though as far as I'm aware this is generally how both long-term storage and RAM work.
– GigaJoules
Jan 24 at 8:50
Unfortunately, your explanation (as far as I can decode what you mean) is decidedly not how "long term storage and RAM" work. But, at the end of the day, this has little to do with the question at hand. Which, I reiterate, is asking how software updates textual information to a file on a general purpose computing device with a typical modern file system. We don't have to consider how something like "memory" does or does not work to answer this question.
– jdv
Jan 24 at 16:25
Hopefully this isn't redundant; a little extra info/background.
The PC doesn't usually have much control over how a file is edited; it's the application that does it.
A few examples of how some apps might handle editing:
Notepad loads the entire document into memory and then saves the whole thing over your original document (or a new one you specify).
Nearly all other small editors will save a "new" file as you edit and then copy it over your original document, deleting it, when you "save".
Large document editors that you might use to edit a book tend to read/modify a section of a document, because they can edit documents bigger than memory. These may actually edit the document "in place": they might rewrite one page and leave the rest alone. These often have a more complex indexed on-disk representation than a simple .txt file would, to allow this behavior.
The large editors might also just save temporary files with "updates" to your original document. When you do your final save, they can merge them all in and rewrite your document.
Most editors can be configured to leave the existing version untouched and create a new one with your changes (retain old versions).
As to the part of your question regarding what a "PC" does: some operating systems will remember every version of a file and always create a new one. This is pretty rare these days, but I remember old "minicomputers" (what we'd now call mainframes) where every file had a version at the end, like "File.text.1", and it would bump the version every time you edited it. This kind of behavior would better apply to something like a tape drive or CD-ROM, where overwriting the old version was completely impractical.
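That versioning scheme is easy to emulate in a few lines (my own sketch of the idea, not any particular OS's implementation):

    import os

    def save_new_version(path, data):
        # Never overwrite: find the next free numeric suffix and write the
        # new contents as e.g. File.text.1, File.text.2, ...
        n = 1
        while os.path.exists(f"{path}.{n}"):
            n += 1
        with open(f"{path}.{n}", "w") as f:
            f.write(data)
        return f"{path}.{n}"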
2 is not impossible, but it is stupid for various reasons.
A well-written text file editor will:
1. Write a file with a different name and the new contents. If the original was myfile.txt, the new one might be myfile.txt.new.
2. Provided step 1 succeeded, rename the original to a backup file, say myfile.txt~.
3. Rename the new file to the original name, myfile.txt.
4. If everything has succeeded, remove the backup file. Many editors leave it anyway, so the user can recover if he/she soon works out that what he/she did with the editor was not what he/she wanted to do.
If the computer crashes or runs out of disk space during the above, there is no point at which both the old and the new file are lost or only partially saved.
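Those four steps translate almost line-for-line into code. A sketch in Python, assuming POSIX-style rename semantics (os.replace is atomic on POSIX and modern Windows):

    import os

    def safe_save(path, data):
        new, backup = path + ".new", path + "~"
        # 1. Write the new contents under a different name.
        with open(new, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        # 2. Provided step 1 succeeded, rename the original to a backup.
        os.replace(path, backup)
        # 3. Rename the new file to the original name.
        os.replace(new, path)
        # 4. Optionally remove the backup; many editors keep it.
        # os.unlink(backup)

At every point in the sequence, at least one complete copy of the data exists under some name.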
The truncate-in-place-and-rewrite behaviour of lots of text editors for non-IBM/non-Microsoft operating systems for the past half century is not "stupid".
– JdeBP
Jan 28 at 13:44
Short answer
It depends heavily on your editor, underlying software/drivers, and storage.
Paranoiac answer
It can be recoverable unless you remove it permanently.
Long answer
There is missing information in your question (software, hardware, etc.), so instead of answering directly I will help you answer the question yourself.
It depends on a few factors:
Editor: If the editor software replaces the blocks of the same file, then it may get rewritten. This may also depend on editor settings and file types. Note the emphasis on may: even when the editor rewrites the file, the original data can still remain untouched (read the next points).
Underlying software/drivers/file system: The file will remain untouched if there is other software underneath that protects the initial file from being overwritten. Such software includes versioning systems, virtual differential disks, and some backup software. An example is Git, which will keep the original file blocks and create a new file that holds the modified blocks.
Storage: The storage itself can write changed blocks to a new sector and mark the old blocks as "free". Then the file will physically remain on the storage (and is recoverable) unless it gets overwritten by another file. An example is modern SSD storage, which may do this at the hardware level. For a typical mechanical HDD, there are even ways to recover data from the magnetic discs after it has been overwritten, and there are companies that specialize in this.
So if you want a concrete answer as to whether your file will be deleted or not, you must also say what editor, backup/VCS software/hardware, and storage you use. If I missed any point, feel free to edit the answer.
How to make sure that the deleted file is actually deleted from the storage?
This is probably the next question you will ask yourself. There are many software/hardware solutions. Since Super User is not for promoting software/hardware, instead of naming names I will tell you how to find them: search for the keywords "permanently delete file". For more exact matches, mention your OS, hard drive type, or other info you have.
One behavior that no one has mentioned yet is found in some versions of MS Windows, and it is also related to the filesystem in use.
The behavior works like this: when you rename or delete a file and then create (re-create) a new file with the same name within 15 seconds of when the original file was deleted (or renamed), the creation date/timestamp is copied from the original file. Essentially, the new file "becomes" the old/original file.
In this case, it really doesn't matter if the application saves the changes by your method #1 (making a new file with the same name) or by your method #2 (editing/updating the file in place, with no delete). Either way, the final file looks in (nearly) every way like the original file. The only difference is that it will likely occupy different physical drive space (clusters/sectors), and the directory entry for the file will likely be in a different location.
As I said, this is a behavior of some versions of MS Windows and filesystems. I don't know which version of Windows and which filesystem this started on, or whether it is still the behavior of more recent versions. If I had to guess, I'd say it was introduced in Windows NT and Windows XP, is still the behavior of Windows 10, and (still a guess) requires a FAT32 or NTFS (or perhaps newer) filesystem.
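If you want to check whether your own system does this (the effect JdeBP names below, filesystem tunneling), here is a quick probe for Windows, where Python reports the creation time in st_ctime:

    import os, time

    path = "tunnel-test.txt"
    open(path, "w").close()
    before = os.stat(path).st_ctime   # creation time on Windows

    time.sleep(2)
    os.remove(path)
    open(path, "w").close()           # re-created within 15 seconds
    after = os.stat(path).st_ctime

    print(before == after)            # True if tunneling preserved the stamp

This is only a probe, not proof of which Windows versions or filesystems support the behavior.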
Actually, it does matter, because NTFS supports hard links and one of the well-known differences between these methods is the effect on multiply-linked files. Filesystem tunnelling has been around since at least Windows NT 5.0.
– JdeBP
Jan 28 at 13:41
@JdeBP - Yes, we agree. That's why I said #1) "Nearly" in "the final file looks in (nearly) every way, like the original file", and #2) directory entry in a different location.
– Kevin Fegan
Feb 9 at 2:11
You do not agree if you assert, as you do, that it does not matter.
– JdeBP
2 days ago
protected by JakeGould Jan 28 at 0:40
Thank you for your interest in this question.
Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count).
Would you like to answer one of these unanswered questions instead?
9 Answers
9
active
oldest
votes
9 Answers
9
active
oldest
votes
active
oldest
votes
active
oldest
votes
Could be either – it depends on the text editor that was used.
The concept of a 'text file' isn't built into computers – each operating system may manage files differently, and each text editor may use those files differently.
In practice, you'll find text editors which have both mechanisms. Practically all operating systems allow direct overwrite of an existing file's contents, so simple editors such as Notepad usually just ask the OS to write directly into the original file, as that's easiest to implement – but risky if you lose power mid-write. So for reliability reasons, many editors deliberately save the updated data to a new file and delete the original.
(I think in-place updates are more common among hex editors, where most edits don't insert/delete bytes but only change existing locations, so a full rewrite file is not needed.)
There's even a third mode of operation – the editor might first make a backup copy of the old file, then directly write new data into the file.
It also depends on the filesystem which keeps the file. With most traditional filesystems, if a program asks to write to an existing file, the filesystem will just overwrite old data in-place.
However, some filesystems do work in "copy-on-write" mode, where any new data is always written to a different location, whether the program wants it or not. Again, this has the possible advantage of increased reliability because an interrupted change can be fully reverted.
In some filesystems (such as Btrfs or ext4) this is an optional feature; in others (e.g. log-structured filesystems) it is part of the core design.
30
It's not just on a filesystem level. Flash memory, for example, has to clear a block before it can write to it. So, in practice, it will often write to files simply by writing the new change to a new block, and invalidating it on the old block. By having this sort of thing handled automatically by the device itself, the OS can just use a normal hard drive file system.
– trlkly
Jan 22 at 23:41
7
@trlkly: All modern flash memory devices are divided into erase regions which are orders of magnitude larger than a disk sector, and cannot recycle any portion of such a region without erasing all of it. Consequently, if a region contains 32 obsolete sectors worth of data and 224 sectors of useful data, it will have to copy the 224 sectors of useful data somewhere else before it can free up the space from any of the obsolete sectors. Modern operating systems use a "trim" command to indicate disk sectors whose contents can be abandoned if the block they are on gets recycled.
– supercat
Jan 23 at 0:48
Some editors choose at run-time which behaviour to use (e.g. depending on whether a file has just one directory entry naming it, or many).
– Toby Speight
Jan 23 at 16:06
2
Many editors will simply read the file into memory and do all changes there. (Perhaps peiodically autosaving a copy of ongoing work to a different.) The original file is not changed at all until you save changes, e.g. with vi's :w command.
– jamesqf
Jan 23 at 18:58
4
@jamesqf: Well, the question was about what happens when a file is "edited and saved"...
– grawity
Jan 23 at 19:34
|
show 2 more comments
Could be either – it depends on the text editor that was used.
The concept of a 'text file' isn't built into computers – each operating system may manage files differently, and each text editor may use those files differently.
In practice, you'll find text editors which have both mechanisms. Practically all operating systems allow direct overwrite of an existing file's contents, so simple editors such as Notepad usually just ask the OS to write directly into the original file, as that's easiest to implement – but risky if you lose power mid-write. So for reliability reasons, many editors deliberately save the updated data to a new file and delete the original.
(I think in-place updates are more common among hex editors, where most edits don't insert/delete bytes but only change existing locations, so a full rewrite file is not needed.)
There's even a third mode of operation – the editor might first make a backup copy of the old file, then directly write new data into the file.
It also depends on the filesystem which keeps the file. With most traditional filesystems, if a program asks to write to an existing file, the filesystem will just overwrite old data in-place.
However, some filesystems do work in "copy-on-write" mode, where any new data is always written to a different location, whether the program wants it or not. Again, this has the possible advantage of increased reliability because an interrupted change can be fully reverted.
In some filesystems (such as Btrfs or ext4) this is an optional feature; in others (e.g. log-structured filesystems) it is part of the core design.
30
It's not just on a filesystem level. Flash memory, for example, has to clear a block before it can write to it. So, in practice, it will often write to files simply by writing the new change to a new block, and invalidating it on the old block. By having this sort of thing handled automatically by the device itself, the OS can just use a normal hard drive file system.
– trlkly
Jan 22 at 23:41
7
@trlkly: All modern flash memory devices are divided into erase regions which are orders of magnitude larger than a disk sector, and cannot recycle any portion of such a region without erasing all of it. Consequently, if a region contains 32 obsolete sectors worth of data and 224 sectors of useful data, it will have to copy the 224 sectors of useful data somewhere else before it can free up the space from any of the obsolete sectors. Modern operating systems use a "trim" command to indicate disk sectors whose contents can be abandoned if the block they are on gets recycled.
– supercat
Jan 23 at 0:48
Some editors choose at run-time which behaviour to use (e.g. depending on whether a file has just one directory entry naming it, or many).
– Toby Speight
Jan 23 at 16:06
2
Many editors will simply read the file into memory and do all changes there. (Perhaps peiodically autosaving a copy of ongoing work to a different.) The original file is not changed at all until you save changes, e.g. with vi's :w command.
– jamesqf
Jan 23 at 18:58
4
@jamesqf: Well, the question was about what happens when a file is "edited and saved"...
– grawity
Jan 23 at 19:34
|
show 2 more comments
Could be either – it depends on the text editor that was used.
The concept of a 'text file' isn't built into computers – each operating system may manage files differently, and each text editor may use those files differently.
In practice, you'll find text editors which have both mechanisms. Practically all operating systems allow direct overwrite of an existing file's contents, so simple editors such as Notepad usually just ask the OS to write directly into the original file, as that's easiest to implement – but risky if you lose power mid-write. So for reliability reasons, many editors deliberately save the updated data to a new file and delete the original.
(I think in-place updates are more common among hex editors, where most edits don't insert/delete bytes but only change existing locations, so a full rewrite file is not needed.)
There's even a third mode of operation – the editor might first make a backup copy of the old file, then directly write new data into the file.
It also depends on the filesystem which keeps the file. With most traditional filesystems, if a program asks to write to an existing file, the filesystem will just overwrite old data in-place.
However, some filesystems do work in "copy-on-write" mode, where any new data is always written to a different location, whether the program wants it or not. Again, this has the possible advantage of increased reliability because an interrupted change can be fully reverted.
In some filesystems (such as Btrfs or ext4) this is an optional feature; in others (e.g. log-structured filesystems) it is part of the core design.
Could be either – it depends on the text editor that was used.
The concept of a 'text file' isn't built into computers – each operating system may manage files differently, and each text editor may use those files differently.
In practice, you'll find text editors which have both mechanisms. Practically all operating systems allow direct overwrite of an existing file's contents, so simple editors such as Notepad usually just ask the OS to write directly into the original file, as that's easiest to implement – but risky if you lose power mid-write. So for reliability reasons, many editors deliberately save the updated data to a new file and delete the original.
(I think in-place updates are more common among hex editors, where most edits don't insert/delete bytes but only change existing locations, so a full rewrite file is not needed.)
There's even a third mode of operation – the editor might first make a backup copy of the old file, then directly write new data into the file.
It also depends on the filesystem which keeps the file. With most traditional filesystems, if a program asks to write to an existing file, the filesystem will just overwrite old data in-place.
However, some filesystems do work in "copy-on-write" mode, where any new data is always written to a different location, whether the program wants it or not. Again, this has the possible advantage of increased reliability because an interrupted change can be fully reverted.
In some filesystems (such as Btrfs or ext4) this is an optional feature; in others (e.g. log-structured filesystems) it is part of the core design.
edited Jan 23 at 4:42
community wiki
2 revs
grawity
30
It's not just on a filesystem level. Flash memory, for example, has to clear a block before it can write to it. So, in practice, it will often write to files simply by writing the new change to a new block, and invalidating it on the old block. By having this sort of thing handled automatically by the device itself, the OS can just use a normal hard drive file system.
– trlkly
Jan 22 at 23:41
7
@trlkly: All modern flash memory devices are divided into erase regions which are orders of magnitude larger than a disk sector, and cannot recycle any portion of such a region without erasing all of it. Consequently, if a region contains 32 obsolete sectors worth of data and 224 sectors of useful data, it will have to copy the 224 sectors of useful data somewhere else before it can free up the space from any of the obsolete sectors. Modern operating systems use a "trim" command to indicate disk sectors whose contents can be abandoned if the block they are on gets recycled.
– supercat
Jan 23 at 0:48
Some editors choose at run-time which behaviour to use (e.g. depending on whether a file has just one directory entry naming it, or many).
– Toby Speight
Jan 23 at 16:06
2
Many editors will simply read the file into memory and do all changes there. (Perhaps peiodically autosaving a copy of ongoing work to a different.) The original file is not changed at all until you save changes, e.g. with vi's :w command.
– jamesqf
Jan 23 at 18:58
4
@jamesqf: Well, the question was about what happens when a file is "edited and saved"...
– grawity
Jan 23 at 19:34
|
show 2 more comments
30
It's not just on a filesystem level. Flash memory, for example, has to clear a block before it can write to it. So, in practice, it will often write to files simply by writing the new change to a new block, and invalidating it on the old block. By having this sort of thing handled automatically by the device itself, the OS can just use a normal hard drive file system.
– trlkly
Jan 22 at 23:41
7
@trlkly: All modern flash memory devices are divided into erase regions which are orders of magnitude larger than a disk sector, and cannot recycle any portion of such a region without erasing all of it. Consequently, if a region contains 32 obsolete sectors worth of data and 224 sectors of useful data, it will have to copy the 224 sectors of useful data somewhere else before it can free up the space from any of the obsolete sectors. Modern operating systems use a "trim" command to indicate disk sectors whose contents can be abandoned if the block they are on gets recycled.
– supercat
Jan 23 at 0:48
Some editors choose at run-time which behaviour to use (e.g. depending on whether a file has just one directory entry naming it, or many).
– Toby Speight
Jan 23 at 16:06
2
Many editors will simply read the file into memory and do all changes there. (Perhaps peiodically autosaving a copy of ongoing work to a different.) The original file is not changed at all until you save changes, e.g. with vi's :w command.
– jamesqf
Jan 23 at 18:58
4
@jamesqf: Well, the question was about what happens when a file is "edited and saved"...
– grawity
Jan 23 at 19:34
30
30
It's not just on a filesystem level. Flash memory, for example, has to clear a block before it can write to it. So, in practice, it will often write to files simply by writing the new change to a new block, and invalidating it on the old block. By having this sort of thing handled automatically by the device itself, the OS can just use a normal hard drive file system.
– trlkly
Jan 22 at 23:41
It's not just on a filesystem level. Flash memory, for example, has to clear a block before it can write to it. So, in practice, it will often write to files simply by writing the new change to a new block, and invalidating it on the old block. By having this sort of thing handled automatically by the device itself, the OS can just use a normal hard drive file system.
– trlkly
Jan 22 at 23:41
7
7
@trlkly: All modern flash memory devices are divided into erase regions which are orders of magnitude larger than a disk sector, and cannot recycle any portion of such a region without erasing all of it. Consequently, if a region contains 32 obsolete sectors worth of data and 224 sectors of useful data, it will have to copy the 224 sectors of useful data somewhere else before it can free up the space from any of the obsolete sectors. Modern operating systems use a "trim" command to indicate disk sectors whose contents can be abandoned if the block they are on gets recycled.
– supercat
Jan 23 at 0:48
@trlkly: All modern flash memory devices are divided into erase regions which are orders of magnitude larger than a disk sector, and cannot recycle any portion of such a region without erasing all of it. Consequently, if a region contains 32 obsolete sectors worth of data and 224 sectors of useful data, it will have to copy the 224 sectors of useful data somewhere else before it can free up the space from any of the obsolete sectors. Modern operating systems use a "trim" command to indicate disk sectors whose contents can be abandoned if the block they are on gets recycled.
– supercat
Jan 23 at 0:48
Some editors choose at run-time which behaviour to use (e.g. depending on whether a file has just one directory entry naming it, or many).
– Toby Speight
Jan 23 at 16:06
Some editors choose at run-time which behaviour to use (e.g. depending on whether a file has just one directory entry naming it, or many).
– Toby Speight
Jan 23 at 16:06
2
2
Many editors will simply read the file into memory and do all changes there. (Perhaps peiodically autosaving a copy of ongoing work to a different.) The original file is not changed at all until you save changes, e.g. with vi's :w command.
– jamesqf
Jan 23 at 18:58
Many editors will simply read the file into memory and do all changes there. (Perhaps peiodically autosaving a copy of ongoing work to a different.) The original file is not changed at all until you save changes, e.g. with vi's :w command.
– jamesqf
Jan 23 at 18:58
4
4
@jamesqf: Well, the question was about what happens when a file is "edited and saved"...
– grawity
Jan 23 at 19:34
@jamesqf: Well, the question was about what happens when a file is "edited and saved"...
– grawity
Jan 23 at 19:34
|
show 2 more comments
Since you are talking about "saving the file", then file will not be edited in-place on disk.
With a file in a usual filesystem, there are two things to consider. There is the directory entry, and then there is the actual file data somewhere on the disk.
When you edit a file in a normal editor, it will load the file data into RAM, and any editing will just happen on that copy of the data. Then when you save the file, there are basically two options:
Option 1: the original file is renamed, so both the original directory entry and the original data will remain on the disk. The rename might for example change file suffix to .bak
(removing any previous .bak
file, usually). Then a new file is created and the data from memory is written there.
Option 2: the original directory entry is modified so the file is truncated to 0 length. The area on disk used for file data will be marked as unused, but the old file contents will remain on disk until they are overwritten. Then new data is written. In this case the directory entry remains, just the data it points to is changed.
There are a few possible variations, a common one being, the edited data is first stored to temporary file, so if your computer crashes at this point, the original file will likely not be damaged. Then the original file is deleted and the new file renamed with the correct name. Or, the original file could just be deleted before writing the new one.
So your theory 1 is close to what most editors do.
Then there are special cases. The most obvious one is a disk editor, which allows reading and overwriting bytes directly on disk. Another might be a database file, where records might be fixed size, so it's easy to just overwrite a record. But data can't be appended in the middle of a file, and therefore editing text files or any other files where the length of the data in the middle of the file commonly changes, these tricks can't really be used.
So your theory 2 is possible in some cases, but normal text editors and such don't do it.
1
"Since you are talking about "saving the file", then file will not be edited in-place on disk." - I think that anytime you "open" a file, edit it, and write the changes back to disk, you are "saving the file", regardless of whether the file is "written in place" (overwritten), or the old file is deleted or renamed and a new file is created. Either way, you usually, at some point decide to "save the changes", or "discard the changes".
– Kevin Fegan
Jan 28 at 1:14
@KevinFegan Well, you can open a file in suitable disk or hex editor, edit the contents, and save changes. Or, you might open a database file (such as SQLite database file), and modify the database, and have changes committed to the file. So just opening a file for modification can mean modifying it in-place, but "saving a file" usually implies creation of a new file, and these other alternatives have differently named action for saving changes.
– hyde
Jan 29 at 17:03
add a comment |
Since you are talking about "saving the file", then file will not be edited in-place on disk.
With a file in a usual filesystem, there are two things to consider. There is the directory entry, and then there is the actual file data somewhere on the disk.
When you edit a file in a normal editor, it will load the file data into RAM, and any editing will just happen on that copy of the data. Then when you save the file, there are basically two options:
Option 1: the original file is renamed, so both the original directory entry and the original data will remain on the disk. The rename might for example change file suffix to .bak
(removing any previous .bak
file, usually). Then a new file is created and the data from memory is written there.
Option 2: the original directory entry is modified so the file is truncated to 0 length. The area on disk used for file data will be marked as unused, but the old file contents will remain on disk until they are overwritten. Then new data is written. In this case the directory entry remains, just the data it points to is changed.
There are a few possible variations, a common one being, the edited data is first stored to temporary file, so if your computer crashes at this point, the original file will likely not be damaged. Then the original file is deleted and the new file renamed with the correct name. Or, the original file could just be deleted before writing the new one.
So your theory 1 is close to what most editors do.
Then there are special cases. The most obvious one is a disk editor, which allows reading and overwriting bytes directly on disk. Another might be a database file, where records might be fixed size, so it's easy to just overwrite a record. But data can't be appended in the middle of a file, and therefore editing text files or any other files where the length of the data in the middle of the file commonly changes, these tricks can't really be used.
So your theory 2 is possible in some cases, but normal text editors and such don't do it.
1
"Since you are talking about "saving the file", then file will not be edited in-place on disk." - I think that anytime you "open" a file, edit it, and write the changes back to disk, you are "saving the file", regardless of whether the file is "written in place" (overwritten), or the old file is deleted or renamed and a new file is created. Either way, you usually, at some point decide to "save the changes", or "discard the changes".
– Kevin Fegan
Jan 28 at 1:14
@KevinFegan Well, you can open a file in suitable disk or hex editor, edit the contents, and save changes. Or, you might open a database file (such as SQLite database file), and modify the database, and have changes committed to the file. So just opening a file for modification can mean modifying it in-place, but "saving a file" usually implies creation of a new file, and these other alternatives have differently named action for saving changes.
– hyde
Jan 29 at 17:03
add a comment |
Since you are talking about "saving the file", then file will not be edited in-place on disk.
With a file in a usual filesystem, there are two things to consider. There is the directory entry, and then there is the actual file data somewhere on the disk.
When you edit a file in a normal editor, it will load the file data into RAM, and any editing will just happen on that copy of the data. Then when you save the file, there are basically two options:
Option 1: the original file is renamed, so both the original directory entry and the original data will remain on the disk. The rename might for example change file suffix to .bak
(removing any previous .bak
file, usually). Then a new file is created and the data from memory is written there.
Option 2: the original directory entry is modified so the file is truncated to 0 length. The area on disk used for file data will be marked as unused, but the old file contents will remain on disk until they are overwritten. Then new data is written. In this case the directory entry remains, just the data it points to is changed.
There are a few possible variations, a common one being, the edited data is first stored to temporary file, so if your computer crashes at this point, the original file will likely not be damaged. Then the original file is deleted and the new file renamed with the correct name. Or, the original file could just be deleted before writing the new one.
So your theory 1 is close to what most editors do.
Then there are special cases. The most obvious one is a disk editor, which allows reading and overwriting bytes directly on disk. Another might be a database file, where records might be fixed size, so it's easy to just overwrite a record. But data can't be appended in the middle of a file, and therefore editing text files or any other files where the length of the data in the middle of the file commonly changes, these tricks can't really be used.
So your theory 2 is possible in some cases, but normal text editors and such don't do it.
Since you are talking about "saving the file", then file will not be edited in-place on disk.
With a file in a usual filesystem, there are two things to consider. There is the directory entry, and then there is the actual file data somewhere on the disk.
When you edit a file in a normal editor, it will load the file data into RAM, and any editing will just happen on that copy of the data. Then when you save the file, there are basically two options:
Option 1: the original file is renamed, so both the original directory entry and the original data will remain on the disk. The rename might for example change file suffix to .bak
(removing any previous .bak
file, usually). Then a new file is created and the data from memory is written there.
Option 2: the original directory entry is modified so the file is truncated to 0 length. The area on disk used for file data will be marked as unused, but the old file contents will remain on disk until they are overwritten. Then new data is written. In this case the directory entry remains, just the data it points to is changed.
There are a few possible variations, a common one being, the edited data is first stored to temporary file, so if your computer crashes at this point, the original file will likely not be damaged. Then the original file is deleted and the new file renamed with the correct name. Or, the original file could just be deleted before writing the new one.
So your theory 1 is close to what most editors do.
Then there are special cases. The most obvious one is a disk editor, which allows reading and overwriting bytes directly on disk. Another might be a database file, where records might be fixed size, so it's easy to just overwrite a record. But data can't be appended in the middle of a file, and therefore editing text files or any other files where the length of the data in the middle of the file commonly changes, these tricks can't really be used.
So your theory 2 is possible in some cases, but normal text editors and such don't do it.
edited Jan 23 at 21:46
answered Jan 23 at 21:32
hydehyde
189212
189212
1
"Since you are talking about "saving the file", then file will not be edited in-place on disk." - I think that anytime you "open" a file, edit it, and write the changes back to disk, you are "saving the file", regardless of whether the file is "written in place" (overwritten), or the old file is deleted or renamed and a new file is created. Either way, you usually, at some point decide to "save the changes", or "discard the changes".
– Kevin Fegan
Jan 28 at 1:14
@KevinFegan Well, you can open a file in suitable disk or hex editor, edit the contents, and save changes. Or, you might open a database file (such as SQLite database file), and modify the database, and have changes committed to the file. So just opening a file for modification can mean modifying it in-place, but "saving a file" usually implies creation of a new file, and these other alternatives have differently named action for saving changes.
– hyde
Jan 29 at 17:03
add a comment |
1
"Since you are talking about "saving the file", then file will not be edited in-place on disk." - I think that anytime you "open" a file, edit it, and write the changes back to disk, you are "saving the file", regardless of whether the file is "written in place" (overwritten), or the old file is deleted or renamed and a new file is created. Either way, you usually, at some point decide to "save the changes", or "discard the changes".
– Kevin Fegan
Jan 28 at 1:14
@KevinFegan Well, you can open a file in suitable disk or hex editor, edit the contents, and save changes. Or, you might open a database file (such as SQLite database file), and modify the database, and have changes committed to the file. So just opening a file for modification can mean modifying it in-place, but "saving a file" usually implies creation of a new file, and these other alternatives have differently named action for saving changes.
– hyde
Jan 29 at 17:03
1
1
"Since you are talking about "saving the file", then file will not be edited in-place on disk." - I think that anytime you "open" a file, edit it, and write the changes back to disk, you are "saving the file", regardless of whether the file is "written in place" (overwritten), or the old file is deleted or renamed and a new file is created. Either way, you usually, at some point decide to "save the changes", or "discard the changes".
– Kevin Fegan
Jan 28 at 1:14
"Since you are talking about "saving the file", then file will not be edited in-place on disk." - I think that anytime you "open" a file, edit it, and write the changes back to disk, you are "saving the file", regardless of whether the file is "written in place" (overwritten), or the old file is deleted or renamed and a new file is created. Either way, you usually, at some point decide to "save the changes", or "discard the changes".
– Kevin Fegan
Jan 28 at 1:14
@KevinFegan Well, you can open a file in suitable disk or hex editor, edit the contents, and save changes. Or, you might open a database file (such as SQLite database file), and modify the database, and have changes committed to the file. So just opening a file for modification can mean modifying it in-place, but "saving a file" usually implies creation of a new file, and these other alternatives have differently named action for saving changes.
– hyde
Jan 29 at 17:03
@KevinFegan Well, you can open a file in suitable disk or hex editor, edit the contents, and save changes. Or, you might open a database file (such as SQLite database file), and modify the database, and have changes committed to the file. So just opening a file for modification can mean modifying it in-place, but "saving a file" usually implies creation of a new file, and these other alternatives have differently named action for saving changes.
– hyde
Jan 29 at 17:03
add a comment |
Historically, drives were directly controlled by the OS, which in turn controlled by the application. In that context, Theory 2 was the primary way PCs worked. the OS specified a physical location to put data, and it had full control over this process. As a result, early file systems had a "bad sector" table, so after your data was lost, the computer could tell you the data was lost and mark the sector as unusable to avoid more data loss. Disk scans and defragmentation was the order of the day.
However, after the turn of the century, we moved to LBA, so now the OS would simply reference the "logical" block it wanted to read or write to. The hard drive itself now had the intelligence to shuffle around data behind the OS's back without it noticing. This meant better reliability, since sectors that failed to verify could simply be moved to a new physical location without affecting the OS's knowledge of where that data was located.
In modern hardware, the "platter" disk drives typically just overwrite whatever was there before with the new incoming data, and optionally remaps the LBA if the sector looks like it might not retain the data (the sector is damaged or worn). "Flash" drives typically erase the old cells and then write data to new cells, a process known as wear-leveling.
In both cases, this is possible because there is always unused capacity beyond the reported value. This overprovisioning allows the drive to have a longer usable life than the rather unreliable technology of the previous century's technology. The LBA mode enables the physical medium to be abstracted from the OS so that the drive itself could take whatever measures the drive thinks is necessary to prevent data loss.
At the application level, you typically open a file in "WRITE" mode, which tells the OS to clear the file ("delete" the contents, but not the file itself), then write new data. All of this is buffered at the OS level, then "flushed" to the drive, which makes the requested changes.
Given that information, Theory 1 is what technically happens at the application programming level, at least by default, as there is also a "write with append" mode to avoid clearing the file contents. The OS itself will present the changes to be made more like Theory 2, but abstracted via LBA. The drive itself will then probably do something that's a mix of Theory 1 and Theory 2.
Yep. It's complicated, and very part-manufacturer/OS-developer/application-developer dependent. However, all of this complexity is aimed at making data storage more reliable while improving power usage/battery life.
edited Jan 24 at 20:57
answered Jan 24 at 20:50
phyrfox
Depends. AFAIK Microsoft Word, when saving .doc (not .docx) files with the Fast Save option enabled, appends the changes made to the document since the last save to the existing file.
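As a toy illustration of the idea only (a hypothetical journal format invented for this sketch, not Word's actual .doc layout), an append-style save adds a change record to the end of the file instead of rewriting it:

```python
import json

# Hypothetical "fast save": append a record describing the edit instead of
# rewriting the whole document, so the original bytes stay where they are.
def fast_save(path, change):
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(change) + "\n")

fast_save("report.doc", {"op": "insert", "at": 1042, "text": "new paragraph"})
fast_save("report.doc", {"op": "delete", "at": 87, "length": 12})
# Opening the document would replay these records over the last full rewrite.
```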
answered Jan 27 at 11:40
milet
Generally speaking, a computer will mark the memory where the original file resides as 'deleted', but all this really means is that the file won't show up in your file browser anymore, and the cells in the memory where it was written are allowed to be overwritten in the future.
Whether the new file is written into the same place is down to a number of factors, primarily the software you are using and how it is designed to make use of the memory.
answered Jan 23 at 11:32
GigaJoules
2
I think you might be confusing "memory" with the notion of file system unlink operations. And this doesn't really have anything to do with the stated question, which asks if concrete files are overwritten or if there is some sort of n-way update.
– jdv
Jan 23 at 19:16
Well, if software was designed to do that specifically then it's possible, though as far as I'm aware this is generally how both long-term storage and RAM work.
– GigaJoules
Jan 24 at 8:50
Unfortunately, your explanation (as far as I can decode what you mean) is decidedly not how "long term storage and RAM" work. But, at the end of the day, this has little to do with the question at hand. Which, I reiterate, is asking how software updates textual information to a file on a general purpose computing device with a typical modern file system. We don't have to consider how something like "memory" does or does not work to answer this question.
– jdv
Jan 24 at 16:25
Hopefully this isn't redundant; a little extra info/background.
The PC doesn't usually have much control over how a file is edited; it's the application that does it.
A few examples of how some apps might handle editing:
Notepad loads the entire document into memory and then saves the whole thing over your original document (or a new one you specify); a minimal sketch of this appears at the end of this answer.
Nearly all other small editors will save a "new" file as you edit and then copy it over your original document, deleting it, when you "save".
Large document editors that you might use to edit a book tend to read/modify a section of a document, because they can edit documents bigger than memory. These may actually edit the document "in place": they might rewrite one page and leave the rest alone. These often have a more complex indexed on-disk representation than a simple .txt file would, to allow this behavior.
The large editors might also just save temporary files with "updates" to your original document. When you do your final save, the editor can merge them all in and rewrite your document.
Most editors can be configured to leave the existing version untouched and create a new one with your changes (retain old versions).
As to the part of your question regarding what a "PC" does: some operating systems will remember every version of a file and always create a new one. This is pretty rare these days, but I remember old "minicomputers" (what we'd now call mainframes) where every file had a version at the end, like "File.text.1", and the version number was bumped every time you edited it. This kind of behavior is better suited to something like a tape drive or CD-ROM, where overwriting the old version is completely impractical.
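For the Notepad-style approach at the top of this list, a minimal sketch (assuming a plain text file that already exists and fits in memory) looks like this:

```python
# Notepad-style save: read the whole file into memory, edit the in-memory
# copy, then write the entire buffer back over the original file.
with open("code.txt", "r", encoding="utf-8") as f:
    text = f.read()

text = text.replace("colour", "color")  # the "edit", done purely in memory

with open("code.txt", "w", encoding="utf-8") as f:  # truncate, then rewrite
    f.write(text)
```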
answered Jan 24 at 17:59
Bill K
Theory 2 is not impossible, but it is stupid for various reasons.
A well-written text file editor will:
- Write a file with a different name and the new contents. If the original was myfile.txt, the new one might be myfile.txt.new
- Provided step 1 succeeded, rename the original to a backup name, say myfile.txt~
- Rename the new file to the original name, myfile.txt
- If everything has succeeded, remove the backup file. Many editors leave it anyway, so the user can recover if he/she soon works out that what he/she did with the editor was not what he/she wanted to do.
If the computer crashes or runs out of disk space during the above, at no point are both the old and the new file lost or only partially saved.
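A minimal sketch of that sequence (using Python's os.replace for the renames; error handling deliberately omitted):

```python
import os

def safe_save(path, new_contents):
    """Write-new / back-up-old / rename-into-place, as described above."""
    tmp = path + ".new"
    backup = path + "~"

    # 1. Write the new contents under a different name.
    with open(tmp, "w", encoding="utf-8") as f:
        f.write(new_contents)
        f.flush()
        os.fsync(f.fileno())  # make sure the new bytes are on the drive first

    # 2. Rename the original to the backup name (if it exists)...
    if os.path.exists(path):
        os.replace(path, backup)
    # 3. ...then rename the new file to the original name.
    os.replace(tmp, path)

    # 4. Many editors keep the backup so the user can recover; to remove it:
    # os.remove(backup)

safe_save("myfile.txt", "edited contents\n")
```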
edited Jan 26 at 16:17
Peter Mortensen
answered Jan 24 at 12:15
nigel222
The truncate-in-place-and-rewrite behaviour of lots of text editors for non-IBM/non-Microsoft operating systems for the past half century is not "stupid".
– JdeBP
Jan 28 at 13:44
Short answer
It depends heavily on your editor, the underlying software/drivers, and the storage.
Paranoiac answer
The old contents can remain recoverable unless you remove them permanently.
Long answer
There is missing information in your question (software, hardware, etc.), so instead of answering directly I will help you answer the question yourself.
It depends on a few factors:
Editor: If the editor software replaces the blocks of the same file, then it may get rewritten. This can also depend on editor settings and file types; note the emphasis on may. Even when the editor rewrites the file, the old data can still remain untouched (read the next points).
Underlying software/drivers/file system: The file will remain untouched if there is other software or a driver underneath that protects the initial file from being overwritten. Such software includes versioning systems, virtual differential disks, and some backup software. An example is Git, which keeps the committed original content and stores the modified file as a new object.
Storage:
Storage itself can write changed blocks to a new sector and mark the old blocks as "free". The file then physically remains on the storage (and is recoverable) unless it gets overwritten by another file. An example is modern SSD storage, which may do this at the hardware level.
There are ways to recover data from a typical mechanical HDD's magnetic discs even when the data has been overwritten, and there are companies that specialize in this.
So if you want a concrete answer as to whether your file will be deleted or not, you must also tell us what editor, backup/VCS software/hardware, and storage you use. If I missed any point, feel free to edit the answer.
How to make sure that the deleted file is actually deleted from the storage?
This is probably the next question you will ask yourself. There are many software and hardware solutions. Since Super User is not for promoting software/hardware, instead of naming them I will tell you how to find them: search for the keywords "permanently delete file". For more exact matches, mention your OS, hard drive type, or other info you have.
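As a naive illustration only (a sketch, not a recommendation): overwriting a file's bytes before unlinking it is the classic approach, but as the storage point above explains, an SSD's wear-leveling may write the zeros to different physical cells, leaving the old data intact.

```python
import os

def naive_shred(path):
    """Overwrite a file with zeros, then delete it.

    Illustration only: on SSDs and on journaling/copy-on-write file systems
    the zeros may land on different physical blocks, so the original data
    can survive. Real tools rely on device-level erase commands instead.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)
        f.flush()
        os.fsync(f.fileno())  # push the zeros through the OS cache
    os.remove(path)           # only now unlink the name
```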
edited Jan 27 at 19:11
answered Jan 27 at 18:56
X X
One behavior that no one has mentioned yet is found in some versions of MS Windows operating systems and is also related to the filesystem in use.
The behavior works like this: when you rename or delete a file and then create (re-create) a file with the same name within 15 seconds, the creation date/timestamp is copied from the original file. Essentially, the new file "becomes" the old/original file.
In this case, it really doesn't matter whether the application saves the changes by your method #1 (making a new file with the same name) or by your method #2 (editing/updating the file in place, with no delete). Either way, the final file looks in (nearly) every way like the original file. The only differences are that it will likely occupy different physical drive space (clusters/sectors) and that the directory entry for the file will likely be in a different location.
As I said, this is a behavior of some versions of MS Windows/filesystems. I don't know which version of Windows and which filesystem this started on, or whether it is still the behavior of more recent versions. If I had to guess, I'd say it was introduced around Windows NT and Windows XP, is still the behavior of Windows 10, and (still a guess) requires a FAT32 or NTFS (or newer) filesystem.
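If you want to observe this yourself, here is a small sketch (Windows-only, assuming the default tunneling settings; on Windows, os.stat()'s st_ctime reports the creation time):

```python
import os
import time

path = "tunnel_demo.txt"

with open(path, "w") as f:
    f.write("original")
before = os.stat(path).st_ctime   # on Windows this is the creation time

os.remove(path)
time.sleep(5)                     # stay inside the ~15-second tunneling window

with open(path, "w") as f:
    f.write("recreated")
after = os.stat(path).st_ctime

# With filesystem tunneling active, both values match even though the
# file was deleted and recreated in between.
print(before, after, before == after)
```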
answered Jan 28 at 0:57
Kevin Fegan
Actually, it does matter, because NTFS supports hard links and one of the well-known differences between these methods is the effect on multiply-linked files. Filesystem tunnelling has been around since at least Windows NT 5.0.
– JdeBP
Jan 28 at 13:41
@JdeBP - Yes, we agree. That's why I said #1) "Nearly" in "the final file looks in (nearly) every way, like the original file", and #2) directory entry in a different location.
– Kevin Fegan
Feb 9 at 2:11
You do not agree if you assert, as you do, that it does not matter.
– JdeBP
2 days ago
protected by JakeGould Jan 28 at 0:40