Suppose a program has memory leaks.
1) When a process dies (normally or via a segmentation fault), is that leaked memory freed?
2) What about other resources a process holds?
Stack and heap memory are freed and file descriptors are closed on all modern systems, I think.
On POSIX systems there are a number of resources that are not freed when a process exits: named semaphores, message queues, and shared memory segments. These are meant to persist across processes, so they simply can't be released automatically. It is the application's responsibility to free them.
It could do that, e.g., with on_exit handlers, but usually there is a simpler way. For shared memory segments you would typically call shm_unlink after all processes have opened the segment. The segment then ceases to exist once the last process closes its file descriptor to it (and unmaps it).
With most modern operating systems (Linux, Windows from around NT 3.5 on), yes.
1) Yes, the memory is freed.
2) Different process model? I'm not sure what you mean by that, but once a program dies, all the memory it new'd is returned to the OS and can be reallocated to another program later.
3) Once a program exits, all allocated memory is returned to the OS. However, until the process is wait()ed on by another process, a small amount of data, such as the exit status, hangs around for someone to collect (the so-called zombie state). On Linux, I believe, a normal process launched from bash/init will be waited on and cleaned up automatically.
You can safely assume with modern Linux systems that the memory will be freed... However, it's not a guarantee, and certainly not best practice.