54

When a segmentation fault occurs in Linux, the error message Segmentation fault (core dumped) is printed to the terminal (if any), and the program is terminated. As a C/C++ dev, this happens to me quite often, and I usually ignore it and move on to gdb, recreating my previous actions in order to trigger the invalid memory reference again. Instead, I thought I might be able to use this "core", since running gdb all the time is rather tedious, and I cannot always recreate the segmentation fault.

My three questions are:

  • Where is this elusive "core" dumped?
  • What does it contain?
  • What can I do with it?
3
  • Usually you only need the command gdb path-to-your-binary path-to-corefile, then info stack followed by Ctrl-d. The only worrying thing is that core-dumping is a usual thing for you. Commented Apr 18, 2016 at 18:28
  • 1
    Not so much usual, more occasional - most of the time it's due to typos or something I changed and didn't preempt the outcome. Commented Apr 18, 2016 at 18:59
  • 1
    Related: Stack Overflow: Core dumped, but core file is not in the current directory? Commented Jun 30, 2021 at 4:19

5 Answers

31

If other people clean up ...

... you usually don't find anything. But luckily Linux has a handler for this which you can specify at runtime. In /usr/src/linux/Documentation/sysctl/kernel.txt you will find:

core_pattern is used to specify a core dumpfile pattern name.

  • If the first character of the pattern is a '|', the kernel will treat the rest of the pattern as a command to run. The core dump will be written to the standard input of that program instead of to a file.

(See Core dumped, but core file is not in the current directory? on StackOverflow)

According to the source this is handled by the abrt program (that's Automatic Bug Reporting Tool, not abort), but on my Arch Linux it is handled by systemd. You may want to write your own handler or use the current directory.
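You can check which of these cases applies on your own machine by reading the pattern at runtime; on Linux it lives in /proc:

```shell
# Inspect where the kernel currently sends core dumps.
# If the value starts with '|', the rest is a handler program
# (e.g. systemd-coredump or abrt) that receives the dump on its
# standard input instead of the dump being written to a file.
cat /proc/sys/kernel/core_pattern
```

On a systemd distribution this typically prints something like a pipe to systemd-coredump; on a minimal system it may just print `core`.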

But what's in there?

Now what it contains is system specific, but according to the all-knowing encyclopedia:

[A core dump] consists of the recorded state of the working memory of a computer program at a specific time[...]. In practice, other key pieces of program state are usually dumped at the same time, including the processor registers, which may include the program counter and stack pointer, memory management information, and other processor and operating system flags and information.

... so it basically contains everything that gdb needs (in addition to the executable that caused the fault) to analyze the fault.

Yeah, but I'd like me to be happy instead of gdb

You can both be happy, since gdb will load any core dump as long as you have an exact copy of your executable: gdb path/to/binary my/core.dump. You should then be able to analyze the specific failure instead of trying, and failing, to reproduce the bug.
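As the comment under the question suggests, this can even be done non-interactively: gdb's batch mode runs a command and exits, which is handy for quickly dumping a backtrace (the binary and core paths below are placeholders for your own files):

```shell
# Print the stack trace from a core file without an interactive session.
# path/to/binary and my/core.dump are placeholders.
gdb -batch -ex "info stack" path/to/binary my/core.dump
```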

1
  • 2
    Thanks for this. I'm long used to ulimit -c for controlling the production of core files, but some system builder seems to think that /proc/sys/kernel/core_pattern = |/bin/false was a good idea. Pffftt! Commented Oct 20, 2021 at 9:22
24

Also, if ulimit -c returns 0, then no core dump file will be written.

See Where to search for the core file generated by the crash of a linux application?

You can also trigger a core dump manually with Ctrl-\, which sends SIGQUIT to the foreground process; SIGQUIT terminates the process and dumps core.
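To check and change this limit in the current shell (the soft limit can be raised without root as long as the hard limit allows it):

```shell
# 0 means core dumps are disabled in this shell
ulimit -c
# raise the soft limit so subsequent crashes in this shell dump core
ulimit -c unlimited
# prints the new limit
ulimit -c
```

Note that this only affects the current shell and processes started from it; the change is not persistent.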

1
  • 5
    If ulimit -c returns 0, you can enable core dumps for that terminal by calling ulimit -c unlimited, as your source says. This sets the max allowed core file size to unlimited. Commented Jun 30, 2021 at 4:15
7

The core file is normally called core and is located in the current working directory of the process. However, there is a long list of reasons why a core file would not be generated, and it may be located somewhere else entirely, under a different name. See the core.5 man page for details:

DESCRIPTION

The default action of certain signals is to cause a process to terminate and produce a core dump file, a disk file containing an image of the process's memory at the time of termination. This image can be used in a debugger (e.g., gdb(1)) to inspect the state of the program at the time that it terminated. A list of the signals which cause a process to dump core can be found in signal(7).

...

There are various circumstances in which a core dump file is not produced:

   *  The process does not have permission to write the core file.  (By
      default, the core file is called core or core.pid, where pid is
      the ID of the process that dumped core, and is created in the
      current working directory.  See below for details on naming.) 
      Writing the core file will fail if the directory in which it is to
      be created is nonwritable, or if a file with the same name exists
      and is not writable or is not a regular file (e.g., it is a
      directory or a symbolic link).
   *  A (writable, regular) file with the same name as would be used for
      the core dump already exists, but there is more than one hard link
      to that file.
   *  The filesystem where the core dump file would be created is full;
      or has run out of inodes; or is mounted read-only; or the user has
      reached their quota for the filesystem.
   *  The directory in which the core dump file is to be created does
      not exist.
   *  The RLIMIT_CORE (core file size) or RLIMIT_FSIZE (file size)
      resource limits for the process are set to zero; see getrlimit(2)
      and the documentation of the shell's ulimit command (limit in
      csh(1)).
   *  The binary being executed by the process does not have read
      permission enabled.
   *  The process is executing a set-user-ID (set-group-ID) program that
      is owned by a user (group) other than the real user (group) ID of
      the process, or the process is executing a program that has file
      capabilities (see capabilities(7)).  (However, see the description
      of the prctl(2) PR_SET_DUMPABLE operation, and the description of
      the /proc/sys/fs/suid_dumpable file in proc(5).)
   *  (Since Linux 3.7) The kernel was configured without the
      CONFIG_COREDUMP option.

In addition, a core dump may exclude part of the address space of the process if the madvise(2) MADV_DONTDUMP flag was employed.

Naming of core dump files

By default, a core dump file is named core, but the /proc/sys/kernel/core_pattern file (since Linux 2.6 and 2.4.21) can be set to define a template that is used to name core dump files. The template can contain % specifiers which are substituted by the following values when a core file is created:

       %%  a single % character
       %c  core file size soft resource limit of crashing process (since
           Linux 2.6.24)
       %d  dump mode—same as value returned by prctl(2) PR_GET_DUMPABLE
           (since Linux 3.7)
       %e  executable filename (without path prefix)
       %E  pathname of executable, with slashes ('/') replaced by
           exclamation marks ('!') (since Linux 3.0).
       %g  (numeric) real GID of dumped process
       %h  hostname (same as nodename returned by uname(2))
       %i  TID of thread that triggered core dump, as seen in the PID
           namespace in which the thread resides (since Linux 3.18)
       %I  TID of thread that triggered core dump, as seen in the
           initial PID namespace (since Linux 3.18)
       %p  PID of dumped process, as seen in the PID namespace in which
           the process resides
       %P  PID of dumped process, as seen in the initial PID namespace
           (since Linux 3.12)
       %s  number of signal causing dump
       %t  time of dump, expressed as seconds since the Epoch,
           1970-01-01 00:00:00 +0000 (UTC)
       %u  (numeric) real UID of dumped process
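As a sketch of how these specifiers combine, a root-only sysctl write can install a pattern (the /var/tmp path here is just an illustrative choice, not a recommendation from the man page):

```shell
# Requires root; Linux-only. With this pattern, a crash of ./myprog
# with PID 1234 at epoch time 1700000000 would produce
# /var/tmp/core.myprog.1234.1700000000
# %e = executable name, %p = PID, %t = time of dump (seconds since Epoch)
sysctl -w kernel.core_pattern='/var/tmp/core.%e.%p.%t'
```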
2
  • This answer is incomplete: The first part is about not writing core files (which is not a topic for the question), while the second part explains naming of the core file only. Unanswered is what the core contains, and how to use it. Commented Jan 5, 2022 at 10:56
  • 2
    @U.Windl Not writing core files is certainly on-topic when the question explicitly asks "Where is this elusive "core" dumped?" "Nowhere" is an answer to that question. Commented Jan 5, 2022 at 13:27
5

On Ubuntu, any crash that happens gets logged to /var/crash. The generated crash report can be unpacked using the apport tool:

apport-unpack /var/crash/_crash_file.crash <path to unpack>

and then the core dump in the unpacked report can be read using

gdb "$(cat ExecutablePath)" CoreDump
1
  • This is wrong: /var/crash is for kernel dumps AFAIK. Commented Jan 5, 2022 at 10:57
1

NEW ANSWER VERSION

1: Where Linux dumps core by default:

ls -lA /var/lib/systemd/coredump/

2: The content is essentially:

  • Stack trace: Shows what function calls led to the crash.
  • Registers and memory state: Helps pinpoint invalid memory accesses.
  • Variable values: Useful for debugging unexpected behavior.

3: Core dumps can be a gem for debugging. They let you analyze the state of almost anything that crashes on your Linux system.
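On systemd systems, rather than poking at /var/lib/systemd/coredump/ directly, the coredumpctl tool can list captured dumps and hand them to gdb:

```shell
# List core dumps that systemd-coredump has captured
coredumpctl list
# Load the most recent dump straight into gdb
coredumpctl debug
```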

OLD ANSWER VERSION:

While configuring my Linux system, researching a bit about hardening and user-specific tuning, I created this file (and several others) as a knowledge base for my installations, with simple inline documentation to remind me what each setting does. I don't remember all of the detailed reasons right now, but if I need them, I'll research them again and provide them to you. Anyway, the configuration I have saved is the one below. It avoids what you're going through; at least for me, it solved a lot of things. I hope it helps you and others who need it too.

Here are the contents of the file:

Replace user with your username, don't forget it!!

# /etc/security/limits.conf
#
#This file sets the resource limits for the users logged in via PAM.
#It does not affect resource limits of the system services.
#
#Also note that configuration files in /etc/security/limits.d directory,
#which are read in alphabetical order, override the settings in this
#file in case the domain is the same or more specific.
#That means, for example, that setting a limit for wildcard domain here
#can be overridden with a wildcard setting in a config file in the
#subdirectory, but a user specific setting here can be overridden only
#with a user specific setting in the subdirectory.
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>

# Disable core dump for everyone (https://linux-audit.com/software/understand-and-configure-core-dumps-work-on-linux)
* hard core 0

# Open files (this is to avoid fd exhaustion)
user soft nofile 65535
user hard nofile 131072

# This is to avoid "fork bombs"
user soft nproc 4096
user hard nproc 16384

# Memory lock (This is to avoid memory lock DoS)
user soft memlock 67108864
user hard memlock 134217728

# Core dump size (0 disables core dumps; raise this for debugging)
user soft core 0
user hard core 0

# CPU time (Prevents process from frying the CPU more than it should)
user soft cpu unlimited
user hard cpu unlimited

# Real time priority
user soft rtprio 0
user hard rtprio 0

# End of file
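After logging in again, you can confirm what PAM actually applied to your session with the shell's ulimit builtin:

```shell
# Show the limits in effect for the current session
ulimit -c    # core file size
ulimit -n    # open files (nofile)
ulimit -u    # max user processes (nproc)
```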
3
  • 1
    This does not appear to address any of the three questions at the end of the issue: Where is this elusive "core" dumped? What does it contain? What can I do with it? Commented Jun 11 at 6:06
  • 1
    I'm trying to fix it. Sorry for my old answer, the question was different before, that time my answer made sense. I hope I made things better. Commented Jun 11 at 7:09
  • Thanks for coming back and fixing the answer. It looks better now (although I would suggest removing the old answer text completely). The three questions were always there in the issue, but the original formatting might have made them easier to miss. Commented Jun 11 at 7:38
