As @StephenKitt and @ilkkachu already pointed out, the gawk manual contains code that removes unreadable files from ARGV[] in the BEGIN section, but that has a race condition between when a file is tested and when awk actually tries to read its contents, which could be much later if the preceding files are large.
If you have gawk, I'd use the script in @StephenKitt's answer; otherwise I'd use the script from the gawk manual, unless you think you might actually hit that race condition, since the manual's script is clearer, briefer, simpler, and more efficient than what's below, and doesn't require a temp file or global variables. For those who are concerned about the race, though, here's a more complicated script that'll work in any awk: it creates a temp file that gets opened immediately before every real input file, and at THAT point tests whether the upcoming real file is readable.
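For context, the BEGIN-time test from the gawk manual looks roughly like this (paraphrased from memory; see the readable.awk library file in the manual for the exact code). The race is the gap between the getline test here and the moment awk later opens the file for real:

BEGIN {
    for (i = 1; i < ARGC; i++) {
        if (ARGV[i] ~ /^[[:alpha:]_][[:alnum:]_]*=.*/ \
            || ARGV[i] == "-" || ARGV[i] == "/dev/stdin")
            continue    # assignment or standard input
        else if ((getline junk < ARGV[i]) < 0) # unreadable
            delete ARGV[i]
        else
            close(ARGV[i])
    }
}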
$ cat skip.awk
# Rearrange ARGV[] so a 1-line temp file appears immediately before every
# real input file, recording each real name and its new ARGV[] index
function addTmp(        cmd, oArgv, oArgc, i, j) {
    cmd = "mktemp"
    cmd | getline TmpChkFile
    close(cmd)
    if ( TmpChkFile != "" ) {
        print "" > TmpChkFile
        close(TmpChkFile)
        for (i in ARGV) {
            oArgv[i] = ARGV[i]
        }
        oArgc = ARGC
        ARGC = 1                        # rebuild ARGV[] from scratch below
        for (i = 1; i < oArgc; i++) {
            if ( ! (oArgv[i] ~ /^[a-zA-Z_][a-zA-Z0-9_]*=.*/ \
                    || oArgv[i] == "-" || oArgv[i] == "/dev/stdin") ) {
                # not assignment or standard input so a file name
                ARGV[ARGC] = TmpChkFile         # temp file goes in first
                ArgFileNames[++j] = oArgv[i]
                ArgFileIndices[j] = ++ARGC      # index where the real file goes next
            }
            ARGV[ARGC++] = oArgv[i]
        }
    }
}
# Remove the temp file (\047 is the octal escape for a single quote)
function rmvTmp() {
    system("rm -f \047" TmpChkFile "\047")
}
# Called for every input line; on the temp file's only line it tries to open
# the next real input file and deletes it from ARGV[] if it's unreadable
function chkTmp(        stderr, line) {
    if ( (FNR == 1) && (FILENAME == TmpChkFile) ) {
        ++TmpFileNr
        if ( (getline line < ArgFileNames[TmpFileNr]) < 0 ) {
            stderr = "cat>&2"
            printf "Warning: skipping unreadable file \"%s\"\n", ArgFileNames[TmpFileNr] | stderr
            close(stderr)
            delete ARGV[ArgFileIndices[TmpFileNr]]
        }
        close(ArgFileNames[TmpFileNr])
        next
    }
}
BEGIN { addTmp() }      # runs once, before the first input file is opened
END   { rmvTmp() }      # runs once, after the last input line is read
{ chkTmp() }            # runs for every input line
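To make the ARGV[] surgery concrete: if you run awk -f skip.awk -f tst.awk file_1 file_2 file_3 and mktemp prints, say, /tmp/tmp.XXXXXX (a made-up name for illustration), then after addTmp() the arrays look like this:

ARGV[0] = "awk"
ARGV[1] = "/tmp/tmp.XXXXXX"     # chkTmp() tests file_1 here
ARGV[2] = "file_1"
ARGV[3] = "/tmp/tmp.XXXXXX"     # chkTmp() tests file_2 here
ARGV[4] = "file_2"
ARGV[5] = "/tmp/tmp.XXXXXX"     # chkTmp() tests file_3 here
ARGV[6] = "file_3"

ArgFileNames[1] = "file_1"; ArgFileIndices[1] = 2
ArgFileNames[2] = "file_2"; ArgFileIndices[2] = 4
ArgFileNames[3] = "file_3"; ArgFileIndices[3] = 6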
If your awk supports multiple -f arguments (as POSIX requires) or has some other way of running multiple scripts at once (e.g. GNU awk has @include, demonstrated after the sample run below), you can use that to combine the above with your actual script; otherwise, copy/paste the above into the same file as your script. For example, given a script like:
$ cat tst.awk
FNR == 1 { print FILENAME, $0 }
and files like:
$ ls file_{1..3}
ls: cannot access 'file_2': No such file or directory
file_1  file_3
then with any POSIX awk (and most, if not all, others) you can do:
$ awk -f skip.awk -f tst.awk file_{1..3}
file_1 A
Warning: skipping unreadable file "file_2"
file_3 C
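If you'd rather use gawk's @include than multiple -f options, tst.awk can pull the library in itself, e.g. (assuming skip.awk is in the current directory or somewhere on your AWKPATH):

$ cat tst.awk
@include "skip.awk"
FNR == 1 { print FILENAME, $0 }
$ gawk -f tst.awk file_{1..3}
file_1 A
Warning: skipping unreadable file "file_2"
file_3 C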
Most of the work above is done in the BEGIN section, which runs once, before the first input file is opened, to ensure a readable temp file sits in ARGV[] immediately before every real input file. chkTmp() is then called for every line of input but only does something on the first (and only) line of the temp file, and that something is to try to open the real input file that comes after it in ARGV[]. END just removes the temp file. So the only real extra overhead is the call to chkTmp() and its FNR==1 test on every input line.
I'm creating a temp file instead of using some existing file because no file is guaranteed to exist on all Unix boxes and, even if one were, it'd have to be exactly 1 line long; otherwise chkTmp() would add the overhead of reading every line of that file, since not all awks support nextfile (in an awk that does, we could call nextfile instead of next inside chkTmp(), as sketched below).
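For example, in an awk that has nextfile, the end of chkTmp() could read like this instead, and then the temp file's length would no longer matter:

        close(ArgFileNames[TmpFileNr])
        nextfile    # skip any remaining lines of the temp file
    }
}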