Performance
Since performance is your main concern, let's address that first. Processing the example CSV file with ~36k lines takes your original script around 139s*.
The main bottlenecks are in_array:
if (!in_array($record,$return_waarde)) {}
and array_combine:
$return_waarde[] = array_combine($header_trimmed, str_getcsv(utf8_encode($record), ';'));
As you want an associative array, we can't get rid of array_combine, but we can improve on the very expensive and slow in_array check.
Idea
Instead of checking the rapidly growing and complex result array for the existence of each newly created associative array, you can do this:
- create a second array
- create a hash of the current dataset/row
- check this array's keys for the existence of the latest hash using isset, which is faster than in_array
- only if the hash is not found, store it, run array_combine on the row and append the result as well
Result
while (false !== ($data = fgetcsv($handle, 1000, ','))) {
    // hash the raw row, so duplicates can be detected with a cheap isset()
    $hash = md5(serialize($data));
    if (!isset($hashes[$hash])) {
        $hashes[$hash] = true;
        $values[] = array_combine($headerUnique, $data);
    }
}
With this improvement the script now processes all 36k lines in ~0.5s*. Seems a little faster. ;)
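If you want to reproduce the numbers, the measurement is just a wall-clock diff around the call, as described in the footnote:

$start = microtime(true);
$rows = read_csv('test.csv');
print microtime(true) - $start;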
Unique entries in the result
Even though this is solved by using the hash now, let me point out a flaw in your logic:
if (!in_array($record, $return_waarde)) {
    $return_waarde[] = array_combine($header_trimmed, str_getcsv(utf8_encode($record), ';'));
}
This will never find any duplicates, because you check for existence of the indexed array $record but afterwards you insert a different associative array.
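To make that concrete with made-up data: the loose comparison between the indexed row you check and the associative rows you already stored can never match.

$return_waarde = [
    ['name' => 'Alice', 'city' => 'Berlin'], // associative row, as it was inserted
];
$record = ['Alice', 'Berlin'];               // indexed row, as it is checked
var_dump(in_array($record, $return_waarde)); // bool(false) - keys differ, never equal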
Unique header names
In the beginning you create unique names for duplicate entries in the header row:
if (!in_array($value, $header_trimmed)) {
    $header_trimmed[] = $trim;
} else {
    $header_trimmed[] = $trim . "1";
}
If a column name occurs more than twice, you'll end up with this probably unintended result:
['column', 'column1', 'column1']
You can create a function to make the names truly unique, e.g.:
function unique_columns(array $columns): array {
    $values = [];
    foreach ($columns as $value) {
        $count = 0;
        $value = $original = trim($value);
        // append an increasing suffix until the name is unique
        while (in_array($value, $values)) {
            $value = $original . '-' . ++$count;
        }
        $values[] = $value;
    }
    return $values;
}
This will result in:
['column', 'column-1', 'column-2']
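A quick check confirms the suffixes:

print_r(unique_columns(['column', 'column', 'column']));
// output: Array ( [0] => column [1] => column-1 [2] => column-2 )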
Return value of read_csv
Currently your function read_csv() returns either a string or an array. The function should always return an array. You can even make the parameter and return value types stricter:
function read_csv(string $file): array {}
Also try to exit early when something goes wrong, instead of nesting if-statements. If you actually want to do something when an error occurs, throw an exception:
if (!$file) {
    throw new Exception('File not found: ' . $file);
}
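Applied to the file handling, that could look like the following sketch; the function name open_csv and the messages are only illustrative, not part of your code:

function open_csv(string $file) {
    if (!is_readable($file)) {
        throw new RuntimeException('File not found or not readable: ' . $file);
    }
    $handle = fopen($file, 'r');
    if (!$handle) {
        throw new RuntimeException('Could not open file: ' . $file);
    }
    return $handle; // only reached when everything went fine - no nesting needed
}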
Final result
Finally, let's make the function more versatile by adding the line length and the delimiter as optional parameters.
function read_csv(string $file, int $length = 1000, string $delimiter = ','): array {
    $handle = fopen($file, 'r');
    $hashes = [];
    $values = [];
    $header = null;
    $headerUnique = null;
    if (!$handle) {
        return $values;
    }
    $header = fgetcsv($handle, $length, $delimiter);
    if (!$header) {
        return $values;
    }
    $headerUnique = unique_columns($header);
    while (false !== ($data = fgetcsv($handle, $length, $delimiter))) {
        $hash = md5(serialize($data));
        if (!isset($hashes[$hash])) {
            $hashes[$hash] = true;
            $values[] = array_combine($headerUnique, $data);
        }
    }
    fclose($handle);
    return $values;
}
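Usage then looks like this; test.csv is the example file from the footnote, and the semicolon delimiter matches your original str_getcsv() call:

$rows = read_csv('test.csv', 1000, ';');
print count($rows) . " unique rows\n";
print_r($rows[0] ?? []); // first row as an associative array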
* For testing I used an example CSV file with over 36,000 lines from the site SpatialKey. I duplicated a few column names and added at least one duplicate line. My environment is the latest MAMP running PHP 7.1.1. The time was measured using: $start = microtime(true); $x = read_csv('test.csv'); print microtime(true) - $start;.