Let's assume you are talking about log files whose purpose is to reproduce error scenarios happening at your customer's site (you did not describe the purpose). Let's assume the log files are to be analysed by you, not by the customers themselves. And let's assume the log files don't grow to several hundred MB. Then your team lead is correct: one log file is probably easier to handle than many.
The situation changes if there are many completely different kinds of logs, and different users at your customer (with different roles) have to look into the log files themselves, each needing a different kind of information. Then it would make more sense to split the logs thematically. If log file size matters, it may also be a good idea to split the logs into time intervals (for example, one log per day, week or month), as sketched below.
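For illustration, here is a minimal sketch of time-based splitting using Python's standard logging module (assuming a Python application; log4j, NLog and most other logging frameworks offer an equivalent rolling-file appender). The file name and subsystem name are made up for the example:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

# Roll over to a new file at midnight; keep the last 30 daily files.
handler = TimedRotatingFileHandler("app.log", when="midnight", backupCount=30)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))

logger = logging.getLogger("billing")   # hypothetical subsystem name
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("invoice generated")        # written to app.log, rotated daily
```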
Whatever you do, it is always a good idea to make the log information filterable. That means adding machine-readable tags to each log entry: which application or subsystem produced it, whether it is a warning, a severe error or just status information, and of course an exact timestamp. That will make it much easier to find the information you need afterwards, even when everything is in one file.
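One common way to make entries machine-readable is to write one JSON object per line. A sketch, again in Python; the field names (`ts`, `level`, `subsystem`) are just illustrative, any fixed, parseable scheme will do:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Format each record as one JSON object per line (illustrative field names)."""
    def format(self, record):
        return json.dumps({
            "ts": self.formatTime(record),      # exact timestamp
            "level": record.levelname,          # WARNING, ERROR, INFO, ...
            "subsystem": record.name,           # which component produced the entry
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("importer")          # hypothetical subsystem name
logger.addHandler(handler)
logger.warning("file skipped")
# {"ts": "2024-01-01 12:00:00,000", "level": "WARNING", "subsystem": "importer", ...}
```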
And it's often unnecessary to develop your own log analyser - there are good free tools like Microsoft's Log Parser which can query and filter your log files for you. For some scenarios you don't need much more than a decent spreadsheet.
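To give an idea of what such a tool can do, a Log Parser 2.2 query over a plain text log might look roughly like this (from memory, so treat it as a sketch; the `TEXTLINE` input format exposes each line as a `Text` field, check the tool's documentation for the format that matches your logs):

```
LogParser.exe -i:TEXTLINE "SELECT Text FROM app.log WHERE Text LIKE '%ERROR%'"
```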