A code review needs to find and point out any major flaws:
- Does the code use the wrong approach to solve the problem? Do the design patterns fit?
- Does the code actually solve the problem? Does it meet the functional requirements? Ideally there will be tests showing that it does, but the closer you get to the user (interface) the harder this is to verify.
- Are there any (subtle) bugs that lead to failure? Look for missed edge cases, input parsing/validation and exception handling in particular.
- Is the code clear and understandable? Is it documented well enough for you to immediately understand what it does and why? Are the explanations reasonable?
- Does the code have any security issues? Missing authorisation checks, SQL injection, all the small, non-obvious things causing vulnerabilities. Scrutinise every single line, but also look for holes in the big picture.
- Is the code efficient (enough)? Is there any missed optimisation potential, will it work with large inputs, does it introduce any bottlenecks?
- Does the code fit into the rest of the application design? Does it duplicate some code existing elsewhere, is there any potential for reuse, should functionality have been generalised? Will a refactoring of other parts become necessary?
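To illustrate the edge-case point above, here is a minimal, hypothetical sketch of how a subtle input-validation bug can hide in plausible-looking code (the `parse_port` helper and its range check are my own illustration, not from any particular codebase):

```python
def parse_port(value: str) -> int:
    """Parse a TCP port number from user input."""
    # Looks fine, but accepts -1, 0 and 99999,
    # and crashes with an unhandled ValueError on "8,080".
    return int(value)

def parse_port_checked(value: str) -> int:
    """The same parser with the edge cases handled explicitly."""
    try:
        port = int(value)
    except ValueError:
        raise ValueError(f"not a number: {value!r}")
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port
```

A reviewer who asks "what happens on `-1`, on `0`, on non-numeric input?" finds the difference between these two versions immediately; an end-to-end test often does not.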
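For the security point, a classic case that is worth recognising on sight is string-built SQL. A minimal sketch (using Python's built-in `sqlite3`; the table and queries are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is concatenated into the statement.
    query = f"SELECT role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterised query: the driver handles escaping.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# The input "' OR '1'='1" returns every row through the unsafe
# variant, but nothing through the parameterised one.
```

Spotting the difference requires reading every line that touches a query, which is exactly why line-by-line scrutiny matters here.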
How much to do on each of these points depends a lot on your line of work and development workflow. If your team does sufficient technical planning together, the right approach will have been determined and agreed upon before the work started, so you only need to check whether it was actually implemented. If there is a dedicated QA team who will check that the acceptance criteria are met, you don't have to do that part yourself. Identifying problematic edge cases is easier when looking at the code than when doing an end-to-end test, though.
Don't trust tests. Focus on the things that have not been tested automatically. For the parts where tests have been written, verify that they actually check the right things, match the test description, and that the assertions are correct (and would catch wrong results).
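What "assertions that would catch wrong results" means in practice: a test can exercise the code and still pin nothing down. A hypothetical sketch (the `apply_discount` function and both tests are invented for illustration):

```python
def apply_discount(price: float, percent: float) -> float:
    # Hypothetical function under review.
    return price * (1 - percent / 100)

def test_discount_weak():
    # Looks plausible, but passes even if apply_discount
    # returned the price unchanged: it never states the
    # expected value, only an upper bound.
    assert apply_discount(100, 20) <= 100

def test_discount_strict():
    # Pins the result to the expected value and would
    # actually catch a wrong implementation.
    assert abs(apply_discount(100, 20) - 80.0) < 1e-9
```

The weak test is green for a broken implementation; only the strict one verifies the behaviour the test name promises.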
Regarding architectural/design decisions and non-functional requirements, experience with the whole application is required for the review. If you feel you are missing that, code reviews serve as a learning opportunity. In addition to browsing the relevant (old and new) code, ask questions!
Finally, if you feel you are not as critical with their code as you would be with your own code, you could try pitching your own solution against theirs. Before reading the code to review, look at the requirements of the ticket. Try to come up with your own approach: what functions do you need to change and how, what new classes and methods would you introduce, how would you name them and what would they do, what patterns would you apply? You don't have to actually implement it, but get a good enough picture so that you feel you could start writing down the code of your solution.
Then, compare your approach against the code that was written. What have they implemented differently, and why? Is either approach more straightforward? Did you break down the problem in the same way? Which edge cases did you fail to identify, and which did they miss?
Do not try to argue that your solution is superior; focus on the differences. They will point you to the potential flaws worth criticising.
Apart from the major flaws that you need to look out for, any code will have lots of small flaws. Point them out or just propose a change (or even make the change yourself if it is faster and uncontroversial). Do not argue over them: they are often subjective and of rather low priority.
Depending on how much code quality is valued in your organisation, they may even be considered irrelevant to the code review. In the code of a perfectionist you may rarely find any, but everybody has bad days. In either case, while reading through the code, you will come across
- badly chosen names
- violated naming conventions
- code style and formatting issues or inconsistencies
- typographical mistakes
- inconsistencies in handling of values
- misapplied patterns
- overcomplicated code that could be simplified with standard library methods
- hardcoded values that should have been named constants
- leftover debugging commands
- duplicated code that should be refactored
- unnecessary changes to unrelated components
- etc.
They may not all be significant, but finding them is a sign of having focused on the code with a critical mind instead of just skimming over it. Some of these problems can be detected automatically by tooling and fixed before the review, but many (anti)patterns are too subtle or impossible to codify (let alone match deterministically).
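To make this concrete, here is a hypothetical snippet that exhibits several of these small flaws at once, followed by a cleaned-up version (the function and its values are invented for illustration):

```python
# Before: hand-rolled loop, magic numbers, leftover debug print.
def top_scores(scores):
    result = []
    for s in scores:
        if s > 50:                  # hardcoded value, meaning unclear
            result.append(s)
    print("DEBUG", result)          # leftover debugging command
    result.sort()
    result.reverse()                # overcomplicated: sorted() can do this
    return result[:3]               # another hardcoded value

# After: the standard library does the heavy lifting, values are named.
PASSING_SCORE = 50
TOP_N = 3

def top_scores_clean(scores):
    passing = (s for s in scores if s > PASSING_SCORE)
    return sorted(passing, reverse=True)[:TOP_N]
```

Both versions return the same result; the second one is the kind of change you can simply propose (or make yourself) without much discussion.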
Code that "smells" also has a higher likelihood of containing a well-obscured major flaw. Look for these in the most unclear sections of the code. Over time, you will develop a hunch for such bugs.