This is what non-functional requirements of performance are for.
The notion of "fast enough" is not technical per se. It depends on how users perceive your product, and should be captured in the requirements. That is the only objective way to tell whether your actual implementation is fast enough or not.
If you don't have those requirements, anything else is speculation and unconstructive.
The user tells you that the app feels slow, but at no point does anybody specify what "slow" means in milliseconds, on which hardware, and for which feature. That is unconstructive: you can't improve the code based on it, and you can't even tell whether the code was unacceptably slow a revision ago and is now fast enough.
You think a specific feature can run faster than it currently does? That's premature optimization, and it works against your users, who may not care at all about the speed of this feature, and may instead prioritize a specific bug fix, need a new feature, or need something else to be faster.
How can I know if my code can improve or not?
Assume it always can. Some of the techniques include:
Rewriting code to use more memory but less CPU, or more CPU but less memory. This often leads to code which is very difficult to read, understand and maintain; this is one of the reasons why premature optimization should be avoided.
Using different data structures.
Relying on caching, precomputing stuff or using OLAP cubes.
Moving to lower-level code, down to assembly if needed.
Not doing the task. At all. That's the ultimate optimization: from N seconds to zero.
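The memory-for-CPU trade (and caching in general) can be sketched with a classic toy case. This is a minimal illustration, not a recommendation for any particular codebase; the Fibonacci function is just a stand-in for any expensive computation with repeated subproblems:

```python
from functools import lru_cache

def fib_naive(n):
    # Cheap on memory, heavy on CPU: subproblems are recomputed
    # exponentially many times.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    # Spends O(n) memory on a cache so each subproblem is computed
    # exactly once: less CPU, more memory.
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)
```

Note that the cached version is also a small example of the maintainability cost: the caching decorator is invisible at the call site, and anyone mutating what the function depends on must now know the cache exists.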
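Switching data structures can change the asymptotic cost without touching the algorithm's logic. A hedged sketch, using Python membership tests as the example (the data here is made up for illustration):

```python
import string

# 676 two-letter "words": a stand-in for any sizable collection.
words = [a + b for a in string.ascii_lowercase for b in string.ascii_lowercase]
lookups = ["aa", "zz", "not-there"]

# A list scans every element on each membership test: O(n) per lookup.
hits_list = [w for w in lookups if w in words]

# A set hashes each key: O(1) average per lookup, same answers.
word_set = set(words)
hits_set = [w for w in lookups if w in word_set]
```

The behavior is identical; only the cost model changes, which is exactly why this kind of optimization is safer than most.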
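One mild form of "not doing the task" is doing it lazily and stopping as early as possible. A sketch, assuming the predicate below stands in for some genuinely expensive check:

```python
def expensive_check(n):
    # Stand-in for a costly validation of a single item.
    return n % 7 == 0

numbers = range(1, 1_000_000)

# any() with a generator stops at the first hit (here n == 7),
# so the remaining ~10^6 checks are simply never executed.
found = any(expensive_check(n) for n in numbers)
```

The most radical version, of course, is questioning whether the feature needs to run at all for this user, on this request, at this time.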