The answer mostly depends on your scalability, uptime, and failover requirements. If you need a highly resilient architecture, then yes, you should run multiple instances (sometimes even distributed across multiple data centers).
This is actually one of the benefits of microservices - you can scale different services independently. Some of them are more performance-critical than others, so you create more instances of those. Contrast this with a monolithic application, where you can only scale the whole monolith, even though different parts of it have different requirements.
If you're lucky, your microservice is stateless - all instances are completely isolated. This is great because you can scale them almost infinitely. But often you have some state which needs to be shared among the individual nodes. If you use a relational database, you can have all nodes connect to the same DB instance, but this usually doesn't scale well. You can then set up read replicas (slaves), or even multi-master replication (reads and writes possible on several DB instances which then sync), but you can still hit limits, depending on the application and its scalability requirements. Recently there has been a lot of hype around distributed key-value stores - e.g. Hazelcast - which can run many nodes (even one per application instance) that synchronize with each other. These usually scale better than relational DBs, but the data is typically not persisted and is lost after a system crash.
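The read-replica setup usually means splitting traffic in the application: writes go to the single primary, reads can be spread over replicas. A minimal sketch of that routing logic (the `Connection` class and node names are placeholders standing in for a real DB driver):

```python
import random

class Connection:
    """Hypothetical stand-in for a real database driver connection."""

    def __init__(self, name):
        self.name = name

    def execute(self, query):
        # A real driver would send the query over the wire;
        # here we just report which node handled it.
        return f"{self.name}: {query}"

class ReplicatedDatabase:
    """Routes writes to the primary and spreads reads over replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = replicas

    def execute(self, query):
        if query.lstrip().upper().startswith("SELECT"):
            # Reads can go to any replica (replication lag permitting).
            node = random.choice(self.replicas)
        else:
            # Writes must go to the single primary.
            node = self.primary
        return node.execute(query)

db = ReplicatedDatabase(
    primary=Connection("primary"),
    replicas=[Connection("replica-1"), Connection("replica-2")],
)

print(db.execute("INSERT INTO orders VALUES (1)"))  # handled by the primary
print(db.execute("SELECT * FROM orders"))           # handled by some replica
```

Note the caveat in the comment: replicas lag behind the primary, so a read right after a write may not see the new row - this is why read/write splitting only works for reads that tolerate slightly stale data.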
When you have multiple application instances, it's a good idea to hide this fact from clients - expose only one endpoint, served by a load balancer (e.g. HAProxy), which directs traffic to the actual application instances. Instances can then be added or removed dynamically based on load, while the endpoint never changes.
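For HAProxy, that setup is just a frontend bound to the public port and a backend listing the instances. A minimal sketch (the addresses and ports are placeholders for your actual instances):

```
frontend app_front
    bind *:80
    default_backend app_back

backend app_back
    balance roundrobin
    # 'check' enables health checks, so dead instances are dropped automatically
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```

Adding capacity is then just another `server` line (or service discovery feeding the backend) - clients keep talking to the same address the whole time.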