They are multi-threaded for one of the following reasons:
- to exploit the multiprocessor nature of current systems
- because the programming language they are written in implicitly uses multiple threads
- to avoid complex asynchronous or event-based programming paradigms
or a combination of the three above.
The best-performing applications today are multi-threaded, but each thread is treated as completely independent: the threads share only a bare minimum of state information and use no synchronization.
Such an application is really a set of single-threaded applications, which can scale almost linearly because the threads share almost no information and also avoid the expensive context switching that occurs when multiple threads run on a single CPU.
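As a rough sketch (my own illustration, not code from any particular application), this is what the shared-nothing style looks like in C++: each worker thread owns its slice of the work and its own result slot, so there are no locks and no contention.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    // One worker per hardware thread; fall back to 1 if the count is unknown.
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::uint64_t> results(workers, 0);  // one private result slot per worker
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([w, workers, &results] {
            // Each worker walks only its own stride of the input and writes only
            // its own slot: no shared mutable state, hence no synchronization.
            std::uint64_t local = 0;
            for (std::uint64_t i = w; i < 100000000ULL; i += workers) local += i;  // placeholder work
            results[w] = local;
        });
    }
    for (auto& t : pool) t.join();

    // Results are combined only after all workers have finished.
    std::printf("total: %llu\n",
                (unsigned long long)std::accumulate(results.begin(), results.end(),
                                                    std::uint64_t{0}));
    return 0;
}
```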
I recently had the pleasure of implementing a small single-threaded, message-driven framework.
What a pleasure! It is a model I had used years ago as a microkernel on embedded systems used for networking.
What a pleasure not to have to think about:
- Synchronizing threads
- Protecting access to shared data
- All kinds of locks and deadlocks
- Priority inversions
Every time you use a data structure in a multi-threaded application, you have to worry about all the potential issues you might encounter.
Not in this case: I could permit myself some extravagances...
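To make the idea concrete, here is a minimal sketch of such a single-threaded, message-driven loop (the names MessageLoop and Message are my own illustration, not the actual framework): everything runs on one thread, so handlers can touch shared state freely without any of the concerns listed above.

```cpp
#include <cstdio>
#include <deque>
#include <functional>
#include <map>
#include <string>

struct Message {
    int type;
    std::string payload;
};

class MessageLoop {
public:
    using Handler = std::function<void(const Message&)>;

    void on(int type, Handler h) { handlers_[type] = std::move(h); }
    void post(Message m)         { queue_.push_back(std::move(m)); }

    // Dispatch messages one at a time on the calling thread until the queue is
    // empty; no other thread ever touches the queue or the handlers.
    void run() {
        while (!queue_.empty()) {
            Message m = std::move(queue_.front());
            queue_.pop_front();
            auto it = handlers_.find(m.type);
            if (it != handlers_.end()) it->second(m);
        }
    }

private:
    std::deque<Message> queue_;
    std::map<int, Handler> handlers_;
};

int main() {
    MessageLoop loop;
    loop.on(1, [](const Message& m) { std::printf("got: %s\n", m.payload.c_str()); });
    loop.post({1, "hello"});
    loop.post({1, "world"});
    loop.run();   // single thread, single queue, no synchronization anywhere
}
```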
How can we achieve performance with a single thread instead of multiple threads?
It all depends on how your code is written.
A typical multi-threaded application will use synchronization primitives, data shared among the threads, and so on. In that case, the performance of your application will roughly follow Amdahl's law: https://en.wikipedia.org/wiki/Amdahl%27s_law
If your application is a sum of single threads/processes that share almost nothing, it will scale almost linearly.
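As a quick illustration with made-up numbers, even a small serialized fraction caps the achievable speedup, no matter how many cores you add:

```cpp
#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / n),
// where p is the fraction of the work that can run in parallel.
double amdahl(double parallel_fraction, int cores) {
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
}

int main() {
    // Assume 5% of the work is serialized by locks and shared data (p = 0.95):
    // the speedup can never exceed 1 / 0.05 = 20x.
    for (int n : {2, 4, 8, 16, 64, 1024})
        std::printf("%5d cores -> speedup %.1fx\n", n, amdahl(0.95, n));
    // A shared-nothing design is effectively p ~= 1.0, so the speedup stays
    // close to n, i.e. almost linear scaling.
}
```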