For Python, the reasons are historical. When the language was designed, mathematics and arithmetic were not exactly the focus of attention, and Guido decided to follow C's behavior whenever he didn't particularly care one way or the other. That is why division in Python works as it does in C.
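For example (a minimal Python sketch; the numbers are just for illustration): before PEP 238, dividing two integers with / floored the result, much as in C, and Python 3 keeps that behaviour under the // operator:

    # Historical int/int division: '/' floored in Python 2, as '//' does today.
    print(7 // 2)    # 3   -- integer (floor) division
    print(-7 // 2)   # -4  -- note: Python floors; C99 truncates toward zero (-3)
    print(7 / 2)     # 3.5 in Python 3; in Python 2 this printed 3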
In C, the choice is more natural, since C was designed as a low-level language focused on efficiency. In the early days, this efficiency was achieved largely by providing a close mapping between hardware functionality and language functionality. This meant that you could get good efficiency even with a simple compiler, by leveraging the programmer's intuition and experience regarding hardware efficiency (with integer arithmetic being much more efficient than floating-point, provided the latter was supported at all). Following this principle, the compiler should not slow things down automatically without very good reason, and silently moving from fast integer to slow floating-point arithmetic could easily be seen as violating that principle. Also, integer division is sufficient in many cases, especially when used for data organization (subdivision of datasets) rather than mathematics. The creators of C were working on operating systems, compilers, and other kinds of system software, where this was true to a large extent. Floating-point arithmetic was included in the language, but in later practice support for it was often optional, if present at all (especially on small micros and embedded systems).
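To make the "data organization" point concrete, here is a small sketch (the variable names are made up for the example): integer division and the remainder are exactly the operations you need to split a dataset into fixed-size chunks, with no floating point involved:

    # Splitting n_items into fixed-size chunks using only integer arithmetic:
    # '//' gives the chunk index, '%' gives the position within the chunk.
    n_items = 10
    chunk_size = 4

    for i in range(n_items):
        print(f"item {i} -> chunk {i // chunk_size}, offset {i % chunk_size}")

    # Number of chunks, rounded up, still without floating point:
    n_chunks = (n_items + chunk_size - 1) // chunk_size
    print(n_chunks)  # 3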
For a high-level language focused on ease of use for the programmer, automatic conversion between integer and floating-point "where required" makes a lot of sense. So it is not strange that the other languages you mention work that way. Python is moving in that direction as well (see e.g. http://python.fyxm.net/peps/pep-0238.html).
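Concretely, PEP 238 is the change that makes / perform "true division" (always returning a float for integer operands) while // keeps the old floor behaviour; in Python 2 it was opt-in per module, and in Python 3 it is the default. A minimal sketch:

    # PEP 238 in action: '/' is "true division", '//' is floor division.
    from __future__ import division  # opt-in for Python 2; a no-op in Python 3

    print(1 / 2)    # 0.5 -- ints are converted to float automatically
    print(1 // 2)   # 0   -- the old C-style integer result is still available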
Comment
Hello Rickard,
This is a great piece of information. I very much liked the way you have explained it.
Thanks for answering a question which was very intriguing for me.
Regards,
Vishruth