I recently attended a talk by Turing Award winner William Kahan, who is well known for his contributions to numerical analysis and to the IEEE 754 standard for floating-point arithmetic. In his talk, he pointed out that errors in floating-point computations can have disastrous consequences, a claim no one can dispute. A more interesting observation, and likely an equally indisputable one, is that numerical analysis is on the decline in computer science curricula. As a result, fewer and fewer of the young software developers who write numerically intensive programs understand how those computations can go badly wrong. Kahan's proposed remedy is to develop programming languages and compilers that help programmers diagnose faulty numeric programs, e.g., by allowing trials with different rounding modes.
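Python offers no switch for the hardware rounding mode of binary floats, but the kind of error Kahan is worried about is easy to demonstrate. Here is a minimal sketch (the helper name `kahan_sum` is mine) comparing naive summation against compensated summation, a technique due to Kahan himself:

```python
import math

def kahan_sum(xs):
    # Compensated summation (Kahan's algorithm): carry a running
    # correction term that recovers the low-order bits each
    # floating-point addition loses.
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in xs:
        y = x - c
        t = total + y
        c = (t - total) - y  # (t - total) is what was actually added
        total = t
    return total

xs = [0.1] * 1_000_000           # 0.1 is not exactly representable in binary
naive = sum(xs)                  # rounding error accumulates term by term
compensated = kahan_sum(xs)
exact = math.fsum(xs)            # correctly rounded reference sum

print(f"naive:       {naive:.10f}")
print(f"compensated: {compensated:.10f}")
print(f"reference:   {exact:.10f}")
```

The naive sum drifts away from the reference by an amount that grows with the number of terms, while the compensated sum stays within a few units in the last place, which is exactly the kind of sensitivity a rounding-mode trial is meant to expose.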
Thinking about all these young programmers who are, in Kahan's words, "clever and numerically naive" made me wonder whether I'm now considered an "old programmer". In my undergrad days, numerical analysis was a firm requirement of both the Applied Math and the Computer Science programs, and I suspect this is no longer true at many schools. During my grad studies, I was a teaching assistant for the numerical analysis course, first in the Computer Science department and later in the Electrical & Computer Engineering department. The course was universally detested by students in both departments. However, while most of the computer scientists could not understand why they were subjected to such utterly useless drivel, the engineers grudgingly admitted that the course was necessary for their future careers. There were also a few applied mathematics students, like myself, in the class; they actually enjoyed tasks like deriving Runge-Kutta formulas for ODEs by hand and did not complain much. It's pretty tough to make this material exciting, though my own instructors did a good enough job to convince me to continue in this area for my grad studies.