Originally posted by mindseye@Jan 25 2005, 03:56 PM
I believe it's misleading to compare Firefox and IE in the way you did in the last sentence; the security differences between the two browsers are at least an order of magnitude.
It wasn't my intention to compare the two browsers at all. As you point out, their security records are not comparable. (On the other hand, the degree to which they have been attacked in the field isn't comparable either, due to the vast differences in installed base.)
I was actually expressing a mild disappointment in the software development process in general. My point was that the well-known failures of IE should have provided a good practical lesson about specific exploit mechanisms to be strictly guarded against. Yet, in reviewing the bug history for Firefox, I see many of the same classes of problems that were exploited in IE. I'm not talking about complex things with esoteric interactions of embedded media, Java, etc., but rather mundane things like buffer overflows, heap overruns, and similar lapses in basic exception handling that allow exploits.
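To be clear about the kind of mundane bug I mean, here is a minimal hypothetical sketch in C (my own illustration, not code from either browser):

[code]
#include <stdio.h>
#include <string.h>

/* Classic unchecked copy: if `url` is longer than 63 characters,
 * strcpy() writes past the end of `buf` and corrupts the stack. An
 * attacker who controls `url` can often turn this into arbitrary
 * code execution. */
void parse_url_unsafe(const char *url)
{
    char buf[64];
    strcpy(buf, url);               /* no length check at all */
    printf("parsing: %s\n", buf);
}

/* The boring, correct version: bound the copy and guarantee
 * termination. It is the absence of exactly this kind of basic
 * input checking that keeps getting exploited. */
void parse_url_safe(const char *url)
{
    char buf[64];
    strncpy(buf, url, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    printf("parsing: %s\n", buf);
}
[/code]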
However, I'm not being critical of anyone in particular. Developing good, reliable software is quite difficult, and I am lamenting that we still have a long way to go. I design medical equipment for a living, and as medical devices have come to depend increasingly on software/firmware, the need for improved development procedures has become increasingly evident.
One traditional, and I believe seriously flawed, method of ensuring software quality has been extensive formal testing. However, this fails to achieve perfection for several reasons. It is impossible for a finite set of formal tests to completely explore all possible failure modes of very complex systems. Unit testing of smaller chunks of the system is often used to improve this situation, but complex interactions are generally not exhaustively tested. Further, if the programmers who write or maintain the code are aware of how it is tested, there is a tendency to code to pass the tests, but not necessarily to perform well in general. In some industries in certain parts of the world, quality systems attempt to prohibit developers of critical code from knowing how their code will be tested, in an effort to avoid this. However, this prohibition quickly breaks down when bugs are found and have to be fixed. When software fails, people often cry out that it wasn't adequately tested. But as system complexity increases, the test requirements grow so fast that formal testing is no longer an efficient tool for finding bugs. What is necessary is a preemptive mechanism for preventing bugs in the first place, not for testing them out later.
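As a small hypothetical illustration of why passing a finite test suite proves so little (the names and numbers here are mine, invented for the example):

[code]
#include <assert.h>
#include <limits.h>

/* Midpoint of two array indices. It passes every "reasonable" test
 * below, yet invokes signed overflow (undefined behavior) when
 * lo + hi exceeds INT_MAX -- a failure mode no finite test suite
 * will catch unless someone thinks to probe that boundary. */
int midpoint(int lo, int hi)
{
    return (lo + hi) / 2;       /* overflows for large lo and hi */
}

int midpoint_fixed(int lo, int hi)
{
    return lo + (hi - lo) / 2;  /* safe for any 0 <= lo <= hi */
}

int main(void)
{
    /* The formal test suite: all pass, so the code "is tested". */
    assert(midpoint(0, 10) == 5);
    assert(midpoint(2, 8)  == 5);
    assert(midpoint(0, 0)  == 0);

    /* The case nobody wrote: with lo and hi both near INT_MAX, the
     * naive version overflows and returns garbage, while the fixed
     * version still gives the right answer. */
    assert(midpoint_fixed(INT_MAX - 2, INT_MAX - 2) == INT_MAX - 2);
    return 0;
}
[/code]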
The second traditional tool, which I believe to be better at finding obscure problems and at preventing problems in the first place, is the code review. The open-source concept (as used in Firefox, Linux before people got greedy, etc.) amplifies the effect of the code review by, in effect, allowing the code to be reviewed by much larger groups of people from more diverse backgrounds. (An interesting article mentioning this, and other observations about software failures, appears on pg. 45 of January's Embedded Systems Programming magazine. One of the points in the article is the failure to learn from past mistakes; this is what made me think of the Mozilla bug list echoing some of the same faults as IE.) However, there are limitations to the review process, and cases where the open-source model is inapplicable, or where plausible arguments can be made that in certain cases it actually does harm, not good. (A lot of these came up during the public debate over electronic voting; I'm not sure I agree with them, but some do have credibility.) For example, the public availability of the sendmail source made it very easy for hackers and spammers to understand and exploit its vulnerabilities, but getting patched versions actually installed on mail servers proved much harder. It could be argued that had the code been closed source, distributed as binary only, the vulnerabilities would have been discovered at a slower, and more manageable, rate.
Modern software development concepts, like new languages, OOP, CASE tools, and so on, have been helpful as well, in theory making it harder to write bad code. Yet I have seen badly unreliable systems built this way, too.
We all like to beat up on Microsoft, but it must be remembered that many of their products have roots in times when computers and the web were used very differently than they are today. At the same time as they had to adapt to changing needs and security risks, they had to deal with that legacy as well. For example, I was critical of the lack of exception handling above. Yet, in the days of slower processors and smaller memory, exception handling was often intentionally omitted (despite the fact that this was known to be bad practice), because it was better to have a product that ran at an acceptable speed, and fit in an amount of memory your users could afford, than one that was bulletproof but useless because it was too slow. In some ways, newcomers have it somewhat easier, because they lack a lot of the legacy issues and can start fresh, building on the experience and failures of what came before. Yet they often still struggle, because until there is some revolution, good software doesn't come easy, and it takes a rare breed to make it happen.
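To make that tradeoff concrete, here is a minimal hypothetical sketch (mine, not from any actual product) of the kind of intentionally omitted error handling I mean:

[code]
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* The "fast and small" style of the era: assume malloc() never fails.
 * On machines where every byte and cycle mattered, the checking code
 * was often deliberately left out. */
char *duplicate_fast(const char *s)
{
    char *copy = malloc(strlen(s) + 1);
    strcpy(copy, s);            /* crashes (or worse) if malloc failed */
    return copy;
}

/* The defensive version: one extra branch and a few bytes of code per
 * call -- trivial today, a real cost on the hardware of that era. */
char *duplicate_safe(const char *s)
{
    char *copy = malloc(strlen(s) + 1);
    if (copy == NULL) {
        fprintf(stderr, "out of memory\n");
        return NULL;            /* caller must handle the failure */
    }
    strcpy(copy, s);
    return copy;
}
[/code]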