I worked as a systems dev at Microsoft for a while. There were many lessons to learn there - about software in general, and large systems in particular. I realized today that some of that knowledge can be applied to systems beyond software. This is a post on that...
Most of the software we know today involves a lot of code - sometimes millions of lines of it. Needless to say, no one person can understand the entire thing. That's natural, since the human brain can only handle a fixed number of things at a time, and that number is definitely not a million. So a number of people work on these problems, and each person understands the module he or she works on and tries to enhance and modify it. It is, however, important that the person modifying a piece of code has at least a basic understanding of all the ways in which the change can affect other parts of the system. And that is hard, given the size of the problems we work on.
Now consider what happens when a new dev starts to work on such a system. They have possibly never worked with a large system in their entire life (being fresh out of college). Most often, therefore, they are asked to fix bugs. Often they end up making various mistakes while fixing a bug, and sometimes generate more bugs than they really fix. The basic problem is that they don't understand the system. Most bug fixes require changing fewer than 10 lines of code. But what looks easy initially is difficult, because one has to make sure that the change to those 10 lines does not cause anything unexpected for every other line of code in the system. (Sometimes, in a well-understood and modular system, your 10 lines of code will not really affect all the other lines... but the point is, the system has to be well understood for you to be able to say that there is really no effect.) This is hard to do, and most often the result is that new devs introduce bugs while developing.
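To make that concrete, here is a minimal, purely hypothetical sketch (the function, the cache, and the keys are all invented for illustration) of how a one-line "fix" can break a distant part of a system that silently depended on the old behavior:

```python
# Hypothetical example: a one-line bug fix with a non-local effect.

def normalize_id(raw):
    # Original behavior was just raw.lower(). A new dev, responding to a
    # bug report about trailing spaces, adds .strip():
    return raw.strip().lower()

# Elsewhere in the system, a cache was populated using the OLD behavior,
# so its keys still contain the trailing space.
cache = {"user42 ": "alice"}

def lookup(raw):
    # After the "fix", the normalized key no longer matches the stored
    # key, and the lookup silently fails.
    return cache.get(normalize_id(raw))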
Microsoft (like most other companies) has a stringent code review process in place, wherein every single change made to the system is reviewed by someone who understands the system as well as the implications of the change.
Now that the premise is set, it is time to come to my main argument. Earth is a very large biological system. Our knowledge of it is mostly basic: we understand how each part works separately, and have only just begun to understand how all the parts work together. We are just taking baby steps with genetic engineering, and I think it is reasonable to say that we are new devs in that area. We definitely don't understand the entire system (having only recently sequenced the human genome). But despite this lack of understanding, we are tweaking the system in little ways here and there. And from my knowledge of systems, there is only one result for this kind of tweaking without full understanding - bugs. And sometimes worse - build breaks.
What I am saying is that our knowledge of genetic engineering is still too nascent for us to apply it to anything that really matters. Before a software company releases a product, it subjects the product to a huge slew of tests. And that is not just random testing; there are analyzers that specifically look at every line of code going into production to see if it can cause problems once deployed. If one were to employ the same strategy for genetic engineering, there would have to be tests that checked the long-term effects of each modification. Considering the size of the biological system, and because effects take much longer to manifest, this would mean a very long period of testing.
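In software, part of that "effects of the modification" check is automated by regression tests: the suite pins down the current behavior so that a later change cannot silently alter it. A minimal sketch, with an invented toy function standing in for production code:

```python
# Hypothetical regression test: record the current behavior so any
# change that alters it fails before deployment.

def interest(balance, rate):
    """Toy 'production' function under test (invented for illustration)."""
    return round(balance * (1 + rate), 2)

def test_interest_unchanged():
    # Expected values recorded from the current release; a modification
    # that changes them breaks the build instead of shipping.
    assert interest(100.0, 0.05) == 105.0
    assert interest(0.0, 0.05) == 0.0

test_interest_unchanged()
```

The catch in the biological analogy is the feedback delay: a test suite fails in minutes, while a bad genetic modification might take decades to "fail".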
However, we are already mass-producing genetically modified crops and selling them to the masses, without looking at the implications they can have on the entire system. Each lab that works on these is independently modifying the system, without considering what the effects of its modification might be. There is no central authority who understands the system and can certify that a proposed change will not cause a problem (code review). In all, we are blindly modifying a very large and somewhat delicate system without understanding what we are doing. And my hunch is that this strategy is going to cause some serious problems (bugs) which will surface perhaps ten or twenty or even a hundred years later.
Extending the analogy:
One can extend the analogy further. A large software system usually has small parts that decide the most basic functions of the system. This "heart" is what most of the rest of the system is built on, and changing the heart is risky, because it affects the entire behavior of the system. For example, the kernel forms a kind of "heart" of an operating system. Even small changes in kernel code can cause drastic changes in the OS, with effects ranging from a kernel that cannot boot to one that shuts down mysteriously. All of that happens when one does not understand what one is doing. (At Columbia, there is a course on OS which requires that we make changes to the kernel. It is reputedly the toughest course in the CS dept., most likely because it is hard to understand the effects that changing this piece of code has on the system.)
Here is the argument on genetics: Genes control everything, apparently. We understand that much. Thus genes sort of form the heart of the biological system. So essentially, with genetic engineering, what we are trying to do is modify the kernel code of the biological system without understanding it. And the effects of this are going to be much larger than anything we tried earlier, because earlier, we had not started kernel hacking.
Now... does that give you a lot of confidence about eating genetically modified foodstuffs?