BKMTHSTM.RVW   980201

"Mother of Storms", John Barnes, 1994, 0-812-53345-3, U$5.99/C$6.99
%A   John Barnes
%C   175 Fifth Ave., New York, NY 10010
%D   1994
%G   0-812-53345-3
%I   St. Martin's Press/Tor Books/Tom Doherty Assoc.
%O   U$5.99/C$6.99 212-674-5151 fax 800-288-2131 pnh@tor.com
%O   www.tor.com www.stmartins.com
%P   560 p.
%T   "Mother of Storms"

Of storms and meteorology I know very little.  Weather in Vancouver is
frontal, and that makes for a rather boring, though not entirely
predictable, progression.  The description of global warming and storm
formation in the book seems plausible, and not inconsistent with what I
do know.

What I do know, of course, is computers, communications, and viruses.
These play a large role in the book, pretty much as large as the
hurricanes themselves.  Let's start with the viruses.

First off, the entire computing infrastructure of the book appears to be
very similar to Fred Cohen's theorized viral computing environment.  A
far cry from the simplistic code fragments of today, the programs of the
book's time are all, to some extent, viral and replicable.  They also
seem to use a lot of genetic programming, modifying themselves sometimes
in light of a superior implementation, and sometimes at random.

However, perhaps because of a lack of comfort on the part of the author,
the technical population of the book seems to have lost control of the
technology.  The programs are left to propagate, replicate, and develop
as they will.  One character allows as how the programs are now so
complex that nobody can understand them.  To me, this sounds a little
too much like the "computers work by magic" attitude that absolves
people from responsibility for learning and thought.

Secondly, there are the specifically viral programs of the book, the
datarodents.  These are targeted information-gathering applications,
which roam the networks, populating nodes and routers where the chance
of finding the desired information seems high.
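The genetic-programming ecology described above can be sketched in
miniature.  This is purely illustrative (the toy "program" and fitness
function are my own, not the book's), but it shows the two mechanisms
the review mentions: copying a superior implementation, and random
mutation.  It also shows why only measurable properties get selected
for, a point the review returns to later.

```python
import random

# Toy genetic-programming loop (entirely hypothetical): a "program" is
# just a vector of numbers, and fitness is how close its sum comes to a
# target.  Note that selection can only act on properties you can
# measure -- speed is easy to measure; strength of encryption is not.
TARGET = 100

def fitness(program):
    return -abs(sum(program) - TARGET)  # higher is better

def mutate(program):
    copy = list(program)
    copy[random.randrange(len(copy))] += random.choice([-1, 1])
    return copy

def evolve(population, generations=300):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        best = population[0]
        # "sometimes in light of a superior implementation": the best
        # program is kept and copied; "sometimes at random": the rest of
        # the next generation are mutated copies of the survivors.
        survivors = population[: len(population) // 2]
        population = [best] + [mutate(random.choice(survivors))
                               for _ in range(len(population) - 1)]
    return max(population, key=fitness)

random.seed(42)
start = [[random.randint(0, 10) for _ in range(10)] for _ in range(20)]
best = evolve(start)
```

Left to run unattended, such a population drifts toward whatever the
fitness function rewards and nothing else, which is roughly the loss of
control the book depicts.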
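As a toy illustration of how such a datarodent might behave (the
network, the scoring heuristic, and all names here are my own invention,
not the book's), consider an agent that roams a graph of nodes, settles
where the sought datum seems likely, and replicates on a find:

```python
# Toy sketch of a "datarodent"-style agent (hypothetical throughout): it
# roams a network graph, favouring nodes where the sought datum seems
# likely, and on a find replicates -- one copy stays at its post, another
# carries a report back toward the originator.  (Tracing the datum
# further back to its source is omitted for brevity.)

NETWORK = {  # node -> neighbouring nodes
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}
DATA = {"D": "secret"}  # nodes actually holding the sought datum

def score(node):
    """Estimated chance that the datum is at this node (toy heuristic)."""
    return 1.0 if node in DATA else 0.1

def roam(start, origin, max_hops=10):
    """Walk toward promising nodes; on a find, replicate: stay and report."""
    posts = [start]     # replicas left standing guard at nodes
    reports = []        # (datum, path back to the originator)
    node, path = start, [start]
    for _ in range(max_hops):
        if node in DATA:
            posts.append(node)                                   # stay at post
            reports.append((DATA[node], path[::-1] + [origin]))  # report back
            break
        node = max(NETWORK[node], key=score)  # hop to the likeliest neighbour
        path.append(node)
    return posts, reports

posts, reports = roam("A", origin="HQ")
```

A real agent of this kind would face routing, authentication, and
encryption, which is where the book's treatment becomes inconsistent.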
When a datum is found, a datarodent will replicate itself to a) stay at
its post, b) find a path back to the originator to report the datum,
and/or c) trace back the route of the datum to its source, there
possibly to replicate further.  This behaviour was fairly standard
theorizing in the early days of computer virus research.  (Due to the
profligacy of basement variant makers, and their total lack of
imagination, virus research is not as much fun as it once was.)

The activity of the datarodents, however, appears to be inconsistent.
On the one hand, nonentity individuals seem to be able to find almost
any information in almost no time at all.  Every conversation can be
overheard and reported.  The entire world lives in a fishbowl in which
nobody has ever heard of encryption.  (Or, if they have, the author
doesn't really understand it.  On the other hand, network-replicable
programs would make terrific encryption breakers.  On yet the third
hand, self-optimising programs would not be a particularly good bet,
since speed of operation is quite measurable, and would presumably be
selected for, while strength of encryption is very difficult to assess.
You might end up with a very fast, but completely insecure,
implementation.  But enough digression.)  Except that not all
conversations *are* overheard.  Important information goes unfound.  The
datarodents lost for a given project (of which there must be more than a
few) cannot be found and the project reconstructed.

Then we come to the major event: the melding of man and machine.  Now, I
don't have too much of a problem with the "optimization" of the
programs.  As mentioned, this is a logical and reasonable follow-on to
the current work in genetic programming.  I don't really even have a
problem with cross-platform programming.  This exists, in some guises,
in current systems.
There are, for example, the fat binaries for the Macintosh, which hold
code for both the 680x0 and PowerPC processors, and also cross-platform
environments such as Java, or the VBA-compliant macro languages of MS
Word and other applications.  I can even grant, rather dimly, some
connection between neural net computing and our own neural activity.
(The author is, by the way, to be commended for being the first writer I
can recall to understand that the brain changes with experience, and
that this would have an impact on direct neural communications.)

The actual crossover of computing programs to the human brain is,
though, a bit too much like the old joke of the masses of equations on
the left side of the blackboard, the desired result on the right side of
the blackboard, and the cloud in the middle saying "and now a miracle
occurs."  As the punchline goes, I think you need to be a bit more
specific in step two.

copyright Robert M. Slade, 1998   BKMTHSTM.RVW   980201