|log (2002/07/26 to 2002/08/01)|
Thursday, August 1, 2002
I renounce the piercing of the eyeballs with a three-edged blade, the flaying of the feet and being forced to walk the hot sands upon the sterile shores of the Red Sea until the flaming sun shall strike me with a livid plague. I renounce the madness, the hoodwink, the mock hanging, the mock beheading, the mock drinking of the blood of the victim, the mock dog urinating on the initiate, and the offering of urine as a commemoration.
For some reason I'm having trouble with the idea that the FBI and the Justice Department and Interpol and everybody have been doggedly investigating possible rigging of an Olympic event. I'm not sure exactly why I have trouble with the idea; I mean it's probably fraud in some sense or something. But it seems awfully different from murder or robbery or even false advertising.
I mean, frankly, if the government were simply to blow out its lips and go "heh, so people fix sporting events; film at eleven; eh?", it wouldn't bother me a bit. Is that wrong?
(In fact depending on the time of day, pages on this very log are the top Google hits on "tanya harding nude". And of course we're still the World Experts on "nude david duchovnu". (How hard would it be to get up there for, say, "angela jolie nude"? CEOLN: your source for Nude Celebrity Misspellings.))
Finished Einstein's Dreams, picked up a copy at a favorite bookstore on a whim the other week. It was very good; surreal but concrete, about time and abstract physics but about people and birds and water. Also short and broken into even shorter pieces, for ease of reading and contemplation even in fragments of leisure.
On today's bit of the tape, Searle has revealed what it is that an object has to do in order to cause a mind (since it's not just a matter of running the right computation): in order to cause a mind, an object has to "duplicate the causal powers of the brain." So there you are!
I'm hoping he'll eventually tell us what that means, beyond "in order to cause a mind, an object has to be able to cause a mind".
Along the way, he's also said that if his brain were to be gradually replaced by silicon chips (with, presumably, the same I/O behavior), three things might happen: he might continue to behave the same way and still be conscious, he might continue to behave the same way but no longer be conscious (no longer have mental states), or he might still be conscious but no longer have behavior. (He could also end up with neither behavior nor consciousness, but he didn't mention that.)
Of course, granting that something might look and behave exactly like John Searle but have no consciousness inside propels us solidly back into the center of the Mind-Body problem; the Problem of Other Minds in particular. If that Searle-behaving object might, for all we can tell, not have consciousness, what about all the other person-behaving systems in the world? How do we know (how do I know) that they have consciousness? How does Searle know, what evidence does he have, that those brains also "cause minds"? And if those brains might not be causing minds, how can he even know that his own brain is causing a mind? Might not his mind be caused by something else entirely? What is his evidence for "brains cause minds"?
Which is to say, I haven't noticed any actual progress on the Mind-Body problem in anything he's said so far...
I have this little piece of philosophical SF that I want to write for the POC site, about what it would be like to discover (as I think we might for all we know discover, although I perhaps irrationally doubt that we will) that there really is some special physics involved in consciousness (or at least in intelligent behavior). This sort of thing may be what Searle is driving toward, although I sort of doubt it.
OpenSSH Package Trojaned (also on SlashDot). Amusingly enough, the Trojan horse was in the build process, so only people who built from source (without reading the source and the make process first) were vulnerable. And happily, it was discovered and fixed pretty quickly after it happened, by some random guy.
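(The kind of check that catches a swapped tarball is simple enough, for the curious. Here's a rough Python sketch; the filename and the idea of a separately published digest are illustrative, not the details of the actual advisory:)

```python
# Sketch of an integrity check that catches a tampered download: hash the
# file you fetched and compare against a digest published separately
# (e.g., on the project's own site). The filename below is illustrative.
import hashlib

def sha256_of(path):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Before ./configure && make, something like:
# if sha256_of("openssh-3.4p1.tar.gz") != published_digest:
#     print("tarball doesn't match the published digest -- don't build it!")
```

Of course this only helps if the published digest lives somewhere the attacker didn't also compromise, which is why signed releases are better still.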
If a man in estrus gender-switches and tumesces with a soulflower, try not to bring it up in front of your friends. It's bad enough as it is.
News flash: Searle doesn't actually intend the Chinese Room argument to say anything about whether or not actual digital computers can think (have mental features, are conscious, etc., etc.). All he means it to show is that if a computer can think, it's not solely because of the fact that it implements a particular program.
As far as the Chinese Room is concerned, it could be the case that every single digital computer on the planet is actually thinking; they'd just have to be doing it by virtue of something besides (in addition to) the fact that they implement the programs that they do. Searle doesn't really stress this very often; uncharitably I suspect it's because if he did his thesis would sound much less interesting and controversial, and people wouldn't pay so much attention to him.
(Now in fact Searle thinks that it's "obvious" that current computers don't in fact think, but the Chinese Room argument isn't intended to demonstrate that; if he has other reasons to believe that they don't think, he hasn't mentioned them yet.)
So what, besides implementing a particular program, does it take to think? What else is it that brains do, and how do we tell whether or not computers are doing it also? I'm hoping Searle will get to that. What he's mostly said so far is that these are "neurobiological" questions, which doesn't make me feel very hopeful. (If no one had ever made a chair out of metal, would "can you make an aluminum chair?" have been a woodworking question? If no one had ever seen iron melt, would "can metals be liquid?" have been a question for experts on ice?)
In the lecture before he started talking about how the Chinese Room argument doesn't in fact aim to say that actual computers can't think, Searle gave his solution to the Mind-Body problem. Here it is:
1) brains cause minds
This doesn't impress me terribly much so far, in part because if you stick "how can it be that" in front of both of those phrases, you get a statement of (part of) the Mind-Body problem. This doesn't bode well for them constituting a solution. But there are still quite a few tapes left, so perhaps he'll develop it into something more plausible eventually.
The apparently deliberate overload rendered the RIAA.org site unavailable for portions of four days and came after the group endorsed legislation to allow copyright holders to disrupt peer-to-peer networks.
Of course mature responsible people do not gleefully chant "neener, neener, neener!" about things like this.
It's summer! Yay!
And some boring technical weblog-related inputs in that same box:
I like the short RSS feed, though usually I don't. Your log is special that way. Would the long feed have the sidebar pictures?
I still haven't gotten up the energy to do the full-entry RSS feed (and I'm gratified to hear that some people enjoy the short one). Various things demotivate me. Such a feed would (strictly speaking) violate the RSS Spec, which says that item description elements shouldn't be more than 500 characters long (and nearly all my log entries are). It would require automagically absolutizing all the relative URLs. It would mean somehow doing something about the input boxes and sidebar pictures (as one reader mentioned), and would generally perpetuate the popular myth that there is something called "content" that can and should be separated from something called "presentation" (see incomplete rant on the subject the other year).
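(The absolutizing chore, at least, isn't deep. Here's a rough Python sketch of the rewrite a full-entry feed would need; the base URL and the quote-only regex are made-up simplifications, not how my actual pages are built:)

```python
# Sketch: rewrite relative href/src attributes against the entry's base
# URL, so links survive being ripped out of their page into a feed.
# Naive: only handles double-quoted attributes, which is fine for a sketch.
import re
from urllib.parse import urljoin

def absolutize(html, base):
    """Rewrite relative href/src attribute values to absolute URLs."""
    def fix(m):
        return f'{m.group(1)}="{urljoin(base, m.group(2))}"'
    return re.sub(r'(href|src)="([^"]+)"', fix, html)
```

Since urljoin leaves already-absolute URLs alone, it's safe to run over everything; it's the input boxes and sidebar pictures that have no such tidy answer.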
Also I don't really understand the desire for it. With the current RSS feed, your RSS client (whatever form that may take) sees that there's a new entry in my log, shows you the pithy comment that I've made about it, and lets you click breathlessly on it to see the entry itself (in brown, with pictures in the lefthand border, just as God Intended). If you want to like automatically download stuff for offline reading, you can just tell your autoclient to download the page that the link element points to.
With a full-entry RSS feed, your RSS client would get a squished-down version of the entire entry, probably without the sidebar images, and without the color and font information that gives the place its character. You save one mouse-click. How is that worth the trouble?
(I'm not denying that it's worth it to some people, because people keep saying that it is, and I'm hardly going to accuse anyone of false consciousness about weblog viewing desires; I just don't currently understand what makes it worth it to them.)
Ooh, that was more space than that deserved! Let's close with some reader input from the talking place:
Still you are here. But Victoria Sinclair not here. Still smelling funny. What do you care of it? This is not of me. Balance.
So there you are. There's a ghost story that I want to tell you, too, but probably not tonight.
Well, this is frustrating. In the first minute of the lecture today, Searle says that it's important to distinguish mental things (like belief, understanding, thought) from subjective consciousness. He doesn't think that computers have either one, but he says that they're different (or at least he says that people have historically distinguished them, and he strongly implies that he agrees with this), and that what he's concerned about in these lectures is the mental (rather than the subjective). On the other hand, for the rest of the lecture he seems to blur the two together, using "consciousness" and "conscious" as synonyms for "thinking" and "thought". I'm not sure if he means to be using a non-subjective sense of "consciousness" here, or if he's really blurring them together.
Note that this is pretty important! If he's talking about the objective realm, then we can see the issue as mostly about word usage. If I (and other people who believe in some computational theory of mind) want to use "understanding" and similar words for any system that displays appropriate behavior, and Searle is willing to use those words only for systems that both display appropriate behavior and contain something very much like a human brain, we can just note that we use the word differently; there's no fact of the matter at issue. I suspect that, if we do start to have non-human-brain systems that show understanding-like behavior, Searle's usage will wither (although if it doesn't wither fast enough it's likely to cause all sorts of nasty civil rights issues).
But if Searle is talking about subjective consciousness, then the case is completely different. I agree that it's really hard to see how instantiating the right program can cause subjective consciousness, but it's equally hard to see how this mess of neurons between my ears could cause subjective consciousness. So it's a deep mystery how an electronic computer could be conscious, but it's also a deep mystery how a person could be, and I'm wildly curious about the basis of Searle's claim to know that people are, but digital computers aren't.
But I said most of that yesterday.
Just as I was pulling into the parking lot, he started to outline his solution to the mind-body problem, which shows how mind relates to body and why people have mental states but computers and Chinese Rooms don't. So far it strikes me as very unpromising, but I'll wait to comment until I've heard the whole thing.
And speaking of government interest in anti-gravity, here's a Weekly Updates page on the NASA website that's worth a look. (For the Biefeld-Brown effect, see the very wonderful American Antigravity site. Do these things actually fly? That'd be kinda cool even if it isn't a reactionless drive. Expert opinion welcome.)
nope, fishing for compliments...
We've definitely seen The Matrix; it wasn't bad (apart from the whole "humans as energy sources" thing).
Today's bumper sticker idea:
DON'T SIN; THE CHURCH HATES COMPETITION.
I thought of that at lunch the other day, but I wasn't going to actually blog it, to avoid offending the one or two readers that haven't already been driven away by my attitude toward religion (hey, I have my own imaginary friend in the sky), but then Medley logged this: Pope Refuses to Visit with Sexual Abuse Victims:
John L. Allen, Vatican correspondent at the National Catholic Reporter, wrote that many Church officials regard the magnitude of sexual abuse by priests in the United States as exaggerated, fueled in part by "the disproportionate Jewish influence in the American media."
I have to feel sorry for sincere Catholics. I mean, sure, even the earthly representatives of the Almighty are fallible humans, but must they be utter chuckleheads?
Days have passed, one after the other, but on no particular day has anything special happened, nor has there been in any span of days a gradual building up of anything. He feels himself scattered across time, not centered in anything, not contained in anything, but fragmentary, existing mostly as things, words, casual meaningless glances, that were long ago forgotten by strangers.
If you are a user of Immortality Device and if you have written testimonial or personal opinion about Immortality Device on your web site, you may join this web-ring.
New kind of Band-Aids: Advanced Healing. Designed to stay on for several days! Also, if our experience is any guide, not designed to come off at all, without lots of pulling and tearing and re-opening of wounds that were, until you tried to get the Band-Aid off, indeed healing quite well. But that's technology for you.
... current literary and critical theory, however, maintains that Mexican take-out menus only derive relative "meaning" in comparison to other, equally subjective, Mexican take-out menus.
Speaking of bumper stickers, I was delighted to find that Nancy Lebovitz's Calligraphic Button Catalogue still exists. I remember being endlessly amused by one of her earlier hardcopy catalogues ("The Little Calligraphic Button Catalogue on the Prairie", I think it was) years and years and years and years ago.
Go not to the surrealists for counsel, for they will say both no and hippopotamus
In my next life, I want to be one of those die-hard fen who go to all the cons and get all the in-jokes in the button catalogue. (I have no idea why this notion appeals to me so much; of all the things to idealize, why science fiction fandom?)
Other bumper stickers can be found elsewhere.
A reader writes:
Good (skeptical) take on the Chinese Room here.
That is in fact a rather good page. It reflects pretty exactly the feelings I had this morning on the way to work, listening to Searle on the tape, giving his lecture about the Chinese Room, and how it shows that the computational model of the mind is wrong.
(The Chinese Room argument, very briefly, says this: if you can write a computer program that appears to understand Chinese when executed, then you can give that program to a human who doesn't understand Chinese, and that person can "bench-run" the program, using huge books in which the database is written, and big ledgers to record values of variables in, and an enormous printout of the program itself; and if you hand that person something written in Chinese, the person will (much much later) hand you back a reply written in Chinese that seems to reflect understanding. Since the person still doesn't understand Chinese, Searle concludes, merely running the right program can't be what understanding consists in.)
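(The bench-running picture is easy to make concrete, if you like. Here's a toy sketch in Python; the rule table is made up, and a real Chinese-speaking program would be astronomically bigger, which is rather the point. The interpreter, human or machine, just follows the rules without understanding a word:)

```python
# A toy "Chinese Room": a purely rule-driven responder. The rules here
# are invented and absurdly small; the program Searle imagines would be
# unimaginably larger, but the interpreter's job is the same.
RULES = {
    "你好": "你好！",              # "hello" -> "hello!"
    "你会说中文吗？": "会一点。",  # "do you speak Chinese?" -> "a little."
}

def bench_run(message):
    """Look up a reply, exactly as the person in the room would."""
    return RULES.get(message, "请再说一遍？")  # "say that again, please?"
```

Whether anything in this picture understands Chinese is, of course, exactly what the argument is about.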
He then outlines the Systems Reply (which I think is the right reply): it may be that the human doesn't understand Chinese, but the system as a whole including the books and ledgers and the program, does.
To this Searle gives his two-pronged response: first, he calls the reply "preposterous", and then he alters the example so that the room disappears, and the person memorizes and internalizes all the books and ledgers and the program itself. Now (he says) the person still doesn't understand Chinese, and there's no other system that you can point to and say "this system as a whole understands Chinese".
IMHO the Systems Reply still works fine here; it's just that the person's body now hosts two different systems, one that understands Chinese and one that doesn't. This seems unlikely, but that's Searle's fault: he's asked us to imagine that someone can memorize a computer program, and all the variables and data necessary to act as a competent speaker of Chinese, and of course that's utterly impossible. But if someone could do it, that person's body really would host two different systems, one that understood Chinese and one that didn't.
But anyway the big frustration that I have, and that the zompist guy linked to above has, is that the Chinese Room argument by itself is purely negative. "Just reacting in well-specified ways to some inputs, and producing some outputs, isn't enough to have understanding", it says.
But what else is there? What else, other than taking in inputs, performing things like computations, and pushing out outputs, is there for chunks of matter to do? If we have understanding, and we don't have it by virtue of that sort of thing, how do we have it?
I'm hoping that Searle's going to address that in the later lectures of this course I'm listening to; it does look like it from the syllabus. (Have to find out when he made these tapes; I wonder how old these sound waves are.)
One tantalizing possibility is that when Searle talks about "understanding" and "thinking" and "intentionality" and stuff like that, he's really talking about the subjective nature of consciousness (something that we've thought a good deal about around here). I would agree with him that it's not at all clear how having certain I/O behavior, or being a process running a particular computer program, could lead to intentionality, and I'd be very eager to hear his suggestions on the matter. On the other hand, I'd also wonder how he can be so positive that the Chinese Room doesn't have subjective experience. After all, if you ask it whether or not it does, it will reply quite convincingly in the affirmative...
For a year he went every day into the woods, back to that same clearing, and wrapped his arms around the graceful trunk of that tree, and put his lips to the bark in that same place.
Fob. Frob. Gob. Hob-nob. Lob. Mob. Knob. Rob. Sob.
I have the best (and the most Iris Chacon obsessed) readers!
While it is true that statutes should be construed so as to avoid a finding of unconstitutionality if possible (Statutes, supra, § 150[c], at 321), courts should not reach for strained constructions or adopt constructions that are patently inconsistent with the legislation's core purpose (see, People v Dietze, 75 NY2d 47, 52-53; cf., People v Mancuso, 255 NY 463, 474).
Which is a really cool legal principle. In this case, it means that if you allow naked male chests in public, you gotta allow the female ones too.
One of the most important purposes to be served by the equal protection clause is to ensure that "public sensibilities" grounded in prejudice and unexamined stereotypes do not become enshrined as part of the official policy of government. Thus, where "public sensibilities" constitute the justification for a gender-based classification, the fundamental question is whether the particular "sensibility" to be protected is, in fact, a reflection of archaic prejudice or a manifestation of a legitimate government objective (cf., People v Whidden, 51 NY2d 457, 461).
Which seems like a Good Thing.
Some recent searches:
"old banana republic"
Also from Fark:
In explaining the religious right's newfound unease about Mr. Ashcroft, Paul Weyrich, the president of the Free Congress Foundation, said, "A lot of the social conservatives appreciate the stands he's taken on child pornography and the Second Amendment and a number of social issues. But there is suddenly a great concern that what was passed in the wake of 9-11 were things that had little to do with catching terrorists but a lot to do with increasing the strength of government to infiltrate and spy on conservative organizations."
Yeah, he was only supposed to be infiltrating and spying on liberal organizations. (This is my favorite phrase of the day: "religious right's newfound unease about Mr. Ashcroft".)
So I've been listening to John Searle's Teaching Company lectures on Philosophy of Mind. Searle is famous for his "Chinese Room" argument, which he claims proves that (roughly) having understanding and meaning and aboutness and stuff isn't simply a matter of running the right program. I find the argument unconvincing, the Systems Reply entirely valid, and Searle's response to it a fine example of argument by bald assertion. His alternative to strong AI, the theory that consciousness is "biological" in the same way that digestion is "biological", seems pretty unpromising to me. So I don't consider myself a big Searle fan.
On the other hand listening to him on the tapes here, I'm pretty impressed. He sounds smart, funny, commonsensical, rational. (He has this sort of gravelly John Wayne accent; "This is a real problem for the materialists, Pilgrim".) So far he's done a pretty good job of surveying the Cartesian mind-body dualism, the problems with it, and some of the monist reactions to it. I'm not really looking forward to his presenting the Chinese Room argument (because I expect I'll get annoyed by it), but I am looking forward to hearing him set out his own theory (because I'm hoping that once he's explained it, it'll make more sense to me).
Ideally I'll come out of this with a rational reconstruction of Searle's theory, and even the Chinese Room argument, that reconciles it with the true stuff in functionalism and strong AI (and gets published in, say, J. Phil., and makes me rich and famous). More likely I'll just get a more accurate picture of what Searle's theory is, and figure out where I start to disagree with it and why.
I like understanding stuff.