Harnad, S. (unpublished) Uncomplemented Categories, or, What is it Like to be a Bachelor? 1987 Presidential Address: Society for Philosophy and Psychology.  http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad87.uncomp.htm

Uncomplemented Categories, or, What is it Like to be a Bachelor?

Stevan Harnad
Psychology Department
Princeton University
harnad@princeton.edu
http://www.princeton.edu/~harnad


 


Maybe it's just because hermeneutics is so much in vogue these days, but I've lately come to believe that the secret of the meaning of life is revealed by certain jokes from the state of Maine. The pertinent one on this occasion (and some of you will recognize it as one I've invoked before) is the one that goes "How's your wife?" to which the appropriate deadpan downeaster reply is: "Compared to what?"

"Compared to what?" How many seemingly absolute judgment calls are there that turn out to be relative, that turn out to depend on what the alternatives are?

Well, before you misjudge the direction of a paper that is subtitled "What is it Like to be a Bachelor" and that starts out with a How's-your-wife joke, let me quickly turn to a more neutral topic: Mushrooms.

For some reason, the Princeton area, especially around the general vicinity of the Institute for Advanced Study, has a sizeable population of Soviet immigrants. These people have acclimatized well to the culturally impoverished conditions of America. They've reconciled themselves to the fact that, in place of mass poetry readings, garlanded with flowers, they must settle for deafening rock concerts, festooned with cannabis fumes and festive fungi. And in place of the intellectually impassioned weekends with vodka and mushrooms and friends, they've had to settle for just vodka, mushrooms and friends. Even the mushrooms were at risk, because most Americans do not have a passion for mushrooms, and certainly not for picking their own. Yet this is one cultural practice to which the Russian arrivals have held fast at all costs. They will not be constrained to the monotonous generic varieties available in all supermarkets. They see no reason not to continue exploiting the full diversity that nature offers by picking their own.

So, invited to eat with them, I am faced with a grave dilemma. Do I dare trust their mushroom-picking capacities? My trepidation is not just based on the cultural fact that picking your own is not done in these parts. I would probably be less nervous if the pickers were from stateside. Why? It's the "Compared to What" problem again: What if -- an inner voice warns me -- the features that the Russians are using to sort the edible ones from the poisonous ones were reliable for the alternatives back home in the USSR, but the alternatives here are different? What if those faithful local guideposts lead you astray in the USA? I'm going to read you some entries from a mushroom dictionary that will either whet your appetite or relieve you of any inclination to dine with Russians:

So when you point to a given mushroom and ask "is that safe to eat?" in a rather ominous sense the answer is "compared to what?" And I'm not referring here to a continuum of relative edibility -- some mushrooms being better tasting or less poisonous than others. What's at issue is: What are the alternatives with which this mushroom could be confused? Features that are reliable for categorizing the fungi of Soviet Georgia may be decidedly risky in the context of the flora of the Okefenokee swamp. At the very least, I would hope that the features my Soviet friends are relying on have been retested, and if necessary, revised, on the basis of the local confusable alternatives. And I myself wouldn't want to be the source of the feedback for that empirical updating process...

What has to be sampled in order to get a set of features that will reliably sort instances into their proper categories (in the case of mushrooms, the proper categories might be "mushroom" versus "toadstool," or "edible" versus "poisonous")? It seems obvious that for the simplest kind of category, namely a dichotomy, you would have to sample two kinds of instances: Positive instances and negative instances. The positive instances would be the members of the category and the negative ones would be the nonmembers. Let us call the negative instances the "complement" of the category. (In a more complicated plural taxonomy, a positive instance of any one of the categories would count as a negative instance of all the others.)

Suppose we were sorting mushrooms. The positive instances would be edible mushrooms and the negative instances would be inedible mushrooms. Let's call the latter for simplicity "toadstools." Notice that the negative instances could not consist of everything that was not a mushroom. In other words, the complement of a category cannot be the rest of the universe. For if it were, then you could go on testing the "compared to what" question forever. If not only toadstools, but all fauna and flora and inanimate objects and even abstract ideas were potentially confusable negative instances of mushrooms, the search for reliable distinguishing features might never halt. In reality, of course, the local context in which a categorization must be learned fixes the complement: The complement of negative instances is the set of actual confusable alternatives you sample -- the things you're in danger of mistaking for edible mushrooms. In the mushroom case, this complement may be one thing in the suburbs of Tbilisi and another in the suburbs of Atlanta. So it is really true that the answer to the question "Is this an edible mushroom?" is: "Compared to what?"
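To make this concrete, here is a minimal sketch, in Python, of how a feature that is sufficient relative to one sampled complement can fail silently relative to another. The species, features and samples are all invented for illustration; nothing hangs on the details:

```python
# Each instance is a set of detectable features plus its true label.
# All species and features here are hypothetical.
TBILISI_SAMPLE = [
    ({"white_gills", "ring_on_stalk"}, "edible"),
    ({"white_gills"}, "edible"),
    ({"red_cap", "white_warts"}, "toadstool"),
    ({"red_cap"}, "toadstool"),
]

def learn_rule(sample):
    """Find one feature whose presence perfectly separates the
    'toadstool' instances from the 'edible' ones in this sample."""
    positives = [f for f, label in sample if label == "toadstool"]
    negatives = [f for f, label in sample if label == "edible"]
    for feature in sorted(set().union(*positives)):
        if all(feature in f for f in positives) and \
           all(feature not in f for f in negatives):
            return feature   # sufficient -- relative to THIS complement
    return None

rule = learn_rule(TBILISI_SAMPLE)
print("learned feature:", rule)                 # "red_cap"

# A hypothetical American species with every reassuring home feature,
# yet truly poisonous: the old rule wrongly calls it edible.
atlanta_instance = {"white_gills", "ring_on_stalk"}
print("toadstool" if rule in atlanta_instance else "edible")  # "edible"
```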

Now what is it that is happening when a category is being learned? We are sampling positive and negative instances, we are attempting to label them correctly, and we get feedback as to whether or not we are right. In the case of poisonous mushrooms, the feedback is somewhat dramatic, and may threaten to cut the category-learning short. But in many cases we have the advantage of an instructor who provides feedback -- feedback that may be based on someone else's once having learned it all the hard way, but that now allows our learning to be somewhat less risky. And of course most object categories we learn don't involve any risk at all.

But whatever the source of the feedback may be, the burden on us is to find the features that will eventually allow us to identify the positive and negative instances correctly without feedback. How difficult this will be depends entirely on the degree of interconfusability among the alternatives. (It is again a "Compared to What" matter.) In sampling positive and negative instances perhaps it is easy to find many features that will reliably sort future cases without further need for feedback. There may be big, obvious natural gaps between the positive and negative instances. Sexing adult roosters and chickens, for example, is easy. But sexing baby chicks is not. Identifying all the local mammals may be easy; sorting the local fungi may not.

It is the hard cases that I am primarily concerned with in this paper. But what's clear in both kinds of cases is that if we have indeed succeeded in achieving an asymptotic level of performance, one where we are sorting the positive and negative instances with 100% success, then it must be because we have somehow managed to find a set of features that is sufficient to permit us to do so.

Let me add parenthetically that we are here in California, where the concept of necessary-and-sufficient features is, shall we say, "locally inedible." Everyone from Wittgenstein to penguins is cited as evidence that a set of features that provides necessary and sufficient conditions for categorization does not exist in many cases. Now I hold no particular brief for "necessary" features, since in difficult categorization problems, highly interconfusable ones, the winning feature-set may be what philosophers of science call "underdetermined." There may exist more than one set of features that would successfully sort the positive and negative instances -- that would provide a reliable boundary between the category and its complement. But what certainly seems necessary for correct categorization is a set of features that is sufficient to generate it. Otherwise, where does the successful sorting performance come from? The features may be relational ("larger white warts"), conditional ("flesh turns red if it is cut"), negative ("lack of red stains where bruised"), or even disjunctive ("distinctive warts or yellow gills"), but surely such features must exist and be detectable, detected and used.
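Schematically, each of these feature types is just a test applied to the sensory evidence. Here is an illustrative rendering -- with an entirely made-up instance description -- showing that none of them need be a simple monadic property:

```python
# The feature types just listed, rendered as predicates over a made-up
# instance description; all field names are invented for illustration.
instance = {"wart_size": "large", "wart_color": "white",
            "flesh_color_when_cut": "red", "bruise_stain": None,
            "gill_color": "yellow"}

relational  = lambda m: m["wart_size"] == "large" and m["wart_color"] == "white"
conditional = lambda m: m["flesh_color_when_cut"] == "red"   # ...if it is cut
negative    = lambda m: m["bruise_stain"] is None            # lack of red stains
disjunctive = lambda m: m["wart_size"] == "large" or m["gill_color"] == "yellow"

# A conjunction of such tests is a candidate sufficient feature set.
print(all(test(instance) for test in
          (relational, conditional, negative, disjunctive)))  # True
```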

So where there is a felicitous ending to a category learning story, the punchline is a set of features that is sufficient to guide reliable sorting, happily ever after. But what about the "compared to what" problem? In the mushroom/toadstool case, the Russians presumably had such a sufficient set of differentiating features back home. So when a Russian points to an instance and identifies it as an "edible," based on a tried and true set of features, hard won from a lot of prior trial-and-error field experience and instruction on his native turf, can he be said to be absolutely identifying something "out there" that consists of the natural kind that goes by the name "edible mushroom"? In one sense he can, but in another perhaps more important sense, he cannot. If the world were a microcosm, and his sample back in Tbilisi were somehow guaranteed to be a completely representative slice of it, then perhaps he would really have picked out a natural kind, absolutely and once and for all.

But the "compared to what" problem looms large. The instance to which the Russian is referring (pointing) may not be edible at all, and the reason he wrongly identifies it as edible is because the features that never failed him in the context of the confusable alternatives in Soviet Georgia are not reliable in Dixie Georgia. It may be -- to put an uncharacteristic (but I hope perspicuous) twist on a fashionable idiom we owe to Kripke and Putnam, and ultimately to Frege -- "schmedible" rather than "edible." This suggests that the features the Russian uses to reliably pick out edible mushrooms are context-dependent, provisional and approximate rather than exact. They depend on ceteris paribus or all-else-being-equal conditions with respect to the "compared to what" question. They may be sufficient for the Soviet context of alternatives, the Russian complement of that category, but not for the American complement. In America the features may have to be revised, supplemented, perhaps replaced altogether by another set that works for the alternatives that grow here. Features that in Russia safely distinguished a mushroom from a toadstool might here simply distinguish two variants of mushroom, or of toadstool. And differences that were innocent or nonexistent in Russia may become salient and even critical here.

Consider an equivalent sorting problem for a machine. Suppose all the machine had by way of positive and negative instances were smooth, crisp, well-placed two-dimensional projections of trees and animals appearing on its transducer surface; and suppose the machine's mission was to learn to correctly identify trees and animals. In such a simplified, stylized context of alternatives -- with nothing to worry about comparing and confusing except trees and animals -- many features might work: for example, running a line across a critical portion of the image and calling the figure an animal if the number of intersections is greater than or equal to two, and a tree otherwise. Just this lone feature would actually sort positive and negative instances reliably; but if you widened the context of confusable alternatives -- say, by introducing a tree with a bifurcated trunk, or a stork standing on one leg -- the machine would be foiled and it would have to revise and elaborate its feature set.
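Here is a toy rendering of that lone intersection-counting feature (the little images and the choice of scan row are my own inventions, of course), including the bifurcated trunk that defeats it:

```python
# Images are lists of strings; '#' marks the figure, '.' the background.
# The "critical portion" is a single scan row near the bottom.

def runs_in_row(image, row):
    """Count contiguous '#' segments (intersections) in one row."""
    count, inside = 0, False
    for ch in image[row]:
        if ch == "#" and not inside:
            count += 1
        inside = (ch == "#")
    return count

def classify(image, row=-2):
    # One feature: >= 2 intersections -> "animal", else "tree".
    return "animal" if runs_in_row(image, row) >= 2 else "tree"

TREE = [".###.",
        "#####",
        "..#..",
        "..#..",   # single trunk: 1 intersection
        "....."]

ANIMAL = ["#####",
          "#####",
          "#...#",
          "#...#",   # two legs: 2 intersections
          "....."]

FORKED_TREE = [".###.",
               "#####",
               ".#.#.",
               ".#.#.",   # bifurcated trunk: 2 intersections!
               "....."]

for name, img in [("tree", TREE), ("animal", ANIMAL),
                  ("forked tree", FORKED_TREE)]:
    print(name, "->", classify(img))
# The forked tree comes out "animal": widening the context of
# confusable alternatives defeats the lone feature.
```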

To a first approximation, we are such sorting machines. We sample positive and negative instances of categories with feedback, we somehow find the distinguishing features, and then we identify new instances correctly -- unless the context of alternatives is widened and our provisional features turn out to be inadequate, in which case we must find new distinguishing features. The old ones will still have been "approximately" right, relative to the old context. When the answer to the "compared to what" question is the old context, they will sort reliably. But if we change the "compared to what" factor, the features may change too. In the new, wider context, the approximation will have been tightened. But at no time can such devices be said to have picked out the "exact" features of the category in question. Sorting and labeling, insofar as they are not just finite tasks on which you can close the book after a successful chapter, are not exact activities: not as long as there is an uncertain outside world and an uncertain future that may defeat our provisional feature-sets. Hence any categorization, any identification, is itself always provisional, context-dependent and approximate. Hitherto faithful feature-sets may fail us, and may have to be replaced by new, more reliable and general ones. "What's that?" "Compared to what?"

Having, I hope, made a case for the provisional, context-dependent, compared-to-what nature of categorization, I must now take a few moments to describe a model for categorization -- a sketch of the kinds of internal structures and processes a device would have to have in order to be able to learn to sort and label correctly by sampling positive and negative instances. Note that this model will resemble in some respects an approach to the internal representation of meaning that is regarded by some as having been discredited: The approach of the 17th century British Empiricists and their verificationist successors into the present day. This model, however, does not claim to be a theory of meaning. It is only a theory of sorting and labeling, a theory of categorization. On the other hand, if the theory should happen to succeed in explaining how a device can learn to sort and label objects and states of affairs, and even how it can then go on to describe objects and states of affairs with strings of these labels, and perhaps even how it can respond appropriately to strings of labels by manipulating objects as well as by generating further strings of labels -- then perhaps such a device would be Turing-indistinguishable from a device that really does have "meanings."

But let's forget about that. This is just a model for how to get a device to learn to sort, to label and to generate label-strings. Three human behavioral capacities are relevant here, and must somehow be captured by the model. One is discrimination. This is the capacity, given a pair of objects, to say whether they are the same or different, and if not the same, to say how similar they are. A good example of this kind of capacity is performance on Shepard's mental rotation task. The subject is shown a pair of pictures. Both pictures are of two-dimensional projections of a complex, unfamiliar three-dimensional object. Both pictures may be of the same object or they may be of different objects. If they are both of the same object, however, the object will not be in the same orientation. The second picture will be of the first object rotated to various degrees. The subject's task is to say whether the objects are the same or different. It has been found that the time it takes for the subject to perceive that the second picture is of the same object is proportional to the degree to which the object has been rotated. Shepard and his colleagues inferred that this happens because an analog 3-D image of the object in the first picture is being mentally rotated to see whether it matches the second.
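Here is a cartoon of the analog account -- not Shepard's actual procedure, just an illustrative sketch -- in which an internal copy of the first shape is rotated in small steps until it matches the second, so that the number of steps, the model's stand-in for response time, grows in proportion to the angular disparity:

```python
import math

def rotate(points, deg):
    """Rotate a 2-D point set by deg degrees about the origin."""
    r = math.radians(deg)
    return [(round(x * math.cos(r) - y * math.sin(r), 6),
             round(x * math.sin(r) + y * math.cos(r), 6)) for x, y in points]

def steps_to_match(shape_a, shape_b, step=5):
    # Exact comparison works here only because both sides are computed
    # the same way; a real model would need a tolerance.
    for n in range(360 // step):
        if rotate(shape_a, n * step) == shape_b:
            return n            # analog "rotation time"
    return None                 # no orientation matches: different objects

SHAPE = [(1.0, 0.0), (2.0, 0.0), (2.0, 1.0)]   # an arbitrary asymmetric form
for angle in (30, 60, 120):
    target = rotate(SHAPE, angle)
    print(angle, "->", steps_to_match(SHAPE, target), "steps")
# 30 -> 6, 60 -> 12, 120 -> 24: matching time proportional to rotation.
```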

Simpler examples of discrimination would be to be shown two highly similar stimuli and then to be asked which of them is the same as or more similar to a third stimulus. Discrimination tasks draw on sensory acuity and on short-term memory, as one stimulus may have to be briefly remembered to compare it with another. The "compared to what" nature of discrimination is obvious here. The task always involves pairs of stimuli -- simultaneous or successive -- and relative judgments about them.

So relative discrimination is the first of the three behavioral capacities we'll be concerned with here. The second is usually contrasted with the first and called "absolute" discrimination, because instead of pairs of objects, one object is presented alone and the task is to identify it. It should be clear from what I discussed earlier, however, that this is still a relative, compared-to-what problem. Apart from trivial tasks where there is only a small fixed number of objects, with large, obvious differences between them, this too is a "relative" judgment task, but this time it calls for sorting and labeling relative to an absent set of confusable alternatives that have presumably been sampled previously. Let's call this task "identification" or "categorization." Mushroom picking is an example. So is any other object-naming that is based on learning directly from sensory instances: color naming, pitch identification, chicken sexing and any other sensory taxonomy.

Two especially hard kinds of cases of identification that psychophysicists investigate are one-dimensional sensory continua and complex, unfamiliar multidimensional sets of sensory stimuli. For the continuum, imagine a series of shades of gray. They are subdivided into regions that you have to learn to label correctly. George Miller has shown that with most continua 7 +/- 2 subdivisions are as many as you can manage reliably before the accuracy of your performance really drops. With multidimensional stimuli no magical number predicts how many categories you will be able to sort them into accurately when they are presented in isolation. It depends on how interconfusable they are; in other words, it depends on "compared to what."

The third and last behavioral capacity I want to single out is perhaps more controversial, because many will not concede that it is a "behavioral" capacity at all. I am referring to giving a "correct" description of an object or event by generating a string of labels (governed, usually, by the syntax of a natural language). On the surface, the task looks similar to labeling, but it's much more complex because the label strings have systematic constituent structure -- they're not just arbitrary names. They're decomposable into elementary constituents that have rule-governed, systematic interrelationships. Unfortunately, a discussion of these complexities would constitute another paper, one addressing what I've dubbed the "symbol grounding problem" -- which is a difficulty of a rival approach to modeling categorization and cognition, an approach that also happens to be the currently prevalent one in cognitive science: the top-down symbolic modeling done in most of the field of Artificial Intelligence (AI). I cannot discuss the symbol grounding problem here; I'll only say in passing that what I'm describing here is an alternative candidate: a grounded, bottom-up model that is not purely symbolic but hybrid symbolic/nonsymbolic.

The model was suggested in part by a provocative phenomenon in psychophysics called "categorical perception" that I again regrettably do not have time to describe here but that is discussed fully in a book called "Categorical Perception: The Groundwork of Cognition" that has just appeared. The phenomenon of categorical perception should already be familiar to you from color perception: hues vary along a smooth physical continuum of wavelength, from the long wavelengths bordering on the infrared to the short wavelengths bordering on the ultraviolet. What we see, however, are several rather abrupt changes as we move from red to orange, yellow, green, blue, etc., corresponding to the color categories that we name. Instead of gradual quantitative differences, there are relatively sudden qualitative changes. The same is true with certain speech sounds, such as ba, da and ga, which happen to vary along a single acoustic continuum called the "second formant transition." A similar effect can occur in pitch perception when musicians learn the semitone categories "C," "C#," "D," etc. What all these cases have in common is that a physical continuum has somehow been segmented, quantized, discretized. Category boundaries have been set up internally where there were none present in the physical input signal. As a consequence, equal-sized physical differences are not perceived as being of equal size. They are perceived as being larger if they are between categories and smaller if they are within a category.

This effect can be seen as an interaction between two of the three behavioral tasks I described earlier: discrimination and identification. If you were to plot the "discrimination function" along a one-dimensional sensory continuum, then "discriminability" -- the ease with which we can tell apart pairs of stimuli separated by equal-sized differences (actually, by equal log differences, but never mind) -- should be equal all along that continuum. But what happens instead with categorical perception is that there are peaks and troughs. The discriminability is amplified in some regions and compressed in others. Where the discriminability is lowest -- where pairs of stimuli are hardest to tell apart -- is in the middle of a labeled category, whereas where the discriminability is greatest -- where pairs of stimuli are easiest to tell apart -- is across the boundary between two labeled categories. The identification function -- the one that governs which regions are called what -- seems to be influencing the discrimination function -- the one that governs which stimuli you can tell apart. How similar things look is being influenced by whether or not they are in the same category.
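A toy model -- not a fit to any data -- makes the pattern explicit: if the internal code pulls each stimulus partway toward its category's prototype, equal physical differences shrink within a category and stretch across the boundary:

```python
# Stimuli live on a 0..1 continuum with a learned boundary at 0.5.
# All the constants here are arbitrary choices for illustration.
BOUNDARY, PROTOTYPES, PULL = 0.5, (0.25, 0.75), 0.5

def label(x):
    return "A" if x < BOUNDARY else "B"

def internal(x):
    """Internal code: compress each stimulus toward its prototype."""
    proto = PROTOTYPES[0] if x < BOUNDARY else PROTOTYPES[1]
    return x + PULL * (proto - x)

def discriminability(x, dx=0.1):
    """Internal distance between two stimuli dx apart physically."""
    return abs(internal(x + dx) - internal(x))

for x in [0.10, 0.30, 0.45, 0.60, 0.80]:
    pair = (label(x), label(x + 0.1))
    print(f"{x:.2f}-{x + 0.1:.2f} {pair}: d = {discriminability(x):.3f}")
# Only the 0.45-0.55 pair straddles the boundary ("A","B"), and it
# comes out far more discriminable (0.300) than the equal-sized
# within-category pairs (0.050): a peak at the boundary, troughs within.
```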

It is not yet clear how general an effect categorical perception is, and especially whether category boundary effects can arise purely as a result of learning. Many category boundaries are innate, but there are indications that learning may be able to generate boundaries too. In either case, categorical perception suggests a way in which higher-order categories can be grounded in lower-order ones, all the way down to elementary psychophysical ones.

The model is the following: Whenever a categorizing device receives a sensory stimulus on its sensory surfaces, two kinds of representations are formed. One is an iconic representation. Iconic representations are analogs of the proximal stimulus on the sensory surface -- the shadow the object casts on your receptors. The analog representation may transform the sensory projection, but the transformation must be invertible so as to make it possible to recover, if necessary, the physical shape of the sensory projection. The sensory input, for example, may be two-dimensional, whereas the iconic representation may be three-dimensional (not necessarily spatial dimensions, of course), with the two-dimensional sensory input recoverable from it by a projective transformation. The function of iconic representations is to generate the first of the three behavioral capacities I mentioned: discrimination. Same/different judgments, similarity judgments, matching, and discriminations mediated by spatial and other continuous shape-preserving transformations would be accomplished by internal physical comparisons (probably unconscious ones) between sensory projections and sensory icons, or between sensory icons and sensory icons. Much has been written -- with most of which I agree -- about what can't be done by sensory icons; for example, they certainly cannot subserve object naming. But discrimination is something they can do, and that is all they are required to do in this model.

At the same time that an iconic representation is being generated, if there are feedback contingencies that define "correct" and "incorrect" labeling, a categorical representation will begin to be formed, by trial and error, on the basis of instances and overt attempts to label them correctly. The categorical representation is no longer iconic or invertible with the sensory projection. It is selectively noninvertible, preserving only those features of the sensory representation that are sufficient to reliably distinguish the positive and negative instances so that sorting and labeling, first guided by the feedback, can eventually go on independently. The categorical representation is really a feature-filtered icon or micro-icon. It preserves only the features that will generate reliable sorting given the confusable alternatives that have been sampled. And it is associated with a name, the unique, arbitrary label that identifies that category and its complement in that context. That name is extremely important, and I will return to it in a moment.
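The contrast between the two kinds of representation can be put schematically as follows; the particular transform, features and labels are invented for illustration:

```python
projection = [2, 9, 4, 7]              # stand-in for a sensory pattern

# Iconic: any invertible recoding will do; here, a simple affine one.
def to_icon(p):      return [2 * v + 1 for v in p]
def from_icon(icon): return [(v - 1) // 2 for v in icon]

icon = to_icon(projection)
assert from_icon(icon) == projection   # invertible: projection recoverable

# Categorical: keep only features sufficient to sort; discard the rest.
def to_categorical(p):
    return {"peak_above_8": max(p) > 8,            # hypothetical features
            "mean_above_5": sum(p) / len(p) > 5}

features = to_categorical(projection)  # the projection is NOT recoverable
name = "blip" if features["peak_above_8"] else "blop"  # arbitrary label
print(icon, features, name)
```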

How is the distinguishing feature-set found? In some elementary cases it may be innate, and in some cases sensory discontinuities will be great enough to make finding it trivial. But in the interesting, underdetermined, highly interconfusable cases, an inductive mechanism for feature extraction will obviously be needed. I do not have a candidate for that mechanism. All I conjecture is that it will be probabilistic; perhaps the current connectionist networks or something like them could turn out to have the requisite feature-finding capability.
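For concreteness, here is the smallest member of that family -- a perceptron trained by corrective feedback on invented labeled instances. Nothing hangs on this particular mechanism being the right one; it merely illustrates feature-finding by trial and error with feedback:

```python
# Each instance: (feature vector, label), 1 = positive, 0 = negative.
# The data are contrived so that feature 0 alone suffices.
SAMPLE = [([1, 0, 1], 1), ([1, 1, 1], 1), ([0, 1, 0], 0), ([0, 0, 1], 0)]

weights, bias, rate = [0.0, 0.0, 0.0], 0.0, 0.1

def predict(x):
    s = bias + sum(w * v for w, v in zip(weights, x))
    return 1 if s > 0 else 0

for epoch in range(50):                 # trial and error...
    for x, target in SAMPLE:
        error = target - predict(x)     # ...with corrective feedback
        for i in range(len(weights)):
            weights[i] += rate * error * x[i]
        bias += rate * error

print(weights, bias)                    # weight concentrates on feature 0:
                                        # a feature-filtered representation
print([predict(x) for x, _ in SAMPLE])  # sorts the sample correctly
```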

So we have iconic representations that subserve discrimination performance and categorical representations that subserve sorting and labeling. But the story does not end here. We do not just spend our lives discriminating, sorting and labeling sensory categories. Where does a sensory category leave off and an "abstract" category begin? It seems evident that categorization itself is already an act of abstraction. If I partition a sensory continuum such as the color spectrum and segment it into labeled categories, I've already done some abstraction: Yellows are those regions of the electromagnetic spectrum that are below a given wavelength threshold and above another. Their categorical representation abstracts that feature and that feature alone. Quadrilaterals of all sizes and shapes likewise differ from triangles of all sizes and shapes in at least one invariant sensory feature, and that is what is abstracted by the categorical representation. Nor, as I mentioned before, do the features extracted by the category filter have to be monadic: They could be relational, conditional, disjunctive or even constructive. An active test -- for, say, a quantitative physical parameter such as a threshold, or for topological closure, continuity, or some other derived property -- may have to be applied. Although the invariant basis must be present in the sensory projection, a variety of transformations, amplifications and tests may be needed to find and use it. (Gibson seemed to think this would be easy.)

And since categorization already requires abstraction, the next step seems obvious: Higher-order categories are formed from lower-order ones on the basis of the systematic relationships among the categorical representations. One natural relation is suggested by categorization itself. Some higher-order categories will simply contain lower-order categories as their members. Not everything would be hierarchical, however, since the contexts of alternatives on which all categorical representations are based need not stand in systematic relations to one another. Some of the features of an instance may be relevant in one context and others in another: The frequency of an oboe's note is relevant in the context of pitch categorization, its wave-form is relevant in the context of timbre categorization (e.g., compared to a shawm, cornetto, English horn, bassoon).

There is a natural candidate for capturing and encoding the systematic relationships among categorical representations: It is the label that is connected to each one. Grounded in their associated categorical representations, the features for which they are selective, and the object categories these pick out, the labels are now eligible to enter into bottom-up combinations of their own that would inherit this grounding. Consider that the disadvantage of a device that can do nothing but discriminate and identify is that all of its categories depend on direct sensory experience (except of course the innate ones). All its learning must be from direct sensory acquaintance. But we clearly learn another way too: from symbolic description. I propose that the labels of elementary sensory categories provide the atomic terms for a third kind of representation -- symbolic representation -- out of which descriptive strings can be formed that may yield the power of natural language (and perhaps the "language of thought").

I will only consider descriptions that state category relations -- "An A is a B" -- but I believe that this captures an enormous amount of cognition. I have no time to elaborate here. I will merely illustrate with a suggestive example. More details are available in the book I mentioned.

Suppose you had two grounded sensory categories already. To illustrate I will use two categories that are very unlikely in reality to have been elementary sensory categories. They are probably themselves higher-order categories. But the trick I will describe is recursive, and all you need is that the categories you start with are already grounded. It does not matter whether their grounding was by direct sensory acquaintance or by recursive application of this strategy to prior grounded categories that ultimately derive their grounding from elementary sensory categories. Suppose you have a grounded "horse" category -- that is, you have iconic representations that allow you to discriminate horses and categorical representations that allow you to identify them correctly. Suppose you also have a grounded "stripes" category, likewise with the requisite iconic and categorical groundwork. Now look what you get for free: A "zebra" (new, arbitrary label) is a horse with stripes. (In category inclusion language: Things with the features of horses and with the features of stripes are members of the category "zebra.")
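The trick can be sketched in a few lines; the feature names and detectors below are mere stand-ins for the real iconic and categorical machinery:

```python
# Directly grounded categories: name -> a detector (here, just a test
# on a set of sensory features; everything concrete is invented).
grounded = {
    "horse":   lambda feats: "horse_shape" in feats,
    "stripes": lambda feats: "striped_pattern" in feats,
    "horn":    lambda feats: "single_horn" in feats,
}

def define(name, *parts):
    """A <name> is a <parts[0]> with <parts[1]>... (category inclusion).
    The new label inherits its grounding from already-grounded parts."""
    detectors = [grounded[p] for p in parts]
    grounded[name] = lambda feats: all(d(feats) for d in detectors)

define("zebra", "horse", "stripes")   # grounded for free, no new sampling
define("unicorn", "horse", "horn")    # grounded without any instances at all

candidate = {"horse_shape", "striped_pattern"}
print(grounded["zebra"](candidate))   # True: identified on first acquaintance
print(grounded["unicorn"](candidate)) # False
```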

Armed with only that, not only can you correctly identify a zebra on first acquaintance, without ever having to go through sensory learning, but "zebra" can now enter into further descriptions of the same kind, inheriting its sensory grounding from the iconic and categorical representations of horses and stripes, or whatever they inherited theirs from. Notice that you can ground a unicorn with the same strategy -- even a "peekaboo unicorn," which is defined as a unicorn that disappears without a trace whenever anyone or anything looks at it, with eye or instrument. Since unicorns are not likely ever to be observed, and since peekaboo unicorns are unobservable in principle, this grounding strategy does not even sound like a verificationist one. But of course this is not a theory of meaning but a theory of sorting, labeling and label-stringing, so whether or not it's really verificationist is beside the point.

I'm almost home now. What I want you to consider is what would follow if something along these lines actually turned out to be the right theory of categorization. What if we are devices that sort, label and describe objects and states of affairs by sampling positive and negative instances, learning the features that will reliably sort future instances, and stringing together labels to describe the features and category-inclusion relations of instances we have not yet sampled? Now suppose such a device encounters a categorization problem such as the following:

I'm going to instantiate a "laylek" for you: Look, that's a laylek, and that, and that [pointing to people]. You're probably beginning to form some hypotheses. But now that's a laylek too, and that, and that, and that [pointing to objects]. You've probably got some revised hypotheses, but bear with me: All objects are layleks; so are all events, all states of affairs, all ideas, all experiences. In fact, any instance that comes to mind is an instance of a laylek. Are you getting a clear idea of what a laylek is?

We'll save your hypotheses for the question period. What I want to suggest is that those of you who are prepared to admit to having a certain difficulty with the laylek category may be having it for the following reason: Layleks only have positive instances, whereas a categorical representation requires negative instances. The reason is that a categorical representation depends on an implicit answer to the "compared to what" question. A categorical representation requires a complement, a sample of the relevant confusable alternatives. "Laylek" is apparently an uncomplemented category. As such, can it be represented at all? And if so, just what is being represented, and how?

Let me take a less mysterious example: What is it like to be a bachelor? Note that this category concerns what it is like to be a bachelor, not what a bachelor is. There are positive and negative instances of bachelors all over the place. They spread like mushrooms. But the category in question here is "what it is like to be a bachelor." Now I can state with complete veracity that until the present day all I have ever sampled is positive instances. I have never known what it was like to be anything other than a bachelor. Do I therefore not have a category representing what it's like to be a bachelor? Do I not know what I'm talking about when I say I know what it's like and I think I'm picking it out and making sense when I talk about it? Will I have to wait till I'm married some day to finally find out what it was like to be a bachelor?

This is an opportune time to remind you again that we are not discussing a theory of meaning here, but a theory of categorization. So what I'm really asking is: Can I have a categorical or a symbolic representation of "what it's like to be a bachelor" in the same sense that I can have a representation of the zebra or unicorn I just discussed, namely, a representation that encodes the features that will reliably sort the positive instances from the negative ones should they ever be encountered, and that will allow me to use this category in further grounded symbolic descriptions?

Well, of course I do have a categorical representation of WIILTBB ("wiltby"). It's based on rather eclectic sources: approximations to the marital experience that I have nevertheless sampled at first hand, extrapolations from first-hand experiences, analogies, first-hand testimony from others about what marriage is like, etc. So I'm closer to being in the "zebra" situation than in the "laylek" situation with respect to the "wiltby" category. Of course, my provisional feature-set may be inadequate or nonrepresentative. I may really have the "compared to what" factor figured out all wrong, and when I marry I may really get a shock about what things were really like before... But that, I think, is fair game for ordinary category representation, which, after all, is provisional, approximate and context-dependent, susceptible to updating or even radical revision. Like the Russian mycophiles, I may simply not happen to have sampled or been told of the relevant and representative alternatives.

But are there worse cases? Are there cases that are more like "laylek," in which the category representation -- if any -- is defective in a deeper sense -- where the "category" is not just uncomplemented in practice, but uncomplementable in principle? I believe there are, and I will close by describing three such cases to you, and inviting you to see whether you can discover any more. What is interesting about the three that I have come up with is that they are also intimately related to certain enduring problems of philosophy. So the other question I wish to raise is: If we are ourselves categorizing devices of the kind I've described, could some of these philosophical problems be related to the fact that we try to identify and describe coherently using some categories that are inherently defective -- not just uncomplemented, but uncomplementable -- using strategies for complementing them that are doomed to failure?

Before describing the three cases of uncomplementable categories I want to digress very briefly to mention a celebrated problem that also looks to be one of complementation. It's what Chomsky has dubbed the problem of the "poverty of the stimulus." According to Chomsky, the utterances a child hears and produces constitute such an impoverished sample that it is not possible to learn from them the features -- in this case, the rules of syntax -- that the child very soon demonstrably has and uses. What the sample specifically lacks is negative data: It is all, or almost all, positive instances. Neither the child nor the speakers around him make the kinds of mistakes that would have to be made -- and then corrected by feedback -- if the rules of grammar were being learned from the instances by trial and error. The conclusion, then, is that the child must already have the rules built in innately.

But since the very same poverty-of-the-stimulus argument applies to trial-and-error learning of these rules by evolution, we are left with a kind of "Big Bang" theory of the origin of grammar: It's built into the structure of the universe. Some, taking their cue from this, subscribe to the "Big Bang" theory of the origin of all of our categories.

An abiding preference for parsimony has impelled me to look for ways in which syntactic pattern learning might turn out not to be uncomplemented after all. One way, of course, would be if the Chomskyan rules turned out to be the wrong ones, or unparsimonious ones (perhaps because syntax is not autonomous or "modular"), and the right ones turned out to be learnable after all. But here I must accept Chomsky's "psychological reality" argument that there's no disputing the only theory in town: If you think another one's right, come up with it first.

The other possibility, though, is that Chomsky's rules are right, but he's somehow underestimated the instances -- or overestimated their underdetermination, as the case may be. Note that syntactic strings are both productive and receptive categories. You can both receive and send instances. Does that exhaust the possibilities? What about the ones you meant to send, or would have sent? Consider that every utterance you hear in its syntactically correct form could represent corrective feedback for the wrong utterance -- a negative instance -- that you would have produced, had you been trying to say the same thing. I don't think anyone is in a position to count the number of such stillborn inclinations there might be -- inclinations that might be actively corrected by all the overt positive instances everyone else around the child is generating instead.

Well, it's just a kind of Vygotskyan thought. And, in any case, I don't think Chomsky's uncomplemented categories are examples of uncomplementability in principle, just in practice, in this ostensibly nativistic universe. So let's return to the problem of categories that are uncomplementable in all possible worlds:

One such category (or series of categories) is the following: What is it like to be awake? What is it like to be alive? What is it like to be aware? What is it like to be conscious? What is it like to be? You can think about those as an exercise at home. Let me just point out a few false starts. We usually try to complement "what it's like to be awake" with experiences such as drowsiness and dreaming. These won't do, because we are "awake" in all these cases. (That's one of the reasons dreaming is called "paradoxical sleep" -- because we are not unconscious while we're dreaming; we simply forget and lose continuity if we are not awakened during or soon after the dream.) Nor is it clear that there is any point to which to extrapolate along the continuum from alertness to drowsiness to the hypnagogic state to... what? At some point you just disappear, and there just isn't anything it's like!

The reason, of course, is that all of these uncomplementable categories are experiential categories. And since their complement is nonexperiential, it's either nonexistent or self-contradictory -- "What it's like to experience not experiencing" or something like that. Why do we nevertheless feel that we know what we're talking about when we discuss such uncomplemented categories, rather than feeling that we're talking incoherently about "layleks"? The reason, I suggest, is that we're resorting to the same sort of strategy I used with "wiltby" -- supplying the negative instances by extrapolation and analogy. The only difference is that in the case of "wiltby" the strategy could work in principle, and could even be tested in practice. Whereas with these uncomplementable experiential categories such strategies cannot work in principle: Whatever features we may use, they cannot serve to sort the positive instances from the negative ones, because there are no negative instances; there's no answer to the question "compared to what?"

I promised two other examples. Apart from uncomplementable experiential categories there are uncomplementable existential categories: "Things that exist" (not to be confused with "things that are material, concrete, or observable") constitute an uncomplementable category, like "layleks." And third and finally, some of the self-denying puzzles, such as the statement "This statement is false," seem to have a bit of the same flavor, but I'm not so sure about them.

The real puzzle, though, is: If such uncomplementable categories cannot hope to be sorted, labeled or described any better than layleks, why do we persist in treating them as if they were? Perhaps it's because of that age-old lure of hermeneutics...