Okay, I’m overincluding in “language”: I’m counting under language what others put only into some biocommunication systems but not into language as Mandler means it, e.g., Korean (http://www.bryanburnham.net/wp-content/uploads/2014/01/Mandler-2004-Thought-before-language.pdf). In Spelke’s work (Which comes first, language or thought? – Harvard Gazette), the babies had (and babies generally have) communication systems; e.g., they communicated boredom, meaning that they expressed it and the investigators understood it as boredom and acted accordingly. One definition of language is “[a] non-verbal method of expression or communication” (Oxford Languages | The Home of Language Data (sense 1.1)); but if linguists now use another word for that sense, that’s fine; we can use another term. Replacing the term need not invalidate the basic point I make above.
But not every biocommunication system is capable enough for choice-makers: not all biocommunication has to accommodate choice, and a system that does not accommodate choice may not accommodate labelling either. If you stub your toe and a nerve notifies your brain, labelling may be absent in that nerve; it may merely signal or not signal. If there’s a word for a biocommunication system that has less capacity than language but is still capacious enough to include some way of signifying ‘this’ rather than ‘that’, i.e., labelling, maybe someone will post it (my brief Googling didn’t find one). Meanwhile, I’ll call it “a biocommunication system that includes labelling”.
A biocommunication system that includes labelling necessarily exists once choice exists and may exist earlier. It may continue to exist even if choice is later lost. But, once choosing is something the fetus does, the fetus must have a biocommunication system that includes some method of labelling. Otherwise, it cannot choose. Thus, the fetus has to possess an adequate (and nonminimal) system of biocommunication and has to use it in order to choose.
The first human (or other animal) to speak did not lack an ability to communicate before speaking; I didn’t say it/she/he did, and I agree with you on that. Nor did the first speaker invent the first labels or organize classes to teach them; I agree with you on that, too. But whoever spoke first had no model of speech specifically. Indeed, considered locally, several speakers may each have been first in their own communities, if each had no knowledge of any others speaking. But speech is not a prerequisite for symbols or labels. I think we agree on that, too.
Personal language exists: the idiolect. It is usually close to a dialect, but it is still unique to an individual, and in the island case above, the idiolects would be unusually far from each other linguistically. We usually don’t explicitly study someone’s idiolect, but that’s because doing so is usually too difficult (thus expensive), and because idiolects within a dialect community, or even within a language-standard community, are likely close enough to each other to ease people’s cooperation anyway. However, the islanders would study each other’s idiolects enough to build a shared language, a pidgin, and only then would personal language start to fade (and nearly disappear over generations) as a creole forms. The book Twice as Less gives examples of survivals from supposedly forgotten languages. Idiolects are rarely challenged in full and so can stay solidly anchored in us.
Before Annie Sullivan showed up, Helen Keller had symbols, just fewer of them, and they were likely less efficient. If you choke on food and need help getting it out of your windpipe, the standard way to let other people know so someone can help you (probably chosen as common even without standardization) is to grab your own throat. That requires context for clarity, but, with that context, grabbing your throat is a symbol (see, e.g., Choking: First aid - Mayo Clinic). Symbols requiring context is probably the norm; they’re still symbols.
Adults, too, report experiencing something for which they have no words, especially when they remember an experience for which words have later come into their vocabularies. I used to have an occurrence that I counted as amusing, unimportant, and infrequent and that I didn’t tell anyone about; only later did I find out that it’s called “heartburn” or “acid reflux” (and may need medical attention). But even without those labels I recognized such an occurrence, remembered earlier ones, noted the similarity, and classified them together, even if my rubric was only some version of “that warmth”. “That warmth” is a label, and preverbal babies would have labelling, too. Adults often talk of not having a word for something and at least imply that they can’t communicate what that something is, but they don’t necessarily lack at least an internal symbol for it. If Hurlburt is right about thoughts without symbols (Thinking Without Words | Psychology Today), then something more abstract than symbols describes what is being experienced by the person in question, and we just need a term for it. Hurlburt wrote, “[a]n unsymbolized thought is specific: you’re wondering what Feature 5 is.” But then the symbol is ‘the whatchamacallit that Feature 5 is’. If you have 5 objects including A, B, C, and D, a 5th object without a letter is implied; that implication by the process of elimination (something dogs can perform) is itself a label for the 5th object, and you can explicitly label it as, e.g., ‘the 5th object’ or ‘the unlettered object’, even if you know no words as such. If by convention we shouldn’t call it a symbol, fine, provided we can call it something, because it exists, it’s important, and it functions rather like a symbol, so we need to call it something. Do you know what we would call it? Note that Hurlburt acknowledges that “many (perhaps most) psychologists . . . 
believe that unsymbolized thinking is impossible.” Thus, either he’s right and many or most psychologists are wrong or he’ll turn out to be wrong and others right. So, maybe, until a convention changes, we should still call it a symbol.
A man asked me what to call his girlfriend (I think that was the relationship). She didn’t like “lady” and he didn’t like “woman” (too matronly). I suggested he use a paragraph and say what he thought. Then she’d know what he thought. If we require one word per purpose, we would never have enough words; often we need strings of words. Claiming we don’t have a word for something is often premised on needing just a single word for a meaning. If you smell a flower with a unique scent, you may still have a way of referring to it, e.g., “the aroma from the red-edged blue flower on the other side of the rocky little hill”. You can tell someone about it and they can go find the flower in an hour or so and experience the same aroma, even if you liked it and they don’t.
Thinking may well come before words, although not before some way to refer to what is being thought about. We typically develop words because we need them, so the need typically comes before a word, but that doesn’t mean it comes before a way of referring to it (e.g., “the mystery I’m trying to understand now” or “the mystery I was struggling with yesterday”).
Culture develops along two continua: in the society (large or small, of humans or of other organisms) and in the individual. People have had culture for millions of years and, at the same time, babies acquire culture (and the fetus likely acquires it, too, at least in later stages). It’s like car drivers entering a controlled-entrance highway and picking up speed in order to join existing traffic. But, while acquisition of culture in a baby or fetus would precede acquisition of an adult language (which is experienced in the womb but likely without morphemic meanings), I’m not sure acquisition of culture precedes an intrafetal communication system, since culture implies choice and choice must have labels. Example: saying to a one-year-old in the kitchen, “No, you may not have the knife.” The child infers the possibility of having the knife, if the child had not already conceived of it; thus the child knows of a choice between having (forbidden) and not having (approved) with respect to the knife. I suppose it might be possible to acquire a little culture without choice, but I’m still trying to come up with an example of that; I haven’t yet, and I don’t think anyone can acquire very much culture without choice. The child may not be able to exercise the choice (a knife could be beyond a child’s reach), but the choice would exist.
Many kinds of animals have culture. Ethologists have established that. Whether the animals know they do seems unknown (Gruber et al. found no evidence of that metaknowledge (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4319388/)) but lack of knowledge of culture as a category of knowledge doesn’t prevent having culture. Likewise, I think fetuses and babies likely can have culture without knowing they do. But babies, at least, may know they know knowledge-unit X because parents told them.
The Edwards article (Psychologists say babies know right from wrong even at six months) talks of the possibility of morality being “hard-wired”. If it is, that part of morality is outside of culture and irrelevant here unless choice is also hard-wired. If contradictory demands were hard-wired into an organism that is forced to choose, an internal contradiction without a means of choice by which to resolve it would seem evolutionarily harmful over time; therefore, such a contradiction, existing before any means of choice, would usually evolve out of existence in a species that survives many generations. Thus, we can assume that a normal specimen of a normal species does not have hard-wired contradictions in it. Contradictions in culture, however, likely appear near the beginning of the acquisition of culture, at least as soon as choice can be exercised on any matter.
On your response to my comment that “language may therefore exist inside the microorganism without a brain; likewise for fetuses, if they make choices, too, or perhaps the fetus doesn’t make choices until it has a brain and thus the brain can be where it stores labels it might need later”, your response being “[a]s a linguist, this makes no sense to me”: I don’t know when the fetus starts making choices (if it does); perhaps that does not begin until it has a brain. However, while we consider a brain as conceptually separate from the rest of an organism (a useful concept), if some microorganisms make choices while not having brains, then they have some means of storing the list of choices and of deciding to choose, perhaps in a part of the microorganism that we don’t call a brain but that serves a purpose close enough to that of a brain in other organisms.
By the way, the process I propose, in which the toddler discards fetal language in favor of adult language, is, if it occurs, relexification, at least as far as vocabulary goes.
Maynard Smith’s view agrees on there being a communication system, but I don’t know how early in an organism’s life he would have said one exists.
That transmission, reception, and acting on what is received co-evolve in biocommunication makes sense. I don’t disagree.
(All URLs were as accessed 10-29-17. I didn’t go through most of the comments in the Hurlburt thread or similarly supplementary public comments on any other pages.)
@gbrooks9: No, I assume the neonate is smarter than the fetus. E.g., if the fetus has 37 labels, the neonate has at least as many. Birth is probably disturbing and distracting to a neonate and so intelligence and knowledge may be less evident until the stress and experiential novelty are past, but that may be moot if we don’t have the means to test intelligence and knowledge both shortly before and shortly after birth across a sample population. Babies don’t talk and so it’s taking a while for scientists to accumulate evidence of babies’ intelligence and knowledge (even if parents sometimes wonder what’s taking the scientists so long to catch up to what parents often already know). My point regarding other uses of a communication system is that the fetus need not have a communication system solely to communicate with someone else, just as adults can have other uses for language.
@Argon and @beaglelady: On whether some microorganisms make choices: I addressed that in my opening post with a link to a thread that cited a source that appears to be authoritative on point. If that was refuted by a scientist, I’m unaware of that; please let us know if it has been. Inanimate objects are not microorganisms.
@beaglelady: Responding in reverse order: Elements are not microorganisms and I don’t argue that a less complex system is a more complex one.