Monday, June 29, 2009

Swimming crawl and stuttering

Are you a good front crawl swimmer? My crawl was lousy when I was at school. My major difficulty was breathing, especially alternating breathing from one side to the other. I constantly choked, and my movements were terrible. Even when I concentrated hard on relaxing and not choking, the instinctive reaction would still kick in, make me gasp for air, and in turn throw me completely out of control. On the other hand, my breaststroke, backstroke, and diving over longer distances were fine.

Only slowly did I learn to control my breathing. It took me months, but now I no longer have this instinctive choking reaction. I even joined the swimming club and train twice a week. My crawl technique is still far from perfect, but my breathing is perfectly fine now. No more gasps, no more struggle. And most importantly, being in control lets me focus on my technique.

There is absolutely nothing wrong with my brain with respect to breathing while swimming crawl, but I learned to associate breathing in crawl with the sensation of running out of air and drowning. The more I tried to control my breathing, the worse it got. I got rid of this association by de-conditioning my body. It had to learn: No, you are not going to drown if you have no air for 2-3 seconds. The process took so long because I did not do it systematically, it was deeply ingrained, and I had to focus on swimming.

So why am I telling you this? Stuttering might well be similar, with one big difference: there is something wrong with my brain. The moment the brain notices a block in the speech flow, it kicks off instinctive reactions, because it has learned to associate those moments with panic, fear, embarrassment, and so on. And we lose control. Unlike with swimming, those moments are not only learned by association; they can also be due to a low-capacity speech system which delays speech initiation. As I wrote before, we stutter because our brain (or we ourselves) expects to stutter, or because the speech system momentarily cannot cope when demand exceeds capacity.

That's why, unlike with swimming, we cannot easily unlearn: our brain is constantly creating mini-blocks. Think of a recovering alcoholic being given a glass of beer each week, or an overweight person having to eat chocolate every week to test their resistance. Or imagine I had to unlearn the association between breathing in crawl and choking while something randomly made me choke from time to time.

Sunday, June 28, 2009

The faces of stuttering

It is very difficult to try and create more awareness of stammering when most of us are doing our damnedest to hide it.

(Make a statement. Send me your picture at tom dot weidig at gmail dot com! A picture and sentence on how you relate to stuttering!)

Thursday, June 25, 2009

The faces of stuttering

I am a person who stutters, and also a high school career counselor, blogger at Make Room for Stuttering, writer, newsletter publisher, and President of my Toastmaster's club.

(Make a statement. Send me your picture at tom dot weidig at gmail dot com! A picture and sentence on how you relate to stuttering!)

The Millions of Real Heroes

I have had to deal with stuttering for thirty years, but I have tried to achieve as much as possible despite it, and I have spent a lot of time blogging on stuttering and research.

YOU ARE THE REAL HERO. You live with the daily challenges of stuttering, but you do not give up, and you live your life. You are a parent of a child who stutters, and you do your best to help. You are a therapist who has specialised in stuttering and spends time and effort, with passion and beyond your professional call of duty, to help people who stutter. You are a researcher who cares about understanding stuttering better.

Please send me your picture with a short sentence describing yourself and your relationship to stuttering! I will post them. Send it to tom dot weidig at gmail dot com, and you will be one of the millions of real heroes featured on my blog! Send me a smiling picture! I am starting this new blogging idea based on my last post!

Wednesday, June 24, 2009

Who are the real heroes to be honored?

Recently, the American Institute for Stuttering (AIS) honored the actress Emily Blunt, and last year it honored vice-president Joe Biden. Both stuttered as teenagers but are now completely fluent. So why should they be honored?

As role models? Hardly. The simple fact is that Emily and Biden have no clue whatsoever what made them stop stuttering; it just happened to them, as it just did not happen to me and millions of others. In itself, that is not something that deserves to be honored more than the millions of people who live with stuttering every day. The travesty is reflected in Biden's statement that "In my darkest days I would not trade my stuttering for what it's taught me and what it's made me. It's been the single most beneficial thing that's ever happened to me... having overcome it." Yes, but I can say the same thing about any tragedy I came out of unscathed. Take an airplane crash: if you come out of it alive, it does enrich your life and give you a new perspective. But if you are among the dead or handicapped for life, how would you feel about such a statement? They have done nothing, nothing at all, to deserve their fluency more than all the millions who kept on stuttering. How many of us have tried to become fluent? Some of us, the real heroes for me, have managed to keep it under control after very, very hard work. But what have they done? They are the freak accidents of nature which made them fluent. God knows why.
They are not role models for stutterers. Emily is fluent. If you want role models for young women who stutter, go to StutterTalk and listen to Elana Yudman, Kristel Kubert, Caryn Herring, and Samantha Gennuso. Of course, if you want a role model for acting skills, model looks, fashion, cuteness, down-to-earth charm, or support of good causes, then take Emily. (Not that I am saying the four women from StutterTalk could not rival her! :-)

It is commendable that they do not forget the experience and pain of stuttering and that they take time out to support the cause. And I grant Emily more credit here than Biden, for she hardly has any other incentive. But should we honor them for such minimal effort? Invite them as keynote speakers? Fine. Thank them for supporting the cause? Fine. But do not honor them, because you dishonor the others. Many spend a lot of time and resources on stuttering. You should honor StutterTalk's Peter and Eric, Greg Snyder, myself, or others. Have we ever received an award? No! Or give an award to Per Alm, who changed his career to work on stuttering and does diligent work debunking flawed research.

So why are they honored? For two very simple reasons: publicity and money. If Emily or Biden is in the picture, AIS gets more exposure in the media and more attention from others. More people hear about them, and ultimately someone might donate. Would I, as AIS director, do the same? Yes, but it is still a corrupt system, one that does not give honor to those who contribute most.

Tuesday, June 23, 2009

American Institute of Sloppiness?

On the American Institute for Stuttering website, you can read the following on early intervention:
Some of the most important and innovative work now being done in stuttering is early intervention treatment. It is cost effective, in both financial and emotional terms. It is almost unconscionable for a child to be denied it. Recent research now supports our common sense that tells us to intervene early. We now know that:

The sooner a child receives treatment, the shorter the treatment time will be and the greater the likelihood for lasting gains in fluency.

Early intervention treatment with the critical involvement of parents and caretakers can prevent a lifetime of potential shame and debilitation. The goal is to reverse the course of stuttering and resolve it before it becomes chronic and “hard-wired.”
The statement is typical of the sloppy yet confident manner in which early intervention is glorified. It looks very professional and solid at first glance, doesn't it? But dig a bit deeper, and every single sentence falls apart. It is symptomatic of the field of "evidence"-based intervention. Here is my dissection:

Some of the most important and innovative work now being done in stuttering is early intervention treatment.
Who did what? Too vague. "Important and innovative" sounds really great, doesn't it?

It is cost effective, in both financial and emotional terms.
In which way? Where is the calculation? Surely no treatment is cheaper, especially since the 80% who recover anyway are treated at financial and emotional cost? Statements are made without any supporting argument.

It is almost unconscionable for a child to be denied it.
Wow. Can you feel the guilt? Pure propaganda. It puts pressure on parents: anyone who disagrees is guilty of a crime against their child.

Recent research now supports our common sense that tells us to intervene early.
Name the research! In fact, recent research has shown that Lidcombe is NOT as effective as thought, but no-one ever talks about the latest study. You cannot just make statements like this; how can anyone check your claims? And is the common sense not rather to wait, because most children would recover anyway?

We now know that: The sooner a child receives treatment, the shorter the treatment time will be and the greater the likelihood for lasting gains in fluency.
Now my brain goes into overdrive. This statement is a classic example of the correlation-causality fallacy. The statement itself is probably correct, but the implied message is not. Let me give you a few pointers. First of all, do you mean "sooner after onset" or "sooner as in younger age"?

Re onset: the longer since the onset of stuttering, the more likely you are to keep on stuttering. This is obvious, because the proportion of kids who will recover goes down to zero as a function of time from onset. So the longer since onset, the fewer future recoverers are left in the sample, and those who remain are more and more likely NOT to recover! Can you see what I am getting at? It is a mirage! Yes, there is a relationship, but there is a trivial explanation!
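This selection effect can be made concrete with a toy calculation. The numbers below are mine, chosen purely for illustration: assume 80% of children eventually recover, with recovery times spread uniformly over 36 months after onset. Sampling children who still stutter at ever later times then shows a falling share of future recoverers, with no treatment anywhere in sight:

```python
# Toy model of the selection effect: the later you sample children still
# stuttering (measured from onset), the fewer eventual recoverers remain
# in the sample -- with no treatment effect at all.
# Assumed numbers (illustrative): 80% eventually recover, recoveries
# spread uniformly over 36 months after onset.

RECOVERY_RATE = 0.80      # fraction who eventually recover
RECOVERY_WINDOW = 36.0    # months over which recoveries happen

def eventual_recovery_share(months_since_onset: float) -> float:
    """Among children STILL stuttering at time t after onset, the share
    who will eventually recover anyway."""
    t = min(months_since_onset, RECOVERY_WINDOW)
    still_to_recover = RECOVERY_RATE * (1 - t / RECOVERY_WINDOW)
    persistent = 1 - RECOVERY_RATE
    return still_to_recover / (still_to_recover + persistent)

for t in (0, 6, 18, 30, 36):
    print(f"{t:2d} months after onset: "
          f"{eventual_recovery_share(t):.0%} of sampled kids will recover")
```

So a study that enrols children soon after onset will report better "outcomes" than one that enrols late, even if its treatment does nothing at all.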

Re age: girls start speaking earlier than boys, by an average of 6 months (my guess), and girls are much more likely to recover without clinical intervention. If I remember correctly, girls are roughly as likely as boys to start stuttering (or maybe half as likely), but many more boys do not recover. So the sex ratio of 1:4.5 only arises after natural recovery, not at onset. Thus, the earlier you pick up a child, the more likely it is a girl and the more likely it will recover. Again, the effect can be explained without any reference to treatment whatsoever.
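A back-of-the-envelope calculation (my numbers, chosen only to illustrate the point) shows how equal onset rates plus sex-dependent recovery can produce the lopsided sex ratio:

```python
# Assume (for illustration) that boys and girls START stuttering in
# equal numbers, but recover at different rates. The recovery rates
# below are hypothetical, picked to land near the quoted 1:4.5 ratio.
boys_at_onset = girls_at_onset = 1000
boys_recovery = 0.80     # assumed
girls_recovery = 0.955   # assumed

persistent_boys = boys_at_onset * (1 - boys_recovery)     # ~200
persistent_girls = girls_at_onset * (1 - girls_recovery)  # ~45

# The male:female ratio AFTER natural recovery:
print(f"{persistent_boys / persistent_girls:.1f} : 1")
```

A ratio near 1:4.5 among persistent stutterers is therefore compatible with a roughly even ratio at onset, and an early-sampled cohort will contain more girls and hence more future recoverers.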

Early intervention treatment with the critical involvement of parents and caretakers can prevent a lifetime of potential shame and debilitation.
Here, at last, is a sentence (up to "debilitation") that I moderately agree with, provided the therapist is good. You can certainly work on the attitudes of all involved, for those who keep on stuttering. But to be honest, "prevent a lifetime of" is too strong. Even though I know a lot about my stuttering, I still occasionally feel discomfort, embarrassment, shame, and all the rest, but it is not running my life any more or holding me back big time.

The goal is to reverse the course of stuttering and resolve it before it becomes chronic and “hard-wired.”
They clearly did not read the latest brain imaging work by Chang et al., which I explained in an earlier post. Even recovered kids have these differences hard-wired in their brains. Their brains are different, but they somehow managed to deal with stuttering.

Please read the passage again:
Some of the most important and innovative work now being done in stuttering is early intervention treatment. It is cost effective, in both financial and emotional terms. It is almost unconscionable for a child to be denied it. Recent research now supports our common sense that tells us to intervene early. We now know that:
The sooner a child receives treatment, the shorter the treatment time will be and the greater the likelihood for lasting gains in fluency.
Early intervention treatment with the critical involvement of parents and caretakers can prevent a lifetime of potential shame and debilitation. The goal is to reverse the course of stuttering and resolve it before it becomes chronic and “hard-wired.”

Can you feel the intellectual hollowness and sloppiness of the text? That's what we all need to fight to improve the level of debate!

Sunday, June 21, 2009

A reader told me about his experience with using an in-ear metronome:
Hi Tom,

I discovered that an "in-ear metronome" used by musicians improves my speech by about 50%. I bought the "mm-1 metronome" in a music store for 25 dollars! It is worn like a Bluetooth earpiece.

I put it in my ear, and all I have to do is speak to the rhythm of the metronome. The device has various settings, from fast-paced to slow-paced, so you do not get used to it, which makes it more effective. I found the lower settings, from 50 to 120, more effective, though it works fine at a setting of 70 or 65, too. My stuttering was reduced to about 50% of its previous level within one week, and down to 40% the next.

Before, I was not able to read without stuttering; with the device I can read a whole page of a book and stutter minimally, maybe twice per page. Last week I read for 15 minutes in front of my doctor and didn't stutter once. (I generally get nervous when I see a doctor.)

My interaction with people went from 25% (poor) to about 50%. I still have ups and downs, but the baseline is about 40%. I was able to order pizza and leave phone messages. I am still not ready for a drive-through, but with some practice I think I will be. I still stutter in front of authority figures, where I am only at about 30%. I guess I need more practice and self-control.

Saturday, June 20, 2009

No World Congress in China?

A reader sent me this email:
No link for the Argentina world congress... I don't think it is official yet. I just heard from ISA people. Looks like it is going to be 2011. It was supposed to be 2010 in China, but that didn't work out.

Friday, June 19, 2009

Lessons I have learned

Here is how I changed my mind on several issues over the years.

Tom2000: The brain is like a computer and there is a defect region for people who stutter.
Tom2009: The brain is like a big city, and a deficiency might not be a defective region but disturbed communication between regions (like a bad transport system affecting the efficiency of a city).

Tom2000: A gene is responsible for stuttering in some people.
Tom2009: Tens of different genes (and gene combinations) can cause stuttering.

Tom2000: A cure might be possible once we know what causes stuttering.
Tom2009: The fact of knowing in which way our brain is messed up will not lead to a cure. A messed up brain is a messed up brain.

Tom2000: Better brain imaging and genetics will show us the problem in a clear way.
Tom2009: Better brain imaging and genetics will over-load us with data due to sub-types and reveal the vast complexity of the human brain.

Tom2000: Early childhood intervention is helping kids.
Tom2009: Early childhood intervention is not working (measured purely on fluency) for any treatment. The brain is messed up, full stop, and you cannot change the brain. At best, you can help the brain adjust its behaviours.

D&C debate: my grain of salt

I was listening to a discussion of the demands & capacities treatment approach with Joe Klein on StutterTalk: see here. I want to add my grain of salt into the wound:

1) They rightly point out that treatment needs to do better than natural recovery, but then Joe mentions one study in which 90% of the kids recovered, which is above the natural recovery rate. We need to be careful here. Some recent research has shown a natural recovery rate of 85%, but people often use 70-80%, so one would really need a control group to know for certain. It matters: imagine the real rate is 85% but you have 70% in your mind. Then every study will claim success!
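The arithmetic is trivial but worth spelling out (all numbers illustrative):

```python
# Why the assumed natural recovery rate matters. Suppose a study with no
# control group reports 90% recovery in treated children. The "treatment
# effect" you infer depends entirely on the natural rate you believe in.

def apparent_effect(treated_recovery: float, natural_rate: float) -> float:
    """Naive effect size inferred without a control group."""
    return treated_recovery - natural_rate

for natural_rate in (0.70, 0.80, 0.85):
    print(f"assumed natural recovery {natural_rate:.0%}: "
          f"apparent effect {apparent_effect(0.90, natural_rate):+.0%}")
```

Against an assumed 70% baseline, the same data look like a 20-point success; against an 85% baseline, the edge shrinks to 5 points, which a proper control group might well wipe out.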

2) I rarely hear anyone talking about relapse in kids. No experienced therapist would suggest that fluency in adults immediately after therapy is indicative of long-term success; you need to look at least one year down the line. Why do we never make the same argument for kids? We seem to assume that kids do not relapse because their brains are plastic.

3) Obviously, reducing demands on a kid will make them more fluent. But doesn't an easier spelling lesson also reduce the symptoms of dyslexia in dyslexic kids? Less difficult spelling challenges do not reduce the underlying severity of dyslexia; they merely give us the illusion of improvement. If someone tells you that you have a beer belly, you can tense your muscles and show no belly, but you cannot keep that up forever! Without any doubt, the low-demand environment is not forever either, and at some point in their lives these kids will again face a normal-demand environment. The key question for stuttering is whether this lower-demand period helps the brain recover and then deal better with a normal-demand world. But at the very least, part of the reported success might well be an illusion of improvement.

4) Joe Klein said that Franken's pilot study has shown that Lidcombe and D&C are equally good. Actually, the right statement is that both approaches had similar outcomes. The study design does not prove that either treatment reduces stuttering in the long term above natural recovery; it only says that neither is better than the other. She is now running the study with larger numbers (they are at more than 120 right now), but I spoke to her, and she agreed that it cannot show whether treatment is successful, because there is no control group.

The key question for demands and capacities, as for Lidcombe, is whether they help the natural recovery process and make kids better equipped to handle their sensitive brains. My intuition tells me that they might well reduce symptoms in the short term, and lastingly for a few, but that they do not eliminate the sensitive brain per se, and a future event can make stuttering break out again. So I would rather see them as damage-control exercises than as treatments or cures.

Thursday, June 18, 2009

Can Martians stutter?

One of your fellow readers asked an intriguing question:
I read an old sci-fi book recently called "Last and First Men" written by Olaf Stapledon in 1931. In that book, there is a race of martians who communicate not by speech, but by electromagnetic fields. They live side-by-side with humans, and eventually a new race of human is born. These new humans are able to communicate with each other directly from brain to brain by radio communication ... a kind of radiotelepathy. My question: if, in 100 years time, scientists are able to develop a device that would allow radiotelepathy between people, do you think that stutterers would still stutter in this mode of communication?
To be able to answer the question, I need to add more meat to it! Presumably the Martians do not have the speech areas (which encode motor sequences from information supplied by the language areas) or the motor control areas of the human brain. Instead, they have a region that receives information from the language areas and encodes it into radio wave sequences, which are sent to the motor regions controlling the radio-wave-emitting "muscles". Effectively, the question asks whether stuttering is a language disorder or a speech motor control disorder. If the language region stutters, there is no difference.

I would say that stuttering is about disturbed communication between the different brain regions involved in generating speech (principally the speech and motor control regions rather than the language areas). Thus, I would argue that there will be Martians who stutter, because some of them, like some humans, will have a stuttering radio wave system (due to genes and developmental issues). BUT the question was whether human stutterers would stutter over such a channel. There I argue that they will not, because everyone would get a system that does not stutter, in the same way that everyone has a mobile phone free of noise (because all noisy mobiles are replaced).

So to conclude, some Martians will stutter, but stuttering humans on the radio channel will not!

Passed my neuroscience course

I just finished my first neuroscience postgraduate course at the Open University, on Neural Networks & Cognitive Neuropsychology (another reason why I have not been posting as often). I passed, which might come as a shock to those who dismiss my arguments on the grounds that I only have a PhD in physics while they have studied X, Y, and Z. In fact, I got 90% on my final essay and 75% on my assignments. I am posting my essay as evidence! I hope you enjoy it. Please feel free to comment.

ECA DS 871
Author: Tom Weidig
Words: 4820 (including references)

The advent of connectionist modelling represented a considerable blow to proponents of the neuropsychological approach.

Imagine you have a wrinkly, dirty, fatty pulp in front of you. You can study the fine microscopic details, find the nerve and glia cells, and still have only increased your fascination with the brain. This pulp is a dead fellow human's brain, a brain which analysed sensory input and listened to Chopin's nocturnes, generated mental processes and devised a plan to rob a bank, stored information and recalled childhood memories of the little creek behind the house, and controlled a body that gave an opponent a bloody nose. But you have no clue how the brain did what it did. What do you do? You might also be a doctor and witness the diverse, strange behaviour of patients with strokes or blows to the head. He comprehends speech but cannot speak any more. She can speak but no longer comprehends speech. He doesn't like his left arm any more and wants to get rid of it. Her memory span is restricted to five minutes, so she tells you the same joke over and over again and laughs at your same joke every single time. There must be a pattern behind these functional break-downs, giving clues to the organisation of the brain. You cannot look inside a living brain, but you can do a post-mortem anatomical examination, looking for abnormal brain structure and relating your findings to the observed deficiencies.

That is pretty much the atmosphere, and the set of constraints, within which cognitive neuropsychology emerged. Clinicians were faced with a plethora of deficiencies caused by strokes, brain injury, or brain disease, and the science-minded ones wanted to know what caused patient A to lose only speech comprehension and patient B only speech production, in the hope that the study of these special brains would shed light on the normally functioning brain. The French physician Paul Broca's research is one of the earliest examples of how you can understand the normal brain better by looking at the damaged brain. He systematically studied aphasic patients, patients with speech production issues, post-mortem, and found lesions in the inferior frontal lobe of the left hemisphere, an area now known as Broca's area. His life's work had great influence on the understanding of the brain, specifically on the lateralisation and localisation of speech functions, and he inspired others to follow and refine his approach. For example, the German physician Carl Wernicke looked at patients with an inability to understand speech. He identified more posterior regions of the left hemisphere that are critical for language comprehension; a lack of language understanding prevents attaching meaning to perceived speech sounds. Their prominent work in the 1860s and 1870s inspired many others to find similar relationships between physically localised lesions and functional deficiencies. The approach also reveals the interdependence of deficiencies and therefore gives information on whether the corresponding underlying processes share resources or information. A good example of double dissociation is given above: he comprehends speech but cannot speak any more; she can speak but cannot comprehend speech any more.
Patients with pure Broca's aphasia and with pure Wernicke's aphasia suggest that speech production and language comprehension are two independent processes that do not share resources or information. Scientists started to realize the importance of searching for case studies with double dissociation.

Intensive study of damaged brains revealed many interesting relationships and independences. Scientists started to organise the new knowledge into flow (box and arrow) diagrams. Such frameworks are enormously helpful because they make the qualitative relationships between different functions and processes explicit. A constructive and focused debate becomes possible, because a rejection of a framework can no longer be categorical and vague: your counterpart needs to identify the disputed box or arrow and provide solid counterarguments based on experimental findings or logic, or else challenge your experimental evidence and identify its flaws. A good example is the field of visual agnosia. Based on case studies, Lissauer (1890) proposed a simple framework for visual agnosia: high-level visual processing involves visual analysis first, and then the attribution of meaning to the perceived object. He labelled the corresponding deficits "apperceptive agnosia" (the inability to create a stable percept from intact incoming low-level visual information) and "associative agnosia" (the inability to associate meaning from semantic memory with the percept). Lissauer's scheme is a good first-order approximation and as such is still clinically useful today. A simple copying test can distinguish between the two types of agnosia: if you cannot even copy an object in a drawing, you have apperceptive agnosia; if you can copy it but cannot identify its meaning, you have associative agnosia. However, later, more carefully designed and detailed studies identified cases with different types of apperceptive agnosia: some patients pass the copying test yet still have apperceptive agnosia. For example, Kartsounis and Warrington (1991) found that Mrs FRG was able to pass visual acuity tests but was unable to discriminate between object and background.
Warrington's (1985) model uses a variety of different types of agnosia to suggest the existence of sub-stages of visual processing and proposes refinements to Lissauer's model: visual analysis divides into shape coding, figure-ground segmentation, and perceptual classification. Another example of the box and arrow approach is Ellis and Young's (1988) functional architecture for hearing and speaking words. Again, the framework was developed over many years, based on sharper and sharper studies with older frameworks as guiding lights.

As with every experimental paradigm, you push as far and as deep as possible until you reach insurmountable limits. And limits were found. Very often the physical and functional damage is broad and fuzzy; strokes especially can affect many regions and range from total to minor physical damage. The notable exceptions are purposeful lesions in animals, surgical interventions to ease severe epileptic attacks, and possibly the case of NA, whose roommate's sword pierced through his nose into his brain, causing a pure form of amnesia! Moreover, patients rarely show one clear deficiency but rather a range of deficiencies, and brain plasticity is a non-negligible factor, too. The rarity of clear case studies of lesion and dysfunction raises statistical issues, because the findings might have arisen by chance, whereas with hundreds of cases random fluctuations average out. Also, at least before the imaging age, anatomical studies could only be done after the patient's death, by which time the brain had aged and might have adapted somewhat. Finally, experiments are hard to control, and all kinds of biases can creep in, even for the most experienced and clever scientist. A good example is the debate on whether the brain handles living and non-living things differently. After the initial excitement, other explanations emerged. First, pictures of living things are generally more complex: compare the picture of a fly to the picture of a cup. Any visual system should have more difficulty processing a complex picture than a simple one, especially if damaged. Second, living things look similar to each other, whereas non-living things do not. For example, many animals have four legs, a head, and a tail, but tools all look different, because each tool has a different function and we would therefore expect them to look different. A damaged visual system should have more difficulty distinguishing between similar objects. Third, familiarity could play a role.
Humans see and handle tools on a daily basis, and the visual system processes pictures of familiar objects much more often. It is easy to see how a visual system tuned to familiar objects retains some abilities after damage. No doubt all of these factors can influence performance on living versus non-living things, and the brain might well not think in terms of living and non-living at all. Therefore, Stewart et al. (1992) and Funnell and de Mornay-Davies (1996) asked for more careful designs to control for these factors.

Connectionist modelling takes a completely different approach. Remember: at the beginning, we found the fine microscopic details of the brain, the nerve and glia cells, but this new knowledge had only increased our fascination with the brain. Cognitive neuropsychology ignores the foundation on which brain processes must ultimately rest, the neuron: it is a top-down approach. Connectionist modelling, on the other hand, is a bottom-up approach: with the advance in computing power, it was able to take the neurons seriously. Let us take a physics analogy. A gas is macroscopically well described by the ideal gas law relating pressure, temperature, and volume, but microscopically a gas consists of billions of atoms colliding like fast-moving sticky tennis balls. Computer simulations are now able to derive the macroscopic properties of a gas from the microscopic properties of the individual atoms. In neuroscience, the microscopic world of the individual neuron is reasonably well described, and so is the macroscopic behaviour of the brain, such as reading or recalling memories. However, the holy grail of understanding remains out of reach: describing the brain's macroscopic behaviour in terms of neurons. Connectionist modelling takes up this challenge: to directly simulate the interactions of neurons, however difficult that may be. No doubt all cognitive neuropsychologists, when drawing their boxes and arrows, knew of the ultimate incompleteness of their approach. They, especially the clinicians, would rightfully argue that theirs is a functional approach, trying to understand the broad flows of information from process to process: a first-order approximation sufficient to understand and treat their case load. Surely, the neural networks would only confirm their boxes and arrows as an emergent structure of neurons, for they had used a scientific approach to gain their knowledge.
However, I believe that the neuropsychologists committed a forgivable logical fallacy: if a theory fits all known observations well, it must be correct. I am not referring to the Popperian ideal of a falsifiable theory, but to the possibility that several theories of a very different nature could fit all known observations. How do you then decide which one is actually implemented by nature? Imagine you dance at a night club and no-one dances with you. Your theory that the girls are just too shy fits your observation, but of course there is an alternative theory, namely that you are a bad dancer or that no-one likes you! Connectionist modelling gave birth to new insights, to new and different ways of producing the same output, but with a brainy feel. Neural networks strikingly show brain-like properties, to name a few: resistance to damage, tolerance to noisy inputs, retrieval by content, distributed memory, parallel processing, typicality effects in memory retrieval (i.e. the ability to hold a prototype of a set of similar objects), and the ability to deal with new inputs based on experience. An amazing feat, considering how many features of neuronal activity are left out: the glia cells, the neurotransmitter levels at the synapses, the thousands of connections, spontaneous firing, non-linear effects. And now you can start asking interesting questions. Which model is correct: the box and arrow one or the neural network one? Are the box and arrow diagrams not too deterministic and computer-like? If the brain is indeed like a neural network, how can we ever model memory with box and arrow diagrams? Moreover, neural networks have ushered in the quantitative era of neuroscience. It is not good enough to run a qualitative approach with flow diagrams; you need to quantify and explain exactly what is going on. Take Lissauer's model of visual processing: he assumes object identification first and then the attachment of meaning to the object.
It does not really help you to know this if you want to build a visual system yourself. It is like telling an overweight person to eat a bit less and do some sports: correct, but how do you implement this strategy? Only by going through the steps, writing down an algorithm, and constructing a neural network can you find out whether what looks good on paper is also implementable. Connectionist modelling is a useful tool to check whether realistic models can, at least in principle, be built using the current theoretical understanding of the system.

Here is an example of how connectionist modelling became a rival in love. Two very different models, one neuropsychological and one network, generate the same output: the past tense in the English language. Many verbs have regular past tense endings (show -> showed), but there are also irregular ones (go -> went). How does the brain conjugate the verbs? Maybe the brain of a child has learned the rule: if you want the past tense of a verb, just add -ed to the end. And how do you deal with went, came, fought? Maybe the child has learned the exceptions to the rule by heart and added them to a list: go -> went, come -> came, fight -> fought. Therefore, for every verb, the brain might look in the exception list first, use the entry if present, and apply the rule if absent. But that's a box and arrow model. How does the brain do it? Pinker (1994) argues in favour of such a dual-route model based on a double dissociation argument: in some disorders the regular form is impacted and in others the irregular one. Pinker and Prince (1998) suggest that there is a rule-based route (adding -ed to the ending), and a route with a list of exceptions that blocks the creation of a regular form for an irregular verb. (I never understood why the list has to block the rule. Is it not simpler to first look up the exception and, if it is not found, use the rule?) In any case, the over-regularisation argument is key here. Children often make errors like goed instead of went, even though they have used the correct past tense before. The argument goes that at first children learn the past tense by heart, then learn a rule on how to form the past tense, and this is where they make the over-regularisation mistake until the exception list strengthens. Connectionist modelling challenges this neat and clear-cut computational approach to brain processes. 
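The dual-route account lends itself to a literal implementation. Here is a minimal sketch of the look-up-then-rule logic (purely illustrative; the verb list and function name are my own, not taken from Pinker and Prince):

```python
# A toy dual-route conjugator: route 1 is a rote-learned exception
# list, route 2 is the general "add -ed" rule. The exception list is
# consulted first, so it effectively blocks the rule for irregulars.

EXCEPTIONS = {"go": "went", "come": "came", "fight": "fought"}

def past_tense(verb):
    if verb in EXCEPTIONS:       # route 1: memorised irregular forms
        return EXCEPTIONS[verb]
    return verb + "ed"           # route 2: the regular rule

print(past_tense("show"))  # showed
print(past_tense("go"))    # went
```

In this picture, over-regularisation corresponds to the exception list failing (or being too weakly learned), so the rule steps in and produces goed.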
For example, Plunkett and Marchman (1996) created a neural network that is able to conjugate verbs correctly, and most importantly without any rules or list of exceptions. It is just a neural network with many weights trained on regular and irregular verbs. Suddenly it is at least feasible that a single-route model with a messy structure, very unlike a computer, does both functions, which means that both functions share the same resources.

But why was Pinker able to argue for a dual-route neuropsychological model supported by a double dissociation argument if the real thing could be a single-route neural network instead? Juola and Plunkett (1998) showed that it is possible to construct a neural network that has the appearance of a double dissociation. Double dissociations can arise from nothing else but extreme cases of the natural variation of a lesioned network, possibly amplified by publication bias (only interesting and rare cases are published). Again, a network was trained to conjugate verbs. The information on the irregular and regular verbs is distributed in the weights of the connections between neurons. However, by chance some weights are more responsible for regular verbs and some weights are more responsible for the irregular ones. If you selectively damage or remove a connection, the deterioration in performance will not be evenly distributed between regular and irregular verbs. I saw this phenomenon in my TMA02 work: depending on which weight I set to zero, a different horse's name was affected. Now assume that you have 100 people with a stroke. In the majority of cases, weights dedicated to both types of past tense will be affected. But in a few, the pure weights are the only ones affected: either the weights most responsible for regular verbs are damaged by chance, or the weights mostly dedicated to irregular verbs. So you can get a few patients with an inability to do regular verbs and a few patients with an inability to do irregular verbs: a typical double dissociation, but based on a single route. This opened up a Pandora's box on neuropsychology's use of double dissociation and modularity concepts. Is the ability to perform a certain function distributed across the whole brain or an interaction of different brain regions? Or is there only one specific brain area responsible? 
Is a double dissociation evidence for anatomically distinct modules, or only evidence that the processes underlying the two functions do not share resources? However, we should not forget that learning the past tense in English is a cultural thing and cannot be an innate module. In Norwegian, you have two regular verb forms plus irregular verbs. So are there three routes? Today we think of the brain as composed of interacting modules shaped by evolutionary pressures. I would argue that the failure of the double dissociation argument is restricted to learned functions rather than to innate functions or a wide category of related functions. The interesting question is what is innate: the past tense is not! But is the language ability as such an innate module, or an interaction of more general-purpose modules arising from interaction with the environment, especially culture?
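The lesioning argument can be demonstrated with a deliberately simplified sketch (my own toy example, not Juola and Plunkett's actual network): a single weight matrix stores random "regular" and "irregular" items in distributed fashion, and knocking out individual connections hurts the two classes very unevenly, so the extreme lesions look like a dissociation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-route toy model: 12 "verbs" are random 20-dimensional input
# patterns mapped to random 20-dimensional "past tense" targets by one
# shared weight matrix (least squares stands in for training).
n, d = 12, 20
X = rng.normal(size=(n, d))
Y = rng.normal(size=(n, d))
W = np.linalg.lstsq(X, Y, rcond=None)[0]

regular, irregular = list(range(6)), list(range(6, 12))

def error(Wl, items):
    # mean squared error of the lesioned network on one verb class
    return np.mean((X[items] @ Wl - Y[items]) ** 2)

# Knock out the outgoing connections of each input unit in turn and
# compare the damage done to the two verb classes. Both classes live
# in the same weights, yet the ratio of regular to irregular
# impairment varies widely by chance.
ratios = []
for i in range(d):
    Wl = W.copy()
    Wl[i, :] = 0.0                 # lesion one unit's connections
    ratios.append(error(Wl, regular) / error(Wl, irregular))

print(f"impairment ratio ranges from {min(ratios):.2f} to {max(ratios):.2f}")
```

A "patient" corresponds to one random lesion; collect enough patients and the tails of this ratio distribution supply the seemingly clean double dissociations, all from a single route.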

Connectionist modelling also shows interesting alternative learning paths similar to real development, and thereby highlights the inability of neuropsychological models to adequately model the development of functions in children in any detail. Neural networks are not just possible models of how a skill is generated but also of how the skill is learned. Above, we have come across the past tense. Plunkett and Marchman (1996)'s model is not just interesting as an end-product; the training of the model can be viewed as a model of how children learn. Children learn in a U-shaped curve. First they perform well because they learn the past tense by heart. Then they make mistakes while at the same time learning the rule for regular verbs, and finally they have mastered the past tense. Plunkett and Marchman (1996) taught the net by feeding it with regular and irregular verbs in the same way as children encounter verbs. They compared the performance to real data from a children's study by Marcus et al (1992), and found similar results: early learning is error-free, over-regularisation errors happen but are rare for common irregular verbs, and irregularisation errors (conjugating a regular verb irregularly) are rare. To conclude, a blank-slate neural network is all you need to learn the past tense; no innate module is required. At least not for this task. Again we should probably make a distinction between evolutionarily recent skills and ancient skills needed for spreading the genes (like survival).

However, neural networks have an Achilles' heel: they do not include a realistic model of learning. In a sense, neural networks do what in fact drove them away from computationalism, namely mathematically describe a phenomenon rather than show how it emerges. The weights of the connections between neurons are artificially determined by mathematical rules like the gradient descent method, with the help of artificial concepts like error back-propagation. How can connectionist modelling claim to build up the brain from scratch if the learning process is not well modelled? In fact, in my opinion the lack of realistic learning leaves neural networks open to a serious loophole: data mining. Neural networks have a huge number of weights that need to be trained, and there are billions upon billions of possible constellations. In a sense it is not surprising that they can model any system with a bit of fine-tuning, trial and error, and little constraint on learning, because they have so many degrees of freedom. Let's take an analogy. This essay has about 5000 words, and in fact there is an unimaginable variety of possible texts, including a scene from Hamlet and the author's eulogy! I could easily train my document by adjusting the words of the essay to match those of a passage in Hamlet. Surely, we would not say that this is in any way a success of my document itself. So would we not expect neural networks to fit any system? Yes, neural networks are trained on a training set and then tested on a different set, but often the training set implicitly contains the rules. I believe the artificial learning might well leave the system too unconstrained. On the other hand, you can argue that that's precisely the secret of the brain: its enormous flexibility. And maybe most of its activity is indeed not real understanding but just re-coding and fitting output to expected output.
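To make the complaint concrete, here is the kind of artificial learning rule being criticised: plain gradient descent with error back-propagation, shown on a tiny 2-4-1 network learning XOR (a standard textbook sketch of my own, not any specific model from the literature):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny 2-4-1 network learning XOR by gradient descent: the output
# error is propagated backwards through the chain rule to adjust
# every weight - a mathematical recipe, not a biological one.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward():
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward()
initial_loss = np.mean((out - y) ** 2)

lr = 1.0
for _ in range(5000):
    h, out = forward()
    # backward pass: the chain rule pushes the error back layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

_, out = forward()
final_loss = np.mean((out - y) ** 2)
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

Nothing in this update rule resembles how synapses actually change; that gap between gradient descent and biological learning is exactly the loophole described above.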

Not all is lost for cognitive neuropsychology. Sure, connectionist modelling has become a rival in love, challenges many assumptions, and provides food for thought. The neuropsychological tools are not as sharp as previously thought. However, the advent of new revolutionary tools to explore the brain has rejuvenated the field, and will keep everyone busy for many years to come. First of all, the imaging techniques have become very powerful, allowing real-time insight into the structure and functioning of the living brain. Instead of having experimental results now and a post-mortem study years later, you have experimental results and imaging data at the same time. You now have three points of reference: the task performance, the functional imaging data, and the structural data. And very importantly, the resulting quantitative data allows quantitative comparison with control brains and proper group-difference statistical tests. For example, the advent of CAT and MRI scans has made it possible to see the structure visually in vivo, and to detect blood clots, squeezed brain regions, and dead tissue at high resolution. The DTI technique is also interesting, because it allows researchers to track fibre structures in the brain and see whether there is disruption between different regions. Functional imaging tools add another twist to the study of damaged brains. You can monitor the indirect BOLD (blood oxygenation level dependent) signal and the direct electrical activity of neurons. fMRI relies on the BOLD signal, while PET tracks blood flow via a radioactive tracer. The idea is that functionally active regions with plenty of firing neurons consume more oxygenated blood to literally re-fill their batteries for the next action potentials. Functional imaging data is revolutionary in the sense that you can take a time series of scans over several seconds and see the brain at work. Or you can scan damaged brains over a longer period of time and monitor recovery. 
Of course, this can also be done for the normal brain, but a damaged brain gives extra information. However, the functional BOLD methods are reaching their limits, because blood flow is by nature fuzzy, not 100% correlated with neuronal activity, and does not have a high time resolution. And of course because the real stuff is the electrical activity of the billions of neurons. Two techniques stand out here. EEG is more of a surface electrical activity measurement tool, whereas MEG aims to give a 3D view of electrical activity in the brain using magnetic fields. MEG has technical but probably solvable issues: from a 2D surface map you need to deduce a 3D activation pattern, an inverse problem. The imaging field is maturing in terms of resolution and technology (with the notable exception of MEG, which requires more work), and progress will mostly be in improved signal analysis and interpretation, using several techniques at once, lighter scanners, and experimental design improvements. Very importantly, the critique of using case studies is also dramatically weakened, because we now have so much more information on a single damaged brain. Surely, a competitor in the flat screen industry only needs to look at one of your screens to be able to understand your new innovative technology, because he can take it apart.

Another area of improvement is purposeful temporary lesioning in humans without ethical concerns. The technique is called TMS, and it sends out magnetic pulses that temporarily disturb the electrical activity in the targeted brain region, thereby knocking out a region for a brief period of time. Experimental subjects can then be tested on tasks, and their performance can be compared to a control group. For example, you ask a subject to talk and then aim the TMS at Broca's region. The subject will suddenly not be able to talk any more. Aiming TMS at other areas does not have this effect. However, TMS is a rather clumsy method, and it is difficult to control the target area. Therefore, animal models are still very important for our understanding of the brain. Scientists are able to experimentally control the degree and area of damage by knocking out genes, well-controlled surgical lesions, and targeted medication, all inside a multi-technique scanner. Finally, open brain surgery inspired by Penfield's work is still done and perfected. We should also not overlook the importance of more indirect tools like genetics. Our understanding of the human genome and some genes' impact on brain function has become a powerful tool. Many genes play a crucial role in the building of the brain via protein coding. Research into many brain-based disorders has revealed a genetic component, and current research is revealing the impact of specific genes on proper brain functioning. Clearly, gene research is not relevant to patients with brain strokes and injuries, but to brains with a permanent dysfunction. A good example is the FOXP2 gene. A specific language issue, developmental verbal dyspraxia, has been linked to mutations in this gene. So the study of damaged brains can now be seen from yet another angle and tackle developmental brain disorders. The experimental tasks given to these patients are then linked to the genetic and imaging data. 
In fact, the impact is even wider, because evolutionary biological considerations come into play. If the gene is needed for proper language, when did it evolve? Do apes have it, too? Is it just helping out other genes? FOXP2 is an impressive example of how the study of damaged brains can open up new avenues into understanding our brain. Finally, neuropharmacology is also adding new tools to explore the damaged brain or to induce temporary damage to a region or pathway. How does cocaine impair the brain? Do dopamine-receptor blocking agents modify the functioning of a damaged brain?

To conclude, did the advent of connectionist modelling represent a considerable blow to proponents of the neuropsychological approach? Let me first be pedantic, and religiously point out that science is not about people but about statements, arguments supporting a statement, and counterarguments! So did connectionist modelling represent a considerable blow to the neuropsychological approach? No, it is more of a wake-up call. There are very different ways to model a brain. You need to explain every step quantitatively in detail. You have to be more careful with double dissociation arguments. However, the core idea of neuropsychology is still alive: you can find out a lot by studying what went wrong. Neither approach is better than the other; like cities, neither Paris nor London is better, they are just different ways of living. In fact, I argued above that the revolution in imaging and genetics considerably helps neuropsychology to re-invent itself. You can zoom in much more on case studies. But maybe, due to the genetics revolution, there should be more emphasis on developmentally damaged brains, as in dyslexia, stuttering, and Tourette's syndrome, as opposed to brains messily damaged by stroke or injury. These are exciting times, with so much new information from many different tools, each exploring a completely different aspect of the brain. But of course there is only one brain, and at some point they all need to come together in meta-analysis. A challenging task, and maybe even impossible. I like to compare the brain to a big city with many interacting objects: factories, highways of information, objects and people, executive branches shaping the city, construction plans as genes, neurotransmitters as the weather, and so on. There are many different ways to explore a city: by foot, by listening to people's stories, by buying TimeOut, by satellite pictures, by statistics. 
Still, even though I have information on all of it, I find it difficult to say exactly what, for example, London is. Where does it start and where does it end? If I take Westminster away, is it still London? Whatever happens, at least the clinical side of neuropsychology will never die. Patients with damaged brains will always exist, and they have specific deficiencies; a good clinician must ask what causes these deficiencies in order to devise a plan for the best possible recovery. But clinicians don't care about neurons or networks; they want simple box and arrow models that are roughly right. Still, I want to understand, from the neuron onwards, why someone doesn't like his left hand any more and wants to get rid of it.

Kartsounis, L.D. and Warrington, E.K. (1991). Failure of object recognition due to a breakdown of figure-ground discrimination in a patient with normal acuity. Neuropsychologia, 29, 969-80.
Warrington, E.K. (1985). Agnosia: the impairment of object recognition. In P.J. Vinken, G.W. Bruyn, and H.L. Klawans (eds), Handbook of Clinical Neurology (vol. 45, pp. 333-49). Amsterdam: Elsevier Science Publishers.
Ellis, A.W. and Young, A. W. (1988). Human Cognitive Neuropsychology. London: Lawrence Erlbaum Associates.
Lissauer, H. (1890). Ein Fall von Seelenblindheit nebst einem Beitrage zur Theorie derselben. Archiv für Psychiatrie und Nervenkrankheiten, 21, 222-70.
Stewart, F., Parkin, A. J., and Hunkin, N. M. (1992). Naming impairments following recovery from Herpes Simplex Encephalitis: Category-specific? The Quarterly Journal of Experimental Psychology, 44A, 261-84.
Funnell, E. and de Mornay-Davies, P.D. (1996). JBR: A re-assessment of concept familiarity and a category-specific disorder for living things. Neurocase, 2, 135-153.
Pinker, S. (1994). The language instinct: how the mind creates language. William Morrow, New York.
Pinker, S. and Prince, A. (1998). On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition, 28, 73-193.
Plunkett, K. and Marchman, V. A. (1996). Learning from a connectionist model of the acquisition of the English past tense. Cognition, 61, 299-308.
Farah and McClelland (1991). In Cohen, G., Johnston, R. and Plunkett, K. (eds) (2000). Exploring Cognition: Damaged Brains and Neural Nets. Hove: Psychology Press (Taylor & Francis).
Devlin. In Cohen, G., Johnston, R. and Plunkett, K. (eds) (2000). Exploring Cognition: Damaged Brains and Neural Nets. Hove: Psychology Press (Taylor & Francis).

Wednesday, June 17, 2009

From an evolutionary perspective

Stuttering has a genetic component. First, homozygotic twins are more likely to both stutter than heterozygotic twins. Second, several locations on chromosomes are correlated with stuttering. These two sources of evidence, in combination with research from other disorders, allow two conclusions. First, there is not a single gene for stuttering, but many different ones or combinations thereof. (For example, deafness is associated with tens of different genes.) Second, genes are not a sufficient condition to develop stuttering in all people: if you have the genes, you could be fluent. And if you don't have the genes, you could develop stuttering. However, the likelihood of being in the stuttering group is influenced by the genes. For the non-genetic group, it is likely that a neurological incident (possibly enhanced by environmental stress) kickstarts stuttering. Examples would be an infection or a brain trauma, whose likelihood could also be related to non-speech-related genes. Such genes would probably not show up in correlation studies, as their signals are too weak and multi-factorial.

Due to its genetic component, stuttering is a candidate for evolutionary adaptation. About 1% of the population stutters, having not recovered naturally from the 3-5% of children who temporarily stutter. It is interesting to look at the past history of stuttering and ask whether this ratio was stable throughout the evolution of speech in humans, or whether in fact the proportion of the stuttering population was greater in early humans. If the proportion was indeed greater, there must have been evolutionary pressure that reduced the proportion to the 1% of today. (A gender perspective would be even more interesting, as women are four times less likely to develop stuttering. This could suggest that they were under greater evolutionary pressure; however, this might not be directly linked to speech but to more general pressure that favours stable and fast maturation of females for the survival of tribes.) The question is whether this scenario makes sense theoretically and fits all known data, and if it makes sense, whether nature took this course, and whether experiments can be devised to check predictions.

The mechanism of exactly how and why evolutionary pressures made humans evolve speech, unlike their closest relatives the apes, is not clear. It is reasonable to assume that humans started out like the apes of today, in that they used different sounds with different intonations to communicate mental states. The crucial aspect that gave advantages to early humans is probably the emergence of rudimentary grammar, which allowed early humans to combine those tens of sounds representing general concepts into thousands of different sentence combinations. However, grammar requires extra brain resources/modules. So instead of a mental concept being associated with one sound and going to the motor cortex, it now took a detour to a new region that combines words into sentences to better represent and communicate a mental concept. This ability to create sentences imposed great pressure on the brain, and at the same time greater fine motor control of the muscles was needed. A reasonable guess is that the brain divided the pathway of making sounds into two: an old one for sounds (how we speak), and a more recent one for automatic speech, where the early human could focus on what to say (and also on how to combine words) as opposed to how (with which intonation) he would say it. The social and natural smile might be an analogy.

It is reasonable to assume that nature favours fluent speakers. This is especially true as early humans became more and more organised in tribes, and social skills and communication skills became important, and a lack thereof a distinguishing feature from others. The evolutionary pressure only started when fluency became important. So it might well correlate with the emergence of culture. (Another example would be genes for dyslexia, which could only have been exposed to evolutionary pressure when writing became important, unless dyslexics have other deficiencies.)

It is possible, at least in principle, to falsify the theory. My theory is that early humans had more genes creating unstable speech, i.e. stuttering, and these are being selected out by evolutionary pressures. So the theory predicts that a higher proportion of early humans carried stuttering genes, and a comparison between a sample of modern humans and early humans would reveal sample differences. In practice, such a test is not possible yet, as old DNA degrades relatively fast unless preserved under special conditions. The promising news is that new techniques are being developed to extract ancient DNA. For example, such techniques have been used on Neanderthals with an age of tens of thousands of years. So there will probably be a DNA bank of ancient DNA at some point in the future which could be used.

Friday, June 12, 2009

Leys persists

Leys persists with his campaign to get Google to change their policy regarding stuttering cures:
Tom, I thought you and your readers might be interested in this story.

Following the BSA’s complaints, the Advertising Standards Authority has just made adjudications against five more ads from two organisations. The adjudications have been published today at http://www.asa.org.uk/asa/adjudications/public/

These adjudications have been made because the organisations in question did not respond to the ASA’s requests for data which might have supported their claims.

An additional benefit of these adjudications is that Google, who ran these ads, are deemed to be affiliate marketers, and thus bear responsibility for accepting ads which infringe the advertising codes.

So we have written to Matt Brittin, the Country Director of Google UK, pointing out that, once again, Google has been found to be taking money for ads without taking responsibility for their content. We also reminded him that, in addition to the British Stammering Association, many other authoritative people and organisations – including the Royal College of Speech and Language Therapists and the Stuttering Foundation of America – have already advised Google that stammering cannot be ‘cured’. That is why we have all asked that stuttering/stammering should be included in the Google ‘Miracle Cures’ policy, so that ads of this kind will no longer be accepted by Google.

We will carry on reporting ads to the ASA which feature misleading claims about stammering treatments. Google, we hope, will reconsider their position.

Tuesday, June 02, 2009

Exam Stress

I have the CFA Level II exam on Saturday with 3000 pages that I had to study. I only finished reading last week, and now I revise, and guess what, I can't remember anything. Oh God. Only 4 days left...

After the exam, I will have more time for the blog...
I did a recording with StutterTalk and its hosts Peter and Eric. We talked about the neurological basis of stuttering. I am not 100% happy with my performance; too many ideas. I need to focus on one idea at a time, talk less, and stutter less! :-)

Anyway, Peter asked an excellent question regarding my position that stuttering is mostly learned behaviour from an unstable and sensitive speech system, most likely due to neurological connectivity issues: I often see kids that started stuttering a few days ago, and they often already have full-blown stuttering symptoms. How can this be learned behaviour? Here is my answer:

a. His experience is consistent with a large-scale survey on early childhood stuttering showing that sudden onset of stuttering is typical. It therefore seems unlikely to be just learned behaviour, even though learned behaviours can be acquired within days, I guess.

b. I did not actually mean what I said, that stuttering is mostly learned behaviour from an unstable and sensitive speech system. I really mean: automatic responses and learned behaviours. If you observe stutterers, there are two kinds of behaviours: those shared by most stutterers and those not shared by most stutterers. (In fact it would be interesting to do a study of this. Which symptoms are most shared?) Behaviours shared by most stutterers are most likely an automatic response to delayed speech initiation, like a long pause, a block, or a repetition. And then there is learned behaviour like looking down, moving your arm, etc. Think of holding your breath: we all react in identical ways, but at the same time we also handle it differently.

Here is what I really mean: there are three types of activity: the internal activity of the brain, the overt automatic response to delayed speech initiation, and learned behaviour associated with stuttering and triggered by events, words, situations, people, etc.