Accents matter when you are deaf – especially when you can’t lip read

When I was in grade eight, our class watched and then discussed a film. I was asked a question about the film by my teacher, a recent arrival from Australia. In addition to a strong Aussie accent, he had a very bushy moustache, impeding my ability to read his lips. That plus the accent made it too difficult for me to understand the question, so I asked him to repeat it. He did, but I still couldn’t understand what he said. Rather than admit I hadn’t heard, something I couldn’t bring myself to do until well into my twenties, I said I didn’t know the answer. My classmates burst out laughing and I knew they must be laughing at me. I later found out that the question was “What is the capital of Canada?” This was not, needless to say, the happiest of school memories.

My cochlear implant has improved my life in immeasurable ways and I do hear much better than I did back in grade eight. But I often still need to lip read to understand what is being said. And just as with that Aussie teacher, I am still stymied by accents. The words not only sound different but they actually look different when formed by the lips.

Let me give you an example. Many years ago I attended a conference in San Diego put on by the Alexander Graham Bell Association for the Deaf. This was the perfect place for a deaf person to be because not only was everyone aware of our needs, but the services available greatly enhanced our ability to hear and understand. One such service was the oral interpreter. This person takes what is being said by the speaker and mouths the words silently, sometimes changing a word or phrase to make it easier to lip read. I had my own oral interpreter, and within minutes of watching her lips I knew she was from Chicago, even though I did not actually hear the sound of her voice. I had cousins living in that area and knew how Chicagoans sounded. I could see her lips form the broad and long ‘ah’ sound, like Chigaaaago, and heard it in my mind’s eye, so to speak. The ‘a’ looked different.

Hearing without being able to lip read can be challenging, and a lot of the difficulty in my case is due to accents. A few weeks ago I listened to a series of short videos in preparation for a course I was taking. The videos consisted of a voice-over narrator and the main speaker looking into the camera. While I could understand the main speaker, I was totally lost when it came to the narrator, as there was no captioning to help me. It was a woman’s voice, somewhat soft and with a bit of a drawl, I think. But her accent made it too hard for me to understand the words she spoke, and of course there was no way I could read her lips.

Recently, however, I had a totally different experience with accents, which is why I am writing this post. My sister and I watched a show called The Edwardian Farm on YouTube. The show is British and includes a narrator plus three main characters working a farm as it would have been in Edwardian days. While it was difficult for me to get what the three main characters were saying unless they faced the camera, I could understand the plummy BBC-type accent of the narrator perfectly. And just last week I was listening to a podcast of a lecture given by a cosmologist from the Perimeter Institute. He is originally from South Africa but studied in England, and his accent was similar to that of the Edwardian Farm narrator. Again, I understood him perfectly.

I’m not exactly sure why I can understand one accent and not the other, although I suspect that a lot has to do with pitch and enunciation as well as the deliberate cadence of the speaker. But it is an eye-opener for me to know this. I think I shall have to move to London!

Wired for Sound: a memoir of deafness and cochlear implants

I first met Beverly Biderman about 25 years ago, while I was executive director of VOICE for Hearing Impaired Children. As an early recipient of cochlear implants, she has been an inspiration for me. She wrote a book about her experiences a number of years ago and has recently updated it to tie in with the inaugural performance of an opera about her life! Here is the press release for the opera and, more specifically, for the revised ebook, which would be a welcome gift for anyone in your own life who has a hearing loss. I highly recommend it!
Rosemary Pryde

Beverly Biderman, the author of WIRED FOR SOUND: A JOURNEY INTO HEARING, is pleased to announce the birth of an opera based on her memoir of learning to hear with a cochlear implant. The opera, titled TMIE, on the threshold of the outside world, will premiere in English on February 25, 2016, in Lisbon, Portugal, at O’culto da Ajuda. The intriguing “TMIE” in the opera title is the name of a gene involved in hearing and deafness.

A newly revised and updated version of Biderman’s classic book on deafness and cochlear implants is now available as an ebook. This rare “inside” account of hearing with a cochlear implant, the first effective artificial sensory organ ever developed, is a moving story of one woman’s journey through deafness and into hearing.

Praised by Oliver Sacks as “a beautiful account full of wonder and surprises,” the 2016 edition brings the reader up to date on the technology, and more importantly, on the transformations in Biderman’s life brought on by her cochlear implant.

“… reads in line with the best of memoirs, one that leaves us wanting to hear her voice again.”


Beverly Biderman has had a progressive hearing loss since she was a toddler. It became profound by the time she reached her early teens, and from that point on, she was completely unable to understand any speech without lipreading. As an adult, she had surgery for a cochlear implant, and then — everything changed.

The ebook, “Wired for Sound: a journey into hearing” is available on Amazon.

For further information, or complimentary reviewer copies of the ebook, please contact

Glass half full or half empty?

Last week I received a new processor for my cochlear implant. The processor is the tiny computer that translates sounds in the environment into a code. The electrodes in my implant use this code to send a message to my brain that I have heard the sound. I must admit that I have a difficult time believing I have only had this marvelous invention for three years. It seems like a lifetime and I can hardly remember what life was like before my cochlear implant.
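For the technically curious, here is a very rough sketch of that idea. The eight channels and the 250 Hz to 8 kHz band edges below are made-up numbers for illustration only; a real processor’s coding strategy is far more sophisticated.

```python
# Toy sketch of a processor's job: split incoming sound into frequency
# bands and turn each band's loudness into a stimulation level for one
# electrode. The channel count and band edges are illustrative only.
import numpy as np

def sound_to_code(samples, sample_rate=16_000, n_channels=8):
    """Reduce a short chunk of audio to one level per electrode."""
    spectrum = np.abs(np.fft.rfft(samples))                # frequency content
    freqs = np.fft.rfftfreq(len(samples), 1 / sample_rate)
    edges = np.logspace(np.log10(250), np.log10(8_000), n_channels + 1)
    levels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        levels.append(float(band.mean()) if band.size else 0.0)
    return levels  # the "code": one loudness number per electrode

# A 440 Hz tone lights up mainly the low channels.
t = np.arange(0, 0.02, 1 / 16_000)
print([round(x, 1) for x in sound_to_code(np.sin(2 * np.pi * 440 * t))])
```

The point is simply that what reaches the implant is not the sound itself but a handful of numbers, one per electrode, updated many times a second.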

There are still challenges, especially when I can’t read a speaker’s lips, but they are far outweighed by the experiences that now go in my ‘life-half-full’ glass. My new processor includes a special receiver that connects both my hearing aid and implant with the personal FM system that I wear in groups to boost the sound. So I am now hearing in stereo! Just two days after receiving this new processor, I attended a special music concert at church. I was amazed at how much better the music sounded this time, even better than that first time I experienced music with my implant at the Symphony, some eighteen months ago. I could hear very soft sounds distinctly. It’s really quite something.

The percussionist who provided some of these soft sounds sat up very tall in his seat and blocked my view of the speaker at the pulpit. I tried to lean past him so that I could see the speaker’s lips, but ended up almost in my seat mate’s lap! So I missed hearing most of what the speaker said. There was a short video about Advent and an explanation in a child’s voice about the first candle of hope. I was able to get the gist of the message but unable to hear it all. It would have been nice to have had the video captioned so I could read what she said. My local newspaper, like many, has an extensive catalogue of videos on its webpage. But much to my dismay, none of the videos are captioned. I had a good talk with the customer rep and she thinks it’s an idea whose time has come. Hopefully I will be able to hear some of my favorite columnists sometime in the future. We’ll see.

And while my implant has improved my life in immeasurable ways, I still experience trepidation in any waiting room, wondering if I will hear my name being called. This is a very common concern for anyone with a hearing loss. Even though we do hear our own name above all other sounds, the anxiety of missing it is always there, especially when there is more than one door the person might be calling from. I was in such a waiting room just last week. It took me a while to realize that the names were being called from an entrance behind me. Oh dear – did I miss mine? I quickly moved so I would be in the line of sight of this door and was able to see the person mouthing my name when I was eventually called. Whew.

So glass half empty or half full? No question – entirely full. A very good friend of mine told me the other day how much she admired how I had dealt with the many challenges of deafness over the years. To me, they aren’t really challenges, just steps along the way, even if they are sometimes one step forward and two steps back. I can now experience so much more. All in all, the challenges are pretty inconsequential when I can hear that beautiful music.

The Communications Challenge

The other day I was introduced to someone and, as is usually the case, mentioned early on that I was deaf. His response was, “Oh, you use sign language.” “No,” I said, “I speak!” He was rather taken aback and suggested that surely I had been taught how to sign in school. I told him no, I went to a regular school and didn’t pick up any signs for years. Now I know the signs for Merry Christmas, Happy Easter, thank you and sorry, but that’s about it.

This encounter got me thinking about the different ways those of us with hearing loss communicate. If you are unfamiliar with this topic, here is some background.

There are basically three communication approaches taught to children with hearing loss: verbal (which can include speech reading or lip reading as well as listening for auditory cues); sign language (in Canada, American Sign Language is used, but there are other versions in other countries); and Total Communication, which is a combination of speech and sign.

The communication approach you are taught as a deaf child depends on many factors including current educational thinking, whether you were prelingually deaf (generally deaf before the age of one) or became deaf later in life, the severity of your hearing loss, your family situation and the availability of therapy services in your community.

I first learned about the controversy surrounding communication approaches back in the mid-eighties when I volunteered to sit on the board of a charity providing services to deaf individuals. This organization referred to those of us with hearing loss as either Deaf, deafened or hard of hearing, depending on the severity of the loss and the type of communication we used.

I learned that those advocating sign language don’t always agree with those advocating speech, particularly where children are concerned. The two camps can become deeply entrenched and often attack each other.

In my own case, I was four years old when I lost my hearing and five when the diagnosis was finally confirmed. As the youngest of three children in a family that also included three adults, I learned spoken language very early, and by the time I was four I was reading books, so it seemed a natural decision for me to attend a regular school.

The educational authorities, however, wanted to send me to the school for the deaf, where I would learn a combination of sign language and speech and be with other children who were also deaf. At the time, the only specialized school was a three-hour drive away. My parents decided that there was no way they would send their five-year-old to boarding school and put up a fight to have me enrolled in a regular school, two blocks from our home. I’m very glad they did, but when I tell this story to those who use sign language exclusively, they think I am criticizing them.

I am truly not.

Over the years I have thought a lot about why the feelings regarding this issue are so intense. Parents of deaf children want to communicate with their child and many fear that they won’t be able to do so unless their child learns spoken language. Adults using sign language often had to cope with less than ideal communication situations during their school years and when they discovered sign language, a whole new world opened up for them. Anything that threatens this world, including deaf children who receive cochlear implants, is very scary.

I have always believed that communication in all its forms is a life force. We cannot do without it. Imagine how crucial it must be for those with hearing loss to find ways to communicate with others. Anything we can do to make it easier, whether it is providing a variety of communication options for children, continuing to develop better technical tools for those using speech or simply honouring the communication approach chosen, would be a good thing!

More on music and cochlear implants

Last June I wrote about my success hearing music with my implant. Since that time I have been trying to explain the difference between hearing ‘normal sounding’ music and hearing these sounds with a cochlear implant, to help others understand the magnitude of this milestone.

I had a meeting with my implant rehab therapist earlier this week and she found this great illustration (see the link below). I want to share it with you – both those of you who are implant users and those who might know someone with an implant. I think this is an easy way to understand what we go through trying to hear music with a cochlear implant, especially when the implant’s sounds overpower the sounds from the non-implanted ear.

Some people with cochlear implants never reach the ‘normal’ stage; others get there fairly quickly. It took me almost two years. My brain had to reach back more than 60 years in its auditory memory for these very complex sounds.

I now hear the ‘normal’ sounding music with both my implant and my hearing-aided ear and I am greatly enjoying this newly rediscovered pleasure.

Here is the link. You need to scroll down a bit to find the illustration but I think it is worth it.

The Symphony

Those of you who are regular readers of my blog know about my struggles to hear music. While I could hear simple one-note tunes fairly early on, the more complex music of a symphony was beyond my reach. Until now.

Last week I attended a concert that featured, among other pieces, Beethoven’s Piano Concerto No. 3 in C Minor, Op. 37. For those of you unfamiliar with this particular piece, the pianist’s fingers are practically flying over the keys for much of it. I heard almost every note.

There was still a ‘Darth Vaderish’ quality to the lowest registers and I could not pick up the pizzicato of the violins. But the last time I can remember hearing so much, so clearly, would have been at least 40 years ago. And I think the sound is actually clearer now.

It IS the implant that is making the difference here. A couple of times during the concert I took off my cochlear implant processor so I could get a sense of what I was hearing with just my hearing aid alone. It was very little and very soft. When I put the processor back on, the sound not only increased tenfold but it was MUSIC I was hearing in both ears, not noise. This was no fluke.

And how appropriate that my re-introduction to music was a piece by Beethoven.

A few days later, I was still on a high and still processing what happened that evening when I recalled others telling me that they had often been deeply moved by a piece of classical music.

I didn’t know what this meant until that night in June, almost two years after I received this amazing device.

My “cochlear implant expectation checklist” is almost complete, so this will likely be the final entry of my cochlear implant journey.

I still plan to offer thoughts on hearing loss issues from time to time and of course would enjoy hearing and publishing your stories on my blog so please do send them along to me. In the meantime, I am going to go listen to some music!

You can’t always get what you want in life

But to paraphrase that Rolling Stones oldie, you just might find out that you get what you need.

One of the biggest challenges for me is music. I have written in this blog on several occasions about my experiences hearing music with my implant. I could hear the true notes of simple tunes like Twinkle Twinkle Little Star after a week or so of practice, but a complex piece of music sung by a choir still sounds very Darth Vader-like.

So when my audiologist asked if I would like to participate in a research project for music therapy, I jumped at the chance. Here was the opportunity I was waiting for. I would devote 30 minutes every day for a month listening to and trying to identify specific musical sounds. It would be a great success and I would start to enjoy music once again.

Ah. It did not quite happen that way. Let me tell you a bit about the therapy itself. The patterns are atonal so you cannot easily memorize them. And as with all learning, they become more difficult as you go through the sections – nine in all, with several tasks in each one. For each task, you are presented with two patterns, each slightly different and played within a larger selection of music. The test is to identify when you hear one of the two patterns. Seemed easy enough. Not!

For one thing, the notes I heard with my implant did not always correspond with the placement of the notes on the screen. I know music. Before I lost most of my hearing, I played piano and violin and sang in the choir and glee club. I can read music and my brain still retains the musical memory, so I know what a piece should sound like. The patterns in this therapy program appeared as notes on a staff (well, a seven-line one) and I could hear in my mind what the sound should be. What I actually heard surprised me. In many cases, instead of hearing notes go up the scale as they did on the computer screen, I heard them going down, or vice versa. Very odd.

Eventually, after much practice, I was able to hear some of the patterns as they should sound, but there were still several that defeated me, so I just listened for what I actually heard rather than what I thought I should hear and was able to move through the sections reasonably quickly.

Then I hit section nine, the final section. Two weeks into my therapy month I reached section nine, task one. At the end of the month I was still on section nine, task one! I just could not get enough correct answers to move forward, and the program doesn’t allow you to skip. I made sure I had a glass of wine at the ready each day after I finished the required half-hour. I would check the clock and stop at precisely 30 minutes. It was extremely frustrating, but I finally completed the requisite four weeks this past weekend and on Tuesday went back for my post-test.

When I had my initial appointment a month ago, I was told that the researchers were finding that this therapy had a positive side effect. There may be a connection between music and speech: many participants discovered that their ability to understand speech improved over the month. And my test results showed that as well. My ability to understand bits of speech in a noisy environment almost doubled from the first test a month ago.

A couple of weeks ago I was at a meeting of about 20 people. I was trying out a new personal FM system that augments the sounds I get from my hearing-aided left ear, and decided to sit at the back of the room for this meeting just to see how good the FM was. I have NEVER sat at the back of the room for anything. I was always placed at the front in school, and while I heard the teacher when she faced me, I never got the benefit of being one of the ‘disruptive kids’ who had so much more fun in the back. Well, I heard just about everything at this meeting. My new FM undoubtedly helped, but I have a feeling that the dreaded section nine, task one played a key role as well.

I need to understand speech better. The more I understand, the less tired I am and the more I can participate in life. I can still hear some music with my hearing-aided ear even if I can’t get the full power and scope. So while it was a drag at the time, I really did get what I needed from this therapy.

And a PS. I saw a different researcher for my post-test. She explained that cochlear implants are designed for speech, not music, and that music involves a totally different signal. There is rhythm, there are dynamics, but most especially there is pitch. Speech generally covers a frequency range from 250 to 8,000 hertz; music goes up to 16,000 hertz. Clearly there are some missing pieces, and this helps to explain why I hear a series of notes going down when the screen tells me they are going up. The brain is not getting the right signal and cannot accurately process what it hears. The research continues.
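To make that concrete for myself, I worked through a little illustration (my own, not the researchers’ model). If every note is reduced to one of a handful of electrode channels, then different pitches can land on the same channel, and anything above the top band edge is simply lost. The channel layout below is invented for the sake of the example.

```python
# Toy illustration of why pitch can come out wrong: map pure tones onto a
# small set of electrode channels. The 8 channels spanning 250 Hz - 8 kHz
# are hypothetical, chosen only to make the point.
import numpy as np

edges = np.logspace(np.log10(250), np.log10(8_000), 9)  # 9 edges -> 8 channels

def channel(freq_hz):
    """Which channel a pure tone of this frequency lands on (1-8, or None)."""
    if freq_hz < edges[0] or freq_hz >= edges[-1]:
        return None                  # outside the coded range: not delivered
    return int(np.searchsorted(edges, freq_hz, side="right"))

# An ascending run: every note is a different pitch...
for note, f in [("C4", 262), ("D4", 294), ("E4", 330), ("G4", 392),
                ("C5", 523), ("C7", 2093), ("harmonic", 12_000)]:
    print(f"{note:>8}: {f:>6} Hz -> channel {channel(f)}")
# ...yet C4, D4 and E4 all land on the same channel, and the 12 kHz
# harmonic falls outside the coded range entirely.
```

With the fine detail of pitch collapsed this way, it is easier to see how a brain filling in the gaps could hear a melody moving in the wrong direction.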