Choosing a Cochlear Implant that Works with a Hearing Aid

By Jessica Lewis

Beginning the process of cochlear implantation is an exciting prospect. When your audiologist tells you there may be hope to regain hearing and comprehension, the potential seems limitless. Amidst the excitement and new possibilities, the process can be overwhelming, and it's not a decision to be made overnight.

My first step after deciding to move forward with implantation was to learn how cochlear implants work – not from a deep technical perspective, just an overview such as can be found in the videos on the manufacturers' websites. In particular, I was interested in understanding the differences between cochlear implants and hearing aids.

When I began my cochlear implant journey last year, I had to choose which implant and company I would partner with for life, and they all promised better features than the others. My audiologist told me all the general bells and whistles of the different implants, but ultimately she just handed me big packets of information on the three brands – MED-EL, Advanced Bionics, and Cochlear – and said, "it's up to you!"

The first step in any decision is to research every possible outcome. Brand packets in hand, I meticulously combed through each one, hoping one company would stand out over the others. Unfortunately, all three brands seemed to offer similar implants, with very small distinctions between them. This was where I decided to hit the pavement and go straight to the sources. My audiologist was kind enough to put me in contact with representatives and implant recipients for Advanced Bionics, Cochlear, and MED-EL. I set up meetings over coffee, chatted via email, and took voluminous notes throughout the process. I truly feel this made my decision easier, as I was able to discuss and listen to firsthand experiences: the good, the bad, and the ugly.

I qualified for a cochlear implant in both ears, but I decided to only implant one for now as I can still use a hearing aid in the other ear. Many cochlear implant recipients are bimodal, meaning that they use one cochlear implant and one hearing aid.

While you can use any hearing aid with a cochlear implant, they really are different devices, and process sounds differently.  Successful bimodal users learn to adapt to the different inputs to each ear.

Sister companies Advanced Bionics and Phonak recently introduced the Naída bimodal hearing solution, consisting of a cochlear implant with a Naída CI Q90 processor, and a Naída Link hearing aid.  The sound processing technology is the same for both instruments.  And the volume behavior (the way the loudness is adjusted automatically) is also the same. My hope is that this will make the transition to becoming a bimodal user as effortless as possible, and that it will provide me with the best bimodal hearing experience.

One nice feature available now is that when you change program or volume by pressing buttons on either instrument, both instruments respond, and you can hear the beeps in both ears.  Also, you can stream sounds to both instruments using a ComPilot or a Roger Pen.  

Some new features are coming soon that will make the Naída Link system even more integrated.  I look forward to programs such as DuoPhone, where you hold the phone up to one ear, and the sound is streamed wirelessly to the other, so you hear it in both ears!  And StereoZoom uses the mics on the two instruments together to make a super-tight focus directly in front of you – perfect for noisy restaurants!

Needless to say, I made the decision to go with Advanced Bionics because of all the features for bimodal users like me.

Naida bimodal

With both the Link and the implant, I am able to hear sounds that I haven't heard since my hearing loss began (including my cat's incessant meowing, which I'm not sure I missed…). I'm able to capture wonderfully clear sounds and speech with just the implant itself, but the Link adds a much richer, more natural tone to my surroundings. I can carry on conversations in restaurants with ease, hear my boyfriend calling me from another room, and even talk on the phone with the T-Mic or my Roger Pen streaming into both ears. It's astounding how clearly I am able to localize sounds through these intelligent and cohesive devices; two ears are definitely better than one!

Going forward, I can’t wait to see what additional features Advanced Bionics and Phonak will offer bimodal users.

About the Author

Jessica Lewis is a twenty-two-year-old recently hired pediatric oncology RN. Her hearing loss started in 2014, warranting the use of bilateral hearing aids until she began the cochlear implant process in 2015. She was implanted June 30th, 2016, activated on July 14th, 2016, and received her Naída Link a week later. She currently resides in Jacksonville, Florida, where she hopes not only to change the lives of her pediatric patients but also to advocate for the deaf/hard-of-hearing community she so closely relates to. She strives to pave the way for awareness and recognition of this community by sharing new technology and communication techniques drawn from her experiences, both medical and professional.


Applying for Social Security Benefits

Deanna Power

Deanna Power, Community Outreach Manager for Social Security Disability Help, has contributed a page to help you learn about Social Security Disability Insurance (SSDI) and Supplemental Security Income (SSI) in the United States. Find out if you qualify at Applying for Social Security Benefits.

How Do Implanted Children Learn to Talk?

The latest issue of EXPLORE magazine is now available! Published by hearing implant manufacturer MED-EL, this international publication provides an in-depth look at key hearing loss-related topics. EXPLORE KIDS was just released and is available for free (print or download) from MED-EL. One sample article, How Implanted Children Learn to Talk, is republished here with permission for your convenience.


MED-EL Rehabilitation Manager Ingrid Steyns explains how children with cochlear implants can learn to speak as well as their peers

In what circumstances are cochlear implants (CIs) suitable for a child? 

CIs are considered for children diagnosed with severe to profound sensorineural hearing loss in one or both ears. In some countries, children with moderately severe sensorineural hearing loss may also be considered when they derive insufficient benefit from hearing aids. Access and commitment to auditory training, also known as (re)habilitation (see below), are very important, too.

‘Rehabilitation’ refers to the process after CI surgery in which people who have lost their hearing learn how to hear again.

‘Habilitation’ refers to the process after CI surgery in which people who were born deaf learn to hear for the first time.

Why is auditory training necessary?

After a CI is fitted, a person receives stimulation that provides a message of sound to their brain. An understanding of this message isn’t necessarily immediate, and skills for understanding these sounds need to be practised. During auditory training, CI users learn to recognise sounds and words, gradually improving over time. For adults, the average rehabilitation period takes six to 12 months, but for young children, (re)habilitation programmes often last for several years, as this is a critical time for speech and language learning.

How does the training of babies and young children who have never been able to hear differ from the rehabilitation of adults?

The process of a child learning to speak is complex and relies heavily on their ability to hear. Babies must have adequate access to all the sounds of speech and numerous opportunities to listen to spoken language before they, in turn, can develop it. If a child is fitted with a CI in the early years of life, their habilitation model can follow a development that’s very similar to the way children without hearing problems would learn to listen and speak. The brain is born ready to receive sound and is in a sensitive period for language learning. For people who lose their hearing after learning to speak, the rehabilitation model will follow a re-learning pathway, where sound provided by the CIs is shaped to match their pre-existing knowledge of spoken language.

How can implanted children be best supported in learning to speak?

A team approach is most effective with support from the surgeon, audiologist, speech and language pathologist and rehabilitation specialist, as well as the family and teachers. The family’s role is extremely important. Listening, speech and language are learnt through abundant, meaningful exposure, so families must be given the right information on suitable strategies to achieve this. For a baby or a young child, this may involve singing songs to stimulate particular speech sounds that they don’t yet have in their repertoire, or playing games and activities that include certain features of language. It’s a case of closely monitoring the child’s progress while incorporating specialist knowledge, and adapting goals to further improve the child’s outcomes.

Can deaf children with implants learn to hear and speak as well as their normal-hearing peers?

In the early years of a child’s life, the brain is at its most adaptable, ready to receive sound and develop language. With early cochlear implantation and rehabilitation, this prime period for development can be maximised, and deaf children with CIs have the potential to achieve listening and speaking skills that are comparable to those of their peers who don’t have a hearing impairment.

About the Author

Ingrid Steyns

© Peter Fesler

Ingrid Steyns is a Rehabilitation Manager at MED-EL’s head office in Innsbruck, Austria. She is a certified listening and spoken language specialist and a practising speech and language pathologist.

Cochlear Wireless Accessories Review

By Christina Lamp

Having trialled the Cochlear wireless accessories last year, I was very pleased with the quality and the range of sounds. When they became available, I ordered all three accessories!

Pairing the products can take a few goes to work out if, like me, you are not electronically minded.

Once the devices are paired, you can start streaming with the TV Streamer or Mini Microphone with a long press of the telecoil button on your Remote Assistant or Remote Control.  Or you can do a long press of the upper button on the Nucleus 6.

Mini Mic and TV stream on remote

Selecting an accessory

The Phone Clip + has its own call pick up/hang up button.

Mini Microphone


The Mini Microphone is inconspicuous – it even matches his shirt!

  • Hear from up to 7 metres
  • Switched on using your remote

This was easily my favourite product – a wonderfully versatile device. I tested it in a number of situations.

  • One-on-one conversation in a very loud location: a coffee shop at the crossroads of two main thoroughfares with very heavy traffic
  • A TV that did not have a TV Streamer, with the microphone placed under the speaker
  • Travelling on transport
  • Discussion with my husband
  • In a group with a main speaker

My experience was:

  • Clear speech – I was able to follow the speaker in all situations
  • The background noise was low and the speaker’s voice was loud
  • The voice was clear in all situations
  • The TV sound, while not the same as with the TV Streamer, was still very clear – a great solution if you are visiting somewhere and need to listen to a TV or radio
  • Lightweight, easy to store and very portable
  • Very easy to use: switch the device on for the person who will wear it, adjust the volume using the buttons on the side as necessary, and select the Mini Microphone on your remote

The Mini Microphone can also stream music and audio directly to the Sound Processor using a supplied cable connected between it and the audio device. I haven’t tested this yet. The Mini Microphone is mono, so if you want true stereo, you have to use the Phone Clip + instead.

Mini Microphone quick guide

TV Streamer

Cochlear wireless TV streamer

  • Stereo Sound

I tested this at home; my walls are double brick and my floors are tiled, so the rooms tend to echo.

My experience was:

  • I found the stereo very clear
  • Able to listen to the TV from the living room while working in the kitchen
  • Easily adjust the volume on the TV Streamer
  • Easily switched on by remote

My only difficulty was:

  • When I listen to Stereo Sound on the TV Streamer, I find it hard to hear the conversation around me. When I adjust the accessory mix to improve the sound around me, the person speaking on TV becomes too soft to hear. Since going bilateral in November 2014, I have found it much easier to have the TV Streamer connected to one CI without adjusting the accessory mix, while the other CI is used normally so I can hear people around me. I plan on testing the conversation around me again once my second CI has adjusted to hearing sounds.

TV Streamer quick guide

Here is how to change the accessory mix, which adjusts the loudness of the accessory input compared to the processor microphone.

Accessory Mix
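Conceptually, the accessory mix is just a weighted blend of the two audio inputs. Here is a toy Python sketch of that idea (my own illustration with made-up names and values, not Cochlear's actual algorithm):

```python
def mix_inputs(accessory_sample, mic_sample, accessory_ratio):
    """Blend an accessory audio sample with a processor-microphone sample.

    accessory_ratio runs from 0.0 (microphone only) to 1.0 (accessory
    only); 0.5 gives an equal blend of the two inputs.
    """
    if not 0.0 <= accessory_ratio <= 1.0:
        raise ValueError("accessory_ratio must be between 0 and 1")
    return accessory_ratio * accessory_sample + (1.0 - accessory_ratio) * mic_sample

# Favouring the TV Streamer 3:1 over the room microphone:
blended = mix_inputs(accessory_sample=0.8, mic_sample=0.4, accessory_ratio=0.75)
```

Raising the ratio is why the TV gets clearer while voices in the room fade, and lowering it does the opposite.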

Wireless Phone Clip

Christina Lamp with phone clip

I tested this on different voices.

My experience was:

  • I could follow most conversations easily; the voices had clarity
  • A bad connection with static was more difficult, but using the clip I was able to follow enough of the conversation
  • Very easy to use: one click to answer, double click if you can’t answer
  • Clips easily to clothing
  • The background noise is muted, so you can concentrate on the voice
  • The area I live in has reduced mobile connectivity, but I was still able to have good conversations even though the connections weren’t great
  • In some calls the voices did not sound as clear as on a landline, but a hearing person explained to me that a mobile is not as clear as a landline anyway

You can also use the device to listen to music from your phone in stereo. I haven’t done this yet, as prior to my CI I had trouble hearing music. This will be a next step for me to learn once CI number 2 has settled.

Chris Lamp phone


Phone Clip quick guide


This was a wonderful outcome for me. I happily purchased all three products, and they have improved my life.

Pre-CI I would have:

  • No voice clarity for the TV or loud coffee shop location
  • Heard only noise on transport
  • Relied totally on subtitles on TV
  • Only heard the voice on the phone with no clarity and a lot of noise
  • Sat in front of the speaker and tried my hardest to lip read

About the Author

Christina Lamp has had a hearing loss since she was 7 years old. By the time she was a teenager, she had become profoundly deaf. Christina relied heavily on analogue hearing aids and lip reading to communicate. After struggling with digital hearing aids, she received her first cochlear implant in March 2014 and then went bilateral in November 2014. Christina and her sister both have cochlear implants, but the rest of their family is hearing.

Advanced Bionics AquaCase Unboxing

Click on an image to see the full-size version!

OtoSense Identifies Sounds Using Your Smartphone

OtoSense is a new app for Android and iOS (coming soon) that can identify important sounds in your life. When the app is running, it continuously analyzes your environment to detect sounds such as a doorbell, smoke detector, or telephone. When it finds a match, it notifies you via flash, vibration, or third-party notification.

A small library of sounds is included, and you can record your own important sounds to add to the list.
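Sound-identification apps of this kind generally work by comparing features of the incoming audio against stored templates and firing a notification when the similarity is high enough. Here is a rough, hypothetical Python sketch of the matching step (all names and numbers are my own invention, not OtoSense's actual code):

```python
import math

def identify_sound(features, library, threshold=0.9):
    """Return the name of the best-matching library sound, or None.

    features and each library entry are feature vectors (for example,
    spectral fingerprints); similarity is a normalized dot product.
    """
    def similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    best_name, best_score = None, threshold
    for name, template in library.items():
        score = similarity(features, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# A tiny made-up library; recording your own sound would add an entry here.
library = {"doorbell": [0.9, 0.1, 0.0], "smoke alarm": [0.0, 0.2, 0.98]}
match = identify_sound([0.88, 0.12, 0.05], library)  # matches "doorbell"
```

The threshold is the kind of knob that trades missed sounds against false alarms.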

Because OtoSense continually analyzes the sound environment, it does increase the drain on your battery.  The app is nice enough to give you warnings when you open it, requiring you to confirm that you do want it to run.  It also keeps a small icon in the notification bar to remind you that it is running.

Otosense icon


The company is working on improving battery life, and also on adding to the library of sounds. Download OtoSense now to try it out for free!

Should I Wear Medical ID?

Cochlear implant recipients and candidates often ask whether they should wear some form of medical ID. Cochlear Implant HELP has cut through the anecdotes, interviewed medical professionals, and assembled the information to help you make your own decision.

Read more at Should I Wear Medical ID?

Auditory-Verbal Therapy & Telepractice: What’s Happening in France

By Hilary Coté Depeyre, M.A. M.S. CCC-SLP

In countries around the world, Auditory-Verbal Therapy (AVT) is recognized and used as a principal method by which deaf and hard of hearing children learn to communicate effectively through listening and spoken language. Today, a child with a profound hearing loss can learn to listen and speak earlier and better than ever before thanks to advancements in early detection of hearing loss, advanced cochlear implant technology, and family-centered early intervention. With these technologies and interventions, the degree of a hearing loss no longer determines a child’s spoken language outcome.

In the AV method, the ultimate goal is age-appropriate auditory, language, speech, cognition, and communication skills for a child, meaning that he or she will be in a mainstreamed environment (regular classroom) as soon as possible.  Just like hearing children, deaf and hard of hearing children develop spoken language skills through listening, and their parents help to highlight the meaningfulness of sounds throughout the day. Visual cues are not used, and one-on-one teaching is critical. Auditory-Verbal Therapy sessions are planned to provide coaching to parents as they interact with their child. The therapist can give feedback and provide strategies to parents as they help their child build language skills and use their cochlear implant.

There are currently no certified AV therapists or Listening and Spoken Language Specialists (LSLS) in France. There is, however, no shortage of parents following the method and looking for guidance. As an American-certified Speech-Language Pathologist working in France, I’m using the foundations of AVT to support families as they help their children reach their full potential as cochlear implant users. Caroline Pisanne is one of the pioneering mothers who first sought AVT for her son via telepractice, and thanks to her website, more parents in France are becoming aware of AVT.

I was first hesitant to start practicing speech therapy in France via telepractice, as presenters at the AG Bell 2012 Convention I attended spoke about their advanced telepractice platforms, and how they kept blogs for each family, had very fast internet connections, and could even send materials and needed technology to families prior to a session. Needless to say, I did not feel ready for this! However, the families that Caroline referred to me were motivated for their children, and did not seem to mind the occasional hiccups setting up the technology. What was important to them was that I could provide professional assistance in a language they understood, and they were open to trying something new. Telepractice was new and exciting for me as well, but the biggest factor was my recent arrival in France. I had no other way to continue the work I love. While nearly all of the telepractice programs I had heard about were set up to help families who were remote, I was drawn to it as I was remote!

So the Skype sessions began, and it took a couple of sessions to get the hang of this new way of doing therapy. A month into the sessions, we were naturals.

What does a typical session look like?

AVT telepractice

A day or two before the session, I provide the families with a lesson plan, including objectives and activities as well as a list of materials that we will be using. In this way, they can prepare and know what to expect.

The actual session is similar to a face-to-face session. We let the child play, books are shared to encourage language development and early literacy skills, and songs and rhymes are incorporated. As the parents are physically with their child, they are the teachers, not me. This aspect of telepractice directly incorporates the principle of AVT, that parents are the most important models for learning speech and spoken communication.

Throughout the session, I coach the parents and share with them critical strategies to incorporate into their everyday communication. Therapy is always diagnostic in nature, meaning that I continually monitor the child’s progress and modify the activities or goals when needed.

Following the session, parents receive a summary with progress notes as well as ideas for how to incorporate these new strategies into their everyday routines. Ideally, these sessions happen weekly.

What are the benefits to telepractice?

  • Children are in the comfort and familiarity of their own home
  • Fewer sessions are skipped due to illness or other life disturbances
  • Flexible scheduling based on the family’s needs
  • No travel time is needed for either the family or the therapist
  • Multiple family members can more easily participate

What are the drawbacks to telepractice?

  • Technology issues with the webcam, audio, slow Internet connection, etc.
  • Poor sound quality and distance of the child from the microphone can make it difficult to accurately judge articulation skills
  • Using Skype alone and not a more advanced platform limits what we can do
  • Video lag can lead to talking over each other

With telepractice, families in France can now have professional guidance in French or English following a method they believe in. There are still no certified LSLS AVTs in France, but step-by-step we are increasing awareness and using the resources we have for families to help their children maximize the use of their cochlear implants. Let’s hope that someday soon a French orthophoniste will pursue LSLS AVT certification!

The Principles of Auditory-Verbal Therapy

  1. Working toward the earliest possible identification of hearing loss in infants and young children, ideally in the newborn nursery. Conducting an aggressive program of audiologic management.
  2. Seeking the best available sources of medical treatment and technological amplification of sound for the child who is deaf or hard of hearing as early as possible.
  3. Helping the child understand the meaning of any sounds heard, including spoken language, and teaching the child’s parents how to make sound meaningful to the child all day long.
  4. Helping the child learn to respond and to use sound in the same way that children with normal hearing learn.
  5. Using the child’s parents as the most important models for learning speech and spoken communication.
  6. Working to help children develop an inner auditory system so that they are aware of their own voices and will work to match what they say with what they hear others say.
  7. Knowing how children with normal hearing develop sound awareness, listening, language, and intellect and using this knowledge to help children with hearing impairments learn new skills.
  8. Observing and evaluating the child’s development in all areas. Changing the child’s training program when new needs appear.
  9. Helping children who are deaf or hard of hearing participate educationally and socially with children who have normal hearing by supporting them in regular education classes.

About the Author

photo 2

Hilary Coté Depeyre, M.A. M.S. CCC-SLP is an American Speech-Language Pathologist who has settled in France, thanks to her French husband. She spends part of her time working through telepractice with children with cochlear implants in France, and the other part of her time working with her husband on their dairy and ice cream farm in the Alps. She hopes to soon be able to work toward becoming a certified LSLS AVT therapist. This long process, beginning with recognition as an orthophoniste in France, is in the works!  If you would like more information, e-mail her at

Confessions of an Ineraid User

By Carolyn Tata

Carolyn Tata

I was born with a moderate to severe hearing loss in both ears, cause unknown, and was fitted with my first body hearing aid at 11 months.  About a year later, the opposite ear was also aided, I believe using a “Y” cord with the one single aid.  After some time, I got a second body aid and wore the two simultaneously. I was mainstreamed from the start with the help of outside visits with a hearing/speech teacher.

In sixth grade, I upgraded to two BTEs after my teacher noticed I was not hearing as well. In my mid to late twenties, my hearing began declining rapidly.

In 1988, I suffered a bout of Tullio Syndrome in my left ear, rendering it unaidable. The amplified sound coming out of the hearing aid was distorted and would cause intense dizziness and loudness recruitment.

About a year after the Tullio incident, I met my ex-fiancé, who was a hearing aid dispenser.  At his suggestion, I  became curious about a new technology called a cochlear implant. Together we discussed and researched the idea.

I went to the Lahey Clinic for a CI evaluation. I was rejected because I did not meet the FDA guidelines for a clinical cochlear implant device. At that time, a candidate had to score 6% or less on the single-word tests, and I kept scoring 7%, outclassing myself.

Not wanting to give up, I went through two more cochlear implant evaluations: one at Yale University Hospital and the other at the Massachusetts Eye and Ear Infirmary (MEEI). Yale also rejected me for a clinical device, but recommended that I wait for an emerging cochlear implant system called the MiniMed, being developed in California. This was the precursor to today’s Advanced Bionics implants. I suspect they suggested waiting because it would buy time for either my hearing to deteriorate further or for the clinical guidelines to relax.

I did not want to wait, as I was fearful of losing the opposite ear any day. I lived independently and needed to keep working. At MEEI, I also did not qualify for the clinical program, but their cochlear implant research program gave me a two-pronged option: enroll in their program with the Nucleus device in a research capacity, or take on a more experimental device, the Ineraid. They had high hopes of obtaining FDA approval for the Ineraid at the time.

It took me some time, research into the marketing materials, rudimentary observation of others with the two systems, and finally some serious thought to make this decision. I concluded that it made sense to opt for a “generic” device such as the Ineraid. There were no implanted electronics that could break or hinder upgrades. All of the electronics existed *outside* of the body. The external hardware is connected to the implanted electrode array via an outlet that protrudes through the scalp. This outlet, working like a wall socket, is called the percutaneous pedestal. This meant that the hardware *and* software were outside the body, in the hands of developers. I decided simple “plug and play” was the way to go. I would have easy opportunities to try any of the latest technologies. Little did I know that the world was going to move to implanted electronics so quickly. And little did any of us know that the Ineraid, with the same low infection rate as other implants, would fail to gain FDA approval over infection-risk concerns.

Beat this selfie! My earhook connected to the percutaneous pedestal.

Ineraid processor, cable, and BTE

I underwent surgery for the Ineraid array on October 26, 1990. The operation was 5 to 6 hours long, with no immediate complications. Recovery took about 2 weeks. During this time, I struggled with intense dizziness. It may have had something to do with the Tullio issue, or possibly just postoperative fluid loss in the semicircular canals, which comprise the ear’s balance mechanism. The dizziness was steady at first but eventually subsided to an episodic pattern over the following year. At about 6 months, the episodes grew shorter and less severe, until they were finally gone completely a year later.

Hookup was in December. When I was initially switched on, I listened and immediately declared it was not working. All I heard was clicks and pings. It was pretty bad! Then Dr. Eddington, inventor of the Ineraid device, said “Wait” with a capital W. He brandished a screwdriver, bent over the opened processor, and began to twist 4 screws while literally voicing 4 vowel sounds: A, E, O and U. Each of those screwheads was the start of the lead to one of the 4 active electrodes in the implanted Ineraid array. I had to tell him when the sounds were equally loud. That was “mapping” in its earliest form. I was then instructed to take out the hearing aid in my opposite ear and go home. I recall pronounced lightheadedness from switching the usual ear off and the opposite one on. I felt very unbalanced. The feeling was so pronounced it was almost unbearable. I truly believe the experts did not realize the gravity of the stunt they expected me to perform.

Dr. Don Eddington

I was still hearing mostly clicks and bells. I stuck with it and recognized just one sound by the end of that first day: a dog barking in the distance. However, I believe it was just the cadence of the bark, and the fact that we were standing outside in a quiet suburban yard, that helped me identify the sound. It was another 2 weeks before I could get a toehold on the new stimuli. That toehold was the sound of folding vertical louver blinds. Once that sounded like blinds closing, other familiar sounds began to fill in for me. It was a domino effect, with many pieces falling into place. Once that got underway, my journey of discovering new sounds began. I would say this was a 5-year ongoing process.

It was a very mentally intense time, as there were so many new sounds I had never heard before (food sizzling in a pan, the hiss of a lit match, the cat scraping in the litter box, the time-clock beep, perfume spray, etc.). As for communication, it made lipreading immensely easier, but I still needed to lipread in most situations. I could use the phone only with the most familiar people. But I did enjoy music! Immensely, as I heard so many new high notes. My old favorites became new ones.

The first processor I used ran a primitive Simultaneous Analog Stimulation (SAS) strategy. SAS is basically an all-on firing strategy. What they discovered over time was distorted hearing from the electrodes firing simultaneously: the electrodes’ signals were fighting against each other. From the feedback of us subjects, they developed the idea of making the electrodes take turns, firing alternately so each one could have the “dance floor.” This was the birth of CIS, or Continuous Interleaved Sampling. I spent much of 10 years as a research subject participating in the development of this strategy. CIS serves as a foundation for many of today’s implant processing strategies.
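The difference between the two strategies can be pictured with a toy scheduling sketch in Python (a deliberate simplification, not the real clinical signal path; the electrode count and pulse width here are made-up parameters): SAS drives all electrodes at once, while CIS staggers short pulses so only one electrode is active at any moment.

```python
def cis_schedule(num_electrodes, pulse_width_us, frame_count):
    """Build a CIS-style interleaved pulse schedule.

    Returns (start_time_us, electrode) pairs in which exactly one
    electrode is active at a time -- the electrodes "take turns" on the
    dance floor instead of firing simultaneously as in SAS, so their
    electrical fields never fight each other.
    """
    schedule = []
    t = 0
    for _ in range(frame_count):
        for electrode in range(num_electrodes):
            schedule.append((t, electrode))
            t += pulse_width_us  # next pulse starts only after this one ends
    return schedule

# Four electrodes, 25-microsecond pulses, two full stimulation cycles:
for start, electrode in cis_schedule(4, 25, 2):
    print(f"t={start:3d}us  electrode {electrode}")
```

In a real processor each pulse’s amplitude would come from the envelope of that electrode’s frequency band, but the turn-taking above is the essence of “interleaved.”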

Because this was a new concept, there was not yet a wearable processor that ran CIS. I had to give feedback via tests in the laboratory, and we delivered our responses through a very simple computerized user interface. To be tested with the CIS strategy, I was seated in an open sound booth with wires running through the wall. One of these wires plugged directly into the pedestal in my scalp. I was plugged into a myriad of boxes that looked like our old stereo receivers, but even older than that. These had old-fashioned toggle switches and a green oscilloscope to illustrate pulse (or current?) strengths. Old stuff, but it still did what it needed to do! The tests would be set up in many different ways in the “back room,” and the feedback from me (from us subjects) came mostly via a Pong-like computer screen with three boxes that would light up with each signal. We would have to pick the one that was different. There were not just hours, but days and years of these seemingly identical tests.

I went to the MIT campus to perform speech production exercises. I did get the explanation that they were studying how speech changes when hearing is improved. They had me repeat the same sentences over and over with just one word slightly different. Ingrained in my mind is saying: "It's a pod again. It's a rod again. It's a mod again," and so on.

To record my speech actions, I had to wear a chest strap, electrodes on my throat and cheeks, and some kind of air mask, and speak into a microphone. There were many different kinds of exercises where I had to put up with discomfort and "perform" for two solid hours.

For a while I was living in Salt Lake City and working at the University of Utah Medical Center. One day I was walking down the corridor, and a man spotted me as an Ineraid patient. Of all people, it was the famous cochlear implant researcher, Dr. Michael Dorman! He asked me to volunteer as a test subject to try a new device. That was the start of a close and personal research collaboration.

Just before that time, Smith & Nephew Richards, the parent company overseeing the Ineraid product, decided to change some component on the processor board (a resistor? a transistor?) across the board. Whatever it was, it totally wreaked havoc on my hearing with the Symbion processor. At the time Dr. Dorman approached me, I was struggling with pretty lousy hearing that should not have been happening. Thankfully, Michael Dorman could clearly see through his testing how badly the revised Ineraid was serving me. I don't think it was his plan, but he decided to let me try a MED-EL processor modified to run CIS. OMG!!!! First I saw daylight with the Symbion; then I was awash in sunshine with this MED-EL treasure. I was so fortunate to be able to change processors, even to one from a different company.

MED-EL CIS PRO+ used with my Ineraid array!

Special MED-EL CIS LINK earhook

I was astounded when I returned to my office the same day he gave me this processor to try. People, like my boss, saw the wonder and joy on my face. This was *connection*! However, I'm guessing it might have been too much connection, as I think the processor was set too sensitively. I think I heard things I was not supposed to hear (and why not, is always my question!). I was hearing things in my home that my companion could not, like the rushing of air through the ducts.

Dr. Dorman provided more explanations about his testing than the folks back in Boston had. In both Boston and Salt Lake City, there were many, many threshold and pitch discrimination tests. Dr. Dorman's pitch discrimination tests with me showed how scrambled my hearing was with the "newly improved" processor.

Dr. Dorman explained how CIS worked, and it became clear to me why they ran the pitch tests. They were varying the electric outputs of individual electrodes to create virtual electrodes at the points where the electrodes' electrical fields intersected. I thought this was the coolest concept!
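The virtual-electrode idea can be sketched with simple arithmetic: splitting the current between two neighboring physical electrodes moves the effective place of stimulation to a point between them. This is only a toy illustration with made-up positions and units, not the actual stimulation math used in the research:

```python
# Toy sketch of a "virtual electrode" via current steering:
# sending fractions of the current to two neighboring electrodes
# shifts the peak of the combined electrical field to a point
# between their physical positions along the array.

def virtual_electrode_position(pos_a, pos_b, alpha):
    """Effective stimulation place between electrodes A and B.

    alpha is the fraction of current steered to electrode B
    (0.0 = all current on A, 1.0 = all current on B).
    """
    assert 0.0 <= alpha <= 1.0
    return (1 - alpha) * pos_a + alpha * pos_b

# Electrodes at positions 3.0 and 4.0 (arbitrary units along the
# array): steering 25% of the current to the second electrode
# places the effective stimulation a quarter of the way between them.
print(virtual_electrode_position(3.0, 4.0, 0.25))  # 3.25
```

Sweeping alpha from 0 to 1 would walk the perceived pitch smoothly from one physical electrode to the next, which is presumably why so many fine-grained pitch discrimination tests were needed.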

I wore the MED-EL processor when I moved back to the East Coast. Unfortunately, it began to produce static, which grew louder over time. No one could resolve the issue, so Dr. Don Eddington retrieved that beloved MED-EL processor and returned it to Dr. Dorman. Dr. Eddington then put a Geneva processor on me, a body-worn processor that could run CIS, which he had developed with some people in Switzerland. I never liked it as well as I did that MED-EL, but it could all have been in the settings and programming. Who knows. Could I get it back? I don't know. If I could, the other major question would be: who would service it?

Fast forward to 2003, when the opposite ear was implanted, this time with a clinical device. It was a pleasantly easier experience: much simpler prep, shorter surgery, and a much shorter recovery time. We can thank the better surgical methods that followed the initial trials on people like me the first time around. Rehabilitation was also much easier and faster with this CI, because I now had a good basis established for the sounds I was about to hear.

Now, in 2014, with two CIs, I have been elated to enjoy the advantages that come with binaural hearing. It took 24 years to get to this point, but I still appreciate my chances to scrabble along this path, which has helped countless others following me. It was hard work, but also very rewarding. I have many cherished memories from the "old" days that others today would never get to experience. I feel gratification from watching the recipients who follow our research efforts and findings. I am thankful to have contributed a little bit to bettering some lives. Thank you for listening!