Advanced Bionics Grows over 30% Year-over-Year for FY2013/2014

Sonova, the parent company of Advanced Bionics, reports that AB’s annual growth was over 30% in both Swiss francs and local currencies. The company says:

“Cochlear implants segment – Drawing from a complete portfolio

The performance of the cochlear implants segment was another highlight in the year under review. The segment achieved sales of CHF 195.3 million, an increase of 33.1% in Swiss francs and 36.0% in local currencies. Supported in particular by the launch of the Naída CI Q70 sound processor in summer 2013, sales accelerated over the course of the year, exceeding a year-on-year growth of 50% in the second half of 2013/14. Europe and North America in particular responded very well to the new sound processor that incorporates many industry-first innovations shared with Phonak hearing aids. The balanced portfolio of electrodes and Advanced Bionics’ swimmable processor also supported growth, which reflected both the addition of new customer clinics and increased penetration at existing accounts. As in the previous year, cochlear implants sales included the fulfillment of a central government tender in China.

Profit from the cochlear implants segment improved strongly during financial year 2013/14, in line with our business plan, despite significant expenses from the launch of new products, particularly the new Naída CI Q70 sound processor. EBITA for the segment reached CHF 12.8 million, representing an operating margin of 6.6%. This is an important step towards our goal of bringing the EBITA margin of the cochlear implants business closer to the corporate average. Normalized for one-off costs, principally the increased product liability provision related to Advanced Bionics’ Vendor B product recall in 2006, the cochlear implants segment had achieved an EBITA of CHF 1.8 million in the previous financial year. In 2013/14 the relevant parameters for the said product liability provision developed fully in line with the assumptions considered in the accounts of the previous financial year. Thus no releases or additions with P&L effect were booked to the provision in the year under review.”
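For readers who want to check the quoted figures, the 6.6% operating margin is simply segment EBITA divided by segment sales. Here is a minimal sketch of that arithmetic, using only the numbers reported above:

```python
# Quick check of the reported operating margin, using the figures quoted
# above: CHF 195.3 million in segment sales and CHF 12.8 million in EBITA.
sales_chf_m = 195.3
ebita_chf_m = 12.8
print(f"EBITA margin: {ebita_chf_m / sales_chf_m:.1%}")  # prints "EBITA margin: 6.6%"
```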

Read the full financial report. Information specific to Advanced Bionics is on pages 26 and 27.

AB also plans to continue the rapid pace of new product introductions. Slide 27 of the investor presentation shows a rough timeline of new electrodes, implants, and processors through fiscal year 2016.

AB development plan FY2014

Advanced Bionics Neptune Sound Processor Gains Japanese Approval

Neptune, the world’s first and only swimmable sound processor for cochlear implant recipients, is now approved in Japan.

Read more here.


MED-EL RONDO and OPUS 2 User Manuals Now Available!

Have you ever forgotten how to pair a FineTuner with your RONDO or OPUS 2? Or have you gotten a blinking pattern on the LED and wondered what it meant? You can carry your user manual around until it becomes dog-eared, or you can just bookmark the files on CochlearImplantHELP.com!

Your user manuals for the RONDO and OPUS 2 will always be available here at CochlearImplantHELP.com. Bookmark the links, or look for them on our Guides page.

Cochlear Nucleus 6 for N22, N24 Schedule Update

The Nucleus 6 processor from Cochlear will be available for CI24M and CI24R implants by summer 2014, at least in the UK. The company is working on N22 compatibility, and hopes to submit the update to regulatory agencies by the end of 2014 for approval.

Read the update here.

Auditory-Verbal Therapy & Telepractice: What’s Happening in France

By Hilary Coté Depeyre, M.A. M.S. CCC-SLP

In countries around the world, Auditory-Verbal Therapy (AVT) is recognized and used as a principal method by which deaf and hard of hearing children learn to communicate effectively through listening and spoken language. Today, a child with a profound hearing loss can learn to listen and speak earlier and better than ever before thanks to advancements in early detection of hearing loss, advanced cochlear implant technology, and family-centered early intervention. With these technologies and interventions, the degree of a hearing loss no longer determines a child’s spoken language outcome.

In the AV method, the ultimate goal is age-appropriate auditory, language, speech, cognition, and communication skills for a child, meaning that he or she will be in a mainstreamed environment (regular classroom) as soon as possible.  Just like hearing children, deaf and hard of hearing children develop spoken language skills through listening, and their parents help to highlight the meaningfulness of sounds throughout the day. Visual cues are not used, and one-on-one teaching is critical. Auditory-Verbal Therapy sessions are planned to provide coaching to parents as they interact with their child. The therapist can give feedback and provide strategies to parents as they help their child build language skills and use their cochlear implant.

There are currently no certified AV therapists or Listening and Spoken Language Specialists (LSLS) in France. There is, however, no shortage of parents following the method and looking for guidance. As an American-certified Speech-Language Pathologist working in France, I am using the foundations of AVT to support families as they help their children reach their full potential as cochlear implant users. Caroline Pisanne is one of the pioneering mothers who first sought AVT for her son via telepractice, and thanks to her website, more parents in France are becoming aware of AVT.

I was first hesitant to start practicing speech therapy in France via telepractice, as presenters at the AG Bell 2012 Convention I attended spoke about their advanced telepractice platforms, and how they kept blogs for each family, had very fast internet connections, and could even send materials and needed technology to families prior to a session. Needless to say, I did not feel ready for this! However, the families that Caroline referred to me were motivated for their children, and did not seem to mind the occasional hiccups setting up the technology. What was important to them was that I could provide professional assistance in a language they understood, and they were open to trying something new. Telepractice was new and exciting for me as well, but the biggest factor was my recent arrival in France. I had no other way to continue the work I love. While nearly all of the telepractice programs I had heard about were set up to help families who were remote, I was drawn to it as I was remote!

So, the Skype sessions began, and it took a couple of sessions to get the hang of this new way of doing therapy. A month into sessions, we were naturals.

What does a typical session look like?

AVT telepractice

A day or two before the session, I provide the families with a lesson plan, including objectives and activities as well as a list of materials that we will be using. In this way, they can prepare and know what to expect.

The actual session is similar to a face-to-face session. We let the child play, books are shared to encourage language development and early literacy skills, and songs and rhymes are incorporated. As the parents are physically with their child, they are the teachers, not me. This aspect of telepractice directly incorporates the principle of AVT, that parents are the most important models for learning speech and spoken communication.

Throughout the session, I coach the parents and share with them critical strategies to incorporate into their everyday communication. Therapy is always diagnostic in nature, meaning that I continually monitor the child’s progress and modify the activities or goals when needed.

Following the session, parents receive a summary with progress notes as well as ideas for how to incorporate these new strategies into their everyday routines. Ideally, these sessions happen weekly.

What are the benefits to telepractice?

  • Children are in the comfort and familiarity of their own home
  • Fewer sessions are skipped due to illness or other life disturbances
  • Flexible scheduling based on the family’s needs
  • No travel time is needed for either the family or the therapist
  • Multiple family members can more easily participate

What are the drawbacks to telepractice?

  • Technology issues with the webcam, audio, slow Internet connection, etc.
  • Poor sound quality and distance of the child from the microphone can make it difficult to accurately judge articulation skills
  • Using Skype alone and not a more advanced platform limits what we can do
  • Video lag can lead to talking over each other

With telepractice, families in France can now have professional guidance in French or English following a method they believe in. There are still no certified LSLS AVTs in France, but step-by-step we are increasing awareness and using the resources we have for families to help their children maximize the use of their cochlear implants. Let’s hope that someday soon a French orthophoniste will pursue LSLS AVT certification!

The Principles of Auditory-Verbal Therapy

  1. Working toward the earliest possible identification of hearing loss in infants and young children, ideally in the newborn nursery. Conducting an aggressive program of audiologic management.
  2. Seeking the best available sources of medical treatment and technological amplification of sound for the child who is deaf or hard of hearing as early as possible.
  3. Helping the child understand the meaning of any sounds heard, including spoken language, and teaching the child’s parents how to make sound meaningful to the child all day long.
  4. Helping the child learn to respond and to use sound in the same way that children with normal hearing learn.
  5. Using the child’s parents as the most important models for learning speech and spoken communication.
  6. Working to help children develop an inner auditory system so that they are aware of their own voices and will work to match what they say with what they hear others say.
  7. Knowing how children with normal hearing develop sound awareness, listening, language, and intellect and using this knowledge to help children with hearing impairments learn new skills.
  8. Observing and evaluating the child’s development in all areas. Changing the child’s training program when new needs appear.
  9. Helping children who are deaf or hard of hearing participate educationally and socially with children who have normal hearing by supporting them in regular education classes.

About the Author


Hilary Coté Depeyre, M.A. M.S. CCC-SLP is an American Speech-Language Pathologist who has settled in France, thanks to her French husband. She spends part of her time working through telepractice with children with cochlear implants in France, and the other part of her time working with her husband on their dairy and ice cream farm in the Alps. She hopes to soon be able to work toward becoming a certified LSLS AVT therapist. This long process, beginning with recognition as an orthophoniste in France, is in the works!  If you would like more information, e-mail her at hilary.cote@gmail.com.

Google to Acquire MED-EL


This is an April Fool’s post.  Cochlear Implant HELP strives to provide timely and accurate information.  So as not to mislead our readers, we now identify April Fool’s posts that mention specific cochlear implant manufacturers with this header.  Our posts often hint at features that would exceed the hopes of many of our readers by far.  While the posts are intended in jest, they do reflect some of the wishes of the community, and manufacturers might benefit from accepting these as inputs for longer-range product possibilities.

In a dramatic move certain to shake up the already heated competition in the cochlear implant industry, Google and MED-EL announced that Google will purchase MED-EL.  Both companies see tremendous amounts of room for technology improvements.

Perhaps the most interesting comment is Larry Page’s offhand announcement of ‘Google Ear,’ which will start out like a hybrid between a smart phone and a cochlear implant processor, but will evolve into a paradigm shift for how anybody interacts with the Internet.

Married Couple Live TV Activation

Tim and Natalie Nobes, both 44, have been deaf their whole lives. They had cochlear implant surgery on the same day, and were activated live on Australian television. Unfortunately, this version of the videos isn’t captioned, but CochlearImplantHELP is working on getting the studio to add subtitles.

Tim

The main segment includes the family story and Natalie’s activation. And Tim’s activation follows in an emotional conclusion.

Hybrid Cochlear Implant Receives FDA Approval

The U.S. Food and Drug Administration today approved the first implantable device for people 18 and older with severe or profound sensorineural hearing loss of high-frequency sounds in both ears, but who can still hear low-frequency sounds with or without a hearing aid. The Nucleus Hybrid L24 Cochlear Implant System may help those with this specific kind of hearing loss who do not benefit from conventional hearing aids.

Read more on the FDA Press Release.

Confessions of an Ineraid User

By Carolyn Tata

Carolyn Tata

I was born with a moderate to severe hearing loss in both ears, cause unknown, and was fitted with my first body hearing aid at 11 months.  About a year later, the opposite ear was also aided, I believe using a “Y” cord with the one single aid.  After some time, I got a second body aid and wore the two simultaneously. I was mainstreamed from the start with the help of outside visits with a hearing/speech teacher.

In sixth grade, I upgraded to two BTEs after my teacher noticed I was not hearing as well. In my mid-to-late twenties, my hearing began declining rapidly.

In 1988, I suffered a bout of Tullio Syndrome in my left ear, rendering it unaidable. The amplified sound coming out of the hearing aid was distorted and would cause intense dizziness and loudness recruitment.

About a year after the Tullio incident, I met my ex-fiancé, who was a hearing aid dispenser.  At his suggestion, I  became curious about a new technology called a cochlear implant. Together we discussed and researched the idea.

I went to Lahey Clinic for a CI evaluation.  I was rejected because I did not meet the FDA guidelines for a clinical cochlear implant device. At that time, a prospect had to score 6% or less on the single word tests, and I kept scoring 7%, outclassing myself.

Not wanting to give up,  I went through 2 more cochlear implant  evaluations, one at Yale University Hospital, and the other at Massachusetts Eye and Ear Infirmary (MEEI).  Yale also rejected me for a clinical device, but recommended that I wait for a new emerging cochlear implant system called the MiniMed, being developed in California.  This was the precursor to today’s Advanced Bionics implants.  I suspect they suggested waiting because it would buy me more time for either my hearing to deteriorate further, or for the clinical guidelines to relax.

I did not want to wait, as I was fearful of losing the opposite ear any day.  I live independently and needed to keep working.  At MEEI, I also did not qualify for a clinical program, but their cochlear implant research program gave me a two-pronged option: enroll in their program with the Nucleus device in a research capacity, or  take on a more experimental device, the Ineraid.  They had high hopes to obtain FDA approval for the Ineraid at the time.

It took me some time, research into the marketing materials, rudimentary observation of others with the two systems, and finally serious thought to make this decision. I concluded that it made sense to opt for a “generic” device such as the Ineraid. There were no implanted electronics that could break or hinder upgrades. All of the electronics existed *outside* of the body. The external hardware connected to the implanted electrode array via an outlet that protruded through the scalp. This outlet, working like a wall socket, is called the percutaneous pedestal. This meant that the hardware *and* software were outside the body, in the hands of developers. I decided simple “plug and play” was the way to go. I would have an easy opportunity to try any of the latest technologies. Little did I know that the world was going to move to implanted electronics so quickly. And little did any of us know that the Ineraid, with the same low infection rate as other implants, would not gain FDA approval, on the grounds of infection risk.

Beat this selfie! My earhook connected to the percutaneous pedestal.

Ineraid processor, cable, and BTE

I underwent surgery for the Ineraid array on October 26, 1990. The operation was 5 to 6 hours long, with no immediate complications.  Recovery was about 2 weeks.  During this time, I was struggling with intense dizziness.  It may have had something to do with the Tullio issue, or possibly just postoperative fluid loss in the semicircular canals which comprise the ear’s balance mechanism.  The dizziness was steady, but did eventually subside to an  episodic pattern over the following year.  At about 6 months, the episodes grew shorter and less severe until they were finally gone completely a year later.

Hookup was in December. When I was initially switched on, I listened and immediately declared it was not working. All I heard was clicks and pings. It was pretty bad! Then Dr. Eddington, inventor of the Ineraid device, said “Wait” with a capital W. He then brandished a screwdriver. With that screwdriver, he bent over the opened processor and began to twist 4 screws while literally voicing 4 vowel sounds: A, E, O, and U. Each of those screwheads was the start of the lead to one of the 4 active electrodes in the implanted Ineraid array. I had to tell him when they sounded of equal loudness. That was “mapping” in its earliest form. I was then instructed to take out the hearing aid in my opposite ear and go home. I recall pronounced lightheadedness from switching the usual ear off and the opposite one on. I felt very unbalanced, a feeling so pronounced it was almost unbearable. I truly believe the experts did not realize the gravity of the stunt they expected me to perform here.

Dr. Don Eddington

I was still hearing mostly clicks and bells. I stuck with it and recognized just one sound by the end of that first day: a dog barking in the distance. However, I believe it was just the cadence of the bark and the fact that we were standing outside in a quiet suburban yard that helped me identify the sound. It was another 2 weeks before I could get a toehold on the new stimuli. That toehold was the sound of folding vertical louver blinds. Once that sounded like blinds closing, other familiar sounds began to fill in for me. It was a domino effect, with many pieces falling into place. Once that got underway, my journey of discovering new sounds began. I would say this was a 5-year ongoing process.

It was a very mentally intense time, as there were so many new sounds I had never heard before (food sizzling in a pan, the hiss of a lit match, the cat scraping in the litter box, the time clock beep, perfume spray, etc.). As for communication functionality, it made lipreading immensely easier, but I still needed to lipread in most situations. I could use the phone only with the most familiar people. But I did enjoy music! Immensely, as I heard so many new high notes. My old favorites became new ones.

The first processor I used operated on a primitive Simultaneous Analog Stimulation (SAS) strategy. SAS is basically an all-on firing strategy. What they discovered over time was distorted hearing from these 8 electrodes firing simultaneously. The electrodes’ signals were fighting against each other. From the feedback of us subjects, they developed the idea of making the electrodes take turns and fire alternately so each one could have the “dance floor” to itself. This was the birth of CIS, or Continuous Interleaved Sampling. I spent much of 10 years as a research subject participating in the development of this strategy. CIS served as a foundation for many of today’s implant processing strategies.
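For readers who like to see the idea concretely, here is a minimal sketch in Python (my own simplification for illustration, not actual Ineraid or laboratory code) of the two firing patterns described above: SAS turns every electrode on at the same instant, while CIS gives each electrode its own brief time slot so their fields never overlap.

```python
# Illustrative sketch only: SAS-style simultaneous firing vs. CIS-style
# interleaved firing. Channel levels and the time slot are made-up values.

channel_levels = [0.8, 0.5, 0.9, 0.3]   # hypothetical per-electrode amplitudes

def sas_frame(levels):
    """Simultaneous stimulation: every electrode is on at t=0, so
    neighbouring electric fields overlap and can 'fight' each other."""
    return [(0, electrode, level) for electrode, level in enumerate(levels)]

def cis_frame(levels, slot_us=25):
    """Interleaved stimulation: electrodes take turns, one pulse per time
    slot, so only one field is active at any moment."""
    return [(electrode * slot_us, electrode, level)
            for electrode, level in enumerate(levels)]

print("SAS (time_us, electrode, level):", sas_frame(channel_levels))
print("CIS (time_us, electrode, level):", cis_frame(channel_levels))
```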

Because this was a new concept, there was not yet a wearable processor that ran CIS. I had to give feedback via tests in the laboratory, and we delivered our responses via a very simple computerized user interface. In order to be tested with the CIS strategy, I was seated in an open sound booth with wires running through the wall. One of these wires plugged directly into the pedestal in my scalp. I was plugged directly into a myriad of boxes that looked like our old stereo receivers, but even older than that. These had old-fashioned toggle switches and a green oscilloscope to illustrate pulse (or current?) strengths. Old stuff, but it still did what it needed to do! The tests would be set up in many different ways in the “back room,” and the feedback from me (us subjects) was mostly via a Pong-like computer screen with three boxes that would light up with each signal. We would have to pick the one that was different. It was not just hours, but days and years of these seemingly identical tests.

I went to the MIT campus to perform speech production exercises. I did get the explanation that they were studying how speech changes when hearing is improved. They had me repeat the same sentences over and over with just one word slightly different. Ingrained in my mind is saying: “It’s a pod again. It’s a rod again. It’s a mod again.” and so on.

To record my speech actions, I had to wear a chest strap, electrodes on my throat and cheeks, some kind of air mask, and speak into a microphone.  There were many different kinds of exercises where I had to put up with discomfort and “perform” for 2 solid hours.

For a while I was living in Salt Lake City and working at the University of Utah Medical Center. One day I was walking down the corridor, and a man spotted me as an Ineraid patient. Of all people, it was the famous cochlear implant researcher, Dr. Michael Dorman! He asked me to volunteer as a test subject to try a new device. That was the start of a close and personal research endeavor.

Just before that time, Smith & Nephew Richards, the parent company overseeing the Ineraid product, decided to change, across the board, some component on the processor board (a resistor, a transistor?). Whatever it was, it totally wreaked havoc on my hearing with the Symbion processor. At the time I was approached by Dr. Dorman, I was struggling with pretty lousy hearing that should not have been that poor. Thankfully, Michael Dorman could see clearly through his testing how badly the revised Ineraid was serving me. I don’t think it was his plan, but he decided to let me try a MED-EL processor modified to run CIS. OMG!!!! First I saw daylight with the Symbion, then I was awash in sunshine with this MED-EL treasure. I was so fortunate to be able to change processors, even to one from a different company.

MED-EL CIS PRO+ used with my Ineraid array!

Special MED-EL CIS LINK earhook

I was astounded when I returned to my office that same day, the day he gave me this processor to try. People like my boss saw the wonder and joy in my face. This was *connection*! However, I’m guessing it might have been too much connection, as I think the processor was set too sensitively. I think I heard things I was not supposed to hear (and why not, is always my question!). I was hearing things around my home that my companion could not, like the rushing of air through the ducts.

Dr. Dorman provided more explanations about his testing than the folks back in Boston did. In both Boston and Salt Lake City, there were many, many threshold and pitch discrimination tests. Dr. Dorman did many pitch discrimination tests with me, which showed how scrambled my hearing was with the “newly improved” processor.

Dr. Dorman explained how CIS worked and it became clear to me the reasons for the pitch tests. They were varying the electric outputs of the individual electrodes to create virtual electrodes at the points where the electrical fields of the electrodes intersected. I thought this was the coolest concept!
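Here is a small, simplified sketch of that current-steering idea (again my own illustration, not the lab’s software): splitting one pulse’s current between two adjacent physical electrodes shifts the peak of the combined field, so the listener perceives something like a virtual electrode at an intermediate place, and hence an intermediate pitch.

```python
# Illustrative sketch of current steering: one pulse's current is split
# between two adjacent electrodes; the split ratio (alpha) moves the peak
# of the combined field, acting like a "virtual" electrode in between.

def steer(total_current_ua, alpha):
    """alpha in [0, 1]: 0 = all current on electrode A, 1 = all on
    electrode B, 0.5 = a virtual electrode roughly halfway between."""
    current_a = (1.0 - alpha) * total_current_ua
    current_b = alpha * total_current_ua
    return current_a, current_b

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    a, b = steer(100.0, alpha)
    print(f"alpha={alpha:.2f}: electrode A {a:5.1f} uA, electrode B {b:5.1f} uA")
```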

I wore the MED-EL processor when I moved back to the East Coast. Unfortunately, it began to produce static, which grew louder over time. No one could resolve the issue, so Dr. Don Eddington retrieved that beloved MED-EL processor and returned it to Dr. Dorman. Dr. Eddington then put a Geneva processor on me, a body-worn processor that could run CIS, which he had developed with some people in Switzerland. I never liked it as well as I did that MED-EL. But it could all have been in the settings and programming. Who knows? Could I get that MED-EL back? I don’t know. If I could, the other major question would be: who would service it?

Fast forward to 2003, when the opposite ear was implanted, with a clinical device this time. It was a much easier experience: simpler prep, a shorter surgery, and a much shorter recovery time. We can thank better surgical methods, refined after initial trials on people like me the first time around. Rehabilitation was also much easier and faster with this CI because I now had a good basis established for the sounds I was about to hear.

Now that it is 2014 and I have two CIs, I have been elated to enjoy the advantages that come with binaural hearing. It took 24 years to get to this point, but I still appreciate my chance to scrabble along this path, which has helped countless others following me. It was hard work, but also very rewarding. I have many cherished memories from the “old” days that others today would never get to appreciate. I feel gratification from watching the recipients who have followed our research efforts and findings. I am thankful to have contributed a little bit to bettering some lives. Thank you for listening!

Michael Chorost reviews the Naída CI Q70 Processor

Author and cochlear implant user Michael Chorost has published a detailed review of the Naída CI Q70 processor, filled with user experiences and pros and cons of the processor.  Read the full review here.