Hearing Loss in Percentages and Decibels [en]

As the founding editor of Phonak’s community blog “Open Ears” (now part of “Hearing Like Me”) I contributed a series of articles on hearing loss between 2014 and 2015. Here they are.

For years, I’ve been mystified whenever I hear people refer to their hearing loss in percentages. “I have lost 37% hearing in my left ear.”

Since I was thirteen and had my first audiogramme, that is how I’ve been thinking of hearing loss: in decibels, presented as a graph of how loud a sound needs to be for me to hear it, at various frequencies. I’ve already shown my audiogramme on Open Ears, but here it is again:

Steph Audiogram

As you can see, at 500Hz I don’t hear sounds below 50dB, but at 4000Hz (higher-pitched sounds) my left ear has almost “normal” hearing, as I can hear sounds as soft as 20dB. As is the case for most people, my hearing loss is not the same at all frequencies.

Hence the mystification: how could one express all this with a single number? And how would you convert decibels (a logarithmic scale, where each 10dB step means ten times the sound intensity) into percentages?
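To give an idea of what that logarithmic scale means in practice, here is a tiny Python sketch (the figures are purely illustrative, not taken from my audiogramme):

```python
def intensity_ratio(db_difference):
    """How many times more sound energy a level difference represents:
    every 10dB step multiplies the intensity by 10."""
    return 10 ** (db_difference / 10)

print(intensity_ratio(10))  # 10.0
print(intensity_ratio(30))  # 1000.0 -- a 30dB gap is a thousandfold jump in intensity
```

Squashing a curve of such values, one per frequency, into a single linear percentage is already a strange exercise before you even pick a formula.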

Many months ago, I wanted to write a blog post about this. I did some research, asked some people, and stumbled upon a formula which didn’t feel very convincing (maybe this one). Even then, it seemed to me that there was more than one method, which rather discouraged me from investigating further.

A couple of months back, there was a consumer advocacy piece on Swiss TV where they sent somebody with hearing loss to various audiologists to see what solutions were recommended (and at what price). The surprising part was that the “hearing loss” of the test subject, expressed in percentages, varied wildly from shop to shop.

Anyway, this put me back on track to figure out how on earth they converted audiogrammes to percentages. Despite hearing people talk about their hearing loss in percentages all these years, I’d never been given a percentage value for mine.

I roped in Pascal to investigate and thanks to him finally got some satisfactory answers. Here are my take-aways.

First, the proper way to describe hearing loss is the audiogramme. As one can guess based on the results of the Swiss TV programme and the discussion around Christina’s article about “cheating” the test, taking somebody’s audiogramme is a bit of an art-form, although technically it is a rather simple procedure. Done well, it should produce the same result independently of who is measuring it (assuming your hearing is stable). This is, by the way, my personal experience with my audiogramme, done and redone over the years by three doctors and at least as many audiologists.

Second, there seems to be no end of formulas to “convert” audiogrammes to percentages, even in the United States alone (now extrapolate that to the rest of the world). And the results vary. According to one method, I have “25.3%” hearing loss. According to another, the one used by the doctors in the Swiss TV programme (CPT/AMA table reproduced below), I have “48.7%” in one ear and “40.7%” in the other.

AMA/CPT Table

Does this really mean anything? Does it make any sense to say I am missing “roughly half my hearing” or “a quarter of my hearing”? The first formula uses a kind of weighted average where you multiply the “good ear” by 5 — why on earth by 5? Quoting the article I just linked to: “Notably, while a five-to-one weighting is common among hearing impairment calculations, there is no research basis for this particular proportion.”
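For what it’s worth, here is a rough Python sketch of the calculation as I understand it from the AMA/CPT table: average the thresholds at 500, 1000, 2000 and 3000Hz, subtract a 25dB “low fence”, count 1.5% per remaining dB, then combine the two ears with that five-to-one weighting. The threshold values below are made up for illustration; they are not mine.

```python
def monaural_impairment(thresholds_db):
    """Percentage impairment for one ear: average the thresholds at 500, 1000,
    2000 and 3000Hz, subtract a 25dB "low fence", then count 1.5% per remaining
    dB, clamped between 0 and 100."""
    pta = sum(thresholds_db) / len(thresholds_db)
    return min(max((pta - 25) * 1.5, 0), 100)

def binaural_impairment(better_ear, worse_ear):
    """Combine both ears, weighting the better ear five to one."""
    return (5 * better_ear + worse_ear) / 6

# Made-up thresholds (dB HL) at 500, 1000, 2000, 3000Hz -- not my audiogramme.
left = monaural_impairment([50, 45, 40, 30])
right = monaural_impairment([45, 40, 35, 25])
print(round(binaural_impairment(min(left, right), max(left, right)), 1))
```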

Third, again referring to the very interesting discussion in the same article, the need for a simple way to express hearing loss “objectively” seems to have its roots (at least some of them) in compensation for work-related hearing loss. If you’re going to give money to a worker because of hearing lost on the job, there has to be an objective and simple way to determine how much. Which is tricky, when we realise that even an audiogramme is a rather poor indicator of the real-life impact of hearing loss. Two people with similar audiogrammes may feel differently impaired in their lives by their hearing loss. Quoting again:

Few studies have found evidence for any of the several arithmetic hearing loss calculations in current or recent use in the US, as an effective measure of real-world hearing difficulty. More significantly, a literature review was unable to identify any study that has used appropriate statistical methods to evaluate the relative strength of association between these hearing impairment calculations and self-report measures.

Well, there we are. Measuring hearing loss is a hairy affair, and percentages don’t seem to me a very useful way of expressing it: the calculation methods vary, sometimes seem pretty arbitrary, and apparently don’t correlate well with the real impact hearing loss has on our lives.

Flock, extensions, and coComment [en]

[fr] A site address for converting Firefox extensions for use with Flock, which is an excellent browser. I was disappointed not to be able to use the coComment Firefox extension with Flock -- now I can!

My ex-colleague and now friend Gabriel introduced me to the Flock browser quite some time back. I mentioned it quite a bit on my other blog, but I don’t think I’ve talked about it much here.

Anyway, it’s great. It’s Firefox, but with all sorts of nice bloggy, flickr-y, del.icio.us-y stuff tied in. I’d like to get coComment integrated in there too.
(Disclaimer: I work for coCo.)

One thing that makes coComment really nice to use is the Firefox extension. Once you’ve installed it, you don’t need to do anything: it automatically records all the comments you make (as long as the blog platform is more or less compatible) so that they show up on your user page. Here’s mine.

The thing that bothered me when I started using Flock again some time back was that I had to revert to using the bookmarklet (which, let’s be honest, is a real pain — who remembers to click on a bookmarklet before posting each comment? Not I!). Today, as I was starting on my tour of the blogosphere to see what people are saying about coComment, I came upon another Flock user who regretted that the extension wasn’t compatible.

So, I headed to our internal bug-tracker to find out what the status of my request for a Flock extension was, and saw that Nicolas (coComment’s Daddy!) was asking for more information on converting extensions. I googled a little and here’s what I came up with:

Well, I installed the extension in Flock, restarted my browser, and after a painful start (wouldn’t be able to tell you if it was because of the extension or just good ol’ Windows acting up) it was up and running. I now have Flock running the coComment Firefox extension!

Let me know how it goes for you if you try it, particularly on other platforms. And if you haven’t tried Flock yet, you should. It’s really neat!

Finally out of MySQL encoding hell [en]

[fr] A description of how I got out of the encoding problems that were making all the sites hosted on my server display hieroglyphics.

It took weeks, mainly because I was busy with a car accident and the end of school, but it also took about two full days of banging my head on the desk to get it fixed.

Here’s what happened: remember, a long time ago, I had trouble with stuff in my database which was supposed to be UTF-8 but seemed to be ISO-8859-1? And then, sometime later, I had a weird mixture of UTF-8 and ISO-8859-1 in the same database?

Well, somewhere along the line this is what I guess happened: my database installation must have been serving UTF-8 content as ISO-8859-1, leading me to believe it was ISO-8859-1 when it was in fact UTF-8. That led me to try to convert it to UTF-8 — meaning I took UTF-8 strings and ran them through a converter supposed to turn ISO-8859-1 into UTF-8. The result? Let’s call it “double-UTF-8” (doubly encoded UTF-8), for want of a better name.
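To make that concrete, here is a tiny Python sketch of what I think happened to each accented character (an illustration, not the script that actually did the damage):

```python
original = "é"
stored = original.encode("utf-8")   # b'\xc3\xa9' -- correct UTF-8 in the database
misread = stored.decode("latin-1")  # 'Ã©' -- the server treating those bytes as ISO-8859-1
double = misread.encode("utf-8")    # b'\xc3\x83\xc2\xa9' -- "double-UTF-8": four bytes instead of two
print(misread, double)
```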

Anyway, that’s what I had in my database. When we upgraded MySQL and PHP on the server, I suddenly started seeing a load of junk instead of my accented characters:

encoding-problem-2

What I was seeing looked suspiciously like what UTF-8 looks like when your server setup is messed up and serves it as ISO-8859-1 instead. But, as you can see in the picture above, this page was being served as UTF-8. How did I know it wasn’t ISO-8859-1 in my database rather than this hypothetical “double-UTF-8”? Well, for one, I knew the page was served as UTF-8, and I also knew that ISO-8859-1 (latin-1) served as UTF-8 makes accented characters look like question marks. Then, if I wanted to be sure, I could just change the page encoding in Firefox to ISO-8859-1 (that should have made it look right if it really was ISO-8859-1, shouldn’t it?). Well, it made it look worse.
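In Python terms, the difference between the two situations looks something like this (again, just an illustration):

```python
# Genuine latin-1 bytes pushed through a UTF-8 decoder mostly fail,
# which is why they show up as question marks / replacement characters...
latin1_bytes = "é".encode("latin-1")                   # b'\xe9'
print(latin1_bytes.decode("utf-8", errors="replace"))  # '�'

# ...whereas UTF-8 bytes read as latin-1 decode "successfully" into Ã-style junk:
utf8_bytes = "é".encode("utf-8")                       # b'\xc3\xa9'
print(utf8_bytes.decode("latin-1"))                    # 'Ã©'
```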

Another indication was that when the MySQL connection encoding (in my.cnf) was set back to latin-1 (ISO-8859-1), the pages seemed to display correctly, but WordPress broke.

The first post in the picture I’m showing here looks “OK” because it was posted after the setup was changed: it really is UTF-8.

Now how did we solve this? My initial idea was to take the “double-UTF-8” content of the database (and don’t forget it was mixed with the more recent UTF-8 content) and convert it “from UTF-8 to ISO-8859-1”. I had a python script we had used to fix the last MySQL disaster which converted everything to UTF-8 — I figured I could reverse it. So I rounded up a bunch of smart people (dda_, sbp, bonsaikitten and Blackb|rd — and countless others, sorry if I forgot you!) and got to work.
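On a single string, the reversal I had in mind looks deceptively simple in Python (this is a sketch, not the script we actually used; remember the real data was a mix of encodings, which is exactly why it wasn’t this easy):

```python
# Start from a "double-UTF-8" string, then undo one layer of encoding.
double = "é".encode("utf-8").decode("latin-1").encode("utf-8")    # the damaged bytes
fixed = double.decode("utf-8").encode("latin-1").decode("utf-8")  # back to plain UTF-8
print(fixed)  # é
```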

It proved a hairier problem than expected. What also proved hairy was explaining the problem to people who wanted to help and insisted on misunderstanding the situation. In the end, we produced a script (well, “they” rather than “we”) which looked like it should work, only… it did nothing. If you’re really interested in looking at it, here it is — but be warned, don’t try it.

We tried recode. We tried iconv. We tried changing my.cnf settings, dumping the databases, changing them back, and importing the dumps. Finally, the problem was solved manually.

  1. Made a text file listing the databases which needed to be cured (dblist.txt).
  2. Dumped them all: for db in $(cat dblist.txt); do mysqldump --opt -u user -ppassword ${db} > ${db}-20060712.sql; done
  3. Sent them over to Blackb|rd who did some search and replace magic in vim, starting with this list of characters (just change the browser encoding to latin-1 to see what they look like when mangled)
  4. Imported the corrected dumps back in: for db in $(cat dblist.txt); do mysql -u user -ppassword ${db} < ${db}-20060712.sql; done

Blackb|rd produced a shell script for vim (?) which I’ll link to as soon as I lay my hands on the URL again. The list of characters to convert was produced by trial and error, knowing that the corrupted characters appeared in the text file as an A tilde (Ã) or A circumflex (Â) followed by something else. I’d then change the my.cnf setting back to latin-1 to view the character strings in context and allow Blackb|rd to see what they needed to be replaced with.
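If you prefer code to vim magic, the replacement step boils down to something like this Python sketch. The mapping below covers only a handful of common French characters and the file names are made up (following the naming in step 2 above); the real list was longer and the real work was done in vim.

```python
# Illustrative only: a few mangled sequences and the characters they should be.
REPLACEMENTS = {
    "Ã©": "é",
    "Ã¨": "è",
    "Ã§": "ç",
    "Ãª": "ê",
    "Ã´": "ô",
}

def fix_dump(text):
    for mangled, correct in REPLACEMENTS.items():
        text = text.replace(mangled, correct)
    return text

# Hypothetical file names -- adapt to the dumps produced in step 2.
with open("mydb-20060712.sql", encoding="utf-8") as f:
    fixed = fix_dump(f.read())
with open("mydb-20060712-fixed.sql", "w", encoding="utf-8") as f:
    f.write(fixed)
```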

Thanks to everyone who helped. Not looking forward to the next MySQL encoding problem; they just seem to get worse and worse. (And yes, I do use UTF-8 all over the place.)