Here we go again with some baffling stuff. I wanted to understand the implications of drug (i.e., medicine or medication) half-lives, in particular for drugs taken daily. The half-life calculators that I found were not useful at all, so I created my own (including an interactive graph), for use on a desktop or laptop, with a keyboard and biggish screen:
This page does not explain the basics of half-lives. There are plenty of other sites that do that.
For drugs with a short half-life (e.g., a few hours) taken daily, I can see how there is no buildup, because the daily residual is negligible. It was intuitively obvious to me that with a long half-life (e.g., a half day or more), daily doses would overlap and build up. The buildup converges, but you still end up with more of the drug in your system than a single daily dose, and I wanted to know that number.
Wikipedia recently instituted a format for its drug entries that includes the drug’s half-life. That makes it easy and convenient to look up the half-life for all the drugs I’ve checked.
There seems to be an assumption that drugs with a long half-life are slower acting. Mathematically, they stabilize in the system at a level higher than the daily dose. I find that interesting.
There is the Wikipedia page on biological half-life, but the math there is way beyond me. Here is what was obvious to me:
After x hours with half-life H (in hours) and dose D1, the amount D2 left over is:

D2 = D1 × (1/2)^(x/H)
When you take drugs at regular intervals, there might be some nonnegligible amount left over from previous doses. Here is essentially what my calculator is doing, where D is the dose and p is the hours between doses. The amount in the system just after the nth dose is the sum of what remains of every dose taken so far:

D × (1/2)^(0p/H) + D × (1/2)^(1p/H) + D × (1/2)^(2p/H) + … + D × (1/2)^((n−1)p/H)

As n grows, this converges to D / (1 − (1/2)^(p/H)).
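The decay and buildup math boils down to a few lines of JavaScript. Here is a simplified sketch (the function names are illustrative, not taken from my actual calculator):

```javascript
// Amount of dose D1 remaining after x hours, given half-life H in hours.
function remaining(D1, x, H) {
  return D1 * Math.pow(0.5, x / H);
}

// Amount in the system immediately after the nth dose, when a dose D is
// taken every p hours: each earlier dose has decayed for a multiple of p.
function afterDoseN(D, p, H, n) {
  let total = 0;
  for (let k = 0; k < n; k++) {
    total += remaining(D, k * p, H);
  }
  return total;
}

// The series is geometric, so the post-dose peak converges.
function steadyStatePeak(D, p, H) {
  return D / (1 - Math.pow(0.5, p / H));
}
```

For example, 10 mg daily (p = 24) of a drug with a 24-hour half-life stabilizes at 20 mg just after each dose, i.e., twice the daily dose.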
Yes, I realize the real-world implications of drugs and their half-lives are way more complicated than a simple power-of-two equation. Still, I wanted a quick and easy way to compute the oversimplified numbers.
Some time ago, before the HTML version, I wrote a polyrhythm visualizer as a Java applet, when I was learning a number of the Chopin nocturnes. This was spurred on in particular by Op. 27, No. 2 in D-flat major. It’s in 6/8 time (i.e., two beats per measure), with the left hand playing six sixteenths per beat throughout. The ending involves two beats of seven notes in the right hand against six notes in the left hand (a ratio of 7/6). Not only was the 7/6 a big challenge, but I started noticing patterns that called for some exploration.
There’s a pattern formed by which notes “fire” closest to each other in each hand.
Lines are drawn between top and bottom to emphasize when notes in the left and right hands are closest. The ebb and flow of the time distance itself has a pattern. In the example above, the left hand increasingly trails the right hand, until the midpoint, and then the left hand decreasingly trails the right hand, until they resynchronize.
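That ebb and flow can be computed directly. As a sketch (illustrative names, assuming an m-against-n polyrhythm over one cycle of length 1, with the right hand firing at i/m and the left at j/n):

```javascript
// For each right-hand note in an m-against-n polyrhythm over one cycle,
// find the time distance to the nearest left-hand note.
function closestDistances(m, n) {
  const left = Array.from({ length: n }, (_, j) => j / n);
  return Array.from({ length: m }, (_, i) => {
    const t = i / m;
    return Math.min(...left.map(s => Math.abs(t - s)));
  });
}
```

For 7 against 6, the distances come out as multiples of 1/42: 0, 1, 2, 3, 3, 2, 1. They grow toward the midpoint and then shrink back, which is exactly the trailing-then-catching-up pattern described above.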
One key to handling close ratios is that the midpoint involves an even trade-off between the hands: it is the point of maximum offset, where the trailing hand switches from falling further behind to catching back up.
I didn’t want to draw too many lines between top and bottom, so the closeness visualization trumps the equidistant visualization.
Update: I’ve tried this on a few different systems now, and it functions horribly and unacceptably except on my development system. Google Chrome and IE9 RC work very accurately on my development system, I swear it. For now, the auralization is best described as experimental.
To keep things simple and somewhat accurate, I wanted to use window.setInterval(...) rather than try to have sequences of window.setTimeout(...) daisy-chained together. I didn’t know what to expect across browsers. My conclusion is that timers in all browsers are very accurate, with Chrome being the most accurate. Chrome timers are least affected by CPU activity within the browser itself and other processes.
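In essence, one repeating timer drives everything. A sketch of that approach (illustrative names; the window. prefix is dropped so the snippet also runs outside the browser):

```javascript
// One setInterval drives every tick, rather than daisy-chaining
// setTimeout calls, which would accumulate scheduling error per hop.
function startMetronome(intervalMs, onTick) {
  let tick = 0;
  const id = setInterval(() => onTick(tick++), intervalMs);
  return () => clearInterval(id); // the caller stops it with this
}
```

The caller holds on to the returned function and invokes it to stop the metronome.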
The primary weakness I bumped into seems to be that simply playing sequences of Audio elements is susceptible to occasional random delays. That said, Chrome is so reliable, it’s almost completely acceptable for the purpose here—essentially a metronome. IE9 RC is also very reliable. Firefox 3.6 is perhaps just under the threshold of acceptability. I found Opera to be too erratic.
Safari 5.0 on Windows delays the playing of all audio elements, thus the UI and the audio are totally out of sync.
Sequencing Audio elements
There are three things worth noting here:
1. I didn’t find any problems with playing multiple Audio elements simultaneously. The sounds played okay and blended okay.
2. The biggest hurdle was realizing that I couldn’t get away with replaying the same Audio element each time it was needed. I needed to create pools of identical Audio elements and cycle through the pools.
3. I found that repeatedly calling play() on an Audio element sounded erratic, as if the sound got queued up to play but didn’t necessarily play immediately. In fact, I’d venture to say this is the primary weakness of all browsers. A big improvement here, at least for Chrome, was to call play() only when playing the sound for the first time, and to set currentTime = 0.0 to play it again later.
To expound on #2: If you play with the demo, you’ll notice there are only three different sounds. I spent a lot of time trying to get three Audio elements to play and replay and blend acceptably. This was a losing battle. The result was almost random noise.
Rather than work with three Audio elements, I created three pools of ten Audio elements each. (Choosing ten was arbitrary; a much smaller number would probably work just as well.) For example, playing ten hits of the hi-hat plays ten instances of the same sound (and playing twenty hits plays each instance twice). This approach cleaned up the sound tremendously.
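In code, the pooling plus the play()-once/currentTime-reset trick looks roughly like this (a sketch with made-up names, not the actual code; the Audio guard just lets the snippet load outside a browser):

```javascript
// Pure round-robin index, factored out of the pool.
function nextIndex(current, poolSize) {
  return (current + 1) % poolSize;
}

// A pool of identical Audio elements for one sound, cycled round-robin.
// play() is called only the first time an element fires; after that,
// rewinding via currentTime = 0.0 replays it.
function makeSoundPool(url, poolSize) {
  if (typeof Audio === 'undefined') return null; // not in a browser
  const pool = Array.from({ length: poolSize }, () => ({
    el: new Audio(url),
    started: false
  }));
  let i = -1;
  return function play() {
    i = nextIndex(i, poolSize);
    const s = pool[i];
    if (!s.started) {
      s.started = true;
      s.el.play();
    } else {
      s.el.currentTime = 0.0; // rewind instead of calling play() again
    }
  };
}
```

Usage would be along the lines of const hihat = makeSoundPool('hihat.wav', 10), then calling hihat() for each hit (the file name is hypothetical).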
As I’ve tried more browsers on more systems, I see that the audio performance varies wildly. It seems that performance is irreparably bad on old, slow hardware. But even on faster, newer hardware, performance varies a lot. I’ve implemented two different approaches to playing audio, and which approach is used can be selected at run time:
The default choice is to load the sounds once and replay them when needed. This seemed like the obvious approach to me, but this often results in random delays playing the sounds.
I’ve found that on some systems, performance is better if a new Audio element is created and loaded (and played) each time a sound is needed. (Note: merely reloading the audio did not make a noticeable difference; creating a new Audio element each time is what made the difference.)
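The run-time choice between the two strategies can be captured as something like this (again a sketch with illustrative names, guarded so it loads outside a browser):

```javascript
// Returns a play function for one sound. createEachTime selects the
// strategy at run time.
function makePlayer(url, createEachTime) {
  if (typeof Audio === 'undefined') return null; // browser-only
  if (createEachTime) {
    // Strategy 2: a brand-new element per hit; better on some systems.
    return () => new Audio(url).play();
  }
  // Strategy 1 (default): load once, then rewind and replay.
  const a = new Audio(url);
  let started = false;
  return () => {
    if (!started) { started = true; a.play(); }
    else a.currentTime = 0.0;
  };
}
```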