Tue, 22 Apr 2008
Now that I have your attention…
Here's a frustrating anecdote demonstrating a simple principle of user interface design.
For the past 3 days, my LG CU400 cell phone has been relentlessly trying to get my attention. But why?
The first time I heard the tone, I picked up the phone and saw nothing of interest on the display. There was no message indicator. The signal strength was acceptable. I had no voice mail, new text messages, missed calls—nothing seemed out of the ordinary.
Initially, I thought there must be some kind of signal interference and perhaps it was toning when re-acquiring a signal. Later in the day, I was mountain biking on Beacon Hill, passing directly under cell phone towers and in line of sight of nearly every cell phone tower for 50 miles. Yet the tone persisted periodically.
I use my cell phone for its alarm clock feature, but the mystery tone woke me in the night and I had to silence it in order to sleep.
Monday (day 2), I made a note whenever I heard the tone. I wasn't near the phone all the time, and was in noisy areas periodically where I wouldn't have heard it, anyway. The three times I noted indicated a pattern, though: 11:37 am, 5:37 pm, and 11:37 pm. Monday night, I powered the phone off.
Today (Tuesday, day 3), I powered the phone on and heard the tone very soon afterwards. And again in 5 minutes. The phone toned every 5 minutes 3 times, then every 15 minutes twice, then every 30 minutes twice.
Then silence for just over an hour. At 1 hour 2 minutes, the cycle started over.
I had checked my alarm settings. I checked the memory—plenty used (I've got several photos and even a few videos saved), but still plenty free.
Finally, a new clue. I sent a text message, which I do infrequently, but more often now that I've started using Twitter. The phone prompted: Send without saving?
Ah-ha! It must be unable to save text messages.
There were 196 messages in my Sent folder. That's probably every message I've sent in the 13 months I've owned the phone. I deleted all those messages and immediately received 3 new text messages that had been queued up. They were Twitter messages with time-stamps over the past 3 days.
So, I assume what happened was that each time a new message arrived, or when the phone was powered off and on, the out-of-text-message-memory tone started its alert cycle. And even though the phone indicated I had plenty of memory available, that memory obviously wasn't available for text messages.
The moral of this story: It isn't enough to tell the user there is a problem. You must tell the user what the problem is!
[/programming] [link]
Sat, 13 Jan 2007
Inconceivable!
Every once in a while, perl still gives me a surprise. This past week, I ran into a bug that had me stumped for quite a while. I was using a common loop construct with an iterator object. But unexpectedly, the object returned by the iterator was getting trounced. I was getting “Can’t call method on an undefined value” errors.
In frustration, I added some diagnostics at the very top of the loop:
while ( my $obj = $iterator->next ) {
    warn "Inconceivable!" unless defined $obj;
    ...
Sure enough! The program began shouting warnings. How can $obj be defined in the loop conditional then undefined on the very next line of code? Inconceivable!
Or not. Perl provides another way to get to that next line of code.
Farther down in the code:
# we need to skip some objects
1 while ( $obj = $iterator->next && $obj->some_method );
last unless $obj;
redo;    # <-------- $obj is defined and healthy here
perldoc says:
The “redo” command restarts the loop block without evaluating the conditional again. The “continue” block, if any, is not executed. If the LABEL is omitted, the command refers to the innermost enclosing loop. Programs that want to lie to themselves about what was just input normally use this command…
Even though the conditional is not re-evaluated, $obj goes out of scope and we get a brand new, undefined $obj at the top of the loop, thanks to the my declaration in the loop conditional.
That makes some sense (perfect sense if you understand the perl internals—I can’t claim that). It caught me off guard.
One simple way to solve the problem is to move the my declaration outside the loop. In my case, I was able to eliminate the need for the redo.
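In skeleton form, the fix looks something like this (a made-up example with a toy iterator class, not the actual code I was debugging):

#!/usr/bin/perl
use strict;
use warnings;

# Toy iterator class just to make the example self-contained.
package Iterator;
sub new  { my ( $class, @items ) = @_; bless { items => [@items] }, $class }
sub next { shift @{ $_[0]{items} } }

package main;
my $iterator = Iterator->new(qw( apple banana cherry ));

# With the my declaration outside the loop, the same $obj survives a redo
# instead of being re-created as undef at the top of the block.
my $obj;
while ( $obj = $iterator->next ) {
    print "processing $obj\n";
}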
Another lesson learned—the only way they stick—the hard way.
[/programming] [link]
Sun, 28 Mar 2004
Still toying with Fibonacci numbers
Yesterday, I said I needed to stop this silliness with Fibonacci numbers. But I still remain intrigued.
Ron Knott’s page, which I referenced yesterday, includes some information about the relationship between Fibonacci numbers and the Golden Ratio: (sqrt(5)+1)/2, or 1.61803
I found the perl module Math::Fibonacci which uses this relationship to calculate the n-th Fibonacci number using the fast algorithm:
F(n) = ~ g^n/sqrt(5)
where g is the golden ratio and ~ means “take the nearest integer.”
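For small values of n, that formula is easy to try right from the command line with native floats (just an illustration, not code from Math::Fibonacci):
perl -e '$g=(sqrt(5)+1)/2; print int($g**$ARGV[0]/sqrt(5)+.5),"\n"' 30
That prints 832040, the 30th Fibonacci number.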
I experimented a bit with that algorithm and it is quite useful for Fibonacci values that can be expressed within the precision of native floats. To get beyond the limitations of native floats, I used Math::BigFloat. Unfortunately, it took much longer to get results than just using the trivial expansion with Math::BigInt, and those results were not exact.
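My experiment was roughly along these lines (a reconstruction for illustration, not the exact code; the accuracy setting is arbitrary):

use strict;
use warnings;
use Math::BigFloat;

Math::BigFloat->accuracy(60);    # carry 60 significant digits

my $n     = shift || 100;
my $sqrt5 = Math::BigFloat->new(5)->bsqrt();    # arbitrary-precision sqrt(5)
my $g     = ( $sqrt5 + 1 ) / 2;                 # golden ratio

# g**n / sqrt(5), rounded to the nearest integer. The answer is only exact
# if the accuracy above is large enough for the n you ask for.
my $f = ( $g ** $n / $sqrt5 + 0.5 )->bfloor();
print "$f\n";

The result is only as exact as the precision carried in $sqrt5 and $g, which is why the answers drift for large n unless the accuracy is raised.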
Then I found Math::Big which has a fibonacci function. In scalar context, it uses a very fast divide-and-conquer algorithm based on the formula:
F(n+m) = F(m) * F(n+1) + F(m-1) * F(n)
Using Math::Big::fibonacci, I can calculate the 10,000-th Fibonacci number in 0.37 seconds with this command line:
perl -MMath::Big -e 'print 0+Math::Big::fibonacci($ARGV[0]),"\n"' 10000
It takes about 2.5 seconds to get the same result with the trivial expansion using the code I posted yesterday.
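Just to see how the divide-and-conquer idea works, here's a toy fast-doubling version (my own sketch, not what Math::Big actually does), using two special cases of that identity:

#!/usr/bin/perl
use strict;
use warnings;
use Math::BigInt;

# Setting m = n and m = n + 1 in the identity gives:
#   F(2n)   = F(n) * ( 2*F(n+1) - F(n) )
#   F(2n+1) = F(n)^2 + F(n+1)^2
# fib_pair(n) returns ( F(n), F(n+1) ), with indexing starting at F(0) = 0.
sub fib_pair {
    my $n = shift;
    return ( Math::BigInt->new(0), Math::BigInt->new(1) ) if $n == 0;

    my ( $fk, $fk1 ) = fib_pair( $n >> 1 );    # F(k), F(k+1) where k = int(n/2)
    my $f2k  = $fk * ( $fk1 * 2 - $fk );       # F(2k)
    my $f2k1 = $fk * $fk + $fk1 * $fk1;        # F(2k+1)

    return $n % 2 ? ( $f2k1, $f2k + $f2k1 ) : ( $f2k, $f2k1 );
}

my ($f) = fib_pair( $ARGV[0] || 10 );
print "$f\n";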
[/programming] [link]
Sat, 27 Mar 2004
Counting Fibonacci numbers
I need to stop this silliness. Couldn’t sleep, so I got up early and played with perl and Fibonacci numbers some more.
The latest incarnation prints the n-th Fibonacci number where n is passed as an argument. Here, n is 1000:
perl -MMath::BigInt -e '$b=new Math::BigInt 1;($a,$b)=($b,$a+$b)while--$ARGV[0];print"$b\n"' 1000
That’s assuming Fibonacci numbers begin with one instead of zero. I’ve seen them described both ways. Is zero a Fibonacci number?
[/programming] [link]
Sat, 20 Mar 2004
Fibonacci numbers
I’ve been reading GEB. Hofstadter loves Fibonacci numbers.
This morning I got up early and read a section in GEB while the house was quiet and the sun was coming up. To test understanding, Hofstadter listed several items, some well formed, some not. As an answer key, he wrote, “Those whose numbers are Fibonacci numbers are not well formed.”
On a sheet of paper I wrote the Fibonacci sequence far enough to check the answers. Then, just to exercise tired brain cells, I started adding as fast as I could to extend the sequence.
I had about 80 numbers written when I grew weary of the exercise. But staring at the page of digits, I couldn’t rest without knowing if and where I had made any errors.
So, I headed down to the computer and wrote a couple of perl one-liners.
I love Perl, and I especially enjoy writing little one-liners I can execute from the command line without having to create a source file. My first attempt looked like this:
perl -e '@f=(1,1);print$f[@f]=$f[-1]+$f[-2],"\n"while@f<80'
That resulted in the last few terms printing in scientific notation, e.g., 2.34167283484677e+16. I wanted the full integer representation.
The final version prints the term number and the full integer representation using the Math::BigInt module. It also takes the number of terms as an argument, and the command line shown pipes the output to less for easy viewing.
perl -MMath::BigInt -e '$one=Math::BigInt->new(1); @f=($one,$one); print@f+1,": ",$f[@f]=$f[-1]+$f[-2], "\n"while@f<$ARGV[0]' 80|less
[/programming] [link]
Sat, 20 Dec 2003
Allocating rounding errors fairly
Last Monday, I visited a client in California. Their programmer showed me some code he'd been working on and mentioned that he was having some difficulty with rounding errors accumulating.
I shared a technique with him I've used to solve the same problem.
Having way too much time in airports and on planes to think about it, I realized that the method I shared, while quite good for most cases, fails to allocate rounding errors fairly in extreme cases. I've spent many hours since trying to find an optimal solution.
Consider the problem of distributing money, evenly, across three accounts. If we only have one cent to distribute, which account should get it?
Using the naive rounding method, none of the accounts will get anything and we will be left with one cent undistributed.
Using the technique I shared with Russ, we would consider the first account and attempt to distribute 1/3 cents. That rounds to zero. We would then restate the problem as distributing our single cent across the remaining two accounts. So, we would attempt to distribute 1/2 cent to the second account. That rounds to one cent. Finally, having nothing left to distribute, we assign zero cents to the third account.
This works well for any number of accounts, ensuring that the maximum rounding error applied to any given account is less than one cent.
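A rough sketch of that method, working in whole cents (reconstructed here for illustration; the sub name is made up):

# Deterministic sequential rounding: give each account the nearest whole
# number of cents of its share, then restate the problem over the accounts
# that remain.
# usage: allocate_sequential(amount_in_cents, weight_1, weight_2, ..., weight_n)
sub allocate_sequential {
    my $value = shift;
    my $basis = 0;
    $basis += $_ for @_;

    map {
        my $allocated = int( $value * $_ / $basis + 0.5 );  # round to nearest
        $basis -= $_;
        $value -= $allocated;
        $allocated;
    } @_;
}

Run on the one-cent example over and over, it hands the cent to the second account every time, which is exactly the unfairness described next.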
But what happens if we distribute one cent to the same three accounts 10,000 times? Ideally, two accounts should end up with $33.33 and one should end up with $33.34.
Using naive rounding, all three accounts would still have zero balances. Using the second method described, the first and third accounts would have zero balances, and the second account would have a balance of $100.00, since it would always get the benefit of the rounding errors.
I spent a lot of time working on a fair method for distributing the rounding errors. I'm certain I spent my time simply reinventing the wheel, but it was a very interesting problem to solve.
Consider the same problem: distributing $0.01 across 3 accounts.
The first account should get 1/3 cents. Since we can't divide a cent, we'll give the first account a 1 in 3 chance at rounding up. 2 out of 3 times, we'll move on to the second account and give it a 1 out of 2 chance at rounding up. 1 out of 2 times we'll still have our cent to distribute and we'll move to the third account, which will always get the cent if we reach it, because it is the only remaining account.
If you work out the math, you'll see that each account has exactly a 1 in 3 chance of getting the cent: the first gets it with probability 1/3, the second with 2/3 * 1/2 = 1/3, and the third with 2/3 * 1/2 * 1 = 1/3.
Surprisingly (at least it surprised me), this works out with any amount to be allocated, and with any weighting.
For instance, assume we have 3 accounts with weights 1, 2, and 1 respectively. If we want to allocate 1 cent to these accounts, we would attempt to allocate 1/4 cent to the first account, giving it a 1 in 4 chance to round up. 3 out of 4 times, we'll move to the second account and give it a 2 out of 3 chance at rounding up (2 is its weight, and we consider only the remaining 2 accounts for a total weight of 3).
To allocate a larger number, we simply allocate the integer portion after multiplying the amount to allocate by the account's weight and dividing by the total weight. The odds of rounding up are given by the fractional remainder. At each round, we reduce the total amount to be allocated by the amount just allocated, reduce the total weight by the account's weight, and move on.
The following perl subroutine demonstrates the algorithm:
# usage: allocate(amount, weight_1, weight_2, ..., weight_n)
# returns an array of n allocated amounts
sub allocate {
    my $value = shift;          # total amount to distribute (whole units, e.g., cents)
    my $basis = 0;
    $basis += $_ for @_;        # sum of all the weights

    map {
        my $allocation = $value * $_ / $basis;      # this account's exact (fractional) share
        my $allocated  = int($allocation);          # whole units it gets for certain
        my $remainder  = $allocation - $allocated;  # fractional part left over
        ++$allocated if rand() < $remainder;        # round up with probability equal to that fraction

        $basis -= $_;           # drop this account's weight from the pool
        $value -= $allocated;   # and what it received from the remaining total
        $allocated;             # the amount allocated to this account
    } @_;
}
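A quick way to check the fairness claim (a throwaway test, assuming the allocate sub above is in scope and amounts are whole cents):

# Distribute one cent across three equal-weight accounts 10,000 times
# and tally what each account ends up with, in cents.
my @totals = ( 0, 0, 0 );
for ( 1 .. 10_000 ) {
    my @cents = allocate( 1, 1, 1, 1 );
    $totals[$_] += $cents[$_] for 0 .. 2;
}
print "@totals\n";    # each total should land somewhere near 3333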
[/programming] [link]
Wed, 03 Dec 2003
My first Open Source contribution
This morning, Jonathan Lundquist and I found and fixed a bug in the Squid Web Proxy Cache. I'm not sure about Jonathan, but it was my first direct contribution to Open Source and a rewarding experience.
It took my knowledge of the Linux development environment, tools, and protocol for creating and submitting a patch; and it took Jonathan's excellent analysis and debugging skills to quickly locate the bug and suggest a fix. Individually, it may have taken either of us 2 days to find and fix the problem in totally foreign code. But together, we found it, fixed it, and made a report including a patch in under 2 hours.
I think it is rare to find a working relationship that multiplies productivity that way. Several years ago, Jonathan and I wrote the first version of a Windows based application in a single 30+ hour session, passing the keyboard back and forth. That product has gone through many revisions with thousands of man hours invested, but the core of the application still consists of the code we wrote in those 30-some-odd hours.
Sadly, after 12 years working closely together, that working relationship will end soon. Jonathan has accepted a position as CFO for one of our clients. Good luck, Jonathan. I'll miss you most when debugging code on my own!
[/programming] [link]
About this weblog
This site is the personal weblog of Marc Mims. You can contact Marc
by sending e-mail to:
[email protected].
Marc writes here about cycling, programming, Linux, and other items of personal interest.
This site is syndicated with RSS.
Credits
CSS stolen from Tom Coates who didn't even complain.