Reassess This

As a homeowner, I’m used to getting all sorts of shady offers in the mail for new mortgages with astoundingly bad terms. But now that home values are declining, the free market has spawned a new kind of scam, at least in California. In our fair state, a home’s value is reassessed only when it is sold (hoo boy, Prop 13!). In the meantime, the County Assessor assumes that your home value increases by about 2% each year and increases your property taxes accordingly. Historically, this has been a win for homeowners, whose property values were outpacing 2% by leaps and bounds, and an increasingly problematic loss for local tax-supported services (such as school funding).

Anyway, these new offers take the form of a letter warning you that your home is probably worth less than the county thinks it is, and giving you the opportunity to pay a third party company to file a “tax reassessment” form to have the property properly revalued (and get a lower property tax bill). What makes this such a miserable scam is that anyone can file this form themselves, for free. Here are online instructions, with the online form. Not only that, but the County Assessor is pre-emptively re-assessing 500,000 homes this year (sold between 2003 and 2008) to see if they should be adjusted — you don’t even have to file the form! The County Assessor’s office is clearly exasperated with this scam, too, and has posted a scam warning on the subject.

Recently, I received one of these offers that really took the cake. Not only did the letter from “Property Tax Adjustment Services” try to entice me to pay for a free service, but it actually came formatted as a bill — complete with a “due date” and a “late charge” if payment was not received by the deadline! As I stared at the “bill”, it seemed strangely familiar… so familiar that I went and dug up my actual property tax bill. They are formatted virtually identically. See the image at right; the “reassessment bill” is on top, and my property tax bill is on the bottom (actual numbers removed). Obviously they’re hoping that I, as a busy homeowner, might glance at this and think it comes from the County Assessor’s office and is a required payment.

This scam letter actually does mention the fact that you can file the form yourself (but not that it’s free to do so). It also assures you that “Property Tax Adjustment Services” is an expert business that will ensure it gets done right. Yeah. The form requires all of three pieces of information: your home’s address and the addresses of two comparable recent sales. This information is easily available from the County Assessor’s website, which even has a browsable map interface so you can see all recent sales near your home.

Disgusting, is what it is. Or simple capitalism in action? Caveat emptor!

Women in Technology: Missions to Mars and Internet Identity

Yesterday was Ada Lovelace Day, accompanied by a large-scale blogging exercise in which people around the world blogged about women in technology they admire. Yesterday was also a rather busy day for me, so I’m writing my entry a day late. I’m sure Ada would understand.

There are volumes to say (and that have been written) about Ada herself. She was gifted in mathematics and reasoning, and developed the first computer programs — before any computers actually existed. (She was developing hypothetical programs for Babbage’s Analytical Engine, which didn’t exist either.) Today it is challenging enough to learn languages already developed for machines that anyone can use; imagine starting from less than scratch to accomplish computational magic!

I’d like to draw your attention to two women who’ve made more recent contributions to the field of computers and technology. The first is Donna Shirley, a key player in the JPL Pathfinder mission to Mars in 1997. She led the team that built the Sojourner rover, as chronicled in her enjoyable Managing Martians autobiography. She was a trailblazer for women in high-profile (and high-stress) mission positions, but also remarkable for her accomplishments regardless of gender. She flew airplanes, became an aeronautical engineer, worked on the Mariner 10 mission to Venus and Mercury, raised a daughter, and more. I recommend this fascinating interview with her from 1998. I had the opportunity to meet her years later, when I was interviewing for jobs with my shiny new Ph.D. in 2002. At the time, she was the Associate Dean of Engineering at the University of Oklahoma, and I had a wonderful lunch with her. I didn’t end up taking that job, and she moved on a year later to start her own speaking and consulting business to encourage innovation and creativity in tech fields. There’s so much more to say about her delightful personality and her passion for space and innovation. I encourage you to take a look at her book.

Another fascinating woman in technology is Sherry Turkle. Her background is in psychology, which she’s applied to good effect in analyzing the world of technology. She wrote a book called Life on the Screen: Identity in the Age of the Internet about how people interact with computers (and the Internet), and the effect that interaction has on us in return. What’s even more remarkable is that this book was published in 1995, when the Internet was still something of a foreign country that only a fraction of the population had visited. She has some very interesting things to say about identity in a virtual environment and the challenge involved in drawing a clear separating line between events in the “real” world and events that happen online. She’s put forth a host of other interesting ideas as well.

I love new ideas and thought-provoking inventions, regardless of the gender of their source. Ada Lovelace Day is a chance to put the spotlight on female contributors, with one goal being to combat the perception that tech advances are produced solely by men. So far, they’ve collected a phenomenal 1,112 posts by bloggers (men and women) about these ground-breaking possessors of double-X chromosomes. Go ahead and browse, as a list or a world map. So many of these were new to me!

How we get reduced-fat peanut butter

I adore peanut butter. It’s tasty on toast, on celery, on bananas, on Ritz crackers, on chocolate, and on pretty much anything else. But of course, it is also high in fat, so I try to rein in my peanut-butter tendencies when possible. Low-fat versions of most foods are available, but I always wonder about the impact on taste.

The other day at the store I noticed a sale on Skippy peanut butter, my favored brand. In fact, the 16.3-oz containers were cheaper, per ounce, than their 32-ounce brethren that I normally buy. So it was the perfect chance to pick up a sample of both regular Skippy and the reduced-fat version for a side-by-side taste test.

As I opened up the containers, I wondered how exactly you could, in fact, reduce the fat in peanut butter. Although commercial peanut butter does have added oils “to prevent separation”, most of the fat actually comes from the peanuts themselves. How do you get a low-fat peanut? Answer: you don’t! While of course I don’t have the recipe that Skippy uses, perusing the ingredient lists of the two products suggests that you reduce the fat by… diluting the peanuts. The same non-separation oils are used, but the reduced-fat peanut butter also contains “soy protein” and “corn syrup solids” not present in the regular variety. The total protein per serving is the same in both products, so I can only imagine that the soy protein is there to make up the balance after diluting the peanuts (and their protein). The corn syrup solids are apparently there to make the product sweeter — and in fact the nutrition label reports more sugar in the reduced-fat version than in the regular one (4g vs. 3g per serving).

But numbers aside, what of the taste test? I grabbed a banana and spread one swath of peanut butter per bite, as I normally do, but alternated which peanut butter I used. There is definitely a difference. I worked my way through the whole banana to ensure I had enough samples to convince myself that it wasn’t just my imagination. I also did a pure test with no banana to evaluate them in isolation. Both products are equally creamy (thank you filler oils!), but the flavor in the reduced-fat version is slightly wrong. It’s blander, and I’m left with a certain after-taste that reminds me of the after-taste I get with food containing artificial sweeteners. It really isn’t as satisfying, which is what you’d expect based only on the fat difference — but there’s a definite taste difference as well.

So, yes, to a first approximation, the reduced-fat peanut butter still tastes like peanut butter. But after finishing a reduced-fat peanut butter banana, I don’t find myself tempted to go back and eat more peanut butter all by itself like I usually do. On the other hand, maybe that’s a good thing.

Can neural networks predict the death penalty?

I recently came across an article on the use of a neural network to predict which death row inmates would be executed and which would not. The authors of “An Artificial Intelligence System Suggests Arbitrariness of Death Penalty” argued that because they were able to train a neural network to successfully predict execution decisions using only irrelevant variables, the (human) decisions being made must be arbitrary. Confused yet? Although their neural network achieved 93% accuracy, they argue that because information about DNA testing and the quality of each defendant’s legal representation was omitted, this performance is concerning. In their words,

“What we have demonstrated here is that ANN technology can predict death penalty outcomes at better than 90%. From a practical point of view this is impressive. However, given that the variables employed in the study have no direct bearing on the judicial process raises series questions concerning the fairness of the justice system.”

That is, the neural network must have identified a useful predictive pattern in the data, but in a sense it was “not supposed to,” so a pattern may exist where one should not be.

There are several problems with the arguments and conclusions of this paper.

First, I don’t think the authors interpreted their result correctly. “Arbitrariness” was not at all demonstrated (despite the paper title). The neural network identified some sort of pattern in the data set that allowed it to successfully predict the outcome for 93% of previously unseen inmates. If inmates were executed “arbitrarily” (i.e., a random decision was made for each one), then the neural network would not have been able to learn a successful predictor. If, instead, the features really are irrelevant to the judicial process (they include sex, race, etc.), then the high performance of the neural network shows bias in the system. There is some sort of predictive signal even in features that shouldn’t directly affect execution decisions.

Second, I’m not convinced that the features really are irrelevant. While sex, race, month of sentencing, etc., should (presumably) not be deciding factors in who gets executed, “type of capital offense” sounds quite relevant to me. If the neural network placed a heavy weight on that feature, I would be much less concerned than if it placed a high weight on “sex”. What was the neural network’s performance if the capital offense features were omitted? In fact, it would be interesting to use a machine learning feature selection method to pick out the “most useful” features from the 17 used in this study, to help identify any bias present.
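To give a flavor of the kind of feature-ranking exercise I have in mind, here is a minimal sketch using scikit-learn. The column names and the data are my own synthetic placeholders (the paper’s dataset isn’t something I have access to), so treat this purely as an illustration of the technique, not of their results:

```python
# Hypothetical sketch: rank candidate features by how much predictive signal
# they carry. The data below is randomly generated stand-in data, and the
# column names are placeholders loosely inspired by the paper's description.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "sex": rng.integers(0, 2, n),
    "race": rng.integers(0, 4, n),
    "month_of_sentencing": rng.integers(1, 13, n),
    "capital_offense_type": rng.integers(0, 6, n),
})
y = rng.integers(0, 2, n)  # stand-in labels: executed vs. not executed

# Mutual information: how informative is each feature on its own?
mi = mutual_info_classif(X, y, discrete_features=True, random_state=0)
for name, score in sorted(zip(X.columns, mi), key=lambda t: -t[1]):
    print(f"{name:>22s}  MI = {score:.3f}")

# Random forest importances: how much does each feature contribute jointly?
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, score in sorted(zip(X.columns, forest.feature_importances_),
                          key=lambda t: -t[1]):
    print(f"{name:>22s}  importance = {score:.3f}")
```

If “capital_offense_type” dominated such a ranking on the real data, that would be reassuring; if “sex” or “race” did, that would be direct evidence of the bias I described above.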

Finally, the evaluation was quite limited, so our confidence in the conclusions should also be limited. The authors trained a single neural network on a single training set and evaluated it on a single test set. More typical methodology would be to use cross-validation: split the data set into, say, 10 folds and, for each fold, train a network on the other 9 and test it on the held-out one. This yields a much better estimate of generalization performance. Also, what about other machine learning methods? Is 93% achieved only by a neural network? What about a support vector machine? (SVMs have been shown to outperform neural networks on a variety of problems.) What about a decision tree, which would yield direct insight into the decisions being made by the learned model? For that matter, what about neural networks with other network structures? Why was a network with a single hidden layer of five nodes used? Was that the only one that worked?
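To make the methodological point concrete, here is a small sketch of 10-fold cross-validation comparing a neural network, an SVM, and a decision tree with scikit-learn. Again, the 17 features and the labels below are random stand-ins of my own invention, not the paper’s data:

```python
# Hypothetical sketch: 10-fold cross-validation comparing several classifiers.
# Features and labels are random placeholders, not the paper's dataset.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 17))    # stand-in for the 17 features
y = rng.integers(0, 2, size=1000)  # stand-in for executed / not executed

models = {
    "neural net (1 hidden layer, 5 nodes)": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)),
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC()),
    "decision tree": DecisionTreeClassifier(random_state=0),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv)  # accuracy per held-out fold
    print(f"{name:>38s}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

On this random stand-in data all three models hover around 50% accuracy, which incidentally illustrates my first point: if the real-world labels were truly arbitrary, no learner should be able to do much better than chance.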

Naturally, my critique comes from a machine learning perspective. I have no legal training. I would be very interested in any insights or opinions on this work from those who do have a legal background. What is the value of this kind of study to the field? Is this an important subject to investigate? How could the results be used to positive benefit? What other questions were left unanswered by the authors of this paper?

Lessons from the young’uns

After five days of visiting my nieces (2.5 years and <2 weeks old, respectively), I’ve learned some relevant lessons.

  • Toddlers are really good at figuring out what their points of leverage are. (“Mommy can physically pick me up and move me, but she can’t make me eat…”)
  • As adults, we send mixed messages and do hypocritical things all the time. We don’t realize it until we encounter a strictly literal individual, like a two-year-old.
  • When a toddler refers to a container of sour cream as “ice cream”, it’s not actually worth correcting her (unless you’re really interested in having a knock-down-drag-out argument). Same with her stuffed “tiger” (actually a leopard), “Hot Dog” (Mickey Mouse), “snack” (can only refer to chips/crackers/pretzels, not fruit/cheese/anything currently undesired), and that prize word: “mine” (telling her that it’s yours is like bear-baiting). Not every moment is a teachable moment.
  • I’d forgotten how much fun rolling around on the floor and tickling someone is, especially a giggly two-year-old who keeps laughing, “I got you!” even when you’re the one getting her.
  • Projectile vomiting is not, as I had thought, just a funny phrase used by the over-inebriated.
  • Breast milk has natural antibodies (!) and you can use it to clear up mild eye infections, such as those caused by blocked tear ducts. This actually worked!

Other experiences that capture my week:

  • On a walk, we encountered a flower. Me: “I wonder what kind of flower that is?” Toddler: *throws a rock at it*
  • Toddler, after breakfasting: “I’m done!” Me: “Okay.” Her: “No! I want to tell Mommy!”
  • I walked in after an afternoon trip to the grocery store. Toddler: “Daddy!” Me: “No, Daddy’s still at work.” Toddler, running past me to check the garage: “Daddy daddy daddy!” Me: “No, he isn’t home yet.” Toddler, echoing in garage: “DADDY!” Repeat for five minutes. (She does love her Daddy!)

I’m already looking forward to my next visit. :)
