At New Music USA, where I work, we publish NewMusicBox. Coincidentally, NewMusicBox is turning 15 years old this month, and there’s a great series of posts examining the site’s history (in related news, you should make a donation to keep the site going for another 15 years). Occasionally I’ll contribute an article to the site, and there are two recent pieces I’d like to share with you.
Last February, the National Center for Arts Research (NCAR) released an initial report. They use data from tons of non-profit arts organizations to try to produce a picture of the whole system, and to give arts organizations tools to measure their own health in the context of other, similar organizations. It’s a great thing to be doing, and as their work matures it’s going to be incredibly useful. My article looks at some limitations of their approach that you might miss if you don’t look too closely.
There’s a lot of promise in “big data” approaches to problems. But much of the value of the “big data” approach is that, in many web and marketing analysis contexts, you have literally everything, not just an experimental sample that you collected on your own. Most of the statistical tools scientists are equipped with are geared toward making justified extrapolations from small data pools, whereas big data situations sidestep issues of sample size and experimenter effects entirely.
The trap (which NCAR successfully avoids) is to take a very large collection of traditionally collected data and treat it as if it were an experimenter-free set of analytics data. A lot of arts organizations are taking their first steps into data analysis, and it’s important to be careful not to fall into it.
Last March, I waded into what amounts to an arts policy blog game of telephone. Lyz Crane of ArtPlace America (who I already knew was brilliant, but have subsequently learned is even more brilliant than I thought) made a comment, then Doug Borwick blogged it enthusiastically, then Diane Ragsdale pointed out some holes in his hastily assembled argument, and then I jumped in. In the resulting article, I tried to flesh out those ideas as best I could, show where they break down, and hopefully spark some productive thinking.
The core discussion is about how arts organizations should be motivated, what it means to “serve” a “community” (including what each of those terms actually means), and how success should be measured. The trouble is that “success” is easy to identify but hard to explain. Just like economies and families, healthy arts organizations are all alike, but sick ones are each sick in their own way.
My main warning in the discussion is to be careful that you’re measuring what you value, and not just valuing what you measure.
Thanks especially to Molly Sheridan, Alex Gardner, and Frank J. Oteri (happy 50th!) for their editing and care with these articles.