Tuesday, November 23, 2010

Jewish Survey of the Day: "Quick Bytes: On the Minds of Teens" (JESNA)

The first of my new series of reviews of Jewish surveys comes to us from JESNA and provides insight into "What is on the minds of Jewish teens?" The italicized sections come directly from the AAPOR Standards for Minimal Disclosure.

Who sponsored the research study, who conducted it, and who funded it, including, to the extent known, all original funding sources.


JESNA sponsored the research. Who conducted the research is not clear.

The exact wording and presentation of questions and responses whose results are reported.


No. The survey instrument is not available.

A definition of the population under study, its geographic location, and a description of the sampling frame used to identify this population. If the sampling frame was provided by a third party, the supplier shall be named. If no frame or list was utilized, this shall be indicated.

The sample appears to come from students attending schools that belong to the North American Association of Community Hebrew High Schools. No information is provided on the sampling frame used.

(This sample certainly doesn't cover all Jewish teens. Many don't attend any formal Jewish education during their high school years, while others--predominantly Orthodox--attend Jewish day schools. Accordingly, it is unlikely to represent "Jewish teens" as a group. Does it represent Jewish teens in supplementary Jewish education in high school? This hinges on how representative the NAACHHS is of "Hebrew High Schools." I can't comment on this, because the portion of their website that lists member schools doesn't work, at least on Firefox. While the introductory paragraph casts the survey's applicability in terms that are too broad for my taste, the fact that the sample is drawn from NAACHHS schools is mentioned multiple times, leaving the reader to reach their own conclusions about its representativeness. How many schools participated? We simply don't know and, on that basis, it's very difficult to know how much weight to give this research.)

A description of the sample design, giving a clear indication of the method by which the respondents were selected (or self-selected) and recruited, along with any quotas or additional sample selection criteria applied within the survey instrument or post-fielding. The description of the sampling frame and sample design should include sufficient detail to determine whether the respondents were selected using probability or non-probability methods.

No indication of sample design is given.

(We don't have a clue how JESNA reached the students. Does NAACHHS have a comprehensive list of student emails? Was a sample selected or was every student approached? Were students mailed links by their schools? Was parental permission obtained? Again, we don't know these fundamental facts.)

Sample sizes and a discussion of the precision of the findings, including estimates of sampling error for probability samples and a description of the variables used in any weighting or estimating procedures. The discussion of the precision of the findings should state whether or not the reported margins of sampling error or statistical analyses have been adjusted for the design effect due to clustering and weighting, if any.

Sample size is n=219. No indication of sampling error is provided. (As noted above, we don't even know if this was from a sample.)

Which results are based on parts of the sample, rather than on the total sample, and the size of such parts.

No indication is given.

Method and dates of data collection.

Data was collected by web survey. Dates are a little vague, with "May 2010" given as the timeframe.

Assessment

This survey clearly (to me at least) fails major aspects of the AAPOR minimal disclosure standards. The failure to provide this basic level of information severely inhibits the utility of these data. To give a basic example, it makes a great deal of difference whether the achieved sample of n=219 was drawn from a selected sample of 500 cases or from all 15,000 students attending NAACHHS schools (a number I made up on the spot, because that information is not reported by NAACHHS).

Jewish Survey of the Day/Month/Week/Year

I am adding a new feature to my periodic blog: reviews of Jewish surveys. As those familiar with me are well aware, I cast a jaundiced eye over much of the work of what used to be my field (through September 2010 I was an associate research scientist at the Cohen Center for Modern Jewish Studies at Brandeis University; since October 2010 I have been working as a senior analyst/project manager at Abt SRBI Inc.). I will loosely organize these reviews around the Standards for Minimal Disclosure of the American Association for Public Opinion Research (AAPOR) Code of Professional Ethics and Practice [May 2010]: to wit, the information that researchers shall include "in any report of research results" or "make them available immediately upon release of that report" (read the Standards here). This is the basic information required for a reader to make informed use of statistics in the public domain. It is not an "academic" standard (most AAPOR members work in industry, not academia), nor is it particularly difficult to meet. Any survey that exists for purposes other than pure entertainment should adhere at a minimum to these standards. While I initially planned to keep my focus on AAPOR compliance, I immediately fell off the wagon.

Tuesday, November 16, 2010

Books I want

Cochran, William G. 1977. Sampling Techniques. 3rd ed. Wiley.
Heeringa, Steven G. 2010. Applied Survey Data Analysis. Chapman & Hall.
Kish, Leslie. 1995. Survey Sampling. Wiley-Interscience.
Lohr, Sharon L. 2009. Sampling: Design and Analysis. 2nd ed. Duxbury Press.
Wolter, Kirk. 2010. Introduction to Variance Estimation. 2nd ed. New York: Springer.

Thursday, August 5, 2010

How Did You Get Here? Early Career Decisions and Graduate School

This is a talk I gave to Cohen Center summer interns on July 27, 2010, as part of a series of talks by full-time staff.

My path here began, I suppose, at birth. I was born in 1975 in Sydney, Australia, to a Jewish father and a non-Jewish mother. My father’s father was born in Australia of Lithuanian parents, while his mother was born in Baden bei Wien, Austria, and escaped in 1938 following the Anschluß. Both were doctors. My mother’s family was Hungarian. My maternal grandparents walked westward in 1945 away from the Soviet advance and ended up in a displaced persons camp in Wiener Neustadt, Austria, where my Grandpa Fafa served as a doctor. He moved to the then-Australian territory of Papua New Guinea, along with a number of other Hungarian doctors, bringing my grandmother and mother (born in the DP camp) along later. My parents actually met in Papua New Guinea, where Grandpa Fafa had heard that there was a visiting Viennese lady doctor and her two eligible medical student sons. (My paternal grandfather had died before then.) Grandpa Fafa promptly went to see my paternal grandmother, Edith, and, European gentleman that he was, clicked his heels, bowed, introduced himself, and explained that he had two daughters and had heard that she, my grandmother, had two sons. Thus I came to be.

This level of detail might be seen as self-indulgent, I suppose, except I study Jews, and I study outcomes, and beginnings do count. I also know that my path here was somewhat more winding than normal, so I sketch out my beginnings.

I was not raised as a Jew or a Christian. My religious education, such as it was, consisted of attending an Easter mass at Sydney’s Catholic cathedral, a daily service at an Anglican church, and morning services at an Orthodox synagogue. (Most synagogues in Australia are Orthodox.) I felt I belonged in the synagogue in a way that I did not in the cathedral or the church. I don’t know why. Ever afterwards, I felt Jewish. I don’t think it was because I felt closer to that side of my family. I spent far more time with my Hungarian grandparents than my Jewish grandmother. The ethnic food I cook is Hungarian. The language I curse in is Hungarian. Nor was Grandma Edith a pillar of Jewish practice. She had a hostile relationship with Judaism, her observant family having barred her from attending Jewish tutoring with her brothers. Although she lived but a door away from her observant parents, Jewish holidays were not celebrated in her home.

Until college (or university, as we would say in Australia), there wasn’t much I could do about my feelings of Jewishness. I didn’t live in a Jewish neighborhood. I didn’t know that Jewish youth groups existed. There was no web for me to search things out on. I did read the Jerusalem Report, which my grandmother passed on to me, and books on Israel and occasionally Jewish life in general. At college, however, I had options. During orientation week, I joined the Australian Union of Jewish Students (we don’t have Hillels). When signing up for classes, at the last minute I chickened out of taking Chinese and decided to sign up for Jewish Civilisation, Thought, and Culture. I only planned to take it for a year, dropping it and keeping up with my “serious” subjects of political science and modern history, as well as my classes as a law student. (Like Canada and the rest of the Commonwealth, law is an undergraduate degree in Australia.) As it turned out, Jewish Civ was the only class I enjoyed, and I dropped history without a further thought at the end of the year. I then spent the summer holidays in Israel on a program run by the Union of Jewish Students, and with my relatives.

By the time I came back, I knew I wanted to resolve my status. I certainly wasn’t Jewish by Orthodox or Conservative standards, nor was I Jewish under the Reform movement’s standard of “appropriate and timely public and formal acts of identification with the Jewish faith and the Jewish people.” No exception or loophole covered me. Thus, I began the process of conversion. It was clear to me that I couldn’t “accept the yoke of the mitzvot” in an Orthodox sense, so I converted under Reform auspices (there being no Conservative movement in Australia) after a year of studying material I had already learned at university. And so, I became a Jew.

I kept taking classes in Jewish Civilisation and started enjoying political science more. I became secretary and then vice-president of the Union of Jewish Students at Sydney University. At the end of my second year, I went back to Israel for the summer for the Union of Jewish Students’ Leadership Development Program. It developed me in ways I never expected, as I met and fell in love with an American at the Jerusalem youth hostel in which my group was staying. Back in Australia, I never did enjoy law and dropped my law degree at the end of my third year, continuing to study for my B.A. If I wasn’t going to be a lawyer, what would I do? With the innocence (read “foolishness”) of youth, I decided I would go to grad school in Jewish studies. Because I was in a long-distance relationship with an American, clearly it would be in the U.S. As it happened, because I dropped Criminal Law, I was a few credits shy of a three-year B.A., which in any case was not accepted by U.S. universities. I thus spent something of a gap year in my fourth year of college: working at a department store, catching up on my honours year entry courses in political science, and taking Hebrew classes outside my degree. I also planned a topic for my honours thesis. This was quite difficult. It had to be on a Jewish and political science topic; it couldn’t be on something purely Israeli, because my Hebrew was not up to reading scholarly literature, to put it mildly; it had to involve original research; it couldn’t have been studied before; and it had to involve travel to the U.S. Eventually, I worked out that I could study American Jewish lobbying of Israel over the then-current (1998) debate over the status of non-Orthodox converts under the Law of Return. Because I had a quasi-gap year, I was able to do my bibliographic research in advance and work out whom I wanted to interview. I then spent three months in the U.S. over my summer holidays doing all my reading of the existing literature and interviewing heads of major American Jewish organizations. Nothing to do with my girlfriend, of course. When I came back to Australia and classes began, I was able to write my thesis with ease.

This marked the end of my involvement in political science. I was interested in studying Jews with social scientific methods, but couldn’t stand being locked into political questions. While all Jewish studies interested me, I found that the more recent the topic was, the more fascinating it was. The pivotal point for me was reading Jewish Choices: American Jewish Denominationalism. This is actually an unremarkable book, a heavily statistical analysis of Jewish religious identification and behavior using the National Jewish Population Survey of 1990, but it showed me what could be done with social scientific methods and Jewish topics.

Degree in hand, I took my GRE’s, doing abysmally in math, poorly in logic, and brilliantly in English. Again, with the foolishness of youth, I only applied to two grad programs: sociology at Columbia and Near Eastern and Judaic Studies and sociology at Brandeis. I found out about the latter because I called the Brandeis sociology department and spoke to the grad student administrator. She heard what I was interested in and said, Well, actually, we have this joint Ph.D. I was sold. I was accepted to Brandeis but not Columbia. I’m not sure my qualifications would have been enough for a straight sociology or Jewish studies degree. Foolishly, once again, I took the Brandeis offer even though I was not funded (my application was sent at the very last minute). I got funding in my second year, but I was extraordinarily lucky to do so. I don’t know if I would have applied to grad school in the U.S. had my fiancée not been American. I might well have ended up in exactly the same place, but the odds certainly would have been lower.

I arrived here on August 2, 1999, less than a year after finishing college, started grad school a month later, and married my fiancée in June of 2000. It took me a while to adjust to life as a grad student. Some classes were great, others not so much. The classes that were central to my academic development were those that were out of department. I took a class in survey research at the Heller School. I also had to take a module on stats in Heller as part of my NEJS requirements. Being a module, it wasn’t listed in the course catalog. Most people would have asked for help and eventually enrolled. Not me. I’m shy and hate talking to people. I took the nearest class listed in the course catalog, Applied Regression Analysis. I almost died when I opened the text and saw the statistical formulae in all their Greek glory. I’d had a little exposure to stats in a sociology class, though, and took to the material very rapidly. In my final semester, I took the 21st and 22nd of my required 21 classes by taking Applied Econometrics and Applied Multivariate Analysis. I also TAed. I ended up getting quite depressed. In retrospect, I should have worked at the Cohen Center rather than just trying to study and be a TA. (I did work for the Cohen Center during vacations as a participant observer on Birthright trips and at summer camps.) I like having structure in my life, and work keeps me busy and gives me a sense of worth. While it didn’t fit my plans, it’s probably better to spend a year or two in the real world before signing yourself up for another three or more years of institutionalization in an educational institution—you will already have spent at least 17 years in school by this point.

With coursework over, I should have been studying hard for my sociology accreditation committee and NEJS comprehensive exam. As it was, I was very depressed (lack of structure, once again), and my wife was pregnant and we would need more income. I screwed up my courage and emailed Len to see if he had work for me at the Cohen Center. He did. I was meant to work 20 hours a week and spend the rest of my time studying. My initial assignment was a small independent project to do some literature reviews. This is typical. A grad student doesn’t necessarily make a good employee. Some are know-it-alls for whom drudge work is beneath them. Others are just flaky. We give assignments that don’t matter much to see what kind of person a grad student is. My record, looking back, gave clear signs of flakiness. I hadn’t worked as a grad student and I didn’t have a record of analogous research projects. As it turned out, I wasn’t a flake. Just as lack of structure depressed me, having structure in the form of a job energized me and I was rapidly working 40 hour weeks. I spent the next year essentially working full-time and neglecting my exams. While not advancing my studies, it was therapeutic. I was then able to get back to my exams and treat them as if they were a job, while keeping working at the Cohen Center. Having reframed them as work, I was able to get through them with minimal trauma. I had progressed through steadily more meaningful jobs at the Cohen Center and found my niche at the quantitative end of the spectrum. It’s far more difficult to find people interested in Jewish topics with quantitative predilections than qualitative ones, so I fit in very well.

Working as a researcher is very different from studying to be one. Grad school provides a way of thinking about the world and provides a basic level of knowledge. But that’s it. I remember sitting in my survey research class talking about data collection and the professor said “and then you clean the data.” Little did I know that cleaning the data is more than 95 percent of the effort expended, with analysis being 5 percent at most. Nobody teaches you in grad school to clean data or any of the other elements of what I will call the art or practice of social research. These are learned by doing and take place in the context of an apprenticeship. When I look at resumes, I look for people with practical experience far more than people with higher degrees. Somebody who has practiced research, even in a humble way, takes far less effort to train and represents a much lower level of risk as a hire.

Having gotten my exams out of the way, I needed a topic for my dissertation. I had a role model in Shaul Kelner, who wrote his Ph.D. at CUNY on his work as the Cohen Center’s fieldwork coordinator for Birthright studies in Israel. I was determined to do the same and have my job and my dissertation be one and the same. At that time, Len was talking to Combined Jewish Philanthropies about their forthcoming community survey. I told Len that I wanted to take the project on, be involved in every single aspect I possibly could, and write about the methods used in Jewish population studies. This built on the work I had been doing at the Cohen Center. For the nine months the North American Jewish Data Bank was at Brandeis, I served as the day-to-day staff person and became familiar with the extensive history of Jewish population studies. I was also responsible for distributing the National Jewish Population Survey of 2000-01, a deeply flawed piece of research. Understanding its limitations gave me a keen sense of research methods. The Boston study was a wonderful experience, and I learned a vast amount about the theory and practice of survey research. Large sections of my dissertation were drawn, with little editing, from methodological material I prepared during the course of the project and from other work I had conducted on the National Jewish Population Survey; in turn, large sections of the dissertation ended up, with little editing, in the study's methodological report. It was a win-win situation, with the added bonus that I had a $500,000 budget for my dissertation research.

Having received my Ph.D., why am I still working here? There aren’t many faculty positions in contemporary Jewry. In the past decade, I recall the following:

A tenure-track position in sociology at the University of Judaism, Los Angeles. The person in the position left, and it appears the position no longer exists.
A tenure-track position in the sociology department at Yeshiva University, which seems never to have been filled.
A tenure-track position in Jewish studies and any social science department at the University of Michigan, the funding for which never came to fruition.
A tenure-track position in sociology and Jewish studies at Vanderbilt University, which was filled by my former colleague Shaul Kelner.
A position for an “established scholar” (i.e., not me) at Monash University in Australia. They ended up hiring a Yiddishist. To teach contemporary Jewry.
A tenure-track position at Brown University, which was written for a specific candidate who, not surprisingly, was hired.
A position at open rank at the Hornstein Jewish Professional Leadership Program at Brandeis, for which no-one was hired.
A position at the professorial level in the Hornstein Program, which was written for Len, who was hired.
A tenure-track position at USC in sociology and Jewish studies, filled by a Ph.D. student in sociology from Columbia with whom I’m not familiar.

I might be forgetting one or two, but that’s about it. I was one of three people to interview for the Vanderbilt position, but that’s the furthest I have advanced. There has also been a post-doctoral position at the University of Michigan. I can’t recall a single person from either the contemporary Jewry track of the NEJS Ph.D. program or from the joint Ph.D. I studied for who is a professor. If you want to be a professor of Jewish studies, you appear to be better off with a Ph.D. in modern Jewish history.
If you want to be a professor in a social scientific discipline who studies Jewish topics, you’re probably better off doing a straight Ph.D. in the discipline of your choice, being careful not to let your Jewish interests put you outside the mainstream.

What about other careers? People with my training seem to go three ways. They work for a philanthropic foundation focused on Jewish life, they work for a Jewish organization, most commonly in research or evaluation, or they work here.

I don’t know what it’s like to work for a foundation, but I can’t imagine it’s much fun to be a program officer, the person who supervises projects for the philanthropy. We work with them all the time, and they’re administrators with extremely limited scope for innovation or creativity. Research and evaluation for a foundation might be a little better, but your life will be spent conducting small-scale evaluations of programs and contracting for larger projects. The professional heads of foundations have backgrounds in Jewish organizations, usually federations, so it’s not clear if a career path exists from evaluation to senior roles.

Work for Jewish organizations seems to have little career path, either, if you work on the research side. People who have had these positions seem to leave in under a decade or have their jobs downsized. Research tends not to be taken very seriously in the Jewish organizational world. Some senior professionals do have Ph.D.’s, but they are on the managerial track and don’t engage in much research. You can also start with a master’s in Jewish communal service, an MBA, rabbinical ordination, or perhaps a Jewish studies MA. It’s not entirely clear why being a rabbi qualifies you to work in a complex service organization, but many rabbis end up there, so apparently it does. The coin of the realm for the highest positions in Jewish organizations is the ability to raise money.

This leaves the Cohen Center and other research organizations. Entry level for people with master's degrees is the research analyst or research associate level, depending on skills and experience. This involves working for a principal investigator, albeit with a considerable degree of autonomy once you’ve proved yourself (and assuming you work for a good P.I.). You are typically given a job and work out how to complete it. You are usually working on several projects at once and need to juggle them. Some analysis work will be involved, but writing is minimal. The intangibles we look for are maturity, diligence, the ability to work well without constant supervision, being a quick study, working well as part of a team, and generally not being a prima donna. The specifics of education and even intelligence in general are less important. Ph.D.’s with limited experience will come in at the research associate level. Most Ph.D.’s will start as senior research associates. Initially, work will be under or in conjunction with a more senior researcher and will involve work similar to a research associate's, but with greater responsibilities, autonomy, and supervision of junior staff. Analysis and writing are a major part of the job. Advance sufficiently high on the totem pole to reach the principal investigator level, and a major part of your job becomes preparing research proposals in response to RFPs (requests for proposals), a good number of which will fail. At this level, working with clients, writing reports, and supervising staff are major responsibilities. Substantive involvement in research and analysis is typically limited to setting up paradigms for more junior researchers to follow, resolving difficult issues, and conducting particularly complicated work. Much more time is spent looking over the work others have done and advising them on how to refine it.

Working at the Cohen Center enables one to conduct a different type of research than most academics can. Most working professors don’t actually do particularly exciting research. They don’t have the staff to conduct complicated research projects. Consequently, if they are quantitative types, they spend their lives analyzing datasets that other people have created. If they are qualitatively oriented, they will do small-scale interview and participant observation research. Teaching responsibilities take up some time (more earlier in your career, when you’re still creating courses), particularly if you don’t work at a research university. You’re also expected to publish in peer-reviewed journals and need to produce a book to be eligible for tenure and move from assistant professor to associate professor.

What should you learn from all this? In some ways, the most helpful advice is: don’t be me. Consider working for a while before going to grad school. Apply to plenty of grad schools. Visit them and talk to faculty and students. Get your applications in early. Apply for fellowships like Wexner and other funding opportunities. Work while you study. Prepare for your qualifying exams while you’re still doing coursework. Understand the career options available to you. I shudder to think of all the places where things could have fallen apart but fortuitously did not. Had I offered this advice to my undergraduate self, I might have made fewer errors, but it wouldn’t have changed my plans. It was the only thing I wanted to do…I couldn’t avoid it any more than I could have decided not to become a Jew. The flip side of the coin is that I’m a living example of Herzl’s line that “if you will it, it is no dream,” anachronistically rendered in Hebrew as “im tirtzu ein zo agadah.” I had no Jewish background to speak of, grew up in Australia, had a decent but not outstanding undergraduate education, and no appreciable mathematical abilities, but I wanted to conduct social research on American Jewry. Yet today I am an “expert” on American Jewry, a competent survey researcher, and a very good statistical analyst. I’ve also discovered interests on the border of survey design, statistics, and mathematics that I never knew I would develop. I can’t complain about the life I lead, but if you are planning to apply to grad school in a field of purely academic interest (astronomy, English, linguistics, Jewish studies), do so because you can’t help yourself, not because you want a career in academia.

Friday, April 2, 2010

20¢ = 25¢


Hailing from Australia, I've often been puzzled by American currency mores. Why "nickel" and "dime"? (I still can't remember which is which after a decade in the U.S.) Why is a dime (10¢ if you're wondering) smaller than a nickel (5¢)? What do Americans have against their 50¢ and $1 coins? Why do all banknotes look the same? (A practice that has been ruled to be in violation of the Disabilities Act.) Why don't Americans use $50 and $100 bills?

Besides all of these questions, I've wondered whether quarters (25¢) or 20¢ coins (as we have in Australia) are more efficient. I define efficiency for a system of coinage as requiring the fewest coins, on average, for an arbitrary amount of change between 1¢ and 99¢. Although Australia withdrew its 1¢ and 2¢ coins from circulation in 1992, I made the comparison as close as possible by looking at a system with 1¢, 5¢, 10¢, 20¢/25¢, and 50¢ coins. The result, as the title of this piece suggests, is that 20¢ and 25¢ coins are equally efficient--the average number of coins required for amounts of change between 1¢ and 99¢ is 4.24 in both cases (SD is 1.71 in both cases).

As I mentioned earlier, though, Americans have a strange aversion to 50¢ coins, while pre-1992 Australia had 2¢ coins. If we take this into account, the average number of coins required for change in the U.S. rises to 4.75 while the Australian average falls to 3.43, meaning that you get on average more than a coin extra in change in the U.S.
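These figures are straightforward to reproduce. Below is a minimal Python sketch (my own illustration, not part of the original analysis) that uses the standard dynamic-programming solution to the change-making problem; it assumes change is always given with the fewest possible coins, and the function names are mine.

```python
def min_coins(denoms, max_amount=99):
    """Fewest coins needed to make every amount from 1 to max_amount cents."""
    best = [0] + [float("inf")] * max_amount
    for amount in range(1, max_amount + 1):
        # Optimal count = 1 coin plus the optimal count for the remainder.
        best[amount] = 1 + min(best[amount - d] for d in denoms if d <= amount)
    return best[1:]

def average_coins(denoms):
    """Average number of coins over all change amounts from 1 cent to 99 cents."""
    counts = min_coins(denoms)
    return sum(counts) / len(counts)

for label, denoms in [
    ("US with 50c  (1, 5, 10, 25, 50)", [1, 5, 10, 25, 50]),
    ("AU post-1992 (1, 5, 10, 20, 50)", [1, 5, 10, 20, 50]),
    ("US as used   (1, 5, 10, 25)", [1, 5, 10, 25]),
    ("AU pre-1992  (1, 2, 5, 10, 20, 50)", [1, 2, 5, 10, 20, 50]),
]:
    print(f"{label}: {average_coins(denoms):.2f}")
```

Running this reproduces the numbers above: 4.24 for both the 20¢ and 25¢ systems, 4.75 for the U.S. without 50¢ coins, and 3.43 for pre-1992 Australia.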

The next step, of course, is to ask whether it is possible to do better than 5¢, 10¢, 20¢/25¢, and 50¢ coins altogether. Might we in fact be better served by 4¢, 12¢, 37¢, and 74¢ coins with respect to minimizing change? (Actually, yes. This would require 4.02 coins on average.) I've cleared most hurdles to specifying this problem in AMPL (a mathematical programming language for optimization problems) and will return with my "rational" scheme for change in the near future.

Of course, not all amounts of change are created equal. In currency regimes with 1¢ coins, prices tend to end in nines (Basu 1997, 2006; Demery and Duck 2007), while in Australia prices end in fives. Ideally, we would weight the number of coins by the probability of receiving change of that amount in a transaction. However, a cursory search has not revealed a useful dataset for these calculations.
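If such a dataset did turn up, the weighting itself would be trivial. The sketch below (again mine, with optimal change computed by dynamic programming) uses an entirely made-up distribution in which prices ending in 9, and hence change amounts ending in 1, are five times as common as anything else; the weights are a placeholder for real transaction data, not an empirical claim.

```python
def min_coins(denoms, max_amount=99):
    """Fewest coins needed to make every amount from 1 to max_amount cents."""
    best = [0] + [float("inf")] * max_amount
    for amount in range(1, max_amount + 1):
        best[amount] = 1 + min(best[amount - d] for d in denoms if d <= amount)
    return best[1:]

def weighted_average_coins(denoms, weights):
    """Average coins per transaction, weighting each change amount 1c-99c
    by its (assumed) probability of occurring."""
    counts = min_coins(denoms)
    return sum(c * w for c, w in zip(counts, weights)) / sum(weights)

# Hypothetical distribution: prices ending in 9 are assumed five times as
# common, so change amounts ending in 1 get five times the weight.
weights = [5.0 if amount % 10 == 1 else 1.0 for amount in range(1, 100)]
print(round(weighted_average_coins([1, 5, 10, 25], weights), 2))
```

With uniform weights the function reduces to the simple averages reported above; any real price-ending distribution could be dropped in without other changes.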

References

Basu, Kaushik. 1997. "Why are so many goods priced to end in nine? And why this practice hurts the producers." Economics Letters 54:41-44.

-----. 2006. "Consumer cognition and pricing in the nines in oligopolistic markets." Journal of Economics & Management Strategy 15:125-141.

Demery, David and Nigel W. Duck. 2007. "Two plus two equals six: an alternative explanation of why so many goods prices end in nine." Discussion Paper No. 07/598, Dept. of Economics, University of Bristol, Bristol, UK.

Thursday, March 25, 2010

A Checklist for Critically Reading Quantitative Research

The following is an attempt to create a heuristic for evaluating the quality of quantitative social scientific research with zero assumed background in social scientific methods.

1. Is the information contained solely in a press release and/or interview or is a report or article in a scholarly journal available? If a report or scholarly journal, was the research reviewed by knowledgeable peers?

2. Is the basic methodological information required by the American Association for Public Opinion Research's Standards for Minimal Disclosure available, either in the report or a methodological appendix?

a. Who sponsored the survey, and who conducted it. Commentary: Search for information on the researcher and the company that actually carried out the data collection, if different. Has either been cited by AAPOR for violating its Code of Professional Ethics and Practices? Alternatively, have company principals held posts in professional associations like AAPOR (cf. Tom Smith, Mark Shulman) and/or published methodological articles and given presentations at meetings like those of the American Statistical Association? Does the research firm used primarily conduct political polls, market research, or academic research? Naturally, look for commentaries on the specific piece of research. Where research is sponsored by an organization with an agenda that might influence the research (i.e., just about all Jewish communal research), does the researcher explain the nature of their relationship with the sponsor?

b. The exact wording of questions asked, including the text of any preceding instruction or explanation to the interviewer or respondents that might reasonably be expected to affect the response. Commentary: Are the questions worded in a way that might bias the answers? If the results are very different from previous research, has the author used standard questions? If not, do they justify their items? Are there any skip patterns that fail to collect information from important populations? If the researcher combines items to form a scale or index, does s/he explain exactly how it was done? Does s/he report measures of scale reliability (e.g., Cronbach’s/coefficient alpha > .75)? Does the index/scale actually seem to reflect what the researcher says it does? Quite often, it doesn’t.
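
Where a report cites scale reliability, the standard formula is easy to check. A minimal sketch in Python (the function and data below are mine for illustration, not drawn from any particular survey report):

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient (Cronbach's) alpha for an n-respondents x k-items matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Four hypothetical respondents answering two closely related items:
# alpha comes out well above the .75 rule of thumb
scores = np.array([[1., 2.], [2., 3.], [3., 5.], [4., 6.]])
alpha = cronbach_alpha(scores)
```

Alpha rises with the number of items and their average intercorrelation, which is why a long index can look "reliable" even when its items don't obviously measure one thing.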

c. A definition of the population under study, and a description of the sampling frame used to identify this population. Commentary: Is there any reason to think that the sample might differ systematically from the population it is intended to represent? Was it a convenience sample (a.k.a. open sample)? Was it from an opt-in Internet panel? Does the sample just constitute Jews by religion? If there is a reasonable probability of bias, is it likely to have increased or decreased the reported results of the study? Does the author address any possible limitations of the sample?

d. Sample sizes and, where appropriate, eligibility criteria, screening procedures, and response rates computed according to AAPOR Standard Definitions. At a minimum, a summary of the disposition of sample cases should be provided so that response rates can be computed. Commentary: The lack of a response rate is a major red flag. The response rate should be accompanied by a specification of the exact response rate formula used (e.g., AAPOR RR3). Better research will not only list the response rate but also address possible nonresponse biases.
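
For reference, AAPOR RR3 counts unknown-eligibility cases as eligible only in proportion to an estimated eligibility rate e. A sketch of the formula (the disposition counts below are invented for illustration):

```python
def aapor_rr3(I, P, R, NC, O, UH, UO, e):
    """AAPOR Response Rate 3.
    I = complete interviews, P = partial interviews,
    R = refusals and break-offs, NC = non-contacts,
    O = other eligible non-interviews,
    UH/UO = unknown-eligibility cases (household / other),
    e = estimated proportion of unknown cases that are eligible."""
    return I / ((I + P) + (R + NC + O) + e * (UH + UO))

# Hypothetical dispositions: 500 completes out of an estimated 950 eligible cases
rr3 = aapor_rr3(I=500, P=50, R=200, NC=100, O=50, UH=100, UO=0, e=0.5)
```

Note how e matters: a researcher who quietly assumes most unknown cases were ineligible can report a flatteringly high rate, which is why the formula used should always be named.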

e. A discussion of the precision of the findings, including estimates of sampling error, and a description of any weighting or estimating procedures used. Commentary: If the survey uses an opt-in Internet panel or a convenience sample, estimates of sampling error are inappropriate and should never be reported. Discussion of weights may distinguish between design weights and poststratification weights (which correct for biases). If poststratification weights are used, the discussion should specify what variables were used and how targets for adjustment were derived.
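
For a simple random sample, the familiar 95% margin of error for a proportion is z * sqrt(p(1-p)/n); clustering and heavy weighting inflate it by a design effect. A sketch (the deff value below is illustrative, not from any actual survey):

```python
from math import sqrt

def margin_of_error(p, n, deff=1.0, z=1.96):
    """95% margin of error for a proportion, inflated by a design effect."""
    return z * sqrt(deff * p * (1 - p) / n)

# A 50% proportion from n=1,000 gives roughly +/- 3 points...
moe_srs = margin_of_error(0.5, 1000)
# ...but a design effect of 2 (e.g., from heavy weighting) widens it substantially
moe_weighted = margin_of_error(0.5, 1000, deff=2.0)
```

This is why a report that quotes "+/- 3%" without mentioning its design or weighting scheme may be overstating its precision.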

f. Which results are based on parts of the sample, rather than on the total sample, and the size of such parts. Commentary: If analyses exclude parts of the sample, is there a reasonable justification for their exclusion?

g. Method, location, and dates of data collection. Commentary: How long was the field period? How many contact attempts were made? Did the researcher try to convert refusals? Overall, does it appear that sufficient effort was put into collecting the data?

3. If the researcher describes something as a well-established fact, is it in fact widely supported? Often, it is not. Do a quick fact-check.

4. Does the researcher actually present evidence that directly supports their conclusions? Is there any evidence in the research that contradicts the researcher’s assertions? (This is surprisingly common.) Look carefully for situations where some related data is shown, “hand-waving” takes place (i.e., unsupported assertions are made), and a conclusion is stated definitively.

a. For processes involving changes over time, is there actually evidence of change over time or does the researcher base her/his analyses on differences between age groups? If so, can one reasonably expect there to be lifecycle effects?

b. For processes involving individuals, does the researcher base their conclusions on aggregated data?

5. Does the researcher omit any relevant outcomes? First, were there questions asked that aren’t reported? Second, were any topics simply not asked about? Is there a reason to expect that the omitted topics might diverge from the reported results?

6. If the researcher asserts that X caused Y, does s/he control for other factors that might be expected to influence the outcome?

a. Simple bivariate analyses are far more prone to bias than regression analyses with suitable controls, which tend to be quite robust, even for biased samples.
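
The point about controls can be seen in a small simulation (entirely illustrative, with made-up data): when a confounder z drives both x and y, the bivariate slope of y on x is badly biased, while a regression that includes z recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                   # confounder
x = z + rng.normal(size=n)               # "treatment", correlated with z
y = 2 * x + 3 * z + rng.normal(size=n)   # true effect of x on y is 2

# Bivariate slope, ignoring z: biased well above the true value
X1 = np.column_stack([np.ones(n), x])
b_bivariate = np.linalg.lstsq(X1, y, rcond=None)[0][1]

# Slope with z controlled: close to the true value of 2
X2 = np.column_stack([np.ones(n), x, z])
b_controlled = np.linalg.lstsq(X2, y, rcond=None)[0][1]
```

The same logic applies in reverse: a researcher claiming "X causes Y" from a bivariate table may simply be showing you the footprint of an omitted z.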

b. How important are exact estimates to the researcher’s findings? The more research depends on a specific figure (e.g., the number of Jews in the United States), the more vulnerable it is to shortcomings in data collection.

c. Did the researcher omit any relevant explanatory variables? First, were there any topics asked about that weren’t included in the analysis? (Typically, where a variable is not significant, a researcher will say something like “X, Y, and Z did not have a significant effect and are omitted; model not shown.”) Second, were there any potentially relevant explanatory variables omitted? If so, what explanatory variables included in the analysis might be picking up the omitted variables’ effects?

d. Does the researcher report whether effects were statistically significant? This is especially important where sample sizes are small. (Note that statistical significance isn’t appropriate in cases where all relevant units were surveyed, like many Birthright reports, or for convenience and other nonprobability samples.)

e. If the sample size is very large, are the effects reported large enough to be meaningful?
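
With a very large sample, even a trivial difference clears the significance bar. A quick sketch with a two-proportion z-test (the numbers are invented to make the point):

```python
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference between two sample proportions,
    using the pooled estimate of the common proportion."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# A one-point gap (51% vs. 50%) is substantively trivial, yet with
# 100,000 cases per group the z statistic far exceeds the 1.96 cutoff
z = two_prop_z(0.51, 100_000, 0.50, 100_000)
```

"Statistically significant" and "large enough to matter" are separate questions; big-sample studies need to answer both.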

7. For evaluation research on the effectiveness of a program or policy:

a. How does the researcher estimate the program effect? In declining order of rigor, these are:

i. True experiments that randomly allocate individuals or other units like communities into treatment or control groups.

ii. Quasi-experimental designs that have nonrandom treatment and control groups and measure outcomes before and after the treatment intervention begins. Does the researcher document whether systematic differences exist between treatment and control groups? If there are differences, what steps does the researcher take to account for them?

iii. Quasi-experimental designs that either measure participants before and after the treatment but do not have a control group, or measure treatment and control groups only after the intervention. For participant-only pre/post designs, are there other factors, such as aging or a major event like a terrorist attack, that could also explain the results? For treatment-and-control post-only designs, does the researcher model the characteristics of the groups and control for them when analyzing outcomes?

iv. Treatment-only post-only designs that ask retrospective questions about attitudes and behavior. Are the events being recalled memorable? Is there a reason to believe that respondents may have recall errors, like telescoping events?

v. Treatment-only post-only designs that ask about program satisfaction and self-perceptions of program effect. These are very weak.

b. Does the researcher generalize about the program’s effect beyond the type of people who actually participated? For instance, does a study of a program of outreach to intermarried Reform synagogue members claim that the program will work for all intermarried families? Is this a reasonable assumption?

Wednesday, January 13, 2010

Lessons Learned--Part 1--RAID 0+1

I'm something of an experiential learner. Herewith, the first lessons from the school of "dear God, that was a stupid thing to do"--and a few things I actually got right the first time.

Never, ever use RAID 0+1 with an Intel ICHxR chip. I like to build computers. Like many a PC do-it-yourselfer, I sometimes get too ambitious. Yes, I could build simple computers, but where's the fun in that? My first project was putting my Dell Dimension 8200 (or something like that) into a new (flashy, naturally) case. Of course, Dell used a proprietary motherboard format that didn't fit the new case, so I got out my trusty drill and drilled new holes in the case backboard to accommodate standoffs. Of course, I decided to keep the motherboard in the computer to line up the holes. Not a smart idea, that. The case kept going; the motherboard didn't.

I eventually combined bling (the case of course had a clear window)--blacklight CCFLs, glow-in-the-dark sleeves on the power cables, and light-up round IDE cables--with overengineering: I managed to squeeze a Thermalright SI-120 onto my new and undrilled ASRock 939Dual-SATA2 (AMD 3800X2), along with a fan for my RAM (with flashing lights!), a fan clamped on top of the Northbridge heatsink, some massive Panasonic fans throwing about 100 CFM, a Radeon 9700 All-in-Wonder with an aftermarket heatsink fan (later joined by an Nvidia 7600, which I somehow managed to keep myself from sticking extra fans on--the Dual-SATA2 supported both AGP and PCI-E, so who was I to pike out?), and two (count 'em, two!) WD Raptor 72GB drives in RAID 0, plus two more data hard drives. So my first computer (still going all these years later despite the odds), with its 10 fans (2 case, 1 CPU, 2 RAM, 1 Northbridge, 1 PSU, 2 video) and four hard drives, all wrapped in an aluminium case that practically acted as an amplifier, was a tad noisy.

A couple of years later, the kind people I work for let me build a new computer for myself. I did my homework this time.
I got a normal motherboard (Gigabyte GA-965-DS3R), a sensible case (a big steel Antec case with a modular 500W PSU), a nice Core 2 Duo E6850 (plus a huge whomping Thermalright Ultra-120 with lovingly lapped copper base and my fastest Panasonic air pusher), four WD K-series 250GB drives with a RAID 0+1 partition for data and an 8GB RAID 0 partition for the swap file, 4GB of DDR2-800 RAM (later upped to 8GB), and Win XP 64-bit. (I ran XP 64-bit on my previous workstation, a Dell Precision 380, in the days before 64-bit printer drivers.) RAID 0+1, I thought, was the pièce de résistance. It would be safe, unlike my RAID 0 at home, and speedy, unlike the RAID 5 in the Precision 380.

It merrily dispatched Stata analyses with blistering speed until I got back from winter vacation to discover that it wasn't running and reported that one of the disks had failed. Fine, I thought. It used to crash on occasion, recovering with RAID errors, and this would finally let me identify the failing drive. I jotted down the failed drive's serial, pulled it, got a new one from Microcenter, installed it, and...and...it wouldn't recover. I'd get to the RAID BIOS, tell it to rebuild (all the information was there, thanks to the +1 part of the RAID), and it wouldn't boot, going straight to asking to boot from CD. I consulted the Intel Matrix RAID web pages (the board used an ICH-9R southbridge) to discover that, yes, it would recover once it booted into Windows. What one would do if one couldn't boot into Windows in the first place wasn't mentioned.

Which brings me to the lesson learned. As I found out subsequently, the boot sectors of the first two disks of a RAID 0+1 array are necessary for it to boot properly. Naturally, disk 1 was the one that failed. Intel's Matrix RAID, while in just about every other way a truly admirable system, is a partially software RAID controller: it needs an OS to recover, and that doesn't work with a corrupt boot sector.
Next time, RAID 1 or a proper RAID card! The news wasn't completely awful, though. I did have data redundancy, and Vincent from PC-Maker in Waltham, after several days of pain, managed to pull out all my (and my employer's) data for far less than the $2,000-plus quoted by data recovery companies. As it happened, my automated backup at work--besides being almost impossibly large and bringing the repeated wrath of the network admins down on me as I tried to recover from it--had stopped backing up my hard drive. I have a sensible new computer (a Dell OptiPlex) on the way, and the sad carcass of my second DIY computer awaits cannibalization.