Inner dual methods for structured output learning

One of the big challenges in making structured predictors more practical is that they can be really slow compared to the hack of assuming your variables are independent and using unstructured models.

There’s a line of research that partially addresses this cost by avoiding repeated inference during the learning phase. I call it “inner-dual learning,” but it goes by many names.

The basic idea is that we do prediction by solving an optimization problem

\arg\max_x f(x; \theta).

For example, the objective f could be the sum of an energy and an entropy, which makes this optimization related to variational marginal inference in probabilistic models, or it could just be a linear objective over a structured output space, which is typical in combinatorial structured prediction. For various reasons, we often solve these inference objectives using a dual version of the inference optimization

\arg\min_\alpha g(\alpha; \theta),

where

\min_\alpha g(\alpha; \theta) = \max_x f(x; \theta) \quad\text{and}\quad g(\alpha; \theta) \ge f(x; \theta), ~\forall x, \alpha.

The learning optimization often takes the form of

\min_\theta ~ R(\theta) + \max_x f(x; \theta) + \text{something}

which is a saddle-point problem minimizing over parameters \theta and maximizing over the predictions x. The “something” is typically a function of \theta that doesn’t depend on x.
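For concreteness, here is one common instantiation (roughly the max-margin, structured-hinge objective summed over training examples x_n; this particular form is my paraphrase, not a quote from any specific paper):

\min_\theta ~ R(\theta) + \sum_n \left[ \max_x \left( f(x; \theta) + \Delta(x, x_n) \right) - f(x_n; \theta) \right],

where \Delta(x, x_n) measures how much a candidate output x disagrees with the true output x_n, and the -f(x_n; \theta) terms are the “something” that depends on \theta but not on the maximized x.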

The standard way to solve saddle-point-y things like this is to repeatedly

  1. solve the inner inference optimization,
  2. take the (sub)gradient with respect to the parameters \theta, and
  3. update the parameters using your favorite gradient-based optimization routine.
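Here is a minimal sketch of that standard loop, assuming a toy structured hinge loss where the “inference” is brute-force enumeration over a tiny output space (the setup and all names are hypothetical, just to show the shape of the loop, not anyone’s actual method):

    import numpy as np

    # Toy setup: 3-bit structured outputs, 4-dimensional inputs, and a joint
    # feature map phi(x, y). Brute-force enumeration stands in for an
    # expensive combinatorial inference solver.
    rng = np.random.default_rng(0)
    outputs = [np.array(b) for b in np.ndindex(2, 2, 2)]    # all length-3 binary outputs

    def features(x, y):
        return np.outer(y, x).ravel()                       # phi(x, y), 12-dimensional

    data = [(rng.normal(size=4), rng.integers(0, 2, size=3)) for _ in range(20)]
    theta, step, reg = np.zeros(12), 0.1, 0.01

    for epoch in range(10):
        for x, y_true in data:
            # 1. solve the inner (loss-augmented) inference exactly
            scores = [theta @ features(x, y) + np.sum(y != y_true) for y in outputs]
            y_hat = outputs[int(np.argmax(scores))]
            # 2. subgradient of the regularized structured hinge loss w.r.t. theta
            grad = reg * theta + features(x, y_hat) - features(x, y_true)
            # 3. gradient step on the parameters
            theta -= step * grad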

Having to solve the inference problem for each gradient step is painful, especially when you’re dealing with complex structured outputs. So the inner-dual idea is to replace that inner maximization with a minimization of its dual. (By weak duality, since the dual upper-bounds the inner maximum, this yields an upper bound on the original learning objective!)

\min_{\theta, \alpha} R(\theta) + g(\alpha; \theta) + \text{something}.

The problem then becomes a joint minimization over the parameters and the dual variables! Moreover, we often have very fast message-passing style algorithms for solving the dual minimization. That means we can, in a principled way, interleave learning and inference, rather than using one as a subroutine for the other. The learning loop then becomes

  1. make a single-pass update to the inner inference dual variables (e.g., pass messages),
  2. take the (sub)gradient of the dual objective with respect to the parameters \theta, and
  3. update the parameters using your favorite gradient-based optimization routine.
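Here is the corresponding inner-dual sketch on the same hypothetical toy problem. The inner maximization over outputs is replaced by a simple dual, \max_y s_y = \min_\lambda \, [\lambda + \sum_y \max(0, s_y - \lambda)], and we keep one dual variable per training example, making a single dual update per parameter update. (This toy dual still enumerates the output space, so it only illustrates the structure of the interleaved updates, not the speed benefit you would get from a message-passing dual.)

    import numpy as np

    # Same hypothetical toy setup as the previous sketch.
    rng = np.random.default_rng(0)
    outputs = [np.array(b) for b in np.ndindex(2, 2, 2)]

    def features(x, y):
        return np.outer(y, x).ravel()

    data = [(rng.normal(size=4), rng.integers(0, 2, size=3)) for _ in range(20)]
    theta, step, reg = np.zeros(12), 0.05, 0.01
    lam = np.zeros(len(data))          # one inner dual variable per training example

    for epoch in range(20):
        for n, (x, y_true) in enumerate(data):
            scores = np.array([theta @ features(x, y) + np.sum(y != y_true)
                               for y in outputs])
            active = scores > lam[n]   # outputs whose dual hinge is currently "on"
            # 1. single subgradient step on the inner dual variable
            lam[n] -= step * (1.0 - np.sum(active))
            # 2. subgradient of the dual objective w.r.t. theta
            grad = reg * theta - features(x, y_true)
            for y, on in zip(outputs, active):
                if on:
                    grad += features(x, y)
            # 3. gradient step on the parameters
            theta -= step * grad

Because the dual upper-bounds the inner maximum for any \lambda, we never need to finish solving inference before touching the parameters.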

As far as I know, the first such inner-dual method appeared in the seminal paper by Ben Taskar and folks at ICML ’05 (link). That paper is really well cited and has been read by tons of people, but seemingly few have picked up on this trick. Instead, I usually see it cited for the primal structured-output learning objective it introduced, which has been the foundation of a lot of related work. Part of the reason this idea gets so little recognition is that Ben et al. formulated the dual as a quadratic program that had to be handed to a general-purpose quadratic programming tool, which I imagine is super slow.

Years later, in 2010, Ofer Meshi and folks (link) and Tamir Hazan and Raquel Urtasun (link) used the same idea, taking the dual objectives of fast message-passing algorithms for graphical model inference as the inner dual. Later on, in 2012, Alex Schwing and folks (link) used this for latent variable modeling, also with message-passing duals. In 2015, in work with Stephen Bach, Jordan Boyd-Graber, and Lise Getoor, we used the inner-dual method twice to even more aggressively dualize expensive inferences during latent variable learning (link). We did this specifically for hinge-loss MRFs and with ADMM inference, but I’m working with my current students on making it more general. Also in 2015, Chen, Schwing, et al. revisited the inner-dual idea to train deep models with structured predictors attached to them (link).

I’m sure there are other examples of this trick in the literature, though one problem with finding them is that the trick hasn’t been consistently named. Neither the Taskar nor the Hazan (et al.) papers really give it a name, referring to the idea as just a dual objective; Meshi et al. refer to it as using a dual loss; Bach and I referred to it as inner dual (or in our case, since there were two dual inferences for latent variable learning, we called it paired-dual learning); and Chen, Schwing, et al. called it blending inference and learning. My preference is inner dual, as should be obvious from how I refer to it throughout this post. I think it captures the fact that we’re using the dual objective of the inner optimization. But keep an eye out for these other names!

While inner-dual methods seem to do a great job of alleviating the computational cost of learning, they still train models that are expected to run expensive inference at test time. That’s not great. There are related lines of research on training structured predictors that will do something cheap at test time, like truncate a message passing optimization, but it remains to be seen how to integrate these different approaches without nullifying their benefits.

Scientific Hypothesis: We are the Best

It’s reviewing season for the summer conferences, so here’s something that’s on my mind as I’m doing my reviews.

One crappy thing that happens a lot in machine learning research is that researchers do non-scientific things like over-claiming, taking ownership, and bad experiment design. We end up with paper after paper, each claiming to present the best method with cherry-picked experiments that only demonstrate that the authors can draw prettier curves than other authors.

Sometimes authors use phrases like “our method” a lot in their description of the approach they’re demonstrating. Sometimes I even see tables or plots describing the final results from experiments where the legend entries are “our method,” “So and so’s method,” “SVM,” etc. This type of naming hints at a lack of objectivity.

Naming the proposed method is usually better, especially when the name actually describes the thing (so not an acronym that uses a letter from the middle of one of the words… c’mon people). Then the authors become scientists trying to understand the properties of some approach they discovered. And yes, they still get credit for discovering it, they just get fewer eye rolls.

This attitude also encourages poor experiment design. As computer scientists, we should want to understand the behavior of certain algorithms, so really good experiments would test many hypotheses about how the new algorithm performs under different conditions. We want to understand the strengths, weaknesses, and tradeoffs in comparison to other known methods. But many experiments in papers only test one hypothesis: “our method is the best method ever and you should purchase it.”

This problem is bad enough that I almost never trust the results of experiments in papers, or I just treat them as synthetic sanity checks, even when they use real data.

I’m certainly also quite guilty of this unscientific attitude and behavior. It’s very hard to avoid. On one hand, as scientists, we want to advance the world’s knowledge on machine learning, but on the other hand, as people who do science for a living, we want credit for advancing the world’s knowledge. That often leads to our papers reading more like patent applications than descriptions of scientific discovery. Yuck.

In conclusion, I’ve pointed out an annoyance and proposed no great solution for it. So I guess this qualifies as just ranting. But my method of pointing out this problem improves upon the state-of-the-art method by so-and-so et al. by 11%.

 

On Rejection and Acceptance Rates

A while ago, I posted about the emotionally difficult challenges of research. I mentioned half-jokingly at the end of that post that I would in the future write a post about another challenge: rejection.

Since some of the students in my group just submitted their first papers to the peer review process, I figured I’d actually go through with that plan now. We just submitted three papers to the KDD research track, which, when I last checked, has an acceptance rate of just under 20%. That means a few things. In expectation, 0.6 of my group’s papers will be accepted, and the MAP estimate (and marginal decoding) of how many papers will be accepted is zero.
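For the record, here’s the arithmetic, taking the 20% rate at face value and pretending acceptances are independent coin flips:

\mathbb{E}[\#\text{accepted}] = 3 \times 0.2 = 0.6, \qquad \Pr(0) = 0.8^3 \approx 0.51 > \Pr(1) = 3 \times 0.2 \times 0.8^2 \approx 0.38,

so the single most likely outcome is zero acceptances.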

Of course, the actual likelihood of acceptance or rejection is dependent on the quality of the work, but as we’ve seen recently from, e.g., the NIPS experiment, paper quality doesn’t have as certain an effect as we’d hope. What makes a scientific contribution worthy of acceptance is a very subjective concept, so even the best papers have a chance of landing some reviewers who just won’t be convinced.

So given these realizations, when submitting work for peer review, one must be somewhat prepared for rejection. It sounds easy, right? We know acceptance rates are low; we know it’s not personal; we know the reviewers that decide our fates are doing the best they can with very limited time. If we were perfectly rational, logic-based beings, there’d be no problem here. Just keep improving your work and trying again and again.

Of course, we aren’t purely rational and logical. Being rejected is one of the most difficult and painful parts of a researcher’s life. It genuinely hurts. As scientists, we want to be rational about it, but the visceral reaction to reading “we regret to inform you” comes with a plethora of painful emotions, from disappointment and sadness, to fear and anger.

The reason I thought it might be useful to write a post like this is similar to that of my previous post inspired by the TED talk. Scientists are often trained to ignore these emotions. We don’t talk about them much. We often try to only discuss the rational, actionable parts of rejection. “Use the feedback, and make the research better.” But the reality for me is closer to “get feedback, feel terrible, doubt yourself, blame system, blame yourself, briefly consider quitting, feel embarrassed for having these reactions, seek support from non-scientist friends and family, get back to work pretending you’re not hurt, eventually really get back to work.”

This whole ordeal is invariably part of the job. But like the emotional challenge of facing uncertainty and “the cloud” in research, it would be helpful to acknowledge that it is okay, normal, and expected to have these emotional reactions. That doesn’t preclude the more rational advice I’ve seen around, but I’d like not to perpetuate the fantasy that we can be perfectly rational about this whole process.

Eventually, once we realize that these emotions are normal, perhaps then it will be easier to filter them. Then we can find which emotions are useful, and work toward getting the ones that are not useful (i.e., most of them) out of our systems. Perhaps knowing that these emotions are a shared experience can help us manage them more easily.

Upcoming AISTATS paper

In about a week, our paper Unifying Local Consistency and MAX SAT Relaxations for Scalable Inference with Rounding Guarantees by Steve, me, and Lise will appear at AISTATS in San Diego. (Steve will be giving a talk on it Monday morning.)

The paper title is a mouthful, and it includes a pretty technical result, so here’s a weak attempt at explaining the intuition and take-away messages.

The paper is about MAP inference, or finding the most likely state in a probability distribution. Specifically, it’s about MAP inference in what we are calling logical Markov random fields (MRFs), which are MRFs whose potentials are defined by weighted logical clauses of a particular form. We show equivalences between two different approaches (and a third bonus one) for approximating MAP inference in these logical MRFs. These equivalences are theoretically interesting, but at least as importantly, they allow us to get the benefits of each approach, leading to fast MAP approximations that have constant-factor approximation guarantees.

What’s special about logical MRFs is that MAP inference is equivalent to finding the maximum weighted satisfying assignment to the logical variables, aka the MAX SAT problem. On the surface, this equivalence doesn’t seem exciting, because it equates one NP-hard problem to another. What makes it exciting is that there are some nice approximation algorithms for MAX SAT with quality guarantees, and these algorithms can therefore be applied to get quality guarantees for MAP. Unfortunately, the approximation method by the famous computer science duo, Michael Goemans and David Williamson, requires solving a linear program (LP) that scales not-so-well in practice, when using off-the-shelf linear programming algorithms.
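For reference, the LP in question looks roughly like this (my sketch of the standard MAX SAT relaxation, not notation from our paper): each Boolean variable i gets a relaxed value y_i \in [0, 1], each clause C_j with weight w_j gets an auxiliary variable z_j \in [0, 1], and we solve

\max_{y, z} ~ \sum_j w_j z_j \quad \text{subject to} \quad \sum_{i \in C_j^+} y_i + \sum_{i \in C_j^-} (1 - y_i) \ge z_j ~~ \forall j,

where C_j^+ and C_j^- are the unnegated and negated variables in clause j. Randomized rounding of the LP solution is what yields the constant-factor guarantee.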

Another approach for approximating MAP inference borrows from recent developments on doing fast inference in MRFs using local-consistency relaxations. At a very high level, these approaches relax the space of possible marginal probabilities (i.e., the marginal polytope) to a simpler space that only requires local consistency between marginal probabilities of variables and factors (i.e., the local marginal polytope). By solving the corresponding optimization over this simpler, relaxed set of constraints, many very fast message-passing algorithms have been discovered in the past few years. So it’s natural to try to use one of these local-consistency relaxation message-passing algorithms to do MAP inference in a logical MRF.
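Roughly, the local consistency constraints look like this (again, my sketch of the standard construction rather than the paper’s notation): for each variable i and factor f, we keep pseudomarginals \mu_i and \mu_f that only have to agree locally,

\mu_f(x_f) \ge 0, \qquad \sum_{x_f} \mu_f(x_f) = 1, \qquad \sum_{x_f \setminus x_i} \mu_f(x_f) = \mu_i(x_i) ~~ \forall f, ~ i \in f, ~ x_i,

rather than requiring them to be the marginals of any single joint distribution, which is what makes the relaxed optimization so much easier.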

The main result we show in the paper is that these two seemingly different approaches are equivalent. This equivalence means that when we use these fast-in-practice local-consistency relaxation algorithms that pass messages to quickly find an approximate solution, we’re also able to get the solution to the linear-program subproblem of the MAX SAT approximation algorithm. Using the LP solution, we can perform a special rounding scheme that guarantees a constant-factor approximation.

The last bonus equivalence is that both of these approaches are also equivalent to the linear form of a hinge-loss Markov random field, which we’ve been studying over the past few years as a powerful class of probabilistic models with efficiently optimizable energy functions. The conversion from logic to hinge-loss MRFs (i.e., the principles behind probabilistic soft logic) had previously been motivated by connections to fuzzy logic, and now we have these other relationships to MAX SAT relaxation and local-consistency relaxation.
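To give a flavor of that conversion (paraphrasing the usual probabilistic soft logic construction rather than quoting the paper): a clause with weight w_j, unnegated variables C_j^+, and negated variables C_j^-, over relaxed variables y_i \in [0, 1], becomes the hinge-loss potential

w_j \max\left(0, ~ 1 - \sum_{i \in C_j^+} y_i - \sum_{i \in C_j^-} (1 - y_i)\right),

which is zero exactly when the relaxed clause is satisfied in the Łukasiewicz sense and grows linearly with its distance to satisfaction.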

Behind the scenes, this last bonus piece is how we happened to find these equivalences. We were initially working with hinge-loss MRFs, and these other approaches to inference in logically defined MRFs seemed so different that we thought it would be interesting to compare them. So we ran a few tests and discovered they were behaving similarly… very similarly; they were returning solutions whose differences were small enough to be numerical error. This surprising behavior led to confusion, derivations, and finally understanding.

Finally, the super weird part: any (nondeterministic factored) discrete MRF can be converted to an equivalent logical MRF. This, in some sense, means the constant-factor quality guarantees that come from the equivalence between logical MRF inference and MAX SAT, also apply to any discrete MRF. But along the way of converting a discrete MRF to the restricted logical MRF form, the strength of this constant-factor guarantee must be weakened, especially since we know that MAP can’t be approximated to a constant factor unless P = NP. But it’s still an open question how this all fits together. Check out the paper, and maybe we can brainstorm about this in San Diego.

My Spring Conference Reviewing Data

Last night, I finally finished a marathon month-plus of reviewing for machine learning and machine-learning-adjacent conferences. Because of my own poor calendar organization, I foolishly agreed to join the program committees for IJCAI 2015 (Machine Learning Track), KDD 2015, ICML 2015, and UAI 2015. These conferences all had reviewing periods during the month of March and this first bit of April.

My paper assignments for these conferences were six for IJCAI, five for KDD, six for ICML, and five for UAI. While I was reviewing these 22 papers, I was recording my initial overall recommendation (prior to discussion and author response) for each of these papers, just to measure how I tend to score papers. I figured I’d post some of these recordings here, with the major caveat that these are still tiny sample sizes and they are heavily biased by what papers and topics I like to bid on. I’m also going to convert all scores to a scale of [strong reject, weak reject, weak accept, strong accept] to both simplify and muddy up my data a bit to prevent any chance of some smartypants somehow de-anonymizing based on my silly blog post.

  • For IJCAI, my recommendations for my six papers were one reject, one weak reject, three weak accepts, and one strong accept.
  • For KDD, my recommendations for my five papers were three rejects, one weak reject, and one strong accept.
  • For ICML, my recommendations for my six papers were two weak rejects, three weak accepts, and one strong accept.
  • For UAI, my recommendations for my five papers were two rejects, two weak rejects, and one weak accept.

Overall, I recommended four rejects, six weak rejects, eight weak accepts, and three strong accepts. I gave zero strong reject recommendations. If my initial vote were the only one that counted, my accept rate for each conference would be 66% for IJCAI, 20% for KDD, 66% for ICML, and 20% for UAI. Overall, my acceptance rate was a rather high 45%.

So what is the takeaway message? I’m not sure. I guess this still isn’t enough data to really tell anything. Let me attempt to make some claims.

  • The numbers suggest that I like ICML and IJCAI papers better than UAI and KDD papers. I would be pretty surprised if this is true and not just a result of randomness. It’s hard to tell with the IJCAI ML track being a brand new idea. I usually imagine myself as liking UAI papers the most of all the medium-sized ML conferences.
  • The numbers suggest that I like ICML papers about graphical models, structured prediction, and relational learning, since these are the topic areas I usually bid on and that Toronto Paper Matching usually assigns to me. This is plausible, but not consistent with my low accept rate for UAI.
  • By a similar argument, the numbers suggest that I don’t like KDD papers on graph mining and relational models. This is also plausible, but surprising. I think in this case, I really like the problem area of data mining from complex network data, but maybe I’m often unsatisfied by the methods people propose. It’s possible I’m too critical of this kind of work.

Sorry these are all pretty weak analyses. The sample size is just too small. If I want to understand my own biases better, I need to volunteer to review even more (note to self: do not do this), or keep better records from previous years of reviewing.

Only one thing is absolutely clear from this month of reading all these submissions: seriously everyone needs to stop using the word “employ.”

Advanced Machine Learning = Graduate Introduction to Machine Learning

This is just an announcement regarding a FAQ primarily for the Virginia Tech community. I’m scheduled to teach CS5824/ECE5424 Advanced Machine Learning. This course is named this way because it’ll be taught in conjunction with CS4824/ECE4424 for undergrads, which is called “Machine Learning.” So the “Advanced” modifier indicates that you’ll be graded at an advanced, graduate level, but the course in both cases will be intended for students who want an introduction to machine learning. In other words, it’ll be my version of this course, currently being taught by Dhruv.

Sorry for the confusion! The university needed these courses to have different names.

Dhruv will be teaching a seminar course on deep learning in the fall (not yet in the course catalog), if you’re looking for a course to continue learning after the intro, and I’m working out details on teaching a graphical models seminar-level course in Spring 2016. So there will be plenty of chances for more advanced study in the next academic year, but “Advanced Machine Learning” is “Introduction to Machine Learning for Graduate Students.”

The Plurality of Data


Plural Data. (I know that’s not Data on the right… Or is it?)

Let’s set this straight. The English word “data” should almost always be used as a mass noun, which means it shouldn’t be treated as a plural noun. Lots of people make the mistake of using it as a plural noun because they think it sounds fancier, since its original Latin form was the plural of “datum.” But the way we use it in modern English, especially in computer science, makes the concept of a “datum” make no sense.

Data pluralists think of data sets as many “datums,” but what is a datum? An example? A measurement? A dimension of a measurement? A bit in the binary representation of a dimension of a measurement in an IEEE floating point representation?

It is none of these. Data, especially high-dimensional and complex data, does not consist of countable singulars in any way that actually corresponds to what we mean when we talk about it, so it should not be a plural. Instead, it should be treated synonymously to “information.” We analyze “this information” or “this data,” not “these information” or “these data.”

I’m pretty darn certain of this reasoning. But is there any example where another word is (justifiably) plural even though its singulars are uncountable or undefined? Or is there any scenario in modern usage where datums can be reasonably defined? If there are, then those data might change my mind.

Contraction-based convergence proofs

Every now and then, teaching gives us some really valuable experiences. I’m teaching graduate level AI, and I had a really great experience learning something I had never paid much attention to before. Specifically, I read a section of Russell and Norvig’s book that deals with the convergence of Bellman’s utility updates for Markov decision processes.

When it comes to convergence proofs, the kind I’ve seen usually reason about the high-dimensional curvature of some objective function, or they do nasty things like analyzing the unrolled recursion tree. Yuck! For this reason, when I saw a section on a convergence proof, my immediate instinct was to run away and skip it for my and my students’ sakes.

But the convergence of Bellman’s updates can be proven using the concept of contraction, which says that an operator f on two inputs x and x' makes them “closer,” according to some measure of distance d.

d(f(x), f(x')) < d(x, x')

This property means that repeating the operator, e.g., f(f(f(f(x)))) and f(f(f(f(x')))), makes any two different initializations x and x' get closer and closer, and if the contraction is strong enough, makes them converge to the same value, giving a proof of convergence and uniqueness for the update f. Beautiful, no?
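For the Bellman case specifically (writing this from memory as a sketch, not quoting the book), the update on a utility function U is

(BU)(s) = R(s) + \gamma \max_a \sum_{s'} P(s' \mid s, a) \, U(s'),

and the contraction argument shows that \lVert BU - BU' \rVert_\infty \le \gamma \lVert U - U' \rVert_\infty for any two utility functions U and U', where the discount factor \gamma < 1 plays the role of the contraction constant.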

The work comes in proving the contraction bound. There’s freedom in choosing the distance metric, and the stronger the contraction, the faster the convergence. The Bellman updates contract by a constant factor, which I imagine is about as fast a convergence as you can get. And I imagine there can be traps where things contract but the amount of contraction becomes infinitesimal; in that case, this argument doesn’t actually show convergence.
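Here’s a tiny numerical illustration in Python (a made-up random MDP, just to watch the sup-norm distance between two arbitrary initializations shrink by at least the discount factor on every sweep):

    import numpy as np

    # A tiny toy MDP to illustrate the contraction property of Bellman updates.
    # (Hypothetical example: 3 states, 2 actions, random transitions and rewards.)
    rng = np.random.default_rng(0)
    n_states, n_actions, gamma = 3, 2, 0.9
    P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
    R = rng.normal(size=n_states)                                     # reward per state

    def bellman(U):
        # Bellman optimality update: U'(s) = R(s) + gamma * max_a sum_s' P(s'|s,a) U(s')
        return R + gamma * np.max(P @ U, axis=0)

    U, V = rng.normal(size=n_states), rng.normal(size=n_states)
    for t in range(10):
        # The sup-norm distance shrinks by at least a factor of gamma per iteration.
        print(t, np.max(np.abs(U - V)))
        U, V = bellman(U), bellman(V)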

I’m now curious what other convergent algorithms can be proven to converge using a contraction argument. Or more importantly, whether I can reach into my bag of algorithms-that-I-can’t-yet-prove-convergence-of and find one that I can finally solve using this approach.

I’m also curious if some type of contraction is a necessary condition for a convergent operator with a unique fixed point. I’m sure I could find that by doing some reading. But first, I have to put out a million tiny fires (this is how I describe my life now as a professor).

A quick Keynote tip

A few months ago, I discovered that Keynote has connection lines, which dynamically connect objects that you draw. I used to draw graphs and diagrams in OmniGraffle for this reason, but with native Keynote connection lines, you can animate them directly, etc. The ability to animate graphs is super important, and I’m a huge proponent of using animation in talks (in the right places) to help audiences visualize some of the absurdly complex things we present.

But it still wasn’t perfect, because Keynote doesn’t have built-in keyboard shortcuts to insert connection lines, so you have to select the objects you want, go to the pulldown menu, and select the command. This process is not good for avoiding repetitive-stress injury.

Well, I recently discovered that Mac OS lets you define keyboard shortcuts for any application. Maybe this is common knowledge, but in System Preferences, the Keyboard module lets you add App Shortcuts.

I set mine to cmd-L for a “Straight Connection Line” and cmd-option-L for a “Curved Connection Line.” And now I draw graphs natively in Keynote.

Unfortunately, Magic Move doesn’t work great with connection lines when they have to change orientation. They disappear as the nodes move and reappear once the nodes settle into their location on the next slide. That would be a really cool and useful animation to help visualize graphs. Oh well. Can’t win ’em all.

On the NIPS Experiment and Review Process

A lot of attention has been focused on the NIPS experiment on its own reviewing system, Eric Price’s blog post on it, and unfortunately my flippant tweet pointing out some inaccuracies in the blog post. So I’ll try to clarify what I meant here.

Edit: folks looking for more info on the NIPS experiment should check out Neil Lawrence’s initial blog post on it and stay tuned, as he and Corinna Cortes will be eventually releasing a more thorough writeup on the results.


Eric Price’s Post

In the spirit of Eric’s post, here’s a TL;DR: the minor inaccuracies in Eric’s post should not detract from the main message that the results from the NIPS experiment are concerning, but we should be careful to get the details right before trying to explain the scary results with faulty information.

The three details that popped out at me as inaccurate were the idea that the program committee was split into two independent committees, that the committee only discussed papers with an average score between 6.0 and 6.5, and that area chairs could not see if a paper was a duplicate. In Eric’s defense, a lot of these details and more were clarified in the comments and discussion below his post, so what I’m writing here is somewhat redundant (e.g., he points out in the comments that NIPS does not have a fixed acceptance rate).

On the first point: I’m not completely sure how the program chairs implemented the duplication, but I don’t think the concept of splitting the program committee in half is correct or makes sense with the way NIPS reviewing is organized. Most of the area chairing, fostering discussion, quality control of reviews, etc., is done independently by each area chair, so there’s no real concept of split independent committees. I’m not even sure what that would mean. The committee is in some sense already split 92 ways, one for each area chair, but reviewers may be assigned papers with different chairs, so it’s not really independent. The way Eric’s post describes it invokes an image of two huge conference rooms of area chairs talking about the papers, which is a model that doesn’t scale to the absurd size of the NIPS conference nowadays.

As for the issue of which papers were discussed, I believe this was simply a recounting of a half-joking description of the review process, with the tongue-in-cheek tone lost in writing. First, the reviewers discussed any paper that wasn’t immediately an obvious, high-confidence consensus. Then the area chairs examined all these reviews, papers, and discussions, joining the discussion in most cases. Then the area chairs met in pairs to go over all their papers together (except conflicted papers), making pair judgement calls on each and identifying controversial or tricky decisions that needed to be discussed at the larger meetings. After that, the chairs met in groups of four with one of the program chairs to go over these tricky decisions. Nowhere in this process does the average score even come up, other than as a very rough way for area chairs to sort the papers, all of which they have to individually consider. But we also had much better heuristics to sort by, like controversiality, spread of scores, and low confidence scores. I’ll write more about my own personal experience with this process later.

Lastly, area chairs, who had access to author identities (because we are partially responsible for preventing conflicts of interest), could in fact see if a paper was a duplicate. To make the experiment work on CMT, the program chairs had to create a fake additional author, whose name was something like NIPS Dup1. So it was pretty obvious. It’s not clear how this affected the experiment. Different area chairs might have reacted differently to it. I did my best to ignore this bit of information to preserve the experiment, but there’s no way that my awareness of it didn’t affect my behavior. Did I give these papers more attention because I knew my area chairing would be compared to someone else’s? Or did I give them less attention because I wanted to focus on my actual assigned papers? I wanted to do neither, but who knows how successful I was. This fact surely contaminates the experimental results and any conclusions we might try to definitively draw from them.

I think the reason I was irked by these nitpicky details was partially that the discussion in the comments seemed to suggest they were the problem with NIPS and the reason for the inconsistency. But I was hasty to criticize on Twitter, because Eric’s post really brings up a few important points and interesting discussion and hypotheses. It is truly great that people are talking about it, and lots of non-machine-learning communities have been made aware of the NIPS experiment through Eric’s post. I’d like to see more caution about overgeneralizing from the experimental results, as Eric’s TL;DR does, but I suppose that’s inevitable, so we might as well get right to it with the first public writeup on the results. Hopefully people read on past that, since the rest of his analysis is pretty level-headed and thoughtful.


My own experience as area chair

Since this was my first year on the senior program committee, I got a new perspective on the NIPS review process that might be helpful to share to others. It wasn’t a huge difference from what I’d seen as a reviewer in the past, but since I was responsible for chairing 21 papers, I got a somewhat larger sample size. But it was still only 21 papers, which is much less than the number of papers that the duplication experiment used, and my sample is super biased toward my subject areas. So take this with a HUGE grain of salt. It’s just an anecdote, not a scientific study.

My initial experience with the process was disappointing. I watched the reviewers enter their reviews, and more often than not, the reviews reported opinions that were unsubstantiated and unsupported by the writeups, and they arrived late and unproofread. As a rough guess of the ratio here, each paper had three reviewers initially, and I would say about 3 out of 4 papers had one thoughtful review. After that, the reviewers with crappy reviews became very hard to reach. They didn’t participate in the discussion without a ton of prodding, they didn’t respond to the author rebuttals, and when they did respond, they surprisingly tended to stick to their original, seemingly unsubstantiated opinions.

So that was terrible. The good news is all that happened afterwards. Like I mentioned above, we area chairs met in pairs. (Even though I have only good things to say about the area chairs I met with, I’ll leave names off to preserve the anonymity of the review process.) My AC partner met with me on Skype for (I think) about two hours, maybe three, and we talked through each of our papers. The easy decisions, where all reviewers agreed, reported high confidence, and demonstrated high confidence through their words, were very quick discussions, but most of our time was spent debating about the more difficult decisions. Throughout this discussion, it was clear to me how thoughtfully my partner had considered the reviews and the papers, understanding when reviewers were making valid arguments and when they were being unfair. In many ways, I was reassured that another area chair was trying as hard as I was to find the signal in the noise.

The next stage was a meeting with four area chairs and a program chair. Again, we met on Skype, and this time went through a filtered list of the papers our respective AC pairs had decided needed further discussion. This list mostly included papers on the borderline of acceptance or ones where the reviewers were unwilling to agree on a decision. Reading the reviews and looking at the papers, we did our best to make a decision during the meeting, and again, anytime we couldn’t reach a decision, it was marked for further discussion and consideration as a borderline paper by the next level of the hierarchy: the program chairs.

After that point, I’m in the dark as to what the PCs’ decision process was. I know at some point they had to cut off acceptances for reasons of physical venue space, but it’s my understanding that the acceptance rate before that point is already pretty close to the usual 25%-ish rate.


So what now?

So my experience with the whole process could be summarized by saying that I saw some really disappointing reviewers, but was rescued from losing faith in NIPS by the thoughtfulness of the area chairs I worked with and the program chairs. But even within the sea of bad reviews, there were some standout individuals who anchored (mixed maritime metaphor…) these discussions and decisions in reason, fairness, and perspective. So I think as NIPS continues expanding to accommodate the growing interest in its topics, we’ll have to figure out how to address the growing proportion of bad reviewers that we’ll need to recruit to handle its scale. Maybe the answer is better reviewer training, or maybe the answer is more work for the more competent reviewers, or maybe there is no answer.

One important aspect to consider, when we talk about peer review being broken, unscalable, or whatever other complaint, is that the primary purpose of these huge processes is not to assign credit for people’s work; it’s to decide what content to present at a conference. And despite all the noise and all the flaws in the system, the quality of the conferences I’ve been attending has always been consistently high. More specifically, to not generalize beyond what I’m discussing in this post, NIPS 2014 was a great conference. Of course, in the real world, assigning credit is a super-important part of this whole deal. (Luckily, the process for journals and for funding decisions tends to happen at a smaller scale, so my guess is it’s less noisy and less crappy, but it certainly isn’t perfect either.) So something does have to be fixed, but it’s not as broken as it may feel at times.

(There are people experimenting with new models of publication to try to address these issues; e.g., see the talks from the ICML 2013 Workshop on Peer Review, and the International Conference on Learning Representations (ICLR).)

Lastly, I’ll conclude with two questions I’ve been asking myself and my colleagues lately. Of all the peer review experiences you’ve had as a reviewer, a chair, or an author, how often do all the reviewers understand the paper and make a valid decision based on this understanding? For me, the answer is nearly zero percent of the time. Reviewers almost never understand the papers I submit, whether they accept or reject, and when I’m a reviewer or chair, reviewers almost always have different interpretations of what’s going on in the paper, which means they can’t all be correct. So peer review is broken, right? Maybe, but as a second question, how often is the final decision the right decision? For me, the answer is pretty close to always. E.g., the papers I’ve had rejected for dumb reasons have always needed a lot of improvement in retrospect, or maybe the papers reviewers don’t get aren’t written or thought through well enough. Maybe despite reviewers not knowing what they’re reading, their B.S. detectors still work fine. But we shouldn’t just settle for that if it’s true. I dunno. Lots to think about…