# Accepted Papers Across Areas
Our advice to the area chairs was as follows: label no more than 15% of your submissions "Accept", and no more than another 20% "Lobby for Accept" or "Maybe Accept", unless the area has fewer than 20 submissions.
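To make the arithmetic behind these caps concrete, here is a minimal sketch in Python. It is only an illustration of the guidance above, not the chairs' actual tooling; the function name and the exact rounding behavior are assumptions.

```python
def label_caps(num_submissions: int) -> dict:
    """Suggested caps on "Accept" and "Lobby/Maybe Accept" labels for an area.

    The 15% / 20% guidance applies only to areas with 20 or more submissions;
    smaller areas are exempt (returned as None here -- an assumption).
    """
    if num_submissions < 20:
        return {"accept": None, "lobby_or_maybe": None}
    return {
        "accept": int(num_submissions * 0.15),
        "lobby_or_maybe": int(num_submissions * 0.20),
    }

# Example: an area with 80 submissions may label at most 12 papers "Accept"
# and at most another 16 "Lobby for Accept" or "Maybe Accept".
print(label_caps(80))
```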
| Area | Long | Short |
| --- | --- | --- |
| Information Extraction | 24 | 12 |
| Semantics | 19 | 19 |
| Summarization | 14 | 8 |
| Tagging, Chunking, Syntax and Parsing | 14 | 7 |
| Generation | 14 | 6 |
| Machine Learning for NLP | 14 | 6 |
| Machine Translation | 13 | 8 |
| Vision, Robotics and Other Grounding | 10 | 10 |
| NLP Applications | 10 | 8 |
| Social Media and Computational Social Science | 10 | 5 |
| Sentiment Analysis | 9 | 5 |
| Phonology, Morphology, and Word Segmentation | 9 | 3 |
| Discourse and Pragmatics | 8 | 8 |
| Text Mining | 8 | 6 |
| Question Answering | 7 | 7 |
| Dialogue and Interactive Systems | 7 | 2 |
| Cognitive Modeling and Psycholinguistics | 7 | 1 |
| Speech | 6 | 3 |
| Theory and Formalisms | 4 | 1 |
# Good Papers Appreciated Reviews More
This year we asked authors to rate the reviews they received on quality and helpfulness. The following results made us smile :-)
| Paper Review Overall Score | # Author Responses | Review Quality | Review Helpfulness |
| --- | --- | --- | --- |
| [1, 2) | 6 | 3.5 | 3.0 |
| [2, 3) | 46 | 3.1 | 2.5 |
| [3, 4) | 98 | 3.4 | 2.9 |
| [4, 5) | 95 | 3.9 | 3.2 |
| [5, 6] | 6 | 4.2 | 3.5 |
# To What Extent Can Reviewers Identify Authors?
- In 6.2% of reviews, the reviewers believed they knew who the authors were, either because they had seen the papers as preprints or because the papers or supplementary materials revealed the authors' identities.
- In 12.3% of reviews, the reviewers could guess the authors' identities, although the submissions themselves did not reveal who the authors were.
- In the remaining 81.5% of reviews, the reviewers had no idea who the authors were.