Thanks for doing our HITs! With your help, we think we’ll be able to build some pretty exciting technologies to help computers better understand human language.
Sometimes this task can be tricky, and we want you to get a sense of it before you work on the main project. We already have labels for these examples; what we’re doing here is gathering data on how well people perform on the task when they have some instructions and a little bit of training.
Ideally, no. We already have labels for all of these examples, and we know that there are an equal number of “True” and “False” examples. So if you find yourself assigning one label more often than the other, reconsider how you are evaluating the prompts.
Unfortunately, no: there is no automatic way for us to add you to our qualified list of workers. We go through the submitted HITs on the training task at least once a day and add worker IDs to the qualified list. Once your name is on the list, you will be able to start work on the main annotation task.
No. Unless it’s clear to us that you are assigning labels across many HITs without even considering the prompts, we won’t reject any of your work.
You should fill this out only if you can’t complete the HIT, for example because the HIT interface is broken (an empty page, say). If there is a typo in a sentence but you think you know what it means anyway, please don’t report it. Never put anything in this field if there isn’t a problem.
We are busy graduate students in the Bowman Group, a subgroup of the ML2 group at the New York University Center for Data Science. We are also affiliated with the NYU Departments of Data Science, Computer Science, and Linguistics.
Leave a comment at the bottom of any HIT!