Thanks for doing our HITs! With your help, we think we’ll be able to build some pretty exciting technologies to help computers better understand human language.
This task can be tricky, and we want you to get a sense of it before you work on the main project. We already have tags for these examples; what we’re doing here is gathering data to understand how well people perform on the task when they have some instructions and a little bit of training.
Unfortunately no, there is no automatic way for us to add you to our qualified list of workers. We go through the submitted HITs on the training task at least once a day and add worker IDs to the qualified list. Once your name is on the list, you will be able to start work on the main annotation task.
Yes! We already have labels for all of these examples, and we know that there are many more “No” labeled examples than “Yes”. So if you find yourself assigning more “No” than “Yes” labels, don’t be alarmed. If your responses are balanced or skew more towards “Yes”, reconsider how you are evaluating the prompts.
No. Unless it’s clear to us that you are assigning tags across many HITs without even considering the prompt, we won’t reject any of your work.
Only fill this out if you can’t complete the HIT, for example because the HIT interface is broken (an empty page, say). If there is a typo in a sentence but you can still tell what it means, please don’t report it. Never put anything in this field if there isn’t a problem.
We are graduate students in the Bowman Group, a subgroup of the ML2 group at the New York University Center for Data Science. We are also affiliated with the NYU Departments of Data Science, Computer Science, and Linguistics.
Leave a comment at the bottom of any HIT!