Score Task Achievement
The Speechace Task Achievement API supports the following task types:
- Describe-Image: The speaker is presented with an image and asked to describe the details, relationships, and conclusion to be drawn from elements of the image.
- Retell-Lecture: The speaker listens to a 1-2 minute lecture and is asked to summarize the lecture, focusing on key elements, concepts, and conclusions from the lecture.
- Answer-Question: The speaker is presented with a short question which typically requires a one or two word answer.
Each task type has particular inputs and outputs:
Task Type | Inputs | Outputs |
---|---|---|
describe-image | task_context, user_audio_file or user_audio_text | Task score on a scale of 0-5. |
retell-lecture | task_context, user_audio_file or user_audio_text | Task score on a scale of 0-5. |
answer-question | task_question, user_audio_file or user_audio_text | Task score on a scale of 0-1, where 0 is incorrect and 1 is correct. |
The API supports different modes for combining task scores and language scores in assessment:

- `user_audio_file` or `user_audio_text`: The speaker's response can be submitted as either audio or text, allowing task scoring to be used with written responses as well.
- `include_speech_score`: Speech scoring can be included or excluded along with the task score. Note that if `user_audio_text` is used, the speech score will always be zero. Therefore, for written responses, only task scores are provided.
All tasks are available in the following languages:
- English (en-us, en-gb)
- Spanish (es-es, es-mx)
- French (fr-fr, fr-ca)
Request Format
The endpoint to use depends on the region of your subscription. For example, for US West, the endpoint is https://api.speechace.co.
POST
https://api.speechace.co/api/scoring/task/v9/json
Query Parameters
Parameter | Type | Description |
---|---|---|
key | String | API key issued by Speechace. |
dialect | String | This is the dialect in which the speaker will be assessed. Supported values are: en-us, en-gb, fr-fr, fr-ca, es-es, es-mx. |
user_id | String | Optional: A unique anonymized identifier (generated by your applications) for the end-user who spoke the audio. |
task_type | String | The task_type to score. Supported types are: describe-image, retell-lecture, answer-question. |
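As a sketch, the query parameters above can be assembled into a request URL like this. The helper function and the placeholder key value are illustrative, not part of the API; the parameter names come from the table above.

```python
from urllib.parse import urlencode

BASE_URL = "https://api.speechace.co/api/scoring/task/v9/json"

def build_task_url(key, dialect, task_type, user_id=None):
    """Assemble the task-scoring endpoint URL from the query parameters above."""
    params = {"key": key, "dialect": dialect, "task_type": task_type}
    if user_id is not None:
        params["user_id"] = user_id  # optional anonymized end-user identifier
    return f"{BASE_URL}?{urlencode(params)}"

url = build_task_url("YOUR_API_KEY", "en-us", "describe-image", user_id="u-123")
```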
Request Body
Parameter | Type | Description |
---|---|---|
task_context | String | The context or model answer for the task presented to the speaker. Used in the following task types: describe-image and retell-lecture. This must be provided in the same language as the one being assessed. |
task_question | String | The task question presented to the speaker. Used in task_type = answer-question. This must be provided in the same language as the one being assessed. |
user_audio_file | File | File with user audio (wav, mp3, m4a, webm, ogg, aiff). |
include_speech_score | String | Whether speech scoring is included along with the task score. Note that if user_audio_text is used, the speech score will always be zero. |
user_audio_text | String | A text transcript of the speaker's response. Can be submitted instead of user_audio_file to score a written response. |
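A minimal sketch of assembling the form body per task type, using the field names from the table above. The helper and its validation rules are illustrative assumptions, not documented API behavior:

```python
def build_task_body(task_type, task_context=None, task_question=None,
                    user_audio_text=None, include_speech_score=None):
    """Assemble form fields for a task-scoring request.

    Illustrative helper only: it mirrors the parameter table above, but the
    validation logic is an assumption, not documented API behavior.
    """
    body = {}
    if task_type in ("describe-image", "retell-lecture"):
        if task_context is None:
            raise ValueError(f"{task_type} requires task_context")
        body["task_context"] = task_context
    elif task_type == "answer-question":
        if task_question is None:
            raise ValueError("answer-question requires task_question")
        body["task_question"] = task_question
    else:
        raise ValueError(f"unknown task_type: {task_type}")
    if user_audio_text is not None:
        body["user_audio_text"] = user_audio_text  # text mode: speech score is zero
    if include_speech_score is not None:
        body["include_speech_score"] = include_speech_score
    return body

body = build_task_body("answer-question",
                       task_question="What is the capital of France?",
                       user_audio_text="Paris")
```

For audio submissions, `user_audio_file` would instead be attached as a multipart file part rather than a plain form field.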
Response Example
Notice the `task_score.score` key for the overall task achievement score in the response below:
The pronunciation and fluency keys for the spoken words and sentences in the response keep the same interpretation as in other Speechace scoring responses. The new addition is the task score parameters, which indicate the extent to which the task has been achieved.
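For example, extracting the overall score from a parsed response might look like this. Only the `task_score.score` key is taken from the documentation above; the surrounding response structure shown here is a hypothetical sketch:

```python
import json

# Hypothetical response fragment: only the task_score.score key is
# documented above; the rest of the structure is illustrative.
raw = json.dumps({
    "status": "success",
    "task_score": {"score": 4.0},
})

response = json.loads(raw)
overall = response["task_score"]["score"]  # overall task achievement score
```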
Difference between `task_context` and `relevance_context`

- Relevance is binary and higher-level: it evaluates whether the response is on-topic or not (True or False).
- Task Achievement is more nuanced: it scores how well the response addresses the task.
For a general question such as "Do you think the government should subsidize healthcare?" relevance is primarily assessed, as there is no definitive right or wrong answer; the focus is on whether the response is on topic.
In contrast, for a specific question like "What does the following business chart tell us?" a specific answer is expected. Therefore, a nuanced task context and detailed task score are required to evaluate how well the response addresses the specific elements of the task.