How difficult is it to distinguish program components?
What range between components should be considered normal for the same performance?
Give it a try yourself.
Let’s ignore Skating Skills because it’s the hardest to appreciate on video, and also so we can claim that the skating skills aren’t anchoring our other scores.
Also ignore the difficulty of the technical content and just focus on the components.
How about we concentrate on Transitions and Interpretation (2017 definitions), since they’re the components with the fewest criteria and probably the least overlap with each other.
Transitions
* Continuity of movements from one element to another
* Variety
* Difficulty
* Quality
Interpretation
* Movement and steps in time to the music
* Expression of the music’s character/feeling and rhythm, when clearly identifiable
* Use of finesse to reflect the details and nuances of the music
And let’s not worry about decimal places; just consider the general range of quality on the scale of 0 to 10.
10 Outstanding
9 Outstanding
8 Very Good
7 Good
6 Above Average
5 Average
4 Fair
3 Weak
2 Poor
1 Very Poor
0 Extremely Poor
If you use the official chart, you could refer to the color ranges rather than to specific numerical scores.
Here are five short programs from the same JGP event (several years ago, so the individuals involved are all in their 20s now and less likely to be familiar to current fans). I’ve chosen performances to represent the range of abilities in that event, according to the judges’ scores, and present them in skate order, which was a random draw.
2011 Volvo Cup
Arijana Tirak BIH
Riona Kato JPN
Polina Agafonova RUS
Regina Glazman KAZ
Christina Erdel GER
How would you rate them on Transitions and Interpretation according to the above criteria?
I’m mainly curious about how we can analyze components in isolation and how much difference we’re likely to find between disparate components. There’s no right or wrong answer, and we may disagree with each other, let alone with the official panel, with good reasons for our different assessments.
I can’t participate in this exercise honestly because I have looked at the official protocols. If someone wants to put together links to another group of programs from a different event I could try my hand at that.