These are common abbreviations used by coaches/choreographers/skaters when writing down a program layout, and a lot of judges used to use something similar on their 6.0 judging sheets (each judge had their own nomenclature, but it was much the same: S = Salchow, T = toe loop...).
The #1 thing I would do is have separate judging panels for TES and PCS. I know this is probably impossible for financial reasons, but I sometimes can't help thinking that judges are so busy evaluating each element separately that they don't have time to properly evaluate the program from the 'big picture' perspective (so to speak) that the PCS components require.
The #2 thing I would do is to send out a LOT of memos reminding the judges that the categories of PCS should be evaluated separately! I'm tired of seeing the same pattern over and over again for every skater's PCS despite what they do on the ice: all the components boxed together within a point of each other, with TR always slightly lower than all the rest.
They already tried #1 and the judges complained of boredom!
They've already done #2 and Joe Inman got blasted for it!
Here's a scale used in Dressage - another judged and somewhat subjective sport:
10 Excellent
9 Very good
8 Good
7 Fairly good
6 Satisfactory
5 Sufficient
4 Insufficient
3 Fairly Bad
2 Bad
1 Very bad
0 Not performed
It is used to evaluate each individual move - sometimes with a coefficient - as well as to provide overall scores for the horse and rider (similar to PCS, in a way).
It helps me relate to the numbers: was that a good triple lutz, or a satisfactory triple lutz, or maybe an excellent one?
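Just to make the coefficient idea concrete, here is a rough sketch of how a dressage-style total could be computed; the movement names, marks, and coefficients below are made up for illustration, not taken from any actual test:

```python
# Hypothetical dressage-style scoring: each movement gets a 0-10 mark,
# and some movements carry a coefficient that weights the mark.
movements = [
    ("extended trot", 7, 1),   # "fairly good"
    ("half-pass",     8, 2),   # "good", double coefficient
    ("piaffe",        6, 1),   # "satisfactory"
]

earned   = sum(mark * coeff for _, mark, coeff in movements)
possible = sum(10 * coeff for _, _, coeff in movements)
print(f"{earned}/{possible} = {earned / possible:.1%}")   # 29/40 = 72.5%
```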
My proposal: Get rid of the Technical Panel, and separate all raters (judges) into two groups:
Group One: To assess the difficulty of each executed element. The base mark of that element is the mean difficulty score given by the judges. Say seven judges rate the level of an executed element as 2, 3, 3, 2, 3, 2, 1, and the values for the levels are: level 1 = 3.5 points, level 2 = 5 points, level 3 = 7.5 points, etc. After deleting the two extreme ratings, the base mark, or average difficulty score, for that element is (5 + 7.5 + 7.5 + 5 + 5)/5 = 6. All gray areas (under-rotation, edge calls, etc.) will thus be handled as gray areas, and there will be no arbitrary dichotomous calls as in the current scoring system.
Group Two: To assess the quality of each executed element (i.e., give out GOEs). The judges on this panel will not concern themselves with the level or type (Lutz vs. Flip) of an element, and thus they will not be informed of Group One's decisions. They may watch whether a jump has a "clear" edge going in, but whether it is a "wrong" edge or not is not their concern. They do not count the rotations in spins or jumps.
All they care about is how well the elements are executed. They do watch for hand-downs, stumbles, and falls, however. Say the mandatory deduction for a fall is -1, and here are the judges' decisions: -1 (fall), 0 (non-fall), -1 (fall), -1 (fall), -1 (fall), 0 (non-fall), -1 (fall). After deleting the two extreme ratings, the mandatory deduction for that executed element would be (-1 + 0 - 1 - 1 - 1)/5 = -0.8.
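Here is a minimal sketch of the trimmed-mean calculation both groups would be doing; the level point values are simply the ones quoted above, and the function name is just for illustration:

```python
def trimmed_mean(scores):
    """Drop the single highest and single lowest score, average the rest."""
    trimmed = sorted(scores)[1:-1]
    return sum(trimmed) / len(trimmed)

# Group One: difficulty. Seven judges' level calls, mapped to point values
# (level 1 = 3.5, level 2 = 5, level 3 = 7.5, as quoted above).
level_value = {1: 3.5, 2: 5.0, 3: 7.5}
levels = [2, 3, 3, 2, 3, 2, 1]
print(trimmed_mean([level_value[lv] for lv in levels]))   # 6.0

# Group Two: mandatory deductions, -1 for a fall, 0 for a non-fall.
fall_calls = [-1, 0, -1, -1, -1, 0, -1]
print(trimmed_mean(fall_calls))                           # -0.8
```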
1) Are the levels still based on published features that either are or are not achieved? Or should they be based on each technical judge's overall impression of the difficulty of the element? For complicated elements, such as pairs step sequences, where there's no way that one person can actually keep track of all the features, should each tech judge make a guess as to the probable level based on the features s/he did see?
This is something that needs reliability studies. If the results show that judging by overall impression is as good as judging by features, then go for overall impression, which is easy and time-efficient. I have a question, though: does the Technical Panel of the current system actually count the features achieved? What is shown on their computer screen? A checklist of features with a box to click when a feature is achieved? I am not a skater and have no clear idea how they actually do the judging. I'm inclined to believe they are judging by impression and review the footage if they are not in agreement.
2) Who decides what each element actually is? Does each of the tech judges identify the elements independently, with the computer then using the identification that the majority made? (That would be an argument in favor of 5 or 7 judges instead of 6.) Or should the referee, or the equivalent of the technical controller, or even the data entry person, have the authority to review confusing elements afterward and make a human decision, based on his/her own interpretation as well as those of the judges, as to what the ambiguous element should be called?
Option #1: Go with the majority. Option #2: If more than 1/3 of the judges identify an element differently, the computer automatically displays a warning, a footage review is then conducted by all the judges, and the majority has the final say. Which option is better? Again, it needs reliability studies. If a study shows that Option #2 does not improve reliability significantly, go with Option #1.
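A rough sketch of how the Option #2 trigger could work, assuming each tech judge enters an element identification independently; the element abbreviations are just examples, and the 1/3 threshold is the one proposed above:

```python
from collections import Counter

def identify_element(calls):
    """Majority identification; flag a footage review when more than
    one third of the judges disagree with the majority call."""
    element, votes = Counter(calls).most_common(1)[0]
    dissenters = len(calls) - votes
    return element, dissenters * 3 > len(calls)

print(identify_element(["3F", "3F", "3F", "3Lz", "3F", "3F", "3F"]))
# ('3F', False) -> 1 of 7 disagrees, no review needed
print(identify_element(["3F", "3Lz", "3F", "3Lz", "3F", "3Lz", "3F"]))
# ('3F', True)  -> 3 of 7 disagree, footage review triggered
```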
Experienced judges are going to notice those things even if it's not their job to worry about them. So these judges' assessments of quality will be affected by whether they saw a double or a triple jump, or whether the flying camel had 6 revolutions or 12.
As a non-skater, I can tell if a skater doubles a triple, which means the quality of the executed double is sacrificed in some way, and so a negative GOE may be justified. In some cases, I cannot tell whether a skater intentionally goes for a double or not, which means its quality is as good as a planned double's and probably deserves a positive GOE. Interestingly, I never count the revolutions of a spin while watching a performance; I look for well-centered, fast spins. Can those judges be "untrained"? We just assume their judgement may be affected by the technical aspects. Again, reliability studies will prove or disprove that assumption.
Will there be a separate box on the computer screen (and on paper, for backup or for small competitions run without computers for the judges) on which to keep track of falls? Next to the box for the element GOE? If the GOE is +3 to -3 as now, maybe there's an option of -4 to be used only for elements with falls. What about falls between elements? Maybe it should be the first panel keeping track of those.
The judge can just click on the "fall" box under the element on which the skater falls. If a fall occurs between elements, the judge should click on the "fall" box under "non-element". If the skater falls again between elements, the judge clicks another "fall". Nobody needs to keep track of falls; the computer does the job. Remember, agreement among the judges regarding a "fall" or "non-fall" is not needed.
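And a tiny sketch of what that screen might record for one judge, assuming one "fall" box per element plus a "non-element" slot for falls between elements (the element list is just an example):

```python
# One judge's protocol: a "fall" tally per element, plus a "non-element"
# slot that collects falls occurring between elements.
elements = ["3Lz", "3F+2T", "FCSp", "StSq", "non-element"]
falls = {name: 0 for name in elements}

falls["3Lz"] += 1            # judge clicks "fall" under the 3Lz
falls["non-element"] += 1    # a fall between elements
falls["non-element"] += 1    # another fall between elements

print(falls, sum(falls.values()))   # the computer, not the judge, keeps the tally
```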
Does the Technical Panel of the current system actually count the features achieved?