ABCs of the COP | Golden Skate

ABCs of the COP

Joined
Aug 3, 2003
People have been understandably confused by the COP and this results in a lot of frustration. So my idea for this thread is for various people to provide info on the COP in basic and/or summary form. I know Mathman and Pitchka are well-versed in the COP, but this thread is open to anyone to post a "lesson" (I can already suggest that someone do one on just what the abbreviations mean, lol) and of course to anyone and everyone to ask questions.

Since I'm going first, I'll start with something I noticed first: a lack of understanding of what constitutes the five components that make up the Total Component Score (TCS), also known as the "overall criteria," which are more or less equivalent to the presentation score under the 6.0 system.

The following definitions are from the ISU Communication No. 1207
New Judging System Figure Skating / Ice Dancing
Attachment B, p11
Definition Program Components
Figure Skating/Ice Dancing
Skating Skills
Definition: Methods used by a skater/couple to create movement over the ice surface.
Purpose: To reward efficiency of movement in relation to speed, flow and quality of
edge.
Criteria:
• Overall skating quality
• Multi directional skating
• Speed and power
• Cleanness and sureness of edges (steps & edges Ice Dancing)
• Glide and flow
• Depth and quality of edges (Ice Dancing)
• Balance in ability of partner (Ice Dancing and Pair Skating)

Transitions
Definition: Skating steps/elements linking program highlights.
Purpose: To reward different steps, movements and elements linking and enhancing the
program highlights so they become part of the program not just isolated
elements.
Criteria:
• Difficulty and quality of steps linking elements.
• Creativity and originality of steps linking elements (these are in
Choreography for Ice Dance)
• Originality and difficulty of entrances and exits of elements
• Pattern (Ice Dancing)
• Balance of workload between partners (Ice Dancing)
• Difficulty and variety of dance footwork, holds and linking movements (Ice Dancing)

Performance/Execution
Definition: The evaluation of the skater’s/couple’s ability to exhibit a pleasing appearance
through body awareness and projection.
Purpose: To reward the skater’s/couple’s ability to demonstrate body line, carriage and
balance while executing element highlights.
Criteria:
• Carriage
• Style
• Body alignment
• Variation of speed
• Unison (Ice Dancing and Pair Skating)
• Balance in performance between partners (Ice Dancing)

Choreography
Definition: The evaluation of the program layout in relationship to elements and their
linking steps. Program highlights should be evenly distributed over the ice
surface demonstrating the skater’s/couple’s skills.
Purpose: To reward the skater(s) that utilizes the entire ice surface and different levels
of space around the skater(s). (Ice Dancing: to reward the couple creatively
utilizing the program to develop a theme or concept by use of music, the entire
ice surface and different levels of space around them.)
Criteria:
• Harmonious composition of the program
• Creativity and originality (Ice Dance only)
• Conformity of elements, steps and movements to the music
• Originality, difficulty and variety of program pattern
• Distribution of highlights
• Utilization of space and ice surface

Interpretation
Definition: The use of the body and skating elements to express outwardly the mood and
character of the chosen music.
Purpose: To reward the skater(s)/couples who express the mood, emotions, and
character of the music by using technical elements, linking steps and
choreography as a result of the music’s structure.
Criteria:
• Easy movement and sureness in time to the music
• Finesse, and nuances of the musical phrases (and accents and
change of pace of music in Ice Dance)
• Expression of the music’s style and character
• Maintaining the character and style of the music throughout the
entire program
• Timing (Original Dance and Free Dance only)

Timing (for Compulsory Dances only)
Criteria:
• Skating in time with the music
• Skating on strong beat
• Introductory Steps

Each judge gives the skater a score from 0 to 10, in increments of 0.25, for each component. The Total Component Score (TCS) is calculated using what's called the "trimmed mean." The trimmed mean is calculated by deleting an equal number of highest and lowest scores and averaging the remaining scores. For this purpose, for a panel of 9 or 8 judges, 4 scores are deleted; for a panel of 7 to 5 judges, 2 scores are deleted. (In the ISU papers, scores are sometimes referred to as grades. Generally the ISU papers use "score" for the total calculated scores and "grade" for an individual judge's score. I'll stick to "score" for both, partly because "grade" implies a letter grade and partly for consistency.)
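For anyone who wants to play with it, the trimmed mean described above is easy to sketch in Python (my own toy function, not anything official from the ISU):

```python
def trimmed_mean(scores, n_trim):
    """Drop the n_trim highest and n_trim lowest scores, then average the rest."""
    ordered = sorted(scores)
    kept = ordered[n_trim:len(ordered) - n_trim]
    return sum(kept) / len(kept)

# A 9-judge panel deletes 4 scores (2 high + 2 low), leaving 5 to average.
panel = [8.00, 8.25, 8.50, 8.50, 8.75, 8.75, 9.25, 9.50, 9.50]
print(trimmed_mean(panel, 2))  # 8.75
```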

The panel’s scores for each program component are then multiplied by a factor as follows (same for Junior and Senior):
Men: SP: 1.0 FS: 2.0
Ladies: SP: 0.8 FS: 1.6
Pairs: SP: 0.8 FS: 1.6
The results are rounded to two decimal places. These factored results are the five Program Component Scores, which together make up the TCS.
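The factoring step can be sketched like this (the factor table is from the post above; the function name and the rounding detail are my own guesses at the procedure):

```python
# Segment factors as listed above (Junior and Senior use the same values).
FACTORS = {
    ("Men", "SP"): 1.0, ("Men", "FS"): 2.0,
    ("Ladies", "SP"): 0.8, ("Ladies", "FS"): 1.6,
    ("Pairs", "SP"): 0.8, ("Pairs", "FS"): 1.6,
}

def total_component_score(panel_means, discipline, segment):
    """Sum the five panel component means, apply the segment factor, round to 2 places."""
    return round(sum(panel_means) * FACTORS[(discipline, segment)], 2)

# The five trimmed means worked out later in the thread for Cohen's free skate:
print(total_component_score([8.50, 8.25, 8.75, 8.75, 8.80], "Ladies", "FS"))  # 68.88
```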

For example, in the Skate America ladies free skate, we can see the scores each judge gave each skater for every part of the program, including technical elements and the five components, plus the trimmed mean total scores. Thus we can also tell which scores were thrown out. To get the Detailed Results for a given discipline and SP or LP, go to
http://www.isufs.org/results/sa2003/index.htm
and click on Detailed Results, which is the furthest box to the right. The DR are in PDF format so you'll have to download them, but at least on my computer it took less than a minute to download each of the eight Detailed results.

Let's look at an example of how Component scores work and how the totals are calculated. Unfortunately the post format isn't great for tables, so please bear with me.

Let's start with how the score for all five components is calculated for one judge, e.g., Judge #1, for one skater, e.g., Sasha Cohen.
Skating Skills (SS) 8.00
Transitions (TR) 6.00
Perf/Execut* (PE) 8.50
Choreography (CH) 8.00
Interpretation (IN) 8.75
*Performance/Execution

When you see this on the Detailed Results, the scores of all 11 judges will read horizontally for each component and after the name of each component you will see a column named "Factor" under which it will say all the way down "1.60." This is because after the scores for all five components are added up for each judge they are multiplied by a factor of 1.60 for the ladies free program. For the ladies SP the factor is 0.8. (See above for the component factors for the men, ladies, and pairs SP and FS. Dance is judged differently, but that's a different lesson:).)

So, for all five component scores by Judge #1 we add the numbers from above for Cohen and get:
8.00
6.00
8.50
8.00
8.75
____
39.25

Then we multiply 39.25 by the Ladies FS Component factor of 1.60.
39.25 x 1.60 = 62.80

So 62.80 is Judge #1's Total Program Component Score (factored). The same thing is done with the scores of all 11 judges. EDIT: Then, for each of the five components, the two high scores and the two low scores are thrown out, along with the scores of two judges randomly selected for the whole competition, as if they had never been on the panel. That leaves five scores once the highs, lows, and random scores are gone. When we average those five scores, the result is known as the trimmed mean. Sasha's Total Component Score (TCS), which is the score in the far right column of the table, below all the other trimmed mean totals, is 68.88.

CONTINUED EDIT: When I first wrote this, I not only forgot to include the concept of the two judges who are randomly excluded by computer before the competition even starts, but I also misunderstood how to calculate Sasha's TCS mathematically, even if I had thrown out the two highs and two lows and had correctly guessed which judges' scores had been randomly eliminated. To quote GKELLY (see next post), who helped me out by correcting my blunder, here's the right way to do it:
Just looking at the program components, as Rgirl does, you don't come up with 68.88 for Cohen by adding up judge 1's scores for the five components, then judge 2's, etc., in columns, and then work across the bottom to eliminate the randomly selected judges (if we knew who they were) and the high and low *totals* and average what's left.

Instead, you have to go across the rows for each component, get rid of the secret randomly eliminated judges, throw out high and low of what's left for that component, average the remaining scores for that component, and end up with the average in the right column.

In other words, you work across, eliminate, and average first, and only at the end work down and add. Rgirl was working down and adding first and then working across, eliminating, and averaging last, which is the wrong algorithm and therefore would not produce the right result even if we did know which were the secretly eliminated judges.

CONTINUED EDIT: So my original explanation of how Sasha's TCS was calculated, and the numbers I originally listed, were completely wrong. Rather than leaving up the incorrect numbers and confusing people, I deleted them. Giving an example of the correct way Sasha's TCS for her Skate America free skate was calculated is difficult, since we don't know which two judges were randomly excluded prior to the competition. As Gkelly said, on the Detailed Results table you read across horizontally for each component--Skating Skills, Transitions, Performance/Execution, Choreography, Interpretation. From each of those rows, the scores of the same two randomly excluded judges are thrown out; then the two high and two low scores left after that are thrown out; then the average of the remaining five scores is calculated to get the trimmed mean, which is at the far right of the Detailed Results table. By adding the trimmed means for the five components we get the unfactored total (UT). By multiplying that number by the TCS factor, 1.6, we get the factored total (FT), or Sasha's TCS of 68.88.
SS 8.50
TR 8.25
PE 8.75
CH 8.75
IN 8.80
_______
UT 43.05

43.05
x 1.6
______
FT 68.88, i.e., TCS 68.88

So the way I had originally explained how Sasha's TCS was calculated and the numbers I originally listed were completely wrong. Thanks again to GKELLY for the corrections.

I think a math mistake by Rgirl is a great place to end this, our first and hopefully not our last lesson in the ABCs of the Code of Points. Other COP professors, please, take it away------->
R(2 + 2 = 5)girl
 

gkelly

Record Breaker
Joined
Jul 26, 2003
The detailed protocol gives the total elements and total components scores for each judge, and also the total segment score for each judge.

I don't believe that those totals are ever used in the calculations, though.

Rather, as I understand it, two of the judges were never "really" on the panel to begin with but were randomly chosen by computer to be eliminated for the event as a whole.

Then the two high and two low grades of execution from the remaining judges are thrown out for *each* element and for *each* separate program component score, and the remaining five are averaged. These averages, factored as necessary, are found in the far right column labeled "Scores of Panel."

The total in the bottom right corner is the skater's official Total Segment Score for the segment (long program in this case), and the totals for Elements and Program Components in the right column are also what's listed at the top of the skater's sheet as the official elements and components marks.

Just looking at the program components, as Rgirl does, you don't come up with 68.88 for Cohen by adding up judge 1's scores for the five components, then judge 2's, etc., in columns, and then work across the bottom to eliminate the randomly selected judges (if we knew who they were) and the high and low *totals* and average what's left.

Instead, you have to go across the rows for each component, get rid of the secret randomly eliminated judges, throw out high and low of what's left for that component, average the remaining scores for that component, and end up with the average in the right column.

In other words, you work across, eliminate, and average first, and only at the end work down and add. Rgirl was working down and adding first and then working across, eliminating, and averaging last, which is the wrong algorithm and therefore would not produce the right result even if we did know which were the secretly eliminated judges.

Unless I've totally misunderstood the process.
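Here's a tiny Python sketch of why the order matters (made-up scores, two components, five judges, trimming one high and one low, and ignoring the random draw just to keep it small):

```python
# Toy example showing that "across first" and "down first" give different answers.
rows = [[9, 9, 7, 5, 5],   # component 1, judges 1-5
        [9, 5, 7, 7, 7]]   # component 2, judges 1-5

def trim_avg(scores, n_trim=1):
    """Trim n_trim high and n_trim low scores, then average the rest."""
    kept = sorted(scores)[n_trim:-n_trim]
    return sum(kept) / len(kept)

# Right way (across first): trim and average each component row, then add.
across_first = sum(trim_avg(row) for row in rows)

# Wrong way (down first): total each judge's column, then trim and average the totals.
judge_totals = [sum(col) for col in zip(*rows)]
down_first = trim_avg(judge_totals)

print(across_first, round(down_first, 2))  # 14.0 vs 13.33 -- not the same
```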
 
Joined
Aug 3, 2003
Thanks, GKelly! I'm sure you're right. I knew there were judges that were randomly thrown out before the competition, as if, like you say, they were never there, but I forgot to include that in my explanation and calculation. That's what I get for trying to do a COP lesson late at night, lol.

Thanks again for correcting my mistake and for contributing to the thread. Obviously you understand the COP very well and I hope you'll contribute a "tutorial" on some other aspect yourself. I think if people can get it a little at a time and from people who had to go through the process of figuring it out for themselves, mistakes and all, the COP will be much more accessible than trying to get it all from the various 20-plus-page ISU "communications." Even a secondary "communication" was 18 pages long. In fact I was almost going to call this thread, "Remembrance of Things COP," but I thought that might scare off posters even more, lol.
Rgirl
 

giseledepkat

Rinkside
Joined
Oct 7, 2003
gkelly said:
Instead, you have to go across the rows for each component, get rid of the secret randomly eliminated judges, throw out high and low of what's left for that component, average the remaining scores for that component, and end up with the average in the right column.

:eek: OMG, my head is spinning!!!

Here's the part that makes no sense to me: what does it accomplish to secretly, randomly eliminate judges and then additionally throw out high and low marks, before averaging the remainder??? I understand the principle of the trimmed mean -- a way of arriving at a "consensus" mark unaffected by lowball or highball scores. But what purpose does random elimination serve? If it is put in place to quote *prevent crooked judging* unquote, doesn't the trimming of high and low scores accomplish that anyway? If, as I have read, it is to guard against individual judges' federations putting pressure on them, I fail to see how this helps in the least! What it does accomplish, IMO at least, is simply to make the arrived-at trimmed mean score less accurately representative of the panel's intent.

It just seems needlessly complicated, and somehow mathematically "unclean" -- like mixing apples and oranges? (Mathman, where are you?) :(
 

mzheng

Record Breaker
Joined
Jan 16, 2005
OK, I'll join the game here. I'll repeat my posts from another SA folder here, as Rgirl suggested.

From Mzheng =======>
Now, after the competition, by looking at the protocol and doing some math -- basically solving a group of linear equations -- you can figure out which 5 judges' scores are counted. In other words, which 5 judges have been chosen by the computer as the final judging panel.

Oh, well. I'm too lazy to do that.

From Mathman ========>
Help me out. Is it four scores thrown out in the random draw and two thrown out in trimming the mean, or is it the other way around?

I am assuming that the judges who are thrown out in the random draw are thrown out across the board. But the marks thrown out in trimming the mean are done element by element. In other words, the same judges' scores are thrown out in the random draw for every element, but in the trimming process the scores of different judges might be chosen for different elements, depending on who happened to be high or low on that element.

The linear system is not quite so straightforward for these reasons, but with the huge amount of data available the system must be overdetermined. As MZheng says, it should be possible quite easily to figure out whose scores counted. Since the ISU's whole purpose in these experiments was to prevent us from doing this, I wonder if Speedy has kept a couple of aces up his sleeve.

Mathman


From mzheng ==========>
Mathman, I knew you would be interested. hehehe..

Actually I almost put your name in my last sentence to ask you to do the math.

Sorry, I didn't think of the trimmed-mean process; that would make it difficult to come up with the group of linear equations.

Oh, and they did not even give the judges' names. Assume column 1 holds the same judge's (judge 1's) marks for all skaters. Then I guess the most you can do is figure out whether a given judge number is in the final counted panel. Am I right, Mathman?
 
Joined
Jun 21, 2003
giseledepkat said:
:eek: OMG, my head is spinning!!!

Here's the part that makes no sense to me: what does it accomplish to secretly, randomly eliminate judges and then additionally throw out high and low marks, before averaging the remainder??? I understand the principle of the trimmed mean -- a way of arriving at a "consensus" mark unaffected by lowball or highball scores. But what purpose does random elimination serve? If it is put in place to quote *prevent crooked judging* unquote, doesn't the trimming of high and low scores accomplish that anyway? If, as I have read, it is to guard against individual judges' federations putting pressure on them, I fail to see how this helps in the least! What it does accomplish, IMO at least, is simply to make the arrived-at trimmed mean score less accurately representative of the panel's intent.

It just seems needlessly complicated, and somehow mathematically "unclean" -- like mixing apples and oranges? (Mathman, where are you?) :(
I can't figure it out either, Giseledepkat. The main secrecy tool is simply that the judges are not named. As you say, if one judge is cheating in an outlandish way, that will be taken care of in the trimming process anyway. Otherwise, since all the marks for all the judges are displayed in great detail whether the marks of judge X are actually used in the total or not, I don't see how the random draw helps protect judges' privacy. The only thing that I can see is, if the president of a national federation tells its judge to cheat and he/she doesn't, then the judge can say, yes, I double-crossed you but don't shoot me because maybe my vote didn't count anyway.

MZheng, I think that we can figure out which judges' scores were thrown out in the random draw. After that it is easy to see which were thrown out in the trimming process for each element. I will do that as soon as I find out whether it is 4, then 2, or 2, then 4, in the two steps of the winnowing process.

Mathman
 

ladybug

On the Ice
Joined
Jul 27, 2003
Oh my God... How are the judges supposed to watch the program, decide how many points an element deserves or whether the skaters should get deductions or bonus points, type it into the computer, and watch the next element, all at the same time?

I would be so busy scoring the first jump, I would probably miss the rest of the program.

You did a great job of explaining, Rgirl. If I were thirty-something, maybe I could retain about 1/4 of what you said.

Keep posting and I'll keep reading, and maybe I will be a little more in tune when watching Skate Canada.

Thanks :confused:
 
Joined
Aug 3, 2003
Mzheng & Mathman,
Thank you for adding your posts from the SA folder to this thread. In case some people are hesitant about asking questions for fear they might sound stupid, I'll use something one of my professors would say: "No question asked in sincerity is stupid."

Mathman,
Re your question about how many scores are thrown out: the ISU COP Communication 1207 does not specify how many high/low scores will be eliminated for a panel of 11 judges, but it does specify that it's 4 for a panel of 9 and 2 for a panel of 7. I haven't found anything specific about the scores that are randomly discarded prior to the event in that version of the 1207 Communication, but that doesn't mean the info isn't there; I could have just missed it.

BTW, there are at least two versions of ISU Communication 1207. For the link to an older communication, see Pitchka's post on the "Detailed Scores and Results" thread in the Skate America folder, http://www.goldenskate.com/forum/showthread.php?s=&threadid=3002 and click on Older Communiques. Sorry I don't have the URL, but once you download it, you lose the URL (at least on my computer). I don't know how Pitchka found the older version, but you might try checking:
http://www.isu.com
and searching under New Judging System and/or Code of Points.

Also, for people's general information, here are two articles that discuss the COP quickly and generally:
http://www.usfsa.org/news/2003-04/nebelhorn-wrapup.htm
"Skaters, Coaches and Judges Discuss New Judging System at Nebelhorn"

http://www.usfsa.org/news/2003-04/newjudging-summary.htm
"New ISU Judging System Quick Summary"

Rgirl
 

apache88

On the Ice
Joined
Jul 28, 2003
I have always liked the COP, nevermind that it still needs tweaking. I like it when things are quantifiable. With the COP, all skaters' executions of all the skating elements can be compared in points and God knows how much I love comparisons. Theoretically, we can even compare performances from different competitions. Wouldn't it be great to be able to compare Tara's LP at 1998 Olys and Michelle's The Red Violin at 2000 Worlds?
 
Joined
Jun 21, 2003
The more I think about it, the more I wonder if Speedy doesn't plan to use that random draw thing as a bargaining chip and a red herring to deflect the attention of critics from the real issue. It is obvious that the ISU has no interest in confronting ethical issues head on. But if they can get people to arguing about the random draw, maybe they plan to give it up, bragging then about how they are willing to compromise in the face of public opinion. Speedy could even say, I am abandoning the random draw in an effort to lessen the secrecy about the judging process, aren't I a good fellow.

Statisticians may argue about the effect -- if any -- of the random draw, but to the average fan, this aspect of the CoP (and especially of the Interim System) just makes it seem like the winners are chosen by drawing lots instead of on the basis of their performances.

Mathman

PS. Since no one seems to know whether a field of 11 is cut down to a final panel of 9 or of 7, I'll do it both ways. A posteriori we should then be able to figure out which way it was.

PPS. About Ladybug's comments concerning the extra burdens on the judges, I think this means more than ever that the judges will need to watch practice sessions and be very well prepared as to which moves skaters plan to insert at each point of their programs.
 

Ptichka

Forum translator
Record Breaker
Joined
Jul 28, 2003
OK, guys, here is the Older Communiques link. The one thing it has that the latest version does not is the GoE values. This gives the idea not only of what the "base value" for each element is, but also what all the "-1", "+2" etc. really mean.
 
Joined
Jun 21, 2003
That's the key to the whole thing, Ptichka. If you know that -2 really means -1.4 for some elements, but -2.2 for others, then you can actually reconstruct the final scores. (I think.)

I see that GKelly already answered my question about the random draw. Thanks.

Mathman
 

tharrtell

TriGirl Rinkside
On the Ice
Joined
Jul 26, 2003
I've been avoiding the subject of the CoP until now because it just seems annoying to wade through. Now with the GP going on, I feel the need to understand the scores, but yeeeaaaacchh! I LIKE numbers, but what a mess! Thanks for the explanations here, though. Reference to TCS was made in the Skate Canada folder, and I was wondering what on earth TCS was!
 

giseledepkat

Rinkside
Joined
Oct 7, 2003
Mathman said:
The more I think about it, the more I wonder if Speedy doesn't plan to use that random draw thing as a bargaining chip and a red herring to deflect the attention of critics from the real issue. It is obvious that the ISU has no interest in confronting ethical issues head on. But if they can get people to arguing about the random draw, maybe they plan to give it up, bragging then about how they are willing to compromise in the face of public opinion. Speedy could even say, I am abandoning the random draw in an effort to lessen the secrecy about the judging process, aren't I a good fellow.

I dearly hope you're right!

Statisticians may argue about the effect -- if any -- of the random draw, but to the average fan, this aspect of the CoP (and especially of the Interim System) just makes it seem like the winners are chosen by drawing lots instead of on the basis of their performances.

The problem I see looming with this IMO unnecessary and disruptive "wrinkle" in the system is that some day it's going to come down to making the difference in a closely decided competition. Which scores are randomly deleted prior to trimming can have an effect on the outcome -- it may turn out to be a very slight effect, in mathematical terms, which nonetheless becomes a very profound effect by virtue of deciding a close race! My math skillz are decidedly minimal, so I'll try to illustrate with whole numbers to make the difference easier to calculate. I'll assume a panel of 11, with 2 scores randomly discarded and 4 scores trimmed:
(And I know the calculations are done for each element, not for total scores, this is just a very rough hypothetical example!)

Judge 1: 70 points
Judge 2: 69 points
Judge 3: 68 points
Judge 4: 67 points
Judge 5: 66 points
Judge 6: 65 points
Judge 7: 64 points
Judge 8: 63 points
Judge 9: 62 points
Judge 10: 61 points
Judge 11: 60 points

Now let's consider two scenarios. In the first, the two highest marks, 70 points and 69 points, are discarded in the random draw. That leaves 68 and 67 points as the highest remaining scores, which are trimmed along with the bottom scores of 60 and 61. The remaining 5 judges marks -- 62, 63, 64, 65 and 66 are added together (320) and divided by 5 to arrive at the trimmed mean score of 64.

In the opposite scenario, the random draw eliminates the bottom two marks (60 and 61), the remaining highs and lows are trimmed and the 5 marks remaining (64 through 68) yield a trimmed mean of 66. (330 divided by 5)

If, as you suggest, Mathman, it will indeed be possible for the mathematically inclined to actually figure out ex post facto which judges' marks were randomly discarded, this could lead to massive allegations of unfairness! What if Skater X wins by the slimmest of margins, but is later revealed to have benefitted from a preponderance of low marks having been randomly deleted? While Skater Y found herself in exactly the opposite situation? Indeed, the whole purpose of the CoP -- to promote fairness and objectivity -- will be undermined, and, as you say, it will appear that the winner was chosen by drawing lots! (Or at least, that is how it will appear to fans of Skater Y! :mad: )
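The two scenarios above can be checked mechanically (a little Python sketch of my hypothetical whole-number example, nothing official):

```python
marks = [70, 69, 68, 67, 66, 65, 64, 63, 62, 61, 60]  # the 11 hypothetical judges

def trim_and_average(scores, n_trim=2):
    """Trim n_trim high and n_trim low scores, then average the rest."""
    kept = sorted(scores)[n_trim:-n_trim]
    return sum(kept) / len(kept)

# Scenario 1: the random draw happens to remove the two highest marks (70, 69).
print(trim_and_average([m for m in marks if m not in (70, 69)]))  # 64.0

# Scenario 2: the random draw happens to remove the two lowest marks (60, 61).
print(trim_and_average([m for m in marks if m not in (60, 61)]))  # 66.0
```

Same panel, same trimming rule, a 2-point swing from the draw alone.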
 

Ptichka

Forum translator
Record Breaker
Joined
Jul 28, 2003
tharrtell said:
Reference to TCS was made in the Skate Canada folder, and I was wondering what on earth TCS was!
  • TSS Overall Score
  • TES Technical Elements
  • TCS Program Components:
  • SS Skating Skills
  • TR Transitions
  • PE Performance/ Execution
  • CH Choreography
  • IN Interpretation
 

tharrtell

TriGirl Rinkside
On the Ice
Joined
Jul 26, 2003
Ptichka, thanks, this thread has helped tremendously. I've just been too lazy up until now to be bothered with the details.
 
Joined
Jun 21, 2003
OK, here I go. I will try to recap GKelly's explanation (above) with some particular numbers. For my example I chose Sasha Cohen's long program at Skate America. First I will do the Program Components. To avoid putting down too much data, I will illustrate the method of the CoP for just two lines, Choreography and Interpretation. The other three lines, Skating Skills, Transitions and Performance/Execution, are similar. Here are Sasha's scores from the 11 judges, taken from the ISU website (click on Ladies Free Program, detailed scores):

http://www.isufs.org/results/sa2003/index.htm

Choreography- 8.00 8.25 8.50 8.75 8.75 8.50 9.50 9.25 8.50 9.50 8.50 -- (Average) score of panel: 8.75

Interpretation-- 8.75 8.75 9.00 8.75 8.75 8.75 9.50 8.75 8.75 9.50 8.50 -- (Average) score of panel: 8.80

First the random draw eliminated judges 6 and 11. Their scores don't count, all the rest of the way. (Yes, it was easy to figure out which judges were eliminated in the random draw, as MZheng noted.) This leaves:

Choreography- 8.00 8.25 8.50 8.75 8.75 9.50 9.25 8.50 9.50

Interpretation-- 8.75 8.75 9.00 8.75 8.75 9.50 8.75 8.75 9.50

Now the computer throws out the two highest and the two lowest scores in each line. In Choreography, this eliminated the two 9.50's (highest), and the 8.00 and the 8.25 (lowest). In Interpretation, this eliminated the two 9.50's and two of the 8.75's. The remaining five scores were

Choreography- 8.50 8.75 8.75 9.25 8.50

Interpretation-- 9.00 8.75 8.75 8.75 8.75

These sets of five numbers are the ones that are averaged in the last column on the right, under the heading "Scores of Panel."
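For anyone who wants to replay this without a calculator, here are the same two rows run through a little Python helper (the data are as quoted above; the function is just my rendering of the procedure, not ISU code):

```python
def score_of_panel(row, excluded=(6, 11), n_trim=2):
    """Drop the randomly excluded judges (numbered from 1), trim the two
    highest and two lowest remaining scores, and average the rest."""
    kept = sorted(s for i, s in enumerate(row, start=1) if i not in excluded)
    kept = kept[n_trim:len(kept) - n_trim]
    return sum(kept) / len(kept)

choreography   = [8.00, 8.25, 8.50, 8.75, 8.75, 8.50, 9.50, 9.25, 8.50, 9.50, 8.50]
interpretation = [8.75, 8.75, 9.00, 8.75, 8.75, 8.75, 9.50, 8.75, 8.75, 9.50, 8.50]

print(score_of_panel(choreography))    # matches the protocol's 8.75
print(score_of_panel(interpretation))  # matches the protocol's 8.80
```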

Notice that in the trimming procedure a particular judge might have his score eliminated in one line, but not eliminated in another line. However, this mostly did not happen -- the judges who liked Sasha the best (judges 7 and 10) gave her consistently high marks in every category, while the judges who didn't like her so much (judges 1, 4 and 9) gave her consistently lower marks in every category. Dirk Schaeffer analyzed this tendency systematically for the Nebelhorn data in his article

http://www.goldenskate.com/articles/2003/101703.shtml

Finally, after these "Average scores of panel" are added up for each of the Program Components, the total is multiplied by a factor of 1.6. That is, it is artificially made 60% higher, before it is added onto the Total Element Score (the "technical score") to determine the final "Total Segment Score" for the long program.

The two "segments" are the long and the short program. For the short program the "factor" is .80 instead of 1.60. This makes the long program count twice as much as the short program in the final total, at least as far as the "presentation" scores are concerned.

Mathman

PS. To Giseledepkat. I was able to determine that it was judges 6 and 11 who were eliminated just by fooling around for 15 minutes with a calculator. Thus you are quite right that this random draw thing has nothing to do with preserving judges' anonymity. This is quite different from the case of the Interim System, where the lack of data frustrated any attempt at assigning marks to individual judges.

So now I don't know what the random draw is supposed to accomplish -- except, as you say, make a lot of people mad when their favorite appears to be screwed by it.
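For the record, the "fooling around" can be automated: try all 55 possible pairs of excluded judges and keep the pairs whose trimmed means reproduce the published scores of panel for both rows. This is just my sketch of the search, using the two component rows quoted in my previous post:

```python
from itertools import combinations

choreo = [8.00, 8.25, 8.50, 8.75, 8.75, 8.50, 9.50, 9.25, 8.50, 9.50, 8.50]
interp = [8.75, 8.75, 9.00, 8.75, 8.75, 8.75, 9.50, 8.75, 8.75, 9.50, 8.50]
published = {"CH": 8.75, "IN": 8.80}  # the protocol's "Scores of Panel"

def panel_score(row, excluded):
    """Trimmed mean after dropping the two excluded judges (numbered from 1)."""
    kept = sorted(s for i, s in enumerate(row, start=1) if i not in excluded)
    kept = kept[2:-2]  # trim two high and two low
    return round(sum(kept) / len(kept), 2)

# Keep every pair of judges whose exclusion reproduces both published scores.
candidates = [pair for pair in combinations(range(1, 12), 2)
              if panel_score(choreo, pair) == published["CH"]
              and panel_score(interp, pair) == published["IN"]]
print(candidates)  # the pair (6, 11) survives; more rows would narrow it further
```

With all five components (and the element GoEs too), the surviving list shrinks fast, which is why the draw is so easy to reverse-engineer.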
 

dorispulaski

Wicked Yankee Girl
Joined
Jul 26, 2003
Country
United-States
Dumb question

The thing I want to know about COP is where the callers sit in the arena. Going to SA reminded me that where you sit does indeed affect what you see. In general, I do not think that the position the judges sit in (strung out along almost the whole side of the arena at ice level) is particularly optimal. I would like to see them about 8 rows up. Which leads to my question. I would like to see the callers optimally located, because otherwise it can be hard to tell whether correct jump edges, two-footed landings and so forth have been done. Does anyone know where the callers sat at SA or SC?

dpp
 
Joined
Jun 21, 2003
I don't know the answer to your question, Doris, but it is interesting to look at the breakdown of scores for each element. For instance, in Sasha's LP at Skate America only one judge (judge number 6) missed the two-footed landing on the triple flip.

Presumably this will be reviewed at the end of the season and judges that miss a lot will not be given choice assignments next year (?)

Mathman
 
Joined
Aug 3, 2003
Mathman,
Thanks so much for demonstrating the actual way the component scores are calculated and figuring out that it was judges 6 and 11 who were eliminated. Just before I read your post, I was about to edit my first post according to Gkelly's corrections. I thought I should try to figure out who the randomly eliminated judges were to give an accurate example to make up for my blunder, but you not only saved me the trouble but also probably saved the forum from further confusion because no doubt I would have made at least one significant math error:eek:

Gisel,
Your example is an excellent way to show how eliminating different judges can potentially make a difference in who wins and who doesn't. For example, before I read Mathman's post and was messing around with the component scores, I wondered what the difference would be if the scores of all 11 judges had been counted towards the Total Component Score (TCS).

It just so happens that for Skating Skills (SS), if we calculate the total mean for all 11 judges, we get 8.50, which is the same result as the trimmed mean, 8.50, but that's just a coincidence. For Transitions (TR), if we calculate the total mean for all 11 judges, we get 8.00 rather than the trimmed mean of 8.25. So just to give people an idea of the differences between the trimmed means (TrM) and the total means (ToM) if the scores of all 11 judges had been included in the calculation, here they are side by side:
TrM     ToM
8.50    8.50
8.25    8.00
8.75    8.80
8.75    8.77
8.80    8.09
_____________
43.05   42.16

TrM Factored 43.05 x 1.6 = 68.88
ToM Factored 42.16 x 1.6 = 67.46

68.88 - 67.46 = 1.42

So using the factored trimmed mean results in a Total Component Score (TCS) that is 1.42 points higher than the factored total mean calculated from the scores of all 11 judges. It doesn't seem like much, but considering that the difference in the pairs competition between Pang/Tong's gold and Petrova/Tikhonov's silver, using the Total Segment Scores (TSS) for both the SP and LP, was only 0.16, we see how important every score that is included or excluded can be.
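
Recomputing the two totals from the table above in Python (a sketch, using the same 1.6 long-program factor discussed earlier in the thread):

```python
# Per-component panel averages from the table above: trimmed means
# vs. means of all 11 judges (Skating Skills through Choreography)
trimmed = [8.50, 8.25, 8.75, 8.75, 8.80]
total   = [8.50, 8.00, 8.80, 8.77, 8.09]

factor = 1.6  # long-program component factor
tcs_trimmed = round(sum(trimmed) * factor, 2)
tcs_total = round(sum(total) * factor, 2)

# The gap between the two methods of averaging, after factoring
print(tcs_trimmed, tcs_total, round(tcs_trimmed - tcs_total, 2))
```

Note that the factor of 1.6 also magnifies the gap between the two averaging methods: a raw difference of 0.89 in the component sums becomes about 1.42 points in the factored TCS.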

BTW, I think the random exclusion of two judges, plus the high and low scores (the number depending on the size of the panel), is, as Gisel noted, like excluding apples and oranges, or, to really mix my metaphors, is, as Mathman noted, a red herring. I'd rather see either one or two high and low scores thrown out and have a larger panel of scores from which to calculate the trimmed mean than randomly exclude two judges and calculate the results from only five scores. At SA, out of 11 judges, six were excluded, which obviously is more than half. Even if people like the idea of five scores being used to calculate the trimmed mean, I'd rather exclude scores on at least the theoretical basis of bias than just on the basis of chance.

A couple of people brought up the difficulty for the judges of looking at the program and evaluating everything in the COP. For example, I think Ladybug spoke for many fans when she said, "How are the judges supposed to watch the program, decide how many points that element deserves or if the skaters should get deductions or bonus points, type it into the computer and watch the next element at the same time? I would be so busy scoring the first jump, I would probably miss the rest of the program." Firstly, each jump, spin, spiral, or footwork sequence has a preassigned base score. Secondly, the callers decide and tell the judges whether the element was completed or not. All the judge has to do is decide whether the jump, spin, spiral, or footwork receives the base score or plus or minus 1, 2, or 3 points.
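
That division of labor can be sketched as follows. The base values below are made up for illustration (the real scale is set by the ISU, and real grades of execution map to element-specific point values rather than a flat point per grade), but the structure is the point: the judge supplies only the small plus-or-minus adjustment.

```python
# Hypothetical base values; the ISU's actual Scale of Values differs
BASE_VALUES = {"3F": 5.5, "3Lz": 6.0, "CCoSp4": 3.5}

def element_score(element, goe):
    """Base value of the called element, adjusted by the judge's
    grade of execution, which here ranges from -3 to +3."""
    if not -3 <= goe <= 3:
        raise ValueError("GOE must be between -3 and +3")
    return BASE_VALUES[element] + goe

# A flawed triple flip: the caller identifies "3F", the judge marks -2
print(element_score("3F", -2))  # 5.5 - 2 = 3.5
```

So the judge never has to conjure a value for the element from scratch; the caller and the scale of values do that, and the judge grades the quality of what was done.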

Remember that the judges have not only been judging skating for years but also have had workshops and mock competitions using the COP. In addition, just looking at Campbell's and SA, at least with the singles and pairs skaters, they tend to use a lot of the same elements year after year, and what distinguishes their programs, or doesn't, is the way they perform these elements, whether they add anything new, and how well-constructed the entire program is. Even with my very (VERY) elementary understanding of the COP, I was surprised to find how easy it was to follow a skater's program using the list of elements while also thinking of the five components. In fact, all the singles and pairs programs I watched (dance is the subject of a whole other post) break down into parts that I found to be relatively easy to evaluate.

As Joe Inman said in his post on FSU (sorry, don't have the link, I think it was back in September), he felt as if for the first time he was accurately evaluating what the skater actually did with discrete scores rather than having to come up with a single number for the technical and one for the presentation aspects of a program. As Mathman said, it does make it important for judges to watch practices and/or otherwise be familiar with the programs, but essentially you tick off the score for each element while at the same time bearing in mind skating skills, transitions, performance/execution, interpretation, and choreography. It's like most other things that appear unbearably complex upon first viewing: Once you become familiar with the system and break it down into its component parts, your brain assimilates the information and can analyze independent elements at the same time.

It might be argued that by breaking programs down into these components, judges don't get the effect of the whole program, that they don't properly evaluate those programs that "blow the roof off" the arena even if the skater doesn't display the greatest technical skills or doesn't thoroughly fulfill the five components according to the criteria. It may indeed be that the TCS needs to include a "total effect" score for those programs whose whole is greater than the sum of its parts, but that might be part of the "tweaking" process. OTOH, in the past, judging based on how exciting a program is or how much the audience responds has been criticized, especially when the program has later been analyzed and found to be lacking in technical merit or to include a lot of easy skating and/or posing. Oksana Baiul's '94 Olympic free skate comes to mind--and I'm speaking as someone who loved Oksana's skating, but based on the actual technique, accuracy, and general presentation, I would have given the gold to Nancy Kerrigan.

Getting back to acronyms, first thanks to Pitchka for her list. Just to give the technical terms for some of the acronyms:
TSS stands for "Total Segment Score" or as Pitchka noted, "Overall Score."
TES stands for "Total Element Score" or as Pitchka noted, "Technical Elements."
TCS stands for "Total Component Score" or as Pitchka noted, "Program Components."

There has been some discussion as to whether the TES is equivalent to the technical score under the 6.0 system and the TCS equivalent to the presentation score. While I think the TES can be thought of as equivalent to the 6.0 system technical score, IMO it's more difficult to equate the TCS with the 6.0 presentation score. The reason is that the TCS includes a lot of what I consider to be technical elements.

For example, according to the ISU Communication 1207, the purpose of Skating Skills is: To reward efficiency of movement in relation to speed, flow and quality of edge. I listed all the criteria in the first post of this thread so I won't repeat them all here, but I will list those criteria I consider to be technical. For singles and pairs only (just for purposes of this discussion):
Skating Skills
• Multi directional skating
• Speed and power
• Cleanness and sureness of edges
Transitions
• Difficulty and quality of steps linking elements.
• Originality and difficulty of entrances and exits of elements
Performance/Execution
• Variation of speed (slightly technical)
Choreography
• Originality, difficulty and variety of program pattern

I think one of the strengths of the COP is that it doesn't try to strictly separate "technical" and "presentation," which I'd always thought was confusing at best and impossible at worst. During the SLC '02 Olympics, one of the Chinese pair teams completely lost unison on their side-by-side spins because the male partner fell out of his initial camel spin. I remember Scott Hamilton saying, "That will come off the technical mark," and Sandra Bezic saying, "No, it comes off the presentation mark." I don't know who was correct (I think Bezic gave an explanation as to why it came off "presentation" and it made sense), but the point is, at least to me, the male partner who fell out of his camel spin made a technical error and it also interfered with the presentation of the program.

In the COP, the spins are evaluated individually and included in the TES, and such an error as described above would receive deductions in the Elements section of the scoring and also would receive low scores under "Skating Skills," especially for lack of cleanness and sureness of edges and lack of balance in ability of partner, and "Performance/Execution," especially for lack of unison. So while I think there is a general correlation between the 6.0 system technical mark and the TES, and between the 6.0 system presentation mark and the TCS, the correlation is a weak one. For me, rather than thinking, "What would this equal under the 6.0 system?" it's more important to become familiar with the COP and evaluate the TES and TCS according to the stated criteria.

Still to come--more acronyms and abbreviations! (I know people will be waiting with bated breath:p)
Rgirl
 