Skaters/Judging experts on GS - Question

Joined
Jun 21, 2003
Are you sure you don't like CoP 'cuz you sure do explain it well!

I do like the CoP. Sort of. :)

The reason I have ambivalent feelings about it is that, IN THEORY, it goes like this: figure skating is a judged sport, not a measured sport. The CoP tries to force a square peg (measurement) into a round hole (judgement).

Skatinginbc is an expert at doing just that.

One example that skatinginbc gave earlier is wine tasting. It seems like the only sort of judgement it is possible to make is, "This wine tastes better than that wine." But if we want winemaking to be an Olympic sport, we have to come up with a way to say, this wine scores 97.2 points and that one scores 94.6 points. And strange as it seems, we can take a somewhat creditable stab at doing that.

(One problem with wine tasting is skate order. The judges might be happier when they get down to the final flight, having tasted 18 wines previously, and thus be in the mood to give out generous marks for the last six. :) )

Where I grew up livestock judging was a very important "sport." At the county fairs there were competitions among 4-H-ers about who raised the finest calf or lamb (also the largest pumpkin -- but that can be weighed. :) ) In addition there would be contests in judging these contests.

All the young contestant judges would judge a selection of livestock, and then the professional judges would judge the student judges on their judging ability.

This was serious business, because the best of the apprentice judges could get jobs as livestock buyers for large meat packing outfits and earn big bucks.
 
Joined
Jun 21, 2003
skatinginbc said:
Excluding the highest and lowest scores ==> Skater C won.

I never agree with using this method when the number of raters is small (i.e., after discarding two scores, only three are left ==> the probability of measurement error increases significantly due to the limited size).

Very tricky. The measurement error is still based on the full sample size of 5. This is true even of the median, where only one number counts in the end. The reason is that you cannot predict in advance which judges' scores will end up being excluded.

If you increase the panel from 5 (keeping 3) to 7 (keeping 5), the standard error goes down only by about 15% -- roughly 1 - sqrt(5/7), as if all 5 or 7 scores still counted -- rather than the 23% (roughly 1 - sqrt(3/5)) that one might expect from the surviving scores alone.
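If you want to check that claim for yourself, here is a minimal Monte Carlo sketch. It is my own toy setup, not anything the ISU does: it assumes normally distributed judge scores, and the mean of 6.0 and spread of 0.5 are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
trials = 200_000

def trimmed_mean_se(panel_size):
    """Estimate the standard error of the panel mean after dropping the
    single highest and single lowest score on each simulated panel."""
    scores = rng.normal(loc=6.0, scale=0.5, size=(trials, panel_size))
    scores.sort(axis=1)
    return scores[:, 1:-1].mean(axis=1).std()

se5 = trimmed_mean_se(5)  # panel of 5, keep 3
se7 = trimmed_mean_se(7)  # panel of 7, keep 5
print(f"reduction in SE: {100 * (1 - se7 / se5):.1f}%")  # well under the naive 23%
```

The exact figure wobbles a little depending on the distribution you assume, but it lands much nearer the full-panel prediction than the trimmed-count one.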

For instance: 5.00, 5.00, 4.75, 4.75, 2.00.

The unreliable one is obviously 2.00, not one of the 5.00s. Why should we automatically discard a 5.00 simply because it is one of the highest?

I believe the main reason for the trimming is not a statistical one. I think they want to counter gross cheating and also to eliminate keying errors. Every once in a while you see something like

8.50 8.25 8.50 8.75 8.50 8.75 8.50 8.25 0.00

Obviously (cf. the defective stopwatch) the last judge pressed the wrong key. You can't change the entry after it has been entered. (This is never allowed in any sport, no matter how silly the recorded mark is. The purpose of this rule is to prevent people from influencing the judge after the fact.)

What about the two 8.75s? Are they legitimate or are they the result of two judges conspiring to give this skater a subtle boost?

Anyway, the more finely we analyze these details, the more skaters have a right to be angry and confused by the outcome. If a skater loses because a mark in his favor was excluded from consideration, or because one or two judges who didn't like him at all outweighed the majority of the panel that did like him better than the other guy -- that skater will not be a big fan of the CoP.

This is why they got rid of the random draw. Even though statisticians can explain why it is OK in the long run over many competitions, still, when Sasha Cohen lost the Eric Bompard trophy because the computer randomly discarded this judge's score rather than that, Sasha fans were not happy campers.

Unfortunately the obvious solution to all these problems -- increase the size of the judging panel -- is not feasible for financial reasons.
 

skatinginbc

Medalist
Joined
Aug 26, 2010
Unless there is reason to hope that the underlying distribution is symmetric...
A bell-like distribution is assumed when all judges are trained and calibrated to rate skaters according to uniform, specified criteria or rubrics. The training typically includes an assessment of rater reliability: the judges must be able to produce ratings within the acceptable range in order to pass certification.

the only methods that seem to give robust results are boot-strapping (iterative) techniques
After correcting extreme scores to be within the acceptable range (within one "level" of the mean: one full point on a 10-point scale, or 10 points on a 100-point scale), Skater D, your favorite, won under CoP.
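For concreteness, here is a sketch of one reading of that correction rule, applied to Mathman's 5.00/2.00 panel from earlier. The function name and the one-point band on a 10-point scale are my assumptions; the actual adjustment rule used in piano competitions may differ.

```python
import numpy as np

def pull_into_band(scores, band=1.0):
    """Pull any score lying more than `band` from the panel mean
    back to the edge of the band (a winsorizing-style correction)."""
    scores = np.asarray(scores, dtype=float)
    m = scores.mean()
    return np.clip(scores, m - band, m + band)

panel = [5.00, 5.00, 4.75, 4.75, 2.00]
corrected = pull_into_band(panel)
print(corrected)         # [5.   5.   4.75 4.75 3.3 ] -- only the 2.00 moves
print(corrected.mean())  # 4.56, versus 4.30 for the raw mean
```

One wrinkle: the panel mean itself moves once a score is pulled in, so a real implementation would have to decide whether to iterate the correction until it stabilizes.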

if there are five scores to begin with and two are discarded, the sample size is still 5, not 3
Yes. My post #118 was embarrassing. I was half dreaming, half awake, writing it in a rush with my husband waiting anxiously to go out together. I have edited my post #118. Better now? :biggrin:

But for interpretation, performance/execution, etc., there can be a greater difference of opinion
I'm not sure about that. The acceptable discrepancy for a judge's score in the Chopin piano competition (a competition mainly about interpretation, execution, etc.) is within one level (10 points on a 100-point scale) or even less (5 points in the finals). Your statement might not have empirical support. Raters of abstract skills (writing, for example) are able to reduce their biases (e.g., toward certain criteria) and become highly consistent after rater training (http://www.ijlt.ir/portal/files/401-2011-01-01.pdf).

Figure skating is a judged sport, not a measured sport. The CoP tries to force a square peg (measurement) into a round hole (judgement). Skatinginbc is an expert at doing just that.
To make sure we don't lose each other in the semantics, I would like to clarify some definitions. By measurement I mean "the assignment of numerals to objects or events according to some rule" (Stanley Smith Stevens, 1946, http://en.wikipedia.org/wiki/Psychometrics). Under this definition, assigning either rankings or points to skaters' performances based on specific rules or criteria is an act of measuring, and the rankings or points derived from such assessments are measurements. If you prefer calling it judgment, then let's call it judgment, or assessment or evaluation or appraisal or whatever pleases you. The measuring process can be either a norm-referenced assessment (comparing skaters against each other) or a criterion-referenced assessment (judging a performance against an explicit set of mastery standards or levels). Either way, it can be rated (ranked or scored) holistically or analytically.

Holistic vs analytic assessment:
Is a quick, overall evaluation needed? Yes. It is difficult for the judges to focus on tiny details of so many elements and still be able to evaluate the overall performance, interpretation and choreography simultaneously.
Does the performance mean more than the sum of its parts? Yes, at least in the eyes of the fans.
Can a skating performance meaningfully be broken down into distinct dimensions? Yes. Big tricks (jumps and spins) are distinct from the rest. I'm not sure about footwork and transitions though.
Is it necessary to provide formative feedback on the dimension in question? Probably. Although providing feedback to the skaters is not mandatory, people would like to know why certain skaters receive higher/lower scores.

This is my position:
Skating is a sport, so criterion-referenced assessments should be employed. Skating is more than the sum of its parts, so holistic assessments should be used, except that the big tricks should be separated from the rest as a distinct category (i.e., the technical element scores). Therefore, skating in my opinion should adopt a holistic, criterion-referenced measure (the vicious word that Mathman hates :biggrin:).

If skating were solely a performance art, not a sport, I wouldn't mind if the last stage of the competition (say, containing only the final six) were rated by ranking order. Assigning rankings to a long list of competitors is not a good idea in my mind: the differences in the middle of the bell curve are relatively small, full of flip-flops, tie-breakers and all that. A point system can do a better job in that respect.

One of the criticisms of the IJS is that it promotes "corridor judging." No one wants to get an anomaly, so you tend to give the score that you think the rest of the judges will give, rather than "voting your conscience."
"Corridor judging" is in fact desired, perfectly in line with the main goal of judge training, namely, to reduce rater biases and differences in severity (too stringent or lenient). I think what you meant was "reputation judging"--A judge assigns scores based on the skaters' past results or reputation to avoid an extreme score that might result in a disciplinary action. Well, is there any research study suggesting that reputation judging is more rampant in CoP than in 6.0? My impression is the opposite: Margarita Dorbiazko & Povilas Vanagas wrote a letter to Cinquanta in Feb 2002 complaining that the judges relied on reputation judging rather than the actual performances. It was under the 6.0 system. How do you explain Javier Fernández's sudden jump in PCSs from 2011 Worlds to 2011 Skate Canada with your criticism of "corridor judging"? Was the outcome based on reputation or past results? Or was it because most judges recognized Javier's improved quality?

This is Javier's FP Transition score in 2011 Worlds:
5.75 6.00 6.25 6.00 6.50 6.00 6.50 6.00 6.00 (Mean = 6.11. Note the amazing consistency among the judges: no score is more than one point from the mean.)

This is Javier's FP Transition score in 2011 Skate Canada:
8.50 8.00 8.75 8.00 8.00 7.25 7.50 7.00 6.75 (Mean = 7.75. Note the amazing consistency among the judges: no score is more than one point from the mean.)
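A quick check of those two parenthetical claims, for anyone who wants to verify the arithmetic (the numbers are just the two panels typed in from above):

```python
import numpy as np

panels = {
    "2011 Worlds":       [5.75, 6.00, 6.25, 6.00, 6.50, 6.00, 6.50, 6.00, 6.00],
    "2011 Skate Canada": [8.50, 8.00, 8.75, 8.00, 8.00, 7.25, 7.50, 7.00, 6.75],
}
for name, scores in panels.items():
    scores = np.array(scores)
    dev = np.abs(scores - scores.mean()).max()
    print(f"{name}: mean = {scores.mean():.2f}, max deviation = {dev:.2f}")
```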

So they were "corridor judging" by your definition. Was that a bad thing or something good that should be promoted? It was clearly not based on the past results or reputation. The fact that so many judges dared to assign a score one point higher than the past results proved the merit of CoP (criterion-referenced measure).

But for PCS the anomalies subtract. If you are too high on interpretation but too low on choreography, that counts as 0 anomalies.
What does that say to us? It means that the design was an "analytic holistic assessment", a global synthetic judgement with specified categories to ensure that no particular aspects of the performance are overly valued by some judges and ignored by others. It is similar in design to most piano competitions where categorical aspects are specified, for instance (David Lang Piano Competition: http://www.redpoppymusic.com/rules/official-rules.pdf),
● Interpretation of the score to the Work (20 points maximum)
● Musicianship (20 points maximum)
● Vitality of performance (20 points maximum)
● Originality of performance (20 points maximum)
● Evaluation of the performance as a whole (20 points maximum)
I cannot find the scoring criteria for the Chopin Piano Competition on the internet, but as far as I can remember, it consists of 4 or 5 categories (probably 4, because my memory tells me 25 points maximum for each category). Remember that in a previous post I pointed out that the raw score given by a piano judge will be adjusted if it deviates by more than 10 points? That adjustment is based on the total score, not on the individual category scores. The categories serve mainly as a reminder to the judges, so that they apply similar criteria in their judging. A high inter-category correlation is, however, expected, given that those skills are highly integrated and it is NOT very meaningful to separate them cleanly.
If that is indeed also the principle behind CoP, the criticism about judges' similar PCSs across all categories becomes meaningless.
 

skatinginbc

Medalist
Joined
Aug 26, 2010
I believe the main reason for the trimming is not a statistical one....
To correct systematic errors caused by, for instance, imperfect calibration of raters, we may convert each judge's scores to standardized scores, so that all judges' scores follow the same distribution. Since the ISU already uses a computer in its scoring, why not let the computer do the standardization as well?
To correct random errors caused by, for instance, keying errors, we may use the median (a robust statistic) instead of the mean. And if you believe that a bell-shaped distribution cannot reasonably be assumed in figure skating judging -- because in some skill categories (e.g., interpretation) some judges may focus on certain criteria while others emphasize different ones, or because of possible contamination (e.g., judges conspiring to give a skater a subtle boost), so that a mixture model or a distribution with high kurtosis is observed -- then the median is the better estimate. The three-judge system used in assessing the difficulty level of an element is actually a method of finding the median. This is also the scoring method for the TOEFL writing test, where two raters grade an essay and a third rater is needed only when the first two fail to agree.
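To see how robust the median is, take Mathman's hypothetical keying-error panel from earlier. A two-line sketch:

```python
import numpy as np

# Mathman's hypothetical panel: the last judge pressed the wrong key
panel = np.array([8.50, 8.25, 8.50, 8.75, 8.50, 8.75, 8.50, 8.25, 0.00])

print(f"mean   = {panel.mean():.2f}")      # 7.56 -- dragged down by the 0.00
print(f"median = {np.median(panel):.2f}")  # 8.50 -- shrugs off the keying error
```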

So, will you like CoP better if it uses medians instead of means?
 
Joined
Jun 21, 2003
Yes, I think so (median ;) ). This has other advantages, too. It is really easy for fans and skaters to understand, and the computer doesn't have to deal with rounding errors (as, for instance, when Evan Lysacek won U.S. Nationals ahead of Johnny Weir because of inappropriate rounding conventions).

By the way, the median is a non-parametric statistic, hence perfectly suited for (the kind of rating that I like to call :) ) judging. The mean is a parametric statistic -- the kind you want to use in measuring.

In 6.0 skating, after the first skater in a flight had skated there would be a pause while the median scores (5.5, 5.7, or whatever) across all judges were computed. These medians were communicated to the judges.

I am not sure what use the judges made of this information. I believe that they were then allowed to "recalibrate" their own score if they wanted to be on the same page with the rest of the judges. Then for subsequent skaters, at least each judge knew whether he was scoring more strictly or more leniently than the panel as a whole.

Since the actual number 5.7 did not really matter, just the placement, this did not do violence to the judges' integrity and independence. It just got all the judges off on the right foot.

P.S. The median would be great for GOEs. For PCSs you still have to sacrifice a little bit of mathematical purity when you add or average the medians of the five components.
 

skatinginbc

Medalist
Joined
Aug 26, 2010
The mean is a parametric statistic -- the kind you want to use in measuring.
You have such a fixed definition of measuring. Measurement theory also deals with ordinal data such as rankings, which are analyzed with non-parametric statistics.
The median would be great for GOEs.
Agreed. When I looked at the protocols of past results, I found the median to be the most neutral, convincing and accurate according to my own taste. Another advantage of using the median is that no trimming of outliers is needed.
For PCSs you still have to sacrifice a little bit of mathematical purity when you add or average the medians of the five components.
Is that a problem significant enough to worry about, given that the five components are considered as a whole and the median we are looking for is the median of the total score of the five components combined?
 
Joined
Jun 21, 2003
^ It's true I do not like to lump judging and measuring together under a single head, "rating."

Suppose I die. I present myself before the Pearly Gates. Saint Peter appears and says in or out. I have faced Judgement.

So now Saint Peter, for bookkeeping reasons, decides to assign the number pi to "in" and Avogadro's number to "out." Now I have faced "Measurement Day" because a numeral has been assigned.

Of course if Saint Peter says, instead of in or out, "I reviewed your ledger and you score 73.9 on the sin-o-meter," well that's a different story. ;)

We can call the distinction mere semantics if we like, but vocabulary, grammar and syntax do have their uses. We not only have to measure things, we have to talk to each other as well. Experts in every science seize possession of ordinary words and give them precise meanings within that particular discipline which may or may not correspond to ordinary usage. :yes:

Another advantage about using the medium is that no trimming is needed.

I would put it this way. The median is the maximally trimmed mean: all the scores except one are trimmed away, each discarded for being either too big or too small.

The trimmed mean is a halfway compromise when we can't decide whether the mean or the median is the right thing to look at. That's why the trimmed mean is so hard to analyze mathematically -- it is neither fish nor fowl.
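Here is a tiny illustration of that, reusing the 5.00/2.00 panel from earlier (scipy's trim_mean cuts the stated fraction of scores from each end):

```python
import numpy as np
from scipy.stats import trim_mean

panel = np.array([5.00, 5.00, 4.75, 4.75, 2.00])

for cut in (0.0, 0.2, 0.4):  # fraction trimmed from EACH end
    print(f"trim {cut:.0%} per end -> {trim_mean(panel, cut):.3f}")
print(f"median             -> {np.median(panel):.3f}")
```

At 40% per end only one score survives, and the trimmed mean has become the median.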
 
Joined
Jun 21, 2003
Is that a problem significant enough that we have to worry about given that the five components are considered as a whole and the medium we are looking for is the medium total score of the five components combined?

No, on second thought I take back that criticism. I think the right way to look at it is that each component separately provides a score, then we add up the scores.

We do not particularly want to get an average rating among the five components for a particular skater.
 

skatinginbc

Medalist
Joined
Aug 26, 2010
Suppose I die. I present myself before the Pearly Gates. Saint Peter appears and says in or out. I have faced Judgement. So now Saint Peter, for bookkeeping reasons, decides to assign the number pi to "in" and Avogadro's number to "out." Now I have faced "Measurement Day" because a numeral has been assigned. Of course if Saint Peter says, instead of in or out, "I reviewed your ledger and you score 73.9 on the sin-o-meter," well that's a different story. ;)
It is Judgment Day if the decision is nominal: "In" or "Out", "Stay in Heaven" or "Go to Hell", "guilty" or "not guilty". It is Measurement Day if Saint Peter ranks me against others and claims that I am the 34th most sinful person he has ever met. So, do you think a skating competition is more like Judgment Day or Measurement Day?
 

skatinginbc

Medalist
Joined
Aug 26, 2010
I think the right way to look at it is that each component separately provides a score, then we add up the scores.
If we do so, then we are treating them as truly distinct categories, not a holistic assessment, and thus the anomalies in PCS should not be allowed to subtract.
 

dorispulaski

Wicked Yankee Girl
Joined
Jul 26, 2003
Country
United-States
So, will you like CoP better if it uses medium instead of mean?

My Wicked Yankee Girl side just has to come out here, and I apologize in advance to you two serious (and very interesting) debating mathematicians.

I think that I would love COP better if they used a medium. :rofl:


Then I would know who won the competition before the skating started, and I could just sit back and enjoy the show. :laugh:
 

gkelly

Record Breaker
Joined
Jul 26, 2003
It is Judgment Day if the decision is nominal: "In" or "Out", "Stay in Heaven" or "Go to Hell". It is Measurement Day if Saint Peter ranks me against others and claims that I am the 34th most sinful person he has ever met. So, do you think a skating competition is more like Judgment Day or Measurement Day?

In skating terms, the binary In/Out decision would be more comparable to tests and the sinfulness ranking more comparable to competition.
 

gkelly

Record Breaker
Joined
Jul 26, 2003
No, on second thought I take back that criticism. I think the right way to look at it is that each component separately provides a score, then we add up the scores.

We do not particularly want to get an average rating among the five components for a particular skater.

If we do so, then we are treating them as truly distinct categories, not a holistic assessment, and thus the anomalies in PCS should not be allowed to subtract.

I agree with Mathman here.

I think the point of the PCS is to look for five separate "holistic" assessments in five separate but overlapping categories -- maybe we can't use the word "distinct" since they do overlap, but the intention is to distinguish them as much as possible rather than to obliterate the distinctions by averaging.

I say "holistic" because they each assess their respective criteria over the whole program.

Look at Transitions and Interpretation, for example, which in theory are probably the most distinct.

We want to be able to say, e.g., Skater A uses difficult and varied skills with good quality and directly links highlight moves and elements together throughout the program (9.0 Transitions) and is generally good at skating to the music -- effortless and on time but not emotionally connected (7.0 Interpretation) whereas Skater B is just above average at connecting the elements (6.0 Transitions) but very good at expressing the music (8.0 Interpretation).

Breaking down the scores that way is a lot more informative than saying Skater A has an average PCS of 8.0 and Skater B has an average of 7.0.
 
Joined
Jun 21, 2003
If we do so, then we are treating them as truly distinct categories, not a holistic assessment, and thus the anomalies in PCS should not be allowed to subtract.

Good point.

However, I still think that the reason the ISU does it this way is not related to reliability and validity. I think it is to spot systematic cheating and bias on the part of a particular judge for or against a particular skater, especially national bias. Persistent national bias, and occasional charges of collusion, are what made the ISU rush the CoP into effect in the first place. The CoP's main selling point even today is not that it is a better scoring system, but that it is harder for crooked judges to manipulate the results.

In general, the ISU does not want to charge its own judges with incompetence. In order for the ISU to review a judge's performance, there have to be anomalies pretty much straight down a skater's score card.

I haven't looked up the exact wording in a while, but basically there have to be as many anomalies as there are scoring categories before anything happens. And this has to occur in three different events before it triggers an investigation.

After all, ISU judges are unpaid volunteers who get nothing for their service but expenses and a thank you.
 

skatinginbc

Medalist
Joined
Aug 26, 2010
I agree with Mathman here...the intention is to distinguish them as much as possible rather than to obliterate the distinctions by averaging...Breaking down the scores that way is a lot more informative than saying Skater A has an average PCS of 8.0 and Skater B has an average of 7.0.
I did not speak of or respond to averaging, which is not part of the algorithm for deciding the outcome with the medians.
We want to be able to say, e.g., Skater A uses difficult and varied skills with good quality and directly links highlight moves and elements together throughout the program (9.0 Transitions) and is generally good at skating to the music -- effortless and on time but not emotionally connected (7.0 Interpretation) whereas Skater B is just above average at connecting the elements (6.0 Transitions) but very good at expressing the music (8.0 Interpretation).
Then we can break down a skating performance into two categories:

1. Technical scores (analytic assessment): judged by two panels: one assessing the difficulty level, and the other the execution, like the D-scores and E-scores in diving competitions.
A. Elements (Big tricks): Jumps, spins and other standardized elements that have limited room for creativity.
B. Skating skills: Turns, edges, speed and so on as demonstrated in footwork, transitions and anywhere in between.

2. Presentation scores (holistic assessment): performance, choreography, and interpretation. This category is judged as a whole. When you add all body parts (e.g., hands, head, torso, legs, etc.) together, they won't make a living thing.

What do you think?

Speaking of "flip-flops" under ordinal judging, I really cannot see what the fuss is about
How about "flip-flops" under the "median" method? Say, if we use the computer program to standardize each judge's score like some international piano competitions do so that the scores from all judges are in the same distribution (For instance, an 8 in one judge's mind could mean a 7 in the mind of another, or 0.50 point for one judge could mean 0.75 for another. By standardizing their scores, we may correct those systematic variances). And then the adjusted scores, instead of the raw scores, are used to find the medians. As the competition progresses and more data about each judge's score distribution are available, there is a chance that the medians might change and thus "flip-flops" might occur. Will you make a fuss about that?
 

gkelly

Record Breaker
Joined
Jul 26, 2003
I did not speak of or respond to averaging, which is not part of the algorithm for deciding the outcome.

Looking back at the interchange where that came from, I guess you and Mathman had been discussing it as a possibility, but neither of you was really recommending it.
So ignore my response to what I thought you were talking about. :)

Then we can break down a skating performance into two categories:

1. Technical scores (analytic assessment): judged by two panels: one assessing the difficulty level, and the other the execution, like the D-scores and E-scores in diving competitions.
A. Elements (Big tricks): Jumps, spins and other standardized elements that have limited room for creativity.
B. Skating skills: Turns, edges, speed and so on as demonstrated in footwork, transitions and anywhere in between.

Are you being descriptive or prescriptive here?

Descriptively, yes. For singles and pairs, A. is essentially the current Technical Elements Score minus the step and spiral sequences, and B. is those sequences plus all the other technical content and technical quality between the elements as currently covered under the Skating Skills and Transitions components.

Are you proposing to change the current process? E.g., to group the scoring for those sequences in with the whole-program skating and transitions scores instead of as elements? To have the technical panel somehow define the difficulty of the skating between the "big trick" elements? A case could probably be made for either of those changes, although I'm not sure how the details would work. Do you want to make that case and figure out how to implement it?

And do we want to say that "skating skills" (probably the single most important score, the single most important quality that is being judged in a skating contest, which informs everything else) is or is not holistic, should or should not be judged across the whole program?

2. Presentation scores (holistic assessment): performance, choreography, and interpretation. This category is judged as a whole. When you add all body parts (e.g., hands, head, torso, legs, etc.) together, they won't make a living human being.

What do you think?

For 2), are you acknowledging that under the current system those three aspects (performance, choreography, interpretation) are each judged holistically across the whole program? Yes, that's right, they are.

Or do you mean that those three components should be merged into a single score, similar to the old 6.0 Presentation mark?
 

skatinginbc

Medalist
Joined
Aug 26, 2010
Are you being descriptive or prescriptive here?
I was just soliciting ideas. I'm no expert in figure skating. Unless skating experts think that it makes sense as well, my idea should be taken as part of a brainstorming process, not a formal proposal, nor a prescriptive demand.
Are you proposing to change the current process? E.g., to group the scoring for those sequences in with the whole-program skating and transitions scores instead of as elements? To have the technical panel somehow define the difficulty of the skating between the "big trick" elements?
It's just an idea. If skating experts like you immediately say "NO, bad idea", then no further discussion is needed. If you say "Sounds good, probably feasible", then we can further work on the details.
And do we want to say that "skating skills" (probably the single most important score, the single most important quality that is being judged in a skating contest, which informs everything else) is or is not holistic, should or should not be judged across the whole program?
I said "technical scores" were "analytic assessment" because it requires the judges to observe each dimension separately (i.e., "big tricks" and "skating skills") in order to come up with different profiles of performance. As far as whether we should judge the sub-category "skating skills" holistically or analytically, it is open for debate.
are you acknowledging that under the current system those three aspects (performance, choreography, interpretation) are each judged holistically across the whole program. Yes, that's right, they are. Or do you mean that those three components should be merged into a single score, similar to the old 6.0 Presentation mark?
I meant they should be seen and analyzed as one inseparable category. The effort to divide a soulful, living thing into different body parts is called "murder" (in this case, "killing the art"). The sub-categories may exist just to remind the judges that they should not be drawn to one aspect (e.g., "clean performance") and ignore another (e.g., "so-so interpretation"); still, only one score -- the total for this big category -- matters. Another way (and probably a better way) is simply to delete all these sub-categories and use scoring rubrics to remind the judges of the criteria.
 
Joined
Jun 21, 2003
gkelly said:
Or do you mean that those three components (P&E, Choreo/Composition and Interpretation) should be merged into a single score, similar to the old 6.0 Presentation mark?

I mean they should be seen and analyzed as one inseparable category.

Whatever the intent of the CoP, in practice I think the judges do just that -- give out one blanket score for these three components combined.

Here are the scores received in those three categories (P&E, Choreography/Composition, Interpretation) at Four Continents.

Chan 9.04 9.11 8.96
Takahashi 8.54 8.54 8.64
Rippon 7.57 7.54 7.46
Miner 7.43 7.54 7.39
Reynolds 6.83 7.18 7.00
Mura 7.14 7.25 7.07
Ten 6.93 7.18 7.00
Ge 7.18 8.96 7.18
Guan 6.68 6.75 6.75
Machida 6.89 7.07 7.07
Song 6.39 6.29 6.29

Only once was there a difference of even .25, the minimum gradation of individual judges' marks.
 

skatinginbc

Medalist
Joined
Aug 26, 2010
I guess you and Mathman had been discussing it as a possibility, but neither of you was really recommending it. So ignore my response to what I thought you were talking about. :)
It was my fault. I spelled it wrong. I spelled "medium" instead of "median". It might have led you into thinking that I meant "averaging".
I think that I would love COP better if they used a medium. :rofl:
:rofl: :eek:
 