
Globe and Mail: Figure skating judging system still has flaws

Joined
Jun 21, 2003
I think the different kinds of spins are an area where the TV commentators could help us out more. When they show spins in slow motion, the commentators could point out where the skaters change edges, etc., and how that figures into the scoring.
 

ivy

On the Ice
Joined
Feb 6, 2005
These are common abbreviations for coaches/choreographers/skaters when writing down a program layout, and a lot of judges used to use something like them for their 6.0 judging sheets (each judge had their own nomenclature, but it was similar: S = Salchow, T = toe loop...).


Yes, I'm sure they are extremely useful for people on the inside of the sport, but they also make it harder for fans to understand the judging and protocol sheets. Good design and a computer program could easily eliminate this point of confusion. If protocol sheets are to be made public, why not make them as readable as possible to the widest possible audience?
 

gkelly

Record Breaker
Joined
Jul 26, 2003
I suspect that the ISU sees two groups of people:

*insiders (skaters, coaches, officials, etc., numbered in the thousands) who need and want the details, work with them on a frequent basis (sometimes daily), and have a vested interest in learning abbreviations

*the general public (numbered in millions) who might tune in to skating competitions on TV or attend an event that happens to be held near home, but who just want to be entertained with pretty movement and exciting tricks, and the sponsors or media outlets who may offer big sums of money to reach those eyeballs by presenting skating events

What they don't seem to take into account is the serious fans (numbered in thousands) who try to follow the sport closely, travel to events, watch events online, etc. Of those thousands of fans, some are willing to put in lots of effort to learn everything that the insiders need to know. Others don't really care about the judging and put more effort into learning about skaters' bios or music choices or similar details. Many here fall in between and do want to know details but want the ISU to make it easy for them.

The question is, once the primary stakeholders' needs are satisfied and there is some acknowledgment of the casual viewers and the media sources that serve them, how can they address the in-between needs of the range of fans (including, perhaps, family members of competitors)? Should they add another step to procedures to meet the needs of 1,000 fans? 10,000? 100,000?

I doubt there are actually 100,000 fans worldwide who want to see the detailed protocols. But it would be great if the ISU could find ways to grow its community of sports fans to the point that it can make a profit from this group of intermediate stakeholders and meet their needs as well as those of the participants and of the casual mass audience.
 

emma

Record Breaker
Joined
Oct 28, 2004
Thanks, gkelly, for the thoughtful breakdown of the kinds of stakeholders here. For that middle group, I suspect that there are a number of different 'easy access' kinds of info they want, making it difficult, perhaps, to satisfy them. But some of it shouldn't be that hard. For example, the first few times I looked at protocols, I really didn't understand why some kind of key was not provided - one that would lay out the element abbreviations in addition to the small line of info they provide about 'x' and 'e'. I do understand there isn't room on the page, but why can't the key go at the bottom of the whole document? Or why can't a list of these often-used abbreviations be easily available on the ISU website, with a link to it provided at the top of the protocols? I really do understand the focus on the insiders and their 'needs'... but I see this as an opportunity lost by the ISU to generate a knowledgeable and engaged fan base.
 

Poodlepal

On the Ice
Joined
Jan 14, 2010
The problem is, there are so many numbers, decimals, multipliers, etc. that it's just too much. This is a sport, and sports are supposed to be fun. The score sheet is almost like the one a psychiatrist might use when declaring someone insane from a personality test, or what a child study team member might use when deciding if a kid needs an IEP. Well, she got a 4.5 on decoding, but she only got a 3.4 grade level on sentence structure... :)

This is what I would do.

1. Kill the decimals. Make everything a whole number.
2. Forget the positive grades of execution. If the jump is done perfectly, it gets full points.
3. Make a standard deduction that everyone can understand, like: -1 double foot, -2 hand down, -3 fall. They have something like this already, I think, so they can keep it. I am also not opposed to no credit for a fall at the senior levels, at least.
4. Someone suggested that they should do one of each jump type. I agree. They can subtract 5 if they don't try something. This would encourage quads.
5. Get rid of the choreography component. These "kids" don't do their own choreography. Give the points to Lori Nichol and judge what the actual skaters do.
6. Make "skating skills" a technical point. It seems to belong there more than with the program components.
7. Actually count the transitions (if they don't do that already). Don't say, "Oh, I remember this skater doing more transitions than this one, so she gets an 8, and the other one gets a 5.75." If skater #1 does 10 transitions, give her 10 points.
 

evangeline

Record Breaker
Joined
Nov 7, 2007
^^^

The #1 thing I would do is to have separate judging panels for TES and PCS. For financial reasons, I know this is probably impossible, but I sometimes can't help but think that judges are so busy evaluating each element separately that they don't have time to properly evaluate the program from the 'big picture' perspective, so to speak, that is required for the components of PCS.

The #2 thing I would do is to send out a LOT of memos reminding the judges that the categories of PCS should be evaluated separately! I'm tired of seeing the same pattern over and over again for every skater's PCS no matter what they do on the ice: all the components boxed together within a point of each other, with TR always slightly lower than all the rest.
 

Blades of Passion

Skating is Art, if you let it be
Record Breaker
Joined
Sep 14, 2008
Country
France
That would not work and would not be good at all, particularly no credit for choreography and crediting transitions like that. :disagree:
 

gkelly

Record Breaker
Joined
Jul 26, 2003
evangeline said:
The #1 thing I would do is to have separate judging panels for TES and PCS. For financial reasons, I know this is probably impossible, but I sometimes can't help but think that judges are so busy evaluating each element separately that they don't have time to properly evaluate the program from the 'big picture' perspective, so to speak, that is required for the components of PCS.

There are a maximum of 13 elements per program. Judges generally know what GOE they're going to give an element as soon as it's over. They're not going to spend more than a second or two worrying about an element that already happened -- they just put in the score, or make a note to come back to it at the end (e.g., if they're waiting to see whether the tech panel downgrades a jump).
 

mskater93

Record Breaker
Joined
Oct 22, 2005
evangeline said:
The #1 thing I would do is to have separate judging panels for TES and PCS. For financial reasons, I know this is probably impossible, but I sometimes can't help but think that judges are so busy evaluating each element separately that they don't have time to properly evaluate the program from the 'big picture' perspective, so to speak, that is required for the components of PCS.

The #2 thing I would do is to send out a LOT of memos reminding the judges that the categories of PCS should be evaluated separately! I'm tired of seeing the same pattern over and over again for every skater's PCS no matter what they do on the ice: all the components boxed together within a point of each other, with TR always slightly lower than all the rest.
They already tried #1 and the judges complained of boredom!

They've already done #2 and Joe Inman got blasted for it!
 

skatinginbc

Medalist
Joined
Aug 26, 2010
Jackie Wong said in his article, "The technical specialist is the person who identifies the error as a fall, and then the technical panel votes on whether or not it should be counted as a fall. The technical panel is made up of three people, the technical controller, the technical specialist, and the assistant technical specialist." I am confused. Does that mean there will be no vote if the technical specialist does NOT call out a "fall" in the first place? If so, it's easy for a "lenient" specialist to manipulate the outcomes, isn't it? Also, how do they "vote"? Anonymously, or through discussion like "I think we should give him the benefit of the doubt. What do you think?"? If there is a brief talk before the "vote", group dynamics (e.g., conformity to the "leader", to a friend, or to whoever expresses his judgment first) would play a significant role.
 

let`s talk

Match Penalty
Joined
Sep 10, 2009
The new judging system is a logical product of globalization and standardization. A minimum of originality, individuality, and quality; a maximum of effectiveness, predictability, and control. McDonald's on the Ice. That is where figure skating has turned out to be. (*For now, I hope.)
 

gkelly

Record Breaker
Joined
Jul 26, 2003
evangeline said:
The #1 thing I would do is to have separate judging panels for TES and PCS. For financial reasons, I know this is probably impossible, but I sometimes can't help but think that judges are so busy evaluating each element separately that they don't have time to properly evaluate the program from the 'big picture' perspective, so to speak, that is required for the components of PCS.

The #2 thing I would do is to send out a LOT of memos reminding the judges that the categories of PCS should be evaluated separately! I'm tired of seeing the same pattern over and over again for every skater's PCS no matter what they do on the ice: all the components boxed together within a point of each other, with TR always slightly lower than all the rest.

mskater93 said:
They already tried #1 and the judges complained of boredom!

A compromise -- assuming that it were financially feasible to assign more officials -- could be to have one set of judges assigned to judge GOEs plus Skating Skills and Transitions (i.e., focusing on the technical execution of the elements and the technical content and execution of everything between the elements), and a second set to judge only Performance/Execution, Choreography, and Interpretation.

mskater93 said:
They've already done #2 and Joe Inman got blasted for it!

It probably needs to be done as official communications from the technical committees. It would also help to have clearer guidelines and training on how to assign specific numbers to the various other components completely independently of the Skating Skills score.

I think it's much easier to get consensus among skating experts about what constitutes 5.0 or 7.0 or 9.0 skating skills than it is to figure out what level of choreography or interpretation would correspond to those numbers.

Maybe some professional practitioners, theorists, and critics of the visual and performing arts could offer some guidance on how to evaluate those aspects of skating performances on a scale of 1 to 10 (even though art is rarely competitive and rarely scored numerically for such purposes).

Obviously a skater with more skating skill would have better possibilities for scoring well in the other areas, but it should come down to how they actually use those skills for those purposes that day.
 

ivy

On the Ice
Joined
Feb 6, 2005
Here's a scale used in Dressage - another judged and somewhat subjective sport:

10 Excellent
9 Very good
8 Good
7 Fairly good
6 Satisfactory
5 Sufficient
4 Insufficient
3 Fairly bad
2 Bad
1 Very bad
0 Not performed

It is used to evaluate each individual movement - sometimes with a coefficient - as well as to provide overall scores for the horse and rider (similar to PCS in a way).

It helps me relate to the numbers - a good triple lutz, or a satisfactory triple lutz, or maybe an excellent one?
 

gkelly

Record Breaker
Joined
Jul 26, 2003

ivy said:
Here's a scale used in Dressage - another judged and somewhat subjective sport:

10 Excellent
9 Very good
8 Good
7 Fairly good
6 Satisfactory
5 Sufficient
4 Insufficient
3 Fairly bad
2 Bad
1 Very bad
0 Not performed

It is used to evaluate each individual movement - sometimes with a coefficient - as well as to provide overall scores for the horse and rider (similar to PCS in a way).

It helps me relate to the numbers - a good triple lutz, or a satisfactory triple lutz, or maybe an excellent one?

That's similar to the way the 0-10 scale is defined for program components (aspects of the whole performance) under IJS:
http://www.isu.org/vsite/vfile/page/fileurl/0,11040,4844-152077-169293-64120-0-file,00.pdf

Then there are some more detailed lists of what's supposed to be considered under each component:
http://www.isu.org/vsite/vfile/page/fileurl/0,11040,4844-152086-169302-64121-0-file,00.pdf

But what there isn't a document for is how to apply the numbers to the specific criteria. What do the percentages in that colored scale actually represent? Amount of time alone isn't sufficient, because I would think that meeting the criteria in an in-depth, detailed way 90% of the time, interrupted only to telegraph the hardest jumps, would be worth more than superficial involvement 100% of the time.

I would want more guidance. Judge training seminars and experience can help provide more language and more examples, but unless it's in an official document, not all judges will work with the same trainers or get the same experience.


Skating historically used a 0-6 scale similar to the above, for figures and later for whole programs.

With IJS, the individual elements are on a similar scale, essentially:
+3 excellent
+2 very good
+1 good
0 satisfactory
-1 minor flaw
-2 two minor flaws or one moderate flaw
-3 serious and/or multiple flaw(s)

The negative grades can get complicated because there are other penalties for poor performance in addition to the GOEs (downgrades, lower levels than attempted, fall deductions, extended-lift deductions in dance). And even 0 and +1 grades sometimes reflect punishable minor flaws reducing what would otherwise be a higher positive grade.
 

skatinginbc

Medalist
Joined
Aug 26, 2010
Jackie Wong: "The technical specialist is the person who identifies the error as a fall, and then the technical panel votes on whether or not it should be counted as a fall."
gkelly said: "they just put in the score, or make a note to come back to it at the end (e.g., if they're waiting to see whether the tech panel downgrades a jump)."

The above statements point out one critical source of measurement error: raters (judges, technical specialists) are NOT totally independent. Their judgments may be influenced by those of others. It is hard to fathom that a qualified evaluation consultant would have approved such a design.

My proposal: Get rid of the Technical Panel, and separate all raters (judges) into two groups:

Group One: To assess the difficulty of each executed element. The base mark of that element is the mean difficulty score given by the judges. Say seven judges give these level ratings for an executed element: 2, 3, 3, 2, 3, 2, 1, and the values for the levels are: level 1 = 3.5 points, level 2 = 5 points, level 3 = 7.5 points, etc. After deleting the two extreme ratings (highest and lowest), the base mark or average difficulty score for that element is (5 + 7.5 + 7.5 + 5 + 5)/5 = 6. All gray areas (under-rotation, edge calls, etc.) will thus be handled as gray areas, with none of the arbitrary dichotomous calls of the current scoring system.

Group Two: To assess the quality of each executed element (i.e., give out GOEs). The judges of this panel will not concern themselves with the level or type (Lutz vs. Flip) of an element, and thus they will not be informed of Group One's decisions. They may watch whether a jump has a "clear" edge going in, but whether it is a "wrong" edge or not is not their concern. They do not count the rotations in spins or jumps. All they care about is how well the elements are executed. They do, however, watch for hand-downs, stumbles, and falls. Say the mandatory deduction for a fall = -1, and here are the judges' decisions: -1 (fall), 0 (non-fall), -1 (fall), -1 (fall), -1 (fall), 0 (non-fall), -1 (fall). After deleting the two extreme ratings, the mandatory deduction for that executed element would be (-1 + 0 - 1 - 1 - 1)/5 = -0.8.

Besides the above responsibilities, each group also gives scores to:
Group One: Skating Skills, Transitions, and other mandatory deductions (time violations, etc.)
Group Two: Choreography, Presentation, and Interpretation.

The current system requires 12 raters (9 judges + 3 specialists), while mine requires 14 (7 judges for each group).
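
For concreteness, here is a minimal sketch in Python of the trimmed-mean scoring described above; the seven-judge panels, the level-to-points table, and the -1 fall deduction are this proposal's assumptions, not current ISU rules:

```python
# Sketch of the proposed trimmed-mean scoring. All values here
# (level-to-points table, seven judges, -1 fall deduction) are the
# proposal's assumptions, not ISU rules.

def trimmed_mean(scores):
    """Drop the single highest and lowest score, then average the rest."""
    s = sorted(scores)
    trimmed = s[1:-1]
    return sum(trimmed) / len(trimmed)

# Group One: difficulty. Each judge independently rates the level.
LEVEL_VALUE = {1: 3.5, 2: 5.0, 3: 7.5}   # assumed point values per level
level_ratings = [2, 3, 3, 2, 3, 2, 1]    # the seven judges' level calls
base_mark = trimmed_mean([LEVEL_VALUE[r] for r in level_ratings])
print(base_mark)                          # 6.0, as in the example above

# Group Two: each judge independently marks a fall (-1) or no fall (0).
fall_calls = [-1, 0, -1, -1, -1, 0, -1]
fall_deduction = trimmed_mean(fall_calls)
print(fall_deduction)                     # -0.8, as in the example above
```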
 

Serious Business

Record Breaker
Joined
Jan 7, 2011
skatinginbc, your proposal seems like a wonderfully elegant solution. You must have a great background/education in management.

The inconsistent calls from the technical panels from one competition to the next are a great and baffling source of controversy. That three people get to decide competition results largely on their own is not a great idea. Your solution not only fixes that, but also ameliorates the problem in PCS judging, where judges seem to lock in their PCS across the board without the variety that any one skater's marks should have. It makes it easier for the judges to focus on particular aspects of the PCS, since they don't have to handle all of them.

In fact, your suggestion makes too much sense. Unless there's some technical/skill barrier stopping most judges from being able to correctly judge under-rotation and level calls, I don't see why it can't be adopted. Please send it to the ISU, the USFS, and whoever else.

Your system could also be implemented pretty directly now by reducing the number on each panel to 6, so we'd have the same total number of specialists/judges. Yes, this makes the system less reliable, but the ISU are cheapskates.
 

gkelly

Record Breaker
Joined
Jul 26, 2003
skatinginbc said:
My proposal: Get rid of the Technical Panel, and separate all raters (judges) into two groups:

I like the idea of using averages for things that happen in gradations rather than either/or. And I could like the idea of one group assessing difficulty, skating skills, and transitions while another group assesses quality and qualitative performance components.

A couple of questions, though.

skatinginbc said:
Group One: To assess the difficulty of each executed element. The base mark of that element is the mean difficulty score given by the judges. Say seven judges give these level ratings for an executed element: 2, 3, 3, 2, 3, 2, 1, and the values for the levels are: level 1 = 3.5 points, level 2 = 5 points, level 3 = 7.5 points, etc. After deleting the two extreme ratings (highest and lowest), the base mark or average difficulty score for that element is (5 + 7.5 + 7.5 + 5 + 5)/5 = 6. All gray areas (under-rotation, edge calls, etc.) will thus be handled as gray areas, with none of the arbitrary dichotomous calls of the current scoring system.

1) Are the levels still based on published features that either are or are not achieved? Or should they be based on each technical judge's overall impression of the difficulty of the element?
For complicated elements, such as pairs step sequences, where there's no way that one person can actually keep track of all the features, should each tech judge make a guess as to the probable level based on the features s/he did see?

2) Who decides what each element actually is? Do each of the tech judges identify the elements independently, and then the computer uses the identification that the majority chose? (That would be an argument in favor of 5 or 7 judges instead of 6.) Or should the referee or the equivalent of the technical controller, or even the data entry person, have the authority to review confusing elements afterward and make a human decision, based on his/her own interpretation as well as those of the judges, as to what the ambiguous element should be called?

Most of the time all the tech judges would identify the same element. That was a triple toe loop. That was a layback spin. No question what it was. Where they might disagree is whether the toe loop was fully rotated or whether the layback deserved a higher level.

Sometimes an individual judge will have a bad viewing angle or a momentary lapse of attention or will hit the wrong button on the computer and will identify the element incorrectly by mistake. No problem -- the rest of the judges will get it right and the majority will overrule the mistake.

But sometimes the skater will make a skating mistake that makes the element ambiguous, and in that case someone needs to decide what the element actually was before the computer can calculate the base mark.

What if half the judges see a flip and the other half see a lutz with a change of edge?

What if half see a flying camel with a really weak fly and the other half see a camel spin with a backward entry and no fly?

What if half see a solo 2F with a hoppy insecure landing and the other half see a 2F+1Lo< combination? Does it fill a jump combination slot or a solo jump slot?

What if a skater in the short program steps out of both triple jumps, so there's no combination? Who decides which one should be considered to be the intended combination and which the solo jump?

skatinginbc said:
Group Two: To assess the quality of each executed element (i.e., give out GOEs). The judges of this panel will not concern themselves with the level or type (Lutz vs. Flip) of an element, and thus they will not be informed of Group One's decisions. They may watch whether a jump has a "clear" edge going in, but whether it is a "wrong" edge or not is not their concern. They do not count the rotations in spins or jumps.

Experienced judges are going to notice those things even if it's not their job to worry about them. So these judges' assessments of quality will be affected by whether they saw a double or a triple jump or whether the flying camel had 6 revolutions or 12.

skatinginbc said:
All they care about is how well the elements are executed. They do, however, watch for hand-downs, stumbles, and falls. Say the mandatory deduction for a fall = -1, and here are the judges' decisions: -1 (fall), 0 (non-fall), -1 (fall), -1 (fall), -1 (fall), 0 (non-fall), -1 (fall). After deleting the two extreme ratings, the mandatory deduction for that executed element would be (-1 + 0 - 1 - 1 - 1)/5 = -0.8.

Will there be a separate box on the computer screen (and paper, for backup or for small competitions run without computers for the judges) on which to keep track of falls? Next to the box for the element GOE? If the GOE is +3 to -3 as now, maybe there's an option of -4 to be used only for elements with falls.

What about falls between elements? Maybe it should be the first panel keeping track of those.
 

skatinginbc

Medalist
Joined
Aug 26, 2010
gkelly said:
1) Are the levels still based on published features that either are or are not achieved? Or should they be based on each technical judge's overall impression of the difficulty of the element? For complicated elements, such as pairs step sequences, where there's no way that one person can actually keep track of all the features, should each tech judge make a guess as to the probable level based on the features s/he did see?

This is something that needs reliability studies. If the results show that judging by overall impression is as good as judging by features, then go for overall impression, which is easy and time-efficient. I have a question, though: does the Technical Panel of the current system actually count the features achieved? What is shown on their computer screen? A checklist of features where they click a box when a feature is achieved? I am not a skater and have no clear idea how they actually do the judging. I'm inclined to believe they judge by impression and may review the footage if they are not in agreement.
gkelly said:
2) Who decides what each element actually is? Do each of the tech judges identify the elements independently, and then the computer uses the identification that the majority chose? (That would be an argument in favor of 5 or 7 judges instead of 6.) Or should the referee or the equivalent of the technical controller, or even the data entry person, have the authority to review confusing elements afterward and make a human decision, based on his/her own interpretation as well as those of the judges, as to what the ambiguous element should be called?

Option #1: Go with the majority. Option #2: If more than 1/3 of the judges identify an element differently, the computer automatically displays a warning, a footage review is then conducted by all the judges, and the majority has the final say. Which option is better? Again, that needs reliability studies. If a study shows that Option #2 does not improve reliability significantly, go with Option #1.
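
As a rough illustration of Option #2's disagreement check (the 1/3 threshold comes from the proposal; the element labels and the function itself are hypothetical):

```python
# Sketch of Option #2: flag an element for footage review when more than
# 1/3 of the judges identify it differently from the majority call.
from collections import Counter

def majority_call(identifications, threshold=1/3):
    """Return (majority identification, needs-review flag) for one element."""
    counts = Counter(identifications)
    call, votes = counts.most_common(1)[0]
    dissenters = len(identifications) - votes
    return call, dissenters > threshold * len(identifications)

print(majority_call(["3F", "3F", "3F", "3Lz", "3F", "3F", "3F"]))   # ('3F', False)
print(majority_call(["3F", "3Lz", "3F", "3Lz", "3F", "3Lz", "3F"])) # ('3F', True)
```
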
gkelly said:
Experienced judges are going to notice those things even if it's not their job to worry about them. So these judges' assessments of quality will be affected by whether they saw a double or a triple jump or whether the flying camel had 6 revolutions or 12.

As a non-skater, I can tell if a skater doubles a triple, which means the quality of the executed double was sacrificed in some way, and so a negative GOE may be justified. In some cases, I cannot tell whether a skater intentionally went for a double or not, which means its quality is as good as a planned double and probably deserves a positive GOE. Interestingly, I never count the revolutions of a spin while watching a performance; I look for well-centered, fast spins. Can those judges be "untrained"? We just assume their judgment may be affected by the technical aspects. Again, reliability studies would prove or disprove that assumption.
gkelly said:
Will there be a separate box on the computer screen (and paper, for backup or for small competitions run without computers for the judges) on which to keep track of falls? Next to the box for the element GOE? If the GOE is +3 to -3 as now, maybe there's an option of -4 to be used only for elements with falls. What about falls between elements? Maybe it should be the first panel keeping track of those.

The judge can just click on the "fall" box under the element on which the skater falls. If a fall occurs between elements, the judge clicks the "fall" box under "non-element". If a skater falls again between elements, the judge clicks another "fall". Nobody needs to keep track of falls; the computer does the job. Remember, agreement among the judges regarding a "fall" or "non-fall" is not needed.
 

Blades of Passion

Skating is Art, if you let it be
Record Breaker
Joined
Sep 14, 2008
Country
France
skatinginbc said:
Does the Technical Panel of the current system actually count the features achieved?

Yes. It requires 3 people to look at the features of footwork sequences, or else it would take far too long for the scores to come up (and it already takes quite a long time in some cases).

Your idea definitely has some merit; getting more opinions on levels/downgrades would be a good thing. As for the above problem, if there were 6 people on the "level/tech" panel and 6 people on the "quality/program" panel, you could separate the 6 people on the "level/tech" panel into two separate groups of three. This means you would only be getting 2 complete opinions on the level for footwork (rather than the 6 you would get for spins/jumps/other elements) but it's still an improvement.

Ultimately, though, I don't think the PCS scoring should be split up into two separate groups like that. The other problem with the proposal is that the protocols would become much more complicated. An extra line would have to be added for every element; casual fans trying to learn more about the scoring would become even more confused. If we could have 14 people involved with scoring at events, then I would much rather have 8 judges and 6 people on the tech panel. The tech panel would be split up into two separate groups of 3 and each group would look at different elements. One of the groups would be the "head group" and would decide how the program as a whole should be called. After calling the elements, the workload would then be split up between the two groups to decide the levels/rotations of each element. That would provide more time for the tech members to really thoroughly examine the elements and come to a well-informed conclusion.

Although, much of this problem (of tech calls so drastically altering the outcome of competitions) exists because some of the rules and judging tendencies are not good to begin with. If a jump is called < or <<, the judges should never automatically give less GOE just because the jump was called as such. The jump already received a deduction, and if it was otherwise clean, then it shouldn't be further penalized. Plus, judges should be deciding for themselves whether they feel the calls were correct or not. If the tech panel gives a jump < or << and the judge feels the call was undeserved, the judge should then give that element +1 GOE more than they otherwise would have, to counteract what they felt was a poor call. The system between judges and tech panel should work like a governmental checks-and-balances system.

Aside from that, judges should be thinking on a more objective scale about the quality with which an element was executed (i.e., figure skating across all time, not just the competition itself). Judges need to be more active in giving -GOE to elements that are lackluster as a result of laboriously trying to gain a level. They need to be a little more strict about giving +GOE. An element getting +3 GOE should mean it is among the best the sport of figure skating has ever seen. Getting +1 GOE in the first place should mean the element somehow went above and beyond in quality, not that it was merely completed without error. If judges were actually not rewarding these underwhelming Level 4 spins/death spirals/footwork with +GOE, and instead specifically thinking "nope, that element would have looked better at a lower level, without all those features crammed into it", then skaters would more often choose to do lower-level elements and there would be fewer discrepancies over the calls.

Furthermore, the disparity in base value between the different levels on elements is still too high in some cases, IMO. There is much that still needs to be improved with CoP and the judging itself. Adding more clueless judges and/or splitting up clueless judges to different duties won't result in competitions being evaluated that much better.
 