Spinner Data Task
The difference between what should happen and what does happen is a difficult distinction for students. They are so used to finding exact answers in the back of textbooks that differing experimental results create a sense of uneasiness. At an early age (Grade 9 in my province) we begin to introduce students to the ideas of sampling and experimental probability.
The topic is usually approached with a project or survey of schoolmates. The results are then tallied and used to create “probabilities” of various things such as favourite sports team, food, or colour. I love the philosophy behind the project approach; student initiative and autonomy are powerful things. I, however, don’t like that the experiment involves humans. Here’s why…
Students see humans as unpredictable, but often view mathematical concepts as very predictable. Can we blame them? For years they have been calculating, to remarkable precision, answers neatly coded in the back of textbooks. Calculating experimental probabilities on unpredictable subjects allows students to connect probability (a naturally unpredictable thing) exclusively to (seemingly) unpredictable events.
I want my students to encounter something counterintuitive. Hopefully, they can construct a sense of stochastic thinking through this cognitive conflict.
Enter something very predictable… something so mathematical… the coloured spinner.
Hand each group a Spinner Data Task: Spinner Charts handout.
I begin by establishing the predictability of the device. I assure the students that each section of the spinner is exactly one-fifth of the total area. We also talk about ties. What should happen if it lands on the divider between two sections? Because this is digital, I explain that the program's rounding makes ties impossible. In a nutshell, the spinner is completely fair.
Have the students predict the results of ten spins. What is the logical conclusion? Is there any dissension? This is a great time to play on predictability.
“So you’re telling me that it MUST land on each segment exactly twice?”
Make the students verbalize uncertainty; add the language of probability to their discourse.
Have them predict the results for 100 spins. Are they more willing to see variance because the number of trials went up? Is anyone more certain that the spread will be uniform? You will get interesting arguments from both sides:
“There are more spins, so more chance of the spinner landing on the same piece over and over.”
“There are more spins, so there are more chances for them to even up as you go.”
Both excellent conversations.
I show them the result for 200 spins with a six-second Vine video. This provides a quick reveal and an immediate feedback loop. It also allows teachers to show the technology even if they don’t have access to individual technology for each group. The Vine can be paused at any point. Have a discussion with your students about the result.
Is it fair? Is the program rigged? Where is the breaking point? At what point would you no longer believe the computer simulation? A spread of 10? 20? 30?
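If you want to run more trials than the Vine allows, a quick simulation can stand in for the reveal. This is a minimal sketch of a fair five-section spinner (the actual method behind the spinner website is unknown to me), using Python's `random` module:

```python
import random
from collections import Counter

def spin(n, sections=5, seed=None):
    """Simulate n spins of a fair spinner with equally sized sections."""
    rng = random.Random(seed)
    return Counter(rng.randint(1, sections) for _ in range(n))

# Ten spins rarely land exactly twice on each section,
# and even 200 spins usually show a visible spread around the expected 40.
print(spin(10))
print(spin(200))
```

Running this a few times gives concrete spreads for the "breaking point" conversation: students can see how often a gap of 10 or 20 actually turns up in a fair device.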
The second spinner on the handout shows the section labeled “1” as twice as big as each of the other four. I would go through the same process with the students as before, but with a growing awareness of uncertainty.
The next spinner has section “1” worth three times sections “3”, “4”, and “5”, while section “2” is double the size of “3”, “4”, and “5”. Repeat the process: have students predict the results and graph them on the accompanying charts that mirror the Vine videos.
The last spinner has section “1” worth four times sections “3” and “4”, while sections “2” and “5” are double the size of sections “3” and “4”.
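The unequal spinners can be simulated the same way by weighting each section by its relative area. This sketch uses the last spinner's ratios (again, an assumption about how the website generates spins, not its actual code):

```python
import random
from collections import Counter

# Relative section sizes for the last spinner:
# section 1 is four times sections 3 and 4; sections 2 and 5 are double them.
weights = {1: 4, 2: 2, 3: 1, 4: 1, 5: 2}

def weighted_spin(n, weights, seed=None):
    """Simulate n spins of a spinner whose sections have the given relative areas."""
    rng = random.Random(seed)
    return Counter(rng.choices(list(weights), weights=list(weights.values()), k=n))

# In the long run, section 1 should capture about 4/10 of the spins,
# sections 2 and 5 about 2/10 each, and sections 3 and 4 about 1/10 each.
print(weighted_spin(100, weights))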
After the discussion around the proportions wanes, I want to reverse the process to see if students can get a feel for the “population” when given the data from a “sample”. I give each group a copy of a handout with a set of data. They are asked to draw the spinner that was used to collect the data.
I provide them with a protractor and allow them to use their calculators. Students will probably assume the data was generated with “nice fractions”. Some groups will take the percentages and try to match them with the closest unit fraction. A nice corollary is that they then try to make all their fractions add to one. Entertaining…
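The students' snap-to-a-nice-fraction strategy can be mimicked in a few lines. This sketch (the sample percentages are hypothetical, not from the actual handout) uses Python's `fractions` module to find the closest small-denominator fraction:

```python
from fractions import Fraction

def nearest_nice_fraction(pct, max_denominator=10):
    """Snap an observed percentage to the closest fraction with a small denominator."""
    return Fraction(pct, 100).limit_denominator(max_denominator)

# Hypothetical sample percentages for sections 1-5, e.g. from 200 spins
observed = {1: 41, 2: 19, 3: 11, 4: 9, 5: 20}
guesses = {s: nearest_nice_fraction(p) for s, p in observed.items()}
# 41% snaps to 2/5, 19% to 1/5, and so on;
# notice the snapped fractions often do NOT sum to exactly one.
```

That last wrinkle mirrors what happens at the desks: rounding each section independently to a "nice" fraction usually breaks the sum-to-one constraint, which is itself worth a class discussion.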
It is important that students write out their logic and record the sizes of their regions (as fractions or percents). When each group has attempted a solution, I digitize a couple and run a simulation to see how close they were. I recommend getting familiar with the spinner website before class.
The task develops the notion of probability within a seemingly predictable context. It encourages good math discourse, lays groundwork for bias in real world sampling, and has students conjecturing, arguing, and creating all at once. It works on basic ratio skills as an added bonus.
|Common Core State Standards Addressed|
|Saskatchewan Curriculum: Math 9 Curricular Outcomes Addressed|
Using a predictable context to study sampling, data, and experimental probability creates a deeper appreciation for the mathematics of the unknown. The assault on their common sense creates an ecology of rich mathematical discourse as students learn that not all of mathematics can be summarized neatly in an answer key.