Evan Barba and J.R. Osborn
Introduction
Over the past decade, there has been a growing recognition among educators that systems, design, and computing are the three disciplines that best encompass the skills and knowledge workers need to contribute successfully to the 21st-century workforce (AAAS, 2009; Wilensky & Resnick, 1999). Yet developing curricula that successfully mesh these disciplines introduces new and understudied complexities beyond those that already arise in education, particularly when it comes to integrating those curricula into schools and aligning them with current educational standards and metrics. In this presentation, we describe the theoretical underpinnings and the practical benefits and challenges of our curriculum in Systemic Design and Computing (SDC), based on three iterations of a pilot course.
Our SDC curriculum treats systems thinking as a worldview that can be used to organize knowledge, formulate problems, and evaluate solutions, design as a set of methods for synthesizing and communicating solutions, and computing as a medium for implementing, testing, and deploying those solutions. It is rooted in the idea that teaching students a small set of cross-cutting concepts and skills while training them to apply those skills in new contexts can provide a firm but flexible foundation to build on over the course of their lifetimes.
We evaluate our pilot course and evolving curriculum in the context of Learning Progressions (LPs) (Alonzo, 2011; Black & Simon, 1992; Rogat, Corcoran, & Mosher, 2010), extending that research by defining a quantifiable notion of sophistication in SDC concepts and skills. Using spider graphs to chart student progress along multiple dimensions, and developing quantitative measures based on the emergent properties of these graphs, we have developed a flexible but consistent framework that captures and communicates the complexity of interdisciplinary learning without sacrificing our ability to track and compare student and cohort progress. Our hope is that by systematically investigating how students progress in their learning of SDC concepts and practices, we can understand the most effective ways to create the coherent, multi-dimensional, and engaging curricular experiences that students need to mature into effective and adaptable lifelong learners.
KEYWORDS: systemic design, education, computing, recursive design.
SDC Progress Variables
Typically, sophistication, the core metric of LPs, is defined by grade-level expectations or disciplinary knowledge, but measuring it has proved difficult and, at times, controversial. Progress maps (Hess, 2012; Hess, 2008; Wilson & Draney, 2004), in which student performance is ranked graphically on a continuum, have been praised as consistent, reliable, and practical measures of student performance, with the added benefit of communicating results easily. They have proven useful in providing timely feedback to students and teachers as part of formative assessment, and they can be combined with an underlying statistical model for longitudinal and group comparison, something education researchers value highly.
For our pilot course, we used six progress variables that embody a few key concepts in SDC. These variables are one possible set; they are not intended to be doctrinal or all-encompassing. Three of them, system mapping, visualization, and algorithms, represent collections of essential skills and knowledge in each SDC discipline. However, the SDC curriculum and LP also aim to teach students how to integrate disciplinary concepts. So, in addition, we defined three progress variables that embody the knowledge and skills at the intersections of the disciplines (a similar approach was used by Rowland, 1999). The three intersections are Systems+Design, Design+Computing, and Computing+Systems (the “+” indicates deep integration, not simply adding one discipline onto the other), and the associated progress variables are iteration, interactivity, and modelling, respectively. The resulting structure allows us to map student progress across six interconnected axes: the three “core” fields of systems, design, and computing, plus the three intersections that connect these fields.
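For concreteness, a learner under this scheme can be represented as a vector of six scores, one per progress variable. The minimal sketch below is ours, not part of the course materials: the axis ordering and the 0–10 scale are illustrative assumptions, and the same ordering is reused by the geometric sketches in the next section.

```python
# Axis order and the 0-10 scale are illustrative assumptions;
# the curriculum itself does not prescribe either.
SDC_AXES = [
    "system mapping",   # systems
    "visualization",    # design
    "algorithms",       # computing
    "iteration",        # Systems+Design
    "interactivity",    # Design+Computing
    "modelling",        # Computing+Systems
]

# A hypothetical learner's profile, in SDC_AXES order.
example_scores = [7, 5, 8, 6, 4, 7]
```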
Measuring Sophistication
The measurement model we have developed is both a basis for evaluating progress in student understanding and a way of communicating that progress back to students. Our approach uses a multidimensional variation of a progress map employing spider graphs (also called radar charts). The result of connecting individual numerical values on a radar chart is a polygon whose shape gives a holistic picture of the learner at a glance (Figure 1). However, another important characteristic of these charts, which has been overlooked in the literature, stems from the fact that the polygon has emergent properties (area, centre of mass, eccentricity) that are readily apparent in the visualization but difficult to extract from the raw data, despite being straightforward calculations. These emergent attributes of the polygon provide quantitative metrics for measuring sophistication in SDC.
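To make the geometry explicit: plotting the six scores on equally spaced axes and joining them produces the polygon described above. The following is a minimal sketch of that construction, assuming equal angular spacing around the chart origin and the score vector introduced in the previous section.

```python
import math

def polygon_vertices(scores):
    """Place each score at an equally spaced angle around the chart
    origin and return the (x, y) vertices of the resulting polygon."""
    n = len(scores)
    return [
        (r * math.cos(2 * math.pi * i / n),
         r * math.sin(2 * math.pi * i / n))
        for i, r in enumerate(scores)
    ]
```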
The area of the polygon in Figure 1 denotes the overall level of the learner’s sophistication, providing a single collective variable that measures student learning along all SDC dimensions; in this system, it replaces a course grade or GPA. This value can be used to verify quantitatively that learning is taking place, or it can be combined with the additional variables to reveal a wealth of insights, described below, that are typically hidden by traditional grading systems.
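Given the vertices from the sketch above, the area is a straightforward computation via the shoelace formula; for equally spaced axes it reduces to ½·sin(2π/n)·Σᵢ rᵢrᵢ₊₁ (indices cyclic). A sketch, reusing polygon_vertices from above:

```python
def polygon_area(vertices):
    """Shoelace formula for the area of the radar-chart polygon."""
    n = len(vertices)
    twice_area = sum(
        vertices[i][0] * vertices[(i + 1) % n][1]
        - vertices[(i + 1) % n][0] * vertices[i][1]
        for i in range(n)
    )
    return abs(twice_area) / 2.0

# e.g. polygon_area(polygon_vertices(example_scores))
```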
Another emergent property of these graphs, the centre of mass (or centroid) of the polygon, shows where a student’s focus and core competency lie. Calculating the centroid and using it alongside the origin (centre) of the radar chart as the foci of an ellipse (see Figure 1) allows us to calculate a third value, the eccentricity. Eccentricity provides a measure of the depth of a student’s specialization (a “well-rounded” student with equal skill in all areas will produce a circle, with an eccentricity of 0). Students may choose to become more eccentric by specializing in one discipline, or more circular by balancing their skills across disciplines. The point is to provide clear and digestible information so that students can make the choice that best suits their goals, while maintaining the ability to compare students and cohorts. Students of equal sophistication (area) can have very different shapes, eccentricities, and centres of mass.
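The centroid follows from the standard polygon-centroid formula. The eccentricity, however, is not fully determined by two foci alone (an ellipse also needs a semi-major axis), so the sketch below ASSUMES a semi-major axis equal to the learner’s mean score, a convention we chose because it yields an eccentricity of 0 for a perfectly balanced profile; the course may use a different normalization. It reuses polygon_vertices and the math import from the sketches above.

```python
def polygon_centroid(vertices):
    """Centre of mass of a (non-degenerate) polygon,
    via the standard centroid formula."""
    n = len(vertices)
    a2 = cx = cy = 0.0   # a2 accumulates twice the signed area
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    return cx / (3.0 * a2), cy / (3.0 * a2)

def eccentricity(scores):
    """e = c/a for an ellipse whose foci are the chart origin and the
    polygon centroid. ASSUMPTION: semi-major axis a = mean score, so a
    perfectly balanced learner gets e = 0."""
    gx, gy = polygon_centroid(polygon_vertices(scores))
    c = math.hypot(gx, gy) / 2.0   # half the focus-to-focus distance
    a = sum(scores) / len(scores)  # assumed semi-major axis
    return c / a if a else 0.0
```

Together with the area, these give each learner an (area, centroid, eccentricity) triple that can be tracked across semesters or compared across a cohort.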
As we deployed the above methodology and measurement model in parallel with our traditional grading scheme, we noted many benefits and some drawbacks. Benefits include tailoring a curriculum, comparing across different subject matter, measuring integrated learning, and overall flexibility; drawbacks include the difficulty of standardizing across courses, the time required for mentoring, and students’ desire to “optimize.” These will be discussed in depth in our presentation.
References
- AAAS. 2009. Benchmarks for Science Literacy. Oxford University Press, USA.
- Anthony P. Carnevale. 2013. 21st Century Competencies for College and Career Readiness. Career Development Quarterly: 5–9.
- NGSS Lead States. 2013. Next Generation Science Standards: For States, By States. Washington, DC.
- Partnership for 21st Century Skills. 2009. 21st Century Skills, Education & Competitiveness: A Resource and Policy Guide. Retrieved from: http://www.p21.org/documents/P21_Framework_Definitions.pdf
- Tim Brown. 2008. Design thinking. Harvard Business Review 86: 84–92, 141.
- Jeannette M. Wing. 2006. Computational thinking. Communications of the ACM 49, 3: 33. http://doi.org/10.1145/1118178.1118215
- Richard Buchanan. 1992. Wicked Problems in Design Thinking. Design Issues 8: 5–21. http://doi.org/10.2307/1511637
- Uri Wilensky and Mitchel Resnick. 1999. Thinking in levels: A dynamic systems approach to making sense of the world. Journal of Science Education and Technology 8, 1: 3–19. http://doi.org/10.1023/A:1009421303064
- Alicia C. Alonzo. 2011. Learning Progressions That Support Formative Assessment Practices. Measurement: Interdisciplinary Research & Perspectives 9, 1.
- Paul Black and Shirley Simon. 1992. Progression in learning science. Research in Science Education 22, 1: 45–54.
- Aaron Rogat, Tom Corcoran, and Frederic A. Mosher. 2010. Learning Progressions in Science. Harvard Education Letter 26, 4: 1–3. http://doi.org/10.1007/978-94-6091-824-7
- Karin K. Hess. 2012. Learning Progressions in K-8 Classrooms: How Progress Maps Can Influence Classroom Practice and Perceptions and Help Teachers Make More Informed Instructional Decisions in Support of Struggling Learners (NCEO Synthesis Report).
- Karin Hess. 2008. Developing and using learning progressions as a schema for measuring progress. Retrieved November 2011 from http://www.nciea.org/publications/CCSSO2_KH08.pdf
- Mark Wilson and Karen Draney. 2004. Some links between large-scale and classroom assessments: The case of the BEAR Assessment System. Towards coherence between classroom assessment and accountability: 132–152.
- Gordon Rowland. 1999. A Tripartite Seed: The Future Creating Capacity of Designing, Learning, and Systems. Hampton Press, Inc, Cresskill, NJ.