Rubric for the qualitative assessment of student-designed Snap! projects
Keywords:
rubric, qualitative assessment, learning outcomes, teaching materials, coding, programming languages, Snap!, computer science
Abstract
The objective evaluation and assessment of individual student-designed projects is challenging, and appropriate tools are currently lacking and have yet to be developed. Block-based programming languages such as Snap! are often used for teaching programming basics and for the subsequent development of student-designed programming projects. In this study, a rating rubric for Snap! projects was developed qualitatively to investigate how novices’ programming skills can be evaluated and assessed in a criterion-guided manner. For this purpose, a baseline dataset of 36 student projects, created over three school years following a programming course for novices, was evaluated, and an assessment rubric was designed on this basis. A team of experts reviewed and evaluated the rubric, after which it was improved and expanded. Finally, a test dataset of ten further Snap! projects of varying complexity was presented to prospective teachers for comparative assessment with and without the resulting rubric. The results show that the rating rubric significantly improves the comparability of assessments and yields a clear differentiation of the test projects by level. Furthermore, the rubric enables a more precise evaluation of the results achieved in the individual rubric categories.
Copyright (c) 2026 Nicole Marmé, Jens-Peter Knemeyer, Alexandra Svedkijs

This work is licensed under a Creative Commons Attribution 4.0 International License.