Publications (10 of 15)
Toll, D. & Wingkvist, A. (2018). Visualizing programming session timelines. In: Proceedings of the 11th International Symposium on Visual Information Communication and Interaction. Paper presented at 11th International Symposium on Visual Information Communication and Interaction, VINCI 2018, 13-15 August 2018 (pp. 106-107). ACM Publications
Visualizing programming session timelines
2018 (English). In: Proceedings of the 11th International Symposium on Visual Information Communication and Interaction, ACM Publications, 2018, p. 106-107. Conference paper, Published paper (Refereed)
Abstract [en]

Learning programming with tutor tools has grown in popularity. These tools present programming assignments and provide feedback in the form of test cases and compilation errors. Our timeline visualization of data from one such tool lets us tell a story about which files were accessed and for how long, in what order files were edited, grown, or shrunk, what errors the student ran into, and how those errors were addressed. This can be done without the need to read and replay the entire programming session. In sum, the tool has been used to visualize logs from students who tried to solve programming assignments, and we find interesting stories that can help us improve how we approach new assignments.

Place, publisher, year, edition, pages
ACM Publications, 2018
Keywords
Software visualization, Time series data, Errors, Visual communication, Visualization, Learning programming, Programming assignments, Test case, Time-series data, Timeline visualizations, Data visualization
National Category
Software Engineering
Research subject
Computer Science, Software Technology
Identifiers
urn:nbn:se:lnu:diva-83308 (URN), 10.1145/3231622.3232506 (DOI), 2-s2.0-85055502530 (Scopus ID), 9781450365017 (ISBN)
Conference
11th International Symposium on Visual Information Communication and Interaction, VINCI 2018, 13-15 August 2018
Available from: 2019-05-24. Created: 2019-05-24. Last updated: 2019-05-24. Bibliographically approved
Toll, D. & Wingkvist, A. (2017). How Tool Support and Peer Scoring Improved Our Students' Attitudes Toward Peer Reviews. In: Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education. Paper presented at 2017 ACM Conference on Innovation and Technology in Computer Science Education, Bologna, Italy, 3-6 July 2017 (pp. 311-316). New York, NY, USA: ACM Publications
How Tool Support and Peer Scoring Improved Our Students' Attitudes Toward Peer Reviews
2017 (English). In: Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education, New York, NY, USA: ACM Publications, 2017, p. 311-316. Conference paper, Published paper (Refereed)
Abstract [en]

We wanted to introduce peer reviews for the final report in a course on Software Testing. However, the students had experienced issues with peer reviews in a previous course, which made this a challenge. To get a better understanding of the situation, we distributed a pre-questionnaire to the students; 48 of the 83 students provided their expectations on peer reviews. To deal with some of the perceived issues, we developed a peer review tool that introduces anonymity, grading of reviews, and teacher interventions, and lets students score and comment on the reviews they receive. In total, 67 reports were submitted by 83 students and 325 reviews were completed. The post-questionnaire was answered by 48 students (not necessarily the same respondents as for the pre-questionnaire, as both were collected anonymously). While 27 of the students expected incorrect feedback, only 13 students reported having received incorrect feedback in the post-questionnaire. The students found the feedback from their peers more valuable (+15%) than expected, and 88% of the students reported that they learned from doing peer reviews. Overall, we find that the students' attitudes toward peer reviews have improved.

Place, publisher, year, edition, pages
New York, NY, USA: ACM Publications, 2017
Keywords
Peer Review, Peer Grading, Software Testing, Courseware
National Category
Computer Sciences
Research subject
Computer and Information Sciences Computer Science, Computer Science
Identifiers
urn:nbn:se:lnu:diva-68419 (URN), 10.1145/3059009.3059059 (DOI), 2-s2.0-85029489027 (Scopus ID), 978-1-4503-4704-4 (ISBN)
Conference
2017 ACM Conference on Innovation and Technology in Computer Science Education, Bologna, Italy, 3-6 July, 2017
Available from: 2017-10-23. Created: 2017-10-23. Last updated: 2019-08-29. Bibliographically approved
Börstler, J., Störrle, H., Toll, D., van Assema, J., Duran, R., Hooshangi, S., . . . MacKellar, B. (2017). "I know it when I see it" - Perceptions of Code Quality ITiCSE'17 Working Group Report. In: ITiCSE-WGR'17: Proceedings of the 2017 ITiCSE Conference Working Group Reports. Paper presented at ITiCSE Conference on Working Group Reports (ITiCSE-WGR), Bologna, Italy, 3-5 July 2017 (pp. 70-85). ACM Publications
"I know it when I see it" - Perceptions of Code Quality ITiCSE'17 Working Group Report
2017 (English). In: ITiCSE-WGR'17: Proceedings of the 2017 ITiCSE Conference Working Group Reports, ACM Publications, 2017, p. 70-85. Conference paper, Published paper (Refereed)
Abstract [en]

Context. Code quality is a key issue in software development. The ability to develop high quality software is therefore a key learning goal of computing programs. However, there are no universally accepted measures to assess the quality of code and current standards are considered weak. Furthermore, there are many facets to code quality. Defining and explaining the concept of code quality is therefore a challenge faced by many educators. Objectives. In this working group, we investigated code quality as perceived by students, educators, and professional developers, in particular, the differences in their views of code quality and which quality aspects they consider as more or less important. Furthermore, we investigated their sources for information about code quality and its assessment. Methods. We interviewed 34 students, educators and professional developers regarding their perceptions of code quality. For the interviews they brought along code from their own experience to discuss and exemplify code quality. Results. There was no common definition of code quality among or within these groups. Quality was mostly described in terms of indicators that could measure an aspect of code quality. Among these indicators, readability was named most frequently by all groups. The groups showed significant differences in the sources they use for learning about code quality with education ranked lowest in all groups. Conclusions. Code quality should be discussed more thoroughly in educational programs.

Place, publisher, year, edition, pages
ACM Publications, 2017
Keywords
Code quality, programming
National Category
Computer and Information Sciences
Research subject
Computer and Information Sciences Computer Science, Computer Science
Identifiers
urn:nbn:se:lnu:diva-83428 (URN), 10.1145/3174781.3174785 (DOI), 000455771300004 (), 2-s2.0-85046906485 (Scopus ID)
Conference
ITiCSE Conference on Working Group Reports (ITiCSE-WGR), Bologna, Italy, 3-5 July 2017
Available from: 2019-05-27. Created: 2019-05-27. Last updated: 2019-08-29. Bibliographically approved
Börstler, J., Störrle, H., Toll, D., van Assema, J., Duran, R., Hooshangi, S., . . . MacKellar, B. (2017). "I know it when I see it": perceptions of code quality. In: ITiCSE '17: Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education. Paper presented at ITiCSE '17: Innovation and Technology in Computer Science Education, Bologna, Italy, July 3-5, 2017 (pp. 389-389). New York, NY, USA: Association for Computing Machinery (ACM)
"I know it when I see it": perceptions of code quality
2017 (English). In: ITiCSE '17: Proceedings of the 2017 ACM Conference on Innovation and Technology in Computer Science Education, New York, NY, USA: Association for Computing Machinery (ACM), 2017, p. 389-389. Conference paper, Published paper (Refereed)
Abstract [en]

Code quality is a key issue in software development. The ability to develop software of high quality is therefore a key learning goal of computing programs. However, there are no universally accepted measures to assess the quality of code, and current standards are considered weak. Furthermore, there are many facets to code quality. Defining and explaining the concept of code quality is therefore a challenge faced by many educators. In this working group, we investigate the perceptions of code quality of students, teachers, and professional programmers. In particular, we are interested in the differences in views of code quality between students, educators, and professional programmers, and which quality aspects they consider more or less important. Furthermore, we are interested in which sources of information on code quality and its assessment are used by these groups. Eventually, this will help us develop resources that can be used to broaden students' views on software quality.

Place, publisher, year, edition, pages
New York, NY, USA: Association for Computing Machinery (ACM), 2017
Keywords
Code quality, programming
National Category
Computer Sciences
Research subject
Computer Science, Software Technology
Identifiers
urn:nbn:se:lnu:diva-70486 (URN), 10.1145/3059009.3081328 (DOI), 978-1-4503-4704-4 (ISBN)
Conference
ITiCSE '17: Innovation and Technology in Computer Science Education, Bologna, Italy, July 3-5, 2017
Available from: 2018-02-05. Created: 2018-02-05. Last updated: 2018-04-11. Bibliographically approved
Olsson, T., Toll, D., Ericsson, M. & Wingkvist, A. (2016). Evaluation of an architectural conformance checking software service. In: ACM Proceedings of the 10th European Conference on Software Architecture Workshops (ECSA-W). Paper presented at 10th European Conference on Software Architecture, November 28 - December 02, 2016, Copenhagen, Denmark. ACM Press, Article ID 15.
Evaluation of an architectural conformance checking software service
2016 (English). In: ACM Proceedings of the 10th European Conference on Software Architecture Workshops (ECSA-W), ACM Press, 2016, article id 15. Conference paper, Published paper (Refereed)
Abstract [en]

Static architectural conformance checking can be used to find architectural violations, cases where the implementation does not adhere to the architecture, and prevent architectural erosion. We implement a software service for automated conformance checking and investigate the effect this has on the number of architectural violations in software projects. The service is implemented using our heuristic-based approach to static architecture conformance checking of the Model-View-Controller pattern. The service is integrated in the source code management system of each project, so a report is generated every time the source code is modified. The service was evaluated in a field experiment that consisted of eight student projects. We found that the four projects that used the service produced significantly fewer violations compared to those that did not.

Place, publisher, year, edition, pages
ACM Press, 2016
Series
ACM International Conference Proceeding Series
Keywords
Static Architectural Conformance Checking, Model-View-Controller, Software as a Service, Field Experiment
National Category
Software Engineering
Research subject
Computer Science, Software Technology
Identifiers
urn:nbn:se:lnu:diva-60472 (URN), 10.1145/2993412.3003391 (DOI), 000406156800015 (), 978-1-4503-4781-5 (ISBN)
Conference
10th European Conference on Software Architecture, November 28 - December 02, 2016, Copenhagen, Denmark
Available from: 2017-02-03. Created: 2017-02-03. Last updated: 2019-03-06. Bibliographically approved
Toll, D., Olsson, T., Ericsson, M. & Wingkvist, A. (2016). Fine-Grained Recording of Student Programming Sessions to Improve Teaching and Time Estimations. Paper presented at 20th Annual Conference on Innovation and Technology in Computer Science Education, 6-8 July 2015, Vilnius, Lithuania. International Journal of Engineering Education, 32(3), 1069-1077
Fine-Grained Recording of Student Programming Sessions to Improve Teaching and Time Estimations
2016 (English). In: International Journal of Engineering Education, ISSN 0949-149X, vol. 32, no. 3, p. 1069-1077. Article in journal (Refereed), Published
Abstract [en]

It is not possible to directly observe how students work in an online programming course. This makes it harder for teachers to help struggling students. By using an online programming environment, we have the opportunity to record what the students actually do to solve an assignment. These recordings can be analyzed to provide teachers with valuable information. We developed such an online programming tool with fine-grained event logging and used it to observe how our students solve problems. Our tool provides descriptive statistics and accurate replays of a student's programming sessions, including mouse movements. We used the tool in a course and collected 1028 detailed recordings. In this article, we compare fine-grained logging to existing coarse-grained logging solutions for estimating assignment-solving time. We find that time aggregations are improved by including time for active reading and navigation, both enabled by the increased granularity. We also divide the time users spent into editing (on average 14.8%), active use (on average 37.8%), and passive use (on average 29.0%), and estimate the time used for breaks (on average 18.2%). Assignment-solving times for students who pass assignments early correlate with those of students who pass later, but in one case the times differ significantly. Our tool can help improve computer engineering education by providing insights into how students solve programming assignments, thus enabling teachers to target their teaching and/or improve instructions and assignments.

Place, publisher, year, edition, pages
Chicago, IL: Tempus Publications, 2016
Keywords
computer science education, learning analytics, educational data mining, computer engineering education
National Category
Computer and Information Sciences
Research subject
Computer and Information Sciences Computer Science, Computer Science
Identifiers
urn:nbn:se:lnu:diva-55062 (URN)000378700600003 ()2-s2.0-84973569061 (Scopus ID)
Conference
20th Annual Conference on Innovation and Technology in Computer Science Education, 6-8 July 2015, Vilnius, Lithuania
Available from: 2016-07-26. Created: 2016-07-22. Last updated: 2019-08-13. Bibliographically approved
Toll, D. (2016). Measuring Programming Assignment Effort. (Licentiate dissertation). Växjö: Faculty of Technology, Linnaeus University
Measuring Programming Assignment Effort
2016 (English). Licentiate thesis, monograph (Other academic)
Abstract [en]

Students often voice that programming assignments are hard and that they spend a lot of time solving them. Is this true? Are we giving them assignments that are too hard, and how much time do they spend, and on what? This is what we want to gain insight into. We constructed a tool that records programming sessions with finer granularity than existing solutions. The tool has recorded 2643 programming sessions from students. Using that data, we found that students spend only 15% of their time writing code, and that on average 40% of their programming effort is spent reading and navigating. We also estimate the time spent outside of the tool to be almost 20%. The increased detail in the recordings can be used to measure the effect of source code comments, and we found that both helpful and redundant comments increase reading time but do not reduce the students' writing effort. Finally, we used the tool to examine the effects of an improved programming assignment and found that the total effort was not reduced.

Place, publisher, year, edition, pages
Växjö: Faculty of Technology, Linnaeus University, 2016. p. 87
Series
Reports: Linnaeus University, Faculty of Technology; 40
National Category
Computer Sciences
Research subject
Computer and Information Sciences Computer Science, Computer Science
Identifiers
urn:nbn:se:lnu:diva-50118 (URN)978-91-87925-98-6 (ISBN)
Presentation
2016-01-22, Ny104K, Kalmar, 13:00 (English)
Available from: 2016-03-14. Created: 2016-03-02. Last updated: 2018-01-10. Bibliographically approved
Toll, D., Olsson, T., Ericsson, M. & Wingkvist, A. (2015). Detailed recordings of student programming sessions. In: ITiCSE '15: Proceedings of the 2015 ACM Conference on Innovation and Technology in Computer Science Education. Paper presented at 20th ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE 2015, 4-8 July 2015, Vilnius (pp. 328-328). ACM Press
Detailed recordings of student programming sessions
2015 (English). In: ITiCSE '15: Proceedings of the 2015 ACM Conference on Innovation and Technology in Computer Science Education, ACM Press, 2015, p. 328-328. Conference paper, Published paper (Refereed)
Abstract [en]

Observation is important when we teach programming. It can help identify students who struggle, concepts that are not clearly presented during lectures, poor assignments, etc. However, as development tools become more widely available or courses move off-campus and online, we lose our ability to naturally observe students. Online programming environments provide an opportunity to record how students solve assignments, and the recorded data allows for in-depth analysis. For example, file activities, mouse movements, text selections, and text caret movements provide a lot of information on when a programmer collects information and what task is currently being worked on. We developed CSQUIZ to allow us to observe students in our online courses through data analysis. Based on our experience with the tool in a course, we find recorded sessions a sufficient replacement for natural observations.

Place, publisher, year, edition, pages
ACM Press, 2015
National Category
Computer and Information Sciences
Research subject
Computer and Information Sciences Computer Science
Identifiers
urn:nbn:se:lnu:diva-55003 (URN), 10.1145/2729094.2754859 (DOI), 2-s2.0-84952050834 (Scopus ID), 9781450334402 (ISBN)
Conference
20th ACM Conference on Innovation and Technology in Computer Science Education, ITiCSE 2015, 4-8 July 2015, Vilnius
Available from: 2016-07-22. Created: 2016-07-22. Last updated: 2018-01-10. Bibliographically approved
Ihantola, P., Vihavainen, A., Ahadi, A., Butler, M., Börstler, J., Edwards, S. H., . . . Toll, D. (2015). Educational Data Mining and Learning Analytics in Programming: Literature Review and Case Studies. In: N. Ragonis & P. Kinnunen (Eds.), Proceedings of the 2015 ITiCSE on Working Group Reports. Paper presented at ITiCSE: Innovation and Technology in Computer Science Education, Vilnius, Lithuania, 4-8 July 2015 (pp. 41-63). New York: ACM Press
Educational Data Mining and Learning Analytics in Programming: Literature Review and Case Studies
2015 (English). In: Proceedings of the 2015 ITiCSE on Working Group Reports / [ed] N. Ragonis & P. Kinnunen, New York: ACM Press, 2015, p. 41-63. Conference paper, Published paper (Refereed)
Abstract [en]

Educational data mining and learning analytics promise better understanding of student behavior and knowledge, as well as new information on the tacit factors that contribute to student actions. This knowledge can be used to inform decisions related to course and tool design and pedagogy, and to further engage students and guide those at risk of failure. This working group report provides an overview of the body of knowledge regarding the use of educational data mining and learning analytics focused on the teaching and learning of programming. In a literature survey on mining students' programming processes for 2005–2015, we observe a significant increase in work related to the field. However, the majority of the studies focus on simplistic metric analysis and are conducted within a single institution and a single course. This indicates the existence of further avenues of research and a critical need for validation and replication to better understand the various contributing factors and the reasons why certain results occur. We introduce a novel taxonomy to analyse replicating studies and discuss the importance of replicating and reproducing previous work. We describe the state of the art in collecting and sharing programming data. To better understand the challenges involved in replicating or reproducing existing studies, we report our experiences from three case studies using programming data. Finally, we present a discussion of future directions for the education and research community.

Place, publisher, year, edition, pages
New York: ACM Press, 2015
Keywords
educational data mining; learning analytics; programming; replication; literature review
National Category
Computer Sciences
Research subject
Computer and Information Sciences Computer Science
Identifiers
urn:nbn:se:lnu:diva-50120 (URN), 10.1145/2858796.2858798 (DOI), 000389809400002 (), 2-s2.0-84964789171 (Scopus ID), 978-1-4503-4146-2 (ISBN)
Conference
ITiCSE: Innovation and Technology in Computer Science Education, Vilnius, Lithuania, 4-8 July 2015
Available from: 2016-03-02. Created: 2016-03-02. Last updated: 2018-01-10. Bibliographically approved
Olsson, T., Toll, D., Wingkvist, A. & Ericsson, M. (2015). Evolution and Evaluation of the Model-View-Controller Architecture in Games. In: Proceedings of the Fourth International Workshop on Games and Software Engineering. Paper presented at 2015 IEEE/ACM 4th International Workshop on Games and Software Engineering (GAS), Florence, Italy, 18 May 2015 (pp. 8-14). USA: IEEE Press
Evolution and Evaluation of the Model-View-Controller Architecture in Games
2015 (English). In: Proceedings of the Fourth International Workshop on Games and Software Engineering, USA: IEEE Press, 2015, p. 8-14. Conference paper, Published paper (Refereed)
Abstract [en]

In game software it is important to separate gameplay code from rendering code to ease transitions to new technologies or different platforms. The architectural pattern Model-View-Controller (MVC) is commonly used to achieve such separation. We investigate how the MVC architectural pattern is implemented in five game projects from a small development studio. We define a metrics-based quality model to assess software quality goals such as portability and rendering-engine independence, and perform an architectural analysis. The analysis reveals three different evolutions of the pattern. We also assess the quality and find that (1) the evolutions of the architecture differ in quality, and (2) an architectural refactoring to a newer version of the architecture increases the software quality.

Place, publisher, year, edition, pages
USA: IEEE Press, 2015
Keywords
MVC, Model-View-Controller, Computer Game, Game Architecture
National Category
Software Engineering
Research subject
Computer Science, Software Technology
Identifiers
urn:nbn:se:lnu:diva-47098 (URN), 10.1109/GAS.2015.10 (DOI), 000380613400003 (), 2-s2.0-85009188274 (Scopus ID)
Conference
2015 IEEE/ACM 4th International Workshop on Games and Software Engineering (GAS), Florence, Italy, 18 May 2015
Available from: 2015-11-09. Created: 2015-11-09. Last updated: 2019-06-25. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-5335-5196
