Doom-monitoring Students’ Online Interactions and Content Creation in Schools

You can listen to this podcast directly on our website or on the following platforms: SoundCloud, iTunes, Spotify, CastBox, Deezer, Google Podcasts, Podcast Addict, JioSaavn, Listen Notes, Radio Public, and Radio.com (not available in the EU).


In this Papers Podcast, Professor Andra Siibak, Professor of Media Studies at the Institute of Social Studies at the University of Tartu in Estonia, and Kristjan Kikerpill, Lecturer in Information Law and Digital Sociology at the same institution, discuss their co-authored Child & Adolescent Mental Health (CAMH) journal Special Issue paper, ‘Schools engaged in doom-monitoring students’ online interactions and content creation: an analysis of dominant media discourses’ (doi.org/10.1111/camh.12621). There is an overview of the paper, methodology, key findings, and implications for practice.

Discussion points include:

  • The types of activities that are monitored by schools using student activity monitoring software.
  • The global nature of the monitoring of students’ online interactions and content creation in schools.
  • What the phrase ‘doom-monitoring’ means and how it came about.
  • The implications of the technology’s inaccuracy for the students being monitored.
  • The impact of this kind of monitoring on marginalised children.
  • The differing opinions of teachers, parents, and students regarding the use of online monitoring.
  • How this type of technology might be improved to better support young people.

In this series, we speak to authors of papers published in one of ACAMH’s three journals. These are The Journal of Child Psychology and Psychiatry (JCPP); The Child and Adolescent Mental Health (CAMH) journal; and JCPP Advances.

Subscribe to ACAMH mental health podcasts on your preferred streaming platform. Just search for ACAMH on: SoundCloud, Spotify, CastBox, Deezer, Google Podcasts, Podcast Addict, JioSaavn, Listen Notes, Radio Public, and Radio.com (not available in the EU). Plus we are on Apple Podcasts; visit the link or click on the icon, or scan the QR code.


Kristjan Kikerpill

I am a Lecturer in Information Law and Digital Sociology. In my everyday teaching assignments, I teach freedom of expression and privacy law to upcoming journalists and communication students. The other side of my research activities, often carried out together with Andra, is multifaceted: I study online scams, but also the datafication of education, in particular issues with surveillance and the use of artificial intelligence in carrying out surveillance.

Professor Andra Siibak

Andra Siibak is a Professor of Media Studies and Program Director of the Media and Communication doctoral program at the Institute of Social Studies, University of Tartu, Estonia. Her main research interests concern the opportunities and risks surrounding internet use, the datafication of childhood, new media audiences, and privacy. Together with Giovanna Mascheroni, she co-authored the monograph “Datafied Childhoods: Data Practices and Imaginaries in Children’s Lives” (2021), published by Peter Lang. Andra is a member of the Estonian Young Academy of Sciences and of the Film, Media and Visual Studies section of Academia Europaea. (Bio and image from Centre of Excellence for the Digital Child)

Other resources

Featured Paper: ‘Schools engaged in doom-monitoring students’ online interactions and content creation: an analysis of dominant media discourses’ by Kristjan Kikerpill and Andra Siibak.

Transcript

[00:00:00.340] Jo Carlowe: [Music] Hello, welcome to the Papers Podcast series for The Association for Child and Adolescent Mental Health, or ACAMH for short.  I’m Jo Carlowe, a Freelance Journalist with a specialism in psychology.  In this series we speak to authors of published papers in one of ACAMH’s three journals.  These are The Journal of Child Psychology and Psychiatry, commonly known as JCPP, The Child and Adolescent Mental Health, known as CAMH, and JCPP Advances.

Today I’m interviewing Andra Siibak, Professor of Media Studies at the Institute of Social Studies at the University of Tartu in Estonia and Kristjan Kikerpill, Lecturer in Information Law and Digital Sociology, of the same institution.  Kristjan and Andra co-authored the Paper, “Schools Engaged in Doom-Monitoring Students’ Online Interactions and Content Creation: An Analysis of Dominant Media Discourses,” recently published in CAMH.

If you’re a fan of our Papers Podcast series, please subscribe on your preferred streaming platform, let us know how we did, with a rating or review, and do share with friends and colleagues.  Kristjan and Andra, thanks for joining me.  Can you start with an introduction about who you are and what you do?

[00:01:19.700] Professor Andra Siibak: Yeah, hi, and thank you for having us.  So, yeah, I work at the University of Tartu as a Professor of Media Studies, but my specific area of research interest is currently related to datafication, in particular the datafication of childhood and, during the more recent years, also the datafication of education.  So, this is also concretely related to the paper that was published in that special issue.

[00:01:47.229] Jo Carlowe: Thank you, and Kristjan?

[00:01:48.759] Kristjan Kikerpill: So, I am a Lecturer in Information Law and Digital Sociology.  That’s a double-edged sword, but in my everyday teaching assignments, I teach freedom of expression and privacy law to upcoming journalists and communication students.  The other side of my research activities, often carried out together with Andra, is multifaceted, because I study online scams, but now, also, the datafication of education and, in particular, issues with surveillance and the use of artificial intelligence in carrying out surveillance.

[00:02:31.970] Jo Carlowe: Very interesting areas.  In your paper you say, “Thousands of schools have hired companies to provide them with student activity monitoring software.”  Can you describe the types of activities that get monitored?  And I’m wondering if this is something that just happens in the United States, or is this something that happens everywhere?

[00:02:52.360] Professor Andra Siibak: Oh, we really see the uptake of these kinds of technologies, and this particularly started during the COVID pandemic, obviously, but there has been a growing public concern about the safety and security of students in schools.  Therefore, in particular in the US, but also in other countries around the world, we see how various schools and school districts started to hire private companies to monitor students’ online interactions, especially the content they create and, especially, their social media interactions.  And, basically, of course, it differs slightly between vendors what kinds of affordances the software has.  But in the majority of cases, such software is used for monitoring students’ online searches, the content of their emails or some other documents, and also their social media interactions.  Oftentimes the software is used on a school-issued device, and this happens especially in some school districts within the US, but on some occasions also on the privately owned devices of the students.  And some vendors provide schools with a range of monitoring and control functionalities, like blocking inappropriate material, tracking or blocking non-school-related applications, viewing students’ screens in real time, closing some web browser tabs, or even taking direct control of their input functionality.

[00:04:27.949] Jo Carlowe: We’re going to look at your paper in more detail, but before we do, I’m wondering, what are your personal views about this form of monitoring of student online activity?

[00:04:39.590] Kristjan Kikerpill: I can give you the exclusive on how the phrase ‘doom-monitoring’ came about.  So, when we started reading these news articles, I didn’t know there were so many and how far-reaching this technology is.  And an image appeared in my mind of, if you can imagine, a Police Officer or a member of the SWAT Team, in full tactical gear, standing in the hallway of a primary school, telling first graders not to run so fast.  So, what this really represents is the disparity of the tools that we bring in to try and solve certain problems, or at least to give off the impression of solving these problems.  And that should raise numerous questions about the appropriate checks and balances, really: whether or not this is really necessary, how effective it is, and what the consequences are.  What is the collateral damage of trying to use these types of, let’s say, ‘solutions’?

[00:05:51.900] Jo Carlowe: Andra, what about you, what are your personal views?

[00:05:55.750] Professor Andra Siibak: Yeah, well, throughout my academic career I have always studied children and their use of online technologies, which has made me especially concerned about the impacts such technologies can have on young people, on the one hand, but on the other hand, I am also concerned about potential moral panics.  And as I have also been studying various topics related to privacy, I’m also concerned about students’ rights and young people’s rights, in particular the right to privacy, but also the right to free expression, which I currently see these kinds of technologies starting to have a concerning impact on.  So, yeah, personally, I’m also more sceptical and concerned.

[00:06:42.440] Jo Carlowe: Okay, well, let’s look at the research.  Let’s turn to your paper.  This was “Schools Engaged in Doom-Monitoring Students’ Online Interactions and Content Creation: An Analysis of Dominant Media Discourses,” recently published in CAMH.  Can you give us a brief overview of the paper to set the scene?

[00:07:00.610] Kristjan Kikerpill: We were engaged in numerous similar topics, but this one emerged specifically, the use of these monitoring technologies, and so we decided to look further into it.  The picture that emerged is that while we do have news stories about the issue and how it’s playing out in different societies, there was not as much research available as you would expect, because, you know, with something that interests you as a topic, you would hope there is lots of previous research that can help you find your way and then find the niche that needs more work.  But here, it was more of an open field for us, so we took the materials that we had, and could find, and then we tried to contribute in our own way through studying media texts and taking a critical approach to media texts.

So, I would say that’s, sort of, the brief overview of the paper structurally, and what we found out was that there are certain ways of speaking about these topics.  And I would say, like, for me, the most surprising thing was just how little students, children and youth were, themselves, involved in these discussions, because they were mostly silent.  And it was mostly these, again, ‘expert adults’ discussing the topics, and I thought that was really interesting, because, off topic, I could say that if something similar were to, let’s say, be imposed on us, as employees, I don’t think we would, sort of, take it lying down, saying that this is completely normal.  But it seems that when children and youth are concerned, there’s this question: are we engaging with this topic in a manner that could be described as a subject-type interaction, where we fully acknowledge children’s rights in the different approaches that we take?  Or do we have an object-type interaction, where it’s we, the adults, or the expert adults, who are deciding what’s best for the children, and we find that, well, why would we need to include them?  We know best.

[00:09:23.940] Jo Carlowe: I’m going to dig in more into the detail a bit later.  I just want to take a step back.  Can you tell us a little about the methodology used?

[00:09:31.950] Kristjan Kikerpill: So, we opted to use critical discursive psychology, and the approach takes as its basis, sort of, this understanding that people are both the products and users of discourse.  So, we are impacted by the way something is thought of and discussed, and we can, therefore, reproduce this existing discourse, but we can also introduce changes and shifts, which become part of the discourse over time.  So, there are different positions, subject positions, available in discourse: whether we agree with a topic or the way it is discussed, or whether we resist it.

And so, studying language on this level, on this microlevel, and then connecting it to these wider social and cultural discourses, was just a very interesting way of approaching the task that we had ahead of us.  And so, we took these direct quotes in the news articles and connected them to the wider discourse, to how these topics are discussed, whether it’s techno-optimism all the way, where someone is promoting a technology, or whether it is really, really critical, on the other hand, and then finding how people actually use these so-called interpretative repertoires, which are relatively coherent ways of talking about something, with a more limited terminology in use when addressing specific topics.  And so, it was very interesting for us to see how these specific quotes in news articles created these interpretative repertoires and how, in turn, these repertoires would feature in wider discourses.

[00:11:19.899] Jo Carlowe: And let’s return to the findings.  What were the key findings that you’d like to share?

[00:11:26.589] Kristjan Kikerpill: Actually, I would like to, sort of, go over the interpretative repertoires that emerged from our analysis.  The first one, I guess, already came up: it was the silent students/expert adults repertoire, right?  Students are the most qualified respondents when it comes to their personal experiences with, and opinions on, constant digital surveillance, and yet their voices were largely missing.  This is something, certainly, that needs further research in the future, because it cannot just remain at, sort of, declaring the fact that children’s voices seem to be neglected.  We need to, you know, find out more down the line.

And the second interpretative repertoire, then, was “a solution in search of a problem,” which is directly from a quote in one of the news articles.  It emerged through our analysis because there are lots of these options available in terms of technology, and vendors have the so-called solution; now they’re trying to find the problem to which this solution is best attached.  So, what also emerged from our analysis was this type of opportunism and this type of fearmongering involved.

Certain quotes also brought up the fact that whenever, for example, there is a school shooting, there’s been this influx of, you know, marketing-type emails suggesting that these companies can keep students safe, that they can, sort of, prevent horrible events from happening.  And when things get really bad, such decisions can also, maybe, be more easily pushed through.  The critics, sort of, counteract this by saying that whenever these horrible events happen, we cannot simply set aside all the rights that everyone has, focus solely on this, and use whatever means necessary to, sort of, prevent the next bad thing.  So, we have to take a more balanced approach and find out what could be a more effective solution to this actually deep-seated social problem that exists.

Then the third interpretative repertoire: the normalisation of surveillance for a good cause.  It’s a totally techno-optimistic sort of way of speaking about things, where everything can be solved through surveillance.  If you have enough information, then you can, sort of, weed out everything that is bad, right?  Because you have this near-perfect picture of what is going on, and you wouldn’t have that if you weren’t surveilling people, you know, around the clock.  And so, marketing or supporting this as the positive solution to these issues would actually take us down the line to where this becomes normal.

[00:14:22.139] Jo Carlowe: I want to pick up on something you mentioned there, which is the inaccuracy of the technology.  In the paper, you describe the applied doom-monitoring technologies as “woefully inaccurate and poised for an excess of false positives.”  I think there’s one example given where kids discussing shooting basketballs is flagged as a threat of a shooting.  I’m just wondering, can you give some other examples of how the technology fails and, also, what the implications are for the young people being monitored?

[00:14:55.750] Professor Andra Siibak: Yeah, I can just add that, basically, from the media reporting, we see various instances where this technology has failed students, or wrongly flagged some content that shouldn’t be flagged, which has caused various kinds of problems for the students.  There was the example related to the basketball team and shooting.  Also, let’s say, some tweets about a movie shooter were flagged as potential threats, or when someone mentioned in social media that their credit score was ‘shooting up’, then this was also something that got flagged.

Schools started alerting parents that there would be a lockdown drill that they were exercising during the morning break.  Also, what was interesting is that on some occasions, the technology flagged possible profanity and hate speech.  On several occasions, for instance, it was noted that the word ‘gay’ was considered to be possible profanity or part of hate speech.  There was also some anecdotal evidence of a student writing a biology project and using the word ‘shit’, or a student writing an essay about Odysseus and using the word ‘bastard’ in it, and then it got flagged, as well.

Basically, what we found out, and what other research has also indicated, was based on one specific school district’s analysis of the reports they had received from their technology.  They went through 1,400 reports and, basically, most of them were later confirmed to be just some students joking around, or actually nothing serious.  Really, given the multimodality of these interactions, basically no context is taken into account by these technologies, as a matter of fact, at least judging by these very strange reactions to various words.
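To make this failure mode concrete, the following is a minimal sketch of the kind of context-free keyword matching described above. It is an illustration only, not any vendor’s actual code: the keyword list, sample messages, and function name are all hypothetical, and the only assumption carried over from the discussion is that words are flagged without regard to their surrounding context.

```python
# A minimal sketch of context-free keyword flagging, the failure mode
# discussed above. Hypothetical illustration only; not taken from any
# real monitoring product.

FLAGGED_KEYWORDS = {"shoot", "shooting", "bastard"}


def flag_message(message: str) -> list[str]:
    """Return the flagged keywords that would trigger an alert for a message."""
    words = (w.strip(".,!?'\"") for w in message.lower().split())
    return [w for w in words if w in FLAGGED_KEYWORDS]


# Each example echoes a false positive mentioned in the episode.
messages = [
    "Great practice tonight, we were shooting hoops after class",  # basketball
    "My credit score is shooting up this month",                   # finance
    "In my essay, Odysseus gets called a bastard",                 # literature
]

for msg in messages:
    hits = flag_message(msg)
    if hits:
        # Every alert below is a false positive: the matcher sees a word,
        # but none of the context that gives the word its meaning.
        print(f"ALERT {hits}: {msg}")
```

A matcher like this flags all three messages, which illustrates why, without context awareness or human review, the bulk of such alerts can turn out to be harmless, exactly the pattern in the 1,400-report example above.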

[00:17:19.020] Jo Carlowe: I’m wondering about the impact that has on young people, and I also want to know, in particular, if this is something that came up, about the impact of this type of monitoring on marginalised young people, such as students of colour and those from the LGBTQ+ community?

[00:17:35.720] Professor Andra Siibak: In particular, what could be highlighted is the fact that in the majority of cases in the US, for instance, the software is still used on school-issued devices, but the student groups who are provided with a device by the school are usually marginalised, come from lower-income backgrounds, and simply cannot afford this technology themselves.

There are additional worries related to LGBTQ+ youth; for instance, the Governor of Texas called to investigate the families of kids who were seeking [inaudible – 18:14].  If the message sent out is, “You are being surveilled and all your online interactions are being surveilled,” then this might lead to potential harm for these more marginalised youth.  Another group that was highlighted as being more affected were students with a disability, whose online interactions might be misinterpreted.

[00:18:36.230] Jo Carlowe: And yet, despite what you’ve said, I note from the paper that the majority of Teachers and parents questioned believe that such technologies keep students safe, whereas students take a very different view.  Can you elaborate on this?

[00:18:52.330] Professor Andra Siibak: Well, I think it’s necessarily take on a very strikingly different view.  That this is also something that we’ve seen from various representative surveys that were provided last year on the topic.  But what we see is, really, from the media reports, the students’ voices are almost non-existent and their reactions and their views are not really asked about or reported in the media.  But when they are reported, then, what do the students usually voice is that if those technologies work perfectly as they are promised work, they wouldn’t have anything against such kind of technology use.  Then, they think that it could be a good way of helping to keep themselves, or their peers, safe, but the problem that these technologies do not really work as they are promised.

The problem of these false positives and whatnot has also made them very sceptical about this technology.  Therefore, they were much more hesitant and much less enthusiastic about it, in comparison to the adults.

[00:20:00.850] Jo Carlowe: Given your findings, is there any place, in your view, for the use of this type of technology?  I mean, are there examples where it has protected young people from risk of harm?

[00:20:14.450] Professor Andra Siibak: From the media reports that we analysed, we really were not able to see much evidence to support this technology actually doing what it’s supposed to do.  Rather, we saw, again, references to some anecdotal evidence, and still rather more evidence telling otherwise: that it’s not working, it’s not really helping to make schools safer and more secure for the students.

And again, I think the most recent, but probably also most traumatic, incident where we saw this technology actually failing was in the context of the Uvalde school shooting that happened in May 2022, which was one of the deadliest massacres ever in a US elementary school.  That specific school district was using such technologies, but the technology failed to pick up a very concerning post by the teenage gunman, and later quite a big media debate happened, like, “How can this be?  We have, kind of, paid for this technology to keep our students safe and secure, and now we have this horrible, horrible event happening to our students.”

[00:21:48.580] Jo Carlowe: Hmmm. How might this kind of technology be used, if at all, or improved, then, to better support young people?

[00:21:57.760] Professor Andra Siibak: My personal opinion would be that perhaps we should start to invest more money, actually, into people themselves and, also, into additional systems that help students in need.  We also reviewed quite a bit of literature, which revealed various systems school districts can use where peers report concerning or threatening student behaviour, like tip lines, helplines or text-messaging systems.  There is also a considerable lack of knowledgeable staff who could properly assess students who might be exhibiting such concerning traits.  So, definitely, if we invest more money into educating the staff, and also fellow students, and into providing additional kinds of systems that could help, then we will not be relying solely on this one piece of software and hoping that it will be the silver bullet that will save us all.

[00:23:15.340] Jo Carlowe: Thank you.  Are you planning some follow-up research, or is there anything else in the pipeline that you’d like to share with us?

[00:23:23.370] Professor Andra Siibak: Well, from my side, what we are actually currently working on is yet another context where AI is apparently being used very enthusiastically, and which also has quite a bit of impact on the educational sector, and this is obviously the case of ChatGPT.  So, this is currently the topic on which we are eagerly trying to prepare our first analysis.

[00:23:50.890] Jo Carlowe: No, that will be interesting.  Kristjan, anything else you want to add in terms of future research, or…?

[00:23:57.570] Kristjan Kikerpill: We’re keeping our eyes on these issues of artificial intelligence and, I would say, more broadly, on this overreliance on technology as a potential solution for social issues that have, sort of, persisted in society for a long time.  People have taken these problems on in various ways through the decades, but haven’t really come up with a good solution.  And now, with the advances in artificial intelligence, technologies that are assisted by different applications using AI are definitely, sort of, I would say, marketed, or at least discussed, as this magical next step that would solve a lot of these problems.  But I feel there is, sort of, a cutting of corners here, and maybe a twisting of the narrative in terms of how useful technology is, or could be, for the various reasons technology vendors might have, especially when marketing their own technology.

But I would say no technology, in and of itself, is the solution.  A lot of it, when used properly, can be of help.  So, I actually pretty much agree with what little there was from students in our paper that we could report.  My personal, sort of, understanding and interpretation of it was that the students were the most reasonable in this discussion.  They didn’t have, say, a radical opposition to or a radical support for this technology.  They were very clear that, really, if it works as it is, sort of, advertised to work, and if it is precise and used when necessary, it would be of help.

If it’s a random, sort of, sifting through masses of information to find, maybe, something that is wrong, and that maybe, after the fact, could turn out to be just a joke, this is a waste of resources.  It becomes this question of, as Andra already mentioned, investing in people, more so than chasing this next technological dream or, you know, even a pipedream, right?  And so, setting out a more wholesome and critical approach to these issues.

[00:26:28.960] Jo Carlowe: Thank you, and finally, a question for each of you.  What is your take-home message for our listeners?

[00:26:35.669] Kristjan Kikerpill: I would say stay critical.  Try to take into account the different, sort of, loyalties that various agents have when looking at how these discussions and this discourse come together, through these, you know, independent, sort of, small quotes of text, and then try to understand where these people are coming from.  What are they trying to achieve when they say things the way that they say them?  And then try and see the intent behind it, because it can reveal a lot of what is, sort of, in the process of being arrived at or achieved when speaking about these topics.

See where the concerns lie and then see who benefits from the way we speak about these things.  So, if we take an overly techno-optimistic approach, obviously, this is very good for the technology vendors.  If we take a type of luddite approach, right, a complete, sort of, disregard of technology, we would also probably lose something in the process.  So, I would say just take a balanced approach, not only to the activities, but also to the talk that accompanies these activities.

[00:27:54.390] Jo Carlowe: Thank you, and Andra?

[00:27:56.340] Professor Andra Siibak: Yeah, as we both represent the so-called critical data school of thought, I totally agree with what Kristjan has just said.  And I would just add that, basically, I’m quite sure that we are unable to solve all the kinds of social problems that technology vendors are currently advertising that they are able to solve just by creating yet another new device, or yet another new app.  Definitely, we need to invest much more into people.

[00:28:28.910] Jo Carlowe: Thank you both so much.  Plenty of food for thought there.  For more details on Kristjan Kikerpill and Professor Andra Siibak, please visit the ACAMH website, www.acamh.org, and Twitter @acamh.  ACAMH is spelt A-C-A-M-H, and don’t forget to follow us on your preferred streaming platform.  Let us know if you enjoy the podcast, with a rating or review, and do share with friends and colleagues [music].

Discussion

Sound quality makes this almost impossible – the bits I can hear are interesting

Matt Kempen

Apologies, there is now a transcript available.
