This panel is devoted to the issue of trust and children’s data. Trust in the ethical treatment of children’s data rests on trust in a complex network of actors and practices involved in the collection, analysis, sharing and use of that data. More specifically, trust must be warranted in how data is captured, who captures it, what is done with it and how the outcomes generated from it are employed. We discuss the predictive practices (algorithmic analysis) used to uncover behaviours, characteristics and relationships in order to anticipate outcomes, nudge behaviours and attitudes, or initiate interventions on behalf of children. The panel also investigates how children’s databases have made their way onto the dark web, positing that trust in the security of databases held by commercial and state actors is at risk. We then consider the ethical tension between the use of micro-celebrity social media influencers to establish market trust in brands and products, and the use of these potentially vulnerable influencers to market to other potentially vulnerable consumers.
Paper one, ‘Entrusting predictions for children’s futures’, discusses concern about the future implications of the large-scale data collection and analytic activities enacted across everyday life, a concern evident in media, academic publications and government policy discussions. Digging deeper, this concern centres largely on unease about how the data is captured, who captures it, what is done with it and how the outcomes generated from it are employed. A lack of transparency or clarity around internal machinic calculation processes requires ‘trust’ in the veracity of the outputs of these systems.
Children are increasingly being positioned as data sources, datafied and embedded in algorithmic ecosystems that employ a range of calculations to uncover patterns or anomalies, to highlight risk, and to predict future outcomes. These practices inform strategies, policies and planning and can therefore have material consequences, advantageous or disadvantageous, for the child, the family and their future pathways. This presentation explores three examples of predictive practices in early childhood in the health, education and commercial sectors through an analysis of relevant academic, policy and commercial literature and discourse, highlighting the many ways in which the placing of trust in algorithmic processes needs to be carefully scaffolded and critiqued.
Paper two, ‘When trust goes wrong: Children on the dark web’, investigates what happens when trust goes wrong: when children’s personal data is hacked and circulated on the dark web. Where once child abuse material (CAM) was the only type of children’s data generally available for sale or circulation on the dark web, the growth of big data over the last decade has seen children’s databases sourced from medical records, school records and app databases, including those associated with connected toys, become available on the dark web. The paper first discusses child abuse material available on the dark web and then outlines the emerging availability of children’s personally identifiable information.
The paper argues that, while parents are often held accountable for their children’s digital profile and data safety, vast amounts of children’s data are being legally collected by tech companies and state actors. Some of this data has found its way onto the dark web. So far, little concern has been voiced about who is responsible for the protection of children’s data along children’s data supply chains, or about the future ramifications of children’s data being sold and circulated on the dark web.
Paper three, ‘Trusted babes of Instagram brand-land: Child as co-opted marketer and profitable brand extension on the internet’, explores how marketers are using/exploiting consumers’ inherent love of, trust in and interest in children to generate ‘brand trust’, while at the same time wading into murky ethical territory by commoditising children’s images and appeal to promote adult brands. A related phenomenon is the use of children as ‘brand extensions’, where celebrity/microcelebrity influencer parents push their children as extensions of their personal brands, leveraging their children’s cuteness and newsworthy impact to earn money and/or achieve fame (Archer, 2018). Far from being the ‘everyday, ordinary Internet users’ initially described in Abidin’s early definition (2015b), some child social media stars are now being presented as beyond ‘ordinary’, with lavish lifestyles or unattainable attributes presented as aspirational for the consuming public.
The paper uses three extreme case studies to examine the extent to which mainstream commercial organizations/brands and parents are colluding to use still and video social media images of children (as brands in their own right) in an attempt to gain consumer ‘trust’. The impact of marketers and parents co-opting children to engender this consumer ‘trust’, and the ensuing issues relevant to the digital rights of the child and the larger question of ‘trust’ in society, are also discussed.