Participants at the workshop
From 0 to 1 again – design implications for re-engaging humans
The human population is on the brink of collapse, with the last populations showing a decrease in natality. Furthermore, probes show that engagement in computation, and in particular in recording the human condition, is in decline. Despite earlier human-centred design efforts, fewer humans use computational artifacts, few choose to learn modern computing techniques, and the aging population and rising mortality rate point to an exponential loss of labelling workforce. As such, the availability of labeled datasets for computational consumption is increasingly limited and, by current projections, will be exhausted by 2070. In this anthro-design project, we worked with a group of humans in order to identify incentives to re-engage humans in computing. The project launched a year-long intervention in which the participants could try out computational artifacts that gave them information on food foraging, cultivation, washing, and building settlements, while also collecting ground-truth labeled data on experiences such as emotions, aspirations and relationships. With the artifacts the humans could perform smaller computational tasks, such as record keeping, calculating, drawing, viewing soothing images, or playing games. The study shows that although there was an initial interest in these artifacts, engagement diminished after a few weeks, and that answering the weekly surveys on the human condition made participants less inclined to use the artifacts. The only design components that showed longitudinal engagement were the games and the images of cats, and this is where future design efforts and research should be directed.
On the future of Coercive Technologies – Beyond Radicalism
For the past four decades, there has been a consensus among researchers that the question of how to solve societal and environmental problems through technology has been successfully addressed within the field of Coercive Technologies (2, 5, 11). Recently, however, Coercive Technologies (CT) has been challenged by researchers such as Dinbjat (1) and Eschelman (3), who argue that there could be other ways of designing technology that need not involve coercion, but that would instead build on “informed rational choice” (IRC). In this paper, we explore the theoretical foundations of IRC and argue that Dinbjat and Eschelman’s suggestion constitutes a reactionary attempt to return to the naïve notion of human rationality which was core to the field of persuasive technologies (PT). While PT was considered a promising research area around the turn of the millennium and seemed to provide effective solutions to societal and environmental problems (6), it became marginalized and eventually disappeared as a research area due to the ground-breaking research of Lah (9). As Lah showed in her study of European politicians, human rationality is a theoretical concept that largely lacks scientific evidence. Based on our own research on user behavior in politico-nodular spaces (7, 8) and results from our design workshops related to the ISO
prototype CPD 8 (14), we suggest that a more realistic development of CT would be to explore the notion of a “semi-informed user”. This notion suggests that there could be some areas of activity wherein the user could be relatively safely served raw or minimally cooked data combined with minimal cognitive scaffolding. We propose a broader deployment of CPD 8 to identify possible candidates for such areas. We suggest that the notion of “semi-informed user” could potentially provide a bridge between CT and researchers who have recently called for a revival of human rationality and agency in the field of technology design.
1. Dinbjat, W. (2060) ‘On the Conceivability of Informed Adaption’, Current Trends in Coercion, 19(12), pp. 1-24.
2. Erikson, J. (2054) ‘The Peculiar Birth of Coercion’, Re-engineering Humanity, 8(7), pp. 51-73.
3. Eschelman, H. (2066) ‘Evolving Coercion in the Second Half of the 21st Century: Introducing Informed Rational Choice’, Current Trends in Coercion, 25(6), pp. 77-93.
4. Fogg, B.J. (2003) Persuasive Technology: Using Computers to Change What We Think and Do, 1st edn., San Francisco, CA: Morgan Kaufmann.
5. Geinhaust, F. (2050) ’30 years of coercive technologies’, Coercive Technologies, 11(23), pp. 755-761.
6. Halbesker, A. (2057) ‘The forgotten roots of coercion’, History of Interaction, 13(10), pp. 43-73.
7. Hedman, A. & Åhman, H. (2062) ‘There is something rotten in the state of Denmark: Exploring decision making in politico-nodular spaces’, Politics, Passion, and Computers, 23(4), pp. 355-376.
8. Hedman, A. & Åhman, H. (2064) ‘I have a dream: The naivety of human rationality’, Coercive Technologies, 25(2), pp. 76-103.
9. Lah, B. (2020) ‘Coercion: What are the choices?’, Interactions, 43(10), pp. 921-945.
10. Minlau, G. (2059) ‘The Coercive Inception’, Psychological Machinery, 5(9), pp. 5-20.
11. Rablle, K.L. (2060) ‘Looking back at 40 years of coercion’, Coercion, 41(15), pp. 455-511.
12. Schmidt, J. (2034) ‘To decide or not to decide, that’s the question: Revisiting rationality’, Journal of Automated Decision Making, 7(6), pp. 12-34.
13. Thiendahler, F. (2037) ‘End of discussion: Evaluating coercive systems in the workplace’, Journal of Post-cognitive Research, 12(6), pp. 1124-1145.
14. Åhman, H. & Hedman, A. (2065) ‘Are we capable? Evaluating Cooking Priorities Directive 8’, Designing for the Environment, 8(5), pp. 521-546.
Strategies for Detection and Reduction of Unauthorised Profit-harming Mongrel Users
Advances in Dog Computer Interfaces (DCI) have created value for enterprises through direct commercial applications (FBS, 2059), as well as indirect economic benefit: secondary gains in animal health reduce stress and anxiety in the dog-owning workforce (Dytec, 2059). In addition, commercial dog breeders have had great financial success with the development of proprietary breeds (e.g. RetrievrsTM and TerrierbytesTM) better physiologically and psychologically adapted to using computer interfaces (Pchplus, 2063) in support of household consumption.
However, as the result of unrecognised selective breeding programmes, there has been a rise in infringing mongrel animals bred for more specialised roles. These infringing animals, such as the so-called “malsation” and “spamiel” (FBS, 2066), aside from fostering unproductive ideas about companionship, can be used to intercept and interfere with normal commercial communications, despite the legal and moral offence of interfering with the profitability of businesses and thereby harming economic growth.
This paper identifies three strategies that businesses can use to identify and reduce the impact of undesirable species within their platforms. In addition, guidance is provided for legal approaches to handling infringing users and for calculating economic impact for restitution. Since the findings of this paper have been identified as having sector-wide benefit for improving economic growth, the paper is available at reduced cost to commercial growth organisations with beta+ economic ratings. The findings herein are not available to non-profit organisations, low-impact businesses and civilians outside this category. Organisations seeking clarification should contact the publisher for information on consultancy rates.
Be All In or Get All Out: Exploring options for CAI-Workers and CAI-technology
Collaborative AIs (CAIs) combine human creativity, empathy and intuition with extensive computational power and information access. Since the late 2020s, CAI-technology has advanced many research fields [2036-1, 2036-2, 2038, 2042], but it has also been misused, most notably during the First Panic. But whereas there is a vivid discussion of the consequences of CAI-technology, little is said about the situation of CAI-workers, despite the fact that as many as 23.2% of them are diagnosed with personality disorders such as schizophrenia, bipolarity or depression.
In this study we conducted deep interviews with 152 CAI-workers and used the resulting insights in 16 tech trials with 48 of the interviewees. Our findings show that CAI-workers are effectively excluded from society, not only physically – living in closed compounds due to corporate data protection policies – but also through the public’s attitude towards them: anger over lost jobs, envy from rejects, and the very common fear that CAIs are the last step towards fully sentient AIs. Further, there are issues of self-image: being superhuman whilst working versus significantly less able off-duty. In effect, CAI-workers are at once their employer’s most valuable asset and its slaves, contained and deprived of normal cognitive abilities. Accordingly, the tech trials indicated that a prolonged CAI-state was highly favoured.
Consequently, we argue that it is time to discuss the future of CAI-technology: should it be abandoned entirely, or taken further by allowing a perpetual CAI-state, in effect nurturing a new type of human?
2036-1 Stavros Gkouskos, “I Saw Your Grand-grand-son Graduate”: Using CAI Gossip Algorithms to Increase the Mental Well-being of Elderly Patients, Proceedings of the 2036 CHI Conference on Human Factors in Computing Systems (CHI ’36), ACM Press.
2036-2 Nicholas Wang, Solving Traffic-Flow Issues for Shared Autonomous Transportation, PhD thesis for the degree of Doctor of Technology, Chalmers University of Technology, 2036.
2038 Barake Kansas Henry & Ireli Lyckvi, Two CAIs vs. 500 Million Sick: How We Found Patient Zero. Morgan Kaufmann Bonniers, 2038.
2042 Eira Lundgren & Conor McCloud, Ensuring the Democratic Process in the Scot-Scandi Election Using CAI Technology on Citizen Input. International Journal of Interaction Design, Vol 20, Issue 2, March 2042, Springer.
2050 Eira Lundgren & Ireli Lyckvi, The Panic in 2049 – how thwarted gossip algorithms broke the West US, Random O’Reilly 2050.
2059 Charlotte Heath, Amping up information retrieval and system control with a new generation of CAIs. IEEE Transactions on CAIs and Learning Systems, Vol 11, Issue 12, December 2059.
2064 Rosie Picard & Charles Francis Xavier, We Are Afraid We Can’t Do That – On Limiting Neural Connections Between CAI-Humans And Their Computer Counterpart. Science, Volume 545, Issue 8705, August 3, 2064, AAAS
2065 Elora Björk & Jari Holopainen, “Lesser Than I Used To Be”: On the Mental Health of CAI-workers. Proceedings of the 21st International Conference on Exo-Applications and Technology 2065 (EAT ’65), Springer.
Dark Patches Creator Personas
Dark patches have become an increasingly large problem on the Internet of late. Their noxious effects are well known: they create pockets and corridors for illegal high-frequency communication and transactions, and they widen the market for dark hardware. While not in direct conflict with the 2036 global Computing Backwards Compatibility Act, their existence undermines social equity and directly clashes with UN Global Development Goal #17, “An affordable Internet for all”.
While much technical research has tried to find algorithmic solutions to the problem of dark patches, little is known about the drivers behind their creation. We here present the results of a large-scale study of dark patch DIY hackers and programmers-for-hire in three European countries. Besides the results of the study itself, we also present five fictive dark patch creator personas (“psychological profiles”).
Since we nowadays take the equitable sharing of limited resources such as the Internet for granted, we have to be all the more vigilant when various kinds of deviants and perverts try to appropriate more than their fair share of The Commons. In that vein, we end the paper with suggestions for future work that will help crime and counter-terrorism agencies in their work of understanding, identifying and apprehending dark patch creators. This work should be seen as a complement to more technically oriented measures for identifying and neutralizing dark patch code.
DEO ex Machina: a new Framework for Virtual Agents in Automated Elderly Care Provision
Recent years have seen an increase in interaction between virtual agents and humans (VHI). While adoption has been successful in many areas, such as production and education, other areas, and specifically elderly care, show a lack of engagement. Age seems to be a defining factor, as older users are not used to the technology and do not benefit from its full potential. Recent updates of VA technology specifically for the sector, aesthetic adaptations and new interfaces do not seem to have made a significant change in the area.
In this paper we present an analysis of interaction logs gathered in a care home equipped throughout with virtual agents (VAs). Contrary to common belief, the interaction does not break down on the VA’s side but on the human side, as people reject, misinterpret or ignore the well-intentioned suggestions of the VA. Following these insights, we present a new framework to support interactions: DEO. We propose three steps: DISPENSE a suggestion and log how the human responds, EDUCATE the human about the insights they lack to make the necessary changes, and OVERWRITE their decisions should they repeatedly decide not to follow the suggestions. We give detailed instructions on how best to implement each step, based on our results. We argue that these steps will lead to increased adherence to VA suggestions even among the elderly population, thereby making the technology accessible to a wider audience.
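The DISPENSE–EDUCATE–OVERWRITE escalation can be sketched as a simple decision loop. This is an illustrative sketch only: the names (`DEOAgent`, `Suggestion`) and the overwrite threshold of three refusals are our assumptions; the abstract does not specify an implementation.

```python
# Illustrative sketch of the DEO escalation loop. All names and the
# overwrite threshold are assumptions; the abstract specifies no code.
from dataclasses import dataclass
from typing import List


@dataclass
class Suggestion:
    text: str
    rejections: int = 0  # consecutive times the human declined


class DEOAgent:
    """Escalates DISPENSE -> EDUCATE -> OVERWRITE as refusals accumulate."""

    def __init__(self, overwrite_threshold: int = 3):
        self.overwrite_threshold = overwrite_threshold
        self.log: List[str] = []  # DISPENSE requires logging responses

    def step(self, suggestion: Suggestion, accepted: bool) -> str:
        # DISPENSE: issue the suggestion and log the human's response.
        self.log.append(f"{suggestion.text}: accepted={accepted}")
        if accepted:
            suggestion.rejections = 0
            return "dispense"
        suggestion.rejections += 1
        # OVERWRITE: after repeated refusals, the VA acts on its own.
        if suggestion.rejections >= self.overwrite_threshold:
            return "overwrite"
        # EDUCATE: explain the missing insight and try again later.
        return "educate"
```

With the assumed default threshold, two refusals trigger education and a third triggers an overwrite; an acceptance resets the count.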
Solving the Dilemma in Operating Mobile Cranes
Almost 60 years ago, mobile cranes were considered the most dangerous machines in the whole construction industry, and 43% of accidents were caused by the operators (Neitzel et al., 2001). Some notable efforts have been made during the past 40 years to solve this issue, such as improving the accuracy of embedded sensors (Melina et al., 2029), augmenting safety-critical information directly to the operators’ eyes (Enzo et al., 2035), and finally automating the process of lifting and moving materials (Franklyn et al., 2043). According to a study conducted by Rosaria et al. (2067), operators contributed only 8% of all mobile crane-related accidents between 2056 and 2066. However, looking at the data on operator-related accidents, those accidents were caused by the operators’ inability to make the right decision when the automation did not work as expected, because they had come to rely too much on the automation. To address this issue, we developed a semi-automated system for operating the mobile crane. The operators manually enter the parameters of how high and how far the materials should be lifted and moved, using a keyboard in the cabin, and the system then displays the estimated result of those parameters. The operators are asked to confirm the entered parameters before the operation is actually executed by the mobile crane. Although the semi-automated process is perceived as cumbersome by the operators, the results show that the operators maintain a better decision-making process, since they are still partially involved in the operation.
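The confirm-before-execute cycle can be sketched as follows. The parameter names, fixed speeds and estimate formula are illustrative assumptions, not the system reported in the paper.

```python
# Hypothetical sketch of the semi-automated crane workflow: the operator
# enters lift parameters, reviews the system's estimate, and must confirm
# before anything moves. Speeds and the formula are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class LiftPlan:
    height_m: float    # how high the load should be lifted
    distance_m: float  # how far it should be moved horizontally


def estimate(plan: LiftPlan, lift_speed: float = 0.5,
             travel_speed: float = 2.0) -> Dict[str, float]:
    """Compute the estimated result shown to the operator (toy formula)."""
    return {
        "height_m": plan.height_m,
        "distance_m": plan.distance_m,
        "est_duration_s": plan.height_m / lift_speed
                          + plan.distance_m / travel_speed,
    }


def execute_lift(plan: LiftPlan,
                 confirm: Callable[[Dict[str, float]], bool]) -> bool:
    """Execute only after the operator explicitly confirms the estimate."""
    if confirm(estimate(plan)):  # operator stays in the decision loop
        return True              # crane performs the lift
    return False                 # aborted; operator re-enters parameters
```

Keeping confirmation as a required callback means the automated path cannot bypass the operator, which is the central design point of the semi-automated approach.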
Richard L Neitzel, Noah S Seixas, Kyle K Ren, et al. 2001. A Review of Crane Safety in the Construction Industry. Journal of Applied Occupational and Environmental Hygiene 16, 12: 1106–1117.
Miranda Melina, Knut Irvin, Kailee Birgit, et al. 2029. Developing Extremely Accurate Sensors for Off-Road Vehicles. Journal of Embedded Systems for Security and Safety 27, 5: 232-248.
Maddalena Enzo, Ashton Ravenna, Sherley Diane, et al. 2035. Augmenting Information to the User’s Eyes. Journal of Advanced Augmented Reality 8, 4: 567-584.
Maxene Franklyn, Vern Aileen, Melantha Jamey, et al. 2043. Automating the Lifting and Moving Operations in Industrial Vehicles. Journal of Industrial Automation 24, 2: 112-130.
Elpidio Rosaria, Carissa Wilhelmina, Melanie Danette, et al. 2067. Investigating the Root Causes of Crane-Related Accidents from 2056 to 2066. Journal of Industrial Safety Systems 5, 4: 945-959.
‘It was a living hell’: Redesigning HomeAI services to Combat Domestic Abuse in Mobile Co-living Spaces
With sustained mass migration, the home is in constant flux, and being at home does not necessarily mean feeling at home. The self-regulating deep learning system HomeAI was invented to help inhabitants feel at home everywhere. HomeAI turns mobile co-living spaces into highly personalized, connected and enhanced homes by drawing on inhabitants’ migration data and by learning from their emotions and behavior. However, recent studies have documented innumerable cases of domestic abuse in close intimate relationships, such as stalking, smart ghosting and revenge leaks. In this paper we build on recent research on domestic abuse in HomeAI through a design research project done in collaboration with victims of HomeAI abuse. Through deep ethnography and co-design workshops, we have designed technologies that our participants have deployed in their current homes to combat abuse: 1) a repair service that cleans and erases HomeAI-generated memory from the abuser, 2) a ghost AI that can intervene in the HomeAI assistant software, 3) an auto-tracking camera that can record through underwear, 4) window panels with high-resolution fake people, and 5) a safe-button that can ensure safety at any time. Our findings show that even if AI home services are designed to make inhabitants feel at home and safe from the outside, these technologies can be used for evil purposes within close relationships to punish and take revenge on cohabitants. In this new era, where home and privacy are no longer synonymous, we need to rethink how to design HomeAI. We present a framework for redesigning HomeAI services around trust and privacy, and argue that there is a need to consider the ethical concerns and societal threats of designing for revenge and punishment in the home.
Analyzing the motivations and effects of going offline to inform medical treatments
Ever since the Right to Go Offline (MacDonald, 2056) went into effect in 2056, there have been ongoing debates about its social consequences (Juul Sondergaard & Luu, 2064). Recently these discussions have escalated due to a relative increase in data leaks and crimes, a result of reduced government funding to the respective sectors (Stewards, 2067).
This study aims to give a better understanding of why people decide to go offline and what the effects are.
Through a method of deep biometrics ethnography (De Jong, 2046), the 1232 participants of this study reconnected online over the course of two weeks to share their personal information with us at intimacy levels as deep as level 5, giving us deep insights into their mental and physical state.
Qualitative interviews with the participants explained some of the quantitative findings. The majority of the participants saw their productivity level drop significantly, describing this as “feeling a lack of purpose and motivation”. Furthermore, stress and a strongly present feeling of loneliness led to increased heart rate and blood pressure levels in 68% of the participants.
Observing the participants’ historical data, it becomes clear that many people who decide to go offline have previously dealt with mental health problems and are perhaps looking for a way out. Unfortunately, going offline leads to alienation, ultimately creating an emotional gap with society (De Vrij, 2062).
Through this study we argue that although the Right to Go Offline, present in the US constitution since 2056, cannot be undermined, treatments for those who are offline should be designed. To conclude this paper, we outline promising technologies that can inform this design process, along with their pros and cons.