The 2024 European Symposium on Usable Security

Dates and Location: September 30 & October 1, 2024, Karlstad, Sweden.

Join us in Karlstad for EuroUSEC 2024, where the future of usable privacy and security intersects with the unparalleled beauty and heritage of Sweden. Let's explore, learn, and innovate together in a city that bridges the past with the future, nature with technology, and people with ideas.

The European Symposium on Usable Security (EuroUSEC) is a forum for research and discussion on human factors in security and privacy. EuroUSEC solicits previously unpublished work offering novel research contributions in any aspect of human-centred security and privacy. EuroUSEC aims to bring together researchers, practitioners, and students from diverse backgrounds including computer science, engineering, psychology, the social sciences, and economics to discuss issues related to human-computer interaction, security, and privacy.

Exciting news: We are pleased to announce that the authors of selected best papers from the EuroUSEC 2024 event will be invited to submit an extended version for possible publication in a special section/issue of the Information and Computer Security Journal, pending approval. These submissions will undergo a fast-track review process.

We are also pleased to announce that EuroUSEC 2024 is co-funded by the European Union through the Interreg Sweden-Norway project Cross Border Cyber Capacity (CBCC).

We would also like to thank KASTEL (Institute of Information Security and Dependability) at the Karlsruhe Institute of Technology (KIT) for sponsoring us and enabling the ACM OpenTOC service. This unique service allows free access to our proceedings, provided users access the links through our website.

EuroUSEC is part of the USEC family of events. You can find more info about all USEC events at: https://www.usablesecurity.net/USEC/index.php

EuroUSEC 2024 is an independent event in Karlstad, unaffiliated with any larger conference. This allows us to keep registration costs to a minimum.

We want EuroUSEC to be a community-driven event and would love to hear any questions, comments, or concerns you might have about the changes from last year. We therefore encourage everyone to join the EuroUSEC Slack. Alternatively, you can email the program chairs with any questions or concerns.



Keynote Speakers

Dr. Rebecca Balebako

Talk Title: PETs are all you need?

Talk Abstract: Privacy Enhancing Technologies (PETs) enable amazing use cases from healthcare research to fraud prevention, while protecting personal data. But are policy-makers and academics focusing on the right solutions? This talk delves into how we define PETs and why that matters. What are the adoption challenges of PETs across small, medium, and large companies in Europe? Are PETs only relevant to companies building AI models? We'll dissect key criteria for selecting PETs and explore the pitfalls of promoting one technology over another. We'll also highlight key areas of research that might enable data protection in medium-sized companies. 

Biography: Dr. Rebecca Balebako has been working on privacy since 2010. She has helped start-ups, large tech companies, and specialized government agencies define strategies for machine learning and privacy. Rebecca has published numerous academic papers on usable privacy and security. Dr. Balebako is also an IAPP Fellow of Information Privacy (CIPP/E and CIPT) and is trained in AI Governance and Machine Learning. She holds a Ph.D. in Engineering and Public Policy, focusing on privacy, as well as degrees in Software Engineering and Math. As a former lead of the Google Privacy Red Team, she improved the privacy infrastructure and reduced risk for terabytes of data, thousands of machine learning models, and hundreds of third-party apps. Previous work on AI led to improvements in early speech recognition as well as a framework for responsible face recognition.

Prof. Awais Rashid

Talk Title: Equitable Privacy: Understanding Privacy Requirements of Marginalised and Vulnerable Populations

Talk Abstract: Digital technologies are becoming pervasive in society, from online shopping and social interactions to finance, banking, and transportation. With a future vision of smart cities, driven by a real-time, data-driven, digital economy, privacy is paramount. It is critical to engendering trust in the digital fabric on which society relies and is enshrined as a fundamental human right in the Universal Declaration of Human Rights and regulations such as GDPR.

Significant efforts have been made to provide users with more agency in understanding, controlling, and assuring the way their data and information are processed and shared. However, this ability to control, understand, and assure is not equitably experienced across society. For instance, individuals from lower-income groups often have to share devices to access services that may include sensitive information. In the case of victims of intimate partner violence, an innocuous app (such as Find My Phone) or digital device (such as a smart doorbell) may be used to monitor their activities and there are significant risks in using online reporting tools for fear of traceability. Such vulnerable and marginalised populations have nuanced privacy and information control needs as well as threat models. These needs and requirements are not typically foregrounded to software developers. The challenge is compounded by the fact that developers are neither privacy experts nor typically have the training, tools, support, and guidance to design for the diverse privacy needs of marginalised and vulnerable groups.

In this talk, I will discuss insights from an ongoing multi-year programme of research on understanding the privacy requirements of such populations and highlight a research agenda on how to support software developers in systematically addressing them.

Biography: Awais Rashid is Professor of Cyber Security at the University of Bristol. His research spans cyber security and software engineering, with a particular focus on cyber-physical systems security, software security, and usable security and privacy. He is Director of the UK’s National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (REPHRAIN) and of the EPSRC Centre for Doctoral Training in Trust, Identity, Privacy and Security in Large-Scale Infrastructures, and he is the lead and editor-in-chief of CyBOK, the Cyber Security Body of Knowledge. He leads research on the readiness of software engineers and developers to work with new secure hardware as part of the Digital Security by Design programme, and previously led projects within the UK Research Institute on Trustworthy Industrial Control Systems (RITICS), the UK Research Institute on Socio-technical Cyber Security (RISCS), and the National Centre of Excellence on Cyber Security of Internet of Things (PETRAS). He was a Fellow of the Alan Turing Institute (2018-2021).

Call for Papers

We invite you to submit a paper and join us in Karlstad, Sweden, at EuroUSEC 2024.

We welcome submissions containing unpublished original work describing research, visions, or experiences in all areas of usable security and privacy. We also welcome systematizations of knowledge with a clear connection to usable security and privacy. We welcome a variety of research methods, including both qualitative and quantitative approaches. Note that all submissions must clearly relate to the human aspects of security or privacy. We will not accept papers on security or privacy that do not address usability or human factors; the same applies to usability or human-factors papers that do not address security or privacy.

Topics include, but are not limited to:

  • usable security and privacy implications or solutions for specific domains (such as IoT, ehealth, and vulnerable populations)
  • methodologies for usable security and privacy research
  • field studies of security or privacy technology
  • longitudinal studies of deployed security or privacy features
  • new applications of existing privacy/security models or technology
  • innovative security or privacy functionality and design
  • usability evaluations of new or existing security or privacy features
  • security testing of new or existing usability features
  • lessons learned from the deployment and use of usable privacy and security features
  • reports of failed usable privacy/security studies or experiments, with the focus on the lessons learned from such experience
  • papers with negative results
  • reports of replicating previously published important studies and experiments
  • psychological, sociological, cultural, or economic aspects of security and privacy
  • studies of administrators or developers and support for security and privacy
  • studies on the adoption or acceptance of security or privacy technologies
  • systematization of knowledge papers
  • impact of organizational policy or procurement decisions on security and privacy

We aim to provide a venue for researchers at all stages of their careers and at all stages of their projects.

Submissions can be either standard-length papers (at most 16 pages) or short papers (up to 8 pages) reporting mature work. All papers must use the one-column submission format (see the Submission Instructions: Microsoft Word, or LaTeX using the “manuscript” option to create a single-column format); the bibliography and appendices are excluded from the page count. In supplementary appendices, authors can include study materials (e.g., surveys and interview guides) that would otherwise occupy valuable space in the body of the paper. Reviewers are not required to read appendices, so your paper should be self-contained without them. ACM also allows the publication of additional supplemental materials, and we encourage authors to take advantage of this option to provide research artifacts (e.g., builds of the software used in their study).

As in previous years, the EuroUSEC 2024 proceedings volume will be part of the International Conference Proceedings Series (ICPS) published by ACM.

It is mandatory for at least one author of each accepted paper to attend and present the paper in person. In certain circumstances, people who cannot travel may present their papers virtually.



Important Dates

Paper registration deadline (mandatory): Monday, 27th May, 2024 (Anywhere on Earth (AoE))
Paper submission deadline: Friday, 31st May, 2024 (AoE)
Notification: Monday, 8th July, 2024 (AoE)
Re-submission deadline: Monday, 22nd July, 2024 (AoE)
Revision notification: Monday, 5th August, 2024 (AoE)
Camera ready: Monday, 26th August, 2024 (AoE)



Submission Instructions

Upload your submission via this link:

  1. All submissions must report original work.
    • Authors must clearly document any overlap with previously or simultaneously submitted papers from any of the authors (email the chairs a PDF document outlining this).
  2. Papers must be written in English.
  3. Papers must be anonymized for review: no author names or affiliations may appear on the title page or in the body of the paper. Acknowledgments should be removed, and papers should not otherwise reveal the authors' identities.
  4. Refer to your own related work in the third person: do not use personal pronouns.
    • This requirement also applies to data sets and artifacts. (For example, "We reused data from the authors of Smith et al. [31] in our experiment.")
  5. Do not blind citations except in extraordinary circumstances. If in doubt, contact the chairs.
  6. All submissions must use the ACM Word or LaTeX templates.
    • These templates can be obtained from the ACM author submission information website. To submit your paper for review, use the one-column format (e.g., for LaTeX use \documentclass[manuscript]{acmart}).
  7. Systematization of Knowledge paper titles must begin with "SOK:"

Simultaneous submission of the same paper to another venue with proceedings or a journal is prohibited. Serious infringements of these policies may cause the paper to be rejected from publication and the authors put on a warning list, even if the paper is initially accepted by the program committee. Contact the EuroUSEC chairs if there are questions about this policy.

You are free to publish a pre-print of your paper on arXiv, SSRN or similar, if you wish to.

Contact EuroUSEC chairs if there are any questions.

Program Committee Chairs

The chairs can be contacted at pc.chairs.eurousec

Program Committee

  • Yasmeen Abdrabou, Lancaster University (UK)
  • Shimaa Ahmed, VISA research (USA)
  • Mamtaj Akter, Vanderbilt University (USA)
  • Elham Al Qahtani, University of Jeddah (Saudi Arabia)
  • Patricia Arias Cabarcos, University of Paderborn (Germany)
  • Hala Assal, Carleton University (Canada)
  • David Balash, University of Richmond (USA)
  • Louise Barkhuus, IT University of Copenhagen (Denmark)
  • Ingolf Becker, University College London (UK)
  • Zinaida Benenson, University of Erlangen-Nuremberg (Germany)
  • Sabid Bin Habib Pias, Indiana University (USA)
  • Dawn Branley-Bell, Northumbria University (UK)
  • Bernardo Breve, University of Salerno (Italy)
  • Annalina Buckmann, Ruhr University Bochum (Germany)
  • Jurlind Budurushi, Baden-Wuerttemberg Cooperative State University (Germany)
  • Jan-Willem Bullee, University of Twente (Netherlands)
  • Karoline Busse, University of Applied Administrative Sciences Lower Saxony (Germany)
  • Rahul Chatterjee, University of Wisconsin Madison (USA)
  • Giuseppe Desolda, University of Bari Aldo Moro (Italy)
  • Nicolas E. Díaz Ferreyra, Hamburg University of Technology (Germany)
  • Verena Distler, University of the Bundeswehr Munich (Germany)
  • N'guessan Yves-Roland Douha, Nara Institute of Science and Technology (Japan)
  • Lynette Drevin, North West University (South Africa)
  • Jide Edu, University of Strathclyde (UK)
  • Edwin Frauenstein, Walter Sisulu University (South Africa)
  • Diana Freed, Harvard University (USA)
  • Sepideh Ghanavati, University of Maine (USA)
  • Ryan Gibson, University of Strathclyde (UK)
  • Thomas Gross, Newcastle University (UK)
  • Anne Henning, Karlsruhe Institute of Technology (Germany)
  • Yousra Javed, Illinois State University (USA)
  • Agnieszka Kitkowska, Jönköping University (Sweden)
  • Jingjie Li, University of Edinburgh (UK)
  • Maryam Mehrnezhad, Royal Holloway University of London (UK)
  • Ola Michalec, University of Bristol (UK)
  • Mathias Mujinga, University of South Africa (South Africa)
  • Collins Munyendo, The George Washington University (USA)
  • Alaa Nehme, Mississippi State University (USA)
  • James Nicholson, Northumbria University (UK)
  • Emma Nicol, University of Strathclyde (UK)
  • Jan Nold, Ruhr University Bochum (Germany)
  • Anna-Marie Ortloff, University of Bonn (Germany)
  • Simon Parkin, Delft University of Technology (Netherlands)
  • Scott Ruoti, University of Tennessee (USA)
  • Kavous Salehzadeh Niksirat, EPFL (Switzerland)
  • Cigdem Sengul, Brunel University (UK)
  • Raphael Serafini, Ruhr University Bochum (Germany)
  • Madiha Tabassum, Northeastern University (USA)
  • Sotirios Terzis, University of Strathclyde (UK)
  • Daniel Thomas, University of Strathclyde (UK)
  • Christian Tiefenau, University of Bonn (Germany)
  • Jan Tolsdorf, The George Washington University (USA)
  • Christine Utz, Radboud University (Netherlands)
  • Stephan Wiefling, swiefling.de & Vodafone (Germany)
  • Shan Xiao, Gonzaga University (USA)
  • Verena Zimmermann, ETH Zürich (Switzerland)

Publication Co-chair

  • Steven Furnell, University of Nottingham (UK)

Publicity Chairs

  • Anastasia Sergeeva, University of Luxembourg (Luxembourg)
  • Scott Harper, Newcastle University (UK)

Steering Committee

  • Oksana Kulyk, IT University of Copenhagen (Denmark)
  • Karen Renaud, University of Strathclyde (UK)
  • Peter Mayer, University of Southern Denmark (Denmark)
  • Angela Sasse, Ruhr University Bochum (Germany)
  • Melanie Volkamer, Karlsruhe Institute of Technology (Germany)
  • Charles Weir, Lancaster University (UK)


Program

All times in the program are given in the Central European (Summer) Time Zone (CEST). You can use this link to convert the times to any time zone you wish.

The preliminary program is available below.

Monday 30th September 2024
9:00 - 9:15 | Welcome and Greetings

9:15 - 10:15 | Keynote 1: Dr. Rebecca Balebako. Title: "PETs are all you need?"

10:15 - 10:30 | Coffee Break

10:30 - 12:00 | Technical Paper Session 1: Phishing and User Behavior (20 minutes per paper - including time for questions)

Chair: Anne Henning

You Know What? - Evaluation of a Personalised Phishing Training Based on Users' Phishing Knowledge and Detection Skills

  • Lorin Schöni
  • Victor Carles
  • Martin Strohmeier
  • Peter Mayer
  • Verena Zimmermann

Training is important to support users in phishing detection. To better match phishing training content with users’ current skills, personalised training has huge potential. Therefore, we evaluated personalised training with N=96 participants in an online study. Participants were assigned to one of three groups based on a phishing proficiency score and received tailored training material. The training enhanced overall phishing proficiency, but also levelled the playing field, bringing all groups, regardless of their initial proficiency, to an equivalent post-training phishing proficiency level. For group assignment, the findings show that most person-related information, like age or personality traits, does not seem to affect phishing proficiency in a meaningful way. Yet, security awareness scales like the HAIS-Q or SA-13 seem to be useful indicators of phishing proficiency. The results demonstrate the feasibility of a personalised phishing intervention using relatively sparse data for categorisation into groups that receive tailored training content. Further research is needed to systematically evaluate the benefits and challenges of personalised phishing training.

Eyes on the Phish(er): Towards Understanding Users' Email Processing Pattern and Mental Models in Phishing Detection

  • Sijie Zhuo
  • Robert Biddle
  • Jared Daniel Recomendable
  • Giovanni Russello
  • Danielle Lottridge

Phishing emails typically masquerade as coming from reputable identities to trick people into providing sensitive information and credentials. Despite advancements in cybersecurity, attackers continuously adapt, posing ongoing threats to individuals and organisations. While email users are the last line of defence, they are not always well-prepared to detect phishing emails. This study examines how workload affects susceptibility to phishing, using eye-tracking technology to observe participants’ reading patterns and interactions with tailored phishing emails. Incorporating both quantitative and qualitative analysis, we investigate users’ attention to two phishing indicators, email sender and hyperlink URLs, and their reasons for assessing the trustworthiness of emails and falling for phishing emails. Our results provide concrete evidence that attention to the email sender can reduce phishing susceptibility. While we found no evidence that attention to the actual URL in the browser influences phishing detection, attention to the text masking links can increase phishing susceptibility. We also highlight how email relevance, familiarity, and visual presentation impact first impressions of email trustworthiness and phishing susceptibility.

'Protect and Fight Back': A Case Study on User Motivations to Report Phishing Emails

  • Pavlo Burda
  • Luca Allodi
  • Alexander Serebrenik
  • Nicola Zannone

Phishing reporting is emerging as a key defense mechanism against phishing attacks. Whereas large enough organizations have specific policies in place for phishing reporting, user uptake is still limited, and a clear picture of what motivates users to report, and which types of emails they report, is still to be drawn. Yet, this is critical to devising better policies and procedures and stimulating awareness and a cyber-security culture within organizations. In this work, we sample and interview n = 49 employees from the pool of phishing reporters at a medium-sized European technical university. We sample interviewees based on how sophisticated the emails they report are over contextual and technical dimensions and cluster reporters in terms of their (emerging) reporting behavior. We conduct semi-structured interviews up to thematic saturation and derive 13 main themes driving reporting motivations. We discuss the identified themes in the broader theoretical context, as well as the practical implications of our findings.

An analysis of phishing reporting activity in a bank

  • Anne-Kee Doing
  • Eduardo Bárbaro
  • Frank van der Roest
  • Pieter van Gelder
  • Yury Zhauniarovich
  • Simon Parkin

A reduction in phishing threats is of increasing importance to organizations. One part of this effort is to provide training to employees, so that they are able to identify and avoid phishing emails. Further, simulated phishing emails are used to test whether employees will both identify and report a suspicious email. We worked with a partner bank to examine a repository of many thousands of reported emails from a behavioural perspective. We divide reported emails into categories and examine reporting trends over time relative to training and phishing simulation campaigns. Among our findings, the level of reporting of benign emails is comparable to the number of malicious emails reported, and we see indications that training and simulations amplify the reporting of benign emails. Our analysis uncovers reporting patterns for unique reporters per email campaign as a promising indicator for the security-related culture around phishing prevention. Evidence from our analysis informs recommendations, such as providing infrastructure for reporting not only malicious emails but also benign yet suspicious work-related emails, in a manner that minimises the disruption for users erring on the side of caution when assessing emails.


12:00 - 13:00 | Lunch

13:00 - 14:30 | Technical Paper Session 2: Privacy and Security Practices and Perceptions in Digital Environments (20 minutes per paper - including time for questions)
Chair: Eman Alashwali

Shadow Health-Related Data: Definition, Categorization, and User Perspectives

  • Yamane El Zein
  • Kavous Salehzadeh Niksirat
  • Noé Zufferey
  • Mathias Humbert
  • Kévin Huguenin

Health-related data (HRD) about individuals are increasingly generated and processed. The sources and volume of such data have grown larger over the past years; sources include wearable devices, health-related mobile apps, and electronic health records. HRD are sensitive and have important privacy implications, and hence hold a special status under existing privacy laws and regulations. In this work, we focus on shadow HRD: HRD that are generated and/or processed by individuals using general-purpose digital tools outside of a professional healthcare information system. Some examples are health-related queries made by individuals on general-purpose search engines and LLM-based chatbots, or medical appointments and contact information of health professionals synced to the cloud. Such data, and the privacy risks stemming from them, are often overlooked when studying digital health. Using information from two focus group sessions (23 participants in total), we identified and categorized a broad variety of user behaviors that, including the aforementioned examples, lead to the creation of shadow HRD. Then, informed by this categorization, we designed a questionnaire and deployed it through an online survey (300 respondents) to assess the prevalence of such behaviors among the general public, as well as user awareness of (and concerns about) the privacy risks stemming from their shadow HRD. Our findings show that most respondents adopt numerous and diverse behaviors that create shadow HRD, and that very few resort to mechanisms to protect their privacy.

Used, Avoided, Restricted? Perceptions, Behavior, and Changes in Video Conferencing of German-speaking Users During and After the Pandemic

  • Lydia Weinberger
  • Christian Eichenmüller
  • Freya Gassmann
  • Gaston Pugliese
  • Zinaida Benenson

The COVID-19 pandemic required a sudden deployment of video conferencing (VC) in work and social environments. A few months after contact restrictions were lifted, we conducted an exploratory online survey with 251 German-speaking participants and investigated perceptions, behavior, and changes in the use of VC apps. Particularly, we considered security, privacy, usability, and familiarity of ten popular VC apps and how these factors influenced behavior during and after the lockdown. We showed significant dependencies between the usability and security perception of ten well-known VC apps. While perceived usability was significantly correlated with familiarity in most cases, we were only able to show a connection between familiarity and security perception for some of these applications. Usability played the greatest role in deciding to use an app. Yet, depending on the app, security had a significant influence as well. A lack of usability and security also played an important role in the avoidance of apps, but the influence of third-party decisions to restrict the use was the most significant. A lack of autonomy in app usage sometimes led to the use of apps that were associated with security concerns. In some cases, uncomfortable experiences and incidents triggered by the incorrect use of functions, e.g., activating audio by mistake, led to more extensive protective measures. Participants generally perceived threats from other attendees as more realistic than from external attackers.

Exploring Determinants of Parental Engagement in Online Privacy Protection: A Qualitative Approach

  • Ann-Kristin Lieberknecht

In response to the increasing concern surrounding the collection of children’s data online, the research community has acknowledged the crucial need to protect children’s privacy in the digital age. While existing research has primarily concentrated on educating children and teachers, there is only limited research on the role of parents in safeguarding their children’s online privacy. This paper seeks to enhance our understanding of parental behaviour examining the various factors that impact parental engagement in online privacy protection. Through semi-structured interviews with eight media educators, we gained valuable insights into the complex decision-making processes of parents in this regard. By utilising an exploratory, qualitative approach, we were able to uncover subtle nuances and motivations that had not been previously addressed by existing theories. By shedding light on these aspects, we contribute to a more comprehensive understanding of how to effectively safeguard children’s privacy online. This study not only adds to the existing body of knowledge but also offers practical implications for empowering parents to make informed privacy decisions.

Exploring End Users' Perceptions of Smart Lock Automation Within the Smart Home Environment

  • Hussein Hazazi
  • Mohamed Shehab

Unlike their conventional counterparts, smart locks have the ability to communicate with other smart home devices which enables a level of integration and automation previously unimaginable. For instance, smart locks can exchange information with video doorbells, allowing homeowners to set up automation scenarios where the door automatically unlocks upon recognizing a familiar face. While some users might find it convenient, others might consider it a security vulnerability that could lead to unauthorized access to the house. This study employs a mixed-methods approach to explore user perceptions of creating smart home automation scenarios involving smart locks. A total of 21 smart lock owners participated in the study. Each participant engaged in a hands-on activity, using pen and paper to conceptualize and articulate at least four smart home automation scenarios involving the smart lock. Following this creative task, participants completed an online survey to provide structured feedback on their own scenarios. Our analysis provides an overall understanding of how setting up such automation scenarios affects several aspects of the smart home environment such as security, convenience, and awareness. Additionally, our findings provide a general categorization of such automation scenarios based on the purposes they serve within the smart home as well as shed light on end users’ perceived security or privacy advantages and disadvantages associated with setting up such automation scenarios.


14:30 - 15:00 | Coffee break

15:00 - 16:30 | Technical Paper Session 3: User Perceptions and Misconceptions of Cybersecurity Technologies (20 minutes per paper - including time for questions)
Chair: Emre Kocyigit

"Everything We Encrypt Today Could Be Cracked" — Exploring (Post) Quantum Cryptography Misconceptions

  • Victoria Kunde
  • Jan Magnus Nold
  • Jonas Hielscher

Misconceptions about (post) quantum cryptography among security experts could endanger the successful and appropriate build-up of defenses against attacks through quantum computers. We suspect that accurate knowledge about those quantum topics is not widely available yet. In a first, locally limited, exploratory study, we conducted n = 19 interviews with security experts and students with varying experience to explore their knowledge of and attitudes towards (post) quantum cryptography. Through qualitative content analysis, we identified several quantum misconceptions, with overestimating or underestimating, confusing topics or principles, and lack of context or knowledge as general categories. The participants also showed general mistrust of the institutions driving the build-up of quantum computers. The identified patterns can help address misconceptions about (post) quantum cryptography in security education.

Understanding Operational Technology Personnel's Mindsets and Their Effect on Cybersecurity Perceptions: A Qualitative Study With Operational Technology Cybersecurity Practitioners

  • Stefanos Evripidou
  • Jeremy Daniel McKendrick Watson

Operational Technology (OT) is hardware and software that monitors and controls industrial processes in sectors such as water and energy. OT's increased digitalisation, which has expanded its attack surface, heightened institutional pressures including regulation, and the evolving threat landscape have compelled companies using OT to improve their cybersecurity. Security often clashes with employees’ values and hinders them from performing their primary job tasks, leading to its circumvention. OT personnel are a user group with distinctive characteristics, including the cyber-physical technology they use and, subsequently, their safety culture. Nevertheless, there is little research on how OT personnel's mindsets are formed and, in turn, how these mindsets affect their cybersecurity beliefs and behaviours.

As such, we have conducted 72 interviews with OT cybersecurity practitioners across various sectors on their experiences working with OT practitioners. Our analysis demonstrates a number of factors that shape OT personnel's mindsets: namely, the prioritisation of other operational values, operational realities and challenges, and their occupational pathways. In turn, these factors lead to misperceptions around cybersecurity, specifically in two areas: technological misperceptions, and the stereotyping of occupational practices between IT and OT. Accordingly, we discuss OT cybersecurity practitioners’ efforts to influence cybersecurity in these environments, and how the acknowledgment of these factors and misperceptions aids their efforts. Finally, we call for a better understanding of OT personnel's relationship with cybersecurity by proposing future research avenues.

Deniable Encrypted Messaging: User Understanding after Hands-on Social Experience

  • Anamika Rajendran
  • Tarun Kumar Yadav
  • Malek Al-Jbour
  • Francisco Manuel Mares Solano
  • Kent Seamons
  • Joshua Reynolds

Plausible deniability in cryptography allows users to deny their participation in a particular communication or the contents of their messages, thereby ensuring privacy. Popular end-to-end encrypted messaging apps employ the Signal protocol, which incorporates message deniability. However, their current user interfaces only allow access to the blunt tool of message deletion. Denying a message requires users to claim that the counterpart in their conversation has the technical sophistication to forge a message, when no usable message forgery tools are available. We evaluate a step towards bridging this gap in the form of a new transcript-editing feature implemented within the Signal app, which allows each user to maintain an independent, locally-editable transcript of their conversation. We gave users hands-on experience with this app in the context of resolving a social dispute, and measured their ability to understand its implications both technically and ethically. Users find our interface intuitive and can reason about deniability, but are divided over the circumstances in which deniability is appropriate or desirable. We recommend that users be given transparent access to choose when their conversations are deniable versus non-repudiable, instead of the status quo of somewhere in between. Our study introduces a novel approach by providing hands-on experience and evaluating its usability. This method offers insights into practical deniability implementation and lays the groundwork for future research.


From 17:00 | Reception. Place: House 21 at KAU.

Tuesday 1st October 2024
9:15 - 10:15 | Keynote 2: Prof. Awais Rashid. Title: "Equitable Privacy: Understanding Privacy Requirements of Marginalised and Vulnerable Populations"

10:15 - 10:30 | Coffee Break

10:30 - 12:00 | Technical Paper Session 4: Improving Actionability, Usability, and Standardization in Security Practices (20 minutes per paper - including time for questions)

Chair: Simon Parkin

"I'm Getting Information that I Can Act on Now": Exploring the Level of Actionable Information in Tool-generated Threat Reports

  • Alvi Jawad
  • Hala Assal
  • Jason Jaskolka

Existing threat modeling tools have been investigated primarily for their functionality and features, but not for the contents that they automatically generate, i.e., threat reports. This paper presents the first study focusing on threat reports; we explore what users consider “actionable information” in such reports, and assess how well threat reports support users in taking action to address identified threats. Our qualitative study involved semi-structured interviews with 15 participants (primarily software engineers and security experts) from North America. The study involved tasks related to identifying and assessing pieces of actionable information in threat reports generated for a healthcare system case study by two prominent threat modeling tools, namely Microsoft Threat Modeling Tool and OWASP Threat Dragon. Our findings cover five overarching themes, determined based on existing literature related to better threat modeling tool outputs (i.e., threat reports): quality of suggestions, clarity of presentation, threat prioritization capability, security decision support, and adequate coverage. Based on our analysis, we found that users consider information detailing threats and mitigation suggestions to be directly actionable, and they consider a threat prioritization scheme and a statistical overview of insights as supplementary actionable information. We also assess the level of actionable information present in existing threat reports and outline why the current reports lack adequate coverage of the actionable information necessary to make decisions with high confidence. To address the identified shortcomings and satisfy user needs, we provide recommendations for improving the state of threat reports in existing and emerging threat modeling tools.

From Chaos to Consistency: The Role of CSAF in Streamlining Security Advisories

  • Julia Wunder
  • Janik Aurich
  • Zinaida Benenson

Security advisories have become an important part of vulnerability management. They can be used to gather and distribute valuable information about vulnerabilities. Although there is a predefined broad format for advisories, it is not truly standardized. As a result, their content and form vary greatly depending on the vendor. Thus, it is cumbersome and resource-intensive for security analysts to extract the relevant information. The Common Security Advisory Framework (CSAF) aims to bring security advisories into a standardized format, which is intended to solve existing problems and to enable automated processing of advisories. However, a new standard only makes sense if it can benefit users. Hence, the questions arise: Do security advisories cause issues in their current state? Which of these issues is CSAF able to resolve? What is the current state of automation?

To investigate these questions, we interviewed three security experts, and then conducted an online survey with 197 participants. The results show that problems exist and can often be traced back to confusing and inconsistent structures and formats. CSAF attempts to solve precisely these problems. However, our results show that CSAF is currently rarely used. Although users perceive automation as necessary to improve the processing of security advisories, many are at the same time skeptical. One of the main reasons is that systems are not yet designed for automation and a migration would require vast amounts of resources.

Usability Study of Security Features in Programmable Logic Controllers

  • Karen Li
  • Kopo Ramokapane
  • Awais Rashid

Programmable Logic Controllers (PLCs) drive industrial processes critical to society, for example, water treatment and distribution, and electricity and fuel networks. Search engines such as Shodan have highlighted that PLCs are often left exposed to the Internet, one of the main reasons being misconfigured security settings. This leads to the question: why do these misconfigurations occur and, specifically, does the usability of security controls play a part? To date, the usability of configuring PLC security mechanisms has not been studied. We present the first investigation through a task-based study and subsequent semi-structured interviews (N=19). We explore the usability of PLC connection configurations and two key security mechanisms (i.e., access levels and user administration). We find that the use of unfamiliar labels, layouts, and misleading terminology exacerbates an already complex process of configuring security mechanisms. Our results uncover various (mis)perceptions about the security controls and how design constraints, e.g., safety and the lack of regular updates due to the long-term nature of such systems, pose significant challenges to the realization of modern HCI and usability principles. Based on these findings, we provide design recommendations to bring usable security in industrial settings on par with its IT counterpart.


12:00 - 13:00 | Lunch

13:00 - 14:30 | Technical Paper Session 5: Interpersonal Security Threats and User Reactions (20 minutes per paper - including time for questions)
Chair: Robert Biddle

"I'm going to try her birthday": Investigating How Friends Guess Each Other's Smartphone Unlock PINs in the Lab

  • Elena Korkes
  • Collins W. Munyendo
  • Alvin Isaac
  • Victoria Hennemann
  • Adam J. Aviv

Despite the recent popularity of biometrics for smartphone unlocking, knowledge-based authentication schemes (e.g., PINs) remain crucial for smartphone security, and are typically required when the device restarts or the biometric fails. Previous studies on PINs assume an attacker without any personal information about the victim, with many often speculating that an attacker with some personal information about the victim (e.g., a friend) might fare better when guessing their smartphone unlock PIN. However, no study has investigated this yet, despite friends or partners being those most likely to attempt PIN guessing. In this work, we explore how attackers who have some personal information about or relationship with the victim guess smartphone unlock credentials, by recruiting 9 pairs of participants (n = 18) with some relationship to guess each other’s PINs or passwords in an in-person lab experiment. We find that most participants’ initial guessing strategies are birthdays as well as modifications of these birthdays, followed by geometric patterns and repetitions. In contrast, most participants indicated they would try random numbers or common PINs for strangers. While no participant was able to guess another participant’s PIN, about half indicated they would not change their PIN or password even if it was guessed by their study partner. We additionally combine participants’ guesses to guess PINs selected in a prior study, finding that our participants’ guesses perform similarly to the optimized simulated attackers used in previous work. We conclude with takeaways and interesting directions for future research.

Detection and Impact of Debit/Credit Card Fraud: Victims' Experiences

  • Eman Alashwali
  • Ragashree Mysuru Chandrashekar
  • Mandy Lanyon
  • Lorrie Faith Cranor

It might be intuitive to expect that small or reimbursed financial loss resulting from credit or debit card fraud would have low or no financial impact on victims. However, little is known about the extent to which financial fraud impacts victims psychologically, how victims detect the fraud, which detection methods are most efficient, and how the fraud detection and reporting processes can be improved. To answer these questions, we conducted a 150-participant survey of debit/credit card fraud victims in the US. Our results show that significantly more participants reported that they were impacted psychologically than financially. However, we found no relationship between the amount of direct financial loss and psychological impact, suggesting that people are at risk of being psychologically impacted regardless of the amount lost to fraud. Despite the fact that bank or card issuer notifications were related to faster detection of fraud, more participants reported detecting the fraud after reviewing their card or account statements rather than from notifications. This suggests that notifications may be underutilized. Finally, we provide a set of recommendations distilled from victims’ experiences to improve the debit/credit card fraud detection and reporting processes.

"It was jerks on the Internet being jerks on the Internet": Understanding Zoombombing Through the Eyes of Its Victims

  • Chen Ling
  • Gianluca Stringhini

Zoombombing, a kind of attack in which uninvited people intrude into online meeting rooms and harass meeting participants, emerged during the COVID-19 lockdowns, when people came to rely on online meeting tools to keep functioning. To understand how Zoombombing attacks unfold and their impact on participants, we performed 15 semi-structured interviews with victims of Zoombombing, who were either hosts or participants of the targeted online meetings. Through our interviews, we find that Zoombombing attacks can be distressing for attendees and that it is difficult for hosts to react to them effectively due to multiple factors, including the difficulty of identifying the intruders in crowded public meetings and the confusion that arises among attendees when a meeting is terminated and they are asked to join a new, more secure one. We find that meeting information leaks because of weak meeting passwords, which are either posted publicly or easy to guess. We also find that the hosts of online meetings prioritize accessibility for their attendees to encourage participation and neglect the risks of lacking security measures, which facilitates attacks. This paper provides a comprehensive overview of Zoombombing attacks and the challenges in mitigating them, offering a blueprint for the research community to further investigate this problem.

Stop Following Me! Evaluating the Malicious Uses of Personal Item Tracking Devices and Their Anti-Stalking Features

  • Kieron Ivy Turk
  • Alice Hutchings

Personal item tracking devices are popular for locating lost items such as keys, wallets, and suitcases. Originally created to help users find personal items quickly, these devices are now being abused by stalkers and domestic abusers to track their victims’ location over time. Some device manufacturers created ‘anti-stalking features’ in response, and later improved on them after criticism that they were insufficient. We analyse the use of the anti-stalking features of five brands of tracking devices through a gamified naturalistic quasi-experiment in collaboration with the Assassins’ Guild student society. To understand how these devices are used for stalking and how people make use of the provided anti-stalking features, we provide 40 of 91 participants with a tracker that can be used to follow any other participant, and observe both how trackers are used and how people avoid being tracked. Despite participants knowing they might be tracked and being incentivised to detect and remove the tracker, the anti-stalking features were not useful and were rarely used. We also identify additional issues with feature availability, usability, and effectiveness. These failures combined imply a need to greatly improve anti-stalking features to prevent trackers from being abused.


14:30 - 14:45 | Coffee Break

14:45 - 16:15 | Technical Paper Session 6: Legal, Ethical, and Usability Issues in Cybersecurity (20 minutes per paper - including time for questions)

Chair: Anastasia Sergeeva

A Systematic Approach for A Reliable Detection of Deceptive Design Patterns Through Measurable HCI Features

  • Emre Kocyigit
  • Arianna Rossi
  • Gabriele Lenzini

Dark patterns are deceptive design elements of digital choice architectures that are implemented to drive users’ actions towards decisions that are not necessarily in their best interest, such as accepting privacy-invasive practices. Most dark patterns are considered unlawful, but their description is rather informal. Thus, detecting dark patterns among the various existing design patterns and discerning what is an illegitimate design practice may depend on the subjective interpretation of expert users (such as regulators, civil society organizations, and academic researchers) who may not fully agree. The need to ground any evaluation on evidence calls for a reliable approach that is based on descriptions relying on observable, measurable features. Taking cookie consent as a use case, where dark patterns are ubiquitous and intensively under scrutiny, we propose a systematic approach to describe the characteristics of deceptive design patterns that is intended to reconcile the interpretations of expert users. In particular: i) we identify use case-specific dark pattern types using the ontology drafted by Gray et al. (2024); ii) we clarify the relationships between those types and the dark patterns’ attributes proposed by Mathur et al. (2021); iii) we propose a list of observable and measurable user-interaction features of dark patterns covering visual, process, and language design aspects; and iv) we describe the attributes based on our measurable features to lower the subjectivity of users’ interpretation. Finally, we discuss our proposal’s cross-domain applicability and the potential for future work, including how to improve the descriptions of the attributes via semiformal languages, to generate an objective and usable framework to assess the presence of deceptive design patterns in digital interfaces.

A Metric to Assess the Reliability of Crowd-sourced SUS Scores: A Case Study on the PoPLar Authentication Tool

  • Yonas Leguesse
  • Mark Vella
  • Christian Colombo
  • Julio Hernandez-Castro

The concern of inattentive respondents in surveys is widely acknowledged and has been extensively researched, and crowd-sourcing platforms further complicate this issue with the additional problem of bot usage to automatically respond to surveys. This work explores this issue within the critical domain of usable security, highlighting limitations of the use of crowd-sourcing platforms for usability studies, particularly in the context of obtaining valid and reliable System Usability Scale (SUS) scores.

While crowd-sourcing platforms may already offer built-in quality controls, for example, ensuring a respondent’s positive historical performance with regard to the completion of previous surveys, our exploratory surveys showed that issues with bots and careless respondents persist. Building upon these insights, our main contribution is the proposal of a quality metric, the SUS Consistency Score (CSc), measuring the consistency of a respondent’s SUS statements.

We study the effectiveness of the proposed CSc for SUS by conducting a usability study of a recently proposed smartphone security mechanism, PoPLar. Initial findings from a preliminary crowd-sourced usability study of PoPLar had indicated promising results. However, a SUS assessment was not yet performed. The removal of responses based on different quality control thresholds, including CSc, causes significant changes in the obtained SUS score, to the extent of ranking PoPLar differently when compared to a wide range of security proposals for which a SUS score is available. A key implication of this result is that existing SUS scores for all these controls may require revisiting, potentially even revising them upward once the SUS CSc is used as the quality metric to ensure valid responses.
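For readers unfamiliar with SUS scoring, the following is a minimal sketch of the standard SUS computation (Brooke's 10-item, 1-5 Likert scale). The CSc metric is the paper's own contribution and is not reproduced here; the `naive_consistency` heuristic below is a hypothetical illustration of the general consistency idea, namely that positively- and negatively-worded items should be answered in opposite directions.

```python
# Standard SUS scoring: ten items on a 1-5 Likert scale.
# Odd-numbered items (1, 3, 5, 7, 9) are positively worded;
# even-numbered items (2, 4, 6, 8, 10) are negatively worded.

def sus_score(responses):
    """Compute the SUS score (0-100) from ten 1-5 Likert responses."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0,2,... = items 1,3,...
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

def naive_consistency(responses):
    """Hypothetical heuristic (NOT the paper's CSc): fraction of adjacent
    odd/even item pairs answered in opposite (or neutral) directions.
    A careful respondent should score close to 1."""
    agree = sum(
        1 for i in range(0, 10, 2)
        if (responses[i] - 3) * (responses[i + 1] - 3) <= 0
    )
    return agree / 5

# A straight-lining respondent (all 5s) gets a mid-range SUS score
# but a suspicious consistency profile.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best-case answers -> 100.0
print(sus_score([5] * 10))                         # straight-lining -> 50.0
print(naive_consistency([5] * 10))
```

The example shows why raw SUS scores from inattentive or automated respondents can mislead: straight-lining still yields a plausible-looking score of 50, which is exactly the kind of response a consistency-based quality metric is designed to flag.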

"Just a tool, until you stab someone with it": Exploring Reddit Users' Questions and Advice on the Legality of Port Scans

  • Temima Hrle
  • Mary Milad
  • Jingjie Li
  • Daniel Woods

Users, particularly amateurs, face uncertainties about technology law related to both interpretation and enforcement. This uncertainty can have a chilling effect on how users experiment with technology. However, little is known about the precise uncertainties that users face and what kind of advice is available. Our paper focuses on user questions and advice surrounding the legality of port scanning, a dual-purpose technique used in both defensive and offensive security. We identified and analyzed 414 pieces of advice given in response to questions about the legality of port scanning across 36 Reddit threads. We find that users ask two types of questions: (1) reactive questions, in which they have scanned and are concerned about the consequences; and (2) proactive questions, in which they ask about legality and seek ways to comply with the law. We found no consensus in the advice about legality or the likelihood of prosecution. In justifying advice, users deployed a range of anecdotes, analogies, and URLs. Subtle variations on the analogy between port scanning and physical building security are used to explain why it is both legal and illegal. Users also reason from individual cases, such as arguing that prosecution is unlikely because the user had not personally been prosecuted, or that prosecution is likely because Aaron Swartz was prosecuted. Finally, the most influential URL was a “Legal Issues” page maintained as part of an open-source project. We reflect on how these results can inform forum moderation and public-policy dissemination.

Balancing The Perception of Cheating Detection, Privacy and Fairness: A Mixed-Methods Study of Visual Data Obfuscation in Remote Proctoring

  • Suvadeep Mukherjee
  • Verena Distler
  • Gabriele Lenzini
  • Pedro Cardoso-Leite

Remote proctoring technology, a cheating-preventive measure, often raises privacy and fairness concerns that may affect test-takers’ experiences and the validity of test results. Our study explores how selectively obfuscating information in video recordings can protect test-takers’ privacy while ensuring effective and fair cheating detection. Interviews with experts (N=9) identified four key video regions indicative of potential cheating behaviors: the test-taker’s face, body, background and the presence of individuals in the background. Experts recommended specific obfuscation methods for each region based on privacy significance and cheating behavior frequency, ranging from conventional blurring to advanced methods like replacement with deepfake, 3D avatars and silhouetting. We then conducted a vignette experiment with potential test-takers (N=259, non-experts) to evaluate their perceptions of cheating detection, visual privacy and fairness, using descriptions and examples of still images for each expert-recommended combination of video regions and obfuscation methods. Our results indicate that the effectiveness of obfuscation methods varies by region. Tailoring remote proctoring with region-specific advanced obfuscation methods can improve the perceptions of privacy and fairness compared to the conventional methods, though it may decrease perceived information sufficiency for detecting cheating. However, non-experts preferred conventional blurring for videos they were more willing to share, highlighting a gap between the perceived effectiveness of the advanced obfuscation methods and their practical acceptance. This study contributes to the field of user-centered privacy by suggesting promising directions to address current remote proctoring challenges and guiding future research.

16:15 - 16:30 | Closing remarks and Farewell

18:00 - 19:00 | Social event: Sandgrund Lars Lerin art gallery

From 19:00 | Conference Dinner. Place: Värmlands Museum Restaurangen





Registration

Registration is mandatory for participation in EuroUSEC. Please register using the following link: Register Now

At least one author of each accepted paper must register by September 1st, 2024 (regardless of online or onsite participation). For all other participants, registration will be open until September 15th (AoE) for those who want to take part onsite, and until September 22nd (AoE) for those taking part online.

The prices for the registration are as follows.

NOTE: Each paper must have at least one registration under the "Author" option (either online or onsite). Please note that authors are expected to present their papers in person at the conference. The online option is reserved for those facing legitimate travel difficulties; however, just the need for a visa to travel to Sweden is not considered a valid reason. We strongly recommend that authors requiring a visa to travel to the Schengen area, including Sweden, apply as early as possible. To obtain a visa invitation letter, please contact the conference chairs promptly.

Author (both online and onsite) 3750 SEK incl. 25% VAT (tax) (3000 SEK excl. VAT)
Standard (onsite) 3125 SEK incl. 25% VAT (2500 SEK excl. VAT)
Standard (online) 1250 SEK incl. 25% VAT (1000 SEK excl. VAT)


Event Logistics

EuroUSEC 2024 will be held on September 30 and October 1 in Karlstad, Sweden. Discover the perfect synergy of technology, innovation, and tranquility as EuroUSEC 2024 makes its way to the charming city of Karlstad.

Karlstad, located in Värmland County at the meeting point of Scandinavia's longest river, Klarälven, and Europe's largest lake, Vänern, is a city with a rich history dating back to the Viking Age. It was officially recognized as a city in the 16th century by King Charles IX of Sweden, earning its name "Charles's city." Traditionally known for its forest-based industries, Karlstad and Värmland have recently made significant advancements in the IT and technology sectors, including cybersecurity, software development, and IT services. Karlstad University plays a pivotal role in this shift, offering specialized programs and fostering collaboration between academia, industry, and government. This combination of industrial heritage and IT innovation positions Karlstad as an ideal location for hosting conferences like EuroUSEC 2024.

The city's vibrant downtown area, particularly around Stora Torget (the “big square”), is a cultural and social hub featuring dining, entertainment, and historic sites like the Sandgrund Lars Lerin art gallery (which we will visit on the second day of the conference), the Värmland County Museum, and Karlstad Cathedral.

Event location: Eva Erikssonsalen (House 21: 21A 342) at Karlstad University (Directions to KAU)

Travelling to Karlstad: Travelling to Karlstad from Oslo, Gothenburg, and Stockholm involves broadly similar travel times, making it convenient regardless of your starting airport. From Oslo, the journey involves a train or bus and takes around 3 to 4 hours. Travel times from Gothenburg and Stockholm to Karlstad are similarly manageable (between 3 and 4 hours). Note, however, that the only direct connection to Karlstad is the bus from Arlanda Airport (Stockholm); from the airports in Oslo and Gothenburg, you must first travel to the respective central station and then take a bus or train to Karlstad. Given this accessibility, we hope that EuroUSEC 2024 participants will be able to choose their preferred travel route based on their needs. Tip: when you have a choice, buses in Sweden are generally more reliable than trains with regard to travel times. More information on how to travel to Karlstad.

Travelling within Karlstad: The nearest bus stop to the university is the ‘Universitet’ bus stop (directly outside House 1, the main entrance). Buses run regularly from Stora Torget to the university: lines 1, 2, and 3 all go there, but line 1 is the fastest (around 12 minutes) and departs from Stop Point B at Stora Torget. You can buy tickets via the Karlstadsbuss app for iPhone or Android, which is available in English. Alternatively, you can buy tickets on board using your bank card.

Accommodation: If you need a hotel reservation, please book as follows:

We have pre-booked a number of rooms at Elite Stadshotellet Karlstad. Address: Kungsgatan 22, central Karlstad.
Price: SEK 1155 per single room, including breakfast and VAT.
Last day for booking: 30 August (after that date, we cannot guarantee room availability).

Bookings are made via the link.
For more information about the hotel, please visit their website.

You can also explore the following hotels located in Karlstad city center. From the city center (Stora Torget), several buses provide convenient access to KAU (the conference venue). The quickest route is via bus number 1, which travels between Karlstad Universitetet and Karlstad Stora Torget in approximately 11-12 minutes.

Clarion Collection Hotel Plaza
Västra Torggatan 2
652 25 Karlstad
4 min walk to Stora Torget (city center)
Clarion Collection Hotel Drott
Järnvägsgatan 1
652 25 Karlstad
6 min walk to Stora Torget
Scandic Winn
Norra Strandgatan 9-11
652 24 Karlstad
4 min walk to Stora Torget
CarlstadCity Boutique Hotel
Järnvägsgatan 8
652 25 Karlstad
4 min walk to Stora Torget
Hotel Fratelli
Drottninggatan 17
652 25 Karlstad
3 min walk to Stora Torget

Social Contract

To make EuroUSEC as effective as possible for everyone, we ask that all participants commit to our social contract:

  1. Engage and actively participate (to the degree you feel comfortable) with each talk.
  2. Be sure your feedback is constructive, forward-looking, and meaningful.
  3. The usable security & privacy community has earned a reputation for being inclusive and welcoming to newcomers; please keep it that way.
  4. We encourage attendees to aim to meet at least three new people from this year's EuroUSEC. The meal breaks and the participatory activity are the perfect opportunities for this.
  5. We strongly encourage tweeting under the hashtag "#EuroUSEC2024" and otherwise spreading the word about work you find exciting at EuroUSEC. However, please do not record EuroUSEC itself or further distribute comments made on our Slack instance.
  6. EuroUSEC 2024 follows the USABLE events Code of Conduct.