Tracks
Responsible and Trusted AI: An Interdisciplinary Perspective
Organizers:
- Kevin Baum (German Research Center for Artificial Intelligence, DE)
- Thorsten Helfer (CISPA Helmholtz Center for Information Security, DE)
- Sophie Kerstan (University of Freiburg, DE)
- Markus Langer (University of Freiburg, DE)
- Eva Schmidt (TU Dortmund University, DE)
- Andreas Sesing-Wagenpfeil (Saarland University, DE)
- Timo Speith (University of Bayreuth, DE)
As AI systems increasingly permeate diverse sectors and influence many aspects of our lives, ensuring their responsible, trustworthy, and safe development and deployment is more critical than ever. Addressing these challenges requires moving beyond purely technical considerations to grapple with complex societal, ethical, and governance questions. Building on the success of similar tracks at AISoLA 2023 and 2024, this special track seeks to foster interdisciplinary
dialogue by bringing together researchers from a broad range of fields.
We invite contributions from philosophy, law, psychology, economics, sociology, political science, informatics, and other relevant disciplines. This track offers a platform to share recent research and collaboratively explore the societal implications of designing, implementing, and regulating AI systems.
Topics of interest include, but are not limited to:
- Clarifying key concepts: Critical analysis of terms such as “trustworthy AI,” “trusted AI,” “responsible AI,” and other related frameworks and terminologies.
- Balancing benefits and risks: Identifying and weighing the individual and societal benefits and risks of AI systems.
- Ethical design: Exploring value alignment and ethical considerations in the design and development of AI systems.
- Addressing public concerns: Tackling fears, misconceptions, and biases surrounding AI, including issues of privacy, security, trustworthiness, and responsibility.
- Legal implications: Analyzing the legal consequences of deploying AI systems, including questions of compliance and accountability.
- Regulatory frameworks: Developing robust legal and policy frameworks to support responsible AI adoption, with a focus on the challenges of complying with the upcoming EU AI Act.
- Labor market impact: Assessing the implications of AI on employment, workforce dynamics, and economic structures.
- AI for social good: Investigating opportunities and challenges in leveraging AI for societal benefit, including economic growth and problem-solving in key sectors.
- Responsibility and accountability: Examining the attribution of responsibility and accountability in AI development and application.
- Human oversight: Understanding the role of human oversight in AI decision-making processes and its limitations.
- Legal liability: Addressing civil liability and other legal responsibilities in cases of malfunctioning AI systems.
- Ownership and authorship: Exploring issues of ownership, authorship, and intellectual property in the context of generative AI.
- Copyright and revenue sharing: Evaluating intellectual property rights, copyright, and equitable revenue sharing for AI-generated works.
- AI in education: Examining the impact of AI on education, including grading systems, academic integrity, and new learning paradigms.
- Bias and fairness: Identifying, addressing, and mitigating bias, discrimination, and algorithmic unfairness in AI systems.
- Transparency and explainability: Investigating the roles of transparency, explainability, and traceability in reducing societal risks.
- Holistic model assessment: Evaluating AI models within broader decision-making contexts, focusing on acceptability, trustworthiness, and user trust.
- Normative choices in AI design: Critically examining the normative assumptions and ethical choices embedded in AI system design and deployment.
By addressing these topics, this track aims to foster a deeper and more comprehensive understanding of AI and its societal implications. It seeks to promote interdisciplinary dialogue on the development of responsible, appropriately trusted, and beneficial AI technologies while tackling the broader societal challenges these systems pose in real-world applications.
This track is part of the AISoLA conference, which serves as an open forum for discussing recent advancements in machine learning and their far-reaching implications. AISoLA welcomes researchers from diverse fields—including computer science, philosophy, psychology, law, economics, and social sciences—to engage in an interdisciplinary exchange of ideas and forge new collaborations.
For any questions, please feel free to contact Eva Schmidt at eva.schmidt@tu-dortmund.de.
If you would like to contribute to the conference proceedings, we invite you to submit a paper. Submissions can be either full papers (12–15 pages) or short papers (6–11 pages).
For those interested in contributing to the post-proceedings, we welcome the submission of a 500-word abstract.
We particularly encourage interdisciplinary authors and, especially, teams of authors from diverse fields to submit their work!
This track is co-organized by the Centre for European Research in Trusted Artificial Intelligence (CERTAIN).
Submission:
https://equinocs.springernature.com/service/rtai25
AI Assisted Programming
Organizers:
- Wolfgang Ahrendt (Chalmers University of Technology, SE)
- Bernhard Aichernig (Graz University of Technology, AT)
- Klaus Havelund (Jet Propulsion Laboratory, US)
Neural program synthesis, using large language models (LLMs) trained on open-source code, is quickly becoming a popular addition to the software developer’s toolbox. Services such as ChatGPT and GitHub Copilot, and their integrations with popular IDEs, can generate code in many different programming languages from natural language requirements. This opens up fascinating new perspectives, such as increased productivity and accessibility of programming, also for non-experts. However, neural systems do not come with guarantees of producing correct, safe, or secure code. They produce the most probable output based on their training data, and there are countless examples of coherent but erroneous results. Even alert users fall victim to automation bias: the well-studied tendency of humans to be over-reliant on computer-generated suggestions. Software development is no exception.
This track is devoted to discussions and exchange of ideas on questions like:
- What are the capabilities of this technology when it comes to software development?
- What are the limitations?
- What are the challenges and research areas that need to be addressed?
- How can we harness the rising power of code co-piloting while achieving a high level of correctness, safety, and security?
- What does the future look like? How should these developments impact future approaches and technologies in software development and quality assurance?
- What is the role of models, tests, specification, verification, and documentation in conjunction with code co-piloting?
- Can quality assurance methods and technologies themselves profit from the new power of LLMs?
Topics of relevance to this track include the interplay of LLMs with the following areas:
- Program synthesis
- Formal specification and verification
- Model-driven development
- Static analysis
- Testing
- Monitoring
- Documentation
- Requirements engineering
- Code explanation
- Library explanation
Submission:
Small Data Challenges in AI for Materials Science
Organizers:
- Lars Kotthoff (University of Wyoming, US)
- Tiziana Margaria (University of Limerick, IE)
- Elena Raponi (Leiden University, NL)
There is great excitement in materials science about accelerating materials development and chemical synthesis via AI and ML. Traditionally, materials science has evaluated proposed material designs using time-consuming physical experiments and compute-intensive calculations, resulting in a slow, expensive design loop. This, and the lack of a programming standard for matter, hinders efforts to combat climate change, fight disease, and improve the human condition.
Research at the interface of AI/ML and materials science has begun to accelerate this process. Supervised learning can screen out materials that are likely to lack critical properties; Bayesian optimization and active learning can efficiently search a materials design space; computer vision can improve the efficiency and reproducibility of materials characterization. Yet this effort faces a major challenge: available data are far scarcer than in typical AI applications. We must learn smarter, making better use of heterogeneous and high-dimensional experimental measurements and computational predictions, and assimilating multimodal structured data. In contrast to many other application areas of AI, abundant domain knowledge is available in the form of physical laws; incorporating this knowledge into the learning process is crucial to its success.
To advance AI-supported materials science, this workshop will bring together researchers from materials science and from AI/ML, focusing on the small data challenges. Jointly, we will identify common problems and develop plans for tackling them.
Submission:
FAITH - Formal Approaches in Intelligence for Transforming Healthcare
Organizers:
- Martin Leucker (University of Lübeck, DE)
- Violet Ka I Pun (Western Norway University of Applied Sciences, NO)
To ensure future high-quality healthcare support within given financial conditions, digitalization of the healthcare sector is mandatory. This digitalization is implemented either using conventional software development or using techniques from artificial intelligence, and it faces two important challenges: First, healthcare is a safety-critical domain and requires the use of formal methods to ensure that systems work as required. Second, the use of artificial intelligence in safety-critical domains is still not fully understood.
Formal methods build on precise mathematical modelling and analysis to verify a system’s correctness. They comprise static and dynamic analysis techniques such as model checking, theorem proving, and runtime verification, to mention the most prominent ones. Their theoretical foundations have been developed over the past decades, but their application in various domains remains a challenge.
AI in healthcare is transforming the field by improving diagnostics, aiding in medical imaging analysis, personalizing treatment, and supporting clinical decision-making. It enables faster and more accurate analysis of medical data, enhances drug discovery, and assists in robot-assisted surgeries. AI also contributes to predictive analytics, virtual assistants, wearable devices, and clinical decision support. However, it is important to remember that AI is a tool to support healthcare professionals rather than replace them, and ethical considerations and data privacy are crucial in its
implementation.
This track is devoted to discussions and exchange of ideas on topics and questions like:
- Formal modelling and optimization of hospital workflows
- Validation and Clinical Implementation: How can algorithms be rigorously tested and integrated into clinical workflows?
- Robustness and Reliability: How can systems be made robust, reliable, and adaptable to changing patient populations and data quality?
- Human-AI Collaboration: How can systems effectively collaborate with healthcare professionals?
- Long-term Impact and Cost-effectiveness: What is the long-term impact and cost-effectiveness of digitalization in healthcare?
- Explainability and Interpretability: How can AI algorithms be made transparent and understandable to healthcare providers and patients?
- Data Quality and Integration: How can diverse healthcare data sources be integrated while ensuring data quality and interoperability?
- Ethical and Legal Considerations: What ethical and legal frameworks should be established to address privacy, consent, bias, and responsible AI use?
- Regulatory and Policy Frameworks: What regulatory and policy frameworks are needed for the development and deployment of AI in healthcare?
These research questions drive efforts to address technical, ethical, legal, and societal
challenges to maximize the benefits of digital solutions in healthcare.
Submission:
Formal Methods for Intersymbolic AI
Organizers:
- Clemens Dubslaff (Eindhoven University of Technology, Mathematics and Computer Science, NL)
- Ina Schaefer (KIT, DE)
- Maurice ter Beek (CNR, ISTI, IT)
A key benefit of symbolic (rule-based) artificial intelligence (AI) is its formal rigor, which comes at the cost of formal modeling effort and computationally expensive reasoning. In contrast, subsymbolic (data-driven) AI approaches usually outperform symbolic ones in performance but may produce unsound results. In his keynote contribution to last year’s ISoLA track on X-by-Construction Meets AI, André Platzer called for the study of a new field he coined intersymbolic AI, which combines symbolic and subsymbolic AI approaches to exploit the benefits of both worlds. To establish intersymbolic AI, the boundaries of this field and the role of formal methods have to be investigated and clarified. What is the role of formal methods in intersymbolic AI, and how can formal methods ensure rigorous explanations of intersymbolic AI approaches? Is there a generic methodology for intersymbolic AI that ensures it enjoys the benefits of both symbolic and subsymbolic AI? In this year’s AISoLA track on Formal Methods for Intersymbolic AI (FMxIAI), we intend to address these and other questions on formal methods for intersymbolic AI by facilitating collaboration and research in intersymbolic AI, attracting researchers and practitioners from formal methods and from symbolic or subsymbolic AI.
Submission:
Digital Humanities
Organizers:
- Ciara Breathnach (University of Limerick, IE)
- Tiziana Margaria (University of Limerick, IE)
- Tim Riswick (Radboud University, NL)
We are in the middle of an AI and IT revolution and at a point of digital cultural heritage data saturation, but humanities scholarship is struggling to keep pace. In this track we discuss the challenges faced by both computing and the historical sciences in order to outline a roadmap addressing, on the one hand, some of the most pressing issues of data access, preservation, conservation, harmonisation across national datasets, and governance, and, on the other, the opportunities and threats that AI and machine learning bring to the advancement of rigorous data analytics. We concentrate primarily on written/printed documents rather than on pictures and images. We stress the importance of collaboration across discipline boundaries and their cultures to ensure that mutual respect and equal partnerships are fostered from the outset, so that better practices can ensue.
In the track we welcome contributions that address these and other related topics:
- Advances brought by modern software development, AI, ML and data analytics to the transcription of documents and sources
- Tools and platforms that address the digital divide between physical, analog or digital sources and the level of curation of datasets needed for modern analytics
- Design for accessibility and interoperability of data sets, including corpora and thesauri
- Tools and techniques for machine understanding of form-based documents, recognition of digits and codes, handwriting, and other semantically structured data
- Knowledge representation for better analysis of semi-structured data from relevant domains (diaries, registers, reports, etc.)
- Specific needs arising from the study of minority languages and populations, disadvantaged groups and any other rare or less documented phenomena and groups
- Challenges relative to the conservation, publication, curation, and governance of data as open access artefacts
- Challenges relative to initial and continuing education and curricular or extracurricular professional formation in the digital humanities professions
- Spatial digital humanities
Submission:
Low-Code/No-Code Approaches to Application Development: Challenges and Opportunities
Organizers:
- Mike Hinchey (University of Limerick, IE)
- Tiziana Margaria (University of Limerick, IE)
The paradigm of Low-Code (LC) / No-Code (NC) software development is a promising avenue to counter the currently dire shortage of skilled software developers. Because LC/NC approaches use models, it is in principle possible to formalize the “language” of the various flavours of LC and NC. Because these languages are abstract, very often graphical, and tend to be domain-specific, they resonate well with domain experts who are not able to code and do not intend to become skilled programmers in order to do their (non-programmer) jobs. The track will explore the impact of different flavours of models, languages, and tools on the progress of LC/NC towards becoming a paradigm for “everyone” as application developer. Specifically, this track intends to examine the relationship between LC/NC, model-driven development, and formal methods, because this is a key point in the connection between the art and engineering of software design as it has been taught and practiced for decades and the new approach, which, according to some enthusiasts, has the potential to become the “next wave” of revolution in the software and IT landscape next to or after AI.
Submission: