VRST'25: Call for Participation
https://vrst.acm.org/vrst2025/index.php/call-for-participation/
Important Dates
- Paper Abstracts Due Monday 7 July 2025
- Papers (all materials) Due Monday 14 July 2025
- Author Notification for Papers Monday 1 September 2025
- Revised Papers Monday 8 September 2025
- Final Author Notification for Papers Friday 12 September 2025
- Camera-Ready Papers Monday 15 September 2025
- Conference Wednesday 12 – Friday 14 November 2025
*Each deadline is 23:59:59 AoE (Anywhere on Earth, UTC-12:00) on the stated day, regardless of where the submitter is located.
Topics
VRST 2025 welcomes paper submissions relating (but not limited) to the following XR areas:
- Display technology and interaction devices
- Low-latency and high-performance software and applications
- Multi-user and distributed XR
- XR software environments and authoring systems
- Interaction techniques
- Tracking and sensing
- Multimodal XR including haptics, smell, taste, and brain-computer interfaces
- Audio & music processing, sound synthesis, and sonification
- XR-related computer vision, computer graphics, and rendering techniques
- Immersive analytics
- Diversity and Inclusion in XR
- XR-related modeling and simulation techniques
- Avatars and virtual humans, virtual embodiment, and body-ownership illusions
- Teleoperation and telepresence
- Performance testing, user experience, and empirical studies
- Locomotion and navigation
- Perception, presence, and cognition
- XR applications, e.g. training, medical, fabrication
- Multi-disciplinary research projects involving innovative use of XR
PCS Submission Management System
Please use the Precision Conference System (PCS) to submit your work. After clicking the link below, choose “SIGCHI” from the “Society” drop-down list, then “VRST 2025” from the “Conference/Journal” drop-down list.
https://new.precisionconference.com/vrst2025
Submission Guidelines
All accepted papers will be published in the ACM Digital Library in the VRST collection.
Format
Paper submissions must be anonymous for a double-blind review process (see below for more details).
- All submissions should be prepared using the Word or LaTeX templates from the official ACM Master article template packages and TAPS (see https://www.acm.org/publications/taps/word-template-workflow).
- For LaTeX authors, submissions should be made using the double-column format using \documentclass[sigconf,review,anonymous]{acmart}.
- For Word submissions, please use the linked single-column template.
- Authors should prepare their materials using numbered citations and references. See the TAPS webpage for guidance on how content length corresponds to the page limits for the final version.
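For LaTeX authors, the requirements above can be sketched as a minimal acmart preamble. This is an illustrative skeleton only, assuming the standard acmart package from the ACM template distribution; the title, author placeholder, and bibliography file name are hypothetical, and authors should consult the TAPS workflow page for the authoritative setup.

```latex
% Minimal sketch of an anonymized VRST submission (assumptions noted above)
\documentclass[sigconf,review,anonymous]{acmart}

% Numbered citations and references, as required by the guidelines
\citestyle{acmnumeric}

\begin{document}

\title{Anonymized Paper Title}
% The `anonymous' option suppresses author information in the output,
% but acmart still expects an \author command to be present.
\author{Anonymous Author(s)}

\maketitle

\section{Introduction}
% Paper content goes here.

% `references.bib' is a placeholder bibliography file name
\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

\end{document}
```

The `review` option adds line numbers for reviewers, and `anonymous` removes author blocks from the rendered PDF; remember that PDF metadata must still be scrubbed separately, as noted in the anonymity guidelines below.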
Submission Lengths
LaTeX: 4 to 9 pages, double column, excluding references
Word: approx. 8,000 words including space for figures, tables, etc.
Anonymity Guidelines
For initial paper submissions, authors must remove all author and institution information, as well as any clues that would directly identify any of the authors (such as the name of the data-collecting institution, ethics board/IRB institution names, or acknowledgments). Please anonymize all PDF files; note that PDF creator programs may automatically include author information in the file metadata. Citations of the authors’ own published work (including online) must be in the third person, in a manner that is not traceable to the identity of the authors. For example, the wording “in [3], Mountain and River proposed…” is acceptable, whereas “in [3], we proposed…” is not. Here, reference [3] is listed explicitly as “Mountain, A. and River, A., Detecting Mountains and Rivers, In Proc. XYZ ’16, 721-741.” Authors are welcome to extend their own existing non-archival or semi-archival publications, as well as not-yet-peer-reviewed work such as extended abstracts. To avoid accusations of (self-)plagiarism in these cases, please upload the earlier work in the dedicated field in PCS; it can be viewed only by the submission coordinator.
Please note that failure to comply with any of the above requirements and guidelines will result in an automatic desk rejection of the paper!
Contribution Types
We invite many types of research contributions, including interactive systems. However, evaluating systems that are built using existing techniques can be difficult. For example, a system may be built using a known machine-learning technique yet enable entirely new functionality. In this case, reviewers will need to judge the novelty of the functionality the system enables without penalizing the work for leveraging an existing technique.
For reference, here’s a paper about evaluating interactive systems that reviewers and authors should both be familiar with:
James Fogarty (2017): Code and Contribution in Interactive Systems Research
We further accept studies that replicate known findings systematically to confirm or contradict previously found results.
Supplementary Materials
Submissions may optionally be accompanied by additional materials such as images, videos, or electronic documents. These materials do not form a part of the official submission and will be viewed only at the discretion of the reviewers. All content should be in a portable format that is unlikely to require the reviewer to download additional programs; for example, prefer PDF or HTML for documents, PNG or JPEG for images, and QuickTime or MPEG for videos. The total size of all supplementary materials must not exceed 50 MB. To the extent possible, accepted papers should stand on their own, with the additional material providing supplementary information or confirmation of results. It is, however, appropriate to refer to video footage in the paper.
Diversity, Equity, and Inclusion
When designing and presenting user evaluations, please consider the following: user evaluations using non-representative or homogeneous participant populations can introduce biases into the results and conclusions. We recommend that researchers strive to use samples that are representative of the population for which the technology is being designed. If representative samples cannot be collected, the limitations of the population studied should be discussed in the paper, and care must be taken when making claims about the findings. We also recommend reporting participant demographic information for user evaluations so that future researchers can interpret the results in light of the population studied.
Accessibility
We ask authors to be as inclusive as possible when preparing a submission. For instance, please provide alt-text for figures and tables and make supplemental videos accessible with subtitles to facilitate accessible reviewing. We recommend that authors read the SIGCHI Guidelines for an Accessible Submission, as well as the SIGCHI Technical Requirements and Guidelines for Videos (especially the section “What accessibility considerations should I pay attention to when recording my video?”), which describes the process of accessible video creation and captioning. If authors have difficulties making their submissions accessible, they are encouraged to contact the VRST 2025 Accessibility Chairs by emailing accessibility@vrst2025.org.
Policies
At least one author of each accepted paper must register for the conference, and present their work in person at the conference. As the conference is in-person only, there will not be any opportunity to present remotely. Besides, it’s just so much better to meet your fellow XR enthusiasts in person!
All standing ACM policies apply to ACM VRST 2025.
- ACM Policy on Authorship (which also contains statements regarding the use of generative AI)
By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM’s new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.
Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start and we have recently committed to collect ORCID IDs from all of our published authors. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.
AUTHORS TAKE NOTE: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks before the first day of your conference. The official publication date affects the deadline for any patent filings related to published work. (For those rare conferences whose proceedings are published in the ACM Digital Library after the conference is over, the official publication date remains the first day of the conference.)
Contact
papers2025@vrst.acm.org
- Daniel Zielasko, Trier University, Germany
- Kangsoo Kim, University of Calgary, Canada
- Rick Skarbez, La Trobe University, Australia