All tagged Virtual Reality

September 1955 - A Virtual Documentary of the Istanbul Pogrom

Virtual Experience Design Lab, 2017

September 1955 is an 8-minute virtual-reality documentary of the Istanbul Pogrom, a government-initiated organized attack on the minorities of Istanbul on September 6–7, 1955. This interactive installation places the viewer in a reconstructed photography studio in the midst of the pogrom, allowing one to witness the events from the perspective of a local shop owner.

Virtuality & Presence

This course aims to help students invent and analyze new forms of extended reality (XR) experiences, computer-based art, gaming, social media, interactive narrative, and related technologies. This semester’s focus is on the theory, design, and implementation of experiences and technologies of virtuality. Toward this end, we shall look at topics including definitions of virtual, virtual reality, augmented reality, alternate reality, hybrid reality, virtual worlds, virtual selves, and more.

Immersive Media

In this course, students develop independent projects in immersive media as it relates to architectural design and spatial storytelling. We examine technologies of immersion including VR/AR, web-based experiences, sound installations, and virtual production workflows. We explore the field of immersive media through readings, precedent artwork, and hands-on experience with production tools, focusing specifically on production pipelines that use real-time simulations and game engines. Students are encouraged to work on aspects of their studio projects and theses. The workshop provides a studio environment for students to collaborate and give feedback to one another.

Reasonable Perception: Connecting Vision and Language Systems

Human-Robot Interaction 2018. ACM.

Understanding explanations of machine perception is an important step toward developing accountable, trustworthy machines. Speech and vision are the primary modalities by which humans collect information about the world, yet linking the visual and natural language domains is a relatively new pursuit in computer vision, and it is difficult to test such systems' performance in a safe environment. To couple human visual understanding and machine perception, we present an explanatory system for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds.