WORK

related work 1

Takashi Ikegami
“MTM [Mind Time Machine]” (commissioned by YCAM)

[photo] Takashi Ikegami “MTM [Mind Time Machine]”

As a new challenge for this exhibition, and by way of extending the theme of Seiko Mikami’s “Desire of Code,” we at the same time present a new art installation, “MTM [Mind Time Machine],” directed by the complex systems scientist Takashi Ikegami. Ikegami has produced an interactive work on the processes of mind time from the viewpoint of a self-organizing, subjective timeline.
The concept of this work is based on the studies of the American brain physiologist Benjamin Libet. Ikegami reconstructs the subjective momentary “Now” by producing an optical landscape of human cognition that emulates the network of consciousness, unconsciousness, and memory processes.
In this installation, “MTM [Mind Time Machine],” 15 cameras of 4 different types will shoot the actual movements of the audience as they enter the venue. Those images will be processed and evolved in real time by programs originally developed from the ideas of complex systems science. The final images will be projected onto three big screens, which correspond to consciousness, unconsciousness, and memory, and the corresponding sound images will be produced over 3.2 channels.

about new installation work at YCAM
text: Takashi Ikegami

Humans have subjective/internal time structures, as opposed to objective time. By objective time I mean a physical time scale that can be measured by a clock; our minds’ time cannot be measured in the same way. In “MTM [Mind Time Machine],” we try to emulate the dynamics of mind time. I was inspired by the ideas of Benjamin Libet, an American brain physiologist who holds that there are time differences between physical time and mind time in processing information, and that these are “temporally” edited backward and forward. For example, Libet showed that we actually have a 0.5-second delay in sensing a stimulus from our feet or hands, but that our minds correct the delay by referring the event backward. This backward referral never occurs when the somatosensory region of the brain is stimulated directly. In other words, the way a stimulus is delivered to the brain through our embodiment determines the subjective sense of the momentary “now.”
In this art installation, the system shoots and edits visual images, using the spatially and temporally distributed visual and sound frames to self-organize the momentary “now.” Its core program is a neural network that includes chaos (a mechanism that expands small differences) and a meta-network composed of such neural networks. Using this system as hardware, and unstable feedback and chaotic itinerancy as a conceptual framework, we can reconfigure subjective timelines. This is a proposal for an artificial life that self-organizes the massive flow of visual data to reconfigure human consciousness as a temporal order.
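The key property invoked above, chaos as a mechanism that expands small differences, can be illustrated with a far simpler toy than the installation's actual neural networks. The sketch below uses the logistic map, a standard chaotic system; every parameter here is illustrative and hypothetical, not taken from the work's program.

```python
# Toy illustration of "a mechanism that expands small differences":
# the logistic map at r = 4 is chaotic, so two almost identical
# initial states drift apart until they become uncorrelated.

def logistic(x, r=4.0):
    """One step of the logistic map; chaotic at r = 4."""
    return r * x * (1.0 - x)

def divergence(x0, eps=1e-8, steps=60):
    """Iterate two trajectories started eps apart and return the
    largest separation they reach along the way."""
    x, y = x0, x0 + eps
    dmax = abs(x - y)
    for _ in range(steps):
        x, y = logistic(x), logistic(y)
        dmax = max(dmax, abs(x - y))
    return dmax

print(divergence(0.2))  # a microscopic difference grows to macroscopic size
```

In a network of such chaotic units, this sensitivity means that tiny differences in the incoming camera frames can steer the whole system onto different trajectories, which is one way to read the "unstable feedback" described above.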

translated by Takashi Ikegami

artist profile
Takashi Ikegami: Concept / Direction
Yuta Ogai: Programming
Motoi Ishibashi: Programming
Keiichiro Shibuya: Sound
evala: Sound
Kenshu Shintsubo: Visual adviser
Corporate sponsor: AD Science Co.
Cooperation: The University of Tokyo, Graduate School of Arts and Sciences Department of General Systems Studies Ikegami lab; ATAK; DGN co., ltd.
Co-developed with YCAM InterLab
Soichiro Mihara: Technical direction / Spatial design
Richi Owaki: Video direction
Etsuko Nishimura, Satoshi Hama: Sound engineering
Fumie Takahara, yano butaibijutsu [Shoichi Nishida, Tetsuya Oda, Ikuko Yano, Hiroyuki Yamauchi]: Lighting
Takuro Iwata, Mitsuo Uno: Spatial construction
Yohei Miura: Spatial construction support
Ryuichi Maruo: Documentation

related work 2

Seiko Mikami + Sota Ichikawa
“gravicells - gravity and resistance”
(revised version / commissioned by YCAM)

[photo] Seiko Mikami + Sota Ichikawa “gravicells - gravity and resistance”

The spatial expression of "gravicells" is rendered consistently by the real-time calculation of its dynamics. The ongoing dynamic movements are composed of the counteracting forces around gravity; gravity does not materialize without a reaction force. In this artwork, it is possible for us to develop a new human sense by feeling gravity differently than usual and gaining a new perception of the body. Through special devices and sensors, the work provides a space with hypothetical dynamics governed by the opposing forces of gravity and resistance.
Walking freely around the site, audiences are able to feel the gravity they are seldom aware of, the resistance to it, and the effects caused by other participants. All movements and changes made by visitors are transformed by the sensors into movements of sound and geometrical images, so that the whole space develops and changes as an interactive installation. By standing or moving around on an unstable flat floor (6 m x 6 m), the audience participates in the installation. Each participant becomes "an observation point," and when another participant joins they form "plural moving observation points"; the number of simultaneous participants is not limited. The floor consists of 225 cell-like grid units of 40 cm x 40 cm, each fitted with specially developed sensors that instantly and continuously detect changing position, weight, and speed. Additionally, the position of the exhibition space itself is measured by GPS; several linked GPS satellites become part of the work, and their moving direction is shown as a locus. The dynamics of GPS thus turn the site into a simultaneously moving one, and the installation space comes to involve the outside environment.
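As a rough illustration of how such a weight-sensor floor could be read, the sketch below takes the 15 x 15 grid of 40 cm cells from the description and estimates a load's position as a weighted centroid of the cell readings. The function names and the centroid scheme are hypothetical, not the actual gravicells software.

```python
# Hypothetical reading of a 15 x 15 grid of weight sensors
# (225 cells of 0.4 m x 0.4 m, i.e. a 6 m x 6 m floor, as described above).

CELL = 0.4   # cell edge length in metres
N = 15       # 15 x 15 = 225 cells

def centroid(readings):
    """Weighted centre of pressure over the whole floor.
    readings[row][col] is the load on one cell (arbitrary units).
    Returns ((x, y) in metres, total load), or (None, 0.0) if empty."""
    total = sum(sum(row) for row in readings)
    if total == 0:
        return None, 0.0
    x = sum(readings[r][c] * (c + 0.5) * CELL
            for r in range(N) for c in range(N)) / total
    y = sum(readings[r][c] * (r + 0.5) * CELL
            for r in range(N) for c in range(N)) / total
    return (x, y), total

# One person standing near the centre of the floor:
frame = [[0.0] * N for _ in range(N)]
frame[7][7] = 60.0          # load concentrated on the central cell
pos, load = centroid(frame)  # position near (3.0, 3.0) metres
```

Tracking how this centroid shifts from one sensor frame to the next would give the speed and direction of each "observation point"; the real installation presumably resolves multiple participants and feeds these values into the sound and image generation.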

artist profile
Seiko Mikami + Sota Ichikawa: Artworks / Concept
Co-operation: TAKEGAHARASEKKEI (Hardware)
Co-developed with YCAM InterLab
Soichiro Mihara: Technical direction
Richi Owaki: Video direction
Etsuko Nishimura, Satoshi Hama: Sound engineering
Fumie Takahara, yano butaibijutsu [Shoichi Nishida, Tetsuya Oda, Ikuko Yano, Hiroyuki Yamauchi]: Lighting
Takuro Iwata, Mitsuo Uno: Spatial construction
Yohei Miura: Network
Ryuichi Maruo: Documentation