lnu.se Publications
Jensen, Signe Kjaer
Publications (5 of 5)
Jensen, S. K. (2018). Animating characters through music: A multimodal and musical framework for character analysis exemplified through Pixar’s 'Up'. In: Symbiotic Cinema, Confluences between film and other media: 24th Sercia Conference, 6-8 September 2018, Växjö, Sweden. Paper presented at Symbiotic Cinema, Confluences between film and other media. 24th Sercia Conference, 6-8 September 2018, Växjö, Sweden (pp. 44-45). Växjö: Linnaeus University
Animating characters through music: A multimodal and musical framework for character analysis exemplified through Pixar’s 'Up'
2018 (English). In: Symbiotic Cinema, Confluences between film and other media: 24th Sercia Conference, 6-8 September 2018, Växjö, Sweden. Växjö: Linnaeus University, 2018, p. 44-45. Conference paper, Oral presentation with published abstract (Other academic)
Abstract [en]

Films are inherently multimodal and intermedial media products, bearing on representation both of (intermedial relations) and through (multimodal integration) a range of different media. In this presentation, I focus on the multimodal aspect and explore how to perform a character analysis of film rooted in a multimodal and musical approach, discussing the meaning potential of music as it interacts with other auditory and visual modes in constructing and developing filmic characters.

Following the character theory set forth by Jens Eder, what defines characters and sets them apart from other elements of the filmic narrative is that they are experienced as ‘fictional beings’ having ‘an inner life’ of their own (Eder, 2010). In other words, what animates a character, in the sense of lifting a specific representation from the level of pure artefact to the level of a ‘fictional being’ experienced as having a consciousness, is the impression that the representation is capable of having thoughts and feelings of its own. Since music in both animated and live-action features is often considered to provide emotional content and a background for understanding characters’ feelings, it seems logical that music in diverse film genres should play a significant part both in creating and developing characters as multimodal artefacts and in animating them into ‘fictional beings’.

Using selected examples from the Pixar film ‘Up’ (Docter and Peterson, 2009), I will discuss how to conduct an analysis of character formation in film based on a musical and multimodal semiotic approach, inspired, among others, by the works of Philip Tagg, John Bateman and Mikhail Bakhtin. Following this, I propose that character formation in film depends on a dialogic and polyphonic orchestration of different semiotic modes, including several interacting visual and musical modes, to construct a character as a structured reservoir of meaning potential.

Place, publisher, year, edition, pages
Växjö: Linnaeus University, 2018
Keywords
Multimodality, film music, animation, characters
National Category
Musicology Studies on Film Media Studies
Research subject
Humanities, Musicology; Humanities, Film Studies; Media Studies and Journalism, Media and Communication Science
Identifiers
urn:nbn:se:lnu:diva-81122 (URN)
Conference
Symbiotic Cinema, Confluences between film and other media. 24th Sercia Conference, 6-8 September 2018, Växjö, Sweden
Available from: 2019-03-15. Created: 2019-03-15. Last updated: 2019-06-26. Bibliographically approved
Jensen, S. K. (2018). Exploring children’s understanding and interpretation of music in animated film: A multimodal framework. In: Researching Multimodal Childhood Symposium 2018. Paper presented at Researching Multimodal Childhood Symposium 2018. Odense: University of Southern Denmark
Exploring children’s understanding and interpretation of music in animated film: A multimodal framework
2018 (English). In: Researching Multimodal Childhood Symposium 2018, Odense: University of Southern Denmark, 2018. Conference paper, Oral presentation only (Other academic)
Abstract [en]

This paper will present a specific problematic of a broader PhD project on children’s reception of music and characters in animated features which, along with textual analyses, aims to gather and analyze an empirical ‘child perspective’ on selected films. To this end, video-recorded screenings and interviews have been carried out with small groups of children aged 7 and 11 respectively. In order to fully understand children’s reactions and responses to film, a multimodal approach needs to be taken when transcribing and analyzing these kinds of video-recorded data: one which seeks, as a minimum, to account for the children’s use of facial expressions and body language as well as their use of the verbal mode. This focus on gesture seems particularly important when working with relatively young children and with the content of non-verbal modes such as music, which tends to appeal to people in a highly embodied and intuitive way that often escapes clear verbal description. At this preliminary stage of the project, the data seem to suggest that the children depend heavily on semiotic resources outside the verbal mode, e.g. singing, humming, tapping rhythms, dancing, and imitating the playing of instruments, in order to elicit experiences with music for which a suitable vocabulary might be out of reach. In this paper, I want to present a model for multimodal transcription of interviews based on the possibilities afforded by the software program Multimodal Analysis Video, developed by Kay O’Halloran and her team. In doing so, I will open up a discussion of how the multimodal meaning-making practices of children can be captured, analyzed and understood in academic research.

Place, publisher, year, edition, pages
Odense: University of Southern Denmark, 2018
Keywords
multimodality, interviews, children, reception, film, gesture, transcription
National Category
Media Studies
Research subject
Media Studies and Journalism, Media and Communication Science
Identifiers
urn:nbn:se:lnu:diva-81124 (URN)
Conference
Researching Multimodal Childhood Symposium 2018
Available from: 2019-03-15. Created: 2019-03-15. Last updated: 2019-06-26. Bibliographically approved
Jensen, S. K. (2018). Multimodal depth in film: A proposal for a multimodal and sound-oriented approach to intersemiotic analysis. In: 9th International Conference on Multimodality, 9ICOM: August 15-17, 2018, at the University of Southern Denmark in Odense, Denmark. Paper presented at the 9th International Conference on Multimodality, 9ICOM. August 15-17, 2018, at the University of Southern Denmark in Odense, Denmark. Odense: University of Southern Denmark
Multimodal depth in film: A proposal for a multimodal and sound-oriented approach to intersemiotic analysis
2018 (English). In: 9th International Conference on Multimodality, 9ICOM: August 15-17, 2018, at the University of Southern Denmark in Odense, Denmark, Odense: University of Southern Denmark, 2018. Conference paper, Oral presentation only (Other academic)
Abstract [en]

Film is an inherently multimodal medium. Despite this, film studies and film musicology alike have a tradition of approaching the discussion of formal interaction in film as a question of how well the image track and the music relate to each other, rather than as a question of complex multimodal intersemiosis (for example in epitomic works such as: Carroll (1996); Eisenstein and Leyda (1969); Gorbman (1987)). Even though this reductive theorisation has been questioned and challenged by scholars in media studies and (film) musicology (e.g. Chion, 1994; Cook, 1998; Langkjær, 2008) and from within the field of multimodality (e.g. Bateman and Schmidt (2013); Tseng (2013)), the notion that film = audiovisual = moving image + music is still largely dominant.

Following Walter Murch and Iben Have’s ideas about an audiovisual dimension (Chion, 1994; Have, 2008), I want to propose that the formal interrelations in film can be analysed according to a multimodal dimension, and that the depth of this dimension is determined by the level of dialogue in the multimodal complex. Rather than seeing film as an addition of images and music, I propose that film consists of a number of visual modes (e.g. lighting, viewing perspective, facial gestures) and a number of auditory modes (e.g. dialogue, instrumentation, and musical harmonics), which are orchestrated as different ‘voices’ (drawing on Bakhtin’s ideas of dialogue and polyphony (Bakhtin, 1981)). The intersemiosis is a result of a dialogic interrelationship of all of these voices, or modes. It is my working hypothesis that a high level of dialogue equals a more complex meaning potential and thus a deeper multimodal dimension, whereas a low level of dialogue equals a more straightforward meaning potential and a flatter multimodal dimension.

Place, publisher, year, edition, pages
Odense: University of Southern Denmark, 2018
Keywords
intersemiosis, multimodality, music, film, animation
National Category
Other Humanities not elsewhere specified
Research subject
Humanities
Identifiers
urn:nbn:se:lnu:diva-81123 (URN)
Conference
9th International Conference on Multimodality, 9ICOM. August 15-17, 2018, at the University of Southern Denmark in Odense, Denmark
Note

Not verified 2019-04-10

Available from: 2019-03-15. Created: 2019-03-15. Last updated: 2019-06-26. Bibliographically approved
Jensen, S. K. (2017). Sound as Animation. In: SAS 2017, Society for Animation Studies: Università degli Studi di Padova, Dipartimento dei Beni Culturali, July 3-7, 2017. Paper presented at SAS 2017, Society for Animation Studies: Università degli Studi di Padova, Dipartimento dei Beni Culturali, July 3-7, 2017.
Sound as Animation
2017 (English). In: SAS 2017, Society for Animation Studies: Università degli Studi di Padova, Dipartimento dei Beni Culturali, July 3-7, 2017, 2017. Conference paper, Oral presentation only (Other academic)
Abstract [en]

This paper will discuss how sound design in animated films can be seen as essential for creating a sense of “perceptual realism”, following Langkjær (2010), enabling the viewer to engage and identify with the narrative. To animate literally means “to bring to life”, and my purpose in this paper is to discuss the way sound contributes to animating onscreen characters as well as virtual soundscapes, and how it holds an indexical function of linking the animation to a sensory real-life world. Drawing on Murray Schafer’s soundscape terminology (Schafer, 1993) and insights from William Gaver’s ecological approach to listening (Gaver, 1993), I analyze the way sound can be seen as providing materiality to animated environments and as a tool for lending agency and intentionality to on-screen characters.

Sounds are determined and structured by their sources and thereby contain information about these sources that is decodable by human perception (Gaver, 1993). This relationship between sound and source also means that sounds can be regarded as indexical references to their sources.

By providing environments and objects with synchronized Foley sounds, produced and recorded to live up to the audience’s expectations of real, everyday sounds, virtual soundscapes are created to simulate everyday perceptual experiences (Langkjær, 2010). Furthermore, synchronized sounds provide the animation with materiality in the same way as any given everyday sound will “provide information about an interaction of materials at a location in an environment” (Gaver, 1993). For example, when a cartoon character drops a piano from the top of a high rock, the action may defy the laws of physics, but the Foley sounds created to accompany this unrealistic event do follow some general rules of perception and of sound as a physical, source-dependent phenomenon. Even if created in a studio or sound lab, the sounds provide the audience with realistic (within the narrative) information about the material properties of the objects involved in an event. The sound may thus, among other things, tell us that the piano is heavy and that the surface it lands on is hard (a loud, low-timbred crash), that the piano, before the crash, had functioning strings that in the impact give out a cacophony of musical sounds, and even that the crash happened near the spectator and in an open space (through loudness, a wide range of frequencies, and a relative lack of reverberation).

As a result of this audible materiality, characters are given agency and intentionality because their actions have audible effects on the environment. We can, for example, hear whether they are walking or running and whether the ground is made of grass or stone. We can also hear the rustle of their clothes as they move. Sound thus makes it possible not just to see how characters move but to hear it in relation to their surrounding world.

Even though all sounds can be seen as indexical, some sounds stand out in this respect. Introducing sound signals, a term from soundscape theory referring to environmental sounds that call attention to themselves, into the virtual soundscape will, for example, create clear references functioning as indexes of everyday activities outside the screen. An example can be found in Wall-E when we suddenly, among the metallic sounds of Wall-E interacting with the garbage occupying the lifeless earth, hear the sound of a car door being unlocked, and we instantly recognize this sound as something familiar from our own lives. The sound of the car door unlocking does not just function as a sound signal but also as an indexical reference to life outside the screen, thus enabling us to engage further with the film. It has been argued before within film and TV music studies that we take these Foley sounds into serious consideration in terms of our listening modes and the physical qualities these sounds attribute to filmed objects (e.g. Chion, 1994; Have, 2008; Langkjær, 2010), so this is nothing new in itself. What I want to explore, however, is what this audible perceptual reality means for the experience of the medium of animated film: a medium which often places its narrative in fantastical worlds ungoverned by physical laws, and which does not have indexical footage as a prerequisite, thereby leaving this kind of realism-reference to the sound.

Keywords
sound design, soundscape, realism, indexicality, animation
National Category
Musicology
Research subject
Humanities, Musicology
Identifiers
urn:nbn:se:lnu:diva-73298 (URN)
Conference
SAS 2017, Society for Animation Studies: Università degli Studi di Padova, Dipartimento dei Beni Culturali, July 3-7, 2017
Available from: 2018-04-23. Created: 2018-04-23. Last updated: 2019-06-26. Bibliographically approved
Jensen, S. K. (2017). Sound as animation: An investigation into the indexical and reality-inducing function of sound in animation features. In: Music and the Moving Image, New York, May 26-28, 2017. Paper presented at Music and the Moving Image (MaMI).
Sound as animation: An investigation into the indexical and reality-inducing function of sound in animation features
2017 (English). In: Music and the Moving Image, New York, May 26-28, 2017, 2017. Conference paper, Oral presentation only (Other academic)
Abstract [en]

To animate literally means “to bring to life”, and my purpose in this paper is to discuss animation features and the way sound design contributes to animating onscreen characters as well as virtual soundscapes, holding an indexical function of linking the animation to a sensory real-life world and enabling the viewer to engage and identify with the narrative through a sense of “perceptual realism” (following Langkjær 2010).

Drawing on Murray Schafer’s soundscape terminology (Schafer 1993) and insights from William Gaver’s ecological approach to listening (Gaver 1993), I analyze the way sound can be seen as providing materiality to animated environments and as a tool for lending agency and intentionality to onscreen characters. By providing environments and objects with synchronized sounds, produced and recorded to live up to the audience’s expectations of reality, virtual soundscapes are created to simulate everyday perceptual experiences. Furthermore, synchronized sounds provide information about the physical characteristics of their perceptual sources (that is, the sources in the animation that the audience couples with the sounds, rather than the real Foley sources), thereby providing the animation with materiality by defining e.g. the weight, texture, movement and placement of objects in the virtual space. As a result, characters are given agency and intentionality. Through this perceptual realism, all sound in animation will to some extent function as an indexical reference to life outside the screen, but the use of sound signals familiar to the audience can be seen as an enhancement of these references, enabling audiences to engage further with the narrative.

Keywords
sound design, soundscape, realism, indexicality, animation
National Category
Musicology
Research subject
Humanities, Musicology
Identifiers
urn:nbn:se:lnu:diva-73297 (URN)
Conference
Music and the Moving Image (MaMI)
Available from: 2018-04-23. Created: 2018-04-23. Last updated: 2019-06-26. Bibliographically approved