Why affective computing




















But other aspects of our enterprise seem unique. For one thing, our users may import huge files containing billions of base pairs. The genome of Polychaos dubium, a freshwater amoeboid, clocks in at about 670 billion base pairs—that's over 200 times larger than the human genome!

As our CAD program will be hosted on the cloud and run on any Internet browser, we need to think about efficiency in the user experience.

We don't want a user to click the "save" button and then wait ten minutes for results. We may employ the technique of lazy loading, in which the program only loads the portion of the genome that the user is working on, or implement other tricks with caching.
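To make the lazy-loading idea concrete, here is a minimal sketch in Python. It assumes, purely for illustration, that the genome is stored as one plain run of bases in a single file; the class name, file name, and coordinates are invented and are not part of the GP-write tool.

```python
# Minimal sketch of lazy loading: only the region being edited is read from disk.
class LazyGenome:
    """Reads slices of a large plain-text sequence file on demand."""

    def __init__(self, path):
        self.path = path

    def fetch(self, start, end):
        # Seek straight to the requested offset instead of loading the whole file.
        with open(self.path, "rb") as handle:
            handle.seek(start)
            return handle.read(end - start).decode("ascii")

# Example: pull a 1,000-base window without touching the rest of the genome.
# genome = LazyGenome("some_genome.seq")
# window = genome.fetch(1_000_000, 1_001_000)
```

The same pattern extends naturally to caching: recently fetched windows can be kept in memory so that moving back and forth over a region does not hit the file again.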

Getting a DNA sequence into the CAD program is just the first step, because the sequence, on its own, doesn't tell you much. What's needed is another layer of annotation to indicate the structure and function of that sequence.

For example, a gene that codes for the production of a protein is composed of three regions: the promoter that turns the gene on, the coding region that contains the instructions for synthesizing RNA (the next step in protein production), and the termination sequence that indicates the end of the gene. Within the coding region there are "exons," which are directly translated into the amino acids that make up proteins, and "introns," intervening sequences of nucleotides that are removed during the process of gene expression.
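As a sketch of what such an annotation layer could look like in code, the snippet below stores features as labelled intervals over the sequence. The feature names and coordinates are invented for the example and do not follow any particular annotation standard.

```python
# Illustrative annotation layer: features are labelled intervals over the sequence.
from dataclasses import dataclass

@dataclass
class Feature:
    kind: str    # e.g. "promoter", "exon", "intron", "terminator"
    start: int   # 0-based, inclusive
    end: int     # exclusive

gene_annotation = [
    Feature("promoter", 0, 120),
    Feature("exon", 120, 300),
    Feature("intron", 300, 450),
    Feature("exon", 450, 700),
    Feature("terminator", 700, 760),
]

def features_at(position, features):
    """Return every feature overlapping a given base position."""
    return [f for f in features if f.start <= position < f.end]

print(features_at(350, gene_annotation))  # -> the intron in this toy gene
```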

There are existing standards for this annotation that we want to improve on, so our standardized interface language will be readily interpretable by people all over the world.

The CAD program from GP-write will enable users to apply high-level directives to edit a genome, including inserting, deleting, modifying, and replacing certain parts of the sequence. Once a user imports the genome, the editing engine will enable the user to make changes throughout the genome. Right now, we're exploring different ways to efficiently make these changes and keep track of them. One idea is an approach we call genome algebra, which is analogous to the algebra we all learned in school.

In mathematics, if you want to get from the number 1 to the number 10, there are infinite ways to do it. You could add 1 million and then subtract almost all of it, or you could get there by repeatedly adding tiny amounts. In algebra, you have a set of operations, costs for each of those operations, and tools that help organize everything.

In genome algebra, we have four operations: we can insert, delete, invert, or edit sequences of nucleotides. The CAD program can execute these operations based on certain rules of genomics, without the user having to get into the details. Similar to the "PEMDAS rule" that defines the order of operations in arithmetic, the genome editing engine must order the user's operations correctly to get the desired outcome. The software could also compare sequences against each other, essentially checking their math to determine similarities and differences in the resulting genomes.
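As a toy illustration of how such ordered operations might be applied, the sketch below implements the four operations on a plain string and applies them right-to-left so that earlier coordinates stay valid. The operation encoding and the ordering rule are assumptions for the example, not the GP-write engine.

```python
# A toy "genome algebra": each edit is one operation applied in a defined order.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def apply(seq, op):
    kind, start, payload = op
    if kind == "insert":                 # insert payload before position start
        return seq[:start] + payload + seq[start:]
    if kind == "delete":                 # payload is the number of bases to remove
        return seq[:start] + seq[start + payload:]
    if kind == "invert":                 # reverse-complement a region of length payload
        region = seq[start:start + payload]
        return seq[:start] + region.translate(COMPLEMENT)[::-1] + seq[start + payload:]
    if kind == "edit":                   # overwrite bases starting at start
        return seq[:start] + payload + seq[start + len(payload):]
    raise ValueError(kind)

def run(seq, ops):
    # Applying operations right-to-left keeps earlier coordinates valid: one simple ordering rule.
    for op in sorted(ops, key=lambda o: o[1], reverse=True):
        seq = apply(seq, op)
    return seq

print(run("ACGTACGTACGT", [("delete", 2, 2), ("insert", 8, "TTT")]))  # ACTACGTTTTACGT
```

Comparing two designs, as described above, then amounts to diffing the resulting strings or the operation lists that produced them.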

In a later version of the software, we'll also have algorithms that advise users on how best to create the genomes they have in mind. Some altered genomes can most efficiently be produced by creating the DNA sequence from scratch, while others are more suited to large-scale edits of an existing genome.

Users will be able to input their design objectives and get recommendations on whether to use a synthesis or editing strategy—or a combination of the two. Our goal is to make the CAD program a "one-stop shop" for users, with the help of the members of our Industry Advisory Board: Agilent Technologies, a global leader in life sciences, diagnostics, and applied chemical markets; the DNA synthesis companies Ansa Biotechnologies, DNA Script, and Twist Bioscience; and the gene editing automation companies Inscripta and Lattice Automation.

Lattice was founded by coauthor Douglas Densmore. We are also partnering with biofoundries such as the Edinburgh Genome Foundry that can take synthetic DNA fragments, assemble them, and validate them before the genome is sent to a lab for testing in cells. Users can most readily benefit from our connections to DNA synthesis companies; when possible, we'll use these companies' APIs to allow CAD users to place orders and send their sequences off to be synthesized. In the case of DNA Script, a user's order would be quickly printed on the company's DNA printers; some dedicated users might even buy their own printers for more rapid turnaround.

In the future, we'd like to make the ordering step even more user-friendly by suggesting the company best suited to the manufacture of a particular sequence, or perhaps by creating a marketplace where the user can see prices from multiple manufacturers, the way people do on airfare sites.

We've recently added two new members to our Industrial Advisory Board, each of which brings interesting new capabilities to our users.

Catalog Technologies is the first commercially viable platform to use synthetic DNA for massive digital storage and computation, and could eventually help users store the vast amounts of genomic data generated with GP-write software. The other new member will work with GP-write to select, fund, and launch companies advancing genome-writing science from IndieBio's New York office.

Naturally, all those startups will have access to our CAD software. We're motivated by a desire to make genome editing and synthesis more accessible than ever before. Imagine if high-school kids who don't have access to a wet lab could find their way to genetic research via a computer in their school library; this scenario could enable outreach to future genome design engineers and could lead to a more diverse workforce.

Our CAD program could also entice people with engineering or computational backgrounds—but with no knowledge of biology—to contribute their skills to genetic research. Because of this new level of accessibility, biosafety is a top priority.

We're planning to build several different levels of safety checks into our system. There will be user authentication, so we'll know who's using our technology. We'll have biosecurity checks upon the import and export of any sequence, basing our "prohibited" list on the standards devised by the International Gene Synthesis Consortium (IGSC) and updating it in accordance with the consortium's evolving database of pathogens and potentially dangerous sequences.
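A hedged sketch of what an import-time check might look like is below; the signature list, its format, and the function name are placeholders, and a real system would sync its prohibited list with the IGSC-derived databases rather than hard-code it.

```python
# Sketch of an import-time biosecurity screen against a locally cached prohibited list.
PROHIBITED_SIGNATURES = {
    "ATGCCCGGGAAATTT",    # placeholder entries, not real pathogen sequences
    "TTTTGGGGCCCCAAAA",
}

def passes_screening(sequence):
    upper = sequence.upper()
    hits = [sig for sig in PROHIBITED_SIGNATURES if sig in upper]
    return (len(hits) == 0, hits)

ok, hits = passes_screening("acgt" * 50)
print(ok, hits)   # True, [] for this benign toy input
```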

In addition to hard checkpoints that prevent a user from moving forward with something dangerous, we may also develop a softer system of warnings. We'll also keep a permanent record of redesigned genomes for tracing and tracking purposes.

This record will serve as a unique identifier for each new genome and will enable proper attribution to further encourage sharing and collaboration. We believe that the authentication of users and annotated tracking of their designs will serve two complementary goals: It will enhance biosecurity while also engendering a safer environment for collaborative exchange by creating a record for attribution.

The Ultra-Safe Cell Project, an effort led by coauthor Farren Isaacs and Harvard professor George Church, aims to create a human cell line that is resistant to viral infection. Such virus-resistant cells could be a huge boon to the biomanufacturing and pharmaceutical industries by enabling the production of more robust and stable products, potentially driving down the cost of biomanufacturing and passing along the savings to patients.

The project relies on a technique called recoding. To build proteins, cells use combinations of three DNA bases, called codons, to code for each amino acid building block. Because there are 64 possible codons but only 20 amino acids, many of the codons are redundant. If you replaced a redundant codon throughout all genes (that is, "recoded" the genes), the human cell could still make all of its proteins. But viruses—whose genes would still include the redundant codons and which rely on the host cell to replicate—would not be able to translate their genes into proteins.
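The following toy snippet shows the mechanics of recoding on a short stretch of sequence; the codon pair chosen (TTA and CTG, both of which code for leucine) is just an example, not the codon targeted by the project.

```python
# Toy recoding: every occurrence of one redundant codon is swapped for a synonymous codon,
# so the encoded protein is unchanged but the original codon disappears from the genome.
def recode(coding_sequence, old_codon="TTA", new_codon="CTG"):
    codons = [coding_sequence[i:i + 3] for i in range(0, len(coding_sequence), 3)]
    return "".join(new_codon if codon == old_codon else codon for codon in codons)

print(recode("ATGTTACCGTTATAA"))   # ATGCTGCCGCTGTAA: leucines kept, TTA codons gone
```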

Think of a key that no longer fits into the lock; viruses trying to replicate would be unable to do so using the recoded cells' machinery, rendering those cells virus-resistant. This concept of recoding for viral resistance has already been demonstrated: Isaacs, Church, and their colleagues reported in a paper in Science that, by removing all instances of a single codon from the genome of the E. coli bacterium, they were able to increase its resistance to viral infection.

After several considerations regarding the evaluation process for games used in learning environments [40], the following features were established: (i) Research Design.

The sample of participants was divided into two groups of equal size, one of them being the control group; this group was called the System 2 group. The other group, called the System 1 group, tested the prototype implemented with emotion detection, which adapted its behaviour by modifying the pace of the game and the difficulty level according to the emotions detected in the user: if the user becomes bored, the system increases the pace and difficulty level; if the user becomes stressed or nervous, it decreases them.
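A minimal sketch of that adaptation rule follows, with assumed emotion labels and an assumed ten-point difficulty scale; the prototype's actual internals are not described at this level of detail.

```python
# Sketch of the adaptation rule: bored players get a faster, harder game;
# stressed or nervous players get a slower, easier one.
def adapt_difficulty(emotion, pace, difficulty, step=1, low=1, high=10):
    if emotion == "bored":
        pace, difficulty = pace + step, difficulty + step
    elif emotion in ("stressed", "nervous"):
        pace, difficulty = pace - step, difficulty - step
    clamp = lambda value: max(low, min(high, value))
    return clamp(pace), clamp(difficulty)

print(adapt_difficulty("bored", pace=5, difficulty=5))      # (6, 6)
print(adapt_difficulty("stressed", pace=5, difficulty=5))   # (4, 4)
```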

By doing this, it can be shown how using emotion detection to dynamically vary the difficulty level of an educational software application influences the performance and user experience of the students. The test was conducted on the premises of the primary school, in a quiet room where only the participants (two at a time, one using System 1 and one using System 2) and the evaluators were present.

We prepared two laptops with similar characteristics, one running System 1 (the version of the application with emotion recognition) and the other running System 2 (the version without it).

The whole evaluation process was divided into two parts: (i) Introduction to the Test. At the beginning of the evaluation, the procedure was explained to all sixteen children at once, and the game instructions for the different levels were given. Kids were then called in pairs to the room where the laptops running System 1 and System 2 were prepared. None of the children knew which system they were going to play with.

At the end of the evaluation sessions, the sixteen children completed the SUS questionnaire. Researchers were present all the time, ready to assist the participants and clarify doubts when necessary.

When a participant finished the test, they returned to their classroom and called the next child to go into the evaluation room. The task that the participants had to perform was to play the seven levels of the prototype, each level including a platform game and a reading-aloud exercise.

The data collected during the evaluation sessions were subsequently analysed, and the outcomes are described next. Although participants using System 1 needed, on average, a bit more time per level to finish, Figure 4 shows the evolution of the average number of mistakes, which increases in the control group (System 2) from level 4 onwards. Since the game adapts its difficulty in System 1, after detecting a peak of mistakes in the fourth level as a sign of stress (detected as a combination of negative feelings found in the facial expression and the way the participant used the keyboard), the difficulty level was reduced.

This adaptation made the next levels easier to play for participants using System 1, which was reflected in less mental effort. Since participants using System 2 did not have this feature, their average performance got worse. On average, participants using System 1 needed 1. Likewise, System 2 users asked for help more often (13 times) than System 1 users (10 times). In future experimental activities, the sample size will be increased in order to obtain more valuable data.

The evaluation was carried out as a between-subjects design with emotion recognition as the independent variable (using or not using emotion recognition features) and attempts (attempts needed to finish each level), time (seconds needed to finish each level), mistakes (number of mistakes), keystrokes (number of keystrokes), and stress (number of times a key was pressed too fast in a short time) as the dependent variables.

We fixed a threshold for statistical significance in advance, and the significant results are reported below. Regarding keystrokes, mistakes, and stress, t-test results allowed us to reject the null hypothesis and conclude that the two datasets are significantly different.
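A comparison of this kind can be run with an independent-samples t-test; the sketch below shows the mechanics with illustrative numbers only, not the values collected in this study.

```python
# Independent-samples t-test between the two groups (toy data, not the study's measurements).
from scipy import stats

system1_mistakes = [2, 3, 1, 4, 2, 3, 2, 1]   # adaptive version, illustrative values
system2_mistakes = [4, 5, 3, 6, 5, 4, 6, 5]   # control version, illustrative values

t_statistic, p_value = stats.ttest_ind(system1_mistakes, system2_mistakes)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")  # reject H0 if p falls below the chosen alpha
```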

Although the dependent variables time and attempts were similar in both datasets, efficiency (considered as the lowest number of actions a user needs to finish each level) was greater for users of System 1; even though users of System 1 and System 2 finished within a similar time frame, this helped the former to make fewer mistakes.

The outcomes of the evaluation shown in Figure 4 indicate a clear improvement when using System 1, as the number of mistakes increases for users of System 2 at the higher difficulty levels. System 1 users also rated the application with a higher level of satisfaction than users of System 2, as shown in Figure 5.

Emotion detection, together with Affective Computing, is a thriving research field.

A few years ago, this discipline did not even exist; now there are hundreds of companies working exclusively on it, and researchers are investing time and resources in building affective applications. However, emotion detection still has many aspects to improve in the coming years.

Applications that obtain information from the voice need to be able to work in noisy environments, to detect subtle changes, and perhaps even to recognize words and more complex aspects of human speech, like sarcasm. The same applies to applications that detect information from the face. Many people wear glasses, which can greatly complicate accurate detection of facial expressions.

Applications able to read body gestures barely exist yet, even though the body is a source of affective information as valid as the face. There are already technologies for body detection (such as Kinect), but there is no equivalent of Affectiva or Beyond Verbal for the body. Physiological signals are even less developed, because of the sensors that this kind of detection requires the user to wear.

However, some researchers are working on this issue so that physiological signals can be used in the same way as the face or the voice. In the not-too-distant future, reading a person's heartbeat with just a Bluetooth-equipped mobile phone may not be as far-fetched as it sounds.

The previous technologies analyze the impact of an emotion on our bodies, but what about our behaviour? A stressed person usually tends to make more mistakes. In the case of a person interacting with a system, this translates into faster movements through the user interface, more mistakes when selecting elements or typing, and so on.
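One simple way such behaviour could be quantified is sketched below; the 150 ms threshold and the function name are assumptions for illustration, not the measure used in the study described above.

```python
# Sketch of a behavioural stress cue: counting keystrokes that follow the previous one too quickly.
def rapid_keystroke_count(timestamps_ms, threshold_ms=150):
    """timestamps_ms: key-press times in milliseconds, in chronological order."""
    gaps = (later - earlier for earlier, later in zip(timestamps_ms, timestamps_ms[1:]))
    return sum(1 for gap in gaps if gap < threshold_ms)

presses = [0, 90, 200, 260, 900, 980, 2000]
print(rapid_keystroke_count(presses))   # 4 suspiciously fast presses in this toy trace
```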

Logged this way, behaviour becomes another indicator of a person's affective state. None of these technologies is perfect. Humans can look at each other and estimate how other people are feeling within milliseconds, and with a small margin of error, but these technologies can only try to infer how a person is feeling from some input data.

To get more accurate results, more than one input is required, so multimodal systems are the best way to achieve the highest levels of accuracy. Comparing this application with another version without emotion detection, we can conclude that user experience and performance are higher when a multimodal emotion detection system is included.
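A minimal sketch of such multimodal fusion is shown below; the channels, labels, weights, and scores are illustrative assumptions rather than a reference design.

```python
# Toy late fusion: per-channel emotion scores are combined into one weighted estimate.
def fuse(channel_scores, weights):
    """channel_scores: {channel: {emotion: probability}}; returns a normalised combined score."""
    combined = {}
    for channel, scores in channel_scores.items():
        for emotion, probability in scores.items():
            combined[emotion] = combined.get(emotion, 0.0) + weights.get(channel, 1.0) * probability
    total = sum(combined.values())
    return {emotion: value / total for emotion, value in combined.items()}

scores = {
    "face":     {"stressed": 0.7, "bored": 0.3},
    "keyboard": {"stressed": 0.6, "bored": 0.4},
}
print(fuse(scores, {"face": 0.6, "keyboard": 0.4}))   # "stressed" dominates in this toy example
```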

Since the system continuously adapts itself to the user according to the emotions detected, the level of difficulty adjusts much better to the user's real needs. The application could even dynamically introduce other elements to engage the user in the game. What is too simple bores a user, whereas what is too complex causes anxiety. As future work, among other things, we aim to improve the mobile aspects of the system and further explore the challenges that the sensors offered by mobile devices bring to emotion recognition, especially in educational settings.


For a long time, emotion was treated as separate from, and subordinate to, rational thought. This dualistic conceptualisation goes back as far as the Greek philosophers. In Western thinking, the division of mind and body was taken as indisputable, and Descartes, for example, looked for the gland that would connect the thoughts inspired by God with the actions of the body (see Figure 1).

Figure 1: Inputs are passed on by the sensory organs to the epiphysis in the brain and from there to the immaterial spirit.

With a new wave of research in the 1990s, however, emotion was resurrected and given a new role.

It became clear that emotions are the basis for behaving rationally. Without emotional processes we would not have survived. Being hunted by a predator or an enemy aircraft requires focusing all our resources on escaping or attacking; tunnel vision makes sense in that situation. Unless we can associate feelings of uneasiness with dangerous situations, such as food we should not be eating or people who aim to hurt us, we would make the same mistakes over and over (see Figure 2).

While fear and anger may seem the most important to our survival skills, our positive and more complex socially oriented emotional experiences are also invaluable to our survival.

If we do not understand the emotions of others in our group of primates, we cannot keep peace, share food, or build the alliances and friendships needed to share what the group can jointly create (Dunbar). To bring up our kids to function in this complex landscape of social relationships, experiences of shame, guilt, and embarrassment are used to foster their behaviour (Lutz). But positive emotions also play an important role in bringing up our kids: conveying how proud we are of them, making them feel seen and needed by the adults, and giving unconditional love.

The new wave of research also questioned the old Cartesian dualistic division between mind and body. Emotional experiences do not reside solely in our minds or brains. They are experienced by our whole bodies: in hormone changes in our bloodstreams, nervous signals to muscles tensing or relaxing, blood rushing to different parts of the body, body postures, movements, and facial expressions (Davidson et al.). Our bodily reactions in turn feed back into our minds, creating experiences that regulate our thinking, which in turn feeds back into our bodies.

In fact, an emotional experience can start through body movements; for example, dancing wildly might make you happy. Neurologists have studied how the brain works and how emotion processes are a key part of cognition. Emotion processes sit in the middle of most processing, going from frontal-lobe processing in the brain, via the brain stem, to the body and back (LeDoux; see Figure 3).

Bodily movements and emotion processes are tightly coupled. Certain movements will generate emotion processes and vice versa. Emotions are not only hard-wired processes in our brains, but changeable and interesting regulating processes for our social selves.

As such, they are constructed in dialogue between ourselves and the culture and social settings we live in. Emotion is a social and dynamic communication mechanism. We learn how and when certain emotions are appropriate, and we learn the appropriate expressions of emotions for different cultures, contexts, and situations.

The way we make sense of emotions is a combination of the experiential processes in our bodies and how emotions arise and are expressed in specific situations in the world, in interaction with others, coloured by cultural practices that we have learnt.

We are physically affected by the emotional experiences of others; smiles are contagious. Catherine Lutz, for example, shows how a particular form of anger, named "song" by the people of the south Pacific atoll Ifaluk, serves an important socializing role in their society (Lutz). Ethnographic work by Jack Katz provides a rich account of how people, individually and in groups, actively produce emotion as part of their social practices.

He discusses, for example, how joy and laughter amongst visitors to a funny-mirror show are produced and regulated between the friends visiting together. Katz also places this production of emotion into a larger, complex social and societal setting when he discusses anger among car drivers in Los Angeles (see Figure 4). He shows how anger is produced as a consequence of a loss of embodiment with the car, the road, and the general experience of travelling.

He even sees anger as a graceful way to regain a sense of embodiment. Part of the new wave of research on emotion also affected research and innovation in new technology. In Artificial Intelligence, emotion came to be considered an important regulatory process determining behaviour in autonomous systems of various kinds. Broadly, HCI research went in three different directions, with three very different theoretical perspectives on emotion and design.

The first direction, a cognitivistically inspired design approach, was named Affective Computing by Rosalind Picard in her groundbreaking book from 1997. The second design approach might be seen as a counter-reaction to Affective Computing: instead of starting from a more traditional perspective on cognition and biology, the Affective Interaction approach starts from a constructive, culturally determined perspective on emotion.

Finally, there are those who think that singling out emotion from the overall interaction leads us astray. Instead, they see emotion as part of a larger whole of experiences we may design for; we can name this movement Technology as Experience. In a sense, this is what traditional designers and artists have always worked with. Let us develop these three directions in some more detail.

They have obvious overlaps, and in particular, the Affective Interaction and Technology as Experience movements have many concepts and design aims in common. Still, if we simplify them and describe them as separate movements, it can help us to see the differences in their theoretical underpinnings.

The Artificial Intelligence (AI) field picked up the idea that human rational thinking depends on emotional processing. Picard's idea, in short, was that it should be possible to create machines that relate to, arise from, or deliberately influence emotion or other affective phenomena.

The roots of affective computing really came from neurology, medicine, and psychology. It implements a biologistic perspective on emotion processes in the brain, body, and interaction with others and with machines. In Figure 5 we see for example how facial expressions, portraying different emotions, can be analysed and classified in terms of muscular movements.

Figure 5B: Facial muscles moving the eyebrow and the muscles around the eye when expressing different emotions.

Emotions, or affects, in users are seen as identifiable states, or at least identifiable processes. Identifying them can be done by applying rules such as those brought forth by Ortony et al. in The Cognitive Structure of Emotions. This model has its limitations, both in its requirement to simplify human emotion in order to model it, and in the difficulty of inferring end-users' emotional states by interpreting the signs and signals we emit.

This said, it still provides a very interesting way of exploring intelligence, both in machines and in people. Picard and colleagues therefore propose an emotion model built on James A. Russell's circumplex model of affect. The idea is to build a learning companion that keeps track of what emotional state the student is in and from that decides what help she needs. In a spin-off company named Affectiva, they have put this understanding into commercial use, both for autistic children and for recognising interest in commercials or dealing with stress in call centres.
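The core of a circumplex-style model can be sketched very compactly: an emotional state is a point in a valence/arousal plane, and the quadrant gives a coarse label. The labels and thresholds below are illustrative simplifications, not Russell's full model or Picard's implementation.

```python
# Coarse quadrant labelling in a valence/arousal plane (circumplex-style sketch).
def quadrant(valence, arousal):
    """valence and arousal in [-1, 1]; returns a rough affect label."""
    if arousal >= 0:
        return "excited / elated" if valence >= 0 else "stressed / angry"
    return "calm / content" if valence >= 0 else "bored / depressed"

print(quadrant(0.6, 0.7))    # excited / elated
print(quadrant(-0.5, -0.4))  # bored / depressed
```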

An affective interactional view differs from the affective computing approach in that it sees emotions as constructed in interaction, with the computer application supporting people in understanding and experiencing their own emotions (Boehner et al.).

The first change is related to the bodily aspects of emotional experiences. By explicitly pointing to them, we want to add some of the physical and bodily experiences that an interaction with an affective interactive system might entail.

Such a position risks mystifying human experience, closing it off as ineffable and thereby placing it beyond study and discussion. This does not in any way mean that the experiential strands, or qualities, are universal and the same for everyone.

Instead, they are subjective and experienced in their own way by each user (McCarthy and Wright). A range of systems has been built to illustrate this approach, such as Affector (Sengers et al.). Affector is a distorted video window connecting the neighbouring offices of two friends and colleagues (see Figure 9). A camera located under the video screen captures video as well as 'filter' information such as light levels, colour, and movement.

This filter information distorts the captured images of the friends, which are then projected in the window of the neighbouring office. The friends determine amongst themselves what information is used as a filter and what kinds of distortion it produces, in order to convey a sense of each other's mood. Another such system is eMoto, a tool for sending affective messages between people. To choose an expression, you perform a set of gestures using the stylus pen that comes with some mobile phones, which we had extended with sensors that could pick up on pressure and shaking movements.

Users are not limited to any specific set of gestures but are free to adapt their gesturing style to their personal preferences. The pressure and shaking movements can act as a basis for most emotional gestures people make, a basis that allows users to build their own gestures on top of these general characteristics.

On the left, a high-energy expression of love from study participant Agnes to her boyfriend; on the right, Mona uses her favourite green colours to express her love for her boyfriend.

Affective Diary works as follows: as a person starts her day, she puts on a body-sensor armband. During the day, the system collects time-stamped sensor data picking up movement and arousal. At the same time, the system logs various activities on the mobile phone: text messages sent and received, photographs taken, and the Bluetooth presence of other devices nearby.
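The underlying data model is simple enough to sketch: time-stamped body-sensor samples and phone events merged onto one timeline. The field names and values below are assumptions for illustration, not the Affective Diary's actual format.

```python
# Sketch of merging body-sensor samples and phone events onto one shared timeline.
sensor_samples = [
    {"time": "09:10", "movement": 0.8, "arousal": 0.6},
    {"time": "13:45", "movement": 0.2, "arousal": 0.3},
]
phone_events = [
    {"time": "09:12", "event": "sms_sent"},
    {"time": "13:40", "event": "photo_taken"},
]

timeline = sorted(
    [dict(sample, source="body") for sample in sensor_samples]
    + [dict(event, source="phone") for event in phone_events],
    key=lambda item: item["time"],
)
for item in timeline:
    print(item)
```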

Once the person is back at home, she can transfer the logged data into her Affective Diary. The collected sensor data are presented as somewhat abstract, ambiguously shaped, coloured characters placed along a timeline (see the figure). To help users reflect on their activities and physical reactions, the user can scribble diary notes onto the diary or manipulate the photographs and other data (see the example from one user in the figure). Bio-sensor data are represented by the blobby figures at the bottom of the screen.

Mobile data are inserted in the top half of the screen along the same timeline as the blobby characters. One user, reflecting on her diary, wrote: "I like him and then it is so sad that we see each other so little. And then I cannot really show it."

While the interaction with the system should not be awkward, the actual experiences sought might not only be positive ones. Affector may communicate your negative mood. Affective Diary might make negative patterns in your own behaviour painfully visible to you. An interactional approach is interested in the full infinite range of human experience possible in the world. While we have so far, in a sense, separated out emotion processes from other aspects of being in the world, there are those who posit that we need to take a holistic approach to understanding emotion.

Emotion processes are part of our social ways of being in the world; they colour our dreams, hopes, and experiences of the world. If we aim to design for emotions, we need to place them in the larger picture of experiences, especially if we are going to address aspects of aesthetic experience in our design processes (Gaver; McCarthy and Wright; Hassenzahl). John Dewey, for example, distinguishes aesthetic experiences from other aspects of our life by placing them between two extremes on a scale (Dewey). At one end of that scale, we just drift and experience an unorganized flow of events in everyday life; at the other end, we experience events that do have a clear beginning and end but that only mechanically connect the events with one another.

Aesthetic experiences exist between those extremes. They have a beginning and an end, and they can be uniquely named afterwards. However, emotions are not static but change in time with the experience itself, just as a dramatic experience does (Dewey). While an emotion process is not enough to create an aesthetic experience, emotions will be part of the experience and inseparable from the intellectual and bodily experiences.

In such a holistic perspective, it does not make sense to talk about emotion processes as something separate from our embodied experience of being in the world. Bill Gaver makes the same argument when discussing design for emotion (Gaver).

If we look back at the Affector, eMoto, and Affective Diary systems, we see clearly that they are designed for something other than the isolation of emotion. Affector and eMoto are designed for and used for communication between people, where emotion is one aspect of their overall communication. And, in fact, Affector turned out not really to be about emotion communication, but instead became a channel for a sympathetic mutual awareness of your friend in the other office.

It seems obvious that we cannot ignore the importance of emotion processes when designing for experiences. On the other hand, designing as if emotion were a state that can be identified in users taken out of context will not lead to interesting applications in this area.

Instead, the knowledge on emotion processing needs to be incorporated into our overall design processes. The work in all three directions of emotion design outlined above contributes, in different ways, to our understanding of how to make emotion processes an important part of our design processes.

The Affective Computing field has given us a range of tools for affective input, such as facial recognition, voice recognition, body-posture recognition, and bio-sensor models, as well as tools for affective output. The Affective Interaction strand has contributed an understanding of the socio-cultural aspects of emotion, situating emotions in their context and making sure that they are not described only as bodily processes beyond our control.

The Technology as Experience field has shifted our focus from emotion as an isolated phenomenon towards seeing emotion processes as one of the important aspects to consider when designing tools for people.

There are still many unresolved issues in all three directions. In my own view, we have not yet done enough to understand and address the everyday, physical, and bodily experiences of emotion processes. Already Charles Darwin made a strong coupling between emotion and bodily movement (Darwin), and since then researchers in areas as diverse as neurology (LeDoux; Davidson et al.) have continued to explore this coupling. I view our actual corporeal bodies as key to being in the world, to creating experiences, and to learning and knowing, as Sheets-Johnstone has discussed. Our bodies are not instruments or objects through which we communicate information.

Communication is embodied; it involves our whole selves. In design, we have had a very limited view of what the body can do for us, partly because the technology was not yet there to involve more senses, movements, and richer modalities. Now, given novel sensing and actuator materials, there are many different kinds of bodily experiences we can envision designing for: mindfulness, affective loops, excitement, slow inward listening, flow, reflection, or immersion.

In the recently emerging field of design for Somaesthetics (Schiphorst), interesting aspects of bodily learning processes, leading to stronger body awareness, are picked up and explicitly used in design. This can be contrasted with the main bulk of persuasive health and fitness applications. Recently, Purpura and colleagues used a critical design method to pinpoint some of the problems that follow from this view. By describing a fake system, Fit4Life, which measures every aspect of what you eat, they arrive at a system that may whisper into your ear "I'm sorry, Dave, you shouldn't eat that. Dave, you know I don't like it when you eat donuts" just as you are about to grab a donut.

This fake system shows how we may easily cross the thin line from persuasion to coercion, creating technological control of our behaviour and bodies.

In my view, by designing applications with an explicit focus on Aesthetics, Somaesthetics, and Empathy with ourselves and others, we can move beyond impoverished interaction modalities and treating our bodies as mere machines that can be trimmed and controlled, towards richer, more meaningful interactions based on our human ways of physically inhabiting our world.

We are just at the beginning of unravelling many novel design possibilities as we approach emotions and experiences more explicitly in our design processes. This is a rich field of study that I hope will attract many young designers, design researchers, and HCI experts.


