A Human-Centered Hairdressing Experience

I was in China this summer and went to a barber shop. I had been to that particular barber shop before, so I didn’t expect anything special. At least not until I was covered like this —


As you can see, part of the cover was a transparent plastic film, positioned exactly where the customer (in this case, me) would hold his or her phone. I literally screamed in surprise at that moment and instantly asked my mom to take this picture for me. It is amazing how technology has influenced every single part of our lives. If it weren’t for how much people use their phones these days, this barber shop would certainly not have such a cover. I asked the barber when they started using a cover like this. He answered nonchalantly: “We’ve had it for quite a while.” Alright. I’m behind the times again.

Be calm or be engaging? Thoughts on UbiComp

Two crucial ideas serve as the foundation for Mark Weiser’s article The Computer for the 21st Century [1]: First, “the most profound technologies are those that disappear.” (page 94) Second, “whenever people learn something sufficiently well, they cease to be aware of it.” (page 94) I’m going to examine the latter here.

I am unable to agree with Weiser’s statement, though I can see where he is coming from. From the street-sign example he provides afterward, and from his belief that “only when things disappear in this way are we freed to use them,” I can tell that he is thinking of objects that work as “tools” or “media.” He believes that tools serve people well when they “disappear.” This underlying idea works for objects like street signs because these objects are relatively simple and the corresponding tasks they accomplish are also simple (street signs show information about streets). The idea may break down, however, when the complexity of either the object or the task increases. For instance, a colored pencil, which is a simple tool for drawing, can be used in several different ways in the hands of an experienced painter, such as burnishing and impressing. In this situation, one actually needs to be aware of the pencil to achieve distinct effects, because how the pencil is used directly affects the result of the drawing. I therefore argue that, depending on the complexity of both the object and the task, people either increase or decrease their awareness of an object as they learn to use it sufficiently well.

The problem is: how shall we take advantage of these different degrees of awareness wisely? In ubiquitous computing, this question can be hard to answer. I would like to say that it really depends on the type of thing and what it is trying to achieve, but I have no idea what the criteria for categorizing things would be, if any exist. Rogers argues that technologies in UbiComp can be engaging [2], but she doesn’t offer researchers a good reference for judging the type of technology at hand. I can see this issue becoming even more complicated as UbiComp technologies are used by people from different cultural backgrounds. Should technology be engaging when its user doesn’t value engagement much? More questions like this can be posed.

I’m not going to try to answer any of these questions here. But I would still love to say again that we, whether as researchers or practitioners, should place humans in a central role when designing technologies. Only by designing technology in a human-centered sense can we be aware of the diversity among both people and situations. There is no universal design. Maybe it doesn’t really matter whether technology is calm or engaging. Humans matter.


[1] Mark Weiser. 1991. The Computer for the 21st Century. Scientific American 265, 94 – 104. DOI: http://dx.doi.org/10.1038/scientificamerican0991-94

[2] Yvonne Rogers. 2006. Moving on from Weiser’s Vision of Calm Computing: Engaging UbiComp Experiences. In Proceedings of the 8th International Conference on Ubiquitous Computing (UbiComp ’06). Springer-Verlag, Berlin, Heidelberg, 404-421. DOI: http://dx.doi.org/10.1007/11853565_24

Sense-making Process in HCI

One of the most apparent themes that emerges from Kari Kuutti’s writing on activity theory [1] is sense-making. In fact, the field of HCI has struggled with various sense-making processes from the very beginning, when the field was still forming. Questions are asked: What does HCI mean when it stands between human and computer? How can HCI make sense of human and computer? What theories and methodologies should be applied? What are the possible outcomes and contributions of HCI research? Activity theory tries to answer these questions by providing a set of terms and their relationships in a framework, and then instantiating this framework in a specific environment to better understand how humans achieve a goal by mediating through different tools and leveraging the relationships among elements inside the environment.

Therefore, “a specific environment” becomes the key phrase here. To be specific also means to be situated, to acknowledge the uniqueness of a setting that embeds a goal, task, or problem. This characteristic of activity theory echoes other theories developed around the late 1980s and early 1990s, such as distributed cognition, situated action, and ethnomethodology. The inclusion of the external environment as an influential factor in understanding human cognition marks a significant change in HCI’s sense-making approach. The human actor was no longer seen as an isolated system with a structure comparable to a computer’s. Instead, because the external environment had been taken into consideration, the effects it could possibly have on humans were also introduced into the sense-making process.

I am deeply moved by this idea: meaning can only be constructed in a concrete situation. It is dangerous for both researchers and practitioners in HCI, especially theorists, to simply grab a theory and try to derive meanings from it, since theory looks much more reliable and stable than the changing environment and the human mind. Nevertheless, meanings don’t come from theory. They originate from the application of theory in a particular context. While theory offers means, context defines the environment. Meaning is the end we reach by starting from a question and passing it through one or several appropriate theories in a particular context. I have simplified the sense-making process here; real-world issues can only be more complex.

Why should we treat the sense-making process so carefully? A short answer is that technologies have never been so massively present in our lives. Thus, it is certain that the changes technology brings will affect the way we make sense of ourselves and how we understand the world. That’s why we need to understand the effects of technology situatedly. I’m happy to see that researchers in HCI have digested this view and that so many topics have come into being because of this idea. Examples include social computing, ubiquitous computing, ICTD, etc. However, I remain open about how HCI will improve or adjust its sense-making approach in the future, as change is happening all the time.


[1] Kari Kuutti. 1995. Activity Theory as a Potential Framework for Human-Computer Interaction Research. In Context and Consciousness: Activity Theory and Human-Computer Interaction (Nov. 1995), 9-22. The MIT Press.