The urban landscape is closely related to people's behaviors. With the emergence of new technology, many electronic devices have been developed for self-monitoring purposes in studying the relationship between individual behavior and the built environment, such as pedometers, armband sensors and wearable cameras. Compared with traditional methods based on self-reports, questionnaires or diaries, using new technologies and devices to measure and track individual behaviors and movement can capture first-hand data on the human body passively and unobtrusively. Beyond that, combined with advanced computer visualization techniques, the potential of wearable devices for studying environmental exposure and psychological perception has been continuously explored. Among these devices, the wearable camera, a portable micro photographic gadget, is particularly well suited to tracking life details and recalling emotional preferences, because it automatically takes a photo every 30 seconds and collects more than 1,000 images each day. As a result, it becomes possible to build digital records of personal experience by amassing large personal image databases. While this device has already been used in experimental fields such as medical science and computer science, for interpreting personal lifelogs and treating memory disorders, it has not yet been applied to the study of urban space.
Taking one individual participant as an example, this paper applies computer science methods to study personal spatial behavior and to evaluate personal spatial exposure to greenness by analyzing a personal image database collected with a wearable camera (Narrative Clip 2). Data collection ran from 8 October 2018 to 15 October 2018; the participant wore the camera from 8:00 am to 11:00 pm each day and kept it clipped to the same place on the collar. After excluding private images, this process collected 8,381 pictures in total, with 1,200-1,500 photos per day on average. To identify and analyze the database automatically, both the Microsoft Cognitive Services API and Matlab are used to process the images and return information for identifying greenery and evaluating the condition of personal exposure. Called from Python, the Microsoft Computer Vision API extracts a rich set of visual features from image content by identifying the various "tags" appearing in each image, including environmental characteristics, figures and objects. This paper extracts 41 tags related to outdoor space and greenness, including 'city', 'outdoor', 'flower', 'garden', 'grass', 'green', 'park', 'people', 'street' and 'tree', and then calculates the duration of the individual's exposure to the green environment. Another method for estimating personal greenness exposure is to calculate the ratio of the green and blue parts of each image in Matlab, which is used to judge the continuity of the individual's exposure to the green environment and to analyze the duration of each exposure episode. Finally, a manual audit is applied to verify the validity of the results produced by the Microsoft API and Matlab.
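As an illustration of the tag-extraction step, the following Python sketch posts one image to the Computer Vision tag endpoint and checks the returned tags against a greenness-related subset. The endpoint version, the placeholder credentials and the GREEN_TAGS subset shown here are assumptions for illustration, not the paper's exact configuration.

```python
from pathlib import Path

import requests  # third-party HTTP client

# Placeholder endpoint and key; the exact API version used in the study is not specified here.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<subscription-key>"
TAG_URL = f"{ENDPOINT}/vision/v3.2/tag"

# Illustrative subset of the 41 outdoor/greenness-related tags used in the paper.
GREEN_TAGS = {"city", "outdoor", "flower", "garden", "grass",
              "green", "park", "people", "street", "tree"}

def tag_image(path: Path) -> set:
    """Send one image to the tag endpoint and return the detected tag names."""
    headers = {"Ocp-Apim-Subscription-Key": KEY,
               "Content-Type": "application/octet-stream"}
    resp = requests.post(TAG_URL, headers=headers, data=path.read_bytes())
    resp.raise_for_status()
    return {t["name"] for t in resp.json()["tags"]}

def is_green_exposure(path: Path) -> bool:
    """Flag an image as outdoor/green exposure if any tag of interest appears."""
    return bool(tag_image(path) & GREEN_TAGS)
```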
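The color-ratio step was implemented in Matlab in this study; as a minimal sketch of the same idea in Python/NumPy, the function below counts the pixels whose green (or blue) channel dominates the other two channels. The dominance margin is an illustrative threshold, since the study's exact color rule is not specified.

```python
import numpy as np
from PIL import Image

def colour_ratios(path: str, margin: int = 20) -> tuple:
    """Return (green_ratio, blue_ratio): the fractions of pixels whose green
    or blue channel exceeds both other channels by `margin` (assumed threshold)."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    green = (g > r + margin) & (g > b + margin)
    blue = (b > r + margin) & (b > g + margin)
    return float(green.mean()), float(blue.mean())
```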
Four types of tags are extracted from the results of calling the Microsoft API: 'city', 'outdoor', 'street' and 'green'. The results show that the proportions of these four tags are 8.49%, 11.12%, 13.20% and 7.29% respectively during the week, and all ratios increase to varying degrees on the weekend, reaching 12.04%, 14.88%, 15.68% and 8.32%. By comparison, the manual audit shows that outdoor and green exposure account for 16.88% and 14.4% during the week, and 22.6% and 22.55% on weekends. In addition, the continuity of greenery exposure is estimated from the green ratio calculated in Matlab, which shows that high greenery exposure usually appears during commuting, eating out, going out and leisure time. It can be seen that the Microsoft API and Matlab color recognition basically reflect the trend of spatial exposure. However, owing to problems with lens position and usage, picture quality affects the results.
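Because the camera captures a frame roughly every 30 seconds, exposure duration and continuity can be estimated by grouping consecutive green-flagged frames into episodes. The sketch below illustrates this under that assumption; exposure_episodes is a hypothetical helper, and the input flags (e.g., produced by a per-image greenness check such as the one sketched above) are assumed to be in chronological order.

```python
from itertools import groupby

FRAME_INTERVAL_S = 30  # capture interval of the Narrative Clip 2 reported in the paper

def exposure_episodes(flags):
    """Group a chronological sequence of per-image green/not-green flags into
    contiguous exposure episodes; return each episode's duration in minutes."""
    return [sum(1 for _ in run) * FRAME_INTERVAL_S / 60
            for is_green, run in groupby(flags) if is_green]

# Example: one 3-frame episode and one 1-frame episode
durations = exposure_episodes([False, True, True, True, False, True, False])
# -> [1.5, 0.5]  (minutes)
```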
This study has demonstrated that continuous and dynamic recording of individual behaviors, including spatio-temporal information, helps move the assessment of people's environmental exposure from perception to quantification. With the contribution of image recognition technologies and computer science, it has also shown that state-of-the-art algorithms can pave the way toward a digital bridge between people's behaviors and the physical environment. However, some limitations remain. The camera must be worn correctly and fixed in place to obtain high-quality pictures. And since both the Microsoft API and Matlab have their own limitations, a complementary manual audit can uncover additional information and verify the results.