
Do You Know Your Heart’s Favorite Color?

AI does. It can now detect your nervous system’s signals as you use your screen.

New developments combining artificial intelligence with existing hardware and software could fundamentally alter how we use screen-based technologies. It is now possible to read a screen user's nervous system activity with nothing more than a camera held in selfie position, a webcam, and machine learning. The next aim is to adapt screen interactions to the activity of the user's autonomic nervous system.

Screens and stress

American adults and adolescents use screens for up to 11 hours daily, and an increase in stress and anxiety has been reported since the rise of smartphone technology.

Some recent studies raise alarms about the effects of screen time on developing brains. Furthermore, 81% of employees check screens after hours and on weekends, 59% of Americans report feeling stressed, and one in ten employees has called in sick because of stress. Employee stress has real effects on businesses' bottom lines. Employee health benefits at Starbucks, for example, cost more annually than coffee beans, and stress accounts for around $46 billion in annual excess health-care costs. The Centers for Disease Control and Prevention reported in 2016 that stress is the leading workplace health problem.


Computer work and on-screen visuals affect mental stress levels, performance, emotion, and biomarkers such as blood pressure and heart rate variability. Reducing or limiting screen use does not seem to be a sustainable solution, especially as devices grow more advanced with foldable phones and hands-free augmented reality. Perhaps these new technologies can instead help alleviate the economic and mental burden on the workforce.

Unchanged design development

One thing that has not changed is the screen display itself. The design of screen experiences remains constant: apps, websites, and screen designs still follow a top-down approach from developers to users, without taking the users' well-being into consideration. This is where physiological computing has the potential to transform the way we interact with technological devices.

Physiological computing


Physiological computing is a term used to describe the detection of a human's nervous system activity and its integration into a technological interface. One application would be a computational device detecting how fast a user's heart beats while they look at a screen.

It goes beyond biometrics, which is the technical term for body measurements and calculations.

Physiological computing has already been used for a few years now to analyze how you breathe and how your heart beats. The physiological data are read from subtle changes in the screen user's skin tone.

Physiological computing can use facial recognition as well. Facial recognition technologies have been criticized in the media, by all political parties alike, as inaccurate, invasive, and potentially chilling for Americans' privacy and free expression rights. They have also been in the news for inherent racial bias, since the algorithms are trained by developers who are mostly cis white men. Physiological computing, however, focuses on an underlying similarity: all human blood is red. It uses facial recognition only to narrow down which part of the camera image is a human face. After detecting the face, it focuses on the skin tone to analyze the nervous system of the screen user, as the sketch below illustrates.
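As a rough illustration of that step, here is a minimal sketch in Python. It assumes OpenCV and its bundled Haar cascade face detector; the library choice and the helper name are my own assumptions, not drawn from any project mentioned in this article.

```python
# Minimal sketch: locate the face, then hand back only the skin pixels.
# Assumes OpenCV; any face detector would work in place of the Haar cascade.
import cv2

def face_skin_pixels(frame_bgr):
    """Find the largest face in a frame and return its central skin region."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    # Crop to the central forehead/cheek area, which is mostly skin.
    return frame_bgr[y + h // 4 : y + 3 * h // 4, x + w // 4 : x + 3 * w // 4]
```

Note that nothing here identifies whose face it is; the detector only answers where the face is, so the skin analysis can begin.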

Nervous system activity is processed via web and smartphone cameras and machine learning.

The video stream from the camera is split into frames with red, green, and blue channel values. The green channel makes it possible to extrapolate the heart rate: it carries the most information about how fast or slow blood pulses beneath the human skin. These fluctuations are invisible to human perception, but machine learning can track the green channel across frames to infer the heartbeat. Cardiolens, a recent project by alumni of the MIT Media Lab, demonstrates the process using a mixed reality approach.
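Building on the face-localization sketch above, here is a hedged sketch of the green-channel idea: average the green values of the skin region in each frame, then pick the dominant frequency in the plausible heart-rate band. The frame rate, window length, and simple FFT peak-picking are my assumptions; real systems such as Cardiolens add detrending and motion compensation on top.

```python
# Minimal sketch: estimate heart rate from the green channel of a webcam
# stream, reusing face_skin_pixels() from the sketch above.
import cv2
import numpy as np

FPS = 30             # assumed camera frame rate
WINDOW_SECONDS = 10  # assumed analysis window

def estimate_bpm(green_means, fps=FPS):
    """Estimate beats per minute from mean green values, one per frame."""
    signal = np.asarray(green_means, dtype=float)
    signal -= signal.mean()                           # drop the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)            # 42 to 240 bpm
    return 60.0 * freqs[band][np.argmax(power[band])]

cap = cv2.VideoCapture(0)
green_means = []
while len(green_means) < FPS * WINDOW_SECONDS:
    ok, frame = cap.read()
    if not ok:
        break
    skin = face_skin_pixels(frame)
    if skin is not None:
        green_means.append(skin[:, :, 1].mean())      # channel 1 is green in BGR
cap.release()
print(f"Estimated heart rate: {estimate_bpm(green_means):.0f} bpm")
```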

Compared with alternative approaches, the use of video streams offers a more accurate method. Alternatives include applications using wireless signals, such as MIT CSAIL's device for understanding emotions, or radar sensor methods, such as XeThru, which recently closed a $15M funding round. Wireless and radar signals are active signals: a signal must be actively sent to the user and received back for analysis. The round trip takes time, and as the user moves, the analysis can become less accurate.

Video streams, by contrast, are passive signals. The user can move while the signal for analysis, the skin tone, is streamed via the camera to the computational device for processing. Milliseconds and movement matter, and the passive approach seems the most promising for accurate, near-real-time analysis of the user's vital signs.

Transparency is key


Civil rights campaigners already label facial recognition as “perhaps the most dangerous surveillance technology ever developed” because of its apparent algorithmic biases, excessive use by law enforcement agencies, and invasion of privacy through suspect collaborations between Amazon and government agencies.

With such ongoing outrage, how will physiological computing be received publicly?

It is now up to the people researching and developing physiological computing to provide transparency about methods and motives.

Beyond Face-booking

Physiological computing can easily be used for far worse invasions of user privacy than what occurred with Facebook and Cambridge Analytica. In that case, the private data of Facebook users were sold for political purposes, to target groups of undecided voters with political ads.

Physiological computing goes beyond private data and groups of people. It detects information on an individual level, unbeknownst to the users themselves: how their heart beats, how they breathe, and how those beats and breaths change while they perceive specific screen content.

Individually designed ads could be adapted to each screen user, and by leveraging heart rate and breathing rate, ad campaigns could be tailored in favor of the business.

How do you feel about AI knowing more about your heart’s beating than you do?

Marketing nerves

Ads have long delivered their messages through visual concepts. Coca-Cola was once described as “a valuable Brain Tonic, and a cure for all nervous affections,” and ads for high-sugar drinks such as Gatorade are regularly paired with cooling images of water. The concept is to use water, or the promise of a cure, to stimulate the visual cortex and activate the parasympathetic nervous system, also called the “rest and digest” system. It slows the heart rate and is stimulated by calming visuals such as water, which has been soothing for most humans throughout our history.

With current advertisements, consumers might go shopping and the memory of “calming” ads might steer them, consciously or unconsciously, toward buying a high-sugar drink.

Personalization

Physiological computing offers the targeting of ads on a personal level, with much higher accuracy in how they impact a user's nervous system.

Humans increasingly grow up in urban environments, with fewer and fewer natural environments embodied as memories. Meanwhile, the loading wheel on a screen is reported to cause more stress than a horror movie or heavy traffic.

Pioneering physiological computing

Heart rate, heart rate variability, stress, and breathing patterns can already be detected with high accuracy, almost in real time.
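To make “heart rate variability” concrete, here is a minimal sketch of one common HRV metric, RMSSD: the root mean square of successive differences between inter-beat intervals. The example intervals below are invented for illustration; in practice they would come from peak detection on a recovered pulse signal.

```python
# Minimal sketch: RMSSD, a standard time-domain heart rate variability metric.
import numpy as np

def rmssd(intervals_ms):
    """Root mean square of successive differences between beat intervals."""
    diffs = np.diff(np.asarray(intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

# Invented intervals near 800 ms (~75 bpm); higher RMSSD = more variability,
# which generally signals a more relaxed autonomic state.
print(f"RMSSD: {rmssd([812, 790, 805, 798, 820, 801]):.1f} ms")
```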

At the forefront of research and development are a few scientists and companies focused on stress detection and mental wellbeing. Some of these groups use existing hardware and software for contactless sensing. Because RGB camera-based photoplethysmography requires facial recognition, it raises privacy concerns, and it also faces illumination issues, since environmental light affects the analysis.

Thermography is much less affected by those constraints, and UCL researchers Nadia Bianchi-Berthouze and Youngjun Cho have been pioneering physiological computing with thermal imaging cameras in smartphones to detect breathing patterns and recognize mental stress.

Smartphones and/or RGB camera-based detection are used by multiple companies.

USU researchers Dr. Jake Gunther and Nate Ruben patented their heart rate estimation tool and started the company Photorithm, Inc. They launched “My Smart Beat,” a $249.50 video baby monitor with breath detection that distinguishes more than 16 million shades of color to pick up movement too small to be seen by the human eye.

The company Conscious has assembled a team to fundraise for its “platform to elevate human consciousness through AI-driven meditation, therapeutic techniques, and contactless biofeedback,” using the smartphone to track breathing and HRV. breathing.ai is also fundraising with early prototypes and offers patent-based “Adaptive Interfaces” to personalize screen experiences.

Vision

Imagine these fonts and colors changing and adapting to your breathing and heartbeat.

The next step of physiological computing is to use machine learning to adapt screen designs to change your breathing and heart rate. A sketch of such a feedback loop follows below.
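Here is a minimal sketch of the feedback loop this implies: read a vital sign, nudge a design parameter, repeat. Everything below is my assumption, not any company's method; read_current_bpm and apply_background are hypothetical stand-ins for the camera pipeline and the UI layer, and the simple blend rule stands in for whatever a learned policy would choose.

```python
# Minimal sketch of a physiological feedback loop: heart rate in, color out.
import random
import time

# Hypothetical stand-ins: read_current_bpm would wrap a camera pipeline like
# the one sketched earlier; apply_background would call the real UI toolkit.
def read_current_bpm():
    return 70 + random.uniform(-5.0, 20.0)  # simulated heart rate reading

def apply_background(rgb):
    print(f"setting background to {rgb}")

NEUTRAL = (255, 255, 255)       # plain white background
CALM_BLUE = (210, 235, 250)     # soft, low-saturation calming tone

def pick_background(bpm, baseline=70.0):
    """Blend from neutral toward the calming tone as heart rate rises."""
    excess = max(0.0, min(1.0, (bpm - baseline) / 30.0))
    return tuple(round(n + excess * (c - n)) for n, c in zip(NEUTRAL, CALM_BLUE))

for _ in range(5):              # a real interface would loop indefinitely
    apply_background(pick_background(read_current_bpm()))
    time.sleep(1)               # re-adapt every second or so
```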

Colors, fonts, and designs impact attention and the nervous system on an individual level just as any natural environment does. And today, most people’s environment is a pixelated one.

The future is the personalization of screen experiences to the users’ autonomic nervous system. Screen designs can affect the attention, mood, performance, stress levels, and overall health of the user.

Personalization could be used purely for company profit, or as a win-win for companies and clients, integrating mindful biofeedback for stress reduction and improved attention.

Serious concerns will be raised, and transparency from companies and researchers about their methods is essential. The applications that help users feel better and improve their attention through calming interfaces will, hopefully, be the ones that last.

With calming interfaces, imagine the reduction of heart rate and stress.

On the other hand, imagine an increase of stress levels with interfaces that are designed to manipulate a user's desires using personalized ads for profit.

This is the future of technology being created.

It cannot be an either-or.

The use of this technology affects the unconscious nervous system of screen users.

The ethical and practical implications need to be addressed early and consciously as a collective.

Humans take 12 to 20 breaths per minute, which adds up to about 23,000 breaths per day (at 16 breaths per minute: 16 × 60 × 24 ≈ 23,000). With 11 hours of screen time, roughly 10,000 of those breaths happen in front of a screen.

Each breath counts.

This is a vast opportunity: to calm our collective breathing patterns with calming screens, or to make us breathe even more stressfully with desire-driven screens.

Poet Maya Angelou said, “Life is not measured by how many breaths we take, but by the moments that take our breath away.”

Will the future of technology be breathtakingly calming for our hearts?
