Most of us are constantly interacting with a user interface, though we hardly notice it. Whether on our laptops, at work, or on the cellphones we carry everywhere, a large portion of our day-to-day experience involves dealing with user interfaces.
What if we reversed the roles? What if the interface interacted with us, influencing us through the colors it projects and the fonts it chooses? breathing.ai is at the forefront of adaptive interfaces, tapping into their potential for wellness and well-being.
But it’s not all sunshine and roses. The same user interface that has the potential to heal you also has the potential to harm you.
We sat down with Bend to talk more about his work and what it means for the transhumanist community.
Hannes Bend: breathing.ai was started at the end of 2018 to help regular screen users feel calmer. The program utilizes art insights, machine learning, and our team’s award-winning neuroscience research on meditation and visual stimuli to physiologically impact users without additional apps or hardware.
Beginning in 2014, I became motivated to research the effects of visual stimuli, digital displays, and screen time on users. At that time, and even up to now in 2020, research has remained limited, despite public curiosity and concern on the matter. The research that does exist suggests that increased screen time correlates with decreased brain matter and poorer cognitive abilities.
Based on our neuroscience research findings and those of others, my team created multiple biofeedback experiences using Virtual Reality (VR), Augmented Reality (AR), and audio to develop what we called “Mindful Technologies.”
These Mindful Technologies were intended to counteract many of the negative effects which we had found. Our VR biofeedback experiences, which combined mindfulness with ocean advocacy, were exhibited at museums and presented in collaboration with yogic experts such as Wim “The Iceman” Hof.
Despite what we were able to achieve with our Mindful Technology experiences, these were still limited, curated engagements, with a small number of users. Every time I came back to the dense, urban lifestyle of NYC from my work at research universities, I noticed how much screen time — in 2018, 10–11 hours for American adults on average — people already had. I decided that to have a tangible effect on people’s digital experiences, I would need to start where people already were.
We started breathing.ai to improve existing screen time by integrating biofeedback into everyday technological devices with machine learning.
Our Adaptive Interfaces customize UX designs and settings, such as colors, fonts, and brightness, to the neurological preferences of each screen user.
We call this process “personalization” and develop it for use at both the consumer and business level.
HB: VR, wearable sensors and meditation apps all require additional technical devices or app usage. We presented our VR experiences with breathing or heart-rate biofeedback internationally, but always had to rely on that extra time or equipment. Even with VR, users would still need a screen in front of their eyes.
So we created multiple prototypes using existing hardware — such as the webcam — and AI to detect heart rate and other biometrics (the correct term is physiological computing) while the person uses the screen. One of the first iterations was developed at ThoughtWorks in 2017 using a smartphone, and another at the MIT Media Lab with Noah Picard in 2018 using a laptop.
Now, we are developing Adaptive Interfaces that adapt the screen experience (and later audio and olfactory experiences) to each user and their nervous system. The Adaptive Interfaces can be added as a feature to existing software and personalize, for instance, the background colors of emails.
Currently, many companies offer features that reduce blue light at night or provide a darker mode. None of these are based on biometrics yet; they require the user to change settings manually. Interfaces that adjust themselves to each user’s biometrics are what we call Adaptive Interfaces.
SCREEN TIME WILL MOST LIKELY NOT DECREASE IN WORK SPACES OR PERSONAL LIFE FOR MOST PEOPLE. SO THERE IS A NEED TO IMPROVE THE EXISTING SCREEN TIME FOR WELL-BEING.
Processing power is constantly increasing, webcams and smartphone cameras are capturing more detail, and our research and development keeps progressing.
Larger data sets will enable us to use more refined machine learning to personalize the screen experience with more detail to each user.
HB: Only a few decades ago, color TV replaced black and white. It is somewhat unfathomable that billions of humans, including me, now stare for many billions of hours at the same colors, for instance in chat apps used every day.
Our internal studies and prototypes indicate that blue and grey are mostly calming, while bright green increases heart rate. Here is a video with Polar CEO Kunal Gupta, at 1:00, testing our 2018 prototype at a conference, with a 25-beat difference in heart rate between grey and bright green:
Some companies are actually using the color preferences of human nervous systems for user retention. Here, blue and grey are used for chats with people using the same products, while the bright green that can heighten the nervous system’s fight-or-flight response appears when chatting with a user of another phone company.
Our prototypes have been shown to hundreds of users to test the effects of colors and fonts on heart rate, and people’s nervous systems respond differently and subjectively. These responses and preferences also change over time.
While I worked with researcher Irida Mance on neuroscience research in the lab of Ed Vogel from 2014 to 2016, studying correlations between visual stimuli and electrical brain activity (EEG), the lab of Richard Taylor, also at the University of Oregon, studied such correlations specifically with fractal images.
None of our research found a single visual stimulus or fractal image that all study subjects would “prefer.”
Perception and each person’s upbringing seem to play a role in how the environment (e.g., the digital one) impacts the nervous system.
That is why personalization is so important. Humans are not machines. We all respond differently to designs and situations.
Currently, our Adaptive Interfaces derive the heart rate using the webcam and machine learning. The video stream is focused on the most visible skin area of the screen user’s face, and our program derives the heart rate from the red pixel values of the skin by comparing them frame by frame.
Imagine looking at each other’s faces in super slow motion. We would see our faces pulsating with blood just underneath the skin (the process is called photoplethysmographic imaging).
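To make this concrete, here is a minimal sketch of that kind of webcam-based heart-rate estimation in TypeScript. It is only an illustration: it samples a fixed region of the frame instead of using real face detection, and every function name and constant below is an assumption for this sketch, not breathing.ai’s actual implementation.

```typescript
// Minimal illustrative sketch of webcam-based heart-rate estimation (rPPG).
// Simplifications: a fixed central patch stands in for real face tracking,
// and timing via setTimeout is only approximate.

const SAMPLE_HZ = 15;   // frames sampled per second (assumed)
const WINDOW_SEC = 20;  // seconds of signal collected for one estimate

async function estimateHeartRate(): Promise<number> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await new Promise(resolve => (video.onloadedmetadata = resolve));
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d")!;

  // Collect the mean red intensity of a central patch, frame by frame.
  const samples: number[] = [];
  while (samples.length < SAMPLE_HZ * WINDOW_SEC) {
    ctx.drawImage(video, 0, 0);
    const roi = ctx.getImageData(
      Math.floor(canvas.width / 3), Math.floor(canvas.height / 3),
      Math.floor(canvas.width / 3), Math.floor(canvas.height / 3));
    let red = 0;
    for (let i = 0; i < roi.data.length; i += 4) red += roi.data[i]; // R of RGBA
    samples.push(red / (roi.data.length / 4));
    await new Promise(r => setTimeout(r, 1000 / SAMPLE_HZ));
  }

  // Find the dominant frequency between 42 and 180 beats per minute
  // (0.7–3 Hz) with a naive discrete Fourier transform of the signal.
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const x = samples.map(v => v - mean);
  let bestBpm = 0;
  let bestPower = 0;
  for (let bpm = 42; bpm <= 180; bpm++) {
    const f = bpm / 60; // Hz
    let re = 0;
    let im = 0;
    x.forEach((v, n) => {
      const phase = (2 * Math.PI * f * n) / SAMPLE_HZ;
      re += v * Math.cos(phase);
      im += v * Math.sin(phase);
    });
    const power = re * re + im * im;
    if (power > bestPower) {
      bestPower = power;
      bestBpm = bpm;
    }
  }
  return bestBpm;
}
```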
There is a short initiation period in which the Adaptive Interfaces display different designs, for instance, in order to learn the personalization that follows. Then, the Adaptive Interfaces can customize the screen experience in real time. If they were integrated here, the background color of this text, the font, the brightness and the contrast would now change subtly and dynamically to support a lower heart rate in the reader.
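As a rough illustration of how such an initiation period could work, the sketch below cycles through a few candidate background colors, records the heart rate observed with each (reusing the hypothetical estimateHeartRate() from the previous sketch), and keeps the calmest one. The colors and timing are placeholder assumptions for the sketch, not breathing.ai’s actual values.

```typescript
// Illustrative calibration sketch: show each candidate background color,
// measure the heart rate with it, and keep the color with the lowest rate.
// The candidate colors are placeholder assumptions, not breathing.ai values.

const CANDIDATE_COLORS = ["#dfe8f1", "#e8e8e8", "#e7f2e4", "#f5ecd9"];

async function calibrateBackground(): Promise<string> {
  const results: { color: string; bpm: number }[] = [];
  for (const color of CANDIDATE_COLORS) {
    document.body.style.backgroundColor = color; // display the candidate
    const bpm = await estimateHeartRate();       // ~20 s of webcam signal each
    results.push({ color, bpm });
  }
  results.sort((a, b) => a.bpm - b.bpm);
  return results[0].color;                       // the calmest candidate
}

// After the initiation period, keep using the calmest color found so far.
calibrateBackground().then(color => {
  document.body.style.backgroundColor = color;
});
```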
With more data about each user, the personalization will become more accurate and refined. So for someone using it for hours a day, lowering the heart rate by just 5 beats per minute could mean thousands fewer heartbeats per day (5 beats per minute over 10 hours of screen time is about 3,000 beats).
This is huge, considering the prevalence of cardiovascular disease.
HB: Let’s use an ongoing example and envision that Facebook used this to sell more accurately targeted ads. There are still news reports that its ads help candidates win elections. Every time a person used Facebook, that user’s heart rate would be detected using the webcam.
Facebook could then run correlation analyses on which designs and content affect each user’s nervous system, and then sell this highly sensitive and personal information to political campaigns.
Targeted political campaigns in 2016 were based on personal profiles of swing-state voters. They might have depicted political candidates as devils or angels, as a profile of the person’s religious views might suggest. These ads, however, could still be evaluated consciously, and people could act based on their own conclusions.
Analysis of the nervous system, and of how a certain interface drives a heart rate up or down, is, on the other hand, more than most people are aware of about themselves.
A company using such personalization of ads to the nervous system could thereby “speak” to the unconscious part of a person.
Ads are often sold to the “highest bidder,” not according to ethical values that consider the well-being or privacy of the user.
A company like Facebook needs to please its shareholders first and foremost, not to offer stress reduction to its users.
And if a government was being helped by Facebook to get elected in the first place, why would they impose regulations if a more sophisticated technology could help them win again?
The data — once sold to advertisers — can be used anywhere.
So, in the future, people at a bus stop with a digital ad display could have their nervous systems targeted with personalized ads, even without being aware of it.
HB: First of all, I hope all users of technology will become aware of biometric tech, and that regulations will be enacted to limit its use to well-being only.
We cannot allow data about the unconscious parts of humans to be owned and sold by companies without ethical restrictions.
Many other new companies center their products and business models around the well-being of the user. For now, let’s picture a world where the technology is only used for good.
In the near future, the technology to personalize could extend beyond heart rate and detect further physiological activity, such as breathing patterns. Humans usually breathe very shallowly, using only the chest, and on average about 22,000–23,000 times per day.
PBS article on diaphragmatic breathing (“Belly Breathing”)
In the first years after birth, all humans mostly breathe via the belly, using the diaphragmatic breathing muscles. This slower and more expansive breathing pattern is healthy, and studies suggest it supports memory, immune system function and performance.
I imagine a personalization of technological devices to support slower and deeper breathing patterns by combining deep learning with deep breathing.
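One way such breathing detection might work, purely as an assumption here, is that the same photoplethysmographic signal sketched earlier also carries a slow baseline wander related to respiration; scanning a lower frequency band of roughly 6–30 breaths per minute instead of the heart-rate band could give a rough respiratory estimate. A hypothetical sketch:

```typescript
// Hypothetical sketch: estimate breaths per minute from the same mean-red
// sample series used for heart rate, scanning the respiratory band with the
// same naive DFT. A real system would need a longer window (a minute or more)
// to resolve slow breathing frequencies reliably.
function estimateBreathingRate(samples: number[], sampleHz: number): number {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const x = samples.map(v => v - mean);
  let bestBpm = 0;
  let bestPower = 0;
  for (let bpm = 6; bpm <= 30; bpm++) { // breaths per minute
    const f = bpm / 60;                 // Hz
    let re = 0;
    let im = 0;
    x.forEach((v, n) => {
      const phase = (2 * Math.PI * f * n) / sampleHz;
      re += v * Math.cos(phase);
      im += v * Math.sin(phase);
    });
    const power = re * re + im * im;
    if (power > bestPower) {
      bestPower = power;
      bestBpm = bpm;
    }
  }
  return bestBpm;
}
```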
Artificial intelligence has barely been used for biofeedback, and the integration of AI into personalization via biometrics has an immense potential for well-being.
The personalization could support calmer neurological activity and monitor the state of health of each technology user, without requiring additional wearables.
BIOMETRICS ANALYZING BREATHING PATTERNS CAN BE USED TO ALERT A USER OR A DOCTOR IN CRITICAL SITUATIONS
Of course, the ethical use of this sensitive information is important.
HB: Currently, most technologies using screens are focused on the visual cortex and even audio experiences are addressing the part of our reality we are aware of — our mind. The personalization using biometrics integrates the nervous system into the technological experience. In yoga and meditation, the term ‘mindbody’ is used for a more embodied experience of reality by also becoming more aware of one’s body, breathing and heart beating.
Wim Hof training students in Poland
In last year’s research, we found that we can warm up our bodies with breathing and awareness. Wim “The Iceman” Hof, with whom I worked for many years, teaches this in a way that is accessible to anybody. People are even able to resist an injection of E. coli bacteria through simple changes toward deeper breathing. Other research suggests humans have a “sixth” sense to detect magnetic fields.
Whales and dolphins are aware of their breathing, and many species have electromagnetic senses and interoception. Humans can use technology to train themselves and relearn such senses. Personalizing and adapting screen and audio experiences to this kind of sensing could enable faster learning and thereby augment human reality.
While I was a visiting scholar in the Alemán Lab for quantum and nanoscale physics, the researchers started to work on retinal implants using fractal patterns. That research is a step closer to restoring impaired vision or adding visual stimuli “into our brains” with retinal implants. Humans might be able to read emails without looking at a screen, with the emails projected onto the back of the eye by retinal implants. The potential implementation of visual stimuli makes it almost essential that the quality of those stimuli be stress-reducing and adaptive to well-being.
Hopefully the implementation and augmentation will be centered around well-being.
HB: Together with Rad Mora, I am working on creating videos that adapt in real time to the feelings or biometrics of the screen user. This will soon be published as Romancing.AI.
Together, Mike Alicea and I are developing a Chrome extension, and possibly one for another browser. It will enable background colors and fonts to adapt to the nervous system, displaying and adjusting them to support lower heart rates.
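As a minimal sketch of what such an extension’s content script might do, assuming a heart-rate reading like the one from the earlier estimateHeartRate() sketch is available, the example below softens the page background when the rate is elevated. The threshold, colors and interval are illustrative assumptions only, not the extension’s actual behavior.

```typescript
// content-script.ts: illustrative sketch for a browser extension that softens
// page colors when the reader's heart rate is elevated. All values below are
// placeholder assumptions, not breathing.ai's settings.

const CALM_BACKGROUND = "#dfe8f1"; // cooler tone shown at elevated rates
const DEFAULT_BACKGROUND = "";     // empty string restores the page's own color
const ELEVATED_BPM = 80;           // placeholder threshold
const CHECK_EVERY_MS = 30_000;     // re-evaluate every 30 seconds

async function adaptPage(): Promise<void> {
  const bpm = await estimateHeartRate(); // hypothetical sampler from above
  document.body.style.transition = "background-color 2s ease";
  document.body.style.backgroundColor =
    bpm > ELEVATED_BPM ? CALM_BACKGROUND : DEFAULT_BACKGROUND;
}

setInterval(() => {
  adaptPage().catch(console.error);
}, CHECK_EVERY_MS);
```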
Our team’s main focus, however, is not on consumer products, but on a scalable API for businesses. Many employees have to work on screens, and over 40% of jobs require prolonged screen time.
Employees nowadays prefer a personalized workspace over unlimited vacation days. So creating a calming and personalized pixelated workspace can improve the screen time people have to face for work.
In some countries, workplace regulations introduced within the last year require companies to detect stress levels and to offer well-being programs or a favorable workplace to their employees, for instance the NOM-035 regulation in Mexico as of October 2019.
Many corporate wellness and benefit programs, such as meditation offerings, have been failing. I have been guiding meditation professionally on the side for many years, in offices such as Google HQ NY, WeWork HQ and SUNY.
THE PROBLEM WITH MEDITATION IS THAT, UNLIKE WITH ADAPTIVE INTERFACES, IT IS NOT REALLY POSSIBLE TO EVALUATE THE REAL IMPACT ON WELL-BEING ASIDE FROM THE PARTICIPANTS’ SELF-REPORTS.
Our Adaptive Interfaces can detect stress signals, such as an elevated heart rate, during screen usage at work and offer a personalized pixelated work environment. They provide a solution for an improved digital work environment, along with the analytics that could inform potential regulations.
We are also in collaboration talks with multiple labs at research universities such as NYU and Parsons focused on the intersection of computer vision, psychology, UX/UI, human computer interaction and neuroscience.
We are currently designing studies and writing grant applications to root our development in scientific research for the long term. Computer vision syndrome (CVS) — a condition resulting from focusing the eyes on a computer or other display device for protracted, uninterrupted periods of time, with the eye muscles unable to recover from the strain due to a lack of adequate sleep — might affect up to 90% of people with more than three hours of daily screen time. It can cause headaches, blurry vision and more worrisome consequences.
HOW SCREEN DESIGNS AND SETTINGS CAN BE ADAPTED TO REDUCE CVS IS ANOTHER URGENT QUESTION FOR RESEARCH AND DEVELOPMENT THAT WE ARE INVESTED IN.
One of our National Science Foundation (NSF) grant applications is focused on digital learning environments. Adapting digital learning environments or online reading tools can support better learning abilities and comprehension.
Deeper breathing patterns support better memory. For instance, the use of adaptive interfaces for digital learning tools in classrooms could be an effective way to offer a better educational environment for each student. Right now, most digital reading is done with black fonts on white background, which is not the most efficient way for all students to perceive, memorize and comprehend.
Personalized UI and UX, lighting, and the integration of mindfulness and deeper breathing into technological devices can offer an affordable solution without requiring additional wearables to be purchased.
HB: I believe it will grow. There has been such a focus on the mind and visuals, with little integration of the body.
The opioid crisis, for instance, seems to stem from many factors, such as an over-reliance on external medication and a lack of the ability to self-regulate and strengthen one’s own immune system.
Studies on meditation and breathing strongly suggest positive health benefits from simple exercises, such as deepening one’s breathing pattern. Regulating one’s breathing and being aware of one’s heartbeat have a profound impact on health and happiness, and could reduce the need for medication or the risk of substance abuse. Mindfulness has also been shown to lower depression and anxiety.
Breathing expert Wim Hof and yoga teacher Eddie Stern have not only brought Tibetan and ancient yogic practices to Europe and the US in easily accessible forms, but both have also worked with scientists to study the practices and with technologists to use technology to reach a broader audience for well-being.
Nichol Bradford has founded the Transformative Technology Conference and Academy to bring a large global audience together. Zaya and Maurizio Benazzo have also for many years spearheaded the multi-disciplinary Science And Nonduality (SAND) Conference merging science, technology and mindfulness practices. Founders and meditation teachers Caverly Morgan with Peace in Schools and Carin Winter with Mission Be, whom I have been working with, too, have enabled more mindfulness to be offered in the educational system in the US.
Nina Hersher recently founded the Digital Wellness Collective, of which Breathing.AI is a member, as a network of digital wellness experts and organizations enhancing human relationships through the intentional use and development of technology.
Tristan Harris founded The Center for Humane Technology which addresses the harmful extractive attention economy by inspiring a new race to build Humane Technology that aligns with humanity.
There are many more inspiring peers who could be listed, and I want to express my gratitude here for all of their work.
THERE IS STILL A LONG WAY TO GO BEFORE MINDFULNESS PRACTICES AND ADAPTIVE INTERFACES BECOME A REGULAR PART OF SCHOOL CURRICULA OR EVERYDAY LIFESTYLES.
HB: Our currently issued patent and process, as well as pending patent claims, already cover the customization of screen, audio and olfactory output using biometrics and machine learning systems.
I picture a use of personalization for the Internet of Things (IoT) in which homes, cars, voice assistants and screens are all customized to the well-being of the users.
When one interacts with a voice assistant, it could have a soothing tone of voice individualized for each human. Homes could have scent diffusers emitting a calming or uplifting scent as preferred by each “nose” — and nervous system. We are invested in all of these applications with our patent, research and development. Our current focus is to improve existing screen time, at home or at work.
Our primary research and development is, and will remain, focused on the scalability and implementation of the interfaces so that they become integrated into the various existing and emerging technologies.