--

Thank you, Thomas.

Yes, right now I only have my own development version of the app, nothing I can share yet, but I hope that will change over the coming months.
In terms of tech stack, I use a Polar H10 as my sensor of choice, and then collect and process all the data with custom Python code (making extensive use of the NeuroKit2 library). The frontend and graphs are built with Dash/Plotly. For the audio component, I've so far been producing the static soundscapes in Ableton Live, but I'll probably switch to Max/MSP or Pure Data when making them more reactive, and also so that I can later embed them in an app.
Hope that helps.

--

Max Frenzel, PhD

AI Researcher, Writer, Digital Creative. Passionate about helping you build your rest ethic. Author of the international bestseller Time Off. www.maxfrenzel.com