Thank you, Thomas.
Yes, right now I only have my own development version of the app, nothing I can share yet, but I hope that will change over the coming months.
In terms of tech stack, I use a Polar H10 as my sensor of choice, and then collect and process all the data with custom Python code (making extensive use of the NeuroKit2 library). The frontend and graphs are built with Dash/Plotly. For the audio component, I've so far produced the static soundscapes in Ableton Live, but I'll probably switch to Max/MSP or Pure Data when making them more reactive, and also so that I can later embed them in an app.
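To give a flavour of the processing side: the Polar H10 streams RR intervals (time between heartbeats), and libraries like NeuroKit2 turn those into standard HRV metrics. Here's a minimal pure-Python sketch of one such metric, RMSSD, using made-up RR values (the actual app's code and values will differ; NeuroKit2 reports this as `HRV_RMSSD`):

```python
import math

# Hypothetical RR intervals in milliseconds, as a Polar H10 might stream them
rr_ms = [812, 790, 830, 805, 795, 820, 810]

# RMSSD: root mean square of successive RR-interval differences,
# a standard time-domain HRV metric
diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
print(round(rmssd, 1))  # → 24.3
```

In practice the library handles artifact correction and many more metrics (SDNN, frequency-domain measures, etc.), so this is just the core idea behind the numbers that end up in the Dash/Plotly graphs.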
Hope that helps.