Jeremy Hsiao

Digital Artist and Music Technology Researcher

I create at the intersection of Audio Technology, Computer Science, Communications, Art, and Music. I'm interested in how audio processing, AI, and music perception combine to shape both creative expression and interactive, human-centered systems.

I designed this website as a portfolio to showcase my interests, projects, and background.


Highlights

Research Publications & Code

React.js, D3.js, Data Visualization, Research

EEG Music Studies Visualization and Research Paper

June 2025 | Advisor: Prof. Nilam Ram

Interactive platform for exploring EEG studies on music perception and cognition, featuring timeline visualization, advanced filtering, and dataset export capabilities. Curated a dataset of 44 studies, 197 experimental conditions, and 13 publicly available datasets. Wrote an accompanying full-length paper titled "The Evolution of EEG-Based Music Research: Methodological Transitions and Neurophysiological Insights from the 1970's to the Present". This work enables researchers to easily identify datasets and patterns in previous EEG and music studies in a searchable, standardized format.

View App and Paper on The Change Lab at Stanford →

Python, RNNs, PyTorch, ONNX, BiLSTM, Deep Learning, Audio Processing

Learning-Based Onset Detection for Functional Sound Segmentation

Summer 2025 | TU Berlin Sound Innovation Lab | Prof. Stefan Weinzierl

Deep learning system for detecting onset points in functional sounds (UI audio, device sounds, consumer electronics). The trained BiLSTM model achieves an 86.5% F1 score on our dataset of 2,107 files, outperforming traditional energy-based methods (83.2%) and librosa's onset detection (74.2%). Created documentation, a technical paper, and a demo comparing results across methods on the dataset. Due to company restrictions, the data and code cannot be shared at this time; a technical paper for IEEE is in progress.

Project Details →
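As a rough illustration of the modeling approach (not the project's actual configuration), the sketch below shows a frame-wise BiLSTM onset classifier in PyTorch; the input features, layer sizes, and detection threshold are placeholder assumptions.

```python
# Minimal sketch of a frame-wise BiLSTM onset classifier (illustrative only;
# feature type, layer sizes, and the 0.5 threshold are assumptions, not project values).
import torch
import torch.nn as nn

class OnsetBiLSTM(nn.Module):
    def __init__(self, n_features=80, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # per-frame onset logit

    def forward(self, x):                     # x: (batch, frames, n_features)
        out, _ = self.lstm(x)                 # (batch, frames, 2 * hidden)
        return self.head(out).squeeze(-1)     # (batch, frames) onset logits

model = OnsetBiLSTM()
frames = torch.randn(1, 500, 80)              # hypothetical 500-frame feature excerpt
probs = torch.sigmoid(model(frames))          # per-frame onset probabilities
onset_frames = (probs[0] > 0.5).nonzero()     # indices of frames above the threshold
```

In a pipeline like this, the per-frame probabilities would typically be peak-picked and compared against annotated onset times to compute precision, recall, and F1.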

Python, MATLAB, JAX, SciPy, Signal Processing

Cambridge Loudness Model Implementation

Aug 2024 - Dec 2024 | Stanford CCRMA | Advisor: Prof. Malcolm Slaney

Co-authored an implementation and optimization of the Loudness Model for Time-Varying Sounds with Binaural Inhibition, translating it from MATLAB to Python with NumPy, JAX, and SciPy. Added a comprehensive testing suite, demonstration scripts, and runtime optimizations.
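As a rough, illustrative sketch of the temporal-integration idea in a time-varying loudness model, the snippet below applies one-pole attack/release smoothing to an instantaneous loudness track; the time constants are placeholders, not the published model coefficients or the values in our implementation.

```python
# Toy attack/release smoother over successive loudness frames (coefficients are
# placeholder assumptions, not the published model's values).
import numpy as np

def smooth_loudness(instantaneous, attack=0.045, release=0.02):
    out = np.zeros_like(instantaneous)
    prev = 0.0
    for i, x in enumerate(instantaneous):
        alpha = attack if x > prev else release   # rise faster than decay
        prev = prev + alpha * (x - prev)
        out[i] = prev
    return out

short_term = smooth_loudness(np.abs(np.random.randn(1000)))  # dummy loudness track
```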

Python, MATLAB, Signal Processing, EEG Analysis

Auditory Attention Decoder

Aug 2024 - Dec 2024 | Stanford CCRMA

Reconstructed a backward temporal response function (TRF) model based on the MAD-EEG dataset, recreating published results for detecting auditory attention from EEG data. The system predicts which instrument a listener is focusing on within polyphonic music mixtures (duets and trios) by reconstructing audio representations from multi-channel brain activity. Processed the MAD-EEG dataset of 20-channel EEG recordings from 8 subjects attending to target instruments. The model correlates neural responses with attended versus unattended musical sources and achieved results comparable to the original paper.

GitHub
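The backward-model idea can be sketched briefly: regress time-lagged EEG onto an audio envelope with ridge regression, then check which source's envelope the reconstruction correlates with best. The lag range, regularization strength, and array shapes below are illustrative assumptions, not the project's exact settings.

```python
# Hedged sketch of a backward TRF (stimulus-reconstruction) decoder.
import numpy as np

def lag_matrix(eeg, max_lag):
    """Stack time-lagged copies of each EEG channel: (T, channels * max_lag)."""
    T, C = eeg.shape
    X = np.zeros((T, C * max_lag))
    for lag in range(max_lag):
        X[lag:, lag * C:(lag + 1) * C] = eeg[:T - lag]
    return X

def fit_backward_trf(eeg, envelope, max_lag=25, lam=1e3):
    X = lag_matrix(eeg, max_lag)
    # Ridge regression: w = (X^T X + lam * I)^-1 X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

def decode_attention(eeg, env_a, env_b, w, max_lag=25):
    recon = lag_matrix(eeg, max_lag) @ w
    corr = lambda a, b: np.corrcoef(a, b)[0, 1]
    return "A" if corr(recon, env_a) > corr(recon, env_b) else "B"
```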

Audio/Visual Art

MAX/MSP, Logic Pro X, Projection Mapping

Touch Reactive Audio Generative Sound Installation

March 2025 | Prof. Constantin Basica

Interactive sound installation combining projection mapping, contact microphones, audio generation, and particle generation in Max/MSP, presented in CCRMA's Listening Room. The algorithms generate random harmonic frequencies that shift with interaction. Two contrasting visuals react directly to the sound and slowly transform over time.

Logic Pro X, Sound Design, Video Editing

Inside

April 2025 | Prof. Constantin Basica

Audiovisual composition using Foley, sound design, and video editing. This piece explores the concept of confinement and the restlessness of feeling trapped.

MAX/MSP, Logic Pro X, Sound Design

Sonicsphere

June 2025 | Prof. Constantin Basica

Audio-visual performance exploring the limits of sound, FX, and audio-reactive visual particles in Max/MSP and Logic Pro X. It begins with a single knock and a single particle, then expands into a sphere of sounds and colorful visuals synced together in real time. Explored the limits of Jitter objects and of driving Max from Logic Pro X in performance.

MAX/MSP, Logic Pro X, Sound Design

Gridlock Audiovisual Performance

by Gracielly Abreu & Jeremy Hsiao. May 2025 | Prof. Constantin Basica

Audio-visual performance exploring two aspects of big-city life: noise pollution and club culture. The visuals present a distorted, shifting, audio-reactive mirror of the performers as the noise pollution swells and descends into chaos. Made with real-time audio-reactive visuals, an arpeggiator, and a sequencer built in Max/MSP.

Qiskit, ChucK, Quantum Computing

Quantum Generated Harmonics

May 2024

Created a program that analyzes input audio to extract key and pitch using librosa, then applies IBM's Qiskit quantum circuits to generate ambient music. Quantum measurements determine harmonic frequencies, phase shifts, and note generation. Implemented delay, reverb, and low-pass filtering to blend the quantum-generated harmonics with the original audio. Samples of the output are available in the presentation. Graduate course project (MUSIC 222A, Stanford CCRMA).
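A toy sketch of the core idea (assuming Qiskit with the Aer simulator installed): measure a small circuit of qubits in superposition and map the random bitstrings onto harmonics of a detected root pitch. The qubit count, mapping rule, and root frequency below are illustrative, not the project's actual mapping.

```python
# Illustrative only: map quantum measurement outcomes to harmonic frequencies.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

root_hz = 220.0                       # pretend librosa detected A3 as the root pitch
qc = QuantumCircuit(3)
qc.h(range(3))                        # superposition -> uniformly random measurements
qc.measure_all()

counts = AerSimulator().run(qc, shots=64).result().get_counts()
# Interpret each measured bitstring as a harmonic number (1..8) above the root.
harmonics = sorted({(int(bits, 2) + 1) * root_hz for bits in counts})
print(harmonics)
```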

Logic Pro X, Music Production, Mixing, Mastering

Electronic Tracks Produced By Me

2020 - 2025

Original electronic music productions showcasing composition, sound design, and audio engineering skills.

Education & Skills

Stanford University

BA in Communications, Double Minors in Computer Science and Music | GPA: 3.7
Graduated June 2025 (degree conferred Sept 2025)

Interdisciplinary courses in technology, media, and human communication. Hover over courses to see relevant skills.

Programming: Python, Java, C, C++, CSS, R, MATLAB, ChucK, JavaScript
Libraries and Frameworks: NumPy, SciPy, PyTorch, JAX, Librosa, Matplotlib, React, Qiskit, D3.js
Audio/DSP/Production: FFmpeg, MAX/MSP, Logic Pro X, ChucK, Spatial Audio
Research: Experimental Design, Data Analysis, Statistical Methods, Causal Inference
Tools: GitHub, Colab, Visual Studio, Xcode, Web Development
Languages: English (Native), German (B1), Chinese (A2)

Stanford in Berlin

Fall 2024 Study Abroad Program

Conducted independent research under Prof. Malcolm Slaney and studied German and international politics. Gained an international perspective on music technology and digital arts within Berlin's innovative culture and music scene.

Berklee College of Music

Certificate: Producing Music with Logic Pro X (Online Certificate Program, 2020)

Completed comprehensive online course covering MIDI and audio recording, software instruments, sound design plugins, mixing techniques, and music production workflows in Logic Pro X.

Research Experience

Python, RNNs, PyTorch, ONNX, Deep Learning, Audio Programming

Audio Communication Research Intern

TU Berlin, Germany | Summer 2025 | Prof. Stefan Weinzierl

Leading research as first author on onset detection classification methods for academic publication. Built a recurrent neural network (BiLSTM) model to detect onsets of functional sounds, improving detection over the existing algorithm. The project aims to develop neural network models that predict user perception of UX interaction sounds, with a focus on perceptual segmentation and signal-based audio cues. Designed evaluation pipelines using annotated datasets and identified improvements that enhanced model precision. Conducted in collaboration with the Sound Innovation Lab, an industry-facing UX sound research lab specializing in applied audio design for human–computer interaction.

React, D3.js, CSS, Data Visualization

EEG-Music Research Platform

Communication Capstone | March 2025 - June 2025 | Advisor: Prof. Nilam Ram

Built React/D3.js platform processing 197 experimental conditions across 44 studies. The tool aggregates data from EEG studies related to music perception and cognition, providing intuitive visualization of temporal trends in research methods and enabling detailed filtering across multiple dimensions of study design.

This project addresses the challenge of comparing and synthesizing findings across diverse methodological approaches in neuroscience and music research, facilitating discovery of patterns and relationships between experimental designs, musical stimuli, and EEG methodologies.

Python, MATLAB, JAX, SciPy, Signal Processing

Stanford Research in Music and Computers

Stanford CCRMA | August 2024 - Dec 2024 | Advisor: Prof. Malcolm Slaney

Cambridge Loudness Model Implementation

Co-authored an implementation and optimization of the Loudness Model for Time-Varying Sounds with Binaural Inhibition, translating it from MATLAB to Python with NumPy, JAX, and SciPy. Added a comprehensive testing suite, demonstration scripts, and runtime optimizations.

Auditory Attention Decoder

Reconstructed a backward temporal response function (TRF) model based on the MAD-EEG dataset, recreating published results for detecting auditory attention from EEG data. The system predicts which instrument a listener is focusing on within polyphonic music mixtures (duets and trios) by reconstructing audio representations from multi-channel brain activity. Processed the MAD-EEG dataset of 20-channel EEG recordings from 8 subjects attending to target instruments. The model correlates neural responses with attended versus unattended musical sources and achieved results comparable to the original paper.

Audio, Visual, and Computer Art

MAX/MSP, Logic Pro X, Sound Design

Real-Time Audio Reactive Particle System with Max/MSP

Aug 2025

Experimented with visualizing a chaotic attractor algorithm as a particle system, created audio-reactive parameters, and recorded the result over my own music.

MAX/MSP, Logic Pro X, Sound Design

Sonicsphere

June 2025 | Prof. Constantin Basica

Audio-visual performance exploring the limits of sound, FX, and audio-reactive visual particles in Max/MSP and Logic Pro X. It begins with a single knock and a single particle, then expands into a sphere of sounds and colorful visuals synced together in real time. Explored the limits of Jitter objects and of driving Max from Logic Pro X in performance.

MAX/MSP, Logic Pro X, Sound Design

Gridlock

by Gracielly Abreu & Jeremy Hsiao. May 2025 | Prof. Constantin Basica

Audio-visual performance exploring two aspects of big-city life: noise pollution and club culture. The visuals present a distorted, shifting, audio-reactive mirror of the performers as the noise pollution swells and descends into chaos. Made with real-time audio-reactive visuals, an arpeggiator, and a sequencer built in Max/MSP.

Logic Pro X, Sound Design, Video Editing

Inside

April 2025 | Prof. Constantin Basica

Audiovisual composition using Foley, audio design, and video editing. This piece explores the concept of confinement and the restlessness of feeling trapped.

MAX/MSP, Logic Pro X, Projection Mapping

Touch Reactive Audio Generative Particle System

March 2025

Interactive sound installation combining projection mapping, contact microphones, audio generation, and particle generation in Max/MSP, presented in CCRMA's Listening Room. The algorithms generate random harmonic frequencies that shift with interaction. Two contrasting visuals react directly to the sound and slowly transform over time.

Python, Data Visualization

Wildfire Data Sonification

Performed by Ensemble for Sonification of Temporal Data. March 2025

Transformed NASA wildfire and wind data from the 2025 LA wildfires into a musical performance.
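Purely as an illustration of the data-sonification idea (not the ensemble's actual mapping), a sketch like this could scale fire intensity onto MIDI pitch and wind speed onto note duration:

```python
# Hypothetical mapping: intensity -> pitch, wind speed -> duration (all values assumed).
import numpy as np

def sonify(intensity, wind, low=48, high=84):
    intensity = (intensity - intensity.min()) / (np.ptp(intensity) or 1.0)
    pitches = np.round(low + intensity * (high - low)).astype(int)   # MIDI notes
    durations = np.clip(1.0 / (wind + 1e-6), 0.1, 2.0)               # seconds
    return list(zip(pitches.tolist(), durations.tolist()))

notes = sonify(np.random.rand(32) * 500, np.random.rand(32) * 20)    # dummy data
```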

Qiskit, ChucK, Quantum Computing

Quantum Generated Harmonics

May 2024

Created a program that analyzes input audio to extract key and pitch using librosa, then applies IBM's Qiskit quantum circuits to generate ambient music. Quantum measurements determine harmonic frequencies, phase shifts, and note generation. Implemented delay, reverb, and low-pass filtering to blend quantum-generated harmonics with original audio. Graduate course project (MUSIC 222A, Stanford CCRMA).

Sound Design, Installation Art, Acoustic Engineering

Rainforest IV Realization

Jan 2025

Participated in a collaborative realization of David Tudor's electroacoustic environment "Rainforest IV" (1973). Designed and constructed sculptural loudspeakers with unique resonant characteristics.

Documentation →

Coding Projects

React.js, D3.js, Data Visualization

EEG Music Studies Visualization

June 2025

Interactive platform for exploring EEG studies on music perception and cognition, featuring timeline visualization, advanced filtering, and dataset export capabilities. Curated a dataset of 44 studies, 197 experimental conditions, and 13 publicly available datasets. Wrote an accompanying full-length paper titled "The Evolution of EEG-Based Music Research: Methodological Transitions and Neurophysiological Insights from the 1970's to the Present". This work enables researchers to easily identify datasets and patterns in previous EEG and music studies in a searchable, standardized format.

View on The Change Lab at Stanford →

React.js, JavaScript

N-Back Position Test

April 2025

A configurable web application for the N-Back cognitive task to measure working memory performance with detailed metrics and data visualization.

View on GitHub →
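The app itself is built in React, but the underlying task logic is straightforward to sketch in Python: generate a position sequence with some planted n-back matches, then score responses against the rule. Sequence length, grid size, and match rate below are illustrative assumptions.

```python
# Toy n-back sequence generator and scorer (parameters are assumptions).
import random

def make_sequence(length=30, n=2, positions=9, match_rate=0.3):
    seq = [random.randrange(positions) for _ in range(length)]
    for i in range(n, length):                 # plant some true n-back matches
        if random.random() < match_rate:
            seq[i] = seq[i - n]
    return seq

def score(seq, responses, n=2):
    """responses[i] is True if the user reported a match at trial i."""
    hits = false_alarms = targets = 0
    for i in range(n, len(seq)):
        is_target = seq[i] == seq[i - n]
        targets += is_target
        if responses[i]:
            hits += is_target
            false_alarms += not is_target
    return {"hits": hits, "false_alarms": false_alarms, "targets": targets}
```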

Python, Trie Data Structure, Algorithms

Word Hunt Solver

Jan 2025

A fast solver for 4×4 Word Hunt games that uses a trie-guided depth-first search to find all valid English words formed by connecting adjacent letters.

View on GitHub →
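A minimal sketch of the technique: load a word list into a trie, then run a depth-first search from every cell, extending paths only along branches that exist in the trie. The 3-letter minimum and the word-list source are assumptions based on typical Word Hunt rules.

```python
# Trie-guided DFS over a 4x4 board (word-list path and minimum length are assumptions).
def build_trie(words):
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = True                      # end-of-word marker
    return root

def solve(board, trie):                       # board: 4x4 grid of lowercase letters
    found, n = set(), 4
    def dfs(r, c, node, path, word):
        node = node.get(board[r][c])
        if node is None:
            return
        word += board[r][c]
        if "$" in node and len(word) >= 3:
            found.add(word)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < n and 0 <= nc < n and (nr, nc) not in path:
                    dfs(nr, nc, node, path | {(nr, nc)}, word)
    for r in range(n):
        for c in range(n):
            dfs(r, c, trie, {(r, c)}, "")
    return found

# e.g. words = solve(board, build_trie(w.strip() for w in open("words.txt")))
```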

Python, NumPy, JAX, Audio Processing

Cambridge Loudness Model

Aug 2024 - Dec 2024

Implementation of the Time-Varying Loudness Model with Binaural Inhibition, translated from MATLAB to Python with optimizations and comprehensive testing.

View on GitHub →

Python, Gmail API, Google Cloud

Email Bot

July 2024

Automated email system that processes spreadsheet data to send customized emails at scale, handling 22,000 contacts with error tracking and reporting.

View on GitHub →
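A hedged sketch of the core send loop, assuming an already-authorized Gmail API service object from google-api-python-client; the CSV path and column names are illustrative, and the real project's error tracking and reporting are more involved.

```python
# Illustrative send loop over a contacts spreadsheet (column names are assumptions).
import base64
import csv
from email.mime.text import MIMEText

def send_row(service, row):
    msg = MIMEText(f"Hi {row['name']},\n\n{row['body']}")
    msg["to"], msg["subject"] = row["email"], row["subject"]
    raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
    service.users().messages().send(userId="me", body={"raw": raw}).execute()

def run(service, path="contacts.csv"):
    failures = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                send_row(service, row)
            except Exception as exc:          # record failures for a final report
                failures.append((row.get("email"), str(exc)))
    return failures
```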

Work Experience

Audio Communication Student Research Engineer

TU Berlin, Germany | Summer 2025

Leading research as first author on onset detection classification methods for academic publication; paper in progress. Built a recurrent neural network (BiLSTM) model to detect onsets of functional sounds, improving detection over the existing algorithm. The project aims to develop neural network models that predict user perception of UX interaction sounds, with a focus on perceptual segmentation and signal-based audio cues. Designed evaluation pipelines using annotated datasets and identified improvements that enhanced model precision. Conducted in collaboration with the Sound Innovation Lab, an industry-facing UX sound research lab specializing in applied audio design for human–computer interaction.

Project Details →

Stanford Music Library Worker

Stanford, CA | Jan 2022 - Present

Managed circulation and database operations for Stanford's Music Library. Assisted patrons with checkouts, returns, and locating materials using music-specific cataloging systems. Processed diverse collections including books, sheet music, vinyl records, CDs, and audio equipment. Catalogued new arrivals and coordinated inter-library transfers.

Yunus & Youth Fellowship Writer

Remote | June 2024 – Dec 2024

Conducted 13 interviews to write in-depth narrative profiles of social entrepreneur fellows and their enterprises for the Yunus & Youth Fellowship. Rewrote and revised an additional 12 articles on past fellows for publication on the organization's platform. Translated complex social impact initiatives into accessible stories highlighting each fellow's mission, challenges, and community impact.

Warner Bros. Pictures Music Intern

Burbank, CA | Jun 2022 - Aug 2023

Automated compilation of 500+ music industry contacts, significantly reducing manual processing time. Managed and updated music cue data, script breakdowns, production schedules, and licensing data.

Warner Bros. TV Publicity Intern

Burbank, CA | Jun 2021 - Aug 2021

Increased TikTok followers on @warnerbrostv by 22% through strategic content creation. Managed social media accounts for WB entities, overseeing 100+ accounts overall with a focus on 6, and produced 10-20 posts per week.

Accomplishments

Leadership & Activities

Stanford Concert Network

Stanford, CA | Sept 2021 - Mar 2022

Successfully negotiated contracts with venues and an artist's management team to organize SCN's New Member Showcase concerts on a $2,500 budget.

Live Sound Technician

Vineyard of Hope Church | Walnut, CA | Jul 2021 - Jan 2022

Mixed drums, keyboard, guitars, bass, and vocals live. Set up and operated audio equipment (microphones, a 32-input Yamaha live mixer, etc.).

Stanford Sigma Chi Alpha Omega Chapter

Stanford, CA | Sept 2022 - Present

Historian: responsible for collecting and preserving the chapter's history through social media, scrapbooks, organized documentation, and events.

Honors & Awards

2020 Elks Scholar - Most Valuable Student Scholarship

Selected as a semifinalist for the Elks National Foundation's $4,000 Most Valuable Student Scholarship.

2020 Coca-Cola Scholars Semifinalist

1,928 students were selected from over 93,000 applicants across the country to continue through the selection process.

2019 826 National's Poets in Revolt

Featured as a poet in an anthology created with Amanda Gorman and Kate Deciccio.

2019 14th Eugene O'Neill Theater Center Young Playwrights Festival Semifinalist

Selected as semifinalist in national playwriting competition.

2018 National Student Poets Program Semifinalist

1 of 35 semi-finalists chosen out of 23,000 poetry entries.

2017 33rd California Young Playwrights Contest Semifinalist

Chosen as a semi-finalist out of 432 playwriting submissions.

Get in Touch