Master audio signal processing for music. Learn spectral techniques to analyze, synthesize, and transform sounds using Python and open-source tools.
This course teaches audio signal processing methodologies specific to music, with an emphasis on practical applications. It focuses on spectral processing techniques for analyzing, synthesizing, transforming, and describing audio signals in music contexts. The curriculum covers the Discrete Fourier Transform, Fourier theorems, the Short-time Fourier Transform, and several models, including sinusoidal, harmonic, and sinusoidal-plus-residual. Students implement these concepts using Python and open-source tools. The course also explores sound transformations, audio feature extraction, and music description techniques, and includes demonstrations of software tools and hands-on programming exercises.
4.8
(289 ratings)
57,172 already enrolled
Instructors: Xavier Serra, Julius O. Smith, III
English
Pashto, Bengali, Urdu, 2 more
What you'll learn
Understand and implement the Discrete Fourier Transform (DFT) for audio signal analysis
Master the Short-time Fourier Transform (STFT) and its applications in spectrograms
Develop proficiency in sinusoidal and harmonic modeling of audio signals
Learn stochastic modeling and residual analysis techniques
Implement various sound transformation methods using different models
Gain skills in audio feature extraction and music description
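As a taste of the first learning outcome, here is a minimal numpy sketch of the DFT computed directly from its definition. This is a standalone illustration, not the course's own code, and the signal parameters (an arbitrary 440 Hz tone at an 8000 Hz sample rate) are chosen only so the tone falls exactly on a DFT bin:

```python
import numpy as np

def dft(x):
    """Naive Discrete Fourier Transform, straight from the definition (O(N^2))."""
    N = len(x)
    n = np.arange(N)
    # Matrix of complex exponentials: row k is the analysis signal for bin k
    basis = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return basis @ x

# A 440 Hz cosine sampled at 8000 Hz; with N = 800 samples the tone
# falls exactly on bin k = f * N / fs = 44
fs, N = 8000, 800
t = np.arange(N) / fs
x = np.cos(2 * np.pi * 440 * t)
X = dft(x)
peak_bin = int(np.argmax(np.abs(X[:N // 2])))
print(peak_bin)  # → 44
```

In practice the FFT (`np.fft.fft`) computes the same result in O(N log N); the naive form just makes the definition explicit.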
Skills you'll gain
This course includes:
13.5 hours of pre-recorded video
10 assignments
Access on mobile, tablet, and desktop
Full-time access
Closed captions
Top companies offer this course to their employees
Top companies provide this course to strengthen their employees' skills, helping them handle complex projects and contribute to organizational success.
There are 11 modules in this course
This course provides a comprehensive introduction to audio signal processing for music applications. It covers fundamental concepts such as the Discrete Fourier Transform, Fourier theorems, and Short-time Fourier transform, as well as advanced topics like sinusoidal, harmonic, and residual models. Students learn to implement these concepts using Python and open-source tools, gaining practical experience in analyzing, synthesizing, and transforming audio signals. The course also explores sound transformations, audio feature extraction, and music description techniques, providing a solid foundation for various music technology applications. Throughout the modules, students engage with real-world examples and hands-on programming exercises, preparing them for advanced work in music technology and audio signal processing.
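To make the Short-time Fourier Transform idea concrete before the module list, here is a hedged sketch using scipy rather than the course's own tools: a magnitude spectrogram of a two-tone test signal, where each column of the resulting matrix is the spectrum of one windowed frame. The signal and frame sizes are arbitrary choices for illustration:

```python
import numpy as np
from scipy.signal import stft

fs = 8000
t = np.arange(fs) / fs  # one second of signal
# 500 Hz for the first half second, 1500 Hz for the second half
x = np.where(t < 0.5, np.sin(2 * np.pi * 500 * t), np.sin(2 * np.pi * 1500 * t))

# Hann-windowed frames of 256 samples with 50% overlap
f, frames, Z = stft(x, fs=fs, window='hann', nperseg=256, noverlap=128)
mag = np.abs(Z)  # magnitude spectrogram: frequency bins x time frames

# Dominant frequency a quarter and three quarters of the way through
q = mag.shape[1] // 4
print(f[mag[:, q].argmax()], f[mag[:, 3 * q].argmax()])  # → 500.0 1500.0
```

The STFT trades frequency resolution (bin spacing fs/nperseg, here 31.25 Hz) against time resolution (the hop size), a trade-off the course examines in detail.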
Introduction
Module 1 · 5 Hours to complete
Discrete Fourier transform
Module 2 · 4 Hours to complete
Fourier theorems
Module 3 · 5 Hours to complete
Short-time Fourier transform
Module 4 · 5 Hours to complete
Sinusoidal model
Module 5 · 5 Hours to complete
Harmonic model
Module 6 · 5 Hours to complete
Sinusoidal plus residual model
Module 7 · 3 Hours to complete
Sound transformations
Module 8 · 3 Hours to complete
Sound and music description
Module 9 · 3 Hours to complete
Concluding topics
Module 10 · 2 Hours to complete
Concluding topics: Lesson Choices
Module 11 · 3 Hours to complete
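The sinusoidal-model modules above revolve around detecting prominent spectral peaks frame by frame. A simplified peak detector can be sketched in plain numpy; the threshold and test signal here are arbitrary choices for illustration, not taken from the course materials:

```python
import numpy as np

def spectral_peaks(frame, fs, threshold_db=-30):
    """Return frequencies of local maxima in one frame's magnitude spectrum."""
    N = len(frame)
    X = np.fft.rfft(frame * np.hanning(N))       # Hann window reduces leakage
    mag_db = 20 * np.log10(np.abs(X) + 1e-12)
    mag_db -= mag_db.max()                       # 0 dB = strongest component
    peaks = []
    for k in range(1, len(mag_db) - 1):
        # A candidate sinusoid: a local maximum above the threshold
        if mag_db[k] > threshold_db and mag_db[k] > mag_db[k - 1] and mag_db[k] > mag_db[k + 1]:
            peaks.append(k * fs / N)             # bin index -> frequency in Hz
    return peaks

fs, N = 8000, 1024
t = np.arange(N) / fs
frame = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
print(spectral_peaks(frame, fs))  # two peaks, near 500 Hz and 1200 Hz
```

Peak frequencies are quantized to the bin spacing fs/N here; in practice, refinements such as parabolic interpolation around each peak give sub-bin frequency accuracy.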
Fee Structure
Instructors
Full Professor at Universitat Pompeu Fabra and Director of the Music Technology Group
Xavier Serra is a Full Professor in the Department of Information and Communication Technologies at Universitat Pompeu Fabra in Barcelona, where he also serves as the Director of the Music Technology Group. He holds a PhD in Computer Music from Stanford University, awarded in 1989, and is recognized for his contributions to the spectral processing of musical sounds. His research interests encompass the analysis, description, and synthesis of sound and music signals, integrating both scientific and artistic approaches. Dr. Serra is actively involved in promoting initiatives in Sound and Music Computing and has recently received an Advanced Grant from the European Research Council for his project CompMusic, which focuses on multicultural approaches in music computing research.
Professor of Music and Electrical Engineering at Stanford University Specializing in Signal Processing
Julius O. Smith, III is a Professor of Music and (by courtesy) Electrical Engineering at Stanford University, where he teaches a sequence of courses in signal processing and supervises research at the Center for Computer Research in Music and Acoustics (CCRMA). He earned his BS/EE from Rice University in 1975 and his PhD/EE from Stanford in 1983, focusing on digital filter design and system identification. His professional background includes work in digital communications, adaptive filtering, and music software development at NeXT Computer, Inc. Prof. Smith is a Fellow of both the Audio Engineering Society and the Acoustical Society of America, and he has authored several online books and numerous research publications.
Testimonials
Testimonials and success stories are a testament to the quality of this program and its impact on your career and learning journey. Be the first to help others make an informed decision by sharing your review of the course.
4.8 course rating
289 ratings
Frequently asked questions
Below are some of the most commonly asked questions about this course. We aim to provide clear and concise answers to help you better understand the course content, structure, and any other relevant information. If you have any additional questions or if your question is not listed here, please don't hesitate to reach out to our support team for further assistance.