Melih Onat

Creative technologist building products at the intersection of media and technology

Digital Media graduate (M.A., TMU 2026) · Seeking full-stack or frontend engineering roles · Toronto or remote

About

I'm a Digital Media graduate student focused on building technology that enhances creative expression and human connection. My background in media studies gives me insight into user behavior and aesthetic experience, while my self-taught engineering skills let me bring ideas to life.

I'm particularly drawn to music technology and social platforms—spaces where design, emotion, and interaction come together. Both of my projects reflect this: Resonate explores how people discover music through community rather than algorithms, and whatsyourtune bridges emotion recognition with generative music.

I'm looking for roles where I can bridge the gap between design thinking and technical implementation, ideally at companies building consumer-facing products that people genuinely enjoy using.

Skills

Engineering

JavaScript / TypeScript · React · Next.js · Node.js · Python · PostgreSQL · Supabase · REST APIs · Git · Google Cloud · Vercel

Design & Media

UX Design · Visual Design · Typography · User Research · Responsive Design · Accessibility

Creative Technology

TensorFlow · Google Magenta.js · Real-time ML · Interactive Media · Web Audio API

Resonate

Music discovery platform

React · Next.js 16 · Supabase · TypeScript

A social music platform exploring how people discover albums through community rather than algorithms. Launched in November 2025, it now has 35+ weekly active users logging and reviewing music together.

Key decisions:

  • Designed mobile-first interface prioritizing quick music logging and social exploration
  • Built intelligent activity feed with optimistic UI updates for instant feedback
  • Implemented Row-Level Security policies for privacy-first data access
  • Created real-time messaging to foster organic music conversations
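The optimistic-update pattern in the feed above can be sketched as a few pure state helpers. This is a minimal illustration with hypothetical names and types (`FeedItem`, `applyOptimistic`, etc.); Resonate's actual implementation may be structured differently.

```typescript
// Optimistic UI sketch: show the new feed entry immediately,
// then confirm or roll it back once the server responds.
type FeedItem = { id: string; albumId: string; note: string; pending?: boolean };

// Insert the item at the top of the feed, flagged as pending.
function applyOptimistic(feed: FeedItem[], item: FeedItem): FeedItem[] {
  return [{ ...item, pending: true }, ...feed];
}

// On server success: adopt the server-assigned id and clear the pending flag.
function confirmItem(feed: FeedItem[], tempId: string, serverId: string): FeedItem[] {
  return feed.map((f) =>
    f.id === tempId ? { ...f, id: serverId, pending: undefined } : f
  );
}

// On server failure: remove the pending item so the UI matches reality.
function rollbackItem(feed: FeedItem[], tempId: string): FeedItem[] {
  return feed.filter((f) => f.id !== tempId);
}
```

Keeping these as pure functions makes the instant-feedback behavior easy to unit-test independently of any network layer.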

Building the product is one thing; learning to grow it is another.

whatsyourtune

Emotion-driven music generator

Python · TensorFlow · Google Magenta.js

An experiment in emotion-responsive technology. Uses facial recognition to detect your current emotion and generates a personalized piano melody to match your mood—bridging computer vision, generative music, and human-computer interaction.

Key decisions:

  • Designed an intuitive interface where users simply look at their camera and receive instant musical feedback
  • Mapped emotion states to musical parameters (happy → major key, upbeat tempo; sad → slower, minor melodies)
  • Integrated TensorFlow emotion detection with Google Magenta's music generation in real time
  • Optimized performance pipeline to maintain responsive, fluid user experience
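The emotion-to-parameter mapping described above can be sketched as a simple lookup from a detected emotion to musical settings. The emotion labels and the specific mode/tempo values here are illustrative assumptions, not the project's actual tuning.

```typescript
// Sketch of mapping detected emotions to musical parameters.
// Values are illustrative; the real project may use different tunings.
type Emotion = "happy" | "sad" | "angry" | "neutral";

interface MusicParams {
  mode: "major" | "minor"; // key quality of the generated melody
  bpm: number;             // tempo in beats per minute
}

const EMOTION_MAP: Record<Emotion, MusicParams> = {
  happy:   { mode: "major", bpm: 128 }, // upbeat, major key
  sad:     { mode: "minor", bpm: 72 },  // slower, minor key
  angry:   { mode: "minor", bpm: 140 },
  neutral: { mode: "major", bpm: 96 },
};

// Resolve the parameters the generator should use for a detected emotion.
function paramsFor(emotion: Emotion): MusicParams {
  return EMOTION_MAP[emotion];
}
```

In practice these parameters would seed the melody generator (for example, by choosing a scale and note timing), so the same detection pipeline drives audibly different output per mood.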