- Designing clean, efficient, and reliable data pipelines.
- Translating market and business understanding into well-structured data solutions.
- Working closely with quantitative researchers and software engineers to make data useful for both training and live trading.
- Ensuring that data quality, availability, and consistency are never afterthoughts.
- Building and maintaining high-quality data pipelines in Python and SQL.
- Managing data storage and access in BigQuery and AWS (S3, Lambda, ECS, Fargate).
- Implementing robust ETL processes and schema management for historical and live market data (see the schema sketch after this list).
- Developing and maintaining a feature store for model training.
- Ensuring reliable, low-latency data delivery for inference in production systems.
- Automating workflows using Airflow (see the DAG sketch after this list).
- Deploying and monitoring data systems using Docker, Git, CI/CD, and Grafana.
- Designing systems that are transparent, well-documented, and resilient under change.
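To make the Airflow item concrete, here is a minimal sketch of the kind of daily ETL orchestration described above, written with Airflow's TaskFlow API (2.4+). The bucket path, dataset, and table names are hypothetical placeholders, not a description of the actual pipelines.

```python
# Minimal daily ETL DAG sketch (TaskFlow API, Airflow 2.4+). All names
# (bucket, dataset, table) are hypothetical placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def market_data_etl():
    @task
    def extract(ds: str | None = None) -> str:
        # Pull one day of raw market data and stage it as Parquet
        # (hypothetical staging path; `ds` is Airflow's logical date).
        return f"gs://example-market-data/raw/{ds}.parquet"

    @task
    def load_to_bigquery(staged_uri: str) -> None:
        # Load the staged file into BigQuery; overwriting on rerun keeps the
        # load idempotent (a real pipeline would target a single partition).
        from google.cloud import bigquery

        client = bigquery.Client()
        job_config = bigquery.LoadJobConfig(
            source_format=bigquery.SourceFormat.PARQUET,
            write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
        )
        client.load_table_from_uri(
            staged_uri, "example_dataset.market_ticks", job_config=job_config
        ).result()

    load_to_bigquery(extract())


market_data_etl()
```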
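And as one plausible reading of the schema-management item, a sketch of defining a date-partitioned BigQuery table with the google-cloud-bigquery client; the project, dataset, and field names are illustrative assumptions, not the actual schema.

```python
# Hedged schema-management sketch: a date-partitioned BigQuery table for
# historical market data. Project, dataset, and fields are illustrative.
from google.cloud import bigquery

client = bigquery.Client()

schema = [
    bigquery.SchemaField("symbol", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("ts", "TIMESTAMP", mode="REQUIRED"),
    bigquery.SchemaField("price", "FLOAT64", mode="REQUIRED"),
    bigquery.SchemaField("size", "FLOAT64"),
]

table = bigquery.Table("example-project.market.ticks", schema=schema)
# Partition by event time so daily loads and historical backfills
# stay cheap to query.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="ts"
)
client.create_table(table, exists_ok=True)
```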