Requirements:
- Degree in Engineering, Computer Science, or equivalent hands-on experience in data-focused roles.
- 5+ years designing and maintaining scalable data pipelines, transformations, and workload management processes.
- Advanced SQL expertise with extensive experience writing and optimizing queries across multiple relational databases and platforms.
- Proven ability to process, model, and manage large volumes of structured and unstructured data using modern programming languages.
- Strong communicator and problem-solver with a clear understanding of how data solutions drive business value.

Responsibilities:
- Design and maintain scalable, secure, and high-performing data pipelines optimized for analytics and reliability.
- Build and optimize ETL workflows, data models, and algorithms to support production and advanced analytics use cases.
- Aggregate and structure large, complex datasets to meet both technical and business requirements.
- Introduce and integrate modern data technologies and engineering tools into existing systems and architectures.
- Drive continuous process improvements through automation, infrastructure optimization, and thorough documentation and quality controls.

