Job Description
Design, build, and optimize scalable data pipelines using modern technologies. Ensure high data quality, integrate large datasets, and support analytics and machine learning. Collaborate with cross-functional teams to deliver robust data solutions.
Key Responsibilities
- Build and maintain scalable data pipelines for analytics.
- Optimize database performance to support high-performance queries.
- Transform raw data into usable, structured formats daily.
- Collaborate with stakeholders to gather technical requirements.
- Ensure data quality and accuracy across all systems.
- Develop automated processes for efficient data handling.
- Monitor ETL workflows and fix issues proactively.
- Document architecture, processes, and engineering implementations clearly.
Skills & Experience
- Strong SQL expertise for managing complex database structures.
- Proficient in Python or Scala for data transformations.
- Experience with cloud platforms and data warehouses.
- Knowledge of big data tools such as Spark and Hadoop.
- Ability to analyze and solve complex data problems.
- Excellent collaboration skills with engineering and analytics teams.
Note: Salary is disbursed in the local currency of the country of employment.