Stivlon Consulting is your strategic partner in building high-performance teams and unlocking individual potential. We are passionate about finding the perfect fit—the talented individuals who elevate your company and the fulfilling career paths that ignite professional growth.
Job Summary
We are seeking an experienced Senior Data Engineer to design, build, and maintain scalable data infrastructure that supports analytics, reporting, and data-driven decision-making. The ideal candidate will play a key role in architecting reliable data pipelines, optimizing data systems, and ensuring high data quality across the organization while collaborating closely with analytics, product, and engineering teams.
Key Responsibilities
- Design, build, and maintain scalable and reliable data pipelines (ETL/ELT)
- Develop and manage data architectures including data lakes, warehouses, and streaming systems
- Ensure data quality, integrity, security, and governance across all data systems
- Optimize data processing performance and reliability for large datasets
- Collaborate with product, analytics, and engineering teams to understand data requirements
- Implement monitoring, alerting, and logging for data pipelines
- Lead code reviews and enforce data engineering best practices
- Mentor junior data engineers and provide technical leadership
- Document data models, pipelines, and system architectures
- Troubleshoot and resolve complex data-related production issues
Required Qualifications & Experience
- Bachelor’s Degree in Computer Science, Engineering, Mathematics, or a related field
- 5+ years of professional experience in data engineering or backend data roles
- Strong proficiency in SQL and data modeling
- Hands-on experience with Python, Java, or Scala for data processing
- Experience building ETL/ELT pipelines with tools such as Airflow or dbt
- Solid experience with data warehouses (BigQuery, Snowflake, Redshift, PostgreSQL, etc.)
- Experience working with large-scale data systems and distributed processing frameworks (Spark, Flink, etc.)
- Familiarity with cloud platforms (AWS, GCP, or Azure)
- Experience with version control systems (Git)
Preferred / Nice-to-Have Skills
- Experience with real-time/streaming data (Kafka, Kinesis, Pub/Sub)
- Knowledge of data governance, privacy, and security best practices
- Experience with containerization and orchestration (Docker, Kubernetes)
- Exposure to BI and analytics tools (Looker, Power BI, Tableau, etc.)
- Experience in FinTech, SaaS, or high-volume data environments
- Familiarity with machine learning data pipelines
Soft Skills & Competencies
- Strong analytical and problem-solving skills
- Excellent communication and stakeholder collaboration abilities
- Ability to work independently and take ownership of data systems
- Leadership mindset with mentoring experience
- High attention to detail and commitment to data reliability and quality