Data Engineering Manager

Roles & Responsibilities:

  • Lead the design and implementation of large-scale data engineering projects, including data lakes and data pipelines on cloud platforms like AWS, Azure, or GCP.
  • Drive the pre-sales process by engaging with clients, understanding requirements, and developing technical proposals and proof of concepts (PoCs).
  • Architect scalable and secure data storage solutions using technologies like Amazon S3, Azure Data Lake Storage, and Google Cloud Storage.
  • Oversee the development of ETL/ELT pipelines using tools such as AWS Glue, Apache Spark, Databricks, or Azure Data Factory.
  • Ensure efficient data transformation and quality assurance processes by leveraging tools like AWS Lambda, Google Cloud Functions, or Azure Functions for serverless computing.
  • Implement data governance frameworks to ensure data quality and compliance using services like AWS Lake Formation, Azure Purview, or Google Data Catalog.
  • Collaborate with data scientists, BI developers, and analytics teams to ensure the smooth flow of data and insights across the organization.
  • Manage a team of data engineers, providing technical guidance and mentorship throughout the project lifecycle.
  • Engage with stakeholders and clients to align project deliverables with business goals.
  • Monitor and optimize data processing and storage to ensure efficiency and cost-effectiveness.

Experience & Skills Fitment:

  • 5+ years of hands-on experience in data engineering, including leading data lake implementations and cloud-based solutions.
  • Proficiency in cloud platforms (AWS, Azure, or GCP) and services such as Amazon S3, Azure Data Lake Storage, and Google Cloud Storage.
  • Extensive experience with data transformation tools such as Apache Spark, Databricks, AWS Glue, and Azure Data Factory.
  • Expertise in serverless architectures (e.g., AWS Lambda, Google Cloud Functions, Azure Functions).
  • Strong understanding of data governance, data quality, and security best practices in the cloud.
  • Familiarity with containerization and orchestration technologies such as Docker and Kubernetes.
  • Proficiency in SQL, Python, Scala, or Java for data manipulation and processing.
  • Experience working with data warehousing and analytics solutions like Amazon Redshift, Google BigQuery, or Azure Synapse.
  • Strong leadership and project management skills, with a track record of managing teams and delivering complex projects from pre-sales to delivery.
  • Excellent communication skills to effectively interact with stakeholders, clients, and technical teams.

Good to have:

  • Experience with data governance tools like AWS Lake Formation, Azure Purview, or Google Data Catalog.
  • Cloud platform certifications such as AWS Certified Solutions Architect, Azure Data Engineer Associate, or Google Professional Data Engineer.
  • Familiarity with CI/CD pipelines and DevOps practices in the context of data engineering.

Benefits:

  • Kloud9 provides a robust compensation package and forward-looking opportunities for growth in emerging fields.

Equal Opportunity Employer:

  • Kloud9 is an equal opportunity employer and will not discriminate against any employee or applicant on the basis of age, color, disability, gender, national origin, race, religion, sexual orientation, veteran status, or any classification protected by federal, state, or local law.

Resumes to be sent to: [email protected]
