Required Skill Set:
Spark, Scala, Airflow, Hive, Kafka, AWS EMR, Oracle/SQL
Roles & Responsibilities:
- Leverage deep technical knowledge to provide leadership and take projects from inception to completion.
- Architect, build, and maintain scalable data pipelines and access patterns related to permissions and security.
- Research, evaluate, and adopt new technologies, tools, and frameworks centered around high-volume data processing.
- Drive a culture of collaborative review of designs, code, and test plans.
- Work with the architecture engineering team to ensure quality solutions are implemented and engineering best practices are adhered to.
- Develop standardized processes for data mining, data modeling, and data production.
Experience & Skills Fitment:
- Strong programming experience in Python
- Extensive experience with processing frameworks such as Spark, Spark Streaming, Airflow, Hive, Sqoop, and Kafka
- Experience with big data processing in cloud environments such as AWS (e.g., S3, EMR)
- Passion for data privacy, security, and data management
- Experience building and shipping production data pipelines that ingest from a diverse array of sources
- Understanding of measuring and ensuring data quality at scale
- Knowledge of tools to monitor and optimize performance of data pipelines
- Ability to communicate effectively with team members and business stakeholders
Nice to have:
- Knowledge of the retail and e-commerce sector and its use cases
Kloud9 provides a robust compensation package and a forward-looking opportunity for growth in emerging fields.
Equal Opportunity Employer:
- Kloud9 is an equal opportunity employer and will not discriminate against any employee or applicant on the basis of age, color, disability, gender, national origin, race, religion, sexual orientation, veteran status, or any classification protected by federal, state, or local law.