Saturday, November 14, 2020

Data Engineer - Kafka/ETL/Hadoop (1-3 yrs) (Premier Information)

Mandatory Skills :

- Applied experience with messaging/event-based systems (Confluent Kafka, MQTT, etc.)

- Applied experience with designing and developing CI/CD data pipelines.

- Proficiency in Java, J2EE, and REST

- Strong skills in writing MySQL queries and Linux shell scripts

- Applied problem-solving skills related to big data frameworks

Desired Skills :

- Spring or Spring Boot

- Python

- Dockerization

Department : Application Development

Position : 2

Designation : Data Engineer (Spark)

Expected Joining date : Immediate

Experience : 1 to 3 years

Location: Bangalore

Job description :

- Should be able to handle custom or structured ETL design, implementation, and maintenance

- Should be able to write complex SQL queries

- Modelling data and metadata to support ad-hoc and pre-built reporting

- Identifying data quality issues and addressing them promptly to provide a great user experience

- Tuning query performance

- Experience building data products incrementally and integrating and managing datasets from multiple sources

Education & Experience (total and relevant) : B.Tech/ME/M.Tech in CSE, EE, ECE

Mandatory Skills :

- Python, Hadoop, HDFS, Spark 2.0, Spark SQL, Hive, MySQL, Warehouse Schema Design

- Linux shell scripting skills
 
Desired Skills :

- Elastic Search, Kibana

- MongoDB

Apply Now
