ROLE BRIEF :
Job Title/Designation : Sr. Data Engineer (using Big Data technologies)
Employment Type : Full Time, Permanent
Job Description :
- Bachelor's Degree in Computer Science or STEM majors (Science, Technology, Engineering and Math) with advanced experience.
- Minimum of 5 years of experience in software development in the Data Engineering space
Roles and Responsibilities :
In this role, you should have / will be responsible for the following :
- Working knowledge of Big Data technologies to build data pipelines and curate data, with experience in relational databases such as PostgreSQL and Oracle, NoSQL databases, data pipelines, and data ingestion / ETL tools such as Informatica, Talend, etc.
- Architect and design the data layer components of the digital solutions / products that the team is building in the industrial domain.
- Build technical data dictionaries and support business glossaries to analyse the datasets
- Perform data profiling and data analysis for source systems, manually maintained data, machine generated data and target data repositories
- Build logical and physical data models for both Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) solutions
- Develop and maintain data mapping specifications based on the results of data analysis and functional requirements
- Perform a variety of data loads & data transformations using multiple tools and technologies
- Build automated Extract, Transform & Load (ETL) jobs based on data mapping specifications
- Maintain metadata structures needed for building reusable Extract, Transform & Load (ETL) components.
- Working & Debugging knowledge on J2EE technology stack and enterprise application development on Spring framework by implementing services.
- Work internally with different stakeholders, data teams, security teams, etc. for smooth execution of the project
- Must have good communication skills - both oral and written.
Technical Expertise :
- Hands-on, strong experience with relational databases such as PostgreSQL, Oracle, MySQL, etc., and NoSQL databases such as DynamoDB
- Hands-on experience in writing SQL scripts for Oracle, MySQL, PostgreSQL or HiveQL
- Hands-on experience in programming languages - Python, Java or Scala
- Experience with Big Data / Hadoop / Hive / NoSQL database engines (e.g., Cassandra or HBase)
- Understands logical and physical data models, big data storage architecture, and data modeling methodologies.
- Understands the technology landscape, stays up to date on current technology trends and new technologies, and brings new ideas to the team.
- Experience with different techniques used for database / application performance tuning
- Comfortable writing code in Python, shell scripting, and SQL. Comfortable working in a Linux environment
- Experience working with AWS cloud data services and knowledge of cloud security - AWS services like IAM, EC2, ECS, DMS, S3, Redshift, QuickSight, RDS, etc.
- Experience in data visualisation tools such as Tableau and QuickSight
- Exposure to industry standard data modeling tools (e.g., ERWin, ER Studio, etc.)
- Exposure to Extract, Transform & Load (ETL) tools like Informatica or Talend
- Exposure to unstructured datasets and ability to handle XML, JSON file formats
- Conduct exploratory data analysis and generate visual summaries of data. Identify data quality issues proactively.
- Exposure to Machine Learning models / algorithms will be a plus.
- Experience : 8 to 13 years
- Annual CTC : Rupees 18,00,000 to 33,00,000
Locations : Bangalore/Bengaluru