Job Description
- 3 to 7 years of relevant work experience.
- Proficient in designing, building, and operationalizing large-scale enterprise data solutions using at least four of the following GCP services: Dataflow, Dataproc, Pub/Sub, BigQuery, Cloud Functions, Composer, GCS.
- Proficient hands-on programming experience in Spark with Scala (or Python/Java).
- Proficient in building production-grade ETL/ELT data pipelines, from data ingestion through consumption (a minimal sketch of such a pipeline follows this list).
- Data engineering knowledge (e.g., data lakes; data warehouses such as Redshift/Hive/Snowflake; integration; migration).
- Excellent communicator (written and verbal, formal and informal).
- Experience using software version control tools (Git/Bitbucket/CodeCommit).
- Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
- GCP Certification preferred.
- Additional Cloud experience in AWS or Azure preferred.
- Ability to multi-task under pressure and work independently with minimal supervision.
- Must be a team player and enjoy working in a cooperative and collaborative team environment.
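As a rough illustration of the pipeline work described above, here is a minimal Spark/Scala sketch that ingests raw files from GCS, applies a simple transformation, and loads the result into BigQuery. It assumes a Dataproc cluster with the spark-bigquery connector on the classpath; all bucket, dataset, table, and column names are hypothetical placeholders, not part of this posting.

```scala
// Minimal ETL sketch: GCS (ingest) -> Spark (transform) -> BigQuery (load).
// Assumes the spark-bigquery connector is available (standard on Dataproc).
// All paths, table names, and columns below are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .getOrCreate()

    // Ingest: raw CSV files landed in a GCS bucket (hypothetical path).
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("gs://example-landing-bucket/orders/*.csv")

    // Transform: keep completed orders and compute a daily revenue aggregate.
    val daily = raw
      .filter(col("order_status") === "COMPLETE")
      .withColumn("order_date", to_date(col("order_ts")))
      .groupBy("order_date")
      .agg(sum("order_total").as("daily_revenue"))

    // Load: write to BigQuery via the spark-bigquery connector,
    // staging through a temporary GCS bucket (hypothetical names).
    daily.write
      .format("bigquery")
      .option("table", "example_dataset.daily_revenue")
      .option("temporaryGcsBucket", "example-staging-bucket")
      .mode("overwrite")
      .save()

    spark.stop()
  }
}
```

In production, a job like this would typically be packaged as a JAR, submitted to Dataproc, and scheduled and monitored through Composer, tying together several of the GCP services listed above.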
Role: Data Engineer
Industry Type: IT Services & Consulting
Department: Engineering – Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education
UG: Any Graduate
PG: Any Postgraduate
Key Skills
Composer, Azure, Pub/Sub, BigQuery, Cloud Functions, GCS, Dataflow, Dataproc, AWS