Job Description

  • Develop, test, and maintain big data solutions using Apache Spark
  • Work closely with data engineers & data scientists to build scalable data workflows
  • Optimize Spark jobs for performance & scalability
  • Write clean, maintainable, and high-quality code
  • Monitor & troubleshoot production data pipelines
  • Implement data quality checks to ensure data accuracy & integrity
  • Stay up to date with the latest big data trends & tools

🎯 Requirements

  • Proven experience as a Spark Developer or similar role
  • Strong knowledge of Apache Spark & its ecosystem
  • Experience with big data frameworks like Hadoop
  • Proficiency in Scala / Java / Python
  • Knowledge of storage technologies: HDFS, HBase, Cassandra, etc.
  • Familiarity with ETL processes and data integration tools
  • Solid understanding of distributed computing concepts

🛠 Key Skills

⭐ Apache Spark
⭐ Hadoop
⭐ Scala / Java / Python
⭐ Data Pipelines & Big Data Processing
⭐ Cassandra, HBase, HDFS
⭐ Performance Tuning
⭐ ETL & Data Ingestion
⭐ Azure / AWS / GCP (plus)
⭐ Kafka / Flink (plus)
🎓 Education

UG: Any Graduate
PG: Any Postgraduate