Big Data – Scala Spark Developer
Required:
Solid understanding (5+ years) of Scala, Java, JSON, and XML
(5+ years) Experience working with large data sets and pipelines using tools and libraries
from the Hadoop ecosystem, such as Spark, HDFS, YARN, Hive, and Oozie (a brief Spark sketch follows this list)
(5+ years) Experience and working knowledge of distributed/cluster computing concepts
Solid understanding of SQL and of relational and NoSQL databases
Solid understanding (5+ years) of multi-threaded applications: concurrency, parallelism,
locking strategies, and merging datasets
MUST: Solid understanding (5+ years) of memory management, garbage collection, and
performance tuning (a GC-tuning sketch follows this list)
(5+ years) Experience in Linux environments; strong knowledge of shell scripting and file systems
Experience working in cloud-based environments such as AWS
Knowledge of build and CI tools such as Git, Maven, SBT, Jenkins, and Artifactory/Nexus
Self-managed and results-oriented, with a strong sense of ownership
Excellent analytical, debugging, and problem-solving skills
Experience with Agile/Scrum development methodologies is a plus
Minimum of a Bachelor's degree in CS or equivalent, with 8–10 years of industry experience
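
To give a flavor of the day-to-day Spark work described above, here is a minimal Scala sketch of a pipeline that merges two datasets on a shared key. All paths, dataset names, and columns (orders, customers, customer_id) are hypothetical placeholders, not part of any actual assignment:

    import org.apache.spark.sql.SparkSession

    object MergeDatasets {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("merge-datasets")
          .getOrCreate()

        // Read two large datasets from HDFS (paths are illustrative).
        val orders    = spark.read.parquet("hdfs:///data/orders")
        val customers = spark.read.parquet("hdfs:///data/customers")

        // Merge (join) the datasets on a shared key; Spark distributes
        // the join across the cluster's executors.
        val merged = orders.join(customers, Seq("customer_id"), "inner")

        merged.write.mode("overwrite").parquet("hdfs:///data/merged")
        spark.stop()
      }
    }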
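Likewise, the memory-management and GC-tuning requirement often surfaces as executor JVM configuration. A minimal sketch, assuming the G1 collector and with flag and memory values chosen purely for illustration:

    import org.apache.spark.sql.SparkSession

    // Values below are illustrative starting points, not recommendations.
    val spark = SparkSession.builder()
      .appName("gc-tuned-job")
      // Use the G1 collector on executors and log GC activity for analysis.
      .config("spark.executor.extraJavaOptions",
        "-XX:+UseG1GC -XX:MaxGCPauseMillis=200 -verbose:gc")
      // Size executor heaps explicitly rather than relying on defaults.
      .config("spark.executor.memory", "8g")
      .getOrCreate()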