
JPC - 6599 - Bigdata Developer (Hadoop/Spark)

6d ago
min 5 years
Warsaw, Poland

Role Title: Bigdata Developer (Hadoop/Spark)

Location: Warsaw, Poland

Employment Type: Contract

• Overall 4 to 10 years of IT experience. Extensive experience in Big Data, Analytics, and ETL technologies

• Application development background, along with knowledge of analytics, statistical, and big data computing libraries

• At least 3 years of experience in Spark/PySpark and Python/Scala/Java programming

• Hands-on experience in coding, design, and development of complex data pipelines using big data technologies

• Experience in developing applications on Big Data. Design and build highly scalable data pipelines

• Expertise in Python, SQL databases, Spark, and non-relational databases

• Responsible for ingesting data from files, streams, and databases, and processing the data using Spark and Python

• Develop programs in PySpark and Python as part of data cleaning and processing

• Responsible for designing and developing distributed, high-volume, high-velocity, multi-threaded event processing systems

• Develop efficient software code leveraging Python and Big Data technologies for the various use cases built on the platform

• Provide high operational excellence, guaranteeing high availability and platform stability

• Implement scalable solutions to meet ever-increasing data volumes, using big data/Palantir technologies such as PySpark and cloud computing

• Individual who can work under their own direction towards agreed targets/goals, with a creative approach to work

• Intuitive individual with an ability to manage change and proven time management skills

• Proven interpersonal skills while contributing to team effort by accomplishing related results as needed

Technologies required:

• DB: Hive, Impala, HBase

• Data processing: Spark Core and Spark SQL

• Build tool: Maven

• Testing framework: Cucumber
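As a minimal illustration of this stack, the sketch below shows the kind of ingest-and-clean pipeline described above. It assumes a Spark session with Hive support; the table names (raw_events, clean_events) and column names are hypothetical placeholders, not part of the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Spark session with Hive support (assumes a Hive metastore is available).
spark = (
    SparkSession.builder
    .appName("event-cleaning-pipeline")
    .enableHiveSupport()
    .getOrCreate()
)

# Ingest from a Hive table; "raw_events" is a hypothetical table name.
raw = spark.table("raw_events")

# Basic cleaning: drop rows missing the key field, deduplicate,
# and normalize a text column ("event_id"/"event_type" are placeholders).
cleaned = (
    raw.dropna(subset=["event_id"])
       .dropDuplicates(["event_id"])
       .withColumn("event_type", F.lower(F.col("event_type")))
)

# Write the result back to Hive as a partitioned table.
(
    cleaned.write
           .mode("overwrite")
           .partitionBy("event_date")
           .saveAsTable("clean_events")
)
```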


Additional Skills:

• Experience in building CI/CD pipelines with Git and Jenkins

• Have worked with large datasets

• Proficient in reading and understanding enterprise-grade PySpark or Spark with Scala code

Job posted by: Rahul Pandey