Job ID:
J49972
Job Title:
BigData Engineer
Location:
St. Louis, MO
Duration:
12 Months + Extension
Hourly Rate:
Depending on Experience (DOE)
Work Authorization:
US Citizen, Green Card, OPT-EAD, CPT, H-1B,
H4-EAD, L2-EAD, GC-EAD
Client:
To Be Discussed Later
Employment Type:
C2C
- Lead Bigdata Engineer/Hadoop Developer - St. Louis, MO
- BigData Hadoop Engineer - St. Louis, MO
- Bigdata/Hadoop Developer - St. Louis, MO
- Bigdata Engineer - St. Louis, MO
- Hadoop/Bigdata Developer - St. Louis, MO
- Bigdata developer - St. Louis, MO
- BigData Tester Hadoop Tester - St. Louis, MO
- Bigdata Engineer - St. Louis, MO
- Bigdata Developer with AWS - St. Louis, MO
- BigData / Hadoop Developer - St. Louis, MO
Job Description:
- · Build a highly functional and efficient Big Data platform that brings together data from disparate sources and allows FinThrive to design and run complex algorithms providing insights to Healthcare business operations.
- · Build ETL data pipelines in the AWS cloud using AWS ADF and Databricks with PySpark and Scala.
- · Migrate ETL data pipelines from an on-prem Hadoop cluster to the AWS cloud.
- · Build data ingestion pipelines in AWS to pull data from SQL Server.
- · Perform Automated and Regression Testing.
- · Partner with internal business, product and technical teams to analyze complex requirements and deliver solutions.
- · Participate in development, automation and maintenance of application code to ensure consistency, quality, reliability, scalability and system performance.
- · Deliver data and software solutions working on Agile delivery teams.
Requirements:
- · Bachelor's degree in Computer Science or a related discipline
- · 6+ years of data engineering in an enterprise environment
- · 6+ years of experience writing production code in Python, PySpark or Scala
- · Strong knowledge of the AWS platform; must have worked with AWS ADF, deployed ADF and Databricks code to production, and be able to troubleshoot production issues.
- · Experience with SQL.
- · Experience with Big Data technologies in AWS such as Spark, Hive, Sqoop, Databricks, or equivalent components.
- · Experience working with git and CI/CD tools
- · Proven background in Distributed Computing, ETL development, and large-scale data processing
- · Travel: None.
Apply Now
Cloud Hybrid is an equal opportunity employer inclusive of female, minority, disability and veterans, (M/F/D/V). Hiring, promotion, transfer, compensation, benefits, discipline, termination and all other employment decisions are made without regard to race, color, religion, sex, sexual orientation, gender identity, age, disability, national origin, citizenship/immigration status, veteran status or any other protected status. Cloud Hybrid will not make any posting or employment decision that does not comply with applicable laws relating to labor and employment, equal opportunity, employment eligibility requirements or related matters. Nor will Cloud Hybrid require in a posting or otherwise U.S. citizenship or lawful permanent residency in the U.S. as a condition of employment except as necessary to comply with law, regulation, executive order, or federal, state, or local government contract