Create pipelines, data flows, and data transformations using Azure Data Factory (ADF), Apache Spark with Python, and PySpark with Databricks. Design ADF pipelines to extract data from relational sources such as Teradata, Oracle, SQL Server, and DB2, and from non-relational sources such as flat files and JSON files. Develop Azure Databricks notebooks to perform data cleansing operations. Create reusable pipelines in Data Factory to extract, transform, and load data into Azure SQL DB and SQL Data Warehouse. Primary worksite is Farmington Hills, MI, but relocation is possible. Req: Master's degree in Computer Science or a related field, with 6 months of experience as a Software, Data, or ETL Engineer, Developer, or Consultant. Ref #00140.
To apply for this job, email your details to firstname.lastname@example.org