
Job Details

Data Engineering on Cloud at Futurense Technologies

Company

Futurense Technologies

Job Location

Bangalore

Compensation

INR 7,50,000 - 10,00,000

Requirement

Within 3 months

Work Experience

2 Years - 5 Years

Required Qualifications

Bachelor of Engineering / Bachelor of Technology / M.Tech / Master of Engineering

Job Description

Company Brief:

Futurense Technologies is an initiative of Miles Education, an organization that upskills students and professionals to make them future-ready and enables career progression through Data Science and AI courses (from IITs and IIMs), with training centers across India, the UAE, and the USA. Futurense provides an end-to-end solution that closes the gap between graduates and GCC (Global Capability Centre) companies by upskilling candidates to each company's specific needs and deploying them as job-ready resources from day one. Applicants are hired on salary, upskilled at no charge, deployed on a business transformation project with a Fortune 500 client, and mentored by us throughout their time on the project.

Job Profile:

The selected candidate will prepare and transform data using pipelines. This involves extracting data from various source systems, transforming it in a staging area, and loading it into a data warehouse, a process known as ETL (Extract, Transform, Load). The role involves organizing the collection, processing, and storage of data from different sources, so in-depth knowledge of Python, SQL, and other database solutions is required for the position.
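
For illustration, here is a minimal sketch of that ETL flow in plain Python. The file name orders.csv, the column names, and the SQLite database standing in for the warehouse are all hypothetical placeholders for the real source systems and warehouse platform.

    import csv
    import sqlite3

    # Extract: read raw records from a source-system export
    # (orders.csv and its columns are hypothetical).
    def extract(path):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    # Transform: clean and normalize the records in a staging step.
    def transform(rows):
        staged = []
        for row in rows:
            staged.append({
                "order_id": int(row["order_id"]),
                "customer": row["customer"].strip().title(),
                "amount": round(float(row["amount"]), 2),
            })
        return staged

    # Load: write the staged records into a warehouse table
    # (SQLite stands in for Redshift / Snowflake / BigQuery here).
    def load(rows, db_path="warehouse.db"):
        con = sqlite3.connect(db_path)
        con.execute(
            "CREATE TABLE IF NOT EXISTS orders "
            "(order_id INTEGER PRIMARY KEY, customer TEXT, amount REAL)"
        )
        con.executemany(
            "INSERT OR REPLACE INTO orders VALUES (:order_id, :customer, :amount)",
            rows,
        )
        con.commit()
        con.close()

    if __name__ == "__main__":
        load(transform(extract("orders.csv")))

In a production pipeline each stage would typically run as a separate, scheduled task in an orchestrator rather than as a single script.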

Eligibility:

Any Engineering or Computer discipline graduate or postgraduate with 2+ years of experience in data engineering, expertise in Python and SQL, and fluent spoken and written English.

Responsibilities:

  1. Create and maintain optimal data pipeline architecture
  2. Assemble large, complex data sets that meet business requirements
  3. Identify, design, and implement internal process improvements
  4. Optimize data delivery and re-design infrastructure for greater scalability
  5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies (see the sketch after this list)
  6. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
  7. Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs
  8. Create data tools for analytics and data scientist team members
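
As a concrete illustration of item 5 above, the sketch below uses the AWS Redshift Data API (via boto3) to run a SQL COPY that ingests a file from S3 into a warehouse table. The bucket, key, IAM role, cluster, database, user, and table names are all hypothetical placeholders.

    import boto3

    # Hypothetical placeholders; substitute real resource names.
    S3_BUCKET = "raw-events"
    S3_KEY = "exports/orders.csv"
    IAM_ROLE = "arn:aws:iam::123456789012:role/RedshiftCopyRole"

    def load_into_redshift():
        """Ask Redshift to ingest the S3 file directly via a SQL COPY."""
        client = boto3.client("redshift-data")
        copy_sql = (
            f"COPY analytics.orders FROM 's3://{S3_BUCKET}/{S3_KEY}' "
            f"IAM_ROLE '{IAM_ROLE}' CSV IGNOREHEADER 1"
        )
        resp = client.execute_statement(
            ClusterIdentifier="analytics-cluster",
            Database="prod",
            DbUser="etl_user",
            Sql=copy_sql,
        )
        # The call is asynchronous; the returned Id can be polled
        # with describe_statement to check completion.
        return resp["Id"]

Pushing the COPY into the warehouse engine like this keeps the heavy lifting on the cluster, which is the usual pattern when loading bulk files from object storage.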

Skills Required:

  1. Working knowledge of ETL on any cloud (Azure / AWS / GCP)
  2. Proficient in Python (Programming / Scripting)
  3. Good understanding of data warehousing concepts on at least one platform (Snowflake / AWS Redshift / Azure Synapse Analytics / Google BigQuery / Hive)
  4. In-depth understanding of principles of database structure
  5. Good understanding of at least one ETL technology (Informatica PowerCenter / AWS Glue / Azure Data Factory / SSIS / Spark / Matillion / Talend)
  6. Proficient in SQL (query solving)
  7. Knowledge of change management / version control tooling (VSS / Azure DevOps / TFS / GitHub / Bitbucket / CI/CD with Jenkins)

Salary:

Negotiable based on experience with data engineering skills and concepts, plus performance-based increments at 6 and 12 months after project deployment.

Additional Offerings:

Stock Options
