2021-12-21 08:26:51
2021-12-31
Experience level: Senior (5-7 Yrs)
Employment type: W2 - Contract
Location: United States
Job details
At Cloudely, we work with a single mission: Transform the way clients experience Product & Implementation, Development, and Support.
Growth is a journey, never a destination. We are constantly striving to earn the trust of clients globally by offering services across Salesforce, Oracle, Robotic Process Automation, DevOps, and Web and Mobile Programming, to name a few. And we are just getting started!
We have fabulous opportunities for you to grow along with us!
At Cloudely, you will find what you are looking for: the scope to learn, prove yourself, and grow. We are actively seeking success-hungry candidates who want to grow in the engineering domain.
Role: Sr. AWS Data Engineer
Location: Remote
Experience: 5-8 years
Job Description:
Responsibilities
- Develop, unit test, and implement data integration solutions.
- Import, cleanse, transform, validate, and analyze data in order to understand it and draw conclusions from it for data modeling, data integration, and decision-making purposes.
- The primary focus will be developing and unit testing ETL mapping jobs based on the requirement documents (HLD/LLD/STM) provided by the Lead or the Client, and fixing defects found during the QA and UAT process.
- Possess good knowledge of databases and the Linux/UNIX environment, and have exposure to data modeling and data warehouse concepts in order to create ETL jobs for dimension and fact tables.
- Create LLD/STM documents; develop, test, and deploy data integration processes using the ETL tool of choice.
- Adopt integration standards, best practices, the ABC (Audit, Balance, Control) framework, etc., while creating ETL jobs.
- Perform data validation, cleansing, and analysis.
- Test, debug, and document ETL processes, SQL queries, and stored procedures.
- Analyze data from various sources, including databases and flat files.
- Handle ETL development, deployment, optimization, support, and defect fixes.
- Develop error-handling processes.
- Create and implement a scheduling strategy (see the Airflow sketch after this list).
- Possess good knowledge of Agile and Waterfall methodologies.
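To give a concrete picture of the scheduling responsibility above, here is a minimal sketch of a daily extract-transform-load schedule in Airflow, one of the tools named in the qualifications below. The DAG id, task callables, and retry policy are hypothetical placeholders, not a prescribed design.

# Illustrative sketch only: a daily ETL schedule in Airflow.
# The DAG id, callables, and retry policy are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source files")               # stand-in for the real extract step

def transform():
    print("cleanse and validate")            # stand-in for the real transform step

def load():
    print("load dimension and fact tables")  # stand-in for the real load step

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2021, 12, 1),
    schedule_interval="@daily",   # run once per day
    catchup=False,                # do not backfill missed runs
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Enforce extract -> transform -> load ordering, with retries on failure.
    t_extract >> t_transform >> t_load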
Qualifications:
At least 3-5 years of experience with the design, development, automation, and support of applications to extract, transform, and load data
- Must have worked with AWS S3, SNS/SQS, Glue, Athena, Redshift, and Lambda
- Proficient with Spark on Databricks
- Proficient in SQL and SQL scripting
- Familiarity with big data solutions (e.g., data lake / Delta Lake)
- Ability to work in an agile environment
- Ability to build and maintain solutions with the latest technologies, including PySpark, Hadoop, Hive, Redshift, Scala, Python, Airflow, and traditional SQL procedures and ETL processes
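To give candidates a concrete sense of the day-to-day work, below is a minimal PySpark sketch of the kind of ETL job described above: read raw files from S3, cleanse and validate them, and write curated, partitioned output back to S3 for downstream use in Glue/Athena or a Redshift load. All bucket paths, column names, and types are hypothetical placeholders.

# Illustrative sketch only: a minimal PySpark ETL job.
# Bucket paths, column names, and types are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Extract: read raw CSV files landed in S3.
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("s3://example-raw-bucket/orders/"))

# Transform: cleanse and validate before loading.
cleaned = (raw
           .dropDuplicates(["order_id"])                        # de-duplicate on the business key
           .filter(F.col("order_id").isNotNull())               # reject rows missing the key
           .withColumn("order_ts", F.to_timestamp("order_ts"))  # normalize timestamps
           .withColumn("order_date", F.to_date("order_ts"))     # derive a partition column
           .withColumn("amount", F.col("amount").cast("decimal(12,2)")))

# Load: write partitioned Parquet to the curated zone, where a Glue crawler,
# an Athena table, or a Redshift COPY can pick it up.
(cleaned.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-curated-bucket/orders/"))

spark.stop()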
Role: Jr. AWS Data Engineer
Location: Remote
Experience: 2-5 years
Job Description:
The responsibilities and qualifications for this role are identical to those listed above for the Sr. AWS Data Engineer role.
The way to your dream job and organization is just a click away. Share your resume at [email protected]. To know more about us, please visit www.cloudely.com.