Data Engineer – Remote Opportunity

August 1, 2022

Job Responsibilities: Data Engineer – Remote Opportunity

Salary: $115,000 per year

Company: Thermo Fisher Scientific

Location: Remote (US)

Data Engineer

When you are part of the team at Thermo Fisher Scientific, you’ll do important work. You’ll have the opportunity to grow and learn in a culture that empowers your development. We have created an inclusive, global environment that values the power of diverse talent, backgrounds, and experiences to drive speed, productivity, innovation, and growth. We are seeking an energetic, responsible candidate to join our growing organization.

Thermo Fisher Scientific Inc. (NYSE: TMO) is the world leader in serving science. With revenues of $20 billion and the largest investment in R&D in the industry, we give our 70,000 employees the resources and opportunities to make significant contributions to the world. The customers we serve include pharmaceutical and biotech companies, hospitals and clinical diagnostic labs, universities, research institutions, and government agencies. Our products and services help accelerate the pace of scientific discovery and solve challenges ranging from complex research to routine testing to field applications.

What will you do?

The Data Engineer will join our Enterprise Data Platform delivery team and will be responsible for developing data-driven applications and automations across a variety of infrastructure, both on-premises and cloud. The Data Engineer must be able to work collaboratively in an Agile team to design, develop, and maintain data structures for the Enterprise Data Platform. This position offers an exciting opportunity to work on processes that interface with multiple systems, including AWS, Oracle, middleware, and ERPs. The candidate will take part in development projects and pilots and will advance best design practices.

Responsibilities

  • Lead the design, development, deployment, and maintenance of mission-critical data applications for the Enterprise Data Platform
  • Own the delivery of data-driven applications
  • Participate in all phases of the Enterprise Data Platform development life cycle, including but not limited to gathering customer requirements, defining technical requirements, creating high-level architecture diagrams, validating data, and running training sessions
  • Engage in data solutions and business intelligence projects and drive them to closure
  • Lead data analytics, data quality, machine learning, data acquisition, and visualization work, along with related design and analysis tasks
  • Coordinate and work closely with the architecture and data operations teams
  • On Agile projects, collaborate with the Product Owner on epic and user story definitions
  • Develop documentation and training materials, and participate with customer groups in planning longer-term system enhancements

Education

  • Bachelor’s degree in Computer Science or equivalent, with a minimum of 5 years of experience
  • Minimum of 3 years’ experience in a data engineering role, with a strong understanding of technical, business, and operational process requirements

Experience

  • Minimum of 5 years of experience with data lake, data analytics, and business intelligence solutions
  • Strong experience with ETL tools, preferably Informatica, Databricks, and AWS Glue
  • Extensive experience building data lakes with Databricks on AWS, Apache Spark, and Python
  • Minimum of 2 years of experience in a DevOps environment, including data integration and pipeline development
  • Minimum of 2 years of experience with AWS data integration using Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda across the S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems
  • Demonstrated skill in developing data warehouse projects and applications (Oracle and SQL Server)
  • Strong hands-on experience in Python development, especially PySpark in an AWS cloud environment (see the illustrative sketch after this list)
  • Experience with Python and common Python libraries
  • Strong analytical database experience: writing complex queries, query optimization, debugging, user-defined functions, views, indexes, etc.
  • Experience with source control systems such as GitHub and Bitbucket, and with Jenkins build and continuous integration tools
  • Knowledge of extract development against ERPs (SAP, Siebel, JDE, Baan) preferred
  • Strong understanding of AWS data lakes and Databricks
  • Experience with SAP ERP applications, data, and processes desired
  • Exposure to AWS data lakes, AWS Lambda, Amazon S3, Kafka, Redshift, and SageMaker would be an added advantage
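
Illustrative sketch (not part of the posting’s requirements): a minimal PySpark batch job of the kind described above, reading raw CSV from S3, applying a light transformation, and writing partitioned Parquet back to S3. All bucket names, paths, and column names are hypothetical examples.

    # Minimal PySpark sketch of an S3-to-S3 batch transform.
    # Bucket names, paths, and columns below are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-daily-load").getOrCreate()

    # Read raw CSV landed in S3 (schema inferred for brevity).
    raw = (spark.read
           .option("header", "true")
           .option("inferSchema", "true")
           .csv("s3://example-raw-bucket/orders/"))

    # Light cleanup: drop rows missing the key, stamp a load date.
    clean = (raw.dropna(subset=["order_id"])
             .withColumn("load_date", F.current_date()))

    # Write partitioned Parquet for downstream Redshift/Athena consumers.
    (clean.write
     .mode("overwrite")
     .partitionBy("load_date")
     .parquet("s3://example-curated-bucket/orders/"))

    spark.stop()

In practice a job like this would run on Databricks or EMR with IAM-based S3 access; the sketch assumes a Spark session with S3 credentials already configured.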

Skills and Abilities

  • Ability to work independently and as a member of a cross-functional team
  • Good organizational and time-management skills to handle multiple projects and prioritize effectively
  • Willingness to learn, be mentored, and improve
  • Ability to interpret data, translate it into information, and communicate that information effectively, both verbally and visually
  • Ability to multitask and apply initiative and creativity to challenging projects
  • Strong problem-solving and troubleshooting skills; ability to break a complex problem into smaller, manageable pieces

At Thermo Fisher Scientific, each one of our 70,000 extraordinary minds has a unique story to tell. Join us and contribute to our singular mission: enabling our customers to make the world healthier, cleaner and safer. Apply today!

Thermo Fisher Scientific is an EEO/Affirmative Action Employer and does not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability or any other legally protected status.
