DataOps Engineer (AWS, Data, Remote) (remote work)
(archived vacancy)

October 11, 2021

Salary:
not specified
Required experience:
not specified

Vacancy: DataOps Engineer (AWS, Data, Remote)

Компания "ClearScale"

ClearScale (headquartered in San Francisco, California, USA) is an AWS Premier Consulting Partner that has been offering a full range of professional cloud computing services for over 10 years, including architecture design, DevOps automation, refactoring and cloud-native application development, integration, migration, security services of all kinds (from a simple security check to preventing cyber-attacks), and 24/7 technical support using advanced technologies.

Our customer list is diverse: from government organizations (ClearScale is an official cloud partner of the State of California) and educational institutions (University of California, San Francisco) to well-known global brands (IBM, Samsung, GoPro, HP, Conde Nast, Carl Zeiss, etc.). We have served well over 850 satisfied customers, some of whom are featured on the company's website in the Case Studies section.

  • We were the third company to gain a new AWS competency, Applied AI and Machine Learning Operations (fewer than 15 partners hold it!), and we also hold the AWS Data Analytics Competency
  • We have confirmed status in the Database Freedom program (fewer than 20 companies in total)
  • We have proven expertise as a Managed Services Provider (fewer than 16 companies in total). This means that ClearScale can deliver a full cycle of consultancy and service: from audits and system or software development to 24/7 support. You can read more on the company's Managed Services page.

Since the company's foundation, we have worked 100% remotely from various cities and countries. We operate on a basis of high trust, so we do not monitor work via screen captures, webcams, or keyboard logging, as many other companies do. The professional reputation of our employees is of the highest value.

Job Overview

ClearScale is looking for an AWS DataOps Engineer to help build cost-efficient, scalable data lakes for a wide variety of customers, from small startups to large enterprises.

Our projects usually fall into one of the following categories (but are not limited to them):

  • Collect data from IoT edge locations, store it in a data lake, orchestrate ETL processes over that data, and slice it into various data marts. Then feed those data marts into machine learning or BI pipelines
  • Build a data delivery pipeline that ingests high-volume real-time streams, detects anomalies, computes windowed analytics, and writes the results to NoSQL systems for dashboard consumption
  • Build ML pipelines covering the full development cycle, from dockerization and orchestration to inference and model observability

Responsibilities

  • Offer technical leadership and mentorship of DataOps in the data team.
  • Serve as a subject matter expert for DataOps, and partner with functional experts across the company.
  • Own production pipelines for in-house developed models serving our real-time systems and risk models.
  • Work closely with data engineers to advise on implementation.
  • Recommend and drive architecture/infrastructure to create actionable, meaningful, and scalable solutions for business problems.
  • Establish scalable, efficient, and automated processes for large scale deployments (including machine learning solutions). Manage, monitor, and troubleshoot machine learning infrastructure.
  • Act as a trusted advisor: build relationships, promote best working practices, and identify areas for improvement with regard to DataOps services and tooling.
  • Ensure a proactive approach to environment provisioning, maintenance, and support.
  • Ensure best practices are adopted and followed for build, release, and deployment activities.
  • Ensure best practices are adopted and followed in the use of tooling and services that support effective change delivery.
  • Create infrastructure following the best practices outlined by our chosen cloud provider(s) (mostly AWS).
  • Define infrastructure as code (Terraform, CloudFormation) per best practice, organized to be easily maintained and extended.
  • Deploy software to and maintain multiple environments from Development through Production.
  • Provision environments and perform initial configuration for change initiatives.
  • Ensure environments maintain the highest level of quality, security, scalability, availability and compliance amidst an environment of rapid change and growth.
  • Take responsibility for upgrading and patching in-scope components.

Required skills and experience

  • Significant hands-on industry experience (5+ years)
  • Demonstrable experience deploying robust container-based solutions in production environments (ideally cloud-based environments such as AWS)
  • AWS cloud services: EMR, RDS, MSK, Redshift, DocumentDB, Lambda, EKS
  • Demonstrable experience with K8s, CloudFormation/Terraform, Jenkins, Ansible and others
  • Experience working with IAM, Security and other foundational cloud components
  • Expertise in standard software engineering methodology, e.g. unit testing, code reviews, design documentation, CI/CD
  • Experience building auto-scaling systems
  • A passion for creating innovative techniques and making these methods robust and scalable
  • Strong Python programming skills
  • Experience with databases, data structuring/warehousing, and machine learning pipelines
  • Exposure to machine learning concepts (feature engineering, text classification, and time series prediction) and frameworks with interest in learning more

Nice to have:

  • Java programming skills
  • Hands-on experience with message queuing, stream processing and highly scalable ‘big data’ stores
  • Big data tools: Kafka, Spark, Hadoop (HDFS 3, YARN 2, Tez, Hive, HBase)
  • Stream-processing systems: Kinesis Data Streams, Spark Streaming, Kafka Streams, Kinesis Data Analytics
  • Valid AWS certificates would be a great plus

What we offer

# 1 Fair wage

  • Fixed hourly rate in USD
  • Full-time, 40 hours per week contract. It’s your main and permanent place of work, not a freelance job
  • Payments every 2 weeks
  • Annual hourly rate review: received a client's appreciation, obtained an AWS certificate, saved the world? The reward is based on facts, not on personal attitude
  • Referral program: many engineers joined the team on the recommendation of friends and former colleagues, and we reward this.

# 2 Independence

  • from office location or conditions, because it's entirely up to you where you work: home, office, a co-working space, or outdoors in the woods. The only requirement is fast and reliable internet. At the same time, our engineers do not work evenings or weekends: we focus on your local timezone
  • from annoyances and big-team buzz: we are a community of professional engineers who value each other's time and effort
  • from corruption and bureaucracy: we operate in an honest, competitive environment and are one of AWS's top 10 key partners.

# 3 Professional Development

  • Work with innovative Silicon Valley companies and traditional American companies at the cutting edge of digital transformation
  • We work with the newest AWS cloud technologies and open-source tools, plus Jira, Confluence, Lucidchart, and Slack
  • Constant practice at writing and speaking in English
  • The team which is willing to share its experience
  • Paid AWS certification: we provide training materials, paid time off, and the exam itself
  • Horizontal and vertical career growth - are you ready to be greater, faster, stronger? We keep growing and people keep growing with us
  • A nice bonus if you accept our offer quickly