ClearScale is a leading cloud consulting company that provides a wide range of services including cloud architecture design, application development, system integration, cloud migration, automation, and managed services. We hold prestigious AWS Premier Consulting Partner status and help Fortune 500 enterprises, mid-sized businesses, and startups succeed with ambitious, challenging, and unique cloud projects. We architect, develop, and launch innovative and sophisticated solutions using cutting-edge cloud technologies.
ClearScale is growing quickly, and there is high demand for the services we provide. Clients come to us for our deep experience in cloud infrastructure, development, migrations, DevOps, and automation.
Responsibilities
- Analyze, scope, and estimate tasks; identify the technology stack and tools
- Design and implement optimal architecture and migration plan
- Specify the infrastructure and assist DevOps engineers with implementing the infrastructure
- Examine performance and advise on necessary infrastructure changes
- Communicate with clients on data-related issues and support their data infrastructure needs
- Collaborate with in-house and external development and analytical teams
Qualifications
ClearScale expects the successful candidate to have most of the following qualifications and skills (not all of them are required):
- Hands-on experience building and optimizing ‘big data’ architectures and data pipelines
- Hands-on experience designing efficient data architectures for high-load, enterprise-scale applications
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured documentation and datasets
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management
- Hands-on experience with message queuing, stream processing and highly scalable ‘big data’ data stores
- Strong self-management and self-organizational skills
- 5+ years of experience in a Data or Software Engineering role and a degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field
- Advanced SQL knowledge, including experience with relational databases and query authoring, as well as working familiarity with a variety of databases
- Successful candidates should have experience with some of the following software/tools:
- Big data tools: Kafka, Spark, Hadoop, Hive
- Relational SQL and NoSQL databases, including PostgreSQL, Cassandra, Couchbase, MongoDB and Elasticsearch
- Data pipeline and workflow management tools: AWS Glue, Airflow, NiFi, Step Functions
- AWS cloud services: Kinesis, Kinesis Analytics, EMR, RDS, Redshift, DocumentDB, Lambda
- Stream-processing systems: Kinesis Data Streams, Spark Streaming, Kafka Streams
- Object-oriented/functional programming languages: Java, Python
- Working knowledge of commercial BI and data warehousing tools such as Tableau, Snowflake, or Pentaho
- Valid AWS certifications would be a great plus
You’ll be a great fit if:
- You'd like to work remotely with a flexible schedule
- You thrive in a small, dynamic, and agile team that encourages you to learn and grow
- You desire to work with some of the world’s top brands
- You enjoy finding solutions to interesting problems and figuring out how things work
- You welcome having autonomy with complex tasks
- You are passionate about using your experience and expertise to inspire the team
We offer:
- High compensation paid every two weeks in USD
- Completely REMOTE work: we trust our employees to work from anywhere they please
- Very flexible schedule of 40 hours per week; Slack for communication
- Bureaucracy-free environment
- We adore change-makers and creators so you'll be able to influence our processes
- Highly skilled project teams of 3-5 senior-level professionals
- Well-planned project rotation