Mid-Level to Senior DevOps Engineer

New York, NY

Post Date: 06/27/2017 | Job ID: 9890274 | Industry: IT | Perm
One of New York's fastest-growing startups is experiencing its most rapid growth to date, and we need our first dedicated DevOps Engineer. As you know better than anyone, the simpler things seem, the more complex they are under the hood. We have complex infrastructure that's fun to work with: we manage a number of different properties, our own ETL pipeline, and a ton of data processing as we integrate with many different partners and always serve our users the most relevant jobs for them, and only the jobs they're qualified for, in real time.
You love setting up and managing infrastructure, but what really gets you excited is empowering the engineering team to work better and faster, and automating key tasks and incident responses so that everyone can sleep well at night.
Specifically, you will:
  • Be primarily responsible for the maintenance, management, and streamlining of all of our servers, external processes, and hosted services
  • Share pager duty to ensure that all of our products and services are up and running
  • Automate infrastructure management and maintenance with the aim of empowering the team and ensuring site reliability
  • Proactively manage performance to ensure the site is always fast
  • Own and improve our build and deployment processes
  • Create and monitor dashboards and alerts for key infrastructure metrics, health checks, and business KPIs that relate to site reliability (see the health-check sketch after this list)
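To give a flavor of the health checks and metrics involved, here is a minimal sketch that probes a service endpoint and reports an up/down gauge and latency through the Datadog Python client's DogStatsD interface; the endpoint, metric names, and tags are hypothetical placeholders, not part of this posting.

```python
# Minimal health-check sketch (hypothetical endpoint and metric names).
# Assumes a local DogStatsD listener (shipped with the Datadog agent).
import time

import requests
from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)


def check(url, service):
    """Hit a health endpoint and report an up/down gauge plus latency."""
    start = time.monotonic()
    try:
        ok = requests.get(url, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    elapsed_ms = (time.monotonic() - start) * 1000
    statsd.gauge("service.up", int(ok), tags=["service:" + service])
    statsd.histogram("service.health_check.latency_ms", elapsed_ms,
                     tags=["service:" + service])


if __name__ == "__main__":
    check("https://example.com/healthz", "web")
```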
Our tools:
  • Python for our main application, Node.js for microservices
  • PostgreSQL, Elasticsearch, RabbitMQ, Memcached, Redis, Nginx for proxying and Varnish for caching
  • Redshift (and trying out BigQuery) for data warehousing
  • CircleCI
  • LogDNA for centralized log management
  • New Relic, Librato, Datadog, Opbeat for monitoring and alerts
  • Docker (in production!)
  • Hashicorp Vault for secrets management
  • Airflow for cron and DAG-based scheduling (see the DAG sketch after this list)
  • AWS Batch for ETL and memory-intensive cron job execution
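To illustrate the kind of DAG-based scheduling we do in Airflow, here is a minimal sketch of a nightly two-step DAG; the dag_id, schedule, and commands are hypothetical placeholders, not our actual pipeline.

```python
# Minimal Airflow DAG sketch (hypothetical dag_id, schedule, and commands).
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    "owner": "devops",
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
}

# Runs nightly at 02:00; extract must finish before load starts.
dag = DAG(
    dag_id="example_etl",
    default_args=default_args,
    start_date=datetime(2017, 6, 1),
    schedule_interval="0 2 * * *",
)

extract = BashOperator(task_id="extract", bash_command="python extract.py", dag=dag)
load = BashOperator(task_id="load", bash_command="python load.py", dag=dag)

extract >> load  # load runs only after extract succeeds
```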
Requirements:
  • 3+ years' experience building complex distributed systems. In this role you are the one gravitating toward the operational concerns of the team, focusing on reliability, performance, capacity planning, and automation of everything
  • Significant experience with AWS
  • Experience with Docker
  • Experience with ETL pipelines and "big data" tools like Hadoop, Storm, Spark, and Mahout
  • Experience with continuous integration, testing, and deployment
  • Experience with log aggregation
  • Experience with creating key metric monitoring dashboards, ideally with Datadog
  • Experience with Elasticsearch optimization and/or management preferred
  • Familiarity with Ansible, Fabric, and CircleCI preferred but not required

Jose Bustamante

