- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and Azure regions.
- Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Maintain master data, reference data, and data quality on a daily basis.
- 3+ years of experience in a similar role.
- Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Advanced SQL skills and experience with relational databases, including query authoring and working familiarity with a variety of database systems.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.