
Challenge yourself. Grow.

Be our next Senior Data Engineer

VP: Technology

Immediate superior: Jose Israel Rico / Alejandro Palomino

 

The impact you will achieve

 

Belcorp is a multinational beauty and personal care company that develops, produces and distributes beauty products under the commercial brands Ésika, L'Bel and Cyzone, through Direct Sales, E-commerce, and Retail channels. Belcorp develops talent by promoting the best people through innovative ways of working, creating opportunities to grow, and fostering a culture of high performance with solid leadership, focused on achieving extraordinary results.

As Belcorp continues to grow, we are bringing together the best technology team in Latin America to transform the digital experience of our beauty consultants and consumers. To that end, we are looking for a Senior Data Engineer to build reliable, distributed data pipelines and intuitive data products that allow our Data Product Owners and Stakeholders to leverage data efficiently and effectively. As part of this team, you will work with diverse data technologies such as Spark, Hive, Presto, EMR, Document DB, Kafka & others to build insightful, scalable, and robust data pipelines.

You will be part of Belcorp's Data Architecture team and will work on strategic projects, leading the definition and implementation of distributed data processing pipelines. We are looking for someone who is motivated and passionate about exploring and learning new technologies, thrives in an evolving environment, brings an enthusiastic, collaborative, and deeply curious mindset, and delights in making a difference every day.

 

The challenges you will have

 

As a Senior Data Engineer, you will build business-critical, scalable, and robust data processing solutions as well as intuitive and trustworthy data products that power data discovery and self-serve analytics and drive product innovation.

Some of the Senior Data Engineer's responsibilities are:

  • Development Lead
    • Operates independently, demonstrates excellence, and learns new technologies and frameworks
    • Engineers efficient, adaptable and scalable data pipelines to process structured and unstructured data
    • Maintains and rethinks existing datasets and pipelines to service a wider variety of use cases
    • Develops subject-matter expertise in the cloud platform domain
    • Acts as a thought partner to the platform engineering team, understands their challenges, and makes opinionated recommendations that empower them with data analytics solutions to scale Belcorp's infrastructure and tools efficiently
    • Enables smart analytics by building robust, reliable, and useful data sets that can power various analytic techniques like regression, classification, clustering, etc.
    • Improves the quality, reliability, accuracy, and consistency of our data.
    • Provides technical guidance and coaching to data engineers to develop pipelines on the platform
    • Thinks and works in an agile way, including automated testing, continuous integration, and continuous deployment.
  • Business-Tech understanding
    • Collaborates with stakeholders to understand needs, model tables using data warehouse best practices, and develop data pipelines to ensure the timely delivery of high-quality data.
    • Acts as a bridge between business analysts and the technology team, enabling insights that empower better decision-making.
    • Works with technical and product stakeholders to understand data-oriented project requirements.
    • Designs, implements, and ensures the accuracy of validation and related services for data models
    • Maintains an analytical mindset and a passion for solving business problems using data.

 

What we need from you

The Senior Data Engineer will have a strong background in distributed data processing and software engineering to build high-quality, scalable data products. The Senior Data Engineer has tremendous, demonstrable data intuition and shares our passion for continuously improving the ways we use data to make our Data & Analytics Platform better.

  • Experience designing, building, and iterating on interfaces, with a strong sense of the entire user experience, not just the individual pieces.
  • Highly proficient in at least one of Java, Python, or Scala
  • Demonstrated expertise in engineering data pipelines on large-scale data sets using big data technologies (Hive, Presto, Spark, Python, EMR, MongoDB, etc.).
  • Experience with building stream-processing applications using Apache Flink, Spark-Streaming, Apache Storm, Kafka Streams or others.
  • Understanding of the data lifecycle and concepts such as lineage, governance, privacy, retention, anonymity, quality, etc.
  • Experience in data modeling and comfortable with complex SQL
  • Experience managing cloud services (AWS, GCP, Azure) for different types of model deployments (API-based, near-real-time, batch processes, etc.).
  • Experience building container-based microservices in the cloud.
  • Experience deploying DataOps and MLOps frameworks.
  • Extensive experience using SQL to transform data tables into meaningful data sources for business reporting.
  • Excellent written and verbal communication skills.

 

Apply!
