Responsibilities
- Collaborate with product owners and team leads to identify, design, and implement new features that support growing data needs.
- Build and maintain optimal architecture to extract, transform, and load data from a wide variety of sources, including external APIs, data streams, and data lakes.
- Implement data privacy and security requirements to keep solutions compliant with security standards and frameworks.
- Monitor and anticipate trends in data engineering and propose changes aligned with organizational goals and needs.
- Share knowledge with other teams on data engineering and project-related topics.
- Collaborate with the team to choose the tools and strategies for specific data integration scenarios.
What we expect from you
- 4+ years of commercial experience in data engineering.
- Strong programming skills in Python or Scala.
- Solid grasp of distributed computing approaches, patterns, and technologies (Spark/PySpark).
- Experience with any cloud platform (GCP, AWS, Azure) and its data-oriented components.
- Proficiency in SQL and query tuning.
- Understanding of data warehousing principles and modeling concepts (e.g., OLTP/OLAP, SCD, (de)normalization, dimensional modeling, star/snowflake schemas).
- Expertise with at least one of the following relational databases: PostgreSQL, MSSQL, or MySQL.
- Experience orchestrating data flows with tools such as Apache Airflow, Prefect, AWS Glue, or Azure Data Factory.
- A team player with excellent collaboration skills.
- English level B2 or higher.
Will be a plus
- Expertise in stream processing using current industry standards (e.g., AWS Kinesis, Kafka Streams, Spark/PySpark).
- Expertise in data storage design principles: understanding the pros and cons of SQL/NoSQL solutions, their types, and configurations (standalone/cluster, column/row-oriented, key-value/document stores).
- Experience building modern data warehouses with Snowflake, AWS Redshift, or BigQuery.
- Deep knowledge of Spark internals (tuning, query optimization).
- Experience with data integration and business intelligence architecture.
- Experience with data lakes and lakehouses (Azure Data Lake, Apache Hudi, Apache Iceberg, Delta Lake).
- Experience with containerized (Docker, ECS, Kubernetes) or serverless (AWS Lambda) deployment.
- Good knowledge of popular data standards and formats (e.g., JSON, XML, Protobuf, Parquet, Avro, ORC).
- Experience with platforms such as Informatica, Databricks, Talend, or Fivetran.
- Experience in data science and machine learning, including building ML models.
NIX is a global supplier of software engineering and IT outsourcing services
NIX teams collaborate with partners from different countries. Our specialists have experience delivering innovative projects, from e-commerce to cloud solutions, for some of the largest companies in the world, including Fortune 500 members. The teams focus on the steady growth of the international IT market, their clients' businesses, and their own professional skills.
What we offer
- Stable long-term employment opportunity in a well-established work environment.
- Private health insurance (Medicover).
- English courses and speaking clubs.
- Opportunities for professional and personal growth within our company.
- Mentoring program, along with internal and external professional training programs.
- Comfortable office in the 13th district of Budapest, conveniently accessible by public transport.
- Every tool and device you need to excel in your job, including computers, dedicated meeting rooms, and spacious, modern kitchens with professional coffee machines. Our recreation areas, equipped with game consoles, board games, and a diverse selection of literature, are perfect for when you need a break.
- Collaborative and supportive work culture.
Become a part of the NIX team
