Lead the design and development of data engineering solutions, including data pipelines and ETL processes.
Mentor and lead a team of data engineers, providing technical guidance and oversight to ensure high-quality deliverables.
Accurately estimate time and resources required for new projects.
Promote a culture of continuous learning and improvement, staying up to date with industry best practices and emerging technologies.
Evaluate team performance, provide feedback and coaching, and encourage collaboration and open communication within the team and with stakeholders.
Participate in the recruitment process to select new team members.
Strong technical skills in code review, architecture, and data modeling.
Experience designing data pipelines and building ETL processes.
Passion for continuous learning and improvement.
Ability to accurately estimate time and resources for projects.
Ability to manage timelines and deadlines while ensuring high-quality work.
Excellent communication and collaboration skills.
Ability to identify and address performance issues and recognize high performers.
Strong leadership and mentoring skills.
Experience in recruiting and building a team.
Minimum English level B2.
6+ years of proven experience developing software using Scala/Java.
Proficiency in SQL and query tuning.
Solid understanding of distributed computing approaches, patterns and technologies (Spark, Kafka).
Deep knowledge of data warehousing principles and modeling concepts (OLTP/OLAP, (de)normalization, dimensional and star schemas).
Experience working with any cloud platform (GCP, AWS, Azure).
Experience building modern data warehouses with tools such as Snowflake, AWS Redshift, and BigQuery.
Expertise in relational databases (PostgreSQL, MSSQL, MySQL).
Experience with orchestration tools for data flows (e.g., Apache Airflow, Talend, Glue, Azure Data Factory).
Expertise in data storage design principles, including the pros and cons of SQL/NoSQL solutions.
Deep knowledge of Spark internals (tuning, query optimization).
Experience with data integration and business intelligence architecture.
Experience with non-relational databases (e.g., MongoDB, DynamoDB).
Experience with data lakes and lakehouses (Azure Data Lake, Apache Hudi, Apache Iceberg, Delta Lake).
Experience with stream processing using industry standards (e.g., AWS Kinesis, Kafka Streams).
Familiarity with distributed computing systems such as Hadoop, Storm, Hive and Beam.
Experience with containerized (Docker, ECS, Kubernetes) or serverless (Lambda) deployment.
Knowledge of popular data standards and formats (e.g., JSON, XML, Proto, Parquet, Avro, ORC).
Experience with Informatica, Databricks, Talend, Fivetran or similar.
Experience in data science and machine learning, including building ML models.
NIX is a global supplier of software engineering and IT outsourcing services.
NIX teams collaborate with partners from different countries. Our specialists have experience developing innovative projects, from e-commerce to cloud, for some of the largest companies in the world, including Fortune 500 members. The teams are focused on the steady development of the international IT market, the business, and their own professional skills.
What we offer
- Competitive compensation packages.
- Stable employment based on a full-time employment contract.
- Private health insurance (Medicover Clinic).
- AYCM sport pass, providing discounts at various sports facilities in Hungary.
- Interesting tasks and diverse opportunities for developing your skills.
- Free training courses, including English.
- Participation in internal and external thematic events and technical conferences.
- A spacious office in the heart of Budapest (13th district).
- All necessary devices and tools for your work.
- A friendly, motivating atmosphere.
- An active corporate life.