




**Job Title: Data Engineer (AWS)**

**Location: Lisbon or Porto, Portugal**

**Work Regime: Full-time & Hybrid (2 to 3 office days per week)**

**Overview / Summary:**

We are looking for a Data Engineer responsible for building and maintaining data platforms. This role involves implementing scalable, performant data pipelines and data integration solutions, agnostic of data sources and technologies, to ensure efficient data flow and high data quality, enabling data scientists, analysts, and other stakeholders to access and analyze data effectively.

**Responsibilities and Tasks:**

* Design, build, and maintain scalable data platforms;
* Collect, process, and analyze large and complex data sets from various sources;
* Develop and implement data processing workflows using frameworks such as Spark and Apache Beam;
* Collaborate with cross-functional teams to ensure data accuracy and integrity;
* Ensure data security and privacy through proper implementation of access controls and data encryption;
* Extract data from various sources, including databases, file systems, and APIs;
* Monitor system performance and optimize for high availability and scalability.
### **Requirements**

**Mandatory Requirements:**

* Experience with cloud platforms and services for data engineering (AWS);
* Proficiency in programming languages such as Python, Java, or Scala;
* Experience with Big Data tools such as Spark, Flink, Kafka, Elasticsearch, Hadoop, Hive, Sqoop, Flume, Impala, Kafka Streams and Connect, Druid, etc.;
* Knowledge of data modeling and database design principles;
* Familiarity with data integration and ETL tools (e.g., Apache Kafka, Talend);
* Understanding of distributed systems and data processing architectures;
* Strong SQL skills and experience with relational and NoSQL databases;
* Familiarity with other cloud platforms and services for data engineering (e.g., GCP, Azure Data Factory);
* Experience with version control tools such as Git.

### **Benefits**

**Important:**

* Our company does not sponsor work visas or work permits. All applicants must have the legal right to work in the country where the position is based.
* Only candidates who meet the required qualifications and match the profile requested by our clients will be contacted.

**#VisionaryFuture - Build the future, join our living ecosystem!**


