




**Data Engineer (AWS Specialist)**

**Location:** [City/Remote]
**Employment Type:** Full-time
**Experience:** 5+ years

**About the role**

We are looking for an experienced Data Engineer with a passion for building robust cloud architectures. You will be responsible for designing and optimizing a scalable Data Lake and Data Warehouse environment within the AWS ecosystem. In this role, you will focus on automating data pipelines and ensuring that complex datasets are transformed into valuable business insights.

**What you will do**

You will design and develop scalable Data Lake infrastructures based on AWS S3 and build complex ETL processes using AWS Glue and Python. You will automate and orchestrate data workflows with EventBridge and Step Functions, and you will optimize query performance with AWS Athena and the Glue Data Catalog while collaborating with stakeholders to translate business needs into efficient data solutions.

**Your Profile**

You have at least five years of professional experience in Data Engineering, including at least two years of hands-on experience with AWS Data Services. Strong proficiency in Python for data transformations is essential, and you have in-depth knowledge of S3, Glue, Lambda, Athena, and Step Functions. You have strong SQL skills for data manipulation and optimization, experience with data formats such as Parquet, Avro, and JSON, and you are comfortable with version control using Git. You are analytically strong, proactive, and able to explain complex technical concepts to non-technical stakeholders.

**Nice-to-Haves**

AWS certifications such as Data Engineer or Solutions Architect are a plus. Experience with containerization tools such as Docker or Kubernetes is valued. Familiarity with data modeling techniques such as Dimensional Modeling or Data Vault is an advantage, as is experience with machine learning pipelines and MLOps.

**What we offer**

We offer a challenging role within an innovative environment. [Add specific benefits here, such as competitive salary, flexible hours, or training budget.]

**Interested?**

If you have the required AWS expertise and are ready for a new challenge, apply now with your CV and a brief motivation.

Job Type: Full-time
Pay: 200.00€ - 250.00€ per day

Application Question(s):

* How many years of professional experience do you have specifically in Data Engineering, and can you briefly name the primary data warehousing or data lake projects you have led?
* This role requires hands-on experience with AWS Glue and Step Functions. On a scale of 1-10, how would you rate your ability to build and orchestrate ETL pipelines using these specific tools, and why?
* Are you proficient in writing production-ready Python code for data transformations, and have you used Git for version control (branching and merging) in a collaborative team environment?


