Organizational Context
The International Federation of Red Cross and Red Crescent Societies (IFRC) is the world's largest humanitarian organization, with 191 member National Societies. As part of the International Red Cross and Red Crescent Movement, our work is guided by seven fundamental principles: humanity, impartiality, neutrality, independence, voluntary service, unity and universality. The IFRC is headquartered in Geneva, with regional and country offices throughout the world. The Digital Transformation Department (DTD), based in Geneva and Budapest, coordinates and provides all information and digital technology services for the IFRC worldwide. The IFRC is currently replacing its legacy financial and logistics IT systems with Microsoft Dynamics 365 Finance and Operations (D365 F&O).
Job Purpose
Scope of the mission
The International Federation of Red Cross and Red Crescent Societies (IFRC) is looking for two Senior Full Stack Azure Synapse Engineer & Power BI Developers (Full Stack Data & Analytics Developers for short). The scope is to support the implementation of reporting solutions for our ERP system (D365) under the “Data analysis and visualization project”, as part of the ERP implementation program.
Technical context
As part of the ERP program implementation team, the Full Stack Data & Analytics Developers will be responsible for implementing data visualizations based on the business requirements collected for each business function (Human Resources, Finance, Logistics, Resource Mobilization, Project Management). The Full Stack Data & Analytics Developers will implement the solution based on agreed architecture guidelines.
The IFRC Data Platform is based on Microsoft Fabric and Synapse using Data Vault and Dimensional modeling techniques for consumption in Power BI. The Full Stack Data & Analytics Developer will be responsible for the end-to-end development of reports including data modelling, data engineering, and Power BI report development.
Job Duties and Responsibilities
Deliver Power BI report solutions end to end: starting from the business requirement document, enable the data in the IFRC Data Platform, create Power BI reports (including paginated reports, under Git version control), respond to user feedback, and deploy to production.
The role will carry out the following tasks:
Requirements and Power BI
Understand the business process and data model, business logic, modelling logic, and ways to identify missing data. Liaise with the Business Contact or Business Analyst to understand the requirements. Identify data requirements and data elements missing from the Data Platform. Design and implement dashboards and reports in Power BI that provide actionable information for business users. Follow through on issues identified during UAT until they are resolved. Publish data and reports to production.
Microsoft Fabric/Azure Synapse
Develop database schemas, define relationships, and optimize performance based on the specific requirements of the data solution. Develop data products using SQL and PySpark. Implement data quality checks and processes to ensure data accuracy, consistency, and completeness. Implement security measures to protect sensitive data and ensure compliance with relevant regulations and standards.
Optimize solutions for performance and scalability. Identify and resolve performance bottlenecks, optimize SQL queries, and fine-tune data processing workflows. Work collaboratively with cross-functional teams, including architects, data scientists, data analysts, and business stakeholders. Document data engineering processes, system architecture, and data flow diagrams for knowledge sharing and future reference.
Education
Relevant education and a technical degree.
Experience
5+ years of progressively responsible postgraduate experience in data engineering, with a focus on big data modelling. 3+ years in data engineering (Microsoft Fabric or Azure Synapse) and report development (Power BI). Proven track record as a Data Engineer or in a similar role, and in Power BI development, including paginated or SSRS reports. Experience in Data Lake and Data Lakehouse implementation (e.g., Microsoft Fabric, Azure Synapse, Databricks, Snowflake, Microsoft SQL Server, Apache Spark/Hadoop, or other similar big data or SQL databases). Experience in Data Vault and dimensional data modelling techniques.
Knowledge, Skills and Languages
Independent and proactive problem solver; a self-starter who can easily engage and interact with business users and other developers. Proficiency in programming languages such as Python, SQL, or Java. Proficiency in the Apache Spark framework. Experience in data governance, data architecture, and handling large datasets and data pipelines. Experience in data product development, business value case development, and data product deployment is preferred. Proficiency in English.
Competencies, Values and Comments
The role reports to the Data Manager and the Reporting Line Workstream Lead, and will work closely with the project team based in Geneva and Budapest.
Location of the work: open worldwide.
Working language is English.
Duration
The mission should start as soon as possible, with an occupation rate of 100%. Contract duration is 6 to 8 months, with the possibility of extension.