Context
<p class="p1" style="text-align: justify;">The <a href="https://team.inria.fr/mimove/">MIMOVE</a> team at Inria Paris undertakes research enabling next-generation distributed computing systems, from their conception and design to their runtime support. MIMOVE has longstanding expertise in system interoperability & composition, resource allocation & system performance, and edge/fog computing. In particular, the Internet of Things (IoT) has been one of our main focuses. In our solutions, we have introduced system models, analyses, algorithms and protocols for capturing and managing the characteristics of the systems under study, as well as designed and developed related middleware tools and architectures. Currently, we are focusing our distributed system research on distributed machine learning (ML) systems. We situate distributed ML systems of interest in the resource/compute continuum edge-fog-cloud, combined with the IoT.</p>
<p>The selected candidate will be supervised by Maroua Bahri (<a href="mailto:maroua.bahri@lip6.fr">maroua.bahri@lip6.fr</a>) and Nikolaos Georgantas (<a href="mailto:nikolaos.georgantas@inria.fr">nikolaos.georgantas@inria.fr</a>).</p>
Assignment
<p class="p1" style="text-align: justify;">Data Stream Processing and Analytics (DSPA) applications are widely used to process unbounded data streams generated online at different rates from multiple geographically distributed data sources, such as mobile IoT devices, sensors, etc. These data streams require to be processed with low latency guarantees to extract valuable information in a timely manner via a series of continuous operators that constitute a DSPA application. <span class="Apple-converted-space"> </span></p>
<p class="p1" style="text-align: justify;">The edge-fog-cloud continuum deployment approach enables benefits from both lower network delays and balanced bandwidth usage and resources along the continuum. To this end, it requires deciding which part of the DSPA application to deploy on each of the layers in order to ensure the trade-off between the aforementioned advantages. Several deployment solutions have been proposed in the literature that statically identify (near) optimal deployment schemes of DSPA applications which are typically long-running with varying workloads conditions over time [1,2]. To keep consistent Quality of Service (QoS) levels (e.g., latency, energy, network constraints) in the face of such varying conditions, the static deployment scheme may no longer be sufficient. This requires a solution for triggering and calculating dynamically a new deployment scheme from the current deployed DSPA application in order to continuously ensure the required QoS levels [3,4]. Actually, dynamic deployment should be triggered at the right time: triggering it too late will violate the QoS requirements while triggering it too early will impose unnecessary load on the edge-fog-cloud resources and may result in a solution that diverges from the (near) optimal solution. <span class="s1"> </span></p>
<p class="p1" style="text-align: justify;">The edge-fog-cloud continuum deployment approach enables benefits from both lower network delays and balanced bandwidth usage and resources along the continuum. To this end, it requires deciding which part of the DSPA application to deploy on each of the layers in order to ensure the trade-off between the aforementioned advantages. Several deployment solutions have been proposed in the literature that statically identify (near) optimal deployment schemes of DSPA applications which are typically long-running with varying workloads conditions over time [1,2]. To keep consistent Quality of Service (QoS) levels (e.g., latency, energy, network constraints) in the face of such varying conditions, the static deployment scheme may no longer be sufficient. This requires a solution for triggering and calculating dynamically a new deployment scheme from the current deployed DSPA application in order to continuously ensure the required QoS levels [3,4]. Actually, dynamic deployment should be triggered at the right time: triggering it too late will violate the QoS requirements while triggering it too early will impose unnecessary load on the edge-fog-cloud resources and may result in a solution that diverges from the (near) optimal solution. <span class="s1"> </span></p>
Main activities
<p class="p1" style="text-align: justify;">The internship focuses on enhancing DSPA applications through predictive methods for proactive triggering and optimized scheduling mechanisms across the edge-fog-cloud continuum to maintain consistent QoS requirements. The proactive approaches will leverage AI-based methods over historical and real-time system and application metrics data to forecast operator and execution environment changes, enabling dynamic adaptation of operator scheduling [5].</p>
<p class="p1" style="text-align: justify;">Key objectives include: design of an intelligent triggering strategy to initiate dynamic redeployment, predictive scheduling for proactive adjustments to operator deployments, and validation of the proposed scheduling method to ensure QoS metrics. This work aims to ensure optimal resource usage and performance in highly dynamic environments while maintaining a balance between proactive adjustments and minimal disruption to operations.</p>
<p class="p2" style="text-align: justify;"><em><strong>References:</strong></em></p>
<p class="p2" style="text-align: justify;"><span style="font-size: 10pt;">[1] P. Ntumba, N. Georgantas, and V. Christophides, “Efficient scheduling of streaming operators for IoT edge analytics” in FMEC, 2021.</span></p>
<p class="p2" style="text-align: justify;"><span style="font-size: 10pt;">[2] P. Ntumba, N. Georgantas, V. Christophides, “Scheduling Continuous Operators for IoT edge Analytics with Time Constraints”. SMARTCOMP 2022: 78-85.</span></p>
<p class="p2" style="text-align: justify;"><span style="font-size: 10pt;">[3] P. Ntumba, N. Georgantas, V. Christophides. “Adaptive Scheduling of Continuous Operators for IoT Edge Analytics”. Future Gener. Comput. Syst. 158: 277-293 (2024).</span></p>
<p class="p2" style="text-align: justify;"><span style="font-size: 10pt;">[4] H. Arkian, “Resource management for data stream processing in geo-distributed environments,” Ph.D. dissertation, Université de Rennes 1, 2021.</span></p>
<p class="p2" style="text-align: justify;"><span style="font-size: 10pt;">[5] Z. Zhong, M. Xu, M. A. Rodriguez, C. Xu, and R. Buyya, “Machine learning-based orchestration of containers: A taxonomy and future directions,” ACM Computing Surveys (CSUR), 2022.</span></p>
<p class="p1" style="text-align: justify;">Key objectives include: design of an intelligent triggering strategy to initiate dynamic redeployment, predictive scheduling for proactive adjustments to operator deployments, and validation of the proposed scheduling method to ensure QoS metrics. This work aims to ensure optimal resource usage and performance in highly dynamic environments while maintaining a balance between proactive adjustments and minimal disruption to operations.</p>
<p class="p2" style="text-align: justify;"><em><strong>References:</strong></em></p>
<p class="p2" style="text-align: justify;"><span style="font-size: 10pt;">[1] P. Ntumba, N. Georgantas, and V. Christophides, “Efficient scheduling of streaming operators for IoT edge analytics” in FMEC, 2021.</span></p>
<p class="p2" style="text-align: justify;"><span style="font-size: 10pt;">[2] P. Ntumba, N. Georgantas, V. Christophides, “Scheduling Continuous Operators for IoT edge Analytics with Time Constraints”. SMARTCOMP 2022: 78-85.</span></p>
<p class="p2" style="text-align: justify;"><span style="font-size: 10pt;">[3] P. Ntumba, N. Georgantas, V. Christophides. “Adaptive Scheduling of Continuous Operators for IoT Edge Analytics”. Future Gener. Comput. Syst. 158: 277-293 (2024).</span></p>
<p class="p2" style="text-align: justify;"><span style="font-size: 10pt;">[4] H. Arkian, “Resource management for data stream processing in geo-distributed environments,” Ph.D. dissertation, Université de Rennes 1, 2021.</span></p>
<p class="p2" style="text-align: justify;"><span style="font-size: 10pt;">[5] Z. Zhong, M. Xu, M. A. Rodriguez, C. Xu, and R. Buyya, “Machine learning-based orchestration of containers: A taxonomy and future directions,” ACM Computing Surveys (CSUR), 2022.</span></p>
Skills
<ul class="ul1">
<li class="li1" style="text-align: justify;">Master level research internship (M2) or equivalent (stage de fin d'études ingénieur).</li>
<li class="li1" style="text-align: justify;">Sound knowledge of machine learning, distributed systems, and edge-fog-cloud computing.</li>
<li class="li1" style="text-align: justify;">Software development skills: Python and Java.</li>
<li class="li1" style="text-align: justify;">Good level of spoken and written English which is our working language. French is not required.</li>
</ul>
<li class="li1" style="text-align: justify;">Master level research internship (M2) or equivalent (stage de fin d'études ingénieur).</li>
<li class="li1" style="text-align: justify;">Sound knowledge of machine learning, distributed systems, and edge-fog-cloud computing.</li>
<li class="li1" style="text-align: justify;">Software development skills: Python and Java.</li>
<li class="li1" style="text-align: justify;">Good level of spoken and written English which is our working language. French is not required.</li>
</ul>
Reference
2026-09720