AVL is the world’s largest independent company for the development, simulation, and testing of powertrain systems (hybrid, combustion engines, transmissions, electric drives, batteries, and software) for passenger cars, trucks, and large engines. AVL – AST d.o.o. Croatia is a member of the AVL Group.
We are looking for a motivated student to conduct their master’s thesis in the area of adaptive hybrid control strategies with the use of reinforcement learning techniques.
Optimal control of energy flows in vehicles is key to achieving the highest efficiency in transportation. Today, hybrid vehicle controllers work mainly according to rule-based approaches, considering the actual vehicle/powertrain states and making decisions about energy flow coordination: use of the battery vs. use of fuel.
With increased connectivity, additional information about the driving task becomes available for optimizing the system’s operation. Such information includes, for example, the characteristics of the upcoming route sections (e.g., altitude, speed limits). Based on this, predictive control functions are implemented that optimize the control strategy over the whole upcoming route. However, as conventional optimization algorithms and approaches (e.g., quadprog, ECMS) are limited to a certain number of input variables in practical use, alternatives are needed that can take additional inputs into account.
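As a toy illustration of what "optimizing over the upcoming route" means, the sketch below exhaustively searches battery/fuel split sequences over a short, known demand horizon. All numbers (discretized splits, SoC drain model, the 0.3 SoC floor) are hypothetical assumptions for illustration, not AVL's actual models or tooling:

```python
from itertools import product

def best_split_sequence(demands, soc0=0.6, splits=(0.0, 0.5, 1.0)):
    """Exhaustive search over battery/fuel split sequences for a short horizon.

    demands -- normalized power demand per upcoming route section
               (assumed known in advance from horizon data).
    Returns the split sequence that minimizes fuel use while keeping the
    battery state of charge (SoC) above a hypothetical floor of 0.3.
    """
    best_cost, best_seq = float("inf"), None
    for seq in product(splits, repeat=len(demands)):
        soc, fuel = soc0, 0.0
        feasible = True
        for split, demand in zip(seq, demands):
            soc -= 0.05 * split * demand        # battery share drains the SoC
            fuel += (1.0 - split) * demand      # remainder is covered by fuel
            if soc < 0.3:                       # SoC constraint violated
                feasible = False
                break
        if feasible and fuel < best_cost:
            best_cost, best_seq = fuel, seq
    return best_seq, best_cost
```

The exponential cost of such a search over longer horizons and richer inputs is exactly why the posting looks for scalable alternatives such as reinforcement learning.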
ML-based HEV control strategies offer great potential to take over complex powertrain coordination tasks. In the course of a master’s thesis, such an approach shall be evaluated by applying reinforcement learning (RL) to existing vehicle models. Conventional as well as predictive control strategies are already implemented and available as benchmarks.
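To make the RL framing concrete, here is a minimal, self-contained sketch of an HEV energy-management environment with a rule-based baseline policy of the kind the posting describes. The state, action, dynamics, and reward shaping are all simplified assumptions for illustration, not the existing AVL vehicle models:

```python
import random

class ToyHevEnv:
    """Toy HEV energy-management environment (illustrative assumptions only).

    State:  battery state of charge (SoC) and current normalized power demand.
    Action: split factor in [0, 1] -- fraction of demand covered by the battery.
    Reward: negative fuel use, plus a penalty for draining the SoC too far.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.soc = 0.6                               # initial state of charge
        self.step_count = 0
        self.demand = self.rng.uniform(0.0, 1.0)     # normalized power demand
        return (self.soc, self.demand)

    def step(self, battery_split):
        battery_split = min(max(battery_split, 0.0), 1.0)
        battery_power = battery_split * self.demand
        fuel_power = self.demand - battery_power
        self.soc -= 0.05 * battery_power             # discharging drains SoC
        soc_penalty = 10.0 * max(0.0, 0.3 - self.soc)  # keep SoC above 30 %
        reward = -(fuel_power + soc_penalty)
        self.step_count += 1
        self.demand = self.rng.uniform(0.0, 1.0)
        done = self.step_count >= 50                 # fixed-length episode
        return (self.soc, self.demand), reward, done

# Rule-based baseline: use the battery only while the SoC is comfortable.
def rule_based_policy(state):
    soc, _demand = state
    return 1.0 if soc > 0.4 else 0.0

env = ToyHevEnv()
state, total_reward, done = env.reset(), 0.0, False
while not done:
    state, reward, done = env.step(rule_based_policy(state))
    total_reward += reward
```

In the thesis, an RL agent trained against the actual plant model would replace `rule_based_policy`, and the implemented conventional and predictive strategies would serve as the benchmarks.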
• Getting up to speed with state-of-the-art literature in machine learning, in particular reinforcement learning.
• Connecting existing RL framework to the new plant model.
• Researching and implementing validation and evaluation criteria for the plant model.
• Researching and implementing functionalities for RL agent training, validation, evaluation or interpretation.
• Documenting your research findings and implementation decisions.
• Writing a master’s thesis describing the theoretical background and the results of your work at AVL.
• Knowledge of the topics of reinforcement learning, stochastic processes, or autoencoders (e.g., lectures, courses, etc.)
• Proven experience in applying data science methods
• Strong proficiency in Python
• Highly developed quality awareness and strong attention to detail
WHICH STUDY TRACKS DO WE PREFER:
- Applied Statistics
- Computer Science
- Controls Engineering
- Electrical Engineering
- Or similar
• Setup of the simulation and model-in-the-loop training environment (base framework already existing)
• Investigation of state-of-the-art reinforcement learning algorithms to solve the above-mentioned problem (e.g., advantages, drawbacks)
• Definition of suitable reward function(s) based on the given calibration targets
• Development of the RL-based control policy and benchmarking with the existing solution
- Information extraction: making use of vehicle horizon data (future time window) in the optimization of the control strategy
- Investigation and implementation of a suitable RL-agent validation approach
• Development of a proper visualization of the model results within the currently used frameworks (MLflow, TensorBoard), or evaluation of an alternative framework (ClearML)
• Investigation of fuel reduction potential
• Investigation and implementation of proper performance metrics to assess the quality of the developed control policies
• Investigation of suitable containerization techniques to support the model deployment on the vehicle
• Assessment of the trade-off between the amount of data, model performance and computational time (for training)
• Optional: Investigation of gradient-free optimization techniques (e.g., genetic programming)
• Optional: Representation of the learned policy as a state machine
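The reward-definition item above could start from a simple weighted sum over calibration targets. The targets, weights, and signals below are hypothetical placeholders, since the actual calibration targets are given by the project:

```python
def reward(fuel_rate, soc, soc_target=0.6, w_fuel=1.0, w_soc=0.5):
    """Hypothetical weighted-sum reward: penalize fuel use and SoC deviation.

    fuel_rate  -- instantaneous fuel consumption (normalized, >= 0)
    soc        -- battery state of charge in [0, 1]
    soc_target -- desired SoC to track (assumed calibration target)
    w_fuel, w_soc -- trade-off weights between the two objectives
    """
    return -(w_fuel * fuel_rate + w_soc * abs(soc - soc_target))
```

A step that burns no fuel exactly at the target SoC receives the best possible reward of 0; tuning the weights against the given calibration targets would be part of the thesis work.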
If this sounds like you, please use our online application tool to send your application to AVL through the link!
AVL is not just about cars. It’s about changing the future. Together.