TRAI is a two-year master’s program, taught entirely in English and designed for highly qualified and internationally oriented students. The program offers high-level scientific courses taught by professors from École Polytechnique and partner companies. Its originality lies in emphasizing not only technical advances and use cases, but also the practical and social limitations of modern AI, the implications of those limitations, and how to mitigate them. A main objective is to enable students to create AI applications that are not only effective, ground-breaking, and forward-thinking, but also trustworthy, transparent, and responsible. As well as advancing their knowledge of the theoretical fundamentals, students gain practical experience in securing AI workflows and harnessing AI to create solutions within a framework that emphasizes sustainability and responsibility. Specific topics include the following:
- Sustainable AI: Focus on frugal, energy-efficient models, low-cost training, and optimized architectures. This includes parameter optimization, meta-learning, and AutoML, along with hardware and energy consumption considerations. Key areas also include feature selection, data representation, cloud computing, and infrastructure optimization.
- Trustworthy and Secure AI: Emphasizes privacy, safety, anomaly detection, robustness against adversarial attacks, and trust through model verification. It also addresses compliance with regulations and governance frameworks, as well as the evaluation of AI's broader impacts on business and society.
- AI Transparency and Explainability: Focuses on managing uncertainty, leveraging knowledge graphs, decision trees, optimization techniques, and mathematical programming to enhance interpretability.
The first year focuses on multimodal artificial intelligence, from the foundations of machine learning and optimization to advanced techniques, including reinforcement learning.
It also includes courses on fairness and responsible AI, preparing students for the specialized focus of the second year.
The year ends with a four- to five-month internship.
The second year includes more in-depth courses in machine learning (such as deep reinforcement learning and large language models), network optimization and graphs, verification and evaluation, transparency, and privacy. It also covers a range of real-world industrial use cases, including complex energy systems and transport networks, banking and finance, medical analysis, and predictive maintenance, where it is crucial to consider transparency, security, ethical issues, and sustainability.
The master's degree ends with a second internship, during which students spend five to six months working on an advanced project in a company's R&D center or research laboratory.
Trustworthy and Explainable AI equips graduates with specialized skills that are increasingly valuable in industries where AI’s reliability and transparency are crucial. Graduates can take on a growing number of roles in organizations that prioritize ethical and interpretable AI, contributing to responsible innovation in healthcare, security, ecological transition, technology, energy, transport, finance, and beyond. In these sectors, employers seek professionals who can develop and audit AI systems to ensure they are fair, accountable, sustainable, and compliant with regulations. A sizable number of students go on to pursue a doctorate after the master's program, often in the form of CIFRE theses in companies.