The Future of Artificial Intelligence: Challenges, Risks, and New Horizons

The session “The Future of Artificial Intelligence” was held as part of the XXV Yasin (April) Academic Conference on Economic and Social Development. Leading experts from different countries discussed the possibilities and limitations of AI, from everyday work tasks to global challenges.

Academic Supervisor of HSE University Yaroslav Kuzminov and Senior Lecturer Ekaterina Kruchinskaya opened the session by presenting the results of a study on the use of Large Language Models (LLMs) for solving professional tasks. It turns out that even the most advanced AI models, such as GPT or GigaChat, answer only 40-55% of questions correctly. More than half of companies are aware of these risks and also note cybersecurity concerns, yet only 30% of organizations are working to mitigate them.

The researchers developed their own benchmark, a system for assessing the level of AI professionalism based on four criteria: applied knowledge, cognitive flexibility, expert validation, and logical reasoning ability.

Yaroslav Kuzminov

"According to these criteria, we tested three hypotheses that explain AI errors. It turned out that the complexity and type of tasks do not greatly affect the accuracy of the models' answers. The neural network cannot yet consistently answer professional questions. However, we have suggestions on how to improve the situation. First, it is necessary to create a knowledge core: an area where the model does not make mistakes. Second, to develop adequate performance metrics for work tasks, from routine to complex. Finally, it is important to test AI in different contexts. Professionals are often forced to make decisions under uncertainty, and AI needs such a stress test as well."

Caetano Penna, Director of Brazil’s Center for Strategic Studies and Management of Science, Technology and Innovation, looks at AI from a global political and economic perspective. He sees states increasingly seeking technological sovereignty, regulating AI, and participating in the formation of digital standards. With the advent of a new technological revolution, governments will have not only to invest in AI but also to protect society from the possible negative consequences of digitalization: inequality, loss of privacy, and other social and ethical problems. "States must balance innovation against protecting society in order to avoid crises and ensure sustainable development," the speaker emphasized.

Olesia Maibakh, Head of the Foresight Research Methodology and Organization Unit at the International Research and Educational Foresight Centre, ISSEK HSE, presented the results of joint work with Sber and the Coordination Center of the Government of the Russian Federation. The experts conducted a comprehensive analysis of the structure of AI-related research areas and formed an agenda for promising basic research. Security, ethics, data handling, new architectures and interfaces for interacting with AI, and model interpretability will be the focus going forward.

Olesia Maibakh

"In addition, among potentially new research tasks, one can highlight the development of so-called 'AI scientists': intelligent systems capable of autonomously conducting research and effectively solving complex interdisciplinary problems."

Continuing the discussion on the future of AI, especially in the context of ethical and security issues, Ozcan Saritas, professor at HSE ISSEK and director of futures studies at the Rochester Institute of Technology in the UAE, spoke about the application of AI in law enforcement, using the Dubai Police as an example.

Ozcan Saritas

"AI is already helping prevent crime, control drones, and improve cybersecurity. However, as AI advances, privacy and the responsible use of technology are becoming increasingly important."

The experts concluded that the future of AI depends not only on technological advances, but also on society's ability to ensure its ethical, safe, and responsible implementation.