In recent years, the theoretical literature has struggled to answer a number of questions raised by the rapid development of information and communication technologies and by artificial intelligence systems emerging from the processes of digitalization. The current dynamism of this field and the small number of existing studies make it necessary to analyze many of its key aspects. In particular, because the development of human rights in the e-society is changing, some relationships remain unregulated. Although the application of artificial intelligence systems has positive aspects on the one hand, on the other hand it creates various practical problems. The placement of all personal information in information systems, and the integration of those systems, raises the threat that private matters could be disclosed to everyone. Artificial intelligence systems designed to serve people often ‘interfere’ with their privacy. Elon Reeve Musk, a well-known technology entrepreneur, has stated: ‘Artificial intelligence is more dangerous than nuclear weapons.’ The main purpose of this paper is to help solve the problems outlined above. We make several suggestions: defining artificial intelligence systems as a legal concept; revising the norms governing digital rights; raising cyberculture to ensure cybersecurity; and others. Thus, however fast digitalization, automation, science, and technology develop, this does not imply the unlimited use of artificial intelligence systems. In any case, human rights must serve as the guide, a ‘moral approach’ must be taken as the basis, and the inviolability of privacy must be ensured.