Abstract
Artificial intelligence (AI) has become a central element in contemporary strategies for crime prevention and public security, enabling the identification of patterns, predictive analysis, and data-driven decision-making. However, its implementation raises significant ethical, social, and institutional challenges, particularly related to algorithmic bias, opacity, and governance. In this context, this study aims to analyze the role of artificial intelligence in crime prevention from the perspective of complex thinking and algorithmic governance. The research adopts a mixed-methods sequential explanatory design. First, a semi-systematic documentary review was conducted, focusing on scientific literature published between 2018 and 2024 in databases such as Scopus, Web of Science, and Google Scholar, as well as reports from international organizations including the United Nations, UNESCO, and the OECD. Second, semi-structured interviews were carried out with eight experts from the Dominican National Police, selected through purposive sampling based on their professional experience and involvement in technological and operational processes. The findings reveal that AI systems are not neutral tools but socio-technical constructs shaped by design choices, institutional practices, and regulatory frameworks. Key challenges identified include structural bias, limited transparency, and insufficient oversight mechanisms. From a complex thinking perspective, the study highlights the need for an integrated, interdisciplinary, and ethically grounded approach to AI implementation in criminal analysis. The study concludes that effective crime prevention through AI requires not only technological advancement but also robust governance models that ensure accountability, fairness, and meaningful human oversight.
Published in: American Journal of Artificial Intelligence (Volume 10, Issue 1)
DOI: 10.11648/j.ajai.20261001.25
Page(s): 172-178
Creative Commons: This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.
Copyright: Copyright © The Author(s), 2026. Published by Science Publishing Group
Keywords
Artificial Intelligence, Crime Prevention, Algorithmic Governance, Complex Thinking, Predictive Policing, Criminal Analysis, Ethical AI
1. Introduction
Over the last decade, artificial intelligence (AI) has assumed a central role in processes of state digital transformation, particularly in the fields of public security and crime prevention. Predictive analytics systems, machine learning techniques, and algorithmic surveillance are increasingly used to identify crime patterns, optimize the allocation of police resources, and support strategic decision-making [5, 14]. These tools have been presented as mechanisms capable of increasing operational efficiency and reducing uncertainty in complex urban contexts.
However, the deployment of algorithmic systems in public security has generated growing debate regarding their legitimacy, transparency, and structural effects. Several studies have warned that algorithms are not neutral instruments but instead reflect and amplify the social and institutional conditions under which they are designed and implemented [2, 9, 13]. The use of historical data derived from previous policing practices may reproduce structural biases, intensifying surveillance in historically marginalized communities [11, 15]. Furthermore, the opacity of predictive models complicates accountability processes and weakens public trust in security institutions [16, 18].
The specialized literature has addressed these issues mainly from legal, technical, or ethical perspectives. Nevertheless, an analytical gap persists regarding the epistemological integration between complex thinking and algorithmic governance applied to the field of public security. Although critical studies have documented risks associated with bias and opacity, there is comparatively limited theoretical development concerning the recursive dynamics linking algorithmic design, institutional practices, and regulatory frameworks as components of an integrated sociotechnical system.
From the perspective of complex thinking [12], technological phenomena cannot be understood through linear cause–effect relationships but rather as dynamic configurations in which order and disorder, regulation and uncertainty, interact recursively. Within this framework, artificial intelligence applied to crime prevention not only analyzes social reality but actively contributes to shaping it by influencing institutional decisions that generate new data and continuously feed predictive systems.
This logic of institutional self-reproduction poses specific challenges for algorithmic governance, understood as the set of normative, organizational, and procedural mechanisms aimed at regulating the design, use, and supervision of algorithms within the public sector.
The present study analyzes the implementation of artificial intelligence systems in crime prevention as a non-neutral sociotechnical process whose legitimacy and effectiveness depend on the interaction between algorithmic design, institutional practices, and regulatory frameworks. Through a sequential mixed-methods design integrating documentary review and expert interviews, the article examines structural tensions associated with bias, opacity, and meaningful human oversight, proposing an interpretation grounded in the perspective of complex thinking.
By integrating empirical evidence and theoretical debate, this study contributes to the literature on algorithmic governance by demonstrating that the identified risks do not constitute isolated technical failures but rather emergent effects of recursive institutional configurations. This perspective shifts the debate away from a focus on technological sophistication toward the quality of institutional and regulatory integration of artificial intelligence systems in public security.
2. Theoretical Framework
2.1. Artificial Intelligence, Complexity, and Algorithmic Governance in Public Security
The contemporary debate on artificial intelligence applied to crime prevention lies at the intersection of technology, institutional power, and fundamental rights. The literature has extensively documented the use of predictive systems to identify spatial and temporal crime patterns, as well as their potential to optimize the allocation of police resources [5, 14]. However, these initial approaches, primarily focused on operational efficiency, have increasingly been challenged by research highlighting the structural implications of using historical data in contexts marked by social inequality.
From a critical perspective, several authors have emphasized that algorithms are not neutral tools but sociotechnical devices reflecting the institutional and cultural conditions in which they are designed and implemented [2, 9, 13]. When predictive systems rely on data generated from previous policing practices, they tend to reproduce structural biases, intensifying surveillance in territories historically subjected to over-policing [11, 15]. In this sense, algorithmic bias cannot be reduced to a specific technical failure but must be understood as an emergent effect of cumulative institutional configurations.
Moreover, the opacity of many algorithmic models poses significant challenges for democratic accountability. The difficulty of accessing the internal logic of these systems—particularly when proprietary models or non-explainable machine learning techniques are employed—limits the possibility of external auditing and oversight [2, 18]. This situation generates tensions between operational efficiency and institutional transparency, challenging fundamental principles such as equality before the law and non-discrimination [4, 16, 17, 19].
In response to these concerns, international organizations have promoted normative frameworks aimed at guaranteeing meaningful human oversight and algorithmic governance mechanisms grounded in transparency, accountability, and respect for human rights [16, 17]. The concept of algorithmic governance has been defined as the set of norms, principles, and institutional mechanisms intended to regulate the design, implementation, and control of algorithms in the public sector. However, the literature reveals a persistent gap between these formal regulatory frameworks and their effective implementation in concrete operational contexts.
Although these contributions have enriched ethical and legal debates, an analytical limitation persists when the phenomenon is approached through linear frameworks that treat technical design, human decision-making, and regulatory oversight as independent dimensions.
At this point, complex thinking provides a relevant epistemological key [12]. From this perspective, technological systems should be understood as dynamic configurations in which multiple levels—technical, institutional, normative, and cultural—interact recursively.
Applied to public security, this approach allows artificial intelligence to be interpreted as a self-referential sociotechnical system: algorithms analyze data generated by institutional practices; these practices are subsequently adjusted according to predictions produced by the system; and such decisions generate new data that feed the algorithm again. This recursive dynamic reveals that AI does not merely process information about social reality but actively participates in shaping it.
From the perspective of complex thinking, this interaction cannot be understood as a linear causal chain but rather as a process in which order and disorder coexist. The pursuit of algorithmic efficiency introduces mechanisms of formalization and calculation (order), while social reality—shaped by structural inequalities and institutional contingencies—introduces uncertainty and unforeseen effects (disorder). The tension between these poles produces emergent configurations that challenge traditional regulatory models.
Consequently, the legitimacy of artificial intelligence in crime prevention does not depend exclusively on the technical accuracy of predictive models but on the quality of their institutional and normative integration. This integration requires mechanisms of meaningful human oversight, algorithmic audits, and effective accountability systems capable of mitigating the structural reproduction of bias [2, 16]. As argued in the literature, the ethics of artificial intelligence cannot be reduced to principled declarations but must be supported by verifiable operational conditions [7, 10].
Thus, the analysis of artificial intelligence in public security demands a conceptual framework capable of articulating algorithmic design, institutional practices, and normative governance as interdependent dimensions of a single complex system.
2.2. Ethical Implications and Algorithmic Bias in AI Systems
The integration of artificial intelligence into crime prevention systems has introduced a new layer of ethical complexity, particularly in relation to algorithmic bias, opacity, and accountability. While AI technologies are often perceived as objective and data-driven, a growing body of research demonstrates that these systems are deeply embedded in socio-technical contexts that reproduce existing social inequalities and institutional practices [2, 4].
Algorithmic bias emerges primarily from three interrelated sources: biased training data, model design choices, and the socio-institutional environments in which these systems are deployed. Historical crime data, frequently used to train predictive policing algorithms, often reflect patterns of over-policing in marginalized communities. As a result, AI systems may reinforce and legitimize discriminatory practices under the guise of neutrality, leading to what has been described as “automating inequality” [4]. This phenomenon challenges the assumption that technological systems inherently enhance fairness or efficiency.
From the perspective of complex thinking, algorithmic bias cannot be understood as a purely technical flaw but rather as an emergent property of interconnected systems involving data, institutions, human actors, and regulatory frameworks. This aligns with Morin’s principle of recursivity, where outputs of a system feed back into its inputs, potentially amplifying systemic distortions over time. In this sense, biased predictions may influence policing strategies, which in turn generate new data that reinforce the original bias, creating a self-perpetuating cycle.
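The self-perpetuating cycle described above can be illustrated with a minimal simulation. This is a deliberately stylized sketch, not a model of any deployed system: two districts are assumed to have identical true crime rates, but one starts with more recorded incidents; patrols are allocated according to recorded counts, and recorded incidents grow with patrol presence. The exponent `alpha` is a hypothetical parameter representing hotspot prioritization.

```python
def simulate_feedback(initial_counts, alpha=2.0, steps=50, patrol_budget=100.0):
    """Toy model of the recursive loop: patrols are allocated by recorded
    incident counts, and recorded incidents grow with patrol presence
    (more observers, more records). True underlying crime rates are
    identical in both districts; only the starting records differ.
    alpha > 1 models hotspot prioritization (extra weight on the
    district that already looks 'hotter' in the data)."""
    counts = list(initial_counts)
    for _ in range(steps):
        weights = [c ** alpha for c in counts]
        total = sum(weights)
        patrols = [patrol_budget * w / total for w in weights]
        # Recorded incidents scale with detection effort, not true crime.
        counts = [c + p for c, p in zip(counts, patrols)]
    return counts

proportional = simulate_feedback([60, 40], alpha=1.0)  # allocation proportional to records
prioritized = simulate_feedback([60, 40], alpha=2.0)   # hotspot prioritization
print(proportional[0] / proportional[1])  # initial disparity is preserved, never corrected
print(prioritized[0] / prioritized[1])    # initial disparity is amplified over time
```

Under proportional allocation the initial recorded disparity (1.5 to 1) is frozen in place despite identical true rates; under even mild prioritization it grows without bound, which is the systemic distortion the recursivity principle describes.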
Another critical issue is the opacity of AI systems, particularly those based on machine learning models that operate as “black boxes.” The lack of transparency limits the ability of stakeholders to understand, challenge, or audit algorithmic decisions, raising concerns about due process and procedural justice. As highlighted by recent ethical frameworks, including those proposed by UNESCO (2023), transparency and explainability are essential conditions for the legitimate use of AI in public security.
Furthermore, the delegation of decision-making processes to algorithmic systems raises fundamental questions about responsibility and accountability. In cases where AI-driven recommendations lead to harmful outcomes, it becomes difficult to determine whether responsibility lies with system developers, data scientists, institutional users, or policymakers. This diffusion of responsibility reflects the distributed nature of socio-technical systems and underscores the need for governance mechanisms that clearly define roles and obligations.
In this context, the concept of “meaningful human oversight” has gained prominence as a key ethical safeguard. However, its implementation remains challenging, particularly in high-pressure operational environments such as policing, where decisions must be made rapidly and often under conditions of uncertainty. From a complex systems perspective, effective oversight requires not only human intervention but also the integration of ethical reflexivity, institutional transparency, and adaptive learning processes.
Ultimately, addressing ethical challenges in AI-based crime prevention requires moving beyond reductionist approaches that treat bias as a technical anomaly. Instead, it demands a holistic and transdisciplinary framework that recognizes the dynamic interplay between technology, society, and governance. Only through such an approach can AI systems be aligned with principles of fairness, accountability, and social justice.
2.3. Institutional and Regulatory Approaches to AI Governance in Security Contexts
The governance of artificial intelligence in public security contexts has become a central concern for policymakers, international organizations, and academic researchers. As AI systems increasingly influence decision-making processes in policing and crime prevention, the need for robust institutional and regulatory frameworks has become more urgent. These frameworks must address not only technical standards but also ethical, legal, and societal implications.
At the international level, several initiatives have sought to establish guiding principles for the responsible use of AI. The UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) emphasizes values such as human rights, transparency, accountability, and inclusiveness, advocating for a human-centered approach to AI deployment. Similarly, the OECD AI Principles and the European Union’s regulatory efforts, including the proposed AI Act, highlight the importance of risk-based governance models that categorize AI applications according to their potential impact on individuals and society.
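The risk-based logic mentioned above can be sketched as a simple data structure. The sketch is loosely inspired by tiered approaches such as the EU's proposed AI Act, but the example applications, their tier assignments, and the listed obligations are illustrative assumptions, not a legal reading of any instrument.

```python
# Tiers loosely modeled on risk-based governance proposals (illustrative only).
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

# Hypothetical tier assignments for example public-security applications.
APPLICATION_TIERS = {
    "real_time_remote_biometric_id_in_public": "unacceptable",
    "crime_hotspot_forecasting": "high",
    "chatbot_for_citizen_reports": "limited",
    "spam_filter_for_agency_email": "minimal",
}

# Obligations a tier might imply under a risk-based model (illustrative).
TIER_OBLIGATIONS = {
    "unacceptable": ["prohibited, subject to narrow legal exceptions"],
    "high": ["conformity assessment", "human oversight", "logging and audit trail"],
    "limited": ["transparency notice to users"],
    "minimal": ["no specific obligations"],
}

def obligations_for(application):
    """Return an application's risk tier and the obligations that tier implies."""
    tier = APPLICATION_TIERS[application]
    return tier, TIER_OBLIGATIONS[tier]

tier, duties = obligations_for("crime_hotspot_forecasting")
print(tier, duties)
```

The design choice being illustrated is that obligations attach to the tier, not to the individual application, so governance scales with assessed impact rather than with each new system.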
In the context of public security, these regulatory approaches face unique challenges. Policing environments are characterized by high levels of uncertainty, operational urgency, and discretionary decision-making. As a result, the implementation of standardized regulatory frameworks must be adapted to local institutional realities and socio-political conditions. This highlights the importance of contextual governance, which recognizes that AI systems do not operate in isolation but are embedded within specific institutional and cultural settings.
From a complex thinking perspective, governance cannot be reduced to a set of fixed rules or procedures. Instead, it should be understood as a dynamic and adaptive process that evolves in response to changing conditions and feedback loops. This aligns with the principle of organizational self-regulation, where institutions continuously adjust their practices based on new information and emerging challenges. In this sense, effective AI governance requires not only formal regulations but also informal practices, ethical cultures, and learning mechanisms within organizations.
One of the key dimensions of AI governance in security contexts is the balance between innovation and control. While AI technologies offer significant potential for improving efficiency and effectiveness in crime prevention, excessive reliance on algorithmic systems without adequate oversight may lead to unintended consequences, including violations of fundamental rights and erosion of public trust. Therefore, governance frameworks must ensure that technological innovation is accompanied by safeguards that protect individual freedoms and democratic values.
Another critical aspect is the role of institutional capacity. The successful implementation of AI governance depends on the availability of trained personnel, technical expertise, and organizational resources. In many cases, public security institutions face limitations in these areas, which may hinder their ability to effectively manage and regulate AI systems. This underscores the need for capacity-building initiatives, interdisciplinary collaboration, and knowledge transfer between academia, government, and industry.
Moreover, accountability mechanisms must be strengthened to ensure that AI systems are subject to continuous monitoring and evaluation. This includes the development of auditing processes, impact assessments, and transparency requirements that allow stakeholders to assess the performance and implications of AI applications. In this regard, the concept of algorithmic accountability extends beyond compliance with regulations to include broader considerations of social responsibility and ethical governance.
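One concrete audit statistic that such monitoring could draw on is the selection-rate ratio, associated with the widely cited "four-fifths" heuristic from employment-discrimination practice. The snippet below is an illustrative sketch with hypothetical data, not the auditing procedure of any specific institution.

```python
def selection_rate_ratio(flags_by_group):
    """Compute each group's selection rate (share flagged by the system)
    and the ratio of the lowest rate to the highest. Under the common
    'four-fifths' heuristic, a ratio below 0.8 signals possible
    disparate impact that warrants closer institutional review."""
    rates = {g: sum(flags) / len(flags) for g, flags in flags_by_group.items()}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical audit sample: 1 = flagged for intervention, 0 = not flagged.
audit = {
    "district_A": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 80% flagged
    "district_B": [0, 1, 0, 0, 1, 0, 0, 0, 1, 0],  # 30% flagged
}
rates, ratio = selection_rate_ratio(audit)
print(rates, round(ratio, 3))  # ratio 0.375, well below the 0.8 threshold
```

A metric like this does not establish discrimination by itself; in line with the broader notion of algorithmic accountability discussed above, it serves as a trigger for qualitative review, impact assessment, and explanation, not as a verdict.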
Finally, the integration of AI into public security systems requires a shift toward participatory governance models that involve multiple stakeholders, including civil society, academia, and affected communities. Such models enhance legitimacy, foster trust, and provide diverse perspectives that can inform more equitable and effective policy decisions.
3. Methodology
The study adopts a sequential explanatory mixed-methods design aimed at integrating documentary analysis and qualitative empirical evidence in the examination of artificial intelligence applied to crime prevention [3, 6]. This approach is particularly appropriate for analyzing complex sociotechnical systems in which technological, institutional, and normative dimensions converge.
From the perspective of complex thinking [12], the methodological choice is not merely an instrumental combination of techniques but rather a way of approaching the phenomenon across multiple levels of reality. Artificial intelligence in public security constitutes a network in which algorithmic infrastructures, organizational decisions, regulatory frameworks, and social contexts interact, generating non-linear emergent effects. The methodological strategy was structured into two complementary phases.
3.1. Documentary Phase
A semi-systematic documentary review of scientific literature and international regulatory frameworks published between 2018 and 2024 was conducted. Searches were performed in indexed databases, including Scopus, Web of Science, and Google Scholar, using combinations of keywords such as:
1) artificial intelligence
2) predictive policing
3) algorithmic bias
4) algorithmic governance
5) criminal justice AI
6) ethics of AI
Institutional repositories of international organizations (UN, UNODC, UNESCO, OECD, European Commission) were also consulted.
The analysis was conducted through thematic coding, which enabled the identification of key analytical categories, including:
1) algorithmic design
2) institutional practices
3) normative governance
4) structural bias
5) opacity
6) meaningful human oversight
These categories constituted the conceptual matrix guiding the empirical phase.
3.2. Empirical Phase
The qualitative phase consisted of semi-structured interviews conducted with eight experts from the Dominican National Police, with experience in crime analysis, public security policy, artificial intelligence implementation, or technology regulation.
Participants were selected through purposive sampling, considering:
1) professional trajectory
2) participation in technological or policy processes
3) academic or technical expertise
Interviews were conducted virtually, with an approximate duration of 60 minutes, ensuring informed consent and participant anonymity.
The number of participants was determined through thematic saturation, reached when subsequent interviews began to replicate analytical patterns without generating new conceptual dimensions.
The data were analyzed through iterative thematic coding, combining deductive categories derived from the theoretical framework with inductive categories emerging from empirical evidence.
4. Results
The integrated analysis of documentary and empirical data led to the identification of four interrelated structural dimensions that characterize the functioning of artificial intelligence in crime prevention:
1) Non-neutrality of algorithmic design and institutional dependency
2) Structural reproduction of bias as an accumulative effect
3) Gap between formal regulation and operational implementation
4) Centrality—and fragility—of meaningful human oversight
These dimensions do not operate independently but rather emerge as interconnected components of a dynamic sociotechnical system shaped by recursive interactions between technological, institutional, and normative elements.
The findings indicate that algorithmic systems in public security are deeply embedded in institutional practices, which condition both their design and their operational outcomes. Consequently, the performance and impact of these systems cannot be understood in isolation from the organizational and regulatory environments in which they are deployed.
Furthermore, the reproduction of bias appears as a cumulative and systemic phenomenon rather than as the result of isolated technical errors. The iterative use of historical data reinforces existing patterns of surveillance and intervention, particularly in contexts marked by structural inequality.
The analysis also reveals a persistent gap between formal regulatory frameworks and their effective implementation in operational contexts. Although normative principles related to transparency, accountability, and human oversight are widely recognized, their translation into concrete institutional practices remains limited.
Finally, meaningful human oversight emerges as a central yet fragile component of algorithmic governance. While it is formally established as a safeguard mechanism, its effectiveness is conditioned by institutional dynamics that may reduce human intervention to procedural validation rather than critical evaluation.
5. Discussion
The findings confirm that artificial intelligence applied to crime prevention cannot be understood as a neutral instrumental tool but rather as a sociotechnical configuration shaped by organizational, regulatory, and cultural dynamics.
From the perspective of complex thinking [12], and contemporary reflections on uncertainty and governance in complex societies [8], these dynamics can be interpreted as processes of eco-self-organization, in which institutional systems continuously reproduce and transform themselves through their interaction with informational environments.
Predictive systems aim to reduce uncertainty through statistical formalization. However, the social reality they seek to model is characterized by structural inequalities, informal dynamics, and non-linear contingencies. Consequently, algorithmic order interacts continuously with social disorder, generating unstable configurations that may reinforce pre-existing patterns.
Within this framework, the reproduction of bias does not emerge as a technical anomaly but as the result of structural tensions between formalized computational logic and the complexity of social environments. Bias, therefore, should be interpreted as an emergent systemic effect rather than an isolated failure of algorithmic design.
The concept of meaningful human oversight, frequently proposed as a safeguard mechanism, must also be critically reconsidered. As institutional processes become progressively aligned with algorithmic logic, human intervention risks being reduced to routine validation rather than functioning as a space for reflexive and critical decision-making.
From a complex systems perspective, algorithmic governance cannot be conceived as a static regulatory framework but rather as an adaptive institutional learning process. Such a process must be capable of responding reflexively to emerging systemic effects and continuously adjusting regulatory and operational mechanisms.
6. Conclusion
This study examined artificial intelligence applied to crime prevention as a non-neutral sociotechnical process whose legitimacy and effectiveness depend on the structural interaction between algorithmic design, institutional practices, and regulatory frameworks.
The findings indicate that algorithmic bias, opacity, and institutional dependency should not be interpreted as isolated technical failures but rather as emergent manifestations of broader organizational configurations. From this perspective, the risks associated with artificial intelligence are embedded within recursive institutional dynamics that continuously reproduce and transform decision-making processes.
From the standpoint of complex thinking [12], artificial intelligence operates as an eco-self-organizing system integrated within institutional ecosystems. Algorithms not only process data but also influence decision-making structures and actively participate in shaping the informational environments that sustain them.
Consequently, the democratic legitimacy of artificial intelligence in public security depends not solely on predictive accuracy or adherence to ethical principles, but on the institutional capacity to manage complexity, uncertainty, and emergent systemic effects through adaptive governance mechanisms.
Future research should advance toward comparative empirical analyses of co-evolutionary processes between algorithmic systems and institutional structures, contributing to the development of dynamic governance frameworks capable of aligning technological innovation with democratic accountability and social justice.
Abbreviations
AI: Artificial Intelligence
ML: Machine Learning
UNESCO: United Nations Educational, Scientific and Cultural Organization
UNODC: United Nations Office on Drugs and Crime
OECD: Organisation for Economic Co-operation and Development
EU: European Union
Author Contributions
Jose Del Carmen Encarnacion Dicent: Conceptualization, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing.
Conflicts of Interest
The author declares that there are no financial, personal, or institutional conflicts of interest that could have influenced the work reported in this paper.
References
[1] Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
[2] Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
[3] Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting mixed methods research (3rd ed.). SAGE Publications.
[4] Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
[5] Ferguson, A. G. (2017). The rise of big data policing: Surveillance, race, and the future of law enforcement. NYU Press.
[6] Fetters, M. D., & Molina-Azorín, J. F. (2020). Utilizing a mixed methods approach for conducting interdisciplinary research. Journal of Mixed Methods Research, 14(3), 301–308. https://doi.org/10.1177/1558689819872602
[7] Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People—An ethical framework for a good AI society. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
[8] Innerarity, D. (2020). Pandemocracia: Una filosofía de la crisis del coronavirus [Pandemocracy: A philosophy of the coronavirus crisis]. Galaxia Gutenberg.
[9] Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. https://doi.org/10.1080/1369118X.2016.1154087
[10] Leslie, D. (2020). Understanding artificial intelligence ethics and safety. The Alan Turing Institute.
[11] Lum, K., & Isaac, W. (2016). To predict and serve? Significance, 13(5), 14–19. https://doi.org/10.1111/j.1740-9713.2016.00960.x
[12] Morin, E. (2008). On complexity. Hampton Press.
[13] Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
[14] Perry, W. L., et al. (2013). Predictive policing: The role of crime forecasting in law enforcement operations. RAND Corporation.
[15] Richardson, R., Schultz, J., & Crawford, K. (2019). Dirty data, bad predictions. New York University Law Review, 94, 192–233.
[16] UNESCO. (2023). Recommendation on the ethics of artificial intelligence.
[17] UNODC. (2021). Artificial intelligence and criminal justice. United Nations.
[18] Yeung, K. (2018). Algorithmic regulation: A critical interrogation. Regulation & Governance, 12(4), 505–523. https://doi.org/10.1111/rego.12158
[19] Završnik, A. (2021). Criminal justice, artificial intelligence systems, and human rights. ERA Forum, 20(4), 567–583. https://doi.org/10.1007/s12027-019-00596-0
Cite This Article
APA Style
Dicent, J. D. C. E. (2026). Artificial Intelligence and Crime Prevention: A Documentary Analysis from Complex Thinking and Algorithmic Governance. American Journal of Artificial Intelligence, 10(1), 172-178. https://doi.org/10.11648/j.ajai.20261001.25