Incidencia de los principios éticos en el uso de la inteligencia artificial a nivel global.
| dc.contributor.advisor | Villamizar Estrada, Avilio | |
| dc.contributor.author | Sanclemente Capacho, Andres Camilo | |
| dc.coverage.spatial | Cúcuta | spa |
| dc.creator.email | andresc-sanclementec@unilibre.edu.co | spa |
| dc.date.accessioned | 2025-01-23T22:11:54Z | |
| dc.date.available | 2025-01-23T22:11:54Z | |
| dc.date.created | 2025-01-22 | |
| dc.description.abstract | El artículo destaca la importancia de incorporar principios éticos en el desarrollo y uso de la inteligencia artificial (IA), considerando su creciente influencia en sectores clave como la salud, transporte y negocios. Aunque la IA aún no tiene conciencia humana, sus capacidades de automatización y procesamiento masivo de datos plantean desafíos éticos significativos, especialmente en temas de empleo, privacidad y justicia social. Estos desafíos requieren atención para evitar impactos negativos que afecten el bienestar de la sociedad y promuevan un uso equitativo de la tecnología. Para mitigar estos riesgos, la ética en IA se enfoca en principios como transparencia, seguridad de datos, autonomía, intencionalidad y responsabilidad, que buscan orientar la creación de sistemas imparciales y explicables. Sin embargo, problemas como la "caja negra" de algunos algoritmos y la presencia de sesgos en los datos de entrenamiento dificultan la confianza en estas tecnologías, ya que pueden perpetuar prejuicios sociales. Esto subraya la necesidad de mejorar los estándares éticos y asegurar que la IA actúe en beneficio de todos, eliminando sesgos y promoviendo decisiones justas. Gobiernos y grandes corporaciones también han comenzado a implementar marcos regulatorios, como los principios FEAT (justicia, ética, responsabilidad y transparencia), para supervisar el uso ético de la IA. Estudios de caso, como la identificación biométrica en India, resaltan tanto los beneficios como los riesgos de la IA, sugiriendo que para una IA ética se requiere regulación, educación y cooperación internacional, a fin de reducir la desigualdad y proteger los derechos humanos en un marco de gobernanza transparente. | spa |
| dc.description.abstractenglish | The article highlights the importance of incorporating ethical principles in the development and use of artificial intelligence (AI), considering its growing influence in key sectors such as health, transportation, and business. Although AI does not yet have human consciousness, its capabilities in automation and massive data processing pose significant ethical challenges, especially in areas like employment, privacy, and social justice. These challenges require attention to prevent negative impacts that could affect society's well-being and promote an equitable use of technology. To mitigate these risks, AI ethics focuses on principles such as transparency, data security, autonomy, intentionality, and responsibility, aiming to guide the creation of impartial and explainable systems. However, issues like the "black box" of certain algorithms and the presence of biases in training data make it difficult to trust these technologies, as they may perpetuate social prejudices. This underscores the need to improve ethical standards and ensure that AI acts for the benefit of all, eliminating biases and promoting fair decisions. Governments and large corporations have also begun implementing regulatory frameworks, such as the FEAT principles (fairness, ethics, accountability, and transparency), to oversee the ethical use of AI. Case studies, such as biometric identification in India, highlight both the benefits and risks of AI, suggesting that an ethical AI requires regulation, education, and international cooperation to reduce inequality and protect human rights within a transparent governance framework. | spa |
| dc.description.sponsorship | Universidad Libre - Ingenierías - Ingeniería en tecnologías de la investigación y las comunicaciones | spa |
| dc.format | spa | |
| dc.identifier.uri | https://hdl.handle.net/10901/30483 | |
| dc.relation.references | Anderson, M., & Anderson, S. L. (2007). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15–26. | spa |
| dc.relation.references | Barclay, I., Preece, A., Taylor, I., & Verma, D. (2021). Towards traceability in data ecosystems using a bill of materials model. CEUR Workshop Proceedings, 2975. | spa |
| dc.relation.references | Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT 2021 - Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922 | spa |
| dc.relation.references | Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018). “It’s reducing a human being to a percentage”; perceptions of justice in algorithmic decisions. Conference on Human Factors in Computing Systems - Proceedings, 2018-April. https://doi.org/10.1145/3173574.3173951 | spa |
| dc.relation.references | Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In The Cambridge handbook of artificial intelligence (pp. 316-334). Cambridge University Press. doi:10.1017/CBO9781139046855.020 | spa |
| dc.relation.references | Bossmann, J. (2016). Top 9 ethical issues in artificial intelligence. World Economic Forum. Retrieved from https://www.weforum.org/ethical-issues-in-AI | spa |
| dc.relation.references | Bradley, T. (2017). Facebook AI Creates Its Own Language In Creepy Preview of Our Potential Future. Forbes. Retrieved from https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/#45c65554292c | spa |
| dc.relation.references | Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., Anderson, H., Roff, H., Allen, G. C., Steinhardt, J., Flynn, C., hÉigeartaigh, S. Ó., Beard, S., Belfield, H., Farquhar, S., … Amodei, D. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. http://arxiv.org/abs/1802.07228 | spa |
| dc.relation.references | Bryson, J., & Winfield, A. (2017). Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems. Computer, 50(5), 116–119. https://doi.org/10.1109/MC.2017.154 | spa |
| dc.relation.references | Choudhuri, R., Liu, D., Steinmacher, I., Gerosa, M., & Sarma, A. (2024). How Far Are We? The Triumphs and Trials of Generative AI in Learning Software Engineering. Proceedings of the IEEE/ACM 46th International Conference on Software Engineering, 1–13. https://doi.org/10.1145/3597503.3639201 | spa |
| dc.relation.references | Cramer, H., Reddy, S., Bouyer, R. T., Garcia-Gathright, J., & Springer, A. (2019). Translation, tracks & Data: An algorithmic bias effort in practice. Conference on Human Factors in Computing Systems - Proceedings. https://doi.org/10.1145/3290607.3299057 | spa |
| dc.relation.references | Davidson, T., Bhattacharya, D., & Weber, I. (2019). Racial Bias in Hate Speech and Abusive Language Detection Datasets. Proceedings of the Third Workshop on Abusive Language Online, 25–35. https://doi.org/10.18653/v1/w19-3504 | spa |
| dc.relation.references | Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 56–62. https://doi.org/10.1145/2844110 | spa |
| dc.relation.references | Díaz, M., Johnson, I., Lazar, A., Piper, A. M., & Gergle, D. (2018). Addressing age-related bias in sentiment analysis. Conference on Human Factors in Computing Systems - Proceedings, 2018-April. https://doi.org/10.1145/3173574.3173986 | spa |
| dc.relation.references | Dawson, D., Schleiger, E., Horton, J., McLaughlin, J., Robinson, C., Quezada, G., . . . Hajkowicz, S. (2019). Artificial Intelligence: Australia’s Ethics Framework. Data61 CSIRO, Australia. | spa |
| dc.relation.references | European Commission. (2019). Ethics guidelines for trustworthy AI. Retrieved from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai | spa |
| dc.relation.references | Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... Vayena, E. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. doi:10.1007/s11023-018-9482-5 PMID:30930541 | spa |
| dc.relation.references | Future of Life Institute. (2017). Asilomar AI Principles. Retrieved from https://futureoflife.org/ai-principles/?cn-reloaded=1 | spa |
| dc.relation.references | Hagerty, A., & Rubinov, I. (2019). Global AI Ethics: A Review of the Social Impacts and Ethical Implications of Artificial Intelligence. http://arxiv.org/abs/1907.07892 | spa |
| dc.relation.references | Hanna, M. (2019). We don’t need more guidelines or frameworks on ethical AI use. It’s time for regulatory action. Brink the Edge of Risk. Retrieved from https://www.brinknews.com/we-dont-need-more-guidelines-or-frameworks-on-ethical-ai-use-its-time-for-regulatory-action/ | spa |
| dc.relation.references | Hamidi, F., Scheuerman, M. K., & Branham, S. M. (2018). Gender Recognition or Gender Reductionism? The social implications of Automatic Gender Recognition systems. Conference on Human Factors in Computing Systems - Proceedings, 2018-April. https://doi.org/10.1145/3173574.3173582 | spa |
| dc.relation.references | Holstein, K., Vaughan, J. W., Daumé, H., Dudík, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? Conference on Human Factors in Computing Systems - Proceedings, 16. https://doi.org/10.1145/3290605.3300830 | spa |
| dc.relation.references | IAPP. (2018). White Paper -- Building Ethics into Privacy Frameworks for Big Data and AI. Retrieved from https://iapp.org/resources/article/building-ethics-into-privacy-frameworks-for-big-data-and-ai/ | spa |
| dc.relation.references | IEEE. (2019). Ethically aligned Design. Retrieved from https://ethicsinaction.ieee.org/ | spa |
| dc.relation.references | Koolen, C., & van Cranenburgh, A. (2017). These are not the Stereotypes you are looking For: Bias and Fairness in Authorial Gender Attribution. Proceedings of the First ACL Workshop on Ethics in Natural Language Processing (pp. 12-22). Academic Press. doi:10.18653/v1/W17-1602 | spa |
| dc.relation.references | Larson, B.N. (2017). Gender as a variable in natural-language processing: Ethical considerations. | spa |
| dc.relation.references | Markkula Center for Applied Ethics. (2015). A framework for ethical decision making. Santa Clara University. Retrieved from https://www.scu.edu/ethics/ethics-resources/ethical-decision-making/a-framework-for-ethical-decision-making/ | spa |
| dc.relation.references | Rabie, A., & Hassanien, M. (2023). The impact of Artificial Intelligence on the Socioeconomic factors in the UAE. 2023 6th Artificial Intelligence and Cloud Computing Conference (AICCC), 114–125. https://doi.org/10.1145/3639592.3639609 | spa |
| dc.relation.references | Ryan, M. (2020). In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Science and Engineering Ethics, 26(5), 2749–2767. https://doi.org/10.1007/s11948-020-00228-y | spa |
| dc.relation.references | Shneiderman, B. (2020). Bridging the Gap Between Ethics and Practice. ACM Transactions on Interactive Intelligent Systems (TiiS), 10(4), 26. https://doi.org/10.1145/3419764 | spa |
| dc.relation.references | Siau, K., & Wang, W. (2020). Artificial Intelligence (AI) Ethics: Ethics of AI and Ethical AI. Journal of Database Management, 31(2), 74–87. https://doi.org/10.4018/JDM.2020040105 | spa |
| dc.relation.references | Siau, K., Xi, Y., & Zou, C. (2019). Industry 4.0: Challenges and Opportunities in Different Countries. Cutter Business Technology Journal, 32(6), 6–14. | spa |
| dc.relation.references | Schwartz, R., Dodge, J., Smith, N. A., & Etzioni, O. (2020). Green AI. Communications of the ACM, 63(12), 54–63. https://doi.org/10.1145/3381831 | spa |
| dc.relation.references | Sullins, J. P. (2011). When is a robot a moral agent? In Machine ethics (pp. 151–160). Cambridge University Press. | spa |
| dc.relation.references | The Public Voice. (2018). Universal Guidelines for Artificial Intelligence. Retrieved from https://thepublicvoice.org/ai-universal-guidelines | spa |
| dc.relation.references | Timmermans, J., Stahl, B. C., Ikonen, V., & Bozdag, E. (2010). The ethics of cloud computing: A conceptual review. Proceedings of the IEEE Second International Conference Cloud Computing Technology and Science (pp. 614-620). IEEE Press. doi:10.1109/CloudCom.2010.59 | spa |
| dc.relation.references | UNESCO. (2017). Report of World Commission on the Ethics of Scientific Knowledge and Technology on Robotics Ethics. Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000253952 | spa |
| dc.relation.references | Wang, W., & Siau, K. (2018). Ethical and moral issues with AI – a case study on healthcare robots. AMCIS 2019 Proceedings. Academic Press. | spa |
| dc.relation.references | Wang, W., & Siau, K. (2019a). Artificial Intelligence, Machine Learning, Automation, Robotics, Future of Work and Future of Humanity: A Review and Research Agenda. Journal of Database Management, 30(1), 61–79. | spa |
| dc.relation.references | Wang, W., & Siau, K. (2019b). Industry 4.0: Ethical and Moral Predicaments. Cutter Business Technology Journal, 32(6), 36–45. | spa |
| dc.rights.accessrights | info:eu-repo/semantics/openAccess | spa |
| dc.rights.coar | http://purl.org/coar/access_right/c_abf2 | spa |
| dc.rights.license | Atribución-NoComercial-SinDerivadas 2.5 Colombia | spa |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/2.5/co/ | spa |
| dc.subject | Inteligencia Artificial (IA) | spa |
| dc.subject | Principios éticos | spa |
| dc.subject | Derechos humanos | spa |
| dc.subject | Estándares éticos | spa |
| dc.subject | Responsabilidad | spa |
| dc.subject | Transparencia | spa |
| dc.subject | Regulación | spa |
| dc.subject | Educación | spa |
| dc.subject.lemb | Inteligencia artificial | spa |
| dc.subject.subjectenglish | Artificial Intelligence (AI) | spa |
| dc.subject.subjectenglish | Ethical principles | spa |
| dc.subject.subjectenglish | Human rights | spa |
| dc.subject.subjectenglish | Ethical standards | spa |
| dc.subject.subjectenglish | Responsibility | spa |
| dc.subject.subjectenglish | Transparency | spa |
| dc.subject.subjectenglish | Regulation | spa |
| dc.subject.subjectenglish | Education | spa |
| dc.title | Incidencia de los principios éticos en el uso de la inteligencia artificial a nivel global. | spa |
| dc.title.alternative | Impact of ethical principles in the use of artificial intelligence at a global level | spa |
| dc.type.coar | http://purl.org/coar/resource_type/c_7a1f | spa |
| dc.type.driver | info:eu-repo/semantics/bachelorThesis | spa |
| dc.type.hasversion | info:eu-repo/semantics/acceptedVersion | spa |
| dc.type.local | Tesis de Pregrado | spa |
Archivos

Bloque original (4 archivos)

| Nombre | Tamaño | Formato | Descripción |
| FORMATO_ARTICULO_FINAL_INVESTIGACION_Andres_Camilo_Sanclemente.pdf | 331.66 KB | Adobe Portable Document Format | Articulo Final |
| FORMATO INSTITUCIONAL RESUMEN.pdf | 99.71 KB | Adobe Portable Document Format | Resumen |
| RESOLUCIÓN.pdf | 771.13 KB | Adobe Portable Document Format | Resolución |
| Formato-autorizacion-para-la-publicacion-digital-de-obras-en-el-repositorio-institucional,,,,,,.pdf | 605.59 KB | Adobe Portable Document Format | Formato-autorizacion-para-la-publicacion-digital-de-obras-en-el-repositorio-institucional |

Bloque de licencias (1 archivo)

| Nombre | Tamaño | Formato | Descripción |
| license.txt | 1.71 KB | Item-specific license agreed upon to submission | |