
Human-AI Alignment: A Growing Concern
Artificial intelligence (AI) is developing at an unprecedented rate, raising critical questions about how to keep it aligned with human values. Audrey Lorvo, an MIT senior specializing in computer science, economics, and data science, is actively researching AI safety to ensure that increasingly intelligent systems remain beneficial to humanity. Her work focuses on AI reliability, ethical design, and governance strategies that can prevent unintended consequences as AI advances toward artificial general intelligence (AGI).
The Challenge of AGI and AI Safety
As AI systems become more advanced, experts worry about the emergence of AGI, systems that can match or even surpass human cognitive abilities. Ensuring that such systems adhere to ethical principles and do not act against human intentions is crucial. The field of AI alignment addresses these concerns by developing technical methods to improve robustness, transparency, and control. Lorvo’s research contributes to this effort by examining how AI can be designed to prioritize human safety while preserving its potential for innovation.
Governance and Policy for Responsible AI
AI governance determines how AI technologies are developed, deployed, and regulated. Lorvo’s work highlights the importance of integrating ethical considerations into AI research and policymaking. She collaborates with legislators, strategic advisors, and AI developers to formulate frameworks that balance innovation with safety. Through initiatives like the AI Safety Technical Fellowship, she engages with technical and policy-related challenges, advocating for responsible AI development.
Interdisciplinary Research and Impact
Lorvo’s approach to AI safety is deeply interdisciplinary. Her studies in economics, urban planning, and international development, combined with her technical background, provide a broad perspective on AI’s societal implications. At MIT’s Schwarzman College of Computing and as part of the Social and Ethical Responsibilities of Computing (SERC) scholars program, she investigates how AI can automate research processes and assesses the socioeconomic impact of that automation.
The Role of Education and Community Engagement
MIT’s rigorous academic environment has encouraged Lorvo to explore various disciplines beyond computing, including philosophy and international studies. Programs like MIT Concourse, which merge scientific and humanistic discussions, have helped shape her perspective on AI ethics. She also participates in student organizations and research initiatives, fostering a collaborative community that values responsible technological advancement.
Looking Toward the Future of AI Safety
As AI technology progresses, establishing sound regulation and ethical safeguards remains a priority. Lorvo plans to continue her work in AI safety and governance after graduation, contributing to frameworks that enable responsible AI advancement. She believes that interdisciplinary perspectives and collaboration among policymakers, researchers, and industry leaders are essential for shaping AI’s future to benefit humanity while mitigating potential risks.
By integrating ethics, governance, and technical research, the AI community can ensure that AI remains a powerful tool for good. The challenges ahead require a coordinated effort, but the work of researchers like Lorvo lays crucial groundwork for a responsible AI future.
Resource
Read more in “Aligning AI with Human Values.”