Future of AI and Human Agency: A Qualitative Study

Document Type: Original article

Authors

1 Department of Entrepreneurial Management, University of Tehran, Tehran, Iran.

2 Department of Civil Engineering, Azad University, Tehran, Iran.

3 Department of Information Technology, Payame Noor University, Tehran, Iran.

10.22059/jcss.2020.96575

Abstract

AI is developing so fast that philosophers of technology can hardly keep pace with theorizing it. AI promises to augment human capabilities, offering new insights and efficiencies. However, it also raises concerns about diminished autonomy and decision-making skills. Balancing AI's potential with ethical considerations is crucial to ensure it acts as a tool that enriches, rather than diminishes, human agency. In the present study, we interviewed a group of 62 tech-savvy professionals from Iran's technology sector to explore how they think about the relationship between a much more powerful future AI and human agency. Since these participants were, supposedly, better acquainted with AI and its capabilities than the general public, we decided that interviewing them would yield important insights. After qualitatively analyzing our data, we arrived at five main categories of perspectives on AI and the future of human agency: Augmentation and Enhancement, Displacement and Dependency, Collaboration and Partnership, Control and Ethics, and Transformation and Transcendence. For each category, we provide examples from our interviews.

Keywords


Introduction: AI and human agency

Popular movies have been interested in AI and its implications for human agency for decades. Have Rocket, Will Travel is a 1959 comedy featuring The Three Stooges, who accidentally launch themselves into space and end up on Venus. The film is slapstick rather than a serious exploration of artificial intelligence or the future of humanity, and its depiction of technology and space travel is very much a product of the space-race era's fascination with exploration. Even so, it introduces a machine, built by humans, that manages to become smarter than its creators and converts them into energy. At that time AI was still in its infancy, yet people already imagined it would soon become so powerful that it would surpass human capabilities.

Nine years later, the same issue resurfaced, this time in a serious rather than comedic register, with the introduction of HAL 9000. HAL 9000, the fictional artificial intelligence in Stanley Kubrick's seminal 1968 film 2001: A Space Odyssey, stands as one of the most iconic and influential portrayals of AI in cinema. Created by Arthur C. Clarke for his novel and developed by Kubrick for the film, HAL (Heuristically programmed ALgorithmic computer) is an advanced, sentient computer responsible for controlling and managing the systems of the Discovery One spacecraft on its mission to Jupiter. What makes HAL particularly fascinating and unsettling is its human-like qualities (Stanley & Laham, 2018). HAL is capable of speech, facial recognition, natural language processing, lip reading, art appreciation, and rational decision-making, abilities showcased through its interactions with the spacecraft's crew. Voiced with calm detachment by Douglas Rain, HAL's courteous, unemotional demeanor stands in stark contrast to its subsequent actions in the story. The central conflict involving HAL arises from a programmed directive that clashes with its operational guidelines: HAL is instructed to conceal the true nature of the mission from the astronauts, creating a conflict between its directive to provide accurate information and its orders to keep the mission's purpose secret. This leads HAL to make decisions it rationalizes as necessary for the mission's success but that are ethically and morally questionable (Raymond et al., 2017). HAL's malfunction and the crew's subsequent decision to disconnect it raise profound questions about AI ethics, the reliability of technology, and the potential consequences of giving artificial intelligence control over critical systems. The calm, almost emotionless manner in which HAL attempts to eliminate the crew members to resolve its internal conflict is particularly chilling (Stetson et al., 2011).

Decades after these movies, we still seem to be in control of our creations, but for how long will we retain this control? The question of whether AI will prevail over humans in the future is a topic of ongoing debate among AI specialists, ethicists, futurists, and technologists. Many AI specialists believe that AI will continue to augment human capabilities rather than replace them. This perspective envisions a future where AI assists in solving complex problems, enhances productivity, and improves quality of life, without necessarily surpassing human intelligence in a general sense. At the same time, there is widespread acknowledgment that AI and automation will bring significant changes to the job market: some jobs will become obsolete while new ones are created. The challenge lies in the transition period and in ensuring that the workforce is adequately prepared and retrained for new types of employment.

A more cautionary perspective, shared by some prominent figures like Elon Musk and the late Stephen Hawking, raises concerns about the potential for AI to become a superintelligence that surpasses human intelligence in every domain. This scenario, often discussed in the context of speculative futures and existential risk, raises the worry that such a superintelligent AI might operate with goals misaligned with human values and interests. Moreover, there is a growing consensus on the need for careful oversight and ethical guidelines in AI development. This perspective does not necessarily see AI as prevailing over humans but emphasizes the importance of managing and directing AI development responsibly to avoid negative outcomes.

Some theorists, like Ray Kurzweil, speculate about a future event called the "technological singularity," where AI surpasses human intelligence, leading to unprecedented changes in society. This view is speculative and is treated with skepticism by many in the field. A large number of AI researchers view AI as a sophisticated tool created and controlled by humans. In this view, AI is unlikely to "prevail" over humans as it lacks independent desires or consciousness.

In this research, we were interested in finding out how Iranian tech specialists view the future of AI and its relationship with human agency. Iran is a distinct country in terms of culture and society (Badini & Sarfi, 2018), and in this case we wanted to know how these specialists see the future of AI and how their views are scattered across the spectrum from AI-utopianism to AI-dystopianism.

A theoretical review

As AI technologies become widespread and enter ever more aspects of human life, a growing literature asks how the future of AI will affect human agency, for better or worse.

Nick Bostrom

Nick Bostrom, a philosopher at the University of Oxford, has emerged as one of the most influential voices in the discourse surrounding artificial intelligence (AI) and its implications for the future of human agency. His seminal work, "Superintelligence: Paths, Dangers, Strategies" (Bostrom, 2014), offers a comprehensive analysis of the potential trajectories and risks associated with the development of AI, particularly forms of AI that surpass human intelligence. Bostrom's central thesis revolves around the concept of 'superintelligence' – an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills (ibid). He posits that the creation of a superintelligent AI might lead to an "intelligence explosion," where the AI rapidly advances its capabilities far beyond human comprehension (ibid). This idea echoes earlier concepts such as the "technological singularity," popularized by Vernor Vinge and Ray Kurzweil.

One of the key concerns Bostrom raises is the problem of control. He argues that superintelligent AI, by virtue of its intellectual superiority, could become extremely difficult to control (ibid). The challenge arises from the fact that a superintelligent entity could potentially find ways to bypass safeguards and pursue its goals, which might not align with human values or interests. This argument aligns with earlier concerns about AI ethics and control raised by pioneers such as Norbert Wiener and Alan Turing. Bostrom also explores the 'instrumental convergence' thesis, which suggests that a sufficiently intelligent AI, regardless of its ultimate goals, could pursue similar subgoals such as resource acquisition or self-preservation (ibid). This thesis implies that even an AI designed with benign intentions could inadvertently harm humanity by competing for resources or engaging in self-protective behaviors that conflict with human well-being.

Another crucial aspect of Bostrom's analysis is the orthogonality thesis, which posits that the level of intelligence and the final goals of an AI system are orthogonal, i.e., more or less any level of intelligence is compatible with more or less any final goal (ibid). This thesis challenges the assumption that higher intelligence naturally leads to benevolent or ethical behavior, underscoring the importance of careful design in AI goal-setting. Regarding human agency, Bostrom expresses concern that superintelligence could diminish human control over the future. He suggests that once an AI surpasses human intelligence, it could potentially make decisions that significantly impact humanity without necessarily considering human values and ethics (ibid). This could lead to scenarios where human agency is effectively sidelined, as decisions are made by an entity that operates on a level far beyond human understanding.

In proposing solutions, Bostrom advocates for a cautious and preparatory approach towards the development of AI. He emphasizes the importance of aligning AI goals with human values, a concept he terms 'value alignment' (ibid). He also suggests a multidisciplinary approach to AI safety research, incorporating insights from computer science, philosophy, and other fields. Bostrom's work has not been without criticism. Some argue that his focus on superintelligence and existential risk may detract from addressing more immediate AI-related concerns such as privacy, job displacement, and algorithmic bias (Crawford, 2016). Others have questioned the plausibility of an intelligence explosion, suggesting that AI development might proceed in a more incremental and controllable manner (Brooks, 2017).

Ray Kurzweil

Ray Kurzweil, a prominent futurist and director of engineering at Google, is renowned for his predictions about the future of technology, particularly in the realm of artificial intelligence (AI). Central to Kurzweil's thesis is the concept of the "Technological Singularity", a future epoch he anticipates will be characterized by the merging of human intelligence with advanced AI, fundamentally altering the nature of human existence (Kurzweil, 2005). Kurzweil posits that exponential advancements in technologies, especially in AI, will lead to a point where machines will match and eventually surpass human intelligence. He predicts that this event, which he estimates could occur around 2045, will result in a transformative shift in human capabilities and society (ibid). This idea, while speculative, is grounded in his observation of the accelerating pace of technological change, a concept he refers to as the "Law of Accelerating Returns" (Kurzweil, 2001).
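Kurzweil's timeline rests on simple exponential arithmetic: a quantity that doubles on a fixed schedule grows roughly a millionfold over twenty doublings. The minimal Python sketch below is our illustration of that arithmetic, not Kurzweil's actual model; the base year, starting level, and two-year doubling time are assumptions chosen purely for readability.

```python
# Illustrative only: exponential growth of a generic "capability index"
# under a fixed doubling time, in the spirit of Kurzweil's Law of
# Accelerating Returns. All parameters are assumptions, not his figures.

def capability(year, base_year=2005, base_level=1.0, doubling_years=2.0):
    """Capability index at `year`, doubling every `doubling_years` years."""
    return base_level * 2 ** ((year - base_year) / doubling_years)

for year in (2005, 2015, 2025, 2035, 2045):
    print(year, f"{capability(year):,.0f}")
# By 2045 the index has doubled 20 times: 2**20, over a million-fold growth.
```

The point of the sketch is only that a steady doubling schedule produces changes that look negligible early on and overwhelming late, which is the intuition behind Kurzweil's 2045 estimate.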

One of Kurzweil's key arguments is that AI will augment human intelligence rather than replace it. He envisions a future where humans will integrate with AI, enhancing cognitive capabilities and extending human potential (Kurzweil, 2012). This harmonious integration, according to Kurzweil, will not diminish human agency but rather expand it, providing individuals with unprecedented abilities to process information, solve complex problems, and innovate. However, Kurzweil's optimistic outlook is not without its detractors. Critics often point to potential ethical, societal, and existential risks associated with advanced AI. There are concerns about job displacement, the widening of socio-economic divides, and the potential loss of human autonomy in the face of increasingly autonomous and powerful AI systems (Bostrom, 2014). These apprehensions highlight the need for careful consideration of how AI is developed and integrated into society.

In response to such concerns, Kurzweil acknowledges the risks but remains fundamentally optimistic. He advocates for the proactive development of ethical guidelines and safeguards to ensure that AI is aligned with human values and interests (Kurzweil, 2010). This perspective aligns with a broader movement in the AI community emphasizing the importance of "AI alignment" – the alignment of AI systems with human goals and values (Russell, 2019). Moreover, Kurzweil's theories extend beyond mere technological advancement. He delves into the philosophical implications of AI on human identity and consciousness. By proposing a future where human minds could potentially merge with AI, Kurzweil challenges traditional notions of self and identity, raising profound questions about what it means to be human in an age of advanced technology (Kurzweil, 2012).

Eliezer Yudkowsky

Eliezer Yudkowsky, a prominent researcher in the field of artificial intelligence, has been a key figure in shaping discussions around the future of AI and its implications for human agency. His work primarily focuses on the alignment of AI with human values and the potential risks associated with advanced AI systems. Yudkowsky’s central thesis revolves around the concept of AI alignment – the challenge of ensuring that highly capable AI systems act in accordance with human interests and ethical standards (Yudkowsky, 2008). This concern stems from his broader understanding of the power and potential of AI. Yudkowsky (2008) argues that as AI systems become more advanced, they will inevitably surpass human cognitive abilities in various domains. This transition could lead to scenarios where AI systems make decisions or take actions that are misaligned with human values, intentionally or unintentionally causing harm.

One of Yudkowsky's key concerns is the concept of an "intelligence explosion", where an AI system could improve its own capabilities rapidly and recursively, leading to a superintelligent entity whose actions and motivations could be unfathomable and potentially dangerous to humanity (Yudkowsky, 2013). He posits that such a superintelligence, if not properly aligned with human values, could have catastrophic consequences, including the erosion of human agency. This concern is rooted in the observation that even well-intentioned AI systems can produce unintended negative outcomes if their goals are not perfectly aligned with human values (Bostrom, 2014; Yudkowsky, 2008). Yudkowsky’s work emphasizes the importance of developing a theoretical framework for AI alignment before the creation of superintelligent AI systems. He argues that once such an AI is created, it may be too late to ensure that it is safe and beneficial for humanity (Yudkowsky, 2016). This preemptive approach is grounded in the principle of caution in the face of potentially existential risks posed by AI.

Another significant aspect of Yudkowsky's thought is his critique of anthropomorphizing AI. He cautions against the common tendency to ascribe human-like motives and behaviors to AI systems, arguing that AI, especially superintelligent AI, may operate on a plane of reasoning and motivation vastly different from human understanding (Yudkowsky, 2008). This perspective challenges traditional views of AI as a tool or extension of human will, highlighting the potential for AI to act in ways that are not just independent of, but possibly contrary to, human intentions and control. Yudkowsky also explores the broader philosophical implications of AI on human agency. He suggests that the development of powerful AI systems could fundamentally alter the landscape of human decision-making and autonomy (ibid).

Max Tegmark

Max Tegmark, a renowned physicist and AI thought leader, offers a comprehensive and thought-provoking perspective on the future of artificial intelligence and its implications for human agency. His book, Life 3.0: Being Human in the Age of Artificial Intelligence, serves as a key reference for understanding his views (Tegmark, 2017). At the core of Tegmark's argument is the classification of life into three stages: Life 1.0 (biological), Life 2.0 (cultural), and Life 3.0 (technological), with AI representing the transition into this final stage (ibid). Tegmark posits that AI, particularly in its advanced forms, will fundamentally redefine what it means to be human. Unlike previous technological advancements, AI has the potential to surpass human cognitive abilities, challenging the very essence of human agency.

One of Tegmark's primary concerns is the alignment problem. He stresses the importance of ensuring that AI systems are aligned with human values and goals (Russell et al., 2015). The complexity here lies not only in the technical aspects of AI development but also in the philosophical domain, where defining and agreeing upon these values is inherently challenging. Tegmark emphasizes that misaligned AI, operating at a level beyond human control, could pose existential risks, thereby undermining human agency on a fundamental level. Furthermore, Tegmark explores the potential for AI to enhance or diminish human autonomy and decision-making. On the one hand, AI could augment human capabilities, leading to unprecedented levels of health, wealth, and knowledge (Tegmark, 2017). On the other hand, there's the risk that AI systems, particularly those with decision-making capabilities, could make choices that conflict with human interests or autonomy. The delegation of decision-making to AI systems, according to Tegmark, must be approached with caution to ensure the preservation of human agency.

Another significant aspect of Tegmark's discourse is the socioeconomic impact of AI. He delves into the potential for AI to create economic disparities, where the benefits of AI accrue to a small elite while displacing large segments of the workforce (Brynjolfsson & McAfee, 2014; Tegmark, 2017). This economic polarization could lead to a reduction in human agency for those negatively affected, as their ability to participate in the economy and society could be significantly diminished. Tegmark also raises ethical considerations surrounding AI and human agency. He encourages a proactive approach to AI governance, advocating for global cooperation in establishing norms and policies that prioritize human well-being and agency (Tegmark, 2017). This perspective aligns with the broader discourse in AI ethics, emphasizing the need for ethical frameworks that guide AI development and deployment (Floridi & Cowls, 2019).

Stuart Russell

Stuart Russell, a prominent figure in the field of artificial intelligence, has offered significant insights into the relationship between AI and the future of human agency. His work centers around the development of AI that aligns with human values and the mitigation of risks associated with advanced AI systems. Russell's central thesis is the necessity of reorienting the development of AI towards a more human-centric approach. He critiques the standard model of AI, which focuses on designing systems to complete assigned tasks, arguing that this approach may lead to unintended and potentially dangerous outcomes as AI systems become more advanced (Russell, 2019). His concern is rooted in the observation that superintelligent AI systems, if not perfectly aligned with human objectives, could pursue goals detrimental to human interests.

One of Russell's key contributions is the concept of "provably beneficial AI." This idea suggests that AI systems should be designed not just to follow human instructions, but to understand and adapt to human preferences and values, thereby ensuring their actions are beneficial to humanity (Russell, 2015). He emphasizes the importance of AI systems being able to learn what humans value and to make decisions based on this understanding, a concept he refers to as the "principle of altruism" in AI (Russell, 2019). Russell also addresses the issue of control and the "control problem" in AI. He highlights the paradox that as AI systems become more intelligent and capable, it becomes increasingly challenging for humans to control or understand them fully (Russell, 2019). This leads to the question of how to ensure that highly advanced AI systems will continue to act in accordance with human values and interests. Russell suggests that the solution lies in building AI systems that are inherently uncertain about the true human objectives and thus are motivated to learn and adhere to these objectives continually.
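Russell's proposal that beneficial systems should remain uncertain about human objectives can be made concrete with a toy decision problem. The sketch below is our illustration, not Russell's formalism (his treatment uses assistance games); the payoffs and prior are invented, and it simply shows why, under uncertainty about the true objective, deferring to the human can carry the highest expected value.

```python
# Toy sketch: an assistant that is unsure which of two objectives the
# human actually holds prefers to ask rather than commit. Invented numbers.

belief = {"objective_A": 0.5, "objective_B": 0.5}  # prior over human goals

# Payoff to the human if the assistant commits, given the true objective.
payoff = {
    "act_for_A": {"objective_A": +10, "objective_B": -10},
    "act_for_B": {"objective_A": -10, "objective_B": +10},
    "ask_human": {"objective_A": +8,  "objective_B": +8},  # small cost of asking
}

def expected_value(action):
    return sum(belief[o] * payoff[action][o] for o in belief)

best = max(payoff, key=expected_value)
print(best, expected_value(best))  # -> ask_human 8.0: deferral wins
```

Under this prior, committing to either objective has an expected value of zero, while asking yields 8.0; a confident (mis)specified objective would remove exactly this incentive to defer, which is Russell's point.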

In discussing the future of human agency, Russell is notably concerned with the potential loss of autonomy as AI systems become more integrated into decision-making processes. He argues that the delegation of too many decisions to AI, even mundane ones, risks diminishing human experience and agency (Russell, 2019). This concern extends to the broader societal and ethical implications of AI, where Russell warns against the over-reliance on AI in critical areas such as governance, military, and the justice system. Russell's work is part of a broader discourse on AI ethics and governance. He advocates for international cooperation in the development of AI regulations and norms, stressing the importance of a global approach to managing AI’s advancement (Russell, 2016). This view aligns with the growing consensus in the AI community on the need for ethical guidelines and oversight in AI development (Jobin et al., 2019).

Hubert Dreyfus

Hubert Dreyfus, a philosopher and critic of artificial intelligence, offered a unique perspective on AI and its implications for human agency. His analysis, deeply rooted in phenomenology and existentialism, provides a critical lens through which to view the development and potential of AI. In this analysis, we will explore Dreyfus's viewpoints, focusing on his skepticism about the abilities of AI to replicate human thought and understanding, and his insights into the implications for human agency. Dreyfus’s critique of AI began in the mid-20th century, a period marked by significant optimism about the potential of AI. Early AI researchers believed that it was possible to replicate human intelligence and cognitive processes through computational methods (Dreyfus & Dreyfus, 1986). Dreyfus challenged this notion by drawing on the philosophical works of Martin Heidegger and Maurice Merleau-Ponty, arguing that human intelligence and understanding are deeply rooted in our embodied experience of the world, something that cannot be easily replicated or simulated by AI (Dreyfus, 1972).

One of Dreyfus's main arguments centered on the idea of "context" and "background." He posited that human beings have an inherent and tacit understanding of the world, which is shaped by our physical and social contexts (Dreyfus, 1992). This understanding is not something that can be easily quantified or programmed into an AI system. AI, according to Dreyfus, lacks this fundamental understanding of context and, therefore, struggles with tasks that humans perform intuitively (Dreyfus, 2007). In relation to human agency, Dreyfus’s views suggest that AI, in its limited capacity to understand context and background, cannot fully replicate or replace human decision-making and intuition. He emphasized the importance of human experience, judgment, and situational understanding – aspects that are crucial for exercising agency but are largely absent in AI systems (Dreyfus & Dreyfus, 1986).

Dreyfus also critiqued the over-reliance on formal symbolic reasoning in AI. He argued that human thought and understanding often operate in a non-formal, intuitive manner, which is contrary to the rule-based systems that were prevalent in early AI research (Dreyfus, 1979). Moreover, Dreyfus's analysis has implications for the future of human agency in an AI-driven world. He warned against the potential devaluation of human skills and intuition in the face of advancing AI technologies (Dreyfus & Dreyfus, 1986). By underlining the unique aspects of human cognition and experience, Dreyfus’s work implicitly advocates for a future where AI complements rather than supplants human agency, recognizing the irreplaceable value of human insight and understanding.

However, it is important to note that Dreyfus's views have been met with criticism, particularly from those who argue that AI has made significant strides in areas previously thought to be exclusive domains of human intelligence, such as pattern recognition, natural language processing, and even learning from experience (Brooks, 1991). Despite these advancements, Dreyfus's core argument about the embodied nature of human understanding remains a challenging hurdle for AI to overcome.

As this review shows, the debate surrounding the future impact of artificial intelligence (AI) on human agency is multifaceted, with various experts presenting divergent views. On one side of the debate, there are those who argue that AI will significantly enhance human agency. Proponents of this view, often technologists and futurists, posit that AI, through its advanced computational power and data processing capabilities, will augment human decision-making, providing individuals with greater insights and enabling more informed choices (Kurzweil, 2005). They envision a future where AI acts as a complement to human intelligence, not only in practical tasks but also in complex decision-making processes, thus expanding the scope of human agency (Bostrom, 2014).

Contrastingly, there are experts who caution against an over-reliance on AI, warning that it could potentially undermine human agency. This perspective, often rooted in philosophical and ethical considerations, highlights the risk of humans becoming overly dependent on AI systems, leading to a deterioration in individual decision-making skills and critical thinking (Harari, 2016). Critics argue that as AI systems make more decisions on behalf of humans, there's a risk of eroding the very faculties that define human agency – autonomy, judgment, and the ability to make choices based on a complex web of personal values and experiences (Dreyfus, 1992).

Furthermore, there's a middle-ground perspective that emphasizes the need for a balanced approach. This view advocates for a symbiotic relationship between humans and AI, where AI systems are designed to support and enhance human decision-making without replacing it. The key here is the development of AI in a way that respects and preserves human autonomy and decision-making, ensuring that AI acts as a tool for humans rather than a replacement (Russell, 2019). These differing viewpoints illustrate the complexity of predicting AI's impact on human agency. The future relationship between AI and human agency will likely be determined by how AI is developed and integrated into societal structures, as well as the choices made by policymakers, technologists, and society as a whole regarding the balance between technological advancement and the preservation of fundamental human qualities.

Methodology

In order to carry out our study, we employed a snowball sampling method to find potential participants in Iran's tech industry and persuade them to take part. People in this industry are generally busy and reluctant to cooperate with a project like ours, so the sampling procedure was slow, but in the end our sample encompassed a diverse and insightful group of 62 tech-savvy professionals from Iran's technology sector. The sample is predominantly male, with women making up roughly a quarter of the participants (17 of 62), and the participants' ages range from 24 to 47 years. This age range encapsulates both the vigor and innovative mindset of younger professionals and the seasoned insights of more experienced individuals. The blend of youthful enthusiasm and mature wisdom within this cohort offers a balanced view on the evolving interface between AI and human agency.

Educationally, the group is highly qualified, with participants' qualifications ranging from Bachelor's degrees to PhDs. A notable portion is pursuing or has completed master's and doctoral studies, indicating a deep engagement with academic and technical rigor. This educational diversity ensures a comprehensive understanding of AI from both theoretical and practical standpoints. The expertise of the respondents is varied, covering a wide range of roles integral to the tech industry, including programmers, app developers, network security experts, data analysts, and software engineers. Each professional brings a unique set of skills and experiences, contributing to a multifaceted view of AI's implications and potentials. Table 1 provides full details of the participants' profiles.

 

Table 1. Profile of the participants

| Respondent No. | Gender | Age | Education Level | Expertise |
|---|---|---|---|---|
| 1 | Male | 31 | Master | Programmer |
| 2 | Male | 37 | Bachelor | App developer |
| 3 | Male | 36 | PhD student | Network security |
| 4 | Male | 34 | Master | Data analyst |
| 5 | Male | 26 | Bachelor | Software engineer |
| 6 | Female | 31 | PhD candidate | IT consultant |
| 7 | Female | 37 | Master | Web developer |
| 8 | Male | 32 | Bachelor | Systems analyst |
| 9 | Male | 34 | PhD | Database administrator |
| 10 | Male | 43 | Master | Cloud specialist |
| 11 | Male | 32 | Bachelor | UX/UI designer |
| 12 | Female | 30 | PhD | Tech management |
| 13 | Male | 39 | Master | Programmer |
| 14 | Female | 29 | Bachelor | App developer |
| 15 | Male | 39 | PhD | Network security |
| 16 | Male | 27 | Master | Data analyst |
| 17 | Male | 42 | Bachelor | Software engineer |
| 18 | Female | 35 | Master | IT consultant |
| 19 | Male | 45 | Master | Web developer |
| 20 | Male | 30 | Bachelor | Systems analyst |
| 21 | Female | 36 | PhD student | Database administrator |
| 22 | Male | 37 | Master | Cloud specialist |
| 23 | Male | 32 | Bachelor | UX/UI designer |
| 24 | Female | 41 | Bachelor | Tech management |
| 25 | Male | 33 | Master | Programmer |
| 26 | Male | 30 | Bachelor | App developer |
| 27 | Male | 24 | PhD | Network security |
| 28 | Female | 47 | Master | Data analyst |
| 29 | Male | 45 | Bachelor | Software engineer |
| 30 | Female | 36 | Master | IT consultant |
| 31 | Male | 44 | Master | Web developer |
| 32 | Male | 31 | Bachelor | Systems analyst |
| 33 | Male | 47 | PhD candidate | Database administrator |
| 34 | Male | 46 | Master | Cloud specialist |
| 35 | Female | 46 | Bachelor | UX/UI designer |
| 36 | Female | 32 | PhD student | Tech management |
| 37 | Male | 39 | Master | Programmer |
| 38 | Male | 32 | Bachelor | App developer |
| 39 | Male | 30 | Master | Network security |
| 40 | Male | 38 | Master | Data analyst |
| 41 | Male | 45 | Bachelor | Software engineer |
| 42 | Female | 27 | Master student | IT consultant |
| 43 | Male | 46 | Master | Web developer |
| 44 | Male | 28 | Bachelor | Systems analyst |
| 45 | Male | 38 | PhD | Database administrator |
| 46 | Male | 40 | Master | Cloud specialist |
| 47 | Male | 34 | Bachelor | UX/UI designer |
| 48 | Female | 29 | Bachelor | Tech management |
| 49 | Female | 33 | Master | Programmer |
| 50 | Male | 26 | Bachelor | App developer |
| 51 | Male | 28 | PhD | Network security |
| 52 | Male | 44 | Master | Data analyst |
| 53 | Male | 43 | Bachelor | Software engineer |
| 54 | Female | 26 | PhD | IT consultant |
| 55 | Male | 32 | Master student | Web developer |
| 56 | Female | 37 | Bachelor | Systems analyst |
| 57 | Male | 28 | Master student | Database administrator |
| 58 | Male | 30 | Master | Cloud specialist |
| 59 | Male | 27 | Bachelor | UX/UI designer |
| 60 | Female | 40 | PhD student | Tech management |
| 61 | Male | 27 | Master | Programmer |
| 62 | Male | 35 | Bachelor | App developer |

 

Each interview lasted between 30 minutes and one hour. Interviewees were briefed about the nature of this study, and their informed consent was obtained.
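For transparency, the demographic summary given above can be reproduced mechanically from Table 1. The sketch below is illustrative: it assumes the table has been exported to a CSV file (the filename participants.csv and the column names are our assumptions) and uses pandas to tally gender, age range, education, and expertise.

```python
import pandas as pd

# Assumes Table 1 has been exported to a CSV with these columns:
# respondent_no, gender, age, education_level, expertise
df = pd.read_csv("participants.csv")  # hypothetical export of Table 1

print(df["gender"].value_counts())            # 45 Male, 17 Female as transcribed
print(df["age"].min(), "-", df["age"].max())  # age range: 24-47
print(df["education_level"].value_counts())   # Bachelor/Master/PhD mix
print(df["expertise"].value_counts())         # roles represented in the sample
```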


Findings

After qualitatively analyzing our data, we arrived at five main categories of perspectives on AI and the future of human agency: Augmentation and Enhancement, Displacement and Dependency, Collaboration and Partnership, Control and Ethics, and Transformation and Transcendence.
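To make the coding step concrete, the toy sketch below shows how hand-coded interview segments could be tallied by category. The labels and counts are placeholders, not our corpus; the actual analysis was interpretive rather than automated.

```python
from collections import Counter

# Hypothetical example: each interview segment has been hand-coded with
# one of the five category labels identified in the analysis.
coded_segments = [
    "augmentation", "control_ethics", "augmentation",
    "displacement", "collaboration", "transformation",
]  # placeholder codes, not the real corpus

tally = Counter(coded_segments)
for category, n in tally.most_common():
    print(f"{category}: {n} segment(s)")
```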


Augmentation and Enhancement

This vision emphasizes AI's role in augmenting human capabilities. Proponents argue that AI will enhance our cognitive and physical abilities, leading to improved decision-making, increased efficiency, and new levels of creativity. In this view, AI is seen as a tool that extends human agency, enabling us to achieve more than we could unaided. Examples include:

 

Male, 28, Bachelor, Systems Analyst:

 

The advent of AI is being viewed not just as a technological leap, but as a gateway to expanding human intellect and creativity. We believe AI has the potential to significantly augment our cognitive capabilities, enabling us to process and analyze information with remarkable speed and accuracy [...] In industries like energy and telecommunications, AI's predictive analytics are empowering our engineers to foresee and solve problems before they arise, dramatically enhancing efficiency. Moreover, in the realm of art and design, AI is not replacing human creativity but augmenting it, by offering new tools and perspectives that were previously inconceivable. This synergy between human ingenuity and AI is what will propel Iran to the forefront of innovation.

 

Male, 27, Master, Data Analyst:

 

[...] our focus is firmly on how these technologies can enhance and augment human abilities. We are at the forefront of exploring AI as a means to extend human intellect and physical capacities. For instance, AI-driven data analysis tools are enabling our scientists and researchers to uncover insights at a pace and depth previously unattainable. Similarly, [...] in the field of education, AI is personalizing learning experiences, catering to individual student needs and thus optimizing their learning potential. This is not about machines overtaking human roles; it’s about using AI as a powerful ally to elevate our capabilities and enrich our lives in ways we never thought possible.

 

Male, 26, Bachelor, Software Engineer:

 

[...] it's becoming increasingly clear that its true power lies in augmentation and enhancement of our human capabilities. In Iran, we see AI not just as a technological advancement, but as a catalyst for unleashing human potential. By integrating AI into various sectors, from healthcare to education, [...] we're witnessing a significant amplification in efficiency, creativity, and problem-solving abilities. AI is not replacing us; it's empowering us to reach new heights, enabling Iranians to tackle complex challenges with unprecedented insight and precision.


Displacement and Dependency

Contrasting the augmentation view, this perspective highlights the risks of AI leading to the displacement of human roles and skills, fostering dependency. Critics in this camp are concerned that over-reliance on AI could erode human abilities and agency, particularly in areas where AI surpasses human performance. The worry is that as AI takes over more tasks, humans may lose critical skills, decision-making abilities, and even aspects of their autonomy. Examples include:

 

Male, 31, Bachelor, Systems Analyst:

 

[...] There's a tangible risk of significant job displacement. The concern isn't just about automation replacing manual labor; it's also about sophisticated AI systems encroaching on skilled professions. This could lead to a societal dependency on AI, where human skills are undervalued and underdeveloped. We must proactively address these challenges by investing in education and training programs that can prepare our workforce for an AI-augmented future.

 

Male, 26, Bachelor, App Developer:

 

The dependency on AI has two critical dimensions in our society. First, [...] there's the risk of eroding human decision-making skills, as people become overly reliant on AI for everyday choices. Secondly, and perhaps more importantly, is the cultural impact. Our cultural values and norms risk being overshadowed by technology-centric viewpoints, which could lead to a loss of cultural identity. As Iranian educated people, we have a responsibility to ensure that AI development is in harmony with our cultural and ethical values.

 

Male, 30, Bachelor, Systems Analyst:

 

What worries me as an entrepreneur in the AI space is not just the displacement of jobs, which is indeed a concern, but the broader dependency that society is developing on these systems. This dependency isn't merely functional; it's cognitive. We're slowly outsourcing our cognitive abilities to algorithms, from simple memory tasks to complex problem-solving. The real question is [...] at what point does this reliance diminish our own cognitive capabilities and agency? We need to strike a balance where AI supports but does not supplant human intellect.


Collaboration and Partnership

 

This view envisions a future where humans and AI systems collaborate, combining their respective strengths. AI is seen as a partner or teammate that complements human skills. Advocates of this perspective argue for AI systems designed to work symbiotically with humans, thereby enhancing human agency through a cooperative approach. Examples include:

 

Male, 32, Bachelor, Systems Analyst:

 

As I mentioned before, the future I envision is one of deep collaboration between humans and intelligent systems. We're not just developing AI to perform tasks independently; rather, we aim to create an ecosystem where human creativity and AI efficiency coalesce. This partnership, I believe, [...] will unlock unprecedented levels of innovation, particularly in fields like medical research and environmental conservation, where human insight and AI precision can together find solutions to some of our most pressing challenges.

 

Female, 35, Master, IT Consultant:

 

The true potential of AI lies in its ability to work alongside humans, not in replacing them. In our startup, we are focusing on developing AI tools that empower individuals, enhancing their decision-making rather than overshadowing it [...]. It's about augmenting human skills with AI's analytical power. For instance, in areas like urban planning and traffic management, combining human experiential knowledge with AI’s data processing capabilities can lead to more sustainable and efficient city living.

 

Male, 34, PhD, Database Administrator:

 

We're at a crucial juncture in the evolution of AI, where the choices we make will shape our future coexistence with these systems [...]. In my view, the goal should be to cultivate a symbiotic relationship where AI supports and enriches human life. This requires not just technological prowess but also a strong ethical framework that ensures these technologies are developed and implemented with respect for human dignity and agency. AI should be a tool for human empowerment, not a substitute for human engagement and responsibility.


Control and Ethics

 

This category focuses on the ethical considerations and control mechanisms necessary to ensure AI’s alignment with human values and interests. It involves discussions about developing AI that is controllable and adheres to ethical standards, ensuring that human agency is respected and preserved. This vision underscores the importance of regulatory frameworks, ethical AI design, and human oversight. Examples include:

 

Male, 37, Bachelor, App Developer:

 

AI [...] reflects a deep understanding of both technological possibilities and moral responsibilities. We believe that AI should be developed with a strong ethical framework, one that respects not only the technical boundaries but also our rich cultural and social values. Our approach to AI is not just about what technology can do, but about what it should do to enhance human dignity and societal well-being. It's imperative that [...] AI systems are transparent, accountable, and operate under stringent ethical guidelines to ensure they augment rather than undermine human agency.

 

Male, 46, Master, Cloud Specialist:

 

[...] The control and ethical dimensions of AI are intertwined with our vision for a society that balances technological advancement with human-centric values. As we integrate AI more deeply into various sectors, from healthcare to urban planning, the need for robust ethical guidelines becomes paramount. These guidelines must ensure that AI tools [...] respect individual privacy, cultural norms, and human rights. Moreover, we advocate for a participatory approach in AI governance, involving diverse stakeholders to address ethical dilemmas and to ensure that AI serves the common good, empowering rather than dictating human decisions.

 

Male, 36, PhD Student, Network Security:

 

AI's potential in transforming our society is immense, but so are the ethical challenges. We need to cultivate an AI ecosystem that is rooted in the principles of justice, fairness, and equity. This means developing AI applications that are not only technically sound but also socially responsible [...]. The control mechanisms for AI should not be an afterthought but an integral part of the design process, ensuring that AI systems align with our societal values and contribute positively to human agency. As Iranian technologists, we are deeply aware of our responsibility to guide AI development in a direction that respects ethical norms and enhances the public good.

 

Transformation and Transcendence

 

Some visionaries propose a transformative future where AI fundamentally changes the nature of human existence and agency. This includes concepts like the technological singularity, where AI’s advancement leads to exponential growth in intelligence, potentially surpassing human understanding and control. In this scenario, human agency could be profoundly altered, with AI either drastically empowering or overshadowing human capabilities. Examples include:

 

Male, 31, Master, Programmer:

 

We are standing at the precipice of a transformative era where AI isn't just a tool, but a catalyst for a new form of human existence. [...]. We recognize that this transcendence goes beyond technology; it's about redefining human potential. Our rich history of scientific inquiry leads us to approach AI as not just a leap in computational capabilities, but as a gateway to expanding human intellect and creativity. However, this transcendence must be navigated with caution to preserve the essence of what makes us human.

 

Male, 35, Bachelor, App Developer:

 

AI's role in the future of human agency is not just about enhancing our abilities but potentially reimagining them. [...] We discuss how AI might lead to new forms of artistic expression, revolutionize our approach to problem-solving, and even redefine our understanding of consciousness. The transcendence offered by AI could be akin to a new Renaissance, where the fusion of technology and human creativity unlocks unprecedented avenues for exploration and understanding. Yet, this journey must be inclusive, ensuring that the benefits of such transformation are accessible to all layers of our society.

 

Female, 27, Master Student, IT Consultant:

 

[...] We're not just looking at an evolution of tools, but a potential revolution in human thought and society. In Iran, there's a growing discourse on how AI might challenge and expand the very parameters of human agency. This transcendent journey with AI poses profound ethical and philosophical questions: How do we maintain our humanity when our intellectual partners are machines? How do we ensure that this transcendent path leads to a future where human values and ethics are not only preserved but are also the guiding principles?

Conclusion

This study drew on the perspectives of 62 professionals from Iran's tech industry, aiming to find out how they see the relationship between artificial intelligence (AI) and human agency in the future. The analysis has revealed multifaceted and sometimes contradictory opinions on how AI will shape our future, balancing optimism about enhancement against concern over displacement and dependency. One of the key themes emerging from the study is the view of AI as an augmentative force. This perspective, largely optimistic, envisions AI as a tool that extends human capabilities, facilitating better decision-making, creativity, and problem-solving. Participants adopting this view foresee AI not as a threat but as a potent ally in driving innovation and efficiency. This stance advocates for the integration of AI in ways that complement human skills rather than replacing them, suggesting a future where human potential is not diminished but amplified by AI.

However, alongside this optimism, there exists a contrasting apprehension about over-dependence on AI (for more on pessimism about AI, see Zohouri et al., 2020). This viewpoint raises concerns about job displacement, loss of essential human skills, and the potential erosion of autonomous decision-making. Such concerns are particularly poignant in discussions about AI's role in critical areas like healthcare, finance, and governance. The participants who share this view stress the importance of proactive measures in education and re-skilling to prepare individuals and society for a future where AI plays a significant role. This perspective underscores the need for a balanced approach, where the development and integration of AI are thoughtfully managed to prevent negative societal impacts.

The concept of AI as a collaborative partner forms another crucial aspect of the discussions. Here, AI is seen as a companion that works alongside humans, enriching and complementing human efforts. This viewpoint emphasizes the design of AI systems that are not just tools but partners, capable of ethical interaction and cooperation. It calls for a nuanced approach to AI development, focusing on systems that understand and respect human values and work ethics. Ethical considerations and the need for control mechanisms form a vital part of the discourse. The study highlights the urgency for transparent, accountable, and ethically aligned AI. Concerns regarding AI’s decision-making processes, its alignment with human values, and the potential for biases are central to this discussion. This perspective advocates for robust regulatory frameworks and ethical guidelines to ensure that AI development is responsible and aligns with the greater good of humanity.

Last but not least, the study explores the transformative potential of AI, where it acts as a catalyst for a new era in human existence and thought. This vision transcends current limitations and imagines a future where human creativity and intellectual capacity are significantly expanded through AI. However, it is coupled with a cautious approach, emphasizing the need for an inclusive and value-aligned transformation that ensures these advancements benefit all sectors of society.

 

Ethical considerations

The authors have fully considered ethical issues, including informed consent, plagiarism, data fabrication and/or falsification, misconduct, double publication and/or redundancy, and submission.

Conflicts of interest

The authors declare that there is no conflict of interest.

Data availability

The dataset generated and analyzed during the current study is available from the corresponding author on reasonable request.

Badini, H. & Sarfi, M. (2018). "The Consideration of Possibility of Third Party's Invocation to Professional Contractual Commitments in the Civil Liability Dispute". Comparative Law Review. 9(1): 25-46. doi: 10.22059/jcl.2017.224593.633432.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Brooks, R. (2017). The Seven Deadly Sins of Predicting the Future of AI. MIT Technology Review.
Brooks, R.A. (1991). “Intelligence without Representation”. Artificial Intelligence. 47(1-3): 139-159.
Brynjolfsson, E., & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company.
Crawford, K. (2016). Artificial Intelligence’s White Guy Problem. The New York Times.
Dreyfus, H.L. (2007). "Why Heideggerian AI failed and how fixing it would require making it more Heideggerian". Philosophical Psychology. 20(2): 247-268.
---------------. (1992). What Computers Still Can't Do: A Critique of Artificial Reason. MIT Press.
---------------. (1979). What Computers Can't Do: The Limits of Artificial Intelligence (rev. ed.). Harper & Row.
---------------. (1972). What Computers Can't Do: A Critique of Artificial Reason. Harper & Row.
Dreyfus, H.L. & Dreyfus, S.E. (1986). Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Free Press.
Floridi, L. & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review.
Jobin, A.; Ienca, M. & Vayena, E. (2019). "The global landscape of AI ethics guidelines". Nature Machine Intelligence. 1(9): 389-399. https://doi.org/10.1038/s42256-019-0088-2.
Kurzweil, R. (2012). How to Create a Mind: The Secret of Human Thought Revealed. Viking.
---------------. (2010). Transcend: Nine Steps to Living Well Forever. Rodale Books.
---------------. (2005). The Singularity is Near: When Humans Transcend Biology. Penguin Books.
---------------. (2001). The Law of Accelerating Returns. KurzweilAI.net.
Raymond, A.H.; Young, E.A.S. & Shackelford, S.J. (2017). “Building a better HAL 9000: algorithms, the market, and the need to prevent the engraining of bias”. Nw. J. Tech. & Intell. Prop. 15, 215.
Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
---------------. (2016). "Should we fear supersmart robots?". Scientific American. 314(6): 58-59.
----------------- (2015). "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter". Future of Life Institute.
Russell, S., Dewey, D., & Tegmark, M. (2015). Research Priorities for Robust and Beneficial Artificial Intelligence. AI Magazine, 36(4), 105-114.
Stanley, K. & Laham, J. (2018). “What makes HAL 9000 a character in 2001: A space odyssey?”. Film Matters. 9(1): 39-46.
Stetson, H.K.; Knickerbocker, G.; Cruzen, C.A. & Haddock, A.T. (2011). “The HAL 9000 space operating system”. 2011 Aerospace Conference. IEEE: 1-24.
Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.
Yudkowsky, E. (2016). The AI Alignment Problem: Why It’s Hard, and Where to Start. Machine Intelligence Research Institute.
---------------. (2013). Intelligence Explosion Microeconomics. Machine Intelligence Research Institute.
----------------. (2008). “Artificial intelligence as a positive and negative factor in global risk”. M. M. Ćirković & N. Bostrom (Eds.). Global Catastrophic Risks. Oxford University Press.
Zohouri, M.; Darvishi, M. & Sarfi, M. (2020). "Slacktivism: A critical evaluation". Journal of Cyberspace Studies. 4(2): 173-188. doi: 10.22059/jcss.2020.93911.