Artificial Intelligence Will Affect 80% of the Workforce

A recent study led by researchers from OpenAI, the developer of ChatGPT, and posted on arXiv, the preprint server hosted by Cornell University, found that AI language models, known as Large Language Models (LLMs), will affect the daily work tasks of up to 80% of the workforce in the United States in one way or another.

The study suggests that within the next few years, approximately 20% of workers will find that the new technology takes over almost half of their responsibilities. The impact of artificial intelligence will vary across professions: workers in scientific fields are among the least affected, while those in administrative and office roles, and in creative fields such as content creation, software development, and marketing, are among the most affected.

The study's authors noted that this impact is not necessarily negative; rather, it reflects the extent to which these sectors stand to benefit from AI technologies. In most cases, the technology will not replace workers in the affected fields but will support and assist them.

The study also highlighted the capabilities of the GPT-4 model, which OpenAI announced last week and which has achieved human-level performance in several areas. For example, it passed a simulated bar exam with a score that places it in the top 10% of test takers.

AI language models like ChatGPT can boost creativity and productivity by providing help and inspiration to their users. Google and Microsoft have recently announced AI writing features for their productivity suites, which can help users draft and refine content more efficiently. Similarly, ChatGPT can help entrepreneurs come up with new ideas or develop their projects. AI can also reduce the administrative burden by taking over routine tasks, allowing employees to focus on more important work.
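As a rough illustration of the kind of writing assistance described above, the minimal sketch below asks an LLM to tighten a short piece of draft copy. It assumes the OpenAI Python SDK, an API key in the OPENAI_API_KEY environment variable, and placeholder prompt text and model name; it is only an example of the workflow, not a prescribed tool or setup.

```python
# Minimal sketch: ask an LLM to improve a short piece of draft marketing copy.
# Assumes the OpenAI Python SDK (>= 1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads the API key from OPENAI_API_KEY

draft = "Our app helps small teams track tasks."  # placeholder draft text

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are an editor who improves marketing copy."},
        {"role": "user", "content": f"Rewrite this sentence to be clearer and more engaging: {draft}"},
    ],
)

# The suggested rewrite still needs human review before publication.
print(response.choices[0].message.content)
```

As the following paragraphs note, output produced this way still has to be checked by a person before it is used.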

However, the technology also poses challenges and risks, such as ensuring the quality and reliability of generated content. AI models are prone to producing inaccurate or inappropriate output that must be verified and reviewed by human experts, and users need to develop critical thinking and fact-checking skills to assess the validity of the outputs.

Another challenge is the potential for bias that reflects the data on which the models were trained. For example, AI may generate content that discriminates against certain groups or violates intellectual property rights. Users should therefore be aware of the ethical and social implications of AI output.