By Harrison Jones and Andrew McLeod
The actuarial profession has always been at the forefront of using technology to improve decision-making and predictions. From early computing machines to modern software, the evolution of technology has continuously shaped the way actuaries approach data analysis and risk assessment. Now, with the introduction of advanced artificial intelligence (AI) and machine learning (ML) tools like OpenAI’s GPT-4, the profession is on the cusp of another major transformation.
GPT-4 is a powerful computer program that can understand and generate human language. It can be used to write essays, articles, emails and even computer code. Trained on a large volume of text data, it can interpret context and produce text that closely resembles human writing from nothing more than a prompt.
GPT-4 is not just an interesting tool, but also a signal of the recent advances and real-world capabilities of ML and AI models. As highlighted in recent articles, versions of ChatGPT have passed Wharton MBA and US medical licensing exams and have already begun producing intermediate-level computer programs. In a recent episode of the “All-In” podcast, venture capitalist Chamath Palihapitiya even claimed that this spells the end of the software-as-a-service business model as we know it. Most relevant to actuaries, GPT-4 was recently given questions from CAS Exam 9. It failed the exam, but that is not to say that future versions of GPT will not be capable of passing it.
OpenAI’s offering and similar tools can be considered a new category of “foundation models,” which bring with them myriad opportunities as well as risks, and will no doubt revolutionize how actuaries approach their work in the coming years. In this article, we dive deeper into the good, the bad and the ugly regarding the impact of foundation models on the actuarial profession, both in general and with respect to GPT-4 specifically.
Foundation models
Foundation models are trained on a broad set of data, typically very large volumes of it, and are capable of a wide range of generalized tasks. They are not technically new: they are still based on deep learning and self-supervised learning, which have been around for several years. However, the scale and complexity at which new foundation models are being developed is novel; GPT-4, for example, is rumoured to have more than a trillion parameters.
One burgeoning topic within the field is the set of ethical, political and social considerations raised by the widespread adoption of foundation models. Homogenization, for example, in which a single foundation model is used across a variety of domains without adaptation, can amplify that model’s strengths, weaknesses, biases and idiosyncrasies everywhere it is deployed. Actuaries should be aware of both the pros and cons, discussed at length in the next section, of leveraging these types of models for their own work.
How does/will ChatGPT impact the future of actuarial work?
While there are certainly some potential benefits to using advanced language models like GPT-4 in actuarial work, there are also some potential downsides to consider:
Negative possibilities
- Reduced need for actuaries: One potential negative impact of GPT-4 is that it could reduce the need for actuaries in some areas, as tasks that were previously done by humans can now be automated. For example, GPT-4 could be used to write reports, analyze data and create actuarial-specific documents such as Appointed Actuary reports and other financial reports. This could lead to job losses and a decline in the demand for actuaries in certain areas.
- Cybercrime and fraud: Another potential negative impact is that GPT-4 could be used to enable new forms of cybercrime and fraud. For example, criminals could use GPT-4 to create chatbots that impersonate human beings and trick people into revealing sensitive information. Additionally, GPT-4 could be used to create fake documents and claims, making insurance fraud harder to detect and prevent. Insurance companies and regulators would have to invest significant resources in developing new methods for identifying and verifying the authenticity of claims and documents, leading to increased costs for insurers and policyholders and a heavier regulatory burden on the insurance industry, among other repercussions.
Positive possibilities
- Improved efficiency and accuracy: One of the most obvious benefits of GPT-4 for the actuarial profession is that it could greatly improve the efficiency and accuracy of tasks such as report writing and data analysis, since it can draft a wide range of actuarial-specific documents. This could lead to significant cost savings and improved decision-making in the industry.
- Advanced actuarial models: Another potential benefit is that GPT-4 could be used to develop more advanced and sophisticated actuarial models, improving the accuracy of predictions and enhancing the quality of judgment within the profession.
- Natural language explanations: Finally, GPT-4 could be used to generate natural language explanations for complex actuarial models, making them more accessible and understandable to a wider audience, as sketched below. This could improve communication and transparency in the industry, ultimately leading to better-informed decisions and customers, as well as greater reach and visibility of actuarial work in general.
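As one concrete illustration of this last point, the sketch below shows how GPT-4 might be prompted, through the OpenAI Python client, to restate a technical reserving result in plain language. It is a minimal, hypothetical sketch rather than a recommended workflow: the model name, the reserving figures and the prompt wording are invented for illustration, and the exact client interface depends on the library version in use.

```python
# Minimal illustrative sketch (not a production workflow): asking GPT-4 to
# restate a technical reserving summary in plain language for a lay audience.
# Assumes the `openai` Python package is installed and the OPENAI_API_KEY
# environment variable is set; the figures and wording below are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# A hypothetical, hard-coded summary of a reserving model's output.
model_summary = (
    "Chain-ladder IBNR estimate: $12.4M; selected loss development factors "
    "trended upward in the last two diagonals; held reserve margin of 8%."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You explain actuarial results in plain language for non-specialists."},
        {"role": "user",
         "content": f"Explain this reserving summary to a board audience:\n{model_summary}"},
    ],
)

# Print the plain-language explanation returned by the model.
print(response.choices[0].message.content)
```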
This list is far from exhaustive, and there are no doubt applications yet undreamt of, but it’s clear that the potential impact of GPT-4 on the actuarial profession is a complex issue with both positive and negative outcomes to consider. It’s also important to stress that GPT-4 is just the beginning: a professional paid version of ChatGPT with faster performance, and GPT-5 (and beyond) with expected stronger abilities in the computer programming relevant to actuarial work, are already in the works. As the technology continues to evolve, it’s crucial for the actuarial profession to remain vigilant and adapt to these changes to ensure its continued relevance.
CONCLUSION
For many years actuaries have been asking themselves, “How will ML/AI affect my work?” The answer has always envisioned a future in which ML/AI makes our jobs easier and our models more powerful, while carrying risks to individual privacy and of introducing bias into our models. This core answer has not fundamentally changed with the advancement of foundation models over the last few months. But it makes that envisioned future feel more like the present.
As always, actuaries should keep their finger on the pulse of emerging technology. For better or worse, these advanced ML/AI models have a role to play. If you need any more evidence, who do you think wrote this article?
This article was written by human authors and reflects their opinion, and does not represent an official statement of the CIA.