
Key takeaways

  • ClearBridge is closely watching sustainability-related opportunities presented by artificial intelligence as well as its energy intensity and social dimensions as the phenomenon plays out in our companies across sectors.
  • In our view, AI content labeling on digital media platforms, such as with watermarking software, and election transparency are key areas of both risk and opportunity as these tools evolve quickly.
  • AI’s energy intensity should cause generation shortages on the grid, especially in places where the data centers have been expanding rapidly, while in health care we see significant potential for AI in the field of diagnostics.

Artificial intelligence (AI) has the potential to transform the investment landscape, and while its rapid development sparks some valid social and environmental caution, we think it also brings enormous potential to advance sustainability goals, with better data to improve energy efficiency, optimize renewable energy, make agriculture more sustainable and improve human health. ClearBridge is closely watching these opportunities even while we observe AI’s energy intensity and social dimensions as the phenomenon plays out in our portfolio companies across sectors.

On the regulatory front, the world’s first comprehensive AI law, the EU’s AI Act (AIA), will come into force later in 2024. The AIA classifies AI systems according to the risks they are seen to pose to users: unacceptable risk (such as emotion recognition in schools and workplaces), high risk (such as critical infrastructure and medical devices), limited risk (such as chatbots, which carry the risk of manipulation or deceit) and minimal risk (such as spam filters). Each level of risk is subject to different requirements, and there are heavy fines at the company level for noncompliance. President Biden also issued an executive order on safe, secure and trustworthy AI in October 2023, aimed at establishing standards for AI safety and security, protecting privacy, equity and civil rights, and supporting consumers and workers.

On the labor front, AI can boost productivity, but automation has always threatened labor disruption, potentially deepening global inequalities as AI growth may favor advanced economies with sufficient infrastructure and skilled workforces. Hiring algorithms may also rely on and perpetuate race and gender biases. Almost 40% of global employment is exposed to AI, as shown in Exhibit 1.

Exhibit 1: Employment Shares by AI Exposure and Complementarity

Source: “AI Will Transform the Global Economy. Let’s Make Sure It Benefits Humanity,” Kristalina Georgieva, IMF Blog, imf.org, Jan. 14, 2024. For illustration purposes only. Complementarity implies AI leads to gains in productivity and higher income.

AI also raises looming questions of misinformation and digital safety, cybersecurity, and even human capital management, as hiring for AI ethics roles picks up. It is clear that as AI develops and hits multiple inflection points over the next few years, companies across ClearBridge portfolios will need to navigate a variety of sustainability-related AI opportunities and risks.

AI and power demand implications

AI is energy intensive, with data centers running large language models requiring significant electricity and complicating the already complex power supply and demand picture of the energy transition. Overall estimates for data-center-driven U.S. power demand growth vary, but they generally forecast data centers’ share of U.S. electric load to roughly double by the end of this decade, from the current 3%–4% to approximately 8% by 2030.1

On the surface, this kind of demand growth should cause generation shortages on the grid, especially in places where data centers have been expanding rapidly, such as Virginia and California.

Factors that could mitigate projected power shortages in the future include:

  • Continued improvement in data center technologies and efficiency.
  • Expansion of data center locations toward less congested grids.
  • Increased utilization of existing gas generation capacity, and even delays in scheduled coal plant retirements, which would increase the emissions intensity of AI.

Another important mitigating factor over the next five years will be faster development of renewable power sources, as many data center hyperscalers have public commitments to carbon-free energy. Renewable projects’ shorter development and construction timelines and their locational flexibility to meet data center demand should push demand for renewables higher and improve project returns. Renewable developers should be beneficiaries of these trends. One renewable-focused utility’s current forecasts call for renewable capacity to reach between 375 GW and 450 GW over the next seven years (2024-2030). This implies a 13% compound annual growth rate through the end of this decade and suggests a rapid acceleration in renewable development (235 GW of renewables were added over the last 30 years). According to the company, this anticipated power demand acceleration is expected to be driven by consumption growth from data centers (+108%), the oil and gas industry (+56%) and chemicals (+14%) between 2025 and 2030.
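As a back-of-the-envelope illustration of the arithmetic behind that growth rate, the sketch below back-solves the installed renewable base implied by the forecast. The use of the 375–450 GW range midpoint, the seven-year horizon and the resulting starting base of roughly 175 GW are our own illustrative assumptions for this calculation, not figures disclosed by the utility.

```python
# Illustrative check of the ~13% compound annual growth rate (CAGR) implied
# by a 375-450 GW renewable capacity forecast over seven years (2024-2030).
# The range midpoint and the back-solved starting base are assumptions made
# for illustration only, not figures reported by the utility.

end_capacity_gw = (375 + 450) / 2      # midpoint of the forecast range, GW
years = 7                              # 2024 through 2030
stated_cagr = 0.13                     # growth rate cited in the text

# CAGR definition: end = start * (1 + cagr) ** years
implied_start_gw = end_capacity_gw / (1 + stated_cagr) ** years
print(f"Implied starting base: {implied_start_gw:.0f} GW")         # ~175 GW

# Conversely, given that starting base, the implied CAGR is:
implied_cagr = (end_capacity_gw / implied_start_gw) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")                         # 13.0%
```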

Over the long term, data center power consumption growth and companies’ green targets should advance the development and utilization of more effective power storage and carbon capture and storage technologies as well as green baseload power solutions, such as green hydrogen and small modular nuclear reactors.

From the regulated utilities’ perspective, the ultimate impact of data center demand growth will vary by region, but the overall implications for the sector should be positive. Data center additions to regional grids will not only drive incremental investments into local transmission and distribution systems but in some cases also create incremental generation needs. In the near term, utilities located in territories with planned data center expansions should benefit from higher required investments into the grid to accommodate additional demand.

AI’s impact on labor conditions

Prior waves of technology dating back to the 19th century have changed the fabric of the global workforce. AI is similar but potentially more impactful in that it might affect white-collar jobs just as much as blue-collar labor. The power of generative AI (Gen AI) over prior AI advances is its ability to generate creative output. However, most companies are using Gen AI to augment their employees’ capabilities rather than seeking to replace them. In ClearBridge engagements with technology companies using newly released code generation tools, we find they generally use them to speed up the first draft of a software engineer’s code output. This frees up the engineer to focus on larger problems such as user experience and system design. In our view, the risk of AI causing mass unemployment is therefore overstated, while the need to upskill and reskill today's workforce is likely understated. Management consulting firm McKinsey estimates that by 2030 as many as 375 million workers, or roughly 14% of the global workforce, might need to switch occupational categories and acquire new skills. Just as prior waves of innovation did, we believe the AI wave promises to create demand for new skills around model training, prompt engineering and data science.

While AI can often outperform human counterparts on a growing range of tasks, it lacks human intuition, context awareness and ethical judgment. Recognizing these limitations should help companies use AI more effectively and responsibly. When deploying AI to generate content, we think the primary ethical considerations are around protection of intellectual property rights and avoidance of unintended bias. Recent missteps have shown how difficult it is to tune an AI system to account for bias and ambiguity. However, large AI developers are also taking the challenge seriously and stepping up their investment in AI ethics and safety. One social media platform and leading digital advertising provider currently has around 40,000 people working on safety and security, with more than $20 billion invested in teams and technology in this area since 2016. Another leading digital advertiser’s AI platform provides a suite of tools that cater to the entire AI lifecycle, from data preparation to model deployment and monitoring. By integrating robust security measures, promoting transparency through explainable AI and adhering to stringent ethical guidelines, it empowers businesses of all sizes to develop and deploy AI solutions with confidence.

Misinformation and social manipulation

AI, and Gen AI in particular, makes it much easier for bad actors to spread misinformation. We have already seen AI being used to impersonate individuals, including the two leading candidates in the 2024 U.S. presidential election. Large digital advertisers are working to thwart the misuse of AI-generated content on their respective platforms. In August 2023, one company debuted watermarking software for AI content, letting users know that the content is AI generated. Another company, meanwhile, ensures its AI-generated content is labeled “imagined with AI” and is expanding this feature to include content created by third-party tools. The company is also focused on election transparency: it has served over 500 million notifications on its apps since 2020 informing users how and when to vote, and it has built an industry-leading, publicly available library of political ads that discloses which entity funded each ad and whom it targets. Given how quickly the tools are evolving, including high-quality AI-generated video in the near future, this remains an open area of both risk and opportunity for the world’s leading digital media platforms.

AI’s potential in health care

The growth and increasing complexity of data in health care also make AI potentially transformative in the sector. In drug discovery and development, for example, some companies are successfully using AI to create and optimize molecules to go into development, largely with applications in chemistry and protein engineering. Some companies are hoping to use AI to pick better targets for drugs, although we are skeptical about the near-term prospects, as the complexity of biology may pose a challenge for current AI models. Other companies are hoping to use AI and advanced computer models to better design clinical trials, although these attempts are in the very early stages.

There is also significant potential for AI in the field of diagnostics, both traditional testing and advanced genetic tests. For traditional methods of diagnosis, like blood/serum-based tests and images such as X-rays, CTs and MRIs, AI should be useful for prescreening, enhancing or even replacing human reading of test results. AI models have already been used to develop tests that look for patterns of genes indicating cancer or informing its prognosis.

Along these lines, a medical technology company focused on women’s health and the leading manufacturer of mammography machines is incorporating AI in its breast imaging business to assist radiologists in locating possible breast cancer lesions. In addition, one of the leading manufacturers of CT and MRI machines is also incorporating AI into its imaging platforms, which provide automatic post-processing of imaging datasets through AI-powered algorithms to reduce basic repetitive tasks and increase diagnostic precision when interpreting medical images. The company is the global leader in AI patent applications in health care.

Conclusion

The rapid ascent of large language model AI in 2023 has made the technology relevant to companies’ futures in nearly every sector. It will be important for AI to be firmly tied to sustainable futures, and we will continue to monitor how ClearBridge portfolio companies and the market at large are navigating AI’s sustainability-related opportunities and risks.



IMPORTANT LEGAL INFORMATION

This material is intended to be of general interest only and should not be construed as individual investment advice or a recommendation or solicitation to buy, sell or hold any security or to adopt any investment strategy. It does not constitute legal or tax advice.

The views expressed are those of the investment manager and the comments, opinions and analyses are rendered as at publication date and may change without notice. The information provided in this material is not intended as a complete analysis of every material fact regarding any country, region or market. All investments involve risks, including possible loss of principal.

Data from third party sources may have been used in the preparation of this material and Franklin Templeton ("FT") has not independently verified, validated or audited such data. FT accepts no liability whatsoever for any loss arising from use of this information and reliance upon the comments, opinions and analyses in the material is at the sole discretion of the user.

Products, services and information may not be available in all jurisdictions and are offered outside the U.S. by other FT affiliates and/or their distributors as local laws and regulation permits. Please consult your own financial professional or Franklin Templeton institutional contact for further information on availability of products and services in your jurisdiction.

Issued by Franklin Templeton Investment Management Limited (FTIML). Registered office: Cannon Place, 78 Cannon Street, London EC4N 6HL. FTIML is authorised and regulated by the Financial Conduct Authority.

Investments entail risks, the value of investments can go down as well as up and investors should be aware they might not get back the full value invested.

CFA® and Chartered Financial Analyst® are trademarks owned by CFA Institute.