What Can Happen If I Use AI as a Condo or HOA Board Member?

By Connor J. Thomson, Esq.

As the scope of insurance coverages, exclusions and deductibles continues to change in the community association arena, members of association boards must now be on the lookout for certain exclusions to their directors & officers liability (“D&O liability”) policies relating to artificial intelligence (“AI”).  Recently, a large commercial lines insurance company introduced a new “absolute” exclusion to its D&O policies that seeks to eliminate coverage for nearly all claims related to AI-generated content, including using AI-generated content to assist with making complex financial, legal or governance decisions, as well as creating budgets, contracts, declarations, bylaws and more.

Instead of embracing AI, the insurance industry – a reliable barometer for risks and liabilities – appears to be segmenting it, excluding it from some D&O liability policies.  Some D&O policyholders who want or expect coverage for claims related to AI-generated content must now purchase a standalone AI policy and pay a hefty premium for it.  Currently, the market for standalone AI policies is slim.  While more commercial lines insurance companies will likely start offering standalone AI policies to fill coverage gaps, the absolute AI exclusion should nevertheless serve as a wakeup call to community associations and their boards.

What steps can you take to protect condo and HOA board members?

First, make it a priority to review your community association’s D&O liability policy to see if there is an absolute AI exclusion embedded therein.  Second, you may wish to contact your community association’s lawyer.  While AI is being rapidly integrated and adopted by public and private organizations alike, it is simultaneously being segmented and excluded as a risk under certain D&O liability policies, thereby creating potential liabilities by way of derivative litigation.

AI is also creating a risk profile dilemma for community associations: Unpaid, volunteer board members are already experiencing significant fatigue from their day jobs and personal obligations, and AI is providing much-needed relief and convenience by facilitating the automation of time-consuming tasks.  However, when community associations begin integrating and adopting AI into their governance regimes, they are entirely changing their risk profiles.

Why is AI-generated content being excluded from insurance policies?

AI is flawed, biased, and prone to mistakes.  AI (that is, generative AI) refers to the deep-learning models that generate text, images, and other content based on the data on which they were trained.  Your prompts and interactions with a model, along with those of the billions of other people using it around the world, can themselves become training data.  This means that AI can never be better than its underlying data.  Hallucinations and fabrications demonstrate that there is still a significant disconnect between AI’s capability and its reliability, leaving end users, such as community associations, to “hold the bag” when defending against allegations that AI-generated content was erroneously relied upon in making a governance decision.

In addition, AI often lacks explainability, which is especially problematic when (1) fiduciary duties are owed; and (2) the provision of housing is involved.  If an unpaid, volunteer board member cannot explain how or why a board decision was made because “black box AI” was relied upon instead of human judgment grounded in the advice of legal counsel, the decision may be successfully challenged before a state or federal tribunal.  Under these circumstances, it is imperative for board members to develop and implement an internal policy governing AI usage and systems.

Implementing an internal policy about AI usage and systems

Believe it or not, even failing to develop and implement an internal policy governing AI usage and systems may result in a denial of a D&O liability claim.  For more risk-averse community associations, a prudent internal policy would prohibit AI usage and systems altogether.  For community associations with a larger risk appetite, an internal policy may instead carve out a few narrow exceptions for AI usage and systems.

Given these new risks and exclusions relating to the use of AI, boards should consider consulting with third-party professionals, including community association lawyers, to evaluate applicable D&O policies and develop internal policies and procedures that effectively control and minimize the risks associated with AI technology.

Pursuant to Pennsylvania’s statutes governing common interest ownership regimes, board members owe a fiduciary duty to their association.  To successfully discharge those duties, board members have a right to rely upon the advice, opinions and information provided by such third-party professionals.  Relying upon such advice, in good faith, may well insulate board members from corresponding liability.  Relying upon AI will not.

Attorney Connor J. Thomson is an Associate Attorney at Gawthrop Greenwood, one of the largest law firms supporting homeowners associations (HOAs), planned communities, condominium associations, and cooperatives in Pennsylvania.  Holding the Chartered Property Casualty Underwriter designation earned by only 4% of insurance professionals, Thomson advises community associations, board members, and adjusters on non-profit corporate governance and D&O liability claims. For more information, contact Connor at cthomson@gawthrop.com or 610-696-8225.
