The Occasional Perspective - 6/26/22

Opinions and Reflections

Kudos to Jack Resneck, MD – In his inaugural speech before the Annual Meeting of the American Medical Association, Dr. Jack Resneck called out the growing chorus of legislative initiatives aimed at the LGBTQ+ community. He said: "Shame on political leaders, fueling fear and sowing division by making enemies of public health officials, of transgender adolescents, of physicians doing anti-racism work, and of women making personal decisions about their pregnancies." We in the healthcare community need to be more vocal in protecting individuals' rights to access the care they need – as determined by their providers. The profession should not tolerate legislation dictating the practice of medicine, including which procedures, processes, and other elements of care may be delivered. We are entrusted through our State Medical Boards and other bodies to provide quality care – care that legislators without clinical experience or insight cannot fully understand unless they pursue careers as healthcare practitioners.

Managing And Regulating AI/ML – It’s probably clear to everyone that discussions, debates, dreams, and predicted disasters surrounding AI/ML are at the forefront of the health care world. There is “much ado” about the AI/ML conundrum:

How should we manage it, or not?
Is it time to regulate it?
What do we do to protect people from unscrupulous users and adopters?

And these are just the first of the questions percolating to the top of the health care community. A recent article from the MIT Technology Review reviewed six options for managing the evolving AI/ML equation. It’s definitely worth reading the entire article (check the hyperlink). The synopsis includes the following:

Option 1: A legally binding AI treaty – The Council of Europe is a human rights organization with membership derived from nations across the globe. The present count stands at 46 members – and, obviously, involvement extends beyond Europe. For example, the USA, Canada, and Mexico from the Americas sit in on the Council, as do Israel, Ukraine, Japan, and others from other geographies. The Council is finalizing a binding AI/ML treaty under which signatories would commit to steps ensuring that AI is designed, developed, and applied with the protections of democracy and human rights in mind. The intent is to have a “draft” treaty available for review sometime in November 2023, according to one of the advisors to the Council. While it may be a good idea, getting all 46 members aligned on a common strategy and implementation approach – where the treaty represents a commitment “to do” rather than the implementation of a “common law” – is a real stretch. But at least the discussion is a good one.

Option 2: Adopt the OECD AI Principles - The Organisation for Economic Co-operation and Development (OECD) was formed in 1961, growing out of the body that administered the Marshall Plan in the wake of World War II, and has continued to be a viable entity for helping the European Union become a reality. The OECD has agreed to adopt a nonbinding set of principles for managing and supporting AI/ML development. These principles focus on transparency, security, safety, and accountability, as well as respect for the rule of law, human rights, democratic values, and diversity. They also call for AI systems that contribute to economic growth – not economic destruction (yet to be defined). The OECD has been working diligently to gain the support of European Union members and is tracking adherence to the principles as well as the economic impact of these tools. Its work is only beginning and will require ongoing investment and adherence on the part of the European community.

Option 3: Creation of a Global Partnership on AI – Two heads of state have taken the lead on this massive undertaking: Prime Minister Justin Trudeau of Canada and President Emmanuel Macron of France. The Global Partnership on AI (GPAI) was founded in 2020 as an international body focused on fostering shared research and information related to developments in AI/ML, as well as supporting international research collaboration. To date, the GPAI includes 29 countries, some in Africa, South America, and Asia. The development and deployment of such a body has been advocated by a large number of AI experts across the globe as the best approach – it is similar to what has been done through the UN Intergovernmental Panel on Climate Change. Now, the question is: will such a group move fast enough to keep up with all of the AI/ML developments, or will it become embroiled in internecine debates among the nations of the world?

Option 4: Adoption of the European Union AI Act – The European Union is in the process of finalizing the AI Act, first proposed in 2021 as a bill to regulate AI/ML across the European Union with a focus on education and health care for starters. The bill could hold bad actors accountable and prevent the worst excesses of harmful AI by issuing huge fines and preventing the sale and use of noncompliant AI technology in the EU. It seems to be a classic “regulatory” bill from the EU that would identify and manage the various risks and potential problems associated with AI/ML development. In fact, the EU version is the first cross-border regulatory approach to be deployed, so it will no doubt create pressure on other geographic regions to adopt similar – if not identical – options for regulating AI/ML. But we wait and see…

Option 5: Industry Takes the Lead in Deploying Standards – There have already been some efforts on this front, with the International Organization for Standardization (ISO) deploying standards related to management of the development process, risk management, and impact assessments once AI/ML is deployed. One of the debates, however, is whether a bunch of techies is the right group to be developing, managing, and deploying standards with clear ethical, humanitarian, and economic implications. While they should clearly be at the table, the question is: who should lead the discussion and the framing of the principles?

Option 6: Engage the United Nations – The UN gets called on regularly for all manner of international problems and considerations. With 193 member countries – at last count – the UN is the type of international organization that could potentially create global coordination on the AI/ML front. In fact, it appears the UN wants to be the focal point for solving the AI/ML conundrum: it created a UN Envoy in 2021 and adopted a voluntary AI ethics framework whereby member nations would pledge their support for a set of principles related to the ethical, economic, and environmental impacts of AI/ML tools. But, again – we shall see…

Are Big Box Stores The Future? – As a follow-up to the quote above, Health Leaders Media interviewed Stacey Malakoff, the Chief Financial Officer of the Hospital for Special Surgery, based in New York City. She raised this question because of big box stores’ ubiquitous presence in the retail environments where many of us seem to hang out. Back in 1985 I made the provocative, tongue-in-cheek “prediction” that Sears would tear out the carpet departments in its stores and convert them into clinics staffed by “alternative providers,” and that everyone upon arrival would be given a “beeper” to be “paged” when the provider was available so they could wander around the Sears store. The intent was for the “clinic” to run 15–20 minutes late so that everyone would have a chance to wander the store, make those unplanned purchases, AND get their health care needs resolved before heading home. I was off by a couple of decades, and Sears is not the model – but CVS, Walgreens, and others have picked up on the idea.

Healthcare Consultants

    ...Inspiring creative change to benefit the human condition