Artificial Intelligence (AI) A “Virtual” Primer for Attorneys Presented by David M. Lawson for the Birmingham Bar Association April 2020 www.birminghambar.org


  • Artificial Intelligence (AI)

    A “Virtual” Primer for Attorneys

Presented by David M. Lawson for the Birmingham Bar Association, April 2020

    www.birminghambar.org

  • Agenda

    • Introduction to Artificial Intelligence (AI)
    • Early History of AI
    • What is AI?
    • Why AI?
    • Common AI Applications
    • Types of AI – Machine Learning
    • Advantages and Disadvantages
    • Highlight of Examples of Legal and Ethical Issues Raised by AI
    • The Future of AI
    • Q&A

  • Disclaimer

    • David M. Lawson is the author of this presentation and is solely responsible for its content.

    • The views expressed in this presentation are solely those of the presenter and should not be attributed to any other person, firm, corporation or organization.

    • This presentation is solely for educational purposes and does not constitute legal advice.

    • By attending this presentation, you understand that there is no attorney-client relationship intended or formed between you and the presenter.

  • Introduction to AI

    • Artificial Intelligence (AI) is a branch of Computer Science which deals with helping machines to find solutions to complex problems in a more human-like fashion.

    • This generally involves borrowing characteristics from human intelligence and applying them as algorithms in a computer-friendly way.

    • AI textbooks define AI as “The Study and Design of Intelligent Agents” where an intelligent agent is a system that perceives its environment and takes actions that maximize the chances of its success.

• John McCarthy, who coined the term Artificial Intelligence at Dartmouth in 1956, defined it as “The Science and Engineering of making Intelligent Machines.”

  • EARLY HISTORY OF AI

    1950: English mathematician Alan Turing wrote a landmark paper titled “Computing Machinery and Intelligence” that asked the question: “Can machines think?”

1956: The term “Artificial Intelligence” was introduced as part of a workshop at Dartmouth organized by John McCarthy. In the proposal for that workshop, he coined the phrase “a study of Artificial Intelligence.”

  • 1959: MIT’s Artificial Intelligence Laboratory is Founded

• MIT professor Marvin Minsky was a "founding father" of the field of artificial intelligence whose work opened up new vistas in computer science, cognitive psychology, philosophy, robotics, and optics.

    • In 1959, Minsky co-founded MIT's Artificial Intelligence Laboratory (now the Computer Science and Artificial Intelligence Laboratory) and dedicated his career to exploring how we might replicate the functions of the human brain in a machine, a research journey he hoped would help us better understand our own minds.

    • "No computer has ever been designed that is ever aware of what it's doing," Minsky once said. "But most of the time, we aren't either."

  • What is AI?

    • The capability of computer systems that attempt to model and apply the intelligence of the human mind and human behavior.

    • AI systems use computers to solve problems or make automated decisions.

• AI systems perform functions associated with the human mind such as perception, pattern recognition, classification, reasoning, speech recognition, etc.

• Other common characteristics of AI systems are:
    • Autonomous
    • Goal-directed
    • Capable of learning (self-improvement)
    • IBM’s 1968 short film “Powers of Ten” – a computer’s exponential thinking versus a human’s linear thinking.

  • Why AI?

    • How can people and computers be connected so that, collectively, they act more intelligently than any person, group of persons, or computer has ever done before?

    • Which tasks should the computers perform, and which tasks should the people perform?

    • People are well-suited for tasks that involve cognitive thinking.

    • Computers are fundamentally well-suited to performing repetitive, mechanical computations, using fixed programming rules.

• This allows computers to perform simple, monotonous tasks efficiently and reliably – tasks for which people are ill-suited.

    • The goal is people and computers working together.

  • CURRENT STATUS OF A.I.

    A.I. FOR GOOD
    • Satellite analysis to identify poverty areas
    • Tracking poachers

    AVIATION
    • Airport gate allocation in real time

    EDUCATION
    • Robots teaching subjects

  • CURRENT STATUS OF A.I.

    HEALTHCARE
    • Solving a variety of healthcare industry problems

    HEAVY INDUSTRY
    • Robots are ubiquitous
    • Repetitive tasks
    • No complaining

    FINANCE
    • Algorithmic trading
    • Market analysis
    • Data mining

  • “Once the computers got control, we might never get it back. We would survive at their sufferance. If we're lucky, they might decide to keep us as pets.”

    - Marvin Minsky, in Life Magazine, November 20, 1970, p. 68.

  • Why did it take so long?

• Computing Power
    • Tolerance Power
    • Hardware Power
    • Software Development
    • Intuitive Thinking
    • Judging Power

  • Types of AI: How Does it Work?


  • Algorithms, what are they?

    • Definition from Merriam Webster:

    • A procedure for solving a mathematical problem (as of finding the greatest common divisor) in a finite number of steps that frequently involves repetition of an operation.

    • A step-by-step procedure for solving a problem or accomplishing some end.
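    The dictionary’s own example, the greatest common divisor, makes a convenient illustration. Below is a short sketch of Euclid’s classic GCD algorithm – a well-known procedure, not taken from the presentation – showing a finite number of steps that repeat one operation:

```python
def gcd(a, b):
    """Euclid's algorithm: repeat one operation until the remainder is zero."""
    while b != 0:
        a, b = b, a % b  # replace the pair (a, b) with (b, a mod b)
    return a

print(gcd(48, 36))  # 12
```

    Each pass through the loop performs the same operation (take a remainder), and the procedure is guaranteed to finish – exactly what the definition above describes.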

  • “Strong” or Artificial “General” Intelligence

• We don’t have this today
    • Computers thinking at a level that meets or surpasses humans
    • Computers engaging in abstract reasoning and thinking
    • Computers that can learn to perform basically any task
    • The Terminator

  • “Weak” or “Narrow” Pattern-Based Artificial Intelligence

    • The dominant mode of AI today
    • Computers perform specific tasks
    • Computers solve problems by detecting useful patterns
    • Pattern-based AI is an extremely powerful tool
    • Has been used to automate many processes today

  • Machine Learning

• Machine Learning is the dominant mode of AI today.
    • Machine learning is a discipline that attempts to design, understand and use computer programs that learn from experience, i.e. data, for the purpose of modeling, prediction or control.

    • Machine learning tries to automate the process of finding solutions to problems through the use of examples.

    • Algorithms find patterns in data and infer rules on their own (learn from the data and improve over time).

    • Recent advances in machine learning are due to: (1) accumulation of huge amounts of data; (2) advances in computational power; (3) the growing complexity of models; and (4) new possibilities created by “deep” learning.

  • Examples of Machine Learning Applications

    • Google Translate (computer translation)
    • Email Spam Filters (pattern detection)
    • Speech Recognition
    • Fraud Detection
    • Image Recognition
    • Recommendation Systems (Netflix, etc.)

  • Recommendation Systems and Personalization

    • Product and content recommendations, personalized news, targeted advertisements.
    • People create the content; Google, Wiki, et al. search and sort it.
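    As a rough illustration of how a recommendation system works, here is a minimal “customers who bought X also bought Y” sketch in Python. The basket data and the `recommend` helper are invented for this example; real systems use far richer signals:

```python
from collections import Counter

# Hypothetical purchase histories (illustrative data only).
baskets = [
    {"camera", "tripod", "memory card"},
    {"camera", "memory card"},
    {"camera", "tripod"},
    {"novel", "bookmark"},
]

def recommend(item, baskets, n=2):
    """Recommend the items that most often co-occur with `item` in past baskets."""
    co = Counter()
    for basket in baskets:
        if item in basket:
            co.update(basket - {item})  # count everything bought alongside `item`
    return [other, for other, _ in []] if False else [other for other, _ in co.most_common(n)]

print(sorted(recommend("camera", baskets)))  # ['memory card', 'tripod']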

  • Requirements for Machine Learning

    • In order for Machine Learning to work you must have the correct conditions relating to the data set and human input.

    • Quantity – must have a lot of data examples in order to provide the most reliable results.

    • Variability – the data cannot be too similar – if you are doing a program to detect images of cats, you need lots of images of animals that are not cats.

    • Quality and Dimensionality – the data must be good and it must be comprehensive.

    • This is the era of “Big Data” and is the reason why everyone wants your information.

  • General Types of Machine Learning

    Supervised Learning
    • Makes use of a training set of input and output data supplied by humans that the AI algorithms then use to predict the output for new data entered into the system.
    • Here, the supervised learning algorithms make predictions based on a set of examples. With this method, there is an input variable that consists of labeled training data and a desired output variable.
    • The algorithm is used to analyze the training data to learn the function that maps the input to the output.
    • Image recognition is an example – training for cat images.
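    A minimal sketch of supervised learning, using a toy nearest-neighbor classifier in place of real image data. The feature vectors and labels below are invented for illustration – the point is only that labeled examples (input plus desired output) drive the prediction:

```python
import math

# Toy labeled training set: (feature vector, label). In a real image task the
# features would be derived from pixels; these numbers are purely illustrative.
training = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.0), "dog"),
    ((5.2, 4.8), "dog"),
]

def predict(x):
    """1-nearest-neighbor: label a new input with its closest labeled example."""
    _, label = min(training, key=lambda example: math.dist(x, example[0]))
    return label

print(predict((1.1, 1.1)))  # cat
print(predict((4.9, 5.1)))  # dog
```

    The “learning” here is trivially simple (memorize the examples), but the structure matches the slide: labeled training data in, a function mapping input to output, predictions for new data.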

  • General Types of Machine Learning

    Unsupervised “Deep” Learning
    • Aims to learn from observations alone, where there is no known output or result (supplied by humans).
    • The computer is presented with completely unlabeled data.
    • The AI is looking for underlying patterns and structures in the data.
    • Unlike in supervised learning, here there is no predetermined correct answer that the algorithm is being asked to predict.
    • Unsupervised learning is similar to the way humans solve problems and learn.
    • Association learning, such as Amazon shopping search analysis, is an example.
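    A sketch of unsupervised learning: plain k-means clustering in pure Python, which groups unlabeled points with no correct answers supplied. The data points, starting centers, and the `kmeans` helper are all invented for illustration:

```python
import math

# Unlabeled points; the algorithm must find structure on its own.
points = [(1.0, 1.0), (1.5, 1.2), (0.8, 0.9), (8.0, 8.0), (8.3, 7.9), (7.7, 8.2)]

def kmeans(points, centers, steps=10):
    """Plain k-means: assign each point to its nearest center, then recompute centers."""
    for _ in range(steps):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: math.dist(p, centers[i]))
            clusters[i].append(p)
        # New center = mean of each cluster's points (assumes no cluster empties).
        centers = [tuple(sum(c) / len(c) for c in zip(*cl)) for cl in clusters]
    return centers, clusters

centers, clusters = kmeans(points, centers=[(0.0, 0.0), (10.0, 10.0)])
print(len(clusters[0]), len(clusters[1]))  # 3 3
```

    No labels were ever given, yet the algorithm discovers that the data falls into two groups – the “underlying patterns and structures” described above.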

  • Unsupervised “Deep” Learning

    • A type of AI that attempts to mimic the activity of neurons in the brain to recognize patterns in data sets.

    • In deep learning, algorithms learn from massive data input, modeling data structures (neural networks), and learning to classify that data on its own.

    • Deep learning is the most advanced technique of machine learning.

  • General Types of Machine Learning

    Reinforcement Learning
    • Involves trial and error learning where input data stimulates the algorithm into a response, and the algorithm is “punished” or “rewarded” depending on whether the response was the desired one.
    • The framework here shifts from pattern recognition to experience-driven, sequential decision-making – i.e. trial and error.
    • Robotics and autonomous technologies are examples here. Your Roomba vacuum.
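    The reward/punish loop described above can be sketched with a standard toy problem, an “epsilon-greedy bandit.” The payoff table and variable names are invented for illustration; the agent learns by trial and error which action the environment rewards:

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable

# Hypothetical environment: action "B" is rewarded more often than "A".
payoff = {"A": 0.3, "B": 0.8}

value = {"A": 0.0, "B": 0.0}   # the agent's learned estimate of each action
counts = {"A": 0, "B": 0}

for trial in range(1000):
    # Explore 10% of the time; otherwise exploit the best current estimate.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(value, key=value.get)
    # The environment "rewards" (1) or "punishes" (0) the response.
    reward = 1 if random.random() < payoff[action] else 0
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]  # running average

print(max(value, key=value.get))
```

    After enough trials the agent settles on the better action, despite never being told the payoff table – experience-driven, sequential decision-making in miniature.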

  • Natural Language Processing

• The capability of AI to structure, interpret, understand and generate human language.

    • The goal of NLP is to enable people and computers to communicate in a natural (humanly) language (such as English) rather than in computer language.

    • The field of NLP is divided into two categories: Natural Language Understanding and Natural Language Generation.

    • NLP is used for information retrieval, speech recognition, question answering, machine translation, text generation and sentiment analysis.

    • The goal of speech recognition research is to allow computers to understand human speech so that they can hear our voices and recognize the words we are speaking.

    • It simplifies the process of interactive communication between people and computers.

• Examples of a natural language search are document and image-based search queries such as Google’s image search and, in the legal field, CaseText’s CARA AI.

    • Examples of question-answering systems are IBM Watson, Siri, and Alexa.
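    One of the NLP tasks listed above, sentiment analysis, can be caricatured in a few lines of word counting. Real systems are far more sophisticated; the word lists and `sentiment` helper here are invented for illustration:

```python
# A minimal bag-of-words sentiment scorer (toy word lists, not a real lexicon).
POSITIVE = {"good", "great", "excellent", "helpful"}
NEGATIVE = {"bad", "poor", "terrible", "useless"}

def sentiment(text):
    """Score text by counting positive vs. negative words."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("The service was great and very helpful"))  # positive
print(sentiment("A terrible, useless experience."))         # negative
```

    Even this crude version hints at why NLP is hard: sarcasm, negation (“not great”), and context all defeat simple word counting, which is why modern systems learn from large bodies of text instead.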

  • Advantages of AI

• Smartphones
    • Precision, speed and accuracy
    • Space and ocean exploration
    • Can do repetitive, laborious and time-consuming tasks without rest
    • Fraud detection, records management
    • Robotic Process Automation
    • Healthcare diagnosis and treatment
    • “Smart” cities
    • Predictive analytics

  • Disadvantages of AI

• Cost incurred in maintenance and repair
    • Lacks the human touch
    • Lacks a creative mind
    • Lacks common sense
    • Can’t explain its decisions
    • Abilities of humans may diminish
    • Humans may become dependent on machines
    • AI in the wrong hands can be devastating
    • Bias
    • Legal considerations
    • Ethical considerations

  • Artificial Intelligence: Legal Risks and Challenges

  • Any Regulation of AI?

• AI development is moving faster than many regulatory bodies can keep up.

    • Algorithmic Accountability Act – proposed in Congress in 2019 – addressing bias in algorithms.
    • Food and Drug Administration (FDA) regulation of medical devices.
    • 21st Century Cures Act of 2016 (Cures Act).
    • Health Insurance Portability and Accountability Act of 1996 (HIPAA).
    • EU’s General Data Protection Regulation of 2016 (GDPR), effective May 25, 2018.
    • California Consumer Privacy Act (CCPA), effective January 1, 2020.
    • Federal Trade Commission (FTC) Unfair and Deceptive Trade Practice enforcement actions.
    • Illinois’ Biometric Information Privacy Act of 2008 (BIPA).
    • Illinois’ Artificial Intelligence Video Interview Act (effective January 1, 2020).
    • Genetic Information Nondiscrimination Act of 2008 (GINA).
    • Comprehensive versus sectoral approach to laws and regulations.
    • Alabama?

  • Risks and Challenges – Data Privacy

    • Machine learning requires massive amounts of data, so companies are trying to gather as much as they can – photos, personal information.
    • Who owns the data?
    • Do you sign your rights away when you don’t read the website’s Terms of Use and Privacy Policy?
    • Image recognition technology … it’s fun on Facebook but …
    • London pub using facial recognition to determine who is next in line at the bar.
    • GDPR, BIPA, CCPA, FTC enforcement actions.

  • Risks and Challenges – Data Privacy and Clearview AI

    • Clearview AI – The “Monster” Unleashed.
    • NYT: The Secretive Company That Might End Privacy as We Know It. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
    • NYT: Before Clearview AI Became a Police Tool, It Was a Secret Plaything of the Rich. https://www-nytimes-com.cdn.ampproject.org/c/s/www.nytimes.com/2020/03/05/technology/clearview-investors.amp.html
    • Several class action lawsuits now filed in New York, California and Illinois under CCPA and BIPA:
      https://www.classaction.org/media/burke-et-al-v-clearview-ai-inc-et-al.pdf
      https://www.classaction.org/blog/clearview-ai-hit-with-class-action-lawsuit-over-controversial-data-collection-practices
      https://www.ailira.com/wp-content/uploads/2020/02/447080068-Clearview-AI-BIPA-Lawsuit.pdf
      https://www.courthousenews.com/wp-content/uploads/2020/01/Surveillance-1.pdf

  • Risks and Challenges – Bias in Criminal Justice

    • Authorities are using so-called “predictive algorithms” to set police patrols, prison sentences and probation rules.
    • Data is often biased, and machine learning can learn rules that build those biases into its decisions.
    • The data is supplied by humans, whose own biases can get interjected into the process.
    • Biases can involve race, class and geography.
    • Cade Metz and Adam Satariano, “An Algorithm That Grants Freedom, or Takes It Away,” New York Times, February 6, 2020. https://www-nytimes-com.cdn.ampproject.org/c/s/www.nytimes.com/2020/02/06/technology/predictive-algorithms-crime.amp.html
    • The “Google Police.”

  • Risks and Challenges – Employment Law

    • Growing use of AI in the hiring process – analysis of a candidate’s online history, social accounts, LinkedIn accounts, resume screening, biometric data, etc.

    • What if your social media accounts show you have medical conditions? Who are your friends on social media? EEO issues? Are these compilations regulated credit reports?

    • November 2019: non-profit Electronic Privacy Information Center (EPIC) complaint filed with the FTC concerning HireVue, a company that builds AI hiring tools. https://epic.org/privacy/ftc/hirevue/EPIC_FTC_HireVue_Complaint.pdf

    • Illinois’ Artificial Intelligence Video Interview Act (effective January 1, 2020) – Requires companies to notify job applicants when they use AI-based video interview tools. Companies must notify applicants that AI will be used to consider applicants’ “fitness” for a position. The companies must explain how their AI works and what “general types of characteristics” it considers when evaluating candidates. It requires applicants’ consent to use AI. In addition, it limits who can view an applicant’s recorded video interview to those “whose expertise or technology is necessary” and requires that companies delete any video that an applicant submits within a month of their request.

  • Risks and Challenges in Healthcare

    • Currently, there is very little in the way of specific regulation of AI in the healthcare industry.

    • AI will be a driving force in the future of healthcare for improving patient outcomes, industry efficiency, and future innovation.

    • HIPAA, GDPR, GINA, CCPA – Data Privacy and Security.

• FDA – 21st Century Cures Act (“Cures Act”).
    • The next slides will highlight a few of these specific areas.

  • Risks and Challenges in Healthcare – Generalization Errors and Bias

    • Generalization errors (e.g. erroneously projecting historical data forward or generalizing from insufficient or incorrect data).

    • Bias in medical research is baked into the system because the patients enrolled in studies are rarely a reflection of the population.

    • Example: genomic studies to predict disease – the majority of the data set consists of persons with European ancestry.

  • Risks and Challenges in Healthcare – The “Black Box”

    • Inscrutability versus explainability – the “Black Box.”
    • It is usually not possible to trace how a machine learning algorithm reached a prediction or conclusion.
    • When the system reaches an incorrect or undesirable result, it can be difficult for humans to pinpoint or explain the error.
    • Inputs, then hidden layers, then outputs – how do you explain what happened?
    • Radiology scans.
    • Disease predictions.
    • GDPR – requires companies to provide an explanation for decisions that automated systems make.

  • Risks and Challenges in Healthcare – Medical Devices

    • The closest example of actual AI regulation in healthcare involves “medical devices,” which are now defined quite broadly under federal law to include all sorts of technology, including software.

    • In December 2016, the 21st Century Cures Act (“Cures Act”) was signed into law and it offered some clarity on FDA jurisdiction over software/digital health products.

    • The Cures Act explicitly excludes from the definition of “medical device” (and thus excluded from FDA jurisdiction) most clinical decision support (CDS) software. If AI is included in the CDS software, it will be regulated by the FDA unless it falls into the Cures Act exception.

    • AI is iterative and creates a new or changed product almost constantly. AI devices approved to date have involved locked algorithms that will be periodically updated.

  • Risks and Challenges in Healthcare – Medical Devices continued …

    • In December 2017, and updated in September 2019, FDA published, for comment purposes only, its draft guidance, Clinical and Patient Decision Support Software, Draft Guidance (“Draft Guidance”). https://www.regulations.gov/docket?D=FDA-2017-D-6569

    • If the Clinical Decision Support (“CDS”) software allows a healthcare professional (“HCP”) to independently review the output (i.e., the diagnosis, treatment plan, etc.) then it falls into the exclusion and is outside of FDA jurisdiction. This is now referred to as “Non-Device CDS.” So, software that provides the diagnosis without a healthcare professional’s independent review would be within the FDA’s jurisdiction. This is now referred to as “Device CDS.”

    • In order to fall into the Cures Act exclusion, the HCP should be able to reach the same conclusion on his/her own, without the software. In addition, the software (i.e. the software manufacturer) must be able to: (1) explain, in general, how the software algorithm works; and (2) show that the data sources supporting the recommendation or underlying the rationale for the recommendation are identifiable and easily accessible to the intended HCP - meaning that these sources are understandable by the intended HCP (i.e., the data points are well understood by the HCP) and that these sources are publicly available to the HCP (such as clinical practice guidelines, published literature).

    • This brings up the “Black Box” problem. It is exceedingly difficult to determine how AI reached its conclusions. Thus, it may very well be that most medical devices that incorporate AI are going to end up in the “regulated” category.

    • Recommendations as opposed to AI-generated diagnosis and treatment plan decisions.


  • Risks and Challenges in Intellectual Property – Patents and Copyrights

    • AI algorithms writing everything.
    • Under current U.S. law, a non-human cannot be an author of a work for copyright purposes – “The U.S. Copyright Office will register an original work of authorship, provided that the work was created by a human being.”
    • Under current U.S. law, a non-human cannot be an inventor for patent purposes – an AI computer cannot “conceive” an invention as the law is currently interpreted.
    • Musicians Algorithmically Generate Every Possible Melody, Release Them to Public Domain. https://www.vice.com/en_us/article/wxepzw/musicians-algorithmically-generate-every-possible-melody-release-them-to-public-domain?utm_source=dmfb&fbclid=IwAR0XtylCRnS4Wvra1G8OWhnGKXAY-2vbNFfv6LjQXJgIuqiZPxxE-1OboYc

  • Risks and Challenges – Self-Driving Cars

    “Are we there yet?”

  • Risks and Challenges – Self-Driving Cars continued ….

    • Six levels of autonomy defined by the National Highway Traffic Safety Administration (as recommended by the Society of Automotive Engineers):
    • Level 0 – vehicle can do nothing without you
    • Level 1 – vehicle has features to assist you in your actions – rear-pointing cameras to assist you
    • Level 2 – vehicle has features such as lane monitoring or collision avoidance – can act on its own
    • Level 3 – vehicle can operate on its own, but the driver must be ready to take over – today’s Tesla
    • Level 4 – vehicle has complete autonomy in some environments some of the time
    • Level 5 – vehicle has complete autonomy in all environments all of the time

  • Risks and Challenges – Self-Driving Cars continued ….

    • Sz Hua Huang et al v. Tesla Inc., The State of California, no. 19CV346663 https://app.box.com/s/t1sfpkuii5pkj7ywtq4z9f7jhufhndng

    • 2017 Tesla Model X – autopilot system drove into a highway concrete median, killing the driver.

    • Allegations of product liability, defective design, failure to warn, breach of warranty, false advertising, intentional and negligent misrepresentation.


  • Risks and Challenges – Self-Driving Cars continued ….

    • Self-Driving Cars: Hype-Filled Decade Ends on a Sobering Note. https://www.cnn.com/2019/12/18/tech/self-driving-cars-decade/index.html
    • Tesla crash. https://www.wftv.com/news/trending/man-killed-tesla-suv-crash-was-playing-game-smartphone-while-automated-driving-was-engaged-ntsb-says/PW4UXBWDXBEM3NL66VKFU2BRIU/
    • Tesla – machine learning. https://gizmodo.com/how-a-piece-of-tape-tricked-a-tesla-into-reading-a-35mp-1841791417?utm_campaign=socialflow_gizmodo_facebook&utm_medium=socialflow&utm_source=gizmodo_facebook
    • Sheikh v. Tesla, Inc. (N.D. Cal.) (false claims) settlement. http://www.autopilotsettlement.com/frequently-asked-questions.aspx

  • Risks and Challenges – Predictive Analytics in Credit and Insurance Markets

    • Credit and insurance decisions.
    • AI looking at large amounts of data to identify patterns that humans cannot see.
    • Allows you to learn from the past to predict the future.
    • Determines typical buying behavior or associations.
    • Now shopping on Amazon at 3:00 a.m.?
    • Bank direct deposit history.
    • Red flags for credit scoring.

  • Risks and Challenges – Deepfakes

    • Deepfakes and fake media – AI “rewriting” history by creating fake photos and videos that are almost impossible to determine are fake.
    • Manipulation of preferences and information.
    • The creation of fake evidence in legal matters.
    • Several companies are already developing algorithms using AI to detect deepfakes. Facebook recently announced a partnership with Microsoft and academia to invest in AI systems that identify, flag, and remove harmful deepfakes.
    • The Pentagon is also investing heavily in deepfake-detection technologies such as the Defense Advanced Research Projects Agency’s (DARPA) Media Forensics (MediFor) program to fight AI with AI.

    • https://www.justsecurity.org/69677/deepfakes-2-0-the-new-era-of-truth-decay/


  • Other Risks and Challenges

• Adversarial AI – fake data designed to fool machine learning models.
    • Cybersecurity.
    • Military applications – will the drones start making their own decisions on who to track and attack?
    • Ethical use of AI – employment and other areas.

  • Future of AI

• The concern is with Artificial General Intelligence (“AGI”) – AI that can perform virtually any task as well as or better than humans.

    • Today’s AI machines do not have emotional intelligence, moral reasoning, or creativity.

    • Most experts agree that AGI is not possible in the short to medium term.

• Quantum computing – IBM, Honeywell
    • Can the legal world keep pace?
    • Pets? Maybe, but not any time soon …

  • “Once the computers got control, we might never get it back. We would survive at their sufferance. If we're lucky, they might decide to keep us as pets.”

    - Marvin Minsky, in Life Magazine, November 20, 1970, p. 68.

  • Questions & Answers

    David M. Lawson

    Email: [email protected]

    Phone: +1(205)-354-3363

  • Speaker Biography

    • David Lawson, the owner of Prescient Law, LLC in Birmingham, Alabama, is an experienced corporate attorney and data privacy/security professional.

    • Previously, David worked as a litigator in private practice with a large Birmingham firm and for approximately the past 20 years in the corporate world as a general counsel in the information technology and healthcare industries.

    • David earned his BA and JD degrees from Emory University. Additionally, David holds certifications from the Compliance Certification Board (CCB) as Certified in Healthcare Compliance (CHC), Certified in Healthcare Privacy Compliance (CHPC), and as a Certified Compliance and Ethics Professional (CCEP).

    • David recently completed the Massachusetts Institute of Technology’s Executive Program: MIT Sloan and MIT CSAIL (Artificial Intelligence - Implications for Business Strategy).

• Email: [email protected]
    • Phone: +1(205)-354-3363
    • Website: https://www.prescientlaw.com

  • Additional Resources
