How ‘Explainability’ is Driving the Future of Artificial Intelligence

A Kyndi White Paper
The term “black box” has long been used in science and engineering to denote technology systems and devices that function without divulging their inner workings. The inputs and outputs of the “black box” system may be visible, but the actual implementation of the technology is opaque, hidden from understanding or justification.
The “black box” concept has been exploited by everyone from Silicon Valley start-ups to Wall Street investment firms, usually in their efforts to protect intellectual property and maintain competitiveness: “We’ve developed this powerful new algorithm to generate awesome results and returns for you, but don’t ask us how it works or why. Just trust us.”
But “just trust us” is not cutting it anymore as new technologies such as artificial intelligence (AI) seep into virtually every facet of life. As AI becomes an increasingly essential part of how organizations of all types and sizes operate, there is a growing recognition that the old “black box” approach used by technology companies (including AI providers) is neither sufficient nor appropriate. The fact is, many companies doing business in highly regulated sectors, as well as governmental entities that operate under constant oversight scrutiny, need to be able to explain the “hows” and “whys” of AI-generated results. In many cases, the law mandates this level of openness and accountability.
A November 2017 commentary in the Wall Street Journal outlined the growing concerns about the AI “black box”:
Everyone wants to know: Will artificial intelligence doom mankind—or save the world? But this is the wrong question. In the near future, the biggest challenge to human control and acceptance of artificial intelligence is the technology’s complexity and opacity, not its potential to turn against us like HAL in “2001: A Space Odyssey.” This “black box” problem arises from the trait that makes artificial intelligence so powerful: its ability to learn and improve from experience without explicit instructions.
The MIT Technology Review recently published an article on this same topic, highlighting the growing demand for AI solutions whose results are “explainable” and auditable. The article quotes an executive from a leading financial company, who requires explainability in his AI solutions as a matter of regulatory compliance:
Adam Wenchel, vice president of machine learning and data innovation at Capital One, says the company would like to use deep learning for all sorts of functions, including deciding who is granted a credit card. But it cannot do that because the law requires companies to explain the reason for any such decision to a prospective customer. Late last year Capital One created a research team, led by Wenchel, dedicated to finding ways of making these computer techniques more explainable.
Ryan Welsh, Founder and CEO of Kyndi, a Silicon Valley-based AI solutions company, believes that the technology industry must step up its efforts to embrace “explainable AI” and make its results more explainable and auditable. Kyndi is building the first Explainable AI platform for government, financial services, and healthcare.
“Our mission is to build Explainable AI products and solutions that help to optimize human cognitive performance. A cornerstone of that mission is never to operate as a ‘black box,’” said Welsh. “Explainable AI means that the system can justify its reasoning. Kyndi’s product exists because Deep Learning is a ‘black box’ and cannot be used in regulated industries where organizations are required to explain the reasons for any advice on any decision.”
By creating explainable AI solutions, Kyndi is also helping to mitigate the human bias that can arise in the process of extracting knowledge and answers from data.
The Wall Street Journal’s commentary weighed in on the value of creating AI that is both accountable and explainable:
A better solution is to make artificial intelligence accountable. The concepts of accountability and transparency are sometimes conflated, but the former does not involve disclosure of a system’s inner workings. Instead, accountability should include explainability, confidence measures, procedural regularity, and responsibility. Explainability ensures that nontechnical reasons can be given for why an artificial-intelligence model reached a particular decision. Confidence measures communicate the certainty that a given decision is accurate. Procedural regularity means the artificial-intelligence system’s decision-making process is applied in the same manner every time. And responsibility ensures individuals have easily accessible avenues for disputing decisions that adversely affect them.
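The four accountability properties described above map naturally onto a per-decision record. The sketch below is purely illustrative — the class, field names, and values are hypothetical, not drawn from the commentary or from any real system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccountableDecision:
    """One record per automated decision, covering the four properties."""
    outcome: str          # the decision itself
    explanation: str      # explainability: a nontechnical reason
    confidence: float     # confidence measure, in [0, 1]
    model_version: str    # procedural regularity: same process every time
    appeal_contact: str   # responsibility: an avenue for disputing the decision

decision = AccountableDecision(
    outcome="credit card application declined",
    explanation="Reported income is below the product minimum.",
    confidence=0.92,
    model_version="underwriting-v3",
    appeal_contact="appeals@example.com",
)
print(decision.explanation)
```

Logging a record like this alongside every automated decision gives auditors and affected individuals something concrete to inspect and dispute.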
US Government Advancing ‘Explainable AI’ Through Major DARPA Project
The US Department of Defense (DOD) is pushing Explainable AI because it cannot invest in technology “black boxes” based solely on the promise of “trust us.” The DOD’s Defense Advanced Research Projects Agency (DARPA) has responded to the growing need for greater explainability in AI by launching a major Explainable AI research project. Here is how DARPA describes the rationale for its groundbreaking Explainable AI program:
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by the machine’s current inability to explain their decisions and actions to human users. The Department of Defense is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners. The Explainable AI (XAI) program aims to create a suite of machine learning techniques that:
• Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
• Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The strategy for achieving that goal is to develop new or modified machine-learning techniques that will produce more explainable models.
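One way to see what “a more explainable model” means in practice is a classifier that reports, with each prediction, exactly which rules fired. The toy sketch below is purely illustrative — the rules, thresholds, and feature names are invented, and this is not DARPA’s or any XAI performer’s method:

```python
# Toy rule-based classifier whose every prediction carries its own rationale.
# All rules and thresholds are invented for illustration only.
RULES = [
    ("income < 20000", lambda a: a["income"] < 20000),
    ("debt_ratio > 0.6", lambda a: a["debt_ratio"] > 0.6),
]

def predict_with_rationale(applicant):
    """Return (decision, fired_rules): the rationale travels with the output."""
    fired = [name for name, test in RULES if test(applicant)]
    decision = "deny" if fired else "approve"
    return decision, fired

print(predict_with_rationale({"income": 18000, "debt_ratio": 0.3}))
# → ('deny', ['income < 20000'])
```

A deep network would typically score higher on held-out accuracy than hand-written rules like these; the XAI goal stated above is precisely to narrow that gap while keeping this kind of inspectable rationale.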
Explainable AI Initiatives On the Rise Worldwide
A recent Wired article examined how government entities across the US and around the world have come to the same conclusion as DARPA. They have realized that the old AI “black box” is neither appropriate nor, in many cases, legal, and that AI results need to be explainable and justifiable. The Wired story, “AI Experts Want to End ‘Black Box’ Algorithms in Government,” reported on the broad range of Explainable AI initiatives that are now cropping up around the world:
On Sunday the UK government released a review that examined how to grow the country’s AI industry. It includes a recommendation that the UK’s data regulator develop a framework for explaining decisions made by AI systems.

On Monday New York’s City Council debated a bill that would require city agencies to publish the source code of algorithms used to target individuals with services, penalties, or police resources.

On Tuesday a European Commission working group on data protection released draft guidelines on automated decision making, including that people should have the right to challenge such decisions. The group’s report cautioned that “automated decision-making can pose significant risks for individuals’ rights and freedoms which require appropriate safeguards.” Its guidance will feed into a sweeping new data protection law due to come into force in 2018, known as the GDPR.
Trust and Regulatory Compliance Driving Growing Demand for Explainable AI
To realize AI’s full potential, trust is crucial. Trust comes from understanding—and being able to justify—the reasoning behind an AI system’s conclusions and results. Kyndi believes that Explainable AI achieves the level of trust that is so important for accelerated growth and acceptance of AI. Crucially, it does so without the all-too-familiar “black box” approach.
For Kyndi, Explainable AI means that its software’s reasoning is apparent to the user. This visibility allows users to have confidence in the system’s outputs, be aware of any uncertainties, anticipate how the software will work in the future, and know how to improve the system. Such knowledge is essential to confident analysis and decision making.
Explainable AI is also necessary to provide a natural feedback mechanism so that users can tailor the results to their needs. Because users know why the system produced specific outputs, they will also know how to make the software smarter. Using a process called “calibration,” Kyndi’s customers can teach the software to produce better results in the future. Explainable AI thus becomes the foundation for ongoing iteration and improvement between human and computer.
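The white paper does not detail how calibration works internally. As a hypothetical sketch of the general idea of such a feedback loop (the scoring scheme, update rule, and file names below are invented, not Kyndi’s mechanism), user feedback on individual answers can nudge per-source trust scores:

```python
# Hypothetical feedback loop: user judgments on answers nudge the trust
# score of the source each answer came from. Not Kyndi's actual mechanism.
trust = {"report_a.pdf": 0.5, "report_b.pdf": 0.5}

def calibrate(source, correct, rate=0.1):
    """Move a source's trust score toward 1.0 (correct) or 0.0 (incorrect)."""
    target = 1.0 if correct else 0.0
    trust[source] += rate * (target - trust[source])

calibrate("report_a.pdf", correct=True)    # trust rises to 0.55
calibrate("report_b.pdf", correct=False)   # trust falls to 0.45
print(trust)
```

Note that a loop like this depends on explainability: only because the system can point to the source behind each answer can the user’s feedback be attributed to the right place.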
Kyndi’s novel approach to AI, which unifies probabilistic and logical methods, was built with explainability as a fundamental requirement. A critical function of the software is to answer questions, recognize similarities, and find analogies rapidly. These features enable Kyndi to build models that are made up of a series of questions, for which the software attempts to generate answers from the data provided by customers.
Kyndi’s solutions justify their reasoning by pointing to specific instances in user data and highlighting the relevant words and phrases. This auditability allows government and enterprise users to confidently assess the results when applying them to further analysis or to immediate decisions. All of this information is readily available through Kyndi’s user-friendly interface.
Kyndi’s Explainable AI software is especially relevant to regulated sectors – government, financial services, and healthcare – where organizations are required to explain the reason for any decision. Because Kyndi’s software logs every step of its reasoning process, users can transform regulated business functions with AI, always with the knowledge that the system allows them to justify their decisions when necessary.
Underscoring its Explainable AI Leadership, Kyndi Named to ‘AI 100’ for 2018
In recognition of its leadership and innovation in Explainable AI, Kyndi was recently named to the prestigious AI 100 for 2018. Sponsored by CB Insights, the second annual AI 100 honors a select group of “promising private companies working on groundbreaking artificial intelligence technology.” Kyndi and the other AI companies selected for this year’s AI 100 were culled from a group of more than 1,000 technology firms.
Here is how CB Insights summed up Kyndi’s achievements in its recent AI 100 news release:
Founded in 2014, Kyndi transforms business processes by offering auditable AI products. Its novel approach to AI, which unifies probabilistic and logical methods, enables organizations to analyze massive amounts of data to create actionable knowledge significantly faster and without having to sacrifice explainability. Kyndi’s Explainable AI Platform supports the following solutions: Intelligence, Defense, Compliance (i.e., for financial services and healthcare), and Research.
Kyndi Founder and CEO Ryan Welsh commented on Kyndi’s naming to the 2018 AI 100:
“Being named to CB Insights’ AI 100 is an incredible honor. It is a major industry recognition, and I think it underscores the importance of moving past ‘black box’ machine learning towards Explainable AI products that have auditable reasoning capabilities. Explainability is especially crucial for critical organizations that are required to explain the reason for any decision.”
Explainability is the Future of AI – Right Now

Explainability is at the core of Kyndi’s breakthrough AI products and solutions. Explainability allows users to have confidence in the AI system’s outputs, be aware of any uncertainties, anticipate how the software will work in the future, and know how to improve the system. Such knowledge is essential to confident analysis and decision making. It’s what gives Kyndi’s customers a strong competitive edge.
For more information on Kyndi’s Explainable AI products and solutions, visit www.Kyndi.com or call (650) 437-7440.
About Kyndi

Kyndi is an artificial intelligence company that’s building the first Explainable AI platform for government, financial services, and healthcare. Kyndi transforms business processes by offering auditable AI systems. Our product exists because critical organizations cannot use “black box” machine learning when they are required to explain the reason for any decision. Based in Silicon Valley, Kyndi is backed by leading venture investors.