Prior probability


Not to be confused with A priori probability.

In Bayesian statistical inference, a prior probability distribution, often simply called the prior, of an uncertain quantity is the probability distribution that would express one's beliefs about this quantity before some evidence is taken into account. For example, the prior could be the probability distribution representing the relative proportions of voters who will vote for a particular politician in a future election. The unknown quantity may be a parameter of the model or a latent variable rather than an observable variable.

Bayes' theorem calculates the renormalized pointwise product of the prior and the likelihood function, to produce the posterior probability distribution, which is the conditional distribution of the uncertain quantity given the data.

Similarly, the prior probability of a random event or an uncertain proposition is the unconditional probability that is assigned before any relevant evidence is taken into account.

Priors can be created using a number of methods.[1]: 27–41 A prior can be determined from past information, such as previous experiments. A prior can be elicited from the purely subjective assessment of an experienced expert. An uninformative prior can be created to reflect a balance among outcomes when no information is available. Priors can also be chosen according to some principle, such as symmetry or maximizing entropy given constraints; examples are the Jeffreys prior and Bernardo's reference prior. When a family of conjugate priors exists, choosing a prior from that family simplifies calculation of the posterior distribution.

Parameters of prior distributions are a kind of hyperparameter. For example, if one uses a beta distribution to model the distribution of the parameter p of a Bernoulli distribution, then:

- p is a parameter of the underlying system (Bernoulli distribution), and
- α and β are parameters of the prior distribution (beta distribution); hence hyperparameters.

Hyperparameters themselves may have hyperprior distributions expressing beliefs about their values. A Bayesian model with more than one level of prior like this is called a hierarchical Bayes model.
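Because the beta distribution is conjugate to the Bernoulli likelihood, the posterior in this example is available in closed form: observing s successes and f failures turns a Beta(α, β) prior into a Beta(α + s, β + f) posterior. A minimal Python sketch of this update (the function name and the numbers are illustrative, not taken from the article):

```python
def update_beta_prior(alpha, beta, successes, failures):
    """Conjugate update: a Beta(alpha, beta) prior on a Bernoulli
    parameter p plus observed counts gives a Beta posterior whose
    hyperparameters are simply incremented by the counts."""
    return alpha + successes, beta + failures

# Start from the uniform prior Beta(1, 1) and observe 7 successes in 10 trials.
a_post, b_post = update_beta_prior(1.0, 1.0, successes=7, failures=3)
print(a_post, b_post)              # 8.0 4.0, i.e. Beta(8, 4)
print(a_post / (a_post + b_post))  # posterior mean of p: 0.666...
```

This closed-form bookkeeping is what "choosing a prior from a conjugate family simplifies calculation of the posterior" means in practice.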
Informative priors

An informative prior expresses specific, definite information about a variable. An example is a prior distribution for the temperature at noon tomorrow. A reasonable approach is to make the prior a normal distribution with expected value equal to today's noontime temperature and variance equal to the day-to-day variance of atmospheric temperature, or a distribution of the temperature for that day of the year.

This example has a property in common with many priors, namely, that the posterior from one problem (today's temperature) becomes the prior for another problem (tomorrow's temperature); pre-existing evidence which has already been taken into account is part of the prior and, as more evidence accumulates, the posterior is determined largely by the evidence rather than by any original assumption, provided that the original assumption admitted the possibility of what the evidence is suggesting. The terms "prior" and "posterior" are generally relative to a specific datum or observation.

Weakly informative priors

A weakly informative prior expresses partial information about a variable. An example is, when setting the prior distribution for the temperature at noon tomorrow in St. Louis, to use a normal distribution with mean 50 degrees Fahrenheit and standard deviation 40 degrees, which very loosely constrains the temperature to the range (10 degrees, 90 degrees) with a small chance of being below −30 degrees or above 130 degrees. The purpose of a weakly informative prior is regularization, that is, to keep inferences in a reasonable range.

Uninformative priors

An uninformative, flat, or diffuse prior expresses vague or general information about a variable.[2] The term "uninformative prior" is somewhat of a misnomer; such a prior might also be called a not-very-informative prior, or an objective prior, i.e. one that is not subjectively elicited.

Uninformative priors can express "objective" information such as "the variable is positive" or "the variable is less than some limit". The simplest and oldest rule for determining a non-informative prior is the principle of indifference, which assigns equal probabilities to all possibilities. In parameter estimation problems, the use of an uninformative prior typically yields results which are not too different from conventional statistical analysis, as the likelihood function often yields more information than the uninformative prior.

Some attempts have been made at finding a priori probabilities, i.e. probability distributions in some sense logically required by the nature of one's state of uncertainty; these are a subject of philosophical controversy, with Bayesians roughly divided into two schools: "objective Bayesians", who believe such priors exist in many useful situations, and "subjective Bayesians", who believe that in practice priors usually represent subjective judgements of opinion that cannot be rigorously justified (Williamson 2010). Perhaps the strongest arguments for objective Bayesianism were given by Edwin T. Jaynes, based mainly on the consequences of symmetries and on the principle of maximum entropy.

As an example of an a priori prior, due to Jaynes (2003), consider a situation in which one knows a ball has been hidden under one of three cups, A, B, or C, but no other information is available about its location. In this case a uniform prior of p(A) = p(B) = p(C) = 1/3 seems intuitively like the only reasonable choice. More formally, the problem remains the same if we swap around the labels ("A", "B" and "C") of the cups. It would therefore be odd to choose a prior for which a permutation of the labels would cause a change in our predictions about which cup the ball will be found under; the uniform prior is the only one which preserves this invariance. If one accepts this invariance principle then one can see that the uniform prior is the logically correct prior to represent this state of knowledge. This prior is "objective" in the sense of being the correct choice to represent a particular state of knowledge, but it is not objective in the sense of being an observer-independent feature of the world: in reality the ball exists under a particular cup, and it only makes sense to speak of probabilities in this situation if there is an observer with limited knowledge about the system.
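The invariance argument can be checked numerically. The following sketch (assuming NumPy is available; purely illustrative) tests whether a prior over the labelled cups assigns the same probabilities after every relabelling; the uniform prior passes and any non-uniform prior fails:

```python
import itertools
import numpy as np

def permutation_invariant(prior, tol=1e-12):
    """True if the prior is unchanged by every permutation of the labels."""
    p = np.asarray(prior, dtype=float)
    return all(np.allclose(p, p[list(perm)], atol=tol)
               for perm in itertools.permutations(range(len(p))))

print(permutation_invariant([1/3, 1/3, 1/3]))  # True: the uniform prior
print(permutation_invariant([0.5, 0.3, 0.2]))  # False: relabelling changes it
```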
As a more contentious example, Jaynes published an argument (Jaynes 1968), based on the invariance of the prior under a change of parameters, that suggests that the prior representing complete uncertainty about a probability should be the Haldane prior p^{-1}(1 − p)^{-1}. The example Jaynes gives is of finding a chemical in a lab and asking whether it will dissolve in water in repeated experiments. The Haldane prior[3] gives by far the most weight to p = 0 and p = 1, indicating that the sample will either dissolve every time or never dissolve, with equal probability. However, if one has observed samples of the chemical to dissolve in one experiment and not to dissolve in another experiment, then this prior is updated to the uniform distribution on the interval [0, 1]. This is obtained by applying Bayes' theorem to the data set consisting of one observation of dissolving and one of not dissolving, using the above prior. The Haldane prior is an improper prior distribution (meaning that it has infinite mass). Harold Jeffreys devised a systematic way of designing uninformative priors, such as the Jeffreys prior p^{-1/2}(1 − p)^{-1/2} for the Bernoulli random variable.

Priors can be constructed which are proportional to the Haar measure if the parameter space X carries a natural group structure which leaves invariant our Bayesian state of knowledge (Jaynes, 1968). This can be seen as a generalisation of the invariance principle used to justify the uniform prior over the three cups in the example above. For example, in physics we might expect that an experiment will give the same results regardless of our choice of the origin of a coordinate system. This induces the group structure of the translation group on X, which determines the prior probability as a constant improper prior. Similarly, some measurements are naturally invariant to the choice of an arbitrary scale (e.g., whether centimeters or inches are used, the physical results should be equal). In such a case, the scale group is the natural group structure, and the corresponding prior on X is proportional to 1/x. It sometimes matters whether we use the left-invariant or right-invariant Haar measure. For example, the left- and right-invariant Haar measures on the affine group are not equal. Berger (1985, p. 413) argues that the right-invariant Haar measure is the correct choice.

Another idea, championed by Edwin T. Jaynes, is to use the principle of maximum entropy (MAXENT). The motivation is that the Shannon entropy of a probability distribution measures the amount of information contained in the distribution. The larger the entropy, the less information is provided by the distribution. Thus, by maximizing the entropy over a suitable set of probability distributions on X, one finds the distribution that is least informative in the sense that it contains the least amount of information consistent with the constraints that define the set. For example, the maximum entropy prior on a discrete space, given only that the probability is normalized to 1, is the prior that assigns equal probability to each state. And in the continuous case, the maximum entropy prior given that the density is normalized with mean zero and unit variance is the standard normal distribution. The principle of minimum cross-entropy generalizes MAXENT to the case of "updating" an arbitrary prior distribution with suitable constraints in the maximum-entropy sense.
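For the discrete case with no constraint beyond normalization, the claim that the uniform distribution maximizes entropy is easy to check numerically. A small sketch (assuming NumPy; the comparison distributions are made up for illustration):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy -sum(p log p) of a discrete distribution (natural log)."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

uniform = np.full(4, 0.25)
print(shannon_entropy(uniform))                # log(4) ~ 1.386, the maximum
print(shannon_entropy([0.4, 0.3, 0.2, 0.1]))   # ~ 1.280, strictly smaller
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))   # ~ 0.940, smaller still
```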
A related idea, reference priors, was introduced by José-Miguel Bernardo. Here, the idea is to maximize the expected Kullback–Leibler divergence of the posterior distribution relative to the prior. This maximizes the expected posterior information about x when the prior density is p(x); thus, in some sense, p(x) is the "least informative" prior about x. The reference prior is defined in the asymptotic limit, i.e., one considers the limit of the priors so obtained as the number of data points goes to infinity. In the present case, the KL divergence between the prior and posterior distributions is given by

    KL = \int p(t) \int p(x \mid t) \log \frac{p(x \mid t)}{p(x)} \, dx \, dt .

Here, t is a sufficient statistic for some parameter x. The inner integral is the KL divergence between the posterior p(x | t) and prior p(x) distributions, and the result is the weighted mean over all values of t. Splitting the logarithm into two parts, reversing the order of integrals in the second part, and noting that log[p(x)] does not depend on t yields

    KL = \int p(t) \int p(x \mid t) \log[p(x \mid t)] \, dx \, dt - \int \log[p(x)] \int p(t)\, p(x \mid t) \, dt \, dx .

The inner integral in the second part is the integral over t of the joint density p(x, t). This is the marginal distribution p(x), so we have

    KL = \int p(t) \int p(x \mid t) \log[p(x \mid t)] \, dx \, dt - \int p(x) \log[p(x)] \, dx .

Now we use the concept of entropy which, in the case of probability distributions, is the negative expected value of the logarithm of the probability mass or density function, or

    H(x) = -\int p(x) \log[p(x)] \, dx .

Using this in the last equation yields

    KL = -\int p(t)\, H(x \mid t) \, dt + H(x) .

In words, KL is the negative expected value over t of the entropy of x conditional on t, plus the marginal (i.e. unconditional) entropy of x. In the limiting case where the sample size tends to infinity, the Bernstein–von Mises theorem states that the distribution of x conditional on a given observed value of t is normal with a variance equal to the reciprocal of the Fisher information at the 'true' value of x. The entropy of a normal density function is equal to half the logarithm of 2\pi e v, where v is the variance of the distribution. In this case therefore

    H = \log \sqrt{2\pi e / [N I(x^*)]} ,

where N is the arbitrarily large sample size (to which the Fisher information is proportional) and x^* is the 'true' value. Since this does not depend on t, it can be taken out of the integral, and as this integral is over a probability space it equals one. Hence we can write the asymptotic form of KL as

    KL = -\log\left[ 1 / \sqrt{k\, I(x^*)} \right] - \int p(x) \log[p(x)] \, dx ,

where k is proportional to the (asymptotically large) sample size. We do not know the value of x^*. Indeed, the very idea goes against the philosophy of Bayesian inference, in which 'true' values of parameters are replaced by prior and posterior distributions. So we remove x^* by replacing it with x and taking the expected value of the normal entropy, which we obtain by multiplying by p(x) and integrating over x. This allows us to combine the logarithms, yielding

    KL = -\int p(x) \log\left[ p(x) / \sqrt{k\, I(x)} \right] dx .
This is a quasi-KL divergence ("quasi" in the sense that the square root of the Fisher information may be the kernel of an improper distribution). Due to the minus sign, we need to minimise this in order to maximise the KL divergence with which we started. The minimum value of the last equation occurs where the two distributions in the logarithm argument, improper or not, do not diverge. This in turn occurs when the prior distribution is proportional to the square root of the Fisher information of the likelihood function. Hence in the single-parameter case, reference priors and Jeffreys priors are identical, even though Jeffreys had a very different rationale.

Reference priors are often the objective prior of choice in multivariate problems, since other rules (e.g., Jeffreys' rule) may result in priors with problematic behavior.

Objective prior distributions may also be derived from other principles, such as information or coding theory (see e.g. minimum description length) or frequentist statistics (see frequentist matching). Such methods are used in Solomonoff's theory of inductive inference. The construction of objective priors has recently been introduced in bioinformatics, especially in inference in cancer systems biology, where sample size is limited and a vast amount of prior knowledge is available. In these methods, an information-theory-based criterion, such as the KL divergence or the log-likelihood function, is used for binary supervised learning problems[4] and mixture model problems.[5]

Philosophical problems associated with uninformative priors concern the choice of an appropriate metric, or measurement scale. Suppose we want a prior for the running speed of a runner who is unknown to us. We could specify, say, a normal distribution as the prior for his speed, but alternatively we could specify a normal prior for the time he takes to complete 100 metres, which is proportional to the reciprocal of the first prior. These are very different priors, but it is not clear which is to be preferred. Jaynes' often-overlooked method of transformation groups can answer this question in some situations.[6]

Similarly, if asked to estimate an unknown proportion between 0 and 1, we might say that all proportions are equally likely, and use a uniform prior. Alternatively, we might say that all orders of magnitude for the proportion are equally likely, the logarithmic prior, which is the uniform prior on the logarithm of the proportion. The Jeffreys prior attempts to solve this problem by computing a prior which expresses the same belief no matter which metric is used. The Jeffreys prior for an unknown proportion p is p^{-1/2}(1 − p)^{-1/2}, which differs from Jaynes' recommendation.

Priors based on notions of algorithmic probability are used in inductive inference as a basis for induction in very general settings.

Practical problems associated with uninformative priors include the requirement that the posterior distribution be proper. The usual uninformative priors on continuous, unbounded variables are improper. This need not be a problem if the posterior distribution is proper. Another issue of importance is that if an uninformative prior is to be used routinely, i.e., with many different data sets, it should have good frequentist properties. Normally a Bayesian would not be concerned with such issues, but they can be important in this situation. For example, one would want any decision rule based on the posterior distribution to be admissible under the adopted loss function. Unfortunately, admissibility is often difficult to check, although some results are known (e.g., Berger and Strawderman 1996). The issue is particularly acute with hierarchical Bayes models; the usual priors (e.g., Jeffreys' prior) may give badly inadmissible decision rules if employed at the higher levels of the hierarchy.
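To make the single-parameter result above concrete: for a Bernoulli likelihood the Fisher information is I(p) = 1/(p(1 − p)), so the rule "prior proportional to the square root of I(p)" reproduces p^{-1/2}(1 − p)^{-1/2}, i.e. the Beta(1/2, 1/2) distribution, whose normalizing constant is π. A brief sketch (assuming NumPy and SciPy are available; illustrative only):

```python
import numpy as np
from scipy import stats

def jeffreys_prior_bernoulli(p):
    """Jeffreys prior for a Bernoulli parameter: square root of the Fisher
    information I(p) = 1/(p(1-p)), divided by pi so it integrates to 1."""
    fisher_information = 1.0 / (p * (1.0 - p))
    return np.sqrt(fisher_information) / np.pi

p = np.linspace(0.1, 0.9, 5)
print(jeffreys_prior_bernoulli(p))   # matches the line below
print(stats.beta(0.5, 0.5).pdf(p))   # the Beta(1/2, 1/2) density
```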
Improper priors

Let events A_1, A_2, \ldots, A_n be mutually exclusive and exhaustive. If Bayes' theorem is written as

    P(A_i \mid B) = \frac{P(B \mid A_i)\, P(A_i)}{\sum_j P(B \mid A_j)\, P(A_j)} ,

then it is clear that the same result would be obtained if all the prior probabilities P(A_i) and P(A_j) were multiplied by a given constant; the same would be true for a continuous random variable. If the summation in the denominator converges, the posterior probabilities will still sum (or integrate) to 1 even if the prior values do not, and so the priors may only need to be specified in the correct proportion. Taking this idea further, in many cases the sum or integral of the prior values may not even need to be finite to get sensible answers for the posterior probabilities. When this is the case, the prior is called an improper prior. However, the posterior distribution need not be a proper distribution if the prior is improper; this is clear from the case where event B is independent of all of the A_j.

Statisticians sometimes[7] use improper priors as uninformative priors. For example, if they need a prior distribution for the mean and variance of a random variable, they may assume p(m, v) ∝ 1/v (for v > 0), which would suggest that any value for the mean is "equally likely" and that a value for the positive variance becomes "less likely" in inverse proportion to its value. Many authors (Lindley, 1973; De Groot, 1937; Kass and Wasserman, 1996) warn against the danger of over-interpreting those priors since they are not probability densities. The only relevance they have is found in the corresponding posterior, as long as it is well-defined for all observations. (The Haldane prior is a typical counterexample.)

By contrast, likelihood functions do not need to be integrated, and a likelihood function that is uniformly 1 corresponds to the absence of data (all models are equally likely, given no data): Bayes' rule multiplies a prior by the likelihood, and an empty product is just the constant likelihood 1. However, without starting with a prior probability distribution, one does not end up getting a posterior probability distribution, and thus cannot integrate or compute expected values or loss. See Likelihood function § Non-integrability for details.

Examples

Examples of improper priors include:

- The uniform distribution on an infinite interval (i.e., a half-line or the entire real line).
- Beta(0, 0), the beta distribution with α = 0, β = 0 (the uniform distribution on the log-odds scale).
- The logarithmic prior on the positive reals (the uniform distribution on the log scale).

Note that these functions, interpreted as uniform distributions, can also be interpreted as the likelihood function in the absence of data, but are not proper priors.
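Even though such priors have infinite mass, they can still yield proper posteriors. A hedged sketch (assuming NumPy; the data values are made up): under the improper flat prior p(μ) ∝ 1 on the whole real line and a normal likelihood with known σ, the posterior for μ is the proper distribution Normal(sample mean, σ²/n), determined entirely by the likelihood:

```python
import numpy as np

def posterior_under_flat_prior(data, sigma):
    """Posterior for a normal mean mu under the improper flat prior
    p(mu) ∝ 1: Normal(sample mean, sigma^2 / n), which is proper."""
    data = np.asarray(data, dtype=float)
    return data.mean(), sigma / np.sqrt(len(data))  # posterior mean, posterior sd

mu_post, sd_post = posterior_under_flat_prior([4.8, 5.1, 5.3, 4.9], sigma=0.5)
print(mu_post, sd_post)  # 5.025 0.25 -- a normalizable posterior from an improper prior
```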
See also

- Base rate
- Bayesian epistemology
- Strong prior

Notes

1. ^ Carlin, Bradley P.; Louis, Thomas A. (2008). Bayesian Methods for Data Analysis (Third ed.). CRC Press. ISBN 9781584886983.
2. ^ Zellner, Arnold (1971). "Prior Distributions to Represent 'Knowing Little'". An Introduction to Bayesian Inference in Econometrics. New York: John Wiley & Sons. pp. 41–53. ISBN 0-471-98165-6.
3. ^ This prior was proposed by J. B. S. Haldane in "A note on inverse probability", Mathematical Proceedings of the Cambridge Philosophical Society 28, 55–61, 1932, doi:10.1017/S0305004100010495. See also J. Haldane, "The precision of observed values of small frequencies", Biometrika, 35:297–300, 1948, doi:10.2307/2332350, JSTOR 2332350.
4. ^ Esfahani, M. S.; Dougherty, E. R. (2014). "Incorporation of Biological Pathway Knowledge in the Construction of Priors for Optimal Bayesian Classification". IEEE/ACM Transactions on Computational Biology and Bioinformatics. 11 (1): 202–18. doi:10.1109/TCBB.2013.143. PMID 26355519. S2CID 10096507.
5. ^ Boluki, Shahin; Esfahani, Mohammad Shahrokh; Qian, Xiaoning; Dougherty, Edward R (December 2017). "Incorporating biological prior knowledge for Bayesian learning via maximal knowledge-driven information priors". BMC Bioinformatics. 18 (S14): 552. doi:10.1186/s12859-017-1893-4. ISSN 1471-2105. PMC 5751802. PMID 29297278.
6. ^ Jaynes (1968), pp. 17, see also Jaynes (2003), chapter 12. Note that chapter 12 is not available in the online preprint but can be previewed via Google Books.
7. ^ Christensen, Ronald; Johnson, Wesley; Branscum, Adam; Hanson, Timothy E. (2010). Bayesian Ideas and Data Analysis: An Introduction for Scientists and Statisticians. Hoboken: CRC Press. p. 69. ISBN 9781439894798.

References

- Bauwens, Luc; Lubrano, Michel; Richard, Jean-François (1999). "Prior Densities for the Regression Model". Bayesian Inference in Dynamic Econometric Models. Oxford University Press. pp. 94–128. ISBN 0-19-877313-7.
- Rubin, Donald B.; Gelman, Andrew; Carlin, John B.; Stern, Hal (2003). Bayesian Data Analysis (2nd ed.). Boca Raton: Chapman & Hall/CRC. ISBN 978-1-58488-388-3. MR 2027492.
- Berger, James O. (1985). Statistical Decision Theory and Bayesian Analysis. Berlin: Springer-Verlag. ISBN 978-0-387-96098-2. MR 0804611.
- Berger, James O.; Strawderman, William E. (1996). "Choice of hierarchical priors: admissibility in estimation of normal means". Annals of Statistics. 24 (3): 931–951. doi:10.1214/aos/1032526950. MR 1401831. Zbl 0865.62004.
- Bernardo, Jose M. (1979). "Reference Posterior Distributions for Bayesian Inference". Journal of the Royal Statistical Society, Series B. 41 (2): 113–147. JSTOR 2985028. MR 0547240.
- Berger, James O.; Bernardo, José M.; Sun, Dongchu (2009). "The formal definition of reference priors". Annals of Statistics. 37 (2): 905–938. arXiv:0904.0156. doi:10.1214/07-AOS587. S2CID 3221355.
- Jaynes, Edwin T. (September 1968). "Prior Probabilities" (PDF). IEEE Transactions on Systems Science and Cybernetics. 4 (3): 227–241. doi:10.1109/TSSC.1968.300117. Reprinted in Rosenkrantz, Roger D. (1989). E. T. Jaynes: Papers on Probability, Statistics, and Statistical Physics. Boston: Kluwer Academic Publishers. pp. 116–130. ISBN 978-90-277-1448-0.
- Jaynes, Edwin T. (2003). Probability Theory: The Logic of Science. Cambridge University Press. ISBN 978-0-521-59271-0.
- Williamson, Jon (2010). "Review of Bruno di Finetti. Philosophical Lectures on Probability" (PDF). Philosophia Mathematica. 18 (1): 130–135. doi:10.1093/philmat/nkp019.


