AlexNet convolutional neural network - MATLAB ... - MathWorks
alexnet
AlexNet convolutional neural network

Syntax

net = alexnet
net = alexnet('Weights','imagenet')
layers = alexnet('Weights','none')

Description

AlexNet is a convolutional neural network that is 8 layers deep. You can load a pretrained version of the network trained on more than a million images from the ImageNet database [1]. The pretrained network can classify images into 1000 object categories, such as keyboard, mouse, pencil, and many animals. As a result, the network has learned rich feature representations for a wide range of images. The network has an image input size of 227-by-227. For more pretrained networks in MATLAB®, see Pretrained Deep Neural Networks.

You can use classify to classify new images using the AlexNet network. Follow the steps of Classify Image Using GoogLeNet and replace GoogLeNet with AlexNet.

For a free hands-on introduction to practical deep learning methods, see Deep Learning Onramp.

net = alexnet returns an AlexNet network trained on the ImageNet data set. This function requires the Deep Learning Toolbox™ Model for AlexNet Network support package. If this support package is not installed, the function provides a download link. Alternatively, see Deep Learning Toolbox Model for AlexNet Network. For more pretrained networks in MATLAB, see Pretrained Deep Neural Networks.

net = alexnet('Weights','imagenet') returns an AlexNet network trained on the ImageNet data set. This syntax is equivalent to net = alexnet.

layers = alexnet('Weights','none') returns the untrained AlexNet network architecture. The untrained model does not require the support package.

Examples

Download AlexNet Support Package

Download and install the Deep Learning Toolbox Model for AlexNet Network support package.

Type alexnet at the command line.

alexnet

If the Deep Learning Toolbox Model for AlexNet Network support package is not installed, then the function provides a link to the required support package in the Add-On Explorer. To install the support package, click the link, and then click Install. Check that the installation is successful by typing alexnet at the command line.

alexnet

ans = 
  SeriesNetwork with properties:

    Layers: [25×1 nnet.cnn.layer.Layer]

If the required support package is installed, then the function returns a SeriesNetwork object.

Visualize the network using Deep Network Designer.

deepNetworkDesigner(alexnet)

Explore other pretrained networks in Deep Network Designer by clicking New. If you need to download a network, pause on the desired network and click Install to open the Add-On Explorer.
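If you call alexnet from a script and want to handle a missing support package without stopping at the error, one possible pattern, shown here as a sketch rather than as part of the MathWorks example, is to wrap the call in try/catch and fall back to the untrained architecture, which needs no support package:

try
    net = alexnet;                          % pretrained weights; requires the support package
catch err
    disp(err.message)                       % the error message contains the download link
    layers = alexnet('Weights','none');     % untrained architecture; no support package needed
end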
Transfer Learning Using AlexNet

This example uses: Deep Learning Toolbox, Deep Learning Toolbox Model for AlexNet Network.

This example shows how to fine-tune a pretrained AlexNet convolutional neural network to perform classification on a new collection of images.

AlexNet has been trained on over a million images and can classify images into 1000 object categories (such as keyboard, coffee mug, pencil, and many animals). The network has learned rich feature representations for a wide range of images. The network takes an image as input and outputs a label for the object in the image together with the probabilities for each of the object categories.

Transfer learning is commonly used in deep learning applications. You can take a pretrained network and use it as a starting point to learn a new task. Fine-tuning a network with transfer learning is usually much faster and easier than training a network with randomly initialized weights from scratch. You can quickly transfer learned features to a new task using a smaller number of training images.

Load Data

Unzip and load the new images as an image datastore. imageDatastore automatically labels the images based on folder names and stores the data as an ImageDatastore object. An image datastore enables you to store large image data, including data that does not fit in memory, and efficiently read batches of images during training of a convolutional neural network.

unzip('MerchData.zip');
imds = imageDatastore('MerchData', ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');

Divide the data into training and validation data sets. Use 70% of the images for training and 30% for validation. splitEachLabel splits the image datastore into two new datastores.

[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized');

This very small data set now contains 55 training images and 20 validation images. Display some sample images.

numTrainImages = numel(imdsTrain.Labels);
idx = randperm(numTrainImages,16);
figure
for i = 1:16
    subplot(4,4,i)
    I = readimage(imdsTrain,idx(i));
    imshow(I)
end
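To confirm how splitEachLabel distributed the images across the five classes, you can also tabulate the labels in each datastore. This check is not part of the original example; it uses the standard countEachLabel method of ImageDatastore:

countEachLabel(imdsTrain)        % table of class names and image counts in the training set
countEachLabel(imdsValidation)   % table of class names and image counts in the validation set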
Load Pretrained Network

Load the pretrained AlexNet neural network. If Deep Learning Toolbox™ Model for AlexNet Network is not installed, then the software provides a download link. AlexNet is trained on more than one million images and can classify images into 1000 object categories, such as keyboard, mouse, pencil, and many animals. As a result, the model has learned rich feature representations for a wide range of images.

net = alexnet;

Use analyzeNetwork to display an interactive visualization of the network architecture and detailed information about the network layers.

analyzeNetwork(net)

The first layer, the image input layer, requires input images of size 227-by-227-by-3, where 3 is the number of color channels.

inputSize = net.Layers(1).InputSize

inputSize = 1×3

   227   227     3

Replace Final Layers

The last three layers of the pretrained network net are configured for 1000 classes. These three layers must be fine-tuned for the new classification problem. Extract all layers, except the last three, from the pretrained network.

layersTransfer = net.Layers(1:end-3);

Transfer the layers to the new classification task by replacing the last three layers with a fully connected layer, a softmax layer, and a classification output layer. Specify the options of the new fully connected layer according to the new data. Set the fully connected layer to have the same size as the number of classes in the new data. To learn faster in the new layers than in the transferred layers, increase the WeightLearnRateFactor and BiasLearnRateFactor values of the fully connected layer.

numClasses = numel(categories(imdsTrain.Labels))

numClasses = 5

layers = [
    layersTransfer
    fullyConnectedLayer(numClasses,'WeightLearnRateFactor',20,'BiasLearnRateFactor',20)
    softmaxLayer
    classificationLayer];

Train Network

The network requires input images of size 227-by-227-by-3, but the images in the image datastores have different sizes. Use an augmented image datastore to automatically resize the training images. Specify additional augmentation operations to perform on the training images: randomly flip the training images along the vertical axis, and randomly translate them up to 30 pixels horizontally and vertically. Data augmentation helps prevent the network from overfitting and memorizing the exact details of the training images.

pixelRange = [-30 30];
imageAugmenter = imageDataAugmenter( ...
    'RandXReflection',true, ...
    'RandXTranslation',pixelRange, ...
    'RandYTranslation',pixelRange);
augimdsTrain = augmentedImageDatastore(inputSize(1:2),imdsTrain, ...
    'DataAugmentation',imageAugmenter);

To automatically resize the validation images without performing further data augmentation, use an augmented image datastore without specifying any additional preprocessing operations.

augimdsValidation = augmentedImageDatastore(inputSize(1:2),imdsValidation);
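Before training, you can optionally inspect what the augmented training datastore produces. The following sketch is not part of the original example; it assumes the augimdsTrain datastore defined above and uses preview, which reads one mini-batch without advancing the datastore:

minibatch = preview(augimdsTrain);    % table with one mini-batch of resized, augmented images
imshow(imtile(minibatch.input))       % show the augmented images as a montage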
Specify the training options. For transfer learning, keep the features from the early layers of the pretrained network (the transferred layer weights). To slow down learning in the transferred layers, set the initial learning rate to a small value. In the previous step, you increased the learning rate factors for the fully connected layer to speed up learning in the new final layers. This combination of learning rate settings results in fast learning only in the new layers and slower learning in the other layers. When performing transfer learning, you do not need to train for as many epochs. An epoch is a full training cycle on the entire training data set. Specify the mini-batch size and validation data. The software validates the network every ValidationFrequency iterations during training.

options = trainingOptions('sgdm', ...
    'MiniBatchSize',10, ...
    'MaxEpochs',6, ...
    'InitialLearnRate',1e-4, ...
    'Shuffle','every-epoch', ...
    'ValidationData',augimdsValidation, ...
    'ValidationFrequency',3, ...
    'Verbose',false, ...
    'Plots','training-progress');

Train the network that consists of the transferred and new layers. By default, trainNetwork uses a GPU if one is available; otherwise, it uses a CPU. Training on a GPU requires Parallel Computing Toolbox™ and a supported GPU device. For information on supported devices, see GPU Support by Release (Parallel Computing Toolbox). You can also specify the execution environment by using the 'ExecutionEnvironment' name-value pair argument of trainingOptions.

netTransfer = trainNetwork(augimdsTrain,layers,options);

Classify Validation Images

Classify the validation images using the fine-tuned network.

[YPred,scores] = classify(netTransfer,augimdsValidation);

Display four sample validation images with their predicted labels.

idx = randperm(numel(imdsValidation.Files),4);
figure
for i = 1:4
    subplot(2,2,i)
    I = readimage(imdsValidation,idx(i));
    imshow(I)
    label = YPred(idx(i));
    title(string(label));
end

Calculate the classification accuracy on the validation set. Accuracy is the fraction of labels that the network predicts correctly.

YValidation = imdsValidation.Labels;
accuracy = mean(YPred == YValidation)

accuracy = 1

For tips on improving classification accuracy, see Deep Learning Tips and Tricks.

Classify an Image Using AlexNet

This example uses: Deep Learning Toolbox, Deep Learning Toolbox Model for AlexNet Network.

Read, resize, and classify an image using AlexNet. First, load a pretrained AlexNet model.

net = alexnet;

Read the image using imread.

I = imread('peppers.png');
figure
imshow(I)

The pretrained model requires the image size to be the same as the input size of the network. Determine the input size of the network using the InputSize property of the first layer of the network.

sz = net.Layers(1).InputSize

sz = 1×3

   227   227     3

Resize the image to the input size of the network.

I = imresize(I,sz(1:2));
figure
imshow(I)

Classify the image using classify.

label = classify(net,I)

label = categorical
     bell pepper 

Show the image and classification result together.

figure
imshow(I)
title(label)
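classify also returns the prediction scores, so you can list the most likely classes instead of only the top label. This short sketch is an addition (it follows the same pattern as the Classify Image Using GoogLeNet example) and reuses the net and resized image I from above:

[label,scores] = classify(net,I);
classNames = net.Layers(end).Classes;                            % the 1000 ImageNet class names
[~,idx] = sort(scores,'descend');
idx = idx(1:5);                                                  % indices of the five highest scores
table(classNames(idx),scores(idx)','VariableNames',{'Class','Score'})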
Feature Extraction Using AlexNet

This example uses: Deep Learning Toolbox, Deep Learning Toolbox Model for AlexNet Network, Statistics and Machine Learning Toolbox.

This example shows how to extract learned image features from a pretrained convolutional neural network, and use those features to train an image classifier. Feature extraction is the easiest and fastest way to use the representational power of pretrained deep networks. For example, you can train a support vector machine (SVM) using fitcecoc (Statistics and Machine Learning Toolbox™) on the extracted features. Because feature extraction only requires a single pass through the data, it is a good starting point if you do not have a GPU to accelerate network training.

Load Data

Unzip and load the sample images as an image datastore. imageDatastore automatically labels the images based on folder names and stores the data as an ImageDatastore object. An image datastore lets you store large image data, including data that does not fit in memory. Split the data into 70% training and 30% test data.

unzip('MerchData.zip');
imds = imageDatastore('MerchData', ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');
[imdsTrain,imdsTest] = splitEachLabel(imds,0.7,'randomized');

There are now 55 training images and 20 test images in this very small data set. Display some sample images.

numImagesTrain = numel(imdsTrain.Labels);
idx = randperm(numImagesTrain,16);
for i = 1:16
    I{i} = readimage(imdsTrain,idx(i));
end
figure
imshow(imtile(I))

Load Pretrained Network

Load a pretrained AlexNet network. If the Deep Learning Toolbox Model for AlexNet Network support package is not installed, then the software provides a download link. AlexNet is trained on more than a million images and can classify images into 1000 object categories, for example keyboard, mouse, pencil, and many animals. As a result, the model has learned rich feature representations for a wide range of images.

net = alexnet;

Display the network architecture. The network has five convolutional layers and three fully connected layers.

net.Layers

ans = 
  25x1 Layer array with layers:

     1   'data'     Image Input                   227x227x3 images with 'zerocenter' normalization
     2   'conv1'    Convolution                   96 11x11x3 convolutions with stride [4 4] and padding [0 0 0 0]
     3   'relu1'    ReLU                          ReLU
     4   'norm1'    Cross Channel Normalization   cross channel normalization with 5 channels per element
     5   'pool1'    Max Pooling                   3x3 max pooling with stride [2 2] and padding [0 0 0 0]
     6   'conv2'    Grouped Convolution           2 groups of 128 5x5x48 convolutions with stride [1 1] and padding [2 2 2 2]
     7   'relu2'    ReLU                          ReLU
     8   'norm2'    Cross Channel Normalization   cross channel normalization with 5 channels per element
     9   'pool2'    Max Pooling                   3x3 max pooling with stride [2 2] and padding [0 0 0 0]
    10   'conv3'    Convolution                   384 3x3x256 convolutions with stride [1 1] and padding [1 1 1 1]
    11   'relu3'    ReLU                          ReLU
    12   'conv4'    Grouped Convolution           2 groups of 192 3x3x192 convolutions with stride [1 1] and padding [1 1 1 1]
    13   'relu4'    ReLU                          ReLU
    14   'conv5'    Grouped Convolution           2 groups of 128 3x3x192 convolutions with stride [1 1] and padding [1 1 1 1]
    15   'relu5'    ReLU                          ReLU
    16   'pool5'    Max Pooling                   3x3 max pooling with stride [2 2] and padding [0 0 0 0]
    17   'fc6'      Fully Connected               4096 fully connected layer
    18   'relu6'    ReLU                          ReLU
    19   'drop6'    Dropout                       50% dropout
    20   'fc7'      Fully Connected               4096 fully connected layer
    21   'relu7'    ReLU                          ReLU
    22   'drop7'    Dropout                       50% dropout
    23   'fc8'      Fully Connected               1000 fully connected layer
    24   'prob'     Softmax                       softmax
    25   'output'   Classification Output         crossentropyex with 'tench' and 999 other classes

The first layer, the image input layer, requires input images of size 227-by-227-by-3, where 3 is the number of color channels.

inputSize = net.Layers(1).InputSize

inputSize = 1×3

   227   227     3

Extract Image Features

The network constructs a hierarchical representation of input images. Deeper layers contain higher-level features, constructed using the lower-level features of earlier layers. To get the feature representations of the training and test images, use activations on the fully connected layer 'fc7'. To get a lower-level representation of the images, use an earlier layer in the network.

The network requires input images of size 227-by-227-by-3, but the images in the image datastores have different sizes. To automatically resize the training and test images before they are input to the network, create augmented image datastores, specify the desired image size, and use these datastores as input arguments to activations.

augimdsTrain = augmentedImageDatastore(inputSize(1:2),imdsTrain);
augimdsTest = augmentedImageDatastore(inputSize(1:2),imdsTest);

layer = 'fc7';
featuresTrain = activations(net,augimdsTrain,layer,'OutputAs','rows');
featuresTest = activations(net,augimdsTest,layer,'OutputAs','rows');

Extract the class labels from the training and test data.

YTrain = imdsTrain.Labels;
YTest = imdsTest.Labels;
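As noted above, an earlier layer gives a lower-level representation. As an illustrative variation on the example (not part of the original), the same activations call works with, for instance, the 'pool5' layer from the layer listing; its output has 6*6*256 = 9216 elements per image instead of the 4096 of 'fc7':

layerLow = 'pool5';   % earlier max pooling layer (see the layer listing above)
featuresTrainLow = activations(net,augimdsTrain,layerLow,'OutputAs','rows');
featuresTestLow  = activations(net,augimdsTest,layerLow,'OutputAs','rows');
size(featuresTrainLow)    % 55-by-9216: one row of pool5 features per training image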
Fit Image Classifier

Use the features extracted from the training images as predictor variables and fit a multiclass support vector machine (SVM) using fitcecoc (Statistics and Machine Learning Toolbox).

mdl = fitcecoc(featuresTrain,YTrain);

Classify Test Images

Classify the test images using the trained SVM model and the features extracted from the test images.

YPred = predict(mdl,featuresTest);

Display four sample test images with their predicted labels.

idx = [1 5 10 15];
figure
for i = 1:numel(idx)
    subplot(2,2,i)
    I = readimage(imdsTest,idx(i));
    label = YPred(idx(i));
    imshow(I)
    title(label)
end

Calculate the classification accuracy on the test set. Accuracy is the fraction of labels that the network predicts correctly.

accuracy = mean(YPred == YTest)

accuracy = 1

This SVM has high accuracy. If the accuracy is not high enough using feature extraction, then try transfer learning instead.

Output Arguments

net — Pretrained AlexNet convolutional neural network
SeriesNetwork object

Pretrained AlexNet convolutional neural network, returned as a SeriesNetwork object.

layers — Untrained AlexNet convolutional neural network architecture
Layer array

Untrained AlexNet convolutional neural network architecture, returned as a Layer array.

Tips

For a free hands-on introduction to practical deep learning methods, see Deep Learning Onramp.

References

[1] ImageNet. http://www.image-net.org
[2] Russakovsky, O., Deng, J., Su, H., et al. "ImageNet Large Scale Visual Recognition Challenge." International Journal of Computer Vision (IJCV). Vol. 115, Issue 3, 2015, pp. 211–252.
[3] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet Classification with Deep Convolutional Neural Networks." Advances in Neural Information Processing Systems. 2012.
[4] BVLC AlexNet Model. https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet

Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.

For code generation, you can load the network by using the syntax net = alexnet or by passing the alexnet function to coder.loadDeepLearningNetwork (MATLAB Coder). For example: net = coder.loadDeepLearningNetwork('alexnet'). For more information, see Load Pretrained Networks for Code Generation (MATLAB Coder). The syntax alexnet('Weights','none') is not supported for code generation.

GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.

Usage notes and limitations: For code generation, you can load the network by using the syntax net = alexnet or by passing the alexnet function to coder.loadDeepLearningNetwork (GPU Coder). For example: net = coder.loadDeepLearningNetwork('alexnet'). For more information, see Load Pretrained Networks for Code Generation (GPU Coder). The syntax alexnet('Weights','none') is not supported for GPU code generation.
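As a sketch of how this code generation workflow typically looks (the function name, configuration, and input below are illustrative and are not part of this reference page), the network is loaded once inside an entry-point function through a persistent variable, and that function is then passed to codegen:

function out = alexnet_predict(in)
% Illustrative entry-point function; save it in its own file, for example alexnet_predict.m.
persistent net;
if isempty(net)
    net = coder.loadDeepLearningNetwork('alexnet');   % load the network only on the first call
end
out = predict(net,in);                                % return the 1000 class scores
end

One possible way to generate a MEX function from it, assuming a CPU (MKL-DNN) target and the corresponding MATLAB Coder deep learning interface support package, is:

cfg = coder.config('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('mkldnn');   % use a GPU Coder config for CUDA targets
codegen -config cfg alexnet_predict -args {ones(227,227,3,'single')}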
Version History

Introduced in R2017a

See Also

Deep Network Designer | vgg16 | vgg19 | resnet18 | resnet50 | densenet201 | googlenet | inceptionresnetv2 | squeezenet | importKerasNetwork | importCaffeNetwork

Topics

Deep Learning in MATLAB
Classify Webcam Images Using Deep Learning
Pretrained Deep Neural Networks
Train Deep Learning Network to Classify New Images
Transfer Learning with Deep Network Designer
Deep Learning Tips and Tricks