A Method for Predicting Reservoir Parameters from Well Log Curves Based on Support Vector Machines
Abstract
Owing to its many excellent properties, the support vector machine (SVM) has attracted wide attention in recent years and has become a very active research field. This thesis studies the theory and application methods of SVMs fairly comprehensively, discusses the selection of the Gaussian kernel parameter in SVMs, and applies SVMs for the first time to estimating reservoir properties from well-log attributes.
This thesis first gives an overview of statistical learning theory, the theoretical foundation of SVMs, mainly covering the consistency of the learning process and how to control its generalization ability. Next, for simple linearly separable data, it explains in detail how the linear SVM works, namely by finding the separating hyperplane with the maximal margin. The essence of the kernel function is that a nonlinear map takes data that are not linearly separable in the original space to a feature space in which they are linearly separable; a hyperplane is then constructed in that space by exactly the same method as for the linear SVM, corresponding to a nonlinear hypersurface in the original space, and by introducing a kernel function all computations are carried out in the original space. The regression problem, the main concern of this thesis, is explained in detail; the SVM solution ultimately reduces to a convex quadratic program and therefore has a global optimum. The commonly used training algorithm, sequential minimal optimization (SMO), is briefly introduced and implemented by the author in MATLAB, and numerical experiments show that the SVM has strong learning ability. In addition, the thesis discusses how the width parameter σ of the Gaussian kernel affects the learning and prediction performance of the SVM, proves the properties of the SVM as σ tends to zero and to infinity, points out that the Gaussian kernel describes the degree of similarity between samples, and, based on numerical experiments and theoretical analysis, proposes a method for selecting the Gaussian kernel parameter, called the inflection point method. It further points out the influence of standardizing the sample data on learning and prediction, and gives a rough range within which to choose a good Gaussian kernel parameter after standardization.
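To make the regression setting above concrete, the short Python sketch below fits an ε-SVR with a Gaussian (RBF) kernel K(x, x') = exp(-||x - x'||² / (2σ²)) to standardized synthetic data and scores a few candidate widths σ by cross-validation. It only illustrates the role of σ and of standardization; it is not the thesis's MATLAB/SMO implementation or its inflection point method, and the data, the σ grid and the model settings are assumptions made for the example. Note that scikit-learn parameterizes the kernel as exp(-γ||x - x'||²), so γ = 1/(2σ²).

# A minimal sketch, assuming synthetic data and an arbitrary sigma grid;
# this uses scikit-learn's SVR rather than the thesis's own MATLAB/SMO code.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(80, 1))                    # toy inputs
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(80)     # noisy toy target

X_std = StandardScaler().fit_transform(X)                   # standardize the samples first

for sigma in (0.1, 0.5, 1.0, 5.0, 50.0):                    # candidate Gaussian kernel widths
    gamma = 1.0 / (2.0 * sigma ** 2)                        # scikit-learn's parameterization
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01, gamma=gamma)
    mse = -cross_val_score(model, X_std, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"sigma = {sigma:5.1f}   cross-validated MSE = {mse:.4f}")

A sweep like this reflects the limiting behaviour discussed above: as σ tends to zero every sample looks dissimilar to all others and the machine overfits, while as σ tends to infinity all samples look alike and the fit degenerates, so a good σ lies between the two extremes.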
Finally, motivated by practical problems in petroleum geological exploration, SVMs are applied to predicting the reservoir parameters porosity and permeability from well-log curves, and the predictions are compared with those of function approximation by a back-propagation (BP) neural network. The results show that the method achieves high prediction accuracy and is stable and effective; the SVM handles well the small-sample problems encountered in well-logging exploration.
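The comparison with a BP network can be pictured with a sketch along the following lines; the log curves (GR, AC, DEN), the synthetic porosity relation and the model settings are hypothetical stand-ins for the thesis's field data and tuned models.

# A hedged sketch of the comparison: RBF-kernel SVR versus a small
# back-propagation (MLP) network on hypothetical well-log features.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n = 60                                                       # small sample, as in well-log problems
logs = np.column_stack([
    rng.normal(75.0, 15.0, n),                               # GR, gamma ray (hypothetical)
    rng.normal(230.0, 20.0, n),                              # AC, acoustic travel time (hypothetical)
    rng.normal(2.45, 0.10, n),                               # DEN, bulk density (hypothetical)
])
porosity = (0.20 - 0.30 * (logs[:, 2] - 2.45)                # synthetic porosity "ground truth"
            + 0.0005 * (logs[:, 1] - 230.0)
            + 0.005 * rng.standard_normal(n))

X_train, X_test, y_train, y_test = train_test_split(
    logs, porosity, test_size=0.25, random_state=0)

svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.005))
bp = make_pipeline(StandardScaler(),
                   MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))

for name, model in (("SVR", svr), ("BP network", bp)):
    model.fit(X_train, y_train)
    err = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name:10s} mean absolute error on held-out samples: {err:.4f}")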
Key words: support vector machine, regression estimation, Gaussian kernel function, well log curves, reservoir parameters
Research type: applied research
ABSTRACT
Support vector machines (SVMs) have attracted many researchers in recent years and have become a very active research field because of their many good properties. The SVM is a new and promising technique for classification and regression and has shown great potential in numerous machine learning and pattern recognition problems. This thesis discusses the theory of SVMs thoroughly, especially how to choose the parameter of the Gaussian-kernel SVM, and finally discusses the application of SVMs to predicting reservoir parameters from well logs.
The thesis starts with an overview of statistical learning theory, the theoretical foundation of SVMs, including the consistency of the learning process and how to control the generalization ability of an SVM. It then describes the linear SVM for separable data, which constructs the maximal-margin separating hyperplane, and explains how a nonlinear map takes the input vectors into a feature space; constructing an optimal separating hyperplane in that space by the same method in fact yields a nonlinear decision function in the input space. The regression problem is discussed in detail at the same time. The SVM solution is ultimately a convex quadratic programming problem and therefore has a globally optimal solution. Some of the most common training approaches are briefly reviewed before one particular algorithm, sequential minimal optimization (SMO), is described in detail and implemented in MATLAB by the author. The good results of many experiments show that the SVM indeed has strong generalization ability. The thesis then focuses on the Gaussian-kernel SVM, discusses in detail how the width parameter σ influences its quality, and shows that the Gaussian kernel describes the degree of similarity between samples. In addition, a new algorithm for finding a good value of σ, called the inflection point method, is proposed. Moreover, the influence of standardization on prediction is pointed out, and a rough range for a good Gaussian kernel parameter after standardization is given.
Finally, starting from practical problems in the petroleum exploration and production field, SVMs are applied to predicting the reservoir parameters porosity and permeability from well logs. Comparing this method with a BP network shows that the new method avoids the BP network's problem of local optima and achieves higher prediction accuracy. Using SVMs in petroleum exploration with data from only a few wells is therefore a promising approach.
Key words: support vector machine, regression, Gaussian kernel, well log, reservoir parameter
Contents
1 Introduction
1.1 Purpose and significance of the research
1.2 History and current state of applications in geophysical exploration
1.2.1 Applications of statistical pattern recognition in geological exploration
1.2.2 Applications of nonlinear intelligent techniques in geological exploration
1.2.3 Applications of small-sample nonlinear intelligent techniques in geological exploration
1.3 Research contents and methods of this thesis
2 Statistical learning theory
2.1 Formulation of the learning problem
2.1.1 The general model of learning from examples
2.1.2 Three main learning problems
2.1.3 The empirical risk minimization induction principle
2.2 Core content of statistical learning theory
2.2.1 Consistency of the learning process
2.2.2 Bounds on the rate of convergence of the learning process
2.2.3 Controlling the generalization ability of the learning process
3 Support vector machines
3.1 Introduction to support vectors
3.1.1 The optimal separating plane
3.1.2 The generalized optimal separating hyperplane
3.2 SVMs for classification
3.2.1 Generalization to high-dimensional spaces
3.2.2 Kernel functions
3.2.3 Constructing the SVM
3.3 SVMs for regression
3.3.1 Linear support vector regression …