

General Examples / Example 2: Concatenating multiple feature extraction methods

In many real-world applications there are several ways to extract features from a dataset, and multiple methods are often combined to obtain a good feature set. This example shows how to use FeatureUnion to combine the features produced by PCA and by univariate selection.

The main points of this example:
1. Dataset: the iris dataset
2. Features: iris measurements
3. Prediction target: the iris species
4. Machine learning method: SVM (support vector machine)
5. Focus: combining features
6. Key function: `sklearn.pipeline.FeatureUnion`

(1) Importing and describing the data

  • First, import the iris dataset and load it with `from sklearn.datasets import load_iris`.

  • Prepare X (the feature data) and y (the target data).

from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in scikit-learn 0.20
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest

iris = load_iris()

X, y = iris.data, iris.target

Inspecting the data: `iris` is a dict-like `Bunch` object.

| Key (shape) | Description |
| --- | --- |
| `target_names` (3,) | the three iris species: setosa, versicolor, virginica |
| `data` (150, 4) | 150 samples, each with 4 features |
| `target` (150,) | the species of each of the 150 samples |
| `DESCR` | a description of the dataset |
| `feature_names` | the meaning of the 4 features |
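These shapes can be verified directly from the loaded `Bunch` object (a quick check, not part of the original example):

```python
from sklearn.datasets import load_iris

iris = load_iris()

# The Bunch behaves like a dict; inspect each field listed in the table.
print(list(iris.target_names))   # three species names
print(iris.data.shape)           # (150, 4): 150 samples, 4 features
print(iris.target.shape)         # (150,): one label per sample
print(iris.feature_names)        # meaning of the 4 feature columns
```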

(2) PCA and SelectKBest

  • `PCA(n_components=number of principal components)`: Principal Component Analysis (PCA) is a common method for reducing the dimensionality of data. It finds new axes such that projecting the data onto them maximizes the variance of the projected data. This reduces the number of dimensions while preserving as much of the structure of the original data as possible.

  • `SelectKBest(score_func, k)`: `score_func` is the scoring function used to rank the features, and `k` is the number of features to select.

# This dataset is way too high-dimensional. Better do PCA:
pca = PCA(n_components=2)

# Maybe some original features were good, too?
selection = SelectKBest(k=1)
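As a side note (a sketch, not part of the original example), fitting each transformer on its own shows what each one keeps: with the default `f_classif` score, `SelectKBest(k=1)` retains a single original column, and the two PCA components capture most of the variance in the iris data.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest

X, y = load_iris(return_X_y=True)

# Fraction of the total variance retained by the first 2 principal components.
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_.sum())

# Index of the single original feature that SelectKBest keeps.
selection = SelectKBest(k=1).fit(X, y)
print(np.flatnonzero(selection.get_support()))
```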

(3) FeatureUnion

  • Use `sklearn.pipeline.FeatureUnion` to combine principal component analysis (PCA) and univariate selection (SelectKBest).

  • The result is the combined set of extracted features.

# Build estimator from PCA and Univariate selection:

combined_features = FeatureUnion([("pca", pca), ("univ_select", selection)])

# Use combined features to transform dataset:
X_features = combined_features.fit(X, y).transform(X)
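`FeatureUnion` simply concatenates the outputs of its transformers column-wise, so with `n_components=2` and `k=1` the transformed matrix has 3 columns. A minimal check:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion

X, y = load_iris(return_X_y=True)
combined = FeatureUnion([("pca", PCA(n_components=2)),
                         ("univ_select", SelectKBest(k=1))])

# 2 PCA components + 1 selected feature = 3 columns.
X_features = combined.fit(X, y).transform(X)
print(X_features.shape)  # (150, 3)
```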

(4) Finding the best result

  • Scikit-learn's support vector machine classifier is created with `SVC()`; the resulting estimator's `.fit()` and `.predict()` methods are then used for training and prediction.

  • Use `GridSearchCV` cross-validation to compute a grid of scores over the parameter grid, find the best point in that grid, and display the parameters it corresponds to.

svm = SVC(kernel="linear")

# Do grid search over k, n_components and C:

pipeline = Pipeline([("features", combined_features), ("svm", svm)])

param_grid = dict(features__pca__n_components=[1, 2, 3],
                  features__univ_select__k=[1, 2],
                  svm__C=[0.1, 1, 10])

grid_search = GridSearchCV(pipeline, param_grid=param_grid, verbose=10)
grid_search.fit(X, y)
print(grid_search.best_estimator_)
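Besides `best_estimator_`, the fitted `GridSearchCV` object also exposes the winning parameter combination and its mean cross-validated score through the standard `best_params_` and `best_score_` attributes. A short sketch (using the `sklearn.model_selection` import, since `sklearn.grid_search` was removed in scikit-learn 0.20):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
combined = FeatureUnion([("pca", PCA(n_components=2)),
                         ("univ_select", SelectKBest(k=1))])
pipeline = Pipeline([("features", combined), ("svm", SVC(kernel="linear"))])
param_grid = dict(features__pca__n_components=[1, 2, 3],
                  features__univ_select__k=[1, 2],
                  svm__C=[0.1, 1, 10])

grid_search = GridSearchCV(pipeline, param_grid=param_grid)
grid_search.fit(X, y)

print(grid_search.best_params_)  # the chosen n_components, k and C
print(grid_search.best_score_)   # mean cross-validated accuracy of that combination
```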

Output:

```
Fitting 3 folds for each of 18 candidates, totalling 54 fits
[CV] features__univ_select__k=1, features__pca__n_components=1, svm__C=0.1
[CV] features__univ_select__k=1, features__pca__n_components=1, svm__C=0.1, score=0.960784 - 0.0s
```

(5) Complete source code

Python source code: feature_stacker.py
http://scikit-learn.org/stable/auto_examples/feature_stacker.html

```python
# Author: Andreas Mueller <amueller@ais.uni-bonn.de>
#
# License: BSD 3 clause

from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in scikit-learn 0.20
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest

iris = load_iris()

X, y = iris.data, iris.target

# This dataset is way too high-dimensional. Better do PCA:
pca = PCA(n_components=2)

# Maybe some original features were good, too?
selection = SelectKBest(k=1)

# Build estimator from PCA and Univariate selection:

combined_features = FeatureUnion([("pca", pca), ("univ_select", selection)])

# Use combined features to transform dataset:
X_features = combined_features.fit(X, y).transform(X)

svm = SVC(kernel="linear")

# Do grid search over k, n_components and C:

pipeline = Pipeline([("features", combined_features), ("svm", svm)])

param_grid = dict(features__pca__n_components=[1, 2, 3],
                  features__univ_select__k=[1, 2],
                  svm__C=[0.1, 1, 10])

grid_search = GridSearchCV(pipeline, param_grid=param_grid, verbose=10)
grid_search.fit(X, y)
print(grid_search.best_estimator_)
```