====== Responsible AI Test ======
| + | |||
| + | ===== Purpose ===== | ||
| + | |||
| + | Evaluate the Responsible AI dashboard and see what it does. | ||
| + | |||
| + | ===== Test Process ===== | ||
| + | |||
| + | Here's a structured list of **Responsible AI Dashboard Deployment Steps** using the corrected scripts. Each step includes: | ||
| + | |||
| + | * ✅ **Step Number & Action** | ||
| + | * 🎯 **Purpose** | ||
| + | * 🧪 **Expected Result** | ||
| + | |||
| + | ---- | ||
| + | |||
==== ✅ Step 1: Install Required Packages ====

<code bash>
pip install --upgrade raiutils raiwidgets responsibleai ipywidgets
</code>

🎯 **Purpose**:\\ Install the Responsible AI toolkit packages and the widget support needed to render the dashboard in a notebook.

🧪 **Expected Result**:\\ Packages are installed without errors; dashboard widgets can render in the notebook (after kernel restart).

----
| + | |||
==== ✅ Step 2: Load and Preprocess the Dataset ====

<code python>
from sklearn.datasets import fetch_openml
import pandas as pd

# Dataset name assumed to be 'adult' -- the original value was truncated in the source
data = fetch_openml(name='adult', version=2, as_frame=True)
df = data.frame.dropna()
</code>

🎯 **Purpose**:\\ Fetch an OpenML dataset as a pandas DataFrame and remove rows with missing values.

🧪 **Expected Result**:\\ A clean DataFrame with no null values is loaded.
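The cleaning step above can be sanity-checked without fetching anything. This is a minimal sketch on a toy DataFrame (column names are illustrative, not from the real dataset) showing exactly what ''dropna()'' guarantees:

```python
import pandas as pd
import numpy as np

# Toy frame standing in for the fetched dataset
raw = pd.DataFrame({
    "age": [25, 38, np.nan, 52],
    "workclass": ["Private", None, "State-gov", "Private"],
})

# dropna() removes every row that contains at least one null value
clean = raw.dropna()

print(len(clean))                    # 2 rows survive
print(clean.isnull().values.any())   # False -- no nulls remain
```

Note that ''dropna()'' drops a row if *any* column is null, so a dataset with scattered missing values can shrink considerably.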
| + | |||
| + | ---- | ||
| + | |||
==== ✅ Step 3: Split Dataset into Train and Test Sets ====

<code python>
from sklearn.model_selection import train_test_split

# Target column assumed to be 'class' (the OpenML adult target) -- truncated in the source
target_column = 'class'
X = df.drop(columns=[target_column])
y = df[target_column]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
</code>

🎯 **Purpose**:\\ Separate the features from the target and create a reproducible train/test split.

🧪 **Expected Result**:\\ ''X_train'', ''X_test'', ''y_train'', and ''y_test'' are created with the expected shapes.
| + | |||
| + | ---- | ||
| + | |||
==== ✅ Step 4: Define Preprocessing and Train a Model ====

<code python>
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier

categorical_cols = X_train.select_dtypes(include=['object', 'category']).columns.tolist()
numerical_cols = X_train.select_dtypes(include=['number']).columns.tolist()

preprocessor = ColumnTransformer([
    ('cat', OneHotEncoder(handle_unknown='ignore'), categorical_cols),
    ('num', StandardScaler(), numerical_cols)
])

clf = Pipeline(steps=[
    ('preprocessor', preprocessor),
    ('classifier', RandomForestClassifier(random_state=42))
])

clf.fit(X_train, y_train)
</code>

🎯 **Purpose**:\\ One-hot encode categorical features, scale numerical features, and train a random forest classifier in a single pipeline.

🧪 **Expected Result**:\\ Pipeline is trained successfully on the training data without conversion errors.
| + | |||
| + | ---- | ||
| + | |||
==== ✅ Step 5: Prepare Data for RAIInsights ====

<code python>
# Ensure target column is a supported type
y_train_clean = y_train.astype(str)
y_test_clean = y_test.astype(str)

train_data = X_train.copy()
train_data[target_column] = y_train_clean

test_data = X_test.copy()
test_data[target_column] = y_test_clean
</code>

🎯 **Purpose**:\\ Recombine features and target into single DataFrames, casting the target to a string dtype that RAIInsights supports.

🧪 **Expected Result**:\\ ''train_data'' and ''test_data'' DataFrames are created, each containing the target column.
| + | |||
| + | ---- | ||
| + | |||
==== ✅ Step 6: Initialize the Responsible AI Insights Object ====

<code python>
from responsibleai import RAIInsights
from responsibleai.feature_metadata import FeatureMetadata

feature_metadata = FeatureMetadata(categorical_features=categorical_cols)

rai_insights = RAIInsights(
    model=clf,
    train=train_data,
    test=test_data,
    target_column=target_column,
    task_type="classification",
    feature_metadata=feature_metadata
)
</code>

🎯 **Purpose**:\\ Bundle the trained pipeline with the train and test data so the Responsible AI components can analyze the model.

🧪 **Expected Result**:\\ RAIInsights object is initialized successfully and ready for configuration.

----
| + | |||
==== ✅ Step 7: Add Responsible AI Analysis Tools ====

<code python>
rai_insights.explainer.add()
rai_insights.error_analysis.add()
# desired_class assumed ('opposite' is the usual choice for binary classification);
# the original value was truncated in the source
rai_insights.counterfactual.add(total_CFs=10, desired_class='opposite')
rai_insights.causal.add(treatment_features=categorical_cols)
</code>

🎯 **Purpose**:\\ Queue the explainability, error analysis, counterfactual, and causal analysis components.

🧪 **Expected Result**:\\ No errors thrown; tools are queued for computation.

----
| + | |||
==== ✅ Step 8: Compute Insights ====

<code python>
rai_insights.compute()
</code>

🎯 **Purpose**:\\ Run all queued analysis components over the test data.

🧪 **Expected Result**:\\ Tool outputs are generated for the first 5,000 rows of the test set.

----
| + | |||
==== ✅ Step 9: Launch the Responsible AI Dashboard ====

<code python>
from raiwidgets import ResponsibleAIDashboard

ResponsibleAIDashboard(rai_insights)
</code>

🎯 **Purpose**:\\ Render the interactive Responsible AI dashboard from the computed insights.

🧪 **Expected Result**:\\ A dashboard is displayed inside the notebook. Interactive plots and controls are available for analysis.

NOTE: Due to the way these URLs are deployed, this step will fail here: the notebook sends the wrong headers, and this is expected.

NOTE: To run the dashboard locally, follow the process below.
| + | |||
==== Use Local Jupyter Notebook ====

  - Download the full notebook (the ''.ipynb'' file).
  - Create a conda/venv environment and install the dependencies:<code bash>
pip install raiwidgets responsibleai scikit-learn ipywidgets jupyter notebook
</code>
  - Launch a local notebook server from the directory you downloaded the file to:<code bash>
jupyter notebook
</code>
  - This will launch a notebook session in your default browser.
  - Open the downloaded ''.ipynb'' file.
  - Rerun all steps in the local notebook.

✅ It will render **inline** without CORS issues.
| + | |||
==== Output ====

{{:

[[ai_knowledge|AI Knowledge]]