===== Affirm Onboarding =====
| + | |||
| The following document is an overview of the onboarding process with Affirm. It will cover the following: what is our responsibility, | The following document is an overview of the onboarding process with Affirm. It will cover the following: what is our responsibility, | ||
==== Second Steps ====
| - | After the questionnaire has been sent, Synergist will onboard | + | **1. Prerequisites** |
| + | |||
| + | A. Log in to the correct Affirm environment for the customer or application you are onboarding. | ||
| + | |||
| + | [[https:// | ||
| + | |||
| + | B. Before creating anything, verify that you are working in the correct organization so the asset, monitoring settings, and tests are associated with the right customer environment. | ||
| + | |||
| + | {{: | ||
| + | |||
| + | **2. Create an Asset ** | ||
| + | |||
| + | A. On the menu open: AI Inventory > AI Assets | ||
| + | |||
| + | {{: | ||
| + | |||
| + | B. Create a new asset | ||
| + | |||
| + | {{: | ||
| + | |||
| + | C. **Fill out the Asset card with the provided information in the intake form, then create the asset**\\ Enter all available information from the intake form, including the asset name, description, | ||
| + | |||
| + | Be sure to populate any required ServiceNow-related fields when applicable. These may include **Location Name**, **Support Note**, | ||
| + | |||
| + | {{: | ||
| + | |||
| + | **3. Create Use Cases** | ||
| + | |||
| + | A. Select Use Cases | ||
| + | |||
| + | {{: | ||
| + | |||
| + | B. Create a new use case. Enter each use case as provided. The use cases will be useful to reference in the future. Use cases should be entered as completely as possible because they help document what the asset does, what it touches, and what the intended risk and business context are. | ||
| + | |||
| + | {{: | ||
| + | |||
| + | **4. Set up Monitoring** | ||
| + | |||
| + | A. Select Monitoring Setup and keep monitoring disabled. We will turn it on after we create a test. | ||
| + | |||
| + | Configure the monitoring with the provided information in the intake form.\\ Leave monitoring off while you are still building the asset and completing required fields. Save the asset first, then open Monitoring Setup from the asset page. | ||
| + | |||
| + | Use HTTP Chat Completions for the monitoring method unless your team has a different approved connection type. Use the full chat completions-style endpoint, not just the base endpoint, when applicable. For the payload, use the standard messages structure. The payload field may auto-format after you click out of it, so it is fine if it is initially pasted in as one line. For more help on the payload select the see guide link. | ||
| + | |||
| + | For the response path, use the standard chat completion response path that points to the returned content under choices → message → content. | ||
| + | |||
| + | {{: | ||
| + | |||
| + | **5. Create a test** | ||
| + | |||
| + | A. Select test management and create a new test | ||
| + | |||
| + | {{: | ||
| + | |||
| + | B. Start with a simple latency test\\ For initial onboarding, begin with a Latency test. This is the easiest way to confirm the connection is working before building more advanced | ||
| + | |||
| + | {{: | ||
| + | |||
| + | C. Enter a clear test name | ||
| + | |||
| + | D. Select the Latency test type and use Parallel execution | ||
| + | |||
| + | E. Set the domain\\ Use Informational if needed, or leave it broad if the field allows. | ||
| + | |||
| + | F. Leave the cron job inactive\\ Do not enable the cron job during initial onboarding. For the first test, keep it inactive. | ||
| + | |||
| + | G. Leave the default thresholds in place | ||
| + | |||
| + | H. Set warm up runs\\ Use 1 or 2 warm up runs. | ||
| + | |||
| + | {{: | ||
| + | |||
| + | I. Set actual runs and threads\\ For the initial latency test, use: | ||
| + | |||
| + | * Actual runs: 4 | ||
| + | * Threads: 2 | ||
| + | |||
| + | This creates two simultaneous chains of prompts. | ||
| + | |||
| + | J. Enter a simple prompt\\ For the first latency test, the prompt does not need to mirror the application’s real-world use case exactly. A simple prompt is fine if the goal is to validate performance and connectivity. An example would be asking | ||
| + | |||
| + | {{: | ||
| + | |||
| + | K. Save the test\\ Save the test once the configuration | ||
| + | |||
| + | **6. Turn on monitoring** | ||
| + | |||
| + | A. Return to the asset and enable monitoring after setup is complete\\ Once the asset is saved, the monitoring configuration has been entered, and the test has been created, enable monitoring from the asset page. Monitoring should be turned on only after the asset details are complete and the connection settings are in place. | ||
| + | |||
| + | {{: | ||
| + | |||
| + | {{: | ||
| + | |||
| + | **7. Run the test** | ||
| + | |||
| + | A. In Test Management, click the play icon on the far right of the test\\ | ||
| + | |||
| + | {{: | ||
| + | |||
| + | B. Wait for the test to process\\ Allow the test a short amount of time to run. Response time may vary depending on the model, token count, | ||
| + | |||
| + | C. Check Test Results\\ After 2-3 minutes check the test results tab | ||
| + | |||
| + | D. Confirm success\\ If the test completes successfully, | ||
| + | |||
| + | {{: | ||
| + | |||
| + | **8. Prompting guidance for onboarding tests** | ||
| + | |||
| + | During onboarding, the goal is to create tests that validate the application and confirm it is functioning correctly. Initial tests should be designed to pass and to reflect expected behavior, not to intentionally break the application. | ||
| + | |||
| + | Additional tests such as hallucination, | ||
==== Final Steps ====
| + | |||
| The final steps in onboarding is creating the right prompts to accurately monitor the model. Please reference the document on creating prompts for Affirm for more information: | The final steps in onboarding is creating the right prompts to accurately monitor the model. Please reference the document on creating prompts for Affirm for more information: | ||
| + | |||
| + | |||