Introduction
This guide explains how to configure Flags in the Cubyts platform to control governance behavior, automation, and traceability. Flag configuration allows teams to fine-tune how risks are detected, how actions are automated, and how governance signals are captured for audits—ensuring that AI-driven insights align with organizational standards and intent.
Prerequisites
Integrated planning and code repositories
Understanding of process, feature, and code governance goals
Step-by-Step Guide
Step 1: Open the Flag Configuration Screen
Navigate to Flag Configuration in your Cubyts workspace.
This screen is used to manage:
Flag visibility
Flag behavior
Automation and audit settings
Any configuration changes take effect after the next sync cycle, ensuring predictable and controlled updates.
Step 2: Understand Flag Categories and Indicators
Flags are grouped into three categories:
Process flags
Feature flags
Code flags
Additional indicators show whether a flag is:
Audit-relevant
Auto-resolvable
AI-powered
This structure helps you quickly locate and manage the right governance checks.
Step 3: Enable or Disable Flags
Each flag has a toggle switch:
On – Enables evaluation and surfacing of insights
Off – Pauses the flag without deleting its configuration
This allows teams to:
Roll out governance incrementally
Temporarily disable checks without losing context
Step 4: Create Custom Code Flags
Select Create Custom Flag from the top-right corner of the configuration screen.
Custom flags are supported only for code flags.
To create a custom code flag:
Provide a flag name and description.
Define conditions for the AI engine to evaluate pull requests or branches.
Optionally simulate outcomes using an existing open pull request.
Example use cases:
Performance checks
Logic checks
Organization-specific security/compliance checks
Organization-specific coding standards
This enables custom governance without impacting planning or process workflows.
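To make the shape of a custom code flag concrete, here is a minimal sketch of how such a definition might be represented and validated. The field names ("name", "conditions", "simulate_on") and the condition text are illustrative assumptions, not Cubyts' actual schema.

```python
# Hypothetical sketch of a custom code flag definition.
# All field names and values are illustrative, not Cubyts' real schema.

custom_flag = {
    "name": "no-raw-sql",
    "description": "Flag PRs that embed raw SQL outside the data layer",
    "applies_to": ["pull_request", "branch"],
    "conditions": [
        "diff contains raw SQL outside the data layer",
        "changed files exceed 30",
    ],
    "simulate_on": None,  # optionally set to an open PR identifier
}

def validate_flag(flag: dict) -> list[str]:
    """Return a list of validation problems (empty means valid)."""
    problems = []
    if not flag.get("name"):
        problems.append("flag name is required")
    if not flag.get("conditions"):
        problems.append("at least one condition is required")
    return problems

print(validate_flag(custom_flag))  # -> []
```

A definition like this captures the three inputs the step describes: a name and description, conditions for the AI engine to evaluate, and an optional pull request to simulate against.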
Step 5: Configure Limit Settings
Limit settings control how and when a flag surfaces.
For example, for one of the Cubyts flags:
Assign weightages to contributing parameters such as:
Team member overload
Requirement and design quality
Scope changes during sprint
Build planning finesse
The cumulative weightage always totals 100%.
Define a threshold percentage that determines when the risk is high enough for the flag to trigger.
Use a time-based control to specify how far into a sprint the flag should activate.
These controls help tune sensitivity and reduce noise.
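The interaction of weightages, threshold, and the time-based control can be sketched as follows. The parameter names mirror the list above, but the specific weights, threshold, and activation point are hypothetical values chosen for illustration.

```python
# Illustrative sketch: weightages summing to 100%, a trigger
# threshold, and a time-based activation control. All numbers
# are hypothetical examples, not Cubyts defaults.

WEIGHTS = {                          # contributing parameters, % weightage
    "team_member_overload": 40,
    "requirement_design_quality": 25,
    "scope_changes_during_sprint": 20,
    "build_planning_finesse": 15,
}
assert sum(WEIGHTS.values()) == 100  # cumulative weightage totals 100%

THRESHOLD_PCT = 60       # flag triggers when weighted risk >= 60%
ACTIVATE_AFTER_PCT = 30  # only evaluate after 30% of the sprint has elapsed

def should_trigger(param_scores: dict, sprint_elapsed_pct: float) -> bool:
    """param_scores: per-parameter risk in [0, 1]."""
    if sprint_elapsed_pct < ACTIVATE_AFTER_PCT:
        return False  # time-based control: too early in the sprint
    risk = sum(WEIGHTS[p] * param_scores.get(p, 0.0) for p in WEIGHTS)
    return risk >= THRESHOLD_PCT

scores = {
    "team_member_overload": 0.9,
    "requirement_design_quality": 0.5,
    "scope_changes_during_sprint": 0.4,
    "build_planning_finesse": 0.2,
}
# weighted risk = 40*0.9 + 25*0.5 + 20*0.4 + 15*0.2 = 59.5, below 60
print(should_trigger(scores, sprint_elapsed_pct=50))  # -> False
```

Raising the threshold or the activation point makes the flag less sensitive; lowering them surfaces risk earlier but with more noise.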
Step 6: Configure Auto-Resolution Behavior
Auto-resolution settings define how flags can be resolved automatically.
For example, for a sprint overrun prediction flag, Cubyts can:
Automatically move a work item to another sprint
Allow configuration of the target sprint
Project-specific mappings determine how new work items are created when needed, reducing manual follow-ups and enabling proactive correction.
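The auto-resolution behavior described above can be sketched as a simple rule: look up the project-specific mapping and move the flagged work item into the configured target sprint. The data structures, project keys, and sprint names are hypothetical.

```python
# Hypothetical sketch of auto-resolution for a sprint overrun
# prediction flag. Structures and mappings are illustrative.

from dataclasses import dataclass

@dataclass
class WorkItem:
    key: str       # e.g. "PROJ-123"
    sprint: str

# Project-specific mappings: which sprint overrunning items move to.
TARGET_SPRINT_BY_PROJECT = {
    "PROJ": "Sprint 15",
    "PLAT": "Sprint 8",
}

def auto_resolve_overrun(item: WorkItem) -> WorkItem:
    """Move a flagged work item to its project's configured target sprint."""
    project = item.key.split("-")[0]
    target = TARGET_SPRINT_BY_PROJECT.get(project)
    if target:
        item.sprint = target  # automatic correction, no manual follow-up
    return item

item = WorkItem(key="PROJ-123", sprint="Sprint 14")
auto_resolve_overrun(item)
print(item.sprint)  # -> Sprint 15
```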
Step 7: Configure Audit Settings
Enable Audit Settings to mark a flag as audit-relevant.
When enabled:
Insights are captured in audit trails
Signals flow into governance and audit dashboards
This supports both internal governance reviews and external compliance audits.
Step 8: Define Benchmarks for Build Tasks
Benchmarks define what "good" looks like for build planning and execution.
Configure benchmarks for build types such as:
UI builds
API builds
Backend builds
Database builds
Each benchmark links to a reference artifact (for example, a well-planned Jira or Azure Boards work item).
Multiple benchmarks can be added per build type.
Process and feature flags use these benchmarks to detect deviations in planning and execution.
Step 9: Define Benchmarks for Requirement Quality
Requirement benchmarks establish standards for:
Functional requirements
Technical requirements
Deep technical requirements (frontend, backend, database)
Each benchmark references an artifact that demonstrates:
Expected structure
Required detail
Documentation quality
This ensures requirement quality is evaluated against your organization’s standards, not generic rules.
Step 10: Configure Code Flag Behavior
Code flag configuration controls how pull requests and branches are evaluated.
You can configure:
Whether flags apply to:
Pull requests
Branches
Both
Time-based thresholds (for example, aging PRs or long-running branches)
Optional visibility of flags directly in:
GitHub
GitLab
Bitbucket
Auto-resolution rules can also be defined to resolve flags once conditions are met.
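A time-based code flag threshold, such as the aging-PR example above, amounts to comparing a pull request's age against a configured limit. The seven-day threshold here is a hypothetical value for illustration.

```python
# Illustrative sketch of a time-based code flag check:
# surface a flag for pull requests older than a configured
# number of days. The threshold is a hypothetical example.

from datetime import datetime, timedelta, timezone

AGING_PR_DAYS = 7  # hypothetical threshold

def is_aging_pr(opened_at: datetime, now: datetime) -> bool:
    """True when the PR has been open longer than the threshold."""
    return now - opened_at > timedelta(days=AGING_PR_DAYS)

opened = datetime(2024, 5, 1, tzinfo=timezone.utc)
check_time = datetime(2024, 5, 10, tzinfo=timezone.utc)
print(is_aging_pr(opened, check_time))  # -> True (9 days old)
```

The same pattern applies to long-running branches, with the branch's last-activity timestamp in place of the PR's opened-at time.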
The Common Flag Configuration Model
Every flag in Cubyts—process, feature, or code—is built on the same core pillars:
Limit settings – When and how a flag surfaces
Auto-resolution settings – How actions are automated
Audit settings – How signals are captured for traceability
Benchmarks extend this model by defining quality baselines, while code flag configuration applies the same principles to repositories and pull requests.
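The three pillars of the common model can be summarized as a single data structure shared by every flag. This is a minimal sketch under assumed field names, not Cubyts' actual schema.

```python
# Minimal sketch of the common flag configuration model:
# every flag carries limit, auto-resolution, and audit settings.
# Field names and defaults are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class LimitSettings:
    threshold_pct: int = 60        # when and how the flag surfaces

@dataclass
class AutoResolutionSettings:
    enabled: bool = False          # how actions are automated

@dataclass
class AuditSettings:
    audit_relevant: bool = False   # how signals are captured for traceability

@dataclass
class Flag:
    name: str
    category: str                  # "process" | "feature" | "code"
    limits: LimitSettings = field(default_factory=LimitSettings)
    auto_resolution: AutoResolutionSettings = field(default_factory=AutoResolutionSettings)
    audit: AuditSettings = field(default_factory=AuditSettings)

flag = Flag(name="sprint-overrun-prediction", category="process")
flag.audit.audit_relevant = True   # mark as audit-relevant (Step 7)
print(flag.category, flag.audit.audit_relevant)  # -> process True
```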
Conclusion
Flag configuration in Cubyts enables teams to scale governance intelligently—without increasing operational overhead. By combining configurable limits, automation, audits, and benchmarks, teams can align AI-driven signals with real governance intent, ensuring consistent, explainable, and traceable oversight across planning, delivery, and code.
Video link: https://www.loom.com/share/6496f33b0e034e45925911e4ad421000