Functional architecture

Gredit's architecture is structured around a data-processing flow that runs from initial ingestion to the generation of auditable cases. Each system component plays a specific role in this flow and connects logically to the next.

Processing flow

Data source → Job → Scripts → Rules → Execution → Cases

Flow stages

#  Stage        Description
1  Data source  External system that provides the transactions or information to be monitored.
2  Job          Scheduled task that runs periodically. It orchestrates data ingestion from the source and triggers script and rule execution.
3  Scripts      Run during a job to process complex data, transform information, or execute advanced logic.
4  Rules        Applied during a job to process results, generate cases, and notify responsible parties of findings or anomalies.
5  Execution    Auditable record of a job run, containing its parameters, status, and results.
6  Case         Generated automatically when a rule processes a finding detected by a script. Each case requires investigation and resolution.
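The stages above can be sketched as a minimal data model. This is an illustrative sketch only: the class and field names are assumptions for this example, not Gredit's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical data model mirroring the flow stages.
# All names are illustrative assumptions, not Gredit's real schema.

@dataclass
class DataSource:
    name: str  # external system providing data to monitor

@dataclass
class Script:
    name: str  # complex processing / transformation logic

@dataclass
class Rule:
    name: str  # evaluates results and turns findings into cases

@dataclass
class Job:
    source: DataSource               # a job ingests data from one source
    scripts: list = field(default_factory=list)
    rules: list = field(default_factory=list)

@dataclass
class Execution:
    job: Job                         # always a direct consequence of a job
    status: str = "pending"
    results: list = field(default_factory=list)

@dataclass
class Case:
    execution: Execution             # cases cannot exist without an execution
    finding: str
    resolved: bool = False           # each case requires investigation
```

Linking each `Case` back to its `Execution` (and each `Execution` to its `Job`) keeps the audit trail navigable from finding to origin.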

Key relationships

  • A job executes scripts and rules on data from a source.
  • Each job execution generates an auditable record.
  • A case is generated from findings detected during a job execution.
  • Scripts can invoke rules or execute independent logic.
  • Rules generate findings that become cases.
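The relationships above can be illustrated with a small orchestration function. The function name, dictionary keys, and threshold logic are all hypothetical, chosen only to show how scripts, rules, executions, and cases relate.

```python
def run_job(source_records, scripts, rules):
    """Hypothetical sketch: a job runs scripts over source data,
    then applies rules to the results to generate cases."""
    execution = {"status": "running", "results": [], "cases": []}

    # Scripts process and transform the source data.
    for script in scripts:
        execution["results"].extend(script(source_records))

    # Rules evaluate the results; each finding becomes a case.
    for rule in rules:
        for finding in rule(execution["results"]):
            execution["cases"].append({
                "finding": finding,
                "execution": execution,  # every case links back to its execution
                "status": "open",        # each case requires investigation
            })

    execution["status"] = "completed"
    return execution

# Illustrative script and rule: flag transactions above a threshold.
flag_large = lambda records: [r for r in records if r["amount"] > 10_000]
large_amount_rule = lambda results: [f"large transaction: {r['id']}" for r in results]

execution = run_job(
    source_records=[{"id": "t1", "amount": 50_000}, {"id": "t2", "amount": 120}],
    scripts=[flag_large],
    rules=[large_amount_rule],
)
# One finding (t1) is detected, so one open case is generated.
```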

Flow dependencies

Important
  • Cases cannot exist without a prior Execution that generated them.
  • An Execution is always a direct consequence of a Job.
  • Scripts and Rules run exclusively within the context of a Job.
  • Every auditable finding is linked to a specific Execution that documents its origin.
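These dependencies can be expressed as simple validation checks. The function and field names below are assumptions for illustration; they show the constraints, not Gredit's internals.

```python
def validate_case(case):
    """Hypothetical check enforcing the flow dependencies listed above."""
    execution = case.get("execution")
    if execution is None:
        raise ValueError("a case cannot exist without a prior execution")
    if execution.get("job") is None:
        raise ValueError("an execution is always a direct consequence of a job")
    if case.get("finding") is None:
        raise ValueError("every auditable finding must be linked to an execution")
    return True

# A well-formed case satisfies all three constraints.
job = {"name": "daily-monitoring"}
execution = {"job": job, "status": "completed"}
case = {"execution": execution, "finding": "duplicate payment"}
```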