Prepared by Eduba for Amazon. AWS eMerge Americas, Miami, April 23 to 24, 2026.

A read on where orchestration sits in your customers' Bedrock and Q stacks.

The infrastructure and the model layer are in place. The bottleneck AWS partners see repeatedly is the layer above: which of a customer's workflows belong on Bedrock, which belong in a rules engine, which are simpler in Postgres. That is Eduba's work.

  • People trained since May 2025: 1,500+
  • Hours saved per year: 6,000 to 9,000
  • 30-day adoption: 95%

Why this, why now

AWS revenue grew 24 percent in Q4 2025, a 13-quarter high, with $244B in backlog and $200B of FY26 capex committed. re:Invent 2025 shipped Nova 2, Kiro, and AgentCore Policy, Evaluations, and Memory. On the public-sector side, Miami Beach has been an AWS Champion since 2024, Tamarac cofounded the Florida Amazon Connect user group, and the program now includes 16 cities.

  • re:Invent 2025: Amazon Bedrock AgentCore ships Policy, Evaluations, and Memory.
  • Federal: Up to $50B announced for U.S. federal AI and supercomputing capacity across GovCloud, Secret, and Top Secret regions.
  • Training: Agentic AI classroom courses added in January 2026. Demand is outpacing the Premier Tier bench.
  • Florida: 16 municipalities now in the Amazon Connect State of Florida Government User Group.

The frame

Computational orchestration, in three numbers

Most organizations put LLMs where a database would do the job faster and cheaper. Before a customer designs an AgentCore policy, they benefit from a layer-assignment pass across their actual workflow inventory.

60%

Traditional code and data

Schemas, joins, batch jobs, queues. Postgres, DynamoDB, S3, Glue, Redshift. Most of the estate lives here and stays here.

30%

Rule-based logic

Deterministic flows, policy checks, routing, eligibility. Cheaper and more auditable than a model call. AgentCore Policy fits here.

10%

Genuine AI

Unstructured judgment, synthesis, drafting, extraction at scale. Bedrock foundation models, SageMaker AI, Nova 2, AgentCore Memory.
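The three-layer frame above can be sketched as a routing pass over a workflow inventory. This is a minimal illustrative sketch only: the workflow fields, names, and heuristic rules below are assumptions for demonstration, not Eduba's actual assignment methodology.

```python
# Hypothetical layer-assignment pass over a workflow inventory.
# Fields and rules are illustrative assumptions, not Eduba's methodology.
from dataclasses import dataclass

@dataclass
class Workflow:
    name: str
    deterministic: bool    # same input always yields the same output
    structured_data: bool  # operates on schemas and tables, not free text
    needs_judgment: bool   # requires synthesis or drafting over unstructured input

def assign_layer(wf: Workflow) -> str:
    """Route each workflow to the cheapest layer that can handle it."""
    if wf.needs_judgment:
        return "genuine AI"      # ~10%: Bedrock models, AgentCore Memory
    if wf.deterministic and not wf.structured_data:
        return "rules"           # ~30%: AgentCore Policy, Step Functions
    return "code and data"       # ~60%: Postgres, DynamoDB, S3, Glue

inventory = [
    Workflow("nightly claims batch", True, True, False),
    Workflow("eligibility check", True, False, False),
    Workflow("summarize adjuster notes", False, False, True),
]
for wf in inventory:
    print(f"{wf.name} -> {assign_layer(wf)}")
```

The point of the sketch is the ordering: check the expensive layer last, not first, so a model call is only assigned when the cheaper layers cannot do the job.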

Applied to a Bedrock + Q customer

The layer-assignment pass on an AWS stack

A single diagram of the frame on the services a typical enterprise customer already owns. Eduba does the assignment. AWS keeps delivering the platform underneath.

10% genuine AI: Bedrock, Nova 2, SageMaker AI, AgentCore Memory, Amazon Q
30% rules: AgentCore Policy, Step Functions, EventBridge rules, Config, Audit Manager
60% code and data: Aurora Postgres, DynamoDB, S3, Glue, Athena, Lambda, ECS, OpenSearch
Across all lanes: IAM, CloudTrail, GuardDuty; GovCloud, HIPAA, FedRAMP
Eduba sits at the assignment line between lanes. AWS Solutions Architects keep the platform. Services partners keep the build. The layer above is where the customer's ROI on Bedrock and Q actually gets decided.
Matching case

Correlation One: Pacific Life and Colgate-Palmolive

Correlation One brought Eduba in to train people at Pacific Life and Colgate-Palmolive. Since May 2025, more than 1,500 have been trained, saving 6,000 to 9,000 hours per year, with 95 percent of participants still using the tools 30 days after the workshop.

The pattern transfers onto AWS accounts that have bought Q or Bedrock seats at scale and need adoption to stick. AWS account teams routinely ask who can take a customer through end-user training at scale. The 95 percent 30-day number is the answer.

KPMG UK, one of the Big Four, ran the same model with 40+ executives on the leadership side.

  • People trained: 1,500+
  • Hours saved per year: 6K to 9K
  • Adoption at 30 days: 95%
  • Since: May 2025
The paper

Interpretable Context Methodology

Agent context organized as a layered filesystem, from L0 identity through L4 working artifacts. Measurable interpretability and reproducibility across a 52-member practitioner community. Maps directly onto AgentCore Memory and AgentCore Policy.
Submitted to ACM TiiS. MIT license. github.com/RinDig/Interpretable-Context-Methodology-ICM-

A paper, not a pitch deck. Reads right inside AWS's technical culture. An AWS Solutions Architect can hand it to a customer as a reference on AgentCore context hygiene.

Adjacent work, for regulated-industry customers

Ethics Engine. A psychometric assessment tool for evaluating ideological and moral patterns in LLMs. Preprint at arxiv.org/abs/2510.11742. Code at github.com/RinDig/AuditEngine. Relevant to healthcare, finance, legal, and public-sector customers evaluating model behavior under governance constraints.

Channel posture

How Eduba slots in

Co-sell with an AWS account team

Eduba delivers the layer-assignment pass and the end-user enablement. AWS keeps the platform. The services partner keeps the build. The customer gets adoption that sticks past the proof of concept.

Training-led motion into L&D

Entry through the customer's learning function. Real workflow data captured during training becomes the spec for the deeper orchestration engagement.

Handoff for production ML

Eduba partners with NLP Logix for work that sits below the orchestration layer. NLP Logix has worked in machine learning since 2011 and employs more than 150 data scientists.

Credential row

  • Published: ACM TiiS (ICM)
  • Preprint: arXiv 2510.11742
  • UEI: Y77RULHKK9Q1
  • NAICS: 541611, 541512, 611430
  • Founder: USMC veteran, MSc Edinburgh
  • Case: Correlation One, Pacific Life, Colgate-Palmolive
Next step

30 minutes with Matt Creamer

Bring one AWS customer account where Bedrock or Q is stalled past the proof of concept. Eduba will return a written layer-assignment memo within one week of the call.