Prepared by Eduba for Bank of America — Emerge Americas 2026
A note, not a pitch

A read on where AI actually belongs in the Bank of America stack.

Bank of America named the four priorities clearly at Semafor: end-to-end process transformation, scale and reuse, governance, return on investment. The Erica platform is doing the heavy lifting on surfaces, and Q1 2026 operating leverage at 290 basis points is real evidence that the strategy is compounding. The question that sits behind the four priorities is which workloads deserve a model, which deserve deterministic code, and which deserve neither. That decision is the orchestration layer.

Global Technology · Invest once, reuse everywhere · SR 11-7 aware · TPRM conscious

Bank of America, Semafor World Economy 2026

Four priorities, one orchestration question underneath.

01

End-to-end process transformation

Not point solutions. Full workflow redesign across lines of business.

02

Scale and reuse

One capability serving many surfaces. Priorities get named when they are not yet solved.

03

Governance

Model risk, responsible AI, regulatory defensibility in a heightened-standards environment.

04

Return on investment

Dollars out per dollar in, framed the way Bank of America frames it on the earnings call.

The read

Where "invest once, reuse everywhere" holds, and where it bends.

The platform covers the surfaces. The rebuilds happen where local compliance, latency, or data-residency requirements force a line of business to ship a variant.

Erica, AskGPS, the developer assistant, the call-center summarization tool. Same underlying engine. Different governance envelopes.

Across 213,000 employees and eight lines of business, the "invest once, reuse everywhere" frame holds at the platform level and bends at the line-of-business edge, where each rebuilt variant spends engineering time the central roadmap already paid for once. Deciding which of those rebuilds are legitimate and which are orchestration gaps is not an LLM problem. It is a framework problem.

Computational orchestration

A working frame for the layer the platform does not cover.

In most enterprise workflows, roughly 60% of the problem is traditional code and database work, around 30% is rule-based logic, and about 10% is a genuine AI problem. Teams that skip this distinction spend model budget on work that Postgres would do faster, cheaper, and with better guarantees for a regulated environment. Applied to Bank of America, the value shows up in the decisions made before a team ships a new variant of Erica, AskGPS, or the developer assistant. The orchestration layer answers two questions: which layer each piece of a workflow belongs on, and what evidence supports that choice.

60%
Traditional code and database work. Deterministic, auditable, cheap to run at bank scale.
30%
Rule-based logic. Clear policy, clear ownership, reviewable by Risk and Compliance.
10%
Genuine AI problems. Where a model earns its TPRM file and its operating cost.
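The triage above can be sketched in a few lines. This is a minimal illustration, not Eduba's or Bank of America's actual tooling; every name here (the `Step` fields, the example workflow steps) is hypothetical and exists only to show the shape of the decision: route each step to the cheapest layer that can own it, and reach for a model only when neither code nor a reviewable rule can.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    deterministic: bool       # output fully defined by inputs: code/SQL territory
    policy_expressible: bool  # a written rule Risk and Compliance can review

def triage(step: Step) -> str:
    """Assign a workflow step to the cheapest layer that can own it."""
    if step.deterministic:
        return "code"    # traditional code and database work (~60%)
    if step.policy_expressible:
        return "rules"   # rule-based logic with clear ownership (~30%)
    return "model"       # genuine AI problem; earns its TPRM file (~10%)

# Hypothetical workflow steps, for illustration only.
workflow = [
    Step("fetch account history", deterministic=True, policy_expressible=True),
    Step("apply fee-waiver policy", deterministic=False, policy_expressible=True),
    Step("summarize call transcript", deterministic=False, policy_expressible=False),
]

for s in workflow:
    print(f"{s.name} -> {triage(s)}")
```

The point of the sketch is the ordering: deterministic work is claimed first, reviewable policy second, and only the remainder justifies a model's operating cost and governance overhead.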

Closest parallel engagement

Correlation One. Pacific Life and Colgate-Palmolive.

Our closest parallel engagement is Correlation One, whose training work at Pacific Life and Colgate-Palmolive has put structured AI enablement in front of a large enterprise workforce since May 2025. The methodology travels. An AI Academy focused on prompt engineering and AI design covers the mechanics. What gets added here is the decision framework for when an LLM is the wrong answer, which is the part that pays back in avoided rebuilds and cleaner TPRM cycles.

1,500+
Enterprise learners trained across Pacific Life and Colgate-Palmolive since May 2025.
6,000-9,000
Hours saved per year across the two engagements, measured against pre-training baselines.
95%
Still using the tools 30 days after the workshop.

Methodology of record

The orchestration layer sits in a paper, not a demo.

For a governance-heavy environment, the right artifact is the one an auditor can reproduce six months later. Interpretable Context Methodology was published in ACM TiiS and organizes agent context as a layered filesystem, L0 identity through L4 working artifacts, with measurable interpretability and reproducibility properties. That structure is what lets a Responsible AI team explain why a model made a decision, which is the same question SR 11-7 asks in writing.

Paper
Interpretable Context Methodology: folder structure as agent architecture.

Published in ACM TiiS. MIT-licensed reference implementation with a 52-member practitioner community.

Who is sending you this page

Eduba, a veteran-owned AI consulting and training firm based in Florida.

Jake Van Clief, the founder, is a Marine Corps veteran with an MSc in Future Governance from the University of Edinburgh and published work in ACM TiiS and arXiv. Prior case study references on request include KPMG UK (one of the Big Four) at the executive level and Correlation One at the enterprise-workforce level.

Eduba partners with NLP Logix for work that sits below the orchestration layer. NLP Logix has been in machine learning since 2011 and employs more than 150 data scientists.

Adjacent paper for Responsible AI conversations:

Ethics Engine. A psychometric assessment tool for evaluating ideological and moral patterns in LLMs. See arxiv.org/abs/2510.11742 and github.com/RinDig/AuditEngine.

Next step

Bring one workflow that got rebuilt after the central platform shipped.

A working orchestration audit on that workflow runs in 30 minutes and produces a written read before the call ends. No deck. No follow-up survey. Scoped engagement only if the read earns it.

Matt Creamer, Chief Revenue Officer at Eduba.