jdavis-EliLilly/README.md


Hi there 👋, I'm James Davis

AI Platform Engineer | Python & R | Kubernetes & Cloud-Native ML



🚀 About Me

Finance AI Implementation Team @ Eli Lilly
Building enterprise AI platforms with Kubernetes, Fabric, and reinforcement learning

  • 🔭 Currently shipping

    • 🧠 Fabric Data Agents – CFO-level conversational intelligence with semantic models (4 in production)
    • ☁️ MCP Server Infrastructure – Production Kubernetes deployment on CATS platform
    • 🤖 Contract Defense RL – Multi-agent reinforcement learning for pharmaceutical contract fraud detection
    • 📊 AI Platform Strategy – Enterprise governance & orchestration architecture targeting 2,000+ users
  • 🏗️ Recently shipped (Global Statistics)

    • 📧 VAANa – Clinical trial automation deployed to multiple global studies (Python + Fabric)
    • 📈 Shiny Usage Platform – Org-wide telemetry tracking 100+ applications, 75,000+ interactions
    • 🔏 CLUWE eSign Tool – Automated signature validation for 100+ docs/month
    • 📂 FileLister – Study setup automation supporting 30+ active studies
  • 🌱 In the lab

    • PyTorch for deep Q-learning & adversarial training
    • Semantic modeling with Microsoft Fabric & Direct Lake
    • MCP (Model Context Protocol) server patterns with FastMCP
    • Kubernetes orchestration: Docker, Helm, ArgoCD
    • Vector search with FAISS & pgvector
  • 💬 Ask me about:

    • Kubernetes deployments, Fabric data agents, semantic modeling
    • Python & R automation, Shiny (both flavors), multi-agent RL
    • AWS serverless architecture, CI/CD on Posit Connect
    • LLM evaluation & RAG in regulated pharma environments
    • "How do you deploy an MCP server on enterprise Kubernetes?" 😄
  • 🎯 2025 Stats

    • 72 projects delivered across 2.5 years (96% completion rate)
    • 4 production Fabric data agents deployed in 3 months
    • 700+ users impacted across Statistics, Finance, Clinical Ops
    • 10,000x performance improvement (image processing)
  • 📫 Connect: LinkedIn | Email


🛠️ Tech Stack & Tools:

Python R Kubernetes Docker Azure AWS PyTorch PostgreSQL Pandas Git

Also: Microsoft Fabric • Power BI • Helm • ArgoCD • FastMCP • LangChain • FAISS • Copilot Studio • Shiny (R & Python)


📊 Career Highlights

2025 Q3-Q4: Finance AI Implementation Team
├─ 4 Production Fabric Data Agents (CFO-level intelligence)
├─ Enterprise Kubernetes MCP Server (CATS platform)
├─ 2 Reinforcement Learning Systems (contract defense)
└─ AI Platform Strategy (2,000+ user roadmap)

2023-2025: Global Statistics
├─ 57 Projects Delivered (clinical automation, AWS infrastructure)
├─ 100+ Shiny Apps Monitored (org-wide telemetry)
├─ VAANa Clinical Trial Automation (multi-study deployment)
└─ 10,000x Performance Optimization (image processing)

🧰 Code Snippets & Patterns

🐳 Kubernetes – Deploy MCP server with Azure Workload Identity
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-server
  namespace: finance-ai
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mcp-server
  template:
    metadata:
      labels:
        app: mcp-server
        azure.workload.identity/use: "true"
    spec:
      serviceAccountName: mcp-server-sa
      containers:
      - name: mcp-server
        image: your-registry/mcp-server:latest
        ports:
        - containerPort: 5000
        env:
        - name: AZURE_CLIENT_ID
          value: "your-client-id"
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
```
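A hypothetical rollout of the manifest above (assuming it is saved as `mcp-server.yaml`; the registry, client ID, and namespace are placeholders):

```shell
# Apply the manifest and wait for the rollout to complete
kubectl apply -f mcp-server.yaml
kubectl -n finance-ai rollout status deployment/mcp-server

# Confirm both replicas are running
kubectl -n finance-ai get pods -l app=mcp-server
```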
🤖 PyTorch – Multi-agent Q-learning with experience replay
```python
import torch
import torch.nn as nn
import numpy as np
from collections import deque
import random

class QNetwork(nn.Module):
    def __init__(self, state_size, action_size, hidden_size=64):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, action_size)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.fc3(x)

class Agent:
    def __init__(self, state_size, action_size, lr=0.001):
        self.q_network = QNetwork(state_size, action_size)
        self.optimizer = torch.optim.Adam(self.q_network.parameters(), lr=lr)
        self.memory = deque(maxlen=10000)
        self.gamma = 0.95  # discount factor

    def remember(self, state, action, reward, next_state, done):
        # Store a transition for experience replay
        self.memory.append((state, action, reward, next_state, done))

    def act(self, state, epsilon=0.1):
        # Epsilon-greedy action selection
        if random.random() < epsilon:
            return random.randint(0, self.q_network.fc3.out_features - 1)

        with torch.no_grad():
            state_t = torch.FloatTensor(state).unsqueeze(0)
            q_values = self.q_network(state_t)
            return q_values.argmax().item()

    def learn(self, batch_size=32):
        if len(self.memory) < batch_size:
            return

        batch = random.sample(self.memory, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)

        states = torch.FloatTensor(np.array(states))
        actions = torch.LongTensor(actions)
        rewards = torch.FloatTensor(rewards)
        next_states = torch.FloatTensor(np.array(next_states))
        dones = torch.FloatTensor(dones)

        # Q-learning update: r + gamma * max_a' Q(s', a')
        current_q = self.q_network(states).gather(1, actions.unsqueeze(1))
        next_q = self.q_network(next_states).max(1)[0].detach()
        target_q = rewards + (1 - dones) * self.gamma * next_q

        loss = nn.MSELoss()(current_q.squeeze(), target_q)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
```
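The Bellman target computed in `learn()` is easy to sanity-check by hand. A minimal pure-Python version (the names `td_target` and `next_qs` are illustrative, not from the code above) mirrors the tensor expression `target_q = rewards + (1 - dones) * gamma * next_q`:

```python
def td_target(reward, next_qs, done, gamma=0.95):
    # Bellman target: r + gamma * max_a' Q(s', a'),
    # with the bootstrap term zeroed at episode end (done=True)
    return reward + (0.0 if done else gamma * max(next_qs))
```

For example, with reward 1.0, next-state Q-values [0.5, 2.0], and the default gamma, the target is 1.0 + 0.95 * 2.0; if the episode ended, it is just the reward.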
🌗 R Shiny – Dark/light mode toggle with bslib
```r
library(shiny)
library(bslib)

light <- bs_theme(bootswatch = "flatly")
dark  <- bs_theme(bootswatch = "darkly")

ui <- fluidPage(
  theme = light,
  tags$head(
    tags$style("#modeBtn{position:fixed;top:10px;right:10px;font-size:24px;
                border:none;background:transparent;cursor:pointer;z-index:9999;}")
  ),
  actionButton("modeBtn", "", icon = icon("moon")),
  h2("Hello world"),
  plotOutput("plot")
)

server <- function(input, output, session) {
  dark_on <- reactiveVal(FALSE)

  observeEvent(input$modeBtn, {
    dark_on(!dark_on())
    # setCurrentTheme() swaps the bslib theme without reloading the app
    session$setCurrentTheme(if (dark_on()) dark else light)
    updateActionButton(session, "modeBtn",
                       icon = icon(if (dark_on()) "sun" else "moon"))
  })

  output$plot <- renderPlot({
    plot(mtcars$mpg, mtcars$wt)
  })
}

shinyApp(ui, server)
```
⚙️ Git – Advanced branching for Posit Connect deployments
```bash
# Create a deployment branch tracking dev
git switch --track -c POSIT_DEPLOY origin/dev
git push -u origin POSIT_DEPLOY

# Always merge dev with an explicit merge commit
git config branch.POSIT_DEPLOY.mergeoptions "--no-ff"

# Fast-forward the deployment branch to the latest dev
git fetch origin
git switch POSIT_DEPLOY
git merge --ff-only origin/dev
git push origin POSIT_DEPLOY

# Roll back if needed
git reset --hard HEAD~1
git push --force-with-lease origin POSIT_DEPLOY
```
🐍 Python – FastMCP server with Fabric integration
```python
from fastmcp import FastMCP
from azure.identity import DefaultAzureCredential
import requests

mcp = FastMCP("Fabric Data Agent")

@mcp.tool()
def query_semantic_model(dataset_id: str, dax_query: str) -> dict:
    """Execute a DAX query against a Fabric semantic model.

    Args:
        dataset_id: The Power BI dataset/semantic model ID
        dax_query: DAX query string to execute

    Returns:
        Query results as a dictionary
    """
    credential = DefaultAzureCredential()
    token = credential.get_token("https://analysis.windows.net/powerbi/api/.default")

    headers = {
        "Authorization": f"Bearer {token.token}",
        "Content-Type": "application/json",
    }

    payload = {"queries": [{"query": dax_query}]}

    url = f"https://api.powerbi.com/v1.0/myorg/datasets/{dataset_id}/executeQueries"
    response = requests.post(url, headers=headers, json=payload, timeout=30)
    response.raise_for_status()

    return response.json()

if __name__ == "__main__":
    mcp.run()
```
🔄 Python – Retry with exponential backoff & jitter
```python
import time
import functools
import random

def retry(errors=(Exception,), tries=4, base_delay=1, max_delay=30):
    """Retry on errors with exponential backoff and jitter.

    Args:
        errors: Exception types to retry on
        tries: Maximum number of attempts
        base_delay: Initial delay in seconds
        max_delay: Maximum delay cap in seconds
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(tries):
                try:
                    return fn(*args, **kwargs)
                except errors as e:
                    if attempt == tries - 1:
                        raise

                    # Exponential backoff with +/-20% jitter
                    delay = min(max_delay, base_delay * (2 ** attempt))
                    jittered_delay = delay * random.uniform(0.8, 1.2)

                    print(f"⚠️  {type(e).__name__}: {e}")
                    print(f"   Retrying in {jittered_delay:.1f}s (attempt {attempt + 1}/{tries})")
                    time.sleep(jittered_delay)

        return wrapper
    return decorator

# Usage examples
@retry((ConnectionError, TimeoutError), tries=5)
def fetch_api(url):
    import requests
    return requests.get(url, timeout=10)

@retry((Exception,), tries=3, base_delay=2)
def process_data(data):
    # Your data processing logic
    pass
```
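🔎 Python – Brute-force vector search (FAISS/pgvector stand-in)
The vector-search work mentioned above boils down to ranking stored embeddings by similarity to a query. A dependency-free sketch of the core idea (cosine similarity, brute force); FAISS's `IndexFlatIP` and pgvector's `<=>` operator do the same ranking at scale, so `cosine` and `top_k` here are illustrative names only:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, vectors, k=3):
    # Rank stored vectors by similarity to the query, highest first
    order = sorted(range(len(vectors)), key=lambda i: -cosine(query, vectors[i]))
    return order[:k]
```

Swapping this loop for an approximate index is what makes the technique viable at millions of embeddings; the ranking contract stays the same.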

🏆 Recent Achievements

  • 📦 4 Production Data Agents deployed in 3 months (Finance AI)
  • ☸️ Enterprise Kubernetes MCP server on CATS platform
  • 🤖 Reinforcement Learning contract defense system (>200% ROI demo)
  • 📊 100+ Shiny Apps monitored with org-wide telemetry
  • 10,000x Performance improvement (clinical image processing)
  • 🏥 30+ Clinical Studies supported with automation tools
  • 👥 700+ Users impacted across Statistics, Finance, Clinical Ops

📚 Interests & Learning

  • 🧠 Multi-agent reinforcement learning & game theory
  • ☁️ Cloud-native architecture patterns (Kubernetes, service mesh)
  • 🔒 AI governance & compliance frameworks for regulated industries
  • 📊 Semantic modeling & Direct Lake optimization
  • 🤖 LLM evaluation & RAG systems in production
  • 🎯 Platform engineering & developer experience

"Any fool can write code that a computer can understand. Good programmers write code that humans can understand." – Martin Fowler

Thanks for stopping by!
Currently building enterprise AI platforms one Kubernetes deployment at a time 🚀
