This guide is for developers who want to contribute to LLM Environment Manager.
We welcome contributions! Whether you're fixing bugs, adding features, improving documentation, or adding new providers, your help is appreciated.
- Add new providers - Support for additional LLM services
- Improve error handling - Better user experience
- Add new features - Enhanced functionality
- Fix bugs - Stability improvements
- Improve documentation - Help other users
- Write tests - Ensure reliability
- Bash 4.0+ (most Linux systems have this; macOS ships Bash 3.2 by default)
- Git for version control
- Text editor of your choice
- Access to LLM APIs for testing
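A quick way to sanity-check these prerequisites from a shell (the helper names here are illustrative, not part of llm-env):

```shell
# Sketch: verify development prerequisites (function names are illustrative)
bash_is_new_enough() {
  # Bash 4.0+ is needed for features such as associative arrays
  local major="${1:-${BASH_VERSINFO[0]}}"
  (( major >= 4 ))
}

have_tool() {
  command -v "$1" >/dev/null 2>&1
}

bash_is_new_enough || echo "Bash 4.0+ required; found $BASH_VERSION" >&2
for tool in git curl; do
  have_tool "$tool" || echo "Missing: $tool" >&2
done
```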
- Fork the repository

  ```bash
  # Fork on GitHub, then clone your fork
  git clone https://github.com/yourusername/llm-env.git
  cd llm-env
  ```

- Create a development branch

  ```bash
  git checkout -b feature/your-feature-name
  ```

- Set up the development environment

  ```bash
  # Make the script executable
  chmod +x llm-env

  # Test locally
  ./llm-env list
  ```
```
llm-env/
├── llm-env                  # Main script
├── config/
│   └── llm-env.conf         # Default configuration
├── docs/
│   ├── README.md            # Documentation hub
│   ├── configuration.md     # Configuration guide
│   ├── troubleshooting.md   # Troubleshooting guide
│   ├── development.md       # This file
│   └── comprehensive.md     # Compatible tools
├── examples/
│   ├── usage-scenarios.md   # Usage examples
│   └── shell-config.sh      # Shell setup examples
├── install.sh               # Installation script
├── README.md                # Main documentation
└── LICENSE                  # MIT license
```
- Use strict mode

  ```bash
  set -euo pipefail
  ```

- Quote variables

  ```bash
  # Good
  echo "$variable"

  # Bad
  echo $variable
  ```

- Use meaningful function names

  ```bash
  # Good
  load_configuration_file()
  validate_provider_config()

  # Bad
  load_config()
  validate()
  ```

- Add comments for complex logic

  ```bash
  # Parse INI file sections
  while IFS='=' read -r key value; do
    # Skip comments and empty lines
    [[ $key =~ ^[[:space:]]*# ]] && continue
    [[ -z $key ]] && continue
  done < "$config_file"
  ```

- Use consistent indentation (2 spaces)

- Handle errors gracefully

  ```bash
  if ! command -v curl >/dev/null 2>&1; then
    echo "Error: curl is required but not installed" >&2
    return 1
  fi
  ```
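Later examples in this guide call a `get_config_value` helper. Here is a minimal sketch of how such a lookup could work, assuming the `[section]`/`key=value` INI format used in this guide (the real script's implementation may differ):

```shell
# Sketch: read one key from a provider section of an INI-style config file.
# Assumes the [section] / key=value layout shown in this guide.
get_config_value() {
  local section="$1" key="$2" file="$3"
  local in_section=0 line k v
  while IFS= read -r line; do
    # Track which [section] we are in
    if [[ $line =~ ^\[(.+)\]$ ]]; then
      if [[ ${BASH_REMATCH[1]} == "$section" ]]; then
        in_section=1
      else
        in_section=0
      fi
      continue
    fi
    (( in_section )) || continue
    # Skip comments and empty lines
    [[ $line =~ ^[[:space:]]*# || -z $line ]] && continue
    k="${line%%=*}"
    v="${line#*=}"
    [[ $k == "$key" ]] && { printf '%s\n' "$v"; return 0; }
  done < "$file"
  return 1
}
```

Usage: `get_config_value cerebras base_url config/llm-env.conf` prints the value, or returns non-zero if the key is absent.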
- Use descriptive provider names

  ```ini
  # Good
  [cerebras]
  [openai-gpt4]
  [local-ollama]

  # Bad
  [c]
  [ai1]
  [local]
  ```

- Include helpful descriptions

  ```ini
  description=Cerebras - Fast inference with competitive pricing
  ```

- Use consistent API key variable naming

  ```ini
  api_key_var=LLM_PROVIDER_API_KEY
  ```
- API Compatibility: Ensure it's OpenAI-compatible
- Authentication: How API keys are handled
- Base URL: The correct endpoint
- Available Models: What models are supported
- Documentation: Official API docs
Edit config/llm-env.conf:

```ini
[new-provider]
base_url=https://api.newprovider.com/v1
api_key_var=LLM_NEWPROVIDER_API_KEY
default_model=their-best-model
description=New Provider - Brief description of their service
enabled=true
```

```bash
# Set up API key
export LLM_NEWPROVIDER_API_KEY="your_test_key"

# Test with llm-env
./llm-env set new-provider
./llm-env show

# Test API call
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"'"$OPENAI_MODEL"'","messages":[{"role":"user","content":"Hello!"}]}' \
  "$OPENAI_BASE_URL/chat/completions"
```

- Add to the provider list in README.md
- Add example usage in examples/usage-scenarios.md
- Update docs/comprehensive.md if it's a notable service
Include:
- Provider configuration
- Test results
- Documentation updates
- Any special setup instructions
- Basic functionality

  ```bash
  # Test all core commands
  ./llm-env list
  ./llm-env set cerebras
  ./llm-env show
  ./llm-env unset
  ```

- Configuration management

  ```bash
  # Test config commands
  source ./llm-env config init
  source ./llm-env config validate
  source ./llm-env config add test-provider
  source ./llm-env config remove test-provider
  ```

- Error handling

  ```bash
  # Test error conditions
  ./llm-env set nonexistent-provider
  ./llm-env set provider-without-key
  ```
Test with real API calls:

```bash
# Test each provider
for provider in cerebras openai groq openrouter; do
  echo "Testing $provider..."
  ./llm-env set "$provider"

  # Test models endpoint
  curl -s -H "Authorization: Bearer $OPENAI_API_KEY" \
    "$OPENAI_BASE_URL/models" | jq '.data[0].id' || echo "Failed"
done
```

Create test scripts:

```bash
#!/bin/bash
# tests/basic_functionality.sh
set -euo pipefail

# Test script exists and is executable
[[ -x ./llm-env ]] || { echo "Script not executable"; exit 1; }

# Test list command
./llm-env list >/dev/null || { echo "List command failed"; exit 1; }

# Test invalid provider
if ./llm-env set invalid-provider 2>/dev/null; then
  echo "Should have failed with invalid provider"
  exit 1
fi

echo "Basic tests passed"
```
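If you collect scripts like this under a `tests/` directory, a small runner can execute them all. This is a sketch; the `tests/` layout and the function name are assumptions, not part of the project:

```shell
# Sketch: run every test script in a directory and report pass/fail
# (the tests/*.sh layout is an assumption)
run_test_scripts() {
  local dir="${1:-tests}" failures=0 t
  for t in "$dir"/*.sh; do
    [[ -e $t ]] || continue   # glob matched nothing
    if bash "$t"; then
      echo "PASS: $t"
    else
      echo "FAIL: $t"
      failures=$((failures + 1))
    fi
  done
  return "$failures"
}
```

The runner's exit status is the number of failing scripts, so `run_test_scripts && echo OK` works in CI-style pipelines.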
- Update the version in the script

  ```bash
  # In llm-env script
  VERSION="1.1.0"
  ```

- Update README badges

- Create a changelog entry documenting new features, bug fixes, and breaking changes
- All tests pass
- Documentation is updated
- Version numbers are consistent
- New providers are tested
- Breaking changes are documented
- Installation script works
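One checklist item that is easy to automate is version consistency. A sketch, assuming the `VERSION="x.y.z"` line format shown above (the helper name is ours, not part of llm-env):

```shell
# Sketch: confirm the script's VERSION line matches the intended release
version_matches() {
  local script="$1" expected="$2" found
  # Extract x.y.z from a line of the form: VERSION="x.y.z"
  found=$(sed -n 's/^VERSION="\(.*\)"/\1/p' "$script" | head -n 1)
  [[ $found == "$expected" ]]
}
```

Usage: `version_matches ./llm-env 1.1.0 || echo "Version mismatch" >&2`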
- Create a release branch

  ```bash
  git checkout -b release/v1.1.0
  ```

- Final testing

  ```bash
  # Test installation
  ./install.sh

  # Test all providers
  llm-env list
  ```

- Create a GitHub release
  - Tag the release
  - Upload assets if needed
  - Write release notes
- Add to the main case statement

  ```bash
  case "$1" in
    "new-command")
      handle_new_command "${@:2}"
      ;;
  esac
  ```

- Implement the function

  ```bash
  handle_new_command() {
    local args=("$@")
    # Implementation here
  }
  ```

- Add to the help text

  ```bash
  show_help() {
    cat << EOF
  Commands:
    new-command    Description of new command
  EOF
  }
  ```
```bash
# Good error messages
echo "Error: Provider '$provider' not found. Available providers:" >&2
list_providers >&2
return 1

# Include helpful context
echo "Error: API key not found for $provider" >&2
echo "Please set: export $api_key_var='your-api-key'" >&2
```

```bash
validate_provider_config() {
  local provider="$1"
  local config_file="$2"

  # Check required fields
  local base_url api_key_var default_model
  base_url=$(get_config_value "$provider" "base_url" "$config_file")
  api_key_var=$(get_config_value "$provider" "api_key_var" "$config_file")
  default_model=$(get_config_value "$provider" "default_model" "$config_file")

  [[ -z $base_url ]] && { echo "Missing base_url for $provider"; return 1; }
  [[ -z $api_key_var ]] && { echo "Missing api_key_var for $provider"; return 1; }
  [[ -z $default_model ]] && { echo "Missing default_model for $provider"; return 1; }

  return 0
}
```

```bash
# Add debug output to functions
if [[ ${LLM_ENV_DEBUG:-} == "1" ]]; then
  echo "[DEBUG] Loading config from: $config_file" >&2
fi
```

```bash
# Trace script execution
set -x

# Check variable values
echo "DEBUG: provider=$provider, base_url=$base_url" >&2

# Validate assumptions
[[ -f "$config_file" ]] || { echo "Config file not found: $config_file" >&2; return 1; }
```

- Never log API keys
- Don't include keys in error messages
- Use environment variables, not files
- Sanitize debug output
```bash
# Good: Hide sensitive data
echo "API key: ${api_key:0:8}..." >&2

# Bad: Expose full key
echo "API key: $api_key" >&2
```

```bash
# Validate provider names
if [[ ! $provider =~ ^[a-zA-Z0-9_-]+$ ]]; then
  echo "Error: Invalid provider name" >&2
  return 1
fi

# Validate URLs
if [[ ! $base_url =~ ^https?:// ]]; then
  echo "Error: Invalid base URL" >&2
  return 1
fi
```

- GitHub Discussions: Ask questions about development
- Issues: Report bugs or request features
- Code Review: Get feedback on pull requests
- Bash Manual: Advanced scripting techniques
- OpenAI API Docs: Understanding the standard
- Provider Documentation: Specific API details
Thank you for contributing to LLM Environment Manager! Your efforts help make AI tools more accessible to everyone.