A powerful Streamlit-based SQL query interface that supports multiple database types, file-to-database conversion, and AI-powered data analysis. Execute SQL queries against various databases with an intuitive web interface, complete with sample data, query history, and intelligent insights.
- Multi-Database Support: SQLite, PostgreSQL, MySQL, SQL Server
- File-to-Database Conversion: Convert CSV/JSON files to SQLite databases
- Sample Database: Pre-loaded employee and department data for testing
- Real-time Query Execution: Execute SQL with immediate formatted results
- Query History: Track and reuse previous queries
- Export Results: Download query results as CSV files
- Generative AI Integration: Powered by Cohere API for intelligent data analysis
- Automatic Data Analysis: Get insights about your query results
- Visualization Suggestions: AI-recommended charts and graphs
- Custom Analysis: Ask specific questions about your data
- Interactive Chat: Conversational interface for data exploration
- CSV/JSON Upload: Direct file upload with automatic conversion
- Data Preview: View file contents before conversion
- Smart Sampling: Handle large files with intelligent sampling
- Schema Detection: Automatic data type inference
- Frontend: Streamlit
- Database Engines: SQLAlchemy with multiple dialect support
- AI/ML: Cohere API for natural language processing
- Data Processing: Pandas, NumPy
- Visualization: Plotly Express (integrated with AI suggestions)
- File Handling: Native Python libraries with smart data type detection
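At its core, the stack above comes together in a pattern like this minimal sketch: SQLAlchemy supplies the engine, pandas executes the query and returns a DataFrame, and Streamlit renders the result. This is illustrative only and not the app's exact code; the file name and widget layout are assumptions.

```python
import pandas as pd
import streamlit as st
from sqlalchemy import create_engine, text

# Any supported dialect URL works here; SQLite is used as a placeholder.
engine = create_engine("sqlite:///sample.db")

query = st.text_area("SQL query", "SELECT * FROM employees LIMIT 10;")
if st.button("Run query"):
    with engine.connect() as conn:
        # pandas executes the statement and returns a typed DataFrame
        df = pd.read_sql(text(query), conn)
    st.dataframe(df)  # interactive result grid
```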
- Python 3.8 or higher
- pip package manager
- Cohere API key (for AI features)
- Clone the repository
  - `git clone https://github.com/yourusername/sql-query-interface.git`
  - `cd sql-query-interface`
- Create virtual environment
  - `python -m venv venv`
  - `source venv/bin/activate` (on Windows: `venv\Scripts\activate`)
- Install dependencies
  - `pip install -r requirements.txt`
- Run the application
  - `streamlit run app.py`
- Access the application
  - Open your browser and navigate to http://localhost:8501
The application comes with pre-loaded sample data to help you get started immediately:
Sample Tables:
- employees - Employee records with salary and department information
- departments - Department data with budget information
Sample Queries:
-- View all employees
SELECT * FROM employees;
-- Find high-salary employees
SELECT * FROM employees WHERE salary > 65000;
-- Count employees by department
SELECT department, COUNT(*) as count FROM employees GROUP BY department;
-- Join employees with department budgets
SELECT e.name, e.salary, d.budget
FROM employees e
JOIN departments d ON e.department = d.name;
- Pre-loaded with employee and department sample data
- No configuration required
- Perfect for testing and learning SQL
- CSV/JSON to SQLite: Upload files and convert them to queryable databases
- Direct SQLite Upload: Upload existing .db files
- Smart Features (see the conversion sketch below):
  - Automatic data type detection
  - Configurable table names
  - Smart sampling for large files (default: 10,000 records)
  - Data preview before conversion
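The conversion sketch referenced above might look roughly like the following. This is a simplified, hypothetical helper; the actual logic lives in helper/gen_sql_chunk.py and may differ.

```python
import pandas as pd
from sqlalchemy import create_engine

def csv_to_sqlite(csv_path: str, db_path: str, table_name: str,
                  max_records: int = 10_000) -> int:
    """Convert a CSV file into a SQLite table, sampling it down if it is large."""
    df = pd.read_csv(csv_path)          # column dtypes are inferred here
    if len(df) > max_records:
        # "smart sampling": keep a random subset so huge files stay responsive
        df = df.sample(n=max_records, random_state=42)
    engine = create_engine(f"sqlite:///{db_path}")
    df.to_sql(table_name, engine, if_exists="replace", index=False)
    return len(df)                      # number of records written
```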
Configure connections to:
- PostgreSQL (default port: 5432)
- MySQL (default port: 3306)
- SQL Server (with ODBC driver)
- Enable AI Features
  - Toggle "Use Generative AI" in sidebar
  - Get your API key from Cohere Dashboard
  - Enter API key in the password field
- AI Analysis Options
  - Analyze Data: Get automatic insights about your query results
  - Suggest Visualizations: Receive AI recommendations for charts and graphs
  - Custom Analysis: Ask specific questions about your data
- Example AI Prompts
  - "Find outliers in salary data"
  - "Compare performance across departments"
  - "What trends do you see in this data?"
  - "Suggest ways to improve department efficiency"
sql-query-interface/
├── app.py               # Main Streamlit application
├── helper/
│   ├── gen_sql_chunk.py # File conversion utilities
│   └── insight_gen.py   # AI analysis components
├── requirements.txt     # Python dependencies
└── README.md            # This file
SQLAlchemy connection URL formats for the supported database types:
- SQLite: `sqlite:///path/to/file.db`
- PostgreSQL: `postgresql://user:pass@host:port/db`
- MySQL: `mysql+pymysql://user:pass@host:port/db`
- SQL Server: `mssql+pyodbc://user:pass@host:port/db`
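Building and validating an engine from one of the URL formats above might look like this. The credentials and host are placeholders, and the matching driver package (for example psycopg2 for PostgreSQL or pymysql for MySQL) must be installed.

```python
from sqlalchemy import create_engine, text

# Placeholder URL; swap in any of the formats listed above.
url = "postgresql://user:pass@localhost:5432/analytics"
engine = create_engine(url)

# Connection validation: fail fast before letting users run queries.
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))
```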
- Supported formats: CSV, JSON
- Auto-detection: Data types, column names, encoding
- Smart sampling: Configurable record limits for large files
- Preview mode: See data before conversion
Cohere API integration is used for:
- Data analysis and insights
- Visualization recommendations
- Custom query suggestions
- Interactive data exploration
- SQL Injection Prevention: All queries use SQLAlchemy's text() construct with bound parameters (see the sketch below)
- Temporary File Management: Secure handling of uploaded files
- API Key Security: Keys stored in session state, not persisted
- Connection Validation: Test connections before query execution
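A minimal sketch of the parameter-binding idea behind the first point, using SQLAlchemy's text() construct. The table and column names come from the sample database; the snippet is illustrative, not the app's exact code.

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///sample.db")
with engine.connect() as conn:
    # The value is passed as a bound parameter, never interpolated into the SQL string.
    stmt = text("SELECT name, salary FROM employees WHERE salary > :min_salary")
    rows = conn.execute(stmt, {"min_salary": 65000}).fetchall()
```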
-- Start with sample data
SELECT * FROM employees LIMIT 5;
-- Explore structure
PRAGMA table_info(employees);
-- Basic analytics
SELECT department, AVG(salary) as avg_salary
FROM employees
GROUP BY department
ORDER BY avg_salary DESC;
- Upload CSV/JSON file
- Preview data and configure conversion
- Convert to SQLite database
- Execute exploratory queries
- Use AI analysis for insights
- Export results
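The final export step above typically reduces to a small Streamlit snippet like this (illustrative; `df` stands in for an actual query result):

```python
import pandas as pd
import streamlit as st

df = pd.DataFrame({"name": ["Alice"], "salary": [70000]})  # stand-in for a real result
st.download_button(
    label="Download results as CSV",
    data=df.to_csv(index=False),
    file_name="query_results.csv",
    mime="text/csv",
)
```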
- Configure external database connection
- Explore available tables
- Execute business intelligence queries
- Generate AI-powered insights
- Download results for reporting
- Enhanced Visualizations: Direct chart generation from AI suggestions
- Query Builder: Visual query construction interface
- Scheduled Queries: Automated query execution
- Multi-User Support: Session management and user authentication
- Advanced AI Features: Query optimization suggestions
- Dashboard Creation: Save and share query dashboards
- More AI Providers: OpenAI, Anthropic Claude integration
- Export Formats: Excel, PDF, JSON export options
- Large result sets may impact performance
- AI features require internet connection
- Temporary databases are session-based
- No persistent user data storage
- Launch the application
- Select "SQLite (Sample Data)"
- Try these sample queries:
-- Basic selection
SELECT name, salary FROM employees WHERE department = 'Engineering';
-- Aggregation
SELECT department, COUNT(*) as headcount, AVG(salary) as avg_pay FROM employees GROUP BY department;
-- Join operation
SELECT e.name, e.salary, d.budget FROM employees e JOIN departments d ON e.department = d.name;
- Prepare a CSV file with sample data
- Select "SQLite (File Upload)"
- Toggle "Convert my csv/json"
- Upload file and configure conversion
- Execute queries on converted data
- Fork the repository
- Create feature branch (`git checkout -b feature/new-feature`)
- Install development dependencies
- Make changes and test thoroughly
- Submit pull request
- Follow PEP 8 guidelines
- Use type hints where applicable
- Add docstrings for functions
- Test with multiple database types
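For reference, a short function in the expected style, with PEP 8 naming, type hints, and a docstring (an illustrative example, not code from the repository):

```python
import pandas as pd
from sqlalchemy.engine import Engine

def run_query(engine: Engine, sql: str) -> pd.DataFrame:
    """Execute a SQL statement and return the result as a DataFrame."""
    return pd.read_sql(sql, engine)
```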
- Issues: Report bugs and request features via GitHub Issues
- Cohere API: Get API key
- Streamlit Docs: Streamlit Documentation
- SQLAlchemy: Database engine documentation
- Streamlit for the amazing web framework
- Cohere for AI-powered analysis capabilities
- SQLAlchemy for robust database connectivity
- Pandas for data manipulation and analysis
Minimum:
- RAM: 2GB
- Storage: 500MB free space
- Python: 3.8+

Recommended:
- RAM: 4GB+
- Storage: 2GB+ free space
- CPU: Multi-core processor
Built with ❤️ using Streamlit
Execute SQL queries safely and efficiently with AI-powered insights