LLM
6 - A Small Step Towards Production Readiness
·1230 words·6 mins
This post guides you through improving Python code quality with Ruff, a fast linter and formatter, and pre-commit for automated checks. It also covers structuring LLM prompts with a Prompt model for scalable AI integrations, including updates to the Ollama and Bedrock chat services.
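As a taste of the structured-prompt idea, here is a minimal sketch of what such a Prompt model could look like; it assumes Pydantic, and the field names and defaults are illustrative rather than the post's exact schema.

```python
from pydantic import BaseModel, Field


class Prompt(BaseModel):
    """Structured prompt passed to a chat service (e.g. Ollama or Bedrock)."""

    system: str = Field(default="You are a helpful assistant.")
    user: str
    temperature: float = Field(default=0.2, ge=0.0, le=1.0)

    def as_messages(self) -> list[dict[str, str]]:
        """Render the prompt in the role/content format most chat APIs expect."""
        return [
            {"role": "system", "content": self.system},
            {"role": "user", "content": self.user},
        ]


# Example usage: build a prompt once, hand it to any chat backend.
prompt = Prompt(user="Summarize this web page in three sentences.")
print(prompt.as_messages())
```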
Serverless Integration for Large Language Model (LLM) using AWS Lambda
This post guides you through using a Large Language Model (LLM) to summarize web pages and storing the summaries in an Amazon DynamoDB table via AWS Lambda. It includes prerequisites, architecture details, and references to resources such as Udemy courses and GitHub repositories.
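For orientation, here is a minimal sketch of the kind of Lambda handler the post describes: summarize a page with an LLM and persist the result in DynamoDB. The table name, event shape, model ID, and use of the Bedrock converse API are assumptions for illustration, not the post's exact code.

```python
import boto3
import requests

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("WebPageSummaries")   # hypothetical table name
bedrock = boto3.client("bedrock-runtime")    # assumes Bedrock as the LLM backend


def summarize(url: str) -> str:
    """Fetch the page and ask the model for a short summary (model ID is an example)."""
    page_text = requests.get(url, timeout=10).text
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": f"Summarize this page:\n{page_text}"}]}],
    )
    return response["output"]["message"]["content"][0]["text"]


def lambda_handler(event, context):
    url = event["url"]                       # assumed event shape: {"url": "..."}
    summary = summarize(url)
    table.put_item(Item={"url": url, "summary": summary})
    return {"statusCode": 200, "body": summary}
```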
1 - Hello World from Ollama
·1257 words·6 mins
This guide walks you through setting up a local AI development environment using Ollama for running large language models and DevContainers in Visual Studio Code for a consistent, isolated setup. It covers installation, configuration, and making REST API calls to Ollama, with specific instructions for Windows users using WSL 2.
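To give a sense of the REST calls involved, here is a minimal sketch that assumes a local Ollama server on its default port (11434) and an already pulled model such as llama3; the prompt is just an example.

```python
import requests

# Ask the local Ollama server for a single, non-streamed completion.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hello, world!", "stream": False},
    timeout=60,
)
response.raise_for_status()
print(response.json()["response"])  # the generated text
```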