We recommend these similar jobs:
Full Stack Developer / Vibe Coding Intern (Stipend • AI-Powered • Remote)
30/05/2025
Programming & Tech
Part time
Remote
Entry level
360 to 464 USD
Location: Remote
Start: Summer 2025
Compensation: Stipend
Schedule: Part-time, flexible hours (at least 5 hours per week)

About BiblioSync:
We’re building a second brain for modern thinkers and doers — an AI-powered web app that helps people organize their insights and actually use them. Our MVP is live, and we’re expanding quickly. We're looking for an intern who thrives in early-stage energy, builds fast, and is curious about how AI and product design intersect. https://bibliosync.com

What You'll Do:
- Learn and apply AI-assisted coding and prompting techniques. You won't be writing lines of code for hours; instead, you'll orchestrate code development using AI
- Ship features using Supabase + Replit (along with other vibe-coding tools and workflows)
- Test, debug, and refine our live MVP
- Collaborate on UI in Figma and manage work in ClickUp
- Help transition our responsive web app to mobile readiness
- Explore how knowledge graphs support user memory and insight retrieval

Qualifications:
- Experience with full-stack projects (personal, academic, or bootcamp); mobile experience is a PLUS
- Comfort with GitHub, frontend JS, and responsive layout work
- Interest in AI workflows and building smarter, not harder
- Foundational understanding of databases and cloud services (GCP, AWS)
- Ability to use and create APIs
- Curiosity about knowledge graphs or semantic relationships
- AI prompting experience (chat level is fine)
- Understanding of ML/model training and knowledge-graph foundations is a PLUS

Perks:
- Stipend
- Real user feedback on your work
- Fast-paced learning environment
- Mentorship on AI product design & execution
- Potential permanent hire + equity
- Exposure / shareable clips from our "behind the scenes" social videos

How to Apply:
Please submit:
- A project (or projects) you’ve built (it doesn't need a flashy UI; we're more interested in functionality)
- A brief description of your skill set and experience (we'd love to hear about the kinds of things you're interested in building)
- Why this opportunity resonates with you
- Your summer availability

We launched in May 2025. It’s real. Users are in. Code ships weekly. We’re moving fast—and you’ll help us move faster.

This is not your average “dev internship.” This is:
- Vibe coding with AI
- Designing better prompts, workflows, and results
- Getting your code into production
- Making real product decisions on a startup team

What You’ll Learn:
AI-powered product building—from inside a fast-moving startup. By the end of the summer, you’ll have:
- Shipped real features to real users
- Learned AI dev workflows with AI tools, agents, and cloud-based tools
- Understood how prompts drive AI coding agents (and how to improve them)
- Worked on a distributed team using async collaboration
- Practiced debugging, testing, and releasing with product focus
- Built responsive UIs and explored mobile-first thinking
- Explored knowledge graphs and GraphRAG
- Worked with MCP and agentic workflows
- Gained resume-ready experience that stands out
...just to name a few
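To make the "knowledge graphs support user memory and insight retrieval" idea concrete, here is a minimal, dependency-free sketch: notes become nodes, shared concepts become implicit edges, and related insights are recalled by following those edges. All names and data here are illustrative assumptions, not BiblioSync's actual data model.

```python
# Toy knowledge-graph sketch: notes linked by shared concepts.
# Everything here (note ids, concepts, field names) is hypothetical.
from collections import defaultdict

notes = {
    "n1": {"text": "Spaced repetition beats cramming", "concepts": {"memory", "learning"}},
    "n2": {"text": "Zettelkasten links ideas, not topics", "concepts": {"linking", "notes"}},
    "n3": {"text": "Active recall strengthens memory", "concepts": {"memory", "recall"}},
}

# Inverted index: concept -> ids of notes that mention it.
concept_index = defaultdict(set)
for note_id, note in notes.items():
    for concept in note["concepts"]:
        concept_index[concept].add(note_id)

def related_notes(note_id):
    """Return ids of other notes sharing at least one concept with note_id."""
    related = set()
    for concept in notes[note_id]["concepts"]:
        related |= concept_index[concept]
    related.discard(note_id)
    return sorted(related)

print(related_notes("n1"))  # n1 and n3 both carry the "memory" concept
```

A production system would persist these edges (e.g., in a graph or relational store) and rank neighbors by edge weight, but the retrieval idea is the same traversal shown here.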
VP of AI & GEO (Generative Engineering & Operations)
29/05/2025
Programming & Tech
Full time
Remote
Senior level
8000 to 10000 USD
Location: Remote (LatAm or US)
Reports to: CEO
Type: Full-time, builder-first
Compensation: USD 8k - 10k

Role Overview:
We’re looking for a VP of AI & GEO (Generative Engineering & Operations) to lead the design, execution, and scaling of our generative AI infrastructure. This person will own the full stack that powers our copilots — including voice interfaces, prompt pipelines, RAG systems, and real-time decision engines. This is not a research role. It’s a hands-on, builder-first leadership position for someone who thrives in speed, complexity, and technical clarity, and who works directly with the CEO to make our vision real — and scalable.

Key Responsibilities:
- Design and scale the full GEO stack: voice transcription (Whisper), LLM prompts (OpenAI), embeddings and context recall (Pinecone, Weaviate, etc.)
- Lead the implementation of retrieval-augmented generation (RAG) pipelines across our 5,000+ (and soon 5M+) diagnostic cases
- Build tool-based agents for our copilots: from repair guidance to procurement to fleet actions
- Set up logging, monitoring, evaluation, and fallback systems to ensure fast, reliable inference
- Collaborate with product to shape user flows and real-world use cases for mechanics and fleet operators
- Mentor and grow the technical team in best practices for generative systems and AI-native ops

Qualifications:
- 7+ years of experience building backend or AI systems, with at least 2 years shipping GenAI products
- Deep hands-on experience with OpenAI, LangChain or LangGraph, Whisper, and Pinecone or equivalent vector DBs
- Strong Python engineering fundamentals (FastAPI, Docker, async workflows)
- Experience shipping real-world AI products, not just prototypes or research
- Respect from technical peers — ideally with a visible GitHub, community presence, or open-source contributions
- Fluency in prompt design, evaluation, and cost/latency trade-offs
- Low-ego, high-output, clear communicator
- Fluency in both Spanish and English

Bonus:
- Built copilots or agent-based interfaces from scratch
- Experience with speech-to-text and voice generation (e.g., ElevenLabs)
- Familiarity with the automotive, logistics, or field operations industries
- Experience leading technical teams or mentoring other engineers
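The retrieve-then-prompt flow this role owns can be sketched end to end without any external services. The real stack named above uses Whisper for transcription, an embedding model, and a vector DB such as Pinecone; in this illustrative sketch, a toy bag-of-words embedding and an in-memory list stand in for all of that, and the diagnostic cases are invented examples.

```python
# Dependency-free RAG sketch: embed cases, retrieve nearest, build prompt.
# The corpus and embedding are toy stand-ins for a real vector DB + model.
import math
from collections import Counter

cases = [
    "engine misfire caused by worn spark plugs",
    "brake squeal from glazed pads",
    "battery drain traced to parasitic alternator fault",
]

def embed(text):
    """Toy embedding: bag-of-words term counts (a real system uses a model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

index = [(embed(c), c) for c in cases]  # stand-in for the vector DB

def retrieve(query, k=1):
    """Return the k cases most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda entry: cosine(q, entry[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(query):
    """Assemble retrieved context plus the question into an LLM prompt."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("why does the engine misfire"))
```

The production concerns the posting lists (logging, evaluation, fallbacks) wrap around exactly these three steps: embed, retrieve, assemble.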
LLM/ML Specialist
28/05/2025
Programming & Tech
Full time
Remote
Mid level
1000 to 1500 USD
We're seeking an experienced LLM/ML Specialist with deep expertise in LLaMA (or other open-source) models and Retrieval-Augmented Generation (RAG) systems. The ideal candidate will have strong skills in model fine-tuning, prompt engineering, and production deployment of language models. You'll build and optimize RAG pipelines, implement vector databases, and develop efficient inference solutions. Requirements include 2+ years of LLM experience, Python proficiency with PyTorch/Hugging Face, and demonstrated projects involving LLaMA models.

Technical Skills:
- Deep understanding of the LLaMA model architecture and its variants
- Experience fine-tuning and adapting LLaMA models for specific applications
- Proficiency in Python and ML frameworks (especially PyTorch and Hugging Face)
- Knowledge of prompt engineering specific to LLaMA models
- Experience with efficient inference and quantization techniques for LLaMA
- Understanding of model deployment and optimization for large language models
- Expertise in Retrieval-Augmented Generation (RAG) systems and architectures
- Experience implementing vector databases and similarity-search techniques
- Knowledge of document chunking and embedding strategies for RAG
- Familiarity with evaluation metrics for RAG systems

Qualifications:
- Degree in Machine Learning, NLP, Computer Science, or a related field
- 2+ years of experience working with large language models, preferably LLaMA
- Strong mathematical background in statistics and probability
- Demonstrated projects involving LLaMA model adaptation or deployment
- Excellent communication skills to explain complex concepts
- Proven experience building and optimizing RAG pipelines in production environments

Preferred:
- Experience with PEFT methods (LoRA, QLoRA, etc.) for LLaMA models
- Experience with fine-tuning
- Understanding of model limitations and ethical considerations
- Experience with LLaMA integration into production systems
- Familiarity with open-source LLM ecosystems
- Experience with hybrid search methodologies (dense + sparse retrieval)
- Knowledge of context-window optimization techniques for RAG systems
- Experience with multi-stage retrieval architectures

Responsibilities:
- Design and implement LLaMA-based solutions for real-world applications
- Develop, optimize, and deploy RAG systems using vector databases and embedding strategies
- Fine-tune LLaMA models using techniques like LoRA and QLoRA
- Create efficient prompt-engineering strategies for specialized use cases
- Implement and optimize inference pipelines for production environments
- Design evaluation frameworks to measure model and RAG system performance
- Collaborate with engineering teams to integrate LLMs into product infrastructure
- Research and implement the latest advancements in LLM and RAG technologies
- Document technical approaches, model architectures, and system designs
- Mentor junior team members on LLM and NLP best practices

What We Offer:
- Opportunity to work on cutting-edge LLM and RAG applications
- Access to computational resources for model training and experimentation
- Collaborative environment with other ML/AI specialists
- Flexible work arrangements with remote options
- Competitive salary and comprehensive benefits package
- Professional development budget for conferences and courses
- Clear career progression path for AI specialists
- Chance to contribute to the open-source LLM ecosystem
- Balanced workload with dedicated research time
- Inclusive culture that values diverse perspectives and innovative thinking
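The LoRA/QLoRA methods mentioned above all rest on one low-rank trick: instead of updating a full weight matrix W (d x d), train two small matrices B (d x r) and A (r x d) with rank r much smaller than d, and apply W_eff = W + (alpha / r) * B @ A. The sketch below shows that arithmetic in plain Python with tiny invented matrices; real fine-tuning would of course use PyTorch and a PEFT library rather than hand-rolled lists.

```python
# LoRA merge sketch: W_eff = W + (alpha / r) * B @ A.
# Matrices are tiny illustrative examples, not trained weights.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_merge(W, A, B, alpha):
    """Merge low-rank adapters into W: W + (alpha / r) * B @ A."""
    r = len(A)                      # A is r x d, B is d x r
    delta = matmul(B, A)            # d x d low-rank update
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

d, r, alpha = 4, 1, 2.0
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base
A = [[0.5, 0.0, 0.0, 0.0]]          # r x d, trainable
B = [[1.0], [0.0], [0.0], [0.0]]    # d x r, trainable

W_eff = lora_merge(W, A, B, alpha)
# Only d*r + r*d = 8 adapter values are trained instead of d*d = 16.
print(W_eff[0][0])  # 1.0 + (2.0 / 1) * 1.0 * 0.5 = 2.0
```

The parameter savings scale with d: for a 4096-wide LLaMA layer at rank 16, the adapters hold about 0.8% of the full matrix's parameters, which is why LoRA makes fine-tuning tractable on modest hardware.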