We recommend these similar jobs:
QA Engineer
15/09/2025
Programming & Tech
Full time
Remote
Mid level
1400 to 1600 USD
We are hiring a QA Engineer to lead and execute both manual and automated testing across web, REST APIs, and potentially mobile/watch apps. You'll design and run comprehensive test suites, validate key features, and contribute to automation pipelines in collaboration with developers and DevOps.

Key Responsibilities
- Design, document, and execute test cases and test scenarios based on product requirements
- Perform both manual and automated testing across web, API, and optionally mobile platforms
- Build and maintain automated test suites using Selenium, Cypress, REST Assured, Postman, or JMeter
- Work closely with engineering teams to catch defects early and improve code/test coverage
- Contribute to CI/CD test automation and reporting integration
- Document bugs, analyze root causes, and validate resolutions

First 90 Days
- Deliver comprehensive test coverage for the web and API components of our cardio application
- Launch and validate the first wave of automated tests within our CI/CD pipeline
- Collaborate cross-functionally to integrate QA processes into the release cycle
- Establish baseline quality metrics and defect tracking systems

Tools & Stack
- Testing tools: Selenium, Cypress, Postman, REST Assured, JMeter
- Testing types: Functional, regression, integration, load, and exploratory
- Focus areas: Web app, REST APIs, potentially smartwatch/mobile apps
- CI/CD integration: Azure DevOps, GitHub Actions

Success Metrics
- High test case execution coverage
- Timely detection and resolution of critical defects
- CI/CD pipeline integration for automated test suites
- Reliable test documentation and communication

Requirements
- 4–8 years of QA experience in software testing
- Experience with both manual and automated testing
- Familiarity with API testing and tools like Postman and REST Assured
- Working knowledge of automation frameworks (Selenium, Cypress, etc.)
- Strong attention to detail and remote documentation habits
- Fluent English and the ability to work overlapping US hours

Other Details
- Work arrangement: Remote, offshore
- Workload: Full-time (40+ hours/week)
- Interview process: Technical interview + management interview
- Soft skills: Communication, documentation, startup agility
- Compensation: TBD
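As a rough illustration of the automated API testing this role involves, here is a minimal sketch of a functional/regression-style check written with Python's pytest and requests. The posting itself lists REST Assured and Postman; Python is used here purely for illustration, and the base URL, endpoint, and response fields are hypothetical.

```python
# Minimal sketch of an automated REST API check of the kind described above.
# The base URL, endpoint, and response fields are hypothetical examples,
# not details from the job ad. Requires: pip install pytest requests
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test


def test_patient_summary_returns_expected_fields():
    """Functional check: the summary endpoint responds 200 and carries core fields."""
    resp = requests.get(f"{BASE_URL}/v1/patients/123/summary", timeout=10)
    assert resp.status_code == 200

    body = resp.json()
    # Validate presence and basic types of key fields (regression-style assertions).
    assert isinstance(body.get("heart_rate"), (int, float))
    assert body.get("status") in {"ok", "warning", "critical"}


def test_unknown_patient_returns_404():
    """Negative-path check: unknown resources should not leak data."""
    resp = requests.get(f"{BASE_URL}/v1/patients/does-not-exist/summary", timeout=10)
    assert resp.status_code == 404
```

A suite like this would typically be wired into the Azure DevOps or GitHub Actions pipeline mentioned in the stack so it runs on every build.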
LLM/ML Specialist
09/09/2025
Programming & Tech
Full time
Remote
Mid level
800 to 1200 USD
We're seeking an experienced LLM/ML Specialist with deep expertise in LLaMA (or other open-source) models and Retrieval-Augmented Generation (RAG) systems. The ideal candidate will have strong skills in model fine-tuning, prompt engineering, and production deployment of language models. You'll build and optimize RAG pipelines, implement vector databases, and develop efficient inference solutions. Requirements include 2+ years of LLM experience, Python proficiency with PyTorch/Hugging Face, and demonstrated projects involving LLaMA models.

Technical Skills:
- Deep understanding of the LLaMA model architecture and its variants
- Experience fine-tuning and adapting LLaMA models for specific applications
- Proficiency in Python and ML frameworks (especially PyTorch and Hugging Face)
- Knowledge of prompt engineering specific to LLaMA models
- Experience with efficient inference and quantization techniques for LLaMA
- Understanding of model deployment and optimization for large language models
- Expertise in Retrieval-Augmented Generation (RAG) systems and architectures
- Experience implementing vector databases and similarity search techniques
- Knowledge of document chunking and embedding strategies for RAG
- Familiarity with evaluation metrics for RAG systems

Qualifications:
- Degree in Machine Learning, NLP, Computer Science, or a related field
- 2+ years of experience working with large language models, preferably LLaMA
- Strong mathematical background in statistics and probability
- Demonstrated projects involving LLaMA model adaptation or deployment
- Excellent communication skills to explain complex concepts
- Proven experience building and optimizing RAG pipelines in production environments

Preferred:
- Experience with PEFT methods (LoRA, QLoRA, etc.) for LLaMA models
- Experience with fine-tuning
- Understanding of model limitations and ethical considerations
- Experience with LLaMA integration into production systems
- Familiarity with open-source LLM ecosystems
- Experience with hybrid search methodologies (dense + sparse retrieval)
- Knowledge of context window optimization techniques for RAG systems
- Experience with multi-stage retrieval architectures

Responsibilities:
- Design and implement LLaMA-based solutions for real-world applications
- Develop, optimize, and deploy RAG systems using vector databases and embedding strategies
- Fine-tune LLaMA models using techniques like LoRA and QLoRA
- Create efficient prompt engineering strategies for specialized use cases
- Implement and optimize inference pipelines for production environments
- Design evaluation frameworks to measure model and RAG system performance
- Collaborate with engineering teams to integrate LLMs into product infrastructure
- Research and implement the latest advancements in LLM and RAG technologies
- Document technical approaches, model architectures, and system designs
- Mentor junior team members on LLM and NLP best practices

What We Offer:
- Opportunity to work on cutting-edge LLM and RAG applications
- Access to computational resources for model training and experimentation
- Collaborative environment with other ML/AI specialists
- Flexible work arrangements with remote options
- Competitive salary and comprehensive benefits package
- Professional development budget for conferences and courses
- Clear career progression path for AI specialists
- Chance to contribute to the open-source LLM ecosystem
- Balanced workload with dedicated research time
- Inclusive culture that values diverse perspectives and innovative thinking
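For context on what "building a RAG pipeline" entails, here is a minimal, self-contained sketch of the dense-retrieval step. The embed() function is a toy stand-in (a real pipeline would use a sentence-embedding model, e.g. from Hugging Face), and all document text is made up for illustration.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# embed() is a toy stand-in, not the employer's actual stack.
# Requires: pip install numpy
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Stand-in embedder: hashes tokens into a fixed-size vector purely so the
    example runs offline. A real pipeline would call a sentence-embedding model."""
    dim = 256
    out = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for tok in text.lower().split():
            out[i, hash(tok) % dim] += 1.0
    # L2-normalise so the dot product equals cosine similarity.
    return out / (np.linalg.norm(out, axis=1, keepdims=True) + 1e-9)

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query (dense retrieval)."""
    q = embed([query])          # (1, dim)
    c = embed(chunks)           # (n, dim)
    scores = (c @ q.T).ravel()  # cosine similarities against each chunk
    top = np.argsort(-scores)[:k]
    return [chunks[i] for i in top]

if __name__ == "__main__":
    docs = [
        "LLaMA models can be fine-tuned with LoRA adapters.",
        "Vector databases store embeddings for similarity search.",
        "Prompt engineering shapes model behaviour without retraining.",
    ]
    question = "How do I fine-tune LLaMA cheaply?"
    context = retrieve(question, docs)
    # Retrieved chunks are prepended to the prompt before generation.
    prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
    print(prompt)
```

In production, the in-memory similarity search would be replaced by a vector database, and the retrieved chunks are passed to the LLM as prompt context before generation.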
Senior Speech & Audio ML Engineer
25/08/2025
Programming & Tech
Full time
Remote
Senior level
4400 to 5500 USD
What you will do:
- Build and ship core ML models for a speech-driven behavioral engine.
- Own end-to-end modeling, from raw, long-form audio and layered annotations to production inference.
- Design audio features/embeddings, train and evaluate a suite of models, and deliver reproducible pipelines that meet accuracy, robustness, latency, and cost targets.

Your skill set and experience:
- 5+ years building production ML systems, including 2+ years in speech/audio.
- Speech & signal processing: VAD, diarization, segmentation, denoising, spectral features (log-mel/MFCC), prosody (pitch/energy), long-form audio handling.
- SOTA audio models & embeddings: Wav2Vec2, HuBERT, WavLM (or similar); fine-tuning/self-supervised learning; contrastive/metric learning for downstream tasks.
- Data excellence: SQL, Python data stack (Pandas/Polars), ETL for audio + metadata, stratified sampling, leakage prevention, feature stores.
- ML training: PyTorch, Hugging Face Transformers/Hub, mixed precision, hyperparameter tuning, transfer learning, cross-validation.
- Evaluation discipline: golden sets, robust speaker/content splits, ROC/PR/calibration, fairness/bias checks, ablations, drift/shift detection on embeddings and audio quality.
- MLOps, serving & reproducibility: FastAPI/gRPC around HF/torchaudio models, experiment tracking (W&B/MLflow), artifact/model versioning, CI/CD, observability, scalable batch/streaming inference.
- Proven ability to create and document novel IP (methods, architectures, or training/eval techniques) with clear prior-art awareness.

Nice to have:
- Tooling: SpeechBrain, Lightning, OpenSMILE/Praat, Kaldi/Conformer/Emformer, Label Studio.
- Multimodal: ASR (e.g., Whisper) + paralinguistic features; emotion/prosody modeling; speaker embeddings (x-vectors, ECAPA-TDNN).
- Performance & deployment: quantization/distillation, Triton/CUDA basics, distributed training, real-time/streaming inference, on-device DSP (Rust/C++).
- Publications/patents/competition results demonstrating novel audio modeling work.
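As an illustration of the feature-extraction side of this role, below is a minimal sketch of chunked log-mel extraction for long-form audio using torchaudio. The window/hop/mel settings are common defaults chosen for the example, not the employer's actual pipeline, and the waveform here is synthetic.

```python
# Minimal sketch of log-mel feature extraction for long-form audio.
# Parameter values are common defaults chosen for illustration only.
# Requires: pip install torch torchaudio
import torch
import torchaudio

SAMPLE_RATE = 16_000  # assumed rate; real recordings may differ

# 80-bin log-mel features with a 25 ms window and 10 ms hop
# (typical for Wav2Vec2/HuBERT-style front ends).
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE,
    n_fft=400,        # 25 ms window at 16 kHz
    hop_length=160,   # 10 ms hop
    n_mels=80,
)
to_db = torchaudio.transforms.AmplitudeToDB()

def log_mel_chunks(waveform: torch.Tensor, chunk_seconds: float = 30.0):
    """Split a long mono waveform into fixed-length chunks and yield
    log-mel features per chunk, so hour-long recordings fit in memory."""
    chunk_len = int(chunk_seconds * SAMPLE_RATE)
    for start in range(0, waveform.shape[-1], chunk_len):
        chunk = waveform[..., start:start + chunk_len]
        yield to_db(mel(chunk))  # shape: (channels, n_mels, frames)

if __name__ == "__main__":
    # Synthetic 2-minute signal stands in for a real recording
    # (in practice: waveform, sr = torchaudio.load("session.wav")).
    waveform = torch.randn(1, SAMPLE_RATE * 120)
    for i, feats in enumerate(log_mel_chunks(waveform)):
        print(f"chunk {i}: log-mel shape {tuple(feats.shape)}")
```

Features like these would typically feed either classical downstream models or fine-tuned self-supervised encoders such as the Wav2Vec2/HuBERT/WavLM family named in the posting.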