Project details
Recommended projects
AI Agent Evaluation Analyst (m/f/d)
We are looking for a Freelance Agent Evaluation Analyst to take ownership of quality, structure, and insight across the project. This role goes far beyond task-checking - it’s about critical thinking, systems-level analysis, and ensuring clarity, reliability, and consistency at scale. You’ll work as both a hands-on evaluator and an analyst, collaborating with domain experts, delivery managers, and engineers. Beyond reviewing outputs, you’ll be expected to understand the “why” behind the work, identify logical gaps or inconsistencies, and propose meaningful improvements. This is a flexible, impact-driven role where you’ll have space to grow, contribute ideas, and help shape how evaluation and quality are scaled across the project. This role is especially well-suited for: - Analysts, researchers, or consultants with strong structuring and reasoning skills - Junior product managers or strategists curious about AI and evaluation work - Smart problem-solvers (students or early-career professionals) who enjoy digging into logic, systems, and edge cases. You do not need a coding background. What matters most is curiosity, intellectual rigor, and the ability to evaluate complex setups with precision. What you’ll be doing: - Fully own the QA pipeline for agent evaluation tasks; - Review and validate tasks and golden paths created by scenario writers and experts; - Spot logical inconsistencies, vague requirements, hidden risks, and unrealistic assumptions; - Provide structured feedback and ensure quality alignment across contributors; - Train, onboard, and mentor new QA team members; - Collaborate with domain experts, delivery managers, and engineers to improve test clarity and coverage; - Maintain and improve QA checklists, SOPs, and review guidelines; - Contribute to test planning, prioritization, and quality benchmarks; - Take initiative to suggest new approaches, tools, and processes that help scale validation and analysis.
Test Manager (m/f/d)
The development and quality assurance of the data layer includes its complete provisioning through the respective web application. The data layer forms the central data foundation for analyzing user behavior and for personalized content during the website visit. To increase reliability and stability, automated tests should be used to significantly reduce manual regression tests. For this task, a Test Automation Engineer with a focus on Playwright (Elastic) is needed. - Development and implementation of automated end-to-end tests with the npm package @elastic/synthetics (Playwright) for data layer tests. - Analysis of existing test processes, identification and prioritization of automation potentials. - Creation, maintenance, and optimization of test scripts considering current best practices. - Integration of automated tests into existing CI/CD pipelines (e.g., Jenkins, GitHub Actions) to enable continuous test automation. - Documentation of test cases, test results, and test coverage in tools like Jira and Confluence. - Advising stakeholders on the selection and introduction of appropriate test strategies, test tools, and frameworks. - Conducting code reviews for test automation scripts to improve quality and maintainability. - Preparing decision templates and recommendations for action to further develop test automation. - Providing advice on error analysis and resolution within test automation. - Consulting on setting up reports and alerting with Elastic Observability. - Promoting traceability and reproducibility of test results.
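Purely as an illustration of the approach described above, here is a minimal sketch of what a data-layer check with the @elastic/synthetics npm package (Playwright) could look like; the URL and the specific dataLayer assertions are placeholders, not project specifics:

```ts
// Hedged sketch: a synthetics journey that verifies the web application
// provisions a non-empty dataLayer. URL and checks are illustrative only.
import { journey, step, expect } from '@elastic/synthetics';

journey('Data layer is provisioned on page load', ({ page }) => {
  step('open the page', async () => {
    await page.goto('https://www.example.com/');
  });

  step('data layer exists and is populated', async () => {
    // Read window.dataLayer from the browser context.
    const dataLayer = await page.evaluate(() => (window as any).dataLayer ?? []);
    expect(Array.isArray(dataLayer)).toBe(true);
    expect(dataLayer.length).toBeGreaterThan(0);
  });
});
```

Locally, such journeys can be run with `npx @elastic/synthetics .` before they are integrated into a CI/CD pipeline such as Jenkins or GitHub Actions.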
AI Evaluation Consultant (m/w/d)
We are seeking an analytical and technically-minded professional to: - Evaluate AI outputs and processes - Ensure quality, accuracy, and reliability - Identify logical errors, risks, and structural inconsistencies - Provide actionable insights and recommendations to the team Ideal candidates: - Consultants, auditors, analysts, data researchers, or business/technical analysts with strong reasoning skills - Professionals curious about AI, process improvement, and quality evaluation - Problem-solvers who enjoy analyzing complex systems, logic, and scenarios Key Responsibilities: - Lead evaluation of AI outputs and related processes - Review tasks against expected/ideal scenarios; identify gaps and risks - Provide structured, actionable recommendations to engineers, domain experts, and managers - Maintain and improve evaluation guidelines, checklists, SOPs - Suggest new approaches, tools, and processes to enhance AI evaluation
Freelance AI Consultant (German) (m/w/d)
For our client, we are looking for a German-speaking AI consultant: As a consultant, you may be invited to take part in online projects to train the model in your area of expertise. This flexible role suits both experts seeking part-time work (minimum of a few hours per week) and those interested in full-time opportunities. Responsibilities: - Carefully review provided data (text, images, or videos). - Review tasks submitted by the development team and ensure quality assurance/quality control. - Label or classify content based on project guidelines. - Identify and flag factually incorrect, sensitive, inappropriate, or unclear material.
Freelance AI Consultant (Japanese) (m/f/d)
For our client, we are looking for a Japanese-speaking AI consultant: As a consultant, you may be invited to take part in online projects to train the model in your domain of expertise. This flexible role accommodates both experts seeking part-time engagement (minimum of a few hours/week) and those interested in full-time opportunities. Responsibilities: - Carefully review provided data (text, images, or videos). - Review tasks submitted by the development team and ensure quality assurance/quality control. - Label or classify content based on project guidelines. - Identify and flag factually incorrect, sensitive, inappropriate, or unclear material.
Freelance Data Annotator (Spanish) (m/f/d)
For an AI studio, we are looking for a Spanish-speaking data annotation specialist: Annotation is what helps AI make sense of the world. As a QA Annotator, you may be invited to take part in online projects such as rating AI-generated content, evaluating factual accuracy, or comparing responses — when projects are available. This flexible role accommodates both experts seeking part-time engagement (minimum of a few hours/week) and those interested in full-time opportunities. Responsibilities: - Carefully review provided data (text, images, or videos). - Review tasks submitted by the annotator team and ensure quality assurance/quality control. - Label or classify content based on project guidelines. - Identify and flag factually incorrect, sensitive, inappropriate, or unclear material.
AI Trainer for Vibe Coding (m/w/d)
An AI Lab is looking for an AI Trainer for Vibe Coding. This role involves producing accurate, well-reasoned outputs across diverse domains, leveraging automation and AI tools. The position requires expertise in coding and optimizing Python scripts, handling large datasets, improving AI-generated content, and formatting and troubleshooting technical workflows. This is a remote part-time role that can be flexibly tailored to your availability – from just a few hours per week to full-time. Key responsibilities: - Develop and optimize Python scripts for automation and AI tasks. - Handle and analyze large datasets efficiently. - Improve and refine AI-generated content for accuracy and quality. - Format and troubleshoot technical workflows to ensure smooth operations. - Collaborate with cross-functional teams to enhance AI tools and processes.
Freelance AI Consultant (Korean) (m/f/d)
For our client, we are looking for a Korean-speaking AI consultant: As a consultant, you may be invited to take part in online projects to train the model in your domain of expertise. This flexible role accommodates both experts seeking part-time engagement (minimum of a few hours/week) and those interested in full-time opportunities. Responsibilities: - Carefully review provided data (text, images, or videos). - Review tasks submitted by the development team and ensure quality assurance/quality control. - Label or classify content based on project guidelines. - Identify and flag factually incorrect, sensitive, inappropriate, or unclear material.
Freelance Consultant - AI Training (Portuguese-Speaking)
For an AI lab, we are looking for Portuguese-speaking freelance consultants to train an AI model (Large Language Model - LLM) in various domains: You help AI make sense of the world. As a consultant, you may be invited to take part in online projects to train the model in your domain of expertise. This flexible role accommodates both experts seeking part-time engagement (minimum of a few hours/week) and those interested in full-time opportunities. Responsibilities: - Carefully review and analyze the data provided by the AI in your domain of expertise. - Improve the model in your domain of expertise. - Review AI results and ensure quality assurance/quality control. - Label or classify content based on project guidelines.
Project Manager Brand Guardianship (m/f/d)
The service is requested as part of the Brand Image Pool photoshoot project. The project includes: - Managing sub-tasks throughout the entire image pool shoot project from January to June - Taking on brand guardianship tasks during the pool shooting project period - Detailed service description without reference to individuals: - Independently defining, managing, and executing the project. This ranges from project management to creating roadmaps and project presentations - Developing ideas and concepts for measures - Actively managing project risks - Actively handling project issues including professional advice on escalations - Preparing and following up on stakeholder and steering board meetings - Defining project scope and main project phases - Providing transparent and appropriate information to the client regarding scope, quality, schedule, budget, and status
CRM Manager (m/f/d)
To strengthen data-driven cross & upsell as well as retention campaigns, Customer Interaction runs a platform where campaign processes including a profiler are developed, orchestrated, and monitored. We are looking for support in the following areas: Analysis & Consulting - Functional and technical analysis of existing campaign processes - Advising on data flows, selections, and profiling strategies Development in PL/SQL - Implementing and optimizing data selection, transformation, and aggregation pipelines in Oracle PL/SQL - Mapping predefined business logic for customer segmentation Testing & Quality Assurance - Planning and executing unit, integration, and regression tests - Documenting test cases and results Operations & Monitoring - Monitoring running jobs and workflows (performance, error handling) - Tuning SQL queries and batch processes Communication Outputs - Connecting and supplying channels for email, SMS, outbound calls, and mail - Implementing data-driven personalization and targeting
Freelance AI Consultant (Chinese) (m/f/d)
For our client, we are looking for a Chinese-speaking AI consultant: As a consultant, you may be invited to take part in online projects to train the model in your area of expertise. This flexible role accommodates both experts seeking part-time engagement (minimum of a few hours/week) and those interested in full-time opportunities. Responsibilities: - Carefully review provided data (text, images, or videos). - Review tasks submitted by the development team and ensure quality assurance/quality control. - Label or classify content based on project guidelines. - Identify and flag factually incorrect, sensitive, inappropriate, or unclear material.
Freelance Ruby Developer (m/f/d)
For an AI lab, we are looking for a Ruby Developer to train an AI model (Large Language Model - LLM). You help AI make sense of the world. As a consultant, you may be invited to take part in online projects to train the model in your domain of expertise. This flexible role accommodates both experts seeking part-time engagement (minimum of a few hours/week) and those interested in full-time opportunities. - Code generation and code review - Prompt evaluation and complex data annotation - Training and evaluation of large language models - Benchmarking and agent-based code execution in sandboxed environments - Working across multiple programming languages (Python, JavaScript/TypeScript, Rust, SQL, etc.) - Adapting guidelines for new domains and use cases - Collaborating with project leads, solution engineers, and supply managers on complex or experimental projects
Adobe Target Consultant (m/f/d)
The Digital Analytics department uses the Adobe Experience Cloud to implement personalized user experiences. The goal is to increase conversion rates and improve the customer experience through targeted personalization and testing. The technical implementation is carried out independently by specialized consultants. - Design and technical implementation of personalized user experiences with Adobe Target - Implementation of A/B and multivariate tests, including quality assurance in live environments - Integration of Adobe Target into complex system landscapes, including Adobe Experience Platform and Adobe Experience Manager - Development of targeting logic based on segments, real-time data, and user behavior - Advice on new features and best practices within the Adobe Experience Cloud - Preparation of technical decision templates for the development of the personalization strategy - Documentation and handover of developed solutions in Confluence and similar tools - Preparation of technical materials for data protection, especially data flow diagrams - Advice on GDPR-compliant use of Adobe Target and related systems - Conducting workshops to introduce Adobe Target
AI Consultants - Data Science (m/w/d)
We are seeking experienced data scientists to create computationally intensive data science problems for an advanced AI evaluation project. This is a remote, project-based opportunity for experts who can design challenging problems that require computational methods to solve and mirror the full data science lifecycle - from data acquisition and processing to statistical analysis and actionable business insights. What You'll Do - Design original computational data science problems that simulate real-world analytical workflows across industries (telecom, finance, government, e-commerce, healthcare) - Create problems requiring Python programming to solve (using pandas, numpy, scipy, sklearn, statsmodels, matplotlib, seaborn) - Ensure problems are computationally intensive and cannot be solved manually within reasonable timeframes (days/weeks) - Develop problems requiring non-trivial reasoning chains in data processing, statistical analysis, feature engineering, predictive modeling, and insight extraction - Create deterministic problems with reproducible answers - avoid stochastic elements or require fixed random seeds for exact reproducibility - Base problems on real business challenges: customer analytics, risk assessment, fraud detection, forecasting, optimization, and operational efficiency - Design end-to-end problems spanning the complete data science pipeline (data ingestion → cleaning → EDA → modeling → validation → deployment considerations) - Incorporate big data processing scenarios requiring scalable computational approaches - Verify solutions using Python with standard data science libraries and statistical methods - Document problem statements clearly with realistic business contexts and provide verified correct answers
Freelance Cybersecurity Consultant for AI Red Teaming
For an AI lab, we are looking for cybersecurity consultants to train an AI model (Large Language Model - LLM). You help AI make sense of the world. As a consultant, you may be invited to take part in online projects to train the model in your domain of expertise. This flexible role accommodates both experts seeking part-time engagement (minimum of a few hours/week) and those interested in full-time opportunities. - Evaluate and red team AI models, agents, and machine learning systems for vulnerabilities and safety risks. - Create offline, reproducible, and auto-evaluable test cases to test the safety and capability of AI agents. - Develop and implement automation scripts, custom tools, environments, and test harnesses. - Lead or contribute to security research initiatives, especially in AI safety, creating and implementing realistic and challenging attack scenarios for the model. - Advise on cybersecurity best practices and policy implications.
Developer for Consent Management Implementation (m/f/d)
To replace the Consent Layers currently provided by third-party CMPs on the web for our international brands, these layers need to be reimplemented so that we can run and serve them in-house. This requires solid knowledge of TypeScript, Vue.js, and classic web technologies (HTML and CSS). The goal is to deliver executable code that implements all requirements and includes automated tests to verify correct function. Scope of work: The focus is on preparing decision-making elements for the approach and implementing measures along the resulting project workflow. This specifically includes the following work packages: - Implementation of code - Implementation of executable tests that must pass with test coverage >= 80% before delivery - Creation of documentation for the code - Creation of brand-specific cmp-config files - Setting up a project (including asset management requirements) as a copy of the Consent Management Platform - Removal of netID references - Creation of brand-specific settings and files for custom purposes/vendors - Adding new brand-specific CSS themes (variable values, logos, etc.) - Including the required official IAB GVL translations (ES, FR) in the weekly sync with the GVL - Implementation of I18n and preparation of brand-specific data sources - Implementation of PMC2.0 backend usage modules - Implementation of the playout logic - Implementation of the layer initialization process (mode=default and mode=resurface) - CDN upload and release process - Project documentation. Project execution: The deliverable should be written in TypeScript and Vue.js, built with Vite, and tested with Vitest.
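Since the listing specifies Vitest with a coverage target, here is a minimal, hypothetical sketch of what such a test could look like; `loadCmpConfig` and the `CmpConfig` shape are illustrative assumptions, not the actual project modules:

```ts
// Hypothetical sketch: a Vitest unit test around a brand-specific CMP config loader.
// The config shape, defaults, and function are invented for illustration.
import { describe, it, expect } from 'vitest';

interface CmpConfig {
  brand: string;
  locale: string;
  mode: 'default' | 'resurface';
  customVendors: string[];
}

// Illustrative implementation under test; in the real project this would live in
// its own module and read a brand-specific cmp-config file.
function loadCmpConfig(brand: string, overrides: Partial<CmpConfig> = {}): CmpConfig {
  return {
    brand,
    locale: 'en',
    mode: 'default',
    customVendors: [],
    ...overrides,
  };
}

describe('loadCmpConfig', () => {
  it('falls back to the default playout mode', () => {
    const config = loadCmpConfig('brand-a');
    expect(config.mode).toBe('default');
    expect(config.customVendors).toEqual([]);
  });

  it('accepts brand-specific overrides such as the resurface mode', () => {
    const config = loadCmpConfig('brand-b', { mode: 'resurface', locale: 'fr' });
    expect(config.mode).toBe('resurface');
    expect(config.locale).toBe('fr');
  });
});
```

Coverage against the >= 80% target can then be checked with `vitest run --coverage`.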
Senior Web Developer (m/f/d)
- You develop modern, high-performance web frontends with React, TypeScript, HTML, and CSS - You implement responsive designs with attention to accessibility and performance - You plan and execute unit and integration tests (for example with Playwright) - You troubleshoot issues in development, test, and live environments
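As a small, hedged illustration of the Playwright part, here is what an integration test for such a React frontend could look like; the local URL, heading, and button label are placeholders:

```ts
// Sketch of a Playwright integration test; selectors and URL are illustrative only.
import { test, expect } from '@playwright/test';

test('page renders and the navigation menu opens', async ({ page }) => {
  await page.goto('http://localhost:3000/');

  // Smoke check: the main heading is rendered.
  await expect(page.getByRole('heading', { level: 1 })).toBeVisible();

  // Accessibility-minded check: the menu toggle is reachable by role and opens the nav.
  const menuButton = page.getByRole('button', { name: 'Menu' });
  await menuButton.click();
  await expect(page.getByRole('navigation')).toBeVisible();
});
```

Querying by ARIA role keeps the test aligned with the accessibility focus mentioned above.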
ERP-Transformation Manager (m/w/d)
An established company is looking for an experienced ERP Transformation Manager to take full responsibility for planning and steering a comprehensive ERP transformation program. The project's goal is harmonizing processes, implementing a new ERP system, and meeting IFRS requirements. The ERP Transformation Manager will analyze, redesign, and standardize the commercial core processes in civil and rail construction. This includes translating IFRS requirements into system structures and posting logic, closely coordinating with Finance, Controlling, Project Management, and IT departments. The role includes managing the ERP rollout, including fit-gap analysis, process design, test management, and migration. In addition, a unified reporting and KPI framework for group financial statements and project management will be established. The manager will act as the central interface between operational units, Finance, management, and the group, and will set up a sustainable change and training concept for users. - Planning and steering the ERP transformation program (IFRS transition, process harmonization, ERP rollout) - Analyzing, redesigning, and standardizing commercial core processes - Translating IFRS requirements into system structures and posting logic - Managing the ERP rollout, including fit-gap analysis, process design, test management, and migration - Building a unified reporting and KPI framework - Stakeholder management and ensuring smooth communication - Leading interdisciplinary project teams and managing external consultants and implementation partners - Establishing a sustainable change and training concept - Ensuring measurable process improvements after the ERP system goes live
Adobe Experience Cloud Consultant (m/f/d)
The Digital Analytics department uses Adobe Experience Cloud to deliver personalized user experiences. The goal is to increase conversion rates and improve customer experience through targeted personalization and testing. The technical implementation is handled independently by specialized consultants. - A core part of the tasks includes maintaining the existing implementation within the Adobe Experience Platform. This particularly involves monitoring the data and troubleshooting source connectors. - As part of preparing new features for use in the Adobe Experience Platform, requirements and data are first translated into Adobe’s Experience Data Model. This includes creating entity-relationship diagrams (ERDs) and contextualizing the relevant data. - Based on that, the corresponding schemas are created within the platform and datasets are prepared for further use. - Further activities include designing and setting up both new activation channels and additional data sources. - In addition, new business-relevant use cases are developed across the various phases of the customer journey to specifically increase business value. - Creating segments and performing error analyses using targeted SQL queries. - Consulting on all newly introduced processing activities, covering compliance with applicable data protection regulations, the required internal approval processes, as well as documenting legal and technical specifics.
Frontend developer for an HR platform with Angular experience
Reach out to us if you are interested in working with us on the project.
Time's up! We are no longer accepting applications.
Similar projects
Data Migration Specialist (m/f/d)
Industry
Information Technology (IT)
Areas
Information Technology (IT)
Quality Assurance (QA)
Project info
Period
03.03.2025 - 30.06.2025
Capacity
from 95%
Daily rate
750 - 850€
Location
Berlin, Germany
Languages
German (Advanced), English (Advanced)
Remote
from 95%
Description
Drive migration activities for assigned objects, ensuring timely completion and quality standards.
Prepare value mappings (a minimal sketch follows this description) and execute MOCK and Production data loads according to the defined timeline.
Support the data migration closure process and load file archiving activities.
Request, review, analyze, and communicate regular data quality reports (washing machine) to key stakeholders.
Conduct Data Verification Tests (DVTs) and prepare necessary templates for dual maintenance activities.
Identify, document, and raise defects for bugs or new functionalities related to assigned objects.
Manage Hypercare defects and change requests within the area of responsibility.
Perform hands-on tools testing, including test data preparation and fixing data for program testing activities (e.g., user acceptance tests).
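To illustrate what the value mappings mentioned above typically are, here is a minimal, hypothetical sketch; the field names and code values are invented and not taken from the project:

```ts
// Hypothetical sketch: applying a per-field value mapping to a legacy record
// before a MOCK or Production data load. All names and values are illustrative.
type ValueMap = Record<string, Record<string, string>>;

// Maps legacy source values to target-system values, per field.
const valueMap: ValueMap = {
  countryCode: { GER: 'DE', AUT: 'AT' },
  unitOfMeasure: { PCE: 'EA', KGM: 'KG' },
};

function mapRecord(record: Record<string, string>, map: ValueMap): Record<string, string> {
  const mapped: Record<string, string> = { ...record };
  for (const [field, lookup] of Object.entries(map)) {
    const source = record[field];
    if (source !== undefined && lookup[source] !== undefined) {
      mapped[field] = lookup[source];
    }
  }
  return mapped;
}

// Example: a legacy record prepared for a load file.
const legacy = { materialId: '100045', countryCode: 'GER', unitOfMeasure: 'PCE' };
console.log(mapRecord(legacy, valueMap));
// → { materialId: '100045', countryCode: 'DE', unitOfMeasure: 'EA' }
```

Unmapped values pass through unchanged, which makes gaps easy to spot in the regular data quality reports.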
Requirements
Proven experience in data migration, including tools testing, MOCK/Production data loading, and DVT execution.
Strong understanding of data quality management and defect tracking processes.
Ability to adhere to timelines, manage multiple tasks, and prioritize responsibilities effectively.
Experience with cutover planning, Hypercare support, and security audit processes.