Log in
Sign up
Project details
Recommended projects
Fullstack Engineer (m/f/d)
- Product and web development in the data-driven area - Shaping the software architecture for new data products - Collaborating in interdisciplinary teams (e.g. with data scientists and business developers)
Freelance Cybersecurity Consultant for AI Red Teaming
For an AI lab we are looking for cybersecurity consultants to train an AI model (Large Language Model - LLM). You help AI to make sense of the world. As consultant, you may be invited to take part in online projects to train the model in your domain of expertise. This flexible role accommodates both experts seeking part-time engagement (minimum few hours/week) and those interested in full-time opportunities - Evaluate and red team AI models and agents and machine learning systems for vulnerabilities and safety risks. - Create offline reproducible & auto-evaluable test cases to test safety & capability of AI agents. - Develop and implement automation scripts, custom tools, environments and test harnesses. - Lead or contribute to security research initiatives, especially in AI safety, creating and implementing realistic and challenging attack scenarios for the model. - Advise on cybersecurity best practices and policy implications.
New
Adobe Target Consultant (m/f/d)
The Digital Analytics department uses the Adobe Experience Cloud to deliver personalized user experiences. The goal is to boost conversion rates and improve the customer experience through targeted personalization and testing. Technical implementation is handled independently by specialized consultants. - Designing and technically implementing personalized user experiences with Adobe Target - Implementing A/B and multivariate tests, including quality assurance in live environments - Integrating Adobe Target into complex system landscapes with Adobe Experience Platform and Adobe Experience Manager - Developing targeting logic based on segments, real-time data and user behavior - Advising on new features and best practices within the Adobe Experience Cloud - Preparing technical decision papers to advance the personalization strategy - Documenting and handing over the developed solutions in Confluence and similar tools - Preparing technical documentation for data protection, especially data flow diagrams - Advising on GDPR-compliant use of Adobe Target and related systems - Conducting workshops to introduce Adobe Target
AI Agent Evaluation Analyst
For an AI lab we are looking for AI Agent Evaluation Analyst to train an AI model (Large Language Model - LLM). You help AI to make sense of the world. As consultant, you may be invited to take part in online projects to train the model in your domain of expertise. This flexible role accommodates both experts seeking part-time engagement (minimum few hours/week) and those interested in full-time opportunities - Reviewing evaluation tasks and scenarios for logic, completeness, and realism. - Identifying inconsistencies, missing assumptions, or unclear decision points. - Helping define clear expected behaviors (gold standards) for AI agents. - Annotating cause-effect relationships, reasoning paths, and plausible alternatives. - Thinking through complex systems and policies as a human would to ensure agents are tested properly. - Working closely with QA, writers, or developers to suggest refinements or edge case coverage.
New
Adobe Experience Cloud Consultant (m/f/d)
The Digital Analytics department uses Adobe Experience Cloud to deliver personalized user experiences. The goal is to increase conversion rates and improve customer experience through targeted personalization and testing. The technical implementation is handled independently by specialized consultants. - A core part of the tasks includes maintaining the existing implementation within the Adobe Experience Platform. This particularly involves monitoring the data and troubleshooting source connectors. - As part of preparing new features for use in the Adobe Experience Platform, requirements and data are first translated into Adobe’s Experience Data Model. This includes creating entity-relationship diagrams (ERDs) and contextualizing the relevant data. - Based on that, the corresponding schemas are created within the platform and datasets are prepared for further use. - Further activities include designing and setting up both new activation channels and additional data sources. - In addition, new business-relevant use cases are developed across the various phases of the customer journey to specifically increase business value. - Creating segments and performing error analyses using targeted SQL queries. - Consulting on all newly introduced processing activities, covering compliance with applicable data protection regulations, the required internal approval processes, as well as documenting legal and technical specifics.
Freelance Java Developer (m/f/d)
For an AI lab, we are looking for a Java Developer to train an AI model (Large Language Model - LLM). You help AI make sense of the world. As a consultant, you may be invited to take part in online projects to train the model in your area of expertise. This flexible role suits both experts seeking part-time engagement (at least a few hours per week) and those interested in full-time opportunities. - Code generation and code review - Prompt evaluation and complex data annotation - Training and evaluation of large language models - Benchmarking and agent-based code execution in sandboxed environments - Working across multiple programming languages - Adapting guidelines for new domains and use cases - Following project-specific rubrics and requirements - Collaborating with project leads, solution engineers, and supply managers on complex or experimental projects
AI Agent Evaluation Analyst (m/f/d)
We are looking for an Freelance Agent Evaluation Analyst to take ownership of quality, structure, and insight across the project. This role goes far beyond task-checking - it’s about critical thinking, systems-level analysis, and ensuring clarity, reliability, and consistency at scale. You’ll work as both a hands-on evaluator and an analyst, collaborating with domain experts, delivery managers, and engineers. Beyond reviewing outputs, you’ll be expected to understand the “why” behind the work, identify logical gaps or inconsistencies, and propose meaningful improvements. This is a flexible, impact-driven role where you’ll have space to grow, contribute ideas, and help shape how evaluation and quality are scaled across the project. This role is especially well-suited for: Analysts, researchers, or consultants with strong structuring and reasoning skills Junior product managers or strategists curious about AI and evaluation work Smart problem-solvers (students or early-career professionals) who enjoy digging into logic, systems, and edge cases You do not need a coding background. What matters most is curiosity, intellectual rigor, and the ability to evaluate complex setups with precision. What you’ll be doing - Fully own the QA pipeline for agent evaluation tasks; - Review and validate tasks and golden paths created by scenario writers and experts; - Spot logical inconsistencies, vague requirements, hidden risks, and unrealistic assumptions; - Provide structured feedback and ensure quality alignment across contributors; Train, onboard, and mentor new QA team members; - Collaborate with domain experts, delivery managers, and engineers to improve test clarity and coverage; - Maintain and improve QA checklists, SOPs, and review guidelines; - Contribute to test planning, prioritization, and quality benchmarks; - Take initiative to suggest new approaches, tools, and processes that help scale validation and analysis.
AI Trainer for Vibe Coding (m/w/d)
An AI Lab is looking for a AI Trainer for Vibe Coding. This role involves producing accurate, well-reasoned outputs across diverse domains, leveraging automation and AI tools. The position requires expertise in coding and optimizing Python scripts, handling large datasets, improving AI-generated content, and formatting and troubleshooting technical workflows. This is a remote part-time role that can be flexibly tailored to your availability – from just a few hours per week to full-time. Key responsibilities: - Develop and optimize Python scripts for automation and AI tasks. - Handle and analyze large datasets efficiently. - Improve and refine AI-generated content for accuracy and quality. - Format and troubleshoot technical workflows to ensure smooth operations. - Collaborate with cross-functional teams to enhance AI tools and processes.
Chemist with Python Experience (f/m/d)
GenAI models are improving very quickly, and one of our goals is to make them capable of addressing specialized questions and achieving complex reasoning skills. If you join the platform as an AI Tutor in Chemistry, you’ll have the opportunity to collaborate on these projects. Although every project is unique, you might typically: - Generate prompts that challenge AI. - Define comprehensive scoring criteria to evaluate the accuracy of the AI’s answers. - Correct the model’s responses based on your domain-specific knowledge.
Freelance Mechanical Engineer (with Python) - Quality Assurance (AI Trainer)
Generative AI models are improving very quickly, and one of our goals is to make them capable of addressing specialized questions and achieving complex reasoning skills. Although every project is unique, you might typically: - Content Creation & Refinement: Create and refine content to ensure accuracy and relevance across a variety of topics in Physics, while also developing references and examples of tasks. - Experts Acquisition: Assess the qualification tests of experts, ensuring their competency. - Chat Moderation: Provide support by addressing project-related questions from other experts in Discord chats, especially those related to project guidelines. - Auditing Work: Review and evaluate tasks completed by other experts, ensuring they align with project guidelines. Provide constructive feedback, verify expertise-related information, and edit content as necessary to improve quality.
Freelance Automotive Engineer (with Python) - Quality Assurance / AI Trainer
Generative AI models are improving very quickly, and one of our goals is to make them capable of addressing specialized questions and achieving complex reasoning skills. Although every project is unique, you might typically: - Content Creation & Refinement: Create and refine content to ensure accuracy and relevance across a variety of topics in Physics, while also developing references and examples of tasks. - Experts Acquisition: Assess the qualification tests of experts, ensuring their competency. - Chat Moderation: Provide support by addressing project-related questions from other experts in Discord chats, especially those related to project guidelines. - Auditing Work: Review and evaluate tasks completed by other experts, ensuring they align with project guidelines. Provide constructive feedback, verify expertise-related information, and edit content as necessary to improve quality.
Freelance Kotlin Developer (m/f/d)
For an AI lab we are looking for Kotlin Developer to train an AI model (Large Language Model - LLM). You help AI to make sense of the world. As consultant, you may be invited to take part in online projects to train the model in your domain of expertise. This flexible role accommodates both experts seeking part-time engagement (minimum few hours/week) and those interested in full-time opportunities. - Code generation and code review - Prompt evaluation and complex data annotation - Training and evaluation of large language models - Benchmarking and agent-based code execution in sandboxed environments - Working across multiple programming languages - Adapting guidelines for new domains and use cases - Following project-specific rubrics and requirements - Collaborating with project leads, solution engineers, and supply managers on complex or experimental projects
Freelance Mathematics Expert for AI Model Training (m/f/d)
An AI lab is looking for a freelance mathematics experts to evaluate AI models. The goal of the project is to assess the performance, accuracy, and reliability of AI models applied in mathematics contexts. The role involves working closely with the development team to ensure the models meet industry standards and provide actionable insights. This is a remote part-time role that can be flexibly tailored to your availability – from just a few hours per week to full-time. Key responsibilities: - Evaluate AI models for mathematics applications. - Analyze model outputs and provide feedback for improvement. - Collaborate with the development team to ensure alignment with industry standards. - Document findings and recommendations for model optimization. - Conduct tests to validate model performance and reliability.
Freelance Chemistry Expert for AI Model Training (m/f/d)
An AI lab is looking for a freelance chemistry experts to evaluate AI models. The goal of the project is to assess the performance, accuracy, and reliability of AI models applied in chemistry contexts. The role involves working closely with the development team to ensure the models meet industry standards and provide actionable insights. This is a remote part-time role that can be flexibly tailored to your availability – from just a few hours per week to full-time. Key responsibilities: - Evaluate AI models for chemistry applications. - Analyze model outputs and provide feedback for improvement. - Collaborate with the development team to ensure alignment with industry standards. - Document findings and recommendations for model optimization. - Conduct tests to validate model performance and reliability.
Freelance Physics Expert for AI Model Training (m/f/d)
An AI lab is looking for a freelance physics experts to evaluate AI models. The goal of the project is to assess the performance, accuracy, and reliability of AI models applied in physics contexts. The role involves working closely with the development team to ensure the models meet industry standards and provide actionable insights. This is a remote part-time role that can be flexibly tailored to your availability – from just a few hours per week to full-time. Key responsibilities: - Evaluate AI models for physics applications. - Analyze model outputs and provide feedback for improvement. - Collaborate with the development team to ensure alignment with industry standards. - Document findings and recommendations for model optimization. - Conduct tests to validate model performance and reliability.
Freelance Ruby Developer (m/f/d)
For an AI lab we are looking for a Ruby Developer to train an AI model (Large Language Model - LLM). You help AI make sense of the world. As a consultant, you may be invited to join online projects to train the model in your area of expertise. This flexible role suits both experts seeking part-time work (minimum a few hours/week) and those interested in full-time opportunities. - Code generation and code review - Prompt evaluation and complex data annotation - Training and evaluation of large language models - Benchmarking and agent-based code execution in sandboxed environments - Working across multiple programming languages (Python, JavaScript/TypeScript, Rust, SQL, etc.) - Adapting guidelines for new domains and use cases - Collaborating with project leads, solution engineers, and supply managers on complex or experimental projects
New
IT Project Manager ServiceNow (Senior)
- A company from the energy and energy services sector is looking for an experienced IT Project Manager for a ServiceNow project. - The goal of the project is to lead and successfully implement an enterprise ServiceNow project with a focus on ITSM and Customer Service Management (CSM). - The role includes planning, controlling, and ensuring a stable project flow in close collaboration with internal and external stakeholders. - Operational & strategic service management of the ServiceNow platform - Process ownership for ITSM and CSM (B2B & B2C) - Process design, governance & continuous optimization - Management of external providers and vendors - Monitoring, KPI analysis & deriving improvements - Ensuring a stable platform operation
New
AI Consultants - Data Science (m/w/d)
We are seeking experienced data scientists to create computationally intensive data science problems for an advanced AI evaluation project. This is a remote, project-based opportunity for experts who can design challenging problems that require computational methods to solve and mirror the full data science lifecycle - from data acquisition and processing to statistical analysis and actionable business insights. What You'll Do - Design original computational data science problems that simulate real-world analytical workflows across industries (telecom, finance, government, e-commerce, healthcare) - Create problems requiring Python programming to solve (using pandas, numpy, scipy, sklearn, statsmodels, matplotlib, seaborn) - Ensure problems are computationally intensive and cannot be solved manually within reasonable timeframes (days/weeks) - Develop problems requiring non-trivial reasoning chains in data processing, statistical analysis, feature engineering, predictive modeling, and insight extraction - Create deterministic problems with reproducible answers - avoid stochastic elements or require fixed random seeds for exact reproducibility - Base problems on real business challenges: customer analytics, risk assessment, fraud detection, forecasting, optimization, and operational efficiency - Design end-to-end problems spanning the complete data science pipeline (data ingestion → cleaning → EDA → modeling → validation → deployment considerations) - Incorporate big data processing scenarios requiring scalable computational approaches - Verify solutions using Python with standard data science libraries and statistical methods - Document problem statements clearly with realistic business contexts and provide verified correct answers
New
CRM Manager (m/w/d)
To strengthen data-driven cross & upsell and retention campaigns, Customer Interaction operates a platform where campaign processes, including a profiler, are developed, orchestrated, and monitored. Support is needed in the following areas: Analysis & Consulting - Business and technical analysis of existing campaign processes - Advice on data flows, selections, and profiling strategies Development in PL/SQL - Implementation and optimization of data selection, transformation, and aggregation workflows in Oracle PL/SQL - Mapping of provided business logic for customer segmentation Testing & Quality Assurance - Planning and execution of unit, integration, and regression tests - Documentation of test cases and results Operations & Monitoring - Monitoring of running jobs and workflows (performance, error handling) - Tuning of SQL queries and batch processes Communication Outputs - Connecting and supplying distribution channels: email, SMS, outbound calls, and mail - Implementing data-driven personalization and targeting
New
Test Manager (m/f/d)
The development and quality assurance of the data layer includes its complete provisioning through the respective web application. The data layer forms the central data foundation for analyzing user behavior and for personalized content during the website visit. To increase reliability and stability, automated tests should be used to significantly reduce manual regression tests. For this task, a Test Automation Engineer with a focus on Playwright (Elastic) is needed. - Development and implementation of automated end-to-end tests with the npm package @elastic/synthetics (Playwright) for data layer tests. - Analysis of existing test processes, identification and prioritization of automation potentials. - Creation, maintenance, and optimization of test scripts considering current best practices. - Integration of automated tests into existing CI/CD pipelines (e.g., Jenkins, GitHub Actions) to enable continuous test automation. - Documentation of test cases, test results, and test coverage in tools like Jira and Confluence. - Advising stakeholders on the selection and introduction of appropriate test strategies, test tools, and frameworks. - Conducting code reviews for test automation scripts to improve quality and maintainability. - Preparing decision templates and recommendations for action to further develop test automation. - Providing advice on error analysis and resolution within test automation. - Consulting on setting up reports and alerting with Elastic Observability. - Promoting traceability and reproducibility of test results.
Frontend developer to HR platform with Angular experience
Reach out to us if you are interested in working with us on the project.
Sign up
to get access to more exciting projects that match your skills and preferences!
Time's up! We are no longer accepting applications.
Similar projects
TECH Enterprise Architect (m/f/d) (9599)
Industry
Telecommunication
Area
Information Technology (IT)
Project info
Period
12.05.2025 - 31.12.2025
Capacity
from 95%
Daily rate
750 - 800€
Location
Munich, Germany
Languages
German
(Advanced)
,
English
(Advanced)
Remote
from 95%
Description
Analyze operator capabilities in the areas of radio, transport, core & services, cloud, OSS and digital platforms to design and implement use cases
Advise on creating HL solution (high-level solution) and integration architecture in Confluence/Jira
Verify the solution architecture
Advise network enablers and digital platforms to check the consistency of the HL solution architecture across network domains
Create comprehensive documentation of required security measures to serve as the basis for implementation by the network tribe
Conduct a detailed needs analysis of aggregators, developers and B2B customers regarding the architecture and present the results in a suitable format
Develop network domains to identify and set up the enablers needed for the future roadmap
Advise on architectural aspects of the slicing service to provide northbound APIs of slice management functions via the open gateway layer
Requirements
Architecture
Network / Radio / Transport
Integration Architecture
Computer Science
IT Architecture
API Integration & Development Expertise
Network Security & Operations Expertise
Understanding of operator capabilities as defined by global, standardized APIs under the CAMARA framework and other relevant bodies
Ability to analyze and translate requirements and use cases into technical solutions
Ability to review architecture deliverables
Knowledge of how to define key security measures
Ability to advise on developing the Open Gateway roadmap
Ability to further develop the integration architecture
Ability to advise on architectural aspects of the slicing service