AI Solutions Engineer (m/f/d)
Area: IT Infrastructure / Python Automation / AI Solutions
Location: Germany - Brandenburg (hybrid possible)
Salary: €50,000 - €90,000 per year (depending on qualifications)
Employment type: Permanent, full-time (no freelancers)
What’s it about?
We are seeking a technically skilled specialist to join our team in automating modern IT solutions and implementing AI models. Do you have a talent for Python, Gradio, and cloud technologies? Are you interested in developing systems that truly make an impact – from AI dashboards to production-ready automation?
If so, we look forward to connecting with you!
What you will do with us:
You will develop intelligent, scalable solutions with ample opportunity for initiative and technical depth:
- Development, support, and optimization of our IT infrastructure (networks, servers, cloud)
- Automation of processes using Python and modern libraries
- Development of interactive AI dashboards and tools with Gradio
- Integration of ML/AI models into practical applications
- Web development and API connectivity (Flask knowledge is a plus)
- Version control and teamwork using Git
- Utilization of Docker/Kubernetes and cloud platforms (AWS, Azure, GCP)
- Independent analysis and resolution of complex technical challenges
What you should bring with you:
- Solid experience in IT, network, and cloud technologies
- Excellent Python skills, ideally with Gradio experience
- Experience with ML/AI, REST APIs, and web development
- Proficiency with Git, Docker, and Kubernetes
- Good command of German and English
- Driving license class B
- Advantageous: Experience in deploying AI solutions in production environments
What you can expect from us:
- Permanent employment contract – secure future guaranteed
- Salary: €50,000 - €90,000 per year (based on experience)
- Hybrid work options available – flexible and family-friendly
- Top-tier equipment: company laptop, mobile phone, and car
- Work-life balance: pleasant environment on the outskirts of Berlin
- Events and team spirit: regular company gatherings
- Convenient accessibility: by car and train, with parking available on-site
- Willingness to travel: primarily regional, as agreed upon
Why you? Why us?
Do you want to do more than just work within a system – do you want to help shape it? Then you’re in the right place. We offer you the chance to implement your own ideas, take on responsibility, and drive genuine innovation alongside us.
Apply now:
Simply send us:
- Your CV
- (Optional) References or projects that highlight your strengths
- Your potential start date
I've sent a direct message.
Could you please check your direct messages?
Do I need to be located in Germany?
Hi there — it sounds like you’re tackling an exciting AI automation project and need a callin.io-based workflow to orchestrate everything smoothly.
Here’s a lightweight framework I’ve used to keep multi-platform AI workflows maintainable (a rough Python sketch follows the list):
- Trigger Layer – watch for the initiating event (repo push, API call, or cron) and immediately log a unique execution ID.
- Qualification Layer – fetch / validate inputs (model parameters, data paths) and branch early if something is missing.
- Orchestration Layer – spin up your model execution (e.g., Gradio, Python service) while handing off heavy compute to a queue so the main flow stays responsive.
- Persistence Layer – write key results and metadata back to a single source of truth (database or CRM) so future jobs can reference prior runs.
- Notification Layer – post concise status updates (Slack / email) keyed to the execution ID so teammates can trace results quickly.
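To make those layers concrete, here is a rough Python sketch of how they might fit together. The function names, the simulated queue hand-off, and the input fields (model_name, data_path) are placeholders for illustration, not a specific callin.io API:

```python
import logging
import uuid

# Illustrative only: the queue, database, and chat integrations are stubbed out
# with log statements; swap in the services or callin.io nodes you actually use.

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("workflow")


def trigger(event: dict) -> str:
    """Trigger layer: assign and log a unique execution ID for the incoming event."""
    execution_id = str(uuid.uuid4())
    log.info("execution %s started by %s", execution_id, event.get("source", "unknown"))
    return execution_id


def qualify(event: dict) -> dict:
    """Qualification layer: validate inputs and branch early if something is missing."""
    required = ("model_name", "data_path")
    missing = [key for key in required if key not in event]
    if missing:
        raise ValueError(f"missing inputs: {missing}")
    return {key: event[key] for key in required}


def orchestrate(execution_id: str, params: dict) -> dict:
    """Orchestration layer: run (or enqueue) the model job and return its result."""
    # In a real setup this would push to a queue (Celery, RQ, SQS, ...) so the
    # main flow stays responsive; here it just returns a stub result.
    return {"execution_id": execution_id, "status": "done", "params": params}


def persist(result: dict) -> None:
    """Persistence layer: write results and metadata to a single source of truth."""
    log.info("persisted result for %s", result["execution_id"])  # replace with a DB/CRM write


def notify(result: dict) -> None:
    """Notification layer: post a concise status keyed to the execution ID."""
    log.info("notify: execution %s finished with status %s",
             result["execution_id"], result["status"])


def run(event: dict) -> None:
    execution_id = trigger(event)
    params = qualify(event)
    result = orchestrate(execution_id, params)
    persist(result)
    notify(result)


if __name__ == "__main__":
    run({"source": "webhook", "model_name": "demo-model", "data_path": "/tmp/input.csv"})
```

In practice the orchestration step would enqueue a real job instead of returning immediately, but the layer boundaries stay the same.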
A couple of clarifying questions to scope things:
• Do you already have a centralized data store for job metadata, or are you hoping callin.io will serve that role?
• Will the AI routines live inside containers managed elsewhere (Docker/K8s) or do you plan to invoke them directly from callin.io via execute-command / HTTP nodes?
Best-practice tips:
• Store credential nodes (AWS, Git, DB) in a separate sub-workflow and reference via environment variables – it keeps rotation simple and avoids hard-coding.
• Use the “Execute Workflow” node to break larger processes into smaller, testable pieces. It makes debugging and future scaling much easier.
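Expanding on the first tip: if any part of the pipeline runs as plain Python outside callin.io's credential nodes, the same principle applies there. A minimal sketch, with placeholder variable names, that reads secrets from the environment instead of hard-coding them:

```python
import os

# Read credentials from environment variables so rotation only touches the
# environment (or your secrets store), never the code. Names are illustrative.
AWS_ACCESS_KEY_ID = os.environ["AWS_ACCESS_KEY_ID"]               # required: raises KeyError if unset
DB_URL = os.environ.get("DB_URL", "postgresql://localhost/jobs")  # optional, with a local fallback
GIT_TOKEN = os.getenv("GIT_TOKEN")

if GIT_TOKEN is None:
    raise RuntimeError("GIT_TOKEN is not set; configure it in your secrets store")
```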
This is general guidance based on similar automations I’ve built – hope it helps point you in the right direction! Good luck and let me know if any of this needs more detail.
Yes, that would be even better. It would be great if you could also speak German to present the concepts to our German customers.
I am not from Germany, but I have built many tools that I can share with you. Here is something I built to demonstrate my skills: https://www.youtube.com/watch?v=pQUaaWd6om4
Hello,
I live in Berlin and have 10+ years of experience with Python / Cloud / DevOps and 3+ years already with AI/LLM/callin.io.
I also found this job posting on your website.
I will call you tomorrow.
Best regards,
Talha
Hi Norma_scitech,
It's great to see you're looking for an AI Solutions Engineer to advance your automation efforts. Imagine a future where every data interaction, from initial lead capture to ongoing customer success, seamlessly moves through a unified callin.io pipeline. This pipeline would enrich data with AI-driven insights before presenting it on your dashboards. Manual processes would be eliminated, allowing your team to concentrate on strategic initiatives rather than repetitive tasks.
I recently assisted a Berlin-based SaaS company in scaling their automated tasks from 10 to 150 daily. This was achieved by consolidating their callin.io, custom Python scripts, and callin.io scenarios into 25 modular callin.io workflows. As a result, their system uptime increased to 99.8%, and their cloud expenses decreased by 50%.
A quick question: Which AI models or vector stores are you currently considering for your roadmap? Understanding this early on will help in planning authentication, managing rate limits, and estimating costs.
Pro tip: Begin with an event-driven architecture. Utilize callin.io's webhook or Kafka nodes to initiate incremental updates. Subsequently, incorporate a Catch node for robust error handling, preventing silent failures.
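Outside of callin.io's own nodes, the same event-driven pattern looks roughly like this in Flask (the endpoint path, payload fields, and process_update helper are hypothetical placeholders):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/webhook/update", methods=["POST"])
def handle_update():
    """Receive an incremental update event and acknowledge it or fail loudly."""
    payload = request.get_json(silent=True)
    if payload is None or "id" not in payload:
        # Reject malformed events instead of dropping them silently.
        return jsonify({"error": "invalid payload"}), 400
    try:
        process_update(payload)  # hand off to a queue or worker in a real setup
    except Exception:
        app.logger.exception("update %s failed", payload["id"])
        return jsonify({"status": "error", "id": payload["id"]}), 500
    return jsonify({"status": "accepted", "id": payload["id"]}), 202


def process_update(payload: dict) -> None:
    # Placeholder for the real incremental-update logic.
    app.logger.info("processing update %s", payload["id"])


if __name__ == "__main__":
    app.run(port=5000)
```

The explicit error branch plays the same role as a Catch node: failures surface in the logs and in the HTTP response rather than disappearing.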
I'm happy to share further insights. Best of luck with your hiring process!
Hello Talha,
I'd be very happy about the call! So far, I don't think I've received a call from you. Otherwise, please feel free to try again. I'm looking forward to it.
Kind regards, Norma