Many technology services companies are rapidly engaging with Generative AI, eager to capitalize on this burgeoning wave, largely driven by Big Tech companies (Microsoft, Google, Meta, Amazon). Major cloud providers, including AWS, Google Cloud, IBM, and Microsoft, offer integrated AI and machine learning tools, platforms, and services that make it easier for organizations to explore, implement, and scale enterprise AI initiatives. Every month brings significant advancements in this domain, from new LLM/SLM releases to innovative patterns in Agentic AI.
We aim to share our journey and the key lessons we've discovered over the past two years of developing GenAI solutions and tools. Here, we want to explicitly state our focus on digital transformation through Generative AI and its application using a Model-as-a-Service (MaaS) approach, where pre-trained models are accessed through APIs or cloud services, allowing teams to build AI-powered applications without training or hosting models internally.
Building AI-powered applications introduces a range of challenges, especially with a do-it-yourself approach, often requiring expertise in AI methodologies, natural language processing, data science, and systems integration. Many enterprise AI applications today use deep learning, a subset of machine learning based on neural networks that enables processing of large datasets to extract insights. When relying on APIs or cloud services, choosing an appropriate technology stack supports scalable, secure, and efficient AI deployment.
At ITMAGINATION, we’ve been working on AI and Machine Learning projects since 2016, long before the current GenAI momentum accelerated across the industry.
Over the past two years, we have successfully built a significant GenAI competency within our organization and delivered several impactful projects.
Building AI-powered applications presents unique challenges, often requiring a fundamental shift in the development team's mindset to embrace probabilistic thinking rather than rigid determinism. Unlike traditional machine learning models, which are typically more task-specific and predictable, Generative AI systems introduce more variability and demand a different approach to design and evaluation. It's crucial to remember that models are inherently non-deterministic by design.
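This non-determinism is easiest to see in how models sample their next token. The sketch below is a simplified illustration (not any specific model's implementation): logits are rescaled by a temperature before normalization, and the same prompt can yield different outputs on repeated runs. The token names and logit values are invented for the example.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Temperature rescales logits before normalization: higher values
    # flatten the distribution and increase output variability.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates and model scores.
tokens = ["approve", "review", "reject"]
logits = [2.0, 1.0, 0.1]

# At low temperature the top token dominates (near-deterministic);
# at high temperature repeated samples diverge for the same input.
for t in (0.2, 1.0, 2.0):
    probs = softmax(logits, temperature=t)
    samples = random.choices(tokens, weights=probs, k=5)
    print(f"temperature={t}: {samples}")
```

This is why identical prompts can produce different answers in production, and why evaluation has to be statistical rather than exact-match.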
This brings us to a critical element in our success: the specialized role of the AI Engineer.
Our core team primarily consists of software engineers who have, over the years, grown accustomed to well-defined, fully deterministic systems. We work primarily with Banking, Financial Services, and Insurance clients, among others, where there is no room for error or "randomness".
That foundational mindset contrasts sharply with the probabilistic nature of AI technologies. The transition requires these engineers to evolve into AI Engineers: individuals equipped with knowledge spanning several areas, particularly relevant given how Agentic systems are defined.
To achieve this, we've identified two main phases in Agentic AI implementation: Discovery & Foundation, and Implementation.
These two phases require distinct skill sets. The Discovery & Foundation phase ideally calls for an AI Researcher/Data Engineer, someone with deep knowledge of MLOps and the intricacies of machine learning models.
The Implementation phase, on the other hand, demands a dedicated AI Engineer proficient in integrating all necessary services for specific solution deployments.
First and foremost, we believe in showing, not just telling. Over the past two years, we’ve delivered hands-on, production-grade GenAI work for real enterprise environments.
Key Achievements in Building the GenAI Competency:
In our view, an AI Engineer is an individual who, ideally, possesses knowledge of AI techniques such as machine learning, deep learning, and natural language processing; can conduct AI research and rapid prototyping using tools like Azure AI Foundry or Microsoft Copilot Studio; and holds a strong background in software engineering.
As evident, this specialized role demands a broad range of competencies. This is precisely why, as an organization, we exclusively leverage the Microsoft tech stack, focusing on enabling comprehensive tooling within enterprise environments, particularly those operating under stringent regulations common here in the EU where we are based.
During our work with GenAI projects, we've consistently observed that success hinges on a specific set of tools and, crucially, high-quality data. A primary concern quickly became defining what truly constitutes 'high-quality data'. This challenge led to extensive internal discussions on data quality and proper data governance processes, culminating in a refined set of guidelines and automations that inform our approach.
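As a flavor of what such guidelines can look like in code, here is a minimal sketch of automated quality rules for documents destined for a knowledge base. The specific rules, thresholds, and the `Document` shape are illustrative assumptions, not our actual checklist.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Document:
    doc_id: str
    title: str
    body: str
    owner: str
    last_updated: datetime

def quality_issues(doc: Document, max_age_days: int = 365,
                   min_body_chars: int = 200) -> list[str]:
    """Return the list of data-quality rule violations for one document."""
    issues = []
    if not doc.title.strip():
        issues.append("missing title")
    if not doc.owner.strip():
        issues.append("no assigned owner")
    if len(doc.body) < min_body_chars:
        issues.append("body too short to be useful retrieval context")
    if datetime.now() - doc.last_updated > timedelta(days=max_age_days):
        issues.append("stale: not reviewed within the freshness window")
    return issues

# Example: a well-maintained document passes, a neglected one does not.
good = Document("d1", "Refund policy", "x" * 300, "ops-team", datetime.now())
bad = Document("d2", "", "short", "", datetime.now() - timedelta(days=800))
print(quality_issues(good))
print(quality_issues(bad))
```

Encoding rules like these makes "high-quality data" an executable gate in the pipeline rather than a subjective review step.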
If you're interested, we’re happy to share how we structure this process in practice - reach out to request our GenAI data readiness checklist.
In our opinion, organizations need to prioritize data orchestration and governance layers before fully diving into the AI project implementation phase. This is precisely what we address in our Discovery & Foundation phase. Understanding the core objectives and the necessary data types is where Microsoft Fabric provides significant assistance, allowing us to build robust data foundations tailored to GenAI's unique demands.
One recurring scenario we've frequently encountered is the development of in-house knowledge bases, which naturally come with stringent data quality requirements. We've learned that, in most instances, existing manual document processes prove inefficient for the scale and precision GenAI demands. Automated processes are essential for operational efficiency, leveraging technologies like Power Automate or Power Platform (if you're already on the Microsoft stack; Google Workspace offers similar capabilities).
This approach necessitates a 'Lego bricks' mindset within our engineering teams, emphasizing the selection of appropriate modular technologies and services for specific scenarios. We strongly advocate for what we term 'modern workplaces,' where low-code/no-code solutions seamlessly coexist with full-code development.
These deeper insights from our enterprise artificial intelligence implementation journey highlight why GenAI projects often differ from traditional software development:
Even in the field of Generative AI, traditional software engineering principles, especially robust testing, remain invaluable. We strongly believe that early and continuous evaluation is essential to the success of any enterprise AI project. We approach the assessment of LLM/SLM behavior much like unit and integration testing in classic software engineering. Thanks to AI Foundry and Prompt Flow, it is easy to embed consistent, recurring evaluation processes into our workflow.
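To make the unit-testing analogy concrete, here is a simplified, framework-agnostic sketch of such an evaluation check (not the Prompt Flow API itself). It scores a model answer against expected keywords and forbidden phrases; the test case content is invented for illustration.

```python
def evaluate_answer(answer: str, must_contain: list[str],
                    forbidden: tuple[str, ...] = ()) -> dict:
    """Score one model answer against simple, assertable expectations."""
    text = answer.lower()
    hits = sum(1 for kw in must_contain if kw.lower() in text)
    violations = [kw for kw in forbidden if kw.lower() in text]
    return {
        "keyword_recall": hits / len(must_contain),
        "violations": violations,
        "passed": hits == len(must_contain) and not violations,
    }

# A hypothetical regression case, run against each new model or prompt version.
case = {
    "question": "What is our refund window?",
    "must_contain": ["30 days", "receipt"],
    "forbidden": ["guaranteed"],
}
result = evaluate_answer(
    "Refunds are accepted within 30 days with a valid receipt.",
    case["must_contain"],
    tuple(case["forbidden"]),
)
print(result)
```

Because model outputs are non-deterministic, checks like this are typically run over many samples and tracked as pass rates rather than single pass/fail results.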
This rigorous evaluation process allows us to establish guardrails for AI models, ensuring they operate within expected parameters and deliver reliable outputs. By consistently verifying performance from the outset, we gain confidence in the system's consistency. This directly impacts the Return on Investment (ROI), as it ensures that the AI solution reliably delivers value and avoids costly rework or misaligned outcomes.
Furthermore, early evaluation is a foundational component of our AI Framework and a key activity within our Discovery & Foundation phase. By embedding evaluation from the very beginning, we can rapidly identify potential issues, manage AI models and data more efficiently, and ultimately build solutions that are not only innovative but also robust, predictable, and aligned with our clients' business objectives. This proactive approach minimizes risks and accelerates the path to tangible results.
To navigate the complexities of GenAI development and ensure predictable, high-quality outcomes for our clients, we've developed a proprietary AI Framework. This framework supports our enterprise AI strategy by providing a structured methodology for implementing Generative AI solutions across various business domains.
Our AI Framework is designed to address key challenges, from initial concept to deployment and beyond, ensuring:
With this framework, we provide a reliable roadmap for enterprises looking to make use of GenAI’s capabilities.
Most GenAI use cases, especially knowledge bases, depend on clean, structured, and well-tagged data. Manual document processes often fall short. Ensuring data quality at scale is a shared challenge for both data scientists and engineering teams. We’ve automated these workflows using tools like Power Automate (or Google Workspace equivalents), enabling scalable, repeatable ingestion pipelines.
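The core of such an ingestion workflow can be sketched in a few lines, independent of the orchestration tool that triggers it. The sketch below is an illustrative assumption of the shape of these pipelines (the tag vocabulary and document sources are invented): it normalizes text, drops verbatim duplicates via a checksum, and attaches tags before indexing.

```python
import hashlib
import re

# Hypothetical controlled vocabulary used for tagging documents.
TAG_VOCAB = {"invoice", "policy", "claim"}

def normalize(text: str) -> str:
    """Collapse whitespace so formatting differences don't create duplicates."""
    return re.sub(r"\s+", " ", text).strip()

def ingest(raw_docs: list[tuple[str, str]]) -> list[dict]:
    """Deduplicate, normalize, and tag documents before indexing."""
    seen = set()
    out = []
    for source, text in raw_docs:
        clean = normalize(text)
        digest = hashlib.sha256(clean.lower().encode()).hexdigest()
        if digest in seen:
            continue  # skip verbatim duplicates across sources
        seen.add(digest)
        out.append({
            "source": source,
            "text": clean,
            "checksum": digest,
            "tags": sorted({w for w in clean.lower().split() if w in TAG_VOCAB}),
        })
    return out

# The same document arriving twice (e.g. SharePoint and email) is indexed once.
docs = [("sharepoint/a.txt", "Claim   policy\n text"),
        ("mail/b.txt", "claim policy text")]
print(ingest(docs))
```

In practice a low-code tool like Power Automate handles the triggering and connectors, while logic of this kind enforces the quality guarantees the knowledge base depends on.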
Engineering teams must think modularly about choosing the right tools and services for each task. This is especially important in hybrid environments where low-code/no-code solutions coexist with full-code AI systems.
To demonstrate ROI and gain stakeholder buy-in, fast iteration is key. Treat every GenAI project like a startup: test, learn, and pivot quickly to reduce time-to-value and accelerate adoption.
Executives and team leaders must champion the cultural change required to succeed with GenAI. This includes embracing uncertainty, investing in training, and supporting cross-functional collaboration to ensure teams are prepared to work effectively with enterprise AI technology.
It typically takes three months to build an efficient GenAI team. Plan for this ramp-up period and support your teams with the right resources.
We’re currently working on:
Contact us to learn more about how we can help your organization leverage Generative AI.