As a data scientist, your focus is the data science lifecycle: it starts with data ingestion and preparation, continues with developing and training a machine learning model, and ends with deploying that model. And just as developers evolve their code over time, you need to retrain and redeploy your model periodically, because fresh data has become available, you have found a better model, and so on.
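The retrain-and-redeploy loop can be sketched in a few lines. The sketch below is illustrative only: the helper names (`train_and_score`, `should_promote`) and the simple "promote if the candidate scores better" rule are assumptions for the example, not part of any specific product or pipeline.

```python
# Illustrative sketch of the retrain/redeploy decision, using scikit-learn.
# The function names and promotion rule are assumptions for this example.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def train_and_score(X_train, y_train, X_test, y_test):
    """Train a model and return it together with its held-out accuracy."""
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model, model.score(X_test, y_test)


def should_promote(current_score, candidate_score, min_gain=0.0):
    """Redeploy only when the candidate beats the currently deployed model."""
    return candidate_score > current_score + min_gain


# Initial training run on the first batch of data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
deployed, _ = train_and_score(X_tr, y_tr, X_te, y_te)

# Later: fresh data arrives, so retrain and compare on the new held-out set
# before redeploying.
X2, y2 = make_classification(n_samples=1000, n_features=10, random_state=1)
X2_tr, X2_te, y2_tr, y2_te = train_test_split(X2, y2, test_size=0.3, random_state=0)
deployed_score = deployed.score(X2_te, y2_te)
candidate, candidate_score = train_and_score(X2_tr, y2_tr, X2_te, y2_te)

if should_promote(deployed_score, candidate_score):
    deployed = candidate  # "redeploy": swap in the better model
```

In a real pipeline the comparison step would run automatically on every retraining trigger, with the winning model registered and rolled out by the CI/CD system rather than swapped in memory.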
For application developers, the software development lifecycle has become increasingly robust, reliable and repeatable, not least because continuous integration and continuous deployment (CI/CD) practices introduced automation into the entire process.
Data scientists have traditionally used their favorite Python notebook or an integrated development environment (IDE) to develop and train machine learning models. Often, the trained and tuned model was then handed over to an application developer to be integrated into a larger application. These were considered very different tasks, performed by separate teams with little to no interaction. Doesn't this sound a lot like the developer-operations walled gardens that we tore down with DevOps practices?
As AI is infused into more business-critical applications, it is increasingly clear that data scientists, application developers and operations teams need to collaborate closely to bring the same robust and repeatable process to AI-powered applications. Building on DevOps tools, practices and processes, the time is right to put AIOps in place. This is where Azure DevOps and the Azure Machine Learning services come in.
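A model's training and registration can be driven by the same pipeline machinery used for application code. The following Azure Pipelines definition is a minimal sketch: the stage layout and script names (`train.py`, `register_model.py`) are assumptions for illustration, not a prescribed project structure.

```yaml
# Illustrative azure-pipelines.yml; script names are assumptions.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.10'
  - script: pip install -r requirements.txt
    displayName: Install dependencies
  - script: python train.py
    displayName: Train and evaluate the model
  - script: python register_model.py
    displayName: Register the model for deployment
```

With a setup like this, every push to `main` retrains the model under version control, so the "handover" from data scientist to developer becomes an automated, auditable pipeline run.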
In our Techorama session, we'll dive deep into the components that make up such a data science lifecycle. Through a series of demos, we'll push a machine learning model through the different steps of the process and elaborate on the core functionality and extensibility points for training and deploying AI-powered applications.