AI/DATA MiniConference 2025: Operationalizing Your AI Models
Cebu's AI and data community started 2025 with an exciting and insightful event: the AI/DATA MiniConference: Operationalizing Your AI Models, held on February 16 at the Accenture Cebu Office, eBloc 2 Tower, IT Park. Organized by PizzaPy—Cebu Python Users Group, AI Pilipinas Cebu, Ideate Exchange, and Data Engineering Cebu, and in collaboration with Accenture Philippines, the mini-conference brought together AI/ML practitioners, data enthusiasts, and industry leaders to explore the practicalities of deploying AI models in real-world environments.
Igniting Cebu's Tech Synergy
The event kicked off with Neriah "BJ" Ato, Python Developer at BLV Solutions and Co-Lead Organizer of PizzaPy Cebu Python Users Group, introducing the collaboration between PizzaPy, AI Pilipinas Cebu, AI Generation Cebu, and Data Engineering Cebu. He shared updates on PizzaPy's latest activities, highlighting the community's ongoing efforts to foster tech engagement.
Following this, Joanna "Anj" Mariblanca, Founder of IDEX Corporation and AI Generation Cebu, reflected on past events and provided insights into their upcoming AI workshops for the year. Jerry Marte, Senior Data Engineer at VBP and Community Lead of Data Engineering Cebu, wrapped up the introductions by sharing the group's origins and plans for a data-focused event later in 2025.
This AI/ML mini-conference marked PizzaPy's first event of the year and was organized in collaboration with Accenture in the Philippines, further expanding partnerships within Cebu's tech ecosystem. More than just a gathering, it served as a launchpad for future collaborations—an open invitation for the community to build, share, and grow together throughout the year.
Part 1: Laying the Foundations of MLOps
Diving into MLOps: From Concept to Production
The first keynote, "Introduction to MLOps," was delivered by Maxell Milay, Lead Software Engineer at Bitwork Solutions and an AI Advocate at AI Pilipinas Cebu. He tackled the challenges of bringing machine learning (ML) models from research to production, emphasizing that not all models rely on labeled training data—especially in cases like unsupervised learning. He also highlighted how models can degrade over time due to shifting data distributions, a challenge often overlooked in real-world applications.
Maxell discussed data drift, where production data diverges from what a model was trained on, leading to performance drops. He explained train-serve skew, the mismatch between training and inference data, and stressed the need for continuous retraining to keep models relevant. He said, "Garbage in, garbage out—if your data changes, your model needs to adapt."
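The drift check Maxell described can be sketched in a few lines of plain Python (a toy heuristic, not his actual tooling): compare a live window of a feature against its training distribution and flag large standardized shifts.

```python
from statistics import mean, stdev

def drift_score(train_values, live_values):
    """Standardized shift of the live mean away from the training mean.

    A score above roughly 3 suggests the live distribution has moved
    noticeably from what the model was trained on.
    """
    mu, sigma = mean(train_values), stdev(train_values)
    if sigma == 0:
        return 0.0
    return abs(mean(live_values) - mu) / sigma

train = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
live_ok = [10.1, 10.4, 9.9]
live_shifted = [15.0, 16.2, 15.5]

print(drift_score(train, live_ok))       # small: distributions still agree
print(drift_score(train, live_shifted))  # large: time to retrain
```

A production system would track many features and use proper statistical tests, but the retraining trigger works on the same principle.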
His presentation broke down the ML pipeline, covering data aggregation, preprocessing, feature engineering, model training, validation, and deployment. He also introduced the Challenger vs. Champion approach, where new models are tested against the current best-performing model. If the challenger outperforms the champion, it replaces the existing production model—a critical process in MLOps.
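The Challenger vs. Champion promotion step might look like this minimal Python sketch; the models, metric, and holdout data here are illustrative stand-ins, not any specific framework's API:

```python
def accuracy(model, X, y):
    """Share of holdout examples a model predicts correctly."""
    return sum(1 for xi, yi in zip(X, y) if model(xi) == yi) / len(y)

def promote(champion, challenger, X_holdout, y_holdout):
    """Return the model that should serve production traffic.

    The challenger replaces the champion only if it scores
    strictly better on the same holdout data.
    """
    champ_score = accuracy(champion, X_holdout, y_holdout)
    chall_score = accuracy(challenger, X_holdout, y_holdout)
    return challenger if chall_score > champ_score else champion

# Toy models: predict a boolean label from a threshold on one feature.
champion = lambda x: x > 5
challenger = lambda x: x > 3

X = [1, 2, 4, 6, 8]
y = [False, False, True, True, True]

best = promote(champion, challenger, X, y)
print(best is challenger)  # the challenger wins on this holdout set
```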
Maxell wrapped up by discussing best practices for bridging gaps between data engineers, data scientists, and ML engineers. He emphasized the importance of managing artifacts, data, and model versions while leveraging cloud-based solutions for scalable training and deployment.
Building AI-Driven Trading Strategies
The second keynote, "Building and Deploying an LSTM Mean Reversion Trading Strategy in AWS ECS and Docker Hub", was presented by Gaille Amolong, a first-year Computer Science student at Cebu Institute of Technology - University and Software Engineer at Bitwork Solutions. Despite his early career stage, Gaille showcased impressive expertise in deploying AI-driven trading strategies.
He explained how Long Short-Term Memory (LSTM) networks, designed for time-series data, capture complex patterns over time. Gaille's trading strategy leveraged the mean reversion principle—that asset prices tend to return to their average values, creating opportunities to "buy low and sell high."
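As a rough illustration of the mean reversion idea (not Gaille's actual strategy, and certainly not financial advice), a z-score of the latest price against a recent rolling window can drive a toy buy/sell/hold signal:

```python
from statistics import mean, stdev

def mean_reversion_signal(prices, window=5, z_threshold=1.0):
    """Z-score of the latest price against its recent rolling window.

    A strongly negative z-score means the price sits well below its
    recent average, a "buy low" candidate under mean reversion; a
    strongly positive one suggests "sell high".
    """
    recent = prices[-window:]
    mu, sigma = mean(recent), stdev(recent)
    if sigma == 0:
        return "hold"
    z = (prices[-1] - mu) / sigma
    if z < -z_threshold:
        return "buy"
    if z > z_threshold:
        return "sell"
    return "hold"

prices = [100.0, 101.0, 100.5, 101.2, 95.0]  # last price dips far below average
print(mean_reversion_signal(prices))  # "buy"
```

An LSTM-based strategy replaces this fixed rolling average with a learned forecast of where the price "should" revert to.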
Gaille detailed the end-to-end pipeline:
- Model Training: Downloading financial data, preprocessing it, and generating (X, y) sequences.
- Model Architecture: Using LSTM layers with dropout, L2 regularization, and an MSE loss function optimized by Adam.
- API Development: Building an API layer for real-time predictions.
- Containerization: Packaging the app into a Docker image for consistent deployment.
- Deployment: Pushing the image to Docker Hub and deploying it on AWS Elastic Container Service (ECS).
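The first step, generating (X, y) sequences from a price series, can be sketched with a simple sliding window; this plain-Python stand-in mirrors the preprocessing an LSTM pipeline would do before training:

```python
def make_sequences(prices, window=3):
    """Turn a price series into (X, y) pairs for sequence models.

    Each X is `window` consecutive prices and y is the next price,
    which is the shape an LSTM expects for one-step-ahead forecasting.
    """
    X, y = [], []
    for i in range(len(prices) - window):
        X.append(prices[i:i + window])
        y.append(prices[i + window])
    return X, y

prices = [100.0, 101.5, 99.8, 100.2, 101.0, 100.6]
X, y = make_sequences(prices, window=3)
print(X[0], "->", y[0])  # [100.0, 101.5, 99.8] -> 100.2
```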
He emphasized the value of containerization for ensuring portability and scalability, noting that "Docker eliminates the 'works on my machine' problem, making deployments seamless across environments."
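A containerized prediction API along these lines might use a Dockerfile like the following sketch; the file names and serving script are hypothetical, not Gaille's actual setup:

```dockerfile
# Minimal image for a prediction API (file names are illustrative).
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code, including the trained model artifact.
COPY . .

# Serve the prediction API on the container's port 8000.
EXPOSE 8000
CMD ["python", "serve.py"]
```

Once built and pushed to Docker Hub, the same image runs unchanged on a laptop or on an ECS task, which is exactly the portability the talk highlighted.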
Gaille emphasized that while the strategy showcased an end-to-end approach, it is not financial advice—market conditions vary, and due diligence is essential. Nonetheless, his demonstration highlighted the power of combining AI, containerization, and cloud infrastructure for scalable, real-world solutions.
Community Spotlight: AI Pilipinas Cebu
Following the first set of talks, Luisa Abigail Go, Product Design Lead for Dexcribe.com and Community Manager at AI Pilipinas Cebu, took the stage to introduce AI Pilipinas Cebu. She shared the community's vision, mission, and initiatives for 2025, breaking down its mission into what it is, why it matters, and how it's growing the local AI community. She shared how the group brings AI enthusiasts together and highlighted key programs like the AI Advocate Program, which recognizes contributors—including Gaille Amolong and Maxell Milay, both speakers at this event. She also talked about their Leadership Program, which helps aspiring organizers run AI events and workshops.
Part 2: Scaling AI in Practice
Practical LLMOps & Prompt Engineering – Jerel John Velarde
Jerel Velarde, Director of Product Management at Full Scale Ventures and co-founder of AI Pilipinas Cebu, delivered a hands-on session on LLMOps and prompt engineering, focusing on practical strategies for building AI products. He emphasized that LLMOps isn’t just about deploying LLMs—it’s about aligning AI with business goals.
Key Takeaways:
- Start with the Business Outcome – “Start with the business outcome, then Model Selection & Orchestration, then Prompt Engineering, then RAG, and only then consider fine-tuning if absolutely necessary.”
- MLOps vs. LLMOps – “In Machine Learning, look at the data. In Generative AI, look at the output.”
- Model Selection Matters – Choosing the right model depends on:
  - Task Alignment – “Does the model fit the domain and functionality?”
  - Speed vs. Quality – “Which does the use case value more—speed or quality?”
  - Cost & Compliance – “Is the operational expenditure justified by the business value?”
- Retrieval-Augmented Generation (RAG) – Enhances LLMs with real-time knowledge by structuring data retrieval: “Import, split, embed, and store—RAG is all about structuring knowledge retrieval for better responses.”
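The "import, split, embed, and store" loop can be sketched end to end in plain Python; the bag-of-words embedding below is a toy stand-in for a real embedding model, and the in-memory list stands in for a vector store:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; a real system calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Import and split: break source documents into chunks.
chunks = [
    "LSTM networks model time-series data.",
    "Docker packages apps into portable images.",
    "RAG retrieves documents to ground LLM answers.",
]

# Embed and store: keep (vector, chunk) pairs as a tiny in-memory index.
index = [(embed(c), c) for c in chunks]

def retrieve(query, k=1):
    """Return the k stored chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[0]), reverse=True)
    return [chunk for _, chunk in ranked[:k]]

print(retrieve("How does Docker deployment work?"))  # the Docker chunk ranks first
```

The retrieved chunks would then be pasted into the LLM prompt as grounding context, which is the "better responses" half of the quote.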
- Prompt Engineering – “Good writers aren’t necessarily good prompt engineers.” Treat LLMs like a smart new hire:
  - Clear Instructions – “Does the model have everything it needs to do the task?”
  - Reference Materials – “Provide examples.”
  - Task Breakdown – “Split complex requests into simpler sub-tasks.”
  - Time to Think – “Let the model process logically.”
  - “Talk to your model like a smart new employee—break down tasks, provide context, and review outputs intentionally.”
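Those briefing principles can be folded into a small prompt builder; the helper below is hypothetical, shown only to make the structure concrete:

```python
def build_prompt(task, context, examples, subtasks):
    """Assemble a prompt the way you would brief a smart new hire:
    clear instructions, reference materials, and a task breakdown."""
    parts = [
        f"Task: {task}",
        f"Context:\n{context}",
        "Examples:\n" + "\n".join(f"- {e}" for e in examples),
        "Work through these steps before answering:\n"
        + "\n".join(f"{i}. {s}" for i, s in enumerate(subtasks, 1)),
    ]
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Summarize this customer ticket in one sentence.",
    context="Ticket: 'App crashes when I upload a photo larger than 10MB.'",
    examples=["Ticket about login errors -> 'User cannot log in after reset.'"],
    subtasks=["Identify the feature involved", "State the failure condition"],
)
print(prompt)
```

The numbered steps give the model "time to think", and the examples section carries the reference materials Jerel recommended.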
Case Study: GetYourCoach.AI – Jerel showcased his AI-powered career coaching app, built with Flutter, Firebase, and VAPI, featuring:
- Discovery Calls – “Create a custom identity, base voice agent, and define goals.”
- Follow-Up Calls – Tracks user progress with memory.
- Mock Interviews – “Custom prompts based on job descriptions, followed by automated grading and feedback.”
MLOps in Action: Creating Business Value
The final keynote was delivered by Joshua Hiwatig, AVP Senior Machine Learning Engineer at RCBC, with "MLOps in Action: Creating Value from Models in Production." Joshua provided a pragmatic view of how MLOps accelerates the journey from experimentation to production, ensuring models drive real business impact.
He highlighted the traditional challenges of ML workflows: manual processes, lower productivity, and high costs. MLOps addresses these by automating code, data, and model pipelines, enhancing reproducibility, and ensuring seamless collaboration between data scientists and engineers.
Joshua shared MLOps' guiding principles: CI/CD pipelines, standardized features, robust monitoring, and iterative testing. He emphasized the importance of feature stores for centralized, shareable, high-quality data, reducing training-serving skew.
One key takeaway was the need for comprehensive monitoring strategies. Functional monitoring tracks data quality, drift, and model performance, while operational monitoring ensures system reliability and cost efficiency. Joshua noted, "Monitoring is key, but it's only valuable if you act on it."
To illustrate the real-world impact, Joshua demonstrated how Databricks could jumpstart MLOps use cases, enabling quicker experimentation, deployment, and retraining cycles. This approach translates into better business outcomes through intelligent decision-making powered by continuously optimized models.
Key Takeaways:
- Strategic MLOps Drives Business Results: Effective MLOps enhances efficiency and delivers better business outcomes by optimizing the end-to-end ML lifecycle.
- Empowering People and Processes: MLOps boosts productivity across teams—data scientists focus on innovation while infrastructure and data engineers work more efficiently.
- Flexible, Not Rigid: MLOps principles aren't strict rules but adaptable practices tailored to organizational needs.
- Monitoring Must Lead to Action: Tracking model drift, data quality, and business value is essential, but the real impact comes from acting on those insights.
- Jumpstarting with Databricks: Tools like Databricks simplify MLOps adoption, offering scalable, customizable solutions to accelerate experimentation, deployment, and retraining cycles.
Networking and Future Outlook
The event ended with closing remarks from Marvin Bonifacio, Associate Director and Growth Captain for Accenture Cebu, who underscored Accenture's commitment to fostering innovation through community-driven initiatives. The conference concluded with a lively networking session, where attendees exchanged ideas, shared experiences, and explored potential collaborations.
The community is excited about emerging trends such as edge MLOps, bias detection, and cloud-native deployments. With Cebu's vibrant tech ecosystem, events like this continue to empower practitioners, ensuring that AI solutions move beyond development environments and deliver real value in production.
Stay tuned for more exciting meetups and workshops from PizzaPy, AI Pilipinas Cebu, AIGen Cebu, Data Engineering Cebu, and their partners. Together, they're transforming AI from concept to reality—one model at a time.
Explore More Tech Events in Cebu
Want to stay in the loop on Cebu's latest tech events, workshops, and networking opportunities? Check out Cebby, a smart event discovery platform that automatically aggregates tech events from multiple sources. Whether you're a local dev or just visiting, Cebby ensures you never miss an opportunity to learn, connect, and grow.
Created by Dorell James, community lead of JavaScript Cebu, Cebby is designed to help Cebu's tech community stay engaged and informed.
This summary references event notes provided by Jerry Marte.