Azure AI Fundamentals
An in-depth, two-part workshop designed to build your foundational knowledge of artificial intelligence and machine learning on Microsoft Azure. Led by Microsoft MVP Anupama Natarajan, this session aligns with the AI-900 certification exam and is packed with real-world examples, practical demos, and expert tips to kickstart your journey into AI.
What Is Artificial Intelligence?
This section provided an introduction to the core concepts of Artificial Intelligence (AI) and its common workloads. The speaker defined AI as software that imitates human capabilities, such as predicting outcomes, recognizing patterns, and understanding language.
Key workloads discussed include:
Anomaly Detection: Identifying unusual events or patterns, with examples like credit card fraud detection and predictive maintenance for wind turbines.
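To make the idea concrete, here is a minimal, illustrative sketch of statistical anomaly detection (not the Azure Anomaly Detector service itself): readings far from the mean, measured in standard deviations, are flagged. The sensor data is hypothetical.

```python
from statistics import mean, stdev

def zscore_anomalies(readings, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mu = mean(readings)
    sigma = stdev(readings)
    return [x for x in readings if abs(x - mu) > threshold * sigma]

# Hourly vibration readings from a hypothetical wind turbine sensor;
# the 95.0 spike stands out from the normal operating range.
readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 95.0, 10.1, 9.7, 10.0]
print(zscore_anomalies(readings))  # → [95.0]
```

The managed Azure service applies far more sophisticated models (seasonality, trends, streaming data), but the underlying question is the same: which data points don't fit the pattern?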
Computer Vision: Enabling software to interpret information from images and videos. The speaker demonstrated how it can identify objects, extract text (OCR) from typed or handwritten notes, and recognize landmarks. A real-world example was given of a power company using an app to read utility meters from photos submitted by customers.
Natural Language Processing (NLP): Making software, like a coffee-ordering bot, understand and interact in a human-like way.
Knowledge Extraction: Unlocking valuable insights from the large amounts of data that organizations collect.
Azure AI Services: The session highlighted the key services on Microsoft Azure for building these solutions, including Azure Cognitive Services, Azure Bot Service, Azure Cognitive Search, and the new Azure OpenAI Service for generative AI tasks.
The Six Principles of Responsible AI
This chapter covered the critical importance of developing AI systems ethically and responsibly. The speaker discussed the potential risks of AI, such as introducing bias (e.g., a loan approval model favoring one gender), errors causing harm (e.g., in autonomous vehicles), exposing private data, and a lack of trust or accountability.
The discussion focused on Microsoft's six guiding principles for responsible AI:
Fairness: AI systems should treat everyone fairly, avoiding bias based on gender, ethnicity, or other factors.
Reliability and Safety: Systems must undergo rigorous testing to ensure they operate as expected without causing harm.
Transparency: It should be clear and explainable how an AI model works and makes decisions.
Privacy and Security: Personal and sensitive data must be kept private and secure.
Inclusiveness: Solutions should be designed to work for everyone, regardless of physical ability, gender, or other factors.
Accountability: The people who design and deploy AI systems are accountable for how they operate.
Fundamentals of Machine Learning
This section provided a foundational overview of machine learning (ML) as a subset of AI. ML was defined as the process of creating predictive models by finding relationships within data.
The main types of machine learning were explained:
Supervised Learning: Uses data that is already labeled with correct answers to train the model.
Regression: A supervised technique that predicts a numeric value (e.g., predicting the price of a car or the number of bike rentals).
Unsupervised Learning: Finds patterns in data that has no labels.
Clustering: An unsupervised technique that groups similar data points together (e.g., segmenting customers based on purchasing habits).
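As a tiny illustration of supervised learning, the sketch below fits a straight line to labeled examples with ordinary least squares and then predicts an unseen value. The temperature-vs-rentals numbers are invented for the example; real regression models in Azure ML handle many features, not just one.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Labeled training data (hypothetical): temperature in °C vs. bike rentals.
temps = [5, 10, 15, 20, 25]
rentals = [120, 150, 180, 210, 240]
a, b = fit_line(temps, rentals)
print(round(a * 30 + b))  # → 270 rentals predicted for a 30 °C day
```

The model "learns" the relationship (here, 6 extra rentals per degree) purely from the labeled data, which is the essence of supervised learning.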
The speaker also introduced the Azure Machine Learning service as the platform for building, training, and deploying models. The three main ways to work within the service were covered:
Notebooks: A code-first approach, typically using Python.
Automated ML: A wizard-based, no-code tool that automatically tries many models to find the best one for your data. A demo was started to forecast bike-sharing demand.
Designer: A drag-and-drop visual interface for building ML pipelines without writing code.
Fundamentals of Computer Vision
This section covered how AI systems interpret and understand visual data from images and videos. The discussion focused on the Vision category of Azure Cognitive Services.
A key distinction was made between services:
Computer Vision Service: A pre-trained service for generic analysis. It can:
Describe an image (e.g., a person walking a dog on a street).
Classify common objects (e.g., car, bus, person).
Recognize landmarks, celebrities, and brands.
Extract printed or handwritten text using Optical Character Recognition (OCR).
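For a sense of how this service is called, here is a sketch of the request shape for the Computer Vision Analyze Image REST endpoint (v3.2). The endpoint URL, key, and image URL are placeholders; the request is assembled but deliberately not sent.

```python
# Placeholders — in a real app these come from your Azure resource.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
subscription_key = "<your-key>"

# Analyze Image call: ask for a description, detected objects, and tags.
url = f"{endpoint}/vision/v3.2/analyze"
params = {"visualFeatures": "Description,Objects,Tags"}
headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/json",
}
body = {"url": "https://example.com/street-scene.jpg"}

# To actually send it you would use something like:
# requests.post(url, params=params, headers=headers, json=body)
print(url)
```

The response is JSON containing captions (e.g., "a person walking a dog on a street"), tagged objects with confidence scores, and any detected text.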
Custom Vision Service: This service is used to train your own model to recognize specific objects. The speaker used the example that generic Computer Vision can see a dog, but you would use Custom Vision to train a model to identify a specific breed of dog. A demo was shown where a model was trained to classify animal photos (elephants, giraffes, lions). This service supports:
Image Classification: Assigning a single tag to an entire image.
Object Detection: Identifying multiple objects in an image and their locations (bounding boxes).
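Object detection results are usually compared by how much their bounding boxes overlap, measured as intersection-over-union (IoU). The sketch below computes IoU for boxes given as (left, top, width, height); the coordinates are made up for illustration.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (left, top, width, height)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Intersection rectangle corners.
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union

# Two overlapping detections of the same animal (hypothetical coordinates).
print(iou((0, 0, 10, 10), (5, 5, 10, 10)))  # → 0.142857... (25 / 175)
```

A high IoU between two detections suggests they mark the same object; detectors use this kind of overlap measure when merging duplicate boxes and when scoring predictions against ground truth.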
Face API: A specialized service for detailed facial analysis, such as comparing faces for similarity, verifying an individual, and identifying facial expressions.
Forms Recognizer (now Document Intelligence): Discussed as an Applied AI Service designed to solve a specific business problem: extracting text and structured data (like key-value pairs) from documents, receipts, invoices, and forms.
Fundamentals of Natural Language Processing
This module focused on how AI understands and processes human language, including both text and speech.
The key services and their capabilities were:
Language Service: The core service for text analysis. Its features include:
Sentiment Analysis: Determining if text is positive, negative, or neutral.
Entity Recognition: Identifying and categorizing key information (e.g., people, places, dates).
Key Phrase Extraction: Pulling the main talking points from text.
Language Detection: Identifying the language of a document.
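To illustrate what sentiment analysis produces, here is a deliberately naive lexicon-based scorer. This is a toy, not the Azure Language service, which uses trained models rather than word lists, but it shows the shape of the output: a positive/negative/neutral label for a piece of text.

```python
# Toy word lists for illustration only.
POSITIVE = {"great", "good", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "slow"}

def sentiment(text):
    """Classify text by counting positive vs. negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The service was great and the staff were excellent"))  # → positive
print(sentiment("The delivery was slow and the food was terrible"))     # → negative
```

The real service returns a confidence score for each sentiment class and can score individual sentences within a document.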
Speech Service: This service handles all audio-related tasks:
Speech-to-Text: Transcribing spoken audio. The speaker used a call center example where agent calls are automatically transcribed to create notes.
Text-to-Speech: Converting written text into natural-sounding synthesized speech.
Translator Service: A service used to translate text between over 100 languages.
Conversational AI: This concept covered the creation of intelligent chatbots.
Question Answering: A feature of the Language Service that can build a Q&A bot from existing knowledge bases like FAQs or product manuals.
Azure Bot Service: An Applied AI Service used to build, deploy, and manage the bot itself. The speaker emphasized that NLP is what makes a bot truly intelligent by allowing it to understand a user's intent from a natural sentence (like the coffee bot example) rather than just relying on simple button clicks.