Use VCE Exam Simulator to open VCE files
Get 100% Latest Microsoft Certified: Azure AI Engineer Associate Practice Tests Questions, Accurate & Verified Answers!
30 Days Free Updates, Instant Download!
Download Free Microsoft Certified: Azure AI Engineer Associate Exam Questions in VCE Format
| File Name | Size | Downloads | Votes |
|---|---|---|---|
| microsoft.test-king.ai-102.v2024-11-13.by.elliot.67q.vce | 2.69 MB | 102 | 1 |
| microsoft.testkings.ai-102.v2021-10-13.by.dominic.57q.vce | 983.61 KB | 1205 | 1 |
| microsoft.testking.ai-102.v2021-10-09.by.angel.41q.vce | 1.14 MB | 1209 | 1 |
| microsoft.testking.ai-102.v2021-07-09.by.angel.80q.vce | 814.27 KB | 1304 | 1 |
| microsoft.test4prep.ai-102.v2021-05-06.by.cameron.32q.vce | 792.02 KB | 1361 | 2 |
| microsoft.examcollection.ai-102.v2021-05-05.by.archie.14q.vce | 566.78 KB | 1368 | 2 |
Microsoft Certified: Azure AI Engineer Associate Certification Practice Test Questions, Microsoft Certified: Azure AI Engineer Associate Exam Dumps
ExamSnap provides Microsoft Certified: Azure AI Engineer Associate Certification Practice Test Questions and Answers, Video Training Course, Study Guide and 100% Latest Exam Dumps to help you pass. The Microsoft Certified: Azure AI Engineer Associate Certification Exam Dumps & Practice Test Questions in the VCE format are verified by IT trainers who have more than 15 years of experience in their field. Additional materials include a study guide and video training course designed by the ExamSnap experts. So if you want trusted Microsoft Certified: Azure AI Engineer Associate Exam Dumps & Practice Test Questions, you have come to the right place.
Alright, so for the last example in this section, we're going to be looking at the requirement to analyse facial attributes using the Face API. Remember, earlier in this section, I mentioned that the Face API has the ability to detect attributes. The attributes can be everything from age, emotion, and facial hair to presumed gender, whether they're wearing glasses, the type of hair, how their head is posed, and whether they're wearing makeup.
There are also attributes for visual noise in the image, whether something is blocking the face, and the smile expression. And so in this example, we're going to do that. If we go under extract facial information, the file is called "analyze facial attributes.py". Now, in this example, we're going to do something completely different to what we've done before: we're not actually importing the Azure Face SDK. What we're going to do in this case is call the REST API directly, using an HTTP request and response. That way, we can work with the Face API from any language, not just the languages supported by the Azure SDK. So we're not importing the Face package at all. What we're going to do is look at this image of Mr. Steve Jobs holding an apple, find a face in this image, and, of course, analyse the facial attributes. It's a very short file; you can see it's not even 20 lines of relevant code. It basically takes the image and the Cognitive Services API key, passes parameters requesting specific attributes (age, gender, head pose, smile, facial hair, glasses, and emotion), posts the request to the REST API, and gets the response back as JSON.
Rather than only outputting that to the console, I'm actually going to save it to a file, and then we can examine the contents of the file for more clarity. Okay, so again, this uses the Face REST API. Let's switch over to PyCharm and run the script. Now, this script doesn't output much to the console, but remember, we saved the JSON to a file. So we can go into the directory and look at the JSON file, face.json. Open that in Notepad and we can see the attributes that came back for Steve Jobs' face. In this photo, it's estimated he's 34. His emotions are listed as mostly happy: 81% happy, 18% neutral, and 0% contempt. I find that super interesting. For facial hair, it detects no beard, no moustache, and no sideburns; gender is male; glasses is "no glasses"; there's the pose of his head; and again, an 81% smile. So this is the analysis that the Azure Face API was able to do on that picture. We also see those 27 landmarks: the locations of his eyebrows, his eyes, his nose (the tip of the nose, where it starts and ends), his top lip, his bottom lip, and so on. With those 27 landmarks, we get a lot of detail when it comes to the human face. So, as you can see, the Azure Face API can do a lot of different things with images. I do encourage you to look at the code on GitHub, modify it, try different things, try different photos, and see what comes back and how the Azure APIs can analyse photos with those Computer Vision services.
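For reference, here is a minimal sketch of what a script like that can look like when calling the Face detect REST endpoint directly with the requests library. The endpoint, key, and image URL are placeholders, and the attribute list simply mirrors what the demo asks for (note that newer versions of the service have retired some of these attributes, such as emotion and gender):

```python
# Minimal sketch: call the Face detect REST endpoint directly with requests.
# The endpoint, key, and image URL below are placeholders, not real values.
import json
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"   # placeholder
subscription_key = "<your-cognitive-services-key>"                  # placeholder
image_url = "https://example.com/steve-jobs.jpg"                     # placeholder

detect_url = endpoint + "/face/v1.0/detect"
headers = {"Ocp-Apim-Subscription-Key": subscription_key}
params = {
    "returnFaceAttributes": "age,gender,headPose,smile,facialHair,glasses,emotion",
    "returnFaceLandmarks": "true",   # also ask for the 27 landmark points
}

# Post the image URL in the JSON body; the response is a JSON array of faces.
response = requests.post(detect_url, headers=headers, params=params,
                         json={"url": image_url})
response.raise_for_status()
faces = response.json()

# Save the raw JSON to a file so we can inspect it, as in the demo.
with open("face.json", "w") as f:
    json.dump(faces, f, indent=2)

print(f"{len(faces)} face(s) detected")
```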
So when it comes to the Face API, this really does go quite deep. We can see that Microsoft can identify the locations of faces in a photo pretty straightforwardly. But when you look at the concept of landmarks, there are literally a couple of dozen elements of a face that can be identified, and the Face API can return the locations of up to 27 landmarks for each identified face. So if you have a use for identifying where a person's eyes are in a photo, whether they're looking straight at the camera or looking away, then you can call the Face API to extract the locations of those things from a photo. Finally, we know that the Face API can extract estimates of age and gender, how much the person is smiling, whether they've got facial hair or not, and how their head is posed: are they tilted at an angle, looking straight on, or looking over their shoulder? It also gives an estimate of the emotion. So, let's go back to Visual Studio 2019, create a new console app, and start experimenting with the Face API. The first thing we're going to do is create a new project, the same way we usually do: a console application targeting .NET Core 3.1. This is called the Face API demo. Hit Next, choose .NET Core 3.1, and hit Create. Now, in this particular case, we are going to use the Face API, so we go into Tools, NuGet Package Manager, and look for Microsoft.Azure.CognitiveServices.Vision.Face. You actually have to check "Include prerelease" in order to see this, so I guess the Face API SDK is still in a type of beta.
So I'm going to choose my project, the Face API demo. You can see that all of the available versions are preview packages, so we'll just have to accept that. I'm going to say Install, install the dependencies, and accept the licence. Alright, so we've got the Face API package installed, and we can start using the Microsoft.Azure.CognitiveServices.Vision.Face namespace as well as its Models namespace. Now, we're probably going to want to use some other packages, but we'll get to that when we come to it. I'm going to flip over to the previous demo we did and we're just going to cheat a little bit: copy the subscription key, the endpoint, and the image URL string. We're going to have to replace these with the proper values, but I'm just going to do it the programmer's way and copy from the previous project. Now, the reason why all of these values are going to change is that we're going to have to create a Face resource within the Azure Portal so that we can grab the key and endpoint from there. And we're going to have to choose an image that contains a face. So I've flipped over to the portal. I'm going to create a resource, search for Face here, and create a Face resource. I'm going to put this into my AI-102 demo group, same as the previous one. I'm going to call this "azsjd face"; hopefully no one has taken that name, and we will use the free tier for this.
For our demo purposes, we will say Review and Create and then Create. So now we have a Face API that we can play with; that was pretty quick. Go into Keys and Endpoint here. I'm going to copy the endpoint, paste it over the old one, copy the key, and paste it over the old key. Now we need an image in which to recognise a face, so I'm just going to use this old photo of myself. It was taken about six years ago; it's an extract from a video I did early in my teaching career. This picture is what we're going to be sending to the Face API, and I've uploaded it to my Blob storage account. Now we need to call the Face API to detect the features of this face. You might not know this, but the Face API has what is called a recognition model. Over the years, Microsoft keeps improving the models, and they keep the old models around so that the behaviour of your programme doesn't change if you're relying on a model from 2020. But there is a 2021 model, and it's called recognition model 04. So this recognition model constant will indicate that we want to use the most recent facial recognition model developed by Microsoft. The other thing about this is that, because of the pandemic, this recognition model has been trained with faces that are wearing masks. That's an interesting tidbit. So we'll do a couple of things fairly quickly, because we've done them a few times already: we're going to create the client, which in this case is actually a Face client, by calling an Authenticate method that we haven't created yet. Now, you may recognise this authentication code.
We've done this a few times as well: it simply creates the Face client and passes in the key and the endpoint, the same way we did with the Computer Vision examples earlier in this course. Now, the next thing that we need to do is call the face Detect method. Remember, we did this in Python as well, so some of these examples might be very similar. We're going to take the URL and call a DetectFaceExtract method, so we do need to create that function. We'll go down here and create the stub for it. Every time I do this, it always messes up the curly braces, so I'm going to take care of that. We also need a using for the threading tasks namespace, so I hit Ctrl+period to handle that. Alright, in this particular method we're just going to pass in a single image. There's some sample code here, and we can skip over some of the lines. The result is another collection, so we have to add the collections namespace to our usings; we'll call the variable detectedFaces. Sometimes I just retype the closing curly brace and it reformats, but not at the moment. I'll pass the URL straight through from the input; we don't need to concatenate it the way it was done in the sample, because the URL already links directly to the image with the face that we want to detect.
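The demo here is in C#, but as a rough guide, an equivalent Python sketch of the same flow with the azure-cognitiveservices-vision-face package might look like the following. The key, endpoint, and image URL are placeholders, and the attribute list is just the set the demo asks for:

```python
# Rough Python equivalent of the C# flow: authenticate a FaceClient and detect
# a face with attributes. Key, endpoint, and image URL are placeholders.
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import FaceAttributeType
from msrest.authentication import CognitiveServicesCredentials

key = "<your-face-key>"                                                  # placeholder
endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"   # placeholder
image_url = "https://example.com/author-photo.jpg"                       # placeholder

face_client = FaceClient(endpoint, CognitiveServicesCredentials(key))

detected_faces = face_client.face.detect_with_url(
    url=image_url,
    return_face_attributes=[
        FaceAttributeType.age, FaceAttributeType.gender, FaceAttributeType.emotion,
        FaceAttributeType.facial_hair, FaceAttributeType.glasses,
        FaceAttributeType.hair, FaceAttributeType.head_pose,
        FaceAttributeType.makeup, FaceAttributeType.smile,
    ],
    recognition_model="recognition_04",   # the 2021 model mentioned in the demo
    detection_model="detection_01",       # attributes require this detection model
)

print(f"{len(detected_faces)} face(s) detected in the image")
```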
Now, let's fix a couple of these errors. We are using static strings for the endpoint and subscription key, so let's update those static strings to the expected values. I'm also going to replace the file name with the URL here, and we're going to turn this nullable list into just a regular list. Out of curiosity, let's put a breakpoint here, hit F5, and see if it can detect at least one face in that image. If we step over, we can switch over to the output, and it says one face was detected in the image. So at least we know that, at the most fundamental level, the Face API can detect my face. Now let's extract some of those attributes from the image. Because a particular image can contain multiple faces, the detectedFaces variable is a collection, and you can iterate over it; at a minimum we're going to put a foreach loop in here that runs once for each of the faces detected. In this case, we only detected one face. Now, I could draw the bounding box around where my face is, but I don't particularly need that. We're going to start with something pretty basic: let's see what the Face API thinks my age is. I know exactly what my age was when that image was recorded, but let's see what it says. Now, the emotions are a little bit complicated, in that there are many emotions and I could be showing more than one of them, so we need to find out which emotion is dominant, that is, which one has the largest value.
The way this code does that is by going through each of the eight emotions and keeping whichever one has the largest value, so the single dominant emotion is what gets displayed. Then I can check what Microsoft Azure thinks of my facial hair (hopefully I'm clean-shaven) and whether I'm wearing glasses; I am currently wearing glasses, but in that photo I am not. Let's see what it thinks about my hair colour. If I don't have hair, it comes back as either invisible or bald; otherwise it's going to go through the hair colours and find the one with the largest confidence score. It may come back with multiple hair colours, but we only want the most dominant one. And let's see what it thinks of my head pose: like I said, whether I'm straight on or my head is at a bit of an angle.
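Continuing the hypothetical Python sketch from above (the C# code in the demo follows the same logic), picking the dominant emotion and hair colour is just a matter of taking the entry with the largest score:

```python
# Continuing from the detected_faces list above: read the attributes and pick
# the dominant emotion and hair colour by largest confidence, as in the demo.
for face in detected_faces:
    attrs = face.face_attributes

    print(f"Estimated age: {attrs.age}")

    # Each emotion is a property with a 0..1 score; keep the largest one.
    emotions = attrs.emotion.as_dict()      # e.g. {'happiness': 0.81, 'neutral': 0.18, ...}
    dominant_emotion = max(emotions, key=emotions.get)
    print(f"Dominant emotion: {dominant_emotion} ({emotions[dominant_emotion]:.0%})")

    # Facial hair and glasses.
    print(f"Moustache: {attrs.facial_hair.moustache}, beard: {attrs.facial_hair.beard}")
    print(f"Glasses: {attrs.glasses}")

    # Hair: invisible or bald, otherwise the colour with the highest confidence.
    hair = attrs.hair
    if hair.invisible:
        print("Hair: not visible")
    elif hair.hair_color:
        dominant_colour = max(hair.hair_color, key=lambda c: c.confidence)
        print(f"Hair colour: {dominant_colour.color}")
    else:
        print(f"Hair: bald (confidence {hair.bald:.0%})")

    # Head pose and smile.
    print(f"Head pose (pitch/roll/yaw): {attrs.head_pose.pitch}, "
          f"{attrs.head_pose.roll}, {attrs.head_pose.yaw}")
    print(f"Smile: {attrs.smile:.0%}")
```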
Then we pull out the makeup attributes, and finally: am I smiling? I think I'm in the middle of talking, but it's going to evaluate my smile. So that's a lot of different attributes, and we're not even pulling out every one of the attributes listed here. Let's build this. I'm going to change the breakpoint: we'll move it to the end of the method, and I can press F5. So now, again, we're asking it to evaluate my photo. It's saying one face detected, with an estimated age; my emotion is fully neutral (I guess I'm in the middle of talking); it thinks I have facial hair; there's the gender; no glasses; black hair; this is the position of my head; I am wearing makeup, apparently; and my smile is minimal. So it's not a perfect response. In this particular case, the age is fair; it's a little bit older than I was in this photo, but I guess estimating an age for people of a certain age is a bit hit and miss. There are some apparent errors: it thinks I have facial hair and it thinks I'm wearing makeup, which may just be down to a poor-quality photo in this particular case. But this is basically a fairly detailed analysis of my face. Again, if you've got applications for analysing faces, detecting whether people are smiling or not smiling, and so on, you could implement these types of things in your application, subject to the occasional error.
So the next couple of sections of this course deal with what is called the Custom Vision service. Up till now, we've been using the Computer Vision service, which is a set of prebuilt, predefined models: we provide our own images and have the Microsoft Azure Computer Vision service label them, describe them, and do all those things we've seen so far. The Custom Vision service basically allows us to take our own images and create our own models, very similar to the machine learning services and Machine Learning Studio.
And so we're going to be working with the Custom Vision service. In order to use it, we're going to have to create ourselves a Custom Vision resource. So we're going to go to the portal, log in, create a resource, and search the marketplace for the Custom Vision service; you can see it's a Microsoft product. Click Create. Now, just like with machine learning in Azure, there's the concept of training a model and there's the concept of using a trained model, which is called prediction. We're going to use both: we'll first train a model, and then switch over to the APIs for using the model. Okay? So everything has to go into a resource group; this one is called "custom vision". You're going to give it your own name, of course. The service itself is also going to need a name, and I'm going to use the same name for the service. You can see that it gives you the green checkmark. I'm sure the endpoint has this name in it, so it has to be unique. The location for the training resource and the prediction resource can be the same; I'll say East US for both. Now, there is a free tier.
So in this demo we're going to be using the free tier, which does limit us to two transactions per second. For the prediction resource, there is also a free tier, which limits us to two transactions per second and 10,000 transactions per month. So when we write our code, we're going to have to stay within these limits, and that's pretty much it. We could put some tags on there, but we won't do that. Say Review and Create and click Create. This is going to give us a Custom Vision service. Once it's created, we're going to go there and grab the API endpoint and the API keys so that we can start calling the Custom Vision service. Alright, the deployment is complete, and I can go directly to the resource. You'll notice there's also a button for the prediction resource as well. This resource gives us a quick start. One of the first things we're going to have to do is get the API key and the endpoint, because that's what's used for calling the API with our SDK. Clicking that button shows us the secrets and the endpoint that we're going to end up using in our script. If you ever needed to, say the keys got distributed accidentally, you can always invalidate the old key and create a new one using this Regenerate Key button. Now, unlike what we've seen before, we can also use the Custom Vision service through its own portal: there is a Custom Vision portal. We're going to have to get our API key, but we can go into that portal and work with the Custom Vision service right there, and we'll do that in a second. Of course, we've also got quick start programmes for different languages like Python, Node.js, Java, and Go if we want to use the Custom Vision service in our own development. Anyway, this is our Custom Vision service, and we can always come back and get the keys. In the next video, we're going to explore the Custom Vision portal.
So we do require the key; let's go into the API keys and say Copy to Clipboard. Now we're going to go to the Custom Vision portal. This is a separate website; it is not the same as the Azure Portal. In fact, if I look at the address, I can see it's customvision.ai, so it's got its own URL. I can sign in to this portal and agree to the terms of service. Now, I am already signed in to my Azure account, so it's going to recognise who I am; at least it knows who I am. Okay, I'm going to switch to my main tenant. You probably don't have to do this, but I have multiple tenants here. So now we can start to create a project, right? We're going to say New Project, and you'll see a bunch of things to choose from. We do have to give the project a name.
We can call this "my first project". Now, it's going to need the Custom Vision resource, and we just created one: you can see my F0 free plan. It's the only one that I have; if you have multiple, make sure you choose the correct one. Now, the Custom Vision service supports two project types: one is classification and one is object detection. We'll deal with both of those in this course. Classification involves applying a label, or multiple labels, to an image. So if you train your model on cars, boats, planes, bicycles, et cetera, then when you pass an image to your model, it will return either a single tag or multiple tags to identify what it is. Object detection is very similar, except it can also give you the coordinates of the objects that appear in an image.
So if you have a plane and a bicycle and a boat in your image, then object detection will identify which parts of the image contain which objects. We're going to choose classification to start with, and we want the machine learning algorithm to return just a single tag: we give it an image and it tells us whether it's a plane, a truck, or a car, without trying to give us multiple tags. Okay, now there are some predefined domains, and effectively you assist the machine learning by telling it what kind of thing you're passing in. If you're passing in food or retail images, then those are the relevant domains. The compact domains are lightweight versions of those, intended for export to edge devices. Let's just pick the General domain, because we're going to be training this on some flowers, and flowers don't fit any of the specialised domains; General is the right one. Say Create Project. So the main thing we have to do here to train the model is upload some images and tag them. I'm going to add images and navigate to the directory that contains them. Like I said, this is going to be flower related. The first set of images that I have, and these are in the GitHub directory for this project (I'll show you that in a second), are images of hemlock. Hemlock is a deadly poisonous plant.
So I'm going to add hemlock as a tag; of course, we want our Custom Vision model to recognise this. Then on to the next set of images that I have here: I'm going to say Add Images again, go to the Japanese cherry images, select all of them, and tag them as Japanese cherry. So we have ten images of one class and ten images of another class. Now we're going to click the Train button up here at the top, and we've got the option of just having it train quickly or specifying some more advanced settings, which is more customisable. I'm just going to leave it at quick training and say Train. So now we're in the training phase: the machine learning compute behind the scenes is examining our images, finding the differences between hemlock and Japanese cherry, and basically developing an algorithm to tell them apart. We'll let that run for a moment. Alright, that took a couple of minutes to train. We can see that it's tested itself, and it believes it is able to recognise the difference between hemlock and Japanese cherry with 100% precision and 100% recall, so it's giving itself all perfect scores. Now, if you think this is the type of model that you want to use in some kind of production or test environment, then you need to publish it in order to get the URL that you can use in your code, right? So right now, we're still in training mode: we've uploaded images, we've trained a model, and we're testing the model. If you ever want to use this model, you have to publish it to the prediction resource. We can also perform a quick test on this model using this button up here.
So if we upload a test image, which I'll get in a second: click Browse, go up to the test directory, and pick the test image. We can recognise this as hemlock, and the machine learning model recognises it as a 100% probability of hemlock, right? So if you think that's a good enough model, you can publish it. You choose where to publish it to. Remember, when we created the Custom Vision resource, it asked us whether we wanted a training resource, a prediction resource, or both. We chose both, and that gives us a prediction resource that we can publish to. Maybe we want to give this a better name, like "hemlock or cherry". And now we have a model that we can use. You'll see, under Prediction URL, we now have the instructions for using it: there is one URL to use if you have an image URL and another if you have an image file, and then there are the keys we have to pass. So we can post a URL, or we can upload the image as the body, and you'll see the instructions right here. So in this video, we created our first Custom Vision project; we trained the computer to recognise two different types of flowers, effectively; and now that we've published it, we can use it in our code. We'll see the same thing, how to do this using Python and PyCharm, in the next video.
Alright, so switching back to GitHub, under the AI-102 files repository, in the Custom Vision directory, there's a Python file called "image classification.py". You can see we're using a whole new set of APIs: here we're into the Custom Vision module, and we're getting both training and prediction. So we're importing the training client and the prediction client, and we're also importing the training models. Now we have to bring in our API keys. Remember, we created the Custom Vision service and it gave us some keys; we're going to have to grab those. So we're basically using the training client, and we're creating ourselves a prediction client as well. You start off by creating the project. You'll recognise some of the steps here from the portal: there, we created a project, gave it a name, and it was pretty much empty until we uploaded some images to it. In this case, though, we're going to create the tags first.
So we're creating a hemlock tag and a Japanese cherry tag; we haven't assigned any images to them yet. Then we go through a loop where we append the images with the appropriate tag, the hemlock tag or the cherry tag, for these two sets of images; we know exactly how many images there are in our list here. So we create the images from files, upload them as a batch, wait until that's done, and then check that it was successful. Then we get into the training stage. The train_project method kicks off the training, just like we saw in the portal, and there's a loop that waits until that's completed. Once we are happy with it, we can publish it. We publish it to our prediction resource, and then we can use the prediction resource through the prediction client, passing in the test image and basically asking it to classify it. It's going to come back as one or the other, hemlock or Japanese cherry, and we can then print out the results. The same steps we took in the portal, but in code.
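As a rough guide, a condensed sketch of that flow using the azure-cognitiveservices-vision-customvision package might look like the following. The endpoint, keys, prediction resource ID, project name, and file paths are all placeholders, and the exact client constructor can vary a little between package versions:

```python
# Condensed sketch: create a project, tag and upload images, train, publish,
# then classify a test image. All keys, IDs, and paths are placeholders.
import time
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry)
from msrest.authentication import ApiKeyCredentials

endpoint = "https://<your-custom-vision>.cognitiveservices.azure.com/"  # placeholder
training_key = "<training-key>"                                          # placeholder
prediction_key = "<prediction-key>"                                      # placeholder
prediction_resource_id = "<prediction-resource-arm-id>"                  # placeholder

trainer = CustomVisionTrainingClient(
    endpoint, ApiKeyCredentials(in_headers={"Training-key": training_key}))
predictor = CustomVisionPredictionClient(
    endpoint, ApiKeyCredentials(in_headers={"Prediction-key": prediction_key}))

# Create the project and the two tags.
project = trainer.create_project("Flower Classification Demo")
hemlock_tag = trainer.create_tag(project.id, "Hemlock")
cherry_tag = trainer.create_tag(project.id, "Japanese Cherry")

def upload(folder, tag, count=10):
    """Upload `count` images from a folder in a single batch, tagged with `tag`."""
    entries = []
    for i in range(1, count + 1):
        with open(f"{folder}/image_{i}.jpg", "rb") as f:   # placeholder file names
            entries.append(ImageFileCreateEntry(
                name=f"image_{i}", contents=f.read(), tag_ids=[tag.id]))
    result = trainer.create_images_from_files(
        project.id, ImageFileCreateBatch(images=entries))
    if not result.is_batch_successful:
        raise RuntimeError("Image upload failed")

upload("Hemlock", hemlock_tag)
upload("Japanese_Cherry", cherry_tag)

# Kick off training and poll until the iteration completes.
iteration = trainer.train_project(project.id)
while iteration.status != "Completed":
    time.sleep(5)
    iteration = trainer.get_iteration(project.id, iteration.id)

# Publish the trained iteration to the prediction resource, then classify a test image.
publish_name = "classifyFlowers"
trainer.publish_iteration(project.id, iteration.id, publish_name, prediction_resource_id)

with open("Test/test_image.jpg", "rb") as f:                              # placeholder
    results = predictor.classify_image(project.id, publish_name, f.read())
for prediction in results.predictions:
    print(f"{prediction.tag_name}: {prediction.probability:.2%}")
```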
So we can switch back to PyCharm, load this code in, and execute it. We will see that it gets right into uploading the images, training the model, and running the test at the end, and it came back with the sample image being hemlock with 100% certainty. So we got the exact same result with the SDK as we did in our portal project. In actual fact, if you go back to the Custom Vision portal and refresh it, you will see the project there, because it really is saved as a project in your account, so you can use it in the portal too. Now I'm going to get the prediction URL. You can get that from Settings, where you can see the URL and the key for the prediction resource; or you can go into the project, look at Performance, and under Prediction URL you have the same URL there too, along with the key. So you grab that and you can put it into some code to test it. There's also a curl command: we can go into a Cloud Shell, run this curl command, and execute the test from there. Since this has been published to a prediction resource, if I paste the command in and execute it, we're now using the published model against the test image, and again it came back with a high probability of it being hemlock. So in this video we saw the development of the model, the testing of the model, its publication, and the testing of the published model.
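For reference, the curl command run in Cloud Shell amounts to a simple REST call against the published prediction endpoint. Here is a hedged requests equivalent; the endpoint, project ID, published iteration name, key, and test image URL are all placeholders taken from the Prediction URL page described above:

```python
# Call the published Custom Vision prediction endpoint over REST, roughly what
# the curl command in Cloud Shell does. All values below are placeholders.
import requests

endpoint = "https://<your-custom-vision>.cognitiveservices.azure.com"   # placeholder
project_id = "<project-guid>"                                            # placeholder
published_name = "classifyFlowers"                                       # placeholder
prediction_key = "<prediction-key>"                                      # placeholder

url = (f"{endpoint}/customvision/v3.0/Prediction/{project_id}"
       f"/classify/iterations/{published_name}/url")
headers = {"Prediction-Key": prediction_key, "Content-Type": "application/json"}
body = {"Url": "https://example.com/test-hemlock.jpg"}                   # placeholder

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
for prediction in response.json()["predictions"]:
    print(prediction["tagName"], prediction["probability"])
```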
Study with ExamSnap to prepare for the Microsoft Certified: Azure AI Engineer Associate Practice Test Questions and Answers, Study Guide, and comprehensive Video Training Course. Powered by the popular VCE format, the Microsoft Certified: Azure AI Engineer Associate Certification Exam Dumps are compiled by industry experts to make sure that you get verified answers. Our product team ensures that our exams provide Microsoft Certified: Azure AI Engineer Associate Practice Test Questions & Exam Dumps that are up to date.