Are you familiar with the concept of OCR? Wouldn’t it be nice to be able to easily convert images of typed, handwritten, or printed text into machine-encoded text? Take a look at the two images below: with just a few lines of code we will make our Windows, Mac, Android, or iOS application able to “read” those texts! Whether from a scanned document, a photo of a document, or the text on signs and billboards in a landscape photo, this process of extracting text from images is called Optical Character Recognition (OCR).
We can easily use Google OCR machine-learning AI in our Delphi applications
The “Text Detection” option is part of the Vision API and can be used to detect and extract information about multiple pieces of text in an image. For each text detected, Google returns a list of identified words with their bounding boxes and textAnnotations, as well as the structural hierarchy of the OCR-detected text.
Google Cloud’s Vision API offers powerful pre-trained machine-learning models that you can easily use in your desktop and mobile applications through REST or RPC API calls. Let’s say you want your application to detect objects, locations, activities, animal species, or products; or maybe you want to detect not only faces but also their emotions; or you may need to read printed or handwritten text. All this and much more can be done for free (up to the first 1000 units/month per feature) or at very affordable prices that scale with your usage, with no upfront commitments.
How do I get my RAD Studio Delphi applications to detect text in images with an API?
We can use RAD Studio and Delphi to easily set up the REST client library to take advantage of Google Cloud’s Vision API and empower our desktop and mobile applications. If the request is successful, the server returns a 200 OK HTTP status code and the response in JSON format.
Our RAD Studio and Delphi applications will be able to call the API and perform detection either on a local image file, by sending the contents of the image file as a base64-encoded string in the body of the request, or on an image file located in Google Cloud Storage or on the web, without the need to send the contents of the image file in the body of the request.
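To make the difference between the two request forms concrete, here is a minimal sketch in Python. The field names (`imageUri`, `content`) follow the Vision API documentation; the image URL and the bytes being encoded are placeholders, not values from Google’s docs:

```python
import base64

# Form 1: remote image -- the API fetches the image itself from
# the URL given under "source". The URL here is a placeholder.
remote_request = {
    "requests": [{
        "features": [{"type": "TEXT_DETECTION"}],
        "image": {"source": {"imageUri": "https://example.com/sign.jpg"}},
    }]
}

# Form 2: local image -- the file's bytes travel base64-encoded
# inside the request body, under "content" instead of "source".
def local_image_request(image_bytes: bytes) -> dict:
    return {
        "requests": [{
            "features": [{"type": "TEXT_DETECTION"}],
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
        }]
    }
```

Either shape can be posted to the same `images:annotate` endpoint; only the `image` field changes.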
How do I set up the Google Cloud Vision Text Detection API?
Make sure you refer to the Google Cloud Vision API documentation in the Text Detection section (https://cloud.google.com/vision/docs/ocr) and also Document Text Detection, which is optimized for dense text and handwriting (https://cloud.google.com/vision/docs/pdf), but in general terms this is what you need to do on Google’s side:
- Visit https://cloud.google.com/vision and login with your Gmail account
- Create or select a Google Cloud Platform (GCP) project
- Enable the Vision API for that project
- Enable the Billing for that project
- Create an API key credential
How do I call Google Vision API Text Detection endpoint?
Now all we need to do is call the API URL via an HTTP POST, passing a JSON request body with type TEXT_DETECTION and a source pointing to the image we want to analyze. You can do that using the REST client libraries available in several programming languages; a quick-start guide is available in Google’s documentation (https://cloud.google.com/vision/docs/quickstart-client-libraries).
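As a quick illustration outside of Delphi, the call can be sketched in a few lines of Python. The endpoint is the documented `images:annotate` URL; passing the API key as a `key` query parameter matches how the Delphi demo later in this post sends it. The `requests` library call is left commented out so the sketch stands alone without a network round trip:

```python
# Endpoint from the Vision API documentation.
API_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def annotate_url(api_key: str) -> str:
    # The API key credential travels as a query parameter.
    return f"{API_ENDPOINT}?key={api_key}"

# With the `requests` package installed, the POST itself would be:
#   body = {"requests": [{"features": [{"type": "TEXT_DETECTION"}],
#           "image": {"source": {"imageUri": "https://example.com/sign.jpg"}}}]}
#   resp = requests.post(annotate_url("YOUR_API_KEY"), json=body)
#   resp.status_code  # 200 on success
```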
Actually, at the bottom of the Google Cloud Vision documentation guide (https://cloud.google.com/vision/docs/ocr) there is a “Try this API” option that allows you to post the JSON request body shown below and get the JSON response that follows.
POST https://vision.googleapis.com/v1/images:annotate

{
  "requests": [
    {
      "features": [
        { "type": "TEXT_DETECTION" }
      ],
      "image": {
        "source": {
          "imageUri": "https://images.pexels.com/photos/2611710/pexels-photo-2611710.jpeg?cs=srgb&dl=pexels-javier-aguilera-2611710.jpg&fm=jpg"
        }
      }
    }
  ]
}
What does the Google Vision API Text Detection endpoint return to my application?
After the call, the main result will be a list with a description field containing the extracted text and a bounding polygon showing where in the image the text was found. You can use the polygon information to draw a rectangle on top of the image and highlight the text. Below is the result we got when using the code above with the link to the image containing the Mercedes car plate number. Now go ahead and try it with the image containing the handwritten note available at this link.
{
  "responses": [
    {
      "textAnnotations": [
        {
          "locale": "uz",
          "description": "8272 HYXn",
          "boundingPoly": {
            "vertices": [
              { "x": 3144, "y": 2314 },
              { "x": 3954, "y": 2314 },
              { "x": 3954, "y": 2511 },
              { "x": 3144, "y": 2511 }
            ]
          }
        },
        [...]
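A small Python sketch shows how an application might consume that response, pulling out the recognized text and collapsing the polygon into a rectangle suitable for drawing a highlight. The `response` dictionary mirrors the JSON shape above with the sample values; `bounding_rect` is a hypothetical helper, not part of any Google library:

```python
# Response mirroring the sample JSON above.
response = {
    "responses": [{
        "textAnnotations": [{
            "locale": "uz",
            "description": "8272 HYXn",
            "boundingPoly": {"vertices": [
                {"x": 3144, "y": 2314}, {"x": 3954, "y": 2314},
                {"x": 3954, "y": 2511}, {"x": 3144, "y": 2511},
            ]},
        }]
    }]
}

def bounding_rect(annotation: dict) -> tuple:
    """Collapse the bounding polygon into (left, top, right, bottom).

    Vertices at the image edge may omit "x" or "y", so default to 0.
    """
    xs = [v.get("x", 0) for v in annotation["boundingPoly"]["vertices"]]
    ys = [v.get("y", 0) for v in annotation["boundingPoly"]["vertices"]]
    return min(xs), min(ys), max(xs), max(ys)

first = response["responses"][0]["textAnnotations"][0]
print(first["description"], bounding_rect(first))
# 8272 HYXn (3144, 2314, 3954, 2511)
```

The rectangle returned here is what you would hand to your drawing code to frame the detected plate number on top of the image.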
How do I connect my applications to the Google Cloud Vision Text Detection API?
Once you have followed the basic steps to set up the Text Detection API on Google’s side, go to the Console and, in the Credentials menu, click the Create Credentials button and add an API key. Copy this key, as we will need it later.
RAD Studio Delphi and C++Builder make it very easy to connect to APIs, as you can use the REST Debugger to automatically create the REST components and paste them into your app.
In Delphi, the whole job is done with three components that make the API call: TRESTClient, TRESTRequest, and TRESTResponse. Once you have connected successfully in the REST Debugger and copied and pasted the components, you will notice that the API URL is set in the BaseURL property of TRESTClient. On the TRESTRequest component you will see that the request method is set to rmPOST, the ContentType is set to ctAPPLICATION_JSON, and that it contains one request body for the POST.
Run RAD Studio Delphi and, on the main menu, click Tools > REST Debugger. Configure the REST Debugger as follows: set the content-type to application/json, then add the POST URL, the JSON request body, and the API key you created. Once you click the Send Request button you should see the JSON response, just like the one demonstrated above.
Check the video below for more details on how to configure and test Google Cloud Vision API text detection and other features using the REST Debugger.
How do I build a Windows desktop or Android/iOS mobile device application using the Google Cloud Vision API Text Detection?
Now that you have successfully configured and tested your API calls in the REST Debugger, just click the Copy Components button, go back to Delphi, create a new application project, and paste the components onto your main form.
Some very simple code is added to a TButton OnClick event to make sure everything is configured correctly and voilà! In five minutes we have made our very first call to the Google Vision API and can receive a JSON response for whatever image we want to perform Text Detection on. Note that on the TRESTResponse component the RootElement property is set to ‘responses[0].textAnnotations’. This means that the ‘textAnnotations’ element of the JSON is specifically selected to be pulled into the in-memory table (TFDMemTable).
procedure TForm1.Button1Click(Sender: TObject);
var
  APIkey: String;
begin
  FDMemTable1.Active := False;

  RESTClient1.ResetToDefaults;
  RESTClient1.Accept := 'application/json, text/plain; q=0.9, text/html;q=0.8,';
  RESTClient1.AcceptCharset := 'UTF-8, *;q=0.8';
  RESTClient1.BaseURL := 'https://vision.googleapis.com';
  RESTClient1.HandleRedirects := True;
  RESTClient1.RaiseExceptionOn500 := False;

  // APIkey := 'put here your Google API key';
  RESTRequest1.Resource := Format('v1/images:annotate?key=%s', [APIkey]);
  RESTRequest1.Client := RESTClient1;
  RESTRequest1.Response := RESTResponse1;
  RESTRequest1.SynchronizedEvents := False;

  RESTResponse1.ContentType := 'application/json';

  // Build the JSON request body from the two edit boxes:
  // Edit2 holds maxResults, Edit1 holds the image URL.
  RESTRequest1.Params[0].Value := Format(
    '{"requests": [{"features": [{"maxResults": %s,"type": "TEXT_DETECTION"}],"image": {'+
    '"source": {"imageUri": "%s"}}}]}', [Edit2.Text, Edit1.Text]);

  // Pull only the textAnnotations array into the dataset.
  RESTResponse1.RootElement := 'responses[0].textAnnotations';

  RESTRequest1.Execute;

  Memo2.Lines := RESTResponse1.Headers;
  Memo3.Lines.Text := RESTResponse1.Content;
end;
The sample application features a TEdit as a place to paste the link to the image you want to analyze and another TEdit for the maxResults parameter, a TMemo to display the JSON results of the REST API call, and a TStringGrid component to navigate and display the data in tabular form, demonstrating how easily the JSON response integrates with a TFDMemTable component. When the button is clicked the image is analyzed and the application presents the response JSON both as text and as data in a grid. Now you have everything you need to process the response data in whatever way best suits your needs!
In this blog post, we learned how to sign up for the Google Cloud Vision API in order to perform Text Detection on images. We’ve seen how to use the RAD Studio REST Debugger to connect to the endpoint and copy the components into a real application. Finally, we’ve seen how simple and quick it is to use RAD Studio Delphi to create a real Windows (and Linux, macOS, Android, and iOS) application that connects to the Google Cloud Vision API, performs Text Detection image analysis, and returns a memory dataset ready for iteration!
Head over to the following link to download the example source code for the desktop and mobile Google Cloud Vision API Text Detection REST demo: https://github.com/checkdigits/google_text_detection_api_delphi_example
Looking for software to help you deploy cross-platform machine-learning OCR technology? Try the Cross-Platform App Builder, which can help you create and design apps in Delphi or C++ environments.