sy

Posted on

Adding Map Based Photo Viewer to .Net Aspire Project with Stencil and OpenStreetMap Tile Server

This is the next post on my journey building a personal photo search application using open source technologies as a testbed for .NET Aspire.

Please note that, while these posts cover some how-tos, they are not necessarily intended as tutorials on specific topics; instead, they are an overview of how we can integrate these technologies to build something interesting.

The following areas have been covered so far:

  • .NET Aspire
    • Declaring resource dependencies so that our applications can wait until all referenced services have started successfully and fully initialised.
    • How to use a remote Docker daemon over SSH for the containers we depend on, when we don't want to overload our current development machine.
    • How we can use SSH port forwarding in our App Host.
  • Machine Learning
    • How to use MultiModal ML Models for summarising and extracting information from photos.
    • How we could integrate local models using Ollama or a simple Python project using Hugging Face hosted models.
  • Incorporating reverse geocoding into our solution with .NET Aspire and OpenStreetMap (OSM) Nominatim containers.

In today's post the following topics will be explored:

  • Building a simple web component using Stencil that will:
    • Provide a map Web Component to display the photos stored in our database.
    • Provide a summary Web Component that shows the generated summaries of the selected photo from multiple models, as well as its address, location and the categories predicted by those models.
  • A new .NET Aspire resource for the OpenStreetMap (OSM) Tile Server, so that our web component can render maps using local resources (or a remote Docker daemon).

What does it look like?

So far, we have been able to import our photos into MongoDB, including geolocation and metadata. In addition, reverse geocoding was applied so that the geodata is converted to the nearest address, based on OSM data, using the Nominatim resource.

Once the images were imported, a background worker generated Summary, Category and Content entries using a number of open source multimodal models and stored them in a dictionary against each photo.

The recent changes in the project mean we can visualise these on the map and see what the models generated. We have not yet started looking into model evaluation, so right now we will see a number of accurate results as well as totally made-up information, which makes evaluation a critical part of this project.

Map Component and Map Summary.

If the image is not clear enough, here is the text content:

The image captures a lively scene of a band performing on stage. The stage is bathed in warm yellow lights, creating an atmosphere of excitement and energy. At the heart of the stage, three musicians are immersed in their performance. On the left, a guitarist strums his instrument with passion, his fingers moving over the strings as he plays a melody that fills the room. In the center, a singer belts out a tune, her voice echoing off the walls of the auditorium. To the right, a drummer beats out a rhythm on his drum set, his hands striking the drums in a steady beat. In the background, a large screen displays an image of the band, amplifying their presence and engaging with the audience. The stage is surrounded by a sea of spectators, some of whom are captured in the foreground of the image, their faces turned towards the performers. The perspective of the photo suggests it was taken from the viewpoint of someone standing close to the stage, immersing themselves in the concert experience. The image is a snapshot of a moment filled with music and energy, encapsulating the spirit of live performance.

Photo Categories

  • Concert, music, performance
  • Audience, stage, band
  • Instruments, lighting, yellow lights.

Photo Contents

  • Guitarist, singer, drummer
  • Guitar, drum set, drums
  • Screen, audience, stage lights
  • Yellow light bulbs, spectators.

Web Components and Stencil

Web components are a set of standardised technologies that allow developers to create reusable custom elements with encapsulated functionality.

The key parts of Web Components can be summarised as the following:

  • Custom Elements: Define new HTML elements.
  • Shadow DOM: Encapsulates styles and markup to prevent them from affecting the rest of the page.
  • HTML Templates: Define chunks of markup that can be reused without rendering immediately.
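To make these three pieces concrete, here is a minimal sketch of a vanilla custom element, separate from this project's actual components; the `photo-card` element name and its markup are invented for the example:

```typescript
// Pure template builder, separated out so the markup logic is easy to test.
function photoCardMarkup(caption: string): string {
  return `<figure><slot name="photo"></slot><figcaption>${caption}</figcaption></figure>`;
}

// Browser-only registration: Custom Elements + Shadow DOM working together.
// The guard lets this file load in non-browser environments too.
if (typeof HTMLElement !== "undefined" && typeof customElements !== "undefined") {
  class PhotoCard extends HTMLElement {
    connectedCallback() {
      // Shadow DOM keeps this markup and any styles scoped to the element.
      const root = this.attachShadow({ mode: "open" });
      root.innerHTML = photoCardMarkup(this.getAttribute("caption") ?? "");
    }
  }
  customElements.define("photo-card", PhotoCard);
}
```

In a page, `<photo-card caption="On stage"><img slot="photo" src="..."></photo-card>` would then render the slotted image with an encapsulated caption.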

Why Use Web Components?

Web components offer lightweight reusability, allowing developers to create components that can be used across multiple projects without being tied to specific frameworks. They provide encapsulation by isolating styles and scripts, which helps prevent conflicts and bugs in the global scope.

Additionally, web components are highly interoperable, which means they can be integrated in applications built with frameworks such as React, Vue.js, Angular as well as vanilla HTML / JavaScript, making them a future-proof solution for developers.

Web Components vs. Frameworks

Unlike traditional frameworks, web components are not dependent on any specific framework since they are built on browser-native APIs. This foundation on web standards guarantees their longevity and stability, ensuring long-term support without the need to adapt to changes in framework-specific updates. Web components are also known for their performance benefits, as they can be lightweight and optimised without the runtime overhead that frameworks often introduce.

For developers already familiar with HTML, CSS, and JavaScript, web components offer a more straightforward learning curve compared to adopting an entire new framework.

Stencil

Stencil is a web component compiler that simplifies the process of building scalable, performant, and framework-agnostic web components. It is one of many options for simplifying the Web Component build process.

The web components in this project are built using Stencil.

Adding a Mapping UI to our Aspire Project as a Web Component

.NET Aspire already has support for NPM applications, so adding a Stencil starter application is as simple as the following:

  • Create a hello world application as outlined in Stencil documentation in the same repository as our Aspire application.
  • Restore the packages, add your components
  • Then use AddNpmApp to register the application in our Aspire App Host.

builder.AddNpmApp("stencil", "../photosearch-frontend")
    .WithReference(apiService)
    .WithReference(osmTileService)
    .WithHttpEndpoint(port: portMappings["FEPort"].PublicPort, 
        targetPort: portMappings["FEPort"].PrivatePort, 
        env: "PORT", 
        isProxied:false)
    .PublishAsDockerFile();


This even supports auto-reloading, so we can keep modifying the UI source code and see the changes reflected quickly. I have been using Rider for the Aspire project and VS Code for the Stencil and Python code.

The map component is simple and responsible for the following:

  • Calling our .NET API endpoint to get the photos.
  • Rendering the photos on the map.
  • Raising an event when the user selects and views a photo on the map, so that the summary component can display the details and summaries of the selected photo.

The component encapsulates an open source map rendering library called "MapLibre GL JS" and uses our OSM Map Tile Server container to render the tiles. As we can see below, service discovery is handled by Aspire, so we do not have to worry about manually updating URLs.


... imports omitted 

@Component({ tag: 'map-component', styleUrl: 'map-component.css', shadow: true })
export class MapComponent {

  mapElement: HTMLElement;
  photoSummaries: Array<PhotoSummary> = [];
  map: Map | undefined;

  markers: {
    [name: string]: Marker
  } = {};

  loadPhotos = async () => {

    const response = await fetch(Env.API_BASE_URL + "/photos");
    this.photoSummaries = await response.json();
    this.photoSummaries.forEach((photo) => {
      const marker = new Marker({
        draggable: false
      }).setLngLat([photo.Longitude, photo.Latitude]);

      const imgUrl = `${Env.API_BASE_URL}/image/${photo.Id}/1280/1280`;
      marker.setPopup(new Popup({ className: "apple-popup" })
        .setHTML(`<img src='${imgUrl}' data-id="${photo.Id}" loading="lazy">`));
      marker.getPopup().setMaxWidth("300px");

      let markerElem = marker.getElement();
      markerElem.addEventListener('click', () => {
        PubSub.publish(EventNames.PhotoSelected, photo);
      });
      this.markers[photo.Id] = marker;
    });
  };

  componentWillLoad = async () => {
    await this.loadPhotos();
  }

  disconnectedCallback = () => {
    // Stencil's lifecycle hook is disconnectedCallback; release map resources
    // when the element is removed from the DOM.
    this.markers = {};
    this.map = undefined;
  }

  componentDidLoad = async () => {
    const style: StyleSpecification = {
      version: 8,
      sources: {
        osm: {
          type: 'raster',
          tiles: [`${Env.MAP_TILE_SERVER}/tile/{z}/{x}/{y}.png`],
          tileSize: 256,
          attribution: '.....'
        }
      },
      layers: [{
        id: 'osm',
        type: 'raster',
        source: 'osm',
      }],
    };

    this.map = new Map({
      container: this.mapElement,
      style: style,
      center: [
        this.photoSummaries[0].Longitude, this.photoSummaries[0].Latitude
      ],
      zoom: 14
    });
    this.photoSummaries.forEach((photo) => {
      this.markers[photo.Id].addTo(this.map);
    });
  }

  render() {
    return <div id="map" ref={(el) => this.mapElement = el as HTMLElement}></div>
  }
}

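As an aside, the `{z}/{x}/{y}` placeholders in the tile URL above follow the standard OSM "slippy map" tile naming scheme. This is not project code, but a sketch of the longitude/latitude to tile index conversion (Web Mercator) can help when checking which tiles the map requests:

```typescript
// Convert a longitude/latitude pair to slippy-map tile indices at a zoom level.
// x grows eastward from -180°, y grows southward from the top of the
// Web Mercator projection (roughly 85.05° N).
function lonLatToTile(lon: number, lat: number, zoom: number): { x: number; y: number } {
  const n = 2 ** zoom; // number of tiles along each axis at this zoom
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n,
  );
  return { x, y };
}
```

At zoom 0 the whole world is a single tile; each zoom level doubles the tile count along each axis, which is why city-scale views like ours sit around zoom 14.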

Service discovery for the Stencil Application

Given Aspire injects the connection strings of the services we depend on, the Stencil configuration can read these variables and inject them into components as follows:

  ...,
  env: {
    API_BASE_URL: process.env.services__apiservice__http__0,
    MAP_TILE_SERVER: process.env.ConnectionStrings__OSMMapTileServer
  }
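The variable names above come from this project's App Host registration. The resolution logic can also be factored into a small helper with a fallback for running the frontend outside Aspire; this is only a sketch, and the fallback URLs are hypothetical:

```typescript
// Resolve a URL from the first Aspire-injected environment variable that is
// set, falling back to a standalone-development default (hypothetical URLs).
function fromEnv(keys: string[], fallback: string): string {
  for (const key of keys) {
    const value = process.env[key];
    if (value) return value;
  }
  return fallback;
}

// Same env var names as in the stencil.config snippet above.
const apiBaseUrl = fromEnv(["services__apiservice__http__0"], "http://localhost:5000");
const mapTileServer = fromEnv(["ConnectionStrings__OSMMapTileServer"], "http://localhost:8080");
```

The fallback keeps `npm start` usable on its own, while Aspire-injected values win whenever the app is launched through the App Host.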

How to use the component?

Once the components are ready, we can use them just like any other HTML elements:

        <div class="flex mb-4 map-container" >
          <div class="w-1/2 h-120" >
            <map-component></map-component>
          </div>
          <div class="w-1/2   h-120">
            <photo-summary-view></photo-summary-view>
          </div>
        </div>
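The wiring between these two components relies on the PhotoSelected event published on marker clicks. Here is a sketch of that event flow using a minimal in-memory pub/sub as a stand-in for the library the project uses; the reduced PhotoSummary shape and the sample data are illustrative:

```typescript
// Minimal topic-based pub/sub, standing in for the PubSub library used in
// the map component's click handler.
class TinyPubSub {
  private handlers = new Map<string, Array<(data: any) => void>>();

  subscribe<T>(topic: string, handler: (data: T) => void): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler);
    this.handlers.set(topic, list);
  }

  publish<T>(topic: string, data: T): void {
    for (const handler of this.handlers.get(topic) ?? []) handler(data);
  }
}

// Reduced to the fields used here; the real model carries more metadata.
interface PhotoSummary { Id: string; Summary: string; }

const bus = new TinyPubSub();
const state: { selected: PhotoSummary | null } = { selected: null };

// photo-summary-view side: update state when a photo is selected on the map.
bus.subscribe<PhotoSummary>("PhotoSelected", (photo) => { state.selected = photo; });

// map-component side: a marker click publishes the selected photo.
bus.publish<PhotoSummary>("PhotoSelected", { Id: "p1", Summary: "A band on stage" });
```

Decoupling the two components through a topic means the summary view never needs a direct reference to the map component, so either can be dropped onto a page independently.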

Open Street Map (OSM) Tile Server .NET Aspire Resource

We have a mapping web component, but without a map tile server we will not be able to render the maps. Although there are some free tile servers available for demos, it is better not to overload those resources and to consume our own computing power instead.

The OSM Tile Server container makes this a simple job, and once we integrate it with Aspire, we have a tile server running on demand without a complicated setup.

Adjustments to the Tile Server Container

As per the documentation, we would need to run the container once to download the maps into a volume, and then run the container again with the same volume to host the map. The changes made in this post allow both actions to be performed at startup.

The image built in this project will execute the following startup script:

#!/bin/bash

cd /
./run.sh import
./run.sh run
Enter fullscreen mode Exit fullscreen mode

Besides the above, there is no change to the original OSM Tile Server image.

Creating an OSM Tile Server Resource

The process for defining the resource is similar to the Nominatim resource covered in one of the previous posts, so it will not be repeated here, but the source code is available in the repository as always.

The sample images are all from London, so we are using only the London map extract to keep the download small and allow the map database to be set up quickly on the container's first start.

Where We are and What Next

So far we have a means of importing photos and then processing them using various multimodal machine learning models. Generative models will usually produce what is requested, but the results are not always what we want. Before deciding how to use these models, it is important to have an evaluation approach.

Now that we have a very basic UI, we can focus on evaluating these models to find the best prompt/model combination for our search application.

This means we will need to version our results to include the prompts and models used, and then work out some metrics that will help us choose the combinations offering the highest success rate.

The initial method will be comparing results against those generated by models such as GPT-4 to see how our small, local models measure up.

This will be the focus of the next post.

Links

Web Components

Web Component Tooling

Open Street Maps

Stencil
