I wanted to make this skill to give users the ability to look deeper into weather information for things like the moon phase, humidity, and wind speed in addition to the more basic things like temperature, cloud cover, and precipitation. There are a lot of customization pieces in the skill as well that make it easy for the user to fit the skill to their own use case.

What it does

Simply put, it tells the weather. However, it also shows the weather through visuals keyed to the current conditions. Is it raining outside? Then you're going to see rain on the screen. If it's snowing, you'll see a picture of snow, and so on for other weather conditions. It also lets the user easily update their current location with a city, a city and state, a zip code, or other options.
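The condition-to-visual pairing could be as simple as a lookup table. This is a minimal sketch: the condition names and image file names are placeholders I've assumed for illustration, not the skill's actual assets.

```javascript
// Sketch: map the current condition reported by the weather API to a
// background image for the APL document. Condition keys and file names
// are assumptions, not the skill's real asset paths.
const CONDITION_BACKGROUNDS = {
  rain: 'rain.jpg',
  snow: 'snow.jpg',
  'clear-day': 'sunny.jpg',
  'clear-night': 'night.jpg',
  cloudy: 'clouds.jpg',
};

function backgroundFor(condition) {
  // Fall back to a generic image when the condition is unrecognized.
  return CONDITION_BACKGROUNDS[condition] || 'default.jpg';
}
```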

Sometimes, though, you just want to look up information without saving an address. Let's say you're going out of town in a couple of days and want to know the weather at your destination: you can include that location in your request without it being saved to your account, so your regular spot stays put.

You can also save a work location if you commute. Ask what the weather will be like at work for quick updates.

Another included feature is the ability to easily change the voice of the skill to any of the Polly voices. You can change the voice at any time and all subsequent requests will be handled by the new voice.
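Alexa supports Amazon Polly voices through SSML's `<voice>` tag, so voice switching can come down to wrapping the response text before speaking it. A minimal sketch, assuming the chosen voice name has already been stored for the user (persistence not shown):

```javascript
// Sketch: wrap the outgoing speech in an SSML <voice> tag so Alexa renders it
// with the user's chosen Polly voice (e.g. "Matthew", "Joanna"). How the
// chosen voice is persisted between requests is assumed, not shown here.
function withVoice(text, voiceName) {
  return `<voice name="${voiceName}">${text}</voice>`;
}
```

Every subsequent response would pass through this wrapper, which is how "all subsequent requests are handled by the new voice" can work.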

Celsius and Fahrenheit are determined by the user's device settings via a call to Amazon's settings API. Changing the setting in your Alexa app changes how the skill reports temperatures.
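The Alexa Settings API reports the device's temperature-unit preference (the ASK SDK exposes this through a service client). The API call itself is omitted in this sketch; it only shows the formatting step, with the `"CELSIUS"`/`"FAHRENHEIT"` strings assumed to match what the settings lookup returns.

```javascript
// Sketch: format a temperature according to the unit preference read from
// the device settings. The preference strings are assumptions based on the
// Alexa Settings API's temperature-unit values.
function formatTemperature(celsius, unitPreference) {
  if (unitPreference === 'FAHRENHEIT') {
    return `${Math.round(celsius * 9 / 5 + 32)}°F`;
  }
  return `${Math.round(celsius)}°C`;
}
```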

Things you can look up:

  • General Weather - A daily summary of the weather.

  • Rain - Includes all precipitation types.

  • Wind - Speed and direction.

  • Temperature

  • Apparent (Feels Like) Temperature

  • Barometric Pressure

  • Humidity

  • Dew Point

  • Moon Phase

  • Sunrise and Sunset Times

  • Seven Day Forecast

  • Ozone Levels

  • UV Index

  • Visibility Distance

  • Closest Storm

How I built it

I used a lot of different APL designs to make the visual information as simple and straightforward as possible. I really wanted to use the pager to deliver information across multiple pages, and I also wanted to include a sort of main menu at the end of every visual request that gives the user six different random options they can tap to look up something new.

The skill uses a pager as its primary display feature, with large fonts for easy viewing. The first page shows the high and low temperatures for the day; the second includes information about expected rainfall; and the third is an at-a-glance representation of the next three days. When you ask to change the skill's voice, it displays a few tappable options to choose from on an intermediate page.
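The three-page layout above could be assembled as an APL document built around a `Pager` component. This is a bare-bones sketch: the component structure follows APL's Pager, but the text content, font size, and APL version are placeholders rather than the skill's real documents.

```javascript
// Sketch: build a three-page APL Pager document for the daily summary.
// Real documents would use styled layouts and background images; this
// shows only the Pager structure described in the writeup.
function buildWeatherPager(high, low, rainChance, threeDaySummary) {
  return {
    type: 'APL',
    version: '1.4',
    mainTemplate: {
      items: [{
        type: 'Pager',
        width: '100%',
        height: '100%',
        items: [
          { type: 'Text', fontSize: '72dp', text: `High ${high}° / Low ${low}°` },
          { type: 'Text', fontSize: '72dp', text: `Chance of rain: ${rainChance}%` },
          { type: 'Text', fontSize: '72dp', text: threeDaySummary },
        ],
      }],
    },
  };
}
```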

Challenges I ran into

Implementing the touch interface was challenging, but also very rewarding. Building APL into my skill was harder than anything I've done before, but designing the visuals and linking them to what is being said was a lot of fun. With this skill I really tried to envision use cases: people often want weather information delivered quickly and concisely, so I tried to make the experience simultaneously shallow and deep. Users can dig into anything as specific as they want, but a simple two-word command is enough to get what they need.

The largest single challenge was implementing touch wrappers and pagers simultaneously while displaying a large amount of data on screen without running out of space to deliver the wrapper. To accomplish this, I test for the device type in my code and deliver fully customized APL documents for each situation to cut down on size; duplicating the touch wrappers generically wouldn't fit within the space constraints.
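The device test can key off the viewport that Alexa includes in the request envelope. A hedged sketch, where the 960px cutoff and the document names are my own assumptions for illustration:

```javascript
// Sketch: choose between a small- and large-screen APL document by
// inspecting the viewport in the request envelope. The pixel cutoff and
// the file names are illustrative assumptions.
function pickDocument(requestEnvelope) {
  const viewport = requestEnvelope.context && requestEnvelope.context.Viewport;
  if (!viewport) {
    return null; // headless device: skip APL entirely
  }
  return viewport.pixelWidth >= 960 ? 'large-hub.json' : 'small-hub.json';
}
```

Sending only the document a given device needs keeps each response payload small, which is what makes room for the touch wrappers.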

Accomplishments that I'm proud of

I was afraid for a long time that the APL touch wrappers and sequences would get the better of me. APL is incredibly powerful and versatile, but sometimes it's nice to start with something like the display templates to just say "here is the information, you format it for me." However, once I broke through the initial barrier of understanding the underlying features, it was incredibly rewarding (emotionally, too!).

One of the coolest things I was able to do was create a lot of different pages and then interweave them, utilizing the karaoke, pager, and touch wrappers to make a flow that gives information at a glance and tappable menus for easy skill navigation. Being able to do them all simultaneously was easily my proudest accomplishment.

What I learned

I learned a lot with this skill, especially about creating a voice-first interaction where the visual aspects of the APL elevate it to the next level. I learned how to manipulate the APL without overloading the system, to keep things snappy: I use CloudFront for image delivery and only send the APL content each request needs, splitting large and small devices in the code to keep the documents as small as possible.

What's next for Silver Linings

I plan to continue improving the skill and adding new features as they are requested. I am constantly making updates; in fact, I just recently rewrote the location lookup system from scratch to be easier and more accurate, making the skill far better at recognizing what the user wants.

I am also in the process of changing my image delivery system to use the Serverless Image Handler through CloudFront, S3, and API Gateway to produce perfectly sized images for each device so that the APL documents load faster. I have already cut image sizes by over 80% through optimization, but this is the next logical step: enhancing the visuals and saving loading time by sending images sized exactly for the device.
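The AWS Serverless Image Handler accepts requests where the source object and resize edits are described as base64-encoded JSON in the URL path. A sketch of building such a URL, with the bucket and domain names as placeholders (the exact request shape is my reading of that solution's API, so treat it as an assumption):

```javascript
// Sketch: build a Serverless Image Handler URL that resizes an S3 image
// on the fly. Bucket name and distribution domain are placeholders.
function resizedImageUrl(distributionDomain, key, width, height) {
  const request = {
    bucket: 'my-skill-images', // placeholder bucket name
    key,
    edits: { resize: { width, height, fit: 'cover' } },
  };
  const encoded = Buffer.from(JSON.stringify(request)).toString('base64');
  return `https://${distributionDomain}/${encoded}`;
}
```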

posted an update

Added some enhancements that change how requests are submitted to the skill. There are a ton of different requests the skill can handle, and the most recent update grew the interaction model from about 40,000 trigger phrases to about six times that many.

Now, I'm just curious how many more slip through the cracks...

The only major known issue I haven't nailed down is duration, but I'll keep working on it. Right now, duration requests trigger as time requests, and I'm not sure how to convince the interaction model which one should be sent.
