When working remotely, I often struggle to communicate with words alone. It's easier for me to express what I need through images or videos, and having a tool for that helps the team reach an understanding faster.

What it does

This app has three different methods to choose from:

Show the Website

This method generates a screenshot of any public website in different viewports such as desktop, tablet, and mobile, with an optional full-page capture. It's especially useful for designers and product managers who want to mark up a website for design iterations or copy changes before sending it off for development. You can draw your markup directly on the screenshot and submit it in your description for your team!

Show the Idea

Have a prototype? Or an idea? How do you show it?

  • URL lets you attach a design mockup to a ticket: simply paste the file's URL and a thumbnail is generated in the ticket. Supported cloud services include Figma, XD, InVision, Google Drive, and more.

  • Upload is for sharing any file within Monday's file size limit. If the file is an image, you can still draw and write on it, and a new image is generated for you. Upload isn't limited to designers: you can upload almost anything Monday allows, and if you need copy changes, you can mark those up as well.

  • Capture snaps a picture from your screen or your camera. The screen button gives you access to your monitors, applications, and browser tabs, and you can press capture repeatedly until you get the right shot before pressing save to move forward.
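
The screen-capture flow can be sketched roughly like this (a hedged sketch, not the app's exact code; the function names and size limits are my own). It asks for the screen via getDisplayMedia, draws one video frame onto a canvas, and returns a data URL; the fitWithin helper keeps oversized frames within a maximum size:

```javascript
// Scale (w, h) down to fit within (maxW, maxH), preserving aspect ratio.
function fitWithin(w, h, maxW, maxH) {
  const scale = Math.min(1, maxW / w, maxH / h);
  return { width: Math.round(w * scale), height: Math.round(h * scale) };
}

// Browser-only sketch: grab one frame from a screen-share stream.
async function captureFrame(maxW = 1920, maxH = 1080) {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const video = document.createElement('video');
  video.srcObject = stream;
  await video.play();

  const { width, height } = fitWithin(video.videoWidth, video.videoHeight, maxW, maxH);
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  canvas.getContext('2d').drawImage(video, 0, 0, width, height);

  stream.getTracks().forEach((t) => t.stop()); // release the screen share
  return canvas.toDataURL('image/png');        // Base64 data URL of the shot
}
```

Pressing capture repeatedly just means calling something like `captureFrame()` again and replacing the previous data URL until the user saves.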

Show the Video

Sometimes an image doesn't do the job and you need to send a video recording. There are two buttons to choose from: Record Self and Record Screen.

  • Record Self accesses your camera and microphone and starts recording instantly at the press of the button. To end the recording, press the red stop button; you can then preview your recording.

  • Record Screen lets you record your monitor, an application window, or a browser tab, along with your microphone, which is perfect for bug reporting. For anyone familiar with the QA and UAT process, this is vital.

  • Mobile Options offers front and environment recording: front is the front-facing camera and environment is the rear-facing camera.

  • Download Video from the video's controls at the bottom right after playing back your video. If you don't see a download option in the controls, you may not have played the whole video yet.
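
Both record buttons can share the same MediaRecorder plumbing; only the stream source differs (getUserMedia for Record Self, getDisplayMedia for Record Screen). A hedged sketch, with names of my own choosing:

```javascript
// Pick the first MIME type a predicate accepts (e.g. MediaRecorder support).
function firstSupported(candidates, isSupported) {
  return candidates.find(isSupported) || '';
}

// Browser-only sketch: record a stream until stop() is called, then
// resolve with a Blob that can be previewed or downloaded.
function recordStream(stream) {
  const mimeType = firstSupported(
    ['video/webm;codecs=vp9', 'video/webm'],
    (t) => window.MediaRecorder && MediaRecorder.isTypeSupported(t)
  );
  const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : {});
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  const done = new Promise((resolve) => {
    recorder.onstop = () => resolve(new Blob(chunks, { type: recorder.mimeType }));
  });
  recorder.start();
  return { stop: () => recorder.stop(), done };
}
```

The resolved Blob can then feed a `<video>` element via `URL.createObjectURL` for playback and download.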

Send the Update

Once the summary looks right, you can send an update to an item by searching for the item's name in the right sidebar and clicking the matching item button, which sends it instantly. Depending on the file or image size, it may take a moment for your upload to appear.

How we built it

  • Functional components in ReactJS, together with CSS, are the backbone of everything this app is.

  • Puppeteer is used for Show the Website and parts of Show the Idea: URL. It was exactly the tool I needed for generating screenshots of services that don't offer an API. While working on that, I realized it would also be great to have a tool for showing websites in different viewports, something I've needed many times for quality assurance.

  • NodeJS was needed for Puppeteer to work. The client sends a URL as a query request, Puppeteer in NodeJS generates a Base64-encoded image, and the server sends it back to the client. I also needed NodeJS for sending files to Monday's API.
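
The server-side screenshot flow can be sketched like this (the viewport presets and function names are my own illustrative choices, not necessarily the app's):

```javascript
// Illustrative viewport presets for desktop, tablet, and mobile.
const VIEWPORTS = {
  desktop: { width: 1280, height: 800 },
  tablet:  { width: 768,  height: 1024 },
  mobile:  { width: 375,  height: 667 },
};

function viewportFor(name) {
  // Fall back to desktop for unknown device names.
  return VIEWPORTS[name] || VIEWPORTS.desktop;
}

// Visit the URL with Puppeteer and return a Base64-encoded PNG
// the client can drop straight into an <img> src.
async function screenshot(url, device = 'desktop', fullPage = false) {
  const puppeteer = require('puppeteer'); // loaded lazily; assumed installed
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.setViewport(viewportFor(device));
    await page.goto(url, { waitUntil: 'networkidle2' });
    return await page.screenshot({ encoding: 'base64', fullPage });
  } finally {
    await browser.close();
  }
}
```

On the client, the returned string can be used as `src={'data:image/png;base64,' + data}`.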

  • Canvas (HTML5 API) is used for any image attached to this app in Show the Website and Show the Idea: URL/Upload. The images themselves aren't bound to a canvas element until the user starts drawing or writing, and even non-generated images such as uploaded image files can be drawn on. I used a library called merge-images to overlay the drawn canvas onto the image.

  • MediaDevices is used for Show the Idea: Capture and Show the Video. I wanted screen recording because I knew it would be useful for describing almost anything, and in the process I realized camera recording would be handy too, since it uses nearly the same code. While implementing camera recording, I also wanted it to work with mobile's rear-facing camera (aka the environment camera). So mobile gets its own set of buttons for the front and rear cameras, while desktop gets the screen and self buttons.
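
The mobile front/back split maps cleanly onto getUserMedia's facingMode constraint. A minimal sketch under that assumption (function and mode names are mine):

```javascript
// Build getUserMedia constraints for each record mode. 'user' is the
// front-facing camera, 'environment' the rear-facing one.
function constraintsFor(mode) {
  switch (mode) {
    case 'front':       return { video: { facingMode: 'user' }, audio: true };
    case 'environment': return { video: { facingMode: 'environment' }, audio: true };
    default:            return { video: true, audio: true }; // desktop self-record
  }
}

// Browser-only: open the requested camera. Screen capture instead goes
// through navigator.mediaDevices.getDisplayMedia, which mobile lacks.
async function openCamera(mode) {
  return navigator.mediaDevices.getUserMedia(constraintsFor(mode));
}
```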

Challenges we ran into

Unfortunately, I ran into a problem with nearly everything I worked on. I have definitely lost track of all the problems I faced, but one stood out from the rest of the "Challenges": the Canvas API. It was the biggest headache to get going, and I felt like I kept making changes to it right up to the end. Some of the problems I faced with Canvas were:

  • "How can I align the canvas to the image?" That's when I discovered you can use the image as a background while using the image's naturalHeight/naturalWidth as the canvas's dimensions to get the alignment right.
  • "How can I combine the canvas and the image, and now why is there a black screen?" That's when I discovered the merge-images library, and learned to convert the canvas to PNG so it keeps a transparent background.
  • "Wait, the image is too large and the user can't access the whole image." I implemented requestFullscreen() to solve that.
  • "Why am I not able to draw on the canvas after scrolling?" That's when I discovered getBoundingClientRect().
  • "Oh no, I can't draw with my drawing tablet as it automatically scrolls." I applied touch-action: none to the canvas to keep it in place.
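
The getBoundingClientRect fix boils down to mapping pointer coordinates from CSS pixels into the canvas's own coordinate space (which uses the image's naturalWidth/naturalHeight). A pure helper for that mapping, as a sketch of the idea:

```javascript
// Map a pointer event's client coordinates into canvas coordinates,
// accounting for the canvas's on-screen position and any CSS scaling.
function toCanvasCoords(rect, canvasWidth, canvasHeight, clientX, clientY) {
  const scaleX = canvasWidth / rect.width;   // CSS px -> canvas px (horizontal)
  const scaleY = canvasHeight / rect.height; // CSS px -> canvas px (vertical)
  return {
    x: (clientX - rect.left) * scaleX,
    y: (clientY - rect.top) * scaleY,
  };
}

// Typical usage inside a pointer handler (browser-only):
// const rect = canvas.getBoundingClientRect();
// const { x, y } = toCanvasCoords(rect, canvas.width, canvas.height, e.clientX, e.clientY);
// ctx.lineTo(x, y);
```

Because the rect is re-read on each event, scrolling no longer breaks the mapping.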

And much more with Canvas API...

Some other challenges I faced seem pretty silly now that I've figured out the answers, such as wondering why I couldn't get a Monday SDK API query working: it turned out I didn't have the correct scope.

And then there are some challenges I had to work around, such as not being able to call Monday's file upload API from the frontend due to CORS. I instead implemented it server side, sending the file with Axios and FormData from the client.
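
Server-side, the workaround looks roughly like this. The route handling, the hard-coded mutation shape, and the field names are assumptions based on Monday's multipart GraphQL file endpoint, not the app's exact code:

```javascript
// Build the GraphQL mutation that attaches a file to an update.
function fileMutation(updateId) {
  return `mutation ($file: File!) { add_file_to_update(update_id: ${updateId}, file: $file) { id } }`;
}

// Node-side upload sketch (axios and form-data assumed installed).
async function uploadToMonday(updateId, filename, buffer, token) {
  const axios = require('axios');
  const FormData = require('form-data');

  const form = new FormData();
  form.append('query', fileMutation(updateId));
  form.append('variables[file]', buffer, { filename });

  // Monday's file uploads go to the dedicated /v2/file endpoint.
  const res = await axios.post('https://api.monday.com/v2/file', form, {
    headers: { Authorization: token, ...form.getHeaders() },
  });
  return res.data;
}
```

Because this runs on the server, the browser never makes the cross-origin request, so CORS no longer applies.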

Accomplishments that we're proud of

I turned my dream app into a reality. It was definitely a challenging road, but I managed to clear every obstacle I ran into, and I did it as the sole fullstack developer (with the help of Google and Stack Overflow).

What we learned

I have learned so much in the process of developing this app. Features that I thought would be impossible for me ended up being possible. While I know the app isn't perfect, I definitely managed to get everything working the way I want it to.

What's next for Item Visualizer

The drawing canvas can always be improved. I would love to add an undo feature, which should be a quick addition; nobody wants to clear the whole canvas after they've already made some big changes.

I implemented a WebRTC video chat that worked great as a one-on-one chat, but I didn't have enough time to finish it, or enough devices and people to test it with a group rather than just one-on-one. It was going to be a new method called Show Someone (or something like that): a user requests a video chat with another user at a certain time, and everyone simply meets up at a certain room ID, with each invited user matched to their Monday name. That way, you wouldn't need any other video chat service when you can do it right in Monday using WebRTC. It feels like a big app idea on its own, so I'm debating whether to include it in Item Visualizer or keep it separate.

I want to implement screen capture/recording for mobile. Unfortunately, getDisplayMedia() doesn't work on mobile, so I need to research a way to achieve something similar on mobile devices.

Built With

ReactJS, NodeJS, Puppeteer, HTML5 Canvas, MediaDevices, merge-images, Axios
