Implementing Self-Configuring Graphical User Interfaces through Machine Learning

Shalabh Bhatnagar
3 min read · Sep 8, 2022


Part 1 of 4

Through this four-part series, I hope to help the fraternity who do UX and UI work: the people who configure or implement user interfaces, work with the Cartesian plane as it maps to our monitors, and lay out widgets, buttons and so on, on "canvases", "layout managers", "windows" or "panels". I hope they see this series as a companion for delivering superior user journeys and use cases.
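If you have not worked at the coordinate level before, here is a tiny illustrative sketch of my own (not code from the series), using tkinter and assuming nothing more than a standard Python install. The window behaves like a Cartesian plane with the origin at the top-left corner, x growing to the right and y growing downward.

```python
import tkinter as tk

# The window behaves like a Cartesian plane: the origin (0, 0) is the
# top-left corner, x grows to the right and y grows downward.
root = tk.Tk()
root.geometry("400x300")  # width x height in pixels

# place() pins the widget at explicit (x, y) coordinates. Choosing these
# numbers is exactly the layout work this series aims to automate.
button = tk.Button(root, text="Click me")
button.place(x=150, y=120)

root.mainloop()
```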

Here is what this series is all about — a GUI that automatically configures itself.

After you have run the code snippets I offer in the future episodes, you will get some (surprising) insights into the way you and your end-users interact with devices (and with a little bit of effort, you can extend it to phone GUIs too).

I humbly offer several code variations and vantage points from which to examine end-user behavior.

Finally, when you put it all together, you will see a different picture, one that helps you optimize your user stories, journeys or use cases when it comes to the end-user experience on a GUI.

(When I ran the code on my own interactions for the first time, I was surprised by how I interacted with my devices, which in some ways also revealed how my right hand operates.)

Getting Started

Two areas (there are more) that are connected to, but often ignored in, user-interface design:

- Hand-eye coordination

- Postural variation

As always, I will call upon Python for its beauty, brevity and bravery. Please feel free to use any other programming language. I found that Microsoft .NET offers solid features that can help you replicate this approach quickly outside the world of Python.

Back to Basics

GUIs are an integral part of our world. We use keyboards, mice, touch and more to interact with them daily. The apps we use and the things we click on a daily basis do not vary dramatically from the previous day to the next (this was one of the insights for me during these implementations).

The exception to this claim is video games, for they offer different worlds, varied challenges and nanosecond reaction times, and they may not always use the same part of the screen. They don't. Just too much fun and fast-paced action.

However, in our daily computing (try this on your phone: sort your apps by "Most Used" and your list will be nearly the same every day) we tend to repeat many apps, many actions, many clicks and many taps. It is almost as if our minds are programmed in a certain way (heuristics?).

So, what am I saying?

What if the apps I use, or the buttons I click, somehow magically appeared at the locations on the screen where I expect them to be, instead of where a designer placed them? What if I am subconsciously tapping or clicking buttons in a certain way, and this information can help deliver better experiences to the end-user? Not that as a user I have ever cared, but hopefully I will.
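To make that concrete, here is a toy sketch of the simplest possible version of the idea. The click coordinates are made up, and the rule (move the button to the average of past clicks) is only a stand-in for the machine learning that comes later in the series.

```python
import tkinter as tk

# Hypothetical sample of past click coordinates recorded for one user.
past_clicks = [(310, 220), (298, 231), (305, 215), (312, 227)]

# The simplest "self-configuring" rule: place the button at the centroid
# (mean x, mean y) of where the user has actually been clicking.
mean_x = sum(x for x, _ in past_clicks) // len(past_clicks)
mean_y = sum(y for _, y in past_clicks) // len(past_clicks)

root = tk.Tk()
root.geometry("640x480")

# The button appears where the user tends to click, not where a designer
# happened to put it.
button = tk.Button(root, text="Where you expect me")
button.place(x=mean_x, y=mean_y)

root.mainloop()
```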

So, imagine if:

- Your mind doesn't have to chase the changing location of a widget, a button or a link that a zealous UX/UI team moves at will (eye movement)

- Your hands don't need to "remember" where a button sits on the screen (heuristics)

- Frustration drops because you do not have to search for a button or a widget again and again (penalty)

- You get the results you expect from your interaction (reward); a small scoring sketch follows this list
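One way to picture the (penalty) and (reward) labels above, purely my own framing until the real code arrives in later parts, is a score that rewards an interaction when the click lands near the location we predicted and penalizes it when the user had to hunt for the widget:

```python
import math

def interaction_score(predicted, actual, tolerance_px=40):
    """Reward (+1) a click that lands within tolerance of the predicted
    location; penalize (-1) one that does not (the user had to search)."""
    distance = math.dist(predicted, actual)
    return 1 if distance <= tolerance_px else -1

# Hypothetical example: we predicted (300, 220); the user clicked (310, 228).
print(interaction_score((300, 220), (310, 228)))  # prints 1, a reward
```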

To implement this piece:

- You need some samples to learn the UI interaction behavior of your end-user (I will give the code)

- Once the data is ready, you put it through a machine learning pipeline (I will give the code)

- Then we make the predictions (I will give the code)

- Finally, I will share one approach, among many, on how to take it forward (I will give the code and a small framework to help shape the process); a compressed preview follows below
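To set expectations before the detailed parts arrive, here is a deliberately tiny end-to-end sketch of those four steps. The click data is made up and the model (scikit-learn's KMeans) is chosen purely for illustration; the actual episodes will walk through the real data capture, pipeline and framework.

```python
import numpy as np
from sklearn.cluster import KMeans

# Step 1: interaction samples, here hypothetical (x, y) click coordinates
# that a logger would normally capture for you.
clicks = np.array([
    [305, 220], [298, 231], [312, 227], [310, 215],   # one habitual region
    [1040, 60], [1052, 55], [1048, 62],                # another habitual region
])

# Step 2: a very small "pipeline": cluster the clicks into habitual regions.
model = KMeans(n_clusters=2, n_init=10, random_state=42).fit(clicks)

# Step 3: predictions. Which habitual region does a new click fall into,
# and where is that region's centre? The centre is a candidate location
# for the widget the user is reaching for.
new_click = np.array([[300, 225]])
region = model.predict(new_click)[0]
print("Predicted region:", region)
print("Suggested widget location:", model.cluster_centers_[region].round())

# Step 4: taking it forward means feeding that suggestion back into the
# GUI layout, which is where the small framework mentioned above comes in.
```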

Disclaimer: All copyrights and trademarks belong to their respective companies and owners. The purpose of this article is education only, and the views herein are my own.
