The biggest change since the early days of Macintosh and Windows is that information has gradually been hidden away, and people call this kind of hiding hierarchy. The desktop was created; personalized options were collected into "Settings," and advanced operations were tucked into the corners of the GUI, so people usually need deeper interaction to reach the panels that control them. The GUI has become more human-friendly, letting people interact with the virtual world ever more intuitively. Where there is a desk, there are drawers: people put large numbers of applications into drawers and keep only the ones for daily use on the desktop. At the same time, early operations relied mostly on the mouse and keyboard, with actions such as clicking and dragging. Now, not only have computers stayed in that mold, but the birth of the mobile phone has also made touch a new primary way of interacting. Gestures, voice control, and other methods have widened users' interactive choices.
What remains unchanged is the role of icons, which still serve as the core means of visual prompting and quick navigation within apps.
Now, people hope for cross-platform collaboration and interaction, more diverse accessibility designs, and yet another expansion of how we interact.
Nowadays, smartphones, computers, and tablets from the same brand are generally said to form an "ecosystem," meaning they can usually work together. The photo library is a typical example: people inside an ecosystem have their photos synchronized across phone, computer, and tablet. I believe functions can be synchronized even further, so that devices differ only in how they are interacted with and how they display things.
Companies such as Apple, Samsung, and Huawei are now focusing on AI, hoping to weave more of it into people's daily interactions. A simple example is scheduling: if someone sends you a message saying, "I bought a plane ticket for tomorrow morning," the phone could recognize the text on the screen, use AI for semantic understanding, and automatically set a reminder for the next morning. Reducing explicit GUI interaction is the next direction we hope to work toward.
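To make the idea concrete, here is a minimal sketch of that flow in Python. It is not any vendor's actual assistant: the function name propose_reminder is hypothetical, and a simple keyword rule stands in for the AI semantic recognition step; the point is only to show how recognized message text could turn into a scheduled reminder without the user opening a calendar app.

```python
from datetime import datetime, time, timedelta

# Hypothetical stand-in for on-device semantic recognition: scan the
# recognized message text for a time expression and propose a reminder.
def propose_reminder(message: str, now: datetime) -> dict | None:
    text = message.lower()
    if "tomorrow morning" in text:
        # Assume "morning" means 9:00 the next day; a real assistant would
        # use semantic models and the user's habits instead of a fixed rule.
        when = datetime.combine(now.date() + timedelta(days=1), time(9, 0))
        return {"title": "Flight (from message)", "remind_at": when}
    return None  # No recognizable schedule cue found.

# Usage: the phone's screen-text recognition hands the message to the assistant.
suggestion = propose_reminder(
    "I bought a plane ticket for tomorrow morning",
    now=datetime(2024, 5, 1, 18, 30),
)
print(suggestion)  # {'title': 'Flight (from message)', 'remind_at': 2024-05-02 09:00}
```

In a real system the returned suggestion would be passed to the device's calendar or notification service, so the only GUI interaction left to the user is confirming or dismissing the proposed reminder.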