Tag Archives: TXL

Trends in technology – photos from IFA 2018

Artificial intelligence, image recognition, step-by-step instructions, natural materials and relaxation – these are the trends that were visible at this year’s IFA consumer technology fair in Berlin.

Observations from IFA:
– AI: the term “smart” gives way to “artificial intelligence”,
– step by step: home equipment gives us precise, step-by-step instructions on how we should perform different tasks (for example, how to do the laundry or prepare a dish tailored to our diet and training plan),
– image recognition: the fridge recognizes the food put into it and suggests where the given fruit or vegetable will keep its freshness for the longest (the device also suggests what should be taken out of it), and the oven automatically selects the cooking program for the recognized food,
– virtual assistants: a chatbot scans our bank account and credit cards to give us numerous pieces of financial advice, big and small (including discouraging us from spending money on another “Starbucks coffee”),
– vacuuming: instead of caffeine, we drink a home-made fruit and vegetable cocktail prepared in a vacuum blender (no oxidation), packed into a reusable vacuum cup for take-away,
– fermentation: if we like beer or wine, we will love homemade alcohol prepared in the fermentation box built into the fridge,
– styling: black is the new black, electronics are covered with fabric or leather, and mass personalisation is more widely available,
– best news: we sleep a lot and have plenty of time to meditate (hooray!).
Here is the proof.


1. Artificial intelligence (AI).


2. Image recognition (more specifically: food recognition by an oven and refrigerator).


3. Recipes explained step by step and diet recommendations calculated on the basis of data gathered from training devices and smartwatches.


4. Product news: the fridge with fermentation chamber and vacuum blenders.


5. Rest, meditation, sleep.


6. Styling.


All pictures were taken by me and are protected by copyright. Please contact me if you want to use them in your work.

Do you like this feature from IFA 2018? Do you think I deserve a cup of coffee (or two)? Wherever you are, you can donate a small sum of money using your PayPal account or credit card. All donations will help me to finance my journeys to fairs, festivals and conferences about design and new technology – this is where I find news for my website. Just click the rectangular button below to perform a secure transaction. Thank you for your support, it will help me to take a step forward and write new posts.


Designed by algorithms

There’s a design technology that takes its cue from nature and generates functional design options – for everything from furniture to medical implants – that meet predetermined goals. It is not a fantasy; it is generative design. Let’s have a look at some examples of products, prototypes and concepts designed by algorithms.

This article on generative design is based on information gathered at lectures and exhibitions during San Francisco Design Week 2017, the IDTechEx 2017 conference in Berlin, Web Summit 2016 in Lisbon, and the exhibition “California: Designing Freedom” currently on show at the Design Museum in London.


The traditional design process most designers know today can be laborious, full of revisions and iterations before arriving at the finished product. Generative design software – a new design tool – is helping to speed up the process and, more importantly, enabling designers to come up with ideas beyond their imagination.

The next stage of computer-aided design (CAD) will be generative.

With generative design, a designer simply enters his or her goals and constraints into the software. Once design parameters are defined for a specific project – for example, four legs, an elevated seat at a specific height, weight requirements, given materials – the software goes to work.

Employing evolutionary algorithms, generative design software uses the predetermined criteria to produce thousands, possibly millions, of optimised options for a single design, using complex forms with precise amounts of material, exactly where needed. The software creates optimised lattice structures that far exceed the performance of traditional configurations.

Generative design is not just about creating interesting designs. It is about producing the most optimised designs.

All generated options meet the designer’s criteria. And because the computer isn’t constrained by preconceived notions of what a chair should look like, it is free to discover solutions that the designer might not have come up with on their own.

To put it simply, with traditional design (first chair from the left in the picture above), a designer might start creating a chair with an initial sketch and then play with the form and materials. The end product might be something sleek and functional, but also familiar.

Technologies such as computer-aided design software and 3D printing have opened the door to new possibilities, such as the lattice leg structure (Chair 2), but here designers are still somewhat limited in what they can do.

Generative design (Chair 3) removes those limitations by exploring all the various ways to make a chair that meets the designers’ requirements while producing unexpected and new forms.

Briefly speaking, the design process with a generative design program consists of the following steps.

A designer starts by setting goals – an alloy chair that supports, for example, 200 lbs and weighs less than 7 lbs. The software then begins to generate many possible solutions. Then, to ensure that the chair is strong enough to support the maximum weight, each iteration undergoes performance analysis.

Through many variations, the software continues to create, simulate and optimise the design. From the results, the designer selects the solution that best satisfies his or her needs. The designer can then produce a prototype using various fabrication methods – such as 3D printing – for real-world testing.
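The generate–analyse–select loop described above can be sketched as a toy evolutionary search. This is a deliberately minimal illustration, assuming made-up strength and weight models for a chair leg; real generative design software performs full structural simulation, not one-line formulas.

```python
import random

# Goals from the chair example: support 200 lbs, weigh as little as
# possible. "strength" and "weight" are toy stand-in models, not real
# structural analysis.
MAX_LOAD = 200.0  # lbs the chair must support

def strength(thickness):   # toy model: thicker legs carry more load
    return 80.0 * thickness

def weight(thickness):     # toy model: thicker legs weigh more
    return 1.5 * thickness

def evolve(generations=200, population=30):
    """Generate-analyse-select loop: mutate candidates, discard those
    that fail the load requirement, keep the lightest survivors."""
    pop = [random.uniform(2.0, 6.0) for _ in range(population)]
    for _ in range(generations):
        # "generate": mutate every candidate slightly
        pop += [t + random.gauss(0, 0.1) for t in pop]
        # "performance analysis": keep only candidates that hold the load
        feasible = [t for t in pop if strength(t) >= MAX_LOAD]
        # "select": the lightest designs survive to the next generation
        feasible.sort(key=weight)
        pop = feasible[:population]
    return pop[0]

best = evolve()
print(f"leg thickness {best:.2f} -> weight {weight(best):.2f} lbs")
```

Over the generations the search converges on the thinnest leg that still meets the load requirement, which is exactly the "most optimised design" behaviour described above, just in one dimension instead of millions.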

Generative design technology mimics nature, employing algorithms to create complex forms that imitate how the natural world accepts or rejects designs.

This technology can be applied to almost anything that needs to be designed – furniture, vehicles, buildings, bridges, implants, etc. Below you can find a handful of generative design projects.


Generative Art Nouveau

As you can see in the picture above, the software is quite talented at designing chairs – Antoni Gaudí would probably not be ashamed of such a chair design.


Bicycle frame

Generative designs of a chair and bicycle frames were part of the presentation entitled “The Future of Making Things is Now”, held at Autodesk during San Francisco Design Week in June 2017.



Under Armour, an American company that manufactures footwear and sports apparel, is an early adopter of generative design software. In 2016 the company introduced the Architech – a performance training shoe with a complex-shaped 3D-printed midsole.

Under Armour used selective laser sintering (SLS) to 3D print the Architech’s flexible yet durable complex lattice structure, made from bonded chalky substrate.

The Architech shoe performs admirably in activities that require lateral stability as well as those where flexibility, cushioning and light weight are critical, improving performance across athletic disciplines without the need to change shoes during varied training regimens.

Want a pair? Unfortunately, this item is sold out. Another model – the Architech Futurist – is no longer available either. Sorry.


Skateboard trucks

The skateboard truck was designed in California by Daniele Grandi and John Schmier, using tools developed by Autodesk.

The truck’s prototype is part of the exhibition “California: Designing Freedom”, which runs at the Design Museum in London until 17 October. If you are visiting the British capital before then (London Design Festival is quite soon), I encourage you to see this exhibition.



On the left: the traditional approach to node design. On the right: the optimised node that performs the same function as the original component, but with significant weight reduction.

The traditional part is handmade. The optimised version was made by digital fabrication employing laser-sintered steel powder.


Heat exchanger

A cleverly designed series of struts inside each of the tubes increases internal surface area and disrupts the flow of cooled fluid to maximise heat transfer. The outside form has been designed to increase the cooling surface area and utilise the cooling air as it passes through the device.


Bio-engineered surfaces

Autodesk Within Medical software enables implant designers to create a porous coating for implants to aid in osseointegration – the fusion between bone and implant – where the porosity itself can be tuned to allow for optimal fusion. The integrated lattice topologies have been developed specifically with cell growth in mind.

This spinal implant by Novax DMA was 3D-printed using a multi-planar structure based on hexagonal cells that resemble the porous structure of the trabecular bone.


Femoral implant

In the session „How Functional Generative Design Reshapes Everything”, given at the IDTechEx conference held in Berlin in May 2017, Jesse Coors-Blankenship, CEO and founder of the company Frustum, introduced Generate, its cloud-based generative design software.

One of the examples of optimised design is the body of a femoral implant. You can watch the entire session by Jesse Coors-Blankenship in the video embedded below (the image and sound quality is not the best, as I recorded it using a tiny sports camera; however, I hope you will find the recording interesting).


In addition to these examples, generative design can also be used to design a car chassis, bridge construction or even interiors of offices and apartments. Carl Bass – a member of Autodesk’s Board of Directors and former CEO of this company – talked about it in his lecture titled “Design and the Future of Work” at Web Summit 2016 conference in Lisbon. You can watch the entire session below.



Main picture: Under Armour’s press materials. All other photos and videos: TrendNomad.com.



Five consumer tech trends spotted at IFA fairs

Cheap home appliances connected to the Internet will be linked to the development of many – rather less cheap – online subscription services, including those ordered through home robots with a voice interface. After IFA 2016 it is quite clear what kind of household appliances may become the standard over the next decade, even though today they may still look quite futuristic.

The Internet of Things – or rather, the Internet of household appliances – closely tied to the growth of businesses based on online shopping and delivery; voice interfaces; devices that provide a sense of self-sufficiency; and virtual reality accessories beyond the goggles themselves – these are the five trends spotted at the IFA fair held in early September in Berlin.


1. The high price of cheap Internet of Things


A. – Every device that we connect to the electricity grid will very soon be connected to the internet as well. There will also be gadgets and devices that go online without us, the consumers, even knowing about it – says Mikko Hyppönen, Chief Research Officer at F-Secure, at the IFA+ Summit.


– These could be really simple devices such as a toaster or a lightbulb. [Manufacturers] will do that to gather information about the usage, to gather analytics about how much our devices are being used, and where they are being used. [They want to know for marketing purposes whether they] have more customers toasting bread on the East side of Berlin or on the West side of Berlin – adds Mikko Hyppönen.

Not all devices will go online with a benefit to consumers. Some of them will go online not to benefit us, but to benefit their manufacturers.

– Why would anybody like to hack my fridge? Hackers are not interested in my fridge or a toaster, but they are interested in the network that they are connected to. Hackers do it to steal something. When you have a typical network, a home or an office network, it is typically well secured. And then an employee brings to an office an IoT coffee maker and connects it to the Wi-Fi. That is the weakest point in the corporate network. In the future, remember not only to patch your computer, phone and tablet. Remember to also patch your lightbulb – concludes Mikko Hyppönen.


B. – From the business point of view, we will see any non-connected product as a lost business opportunity – says David Cronström (in the video below, first from the right), Head of Innovation/Connectivity at Electrolux, during the panel session titled ”Smart World: Home Appliances”.


– It looks like the value of a connected consumer is twice as high as that of a non-connected one. If this assumption proves true, we can imagine that some companies will release connected products that are much cheaper than the non-connected versions, or even free – says Cyril Brignone (in the centre), CEO at Arrayent.


2. Merging products with services


A. In some markets, there are washing machines that can automatically place an order for detergent, as well as refrigerators connected to an online supermarket. Now it is time to merge the oven with the IoT and home delivery services. But instead of ordering food from the oven’s control panel, it is about programming the kitchen device from an external mobile app that is an online grocery store and a cookbook in one.


Bosch announced at IFA a cooperation with HelloFresh, an online deli offering a regular supply of fresh food. Every box contains the right proportions of ingredients along with recipes. The recipes are, of course, also available in the HelloFresh mobile app.

Soon, every recipe available in the HelloFresh app will also include a one-click button to program a connected Bosch oven. The kitchen appliance will heat up to a temperature perfectly matched to the dish from the recipe and then run for the appropriate time.
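Conceptually, that one-click hand-off is nothing more than serialising a small programme payload from the recipe and sending it to the oven. The sketch below illustrates the idea; the field names and values are purely invented, not Bosch’s or HelloFresh’s actual API.

```python
import json

# Hypothetical payload a recipe app could send to a connected oven:
# the one-click button just serialises the programme from the recipe.
recipe_program = {
    "recipe": "Herb-crusted salmon",
    "mode": "top_bottom_heat",
    "temperature_c": 180,
    "duration_min": 25,
}

message = json.dumps(recipe_program)

# The oven decodes the message and runs the matching programme.
received = json.loads(message)
print(received["temperature_c"], received["duration_min"])  # 180 25
```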


B. It turns out that direct collaboration with delivery services is also a business opportunity for Daimler, the owner of the Smart car brand. Dr. Dieter Zetsche, Chairman of the Board of Management at Daimler AG, announced the start of the “Ready to Drop” service, provided in cooperation with DHL.


An order placed online with the “Ready to Drop” option will be delivered by a DHL courier directly to the trunk of our parked Smart car (or picked up from it if we want to return something).

– You can use your car as a personal mobile mailbox – says Dr. Dieter Zetsche in his keynote.

The courier will open the car through an application using a one-time code. At the beginning, the “Ready to Drop” service will be available in Stuttgart, Germany, and then will launch in Cologne and Berlin. Next year, the service will also be extended to Mercedes cars.


3. In-home robots with a voice interface


A. Visitors to the Bosch and Siemens booths (both brands belong to BSH) could meet Mykie, a concept smart kitchen assistant.


Mykie responds to the user by means of voice recognition. It listens to users and answers their simple questions about the weather or the latest stock market prices. When communicating with the user, the robot uses its voice and head movements, as well as simple facial expressions and varying light signals, to express its ”emotions”.


The user can use Mykie to conveniently control the entire range of home appliance functions. Mykie knows, for example, what’s stored inside the connected fridge, and how much longer the cake still has to bake in the oven.

Alternatively, additional services such as recipe ideas or suggestions from online cooking shows can also be called up. If ingredients are still missing, they can be ordered online via Mykie and delivered directly. Mykie sends the recommended settings from the recipe straight to the connected appliances.


B. Sony is another company that showed a prototype of a cute voice-controlled robot with mesmerising eyes.

Sony’s Xperia Agent is going to be more than a digital assistant like Siri or Alexa that answers simple questions and lets you complete tasks such as checking your calendar and making phone calls. By being connected to a sound system, a TV and a coffee machine, Xperia Agent is able to play music (play the video above to see how it dances!), display movie trailers and – thanks to a collaboration with Nestlé Japan – even order a coffee.

No launch date for final consumer versions of Mykie or Xperia Agent was given.


4. Illusory self-sufficiency


A. Grundig HerbGarden is a prototype of a kitchen appliance that enables users to grow fresh, organic herbs in their own home. Indoors and without pesticides.

HerbGarden features three sets of growing chambers and an LED light box. Through a mobile app, the user can monitor and control the humidity, as well as track which herbs are growing, when the last harvest was, the approximate harvest time for each item, its temperature, and the remaining water levels. It also allows owners to grow exactly the amount of herbs needed, with the security of knowing exactly when to harvest.


B. Some time ago, household appliances that can be programmed in advance to work at night, when the price of electricity is lower, became standard. Now the trend is reversing.


With the growing popularity of home renewable energy installations, Siemens introduced the FlexStart system, which enables dishwashers, washing machines and clothes dryers to be programmed to start working during the daytime, when the sun shines most strongly and home solar panels produce the greatest amount of power, or when household wind turbines operate most efficiently. The user selects the latest time at which the dishes or clothes should be clean and dry, and the devices themselves start working at the right moments.

As soon as the program is active, it gives priority to these cheaper energy sources. If the decentralised electricity is insufficient to get the selected appliance up and running within the set time window, the power required is covered by conventional sources.
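The scheduling logic behind such a system can be sketched simply: given an hourly solar forecast and the latest acceptable finish time, pick the start hour whose run window overlaps the most forecast output. This is assumed logic for illustration, not Siemens’ actual algorithm.

```python
# FlexStart-style scheduling sketch (assumed logic): choose the start
# hour that maximises the solar energy available while the appliance runs.

def best_start(solar_forecast_kw, runtime_h, latest_finish_h):
    """Return the start hour that maximises forecast solar coverage."""
    best_hour, best_energy = 0, -1.0
    for start in range(0, latest_finish_h - runtime_h + 1):
        energy = sum(solar_forecast_kw[start:start + runtime_h])
        if energy > best_energy:
            best_hour, best_energy = start, energy
    return best_hour

# 24 hourly forecast values (kW) peaking around midday
forecast = [0]*7 + [1, 2, 3, 4, 5, 5, 4, 3, 2, 1] + [0]*7

# "dishes clean by 18:00, programme takes 3 hours"
start = best_start(forecast, runtime_h=3, latest_finish_h=18)
print(start)  # 10 -> the dishwasher runs over the midday solar peak
```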


C. The self-sufficiency trend is also well represented by Lifepack, an anti-theft backpack designed with mobile working and digital nomads in mind.


The integrated Solarbank – a 3-in-1 power bank, solar panel and Bluetooth speaker – stores six smartphone charges, generates one extra smartphone charge per four hours of sunlight, and provides great-sounding audio for 96 hours on a full battery.


5. Virtual reality beyond VR goggles


A. During IFA, the list of winners of the UX Design Awards competition was announced. The main, golden prize went to ICAROS.

The ICAROS is a fitness device and gaming controller in one. It is designed to train muscles and to stimulate reaction and balance. The user’s movements on the ICAROS control the virtual flight or diving path in the VR game.


B. Virtual reality is not only about visual effects. In this immersive format, equally important is the 3D sound.


Sennheiser’s AMBEO VR Mic, developed in conjunction with VR content producers and designed for professional VR production, captures high-quality audio in 360 degrees. The ambisonic microphone is fitted with four capsules in a tetrahedral arrangement. This special design allows you to capture the sound that surrounds you from a single point in space. As a result, you get fully spherical ambisonic sound to match VR and 360° content.
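The four-capsule tetrahedral design works by converting the raw capsule signals (“A-format”) into spherical “B-format” components. The sketch below shows only the basic first-order conversion matrix; a real converter such as Sennheiser’s also applies per-capsule filtering and calibration.

```python
# First-order ambisonic A-format -> B-format conversion, simplified.
# Capsules: front-left-up (FLU), front-right-down (FRD),
# back-left-down (BLD), back-right-up (BRU).

def a_to_b(flu, frd, bld, bru):
    """Convert one sample from tetrahedral A-format to B-format."""
    w = flu + frd + bld + bru   # omnidirectional component
    x = flu + frd - bld - bru   # front-back figure-of-eight
    y = flu - frd + bld - bru   # left-right figure-of-eight
    z = flu - frd - bld + bru   # up-down figure-of-eight
    return w, x, y, z

# A sound arriving from straight ahead hits both front capsules equally:
print(a_to_b(1.0, 1.0, 0.0, 0.0))  # (2.0, 2.0, 0.0, 0.0)
```

The W, X, Y and Z signals together describe the full sphere of sound, which is why the recording can later be rotated to follow the viewer’s head in a VR player.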


C. In the field of amateur 360° videos, a device that is definitely worth mentioning is the Insta360 camera.


Insta360 Nano is the world’s first HD camera to shoot and live-stream high-definition virtual reality and panoramic stills and videos directly from an iPhone. Additionally, for panoramic action shots or videos, the Insta360 Nano can be attached to a bike or boarding helmet, a drone or a selfie stick. Plus, the Insta360 packaging is easily converted into a Google Cardboard VR viewer.


Conclusion after IFA 2016

After listening to the experts quoted above, as well as many others participating in the IFA Keynotes and IFA+ Summit, and after visiting hundreds of exhibitors, I came to the following conclusion: in the 2020s, offline household appliances – ones that will not generate any data about the user, will not use a microphone to listen to him or her, and will not push anyone towards online subscription services with delivery (which means they will not generate any extra income for service providers or manufacturers) – will become a luxury.

All pictures and videos by TrendNomad.com. You can find more photos and videos I took at IFA 2016 on my Instagram profile.



Print, chip, click

Contactless public transport tickets and contactless credit cards are standards we have quickly become accustomed to. Within a few years, printed magazines with NFC chips hidden in paper covers and advertising pages may become just as common – as may books, product packaging, labels, tags, brochures, business cards and invitations made with miniature electronics, blurring the boundary between traditional media and the online world.

Arjowiggins Creative Papers, one of the world’s leading manufacturers of fine paper and luxury packaging, is knocking down the wall separating the print and digital worlds. Its product, PowerCoat Alive, developed by Arjowiggins’ operation based in France, is a physical paper sheet with an embedded flexible printed RFID tag that can interact with Near Field Communication (NFC) enabled smartphones and tablets, causing them to reveal information pre-programmed on the chip.

Unlike QR codes, which only link to external content, NFC-enabled mobile devices trigger data stored in the microchip hidden in the paper, without the need for third-party reader apps.
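For the curious, the data on such a chip is typically stored as an NDEF message. The sketch below builds a single NDEF URI record following the NFC Forum’s short-record layout; the URL itself is just an example.

```python
# Build a short NDEF record of well-known type 'U' (URI).
# Common URI prefixes are abbreviated to a single identifier byte.
URI_PREFIXES = {0x01: "http://www.", 0x02: "https://www.",
                0x03: "http://", 0x04: "https://"}

def make_uri_record(url):
    """Return the raw bytes of a single short NDEF URI record."""
    for code, prefix in URI_PREFIXES.items():
        if url.startswith(prefix):
            payload = bytes([code]) + url[len(prefix):].encode("utf-8")
            break
    else:
        payload = bytes([0x00]) + url.encode("utf-8")  # no abbreviation
    header = 0xD1  # MB | ME | SR flags, TNF=0x01 (well-known type)
    return bytes([header, 1, len(payload)]) + b"U" + payload

record = make_uri_record("https://example.com")
print(record.hex())
```

A phone tapping the paper reads these bytes over the 13.56 MHz field and opens the URL directly, which is why no third-party reader app is needed.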

PowerCoat Alive is a form of digital paper in which Arjowiggins’ papers sandwich a core layer of customisable printed electronic circuits and a microchip. The electronics are pre-applied to the PowerCoat XD paper base using silver ink, and then thin grades of conventional yet high-quality fine paper are laminated onto both sides. The outer layers may be chosen from the Arjowiggins sampler.

Outer layers of the “digital paper” may be chosen from the sampler. Photography by TrendNomad.com

PowerCoat Alive sheets can be printed, finished and handled the same as normal papers (though the antenna and the chip must not be embossed or stamped). The paper is delivered ready for all types of printing, including offset, digital, screen printing and flexography. A pre-test is recommended for thermographic printing. PowerCoat Alive paper can also be used with standard inkjet or laser office printers, as long as they accept the thickness of the paper.

As the circuits are invisible, Arjowiggins suggests using printed symbols to show that the paper is NFC-enabled and to indicate where the hotspots are placed.

PowerCoat Alive paper can be stapled, sewn or bound into a book or a magazine, as long as the chip and antenna are not placed along an edge. Naturally, the paper can be torn, burned and recycled.

The standard 13.56 MHz NFC chip embedded in the paper can interact with many current Android and Windows mobile phones (though iPhones still do not support NFC). No battery is needed, as radio energy from the mobile device is enough to power the chip.

The communication distance is less than 2 cm. In order for the communication to work, the paper must not be placed on a metal surface or immersed in a liquid. It is also important not to press the microchip into the surface of the paper with excessive pressure. In good storage conditions, the electronic circuits can last for over five years.



1. Packaging, labels, price tags

”Digital paper” allows a consumer to tap an NFC-enabled phone against a tag to learn more about the product while shopping, to make sure that the item and its packaging are authentic and not fake, to receive a discount on the price or a coupon for the next transaction, to register the product, or to receive instructions on how to assemble or use the product after the purchase.


One of the first brands to use smart packaging is Alba1913. Watch the video above to learn more about the Scan Me packaging from this Polish cosmetics brand.

2. Brand loyalty

PowerCoat Alive champagne

The chip embedded in a packaging may link directly to an online shop, so a client can easily reorder a product (for instance, a small jar of high-end cosmetics or a bottle of premium olive oil) at the moment he or she realises a replacement is needed.

3. Printed magazines

PowerCoat Alive magazine ad

Electronics hidden in a cover or an advertising page may link to additional digital content about the promoted product or service, as well as provide a direct link to an online store where an instant purchase with a discount can be finalised.

4. Business cards

PowerCoat Alive business card

Any potential client you gave your business card to can easily scan it and have your contact information added to the contact list on their phone, so they never lose it.
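Under the hood, the chip on such a card would most likely carry a standard vCard, which phones import natively. A minimal sketch, with invented contact details:

```python
# Build a minimal vCard (the contact-exchange format phones import).
def make_vcard(name, email, phone):
    return "\r\n".join([
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"FN:{name}",      # formatted name
        f"EMAIL:{email}",
        f"TEL:{phone}",
        "END:VCARD",
    ])

print(make_vcard("Jane Doe", "jane@example.com", "+1 555 0100"))
```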

5. Event tickets and paper invitations

PowerCoat Alive festival ticket

Clients and guests can trigger the data about the event they are going to attend simply by placing their smartphones on the paper, and all the information embedded in the chip is added straight to their digital calendar.
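The calendar hand-off can be pictured as the chip carrying a small iCalendar entry, the format digital calendars import. The event details below are invented:

```python
# Build a minimal iCalendar entry with a single event.
def make_event(summary, start, end):
    """start/end in iCalendar basic format, e.g. 20160915T180000Z."""
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "BEGIN:VEVENT",
        f"SUMMARY:{summary}",
        f"DTSTART:{start}",
        f"DTEND:{end}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

ics = make_event("Design Festival", "20160915T180000Z", "20160915T220000Z")
print(ics)
```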

6. Source of data

Photography by TrendNomad.com

The interactivity of PowerCoat Alive extends beyond the customer experience. Brand owners can obtain a tremendous amount of data via a dedicated analytics platform, allowing them not only to measure the impact of their campaign, but also to track and even respond to customer behaviour in real time, resulting in more personalised, engaging, impactful and efficient campaigns.


To learn more about PowerCoat Alive paper, watch the video interview recorded at the IDTechEx show held in late April 2016 in Berlin.


If you want to ask Mark Heise more questions regarding PowerCoat Alive, please send him an email at mark.heise@arjowiggins.com. Please note that Mark Heise is a PowerCoat Applications Engineer working in the United States, but he will be happy to provide any information you need and find a distributor in your region.




Good point

Good news for busy pet owners who would like to keep their domestic cats and dogs more entertained and happy, but must stay in the office or are constantly on the go: the Petcube Camera allows them to watch their pets remotely and even play with them, regardless of the distance.

The Petcube Camera is a sleek, wide-angle, Wi-Fi-enabled camera with real-time HD video that is controlled through Petcube’s mobile apps. Using the Petcube Camera, dog and cat owners can not only see what their pets are up to, but also hear and speak to them through a built-in microphone and speaker.


One of the most important things here is that a pet owner can play with and exercise their pets from a distance. This is possible by remotely controlling the certified, safe laser pointer built into the device. As probably everybody knows, pets love laser games.

Using the Petcube Camera, pet owners can have peace of mind when leaving their pet alone at home.

To better understand how the Petcube Camera works, you can watch the video interview that I conducted with Andrey Klen, chief operating officer of Petcube.


Pet owners can take pictures of their pets through the Petcube Camera and post them directly to social media. They can also share access to their camera with friends, family and the community under three separate privacy levels.


But the coolest thing about this device is that the Petcube App (you can download it for free: iOS, Android) lets everybody – even those who have neither a pet nor a Petcube Camera yet – watch, talk to and play with pets from all over the world. The company is committed to helping shelters and rescue groups save animals and find them new loving homes. You can play with shelter cats and dogs from your smartphone anytime you want, and if you fall in love with a pet in need, you can always contact the shelter and adopt your little friend in reality.


The Petcube Camera was funded on Kickstarter in 2013. During the crowdfunding campaign, as much as $251,000 was collected against a goal of $100,000. The first Petcube Cameras were delivered to Kickstarter backers and early customers in December 2014. In 2015, Petcube entered the biggest retail chains in North America and expanded to retail stores in the EU.

Andrey Klen, chief operating officer of Petcube, who I met at IFA 2015 fairs in Berlin. Photo by Trend Nomad

The company is headquartered in San Francisco, CA, with offices in Kyiv, Ukraine, and manufacturing in Shenzhen, China. If you have any questions regarding the Petcube Camera, please contact Andrey Klen, the COO of Petcube, at klen@petcube.com.




PROjected future

New models of kitchen appliances will differ from their predecessors not only in terms of styling and energy and water consumption. According to Grundig, their main feature will be an integrated virtual interface, operating on the basis of projectors and sensors that recognise gestures.

Are kitchen appliance buttons superfluous? Looking at the prototype of the Grundig VUX (Virtual User eXperience) control system, it is easy to be convinced that the answer is „yes”. Fortunately, it is not about controlling kitchen devices with a smartphone. The VUX system, comprising a hood, hob and dishwasher, completely dispenses with fixed knobs, buttons and a mobile app. Instead, it uses intelligent projection technology and gesture recognition to control the household appliances.

Grundig VUX control panel credits Trend Nomad
In the VUX system, digital buttons are displayed only when a user needs them. Among many advantages, this solution improves the design by underlining the purist look of a modern kitchen with uncluttered surfaces and makes cleaning easier. Photography: Trend Nomad

A miniature projector installed in the hood projects the controls for the hob, dishwasher and hood onto, or next to, the cooking surface. The appliances can be controlled as usual – turned on and off, programmes selected, temperatures and cooking times adjusted – but in a different way. The controls are used much like a smartphone touch screen, except that VUX uses its projection system to recognise commands.

Unlike traditional knobs and buttons, the virtual buttons can be moved around when you need a surface to put something down or prepare ingredients. They even move automatically: if you place a pot on a virtual button, it will move to a free space.

To fully understand how the system works, please press play and watch the short movie embedded below. Software developer Pedro Batista, a member of the Arçelik SW Innovation Centre team, explains all the details.


The cooking surface in the VUX induction hob is divided into eight rectangles. It can be heated exactly where a user places their cookware. If a pot or pan is moved, the heated surface will follow it. The system also indicates where a pot should be placed on the stove and whether or not it has been positioned exactly in the centre of the heat source. VUX recognises when something other than a pot or a pan is placed on a hot area and switches it off immediately for increased safety.
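The zone-following behaviour described above can be sketched in a few lines of code. This is purely my illustration of the logic – the class and method names are invented, not Grundig's actual firmware:

```python
class VirtualHob:
    """Toy model of the VUX hob: eight zones, heat follows the cookware,
    and a non-cookware object on a hot zone switches it off for safety."""

    ZONES = 8

    def __init__(self):
        self.heat = [0] * self.ZONES  # heat level per zone, 0 = off

    def place_cookware(self, zone, level):
        """Put a pot on a zone and set its heat level."""
        self.heat[zone] = level

    def move_cookware(self, src, dst):
        """The heated surface follows the pot to its new zone."""
        self.heat[dst] = self.heat[src]
        self.heat[src] = 0

    def place_object(self, zone):
        """Anything that is not cookware switches a hot zone off immediately."""
        if self.heat[zone] > 0:
            self.heat[zone] = 0
```

For example, setting zone 2 to level 7 and then sliding the pot to zone 5 moves the heat along with it, while putting a wooden spoon on zone 5 afterwards turns that zone off.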

Grundig VUX baby cam and phone credits Trend Nomad
Grundig VUX system includes Baby Cam view and incoming call notifications. Photography: Trend Nomad

The VUX system can also connect to a Baby Cam, which lets you watch your child sleep while you cook. If you pair your smartphone with the system via Bluetooth, the hood will also notify you about incoming calls and let you answer them without reaching for the phone – a microphone and a speaker are built into the hood.

Grundig VUX credits Trend Nomad
The Grundig VUX prototype was presented at IFA 2015 fairs in Berlin from 4 to 9 September 2015. Photography: Trend Nomad

The VUX control system is far more than just a prototype shown to attract media attention. The system, including the hood, hob and dishwasher, is expected to go on sale in the second half of 2016.

P.S. I hope the final, consumer version of the VUX hood will include a home security camera that can be activated remotely from a mobile app. For safety reasons, it should be compatible only with smartphones that have a built-in fingerprint sensor.



What I heard

While the popularity of virtual reality goggles such as the Oculus Rift can profoundly affect cinematography and the video game industry, the appearance of smart headsets such as the 3D Sound One, with its built-in motion sensor, may cause a shift in the audio market and help bring surround sound formats to music streaming services.

To immerse themselves in three-dimensional artificial sound, people usually go to a cinema, or play a DVD, Blu-ray or downloaded file on a home theater system consisting of six or eight speakers installed around the couch. Gamers also use multichannel headphones. In the latter case, users take it for granted that the movements of their head do not influence the sound they hear from the headset. But with 3D Sound One – the world’s first smart 3D sound headphones with a motion sensor – those days may be over.


The 3D Sound One headphones made by the French company 3D Sound Labs are much more than a classical audio device. 3D Sound One is a personal 7.1 home theater sound system with a built-in head-tracking module. When users move their head, the sound changes accordingly in real time. The sensors pick up and adjust for even the slightest micro-movements, producing a sound experience with unprecedented realism that immerses users completely in their activity.

Beyond 360° photos and movies, 3D sound is key to creating truly immersive new worlds and a realistic experience.

3D sound is sound as you experience it in real life: you can identify where the sounds around you are coming from. 3D Sound One headphones create a fully artificial 360° environment in which sound sources are perceived as coming from anywhere in space – at any distance, from any direction.

3D Sound One VR
3D Sound Labs is working on partnerships with virtual reality goggles manufacturers.

Right now, the 3D Sound One headphones can be used for games, movies and music. Drivers that create a virtual 7.1 sound card can be downloaded for PCs with Windows 8.1 or 10. The headphones also work with iOS mobile devices: you can download the 3D Audio Player app from the App Store, then drag and drop your audio or video files into the app. You don’t need any specific format or recording – the app is compatible with most multimedia formats available today. The French company plans to release an SDK to developers, so in the near future there will be more and more apps integrated with the technology developed by 3D Sound Labs.

3D Sound One Zoom

The box on the left side of the headphones houses the motion sensors: a gyroscope, an accelerometer and a magnetometer. Data travels from the sensors to the app via Bluetooth Smart, while the sound goes through a wired audio link to provide excellent sound quality. The hybrid system is powered by a rechargeable battery offering more than 18 hours of battery life.
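To get a feel for what the head tracking does to the audio, here is a rough sketch of the underlying idea: the renderer subtracts the head's yaw (reported by the motion sensors) from the world position of each sound source, so the source stays put when you turn your head. The function names are mine, and real binaural rendering uses HRTFs rather than the crude stereo pan shown here:

```python
import math

def relative_azimuth(source_az_deg, head_yaw_deg):
    """Azimuth of the source relative to the listener's head, in (-180, 180].
    Turning the head right by N degrees moves the source N degrees left."""
    rel = (source_az_deg - head_yaw_deg) % 360.0
    if rel > 180.0:
        rel -= 360.0
    return rel

def stereo_gains(rel_az_deg):
    """Crude constant-power pan from the relative azimuth."""
    pan = max(-90.0, min(90.0, rel_az_deg))      # clamp to front hemisphere
    theta = math.radians((pan + 90.0) / 2.0)     # 0 deg = hard left, 90 = hard right
    return math.cos(theta), math.sin(theta)      # (left gain, right gain)
```

So a source placed at 90° (to your right) ends up straight ahead once you turn your head 90° to face it – exactly the effect that makes the virtual speakers feel fixed in the room.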

Movies will come with real 3D sound, like in a theater, but you will not have to put 64 speakers in your home. Instead, you will listen to 3D audio over your headphones.

I tried the 3D Sound One headset at the IFA fair that took place in Berlin from 4 to 9 September 2015. The experience was incredible! Unfortunately, after the show I found many complaints from Kickstarter backers about the poor quality of the first generation of the headphones (called NEOH). They claim that the headphones break within a few days of delivery. I hope that the company will fix this problem as soon as possible and that the second generation will be flawless. I keep my fingers crossed for 3D Sound Labs. Their idea has the potential to conquer the global market, but to make that happen the quality of their product must be perfect.

3D Sound Labs Maxime Sabahec credits Trend Nomad
Maxime Sabahec, a member of the 3D Sound Labs team working at the IFA fair. Photography: Trend Nomad

3D Sound Labs was founded in January 2014 by entrepreneur Dimitri Singer, consumer electronics specialist Xavier Bonjour (Technicolor, LG, Philips) and Supélec research engineer Renaud Séguier. Their aim is to revolutionize the audio user experience. If you have any questions concerning the 3D Sound One headphones, please contact Xavier Bonjour directly at x.bonjour@3dsoundlabs.com.



Good shot

After visiting the IFA consumer electronics fair in Berlin, there cannot be the slightest doubt that spherical photography is going mainstream. This year and next, many affordable cameras capturing 360×180 and 360×360 photos will arrive on the market. However, there is only one device that can be thrown (!) into the air to take a full-spherical picture – and it has a resolution higher than 100 megapixels.

A spherical photo composed of a dozen shots can be taken even with a smartphone – all you need is a special app. On the other hand, this approach takes time and works only in places where everything stands still for at least several seconds. In practice, such places are very hard to find, and the effect will be far from perfect.

Panono Jonas Pfeil

A few years ago, when Jonas Pfeil, a computer engineering graduate of the Technical University of Berlin, was on vacation, it struck him that taking panoramic pictures should be easier than shooting multiple single frames and later stitching them together on a computer. His idea: throwing a ball-shaped camera into the air to capture the images might work better. Over the last couple of years, he and his team have been turning the idea into a real product. Finally, after two successful crowdfunding campaigns, the product – named Panono – is ready to conquer the global market.

Panono 3

The current model – the Panono Explorer (pictured above on the right) – is a small, grapefruit-sized, ball-shaped camera with 36 individual camera modules embedded all around it. They fire simultaneously to capture everything in every direction and deliver a 360°×360° full-spherical, 108-megapixel panoramic image.
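Full-spherical panoramas like these are typically stored as equirectangular images, where the horizontal axis spans 360° of yaw and the vertical axis 180° of pitch. A minimal sketch of that mapping (the default resolution below is an arbitrary stand-in, not Panono's actual output size):

```python
def equirect_pixel(yaw_deg, pitch_deg, width=8192, height=4096):
    """Map a viewing direction to a pixel in an equirectangular panorama.
    Yaw wraps around 360 degrees; pitch runs from +90 (up) to -90 (down)."""
    x = int((yaw_deg % 360.0) / 360.0 * width) % width
    y = int((90.0 - pitch_deg) / 180.0 * height)
    return x, min(y, height - 1)   # clamp the bottom pole to the last row
```

Straight up lands on the top row, the horizon on the middle row, and yaw simply scrolls the image horizontally – which is why viewers can pan endlessly left and right.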

Press play on the movie below and watch a short interview with Jonas Pfeil that I recorded at IFA 2015.


The 36 camera modules trigger automatically when Panono is tossed into the air and reaches its apex, where it is still for a moment before descending. The Panono camera can also be operated with a selfie stick, or – when mounted on a tripod – triggered from a mobile device acting as a remote control connected over Wi-Fi.
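How could a camera know it has reached its apex? Panono has not published its trigger algorithm, but a plausible sketch uses the accelerometer: during ballistic flight the sensor reads near zero g, and the time from release to apex equals the launch speed divided by g. The thresholds and sampling scheme below are my assumptions, not Panono's firmware:

```python
G = 9.81  # m/s^2

def apex_delay(samples, dt, freefall_frac=0.3):
    """Estimate when to fire after release.
    `samples` are accelerometer magnitudes (m/s^2) sampled every `dt` seconds
    during the throw. Free fall begins when the magnitude drops near zero;
    launch speed is approximated by integrating the acceleration above 1 g
    during the throw, and the apex follows release by v0 / g.
    Returns (release_index, seconds_from_release_to_apex), or None if the
    camera was never released."""
    v0 = 0.0
    for i, a in enumerate(samples):
        if a < freefall_frac * G:        # near 0 g: camera has been released
            return i, v0 / G
        v0 += max(a - G, 0.0) * dt       # accumulate upward speed of the throw
    return None
```

A throw of 2 g sustained for a tenth of a second, for instance, yields a launch speed of about 1 m/s, so the shutter would fire roughly 0.1 s after release.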


Viewing panoramas in the Panono App is a fully immersive experience: the viewer seemingly moves inside the image by tilting their mobile device up and down, left and right, and all around, and by pinching or spreading the image to view it from other perspectives.

Panono panoramas can also be explored in a web browser, using the cursor to move around in all directions inside the image – as demonstrated in a panorama of Panono executives with the Queen of England, taken on the occasion of the 50th anniversary of the „Queen’s Lecture” at the Technical University of Berlin. The Panono Camera was one of only two technology projects demonstrated at the event. Click here to open a new window and watch the 360×360 picture.

Panono zoom

Early versions of the Panono Explorer have been in trial use for months by marketing teams at several consumer companies, including BMW (at Auto China 2014 in Beijing) and Lufthansa, and at organizations such as the World Wide Fund for Nature (WWF Germany).

Panono group

In 2014, Panono conducted a crowdfunding campaign on Indiegogo that raised a then-unprecedented 1.25 million dollars. To support further development and prepare the camera for serial production, Panono then ran a crowdinvesting campaign on Companisto.com, where it raised another 1.6 million euros. In September 2015, the Panono Explorer Edition will be delivered to the initial crowdfunding backers. An additional one thousand cameras are being offered to tech and photography enthusiasts, hotel chains, real estate agents, etc., for 1,499 USD.

Panono founders
Berlin-based Panono GmbH was founded in 2012 by (from the left) Qian Qin, Jonas Pfeil and Björn Bollensdorff.

When the Panono camera goes into volume production, the retail price is expected to be 599 dollars. The consumer version will have all of the features and capabilities of the exclusive Explorer Edition, plus advanced fall protection to protect the camera in case it is dropped on a hard surface while being operated by tossing it into the air.

Panono Jonas Pfeil credits Trend Nomad
Jonas Pfeil at IFA 2015. Photography: Trend Nomad

I became acquainted with Jonas Pfeil at the IFA consumer electronics show, held in Berlin from 4 to 9 September 2015. If you have any questions regarding Panono, please send an email to Jonas at jp@panono.com.



The post-screen era

Despite the development of all kinds of sensors, most interfaces are still graphics-based, favoring sight above all other senses. To switch a smart bulb on or off, change a song in a music streaming app or text a friend, a smartphone requires its user’s full visual attention. Is there a chance for a change in user interface design in the next decade?

I talked about the (not very distant) future of user interface design with Tobias Eichenwald, CEO and co-founder of Senic, a hardware and software start-up based in Berlin. He and his team are focused on exploring and designing new ways for humans to interact with technology, with the goal of making the user experience seamless and natural, beyond the limits of screen-based user interfaces. We met at the venue of the DMY 2015 festival.

Senic Nuimo credits Trend Nomad
(from the left) Tobias Eichenwald and Felix Christmann, two-thirds of Senic’s founders. Photography: Trend Nomad

At what stage of the evolution of user interface design are we right now?
Tobias Eichenwald: Looking at the past of UI, we can see two major shifts. The first was the emergence of personal computers in the 1980s; the second, smartphones in the 2000s. After digitizing our work tools, communication and entertainment, we are now in the middle of transferring physical objects such as light and speaker switches onto mobile apps. As a result, people spend more and more time staring at smartphone screens. But let’s be honest: nobody enjoys browsing through mobile apps to turn on a light. A smartphone interface requires several steps for this kind of operation: finding and pulling out the phone, unlocking it, searching for the right app, opening it and selecting the right setting. This process is time-consuming. Compared to the user flow of a traditional light switch, it is a step in the wrong direction. Please note that in real life, a home is a place for everyone, not just for 20- or 30-year-old digital natives.

Try to hit a button on your smartphone with your eyes closed – it will not work. Design in the post-screen era will focus on interactions very similar to those with low-tech objects.

How long do we have to wait for the third shift in user interface design?
The next shift – from a centralized graphical user interface that engages only vision to ubiquitous, specialized user interfaces – is just around the corner. Within the next 15 years, we will not use just one centralized device such as a smartphone. Instead, we will use a combination of many interfaces: speech recognition, gesture recognition, wearables, haptics. One technology will replace another, and we will use different interfaces in different situations. For example, if I want to get information about a product when I am alone in a room, I can use voice recognition. But if I am talking to someone at a table and want to adjust the music volume, I would prefer a haptic device that I can reach blindly and discreetly, without interrupting our conversation.


Something like Nuimo, the first product from Senic?
Exactly. Nuimo is a freely programmable controller for a computer and mobile devices. It connects directly to anything that speaks Bluetooth Low Energy.

Nuimo is the first product in an entire line of interfaces, smart surfaces and objects from Senic, which will include collaborations with major companies in the automotive and furniture industries.

How can a user interact with the device?
There are four main inputs: an analog ring that runs around the circumference of Nuimo, capacitive touch and click on the face of the device, and two gesture sensors that detect a sweep motion over the device or upwards from its face. Nuimo also includes an LED matrix that shows simple graphics through the surface; it serves as a visual output and a signifier when switching applications.


Which apps are supported?
Nuimo works with Sonos Speakers and Philips Hue Lights, apps such as Soundcloud, Spotify, and many more.


How many?
Currently, Nuimo has more than 30 applications and integrations available – a number that keeps growing thanks to a committed developer community. Nuimo is freely programmable, and building applications for the controller is simple. New apps can be loaded through a smartphone or a computer, and the controls can be reconfigured to suit the user’s interests.


Can I use Nuimo with more than one app at the same time?
Different applications can be loaded into the controller and switched between with a simple swipe motion. This makes it easy to switch between playing music and controlling the lighting at home or visualizing a timer application for cooking.
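The interaction model described in the last few answers – one physical controller, several loaded applications, a swipe to switch between them – can be sketched as a small event dispatcher. The event names and app interface below are my invention for illustration, not Senic's actual SDK:

```python
class NuimoStyleController:
    """Toy model: rotate/press control the active application,
    a swipe switches between the loaded applications."""

    def __init__(self, apps):
        self.apps = apps              # e.g. {"music": Dial(), "lights": Dial()}
        self.names = list(apps)
        self.active = 0               # index of the currently active app

    def handle(self, event, value=None):
        app = self.apps[self.names[self.active]]
        if event == "swipe":          # switch to the next loaded application
            self.active = (self.active + 1) % len(self.names)
        elif event == "rotate":       # ring rotation: adjust volume/brightness
            app.adjust(value)
        elif event == "press":        # click on the face: play-pause / on-off
            app.toggle()

class Dial:
    """Minimal stand-in for an app with one level and one on/off state."""
    def __init__(self):
        self.level, self.on = 50, False
    def adjust(self, delta):
        self.level = max(0, min(100, self.level + delta))
    def toggle(self):
        self.on = not self.on
```

With two apps loaded, rotating the ring adjusts the music volume; one swipe later, the same press gesture toggles the lights instead – which is exactly the mode-switching behaviour Tobias describes.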


How did you manage to change an idea into a real product?
Firstly, we completed a successful crowdfunding campaign on Indiegogo. At that time, we focused on integrations with computer applications. The campaign reached its funding goal in three days and went on to be funded over 500 percent, reaching $280K in total. The second campaign, conducted on Kickstarter, focused on integrating Nuimo with smart home objects. The controller reached 100 percent of its funding goal in just 32 hours. The goal was €55K, but in the end we raised as much as €210K.

Senic Nuimo map
Nuimo was designed and is manufactured entirely in Germany.

It sounds like a huge success.
It is just the beginning. Nuimo is the first product in an entire line of interfaces, smart surfaces and objects from Senic, which will include collaborations with major companies in the automotive and furniture industries.


Ten characteristics of a new generation of user interfaces (based on the article „The Future of Human Computer Interaction” published on the official Senic blog):
1. Decentralized. UI will shift away from centralized devices such as smartphones. The light switch moved onto the smartphone and will move away again – into smart light switches, speech, or completely new forms like eye tracking. You won’t need to carry your interface around anymore. Interfaces will be where you need them to be.
2. Specific. Interfaces will shift away from a generic screen towards more specific interfaces that only do a small number of things and that are specifically designed for that use case.
3. Human-centered. Graphical user interfaces only use the visual sense and a reduced version of haptics. Future interfaces will integrate more human senses. Interfaces will use our brain waves or body movements.
4. Instant. Dealing with menus will be obsolete. Things will be instant again. The question is not whether actions take 1, 3 or 5 steps. The question will be if an action can be done instantly or not.
5. Simple. Future interfaces will ignore the assumed integration with graphical user interfaces and will focus on making things easier than existing solutions.
6. Invisible. Technology will not be in the foreground anymore. It will blend into the background. It will disappear into walls, tables, micro projectors or glass.
7. Augmented and virtual. The digital and physical will blend. You will be able to read context information about a broken motor not through a phone but directly in the surrounding space of the object.
8. Passive. You won’t need to trigger every action manually anymore, sensors will do that job for you. Examples include a garage door that can track when you’re getting close to your house or lights that turn on automatically when you’re walking into a room.
9. Tangible. A race driver would never replace his physical wheel with a tablet; a musician would never replace his guitar. Haptic and tangible interfaces have value: they let you use your motor memory and a multitude of senses, and interact with technology in the most natural way.
10. Magical. We will be able to talk to rooms and machines in natural language. We will be able to make gestures in the air to trigger actions. We will only have to think of things to happen and they will.


The final prototypes of Nuimo, right before the start of mass manufacturing, will be shown at the IFA fair in Berlin from 4 to 9 September. If you have any questions concerning Nuimo and will be in Berlin at that time, you are more than welcome to visit Senic’s booth (Messe Berlin, hall 11.1, booth 11 b). Naturally, you can always reach the Senic team at hi@senic.com.



Music in your head

It’s official: music created with a brainwave-monitoring device is going mainstream. As part of the marketing campaign for the new sci-fi series „Sense8”, Netflix has recently released „Brainwave Symphony”, an electronic musical piece recorded using a headband that monitors the alpha, beta, delta, gamma and theta brainwaves of eight volunteers.

The song promoting the Wachowskis’ new series on the popular streaming service is neither the first nor the last example of using brainwave sensors to compose music – similar experiments have been carried out for decades. But this time the song is no longer a niche artistic project aimed at a small audience. Because it is part of a marketing campaign run by an Internet giant and promotes a series with a growing number of fans, „Brainwave Symphony” – available on Spotify and in the video embedded below – has a chance to reach a much wider audience.


Just a few weeks before this piece of music was released, I visited the DMY 2015 festival (held from 11 to 14 June in Berlin), where I met Bob van Luijt and Renate Roze. They were showing a project titled „control(human, data, sound)” that incorporates brainwaves into creating music. In his work, Bob used the Muse Brain Sensing Headband, which is sold and advertised on the consumer market as a device facilitating daily meditation. In January this year I had an opportunity to test the device at the CES fair (despite my initial skepticism about its effectiveness, Muse really helped me relax for a while), but at the time I did not know that the headband could also be reprogrammed into a kind of musical instrument. That idea came to Bob. And he is not alone – Muse was also used somewhat later by the studio Tool in the production of „Brainwave Symphony” for Netflix. And I have a hunch that this is not the last word: as Bob van Luijt says, a few years from now brainwave sensors could be built into virtual reality headsets.

Kubrickology Bob van Luijt Renate Roze DMY credits Trend Nomad
Bob van Luijt (on the right): creator, composer and founder of Kubrickology, a strategic design company focusing on technology, music, art, games and urban design. Renate Roze: co-producer of the film about the „control(human, data, sound)” project; a freelancer, interviewer and editor for international film festivals. Photography: Trend Nomad

Trend Nomad: You are a musician by background, but instead of writing notes, you create music compositions by coding.
Bob van Luijt: A few years ago, when I was visiting a technology conference in San Francisco devoted to augmented reality, I saw Muse – a wearable device that measures brain activity. I started thinking about what kind of project I could do with data collected from a human brain, and decided to use it for an artistic purpose. I created a music composition based on variables that I get from the brain-sensing headband. The composition itself has a fixed form, structure and set of instruments, but many elements – such as keys, tempo, duration, note length and panning – are determined by variables derived from data coming live from the Muse device that the dancer wears while dancing.


Your project is much more about data than about music.
I definitely agree, but please note that data stored in a computer means nothing. Only a human being can contextualize data, and only context gives data its value. Nowadays more and more algorithms analyze data and try to give it context. Sometimes that context makes sense; usually it doesn’t at all. The bigger the pile of data gets, the more meaningless it becomes. I wonder what will happen if one day data is harvested directly from our brains. My project, „control(human, data, sound)”, is an abstraction of that idea.

How do you translate brainwaves into music?
I used the Muse Brain Sensing Headband’s developer kit and wrote a function that browses through the data and turns it into a number – an integer – that affects the composition. I created the composition using the Node.js development environment with an integrated MIDI library, as well as the Logic Pro X recording software. I chose 14 instruments from three libraries: strings and string-section effects from ProjectSAM’s „Symphobia 2” series, Eduardo Tarilonte’s „Epic World”, and Best Service’s „Synth Werk”.
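Bob's actual implementation runs on Node.js with a MIDI library; as a rough illustration of the idea, here is how band-power readings might be scaled into MIDI-range values. The band names, value ranges and parameter mappings below are my assumptions, not his code:

```python
def to_midi_value(band_power, lo, hi):
    """Scale a band-power reading into the 0-127 MIDI value range."""
    clamped = max(lo, min(hi, band_power))
    return round((clamped - lo) / (hi - lo) * 127)

def frame_to_notes(frame):
    """Turn one frame of alpha/beta/theta readings into (pitch, velocity, pan).
    Here pitch follows alpha, velocity follows beta, and stereo panning theta."""
    pitch = 36 + to_midi_value(frame["alpha"], 0.0, 1.0) // 2   # from C2 upward
    velocity = to_midi_value(frame["beta"], 0.0, 1.0)
    pan = to_midi_value(frame["theta"], 0.0, 1.0)
    return pitch, velocity, pan
```

Each incoming sensor frame then yields a playable note event, which is essentially the "data point to integer" translation Bob describes.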

When the software was ready, you put the Muse headband on a dancer’s head and asked him to start dancing. Does his movement have any impact on the music?
Sometimes yes, and sometimes no. The software itself determines where to place the information and make music out of it. It always sounds different when someone else wears the headband, or when you record it twice with the same person. A dancer is not able to change the music deliberately.
Renate Roze: Please note that music is also made when a dancer stands or sits still. His brain is always active; he does not have to move to make music. A melody will not become more dynamic when a dancer starts dancing faster – that is not the case. It depends on the state of his mind.
B.v.L.: If I show the dancer a picture that would calm or scare him, the sensor will notice the change in his brain activity and the music will reflect it.
R.R.: But the change would be very subtle. As Bob has already mentioned, the synthesisers are preselected. If you repeated the same experiment and the same dancer thought of something completely different while dancing, or made other movements, the outcome would be modified in a subtle way. We would hear the same kind of sound – maybe a little faster, softer or louder – but it would not be a completely different piece of music.
B.v.L.: But it is never exactly the same. The wearable device sends six data points every millisecond or two. That means every second I get about 4,500 data points. That’s a lot of data to be translated into music.

Kubrickology data
Columns of the player in the console made by Bob van Luijt.

Have you ever considered that some people might be afraid of putting such a device on their head?
B.v.L.: If someone is wondering whether his or her thoughts could be „hacked” by a wearable device, the answer is: „No”. It is impossible to check what you are thinking about when you sit, stand, walk or dance with Muse on your forehead. All I can see is that your brain activity changes when you are stressed or relaxed.

Besides music and wearable devices, you are interested in virtual reality. Do you think that one day devices such as the Oculus Rift will be equipped with brainwave sensors?
B.v.L.: I think that within the next 15 years sensors of this kind – or rather more advanced ones – will become a common part of VR devices. I am sure that people will wear more and more sensors around their bodies, including on the head. Facebook, among others, will be delighted.

Do you believe that in 2020 VR devices will be as popular as smartphones are today?
B.v.L.: I think it will be more like 3D television is right now – not everyone has a 3D TV set at home. You need a lot of equipment to watch VR content. People can also – literally – get sick of it [but don’t worry, there is a VR motion sickness relief capsule – ed.]. In my opinion, virtual reality is being commercialized too soon. A few years ago, when I went to a virtual reality conference for the very first time, I heard a lot of different talks about VR. Now it’s completely taken over by commercial issues. Devices launched on the market as end products are, in fact, just prototypes. But maybe I’m wrong – some people once said that the Internet would never go mainstream.

If you are interested in more technical details of the „control(human, data, sound)” project, please watch the movie below and read Bob’s article: „A story about how I created music out of data”.


If you have any questions regarding „control(human, data, sound)” project, please contact Bob van Luijt at bob@kubrickolo.gy.


P.S. If you need more information about the Muse headband, you can watch the movie embedded below and visit the Choosemuse.com website. If you would like to use the device to compose music with your brainwaves, please download the appropriate code from Bob van Luijt’s library available on GitHub.



Privacy is so last season

In the next few years, we will pay a much higher price for clothes integrated with wearable electronics than just a sum of money. The hidden cost will be a decline in our privacy, as apparel companies join the IT corporations that persistently collect and analyze data about our location, pulse and body temperature.

On January 30, 2015, Facebook rolled out its new Data Use Policy and Terms of Service. At the request of the Belgian Privacy Commission, Facebook’s revised policies and terms were extensively analyzed by researchers from KU Leuven and the Vrije Universiteit Brussel and elaborated on in a comprehensive report. According to the publication, „Facebook collects location data in order to allow users to share their location with peers. However, this data may also be re-used to target advertising. (…) There are no further (in-app) settings, for example, allowing the individual to authorize location sharing for one purpose but decline it for other purposes. (…) The only way to stop the Facebook mobile app from accessing location data on one’s smartphone is to do so at the level of the mobile operating system”. All or nothing – but frankly speaking, it is not a surprising discovery.

However, as we can read further in the report, „even when a user decides to turn off Facebook’s access to location data, this still does not prevent Facebook from collecting location data via other means. Pictures taken with smartphones, for example, often contain location information as metadata. As a result, location data may be shared indirectly when uploading pictures to Facebook. Combined with features such as facial recognition, it is fairly easy to pinpoint the location of specific individuals to specific locations in time”. Furthermore, a comparison of Facebook’s Data Use Policy from 2013 with the 2015 version shows that in the new document „there is no longer any mention of limiting the storage or use of location data to the time necessary to provide a service”.
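To see how easy it is to read such metadata, note that EXIF data embedded in photos stores GPS coordinates as degree/minute/second values plus a hemisphere reference. Below is a minimal Python sketch (the function name and sample values are mine, purely illustrative) of the conversion to the decimal degrees used by mapping services:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS degrees/minutes/seconds to decimal degrees.

    `ref` is the hemisphere reference stored alongside the coordinate:
    'N'/'E' give positive values, 'S'/'W' negative ones.
    """
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal


# A photo taken in central Berlin might carry (52, 31, 12, 'N') and (13, 24, 36, 'E'):
lat = dms_to_decimal(52, 31, 12, "N")   # 52.52
lon = dms_to_decimal(13, 24, 36, "E")   # 13.41
```

Once a service has these two numbers for every uploaded photo, building a timeline of where you have been requires no extra effort at all.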

Since location data is collected even without our explicit consent and stored without any limits, perhaps we should look for some benefit for ourselves in this situation? Maybe location data can bring value not only to social networks and advertisers? That is the opinion of three design students who, due to their young age and place of residence, have never known a world without the Internet.


A group of Media Interaction Design students from Osnabrück University of Applied Sciences – Heike Gabel, Robert Schnüll and Niklas Thyen – noticed that when someone receives a text message via Facebook Messenger, the recipient usually does not care (unlike Facebook itself and advertisers) where the message was sent from. And even if a recipient checks the sender’s location in the Messenger app – which is possible by swiping left on the message – he or she soon forgets about this allegedly worthless data.

Remember the Warm Times

The project, called Remember the Warm Times, changes the situation described above and makes it valuable for the recipient to save the sender’s location data. The project is far more advanced and sophisticated than simply adding pins to a map. In this case, digital communication becomes a physical interaction.

Remember the Warm Times scarf

The designers made the assumption that the most positive feedback a person can get is a warm feeling based on a primal instinct. Guided by this idea, they designed and developed a scarf that generates warm feedback (literally – it heats up) when it is located in certain places. Which places can be called hotspots? Firstly, whenever someone sends you something nice via Facebook Messenger, the app saves data about his or her location, and you will feel heat on your neck each time you pass that place. Secondly, you can choose „warm zones” in advance for people you care about. For example, by choosing the airport, you set up a nice and subtle way to say „goodbye” or „welcome home” to someone you love just before his or her departure or just after arrival. Naturally, this person must wear the Remember the Warm Times scarf to feel the warm message.

Remember the Warm Times app

Besides the scarf, there is also a mobile app. In fact, it works as the brain of the system: it collects, analyzes and displays data, filters the incoming content, compares it with the current GPS position and, at the right location, wirelessly activates the heater sewn into the middle of the scarf.
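The location matching at the heart of such a system can be sketched in a few lines. Below is a minimal, hypothetical Python sketch – not the project’s actual code; the names and the 50-metre trigger radius are my assumptions – of how an app might compare the current GPS position with saved hotspots using the haversine distance:

```python
import math

# Hypothetical trigger radius – the real project's value is not published.
TRIGGER_RADIUS_M = 50

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6371000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def should_heat(position, hotspots, radius_m=TRIGGER_RADIUS_M):
    """Return True when the wearer is within radius_m of any saved hotspot."""
    lat, lon = position
    return any(haversine_m(lat, lon, h_lat, h_lon) <= radius_m
               for h_lat, h_lon in hotspots)

# Hotspot saved from a friendly message sent near the Brandenburg Gate:
hotspots = [(52.5163, 13.3777)]
print(should_heat((52.5164, 13.3778), hotspots))  # True – heater on
print(should_heat((52.5200, 13.4050), hotspots))  # False – about 2 km away
```

The real app would run this check continuously in the background and send the heater command over a wireless link, but the core idea – a geofence that triggers a physical response – fits in a single function.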

Remember the Warm Times, credits: Trend Nomad
The scarf and application Remember the Warm Times were exhibited at DMY 2015 festival in Berlin. Photography: Trend Nomad

Remember the Warm Times is a prototype made as a winter semester 2014/2015 project at Osnabrück University of Applied Sciences. I found it and talked to one of its designers, Robert Schnüll, at the DMY International Design Festival, which took place in Berlin from 11th to 14th of June 2015. If you have any questions concerning this project, you can contact the group by sending an email to info@rememberthewarmtimes.com.

