IBM Watson Tradeoff Analytics takes the guesswork out of decision-making

Weigh alternatives systematically with a dynamic tool that helps you make the best possible choice

Ateret Anaby-Tavor
Editor's note: This article is by Ateret Anaby-Tavor, Manager of Decision Analytics at the IBM Watson Group in Haifa, Israel

How do you decide what product or service to purchase? Some people research first; some let an advertisement or a salesperson convince them; others make a random choice. You may sometimes have felt a twinge of doubt, a sense that you should have given it more thought. It’s a tough job to systematically compare options, weigh all the relevant pros and cons, and then confidently make the best possible choice with minimal compromises.

My team in Haifa developed Watson Tradeoff Analytics to help people make better, more educated decisions using formal methods. These formal methods are the techniques and tools we created to model decisions as mathematical entities, and include optimization, graph algorithms, and computational geometry, among others. We initially developed the project inside IBM Research, and it is now part of the Watson services platform -- open to developers and anyone who wants to use it inside their environment or applications.

“Behind the scenes” analytics: not just another comparison tool 

A tradeoff is defined as a balance, or compromise, between two desirable but incompatible features. Without a formal method of comparison, the outcome is essentially a gamble, because it’s hard to see, or even realize, what we’re compromising on, especially when the decision is made between multiple options. Choosing a car, house, financial product, school, or job candidate are just a few examples where there is generally no single choice that is best in every way.

For example, anyone can go to a website and compare cellphones by choosing criteria to get a list of models. If price is the deciding factor, you could just choose the cheapest from the list of options. But it’s not that simple. Almost every decision we make involves more than one consideration. Even if the price factor is important, will you just purchase the cheapest phone? Not likely. The screen size, manufacturer, camera resolution, color, weight, battery life, and customer reviews all matter to varying degrees.

No one wants to compromise more than they have to. The bottom line is, it’s hard to compare because each option has its own advantages, and sometimes those advantages are at odds with each other. This is exactly where the Tradeoff Analytics tool comes in. It gives a visual and interactive representation of the data so any user, not just mathematicians, can easily navigate the choices to compare. The tool takes into account different objectives, and highlights the ones that you determine are most important.

Say you use the Tradeoff Analytics tool to search for that cellphone. You can define each of the parameters that are important to you without thinking about potential conflicts. One click translates all that information into an easy-to-read, polygon-shaped map. Rest assured that Tradeoff Analytics has already filtered out the cellphones that are inferior in all your objectives. Now, all you have to do is scan your top options on the map, then click, compare, and decide.
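The filtering step described above is, in essence, Pareto-dominance filtering: an option is dropped if some other option is at least as good on every objective and strictly better on at least one. Here is a minimal sketch of that idea in Python (the phone data is invented; the service's actual algorithm and data model are more elaborate):

```python
def dominates(a, b, goals):
    """True if option `a` dominates option `b`.
    `goals` maps each objective name to 'min' or 'max'."""
    at_least_as_good = all(
        a[k] <= b[k] if d == "min" else a[k] >= b[k]
        for k, d in goals.items()
    )
    strictly_better = any(
        a[k] < b[k] if d == "min" else a[k] > b[k]
        for k, d in goals.items()
    )
    return at_least_as_good and strictly_better

def pareto_front(options, goals):
    """Keep only the options that no other option dominates."""
    return [o for o in options
            if not any(dominates(p, o, goals) for p in options if p is not o)]

phones = [
    {"name": "A", "price": 199, "screen": 4.7},
    {"name": "B", "price": 299, "screen": 5.5},
    {"name": "C", "price": 329, "screen": 5.0},  # pricier and smaller than B
]
goals = {"price": "min", "screen": "max"}
print([p["name"] for p in pareto_front(phones, goals)])  # ['A', 'B']
```

Phone C never reaches the map: B beats it on both price and screen size, so showing it would only add noise to the comparison.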

Watson Tradeoff Analytics compares cellphone options

Each circle represents one of your options, with the parameters you’ve defined translated into color-coded segments. The visual clearly shows which parameters are maximized and which are minimized based on your preferences. Let’s look at the “price” and “size” objectives. On the map, price is dark blue and screen size is green, and we want to minimize price while finding the device with the maximum screen size. You might see that one phone is better on the size criterion but is expensive, and this is indicated by a big green slice (and a small dark blue slice) of the associated pie.

Another phone might fit the requirements of weight and price, but be less optimal in terms of screen size. The bigger the “slice,” the better that product answers that specific requirement, which highlights the trade-off when compared to another option. Not only do you get assistance on which phone to examine; the tool also offers the reasoning, so you will never miss a valuable option.

The beta version of the tool is now available, free on Bluemix for anyone to try. See our Getting Started guide, and try the demo here.
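For developers trying the beta, the core input is a JSON "problem" describing your objectives (columns) and options. The sketch below builds such a payload in Python; the field names reflect my recollection of the beta API and should be treated as assumptions -- consult the Getting Started guide for the authoritative schema and endpoint details:

```python
import json

# Hypothetical problem definition: two objectives, two phone options.
problem = {
    "subject": "phones",
    "columns": [
        {"key": "price",  "type": "numeric", "goal": "min", "is_objective": True},
        {"key": "screen", "type": "numeric", "goal": "max", "is_objective": True},
    ],
    "options": [
        {"key": "a", "name": "Phone A", "values": {"price": 199, "screen": 4.7}},
        {"key": "b", "name": "Phone B", "values": {"price": 299, "screen": 5.5}},
    ],
}

# This body would be POSTed to the service's dilemmas endpoint on Bluemix
# with your service credentials; the response contains the filtered options
# and the layout data used by the map visualization.
payload = json.dumps(problem)
print(payload[:60])
```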


Alessandro Curioni Named Director of IBM Research - Zurich

From one Director to the next
Matthias and Alessandro
Dr. Alessandro Curioni has been named vice president and director of the IBM Research lab in Zurich, Switzerland. The 22-year IBM veteran, who will assume the position at the end of the month, was named an IBM Fellow in 2014. He succeeds Dr. Matthias Kaiserswerth, who will retire and, as of 1 May, become the new managing director of the Hasler Foundation, which promotes information and communications technology for the well-being and benefit of Switzerland.

Dr. Curioni is a world-recognized leader in the area of high performance computing and computational science, where his innovative thinking and seminal contributions have helped solve some of the most complex scientific and technological problems in healthcare, aerospace, consumer goods and electronics. In 2013 he was a member of the winning team recognized with the prestigious Gordon Bell Prize.

Dr. Curioni received his undergraduate degree in Theoretical Chemistry and his PhD from Scuola Normale Superiore, Pisa, Italy. He started at IBM Research - Zurich as a PhD student in 1993 before officially joining as a research staff member in 1998. His most recent position has been the head of the Cognitive Computing and Computational Sciences department.

Dr. Kaiserswerth joined IBM in 1988 and was first named director of the Zurich lab in 2000. After taking a sales leadership role in Switzerland he was named vice president and lab director again in 2006.

As director, Dr. Kaiserswerth has made an enduring mark both physically and intellectually on IBM Research – Zurich. He presided over a significant expansion of the campus including the opening of the client-focused Industry Solutions Lab and the cutting-edge Binnig and Rohrer Nanotechnology Center. On the research side, under his tenure the Lab has developed major advancements in banking security, smart grids, high performance computing, storage technologies, energy efficient data centers, and significant breakthroughs in nanotechnology.

IBM has maintained a research laboratory in Switzerland since 1956. The facility is proud of its multicultural and interdisciplinary research community, which includes a combination of permanent research staff members, graduate students and postdoctoral fellows representing more than 45 nationalities.

IBM Research - Zurich is recognized around the world for its outstanding scientific achievements -- most notably Nobel Prizes in Physics in 1986 and 1987 for the invention of the scanning tunneling microscope and the discovery of high-temperature superconductivity, respectively.

Follow Dr. Curioni on Twitter @Ale_Curioni

Profile of an IBM scientist: Kevin Roche

Location: IBM Research Almaden, San Jose, California
Nationality: US

Research Focus: Magnetoelectronics and Spintronics, Materials Science 

I'm an engineer in the Magnetoelectronics and Spintronics group at IBM Research - Almaden. We explore the physics, materials science and potential technology applications of materials we create one atomic layer at a time with our ultra-high vacuum thin-film deposition tools. But I’ve also been making costumes, props and gadgets since I was eight years old. And I’ve been performing on stage in small charity cabaret shows (I sing standards and lounge tunes) for years.


Inside the Magnetoelectronics and Spintronics group at IBM Research - Almaden

Microbots to barbots 

Technically, my first robot was the thin film deposition system we started at our old lab in San Jose, and completed here at Almaden. It's the system in which we did our seminal giant magnetoresistance (GMR) work in the late 1990s. GMR was the first of the spin-based technologies by which all data is now read from hard drives. Our car-sized robot layered four magnetic materials on top of one another with atomic precision on substrates smaller than a kitchen match. Our latest system is seven times larger than that machine – and it can build those atomically precise layers out of 80-odd different materials in any one experiment.

Kevin Roche (left) with his "Tiki Dalek"
While I built plenty of models as a kid out of blocks, cardboard boxes and tubes and later Legos – and am not afraid to incorporate electronics into my theater and cabaret costumes and wearable art – my first attempts at hobby robotics only came a few years ago. I was displaying one of my costume creations (the "Tiki Dalek," left) at an event called BarBot in San Francisco in 2011. There were a dozen clever contraptions making drinks with various degrees of success.

About halfway through the evening I looked around the room and realized: This is like my day job! I could do this! Dealing with the fluid components of a cocktail is not far off from the gas-handling automation I have to do in our thin-film deposition systems.

So, I started working on concepts for my bartending robot, ThinBot, and had it ready for the next year's Barbot show. ThinBot has since taken the gold medal for bartending robots at the last three international RoboGames. 

How to make a Barbot

ThinBot (named in homage to The Thin Man movie series from the 1930s and 40s) was initially conceived as a martini-making robot. I quickly realized that with a few more bottles it could make a large assortment of cocktails. It only makes "up" drinks (meant to be served chilled in an up glass, aka a martini glass) because a) that's the sort of cocktail Nick and Nora kept drinking while solving murders, and b) it can only work with non-carbonated liquid ingredients. Note: the Art Deco motifs on the robot are a deliberate reference to the milieu of the movies.

The ingredients are loaded into bottles that sit in a tray of flowing chilled water, forming a tower-shaped table fountain with color-changing lighting that splashes softly between drinks. They are precisely dispensed using peristaltic pumps (chosen because the only thing that touches the ingredients is FDA-approved food-rated tubing), through a 9.5-pound anodized aluminum chilling block, then emerge from jets that swirl all the ingredients into the glass to mix. Because they mix in the glass, you never get contamination from the last person's drink.
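Precise dispensing with peristaltic pumps comes down to timing: a pump with a calibrated flow rate runs just long enough to deliver each ingredient's volume. A toy sketch of that conversion (pump rate and recipe are invented for illustration; ThinBot's actual firmware is surely more involved):

```python
# Hypothetical calibrated flow rate for one peristaltic pump.
PUMP_RATE_ML_PER_S = 2.5

def pour_times(recipe_ml):
    """Convert a recipe {ingredient: milliliters} into per-pump
    run times in seconds."""
    return {ing: ml / PUMP_RATE_ML_PER_S for ing, ml in recipe_ml.items()}

martini = {"gin": 60, "dry vermouth": 10}
print(pour_times(martini))  # {'gin': 24.0, 'dry vermouth': 4.0}
```

In the real machine each pump would need its own calibration constant, since tubing wear changes the effective flow rate over time.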

Kevin Roche (center) with his ThinBot
All the control electronics are under a dome in the top of the tower to stay above the liquids, which is capped by a bell. The user places an empty glass on the arm, selects one of (currently 17) drinks from the menu, and ThinBot swings the arm under the nozzle and pours the drink, then swings the arm back out and rings a bell (an actual desk bell) to announce it is served. 

The pouring process itself takes less than a minute; it often takes the user more time to select a cocktail than ThinBot takes to pour it.

Inspiring work, inspired robots

We have a fairly large group here in the lab, and many of its members use our automated systems. Over the years I've learned a lot about how people react to an interface and expect or want an automated system to behave. I've also learned to aim for design simplicity when possible in the mechanism of a robot, because that reduces the possible modes of failure during operation and makes it more reliable.

When I work on my hobby robotics projects, I've stuck to relatively primitive controllers. This forces me to write more efficient code and deal with things like memory management. The much more powerful controllers and environments I use in the lab insulate the designer to a large degree from those issues. Coping with the limitations of the small scale will, I hope, translate to developing more efficient code in my work projects.


Record 10 Watt On-Chip Power Converter

Written by Toke Andersen, a recent pre-doc at IBM Research

Research by the Power Electronics Systems Laboratory (PES) at ETH Zurich in collaboration with IBM Research – Zurich demonstrated an on-chip (or fully integrated) power converter that delivers 10 watts of output power – at the size of a pinhead.

The power converter, which was presented in February 2015 at the International Solid-State Circuits Conference (ISSCC) in San Francisco, delivers more than 6 times the output power of any other on-chip power converter of the switched-capacitor type presented to date.

Designed and realized in an advanced 32 nanometer semiconductor technology, the on-chip power converter is intended to be co-integrated with a microprocessor core, potentially saving more than 10 percent of the total power consumption of future high-performance microprocessor systems.

Efficient Microprocessor Power Delivery

Dynamic Voltage and Frequency Scaling (DVFS) is a popular technique for microprocessor power management. If the supply voltage of the microprocessor core remains high at times of low activity, the additional supply voltage overhead leads the core to consume more power than necessary. In order to reduce the power loss associated with supply voltage overhead, DVFS dynamically changes the supply voltage to match the requirements of the microprocessor core. In that way, the microprocessor core only consumes the power that it requires for a given computational workload. Most of today's microprocessors are multi-core architectures implemented with 2, 4, or even more microprocessor cores. DVFS is applied to all cores simultaneously, thereby introducing power loss due to the voltage overhead present for computational workloads with varying activity among the cores.
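The payoff from DVFS follows from the fact that dynamic CMOS power scales roughly as P = C·V²·f, so lowering supply voltage and clock frequency together reduces power cubically. A back-of-envelope sketch with purely illustrative numbers (not measurements from any IBM chip):

```python
def dynamic_power(c_eff, v, f):
    """Dynamic CMOS switching power: effective capacitance times
    voltage squared times frequency."""
    return c_eff * v**2 * f

full = dynamic_power(1.0, 1.1, 3.0e9)     # nominal point: 1.1 V, 3 GHz
scaled = dynamic_power(1.0, 0.9, 2.45e9)  # lighter load: 0.9 V, ~2.45 GHz
print(round(scaled / full, 2))  # 0.55 -> roughly 45% dynamic-power saving
```

Per-core voltage regulation lets each core sit at the lowest point on this curve that its own workload allows, instead of all cores paying for the busiest one.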

By integrating an on-chip power converter onto the microprocessor die, granular microprocessor power management, where each core is independently supplied by its own on-chip power converter, can be employed. The supply voltage is then dynamically adjusted to each core's needs. If, furthermore, the regulation of the on-chip power converters can be made faster than that of their discretely built counterparts, the ever-present supply voltage overhead, which accounts for supply voltage instability, can be reduced as well.

Microprocessor systems fall under the Information and Communications Technology (ICT) industry, which annually consumes several billion kilowatt-hours of electricity, equivalent to the electricity consumption of 200,000 typical Swiss households. "Our on-chip power converter solution has the potential to reduce this electricity consumption significantly," says Toke Meyer Andersen, who designed the 10 watt on-chip power converter as part of his PhD studies at PES, ETH Zurich.

Integration-Friendly Switched Capacitor Converters

In general, efficient switch-mode power converters are implemented using inductors, capacitors, transistors, and diodes. Inductors and capacitors, which act as energy storage elements, typically take up the majority of the converter volume. To make these passive components suitable for on-chip integration, their volumes can be reduced by increasing the switching frequency of the switch-mode power converter.

"The switching frequency is increased 1,000-fold compared to conventional power converters," said Florian Krismer, a post-doctoral fellow at PES. "Otherwise, there would have been no way to integrate the passive components on-chip."

A thorough analysis of inductors intended for on-chip integration showed that the efficiency achievable with the inductors available in the semiconductor technology would be too low for a high-efficiency on-chip power converter design. Instead, the switched-capacitor converter, which consists solely of capacitors and transistors, was considered.

"The choice to select the switched capacitor converter was furthermore motivated by the deep trench capacitor available in the 32 nanometer semiconductor technology," said Thomas Toifl, technical manager at IBM Research – Zurich. "The high capacitance density and low losses of the deep trench capacitor, which originally was developed for IBM's embedded memory applications, turned out to be very well suited for on-chip switched capacitor converters."

Record Performance

Due to the ease of integration, switched capacitor converters have increased in popularity for on-chip power converter applications. By operating several converter units simultaneously but out of phase, the input and output decoupling capacitor requirements can be drastically reduced. This converter has been implemented with 64 converter phases, and the converter requires no additional decoupling capacitors. As a result, no external components are required to run the converter.

Furthermore, using high-frequency digital control specifically developed for this converter, a sub-nanosecond response to a transient event has been achieved. This is 100 to 1,000 times faster than a conventional external voltage regulator module for microprocessor power delivery. Such fast transient responses are an enabler for per-core DVFS to reduce the supply voltage overhead and the power consumption.

The conversion efficiency, when converting from 1.8 volts to 1.1 volts, is as high as 88 percent. Covering a total chip area of 2 square millimeters, the maximum power density is 5 watts per square millimeter. "With this design, we have reached the 10 watt output power mark. This sets a new benchmark for on-chip switched-capacitor converters," said Johann W. Kolar, professor at PES at ETH Zurich.
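The reported figures are consistent with a simple switched-capacitor model: an ideal stage with conversion ratio M behaves like an M:1 transformer followed by an output resistance, so efficiency is bounded by Vout/(M·Vin). The sketch below assumes a 2/3 conversion ratio for the 1.8 V to 1.1 V operating point; the ratio is my assumption, not stated in the article:

```python
# Ideal efficiency bound for a switched-capacitor converter stage.
M = 2 / 3              # assumed conversion ratio (ideal output = M * Vin)
v_in, v_out = 1.8, 1.1  # operating point reported in the article

eta_bound = v_out / (M * v_in)
print(round(eta_bound * 100, 1))  # 91.7 -> the measured 88% sits below this bound
```

The gap between the 91.7 percent bound and the measured 88 percent would then be accounted for by switching and conduction losses in the transistors and deep trench capacitors.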


Meet an IBM Scientist: Winnie Tatiana Silatsa Saha

Who: Winnie Tatiana Silatsa Saha
Location: IBM Research - Zurich
Nationality: Cameroonian

Focus: Electrical Engineering

“I’m currently working on developing a new high-performance, low-cost terahertz imager for passive imaging systems based on CMOS batch manufacturing processes. It's part of an EU project called TeraTOP.”

Her advice for young women:

“Don’t believe in stereotypes. I don’t feel comfortable with anything else but science and I don’t know why any girl should feel scared about going into science, math or engineering. 

“I think it’s very encouraging that IBM’s CEO is a woman who studied electrical engineering. This is a great motivator for the next generation of female scientists and engineers. There is a stigma that engineers don’t have what it takes to become successful managers, and this is a misconception.

“I feel incredibly comfortable working here at IBM, but I have to admit that after my professor encouraged me to apply, I hesitated. IBM is a global company with a renowned brand and brilliant scientists, so I expected a very closed environment. But it’s just the opposite here in Zurich. It’s very open and I’m learning a lot.”

When Winnie isn’t in the lab, she is learning Spanish, her seventh language, after French, English, German, Cameroonian Pidgin, Yembam and Bamoun. She is also on the University of Dresden judo team.

Check out Winnie's profile here.


Profile of a scientist: Rei Odaira

Location: IBM Research-Austin

Nationality: Japanese

Focus: High Performance Computing Compiler Optimization

Compilers turn source code into executable programs, and the better a compiler optimizes that code, the faster the resulting program runs. Compilers, though, have to work within a computer’s constraints. The number of processors, the amount of memory, even the programming language all influence a compiler’s effectiveness. So, engineers like Rei Odaira develop ways to optimize them.

Rei joined IBM Research-Tokyo 10 years ago to optimize System z mainframe compilers. The team he joined invented the technology in the mid-1990s, for the High Performance FORTRAN compilers. And in 1995 – while Rei was still at the University of Tokyo – they built the Java virtual machine, and the Java just-in-time compiler, that has been embedded in every IBM software product that uses Java, including WebSphere. Rei took notice of the world-famous work happening only 20 kilometers away from campus, and wanted to be a part of the group.

“College students at the University of Tokyo cannot declare a major for the first two years of school. They can only choose ‘sciences’ or ‘liberal arts’ as a general area of study. I originally wanted to study mathematics and physics when I entered the university. But this was also the time of the Internet boom of the mid- and late 1990s. So, I began learning about things like Linux. And also, IBM’s just-in-time compiler was the fastest in the world at that time. And because that team was in Tokyo, my classmates and I knew their work very well – from papers they published, to conferences they attended.

Power 8’s greatest advancement is its CAPI interface, which allows others to build new systems on top of Power 8. For example, CAPI can be made to quickly access and analyze unstructured NoSQL data stored in Flash memory.
“My first job [at IBM] was to optimize the compiler on the System z mainframe. Compiler optimization is all about getting as much performance out of hardware as possible. It was a perfect match for my computer science background, and love for working on hardware.

System z, though, is an interesting challenge. It has 16 registers (places on a computer processor where data is kept), while other machine architectures, like our Power systems, have several more (and so have more ways to spread out and execute a workload). But our algorithms improved z’s middleware efficiency by 3 percent – a major breakthrough in 2005, considering how fast the mainframe already was, and the limited ways to optimize it.
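The register count matters because the compiler's allocator must spill values to memory once more values are live than there are registers, and every spill costs memory traffic. A toy linear-scan sketch of that effect (the live ranges are invented, and production allocators like IBM's are far more sophisticated):

```python
def linear_scan(live_ranges, num_regs):
    """Simplified linear-scan register allocation: return how many
    values must be spilled to memory. Each live range is (start, end)."""
    active, spills = [], 0
    for start, end in sorted(live_ranges):
        active = [e for e in active if e > start]  # free registers whose ranges ended
        if len(active) < num_regs:
            active.append(end)   # a register is available
        else:
            spills += 1          # no free register: spill (simplified policy)
    return spills

ranges = [(0, 10), (1, 12), (2, 8), (3, 9), (4, 11)]
print(linear_scan(ranges, 4))   # 1 spill when only 4 registers are free
print(linear_scan(ranges, 16))  # 0 spills with registers to spare
```

With a small architectural register file, even modest overlaps in live ranges force spills, which is why smarter allocation on System z could still buy measurable middleware speedups.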

On OpenPower and moving to the US

“I moved to Austin in January of this year to manage a systems team working on the Power system’s Coherent Accelerator Processor Interface (CAPI) Flash. The opportunity actually came up last April, when one of my managers – while on a visit to the Austin lab – was asked about who could manage their local team working on OpenPower optimization. And my name came up. My family thought it was a great opportunity, so it wasn’t a hard decision to say ‘yes.’

“Now in Austin, my scope has broadened from System z compilers and runtimes like Java and Ruby, to developing ways to exploit CAPI Flash – an accelerator that can access and analyze unstructured data stored in Flash memory. This means it can optimize workloads on the Bluemix cloud platform and the SoftLayer infrastructure it runs on, to do things like process genome sequence data in a few hours, versus the day or so it takes now.

Tips on the transition from school to industry

“University studies give you the skills and knowledge about a topic. Computer architecture and algorithms in my case. Coming to IBM meant picking up new skills, like writing papers and making presentations. And also learning how to contribute to products – something you don’t do in school.

“At first, I had these ideas of how to change a product. They didn’t go over too well because it meant implementing a completely new thing, like an algorithm or function, in the development and maintenance of that product. I had to learn how to work in the industry, balancing business needs – and the many parts that come together to make a product work – with how my own ideas could make an impact.

“But at IBM you also have plenty of opportunities to partner with academia. I write papers for industry and academic publications. And am a member of, and on the 2015 organizational committee for, the International Conference on Principles and Practices of Programming on the Java platform (PPPJ). I was also an external review committee member for Programming Language Design and Implementation (PLDI) earlier this year, and am now working as an editorial committee member for the Information Processing Society of Japan’s Transactions on Programming.”