Monday, 28 March 2016

ETH computer expert wins national science award

German computer scientist Torsten Hoefler, a 34-year-old devotee of math and running, has won the 2015 Latsis Prize for his research on high-performance computing.

Based at the Swiss federal technology institute ETH Zurich, Hoefler is internationally regarded as a young scientific leader in the field of high-performance computing, combining theory and application at his Scalable Parallel Computing Laboratory.

The computer scientist, who is an assistant professor at ETH Zurich and has long been fascinated by numbers, previously taught and conducted research in the United States where he worked on developing one of the world’s most efficient supercomputers. He began his academic career studying for a master’s degree at Germany’s Chemnitz University of Technology.

The National Latsis Prize, one of Switzerland's most prestigious scientific awards, is awarded annually on behalf of the Geneva-based Latsis Foundation by the Swiss National Science Foundation. The prize carries an award of CHF100,000 and honours the outstanding scientific achievements of a research scientist under age 40 working in Switzerland.

The Latsis Foundation’s website cited Hoefler for outstanding “contributions to performance modelling, simulation, and optimization of large-scale parallel applications; topologies, routing, and host interfaces of large-scale networks; and advanced parallel programming techniques and runtime environments”.

Practical applications

On its website, ETH quoted Hoefler as constantly trying to find new ways to use numbers to improve his life, even going as far as creating a performance model for himself. It’s something he started doing as a child, when he would memorize as many car registration numbers as possible or count the distance to school in steps.

“I’ve definitely taken a bit of a mathematical view of life, but then my job is derived from my life,” he was quoted as saying.

Hoefler has tried to unite theory and practice whenever possible, developing mathematical models that can be translated into software for running some of Switzerland’s supercomputers. He and his team have focused on developing a so-called heterogeneous compiler that can translate and optimize applications for diverse computer architectures.

Running is another passion. He calls it useful not only for maintaining a healthy body and mind, but also for discussing problems with students who join him on runs.


Source

How Big Banks Thread The Software Performance Needle

Timothy Prickett Morgan

Parallel programming on distributed systems is difficult, but making applications scale across multiple machines (or hybrid compute elements that mix CPUs with FPGAs, GPUs, DSPs, or other motors) linked by a network is not the only problem that coders have to deal with. Inside each machine, the number of cores and threads has ballooned in the past decade, and each socket is, in its own right, as complex as a symmetric multiprocessing system from two decades ago.

With so many cores and usually multiple threads per core to execute software, getting the performance out of software can be a tricky business. At the world’s hyperscalers, financial services behemoths, HPC centers, and database and middleware providers, the smartest programmers in the world are often off in a corner, with pencil and paper, mapping out the dependencies in the hairball of code they and their peers have created to find out the affinities between threads within that application. Having sorted out these dependencies, they engage in the unnatural act of pinning software processes or threads to specific cores in a physical system to optimize their performance.
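The manual affinity-mapping described above can be sketched in a few lines. This is a minimal illustration, not Pontus Networks' actual tool: it assumes Linux (where `os.sched_setaffinity` with pid 0 applies to the calling thread) and simply has each worker thread pin itself to a specific core, the way a human tuner would assign threads to cores after mapping their dependencies.

```python
import os
import threading

def pin_current_thread(cpus):
    """Pin the calling thread to the given set of CPU IDs (Linux-only)."""
    # On Linux, sched_setaffinity with pid 0 targets the calling thread.
    os.sched_setaffinity(0, cpus)
    return os.sched_getaffinity(0)

def worker(cpu_id, results, idx):
    # Each worker pins itself to one core, mimicking a hand-drawn affinity map
    # in which two communicating threads are placed on specific cores.
    results[idx] = pin_current_thread({cpu_id})

available = sorted(os.sched_getaffinity(0))  # cores this process may use
results = [None, None]
t1 = threading.Thread(target=worker, args=(available[0], results, 0))
t2 = threading.Thread(target=worker, args=(available[-1], results, 1))
t1.start(); t2.start(); t1.join(); t2.join()
```

In a real tuning exercise the hard part is not the pinning call itself but choosing *which* cores, for example placing threads that share data on cores that share a cache, which is exactly the analysis that used to be done with pencil and paper.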

Pinning threads is a bit like doing air traffic control in your head, and Leonardo Martins had such an onerous task a few years back. Martins got his start in the IT sector two decades ago as an engineer at middleware software makers Talarian and TIBCO before moving to Lehman Brothers to introduce Monte Carlo simulation systems for risk management to the bank. In 2004, he moved to Barclays Capital to introduce its first Linux-based systems as its senior middleware program manager and architect, and in 2010, he was the low latency senior architect at HSBC. While at HSBC, Martins was one of the wizards who would map out the applications and figure out how to pin their threads to specific cores in a system to maximize performance – a process that might take anywhere from two to eight weeks.

This is no big deal, right? Wrong. At the major financial institutions, trading applications are updated at least monthly and sometimes as many as 200 times a year, so having the tuning process take weeks to months means the code is never as optimized as it needs to be for a competitive edge. Martins looked around for a tool that would automate this thread pinning, and when he could not find one, he gathered a few peers and set out to create one.

Martins founded Pontus Networks back in 2010 as a consultancy specializing in the tuning of latency sensitive applications, and was joined by Martin Raumann, an FPGA designer and specialist in low latency, high frequency trading hardware, and Deepak Aggarwal, a C, C++, C#, and Java programmer with deep expertise in distributed systems who built front office and back office systems for equities, foreign exchange, and fixed income asset trading at Barclays Capital, Credit Suisse, Citigroup, ABN, and Standard Chartered. They started work on the Pontus Vision Thread Manager and filed their first patents relating to automated thread pinning in August 2014. The alpha version of Thread Manager debuted quietly at the end of November last year with its first customer, and the product is now available and has been acquired by three customers – all of whom are in the financial services sector. It is a fair guess that these companies are probably the ones where the founders of Pontus Networks used to work and do such painstaking thread pinning work, but that is just a guess.

Several other HPC-related users in government and university labs, as well as a few Formula One racing teams, are kicking the tires to see how Thread Manager might remove the human bottleneck and help get tuned software into production faster. In the latter case, Thread Manager is expected to help boost the performance of mechanical engineering design and simulation programs, as well as some of the post-processing that is done on designs to test them. The company is also getting ready to run performance tests on Hadoop clusters, and thinks that performance boosts on HDFS storage will be similar to what it has seen on Extract-Transform-Load (ETL) applications that front end data warehouses. (Informatica is working with Pontus Networks on these tests.)

And as you have learned to expect from reading The Next Platform, none of these organizations looking for a bleeding edge advantage are willing to go on the record with their experiences just yet – and they may never do it because of that advantage. But we can tell you anecdotally what is going on and give you the results of some synthetic benchmarks to get you started.

Thread Manager is new enough that Pontus Networks is not precisely sure how different kinds of applications will make use of the automatic thread pinning capabilities, and Robin Harker, business development director at the company, tells The Next Platform that the company is just now getting some benchmarks under its belt to prove what Thread Manager can do.

The first and most important thing is that Thread Manager is a dynamic tool, working behind the scenes as software runs and changes, rather than a static, human-driven optimization process that has to be repeated every time the code (or the hardware, for that matter) changes. The dynamism is important in another way.

“If you look at an Oracle Exadata, where the company owns the whole box, they pin processes, not threads, which is a bit coarser grain control,” explains Harker. “So Oracle is probably pretty well optimized to run on a single box, and even across a RAC cluster for that matter. However, if you want to add a web application server to the same box, you are adding a different application that is going to have an effect on the Oracle system. But Thread Manager doesn’t care because all it sees is threads that talk to each other, and we don’t care if they come from Oracle or Tomcat or Linux or whatever.”
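The coarser process-level pinning that Harker contrasts with thread pinning can be sketched as follows. This is an illustrative, Linux-only sketch (not the Exadata or Thread Manager mechanism): the affinity mask is set on a child process's PID, and any threads the child spawns afterwards inherit that mask, so the whole application is constrained as one unit rather than thread by thread.

```python
import os
import subprocess

# Start a placeholder child process standing in for a whole application.
child = subprocess.Popen(["sleep", "5"])

# Process-level (coarse) pinning: restrict the child to a single core.
# Threads the child creates after this point inherit the same mask.
target = {min(os.sched_getaffinity(0))}
os.sched_setaffinity(child.pid, target)

pinned = os.sched_getaffinity(child.pid)  # read back the child's mask

child.kill()
child.wait()
```

The trade-off is exactly the one Harker describes: constraining a whole process is simple and safe for a box you own outright, but it cannot express "these two threads should share a cache while that one sits elsewhere", which is what per-thread pinning buys you.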

So the thinking at Pontus Networks is that, as more and more cores and threads get stuffed into single machines because we cannot really increase clock speeds anymore to goose performance, companies will want to run multiple applications on each machine (even if the machines are clustered), and they will have an even more complex thread pinning nightmare to deal with. Hence, the automation.

Source

Sunday, 24 January 2016

Robots to do the shopping at the supermarket

Tally is the new creation of a company aiming to help in supermarkets, and we will soon see these robots in large stores. The robot monitors what is missing from the shelves so that it can be restocked automatically.

If a customer cannot find a desired product in the store, the product cannot be sold, the customer leaves unsatisfied, and the company loses money. This new robot, developed by the company Simbe Robotics, aims to remedy the situation. Tally scans the shelves and automatically flags where something is missing, so its human colleagues can quickly restock the product.

Tally is one of a family of robots being launched to advance the logistics sector, where routine work could be automated with artificial intelligence. Jobs are not necessarily eliminated here; the robots only take on the areas of work where humans do not perform as well.

Restocking shelves sounds simple, but it matters enormously to large retailers. Billions of euros are lost every year because items are missing, incorrect, or badly arranged. In a large company, hundreds of hours a week can be spent checking all the shelves, according to a study by the market research firm DIH.

Brad Bogolea, co-founder of Simbe Robotics, explains how it works: a single robot could scan the shelves of a small store in an hour, while a wholesale market would probably require several robots. Simbe Robotics’ business model is unusual: the company does not want to sell its robots, but instead offers a subscription model.

Tally moves between the shelves autonomously and not only detects when something is missing, but can also spot products that are sorted incorrectly, defective, or mismatched with their prices. The robot has wheels and four cameras, allowing it to scan both sides of an aisle at once, from the floor up to a height of 2.4 metres.

Simbe Robotics takes advantage of the fact that large stores already provide the layout of their shelving in the form of a database, including its orientation. Tally can therefore use a map of the store for navigation. What it sees is compared against the so-called planogram, which contains the ideal arrangement of all products. The data collected by the robot is transmitted to a server, where it is analysed.
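The planogram comparison described above boils down to diffing the observed shelf state against the ideal layout. A minimal sketch, with all slot IDs and product names invented for illustration (Simbe's actual data model is not public):

```python
# The planogram: the ideal arrangement, slot ID -> expected product.
planogram = {"A1": "cereal", "A2": "oats", "A3": "muesli"}

# A scanned snapshot of the same shelf; None marks an empty slot.
scan = {"A1": "cereal", "A2": None, "A3": "granola"}

def shelf_issues(planogram, scan):
    """Report each slot whose scanned contents deviate from the planogram."""
    issues = {}
    for slot, expected in planogram.items():
        found = scan.get(slot)
        if found is None:
            issues[slot] = ("out of stock", expected)
        elif found != expected:
            issues[slot] = ("misplaced", expected)
    return issues
```

The server-side analysis would run something like this over every scanned aisle and route the resulting issue list to staff for restocking.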

The founders of Simbe Robotics are already very familiar with robotics: many of them previously worked at Willow Garage, a research firm founded by early Google employees to create new robot hardware and software.

Tally is not the only robot moving into areas of work that used to be reserved for humans. A study by the consulting firm McKinsey says that a robot could perform up to 46 per cent of most tasks, regardless of the sector.

Simbe Robotics plans to develop another robot for the retail industry. “Our main vision is to automate the retail space,” says Bogolea. “We believe it is a great opportunity to automate simple tasks, so that employees can focus on customer service.”

However, a major challenge remains: making the system work reliably in the real world. In real life, the robot could prove less reliable than in the laboratory or in beta testing.


Source