Zacharias Voulgaris joins Data Science Partnership as Head Of Content
Date: March 15, 2017

Getting On-board @ Data Science Partnership

I’d like to take this opportunity to announce that from now on I’ll be part of the Data Science Partnership team, as Head of Content. Although the company focuses mainly on hands-on projects and on-site training for enterprises, we feel it would be a good idea to share certain materials with everyone who takes the time to visit our site. After all, technology-related information is everyone’s domain. Unlike academics, who often keep their knowledge behind research papers inaccessible to the everyday person, we at Data Science Partnership prefer to share non-sensitive information about this technology freely with the world and make it comprehensible to as many people as possible. This not only allows for fewer misunderstandings about the ever-changing fields of Data Science / Machine Learning / Artificial Intelligence, but can also inspire others to get involved in these technologies and benefit the world through them.

Through this role, I aim to contribute to all that and help promote these fascinating technologies, along with news about them. I’m hoping to constantly refine my methods through your feedback, so if you have any comments or questions about any of the Data Science / Machine Learning / Artificial Intelligence topics I’ll be covering, please share them either via the comment section below, or via our ‘contact us’ page.

Throughout my life I’ve been fascinated with data analytics and artificial intelligence. Even during my undergraduate degree, I spent a whole semester of my capstone project (aka thesis) developing models based on financial data I dug up from some lengthy volumes. During my PhD, I worked with machine learning and AI-based heuristics to improve classification methodology. Afterwards, during my post-doc, I built predictive analytics models using sensor data for a couple of military projects undertaken by the lab I worked at. In industry, I worked with all kinds of data, mainly financial, to build predictive models for various use cases. Over the past half a decade or so, I’ve also authored a couple of data science books and several videos on various data science topics. Finally, in parallel to this blog, I maintain a personal blog, where I share the latest updates on my data science educational material along with posts on various topics that interest me.

I look forward to sharing my enthusiasm about this field through various articles on recent developments and other relevant topics. So, bookmark this page if you haven’t already, and stay connected!

Deep Learning & Neuromorphic Chips
Date: May 4, 2016

There are three main ingredients to creating artificial intelligence: hardware (compute and memory), software (or algorithms), and data. We’ve heard a lot of late about deep learning algorithms that are achieving superhuman level performance in various tasks, but what if we changed the hardware?

Firstly, we can optimise CPUs, which are based on the von Neumann architecture we have been using since the invention of the computer in the 1940s. Such optimisations include memory improvements, more processors on a chip (a GPU of the type found in a cell phone might have almost 200 cores), FPGAs, and ASICs.

Such is the case with research being done at MIT and Stanford. At the International Solid-State Circuits Conference in San Francisco this February, MIT researchers presented Eyeriss, a new chip designed specifically to implement neural networks. It is 10 times as efficient as a mobile GPU, so it could enable mobile devices to run AI algorithms locally, rather than uploading data to the cloud for processing. Whereas many of the cores in a GPU share a single, large memory bank, each of the Eyeriss cores has its own memory. The Stanford EIE (Efficient Inference Engine) project is another effort along these lines: a processor optimised specifically for deep learning.

The second method relies not on performance tweaks to existing CPU architectures, but on an entirely new architecture, one biologically inspired by the brain. This is known as neuromorphic computing, and research labs around the world are currently working on developing this exciting new technology. As opposed to conventional CPUs and GPUs, neuromorphic computing involves neuromorphic processing units (NPUs), spiking neural networks (SNNs), and analogue circuits carrying spike trains, similar to what is found in the biological neural circuitry of the brain.

Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information, as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli. Those neurons also change how they connect with each other in response to changing images, sounds, and the like. This is the process we call learning, and memories are believed to be held in those trillions of synaptic connections. Companies developing neuromorphic chips include IBM, Qualcomm, Knowm, and Numenta. Government-funded research projects include the Human Brain Project (EU), IARPA (US), and Darwin (China). Let’s look at each of these in a little more detail.
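To make the spiking-neuron idea concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the simplest spiking model used in SNN research. All parameter values here are illustrative and not taken from any of the chips discussed; a real neuromorphic chip implements this dynamic in parallel circuitry rather than a serial loop.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_reset=0.0, threshold=1.0):
    """Simulate a leaky integrate-and-fire neuron.

    Returns the membrane voltage trace and the list of spike times.
    """
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: voltage decays toward rest, driven by input
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= threshold:           # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = v_reset              # reset the membrane after spiking
        voltages.append(v)
    return np.array(voltages), spikes

# A constant supra-threshold input produces a regular spike train,
# which is how rate information is encoded in spiking hardware.
current = np.full(200, 1.5)
v_trace, spike_times = lif_neuron(current)
print(f"{len(spike_times)} spikes over {len(current)} time steps")
```

The key contrast with conventional deep learning hardware is that information here lives in the *timing* of discrete spikes rather than in dense matrix multiplications, which is what lets analogue neuromorphic circuits run at very low power.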

IBM Research has been working on developing the TrueNorth chip for a number of years now and is certainly making steady progress. Qualcomm has also been working on the Zeroth NPU for the past several years; it is capable of recognizing gestures, expressions, and faces, and of intelligently sensing its own surroundings. Numenta, headed up by Jeff Hawkins, started in 2005 in Silicon Valley and has been making good progress, both theoretical and applied, in emulating the cortical columns found in the brain’s neocortex. It has released products based on the NuPIC (Numenta Platform for Intelligent Computing) architecture, which is used to analyze streaming data. These systems learn the time-based patterns in data, predict future values, and detect anomalies. Lastly, founded in 2002, Knowm has an interesting offering based around its patented memristor technology.
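NuPIC’s actual engine is based on Hierarchical Temporal Memory, which is far beyond a few lines of code; as a much simpler stand-in, the sketch below uses a rolling z-score to show what “learning a baseline from a stream and flagging anomalies” means in practice. The function name, window size, and threshold are made up for illustration.

```python
from collections import deque
import math

def rolling_anomaly_indices(stream, window=50, z_threshold=3.0):
    """Flag points that deviate sharply from the rolling mean/std of
    the recent stream -- a toy statistical analogue of streaming
    anomaly detection, not the HTM algorithm NuPIC actually uses."""
    history = deque(maxlen=window)
    anomalies = []
    for idx, value in enumerate(stream):
        if len(history) >= 10:  # wait for a minimal baseline first
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > z_threshold:
                anomalies.append(idx)
        history.append(value)   # the baseline adapts as data streams in
    return anomalies

# A smooth periodic signal with one injected spike at index 120:
# the detector learns the sine pattern's spread and flags the spike.
data = [math.sin(i / 5.0) for i in range(200)]
data[120] += 5.0
print(rolling_anomaly_indices(data))
```

The property this shares with NuPIC is the online, adaptive baseline: nothing is trained offline, and what counts as “anomalous” shifts as the stream evolves.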

The Human Brain Project, a European-led, multibillion-dollar project to simulate a human brain, has incorporated the neuromorphic chip design of Steve Furber’s group at the University of Manchester into its research efforts. SpiNNaker has so far accomplished the impressive feat of simulating a billion neurons with analogue spike trains in hardware. Once this hardware system scales up to 80 billion neurons, we will in effect have the first artificial human brain, a momentous and historic event. This is predicted to occur around 2025, right in line with Ray Kurzweil’s prediction in his book “How to Create a Mind”.

Darwin is an effort originating from two universities in China. Its successful development demonstrates the feasibility of real-time execution of spiking neural networks in resource-constrained embedded systems. Finally, IARPA, the research arm of the US intelligence community, has several ongoing projects involving biologically inspired AI and reverse-engineering the brain. One such project is MICrONS (Machine Intelligence from Cortical Networks), which “seeks to revolutionize machine learning by reverse-engineering the algorithms of the brain.” The program is expressly designed as a dialogue between data science and neuroscience, with the goal of advancing theories of neural computation.

So overall, this is a very active area of research at the moment, and one we can foresee only growing in the future in terms of the resources allocated to it, whether that’s money spent or the scientists and engineers involved in the research and development needed to produce a machine as general-purpose as the brain: a true artificially engineered brain on a chip, which would clearly bring more intelligence to the enterprise as well as to all aspects of our daily lives.
