If you are looking for a career where your services will be in high demand, you should find something where you provide a scarce, complementary service to something that is getting ubiquitous and cheap. So what’s getting ubiquitous and cheap? Data. And what is complementary to data? Analysis.
Prof. Hal Varian UC Berkeley, Chief Economist at Google, interviewed by Freakonomics
Co-founder & CTO of Datasalt. He's a core committer on two Hadoop-based open-source projects, Splout SQL and Pangool. Splout provides a SQL view over Hadoop's Big Data with sub-second latencies and high throughput. Pangool is an improved low-level Java API for Hadoop based on the Tuple MapReduce paradigm (ICDM 2012). Pere is an early adopter of Hadoop and has been working on Big Data projects since 2008. He's also the organizer of Big Data Beers Berlin.
Mikio is co-founder and Chief Data Scientist at TWIMPACT, a startup working on real-time event analysis for all kinds of applications. He's also the author of Streamdrill, a library that solves the "top 10" problem: the top items for all trends are continuously updated from the data you send in, with no need for iterative computation or big heaps of data to start the analysis. Mikio also wrote jblas, and is currently a postdoc in machine learning at Technische Universität Berlin.
Adam is currently the Chief Data Scientist and Director of Engineering for Zanox, Europe’s largest affiliate network, where he supervises more than sixty people. He has been in technology roles for over 15 years in a variety of industries, including online marketing, financial services, healthcare, and oil and gas. His background is in Applied Mathematics, and his interests include online learning systems, high-frequency/low-latency data processing systems, recommender systems, distributed systems, and functional programming (especially in Haskell).
Jose Quesada is the founder and director of DSR. Jose helps others to decide better, do better, or be better through data. Like everyone else, he doesn't know what data science really is, but suspects it has to do with predicting the future before it catches you empty-handed. He has a PhD in machine learning and worked at top labs (U. of Colorado, Boulder; Carnegie Mellon). Previously he worked as a data science consultant, specializing in customer lifetime value, and as the head data scientist for GetYourGuide.
Trent is co-founder & CTO of ascribe, which uses modern crypto, ML, and big data to tackle challenges in digital property ownership. His previous two startups applied ML in the enterprise semiconductor space: ADA was acquired in 2004 and Solido is going strong. He got his start doing neural networks research at the Canadian Department of National Defence in the mid 90s. He has an engineering PhD in applied ML from KU Leuven, Belgium. His interests include large scale regression, automating creativity, anything labeled "impossible", and thousand-fold improvements. He was raised on a pig farm in Canada.
Arunkumar Srinivasan holds a Bachelor's degree in Electronics Engineering from India and a Master's in Bioinformatics from Germany. He is currently finishing his PhD in Bioinformatics at the Max Planck Institute. He has been using R since late 2011 and is one of the main contributors to the data.table package. He routinely works with data sizes on the order of several GBs, and has a passion for developing tools and algorithms that facilitate big-data analyses.
Co-founder & CTO of Oberbaum Concept, a company focused on Big Data consulting and development. Christoph is an early adopter of Hadoop and has been working on Big Data projects since 2009. He's also one of the organizers of the Big Data Beers Berlin Meetup.
Marek is an assistant professor at the Faculty of Mathematics and Information Science, Warsaw University of Technology, Poland. He has a PhD in computer science, and his research interests focus on applied maths (aggregation theory, data analysis and mining, computational statistics, automated decision making, fuzzy sets and systems). An R enthusiast since R 1.4.0 and the author of a best-selling Polish book on R programming, he loves teaching and sharing his knowledge and experience with others!
Konstantinos enjoys learning, teaching, researching, and solving. Not necessarily in that order. He strongly advocates the need for a deep understanding of theory, along with extensive practical experience, in order to solve complex problems. He has a PhD in statistical signal processing and has held various research engineering and teaching positions. At DSR he gives lectures on the applications of algebra, probability, and statistics in data science, and on the implementation of machine learning algorithms using software tools.
Jackie's interests were nurtured in the machine learning group at the MPI in Tuebingen, where she worked on kernel methods. She has since ventured to the probabilistic side using Bayesian modelling, and now sometimes even combines the two. Her primary applications are neuroscience and image processing. She is currently at The Institute of Technology, Berlin, and is putting the finishing touches on her PhD thesis about large-scale approximate inference in probabilistic models.
Daniel is an expert software engineer, Python programmer, and machine learning specialist. When he's not developing high-performing, end-to-end pattern recognition and predictive analytics systems for his clients, Daniel's learning new tricks to train deep neural networks more efficiently. Through his company Natural Vision, he's been successfully applying deep learning to problems in bioacoustics, computer vision, and text mining.
Yes, you will need a solid background in linear algebra and probability theory to create new algorithms. But this is very different from what you need to simply apply algorithms known to work for a class of problems. Vision + good judgement + intuition + hacking skills + natural analytic skills + craftsmanship + curiosity + Google skills can often be more useful and less expensive than advanced math knowledge. Most people are very comfortable with probability and linear algebra by the end of the program.
Absolutely yes. Being comfortable with at least one programming language is a prerequisite, but if you have never put a system into production, written tests, or used version control, don't worry: such skills comprise software engineering and craftsmanship, and you will pick them up. We use both R and Python. Most people are comfortable with both by the end of the program.
You will present biweekly with tight timing, getting feedback from your peers and one instructor. Video recordings will be reviewed by a professional technical communication expert, who will provide two individual sessions with laser-focused feedback. Because no matter how accurate your algorithm's predictions are, if you cannot convince the decision makers in a tight time window, it will not have mattered. This all goes into making a memorable Portfolio Project presentation that makes companies take note.
Tuition: 8,000 EUR. Full payment is required within 7 days of acceptance to secure your spot. We are also looking for sponsors to support outstanding candidates and minorities.
Where: Berlin, Germany, one of the world's great cities and the upcoming "startup capital of Europe" — affordable and brimming with software and startup activity.
Current batch (03): Feb 1st to Apr 30th, at Zalando, Mollstr. 1, 10178 Berlin.
Next batch: May 1st to July 31st.
Class size: Ten to fifteen students will be accepted.
We expect you to have basic programming experience and familiarity with databases. Exercises will be in R or Python; you'll need a basic understanding of at least one of these languages.