If you are looking for a career where your services will be in high demand, you should find something where you provide a scarce, complementary service to something that is getting ubiquitous and cheap. So what’s getting ubiquitous and cheap? Data. And what is complementary to data? Analysis.
— Prof. Hal Varian, UC Berkeley, Chief Economist at Google, interviewed by Freakonomics
Pere is co-founder & CTO of Datasalt. He is a core committer on two Hadoop-based open-source projects, Splout SQL and Pangool. Splout provides a SQL view over Hadoop's Big Data with sub-second latencies and high throughput. Pangool is an improved low-level Java API for Hadoop based on the Tuple MapReduce paradigm (ICDM 2012). Pere is an early adopter of Hadoop and has been working on Big Data projects since 2008. He is also the organizer of Big Data Beers Berlin.
Mikio is co-founder and Chief Data Scientist at TWIMPACT, a startup working on real-time event analysis for all kinds of applications. He is also the author of Streamdrill, a library that solves the top-10 problem: the top items for all trends are continuously updated from the data you send in, with no need for iterative computation or big heaps of data to start the analysis. Mikio also wrote jblas, and is currently a postdoc in machine learning at Technische Universität Berlin.
Adam is currently the Chief Data Scientist and Director of Engineering for Zanox, Europe’s largest affiliate network, where he supervises more than sixty people. He has been in technology roles for over 15 years in a variety of industries, including online marketing, financial services, healthcare, and oil and gas. His background is in Applied Mathematics, and his interests include online learning systems, high-frequency/low-latency data processing systems, recommender systems, distributed systems, and functional programming (especially in Haskell).
Jose Quesada is the founder and director of DSR. Jose helps others to decide better, do better, or be better through data. Like everyone else, he doesn't know what data science really is, but suspects it has to do with predicting the future before it catches you empty-handed. He has a PhD in machine learning and worked at top labs (U. of Colorado at Boulder; Carnegie Mellon). Previously he worked as a data science consultant, specializing in customer lifetime value, and as the head data scientist for GetYourGuide.
Marek is an assistant professor at the Faculty of Mathematics and Information Science, Warsaw University of Technology, Poland. He has a PhD in computer science, and his research interests focus on applied maths (aggregation theory, data analysis and mining, computational statistics, automated decision making, fuzzy sets and systems). He has been an R enthusiast since R 1.4.0 and is the author of a best-selling Polish book on R programming. He loves teaching and sharing his knowledge and experience with others!
Konstantinos enjoys learning, teaching, researching, and solving. Not necessarily in that order. He strongly advocates the need for a deep understanding of theory along with extensive practical experience in order to solve complex problems. He has a PhD in statistical signal processing and has held various research engineering and teaching positions. At DSR he lectures on the applications of algebra, probability, and statistics in data science, and on the implementation of machine learning algorithms using software tools.
Jackie's interests were nurtured in the machine learning group at the MPI in Tuebingen, where she worked on kernel methods. She has since ventured to the probabilistic side using Bayesian modelling, and now sometimes even combines the two. Her primary applications are neuroscience and image processing. She is currently at The Institute of Technology, Berlin, and is putting the finishing touches on her PhD thesis about large-scale approximate inference in probabilistic models.
Daniel is an expert software engineer, Python programmer, and machine learning specialist. When he's not developing high-performing, end-to-end pattern recognition and predictive analytics systems for his clients, Daniel's learning new tricks to train deep neural networks more efficiently. Through his company Natural Vision, he's been successfully applying deep learning to problems in bioacoustics, computer vision, and text mining.
Yes, you will need a solid background in linear algebra and probability theory to create new algorithms. But this is very different from what you need to simply apply algorithms known to work for a class of problems. Vision + good judgement + intuition + hacking skills + natural analytic skills + craftsmanship + curiosity + Google skills are, in fact, more useful and less expensive than advanced math knowledge.
Absolutely yes. Being comfortable with at least one programming language is a prerequisite. But if you have never put a system in production, written tests, or used version control, don't worry: such skills are part of software engineering and craftsmanship, and you will pick them up.
You will present your results in front of a non-technical audience four times, getting feedback from a professional trainer, with video review and tight timing. You will practice explaining complex ideas to other students. Because no matter how accurate your algorithm's predictions are, they will not matter if you cannot convince the decision makers in a tight time window.
Tuition: 7,000 EUR.
Where: Berlin, Germany, one of the world's great cities and the upcoming "startup capital of Europe" — affordable and brimming with software and startup activity.
Current batch (02): Aug 1st to Oct 31st, at Zanox AG, Stralauer Str. 2b, by the Spree (Berlin's largest river).
Next batch: Jan 5th to March 31st.
Class size: Ten to fifteen students will be accepted.
We expect you to have basic programming experience and familiarity with databases. Exercises will be in R or Python; you need a basic understanding of at least one of these languages.