Interview with Pedro Araújo
CTO & Co-founder @

After graduating in Computer Science from Universidade do Minho, Pedro enrolled in the Master's in Informatics, choosing parallel and distributed computing and formal methods as his specializations. He worked at Eurotux, S.A., where he developed websites using Plone and Zope. He also did system administration work, such as setting up servers, developed plugins for Trac, and took up Android development, which he continued to pursue after leaving Eurotux. He is now CTO & Co-founder of a platform that helps you build a scalable sales process: it shows relevant KPIs to make sure no deal falls through the cracks, keeps your deals organized, and reminds you whenever you need it, all in only one minute a day. Check out our exclusive Q&A session with Pedro:
What software stack are you using for data analytics and scraping data from different sources?
Our software stack is based on Python: we have Python and Django serving our API on AWS, and everything runs on ECS, AWS's container service. Our entire front end is static JavaScript written in Angular 4. We also make heavy use of scikit-learn, a machine learning library for Python, along with a lot of natural language processing libraries in Python and other libraries. In addition, we use PhantomJS for scraping; it's very helpful to have all of a page's JavaScript processed.
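As a rough illustration of the kind of scikit-learn plus NLP setup Pedro describes (this is a minimal sketch, not the company's actual code; the training texts and labels below are hypothetical), a text-classification pipeline in scikit-learn can be as short as:

```python
# Minimal sketch: a scikit-learn text-classification pipeline.
# The sales-note texts and deal-stage labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),  # text -> TF-IDF features
    ("clf", LogisticRegression()),                     # features -> deal stage
])

texts = [
    "sent the proposal yesterday",
    "signed the contract today",
    "no response after three follow-ups",
    "contract is signed and closed",
]
labels = ["open", "won", "lost", "won"]

pipeline.fit(texts, labels)
print(pipeline.predict(["they signed the agreement"]))
```

The `Pipeline` keeps vectorization and classification as one object, so the same preprocessing is applied at training and prediction time.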
How did you feel about relocating to Boulder, Colorado earlier this year?
Boulder is an amazing city; my expectations have been exceeded. The mindset of the people is great: everyone tries to help each other, and it's a startup ecosystem like none I have ever seen before. We only got to stay for three months, during the Techstars program, and our main base of operations is back in Portugal, but I would love to go back to Boulder.
You have to monitor a lot of people. How do you scale a system like this when you have a lot of individual entities in it?
There are very good machine learning algorithms and models, so the incoming information is classified correctly and the database you are building stays accurate. That's why we keep training our models every day and always have two or three running in parallel, so we can put the information through each one of them and check the accuracy of each model.
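The idea of running two or three models side by side and checking each one's accuracy can be sketched like this (a hedged example on synthetic data; the model choices here are assumptions, not the company's actual lineup):

```python
# Sketch: train several models on the same data and compare their
# accuracy on a shared held-out set, as described in the answer above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the real data stream.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "naive_bayes": GaussianNB(),
}

scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)          # each model sees the same training data
    scores[name] = model.score(X_test, y_test)  # accuracy on the same test set

for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

Because every model is scored on the same held-out set, the accuracies are directly comparable, which is what makes a daily "which model is winning?" check meaningful.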
What models or techniques are you using to connect data in this vast ocean of information?
We are using a lot of machine learning algorithms, like classifiers and decision trees. There are three or four running at a time so we can keep better track and maintain accuracy. We also have a deep learning algorithm working on classification. This process allows us to improve and to make sure that the results are correct.
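One common way to verify that a classifier such as a decision tree is giving correct, stable results is cross-validation. The sketch below is illustrative only (it uses the standard Iris dataset, not the company's data):

```python
# Sketch: 5-fold cross-validation of a decision-tree classifier,
# a standard check that a model's accuracy holds up across data splits.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)

# Five accuracy scores, one per held-out fold.
scores = cross_val_score(tree, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")
```

If the per-fold scores vary wildly, that is a warning sign that the model's results depend too much on which slice of data it saw.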