- REalyse is seeking a smart, self-motivated Senior Data Engineer to join our growing development team. This is very much a hands-on technical development role, with a particular focus on data.
- You care about getting the best possible outcome, and you have a passion for what you do that shows clearly in your actions.
- You have an eye for detail and order, and can spot problems in code or data that others might miss or take longer to find.
- You have a desire to explore and test concepts, ideas and theories.
- Our vision is to transform the world’s most valuable asset class by making it faster, easier and more efficient for property professionals to determine where, when and what to build.
Using our big data analytics platform, users can save 90% of the time they previously spent on the typical appraisal or investment selection process. This, in turn, helps property companies increase returns while reducing their risks and costs.
REalyse has been recognised as one of the leading property technology companies in the UK and globally, and works with a range of real estate companies across development, lending, consulting and investment.
We believe that real estate is fundamentally connected to almost everything, and that better data leads to better decisions and ultimately happier people.
- There’s a lot to do, and our product continues to expand and develop. Over the next year we aim to build out three new product lines, each bringing new capabilities to the market. Work will start with implementing initial designs from the design team, followed by testing, improvements and finally release. After release we iterate rapidly on the product based on client feedback, so anyone joining the team will get to create and deploy code into production, often on the same day.
- The main focus is on the data, but with an eye on the requirements of the whole product; there are no rigidly defined boundaries.
The current data processing is written mainly in SQL using PostgreSQL and PostGIS, and it runs on top of AWS (RDS and Data Pipeline).
Good SQL skills are required, and knowledge of window functions, query optimisation, spatial queries, GIS and data wrangling is a must. This is not a telco-style environment with heavy real-time writes; the workload is data aggregation.
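To give a flavour of the window-function style of aggregation the role involves, here is a minimal, self-contained sketch. It uses Python's built-in sqlite3 rather than PostgreSQL so it runs anywhere, and the table and column names (`prices`, `region`, `avg_price`) are purely illustrative, not part of REalyse's actual schema:

```python
import sqlite3

# Illustrative dataset: average monthly property prices per region.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (region TEXT, month INTEGER, avg_price REAL)")
conn.executemany(
    "INSERT INTO prices VALUES (?, ?, ?)",
    [("E1", 1, 500.0), ("E1", 2, 520.0), ("E1", 3, 540.0),
     ("SW1", 1, 900.0), ("SW1", 2, 880.0), ("SW1", 3, 910.0)],
)

# A window function computes a 3-month rolling average per region,
# the kind of aggregation query the role works with daily.
rows = conn.execute("""
    SELECT region, month,
           AVG(avg_price) OVER (
               PARTITION BY region
               ORDER BY month
               ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
           ) AS rolling_avg
    FROM prices
    ORDER BY region, month
""").fetchall()

for region, month, rolling in rows:
    print(region, month, round(rolling, 1))
```

The same `AVG(...) OVER (PARTITION BY ... ORDER BY ...)` pattern carries over directly to PostgreSQL, where it can be combined with PostGIS spatial predicates.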
Also, as we are in the process of introducing more and more machine learning processing, we may need to make the whole pipeline more flexible, which is why Scala/Python/Spark experience is required too.
We are open to people from a diverse range of backgrounds, and have a preference for candidates with an understanding of most of the following (not all are essential; we’re happy for you to learn aspects as you go):
- Postgres and PostGIS;
- Java/Scala or similar;
- Python for scripting;
We are looking for someone with a good level of experience who will be comfortable growing with the role and taking on more responsibility as the team expands. We are still a small team, so you must be comfortable engaging across all aspects of development and reacting quickly to new information.