Data Memes
All the best data memes in one place!

https://surfalytics.com 🏄‍♀️
Shared Nothing Architecture is a key term in data engineering and distributed computing. The term was coined several decades ago by Michael Stonebraker, and the approach was used in Teradata's 1983 database system.

Shared Nothing Architecture (SN) is a computing setup where each task is handled independently by separate units in a computer network. This method avoids delays and conflicts common in "shared everything" systems, where multiple units may need the same resources simultaneously.

SN systems are reliable; if one unit fails, others continue unaffected. They're easily scalable by adding more units. In databases, SN often involves 'sharding', splitting a database into smaller sections stored separately.
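
To make the sharding idea concrete, here is a minimal sketch of hash-based shard routing in Python. The shard names, keys, and the choice of md5 are illustrative assumptions, not any particular database's implementation.

import hashlib

# Hypothetical set of independent shards - in a Shared Nothing setup each
# would be its own node with its own CPU, memory, and disk.
SHARDS = ["shard_0", "shard_1", "shard_2", "shard_3"]

def route(key: str) -> str:
    # A stable hash (md5 here) guarantees the same key always lands on the
    # same shard, so nodes never contend for shared resources.
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return SHARDS[digest % len(SHARDS)]

for user_id in ["u-101", "u-102", "u-103"]:
    print(user_id, "->", route(user_id))

If one shard goes down, only the keys it owns are affected; the rest of the cluster keeps serving traffic, which is exactly the reliability property described above.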

PS Post for a like: https://www.linkedin.com/posts/dmitryanoshin_dataengienering-activity-7136413923381059585-rvlW
API stands for Application Programming Interface: a software intermediary provided by an application that allows two applications to talk to each other. REST stands for REpresentational State Transfer, which is an architectural style. REST defines a set of principles and standard conventions around which APIs can be built, and it is the most widely accepted architectural style for building APIs.
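
As a minimal illustration, here is what calling a RESTful API looks like in Python with the requests library; the endpoint URL and the orders resource are made-up placeholders.

import requests

# REST maps operations onto standard HTTP verbs against resource URLs:
#   GET  /orders      -> list orders
#   GET  /orders/42   -> fetch a single order
#   POST /orders      -> create a new order
resp = requests.get("https://api.example.com/orders/42")
resp.raise_for_status()  # any non-2xx status raises an exception
order = resp.json()      # REST APIs commonly exchange JSON payloads
print(order)
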
The primary question for every data professional out there is: How will Generative AI and LLMs reshape the industry, and what are the expectations for future data professionals?

The answer depends on two opposing options:
1. AI will replace roles like Data Engineer, BI Analyst, Data Scientist, and so on.
2. AI will complement these roles, enabling people to work more efficiently, with higher quality and significant impact.

Whichever option you choose, you’ll agree that a growth mindset and constant learning are key to staying competitive and being ready to pivot your career and pick up the right skills.

Our careers remind me of a subway escalator that is moving down while you climb up, step by step. You may falsely assume you've reached the top, forgetting that the escalator is constantly going down.
The bottom line is: as soon as you stop learning and growing, you de facto degrade and lose market value.

At the Surfalytics community, my primary objective is to stay up to date with where the industry is moving, talk with people globally, and move in the same direction.

I feel a wave of power, energy, and momentum that will bring everyone to the right destination, saving them from wasting money and time. On the same note, I feel blessed to see how people are changing their lives forever.

PS In the picture: Tofino, BC! Every summer we run a Surf + Data bootcamp out there!

Link for likes;) https://www.linkedin.com/posts/dmitryanoshin_dataengineering-analytics-dataanalyst-activity-7137165815346315264-iKWk
https://github.com/will-stone/browserosaurus#readme - helps to control multiple browsers
The most straightforward yet profound question for newcomers in data engineering is: What is the difference between ETL and ELT?

You can work with tools like dbt and data warehouses without actually considering the difference, but understanding it is crucial as it leads to the right tool choice depending on the use case and requirements.

You might think of ETL as an older concept, from the time when data warehouses were used primarily for storing the results of the Transformation step. This required a powerful ETL server capable of processing the full volume of data, reading each record and every row in a table or file. With large volumes of data, this could be expensive.

However, with the rise of Cloud and Cloud Data Warehousing, the need for powerful ETL compute has diminished. Now, we can simply COPY data into cloud storage and then into the cloud data warehouse. After this, we can leverage the powerful compute capabilities of distributed cloud data warehouses or SQL engines.

The advent of cloud computing wasn't the only pivotal moment. Even before the cloud, ETL tools like Informatica employed a 'push down' approach, pushing all data into MPP data warehouses like Teradata, and then orchestrating SQL transformations.

Let's consider a simple example:

In the case of ETL:
1. Extract Orders and Products data.
2. Transform the data (join, clean, aggregate).
3. Load the data into the data warehouse, often using INSERT (a slower, row-by-row process).

In the case of ELT:
1. Extract Orders and Products data.
2. Load the data into the data warehouse, often using COPY via cloud storage (a faster, bulk-load process).
3. Transform with SQL, using tools like dbt, or with DataFrames.

Reflecting on the role of Spark, it becomes clear that Spark is an actual ETL tool since it reads the data when performing transformations.
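
Here is a compact Python sketch of the two flows above. The file names, table names, and the commented-out warehouse calls are illustrative assumptions; the point is where the heavy transformation happens.

import pandas as pd

# --- ETL: transform outside the warehouse, then load row by row ---
orders = pd.read_csv("orders.csv")                    # Extract
products = pd.read_csv("products.csv")
enriched = orders.merge(products, on="product_id")    # Transform on the ETL server
daily = enriched.groupby("order_date", as_index=False)["amount"].sum()
# Load via INSERT, one row at a time - the slow part:
# for row in daily.itertuples(index=False):
#     cursor.execute("INSERT INTO daily_sales VALUES (%s, %s)", row)

# --- ELT: bulk load the raw data first, transform inside the warehouse ---
# cursor.execute("COPY INTO raw_orders FROM @my_stage/orders.csv")    # fast bulk load
# cursor.execute("COPY INTO raw_products FROM @my_stage/products.csv")
# cursor.execute("""
#     CREATE OR REPLACE TABLE daily_sales AS
#     SELECT o.order_date, SUM(o.amount) AS amount
#     FROM raw_orders o JOIN raw_products p USING (product_id)
#     GROUP BY o.order_date
# """)  # the join and aggregation run on the warehouse's distributed compute

In the ELT version, the external process only moves bytes; all row-level work is pushed down to the warehouse, which is the same idea as the Informatica push-down approach mentioned above.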

Link to share to like: https://www.linkedin.com/posts/dmitryanoshin_etl-elt-dataengineering-activity-7137583690678747136-7imR
Say No to "Season of Yes".
I know some of you face challenges due to a lack of knowledge of Cloud Computing - Azure, AWS, GCP.

Starting in January, I will run a 6-week online program at the University of Victoria: Tuesday/Thursday, 6 pm PST, for 2 hours.

The price is 715 CAD. The money goes to the university; I am not getting much. But it is an extremely good opportunity to close the gap in cloud computing (Azure/AWS) and do lots of hands-on work.

Your employer may pay for this course.

Highly recommend. I think 10 seats are left.

https://continuingstudies.uvic.ca/data-computing-and-technology/courses/cloud-computing-for-business/
Last weekend, we worked on a traditional data engineering project at Surfalytics, which involved using Snowflake as a Data Warehouse, dbt for transformations, and Fivetran for data ingestion from Google Drive.

For BI and data exploration, we utilized Hex. We hosted dbt in a container and ran it via GitHub Actions.

The project was prepared and executed by Nikita Volynets, and Tsebek Badmaev did an amazing job documenting the code on GitHub. Now, anyone can reproduce it and learn from it.

I bet everyone learned something new and will use this newfound knowledge at work or in interviews.

Link to the repo: https://lnkd.in/g4PXNV_W

Link for like: https://www.linkedin.com/posts/dmitryanoshin_dataengineering-dbt-snowflake-activity-7137879439664693248-V1wzp
Before you start chasing analytics and data engineering jobs, I usually suggest becoming more or less fluent in a few things:

- CLI: popular commands, navigation, vim and nano text editors, permissions, environment variables
- GitHub (or any similar platform), with a focus on code reviews, PRs, the development lifecycle, basic pre-commit, CI/CD
- Containers: Dockerfile, image, compose
- IDE of your choice: don't know where to start? Take Visual Studio Code.

You don't need to be a pro in any of these, but it will make a difference and pay back in the long term, i.e. #engineeringexcellence

Last week at Surfalytics I covered the CLI and GitHub, and next Saturday I'm planning to wrap up containers.

All of this will be wrapped into three simple free courses - "Just enough <TERM> for data professionals"

Link for like: https://www.linkedin.com/posts/dmitryanoshin_engineeringexcellence-dataengineering-analyticsengineering-activity-7138073576309489667-_H2h
AI competition is weird. But this goes deeper than "Which LLM is best?" or companies trying to back the winning horse. Behind the scenes is a cloud war and a chip war - that's where the money is. Let's take a look:


1. This is a cloud war.

Let's take Anthropic, for example. They're committing to use AWS as their primary cloud provider. That could translate into billions in revenue for AWS as Anthropic scales up.

By investing in Anthropic and its large language model Claude, Amazon is positioning itself to reap the benefits of the growing AI market.

As Claude gains popularity and drives more businesses to adopt AI solutions, it funnels money back to Amazon through increased usage of AWS services.

This strategic investment not only strengthens Amazon's position in the AI space but also creates a virtuous cycle of growth for its cloud business.

Guys - everyone is doing this. Investing huge amounts and getting it back in cloud services. That should command our attention.

The war between MS Azure, Google Cloud and AWS is worth billions and it’s only going to get bigger.


2. This is a chip war.

Chips are everything - they’re the engines. And up till now Nvidia has ruled the world.

But let’s just look at the last few weeks:

Nvidia:
The company announced the H200 GPU on November 13. This new chip is designed for AI work and upgrades the H100 with 1.4x more memory bandwidth and 1.8x more memory capacity. The first H200 chips are expected to be released in the 2nd quarter of 2024.

Microsoft:
Microsoft unveiled the Maia 100 artificial intelligence chip on November 15. The chip is designed for AI tasks such as generative AI. The company hasn't provided a specific timeline for the release of the Maia 100, but it is expected to arrive in early 2024.

Amazon:
Amazon Web Services (AWS) announced the next generation of two AWS-designed chip families, AWS Graviton4 and AWS Trainium2, on November 28. These chips are designed for a broad range of customer workloads, including ML and AI applications. The announcement came at their big show in Vegas.

And Google has jumped into this race as well.