Is Your Architecture Ready for Kafka? Yaniv Ben Hemo, CEO @Superstream

When does your architecture outgrow REST APIs? Yaniv Ben Hemo, Co-Founder & CEO at Superstream, joins us to unpack what Kafka actually is, when you need it, and why it’s often misunderstood. From real-time user experience to scaling microservices and optimizing for cost—not just performance—this episode is a practical guide to understanding Kafka’s role in modern data strategy and how Superstream is helping companies get more from it.

This transcript was generated automatically by AI. If you find any mistakes, please email us.

0:00:00
(Yaniv)
80% of the issues with Kafka would actually not be Kafka itself, but rather your applications, your clients. I can crash your Kafka clusters with one single parameter on a very big producer. I can just crash your entire cluster. Kafka is not very good in noisy neighbors scenarios. So one single client with bad configuration can ultimately crash your entire cluster.

0:00:31
(Announcer)
Hello, everyone.

0:00:32
(Announcer)
You're listening to Cloud Next, your go-to source for cloud innovation and leaders' insights, brought to you by GlobalDots.

0:00:44
(Ganesh)
You might remember the headlines from a few months ago about the Oasis ticket sale fiasco. Fans swarmed the website only to encounter crashes, endless loading screens and error messages, leaving many frustrated and ticketless. It's a scenario we've all seen before, where systems buckle under the pressure of massive real-time demand. Today we'll dive into the solutions, tools like Kafka, which are designed to handle exactly these kinds of challenges,

0:01:10
(Ganesh)
enabling seamless, high-throughput data streaming for real-time events. I'm Ganesh The Awesome, Solutions Architect at GlobalDots, and who better to guide us on this subject than Yaniv Ben Hemo, co-founder and CEO of Superstream, who's here to share his experience on Kafka and how you can bridge the gap between data chaos and data control.

0:01:30
(Ganesh)
Yaniv, before we start, what should people know about you? Tell the listener a little bit about yourself.

0:01:38
(Yaniv)
Thank you. I'm happy to be here. So I'm Yaniv, CEO and co-founder of Superstream. There's not a lot to tell ever since I built the company: I'm 31 years old, married with two kids and a gold driver. I spend most of my time between building the company and sharing time with my family.

0:01:59
(Yaniv)
Not more than that ever since we established the company.

0:02:03
(Ganesh)
I would say speaking to quite a few co-founders and founders in my life, that even getting to spend some time with your family alongside the business is like you're doing quite well.

0:02:12
(Yaniv)
Yeah, exactly.

0:02:13
(Ganesh)
So for listeners who might be new to Kafka and not the famous writer, can you explain what it is in a nutshell and why it's become such a pivotal tool in modern data architecture?

0:02:26
(Yaniv)
Right, yeah. One of the most fascinating components, software components, that I've ever worked with. I've been a software engineer basically all my grown-up life, and a builder, and I've built quite a few tools, open-source projects and things like that.

0:02:44
(Yaniv)
And I find Kafka extremely interesting because it evolved from solving one very specific challenge, asynchronous communication, which is, by the way, one of the most frightening words in English, I think. But it basically means that, you know, you try to communicate, you're an entity and you try to communicate with another entity,

0:03:11
(Yaniv)
and you're not necessarily waiting for a response or a reply. And just think about that: that alone, which is a use case that was empowered by message brokers, and more specifically Kafka and RabbitMQ, that alone literally changed user experience as we know it. You don't necessarily need to look at your screen when you order a pizza,

0:03:38
(Yaniv)
or you don't necessarily need to wait on the application itself when you order an Uber or something like that. So that was empowered by Kafka. It's like the most basic use case that Kafka solves or offers. I call it a real-time bridge, or an asynchronous bridge, between applications. You can't get a simpler phrase than that. But what's really interesting about that component is that it evolved from that use case, providing asynchronous communication between applications, into a real-time database, an analytical database, moving big data, big chunks of data, between different

0:04:28
(Yaniv)
systems, update different systems from one event that occurred from a user that purchased some item on our website. All of a sudden, we can sync five or seven or whatever amount of systems about that purchase. That opens a whole variety of use cases. So Kafka is a lot of things. It started from, again, from a bridge between applications, between many to many applications,

0:04:57
(Yaniv)
and now it's transformed into so many other components or other use cases that empowers big data as we're familiar with it

0:05:06
(Ganesh)
today. So that's a pretty good point, because I understand it myself, though definitely not to your level, let's be absolutely clear. You mostly see it... well, I mostly see it inside sort of large big-data-type applications. So enormous online delivery companies, enormous gambling companies that need to process millions of transactions, ticketing companies, stocks and shares companies. But for the audience who isn't aware, what might be some of the problems or the opportunities

0:05:50
(Ganesh)
that would indicate that Kafka is the right solution for their need?

0:05:57
(Yaniv)
I think that is one of the top three questions that I've been asked by so many over the years that I've been working with Kafka and around Kafka. And to be honest, the most straightforward answer to that would be, when you need Kafka, you know when you need Kafka. That's actually the answer. And I'll explain why. Your application is growing.

0:06:28
(Yaniv)
The amount of asynchronous or REST calls between different microservices is scaling up. And all of a sudden you start to see data being lost when moving between microservices, you start to see applications or backends crashing because of the massive workload they need to process every single second. And then you understand: oh, I need something in the middle to absorb that amount of data before it reaches my applications, and to enable these applications or these backends to pull data or to pull tasks

0:07:15
(Yaniv)
or to pull different pieces of data from other locations when they can. That's what we call at-least-once delivery. So eventually, the messages or the tasks or the communication that we want to pass between applications will reach their destination, but at their own pace. There is something very scalable, very durable in the middle that can absorb that scale and free your backends and your applications from being crashed and flooded by these requests. So that's usually the most immediate use case

0:07:59
(Yaniv)
for bringing Kafka on board. That's the first thing. The second thing is, again, when you need to notify multiple systems based on one event. So one event occurred, and you want to notify multiple systems to do something based on that event. When you're very small and your architecture is very small, you start with a simple REST call from one microservice to a second microservice, right?

0:08:32
(Yaniv)
And then all of a sudden you have a CRM, and all of a sudden you have a monitoring system, and all of a sudden you have a billing system. And then it's like: hey, so I need to duplicate that function and send this REST call to that and that and that. That's not scalable. Right, so you put a message broker in the middle, or Kafka in this case. And then you just shoot one message towards a topic inside your Kafka, and then you have multiple listeners on the other side

0:09:04
(Yaniv)
that just consume and pull that information and do whatever they need to do afterwards. So it's really about, like, usually when you scale, or if there is specific user experience that you want to provide your users, as we explained, like, order a pizza without seeing or without being in the application

0:09:27
(Speaker 7)
itself.
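The fan-out pattern described above, one message shot at a topic and multiple listeners each pulling their own copy at their own pace, can be sketched without a real broker. This is an illustrative, in-memory toy, not Kafka's actual API; the system names (CRM, billing, monitoring) are just the examples from the conversation.

```python
import queue

class ToyTopic:
    """In-memory stand-in for a Kafka topic with fan-out to many listeners."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        # Each listener gets its own queue, so it consumes at its own pace.
        q = queue.Queue()
        self.subscribers.append(q)
        return q

    def publish(self, event):
        # One message in, one copy per subscriber out; the producer never waits.
        for q in self.subscribers:
            q.put(event)

topic = ToyTopic()
crm = topic.subscribe()
billing = topic.subscribe()
monitoring = topic.subscribe()

# The producer "shoots one message towards a topic"...
topic.publish({"type": "purchase", "item": "pizza"})

# ...and every downstream system handles its own copy independently.
for name, q in (("crm", crm), ("billing", billing), ("monitoring", monitoring)):
    print(name, "handled", q.get()["type"])
```

From the producer's point of view this is the asynchronous bridge described above: publishing returns immediately, and no subscriber can slow the others down.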

0:09:28
(Ganesh)
That's a great description. And it makes me wonder: what is the most common predecessor in an organization? You talked about having maybe a cron server or something like that. I find it hard to believe that people would be using a basic cron server and then jump to Kafka. It feels like there's a middle step. Do you have any feeling on that?

0:09:55
(Yaniv)
Yeah, yeah, definitely. I've researched it quite a bit. When we first started Superstream, we built an open-source project called Memphis.dev, which is an alternative or a replacement to Kafka, and I spent quite a bit of time understanding the movement of potential customers or potential users, and the exact moment in their company lifecycle or application lifecycle

0:10:31
(Yaniv)
when the need arises, right? And to understand when that moment shows up. And in literally, like, 99% of the time, startups or companies that have just started building will start with RabbitMQ. That's another open-source project, nowadays supported by VMware. It started back in 2007, and it's basically the small-scale version of Kafka. It has a lot of nuances, and I won't go into them now. But you start with that because it's such an easy component to deploy.

0:11:19
(Yaniv)
It has a great suite of containers and Helm charts and stuff like that. So it's really easy to deploy and to maintain. So companies will usually start with that. Nowadays, with the cloud providers, the first touchpoint with message brokers and queues might also be GCP Pub/Sub or SQS,

0:11:39
(Yaniv)
which can also, by the way, scale beautifully. Between these two, GCP Pub/Sub and SQS, you don't necessarily need to switch to Kafka. But again, these usually would be the first touchpoints, because Kafka, up until today, even though it has managed services and vendors that support it, is still seen as a more professional, scalable tool. So you would start with tools like RabbitMQ, Pub/Sub, SQS, and if you

0:12:16
(Yaniv)
would need the more robust version of message brokers, you would make the replacement to Kafka.

0:12:25
(Ganesh)
Thank you. And I guess on that journey, when you're flicking the switch and you're about to take the plunge into Kafka, it can be daunting for the teams, and it can be daunting for companies, to drive it effectively and to scale it effectively. What advice would you give to organizations that are just about to start that journey?

0:12:47
(Yaniv)
Go with the managed service, 100% of the time. Even if you're low on resources, low on budget, there are plenty of solutions out there, managed solutions. If you calculate, or do a comparison between, the amount of human hours that you're going to put into managing Kafka on your own, and scaling it, and maintaining it, and optimizing it, and configuring it, in comparison to the license cost of a managed one,

0:13:23
(Yaniv)
it will be much cheaper to use a managed version of Kafka. And I strongly recommend always starting with that, because the migration from a self-hosted one to a managed one is a daunting task, it's a hassle, you don't want to be there. So start with a managed one. And if your application at the moment does not require it at all,

0:13:46
(Yaniv)
then don't use Kafka. Just go with RabbitMQ. Start with RabbitMQ, start to scale, start to see what exactly your application needs, and then make a smart or intelligent decision and choose your next evolution based on that.
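The build-versus-buy arithmetic described above can be sketched in a few lines. Every number here is a hypothetical placeholder, not a quote from any vendor; the point is only that engineering hours are a cost term that often gets left out of the comparison.

```python
def self_hosted_monthly(eng_hours, hourly_rate, infra_cost):
    """Running Kafka yourself: engineering time plus infrastructure."""
    return eng_hours * hourly_rate + infra_cost

def managed_monthly(license_cost, infra_cost=0):
    """A managed Kafka service: license plus any infra you still pay for."""
    return license_cost + infra_cost

# Hypothetical example: half an engineer (80 h/month) at $100/h plus
# $2,000/month of instances and storage, versus a $6,500/month plan.
diy = self_hosted_monthly(80, 100, 2_000)   # 10_000
managed = managed_monthly(6_500)            # 6_500
print("managed saves", diy - managed, "per month")
```

Plug in your own team's numbers; the comparison flips only when the managed license exceeds the full loaded cost of the people who would otherwise run it.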

0:14:07
(Ganesh)
Wise. And you obviously founded a company that helps to optimize Kafka and simplify for organizations. Can you walk us through your approach and how it differs from traditional methods, but also what the difference is between you and a managed provider? So I mean, Confluent Cloud would be a famous one that we could talk about, but just break that difference down for us.

0:14:37
(Yaniv)
Right. Many companies tend to think, when they first see us, that we either replace their Kafka or that we're yet another Kafka provider. Sometimes they get confused and ask: are you another Kafka provider or something like that? Do you provide a nicer or easier or simpler-to-use Kafka? So 100% no. And that's why, by the way, when you asked me about management or deployment

0:15:05
(Yaniv)
or stuff like that, I didn't mention Superstream, because that's actually not our niche. So what we say to companies is: you choose your own Kafka deployment. It can be either self-hosted using the open-source project, or a managed version. And we will help you not manage it better, but rather optimize it, extract the most out of it, make it more cost-efficient, improve performance, improve your configuration, so you would be much more durable, much more reliable.

0:15:46
(Yaniv)
Most of the time, Kafka would be a mission-critical infrastructure. You know, I always ask that question. Every new customer, in our first meeting, I ask them: what happens if your Kafka is down? Just to get a better sense of its position within the environment and within the architecture.

0:16:08
(Yaniv)
80% of the time, not 100%, but 80% of the time the answer would be: if Kafka is down, our entire platform is down, or our product is down, or we will not be able to process payments or new customers. Things that are really, really mission-critical to the organization, to the company. And so we're all about taking the existing and making it better. So our tagline is,

0:16:40
(Yaniv)
your Kafka in its best shape. And that's exactly what we do. So it's not a fancy UI for Kafka, but rather a fully autonomous platform that will make sure, or that will ensure, your Kafka is running in its best shape: cost-efficient,

0:17:00
(Yaniv)
as fast as possible, given the resources that it has, and a bunch of other benefits, but not specifically for management.

0:17:10
(Speaker 5)
Mm-hmm.

0:17:11
(Ganesh)
It's pretty interesting, actually, that it is so... It's so core to applications a lot of the time. I mean, I'm almost surprised it's as low as 80%. I often think it would be even higher than that. Um, and there aren't too many things in the system now that are like that, where if that component falls over, it's a single point of failure.

0:17:34
(Ganesh)
You know, for most people, it's like AWS these days. As long as AWS is up, it provides enough redundancy, with all these different ways and means, that you don't have one. But then inside your own application, you do have this massive single point of failure if it goes down.

0:17:50
(Yaniv)
So the 80%... I started the conversation with that. That is what makes Kafka fascinating for me, because the other 20% of the use cases are analytical use cases, more for BI purposes: moving big chunks of data for the data analysts in the company to create real-time dashboards and real-time reports based on the actual data that is flowing. Like, if you're a big retail company,

0:18:21
(Yaniv)
right, and you have 2,000 shops, and you want to know, or the executives want to know, at every single moment: what is the amount of revenue that has been collected in every single store? Or how many customers have been traveling through our shops, or something like that. So these are the analytical use cases, where if Kafka is down, worst-case scenario, the CEO or the CFO will not be able to see the reports, but the business will still be operating.

0:19:00
(Ganesh)
Yeah, makes sense. And given the criticality of it, what are some of the common mistakes or some of the misconceptions that you see organizations when they're adopting Kafka? And moreover, how can they avoid them?

0:19:17
(Yaniv)
Yeah. Number one would be, and I can't blame anyone, but number one would be taking the default configuration, which makes perfect sense, right? Take the default. Like, who has the time to go into the internals of Kafka and configure it? Data engineers, or DevOps, or the technical champion.

0:19:40
(Ganesh)
People who really, really love Linux and like writing their own drivers. Yes, people, those

0:19:46
(Yaniv)
people. And exactly. But even these people nowadays have tons of things to do and tons of systems to take care of. And, you know, I would simply not expect anyone to go into the exact internals of each and every system that they are touching. So usually, most of the time, they will use the default configurations, and that exposes their cluster to...

0:20:13
(Yaniv)
Most of the time it will work, but what do we call "working", right? If you're paying a million dollars a year for your self-hosted Kafka infrastructure, and you only get, like, one megabyte per second of throughput? And every time you go to a solutions architect at one of the cloud providers and tell them, like, hey, what is going on?

0:20:41
(Yaniv)
I'm using NVMe storage, right? And I'm using the heaviest, the best instances out there. Why do I still get the throughput that I'm getting? And usually they will tell you: add more brokers, add more servers, add more EC2s, right? And that's not the solution. Kafka specifically requires a very delicate and very deep understanding of the internals and how things are working. And at the same time, of how your application works

0:21:19
(Yaniv)
and what it expects, what it is about to transfer, the size of the messages that it transfers, and a bunch of other parameters that really influence the entire reliability of your Kafka. So that would be number one. And again, I'm not expecting anyone to do something with what I just said about not taking the default configuration, but that would be the number one thing. The second thing is a bit surprising for everyone, for customers

0:21:54
(Yaniv)
like even mature Kafka customers, it usually surprises them. But 80% of the issues with Kafka or around Kafka, even if you're using a managed service, would actually not be Kafka itself, but rather your applications, your clients. I can crash your Kafka clusters, and it doesn't really matter how many brokers you have,

0:22:22
(Yaniv)
how many servers and how strong they are. With one single parameter on a very big producer, producer is a client that produces data into Kafka, I can just crash your entire cluster. Kafka is not very good in noisy neighbors scenarios. So one single client with bad configuration can ultimately crash your

0:22:47
(Yaniv)
entire cluster. So the mitigation for that scenario would be to ensure you enforce, like actually enforce, not just tell your developers "don't do that, do this instead", or provide them a boilerplate with a predefined configuration, but literally create some measures to enforce the specific configurations on your clients. And lastly would be storage. So the third most common risk, or potential risk, for Kafka clusters, again, not serverless ones, there are multiple nuances between serverless and non-serverless, but if we stick with non-serverless

0:23:36
(Yaniv)
Kafka clusters, the third risk to mitigate would be storage. So, storage depletion: monitor your storage wisely, clean whatever you can clean, modify retention policies, reduce the number of partition replicas if you don't necessarily need to protect your topic, or every bit of data, across three different AZs. Reduce that, monitor that constantly,

0:24:09
(Yaniv)
and ensure you don't run out of storage. Running out of storage in Kafka is very critical, and restoring the cluster from that failure is very challenging. Compared to, like, losing a broker or something like that, that's really easy.

0:24:30
(Yaniv)
You build up another broker, all the data gets reconstructed on it, and that is it. But recovering from storage depletion, that's a problematic issue to get into.
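The storage levers listed above, retention and replication, fall out of one rough equation: disk needed is roughly throughput times the retention window times the replication factor. A back-of-the-envelope sketch with illustrative numbers, ignoring compression and per-broker imbalance:

```python
def required_storage_gb(mb_per_sec, retention_hours, replication_factor):
    """Approximate cluster-wide disk needed to hold the retention window."""
    return mb_per_sec * retention_hours * 3600 * replication_factor / 1024

def headroom_ok(required_gb, disk_gb, safety=0.8):
    """Alert well before the cluster actually runs out of disk."""
    return required_gb <= disk_gb * safety

# 5 MB/s of ingest, 7 days of retention, replicated across 3 AZs:
print(round(required_storage_gb(5, 7 * 24, 3)), "GB")   # 8859 GB
# Trimming retention to 3 days and replication to 2 shrinks it by ~3.5x:
print(round(required_storage_gb(5, 3 * 24, 2)), "GB")
```

The same arithmetic explains the advice above: cutting retention or replicas reduces the footprint linearly, while the safety margin in `headroom_ok` is what turns "monitor constantly" into an alert that fires before the cluster is unrecoverable.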

0:24:43
(Ganesh)
Well, what seems interesting about all of that, and it makes me laugh, is thinking about adding extra CPUs and brokers and things. Whenever I hear that as the solution to a problem, it's obviously not right. You know, like in the old days, when they used to get you to build up VMware boxes for things,

0:24:59
(Ganesh)
and they'd be like, right, it needs 128 gig of RAM, 32 cores. And you're like, it's a finance application. What is this? Exactly. It's sloppy, but what seems interesting about this ecosystem to me is you have, maybe I'm wrong, which is why I'm asking the question,

0:25:16
(Ganesh)
but you seem to have platform-as-a-service providers, so those are the Confluent-type people, and lots of people making variations of different kinds of Kafkas, whether that's Redpanda or whatever, and blah, blah, blah. But then to put it next to Kubernetes: in the world of Kubernetes, you have loads of tooling on top, for tuning, for security products, for both code security and runtime security. And there are a lot of safeguards with Kubernetes, to the point where

0:25:47
(Ganesh)
someone fairly novice could probably just fire it up with a couple of tools to support them and get away with it in production, basically. Not on a large scale, but definitely they could. There doesn't seem to be any of that with Kafka. Aside from what you've built, which is, let's call it, the Kafka tuner, for

0:26:11
(Ganesh)
example, which is getting the best out of your Kafka as it is. I don't know what the product ecosystem is like at the moment. Is it bereft out there?

0:26:20
(Yaniv)
What does it look like? We are the only one at the moment. And it's interesting. I mean, Kubernetes is a very complicated solution, a very complicated infrastructure to tune, to optimize, to maintain as well, definitely. But at the same time, by nature, if we compare Kubernetes to Kafka, for example, Kubernetes is very stateless, right? And Kafka, on the other end, is a very stateful application. And when state comes into play, you get all these complications that

0:27:08
(Yaniv)
I'm not sure a lot of people around the world want to go into. And, unfortunately, we like challenges and hard stuff to tackle, so we decided to do that. And it requires a significant amount of internal expertise, because at the end of the day, again, it's a very mission-critical component. So the other side, the customer that allows you to work on their Kafka, needs to know that someone who is very experienced and knowledgeable about every bit and byte inside the tool is working on their Kafka, right?

0:27:54
(Yaniv)
So we brought all of that. And to be honest, there are a lot more people knowing Kubernetes than people that know Kafka.

0:28:07
(Ganesh)
I definitely feel like it's a technology that you only learn if you're the guy left holding the ball. You know, the previous Kafka guy left, so you're the new Kafka guy. Exactly. Nobody chooses that. I'm convinced of it.

0:28:26
(Yaniv)
And it's extremely hard to find people that work with Kafka. Really. It's usually like: hey, what do you do in the company? I'm a DevOps engineer with 10 years of experience. Awesome, want to join the Kafka team? And then, like, you teach the new person the mysteries of

0:28:48
(Yaniv)
Kafka, and then you get another Kafka engineer in the world. But literally, that's the way to bring in skills and get people to learn that tool. That's how complicated it is.

0:29:00
(Ganesh)
Definitely. It's kind of a catch 22 actually really because it is mission critical but then it's not used by like the wealth of organizations. It's kind of the top tier ones. So then you've got to build a tool that's only for the top tier. Quite complicated.

0:29:16
(Yaniv)
Interesting insight about that. Oftentimes you would assume that a company, a very big company, a big manufacturer, without going into exact names, has a very big Kafka cluster, just by perception. And actually, it really depends on the nature of your application and how coupled and how critical Kafka is to your application. That's what increases the spend, or increases the usage of Kafka, rather than the size of the company or whether it's a Fortune 500 company. We've

0:30:04
(Yaniv)
been working with Fortune 500 companies, Fortune 100 companies. They are not necessarily very big on Kafka. Actually, sometimes you would be surprised that they are just starting their real-time journey. So it's really all about the position of Kafka within your application architecture.

0:30:22
(Ganesh)
I definitely wouldn't be surprised that they're only starting their real-time journey. I interfaced with a lot of those people talking about their cloud migrations not very long ago, when everybody would have assumed everybody was cloud native by this point. But we digress a little bit. Coming back to Kafka and its evolution, as somebody who's, I don't know, top whatever Kafka expert in the world, top 10, top 5, I don't know, top 1, could be.

0:30:58
(Yaniv)
No, no.

0:30:59
(Ganesh)
But as someone who's definitely up there, where do you see this evolution going in the next five to 10 years? What's it going to look like for data streaming?

0:31:07
(Yaniv)
I can spend a dedicated podcast episode just on that. I have this crazy theory that I've been researching for quite a while. What is the position of Kafka in this AI world, right? This evolving world. It's a whole different set of communication that are going to take place. We're talking about agents, right? Agents need to communicate with each other, but agents are communicating with each other

0:31:39
(Yaniv)
in very big payloads. Kafka is not very suitable for very big payloads, beyond like two, three, four, five megabytes. I mean, of course, you can go all the way up to 10 megabytes, but even that is small compared to the type of communication that we are experiencing with AI nowadays. So, will Kafka evolve to support the future type of communication that we will see in the coming years?

0:32:11
(Yaniv)
Will there be a different tool to support that? Not necessarily replace Kafka, but just another, like a different bridge that can support that massive payload that is about to be transferred between different entities.
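One common workaround for the payload ceiling discussed above, offered here only as an illustrative sketch rather than anything Kafka or Superstream prescribes, is to split an oversized payload into chunks that each fit under the broker's message-size limit and reassemble them on the consumer side (many systems prefer a claim-check pattern instead: store the blob elsewhere and send only a reference). The 1 MB limit below mirrors Kafka's default cap; everything else is hypothetical.

```python
MAX_MESSAGE_BYTES = 1024 * 1024  # roughly Kafka's default per-message cap

def chunk_payload(payload: bytes, limit: int = MAX_MESSAGE_BYTES):
    """Yield (index, total, chunk) triples that each fit under the limit."""
    total = (len(payload) + limit - 1) // limit
    for i in range(total):
        yield i, total, payload[i * limit:(i + 1) * limit]

def reassemble(chunks):
    """Inverse of chunk_payload: order by index and concatenate."""
    return b"".join(c for _, _, c in sorted(chunks, key=lambda t: t[0]))

# A ~3 MB "agent" payload, well over the per-message cap:
blob = b"x" * (3 * 1024 * 1024 + 17)
parts = list(chunk_payload(blob))
print(len(parts), "chunks")            # 4 chunks
assert reassemble(parts) == blob       # round-trips losslessly
```

The catch, and part of why the question above is open, is that chunking pushes ordering and partial-failure handling onto every consumer, which is exactly the kind of complexity a future broker for agent-sized payloads would need to absorb.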

0:32:25
(Yaniv)
That's the first thing. The second thing is that I'm also experimenting with whether Kafka can evolve into an actual database. Now it is touted as a real-time database, but you won't see companies, you won't see customers, really ditch their relational and NoSQL databases for Kafka. Usually it would be some minor transformation, minor processing, and then throw everything to S3, to some data lake, to Iceberg tables, to a data warehouse, and then do all the queries there. Will we be able to do that straight into the source,

0:33:19
(Yaniv)
straight on top of the source, which is Kafka? So these are the two things that I'm researching and also excited about. I mean, the first one, the communication between AI agents, which we are not there yet. Like, I have another theory on that.

0:33:37
(Yaniv)
We'll keep that on the side, but we're not there yet in the full form of what agents are, but they will eventually need to communicate with each other. So that bridge, I'm really excited to see if Kafka would be the one to support that new level or new set of communication.

0:34:00
(Ganesh)
I had a sneaky suspicion that the AI might be in the answer somewhere. But yeah, it's quite interesting. I mean, the uses for these massive data stores are never properly realized when they're born. As you were talking, I was thinking about... There's a project called Data Fusion, which uses...

0:34:24
(Ganesh)
You're aware of it? It pulls enormous amounts of data out of S3 so you can query it as if it's a database. Nobody imagined that was the use of S3 when they created it, but now we've got this lightning-fast querying on top of it. And then we're talking about something infinitely more complicated and powerful. So yeah, it'll be interesting to see how it unfolds. But a Kafka-style database... databases are very slow to move in the industry. That's what I've noticed.

0:34:59
(Ganesh)
So I heard a story that, despite the fact that Amazon runs on all these magical databases, as you can imagine, and I'm talking about Amazon the shop, obviously using their own technology, when it comes to processing the credit card transactions, due to regulation rules, they have to be processed on a relational database. So when you get that little pause at the end, when you've finished your transaction, that's because they have to take it off the really fast infrastructure,

0:35:25
(Ganesh)
push it over to a crappy relational database that has to process it, and then push it back. And it's only for the card transactions; everything else doesn't need it. And I feel like there's an inherent slowness to databases: they're in, and they're working, so just don't touch that stuff. Exactly.

0:35:46
(Ganesh)
The database developers are probably as rare or more rare than Kafka people. It's a nightmare.

0:35:57
(Yaniv)
100%. And if you clean away the buzz, you know, everything is AI, everything contains the word AI, right? You would still find everything stays the same, right? Stays in place. Everyone's still using relational and non-relational databases and stuff like that. But there are strong forces right now, like, for example, cloud spend. Cloud spend is really on the rise. We're talking about, until 2030, I think the last Gartner research forecast, like 7.2 or

0:36:37
(Yaniv)
7.3 trillion dollars of IT spend, and among that are the cloud providers, right? So I think that if you're able to shrink the number of touchpoints your data goes through, and you would be able to get that similar approach of DataFusion on top of S3, but, for example, on top of the first touchpoint of your data, without transferring it to different places,

0:37:05
(Yaniv)
for example, Kafka. And if it will make sense economically, then companies will do that. But yeah, I mean, everything is moving slower than we think, or at least the media try to show us, right? But if it makes sense, it will happen.

0:37:26
(Ganesh)
Yeah, I fully hear you on that one. The media would love you to believe that AI is going to be here next week. It's definitely not the case. But great. And then we always like to ask what's now been dubbed the DeLorean question, which is: if you could go back in time and give yourself one piece of professional advice, what would it

0:37:50
(Speaker 5)
be?

0:37:51
(Yaniv)
Build, I would say. Sorry if it's too shallow, but I think...

0:38:02
(Ganesh)
Somebody said buy Bitcoin once, so you can't get this out of the bag.

0:38:06
(Yaniv)
I'm not that materialistic. I would say, like, Y Combinator, when you go to the different white papers and guides they publish, they always urge young people with entrepreneurial instincts to dive deep into entrepreneurship and building stuff really early, as early as they can. If you ask me to summarize the three years in which I've built the company, I would say it's the biggest, most exciting school I've ever had. Much more exciting than any bachelor's degree in any school on earth. So I would probably tell myself: start early, build early, and dive into that ocean as early as possible.

0:39:10
(Yaniv)
But that's what comes up to my mind.

0:39:13
(Ganesh)
I think that's a nice one. It's a great piece of advice. And to be honest, I don't think anybody should be fooled about how difficult being a founder is. I feel like you need to be of a certain age, you need to have young blood, to make it happen.

0:39:32
(Yaniv)
Exactly. But not necessarily to be financially successful, not for those reasons. Just build, go out there and build. That's the most satisfying experience ever, when people find the things that you build useful and valuable. So I would tell myself: start building early, not necessarily to be a financial success, but rather to experience that feeling as early as possible.

0:40:11
(Ganesh)
That's lovely, and a great note for us to finish on. Thanks for that. A real pleasure having you, Yaniv, and I look forward to hearing about you in the news as we go forward.

0:40:22
(Yaniv)
Thank you very much. I enjoyed that. And thank you very much for the opportunity.

0:40:27
(Ganesh)
This episode was produced and edited by Daniel Ohana and Tom O'Morvinson, sound editing and mix by Bren Russell. I'm Ganesh The Awesome. And if you're ready to deep dive and start transforming the way you approach cloud practices and cybersecurity strategies, then the team and myself at GlobalDots are at your disposal.

0:40:45
(Ganesh)
We are cloud innovation hunters, and we search the globe looking for future tech solutions so we can bring them to you. We've been doing it for over 20 years. It's what we do. And if I do say so myself, we do it pretty well. So have a word with the experts.

0:40:58
(Ganesh)
Don't be shy. And remember, conversations are always free.
