It is well known, or should be, that Spark is not secured by default. It is right there in the docs:
Security in Spark is OFF by default
So you should be well aware that you'll need to put in the effort to secure your cluster. And there are many things to consider: the application UI, the master UI, the workers' UI, data encryption, SSL for the communication between nodes, and so on. I'll probably make another post covering the above at some point.
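As a taste of what "putting in the effort" involves, here is a sketch of a few of the relevant spark-defaults.conf properties (each one needs further setup on its own, e.g. distributing a shared secret or a keystore; the placeholder values are illustrative):

```
# spark-defaults.conf (sketch, not a complete hardening recipe)
spark.authenticate              true                  # require a shared secret for RPC
spark.authenticate.secret       <shared-secret>
spark.network.crypto.enabled    true                  # encrypt RPC traffic between nodes
spark.io.encryption.enabled     true                  # encrypt local shuffle/spill files
spark.ssl.enabled               true                  # SSL for the UIs; needs keystore settings
spark.ui.filters                <servlet-filter-class-for-auth>
```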
One thing you probably don't have in mind is that Spark has a REST API through which you can submit jobs. It is not available when running locally, or at least I haven't managed to make it work, but that makes sense: you need a master to submit to, so you need a cluster. …
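For illustration, the standalone master exposes this REST endpoint on port 6066 by default. Below is a hedged sketch of building such a submission request; the field names come from community write-ups of this (not officially documented) API, and the host, Spark version, jar, and class names are all made up:

```python
import json
import urllib.request

# Hypothetical master host; the standalone REST server listens on port 6066 by default.
MASTER_REST_URL = "http://spark-master:6066/v1/submissions/create"

def build_submission(jar_url, main_class, app_args):
    """Build the JSON body the standalone REST API expects (CreateSubmissionRequest)."""
    return {
        "action": "CreateSubmissionRequest",
        "appResource": jar_url,
        "mainClass": main_class,
        "appArgs": list(app_args),
        "clientSparkVersion": "2.4.0",  # illustrative version
        "environmentVariables": {"SPARK_ENV_LOADED": "1"},
        "sparkProperties": {
            "spark.master": "spark://spark-master:7077",
            "spark.app.name": "rest-submit-demo",
            "spark.submit.deployMode": "cluster",
            "spark.jars": jar_url,
        },
    }

def submit(payload):
    """POST the submission; only call this against a real cluster."""
    req = urllib.request.Request(
        MASTER_REST_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read()
```

The response, on success, contains a submission id you can later use to poll status or kill the job.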
Research and awareness needed
TL;DR
It seems that more and more people are agreeing that G6PDd can be a risk factor for COVID-19, not only in terms of the medication that is used to combat the virus, but also regarding one's susceptibility to the virus and the severity of its side-effects. There is an urgent need to verify this through numbers and research. The focus of this article is to raise awareness of the matter.
From the first few days of the lock-down, as I watched the news with the numbers of patients affected and the casualties, I wondered if there could be a relationship between being G6PDd and having serious side-effects from COVID-19, since G6PDd can increase susceptibility to certain infections (while being an advantage in others, especially malaria). …
So, after a few runs with the PySpark ML implementation of Isolation Forest presented here, I stumbled upon a couple of things, and I thought I'd write about them so that you don't waste the time I wasted troubleshooting.
In the previous article, I used VectorAssembler to gather the feature vectors. It so happened that the test data I had produced only DenseVectors, but when I tried the example on a different dataset, I realized that:

VectorAssembler can create both Dense and Sparse vectors in the same dataframe (which is smart, and other Spark ML algorithms can leverage it and work with…

A usual way to read from a database, e.g. Postgres, using Spark would be something like the following:
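A minimal sketch of such a read (the connection details are illustrative; it assumes a running SparkSession named spark and the Postgres JDBC driver on the classpath):

```python
def jdbc_options(url, table, user, password):
    """Options for a plain, single-partition JDBC read."""
    return {
        "url": url,  # e.g. jdbc:postgresql://db-host:5432/mydb
        "dbtable": table,
        "user": user,
        "password": password,
        "driver": "org.postgresql.Driver",
    }

def read_table(spark, url, table, user, password):
    # spark.read.format("jdbc") is the standard DataFrameReader entry point;
    # .load() triggers the actual (lazy) read definition.
    return (
        spark.read.format("jdbc")
        .options(**jdbc_options(url, table, user, password))
        .load()
    )
```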
However, by running this, you will notice that the Spark application has only one task active, which means only one core is being used, and this one task will try to fetch all the data at once. To make this more efficient, if our data permits it, we can use:

numPartitions: the number of data splits,
column: the column to partition by, e.g. id,
lowerBound: the minimum value for the column (inclusive),
upperBound: the maximum value of the column; be careful, it is…

As a first-time mom of a now two-month-old beautiful baby, I find myself most of the time confined in awkward and tiring positions to feed her and to help her relax and sleep. My most wanted superpower at those times would definitely be telekinesis, because once you are in place for feeding, for example, if you don't have everything you need with you, or someone to help you, then depending on the baby's mood you are very much doomed to staring at the thing you are trying to reach, which is right out of your grasp, or at the wall, or worse, the TV. And this might very well be for the next… hour or two maybe? …
Isolation Forest is an algorithm for anomaly / outlier detection, basically a way to spot the odd one out. We go through the main characteristics and explore two ways to use Isolation Forest with Pyspark.
Most existing model-based approaches to anomaly detection construct a profile of normal instances, then identify instances that do not conform to the normal profile as anomalies. […] [Isolation Forest] explicitly isolates anomalies instead of profiles normal points
source: https://cs.nju.edu.cn/zhouzh/zhouzh.files/publication/icdm08b.pdf
Isolation means separating an instance from the rest of the instances
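As a toy illustration of why isolation works (plain Python, one-dimensional, not the Spark or the paper's implementation): a point far from the bulk of the data tends to be separated from the rest by fewer random splits, i.e. it ends up at a shallower average depth.

```python
import random

def isolation_depth(point, data, rng, depth=0):
    """Depth at which `point` is isolated by random splits (1-D toy version)."""
    if depth >= 50 or len(data) <= 1:
        return depth
    lo, hi = min(data), max(data)
    if lo == hi:
        return depth
    split = rng.uniform(lo, hi)
    # Keep only the points that fall on the same side of the split as `point`.
    side = [x for x in data if (x < split) == (point < split)]
    return isolation_depth(point, side, rng, depth + 1)

def mean_depth(point, data, trials=200, seed=0):
    """Average isolation depth over many random trees."""
    rng = random.Random(seed)
    return sum(isolation_depth(point, data, rng) for _ in range(trials)) / trials

# A tight cluster plus one far-away value: the outlier isolates in fewer splits.
data = [0.8, 0.9, 1.0, 1.1, 1.2, 10.0]
```

Running mean_depth(10.0, data) versus mean_depth(1.0, data) shows the outlier's average depth is clearly smaller, which is exactly the signal Isolation Forest scores on.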
contamination parameter. In other words, it learns what normal looks like to be able to distinguish the…

For those who are familiar with pandas DataFrames, switching to PySpark can be quite confusing. The API is not the same, and when moving to a distributed setting, some things are done quite differently because of the restrictions imposed by that nature.
I recently stumbled upon Koalas from a very interesting Databricks presentation about Apache Spark 3.0, Delta Lake and Koalas, and thought that it would be nice to explore it.
The Koalas project makes data scientists more productive when interacting with big data, by implementing the pandas DataFrame API on top of Apache Spark.
pandas is the de facto standard (single-node) DataFrame implementation in Python, while Spark is the de facto standard for big data processing. With this package, you…
Debugging a Spark application can range from fun to a very (and I mean very) frustrating experience.
I've started gathering the issues I've come across from time to time, to compile a list of the most common problems and their solutions.
This is the first part of that list. I hope you find it useful and that it saves you some time. Most of the issues are very simple to resolve, but their stacktraces can be cryptic and not very helpful.
Unit testing Spark applications is not that straightforward. For most cases you'll probably need an active Spark session, which means that your test cases will take a long time to run and that perhaps we're tiptoeing around the boundaries of what can be called a unit test. But it is definitely worth doing.
So, should I?
Well, yes! Testing your software is always a good thing and will most likely save you from many headaches. Plus, you'll be forced to implement your code in smaller bits and pieces that are easier to test, and thus gain in readability and simplicity. …
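One way to keep most of your tests session-free, as a sketch of the "smaller bits and pieces" idea (function names here are illustrative): put the row-level logic in plain Python functions, test those directly, and only wrap them in a UDF at the Spark boundary.

```python
def normalize_phone(raw):
    """Row-level logic: keep only digits. Plain Python, so no SparkSession
    is needed to unit test it."""
    digits = "".join(ch for ch in (raw or "") if ch.isdigit())
    return digits or None

# At the Spark boundary (not exercised in unit tests) this would be wrapped as a UDF:
# from pyspark.sql.functions import udf
# from pyspark.sql.types import StringType
# normalize_phone_udf = udf(normalize_phone, StringType())
```

Only the few tests that exercise actual DataFrame plumbing then need a (slow, shared) local Spark session.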
This is a different kind of article than the ones I usually write, but I thought it was important to write it. It will have some technical details at the end about a relevant side project, but it is definitely not technical in content. I wrote it in the hope that it will be helpful to someone, because my experience could have been avoided with a little bit more information.
Glucose-6-phosphate dehydrogenase deficiency (G6PDD) is an inborn error of metabolism that predisposes to red blood cell breakdown.[1] Most of the time, those who are affected have no symptoms.[3] Following a specific trigger, symptoms such as yellowish skin, dark urine, shortness of breath, and feeling tired may develop.[1][2] Complications can include anemia and newborn jaundice.[2] …