Apache Spark: export PostgreSQL data in Parquet format











I have a PostgreSQL database with ~1000 different tables. I'd like to export all of these tables and the data they contain into Parquet files.



In order to do that, I'm going to read each table into a DataFrame and then store that DataFrame as a Parquet file. Many of the PostgreSQL tables contain user-defined types.



The biggest issue is that I can't manually specify the schema for each DataFrame. Will Apache Spark, in this case, be able to automatically infer the PostgreSQL table schemas and store them appropriately in Parquet format, or is this impossible with Apache Spark so that some other technology must be used for this purpose?
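
For the bulk export itself, one common pattern is to list the table names from information_schema through the same JDBC source and then loop a read followed by a Parquet write. A minimal sketch only — the schema filter, the output path, and the SparkSession setup are assumptions, not part of the original setup:

import org.apache.spark.sql.{SaveMode, SparkSession}

object ExportAllTables {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("pg-to-parquet").getOrCreate()

    val baseOpts = Map(
      "url"      -> "jdbc:postgresql:sparktest",   // assumed connection string
      "user"     -> "user",
      "password" -> "password")

    // List the tables of the (assumed) public schema through the JDBC source
    val tables = spark.read.format("jdbc")
      .options(baseOpts + ("dbtable" ->
        "(select table_name from information_schema.tables where table_schema = 'public') as t"))
      .load()
      .collect()
      .map(_.getString(0))

    // Read each table and write it out as Parquet
    tables.foreach { table =>
      spark.read.format("jdbc")
        .options(baseOpts + ("dbtable" -> table))
        .load()
        .write.mode(SaveMode.Overwrite)
        .parquet(s"/tmp/export/${table}.parquet")  // assumed output location
    }
  }
}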



UPDATED



I have created the following PostgreSQL user-defined type, table and records:



create type dimensions as (
  width integer,
  height integer,
  depth integer
);

create table moving_boxes (
  id serial primary key,
  dims dimensions not null
);

insert into moving_boxes (dims) values (row(3,4,5)::dimensions);
insert into moving_boxes (dims) values (row(1,4,2)::dimensions);
insert into moving_boxes (dims) values (row(10,12,777)::dimensions);
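
To see what the JDBC driver itself returns for the composite column, a quick plain-JDBC check can be run outside Spark. This is only a sketch: it reuses the connection details from the Spark code further down, and it assumes the PostgreSQL JDBC driver hands back the composite value in its text form, e.g. "(3,4,5)", since it has no structured mapping for user-defined composite types:

import java.sql.DriverManager

object JdbcCompositeCheck {
  def main(args: Array[String]): Unit = {
    val conn = DriverManager.getConnection("jdbc:postgresql:sparktest", "user", "password")
    try {
      val rs = conn.createStatement().executeQuery("select id, dims from moving_boxes")
      while (rs.next()) {
        // getString returns the composite's literal text form, not a nested structure
        println(s"id=${rs.getInt("id")} dims=${rs.getString("dims")}")
      }
    } finally conn.close()
  }
}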


I implemented the following Spark application:



import org.apache.spark.sql.SaveMode

// Assumes an existing SparkSession `spark`.
// Note: this JDBC read produces a one-partition Dataset.
val opts = Map(
  "url" -> "jdbc:postgresql:sparktest",
  "dbtable" -> "moving_boxes",
  "user" -> "user",
  "password" -> "password")

val df = spark.read
  .format("jdbc")
  .options(opts)
  .load()

// printSchema already prints to stdout; wrapping it in println only prints "()"
df.printSchema()

df.write.mode(SaveMode.Overwrite).format("parquet").save("moving_boxes.parquet")
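
As noted in the comment above, this read comes back as a single partition. If some of the ~1000 tables are large, the standard Spark JDBC partitioning options can split the read across executors. A sketch — the partition column, bounds, and partition count are assumptions, and id only works here because it is a numeric serial key:

val partitionedOpts = opts ++ Map(
  "partitionColumn" -> "id",      // must be a numeric, date or timestamp column
  "lowerBound"      -> "1",       // bounds only control how ranges are split
  "upperBound"      -> "1000000",
  "numPartitions"   -> "8")

val dfParallel = spark.read.format("jdbc").options(partitionedOpts).load()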


This is the df.printSchema output:



root
 |-- id: integer (nullable = true)
 |-- dims: string (nullable = true)


As you can see, Spark infers the dims column as a string rather than as a complex nested type.



This is the log information from ParquetWriteSupport:



18/11/06 10:08:52 INFO ParquetWriteSupport: Initialized Parquet WriteSupport with Catalyst schema:
{
  "type" : "struct",
  "fields" : [ {
    "name" : "id",
    "type" : "integer",
    "nullable" : true,
    "metadata" : { }
  }, {
    "name" : "dims",
    "type" : "string",
    "nullable" : true,
    "metadata" : { }
  } ]
}
and corresponding Parquet message type:
message spark_schema {
  optional int32 id;
  optional binary dims (UTF8);
}


Could you please explain whether the original complex dims type (defined in PostgreSQL) will be lost in the saved Parquet file?
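
For reference: the Parquet message type above records dims as optional binary dims (UTF8), i.e. only the text form of the composite is written, not its structure. One possible workaround is to expand the composite's fields on the PostgreSQL side and rebuild a struct before writing. This is only a sketch, under the assumption that pushing a subquery through the dbtable option is acceptable here; column names and the output path are made up for illustration:

import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.{col, struct}

// Expand the composite into plain integer columns inside the pushed-down query
val expandedOpts = opts + ("dbtable" ->
  """(select id,
    |        (dims).width  as dims_width,
    |        (dims).height as dims_height,
    |        (dims).depth  as dims_depth
    |   from moving_boxes) as mb""".stripMargin)

val flat = spark.read.format("jdbc").options(expandedOpts).load()

// Rebuild a nested struct so the Parquet schema keeps the original shape
val nested = flat.select(
  col("id"),
  struct(col("dims_width").as("width"),
         col("dims_height").as("height"),
         col("dims_depth").as("depth")).as("dims"))

nested.write.mode(SaveMode.Overwrite).parquet("moving_boxes_structured.parquet")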










postgresql apache-spark parquet






edited Nov 6 at 8:28

asked Nov 5 at 19:38

alexanoid












  • Yes, Spark will be able to infer the schema and save it as a Parquet file. Have you tried to do it yet?
    – Joe Widen
    Nov 6 at 3:12










  • @JoeWiden thanks for your answer. I have updated the question with more detailed information. Could you please explain whether the original complex dims type (defined in PostgreSQL) will be lost in the saved Parquet file?
    – alexanoid
    Nov 6 at 8:22










  • I don't believe Spark will be able to infer complex types via a JDBC connection.
    – Joe Widen
    Nov 9 at 14:53

















