Vendor Profile
Shoeisha Co., Ltd.
| Address | 5 Funamachi, Shinjuku-ku, Tokyo 160-0006, JAPAN |
|---|---|
| Representative Name | Kaoru Usui |
| Annual Revenue | Not disclosed |
| No. of Employees | 185 |
| Web Site URL | |
SD item code: 12978569
| Detail | Price & Quantity |
|---|---|
| S1<br>Authors: Jules S. Damji, Brooke Wenig, Tathagata Das, Denny Lee<br>Translators: Ryo Hasegawa, Takaaki Yayoi, Masahiko Kitamura, Shunichiro Takeshita, Naotaro Kotani, Saki Kitaoka, Koichiro Ichimura, Hiroshi Nagasato, Masatsugu Nogami<br>(182280)<br>JAN: 9784798182285 | Wholesale Price: Members Only<br>1 pc/set<br>In Stock |
| Shipping Date |
|---|
| About 1 week |
| Dimensions |
|---|
| Format: B5<br>Number of pages: 464 |
| Specifications |
|---|
| Country of manufacture: Japan<br>Material / component: Book (paper)<br>Year of manufacture: 2024<br>Product tag: None |
Description

In-depth explanation of how Apache Spark works, plus large-scale data processing and ML development for Big Data.

This book is an intermediate-level introduction to Apache Spark, MLflow, and Delta Lake, data analysis frameworks primarily for Big Data. It is aimed at data/AI practitioners and goes beyond simply "trying it out" to explain how these frameworks work and how to use them efficiently.

Beginning with an introduction to Apache Spark, Spark SQL, DataFrames, and Datasets, the book explains how to perform both simple and complex data analysis and how to apply machine learning algorithms, then moves on to practical machine learning with Apache Spark. Throughout this book, you will learn to:

*Use the high-level structured APIs in Python, SQL, Scala, or Java
*Understand Spark operations and the SQL engine
*Inspect, tune, and debug Spark applications using Spark configuration and the Spark UI
*Connect to data sources such as JSON, Parquet, CSV, Avro, ORC, Hive, S3, or Kafka
*Analyze batch and streaming data using Structured Streaming
*Build reliable data pipelines with open-source Delta Lake and Spark
*Develop machine learning pipelines with MLlib, manage models with MLflow, and put them into production
*Work with the various DataFrame flavors related to pandas DataFrame and Spark DataFrame
*[Japanese original content] Make practical use of LLMs and the English SDK for Apache Spark, a new AI-assisted coding style

This book is a Japanese translation of "Learning Spark: Lightning-Fast Data Analytics, 2nd Edition".
| Shipping Method | Estimated Arrival |
|---|---|
| Sea Mail | From Dec. 4, 2025 to Feb. 5, 2026 |
| Air Mail | From Nov. 18, 2025 to Nov. 20, 2025 |
| EMS | From Nov. 17, 2025 to Nov. 20, 2025 |
| Pantos Express | From Nov. 19, 2025 to Nov. 24, 2025 |
| DHL | From Nov. 17, 2025 to Nov. 19, 2025 |
| UPS | From Nov. 17, 2025 to Nov. 19, 2025 |
| FedEx | From Nov. 17, 2025 to Nov. 19, 2025 |
Some trading conditions may be applicable only in Japan.

This product (book) is subject to the Resale Price Maintenance Program, under which the law allows the manufacturer (publisher) to set the sales price. We ask that your company adhere to the resale price we specify; if you do not, we may terminate the transaction. Thank you for your understanding and cooperation.