In this article, we’ll take a closer look at the M code behind the HdInsight.Files function and explore how it works.
Understanding Hadoop Distributed File System (HDFS)
Before we dive into the M code behind the HdInsight.Files function, let’s first discuss what Hadoop Distributed File System (HDFS) is and how it works.
HDFS is a distributed file system that is designed to store and manage large amounts of data across a network of machines. It is a core component of the Apache Hadoop framework, which is widely used for big data processing and analytics.
In HDFS, files are split into large blocks that are distributed across a cluster of machines. Each block is replicated on multiple machines for fault tolerance, so data remains available even if a machine fails. HDFS also includes a NameNode, which maintains the file-system namespace and tracks which machines hold each block of data.
How HdInsight.Files Works
Now that we have a basic understanding of HDFS, let’s explore how the HdInsight.Files function works.
The HdInsight.Files function is an M function in Power Query that lets users connect to and retrieve data from the HDFS-compatible file system of an Azure HDInsight cluster. It works by querying the cluster’s file system for information about the files and folders it contains.
When you use the HdInsight.Files function in Power Query, you’ll need to provide two parameters:
- `account`: The URL of the storage account associated with the HDInsight cluster.
- `containerName`: The name of the container that holds the cluster’s file system.
Once you’ve provided these parameters, the HdInsight.Files function returns a table in Power Query with one row for each file or folder found in that container and its subfolders, containing metadata such as the item’s name, folder path, and modification date, plus a link to its content.
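For example, a minimal query might look like the following sketch. The storage account URL and container name are placeholders rather than real values, and the query assumes your credentials can reach the cluster’s default storage container:
let
    // Hypothetical storage account URL and container name for an HDInsight cluster
    Source = HdInsight.Files("https://mystorageaccount.blob.core.windows.net/", "myclustercontainer")
in
    Source
The result is a table with one row per file, which you can then filter on columns such as Name or Folder Path like any other Power Query table.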
The M Code Behind HdInsight.Files
Now that we understand how the HdInsight.Files function works, let’s take a closer look at the M code behind it.
Here is an example of an M query against an HDFS cluster. Note that it uses the closely related Hadoop.Files function, which takes the URL of an HDFS NameNode directly:
let
    Source = Hadoop.Files("hdfs://namenode:8020/", [HierarchicalNavigation = true]),
    #"Filtered Rows" = Table.SelectRows(Source, each [Folder Path] = "/my/folder/"),
    #"Removed Other Columns" = Table.SelectColumns(#"Filtered Rows", {"Name", "Content"})
in
    #"Removed Other Columns"
This code uses the Hadoop.Files function, which is a built-in function in Power Query that allows users to connect to Hadoop clusters. The function takes two parameters:
- `url`: The URL of the HDFS NameNode.
- `options`: An optional record of additional configuration settings for the connection.
In the example above, the `options` parameter includes the `HierarchicalNavigation` option, which enables hierarchical navigation in the HDFS file system.
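As a rough sketch (assuming Hadoop.Files behaves like Power Query’s other file-system connectors, with columns such as Extension and Folder Path), omitting the options record instead produces a flat listing of every file, which is often easier to filter:
let
    // Hypothetical NameNode URL; no options record, so the result is a flat file listing
    Flat = Hadoop.Files("hdfs://namenode:8020/"),
    // Keep only CSV files, wherever they sit in the file system
    OnlyCsv = Table.SelectRows(Flat, each [Extension] = ".csv")
in
    OnlyCsv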
Returning to the main example: the code filters the results to include only files in the `/my/folder/` directory using the `Table.SelectRows` function, and then keeps just the `Name` and `Content` columns with `Table.SelectColumns`.
That two-column table is returned as the query’s final output.
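Because the `Content` column holds each file’s raw binary data, a typical next step is to drill into a single file and parse it. The folder path and file name below are hypothetical, and the sketch assumes the file is a CSV:
let
    Source = Hadoop.Files("hdfs://namenode:8020/"),
    // Drill into one file's binary content (hypothetical folder and file name)
    FileContent = Source{[#"Folder Path" = "/my/folder/", Name = "sales.csv"]}[Content],
    // Parse the binary as CSV and promote the first row to column headers
    Parsed = Table.PromoteHeaders(Csv.Document(FileContent))
in
    Parsed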
The HdInsight.Files function in Power Query is a powerful tool for connecting to and retrieving data from Hadoop Distributed File System (HDFS) clusters. By understanding the M code behind the function, you can see how it works and use it more effectively in your own data analysis and transformation workflows.