How do I list all files of a directory?
How can I list all files of a directory in Python and add them to a list?
python directory
23
Related to How to get a list of subdirectories
– rds
Jan 5 '12 at 9:32
67
os.listdir(path) returns a list of strings of filenames and subdirectories from the given path, or current if omitted. (Putting this here for people from Google to see because the currently top answer doesn't answer the question.)
– Apollys
May 2 '17 at 15:54
3
All files only? Do you want to list subdirectories?
– Aleksandar Jovanovic
Jul 5 '17 at 11:11
This works nicely (top answer below):
from os import listdir
from os.path import isfile, join
files = [f for f in listdir(mypath) if isfile(join(mypath, f))]
Note: you need to assign a string to the directory path where the files are stored (e.g. mypath = "users/name/desktop/").
– Arshin
Apr 2 '18 at 18:12
Do you mean files as: Ordinary files that aren't sub-directories or links, or all files, including sub-directories and links?
– Mulliganaceous
May 3 '18 at 7:53
edited Oct 22 '17 at 2:35 by Ioannis Filippidis
asked Jul 8 '10 at 19:31 by duhhunjonn
23 Answers
os.listdir() will get you everything that's in a directory - files and directories.
If you want just files, you could either filter this down using os.path:
from os import listdir
from os.path import isfile, join
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
or you could use os.walk() which will yield two lists for each directory it visits - splitting into files and dirs for you. If you only want the top directory, you can just break the first time it yields:
from os import walk
f = []
for (dirpath, dirnames, filenames) in walk(mypath):
f.extend(filenames)
break
And lastly, as that example shows, to add one list to another you can either use .extend() or concatenate with +:
>>> q = [1, 2, 3]
>>> w = [4, 5, 6]
>>> q = q + w
>>> q
[1, 2, 3, 4, 5, 6]
Personally, I prefer .extend()
5
Doesn't seem to work on Windows with unicode file names for some reason.
– cdiggins
Jun 14 '13 at 16:21
49
A bit simpler: (_, _, filenames) = walk(mypath).next()
(if you are confident that the walk will return at least one value, which it should.)
– misterbee
Jul 14 '13 at 20:56
6
Slight modification to store full paths: for (dirpath, dirnames, filenames) in os.walk(mypath): checksum_files.extend(os.path.join(dirpath, filename) for filename in filenames) break
– okigan
Sep 23 '13 at 21:31
108
f.extend(filenames) is not actually equivalent to f = f + filenames. extend will modify f in-place, whereas adding creates a new list in a new memory location. This means extend is generally more efficient than +, but it can sometimes lead to confusion if multiple objects hold references to the list. Lastly, it's worth noting that f += filenames is equivalent to f.extend(filenames), not f = f + filenames.
– Benjamin Hodgson♦
Oct 22 '13 at 8:55
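A minimal sketch of that difference, with made-up list contents:
f = ['a.txt']
alias = f                 # a second reference to the same list object
f.extend(['b.txt'])       # in-place: alias sees the change
print(alias)              # ['a.txt', 'b.txt']
f = f + ['c.txt']         # rebinds f to a brand-new list
print(alias)              # still ['a.txt', 'b.txt']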
19
@misterbee, your solution is the best, just one small improvement: _, _, filenames = next(walk(mypath), (None, None, []))
– bgusach
Mar 5 '15 at 7:36
I prefer using the glob module, as it does pattern matching and expansion.
import glob
print(glob.glob("/home/adam/*.txt"))
It will return a list with the queried files:
['/home/adam/file1.txt', '/home/adam/file2.txt', .... ]
11
that's a shortcut for listdir+fnmatch docs.python.org/library/fnmatch.html#fnmatch.fnmatch
– Stefano
Jul 1 '11 at 13:03
17
to clarify, this does not return the "full path"; it simply returns the expansion of the glob, whatever it may be. E.g., given /home/user/foo/bar/hello.txt, then, if running in directory foo, glob("bar/*.txt") will return bar/hello.txt. There are cases when you do in fact want the full (i.e., absolute) path; for those cases, see stackoverflow.com/questions/51520/…
– michael
Aug 16 '16 at 12:07
Related: find files recursively with glob: stackoverflow.com/a/2186565/4561887
– Gabriel Staples
Sep 3 '18 at 3:25
import os
os.listdir("somedirectory")
will return a list of all files and directories in "somedirectory".
9
This returns the relative path of the files, as compared with the full path returned by glob.glob
– xji
May 17 '16 at 14:32
13
@JIXiang: os.listdir() always returns mere filenames (not relative paths). What glob.glob() returns is driven by the path format of the input pattern.
– mklement0
Nov 30 '16 at 18:14
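A small sketch of that difference (the directory name is made up):
import glob, os
print(os.listdir('some_dir'))        # bare names, e.g. ['a.txt', 'b.txt']
print(glob.glob('some_dir/*.txt'))   # echoes the pattern's format, e.g. ['some_dir/a.txt', 'some_dir/b.txt']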
os.listdir() always lists the dirs and files inside the provided location. Is there any way to list only directories, not files?
– RonyA
May 22 '18 at 15:44
Get a list of files with Python 2 and 3
I have also made a short video here: Python: how to get a list of files in a directory
os.listdir()
or: how to get all the files (and directories) in the current directory (Python 3)
The simplest way to list the files in the current directory in Python 3 is this. It's really simple: use the os module and the listdir() function, and you'll have the files in that directory (and also any folders that are in the directory, but you will not get the files in subdirectories; for that you can use walk - I will talk about it later).
>>> import os
>>> arr = os.listdir()
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
Using glob
I found glob easier to select files of the same type or with something in common. Look at the following example:
import glob
txtfiles = []
for file in glob.glob("*.txt"):
txtfiles.append(file)
Using list comprehension
import glob
mylist = [f for f in glob.glob("*.txt")]
Getting the full path name with os.path.abspath
As you noticed, you don't have the full path of the file in the code above. If you need to have the absolute path, you can use another function of the os.path module called _getfullpathname, putting the file that you get from os.listdir() as an argument. There are other ways to have the full path, as we will check later (I replaced, as suggested by mexmex, _getfullpathname with abspath).
>>> import os
>>> files_path = [os.path.abspath(x) for x in os.listdir()]
>>> files_path
['F:\\documenti\\applications.txt', 'F:\\documenti\\collections.txt']
Get the full path name of a type of file into all subdirectories with walk
I find this very useful to find stuff in many directories, and it helped me finding a file about which I didn't remember the name:
import os
# Getting the current work directory (cwd)
thisdir = os.getcwd()
# r=root, d=directories, f = files
for r, d, f in os.walk(thisdir):
for file in f:
if ".docx" in file:
print(os.path.join(r, file))
os.listdir(): get files in the current directory (Python 2)
In Python 2, if you want the list of the files in the current directory, you have to give the argument as '.' or os.getcwd() in the os.listdir method.
>>> import os
>>> arr = os.listdir('.')
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
To go up in the directory tree
>>> # Method 1
>>> x = os.listdir('..')
>>> # Method 2
>>> x = os.listdir('/')
Get files: os.listdir() in a particular directory (Python 2 and 3)
>>> import os
>>> arr = os.listdir('F:\python')
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
Get files of a particular subdirectory with os.listdir()
import os
x = os.listdir("./content")
os.walk('.') - current directory
>>> import os
>>> arr = next(os.walk('.'))[2]
>>> arr
['5bs_Turismo1.pdf', '5bs_Turismo1.pptx', 'esperienza.txt']
glob module - all files
import glob
print(glob.glob("*"))
out:['content', 'start.py']
next(os.walk('.')) and os.path.join('dir','file')
>>> import os
>>> arr = []
>>> r, d, f = next(os.walk("F:\\_python"))
>>> for file in f:
...     arr.append(os.path.join(r, file))
...
>>> for f in arr:
...     print(f)
>output
F:\_python\dict_class.py
F:\_python\programmi.txt
next(os.walk('F:\\_python')) - get the full path - list comprehension
>>> [os.path.join(r, file) for r, d, f in [next(os.walk("F:\\_python"))] for file in f]
['F:\\_python\\dict_class.py', 'F:\\_python\\programmi.txt']
os.walk - get full path - all files in sub dirs
x = [os.path.join(r,file) for r,d,f in os.walk("F:\_python") for file in f]
>>>x
['F:\_python\dict.py', 'F:\_python\progr.txt', 'F:\_python\readl.py']
os.listdir() - get only txt files
>>> arr_txt = [x for x in os.listdir() if x.endswith(".txt")]
>>> print(arr_txt)
['work.txt', '3ebooks.txt']
glob - get only txt files
>>> import glob
>>> x = glob.glob("*.txt")
>>> x
['ale.txt', 'alunni2015.txt', 'assenze.text.txt', 'text2.txt', 'untitled.txt']
Using glob to get the full path of the files
If I should need the absolute path of the files:
>>> from path import path
>>> from glob import glob
>>> x = [path(f).abspath() for f in glob("F:*.txt")]
>>> for f in x:
... print(f)
...
F:acquistionline.txt
F:acquisti_2018.txt
F:bootstrap_jquery_ecc.txt
Other use of glob
If I want all the files in the directory:
>>> x = glob.glob("*")
Using os.path.isfile to avoid directories in the list
import os.path
listOfFiles = [f for f in os.listdir() if os.path.isfile(f)]
print(listOfFiles)
> output
['a simple game.py', 'data.txt', 'decorator.py']
Using pathlib from (Python 3.4)
import pathlib
>>> flist = []
>>> for p in pathlib.Path('.').iterdir():
... if p.is_file():
... print(p)
... flist.append(p)
...
error.PNG
exemaker.bat
guiprova.mp3
setup.py
speak_gui2.py
thumb.PNG
If you want to use list comprehension
>>> flist = [p for p in pathlib.Path('.').iterdir() if p.is_file()]
You can also use just pathlib.Path() instead of pathlib.Path(".")
Use glob method in pathlib.Path()
import pathlib
py = pathlib.Path().glob("*.py")
for file in py:
print(file)
output:
stack_overflow_list.py
stack_overflow_list_tkinter.py
Get all and only files with os.walk
import os
x = [i[2] for i in os.walk('.')]
y = []
for t in x:
for f in t:
y.append(f)
>>> y
['append_to_list.py', 'data.txt', 'data1.txt', 'data2.txt', 'data_180617', 'os_walk.py', 'READ2.py', 'read_data.py', 'somma_defaltdic.py', 'substitute_words.py', 'sum_data.py', 'data.txt', 'data1.txt', 'data_180617']
Get only files with next and walk in a directory
>>> import os
>>> x = next(os.walk('F://python'))[2]
>>> x
['calculator.bat','calculator.py']
Get only directories with next and walk in a directory
>>> import os
>>> next(os.walk('F://python'))[1] # for the current dir use ('.')
['python3','others']
Get all the subdir names with walk
>>> for r,d,f in os.walk("F:_python"):
... for dirs in d:
... print(dirs)
...
.vscode
pyexcel
pyschool.py
subtitles
_metaprogramming
.ipynb_checkpoints
os.scandir() from Python 3.5 on
>>> import os
>>> x = [f.name for f in os.scandir() if f.is_file()]
>>> x
['calculator.bat','calculator.py']
# Another example with scandir (a little variation from docs.python.org)
# This one is more efficient than os.listdir.
# In this case, it shows the files only in the current directory
# where the script is executed.
>>> import os
>>> with os.scandir() as i:
... for entry in i:
... if entry.is_file():
... print(entry.name)
...
ebookmaker.py
error.PNG
exemaker.bat
guiprova.mp3
setup.py
speakgui4.py
speak_gui2.py
speak_gui3.py
thumb.PNG
>>>
Ex. 1: How many files are there in the subdirectories?
In this example, we look for the number of files that are included in the whole directory and its subdirectories.
import os
def count(dir, counter=0):
"returns number of files in dir and subdirs"
for pack in os.walk(dir):
for f in pack[2]:
counter += 1
return dir + " : " + str(counter) + "files"
print(count("F:\python"))
> output
>'F:\python' : 12057 files'
Ex.2: How to copy all files from a directory to another?
A script to tidy up your computer by finding all files of a type (default: pptx) and copying them into a new folder.
import os
import shutil
from path import path
destination = "F:\file_copied"
# os.makedirs(destination)
def copyfile(dir, filetype='pptx', counter=0):
"Searches for pptx (or other - pptx is the default) files and copies them"
for pack in os.walk(dir):
for f in pack[2]:
if f.endswith(filetype):
fullpath = pack[0] + "\" + f
print(fullpath)
shutil.copy(fullpath, destination)
counter += 1
if counter > 0:
print("------------------------")
print("t==> Found in: `" + dir + "` : " + str(counter) + " filesn")
for dir in os.listdir():
"searches for folders that starts with `_`"
if dir[0] == '_':
# copyfile(dir, filetype='pdf')
copyfile(dir, filetype='txt')
> Output
_compiti18\Compito Contabilità 1\conti.txt
_compiti18\Compito Contabilità 1\modula4.txt
_compiti18\Compito Contabilità 1\moduloa4.txt
------------------------
==> Found in: `_compiti18` : 3 files
Ex. 3: How to get all the files in a txt file
In case you want to create a txt file with all the file names:
import os
mylist = ""
with open("filelist.txt", "w", encoding="utf-8") as file:
for eachfile in os.listdir():
mylist += eachfile + "n"
file.write(mylist)
Example: txt with all the files of an hard drive
"""We are going to save a txt file with all the files in your directory.
We will use the function walk()
"""
import os
# see all the methods of os
# print(*dir(os), sep=", ")
listafile = []
percorso = []
with open("lista_file.txt", "w", encoding='utf-8') as testo:
for root, dirs, files in os.walk("D:\"):
for file in files:
listafile.append(file)
percorso.append(root + "\" + file)
testo.write(file + "n")
listafile.sort()
print("N. of files", len(listafile))
with open("lista_file_ordinata.txt", "w", encoding="utf-8") as testo_ordinato:
for file in listafile:
testo_ordinato.write(file + "n")
with open("percorso.txt", "w", encoding="utf-8") as file_percorso:
for file in percorso:
file_percorso.write(file + "n")
os.system("lista_file.txt")
os.system("lista_file_ordinata.txt")
os.system("percorso.txt")
All the file of C:\ in one text file
This is a shorter version of the previous code. Change the folder where it starts finding the files if you need to start from another position. This code generated a text file of about 50 MB on my computer, with a little less than 500,000 lines of files with their complete path.
import os
with open("file.txt", "w", encoding="utf-8") as filewrite:
for r, d, f in os.walk("C:\"):
for file in f:
filewrite.write(f"{r + file}n")
A function to search for a certain type of file
import os
def searchfiles(extension='.ttf'):
"Create a txt file with all the file of a type"
with open("file.txt", "w", encoding="utf-8") as filewrite:
for r, d, f in os.walk("C:\"):
for file in f:
if file.endswith(extension):
filewrite.write(f"{r + file}n")
# looking for ttf file (fonts)
searchfiles('ttf')
2
You should include the path argument to listdir.
– Alejandro Sazo
Jan 3 '17 at 15:47
2
It's definitely encouraged to include some context/explanation for code as that makes the answer more useful.
– EJoshuaS
Jan 3 '17 at 16:07
2
I agree, but I also did not notice that Python 2 requires the argument whilst for Python 3 it is optional. If you improve the answer for both Python versions it would be great :)
– Alejandro Sazo
Jan 3 '17 at 16:44
1
Ok, I went into Python 2, found the differences, and edited the post.
– Giovanni Gianni
Jan 18 '17 at 21:16
1
There is no reason to do [f for f in os.listdir()]; os.listdir() already returns a list, so that's just needlessly copying the original list before throwing it away.
– ShadowRanger
May 6 '17 at 0:08
A one-line solution to get only list of files (no subdirectories):
filenames = next(os.walk(path))[2]
or absolute pathnames:
paths = [os.path.join(path,fn) for fn in next(os.walk(path))[2]]
6
Only a one-liner if you've already done import os. Seems less concise than glob() to me.
– ArtOfWarfare
Nov 28 '14 at 20:22
3
problem with glob is that a folder called 'something.something' would be returned by glob('/home/adam/*.*')
– Remi
Dec 1 '14 at 9:08
2
On OS X, there's something called a bundle. It's a directory which should generally be treated as a file (like a .tar). Would you want those treated as a file or a directory? Using glob() would treat it as a file. Your method would treat it as a directory.
– ArtOfWarfare
Dec 1 '14 at 19:44
Getting Full File Paths From a Directory and All Its Subdirectories
import os
def get_filepaths(directory):
"""
This function will generate the file names in a directory
tree by walking the tree either top-down or bottom-up. For each
directory in the tree rooted at directory top (including top itself),
it yields a 3-tuple (dirpath, dirnames, filenames).
"""
file_paths = []  # List which will store all of the full filepaths.
# Walk the tree.
for root, directories, files in os.walk(directory):
for filename in files:
# Join the two strings in order to form the full filepath.
filepath = os.path.join(root, filename)
file_paths.append(filepath) # Add it to the list.
return file_paths # Self-explanatory.
# Run the above function and store its results in a variable.
full_file_paths = get_filepaths("/Users/johnny/Desktop/TEST")
- The path I provided in the above function contained 3 files - two of them in the root directory, and another in a subfolder called "SUBFOLDER". You can now do things like:
print full_file_paths
which will print the list:
['/Users/johnny/Desktop/TEST/file1.txt', '/Users/johnny/Desktop/TEST/file2.txt', '/Users/johnny/Desktop/TEST/SUBFOLDER/file3.dat']
If you'd like, you can open and read the contents, or focus only on files with the extension ".dat" like in the code below:
for f in full_file_paths:
if f.endswith(".dat"):
print f
/Users/johnny/Desktop/TEST/SUBFOLDER/file3.dat
Since version 3.4 there are builtin iterators for this which are a lot more efficient than os.listdir():
pathlib: New in version 3.4.
>>> import pathlib
>>> [p for p in pathlib.Path('.').iterdir() if p.is_file()]
According to PEP 428, the aim of the pathlib
library is to provide a simple hierarchy of classes to handle filesystem paths and the common operations users do over them.
os.scandir(): New in version 3.5.
>>> import os
>>> [entry for entry in os.scandir('.') if entry.is_file()]
Note that os.walk()
uses os.scandir()
instead of os.listdir()
from version 3.5, and its speed got increased by 2-20 times according to PEP 471.
Let me also recommend reading ShadowRanger's comment below.
1
Thanks! I think it is the only solution not returning a list directly. Could use p.name instead of the first p alternatively if preferred.
– JeromeJ
Jun 22 '15 at 12:36
1
Welcome! I would prefer generating pathlib.Path() instances since they have many useful methods I would not want to waste. You can also call str(p) on them for path names.
– SzieberthAdam
Jul 13 '15 at 14:56
4
Note: The os.scandir solution is going to be more efficient than os.listdir with an os.path.is_file check or the like, even if you need a list (so you don't benefit from lazy iteration), because os.scandir uses OS provided APIs that give you the is_file information for free as it iterates, no per-file round trip to the disk to stat them at all (on Windows, the DirEntrys get you complete stat info for free, on *NIX systems it needs to stat for info beyond is_file, is_dir, etc., but DirEntry caches on first stat for convenience).
– ShadowRanger
Nov 20 '15 at 22:38
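A small sketch of the single-pass filtering this enables (Python 3.6+ for the with-form; the directory '.' is just illustrative):
import os
files, dirs = [], []
with os.scandir('.') as it:
    for entry in it:
        # is_file()/is_dir() usually need no extra stat() call per entry
        (files if entry.is_file() else dirs).append(entry.name)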
I've found this to be the most helpful solution (using pathlib). I can easily get specific extension types and absolute paths. Thank you!
– HEADLESS_0NE
Mar 17 '16 at 15:33
1
You can also use entry.name to get only the file name, or entry.path to get its full path. No more os.path.join() all over the place.
– user136036
Mar 28 '17 at 20:26
I really liked adamk's answer, suggesting that you use glob()
, from the module of the same name. This allows you to have pattern matching with *
s.
But as other people pointed out in the comments, glob()
can get tripped up over inconsistent slash directions. To help with that, I suggest you use the join()
and expanduser()
functions in the os.path
module, and perhaps the getcwd()
function in the os
module, as well.
As examples:
from glob import glob
# Return everything under C:\Users\admin that contains a folder called wlp.
glob('C:\Users\admin\*\wlp')
The above is terrible - the path has been hardcoded and will only ever work on Windows between the drive name and the \s being hardcoded into the path.
from glob import glob
from os.path import join
# Return everything under Users, admin, that contains a folder called wlp.
glob(join('Users', 'admin', '*', 'wlp'))
The above works better, but it relies on the folder name Users
which is often found on Windows and not so often found on other OSs. It also relies on the user having a specific name, admin
.
from glob import glob
from os.path import expanduser, join
# Return everything under the user directory that contains a folder called wlp.
glob(join(expanduser('~'), '*', 'wlp'))
This works perfectly across all platforms.
Another great example that works perfectly across platforms and does something a bit different:
from glob import glob
from os import getcwd
from os.path import join
# Return everything under the current directory that contains a folder called wlp.
glob(join(getcwd(), '*', 'wlp'))
Hope these examples help you see the power of a few of the functions you can find in the standard Python library modules.
4
Extra glob fun: starting in Python 3.5, ** works as long as you set recursive = True. See the docs here: docs.python.org/3.5/library/glob.html#glob.glob
– ArtOfWarfare
Jan 26 '15 at 3:24
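A short sketch of that recursive form (the folder and pattern are illustrative):
from glob import glob
# '**' matches any number of nested directories when recursive=True (Python 3.5+)
print(glob('src/**/*.py', recursive=True))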
Preliminary notes
- Although there's a clear differentiation between file and directory terms in the question text, some may argue that directories are actually special files
- The statement: "all files of a directory" can be interpreted in two ways:
- All direct (or level 1) descendants only
- All descendants in the whole directory tree (including the ones in sub-directories)
When the question was asked, I imagine that Python 2 was the LTS version; however, the code samples will be run by Python 3(.5) (I'll keep them as Python 2 compliant as possible; also, any code belonging to Python that I'm going to post is from v3.5.4 - unless otherwise specified). That has consequences related to another keyword in the question: "add them into a list":
- In pre Python 2.2 versions, sequences (iterables) were mostly represented by lists (tuples, sets, ...)
- In Python 2.2, the concept of generator ([Python.Wiki]: Generators - courtesy of [Python 3]: The yield statement) was introduced. As time passed, generator counterparts started to appear for functions that returned/worked with lists
- In Python 3, generator is the default behavior
- Not sure if returning a list is still mandatory (or a generator would do as well), but passing a generator to the list constructor will create a list out of it (and also consume it). The example below illustrates the differences on [Python 3]: map(function, iterable, ...)
>>> import sys
>>> sys.version
'2.7.10 (default, Mar 8 2016, 15:02:46) [MSC v.1600 64 bit (AMD64)]'
>>> m = map(lambda x: x, [1, 2, 3]) # Just a dummy lambda function
>>> m, type(m)
([1, 2, 3], <type 'list'>)
>>> len(m)
3
>>> import sys
>>> sys.version
'3.5.4 (v3.5.4:3f56838, Aug 8 2017, 02:17:05) [MSC v.1900 64 bit (AMD64)]'
>>> m = map(lambda x: x, [1, 2, 3])
>>> m, type(m)
(<map object at 0x000001B4257342B0>, <class 'map'>)
>>> len(m)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'map' has no len()
>>> lm0 = list(m) # Build a list from the generator
>>> lm0, type(lm0)
([1, 2, 3], <class 'list'>)
>>>
>>> lm1 = list(m) # Build a list from the same generator
>>> lm1, type(lm1) # Empty list now - generator already consumed
([], <class 'list'>)
The examples will be based on a directory called root_dir with the following structure (this example is for Win, but I'm using the same tree on Lnx as well):
E:\Work\Dev\StackOverflow\q003207219>tree /f "root_dir"
Folder PATH listing for volume Work
Volume serial number is 00000029 3655:6FED
E:\WORK\DEV\STACKOVERFLOW\Q003207219\ROOT_DIR
¦ file0
¦ file1
¦
+---dir0
¦ +---dir00
¦ ¦ ¦ file000
¦ ¦ ¦
¦ ¦ +---dir000
¦ ¦ file0000
¦ ¦
¦ +---dir01
¦ ¦ file010
¦ ¦ file011
¦ ¦
¦ +---dir02
¦ +---dir020
¦ +---dir0200
+---dir1
¦ file10
¦ file11
¦ file12
¦
+---dir2
¦ ¦ file20
¦ ¦
¦ +---dir20
¦ file200
¦
+---dir3
Solutions
Programmatic approaches:
[Python 3]: os.listdir(path='.')
Return a list containing the names of the entries in the directory given by path. The list is in arbitrary order, and does not include the special entries
'.'
and'..'
...
>>> import os
>>> root_dir = "root_dir" # Path relative to current dir (os.getcwd())
>>>
>>> os.listdir(root_dir) # List all the items in root_dir
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [item for item in os.listdir(root_dir) if os.path.isfile(os.path.join(root_dir, item))] # Filter items and only keep files (strip out directories)
['file0', 'file1']
A more elaborate example (code_os_listdir.py):
import os
from pprint import pformat
def _get_dir_content(path, include_folders, recursive):
entries = os.listdir(path)
for entry in entries:
entry_with_path = os.path.join(path, entry)
if os.path.isdir(entry_with_path):
if include_folders:
yield entry_with_path
if recursive:
for sub_entry in _get_dir_content(entry_with_path, include_folders, recursive):
yield sub_entry
else:
yield entry_with_path
def get_dir_content(path, include_folders=True, recursive=True, prepend_folder_name=True):
path_len = len(path) + len(os.path.sep)
for item in _get_dir_content(path, include_folders, recursive):
yield item if prepend_folder_name else item[path_len:]
def _get_dir_content_old(path, include_folders, recursive):
entries = os.listdir(path)
ret = list()
for entry in entries:
entry_with_path = os.path.join(path, entry)
if os.path.isdir(entry_with_path):
if include_folders:
ret.append(entry_with_path)
if recursive:
ret.extend(_get_dir_content_old(entry_with_path, include_folders, recursive))
else:
ret.append(entry_with_path)
return ret
def get_dir_content_old(path, include_folders=True, recursive=True, prepend_folder_name=True):
path_len = len(path) + len(os.path.sep)
return [item if prepend_folder_name else item[path_len:] for item in _get_dir_content_old(path, include_folders, recursive)]
def main():
root_dir = "root_dir"
ret0 = get_dir_content(root_dir, include_folders=True, recursive=True, prepend_folder_name=True)
lret0 = list(ret0)
print(ret0, len(lret0), pformat(lret0))
ret1 = get_dir_content_old(root_dir, include_folders=False, recursive=True, prepend_folder_name=False)
print(len(ret1), pformat(ret1))
if __name__ == "__main__":
main()
Notes:
- There are two implementations:
- One that uses generators (of course here it seems useless, since I immediately convert the result to a list)
- The classic one (function names ending in _old)
- Recursion is used (to get into subdirectories)
- For each implementation there are two functions:
- One that starts with an underscore (_): "private" (should not be called directly) - that does all the work
- The public one (wrapper over previous): it just strips off the initial path (if required) from the returned entries. It's an ugly implementation, but it's the only idea that I could come up with at this point
- In terms of performance, generators are generally a little bit faster (considering both creation and iteration times), but I didn't test them in recursive functions, and also I am iterating inside the function over inner generators - I don't know how performance friendly that is
- Play with the arguments to get different results
Output:
(py35x64_test) E:\Work\Dev\StackOverflow\q003207219>"e:\Work\Dev\VEnvs\py35x64_test\Scripts\python.exe" "code_os_listdir.py"
<generator object get_dir_content at 0x000001BDDBB3DF10> 22 ['root_dir\dir0',
'root_dir\dir0\dir00',
'root_dir\dir0\dir00\dir000',
'root_dir\dir0\dir00\dir000\file0000',
'root_dir\dir0\dir00\file000',
'root_dir\dir0\dir01',
'root_dir\dir0\dir01\file010',
'root_dir\dir0\dir01\file011',
'root_dir\dir0\dir02',
'root_dir\dir0\dir02\dir020',
'root_dir\dir0\dir02\dir020\dir0200',
'root_dir\dir1',
'root_dir\dir1\file10',
'root_dir\dir1\file11',
'root_dir\dir1\file12',
'root_dir\dir2',
'root_dir\dir2\dir20',
'root_dir\dir2\dir20\file200',
'root_dir\dir2\file20',
'root_dir\dir3',
'root_dir\file0',
'root_dir\file1']
11 ['dir0\dir00\dir000\file0000',
'dir0\dir00\file000',
'dir0\dir01\file010',
'dir0\dir01\file011',
'dir1\file10',
'dir1\file11',
'dir1\file12',
'dir2\dir20\file200',
'dir2\file20',
'file0',
'file1']
[Python 3]: os.scandir(path='.') (Python 3.5+, backport: [PyPI]: scandir)
Return an iterator of os.DirEntry objects corresponding to the entries in the directory given by path. The entries are yielded in arbitrary order, and the special entries
'.'
and'..'
are not included.
Using scandir() instead of listdir() can significantly increase the performance of code that also needs file type or file attribute information, because os.DirEntry objects expose this information if the operating system provides it when scanning a directory. All os.DirEntry methods may perform a system call, but is_dir() and is_file() usually only require a system call for symbolic links; os.DirEntry.stat() always requires a system call on Unix but only requires one for symbolic links on Windows.
>>> import os
>>> root_dir = os.path.join(".", "root_dir") # Explicitly prepending current directory
>>> root_dir
'.\root_dir'
>>>
>>> scandir_iterator = os.scandir(root_dir)
>>> scandir_iterator
<nt.ScandirIterator object at 0x00000268CF4BC140>
>>> [item.path for item in scandir_iterator]
['.\root_dir\dir0', '.\root_dir\dir1', '.\root_dir\dir2', '.\root_dir\dir3', '.\root_dir\file0', '.\root_dir\file1']
>>>
>>> [item.path for item in scandir_iterator] # Will yield an empty list as it was consumed by previous iteration (automatically performed by the list comprehension)
>>>
>>> scandir_iterator = os.scandir(root_dir) # Reinitialize the generator
>>> for item in scandir_iterator :
... if os.path.isfile(item.path):
... print(item.name)
...
file0
file1
Notes:
- It's similar to os.listdir
- But it's also more flexible (and offers more functionality), more Pythonic (and in some cases, faster)
[Python 3]: os.walk(top, topdown=True, onerror=None, followlinks=False)
Generate the file names in a directory tree by walking the tree either top-down or bottom-up. For each directory in the tree rooted at directory top (including top itself), it yields a 3-tuple (
dirpath
,dirnames
,filenames
).
>>> import os
>>> root_dir = os.path.join(os.getcwd(), "root_dir") # Specify the full path
>>> root_dir
'E:\Work\Dev\StackOverflow\q003207219\root_dir'
>>>
>>> walk_generator = os.walk(root_dir)
>>> root_dir_entry = next(walk_generator) # First entry corresponds to the root dir (passed as an argument)
>>> root_dir_entry
('E:\Work\Dev\StackOverflow\q003207219\root_dir', ['dir0', 'dir1', 'dir2', 'dir3'], ['file0', 'file1'])
>>>
>>> root_dir_entry[1] + root_dir_entry[2] # Display dirs and files (direct descendants) in a single list
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [os.path.join(root_dir_entry[0], item) for item in root_dir_entry[1] + root_dir_entry[2]] # Display all the entries in the previous list by their full path
['E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir1', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir3', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\file0', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\file1']
>>>
>>> for entry in walk_generator: # Display the rest of the elements (corresponding to every subdir)
... print(entry)
...
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0', ['dir00', 'dir01', 'dir02'], [])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir00', ['dir000'], ['file000'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir00\dir000', [], ['file0000'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir01', [], ['file010', 'file011'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02', ['dir020'], [])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02\dir020', ['dir0200'], [])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02\dir020\dir0200', [], [])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir1', [], ['file10', 'file11', 'file12'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2', ['dir20'], ['file20'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2\dir20', [], ['file200'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir3', [], [])
Notes:
- Under the hood, it uses os.scandir (os.listdir on older versions)
- It does the heavy lifting by recursing into subfolders
[Python 3]: glob.glob(pathname, *, recursive=False) ([Python 3]: glob.iglob(pathname, *, recursive=False))
Return a possibly-empty list of path names that match pathname, which must be a string containing a path specification. pathname can be either absolute (like
/usr/src/Python-1.5/Makefile
) or relative (like../../Tools/*/*.gif
), and can contain shell-style wildcards. Broken symlinks are included in the results (as in the shell).
...
Changed in version 3.5: Support for recursive globs using “**
”.
>>> import glob, os
>>> wildcard_pattern = "*"
>>> root_dir = os.path.join("root_dir", wildcard_pattern) # Match every file/dir name
>>> root_dir
'root_dir\*'
>>>
>>> glob_list = glob.glob(root_dir)
>>> glob_list
['root_dir\dir0', 'root_dir\dir1', 'root_dir\dir2', 'root_dir\dir3', 'root_dir\file0', 'root_dir\file1']
>>>
>>> [item.replace("root_dir" + os.path.sep, "") for item in glob_list] # Strip the dir name and the path separator from begining
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> for entry in glob.iglob(root_dir + "*", recursive=True):
... print(entry)
...
root_dir
root_dir\dir0
root_dir\dir0\dir00
root_dir\dir0\dir00\dir000
root_dir\dir0\dir00\dir000\file0000
root_dir\dir0\dir00\file000
root_dir\dir0\dir01
root_dir\dir0\dir01\file010
root_dir\dir0\dir01\file011
root_dir\dir0\dir02
root_dir\dir0\dir02\dir020
root_dir\dir0\dir02\dir020\dir0200
root_dir\dir1
root_dir\dir1\file10
root_dir\dir1\file11
root_dir\dir1\file12
root_dir\dir2
root_dir\dir2\dir20
root_dir\dir2\dir20\file200
root_dir\dir2\file20
root_dir\dir3
root_dir\file0
root_dir\file1
Notes:
- Uses os.listdir
- For large trees (especially if recursive is on), iglob is preferred
- Allows advanced filtering based on name (due to the wildcard)
[Python 3]: class pathlib.Path(*pathsegments) (Python 3.4+, backport: [PyPI]: pathlib2)
>>> import pathlib
>>> root_dir = "root_dir"
>>> root_dir_instance = pathlib.Path(root_dir)
>>> root_dir_instance
WindowsPath('root_dir')
>>> root_dir_instance.name
'root_dir'
>>> root_dir_instance.is_dir()
True
>>>
>>> [item.name for item in root_dir_instance.glob("*")] # Wildcard searching for all direct descendants
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [os.path.join(item.parent.name, item.name) for item in root_dir_instance.glob("*") if not item.is_dir()] # Display paths (including parent) for files only
['root_dir\file0', 'root_dir\file1']
Notes:
- This is one way of achieving our goal
- It's the OOP style of handling paths
- Offers lots of functionalities
[Python 2]: dircache.listdir(path) (Python 2 only)
- But, according to [GitHub]: python/cpython - (2.7) cpython/Lib/dircache.py, it's just a (thin) wrapper over os.listdir with caching
def listdir(path):
"""List directory contents, using cache."""
try:
cached_mtime, list = cache[path]
del cache[path]
except KeyError:
cached_mtime, list = -1, []
mtime = os.stat(path).st_mtime
if mtime != cached_mtime:
list = os.listdir(path)
list.sort()
cache[path] = mtime, list
return list
[man7]: OPENDIR(3) / [man7]: READDIR(3) / [man7]: CLOSEDIR(3) via [Python 3]: ctypes - A foreign function library for Python (POSIX specific)
ctypes is a foreign function library for Python. It provides C compatible data types, and allows calling functions in DLLs or shared libraries. It can be used to wrap these libraries in pure Python.
code_ctypes.py:
#!/usr/bin/env python3
import sys
from ctypes import (Structure,
    c_ulonglong, c_longlong, c_ushort, c_ubyte, c_char, c_int,
    CDLL, POINTER,
    create_string_buffer, get_errno, set_errno, cast)
DT_DIR = 4
DT_REG = 8
char256 = c_char * 256
class LinuxDirent64(Structure):
_fields_ = [
("d_ino", c_ulonglong),
("d_off", c_longlong),
("d_reclen", c_ushort),
("d_type", c_ubyte),
("d_name", char256),
]
LinuxDirent64Ptr = POINTER(LinuxDirent64)
libc_dll = this_process = CDLL(None, use_errno=True)
# ALWAYS set argtypes and restype for functions, otherwise it's UB!!!
opendir = libc_dll.opendir
readdir = libc_dll.readdir
closedir = libc_dll.closedir
def get_dir_content(path):
ret = [path, list(), list()]
dir_stream = opendir(create_string_buffer(path.encode()))
if (dir_stream == 0):
print("opendir returned NULL (errno: {:d})".format(get_errno()))
return ret
set_errno(0)
dirent_addr = readdir(dir_stream)
while dirent_addr:
dirent_ptr = cast(dirent_addr, LinuxDirent64Ptr)
dirent = dirent_ptr.contents
name = dirent.d_name.decode()
if dirent.d_type & DT_DIR:
if name not in (".", ".."):
ret[1].append(name)
elif dirent.d_type & DT_REG:
ret[2].append(name)
dirent_addr = readdir(dir_stream)
if get_errno():
print("readdir returned NULL (errno: {:d})".format(get_errno()))
closedir(dir_stream)
return ret
def main():
print("{:s} on {:s}n".format(sys.version, sys.platform))
root_dir = "root_dir"
entries = get_dir_content(root_dir)
print(entries)
if __name__ == "__main__":
main()
Notes:
- It loads the three functions from libc (loaded in the current process) and calls them (for more details check [SO]: How do I check whether a file exists without exceptions? (@CristiFati's answer) - last notes from item #4.). That would place this approach very close to the Python / C edge
LinuxDirent64 is the ctypes representation of struct dirent64 from [man7]: dirent.h(0P) (so are the DT_ constants) from my machine: Ubtu 16 x64 (4.10.0-40-generic and libc6-dev:amd64). On other flavors/versions, the struct definition might differ, and if so, the ctypes alias should be updated, otherwise it will yield Undefined Behavior
- It returns data in os.walk's format. I didn't bother to make it recursive, but starting from the existing code, that would be a fairly trivial task
's format. I didn't bother to make it recursive, but starting from the existing code, that would be a fairly trivial task - Everything is doable on Win as well, the data (libraries, functions, structs, constants, ...) differ
Output:
[cfati@cfati-ubtu16x64-0:~/Work/Dev/StackOverflow/q003207219]> ./code_ctypes.py
3.5.2 (default, Nov 12 2018, 13:43:14)
[GCC 5.4.0 20160609] on linux
['root_dir', ['dir2', 'dir1', 'dir3', 'dir0'], ['file1', 'file0']]
[ActiveState]: win32file.FindFilesW (Win specific)
Retrieves a list of matching filenames, using the Windows Unicode API. An interface to the API FindFirstFileW/FindNextFileW/Find close functions.
>>> import os, win32file, win32con
>>> root_dir = "root_dir"
>>> wildcard = "*"
>>> root_dir_wildcard = os.path.join(root_dir, wildcard)
>>> entry_list = win32file.FindFilesW(root_dir_wildcard)
>>> len(entry_list) # Don't display the whole content as it's too long
8
>>> [entry[-2] for entry in entry_list] # Only display the entry names
['.', '..', 'dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [entry[-2] for entry in entry_list if entry[0] & win32con.FILE_ATTRIBUTE_DIRECTORY and entry[-2] not in (".", "..")] # Filter entries and only display dir names (except self and parent)
['dir0', 'dir1', 'dir2', 'dir3']
>>>
>>> [os.path.join(root_dir, entry[-2]) for entry in entry_list if entry[0] & (win32con.FILE_ATTRIBUTE_NORMAL | win32con.FILE_ATTRIBUTE_ARCHIVE)] # Only display file "full" names
['root_dir\file0', 'root_dir\file1']
Notes:
- win32file.FindFilesW is part of [GitHub]: mhammond/pywin32 - Python for Windows (pywin32) Extensions, which is a Python wrapper over WINAPIs
- The documentation link is from ActiveState, as I didn't find any pywin32 official documentation
- Install some (other) third-party package that does the trick
- Most likely, will rely on one (or more) of the above (maybe with slight customizations)
Notes:
Code is meant to be portable (except places that target a specific area - which are marked) or cross:
- platform (Nix, Win, ...)
- Python version (2, 3, ...)
Multiple path styles (absolute, relative) were used across the above variants, to illustrate the fact that the "tools" used are flexible in this direction
- os.listdir and os.scandir use opendir / readdir / closedir ([MS.Docs]: FindFirstFileW function / [MS.Docs]: FindNextFileW function / [MS.Docs]: FindClose function) (via [GitHub]: python/cpython - (master) cpython/Modules/posixmodule.c)
- win32file.FindFilesW uses those (Win specific) functions as well (via [GitHub]: mhammond/pywin32 - (master) pywin32/win32/src/win32file.i)
_get_dir_content (from point #1.) can be implemented using any of these approaches (some will require more work and some less)
- Some advanced filtering (instead of just file vs. dir) could be done: e.g. the include_folders argument could be replaced by another one (e.g. filter_func) which would be a function that takes a path as an argument:
filter_func=lambda x: True
(this doesn't strip out anything) and inside _get_dir_content something like: if not filter_func(entry_with_path): continue
(if the function fails for one entry, it will be skipped), but the more complex the code becomes, the longer it will take to execute
Nota bene! Since recursion is used, I must mention that I did some tests on my laptop (Win 10 x64), totally unrelated to this problem, and when the recursion level was reaching values somewhere in the (990 .. 1000) range (recursionlimit - 1000 (default)), I got StackOverflow :). If the directory tree exceeds that limit (I am not an FS expert, so I don't know if that is even possible), that could be a problem.
I must also mention that I didn't try to increase recursionlimit because I have no experience in the area (how much can I increase it before having to also increase the stack at OS level), but in theory there will always be the possibility for failure, if the dir depth is larger than the highest possible recursionlimit (on that machine).
The code samples are for demonstrative purposes only. That means that I didn't take into account error handling (I don't think there's any try / except / else / finally block), so the code is not robust (the reason is: to keep it as simple and short as possible). For production, error handling should be added as well.
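On the recursionlimit point above, a hedged sketch of raising it (the value 5000 is arbitrary):
import sys
print(sys.getrecursionlimit())    # usually 1000 by default
sys.setrecursionlimit(5000)       # raise it before recursing into a very deep tree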
Other approaches:
Use Python only as a wrapper
- Everything is done using another technology
- That technology is invoked from Python
The most famous flavor that I know is what I call the system administrator approach:
- Use Python (or any programming language for that matter) in order to execute shell commands (and parse their outputs)
- Some consider this a neat hack
- I consider it more like a lame workaround (gainarie), as the action per se is performed from shell (cmd in this case), and thus doesn't have anything to do with Python.
- Filtering (grep / findstr) or output formatting could be done on both sides, but I'm not going to insist on it. Also, I deliberately used os.system instead of subprocess.Popen.
(py35x64_test) E:\Work\Dev\StackOverflow\q003207219>"e:\Work\Dev\VEnvs\py35x64_test\Scripts\python.exe" -c "import os;os.system(\"dir /b root_dir\")"
dir0
dir1
dir2
dir3
file0
file1
In general this approach is to be avoided, since if some command output format slightly differs between OS versions/flavors, the parsing code should be adapted as well (not to mention differences between locales).
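For completeness, a hedged sketch of the subprocess-based variant alluded to above (the command and directory are illustrative, Windows-style):
import subprocess
result = subprocess.run(["cmd", "/c", "dir", "/b", "root_dir"],
                        stdout=subprocess.PIPE, universal_newlines=True)
entries = result.stdout.splitlines()   # parse the captured listing instead of printing it
print(entries)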
You had posted it, but I had cleaned it up once I had read it :-)
– Martijn Pieters♦
Dec 9 '18 at 11:20
import os

def list_files(path):
    # returns a list of names (with extension, without full path) of all files
    # in folder path
    files = []
    for name in os.listdir(path):
        if os.path.isfile(os.path.join(path, name)):
            files.append(name)
    return files
If you are looking for a Python implementation of find, this is a recipe I use rather frequently:
from findtools.find_files import (find_files, Match)
# Recursively find all *.sh files in **/usr/bin**
sh_files_pattern = Match(filetype='f', name='*.sh')
found_files = find_files(path='/usr/bin', match=sh_files_pattern)
for found_file in found_files:
print found_file
So I made a PyPI package out of it and there is also a GitHub repository. I hope that someone finds this code useful.
Returning a list of absolute filepaths; does not recurse into subdirectories:
L = [os.path.join(os.getcwd(),f) for f in os.listdir('.') if os.path.isfile(os.path.join(os.getcwd(),f))]
1
maybe bit longer but v clear what it is doing
– javadba
Jun 8 '15 at 0:28
2
Note: os.path.abspath(f) would be a somewhat cheaper substitute for os.path.join(os.getcwd(), f).
– ShadowRanger
May 6 '17 at 0:14
It'd be more efficient still if you started with cwd = os.path.abspath('.'), then used cwd instead of '.' and os.getcwd() throughout to avoid loads of redundant system calls.
– Martijn Pieters♦
Dec 5 '18 at 10:46
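A sketch of that suggestion, computing the directory path once up front:
import os
cwd = os.path.abspath('.')    # one system call, reused below
L = [os.path.join(cwd, f) for f in os.listdir(cwd) if os.path.isfile(os.path.join(cwd, f))]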
import os
import os.path
def get_files(target_dir):
item_list = os.listdir(target_dir)
file_list = list()
for item in item_list:
item_dir = os.path.join(target_dir,item)
if os.path.isdir(item_dir):
file_list += get_files(item_dir)
else:
file_list.append(item_dir)
return file_list
Here I use a recursive structure.
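Hypothetical usage, with a made-up path:
all_files = get_files("/some/dir")   # returns files from subdirectories too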
I am assuming that all your files are of *.txt
format, and are stored inside a directory with path data/
.
One can use the glob module of Python to list all files of the directory, and add them to a list named fnames, in the following manner:
import glob
fnames = glob.glob("data/*.txt") #fnames: list data type
# -*- coding: utf-8 -*-
import os
import traceback
print '\n\n'
def start():
address = "/home/ubuntu/Desktop"
try:
Folders = []
Id = 1
for item in os.listdir(address):
endaddress = address + "/" + item
Folders.append({'Id': Id, 'TopId': 0, 'Name': item, 'Address': endaddress })
Id += 1
state = 0
for item2 in os.listdir(endaddress):
state = 1
if state == 1:
Id = FolderToList(endaddress, Id, Id - 1, Folders)
return Folders
except:
print "___________________________ ERROR ___________________________n" + traceback.format_exc()
def FolderToList(address, Id, TopId, Folders):
for item in os.listdir(address):
endaddress = address + "/" + item
Folders.append({'Id': Id, 'TopId': TopId, 'Name': item, 'Address': endaddress })
Id += 1
state = 0
for item in os.listdir(endaddress):
state = 1
if state == 1:
Id = FolderToList(endaddress, Id, Id - 1, Folders)
return Id
print start()
This is too specific for an isolated use case and not generally useful, especially since there is no explanation whatsoever what the code is doing. The blanket except handling is also a bad example of how to handle exceptions in general.
– Martijn Pieters♦
Dec 5 '18 at 10:44
Using generators
import os
def get_files(search_path):
for (dirpath, _, filenames) in os.walk(search_path):
for filename in filenames:
yield os.path.join(dirpath, filename)
list_files = get_files('.')
for filename in list_files:
print(filename)
import dircache
list = dircache.listdir(pathname)
i = 0
check = len(list[0])
temp = []
count = len(list)
while count != 0:
if len(list[i]) != check:
temp.append(list[i-1])
check = len(list[i])
else:
i = i + 1
count = count - 1
print temp
16
dirchache is "Deprecated since version 2.6: The dircache module has been removed in Python 3.0."
– Daniel Reis
Aug 17 '13 at 13:58
Use this function if you want to use a different file type or get the full directory:
import os
def createList(foldername, fulldir = True, suffix=".jpg"):
file_list_tmp = os.listdir(foldername)
#print len(file_list_tmp)
file_list = []
if fulldir:
for item in file_list_tmp:
if item.endswith(suffix):
file_list.append(os.path.join(foldername, item))
else:
for item in file_list_tmp:
if item.endswith(suffix):
file_list.append(item)
return file_list
You can decide to use os.path.join() inside the loop rather than double up your looping and filtering code. This answer doesn't really add anything over existing answers other than the fulldir flag, so you'd really want to do a better job of the implementation. I'd use def files_list(p, fulldir=True, suffix=None): (indent), names = os.listdir(p), if suffix is not None: names = [f for f in names if f.endswith(suffix)], return [os.path.join(p, f) if fulldir else f for f in names] to at least keep it compact and efficient.
– Martijn Pieters♦
Dec 5 '18 at 10:59
Could you point out which part is a double loop? Thanks.
– neouyghur
Dec 6 '18 at 2:39
You have two for ... if ... append constructs in your function, only different in what is appended each time. That's a lot of needless code duplication.
– Martijn Pieters♦
Dec 6 '18 at 3:18
Another very readable variant for Python 3.4+ is using pathlib.Path.glob:
from pathlib import Path
folder = '/foo'
[f for f in Path(folder).glob('*') if f.is_file()]
It is simple to make more specific, e.g. only look for Python source files which are not symbolic links, also in all subdirectories:
[f for f in Path(folder).glob('**/*.py') if not f.is_symlink()]
For better results, you can use the listdir() method of the os module along with a generator (a generator is a powerful iterator that keeps its state, remember?). The following code works fine with both versions: Python 2 and Python 3.
Here's a code:
import os
def files(path):
for file in os.listdir(path):
if os.path.isfile(os.path.join(path, file)):
yield file
for file in files("."):
print (file)
The listdir() method returns the list of entries for the given directory. The method os.path.isfile() returns True if the given entry is a file. And the yield keyword exits the function but keeps its current state, and it returns only the name of the entry detected as a file. All of the above allows us to loop over the generator function.
Hope this helps.
Here's my general-purpose function for this. It returns a list of file paths rather than filenames since I found that to be more useful. It has a few optional arguments that make it versatile. For instance, I often use it with arguments like pattern='*.txt'
or subfolders=True
.
import os
import fnmatch
def list_paths(folder='.', pattern='*', case_sensitive=False, subfolders=False):
"""Return a list of the file paths matching the pattern in the specified
folder, optionally including files inside subfolders.
"""
match = fnmatch.fnmatchcase if case_sensitive else fnmatch.fnmatch
walked = os.walk(folder) if subfolders else [next(os.walk(folder))]
return [os.path.join(root, f)
for root, dirnames, filenames in walked
for f in filenames if match(f, pattern)]
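Usage examples matching the arguments mentioned above (the folder name is illustrative):
txt_paths = list_paths('notes', pattern='*.txt')
all_paths = list_paths('notes', subfolders=True)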
For Python 2:
pip install rglob
import rglob
file_list=rglob.rglob("/home/base/dir/", "*")
print file_list
I will provide a sample one-liner where the source path and file type can be provided as input. The code returns a list of filenames with the csv extension. Use *.* in case all files need to be returned. This will also recursively scan the subdirectories.
[y for x in os.walk(sourcePath) for y in glob(os.path.join(x[0], '*.csv'))]
Modify file extensions and source path as needed.
If you are going to use glob, then just use glob('**/*.csv', recursive=True). No need to combine this with os.walk() to recurse (recursive and ** are supported since Python 3.5).
– Martijn Pieters♦
Dec 5 '18 at 11:09
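A sketch of that simpler form (the source path is made up):
import glob, os
csv_files = glob.glob(os.path.join('source_dir', '**', '*.csv'), recursive=True)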
23 Answers
23
active
oldest
votes
23 Answers
23
active
oldest
votes
active
oldest
votes
active
oldest
votes
os.listdir()
will get you everything that's in a directory - files and directories.
If you want just files, you could either filter this down using os.path
:
from os import listdir
from os.path import isfile, join
onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))]
or you could use os.walk()
which will yield two lists for each directory it visits - splitting into files and dirs for you. If you only want the top directory you can just break the first time it yields
from os import walk
f =
for (dirpath, dirnames, filenames) in walk(mypath):
f.extend(filenames)
break
And lastly, as that example shows, adding one list to another you can either use .extend()
or
>>> q = [1, 2, 3]
>>> w = [4, 5, 6]
>>> q = q + w
>>> q
[1, 2, 3, 4, 5, 6]
Personally, I prefer .extend()
5
Doesn't seem to work on Windows with unicode file names for some reason.
– cdiggins
Jun 14 '13 at 16:21
49
A bit simpler: (_, _, filenames) = walk(mypath).next()
(if you are confident that the walk will return at least one value, which it should.)
– misterbee
Jul 14 '13 at 20:56
6
Slight modification to store full paths: for (dirpath, dirnames, filenames) in os.walk(mypath): checksum_files.extend(os.path.join(dirpath, filename) for filename in filenames) break
– okigan
Sep 23 '13 at 21:31
108
f.extend(filenames) is not actually equivalent to f = f + filenames. extend will modify f in-place, whereas adding creates a new list in a new memory location. This means extend is generally more efficient than +, but it can sometimes lead to confusion if multiple objects hold references to the list. Lastly, it's worth noting that f += filenames is equivalent to f.extend(filenames), not f = f + filenames.
– Benjamin Hodgson♦
Oct 22 '13 at 8:55
19
@misterbee, your solution is the best, just one small improvement: _, _, filenames = next(walk(mypath), (None, None, []))
– bgusach
Mar 5 '15 at 7:36
answered Jul 8 '10 at 21:01 by pycruft, edited Nov 22 '15 at 6:56 by Martin Thoma
I prefer using the glob
module, as it does pattern matching and expansion.
import glob
print(glob.glob("/home/adam/*.txt"))
It will return a list with the queried files:
['/home/adam/file1.txt', '/home/adam/file2.txt', .... ]
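Note that glob patterns match directories as well as files; a minimal sketch to keep only regular files (same /home/adam path as above):
import glob
import os
only_files = [f for f in glob.glob("/home/adam/*") if os.path.isfile(f)]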
11
that's a shortcut for listdir+fnmatch docs.python.org/library/fnmatch.html#fnmatch.fnmatch
– Stefano
Jul 1 '11 at 13:03
17
to clarify, this does not return the "full path"; it simply returns the expansion of the glob, whatever it may be. E.g., given /home/user/foo/bar/hello.txt, then, if running in directory foo, glob("bar/*.txt") will return bar/hello.txt. There are cases when you do in fact want the full (i.e., absolute) path; for those cases, see stackoverflow.com/questions/51520/…
– michael
Aug 16 '16 at 12:07
Related: find files recursively with glob: stackoverflow.com/a/2186565/4561887
– Gabriel Staples
Sep 3 '18 at 3:25
answered Jul 9 '10 at 18:13 by adamk, edited May 23 '18 at 18:36 by Peter Mortensen
import os
os.listdir("somedirectory")
will return a list of all files and directories in "somedirectory".
9
This returns the relative path of the files, as compared with the full path returned by glob.glob
– xji
May 17 '16 at 14:32
13
@JIXiang: os.listdir() always returns mere filenames (not relative paths). What glob.glob() returns is driven by the path format of the input pattern.
– mklement0
Nov 30 '16 at 18:14
os.listdir() -> It always lists the directories and files inside the provided location. Is there any way to list only directories, not files?
– RonyA
May 22 '18 at 15:44
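To answer that last comment: a minimal sketch that keeps only directories, filtering os.listdir() with os.path.isdir() (mypath is a placeholder):
import os
only_dirs = [d for d in os.listdir(mypath) if os.path.isdir(os.path.join(mypath, d))]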
answered Jul 8 '10 at 19:35 by sepp2k, edited Jul 13 '16 at 19:05 by csano
Get a list of files with Python 2 and 3
I have also made a short video here: Python: how to get a list of files in a directory
os.listdir()
or..... how to get all the files (and directories) in the current directory (Python 3)
The simplest way to get the files in the current directory in Python 3 is this. It's really simple; use the os module and the listdir() function, and you'll get the files in that directory (and any folders that are in the directory, but you will not get the files inside subdirectories; for that you can use walk - I will talk about it later).
>>> import os
>>> arr = os.listdir()
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
Using glob
I found glob easier for selecting files of the same type or with something in common. Look at the following example:
import glob
txtfiles = []
for file in glob.glob("*.txt"):
    txtfiles.append(file)
Using list comprehension
import glob
mylist = [f for f in glob.glob("*.txt")]
Getting the full path name with os.path.abspath
As you noticed, you don't have the full path of the files in the code above. If you need the absolute path, you can use another function of the os.path module called _getfullpathname, passing the file that you get from os.listdir() as an argument. There are other ways to get the full path, as we will check later (I replaced _getfullpathname with abspath, as suggested by mexmex).
>>> import os
>>> files_path = [os.path.abspath(x) for x in os.listdir()]
>>> files_path
['F:\\documenti\\applications.txt', 'F:\\documenti\\collections.txt']
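Note that os.path.abspath() resolves names relative to the current working directory, so the snippet above only makes sense when listing the current directory. For an arbitrary folder, a minimal sketch (mypath is a placeholder):
import os
files_path = [os.path.abspath(os.path.join(mypath, x)) for x in os.listdir(mypath)]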
Get the full path name of a type of file into all subdirectories with walk
I find this very useful for finding stuff in many directories, and it helped me find a file whose name I didn't remember:
import os
# Getting the current work directory (cwd)
thisdir = os.getcwd()
# r=root, d=directories, f = files
for r, d, f in os.walk(thisdir):
for file in f:
if ".docx" in file:
print(os.path.join(r, file))
os.listdir(): get files in the current directory (Python 2)
In Python 2, if you want the list of the files in the current directory, you have to pass '.' or os.getcwd() as the argument to the os.listdir method.
>>> import os
>>> arr = os.listdir('.')
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
To go up in the directory tree
>>> # Method 1
>>> x = os.listdir('..')
# Method 2
>>> x= os.listdir('/')
Get files: os.listdir() in a particular directory (Python 2 and 3)
>>> import os
>>> arr = os.listdir('F:\\python')
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
Get files of a particular subdirectory with os.listdir()
import os
x = os.listdir("./content")
os.walk('.') - current directory
>>> import os
>>> arr = next(os.walk('.'))[2]
>>> arr
['5bs_Turismo1.pdf', '5bs_Turismo1.pptx', 'esperienza.txt']
glob module - all files
import glob
print(glob.glob("*"))
out:['content', 'start.py']
next(os.walk('.')) and os.path.join('dir','file')
>>> import os
>>> arr = []
>>> root, dirs, files = next(os.walk("F:\\_python"))
>>> for file in files:
...     arr.append(os.path.join(root, file))
...
>>> for f in arr:
...     print(f)
>output
F:\_python\dict_class.py
F:\_python\programmi.txt
next(os.walk('F:\\_python')) - get the full path - list comprehension
>>> [os.path.join(r, file) for r, d, f in [next(os.walk("F:\\_python"))] for file in f]
['F:\\_python\\dict_class.py', 'F:\\_python\\programmi.txt']
os.walk - get full path - all files in sub dirs
x = [os.path.join(r, file) for r, d, f in os.walk("F:\\_python") for file in f]
>>> x
['F:\\_python\\dict.py', 'F:\\_python\\progr.txt', 'F:\\_python\\readl.py']
os.listdir() - get only txt files
>>> arr_txt = [x for x in os.listdir() if x.endswith(".txt")]
>>> print(arr_txt)
['work.txt', '3ebooks.txt']
glob - get only txt files
>>> import glob
>>> x = glob.glob("*.txt")
>>> x
['ale.txt', 'alunni2015.txt', 'assenze.text.txt', 'text2.txt', 'untitled.txt']
Using glob to get the full path of the files
If I should need the absolute path of the files:
>>> from path import path
>>> from glob import glob
>>> x = [path(f).abspath() for f in glob("F:*.txt")]
>>> for f in x:
... print(f)
...
F:\acquistionline.txt
F:\acquisti_2018.txt
F:\bootstrap_jquery_ecc.txt
Other use of glob
If I want all the files in the directory:
>>> x = glob.glob("*")
Using os.path.isfile to avoid directories in the list
import os.path
listOfFiles = [f for f in os.listdir() if os.path.isfile(f)]
print(listOfFiles)
> output
['a simple game.py', 'data.txt', 'decorator.py']
Using pathlib (from Python 3.4)
import pathlib
>>> flist = []
>>> for p in pathlib.Path('.').iterdir():
... if p.is_file():
... print(p)
... flist.append(p)
...
error.PNG
exemaker.bat
guiprova.mp3
setup.py
speak_gui2.py
thumb.PNG
If you want to use list comprehension
>>> flist = [p for p in pathlib.Path('.').iterdir() if p.is_file()]
You can also use just pathlib.Path() instead of pathlib.Path(".").
Use glob method in pathlib.Path()
import pathlib
py = pathlib.Path().glob("*.py")
for file in py:
print(file)
output:
stack_overflow_list.py
stack_overflow_list_tkinter.py
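If you also want matches inside subfolders, pathlib provides rglob(); a minimal sketch:
import pathlib
for file in pathlib.Path().rglob("*.py"):
    print(file)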
Get all and only files with os.walk
import os
x = [i[2] for i in os.walk('.')]
y = []
for t in x:
for f in t:
y.append(f)
>>> y
['append_to_list.py', 'data.txt', 'data1.txt', 'data2.txt', 'data_180617', 'os_walk.py', 'READ2.py', 'read_data.py', 'somma_defaltdic.py', 'substitute_words.py', 'sum_data.py', 'data.txt', 'data1.txt', 'data_180617']
Get only files with next and walk in a directory
>>> import os
>>> x = next(os.walk('F://python'))[2]
>>> x
['calculator.bat','calculator.py']
Get only directories with next and walk in a directory
>>> import os
>>> next(os.walk('F://python'))[1] # for the current dir use ('.')
['python3','others']
Get all the subdir names with walk
>>> for r,d,f in os.walk("F:_python"):
... for dirs in d:
... print(dirs)
...
.vscode
pyexcel
pyschool.py
subtitles
_metaprogramming
.ipynb_checkpoints
os.scandir() from Python 3.5 on
>>> import os
>>> x = [f.name for f in os.scandir() if f.is_file()]
>>> x
['calculator.bat','calculator.py']
# Another example with scandir (a little variation from docs.python.org)
# This one is more efficient than os.listdir.
# In this case, it shows the files only in the current directory
# where the script is executed.
>>> import os
>>> with os.scandir() as i:
... for entry in i:
... if entry.is_file():
... print(entry.name)
...
ebookmaker.py
error.PNG
exemaker.bat
guiprova.mp3
setup.py
speakgui4.py
speak_gui2.py
speak_gui3.py
thumb.PNG
>>>
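Each os.scandir() entry also carries the full path in its .path attribute; a minimal sketch that collects the full paths of the files in a folder (mypath is a placeholder):
import os
full_paths = [entry.path for entry in os.scandir(mypath) if entry.is_file()]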
Ex. 1: How many files are there in the subdirectories?
In this example, we count the number of files included in the directory and all of its subdirectories.
import os
def count(dir, counter=0):
    "returns number of files in dir and subdirs"
    for pack in os.walk(dir):
        for f in pack[2]:
            counter += 1
    return dir + " : " + str(counter) + " files"

print(count("F:\\python"))
> output
F:\python : 12057 files
Ex.2: How to copy all files from a directory to another?
A script to tidy up your computer by finding all files of a type (default: pptx) and copying them to a new folder.
import os
import shutil
from path import path
destination = "F:\file_copied"
# os.makedirs(destination)
def copyfile(dir, filetype='pptx', counter=0):
"Searches for pptx (or other - pptx is the default) files and copies them"
for pack in os.walk(dir):
for f in pack[2]:
if f.endswith(filetype):
fullpath = pack[0] + "\" + f
print(fullpath)
shutil.copy(fullpath, destination)
counter += 1
if counter > 0:
print("------------------------")
print("t==> Found in: `" + dir + "` : " + str(counter) + " filesn")
for dir in os.listdir():
"searches for folders that starts with `_`"
if dir[0] == '_':
# copyfile(dir, filetype='pdf')
copyfile(dir, filetype='txt')
> Output
_compiti18\Compito Contabilità 1\conti.txt
_compiti18\Compito Contabilità 1\modula4.txt
_compiti18\Compito Contabilità 1\moduloa4.txt
------------------------
==> Found in: `_compiti18` : 3 files
Ex. 3: How to get all the files in a txt file
In case you want to create a txt file with all the file names:
import os

mylist = ""
with open("filelist.txt", "w", encoding="utf-8") as file:
    for eachfile in os.listdir():
        mylist += eachfile + "\n"
    file.write(mylist)
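A shorter sketch of the same idea with pathlib (Python 3.5+), joining the names with newlines before writing:
import os
import pathlib
pathlib.Path("filelist.txt").write_text("\n".join(os.listdir()), encoding="utf-8")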
Example: txt with all the files of a hard drive
"""We are going to save a txt file with all the files in your directory.
We will use the function walk()
"""
import os
# see all the methods of os
# print(*dir(os), sep=", ")
listafile = []
percorso = []
with open("lista_file.txt", "w", encoding='utf-8') as testo:
    for root, dirs, files in os.walk("D:\\"):
        for file in files:
            listafile.append(file)
            percorso.append(root + "\\" + file)
            testo.write(file + "\n")
listafile.sort()
print("N. of files", len(listafile))
with open("lista_file_ordinata.txt", "w", encoding="utf-8") as testo_ordinato:
    for file in listafile:
        testo_ordinato.write(file + "\n")
with open("percorso.txt", "w", encoding="utf-8") as file_percorso:
    for file in percorso:
        file_percorso.write(file + "\n")
os.system("lista_file.txt")
os.system("lista_file_ordinata.txt")
os.system("percorso.txt")
All the files of C:\ in one text file
This is a shorter version of the previous code. Change the folder where it starts searching for files if you need to start from another position. On my computer this code generates a text file of about 50 MB, with a little under 500,000 lines containing files with their complete paths.
import os
with open("file.txt", "w", encoding="utf-8") as filewrite:
for r, d, f in os.walk("C:\"):
for file in f:
filewrite.write(f"{r + file}n")
A function to search for a certain type of file
import os
def searchfiles(extension='.ttf'):
    "Create a txt file with all the files of a type"
    with open("file.txt", "w", encoding="utf-8") as filewrite:
        for r, d, f in os.walk("C:\\"):
            for file in f:
                if file.endswith(extension):
                    filewrite.write(f"{os.path.join(r, file)}\n")
# looking for ttf file (fonts)
searchfiles('ttf')
2
You should include the path argument to listdir.
– Alejandro Sazo
Jan 3 '17 at 15:47
2
It's definitely encouraged to include some context/explanation for code as that makes the answer more useful.
– EJoshuaS
Jan 3 '17 at 16:07
2
I agree, but I also did not notice that Python 2 requires the argument whilst in Python 3 it is optional. If you improve the answer for both Python versions that would be great :)
– Alejandro Sazo
Jan 3 '17 at 16:44
1
Ok, I went into Python 2, found the differences, and edited the post.
– Giovanni Gianni
Jan 18 '17 at 21:16
1
There is no reason to do [f for f in os.listdir()]; os.listdir() already returns a list, so that's just needlessly copying the original list before throwing it away.
– ShadowRanger
May 6 '17 at 0:08
|
show 13 more comments
Get a list of files with Python 2 and 3
I have also made a short video here: Python: how to get a list of file in a directory
os.listdir()
or..... how to get all the files (and directories) in current directory (Python 3)
The simplest way to have the file in the current directory in Python 3 is this. It's really simple; use the os
module and the listdir() function and you'll have the file in that directory (and eventual folders that are in the directory, but you will not have the file in the subdirectory, for that you can use walk - I will talk about it later).
>>> import os
>>> arr = os.listdir()
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
Using glob
I found glob easier to select file of the same type or with something in common. Look at the following example:
import glob
txtfiles =
for file in glob.glob("*.txt"):
txtfiles.append(file)
Using list comprehension
import glob
mylist = [f for f in glob.glob("*.txt")]
Getting the full path name with os.path.abspath
As you noticed, you don't have the full path of the file in the code above. If you need to have the absolute path, you can use another function of the os.path
module called _getfullpathname
, putting the file that you get from os.listdir()
as an argument. There are other ways to have the full path, as we will check later (I replaced, as suggested by mexmex, _getfullpathname with abspath
).
>>> import os
>>> files_path = [os.path.abspath(x) for x in os.listdir()]
>>> files_path
['F:\documentiapplications.txt', 'F:\documenticollections.txt']
Get the full path name of a type of file into all subdirectories with walk
I find this very useful to find stuff in many directories, and it helped me finding a file about which I didn't remember the name:
import os
# Getting the current work directory (cwd)
thisdir = os.getcwd()
# r=root, d=directories, f = files
for r, d, f in os.walk(thisdir):
for file in f:
if ".docx" in file:
print(os.path.join(r, file))
os.listdir(): get files in the current directory (Python 2)
In Python 2 you, if you want the list of the files in the current directory, you have to give the argument as '.' or os.getcwd() in the os.listdir method.
>>> import os
>>> arr = os.listdir('.')
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
To go up in the directory tree
>>> # Method 1
>>> x = os.listdir('..')
# Method 2
>>> x= os.listdir('/')
Get files: os.listdir() in a particular directory (Python 2 and 3)
>>> import os
>>> arr = os.listdir('F:\python')
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
Get files of a particular subdirectory with os.listdir()
import os
x = os.listdir("./content")
os.walk('.') - current directory
>>> import os
>>> arr = next(os.walk('.'))[2]
>>> arr
['5bs_Turismo1.pdf', '5bs_Turismo1.pptx', 'esperienza.txt']
glob module - all files
import glob
print(glob.glob("*"))
out:['content', 'start.py']
next(os.walk('.')) and os.path.join('dir','file')
>>> import os
>>> arr =
>>> for d,r,f in next(os.walk("F:_python")):
>>> for file in f:
>>> arr.append(os.path.join(r,file))
...
>>> for f in arr:
>>> print(files)
>output
F:\_python\dict_class.py
F:\_python\programmi.txt
next(os.walk('F:') - get the full path - list comprehension
>>> [os.path.join(r,file) for r,d,f in next(os.walk("F:\_python")) for file in f]
['F:\_python\dict_class.py', 'F:\_python\programmi.txt']
os.walk - get full path - all files in sub dirs
x = [os.path.join(r,file) for r,d,f in os.walk("F:\_python") for file in f]
>>>x
['F:\_python\dict.py', 'F:\_python\progr.txt', 'F:\_python\readl.py']
os.listdir() - get only txt files
>>> arr_txt = [x for x in os.listdir() if x.endswith(".txt")]
>>> print(arr_txt)
['work.txt', '3ebooks.txt']
glob - get only txt files
>>> import glob
>>> x = glob.glob("*.txt")
>>> x
['ale.txt', 'alunni2015.txt', 'assenze.text.txt', 'text2.txt', 'untitled.txt']
Using glob to get the full path of the files
If I should need the absolute path of the files:
>>> from path import path
>>> from glob import glob
>>> x = [path(f).abspath() for f in glob("F:*.txt")]
>>> for f in x:
... print(f)
...
F:acquistionline.txt
F:acquisti_2018.txt
F:bootstrap_jquery_ecc.txt
Other use of glob
If I want all the files in the directory:
>>> x = glob.glob("*")
Using os.path.isfile to avoid directories in the list
import os.path
listOfFiles = [f for f in os.listdir() if os.path.isfile(f)]
print(listOfFiles)
> output
['a simple game.py', 'data.txt', 'decorator.py']
Using pathlib from (Python 3.4)
import pathlib
>>> flist =
>>> for p in pathlib.Path('.').iterdir():
... if p.is_file():
... print(p)
... flist.append(p)
...
error.PNG
exemaker.bat
guiprova.mp3
setup.py
speak_gui2.py
thumb.PNG
If you want to use list comprehension
>>> flist = [p for p in pathlib.Path('.').iterdir() if p.is_file()]
*You can use also just pathlib.Path() instead of pathlib.Path(".")
Use glob method in pathlib.Path()
import pathlib
py = pathlib.Path().glob("*.py")
for file in py:
print(file)
output:
stack_overflow_list.py
stack_overflow_list_tkinter.py
Get all and only files with os.walk
import os
x = [i[2] for i in os.walk('.')]
y=
for t in x:
for f in t:
y.append(f)
>>> y
['append_to_list.py', 'data.txt', 'data1.txt', 'data2.txt', 'data_180617', 'os_walk.py', 'READ2.py', 'read_data.py', 'somma_defaltdic.py', 'substitute_words.py', 'sum_data.py', 'data.txt', 'data1.txt', 'data_180617']
Get only files with next and walk in a directory
>>> import os
>>> x = next(os.walk('F://python'))[2]
>>> x
['calculator.bat','calculator.py']
Get only directories with next and walk in a directory
>>> import os
>>> next(os.walk('F://python'))[1] # for the current dir use ('.')
['python3','others']
Get all the subdir names with walk
>>> for r,d,f in os.walk("F:_python"):
... for dirs in d:
... print(dirs)
...
.vscode
pyexcel
pyschool.py
subtitles
_metaprogramming
.ipynb_checkpoints
os.scandir() from Python 3.5 on
>>> import os
>>> x = [f.name for f in os.scandir() if f.is_file()]
>>> x
['calculator.bat','calculator.py']
# Another example with scandir (a little variation from docs.python.org)
# This one is more efficient than os.listdir.
# In this case, it shows the files only in the current directory
# where the script is executed.
>>> import os
>>> with os.scandir() as i:
... for entry in i:
... if entry.is_file():
... print(entry.name)
...
ebookmaker.py
error.PNG
exemaker.bat
guiprova.mp3
setup.py
speakgui4.py
speak_gui2.py
speak_gui3.py
thumb.PNG
>>>
Ex. 1: How many files are there in the subdirectories?
In this example, we look for the number of files that are included in all the directory and its subdirectories.
import os
def count(dir, counter=0):
"returns number of files in dir and subdirs"
for pack in os.walk(dir):
for f in pack[2]:
counter += 1
return dir + " : " + str(counter) + "files"
print(count("F:\python"))
> output
>'F:\python' : 12057 files'
Ex.2: How to copy all files from a directory to another?
A script to make order in your computer finding all files of a type (default: pptx) and copying them in a new folder.
import os
import shutil
from path import path
destination = "F:\file_copied"
# os.makedirs(destination)
def copyfile(dir, filetype='pptx', counter=0):
"Searches for pptx (or other - pptx is the default) files and copies them"
for pack in os.walk(dir):
for f in pack[2]:
if f.endswith(filetype):
fullpath = pack[0] + "\" + f
print(fullpath)
shutil.copy(fullpath, destination)
counter += 1
if counter > 0:
print("------------------------")
print("t==> Found in: `" + dir + "` : " + str(counter) + " filesn")
for dir in os.listdir():
"searches for folders that starts with `_`"
if dir[0] == '_':
# copyfile(dir, filetype='pdf')
copyfile(dir, filetype='txt')
> Output
_compiti18Compito Contabilità 1conti.txt
_compiti18Compito Contabilità 1modula4.txt
_compiti18Compito Contabilità 1moduloa4.txt
------------------------
==> Found in: `_compiti18` : 3 files
Ex. 3: How to get all the files in a txt file
In case you want to create a txt file with all the file names:
import os
mylist = ""
with open("filelist.txt", "w", encoding="utf-8") as file:
for eachfile in os.listdir():
mylist += eachfile + "n"
file.write(mylist)
Example: txt with all the files of an hard drive
"""We are going to save a txt file with all the files in your directory.
We will use the function walk()
"""
import os
# see all the methods of os
# print(*dir(os), sep=", ")
listafile =
percorso =
with open("lista_file.txt", "w", encoding='utf-8') as testo:
for root, dirs, files in os.walk("D:\"):
for file in files:
listafile.append(file)
percorso.append(root + "\" + file)
testo.write(file + "n")
listafile.sort()
print("N. of files", len(listafile))
with open("lista_file_ordinata.txt", "w", encoding="utf-8") as testo_ordinato:
for file in listafile:
testo_ordinato.write(file + "n")
with open("percorso.txt", "w", encoding="utf-8") as file_percorso:
for file in percorso:
file_percorso.write(file + "n")
os.system("lista_file.txt")
os.system("lista_file_ordinata.txt")
os.system("percorso.txt")
All the file of C:\ in one text file
This is a shorter version of the previous code. Change the folder where to start finding the files if you need to start from another position. This code generate a 50 mb on text file on my computer with something less then 500.000 lines with files with the complete path.
import os
with open("file.txt", "w", encoding="utf-8") as filewrite:
for r, d, f in os.walk("C:\"):
for file in f:
filewrite.write(f"{r + file}n")
A function to search for a certain type of file
import os
def searchfiles(extension='.ttf'):
"Create a txt file with all the file of a type"
with open("file.txt", "w", encoding="utf-8") as filewrite:
for r, d, f in os.walk("C:\"):
for file in f:
if file.endswith(extension):
filewrite.write(f"{r + file}n")
# looking for ttf file (fonts)
searchfiles('ttf')
2
You should include the path argument to listdir.
– Alejandro Sazo
Jan 3 '17 at 15:47
2
It's definitely encouraged to include some context/explanation for code as that makes the answer more useful.
– EJoshuaS
Jan 3 '17 at 16:07
2
I agree, but I did not notice something also, that python2 requires the argument whilst python3 is optional, If you improve the answer for both python versions would be great :)
– Alejandro Sazo
Jan 3 '17 at 16:44
1
Ok, I went into Python 2 and find the differences and I edited the post.
– Giovanni Gianni
Jan 18 '17 at 21:16
1
There is no reason to do[f for f in os.listdir()]
;os.listdir()
already returns alist
, so that's just needlessly copying the originallist
before throwing it away.
– ShadowRanger
May 6 '17 at 0:08
|
show 13 more comments
Get a list of files with Python 2 and 3
I have also made a short video here: Python: how to get a list of file in a directory
os.listdir()
or..... how to get all the files (and directories) in current directory (Python 3)
The simplest way to have the file in the current directory in Python 3 is this. It's really simple; use the os
module and the listdir() function and you'll have the file in that directory (and eventual folders that are in the directory, but you will not have the file in the subdirectory, for that you can use walk - I will talk about it later).
>>> import os
>>> arr = os.listdir()
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
Using glob
I found glob easier to select file of the same type or with something in common. Look at the following example:
import glob
txtfiles =
for file in glob.glob("*.txt"):
txtfiles.append(file)
Using list comprehension
import glob
mylist = [f for f in glob.glob("*.txt")]
Getting the full path name with os.path.abspath
As you noticed, you don't have the full path of the file in the code above. If you need to have the absolute path, you can use another function of the os.path
module called _getfullpathname
, putting the file that you get from os.listdir()
as an argument. There are other ways to have the full path, as we will check later (I replaced, as suggested by mexmex, _getfullpathname with abspath
).
>>> import os
>>> files_path = [os.path.abspath(x) for x in os.listdir()]
>>> files_path
['F:\documentiapplications.txt', 'F:\documenticollections.txt']
Get the full path name of a type of file into all subdirectories with walk
I find this very useful to find stuff in many directories, and it helped me finding a file about which I didn't remember the name:
import os
# Getting the current work directory (cwd)
thisdir = os.getcwd()
# r=root, d=directories, f = files
for r, d, f in os.walk(thisdir):
for file in f:
if ".docx" in file:
print(os.path.join(r, file))
os.listdir(): get files in the current directory (Python 2)
In Python 2 you, if you want the list of the files in the current directory, you have to give the argument as '.' or os.getcwd() in the os.listdir method.
>>> import os
>>> arr = os.listdir('.')
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
To go up in the directory tree
>>> # Method 1
>>> x = os.listdir('..')
# Method 2
>>> x= os.listdir('/')
Get files: os.listdir() in a particular directory (Python 2 and 3)
>>> import os
>>> arr = os.listdir('F:\python')
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
Get files of a particular subdirectory with os.listdir()
import os
x = os.listdir("./content")
os.walk('.') - current directory
>>> import os
>>> arr = next(os.walk('.'))[2]
>>> arr
['5bs_Turismo1.pdf', '5bs_Turismo1.pptx', 'esperienza.txt']
glob module - all files
import glob
print(glob.glob("*"))
out:['content', 'start.py']
next(os.walk('.')) and os.path.join('dir','file')
>>> import os
>>> arr =
>>> for d,r,f in next(os.walk("F:_python")):
>>> for file in f:
>>> arr.append(os.path.join(r,file))
...
>>> for f in arr:
>>> print(files)
>output
F:\_python\dict_class.py
F:\_python\programmi.txt
next(os.walk('F:') - get the full path - list comprehension
>>> [os.path.join(r,file) for r,d,f in next(os.walk("F:\_python")) for file in f]
['F:\_python\dict_class.py', 'F:\_python\programmi.txt']
os.walk - get full path - all files in sub dirs
x = [os.path.join(r,file) for r,d,f in os.walk("F:\_python") for file in f]
>>>x
['F:\_python\dict.py', 'F:\_python\progr.txt', 'F:\_python\readl.py']
os.listdir() - get only txt files
>>> arr_txt = [x for x in os.listdir() if x.endswith(".txt")]
>>> print(arr_txt)
['work.txt', '3ebooks.txt']
glob - get only txt files
>>> import glob
>>> x = glob.glob("*.txt")
>>> x
['ale.txt', 'alunni2015.txt', 'assenze.text.txt', 'text2.txt', 'untitled.txt']
Using glob to get the full path of the files
If I should need the absolute path of the files:
>>> from path import path
>>> from glob import glob
>>> x = [path(f).abspath() for f in glob("F:*.txt")]
>>> for f in x:
... print(f)
...
F:acquistionline.txt
F:acquisti_2018.txt
F:bootstrap_jquery_ecc.txt
Other use of glob
If I want all the files in the directory:
>>> x = glob.glob("*")
Using os.path.isfile to avoid directories in the list
import os.path
listOfFiles = [f for f in os.listdir() if os.path.isfile(f)]
print(listOfFiles)
> output
['a simple game.py', 'data.txt', 'decorator.py']
Using pathlib from (Python 3.4)
import pathlib
>>> flist =
>>> for p in pathlib.Path('.').iterdir():
... if p.is_file():
... print(p)
... flist.append(p)
...
error.PNG
exemaker.bat
guiprova.mp3
setup.py
speak_gui2.py
thumb.PNG
If you want to use list comprehension
>>> flist = [p for p in pathlib.Path('.').iterdir() if p.is_file()]
*You can use also just pathlib.Path() instead of pathlib.Path(".")
Use glob method in pathlib.Path()
import pathlib
py = pathlib.Path().glob("*.py")
for file in py:
print(file)
output:
stack_overflow_list.py
stack_overflow_list_tkinter.py
Get all and only files with os.walk
import os
x = [i[2] for i in os.walk('.')]
y=
for t in x:
for f in t:
y.append(f)
>>> y
['append_to_list.py', 'data.txt', 'data1.txt', 'data2.txt', 'data_180617', 'os_walk.py', 'READ2.py', 'read_data.py', 'somma_defaltdic.py', 'substitute_words.py', 'sum_data.py', 'data.txt', 'data1.txt', 'data_180617']
Get only files with next and walk in a directory
>>> import os
>>> x = next(os.walk('F://python'))[2]
>>> x
['calculator.bat','calculator.py']
Get only directories with next and walk in a directory
>>> import os
>>> next(os.walk('F://python'))[1] # for the current dir use ('.')
['python3','others']
Get all the subdir names with walk
>>> for r,d,f in os.walk("F:_python"):
... for dirs in d:
... print(dirs)
...
.vscode
pyexcel
pyschool.py
subtitles
_metaprogramming
.ipynb_checkpoints
os.scandir() from Python 3.5 on
>>> import os
>>> x = [f.name for f in os.scandir() if f.is_file()]
>>> x
['calculator.bat','calculator.py']
# Another example with scandir (a little variation from docs.python.org)
# This one is more efficient than os.listdir.
# In this case, it shows the files only in the current directory
# where the script is executed.
>>> import os
>>> with os.scandir() as i:
... for entry in i:
... if entry.is_file():
... print(entry.name)
...
ebookmaker.py
error.PNG
exemaker.bat
guiprova.mp3
setup.py
speakgui4.py
speak_gui2.py
speak_gui3.py
thumb.PNG
>>>
Ex. 1: How many files are there in the subdirectories?
In this example, we look for the number of files that are included in all the directory and its subdirectories.
import os
def count(dir, counter=0):
"returns number of files in dir and subdirs"
for pack in os.walk(dir):
for f in pack[2]:
counter += 1
return dir + " : " + str(counter) + "files"
print(count("F:\python"))
> output
>'F:\python' : 12057 files'
Ex.2: How to copy all files from a directory to another?
A script to make order in your computer finding all files of a type (default: pptx) and copying them in a new folder.
import os
import shutil
from path import path
destination = "F:\file_copied"
# os.makedirs(destination)
def copyfile(dir, filetype='pptx', counter=0):
"Searches for pptx (or other - pptx is the default) files and copies them"
for pack in os.walk(dir):
for f in pack[2]:
if f.endswith(filetype):
fullpath = pack[0] + "\" + f
print(fullpath)
shutil.copy(fullpath, destination)
counter += 1
if counter > 0:
print("------------------------")
print("t==> Found in: `" + dir + "` : " + str(counter) + " filesn")
for dir in os.listdir():
"searches for folders that starts with `_`"
if dir[0] == '_':
# copyfile(dir, filetype='pdf')
copyfile(dir, filetype='txt')
> Output
_compiti18Compito Contabilità 1conti.txt
_compiti18Compito Contabilità 1modula4.txt
_compiti18Compito Contabilità 1moduloa4.txt
------------------------
==> Found in: `_compiti18` : 3 files
Ex. 3: How to get all the files in a txt file
In case you want to create a txt file with all the file names:
import os
mylist = ""
with open("filelist.txt", "w", encoding="utf-8") as file:
for eachfile in os.listdir():
mylist += eachfile + "n"
file.write(mylist)
Example: txt with all the files of an hard drive
"""We are going to save a txt file with all the files in your directory.
We will use the function walk()
"""
import os
# see all the methods of os
# print(*dir(os), sep=", ")
listafile =
percorso =
with open("lista_file.txt", "w", encoding='utf-8') as testo:
for root, dirs, files in os.walk("D:\"):
for file in files:
listafile.append(file)
percorso.append(root + "\" + file)
testo.write(file + "n")
listafile.sort()
print("N. of files", len(listafile))
with open("lista_file_ordinata.txt", "w", encoding="utf-8") as testo_ordinato:
for file in listafile:
testo_ordinato.write(file + "n")
with open("percorso.txt", "w", encoding="utf-8") as file_percorso:
for file in percorso:
file_percorso.write(file + "n")
os.system("lista_file.txt")
os.system("lista_file_ordinata.txt")
os.system("percorso.txt")
All the file of C:\ in one text file
This is a shorter version of the previous code. Change the folder where to start finding the files if you need to start from another position. This code generate a 50 mb on text file on my computer with something less then 500.000 lines with files with the complete path.
import os
with open("file.txt", "w", encoding="utf-8") as filewrite:
for r, d, f in os.walk("C:\"):
for file in f:
filewrite.write(f"{r + file}n")
A function to search for a certain type of file
import os
def searchfiles(extension='.ttf'):
"Create a txt file with all the file of a type"
with open("file.txt", "w", encoding="utf-8") as filewrite:
for r, d, f in os.walk("C:\"):
for file in f:
if file.endswith(extension):
filewrite.write(f"{r + file}n")
# looking for ttf file (fonts)
searchfiles('ttf')
Get a list of files with Python 2 and 3
I have also made a short video here: Python: how to get a list of file in a directory
os.listdir()
or..... how to get all the files (and directories) in current directory (Python 3)
The simplest way to have the file in the current directory in Python 3 is this. It's really simple; use the os
module and the listdir() function and you'll have the file in that directory (and eventual folders that are in the directory, but you will not have the file in the subdirectory, for that you can use walk - I will talk about it later).
>>> import os
>>> arr = os.listdir()
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
Using glob
I found glob easier to select file of the same type or with something in common. Look at the following example:
import glob
txtfiles =
for file in glob.glob("*.txt"):
txtfiles.append(file)
Using list comprehension
import glob
mylist = [f for f in glob.glob("*.txt")]
Getting the full path name with os.path.abspath
As you noticed, you don't have the full path of the file in the code above. If you need to have the absolute path, you can use another function of the os.path
module called _getfullpathname
, putting the file that you get from os.listdir()
as an argument. There are other ways to have the full path, as we will check later (I replaced, as suggested by mexmex, _getfullpathname with abspath
).
>>> import os
>>> files_path = [os.path.abspath(x) for x in os.listdir()]
>>> files_path
['F:\documentiapplications.txt', 'F:\documenticollections.txt']
Get the full path name of a type of file into all subdirectories with walk
I find this very useful to find stuff in many directories, and it helped me finding a file about which I didn't remember the name:
import os
# Getting the current work directory (cwd)
thisdir = os.getcwd()
# r=root, d=directories, f = files
for r, d, f in os.walk(thisdir):
for file in f:
if ".docx" in file:
print(os.path.join(r, file))
os.listdir(): get files in the current directory (Python 2)
In Python 2 you, if you want the list of the files in the current directory, you have to give the argument as '.' or os.getcwd() in the os.listdir method.
>>> import os
>>> arr = os.listdir('.')
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
To go up in the directory tree
>>> # Method 1
>>> x = os.listdir('..')
# Method 2
>>> x= os.listdir('/')
Get files: os.listdir() in a particular directory (Python 2 and 3)
>>> import os
>>> arr = os.listdir('F:\python')
>>> arr
['$RECYCLE.BIN', 'work.txt', '3ebooks.txt', 'documents']
Get files of a particular subdirectory with os.listdir()
import os
x = os.listdir("./content")
os.walk('.') - current directory
>>> import os
>>> arr = next(os.walk('.'))[2]
>>> arr
['5bs_Turismo1.pdf', '5bs_Turismo1.pptx', 'esperienza.txt']
glob module - all files
import glob
print(glob.glob("*"))
out:['content', 'start.py']
next(os.walk('.')) and os.path.join('dir','file')
>>> import os
>>> arr =
>>> for d,r,f in next(os.walk("F:_python")):
>>> for file in f:
>>> arr.append(os.path.join(r,file))
...
>>> for f in arr:
>>> print(files)
>output
F:\_python\dict_class.py
F:\_python\programmi.txt
next(os.walk('F:') - get the full path - list comprehension
>>> [os.path.join(r,file) for r,d,f in next(os.walk("F:\_python")) for file in f]
['F:\_python\dict_class.py', 'F:\_python\programmi.txt']
os.walk - get full path - all files in sub dirs
x = [os.path.join(r,file) for r,d,f in os.walk("F:\_python") for file in f]
>>>x
['F:\_python\dict.py', 'F:\_python\progr.txt', 'F:\_python\readl.py']
os.listdir() - get only txt files
>>> arr_txt = [x for x in os.listdir() if x.endswith(".txt")]
>>> print(arr_txt)
['work.txt', '3ebooks.txt']
glob - get only txt files
>>> import glob
>>> x = glob.glob("*.txt")
>>> x
['ale.txt', 'alunni2015.txt', 'assenze.text.txt', 'text2.txt', 'untitled.txt']
Using glob to get the full path of the files
If I should need the absolute path of the files:
>>> from path import path
>>> from glob import glob
>>> x = [path(f).abspath() for f in glob("F:*.txt")]
>>> for f in x:
... print(f)
...
F:acquistionline.txt
F:acquisti_2018.txt
F:bootstrap_jquery_ecc.txt
Other use of glob
If I want all the files in the directory:
>>> x = glob.glob("*")
Using os.path.isfile to avoid directories in the list
import os.path
listOfFiles = [f for f in os.listdir() if os.path.isfile(f)]
print(listOfFiles)
> output
['a simple game.py', 'data.txt', 'decorator.py']
Using pathlib from (Python 3.4)
import pathlib
>>> flist =
>>> for p in pathlib.Path('.').iterdir():
... if p.is_file():
... print(p)
... flist.append(p)
...
error.PNG
exemaker.bat
guiprova.mp3
setup.py
speak_gui2.py
thumb.PNG
If you want to use list comprehension
>>> flist = [p for p in pathlib.Path('.').iterdir() if p.is_file()]
*You can use also just pathlib.Path() instead of pathlib.Path(".")
Use glob method in pathlib.Path()
import pathlib
py = pathlib.Path().glob("*.py")
for file in py:
print(file)
output:
stack_overflow_list.py
stack_overflow_list_tkinter.py
Get all and only files with os.walk
import os
x = [i[2] for i in os.walk('.')]
y=
for t in x:
for f in t:
y.append(f)
>>> y
['append_to_list.py', 'data.txt', 'data1.txt', 'data2.txt', 'data_180617', 'os_walk.py', 'READ2.py', 'read_data.py', 'somma_defaltdic.py', 'substitute_words.py', 'sum_data.py', 'data.txt', 'data1.txt', 'data_180617']
Get only files with next and walk in a directory
>>> import os
>>> x = next(os.walk('F://python'))[2]
>>> x
['calculator.bat','calculator.py']
Get only directories with next and walk in a directory
>>> import os
>>> next(os.walk('F://python'))[1] # for the current dir use ('.')
['python3','others']
Get all the subdir names with walk
>>> for r,d,f in os.walk("F:_python"):
... for dirs in d:
... print(dirs)
...
.vscode
pyexcel
pyschool.py
subtitles
_metaprogramming
.ipynb_checkpoints
os.scandir() from Python 3.5 on
>>> import os
>>> x = [f.name for f in os.scandir() if f.is_file()]
>>> x
['calculator.bat','calculator.py']
# Another example with scandir (a little variation from docs.python.org)
# This one is more efficient than os.listdir.
# In this case, it shows the files only in the current directory
# where the script is executed.
>>> import os
>>> with os.scandir() as i:
... for entry in i:
... if entry.is_file():
... print(entry.name)
...
ebookmaker.py
error.PNG
exemaker.bat
guiprova.mp3
setup.py
speakgui4.py
speak_gui2.py
speak_gui3.py
thumb.PNG
>>>
Ex. 1: How many files are there in the subdirectories?
In this example, we look for the number of files that are included in all the directory and its subdirectories.
import os
def count(dir, counter=0):
"returns number of files in dir and subdirs"
for pack in os.walk(dir):
for f in pack[2]:
counter += 1
return dir + " : " + str(counter) + "files"
print(count("F:\python"))
> output
>'F:\python' : 12057 files'
Ex.2: How to copy all files from a directory to another?
A script to make order in your computer finding all files of a type (default: pptx) and copying them in a new folder.
import os
import shutil
from path import path
destination = "F:\file_copied"
# os.makedirs(destination)
def copyfile(dir, filetype='pptx', counter=0):
"Searches for pptx (or other - pptx is the default) files and copies them"
for pack in os.walk(dir):
for f in pack[2]:
if f.endswith(filetype):
fullpath = pack[0] + "\" + f
print(fullpath)
shutil.copy(fullpath, destination)
counter += 1
if counter > 0:
print("------------------------")
print("t==> Found in: `" + dir + "` : " + str(counter) + " filesn")
for dir in os.listdir():
"searches for folders that starts with `_`"
if dir[0] == '_':
# copyfile(dir, filetype='pdf')
copyfile(dir, filetype='txt')
> Output
_compiti18Compito Contabilità 1conti.txt
_compiti18Compito Contabilità 1modula4.txt
_compiti18Compito Contabilità 1moduloa4.txt
------------------------
==> Found in: `_compiti18` : 3 files
Ex. 3: How to get all the files in a txt file
In case you want to create a txt file with all the file names:
import os
mylist = ""
with open("filelist.txt", "w", encoding="utf-8") as file:
for eachfile in os.listdir():
mylist += eachfile + "n"
file.write(mylist)
Example: txt with all the files of an hard drive
"""We are going to save a txt file with all the files in your directory.
We will use the function walk()
"""
import os
# see all the methods of os
# print(*dir(os), sep=", ")
listafile = []
percorso = []
with open("lista_file.txt", "w", encoding='utf-8') as testo:
    for root, dirs, files in os.walk("D:\\"):
        for file in files:
            listafile.append(file)
            percorso.append(root + "\\" + file)
            testo.write(file + "\n")
listafile.sort()
print("N. of files", len(listafile))
with open("lista_file_ordinata.txt", "w", encoding="utf-8") as testo_ordinato:
    for file in listafile:
        testo_ordinato.write(file + "\n")
with open("percorso.txt", "w", encoding="utf-8") as file_percorso:
    for file in percorso:
        file_percorso.write(file + "\n")
os.system("lista_file.txt")
os.system("lista_file_ordinata.txt")
os.system("percorso.txt")
All the files of C:\ in one text file
This is a shorter version of the previous code. Change the starting folder if you need to begin from another location. On my computer this code generates a text file of about 50 MB, with a bit less than 500,000 lines, each holding a file with its complete path.
import os
with open("file.txt", "w", encoding="utf-8") as filewrite:
for r, d, f in os.walk("C:\"):
for file in f:
filewrite.write(f"{r + file}n")
A function to search for a certain type of file
import os
def searchfiles(extension='.ttf'):
    "Create a txt file with all the files of a given type"
    with open("file.txt", "w", encoding="utf-8") as filewrite:
        for r, d, f in os.walk("C:\\"):
            for file in f:
                if file.endswith(extension):
                    filewrite.write(f"{r + file}\n")

# looking for ttf files (fonts)
searchfiles('ttf')
edited Oct 26 '18 at 14:38
answered Jan 3 '17 at 15:36
Giovanni Gianni
2
You should include the path argument to listdir.
– Alejandro Sazo
Jan 3 '17 at 15:47
2
It's definitely encouraged to include some context/explanation for code as that makes the answer more useful.
– EJoshuaS
Jan 3 '17 at 16:07
2
I agree, but I also didn't notice that Python 2 requires the argument whilst in Python 3 it is optional. It would be great if you improved the answer for both Python versions :)
– Alejandro Sazo
Jan 3 '17 at 16:44
1
Ok, I went into Python 2, found the differences, and edited the post.
– Giovanni Gianni
Jan 18 '17 at 21:16
1
There is no reason to do [f for f in os.listdir()]; os.listdir() already returns a list, so that's just needlessly copying the original list before throwing it away.
– ShadowRanger
May 6 '17 at 0:08
A one-line solution to get only a list of files (no subdirectories):
filenames = next(os.walk(path))[2]
or absolute pathnames:
paths = [os.path.join(path,fn) for fn in next(os.walk(path))[2]]
6
Only a one-liner if you've already done import os. Seems less concise than glob() to me.
– ArtOfWarfare
Nov 28 '14 at 20:22
3
problem with glob is that a folder called 'something.something' would be returned by glob('/home/adam/*.*')
– Remi
Dec 1 '14 at 9:08
2
On OS X, there's something called a bundle. It's a directory which should generally be treated as a file (like a .tar). Would you want those treated as a file or a directory? Using glob() would treat it as a file. Your method would treat it as a directory.
– ArtOfWarfare
Dec 1 '14 at 19:44
edited Jan 14 '15 at 18:25
Al Lelopath
answered Jan 18 '14 at 17:42
Remi
Getting Full File Paths From a Directory and All Its Subdirectories
import os
def get_filepaths(directory):
"""
This function will generate the file names in a directory
tree by walking the tree either top-down or bottom-up. For each
directory in the tree rooted at directory top (including top itself),
it yields a 3-tuple (dirpath, dirnames, filenames).
"""
file_paths = []  # List which will store all of the full filepaths.
# Walk the tree.
for root, directories, files in os.walk(directory):
for filename in files:
# Join the two strings in order to form the full filepath.
filepath = os.path.join(root, filename)
file_paths.append(filepath) # Add it to the list.
return file_paths # Self-explanatory.
# Run the above function and store its results in a variable.
full_file_paths = get_filepaths("/Users/johnny/Desktop/TEST")
- The path I provided in the above function contained 3 files— two of them in the root directory, and another in a subfolder called "SUBFOLDER." You can now do things like:
print(full_file_paths)
which will print the list:
['/Users/johnny/Desktop/TEST/file1.txt', '/Users/johnny/Desktop/TEST/file2.txt', '/Users/johnny/Desktop/TEST/SUBFOLDER/file3.dat']
If you'd like, you can open and read the contents, or focus only on files with the extension ".dat" like in the code below:
for f in full_file_paths:
    if f.endswith(".dat"):
        print(f)
/Users/johnny/Desktop/TEST/SUBFOLDER/file3.dat
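For comparison, a similar full-path listing can be sketched with pathlib (this is not part of the original answer; it assumes Python 3.5+ and the same TEST folder):
from pathlib import Path

def get_filepaths_pathlib(directory):
    # Recursively collect the full paths of all files under the tree, as strings.
    return [str(p) for p in Path(directory).rglob("*") if p.is_file()]

print(get_filepaths_pathlib("/Users/johnny/Desktop/TEST"))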
edited Apr 24 '17 at 1:57
Vallentin
answered Oct 11 '13 at 0:55
Johnny
Since version 3.4 there are builtin iterators for this which are a lot more efficient than os.listdir()
:
pathlib
: New in version 3.4.
>>> import pathlib
>>> [p for p in pathlib.Path('.').iterdir() if p.is_file()]
According to PEP 428, the aim of the pathlib
library is to provide a simple hierarchy of classes to handle filesystem paths and the common operations users do over them.
os.scandir()
: New in version 3.5.
>>> import os
>>> [entry for entry in os.scandir('.') if entry.is_file()]
Note that os.walk()
uses os.scandir()
instead of os.listdir()
from version 3.5, and its speed got increased by 2-20 times according to PEP 471.
Let me also recommend reading ShadowRanger's comment below.
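As a small usage sketch (not from the original answer; some_dir is a placeholder directory name), the iterator can be turned into plain name strings or filtered by extension:
import pathlib
p = pathlib.Path('some_dir')  # placeholder; use pathlib.Path('.') for the current directory
names = [entry.name for entry in p.iterdir() if entry.is_file()]  # bare file names
txt_paths = [str(entry) for entry in p.glob('*.txt') if entry.is_file()]  # only .txt files, as path strings
print(names, txt_paths)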
1
Thanks! I think it is the only solution not directly returning a list. Could use p.name instead of the first p alternatively if preferred.
– JeromeJ
Jun 22 '15 at 12:36
1
Welcome! I would prefer generating pathlib.Path() instances since they have many useful methods I would not want to waste. You can also call str(p) on them for path names.
– SzieberthAdam
Jul 13 '15 at 14:56
4
Note: The os.scandir solution is going to be more efficient than os.listdir with an os.path.isfile check or the like, even if you need a list (so you don't benefit from lazy iteration), because os.scandir uses OS-provided APIs that give you the is_file information for free as it iterates, with no per-file round trip to the disk to stat them at all (on Windows, the DirEntrys get you complete stat info for free; on *NIX systems it needs to stat for info beyond is_file, is_dir, etc., but DirEntry caches on first stat for convenience).
– ShadowRanger
Nov 20 '15 at 22:38
I've found this to be the most helpful solution (using pathlib). I can easily get specific extension types and absolute paths. Thank you!
– HEADLESS_0NE
Mar 17 '16 at 15:33
1
You can also use entry.name to get only the file name, or entry.path to get its full path. No more os.path.join() all over the place.
– user136036
Mar 28 '17 at 20:26
edited May 23 '18 at 18:41
Peter Mortensen
answered Jun 18 '15 at 20:58
SzieberthAdam
I really liked adamk's answer, suggesting that you use glob(), from the module of the same name. This allows you to have pattern matching with *s.
But as other people pointed out in the comments, glob()
can get tripped up over inconsistent slash directions. To help with that, I suggest you use the join()
and expanduser()
functions in the os.path
module, and perhaps the getcwd()
function in the os
module, as well.
As examples:
from glob import glob
# Return everything under C:\Users\admin that contains a folder called wlp.
glob('C:\\Users\\admin\\*\\wlp')
The above is terrible - the path has been hardcoded and will only ever work on Windows, due to the drive name and the backslashes being hardcoded into the path.
from glob import glob
from os.path import join
# Return everything under Users, admin, that contains a folder called wlp.
glob(join('Users', 'admin', '*', 'wlp'))
The above works better, but it relies on the folder name Users
which is often found on Windows and not so often found on other OSs. It also relies on the user having a specific name, admin
.
from glob import glob
from os.path import expanduser, join
# Return everything under the user directory that contains a folder called wlp.
glob(join(expanduser('~'), '*', 'wlp'))
This works perfectly across all platforms.
Another great example that works perfectly across platforms and does something a bit different:
from glob import glob
from os import getcwd
from os.path import join
# Return everything under the current directory that contains a folder called wlp.
glob(join(getcwd(), '*', 'wlp'))
Hope these examples help you see the power of a few of the functions you can find in the standard Python library modules.
4
Extra glob fun: starting in Python 3.5, ** works as long as you set recursive=True. See the docs here: docs.python.org/3.5/library/glob.html#glob.glob
– ArtOfWarfare
Jan 26 '15 at 3:24
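To illustrate that comment (a sketch, assuming Python 3.5+; my_project is a hypothetical folder name), a recursive glob looks like this:
from glob import glob
from os.path import expanduser, join

# Every .py file anywhere under the (hypothetical) my_project folder in the user's home directory.
matches = glob(join(expanduser('~'), 'my_project', '**', '*.py'), recursive=True)
print(matches)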
edited May 23 '17 at 11:47
Community♦
answered Jul 9 '14 at 11:43
ArtOfWarfare
Preliminary notes
- Although there's a clear differentiation between file and directory terms in the question text, some may argue that directories are actually special files
- The statement: "all files of a directory" can be interpreted in two ways:
- All direct (or level 1) descendants only
- All descendants in the whole directory tree (including the ones in sub-directories)
When the question was asked, I imagine that Python 2 was the LTS version; however, the code samples will be run by Python 3(.5) (I'll keep them as Python 2 compliant as possible; also, any code belonging to Python that I'm going to post is from v3.5.4, unless otherwise specified). That has consequences related to another keyword in the question: "add them into a list":
- In pre Python 2.2 versions, sequences (iterables) were mostly represented by lists (tuples, sets, ...)
- In Python 2.2, the concept of generator ([Python.Wiki]: Generators) - courtesy of [Python 3]: The yield statement - was introduced. As time passed, generator counterparts started to appear for functions that returned/worked with lists
- In Python 3, generator is the default behavior
- Not sure if returning a list is still mandatory (or a generator would do as well), but passing a generator to the list constructor will create a list out of it (and also consume it). The example below illustrates the differences on [Python 3]: map(function, iterable, ...)
>>> import sys
>>> sys.version
'2.7.10 (default, Mar 8 2016, 15:02:46) [MSC v.1600 64 bit (AMD64)]'
>>> m = map(lambda x: x, [1, 2, 3]) # Just a dummy lambda function
>>> m, type(m)
([1, 2, 3], <type 'list'>)
>>> len(m)
3
>>> import sys
>>> sys.version
'3.5.4 (v3.5.4:3f56838, Aug 8 2017, 02:17:05) [MSC v.1900 64 bit (AMD64)]'
>>> m = map(lambda x: x, [1, 2, 3])
>>> m, type(m)
(<map object at 0x000001B4257342B0>, <class 'map'>)
>>> len(m)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'map' has no len()
>>> lm0 = list(m) # Build a list from the generator
>>> lm0, type(lm0)
([1, 2, 3], <class 'list'>)
>>>
>>> lm1 = list(m) # Build a list from the same generator
>>> lm1, type(lm1) # Empty list now - generator already consumed
([], <class 'list'>)
The examples will be based on a directory called root_dir with the following structure (this example is for Win, but I'm using the same tree on Lnx as well):
E:\Work\Dev\StackOverflow\q003207219>tree /f "root_dir"
Folder PATH listing for volume Work
Volume serial number is 00000029 3655:6FED
E:\WORK\DEV\STACKOVERFLOW\Q003207219\ROOT_DIR
¦ file0
¦ file1
¦
+---dir0
¦ +---dir00
¦ ¦ ¦ file000
¦ ¦ ¦
¦ ¦ +---dir000
¦ ¦ file0000
¦ ¦
¦ +---dir01
¦ ¦ file010
¦ ¦ file011
¦ ¦
¦ +---dir02
¦ +---dir020
¦ +---dir0200
+---dir1
¦ file10
¦ file11
¦ file12
¦
+---dir2
¦ ¦ file20
¦ ¦
¦ +---dir20
¦ file200
¦
+---dir3
Solutions
Programmatic approaches:
[Python 3]: os.listdir(path='.')
Return a list containing the names of the entries in the directory given by path. The list is in arbitrary order, and does not include the special entries
'.'
and'..'
...
>>> import os
>>> root_dir = "root_dir" # Path relative to current dir (os.getcwd())
>>>
>>> os.listdir(root_dir) # List all the items in root_dir
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [item for item in os.listdir(root_dir) if os.path.isfile(os.path.join(root_dir, item))] # Filter items and only keep files (strip out directories)
['file0', 'file1']
A more elaborate example (code_os_listdir.py):
import os
from pprint import pformat
def _get_dir_content(path, include_folders, recursive):
entries = os.listdir(path)
for entry in entries:
entry_with_path = os.path.join(path, entry)
if os.path.isdir(entry_with_path):
if include_folders:
yield entry_with_path
if recursive:
for sub_entry in _get_dir_content(entry_with_path, include_folders, recursive):
yield sub_entry
else:
yield entry_with_path
def get_dir_content(path, include_folders=True, recursive=True, prepend_folder_name=True):
path_len = len(path) + len(os.path.sep)
for item in _get_dir_content(path, include_folders, recursive):
yield item if prepend_folder_name else item[path_len:]
def _get_dir_content_old(path, include_folders, recursive):
entries = os.listdir(path)
ret = list()
for entry in entries:
entry_with_path = os.path.join(path, entry)
if os.path.isdir(entry_with_path):
if include_folders:
ret.append(entry_with_path)
if recursive:
ret.extend(_get_dir_content_old(entry_with_path, include_folders, recursive))
else:
ret.append(entry_with_path)
return ret
def get_dir_content_old(path, include_folders=True, recursive=True, prepend_folder_name=True):
path_len = len(path) + len(os.path.sep)
return [item if prepend_folder_name else item[path_len:] for item in _get_dir_content_old(path, include_folders, recursive)]
def main():
root_dir = "root_dir"
ret0 = get_dir_content(root_dir, include_folders=True, recursive=True, prepend_folder_name=True)
lret0 = list(ret0)
print(ret0, len(lret0), pformat(lret0))
ret1 = get_dir_content_old(root_dir, include_folders=False, recursive=True, prepend_folder_name=False)
print(len(ret1), pformat(ret1))
if __name__ == "__main__":
main()
Notes:
- There are two implementations:
- One that uses generators (of course here it seems useless, since I immediately convert the result to a list)
- The classic one (function names ending in _old)
- Recursion is used (to get into subdirectories)
- For each implementation there are two functions:
- One that starts with an underscore (_): "private" (should not be called directly) - that does all the work
- The public one (wrapper over previous): it just strips off the initial path (if required) from the returned entries. It's an ugly implementation, but it's the only idea that I could come up with at this point
- In terms of performance, generators are generally a little bit faster (considering both creation and iteration times), but I didn't test them in recursive functions, and I am also iterating inside the function over inner generators - I don't know how performance friendly that is
- Play with the arguments to get different results
Output:
(py35x64_test) E:\Work\Dev\StackOverflow\q003207219>"e:\Work\Dev\VEnvs\py35x64_test\Scripts\python.exe" "code_os_listdir.py"
<generator object get_dir_content at 0x000001BDDBB3DF10> 22 ['root_dir\dir0',
'root_dir\dir0\dir00',
'root_dir\dir0\dir00\dir000',
'root_dir\dir0\dir00\dir000\file0000',
'root_dir\dir0\dir00\file000',
'root_dir\dir0\dir01',
'root_dir\dir0\dir01\file010',
'root_dir\dir0\dir01\file011',
'root_dir\dir0\dir02',
'root_dir\dir0\dir02\dir020',
'root_dir\dir0\dir02\dir020\dir0200',
'root_dir\dir1',
'root_dir\dir1\file10',
'root_dir\dir1\file11',
'root_dir\dir1\file12',
'root_dir\dir2',
'root_dir\dir2\dir20',
'root_dir\dir2\dir20\file200',
'root_dir\dir2\file20',
'root_dir\dir3',
'root_dir\file0',
'root_dir\file1']
11 ['dir0\dir00\dir000\file0000',
'dir0\dir00\file000',
'dir0\dir01\file010',
'dir0\dir01\file011',
'dir1\file10',
'dir1\file11',
'dir1\file12',
'dir2\dir20\file200',
'dir2\file20',
'file0',
'file1']
[Python 3]: os.scandir(path='.') (Python 3.5+, backport: [PyPI]: scandir)
Return an iterator of os.DirEntry objects corresponding to the entries in the directory given by path. The entries are yielded in arbitrary order, and the special entries
'.'
and'..'
are not included.
Using scandir() instead of listdir() can significantly increase the performance of code that also needs file type or file attribute information, because os.DirEntry objects expose this information if the operating system provides it when scanning a directory. All os.DirEntry methods may perform a system call, but is_dir() and is_file() usually only require a system call for symbolic links; os.DirEntry.stat() always requires a system call on Unix but only requires one for symbolic links on Windows.
>>> import os
>>> root_dir = os.path.join(".", "root_dir") # Explicitly prepending current directory
>>> root_dir
'.\root_dir'
>>>
>>> scandir_iterator = os.scandir(root_dir)
>>> scandir_iterator
<nt.ScandirIterator object at 0x00000268CF4BC140>
>>> [item.path for item in scandir_iterator]
['.\root_dir\dir0', '.\root_dir\dir1', '.\root_dir\dir2', '.\root_dir\dir3', '.\root_dir\file0', '.\root_dir\file1']
>>>
>>> [item.path for item in scandir_iterator] # Will yield an empty list as it was consumed by previous iteration (automatically performed by the list comprehension)
>>>
>>> scandir_iterator = os.scandir(root_dir) # Reinitialize the generator
>>> for item in scandir_iterator :
... if os.path.isfile(item.path):
... print(item.name)
...
file0
file1
Notes:
- It's similar to os.listdir
- But it's also more flexible (and offers more functionality), more Pythonic (and in some cases, faster)
[Python 3]: os.walk(top, topdown=True, onerror=None, followlinks=False)
Generate the file names in a directory tree by walking the tree either top-down or bottom-up. For each directory in the tree rooted at directory top (including top itself), it yields a 3-tuple (
dirpath
,dirnames
,filenames
).
>>> import os
>>> root_dir = os.path.join(os.getcwd(), "root_dir") # Specify the full path
>>> root_dir
'E:\Work\Dev\StackOverflow\q003207219\root_dir'
>>>
>>> walk_generator = os.walk(root_dir)
>>> root_dir_entry = next(walk_generator) # First entry corresponds to the root dir (passed as an argument)
>>> root_dir_entry
('E:\Work\Dev\StackOverflow\q003207219\root_dir', ['dir0', 'dir1', 'dir2', 'dir3'], ['file0', 'file1'])
>>>
>>> root_dir_entry[1] + root_dir_entry[2] # Display dirs and files (direct descendants) in a single list
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [os.path.join(root_dir_entry[0], item) for item in root_dir_entry[1] + root_dir_entry[2]] # Display all the entries in the previous list by their full path
['E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir1', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir3', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\file0', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\file1']
>>>
>>> for entry in walk_generator: # Display the rest of the elements (corresponding to every subdir)
... print(entry)
...
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0', ['dir00', 'dir01', 'dir02'], [])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir00', ['dir000'], ['file000'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir00\dir000', [], ['file0000'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir01', [], ['file010', 'file011'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02', ['dir020'], [])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02\dir020', ['dir0200'], [])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02\dir020\dir0200', [], [])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir1', [], ['file10', 'file11', 'file12'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2', ['dir20'], ['file20'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2\dir20', [], ['file200'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir3', [], [])
Notes:
- Under the hood, it uses os.scandir (os.listdir on older versions)
- It does the heavy lifting by recursing into subfolders
[Python 3]: glob.glob(pathname, *, recursive=False) ([Python 3]: glob.iglob(pathname, *, recursive=False))
Return a possibly-empty list of path names that match pathname, which must be a string containing a path specification. pathname can be either absolute (like
/usr/src/Python-1.5/Makefile
) or relative (like../../Tools/*/*.gif
), and can contain shell-style wildcards. Broken symlinks are included in the results (as in the shell).
...
Changed in version 3.5: Support for recursive globs using “**
”.
>>> import glob, os
>>> wildcard_pattern = "*"
>>> root_dir = os.path.join("root_dir", wildcard_pattern) # Match every file/dir name
>>> root_dir
'root_dir\*'
>>>
>>> glob_list = glob.glob(root_dir)
>>> glob_list
['root_dir\dir0', 'root_dir\dir1', 'root_dir\dir2', 'root_dir\dir3', 'root_dir\file0', 'root_dir\file1']
>>>
>>> [item.replace("root_dir" + os.path.sep, "") for item in glob_list] # Strip the dir name and the path separator from begining
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> for entry in glob.iglob(root_dir + "*", recursive=True):
... print(entry)
...
root_dir
root_dir\dir0
root_dir\dir0\dir00
root_dir\dir0\dir00\dir000
root_dir\dir0\dir00\dir000\file0000
root_dir\dir0\dir00\file000
root_dir\dir0\dir01
root_dir\dir0\dir01\file010
root_dir\dir0\dir01\file011
root_dir\dir0\dir02
root_dir\dir0\dir02\dir020
root_dir\dir0\dir02\dir020\dir0200
root_dir\dir1
root_dir\dir1\file10
root_dir\dir1\file11
root_dir\dir1\file12
root_dir\dir2
root_dir\dir2\dir20
root_dir\dir2\dir20\file200
root_dir\dir2\file20
root_dir\dir3
root_dir\file0
root_dir\file1
Notes:
- Uses os.listdir
- For large trees (especially if recursive is on), iglob is preferred
- Allows advanced filtering based on name (due to the wildcard)
[Python 3]: class pathlib.Path(*pathsegments) (Python 3.4+, backport: [PyPI]: pathlib2)
>>> import pathlib
>>> root_dir = "root_dir"
>>> root_dir_instance = pathlib.Path(root_dir)
>>> root_dir_instance
WindowsPath('root_dir')
>>> root_dir_instance.name
'root_dir'
>>> root_dir_instance.is_dir()
True
>>>
>>> [item.name for item in root_dir_instance.glob("*")] # Wildcard searching for all direct descendants
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [os.path.join(item.parent.name, item.name) for item in root_dir_instance.glob("*") if not item.is_dir()] # Display paths (including parent) for files only
['root_dir\file0', 'root_dir\file1']
Notes:
- This is one way of achieving our goal
- It's the OOP style of handling paths
- Offers lots of functionalities
[Python 2]: dircache.listdir(path) (Python 2 only)
- But, according to [GitHub]: python/cpython - (2.7) cpython/Lib/dircache.py, it's just a (thin) wrapper over os.listdir with caching
def listdir(path):
"""List directory contents, using cache."""
try:
cached_mtime, list = cache[path]
del cache[path]
except KeyError:
cached_mtime, list = -1, []
mtime = os.stat(path).st_mtime
if mtime != cached_mtime:
list = os.listdir(path)
list.sort()
cache[path] = mtime, list
return list
[man7]: OPENDIR(3) / [man7]: READDIR(3) / [man7]: CLOSEDIR(3) via [Python 3]: ctypes - A foreign function library for Python (POSIX specific)
ctypes is a foreign function library for Python. It provides C compatible data types, and allows calling functions in DLLs or shared libraries. It can be used to wrap these libraries in pure Python.
code_ctypes.py:
#!/usr/bin/env python3
import sys
from ctypes import (Structure,
    c_ulonglong, c_longlong, c_ushort, c_ubyte, c_char, c_int,
    CDLL, POINTER,
    create_string_buffer, get_errno, set_errno, cast)
DT_DIR = 4
DT_REG = 8
char256 = c_char * 256
class LinuxDirent64(Structure):
_fields_ = [
("d_ino", c_ulonglong),
("d_off", c_longlong),
("d_reclen", c_ushort),
("d_type", c_ubyte),
("d_name", char256),
]
LinuxDirent64Ptr = POINTER(LinuxDirent64)
libc_dll = this_process = CDLL(None, use_errno=True)
# ALWAYS set argtypes and restype for functions, otherwise it's UB!!!
opendir = libc_dll.opendir
readdir = libc_dll.readdir
closedir = libc_dll.closedir
def get_dir_content(path):
ret = [path, list(), list()]
dir_stream = opendir(create_string_buffer(path.encode()))
if (dir_stream == 0):
print("opendir returned NULL (errno: {:d})".format(get_errno()))
return ret
set_errno(0)
dirent_addr = readdir(dir_stream)
while dirent_addr:
dirent_ptr = cast(dirent_addr, LinuxDirent64Ptr)
dirent = dirent_ptr.contents
name = dirent.d_name.decode()
if dirent.d_type & DT_DIR:
if name not in (".", ".."):
ret[1].append(name)
elif dirent.d_type & DT_REG:
ret[2].append(name)
dirent_addr = readdir(dir_stream)
if get_errno():
print("readdir returned NULL (errno: {:d})".format(get_errno()))
closedir(dir_stream)
return ret
def main():
print("{:s} on {:s}n".format(sys.version, sys.platform))
root_dir = "root_dir"
entries = get_dir_content(root_dir)
print(entries)
if __name__ == "__main__":
main()
Notes:
- It loads the three functions from libc (loaded in the current process) and calls them (for more details check [SO]: How do I check whether a file exists without exceptions? (@CristiFati's answer) - last notes from item #4.). That would place this approach very close to the Python / C edge
LinuxDirent64 is the ctypes representation of struct dirent64 from [man7]: dirent.h(0P) (so are the DT_ constants) from my machine: Ubtu 16 x64 (4.10.0-40-generic and libc6-dev:amd64). On other flavors/versions, the struct definition might differ, and if so, the ctypes alias should be updated, otherwise it will yield Undefined Behavior
- It returns data in os.walk's format. I didn't bother to make it recursive, but starting from the existing code, that would be a fairly trivial task
- Everything is doable on Win as well, but the data (libraries, functions, structs, constants, ...) differ
Output:
[cfati@cfati-ubtu16x64-0:~/Work/Dev/StackOverflow/q003207219]> ./code_ctypes.py
3.5.2 (default, Nov 12 2018, 13:43:14)
[GCC 5.4.0 20160609] on linux
['root_dir', ['dir2', 'dir1', 'dir3', 'dir0'], ['file1', 'file0']]
[ActiveState]: win32file.FindFilesW (Win specific)
Retrieves a list of matching filenames, using the Windows Unicode API. An interface to the API FindFirstFileW/FindNextFileW/Find close functions.
>>> import os, win32file, win32con
>>> root_dir = "root_dir"
>>> wildcard = "*"
>>> root_dir_wildcard = os.path.join(root_dir, wildcard)
>>> entry_list = win32file.FindFilesW(root_dir_wildcard)
>>> len(entry_list) # Don't display the whole content as it's too long
8
>>> [entry[-2] for entry in entry_list] # Only display the entry names
['.', '..', 'dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [entry[-2] for entry in entry_list if entry[0] & win32con.FILE_ATTRIBUTE_DIRECTORY and entry[-2] not in (".", "..")] # Filter entries and only display dir names (except self and parent)
['dir0', 'dir1', 'dir2', 'dir3']
>>>
>>> [os.path.join(root_dir, entry[-2]) for entry in entry_list if entry[0] & (win32con.FILE_ATTRIBUTE_NORMAL | win32con.FILE_ATTRIBUTE_ARCHIVE)] # Only display file "full" names
['root_dir\file0', 'root_dir\file1']
Notes:
- win32file.FindFilesW is part of [GitHub]: mhammond/pywin32 - Python for Windows (pywin32) Extensions, which is a Python wrapper over WINAPIs
- The documentation link is from ActiveState, as I didn't find any official pywin32 documentation
- Install some (other) third-party package that does the trick
- Most likely, will rely on one (or more) of the above (maybe with slight customizations)
Notes:
Code is meant to be portable (except places that target a specific area - which are marked) or cross:
- platform (Nix, Win, ...)
- Python version (2, 3, ...)
- Multiple path styles (absolute, relative) were used across the above variants, to illustrate the fact that the "tools" used are flexible in this direction
- os.listdir and os.scandir use opendir / readdir / closedir ([MS.Docs]: FindFirstFileW function / [MS.Docs]: FindNextFileW function / [MS.Docs]: FindClose function) (via [GitHub]: python/cpython - (master) cpython/Modules/posixmodule.c)
- win32file.FindFilesW uses those (Win specific) functions as well (via [GitHub]: mhammond/pywin32 - (master) pywin32/win32/src/win32file.i)
_get_dir_content (from point #1.) can be implemented using any of these approaches (some will require more work and some less)
- Some advanced filtering (instead of just file vs. dir) could be done: e.g. the include_folders argument could be replaced by another one (e.g. filter_func) which would be a function that takes a path as an argument: filter_func=lambda x: True (this doesn't strip out anything), and inside _get_dir_content something like: if not filter_func(entry_with_path): continue (if the function fails for one entry, it will be skipped), but the more complex the code becomes, the longer it will take to execute (a minimal sketch of this idea follows these notes)
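A minimal sketch of that filter_func idea (not the answer's exact code: the simplified generator below only stands in for _get_dir_content, only_txt is an illustrative filter, and it is run against the root_dir tree from the examples):
import os

def get_dir_content_filtered(path, filter_func, recursive=True):
    # Simplified stand-in for _get_dir_content above, with the filter_func idea applied.
    for entry in os.listdir(path):
        entry_with_path = os.path.join(path, entry)
        if not filter_func(entry_with_path):
            continue  # entries for which the filter returns False are skipped
        yield entry_with_path
        if recursive and os.path.isdir(entry_with_path):
            yield from get_dir_content_filtered(entry_with_path, filter_func, recursive)

# Example filter: let directories through (so recursion can continue), otherwise keep only ".txt" files.
only_txt = lambda p: os.path.isdir(p) or p.endswith(".txt")
print(list(get_dir_content_filtered("root_dir", only_txt)))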
Nota bene! Since recursion is used, I must mention that I did some tests on my laptop (Win 10 x64), totally unrelated to this problem, and when the recursion level was reaching values somewhere in the (990 .. 1000) range (recursionlimit - 1000 (default)), I got StackOverflow :). If the directory tree exceeds that limit (I am not an FS expert, so I don't know if that is even possible), that could be a problem.
I must also mention that I didn't try to increase recursionlimit, because I have no experience in the area (how much can I increase it before having to also increase the stack at the OS level), but in theory there will always be the possibility of failure if the dir depth is larger than the highest possible recursionlimit (on that machine).
The code samples are for demonstrative purposes only. That means that I didn't take error handling into account (I don't think there's any try / except / else / finally block), so the code is not robust (the reason is: to keep it as simple and short as possible). For production, error handling should be added as well.
Other approaches:
Use Python only as a wrapper
- Everything is done using another technology
- That technology is invoked from Python
The most famous flavor that I know is what I call the system administrator approach:
- Use Python (or any programming language for that matter) in order to execute shell commands (and parse their outputs)
- Some consider this a neat hack
- I consider it more like a lame workaround (gainarie), as the action per se is performed from shell (cmd in this case), and thus doesn't have anything to do with Python.
- Filtering (grep / findstr) or output formatting could be done on both sides, but I'm not going to insist on it. Also, I deliberately used os.system instead of subprocess.Popen.
(py35x64_test) E:\Work\Dev\StackOverflow\q003207219>"e:\Work\Dev\VEnvs\py35x64_test\Scripts\python.exe" -c "import os;os.system(\"dir /b root_dir\")"
dir0
dir1
dir2
dir3
file0
file1
In general this approach is to be avoided, since if some command output format differs slightly between OS versions/flavors, the parsing code should be adapted as well (not to mention differences between locales).
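If one does go down that road anyway, a slightly more robust sketch (not part of the original answer; it assumes Python 3.7+ for capture_output) would use subprocess.run so the output can be captured and parsed instead of merely echoed:
import subprocess
import sys

# Build an OS-dependent listing command ("dir" only exists in cmd; plain "ls" is used elsewhere).
cmd = ["cmd", "/c", "dir", "/b", "root_dir"] if sys.platform == "win32" else ["ls", "root_dir"]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
entries = result.stdout.splitlines()
print(entries)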
You had posted it, but I had cleaned it up once I had read it :-)
– Martijn Pieters♦
Dec 9 '18 at 11:20
def main():
root_dir = "root_dir"
ret0 = get_dir_content(root_dir, include_folders=True, recursive=True, prepend_folder_name=True)
lret0 = list(ret0)
print(ret0, len(lret0), pformat(lret0))
ret1 = get_dir_content_old(root_dir, include_folders=False, recursive=True, prepend_folder_name=False)
print(len(ret1), pformat(ret1))
if __name__ == "__main__":
main()
Notes:
- There are two implementations:
- One that uses generators (of course here it seems useless, since I immediately convert the result to a list)
- The classic one (function names ending in _old)
- Recursion is used (to get into subdirectories)
- For each implementation there are two functions:
- One that starts with an underscore (_): "private" (should not be called directly) - that does all the work
- The public one (wrapper over previous): it just strips off the initial path (if required) from the returned entries. It's an ugly implementation, but it's the only idea that I could come with at this point
- In terms of performance, generators are generally a little bit faster (considering both creation and iteration times), but I didn't test them in recursive functions, and also I am iterating inside the function over inner generators - don't know how performance friendly is that
- Play with the arguments to get different results
Output:
(py35x64_test) E:WorkDevStackOverflowq003207219>"e:WorkDevVEnvspy35x64_testScriptspython.exe" "code_os_listdir.py"
<generator object get_dir_content at 0x000001BDDBB3DF10> 22 ['root_dir\dir0',
'root_dir\dir0\dir00',
'root_dir\dir0\dir00\dir000',
'root_dir\dir0\dir00\dir000\file0000',
'root_dir\dir0\dir00\file000',
'root_dir\dir0\dir01',
'root_dir\dir0\dir01\file010',
'root_dir\dir0\dir01\file011',
'root_dir\dir0\dir02',
'root_dir\dir0\dir02\dir020',
'root_dir\dir0\dir02\dir020\dir0200',
'root_dir\dir1',
'root_dir\dir1\file10',
'root_dir\dir1\file11',
'root_dir\dir1\file12',
'root_dir\dir2',
'root_dir\dir2\dir20',
'root_dir\dir2\dir20\file200',
'root_dir\dir2\file20',
'root_dir\dir3',
'root_dir\file0',
'root_dir\file1']
11 ['dir0\dir00\dir000\file0000',
'dir0\dir00\file000',
'dir0\dir01\file010',
'dir0\dir01\file011',
'dir1\file10',
'dir1\file11',
'dir1\file12',
'dir2\dir20\file200',
'dir2\file20',
'file0',
'file1']
- There are two implementations:
[Python 3]: os.scandir(path='.') (Python 3.5+, backport: [PyPI]: scandir)
Return an iterator of os.DirEntry objects corresponding to the entries in the directory given by path. The entries are yielded in arbitrary order, and the special entries
'.'
and'..'
are not included.
Using scandir() instead of listdir() can significantly increase the performance of code that also needs file type or file attribute information, because os.DirEntry objects expose this information if the operating system provides it when scanning a directory. All os.DirEntry methods may perform a system call, but is_dir() and is_file() usually only require a system call for symbolic links; os.DirEntry.stat() always requires a system call on Unix but only requires one for symbolic links on Windows.
>>> import os
>>> root_dir = os.path.join(".", "root_dir") # Explicitly prepending current directory
>>> root_dir
'.\root_dir'
>>>
>>> scandir_iterator = os.scandir(root_dir)
>>> scandir_iterator
<nt.ScandirIterator object at 0x00000268CF4BC140>
>>> [item.path for item in scandir_iterator]
['.\root_dir\dir0', '.\root_dir\dir1', '.\root_dir\dir2', '.\root_dir\dir3', '.\root_dir\file0', '.\root_dir\file1']
>>>
>>> [item.path for item in scandir_iterator] # Will yield an empty list as it was consumed by previous iteration (automatically performed by the list comprehension)
>>>
>>> scandir_iterator = os.scandir(root_dir) # Reinitialize the generator
>>> for item in scandir_iterator :
... if os.path.isfile(item.path):
... print(item.name)
...
file0
file1
Notes:
- It's similar to
os.listdir
- But it's also more flexible (and offers more functionality), more Pythonic (and in some cases, faster)
- It's similar to
[Python 3]: os.walk(top, topdown=True, onerror=None, followlinks=False)
Generate the file names in a directory tree by walking the tree either top-down or bottom-up. For each directory in the tree rooted at directory top (including top itself), it yields a 3-tuple (
dirpath
,dirnames
,filenames
).
>>> import os
>>> root_dir = os.path.join(os.getcwd(), "root_dir") # Specify the full path
>>> root_dir
'E:\Work\Dev\StackOverflow\q003207219\root_dir'
>>>
>>> walk_generator = os.walk(root_dir)
>>> root_dir_entry = next(walk_generator) # First entry corresponds to the root dir (passed as an argument)
>>> root_dir_entry
('E:\Work\Dev\StackOverflow\q003207219\root_dir', ['dir0', 'dir1', 'dir2', 'dir3'], ['file0', 'file1'])
>>>
>>> root_dir_entry[1] + root_dir_entry[2] # Display dirs and files (direct descendants) in a single list
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [os.path.join(root_dir_entry[0], item) for item in root_dir_entry[1] + root_dir_entry[2]] # Display all the entries in the previous list by their full path
['E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir1', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir3', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\file0', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\file1']
>>>
>>> for entry in walk_generator: # Display the rest of the elements (corresponding to every subdir)
... print(entry)
...
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0', ['dir00', 'dir01', 'dir02'], )
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir00', ['dir000'], ['file000'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir00\dir000', , ['file0000'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir01', , ['file010', 'file011'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02', ['dir020'], )
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02\dir020', ['dir0200'], )
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02\dir020\dir0200', , )
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir1', , ['file10', 'file11', 'file12'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2', ['dir20'], ['file20'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2\dir20', , ['file200'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir3', , )
Notes:
- Under the scenes, it uses
os.scandir
(os.listdir
on older versions) - It does the heavy lifting by recurring in subfolders
- Under the scenes, it uses
[Python 3]: glob.glob(pathname, *, recursive=False) ([Python 3]: glob.iglob(pathname, *, recursive=False))
Return a possibly-empty list of path names that match pathname, which must be a string containing a path specification. pathname can be either absolute (like
/usr/src/Python-1.5/Makefile
) or relative (like../../Tools/*/*.gif
), and can contain shell-style wildcards. Broken symlinks are included in the results (as in the shell).
...
Changed in version 3.5: Support for recursive globs using “**
”.
>>> import glob, os
>>> wildcard_pattern = "*"
>>> root_dir = os.path.join("root_dir", wildcard_pattern) # Match every file/dir name
>>> root_dir
'root_dir\*'
>>>
>>> glob_list = glob.glob(root_dir)
>>> glob_list
['root_dir\dir0', 'root_dir\dir1', 'root_dir\dir2', 'root_dir\dir3', 'root_dir\file0', 'root_dir\file1']
>>>
>>> [item.replace("root_dir" + os.path.sep, "") for item in glob_list] # Strip the dir name and the path separator from begining
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> for entry in glob.iglob(root_dir + "*", recursive=True):
... print(entry)
...
root_dir
root_dirdir0
root_dirdir0dir00
root_dirdir0dir00dir000
root_dirdir0dir00dir000file0000
root_dirdir0dir00file000
root_dirdir0dir01
root_dirdir0dir01file010
root_dirdir0dir01file011
root_dirdir0dir02
root_dirdir0dir02dir020
root_dirdir0dir02dir020dir0200
root_dirdir1
root_dirdir1file10
root_dirdir1file11
root_dirdir1file12
root_dirdir2
root_dirdir2dir20
root_dirdir2dir20file200
root_dirdir2file20
root_dirdir3
root_dirfile0
root_dirfile1
Notes:
- Uses
os.listdir
- For large trees (especially if recursive is on), iglob is preferred
- Allows advanced filtering based on name (due to the wildcard)
- Uses
[Python 3]: class pathlib.Path(*pathsegments) (Python 3.4+, backport: [PyPI]: pathlib2)
>>> import pathlib
>>> root_dir = "root_dir"
>>> root_dir_instance = pathlib.Path(root_dir)
>>> root_dir_instance
WindowsPath('root_dir')
>>> root_dir_instance.name
'root_dir'
>>> root_dir_instance.is_dir()
True
>>>
>>> [item.name for item in root_dir_instance.glob("*")] # Wildcard searching for all direct descendants
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [os.path.join(item.parent.name, item.name) for item in root_dir_instance.glob("*") if not item.is_dir()] # Display paths (including parent) for files only
['root_dir\file0', 'root_dir\file1']
Notes:
- This is one way of achieving our goal
- It's the OOP style of handling paths
- Offers lots of functionalities
[Python 2]: dircache.listdir(path) (Python 2 only)
- But, according to [GitHub]: python/cpython - (2.7) cpython/Lib/dircache.py, it's just a (thin) wrapper over
os.listdir
with caching
def listdir(path):
"""List directory contents, using cache."""
try:
cached_mtime, list = cache[path]
del cache[path]
except KeyError:
cached_mtime, list = -1,
mtime = os.stat(path).st_mtime
if mtime != cached_mtime:
list = os.listdir(path)
list.sort()
cache[path] = mtime, list
return list
- But, according to [GitHub]: python/cpython - (2.7) cpython/Lib/dircache.py, it's just a (thin) wrapper over
[man7]: OPENDIR(3) / [man7]: READDIR(3) / [man7]: CLOSEDIR(3) via [Python 3]: ctypes - A foreign function library for Python (POSIX specific)
ctypes is a foreign function library for Python. It provides C compatible data types, and allows calling functions in DLLs or shared libraries. It can be used to wrap these libraries in pure Python.
code_ctypes.py:
#!/usr/bin/env python3
import sys
from ctypes import Structure,
c_ulonglong, c_longlong, c_ushort, c_ubyte, c_char, c_int,
CDLL, POINTER,
create_string_buffer, get_errno, set_errno, cast
DT_DIR = 4
DT_REG = 8
char256 = c_char * 256
class LinuxDirent64(Structure):
_fields_ = [
("d_ino", c_ulonglong),
("d_off", c_longlong),
("d_reclen", c_ushort),
("d_type", c_ubyte),
("d_name", char256),
]
LinuxDirent64Ptr = POINTER(LinuxDirent64)
libc_dll = this_process = CDLL(None, use_errno=True)
# ALWAYS set argtypes and restype for functions, otherwise it's UB!!!
opendir = libc_dll.opendir
readdir = libc_dll.readdir
closedir = libc_dll.closedir
def get_dir_content(path):
ret = [path, list(), list()]
dir_stream = opendir(create_string_buffer(path.encode()))
if (dir_stream == 0):
print("opendir returned NULL (errno: {:d})".format(get_errno()))
return ret
set_errno(0)
dirent_addr = readdir(dir_stream)
while dirent_addr:
dirent_ptr = cast(dirent_addr, LinuxDirent64Ptr)
dirent = dirent_ptr.contents
name = dirent.d_name.decode()
if dirent.d_type & DT_DIR:
if name not in (".", ".."):
ret[1].append(name)
elif dirent.d_type & DT_REG:
ret[2].append(name)
dirent_addr = readdir(dir_stream)
if get_errno():
print("readdir returned NULL (errno: {:d})".format(get_errno()))
closedir(dir_stream)
return ret
def main():
print("{:s} on {:s}n".format(sys.version, sys.platform))
root_dir = "root_dir"
entries = get_dir_content(root_dir)
print(entries)
if __name__ == "__main__":
main()
Notes:
- It loads the three functions from libc (loaded in the current process) and calls them (for more details check [SO]: How do I check whether a file exists without exceptions? (@CristiFati's answer) - last notes from item #4.). That would place this approach very close to the Python / C edge
LinuxDirent64 is the ctypes representation of struct dirent64 from [man7]: dirent.h(0P) (so are the DT_ constants) from my machine: Ubtu 16 x64 (4.10.0-40-generic and libc6-dev:amd64). On other flavors/versions, the struct definition might differ, and if so, the ctypes alias should be updated, otherwise it will yield Undefined Behavior
- It returns data in the
os.walk
's format. I didn't bother to make it recursive, but starting from the existing code, that would be a fairly trivial task - Everything is doable on Win as well, the data (libraries, functions, structs, constants, ...) differ
Output:
[cfati@cfati-ubtu16x64-0:~/Work/Dev/StackOverflow/q003207219]> ./code_ctypes.py
3.5.2 (default, Nov 12 2018, 13:43:14)
[GCC 5.4.0 20160609] on linux
['root_dir', ['dir2', 'dir1', 'dir3', 'dir0'], ['file1', 'file0']]
[ActiveState]: win32file.FindFilesW (Win specific)
Retrieves a list of matching filenames, using the Windows Unicode API. An interface to the API FindFirstFileW/FindNextFileW/Find close functions.
>>> import os, win32file, win32con
>>> root_dir = "root_dir"
>>> wildcard = "*"
>>> root_dir_wildcard = os.path.join(root_dir, wildcard)
>>> entry_list = win32file.FindFilesW(root_dir_wildcard)
>>> len(entry_list) # Don't display the whole content as it's too long
8
>>> [entry[-2] for entry in entry_list] # Only display the entry names
['.', '..', 'dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [entry[-2] for entry in entry_list if entry[0] & win32con.FILE_ATTRIBUTE_DIRECTORY and entry[-2] not in (".", "..")] # Filter entries and only display dir names (except self and parent)
['dir0', 'dir1', 'dir2', 'dir3']
>>>
>>> [os.path.join(root_dir, entry[-2]) for entry in entry_list if entry[0] & (win32con.FILE_ATTRIBUTE_NORMAL | win32con.FILE_ATTRIBUTE_ARCHIVE)] # Only display file "full" names
['root_dir\file0', 'root_dir\file1']
Notes:
win32file.FindFilesW
is part of [GitHub]: mhammond/pywin32 - Python for Windows (pywin32) Extensions, which is a Python wrapper over WINAPIs- The documentation link is from ActiveState, as I didn't find any pywin32 official documentation
- Install some (other) third-party package that does the trick
- Most likely, will rely on one (or more) of the above (maybe with slight customizations)
Notes:
Code is meant to be portable (except places that target a specific area - which are marked) or cross:
- platform (Nix, Win, )
Python version (2, 3, )
Multiple path styles (absolute, relatives) were used across the above variants, to illustrate the fact that the "tools" used are flexible in this direction
os.listdir
andos.scandir
use opendir / readdir / closedir ([MS.Docs]: FindFirstFileW function / [MS.Docs]: FindNextFileW function / [MS.Docs]: FindClose function) (via [GitHub]: python/cpython - (master) cpython/Modules/posixmodule.c)win32file.FindFilesW
uses those (Win specific) functions as well (via [GitHub]: mhammond/pywin32 - (master) pywin32/win32/src/win32file.i)
_get_dir_content (from point #1.) can be implemented using any of these approaches (some will require more work and some less)
- Some advanced filtering (instead of just file vs. dir) could be done: e.g. the include_folders argument could be replaced by another one (e.g. filter_func) which would be a function that takes a path as an argument:
filter_func=lambda x: True
(this doesn't strip out anything) and inside _get_dir_content something like:if not filter_func(entry_with_path): continue
(if the function fails for one entry, it will be skipped), but the more complex the code becomes, the longer it will take to execute
- Some advanced filtering (instead of just file vs. dir) could be done: e.g. the include_folders argument could be replaced by another one (e.g. filter_func) which would be a function that takes a path as an argument:
Nota bene! Since recursion is used, I must mention that I did some tests on my laptop (Win 10 x64), totally unrelated to this problem, and when the recursion level was reaching values somewhere in the (990 .. 1000) range (recursionlimit - 1000 (default)), I got StackOverflow :). If the directory tree exceeds that limit (I am not an FS expert, so I don't know if that is even possible), that could be a problem.
I must also mention that I didn't try to increase recursionlimit because I have no experience in the area (how much can I increase it before having to also increase the stack at OS level), but in theory there will always be the possibility for failure, if the dir depth is larger than the highest possible recursionlimit (on that machine)The code samples are for demonstrative purposes only. That means that I didn't take into account error handling (I don't think there's any try / except / else / finally block), so the code is not robust (the reason is: to keep it as simple and short as possible). For production, error handling should be added as well
Other approaches:
Use Python only as a wrapper
- Everything is done using another technology
- That technology is invoked from Python
The most famous flavor that I know is what I call the system administrator approach:
- Use Python (or any programming language for that matter) in order to execute shell commands (and parse their outputs)
- Some consider this a neat hack
- I consider it more like a lame workaround (gainarie), as the action per se is performed from shell (cmd in this case), and thus doesn't have anything to do with Python.
- Filtering (
grep
/findstr
) or output formatting could be done on both sides, but I'm not going to insist on it. Also, I deliberately usedos.system
instead ofsubprocess.Popen
.
(py35x64_test) E:WorkDevStackOverflowq003207219>"e:WorkDevVEnvspy35x64_testScriptspython.exe" -c "import os;os.system("dir /b root_dir")"
dir0
dir1
dir2
dir3
file0
file1
In general this approach is to be avoided, since if some command output format slightly differs between OS versions/flavors, the parsing code should be adapted as well; not to mention differences between locales).
You had posted it, but I had cleaned it up once I had read it :-)
– Martijn Pieters♦
Dec 9 '18 at 11:20
add a comment |
Preliminary notes
- Although there's a clear differentiation between file and directory terms in the question text, some may argue that directories are actually special files
- The statement: "all files of a directory" can be interpreted in two ways:
- All direct (or level 1) descendants only
- All descendants in the whole directory tree (including the ones in sub-directories)
- All direct (or level 1) descendants only
When the question was asked, I imagine that Python 2, was the LTS version, however the code samples will be run by Python 3(.5) (I'll keep them as Python 2 compliant as possible; also, any code belonging to Python that I'm going to post, is from v3.5.4 - unless otherwise specified). That has consequences related to another keyword in the question: "add them into a list":
- In pre Python 2.2 versions, sequences (iterables) were mostly represented by lists (tuples, sets, ...)
- In Python 2.2, the concept of generator ([Python.Wiki]: Generators) - courtesy of [Python 3]: The yield statement) - was introduced. As time passed, generator counterparts started to appear for functions that returned/worked with lists
- In Python 3, generator is the default behavior
- Not sure if returning a list is still mandatory (or a generator would do as well), but passing a generator to the list constructor, will create a list out of it (and also consume it). The example below illustrates the differences on [Python 3]: map(function, iterable, ...)
>>> import sys
>>> sys.version
'2.7.10 (default, Mar 8 2016, 15:02:46) [MSC v.1600 64 bit (AMD64)]'
>>> m = map(lambda x: x, [1, 2, 3]) # Just a dummy lambda function
>>> m, type(m)
([1, 2, 3], <type 'list'>)
>>> len(m)
3
>>> import sys
>>> sys.version
'3.5.4 (v3.5.4:3f56838, Aug 8 2017, 02:17:05) [MSC v.1900 64 bit (AMD64)]'
>>> m = map(lambda x: x, [1, 2, 3])
>>> m, type(m)
(<map object at 0x000001B4257342B0>, <class 'map'>)
>>> len(m)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'map' has no len()
>>> lm0 = list(m) # Build a list from the generator
>>> lm0, type(lm0)
([1, 2, 3], <class 'list'>)
>>>
>>> lm1 = list(m) # Build a list from the same generator
>>> lm1, type(lm1) # Empty list now - generator already consumed
(, <class 'list'>)
The examples will be based on a directory called root_dir with the following structure (this example is for Win, but I'm using the same tree on Lnx as well):
E:WorkDevStackOverflowq003207219>tree /f "root_dir"
Folder PATH listing for volume Work
Volume serial number is 00000029 3655:6FED
E:WORKDEVSTACKOVERFLOWQ003207219ROOT_DIR
¦ file0
¦ file1
¦
+---dir0
¦ +---dir00
¦ ¦ ¦ file000
¦ ¦ ¦
¦ ¦ +---dir000
¦ ¦ file0000
¦ ¦
¦ +---dir01
¦ ¦ file010
¦ ¦ file011
¦ ¦
¦ +---dir02
¦ +---dir020
¦ +---dir0200
+---dir1
¦ file10
¦ file11
¦ file12
¦
+---dir2
¦ ¦ file20
¦ ¦
¦ +---dir20
¦ file200
¦
+---dir3
Solutions
Programmatic approaches:
[Python 3]: os.listdir(path='.')
Return a list containing the names of the entries in the directory given by path. The list is in arbitrary order, and does not include the special entries
'.'
and'..'
...
>>> import os
>>> root_dir = "root_dir" # Path relative to current dir (os.getcwd())
>>>
>>> os.listdir(root_dir) # List all the items in root_dir
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [item for item in os.listdir(root_dir) if os.path.isfile(os.path.join(root_dir, item))] # Filter items and only keep files (strip out directories)
['file0', 'file1']
A more elaborate example (code_os_listdir.py):
import os
from pprint import pformat
def _get_dir_content(path, include_folders, recursive):
entries = os.listdir(path)
for entry in entries:
entry_with_path = os.path.join(path, entry)
if os.path.isdir(entry_with_path):
if include_folders:
yield entry_with_path
if recursive:
for sub_entry in _get_dir_content(entry_with_path, include_folders, recursive):
yield sub_entry
else:
yield entry_with_path
def get_dir_content(path, include_folders=True, recursive=True, prepend_folder_name=True):
path_len = len(path) + len(os.path.sep)
for item in _get_dir_content(path, include_folders, recursive):
yield item if prepend_folder_name else item[path_len:]
def _get_dir_content_old(path, include_folders, recursive):
entries = os.listdir(path)
ret = list()
for entry in entries:
entry_with_path = os.path.join(path, entry)
if os.path.isdir(entry_with_path):
if include_folders:
ret.append(entry_with_path)
if recursive:
ret.extend(_get_dir_content_old(entry_with_path, include_folders, recursive))
else:
ret.append(entry_with_path)
return ret
def get_dir_content_old(path, include_folders=True, recursive=True, prepend_folder_name=True):
path_len = len(path) + len(os.path.sep)
return [item if prepend_folder_name else item[path_len:] for item in _get_dir_content_old(path, include_folders, recursive)]
def main():
root_dir = "root_dir"
ret0 = get_dir_content(root_dir, include_folders=True, recursive=True, prepend_folder_name=True)
lret0 = list(ret0)
print(ret0, len(lret0), pformat(lret0))
ret1 = get_dir_content_old(root_dir, include_folders=False, recursive=True, prepend_folder_name=False)
print(len(ret1), pformat(ret1))
if __name__ == "__main__":
main()
Notes:
- There are two implementations:
- One that uses generators (of course here it seems useless, since I immediately convert the result to a list)
- The classic one (function names ending in _old)
- Recursion is used (to get into subdirectories)
- For each implementation there are two functions:
- One that starts with an underscore (_): "private" (should not be called directly) - that does all the work
- The public one (wrapper over previous): it just strips off the initial path (if required) from the returned entries. It's an ugly implementation, but it's the only idea that I could come with at this point
- In terms of performance, generators are generally a little bit faster (considering both creation and iteration times), but I didn't test them in recursive functions, and also I am iterating inside the function over inner generators - don't know how performance friendly is that
- Play with the arguments to get different results
Output:
(py35x64_test) E:WorkDevStackOverflowq003207219>"e:WorkDevVEnvspy35x64_testScriptspython.exe" "code_os_listdir.py"
<generator object get_dir_content at 0x000001BDDBB3DF10> 22 ['root_dir\dir0',
'root_dir\dir0\dir00',
'root_dir\dir0\dir00\dir000',
'root_dir\dir0\dir00\dir000\file0000',
'root_dir\dir0\dir00\file000',
'root_dir\dir0\dir01',
'root_dir\dir0\dir01\file010',
'root_dir\dir0\dir01\file011',
'root_dir\dir0\dir02',
'root_dir\dir0\dir02\dir020',
'root_dir\dir0\dir02\dir020\dir0200',
'root_dir\dir1',
'root_dir\dir1\file10',
'root_dir\dir1\file11',
'root_dir\dir1\file12',
'root_dir\dir2',
'root_dir\dir2\dir20',
'root_dir\dir2\dir20\file200',
'root_dir\dir2\file20',
'root_dir\dir3',
'root_dir\file0',
'root_dir\file1']
11 ['dir0\dir00\dir000\file0000',
'dir0\dir00\file000',
'dir0\dir01\file010',
'dir0\dir01\file011',
'dir1\file10',
'dir1\file11',
'dir1\file12',
'dir2\dir20\file200',
'dir2\file20',
'file0',
'file1']
- There are two implementations:
[Python 3]: os.scandir(path='.') (Python 3.5+, backport: [PyPI]: scandir)
Return an iterator of os.DirEntry objects corresponding to the entries in the directory given by path. The entries are yielded in arbitrary order, and the special entries
'.'
and'..'
are not included.
Using scandir() instead of listdir() can significantly increase the performance of code that also needs file type or file attribute information, because os.DirEntry objects expose this information if the operating system provides it when scanning a directory. All os.DirEntry methods may perform a system call, but is_dir() and is_file() usually only require a system call for symbolic links; os.DirEntry.stat() always requires a system call on Unix but only requires one for symbolic links on Windows.
>>> import os
>>> root_dir = os.path.join(".", "root_dir") # Explicitly prepending current directory
>>> root_dir
'.\root_dir'
>>>
>>> scandir_iterator = os.scandir(root_dir)
>>> scandir_iterator
<nt.ScandirIterator object at 0x00000268CF4BC140>
>>> [item.path for item in scandir_iterator]
['.\root_dir\dir0', '.\root_dir\dir1', '.\root_dir\dir2', '.\root_dir\dir3', '.\root_dir\file0', '.\root_dir\file1']
>>>
>>> [item.path for item in scandir_iterator] # Will yield an empty list as it was consumed by previous iteration (automatically performed by the list comprehension)
>>>
>>> scandir_iterator = os.scandir(root_dir) # Reinitialize the generator
>>> for item in scandir_iterator :
... if os.path.isfile(item.path):
... print(item.name)
...
file0
file1
Notes:
- It's similar to
os.listdir
- But it's also more flexible (and offers more functionality), more Pythonic (and in some cases, faster)
- It's similar to
[Python 3]: os.walk(top, topdown=True, onerror=None, followlinks=False)
Generate the file names in a directory tree by walking the tree either top-down or bottom-up. For each directory in the tree rooted at directory top (including top itself), it yields a 3-tuple (
dirpath
,dirnames
,filenames
).
>>> import os
>>> root_dir = os.path.join(os.getcwd(), "root_dir") # Specify the full path
>>> root_dir
'E:\Work\Dev\StackOverflow\q003207219\root_dir'
>>>
>>> walk_generator = os.walk(root_dir)
>>> root_dir_entry = next(walk_generator) # First entry corresponds to the root dir (passed as an argument)
>>> root_dir_entry
('E:\Work\Dev\StackOverflow\q003207219\root_dir', ['dir0', 'dir1', 'dir2', 'dir3'], ['file0', 'file1'])
>>>
>>> root_dir_entry[1] + root_dir_entry[2] # Display dirs and files (direct descendants) in a single list
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [os.path.join(root_dir_entry[0], item) for item in root_dir_entry[1] + root_dir_entry[2]] # Display all the entries in the previous list by their full path
['E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir1', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir3', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\file0', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\file1']
>>>
>>> for entry in walk_generator: # Display the rest of the elements (corresponding to every subdir)
... print(entry)
...
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0', ['dir00', 'dir01', 'dir02'], )
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir00', ['dir000'], ['file000'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir00\dir000', , ['file0000'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir01', , ['file010', 'file011'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02', ['dir020'], )
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02\dir020', ['dir0200'], )
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02\dir020\dir0200', , )
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir1', , ['file10', 'file11', 'file12'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2', ['dir20'], ['file20'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2\dir20', , ['file200'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir3', , )
Notes:
- Under the scenes, it uses
os.scandir
(os.listdir
on older versions) - It does the heavy lifting by recurring in subfolders
- Under the scenes, it uses
[Python 3]: glob.glob(pathname, *, recursive=False) ([Python 3]: glob.iglob(pathname, *, recursive=False))
Return a possibly-empty list of path names that match pathname, which must be a string containing a path specification. pathname can be either absolute (like
/usr/src/Python-1.5/Makefile
) or relative (like../../Tools/*/*.gif
), and can contain shell-style wildcards. Broken symlinks are included in the results (as in the shell).
...
Changed in version 3.5: Support for recursive globs using “**
”.
>>> import glob, os
>>> wildcard_pattern = "*"
>>> root_dir = os.path.join("root_dir", wildcard_pattern) # Match every file/dir name
>>> root_dir
'root_dir\*'
>>>
>>> glob_list = glob.glob(root_dir)
>>> glob_list
['root_dir\dir0', 'root_dir\dir1', 'root_dir\dir2', 'root_dir\dir3', 'root_dir\file0', 'root_dir\file1']
>>>
>>> [item.replace("root_dir" + os.path.sep, "") for item in glob_list] # Strip the dir name and the path separator from begining
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> for entry in glob.iglob(root_dir + "*", recursive=True):
... print(entry)
...
root_dir
root_dirdir0
root_dirdir0dir00
root_dirdir0dir00dir000
root_dirdir0dir00dir000file0000
root_dirdir0dir00file000
root_dirdir0dir01
root_dirdir0dir01file010
root_dirdir0dir01file011
root_dirdir0dir02
root_dirdir0dir02dir020
root_dirdir0dir02dir020dir0200
root_dirdir1
root_dirdir1file10
root_dirdir1file11
root_dirdir1file12
root_dirdir2
root_dirdir2dir20
root_dirdir2dir20file200
root_dirdir2file20
root_dirdir3
root_dirfile0
root_dirfile1
Notes:
- Uses
os.listdir
- For large trees (especially if recursive is on), iglob is preferred
- Allows advanced filtering based on name (due to the wildcard)
- Uses
[Python 3]: class pathlib.Path(*pathsegments) (Python 3.4+, backport: [PyPI]: pathlib2)
>>> import pathlib
>>> root_dir = "root_dir"
>>> root_dir_instance = pathlib.Path(root_dir)
>>> root_dir_instance
WindowsPath('root_dir')
>>> root_dir_instance.name
'root_dir'
>>> root_dir_instance.is_dir()
True
>>>
>>> [item.name for item in root_dir_instance.glob("*")] # Wildcard searching for all direct descendants
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [os.path.join(item.parent.name, item.name) for item in root_dir_instance.glob("*") if not item.is_dir()] # Display paths (including parent) for files only
['root_dir\file0', 'root_dir\file1']
Notes:
- This is one way of achieving our goal
- It's the OOP style of handling paths
- Offers lots of functionalities
[Python 2]: dircache.listdir(path) (Python 2 only)
- But, according to [GitHub]: python/cpython - (2.7) cpython/Lib/dircache.py, it's just a (thin) wrapper over
os.listdir
with caching
def listdir(path):
"""List directory contents, using cache."""
try:
cached_mtime, list = cache[path]
del cache[path]
except KeyError:
cached_mtime, list = -1,
mtime = os.stat(path).st_mtime
if mtime != cached_mtime:
list = os.listdir(path)
list.sort()
cache[path] = mtime, list
return list
- But, according to [GitHub]: python/cpython - (2.7) cpython/Lib/dircache.py, it's just a (thin) wrapper over
[man7]: OPENDIR(3) / [man7]: READDIR(3) / [man7]: CLOSEDIR(3) via [Python 3]: ctypes - A foreign function library for Python (POSIX specific)
ctypes is a foreign function library for Python. It provides C compatible data types, and allows calling functions in DLLs or shared libraries. It can be used to wrap these libraries in pure Python.
code_ctypes.py:
#!/usr/bin/env python3
import sys
from ctypes import Structure,
c_ulonglong, c_longlong, c_ushort, c_ubyte, c_char, c_int,
CDLL, POINTER,
create_string_buffer, get_errno, set_errno, cast
DT_DIR = 4
DT_REG = 8
char256 = c_char * 256
class LinuxDirent64(Structure):
_fields_ = [
("d_ino", c_ulonglong),
("d_off", c_longlong),
("d_reclen", c_ushort),
("d_type", c_ubyte),
("d_name", char256),
]
LinuxDirent64Ptr = POINTER(LinuxDirent64)
libc_dll = this_process = CDLL(None, use_errno=True)
# ALWAYS set argtypes and restype for functions, otherwise it's UB!!!
opendir = libc_dll.opendir
readdir = libc_dll.readdir
closedir = libc_dll.closedir
def get_dir_content(path):
ret = [path, list(), list()]
dir_stream = opendir(create_string_buffer(path.encode()))
if (dir_stream == 0):
print("opendir returned NULL (errno: {:d})".format(get_errno()))
return ret
set_errno(0)
dirent_addr = readdir(dir_stream)
while dirent_addr:
dirent_ptr = cast(dirent_addr, LinuxDirent64Ptr)
dirent = dirent_ptr.contents
name = dirent.d_name.decode()
if dirent.d_type & DT_DIR:
if name not in (".", ".."):
ret[1].append(name)
elif dirent.d_type & DT_REG:
ret[2].append(name)
dirent_addr = readdir(dir_stream)
if get_errno():
print("readdir returned NULL (errno: {:d})".format(get_errno()))
closedir(dir_stream)
return ret
def main():
print("{:s} on {:s}n".format(sys.version, sys.platform))
root_dir = "root_dir"
entries = get_dir_content(root_dir)
print(entries)
if __name__ == "__main__":
main()
Notes:
- It loads the three functions from libc (loaded in the current process) and calls them (for more details check [SO]: How do I check whether a file exists without exceptions? (@CristiFati's answer) - last notes from item #4.). That would place this approach very close to the Python / C edge
LinuxDirent64 is the ctypes representation of struct dirent64 from [man7]: dirent.h(0P) (so are the DT_ constants) from my machine: Ubtu 16 x64 (4.10.0-40-generic and libc6-dev:amd64). On other flavors/versions, the struct definition might differ, and if so, the ctypes alias should be updated, otherwise it will yield Undefined Behavior
- It returns data in the
os.walk
's format. I didn't bother to make it recursive, but starting from the existing code, that would be a fairly trivial task - Everything is doable on Win as well, the data (libraries, functions, structs, constants, ...) differ
Output:
[cfati@cfati-ubtu16x64-0:~/Work/Dev/StackOverflow/q003207219]> ./code_ctypes.py
3.5.2 (default, Nov 12 2018, 13:43:14)
[GCC 5.4.0 20160609] on linux
['root_dir', ['dir2', 'dir1', 'dir3', 'dir0'], ['file1', 'file0']]
[ActiveState]: win32file.FindFilesW (Win specific)
Retrieves a list of matching filenames, using the Windows Unicode API. An interface to the API FindFirstFileW/FindNextFileW/Find close functions.
>>> import os, win32file, win32con
>>> root_dir = "root_dir"
>>> wildcard = "*"
>>> root_dir_wildcard = os.path.join(root_dir, wildcard)
>>> entry_list = win32file.FindFilesW(root_dir_wildcard)
>>> len(entry_list) # Don't display the whole content as it's too long
8
>>> [entry[-2] for entry in entry_list] # Only display the entry names
['.', '..', 'dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [entry[-2] for entry in entry_list if entry[0] & win32con.FILE_ATTRIBUTE_DIRECTORY and entry[-2] not in (".", "..")] # Filter entries and only display dir names (except self and parent)
['dir0', 'dir1', 'dir2', 'dir3']
>>>
>>> [os.path.join(root_dir, entry[-2]) for entry in entry_list if entry[0] & (win32con.FILE_ATTRIBUTE_NORMAL | win32con.FILE_ATTRIBUTE_ARCHIVE)] # Only display file "full" names
['root_dir\file0', 'root_dir\file1']
Notes:
win32file.FindFilesW
is part of [GitHub]: mhammond/pywin32 - Python for Windows (pywin32) Extensions, which is a Python wrapper over WINAPIs- The documentation link is from ActiveState, as I didn't find any pywin32 official documentation
- Install some (other) third-party package that does the trick
- Most likely, will rely on one (or more) of the above (maybe with slight customizations)
Notes:
Code is meant to be portable (except places that target a specific area - which are marked) or cross:
- platform (Nix, Win, )
Python version (2, 3, )
Multiple path styles (absolute, relatives) were used across the above variants, to illustrate the fact that the "tools" used are flexible in this direction
os.listdir
andos.scandir
use opendir / readdir / closedir ([MS.Docs]: FindFirstFileW function / [MS.Docs]: FindNextFileW function / [MS.Docs]: FindClose function) (via [GitHub]: python/cpython - (master) cpython/Modules/posixmodule.c)win32file.FindFilesW
uses those (Win specific) functions as well (via [GitHub]: mhammond/pywin32 - (master) pywin32/win32/src/win32file.i)
_get_dir_content (from point #1.) can be implemented using any of these approaches (some will require more work and some less)
- Some advanced filtering (instead of just file vs. dir) could be done: e.g. the include_folders argument could be replaced by another one (e.g. filter_func) which would be a function that takes a path as an argument:
filter_func=lambda x: True
(this doesn't strip out anything) and inside _get_dir_content something like:if not filter_func(entry_with_path): continue
(if the function fails for one entry, it will be skipped), but the more complex the code becomes, the longer it will take to execute
- Some advanced filtering (instead of just file vs. dir) could be done: e.g. the include_folders argument could be replaced by another one (e.g. filter_func) which would be a function that takes a path as an argument:
Nota bene! Since recursion is used, I must mention that I did some tests on my laptop (Win 10 x64), totally unrelated to this problem, and when the recursion level was reaching values somewhere in the (990 .. 1000) range (recursionlimit - 1000 (default)), I got StackOverflow :). If the directory tree exceeds that limit (I am not an FS expert, so I don't know if that is even possible), that could be a problem.
I must also mention that I didn't try to increase recursionlimit because I have no experience in the area (how much can I increase it before having to also increase the stack at OS level), but in theory there will always be the possibility for failure, if the dir depth is larger than the highest possible recursionlimit (on that machine)The code samples are for demonstrative purposes only. That means that I didn't take into account error handling (I don't think there's any try / except / else / finally block), so the code is not robust (the reason is: to keep it as simple and short as possible). For production, error handling should be added as well
Other approaches:
Use Python only as a wrapper
- Everything is done using another technology
- That technology is invoked from Python
The most famous flavor that I know is what I call the system administrator approach:
- Use Python (or any programming language for that matter) in order to execute shell commands (and parse their outputs)
- Some consider this a neat hack
- I consider it more like a lame workaround (gainarie), as the action per se is performed from shell (cmd in this case), and thus doesn't have anything to do with Python.
- Filtering (
grep
/findstr
) or output formatting could be done on both sides, but I'm not going to insist on it. Also, I deliberately usedos.system
instead ofsubprocess.Popen
.
(py35x64_test) E:WorkDevStackOverflowq003207219>"e:WorkDevVEnvspy35x64_testScriptspython.exe" -c "import os;os.system("dir /b root_dir")"
dir0
dir1
dir2
dir3
file0
file1
In general this approach is to be avoided, since if some command output format slightly differs between OS versions/flavors, the parsing code should be adapted as well; not to mention differences between locales).
Preliminary notes
- Although there's a clear differentiation between file and directory terms in the question text, some may argue that directories are actually special files
- The statement: "all files of a directory" can be interpreted in two ways:
- All direct (or level 1) descendants only
- All descendants in the whole directory tree (including the ones in sub-directories)
- All direct (or level 1) descendants only
When the question was asked, I imagine that Python 2, was the LTS version, however the code samples will be run by Python 3(.5) (I'll keep them as Python 2 compliant as possible; also, any code belonging to Python that I'm going to post, is from v3.5.4 - unless otherwise specified). That has consequences related to another keyword in the question: "add them into a list":
- In pre Python 2.2 versions, sequences (iterables) were mostly represented by lists (tuples, sets, ...)
- In Python 2.2, the concept of generator ([Python.Wiki]: Generators) - courtesy of [Python 3]: The yield statement) - was introduced. As time passed, generator counterparts started to appear for functions that returned/worked with lists
- In Python 3, generator is the default behavior
- Not sure if returning a list is still mandatory (or a generator would do as well), but passing a generator to the list constructor, will create a list out of it (and also consume it). The example below illustrates the differences on [Python 3]: map(function, iterable, ...)
>>> import sys
>>> sys.version
'2.7.10 (default, Mar 8 2016, 15:02:46) [MSC v.1600 64 bit (AMD64)]'
>>> m = map(lambda x: x, [1, 2, 3]) # Just a dummy lambda function
>>> m, type(m)
([1, 2, 3], <type 'list'>)
>>> len(m)
3
>>> import sys
>>> sys.version
'3.5.4 (v3.5.4:3f56838, Aug 8 2017, 02:17:05) [MSC v.1900 64 bit (AMD64)]'
>>> m = map(lambda x: x, [1, 2, 3])
>>> m, type(m)
(<map object at 0x000001B4257342B0>, <class 'map'>)
>>> len(m)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'map' has no len()
>>> lm0 = list(m) # Build a list from the generator
>>> lm0, type(lm0)
([1, 2, 3], <class 'list'>)
>>>
>>> lm1 = list(m) # Build a list from the same generator
>>> lm1, type(lm1) # Empty list now - generator already consumed
(, <class 'list'>)
The examples will be based on a directory called root_dir with the following structure (this example is for Win, but I'm using the same tree on Lnx as well):
E:WorkDevStackOverflowq003207219>tree /f "root_dir"
Folder PATH listing for volume Work
Volume serial number is 00000029 3655:6FED
E:WORKDEVSTACKOVERFLOWQ003207219ROOT_DIR
¦ file0
¦ file1
¦
+---dir0
¦ +---dir00
¦ ¦ ¦ file000
¦ ¦ ¦
¦ ¦ +---dir000
¦ ¦ file0000
¦ ¦
¦ +---dir01
¦ ¦ file010
¦ ¦ file011
¦ ¦
¦ +---dir02
¦ +---dir020
¦ +---dir0200
+---dir1
¦ file10
¦ file11
¦ file12
¦
+---dir2
¦ ¦ file20
¦ ¦
¦ +---dir20
¦ file200
¦
+---dir3
Solutions
Programmatic approaches:
[Python 3]: os.listdir(path='.')
Return a list containing the names of the entries in the directory given by path. The list is in arbitrary order, and does not include the special entries
'.'
and'..'
...
>>> import os
>>> root_dir = "root_dir" # Path relative to current dir (os.getcwd())
>>>
>>> os.listdir(root_dir) # List all the items in root_dir
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [item for item in os.listdir(root_dir) if os.path.isfile(os.path.join(root_dir, item))] # Filter items and only keep files (strip out directories)
['file0', 'file1']
A more elaborate example (code_os_listdir.py):
import os
from pprint import pformat
def _get_dir_content(path, include_folders, recursive):
entries = os.listdir(path)
for entry in entries:
entry_with_path = os.path.join(path, entry)
if os.path.isdir(entry_with_path):
if include_folders:
yield entry_with_path
if recursive:
for sub_entry in _get_dir_content(entry_with_path, include_folders, recursive):
yield sub_entry
else:
yield entry_with_path
def get_dir_content(path, include_folders=True, recursive=True, prepend_folder_name=True):
path_len = len(path) + len(os.path.sep)
for item in _get_dir_content(path, include_folders, recursive):
yield item if prepend_folder_name else item[path_len:]
def _get_dir_content_old(path, include_folders, recursive):
entries = os.listdir(path)
ret = list()
for entry in entries:
entry_with_path = os.path.join(path, entry)
if os.path.isdir(entry_with_path):
if include_folders:
ret.append(entry_with_path)
if recursive:
ret.extend(_get_dir_content_old(entry_with_path, include_folders, recursive))
else:
ret.append(entry_with_path)
return ret
def get_dir_content_old(path, include_folders=True, recursive=True, prepend_folder_name=True):
path_len = len(path) + len(os.path.sep)
return [item if prepend_folder_name else item[path_len:] for item in _get_dir_content_old(path, include_folders, recursive)]
def main():
root_dir = "root_dir"
ret0 = get_dir_content(root_dir, include_folders=True, recursive=True, prepend_folder_name=True)
lret0 = list(ret0)
print(ret0, len(lret0), pformat(lret0))
ret1 = get_dir_content_old(root_dir, include_folders=False, recursive=True, prepend_folder_name=False)
print(len(ret1), pformat(ret1))
if __name__ == "__main__":
main()
Notes:
- There are two implementations:
- One that uses generators (of course here it seems useless, since I immediately convert the result to a list)
- The classic one (function names ending in _old)
- Recursion is used (to get into subdirectories)
- For each implementation there are two functions:
- One that starts with an underscore (_): "private" (should not be called directly) - that does all the work
- The public one (wrapper over previous): it just strips off the initial path (if required) from the returned entries. It's an ugly implementation, but it's the only idea that I could come with at this point
- In terms of performance, generators are generally a little bit faster (considering both creation and iteration times), but I didn't test them in recursive functions, and also I am iterating inside the function over inner generators - don't know how performance friendly is that
- Play with the arguments to get different results
Output:
(py35x64_test) E:WorkDevStackOverflowq003207219>"e:WorkDevVEnvspy35x64_testScriptspython.exe" "code_os_listdir.py"
<generator object get_dir_content at 0x000001BDDBB3DF10> 22 ['root_dir\dir0',
'root_dir\dir0\dir00',
'root_dir\dir0\dir00\dir000',
'root_dir\dir0\dir00\dir000\file0000',
'root_dir\dir0\dir00\file000',
'root_dir\dir0\dir01',
'root_dir\dir0\dir01\file010',
'root_dir\dir0\dir01\file011',
'root_dir\dir0\dir02',
'root_dir\dir0\dir02\dir020',
'root_dir\dir0\dir02\dir020\dir0200',
'root_dir\dir1',
'root_dir\dir1\file10',
'root_dir\dir1\file11',
'root_dir\dir1\file12',
'root_dir\dir2',
'root_dir\dir2\dir20',
'root_dir\dir2\dir20\file200',
'root_dir\dir2\file20',
'root_dir\dir3',
'root_dir\file0',
'root_dir\file1']
11 ['dir0\dir00\dir000\file0000',
'dir0\dir00\file000',
'dir0\dir01\file010',
'dir0\dir01\file011',
'dir1\file10',
'dir1\file11',
'dir1\file12',
'dir2\dir20\file200',
'dir2\file20',
'file0',
'file1']
- There are two implementations:
[Python 3]: os.scandir(path='.') (Python 3.5+, backport: [PyPI]: scandir)
Return an iterator of os.DirEntry objects corresponding to the entries in the directory given by path. The entries are yielded in arbitrary order, and the special entries
'.'
and'..'
are not included.
Using scandir() instead of listdir() can significantly increase the performance of code that also needs file type or file attribute information, because os.DirEntry objects expose this information if the operating system provides it when scanning a directory. All os.DirEntry methods may perform a system call, but is_dir() and is_file() usually only require a system call for symbolic links; os.DirEntry.stat() always requires a system call on Unix but only requires one for symbolic links on Windows.
>>> import os
>>> root_dir = os.path.join(".", "root_dir") # Explicitly prepending current directory
>>> root_dir
'.\root_dir'
>>>
>>> scandir_iterator = os.scandir(root_dir)
>>> scandir_iterator
<nt.ScandirIterator object at 0x00000268CF4BC140>
>>> [item.path for item in scandir_iterator]
['.\root_dir\dir0', '.\root_dir\dir1', '.\root_dir\dir2', '.\root_dir\dir3', '.\root_dir\file0', '.\root_dir\file1']
>>>
>>> [item.path for item in scandir_iterator] # Will yield an empty list as it was consumed by previous iteration (automatically performed by the list comprehension)
>>>
>>> scandir_iterator = os.scandir(root_dir) # Reinitialize the generator
>>> for item in scandir_iterator :
... if os.path.isfile(item.path):
... print(item.name)
...
file0
file1
Notes:
- It's similar to
os.listdir
- But it's also more flexible (and offers more functionality), more Pythonic (and in some cases, faster)
- It's similar to
[Python 3]: os.walk(top, topdown=True, onerror=None, followlinks=False)
Generate the file names in a directory tree by walking the tree either top-down or bottom-up. For each directory in the tree rooted at directory top (including top itself), it yields a 3-tuple (
dirpath
,dirnames
,filenames
).
>>> import os
>>> root_dir = os.path.join(os.getcwd(), "root_dir") # Specify the full path
>>> root_dir
'E:\Work\Dev\StackOverflow\q003207219\root_dir'
>>>
>>> walk_generator = os.walk(root_dir)
>>> root_dir_entry = next(walk_generator) # First entry corresponds to the root dir (passed as an argument)
>>> root_dir_entry
('E:\Work\Dev\StackOverflow\q003207219\root_dir', ['dir0', 'dir1', 'dir2', 'dir3'], ['file0', 'file1'])
>>>
>>> root_dir_entry[1] + root_dir_entry[2] # Display dirs and files (direct descendants) in a single list
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [os.path.join(root_dir_entry[0], item) for item in root_dir_entry[1] + root_dir_entry[2]] # Display all the entries in the previous list by their full path
['E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir1', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\dir3', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\file0', 'E:\Work\Dev\StackOverflow\q003207219\root_dir\file1']
>>>
>>> for entry in walk_generator: # Display the rest of the elements (corresponding to every subdir)
... print(entry)
...
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0', ['dir00', 'dir01', 'dir02'], )
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir00', ['dir000'], ['file000'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir00\dir000', , ['file0000'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir01', , ['file010', 'file011'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02', ['dir020'], )
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02\dir020', ['dir0200'], )
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir0\dir02\dir020\dir0200', , )
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir1', , ['file10', 'file11', 'file12'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2', ['dir20'], ['file20'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir2\dir20', , ['file200'])
('E:\Work\Dev\StackOverflow\q003207219\root_dir\dir3', , )
Notes:
- Under the scenes, it uses
os.scandir
(os.listdir
on older versions) - It does the heavy lifting by recurring in subfolders
- Under the scenes, it uses
[Python 3]: glob.glob(pathname, *, recursive=False) ([Python 3]: glob.iglob(pathname, *, recursive=False))
Return a possibly-empty list of path names that match pathname, which must be a string containing a path specification. pathname can be either absolute (like
/usr/src/Python-1.5/Makefile
) or relative (like../../Tools/*/*.gif
), and can contain shell-style wildcards. Broken symlinks are included in the results (as in the shell).
...
Changed in version 3.5: Support for recursive globs using “**
”.
>>> import glob, os
>>> wildcard_pattern = "*"
>>> root_dir = os.path.join("root_dir", wildcard_pattern) # Match every file/dir name
>>> root_dir
'root_dir\*'
>>>
>>> glob_list = glob.glob(root_dir)
>>> glob_list
['root_dir\dir0', 'root_dir\dir1', 'root_dir\dir2', 'root_dir\dir3', 'root_dir\file0', 'root_dir\file1']
>>>
>>> [item.replace("root_dir" + os.path.sep, "") for item in glob_list] # Strip the dir name and the path separator from begining
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> for entry in glob.iglob(root_dir + "*", recursive=True):
... print(entry)
...
root_dir
root_dirdir0
root_dirdir0dir00
root_dirdir0dir00dir000
root_dirdir0dir00dir000file0000
root_dirdir0dir00file000
root_dirdir0dir01
root_dirdir0dir01file010
root_dirdir0dir01file011
root_dirdir0dir02
root_dirdir0dir02dir020
root_dirdir0dir02dir020dir0200
root_dirdir1
root_dirdir1file10
root_dirdir1file11
root_dirdir1file12
root_dirdir2
root_dirdir2dir20
root_dirdir2dir20file200
root_dirdir2file20
root_dirdir3
root_dirfile0
root_dirfile1
Notes:
- Uses os.listdir
- For large trees (especially if recursive is on), iglob is preferred
- Allows advanced filtering based on name (due to the wildcard)
[Python 3]: class pathlib.Path(*pathsegments) (Python 3.4+, backport: [PyPI]: pathlib2)
>>> import pathlib
>>> root_dir = "root_dir"
>>> root_dir_instance = pathlib.Path(root_dir)
>>> root_dir_instance
WindowsPath('root_dir')
>>> root_dir_instance.name
'root_dir'
>>> root_dir_instance.is_dir()
True
>>>
>>> [item.name for item in root_dir_instance.glob("*")] # Wildcard searching for all direct descendants
['dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [os.path.join(item.parent.name, item.name) for item in root_dir_instance.glob("*") if not item.is_dir()] # Display paths (including parent) for files only
['root_dir\file0', 'root_dir\file1']
Notes:
- This is one way of achieving our goal
- It's the OOP style of handling paths
- Offers lots of functionalities
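For instance, a recursive variant of the same idea might look like this (a sketch; Path.rglob is available since Python 3.4):

import pathlib
all_files = [str(item) for item in pathlib.Path("root_dir").rglob("*") if not item.is_dir()]  # every file under root_dir, recursively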
[Python 2]: dircache.listdir(path) (Python 2 only)
- But, according to [GitHub]: python/cpython - (2.7) cpython/Lib/dircache.py, it's just a (thin) wrapper over
os.listdir
with caching
def listdir(path):
    """List directory contents, using cache."""
    try:
        cached_mtime, list = cache[path]
        del cache[path]
    except KeyError:
        cached_mtime, list = -1, []
    mtime = os.stat(path).st_mtime
    if mtime != cached_mtime:
        list = os.listdir(path)
        list.sort()
        cache[path] = mtime, list
    return list
[man7]: OPENDIR(3) / [man7]: READDIR(3) / [man7]: CLOSEDIR(3) via [Python 3]: ctypes - A foreign function library for Python (POSIX specific)
ctypes is a foreign function library for Python. It provides C compatible data types, and allows calling functions in DLLs or shared libraries. It can be used to wrap these libraries in pure Python.
code_ctypes.py:
#!/usr/bin/env python3

import sys
from ctypes import (Structure,
                    c_ulonglong, c_longlong, c_ushort, c_ubyte, c_char, c_int,
                    CDLL, POINTER,
                    create_string_buffer, get_errno, set_errno, cast)

DT_DIR = 4
DT_REG = 8

char256 = c_char * 256


class LinuxDirent64(Structure):
    _fields_ = [
        ("d_ino", c_ulonglong),
        ("d_off", c_longlong),
        ("d_reclen", c_ushort),
        ("d_type", c_ubyte),
        ("d_name", char256),
    ]

LinuxDirent64Ptr = POINTER(LinuxDirent64)

libc_dll = this_process = CDLL(None, use_errno=True)
# ALWAYS set argtypes and restype for functions, otherwise it's UB!!!
opendir = libc_dll.opendir
readdir = libc_dll.readdir
closedir = libc_dll.closedir


def get_dir_content(path):
    ret = [path, list(), list()]
    dir_stream = opendir(create_string_buffer(path.encode()))
    if (dir_stream == 0):
        print("opendir returned NULL (errno: {:d})".format(get_errno()))
        return ret
    set_errno(0)
    dirent_addr = readdir(dir_stream)
    while dirent_addr:
        dirent_ptr = cast(dirent_addr, LinuxDirent64Ptr)
        dirent = dirent_ptr.contents
        name = dirent.d_name.decode()
        if dirent.d_type & DT_DIR:
            if name not in (".", ".."):
                ret[1].append(name)
        elif dirent.d_type & DT_REG:
            ret[2].append(name)
        dirent_addr = readdir(dir_stream)
    if get_errno():
        print("readdir returned NULL (errno: {:d})".format(get_errno()))
    closedir(dir_stream)
    return ret


def main():
    print("{:s} on {:s}\n".format(sys.version, sys.platform))
    root_dir = "root_dir"
    entries = get_dir_content(root_dir)
    print(entries)


if __name__ == "__main__":
    main()
Notes:
- It loads the three functions from libc (loaded in the current process) and calls them (for more details check [SO]: How do I check whether a file exists without exceptions? (@CristiFati's answer) - last notes from item #4.). That places this approach very close to the Python / C edge
- LinuxDirent64 is the ctypes representation of struct dirent64 from [man7]: dirent.h(0P) (and so are the DT_ constants) from my machine: Ubuntu 16 x64 (4.10.0-40-generic and libc6-dev:amd64). On other flavors/versions, the struct definition might differ, and if so, the ctypes alias should be updated, otherwise it will yield Undefined Behavior
- It returns data in os.walk's format. I didn't bother to make it recursive, but starting from the existing code that would be a fairly trivial task (see the sketch after the output below)
- Everything is doable on Win as well; the data (libraries, functions, structs, constants, ...) differ
Output:
[cfati@cfati-ubtu16x64-0:~/Work/Dev/StackOverflow/q003207219]> ./code_ctypes.py
3.5.2 (default, Nov 12 2018, 13:43:14)
[GCC 5.4.0 20160609] on linux
['root_dir', ['dir2', 'dir1', 'dir3', 'dir0'], ['file1', 'file0']]
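For illustration, a minimal recursive wrapper over get_dir_content could look like the sketch below (the walk_dir_content name is mine, not part of the original code):

import os

def walk_dir_content(path):
    # Yields (dir_path, dir_names, file_names) tuples, in os.walk's format,
    # by recursing into each subdirectory reported by get_dir_content.
    entry = get_dir_content(path)
    yield entry
    for dir_name in entry[1]:
        yield from walk_dir_content(os.path.join(path, dir_name))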
[ActiveState]: win32file.FindFilesW (Win specific)
Retrieves a list of matching filenames, using the Windows Unicode API. An interface to the API FindFirstFileW / FindNextFileW / FindClose functions.
>>> import os, win32file, win32con
>>> root_dir = "root_dir"
>>> wildcard = "*"
>>> root_dir_wildcard = os.path.join(root_dir, wildcard)
>>> entry_list = win32file.FindFilesW(root_dir_wildcard)
>>> len(entry_list) # Don't display the whole content as it's too long
8
>>> [entry[-2] for entry in entry_list] # Only display the entry names
['.', '..', 'dir0', 'dir1', 'dir2', 'dir3', 'file0', 'file1']
>>>
>>> [entry[-2] for entry in entry_list if entry[0] & win32con.FILE_ATTRIBUTE_DIRECTORY and entry[-2] not in (".", "..")] # Filter entries and only display dir names (except self and parent)
['dir0', 'dir1', 'dir2', 'dir3']
>>>
>>> [os.path.join(root_dir, entry[-2]) for entry in entry_list if entry[0] & (win32con.FILE_ATTRIBUTE_NORMAL | win32con.FILE_ATTRIBUTE_ARCHIVE)] # Only display file "full" names
['root_dir\file0', 'root_dir\file1']
Notes:
- win32file.FindFilesW is part of [GitHub]: mhammond/pywin32 - Python for Windows (pywin32) Extensions, which is a Python wrapper over WINAPIs
- The documentation link is from ActiveState, as I didn't find any official pywin32 documentation
- Install some (other) third-party package that does the trick
- Most likely, it will rely on one (or more) of the above (maybe with slight customizations)
Notes:
- Code is meant to be portable (except for places that target a specific area - which are marked), or cross:
  - platform (Nix, Win)
  - Python version (2, 3)
- Multiple path styles (absolute, relative) were used across the above variants, to illustrate the fact that the "tools" used are flexible in this direction
- os.listdir and os.scandir use opendir / readdir / closedir ([MS.Docs]: FindFirstFileW function / [MS.Docs]: FindNextFileW function / [MS.Docs]: FindClose function on Win) (via [GitHub]: python/cpython - (master) cpython/Modules/posixmodule.c)
- win32file.FindFilesW uses those (Win specific) functions as well (via [GitHub]: mhammond/pywin32 - (master) pywin32/win32/src/win32file.i)
- _get_dir_content (from point #1.) can be implemented using any of these approaches (some will require more work and some less)
- Some advanced filtering (instead of just file vs. dir) could be done: e.g. the include_folders argument could be replaced by another one (e.g. filter_func), which would be a function that takes a path as an argument: filter_func=lambda x: True (this doesn't strip out anything), and inside _get_dir_content something like: if not filter_func(entry_with_path): continue (if the function fails for one entry, it will be skipped), but the more complex the code becomes, the longer it will take to execute (see the sketch below)
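For illustration, here's a minimal sketch of such a filter_func hook, built on os.walk rather than on _get_dir_content (whose full code is in point #1. and not repeated here); the names are mine:

import os

def list_entries(dir_name, filter_func=lambda path: True):
    # Collect full paths of all entries under dir_name, keeping only
    # those for which filter_func returns a truthy value.
    result = []
    for root, dir_names, file_names in os.walk(dir_name):
        for name in dir_names + file_names:
            entry_with_path = os.path.join(root, name)
            if not filter_func(entry_with_path):
                continue
            result.append(entry_with_path)
    return result

# Example: only keep regular files whose extension is .py
python_files = list_entries("root_dir", filter_func=lambda path: os.path.isfile(path) and path.endswith(".py"))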
Nota bene! Since recursion is used, I must mention that I did some tests on my laptop (Win 10 x64), totally unrelated to this question, and when the recursion level reached values somewhere in the (990 .. 1000) range (recursionlimit - 1000 (default)), I got StackOverflow :). If the directory tree exceeds that limit (I am not an FS expert, so I don't know if that is even possible), that could be a problem. I must also mention that I didn't try to increase recursionlimit, because I have no experience in the area (how much can I increase it before having to also increase the stack at the OS level), but in theory there will always be the possibility of failure if the dir depth is larger than the highest possible recursionlimit (on that machine).
The code samples are for demonstrative purposes only. That means that I didn't take error handling into account (I don't think there's any try / except / else / finally block), so the code is not robust (the reason is: to keep it as simple and short as possible). For production, error handling should be added as well.
Other approaches:
Use Python only as a wrapper
- Everything is done using another technology
- That technology is invoked from Python
The most famous flavor that I know is what I call the system administrator approach:
- Use Python (or any programming language for that matter) in order to execute shell commands (and parse their outputs)
- Some consider this a neat hack
- I consider it more of a lame workaround (a cheap trick), as the action per se is performed from the shell (cmd in this case), and thus has nothing to do with Python.
- Filtering (grep / findstr) or output formatting could be done on both sides, but I'm not going to insist on it. Also, I deliberately used os.system instead of subprocess.Popen.
(py35x64_test) E:\Work\Dev\StackOverflow\q003207219>"e:\Work\Dev\VEnvs\py35x64_test\Scripts\python.exe" -c "import os;os.system(\"dir /b root_dir\")"
dir0
dir1
dir2
dir3
file0
file1
In general this approach is to be avoided, since if a command's output format differs slightly between OS versions/flavors, the parsing code has to be adapted as well (not to mention differences between locales).
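For completeness, a sketch of the same idea with the output captured instead of printed straight to the console (the command and the directory name are just examples, and Windows-specific like the one above; on Nix it would be something like ["ls", "root_dir"]):

import subprocess

# Run the shell command and capture its output as a list of entry names.
output = subprocess.check_output(["cmd", "/c", "dir", "/b", "root_dir"])
entries = output.decode().splitlines()  # decoding assumes ASCII-only names
print(entries)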
edited Dec 19 '18 at 19:18, answered Jan 23 '18 at 3:09 by CristiFati
You had posted it, but I had cleaned it up once I had read it :-)
– Martijn Pieters♦
Dec 9 '18 at 11:20
import os

def list_files(path):
    # returns a list of names (with extension, without full path) of all files
    # in folder path
    files = []
    for name in os.listdir(path):
        if os.path.isfile(os.path.join(path, name)):
            files.append(name)
    return files
edited Oct 7 '14 at 18:30, answered Jun 10 '14 at 16:16 by Apogentus
If you are looking for a Python implementation of find, this is a recipe I use rather frequently:
from findtools.find_files import (find_files, Match)
# Recursively find all *.sh files in **/usr/bin**
sh_files_pattern = Match(filetype='f', name='*.sh')
found_files = find_files(path='/usr/bin', match=sh_files_pattern)
for found_file in found_files:
print found_file
So I made a PyPI package out of it, and there is also a GitHub repository. I hope someone finds it useful.
edited May 28 '17 at 23:17 by Peter Mortensen, answered Apr 10 '14 at 14:09 by Yauhen Yakimovich
Returns a list of absolute file paths; does not recurse into subdirectories.
L = [os.path.join(os.getcwd(),f) for f in os.listdir('.') if os.path.isfile(os.path.join(os.getcwd(),f))]
1
maybe bit longer but v clear what it is doing
– javadba
Jun 8 '15 at 0:28
2
Note: os.path.abspath(f) would be a somewhat cheaper substitute for os.path.join(os.getcwd(), f).
– ShadowRanger
May 6 '17 at 0:14
It'd be more efficient still if you started with cwd = os.path.abspath('.'), then used cwd instead of '.' and os.getcwd() throughout, to avoid loads of redundant system calls.
– Martijn Pieters♦
Dec 5 '18 at 10:46
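Following those comments, a sketch of the variant that resolves the working directory once (my rendering of the commenters' suggestion, not the answerer's code):

import os

cwd = os.path.abspath('.')  # resolve the current directory once
L = [os.path.join(cwd, f) for f in os.listdir(cwd) if os.path.isfile(os.path.join(cwd, f))]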
edited Dec 28 '14 at 3:27 by Cristian Ciupitu, answered Jun 13 '14 at 16:26 by The2ndSon
import os
import os.path
def get_files(target_dir):
item_list = os.listdir(target_dir)
file_list = list()
for item in item_list:
item_dir = os.path.join(target_dir,item)
if os.path.isdir(item_dir):
file_list += get_files(item_dir)
else:
file_list.append(item_dir)
return file_list
Here I use a recursive structure.
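A quick usage example (the directory name is hypothetical):

all_files = get_files("root_dir")
print(all_files)  # full paths of every file, including those in subdirectories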
edited Jul 18 '18 at 13:44 by Andrew Rohne, answered Jun 19 '18 at 12:03 by pah8J
I am assuming that all your files have the *.txt extension and are stored inside a directory with the path data/.
One can use Python's glob module to list all files of the directory, and add them to a list named fnames, in the following manner:
import glob
fnames = glob.glob("data/*.txt") #fnames: list data type
edited Nov 13 '18 at 23:58, answered Nov 9 '18 at 17:05 by Siddharth Satpathy
# -*- coding: utf-8 -*-
import os
import traceback

print '\n\n'

def start():
    address = "/home/ubuntu/Desktop"
    try:
        Folders = []
        Id = 1
        for item in os.listdir(address):
            endaddress = address + "/" + item
            Folders.append({'Id': Id, 'TopId': 0, 'Name': item, 'Address': endaddress})
            Id += 1

            state = 0
            for item2 in os.listdir(endaddress):
                state = 1
            if state == 1:
                Id = FolderToList(endaddress, Id, Id - 1, Folders)
        return Folders
    except:
        print "___________________________ ERROR ___________________________\n" + traceback.format_exc()

def FolderToList(address, Id, TopId, Folders):
    for item in os.listdir(address):
        endaddress = address + "/" + item
        Folders.append({'Id': Id, 'TopId': TopId, 'Name': item, 'Address': endaddress})
        Id += 1

        state = 0
        for item in os.listdir(endaddress):
            state = 1
        if state == 1:
            Id = FolderToList(endaddress, Id, Id - 1, Folders)
    return Id

print start()
This is too specific for an isolated use case and not generally useful, especially since there is no explanation whatsoever of what the code is doing. The blanket except handling is also a bad example of how to handle exceptions in general.
– Martijn Pieters♦
Dec 5 '18 at 10:44
edited Dec 28 '14 at 3:25 by Cristian Ciupitu, answered Mar 7 '14 at 10:28 by barisim.net
Using generators
import os
def get_files(search_path):
for (dirpath, _, filenames) in os.walk(search_path):
for filename in filenames:
yield os.path.join(dirpath, filename)
list_files = get_files('.')
for filename in list_files:
print(filename)
edited May 17 '17 at 15:35, answered Dec 2 '16 at 7:01 by shantanoo
import dircache
list = dircache.listdir(pathname)
i = 0
check = len(list[0])
temp = []
count = len(list)
while count != 0:
    if len(list[i]) != check:
        temp.append(list[i-1])
        check = len(list[i])
    else:
        i = i + 1
        count = count - 1
print temp
16
dircache is "Deprecated since version 2.6: The dircache module has been removed in Python 3.0."
– Daniel Reis
Aug 17 '13 at 13:58
answered Jul 25 '12 at 10:25 by shaji
Use this function if you want to use a different file type or get the full directory:

import os

def createList(foldername, fulldir=True, suffix=".jpg"):
    file_list_tmp = os.listdir(foldername)
    # print len(file_list_tmp)
    file_list = []
    if fulldir:
        for item in file_list_tmp:
            if item.endswith(suffix):
                file_list.append(os.path.join(foldername, item))
    else:
        for item in file_list_tmp:
            if item.endswith(suffix):
                file_list.append(item)
    return file_list
You can decide to use os.path.join() inside the loop rather than double up your looping and filtering code. This answer doesn't really add anything over existing answers other than the fulldir flag, so you'd really want to do a better job of the implementation. I'd use def files_list(p, fulldir=True, suffix=None): (indent), names = os.listdir(p), if suffix is not None: names = (f.endswith(suffix) for f in names), return [os.path.join(p, f) if fulldir else f for f in names] to at least keep it compact and efficient.
– Martijn Pieters♦
Dec 5 '18 at 10:59
Could you point out which part is a double loop? Thanks.
– neouyghur
Dec 6 '18 at 2:39
You have two for ... if ... append constructs in your function, only different in what is appended each time. That's a lot of needless code duplication.
– Martijn Pieters♦
Dec 6 '18 at 3:18
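Adapted from that comment, a compact sketch of such a helper (the files_list name comes from the comment; note I changed the filtering line so it keeps matching names rather than producing booleans):

import os

def files_list(p, fulldir=True, suffix=None):
    # List the entries of p, optionally filtered by suffix and joined with p.
    names = os.listdir(p)
    if suffix is not None:
        names = [f for f in names if f.endswith(suffix)]
    return [os.path.join(p, f) if fulldir else f for f in names]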
edited May 23 '18 at 18:45 by Peter Mortensen, answered Nov 11 '16 at 12:48 by neouyghur
Another very readable variant for Python 3.4+ is using pathlib.Path.glob:
from pathlib import Path
folder = '/foo'
[f for f in Path(folder).glob('*') if f.is_file()]
It is simple to make more specific, e.g. only look for Python source files which are not symbolic links, also in all subdirectories:
[f for f in Path(folder).glob('**/*.py') if not f.is_symlink()]
edited May 23 '18 at 19:25 by Peter Mortensen, answered Mar 28 '18 at 12:20 by fhchl
You can use the listdir() method of the os module along with a generator (a generator is a powerful iterator that keeps its state, remember?). The following code works fine with both versions: Python 2 and Python 3.
Here's the code:
import os
def files(path):
for file in os.listdir(path):
if os.path.isfile(os.path.join(path, file)):
yield file
for file in files("."):
print (file)
The listdir() method returns the list of entries for the given directory. The method os.path.isfile() returns True if the given entry is a file. And the yield operator exits the function but keeps its current state, yielding only the names of the entries detected as files. All of the above lets us loop over the generator function.
Hope this helps.
answered Jan 9 at 10:11 by ARGeo
Here's my general-purpose function for this. It returns a list of file paths rather than filenames since I found that to be more useful. It has a few optional arguments that make it versatile. For instance, I often use it with arguments like pattern='*.txt'
or subfolders=True
.
import os
import fnmatch
def list_paths(folder='.', pattern='*', case_sensitive=False, subfolders=False):
"""Return a list of the file paths matching the pattern in the specified
folder, optionally including files inside subfolders.
"""
match = fnmatch.fnmatchcase if case_sensitive else fnmatch.fnmatch
walked = os.walk(folder) if subfolders else [next(os.walk(folder))]
return [os.path.join(root, f)
for root, dirnames, filenames in walked
for f in filenames if match(f, pattern)]
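A couple of usage examples matching the cases mentioned above (the folder names are just examples):

txt_files = list_paths('data', pattern='*.txt')       # *.txt files directly inside data/
all_files = list_paths('projects', subfolders=True)   # every file under projects/, recursively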
answered Dec 7 '17 at 20:10 by MarredCheese
For Python 2:
pip install rglob
import rglob
file_list=rglob.rglob("/home/base/dir/", "*")
print file_list
edited Oct 19 '18 at 3:19, answered Oct 19 '18 at 2:34 by chris-piekarski
I will provide a sample one-liner where the source path and file type can be provided as input. The code returns a list of filenames with the csv extension. Use a wildcard such as * in case all files need to be returned. This will also recursively scan the subdirectories.

from glob import glob
import os

[y for x in os.walk(sourcePath) for y in glob(os.path.join(x[0], '*.csv'))]

Modify the file extension and source path as needed.
If you are going to use glob, then just use glob('**/*.csv', recursive=True). No need to combine this with os.walk() to recurse (recursive and ** are supported since Python 3.5).
– Martijn Pieters♦
Dec 5 '18 at 11:09
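As that comment suggests, on Python 3.5+ the same result can be obtained with glob alone (a sketch; sourcePath is assumed to be defined as in the answer above):

import glob
import os

csv_files = glob.glob(os.path.join(sourcePath, '**', '*.csv'), recursive=True)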
edited Dec 12 '17 at 5:30, answered Dec 11 '17 at 17:51 by Vinodh Krishnaraju