Pandas in 10 Minutes (official walkthrough + some personal experiments)

Tags: pandas, python

Pandas in 10 Minutes


Code download: http://download.csdn.net/download/sirwill/10043185

In [19]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

Object Creation

In [20]:
s = pd.Series([1, 2, 3, np.nan, 5, 6])   # a Series: a one-dimensional labeled array
s
Out[20]:
0    1.0
1    2.0
2    3.0
3    NaN
4    5.0
5    6.0
dtype: float64
In [21]:
dates = pd.date_range("20170112", periods=6)  # a DatetimeIndex, used below as the row index of a DataFrame
dates
Out[21]:
DatetimeIndex(['2017-01-12', '2017-01-13', '2017-01-14', '2017-01-15',
               '2017-01-16', '2017-01-17'],
              dtype='datetime64[ns]', freq='D')
In [22]:
list(dates)
dates.date
Out[22]:
array([datetime.date(2017, 1, 12), datetime.date(2017, 1, 13),
       datetime.date(2017, 1, 14), datetime.date(2017, 1, 15),
       datetime.date(2017, 1, 16), datetime.date(2017, 1, 17)], dtype=object)
In [23]:
list(dates.date)
Out[23]:
[datetime.date(2017, 1, 12),
 datetime.date(2017, 1, 13),
 datetime.date(2017, 1, 14),
 datetime.date(2017, 1, 15),
 datetime.date(2017, 1, 16),
 datetime.date(2017, 1, 17)]
In [24]:
dates.year
Out[24]:
Int64Index([2017, 2017, 2017, 2017, 2017, 2017], dtype='int64')
In [25]:
list(dates.year)
Out[25]:
[2017, 2017, 2017, 2017, 2017, 2017]
In [26]:
list(dates.day)
Out[26]:
[12, 13, 14, 15, 16, 17]
In [27]:
str(dates.date)
Out[27]:
'[datetime.date(2017, 1, 12) datetime.date(2017, 1, 13)\n datetime.date(2017, 1, 14) datetime.date(2017, 1, 15)\n datetime.date(2017, 1, 16) datetime.date(2017, 1, 17)]'
In [28]:
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list("ABCD"))  # Creating a DataFrame by passing a numpy array, with a datetime index and labeled columns
df
Out[28]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [29]:
df2 = pd.DataFrame({'A': 1.,
                    'B': pd.Timestamp('20130102'),
                    'C': pd.Series(1, index=list(range(4)), dtype='float32'),
                    'D': np.array([3] * 4, dtype='int32'),
                    'E': pd.Categorical(["test", "train", "test", "train"]),
                    'F': 'foo'})  # Creating a DataFrame by passing a dict of objects that can be converted to series-like.
df2
Out[29]:
  A B C D E F
0 1.0 2013-01-02 1.0 3 test foo
1 1.0 2013-01-02 1.0 3 train foo
2 1.0 2013-01-02 1.0 3 test foo
3 1.0 2013-01-02 1.0 3 train foo
In [30]:
df2.dtypes
Out[30]:
A           float64
B    datetime64[ns]
C           float32
D             int32
E          category
F            object
dtype: object
In [31]:
df.dtypes
Out[31]:
A    float64
B    float64
C    float64
D    float64
dtype: object
In [32]:
df2.<TAB>  # In Jupyter/IPython, pressing Tab here shows completion for column names and public attributes; the literal <TAB> is not valid Python, hence the error below.
  File "<ipython-input-32-9c4c8dafe199>", line 1
    df2.<TAB> #If you’re using IPython, tab completion for column names (as well as public attributes) is automatically enabled.
        ^
SyntaxError: invalid syntax
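Tab completion only works interactively; to get the same names programmatically (a small sketch, not part of the original notebook), the columns or public attributes can be listed directly:

df2.columns.tolist()   # ['A', 'B', 'C', 'D', 'E', 'F']
[name for name in dir(df2) if not name.startswith("_")][:5]   # first few public attributes/methods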

Viewing Data

In [36]:
df.head()
Out[36]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
In [37]:
df.index
Out[37]:
DatetimeIndex(['2017-01-12', '2017-01-13', '2017-01-14', '2017-01-15',
               '2017-01-16', '2017-01-17'],
              dtype='datetime64[ns]', freq='D')
In [38]:
df.columns
Out[38]:
Index(['A', 'B', 'C', 'D'], dtype='object')
In [39]:
df.values
Out[39]:
array([[-2.2581213 ,  2.45619592,  0.77856734, -2.030407  ],
       [-0.65834822,  0.62249451,  0.38862467,  0.07358728],
       [ 0.58921899,  1.39279193,  0.60554535,  1.23153815],
       [-0.1519579 , -0.65524863, -2.1147252 , -0.66983949],
       [-1.32330447,  3.14365936,  0.63899562,  0.89868346],
       [-0.02493461,  0.3858107 , -1.57718486, -0.0214603 ]])
In [40]:
df.describe()
Out[40]:
  A B C D
count 6.000000 6.000000 6.000000 6.000000
mean -0.637908 1.224284 -0.213363 -0.086316
std 1.021078 1.401987 1.282079 1.171045
min -2.258121 -0.655249 -2.114725 -2.030407
25% -1.157065 0.444982 -1.085732 -0.507745
50% -0.405153 1.007643 0.497085 0.026063
75% -0.056690 2.190345 0.630633 0.692409
max 0.589219 3.143659 0.778567 1.231538
In [41]:
df
Out[41]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [42]:
df.T
Out[42]:
  2017-01-12 00:00:00 2017-01-13 00:00:00 2017-01-14 00:00:00 2017-01-15 00:00:00 2017-01-16 00:00:00 2017-01-17 00:00:00
A -2.258121 -0.658348 0.589219 -0.151958 -1.323304 -0.024935
B 2.456196 0.622495 1.392792 -0.655249 3.143659 0.385811
C 0.778567 0.388625 0.605545 -2.114725 0.638996 -1.577185
D -2.030407 0.073587 1.231538 -0.669839 0.898683 -0.021460
In [43]:
df.sort_index(axis=1, ascending=False)  # Sorting by an axis (here the columns, in descending order)
Out[43]:
  D C B A
2017-01-12 -2.030407 0.778567 2.456196 -2.258121
2017-01-13 0.073587 0.388625 0.622495 -0.658348
2017-01-14 1.231538 0.605545 1.392792 0.589219
2017-01-15 -0.669839 -2.114725 -0.655249 -0.151958
2017-01-16 0.898683 0.638996 3.143659 -1.323304
2017-01-17 -0.021460 -1.577185 0.385811 -0.024935
In [44]:
df.sort_values(by="B") #Sorting by values
Out[44]:
  A B C D
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-16 -1.323304 3.143659 0.638996 0.898683
In [45]:
df
Out[45]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460

Selection

Getting

In [46]:
df["A"]# Selecting a single column, which yields a Series, equivalent to df.A
Out[46]:
2017-01-12   -2.258121
2017-01-13   -0.658348
2017-01-14    0.589219
2017-01-15   -0.151958
2017-01-16   -1.323304
2017-01-17   -0.024935
Freq: D, Name: A, dtype: float64
In [47]:
df.A
Out[47]:
2017-01-12   -2.258121
2017-01-13   -0.658348
2017-01-14    0.589219
2017-01-15   -0.151958
2017-01-16   -1.323304
2017-01-17   -0.024935
Freq: D, Name: A, dtype: float64
In [48]:
df[0:3]  #Selecting via [], which slices the rows.
Out[48]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
In [49]:
df["2017-01-13":"2017-01-17"]
Out[49]:
  A B C D
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460

Selection by Label

In [50]:
dates
Out[50]:
DatetimeIndex(['2017-01-12', '2017-01-13', '2017-01-14', '2017-01-15',
               '2017-01-16', '2017-01-17'],
              dtype='datetime64[ns]', freq='D')
In [51]:
df.loc[dates[0]] #For getting a cross section using a label
Out[51]:
A   -2.258121
B    2.456196
C    0.778567
D   -2.030407
Name: 2017-01-12 00:00:00, dtype: float64
In [52]:
df.loc[:,["A","B"]]
Out[52]:
  A B
2017-01-12 -2.258121 2.456196
2017-01-13 -0.658348 0.622495
2017-01-14 0.589219 1.392792
2017-01-15 -0.151958 -0.655249
2017-01-16 -1.323304 3.143659
2017-01-17 -0.024935 0.385811
In [53]:
df.loc['20170112':'20170116',['A','B']] #Showing label slicing, both endpoints are included
Out[53]:
  A B
2017-01-12 -2.258121 2.456196
2017-01-13 -0.658348 0.622495
2017-01-14 0.589219 1.392792
2017-01-15 -0.151958 -0.655249
2017-01-16 -1.323304 3.143659
In [54]:
df.loc["20170115",["A","B"]] 
Out[54]:
A   -0.151958
B   -0.655249
Name: 2017-01-15 00:00:00, dtype: float64
In [55]:
df.loc[dates[3],"D"] #For getting a scalar value
Out[55]:
-0.6698394854437093
In [56]:
df.at[dates[3],"D"] #For getting fast access to a scalar (equiv to the prior method)
Out[56]:
-0.6698394854437093

Selection by Position

In [57]:
df.iloc[3] #Select via the position of the passed integers
Out[57]:
A   -0.151958
B   -0.655249
C   -2.114725
D   -0.669839
Name: 2017-01-15 00:00:00, dtype: float64
In [58]:
df.iloc[2:5,0:2] # By integer slices, acting similar to numpy/python
Out[58]:
  A B
2017-01-14 0.589219 1.392792
2017-01-15 -0.151958 -0.655249
2017-01-16 -1.323304 3.143659
In [59]:
df.iloc[[1,3,4],[0,2]] #By lists of integer position locations, similar to the numpy/python style
Out[59]:
  A C
2017-01-13 -0.658348 0.388625
2017-01-15 -0.151958 -2.114725
2017-01-16 -1.323304 0.638996
In [60]:
df.iloc[1:3,:]
Out[60]:
  A B C D
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
In [61]:
df.iloc[:,1:3]
Out[61]:
  B C
2017-01-12 2.456196 0.778567
2017-01-13 0.622495 0.388625
2017-01-14 1.392792 0.605545
2017-01-15 -0.655249 -2.114725
2017-01-16 3.143659 0.638996
2017-01-17 0.385811 -1.577185
In [62]:
df.iloc[1,1] #For getting a value explicitly
Out[62]:
0.62249451281708756
In [63]:
df.iat[1,1] #For getting fast access to a scalar (equiv to the prior method)
Out[63]:
0.62249451281708756

Boolean Indexing

In [64]:
df[df.A>0] #Using a single column’s values to select data
Out[64]:
  A B C D
2017-01-14 0.589219 1.392792 0.605545 1.231538
In [65]:
df[df>0] #Selecting values from a DataFrame where a boolean condition is met
Out[65]:
  A B C D
2017-01-12 NaN 2.456196 0.778567 NaN
2017-01-13 NaN 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 NaN NaN NaN NaN
2017-01-16 NaN 3.143659 0.638996 0.898683
2017-01-17 NaN 0.385811 NaN NaN
In [66]:
df2
Out[66]:
  A B C D E F
0 1.0 2013-01-02 1.0 3 test foo
1 1.0 2013-01-02 1.0 3 train foo
2 1.0 2013-01-02 1.0 3 test foo
3 1.0 2013-01-02 1.0 3 train foo
In [67]:
df
Out[67]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [68]:
df2=df.copy()
df2
Out[68]:
  A B C D
2017-01-12 -2.258121 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [69]:
df.equals(df2)
Out[69]:
True
In [70]:
df==df2
Out[70]:
  A B C D
2017-01-12 True True True True
2017-01-13 True True True True
2017-01-14 True True True True
2017-01-15 True True True True
2017-01-16 True True True True
2017-01-17 True True True True
In [71]:
df is df2
Out[71]:
False
In [72]:
df2["E"]=["one","one","two","three","four","three"]
df2
Out[72]:
  A B C D E
2017-01-12 -2.258121 2.456196 0.778567 -2.030407 one
2017-01-13 -0.658348 0.622495 0.388625 0.073587 one
2017-01-14 0.589219 1.392792 0.605545 1.231538 two
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839 three
2017-01-16 -1.323304 3.143659 0.638996 0.898683 four
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460 three
In [73]:
df2[df2.E.isin(["two","four"])]
Out[73]:
  A B C D E
2017-01-14 0.589219 1.392792 0.605545 1.231538 two
2017-01-16 -1.323304 3.143659 0.638996 0.898683 four
In [74]:
df2[df2["E"].isin(["two","four"])]
Out[74]:
  A B C D E
2017-01-14 0.589219 1.392792 0.605545 1.231538 two
2017-01-16 -1.323304 3.143659 0.638996 0.898683 four

Setting

In [75]:
s1= pd.Series([1,2,3,4,5,6],index=pd.date_range("20171016",periods=6)) #Setting a new column automatically aligns the data by the indexes
s1
Out[75]:
2017-10-16    1
2017-10-17    2
2017-10-18    3
2017-10-19    4
2017-10-20    5
2017-10-21    6
Freq: D, dtype: int64
In [76]:
df.at[dates[0],"A"]=0 #Setting values by label
In [77]:
df
Out[77]:
  A B C D
2017-01-12 0.000000 2.456196 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [78]:
df.iat[0,1]=0
df
Out[78]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 -2.030407
2017-01-13 -0.658348 0.622495 0.388625 0.073587
2017-01-14 0.589219 1.392792 0.605545 1.231538
2017-01-15 -0.151958 -0.655249 -2.114725 -0.669839
2017-01-16 -1.323304 3.143659 0.638996 0.898683
2017-01-17 -0.024935 0.385811 -1.577185 -0.021460
In [79]:
df.loc[:,"D"]=np.array([5]*len(df)) #Setting by assigning with a numpy array
df
Out[79]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [80]:
df2=df.copy()
df2
Out[80]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [81]:
df2[df2>0]=-df2
df2
Out[81]:
  A B C D
2017-01-12 0.000000 0.000000 -0.778567 -5
2017-01-13 -0.658348 -0.622495 -0.388625 -5
2017-01-14 -0.589219 -1.392792 -0.605545 -5
2017-01-15 -0.151958 -0.655249 -2.114725 -5
2017-01-16 -1.323304 -3.143659 -0.638996 -5
2017-01-17 -0.024935 -0.385811 -1.577185 -5
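The boolean assignment above is a “where” operation with setting. A non-mutating sketch of the same idea uses DataFrame.mask, which replaces values wherever the condition is True; the name negated is only illustrative:

negated = df.mask(df > 0, -df)   # every positive entry replaced by its negative, like df2 above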

Missing Data

In [83]:
df
Out[83]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [84]:
df1 = df.reindex(index=dates[0:4], columns=list(df.columns) + ['E'])
df1.loc[dates[0]:dates[1],'E'] = 1
df1
Out[84]:
  A B C D E
2017-01-12 0.000000 0.000000 0.778567 5 1.0
2017-01-13 -0.658348 0.622495 0.388625 5 1.0
2017-01-14 0.589219 1.392792 0.605545 5 NaN
2017-01-15 -0.151958 -0.655249 -2.114725 5 NaN
In [85]:
df1.dropna(how="any") #To drop any rows that have missing data
Out[85]:
  A B C D E
2017-01-12 0.000000 0.000000 0.778567 5 1.0
2017-01-13 -0.658348 0.622495 0.388625 5 1.0
In [86]:
df1.fillna(value=5)  # Filling missing data
Out[86]:
  A B C D E
2017-01-12 0.000000 0.000000 0.778567 5 1.0
2017-01-13 -0.658348 0.622495 0.388625 5 1.0
2017-01-14 0.589219 1.392792 0.605545 5 5.0
2017-01-15 -0.151958 -0.655249 -2.114725 5 5.0
In [87]:
df1
Out[87]:
  A B C D E
2017-01-12 0.000000 0.000000 0.778567 5 1.0
2017-01-13 -0.658348 0.622495 0.388625 5 1.0
2017-01-14 0.589219 1.392792 0.605545 5 NaN
2017-01-15 -0.151958 -0.655249 -2.114725 5 NaN
In [88]:
pd.isnull(df1)
Out[88]:
  A B C D E
2017-01-12 False False False False False
2017-01-13 False False False False False
2017-01-14 False False False False True
2017-01-15 False False False False True
In [89]:
df1.isnull()
Out[89]:
  A B C D E
2017-01-12 False False False False False
2017-01-13 False False False False False
2017-01-14 False False False False True
2017-01-15 False False False False True
In [90]:
df1.isna()  # this method does not exist in this pandas version
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-90-9dd6d031e095> in <module>()
----> 1 df1.isna()  # this method does not exist in this pandas version

D:\Users\asus\Anaconda3\lib\site-packages\pandas\core\generic.py in __getattr__(self, name)
   2968             if name in self._info_axis:
   2969                 return self[name]
-> 2970             return object.__getattribute__(self, name)
   2971 
   2972     def __setattr__(self, name, value):

AttributeError: 'DataFrame' object has no attribute 'isna'
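isna() was only added in pandas 0.21 as an alias of isnull(), so it is missing in the version used here. The equivalent checks on this version:

df1.isnull()    # True where a value is missing
df1.notnull()   # the inverse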

Operations

Stats

Operations in general exclude missing data. Performing a descriptive statistic:

In [91]:
df
Out[91]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [92]:
df.mean()
Out[92]:
A   -0.261554
B    0.814918
C   -0.213363
D    5.000000
dtype: float64
In [93]:
df.mean(1)  #Same operation on the other axis
Out[93]:
2017-01-12    1.444642
2017-01-13    1.338193
2017-01-14    1.896889
2017-01-15    0.519517
2017-01-16    1.864838
2017-01-17    0.945923
Freq: D, dtype: float64
In [94]:
s= pd.Series([1,2,3,np.nan,4,5],index=dates).shift(2) 
# Operating with objects that have different dimensionality and need alignment. In addition, pandas automatically broadcasts along the specified dimension.
s
Out[94]:
2017-01-12    NaN
2017-01-13    NaN
2017-01-14    1.0
2017-01-15    2.0
2017-01-16    3.0
2017-01-17    NaN
Freq: D, dtype: float64
In [95]:
df
Out[95]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [96]:
df.sub(s, axis="index")  # subtracting a Series from a DataFrame, aligned on the index
Out[96]:
  A B C D
2017-01-12 NaN NaN NaN NaN
2017-01-13 NaN NaN NaN NaN
2017-01-14 -0.410781 0.392792 -0.394455 4.0
2017-01-15 -2.151958 -2.655249 -4.114725 3.0
2017-01-16 -4.323304 0.143659 -2.361004 2.0
2017-01-17 NaN NaN NaN NaN

Apply

In [97]:
df
Out[97]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [98]:
df.apply(np.cumsum)  # cumulative sum down each column
Out[98]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 1.167192 10
2017-01-14 -0.069129 2.015286 1.772737 15
2017-01-15 -0.221087 1.360038 -0.341988 20
2017-01-16 -1.544392 4.503697 0.297008 25
2017-01-17 -1.569326 4.889508 -1.280177 30
In [99]:
df.apply(lambda x: x.max()-x.min())
Out[99]:
A    1.912523
B    3.798908
C    2.893293
D    0.000000
dtype: float64

Histogramming

In [100]:
s= pd.Series(np.random.randint(0,7,size=10))
s
Out[100]:
0    4
1    5
2    2
3    0
4    5
5    3
6    4
7    3
8    3
9    0
dtype: int32
In [101]:
s.value_counts()
Out[101]:
3    3
5    2
4    2
0    2
2    1
dtype: int64

String Methods

Series is equipped with a set of string processing methods in the str attribute that make it easy to operate on each element of the array, as in the code snippet below. Note that pattern-matching in str generally uses regular expressions by default (and in some cases always uses them). See more at Vectorized String Methods.

In [102]:
s= pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
s.str.lower()
Out[102]:
0       a
1       b
2       c
3    aaba
4    baca
5     NaN
6    caba
7     dog
8     cat
dtype: object
In [103]:
s
Out[103]:
0       A
1       B
2       C
3    Aaba
4    Baca
5     NaN
6    CABA
7     dog
8     cat
dtype: object

Merge

Concat

pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations. See the Merging section Concatenating pandas objects together with concat():

In [104]:
df
Out[104]:
  A B C D
2017-01-12 0.000000 0.000000 0.778567 5
2017-01-13 -0.658348 0.622495 0.388625 5
2017-01-14 0.589219 1.392792 0.605545 5
2017-01-15 -0.151958 -0.655249 -2.114725 5
2017-01-16 -1.323304 3.143659 0.638996 5
2017-01-17 -0.024935 0.385811 -1.577185 5
In [105]:
df=pd.DataFrame(np.random.randn(10,4))
df
Out[105]:
  0 1 2 3
0 0.111766 -0.505125 2.156029 0.419152
1 1.068870 1.180587 0.361345 1.090554
2 0.488997 0.281507 -0.738345 -0.242974
3 -1.846709 1.686173 -0.202319 -1.151983
4 0.573012 -1.979189 1.544768 1.594595
5 -0.954571 -0.696788 0.270959 -2.296720
6 -1.511946 1.796113 0.399493 0.412664
7 0.089844 -0.545153 -0.315653 -0.235828
8 -0.747140 1.222900 -1.650812 0.292432
9 0.659855 0.501265 0.363978 1.722914
In [106]:
# break it into pieces
pieces=[df[:3],df[3:7],df[7:]]
pd.concat(pieces)
Out[106]:
  0 1 2 3
0 0.111766 -0.505125 2.156029 0.419152
1 1.068870 1.180587 0.361345 1.090554
2 0.488997 0.281507 -0.738345 -0.242974
3 -1.846709 1.686173 -0.202319 -1.151983
4 0.573012 -1.979189 1.544768 1.594595
5 -0.954571 -0.696788 0.270959 -2.296720
6 -1.511946 1.796113 0.399493 0.412664
7 0.089844 -0.545153 -0.315653 -0.235828
8 -0.747140 1.222900 -1.650812 0.292432
9 0.659855 0.501265 0.363978 1.722914
In [107]:
pieces
Out[107]:
[          0         1         2         3
 0  0.111766 -0.505125  2.156029  0.419152
 1  1.068870  1.180587  0.361345  1.090554
 2  0.488997  0.281507 -0.738345 -0.242974,
           0         1         2         3
 3 -1.846709  1.686173 -0.202319 -1.151983
 4  0.573012 -1.979189  1.544768  1.594595
 5 -0.954571 -0.696788  0.270959 -2.296720
 6 -1.511946  1.796113  0.399493  0.412664,
           0         1         2         3
 7  0.089844 -0.545153 -0.315653 -0.235828
 8 -0.747140  1.222900 -1.650812  0.292432
 9  0.659855  0.501265  0.363978  1.722914]

Join

SQL-style merges. See the Database-style joining section.

In [108]:
left = pd.DataFrame({"key": ["foo", "foo"], "lval": [1, 2]})
right = pd.DataFrame({"key": ["foo", "foo"], "rval": [4, 5]})
In [109]:
left
Out[109]:
  key lval
0 foo 1
1 foo 2
In [110]:
right
Out[110]:
  key rval
0 foo 4
1 foo 5
In [111]:
pd.merge(left,right,on="key")
Out[111]:
  key lval rval
0 foo 1 4
1 foo 1 5
2 foo 2 4
3 foo 2 5
In [112]:
left = pd.DataFrame({'key': ['foo', 'bar'], 'lval': [1, 2]})
right = pd.DataFrame({'key': ['foo', 'bar'], 'rval': [4, 5]})
In [113]:
left
Out[113]:
  key lval
0 foo 1
1 bar 2
In [114]:
right
Out[114]:
  key rval
0 foo 4
1 bar 5
In [115]:
pd.merge(left,right,on="key")
Out[115]:
  key lval rval
0 foo 1 4
1 bar 2 5

Append

In [116]:
df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
df
Out[116]:
  A B C D
0 -0.852451 1.074357 -0.591892 0.950982
1 -0.977580 1.656374 0.693657 0.718832
2 0.303269 -0.881728 -1.509321 1.219849
3 0.655751 1.235660 1.729038 1.074948
4 0.658413 -1.215348 -1.139623 0.753772
5 1.345115 1.420212 -0.124543 -0.099265
6 1.129623 0.597484 -0.804759 -0.568266
7 -0.770570 0.540917 -0.261607 -0.083751
In [117]:
s=df.iloc[3]
s
Out[117]:
A    0.655751
B    1.235660
C    1.729038
D    1.074948
Name: 3, dtype: float64
In [118]:
df.append(s,ignore_index=True)
Out[118]:
  A B C D
0 -0.852451 1.074357 -0.591892 0.950982
1 -0.977580 1.656374 0.693657 0.718832
2 0.303269 -0.881728 -1.509321 1.219849
3 0.655751 1.235660 1.729038 1.074948
4 0.658413 -1.215348 -1.139623 0.753772
5 1.345115 1.420212 -0.124543 -0.099265
6 1.129623 0.597484 -0.804759 -0.568266
7 -0.770570 0.540917 -0.261607 -0.083751
8 0.655751 1.235660 1.729038 1.074948
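A side note for newer pandas releases (not part of the original notebook): DataFrame.append was deprecated in 1.4 and removed in 2.0; pd.concat produces the same result:

pd.concat([df, s.to_frame().T], ignore_index=True)   # same as df.append(s, ignore_index=True)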

Grouping

By “group by” we are referring to a process involving one or more of the following steps:
• Splitting the data into groups based on some criteria
• Applying a function to each group independently
• Combining the results into a data structure

In [119]:
df = pd.DataFrame({'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'foo', 'foo'],
                   'B': ['one', 'one', 'two', 'three', 'two', 'two', 'one', 'three'],
                   'C': np.random.randn(8),
                   'D': np.random.randn(8)})
df
Out[119]:
  A B C D
0 foo one -0.523738 -1.363519
1 bar one -0.071920 -2.618027
2 foo two -2.712421 -0.407372
3 bar three -0.635898 -1.942854
4 foo two 0.952073 -0.546110
5 bar two 1.474296 -0.982238
6 foo one -0.529788 -0.213397
7 foo three 0.877394 -0.791663
In [120]:
df.groupby("A").sum()
Out[120]:
  C D
A    
bar 0.766479 -5.543120
foo -1.936480 -3.322062
In [121]:
df.groupby(["A","B"]).sum()  #Grouping by multiple columns forms a hierarchical index, which we then apply the function.
Out[121]:
    C D
A B    
bar one -0.071920 -2.618027
three -0.635898 -1.942854
two 1.474296 -0.982238
foo one -1.053527 -1.576917
three 0.877394 -0.791663
two -1.760347 -0.953482

Reshaping

Stack

In [122]:
tuples = list(zip([['bar', 'bar', 'baz', 'baz',
                     'foo', 'foo', 'qux', 'qux'],
                    ['one', 'two', 'one', 'two',
                     'one', 'two', 'one', 'two']]))
tuples
Out[122]:
[(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],),
 (['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'],)]
In [123]:
tuples = list(zip(*[['bar', 'bar', 'baz', 'baz',
                     'foo', 'foo', 'qux', 'qux'],
                    ['one', 'two', 'one', 'two',
                     'one', 'two', 'one', 'two']]))
tuples
Out[123]:
[('bar', 'one'),
 ('bar', 'two'),
 ('baz', 'one'),
 ('baz', 'two'),
 ('foo', 'one'),
 ('foo', 'two'),
 ('qux', 'one'),
 ('qux', 'two')]
In [124]:
index=pd.MultiIndex.from_tuples(tuples,names=["first","second"])
index
Out[124]:
MultiIndex(levels=[['bar', 'baz', 'foo', 'qux'], ['one', 'two']],
           labels=[[0, 0, 1, 1, 2, 2, 3, 3], [0, 1, 0, 1, 0, 1, 0, 1]],
           names=['first', 'second'])
In [125]:
df=pd.DataFrame(np.random.randn(8,2),index=index,columns=['A', 'B'])
df
Out[125]:
    A B
first second    
bar one -1.101051 -1.126231
two -0.395652 -0.313567
baz one 1.378579 -1.637869
two 0.665960 -0.259749
foo one -0.256181 1.260131
two -0.994720 0.506272
qux one -0.422649 0.191402
two -0.102085 0.975210
In [126]:
df2=df[:4]
df2
Out[126]:
    A B
first second    
bar one -1.101051 -1.126231
two -0.395652 -0.313567
baz one 1.378579 -1.637869
two 0.665960 -0.259749
In [127]:
stacked= df2.stack()
stacked
Out[127]:
first  second   
bar    one     A   -1.101051
               B   -1.126231
       two     A   -0.395652
               B   -0.313567
baz    one     A    1.378579
               B   -1.637869
       two     A    0.665960
               B   -0.259749
dtype: float64

With a “stacked” DataFrame or Series (having a MultiIndex as the index), the inverse operation of stack() is unstack(), which by default unstacks the last level:

In [128]:
stacked.unstack()
Out[128]:
    A B
first second    
bar one -1.101051 -1.126231
two -0.395652 -0.313567
baz one 1.378579 -1.637869
two 0.665960 -0.259749
In [129]:
stacked.unstack(1)
Out[129]:
  second one two
first      
bar A -1.101051 -0.395652
B -1.126231 -0.313567
baz A 1.378579 0.665960
B -1.637869 -0.259749
In [130]:
stacked.unstack(0)
Out[130]:
  first bar baz
second      
one A -1.101051 1.378579
B -1.126231 -1.637869
two A -0.395652 0.665960
B -0.313567 -0.259749

Pivot Tables

In [131]:
df = pd.DataFrame({'A': ['one', 'one', 'two', 'three'] * 3,
                   'B': ['A', 'B', 'C'] * 4,
                   'C': ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
                   'D': np.random.randn(12),
                   'E': np.random.randn(12)})
df
Out[131]:
  A B C D E
0 one A foo 0.039230 0.134261
1 one B foo 0.952890 -0.499183
2 two C foo -0.778814 -0.655735
3 three A bar 0.798864 0.025109
4 one B bar -0.580050 -1.711672
5 one C bar 0.004300 -0.433591
6 two A foo 0.229248 -2.648814
7 three B foo 0.506488 0.630373
8 one C foo -0.315667 0.031764
9 one A bar -1.547410 0.743825
10 two B bar -0.480958 0.365255
11 three C bar 1.742948 0.692884
In [4]:
pd.pivot_table(df,values="D",index=["A","B"],columns=["C"])
Out[4]:
  C bar foo
A B    
one A 0.932814 -1.440079
B 0.060252 1.071877
C 2.879779 0.355274
three A -0.328442 NaN
B NaN -2.544812
C -1.879058 NaN
two A NaN -1.987377
B 0.220517 NaN
C NaN -0.082820
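pivot_table aggregates with the mean by default; the aggfunc and fill_value arguments change the aggregation and fill the missing combinations (a sketch, not from the original run):

pd.pivot_table(df, values="D", index=["A", "B"], columns=["C"], aggfunc=np.sum, fill_value=0)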

Time Series

pandas has simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications.

In [132]:
rng=pd.date_range("1/2/2017",periods=100,freq="S")
rng
Out[132]:
DatetimeIndex(['2017-01-02 00:00:00', '2017-01-02 00:00:01',
               '2017-01-02 00:00:02', '2017-01-02 00:00:03',
               '2017-01-02 00:00:04', '2017-01-02 00:00:05',
               '2017-01-02 00:00:06', '2017-01-02 00:00:07',
               '2017-01-02 00:00:08', '2017-01-02 00:00:09',
               '2017-01-02 00:00:10', '2017-01-02 00:00:11',
               '2017-01-02 00:00:12', '2017-01-02 00:00:13',
               '2017-01-02 00:00:14', '2017-01-02 00:00:15',
               '2017-01-02 00:00:16', '2017-01-02 00:00:17',
               '2017-01-02 00:00:18', '2017-01-02 00:00:19',
               '2017-01-02 00:00:20', '2017-01-02 00:00:21',
               '2017-01-02 00:00:22', '2017-01-02 00:00:23',
               '2017-01-02 00:00:24', '2017-01-02 00:00:25',
               '2017-01-02 00:00:26', '2017-01-02 00:00:27',
               '2017-01-02 00:00:28', '2017-01-02 00:00:29',
               '2017-01-02 00:00:30', '2017-01-02 00:00:31',
               '2017-01-02 00:00:32', '2017-01-02 00:00:33',
               '2017-01-02 00:00:34', '2017-01-02 00:00:35',
               '2017-01-02 00:00:36', '2017-01-02 00:00:37',
               '2017-01-02 00:00:38', '2017-01-02 00:00:39',
               '2017-01-02 00:00:40', '2017-01-02 00:00:41',
               '2017-01-02 00:00:42', '2017-01-02 00:00:43',
               '2017-01-02 00:00:44', '2017-01-02 00:00:45',
               '2017-01-02 00:00:46', '2017-01-02 00:00:47',
               '2017-01-02 00:00:48', '2017-01-02 00:00:49',
               '2017-01-02 00:00:50', '2017-01-02 00:00:51',
               '2017-01-02 00:00:52', '2017-01-02 00:00:53',
               '2017-01-02 00:00:54', '2017-01-02 00:00:55',
               '2017-01-02 00:00:56', '2017-01-02 00:00:57',
               '2017-01-02 00:00:58', '2017-01-02 00:00:59',
               '2017-01-02 00:01:00', '2017-01-02 00:01:01',
               '2017-01-02 00:01:02', '2017-01-02 00:01:03',
               '2017-01-02 00:01:04', '2017-01-02 00:01:05',
               '2017-01-02 00:01:06', '2017-01-02 00:01:07',
               '2017-01-02 00:01:08', '2017-01-02 00:01:09',
               '2017-01-02 00:01:10', '2017-01-02 00:01:11',
               '2017-01-02 00:01:12', '2017-01-02 00:01:13',
               '2017-01-02 00:01:14', '2017-01-02 00:01:15',
               '2017-01-02 00:01:16', '2017-01-02 00:01:17',
               '2017-01-02 00:01:18', '2017-01-02 00:01:19',
               '2017-01-02 00:01:20', '2017-01-02 00:01:21',
               '2017-01-02 00:01:22', '2017-01-02 00:01:23',
               '2017-01-02 00:01:24', '2017-01-02 00:01:25',
               '2017-01-02 00:01:26', '2017-01-02 00:01:27',
               '2017-01-02 00:01:28', '2017-01-02 00:01:29',
               '2017-01-02 00:01:30', '2017-01-02 00:01:31',
               '2017-01-02 00:01:32', '2017-01-02 00:01:33',
               '2017-01-02 00:01:34', '2017-01-02 00:01:35',
               '2017-01-02 00:01:36', '2017-01-02 00:01:37',
               '2017-01-02 00:01:38', '2017-01-02 00:01:39'],
              dtype='datetime64[ns]', freq='S')
In [133]:
ts =pd.Series(np.random.randint(0,500,len(rng)),index=rng)
ts
Out[133]:
2017-01-02 00:00:00    251
2017-01-02 00:00:01     63
2017-01-02 00:00:02    108
2017-01-02 00:00:03    288
2017-01-02 00:00:04    491
2017-01-02 00:00:05    490
2017-01-02 00:00:06    343
2017-01-02 00:00:07    357
2017-01-02 00:00:08     72
2017-01-02 00:00:09    171
2017-01-02 00:00:10    324
2017-01-02 00:00:11    281
2017-01-02 00:00:12    176
2017-01-02 00:00:13     14
2017-01-02 00:00:14    495
2017-01-02 00:00:15    150
2017-01-02 00:00:16     69
2017-01-02 00:00:17    144
2017-01-02 00:00:18    126
2017-01-02 00:00:19    368
2017-01-02 00:00:20    129
2017-01-02 00:00:21    386
2017-01-02 00:00:22    228
2017-01-02 00:00:23    458
2017-01-02 00:00:24     98
2017-01-02 00:00:25    244
2017-01-02 00:00:26    206
2017-01-02 00:00:27     98
2017-01-02 00:00:28     92
2017-01-02 00:00:29    259
                      ... 
2017-01-02 00:01:10    127
2017-01-02 00:01:11    342
2017-01-02 00:01:12    185
2017-01-02 00:01:13    123
2017-01-02 00:01:14     73
2017-01-02 00:01:15    132
2017-01-02 00:01:16    462
2017-01-02 00:01:17    317
2017-01-02 00:01:18    180
2017-01-02 00:01:19    247
2017-01-02 00:01:20     97
2017-01-02 00:01:21    401
2017-01-02 00:01:22    342
2017-01-02 00:01:23    382
2017-01-02 00:01:24    304
2017-01-02 00:01:25     47
2017-01-02 00:01:26    193
2017-01-02 00:01:27    334
2017-01-02 00:01:28    196
2017-01-02 00:01:29    297
2017-01-02 00:01:30    195
2017-01-02 00:01:31    236
2017-01-02 00:01:32    200
2017-01-02 00:01:33    490
2017-01-02 00:01:34    196
2017-01-02 00:01:35    201
2017-01-02 00:01:36    397
2017-01-02 00:01:37    494
2017-01-02 00:01:38    482
2017-01-02 00:01:39    267
Freq: S, Length: 100, dtype: int32
In [7]:
ts.resample("5Min").sum()
Out[7]:
2017-01-02    22939
Freq: 5T, dtype: int32
In [9]:
ts.resample("1Min").sum()
Out[9]:
2017-01-02 00:00:00    13896
2017-01-02 00:01:00     9043
Freq: T, dtype: int32

Time zone representation (localizing to UTC).

In [10]:
rng= pd.date_range("2/1/2017 00:00",periods=5,freq="D")
rng
Out[10]:
DatetimeIndex(['2017-02-01', '2017-02-02', '2017-02-03', '2017-02-04',
               '2017-02-05'],
              dtype='datetime64[ns]', freq='D')
In [12]:
ts=pd.Series(np.random.randn(len(rng)),index=rng)
ts
Out[12]:
2017-02-01    0.329594
2017-02-02    2.097319
2017-02-03    1.852023
2017-02-04   -0.213452
2017-02-05    0.160873
Freq: D, dtype: float64
In [13]:
tsUtc=ts.tz_localize("UTC")
tsUtc
Out[13]:
2017-02-01 00:00:00+00:00    0.329594
2017-02-02 00:00:00+00:00    2.097319
2017-02-03 00:00:00+00:00    1.852023
2017-02-04 00:00:00+00:00   -0.213452
2017-02-05 00:00:00+00:00    0.160873
Freq: D, dtype: float64

Converting to another time zone.

In [14]:
tsUtc.tz_convert("US/Eastern")
Out[14]:
2017-01-31 19:00:00-05:00    0.329594
2017-02-01 19:00:00-05:00    2.097319
2017-02-02 19:00:00-05:00    1.852023
2017-02-03 19:00:00-05:00   -0.213452
2017-02-04 19:00:00-05:00    0.160873
Freq: D, dtype: float64
In [15]:
tsUtc
Out[15]:
2017-02-01 00:00:00+00:00    0.329594
2017-02-02 00:00:00+00:00    2.097319
2017-02-03 00:00:00+00:00    1.852023
2017-02-04 00:00:00+00:00   -0.213452
2017-02-05 00:00:00+00:00    0.160873
Freq: D, dtype: float64

Converting between time span representations

In [16]:
rng=pd.date_range("1/8/2017",periods=5,freq="M")
rng
Out[16]:
DatetimeIndex(['2017-01-31', '2017-02-28', '2017-03-31', '2017-04-30',
               '2017-05-31'],
              dtype='datetime64[ns]', freq='M')
In [18]:
ts=pd.Series(np.random.randn(len(rng)),rng)
ts
Out[18]:
2017-01-31    0.904523
2017-02-28   -0.470144
2017-03-31   -0.373244
2017-04-30    0.860448
2017-05-31    0.176226
Freq: M, dtype: float64
In [20]:
ps=ts.to_period()
ps
Out[20]:
2017-01    0.904523
2017-02   -0.470144
2017-03   -0.373244
2017-04    0.860448
2017-05    0.176226
Freq: M, dtype: float64
In [21]:
ps.to_timestamp()
Out[21]:
2017-01-01    0.904523
2017-02-01   -0.470144
2017-03-01   -0.373244
2017-04-01    0.860448
2017-05-01    0.176226
Freq: MS, dtype: float64
In [22]:
ps
Out[22]:
2017-01    0.904523
2017-02   -0.470144
2017-03   -0.373244
2017-04    0.860448
2017-05    0.176226
Freq: M, dtype: float64

Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:

In [23]:
prng=pd.period_range("1990Q1","2017Q4",freq="Q-NOV")
prng
Out[23]:
PeriodIndex(['1990Q1', '1990Q2', '1990Q3', '1990Q4', '1991Q1', '1991Q2',
             '1991Q3', '1991Q4', '1992Q1', '1992Q2',
             ...
             '2015Q3', '2015Q4', '2016Q1', '2016Q2', '2016Q3', '2016Q4',
             '2017Q1', '2017Q2', '2017Q3', '2017Q4'],
            dtype='period[Q-NOV]', length=112, freq='Q-NOV')
In [25]:
ts= pd.Series(np.random.randn(len(prng)),prng)
ts.head()
Out[25]:
1990Q1    1.193031
1990Q2    0.621627
1990Q3   -0.235553
1990Q4    0.642938
1991Q1    0.247024
Freq: Q-NOV, dtype: float64
In [26]:
ts.index=(prng.asfreq("M","e")+1).asfreq("H","s")+9
ts.head()
Out[26]:
1990-03-01 09:00    1.193031
1990-06-01 09:00    0.621627
1990-09-01 09:00   -0.235553
1990-12-01 09:00    0.642938
1991-03-01 09:00    0.247024
Freq: H, dtype: float64

Categoricals

In [34]:
df = pd.DataFrame({"id": [1, 2, 3, 4, 5, 6], "raw_grade": ["a", "a", "c", "b", "b", "f"]})
df
Out[34]:
  id raw_grade
0 1 a
1 2 a
2 3 c
3 4 b
4 5 b
5 6 f

Convert the raw grades to a categorical data type.

In [35]:
df["grade"]=df.raw_grade.astype("category")
df
Out[35]:
  id raw_grade grade
0 1 a a
1 2 a a
2 3 c c
3 4 b b
4 5 b b
5 6 f f
In [36]:
df.grade  # the new column now has dtype 'category'
Out[36]:
0    a
1    a
2    c
3    b
4    b
5    f
Name: grade, dtype: category
Categories (4, object): [a, b, c, f]
In [37]:
# Rename the categories to more meaningful names (assigning to Series.cat.categories is inplace!)

df.grade.cat.categories = ["very good", "good", "normal", "bad"]
df
Out[37]:
  id raw_grade grade
0 1 a very good
1 2 a very good
2 3 c normal
3 4 b good
4 5 b good
5 6 f bad
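On newer pandas releases, assigning to Series.cat.categories is deprecated; rename_categories is the equivalent that returns a new Series (a hedged sketch):

df["grade"] = df["grade"].cat.rename_categories(["very good", "good", "normal", "bad"])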
In [38]:
# Reorder the categories and simultaneously add the missing categories (methods under Series .cat return a new Series per default).

df.grade=df.grade.cat.set_categories(["very bad", "bad", "medium","good", "very good"])
df.grade
Out[38]:
0    very good
1    very good
2          NaN
3         good
4         good
5          bad
Name: grade, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]
In [39]:
df
Out[39]:
  id raw_grade grade
0 1 a very good
1 2 a very good
2 3 c NaN
3 4 b good
4 5 b good
5 6 f bad

Sorting is per order in the categories, not lexical order

In [40]:
df.sort_values(by="grade")
Out[40]:
  id raw_grade grade
2 3 c NaN
5 6 f bad
3 4 b good
4 5 b good
0 1 a very good
1 2 a very good

Grouping by a categorical column also shows the empty categories.

In [41]:
df.groupby("grade").size()
Out[41]:
grade
very bad     0
bad          1
medium       0
good         2
very good    2
dtype: int64
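If the empty categories are not wanted, groupby accepts an observed flag in newer pandas versions (a sketch):

df.groupby("grade", observed=True).size()   # drops 'very bad' and 'medium'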

Plotting

In [43]:
ts=pd.Series(np.random.randn(1000),index=pd.date_range("1/1/2017",periods=1000))
ts.head()
Out[43]:
2017-01-01   -0.745067
2017-01-02   -0.070895
2017-01-03    0.233542
2017-01-04   -0.206597
2017-01-05    0.891064
Freq: D, dtype: float64
In [45]:
ts=ts.cumsum()
ts.head()
Out[45]:
2017-01-01   -0.745067
2017-01-02   -1.561029
2017-01-03   -2.143449
2017-01-04   -2.932466
2017-01-05   -2.830418
Freq: D, dtype: float64
In [48]:
ts.plot()
Out[48]:
<matplotlib.axes._subplots.AxesSubplot at 0x19bf6a6e278>
In [50]:
df=pd.DataFrame(np.random.randn(1000,4),index=ts.index,columns=["A","B","C","D"])
df.head()
Out[50]:
  A B C D
2017-01-01 -1.940139 -0.476590 -0.154066 1.692812
2017-01-02 0.399891 0.268976 0.596209 -0.484979
2017-01-03 0.814519 -0.142193 -0.084394 -0.687342
2017-01-04 0.385848 -1.230059 -0.093327 -0.096652
2017-01-05 0.407435 -0.849347 0.379192 0.172933
In [51]:
df=df.cumsum()
In [53]:
plt.figure()
df.plot()
plt.legend(loc="best")
plt.show()
<matplotlib.figure.Figure at 0x19bf8855da0>
<matplotlib.figure.Figure at 0x19bf897dc88>

Getting Data In/Out

CSV

In [ ]:
df.to_csv("foo.csv")
In [ ]:
pd.read_csv("foo.csv")

HDF5

In [ ]:
df.to_hdf("foo.h5","df")
In [ ]:
pd.read_hdf("foo.h5","df")
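HDF5 I/O requires the PyTables package to be installed; on recent pandas the key is passed as a keyword argument (a hedged sketch):

df.to_hdf("foo.h5", key="df", mode="w")
pd.read_hdf("foo.h5", key="df")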

Excel

In [ ]:
df.to_excel('foo.xlsx', sheet_name='Sheet1')
In [ ]:
pd.read_excel('foo.xlsx', 'Sheet1', index_col=None, na_values=['NA'])
Copyright notice: this is an original post by the author, released under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when reposting.
Original link: https://blog.csdn.net/sirwill/article/details/78379813
